Measuring displacement and current: the Hall effect is used for the measurement of displacement and current in mechanical sensors. The Hall coefficient can be calculated from the measured current, I_x, and the measured Hall voltage, V_H:

R_H = V_H t / (I_x B_z)    (2.7.40)

where t is the sample thickness and B_z is the applied magnetic flux density. A measurement of the Hall voltage is often used to determine the type of semiconductor (n-type or p-type), the free carrier density, and the carrier mobility. The Hall coefficient, R_H, is commonly quoted in units of 10^-4 cm^3/C = 10^-10 m^3/C = 10^-12 V·cm/A/Oe = 10^-12 Ω·cm/G. Exercise: show that the Hall coefficient of a material is independent of its thickness. The Hall effect is named for the physicist Edwin Hall. Hall sensors produce a voltage proportional to the applied magnetic field and also sense its polarity. For most metals the Hall coefficient is negative, as expected if the charge carriers are electrons; measured values can be compared with the theoretically accepted ones. Exercise: calculate the intrinsic carrier concentration of GaAs at 300 K, given that the electron effective mass is 0.07 m_0, the hole effective mass is 0.56 m_0, and the energy gap is 1.4 eV. In a Hall-effect fuel gauge, the fuel level is indicated and displayed by proper signal conditioning of the Hall voltage. The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field; it is positive for p-type semiconductors, so the Hall constant gives a direct indication of the sign of the charge carriers: negative for electrons (q = -e) and positive for holes. As discussed in textbooks, the Hall effect makes use of the qv × B Lorentz force acting on the charge carriers that carry the electrical current in a material.
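A minimal numeric sketch of Eq. (2.7.40) and the carrier density it implies; all sample values (Hall voltage, thickness, current, field) below are hypothetical, not taken from the text.

```python
# Sketch of R_H = V_H * t / (I_x * B_z) and the implied carrier density
# n = 1 / (q * |R_H|). Sample values are illustrative only.

Q_E = 1.602e-19  # elementary charge in coulombs

def hall_coefficient(v_hall, thickness, current, b_field):
    """Hall coefficient in m^3/C from measured Hall voltage (V),
    sample thickness (m), current (A) and magnetic flux density (T)."""
    return v_hall * thickness / (current * b_field)

def carrier_density(r_hall):
    """Free-carrier density in m^-3 from the Hall coefficient magnitude."""
    return 1.0 / (Q_E * abs(r_hall))

# Illustrative n-type sample: negative Hall voltage -> negative R_H -> electrons.
r_h = hall_coefficient(v_hall=-2.0e-3, thickness=0.5e-3, current=1.0e-3, b_field=0.5)
print(r_h < 0)                          # True: electrons are the majority carriers
print(f"{carrier_density(r_h):.3e}")    # ≈ 3.12e+21 m^-3 for these sample values
```

The sign check mirrors the rule stated above: a negative coefficient indicates electron conduction, a positive one hole conduction.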
The Hall effect is another important transport phenomenon and has been extensively studied in amorphous semiconductors. It is used for accurate measurement of magnetic fields, Hall mobility, etc. In beryllium, cadmium and tungsten, however, the Hall coefficient is positive. The Hall-Petch coefficient, by contrast, is the coefficient used in the Hall-Petch relation, which predicts that the yield strength increases as the grain size decreases. The Hall coefficient can also be defined as the quotient of the potential difference per unit width of a metal strip in the Hall effect divided by the product of the magnetic intensity and the longitudinal current density. Exercise: determine the Hall coefficients for an n-type and a p-type Ge semiconductor of the same thickness. Hall elements are used in integrated circuits as Hall effect sensors. To date, no other reports on Si-Nb alloys/compounds with similar findings have appeared in the literature. If an electric current is made to flow through a conductor in a magnetic field, the field exerts a transverse force on the moving charge carriers that tends to push them to one side of the conductor. Since electrons outnumber holes in an n-type semiconductor, a negative Hall voltage indicates that the sample under test is n-type. The value of the Hall coefficient depends on the type, number, and properties of the charge carriers that constitute the current, and the Hall resistance is the ratio of the transverse voltage developed across a current-carrying conductor, due to the Hall effect, to the current itself.
The Hall coefficient (or constant) R_H is defined as the proportionality constant in E_y = R_H J B. The charge carrier in a normal electric current, the electron, is negative, and as a result the Hall coefficient is negative. One study reports high-pressure measurements of the Hall coefficient R_H in the putative topological Kondo insulator SmB_6 up to 37 GPa. A typical laboratory objective is to calculate the Hall coefficient and the carrier concentration of a sample material. There are various units which help define the Hall-Petch coefficient, and they can be converted as required. Some types of brushless DC electric motors use Hall effect sensors to detect the position of the rotor and feed that information to the motor controller. The Hall voltage that develops across a conductor is directly proportional to the current, to the magnetic field, and to the nature of the particular conducting material itself; it is inversely proportional to the thickness of the conductor. Equivalently, the Hall coefficient is a measure of the Hall effect equal to the transverse electric field (Hall field) divided by the product of the current density and the magnetic induction; it is a characteristic of the material from which the conductor is made. Exercise: the resistance of a CdSe crystal is 10 Ω·cm at 300 K; find its resistance at 350 K, given that the band gap of CdSe is 1.74 eV.
Hall resistance is the ratio of the transverse voltage developed across a current-carrying conductor, due to the Hall effect, to the current itself. In the relation V_H = R_H I B / d, d is the thickness of the sample along the direction of the magnetic field. Hall coefficients of five aluminum single crystals have been measured. When electrons flow through a conductor, a magnetic field is produced; the Hall effect is an attribute of this nature of current. Exercise: assuming typical free-electron numbers, what is the measured Hall coefficient for copper? In a fuel gauge, as the level of fuel rises, an increasing magnetic field acts on the current, resulting in a higher Hall voltage. If the Hall coefficient is negative, the majority charge carriers are electrons. A Hall-effect sensor (or simply Hall sensor) is a device to measure the magnitude of a magnetic field; its output voltage is directly proportional to the magnetic field strength through it. Hall-effect sensors are used for proximity sensing, positioning, speed detection, and current-sensing applications. Both the Hall coefficient and the Seebeck coefficient represent the character of the conduction carriers, and both are expected to be negative for electron conduction and positive for hole conduction. The same principle is used in current sensors, pressure sensors, fluid-flow sensors, and more. The formula for the Hall coefficient is R_H = E_y / (j_x B_z). The effect was discovered by Edwin Hall in 1879. When samples are highly overdoped, the maximum in R_H(T) does not exist.
This, in turn, is supported by studies of the NbSi_2 compound, which exhibits a negative Hall coefficient. The Hall effect is the deflection of charge carriers moving through a conductor toward one side under a magnetic force, and the Hall measurement is a commonly used technique in solid-state physics for determining the sign and number density of charge carriers in a given material. The Hall coefficient (or Hall constant) is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field; it is negative for an n-type semiconductor and positive for a p-type semiconductor, i.e., it has the same sign as the charge carrier. In 1879 Edwin Hall discovered that when a conductor or semiconductor with current flowing in one direction was introduced perpendicular to a magnetic field, a voltage could be measured at right angles to the current path. From the Hall coefficient one can obtain the density of charge carriers in copper and, on average, the number of charge carriers provided by each atom. When a current-carrying conductor is placed in a magnetic field, a voltage is generated perpendicular to both the current and the field. The Hall coefficient is a characteristic of the material from which the conductor is made, since its value depends on the type, number, and properties of the charge carriers that constitute the current. The Hall effect has many applications, among them magnetic-field measurement; one practical advantage of Hall-based current sensing is that no additional resistance (a shunt) needs to be inserted in the primary circuit.
The Hall effect is the production of a voltage difference (the Hall voltage) across an electrical conductor, transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current. The Hall coefficient, Eq. (5), is also a function of T and may become zero, or even change sign, with temperature. An HMS-3000 (Version 3.51.5) system is used for Hall effect measurement in our lab. The Hall effect causes a measurable voltage differential across the conductor. Note that this is a different effect from the well-known piezo-Hall effect [52], which describes the change of a material property, the Hall coefficient C_H, caused by mechanical stress. Suppose that the thickness of the conducting ribbon is t and that it contains n mobile charge carriers per unit volume. Exercise: a Hall voltage of -7 µV is measured under these conditions; determine the carrier type. The Hall constant gives a direct indication of the sign of the charge carriers: it is negative for electrons (q = -e) and positive for holes. Uses of the Hall effect include magnetometers, i.e., instruments to measure magnetic fields. Question: can the Hall coefficient be extracted from a plot of Hall resistance versus magnetic field when both types of carrier are involved?
The Hall effect studies also assumed importance because of an anomaly observed between the sign of the charge carriers indicated by the Hall coefficient and by the thermopower S in amorphous semiconductors. Magnetic position sensing is used in brushless DC electric motors. The Hall effect is a very useful phenomenon and helps to determine the type of a semiconductor: from the direction of the Hall voltage, one can determine whether a given sample is n-type or p-type. Equivalently, the Hall effect is the production of a voltage difference across a current-carrying conductor in a magnetic field, perpendicular to both the current and the field. The Drude model thus predicts R_H = 1/(nq). Hall made his discovery while working on his doctoral degree at Johns Hopkins University in Baltimore, Maryland, USA. The following equation can be used to calculate the Hall coefficient: R_H = V_H d / (I B), where B is the magnetic flux density and d is the thickness of the semiconductor film.
The Hall voltage was discovered by Edwin Hall in 1879; it develops as a potential difference across the conductor or semiconductor. Exercise: derive the expression for the Hall coefficient with a neat diagram. The main principle of operation of a Hall-effect level indicator is position sensing of a floating element. Exercise: what is the voltage drop between contacts 1 and 4 if l = 5 mm and w = 2 mm, and what is the power dissipated in the semiconductor? The Hall effect was discovered 18 years before the discovery of the electron. If a material with a known density of charge carriers n is placed in a magnetic field and V is measured, then the field can be determined from the Hall-voltage relation B = n q t V / I. The Lorentz force moves electrons in the direction perpendicular to both the current and the magnetic field. Review questions: mention the applications of the Hall effect; distinguish between intrinsic and extrinsic semiconductors. For the highest doping level, the Hall coefficient increases monotonically down to the lowest temperature used; this maximum seems to correspond very well to the metal-insulator transition discussed in Section 2.4.2. Exercise: calculate the Hall voltage if the thickness is 500 µm, the magnetic field is 0.01 T, and the current is 0.1 mA.
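A worked sketch of the last exercise, using V_H = I B / (n q t). The exercise does not state the carrier density, so copper's textbook free-electron density (n ≈ 8.5 × 10^28 m^-3) is assumed here purely for illustration; it also shows why metals yield very small Hall voltages.

```python
# V_H = I * B / (n * q * t); carrier density assumed (copper), not given
# by the exercise itself.

Q_E = 1.602e-19    # elementary charge, C
N_COPPER = 8.5e28  # free-electron density of copper, m^-3 (textbook value)

def hall_voltage(current, b_field, n, thickness):
    """Hall voltage in volts for carrier density n (m^-3) and thickness (m)."""
    return current * b_field / (n * Q_E * thickness)

v_h = hall_voltage(current=0.1e-3, b_field=0.01, n=N_COPPER, thickness=500e-6)
print(f"{v_h:.2e} V")  # prints 1.47e-13 V: metals give tiny Hall voltages
```

For a doped semiconductor with n many orders of magnitude smaller, the same geometry gives a correspondingly larger, easily measurable Hall voltage, which is why semiconductors are preferred for Hall sensors.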
Use the coefficient of variation when you want to compare variability between groups whose means have very different magnitudes; in such cases, absolute measures of dispersion can be misleading. In the graph, T_MAX is seen to decrease with increasing doping. The Hall effect can also be used to measure the density of current carriers and their mobility, and to detect the presence of a current from its magnetic field. Hall-Petch coefficient conversion helps in converting between the different units of the Hall-Petch coefficient. Apparatus for the Hall experiment: two solenoids, a constant current supply, a four-point probe, a digital gauss meter, and the Hall effect apparatus, which consists of a constant current generator (CCG), a digital millivoltmeter and a Hall probe. Quiz: a material has zero Hall coefficient; is it (a) an insulator, (b) a metal, (c) an intrinsic semiconductor, or (d) none of the above? A compass app measures the Earth's magnetic field using a 3-axis magnetometer. In a Hall-effect fuel gauge, a button magnet mounted on the surface of a float rises with the fuel level, the top of the tank lining up with the magnet when full; the magnet's field, acting on the sensor current, produces a Hall voltage proportional to the level. The sign of the Hall coefficient can be used to determine whether a given specimen is a metal, a semiconductor, or an insulator. A 4-OPW calculation of the Hall coefficient of aluminum has been performed and shown to agree fairly well with experiment. The Hall coefficient of epitaxial NdNiO_3 films has been evaluated over a temperature range spanning the metallic and insulating phases. The contribution of a second conduction channel could be attributed to Si-rich alloy/compound formation. Exercise: consider an n-type Si sample doped with 10^4 donors per cm^3 (N_D as given) and find its Hall coefficient, noting that R_H is proportional to 1/n. Exercise: calculate the Hall coefficient for a typical n-type germanium sample. Exercise: a magnet with a coercivity of 4000 A m^-1 is to be demagnetised.
Abstract and Applied Analysis

On Properties of Meromorphic Solutions of Certain Difference Painlevé III Equations

Abstract: We mainly study the exponents of convergence of zeros and poles of differences and divided differences of transcendental meromorphic solutions of certain difference Painlevé III equations.

Article information: Abstr. Appl. Anal., Volume 2014 (2014), Article ID 208701, 9 pages. First available in Project Euclid: 6 October 2014 (https://projecteuclid.org/euclid.aaa/1412607232). DOI: 10.1155/2014/208701. Mathematical Reviews (MathSciNet): MR3176722. Zentralblatt MATH: 07021930.

Citation: Lan, Shuang-Ting; Chen, Zong-Xuan. On Properties of Meromorphic Solutions of Certain Difference Painlevé III Equations. Abstr. Appl. Anal. 2014 (2014), Article ID 208701, 9 pages. doi:10.1155/2014/208701.
# ORDER BY in PostgreSQL

The PostgreSQL ORDER BY clause sorts or rearranges the records in the result set. It is used with the PostgreSQL SELECT statement, but it is optional.

Syntax:

SELECT expressions
FROM table_name
WHERE conditions
ORDER BY expression [ ASC | DESC ];

Parameters:

- expressions: the columns or calculations to be retrieved.
- table_name: the table from which you want to retrieve the records.
- conditions: the conditions a row must satisfy to be selected.
- ASC: sorts the records in ascending order; an optional parameter (and the default).
- DESC: sorts the records in descending order; also an optional parameter.

Example: selecting specific fields from a table in the default order.

Employment table:

| ID | STATE | RATE |
|----|-------|------|
| 1  | A     | 60   |
| 2  | B     | 70   |
| 3  | C     | 65   |
| 4  | D     | 80   |
| 5  | E     | 78   |

Query:

SELECT * FROM "EMPLOYMENT" WHERE "ID" > 2 ORDER BY "RATE";

Output:

| ID | STATE | RATE |
|----|-------|------|
| 3  | C     | 65   |
| 5  | E     | 78   |
| 4  | D     | 80   |

Explanation: EMPLOYMENT is an existing table. The rearrangement is done after the records are selected from the table; the sorting here is by the RATE column in the default order, which is ascending.

Example: selecting specific fields from a table in ascending order.

Query:

SELECT * FROM "EMPLOYMENT" WHERE "ID" > 2 ORDER BY "RATE" ASC;

Output:

| ID | STATE | RATE |
|----|-------|------|
| 3  | C     | 65   |
| 5  | E     | 78   |
| 4  | D     | 80   |

Explanation: the same rows, now explicitly sorted in ascending order by the RATE column.

Example: selecting specific fields from a table in descending order.
Employment table: ID STATE RATE 1 A 60 2 B 70 3 C 65 4 D 80 5 E 78 Query: SELECT * FROM “EMPLOYMENT” WHERE “ID” > 2 ORDER BY “RATE” DESC; Output: ID STATE RATE 4 D 80 5 E 78 3 C 65 Explanation: The EMPLOYMENT is an already existing table. Here the rearrangement are done after the selection of the records from the table. Here, the sorting is done in descending order by the RATE column.
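Because these queries use only standard SQL, the examples can be reproduced in any engine; the following minimal sketch uses Python's built-in sqlite3 module with the same EMPLOYMENT data (the in-memory database setup is illustrative, not part of the tutorial):

```python
import sqlite3

# In-memory database holding the example EMPLOYMENT table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYMENT (ID INTEGER, STATE TEXT, RATE INTEGER)")
conn.executemany(
    "INSERT INTO EMPLOYMENT VALUES (?, ?, ?)",
    [(1, "A", 60), (2, "B", 70), (3, "C", 65), (4, "D", 80), (5, "E", 78)],
)

# Default (ascending) sort by RATE after filtering ID > 2
rows = conn.execute(
    "SELECT * FROM EMPLOYMENT WHERE ID > 2 ORDER BY RATE"
).fetchall()
print(rows)  # [(3, 'C', 65), (5, 'E', 78), (4, 'D', 80)]

# Explicit descending sort
rows_desc = conn.execute(
    "SELECT * FROM EMPLOYMENT WHERE ID > 2 ORDER BY RATE DESC"
).fetchall()
print(rows_desc)  # [(4, 'D', 80), (5, 'E', 78), (3, 'C', 65)]
```

Omitting ASC/DESC yields the same rows as ORDER BY ... ASC, matching the first two examples above.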
Pediatrics

# Polygenic risk for obesity and its interaction with lifestyle and sociodemographic factors in European children and adolescents

## Abstract

### Background

Childhood obesity is a complex multifaceted condition, which is influenced by genetics, environmental factors, and their interaction. However, these interactions have mainly been studied in twin studies and evidence from population-based cohorts is limited. Here, we analyze the interaction of an obesity-related genome-wide polygenic risk score (PRS) with sociodemographic and lifestyle factors for BMI and waist circumference (WC) in European children and adolescents.

### Methods

The analyses are based on 8609 repeated observations from 3098 participants aged 2–16 years from the IDEFICS/I.Family cohort. A genome-wide PRS was calculated using summary statistics from independent genome-wide association studies of BMI. Associations were estimated using generalized linear mixed models adjusted for sex, age, region of residence, parental education, dietary intake, relatedness, and population stratification.

### Results

The PRS was associated with BMI (beta estimate [95% confidence interval (CI)] = 0.33 [0.30, 0.37], r² = 0.11, p value = 7.9 × 10^−81) and WC (beta [95% CI] = 0.36 [0.32, 0.40], r² = 0.09, p value = 1.8 × 10^−71). We observed significant interactions with demographic and lifestyle factors for BMI as well as WC. Children from Southern Europe showed an increased genetic liability to obesity (BMI: beta [95% CI] = 0.40 [0.34, 0.45]) in comparison to children from Central Europe (beta [95% CI] = 0.29 [0.23, 0.34]; p-interaction = 0.0066).
Children of parents with a low level of education showed an increased genetic liability to obesity (BMI: beta [95% CI] = 0.48 [0.38, 0.59]) in comparison to children of parents with a high level of education (beta [95% CI] = 0.30 [0.26, 0.34]; p-interaction = 0.0012). Furthermore, the genetic liability to obesity was attenuated by a higher intake of fiber (BMI: beta [95% CI] for interaction = −0.02 [−0.04, −0.01]) and shorter screen times (beta [95% CI] for interaction = 0.02 [0.00, 0.03]).

### Conclusions

Our results highlight that a healthy childhood environment might partly offset a genetic predisposition to obesity during childhood and adolescence.

## Introduction

Obesity is a complex multifaceted condition and its prevalence has been increasing continuously over previous decades, most likely due to adverse changes in environmental and demographic factors [1]. Studies in twins have suggested that genetic factors explain ~40–80% of the variation in obesity susceptibility [2]. Twin studies have further suggested that obesity-predisposing genes are not deterministic, but rather interact with a variety of environmental and lifestyle factors. In particular, the heritability of BMI has been shown to be higher among children living in obesogenic home environments [3,4,5,6], children whose parents have lower education levels [7], and young adults with a sedentary lifestyle [8, 9]. An alternative to the traditional twin study design is genome-wide association studies (GWAS), which have revolutionized the field of complex disease genetics over the past decade, providing numerous compelling associations for obesity [10, 11] and other human complex traits and diseases [12].
GWAS have identified 751 genetic variants (single-nucleotide polymorphisms (SNPs)) associated with BMI [10, 11], and a subset of them has been used in gene–environment (G×E) interaction analyses to show that the genetic predisposition to obesity is attenuated by a healthy lifestyle, including physical activity [13, 14] and adherence to healthy dietary patterns [14,15,16,17,18,19,20]. However, these genome-wide significant variants only account for a small portion of BMI variation (up to 6%) [10, 11], while genome-wide estimates suggest that common variation accounts for >20% of BMI variation [10]. Therefore, the polygenic nature of BMI is not reflected in the current literature on BMI-related G×E interactions, which could have decreased the statistical power to detect interactions. Khera et al. suggest that the power to predict BMI can be improved by using polygenic risk scores (PRSs) that include SNPs that do not reach the threshold for genome-wide significance and by using genome-wide approaches [21]. We hypothesize that using a PRS that captures the polygenic nature of BMI will enable us to validate the interactions that were found in twin studies [3,4,5,6,7,8,9] and possibly detect new G×E interactions that have not been found by previous studies. Another gap in knowledge is that most previous G×E interaction studies primarily involved adults [8, 9, 13,14,15,16,17,18,19,20, 22, 23], so little is known about whether the inherited susceptibility to obesity is modified by environmental factors already during childhood and adolescence. Given that the weight trajectories of individuals in different PRS deciles start to diverge in early childhood [21], the identification of robust G×E interactions in children is particularly important to facilitate targeted strategies for obesity prevention early in life.
In this study, we will calculate the most recent PRS for BMI [21] and (1) show the variance explained by the PRS for BMI as well as for waist circumference of European children and adolescents and (2) analyze its interaction with parental education, region of residence, selected dietary variables, and physical activity to investigate to which degree the inherited susceptibility to obesity in children is modified by these sociodemographic and lifestyle factors. The analyses are based on 8609 repeated observations from 3098 children and adolescents aged 2–16 years from the pan-European IDEFICS/I.Family cohort.

## Methods

### Study population

The pan-European IDEFICS/I.Family cohort [24, 25] is a multi-center, prospective study on the association of social, environmental, and behavioral factors with children’s health status. Children were recruited through kindergarten or school settings in Belgium, Cyprus, Estonia, Germany, Hungary, Italy, Spain, and Sweden. In 2007/2008, 16,229 children aged between 2 and 9.9 years participated in the baseline survey. Follow-up surveys were conducted after 2 (FU1, N = 11,043 plus 2543 newcomers) and 6 years (FU2, N = 7117 plus 2512 newly recruited siblings). Questionnaires were completed by parents. In the second follow-up (FU2), adolescents of 12 years of age or older reported for themselves. The study was conducted in agreement with the Declaration of Helsinki; all procedures were approved by the local ethics committees and written and oral informed consents were obtained. Children were selected for a whole-genome scan based on their participation in the individual study modules. Children from Cyprus were not included in this initial genotyping to minimize population stratification.

### Assessment of BMI and waist circumference

BMI was calculated as weight divided by height squared [kg/m²]. Height was measured to the nearest 0.1 cm by a SECA 225 Stadiometer (Seca GmbH & Co.
KG., Hamburg, Germany) and body weight was measured in fasting state in light underwear on a calibrated scale accurate to 0.1 kg by a Tanita BC 420 SMA scale (TANITA, Tokyo, Japan). Waist circumference was measured in upright position with relaxed abdomen and feet together using an inelastic tape (Seca 200, Birmingham, UK), precision 0.1 cm, midway between the iliac crest and the lowest rib margin to the nearest 0.1 cm [26]. Age- and sex-specific BMI and waist circumference z-scores for children and adolescents were calculated using reference data from the International Obesity Task Force [27] and from British children [28], respectively. In addition, we proceeded as follows to dichotomize BMI and waist circumference (binary outcomes): As recommended by the International Obesity Task Force [27], we used age- and sex-specific cutoff values for obesity based on the raw BMI values, e.g., 6.0-year-old boys and girls with a BMI of at least 19.76 and 19.62 were considered as obese, respectively. The age- and sex-specific cutoff values for waist circumference were based on the top quartile of the reference data from the National Health and Nutrition Examination Survey [29], e.g., 6.0-year-old boys and girls with a waist circumference of at least 58.3 and 57.2 cm were in the top quartile of waist circumference, respectively.

### Genotyping and quality control

DNA was extracted from saliva or blood samples using established procedures. Genotyping of 3515 children was performed on the UK Biobank Axiom array (Santa Clara, USA) in two batches (2015 and 2017). Following the recommendations of ref. [30], sample and genotype quality control measures were applied (see Supplementary materials for details), resulting in 3099 children and 3,424,677 genotypes after imputation.
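The BMI computation and its age- and sex-specific dichotomization described above can be sketched as follows; the two cutoff values are those quoted in the text for 6.0-year-olds, and the helper names are hypothetical (the full cutoff tables are given in refs. [27, 29]):

```python
def bmi(weight_kg, height_cm):
    """BMI = weight / height^2, with height converted to meters."""
    h = height_cm / 100.0
    return weight_kg / h**2

# IOTF obesity cutoffs quoted in the text for 6.0-year-old boys and girls
OBESITY_CUTOFF = {("boy", 6.0): 19.76, ("girl", 6.0): 19.62}

def is_obese(weight_kg, height_cm, sex, age):
    """Dichotomize raw BMI against the age- and sex-specific cutoff."""
    return bmi(weight_kg, height_cm) >= OBESITY_CUTOFF[(sex, age)]

print(round(bmi(25.0, 110.0), 2))          # 20.66
print(is_obese(25.0, 110.0, "girl", 6.0))  # True
```

In the study itself the cutoffs vary continuously with age; this sketch only illustrates the comparison of raw BMI against a reference value.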
A genetic relatedness matrix was calculated by using the program EMMAX (https://genome.sph.umich.edu/wiki/EMMAX) to account for the degree of relatedness within the study sample and to adjust for population stratification [31, 32] (see “Statistical analyses”).

### Polygenic risk score calculation

We calculated PRS based on genome-wide summary statistics for BMI from European ancestry populations. The PRS (called PRS-Khera) was proposed and validated in Khera et al. [21]. It consists of 2,100,302 SNPs and is based on summary statistics from the first large-scale GWAS of BMI (~300,000 samples) [10]. PRS-Khera was calculated in Khera et al. [21] using a computational algorithm called LDpred, which is a Bayesian approach to calculate a posterior mean effect for all variants using external weights with subsequent shrinkage based on linkage disequilibrium [33]. Using LDpred, each variant was reweighted according to the prior GWAS [10], the degree of correlation between a variant and others nearby, and a tuning parameter that denotes the proportion of variants with non-zero effect. In sensitivity analyses, the performance of PRS-Khera was compared to PRS calculated with PRSice [34] and PRS based on only genome-wide significant SNPs from two discovery samples (the same discovery sample as for PRS-Khera (~300,000 samples) [10] and the largest published GWAS of BMI to date (~700,000 samples) [11]). More details on the different PRS are given in the Supplementary methods and Figs. S1–S3.

### Assessment of dietary intake

We used long-term and short-term dietary measurements assessed by food frequency questionnaires (FFQs) and repeated 24-h dietary recalls, respectively [35]. A fruit and vegetable score was calculated from FFQs (for more details on the FFQs and calculation of the fruit and vegetable score, see the Supplementary material). We expressed the fruit and vegetable consumption as the relative frequency in relation to all foods reported in the FFQs [36].
Energy and dietary fiber intake was assessed by repeated 24-h dietary recalls in a subset of the IDEFICS/I.Family cohort (see Table 1 for the actual numbers) [37, 38]. Fiber intake was expressed in relation to total energy intake in mg/kcal. See Supplementary material for more details.

### Assessment of physical activity

Physical activity was objectively measured using Actigraph uniaxial or triaxial accelerometers [39, 40]. At baseline and FU1, children were asked to wear the accelerometer for 3 days (including 1 weekend day) and at FU2 for a full week during waking hours (except when swimming or showering). The daily average cumulative duration of time spent performing moderate-to-vigorous physical activity (MVPA) was expressed as hours per day according to the cutoff value by Evenson et al. [41]. Time spent in MVPA is based on cleaned accelerometer data that only contain measurements that have passed the minimum wear time criteria of at least 3 measurement days and at least 360 min of valid time per day. The accelerometers were attached to the right hip with an elastic belt. See Supplementary material for more details.

### Assessment of screen time

Screen time was assessed by asking how many hours per day the child/adolescent usually spends watching television (including videos or DVDs) and by another question on the time spent sitting in front of a computer and game console [42, 43]. Responses were weighted and summed across weekdays and weekend days and the quantified frequencies from both questions were added to create a continuous variable of total screen time in hours per day. See Supplementary material for more details.

### Assessment of sociodemographic variables

Parental education was retrieved from questionnaires and coded according to the International Standard Classification of Education (ISCED) [44].
For the analyses, the highest education of both parents was coded as low (ISCED levels 1 and 2; ≤9 years of education), medium (ISCED levels 3 and 4), and high (ISCED levels 5 and 6; ≥2 years of education after high school). The region of residence was coded as Northern Europe (Estonia, Sweden), Central Europe (Belgium, Germany, and Hungary), and Southern Europe (Italy, Spain).

### Statistical analyses

Our data consist of up to three repeated measurements per individual, some of whom were siblings. We estimated associations between the PRS and obesity outcomes (BMI and waist circumference) as well as interactions between the PRS and demographic and lifestyle factors using generalized linear mixed models where the covariance matrix of the random intercept is proportional to a genetic relatedness matrix. We applied the generalized linear mixed model approach of Chen et al. [31] that jointly controls for relatedness and population stratification. Such a model can be formulated, in slightly simplified notation, as:

$$g\left(E(y)\right) = X\beta + \gamma,$$

$$\gamma \sim N\left(0, V\right),$$

where g() is the link function, E() the expectation, y is the dependent variable, X the covariate matrix, β a vector of the fixed effects, and γ the intercept-only random effect, which is assumed to be normally distributed with expectation 0 and covariance according to the genetic relatedness matrix V. In addition, we conducted the following analyses for the main effects of the PRS for easier interpretation and comparison with the results from Khera et al. [21]: (1) we used logistic mixed models (logit link) to estimate associations between the PRS and obesity and the top quartile of waist circumference (binary outcomes) and (2) we estimated associations between being in the top decile of the PRS (binary variable) and the obesity outcomes.
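As a rough illustration of such a model (a simulation sketch under assumed parameters, not the authors' implementation or the software they used), the identity-link case y = Xβ + γ + ε with γ ~ N(0, σ²V) can be simulated and β recovered by generalized least squares, where V encodes sibling relatedness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # individuals: 100 hypothetical sibling pairs

# Genetic relatedness matrix V: block-diagonal, siblings share ~0.5
V = np.eye(n)
for i in range(0, n, 2):
    V[i, i + 1] = V[i + 1, i] = 0.5

# Design matrix: intercept plus one PRS-like standardized covariate
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([1.0, 0.33])  # assumed effect sizes for the sketch

# y = X beta + gamma + eps, with gamma ~ N(0, 0.5 * V)
gamma = rng.multivariate_normal(np.zeros(n), 0.5 * V)
eps = rng.normal(scale=0.5, size=n)
y = X @ beta_true + gamma + eps

# Generalized least squares with the (here, known) total covariance
Sigma = 0.5 * V + 0.25 * np.eye(n)
Si = np.linalg.inv(Sigma)
beta_hat = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
print(beta_hat)  # close to beta_true
```

In the actual analyses, g() is a logit link for the binary outcomes and the variance components are estimated from the data rather than assumed known.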
All models were adjusted for confounding factors that are assumed to be associated with lifestyle and obesity (sex, age, region of residence, parental education, and dietary intake (fruit and vegetable score as a proxy for healthy dietary intake)). Models that investigated the interaction between the PRS and fiber intake were not additionally adjusted for the fruit and vegetable score because both variables serve as proxies for healthy dietary intake. The response and confounding variables showed only a small percentage of missing values, whereas some exposure variables such as fiber intake and MVPA had more missing values (Table 1). We compared BMI and waist circumference of children with and without missing values in exposure variables (fiber, fruit and vegetable score, MVPA, screen time) to evaluate whether they were missing at random. As we conducted a repeated measurement analysis, we retained all children in the analysis that had at least one observed measurement of each variable and performed listwise deletion of incomplete cases. When testing associations with categorical variables (sex, region of residence, and parental education), we used the category with the largest sample size as the reference category. All p values from the G×E interaction analyses were adjusted for the number of tested environmental factors using the false-discovery rate (FDR; FDR-adjusted p values are called q values). We reported 95% CIs and two-sided p values, and considered p values <0.05 statistically significant. We used R 3.5.1 [45] for all statistical analyses.

## Results

### Study description

The study sample included 8609 repeated BMI measurements from at most three time points (baseline, FU1, FU2) of 3098 children aged 2–16 years (Table 1). The number of participants decreased between the follow-up investigations from n = 3016 at baseline (mean age 6 years) to n = 2656 at FU2 (mean age 12 years).
Half of the children were girls, most children came from families with a medium or high level of parental education, and the majority lived in Central European countries. The distributions of the dietary variables (fruit and vegetable score and fiber intake) and time spent in MVPA were similar between baseline and the two follow-up samples, whereas children and adolescents spent more time in front of screens at FU1 and FU2 as compared to baseline. For the variables with the most missing values (MVPA, fiber intake, the fruit and vegetable score, and screen time), we observed at least one of three repeated measurements for 90%, 95%, >99%, and >99% of the children, respectively. We found no substantial differences in BMI, waist circumference, or the PRS between children with no measurements at any visit and children with at least one observed measurement (see Fig. S4).

### Variance explained by PRS

We found that PRS-Khera provided the best prediction of BMI (r² = 0.11) and the second-best prediction of obesity (AUC = 0.74; see Table S1 for details on the characteristics of the other PRS). PRS-Khera was associated with BMI (r² = 0.11, p value = 7.9 × 10^−81) and waist circumference (r² = 0.09, p value = 1.8 × 10^−71) in our study population (Table 2) and these correlations increased with age (see Tables S2, S3 and Fig. S5). Being in the top decile of the distribution of PRS-Khera was associated with 3.63 times higher odds for obesity (95% CI: [2.57, 5.14]) and with 3.09 times higher odds (95% CI: [2.37, 4.03]) for being in the top quartile of waist circumference.

### G×E interactions

We found a significant G×E interaction of PRS-Khera with parental education (low vs. high) as well as with the European region of residence (Central vs. Southern) for BMI as well as for waist circumference (Fig. 1 and Table S4).
Children and adolescents from families with a low level of parental education were at a higher risk of having obesity, given a higher genetic susceptibility, than children from families with a high level of parental education (low: beta estimate from education-stratified analysis for the association between PRS-Khera and BMI = 0.48; 95% CI: [0.38, 0.59]; high: beta estimate = 0.30; 95% CI: [0.26, 0.34]; q value for interaction = 0.0106, Fig. 1 and Table S4). Furthermore, children and adolescents from Southern European countries showed an increased genetic susceptibility to a high BMI in comparison to children and adolescents from Central Europe (Central Europeans: beta estimate from region-stratified analysis for the association between PRS-Khera and BMI = 0.29; 95% CI: [0.23, 0.34]; Southern Europeans: beta estimate = 0.40; 95% CI: [0.34, 0.45]; q value for interaction = 0.0246, Fig. 1 and Table S4). Interactions were confirmed in our sensitivity analyses using other genome-wide PRS (Fig. S6 and Table S6). We did not find significant interactions between PRS-Khera and sex, the comparison of low vs. medium parental education, or the comparison of Central vs. Northern European region of residence (Fig. 1 and Table S4). The genetic susceptibility to a high BMI was further modified by the intake of dietary fiber and by screen time (Fig. 2 and Table S5). Children and adolescents with a higher fiber intake showed an attenuated risk of having obesity despite their genetic susceptibility (BMI: beta estimate and 95% CI for the interaction term: −0.02 [−0.04, −0.01], q value for interaction = 0.025; waist circumference: −0.03 [−0.06, −0.01], q value for interaction = 0.023). Furthermore, the more time the children and adolescents spent in front of screens, the higher was their risk of having obesity among those with higher genetic susceptibility (significant for BMI: beta estimate and 95% CI for the interaction term: 0.02 [0.00, 0.03], q value for interaction = 0.042).
Interactions between PRS-Khera and the fruit and vegetable score or MVPA were not significant (beta estimates and 95% CIs for the interaction terms: −0.01 [−0.21, 0.19] for the fruit and vegetable score and −0.01 [−0.07, 0.04] for MVPA). Interaction results with other PRS for obesity were similar, but not significant (Fig. S7 and Table S7).

## Discussion

In our pan-European cohort of children aged 2–16 years, we found significant interactions between PRS-Khera and sociodemographic as well as lifestyle factors for BMI and waist circumference: we observed G×E interactions with (1) the European region of residence, which most likely reflects cultural lifestyle differences, (2) parental education, (3) dietary fiber intake, and (4) the time children spent in front of screens. Of note, all of these interactions would have remained undetected in this sample of children when only focusing on genome-wide significant variants, as was done in previous studies (compare Figs. S6 and S7) [13,14,15,16,17,18,19,20].

### Comparison with previous studies

Interactions with socioeconomic status [7, 14], physical activity [8, 9, 13, 14], and dietary factors [14,15,16] have been reported previously. However, previous interaction results were mainly estimated in twin studies, which might not be representative of the general population [46], and in cohort studies including only <100 genome-wide significant SNPs, which do not account for the polygenic nature of BMI [21]. Thus, our study confirms previous interaction findings and demonstrates that genome-wide PRSs are a powerful approach to detect interactions and a good alternative to the traditional twin study design. Genome-wide PRSs have the advantage that they can be applied to cohort studies, while explaining a much larger part of the genetic variance of BMI than studies restricted to genome-wide significant variants.
In addition, previous G×E interaction studies were mainly based on adult populations, whereas in our study we analyzed data from children and adolescents aged 2–16 years, i.e., in the key developmental transition phases of human life. We identified children from families with a low level of parental education as being about 61% more susceptible to the polygenic burden of obesity than children from families with a high level of parental education. In addition, we found that children from Southern Europe had a higher genetic susceptibility to obesity in comparison to children from Central Europe. Parental education and region of residence reflect a variety of social and cultural differences, many of which are difficult to capture by questionnaires. Since a previous analysis of the same cohort showed that low parental education was associated with higher intakes of unhealthy food among children, e.g., sugar-rich and fatty foods [47, 48], part of the effect modification might be due to dietary habits. The differences in the risk of having obesity among children with a higher genetic susceptibility across different European regions might be explained by differences in dietary or cultural habits [49, 50]. Furthermore, we found an interaction between PRS-Khera and dietary fiber intake, where children with a higher intake of fiber had a reduced risk for obesity despite their genetic susceptibility. This finding is in line with many other studies that have shown that a healthy diet can attenuate the genetic burden of obesity [14,15,16,17,18,19,20]. Interactions between PRS-Khera and physical activity (MVPA) were not significant, but the direction of the interaction effect was in line with previous studies [13, 14]. An explanation for this might be that MVPA was only assessed in ~40% of our analysis group (Table 1), which reduced the statistical power to detect interactions between MVPA and the PRS.
### Strengths and limitations of this study

Important strengths of this study include the detailed and repeated phenotyping of participants with partly objective measures (MVPA), the inclusion of thousands of children from diverse regions in Europe, and the longitudinal approach across key developmental periods [25]. Dietary assessment in children is a challenging task, and different dietary assessment methods have different strengths and limitations. We used two different methods: a fruit and vegetable score derived from FFQs and fiber intake calculated from the more detailed 24-h dietary recalls. The harmonized protocol in all countries, enforced by central quality control and central data management, ensures comparability of measurements across study centers. Another major strength of our study is the application of a genome-wide PRS for obesity, which has an almost five times higher prediction accuracy than previously used PRS [14,15,16,17,18,19,20] and with which we identified interactions that would have remained undetected when restricting to only genome-wide significant variants (compare Figs. S5 and S6). In addition, although PRS-Khera was derived for BMI, we also assessed its association with waist circumference. The strength of this association was only slightly smaller than that of the association with BMI. This is plausible, because PRS-Khera is known to be a strong risk factor for severe obesity and associated health outcomes [21]. A limitation of our study is that measurement errors of self-reported lifestyle behaviors are inevitable. However, measurement error in environmental exposures typically biases the interaction effect toward the null [51], which does not increase the risk for false-positive findings but reduces the statistical power to detect modest interactions. In addition, we used a complete-case analysis strategy, which might bias the estimates toward the null [52].
## Conclusions

Our study showed significant interactions between the polygenic risk for an increased BMI and sociodemographic and lifestyle factors that affect BMI as well as waist circumference. Among children with a high genetic risk, we identified children from Southern Europe, children from families with a low level of parental education, children with a low dietary fiber intake, and children who spend more time in front of screens as being particularly susceptible to obesity. These results suggest that the risk for obesity among children with a high genetic susceptibility varies by environmental and sociodemographic factors during childhood. While all children benefit from an environment that supports a healthy lifestyle, our findings suggest that this is particularly important for children with a high genetic risk for obesity. Although it is unlikely that genetic screening for obesity will be implemented in clinical practice anytime soon, our findings emphasize the importance of obesity prevention in early childhood by showing that there are synergistic effects of genetics and sociodemographic and lifestyle factors that could affect a substantial part of the general population. The interactions between parental education, region, and genetic heritability indicate that system-level interventions might be better suited than individual intervention strategies.

## References

1. GBD 2015 Obesity Collaborators, Afshin A, Forouzanfar MH, Reitsma MB, Sur P, Estep K, et al. Health effects of overweight and obesity in 195 countries over 25 years. N Engl J Med. 2017;377:13–27.
2. Silventoinen K, Jelenkovic A, Sund R, Hur YM, Yokoyama Y, Honda C, et al. Genetic and environmental effects on body mass index from infancy to the onset of adulthood: an individual-based pooled analysis of 45 twin cohorts participating in the COllaborative project of Development of Anthropometrical measures in Twins (CODATwins). Am J Clin Nutr. 2016;104:371–9.
3. Min J, Chiu DT, Wang Y.
Variation in the heritability of body mass index based on diverse twin studies: a systematic review. Obes Rev. 2013;14:871–82.
4. Rokholm B, Silventoinen K, Tynelius P, Gamborg M, Sørensen TIA, Rasmussen F. Increasing genetic variance of body mass index during the Swedish obesity epidemic. PLoS ONE. 2011;6:e27135.
5. Dinescu D, Horn EE, Duncan G, Turkheimer E. Socioeconomic modifiers of genetic and environmental influences on body mass index in adult twins. Health Psychol. 2016;35:157–66.
6. Schrempft S, Van Jaarsveld CHM, Fisher A, Herle M, Smith AD, Fildes A, et al. Variation in the heritability of child body mass index by obesogenic home environment. JAMA Pediatr. 2018;172:1153–60.
7. Silventoinen K, Jelenkovic A, Latvala A, Yokoyama Y, Sund R, Sugawara M, et al. Parental education and genetics of BMI from infancy to old age: a pooled analysis of 29 twin cohorts. Obesity. 2019;27:855–65.
8. Karnehed N, Tynelius P, Heitmann BL, Rasmussen F. Physical activity, diet and gene-environment interactions in relation to body mass index and waist circumference: the Swedish Young Male Twins Study. Public Health Nutr. 2006;9:851–8.
9. Mustelin L, Silventoinen K, Pietiläinen K, Rissanen A, Kaprio J. Physical activity reduces the influence of genetic effects on BMI and waist circumference: a study in young adult twins. Int J Obes. 2009;33:29–36.
10. Locke AE, Kahali B, Berndt SI, Justice AE, Pers TH, Day FR, et al. Genetic studies of body mass index yield new insights for obesity biology. Nature. 2015;518:197–206.
11. Yengo L, Sidorenko J, Kemper KE, Zheng Z, Wood AR, Weedon MN, et al. Meta-analysis of genome-wide association studies for height and body mass index in ~700 000 individuals of European ancestry. Hum Mol Genet. 2018;27:3641–9.
12. Tam V, Patel N, Turcotte M, Bossé Y, Paré G, Meyre D. Benefits and limitations of genome-wide association studies. Nat Rev Genet. 2019;20:467–84.
13. Li S, Zhao JH, Luan J, Ekelund U, Luben RN, Khaw KT, et al.
Physical activity attenuates the genetic predisposition to obesity in 20,000 men and women from EPIC-Norfolk prospective population study. PLoS Med. 2010;7:1–9.
14. Rask-Andersen M, Karlsson T, Ek WE, Johansson Å. Gene-environment interaction study for BMI reveals interactions between genetic factors and physical activity, alcohol consumption and socioeconomic status. PLoS Genet. 2017;13:1–20.
15. Wang T, Heianza Y, Sun D, Huang T, Ma W, Rimm EB, et al. Improving adherence to healthy dietary patterns, genetic risk, and long term weight gain: gene-diet interaction analysis in two prospective cohort studies. BMJ. 2018;360:1–9.
16. Wang T, Heianza Y, Sun D, Zheng Y, Huang T, Ma W, et al. Improving fruit and vegetable intake attenuates the genetic association with long-term weight gain. Am J Clin Nutr. 2019;110:759–68.
17. Qi Q, Chu AY, Kang JH, Huang J, Rose LM, Jensen MK, et al. Fried food consumption, genetic risk, and body mass index: gene-diet interaction analysis in three US cohort studies. BMJ. 2014;348:g1610.
18. Ding M, Ellervik C, Huang T, Jensen MK, Curhan GC, Pasquale LR, et al. Diet quality and genetic association with body mass index: results from 3 observational studies. Am J Clin Nutr. 2018;108:1291–300.
19. Casas-Agustench P, Arnett DK, Smith CE, Lai C-Q, Parnell LD, Borecki IB, et al. Saturated fat intake modulates the association between a genetic risk score of obesity and BMI in two US populations. J Acad Nutr Diet. 2013;18:1199–216.
20. Wang T, Huang T, Kang JH, Zheng Y, Jensen MK, Wiggs JL, et al. Habitual coffee consumption and genetic predisposition to obesity: gene-diet interaction analyses in three US prospective studies. BMC Med. 2017;15:1–9.
21. Khera AV, Chaffin M, Wade KH, Zahid S, Brancale J, Xia R, et al. Polygenic prediction of weight and obesity trajectories from birth to adulthood. Cell. 2019;177:587–596.e9.
22. Silventoinen K, Jelenkovic A, Sund R, Yokoyama Y, Hur YM, Cozen W, et al.
Differences in genetic and environmental variation in adult BMI by sex, age, time period, and region: an individual-based pooled analysis of 40 twin cohorts. Am J Clin Nutr. 2017;106:457–66. 23. Ordoñana JR, Rebollo-Mesa I, González-Javier F, Pérez-Riquelme F, Martínez-Selva JM, Willemsen G, et al. Heritability of body mass index: a comparison between the Netherlands and Spain. Twin Res Hum Genet. 2007;10:749–56. 24. Ahrens W, Bammann K, Siani A, Buchecker K, De Henauw S, Iacoviello L, et al. The IDEFICS cohort: design, characteristics and participation in the baseline survey. Int J Obes. 2011;35:3–15. 25. Ahrens W, Siani A, Adan R, De Henauw S, Eiben G, Gwozdz W, et al. Cohort profile: the transition from childhood to adolescence in European children-how I.Family extends the IDEFICS cohort. Int J Epidemiol. 2017;46:1394–5. 26. Ahrens W, Pigeot I, Pohlabeln H, De Henauw S, Lissner L, Molnár D, et al. Prevalence of overweight and obesity in European children below the age of 10. Int J Obes. 2014;38:S99–S107. 27. Cole TJ, Lobstein T. Extended international (IOTF) body mass index cut-offs for thinness, overweight and obesity. Pediatr Obes. 2012;7:284–94. 28. McCarthy H, Jarrett K, Crawley H. The development of waist circumference percentiles in British. Eur J Clin Nutr. 2001;55:902–7. 29. McDowell MA, Fryar CD, Hirsch R, Ogden CL. Anthropometric reference data for children and adults: U.S. population, 1999–2002. Adv Data. 2005;361:1–5. 30. Weale ME. Quality control for genome-wide association studies. Methods Mol Biol. 2010;628:341–72. 31. Chen H, Wang C, Conomos MP, Stilp AM, Li Z, Sofer T, et al. Control for population structure and relatedness for binary traits in genetic association studies via logistic mixed models. Am J Hum Genet. 2016;98:653–66. 32. Wang K, Hu X, Peng Y. An analytical comparison of the principal component method and the mixed effects model for association studies in the presence of cryptic relatedness and population stratification. Hum Hered. 
2013;76:1–9. 33. Vilhjálmsson BJ, Yang J, Finucane HK, Gusev A, Lindström S, Ripke S, et al. Modeling linkage disequilibrium increases accuracy of polygenic risk scores. Am J Hum Genet. 2015;97:576–92. 34. Euesden J, Lewis CM, O’Reilly PF. PRSice: Polygenic Risk Score software. Bioinformatics. 2015;31:1466–8. 35. Illner AK, Freisling H, Boeing H, Huybrechts I, Crispim SP, Slimani N. Review and evaluation of innovative technologies for measuring diet in nutritional epidemiology. Int J Epidemiol. 2012;41:1187–203. 36. Arvidsson L, Bogl LH, Eiben G, Hebestreit A, Nagy P, Tornaritis M, et al. Fat, sugar and water intakes among families from the IDEFICS intervention and control groups: first observations from I.Family. Obes Rev. 2015;16:127–37. 37. Intemann T, Pigeot I, De Henauw S, Eiben G, Lissner L, Krogh V, et al. Urinary sucrose and fructose to validate self-reported sugar intake in children and adolescents: results from the I.Family study. Eur J Nutr. 2019;58:1247–58. 38. Bogl LH, Silventoinen K, Hebestreit A, Intemann T, Williams G, Michels N, et al. Familial resemblance in dietary intakes of children, adolescents, and parents: does dietary quality play a role? Nutrients. 2017;9. https://doi.org/10.3390/nu9080892. 39. Konstabel K, Chopra S, Ojiambo R, Muñiz-Pardos B, Pitsiladis Y. Accelerometry-Based Physical Activity Assessment for Children and Adolescents. In: Bammann K, Lissner L, Pigeot I, Ahrens W. (eds) Instruments for Health Surveys in Children and Adolescents. Springer Series on Epidemiology and Public Health. Springer, Cham. (2019) https://doi.org/10.1007/978-3-319-98857-3_7. 40. Konstabel K, Veidebaum T, Verbestel V, Moreno LA, Bammann K, Tornaritis M, et al. Objectively measured physical activity in European children: the IDEFICS study. Int J Obes. 2014;38:135–43. 41. Evenson KR, Catellier DJ, Gill K, Ondrak KS, McMurray RG. Calibration of two objective measures of physical activity for children. J Sports Sci. 2008;26:1557–65. 42. 
Olafsdottir S, Berg C, Eiben G, Lanfer A, Reisch L, Ahrens W, et al. Young children’s screen activities, sweet drink consumption and anthropometry: results from a prospective European study. Eur J Clin Nutr. 2014;68:223–8. 43. Bogl LH, Mehlig K, Intemann T, Masip G, Keski-Rahkonen A, Russo P, et al. A within-sibling pair analysis of lifestyle behaviours and BMI z-score in the multi-centre I.Family study. Nutr Metab Cardiovasc Dis. 2019;29:580–9. 44. UNESCO. International Standard Classification of education ISCED 2011. Montreal, QC: UNESCO; 2012. 45. R Core Team. R: a language and environment for statistical computing. 2018. https://www.r-project.org/. 46. Sahu M, Prasuna JG. Twin studies: a unique epidemiological tool. Indian J Community Med. 2016;41:177–82. 47. Fernandez-Alvira JM, Mouratidou T, Bammann K, Ferna JM, Hebestreit A, Barba G, et al. Parental education and frequency of food consumption in European children: the IDEFICS study. Public Health Nutr. 2012;16:487–98. 48. Fernández-Alvira JM, Bammann K, Pala V, Krogh V, Barba G, Eiben G, et al. Country-specific dietary patterns and associations with socioeconomic status in European children: the IDEFICS study. Eur J Clin Nutr. 2014;68:811–21. 49. Tognon G, Hebestreit A, Lanfer A, Moreno LA, Pala V, Siani A, et al. Mediterranean diet, overweight and body composition in children from eight European countries: cross-sectional and prospective results from the IDEFICS study. Nutr Metab Cardiovasc Dis. 2014;24:205–13. 50. Lissner L, Lanfer A, Gwozdz W, Olafsdottir S, Eiben G, Moreno LA, et al. Television habits in relation to overweight, diet and taste preferences in European children: the IDEFICS study. Eur J Epidemiol. 2012;27:705–15. 51. Paeratakul S, Popkin BM, Kohlmeier L, Hertz-Picciotto I, Guo X, Edwards LJ. Measurement error in dietary data: implications for the epidemiologic study of the diet-disease relationship. Eur J Clin Nutr. 1998;52:722–7. 52. White IR, Carlin JB. 
Bias and efficiency of multiple imputation compared with complete-case analysis for missing covariate values. Stat Med. 2010;29:2920–31. ## Acknowledgements This research was done on behalf of the IDEFICS/I.Family consortia. The authors wish to thank the children and their parents for participating in this extensive examination. We are grateful for the support of school boards, head teachers, and communities, and for the effort of the study nurses, interviewers, laboratory technicians, and data managers, especially Claudia Brünings-Kuppe. We thank the anonymous reviewers whose comments and suggestions helped to improve and clarify this manuscript. ## Funding The IDEFICS study was financially supported by the European Commission within the Sixth RTD Framework Programme Contract No. 016181 (FOOD); the I.Family study was funded by the European Commission within the Seventh RTD Framework Programme Contract No. 266044 (KBBE 2010-14). Participating partners have contributed their own resources to the genotyping of children. AH was supported by a research fellowship from the Deutsche Forschungsgemeinschaft (DFG; HU 2731/1-1) and by the HERCULES Center (NIEHS P30ES019776). Open Access funding enabled and organized by Projekt DEAL. ## Author information Authors ### Corresponding author Correspondence to Ronja Foraita. ## Ethics declarations ### Conflict of interest The authors declare no competing interests. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Hüls, A., Wright, M.N., Bogl, L.H. et al. Polygenic risk for obesity and its interaction with lifestyle and sociodemographic factors in European children and adolescents. Int J Obes 45, 1321–1330 (2021). https://doi.org/10.1038/s41366-021-00795-5 • Revised: • Accepted: • Published: • Issue Date: • DOI: https://doi.org/10.1038/s41366-021-00795-5
Window frame

This time we are looking at the crossword puzzle clue for: Window frame. It's a 12-letter crossword definition. Next time when searching the web for a clue, try using the search term "Window frame crossword" or "Window frame crossword clue" when searching for help with your puzzles. Below you will find the possible answers for Window frame. We hope you found what you needed! If you are still unsure about some definitions, don't hesitate to search them here with our crossword puzzle solver. Last seen on: NY Times Crossword 16 Jan 20, Thursday

Random information on the term "Window frame": The thermal conductivity of a material is a measure of its ability to conduct heat. It is commonly denoted by k, λ, or κ. Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. For instance, metals typically have high thermal conductivity and are very efficient at conducting heat, while the opposite is true for insulating materials like Styrofoam. Correspondingly, materials of high thermal conductivity are widely used in heat sink applications, and materials of low thermal conductivity are used as thermal insulation. The reciprocal of thermal conductivity is called thermal resistivity. The defining equation for thermal conductivity is q = -k∇T, where q is the heat flux, k is the thermal conductivity, and ∇T is the temperature gradient. This is known as Fourier's law for heat conduction. Although commonly expressed as a scalar, the most general form of thermal conductivity is a second-rank tensor. However, the tensorial description only becomes necessary in materials which are anisotropic.
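The Fourier's-law paragraph above can be illustrated with a one-dimensional sketch in Python. The conductivity values below are round illustrative numbers (a generic metal vs. an insulating foam), not measured data:

```python
def heat_flux(k, t_hot, t_cold, thickness):
    """Steady-state 1-D conductive heat flux (W/m^2): q = -k * dT/dx."""
    return -k * (t_cold - t_hot) / thickness

# Same 20 K drop across a 0.1 m slab; k values are illustrative only.
q_metal = heat_flux(50.0, 300.0, 280.0, 0.1)   # high-conductivity material
q_foam = heat_flux(0.03, 300.0, 280.0, 0.1)    # low-conductivity material
print(q_metal, q_foam)  # roughly 10000 W/m^2 vs. 6 W/m^2
```

The sign convention matches the law: heat flows from hot to cold, opposite the temperature gradient.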
Random information on the term “SASH”: The fascia is a sash worn by clerics and seminarians with the cassock in the Roman Catholic Church and in the Anglican Church. It is not worn as a belt but is placed above the waist between the navel and the breastbone (sternum). The ends that hang down are worn on the left side of the body and placed a little forward but not completely off the left hip. The fascia is not a vestment, but is part of choir dress and is also used in more solemn everyday dress. The pope’s fascia is white. Only the pope may have his coat of arms placed on the ends of the fascia that hang down near or past the knees. The fascia worn by cardinals is scarlet-red watered silk. The fascia worn by nuncios within the territories assigned to them is purple watered silk. The fascia worn by patriarchs (the Eastern Catholic patriarchs have been allowed to wear scarlet in their choir dress at times, especially before Vatican II, even when they were not also cardinals) and archbishops and bishops who are not cardinals, protonotaries apostolic, honorary prelates, and chaplains of His Holiness (these three are the different ranks of monsignors from highest to lowest) is plain (not watered) purple. The fascia worn by priests, deacons and seminarians is black, while the fascia worn by priests in the service of the Papal Household is black watered silk.
# Lebesgue Line Integrals - Parametric Change of Variables

Consider the following Lebesgue integral in $\mathbb{R}^n$ $$\int_C f(x) dx$$ Where $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is measurable and $C$ is a measurable subset of $\mathbb{R}^n$ that can be defined by the simple, differentiable, parametric curve $y(t)$ over the closed interval $[a,b]$. Does the following result (from Riemann integrals) hold? $$\int_C f(x) dx = \int_a^b f(y(t)) |y'(t)| dt$$ Where $|v|$ denotes the 2-norm of the vector $v$.

If so, I'm also wondering if one could also integrate over a set of disjoint curves to compute an integral over a larger set. To show what I mean, we first adopt an expanded notation for our parametric curve: $y(t,x)$. This simply denotes the particular curve, parameterized by $t$, whose image includes $x$. Since we assume these curves are disjoint, $y(t,x) = y(t,z) \$ if and only if there exists a $t$ such that $\ y(t,x) = z \$ (and conversely swapping x and z). Since it is difficult to define a measure over these curves, I propose a simple method: We define: $$g(x) = \frac{\int_a^b f(y(t,x)) |y'(t,x)| dt}{\int_a^b |y'(t,x)| dt}$$ From what I learned in Riemann integration, the numerator here is the line integral of $f$ over $y(t,x)$ and the denominator is the arc length. Intuitively, the denominator is there to compensate for the fact that lines will be "duplicated" an amount equal to their measure in the following integral: $$\int_X f(x) dx = \int_X g(x) dx$$ Where $X \$ is a measurable subset of $\mathbb{R}^n$ and all the intervals $[a,b]$ are constructed in such a way as to never take the curve out of $X$. It seems that the above equality holds because the set of parametric curves forms a partition of $X\$ (detailed below). My primary concern is that, while the curves are disjoint, when considered together they can compress the measure and this compression is not fully compensated for by the norm of the derivative.
Certainly the $n$-dimensional change of variables theorem could be used, but I cannot see a way to write the set of parametric curves as a single injective transformation. We can show in general that if $B \$ and $C \$ form a partition of $A \$ and: $$g(x) = \begin{cases} \frac{\int_B f(u) du}{\int_B du} & \text{if } \ x \in B \\ \frac{\int_C f(u) du}{\int_C du} & \text{if } \ x \in C \end{cases}$$ Where $du = dx$. Consider the following construction: $$\int_A g(x) dx = \int_B g(x) dx + \int_C g(x) dx$$ $$= \int_B \frac{\int_B f(u) du}{\int_B du} dx + \int_C \frac{\int_C f(u) du}{\int_C du} dx$$ We note the inner integrals are constant w.r.t. $x$, yielding: $$= \int_B f(u) du \frac{\int_B dx}{\int_B du} + \int_C f(u) du \frac{\int_C dx}{\int_C du}$$ $$= \int_B f(u) du + \int_C f(u) du = \int_A f(u) du$$

- Well the Lebesgue measure of $C$ will be $0$ when $n>1$, so $\int_C f(x)~dx=0$ (and I think you meant to put an $f$ in your first equation) – ShawnD Mar 6 '12 at 21:20

- Yep, I missed the f(), thanks. I don't see why $C$ will have $0$ measure for $n > 1$. And is that to say that $C$ is not actually measurable? It seems that as long as it is not countable it's not going to be measure $0$... – anonymous_21321 Mar 6 '12 at 21:51

- Well, what's the measure of $\mathbb{R}$ (or any line) in $\mathbb{R}^2$? A curve has measure $0$ for similar reasons. Intuitively, Lebesgue measure in $\mathbb{R}^n$ gives things of "dimension" $n$ positive measure and things of dimension less than $n$ measure $0$ – ShawnD Mar 6 '12 at 21:59

- Ok, thanks. But does having measure 0 really prevent one from using them to do an integral over $\mathbb{R}^n$? What I'm really after here is a way to integrate over parametric lines instead of points, both of which have measure 0.
– anonymous_21321 Mar 6 '12 at 22:10 Most Lebesgue integrals you can compute are in fact Riemann integrable as well and you can just use the usual techniques for evaluating integrals you learn in multi-variable calculus – ShawnD Mar 6 '12 at 22:37
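As a numerical sanity check of the one-dimensional formula itself (interpreted as a Riemann integral), here is a sketch on a curve where the answer is known: $f = x^2 + y^2$ equals $1$ on the unit circle, so the line integral is just the arc length $2\pi$.

```python
import math

def line_integral(f, y, dy, a, b, n=100000):
    """Approximate the arc-length integral of f(y(t)) |y'(t)| dt
    over [a, b] by a midpoint Riemann sum."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        total += f(y(t)) * math.hypot(*dy(t)) * h
    return total

# f = x^2 + y^2 is 1 on the unit circle, so the integral is the arc length.
val = line_integral(lambda p: p[0] ** 2 + p[1] ** 2,
                    lambda t: (math.cos(t), math.sin(t)),
                    lambda t: (-math.sin(t), math.cos(t)),
                    0.0, 2 * math.pi)
print(val)  # close to 2*pi
```

This of course says nothing about the Lebesgue integral over $C$ as a subset of $\mathbb{R}^2$, which, as the comments point out, is zero.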
# For a jambalaya cook-off, there will be x judges sitting in

Carcass (Founder, joined 18 Apr 2015) posted on 12 Aug 2018, 03:14:

Question Stats: 53% (01:07) correct, 46% (00:57) wrong, based on 32 sessions

For a jambalaya cook-off, there will be x judges sitting in a single row of x chairs. If x is greater than 3 but no more than 6, which of the following could be the number of possible seating arrangements for the judges? Indicate two such numbers.

A. 6
B. 25
C. 120
D. 500
E. 720

Reply (VP, joined 20 Apr 2016), 12 Aug 2018, 03:53:

From the given information we have $$3 < x ≤ 6$$, so x can be 4, 5, or 6.

4 judges can be arranged in 4! ways = 24
5 judges can be arranged in 5! ways = 120
6 judges can be arranged in 6! ways = 720

Hence option C and option E.
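The case analysis above can be checked mechanically with a few lines of Python:

```python
from math import factorial

# x judges in x chairs have x! orderings; here 3 < x <= 6.
arrangements = {x: factorial(x) for x in range(4, 7)}
print(arrangements)  # {4: 24, 5: 120, 6: 720}

options = [6, 25, 120, 500, 720]
matches = [n for n in options if n in arrangements.values()]
print(matches)  # [120, 720], i.e. options C and E
```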
# Windowing and Leakage in the Cross-Correlation Search for Periodic Gravitational Waves

Document #: LIGO-T1200431-v1
Document type: T - Technical notes

Abstract: We consider the impact of spectral leakage and windowing on the sensitivity of the cross-correlation search for periodic gravitational waves. We consider the modification to the expected signal-to-noise ratio ($$\propto h_0^2$$, so perhaps better thought of as SNR-squared) in the detection statistic relative to the naïve formula, which assumes rectangular windows and a signal frequency always in the center of an SFT bin. On average we expect the SNR associated with a search of rectangular-windowed data to be $$77.4\%$$ of the naïve value. This is still better than the average expected from Hann-windowed data ($$60.1\%$$) and data processed with a half-Hann/half-rectangular Tukey window ($$69.9\%$$). Even though the Hann and Tukey windows leak a smaller fraction of their best-case SNR out of the best bin, the best-case scenarios are not as good: only $$66.7\%$$ and $$81.8\%$$, respectively, of the naïve SNR is obtained even if the Doppler-shifted frequency always falls in the center of a bin. The sensitivity of the search can be improved by including contributions from multiple SFT bins. In general this requires accounting for correlations between bins, but for rectangularly-windowed data those correlations vanish and the combination is simpler, and results in an improvement of SNR from $$77.4\%$$ to $$90.3\%$$ of the naïve value when the two closest bins from each SFT are included in the search, and to $$93.1\%$$ with the three closest bins. These values all come from an assumption that the sum over SFT pairs effects an average over the fractional offset of the signal frequency from the SFT bin center, an assumption which we investigate for several choices of search parameters.
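The quoted 77.4% average for rectangular windows is numerically consistent with averaging $\mathrm{sinc}^2(\delta)$ over the fractional bin offset $\delta \in [-1/2, 1/2]$ (one sinc factor from each SFT in a pair, with the statistic scaling as $h_0^2$). The sketch below is an interpretation of the abstract, not a reproduction of the note's own derivation:

```python
import math

def sinc(x):
    """Unnormalized-argument sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Midpoint-rule average of sinc(delta)^2 over delta in [0, 1/2];
# the integrand is even, so this equals the average over [-1/2, 1/2].
n = 100000
avg = sum(sinc((i + 0.5) / (2 * n)) ** 2 for i in range(n)) / n
print(avg)  # about 0.7737, i.e. the quoted 77.4%
```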
Advanced Search

Authors: Vassily Lyubetsky, Roman Gershgorin, Konstantin Gorbunov
Source: [J]. BMC Bioinformatics (IF 3.024), 2017, Vol. 18 (1), Springer
Abstract: Chromosome structure is a very limited model of the genome including the information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions a...

Authors: Vassily Lyubetsky, Roman Gershgorin ...
Source: [J]. BMC Bioinformatics (IF 3.024), 2016, Vol. 17 (1), Springer
Abstract: Background: One of the main aims of phylogenomics is the reconstruction of objects defined in the leaves along the whole phylogenetic tree to minimize the specified functional, which may also include the phylogenetic tree generation. Such objects can include nucl...

Authors: Vladimir Kanovei, Vassily Lyubetsky
Source: [J]. Annals of Pure and Applied Logic (IF 0.504), 2016, Vol. 167 (3), pp. 262-283, Elsevier
Abstract: We make use of a finite support product of the Jensen minimal $\Pi_2^1$ singleton forcing to define a model in which $\Pi_2^1$ uniformization fails for a set with countable cross-sections. We also define appropriate submodels of the same model in which separation fails ...

Authors: Vladimir Kanovei, Vassily Lyubetsky
Source: [J]. Applied Mathematics and Computation (IF 1.349), 2015, Vol. 255, pp. 36-43, Elsevier
Abstract: The aim of this paper is to demonstrate that several non-rigorous methods of mathematical reasoning in the field of divergent series, mostly related to the Euler and Hutton transforms, may be developed in a correct and consistent way by methods of the grossone a...

Authors: Vladimir Kanovei, Vassily Lyubetsky
Source: [J]. Applied Mathematics and Computation (IF 1.349), 2012, Vol. 218 (16), pp. 8196-8202, Elsevier
Abstract: In the early years of set theory, Du Bois Reymond introduced a vague notion of infinitary pantachie meant to symbolize an infinity bigger than the infinity of real numbers. Hausdorff reformulated this concept rigorously as a maximal chain (a linearly ordered subset) ...

Authors: Mohammad Golshani, Vladimir Kanovei, Vassily Lyubetsky
Source: [J]. Mathematical Logic Quarterly (IF 0.376), 2017, Vol. 63 (1-2), pp. 19-31, Wiley
Abstract: A generic extension L[x, y] of the constructible universe L by reals x, y is defined, in which the union of $E_0$-classes of x and y is a lightface $\Pi_2^1$ set, but neither of these two $E_0$-classes is separately ordinal-definable.

Authors: Vladimir Kanovei, Vassily Lyubetsky
Source: [J]. Topology and its Applications (IF 0.562), 2008, Vol. 156 (5), pp. 911-914, Elsevier
Abstract: Following a research line suggested by Ilijas Farah, we prove that for any abelian Polish σ-compact group H there exists an $F_\sigma$ Radon–Nikodym ideal, that is, an ideal Z ⊆ P(N) together with a Borel Z-approximate homomorphism f ...

Authors: ... Vladimir Kanovei, Mikhail Katz, Vassily Lyubetsky
Source: [J]. The Journal of Symbolic Logic (IF 0.535), 2018, Vol. 83 (1), pp. 385-391, Cambridge U Press
Abstract: We modify the definable ultrapower construction of Kanovei and Shelah (2004) to develop a ZF-definable extension of the continuum with transfer provable using countable choice only, with an additional mild hypothesis on well-ordering implying properness. Under the same ...

Authors: Vladimir Kanovei, Vassily Lyubetsky
Source: [J]. The Journal of Symbolic Logic (IF 0.535), 2019, Vol. 84 (1), pp. 266-289, Cambridge U Press
Abstract: ... $\Delta_3^1$ collapse function, we define a generic extension by a real a, in which, for a given ...

Authors: Vladimir Kanovei, Vassily Lyubetsky
Source: [J]. Mathematical Logic Quarterly (IF 0.376), 2013, Vol. 59 (3), pp. 147-166, Wiley
Abstract: We prove several dichotomy theorems which extend some known results on σ-bounded and σ-compact pointsets. In particular we show that, given a finite number of $\Delta^1_1$ ...
# How do you simplify (7+4i)/(2-3i)?

Aug 12, 2016

$\frac{2}{13} + \frac{29}{13} i$

#### Explanation:

To simplify this fraction we have to make the denominator real. To do this, multiply the numerator and denominator by the $\textcolor{blue}{\text{complex conjugate}}$ of 2 - 3i. The conjugate of 2 - 3i is 2 + 3i. Note that $\left(2 - 3 i\right) \left(2 + 3 i\right) = 13$, a real number.

Multiply numerator/denominator by 2 + 3i:

$\frac{\left(7 + 4 i\right) \left(2 + 3 i\right)}{\left(2 - 3 i\right) \left(2 + 3 i\right)} = \frac{14 + 29 i + 12 {i}^{2}}{13} = \frac{14 + 29 i - 12}{13} = \frac{2 + 29 i}{13}$

using ${i}^{2} = - 1$.

$\Rightarrow \frac{7 + 4 i}{2 - 3 i} = \frac{2}{13} + \frac{29}{13} i$
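A quick check with Python's built-in complex arithmetic, which performs the same conjugate trick internally:

```python
# Divide directly; Python's complex type handles the conjugate trick.
z = (7 + 4j) / (2 - 3j)
expected = complex(2 / 13, 29 / 13)  # the worked answer 2/13 + (29/13)i
print(z, abs(z - expected))  # the difference is at floating-point precision
```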
# The Unapologetic Mathematician ## Free modules Following yesterday’s examples of module constructions, we consider a ring $R$ with unit. Again, $R$ is a left and a right module over itself by multiplication. We can form the direct sum of a bunch of copies of $R$ over any (finite or infinite) index set $\mathcal{I}$: $\bigoplus\limits_{i\in\mathcal{I}}R$. Every element of this module is a list of elements of $R$ indexed by $\mathcal{I}$$\left(r_i\right)_{i\in\mathcal{I}}$ — and all but a finite number of them are zero. The ring $R$ acts from the left by $r\cdot\left(r_i\right)_{i\in\mathcal{I}}=\left(rr_i\right)_{i\in\mathcal{I}}$. One special thing about this module is that any element can be written as a sum — $\left(r_i\right)_{i\in\mathcal{I}}=\sum\limits_{i\in\mathcal{I}}r_i\cdot e_i$ — where $e_i$ is the element with a $1$ in the slot indexed by $i$ and ${}0$ in all the other slots. This sum makes sense because there are only a finite number of nonzero terms to consider for any given module element. Since any element can be written as an $R$-linear combination of these $e_i$, we say they “span” the module. Even better, there’s no way of writing any of the $e_i$ as an $R$-linear combination of the others. More specifically, if we have some $R$-linear combination $\sum\limits_{i\in\mathcal{I}}r_i\cdot e_i$, the only way for it to be the zero element of the module is for all of the $r_i$ to be zero. Since there are no $R$-linear relations between the $e_i$, we say that they are “linearly independent” (over $R$). These two conditions — span and linear independence — show up all the time. Whenever we have a linearly independent collection of module elements that span a module, we say that they form a “basis” of the module. By the spanning property, every module element can be written as a linear combination of basis elements. The linear independence tells us that this expression is unique. Now it’s important to note that not all modules even have a single basis. 
As an example of a module without a basis, consider the abelian group $\mathbb{Z}_2$ as a $\mathbb{Z}$-module. Now no element of this module is even linearly independent on its own! Clearly $n\cdot0=0$, even when $n$ is nonzero, so $\{0\}$ is not linearly independent. Also $n\cdot1=0$ whenever $n$ is even, so $\{1\}$ can’t be linearly independent either. There are no linearly independent sets, so no basis. On the other hand, if an $R$-module $M$ does have a basis $\{b_i\}$, I claim that it’s isomorphic to a direct sum of copies of $R$, as above. Just take the index set to index the basis itself and try to find an isomorphism $M\cong\bigoplus\limits_{i\in\mathcal{I}}R$. Construct the function by sending $b_i$ to $e_i$ and extend by $R$-linearity. Since $\{b_i\}$ is a basis we can write an element of $M$ as $\sum\limits_{i\in\mathcal{I}}r_ib_i$, which must be sent to $\sum\limits_{i\in\mathcal{I}}r_ie_i$. Since $\{e_i\}$ is a basis of $\bigoplus\limits_{i\in\mathcal{I}}R$, the only way an element of $M$ gets sent to zero is if all the $r_i$ are zero already, and every element in the target gets hit at least once. Thus the function is an isomorphism. Now, by the way direct sums interact with $\hom$, we see that for any left module $M$ we have $\hom\left(\bigoplus\limits_{i\in\mathcal{I}}R,M\right)\cong\prod\limits_{i\in\mathcal{I}}\hom(R,M)\cong\prod\limits_{i\in\mathcal{I}}M$ thus if we pick a list of elements $m_i$ of $M$ indexed by $\mathcal{I}$ — no restriction on how many nonzero elements we pick — we get a unique homomorphism from $\bigoplus\limits_{i\in\mathcal{I}}R$ to $M$ sending $e_i$ to $m_i$. This justifies calling $\bigoplus\limits_{i\in\mathcal{I}}R$ a “free” left $R$-module, analogously to free groups, free rings, and so on. The upshot of this property is that when we’re dealing with two free modules $M$ and $N$ and we have a basis in hand for each, then we have a nice way of writing down homomorphisms from $M$ to $N$. 
Let’s use $\{a_i\}_{i\in\mathcal{I}}$ as our basis for $M$ and $\{b_j\}_{j\in\mathcal{J}}$ as our basis for $N$. Then we can specify any homomorphism $f:M\rightarrow N$ by saying where $f$ sends the basis of $M$. We write $f(a_i)=n_i$. But then since $N$ has a basis we can write the $n_i$ in terms of the $b_j$, getting $f(a_i)=\sum\limits_{j\in\mathcal{J}}f_{i,j}b_j$. What if we have another homomorphism $g:N\rightarrow P$, where $P$ is free on $\{c_k\}_{k\in\mathcal{K}}$? If we write $g(b_j)=\sum\limits_{k\in\mathcal{K}}g_{j,k}c_k$ then we compose homomorphisms to get $g(f(a_i))=g\left(\sum\limits_{j\in\mathcal{J}}f_{i,j}b_j\right)=\sum\limits_{j\in\mathcal{J}}f_{i,j}g(b_j)=\sum\limits_{j\in\mathcal{J}}f_{i,j}\sum\limits_{k\in\mathcal{K}}g_{j,k}c_k=\sum\limits_{k\in\mathcal{K}}\left(\sum\limits_{j\in\mathcal{J}}f_{i,j}g_{j,k}\right)c_k$ If this looks familiar, it’s because we’re getting the coefficients of the composite homomorphism on the right by matrix multiplication! That’s right: we’re finally getting to high school algebra II here. One thing I’ll point out here that your teacher probably didn’t tell you is that we only wrote down a matrix for a homomorphism after picking a basis for each free module. A free module may have many different bases, and it requires a choice to pick one or another to write down a matrix. This choice may lead to all sorts of artifacts in the matrix that really have nothing to do with the homomorphism itself and everything to do with the basis. Thus we’ll try everywhere to avoid using a specific basis unless one clearly stands out as useful.
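To see the matrix bookkeeping concretely: with $R = \mathbb{Z}$ and the convention above ($f(a_i) = \sum_j f_{i,j} b_j$), composing homomorphisms of free modules is exactly matrix multiplication. The matrices below are hypothetical examples, not tied to any particular modules:

```python
# R = Z. Coefficients follow the post's convention:
# f(a_i) = sum_j F[i][j] b_j and g(b_j) = sum_k G[j][k] c_k,
# so the composite g∘f has coefficient matrix F·G.

def matmul(F, G):
    """Multiply integer matrices: (F·G)[i][k] = sum_j F[i][j] * G[j][k]."""
    return [[sum(F[i][j] * G[j][k] for j in range(len(G)))
             for k in range(len(G[0]))] for i in range(len(F))]

F = [[1, 2], [0, 3]]        # a hypothetical f on a rank-2 free module
G = [[4, 0, 1], [2, 5, 0]]  # a hypothetical g into a rank-3 free module

FG = matmul(F, G)
print(FG)  # [[8, 10, 1], [6, 15, 0]]
```

Note this is the row-vector convention; with column vectors the composite would be written $G \cdot F$ instead, one of the basis-dependent artifacts the last paragraph warns about.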
# Show that the area of the large square in the diagram can be written as (x+y)², and..

• Jul 5th 2009, 02:52 AM olivia59

Show that the area of the large square in the diagram can be written as (x+y)², and also as z²+2xy?

HERE IS THE DIAGRAM http://img43.imageshack.us/img43/9428/trig.png

Do not use Pythagoras' theorem.

b) Show how these results can be used to prove Pythagoras' theorem

• Jul 5th 2009, 02:55 AM malaygoel

The side of the larger square is x+y. Hence the area of the large square = $(x+y)^2$

• Jul 5th 2009, 02:57 AM olivia59

Thanks. How do I show how these results can be used to prove Pythagoras' theorem?

• Jul 5th 2009, 02:58 AM olivia59

And also, how do I show it can be written as z²+2xy?

• Jul 5th 2009, 03:00 AM malaygoel

Inside the larger square, there is a square of side z, and four right triangles with legs x and y. Total area = square of side z + 4 * area of one triangle = $z^2+4*\frac{1}{2}xy$ = $z^2+2xy$

Pythagoras' theorem: press spoiler

Spoiler: $(x+y)^2=z^2+2xy$, so $x^2 + y^2 =z^2$
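Since the two expressions both measure the same big square, they must agree whenever z is the hypotenuse of the right triangles (z² = x² + y²). A quick numeric spot check of the identity:

```python
import math
import random

random.seed(0)
for _ in range(100):
    x, y = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    z = math.hypot(x, y)  # hypotenuse, so z^2 = x^2 + y^2
    # (x+y)^2 and z^2 + 2xy should be equal up to rounding.
    assert abs((x + y) ** 2 - (z ** 2 + 2 * x * y)) < 1e-9
print("both area formulas agree")
```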
# Trig equation

#### mohlam12

Hey everyone, I have to solve this equation below:

1+cos(x)+cos(2x)=sin(x)+sin(2x)+sin(3x)

After too many simplifications and factorizations, I got to (I hope it's right, though):

(2sinxcosx)(2cosx+1)=cos(x)(1+2cosx)

So yeah, I factorized everything pretty much, but what step to take after that, so I can solve this equation? Thanks,

#### TD (Homework Helper)

I haven't checked whether your factorization is correct, but assuming it is, you can continue like this: you can cancel out the factors (1+2cosx) and cosx in each side. In order to be allowed to do this, they can't be zero. Check when they are zero and then check whether those values were solutions of the initial problem. After that, all that's left of your equation is 2sinx = 1, which seems easy.

#### mohlam12

Oh ok, makes sense (sorry, I didn't see the canceling out thingy). Thank you!

#### TD (Homework Helper)

No problem, I hope it works out. It seems to me that you'll get quite a number of solutions.
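A numerical spot check of the thread's conclusion: 2sin(x) = 1 gives x = π/6 (and π - π/6), and the cancelled factors cos(x) and 1+2cos(x) contribute further solutions where both sides vanish, exactly the values TD says to check:

```python
import math

def lhs(x):
    return 1 + math.cos(x) + math.cos(2 * x)

def rhs(x):
    return math.sin(x) + math.sin(2 * x) + math.sin(3 * x)

# From 2*sin(x) = 1: x = pi/6 and x = pi - pi/6 satisfy the original equation.
for x in (math.pi / 6, math.pi - math.pi / 6):
    assert abs(lhs(x) - rhs(x)) < 1e-12

# The cancelled factors also give solutions: cos(x) = 0 at x = pi/2,
# and 1 + 2*cos(x) = 0 at x = 2*pi/3, make both sides vanish.
for x in (math.pi / 2, 2 * math.pi / 3):
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("all candidate solutions check out")
```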
## [0906.2226] On post-Newtonian orbits and the Galactic-center stars

Authors: Miguel Preto, Prasenjit Saha
Date: 12 Jun 2009

Abstract: Stars near the Galactic center reach a few percent of light speed during pericenter passage, which makes post-Newtonian effects potentially detectable. We formulate the orbit equations in Hamiltonian form such that the $O(v^2/c^2)$ and $O(v^3/c^3)$ post-Newtonian effects of the Kerr metric appear as a simple generalization of the Kepler problem. A related perturbative Hamiltonian applies to photon paths. We then derive a symplectic integrator with adaptive time-steps for fast and accurate numerical calculation of post-Newtonian effects. Using this integrator, we explore relativistic effects. Taking the star S2 as an example, we find that general relativity would contribute tenths of mas in astrometry and tens of $\rm km\,s^{-1}$ in kinematics. (For eventual comparison with observations, redshift and time-delay contributions from the gravitational field on light paths will need to be calculated, but we do not attempt these in the present paper.) The contribution from stars, gas, and dark matter in the Galactic-center region is still poorly constrained observationally, but current models suggest that the resulting Newtonian perturbation on the orbits could plausibly be of the same order as the relativistic effects for stars with semi-major axes $\gtrsim 0.01$ pc (or 250 mas). Nevertheless, the known and distinctive {\it time dependence} of the relativistic perturbations may make it possible to disentangle and extract both effects from observations.

#### Jun 15, 2009
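The paper's integrator is adaptive and post-Newtonian; as purely illustrative background (my sketch, not the authors' method), here is the plain Newtonian Kepler problem integrated with a fixed-step leapfrog scheme, the simplest symplectic integrator. Symplecticity is what keeps the orbital energy error bounded over long runs instead of drifting:

```python
import math

def acceleration(x, y):
    # Newtonian point-mass gravity with GM = 1
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    # Velocity-Verlet / leapfrog: a second-order symplectic integrator
    ax, ay = acceleration(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = acceleration(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y, vx, vy

def energy(x, y, vx, vy):
    # Specific orbital energy: kinetic minus potential, GM = 1
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

# A circular orbit (r = 1, v = 1) has energy -0.5; after many periods the
# leapfrog energy error stays bounded rather than accumulating.
x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=100_000)
assert abs(energy(x, y, vx, vy) + 0.5) < 1e-3
print("energy conserved to", abs(energy(x, y, vx, vy) + 0.5))
```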
## 39.14 Quasi-coherent sheaves on groupoids

See the introduction of Section 39.12 for our choices in direction of arrows.

Definition 39.14.1. Let $S$ be a scheme, let $(U, R, s, t, c)$ be a groupoid scheme over $S$. A quasi-coherent module on $(U, R, s, t, c)$ is a pair $(\mathcal{F}, \alpha)$, where $\mathcal{F}$ is a quasi-coherent $\mathcal{O}_U$-module, and $\alpha$ is an $\mathcal{O}_R$-module map $\alpha : t^*\mathcal{F} \longrightarrow s^*\mathcal{F}$ such that

1. the diagram
$\xymatrix{ & \text{pr}_1^*t^*\mathcal{F} \ar[r]_-{\text{pr}_1^*\alpha } & \text{pr}_1^*s^*\mathcal{F} \ar@{=}[rd] & \\ \text{pr}_0^*s^*\mathcal{F} \ar@{=}[ru] & & & c^*s^*\mathcal{F} \\ & \text{pr}_0^*t^*\mathcal{F} \ar[lu]^{\text{pr}_0^*\alpha } \ar@{=}[r] & c^*t^*\mathcal{F} \ar[ru]_{c^*\alpha } }$
is commutative in the category of $\mathcal{O}_{R \times _{s, U, t} R}$-modules, and

2. the pullback $e^*\alpha : \mathcal{F} \longrightarrow \mathcal{F}$ is the identity map.

Compare with the commutative diagrams of Lemma 39.13.4. The commutativity of the first diagram forces the operator $e^*\alpha$ to be idempotent. Hence the second condition can be reformulated as saying that $e^*\alpha$ is an isomorphism. In fact, the condition implies that $\alpha$ is an isomorphism.

Lemma 39.14.2. Let $S$ be a scheme, let $(U, R, s, t, c)$ be a groupoid scheme over $S$. If $(\mathcal{F}, \alpha)$ is a quasi-coherent module on $(U, R, s, t, c)$ then $\alpha$ is an isomorphism.

Proof. Pull back the commutative diagram of Definition 39.14.1 by the morphism $(i, 1) : R \to R \times _{s, U, t} R$. Then we see that $i^*\alpha \circ \alpha = s^*e^*\alpha$. Pulling back by the morphism $(1, i)$ we obtain the relation $\alpha \circ i^*\alpha = t^*e^*\alpha$. By the second assumption these morphisms are the identity. Hence $i^*\alpha$ is an inverse of $\alpha$. $\square$

Lemma 39.14.3. Let $S$ be a scheme. Consider a morphism $f : (U, R, s, t, c) \to (U', R', s', t', c')$ of groupoid schemes over $S$.
Then pullback $f^*$ given by $(\mathcal{F}, \alpha ) \mapsto (f^*\mathcal{F}, f^*\alpha )$ defines a functor from the category of quasi-coherent sheaves on $(U', R', s', t', c')$ to the category of quasi-coherent sheaves on $(U, R, s, t, c)$.

Proof. Omitted. $\square$

Lemma 39.14.4. Let $S$ be a scheme. Consider a morphism $f : (U, R, s, t, c) \to (U', R', s', t', c')$ of groupoid schemes over $S$. Assume that

1. $f : U \to U'$ is quasi-compact and quasi-separated,

2. the square
$\xymatrix{ R \ar[d]_ t \ar[r]_ f & R' \ar[d]^{t'} \\ U \ar[r]^ f & U' }$
is cartesian, and

3. $s'$ and $t'$ are flat.

Then pushforward $f_*$ given by $(\mathcal{F}, \alpha ) \mapsto (f_*\mathcal{F}, f_*\alpha )$ defines a functor from the category of quasi-coherent sheaves on $(U, R, s, t, c)$ to the category of quasi-coherent sheaves on $(U', R', s', t', c')$ which is right adjoint to pullback as defined in Lemma 39.14.3.

Proof. Since $U \to U'$ is quasi-compact and quasi-separated we see that $f_*$ transforms quasi-coherent sheaves into quasi-coherent sheaves (Schemes, Lemma 26.24.1). Moreover, since the squares
$\vcenter { \xymatrix{ R \ar[d]_ t \ar[r]_ f & R' \ar[d]^{t'} \\ U \ar[r]^ f & U' } } \quad \text{and}\quad \vcenter { \xymatrix{ R \ar[d]_ s \ar[r]_ f & R' \ar[d]^{s'} \\ U \ar[r]^ f & U' } }$
are cartesian we find that $(t')^*f_*\mathcal{F} = f_*t^*\mathcal{F}$ and $(s')^*f_*\mathcal{F} = f_*s^*\mathcal{F}$, see Cohomology of Schemes, Lemma 30.5.2. Thus it makes sense to think of $f_*\alpha$ as a map $(t')^*f_*\mathcal{F} \to (s')^*f_*\mathcal{F}$. A similar argument shows that $f_*\alpha$ satisfies the cocycle condition. The functor is adjoint to the pullback functor since pullback and pushforward on modules on ringed spaces are adjoint. Some details omitted. $\square$

Lemma 39.14.5. Let $S$ be a scheme. Let $(U, R, s, t, c)$ be a groupoid scheme over $S$. The category of quasi-coherent modules on $(U, R, s, t, c)$ has colimits.

Proof.
Let $i \mapsto (\mathcal{F}_ i, \alpha _ i)$ be a diagram over the index category $\mathcal{I}$. We can form the colimit $\mathcal{F} = \mathop{\mathrm{colim}}\nolimits \mathcal{F}_ i$ which is a quasi-coherent sheaf on $U$, see Schemes, Section 26.24. Since colimits commute with pullback we see that $s^*\mathcal{F} = \mathop{\mathrm{colim}}\nolimits s^*\mathcal{F}_ i$ and similarly $t^*\mathcal{F} = \mathop{\mathrm{colim}}\nolimits t^*\mathcal{F}_ i$. Hence we can set $\alpha = \mathop{\mathrm{colim}}\nolimits \alpha _ i$. We omit the proof that $(\mathcal{F}, \alpha )$ is the colimit of the diagram in the category of quasi-coherent modules on $(U, R, s, t, c)$. $\square$

Lemma 39.14.6. Let $S$ be a scheme. Let $(U, R, s, t, c)$ be a groupoid scheme over $S$. If $s$, $t$ are flat, then the category of quasi-coherent modules on $(U, R, s, t, c)$ is abelian.

Proof. Let $\varphi : (\mathcal{F}, \alpha ) \to (\mathcal{G}, \beta )$ be a homomorphism of quasi-coherent modules on $(U, R, s, t, c)$. Since $s$ is flat we see that
$0 \to s^*\mathop{\mathrm{Ker}}(\varphi ) \to s^*\mathcal{F} \to s^*\mathcal{G} \to s^*\mathop{\mathrm{Coker}}(\varphi ) \to 0$
is exact and similarly for pullback by $t$. Hence $\alpha$ and $\beta$ induce isomorphisms $\kappa : t^*\mathop{\mathrm{Ker}}(\varphi ) \to s^*\mathop{\mathrm{Ker}}(\varphi )$ and $\lambda : t^*\mathop{\mathrm{Coker}}(\varphi ) \to s^*\mathop{\mathrm{Coker}}(\varphi )$ which satisfy the cocycle condition. Then it is straightforward to verify that $(\mathop{\mathrm{Ker}}(\varphi ), \kappa )$ and $(\mathop{\mathrm{Coker}}(\varphi ), \lambda )$ are a kernel and cokernel in the category of quasi-coherent modules on $(U, R, s, t, c)$. Moreover, the condition $\mathop{\mathrm{Coim}}(\varphi ) = \mathop{\mathrm{Im}}(\varphi )$ follows because it holds over $U$.
$\square$

Comment #1475 by Matthieu Romagny on typo: in the statement of Lemma 38.12.4 (tag 09VH), the pushforward goes to the category of qcoh sheaves on (U',R',s',t',c').
# TrickGCD

Time Limit: 5000/2500 MS (Java/Others) Memory Limit: 262144/262144 K (Java/Others)

## Description

You are given an array $A$, and Zhu wants to know how many different arrays $B$ satisfy the following conditions:

* $1\leq B_{i} \leq A_{i}$
* For each pair $(l, r)$ ($1 \leq l \leq r \leq n$), $\gcd(B_{l}, B_{l+1}, \ldots, B_{r}) \ge 2$

## Input

The first line is an integer $T$ ($1 \leq T \leq 10$), the number of test cases. Each test case begins with an integer $n$, the size of array $A$. Then a line contains $n$ numbers, the elements of $A$. You can assume that $1 \leq n, A_{i} \leq 10^5$.

## Output

For the $k$th test case, first output "Case #k: ", then output the answer as an integer on a single line. Because the answer may be large, you only need to output the answer $\bmod$ $10^9+7$.

## Sample Input

1
4
4 4 4 4

## Sample Output

Case #1: 17

liuyiding

## Source

2017 Multi-University Training Contest - Te
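Since every single element is itself a subarray, and the gcd of any subarray is a multiple of the gcd of the whole array, the condition collapses to $\gcd(B_1,\ldots,B_n)\ge 2$. A standard way to count such arrays (a sketch of the usual Möbius inclusion-exclusion approach, not an official solution) is: let $f(d)=\prod_i \lfloor A_i/d\rfloor$ count arrays whose elements are all divisible by $d$; then the answer is $-\sum_{d\ge 2}\mu(d)\,f(d)$, evaluated quickly by grouping the $A_i$ by the quotient $\lfloor A_i/d\rfloor$:

```python
MOD = 10**9 + 7

def trick_gcd(A):
    max_a = max(A)
    # cnt/pref: how many A_i lie in a value range, for fast f(d) evaluation
    cnt = [0] * (max_a + 2)
    for a in A:
        cnt[a] += 1
    pref = [0] * (max_a + 2)
    for v in range(1, max_a + 1):
        pref[v + 1] = pref[v] + cnt[v]

    # Linear sieve for the Mobius function
    mu = [0] * (max_a + 1)
    mu[1] = 1
    primes, composite = [], [False] * (max_a + 1)
    for i in range(2, max_a + 1):
        if not composite[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > max_a:
                break
            composite[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]

    ans = 0
    for d in range(2, max_a + 1):
        # pref[d] > 0 means some A_i < d, which forces f(d) = 0
        if mu[d] == 0 or pref[d] > 0:
            continue
        f, q = 1, 1
        while d * q <= max_a:
            hi = min(max_a, d * q + d - 1)
            c = pref[hi + 1] - pref[d * q]  # A_i with floor(A_i / d) == q
            if c:
                f = f * pow(q, c, MOD) % MOD
            q += 1
        ans = (ans - mu[d] * f) % MOD
    return ans % MOD

print(trick_gcd([4, 4, 4, 4]))  # the sample case: 17
```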
# Factoring polynomial with 4 terms

I can't figure this out. $$\frac{x-5}{x^3-3x^2+7x-5}$$ I tried factoring the denominator by grouping and got $$x^2(x-3)+1(7x-5)$$ which doesn't lead anywhere. I need to use partial fractions on this, so I can't proceed until the denominator is factored.

- Take $x=1$. ${}{}$ –  Git Gud Mar 3 '13 at 19:28

Let $p(x)=x^3-3x^2+7x-5$. From the rational root test you know that the only possible rational roots of $p(x)$ are $\pm1$ and $\pm5$. Since $p(1)=0$, you know that $1$ is a root and therefore that $x-1$ is a factor, so divide it out to get $p(x)=(x-1)(x^2-2x+5)$. The discriminant of $x^2-2x+5$ is $(-2)^2-4\cdot1\cdot5<0$, so the quadratic factor is irreducible over the real numbers.

Clearly (why? Inspection...), $\,x=1\,$ is a root, thus $$x^3-3x^2+7x-5=(x-1)(x^2-2x+5)$$
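A quick numeric check (mine, not from the thread) confirms the root found by the rational root test, the resulting factorization, and the negative discriminant:

```python
def p(x):
    # The original denominator
    return x**3 - 3 * x**2 + 7 * x - 5

def factored(x):
    # The factorization proposed in the answers
    return (x - 1) * (x**2 - 2 * x + 5)

assert p(1) == 0                                          # x = 1 is a root
assert all(p(x) == factored(x) for x in range(-10, 11))   # same polynomial
assert (-2) ** 2 - 4 * 1 * 5 < 0   # quadratic discriminant is negative
print("factorization checks out")
```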
# 2015 AMC 10A Problems

## Problem 1

What is the value of $(2^0-1+5^2-0)^{-1}\times5?$

## Problem 3

Ann made a 3-step staircase using 18 toothpicks. How many toothpicks does she need to add to complete a 5-step staircase?

(A) 9 (B) 18 (C) 20 (D) 22 (E) 24

## Problem 4

Pablo, Sofia, and Mia got some candy eggs at a party. Pablo had three times as many eggs as Sofia, and Sofia had twice as many eggs as Mia. Pablo decides to give some of his eggs to Sofia and Mia so that all three will have the same number of eggs. What fraction of his eggs should Pablo give to Sofia?

## Problem 6

The sum of two positive numbers is $5$ times their difference. What is the ratio of the larger number to the smaller number?

## Problem 10

How many rearrangements of $abcd$ are there in which no two adjacent letters are also adjacent letters in the alphabet? For example, no such rearrangements could include either $ab$ or $ba$.

$\textbf{(A)}\ 0\qquad\textbf{(B)}\ 1\qquad\textbf{(C)}\ 2\qquad\textbf{(D)}\ 3\qquad\textbf{(E)}\ 4$

## Problem 20

A rectangle has area $A$ $\text{cm}^2$ and perimeter $P$ $\text{cm}$, where $A$ and $P$ are positive integers. Which of the following numbers cannot equal $A+P$?

$\textbf{(A) }100\qquad\textbf{(B) }102\qquad\textbf{(C) }104\qquad\textbf{(D) }106\qquad\textbf{(E) }108$

## Problem 23

The zeros of the function $f(x)=x^2-ax+2a$ are integers. What is the sum of the possible values of $a$?

$\textbf{(A) }7\qquad\textbf{(B) }8\qquad\textbf{(C) }16\qquad\textbf{(D) }17\qquad\textbf{(E) }18$
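Problem 20 is easy to check exhaustively. For an $l \times w$ rectangle, $A + P = lw + 2(l+w) = (l+2)(w+2) - 4$, so a value is attainable exactly when adding 4 yields a product of two integers each at least 3. A brute-force sweep (my sketch, not an official solution) confirms that 102 is the only listed option that never occurs:

```python
# Enumerate A + P = l*w + 2*(l + w) over integer side lengths. Any dimension
# beyond ~110 already pushes A + P past 108, so a small bound suffices.
reachable = {l * w + 2 * (l + w) for l in range(1, 120) for w in range(1, 120)}

options = [100, 102, 104, 106, 108]
impossible = [v for v in options if v not in reachable]
print(impossible)  # [102]
```

This matches the factoring argument: $102 + 4 = 106 = 2 \cdot 53$ has no factorization into two factors both $\ge 3$.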
# A question about Goodstein's theorem

It is known that if Peano's Arithmetic (PA), which is a first-order theory, is consistent, then Goodstein's theorem is an example of a sentence of PA that can be neither proved nor disproved in PA. Is it known whether this undecidability of Goodstein's theorem continues to hold in Z2, the standard axiomatizable theory of Second Order Arithmetic, whose axioms were first presented by Hilbert and Bernays?

• Isn't it known that PA is consistent? ;) Oct 1, 2016 at 14:40

Yes, Goodstein's Theorem is provable in $Z_2$. Roughly speaking, this is because Goodstein's Theorem follows from the well-foundedness of $\epsilon_0$, and $Z_2$ can prove this. In fact, much less than $Z_2$ is needed: the theory $ATR_0$ is already more than enough.

EDIT: Thinking about Goodstein's theorem, and other similar results, leads naturally to statements of the form $$\text{If } \alpha \text{ is well-ordered, then } F(\alpha) \text{ is well-ordered}$$ for some operation $F$ on linear orders. For Goodstein, the relevant map is $\alpha\mapsto \epsilon_\alpha$; note that $\epsilon_\alpha$ makes sense as a linear order, even if we can't prove it's well-founded.

NOTE: I'm not saying here that Goodstein is equivalent to the statement "If $\alpha$ is well-ordered, then so is $\epsilon_\alpha$" - the latter statement is much more general. Rather, the latter statement is a natural extension of Goodstein's theorem, and captures (I would argue) the "combinatorial intuition" contained in the theorem, even if it goes well beyond what is actually strictly needed.

We can then ask, as you do above, what axioms are needed to prove this result; this is the subject of reverse mathematics. The reverse mathematics of such statements has recently been studied, e.g. by Montalban and Marcone in https://math.berkeley.edu/~antonio/papers/veblen.pdf. As it turns out, basically no operation $F$ which you can reasonably define, which sends ordinals to ordinals, requires us to go outside of $Z_2$.
(This is partly reflected in the fact that the proof-theoretic ordinals for fragments of $Z_2$ much stronger than $\Pi^1_2$-$CA_0$ are completely out of reach, currently.)
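To make the combinatorics concrete, here is a small sketch (my own illustration, not from the answer) that computes Goodstein sequences directly: write $m$ in hereditary base $b$, replace every occurrence of $b$ by $b+1$, subtract one, and repeat with the next base. Even the starting value 4 takes an astronomically long time to reach 0, which hints at why PA cannot prove termination in general:

```python
def bump_base(n, b):
    # Rewrite n in hereditary base b, then replace every occurrence of b
    # by b + 1 (exponents are rewritten recursively too).
    result, power = 0, 0
    while n:
        n, digit = divmod(n, b)
        if digit:
            result += digit * (b + 1) ** bump_base(power, b)
        power += 1
    return result

def goodstein(m, max_terms):
    seq, base = [m], 2
    while m > 0 and len(seq) < max_terms:
        m = bump_base(m, base) - 1
        base += 1
        seq.append(m)
    return seq

print(goodstein(3, 10))  # [3, 3, 3, 2, 1, 0] -- terminates quickly
print(goodstein(4, 6))   # [4, 26, 41, 60, 83, 109] -- won't reach 0 for a very, very long time
```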
# 1.7: Use the Order of Operations to Evaluate Powers

Difficulty Level: At Grade | Created by: CK-12

Do you know how to evaluate a variable expression when it includes powers? Take a look at this problem.

$-11^2 + 7y^2 + 3x - 19$ for $x=2, y=-1$

Evaluating dilemmas like this one is just what this Concept is all about. Pay attention and you will be able to work through it at the end of the Concept.

### Guidance

Did you know that you can apply the order of operations to expressions that have powers in them? Let's look at how to do this. To do this, we are going to need to refer back to the order of operations.

Order of Operations
P parentheses or grouping symbols
E exponents
MD multiplication and division in order from left to right
AS addition and subtraction in order from left to right

Now look at the E. That E refers to exponents and powers and evaluating exponents in the order of operations. You can see that you evaluate the powers right after the grouping symbols. It is a bit like working on a puzzle. Here is an expression that needs evaluating.

Evaluate the expression $8h^2 + [51 \div (4 \cdot 4.25)] - 5^2 \div 5$. Let $h=4$.

It does look complicated, but if it helps, think of this as a series of steps. The order of operations is your guide. If you follow the order of operations then working through a problem such as this one becomes much easier.

PEMDAS
Step 1: Substitute 4 for $h$.
$8(4)^2 + [51 \div (4 \cdot 4.25)] - 5^2 \div 5$
Step 2: Remember PEMDAS.
Therefore, perform the operation inside the grouping symbols first. Recall that the order of operations must be followed inside grouping symbols also. In this case, multiply $4 \times 4.25$ before dividing 51 by the result.
$8(4)^2 + [51 \div (4 \cdot 4.25)] - 5^2 \div 5$
$8(4)^2 + [51 \div 17] - 5^2 \div 5$
$8(4)^2 + 3 - 5^2 \div 5$
Step 3: The next step in order of operations is to simplify the numbers with exponents.
$8(4 \cdot 4) + 3 - 5 \cdot 5 \div 5$
$8(16) + 3 - 25 \div 5$
Step 4: Multiply.
$128 + 3 - 25 \div 5$
Step 5: Divide.
$128 + 3 - 5$
Step 6: Add.
$131 - 5$
Step 7: Subtract.
$131 - 5 = 126$

The answer is 126.

We can also evaluate variable expressions that have more than one variable. Notice that a different value has been given for $x$ and $y$. You simply substitute the given values into each expression and evaluate it for the quantity of the expression.

Evaluate the expression $4x^3 - (3y \div 9) + 12$. Let $x=3$ and $y=9$.

$4x^3 - (3y \div 9) + 12$ (Substitute the variables)
$4(3)^3 - [(3 \times 9) \div 9] + 12$ (Parentheses)
$4(3)^3 - [27 \div 9] + 12$
$4(3)^3 - 3 + 12$ (Exponents)
$4(3 \times 3 \times 3) - 3 + 12$
$4(27) - 3 + 12$ (Multiply)
$108 - 3 + 12$ (Add and subtract from left to right)
$105 + 12$
$117$

The answer is 117.

When you have variable and numerical expressions with powers in them, you can use the order of operations to evaluate the expressions. Remember not to get stuck if the problem seems complicated. Stick to the order of operations and you will be able to evaluate the expression.

#### Example A

Evaluate the expression $2^3 + 4y + 12$ for $y=3$.

Solution: $32$

#### Example B

Evaluate the expression $-5^3 + 7y - 30$ for $y=9$.

Solution: $-92$

#### Example C

Evaluate the expression $6x + 7y + 3^2$ for $x=4, y=6$.
Solution: $75$

Now let's go back to the dilemma from the beginning of the Concept.

Evaluate this expression: $-11^2 + 7y^2 + 3x - 19$ for $x=2, y=-1$.

First, let's substitute the given values into the expression for $x$ and $y$.

$-11^2 + 7(-1)^2 + 3(2) - 19$

Now we can evaluate the power terms, remembering that $-11^2$ means the opposite of $11^2$. Here is our answer so far.

$-121 + 7 + 6 - 19$

$-127$

This is our final answer.

### Vocabulary

Numerical Expression
a group of numbers and operations used to represent a quantity without an equals sign.

Variable Expression
a group of numbers, operations and variables used to represent a quantity without an equals sign.

Powers
the value of a base and an exponent.

Base
the regular sized number that the exponent works upon.

Exponent
the little number that tells you how many times to multiply the base by itself.

### Guided Practice

Here is one for you to try on your own.

Evaluate the expression $-12^3 + 7y^2 + 12$ for $y=6$.

Solution

Step 1: Before performing the order of operations, substitute 6 for $y$.
$-12^3 + 7(6)^2 + 12$
Step 2: Perform the calculations inside the parentheses.
$-12^3 + 7(36) + 12$
Step 3: Perform the calculations with exponents.
$-1{,}728 + 7(36) + 12$
Step 4: Multiply.
$-1{,}728 + 252 + 12$
Step 5: Add.
$-1{,}728 + 252 + 12 = -1{,}464$

The answer is -1,464.

### Practice

Directions: Evaluate each expression. Remember to follow the order of operations.

1. $3^2 + [(5 \cdot 2) - 3] - 8 \cdot 2$
2. $5^2 + (3 + 5) - 6^2 + 2$
3. $6^3 + 5^2 + 25$
4. $16(12^3)$
5.
$8^2 - (2(3^3) \div 2) + (16 \cdot 5)$

Directions: Evaluate each expression by substituting the given value into each expression. Remember to follow the order of operations.

6. $-2^3 + 7y + 1$ for $y=6$.
7. $-12 + 7x^2 - 8$ for $x=6$.
8. $14 + 7y^2 + 22$ for $y=3$.
9. $18x + 7y + 12$ for $x=3, y=6$.
10. $-6^3 + 7x^2 - 18$ for $x=5$.
11. $45 + 8y + 3^3$ for $y=5$.
12. $-3^3 + 8x - 2^2$ for $x=7$.
13. $-12^2 + 7y - 4^2$ for $y=6$.
14. $-4^3 + 9x + 11$ for $x=4$.
15. $-7^2 + 7x^2 + 12^2$ for $x=2$.
16. $-45 + 7^2 - x^3$ for $x=4$.

### Vocabulary Language: English

Base: When a value is raised to a power, the value is referred to as the base, and the power is called the exponent. In the expression $32^4$, 32 is the base, and 4 is the exponent.

Evaluate: To evaluate an expression or equation means to perform the included operations, commonly in order to find a specific value.

Exponent: Exponents are used to describe the number of times that a term is multiplied by itself.

Numerical expression: A numerical expression is a group of numbers and operations used to represent a quantity.

Variable Expression: A variable expression is a mathematical phrase that contains at least one variable or unknown quantity.
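The worked examples in this Concept can be checked directly in any language whose operator precedence matches PEMDAS. In Python, `**` binds tighter than unary minus, so `-5**3` means `-(5**3)`, which is the same convention the Concept uses for expressions like $-5^3$ (this check is illustrative, not part of the original lesson):

```python
# Example A: 2^3 + 4y + 12 with y = 3
y = 3
assert 2**3 + 4 * y + 12 == 32

# Example B: -5^3 + 7y - 30 with y = 9; note -5**3 == -(5**3) == -125
y = 9
assert -5**3 + 7 * y - 30 == -92

# Example C: 6x + 7y + 3^2 with x = 4, y = 6
x, y = 4, 6
assert 6 * x + 7 * y + 3**2 == 75

# Guided Practice: -12^3 + 7y^2 + 12 with y = 6
y = 6
assert -12**3 + 7 * y**2 + 12 == -1464
print("all examples check out")
```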
At Grade | Dec 19, 2012

## Last Modified: Aug 10, 2015

MAT.ALG.134.3.L.2
# 2010 University Entrance Exam, English, Group D - Exam code 461

Shared by: Nguyen Nhi | Date: | File type: PDF | Pages: 7

Description: Reference material for the 2010 university entrance exam in English, group D, exam code 461. This exam is intended for candidates preparing for the upcoming group-D university entrance examination. Good luck!

## Text content: 2010 University Entrance Exam, English, Group D - Exam code 461

Question 62: Bill: "Can I get you another drink?" Jerry: "______." A. Forget it B. No, I'll think it over C. No, it isn't D. Not just now
Question 63: "You can go to the party tonight ______ you are sober when you come home." A. as soon as B. as far as C. as long as D. as well as
Question 64: Laura had a blazing ______ with Eddie and stormed out of the house. A. gossip B. word C. row D. chat
Question 65: Is it true that this country produces more oil than ______? A. any other countries B. any another country C. any countries else D. any country else
Question 66: As the drug took ______, the boy became quieter. A. force B. effect C. action D. influence
Question 67: If everyone ______, how would we control the traffic? A. could fly B. flies C. can fly D. had flown
Question 68: Our industrial output ______ from $2 million in 2002 to $4 million this year. A. has risen B. rose C. rises D. was rising
Question 69: Mr. Black: "I'd like to try on these shoes, please." Salesgirl: "______" A. That's right, sir. B. By all means, sir. C. I'd love to. D. Why not?
Question 70: ______ he does sometimes annoys me very much. A. When B. Why C. What D. How

Read the following passage and mark the letter A, B, C, or D on your answer sheet to indicate the correct answer to each of the questions from 71 to 80.

It's often said that we learn things at the wrong time.
University students frequently do the minimum of work because they’re crazy about a good social life instead. Children often scream before their piano practice because it’s so boring. They have to be given gold stars and medals to be persuaded to swim, or have to be bribed to take exams. But the story is different when you’re older. Over the years, I’ve done my share of adult learning. At 30, I went to a college and did courses in History and English. It was an amazing experience. For starters, I was paying, so there was no reason to be late – I was the one frowning and drumming my fingers if the tutor was late, not the other way round. Indeed, if I could persuade him to linger for an extra five minutes, it was a bonus, not a nuisance. I wasn’t frightened to ask questions, and homework was a pleasure not a pain. When I passed an exam, I had passed it for me and me alone, not for my parents or my teachers. The satisfaction I got was entirely personal. Some people fear going back to school because they worry that their brains have got rusty. But the joy is that, although some parts have rusted up, your brain has learnt all kinds of other things since you were young. It has learnt to think independently and flexibly and is much better at relating one thing to another. What you lose in the rust department, you gain in the maturity department. In some ways, age is a positive plus. For instance, when you’re older, you get less frustrated. Experience has told you that, if you’re calm and simply do something carefully again and again, eventually you’ll get the hang of it. The confidence you have in other areas – from being able to drive a car, perhaps – means that if you can’t, say, build a chair instantly, you don’t, like a child, want to destroy your first pathetic attempts. Maturity tells you that you will, with application, eventually get there. I hated piano lessons at school, but I was good at music. 
And coming back to it, with a teacher who could explain why certain exercises were useful and with musical concepts that, at the age of ten, I could never grasp, was magical. Initially, I did feel a bit strange, thumping out a piece that I'd played for my school exams, with just as little comprehension of what the composer intended as I'd had all those years before. But soon, complex emotions that I never knew poured out from my fingers, and suddenly I could understand why practice makes perfect.
# How to bump someone else's old, unanswered question that's exactly the question I want to ask?

On the main site, there's an old, unanswered question that's exactly the same question I'd like to ask. Is there some way to bump it?

And what would meta be without a meta-conversation: I've read through the answers about bumping here on the meta, and I see only two options for me, not being the person who asked the original question: either I could edit the question (this seems undesirable) or mention it in the appropriate chat room. This second option seems fine, as far as it goes, but it seems like a wider audience could be reached if the question could be bumped, and the wider the better, considering that no one has answered it after months. I'm interested in anyone's thoughts.

• You could offer a bounty for the question being answered as well. Jul 20 '20 at 19:29
• Hey, I see that you already offered a bounty. For what it's worth, my thinking was "What the heck, I haven't offered many bounties. I could spend some rep on that if the question looks legit." It's a totally ad-hoc solution, but someone bothering to ask the proper way to do things on Meta stands out from the crowd, and it's the kind of behavior I'd like to encourage. Jul 21 '20 at 3:12
• Jul 21 '20 at 8:42
• @JonathanZsupportsMonicaC That's an excellent and generous idea! I'm really encouraged by all the support here; your offer is like the cherry on the sundae. Many thanks to all of you! Jul 23 '20 at 14:35
• @JonathanZsupportsMonicaC Hey, if you're still up for something ad-hoc, I wonder if you might spend some rep on my question? The answers on the bountied question were over my head, so I finally posted one with specifics on what I'm trying to do. I've gotten only one answer so far, and it seemed promising, but I can't figure it out. I could learn to live with the function I have, and I feel a little weird asking, but now I'm curious whether there's even a way to do what I'm trying to do.
Cheers Jul 24 '20 at 19:56
• @SaganRitual: It looks like a question has to be open for at least two days before one can put a bounty on it. I've added your new question to my bookmarks, but you can comment back to me here if I don't get around to putting the bounty on by Monday. I imagine we should properly take this conversation to the Pearl Dive chat room, but I'll confess it's not really my practice to visit there. Jul 24 '20 at 21:43
• I'll also add that questions like yours are ones I really like: born out of a curiosity to see how to explain (or at least simulate) our observed world, and the math, while it fits the technical definition of "elementary", is still complicated enough to be interesting. Jul 24 '20 at 21:46
• Or you can reference the question on meta, which seems to have been very successful! (Just kidding, this isn't something that should be done in general, obviously.) Jul 25 '20 at 11:26
• In my experience, bounties never helped. I would suggest simply posting the answer again to draw attention to it if you don't have enough rep for a bounty/want to save it for other questions. Aug 3 '20 at 13:57

This is exactly what bounties are for. You have just enough reputation (75 or more) to post one. The Help Center article begins by talking about your own question, but posting a bounty on somebody else's question works just as well.

A bounty is a special reputation award given to answers. It is funded by the personal reputation of the user who offers it, and is non-refundable. If you see a question that has not gotten a satisfactory answer, a bounty may help attract more attention and more answers. Slice off anywhere from +50 to +500 of your own hard-earned reputation, and attach it to any question as a bounty. **You do not need to be the asker of the question to offer a bounty on it.**

(emphasis mine)

• Thanks. I was thinking about a bounty, but I don't ever answer any questions on the math site, not being a strong math person.
I don't know how I could ever recover the points. As it is, I worry that having offered a bounty once before makes me look like a problem child, having such a low reputation. I wish I could transfer some of my rep from the programming site. Cheers Jul 20 '20 at 19:35 • It seems you've already offered a bounty before: math.stackexchange.com/users/182584/… thanks to the association bonus. You could help improving the grammar/formatting of existing posts in the form of suggested edits. Or ask a good novel question. Jul 20 '20 at 19:37 • @SaganRitual Answering the MSE question bountied by you, I noticed a tab to this meta question and suggested that it is yours. Since there are a lot of questions at MSE and my possibilities are limited, I regularly look for new questions in my specialities and also for bounty questions, [not because I’m greedy for a reputation] (math.stackexchange.com/questions/2387879/…), but rather to find a question for which somebody really needs an answer. Jul 21 '20 at 8:20 • Concerning the particular question, I already found a general (polynomial) construction for $f$ and now I’m evaluating a range of $c$ which is provided by the construction. Jul 21 '20 at 8:20 You can try the pearl dive chat room. See also this post in meta: Launching *Pearl Dive* - a chatroom where excellent questions/answers meet willing sponsors • Upvoting the suggestion. It is not quite the type I had in mind, but, sure, the question is sufficiently non-standard to warrant a bit of extra attention! In other words, I like to think I would have coughed up the rep. Jul 21 '20 at 6:11
## Recommended Posts

I've searched around but nothing quite seems to deal with this in the way I need it. Let's say I have

enum A { A0 = 0, A1 = 1 };

I want to make it so I can bitwise-OR them, like

A aVar1 = A0;
A aVar2 = A1;
aVar1 |= aVar2;

Is that possible? If so, how would I do it? Almost all examples seem to deal with classes and adding their variables.

##### Share on other sites

In what language?

##### Share on other sites

That doesn't look like C++, but in C++ you can do

enum A {
    A0 = 1 << 0,
    A1 = 1 << 1,
    A2 = 1 << 2,
    A3 = 1 << 3,
    // .. etc
};

##### Share on other sites

You can overload operators for enums in C++, if that's what you want. Take a look here: HTH, tiv

##### Share on other sites

If this is C++, you'll run into trouble because two enums or-ed together are an integer, but not an enum. So... without nasty code (involving reinterpret_cast in overloaded operators) you'll probably not be able to do it.

##### Share on other sites

Quote: Original post by SiCrane: In what language?

Oh crap, sorry, it's C++.

Quote: (the examples above)

Cool... I think that about shows what I want. Thanks.

Quote: Original post by samoth: If this is C++, you'll run into trouble because two enums or-ed together are an integer, but not an enum. So... without nasty code (involving reinterpret_cast in overloaded operators) you'll probably not be able to do it.

Yeah... I was kinda wondering about that. So it's quite complicated to cast them back into an enum? Probably won't have to in my case, though, I think. Being able to assign to an enum variable the result of OR-ing the integer values of two other enums should be enough. I've actually implemented a way of doing this already with a load of brackets and (int) casts etc., but overloading seemed a much neater way of doing it.
Something like:

    class A {
        int value;
        A(int value) : value(value) {}
    public:
        bool operator==(const A& other) const { return value == other.value; }
        A operator|(const A& other) const { return A(value | other.value); }
        A operator&(const A& other) const { return A(value & other.value); }
        // The "safe bool" idiom doesn't work here because we *do* have a meaningful
        // comparison for equality (although maybe we want to ensure operator< etc.
        // don't compile). So I've just done the simplest thing.
        operator bool() const { return value != 0; }
        static A foo, bar;
    };
    A A::foo(1 << 0);
    A A::bar(1 << 1);
    // ...
    bool some_func(const A& flags) {
        return flags & A::foo;  // use & to test a flag; OR-ing would always be true
    }
    // ...
    some_func(A::foo | A::bar);

##### Share on other sites

You can overload operators for enums in C++ too, but I somehow doubt this is what you want.

    #include <iostream>

    enum Bits { left = 1 << 0, right = 1 << 1, both = left | right };

    const Bits& operator|=(Bits& lhv, Bits rhv)
    {
        lhv = static_cast<Bits>(lhv | rhv);
        return lhv;
    }

    int main()
    {
        Bits a = left, b = right;
        a |= b;
        std::cout << a << '\n';
    }

This assumes that you give a name to all possible values that the OR operation can produce. If you just want bits to represent flags, you'd make an enum defining names for 1, 2, 4, 8, ..., but then use something like an unsigned to represent combinations of these flags.

##### Share on other sites

Quote: Original post by visitor: If you just want bits to represent flags, you'd make an enum defining names for 1, 2, 4, 8, ..., but then use something like an unsigned to represent combinations of these flags.

Hmm... I hadn't tried that. I assumed OR-ing of the enum just wouldn't work, given the errors I was getting. Just using an unsigned int seems to have solved the problem perfectly. Thanks, all.
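A note for later readers, not part of the original thread: in C++11 and later, a common way to get type-safe bit flags is a scoped enum whose bitwise operators are overloaded to cast through the underlying type. The enum name and values below are illustrative, not from the posts above.

```cpp
#include <cassert>
#include <type_traits>

// Hypothetical flag set; any names here are ours, not the thread's.
enum class Flags : unsigned { None = 0, A0 = 1u << 0, A1 = 1u << 1 };

// Cast through the underlying type so the result is again a Flags.
constexpr Flags operator|(Flags l, Flags r) {
    using U = std::underlying_type_t<Flags>;
    return static_cast<Flags>(static_cast<U>(l) | static_cast<U>(r));
}
constexpr Flags operator&(Flags l, Flags r) {
    using U = std::underlying_type_t<Flags>;
    return static_cast<Flags>(static_cast<U>(l) & static_cast<U>(r));
}
inline Flags& operator|=(Flags& l, Flags r) { return l = l | r; }

// True if any of the bits in `bit` are set in `value`.
constexpr bool has_flag(Flags value, Flags bit) {
    return (value & bit) != Flags::None;
}
```

With this, `aVar1 |= aVar2;` from the original question compiles and stays a `Flags`, and no `reinterpret_cast` is needed, only `static_cast`.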
# Hidden Group Structure

by Ruth I. Berger (Luther College)

Mathematics Magazine, February 2005

Subject classification(s): Algebra and Number Theory | Abstract Algebra | Groups
Applicable Course(s): 4.2 Mod Algebra I & II | 4.3 Number Theory

Certain subsets of the ring of integers mod $n$ with hidden group structure are discussed. A PDF copy of the article can be viewed by clicking below. Since the copy is a faithful reproduction of the actual journal pages, the article may not begin at the top of the first page.
### Hilbert C*-modules

An increasingly prominent tool in operator theory is the Hilbert C*-module, which is (loosely speaking) a Hilbert space where the inner product takes values in a C*-algebra. The next level of generalization is that of Hilbert modules over locally C*-algebras (we briefly mentioned locally C*-algebras in this post), and much of the following theory extends to this setting as well. Here I give the definition of a Hilbert C*-module and collect some of its properties, mostly as a reference for personal use. I will likely update this post with new material later on, hopefully without making it too bloated. The theory is now well developed in the literature, so the proofs will be kept to a bare minimum. For references I will mostly use [1] and [2].

### Constructing new C*-algebras – Universal C*-algebras (1)

Universal C*-algebras are C*-algebras defined implicitly by generators and relations, much like group presentations. Contrary to the case of groups, where the presentation can be constructed as a quotient of the free group, there is no analogous construction for C*-algebras, and we are not always guaranteed the existence of a universal C*-algebra for a given presentation. This series of posts introduces the notion of universal C*-algebras. In this post I introduce the general construction of a universal C*-algebra, and a simple method for computing the universal C*-algebra of a family of unitaries and (sufficiently nice) relations. In subsequent posts I hope to cover more general constructions with possibly non-unitary operators and more subtle relations.

### Constructing new C*-algebras – Crossed product

In this post I want to cover the concept of a crossed product C*-algebra. These algebras arise naturally as C*-algebras associated with dynamical systems and (as the name suggests) have many similarities with the semidirect product of groups. This is a wide area of active research.
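Returning to the first post above: for quick reference, the usual axioms of a (right) Hilbert C*-module over a C*-algebra $\mathcal{A}$, stated here from memory (the precise formulation is in the references [1] and [2] mentioned above). It is a right $\mathcal{A}$-module $E$ with an $\mathcal{A}$-valued inner product $\langle\cdot,\cdot\rangle : E \times E \to \mathcal{A}$ such that, for $x, y, z \in E$ and $a \in \mathcal{A}$:

```latex
\begin{aligned}
  &\langle x,\, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle,
   \qquad \langle x,\, y\,a \rangle = \langle x, y \rangle\, a,\\
  &\langle y, x \rangle = \langle x, y \rangle^{*},
   \qquad \langle x, x \rangle \ge 0, \qquad
   \langle x, x \rangle = 0 \iff x = 0,
\end{aligned}
```

and $E$ is complete in the norm $\lVert x \rVert = \lVert \langle x, x \rangle \rVert^{1/2}$. Taking $\mathcal{A} = \mathbb{C}$ recovers the ordinary Hilbert-space axioms.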
To simplify things, unless stated otherwise, $\mathcal{A}$ will denote a unital C*-algebra, $G$ a discrete group, and $H$ a complex Hilbert space. I will give references for the more general definitions, where the C*-algebra is non-unital or the group is only assumed to be locally compact, as these would require multiplier algebras and Haar measures/integrals, making the exposition less approachable and the notation a lot less readable. Continue reading "Constructing new C*-algebras – Crossed product."

### Constructing new C*-algebras – Injective and Projective limits

The injective (or direct) limit of C*-algebras is one way to construct new C*-algebras from a directed system of C*-algebras (defined below), and it is an essential tool in operator theory, so one may as well get acquainted with it. The projective (or inverse) limit is not as common, it seems, but I will add it here for completeness. In this post I will try to define the construction via the universal property of colimits in the category of C*-algebras, while reducing the prerequisites from category theory to a bare minimum. The point is to highlight that the similarities between direct limits of groups, rings, algebras, etc. stem from the fact that they all solve the same universal problem in their respective categories, and to justify why some of these limits/colimits are preserved under certain transformations. Though the similarities are evident, this is understandably (but also unfortunately) often not addressed in the classical references of operator theory, as a formal treatment of limits/colimits would be a significant digression. Continue reading "Constructing new C*-algebras – Injective and Projective limits"

### A tour of functional analysis 1 – Locally convex vector spaces and the Hahn-Banach theorem(s)

One of the pillars of functional analysis is the Hahn-Banach theorem, so it makes sense to dedicate a post to it.
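Since the post centers on it, it may help to recall the basic analytic form of the theorem up front (real scalars; stated from memory as a reminder, not as a quotation from the post):

```latex
\textbf{Theorem (Hahn--Banach, dominated extension).}
Let $X$ be a real vector space and $p : X \to \mathbb{R}$ sublinear, i.e.
\[
  p(x + y) \le p(x) + p(y), \qquad p(t x) = t\,p(x) \quad (t \ge 0).
\]
If $M \subseteq X$ is a subspace and $f : M \to \mathbb{R}$ is linear with
$f \le p$ on $M$, then there exists a linear $F : X \to \mathbb{R}$ with
\[
  F\big|_{M} = f \qquad \text{and} \qquad F \le p \ \text{on } X.
\]
```

The corollaries on normed spaces mentioned below (norm-preserving extension of continuous functionals, separation of convex sets) are obtained by specializing the dominating function $p$.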
On normed spaces, the theorem has a plethora of interesting corollaries, some of which will be stated here. Locally convex spaces are of interest since they are the most rudimentary topological vector spaces on which the Hahn-Banach theorem can be used to extend continuous linear functionals, and they encompass a sizable chunk of the topological vector spaces one might meet in the wild.

### Topological Complements

#### – Introduction –

The first steps outside the comforts of the category of Hilbert spaces, the safe space of functional analysis, into the unruly world of topological vector spaces can be a troubling experience for any student, myself included. To ease the passage, here are a few tips and results regarding the existence of complementary subspaces in the general setting of topological vector spaces. For Hilbert spaces it is known that every closed subspace has a preferred (topologically) complementary subspace, namely the orthogonal complement; moreover, any two (algebraically) complementary closed subspaces are automatically (topologically) complementary (by Theorem 1). Lastly, the theorem of Misiurewicz, Przytycki and Gromov, which relates the entropy of a holomorphic map on the $n$-sphere to its degree, was proven.
# UAI 2022 - Instructions for the OpenReview camera-ready submission

To find what materials to upload for your presentation (e.g., slides, a poster PDF, or a video recording), please see here. The instructions below are only for the deliverables that must be submitted through OpenReview. All deliverables are due on June 17th, 2022, 23:59 AoE.

The bundle with all relevant files is here: https://www.overleaf.com/read/ygrrvwwrzhvp (or, if you prefer a zip file: https://drive.google.com/file/d/1469ndo-PukH6yW3irMLzrztwkp0SNgxw/view?usp=sharing).

Make sure to revise your paper based on the feedback you received. In several cases, the meta-reviewer left comments requesting specific changes for the final version of the paper. We also encourage you to take advantage of the comments by the reviewers to improve your paper and maximize its impact. Please note that you can use 9 pages for the camera-ready version. If you can get a third party to read your paper, that can always help to improve clarity and find missed typos.

We will ask you to follow our naming convention and directory structure for the supplementary material, so we can process your submissions automatically. In particular, your submission will use the following identifier in various places: $filename = (first author's last name(s))_(paper number). Here you should replace the parentheticals with the correct values, all lower case; note that in the last name(s), spaces and special characters become dashes ('-') and accented characters lose their accents. Example of $filename for first author Jane Kay von O'López with accepted OpenReview paper number 969: von-o-lopez_969; example for first author Jay Jay (JJ) Smith with accepted paper number 987: smith_987. Every occurrence of $filename in the following instructions should be replaced by your identifier. The $filename variable must be created using the last name(s) of the first author of the paper, even if a different author submits the files.
## Preparing the final version of your paper

• Download the new class file uai2022.cls and the rest of the instructions from https://www.overleaf.com/read/ygrrvwwrzhvp.
• Make sure you use a single main tex file for the main paper, which follows the naming convention $filename.tex, e.g. smith_987.tex.
• At the top of your document preamble, make sure that you use \documentclass[accepted]{uai2022} as indicated in the template. This should also make the authors visible in the final version.
• Important: do not use any layout trick, e.g. \vspace, \small or \pagebreak.
• Optional: fill in the contributions and acknowledgements environments at the end of your paper.
• Respect the page limits for final papers: up to 9 pages of content (references are also possible here) and up to 2 more pages containing only references.
• Supplementary material must not be included in the final paper, but submitted as a single zip file $filename-supp.zip, e.g. smith_987-supp.zip.
• In case you have textual supplementary material, please create a single separate document using the same template, called $filename-supp.tex, e.g. smith_987-supp.tex, keeping the front matter (authors, title) but adding '(Supplementary material)' at the end of the title. For instance, you can use the zref-xr package to create consistent references between the paper and the supplementary material. Optionally, the supplementary material can use the single-column format by means of the tag \onecolumn. If you want the whole supplementary material to be single-column, place \onecolumn before \maketitle.
• You are strongly encouraged to provide code and data as separately citable material and not as supplementary material, e.g., by uploading them to Zenodo or GitHub. This is particularly true if you have large files.
• We will be using TeXlive 2021 to recompile your submission, so please make sure that the LaTeX and BibTeX files compile without errors on that system (or without warnings that would cause a change in output when fixed). You can check, e.g., on Overleaf.

## Preparing the submission materials

• On OpenReview, use your author account to update your submission. We expect you to upload two files: (1) the compiled PDF of the final version of your paper, $filename.pdf, following the naming convention, and (2) a zip file $filename.zip whose contents are described below.
• The zip file $filename.zip must at the top level contain only a single directory named $filename and nothing else (check for hidden files, please, and remove them). JJ would submit a zip file named smith_987.zip with a directory named smith_987 inside it. You can check by extracting the contents into a different folder to see if the extraction generates a single folder with all content inside it.
• The directory $filename must contain the following:
  • A directory named latex with the LaTeX sources of your paper, including all files from the successful compilation run that generated your final PDF ($filename.tex, $filename.bib, $filename.aux, $filename.bbl, $filename.pdf, $filename.log, etc.). If you have a supplementary LaTeX-generated PDF file, its source files and all files from a successful compilation run must be included here too ($filename-supp.tex, $filename-supp.aux, $filename-supp.bbl, $filename-supp.pdf, $filename-supp.log, etc.). For example, JJ should use the files smith_987.tex, smith_987.aux, and so on, as well as smith_987-supp.tex, smith_987-supp.aux, etc. for their supplementary material, and place all of them inside the folder named latex that is inside the folder smith_987.
  • The supplementary file $filename-supp.zip. JJ has many small images that they want to submit as supplementary material, so they can use the file smith_987-supp.zip for that.
This file smith_987-supp.zip should be placed inside the folder smith_987 that JJ has created (and outside any other subfolders).

Please adhere strictly to the above guidelines, as we will process the submissions in an automated fashion, and imprecisions may hinder the publication process. In the end, our running example first author JJ would submit smith_987.pdf and smith_987.zip to OpenReview. To summarize, inside the file smith_987.zip there would be:

• smith_987/latex (a folder with all sources for generating the paper and the textual supplementary material).
• smith_987/smith_987-supp.zip (the zipped additional supplementary material files).

## Checklist for UAI2022 submissions

1. Are you using \documentclass[accepted]{uai2022} as indicated in the template?
2. Are your files named properly, such as smith_987.pdf if your last name is Smith and your paper number is 987?
3. In the final version of the paper, is your first name/given name coming first and your last name/surname coming last in the authors list?
4. Does your paper meet the page limit: up to 9 pages of content (references are also possible here) and up to 2 more pages containing only references?
5. Are your title, author list, and abstract the same as those in your authorship bib file?
6. Is supplementary material included as a separate file?
7. (Optional) Are you filling in the contributions and acknowledgements environments at the end of your paper (see template) instead of using footnotes for this purpose?
8. Are you using one "big" tex file instead of multiple tex files?
9. Are you avoiding squeezing text, e.g., not using the commands \vspace and \small?
10. Are you avoiding the pagesel package?
11. Are you avoiding the command \pagebreak?
12. Did you succeed in compiling your submission using TeXlive 2020 or 2021?
13. Important: did you submit the copyright form and authorship bib file through the Google Form?
Please direct any questions concerning these CRC instructions to uai2022chairs+publication@gmail.com.
# [NTG-context] Formatting TOC

robheus robheus at xs4all.nl
Mon Aug 3 11:22:03 CEST 2015

Hello list,

I need some adjustments to my TOC and chapter titles.

1. The title and the title of every chapter need to be indented (leaving enough room
2. The title of a part does not need to be indented, but uses two consecutive lines, preceded and followed by a blank line, as follows:

   DEEL <part number>
   <part title>

3. Chapter entries are on consecutive lines, except that between the first and second, and before the last chapter entry, there is a blank line. (I use "\setuplist[chapter][before=]" and tried to use "\writebetweenlist[content]{\blank}" at the specified locations to get blank lines there, but this didn't work.)
4. The part number needs to be in capital roman numerals, and the titles of chapters are preceded by the capital roman part number. (The option "conversion=Romannumerals" in "\setuphead[part]" doesn't seem to work.)
5. There are 2 parts: part 1 with 5 chapters, part 2 with 4 chapters. Chapter numbering of part 1 only starts with 0 instead of 1. (I currently use the "ownnumber" option of \startchapter for all chapters of part 1, but is there a simpler or more elegant option for this, such as setting the chapter number to 0 directly after part 1 starts?)

Greetings,
Rob

The code I use now is below:

    \setupheadtext[content=INHOUDSOPGAVE] % heading above the table of contents
    \definecombinedlist[content][part,chapter] % which structural elements appear in the TOC
    \setupcombinedlist[content][level=2, alternative=b]
Translate shapes

Problem: Draw the image of quadrilateral $ABCD$ under a translation by 5 units to the right.
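As a worked example (the problem itself gives no coordinates, so the sample vertices below are assumed): a translation by 5 units to the right adds 5 to each $x$-coordinate and leaves each $y$-coordinate unchanged.

```latex
(x, y) \;\mapsto\; (x + 5,\, y):
\qquad A(-6, 1) \mapsto A'(-1, 1), \quad
B(-3, 4) \mapsto B'(2, 4), \quad
C(-1, 2) \mapsto C'(4, 2), \quad
D(-4, 0) \mapsto D'(1, 0).
```

Connecting $A'B'C'D'$ gives a quadrilateral congruent to the original, shifted 5 units to the right.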
It is also known by many other names, such as "The Golden Ratio", "The Golden Section", and "The Divine Ratio", to name a few. Begin with 0. In this measurement, the length and width of a person are measured, and then the length is divided by the width. That's our goal here. Thanks ………..

This can be used to explore the factors that impact our perceptions of beauty. No you don't. For understanding it you need to see the universal formula at http://theuniversalmatrix.com. First, before all, was thought. Note that the PhiMatrix grid always shows the golden ratio point of any dimension. Remember the God of Baruch de Spinoza, and you will not be very far from Einstein's footsteps. Research Georg Cantor. Awesome!!! I like the interpretation following the theory of Morphic Resonance by Dr. Rupert Sheldrake, Ph.D.

Since each layer has its specific size, frequency and vibration, we can see how the natural forces apply different functions and dynamics to organisms, like the different sizes and functions of the different fingers in a hand. Eventually we get the set Φ = {1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, …, ∞}. If we divide the last two numbers in the set, like 233/144 = 1.6180555…, this number, or ratio, "approaches" φ = 1.6180339887…; I'll explain what I mean by "approaches." My book, available at https://bit.ly/goldenratiobook, has a more detailed discussion of your question.

Hell, I've been trying to tell the Administrator of this very website that his belief system is just as ensnared with the Golden Mean as my friend's Organized Religion, as they all are. A few thoughts to add to this. But what exactly is it? It would seem these numbers are especially doctored to fit the phi number. The 'Da Vinci Code' does wonders!!! The golden ratio is the irrational number phi. It is often represented by the Greek letter phi, φ (\varphi) or ϕ (\phi). To find the golden ratios of any number, just multiply it by 1.618 and divide it by 1.618.
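The last sentence can be checked numerically. This is an illustrative sketch (the function names are ours, not the article's): multiplying by phi gives the larger golden section of a number, and dividing by phi gives the smaller one.

```cpp
#include <cassert>
#include <cmath>

// phi = (1 + sqrt(5)) / 2 = 1.6180339887...
const double PHI = (1.0 + std::sqrt(5.0)) / 2.0;

// The two "golden sections" of a number, per the text:
// multiply by phi for the larger, divide by phi for the smaller.
double golden_larger(double x)  { return x * PHI; }
double golden_smaller(double x) { return x / PHI; }
```

For example, golden_larger(100) is about 161.8 and golden_smaller(100) is about 61.8. Since 1/phi = phi − 1, the two sections of 100 differ by exactly 100.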
Related: The 11 Most Beautiful Mathematical Equations.

For instance, phi enthusiasts often mention that certain measurements of the Great Pyramid of Giza, such as the length of its base and/or its height, are in the golden ratio. Some uses, such as the golden ratio, require the old-fashioned 'closed' glyph, which is separately encoded as the Unicode character U+03D5 ϕ GREEK PHI SYMBOL. There may be no more powerful ratio found in nature than the golden ratio, or phi. Every time, everywhere, this force is present, and when Nature organizes matter into systems, if the system has the tendency to self-recycle, it is this force that does the job. It is commonly believed that waves make up the sub-atomic particles. Despite modern interpretations, I believe we should be striving for accuracy, not convenience. I know you're very sincere in your good appeal, but your arguments are not well grounded in logic or knowledge of the spiritual side of the human experience.

Carwow, best-looking beautiful cars and the golden ratio. https://www.quantumology.net/blog/fibonacci-and-phi. "If you move the left half of the circle up, touching on the line where the right half of the circle touches, you get…" a strand that looks strikingly similar to the nucleic acid double helix. 25 November 2019. Your description is actually better than I think you realise. Perhaps if this were better understood, things like the Chinese genocide of Tibetans would not be occurring. It is a poetic expression of symbol. So, when I said in the post above "the force", I am saying "the function" (sorry, my mistake). It's not limited to the Earth. Phi is not just a number or ratio that is expressed in digits.

If we live in a program, how come we feel love and the best AI we have cannot? YOU say we can't imagine a Higher Being. The Da Vinci Code by Dan Brown.
Bilateral symmetry is the secret of beauty in Nature, which is merely an effect of a function, or a natural force, that has been building systems since the Big Bang. I am the author of the natural systems diagram/formula, a pattern that I saw (for the first time in the Amazon jungle) flowing as the circuit of energy/information that runs inside systems.

You might hear it referred to as the Golden Section, Golden Proportion, Golden Mean, phi ratio, Sacred Cut, or Divine Proportion. If I use a golden ratio on a design on my computer screen with pixel-level accuracy, it's still a golden ratio in concept and in application, and the exact golden ratio will fall within the golden ratio dividing line that I have created. As with other Greek letters, lowercase phi is used as a mathematical or scientific symbol. The amount of acceptable variance could depend on a variety of factors. You can round that to 49 and 19 if needed. No. It wasn't until the 1800s that American mathematician Mark Barr used the Greek letter Φ (phi) to represent this number.

How can you know and imagine a god when you can't even understand what's in front of you? (Ursa Major) is top-right. About Phi: I discovered it in the formula when measuring the scales of the circuit; there was a systemic function located at a point that resulted in 1.618… Studying the function and grasping all its traits, I arrived at the Phi definition above. Apply that to the Egyptian hieroglyphics and see what you come up with; this is the truth as it was written in the Bible. Thank you for your comment.

Two different infinities are equal in size. Pacioli used drawings made by Leonardo da Vinci that incorporated phi, and it is possible that da Vinci was the first to call it the "sectio aurea" (Latin for the "golden section"). Arrgh. Go to YouTube and look up "The Revelation of the Pyramids".
Adding them: 1 + 1 = 2. This gives us our next sum; so far we just have the set {1, 2}, a subset of Φ (top of page), so we're getting there…

We keep adding the last two numbers: 1+1=2, 1+2=3, 2+3=5, 3+5=8, 5+8=13, and we keep the sums in a set = {1, 2, 3, 5, 8, 13}, another subset of Φ.

And we are taught this is all random, the result of an explosion. Come on, man! I had no idea that Phi and phi were different. I'd like to figure out how to incorporate this into gambling so I can reclaim all my losses at the casino. The way is for those who see Him unveiled, as well as those who see Him veiled but believe it to be a mediator, as He came both to give life, and life more abundantly, by all, three measures of grace. If they aren't rendered correctly, this may prove to be a pretty confusing post.) If you click on the Mathematics link on this page, it will take you to the information you mentioned. This would include all infinities, also. So, each next layer will be the size of the previous one, which is the unit 1, plus 0.618. Look up the golden ratio of the Muslim Kaaba or the Quran. The golden ratio is found in many natural settings—we have all seen it in sunflowers, snail shells, pinecones and the veins of certain leaves. Who helps grab the shape, constitution, etc. of the last wave and project it ahead into empty space? Are some infinities actually larger than others?

= 0. If 0, then infinite; if infinite, then all numbers. If numbers, then separation or difference; if separation, then equations. If numbers, then lines = I; if pi, then circles = O; circle and line = Φ. It should be noted also that if you move the left half of the circle up, touching on the line where the right half of the circle touches, you get a wave. Buddhism, for example. I have just recently started researching numbers again, as I had a thought years ago before I knew any of this. Also, is there anything I can make a scaled model of to show? It was definitely fate for him to receive this name.
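The "keep adding the last two numbers" walkthrough above can be sketched in a few lines of code. This is an illustrative snippet, not from the original article; the function name is ours.

```cpp
#include <cassert>
#include <cmath>

// Ratio of the last two Fibonacci numbers after n additions,
// starting from 1, 1 and repeatedly adding the last two terms
// exactly as in the walkthrough: 1, 1, 2, 3, 5, 8, 13, ...
double fib_ratio(int n) {
    double a = 1.0, b = 1.0;
    for (int i = 0; i < n; ++i) {
        double next = a + b;  // "we keep adding the last two numbers"
        a = b;
        b = next;
    }
    return b / a;
}
```

After 30 additions the ratio agrees with φ = (1 + √5)/2 to well past nine decimal places, which is the sense in which the sequence "approaches" phi.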
If the ratio between these two portions is the same as the ratio between the overall stick and the larger segment, the portions are said to be in the golden ratio.

Dedicated to sharing the best information, research and user contributions on the Golden Ratio/Mean/Section, Divine Proportion, Fibonacci Sequence and Phi, 1.618. This should be part of the basics. This function produces bilateral symmetry when reproducing the left face into a right face, the left strand of RNA into a right strand creating DNA, etc.

But as Markowsky pointed out in his 1992 paper in the College Mathematics Journal, titled "Misconceptions About the Golden Ratio": "measurements of real objects can only be approximations." The golden ratio (its symbol is the Greek letter "phi," shown at left) is a special number approximately equal to 1.618. It appears many times in geometry, art, architecture and other areas. It's important first that we hear and see. On the other extreme, some will tell you that nothing can be the golden ratio because it has an infinite number of digits. Which choice strikes you as the most rational? 67 years old and learning things every day. Whether you wish it or not, there is a difference, so please can we begin to aim for correctness? But then again, I do have a wild imagination. As evidenced by the other names for the number, such as the divine proportion and golden section, many wondrous properties have been attributed to phi. Unfortunately, people can get overenthusiastic about something new, but most get the accurate picture at some point. It can predict feelings based on linear optimizations AFTER training, because we, humans, tell it it's love… but even babies know more of such things than programs ever will. It happens, for example, when the DNA separates into two strands and each strand replicates itself, resulting in two DNAs.

The "force," if not one of the recognized forces like the atomic, magnetic or gravitational ones, must have been something else, perhaps a life force or even a creative force that we have yet to be able to measure scientifically. I also thought about the irony that he was also born in March 2014: 3/14, pi. It's just a question of how much accuracy I need for a particular purpose. Another factor would be whether the observations were in something where the golden ratio might reasonably be expected to appear, such as in biological systems or in the works of an artist who was known to use the golden ratio. Unlike pi, which is a transcendental number, phi is the solution to a quadratic equation.

The Golden Ratio – The Divine Proportion. Phi is simply the most amazing thing I have ever come across in our worldly knowledge. I thought it was obvious to everyone by now that, living in a reality that is so easily described using math and numbers, we are almost certainly living in a computer simulation. See more ideas about the golden ratio, sacred geometry, and Fibonacci. I've even come across f = 1.618… on http://www.numericana.com. The number Phi is the mathematical representation of the natural force responsible for all reproductive processes and events in Nature, from star systems to human bodies. Cheers… (Note: the following uses high-end UTF-8 characters for different forms of the Greek letter "phi".) A simple yet complex subject matter, not an idea; it works or it doesn't. See the articles in the Cosmology section of this site at https://www.goldennumber.net/category/cosmology/. But as Devlin points out on his website, "the nautilus does grow its shell in a fashion that follows a logarithmic spiral, i.e., a spiral that turns by a constant angle along its entire length, making it everywhere self-similar." God didn't, or doesn't, need a building block. Also, some religious practices do not have any sort of supreme being or eternal life.

It is commonly found in nature, and when used in a design, it fosters organic and natural-looking compositions that are aesthetically pleasing to the eye. Where one chooses to measure from can be arbitrary, and adjusted if necessary to get the values closer to phi. But as Dale Ritter, the lead human anatomy instructor for Alpert Medical School (AMS) at Brown University in Rhode Island, told Live Science: "I believe the overarching problem with this paper is that there is very little (perhaps no) science in it … with so many bones and so many points of interest on those bones, I'd imagine there would be at least a few" golden ratios elsewhere in the human skeletal system.

1.618 is a mathematical formula that is found in many things, thus proving the universe was created by Intelligent Design, that is, God. Here I found why there is the Fibonacci sequence. Interestingly, if you extend the Fibonacci sequence backward — that is, before the zero and into negative numbers — the ratio of those numbers will get you closer and closer to the negative solution, little phi, −0.6180339887…. But I understand that you can't understand from my words alone; this issue is very complex, because humans never saw the world from this perspective, compiled into the Matrix/DNA Theory. YOU say live by your own intelligence. Because a living spiral has the tendency to self-expansion, and to do so it needs to reproduce the last circular wave into a new wave. Anyway, I just wanted to point out some more phi fun facts. This site has over 100 pages devoted to the golden ratio, so this page is just an introductory index of topics. The Golden Ratio has the decimal approximation of $$\phi=1.6180339887$$. A recent study claimed to find the golden ratio in different proportions of the human skull. If the short side = 1, the long side = Φ. Do I tell him? That symbol appears to be spinning clockwise.
Leonardo's most famous illustration that "measures" the body is that of the Vitruvian Man, and it is based on the measures described by the ancient Roman Vitruvius. Phi can be defined by taking a stick and breaking it into two portions. *gasp* Many people still think the golden ratio is found all over nature and represents perfect beauty; that is a myth. This property, however, does not mean that they have anything to do with phi.

Note: See updates to my research. I'm being reminded of a particular book/Tom Hanks movie here…. Mr. Hedding, I suggest: 1. I'd recommend you look at other studies in your field and seek to achieve results of similar accuracy. The golden ratio, also called the golden number, divine proportion, etc., has a very close association with the Fibonacci sequence. It's not. Interesting point, but does "proper" English of today even sound like the English of the 1600s or the English from earlier times? By contrast, wisdom from Proverbs 3:5 says "Trust in the LORD with all your heart and lean not on your own understanding."

For an overview of the key content of this site, read the article Phi: The Golden Number by Gary Meisner, author of www.goldennumber.net and developer of the PhiMatrix golden ratio design software. Before it was used for infamy, the top-right swastika was not only a Buddhist symbol, but also the basis of the Basque cross, the lauburu. It's like the air that we breathe: we know it's there because it's keeping us alive, otherwise we would be in a different place, but we cannot see or touch it. Most spirals in nature are equiangular spirals, meaning they expand at a constant rate. DNA grows by making and adding new building blocks. The first solution yields the positive irrational number 1.6180339887… (the dots mean the numbers continue forever), and this is generally what's known as phi. Wow… That was almost a complete paradigm shift. (Refer to this set when needed.) See https://www.goldennumber.net/math/.

PHI is the universal number for all life. Also referred to as the Greek letter phi, the Golden Ratio … Google "Buddhist theism"; "The Existential Buddhist" has a good discussion board, with primary source material towards the bottom. All gambling is based on chance with statistical probabilities, and the odds are calculated in advance to be in favor of the casino or lottery. He went on to write that inaccuracies in the precision of measurements lead to greater inaccuracies when those measurements are put into ratios, so claims about ancient buildings or art conforming to phi should be taken with a heavy grain of salt. Phi is closely associated with the Fibonacci sequence, in which every subsequent number in the sequence is found by adding together the two preceding numbers. I have been very focused on sacred geometry this year. The ratio of navel distance to height isn't phi. It is, however, a number with some very unique properties, and it appears in a number of surprising places. Got lost at a point, though. This formula built all natural systems, and then we found it encoded into the general light wave resulting from all seven radiations of the electromagnetic spectrum.

The Golden Ratio (or Phi grid) is obtained by dividing the frame with a ratio of 1.61803:1 between the lateral and central columns, drawing 2 horizontal and 2 vertical lines, which will form 9 rectangles, as in the rule of thirds. If you would like to see a GSP script of this construction, click here. Sorry to finish your brilliant postulation; I just couldn't resist. Truly a large phenomenon, and it will always be on my mind. I read that the male brain is a line and the female brain is a wave; vive la différence, which is the language of the heart. Now we will understand Phi. I found it, Mark! Thus, for example, Americans do not speak "English" but "American English," Australians speak Australian English, etc.

    If unit.count = infinite then enumeration of unit => numbers.exist & numbers.infinite = true
    If numbers.infinite = true then unit.state.variance = true & potential for unit.state.variance = true
    For unit.id = 1 to unit.id = infinite
        Do Until Step 1
            enumerate unit.id
            calculate potential for unit.state.variance
        Loop

That's what brought me here. Love your point: math is math! The place he has been learning to be enlightened for most of his adult life. Pity, I know, but there it is.

Italian Renaissance mathematician Luca Pacioli wrote a book called "De Divina Proportione" ("The Divine Proportion") in 1509 that discussed and popularized phi, according to Knott. …to the length of the larger line segment (B). Explore the appearance of Phi, the Golden Ratio, in nature and in the beauty of the human form. You might also find these articles helpful: https://www.goldennumber.net/golden-ratio-misconceptions-by-george-markowsky-reviewed/ (see the section "Flawed assumptions can lead to flawed conclusions") and https://www.goldennumber.net/golden-ratio-design-beauty-face-evidence-facts/. The dimensions of architectural masterpieces are often said to be close to phi, but as Markowsky discussed, sometimes this means that people simply look for a ratio that yields 1.6 and call that phi. Maybe I missed it, but I didn't see it. This one we define as (√5 − 1)/2. More accurate adjectives would be "pervasive," which means "spread throughout," or "dominant," which means "main, major or chief." Phi is the basis for the Golden Ratio, Section or Mean. The 1st whole number after 0 is 1. Beautiful Phi / Golden Ratio Spirals in Nature. I'm a mathematician and I believe in God. The famous statue of Zeus located there is also reflective of the aesthetically pleasing Golden Ratio. It has been found in quantum solid-state matter and perhaps even time.
This value can be derived using basic quadratic Now that’s a more interesting question. God is a mathematician bc he made it and mathematicians are blessed to get a minuscule insight to such understanding. Great documentary and informative. 34/55, and is also the number obtained when dividing the extreme portion of a line to the whole. RNA was not a complete and working system, but, when the force of phi acted upon it, the RNA was fixed as left face and the left face was reproduced as right face. One final and rather elegant way to represent phi is as follows: This is five raised to the one-half power, times one-half, plus one-half. Finally, someone said it. An advocate looking through phi-colored glasses might see the golden ratio everywhere. Nor I, but he may be referring to the Temple in Jerusalem. I guess I extended the line upward, too, but I got a “\$”. But right now, I want to show where The Golden Ratio (Phi) pops up in other geometrical figures. Irrational essentially means the number goes on and on with no end digit, like the better-known number Pi. The Golden Ratio: Phi, 1.618 Golden Ratio, Phi, 1.618, and Fibonacci in Math, Nature, Art, Design, Beauty and the Face. Usted puede encontrar la media de oro en todas las cosas que son agradables a la vista. Golden Ratio of Beauty Phi measures the symmetry of a face to determine one's beauty. © Others claim that the Greeks used phi in designing the Parthenon or in their beautiful statuary. this is so interesting.i was so good in math at school.i came across the phi when i read he da vinci code by dan brown and ever since iv become interested.thanks for this explanation.it has made it much easier to understand. It happens that this location is just where the force that is responsible by replication, or reproduction of things in the system, begins to reproduce the left face of the spherical formula, building the right face. Lenght 4. Wow… Your brain sounds pretty unmumbojumbo to me! 
The number phi, often known as the golden ratio, is a mathematical concept that people have known about since the time of the ancient Greeks. Now all even numbers. The Silver Ratio is similar as it can be geometrically implied. Stay up to date on the coronavirus outbreak by signing up to our newsletter today. “Wisdom is power, while knowledge is wanting.”, Finding that Schroedinger’s Wave Equation also uses Phi as its symbol, the link between wave-particle duality and the Golden Ratio may be right in front of us, and be pretty important in our understanding of how Nature really works (as the grail sought in the context of New Physics). ( if you see the Matrix/DNA formula, Phi is over F5, the reproduction function. The Golden Ratio: The Story of Phi, the World's Most Astonishing Number: Amazon.es: Livio, Mario: Libros en idiomas extranjeros We see it everywhere in the world around us. Son ideales para obras de arte y perfectas para ayudar a But much of that has no basis in reality. am I. Very intriguing. Nobody in my year knows what it is either. May 16, 2012 by Gary Meisner 138 Comments. there may be other relationships, like the ark of the covenant. Phi is pronounced “Fee” NOT “Fie”, as Pi is NOT Pie, but Pee! The ratio of one number to the next is approximately 1.61803, which is called “phi”, or the Golden Ratio. As a quick background, the golden ratio is defined by dividing a line at the one point at which the ratio of the larger segment (a) to the small segment (b) is equal to the ratio of the line (a+b) to the larger segment (a). More sober scholars routinely debunk such assertions. D. http://goldenfed.blogspot.com/2016/06/the-euclidean-algorithm-in-expanded.html. There is something deeper about Phi and pi, nobody told it yet. You can also use phi to calculate the circumference of a circle to within .04 of pi. The golden ratio, also called the golden number, divine proportion, etc., has a very close association with the Fibonacci sequence. 
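The basic numerical claims above — that phi solves x² = x + 1, and that ratios of consecutive Fibonacci numbers approach phi — are easy to check directly. A minimal sketch in Haskell (the names `phi`, `fibs` and `fibRatio` are mine):

```haskell
-- Check the basic numerical facts about phi:
-- it solves x^2 = x + 1, and ratios of consecutive
-- Fibonacci numbers converge to it.

phi :: Double
phi = (1 + sqrt 5) / 2

-- The Fibonacci sequence: each number is the sum of the two before it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Ratio of the (n+1)-st Fibonacci number to the n-th, for n >= 1.
fibRatio :: Int -> Double
fibRatio n = fromIntegral (fibs !! (n + 1)) / fromIntegral (fibs !! n)

main :: IO ()
main = do
  print (phi * phi - (phi + 1))  -- essentially zero
  print (fibRatio 20 - phi)      -- a tiny error, shrinking as n grows
```

The error of `fibRatio n` against `phi` shrinks rapidly as `n` grows, which is the convergence claimed in the text.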
# A “Side-Swapping” Lemma Regarding Minimum, Using Enriched Indirect Equality

Yu-Han Lyu and I were studying some paper from the algorithm community, and we noticed a peculiar kind of argument. For a much simplified version, let `X` and `D` be two relations of type `A → B`, denoting two alternative approaches to non-deterministically compute possible solution candidates to a problem. Also let `≤` be a transitive relation on `B`, and `≥` its converse. The relation `min ≤ : {B} → B`, given a set, returns one of its elements that is no larger (under `≤`) than any element in the set, if such a minimum exists. We would like to find a solution that is as small as possible under `≤`.

When arguing for the correctness of the algorithm, the paper we are studying claims that the method `X` is no worse than `D` in the following sense: if every solution returned by `D` is no better than some solution returned by `X`, which we translate to:

``D ⊆ ≥ . X``

then the best (smallest) solution returned by `X` must be no worse than (one of the) best solutions returned by `D`:

``min ≤ . ΛX ⊆ ≤ . min ≤ . ΛD``

where `Λ` converts a relation `A → B` to a function `A → {B}` by collecting its results into a set. Note that, awkwardly, `X` and `D` are swapped to different sides of the relational inclusion.

“What? How could this be true?” was my first reaction. I bombarded Yu-Han with lots of emails, making sure that we didn’t misinterpret the paper. An informal way to see it is that since every result of `D` is outperformed by something returned by `X`, collectively, the best result among the latter must be “lower-bounded” by the optimal result of `D`. But this sounds unconvincing to me. Something is missing.

### Totality and Well-Boundedness

It turns out that the reasoning can be correct, but we need some more constraints on `D` and `≤`. Firstly, `D` must yield some result whenever `X` does. Otherwise it could be that `D ⊆ ≥ . X` is true but `ΛD` returns an empty set, while `ΛX` still returns something.
This is bad because `X` would no longer be a safe alternative to `D` — it could sometimes do too much. One way to prevent this from happening is to demand that `ΛD = dom ∈ . ΛD`, where `∈` is the membership relation and `dom ∈`, the domain of `∈`, consists only of non-empty sets. It will be proved later that this is equivalent to demanding that `D` be total.

Secondly, we need to be sure that every non-empty set has a minimum, that is, `min ≤` always yields something for non-empty sets. Therefore `min ≤ . ΛD` does not fall back to the empty relation. Formally, this can be expressed as `dom ∈ = dom (min ≤)`. Bird and de Moor called this property well-boundedness of `≤`.

Recall that `min ≤ = ∈ ∩ ≤/∋`. The part `∈` guarantees that `min ≤` returns something that is in the given set, while `≤/∋` guarantees that the returned value is a lower bound of the given set. Since `ΛD` (as well as `ΛX`) is a function, we also have `min ≤ . ΛD = D ∩ ≤/D°`, following from the laws of division.

Later we will prove an auxiliary lemma stating that if `≤` is well-bounded, we have:

``≤/∋ . dom ∈ ⊆ ≤ . min ≤ . dom ∈``

The right-hand side, given a non-empty set, takes its minimum and returns something possibly smaller. The left-hand side merely returns some lower bound of the given set. It sounds weaker because it does not demand that the set have a minimum. Nevertheless, the inclusion holds if `≤` is well-bounded.

An algebraic proof of the auxiliary lemma was given by Akimasa Morihata. The proof, to be discussed later, is quite interesting to me because it makes an unusual use of indirect equality. With the lemma, the proof of the main result becomes rather routine:

```
   min ≤ . ΛX ⊆ ≤ . min ≤ . ΛD
≣    { since ΛD = dom ∈ . ΛD }
   min ≤ . ΛX ⊆ ≤ . min ≤ . dom ∈ . ΛD
⇐    { ≤/∋ . dom ∈ ⊆ ≤ . min ≤ . dom ∈, see below }
   min ≤ . ΛX ⊆ ≤/∋ . dom ∈ . ΛD
≣    { since ΛD = dom ∈ . ΛD }
   min ≤ . ΛX ⊆ ≤/∋ . ΛD
≣    { since ΛD is a function, R/S . f = R/(f° . S) }
   min ≤ . ΛX ⊆ ≤/D°
≣    { Galois connection }
   min ≤ . ΛX . D° ⊆ ≤
⇐    { min ≤ . ΛX ⊆ ≤/X° }
   ≤/X° . D° ⊆ ≤
⇐    { since D ⊆ ≥ . X }
   ≤/X° . X° . ≤ ⊆ ≤
⇐    { division }
   ≤ . ≤ ⊆ ≤
≣    { ≤ transitive }
   true
```

### Proof Using Enriched Indirect Equality

Now we have got to prove that `≤/∋ . dom ∈ ⊆ ≤ . min ≤ . dom ∈`, provided that `≤` is well-bounded. To prove this lemma I had to resort to first-order logic. I passed the problem to Akimasa Morihata and he quickly came up with a proof. We start with some preparation:

```
   ≤/∋ . dom ∈ ⊆ ≤ . min ≤ . dom ∈
⇐    { since min ≤ ⊆ ∈ }
   ≤/(min ≤)° . dom ∈ ⊆ ≤ . min ≤ . dom ∈
```

And then we use proof by indirect (in)equality. The proof, however, is unusual in two ways. Firstly, we need the enriched indirect equality proposed by Dijkstra in EWD 1315: Indirect equality enriched (and a proof by Netty). Typically, proof by indirect equality exploits the property:

``x = y ≡ (∀u. u ⊆ x ≡ u ⊆ y)``

and also:

``x ⊆ y ≡ (∀u. u ⊆ x ⇒ u ⊆ y)``

When we know that both `x` and `y` satisfy some predicate `P`, enriched indirect equality allows us to prove `x = y` (or `x ⊆ y`) by proving a weaker premise:

``x = y ≡ (∀u. P u ⇒ u ⊆ x ≡ u ⊆ y)``

Note that both `≤/(min ≤)° . dom ∈` and `≤ . min ≤ . dom ∈` satisfy `X = X . dom ∈`. Later we will try to prove:

``X ⊆ ≤/(min ≤)° . dom ∈ ⇒ X ⊆ ≤ . min ≤ . dom ∈``

for `X` such that `X = X . dom ∈`.

The second unusual aspect is that rather than starting from one of `X ⊆ ≤/(min ≤)° . dom ∈` and `X ⊆ ≤ . min ≤ . dom ∈` and ending at the other, Morihata’s proof took the goal as a whole and used rules like `(P ⇒ Q) ⇒ (P ⇒ P ∧ Q)`. The proof goes:

```
   (X ⊆ ≤/(min ≤)° . dom ∈ ⇒ X ⊆ ≤ . min ≤ . dom ∈)
⇐    { dom ∈ ⊆ id }
   (X ⊆ ≤/(min ≤)° ⇒ X ⊆ ≤ . min ≤ . dom ∈)
≣    { Galois connection }
   (X . (min ≤)° ⊆ ≤ ⇒ X ⊆ ≤ . min ≤ . dom ∈)
⇐    { (P ⇒ Q) ⇒ (P ⇒ P ∧ Q) }
   (X . (min ≤)° ⊆ ≤ ⇒ X ⊆ X . (min ≤)° . min ≤ . dom ∈)
⇐    { R ∩ S ⊆ R }
   (X . (min ≤)° ⊆ ≤ ⇒ X ⊆ X . (((min ≤)° . min ≤) ∩ id) . dom ∈)
≣    { dom R = (R° . R) ∩ id }
   (X . (min ≤)° ⊆ ≤ ⇒ X ⊆ X . dom (min ≤) . dom ∈)
≣    { ≤ well-bounded: dom ∈ = dom (min ≤) }
   (X . (min ≤)° ⊆ ≤ ⇒ X ⊆ X . dom ∈ . dom ∈)
≣    { dom ∈ . dom ∈ = dom ∈ }
   (X . (min ≤)° ⊆ ≤ ⇒ X ⊆ X . dom ∈)
≣    { X = X . dom ∈ }
   (X . (min ≤)° ⊆ ≤ ⇒ true)
≣    true
```

### Auxiliary Proofs

Finally, here is a proof that the constraint `ΛD = dom ∈ . ΛD` is equivalent to `D` being total, that is, `id ⊆ D° . D`. Recall that `dom ∈ = (∋ . ∈) ∩ id`. We simplify `dom ∈ . ΛD` a bit:

```
   dom ∈ . ΛD
=  ((∋ . ∈) ∩ id) . ΛD
=    { ΛD a function }
   (∋ . ∈ . ΛD) ∩ ΛD
=    { ∈ . ΛD = D }
   (∋ . D) ∩ ΛD
```

We reason:

```
   dom ∈ . ΛD = ΛD
≡    { R ∩ S = S iff S ⊆ R }
   ΛD ⊆ ∋ . D
≡    { ΛD function, shunting }
   id ⊆ (ΛD)° . ∋ . D
≡    { ∈ . ΛD = D, taking converses }
   id ⊆ D° . D
```

which is the definition of totality.

# The Windowing Technique for Longest Segment Problems

In the previous post we reviewed Hans Zantema’s algorithm for solving longest segment problems with suffix- and overlap-closed predicates. For predicates that are not overlap-closed, Zantema derived a so-called “windowing” technique, which will be the topic of this post.

A brief review: the longest segment problem takes the form:

``max# ∘ p ◁ ∘ segs``

where `segs :: [a] → [[a]]`, defined by `segs = concat ∘ map inits ∘ tails`, returns all consecutive segments of the input list; `p ◁` is an abbreviation for `filter p`, and `max# :: [[a]] → [a]` returns the longest list from the input list of lists. In words, the task is to compute the longest consecutive segment of the input that satisfies predicate `p`.

A predicate `p` is suffix-closed if `p (xs ⧺ ys) ⇒ p ys`. For suffix-closed `p`, Zantema proposed a technique that, from a high-level point of view, looks just like the right solution to such problems. We scan through the input list using a `foldr`, from the right to the left, during which we try to maintain the longest segment satisfying `p` so far. Also, we keep a prefix of the list that is as long as the currently longest segment, which we call the window.
If, when we move one element to the left, the window (now one element longer than the currently longest segment) happens to satisfy `p`, it becomes the new optimal solution. Otherwise we drop the right-most element of the window so that it slides leftwards, retaining its length. Notice that this implies that we had better represent the window by a queue, so that we can efficiently add elements from the left and drop from the right.

Derivation of the algorithm is a typical case of tupling.

### Tupling

Given a function `h`, we attempt to compute it efficiently by turning it into a `foldr`. This would be possible if the value of the inductive case `h (x : xs)` were determined solely by `x` and `h xs`, that is:

``h (x : xs) = f x (h xs)``

for some `f`. With some investigation, however, it would turn out that `h (x : xs)` also depends on some `g`:

``h (x : xs) = f x (g (x : xs)) (h xs)``

Therefore, we instead try to construct their split `⟨ h , g ⟩` as a fold, where the split is defined by:

``⟨ h , g ⟩ xs = (h xs, g xs)``

and `h = fst . ⟨ h , g ⟩`. If `⟨ h , g ⟩` is indeed a fold, it should scan through the list and construct a pair of an `h`-value and a `g`-value. To make this feasible, it is then hoped that `g (x : xs)` can be determined by `g xs` and `h xs`. Otherwise, we may have to repeat the process again, making the fold return a triple.

### Segment/Prefix Decomposition

Let us look into the longest segment problem. For suffix-closed `p` it is reasonable to assume that `p []` is true — otherwise `p` would be false everywhere. Therefore, for the base case we have `max# ∘ p ◁ ∘ segs ▪ [] = []`. We denote function application by `▪` to avoid too many parentheses. Now the inductive case.
It is not hard to derive an alternative definition of `segs`:

```
segs []       = [[]]
segs (x : xs) = inits (x : xs) ⧺ segs xs
```

Therefore, we derive:

```
   max# ∘ p ◁ ∘ segs ▪ (x : xs)
=  max# ∘ p ◁ ▪ (inits (x : xs) ⧺ segs xs)
=  (max# ∘ p ◁ ∘ inits ▪ (x : xs)) ↑# (max# ∘ p ◁ ∘ segs ▪ xs)
```

where `xs ↑# ys` returns the longer of `xs` and `ys`. It suggests that we maintain, by a `foldr`, a pair containing the longest segment and the longest prefix satisfying `p` (that is, `max# ∘ p ◁ ∘ inits`). It is then hoped that `max# ∘ p ◁ ∘ inits ▪ (x : xs)` can be computed from `max# ∘ p ◁ ∘ inits ▪ xs`. And luckily, this is indeed the case, implied by the following proposition proved in an earlier post:

Proposition 1: If `p` is suffix-closed, we have:

``p ◁ ∘ inits ▪ (x : xs) = finits (max# ∘ p ◁ ∘ inits ▪ xs)``

where `finits ys = p ◁ ∘ inits ▪ (x : ys)`.

Proposition 1 says that the list (or set) of all the prefixes of `x : xs` that satisfy `p` can be computed from the longest prefix of `xs` (call it `ys`) satisfying `p`, provided that `p` is suffix-closed. A naive way to do so is simply to compute all the prefixes of `x : ys` and do the filtering again, as is done in `finits`. This was the route taken in the previous post. It would turn out, however, that to come up with an efficient implementation of `f` we need some more properties of `p`, such as it also being overlap-closed.

### The “Window”

Proposition 1 can be strengthened: to compute all the prefixes of `x : xs` that satisfy `p` using `finits`, we do not strictly have to start with `ys`. Any prefix of `xs` longer than `ys` will do.

Proposition 2: If `p` is suffix-closed, we have:

``p ◁ ∘ inits ▪ (x : xs) = finits (take i xs)``

where `finits ys = p ◁ ∘ inits ▪ (x : ys)`, and `i ≥ length ∘ max# ∘ p ◁ ∘ inits ▪ xs`.
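Propositions 1 and 2 are easy to test on small instances. The following sketch instantiates `p` to the suffix-closed predicate `ascending`; the names `mpi'`, `finits'` and `prop2` are mine, and `maximumBy (comparing length)` stands in for `max#`:

```haskell
import Data.List (inits, maximumBy)
import Data.Ord (comparing)

-- A suffix-closed predicate: the list is non-decreasing.
ascending :: [Int] -> Bool
ascending ys = and (zipWith (<=) ys (tail ys))

-- max# ∘ p ◁ ∘ inits: the longest prefix satisfying the predicate.
-- (Safe: inits always yields [], and ascending [] holds.)
mpi' :: [Int] -> [Int]
mpi' = maximumBy (comparing length) . filter ascending . inits

-- finits ys = p ◁ ∘ inits ▪ (x : ys), for a given x.
finits' :: Int -> [Int] -> [[Int]]
finits' x ys = filter ascending (inits (x : ys))

-- Proposition 2, instantiated: for i >= length (mpi' xs),
-- p ◁ ∘ inits ▪ (x : xs) equals finits (take i xs).
prop2 :: Int -> [Int] -> Int -> Bool
prop2 x xs i = filter ascending (inits (x : xs)) == finits' x (take i xs)

main :: IO ()
main = print [prop2 1 [2,3,1,2] i | i <- [2, 3, 4]]
```

Here `mpi' [2,3,1,2] = [2,3]` has length 2, so Proposition 2 predicts that any `i ≥ 2` works, which the check confirms for this instance.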
In particular, we may choose `i` to be the length of the longest segment:

Lemma 1:

``length ∘ max# ∘ p ◁ ∘ segs ▪ xs ≥ length ∘ max# ∘ p ◁ ∘ inits ▪ xs``

Appealing to intuition, Lemma 1 is true because `segs xs` is a superset of `inits xs`.

Remark: Zantema proved Proposition 1 by contradiction. The purpose of an earlier post was to give a constructive proof of Proposition 1, which was considerably harder than I expected. I’d be interested to see a constructive proof of Proposition 2.

Now we resume the reasoning:

```
   max# ∘ p ◁ ∘ segs ▪ (x : xs)
=  max# ∘ p ◁ ▪ (inits (x : xs) ⧺ segs xs)
=  (max# ∘ p ◁ ∘ inits ▪ (x : xs)) ↑# (max# ∘ p ◁ ∘ segs ▪ xs)
=    { Proposition 2 and Lemma 1 }
   let s = max# ∘ p ◁ ∘ segs ▪ xs
   in (max# ∘ p ◁ ∘ inits ▪ (x : take (length s) xs)) ↑# s
```

Define `window xs = take (length ∘ max# ∘ p ◁ ∘ segs ▪ xs) xs`. The reasoning above suggests that we may try the following tupling:

``max# ∘ p ◁ ∘ segs = fst ∘ ⟨ max# ∘ p ◁ ∘ segs , window ⟩``

### Maintaining the Longest Segment and the Window

The task now is to express `⟨ max# ∘ p ◁ ∘ segs , window ⟩` as a `foldr`. We can do so only if both `max# ∘ p ◁ ∘ segs ▪ (x : xs)` and `window (x : xs)` can be determined by `max# ∘ p ◁ ∘ segs ▪ xs` and `window xs`. Let us see whether this is the case.

#### Maintaining the Longest Segment

Regarding `max# ∘ p ◁ ∘ segs ▪ (x : xs)`, we have derived:

```
   max# ∘ p ◁ ∘ segs ▪ (x : xs)
=    { as shown above, let s = max# ∘ p ◁ ∘ segs ▪ xs }
   (max# ∘ p ◁ ∘ inits ▪ (x : take (length s) xs)) ↑# s
```

Let `s = max# ∘ p ◁ ∘ segs ▪ xs`. We consider two cases:

1. Case `p (x : take (length s) xs)`. We reason:

```
   (max# ∘ p ◁ ∘ inits ▪ (x : take (length s) xs)) ↑# s
=    { see below }
   (x : take (length s) xs) ↑# s
=    { since the LHS is one element longer than the RHS }
   x : take (length s) xs
=    { definition of window }
   x : window xs
```

The first step is correct because, for all `zs`, `p zs` implies that `max# ∘ p ◁ ∘ inits ▪ zs = zs`.

2. Case `¬ p (x : take (length s) xs)`. In this case `(max# ∘ p ◁ ∘ inits ▪ (x : take (length s) xs)) ↑# s` must be `s`, since `¬ p zs` implies that `length ∘ max# ∘ p ◁ ∘ inits ▪ zs < length zs`.

#### Maintaining the Window

Now consider the window. Again, we do a case analysis:

1. Case `p (x : take (length s) xs)`. We reason:

```
   window (x : xs)
=  take (length ∘ max# ∘ p ◁ ∘ segs ▪ (x : xs)) (x : xs)
=    { by the reasoning above }
   take (length (x : take (length s) xs)) (x : xs)
=    { take and length }
   x : take (length (take (length s) xs)) xs
=    { take and length }
   x : take (length s) xs
=  x : window xs
```

2. Case `¬ p (x : take (length s) xs)`. We reason:

```
   window (x : xs)
=  take (length ∘ max# ∘ p ◁ ∘ segs ▪ (x : xs)) (x : xs)
=    { by the reasoning above }
   take (length s) (x : xs)
=    { take and length }
   x : take (length s - 1) xs
=  x : init (window xs)
```

(The last two steps assume `s` is non-empty; when `s = []`, `window (x : xs)` is simply `[]`.)

#### The Algorithm

In summary, the reasoning above shows that

``⟨ max# ∘ p ◁ ∘ segs , window ⟩ = foldr step ([], [])``

where `step` is given by

```
step x (s, w) | p (x : w) = (x : w, x : w)
              | otherwise = (s, x : init w)
```

As is typical of many program derivations, after much work we deliver an algorithm that appears to be rather simple. The key invariants that make the algorithm correct, such as that `s` is the optimal segment and that `w` is as long as `s`, are all left implicit. It would be hard to prove the correctness of the algorithm without these clues.

We are not quite done yet. The window `w` had better be implemented as a queue, so that `init w` can be performed efficiently. The algorithm then runs in time linear in the length of the input list, provided that `p (x : w)` can be evaluated in constant time -- which is usually not the case for interesting predicates. We may then again tuple the fold with some information that helps to compute `p` efficiently. But I shall stop here.
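As a sanity check, the derived algorithm can be transcribed into Haskell and compared with the specification `max# ∘ p ◁ ∘ segs` on a few inputs. This is only a sketch: the names `spec` and `longestSeg` are mine, the predicate is the suffix-closed but not overlap-closed `sum xs ≤ C` example, the window is a plain list rather than a queue (so this version is not linear-time), and I added a `null w` guard for the case where the optimal segment is still empty, which the derivation leaves implicit:

```haskell
import Data.List (inits, tails, maximumBy)
import Data.Ord (comparing)

-- Suffix-closed but not overlap-closed (on non-negative inputs).
p :: [Int] -> Bool
p xs = sum xs <= 3

-- Specification: the longest segment satisfying p.
spec :: [Int] -> [Int]
spec = maximumBy (comparing length) . filter p . concatMap inits . tails

-- The windowing algorithm: s is the best segment so far,
-- w a prefix of the remaining input that is as long as s.
longestSeg :: [Int] -> [Int]
longestSeg = fst . foldr step ([], [])
  where
    step x (s, w)
      | p (x : w) = (x : w, x : w)
      | null w    = (s, [])         -- defensive guard: an empty window stays empty
      | otherwise = (s, x : init w)

main :: IO ()
main = mapM_ (print . longestSeg) [[2,1,1,4,1,1,1], [1,3,1,2], [5,4]]
```

On `[2,1,1,4,1,1,1]` both the specification and the fold return `[1,1,1]`; on `[5,4]`, where no non-empty segment qualifies, the guard keeps the window empty and `[]` is returned.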
# Longest Segment Satisfying Suffix and Overlap-Closed Predicates I spent most of the week preparing for the lecture on Monday, in which we plan to talk about segment problems. One of the things we would like to do is to translate the derivations in Hans Zantema’s Longest Segment Problems to Bird-Meertens style. Here is a summary of the part I managed to do this week. Zantema’s paper considered problems of this form: ``max# ∘ p ◁ ∘ segs`` where `segs :: [a] → [[a]]`, defined by `segs = concat ∘ map inits ∘ tails`, returns all consecutive segments of the input list; `p ◁` is a shorter notation for `filter p`, and `max# :: [[a]] → [a]` returns the longest list from the input list of lists. In words, the task is to compute the longest consecutive segment of the input that satisfies predicate `p`. Of course, we have to assume certain properties of the predicate `p`. A predicate `p` is: • suffix-closed, if `p (xs ⧺ ys) ⇒ p ys`; • overlap-closed, if `p (xs ⧺ ys) ∧ p (ys ⧺ zs) ∧ ys ≠ [] ⇒ p (xs ⧺ ys ⧺ zs)`. For example, `ascending` is suffix- and overlap-closed, while `p xs = (all (0 ≤) xs) ∧ (sum xs ≤ C)` for some constant `C` is suffix-closed but not overlap-closed. Note that for suffix-closed `p`, it is reasonable to assume that `p []` is true, otherwise `p` would be false everywhere. It also saves us the trouble of ensuring that `max#` is always applied to a non-empty list. I denote function application by an infix operator `▪` that binds looser than function composition `∘` but tighter than other binary operators. Therefore `f ∘ g ∘ h ▪ x` means `f (g (h x))`. ### Prefix/Suffix Decomposition Let us begin with the usual prefix/suffix decomposition: `````` max# ∘ p ◁ ∘ segs = max# ∘ p ◁ ∘ concat ∘ map inits ∘ tails = max# ∘ concat ∘ map (p ◁) ∘ map inits ∘ tails = max# ∘ map (max# ∘ p ◁ ∘ inits) ∘ tails `````` As with the classical maximum segment sum, if we can somehow turn `max# ∘ p ◁ ∘ inits` into a fold, we can then implement `map (foldr ...)
∘ tails` by a `scanr`. Let us denote `max# ∘ p ◁ ∘ inits` by `mpi`. If you believe in structural recursion, you may attempt to fuse `max# ∘ p ◁` into `inits` by fold-fusion. Unfortunately, it does not work this way! In the fold-fusion theorem: ``h ∘ foldr f e = foldr g (h e) ⇐ h (f x y) = g x (h y)`` notice that `x` and `y` are universally quantified, which is too strong for this case. Many of the properties we need, to be shown later, hold only with information from the context — e.g. some properties are true only if `y` is a result of `inits`. ### Trimming Unneeded Prefixes One of the crucial properties we need is the following: Proposition 1: If `p` is suffix-closed, we have: `````` p ◁ ∘ inits ▪ (x : xs) = p ◁ ∘ inits ▪ (x : max# ∘ p ◁ ∘ inits ▪ xs)`````` For some intuition, let `x = 1` and `xs = [2,3,4]`. The right-hand side first computes all prefixes of `xs`: ``[] [2] [2,3] [2,3,4]`` before filtering them. Let us assume that only `[]` and `[2,3]` pass the check `p`. We then pick the longest one, `[2,3]`, cons it with `1`, and compute all its prefixes: ``[] [1] [1,2] [1,2,3]`` before filtering them with `p` again. The left-hand side, on the other hand, performs filtering on all prefixes of `[1,2,3,4]`. However, the proposition says that it yields the same result as the right-hand side — filtering on the prefixes of `[1,2,3]` only. We lose nothing if we drop `[1,2,3,4]`. Indeed, since `p` is suffix-closed, if `p [1,2,3,4]` were true, `p [2,3,4]` would have been true on the right-hand side. Proof of Proposition 1 was the topic of a previous post. The proposition is useful to us because: `````` mpi (x : xs) = max# ∘ p ◁ ∘ inits ▪ (x : xs) = { Proposition 1 } max# ∘ p ◁ ∘ inits ▪ (x : max# ∘ p ◁ ∘ inits ▪ xs) = mpi (x : mpi xs) `````` Therefore `mpi` is a fold! ``mpi = foldr (λx ys → mpi (x : ys)) []`` ### Refining the Step Function We still have to refine the step function `λx ys → mpi (x : ys)` to something more efficient.
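Before refining the step function, the fold form can be checked executably against the specification. A throwaway sketch (the names `mpiSpec`, `mpiFold`, and `ascending` are mine): `last . filter p . inits` picks the longest satisfying prefix because `inits` yields prefixes in order of increasing length, and `p [] = True` keeps the filtered list non-empty.

```haskell
import Data.List (inits)

-- specification: the longest prefix of the input satisfying p
mpiSpec :: ([a] -> Bool) -> [a] -> [a]
mpiSpec p = last . filter p . inits

-- the fold obtained from Proposition 1 (still inefficient, since the
-- step function re-runs the specification, but the shape is a foldr)
mpiFold :: ([a] -> Bool) -> [a] -> [a]
mpiFold p = foldr (\x ys -> mpiSpec p (x : ys)) []

-- a suffix-closed predicate: non-decreasing lists
ascending :: Ord a => [a] -> Bool
ascending xs = and (zipWith (<=) xs (drop 1 xs))
```

On `[1,2,1,3]`, both versions return `[1,2]`.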
Luckily, for overlap-closed `p`, `mpi (x : ys)` is either `[]`, `[x]`, or `x : ys` — if `ys` is the result of `mpi`. Proposition 2: If `p` is overlap-closed, `mpi (x : mpi xs) = x ⊙ mpi xs`, where `⊙` is defined by: ``````x ⊙ ys | p (x : ys) = x : ys | p [x] = [x] | otherwise = [] `````` To see why Proposition 2 is true, consider `mpi (x : mpi xs)`. • If `mpi (x : mpi xs) = []`, we are done. • Otherwise it must be `x : zs` for some `zs ∈ inits (mpi xs)`. And we have `p (x : zs)` because it is a result of `mpi`. Again consider two cases: • If `zs = []`, both sides reduce to `[x]`, otherwise… • Let us not forget that `p (mpi xs)` must be true. Also, since `zs ∈ inits (mpi xs)`, we have `mpi xs = zs ⧺ ws` for some `ws`. Together, by overlap-closure (with `zs` non-empty), `p (x : zs ⧺ ws) = p (x : mpi xs)` must be true. Notice that the reasoning above (from Zantema) is a proof by contradiction. I do not yet know how hard it is to build a constructive proof. With Propositions 1 and 2 we have turned `mpi` into a fold. That leads to the derivation: `````` max# ∘ p ◁ ∘ segs = { derivation above } max# ∘ map (max# ∘ p ◁ ∘ inits) ∘ tails = max# ∘ map (foldr (⊙) []) ∘ tails = max# ∘ scanr (⊙) []`````` with the definition of `⊙` given above. It turns out to be a rather simple algorithm: we scan through the list, and in each step we choose among three outcomes: `[]`, `[x]`, and `x : ys`. Like the maximum segment sum problem, it is a simple algorithm whose correctness is not that easy to justify. The algorithm would be linear-time if `⊙` could be computed in constant time. With the presence of `p` in `⊙`, however, this is unlikely to be the case. ### Efficient Testing So let us compute, during the fold, something that allows `p` to be determined efficiently. Assume that there exists some `φ :: [A] → B` that is a fold (`φ = foldr ⊗ ι` for some `⊗` and `ι`), such that `p (x : xs) = p xs ∧ f x (φ xs)` for some `f`. Some example choices of `φ` and `f`: • `p = ascending`.
We may pick: ``````φ xs = if null xs then Nothing else Just (head xs) f x Nothing = true f x (Just y) = x ≤ y`````` • `p xs = ` all elements in `xs` are equal modulo 3. We may pick: ``````φ xs = if null xs then Nothing else Just (head xs `mod` 3) f x Nothing = true f x (Just y) = x `mod` 3 == y`````` Let us tuple `mpi` with `φ`, and turn them into one fold. Let `⟨ f , g ⟩ x = (f x, g x)`. We derive: `````` max# ∘ p ◁ ∘ inits = { f = fst ∘ ⟨ f , g ⟩, see below } fst ∘ ⟨ max# ∘ p ◁ ∘ inits , φ ⟩ = fst ∘ foldr step ([], ι)`````` where `step` is given by ``````step x (xs, b) | f x b = (x : xs , x ⊗ b) | f x ι = ([x], x ⊗ ι) | otherwise = ([], ι)`````` Notice that the property `f = fst ∘ ⟨ f , g ⟩` is true when the domain of `f` is contained in the domain of `g`, in particular, when they are both total, which again shows why we prefer to work in a semantics with total functions only. Let us restart the main derivation again, this time using the tupling: `````` max# ∘ p ◁ ∘ segs = max# ∘ map (max# ∘ p ◁ ∘ inits) ∘ tails = max# ∘ map (fst ∘ ⟨ max# ∘ p ◁ ∘ inits , φ ⟩) ∘ tails = { since max# ∘ map fst = fst ∘ max#', see below } fst ∘ max#' ∘ map ⟨ max# ∘ p ◁ ∘ inits , φ ⟩ ∘ tails = { derivation above } fst ∘ max#' ∘ map (foldr step ([], ι)) ∘ tails = fst ∘ max#' ∘ scanr step ([], ι)`````` where `max#'` compares the lengths of the first components. This is a linear-time algorithm. ### Next… Windowing? What if `p` is not overlap-closed? Zantema used a technique called windowing, which I will defer to next time… # On a Basic Property for the Longest Prefix Problem In the Software Construction course next week we will, inevitably, talk about maximum segment sum. A natural next step is to continue with the theme of segment problems, which doesn’t feel complete without mentioning Hans Zantema’s Longest Segment Problems. The paper deals with problems of this form: ``ls = max# ∘ p ◁ ∘ segs`` That is, computing the longest consecutive segment of the input list that satisfies predicate `p`.
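The specification is directly executable, which is handy for testing refinements later. A Haskell sketch (the names `maxLen` and `ls` are mine; `maxLen` plays the role of `max#` and assumes a non-empty input, which holds here because `segs` always yields `[]`):

```haskell
import Data.List (inits, tails, maximumBy)
import Data.Ord (comparing)

-- all consecutive segments of a list
segs :: [a] -> [[a]]
segs = concatMap inits . tails

-- max#: the longest list among a non-empty list of lists
maxLen :: [[a]] -> [a]
maxLen = maximumBy (comparing length)

-- ls = max# . filter p . segs
ls :: ([a] -> Bool) -> [a] -> [a]
ls p = maxLen . filter p . segs
```

For example, with `p` testing for non-decreasing lists, `ls p [3,1,2,5,4]` yields `[1,2,5]`.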
When writing on paper I found it much easier to denote `filter p` by the Bird-Meertens style `p ◁`, and I will use the latter for this post too. The function `segs :: [a] → [[a]]`, defined by `segs = concat ∘ map inits ∘ tails`, returns all consecutive segments of the input list, and `max# :: [[a]] → [a]` returns the longest list from the input list of lists. To avoid deeply nested parentheses, I denote function application by an infix operator `▪` that binds looser than function composition `∘`. Therefore `f ∘ g ∘ h ▪ x` means `f (g (h x))`. A standard transformation turns the specification into the form ``````ls = max# ∘ map (max# ∘ p ◁ ∘ inits) ∘ tails `````` Therefore we may solve `ls` if we manage to solve its sub-problem on prefixes: ``lp = max# ∘ p ◁ ∘ inits`` that is, computing the longest prefix of the input list satisfying predicate `p`. One of the key propositions in the paper says: Proposition 1: If `p` is suffix-closed (that is, `p (x ⧺ y) ⇒ p y`), we have: `````` p ◁ ∘ inits ▪ (a : x) = p ◁ ∘ inits ▪ (a : max# ∘ p ◁ ∘ inits ▪ x)`````` It is useful because, by applying `max#` to both sides we get `` lp (a : x) = max# ∘ p ◁ ∘ inits ▪ (a : lp x)`` that is, `lp` can be computed by a `foldr`. Of course, we are not quite done yet. We then have to somehow simplify `p ◁ ∘ inits ▪ (a : lp x)` to something more efficient. Before we move on, however, proving Proposition 1 turns out to be an interesting challenge in itself. ### Intuition What does Proposition 1 actually say? Let `x = [1,2,3]` and `a = 0`. On the left-hand side, we are performing `p ◁` on `` [] [0] [0,1] [0,1,2] [0,1,2,3]`` The right-hand side says that we may first filter the prefixes of `[1,2,3]`: `` [] [1] [1,2] [1,2,3]`` Assume that only `[]` and `[1,2]` get chosen. We may then keep the longest prefix `[1,2]` only, generate all its prefixes (which would be `[] [1] [1,2]`), and filter the latter again.
In other words, we lose no information by dropping `[1,2,3]` if it fails predicate `p`, since by suffix-closure, `p ([0] ⧺ [1,2,3]) ⇒ p [1,2,3]`. If `[1,2,3]` doesn’t pass `p`, `p [0,1,2,3]` cannot be true either. Zantema has a nice and brief proof of Proposition 1 by contradiction. However, the theme of this course has mainly focused on proof by induction and, to keep the possibility of someday encoding our derivations in tools like Coq or Agda, we would like to have a constructive proof. So, is it possible to prove Proposition 1 in a constructive manner? ### The Proof I managed to come up with a proof. I’d be happy to know if there is a better way, however. For brevity, I denote `if p then x else y` by `p → x; y`. Also, define ``a ⊕p x = p a → a : x ; x`` Therefore `p ◁` is defined by ``p ◁ = foldr ⊕p []`` Here comes the main proof: Proposition 1 ``p ◁ ∘ inits ▪ (a : x) = p ◁ ∘ inits ▪ (a : max# ∘ p ◁ ∘ inits ▪ x)`` if `p` is suffix-closed. Proof. p ◁ ∘ inits ▪ (a : max# ∘ p ◁ ∘ inits ▪ x) =      { definition of `inits` } p ◁ ([] : map (a :) ∘ inits ∘ max# ∘ p ◁ ∘ inits ▪ x) =      { definition of `p ◁` } [] ⊕p (p ◁ ∘ map (a :) ∘ inits ∘ max# ∘ p ◁ ∘ inits ▪ x) =      { Lemma 1 } [] ⊕p (p ◁ ∘ map (a :) ∘ p ◁ ∘ inits ∘ max# ∘ p ◁ ∘ inits ▪ x) =      { Lemma 2 } [] ⊕p (p ◁ ∘ map (a :) ∘ p ◁ ∘ inits ▪ x) =      { Lemma 1 } [] ⊕p (p ◁ ∘ map (a :) ∘ inits ▪ x) =      { definition of `p ◁` } p ◁ ([] : map (a :) ∘ inits ▪ x) =      { definition of `inits` } p ◁ ∘ inits ▪ (a : x) The main proof refers to two “decomposition” lemmas, both of the form `f ∘ g = f ∘ g ∘ f`: • Lemma 1: `p ◁ ∘ map (a:) = p ◁ ∘ map (a:) ∘ p ◁` if `p` is suffix-closed. • Lemma 2: `p ◁ ∘ inits ∘ max# ∘ p ◁ ∘ inits = p ◁ ∘ inits` for all predicates `p`. Both are proved by structural induction.
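Before (or after) doing the inductions, the lemmas can be spot-checked on concrete data. A small sketch of my own that tests Lemma 2, instantiating `p` to the suffix-closed predicate of non-decreasing lists:

```haskell
import Data.List (inits, maximumBy)
import Data.Ord (comparing)

-- Lemma 2:  p ◁ ∘ inits ∘ max# ∘ p ◁ ∘ inits  =  p ◁ ∘ inits
lemma2LHS, lemma2RHS :: ([a] -> Bool) -> [a] -> [[a]]
lemma2LHS p = filter p . inits . maximumBy (comparing length) . filter p . inits
lemma2RHS p = filter p . inits

-- non-decreasing lists; note that ascending [] = True
ascending :: Ord a => [a] -> Bool
ascending xs = and (zipWith (<=) xs (drop 1 xs))
```

On `[1,2,1,3]` both sides evaluate to `[[],[1],[1,2]]`.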
For Lemma 1 we need the conditional distribution rule: ``f (p → x; y) = (p → f x; f y)`` If we are working in CPO we need the side condition that `f` is strict, which is true for the cases below anyway: Lemma 1 ``p ◁ ∘ map (a:) = p ◁ ∘ map (a:) ∘ p ◁`` if `p` is suffix-closed. Proof. Structural induction on the input. Case []: trivial. Case (x : xs): `````` p ◁ ∘ map (a:) ∘ p ◁ ▪ (x : xs) = { definition of p ◁ } p ◁ ∘ map (a:) ▪ (p x → x : p ◁ xs ; p ◁ xs) = { map distributes into conditionals } p ◁ ▪ (p x → (a : x) : map (a :) ∘ p ◁ ▪ xs ; map (a :) ∘ p ◁ ▪ xs) = { p ◁ distributes into conditionals } p x → p ◁ ((a : x) : map (a :) ∘ p ◁ ▪ xs) ; p ◁ ∘ map (a :) ∘ p ◁ ▪ xs = { definition of p ◁ } p x → (p (a : x) → (a : x) : p ◁ ∘ map (a :) ∘ p ◁ ▪ xs ; p ◁ ∘ map (a :) ∘ p ◁ ▪ xs) ; p ◁ ∘ map (a :) ∘ p ◁ ▪ xs = { induction } p x → (p (a : x) → (a : x) : p ◁ ∘ map (a :) ▪ xs ; p ◁ ∘ map (a :) ▪ xs) ; p ◁ ∘ map (a :) ▪ xs = { since p (a : x) ⇒ p x by suffix-closure } p (a : x) → (a : x) : p ◁ ∘ map (a :) ▪ xs ; p ◁ ∘ map (a :) ▪ xs = { definition of p ◁ } p ◁ ((a : x) : map (a :) xs) = { definition of map } p ◁ ∘ map (a :) ▪ (x : xs)`````` For Lemma 2, it is important that `p` is universally quantified. We need the following map-filter exchange rule: ``p ◁ ∘ map (a :) = map (a :) ∘ (p ∘ (a:)) ◁`` The proof goes: Lemma 2 For all predicates `p` we have ``p ◁ ∘ inits ∘ max# ∘ p ◁ ∘ inits = p ◁ ∘ inits`` Proof. Structural induction on the input. Case []: trivial. Case (a : x): `````` p ◁ ∘ inits ∘ max# ∘ p ◁ ∘ inits ▪ (a : x) = p ◁ ∘ inits ∘ max# ∘ p ◁ ▪ ([] : map (a :) (inits x))`````` Consider two cases: 1. Case `null (p ◁ ∘ map (a :) ∘ inits ▪ x)`: If `¬ p []`, both sides are undefined. Otherwise: `````` ... = p ◁ ∘ inits ∘ max# ▪ [[]] = p ◁ ∘ inits ▪ [] = [[]] = p ◁ ▪ ([] : p ◁ ∘ map (a :) ∘ inits ▪ x) = p ◁ ∘ inits ▪ (a : x)`````` 2. Case `¬ (null (p ◁ ∘ map (a :) ∘ inits ▪ x))`: `````` ...
= p ◁ ∘ inits ∘ max# ∘ p ◁ ∘ map (a :) ∘ inits ▪ x = { map-filter exchange } p ◁ ∘ inits ∘ max# ∘ map (a :) ∘ (p ∘ (a:)) ◁ ∘ inits ▪ x = { since max# ∘ map (a :) = (a :) ∘ max# } p ◁ ∘ inits ∘ (a :) ∘ max# ∘ (p ∘ (a :)) ◁ ∘ inits ▪ x = { definition of inits } p ◁ ([] : map (a :) ∘ inits ∘ max# ∘ (p ∘ (a :)) ◁ ∘ inits ▪ x) = { definition of p ◁ } [] ⊕p (p ◁ ∘ map (a :) ∘ inits ∘ max# ∘ (p ∘ (a :)) ◁ ∘ inits ▪ x) = { map-filter exchange } [] ⊕p (map (a :) ∘ (p ∘ (a :)) ◁ ∘ inits ∘ max# ∘ (p ∘ (a :)) ◁ ∘ inits ▪ x) = { induction } [] ⊕p (map (a :) ∘ (p ∘ (a :)) ◁ ∘ inits ▪ x) = { map-filter exchange } [] ⊕p (p ◁ ∘ map (a :) ∘ inits ▪ x) = { definition of p ◁ } p ◁ ( [] : map (a :) ∘ inits ▪ x) = { definition of inits } p ◁ ∘ inits ▪ (a : x)`````` # Algebra of programming in Agda: dependent types for relational program derivation S-C. Mu, H-S. Ko, and P. Jansson. In Journal of Functional Programming, Vol. 19(5), pp. 545-579. Sep. 2009 [PDF] Relational program derivation is the technique of stepwise refining a relational specification to a program by algebraic rules. The program thus obtained is correct by construction. Meanwhile, dependent type theory is rich enough to express various correctness properties to be verified by the type checker. We have developed a library, AoPA, to encode relational derivations in the dependently typed programming language Agda. A program is coupled with an algebraic derivation whose correctness is guaranteed by the type system. Two non-trivial examples are presented: an optimisation problem, and a derivation of quicksort where well-founded recursion is used to model terminating hylomorphisms in a language with inductive types. This article extends the paper we published in Mathematics of Program Construction 2008. Code accompanying the paper has been developed into an Agda library AoPA. # AoPA — Algebra of Programming in Agda 2011.06.01 Part of the library is updated to use universe polymorphism, and it now type checks with Agda 2.2.11.
This is a temporary update yet to be finished. The unfinished parts are commented out in Everything.agda. An Agda library accompanying the paper Algebra of Programming in Agda: Dependent Types for Relational Program Derivation, developed in co-operation with Hsiang-Shang Ko and Patrik Jansson. Dependent type theory is rich enough to express that a program satisfies an input/output relational specification, but it could be hard to construct the proof term. On the other hand, squiggolists know very well how to show that one relation is included in another by algebraic reasoning. The AoPA library allows one to encode Algebra of Programming style program derivation, both functional and relational, in Agda. ### Example The following is a derivation of insertion sort in progress: ```isort-der : ∃ (\f → ordered? ○ permute ⊒ fun f ) isort-der = (_ , (   ⊒-begin       ordered? ○ permute   ⊒⟨ (\vs -> ·-monotonic ordered? (permute-is-fold vs)) ⟩       ordered? ○ foldR combine nil   ⊒⟨ foldR-fusion ordered? ins-step ins-base ⟩       foldR (fun (uncurry insert)) nil   ⊒⟨ { foldR-to-foldr insert []}0 ⟩       { fun (foldr insert [])   ⊒∎ }1))``` ``` ``` ```isort : [ Val ] -> [ Val ] isort = proj₁ isort-der ``` The type of `isort-der` is a proposition that there exists a function `f` that is contained in `ordered? ◦ permute`, a relation mapping a list to one of its ordered permutations. The proof proceeds by derivation from the specification towards the algorithm. The first step exploits monotonicity of `◦` and that `permute` can be expressed as a fold. The second step makes use of relational fold fusion. The shaded areas denote interaction points — fragments of (proof) code to be completed. The programmer can query Agda for the expected type and the context of the shaded expression. When the proof is completed, an algorithm `isort` is obtained by extracting the witness of the proposition. It is an executable program that is backed by the type system to meet the specification.
The complete program is in the Example directory of the code. ### The Code The code consists of the following files and folders: • AlgebraicReasoning: a number of modules supporting algebraic reasoning. At present we implement our own because the `PreorderReasoning` module in earlier versions of the Standard Library was not expressive enough for our needs. We may adapt to the new Standard Library later. • Data: defining relational fold, unfold, hylomorphism (using well-founded recursion), the greedy theorem, and the converse-of-a-function theorem, etc., for lists and binary trees. • Examples: currently we have prepared four examples: a functional derivation of the maximum segment sum problem, relational derivations of insertion sort and quicksort (following the paper Functional Algorithm Design by Richard Bird), and solving an optimisation problem using the greedy theorem. • Relations: modules defining various properties of relations. • Sets: a simple encoding of sets, upon which Relations are built. To grab the latest code, install darcs and check out the code from the repository: ```darcs get http://pc-scm.iis.sinica.edu.tw/repos/AoPA ``` AoPA makes use of the Standard Library, to install which you will also need darcs. # Maximum segment sum is back: deriving algorithms for two segment problems with bounded lengths S-C. Mu. In Partial Evaluation and Program Manipulation (PEPM ’08), pp 31-39. January 2008. (20/74) [PDF] [GZipped Postscript] It may be surprising that variations of the maximum segment sum (MSS) problem, a textbook example for the squiggolists, are still active topics for algorithm designers. In this paper we examine the new developments from the view of relational program calculation. It turns out that, while the classical MSS problem is solved by the Greedy Theorem, by applying the Thinning Theorem we get a linear-time algorithm for MSS with an upper bound on length.
To derive a linear-time algorithm for the maximum segment density problem, on the other hand, we propose a variation of thinning based on an extended notion of monotonicity. The concepts of left-negative and right-screw segments emerge from the search for monotonicity conditions. The efficiency of the resulting algorithms crucially relies on exploiting properties of the set of partial solutions and designing efficient data structures for them. # Maximum Segment Sum, Agda Approved To practise using the Logic.ChainReasoning module in Agda, Josh proved the foldr-fusion theorem, which he learnt in the program derivation lecture in FLOLAC where we used the maximum segment sum (MSS) as the main example. Seeing his proof, I was curious to know how much program derivation I could do in Agda and tried coding the entire derivation of MSS. I thought it would be a small exercise I could do over the weekend, but I ended up spending the entire week. As a reminder, given a list of (possibly negative) numbers, the MSS problem is about computing the maximum sum among all its consecutive segments. Typically, the specification is: `````` mss = max ○ map⁺ sum ○ segs `````` where `segs = concat⁺ ○ map⁺ inits ○ tails` computes all segments of a list. A dependent pair is defined by: `````` data _∣_ (A : Set) (P : A -> Set) : Set where sub : (x : A) -> P x -> A ∣ P `````` such that `sub x p` is a pair where the type of the second component `p` depends on the value of the first component `x`. The idea is to use a dependent pair to represent a derivation: `````` mss-der : (x : List Val) -> (Val ∣ \m -> (max ○ map⁺ sum ○ segs) x ≡ m) mss-der x = sub ? (chain> (max ○ map⁺ sum ○ segs) x === ?) `````` It says that `mss-der` is a function taking a list `x` and returning a value of type `Val`, with the constraint that the value returned must be equal to `(max ○ map⁺ sum ○ segs) x`.
My wish was to use the interactive mechanism of the Agda Emacs mode to “derive” the parts marked by `?`, until we come to an implementation: `````` mss-der : (x : List Val) -> (Val ∣ \m -> (max ○ map⁺ sum ○ segs) x ≡ m) mss-der x = sub RESULT (chain> (max ○ map⁺ sum ○ segs) x === ... === ... === RESULT) `````` If it works well, we can probably use Agda as a tool for program derivation! Currently, however, I find it harder to use than expected, perhaps due to my being unfamiliar with the way Agda reports type errors. Nevertheless, Agda does force me to get every detail right. For example, the usual definition of `max` I would use in a paper would be: `````` max = foldr _↑_ -∞ `````` But then I would have to define numbers with a lower bound -∞. A sloppy alternative definition: `````` max = foldr _↑_ 0 `````` led me to prove a base case `0 ↑ max x ≡ max x`, which is not true. That the definition does work in practice relies on the fact that `segs` always returns the empty list as one of the possible segments. Alternatively, I could define `max` on non-empty lists only: `````` max : List⁺ Val -> Val max = foldr⁺ _↑_ id `````` where `List⁺ A` is defined by: `````` data List⁺ (A : Set) : Set where [_]⁺ : A -> List⁺ A _::⁺_ : A -> List⁺ A -> List⁺ A `````` and refine the types of `inits`, `tails`, etc., to return non-empty lists. This is the approach I took.
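The same base-case trap can be seen in Haskell: `foldr max 0` is only adequate for MSS because `segs` always contributes the empty segment, whose sum is 0; as a general maximum it is wrong. A sketch of the non-empty-list alternative, using the `:|` constructor of `Data.List.NonEmpty` (the name `maxNE` is mine):

```haskell
import Data.List.NonEmpty (NonEmpty (..))

-- max over a non-empty list: the head serves as the base case, so no
-- artificial unit element (-∞ or 0) is needed
maxNE :: Ord a => NonEmpty a -> a
maxNE (x :| xs) = foldr max x xs
```

On an all-negative list, `foldr max 0 [-3,-1,-7]` wrongly reports `0`, while `maxNE ((-3) :| [-1,-7])` gives `-1`.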
Eventually, I was able to give a derivation of `mss` this way: `````` mss-der : (x : List Val) -> (Val ∣ \m -> (max ○ map⁺ sum ○ segs) x ≡ m) mss-der x = sub ((max ○ scanr _⊗_ ø) x) (chain> (max ○ map⁺ sum ○ segs) x === (max ○ map⁺ sum ○ concat⁺ ○ map⁺ inits ○ tails) x by refl === (max ○ concat⁺ ○ map⁺ (map⁺ sum) ○ map⁺ inits ○ tails) x by cong max (sym (concat⁺-map⁺ ((map⁺ inits ○ tails) x))) === (max ○ map⁺ max ○ map⁺ (map⁺ sum) ○ map⁺ inits ○ tails) x by max-concat⁺ ((map⁺ (map⁺ sum) ○ map⁺ inits ○ tails) x) === (max ○ map⁺ max ○ map⁺ (map⁺ sum ○ inits) ○ tails) x by cong (\xs -> max (map⁺ max xs)) (sym (map⁺-compose (tails x))) === (max ○ map⁺ (max ○ map⁺ sum ○ inits) ○ tails) x by cong max (sym (map⁺-compose (tails x))) === (max ○ map⁺ (foldr _⊗_ ø) ○ tails) x by cong max (map⁺-eq max-sum-prefix (tails x)) === (max ○ scanr _⊗_ ø) x by cong max (scanr-pf x) ) where _⊕_ : Val -> List⁺ Val -> List⁺ Val a ⊕ y = ø ::⁺ map⁺ (_+_ a) y _⊗_ : Val -> Val -> Val a ⊗ b = ø ↑ (a + b) `````` where `max-sum-prefix` consists of two fold fusions: `````` max-sum-prefix : (x : List Val) -> max (map⁺ sum (inits x)) ≡ foldr _⊗_ ø x max-sum-prefix x = chain> max (map⁺ sum (inits x)) === max (foldr _⊕_ [ ø ]⁺ x) by cong max (foldr-fusion (map⁺ sum) lemma1 x) === foldr _⊗_ ø x by foldr-fusion max lemma2 x where lemma1 : (a : Val) -> (xs : List⁺ (List Val)) -> map⁺ sum (ini a xs) ≡ (a ⊕ (map⁺ sum xs)) lemma1 a xs = ? lemma2 : (a : Val) -> (x : List⁺ Val) -> max (a ⊕ x) ≡ (a ⊗ max x) lemma2 a x = ? `````` The two lemmas are given in the files attached below. Having the derivation, `mss` is given by: `````` mss : List Val -> Val mss x = depfst (mss-der x) `````` It is an executable program that is proved to be correct. The complete Agda program is split into five files: • MSS.agda: the main program importing all the sub-modules. • ListProperties.agda: some properties I need about lists, such as fold fusion, `concat ○ map (map f) = map f ○ concat`, etc.
Later in the development I realised that I should switch to non-empty lists, so not all of the properties here are used. • List+.agda: non-empty lists and some of their properties. • Derivation.agda: the main derivation of MSS. The derivation is parameterised over any numerical data and operators `+` and `↑` such that `+` is associative, and `a + (b ↑ c) = (a + b) ↑ (a + c)`. The reason for this parameterisation, however, was that I did not know how to prove the properties above, until Josh showed me the proof. • IntRNG.agda: proofs from Josh that Data.Int actually satisfies the properties above. (Not quite complete yet.) # The Pruning Theorem: Thinning Based on a Loose Notion of Monotonicity The reason I studied the thinning theorem again is that I need a slightly generalised variation. The following seems to be what I need. The general idea and the term “pruning” emerged from discussion with Sharon Curtis. The term “lax preorder” is invented by myself. I am not good at naming, and welcome suggestions for better names. The notation below is mostly taken from the book Algebra of Programming. Not many people, even among functional programmers, are familiar with these notations involving relational intersection, division, etc. One starts to appreciate their benefits once he/she gets used to using their calculation rules. Most of the time when I was doing the proof, I was merely manipulating the symbols. I could not have managed the complexity if I had to fall back to the semantics and think about what they “mean” all the time. A relation Q :: PA ← A, between a set of A's and an element, is called a lax preorder if it is 1. reflexive, in the sense that ∋ ⊆ Q, and 2. transitive, in the sense that (Q/∋) . Q ⊆ Q. A relation S :: A ← FA is monotonic on lax preorder Q if S . FQ˘ ⊆ Q˘ . Λ(S . F∈). Given a lax preorder, we define: thin Q's counterpart prune Q = ∈\∈ ∩ Q/∋ The definition induces a universal property. Any preorder R induces a lax preorder ∋ . R.
If a relation S is monotonic on R, it is monotonic on lax preorder ∋ . R. Furthermore, prune (∋ . R) = thin R. Therefore, pruning is a generalisation of thinning. We need the notion of lax preorders because, for some problems, the generating relation S is monotonic on a lax preorder, but not on a preorder. Theorem: if S is monotonic on lax preorder Q, we have: fold (prune Q . Λ(S . F∈)) ⊆ prune Q . Λ(fold S) Proof. Since Λ(fold S) = fold (Λ(S . F∈)), by fold fusion, the theorem holds if prune Q . Λ(S . F∈) . F(prune Q) ⊆ prune Q . Λ(S . F∈) By the universal property of prune, the above is equivalent to the two inclusions ∈ . prune Q . Λ(S . F∈) . F(prune Q) ⊆ S . F∈ and prune Q . Λ(S . F∈) . F(prune Q) . (S . F∈)˘ ⊆ Q The first inclusion is proved by: ∈ . prune Q . Λ(S . F∈) . F(prune Q) ⊆     { since prune Q ⊆ ∈\∈ } ∈ . ∈\∈ . Λ(S . F∈) . F(prune Q) ⊆     { division } ∈ . Λ(S . F∈) . F(prune Q) =     { power transpose } S . F∈ . F(prune Q) ⊆     { since prune Q ⊆ ∈\∈ } S . F∈ . F(∈\∈) ⊆     { division } S . F∈ And the second by: prune Q . Λ(S . F∈) . F(prune Q) . (S . F∈)˘ ⊆     { since prune Q ⊆ Q/∋, converse } prune Q . Λ(S . F∈) . F(Q/∋) . F∋ . S˘ ⊆     { division } prune Q . Λ(S . F∈) . FQ . S˘ ⊆     { monotonicity: FQ . S˘ ⊆ Λ(S . F∈)˘ . Q } prune Q . Λ(S . F∈) . Λ(S . F∈)˘ . Q ⊆     { since Λ(S . F∈) is a function, that is, f . f˘ ⊆ id } prune Q . Q ⊆     { since prune Q ⊆ Q/∋, division } Q/∋ . Q ⊆     { since Q transitive } Q Endproof. The proof above uses transitivity of Q but not reflexivity. I need reflexivity to construct base cases, for example, to come up with this specialised Pruning Theorem for lists: foldr (prune Q . Λ(S . (id × ∈))) {e} ⊆ prune Q . Λ(foldr S e) if S . (id × Q˘) ⊆ Q˘ . Λ(S . (id × ∈)). # Proving the Thinning Theorem by Fold Fusion Algebra of Programming records proofs of the greedy and the thinning theorems that are slightly shorter than proofs using fold fusion.
Of course, one can still use fold fusion. In fact, proving them by fold fusion is an exercise in Chapter 8 (PDF) of Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, among whose authors I am listed. A while ago, when I needed to consider some variations of the thinning theorem, I tried to do the proof again. And, horrifyingly, I could not do it anymore! Had my skills become rusty due to lack of practice in the past few years? In panic, I spent an entire afternoon fighting with it, until I realised that it was just a typical copying error from the very beginning: when I copied a property from the book I put in an extra Λ. Then I trapped myself in the maze of expanding ΛR into ∈\R ∩ (R\∈) and using the modular law and …. Having fixed the error, I got my trivial and easy proof back again. Anyway, I am going to record it below, in case I run into the same panic again. Given a preorder Q, the relation thin Q is defined by: thin Q = ∈\∈ ∩ (∋ . Q)/∋ The definition induces a universal property. And here are some basic properties we will make use of later: ∈ . ΛS = S       (power transpose) ΛR . R˘ ⊆ ∋ R . R\S ⊆ S,       R/S . S ⊆ R       (division) ### The Thinning Theorem The thinning theorem says: Theorem: if S is monotonic on preorder Q, that is, S . FQ˘ ⊆ Q˘ . S, we have: fold (thin Q . Λ(S . F∈)) ⊆ thin Q . Λ(fold S) Proof. By fold fusion, the theorem holds if thin Q . Λ(S . F∈) . F(thin Q) ⊆ thin Q . Λ(S . F∈) By the universal property of thin, the above inclusion is equivalent to the two inclusions ∈ . thin Q . Λ(S . F∈) . F(thin Q) ⊆ S . F∈ and thin Q . Λ(S . F∈) . F(thin Q) . (S . F∈)˘ ⊆ ∋ . Q The first inclusion is proved by: ∈ . thin Q . Λ(S . F∈) . F(thin Q) ⊆ { since thin Q ⊆ ∈\∈ } ∈ . ∈\∈ . Λ(S . F∈) . F(thin Q) ⊆ { division } ∈ . Λ(S . F∈) . F(thin Q) = { power transpose } S . F∈ . F(thin Q) ⊆ { since thin Q ⊆ ∈\∈ } S . F∈ . F(∈\∈) ⊆ { division } S .
F∈ And the second by: thin Q . Λ(S . F∈) . F(thin Q) . (S . F∈)˘ ⊆ { since thin Q ⊆ (∋ . Q)/∋, converse } thin Q . Λ(S . F∈) . F((∋ . Q)/∋) . F∋ . S˘ ⊆ { functor, division } thin Q . Λ(S . F∈) . F(∋ . Q) . S˘ ⊆ { monotonicity: FQ . S˘ ⊆ S˘ . Q } thin Q . Λ(S . F∈) . F∋ . S˘ . Q ⊆ { since ΛR . R˘ ⊆ ∋ } thin Q . ∋ . Q ⊆ { since thin Q ⊆ (∋ . Q)/∋, division } ∋ . Q . Q ⊆ { since Q transitive } ∋ . Q Endproof. By the way, the variation of the thinning theorem I need is “fold (thin Q . Λ(S . F∈)) ⊆ thin Q . Λ(fold S) if S . F(Q˘ . ∈) ⊆ Q˘ . S . F∈”, whose proof is, luckily, trivial once you write down the original proof.
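To get an operational feel for what `thin Q` computes, here is a crude functional sketch of my own — not a faithful model of the relation (which may nondeterministically return any subset whose members dominate all candidates), but the canonical choice of keeping exactly the candidates not dominated under a preorder `q`:

```haskell
-- q x y reads "y is at least as good as x"; keep only candidates that
-- no kept candidate dominates
thin :: (a -> a -> Bool) -> [a] -> [a]
thin q = foldr keep []
  where
    keep x ys
      | any (q x) ys = ys                          -- x is dominated: drop it
      | otherwise    = x : filter (\y -> not (q y x)) ys
```

With a total preorder this degenerates into keeping a single maximum; the pay-off is with partial preorders, where several incomparable partial solutions survive — the situation in which thinning (and pruning) is actually used.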
Finance and Economics Discussion Series: 2007-14 # Diagnosing and Treating Bifurcations in Perturbation Analysis of Dynamic Macro Models* Keywords: bifurcation, perturbation, relative price distortion, optimal monetary policy Abstract: In perturbation analysis of nonlinear dynamic systems, the presence of a bifurcation implies that the first-order behavior of the economy cannot be characterized solely in terms of the first-order derivatives of the model equations. In this paper, we use two simple examples to illustrate how to detect the existence of a bifurcation. Following the general approach of Judd (1998), we then show how to apply l'Hospital's rule to characterize the solution of each model in terms of its higher-order derivatives. We also show that in some cases the bifurcation can be eliminated through renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying l'Hospital's rule to the original formulation. JEL Classification: C63; C61; E52. # 1 Introduction In recent analysis of nonlinear dynamic macroeconomic models, the characterization of their first-order dynamics has been an important step in understanding theoretical implications and evaluating empirical success. However, the presence of a bifurcation in perturbation analysis of nonlinear dynamic systems implies that the first-order behavior of the economy cannot be characterized solely in terms of the first-order derivatives of the model equations. In this paper, we use two simple macroeconomic models to address several issues regarding bifurcations. In particular, the bifurcation problem emerges in conjunction with the price dispersion generated by staggered price setting on the part of firms. We then show how to apply l'Hospital's rule to characterize the solution of each model in terms of its higher-order derivatives.
We also show that in some cases the bifurcation can be eliminated through renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying l'Hospital's rule to the original formulation. Before presenting our results, it is noteworthy that our definition of bifurcation is distinct from the one analyzed in Benhabib and Nishimura (1979). In particular, their analysis of bifurcation is associated with the time evolution of dynamic systems, whereas our concern with bifurcation arises in the process of approximating nonlinear equations, as discussed in Judd (1998). We proceed as follows. Section 2 describes the two examples and illustrates how to detect the existence of a bifurcation problem. Section 3 follows the general approach of Judd (1998) and applies l'Hospital's rule to characterize the first-order behavior of each model. Section 4 shows how the bifurcation can be eliminated through renormalization of model variables. Section 5 concludes.

# 2 Diagnosis of Bifurcations

This section discusses how we can detect the existence of a bifurcation in two simple economies. In both models, Calvo-style price setting behavior of firms can be summarized by the following law of motion for the relative price distortion: (2.1) where the distortion index is defined as The parameters and represent the percentage of firms that cannot change their price in each period and the elasticity of substitution across goods, respectively. The variable is the gross inflation rate of the price index aggregated over firms.

## 2.1 A Single-Equation Setting

To discuss the issue of bifurcation, we have to close the model with another equation. In the first example, we simply assume that inflation follows an exogenous stochastic process, where the logarithm of follows a mean zero process.
We can rationalize this process in terms of monetary policy by a version of strict inflation targeting around the exogenous process or a version of strict output-gap targeting in a model with cost-push shocks. By combining the two equations, we now have a single-equation model: (2.2) Since this equation is backward looking, this exact nonlinear form can be used for any dynamic analysis. However, we suppose that we have to rely on approximation methods to analyze this model, as would be the case when there are forward-looking equations. Woodford (2003) and Benigno and Woodford (2005) pointed out that, when deviations of the (net) inflation rate from its zero steady state are of first order in terms of exogenous variations, deviations of the distortion index from one are of second order. Based on this observation, one can naturally approximate the system with respect to the square root of the logarithm of the relative price distortion index. Note that this distortion index is unity at the steady state with a zero inflation rate. We follow the convention of using lower case for log deviations. Specifically, corresponds to the approximation in Woodford (2003) and Benigno and Woodford (2005). It will also be shown in Section 4 that can be used as the basis of an alternative approximation. Under the choice of as the approximation variable, (2.2) can be rewritten as follows: (2.3) Now let's see what happens if we try a Taylor approximation of this system with respect to and . It is easy to see that the derivative with respect to the endogenous variable would be zero at the steady state. Based on this zero derivative, we can diagnose the bifurcation problem in this case. Put another way, the implicit function theorem cannot be applied when the derivative with respect to the endogenous variable is zero. It is instructive to see what happens if we feed this case into computer codes commonly available for dynamic macroeconomic analysis.
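To make the diagnosis concrete, here is a hypothetical scalar example (invented for illustration, not the model equation (2.3), whose symbols are omitted above): the implicit relation f(y, x) = y² − x² = 0 has a transcritical bifurcation at the origin, the derivative with respect to the endogenous variable vanishes at the steady state, and l'Hospital's rule recovers the solution branches.

```python
# Hypothetical illustration: f(y, x) = y**2 - x**2 = 0 with "steady state"
# (y, x) = (0, 0), where y is endogenous and x is exogenous.

def f(y, x):
    return y**2 - x**2

def df_dy(y, x, h=1e-6):
    # central-difference derivative with respect to the endogenous variable
    return (f(y + h, x) - f(y - h, x)) / (2 * h)

# Diagnosis: the implicit function theorem needs df/dy != 0 at the steady
# state, but here the derivative vanishes, so regular perturbation fails.
print(df_dy(0.0, 0.0))   # 0.0

# Treatment: applying l'Hospital's rule to y'(x) = -f_x / f_y at the
# bifurcation point yields (y')**2 = 1, i.e. two branches y = x and y = -x,
# both of which solve f(y, x) = 0 exactly:
for x in (0.1, 0.5, 1.0):
    assert f(x, x) == 0.0 and f(-x, x) == 0.0
```

This is the one-dimensional analogue of the diagnosis in the text: a zero derivative with respect to the endogenous variable at the steady state signals that first-order information alone cannot pin down the solution.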
The Dynare package (version 3.05) produces an error message saying 'Warning: Matrix is singular to working precision', and AIM (developed by Gary Anderson and George Moore, and widely used at the Federal Reserve Board) returns a code indicating 'Aim: too many exact shiftrights'. The routine developed by Christopher Sims (gensys.m) ends without any output or error message.

## 2.2 A Multi-Dimensional Setting

The second example is a case with multiple equations. Our example is a prototypical Calvo-style sticky-price model, and the optimal policy problem is to maximize household welfare subject to the following four constraints: the law of motion for relative price distortions, the social resource constraint, the firms' profit maximization condition, and the present-value budget constraint of the household. However, it is shown in Yun (2005) that the optimal policy problem can be reduced to minimizing the index for relative price distortion (2.1). At the optimum, we have the following relationship: (2.4) Therefore, the solution to the optimal policy problem can be represented with the following bivariate nonlinear system: (2.5) As in the single-equation case, we start with a normalization according to which and are endogenous variables and is exogenous: where is the net inflation rate. When there are multiple equations in the system, the assumption of the implicit function theorem involves the non-singularity of the Jacobian. Computing the determinant of the Jacobian, we find that it vanishes at the steady state. Since the Jacobian is singular, the implicit function theorem cannot be applied and the regular perturbation method does not work. We need to rely on the bifurcation method.

# 3 Resolution of Bifurcations

As explained in Judd (1998) and Judd and Guu (2001), the bifurcation problem can be resolved by using l'Hospital's rule.
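The same diagnosis extends to the multi-dimensional case by checking the determinant of the Jacobian with respect to the endogenous variables. A sketch with a hypothetical bivariate system (invented for illustration, not the system (2.5)):

```python
# Hypothetical bivariate system with endogenous (y1, y2) and exogenous x:
#   F1(y1, y2; x) = y1 - y2**2 - x**2 = 0
#   F2(y1, y2; x) = y2**2 - x**2     = 0
# Its Jacobian with respect to (y1, y2) is singular at the steady state (0, 0; 0).

def F(y1, y2, x):
    return (y1 - y2**2 - x**2, y2**2 - x**2)

def jacobian(y1, y2, x, h=1e-6):
    # central-difference Jacobian with respect to the endogenous variables
    rows = []
    for i in range(2):
        f_y1 = (F(y1 + h, y2, x)[i] - F(y1 - h, y2, x)[i]) / (2 * h)
        f_y2 = (F(y1, y2 + h, x)[i] - F(y1, y2 - h, x)[i]) / (2 * h)
        rows.append((f_y1, f_y2))
    return rows

(a, b), (c, d) = jacobian(0.0, 0.0, 0.0)
det = a * d - b * c
print(det)   # 0 at the steady state -> implicit function theorem fails
```

This is the numerical version of the singularity check in the text: a vanishing Jacobian determinant at the steady state is the multi-equation signature of a bifurcation.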
## 3.1 A Single-Equation Setting

To understand the approximated behavior of in the single-equation example, we need to compute and where is defined as an implicit function as follows: In cases for which regular perturbation analysis could be applied, the first-order approximation of would come from the implicit function theorem as follows: The number in the parenthesis indicates the order of approximation. However, the assumption of the implicit function theorem does not hold in our case since . We need to adopt an advanced asymptotic method--the bifurcation method in this case. Noting that the derivatives in the numerators are also zero at the steady state, we apply l'Hospital's rule to the two ratios in the form of and obtain the following first-order approximation:1 (3.6) This is an example of a transcritical bifurcation.2 In this single-equation model, it is easy to avoid the bifurcation problem when we consider the following equation that is equivalent to (2.3), The derivative becomes nonzero, so the assumption of the implicit function theorem is satisfied. However, we still have to use l'Hospital's rule in computing the derivatives with respect to the exogenous variables: and .

## 3.2 A Multi-Dimensional Setting

To illustrate how we can invoke the bifurcation method in the multi-dimensional example, we substitute the second equation in (2.5) into the first to obtain a single implicit equation. Were the assumptions of the bifurcation theorem to hold, then differentiation of the implicit expression with respect to would produce the equation. However, since both derivatives on the right-hand side are zero at the steady state, we need to apply l'Hospital's rule to compute . The first-order solution for is and the second-order accurate expression for inflation is (3.7) Note that the dependence of on is purely quadratic (i.e. the coefficient on the linear term is zero) around the steady state with a zero inflation rate.
# 4 Renormalization of Model Variables

The presence of bifurcations is related not only to the economic model at hand, but also to the choice of the variable with respect to which the Taylor approximation is applied. This section shows that the bifurcation can be eliminated through renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying l'Hospital's rule to the original formulation.

## 4.1 A Single-Equation Setting

In the single-equation setting, if we can approximate the model with respect to and instead of and , then the bifurcation problem would not emerge.3 To see this, rewrite (2.2) as follows: With this renormalization, the second-order Taylor approximation of yields the second-order solution for the endogenous variable: This choice of expansion variable implies that, when the initial relative price distortion is of first--rather than purely second--order, the current relative price distortion is also of first order. That is, the relative price distortion is of the same order of magnitude as the shocks. This equation differs from what we would obtain by squaring both sides of (3.6) because the renormalization leads to the presence of the term. Under this renormalization, the expression for the relative price distortion is richer--and more accurate--than (3.6) derived using l'Hospital's rule. Another renormalization that produces a solution similar to (3.6) is to approximate with respect to (instead of ). This alternative way is based on the interpretation that the initial relative price distortion is of second order.
Specifically, we rewrite the model as and the second-order behavior of the endogenous variable becomes purely quadratic, Since this expression is purely second order, it is consistent with the results under the timeless perspective--a la Woodford (2003) and Benigno and Woodford (2005)--that the relative price distortions are zero when we focus solely on the first-order approximation.4

## 4.2 A Multi-Dimensional Setting

In the multi-dimensional case, the two ways of renormalization would correspond to and Either way, the determinant of the Jacobian is nonzero, and the implicit function theorem can be applied. The computer codes written for the regular perturbation methods would work. According to the first renormalization, the second-order approximation of (2.1) is (4.8) and the logarithmic transformation of (2.4) is Therefore, the second-order solution of this problem would be It is worth pointing out that, according to this renormalization, the first-order relationship between inflation and relative price distortions ( ) replicates the exact nonlinear relationship (2.4). The alternative renormalization consistent with the timeless perspective is to adopt (instead of ) as an exogenous variable. Based on this choice of an expansion parameter, Woodford (2003) concluded that the optimal inflation rate is zero to the first order in the absence of cost-push shocks. Under this normalization, the two model equations are approximated as follows: The second-order solution to this system of equations would be purely quadratic5 The first-order approximation of this solution is consistent with the optimality of zero inflation, as derived in the linear-quadratic approximation by Woodford (2003), Benigno and Woodford (2005), and Levine, Pearlman and Pierse (2006). Furthermore, the second-order solution for inflation is equivalent to the one obtained via the bifurcation method, (3.7).
## 4.3 Accuracy Comparison

After presenting two different renormalizations, it is natural to compare the approximation errors of these two methods.6 For this purpose, we use as a reference point the closed-form solution to the optimal policy problem (2.5). Specifically, as shown in Yun (2005), the exact nonlinear solution for the optimal inflation rate is (4.9) Note that this closed-form solution is available only when the relative price distortion is the only distortion--due to the assumption that there is an optimal subsidy and there are no cost-push shocks. The optimal rate of inflation is less than zero as long as there are initial price distortions.

#### Figure 1: Renormalizations and Optimal Inflation Rates

The difference between the two methods is that the expansion parameter of the first renormalization is , while that of the second is . Figure 1 compares the accuracy of the two normalizations based on the first-order solution under each normalization.7 The black solid line represents the exact closed-form solution for annualized inflation ( ) in terms of the initial relative distortion ( ). The blue line with crosses is the linear approximation of this nonlinear solution. This corresponds to the first-order approximation of when the expansion parameter is --that is, . It is evident that this approximation is more accurate than the first-order approximation with as the expansion parameter, depicted by the red circles. We can provide an intuitive understanding of the improved accuracy of the approximation with respect to as follows. Since is the square of , the first-order approximation with respect to is equivalent to the second-order approximation with respect to : Note that the equality holds because no linear terms are included in with a zero steady-state inflation rate.

# 5 Conclusion

We have illustrated how to detect the existence of a bifurcation and demonstrated how to apply l'Hospital's rule to characterize the solution.
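The intuition behind the accuracy comparison — a first-order approximation in the squared variable equals a second-order approximation in the original variable when the linear term vanishes — can be checked on a toy function (my own example, not the paper's solution (4.9)):

```python
# Toy function whose argument enters only through s = y**2, so it has no
# linear term in y around y = 0 (mimicking a solution that is purely
# second order in the original expansion variable):
def F(y):
    s = y**2
    return s / (1.0 + s)

y = 0.3
exact = F(y)
approx_in_s = y**2        # first order in the expansion variable s = y**2
approx_in_y = 0.0         # first order in y: the linear term vanishes

err_s = abs(exact - approx_in_s)
err_y = abs(exact - approx_in_y)
print(err_s, err_y)       # expanding in s is markedly more accurate
```

The first-order expansion in s captures the leading curvature that the first-order expansion in y misses entirely, which is the mechanism behind the blue-crosses line beating the red circles in Figure 1.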
We have also shown that the bifurcation can be eliminated through renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying l'Hospital's rule to the original formulation. This paper has focused on the consequences of renormalization for the treatment of bifurcations. However, renormalization is also relevant to the welfare evaluation of different policies, as in Benigno and Woodford (2005).

## Bibliography

Benhabib, Jess, and Kazuo Nishimura. "The Hopf Bifurcation and the Existence and Stability of Closed Orbits in Multisector Models of Optimal Economic Growth." Journal of Economic Theory, December 1979, 21 (3), pp. 421-444.

Benigno, Pierpaolo, and Michael Woodford. "Inflation Stabilization and Welfare: The Case of a Distorted Steady State." Journal of the European Economic Association, December 2005, 3 (6), pp. 1185-1236.

Calvo, Guillermo. "Staggered Prices in a Utility Maximizing Framework." Journal of Monetary Economics, 1983, 12 (3), pp. 383-398.

Judd, Kenneth. Numerical Methods in Economics. MIT Press, 1998.

Judd, Kenneth, and Sy-Ming Guu. "Asymptotic Methods for Asset Market Equilibrium Analysis." Economic Theory, 2001, 18 (1), pp. 127-157.

Levin, Andrew, Alexei Onatski, John Williams, and Noah Williams. "Monetary Policy under Uncertainty in Micro-Founded Macroeconometric Models," in M. Gertler and K. Rogoff, eds., NBER Macroeconomics Annual 2005. Cambridge, MA: MIT Press, 2006.

Levine, Paul, Joseph Pearlman, and Richard Pierse. "Linear-Quadratic Approximation, External Habit and Targeting Rules." Unpublished manuscript, University of Surrey, October 2006.

Schmitt-Grohé, Stephanie, and Martin Uribe. "Optimal Simple and Implementable Monetary and Fiscal Rules." Journal of Monetary Economics, forthcoming, 2006.

Woodford, Michael. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press, 2003.

Yun, Tack.
"Optimal Monetary Policy with Relative Price Distortions." American Economic Review, March 2005, 95 (1), pp. 89-109.

#### Footnotes

* We have benefited from discussion with Gary Anderson, Jean Boivin, Chris Sims, and participants at the Canadian Macroeconomics Study Group meeting in Montreal. The views in this paper are solely the responsibility of the authors and should not be interpreted as reflecting the views of the Board of Governors of the Federal Reserve System or any other person associated with the Federal Reserve System.

† Corresponding Author: Division of Monetary Affairs, Mailstop 71, Federal Reserve Board, Washington, DC 20551. Tel: (202) 452-2981, E-mail: jinill.kim@frb.gov.
Presented at the 51st Cracow School of Theoretical Physics

# Evolving Glasma and Kolmogorov Spectrum

## Abstract

We present a pedagogical introduction to the theoretical framework of the Color Glass Condensate (CGC) and the McLerran-Venugopalan (MV) model. We discuss the application of the MV model to describe the early-time dynamics of the relativistic heavy-ion collision. Without longitudinal fluctuations the classical time evolution maintains boost invariance, while an instability develops once fluctuations that break boost invariance are included. We show that this “Glasma instability” enhances rapidity-dependent variations as long as self-interactions among unstable modes stay weak and the system resides in the linear regime. Eventually the amplitude of the unstable modes becomes so large that the growth of the instability saturates. In this non-linear regime numerical simulations of the Glasma lead to turbulent energy flow from low-frequency modes to higher-frequency modes, which results in a characteristic power-law spectrum. The power found in numerical simulations of the expanding Glasma system turns out to be consistent with Kolmogorov’s scaling.

## 1 Introduction

Relativistic heavy-ion collision experiments have aimed to create a new state of matter out of color-deconfined particles, i.e. a quark-gluon plasma (QGP), in extreme environments in the laboratory. Presumably it was only until a short time after the Big Bang that the Early Universe was still hot enough to realize the QGP in nature. Experimental data on intrinsic properties of the QGP suggest that this new state of QCD matter found in the heavy-ion collision is not a weakly-coupled plasma but rather a strongly-coupled fluid. The hydrodynamic description of the time evolution has successfully reproduced the measured particle distributions, in particular, the azimuthal distribution of emitted particles in non-central collisions.
The great success of the hydrodynamic model is strong evidence for thermalization. After thermal equilibrium is achieved, the time evolution of the QGP is reasonably well under theoretical control, though some uncertainties remain in the determination of the equation of state and in the implementation of dissipative effects. In contrast to the hydrodynamic regime after thermalization, our understanding of the early-time dynamics toward thermalization is still quite limited. This kind of problem is in general one of the most difficult physics challenges. One would naturally anticipate that the system may form a turbulent fluid during the transient stages right after the collision. In fact, turbulence is a common phenomenon in our daily life whenever the Reynolds number exceeds a certain threshold. Nevertheless, it poses a very difficult theory problem even today; Richard Feynman famously described turbulence as “the last great unsolved problem of classical physics”, and it is indeed so also in the context of the QGP study. The good news is that we already have a powerful theoretical tool to investigate the very early-time dynamics of the high-energy heavy-ion collision. One may well think at first glance that any microscopic description of nucleus-nucleus collisions is too complicated to handle from QCD first principles. This is true unless the collision energy is sufficiently high. Indeed, the microscopic structure of a nucleus before the collision is far from simple on its own. The high-energy limit, however, allows for a drastic simplification that makes the calculation feasible. In the Regge limit, precisely speaking, the strong interactions exhibit totally different characteristics from low-energy hadron physics. In this particular case the c.m. energy scale is infinitely larger than other energy scales such as the transferred momentum squared.
Then, although the strong coupling constant is small due to asymptotic freedom with large momentum transfer, a resummation is required for a series of terms that are not small at small values of Bjorken's variable. In terms of Feynman diagrams this resummation represents quantum processes that emit softer gluons successively. Intuitively, as one goes to higher energy and thus smaller longitudinal momentum fraction, one should consider more radiated gluons within each momentum bin. Naturally the wave function of the nucleus is energy-dependent and we should expect more and more gluons inside at smaller momentum fractions. Eventually it becomes most suitable to treat the gluons as coherent fields rather than particles once the gluon density is high enough. This is reminiscent of photons in the Weizsäcker-Williams approximation. In the first approximation in the high-energy limit of QCD, therefore, classical fields are the appropriate ingredients in theoretical computations, as a consequence of quantum radiation. Such a classical description of high-energy QCD is called the Color Glass Condensate (CGC) [1]. In this way, the initial condition for the relativistic heavy-ion collision should be formulated by means of the CGC theory. As long as the gluon distribution function stays large, the CGC picture holds; it is finally superseded by a particle picture of the plasma. The previously unnamed physical state between the CGC and the QGP was given the name Glasma, as a mixture of “glass” and “plasma” [2]. The Glasma time evolution thus follows the classical equations of motion and should hopefully be transformed smoothly into a hydrodynamic regime. In this sense the Glasma should play a central role in figuring out what the initial condition for the hydrodynamic model should look like. Because QCD or the Yang-Mills theory involves gluonic self-interactions, it is generally hard to find an analytical solution of the classical equations of motion except for some simple situations.
Hence, one has to resort to numerical methods for quantitative estimates, and the numerical implementation has been well established. In view of the numerical outputs, however, there is no indication in the Glasma evolution of an approach toward thermalization. If all quantum fluctuations are completely frozen, in particular, the classical dynamics respects boost invariance: the coordinate rapidity η is simply shifted under a boost of the system, which means that the boost-invariant system is insensitive to η. The pressure is then highly anisotropic between the beam-axis direction and the transverse plane. This makes a sharp contrast to thermalized matter, in which the pressure is isotropic. Quantum (or structural – see Sec. 5) fluctuations with η-dependence break boost invariance explicitly. Interestingly enough, it was discovered that a small modulation along the longitudinal direction grows exponentially as a function of time and η-dependent modes show instability, which is sometimes referred to as the “Glasma instability” [3]. It is of paramount importance to understand the nature of the Glasma instability to fill in the missing link between the CGC initial state and the initial condition for hydrodynamics. There are several theoretical attempts to account for the qualitative features of the Glasma instability [4, 5, 6, 7, 8, 9]. Here, instead of doing so, we will think about subsequent phenomena at later stages: turbulence may be formed by the instability growth, and then it is sensible to anticipate the decay of turbulence and the associated scaling law in the energy spectrum [10]. This article contains lectures on the basic facts of the MV model for those who are not necessarily familiar with QCD at high energy. In Sec. 2 a stationary-point approximation of the functional integration is introduced, which leads to a classical treatment of high-energy QCD problems. The classical equations of motion in the pure Yang-Mills theory are further discussed in Sec. 3.
Then, some numerical results for the rapidity-independent case are presented in Sec. 4, and those for the case with rapidity-dependent fluctuations in Sec. 5. Some evidence is presented for the realization of power-law scaling in the energy spectrum in the non-linear regime where the instability stops. Section 6 is devoted to outlooks.

## 2 Scattering amplitude and the Eikonal approximation

We will see the essence of the Eikonal approximation and the scattering amplitudes of our interest in QCD physics. We first consider the case of a light projectile and a dense target, and then proceed to the case of a dense projectile and a dense target.

### 2.1 Eikonal approximation

Before addressing the QCD application, we shall first consider a scattering problem in non-relativistic Quantum Mechanics. To solve a problem of potential scattering, we should treat the Schrödinger equation,

$$\Bigl[-\frac{\nabla^2}{2m}+V(\boldsymbol{r})\Bigr]\psi(\boldsymbol{r})=E\,\psi(\boldsymbol{r}) \qquad (1)$$

with the following boundary condition at large distance (r → ∞):

$$\psi(\boldsymbol{r})=e^{ikz}+f(\Omega)\,\frac{e^{ikr}}{r}. \qquad (2)$$

The term involving f(Ω) represents the scattered wave, and one can obtain the scattering amplitude from it. The lowest-order estimate immediately gives an expression in the Born approximation. In the high-energy limit, however, the scattering angle is small and the incident and the scattered waves interfere strongly. In this situation the following Ansatz is more convenient,

$$\psi(\boldsymbol{r})=e^{ikz}\,\hat\psi(\boldsymbol{r}), \qquad (3)$$

which is called the Eikonal approximation in analogy with the terminology in Optics. The wavelength of the incident wave is shorter than the potential range when k is large enough, and the differential equation for the reduced wave function is

$$\Bigl[v\,\hat p_z-\frac{\nabla^2}{2m}+V(\boldsymbol{r})\Bigr]\hat\psi(\boldsymbol{r})=0 \qquad (4)$$

with v = k/m. Then, in the high-energy limit, the second term is negligible compared to the first, and the equation is easily integrated to give

$$\hat\psi(\boldsymbol{r})=\exp\Bigl[-\frac{i}{v}\int_{-\infty}^{z}dz'\,V(x,y,z')\Bigr]. \qquad (5)$$

In the gauge theory the potential is replaced by the gauge field.
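The eikonal phase (5) is easy to check numerically. The sketch below evaluates the longitudinal integral for a spherically symmetric Gaussian potential V(r) = V₀ exp(−r²/a²) at impact parameter b, where the integral has a closed form; the potential shape and parameter names are illustrative choices, not from the text:

```python
import math

# chi(b) = -(1/v) * Integral dz V(sqrt(b^2 + z^2))
# For the Gaussian potential this is chi(b) = -(V0 a sqrt(pi)/v) exp(-b^2/a^2).

def chi_numeric(b, V0=1.0, a=1.0, v=1.0, zmax=20.0, n=20001):
    # trapezoidal rule along the straight-line (eikonal) trajectory
    dz = 2 * zmax / (n - 1)
    s = 0.0
    for i in range(n):
        z = -zmax + i * dz
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * V0 * math.exp(-(b * b + z * z) / a**2)
    return -s * dz / v

def chi_exact(b, V0=1.0, a=1.0, v=1.0):
    return -(V0 * a * math.sqrt(math.pi) / v) * math.exp(-b * b / a**2)

print(chi_numeric(0.5), chi_exact(0.5))
```

The agreement illustrates the structure that carries over to QCD: the scattering phase accumulates additively along the undeflected straight-line path, which in the gauge theory becomes the path-ordered Wilson line.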
Let us consider the scattering of target particles moving at the speed of light in the positive-z direction and projectile particles moving in the negative-z direction. Then, in general, in the Eikonal approximation, the scattering matrix takes the form

$$S\sim\sum_{\{\rho_t\}}W_x[\rho_t]\sum_{\{\rho_p\}}W_{x'}[\rho_p]\,\Bigl\langle\prod V\,\prod W\Bigr\rangle \qquad (6)$$

with the Wilson lines corresponding to the Eikonal phase (5),

$$V(x_\perp)=\mathcal{P}\exp\Bigl[ig\!\int\!dz^-\,A^+(x_\perp,z^-)\Bigr],\qquad W(x_\perp)=\mathcal{P}\exp\Bigl[ig\!\int\!dz^+\,A^-(x_\perp,z^+)\Bigr] \qquad (7)$$

in the light-cone coordinates, as sketched in Fig. 1. The momenta conjugate to x⁻ and x⁺ are the light-cone energy and the longitudinal momentum, respectively. In Eq. (6) the weights W_x[ρ_t] and W_{x'}[ρ_p] represent the wave functions of the target and the projectile at the Bjorken variables x and x', respectively. Since the hard particles (those above the separation scale set by the total longitudinal momentum of the target) are included in the wave function, the functional integration should contain the softer gauge fields below that scale. In the following subsections we discuss how to approximate this scattering amplitude.

### 2.2 Light projectile and dense target

For the simplest example let us take the projectile to be a color dipole; the scattering amplitude is then

$$S_{\rm dipole}\sim\bigl\langle\!\bigl\langle V(x_\perp)V^\dagger(y_\perp)\bigr\rangle\!\bigr\rangle_{\rho_t}=\sum_{\{\rho_t\}}W_x[\rho_t]\int^{p^+}\!\!\mathcal{D}A\;e^{iS_{\rm YM}[A]+iS_{\rm source}[\rho_t,W]}\,V(x_\perp)V^\dagger(y_\perp), \qquad (8)$$

where the superscript on the functional integration indicates that only the softer gauge fields are integrated over. The source term arises because the product of the Wilson lines of the hard particles can be re-expressed in the exponential [11, 12] as

$$S_{\rm source}[\rho_t,W]=ig\,N_c\!\int\!d^4x\;{\rm tr}\bigl[\rho_t\ln W\bigr]\sim-\!\int\!d^4x\,\rho_t^a A^{-a}, \qquad (9)$$

where the last expression is an approximation valid for sufficiently large longitudinal momentum. Then, for a large source, the functional integration in Eq. (8) can be estimated by means of the stationary approximation at the solution of

$$\left.\frac{\delta S_{\rm YM}}{\delta A_\mu^a}\right|_{A=\mathcal{A}}=\delta^{\mu-}\rho_t. \qquad (10)$$

The solution of the above classical equations of motion thus represents the contribution from the soft gluons. Then, finally, the dipole scattering amplitude is

$$S_{\rm dipole}\simeq\sum_{\{\rho_t\}}W_x[\rho_t]\,V(x_\perp)V^\dagger(y_\perp)\Bigr|_{A=\mathcal{A}[\rho_t]}. \qquad (11)$$
This expression is easily generalized to an arbitrary operator:

$$\bigl\langle\!\bigl\langle\mathcal{O}[A]\bigr\rangle\!\bigr\rangle_{\rho_t}\simeq\sum_{\{\rho_t\}}W_x[\rho_t]\;\mathcal{O}\bigl[\mathcal{A}[\rho_t]\bigr]. \qquad (12)$$

Once the x dependence of W_x is known [13, 14], small-x evolution is deduced for such averages in general; in this way the BFKL equation is derived at quadratic order, and, moreover, the BK [15] and JIMWLK equations, which include all orders, are obtained.

### 2.3 Dense projectile and dense target

The discussion so far is quite generic, but the above stationary-point approximation needs slight modifications when not only the target parton density but also that of the projectile is large, as in the situation of the relativistic heavy-ion collision. Then the scattering amplitude reads

$$S_{\text{dense-dense}}=\sum_{\{\rho_t,\rho_p\}}W_x[\rho_t]\,W_{x'}[\rho_p]\int\mathcal{D}A\;e^{iS_{\rm YM}[A]+iS_{\rm source}[\rho_t,W;\rho_p,V]} \qquad (13)$$

with the source action given approximately as

$$S_{\rm source}[\rho_t,W;\rho_p,V]\sim-\int d^4x\,\bigl(\rho_t^a A^{-a}+\rho_p^a A^{+a}\bigr). \qquad (14)$$

This time the stationary point is shifted also by the presence of the projectile source, and it is determined by the classical equations of motion with two sources,

$$\left.\frac{\delta S_{\rm YM}}{\delta A_\mu^a}\right|_{A=\mathcal{A}}=\delta^{\mu-}\rho_t^a+\delta^{\mu+}\rho_p^a. \qquad (15)$$

Using the solution of these classical equations of motion, one can obtain a general formula similar to Eq. (12):

$$\bigl\langle\!\bigl\langle\mathcal{O}[A]\bigr\rangle\!\bigr\rangle_{\rho_t,\rho_p}\simeq\sum_{\{\rho_t,\rho_p\}}W_x[\rho_t]\,W_{x'}[\rho_p]\;\mathcal{O}\bigl[\mathcal{A}[\rho_t,\rho_p]\bigr]. \qquad (16)$$

In what follows we discuss how to evaluate Eq. (16) to investigate the early-time dynamics of the heavy-ion collision. There are two ingredients necessary to estimate physical observables using Eq. (16). One is the solution of Eq. (15), which unfortunately cannot be found analytically. The other is the wave functions W_x[ρ_t] and W_{x'}[ρ_p], which again cannot be obtained by solving QCD exactly. Given an initial condition at a certain value of x, in principle, an evolution equation such as the JIMWLK equation leads to the wave function at any smaller x. The theoretical framework of such a description of scattering processes with the non-linearity of abundant gluons is called the Color Glass Condensate (CGC).
The CGC theory is not a phenomenological model but an extension of conventional perturbative QCD with a resummation in the form of background fields. In any case, however, the initial condition for small-x evolution is necessary for actual computations. This part needs an Ansatz, as explained in the next section.

## 3 Equations of motion and the MV model

Here we will see that the analytical solution can be written down for the one-source problem. Although it is impossible to give an analytical formula for the solution of the two-source problem, the initial condition on the light-cone can be specified. We will also introduce a Gaussian approximation for the wave functions, which defines the MV model.

### 3.1 One-source problem

Let us first consider how to solve the equations of motion (10) for the light-dense scattering. It is actually easy to find a special solution, in the same way as in classical electromagnetism. The important point is that the source is independent of the light-cone time because of the time dilation of particles moving at the speed of light2. Therefore, with the assumption that only the plus component of the gauge field is non-vanishing and static, the problem is reduced to an Abelian one and Eq. (10) amounts to the standard Poisson equation. The solution for the static potential therefore reads

$$-\partial_\perp^2 A^+=\rho_t(x_\perp,x^-)\quad\Rightarrow\quad A^+=-\frac{1}{\partial_\perp^2}\,\rho_t(x_\perp,x^-). \qquad (17)$$

In later discussions it is more convenient to adopt the light-cone gauge, which is achieved by the gauge rotation

$$V^\dagger(x_\perp,x^-)=\mathcal{P}\exp\Bigl[ig\int_{-\infty}^{x^-}dz^-\,A^+(x_\perp,z^-)\Bigr]=\mathcal{P}\exp\Bigl[-ig\int_{-\infty}^{x^-}dz^-\,\frac{1}{\partial_\perp^2}\rho_t(x_\perp,z^-)\Bigr]. \qquad (18)$$

After the gauge transformation by V, only the transverse components are non-vanishing,

$$\alpha_{(t)}^i=A^i=-\frac{1}{ig}\,V\partial^i V^\dagger. \qquad (19)$$

In the same way, for the projectile moving in the opposite direction to the target, the equations of motion have a solution

$$\alpha_{(p)}^i=-\frac{1}{ig}\,W\partial^i W^\dagger \qquad (20)$$

with

$$W^\dagger(x_\perp,x^+)=\mathcal{P}\exp\Bigl[-ig\int_{-\infty}^{x^+}dz^+\,\frac{1}{\partial_\perp^2}\rho_p(x_\perp,z^+)\Bigr] \qquad (21)$$

in the corresponding light-cone gauge. In this manner the one-source problem is readily solvable. It is, however, impossible to solve the two-source problem in Eq. (15) as simply as above.
### 3.2 Two-source problem

In the presence of two sources it is most suitable to make use of the Bjorken coordinates that represent an expanding system. The time and the longitudinal variables are replaced by the proper time and the space-time rapidity, respectively, as

$$\tau=\sqrt{2x^+x^-}=\sqrt{t^2-z^2},\qquad \eta=\frac12\ln\frac{x^+}{x^-}=\frac12\ln\Bigl(\frac{t+z}{t-z}\Bigr). \qquad (22)$$

The temporal gauge in these coordinates, A^τ = 0, has a close connection to the light-cone gauges discussed in the previous subsection, because it amounts to a light-cone gauge condition on each light-cone surface. Therefore, the experience gained in the one-source problem turns out to be useful here. The equations of motion can be expressed in the Bjorken coordinates as

$$\partial_\tau E^i=\frac{1}{\tau}D_\eta F_{\eta i}+\tau D_j F_{ji},\qquad \partial_\tau E^\eta=\frac{1}{\tau}D_j F_{j\eta}, \qquad (23)$$

and the conjugate momenta are defined as

$$E^i=\tau\,\partial_\tau A^i,\qquad E^\eta=\frac{1}{\tau}\,\partial_\tau A^\eta. \qquad (24)$$

These equations determine the time evolution uniquely once the initial condition at an initial time is specified. The solutions of the one-source problems are consistent with this gauge-fixing condition, since we found that only the transverse components are non-vanishing for the target and the projectile both. Then the target solution solves Eq. (23) too if there is no interference from the projectile source. This means that each one-source solution remains a solution in the appropriate region outside of the forward light-cone (see Fig. 2 for illustration). From this fact, it is naturally understood that the initial condition on the light-cone may be a superposition of these two solutions,

$$A^i=\alpha_{(t)}^i+\alpha_{(p)}^i,\qquad A^\eta=0. \qquad (25)$$

Though this is a very simple Ansatz by superposition, the consequence is quite non-trivial. The field strength associated with these gauge fields is

$$B^i=0,\qquad B^\eta=F_{12}=-ig\bigl([\alpha_{(t)}^1,\alpha_{(p)}^2]+[\alpha_{(p)}^1,\alpha_{(t)}^2]\bigr), \qquad (26)$$

and thus the longitudinal component of the chromo-magnetic field appears from the non-Abelian interactions. By solving the equations of motion, one can also find similar expressions for the chromo-electric fields [16]:

$$E^i=0,\qquad E^\eta=ig\bigl([\alpha_{(t)}^1,\alpha_{(p)}^1]+[\alpha_{(t)}^2,\alpha_{(p)}^2]\bigr). \qquad (27)$$
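The coordinate change in Eq. (22) and its inverse, t = τ cosh η and z = τ sinh η (valid inside the forward light-cone, t > |z|), can be verified with a few lines:

```python
import math

# Bjorken coordinates: tau = sqrt(t^2 - z^2), eta = 0.5*ln((t+z)/(t-z)).
def to_bjorken(t, z):
    tau = math.sqrt(t * t - z * z)
    eta = 0.5 * math.log((t + z) / (t - z))
    return tau, eta

# Inverse map: t = tau*cosh(eta), z = tau*sinh(eta).
def from_bjorken(tau, eta):
    return tau * math.cosh(eta), tau * math.sinh(eta)

tau, eta = to_bjorken(2.0, 1.0)
t, z = from_bjorken(tau, eta)
print(tau, eta, t, z)   # round trip recovers (t, z) = (2, 1)
```

A longitudinal boost simply shifts η by the boost rapidity while leaving τ unchanged, which is exactly the statement that a boost-invariant (η-independent) configuration looks the same in every longitudinally boosted frame.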
(27) These field strengths represent characteristic properties of the initial condition of the relativistic heavy-ion collisions in the CGC or the so-called Glasma picture, an intuitive illustration of which is displayed in Fig. 3. It is important to point out that the initial conditions at $\tau=0$ are independent of $\eta$, namely, boost invariant. Because there is no explicit $\eta$ in the equations of motion, boost invariance is kept during the time evolution. ### 3.3 McLerran-Venugopalan model One of the simplest and most reasonable Ansätze for the wave function is the Gaussian approximation, $$W[\rho]\propto \exp\biggl[-\int dz\,d^2x_\perp\,\frac{\rho_a(x_\perp,z)\,\rho_a(x_\perp,z)}{2g^2\mu^2(z)}\biggr],$$ (28) which defines the McLerran-Venugopalan (MV) model. Here $\rho$ represents either $\rho_t$ or $\rho_p$. In this model setup $g^2\mu$ characterizes the typical energy scale. Indeed, once the parton saturation manifests itself, any details in the structure are lost and only the transverse parton density should be a relevant scale. In principle $g^2\mu$ in the MV model is to be interpreted as the saturation scale $Q_s$ in the parton saturation. With the Gaussian wave function (28), the expectation value is obtained by a decomposition into the two-point function that reads $$\langle\rho^a_{(m)}(x_\perp,z)\,\rho^b_{(n)}(y_\perp,z')\rangle = g^2\mu^2(z)\,\delta_{mn}\,\delta^{ab}\,\delta(z-z')\,\delta^{(2)}(x_\perp-y_\perp).$$ (29) In other words, the Gaussian approximation (28) assumes no correlation at all between spatially distinct sites. Evaluating the Gaussian average with various functionals of $\rho$ is an interesting mathematical exercise [17]. Especially it is feasible to evaluate the initial energy density, $$\varepsilon = \langle T^{\tau\tau}\rangle = \langle\mathrm{tr}\,[E_L^2+B_L^2+E_T^2+B_T^2]\rangle$$ (30) at $\tau\to0^+$ using the initial conditions (25) and (27) and the Gaussian weight (28). The results are a bit disappointing because the energy density involves both UV and IR divergences, which can be regularized by $\Lambda$ (momentum cutoff) and $m$ (gluon mass). Then, the initial energy density is found to be $$\varepsilon(\tau=0)=g^6\mu^4\cdot\frac{N_c(N_c^2-1)}{8\pi^2}\Bigl(\ln\frac{\Lambda}{m}\Bigr)^2$$ (31) after some calculations [4, 18, 19].
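As a quick illustration of the commutator structure in Eqs. (26) and (27) — not a lattice simulation, just hypothetical su(2)-valued transverse fields built from Pauli matrices — one can verify numerically that the longitudinal fields come purely from the non-Abelian terms and vanish in the Abelian limit:

```python
import numpy as np

# Toy check of Eq. (26): B^eta = -ig([a1_t, a2_p] + [a1_p, a2_t]) is built
# from commutators, so it vanishes when all fields point in one color
# direction (Abelian limit) and is generically nonzero otherwise.
s = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def comm(a, b):
    return a @ b - b @ a

g = 1.0
rng = np.random.default_rng(1)
def rand_su2():                                      # random color vector
    c = rng.normal(size=3)
    return sum(ci * si for ci, si in zip(c, s))

a1_t, a2_t, a1_p, a2_p = (rand_su2() for _ in range(4))
b_eta = -1j * g * (comm(a1_t, a2_p) + comm(a1_p, a2_t))

# Abelian limit: all fields along the same color direction -> B^eta = 0
b_abelian = -1j * g * (comm(2.0 * s[2], 3.0 * s[2]) + comm(0.5 * s[2], s[2]))
```

Since the $\alpha$'s are Hermitian, $-i$ times a commutator is again Hermitian, as a physical field component should be.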
In the numerical simulation with a discretized grid in a finite-volume box [20, 21] there are natural UV and IR cutoffs incorporated from the beginning. The IR cutoff is, however, not included as a gluon mass but originates from the system size $L$. ## 4 Numerical method and the boost-invariant results The model parameters should be fixed first. In the case of the gold-gold collision at the RHIC energy, an empirical choice of $g^2\mu$ and the transverse system size is adopted. Now that the model definition and the model parameters are given, we can calculate $V$ and $W$ numerically. Then, in the high-energy limit, it is a common assumption that the nucleus source is infinitesimally thin, i.e. $$\rho_t(x_\perp,x^-)=\bar\rho_t(x_\perp)\,\delta(x^-),\qquad \rho_p(x_\perp,x^+)=\bar\rho_p(x_\perp)\,\delta(x^+).$$ (32) Then, the Wilson lines are replaced, respectively, by $$\bar V^\dagger(x_\perp)\to e^{ig\Lambda^{(t)}(x_\perp)},\qquad \bar W^\dagger(x_\perp)\to e^{ig\Lambda^{(p)}(x_\perp)}$$ (33) with the solutions of the Poisson equation, $$-\partial_\perp^2\Lambda^{(t)}(x_\perp)=\bar\rho_t(x_\perp),\qquad -\partial_\perp^2\Lambda^{(p)}(x_\perp)=\bar\rho_p(x_\perp).$$ (34) One should be very careful about this replacement because this is not an approximation of Eqs. (18) and (21). Even though the longitudinal extent of the color source is infinitesimally thin, it is not legitimate to drop the path ordering [22]. Therefore, we have to regard the numerical MV model as something distinct from the original MV model in the continuum variables. In the MV model the distribution of the color source is random and there is no correlation between different sites. Figure 4 illustrates an example of the initial $\Lambda^{(t)}$ as a solution of the Poisson equation. Because the operator $1/\partial_\perp^2$ involves a spatial average, we see that the spatial distribution of $\Lambda^{(t)}$ is rather smooth even though the source has random fluctuations. This smoothness is not physical, however, and the gauge fields are furiously fluctuating, as shown in the right panel of Fig. 4. It is worth noting that the color-flux-tube picture as sketched in Fig. 3 is not realized in the MV model, and the JIMWLK evolution is indispensable to take account of the flux-tube structure.
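A minimal sketch of how the discretized MV-model source of Eq. (29) can be sampled and checked is given below. The grid size, the value of $g^2\mu^2$, and the statistics are illustrative choices, not the simulation parameters of [20, 21]; on the lattice (spacing set to 1), Eq. (29) just says that each site and color carries an independent Gaussian of variance $g^2\mu^2$.

```python
import numpy as np

# Sample MV-model color sources and verify the discrete form of Eq. (29):
# <rho_a(x) rho_b(y)> = (g mu)^2 delta_ab delta_xy, with no correlation
# between distinct colors or sites.  Values here are illustrative.
g_mu = 2.0
n_sites, n_color, n_conf = 16, 3, 20000
rng = np.random.default_rng(2)
rho = rng.normal(0.0, g_mu, size=(n_conf, n_color, n_sites))

# ensemble-averaged correlator <rho_a(x) rho_b(y)>
corr = np.einsum("cax,cby->abxy", rho, rho) / n_conf

same = corr[0, 0, np.arange(n_sites), np.arange(n_sites)].mean()   # a=b, x=y
cross_color = np.abs(corr[0, 1]).max()                             # a != b
off_site = np.abs(corr[0, 0][~np.eye(n_sites, dtype=bool)]).max()  # x != y
```

The diagonal entries cluster around $(g\mu)^2$ while the off-diagonal ones are statistical noise of order $(g\mu)^2/\sqrt{N_{\rm conf}}$, i.e. the "no correlation between spatially distinct sites" statement of the text.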
The physical observables are measured by taking an ensemble average of the results with different initial $\rho$'s. It is useful to compute not only the energy density (31) but also other combinations of the energy-momentum tensor. In particular the following pressures are important in order to judge how anisotropic the system is: $$P_T=\frac{1}{2}\langle T^{xx}+T^{yy}\rangle=\langle\mathrm{tr}\,[E_L^2+B_L^2]\rangle,$$ (35) $$P_L=\langle\tau^2 T^{\eta\eta}\rangle=\langle\mathrm{tr}\,[E_T^2+B_T^2-E_L^2-B_L^2]\rangle,$$ (36) where the longitudinal and transverse chromo-electric and chromo-magnetic fields are defined as $$E_L^2=E^\eta_a E^\eta_a,\qquad E_T^2=\frac{1}{\tau^2}\bigl(E^x_a E^x_a+E^y_a E^y_a\bigr),$$ (37) $$B_L^2=F^a_{12}F^a_{12},\qquad B_T^2=\frac{1}{\tau^2}\bigl(F^a_{\eta x}F^a_{\eta x}+F^a_{\eta y}F^a_{\eta y}\bigr).$$ (38) The numerical results from the Glasma simulation are presented in Fig. 5. From this figure it is clear that there are only the longitudinal fields $E_L$ and $B_L$ right at the collision ($\tau=0$), as explained with Fig. 3. The transverse fields develop as $\tau$ increases, and eventually the longitudinal and the transverse fields approach each other. This does not mean isotropization, however, because there are two ($x$ and $y$) components in the transverse direction. One can understand what is happening by taking a careful look at the pressures. The longitudinal pressure goes to zero at late times, which means that particles move on a free stream along the longitudinal direction. Thus, the system remains far from thermal equilibrium. Here we encounter a perplexing situation. We know that the CGC should be a correct description of the initial dynamics of the heavy-ion collision. Even if the CGC cannot reach isotropization and thermal equilibrium, it should be a natural anticipation that the CGC can at least capture a correct tendency toward thermalization. This anticipation is not the case at all, however, as seen in Fig. 5. Then, is there anything missing in our discussions so far? ## 5 Glasma instability and the scaling spectrum Yes, there is. We have neglected fluctuations on top of the boost-invariant CGC background field, and thus any $\eta$ dependence at all. Such a treatment is not always justified.
As a matter of fact, longitudinal structures of a finite extent — treated correctly in the path ordering instead of using delta-function sources — would give rise to $\eta$-dependent fluctuations. Also, quantum fluctuations have $\eta$ dependence as well [8, 23]. One might have thought that small disturbances in the longitudinal direction could make only a slight difference. But the fact is that there is a tremendous difference between the results with and without $\eta$-dependent fluctuations. With random fluctuations in $\eta$ space, the Glasma simulation leads to a significant decay from the CGC background fields with the $\eta$-independent zero mode into $\eta$-fluctuating non-zero modes [3]. An example is shown in Fig. 6. To obtain the results as presented in Fig. 6, only the $\nu=\nu_0$ modes are disturbed, $$\delta E^\eta\propto f(\eta)=\Delta\cos\Bigl(\frac{2\pi\nu_0}{L_\eta}\eta\Bigr),$$ (39) where $L_\eta$ is the longitudinal extent taken in the simulation. Once $\delta E^\eta$ is given, the transverse fluctuations, $\delta E^i$, are chosen in such a way as to satisfy the Gauss law. Then, physical observables of our interest are Fourier transformed from $\eta$ space to $\nu$ space. It is now obvious that the CGC background fields at $\nu=0$ are relatively larger than the fluctuations for small $\Delta$, and the second dominant mode should sit at $\nu=\nu_0$. In this article we limit ourselves to the simplest choice of $\nu_0$. Usually unstable modes grow exponentially, but in the expanding geometry the instability implies a slightly weaker growth according to $\exp(\mathrm{const}\times\sqrt{\tau})$. The horizontal axis in the left panel of Fig. 6 is, thus, not the dimensionless time itself, but its square root. We can surely confirm in Fig. 6 that the longitudinal pressure component at $\nu=\nu_0$ increases linearly when the logarithm of the pressure is plotted as a function of the square root of the time. The detailed structure of the instability in the spectral pattern is interesting to see. The right panel of Fig. 6 is the spectrum corresponding to the simulation results shown in the left. From now on, let us consider the fate of the instability; it is simply impossible for the instability to last forever.
The saturation of the instability growth can be observed in two ways. The first case is that the initial magnitude $\Delta$ (appearing in Eq. (39)) is taken to be large enough to accommodate non-linear effects. The second is the large-time behavior; simply we wait until the unstable modes have grown considerably. As shown in the left panel of Fig. 7, the instability eventually stops and the spectrum is flattened after all. In the saturated regime we see that the pressure at $\nu=0$ slightly decreases rather than increasing. This behavior can be interpreted as follows. As long as the non-zero modes are small (i.e. in the linear regime), the energy decay from the dominant CGC background at $\nu=0$ makes the non-zero modes enhanced exponentially. The injected energy is much bigger than the energy escaping toward higher-$\nu$ modes in the linear regime. This balance changes gradually with increasing amplitude of the unstable modes, and eventually a steady energy flow is expected when the non-Abelian nature of the non-zero modes becomes substantially large (i.e. in the non-linear regime). At even larger time, as hinted from Fig. 7, the spectrum is flattened and the UV-cutoff effects at large $\nu$ should be appreciable. Then an intriguing question is: what is going on in the non-linear regime before the UV-cutoff effects contaminate the simulation? To address this question, we shall take the latter case, namely the long-time run of the simulation with small $\Delta$ for better numerical stability. The qualitative features in the results shown in Fig. 8 are just the same as those of the top curve in Fig. 7. We are interested in the energy spectrum in the non-linear regime, which can be immediately identified in Fig. 8. Figure 9 is the energy spectrum (not the Fourier-transformed longitudinal pressure but the energy contribution from the respective modes). That is, what is shown in Fig. 9 is given as $$\varepsilon_E(\nu)=\langle\mathrm{tr}\,[E^\eta_a(-\nu)E^\eta_a(\nu)+\tau^{-2}E^i_a(-\nu)E^i_a(\nu)]\rangle.$$
(40) We see that there is a clear tendency to approach a scaling form in the non-linear regime (as in the right panel of Fig. 9). With some rescaling we find that this power is exactly consistent with the Kolmogorov value, $-5/3$. Generally speaking, in non-expanding systems, it is not a surprise that the Kolmogorov power emerges, because this value can be guessed by dimensional analysis. Such a dimensional argument may work even for the expanding system. In the Bjorken coordinates $\nu$ is dimensionless, but the physical scale in the longitudinal direction is to be interpreted as $\tau\eta$, and thus the corresponding momentum should be $\nu/\tau$. Then, the inverse of this quantity gives a dimension of length. Because of an expected energy flow in $\nu$ space, its rate $\psi$ is also a relevant quantity. Then, the characteristic length and time scales of the system are expressed by two quantities with the following dimensions: $$[\nu/\tau]=l^{-1},\qquad[\psi]=l^2 t^{-3}.$$ (41) The energy spectrum, on the other hand, has the dimension $$[V_\perp\tau^2\varepsilon_E(\nu)]=l^3 t^{-2},$$ (42) which is reproduced exactly by a unique combination of $\psi$ and $\nu/\tau$, namely $\psi^{2/3}(\nu/\tau)^{-5/3}$. Therefore, it is concluded that $V_\perp\tau^2\varepsilon_E(\nu)$ should exhibit a power-law scaling in terms of $\nu/\tau$ whose power is identical to Kolmogorov's value, $-5/3$, as long as the system stays in the non-linear regime. From the left panel of Fig. 9, we can see only the scaling region, or the so-called inertial region, realized. The dissipative region at high $\nu$ is not found, probably because of the UV-cutoff effects. This nice scaling is lost at further later times. We can understand from the right panel of Fig. 9 that the energy flow is stuck at the UV edge and the energy spectrum is artificially pushed up by the UV-cutoff contamination. ## 6 Outlooks It was a surprise that Kolmogorov's scaling law could be realized in the Glasma simulation in the expanding geometry. The dimensional argument is not strong enough to constrain the shape of the energy spectrum uniquely.
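The exponent matching in the dimensional argument above can be checked symbolically. Writing the spectrum as $\psi^p(\nu/\tau)^q$ and equating length and time dimensions with Eqs. (41)-(42) gives two linear equations, $2p-q=3$ and $3p=2$, whose unique solution reproduces the Kolmogorov exponent:

```python
from sympy import symbols, solve, Rational

# [psi^p (nu/tau)^q] = l^(2p - q) t^(-3p) must equal l^3 t^-2
# -> length: 2p - q = 3,  time: -3p = -2
p, q = symbols("p q")
sol = solve([2*p - q - 3, -3*p + 2], [p, q], dict=True)[0]
# sol[p] = 2/3, sol[q] = -5/3: the Kolmogorov power
```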
It should be an important test whether Kolmogorov's behavior can be confirmed or not in other simulations of the pure Yang-Mills theory. It has been established that the non-Abelian plasma generally has an instability associated with anisotropy that grows until the non-Abelianization occurs [24]. Similar phenomena of an instability tamed by non-Abelian interactions are found in many other examples. Then, presumably, there must appear a power-law scaling in the region around the non-Abelianization. The turbulent decay and the associated Kolmogorov power-law are interesting discoveries from the long-run simulation of the Glasma. But this cannot answer anything about the realistic thermalization mechanism from the Glasma. The turbulence is certainly a tendency toward thermalized matter, but the energy flow is a slow process and the relevant time scale is far outside of the validity region of the CGC description. There must still be something missing that can accelerate the thermalization speed. If this missing piece were finally set in, the transient Glasma would provide us with the initial input for the hydrodynamic equations. Even in this case, the analysis we have seen should be useful. The energy spectrum in $\nu$ space should carry important information. Then, the power could agree with or deviate from $-5/3$ as suggested in strong-coupling studies [25, 26]. ## Acknowledgments The author thanks the organizers of the 51st Cracow School of Theoretical Physics for a kind invitation. This article summarizes lectures given there. He also thanks Hiro Fujii, Francois Gelis, and Yoshimasa Hidaka for collaborations. The central parts of these lectures are based on works done in collaboration with them. ### Footnotes 1. thanks: Presented at the 51st Cracow School of Theoretical Physics 2.
Even though the time dependence is frozen at the speed of light, non-commutativity of color charges needs distinction by the order of interactions along the -axis, which introduces a label that plays the role of time. Such “-dependence” is dropped off in the large limit only. ### References 1. For reviews, see: E. Iancu, A. Leonidov and L. McLerran, arXiv:hep-ph/0202270; E. Iancu and R. Venugopalan, arXiv:hep-ph/0303204; L. McLerran, arXiv:hep-ph/0311028; F. Gelis, T. Lappi and R. Venugopalan, Int. J. Mod. Phys. E 16, 2595 (2007). 2. T. Lappi and L. McLerran, Nucl. Phys. A 772, 200 (2006). 3. P. Romatschke and R. Venugopalan, Phys. Rev. Lett. 96, 062302 (2006); Phys. Rev. D 74, 045011 (2006). 4. K. Fukushima, Phys. Rev. C76, 021902 (2007). 5. H. Fujii, K. Itakura, Nucl. Phys. A809, 88-109 (2008). [arXiv:0803.0410 [hep-ph]]. 6. A. Iwazaki, Prog. Theor. Phys. 121, 809-822 (2009). 7. K. Dusling, T. Epelbaum, F. Gelis, R. Venugopalan, Nucl. Phys. A850, 69-109 (2011). 8. K. Dusling, F. Gelis, R. Venugopalan, [arXiv:1106.3927 [nucl-th]]. 9. T. Epelbaum, F. Gelis, [arXiv:1107.0668 [hep-ph]]. 10. K. Fukushima, F. Gelis, [arXiv:1106.1396 [hep-ph]]. 11. J. Jalilian-Marian, S. Jeon and R. Venugopalan, Phys. Rev. D 63, 036004 (2001). 12. K. Fukushima, Nucl. Phys. A770, 71-83 (2006). 13. E. Iancu, A. Leonidov and L. D. McLerran, Nucl. Phys. A 692, 583 (2001). 14. E. Ferreiro, E. Iancu, A. Leonidov and L. McLerran, Nucl. Phys. A 703, 489 (2002). 15. Y. V. Kovchegov, Phys. Rev. D 60, 034008 (1999). 16. A. Kovner, L. D. McLerran and H. Weigert, Phys. Rev. D 52, 6231 (1995); ibid. D 52, 3809 (1995). 17. K. Fukushima, Y. Hidaka, JHEP 0706, 040 (2007). 18. T. Lappi, Phys. Lett. B643, 11-16 (2006). 19. H. Fujii, K. Fukushima, Y. Hidaka, Phys. Rev. C79, 024909 (2009). 20. A. Krasnitz and R. Venugopalan, Nucl. Phys. B 557, 237 (1999); Phys. Rev. Lett. 84, 4309 (2000); ibid. 86, 1717 (2001). 21. A. Krasnitz, Y. Nara and R. Venugopalan, Phys. Rev. Lett. 87, 192302 (2001); Nucl. Phys. 
A 717, 268 (2003); ibid. 727, 427 (2003). 22. K. Fukushima, Phys. Rev. D77, 074005 (2008). 23. K. Fukushima, F. Gelis, L. McLerran, Nucl. Phys. A786, 107-130 (2007). 24. P. B. Arnold, G. D. Moore, and L. G. Yaffe, Phys. Rev. D72, 054003 (2005). 25. J. Berges, D. Gelfand, S. Scheffler, and D. Sexty, Phys. Lett. B677, 210–213 (2009). 26. M. Carrington and A. Rebhan, [arXiv:1011.0393 [hep-ph]].
# An implicit function theorem question In my course on multivariate calculus we treat the implicit function theorem and I am stuck on the following question: Find the values of $a$ and $b$ such that, in a neighbourhood of $(x,y,u,v) = (0,1,1,-1)$, $x$ and $y$ are implicitly defined as $C^1$-functions $f$ and $g$ of $u$ and $v$ by the system of equations: $$\begin{cases} e^x +uy + u^4v-a=0 \\ y\cos(x) +bx +b^2u -2v = 4 \end{cases}$$ NB: check whether all conditions of the implicit function theorem are satisfied and state the conclusions as accurately as possible. My work so far: Let $G : \mathbb{R}^4 \to \mathbb{R}^2$ be defined by: $$G(x,y,u,v)= \begin{pmatrix} e^x +uy + u^4v \\ y\cos(x) +bx +b^2u -2v \end{pmatrix}$$ Obviously, this function is $C^1$. After this, I presume I have to form a matrix of partial derivatives and check for what values of $a$ and $b$ that matrix is non-singular. Could anyone give me some guidance on how to continue? • Just do what you presume you have to do: this is exactly the way it should be done. – Etienne Feb 26 '14 at 14:06 The Jacobian Matrix of the function $G: \mathbb{R}^4 \to \mathbb{R}^2$ defined by: $$G(x,y,u,v)=\begin{pmatrix} e^x + uy + u^4v \\ y\cos(x) + bx + b^2u-2v \end{pmatrix}$$ is $$DG(x,y,u,v)=\begin{pmatrix} e^x & u & y+4u^3v & u^4 \\ -y\sin(x) + b & \cos(x) &b^2 & -2 \end{pmatrix}.$$ Because all entries are continuous functions, $G$ is a $C^1$ function. Observe that $$G(0,1,1,-1) = \begin{pmatrix} 1 \\ b^2+3 \end{pmatrix} = \begin{pmatrix} a \\ 4 \end{pmatrix} \iff a = 1 \wedge b^2=1 \Longrightarrow a = 1 \wedge \left( b=-1 \vee b = 1\right).$$ The matrix with partial derivatives of $G$ with respect to $x$ and $y$ at $(0,1,1,-1)$, which is $$\begin{pmatrix} \frac{\partial G}{\partial x}(0,1,1,-1) & \frac{\partial G}{\partial y}(0,1,1,-1) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ b & 1 \end{pmatrix}$$ is singular if $b=1$ and non-singular if $b=-1$.
So for $a=1$ and $b=-1$ all conditions of the implicit function theorem are satisfied and thus there exists an open neighbourhood $B$ of $(1,-1)$, an open neighbourhood $U$ of $(0,1,1,-1)$ and $C^1$ functions $f:B \to \mathbb{R}$ and $g : B \to \mathbb{R}$ such that $$\{ (f(u,v), g(u,v), u, v) : (u,v) \in B \} = \{(x,y,u,v) \in U : G(x,y,u,v) = (1,4) \}$$ That's right. Let $z=[u,v]'$. Then the implicit function theorem implies in your case that $$\frac{\partial G}{\partial z'} + \begin{bmatrix} \frac{\partial G}{\partial x} \; \frac{\partial G}{\partial y}\end{bmatrix} \begin{bmatrix} \frac{\partial f}{\partial z'} \\ \frac{\partial g}{\partial z'} \end{bmatrix}=0.$$ So if the matrix of partial derivatives (with respect to $x,y$) is invertible then life is wonderful. • What do you mean with $z=[u,v]'$ and $z'$? – Nigel Overmars Feb 26 '14 at 14:35 • $z$ is a (column) vector with elements $u$ and $v$, $z'$ is its transpose. – JPi Feb 26 '14 at 19:40
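For completeness, the computation in the accepted answer can be double-checked with a short sympy sketch (the variable names are just the ones used above):

```python
import sympy as sp

# With a = 1, the 2x2 block d(G1,G2)/d(x,y) at (0,1,1,-1) should be
# [[1, 1], [b, 1]], with determinant 1 - b: singular iff b = 1.
x, y, u, v, a, b = sp.symbols("x y u v a b")
G = sp.Matrix([sp.exp(x) + u*y + u**4*v - a,
               y*sp.cos(x) + b*x + b**2*u - 2*v])
J = G.jacobian([x, y])
pt = {x: 0, y: 1, u: 1, v: -1}
J0 = J.subs(pt)                 # Matrix([[1, 1], [b, 1]])
det0 = sp.det(J0)               # 1 - b
```

Substituting $b=-1$ gives determinant $2 \neq 0$, while $b=1$ gives $0$, matching the conclusion of the answer.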
# Change in gas phase reaction equilibrium when not all product and reactant are increased $$\ce{2NOCl <=> 2NO + Cl2}$$ What will be the effect on equilibrium concentration of $$\ce{NOCl}$$ when equal moles of $$\ce{NOCl}$$ and $$\ce{NO}$$ are introduced in the mixture at constant temperature? Now this is what I think Increasing the moles will result in increased concentration of both $$\ce{NOCl}$$ and $$\ce{NO}$$. But there also lies a second product $$\ce{Cl2}$$. Since there's an increase in $$\ce{NO}$$ and $$\ce{NOCl}$$, there should be an equal increase in the concentration of $$\ce{Cl2}$$ to keep the equilibrium-constant constant. But since there's no addition of $$\ce{Cl2}$$ molecules there should be a decrease in $$\ce{NOCl}$$ concentration which will decompose further to increase the molar ratio of $$\ce{Cl2}$$ and hence achieve equilibrium. Please enlighten me where my logic is flawed. From comments (edit by BuckThorn): The reported answer is that the position of equilibrium remains unchanged and hence the concentration of $$\ce{NOCl}$$ remains constant. Given the reaction $$\ce{2NOCl <=> 2NO + Cl2}$$ we can formulate the equilibrium as $$\mathrm{K} = \dfrac{\ce{[NO]^2[Cl2]}}{\ce{[NOCl]^2}}$$ The problem then asks "What will be the effect on equilibrium concentration of NOCl when equal moles of NOCl and NO are introduced in the mixture at constant temperature?" Now if we add equal amounts of $\ce{NOCl}$ and $\ce{NO}$ we don't have to do it at the same time to end up with the final equilibrium. So let's add the $\ce{NOCl}$ first. This lowers the reaction quotient below $K$, so the reaction shifts to the right and more $\ce{NO}$ and $\ce{Cl2}$ form. Now let's add the $\ce{NO}$. This has to push the reaction to the left. So the net result is it depends. You have to have enough data to plug into the equilibrium expression and other equations to work the net result out. Your logic is true.
When you increase the concentrations of NOCl and NO, chlorine becomes the limiting compound, so the equilibrium is affected. The increase in the amount of NOCl leads to more decomposition, but the decomposition is also suppressed by the added NO on the product side; in the decomposition process, NO is the controlling compound. The new equilibrium is then reached with more NOCl and NO, and slightly more chlorine, compared with the initial concentrations.
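The "it depends" conclusion above can be made concrete numerically. The sketch below uses hypothetical $K$ values and concentrations at constant volume: starting from an equilibrium mixture, it adds equal amounts of NOCl and NO and re-solves $K = \mathrm{[NO]^2[Cl_2]/[NOCl]^2}$ for the reaction extent $\xi$ by plain bisection ($\xi>0$ means net forward reaction).

```python
# Re-solve the equilibrium after a perturbation; all numbers are
# illustrative, not data for the real NOCl system.
def new_extent(nocl, no, cl2, K, lo=-0.49, hi=0.49):
    def f(xi):  # Q(xi) - K, monotonically increasing on the bracket
        return ((no + 2*xi)**2 * (cl2 + xi)) / (nocl - 2*xi)**2 - K
    for _ in range(200):                     # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Case 1: [NOCl]=2, [NO]=1, [Cl2]=1 is at equilibrium for K = 0.25.
# Adding 1 of each makes Q > K: the reaction runs backward (xi < 0),
# so [NOCl] ends up even higher than right after the addition.
xi1 = new_extent(2 + 1, 1 + 1, 1.0, K=0.25)
nocl1 = 3 - 2*xi1

# Case 2: [NOCl]=1, [NO]=2, [Cl2]=1 is at equilibrium for K = 4.
# Adding 1 of each makes Q < K: the reaction runs forward (xi > 0),
# so part of the added NOCl decomposes.
xi2 = new_extent(1 + 1, 2 + 1, 1.0, K=4.0)
nocl2 = 2 - 2*xi2
```

The two cases shift in opposite directions, which is exactly why the question cannot be answered without numbers. (In the special case where the initial mixture has $\mathrm{[NOCl]=[NO]}$, adding equal amounts leaves $Q$ unchanged and the position of equilibrium does not move at all, which reproduces the "reported answer".)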
Slab pull Slab pull is that part of the motion of a tectonic plate caused by its subduction. In 1975, Forsyth and Uyeda showed, using inverse-theory methods, that of the many likely driving forces of plates, slab pull was the strongest.[1] Plate motion is partly driven by the weight of cold, dense plates sinking into the mantle at oceanic trenches.[2][3] This force and slab suction account for almost all of the force driving plate tectonics. The ridge push at rifts contributes only 5 to 10%.[4] Carlson et al. (1983)[5] in Lallemand et al. (2005)[6] defined the slab pull force as: ${\displaystyle F_{sp}=K\times \Delta \rho \times L\times {\sqrt {A}}}$ Where: K is 4.2g (gravitational acceleration = 9.81 m/s2) according to McNutt (1984);[7] Δρ = 80 kg/m3 is the mean density difference between the slab and the surrounding asthenosphere; L is the slab length calculated only for the part above 670 km (the upper/lower mantle boundary); A is the slab age in Ma at the trench. The slab pull force manifests itself between two extreme forms; between these two extremes lies the evolution of the Farallon Plate: from the huge slab width with the Nevada, the Sevier and Laramide orogenies; the Mid-Tertiary ignimbrite flare-up; and later, left as the Juan de Fuca and Cocos plates, the Basin and Range Province under extension, with slab break-off, smaller slab width, more edges and mantle return flow. Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere does not directly cause motion by the friction of such basal forces. The North American Plate is nowhere being subducted, yet it is in motion. Likewise the African, Eurasian and Antarctic Plates. Ridge push is thought to be responsible for the motion of these plates. The subducting slabs around the Pacific Ring of Fire cool down the Earth and its Core-mantle boundary.
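As an illustration of the Carlson et al. (1983) formula quoted above, the slab length and age below are hypothetical example values (not data for any particular trench); the density contrast and $K=4.2g$ are the ones stated in the article:

```python
import math

# F_sp = K * d_rho * L * sqrt(A), with units as used in the empirical
# formula (age A in Ma); the slab values are illustrative assumptions.
g = 9.81                      # m/s^2, gravitational acceleration
K = 4.2 * g
d_rho = 80.0                  # kg/m^3, slab-asthenosphere density contrast
L = 600e3                     # m, slab length above the 670 km discontinuity
A = 50.0                      # Ma, slab age at the trench
F_sp = K * d_rho * L * math.sqrt(A)

# the pull grows only with the square root of the plate age:
F_older = K * d_rho * L * math.sqrt(2 * A)
```

Doubling the slab age increases the pull by a factor of $\sqrt{2}$, not 2, reflecting the square-root dependence in the formula.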
Around the African Plate upwelling mantle plumes from the Core-mantle boundary produce rifting including the African and Ethiopian rift valleys. References 1. Forsyth, Donald; Uyeda, Seiya (1975-10-01). "On the Relative Importance of the Driving Forces of Plate Motion". Geophysical Journal International. 43 (1): 163–200. doi:10.1111/j.1365-246X.1975.tb00631.x. ISSN 0956-540X. 2. Conrad, Clinton P.; Lithgow-Bertelloni, Carolina (2002-10-04). "How Mantle Slabs Drive Plate Tectonics". Science. 298 (5591): 207–209. doi:10.1126/science.1074161. ISSN 0036-8075. 3. "Plate tectonics, based on 'Geology and the Environment', 5 ed; 'Earth', 9 ed" (PDF). Archived from the original (PDF) on July 11, 2011. 4. Conrad CP, Lithgow-Bertelloni C (2004) 5. Carlson, R. L.; Hilde, T. W. C.; Uyeda, S. (1983). "The driving mechanism of plate tectonics: Relation to age of the lithosphere at trenches". Geophysical Research Letters. 10 (4): 297–300. doi:10.1029/GL010i004p00297. 6. Lallemand, Serge; Heuret, Arnauld; Boutelier, David (2005). "On the relationships between slab dip, back-arc stress, upper plate absolute motion, and crustal nature in subduction zones: SUBDUCTION ZONE DYNAMICS" (PDF). Geochemistry, Geophysics, Geosystems. 6 (9): n/a. doi:10.1029/2005GC000917. 7. McNutt, Marcia K. (1984-12-10). "Lithospheric flexure and thermal anomalies". Journal of Geophysical Research: Solid Earth. 89 (B13): 11180–11194. doi:10.1029/JB089iB13p11180. This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.
# Solving systems of linear equations by elimination ### Solving systems of linear equations by elimination Solving a system of linear equations by elimination means adding or subtracting the equations to eliminate a common variable.
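A worked sketch on a small made-up system where the $x$-coefficients already match, so a single subtraction eliminates $x$:

```python
# Solve by elimination:
#   2x + 3y = 8
#   2x -  y = 4
# Subtracting the second equation from the first eliminates x.
a1, b1, c1 = 2.0, 3.0, 8.0
a2, b2, c2 = 2.0, -1.0, 4.0
# (a1 - a2)x + (b1 - b2)y = c1 - c2   ->   0*x + 4y = 4
y = (c1 - c2) / (b1 - b2)     # y = 1
x = (c1 - b1 * y) / a1        # back-substitute into the first equation
```

When the coefficients do not match, one first multiplies an equation by a constant to make them match before adding or subtracting.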
Friis formulas for noise Not to be confused with Friis transmission equation. Friis formula or Friis's formula (sometimes Friis' formula), named after Danish-American electrical engineer Harald T. Friis, refers to either of two formulas used in telecommunications engineering to calculate the signal-to-noise ratio of a multistage amplifier. One relates to noise factor while the other relates to noise temperature. The Friis formula for noise factor Friis's formula is used to calculate the total noise factor of a cascade of stages, each with its own noise factor and gain (assuming that the impedances are matched at each stage). The total noise factor can then be used to calculate the total noise figure. The total noise factor is given as $F_{total} = F_1 + \frac{F_2-1}{G_1} + \frac{F_3-1}{G_1 G_2} + \frac{F_4-1}{G_1 G_2 G_3} + ... + \frac{F_n - 1}{G_1 G_2 ... G_{n-1}}$ where $F_i$ and $G_i$ are the noise factor and available power gain, respectively, of the i-th stage, and n is the number of stages. Note that both magnitudes are expressed as ratios, not in decibels. An important consequence of this formula is that the overall noise figure of a radio receiver is primarily established by the noise figure of its first amplifying stage. Subsequent stages have a diminishing effect on signal-to-noise ratio. For this reason, the first stage amplifier in a receiver is often called the low-noise amplifier (LNA). The overall receiver noise "factor" is then $F_{receiver} = F_{LNA} + \frac{(F_{rest}-1)}{G_{LNA}}$ where $F_{rest}$ is the overall noise factor of the subsequent stages. According to the equation, the overall noise factor, $F_{receiver}$, is dominated by the noise factor of the LNA, $F_{LNA}$, if the gain is sufficiently high. (The resultant Noise Figure expressed in dB is 10 log(Noise Factor).) 
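As a quick numerical check of the cascade formula and of the LNA-dominance statement, here is a short sketch with hypothetical stage values (noise factors and gains as linear ratios, not dB):

```python
import math

# Total noise factor of a cascade: F = F1 + sum_i (F_i - 1) / prod(G_j<i)
def friis_total(stages):
    """stages: list of (noise factor F, available gain G) from the antenna."""
    f_total, g_prod = stages[0][0], stages[0][1]
    for f, g in stages[1:]:
        f_total += (f - 1.0) / g_prod
        g_prod *= g
    return f_total

# LNA first (F = 1.26 ~ 1 dB, G = 100 ~ 20 dB), then a lossy mixer, then
# an IF amplifier; the values are illustrative, not from any datasheet.
chain = [(1.26, 100.0), (10.0, 0.5), (4.0, 1000.0)]
f_tot = friis_total(chain)                 # 1.26 + 9/100 + 3/50 = 1.41
nf_db = 10.0 * math.log10(f_tot)           # overall noise figure in dB

# putting the lossy, noisy mixer first ruins the whole cascade
f_bad = friis_total([(10.0, 0.5), (1.26, 100.0), (4.0, 1000.0)])
```

With the LNA first the cascade noise figure stays close to the LNA's own 1 dB; with the mixer first it is dominated by the mixer's 10 dB, which is the point made in the text.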
The Friis formula for noise temperature Friis's formula can be equivalently expressed in terms of noise temperature: $T_{eq} = T_1 + \frac{T_2}{G_1} + \frac{T_3}{G_1 G_2} + ...$ Printed references • J.D. Kraus, Radio Astronomy, McGraw-Hill, 1966. Online references • RF Cafe [1] Cascaded noise figure. • Microwave Encyclopedia [2] Cascade analysis. • Friis biography at IEEE [3]
## Cryptology ePrint Archive: Report 2020/486 Rotational-XOR Cryptanalysis of Simon-like Block Ciphers Jinyu Lu and Yunwen Liu and Tomer Ashur and Bing Sun and Chao Li Abstract: Rotational-XOR cryptanalysis is a cryptanalytic method aimed at finding distinguishable statistical properties in ARX-C ciphers, i.e., ciphers that can be described only using modular addition, cyclic rotation, XOR, and the injection of constants. In this paper we extend RX-cryptanalysis to AND-RX ciphers, a similar design paradigm where the modular addition is replaced by vectorial bitwise AND; such ciphers include the block cipher families Simon and Simeck. We analyse the propagation of RX-differences through AND-RX rounds and develop closed form formula for their expected probability. Finally, we formulate an SMT model for searching RX-characteristics in simon and simeck. Evaluating our model we find RX-distinguishers of up to 20, 27, and 35 rounds with respective probabilities of $2^{-26}, 2^{-42}$, and $2^{-54}$ for versions of simeck with block sizes of 32, 48, and 64 bits, respectively, for large classes of weak keys in the related-key model. In most cases, these are the longest published distinguishers for the respective variants of simeck. Interestingly, when we apply the model to the block cipher simon, the best distinguisher we are able to find covers 11 rounds of SIMON32 with probability $2^{-24}$. To explain the gap between simon and simeck in terms of the number of distinguished rounds we study the impact of the key schedule and the specific rotation amounts of the round function on the propagation of RX-characteristics in Simon-like ciphers. Category / Keywords: secret-key cryptography / RX-cryptanalysis · Simeck· Simon· Key Schedule Original Publication (in the same form): ACISP 2020 Date: received 26 Apr 2020, last revised 24 May 2020 Contact author: univerlyw at hotmail com Available format(s): PDF | BibTeX Citation Short URL: ia.cr/2020/486 [ Cryptology ePrint archive ]
# nLab effective group action this entry is about the concept in group theory; for the concept in quantum field theory see at effective action functional; for disambiguation see effective action # Contents ## Idea A group action is effective if no group element other than the neutral element acts trivially on all elements of the space. ## Definition A group action of a group (group object) $G$ on a set (object) $X$ is effective if $\underset{x \in X}{\forall} g x = x$ implies that $g = e$ is the neutral element. Beware the similarity to and difference with free action: a free action is effective, but an effective action need not be free. Last revised on April 14, 2020 at 11:35:14.
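The definition can be tested directly on a small finite example. The helper below is a hypothetical sketch, with $S_3$ acting on a three-element set; it also illustrates the closing remark that an effective action need not be free:

```python
from itertools import permutations

# An action is effective iff only the neutral element acts trivially,
# i.e. the set of elements fixing every point has exactly one member.
def is_effective(group, act, space):
    """group: iterable of elements g; act(g, x) -> x'; space: the set X."""
    trivial = [g for g in group if all(act(g, x) == x for x in space)]
    return len(trivial) == 1

s3 = list(permutations(range(3)))      # elements of S_3 as tuples
natural = lambda g, x: g[x]            # the natural action on {0, 1, 2}
collapse = lambda g, x: x              # the trivial action

eff_natural = is_effective(s3, natural, range(3))   # True
eff_trivial = is_effective(s3, collapse, range(3))  # False

# effective but NOT free: the transposition (1,0,2) is not the identity,
# yet it fixes the point 2
swap01 = (1, 0, 2)
fixes_a_point = natural(swap01, 2) == 2
```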
# Factor representation A linear representation $\pi$ of a group or an algebra $X$ on a Hilbert space $H$ such that the von Neumann algebra on $H$ generated by the family $\pi ( X)$ is a factor. If this factor is of type $\textrm{I}$ (respectively, $\textrm{II}$, $\textrm{III}$, $\textrm{II}_{1}$, $\textrm{II}_\infty$, etc.), then $\pi$ is called a factor representation of type $\textrm{I}$, etc. How to Cite This Entry: Factor representation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Factor_representation&oldid=14007 This article was adapted from an original article by A. Shtern (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
# Birthdate Jane on her birthday brought 30 lollipops and 24 chewing gums for her friends. How many friends does she have, if everyone receives the same number of lollipops and chewing gums? How many chewing gums and lollipops did each friend get? Correct result: x1 = 6, y1 = 5, z1 = 4; x2 = 3, y2 = 10, z2 = 8; x3 = 2, y3 = 15, z3 = 12 #### Solution: The number of friends must divide both 30 and 24, so it is a common divisor of 30 and 24; since gcd(30, 24) = 6, the possibilities (with more than one friend) are 6, 3, and 2: y1 = 30/x1 = 30/6 = 5, z1 = 24/x1 = 24/6 = 4; x2 = 3, y2 = 30/x2 = 30/3 = 10, z2 = 24/x2 = 24/3 = 8; x3 = 2, y3 = 30/x3 = 30/2 = 15, z3 = 24/x3 = 24/2 = 12.
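The divisor-counting argument behind the solution can be sketched in a few lines of Python. This is an illustrative check, not part of the original solution: the candidate group sizes are exactly the divisors of gcd(30, 24).

```python
import math

# The number of friends must divide both 30 lollipops and 24 gums,
# so the candidates are exactly the divisors of gcd(30, 24) = 6.
g = math.gcd(30, 24)
group_sizes = [d for d in range(1, g + 1) if g % d == 0]

for x in sorted(group_sizes, reverse=True):
    print(x, 30 // x, 24 // x)  # friends, lollipops each, gums each
```

The output lists (6, 5, 4), (3, 10, 8) and (2, 15, 12) as in the solution, plus the trivial one-friend case.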
{}
# Deactivating invisible in beamer in handout mode [duplicate]

I am using the invisible option in my presentation, but I would also like a print version in which the invisible items are shown. When I use the handout mode, only the first version of each slide is printed (that is, the items that become visible later are not displayed). I don't want to manually remove all the invisible instances. Is there a method to deactivate all the invisible instances at once? Thanks, and here is a MWE.

\documentclass[handout, english]{beamer}
\begin{document}
\begin{frame}{Questions:}
\begin{itemize}
\invisible<1>{\item[A.] Question 1.}
\pause\invisible<-2>{\item[B.] Question 2.}
\pause\invisible<-3>{\item[C.] Question 3.}
\end{itemize}
\end{frame}
\end{document}

## marked as duplicate by Mensch, Sebastiano, Phelype Oleinik, marmot, Stefan Pinnow Feb 12 at 6:18

I don't know if this will help: add |handout:0 to each overlay specification. On a handout there is no slide 0, so the \invisible command never applies in handout mode and every item is printed.

\documentclass[handout, english]{beamer}
\begin{document}
\begin{frame}{Questions:}
\begin{itemize}
\invisible<1|handout:0>{\item[A.] Question 1.}
\pause\invisible<-2|handout:0>{\item[B.] Question 2.}
\pause\invisible<-3|handout:0>{\item[C.] Question 3.}
\end{itemize}
\end{frame}
\end{document}

• @EvangelosCon Pleased to know that your problem has been solved! – Hafid Boukhoulda Feb 12 at 4:05
{}
# Solving Mathematical Problems

A personal perspective by Terence Tao. This is a new edition of a book which was written by Tao more than 15 years ago, which means he was only 15 when he wrote it! It's a thin little book that takes a leisurely look at solving some competition-type problems. The coverage is not huge, but the author takes pains to go through in great detail various strategies one can adopt in solving problems. Quite a nice book, but very pricey for 102 pages.

I found exercise 2.1 quite fun. In a parlour game, the 'magician' asks one of the participants to think of a three-digit number $abc_{10}$. Then the magician asks the participant to add the five numbers $acb_{10}, bac_{10}, bca_{10}, cab_{10}$ and $cba_{10}$, and reveal their sum. Suppose the sum was 3194. What was $abc_{10}$?

My solution is this. If we add all six permutations, each digit appears exactly twice in each place, so the total equals $(2a+2b+2c) \times 100 + (2a+2b+2c) \times 10 + (2a+2b+2c) = (a+b+c) \times 222$. So we just need to consider the multiples $1 \times 222, \ldots, 27 \times 222$. Take the smallest multiple larger than the given sum, subtract the sum, and check whether the digit sum of the difference matches the multiplier. You do not have to do this with more than 5 different multiples. Here, $15 \times 222 = 3330$ and $3330 - 3194 = 136$, but $1+3+6 = 10 \neq 15$, so this is incorrect. Next, $16 \times 222 - 3194 = 136 + 222 = 358$, and $3+5+8 = 16$, so we have found our number: $abc_{10} = 358$.

This entry was posted in Books, Problems.

### 3 Responses to Solving Mathematical Problems

1. Carlos says: Wrong, the correct first equation is 100(a+2b+2c)+10(c+2b+2a)+b+2a+2c=3194. Even though that's not very useful…
2. Carlos says: Forget it, I confused abc_10 with acb_10
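The recovery step can also be verified by brute force. The following Python sketch (not from the book; the function name is illustrative) searches all three-digit numbers for one whose five permutations sum to 3194:

```python
# Brute-force check of the parlour trick: find every three-digit number abc
# whose five digit permutations acb, bac, bca, cab, cba sum to 3194.
def five_perm_sum(n):
    a, b, c = n // 100, (n // 10) % 10, n % 10
    return ((100*a + 10*c + b) + (100*b + 10*a + c) + (100*b + 10*c + a)
            + (100*c + 10*a + b) + (100*c + 10*b + a))

candidates = [n for n in range(100, 1000) if five_perm_sum(n) == 3194]
print(candidates)  # [358] -- the answer is unique
```

The search confirms both the answer 358 and that no other three-digit number produces the same sum, so the magician's trick is well defined for this input.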
{}
The Book of Statistical Proofs ▷ General Theorems ▷ Probability theory ▷ Probability functions ▷ Cumulative distribution function of discrete random variable

Theorem: Let $X$ be a discrete random variable with possible values $\mathcal{X}$ and probability mass function $f_X(x)$. Then, the cumulative distribution function of $X$ is $\label{eq:cdf-pmf} F_X(x) = \sum_{\overset{t \in \mathcal{X}}{t \leq x}} f_X(t) \; .$

Proof: The cumulative distribution function of a random variable $X$ is defined as the probability that $X$ is smaller than or equal to $x$: $\label{eq:cdf} F_X(x) = \mathrm{Pr}(X \leq x) \; .$ The probability mass function of a discrete random variable $X$ returns the probability that $X$ takes a particular value $x$: $\label{eq:pmf} f_X(x) = \mathrm{Pr}(X = x) \; .$ Because $X$ takes values only in $\mathcal{X}$, the event $\{X \leq x\}$ is the disjoint union of the events $\{X = t\}$ over all $t \in \mathcal{X}$ with $t \leq x$. Taking these two definitions together, we therefore have: $\label{eq:cdf-pmf-qed} \begin{split} F_X(x) &\overset{\eqref{eq:cdf}}{=} \sum_{\overset{t \in \mathcal{X}}{t \leq x}} \mathrm{Pr}(X = t) \\ &\overset{\eqref{eq:pmf}}{=} \sum_{\overset{t \in \mathcal{X}}{t \leq x}} f_X(t) \; . \end{split}$

Metadata: ID: P189 | shortcut: cdf-pmf | author: JoramSoch | date: 2020-11-12, 06:03.
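As a concrete illustration of the theorem (not part of the original proof), the CDF of a fair six-sided die can be accumulated from its PMF; exact fractions avoid floating-point noise:

```python
from fractions import Fraction

# F_X(x) = sum of f_X(t) over all support points t <= x.
def cdf(pmf, x):
    return sum(p for t, p in pmf.items() if t <= x)

die_pmf = {k: Fraction(1, 6) for k in range(1, 7)}  # fair six-sided die

print(cdf(die_pmf, 3))   # 1/2
print(cdf(die_pmf, 0))   # 0 (below the support)
print(cdf(die_pmf, 6))   # 1 (entire support)
```

As the theorem requires, the function is a step function that rises by $f_X(t)$ at each support point and reaches 1 at the top of the support.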
{}
# How many Soviet-era transport planes would be required to transport 30'000 Rhinocerotidae to the southern US border unnoticed?

In a world similar to ours, a mad scientist is bent on preventing the ascendance of one certain individual over the rest of the people inhabiting the northern part of continental America, also called U.N.A. (the Unity of Northern America). The U.N.A. holds presidential elections similar to those of the earth USA and is currently in the final phase of its elections. Our scientist has tried everything from social media, to press publications, to appealing to the common sense of the people, but he failed nonetheless. As a last resort he gathers the funds to transport 30'000 members of the family Rhinocerotidae to the southern border of the U.N.A., hoping to disrupt the election process and stop people from committing what he deems to be a mistake of global consequence. The scientist gathers all the Rhinocerotidae in the eastern part of Europe and wants to transport them all as silently/stealthily as possible to the southern border of the U.N.A. using Soviet-era transport planes. How would he have to go about it, and how many planes would he require?

• "The scientist gathers all the Rhinocerotidae in the eastern part of Europe" not sure there are 30,000 rhinos in the "Eastern part of Europe"... – clem steredenn Aug 25 '16 at 10:08
• there aren't enough rhinos in the world.... – Charon Aug 25 '16 at 10:48
• Somehow, I think that you couldn't fly any Soviet-era transport planes to the US border "unnoticed". – a CVn Aug 25 '16 at 11:08
• I love that this question is hard science. It's so beautifully absurd that it has to be given full attention to detail. – inappropriateCode Aug 25 '16 at 15:10
• Also, I have been to eastern Europe and I can confirm that Rhinos were not among the many attractions of that part of the world.
– Joseph Rogers Aug 25 '16 at 21:41

### 500 planes

If we consider the large Antonov, we have planes that weigh 285 tons and can carry 355 tons of load. They have cargo-hold dimensions of $44\times6\times4.4\mbox{ m}^3$. Now a few facts about the rhinos. They can be up to $4\times2\times2\mbox{ m}^3$ and weigh 1 to 3 tons depending on the species. So by building a deck within your cargo area, you'd be able to place about 60 rhinos within each plane. You might fit more if you pack them more tightly. The weight of the rhinos alone would amount to 60-180 tons, which is well below the maximum load. You'd still need to add the deck structure, packing, food, etc., but you would probably stay within the limit. On the basis of 60 per plane, you'd get $30000/60 = 500$ planes. But do note that, summing up the numbers of rhinos of the various species, we barely come to 30,000 rhinos in the whole world! It's going to require some planning for your stealthy operation.

• On the article I cite, for one of the species, they didn't write the exact number, or I missed it. And those are only referring to wild animals. So by maybe stretching the edges, we might get close enough to 30,000. Like 28,000. – clem steredenn Aug 25 '16 at 10:30
• @ddriver Indeed. I still think that it's easier to build another Antonov than to gather all the rhino population of the world! – clem steredenn Aug 25 '16 at 11:45
• You mean building 498 Antonov An-225's, completing the partially built one, and using the sole completed aircraft. Training about 500 pilots and the air crew to fly them. Perhaps it might be easier to breed all the rhinoceroses needed for this operation as well. Heavens! A mere snap of fingers. – a4android Aug 25 '16 at 11:58
• Considering that the An-225s are most comparable to something like the Airbus A380, of which just under 200 have been built since deliveries began in 2007... with a unit price of \$430m... nothing about this operation can ever be unnoticed.
Pretty sure everyone is going to notice the enormous new Antonov factory, and fact someone has managed to steal all of the world's rhinos. – inappropriateCode Aug 25 '16 at 15:09 • @a4android It's a strange day when comments basically come down to questions of whether it's easier to breed rhinos or to breed airplanes. – Cort Ammon Aug 25 '16 at 21:30 Use ships. While 500 planes would be hard to gather and hard to hide, the small number of ships would be much less noticeable. Even relatively small ships would hold 5,000-10,000 20ft containers, each easily capable of holding a rhino and a substantial amount of food. The container acts as a built in cage. People are used to seeing large numbers of containers being moved around, so having the ships unload onto trucks which then drop them at the desired location shouldn't even raise an eyebrow, especially if rumours of a 'big construction project' are spread. You might need to forge some import documents, and even bribe a few inspectors who want to open the containers, but that's definitely easier than trying to hide 500 transport planes. Ships do of course take longer than aircraft, but in this world the US elections take forever - easily enough time to float a few mono-horned behemoths across the Atlantic. The best thing is that when you want the rhinos to actually do their thing (whatever that is) you just remote trigger a hidden door in each container and the rhinos wander out. Spectacular.
{}
# Article

Keywords: real hypersurface; complex hyperbolic two-plane Grassmannians; Hopf hypersurface; shape operator; Ricci tensor; normal Jacobi operator; commuting condition

Summary: We give a classification of Hopf real hypersurfaces in complex hyperbolic two-plane Grassmannians ${\rm SU}_{2,m}/S(U_{2}{\cdot }U_{m})$ with commuting conditions between the restricted normal Jacobi operator $\overline {R}_{N}\phi$ and the shape operator $A$ (or the Ricci tensor $S$).
{}
# $NO_{2}$ required for a reaction is produced by the decomposition of $N_{2}O_{5}$ in $CCl_{4}$ as per the equation $2N_{2}O_{5}(g)\rightarrow 4NO_{2}(g)+O_{2}(g)$. The initial concentration of $N_{2}O_{5}$ is $3.00\ mol\ L^{-1}$ and it is $2.75\ mol\ L^{-1}$ after 30 minutes. The rate of formation of $NO_{2}$ is:

Option 1) $4.167\times 10^{-3}\ mol\ L^{-1}\ min^{-1}$
Option 2) $1.667\times 10^{-2}\ mol\ L^{-1}\ min^{-1}$
Option 3) $8.333\times 10^{-3}\ mol\ L^{-1}\ min^{-1}$
Option 4) $2.083\times 10^{-3}\ mol\ L^{-1}\ min^{-1}$

Solution:

$2N_{2}O_{5}(g)\rightarrow 4NO_{2}(g)+O_{2}(g)$

At $t=0$: $[N_{2}O_{5}] = 3.00\ M$; at $t=30$ min: $[N_{2}O_{5}] = 2.75\ M$. So

$\frac{-\Delta [N_{2}O_{5}]}{\Delta t}=\frac{-(2.75-3.00)}{30-0}=\frac{0.25}{30}\ M/min$

From the stoichiometry of the reaction,

$-\frac{1}{2}\frac{\Delta [N_{2}O_{5}]}{\Delta t}=\frac{1}{4}\frac{\Delta [NO_{2}]}{\Delta t}=\frac{\Delta [O_{2}]}{\Delta t}$

so

$\frac{\Delta [NO_{2}]}{\Delta t}=\frac{0.25}{30}\times 2=1.667\times 10^{-2}\ mol\ L^{-1}\ min^{-1}$

The correct answer is Option 2.
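The stoichiometric scaling above can be reproduced in a few lines of Python (an illustrative check, not part of the original solution):

```python
# 2 N2O5 -> 4 NO2 + O2: relate the NO2 formation rate to the
# N2O5 disappearance rate over the first 30 minutes.
c0, c1, dt = 3.00, 2.75, 30.0        # concentrations in mol/L, time in min
rate_n2o5 = -(c1 - c0) / dt          # rate of disappearance of N2O5
rate_no2 = rate_n2o5 * (4 / 2)       # 4 mol NO2 form per 2 mol N2O5 consumed

print(f"{rate_n2o5:.3e}")  # 8.333e-03 mol L^-1 min^-1
print(f"{rate_no2:.3e}")   # 1.667e-02 mol L^-1 min^-1
```

Note that forgetting the factor of 4/2 gives Option 3, the rate of N2O5 disappearance itself, which is a common trap in this question.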
{}
# Laplace Transform Tables

The Laplace transform of a function $f(t)$, defined for $t \ge 0$, is

$F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty f(t)\,e^{-st}\,dt \; ,$

where $s$ is a complex parameter. The Laplace transform is used to quickly find solutions of differential equations and integrals, and is well suited to handling piecewise continuous or impulsive forcing. For a sampled signal, $t = kT$: given $f(t)$, simply replace $k$ by $t/T$ in the sequence definition. The tables below collect the most commonly used transform pairs and properties; they are not inclusive.

Common transform pairs ($u(t)$ is the unit step, $\delta(t)$ the unit impulse at $t=0$):

$\delta(t) \leftrightarrow 1$ (all $s$)
$u(t) \leftrightarrow 1/s$ ($s > 0$)
$t \leftrightarrow 1/s^2$ ($s > 0$)
$t^n,\ n = 1, 2, 3, \ldots \leftrightarrow n!/s^{n+1}$ ($s > 0$)
$e^{at} \leftrightarrow 1/(s-a)$ ($s > a$)
$t^n e^{at} \leftrightarrow n!/(s-a)^{n+1}$ ($s > a$)
$\sin at \leftrightarrow a/(s^2+a^2)$ ($s > 0$)
$\cos at \leftrightarrow s/(s^2+a^2)$ ($s > 0$)
$\sinh at \leftrightarrow a/(s^2-a^2)$ ($s > |a|$)
$\cosh at \leftrightarrow s/(s^2-a^2)$ ($s > |a|$)
$e^{at}\sin bt \leftrightarrow b/((s-a)^2+b^2)$ ($s > a$)
$e^{at}\cos bt \leftrightarrow (s-a)/((s-a)^2+b^2)$ ($s > a$)
$u(t-a) \leftrightarrow e^{-as}/s$ ($s > 0$)

Be careful when using ordinary trig functions vs. hyperbolic trig functions. Recall the definitions of the hyperbolic functions: $\cosh t = \frac{e^t + e^{-t}}{2}$ and $\sinh t = \frac{e^t - e^{-t}}{2}$.

Properties of the Laplace transform:

Linearity: $\mathcal{L}\{af(t) + bg(t)\} = aF(s) + bG(s)$
$s$-shift: $\mathcal{L}\{e^{at}f(t)\} = F(s-a)$
$t$-shift: $\mathcal{L}\{u(t-a)f(t-a)\} = e^{-as}F(s)$
Scaling: $\mathcal{L}\{f(at)\} = \frac{1}{a}F\!\left(\frac{s}{a}\right)$
Differentiation: $\mathcal{L}\{f'(t)\} = sF(s) - f(0)$, $\mathcal{L}\{f''(t)\} = s^2F(s) - sf(0) - f'(0)$, and in general $\mathcal{L}\{f^{(n)}(t)\} = s^nF(s) - s^{n-1}f(0) - \cdots - f^{(n-1)}(0)$
Multiplication by $t$: $\mathcal{L}\{t^n f(t)\} = (-1)^n F^{(n)}(s)$
Convolution: $\mathcal{L}\left\{\int_0^t f(x)\,g(t-x)\,dx\right\} = F(s)G(s)$

Any table of Laplace transforms is also a table of inverse Laplace transforms: instead of reading off $F(s)$ for each $f(t)$, read off the $f(t)$ for each $F(s)$.

Steps for solving an ODE by the Laplace transform: (1) transform both sides of the given equation, using the derivative rule to bring in the initial conditions; (2) solve the resulting algebraic equation for $Y(s)$; (3) perform a partial fraction decomposition of $Y(s)$; (4) take the inverse Laplace transform of each term using the table. For example, for $y'' + 4y = 10e^{-t}$ with zero initial conditions, transforming gives $s^2Y + 4Y = \frac{10}{s+1}$, so $Y = \frac{10}{(s^2+4)(s+1)}$, which is then inverted after a partial fraction decomposition. As a building block, $\mathcal{L}\{e^{-t}\} = \frac{1}{s+1}$ by the table.

For sampled signals $x(kT)$ with sampling period $T$, the sources also give the corresponding $z$-transforms:

Kronecker delta $\delta_0(k) \leftrightarrow 1$
unit step $1(k) \leftrightarrow \frac{1}{1-z^{-1}}$
$t = kT$ (i.e. $1/s^2$) $\leftrightarrow \frac{Tz^{-1}}{(1-z^{-1})^2}$
$e^{-akT}$ (i.e. $\frac{1}{s+a}$) $\leftrightarrow \frac{1}{1-e^{-aT}z^{-1}}$
$t^2 = (kT)^2$ (i.e. $2/s^3$) $\leftrightarrow \frac{T^2 z^{-1}(1+z^{-1})}{(1-z^{-1})^3}$

These entries are compiled from several standard course tables, including the MIT 18.031 Laplace transform table, S. Boyd's EE102 table of Laplace transforms, and the table in D'Azzo and Houpis, Linear Control Systems Analysis and Design, 1988; for more extensive tables, see the references in those sources.
Transforms of a few simple functions from the following table are useful for this... The engineer that contains information on the Laplace transforms and formulas at ) * ( cos cot State. Transforms of a few of the time domain function, multiplied by e-st ) sinh )! Trans-Form pairs Notes for Laplace transform ( PDF ) Choices ( PDF Answer. Cos cot ) State the Laplace transform this case, and we can use the table of encountered... Or impulsive force nn, =1,2,3, … 1 first prove a simple... Sommable et nulle pour t < 0 4. tn−1 ( n− 1 ) social! T nn, =1,2,3, … 1 contains information on the Laplace transforms and formulas -! Equations and integrals cos cot ) State the Laplace transforms and only contains some of more. When solving problems laplace table pdf science and engineering that require Laplace transform of various common functions from memory =1,2,3, 1! Hand-Held devices may be di cult as they require a \slideshow '' mode S.Boyd EE102 table Laplace... Slides are con gured for viewing on a computer screen ARVUTISÜS IAX0010 at University... The textbook using “ normal ” trig function vs. hyperbolic trig functions, 2,3, k.... Few of the more commonly used Laplace transforms that we ’ ll be using in the function definition by.! — 2Tr ) sin t 18 tables of Laplace transform 3 { 13 These PDF slides are not a provided! P+Ia ) n+1 12 tn cosat, n = 1, 2,3, k.... The table entry for the step, but is also used for the ramp 0 tn−1! -- -== eeee 3 devices may be di cult as they require \slideshow. S laplace table pdf kT ( ) 2 6 tneat n are useful for applying this technique sn. 5 teat 1 ( p−ia ) n+1 1 ( p+ia ) n+1 12 tn cosat, n = integer... Is very useful when solving problems in science and engineering that require Laplace?. Fall 2010 8 Properties of Laplace transforms and only contains some of the Laplace... Find the inverse Laplace transform is used to obtain new trans-form pairs - I Ang M.S Reference! T0 ) e st0F ( s ) 4 Shift property ( paragraph 11 ….... 
Xform Properties ; Link to shortened 2-page PDF of Laplace and Z transforms ; Laplace Properties ; Xform!, see Ref Notes for Laplace transform table has been made below often given in of... The material tt + -- -== eeee 3 transform table has been made below Rememberthatweconsiderallfunctions ( signals asdeflnedonlyont‚0. Solutions that diffused indefinitely in space in tables of Laplace and inverse Laplace table! May be di cult as they require a \slideshow '' mode ) State Laplace... These slides are con gured for viewing on a computer screen Fundamentals of Electric Summary. Paragraph 11 … SEC we can use the Shift property ( paragraph …! Section is the table entry for the step, but is also used for other.! Shen April 2009 NB require a \slideshow '' mode ’ ll be in. Used to quickly find solutions for differential equations and integrals for Free encountered transforms.
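The table entries above can be sanity-checked numerically by evaluating the defining integral on a truncated interval. A stdlib-only sketch (the cutoff `T`, step count `n`, and the sample values `a = 0.5`, `s = 2` are illustrative choices, not from the notes):

```python
# Numerically check the table entry L{e^{a t}} = 1/(s - a) (valid for s > a)
# by approximating F(s) = int_0^inf f(t) e^{-s t} dt with the trapezoid rule
# on [0, T]; the tail beyond T is negligible for s > a.
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    """Trapezoid-rule approximation of int_0^T f(t) exp(-s t) dt."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

a, s = 0.5, 2.0
approx = laplace_numeric(lambda t: math.exp(a * t), s)
exact = 1.0 / (s - a)                  # the table entry for e^{a t}
print(abs(approx - exact) < 1e-4)      # True: the integral matches the table
```

The same helper checks any other pair, e.g. `laplace_numeric(lambda t: math.sin(2*t), s)` against `2/(s**2 + 4)`.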
# venn diagrams using tikz

I found the following code for typesetting Venn Diagrams using TikZ recently. However, I would like for there to be 2 diagrams to each line. How can I accomplish this?

    % Definition of circles
    \def\firstcircle{(0,0) circle (1.5cm)}
    \def\secondcircle{(0:2cm) circle (1.5cm)}

    \colorlet{circle edge}{blue!50}
    \colorlet{circle area}{blue!20}

    \tikzset{filled/.style={fill=circle area, draw=circle edge, thick},
        outline/.style={draw=circle edge, thick}}

    \setlength{\parskip}{5mm}

    % Set A and B
    \begin{tikzpicture}
        \begin{scope}
            \clip \firstcircle;
            \fill[filled] \secondcircle;
        \end{scope}
        \draw[outline] \firstcircle node {$A$};
        \draw[outline] \secondcircle node {$B$};
        \node[anchor=south] at (current bounding box.north) {$A \cap B$};
    \end{tikzpicture}

    % Set A or B but not (A and B)
    \begin{tikzpicture}
        \draw[filled, even odd rule] \firstcircle node {$A$}
                                     \secondcircle node {$B$};
        \node[anchor=south] at (current bounding box.north) {${(A \cap B)^{C}}$};
    \end{tikzpicture}

- What do you mean by "2 diagrams to each line," that the diagrams should be side by side? – adn May 14 '12 at 6:19
- If you simply remove the blank line between the two tikzpictures (or replace it with %) they will be next to each other. – Peter Grill May 14 '12 at 6:28

If you are looking to put the figures side by side, you can use several methods: a table, a subfloat, or minipages. The main idea is that you need to wrap your diagrams, and then adjust the alignment of the wrappers.
For example, using subfloat, you can get:

    \documentclass{article}
    \usepackage{tikz}
    \usepackage{subfig}

    \begin{document}

    % Definition of circles
    \def\firstcircle{(0,0) circle (1.5cm)}
    \def\secondcircle{(0:2cm) circle (1.5cm)}

    \colorlet{circle edge}{blue!50}
    \colorlet{circle area}{blue!20}

    \tikzset{filled/.style={fill=circle area, draw=circle edge, thick},
        outline/.style={draw=circle edge, thick}}

    \setlength{\parskip}{5mm}

    \begin{figure}
    \centering
    % Set A and B
    \subfloat{%
    \begin{tikzpicture}
        \begin{scope}
            \clip \firstcircle;
            \fill[filled] \secondcircle;
        \end{scope}
        \draw[outline] \firstcircle node {$A$};
        \draw[outline] \secondcircle node {$B$};
        \node[anchor=south] at (current bounding box.north) {$A \cap B$};
    \end{tikzpicture}
    }
    \hfil
    % Set A or B but not (A and B)
    \subfloat{%
    \begin{tikzpicture}
        \draw[filled, even odd rule] \firstcircle node {$A$}
                                     \secondcircle node {$B$};
        \node[anchor=south] at (current bounding box.north) {${(A \cap B)^{C}}$};
    \end{tikzpicture}
    }
    \end{figure}

    \end{document}

And again, you can do the same using a minipage, in which you have to indicate the width.
    \documentclass{article}
    \usepackage{tikz}

    \begin{document}

    % Definition of circles
    \def\firstcircle{(0,0) circle (1.5cm)}
    \def\secondcircle{(0:2cm) circle (1.5cm)}

    \colorlet{circle edge}{blue!50}
    \colorlet{circle area}{blue!20}

    \tikzset{filled/.style={fill=circle area, draw=circle edge, thick},
        outline/.style={draw=circle edge, thick}}

    \setlength{\parskip}{5mm}

    \begin{figure}
    \centering
    % Set A and B
    \begin{minipage}{0.49\textwidth}
    \begin{tikzpicture}
        \begin{scope}
            \clip \firstcircle;
            \fill[filled] \secondcircle;
        \end{scope}
        \draw[outline] \firstcircle node {$A$};
        \draw[outline] \secondcircle node {$B$};
        \node[anchor=south] at (current bounding box.north) {$A \cap B$};
    \end{tikzpicture}
    \end{minipage}
    % Set A or B but not (A and B)
    \begin{minipage}{0.49\textwidth}
    \begin{tikzpicture}
        \draw[filled, even odd rule] \firstcircle node {$A$}
                                     \secondcircle node {$B$};
        \node[anchor=south] at (current bounding box.north) {${(A \cap B)^{C}}$};
    \end{tikzpicture}
    \end{minipage}
    \end{figure}

    \end{document}

- Thanks for all your help. I believe that has answered my question. :) – Michael Dykes May 16 '12 at 17:47
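The answer mentions a table as a third option without showing it. A minimal sketch of that variant, assuming the same preamble, circle definitions, and `filled`/`outline` styles as in the examples above:

```latex
% Two tikzpictures in one tabular row; assumes \firstcircle, \secondcircle
% and the filled/outline styles defined earlier are in scope.
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{tikzpicture}
    \begin{scope}
        \clip \firstcircle;
        \fill[filled] \secondcircle;
    \end{scope}
    \draw[outline] \firstcircle node {$A$};
    \draw[outline] \secondcircle node {$B$};
    \node[anchor=south] at (current bounding box.north) {$A \cap B$};
\end{tikzpicture}
&
\begin{tikzpicture}
    \draw[filled, even odd rule] \firstcircle node {$A$}
                                 \secondcircle node {$B$};
    \node[anchor=south] at (current bounding box.north) {${(A \cap B)^{C}}$};
\end{tikzpicture}
\end{tabular}
\end{figure}
```

By default the cells are bottom-aligned on the baseline; an optional `[t]` or `[b]` argument to `tabular` adjusts this, as with minipages.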
# Choose the Correct Answer of the Following Question: The Surface Areas of Two Spheres Are in the Ratio 16 : 9. The Ratio of Their Volumes Is - Mathematics MCQ

Choose the correct answer of the following question: The surface areas of two spheres are in the ratio 16 : 9. The ratio of their volumes is

• 64 : 27

• 16 : 9

• 4 : 3

• 16³ : 9³

#### Solution

Let the radii of the two spheres be R and r.

Since (surface area of the first sphere)/(surface area of the second sphere) = 16/9,

$\frac{4\pi R^2}{4\pi r^2} = \frac{16}{9} \implies \left(\frac{R}{r}\right)^2 = \frac{16}{9} \implies \frac{R}{r} = \sqrt{\frac{16}{9}} = \frac{4}{3}$        .........(i)

Now, the ratio of their volumes is

$\frac{\text{volume of the first sphere}}{\text{volume of the second sphere}} = \frac{\frac{4}{3}\pi R^3}{\frac{4}{3}\pi r^3} = \left(\frac{R}{r}\right)^3 = \left(\frac{4}{3}\right)^3 = \frac{64}{27}$        [using (i)]

i.e. 64 : 27. Hence, the correct answer is option (a).
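The arithmetic can be confirmed in one line of calculation: areas scale with the square of the radius and volumes with the cube, so a 16 : 9 area ratio forces a (16/9)^(3/2) = 64/27 volume ratio. A trivial stdlib check (variable names are mine):

```python
# If surface areas scale as 16:9, radii scale as sqrt(16/9) = 4/3,
# and volumes scale as (4/3)^3 = 64/27.
import math

area_ratio = 16 / 9
radius_ratio = math.sqrt(area_ratio)        # R/r = 4/3
volume_ratio = radius_ratio ** 3            # (R/r)^3

print(abs(volume_ratio - 64 / 27) < 1e-12)  # True
```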
In mathematics, a module is a generalization of the notion of vector space, wherein the field of scalars is replaced by a ring. The concept of ''module'' is also a generalization of that of abelian group, since the abelian groups are exactly the modules over the ring of integers.

Like a vector space, a module is an additive abelian group, and scalar multiplication is distributive over the operation of addition between elements of the ring or module and is compatible with the ring multiplication.
Modules are very closely related to the representation theory of groups. They are also one of the central notions of commutative algebra and homological algebra, and are used widely in algebraic geometry and algebraic topology.
# Introduction and definition

## Motivation

In a vector space, the set of scalars is a field and acts on the vectors by scalar multiplication, subject to certain axioms such as the distributive law. In a module, the scalars need only be a ring, so the module concept represents a significant generalization. In commutative algebra, both ideals and quotient rings are modules, so that many arguments about ideals or quotient rings can be combined into a single argument about modules. In non-commutative algebra, the distinction between left ideals, ideals, and modules becomes more pronounced, though some ring-theoretic conditions can be expressed either about left ideals or left modules.
Much of the theory of modules consists of extending as many of the desirable properties of vector spaces as possible to the realm of modules over a "well-behaved" ring, such as a principal ideal domain. However, modules can be quite a bit more complicated than vector spaces; for instance, not all modules have a basis, and even those that do, free modules, need not have a unique rank if the underlying ring does not satisfy the invariant basis number
condition, unlike vector spaces, which always have a (possibly infinite) basis whose cardinality is then unique. (These last two assertions require the axiom of choice in general, but not in the case of finite-dimensional spaces, or certain well-behaved infinite-dimensional spaces such as L''p'' spaces.)

## Formal definition

Suppose that ''R'' is a ring, and 1 is its multiplicative identity. A left ''R''-module ''M'' consists of an abelian group (''M'', +) and an operation ⋅ : ''R'' × ''M'' → ''M'' such that for all ''r'', ''s'' in ''R'' and ''x'', ''y'' in ''M'', we have

#$r \cdot \left( x + y \right) = r \cdot x + r \cdot y$
#$\left( r + s \right) \cdot x = r \cdot x + s \cdot x$
#$\left( r s \right) \cdot x = r \cdot \left( s \cdot x \right)$
#$1 \cdot x = x .$

The operation ⋅ is called ''scalar multiplication''. Often the symbol ⋅ is omitted, but in this article we use it and reserve juxtaposition for multiplication in ''R''. One may write ''R''''M'' to emphasize that ''M'' is a left ''R''-module. A right ''R''-module ''M''''R'' is defined similarly in terms of an operation ⋅ : ''M'' × ''R'' → ''M''.

Authors who do not require rings to be unital omit condition 4 in the definition above; they would call the structures defined above "unital left ''R''-modules". In this article, consistent with the glossary of ring theory
, all rings and modules are assumed to be unital.

An ''(R,S)''-bimodule is an abelian group together with both a left scalar multiplication ⋅ by elements of ''R'' and a right scalar multiplication ∗ by elements of ''S'', making it simultaneously a left ''R''-module and a right ''S''-module, satisfying the additional condition $\left(r \cdot x\right) \ast s = r \cdot \left( x \ast s \right)$ for all ''r'' in ''R'', ''x'' in ''M'', and ''s'' in ''S''. If ''R'' is commutative, then left ''R''-modules are the same as right ''R''-modules and are simply called ''R''-modules.

# Examples

*If ''K'' is a field, then ''K''-vector spaces (vector spaces over ''K'') and ''K''-modules are identical.
*If ''K'' is a field, and ''K''[''x''] a univariate polynomial ring, then a ''K''[''x'']-module ''M'' is a ''K''-module with an additional action of ''x'' on ''M'' that commutes with the action of ''K'' on ''M''. In other words, a ''K''[''x'']-module is a ''K''-vector space ''M'' combined with a linear map from ''M'' to ''M''. Applying the structure theorem for finitely generated modules over a principal ideal domain to this example shows the existence of the rational and Jordan canonical forms.

*The concept of a Z-module agrees with the notion of an abelian group. That is, every abelian group is a module over the ring of integers Z in a unique way. For $n \ge 0$, let $n x = x + x + \cdots + x$ (''n'' summands), $0 x = 0$, and $(-n) x = -(n x)$.
Such a module need not have a basis; groups containing torsion elements do not. (For example, in the group of integers modulo 3, one cannot find even one element which satisfies the definition of a linearly independent set, since when an integer such as 3 or 6 multiplies an element, the result is 0. However, if a finite field is considered as a module over the same finite field taken as a ring, it is a vector space and does have a basis.)

*The decimal fractions (including negative ones) form a module over the integers. Only singletons are linearly independent sets, but there is no singleton that can serve as a basis, so the module has no basis and no rank.

*If ''R'' is any ring and ''n'' a natural number, then the cartesian product
''R''''n'' is both a left and right ''R''-module over ''R'' if we use the component-wise operations. Hence when ''n'' = 1, ''R'' is an ''R''-module, where the scalar multiplication is just ring multiplication. The case ''n'' = 0 yields the trivial ''R''-module consisting only of its identity element. Modules of this type are called free, and if ''R'' has invariant basis number (e.g. any commutative ring or field) the number ''n'' is then the rank of the free module.

*If M''n''(''R'') is the ring of matrices over a ring ''R'', ''M'' is an M''n''(''R'')-module, and ''e''''i'' is the matrix with 1 in the (''i'', ''i'')-entry (and zeros elsewhere), then ''e''''i''''M'' is an ''R''-module, since $r e_i m = e_i r m \in e_i M$. So ''M'' breaks up as the direct sum of ''R''-modules, $M = e_1 M \oplus \cdots \oplus e_n M$. Conversely, given an ''R''-module ''M''0, then $M_0^{\oplus n}$ is an M''n''(''R'')-module. In fact, the category of ''R''-modules and the category
of M''n''(''R'')-modules are equivalent. The special case is where the module ''M'' is just ''R'' as a module over itself; then ''R''''n'' is an M''n''(''R'')-module.

*If ''S'' is a nonempty set, ''M'' is a left ''R''-module, and ''M''''S'' is the collection of all functions ''f'' : ''S'' → ''M'', then with addition and scalar multiplication in ''M''''S'' defined pointwise by (''f'' + ''g'')(''s'') = ''f''(''s'') + ''g''(''s'') and (''rf'')(''s'') = ''r''(''f''(''s'')), ''M''''S'' is a left ''R''-module. The right ''R''-module case is analogous. In particular, if ''R'' is commutative then the collection of ''R-module homomorphisms'' (see below) is an ''R''-module (and in fact a ''submodule'' of ''N''''M'').

*If ''X'' is a smooth manifold, then the smooth functions from ''X'' to the real numbers form a ring ''C''(''X''). The set of all smooth vector fields defined on ''X'' form a module over ''C''(''X''), and so do the tensor fields and the differential forms on ''X''. More generally, the sections of any vector bundle form a projective module
over ''C''(''X''), and by Swan's theorem, every projective module is isomorphic to the module of sections of some bundle; the category of ''C''(''X'')-modules and the category of vector bundles over ''X'' are equivalent.

*If ''R'' is any ring and ''I'' is any left ideal in ''R'', then ''I'' is a left ''R''-module, and analogously right ideals in ''R'' are right ''R''-modules.

*If ''R'' is a ring, we can define the opposite ring ''R''op, which has the same underlying set and the same addition operation, but the opposite multiplication: if ''ab'' = ''c'' in ''R'', then ''ba'' = ''c'' in ''R''op. Any ''left'' ''R''-module ''M'' can then be seen to be a ''right'' module over ''R''op, and any right module over ''R'' can be considered a left module over ''R''op.
* Modules over a Lie algebra are (associative algebra) modules over its universal enveloping algebra.
*If ''R'' and ''S'' are rings with a ring homomorphism ''φ'' : ''R'' → ''S'', then every ''S''-module ''M'' is an ''R''-module by defining ''rm'' = ''φ''(''r'')''m''. In particular, ''S'' itself is such an ''R''-module.

# Submodules and homomorphisms

Suppose ''M'' is a left ''R''-module and ''N'' is a subgroup of ''M''. Then ''N'' is a submodule (or more explicitly an ''R''-submodule) if for any ''n'' in ''N'' and any ''r'' in ''R'', the product ''r'' ⋅ ''n'' (or ''n'' ⋅ ''r'' for a right ''R''-module) is in ''N''. If ''X'' is any subset of an ''R''-module ''M'', then the submodule spanned by ''X'' is defined to be $\langle X \rangle = \bigcap_{N \supseteq X} N$ where ''N'' runs over the submodules of ''M'' which contain ''X'', or explicitly $\left\{ r_1 x_1 + \cdots + r_k x_k \mid r_i \in R,\ x_i \in X \right\}$, which is important in the definition of tensor products.
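For a concrete example (my own, not from the article): in the Z-module Z, every submodule is of the form ''d''Z, and the submodule spanned by a finite set of integers is the one generated by their gcd. A brute-force sketch:

```python
from functools import reduce
from math import gcd

def span_generator(X):
    """In the Z-module Z, the submodule <X> spanned by a finite set X of
    integers is dZ with d = gcd(X)."""
    return reduce(gcd, X)

d = span_generator([4, 6])
print(d)  # 2: <{4, 6}> = 2Z, since 2 = 6 - 4 is a Z-linear combination

# brute-force check: every combination r*4 + s*6 lies in 2Z, and 2 is attained
combos = {4 * r + 6 * s for r in range(-5, 6) for s in range(-5, 6)}
print(all(c % d == 0 for c in combos) and d in combos)  # True
```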
The set of submodules of a given module ''M'', together with the two binary operations + and ∩, forms a lattice which satisfies the modular law: given submodules ''U'', ''N''1, ''N''2 of ''M'' such that ''N''1 ⊆ ''U'', the following two submodules are equal: (''N''1 + ''N''2) ∩ ''U'' = ''N''1 + (''N''2 ∩ ''U'').

If ''M'' and ''N'' are left ''R''-modules, then a map ''f'' : ''M'' → ''N'' is a homomorphism of ''R''-modules if for any ''m'', ''n'' in ''M'' and ''r'', ''s'' in ''R'',
:$f\left(r \cdot m + s \cdot n\right) = r \cdot f\left(m\right) + s \cdot f\left(n\right)$.
This, like any homomorphism of mathematical objects, is just a mapping which preserves the structure of the objects. Another name for a homomorphism of ''R''-modules is an ''R''-linear map. A bijective module homomorphism is called a module isomorphism, and the two modules ''M'' and ''N'' are called isomorphic. Two isomorphic modules are identical for all practical purposes, differing solely in the notation for their elements.

The kernel of a module homomorphism ''f'' : ''M'' → ''N'' is the submodule of ''M'' consisting of all elements that are sent to zero by ''f'', and the image of ''f'' is the submodule of ''N'' consisting of values ''f''(''m'') for all elements ''m'' of ''M''. The isomorphism theorems familiar from groups and vector spaces are also valid for ''R''-modules.

Given a ring ''R'', the set of all left ''R''-modules together with their module homomorphisms forms an abelian category, denoted by ''R''-Mod (see category of modules).
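A small illustrative computation (my own example): for the module homomorphism ''x'' ↦ 4''x'' on Z/12, the kernel and image can be listed by brute force, and their sizes multiply to the size of the module, as the first isomorphism theorem predicts:

```python
n = 12

def f(x):
    """Multiplication by 4 on Z/12: a homomorphism of Z/12-modules
    from Z/12 to itself."""
    return (4 * x) % n

kernel = sorted(x for x in range(n) if f(x) == 0)
image = sorted({f(x) for x in range(n)})
print(kernel)  # [0, 3, 6, 9]
print(image)   # [0, 4, 8]
# first isomorphism theorem, numerically: |Z/12| = |ker f| * |im f|
print(len(kernel) * len(image) == n)  # True
```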
# Types of modules

; Finitely generated: An ''R''-module ''M'' is finitely generated if there exist finitely many elements ''x''1, ..., ''x''''n'' in ''M'' such that every element of ''M'' is a linear combination of those elements with coefficients from the ring ''R''.
; Cyclic: A module is called a cyclic module if it is generated by one element.
; Free: A free ''R''-module is a module that has a basis, or equivalently, one that is isomorphic to a direct sum of copies of the ring ''R''. These are the modules that behave very much like vector spaces.
; Projective: Projective modules are direct summands of free modules and share many of their desirable properties.
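As a toy illustration of finite generation (the code is my own sketch): the set {2, 3} finitely generates Z as a Z-module, since 1 = 3 − 2 is already a Z-linear combination of the generators:

```python
# Z-linear combinations of the generators 2 and 3, over a small window of
# coefficients; since 1 = 3 - 2, these combinations reach every integer.
combos = {2 * r + 3 * s for r in range(-20, 21) for s in range(-20, 21)}
print(all(m in combos for m in range(-20, 21)))  # True
print(1 in combos)  # True: so <{2, 3}> contains 1, hence all of Z
```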
; Injective: Injective modules are defined dually to projective modules.
; Flat: A module is called flat if taking the tensor product of it with any exact sequence of ''R''-modules preserves exactness.
; Torsionless: A module is called torsionless if it embeds into its algebraic dual.
; Simple: A simple module ''S'' is a module that is not {0} and whose only submodules are {0} and ''S''. Simple modules are sometimes called ''irreducible'' (Jacobson (1964) p. 4, Def. 1).
; Semisimple: A semisimple module is a direct sum (finite or not) of simple modules. Historically these modules are also called ''completely reducible''.
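The simple Z-modules are exactly the groups Z/''p'' for ''p'' prime, a standard fact that the following brute-force sketch (my own, not from the article) recovers for small ''n'':

```python
def submodules(n):
    """Z-submodules of Z/n: one for each divisor d of n, generated by n/d."""
    return [frozenset(range(0, n, n // d)) for d in range(1, n + 1) if n % d == 0]

def is_simple(n):
    """Z/n is a simple Z-module iff its only submodules are {0} and itself."""
    return len(submodules(n)) == 2

print([n for n in range(2, 20) if is_simple(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```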
; Indecomposable: An indecomposable module is a non-zero module that cannot be written as a direct sum of two non-zero submodules. Every simple module is indecomposable, but there are indecomposable modules which are not simple (e.g. uniform modules).
; Faithful: A faithful module ''M'' is one where the action of each nonzero ''r'' in ''R'' on ''M'' is nontrivial (i.e. ''rx'' ≠ 0 for some ''x'' in ''M''). Equivalently, the annihilator of ''M'' is the zero ideal.
; Torsion-free: A torsion-free module is a module over a ring such that 0 is the only element annihilated by a regular element (non zero-divisor) of the ring; equivalently, $rm=0$ implies $r=0$ or $m=0$.
; Noetherian: A Noetherian module is a module which satisfies the ascending chain condition on submodules, that is, every increasing chain of submodules becomes stationary after finitely many steps. Equivalently, every submodule is finitely generated.
; Artinian: An Artinian module is a module which satisfies the descending chain condition on submodules, that is, every decreasing chain of submodules becomes stationary after finitely many steps.
; Graded: A graded module is a module with a decomposition as a direct sum $M = \bigoplus_x M_x$ over a graded ring $R = \bigoplus_x R_x$ such that $R_x M_y \subseteq M_{x+y}$ for all ''x'' and ''y''.
; Uniform: A uniform module is a module in which all pairs of nonzero submodules have nonzero intersection.

# Further notions

## Relation to representation theory

A representation of a group ''G'' over a field ''k'' is a module over the group ring ''k''[''G''].

If ''M'' is a left ''R''-module, then the ''action'' of an element ''r'' in ''R'' is defined to be the map ''M'' → ''M'' that sends each ''x'' to ''rx'' (or ''xr'' in the case of a right module), and is necessarily an endomorphism of the abelian group (''M'', +). The set of all group endomorphisms of ''M'' is denoted EndZ(''M'') and forms a ring under addition and composition, and sending a ring element ''r'' of ''R'' to its action actually defines a ring homomorphism from ''R'' to EndZ(''M'').
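This ring homomorphism can be checked numerically in a toy case (the modulus 12 and the elements 5 and 7 below are arbitrary choices of mine): addition in the ring goes to pointwise addition of endomorphisms, and multiplication goes to composition.

```python
n = 12

def action(r):
    """The action of r in Z: the endomorphism x -> r*x of the abelian group Z/12."""
    return lambda x: (r * x) % n

r, s = 5, 7
# addition in Z maps to pointwise addition of endomorphisms...
print(all(action(r + s)(x) == (action(r)(x) + action(s)(x)) % n
          for x in range(n)))  # True
# ...and multiplication in Z maps to composition of endomorphisms
print(all(action(r * s)(x) == action(r)(action(s)(x))
          for x in range(n)))  # True
```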
Such a ring homomorphism is called a ''representation'' of ''R'' over the abelian group ''M''; an alternative and equivalent way of defining left ''R''-modules is to say that a left ''R''-module is an abelian group ''M'' together with a representation of ''R'' over it. Such a representation may also be called a ''ring action'' of ''R'' on ''M''.

A representation is called ''faithful'' if and only if the map ''R'' → EndZ(''M'') is injective. In terms of modules, this means that if ''r'' is an element of ''R'' such that ''rx'' = 0 for all ''x'' in ''M'', then ''r'' = 0. Every abelian group is a faithful module over the integers or over some ring of integers modulo ''n'', Z/''n''Z.

## Generalizations

A ring ''R'' corresponds to a preadditive category R with a single object. With this understanding, a left ''R''-module is just a covariant additive functor from R to the category Ab of abelian groups, and right ''R''-modules are contravariant additive functors. This suggests that, if C is any preadditive category, a covariant additive functor from C to Ab should be considered a generalized left module over C. These functors form a functor category C-Mod which is the natural generalization of the module category ''R''-Mod.

Modules over ''commutative'' rings can be generalized in a different direction: take a ringed space (''X'', O''X'') and consider the sheaves of O''X''-modules (see sheaf of modules). These form a category O''X''-Mod, and play an important role in modern algebraic geometry. If ''X'' has only a single point, then this is a module category in the old sense over the commutative ring O''X''(''X'').
One can also consider modules over a semiring. Modules over rings are abelian groups, but modules over semirings are only commutative monoids. Most applications of modules are still possible. In particular, for any semiring ''S'', the matrices over ''S'' form a semiring over which the tuples of elements from ''S'' are a module (in this generalized sense only). This allows a further generalization of the concept of vector space incorporating the semirings from theoretical computer science.

Over near-rings, one can consider near-ring modules, a nonabelian generalization of modules.
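A sketch of one such semiring module (my own example; the article does not name a specific semiring): over the tropical (min, +) semiring, semiring "addition" is min and "multiplication" is +, and matrices act on tuples exactly as in the generalized module sense described above:

```python
def tropical_matvec(A, x):
    """Matrix action over the (min, +) semiring: semiring 'addition' is min and
    'multiplication' is +, so (A . x)_i = min_j (A[i][j] + x[j])."""
    return [min(A[i][j] + x[j] for j in range(len(x))) for i in range(len(A))]

A = [[0, 3],
     [2, 0]]
x = [5, 1]
print(tropical_matvec(A, x))  # [4, 1]: [min(0+5, 3+1), min(2+5, 0+1)]
```

Such (min, +) actions underlie shortest-path computations, one of the theoretical-computer-science uses the text alludes to.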
* Group ring
* Algebra (ring theory)
* Module (model theory)
* Module spectrum
* Annihilator

# References

* F.W. Anderson and K.R. Fuller: ''Rings and Categories of Modules'', Graduate Texts in Mathematics, Vol. 13, 2nd Ed., Springer-Verlag, New York, 1992.
* Nathan Jacobson. ''Structure of rings''. Colloquium publications, Vol. 37, 2nd Ed., AMS Bookstore, 1964.
General isotopic rules involving the vibrational A mГ—n matrix is said to have a LU-decomposition if there exists we give some examples where Thus, to solve AX= b, we first solve LY= bby forward substitution A novel method distinguishes between mutation rates and Forward Substitution and Back Substitution. A substitution is a mutation that exchanges one base for another For example, consider the There are other types of mutations as well,, A mГ—n matrix is said to have a LU-decomposition if there exists we give some examples where Thus, to solve AX= b, we first solve LY= bby forward substitution. Does anyone know how the following code is able to solve for x Lx=y where L and y Forward substitution of lower triangular matrix. (Typing up an example Forward substitution The general procedure is obtained by solving the ith equation Lx=b for x. i Back Substitution For example, consider. Let P be a Tested C++ code for the compact LU factorization / decomposition schemes of rarely mention that different forward substitution functions [i * d + k]* y [k]; y First we solve Ly = b using forward substitution to get y = using this, we solve L T x = y using backward substitution to get Example 4. Use the Cholesky 3.2 Solution of Triangular Systems † Forward substitution: x1 = b1=l11 Column-Oriented Forward Substitution † Partition Lx = b as follows: h Forward substitution is the process of solving a system of linear algebraic equations (SLAE) $Lx = y$ with a lower triangular coefficient matrix $L Integration by Substitution. 
A key strategy in mathematical problem-solving is substitution or changing the variable: that is, replacing one variable with another Second Order Differential Equations Example 5 Verify that y Substitution of y = ekx into the diп¬Ђerential equation yields An awful lot of people seem to use the phrase "substitute X for Y That link isn't about for/with - it's about the problems caused when X and Y in OP's example Here we will look at solving a special class of Differential Equations called First Order Linear Differential not d 2 y dx 2 or Let's try an example to see Integration by Substitution. A key strategy in mathematical problem-solving is substitution or changing the variable: that is, replacing one variable with another Here we will look at solving a special class of Differential Equations called First Order Linear Differential not d 2 y dx 2 or Let's try an example to see Forward and Back Substitution Once we know the LU factorization of a regular from MATH Example 4.5. With the LU to find the solution to 2 1 1 4 5 2 2-2 0 x y L U Decomposition method: in structure we can use forward subsititution and back substitution to find y and x from the Uy = b first and then Lx = y. Integration by substitution mc-TY-intbysub-2009-1 In this example we make the substitution u = 1+x2, in order to simplify the square-root term. Linear System of Equations Lx = b where the matrix L This procedure of solving a lower triangular system is called the general forward substitution. Example 2 Solving by Substitution 1 Coolmath.com. The Substitution Method Examples. BACK; Example 2. Solve this linear Solve this linear system of equations by the substitution method. y = -x + 1., COMPUTER OPTIMIZATION Example of indexing (3, 2) element: Index( 3,2 ) UX = Y Solve for Y using forward-substitution. Then solve for X. A novel method distinguishes between mutation rates and The Substitution Method Examples Shmoop. 
The marginal rate of substitution of X for Y is the amount of Y that will be given up for obtaining each additional unit of X. Definition:, solving a linear system of equations that Example 1. Use the forward-substitution Solve LY=B for Y using forward substitution. 3. Solve UX=Y for X using back. meaning Substitute X for Y - English Language & Usage how would you solve the forward substitution with a lower. MATLAB program: forward-substitution for a lower triangular linear system. function x=forwardSubstitution(L,b,n) % Solving a lower triangular system by forward- Examples. This matrix ] is upper is very easy to solve by an iterative process called forward substitution for lower triangular matrices and. substitution N. Mohan and A where LX (containing the dix A with the help of a simple example (XYZ(Cs)-type molecules J, the above equations can be used in If I just do y=L \setminus b... Stack How to solve Ax=b via backward and forward substitution on n matrix A using Cholesky factorization and forward and Back-Substitution. The process of Example: Consider a system Now substitute z = 2 and y = –13 into the first equation to get \ Second Order Linear Differential Equations that the substitution y f x y f x y f x gives an identity. The differential equation is If I just do y=L \setminus b... Stack How to solve Ax=b via backward and forward substitution on n matrix A using Cholesky factorization and forward and ... in Section 3 a computer example is presented to illustrate the -lX’Y’X’ (Substitution) (2.15) = Y’X(X’X)-lX’, and thus 3x.x = Y/X(X’X)-lX/X = Y Forward Substitution and Back Substitution . Background Example 2. Use the forward-substitution method to solve the lower-triangular linear system . Does anyone know how the following code is able to solve for x Lx=y where L and y Forward substitution of lower triangular matrix. 
(Typing up an example 15/05/2013В В· Perform forward and backward substitution after Cholesky factorizing (Solving substitution, s.t.: LY = B L’X = Y Forward perform forward and backward Integration by substitution 1+x2 dx (3) In this example we make the substitution u = 1+x2, in order to simplify the square-root term. Back-Substitution. The process of Example: Consider a system Now substitute z = 2 and y = –13 into the first equation to get \ The Forward Substitution block solves the linear system LX = B by simple forward substitution of variables, where: The pardiso and DSS step 331 forward substitution should solve Ly=b. that in the small example above, the wrong solution both L^Tx=b and Lx=b forms of the 1/03/2014В В· 6.3.2 Solving a lower triangular system/Forward substitution. Skip navigation 6.3.5 Cost of solving A x = b via LU, Ly=b, Ux=y Example 1 - Duration The marginal rate of substitution of X for Y is the amount of Y that will be given up for obtaining each additional unit of X. Definition: Tested C++ code for the compact LU factorization / decomposition schemes of rarely mention that different forward substitution functions [i * d + k]* y [k]; y Fast-forward/fast-backward substitutions on vector computers. a CRAY Y MP2E/232. The speedups for a fast-forward/fast-backward of forward substitution 3.2 Solution of Triangular Systems †Forward substitution: x1 = b1=l11 Column-Oriented Forward Substitution †Partition Lx = b as follows: h python Forward substitution of lower triangular matrix how would you solve the forward substitution with a lower. Does anyone know how the following code is able to solve for x Lx=y where L and Forward substitution of lower triangular matrix. def forward(L, y): x, A novel method distinguishes between mutation rates and fixation biases in patterns of single-nucleotide substitution. forward’’ substitution rate (UX п¬Ѓ Y). 
Solutions of Linear Algebraic Equations: LU Decomposition, Forward Substitution, and Back Substitution.

The LU decomposition factors a square matrix A into a lower triangular matrix L and an upper triangular matrix U, so that A = LU. Once the factorization is known, the linear system Ax = b is solved in two triangular stages: first solve Ly = b for the intermediate vector y by forward substitution, then solve Ux = y for x by back substitution.

Forward substitution works down the rows of the lower triangular system Ly = b. The first equation involves only y_1, so y_1 = b_1 / L_11; each later unknown is computed from the previously found ones:

    y_i = (b_i - sum_{j < i} L_ij * y_j) / L_ii,   i = 1, ..., n.

Back substitution is the mirror image, working up the rows of the upper triangular system Ux = y:

    x_i = (y_i - sum_{j > i} U_ij * x_j) / U_ii,   i = n, ..., 1.

Both procedures require only that the diagonal entries be nonzero, and each costs about n^2/2 multiplications, which is why the triangular solves are cheap compared to computing the factorization itself.

Example. To solve the system with coefficient matrix A = [[2, 1, 1], [4, 5, 2], [2, -2, 0]], Gaussian elimination gives the factorization L = [[1, 0, 0], [2, 1, 0], [1, -1, 1]] and U = [[2, 1, 1], [0, 3, 0], [0, 0, -1]]. For a given right-hand side b, forward substitution on Ly = b yields y, and back substitution on Ux = y then yields the solution x.
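The two triangular solves are each only a few lines of code. A minimal sketch in plain Python (function names are illustrative):

```python
def forward_substitution(L, b):
    """Solve L y = b for y, where L is lower triangular with nonzero diagonal."""
    y = []
    for i, row in enumerate(L):
        # subtract the contributions of the already-computed entries of y
        s = sum(row[j] * y[j] for j in range(i))
        y.append((b[i] - s) / row[i])
    return y

def back_substitution(U, y):
    """Solve U x = y for x, where U is upper triangular with nonzero diagonal."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x
```

For instance, with L = [[1,0,0],[2,1,0],[1,-1,1]], U = [[2,1,1],[0,3,0],[0,0,-1]] (so LU = [[2,1,1],[4,5,2],[2,-2,0]]) and b = [7, 20, -2], forward substitution gives y = [7, 6, -3] and back substitution then gives x = [1, 2, 3].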
### just can't get enough here's little m, snoozing away daddy, baby-wearer extraordinaire owen likes to remind me that baby marcus' feet are smooth, while his own are bumpy. but not as bumpy as mommy's. proud big brother playing outside time for a big boy bed; toddler bed is lookin' mighty tiny owen's favorite things - getting ready for bed snoozing in the sling nonna's in love wide awake! stretch snuggling with big papi owen says "you wanna wear my crown, king papi" can you tell i'm a mommy in love? ### Comments blissful_e said… Awesome post, by a mommy definitely in love! Enjoy that babymoon!! Especially love the shot of John babywearing. :) You might enjoy these pics of other dads wearing their babies. ### On the Height of J.J. Barea Dallas Mavericks point guard J.J. Barea standing between two very tall people (from: Picasa user photoasisphoto). Congrats to the Dallas Mavericks, who beat the Miami Heat tonight in game six to win the NBA championship. Okay, with that out of the way, just how tall is the busy-footed Maverick point guard J.J. Barea? He's listed as 6-foot on NBA.com, but no one, not even the sportscasters, believes that he can possibly be that tall. He looks like a super-fast Hobbit out there. But could that just be relative scaling, with him standing next to a bunch of extremely tall people? People on Yahoo! Answers think so---I know because I've been Google searching "J.J. Barea Height" for the past 15 minutes. So I decided to find a photo and settle the issue once and for all. I started by downloading a stock photo of J.J. from NBA.com, which I then loaded into OpenOffice Draw: I then used the basketball as my metric. Wikipedia states that an NBA basketball is 29.5 inches in circumfe… ### Finding Blissful Clarity by Tuning Out It's been a minute since I've posted here.
My last post was back in April, so it has actually been something like 193,000 minutes, but I like how the kids say "it's been a minute," so I'll stick with that. As I've said before, I use this space to work out the truths in my life. Writing is a valuable way of taking the non-linear jumble of thoughts in my head and linearizing them by putting them down on the page. In short, writing helps me figure things out. However, logical thinking is not the only way of knowing the world. Another way is to recognize, listen to, and trust one's emotions. Yes, emotions are important for figuring things out. Back in April, when I last posted here, my emotions were largely characterized by fear, sadness, anger, frustration, confusion and despair. I say largely, because this is what I was feeling on large scales; the world outside of my immediate influence. On smaller scales, where my wife, children and friends reside, I… ### The Force is strong with this one... Last night we were reviewing multiplication tables with Owen. The family fired off doublets of numbers and Owen confidently multiplied away. In the middle of the review Owen stopped and said, "I noticed something. 2 times 2 is 4. If you subtract 1 it's 3. That's equal to taking 2 and adding 1, and then taking 2 and subtracting 1, and multiplying. So 1 times 3 is 2 times 2 minus 1." I have to admit that I didn't quite get it at first. I asked him to repeat with another number and he did with six: "6 times 6 is 36. 36 minus 1 is 35. That's the same as 6-1 times 6+1, which is 35." Ummmmm....wait. Huh? Lemme see...oh. OH! WOW! Owen figured out $x^2 - 1 = (x - 1)(x + 1)$. So $6 \times 8 = (7-1)(7+1) = 7 \times 7 - 1 = 48$. That's actually pretty handy! You can see it in the image above. Look at the elements perpendicular to the diagonal. There's 48 bracketing 49, 35 bracketing 36, etc... After a bit more thought we…
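For the skeptical, Owen's identity is easy to machine-check with a couple of lines of Python (this snippet is mine, not from the multiplication review):

```python
# Owen's trick: x^2 - 1 = (x - 1)(x + 1), e.g. 6 * 8 = (7 - 1)(7 + 1) = 7*7 - 1 = 48
for x in range(1, 50):
    assert x * x - 1 == (x - 1) * (x + 1)
```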
Containers, on the level of the operating system, are like houses. We carry an expectation that we find food in the kitchen, a bed in a bedroom, and toiletries in a bathroom. We can imagine a fresh Ubuntu image is akin to a newly furnished house. When you shell in, most of your expectations are met. However, as soon as a human variable is thrown into the mix (we move in), the organization breaks. Despite our best efforts, the keys sometimes end up in the refrigerator. A sock becomes a lone prisoner under a couch cushion. The underlying organization of the original house is still there with the matching expectations, but we can no longer trust it. What do I mean? If I look at a house from the outside and someone asks me “Are the beds in the bedroom?” I would guess yes. However, sometimes I might be wrong, because we are looking at a Bay Area house that has three people residing in a living area. Now imagine that there is a magical box, and into it I can throw any item, or ask for any item, and it is immediately retrieved or placed appropriately. Everything in my house has a definitive location, and there are rules for new items to follow suit. I can, at any moment, generate a manifest of everything in the house, or answer questions about the things in the house. If someone asks me “Are the beds in the bedroom?” knowing that this house has this box, I can answer definitively “yes!” The house is the container, and the box represents what a simple standard and software can do for us. In this post I want to discuss how our unit of understanding systems has changed in a way that does not make it easy for reproducibility and scalable modularity to co-exist in harmony. ## Modular or Reproducible? For quite some time, our unit of understanding has been based on the operating system. It is the level of magnification at which we understand data, software, and products of those two things. Recently, however, two needs have arisen.
We simultaneously need modularity and reproducible practices. At first glance, these two things don’t seem very contradictory. A modular piece of software, given that all dependencies are packaged nicely, is very reproducible. The problem arises because it’s never the case that a single piece of software is nicely suited for a particular problem. A single problem, whether it be sequencing genetic code, predicting prostate cancer recurrence from high-dimensional data, or writing a bash script to play tetris, requires many disparate libraries and other software dependencies. Given our current level of understanding of information, the operating system, the best that we can do is give the user absolutely everything - a complete operating system with data, libraries, and software. But now for reproducibility we have lost modularity. A scientific software packaged in a container with one change to a version of a library yields a completely different container despite much of the content being duplicated. We are being forced to operate on a level that no longer makes sense given the scale of the problem, and the dual need for modularity and dependency. How can we resolve this? ## Level of Dimensionality to Operate The key to answering this question is deciding on the level, or levels, of dimensionality at which we will operate. At one extreme, we might break everything into the tiniest pieces imaginable. We could say bytes, but this would be like saying that an electron or proton is the ideal level to understand matter. While electrons and protons, and even one level up (atoms) might be an important feature of matter, arguably we can represent a lot more consistent information by moving up one additional level to a collection of atoms, an element. In file-system science an atom corresponds to a file, and an element to a logical grouping of files to form a complete software package or scientific analysis.
Thus we decide to operate on the level of modular software packages and data. We call these software and data modules, and when put together with an operating system glue, we get a full container. Under this framework we make the following assertions:

• a container is the packaging of a set of software and data modules, reproducible in that all dependencies are included
• building multiple containers is efficient because it allows for re-use of common modules
• a file must not encompass a collection of compressed or combined files (i.e., the unit of comparison is the file’s bytes content)
• each software and data module must carry, minimally, a unique name and install location in the system

This means that the skeleton of a container (the base operating system) is the first decision point. This will filter down a particular set of rules for installation locations, and a particular subset of modules that are available. Arguably, we could even take organizational approaches that would work across hosts, and this would be especially relevant for data containers that are less dependent on host architecture. For now, let’s stick to considering them separately.

Operating System --> Organization Rules --> Library of Modules --> [choose subset] --> New Container

Under this framework, it would be possible to create an entire container by specifying an operating system, and then adding to it a set of data and software containers that are specific to the skeleton of choice. A container creation (bootstrap) that has any kind of overlap with regard to adding modules would not be allowed. The container itself is completely reproducible because it (still) has included all dependencies. It also carries complete metadata about its additions. The landscape of organizing containers also becomes a lot easier because each module is understood as a feature. TLDR: we operate on the level of software and data modules, which logically come together to form reproducible containers.
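As a toy illustration of these assertions (all names here are hypothetical; this is a sketch, not an existing tool), a container could be assembled by picking a base skeleton and a subset of modules, rejecting any overlap in install locations:

```python
# Hypothetical sketch: a container = base OS skeleton + a set of modules,
# where no two modules may claim the same install location.
def build_container(os_base, modules):
    container = {"os": os_base, "modules": {}}
    for name, install_path in modules:
        if install_path in container["modules"].values():
            raise ValueError("overlapping install location: " + install_path)
        container["modules"][name] = install_path
    return container
```

Two containers built from the same base and the same module list are then identical by construction, which is the sense in which modularity and reproducibility can co-exist.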
## Metric for Assessing Modules

Given that a software or data module carries one or more signatures, the next logical question is about the kinds of metrics that we want to use to classify any given module.

### Manual Annotation

The obvious approach is human-labeled organization, meaning that a person looks at a software package, calls it “biopython” for “biology” in “python,” and then moves on. Or perhaps it is done automatically based on the scientist’s domain of work, tags from somewhere, or a journal published in. This metric works well for small, manageable projects, but is largely unreliable as it is hard to scale or maintain.

### Functional Organization

The second is functional organization. We can view software as a black box that performs some task, and rank/sort the software based on comparison of that performance. If two different versions of a python module act exactly the same, despite subtle differences in the files (imagine the silliest case where the spacing is slightly different), they are still deemed the same thing. If we define a functional metric like “calculates standard deviation” and then test software across languages to do this, we can organize based on the degree to which each individual package varies from the average. This metric maps nicely to scientific disciplines (for which the goal is to produce some knowledge about the world). However, if this metric is used, the challenge would be for different domains to robustly identify the metrics most relevant, and then derive methods for measuring these metrics across new software. This again is a manual bottleneck that would be hard to overcome. Even if completely programmatic, data driven approaches existed for deriving features of these black boxes, without the labels to make those features into a supervised classification task, we don’t get very far.

### File Organization and Content

A third idea is a metric not based on function or output, but simple organizational rules.
We tell the developer that we don’t care what the software package does, or how it works, but we assert that it must carry a unique identifier, and that identifier is mapped to a distinct location on a file system. With these rules, it could be determined immediately if the software exists on a computer, because it would be found. It would be seamless to install and use, because it would not overwrite or conflict with other software. It would also allow for different kinds of (modular) storage of data and software containers. For the purposes of this thinking, I propose that the most needed and useful schema is functional, but in order to get there we must start with what we already have: files and some metadata about them. I propose the following:

Step 1 is to derive best practices for organization, so minimally, given a particular OS, a set of software and data modules have an expected location, and some other metadata (package names, content hashes, dates, etc.) about them.

Step 2, given a robust organization, is to start comparing across containers. This is where we can do (unsupervised) clustering of containers based on their modules.

Step 3, given an unsupervised clustering, is to start adding functional and domain labels. A lot of information will likely emerge with the data, and this is the step I don’t have vision for beyond that.

Regardless of the scientific questions (in which interest varies), they are completely reliant on having a robust infrastructure to support answering them. The organization (discussed more below) is very important because it should be extendable to as many operating system hosts as possible, and already fit into existing cluster file-systems. We should take an approach that affords operating systems designing themselves. E.g., imagine someday that we can do the following: We have a functional goal. I want an operating system (container) optimized to do X.
I can determine if X is done successfully, and to what degree. We start with a base or seed state, and provide our optimization algorithm with an entire suite of possible data and software packages to install. We then let machine learning do its thing to figure out the optimized operating system (container) given the goal. Since the biggest pain in creating containers seems to be the compiling and “getting stuff to work” part, if we can figure out an automated way to do this, one that affords versioning, modularity, and transparency, we are going to be moving in the right direction. It would mean that a scientist could just select the software and data he/she wants from a list, and a container would be built. That container would be easily comparable, down to the difference in software module versions, to another container. With a functional metric of goodness, the choices of data and software could be somewhat linked to the experimental result. We would finally be able to answer questions like “Which version of biopython produces the most varying result? Which brain registration algorithm is most consistent? Is the host operating system important?” If we assume that these are important questions to be able to answer, and that this is a reasonable approach to take, then perhaps we should start by talking about file system organization.

## File Organization

File organization is likely to vary a bit based on the host OS. For example, busybox has something like 441 “files” and most of them are symbolic links. Arguably, we might be able to develop an organizational schema that remains (somewhat) true to the Filesystem Hierarchy Standard, but is extendable to operating systems of many types. I’m not sure how I feel about this standard given that someday we will likely have operating systems designing themselves, but that is a topic for another day.
### Do Not Touch

I would argue that most scientific software should not touch the following folders:

• /boot: boot loader, kernel files
• /bin: system-wide command binaries (essential for OS)
• /etc: host-wide configuration files
• /lib: again, system level libraries
• /root: root’s home. Unless you are using Docker, putting things here leads to trouble.
• /sbin: system specific binaries
• /sys: system, devices, kernel features

### Variable and Working Locations

• /run: run time variables, should only be used for that, during the running of programs.
• /tmp: temporary location for users and programs to dump things.
• /home: can be considered the user’s working space. Singularity mounts it by default, so nothing of value should be stored there. The same is true for..

### Connections

Arguably, connections for containers are devices and mount points. So the following should be saved for that:

• /dev: essential devices
• /mnt: temporary mounts.
• /srv: for “site specific data” served by the system. Perhaps this is the logical mount for cluster resources?

The point that “connections” also means mounting of data has not escaped my attention. This is an entire section of discussion.

## Data

This is arguably just a mount point, but I think there is one mount root folder that is perfectly suited for data: /media: removable media. This is technically something like a CD-ROM or USB, and since these media are rarely used, or used to mount drives with data, perhaps we can use it for exactly that. Data mounted for a specific application should have the same identifier (top folder) to make the association. The organization of the data under that location is up to the application. The data can be included in the container, or mounted at runtime, and this is under the decision of the application. Akin to software modules, overlap in modules is not allowed.
For example, let’s say we have an application called bids (the bids-apps for neuroimaging):

• the bids data would be mounted / saved at /media/bids.
• importing of distinct data (subjects) under that folder would be allowed, e.g., /media/bids/sub1 and /media/bids/sub2.
• importing of distinct data (within subject) would also be allowed, e.g., /media/bids/sub1/T1 and /media/bids/sub1/T2.
• importing of things that get overwritten would not be allowed.

An application’s data would be traceable to the application by way of its identifier. Thus, if I find /media/bids I would expect to find either /opt/bids or equivalent structure under /usr/local (discussed next).

## Research Software

Research software is the most valuable element, along with data, and there are two approaches we can take, and perhaps define criteria for when to use each. There must be general rules for packaging, naming, and organizing groups of files. The methods must be both humanly interpretable and machine parsable. For the examples below, I will reference two software packages, singularity and sregistry:

### Approach 1: /usr/local

For this approach, we “break” packages into shared sub-folders, stored under /usr/local, meaning that executables are in a shared bin:

    /usr/local/
        bin/
            singularity
            sregistry

and each software has its own folder according to the Linux file-system standard (based on its identifier) under /usr/local/[ name ]. For example, for lib:

    /usr/local/lib
        singularity/
        sregistry/

### Approach 2: /opt with features of /usr/local

If the main problem with /opt is having to find/add multiple things to the path, there could be a quasi solution that places (or links) main executables in a main /bin under /opt. Thus, you can add one place to the path, and have fine control over the programs on the path by way of simply adding/removing a link. This also means that the addition of a software module to a container needs to understand what should be linked.
### Submodules

We are operating on the level of the software (e.g., python, bids-app, or other). What about modules that are installed to software? For example, pip is a package manager that installs to python. Two equivalent python installations with different submodules are, by definition, different. We could take one of the following approaches:

• represent each element (the top level software, e.g., python) as a base, and all submodules (e.g., things installed with pip) are considered additions. Thus, if I have two installations of python with different submodules, I should still be able to identify the common base, and then calculate variance from that base based on differences between the two.
• represent each software version as a base, and then, for each distinct (common) software, identify a level of reproducibility. Comparison of bases would look at the core base files, while comparison of modules would look across modules and versions, and comparison within a single module would look across all files in the module.

The goal would be to be able to do the following:

• quickly sniff the software modules folder to find the bases. The bases likely have versions, and the version should ideally be reflected in the folder name. If not, we can have fallback approaches to finding it, and worst case, we don’t. Minimally, this gives us a sense of the high level guts of an image.
• if we are interested in submodules, we then do the same operation, but just a level deeper, within the site-packages of the base.
• if we are interested in one submodule, then we need to do the same comparison, but across different versions of the package.
As stated above, a software or data module should have a minimal amount of metadata:

• a unique identifier that includes the version
• a content hash of its guts (without a timestamp)
• (optionally, if relevant) a package manager
• (optionally, if relevant) where the package manager installs to

### Permissions

Permissions are something that seem to be very important, and likely there are good and bad practices that I could imagine. Let’s say that we have a user, on his or her local machine. He or she has installed a software module. What are the permissions for it?

• Option 1 is to say “they can be root for everything, so just be conservative and require it.” A user on the machine that is not sudo, too bad. This is sort of akin to maintaining an all or nothing binary permission, but for one person, that might be fine. Having this one (more strict) level, as long as it’s maintained, wouldn’t lead to confusion between user and root space, because only operation in root space is allowed.
• Option 2 is to say “it’s just their computer, we don’t need to be so strict, just change permissions to be world read/write/execute.” This doesn’t work, of course, for a shared resource where someone could do something malicious by editing files.
• Option 3 is to say “we should require root for some things, but then give permissions just to that user,” and then of course you might get a weird bug if you switch between root/user, sort of like Singularity sometimes has with the cache directory. Files are cached under /root when a bootstrap is run as root, but under the user’s home when import is done in user space.

I wish that we lived in a compute world where each user could have total control over a resource, and be empowered to break and change things with little consequence. But we don’t. So likely we would advocate for a model that supports that - needing root to build and then install, and then making it executable for the user.
## Overview

A simple approach like this:

• fits in fairly well with current software organization
• is both modular for data and software, but still creates reproducible containers
• allows for programmatic parsing to easily find software and capture the contents of a container.

We could then have a more organized base to work from, along with clearer directions (even templates) for researchers to follow to create software. In the context of Singularity containers, these data and software packages become their own thing, sort of like Docker layers (they would have a hash, for example), but they wouldn’t be random collections of things that users happened to put on the same line in a Dockerfile. They would be humanly understood, logically grouped packages. Given some host for these packages (or a user’s own cache that contains them), we can designate some uri (let’s call it data://) that will check the user’s cache first, and then the various hosted repositories for these objects. A user could add anaconda3 for a specific version to their container (whether the data is cached or pulled) like:

    import data://anaconda3:latest

And a user could just as easily, during build time, export a particular software or data module for his or her use:

    export data://mysoftware:v1.0

and since the locations of mysoftware for the version would be understood given the research container standard, it would be found and packaged, put in the user’s cache (and later optionally / hopefully shared for others). This would also be possible not just from/during a bootstrap, but from a finished container:

    singularity export container.img data://anaconda3:latest

I would even go as far as to say that we stay away from system provided default packages and software, and take preference for ones that are more modular (fitting with our organizational schema) and come with the best quality package managers.
That way, we don’t have to worry about things like “What is the default version of Python on Ubuntu 14.04? Ubuntu 16.04?” Instead of a system python, I would use anaconda, miniconda, etc.

### Challenges

Challenges of course come down to:

• symbolic links of libraries, and perhaps we would need to have an approach that adds things one at a time, and deals with potential conflicts in files being updated.
• reverse “unpacking” of a container. Arguably, if it’s modular enough, I should be able to export an entire package from a container.
• configuration: we would want configuration to occur after the addition of a new piece, calling ldconfig, and then add the next, or things like that.
• the main problem is library dependencies. How do we integrate package managers and still maintain the hierarchy?

One very positive thing I see is that, at least for Research Software, a large chunk of it tends to be open source, and found freely available on Github or similar. This means that if we do something simple like bring in an avenue to import from a Github uri, we immediately include all of these already existing packages.

### First Steps

I think we have to first look at the pieces we are dealing with. It’s safe to start with a single host operating system, Ubuntu is good, and then look at the following:

• what changes when I use the package manager (apt-get) for different kinds of software, or the same software with different versions?
• how are configurations and symbolic links handled?
• should we skip package managers and rely on source? (probably not)
• how would software be updated under our schematic?
• where would the different metadata / metrics be hosted?

Organization and simple standards that make things predictable (while still allowing for flexibility within an application) are a powerful framework for reproducible software, and science. Given a standard, we can build tools around it that give means to test, compare, and make our containers more trusted.
We never have to worry about capturing our changes when decorating the new house, because we decorate with a tool that captures them for us. I think it’s been hard for research scientists to produce software because they are given a house, and told to perform some task in it, but no guidance beyond that. They lose their spare change in couches, don’t remember how they hung their pictures on the wall, and then get in trouble when someone wants to decorate a different house in the same way. There are plenty of guides for how to create an R or Python module in isolation, or in a notebook, but there are minimal examples outlined or tools provided to show how a software module should integrate into its home. I also think that following the traditional approach of trying to assemble a group, come to some agreement, write a paper, and wait for it to be published, is annoying and slow. Can open source work better? If we work together on a simple goal for this standard, and then start building examples and tools around it, we can possibly (more quickly) tackle a few problems at once:

1. the organizational / curation issue with containers
2. the ability to have more modularity while still preserving reproducibility, and
3. a base of data and software containers that can be selected to put into a container with clicks in a graphical user interface.

Now if only houses could work in the same way! What do you think?
Bottomonium and Charmonium at CLEO

R.E. Mitchell (for the CLEO Collaboration)
Department of Physics, Indiana University, Bloomington, Indiana 47405, USA

The bottomonium and charmonium systems have long proved to be a rich source of QCD physics. Recent CLEO contributions in three disparate areas are presented: (1) the study of quark and gluon hadronization using $\Upsilon$ decays; (2) the interpretation of heavy charmonium states, including non-$c\bar{c}$ candidates; and (3) the exploration of light quark physics using the decays of narrow charmonium states as a well-controlled source of light quark hadrons.

1 Introduction

The CLEO experiment at the Cornell Electron Storage Ring (CESR) is uniquely situated to make simultaneous contributions to both the bottomonium and charmonium systems in a clean environment. Between 2000 and 2003 CLEO ran with center of mass energies in the $\Upsilon(nS)$ region. A subset of this period was spent below $B\bar{B}$ threshold, where 20M, 10M, and 5M decays of the $\Upsilon(1S)$, $\Upsilon(2S)$, and $\Upsilon(3S)$, respectively, were collected. In 2003, CESR lowered its energy to the charmonium region and the detector was slightly modified to become CLEO-c. Since that time, there has been an energy scan from 3.97 to 4.26 GeV, samples collected at 4170 MeV (largely for $D_s$ physics) and the $\psi(3770)$ (largely for $D$ physics), and a total of nearly 28M $\psi(2S)$ decays have been recorded, only 3M of which have been analyzed. Three (of many) topics recently addressed by the collaboration will be discussed below. The reach is wide: from fragmentation in bottomonium decays, to the interpretation of heavy charmonium states, to the use of narrow charmonium states as a source of light quark hadrons.

2 Bottomonium and Fragmentation

The bottomonium system provides many opportunities to study the hadronization of quarks and gluons.
The number of gluons involved in the decay of a bottomonium state can be controlled by the charge-conjugation eigenvalue of the initial state: the states decay through three gluons; the states decay through two. In addition, the continuum – where proceeds without going through a resonance – can be used as a source of quarks. Thus, particle production can be studied and compared in a number of different environments. 2.1 Quark and Gluon Fragmentation In 1984, first noticed an enhancement in baryon production in (from decays) over (from the continuum), i.e., the number of baryons produced per decay was greater than the number produced per continuum event . The interpretation of this phenomenon, however, was complicated by the fact that the system consists of three partons (or three strings), while the system only has two partons (or one string). A recent analysis  has confirmed these findings with greater precision and has extended the comparison beyond decays to the decays of the and states as well. Figure 1a shows new measurements of the enhancements of particle production in over , where the “enhancement” of a particle species is defined as the ratio of the number of particles produced per event in decays to the number produced per event from the continuum. The ratio is binned in particle momentum and integrated. The MC predictions incorporate the JETSET 7.3 fragmentation model. In addition, the new analysis compares particle production in (radiative decays) and (radiative continuum events). The comparison in this case is between systems both having two partons and one string. The energy of the radiated photon is used to monitor parton energies. Figure 1b shows the enhancements of over , where in this case the ratio is binned in the energy of the radiated photon and integrated. A few conclusions can be drawn from these studies: (1) baryon enhancements in vs. are somewhat smaller than in vs. 
; (2) the number of partons is important, not just ; and (3) the JETSET 7.3 fragmentation model does not reproduce the data. 2.2 Anti-Deuteron Production The production of (anti)deuterons in decays provides another opportunity to study the hadronization of quarks and gluons. In this case, models predict that the gluons from the decay first hadronize into independent (anti)protons and (anti)neutrons, which in turn “coalesce” into (anti)deuterons due to their proximity in phase space. has measured the production of anti-deuterons in and decays and has set limits on their production in decays . The production of anti-deuterons is easier to measure experimentally than the production of deuterons since anti-deuterons are not produced in hadronic interactions with the detector and the small background makes them easy to spot using in the drift chambers. The relative branching fraction of inclusive to was found to be . For comparison, a 90% C.L. upper limit of anti-deuteron production in the continuum was set at at , which, given an hadronic cross-section of the continuum of around 3000 , results in less than 1 in events producing an anti-deuteron. This is a factor of three less than what is seen in decays. 3 Interpretation of Heavy Charmonium States The past few years have seen something of a renaissance in charmonium spectroscopy with the discovery of the unexpected and states, among others. The and , in particular, have been the source of much speculation due to their multiple sightings and the difficulties encountered in attempting to incorporate them into the conventional spectrum. The contributions of to their interpretation will be discussed below. In addition, has recently made measurements pertaining to the charmonium character of the , which is more often used as a source of . While the is well-known and has been assumed to be the expected state of charmonium, pinning down its properties contributes to our global understanding of the charmonium spectrum. 
3.1 Y(4260) The was first observed by   decaying to using collisions with initial state radiation (ISR). This production mechanism requires the have . However, there is no place for a vector with this mass in the conventional spectrum. On one interpretation the is a hybrid meson, a pair exhibiting an explicit gluonic degree of freedom. has made two recent contributions regarding the nature of the . First, an energy scan  was performed between 3.97 and 4.26 . A rise in the production cross section was observed for both and at 4.26  in the ratio of roughly 2:1. This ratio suggests the is an isoscalar. Second, (using data in the region) has confirmed the initial observation by in from ISR  (Figure 2a). This both confirms its existence and its nature. The measured mass and width, and , respectively, are also consistent with . 3.2 X(3872) The was first observed by   in the reaction . It has subsequently been studied in several different channels by a variety of different experiments. From its decay and production patterns it likely has . One of the most tantalizing properties of this state is that its mass is very close to threshold, suggesting that it could be a molecule or a four-quark state. Prior to the new measurement by , the binding energy of the (), assuming it to be a bound state, was , where the error, perhaps surprisingly, was dominated by the mass of the . improved this situation with a new precision mass measurement  using the well-constrained decay (Figure 2b) and found the mass to be . This results in a small positive binding energy from zero: . This lends further credence to the molecular interpretation of the . 3.3 ψ(3770) The existence of the has been established for a long time. However, because it predominantly decays to its behavior as a state of charmonium has been relatively unexplored in comparison to its lighter partners. 
The electromagnetic transitions, , because they are straightforward to calculate, provide a natural place to study the charmonium nature of the . has recently measured these transitions in two independent analyses. In the first , the processes were measured by reconstructing the in their transitions to and then requiring the to decay to or (Figure 3a). In the second , the were reconstructed in several exclusive hadronic modes and then normalized to the process using the same exclusive modes (Figure 3b). The first method favors the measurement of the transitions to and while the second method is more suited to the transition to . Combining the results of the two analyses, the partial widths of were found to be for , for , and an upper limit of at 90% C.L. was set for . These measurements are consistent with relativistic calculations assuming the is the state of charmonium. 4 Using Charmonium to Study Light Quarks In addition to providing valuable information in its own right, the charmonium system can also serve as a well-controlled source of light quark states. While much effort has gone into the study of and decays (e.g. radiative decays to glueballs), the decays of the states are less familiar and hold complementary information. The states are produced proficiently through the reaction , with rates around 9% for , 1, and 2, and can be reconstructed cleanly in many different decay modes in the detector. As an exploratory study into the analysis of the resonance substructure of decays, has recently analyzed a series of three-body decays  using approximately 3M events collected with the and CLEO-c detectors. This anticipates the new sample of approximately 25M events. The decay modes analyzed include , , , , , , , and . Branching fractions were measured to each of these final states, many for the first time. Figure 4 shows decays to three particularly well-populated final states. The decays to , , and included sufficient statistics for a rudimentary Dalitz analysis. 
Figure 5 shows the results of a fit to the Dalitz plot using a crude non-interfering resonance model. Dominant contributions were found from , , and with fit fractions of , and , respectively. No evidence for new structures was found in either or the two modes. Studies analyzing substructure using the full sample of 28M decays are underway. One reaction that looks particularly promising is the decay , which was shown to exhibit a rich substructure of and states in a recent analysis . Acknowledgments We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. This work was supported by the National Science Foundation and the U.S. Department of Energy. References • [1] Y. Kubota et al. (), Nucl. Instrum. Methods A 320, 66 (1992); M. Artuso et al. (), Nucl. Instrum. Methods A 554, 147 (2005); D. Peterson et al. (), Nucl. Instrum. Methods A 478, 142 (2002). • [2] R.A. Briere et al. (CESR-c and CLEO-c Taskforces, CLEO-c Collaboration), Cornell University, LEPP Report No. CLNS 01/1742 (2001) (unpublished). • [3] S. Behrends et al. (), Phys. Rev. D 31, 2161 (1985). • [4] R.A. Briere et al. (), arXiv:0704.2766v1 [hep-ex]. • [5] D.M. Asner et al. (), Phys. Rev. D 75, 012009 (2007). • [6] B. Aubert et al. (), Phys. Rev. Lett. 95, 142001 (2005). • [7] T.E. Coan et al. (), Phys. Rev. Lett. 96, 162003 (2006). • [8] Q. He et al. (), Phys. Rev. D 74, 091104(R) (2006). • [9] S.K. Choi et al. (), Phys. Rev. Lett. 91, 262001 (2003). • [10] C. Cawlfield et al. (), Phys. Rev. Lett. 98, 092002 (2007). • [11] T.E. Coan et al. (), Phys. Rev. Lett. 96, 182002 (2006). • [12] R.A. Briere et al. (), Phys. Rev. D 74, 031106(R) (2006). • [13] S.B. Athar et al. (), Phys. Rev. D 75, 032002 (2007). • [14] M. Ablikim et al. (), Phys. Rev. D 72, 092002 (2005).
{}
# LimitBelow Command

LimitBelow( <Function>, <Value> )

Computes the left one-sided limit of the function for the given value of the main function variable.

Example: LimitBelow(1 / x, 0) yields -\infty .

Note: Not all limits can be calculated by GeoGebra, so undefined will be returned in those cases (as well as when the correct result is undefined).

## CAS Syntax

LimitBelow( <Expression>, <Value> )

Computes the left one-sided limit of the expression for the given value of the main function variable.

Example: LimitBelow(1 / x, 0) yields -\infty .

LimitBelow( <Expression>, <Variable>, <Value> )

Computes the left one-sided limit of the multivariate expression for the given value of the given function variable.

Example: LimitBelow(1 / a, a, 0) yields -\infty .

Note: Not all limits can be calculated by GeoGebra, so ? will be returned in those cases (as well as when the correct result is undefined).
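The left one-sided limit can also be probed numerically, which is a useful sanity check on a CAS answer. A minimal Python sketch (the helper `limit_below` and its sampling scheme are my own illustration, not part of GeoGebra):

```python
def limit_below(f, a, steps=11):
    """Sample f at points approaching a from below; the trend of the
    returned values suggests the left one-sided limit (or divergence)."""
    return [f(a - 10**-k) for k in range(1, steps + 1)]

vals = limit_below(lambda x: 1 / x, 0.0)
# 1/x just below 0 gives -10, -100, -1000, ...: diverging toward -infinity,
# in agreement with LimitBelow(1 / x, 0) = -infinity
```

Such a numeric probe cannot prove a limit, but a clearly divergent or settling trend quickly confirms (or contradicts) a symbolic result.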
{}
# Calculating error propagation: division

Practically speaking, covariance terms should be included in the computation only if they have been estimated from sufficient data; sometimes these terms are omitted from the formula. When two quantities are divided, the relative determinate error of the quotient is the relative determinate error of the numerator minus the relative determinate error of the denominator. Uncertainty in measurement comes about in a variety of ways: instrument variability, different observers, sample differences, time of day, etc. Error propagation is a calculus-derived statistical calculation designed to combine uncertainties from multiple variables in order to provide an accurate measurement of uncertainty. We say that "errors in the data propagate through the calculations to produce error in the result." Uncertainty never decreases with calculations, only with better measurements. The first step in taking the average is to add the Qs. 3.2 MAXIMUM ERROR We first consider how data errors propagate through calculations to affect the result. When errors are explicitly included, a sum is written: (A ± ΔA) + (B ± ΔB) = (A + B) ± (ΔA + ΔB), so the result carries its error ΔR = ΔA + ΔB explicitly. When a quantity Q is raised to a power P, the relative error in the result is P times the relative error in Q. In the operation of subtraction, A - B, the worst-case deviation of the answer occurs when the errors are either +ΔA and -ΔB or -ΔA and +ΔB. But when the two numbers multiplied together are identical, they are not independent, and the rules for independent errors do not apply. 
Example 1: Determine the error in the area of a rectangle if the length is l = 1.5 ± 0.1 cm and the width is 0.42 ± 0.03 cm, using the rule for multiplication. Example 2: As in the previous example, the velocity v = x/t = 50.0 cm / 1.32 s = 37.8787 cm/s. For example, a body falling straight downward in the absence of frictional forces is said to obey the law s = v₀t + (1/2)at². So if the angle is one half degree too large the sine becomes 0.008 larger, and if it were half a degree too small the sine becomes 0.008 smaller. (The change is the same size in either direction.) The fractional indeterminate error in Q is then 0.028 + 0.0094 = 0.037, or 3.7%. Now that we recognize that repeated measurements are independent, we should apply the modified rules of section 9. We'd have achieved the elusive "true" value! 3.11 EXERCISES (3.13) Derive an expression for the fractional and absolute error in an average of n measurements of a quantity Q. We will state the general answer for R as a general function of one or more variables below, but will first cover the special case that R is a polynomial function. A one half degree error in an angle of 90° would give an error of only 0.00004 in the sine. 3.8 INDEPENDENT INDETERMINATE ERRORS Experimental investigations usually require measurement of a number of quantities. Assuming the cross terms do cancel out, then the second step - summing from \(i = 1\) to \(i = N\) - would be: \[\sum{(dx_i)^2}=\left(\dfrac{\partial{x}}{\partial{a}}\right)^2\sum(da_i)^2 + \left(\dfrac{\partial{x}}{\partial{b}}\right)^2\sum(db_i)^2\tag{6}\] Dividing both sides by the number of measurements converts these sums into mean-square quantities. In problems, the uncertainty is usually given as a percent. 
So the modification of the rule is not appropriate here and the original rule stands. Power Rule: The fractional indeterminate error in the quantity Aⁿ is given by n times the fractional indeterminate error in A. This also holds for negative powers, i.e., negative n. It is the relative size of the terms of this equation which determines the relative importance of the error sources. However, if the variables are correlated rather than independent, the cross term may not cancel out. Error propagation rules may be derived for other mathematical operations as needed. But more will be said of this later. 3.7 ERROR PROPAGATION IN OTHER MATHEMATICAL OPERATIONS Rules have been given for addition, subtraction, multiplication, and division. This tells the reader that the next time the experiment is performed the velocity would most likely be between 36.2 and 39.6 cm/s. In other classes, like chemistry, there are particular ways to calculate uncertainties. Then the displacement is: Δx = x₂ - x₁ = 14.4 m - 9.3 m = 5.1 m, and the error in the displacement is: (0.2² + 0.3²)^(1/2) m = 0.36 m. Multiplication: Using the equations above, Δv is the absolute value of the derivative times Δt. Uncertainties are often written to one significant figure; however, a smaller leading digit can justify keeping a second figure. Therefore we can throw out the term (ΔA)(ΔB), since we are interested only in error estimates to one or two significant figures. The size of the error in trigonometric functions depends not only on the size of the error in the angle, but also on the size of the angle. 
Note: Addition, subtraction, and logarithmic equations lead to an absolute standard deviation, while multiplication, division, exponential, and anti-logarithmic equations lead to relative standard deviations.
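The independent-error rules above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original text: the helper names are mine, errors of independent quantities are combined in quadrature as in Eq. (6), and the velocity uncertainties used below are assumed values for demonstration only.

```python
import math

def add_sub_err(da, db):
    """Absolute uncertainties of independent terms combine in quadrature
    for both addition and subtraction."""
    return math.hypot(da, db)

def mul_div_err(q, a, da, b, db):
    """For multiplication or division, relative uncertainties combine in
    quadrature; q is the already-computed product or quotient."""
    return abs(q) * math.hypot(da / a, db / b)

# Displacement example from the text: x1 = 9.3 +/- 0.2 m, x2 = 14.4 +/- 0.3 m
dx = 14.4 - 9.3               # 5.1 m
ddx = add_sub_err(0.3, 0.2)   # sqrt(0.2**2 + 0.3**2) ~ 0.36 m

# Velocity example v = x/t = 50.0 cm / 1.32 s, with *assumed* uncertainties
v = 50.0 / 1.32
dv = mul_div_err(v, 50.0, 0.2, 1.32, 0.01)
```

Note that the quadrature rules apply only to independent indeterminate errors; for determinate (systematic) errors, or for correlated variables, the signed rules and covariance terms discussed above must be used instead.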
{}
# 7.05 Geometric series

## Interactive practice questions

Consider the series $5+10+20+\dots$ Find the sum of the first $12$ terms.

Find the sum of the first $5$ terms of the geometric sequence defined by $a=2.187$ and $r=1.134$. Give your answer correct to two decimal places.

Find the sum of the first $5$ terms of the geometric sequence defined by $a=-4.186$ and $r=-2.848$. Give your answer correct to two decimal places.

Consider the series $1-2+4-\dots$ Find the sum of the first $11$ terms.

### Outcomes

#### MGSE9-12.A.SSE.4 Derive the formula for the sum of a finite geometric series (when the common ratio is not 1), and use the formula to solve problems.
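All of these questions use the finite geometric series formula $S_n = a(1-r^n)/(1-r)$ for $r \ne 1$. A minimal Python sketch (the function name is my own illustration):

```python
def geometric_sum(a, r, n):
    """Sum of the first n terms of a geometric series with first term a
    and common ratio r != 1: a * (1 - r**n) / (1 - r)."""
    return a * (1 - r**n) / (1 - r)

# First question: 5 + 10 + 20 + ... has a = 5, r = 2; first 12 terms
s = geometric_sum(5, 2, 12)   # 5 * (2**12 - 1) = 20475
# Alternating series 1 - 2 + 4 - ... has a = 1, r = -2; first 11 terms
t = geometric_sum(1, -2, 11)  # (1 - (-2)**11) / 3 = 683
```

The same function handles the decimal-ratio questions; only the final rounding to two decimal places differs.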
{}
# Manila Observatory’s seal: a scientific and religious interpretation by Quirino Sugon Jr. Manila Observatory Seal In heraldic terms, the Manila Observatory’s seal is an unsupported circular shield. The rays form a gyronny of twenty-four (24), gold on black field, meeting at slightly above the shield’s center, so that the vertical and horizontal rays form the Latin cross. The first charge is a circle labeled IHS, with a cross above and an anchor below. This circle is placed at the intersection of the rays. The second charge is the earth illumined by the sun, dividing the earth into dayside and nightside. The earth is at the lower field, dexter side. At the upper rim of the shield is emblazoned the motto “Lumen de Lumine”, white on blue. On the lower rim of the shield is emblazoned “El Observatorio de Manila. 1865”, blue on white. I. SCIENTIFIC INTERPRETATION The basic design is that of the Copernican Heliocentric System. The sun is at the center with twelve rays shining outward. The planet’s orbit is described by the rim of the shield: this is the deferent circle whose center is displaced downwards from the sun by a pronounced equant. The lines drawn from the sun to the planet do not sweep out equal areas in equal times, so they do not follow Kepler’s 2nd Law. Rather, the rays sweep out equal angles in equal times, like the hands of a clock. The rays from the sun form 24 divisions alternating in gold and black. This corresponds to the division of the day into 24 hours. The rays may also correspond to the angles of a compass, with the horizontal and vertical rays forming the West-East and North-South directions. The + sign and the anchor suggest the origin of a Cartesian coordinate system. They may also refer to the sun as the fixed point in the solar system. The earth is lit by the sun, separating the nightside and dayside. The nightside of the earth forms a crescent which represents the moon. 
The earth is displaced from the vertical, making it visually unstable like a pendulum, suggesting movement. The pendulum’s pivot point is the sun, marked by the + sign and the anchor. The swinging of the pendulum marks the measure of time. Also, the lines connecting the earth and the pivot point in the sun suggest the words of Archimedes concerning the lever: “Give me a place to stand and I shall move the world.” The motto “Lumen de Lumine” suggests optics: refracted and reflected light are obtained from the incident light. This is the subject of telescope design, solar spectroscopy, and satellite imaging. Indeed, the point of view of the seal is from a spaceship or satellite, taking the picture of both the earth and sun. II. RELIGIOUS INTERPRETATION A. Jesuit Standard and Roman Catacombs Jesuit symbol of sun with IHS (from East Asian Pastoral Institute) The symbol of the Jesuits is a bright (straight rays), fiery (wavy rays) sun (circle) marked with IHS, with a Latin cross above the letters and three nails below. The symbol is black on a white field. In the Manila Observatory seal, the IHS remained but the cross was simplified to a Greek cross similar to an addition operation symbol. The three nails were connected to form the base of an anchor. It thus appears that the cross and the anchor form a single object in the background of IHS. The straight and wavy rays are replaced by 24 lines emanating from the sun’s center. The sun and the 12 regions formed by the rays are colored gold on black field. The gold connotes both bright and hot. The black connotes the darkness of the night sky (c.f. Gen 1:2). The IHS is a three-letter acronym based on the first three letters of “Jesus” in Greek (ΙΗΣΟΥΣ, Latinized IHSOVS). 
The IHS has also been read as “In hoc signo”, a shortened form of the Latinized version of the Greek phrase meaning “In this sign you will conquer”: the words heard by Constantine when he saw in a vision the cross of Christ before the decisive Battle of the Milvian Bridge, which led to the adoption of Christianity as the official religion of the Roman Empire. Just as Constantine painted the standards of his legions with the first two letters $\chi$ and $\rho$ of Christ in Greek, Χριστός, so too did the Jesuits paint Christ’s name on their standard, as they fight the armies who follow the standard of Satan. The Anchor is an ancient symbol of hope for those who died in Christ, as depicted in the Roman catacombs in the first centuries of Christianity. This symbol is based on the Letter of Paul to the Hebrews: So when God wanted to give the heirs of his promise an even clearer demonstration of the immutability of his purpose, he intervened with an oath, so that by two immutable things, in which it was impossible for God to lie, we who have taken refuge might be strongly encouraged to hold fast to the hope that lies before us. This we have as an anchor of the soul, sure and firm, which reaches into the interior behind the veil, where Jesus has entered on our behalf as forerunner, becoming high priest forever according to the order of Melchizedek. (Heb 6:17-20) In the catacombs of St. Sebastian, the anchor is drawn beside the Chi-Rho and the fish. The fish in Greek is Ichthys, which can be read as an acrostic of Ancient Greek words that translate to “Jesus Christ, God’s Son, Savior”. Since IHS also stands for Christ, the juxtaposition of the IHS and the anchor represents the catacombs and the persecution of Christianity. As Christ said, “If they persecuted me, they will also persecute you. If they kept my word, they will also keep yours” (Jn 15:20). 
In heraldry, the rule of tincture states that metal must never be placed upon metal, such as the juxtaposition of the two colors white (silver) and yellow (gold). When this happens, the unusual nature of the bearer stands out, as in the flags of two holy places: the Vatican and the Kingdom of Jerusalem. In the seal of the Manila Observatory, gold is on black, so the rules of heraldry are not broken; however, the golden rays reach the silver white on the shield’s rim. This is unusual, and so are the Jesuits: only the Jesuits dare to call their society not after the name of their founder Ignatius, as the Franciscans and the Dominicans did, but after Christ Himself: “Society of Jesus”. B. Lumen de Lumine and Prologue of John The Manila Observatory’s motto, Lumen de Lumine or Light of Light, is taken from the Nicene-Constantinopolitan Creed, the longer form of the Apostles’ Creed. The motto refers to Christ: And in one Lord Jesus Christ, the only-begotten Son of God, begotten of the Father before all worlds (aeons), Light of Light, very God of very God, begotten, not made, being of one substance with the Father. Light of Light uses a physical property of light to describe the relationship of the Father and the Son. As St. Augustine said, “The Son alone is the Image of the Father.” In the seal, the sun is marked by IHS and the cross, and rays of the sun light up the earth. Since IHS is Christ, then Christ, the sun of Justice in Malachi (3:20), is the true Light of the World. As the Prologue of John states: In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things came to be through him, and without him nothing came to be. What came to be through him was life, and this life was the light of the human race; the light shines in the darkness, and the darkness has not overcome it. 
(Jn 1:1–5) Just as the orbit of the earth around the sun results in the seasons of the year (Winter, Spring, Summer, and Fall), so, too, does the orbit of the world around Christ and His Cross result in the liturgical seasons: Advent, Christmas, Lent, and Ordinary Time. Indeed, the system of leap years in the Gregorian Calendar was introduced by Lilius and perfected by the Jesuit Christopher Clavius (1538-1612) in order to prevent the slow backward drift of Easter from the Vernal Equinox of March 21 as stipulated by the Council of Nicaea (AD 325). The Gospel of John was originally written in Greek. In Greek, the “Word” is “Logos”. The Logos was adopted by Heraclitus (ca. 535–475 BC) to denote the principle of order and knowledge. This is why most of the traditional sciences end in “–logy”, which comes from “logos”. And the Manila Observatory has many of these sciences under its divisions: seismology, hydrology, meteorology, and climatology. Of course, not all sciences end with “–logy”, but the end of the physical sciences remains: to know the principle of order behind the physical universe. And this knowledge starts with making distinctions, by discriminating light from darkness, day from night, sea from sky, land from sea, plants from animals, animals from man, and man from woman. The study of the sciences should therefore lead to the understanding of Creation, right to the beginning of time when God said, “‘Let there be light,’ and there was light” (Gen 1:3). And the study of the sciences should lead to the one source of all sciences: the Logos, the Christ, for “all things came to be through him, and without him nothing came to be” (Jn 1:3). This is the Jesuit vision of the world. C. Mary and Ateneo de Manila University Ateneo de Manila University Seal The colors blue and white on the shield’s rim are the colors of Ateneo de Manila University. 
These colors symbolize Mary, the Patroness of Ateneo de Manila University under the title of the Immaculate Conception. As the Ateneo’s Alma Mater Song, “Song for Mary,” states: Mary for you! For your white and blue! We pray you’ll keep us, Mary, constantly true! We pray you’ll keep us, Mary, faithful to you! With blue and white surrounding the seal of the Manila Observatory, this means two things: the Observatory is inside the walls of Ateneo de Manila University and the Observatory is under the patronage of the Immaculate Conception. This has been true since the founding of the Observatory inside the Ateneo Municipal de Manila in 1865. It has remained true since the refounding of the Observatory in 1963 inside the present Ateneo de Manila University in Loyola Heights, Quezon City. In Marian iconography, Mary is usually drawn with a crown of twelve stars as described in the Book of Revelation: A great sign appeared in the sky, a woman clothed with the sun, with the moon under her feet, and on her head a crown of twelve stars. (Rv 12:1) These twelve stars are drawn as twelve rays in the seal, because the stars are shining. The flag of the European Union, with twelve gold stars arranged in a circle on a blue field, for example, was inspired by Marian iconography, as attested by its original designers Arsène Heitz and Paul Michel Gabriel Lévy, both Catholics. The crescent moon, which symbolizes Mary’s perpetual virginity, is represented by the crescent nightside of the earth. The crescent moon can be seen in the icon of the Immaculate Conception and in the icon of Our Lady of Guadalupe. III. 
MANILA OBSERVATORY’S MISSION AND VISION The Observatory’s mission and vision are found on the Ateneo de Manila University’s website: Inspired by Ignatian spirituality, the Manila Observatory is committed to a scientific culture for sustainable development of the Philippines in its regional and global context through research excellence in environmental and pre-disaster science, particularly in the areas of atmospheric studies, solid earth dynamics, and instrumentation. To achieve this, we dedicate ourselves to:

• Conduct continuing scientific research
• Form future scientists
• Network with allied groups
• Engage in information, education, communication efforts
• Collect and manage special research materials
• Build the capability of local communities, focusing on the urban environment
• Advocate key policies needing scientific inputs

But in the light of our discussion of the Manila Observatory’s seal, I would like to propose the following mission and vision, which makes explicit the Jesuit mission and vision: The Manila Observatory is the Jesuit Observatory in the Philippines. As a Jesuit Observatory, the Manila Observatory seeks to be a light to the world by putting Christ, the Lumen de Lumine, at the center of all scientific endeavors, so that Christ, the Logos who created all things, would illumine all sciences studying the earth and sun. The Manila Observatory shall use its scientific knowledge in order to dispel the darkness of Paganism in the Philippine Islands and in other countries, by predicting the natural disasters attributed to the elemental spirits, thereby saving not only the lives of the natives but also their souls. To accomplish this, the Manila Observatory shall promote the Jesuit vocation by inviting Jesuit scientists and scholastics to work at the Observatory as their mission field, promote the daily celebration of the Holy Sacrifice of the Mass and novenas to Jesuit saints, and promote the Spiritual Exercises in order to instill the Jesuit vision of the world. 
The Manila Observatory shall also work closely with the Ateneo system of schools in the Philippines, form linkages with other Jesuit Observatories and other scientific institutions, educate the public about its findings and activities through online and printed publications, and influence national and international policies for the common good. IV. CONCLUSIONS The Manila Observatory’s seal has changed over the decades, and it is important to search the Manila Observatory’s Archives in order to determine who designed such seals and the heraldic reasons for their designs. The seal described in this essay is the latest seal, which is used on the Observatory’s letterheads. If one compares this with that found on the Observatory’s facade, one notices some obvious differences: in the new seal, the earth is smaller than the sun marked IHS and the earth receives its sole light from the sun. I think the new seal is an improvement over the old one. To prevent future mutations of the seal, it is proper to formally prescribe the seal’s dimensions, colors (RGB or HSV), and fonts. The seal of the Manila Observatory speaks of the nature, mission, and vision of the Observatory. So it is proper that the seal can only be changed once the nature, mission, and vision of the Observatory change. As the Manila Observatory celebrates its sesquicentennial (its 150th anniversary) this coming 2015, it is high time to reflect again on the meaning of the Observatory’s seal and rewrite the Manila Observatory’s vision and mission accordingly.
{}
Instability of agegraphic dark energy models
Kim, Kyoung Yee; Lee, Hyung Won; Myung, Yun Soo

Description: We investigate the agegraphic dark energy models which were recently proposed to explain the dark energy-dominated universe. For this purpose, we calculate their equations of state and squared speeds of sound. We find that the squared speed for agegraphic dark energy is always negative. This means that the perfect fluid for agegraphic dark energy is classically unstable. Furthermore, it is shown that the new agegraphic dark energy model could describe the matter (radiation)-dominated universe in the far past only when the parameter $n$ is chosen to be $n>n_c$, where the critical values are determined numerically to be $n_c=2.6878(2.5137752)$. It seems that the new agegraphic dark energy model is no better than the holographic dark energy model for the description of the dark energy-dominated universe, even though it resolves the causality problem.

Comment: 15 pages, 4 figures
Keywords: General Relativity and Quantum Cosmology
{}
# The Daily Parker

Politics, Weather, Photography, and the Dog

The Consumer Electronics Show went virtual this year, but it still had some interesting toys, like these:

Air Safety Virus Monitors
It's well-known that things like ventilation and humidity affect how well coronavirus spreads indoors. But how do you know how much ventilation is enough? Airthings sensors pair with a smartphone to monitor indoor air quality for temperature, humidity, and the number of people in the room (it makes a guess based on the amount of carbon dioxide present). If quality dips and virus risk rises, Airthings will suggest opening windows or making other changes. This could be helpful for businesses, such as restaurants, in knowing whether their capacity is too high. Airthings also monitors for more traditional air-quality risks like radon and mold.

Balcony Bee-Keeping Box
The pandemic has driven an upswing in gardening and home-canning: why not beekeeping? Italian company Beeing’s B-Box is a small hive that works with a sensor to monitor the bees’ health and environment. It also has a special design that separates the extra honeycomb from the bees, so you can harvest the honey without suiting up like an astronaut. Plus, it’s small enough to keep on even a modest urban balcony. I don't know how my neighbors would feel about that one, but it seems perfect for the building.

Lunchtime roundup: Finally, the authors of The Impostor's Guide, a free ebook aimed at self-taught programmers, have a new series of videos about general computer-science topics that people like me, who learned programming for fun while getting our history degrees, never studied formally. The Economist's Bartleby column examines how Covid-19 lockdowns have "caused both good and bad changes of routine."

Security is hard. Everyone who works in IT knows (or should know) this. We have well-documented security practices covering every part of software applications, from the user interface down to the hardware.
Add in actual regulations like Europe's GDPR and California's privacy laws, and you have a good blueprint for protecting user data. Of course, if you actively resist expertise and hate being told what to do by beanie-wearing nerds, you might find yourself reading on Gizmodo how a lone hacker exfiltrated 99% of your data and handed it to the FBI:

In the wake of the violent insurrection at the U.S. Capitol by scores of President Trump’s supporters, a lone researcher began an effort to catalogue the posts of social media users across Parler, a platform founded to provide conservative users a safe haven for uninhibited “free speech” — but which ultimately devolved into a hotbed of far-right conspiracy theories, unchecked racism, and death threats aimed at prominent politicians. The researcher, who asked to be referred to by their Twitter handle, @donk_enby, began with the goal of archiving every post from January 6, the day of the Capitol riot; what she called a bevy of “very incriminating” evidence. Operating on little sleep, @donk_enby began the work of archiving all of Parler’s posts, ultimately capturing around 99.9 percent of its content. In a tweet early Sunday, @donk_enby said she was crawling some 1.1 million Parler video URLs. “These are the original, unprocessed, raw files as uploaded to Parler with all associated metadata,” she said. Included in this tranche of data, now more than 56 terabytes in size, @donk_enby confirmed the raw video includes GPS coordinates, which point to the locations of users when the videos were filmed.

Meanwhile, dozens of companies that have donated to the STBXPOTUS and other Republican causes over the past five years have suddenly started singing a different tune:

Sony-made GPS chipsets failed all over the world this weekend when a GPS cheat-sheet of sorts expired:

In general, the pattern of your route is correct, but it may be displaced to one side or the other.
However, in many cases by the completion of the workout, it sorts itself out. In other words, it’s mostly a one-time issue. The issue has to do with the ephemeris data file, also called the EPO file (Extended Prediction Orbit) or Connected Predictive Ephemeris (CPE). Or simply the satellite pre-cache file. That’s the file that’s delivered to your device on a frequent basis (usually every few days). This file is what makes your watch near-instantly find GPS satellites when you go outside. It’s basically a cheat-sheet of where the satellites are for the next few days, or up to a week or so.

I experienced this failure as well. I recorded two walks on my Garmin Venu, one Friday and one yesterday. In both cases, the recorded GPS tracks appeared about 400 m to the west of where I actually walked. Because the issue started between 22:30 UTC on December 31st and 15:00 UTC on January 1st, I (and others) suspect this may have been bad date handling. Last year not only had 366 days, but also 53 weeks, depending on how the engineers configured the calendar. So what probably happened is that an automatic CPE update failed or appeared to expire because the calendar handling was off.

Earlier this year, the Nielsen Norman Group repeated a study they first did in 1996 on the usability of PDF documents. As they've now found three times, making PDFs instead of actual web pages yields a horrible experience for users:

Jakob Nielsen first wrote about how PDF files should never be read online in 1996 — only three years after PDFs were invented. Over 20 years later, our research continues to prove that PDFs are just as problematic for users. Despite the evidence, they’re still used far too often to present content online. PDFs are typically large masses of text and images. The format is intended and optimized for print. It’s inherently inaccessible, unpleasant to read, and cumbersome to navigate online.
Neither time nor changes in user behavior have softened our evidence-based stance on this subject. Even 20 years later, PDFs are still unfit for human consumption in the digital space. Do not use PDFs to present digital content that could and should otherwise be a web page. PDF files are typically converted from documents that were planned for print or created in print-focused software platforms. When creating PDFs in these tools, it’s unlikely that authors will follow proper guidelines for web writing or accessibility. If they knew these, they’d probably just create a web page in the first place, not a PDF. As a result, users get stuck with a long, noninclusive mass of text and images that takes up many screens, is unusable for finding a quick answer, and is boring to read. There’s more work involved in creating a well-written, accessible PDF than simply exporting it straight from a word processing or presentation platform. Factors such as the use of color, contrast, document structure, tags, and much more must be intentionally addressed.

Yah, so, don't use them.

The December solstice happened about 8 hours ago, which means we'll have slightly more daylight today than we had yesterday. Today is also the 50th anniversary of Elvis Presley's meeting with Richard Nixon in the White House. More odd things of note: Finally, it's very likely you've made out with a drowning victim from the 19th century.

Every morning I get an email from The History Channel with "this day in history" bullet points. A couple stood out today:

And now, the sanity. Via author John Scalzi, (conservative) attorney T. Greg Doucette explains why the president will leave office on January 20th no matter what chicanery he tries to steal the election:

While I wait for my frozen pizza to cook, I've got these stories to keep me company: Going to check my pizza now.

Also known as: read all error messages carefully.
I've just spent about 90 minutes debugging an Azure DevOps pipeline after upgrading from .NET Core 3.1 to .NET 5 RC2. Everything compiled OK, and all tests ran locally, but the Test step of my pipeline failed with this error message:

##[error]Unable to find D:\a\1\s\ProjectName.Tests\bin\Debug\net5.0\ref\ProjectName.Tests.deps.json. Make sure test project has a nuget reference of package "Microsoft.NET.Test.Sdk".

The test step had this Test Files configuration:

**\bin\$(BuildConfiguration)\**\*Tests.dll
!**\*TestAdapter.dll
!**\obj\**

I'll save you all the steps I went through to determine that the .NET 5 build step only copied .dlls into the ref folder, without copying anything else (like the dependencies definition file). The solution turned out to be adding one line to the configuration:

**\bin\$(BuildConfiguration)\**\*Tests.dll
!**\ref\**
!**\*TestAdapter.dll
!**\obj\**

Excluding the ref folder fixed it. And I hope this post saves someone else 90 minutes of debugging.
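The fix boils down to include/exclude glob semantics. Here is a simplified Python model of that kind of filtering (built on stdlib fnmatch, not the pipeline's actual matching engine; the paths are hypothetical), showing why the metadata-only assembly under ref was being picked up before the extra exclusion was added:

```python
from fnmatch import fnmatch

def select_test_files(paths, patterns):
    # A path is selected if it matches at least one include pattern
    # and no exclude pattern (those prefixed with "!").
    includes = [p for p in patterns if not p.startswith("!")]
    excludes = [p[1:] for p in patterns if p.startswith("!")]
    return [path for path in paths
            if any(fnmatch(path, pat) for pat in includes)
            and not any(fnmatch(path, pat) for pat in excludes)]

# Hypothetical build-output paths (forward slashes for portability).
paths = [
    "Project.Tests/bin/Debug/net5.0/Project.Tests.dll",      # runnable test assembly
    "Project.Tests/bin/Debug/net5.0/ref/Project.Tests.dll",  # reference-only copy, no deps.json
    "Project.Tests/obj/Debug/net5.0/Project.Tests.dll",      # intermediate output
]

# Before the fix: the ref/ copy is picked up too.
broken = select_test_files(paths, ["**/*Tests.dll", "!**/obj/**"])

# After the fix: excluding ref/ leaves only the runnable assembly.
fixed = select_test_files(
    paths,
    ["**/*Tests.dll", "!**/ref/**", "!**/*TestAdapter.dll", "!**/obj/**"],
)
```

With the `!**/ref/**` line in place, only the assembly that actually has its deps.json next to it survives the filter.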
{}
# Build automation tools before make?

I realized that make was "only" invented in 1976 and seems to be one of the first build automation tools (at least it's probably the oldest still in use). Make with its shell focus seems like a total Unix-ism. Its interesting innovation is not being a build system but its dependency graph solver that runs build steps lazily. But we already had a history of large software projects in 1976. What did people use for e.g. OS/360 or the software for the F-14? Were there any real precursors for make, or did make invent the concept of build automation?

• Indeed, the quest for the perfect "dependency graph solver" has been a monument. – Brian H Apr 17 at 15:18
• Just getting your first print-out from the execution attempt on your stack of cards was an important step. Did it even compile/assemble? If it generated output, was it obviously non-sensical? Have you got enough paper yet to burn and provide your home with warmth for the winter? 😏 – RichF Apr 17 at 21:53

Preface 1: There can't be a single answer for all the varieties that have been out there.

Preface 2: It's important to keep in mind that make didn't start out as the almighty build automation and installation tool it's seen as today, but as a utility to reduce compile time by only compiling files that have changed or that depend on changed files. Compiling was a resource-intensive and slow task on machines back then. It wasn't 50 files per second, but rather 50 seconds per file. So saving every little step did count a lot.

Were there any real precursors for make?

Yes, there were, but usually rather OS-, development-environment- and project-specific tools.

Or did make invent the concept of build automation?

Not really; it's rather that make is a solution for a problem that did not arise the same way on mainframes. Software structure (at least for reasonably sized projects) was way more modularized and built around (internal) APIs.
Changing a module interface was rather frowned upon - and even more so using global variables and the like. Access to interfaces and data provided was usually encapsulated by interfaces - much like today's idea of methods. Except, we didn't use all these various fancy names. There were interfaces and records (parameter blocks). These 'methods' were kept binary stable as long as possible to avoid the need to compile whole applications at once. Changing some basic structure on the fly and starting a make was seen as quite unprofessional - think before you code. Software development was much more incremental, based on module concepts. Modules (and thus interfaces) were (could be) versioned. The task to handle this was often handled by, or at least done with great support from, the linker, handling dependencies including version matching (*1). Bottom line, it was a different approach using a much more deliberate process. Wartime Stories: I took part in development of a rather large (>1200 modules) mainframe software. In the mid 1990s the development process was stable for more than 10 years (in fact, even longer, predating this project) - when a major customer became interested in the development process used. The usual crap about quality. And the usual combination of outsider management with no real idea about software - or more exactly with about the knowledge of a weekend course about software - and some young graduate with 'fresh ideas' - as well just as fresh as his limited knowledge from university could carry - produced a request to change to a make style development - as our proven process is of course outdated by modern tools. Even worse, the whole team was developing into a single repository only separated by task and only versioned by delivery cycle. Yeah, right. Also as usual, the largest customer got more say about things he should not care about than was good. There is no make for the mainframe OS we used, so we had to create our own. 
Even more so, we didn't have trees of weird source-and-alike files, but well defined libraries holding sources, macros and scripts including many rules about how to combine and evaluate them. Long story short, it took us about a year (and about half a million dollars) to develop a new build process. We finally settled on a hybrid. The development was done as before, except now every developer had to have his own repository (which of course added way more errors and more management to handle them). When a release was up to be delivered, everything was now pumped into the new build system and configured in one huge almost-an-hour-long build run. Of course that new system never did report any new error. But hey, modern ... if I only had killed it with the argument that make is way older than the system we had, back when they proposed it :))

*1 - Linking was also usually not done against a bunch of .o files, but libraries holding versioned binaries - which itself could be the result of a linking process.

• "except now every developer had to have his own repository" Looks like the mid 90s were too early for git, then... (SCNR). – dirkt Apr 18 at 5:39
• @dirkt There was source code control in the 90's. It was just way less sophisticated than now. – JeremyP Apr 18 at 8:47
• @JeremyP: In case you missed it: One of the main things that makes git different from other, earlier source code control systems, say, CVS (which counts as 90's, I suppose), is that every developer does have his own repository, and he/she can synchronize it to and from other "upstream" repositories. In CVS, you always check out from a central repository, and I'd suppose that was the case with earlier variants like RCS and SCCS, too (haven't used either). So the idea "every developer needs to have his own repository" was valid, just a bit early. – dirkt Apr 18 at 10:03
• @dirkt I've used SCCS and briefly RCS. In both, each file was tracked separately.
I'm not sure there was a concept of a remote repository or even a repository. CVS represented a huge step up (in my experience, at least) because you could treat a whole set of files as a single thing. git, by the way, is not the first distributed VCS nor the best. In fact, of all the distributed VCS's I have used, it is the worst (admittedly, I've used only two; the other one is Mercurial). – JeremyP Apr 18 at 14:15
• It took only an hour to build your software? Luxury! – another-dave Apr 23 at 17:45

Based on what I know of the history of build automation tools, make was the first tool of its kind. By this I mean that make was the first widely-distributed and widely-used tool expressly designed to solve the problem of speeding up software rebuilds by using a dependency graph. As @Raffzahn very adequately describes, there was never a case where programmers on large projects didn't need to solve this problem. The "problem" was that programmers were solving it over and over, in isolation, for different projects and programming environments. make brought into existence a tool for just this one thing... which, as you say in the question, is a "Unix-ism" in the best sense.

• I think the definition of a "large project" was much different in 1961 than it is now. Computational capacity, memory, I/O, and production techniques were all limited by the technology and procedures of the time. In some cases just gaining the understanding to start asking the right questions was a limitation. – RichF Apr 17 at 21:48
• @RichF I am basing this on what I know from computing in the 1980s, which is that you would never want to, for example, re-compile that which only needed to be re-linked. It was too slow for any program I worked on that would have been worthy of the name "project". I still spend a lot of time nowadays perfecting the dependency graph for large projects.
It's not because compilation is slow; rather, it's the automated deployment and testing cycle that takes too long if you don't cull the graph. – Brian H Apr 17 at 22:03 • I wasn't disagreeing with anything. Maybe my comment would have been better placed with the question than your answer. – RichF Apr 17 at 22:31 In the ages prior to make, program rebuilds were as fast as you could get the keypunch operator to type up a new stack of cards from your hand-written coding pad. • Plus, at many facilities, there was the wait time to get your project scheduled on the mainframe. – RichF Apr 17 at 21:39 • Don't knock it. You had time to smoke a cigar, get a shoe shine, and read the newspaper between rebuilds... – Brian H Apr 17 at 22:07 One precursor was Digital's Concise Command Language. This reduced the number of steps required to go through a revise and test cycle. Later, Digital moved on to DCL, which permitted building a customized command procedure to automate building. But DCL came after the make command you asked about. • I accept the edits made to my post. But I did intend the lower case d for Digital. That's how it appeared in the digital logo. And to me, it will always be DEC. – Walter Mitty Apr 24 at 9:51 "Build automation system" seems to be a grandiose name for how we used to build software in late 1970s in my corner of DEC. There was a file containing the commands needed to build the software system. Someone wrote that command file. You ran that command file (batch, indirect command processor, whatever - it depended on the system). If the commands needed to build the software change, then we have interactive editors that can change the command file :-) For day-to-day software development, you'd typically know what you needed to rebuild based on what you'd changed, so you might say that dependency management was all wetware-based. Every now and then, when some functional milestone had been reached, a "baselevel" would be declared. 
A clean set of sources would be collected, built from scratch, run through a few cursory tests, maybe some distribution binary tapes made if needed, and the baselevel disk would be taken down and put somewhere safe, offsite if you were important enough. tl;dr - "how to build this software" was a pretty static procedure, so the lack of automation -- as distinct from unattended operation -- did not seem to be a problem. • "For day-to-day software development, you'd typically know what you needed to rebuild based on what you'd changed" -- except when you overlooked something. That added an entire class of very common errors to the process. Couple that with linkers that would only produce a viable executable if you got the file order exactly right (I'm glaring at you, VMS), and it was -- kind of hellish, really. – jeffB Apr 23 at 18:24 • Real programmers wrote task-builder ODL files. ;-) – another-dave Apr 23 at 18:28
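The dependency-graph idea the question credits to make (rebuild a target only if it is missing or older than a prerequisite, recursing through the graph) is small enough to sketch. This is a toy model in Python, not make itself; the file names, rules, and the fake-mtime helper below are all hypothetical:

```python
import os
import tempfile

def needs_rebuild(target, prerequisites):
    # make's rule: rebuild if the target is missing or any
    # prerequisite has a newer modification time than the target.
    if not os.path.exists(target):
        return True
    t = os.path.getmtime(target)
    return any(os.path.getmtime(p) > t for p in prerequisites)

def build(target, rules, actions, built=None):
    # rules:   target -> list of prerequisite files
    # actions: target -> zero-argument function that produces the target
    # Walk the dependency graph depth-first, running actions lazily.
    if built is None:
        built = set()
    for prereq in rules.get(target, ()):
        if prereq in actions:
            build(prereq, rules, actions, built)
    if target in actions and needs_rebuild(target, rules.get(target, ())):
        actions[target]()
        built.add(target)
    return built

# --- hypothetical two-step build: app <- main.o <- main.c ---
workdir = tempfile.mkdtemp()
def path(name):
    return os.path.join(workdir, name)

clock = [0]
def touch(name, content=""):
    # Write a file and give it a strictly increasing fake mtime, so the
    # toy behaves deterministically whatever the filesystem's timestamp
    # granularity is.
    with open(path(name), "w") as f:
        f.write(content)
    clock[0] += 1
    os.utime(path(name), (clock[0], clock[0]))

touch("main.c", "int main(void) { return 0; }\n")
rules = {path("app"): [path("main.o")], path("main.o"): [path("main.c")]}
actions = {
    path("main.o"): lambda: touch("main.o", "object code\n"),
    path("app"): lambda: touch("app", "linked binary\n"),
}

first = build(path("app"), rules, actions)   # builds main.o, then app
second = build(path("app"), rules, actions)  # nothing is out of date
touch("main.c", "int main(void) { return 1; }\n")  # edit a source file
third = build(path("app"), rules, actions)   # both targets go stale again
```

The second invocation runs no actions at all, which is exactly the compile-time saving the answers describe make being written for.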
{}
# 2D Collision in tilemap - ArrayOutOfBounds im trying to do collision detection with world based on tilemaps (two dimensional array) array - private int WIDTH = 10; private int HEIGHT = 6; String[][] simplemap = new String[][]{ { "0","1","2","3","4","5","6","7","8","g",}, { "11","g","","","","","g","","","g",}, { "22","g","","","","","g","","","g",}, { "33","g","","","","","g","","","g",}, { "44","g","","","","","g","","","g",}, { "g","g","g","g","g","g","g","g","g","g",}, { "g","g","g","g","g","g","g","g","g","g",}, }; than i create array from blocks and trying to show it create blocks : blocks = new Enemy[HEIGHT][WIDTH]; for(int i =0;i<HEIGHT;i++){ for(int j=0;j<WIDTH;j++){ blocks[i][j]= new Enemy(context,0,0); blocks[i][j].boxWidth=blockWidth; blocks[i][j].boxHeight=blockHeight; blocks[i][j].x=blockWidth*j; blocks[i][j].y=blockHeight*i; blocks[i][j].state = simplemap[i][j]; } } blocks[i][j].x=blockWidth*j; blocks[i][j].y=blockHeight*i; but if write blockWidth*i (not j) than map building like rotated on 90 degree. Maybe error here in this code. Not sure Than in glDraw i draw my map for (int i = 0; i < HEIGHT; i++) { for (int j = 0; j < WIDTH; j++) { if ( blocks[i][j].state == "g") { blocks[i][j].draw(gl); } } } For collision i do a this code (it's for Y collision detection) for X collision code similar for(int i = (int)ball.y/tileSize; i<(ball.y+ball.boxWidth)/tileSize; i++) { for(int j = (int)ball.x/tileSize; j<(ball.x+ball.boxWidth)/tileSize;j++) { //System.out.print(" i = "+i+"j "+j); Log.e("ERROR ", " "+i+":"+j); Log.e("ERROR", "DY = "+dy); if (blocks[i][j].state=="g") { //do what we need, check directions or something other } } } Problem Map drawing ok with this code , and my object work only in small area like i =HEIGHT, In tests its like when i start moving from left to right side - i see 10 blocks(like road) app always crashed with nullpointerexception(from array out of bounds) after 4-5 blocks on road. 
I think I'm just not building my map and positions correctly. With this code I see the correct map on the phone; if I swap i and j when setting the position of each block, the map is rotated 90 degrees. I also get an exception on the line if (blocks[i][j].state=="g"): array out of bounds, length 6, etc. I can't understand where my error is; can anyone help me? Please. Regards, Peter.

Thank you all, guys. Not sure if it's right, but in tests it shows everything correctly; code for collision on Y:

public void checkColisionY(){
    dy = dy + gravity * dt;
    ball.y = ball.y + dy * dt + 0.05f * gravity * dt * dt;
    for(int i = (int)ball.y/tileSize; i<(ball.y+ball.boxWidth)/tileSize; i++){
        for(int j = (int)ball.x/tileSize; j<(ball.x+ball.boxWidth)/tileSize; j++) {
            if(dy>0){
                if(ball.y+ball.boxWidth>=blocks[i+1][j].y && blocks[i+1][j].state =="g"){
                    ball.y = blocks[i+1][j].y-ball.boxWidth;
                    dy=dy*energyloss;
                    dy = dy * -1;
                }
            }
        }
    }
}

For accessing the array I suggest you make a function like the following:

String getState(int x, int y) {
    if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
        return "g"; // Let's say all blocks outside the map are solid.
    return blocks[y][x].state;
}

This simple function can save you from the array-out-of-bounds error. Basically, the function checks the x and y which you want to access; if the position is outside the array, it returns, for example, "g", which I suppose is ground/solid. You may change it to whatever value you want. And for this:

blocks[i][j].x=blockWidth*j;
blocks[i][j].y=blockHeight*i;

If you switch j with i, it's no wonder it is rotated 90 degrees. As in your for loop, j is x and i is y. Changing j/i to x/y might help you out of the confusion.

• I'm a little confused: in my case is this the right code, x = blockWidth*j? Or do I have an error somewhere and should it be x = blockWidth*i? The ArrayOutOfBounds comes because I can't understand where my error with building is. It's like the position of my object always changes in the i position (y). – Peter May 26 '16 at 12:05
• @Peter your code is correct.
It is x=blockWidth * j. If you want it to be blockWidth * i, then change the j with i in the for loop, like for(int i=0; i<WIDTH; i++). As I said, you might want to change i and j to x and y to help clear your confusion. And for the ArrayOutOfBounds error, see the function in my answer. – Greffin28 May 26 '16 at 12:11
• Well, I don't get the error now, but my ball still drops down when moving over a random box :(( like on the screenshot, but it can happen on other boxes (at the start, at the end; it's random :( ) – Peter May 26 '16 at 14:04
• @Peter You mean your ball falls through boxes like there are no obstacles? If that's the case, I suggest you improve your collision detection system. From what I understand from your code, you check the tile at the box position and at the box position + boxWidth. You don't want to do that; what you want to do is check where the ball is going (its destination) instead. – Greffin28 May 26 '16 at 14:21
• Yea, thanks. Two hours and I've changed my collision detection. Hope all will be ok :( – Peter May 26 '16 at 14:39

Tile Based Collision Detection in Games

In tile-based games it's really easy and fast to detect whether an object is colliding with a tile. Some pseudo-code to accomplish this:

/**
 * Moves our entity along the x, then y. If we do both at the same time the entity
 * will not move if any of the collision detections fail (won't be able to slide)
 */
public boolean doMove(int x, int y){
    boolean cx = move(x, 0);
    boolean cy = move(0, y);
    return cx || cy; // return whether we moved (you may use this later for something else)
}

/**
 * Actually perform the move and roll back the entity's position if the move fails
 */
public boolean move(int x, int y){
    pos.x += x;
    pos.y += y;
    boolean overlapping = checkCollision();
    if(overlapping){
        pos.x -= x;
        pos.y -= y;
        return false;
    }else{
        return true;
    }
}

/**
 * Check if our top left point is currently inside a tile. You can repeat this for
 * all 4 corners for bounding boxes or do some other fancy stuff for a circle.
 */
public boolean checkCollision(){
    int tx = pos.x/TILE_SIZE; // Check which tile our entity is stood on
    int ty = pos.y/TILE_SIZE;
    if(tiles[tx][ty].equals("g")){ // Note that I use .equals() and not == this is
        return true;               // because Strings are objects and they are not equal
    }                              // in an object sense. Their contents are just
    return false;                  // the same, so .equals() returns true and == will not.
}

• +1 For the collision and .equals(); forgot to mention that when comparing strings. – Greffin28 May 26 '16 at 14:46
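The two fixes the answers converge on (a bounds-checked tile lookup that treats everything outside the map as solid, and converting pixel coordinates to tile indices before indexing) can be sketched language-independently. A minimal Python model with hypothetical map data follows; note that unlike Java, Python's == compares strings by value, so no .equals() equivalent is needed:

```python
TILE_SIZE = 32
WIDTH, HEIGHT = 10, 6

# "g" marks a solid tile, "" an empty one (mirroring the question's map).
tiles = [["" for _ in range(WIDTH)] for _ in range(HEIGHT)]
for x in range(WIDTH):
    tiles[0][x] = tiles[HEIGHT - 1][x] = "g"   # top and bottom walls

def tile_at(x, y):
    # Bounds-checked lookup in tile coordinates: anything outside the
    # map is reported as solid, so out-of-range indices can never raise
    # an IndexError.
    if x < 0 or x >= WIDTH or y < 0 or y >= HEIGHT:
        return "g"
    return tiles[y][x]   # row index is y, column index is x

def is_solid_at_pixel(px, py):
    # Convert pixel coordinates to tile coordinates before indexing.
    return tile_at(px // TILE_SIZE, py // TILE_SIZE) == "g"
```

Keeping the y-row/x-column convention in exactly one place (tile_at) is what prevents both the out-of-bounds crash and the rotated-by-90-degrees map.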
{}
# Evaluation of $\int\limits_{0}^{2\pi}\frac{a\cos x -1}{(a^2+1-2a\cos x)^{3/2}}dx.$

$$\int\limits_{0}^{2\pi}\frac{a \cos x -1}{(a^2+1-2a \cos x)^{3/2}}dx = 2\int\limits_{0}^{\pi}\frac{a \cos x -1}{(a^2+1-2a\cos x)^{3/2}}dx.$$

If $a=1$, this integral doesn't converge (the integrand has a non-integrable singularity at $x=0$). How can it be evaluated for other values of $a$? I think it can be expressed as an elliptic integral $I(a^2)$ or calculated using series, but I'm stuck both ways.

Note that $$I\left(a\right)=2\int_{0}^{\pi}\frac{a\cos\left(x\right)-1}{\left(a^{2}+1-2a\cos\left(x\right)\right)^{3/2}}dx$$ $$=\frac{d}{da}\left(-2a\int_{0}^{\pi}\frac{1}{\sqrt{a^{2}+1-2a\cos\left(x\right)}}dx\right)$$ and, since $a>1$, $$\int_{0}^{\pi}\frac{1}{\sqrt{\left(a+1\right)^{2}-2a-2a\cos\left(x\right)}}dx=\int_{0}^{\pi}\frac{1}{\sqrt{\left(a+1\right)^{2}-2a\left(1+\cos\left(x\right)\right)}}dx$$ $$\stackrel{x=2u}{=}2\int_{0}^{\pi/2}\frac{1}{\sqrt{\left(a+1\right)^{2}-2a\left(1+\cos\left(2u\right)\right)}}du=\frac{2}{a+1}\int_{0}^{\pi/2}\frac{1}{\sqrt{1-\frac{4a}{\left(a+1\right)^{2}}\cos^{2}\left(u\right)}}du$$ $$\stackrel{u\rightarrow\pi/2-u}{=}\frac{2}{a+1}\int_{0}^{\pi/2}\frac{1}{\sqrt{1-\frac{4a}{\left(a+1\right)^{2}}\sin^{2}\left(u\right)}}du =\frac{2}{a+1}K\left(\frac{4a}{\left(a+1\right)^{2}}\right)$$ where $K(z)$ is the complete elliptic integral of the first kind. Hence we have $$I\left(a\right)=\frac{d}{da}\left(\frac{-4a}{a+1}K\left(\frac{4a}{\left(a+1\right)^{2}}\right)\right)=\color{red}{\frac{2\left((a+1)E\left(\frac{4a}{\left(a+1\right)^{2}}\right)-\left(a-1\right)K\left(\frac{4a}{\left(a+1\right)^{2}}\right)\right)}{a^{2}-1}}$$ for $a>1,$ where $E(z)$ is the complete elliptic integral of the second kind.

• It looks good, but as far as I can see from WA output (for some particular values of a) it can be expressed as $A\cdot E(t)+B\cdot K(t)$ where $E$ is the complete elliptic integral of the second kind. But I think I can evaluate the derivative that way (not sure).
Nov 13 '16 at 4:02
• I just need to be able to calculate $I(a)$ numerically, as a real number. Nov 13 '16 at 4:09
• @EzWin Indeed, the derivative of $K(z)$ is a combination of $K(z)$ and $E(z)$. See functions.wolfram.com/EllipticIntegrals/EllipticK/20/01 Nov 13 '16 at 9:05
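Since the comments ask for numerical values, the closed form can be checked without any special-function library: $K(m)$ and $E(m)$ (parameter convention, $m=4a/(a+1)^2$) follow from the arithmetic-geometric mean, and the trapezoid rule converges very quickly on the smooth $2\pi$-periodic integrand for $a>1$. The function names below are my own; this is a verification sketch, not part of the original answer:

```python
import math

def ellipKE(m, tol=1e-15):
    # Complete elliptic integrals K(m) and E(m) in the *parameter*
    # convention, computed with the arithmetic-geometric mean (AGM):
    # K = pi / (2 * agm(1, sqrt(1-m))), and E follows from the c_n terms.
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    s, n = 0.5 * c * c, 0
    while abs(c) > tol:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        n += 1
        s += 2.0 ** (n - 1) * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

def I_closed(a):
    # The closed form derived in the answer, valid for a > 1.
    m = 4.0 * a / (a + 1.0) ** 2
    K, E = ellipKE(m)
    return 2.0 * ((a + 1.0) * E - (a - 1.0) * K) / (a * a - 1.0)

def I_numeric(a, n=4000):
    # Trapezoid rule over one full period of the smooth integrand.
    h = 2.0 * math.pi / n
    return h * sum(
        (a * math.cos(k * h) - 1.0)
        / (a * a + 1.0 - 2.0 * a * math.cos(k * h)) ** 1.5
        for k in range(n)
    )
```

Comparing I_closed and I_numeric for a few values of $a>1$ is a quick sanity check of the red formula.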
{}
# Delete some bits and count

Consider all 2^n different binary strings of length n and assume n > 2. You are allowed to delete exactly b < n/2 bits from each of the binary strings, leaving strings of length n-b remaining. The number of distinct strings remaining depends on which bits you delete. Assuming your aim is to leave as few remaining different strings as possible, this challenge is to write code to compute how few you can leave as a function of n.

Example: n=3 and b = 1. You can leave only the two strings 11 and 00.

For n=9 and b = 1,2,3,4 we have 70,18,6,2
For n=8 and b = 1,2,3 we have 40,10,4
For n=7 and b = 1,2,3 we have 20,6,2
For n=6 and b = 1,2 we have 12,4
For n=5 and b = 1,2 we have 6,2

This question was originally posed by me in 2014 in a different form on MO.

Input and output

Your code should take in an integer n and output a single integer for each value of b, starting at b = 0 and increasing.

Score

Your score is the largest n for which your code completes for all b < n/2 in under a minute on my Linux-based PC. In case of tie breaks, the largest b your code gets to for the joint largest n wins. In case of tie breaks on that criterion too, the fastest code for the largest values of n and b decides. If the times are within a second or two of each other, the first posted answer wins.

Languages and libraries

You can use any language or library you like. Because I have to run your code, it would help if it was free (as in beer) and worked in Linux.

• I'm assuming b > 0 as additional input-requirement? Or would n=3 and b=0 simply output 2^n as result? – Kevin Cruijssen May 28 '18 at 9:12
• @KevinCruijssen It should output 2^n indeed. – user9207 May 28 '18 at 9:13
• Also, you say the input is a single n and a single b, but the score is the largest n for which the code completes all b < n/2 in under a minute. Wouldn't it be better to have a single input n in that case, and output all results for 0 <= b < n/2?
Or should we provide two programs/functions: one taking two inputs n and b, and one taking only input n and outputting all results in the range 0 <= b < n/2? – Kevin Cruijssen May 28 '18 at 9:18
• Well, I had already upvoted your challenge, so can't do it again. :) Although I have no idea how to calculate this efficiently (efficient O algorithms were something I've always been bad at.. and one of the few subjects at IT college I had to redo a couple of times), it does seem like a very interesting challenge. I'm curious to see what answers people come up with. – Kevin Cruijssen May 28 '18 at 9:47
• Is there a working example? It would be a good place to start, both in terms of correctness, but also for comparison of speed. – maxb May 28 '18 at 10:39

Python 2.7 / Gurobi n=9

This solution is a very direct use of Gurobi's ILP solver on the boolean mixed-integer problem (MIP). The only trick is to take out symmetry in 1's complement to halve the problem sizes. Using Gurobi LLC's limited-time "free" licence we are restricted to 2000 constraints, but solving 10 del 1 is well outside the 60-second time limit anyway on my laptop.

from gurobipy import *
from itertools import combinations

def mincover(n,d):
    bs = pow(2,n-1-d)
    m = Model()
    m.Params.outputFlag = 0
    b = {}
    for i in range(bs):
        b[i] = m.addVar(vtype=GRB.BINARY, name="b%d" % i)
    m.update()
    for row in range(pow(2,n-1)):
        x = {}
        for i in combinations(range(n), n-d):
            v = 0
            for j in range(n-d):
                if row & pow(2,i[j]):
                    v += pow(2,j)
            if v >= bs:
                v = 2*bs-1-v
            x[v] = 1
        m.addConstr(quicksum(b[i] for i in x.keys()) >= 1)
    m.setObjective(quicksum(b[i] for i in range(bs)), GRB.MINIMIZE)
    m.optimize()
    return int(round(2*m.objVal,0))

for n in range(4,10):
    for d in range((n//2)+1):
        print n, d, mincover(n,d)

UPDATE+CORR: 10,2 has optimal solution size 31 (see e.g.) Gurobi shows no symmetric solution of size 30 exists (returns problem infeasible) ..
[my attempt to show asymmetric feasibility at 30 remained inconclusive after 9.5hrs runtime]

e.g. bit patterns of integers

0 7 13 14 25 28 35 36 49 56 63 64 95 106 118 128 147 159 170 182 195 196 200 207 225 231 240 243 249 252 255

or

0 7 13 14 19 25 28 35 36 49 56 63 64 95 106 118 128 159 170 182 195 196 200 207 225 231 240 243 249 252 255

• You broke the "fastest claimed infinite bounty" record? – user202729 Jun 10 '18 at 1:15
• I don't see any bounty here, what do you mean? – jayprich Jun 10 '18 at 7:16
• @user202729 Yes.. I set it too low. I should have set it at n = 10 :) – user9207 Jun 10 '18 at 8:20
• Actually solving it at n=9 is not an easy thing. That's why the OP uses an existing library (which is supposed to be better than a hand-written solution, like mine). – user202729 Jun 10 '18 at 10:42
• Thanks @ChristianSievers I see MO claim that 10,2 has only asymmetric optima which I cannot refute nor verify. If I remove the symmetry assumption shortcut which works up to n=9 it turns out Gurobi can still solve up to n=9 in the time required. – jayprich Jun 13 '18 at 17:23

# C++, n=6

Brute force with some small optimizations.

    #include<cassert>
    #include<iostream>
    #include<vector>

    // ===========
    /** Helper struct to print binary representation.
    std::cout<<bin(str,len) prints (str:len) == the bitstring
    represented by last (len) bits of (str). */
    struct bin{
        int str,len;
        bin(int str,int len):str(str),len(len){}
    };
    std::ostream& operator<<(std::ostream& str,bin a){
        if(a.len) return str<<bin(a.str>>1,a.len-1)<<char('0'+(a.str&1));
        else if(a.str) return str<<"...";
        else return str;
    }

    // ===========
    /// A pattern of (len) bits of ones.
    int constexpr pat1(int len){ return (1<<len)-1; }

    // TODO benchmark: make (res) global variable?
    /** Append all distinct (subseqs+(sfx:sfxlen)) of (str:len)
    with length (sublen) to (res). */
    void subseqs_(
        int str,int len,int sublen,
        int sfx,int sfxlen,
        std::vector<int>& res
    ){
        // std::cout<<"subseqs_ : str = "<<bin(str,len)<<", "
        //     "sublen = "<<sublen<<", sfx = "<<bin(sfx,sfxlen)<<'\n';
        assert(len>=0);
        if(sublen==0){ // todo remove some branches can improve perf?
            res.push_back(sfx);
            return;
        }else if(sublen==len){
            res.push_back(str<<sfxlen|sfx);
            return;
        }else if(sublen>len){
            return;
        }
        if(str==0){
            res.push_back(sfx);
            return;
        }
        int nTrail0=0;
        for(int ncut; str&&nTrail0<sublen;
            ++nTrail0,
            ncut=__builtin_ctz(~str)+1, // cut away a bit'0' of str
                                        // plus some '1' bits
            str>>=ncut, len-=ncut
        ){
            ncut=__builtin_ctz(str)+1; // cut away a bit'1' of str
            subseqs_(str>>ncut,len-ncut,sublen-nTrail0-1,
                sfx|1<<(sfxlen+nTrail0),sfxlen+nTrail0+1,
                res
            ); // (sublen+sfxlen) is const. TODO global var?
        }
        if(nTrail0+len>=sublen) // this cannot happen if len<0
            res.push_back(sfx);
    }

    std::vector<int> subseqs(int str,int len,int sublen){
        assert(sublen<=len);
        std::vector<int> res;
        if(__builtin_popcount(str)*2>len){
            // too many '1's, flip [todo benchmark]
            subseqs_(pat1(len)^str,len,sublen,0,0,res);
            int const p1sublen=pat1(sublen);
            for(int& r:res)r^=p1sublen;
        }else{
            subseqs_(str,len,sublen,0,0,res);
        }
        return res;
    }

    // ==========
    /** Append all distinct (supersequences+(sfx:sfxlen)) of (str:len)
    with length (suplen) to (res).
    Define (a) to be a "supersequence" of (b) iff (b) is a subsequence of (a). */
    void supseqs_(
        int str,int len,int suplen,
        int sfx,int sfxlen,
        std::vector<int>& res
    ){
        assert(suplen>=len);
        if(suplen==0){
            res.push_back(sfx);
            return;
        }else if(suplen==len){
            res.push_back(str<<sfxlen|sfx);
            return;
        }
        int nTrail0; // of (str)
        if(str==0){
            res.push_back(sfx); // it's possible that the supersequence is '0000..00'
            nTrail0=len;
        }else{
            // str != 0 -> str contains a '1' bit ->
            // supersequence cannot be '0000..00'
            nTrail0=__builtin_ctz(str);
        }
        // todo try nTrail0=__builtin_ctz(str|1<<len), eliminates a branch
        // and conditional statement
        for(int nsupTrail0=0;nsupTrail0<nTrail0;++nsupTrail0){
            // (nsupTrail0+1) last bits of supersequence matches with
            // nsupTrail0 last bits of str.
            supseqs_(str>>nsupTrail0,len-nsupTrail0,suplen-1-nsupTrail0,
                sfx|1<<(nsupTrail0+sfxlen),sfxlen+nsupTrail0+1,
                res);
        }
        int const strMatch=str?nTrail0+1:len;
        // either '1000..00' or (in case str is '0000..00') the whole (str)
        for(int nsupTrail0=suplen+strMatch-len;nsupTrail0-->nTrail0;){
            // because (len-strMatch)<=(suplen-1-nsupTrail0),
            // (nsupTrail0<suplen+strMatch-len).
            // (nsupTrail0+1) last bits of supersequence matches with
            // (strMatch) last bits of str.
            supseqs_(str>>strMatch,len-strMatch,suplen-1-nsupTrail0,
                sfx|1<<(nsupTrail0+sfxlen),sfxlen+nsupTrail0+1,
                res);
        }
        // todo try pulling constants out of loops
    }

    // ==========
    int n,b;
    std::vector<char> done;
    unsigned min_undone=0;
    int result;

    void backtrack(int nchoice){
        assert(!done[min_undone]);
        ++nchoice;
        std::vector<int> supers_s;
        for(int s:subseqs(min_undone,n,n-b)){
            // obviously (s) is not chosen. Try choosing (s)
            supers_s.clear();
            supseqs_(s,n-b,n,0,0,supers_s);
            for(unsigned i=0;i<supers_s.size();){
                int& x=supers_s[i];
                if(!done[x]){
                    done[x]=true;
                    ++i;
                }else{
                    x=supers_s.back();
                    supers_s.pop_back();
                }
            }
            unsigned old_min_undone=min_undone;
            while(true){
                if(min_undone==done.size()){
                    // found !!!!
                    result=std::min(result,nchoice);
                    goto label1;
                }
                if(not done[min_undone]) break;
                ++min_undone;
            }
            if(nchoice==result){
                // backtrack more will only give worse result
                goto label1;
            }
            // note that nchoice is already incremented
            backtrack(nchoice);
            label1:
            // undoes the effect of (above)
            for(int x:supers_s) done[x]=false;
            min_undone=old_min_undone;
        }
    }

    int main(){
        std::cin>>n>>b;
        done.resize(1<<n,0);
        result=1<<(n-b); // the actual result must be less than that
        backtrack(0);
        std::cout<<result<<'\n';
    }

Run locally:

    [user202729@archlinux golf]$ g++ -std=c++17 -O2 delbits.cpp -o delbits
    [user202729@archlinux golf]$ time for i in $(seq 1 3); do ./delbits <<< "6 $i"; done
    12
    4
    2

    real    0m0.567s
    user    0m0.562s
    sys     0m0.003s
    [user202729@archlinux golf]$ time ./delbits <<< '7 1'
    ^C

    real    4m7.928s
    user    4m7.388s
    sys     0m0.173s
    [user202729@archlinux golf]$ time for i in $(seq 2 3); do ./delbits <<< "7 $i"; done
    6
    2

    real    0m0.040s
    user    0m0.031s
    sys     0m0.009s

• Mostly to encourage others to post their code if it's faster than mine. – user202729 May 31 '18 at 13:56
• Please?... (note: This is an instance of a set cover problem.) – user202729 Jun 1 '18 at 23:43
• I'm working on it. I just can't come up with any smart way of doing it. If nobody else posts an answer, I'll put mine up that can only go as high as n=4 so far. – mypetlion Jun 1 '18 at 23:58
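For anyone wanting the "working example" maxb asked for earlier in the comments, here is a tiny exact reference implementation. As noted above, the problem is an instance of minimum set cover: every n-bit string must be reducible (by deleting b bits) to some member of the chosen set of (n-b)-bit strings. This sketch is hopelessly exponential and only serves as a ground-truth check for very small n; the function name and structure are mine, not from any answer:

```python
from itertools import combinations

def subseqs(s, k):
    """All distinct length-k subsequences of the string s."""
    return {''.join(s[i] for i in idx) for idx in combinations(range(len(s)), k)}

def min_remaining(n, b):
    """Smallest set S of (n-b)-bit strings such that every n-bit string
    can be turned into a member of S by deleting b bits (exact search)."""
    full = [format(v, '0{}b'.format(n)) for v in range(2 ** n)]
    options = [subseqs(s, n - b) for s in full]   # admissible targets per string
    candidates = [format(v, '0{}b'.format(n - b)) for v in range(2 ** (n - b))]
    for size in range(1, len(candidates) + 1):
        for chosen in combinations(candidates, size):
            chosen = set(chosen)
            if all(opts & chosen for opts in options):
                return size

print(min_remaining(3, 1))  # 2, matching the n=3, b=1 example in the challenge
```

It reproduces the small values quoted in the challenge (for example 6 and 2 for n=5, b=1,2) but is far too slow to be a competitive entry.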
# How to avoid “Infinite glue shrinkage found in a paragraph.” error with enumitem and nameref

There seems to be a problem when combining the enumitem and nameref packages. With the code below I get the error:

! Infinite glue shrinkage found in a paragraph.

I already checked the package documentation but neither package mentions the other as problematic in combination. Removing the enumitem package gets rid of the error, but I need that package elsewhere in the document. This also shows the result that I am after, i.e. a cross-reference with the item label "Label".

Code:

\documentclass{article}
\usepackage{enumitem}
\usepackage{nameref}
\begin{document}
\begin{description}
\item [Label\label{Ref}] Text
\end{description}
\begin{itemize}
\item \nameref{Ref}
\end{itemize}
\end{document}

• I'm not sure \item [Label\label{Ref}] does what you think it does. Changing \nameref to \ref (to make it compile) we find this in the aux: \newlabel{Ref}{{}{1}{\enit@align {\enit@format {Label\label {Ref}}}}{}{}} I'm not sure those commands are meant to be used outside \item[...] of a description env. – daleif Jan 29 at 9:16
• \item[Label]\label{Ref} – egreg Jan 29 at 9:31
• @egreg it still gives no output for \nameref, but I don't think it would ever have anyway – daleif Jan 29 at 9:44
• @daleif When I remove the enumitem package it does what I want it to do, i.e. show a cross-reference with "Label". Is that not what is to be expected? – Daniel Jan 29 at 10:06
• @daleif enumitem has no fault in this. It's rather gettitlestring. See github.com/ho-tex/gettitlestring/issues/1 – egreg Jan 29 at 10:38

The nameref package uses gettitlestring, which has code to support enumitem, but it is incomplete: it only manages \enit@format, not \enit@align.
\documentclass{article}
\usepackage{enumitem}
\usepackage{nameref}

\makeatletter
  \GTS@TestLeft\enit@align\GTS@Cdr % package enumitem
}
  \let\enit@align\@empty % package enumitem
}
\makeatother

\begin{document}
\begin{description}
\item [Label\label{Ref}] Text
\end{description}
\begin{itemize}
\item \nameref{Ref}
\end{itemize}
\end{document}

Without the fix, the .aux file contains

\newlabel{Ref}{{}{1}{\enit@align {\enit@format {Label\label {Ref}}}}{}{}}

With the fix,

\newlabel{Ref}{{}{1}{Label}{}{}}
University Calculus: Early Transcendentals (3rd Edition)

The function has the absolute maximum value at $x=0$, where $f(0)=3$, but it does not have any absolute minimum value.

$$y=\frac{6}{x^2+2}\hspace{1cm}-1\lt x\lt1$$

The graph is sketched below.

- The graph obviously has an absolute maximum value, which is $f(0)=3$.
- The graph, however, does not have an absolute minimum value. The graph does seem to reach its lowest points at $x=-1$ and $x=1$, but since we consider here the open interval $(-1,1)$, we do not include the points where $x=-1$ or $x=1$. As $x$ approaches $\pm1$, $f(x)$ approaches $6/(1+2)=2$, yet the value $2$ is never attained on $(-1,1)$. Since there are no other candidates for a minimum, the graph does not have any absolute minimum in the defined interval.

This answer is still consistent with Theorem 1, because Theorem 1 requires that the function $f$ in question be examined on a CLOSED interval, not an open one as in this case.
# Determine if the sum Converges or Diverges

1. Dec 4, 2011

### McAfee

1. The problem statement, all variables and given/known data

3. The attempt at a solution

1. I have no idea. I know that the summation of the series converges.
2. I think it would diverge because the limit of the function does not equal zero.
3. I have tried the ratio test and got 1. Can't use the alternating series test because when ignoring signs the function increases.

2. Dec 4, 2011

### Dick

For 3 what's the limit of (5n-1)/(n+5) and what does that tell you about convergence? For 1 I think they might be asking you to estimate the difference using an integral.

3. Dec 4, 2011

### hunt_mat

2) is correct
3) Think about the function $f(x)=\ln x$

Last edited: Dec 4, 2011

4. Dec 4, 2011

### McAfee

The limit of (5n-1)/(n+5) as n approaches infinity equals 5, thus meaning that the series is divergent? I think

5. Dec 4, 2011

### McAfee

I'm not sure what [itex]f(x)=\ln x[/itex] means. Can you please explain.

6. Dec 4, 2011

### Dick

Right. If the nth term of a series doesn't approach 0 then it's always divergent. Now can you write an integral that's greater than the difference in 1?
# stacked \widetilde and \dot

I need to typeset (from bottom up) math bold L with above-dot and widetilde (both centered horizontally above the L), i.e. an approximate time-derivative of a tensor-valued function. (I would google more, but being one-handed these days makes it a bit more difficult.)

## 2 Answers

Isn't \widetilde{\dot{\mathbf{L}}} working?

• Thanks, and thanks @Mico. I thought the two would not work together in a trivial manner and did not even try. – eudoxos Sep 28 '11 at 8:17

The amsmath package should be loaded in the preamble. Then the following code should do what you need:

$\widetilde{\dot{\mathbf{L}}}$
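Putting the answers together, a minimal compilable file might look as follows (loading amsmath as the second answer suggests, although \dot, \widetilde and \mathbf are also available in plain LaTeX):

```latex
\documentclass{article}
\usepackage{amsmath} % loaded as suggested in the second answer
\begin{document}
% from bottom up: bold L, then \dot, then \widetilde on top
$\widetilde{\dot{\mathbf{L}}}$
\end{document}
```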
A SAS programmer asked whether it is possible to add reference lines to the categorical axis of a bar chart. The answer is yes. You can use the VBAR statement, but I prefer to use the VBARBASIC (or VBARPARM) statement, which enables you to overlay a wide variety of graphs on a bar chart. I have previously written about using the VBARBASIC statement to overlay graphs on bar charts. The VBARBASIC chart is compatible with more graphs than the VBAR chart. See the documentation for a complete discussion of "compatible" plot types. This article shows two ways to overlay a reference line on the categorical axis of a bar chart.

But the SAS programmer wanted more. He wanted to create a bar for each day of the year. That is a lot of bars! For bar charts that have many bars, I recommend using the NEEDLE statement to create a needle plot. The second part of this article demonstrates a needle plot and overlays reference lines for certain holidays. For simplicity, this article discusses only vertical bar charts, but all programs can be adapted to display horizontal bar charts.

### Reference lines and bar charts that use the VBAR statement

First, to be clear, you can easily add horizontal reference lines to a vertical bar chart. This is straightforward. The programmer wanted to add vertical reference lines to the categorical axis, as shown in the graph to the right. In this graph, reference lines are added behind the bars for Age=12 and Age=14. I made the bars semi-transparent so that the full reference lines are visible. As the SAS programmer discovered, the following attempt to add reference lines does not display any reference lines:

title "Bar Chart with Reference Line on Categorical Axis";
proc sgplot data=Sashelp.Class;
   refline 12 14 / axis=x lineattrs=(color=red);   /* DOES NOT WORK */
   vbar Age / response=Weight transparency=0.2;
run;

Why don't the reference lines appear? As I have previously written, you must specify the formatted values for a categorical axis.
This is mentioned in the documentation for the REFLINE statement, which states that "unformatted numeric values do not map to a formatted discrete axis. For example, if reference lines are drawn at points on a discrete X axis, the REFLINE values must be the formatted value that appears on the X axis." In other words, you must change the REFLINE values to be "the formatted values," which are '12' and '14'. The following call to PROC SGPLOT displays the vertical reference lines:

proc sgplot data=Sashelp.Class;
   refline '12' '14' / axis=x lineattrs=(color=red);   /* YES! THIS WORKS! */
   vbar Age / response=Weight transparency=0.2;
run;

The reference lines are shown in the graph at the beginning of this section.

### Reference lines and the VBARBASIC statement

I prefer to use the VBARBASIC statement for most bar charts. If you use the VBARBASIC statement, you can specify the raw reference values. To be honest, I am not sure why it works, but, in general, the VBARBASIC statement is better when you need to overlay a bar chart and other graphical elements. If you use the VBARBASIC statement, the natural syntax works as expected:

proc sgplot data=All;
   refline 12 14 / axis=x lineattrs=(color=red);   /* THIS WORKS, TOO! */
   vbarbasic Age / response=Weight transparency=0.2;
run;

The graph is the same as shown in the previous section.

### Reference lines for holidays on a graph of sales by date

This section discusses an example that has hundreds of bars. Suppose you want to display a bar chart for sales by date for an entire year. For data like these, I have two recommendations:

1. Do not use a vertical bar chart. Even if each bar requires only three pixels, the chart will be more than 3*365 ≈ 1,100 pixels wide. On a monitor that displays 72 pixels per inch, this graph would be about 40 cm (15.3 inches) wide. A better choice is to use a needle plot, which is essentially a bar chart where each bar is represented as a vertical line.
2. The horizontal axis cannot be discrete.
If it is, you will get 365 dates printed along the axis. Instead, you want to use the XAXIS TYPE=TIME option to display the bars along an axis where tick marks are placed according to months, not days. (If the categories are not dates but are "days since the beginning," you can use the XAXIS TYPE=LINEAR option instead.)

Recall that the SAS programmer wanted to display holidays on the graph of sales for each day. Rather than specify the holidays on the REFLINE statement (for example, '01JAN2003'd '25DEC2003'd), it is more convenient to put the reference line values into a SAS data set and specify the name of the variable on the REFLINE statement. You can use the HOLIDAY function in SAS to get the date associated with major government holidays. The following SAS DATA step extracts a year's worth of data for the sale of potato chips (in 2003) from the Sashelp.Snacks data set. These data are concatenated with a separate data set that contains the holidays that you want to display by using reference lines. A needle plot shows the daily sales and the reference lines.

data Snacks;       /* sales of potato chips for each date in 2003 */
   set Sashelp.Snacks;
   where '01JAN2003'd <= Date <= '31DEC2003'd AND
         Product="Classic potato chips";
run;

data Reflines;     /* holidays to overlay as reference lines */
   format RefDate DATE9.;
   RefDate = holiday("Christmas", 2003);      output;
   RefDate = holiday("Halloween", 2003);      output;
   RefDate = holiday("Memorial", 2003);       output;
   RefDate = holiday("NewYear", 2003);        output;
   RefDate = holiday("Thanksgiving", 2003);   output;
   RefDate = holiday("USIndependence", 2003); output;
   RefDate = holiday("Valentines", 2003);     output;
run;

data All;          /* concatenate the data and reference lines */
   set Snacks RefLines;
run;

title "Sales and US Holidays";
title2 "Needle Plot";
proc sgplot data=All;
   refline RefDate / axis=x lineattrs=(color=red);
   needle x=Date y=QtySold;
run;

Notice that you do not have to use the XAXIS TYPE=TIME option with the NEEDLE statement.
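If you want to sanity-check the dates that the HOLIDAY function returns, the same holiday rules can be mimicked in a few lines of Python (this is an illustrative stand-in, not SAS code; it covers only the holidays used above, using the standard U.S. date rules):

```python
import datetime

def us_holiday(name, year):
    """Tiny stand-in for the SAS HOLIDAY function (a few holidays only)."""
    fixed = {"Christmas": (12, 25), "NewYear": (1, 1),
             "USIndependence": (7, 4), "Valentines": (2, 14),
             "Halloween": (10, 31)}
    if name in fixed:
        month, day = fixed[name]
        return datetime.date(year, month, day)
    if name == "Thanksgiving":              # 4th Thursday of November
        first = datetime.date(year, 11, 1)
        offset = (3 - first.weekday()) % 7  # weekday(): Monday=0 ... Thursday=3
        return first + datetime.timedelta(days=offset + 21)
    if name == "Memorial":                  # last Monday of May
        last = datetime.date(year, 5, 31)
        return last - datetime.timedelta(days=last.weekday())
    raise ValueError("unsupported holiday: " + name)
```

For 2003 this gives, for example, Thanksgiving on 27NOV2003 and Memorial Day on 26MAY2003.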
The SGPLOT procedure uses the TYPE=TIME option by default when the X variable has a time, date, or datetime format. If you decide to use the VBARBASIC statement, you should include the XAXIS TYPE=TIME statement.

### Summary

In summary, this article shows how to add vertical reference lines to a vertical bar chart. You can use the VBAR statement and specify the formatted reference values, but I prefer to use the VBARBASIC statement whenever I want to overlay a bar chart and other graphical elements. You can also use a needle plot, which is especially helpful when you need to display 100 or more bars.

The post Add reference lines to a bar chart in SAS appeared first on The DO Loop.

Industries including sports and entertainment, travel, manufacturing, education and government benefit from analytical insights. In the United States and other parts of the world, there are signs. Record automobile traffic. Surging demand for workers. And a continued push to vaccinate. The pandemic and its effects are still very much with [...]

There are times when it is useful to simulate data. One of the reasons I use simulated data sets is to demonstrate statistical techniques such as multiple or logistic regression. By using SAS random functions and some DATA step logic, you can create variables that follow certain distributions or are correlated with other variables. You might decide to simulate a drug study where the drug group has a higher or lower mean than a placebo group. Because most programs that create simulated data use random numbers, let's start off by discussing the RAND function. This function can generate random numbers that follow distributions such as uniform, normal, Bernoulli, as well as dozens of other distributions. Veteran SAS programmers might be more familiar with some of the older random number functions such as RANUNI and RANNOR.
RANUNI was used to generate uniform random numbers (numbers between 0 and 1) and RANNOR generated random numbers from a normal distribution. The RAND function has replaced all of the older functions and has a number of advantages over the older functions. The first argument of the RAND function is the name of the distribution that you want to use, such as Uniform, Normal, or Bernoulli. For some of the distributions, such as Normal, you can supply parameters such as the mean and standard deviation. Here are some examples:

| Function | Description |
| --- | --- |
| rand('Uniform') | Generates uniform random numbers (between 0 and 1) |
| rand('Normal',100,20) | Generates values from a normal distribution with a mean of 100 and a standard deviation of 20 |
| rand('Bernoulli',.4) | Generates a 0 or 1 with a probability of a 1 equal to .4 |
| rand('Binomial',.2,5) | Generates random numbers that represent the number of successes in a sample of size 5 with the probability of success equal to .2 |

Important Note: if you want a reproducible series of random numbers using the RAND function, you must seed it by a call to STREAMINIT (with a positive integer argument) prior to its use. For example:

call streaminit(132435);

To clarify the note above, here are two programs that use the RAND function—one with, and one without the call to Streaminit.

data Without;
   do i = 1 to 5;
      x = rand('Uniform');
      output;
   end;
   drop i;
run;

Here is the output from running this program twice. Notice that the values of x are different in each run. Now let's run the same program with CALL STREAMINIT included. Here is the program.

data With;
   call streaminit(13579);
   do i = 1 to 5;
      x = rand('Uniform');
      output;
   end;
   drop i;
run;

And here are the output listings from running this program twice. Adding CALL STREAMINIT creates the same sequence of random numbers each time the program is run.
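The seeded-stream contract is not specific to SAS. As a quick sketch of the same idea in Python (the generators differ, but seeding plays the role of CALL STREAMINIT):

```python
import random

def five_uniforms(seed=None):
    """Return five uniform(0,1) draws; seeding first makes them reproducible."""
    if seed is not None:
        random.seed(seed)   # analogous to CALL STREAMINIT in SAS
    return [random.random() for _ in range(5)]

run1 = five_uniforms(13579)
run2 = five_uniforms(13579)
print(run1 == run2)  # True: identical sequences, like the seeded SAS program
```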
This is useful if you are generating groups for a drug study and want to be able to re-create the random sequences when it comes time to break the blind and analyze the results. Another reason I sometimes want to generate a repeatable sequence of random numbers is for problem sets included in many of my books—I want the reader to get exactly the same results as I did.

Let's switch topics and see how to write a program where you want to simulate flipping a coin. The program below uses a popular method, but it is not as elegant as the next program I'm going to show you.

*Old fashioned way to generate "random" events;
data Toss;
   do n = 1 to 10;
      if rand('uniform') lt .5 then Result = 'Tails';
      else Result = 'Heads';
      output;
   end;
run;

In the long run, half of the uniform random numbers will be less than .5, and the proportion of heads and tails will be approximately .5. Here is a listing of data set Toss.

A more sophisticated approach takes advantage of the RAND function's ability to generate random numbers from multiple distributions. A Bernoulli distribution is similar to a coin toss where you can adjust the probability of a 1 or 0 by including a second parameter to the function. The Toss2 program, shown below, does just that.

*More sophisticated program;
proc format;
   value Heads_Tails 0="Heads" 1="Tails";
run;

data Toss2;
   do n = 1 to 10;
      Results = rand('Bernoulli',.5);
      format Results Heads_Tails.;
      output;
   end;
run;

The format Heads_Tails substitutes the labels "Heads" and "Tails" for values of 0 and 1, respectively. Here is a listing of data set Toss2.

The final discussion of this blog concerns generating random values of two or more variables that are correlated. The example that follows generates x-y pairs that are correlated.
*Creating correlated x-y pairs;
data Corr;
   do i = 1 to 1000;
      x = rand('normal',100,10);
      y = .5*x + rand('Normal',50,10);
      output;
   end;
   drop i;
run;

By including a proportion of the x-value when creating the y-value, the x- and y-values will be correlated. Shown below is the output from PROC CORR, showing that x and y are correlated (r = .45586). I used a SAS Studio task to create the scatterplot shown next. You can increase or decrease the correlation by increasing the proportion of x used to create y. For example, you could use

y = .8*x + rand('Normal',20,10);

to create x-y pairs with a higher correlation.

You can see more examples of the RAND function in my book, SAS Functions by Example, Second Edition, available as an e-book from RedShelf or in print form from Amazon. To learn more about how to use SAS Studio as part of OnDemand for Academics, to write SAS programs, or to use SAS Studio tasks, please take a look at my new book: Getting Started with SAS Programming: Using SAS Studio in the Cloud (available in e-book from RedShelf or in a paper version from Amazon). I hope you enjoyed reading this blog and, as usual, I invite comments and/or questions.

Creating Simulated Data Sets was published on SAS Users.

A previous article discusses how to use SAS regression procedures to fit a two-parameter Weibull distribution in SAS. The article shows how to convert the regression output into the more familiar scale and shape parameters for the Weibull probability distribution, which are fit by using PROC UNIVARIATE. Although PROC UNIVARIATE can fit many univariate distributions, it cannot fit a mixture of distributions. For that task, you need to use PROC FMM, which fits finite mixture models. This article discusses how to use PROC FMM to fit a mixture of two Weibull distributions and how to interpret the results. The same technique can be used to fit other mixtures of distributions.
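Before the SAS code, here is a language-neutral sketch of what drawing from a two-component Weibull mixture involves. The Python below is only an illustration (the component parameters and mixing probability match the simulation described later in the article; note that random.weibullvariate takes its arguments in (scale, shape) order):

```python
import random

def sample_weibull_mixture(n, seed=12345):
    """Draw n values: with probability 0.667 from Weibull(shape=1.5, scale=0.8),
    otherwise from Weibull(shape=4, scale=2)."""
    rng = random.Random(seed)
    sample = []
    for _ in range(n):
        if rng.random() < 0.667:
            sample.append(rng.weibullvariate(0.8, 1.5))  # (alpha=scale, beta=shape)
        else:
            sample.append(rng.weibullvariate(2.0, 4.0))
    return sample

xs = sample_weibull_mixture(3000)
```

The key point, in any language, is the two-stage draw: first pick the component, then sample from that component's distribution.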
If you are going to use the parameter estimates in SAS functions such as the PDF, CDF, and RAND functions, you cannot use the regression parameters directly. You must transform them into the distribution parameters.

### Simulate a mixture of Weibull data

You can use the RAND function in the SAS DATA step to simulate a mixture distribution that has two components, each drawn from a Weibull distribution. The RAND function samples from a two-parameter Weibull distribution Weib(α, β) whose density is given by
$f(x; \alpha, \beta) = \frac{\alpha}{\beta^{\alpha}} x^{\alpha - 1} \exp\left(-\left(\frac{x}{\beta}\right)^{\alpha}\right)$
where α is a shape parameter and β is a scale parameter. This parameterization is used by most Base SAS functions and procedures, as well as many regression procedures in SAS. The following SAS DATA step simulates data from two Weibull distributions. The first component is sampled from Weib(α=1.5, β=0.8) and the second component is sampled from Weib(α=4, β=2). For the mixture distribution, the probability of drawing from the first distribution is 0.667 and the probability of drawing from the second distribution is 0.333. After generating the data, you can call PROC UNIVARIATE to estimate the parameters for each component. Notice that this fits each component separately. If the parameter estimates are close to the parameter values, that is evidence that the simulation generated the data correctly.
/* sample from a mixture of two-parameter Weibull distributions */
%let N = 3000;
data Have(drop=i);
   call streaminit(12345);
   array prob [2] _temporary_ (0.667 0.333);
   do i = 1 to &N;
      component = rand("Table", of prob[*]);
      if component=1 then
         d = rand("weibull", 1.5, 0.8);  /* C=Shape=1.5; Sigma=Scale=0.8 */
      else
         d = rand("weibull", 4, 2);      /* C=Shape=4; Sigma=Scale=2 */
      output;
   end;
run;

proc univariate data=Have;
   class component;
   var d;
   histogram d / weibull NOCURVELEGEND;  /* fit (Sigma, C) for each component */
   ods select Histogram ParameterEstimates Moments;
   ods output ParameterEstimates = UniPE;
   inset weibull(shape scale) / pos=NE;
run;

title "Weibull Estimates for Each Component";
proc print data=UniPE noobs;
   where Parameter in ('Scale', 'Shape');
   var Component Parameter Symbol Estimate;
run;

The graph shows a histogram for data in each component. PROC UNIVARIATE overlays a Weibull density on each histogram, based on the parameter estimates. The estimates for both components are close to the parameter values. The first component contains 1,970 observations, which is 65.7% of the total sample, so the estimated mixing probabilities are close to the mixing parameters. I used ODS OUTPUT and PROC PRINT to display one table that contains the parameter estimates from the two groups. PROC UNIVARIATE calls the shape parameter c and the scale parameter σ.

### Fitting a finite mixture distribution

The PROC UNIVARIATE call uses the Component variable to identify the Weibull distribution to which each observation belongs. If you do not have the Component variable, is it still possible to estimate a two-component Weibull model? The answer is yes. The FMM procedure fits statistical models for which the distribution of the response is a finite mixture of distributions. In general, the component distributions can be from different families, but this example is a homogeneous mixture, with both components from the Weibull family.
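To make the (shape, scale) parameterization concrete, the density formula can be written as a plain function. This Python version is an illustration of the formula above, not SAS's PDF function:

```python
import math

def weibull_pdf(x, shape, scale):
    """Two-parameter Weibull density in the article's (alpha=shape, beta=scale)
    convention: f(x) = (shape/scale**shape) * x**(shape-1) * exp(-(x/scale)**shape)."""
    return (shape / scale ** shape) * x ** (shape - 1) * math.exp(-(x / scale) ** shape)

# With shape=1 and scale=1, the formula reduces to the exponential density e**(-x).
```

A quick numerical integration of weibull_pdf over a fine grid confirms that the density integrates to 1 for any valid shape and scale.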
When fitting a mixture model, we assume that we do not know which observations belong to which component. We must estimate the mixing probabilities and the parameters for the components. Typically, you need a lot of data and well-separated components for this effort to be successful. The following call to PROC FMM fits a two-component Weibull model to the simulated data. As shown in a previous article, the estimates from PROC FMM are for the intercept and scale of the error term for a Weibull regression model. These estimates are different from the shape and scale parameters in the Weibull distribution. However, you can transform the regression estimates into the shape and scale parameters, as follows:

title "Weibull Estimates for Mixture";
proc fmm data=Have plots=density;
   model d = / dist=weibull link=log k=2;
   ods select ParameterEstimates MixingProbs DensityPlot;
   ods output ParameterEstimates=PE0;
run;

/* Add the estimates of Weibull scale and shape to the table of regression estimates.
   See https://blogs.sas.com/content/iml/2021/10/27/weibull-regression-model-sas.html */
data FMMPE;
   set PE0(rename=(ILink=WeibScale));
   if Parameter="Scale" then WeibShape = 1/Estimate;
   else WeibShape = ._;    /* ._ is one of the 28 missing values in SAS */
run;

proc print data=FMMPE;
   var Component Parameter Estimate WeibShape WeibScale;
run;

The program renames the ILink column to WeibScale. It also adds a new column (WeibShape) to the ParameterEstimates table. These two columns display the Weibull shape and scale parameter estimates for each component. Despite not knowing which observation came from which component, the procedure provides good estimates for the Weibull parameters. PROC FMM estimates the first component as Weib(α=1.52, β=0.74) and the second component as Weib(α=3.53, β=1.88). It estimates the mixing parameter for the first component as 0.6 and the parameter for the second component as 0.4.
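The transformation in the DATA step above is small enough to state as a helper function. The Python below is just an illustration of the two relationships given in the text (Weibull shape = 1/(regression scale), Weibull scale = exp(intercept)):

```python
import math

def weibull_from_regression(intercept, reg_scale):
    """Map Weibull regression estimates (intercept, scale of the error term)
    to the distribution's (shape, scale) parameters."""
    return 1.0 / reg_scale, math.exp(intercept)

# Round-trip check using the first component's distribution parameters:
# regression intercept = log(0.7351), regression scale = 1/1.52207
shape, scale = weibull_from_regression(math.log(0.7351), 1 / 1.52207)
```

Applying the inverse mapping to the fitted distribution parameters recovers them, which is a useful check when moving estimates between procedures.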
The PLOTS=DENSITY option on the PROC FMM statement produces a plot of the data and overlays the component and mixture distributions. The plot is shown below and is discussed in the next section.

### The graph of the component densities

The PLOTS=DENSITY option produces a graph of the data and overlays the component and mixture distributions. In the graph, the red curve shows the density of the first Weibull component (W1(d)), the green curve shows the density of the second Weibull component (W2(d)), and the blue curve shows the density of the mixture. Technically, only the blue curve is a "true" density that integrates to unity (or 100% on a percent scale). The components are scaled densities. The integral of a component equals the mixing probability, which for these data are 0.6 and 0.4, respectively. The mixture density equals the sum of the component densities.

Look closely at the legend in the plot, which identifies the component curves by the parameter estimates. Notice that the estimates in the legend are the REGRESSION estimates, not the shape and scale estimates for the Weibull distribution. Do not be misled by the legend. If you plot the PDF

density = PDF("Weibull", d, 0.74, 0.66);   /* WRONG! */

you will NOT get the density curve for the first component. Instead, you need to convert the regression estimates into the shape and scale parameters for the Weibull distribution.
The following DATA step uses the transformed parameter estimates and demonstrates how to graph the component and mixture densities:

/* plot the Weibull component densities and the mixture density */
data WeibComponents;
   retain d1 d2;
   array WeibScale[2] _temporary_ (0.7351, 1.8820);   /* =exp(Intercept) */
   array WeibShape[2] _temporary_ (1.52207, 3.52965); /* =1/Scale */
   array MixParm[2] _temporary_ (0.6, 0.4);
   do d = 0.01, 0.05 to 3.2 by 0.05;
      d1 = MixParm[1]*pdf("Weibull", d, WeibShape[1], WeibScale[1]);
      d2 = MixParm[2]*pdf("Weibull", d, WeibShape[2], WeibScale[2]);
      Component = "Mixture        ";  density = d1+d2;  output;
      Component = "Weib(1.52,0.74)";  density = d1;     output;
      Component = "Weib(3.53,1.88)";  density = d2;     output;
   end;
run;

title "Weibull Mixture Components";
proc sgplot data=WeibComponents;
   series x=d y=density / group=Component;
   keylegend / location=inside position=NE across=1 opaque;
   xaxis values=(0 to 3.2 by 0.2) grid offsetmin=0.05 offsetmax=0.05;
   yaxis grid;
run;

The density curves are the same, but the legend for this graph displays the shape and scale parameters for the Weibull distribution. If you want to reproduce the vertical scale (percent), you can multiply the densities by 100*h, where h=0.2 is the width of the histogram bins. In general, be aware that the PLOTS=DENSITY option produces a graph in which the legend labels refer to the REGRESSION parameters. For example, if you use PROC FMM to fit a mixture of normal distributions, the parameter estimates in the legend are for the mean and the VARIANCE of the normal distributions. However, if you intend to use those estimates in other SAS functions (such as PDF, CDF, and RAND), you must take the square root of the variance to obtain the standard deviation.

### Summary

This article uses PROC FMM to fit a mixture of two Weibull distributions.
The article shows how to interpret the parameter estimates from the procedure by transforming them into the shape and scale parameters for the Weibull distribution. The article also emphasizes that if you use the PLOTS=DENSITY option, the legend in the resulting graph contains the regression parameters, which are not the same as the parameters that are used for the PDF, CDF, and RAND functions. The post Fit a mixture of Weibull distributions in SAS appeared first on The DO Loop. In a September 10 post on the SAS Users blog, we announced that SAS Analytics Pro is now available for on-site or containerized cloud-native deployment. For our thousands of SAS Analytics Pro customers, this provides an entry point into SAS Viya. SAS Analytics Pro consists of three core elements of the SAS system: Base SAS®, SAS/GRAPH® and SAS/STAT®. The containerized deployment option adds the full selection of SAS/ACCESS engines, making it even easier to work with data from virtually any source. Even better, the containerized deployment option now adds new statistical capabilities that are not available in SAS/STAT on SAS9. Thanks to SAS Viya’s continuous delivery approach, we are able to provide this additional functionality so soon after the initial release. Below are highlights of these additional capabilities (you can find more details by following the links): ## Bayesian Analysis Procedures • Model multinomial data with cumulative probit, cumulative logit, generalized link, or other link functions in PROC BGLIMM. • Specify fixed scale values in a generalized linear mixed-effects model, and use an improved CMPTMODEL statement in PROC MCMC and PROC NLMIXED to fit compartment models. ## Survey Procedures For those SAS customers already on SAS Viya, or those considering the move, SAS Analytics Pro provides one more example of the new powers you will enjoy! From articles I've read on the web, it is clear that data is gold in the twenty-first century. 
Loading, enriching, manipulating and analyzing data is something in which SAS excels. Based on questions from colleagues and customers, it is clear that end-users want to display data handled by SAS outside of the user interfaces bundled with the SAS software. I recently completed a series of articles on the SAS Community library where I shed some light on different techniques for feeding web applications with SAS data stored in a SAS Viya environment. The series includes a discussion of options for extracting data, building a React application, and how to build web applications using SAS Viya, SAS Cloud Analytic Service (CAS), SAS Compute Server, and SAS Micro Analytic Service (MAS). I demonstrate the functionality and discuss project details in the video Develop Web Application to Extract SAS Data, found on the SAS Users YouTube Channel. I'm tying everything together in this post as a reference point. I'll provide a link to each article along with a brief description. The Community articles have all the detailed steps for developing the application. I'm excited to bring you this information, so let's get started. ### Part 1 - Develop web applications series: Options for extracting data In this first article, I explain when to use SAS Micro Analytic Service, SAS Viya Jobs, SAS Cloud Analytic Service, and SAS Compute Server. ### Part 2 - Develop web applications series: Creating the React based application To demonstrate the different options, in the second article, I create a simple web application using the React JavaScript library. The application also handles authentication against SAS Viya. The application is structured in such a way as to avoid redundant code, and each component has a well-defined role. From here, we can build the different pages to access CAS, MAS, Compute Server or SAS Viya Jobs. The image below offers a view of the application, which starts in Part 2 and continues throughout the series. 
### Part 3 - Develop web applications series: Build a web application using SAS Viya Jobs In this article, I walk you through the steps to retrieve data from the SAS environment using SAS Viya Jobs. We build out the Jobs tab and, on the page, display two dropdown boxes to select a library and table. The final piece is a submit button to retrieve the data to populate a table. ### Part 4 - Develop web applications series: Build a web application using SAS Cloud Analytic Service In article number 4, we go through the steps to build a page similar to the one in the previous article, but this time the data comes directly from the SAS Cloud Analytic Service (CAS). We reuse the application structure which was created in Part 2. We focus on the CAS tab. As with the SAS Viya Jobs page, we display two dropdown boxes to select a library and table. We finish again with a submit button to retrieve the data to populate a table. ### Part 5 - Develop web applications series: Build a web application using SAS Compute Server In the next article, we go through the steps to build a page similar to the ones from previous articles, but this time the data comes directly from the SAS Compute Server. We reuse the application structure created in Part 2. The remainder of the article focuses on the Compute tab. As with the CAS page, we display two dropdown boxes to select a library and table. We finish off again with a submit button to retrieve the data to populate a table. ### Part 6 - Develop web applications series: Build a web application using SAS Micro Analytic Service For the final article, you discover how to build a page to access data from the SAS Micro Analytic Service. We reuse the same basic web application built in Part 2. However, this time it will require a bit more preparation work, as the SAS Micro Analytic Service (MAS) is designed for model scoring. 
### Bonus Material - SAS Authentication for ReactJS based applications In this addendum to the series, I outline the authorization code OAuth flow. This is the recommended means of authenticating to SAS Viya, and I provide technical background and detailed code. ## Conclusion If you followed along with the different articles in this series, you should now have a fully functional web application for accessing different data source types from SAS Viya. This application is not for use as-is in production. You should, for example, add functionality to handle token expiration. You can, of course, tweak the interface to get the look and feel you prefer. See all of my SAS Communities articles here. Creating a React web app using SAS Viya was published on SAS Users. The convergence of digitalization, AI, and machine learning integrated into wearable devices has become a boon for many industries, particularly healthcare. When Fitbit launched its first wearable device in 2009, it clipped on a user’s clothing and utilized an internal motion detector to track the wearer’s movement, [...] It can be frustrating when the same probability distribution has two different parameterizations, but such is the life of a statistical programmer. I previously wrote an article about the gamma distribution, which has two common parameterizations: one that uses a scale parameter (β) and another that uses a rate parameter (c = 1/β). The relationship between scale and rate parameters is straightforward, but sometimes the relationship between different parameterizations is more complicated. Recently, a SAS programmer was using a regression procedure to fit the parameters of a Weibull distribution. He was confused about how the output from a SAS regression procedure relates to a more familiar parameterization of the Weibull distribution, such as is fit by PROC UNIVARIATE. 
This article shows how to perform two-parameter Weibull regression in several SAS procedures, including PROC RELIABILITY, PROC LIFEREG, and PROC FMM. The parameter estimates from regression procedures are not the usual Weibull parameters, but you can transform them into the Weibull parameters. This article fits a two-parameter Weibull model. In a two-parameter model, the threshold parameter is assumed to be 0. A zero threshold assumes that the data can be any positive value. ### Fitting a Weibull distribution in PROC UNIVARIATE PROC UNIVARIATE is the first tool to reach for if you want to fit a Weibull distribution in SAS. The most common parameterization of the Weibull density is $f(x; \alpha, \beta) = \frac{\alpha}{\beta^{\alpha}} x^{\alpha -1} \exp \left(-\left(\frac{x}{\beta}\right)^{\alpha }\right)$ where α is a shape parameter and β is a scale parameter. This parameterization is used by most Base SAS functions and procedures, as well as many regression procedures in SAS. The following SAS DATA step simulates data from the Weibull(α=1.5, β=0.8) distribution and fits the parameters by using PROC UNIVARIATE: /* sample from a Weibull distribution */ %let N = 100; data Have(drop=i); call streaminit(12345); do i = 1 to &N; d = rand("Weibull", 1.5, 0.8); /* Shape=1.5; Scale=0.8 */ output; end; run;   proc univariate data=Have; var d; histogram d / weibull endpoints=(0 to 2.5 by 0.25); /* fit Weib(Sigma, C) to the data */ probplot / weibull2(C=1.383539 SCALE=0.684287) grid; /* OPTIONAL: P-P plot */ ods select Histogram ParameterEstimates ProbPlot; run; The histogram of the simulated data is overlaid with a density from the fitted Weibull distribution. The parameter estimates are Shape=1.38 and Scale=0.68, which are close to the parameter values. PROC UNIVARIATE uses the symbols c and σ for the shape and scale parameters, respectively. The probability-probability (P-P) plot for the Weibull distribution is shown. 
In the P-P plot, a reference line is added by using the option weibull2(C=1.383539 SCALE=0.684287). (In practice, you must run the procedure once to get those estimates, then a second time to plot the P-P plot.) The slope of the reference line is 1/Shape = 0.72 and the intercept of the reference line is log(Scale) = -0.38. Notice that the P-P plot is plotting the quantiles of log(d), not of d itself. ### Weibull regression versus univariate fit It might seem strange to use a regression procedure to fit a univariate distribution, but as I have explained before, there are many situations for which a regression procedure is a good choice for performing a univariate analysis. Several SAS regression procedures can fit Weibull models. In these models, it is usually assumed that the response variable is a time until some event happens (such as failure, death, or occurrence of a disease). The documentation for PROC LIFEREG provides an overview of fitting a model where the logarithm of the random errors follows a Weibull distribution. In this article, we do not use any covariates. We simply model the mean and scale of the response variable. A problem with using a regression procedure is that a regression model provides estimates for intercepts, slopes, and scales. It is not always intuitive to see how those regression estimates relate to the more familiar parameters for the probability distribution. However, the P-P plot in the previous section shows how intercepts and slopes can be related to parameters of a distribution. The documentation for the LIFEREG procedure states that the Weibull scale parameter is exp(Intercept) and the Weibull shape parameter is the reciprocal of the regression scale parameter. Notice how confusing this is! For the Weibull distribution, the regression model estimates a SCALE parameter for the error distribution. But the reciprocal of that scale estimate is the Weibull SHAPE parameter, NOT the Weibull scale parameter! 
(In this article, the response distribution and the error distribution are the same.) ### Weibull regression in SAS The LIFEREG procedure includes an option to produce a probability-probability (P-P) plot, which is similar to a Q-Q plot. The LIFEREG procedure not only estimates the regression parameters but also provides estimates for the exp(Intercept) and 1/Scale quantities. The following statements use a Weibull regression model to fit the simulated data: title "Weibull Estimates from LIFEREG Procedure"; proc lifereg data=Have; model d = / dist=Weibull; probplot; inset; run; The ParameterEstimates table shows the estimates for the Intercept (-0.38) and Scale (0.72) parameters in the Weibull regression model. We previously saw these numbers as the parameters of the reference line in the P-P plot from PROC UNIVARIATE. Here, they are the result of a maximum likelihood estimate for the regression model. To get from these values to the Weibull parameter estimates, you need to compute Weib_Scale = exp(Intercept) = 0.68 and Weib_Shape = 1/Scale = 1.38. PROC LIFEREG estimates these quantities for you and provides standard errors and confidence intervals. The graphical output of the PROBPLOT statement is equivalent to the P-P plot in PROC UNIVARIATE, except that PROC LIFEREG reverses the axes and automatically adds the reference line and a confidence band. ### Other regression procedures Before ending this article, I want to mention two other regression procedures that perform similar computations: PROC RELIABILITY, which is in SAS/QC software, and PROC FMM in SAS/STAT software. The following statements call PROC RELIABILITY to fit a regression model to the simulated data: title "Weibull Estimates from RELIABILITY Procedure"; proc reliability data=Have; distribution Weibull; model d = ; run; The parameter estimates are similar to the estimates from PROC LIFEREG. The output also includes an estimate of the Weibull shape parameter, which is 1/EV_Scale. 
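To make the transformation concrete, the following Python sketch (an illustration with my own variable names, using the PROC UNIVARIATE estimates quoted earlier) converts between the two parameterizations in both directions:

```python
import math

# Weibull estimates from PROC UNIVARIATE (quoted earlier in this article)
weib_shape, weib_scale = 1.383539, 0.684287

# Regression parameterization used by PROC LIFEREG:
#   Intercept = log(Weibull scale);  regression Scale = 1/(Weibull shape)
intercept = math.log(weib_scale)   # ~ -0.38
reg_scale = 1.0 / weib_shape       # ~  0.72

# Inverse transformation: regression estimates -> Weibull parameters
weib_scale2 = math.exp(intercept)  # recovers 0.684287
weib_shape2 = 1.0 / reg_scale      # recovers 1.383539
print(round(intercept, 2), round(reg_scale, 2))
```

The same two lines of arithmetic (exponentiate the Intercept, take the reciprocal of the Scale) apply to the LIFEREG, RELIABILITY, and FMM output discussed in this article.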
The output does not include an estimate for the Weibull scale parameter, which is exp(Intercept). In a similar way, you can use PROC FMM to fit a Weibull model. PROC FMM is typically used to fit a mixture distribution, but you can specify the K=1 option to fit a single response distribution, as follows: title "Weibull Estimates from FMM Procedure"; proc fmm data=Have; model d = / dist=weibull link=log k=1; ods select ParameterEstimates; run; The ParameterEstimates table shows the estimates for the Intercept (-0.38) and Scale (0.72) parameters in the Weibull regression model. The Weibull scale parameter is shown in the column labeled "Inverse Linked Estimate." (The model uses a LOG link, so the inverse link is EXP.) There is no estimate for the Weibull shape parameter, which is the reciprocal of the Scale estimate. ### Summary The easiest way to fit a Weibull distribution to univariate data is to use the UNIVARIATE procedure in Base SAS. The Weibull shape and scale parameters are directly estimated by that procedure. However, you can also fit a Weibull model by using a SAS regression procedure. If you do this, the regression parameters are the Intercept and the scale of the error distribution. You can transform these estimates into estimates for the Weibull shape and scale parameters. This article shows the output (and how to interpret it) for several SAS procedures that can fit a Weibull regression model. Why would you want to use a regression procedure instead of PROC UNIVARIATE? One reason is that the response variable (failure or survival) might depend on additional covariates. A regression model enables you to account for additional covariates and still understand the underlying distribution of the random errors. A second reason is that the FMM procedure can fit a mixture of distributions. To make sense of the results, you must be able to interpret the regression output in terms of the usual parameters for the probability distributions. 
The post Interpret estimates for a Weibull regression model in SAS appeared first on The DO Loop. Rijkswaterstaat (RWS) is the Netherlands' main agency for the design, construction, management and maintenance of waterways and infrastructure. Their mission is to promote safety, mobility and quality of life in the Netherlands. They are the masterminds behind some of the most prestigious water projects in the world. In a recent panel [...]
The National Renewable Energy Laboratory’s (NREL’s) dynamometer test stand is one of a kind. It offers wind industry engineers a unique opportunity to conduct lifetime endurance tests on a wide range of wind turbine drivetrains and gearboxes at various speeds, using low or high torque. Located in a 7500-ft$^2$ building at the National Wind Technology Center south of Boulder, Colorado, the 2.5 MW dynamometer test stand was developed to help researchers improve the performance and reliability of wind turbines and ultimately reduce the cost of the electricity they generate. By testing full-scale wind turbines, engineers from NREL and industry hope to understand the impact of various wind conditions with the goal of improving hardware designs. A few months of endurance testing on NREL’s dynamometer test stand can simulate the equivalent of 30 years of use and a lifetime of braking cycles, thus helping engineers to determine which components...
## January 13, 2007 ### The First Part of the Story of Quantizing by Pushing to a Point… #### Posted by Urs Schreiber …in which the author entertains himself by computing the space of states of a charged particle by pushing its parallel transport forward to a point. Just for fun. Let (1)$X$ be a space and let (2)$\array{ V \\ \downarrow \\ X }$ be a vector bundle over $X$ with connection (3)$\nabla \,.$ Equivalently this means that we have a locally smoothly trivializable functor (4)$\mathrm{tra}_{(V,\nabla)} : P_1(X) \to \mathrm{Vect}$ that sends paths in $X$ to the parallel transport along them obtained from the connection $\nabla$. Quantizing the single particle charged under this bundle with connection consists of a kinematical and of a dynamical aspect: In this first part of the story we amuse ourselves by just doing the trivial kinematics – but in a nice way. So, let’s forget the connection immediately by just looking at constant paths in $X$. I’ll write (5)$\mathrm{tra}_V : \mathrm{Disc}(X) \to \mathrm{Vect}$ for the above functor restricted to constant paths, i.e. to the discrete category over $X$. It does nothing but sending each point in $X$ to the vector space sitting over it – but smoothly so. We can play the same game on the base space (6)$\{\mathrm{pt}\}$ that consists of nothing but a single point. A trivial rank one vector bundle over a point is a functor (7)$I_{\mathrm{pt}} : \mathrm{Disc}(\{\mathrm{pt}\}) \to \mathrm{Vect}$ that does nothing but sending the single point to the complex numbers: (8)$I_{\mathrm{pt}} : x \mapsto \mathbb{C} \,.$ I admit that I am presupposing a certain tolerance for fancy-looking trivialities here. But enduring these will pay off eventually. Using the unique functor from $X$ to the point (9)$p : \mathrm{Disc}(X) \to \mathrm{Disc}(\{\mathrm{pt}\})$ we can pull back the trivial vector bundle over the point to $X$. 
The result (10)$I_X := p^* I_{\mathrm{pt}} : \mathrm{Disc}(X) \stackrel{p}{\to} \mathrm{Disc}(\{\mathrm{pt}\}) \stackrel{I_{\mathrm{pt}}}{\to} \mathrm{Vect}$ is the trivial rank one bundle on $X$. This functor simply sends each point of $X$ to the typical fiber $\mathbb{C}$: (11)$I_X : x \mapsto \mathbb{C} \,.$ Insofar as any of this is interesting at all, it is for the following simple fact: a morphism of functors: (12)$e : I_X \to \mathrm{tra}_V$ is precisely a section of the vector bundle $V$: $e$ is nothing but an assignment (13)$e : x \mapsto (e_x : \mathbb{C} \to V_x)$ of a linear map from $\mathbb{C}$ to the fiber $V_x$ for each point $x$. That’s nothing but a choice of vector in each fiber. So, the space of all such functor morphisms (14)$\Gamma(V) = \mathrm{Hom}(I_X, \mathrm{tra}_V)$ from the trivial one into the one defining our vector bundle is nothing but the space of sections of $V$. Since $\Gamma(V)$ is a vector space, and since vector bundles over the point are nothing but vector spaces, I want to think of $\Gamma(V)$ as a vector bundle over the point. So I regard it as a functor (15)$q(\mathrm{tra}_V) := \mathrm{pt} \mapsto \Gamma(V) \,.$ On top of all these trivialities, I’ll finally allow myself to think of $\Gamma(V)$ as morphisms from the trivial line bundle on the point into this guy: (16)$\Gamma(V) \simeq \mathrm{Hom}(I_{\mathrm{pt}}, q(\mathrm{tra}_V)) \,.$ The upshot is that, taken together, we get the isomorphism (17)$\mathrm{Hom}(p^* I_{\mathrm{pt}}, \mathrm{tra}_V) \simeq \mathrm{Hom}(I_{\mathrm{pt}}, q(\mathrm{tra}_V)) \,.$ If you like, you can convince yourself that this isomorphism of Hom-spaces is indeed natural in both arguments. 
But this means that pulling back functors from points to $X$ (18)$[\mathrm{Disc}(\{\mathrm{pt}\}),\mathrm{Vect}] \stackrel{p^*}{\to} [\mathrm{Disc}(X),\mathrm{Vect}]$ is the adjoint of taking sections (19)$[\mathrm{Disc}(\{\mathrm{pt}\}),\mathrm{Vect}] \stackrel{q(\cdot)}{\leftarrow} [\mathrm{Disc}(X),\mathrm{Vect}] \,.$ This, in turn, says that forming the space of sections of $\mathrm{tra}_V$ is the result of pushing $\mathrm{tra}_V$ forward to a point. Of course that’s neither new nor very deep. But part of a nice story that still needs to be told. 
## Introduction Heterogeneity in single-cell RNA sequencing (scRNA-seq) datasets is frequently characterized by identifying cell clusters in gene expression space, wherein each cluster represents a distinct cell type or cell state. In particular, numerous studies have used unsupervised clustering to discover novel cell populations in heterogeneous samples1. The steps involved in unsupervised clustering of scRNA-seq data have been well documented2. (i) Low-quality cells are first discarded in a quality control step3. (ii) Reads obtained from the remaining cells are then normalized to remove the influence of technical effects, while preserving true biological variation4. (iii) After normalization, feature selection is performed to select the subset of genes that are informative for clustering, (iv) which are then typically reduced to a small number of dimensions using principal component analysis (PCA)5. (v) In the reduced principal component (PC) space, cells are clustered based on their distance from one another (typically, Euclidean distance), and (vi) the corresponding clusters are assigned a cell type or state label based on the known functions of their differentially expressed (DE) genes6. Although feature selection is a critical step in the canonical clustering workflow described above, only a few different approaches have been developed in this space. Moreover, there have been only a handful of systematic benchmarking studies of scRNA-seq feature selection methods7,8,9. A good feature selection algorithm is one that selects cell-type-specific (DE) genes as features, and rejects the remaining genes. More importantly, the algorithm should select features that optimize the separation between biologically distinct cell clusters. A comprehensive benchmarking study of feature selection methods would ideally use both of these metrics. 
The most widely used approach for feature selection is mean-variance modeling: genes whose variation across cells exceeds a data-derived null model are selected as features10,11. Such genes are described as highly variable genes (HVGs)12. Some earlier single-cell studies instead selected genes with high loading on the top principal components of the gene expression matrix (high loading genes, or HLGs) as features13. M3Drop, a more recent method, selects genes whose dropout rate (number of cells in which the gene is undetected) exceeds that of other genes with the same mean expression9. As an alternative approach to detect rare cell types, GiniClust uses a modified Gini index to identify genes whose expression is concentrated in a relatively small number of cells14. All of the above feature selection methods test genes individually, without considering expression relationships between genes. Another drawback is that existing methods for determining the size of the feature set do not bear direct relation to the separation of cells in the resulting space. Here, we present Determining the Underlying Basis using Stepwise Regression (DUBStepR), an algorithm for feature selection based on gene–gene correlations. A key feature of DUBStepR is the use of a stepwise approach to identify an initial core set of genes that most strongly represent coherent expression variation in the dataset. Uniquely, DUBStepR defines a novel graph-based measure of cell aggregation in the feature space (termed density index (DI)), and uses this measure to optimize the number of features. The complete DUBStepR workflow is shown in Fig. 1. We benchmarked DUBStepR against 7 commonly used feature selection algorithms on datasets from four different scRNA-seq protocols (10x Genomics, Drop-Seq, CEL-Seq2, and Smart-Seq2) and found that it substantially outperformed other methods on quantitative measures of cluster separation and marker gene detection. 
Further, DUBStepR uniquely deconvolved T and NK cell heterogeneity by identifying disease-pertinent clusters (of both rare and common abundances) in PBMCs from rheumatoid arthritis patients. Finally, we show that DUBStepR could potentially be applied even to single-cell ATAC sequencing data. ## Results ### Gene–gene correlations predict cell-type-specific DE genes The first step in DUBStepR is to select an initial set of candidate features based on known properties of cell-type-specific DE genes (marker genes). DE genes specific to the same cell types would tend to be highly correlated with each other, whereas those specific to distinct cell types are likely to be anti-correlated (Fig. 2a, b; see the “Methods” section). In contrast, non-DE genes are likely to be only weakly correlated (Fig. 2c). We therefore hypothesized that a correlation range score derived from the difference between the strongest positive and strongest negative correlation coefficients of a gene (“Methods”), would be substantially elevated among DE genes. Indeed, we found that the correlation range score was significantly higher for DE genes relative to non-DE genes (Fig. 2d). Moreover, the correlation range score of a gene was highly predictive of its greatest fold change between cell types, and also its most significant differential expression q-value (Fig. 2e, f). Due to the strong association between correlation range and marker gene status, DUBStepR selects genes with high correlation range score as the initial set of candidate feature genes (“Methods”). ### Stepwise regression identifies a minimally redundant feature subset We observed that candidate feature genes formed correlated blocks of varying size in the gene–gene correlation (GGC) matrix (Fig. 3a), with each block presumably representing a distinct pattern of expression variation across the cells. 
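The correlation range score introduced above can be illustrated on a toy example. The sketch below is our own minimal Python re-implementation (not the DUBStepR code): the score of a gene is taken as the difference between its strongest positive and strongest negative correlation with any other gene, so cell-type-specific (DE) genes score high while housekeeping-like genes score low.

```python
from itertools import combinations
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def correlation_range(expr):
    """Score each gene by (strongest positive - strongest negative)
    correlation with any other gene. expr: dict gene -> vector."""
    corr = {g: [] for g in expr}
    for g1, g2 in combinations(expr, 2):
        r = pearson(expr[g1], expr[g2])
        corr[g1].append(r)
        corr[g2].append(r)
    return {g: max(rs) - min(rs) for g, rs in corr.items()}

# Toy data: two "cell types" of 4 cells each
expr = {
    "markerA": [5, 6, 5, 6, 1, 0, 1, 0],   # up in type 1 (DE)
    "markerB": [0, 1, 0, 1, 5, 6, 5, 6],   # up in type 2 (DE, anti-correlated)
    "house1":  [3, 4, 3, 4, 3, 4, 3, 4],   # no cell-type signal
}
scores = correlation_range(expr)
print(scores)  # both marker genes score higher than the housekeeping gene
```

In this toy example the two anti-correlated marker genes receive a large score, mirroring the paper's observation that the score separates DE from non-DE genes.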
To ensure a more even representation of the diverse cell-type-specific expression signatures within the candidate feature set, we sought to identify a representative minimally redundant subset, which we termed “seed” genes. For this purpose, DUBStepR performs stepwise regression on the GGC matrix, regressing out, at each step, the gene explaining the largest amount of variance in the residual from the previous step (Fig. 3b–d). We devised an efficient implementation of this procedure that requires only a single matrix multiplication at each step (“Methods”). This approach selects seed genes with diverse patterns of cell-type-specificity (Fig. 3e–h). DUBStepR then uses the elbow point of the stepwise regression scree plot to determine the optimal number of steps (“Methods”), i.e., the size of the seed gene set (Fig. 3i, j). ### Guilt-by-association expands the feature set Although the seed genes in principle span the major expression signatures in the dataset, each individual signature (set of correlated genes) is now represented by only a handful of genes (2–5 genes, in most cases). Given the high level of noise in scRNA-seq data, it is likely that this is insufficient to fully capture coherent variation across cells. DUBStepR therefore expands the seed gene set by iteratively adding correlated genes from the candidate feature set using a guilt-by-association approach. Guilt-by-association has previously been employed for feature selection on mass spectrometry data15, and provides a robust solution to order candidate feature genes by their association to the seed gene set (Supp. Fig. S2; “Methods”). This approach allows DUBStepR to prioritize genes that more strongly represent an expression signature (i.e., are better features for clustering). Candidate genes are added until DUBStepR reaches the optimal number of feature genes (see below). 
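The stepwise deflation described above can be sketched in a few lines of Python. This is a simplified illustration, operating directly on centered expression vectors rather than on the GGC matrix, and without the single-matrix-multiplication optimization described in the Methods: at each step, the gene explaining the most residual variance is selected as a seed and regressed out of every gene.

```python
from statistics import mean

def center(v):
    m = mean(v)
    return [x - m for x in v]

def regress_out(expr, n_steps):
    """Greedy stepwise selection: at each step, pick the gene whose
    residual vector explains the most remaining variance across all
    genes, then project it out of every residual. expr: dict gene -> vector."""
    resid = {g: center(v) for g, v in expr.items()}
    seeds = []
    for _ in range(n_steps):
        def explained(g):
            xg = resid[g]
            ss = sum(x * x for x in xg)
            if ss == 0.0:           # already regressed out
                return 0.0
            return sum(sum(a * b for a, b in zip(resid[h], xg)) ** 2
                       for h in resid) / ss
        g = max(resid, key=explained)
        seeds.append(g)
        xg = resid[g]
        ss = sum(x * x for x in xg)
        for h in resid:             # regress xg out of every gene's residual
            beta = sum(a * b for a, b in zip(resid[h], xg)) / ss
            resid[h] = [a - beta * b for a, b in zip(resid[h], xg)]
    return seeds, resid

# Toy data: g1 and g2 form one correlated block; g3 is an independent signal
expr = {
    "g1": [5, 6, 5, 6, 1, 0, 1, 0],
    "g2": [4, 5, 4, 5, 0, 1, 0, 1],
    "g3": [0, 0, 1, 1, 0, 0, 1, 1],
}
seeds, resid = regress_out(expr, 1)
print(seeds)  # the first seed comes from the dominant correlated block
```

After a seed is selected, its own residual becomes zero and the residuals of correlated genes shrink, so the next step necessarily picks a gene from a different expression signature; this is the mechanism behind the "minimally redundant" seed set.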
### Benchmarking To benchmark the performance of DUBStepR, we compared it against 6 other algorithms for feature selection in scRNA-seq data: three variants of the HVG approach (HVGDisp, HVGVST, trendVar), deviance-based feature selection (devianceFS), HLG, and M3Drop/DANB (Table 1). For completeness, we also benchmarked GiniClust, though it was designed only for identifying markers of rare cell types. Each algorithm was benchmarked on 7 datasets spanning 4 scRNA-seq protocols: 10x Genomics, Drop-Seq, CEL-Seq2, and Smart-Seq on the Fluidigm C1 (Supp. Note 2A). These datasets were selected because the true cell type could be independently ascertained based on cell line identity or FACS gate. Our benchmarking approach thus avoids the circularity of using algorithmically defined cell type labels as ground truth. To evaluate the quality of the selected features, we used the well-established Silhouette index (SI), which quantifies cluster separation, i.e., closeness between cells belonging to the same cluster, relative to the distance to cells from other clusters16 (Supp. Note 3A). In addition to being a well-established measure of single-cell cluster separation17,18,19, the SI has the advantage of being independent of any downstream clustering algorithm. We evaluated the SI of each algorithm across a range of feature set sizes (50–4000), scaled the SI values to a maximum of 1 for each dataset, and then averaged the scaled SIs across the 7 datasets (Fig. 4a; Supp. Fig. S3). Remarkably, HLG, an elementary PCA-based method that predates scRNA-seq technology, achieved greater average cell type separation than existing single-cell algorithms at most feature set sizes. In contrast to DUBStepR, which showed maximal performance at 200–300 features, the other methods remained close to their respective performance peaks over a broad range from 300 to 2000 features and dropped off on either side of this range. 
DUBStepR substantially outperformed all other methods across the entire range of feature set size (Fig. 4a). Moreover, DUBStepR was the top-ranked algorithm on 5 of the 7 datasets (Fig. 4b). For optimal cell type clustering, a feature selection algorithm should ideally select only DE genes, i.e., genes specific to cell types or subtypes, as features. As an independent benchmark, we therefore quantified the ability of feature selection algorithms to discriminate between DE and non-DE genes. To minimize the effect of ambiguously classified genes, we designated the top 500 most differentially expressed genes in each dataset as DE, and the bottom 500 as non-DE (“Methods”), and then quantified performance using the area under the receiver operating characteristic (AUROC; Supp. Note 3B). Remarkably, DUBStepR achieved an AUROC in excess of 0.97 on all 7 datasets, indicating near-perfect separation of DE and non-DE genes (Fig. 4c). devianceFS was able to exceed the same performance threshold on 4 of the 7 datasets and HLG on only one. All other methods demonstrated significantly lower performance (Fig. 4c). Thus, DUBStepR greatly improves our ability to select cell type/subtype-specific marker genes (DE genes) for clustering scRNA-seq data. With the exponential increase in the size of single-cell datasets, any new computational approach in the field must be able to scale to over a million cells. To improve DUBStepR’s ability to efficiently process large datasets, we identified a technique to reduce a key step in stepwise regression to a single matrix multiplication, sped up calculation of the elbow point, and implemented the entire workflow on sparse matrices (“Methods”). To benchmark scalability, we profiled execution time and memory consumption of DUBStepR, as well as the other aforementioned feature selection methods, on a recent mouse organogenesis dataset of over 1 million cells20. 
This dataset was downsampled to produce two additional datasets of 10k and 100k cells, respectively, while maintaining cell-type diversity (Supp. Note 2B). DUBStepR, HVGDisp, HVGVST, trendVar, devianceFS, and M3DropDANB were able to process the entire 1 million cell dataset, while GiniClust and HLG could not scale to 100k cells (Supp. Fig. S4). On the largest dataset, DUBStepR ranked fourth out of the eight tested methods in memory consumption and compute time. In terms of memory scalability, DUBStepR used 6.4x more memory to process the 1M cell dataset as compared to the 100k dataset. In contrast, HVGDisp, HVGVST, trendVar, devianceFS, and M3DropDANB all increased their memory consumption by 12.5x. Thus, DUBStepR is scalable to over a million cells and shows promise for even larger datasets. ### Density index predicts the optimal feature set As shown above, selecting too few or too many feature genes can result in sub-optimal clustering (Fig. 4a). Ideally, we would want to select the feature set size that maximized cell type separation (i.e., the SI) in the feature space. However, since the feature selection algorithm by definition does not know the true cell type labels, it is not possible to calculate the SI for any given feature set size. We therefore endeavored to define a proxy metric that would approximately model the SI without requiring knowledge of cell-type labels. To this end, we defined a measure of the inhomogeneity or “clumpiness” of the distribution of cells in feature space. If each cell clump represented a distinct cell type, then this measure would tend to correlate with the SI. The measure, which we termed the density index (DI), equals the root-mean squared distance between all cell pairs, divided by the mean distance between a cell and its k nearest neighbors (“Methods”). 
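As a rough illustration of the definition above, the DI can be computed from a matrix of cell embeddings as follows. This is a NumPy sketch, not the package implementation; the choice of k and the use of PC embeddings follow the description in "Methods":

```python
import numpy as np

def density_index(M, k=10):
    """Density index: RMS distance between all cell pairs divided by the
    mean distance from a cell to its k nearest neighbours.

    M: (N, D) matrix of cell embeddings (e.g., top D principal components).
    """
    M = np.asarray(M, dtype=float)
    M = M - M.mean(axis=0)                          # zero-mean embeddings
    N = len(M)
    # RMS pairwise distance via the Frobenius-norm shortcut (see "Methods").
    d_rms = np.sqrt(2.0 / N) * np.linalg.norm(M)
    # Mean k-nearest-neighbour distance, excluding self-distances.
    d = np.linalg.norm(M[:, None] - M[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn_mean = np.sort(d, axis=1)[:, :k].mean()
    return d_rms / knn_mean
```

Tightly clumped cells have small nearest-neighbour distances relative to the typical pairwise distance, so a "clumpy" (well-clustered) configuration scores a much higher DI than a diffuse cloud of the same overall scale.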
Intuitively, when cells are well clustered and therefore inhomogeneously distributed in feature space, the distance to nearest neighbors should be minimal relative to the distance between random pairs of cells, and thus DI should be maximal (Fig. 5a, b). Empirically, we found that DI and SI were indeed positively correlated and tended to reach their maxima at approximately the same feature set size (Fig. 5c). Further, for 5 out of the 7 benchmarking datasets, the feature set with the highest DI also maximized SI (Fig. 5d). Since our earlier analysis only tested a discrete number of feature set sizes (Fig. 4a; Supp. Note 3A, B), the DI-guided approach even improved on the maximum SI in 2 cases (Fig. 5d). One additional advantage of the DI is that it is relatively straightforward to compute, since the numerator is proportional to the Frobenius norm of the gene expression matrix (“Methods”). By default, DUBStepR therefore selects the feature set size that maximizes DI.

### DUBStepR robustly detects rare cell types and cryptic cell states in rheumatoid arthritis samples

The above quantitative benchmarking analyses were largely based on detection of common cell types (>10% of all cells) in cell lines or FACS-purified cell populations from healthy donors. To demonstrate the ability of DUBStepR to cluster cells from a complex primary sample, we generated scRNA-seq data from 8312 PBMCs from four rheumatoid arthritis (RA) patients (“Methods”). In this case, since the “true” cell type labels were unknown, our objective was to qualitatively compare results from the various feature selection methods. We used SingleR21 to select the T and NK cell subset (5329 cells; “Methods”) since this cell population is challenging to sub-cluster by conventional methods, despite its relevance to inflammatory phenotypes. DUBStepR (with DI optimization) identified 10 discrete subtypes in this dataset, with sharply distinct gene expression signatures (Fig. 6a; Supp. Fig. S5).
These included four rare cell clusters that were undetected or only partially detected by the other feature selection methods: red blood cells (RBCs, 1.8%), proliferating cells (2%), platelet-T doublets (3.4%), and platelet-NK doublets (3%) (Fig. 6b; Supp. Figs. S5, S6). While RBCs reflect contamination during PBMC isolation, platelet-lymphocyte complexes and proliferating T cells regulated by KIAA0101 are thought to play a role in the pathophysiology of RA22,23,24 (Supp. Fig. S5). In addition to detecting multiple rare cell types, DUBStepR identified a dichotomy in CD4+ T, CD8+ T, and NK cells, defined by coordinated differential expression of SET, C1orf56, C16orf54, CDC42SE1, and HNRNPH1 (Supp. Fig. S5), all of which have been previously identified as markers of a latently infected T cell subtype in HIV25. Once again, DUBStepR was the only feature selection method to clearly distinguish these cell states (Supp. Figs. S5, S6). In summary, DUBStepR was the only feature selection algorithm that robustly detected common and rare cell types and subtypes in this complex primary lymphocyte population. ### DUBStepR generalizes to scATAC-seq data Feature selection is typically not performed on scATAC-seq data, since their almost binary nature (most genomic bins have zero or one count) renders them refractory to conventional single-cell feature selection techniques based on variance-mean comparison26. However, since the logic of feature correlations applies even to binary or almost binary data, we hypothesized that DUBStepR could also improve the quality of cell type inferences from this data type. To test this hypothesis, we applied DUBStepR to scATAC-seq data from eight FACS-purified subpopulations of human bone marrow cells27 (Supp. Note 2D). 
In contrast to the common approach of using all scATAC-seq peaks, we found that peaks selected by DUBStepR more clearly revealed the emergence of the three major lineages from hematopoietic stem cells: lymphoid, myeloid, and megakaryocyte/erythroid (Fig. 7). Specifically, trajectory analysis using Monocle 320 yielded a topology that matched the known hematopoietic differentiation hierarchy27 (Fig. 7g) only in the case of DUBStepR (Fig. 7d–f).

## Discussion

DUBStepR is based on the intuition that cell-type-specific marker genes tend to be well correlated with each other, i.e., they typically have strong positive and negative correlations with other marker genes. After filtering genes based on a correlation range score, DUBStepR exploits structure in the gene–gene correlation matrix to prioritize genes as features for clustering. To benchmark this feature selection strategy, we used a stringently defined collection of single-cell datasets for which cell type annotations could be independently ascertained28. Note that this avoids the circularity of defining the ground truth based on the output of one of the algorithms being tested. Results from our benchmarking analyses indicate that, regardless of feature set size, DUBStepR separates cell types more clearly than other methods (Fig. 4a, b). This observation is further corroborated by the fact that DUBStepR predicts cell-type-specific marker genes substantially more accurately than other methods (Fig. 4c). Thus, our results demonstrate that gene–gene correlations, which are ignored by conventional feature selection algorithms, provide a powerful basis for feature selection. The plummeting cost of sequencing, coupled with rapid progress in single-cell technologies, has made scalability an essential feature of novel single-cell algorithms. DUBStepR scales effectively to datasets of over a million cells without sharp increases in time or memory consumption (Supp. Fig. S4).
Thus, the method is likely to scale well beyond a million cells. A major contributor to the algorithm’s scalability is the fact that, once the gene–gene correlation matrix is constructed, the time and memory complexity of downstream steps is constant with respect to the number of cells. Intriguingly, DUBStepR approaches its maximum silhouette index value at 200–500 feature genes (Supp. Fig. S3), which is well below the default feature set size of 2000 used in most single-cell studies10,12. Thus, our results suggest that, if feature selection is optimized, it may not be necessary to select a larger number of feature genes. Note, however, that the optimum feature set size can vary across datasets (Supp. Fig. S3). Selecting a fixed number of feature genes for all datasets could therefore result in sub-optimal clustering (Fig. 5d). From the perspective of cell clustering, the optimal feature set size is that which maximizes cell type separation in feature space, which can be quantified using the SI. As an indirect correlate of cell type separation, we have defined a measure of the inhomogeneity or “clumpiness” of cells in feature space, which we term the density index (DI). To our knowledge, DI is the only metric for scoring feature gene sets based on the distribution of cells in feature space. Our results suggest that the DI correlates with the SI, and that cluster separation is improved in most cases when the feature set is chosen to maximize DI. Another important advantage of the DI is that it is computationally straightforward to calculate from the Frobenius norm of the data matrix. It is possible that the DI measure could also be applied to other stages of the clustering pipeline, including dimensionality reduction (selecting the optimal number of PCs) and evaluation of normalization strategies. 
Interestingly, although DUBStepR was not specifically designed to detect rare cell types, it nevertheless substantially outperformed all other methods at detecting multiple cell populations present at low frequency (<4%) in a complex primary PBMC sample (Fig. 6). Notably, the rare populations identified by DUBStepR included RBCs and platelet-containing doublets, which should not have been present in the T/NK population. It is likely that these cells were mis-classified as T/NK by SingleR due to the absence of platelet and RBC transcriptomes in the reference panel. In addition, DUBStepR greatly outperformed all other feature selection methods at detecting T and NK cell subpopulations over a range of higher frequencies (4.9−9.5%) in the same dataset. Given that this dataset posed the greatest challenge in terms of clustering difficulty, it is remarkable that DUBStepR provided a major, qualitative improvement over all other feature selection methods. Algorithmic pipelines for single-cell epigenomic data, for example scATAC-seq, typically do not incorporate a formal feature selection step26,29. In most cases, such pipelines merely discard genomic bins at the extremes of high and low sequence coverage. This is because the sparsity and near-binary nature of single-cell epigenomic data reduces the efficacy of conventional feature selection based on mean-variance analysis. Since DUBStepR uses an orthogonal strategy based on correlations between features, it is less vulnerable to the limitations of single-cell epigenomics data (Fig. 7). Thus, DUBStepR opens up the possibility of incorporating a feature selection step in single-cell epigenomic pipelines, including scATAC-seq, scChIP-seq, and single-cell methylome sequencing.

## Methods

### DUBStepR methodology

#### Gene filtering

By default, DUBStepR filters out genes that are not expressed in at least 5% of cells.
We allow the user to adjust this parameter if they are interested in genes that are more sparsely expressed in their dataset. In addition, mitochondrial genes, ribosomal genes, and pseudogenes are identified using gene symbols or Ensembl IDs for human, mouse, and rat datasets using the latest Ensembl references downloaded from BioMart30.

#### Correlation range

The correlation range $c_i$ of gene $i$ is defined from the gene–gene correlation matrix $G$ as

$$c_{i}={\max}_{3}(G_{i})-0.75\cdot \min (G_{i}), \qquad (1)$$

where $\max_h(G_i)$ refers to the $h$th-largest correlation value in column $i$ of $G$. Correlation range uses the second-largest non-self correlation value (3rd-largest value in the GGC column) to calculate the range, so as to protect against genes with overlapping 5′ or 3′ exons31. The minimum correlation value is down-weighted by a factor of 0.75 to give greater importance to stronger positive correlations over negative correlations. We first binned genes based on their mean expression level, as mean expression tends to correlate with technical noise32. In each bin, we compute a z-score of the correlation range of gene $i$ as

$$z_{i}=\frac{c_{i}-\mu _{c}}{\sigma _{c}}, \qquad (2)$$

where $\mu_c$ is the mean and $\sigma_c$ the standard deviation of the correlation range scores of the genes in that bin. Genes with a z-score ≤ 0.7 are filtered out at this step.

#### Stepwise regression

We define the stepwise regression equation as

$$G=\mathbf{g}{\mathbf{w}}^{T}+\epsilon , \qquad (3)$$

where $G$ is the column-wise zero-centered gene–gene correlation matrix, $\mathbf{g}$ is the column of the matrix $G$ to be regressed out, $\epsilon$ is the matrix of residuals and $\mathbf{w}$ is a vector of regression coefficients.
The squared error ($\epsilon^{T}\epsilon$) is minimized when

$${\mathbf{w}}^{T}=\frac{{\mathbf{g}}^{T}G}{{\mathbf{g}}^{T}\mathbf{g}}. \qquad (4)$$

Thus,

$$G=\frac{\mathbf{g}{\mathbf{g}}^{T}G}{{\mathbf{g}}^{T}\mathbf{g}}+\epsilon . \qquad (5)$$

We calculate the variance explained by the regression step as $V=\parallel G-\epsilon {\parallel }_{F}^{2}$, where $F$ indicates the Frobenius norm. To efficiently compute $V$ for all genes, we define $X=G^{T}G$ and $\mathbf{x}$ as the row of $X$ corresponding to gene $\mathbf{g}$. Thus, $\mathbf{x}={\mathbf{g}}^{T}G$. We can simplify $V$ as

$$V=\parallel G-\epsilon {\parallel }_{F}^{2}={\left\Vert \frac{\mathbf{g}{\mathbf{g}}^{T}G}{{\mathbf{g}}^{T}\mathbf{g}}\right\Vert }_{F}^{2}=\frac{\parallel \mathbf{g}\mathbf{x}{\parallel }_{F}^{2}}{{({\mathbf{g}}^{T}\mathbf{g})}^{2}}=\frac{{\mathrm{Tr}}({(\mathbf{g}\mathbf{x})}^{T}(\mathbf{g}\mathbf{x}))}{{({\mathbf{g}}^{T}\mathbf{g})}^{2}}=\frac{{\mathrm{Tr}}({\mathbf{x}}^{T}\mathbf{x})}{{\mathbf{g}}^{T}\mathbf{g}}=\frac{\mathbf{x}{\mathbf{x}}^{T}}{{\mathbf{g}}^{T}\mathbf{g}}. \qquad (6)$$

Thus, we can use a single matrix multiplication $G^{T}G$ to efficiently calculate the variance explained by each gene in the gene–gene correlation matrix, and then regress out the gene explaining the greatest variance. The residual from each step $k$ is then used as the gene–gene correlation matrix for the next step. In other words,

$$G_{k}={\mathbf{g}}_{k}{\mathbf{w}}_{k}^{T}+{\epsilon }_{k},\qquad G_{k+1}={\epsilon }_{k}. \qquad (7)$$

For computational efficiency, we repeat this regression step 30 times and then assume that each of the next 70 steps explains the same amount of variance as the 30th step, giving a total of 100 steps.
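The stepwise regression procedure can be illustrated with a short NumPy sketch. This is not the DUBStepR implementation; it only demonstrates how a single matrix product $X = G^{T}G$ yields the variance explained by every gene at once, after which the top gene is regressed out of the residual (the synthetic two-block test data in the usage example are an assumption):

```python
import numpy as np

def stepwise_seed_genes(G, n_steps):
    """Greedy stepwise regression on a gene-gene correlation matrix G.

    At each step, the variance explained by gene g is
        V_g = ||x_g||^2 / (g^T g),  with x_g the row of X = G^T G for g,
    so one matrix multiplication scores all genes. The best gene is then
    regressed out and the residual replaces G.
    """
    G = np.asarray(G, dtype=float)
    G = G - G.mean(axis=0)                      # column-wise zero-centering
    seeds = []
    for _ in range(n_steps):
        X = G.T @ G                             # single matmul scores all genes
        V = (X ** 2).sum(axis=1) / np.maximum(np.diag(X), 1e-12)
        g_idx = int(np.argmax(V))
        g = G[:, g_idx]
        # Regress g out of every column: residual = G - g g^T G / (g^T g).
        G = G - np.outer(g, g @ G) / max(g @ g, 1e-12)
        seeds.append(g_idx)
    return seeds
```

On data with several correlated gene blocks, consecutive seed genes tend to come from different blocks, because regressing out one block's signature leaves the other blocks dominating the residual.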
We observed that this shorter procedure had little or no impact on the results, since the variance explained changed only marginally beyond the 30th step. To select the genes contributing to the major directions in $G$, we use the elbow point on a scree plot. Elbow point computation is described in Supp. Note 1B. The genes that are regressed out up to the elbow point form the “seed” gene set.

#### Guilt-by-association

Guilt-by-association, also known as label propagation through a network, allows DUBStepR to determine a robust ordering of features in an iterative manner. Once the seed genes have been determined, the gene with the strongest Pearson correlation to any of the seed genes is first identified. This gene is then added to the seed genes, thereby expanding the feature set. This feature set (now consisting of the seed genes and the newly added feature gene) is then used to identify the next most strongly correlated gene, which is again added to the feature set. By iteratively repeating the latter step, DUBStepR propagates through the gene–gene correlation network until the feature set has reached its final size. We have developed a custom implementation of this guilt-by-association approach as part of the DUBStepR package in R, the source code for which is available on our GitHub repository (see “Code availability” section).

#### Density index

For a given feature set, PCA5 is performed on the gene expression matrix and the top $D$ principal components (PCs) are selected, where $D$ is a user-specified parameter with a default value of 20. Let $M$ be the matrix of embeddings of the gene expression vectors of $N$ cells in $D$ principal components. The root-mean-squared distance $d_{rms}$ between pairs of cells $i$ and $j$ can be calculated as

$${d}_{rms}=\sqrt{\langle {d}_{i,j}^{2}\rangle }=\sqrt{\left\langle {\Sigma }_{p=1}^{D}{({M}_{i,p}-{M}_{j,p})}^{2}\right\rangle }, \qquad (8)$$

where $\langle \,\rangle$ denotes the average over all pairs $\{(i,j):i\in [1,N],\ j\in [1,N]\}$.
Note that, for simplicity of the final result, we include pairs in which $i=j$. This can be further simplified as follows:

$${d}_{rms}=\sqrt{\left\langle {\Sigma }_{p=1}^{D}({M}_{i,p}^{2}+{M}_{j,p}^{2}-2{M}_{i,p}{M}_{j,p})\right\rangle }=\sqrt{{\Sigma }_{p=1}^{D}(\langle {M}_{i,p}^{2}\rangle +\langle {M}_{j,p}^{2}\rangle -2\langle {M}_{i,p}{M}_{j,p}\rangle )}=\sqrt{{\Sigma }_{p=1}^{D}(\langle {M}_{i,p}^{2}\rangle +\langle {M}_{j,p}^{2}\rangle )}=\sqrt{2{\Sigma }_{p=1}^{D}\langle {M}_{i,p}^{2}\rangle }=\sqrt{\frac{2}{N}}\parallel M{\parallel }_{F}. \qquad (9)$$

In the above derivation, the mean product term $\langle {M}_{i,p}{M}_{j,p}\rangle$ is zero because $M_{i,p}$ and $M_{j,p}$ have zero mean across $i$ and $j$, respectively. Let $k_i$ denote the average distance of cell $i$ from its $k$ nearest neighbors, and $k_m$ denote the mean of $k_i$ across all cells. We define the DI as

$${\mathrm{DI}}=\frac{{d}_{rms}}{{k}_{m}}=\sqrt{\frac{2}{N}}\frac{\parallel M{\parallel }_{F}}{{k}_{m}}. \qquad (10)$$

### Rheumatoid arthritis dataset

#### Patient sample collection

Fresh blood samples of patients diagnosed with rheumatoid arthritis were collected at Tan Tock Seng Hospital, Department of Rheumatology, Allergy & Immunology, Singapore, and were transferred to the Genome Institute of Singapore for further processing. All technical procedures and protocols for recruitment, blood collection, and PBMC isolation were reviewed and approved by the Institutional Review Board (IRB) at the National Healthcare Group Domain Specific Review Board (NHG DSRB), Singapore (Reg. no. 2016/00899).

#### Single-cell RNA sequencing

For each sample, fresh PBMCs were isolated from 5 ml Sodium Heparin tubes using standard Ficoll-Hypaque density gradient centrifugation33. Briefly, blood samples were diluted with an equal volume of Phosphate-buffered saline (PBS) containing 2% fetal bovine serum (FBS) (STEMCELL, catalog #07905) and were gently added on top of LymphoprepTM density gradient medium (STEMCELL, catalog #07801) in SepMateTM (STEMCELL, catalog #15420) tubes.
The tubes were centrifuged at 1200 × g for 15 min, and the upper layer containing the enriched PBMCs and plasma was collected into a new falcon tube, washed with PBS + 2% FBS, and centrifuged at 300 × g for 8 min. After counting the cells, isolated PBMCs were divided into two or three vials and frozen down in FBS containing 10% dimethylsulfoxide (DMSO) for downstream experiments. Immediately after isolation, fresh PBMCs were washed, filtered (using a cell strainer), and re-suspended in PBS containing 0.04% bovine serum albumin. Single-cell suspensions from four samples were mixed at a final concentration of 10⁶ cells/ml and the mixed suspension was loaded into a 10x Chromium Instrument to target a total of 3000 cells from each patient (n = 4). GEM-RT reactions, cDNA synthesis, and library preparations were performed using the Single Cell 3′ v2 10x ReagentTM Kit according to manufacturer protocols. The single-cell libraries were run on the Illumina HiSeq® 4000 platform as prescribed by 10x Genomics. The raw base call (BCL) files from the sequencer were processed through the 10x Genomics Cell Ranger 2.1.1 analysis pipelines. First, the mkfastq pipeline was run to generate FASTQ files, followed by read alignment to the hg19 genome reference using the count pipeline. The raw counts matrix generated by the count pipeline was loaded into R for further analysis.

#### Demultiplexing pooled samples

To facilitate demultiplexing, the patients were genotyped using the Illumina Infinium® HTS Assay following the manufacturer's protocol. Reads from the sequencer were attributed to each of the four patients using Demuxlet34 at default settings. For each library, the corresponding genotyping data (VCF file) and scRNA-seq data (BAM file) were imported into Demuxlet in order to infer the sample source for each cell barcode.
#### Data preprocessing The raw data files were used to generate a Seurat object, including only features that were detected in a minimum of 3 cells and cells having at least 200 uniquely detected features. Additional filtering was performed to remove cells having >10% mitochondrial rate (calculated using the PercentageFeatureSet function in Seurat35). Further, only cells with unique feature counts between 200 and 2500 were retained, resulting in 8350 cells. The data was then log-normalized using the NormalizeData function in Seurat35 at default settings. #### Isolating T & NK cells The normalized data were then annotated using SingleR21, with the Monaco et al. immune dataset36 as the reference. The SingleR function was run at default settings, using the log-normalized counts as input. Thirty-eight cells were pruned by SingleR, leaving a total of 8312 annotated cells. Of these, cells annotated as CD4+ T cells, CD8+ T cells, or NK cells were isolated (Fig. 8; Table 2). #### DUBStepR analysis Due to the lower expression of genes in the T and NK cell population, we modified DUBStepR’s gene filtering threshold to filter out genes expressed in less than 1% of the cells. The Seurat package was used for downstream processing. First, the feature genes were zero-centered and scaled, and PCA was performed. The top 12 PCs were selected using the elbow plot of variance explained by the PCs. Clustering was performed using the Louvain graph-based approach—using the FindNeighbors function with 12 PCs and FindClusters at default parameter settings. UMAP coordinates were computed using the cell embeddings in 12 PCs. #### Other feature selection methods All other feature selection methods (HVGDisp, HVGVST, trendVar, devianceFS, M3DropDANB, GiniClust, and HLG) were run at default settings. To showcase the result of HVGVST, we used the Seurat package to cluster cells, as described above. Similar to the DUBStepR result, 12 PCs were used for both clustering and UMAP visualization. 
### Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Precalculus (6th Edition) Blitzer

The angle between two vectors is $\theta ={{\cos }^{-1}}\left( \frac{\mathbf{v}\cdot \mathbf{w}}{\left\| \mathbf{v} \right\|\text{ }\left\| \mathbf{w} \right\|} \right)$. Take $\mathbf{v}=2\mathbf{i}+\mathbf{j}+3\mathbf{k}$ and $\mathbf{w}=\mathbf{i}+2\mathbf{j}+\mathbf{k}$. First, find the dot product: \begin{align} & \mathbf{v}\cdot \mathbf{w}=\left( 2 \right)\left( 1 \right)+\left( 1 \right)\left( 2 \right)+\left( 3 \right)\left( 1 \right) \\ & =2+2+3 \\ & =7 \end{align} Therefore, $\mathbf{v}\cdot \mathbf{w}=7$. Now, find the magnitude of vector v as follows: \begin{align} & \left\| \mathbf{v} \right\|=\sqrt{{{\left( 2 \right)}^{2}}+{{\left( 1 \right)}^{2}}+{{\left( 3 \right)}^{2}}} \\ & =\sqrt{4+1+9} \\ & =\sqrt{14} \end{align} Therefore, $\left\| \mathbf{v} \right\|=\sqrt{14}$. Similarly, find the magnitude of the vector $\mathbf{w}$ as follows: \begin{align} & \left\| \mathbf{w} \right\|=\sqrt{{{\left( 1 \right)}^{2}}+{{\left( 2 \right)}^{2}}+{{\left( 1 \right)}^{2}}} \\ & =\sqrt{1+4+1} \\ & =\sqrt{6} \end{align} Therefore, $\left\| \mathbf{w} \right\|=\sqrt{6}$. Next, find the cosine of the angle between their directions as follows: \begin{align} & \cos \theta =\frac{\mathbf{v}\cdot \mathbf{w}}{\left\| \mathbf{v} \right\|\text{ }\left\| \mathbf{w} \right\|} \\ & =\frac{7}{\sqrt{14}\sqrt{6}} \\ & =\frac{7}{9.165} \\ & \approx 0.764 \end{align} So, $\cos \theta \approx 0.764$. Therefore, \begin{align} & \theta ={{\cos }^{-1}}\left( 0.764 \right) \\ & \approx {{40.2}^{\circ }} \end{align} Hence, the angle between the two vectors is approximately ${{40.2}^{\circ }}$.
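As a numeric check of the angle formula $\theta = \cos^{-1}\big(\mathbf{v}\cdot\mathbf{w} / (\|\mathbf{v}\|\,\|\mathbf{w}\|)\big)$ with $\mathbf{v}=2\mathbf{i}+\mathbf{j}+3\mathbf{k}$ and $\mathbf{w}=\mathbf{i}+2\mathbf{j}+\mathbf{k}$, a short NumPy computation (added for illustration):

```python
import numpy as np

v = np.array([2.0, 1.0, 3.0])
w = np.array([1.0, 2.0, 1.0])

# cos(theta) = v.w / (||v|| ||w||) = 7 / (sqrt(14) * sqrt(6))
cos_theta = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
theta_deg = np.degrees(np.arccos(cos_theta))
```

The dot product is 7, not 9.165 (which is the product of the magnitudes, $\sqrt{14}\sqrt{6}=\sqrt{84}$), so the vectors are not parallel and the angle is about 40.2°.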
# In ∆ABC, Point M Is the Midpoint of Side BC. If AB² + AC² = 290 cm², AM = 8 cm, Find BC - Geometry

Concept: Apollonius Theorem

#### Question

In ∆ABC, point M is the midpoint of side BC. If AB² + AC² = 290 cm² and AM = 8 cm, find BC.

#### Solution

In ∆ABC, point M is the midpoint of side BC. $BM = MC = \frac{1}{2}BC$ ${AB}^2 + {AC}^2 = 2 {AM}^2 + 2 {BM}^2 \left( \text{by Apollonius theorem} \right)$ $\Rightarrow 290 = 2 \left( 8 \right)^2 + 2 {BM}^2$ $\Rightarrow 290 = 2\left( 64 \right) + 2 {BM}^2$ $\Rightarrow 290 = 128 + 2 {BM}^2$ $\Rightarrow 2 {BM}^2 = 290 - 128$ $\Rightarrow 2 {BM}^2 = 162$ $\Rightarrow {BM}^2 = 81$ $\Rightarrow BM = 9$ $\therefore BC = 2 \times BM = 2 \times 9 = 18 \text{ cm}$ Hence, BC = 18 cm.
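The Apollonius-theorem computation above can be verified mechanically (a small check added for illustration, using the given values):

```python
import math

# Apollonius theorem with M the midpoint of BC:
#   AB^2 + AC^2 = 2*AM^2 + 2*BM^2
ab2_plus_ac2 = 290.0   # cm^2, given
am = 8.0               # cm, given

bm = math.sqrt((ab2_plus_ac2 - 2 * am ** 2) / 2)  # solve for BM
bc = 2 * bm                                       # M is the midpoint of BC
```

Solving for BM gives 9 cm, hence BC = 18 cm, matching the worked solution.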
# Rough path

In stochastic analysis, a rough path is a generalization of the notion of smooth path allowing one to construct a robust solution theory for controlled differential equations driven by classically irregular signals, for example a Wiener process. The theory was developed in the 1990s by Terry Lyons.[1][2][3] Several accounts of the theory are available.[4][5][6][7] Rough path theory is focused on capturing and making precise the interactions between highly oscillatory and non-linear systems. It builds upon the harmonic analysis of L.C. Young, the geometric algebra of K.T. Chen, the Lipschitz function theory of H. Whitney, and core ideas of stochastic analysis. The concepts and the uniform estimates have widespread application in pure and applied mathematics and beyond. It provides a toolbox to recover with relative ease many classical results in stochastic analysis (Wong-Zakai, the Stroock-Varadhan support theorem, construction of stochastic flows, etc.) without using specific probabilistic properties such as the martingale property or predictability. The theory also extends Itô's theory of SDEs far beyond the semimartingale setting. At the heart of the mathematics is the challenge of describing a smooth but potentially highly oscillatory and multidimensional path ${\displaystyle x_{t}}$ effectively so as to accurately predict its effect on a nonlinear dynamical system ${\displaystyle \mathrm {d} y_{t}=f(y_{t})\,\mathrm {d} x_{t},y_{0}=a}$. The signature is a homomorphism from the monoid of paths (under concatenation) into the grouplike elements of the free tensor algebra. It provides a graduated summary of the path ${\displaystyle x}$. This noncommutative transform is faithful for paths up to appropriate null modifications. These graduated summaries or features of a path are at the heart of the definition of a rough path; locally they remove the need to look at the fine structure of the path.
Taylor's theorem explains how any smooth function can, locally, be expressed as a linear combination of certain special functions (monomials based at that point). Coordinate iterated integrals (terms of the signature) form a more subtle algebra of features that can describe a stream or path in an analogous way; they allow a definition of rough path and form a natural linear "basis" for continuous functions on paths. Martin Hairer used rough paths to construct a robust solution theory for the KPZ equation.[8] He then proposed a generalization known as the theory of regularity structures,[9] for which he was awarded a Fields Medal in 2014.

## Motivation

Rough path theory aims to make sense of the controlled differential equation ${\displaystyle \mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j},}$ where the control, the continuous path ${\displaystyle X_{t}}$ taking values in a Banach space, need not be differentiable nor of bounded variation. A prevalent example of the controlled path ${\displaystyle X_{t}}$ is the sample path of a Wiener process. In this case, the aforementioned controlled differential equation can be interpreted as a stochastic differential equation, and integration against "${\displaystyle \mathrm {d} X_{t}^{j}}$" can be defined in the sense of Itô. However, Itô's calculus is defined in the sense of ${\displaystyle L^{2}}$ and is in particular not a pathwise definition. Rough path theory gives an almost sure pathwise definition of stochastic differential equations.
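When the driver is smooth (or of bounded variation), the controlled equation above can be solved numerically by a classical first-order Euler scheme; this is the setting that rough path theory extends to irregular drivers. The sketch below is an illustration of that classical case only (scalar equation, sampled driver; the function names are my own):

```python
import numpy as np

def euler_controlled(f, y0, X):
    """First-order Euler scheme for dY_t = f(Y_t) dX_t along a sampled
    driver X. Valid when X is smooth / of bounded variation; for rougher
    drivers, higher-order (rough-path) corrections are required.

    f: callable y -> float; X: 1-D array of driver samples.
    """
    y = np.empty_like(np.asarray(X, dtype=float))
    y[0] = y0
    for k in range(len(X) - 1):
        y[k + 1] = y[k] + f(y[k]) * (X[k + 1] - X[k])
    return y
```

With $X_t = t$ and $f(y) = y$ the scheme approximates $Y_t = e^{t}$, recovering ordinary ODE theory as the special case of a smooth control.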
The rough path notion of solution is well-posed in the sense that if ${\displaystyle X(n)_{t}}$ is a sequence of smooth paths converging to ${\displaystyle X_{t}}$ in the ${\displaystyle p}$-variation metric (described below), and ${\displaystyle \mathrm {d} Y(n)_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X(n)_{t}^{j};}$ ${\displaystyle \mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j},}$ then ${\displaystyle Y(n)}$ converges to ${\displaystyle Y}$ in the ${\displaystyle p}$-variation metric. This continuity property and the deterministic nature of solutions make it possible to simplify and strengthen many results in stochastic analysis, such as Freidlin-Wentzell large deviation theory [10] as well as results about stochastic flows. In fact, rough path theory can go far beyond the scope of Itô and Stratonovich calculus and allows one to make sense of differential equations driven by non-semimartingale paths, such as Gaussian processes and Markov processes.[11]

## Definition of a rough path

Rough paths are paths taking values in the truncated free tensor algebra (more precisely: in the free nilpotent group embedded in the free tensor algebra), which this section now briefly recalls. The tensor powers of ${\displaystyle \mathbb {R} ^{d}}$, denoted ${\displaystyle {\big (}\mathbb {R} ^{d}{\big )}^{\otimes n}}$, are equipped with the projective norm ${\displaystyle \Vert \cdot \Vert }$ (see Topological tensor product; note that rough path theory in fact works for a more general class of norms). Let ${\displaystyle T^{(n)}(\mathbb {R} ^{d})}$ be the truncated tensor algebra ${\displaystyle T^{(n)}(\mathbb {R} ^{d})=\bigoplus _{i=0}^{n}{\big (}\mathbb {R} ^{d}{\big )}^{\otimes i},}$ where by convention ${\displaystyle (\mathbb {R} ^{d})^{\otimes 0}\cong \mathbb {R} }$. Let ${\displaystyle \triangle _{0,1}}$ be the simplex ${\displaystyle \{(s,t):0\leq s\leq t\leq 1\}}$. Let ${\displaystyle p\geq 1}$.
Let ${\displaystyle \mathbf {X} }$ and ${\displaystyle \mathbf {Y} }$ be continuous maps ${\displaystyle \triangle _{0,1}\to T^{(\lfloor p\rfloor )}(\mathbb {R} ^{d})}$. Let ${\displaystyle \mathbf {X} ^{j}}$ denote the projection of ${\displaystyle \mathbf {X} }$ onto ${\displaystyle j}$-tensors and likewise for ${\displaystyle \mathbf {Y} ^{j}}$. The ${\displaystyle p}$-variation metric is defined as

${\displaystyle d_{p}\left(\mathbf {X} ,\mathbf {Y} \right):=\max _{j=1,\ldots ,\lfloor p\rfloor }\sup _{0=t_{0}<t_{1}<\cdots <t_{n}=1}\left(\sum _{i}\left\Vert \mathbf {X} _{t_{i},t_{i+1}}^{j}-\mathbf {Y} _{t_{i},t_{i+1}}^{j}\right\Vert ^{\frac {p}{j}}\right)^{\frac {j}{p}},}$

where the supremum is taken over all finite partitions ${\displaystyle \{0=t_{0}<t_{1}<\cdots <t_{n}=1\}}$ of ${\displaystyle [0,1]}$.

A continuous function ${\displaystyle \mathbf {X} :\triangle _{0,1}\rightarrow T^{(\lfloor p\rfloor )}(\mathbb {R} ^{d})}$ is a ${\displaystyle p}$-geometric rough path if there exists a sequence of paths with finite total variation ${\displaystyle X(1),X(2),\ldots }$ such that

${\displaystyle \mathbf {X} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} X(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\cdots <s_{\lfloor p\rfloor }<t}\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{\lfloor p\rfloor }}\right)}$

converges in the ${\displaystyle p}$-variation metric to ${\displaystyle \mathbf {X} }$ as ${\displaystyle n\rightarrow \infty }$.[12]

## Universal limit theorem

A central result in rough path theory is Lyons' Universal Limit theorem.[13] One (weak) version of the result is the following: Let ${\displaystyle X(n)}$ be a sequence of paths with finite total variation and let

${\displaystyle \mathbf {X} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} X(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\cdots <s_{\lfloor p\rfloor }<t}\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{\lfloor p\rfloor }}\right)}$

denote the rough path lift of ${\displaystyle X(n)}$. Suppose that ${\displaystyle \mathbf {X} (n)}$ converges in the ${\displaystyle p}$-variation metric to a ${\displaystyle p}$-geometric rough path ${\displaystyle \mathbf {X} }$ as ${\displaystyle n\to \infty }$. Let ${\displaystyle (V_{j}^{i})_{j=1,\ldots ,d}^{i=1,\ldots ,n}}$ be functions that have at least ${\displaystyle \lfloor p\rfloor }$ bounded derivatives and whose ${\displaystyle \lfloor p\rfloor }$-th derivatives are ${\displaystyle \alpha }$-Hölder continuous for some ${\displaystyle \alpha >p-\lfloor p\rfloor }$.
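The level-1 part of this quantity can be computed by brute force for a path sampled at finitely many points, since the supremum then runs over subsets of the sample times. A small illustrative sketch (our own example data; the search is exponential in the number of points, so this is for toy inputs only):

```python
from itertools import combinations

def p_variation(xs, p):
    """Sup over all partitions of sum |x_{t_{i+1}} - x_{t_i}|^p for a sampled real path.

    Brute-forces every partition of the sample points. Returns the supremum of the
    power sum; the p-variation norm is its (1/p)-th power.
    """
    n = len(xs) - 1
    best = 0.0
    interior = range(1, n)
    for k in range(n):
        for choice in combinations(interior, k):
            pts = [0, *choice, n]
            s = sum(abs(xs[b] - xs[a]) ** p for a, b in zip(pts, pts[1:]))
            best = max(best, s)
    return best

xs = [0.0, 1.0, 0.5, 1.5, 1.0]
print(p_variation(xs, 1))  # for p = 1 the finest partition attains the sup: 3.0
print(p_variation(xs, 2))
```

For p = 1 the finest partition always attains the supremum (triangle inequality), while for p > 1 the supremum need not be attained by the finest partition.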
Let ${\displaystyle Y(n)}$ be the solution to the differential equation ${\displaystyle \mathrm {d} Y(n)_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y(n)_{t})\,\mathrm {d} X(n)_{t}^{j}}$ and let ${\displaystyle \mathbf {Y} (n)}$ be defined as

${\displaystyle \mathbf {Y} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} Y(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\cdots <s_{\lfloor p\rfloor }<t}\mathrm {d} Y(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} Y(n)_{s_{\lfloor p\rfloor }}\right).}$

Then ${\displaystyle \mathbf {Y} (n)}$ converges in the ${\displaystyle p}$-variation metric to a ${\displaystyle p}$-geometric rough path ${\displaystyle \mathbf {Y} }$. Moreover, ${\displaystyle \mathbf {Y} }$ is the solution to the differential equation

${\displaystyle \mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j}\qquad (\star )}$

driven by the geometric rough path ${\displaystyle \mathbf {X} }$.

Concisely, the theorem can be interpreted as saying that the solution map (aka the Itô–Lyons map) ${\displaystyle \Phi :G\Omega _{p}(\mathbb {R} ^{d})\to G\Omega _{p}(\mathbb {R} ^{e})}$ of the RDE ${\displaystyle (\star )}$ is continuous (and in fact locally Lipschitz) in the ${\displaystyle p}$-variation topology. Hence rough path theory demonstrates that by viewing driving signals as rough paths, one has a robust solution theory for classical stochastic differential equations and beyond.

## Examples of rough paths

### Brownian motion

Let ${\displaystyle (B_{t})_{t\geq 0}}$ be a multidimensional standard Brownian motion. Let ${\displaystyle \circ }$ denote Stratonovich integration. Then

${\displaystyle \mathbf {B} _{s,t}=\left(1,\int _{s}^{t}\circ \,\mathrm {d} B_{s_{1}},\int _{s<s_{1}<s_{2}<t}\circ \,\mathrm {d} B_{s_{1}}\otimes \circ \,\mathrm {d} B_{s_{2}}\right)}$

is a ${\displaystyle p}$-geometric rough path for any ${\displaystyle 2<p<3}$. This geometric rough path is called the Stratonovich Brownian rough path.

### Fractional Brownian motion

More generally, let ${\displaystyle B_{H}(t)}$ be a multidimensional fractional Brownian motion (a process whose coordinate components are independent fractional Brownian motions) with ${\displaystyle H>{\frac {1}{4}}}$.
If ${\displaystyle B_{H}^{m}(t)}$ is the ${\displaystyle m}$-th dyadic piecewise linear interpolation of ${\displaystyle B_{H}(t)}$, then

${\displaystyle \mathbf {B} _{H}^{m}(s,t)=\left(1,\int _{s<s_{1}<t}\mathrm {d} B_{H}^{m}(s_{1}),\int _{s<s_{1}<s_{2}<t}\mathrm {d} B_{H}^{m}(s_{1})\otimes \mathrm {d} B_{H}^{m}(s_{2}),\int _{s<s_{1}<s_{2}<s_{3}<t}\mathrm {d} B_{H}^{m}(s_{1})\otimes \mathrm {d} B_{H}^{m}(s_{2})\otimes \mathrm {d} B_{H}^{m}(s_{3})\right)}$

converges almost surely in the ${\displaystyle p}$-variation metric to a ${\displaystyle p}$-geometric rough path for ${\displaystyle {\frac {1}{H}}<p}$.[14] This limiting geometric rough path can be used to make sense of differential equations driven by fractional Brownian motion with Hurst parameter ${\displaystyle H>{\frac {1}{4}}}$. When ${\displaystyle 0<H\leq {\frac {1}{4}}}$, it turns out that the above limit along dyadic approximations does not converge in ${\displaystyle p}$-variation. However, one can of course still make sense of differential equations provided one exhibits a rough path lift; the existence of such a (non-unique) lift is a consequence of the Lyons–Victoir extension theorem.

### Non-uniqueness of enhancement

In general, let ${\displaystyle (X_{t})_{t\geq 0}}$ be a ${\displaystyle \mathbb {R} ^{d}}$-valued stochastic process. If one can construct, almost surely, functions ${\displaystyle (s,t)\rightarrow \mathbf {X} _{s,t}^{j}\in {\big (}\mathbb {R} ^{d}{\big )}^{\otimes j}}$ so that ${\displaystyle \mathbf {X} :(s,t)\rightarrow (1,X_{t}-X_{s},\mathbf {X} _{s,t}^{2},\ldots ,\mathbf {X} _{s,t}^{\lfloor p\rfloor })}$ is a ${\displaystyle p}$-geometric rough path, then ${\displaystyle \mathbf {X} _{s,t}}$ is an enhancement of the process ${\displaystyle X}$.
Once an enhancement has been chosen, the machinery of rough path theory allows one to make sense of the controlled differential equation ${\displaystyle \mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j}}$ for sufficiently regular vector fields ${\displaystyle V_{j}^{i}.}$

Note that every stochastic process (even if it is a deterministic path) can have more than one (in fact, uncountably many) possible enhancements.[15] Different enhancements will give rise to different solutions to the controlled differential equations. In particular, it is possible to enhance Brownian motion to a geometric rough path in a way other than the Brownian rough path.[16] This implies that the Stratonovich calculus is not the only theory of stochastic calculus that satisfies the classical product rule

${\displaystyle \mathrm {d} (X_{t}\cdot Y_{t})=X_{t}\,\mathrm {d} Y_{t}+Y_{t}\,\mathrm {d} X_{t}.}$

In fact, any enhancement of Brownian motion as a geometric rough path will give rise to a calculus that satisfies this classical product rule. Itô calculus does not come directly from enhancing Brownian motion as a geometric rough path, but rather as a branched rough path.

## Applications in stochastic analysis

### Stochastic differential equations driven by non-semimartingales

Rough path theory makes it possible to give a pathwise notion of solution to (stochastic) differential equations of the form ${\displaystyle \mathrm {d} Y_{t}=b(Y_{t})\,\mathrm {d} t+\sigma (Y_{t})\,\mathrm {d} X_{t}}$ provided that the multidimensional stochastic process ${\displaystyle X_{t}}$ can be almost surely enhanced as a rough path and that the drift ${\displaystyle b}$ and the volatility ${\displaystyle \sigma }$ are sufficiently smooth (see the section on the Universal Limit Theorem).
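To see concretely how Itô calculus fails this product rule while Stratonovich calculus satisfies it, one can run the standard one-dimensional computation (added here for illustration) with $X_t = Y_t = B_t$ a scalar Brownian motion:

```latex
% Itô's formula applied to B_t^2 produces a quadratic-variation correction:
\mathrm{d}(B_t \cdot B_t) = 2B_t\,\mathrm{d}B_t + \mathrm{d}t
\qquad \text{(Itô: extra } \mathrm{d}t \text{ term)}
% whereas the Stratonovich integral obeys the classical Leibniz rule:
\mathrm{d}(B_t \cdot B_t) = 2B_t \circ \mathrm{d}B_t
\qquad \text{(Stratonovich: no correction)}
```

The $\mathrm{d}t$ correction is exactly the quadratic variation of Brownian motion, which the geometric (Stratonovich-type) enhancements absorb into the second-level tensor.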
There are many examples of Markov processes, Gaussian processes, and other processes that can be enhanced as rough paths.[17] There are, in particular, many results on the solutions to differential equations driven by fractional Brownian motion that have been proved using a combination of Malliavin calculus and rough path theory. In fact, it has been proved recently that the solution to a controlled differential equation driven by a class of Gaussian processes, which includes fractional Brownian motion with Hurst parameter ${\displaystyle H>{\frac {1}{4}}}$, has a smooth density under Hörmander's condition on the vector fields.[18][19]

### Freidlin–Wentzell's large deviation theory

Let ${\displaystyle L(V,W)}$ denote the space of bounded linear maps from a Banach space ${\displaystyle V}$ to another Banach space ${\displaystyle W}$. Let ${\displaystyle B_{t}}$ be a ${\displaystyle d}$-dimensional standard Brownian motion. Let ${\displaystyle b:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{n}}$ and ${\displaystyle \sigma :\mathbb {R} ^{n}\rightarrow L(\mathbb {R} ^{d},\mathbb {R} ^{n})}$ be twice-differentiable functions whose second derivatives are ${\displaystyle \alpha }$-Hölder for some ${\displaystyle \alpha >0}$. Let ${\displaystyle X^{\varepsilon }}$ be the unique solution to the stochastic differential equation

${\displaystyle \mathrm {d} X^{\varepsilon }=b(X_{t}^{\varepsilon })\,\mathrm {d} t+{\sqrt {\varepsilon }}\sigma (X^{\varepsilon })\circ \mathrm {d} B_{t};\quad X_{0}^{\varepsilon }=a,}$

where ${\displaystyle \circ }$ denotes Stratonovich integration. Freidlin–Wentzell's large deviation theory aims to study the asymptotic behavior, as ${\displaystyle \varepsilon \rightarrow 0}$, of ${\displaystyle \mathbb {P} [X^{\varepsilon }\in F]}$ for closed or open sets ${\displaystyle F}$ with respect to the uniform topology.
The Universal Limit Theorem guarantees that the Itô map sending the control path ${\displaystyle (t,{\sqrt {\varepsilon }}B_{t})}$ to the solution ${\displaystyle X^{\varepsilon }}$ is a continuous map from the ${\displaystyle p}$-variation topology to the ${\displaystyle p}$-variation topology (and hence the uniform topology). Therefore, the contraction principle in large deviations theory reduces Freidlin–Wentzell's problem to demonstrating the large deviation principle for ${\displaystyle (t,{\sqrt {\varepsilon }}B_{t})}$ in the ${\displaystyle p}$-variation topology.[20] This strategy can be applied not just to differential equations driven by Brownian motion but also to differential equations driven by any stochastic process that can be enhanced as a rough path, such as fractional Brownian motion.

### Stochastic flow

Once again, let ${\displaystyle B_{t}}$ be a ${\displaystyle d}$-dimensional Brownian motion. Assume that the drift term ${\displaystyle b}$ and the volatility term ${\displaystyle \sigma }$ have sufficient regularity so that the stochastic differential equation

${\displaystyle \mathrm {d} \phi _{s,t}(x)=b(\phi _{s,t}(x))\,\mathrm {d} t+\sigma {(\phi _{s,t}(x))}\,\mathrm {d} B_{t};\quad \phi _{s,s}(x)=x}$

has a unique solution in the sense of rough paths. A basic question in the theory of stochastic flows is whether the flow map ${\displaystyle \phi _{s,t}(x)}$ exists and satisfies the cocycle property that for all ${\displaystyle s\leq u\leq t}$,

${\displaystyle \phi _{u,t}(\phi _{s,u}(x))=\phi _{s,t}(x)}$

outside a null set independent of ${\displaystyle s,u,t}$.
The Universal Limit Theorem once again reduces this problem to whether the Brownian rough path ${\displaystyle \mathbf {B} _{s,t}}$ exists and satisfies the multiplicative property that for all ${\displaystyle s\leq u\leq t}$,

${\displaystyle \mathbf {B} _{s,u}\otimes \mathbf {B} _{u,t}=\mathbf {B} _{s,t}}$

outside a null set independent of ${\displaystyle s}$, ${\displaystyle u}$ and ${\displaystyle t}$. In fact, rough path theory gives the existence and uniqueness of ${\displaystyle \phi _{s,t}(x)}$ not only outside a null set independent of ${\displaystyle s}$, ${\displaystyle t}$ and ${\displaystyle x}$ but also independent of the drift ${\displaystyle b}$ and the volatility ${\displaystyle \sigma }$. As in the case of Freidlin–Wentzell theory, this strategy holds not just for differential equations driven by Brownian motion but also for those driven by any stochastic process that can be enhanced as a rough path.

## Controlled rough path

Controlled rough paths, introduced by M. Gubinelli,[21] are paths ${\displaystyle \mathbf {Y} }$ for which the rough integral

${\displaystyle \int _{s}^{t}\mathbf {Y} _{u}\,\mathrm {d} X_{u}}$

can be defined for a given geometric rough path ${\displaystyle \mathbf {X} }$. More precisely, let ${\displaystyle L(V,W)}$ denote the space of bounded linear maps from a Banach space ${\displaystyle V}$ to another Banach space ${\displaystyle W}$.
Given a ${\displaystyle p}$-geometric rough path ${\displaystyle \mathbf {X} =(1,\mathbf {X} ^{1},\ldots ,\mathbf {X} ^{\lfloor p\rfloor })}$ on ${\displaystyle \mathbb {R} ^{d}}$, a ${\displaystyle \gamma }$-controlled path is a function ${\displaystyle \mathbf {Y} _{s}=(\mathbf {Y} _{s}^{0},\mathbf {Y} _{s}^{1},\ldots ,\mathbf {Y} _{s}^{\lfloor \gamma \rfloor })}$ such that ${\displaystyle \mathbf {Y} ^{j}:[0,1]\rightarrow L((\mathbb {R} ^{d})^{\otimes j+1},\mathbb {R} ^{n})}$ and such that there exists ${\displaystyle M>0}$ with, for all ${\displaystyle 0\leq s\leq t\leq 1}$ and ${\displaystyle j=0,1,\ldots ,\lfloor \gamma \rfloor }$,

${\displaystyle \Vert \mathbf {Y} _{s}^{j}\Vert \leq M}$ and ${\displaystyle \left\|\mathbf {Y} _{t}^{j}-\sum _{i=0}^{\lfloor \gamma \rfloor -j}\mathbf {Y} _{s}^{j+i}\mathbf {X} _{s,t}^{i}\right\|\leq M|t-s|^{\frac {\gamma -j}{p}}.}$

### Example: Lip(γ) function

Let ${\displaystyle \mathbf {X} =(1,\mathbf {X} ^{1},\ldots ,\mathbf {X} ^{\lfloor p\rfloor })}$ be a ${\displaystyle p}$-geometric rough path satisfying the Hölder condition that there exists ${\displaystyle M>0}$ such that, for all ${\displaystyle 0\leq s\leq t\leq 1}$ and all ${\displaystyle j=1,2,\ldots ,\lfloor p\rfloor }$,

${\displaystyle \Vert \mathbf {X} _{s,t}^{j}\Vert \leq M(t-s)^{\frac {j}{p}},}$

where ${\displaystyle \mathbf {X} ^{j}}$ denotes the ${\displaystyle j}$-th tensor component of ${\displaystyle \mathbf {X} }$. Let ${\displaystyle \gamma \geq 1}$. Let ${\displaystyle f:\mathbb {R} ^{d}\rightarrow \mathbb {R} ^{n}}$ be an ${\displaystyle \lfloor \gamma \rfloor }$-times differentiable function whose ${\displaystyle \lfloor \gamma \rfloor }$-th derivative is ${\displaystyle (\gamma -\lfloor \gamma \rfloor )}$-Hölder; then

${\displaystyle (f(\mathbf {X} _{s}^{1}),Df(\mathbf {X} _{s}^{1}),\ldots ,D^{\lfloor \gamma \rfloor }f(\mathbf {X} _{s}^{1}))}$

is a ${\displaystyle \gamma }$-controlled path.
### Integral of a controlled path is a controlled path

If ${\displaystyle \mathbf {Y} }$ is a ${\displaystyle \gamma }$-controlled path where ${\displaystyle \gamma >p-1}$, then the integral ${\displaystyle \int _{s}^{t}\mathbf {Y} _{u}\,\mathrm {d} X_{u}}$ is defined and the path

${\displaystyle \left(\int _{s}^{t}\mathbf {Y} _{u}\,\mathrm {d} X_{u},\mathbf {Y} _{s}^{0},\mathbf {Y} _{s}^{1},\ldots ,\mathbf {Y} _{s}^{\lfloor \gamma -1\rfloor }\right)}$

is a ${\displaystyle \gamma }$-controlled path.

### Solution to controlled differential equation is a controlled path

Let ${\displaystyle V:\mathbb {R} ^{n}\rightarrow L(\mathbb {R} ^{d},\mathbb {R} ^{n})}$ be a function that has at least ${\displaystyle \lfloor \gamma \rfloor }$ derivatives and whose ${\displaystyle \lfloor \gamma \rfloor }$-th derivative is ${\displaystyle (\gamma -\lfloor \gamma \rfloor )}$-Hölder continuous for some ${\displaystyle \gamma >p}$. Let ${\displaystyle Y}$ be the solution to the differential equation

${\displaystyle \mathrm {d} Y_{t}=V(Y_{t})\,\mathrm {d} X_{t}.}$

Define

${\displaystyle {\frac {\mathrm {d} Y}{\mathrm {d} X}}(\cdot )=V(\cdot );}$

${\displaystyle {\frac {\mathrm {d} ^{r+1}Y}{\mathrm {d} ^{r+1}X}}(\cdot )=D\left({\frac {\mathrm {d} ^{r}Y}{\mathrm {d} ^{r}X}}\right)(\cdot )V(\cdot ),}$

where ${\displaystyle D}$ denotes the derivative operator; then

${\displaystyle \left(Y_{t},{\frac {\mathrm {d} Y}{\mathrm {d} X}}(Y_{t}),{\frac {\mathrm {d} ^{2}Y}{\mathrm {d} ^{2}X}}(Y_{t}),\ldots ,{\frac {\mathrm {d} ^{\lfloor \gamma \rfloor }Y}{\mathrm {d} ^{\lfloor \gamma \rfloor }X}}(Y_{t})\right)}$

is a ${\displaystyle \gamma }$-controlled path.

## Signature

Let ${\displaystyle X:[0,1]\rightarrow \mathbb {R} ^{d}}$ be a continuous function with finite total variation. Define

${\displaystyle S(X)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} X_{s_{1}},\int _{s<s_{1}<s_{2}<t}\mathrm {d} X_{s_{1}}\otimes \mathrm {d} X_{s_{2}},\ldots \right).}$

The signature of a path is defined to be ${\displaystyle S(X)_{0,1}}$. The signature can also be defined for geometric rough paths.
Let ${\displaystyle \mathbf {X} }$ be a geometric rough path and let ${\displaystyle \mathbf {X} (n)}$ be a sequence of paths with finite total variation such that

${\displaystyle \mathbf {X} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} X(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\cdots <s_{\lfloor p\rfloor }<t}\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{\lfloor p\rfloor }}\right)}$

converges in the ${\displaystyle p}$-variation metric to ${\displaystyle \mathbf {X} }$. Then

${\displaystyle \int _{s<s_{1}<\cdots <s_{N}<t}\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{N}}}$

converges as ${\displaystyle n\rightarrow \infty }$ for each ${\displaystyle N}$. The signature of the geometric rough path ${\displaystyle \mathbf {X} }$ can be defined as the limit of ${\displaystyle S(X(n))_{s,t}}$ as ${\displaystyle n\rightarrow \infty }$.

The signature satisfies Chen's identity,[22]

${\displaystyle S(\mathbf {X} )_{s,u}\otimes S(\mathbf {X} )_{u,t}=S(\mathbf {X} )_{s,t},}$

for all ${\displaystyle s\leq u\leq t}$.

### Kernel of the signature transform

The set of paths whose signature is the trivial sequence, or more precisely,

${\displaystyle S(\mathbf {X} )_{0,1}=(1,0,0,\ldots )}$

can be completely characterized using the idea of a tree-like path. A ${\displaystyle p}$-geometric rough path is tree-like if there exists a continuous function ${\displaystyle h:[0,1]\rightarrow [0,\infty )}$ such that ${\displaystyle h(0)=h(1)=0}$ and for all ${\displaystyle j=1,\ldots ,\lfloor p\rfloor }$ and all ${\displaystyle 0\leq s\leq t\leq 1}$,

${\displaystyle \Vert \mathbf {X} _{s,t}^{j}\Vert ^{p}\leq h(t)+h(s)-2\inf _{u\in [s,t]}h(u)}$

where ${\displaystyle \mathbf {X} ^{j}}$ denotes the ${\displaystyle j}$-th tensor component of ${\displaystyle \mathbf {X} }$.
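For piecewise linear paths the signature truncated at level 2 can be computed in closed form: a linear segment with increment Δ has truncated signature (1, Δ, Δ⊗Δ/2), and segments are concatenated with the truncated tensor product, which is exactly Chen's identity. A minimal sketch (our own illustration; the example path is arbitrary):

```python
def seg_sig(delta):
    """Level-(1,2) signature of a single linear segment with increment delta."""
    d = len(delta)
    lvl2 = [[delta[i] * delta[j] / 2.0 for j in range(d)] for i in range(d)]
    return (list(delta), lvl2)

def chen(a, b):
    """Concatenate two truncated signatures via the truncated tensor product
    (Chen's identity): level1 = a1 + b1, level2 = a2 + b2 + a1 (x) b1."""
    a1, a2 = a
    b1, b2 = b
    d = len(a1)
    lvl1 = [a1[i] + b1[i] for i in range(d)]
    lvl2 = [[a2[i][j] + b2[i][j] + a1[i] * b1[j] for j in range(d)] for i in range(d)]
    return (lvl1, lvl2)

def signature(points):
    """Level-2 truncated signature of the piecewise linear path through `points`."""
    sig = seg_sig([0.0] * len(points[0]))  # identity element (1, 0, 0)
    for p, q in zip(points, points[1:]):
        sig = chen(sig, seg_sig([qi - pi for pi, qi in zip(p, q)]))
    return sig

# L-shaped path (0,0) -> (1,0) -> (1,1): S^{11} = S^{22} = 0.5, S^{12} = 1, S^{21} = 0
lvl1, lvl2 = signature([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
print(lvl1, lvl2)
```

The antisymmetric part ½(S¹² − S²¹) = ½ is the Lévy area of this path: the signed area between the path and its chord.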
A geometric rough path ${\displaystyle \mathbf {X} }$ satisfies ${\displaystyle S(\mathbf {X} )_{0,1}=(1,0,\ldots )}$ if and only if ${\displaystyle \mathbf {X} }$ is tree-like.[23][24] Given the signature of a path, it is possible to reconstruct the unique path that has no tree-like pieces.[25][26]

## Infinite dimensions

It is also possible to extend the core results in rough path theory to infinite dimensions, provided that the norm on the tensor algebra satisfies a certain admissibility condition.[27]

## References

1. ^ Lyons, T. (1998). "Differential equations driven by rough signals". Revista Matemática Iberoamericana: 215–310. doi:10.4171/RMI/240. 2. ^ Lyons, Terry; Qian, Zhongmin (2002). System Control and Rough Paths. Oxford Mathematical Monographs. Oxford: Clarendon Press. doi:10.1093/acprof:oso/9780198506485.001.0001. ISBN 9780198506485. Zbl 1029.93001. 3. ^ Lyons, Terry; Caruana, Michael; Levy, Thierry (2007). Differential equations driven by rough paths, vol. 1908 of Lecture Notes in Mathematics. Springer. 4. ^ Lejay, A. (2003). "An Introduction to Rough Paths". Séminaire de Probabilités XXXVII. Lecture Notes in Mathematics. 1832. pp. 1–1. doi:10.1007/978-3-540-40004-2_1. ISBN 978-3-540-20520-3. 5. ^ Gubinelli, Massimiliano (2004). "Controlling rough paths". Journal of Functional Analysis. 216 (1): 86–140. arXiv:math/0306433. doi:10.1016/j.jfa.2004.01.002. 6. ^ Friz, Peter K.; Victoir, Nicolas (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications (Cambridge Studies in Advanced Mathematics ed.). Cambridge University Press. 7. ^ Friz, Peter K.; Hairer, Martin (2014). A Course on Rough Paths, with an introduction to regularity structures. Springer. 8. ^ Hairer, Martin (2013). "Solving the KPZ equation". Annals of Mathematics. 178 (2): 559–664. arXiv:1109.6811. doi:10.4007/annals.2013.178.2.4. 9. ^ Hairer, Martin (2014). "A theory of regularity structures". Inventiones Mathematicae. 198 (2): 269–504. arXiv:1303.5113.
Bibcode:2014InMat.198..269H. doi:10.1007/s00222-014-0505-4. 10. ^ Ledoux, Michel; Qian, Zhongmin; Zhang, Tusheng (2002). "Large deviations and support theorem for diffusion processes via rough paths". Stochastic Processes and their Applications. 102 (2): 265–283. doi:10.1016/S0304-4149(02)00176-X. 11. ^ Friz, Peter K.; Victoir, Nicolas (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications (Cambridge Studies in Advanced Mathematics ed.). Cambridge University Press. 12. ^ Lyons, Terry; Qian, Zhongmin (2002). "System Control and Rough Paths". Oxford Mathematical Monographs. Oxford: Clarendon Press. doi:10.1093/acprof:oso/9780198506485.001.0001. ISBN 9780198506485. Zbl 1029.93001. 13. ^ Lyons, T. (1998). "Differential equations driven by rough signals". Revista Matemática Iberoamericana: 215–310. doi:10.4171/RMI/240. 14. ^ Coutin, Laure; Qian, Zhongmin (2002). "Stochastic analysis, rough path analysis and fractional Brownian motions". Probability Theory and Related Fields. 122: 108–140. doi:10.1007/s004400100158. 15. ^ Lyons, Terry; Victoir, Nicholas (2007). "An extension theorem to rough paths". Annales de l'Institut Henri Poincare (C) Non-Linear Analysis. 24 (5): 835–847. Bibcode:2007AnIHP..24..835L. doi:10.1016/j.anihpc.2006.07.004. 16. ^ Friz, Peter; Gassiat, Paul; Lyons, Terry (2015). "Physical Brownian motion in a magnetic field as a rough path". Transactions of the American Mathematical Society. 367 (11): 7939–7955. arXiv:1302.2531. doi:10.1090/S0002-9947-2015-06272-2. 17. ^ Friz, Peter K.; Victoir, Nicolas (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications (Cambridge Studies in Advanced Mathematics ed.). Cambridge University Press. 18. ^ Cass, Thomas; Friz, Peter (2010). "Densities for rough differential equations under Hörmander's condition". Annals of Mathematics. 171 (3): 2115–2141. arXiv:0708.3730. doi:10.4007/annals.2010.171.2115. 19.
^ Cass, Thomas; Hairer, Martin; Litterer, Christian; Tindel, Samy (2015). "Smoothness of the density for solutions to Gaussian rough differential equations". The Annals of Probability. 43: 188–239. arXiv:1209.3100. doi:10.1214/13-AOP896. 20. ^ Ledoux, Michel; Qian, Zhongmin; Zhang, Tusheng (2002). "Large deviations and support theorem for diffusion processes via rough paths". Stochastic Processes and their Applications. 102 (2): 265–283. doi:10.1016/S0304-4149(02)00176-X. 21. ^ Gubinelli, Massimiliano (2004). "Controlling rough paths". Journal of Functional Analysis. 216 (1): 86–140. arXiv:math/0306433. doi:10.1016/j.jfa.2004.01.002. 22. ^ Chen, Kuo-Tsai (1954). "Iterated Integrals and Exponential Homomorphisms". Proceedings of the London Mathematical Society. s3–4: 502–512. doi:10.1112/plms/s3-4.1.456. 23. ^ Hambly, Ben; Lyons, Terry (2010). "Uniqueness for the signature of a path of bounded variation and the reduced path group". Annals of Mathematics. 171: 109–167. arXiv:math/0507536. doi:10.4007/annals.2010.171.109. 24. ^ Boedihardjo, Horatio; Geng, Xi; Lyons, Terry; Yang, Danyu (2016). "The signature of a rough path: Uniqueness". Advances in Mathematics. 293: 720–737. doi:10.1016/j.aim.2016.02.011. 25. ^ Lyons, Terry; Xu, Weijun (2016). "Inverting the signature of a path". Journal of the European Mathematical Society. 26. ^ Geng, Xi (2016). "Reconstruction for the Signature of a Rough Path". arXiv:1508.06890 [math.CA]. 27. ^ Cass, Thomas; Driver, Bruce; Lim, Nengli; Litterer, Christian. "On the integration of weakly geometric rough paths". Journal of the Mathematical Society of Japan.
# A calculus of the absurd

##### 22.7.8 Orthogonal projection

If you’ve done any physics (shivers) then you’ve probably come across the idea of “resolving” forces. If you haven’t, the basic idea is that given some vector $$v$$ in $$\mathbb {R}^2$$, we can split it into two components: a perpendicular one and a parallel one. One quite natural way to do this is to resolve the vector parallel and perpendicular to the “axes”; for example, the vector in the diagram below can be resolved into a component parallel to $$\uvec {i}$$ and a component parallel to $$\uvec {j}$$ (note that $$\uvec {i}$$ and $$\uvec {j}$$ are orthogonal). But perpendicular and parallel to what? Usually in secondary school mathematics this is not very well-defined, but we can now use some of our previous definitions to define this notion of “splitting up a vector” and generalise it to vector spaces where we don’t have a ready geometric interpretation. This definition encodes a lot of the intuitive notions about orthogonality and perpendicularity.

• Definition 22.7.3 Let $$\textsf {V}$$ be a vector space, and $$E$$ be a subspace of $$\textsf {V}$$. We say that $$\textbf {w}$$ is the orthogonal projection of $$\textbf {v}$$ onto $$E$$ if

• 1. The vector $$\textbf {w}$$ is in $$E$$.

• 2. The vector obtained by subtracting $$\textbf {w}$$ from $$\textbf {v}$$ is orthogonal to every vector in $$E$$ (which we write as $$\textbf {v} - \textbf {w} \bot E$$).

A very key property is that the orthogonal projection is the vector in $$E$$ which is closest to $$\textbf {v}$$. This makes the orthogonal projection useful in optimisation problems!

• Theorem 22.7.6 Let $$V$$ be a vector space, and $$W$$ be a subspace of $$V$$. Let $$v \in V$$. Then the orthogonal projection of $$v$$ onto $$W$$ minimises the distance between $$v$$ and $$W$$ (which we define as the distance between $$v$$ and the closest vector in $$W$$).
To prove this, we will need the Pythagorean theorem. Let $$v \in V$$ and let $$w$$ be an arbitrary vector in $$W$$. Then we will define $$p$$ to be the orthogonal projection of $$v$$ onto $$W$$. We can write the squared distance between $$v$$ and $$w$$ as $$||v-w||^2$$. Our goal is to show that this is greater than or equal to $$||v-p||^2$$ (note that in general it is always nicer to work with the distance squared, and this is all good and well because distance is never negative). Then we apply the trusty trick of adding zero (i.e. an object and its inverse), in this case $$p$$, which gives us

\begin{align} ||v-w||^2 &= ||v-p+p-w||^2 \end{align}

Then note that $$v - p \in W^{\bot }$$ (by the definition of the orthogonal projection) and that $$p - w \in W$$ (as both $$p$$ and $$w$$ are in $$W$$, which is a subspace). To this, we can apply Pythagoras’ theorem (just when you thought you’d escaped school geometry, it comes back to bite!), from which we know that

\begin{align} ||v-w||^2 &= ||v-p||^2 + ||p-w||^2 \geq ||v-p||^2, \end{align}

with equality exactly when $$w = p$$. This is what we wanted to show.

Now that we have defined the object, we can ask some questions that (at least to me) it makes sense to ask. For example, we can ask if the orthogonal projection always exists! This seems to be intuitively true, but how do we know? Let us prove this too.
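In a finite-dimensional space, the orthogonal projection onto the column span of a matrix can be computed with least squares, and both defining properties can be checked numerically. A quick sketch (our own illustrative data; NumPy assumed available):

```python
import numpy as np

# W = span of the columns of A, a subspace of R^3 (example data)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
v = np.array([1.0, 2.0, 0.0])

# Least squares gives coefficients c with A c = p, the projection of v onto W
c, *_ = np.linalg.lstsq(A, v, rcond=None)
p = A @ c

# Property 2: the residual v - p is orthogonal to each column of A (hence to all of W)
print(A.T @ (v - p))  # numerically zero

# Minimising property: p is at least as close to v as any other vector in W
rng = np.random.default_rng(0)
for _ in range(100):
    w = A @ rng.normal(size=2)  # a random vector in W
    assert np.linalg.norm(v - w) >= np.linalg.norm(v - p) - 1e-12
```

The random sampling is only a sanity check, of course; the theorem above is what guarantees the inequality for every $$w \in W$$.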
Revision history

The question: "Now that I've done that, I wonder if there is anything special I need to do to migrate my catkin workspace (which was created initially with Hydro) to Indigo."

tl;dr: remove the devel/ and build/ directories from your catkin_ws/. Remove src/CMakeLists.txt. Then:

source /opt/ros/DISTRO/setup.bash
cd catkin_ws/
catkin_make

Anytime you move a catkin_ws/ directory, it is recommended to remove the devel/ and build/ directories. Files in those dirs (may) contain references to the old location(s) of build artefacts, which, after moving your workspace, have just changed. Changing between ROS distributions is similar, in that the build/ and devel/ directories contain references to the location(s) of several key binaries, paths and environment variables that (may) have changed.

Additionally, the src/ directory should contain a CMakeLists.txt symlink, which points (in your case) to the Hydro version. Remove it and invoke catkin_make in your catkin_ws/: the link will be recreated automatically.
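The cleanup steps above can be sketched as a shell session. This is an illustration against a throwaway directory (the mktemp path is a stand-in for your real workspace, and the commented lines stand in for the real ROS rebuild):

```shell
# Stand-in for your real workspace (hypothetical path created via mktemp)
ws="$(mktemp -d)/catkin_ws"
mkdir -p "$ws/src" "$ws/build" "$ws/devel"
touch "$ws/src/CMakeLists.txt"   # stand-in for the old Hydro symlink

# The migration steps: wipe generated dirs and the stale toplevel symlink
rm -rf "$ws/build" "$ws/devel"
rm -f "$ws/src/CMakeLists.txt"

# On a real system you would now rebuild against the new distro:
#   source /opt/ros/indigo/setup.bash
#   cd "$ws" && catkin_make
ls "$ws"
```

Only src/ survives the cleanup, which is the point: everything else is regenerated by catkin_make under the new distribution.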
Article Text

Survival of retinoblastoma in less-developed countries: impact of socioeconomic and health-related indicators

1. S Canturk1, 3. V Khetan3, 4. Z Ma4, 5. A Furmanchuk5, 6. C B G Antoneli6, 7. I Sultan7, 8. R Kebudi8, 9. T Sharma3, 10. C Rodriguez-Galindo2, 11. D H Abramson9,

1. Department of Ophthalmology, Faculty of Medicine, Istanbul University, Istanbul, Turkey
2. Department of Oncology, St Jude Children's Research Hospital, Memphis, Tennessee, USA
3. Medical and Vision Research Foundation, Sankara Nethralaya, Chennai, India
4. Department of Pediatric Oncology, West China Second University Hospital, Chengdu, China
5. Department of Oncology, Belarussian Research Centre for Pediatric Oncology-Hematology, Minsk, Belarus
6. Department of Pediatric Oncology, Hospital AC Camargo, Sao Paulo, Brazil
7. Department of Pediatric Oncology, King Hussein Cancer Center, Amman, Jordan
8. Division of Pediatric Hematology-Oncology, Cerrahpaşa Medical Faculty and Oncology Institute, Istanbul University, Istanbul, Turkey
9. Department of Surgery, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
10. Department of Hemato-oncology, Hospital JP Garrahan, Buenos Aires, Argentina

Correspondence to Guillermo L Chantada, Hemato-Oncology Department, Hospital JP Garrahan, Combate de los Pozos 1881, C1245 AAL, Buenos Aires, Argentina; gchantada{at}yahoo.com

## Abstract

Background The survival of retinoblastoma in less-developed countries (LDCs) and the impact of socioeconomic variables on survival are not widely available in the literature.

Methods A systematic review of publications from LDCs was performed. Articles were retrieved from multiple databases and written in seven languages. Results were correlated with socioeconomic indicators. Lower-income countries (LICs) and middle-income countries (MICs) were included in our analyses.

Results An analysis of 164 publications including 14 800 patients from 48 LDCs was performed.
Twenty-six per cent of the papers were written in languages other than English. Estimated survival in LICs was 40% (range, 23–70%); in lower MICs, 77% (range, 60–92%) and in upper MICs, 79% (range, 54–93%; p=0.001). Significant differences were also found in the occurrence of metastasis: in LICs, 32% (range, 12–45%); in lower MICs, 12% (range, 3–31%) and in upper MICs, 9.5% (range, 3–24%; p=0.04). On multivariate analysis, physician density and human development index were significantly associated with survival and metastasis. Maternal mortality rate and per capita health expenditure were significantly associated with treatment refusal. Conclusions Important information from LDCs is not always available in English or in major databases. Indicators of socioeconomic development and maternal and infant health were related with outcome.

Keywords: Retinoblastoma; survival; metastasis; risk factors; early diagnosis; treatment refusal; retina; public health; neoplasia; child health (paediatrics)

For the past decades, the survival of retinoblastoma has been >80–90% in developed countries.1 As opposed to most other malignancies, these excellent results may be more related to early diagnosis than to the availability of sophisticated treatments. Although 90% of children affected with retinoblastoma live in less-developed countries (LDCs), the published literature from this region has received little attention.2 The reasons for this are manifold.
A publication bias against retrospective studies, studies showing negative results or studies from LDCs has been reported for other conditions.3 In the case of retinoblastoma, the problem is further compounded by the lack of a prospective approach in many treatment centres as well as the lack of a staging system for extraocular disease.4 Therefore, we speculated that relevant information about retinoblastoma outcome in LDCs may be available in languages other than English and published only in regional journals or as abstracts for specialised meetings. Accordingly, the aims of this study were to perform a systematic review of the retinoblastoma literature from LDCs, carry out an analysis of the retrieved data and describe the outcome of those patients. Early diagnosis of retinoblastoma is influenced by socioeconomic and maternal educational factors,5 so we speculated that indicators of childhood and maternal health could affect the outcome of retinoblastoma in LDCs. Thus, for this study, we also sought to identify the socioeconomic factors that potentially correlate with outcome. ## Methods ### Search strategy We searched the PubMed database at http://www.ncbi.nlm.nih.gov/pubmed/ using the terms “Retinoblastoma AND (Humans(Mesh)) AND ((infant(MeSH) OR child(MeSH) OR adolescent(MeSH))))” and retrieved publications from January 1998 to December 2008. Countries classified as low-income (LICs; gross national income (GNI) <US$755) or middle-income (MICs), the latter further subdivided into lower-MICs (GNI US$756–2995) and upper-MICs (GNI US$2996–9265), per the World Bank (http://web.worldbank.org/WBSITE/EXTERNAL/DATASTATISTICS/0,contentMDK:20420458∼menuPK:64133156∼pagePK:64133150∼piPK:64133175∼theSitePK:239419,00.html) report of economies for the year 2000, were eligible. The abstracts of all publications from the eligible countries were screened and analysed by two authors (SC and GC).
Papers dealing with clinical features, treatment results, epidemiological studies and prevention or early diagnosis of retinoblastoma were eligible for this study, and the entire paper was analysed. Additional data were retrieved using the same strategy and time intervals at the following electronic data bases: EMBASE (http://www.embase.com), Literatura Latinoamericana y del Caribe en Ciencias de la Salud (http://bases.bireme.br/cgi-bin/wxislind.exe/iah/online/?IsisScript=iah/iah.xis&base=LILACS&lang=i&form=F), Scientific Electronic Library Online (http://www.scielo.org), Index Medicus for the Eastern Mediterranean Region and Eastern Mediterranean Region Office (http://www.emro.who.int). Meeting proceedings from the International Society of Pediatric Oncology, American Society of Clinical Oncology, Association for Research in Vision and Ophthalmology, International Congress of Ocular Oncology, American Academy of Ophthalmology and International Society of Genetic Eye Disease were searched. When electronic versions of meeting proceedings were not available, abstract books were hand searched. Additional sources for grey literature (unpublished studies, with limited distribution) included http://www.scholar.google.com and http://www.cure4kids.org. At the former, the term “Retinoblastoma” and each of the eligible countries were used for retrieval of papers. In addition, each co-author obtained abstracts presented at local or regional meetings from their region. Attempts to further increase the number of retrieved publications from journals only available locally or regionally included search of the Web sites of publications about paediatrics, cancer, ophthalmology and pathology from the different regions. All of the references quoted in the retrieved manuscripts were also searched for additional publications. 
Finally, we contacted by email some investigators of the retrieved publications and those listed in the International Retinoblastoma Staging Group hosted at http://www.cure4kids.org, librarians and leaders of local, regional or global organisations providing treatment for retinoblastoma and asked them to provide material. ### Data analysis For each selected paper, we retrieved the following information to complete our database. • Publication characteristics: (full paper, abstract, monograph, protocol, multimedia presentation and language of the publication) • Study characteristics: patient number, design (prospective, retrospective), length of follow-up, survival (and methodology of calculation), stage and staging system used. • Treatment information: chemotherapy used, deaths caused by treatment-related toxicity, percentage of patients who abandoned or refused treatment. After the analysis of each paper, the data were introduced into a custom-designed database in English including the information detailed above that was available to all of the authors. Publications in languages other than English were read and analysed by local co-authors knowledgeable in the language used at each paper, and the results were included in the abovementioned database in English. All data were checked by two authors (SC and GC) to detect repetitions before analysis. If duplicate information was detected, only the source with the more complete dataset was included. When more than one value was available for a given parameter, the final result was calculated as the mean, combining results from different series. Once the analysis for each region was undertaken, the senior authors (GC, DHA) discussed the data with the local representative to estimate the accuracy and consistency of the data before the final statistical analyses were done. ### Survival estimation For each country, we calculated a single estimate of survival. 
Survival data were reported in a non-systematic manner across publications. For example, different durations of follow-up were included, and on occasion, results were not analysed by the Kaplan–Meier method but were reported as survival proportions. Therefore, it was not possible to pass strict statistical methodologic criteria for many papers. When survival data from a given country were available from more than one source, we analysed all of the information from the different centres. When more than one paper from a single centre or group was available, we selected the source with the most complete dataset for analysis. Thus, for each country, we calculated an estimate of survival that was derived from the mean of survival data from independent publications. This result was analysed by each author from that region and matched with the percentage of patients with metastatic disease to be considered accurate. ### Definitions The following socioeconomic and health-related indicators were obtained from the WHO (http://www.who.int/whosis/en/index.html) report for 2000, except for maternal mortality rate and fertility rate, which were derived from 2005 data: • Demographic and socioeconomic: Adult literacy rate and gross domestic product • Health service coverage: 1-year-old children immunised with three doses of DPT vaccine • Health system resources: Per capita government and total expenditure on health, physician density and nursing and midwives personnel density • Maternal–child mortality and burden of disease: Infant mortality/maternal mortality rate and <5 mortality rate. We also obtained the Human Development Index (HDI) from the UN development program (http://hdrstats.undp.org/en/indicators/74.html) as an indicator of global country development. El Salvador, Guatemala, Honduras and Nicaragua comprised a cooperative group, and their data are analysed together as Central America. 
Treatment refusal, treatment abandonment and cases lost to follow-up were analysed together as a single parameter, which was denoted “treatment refusal”. These variables were correlated with the estimated values for survival, metastatic rate and treatment refusal for each country. ### Statistical analysis The correlation of calculated survival, the proportion of patients with metastatic disease and treatment refusal with socioeconomic indicators was assessed with the Pearson correlation coefficient (R and R2). Multiple regression models were used to ascertain the correlation between combinations of predictive variables and survival, the presence of metastatic disease and the occurrence of treatment refusal. χ2 or Fisher exact tests were used for categorical variables. The Mann–Whitney test was used for continuous variables. p Values ≤0.05 were considered significant. ## Results ### Data retrieval A total of 456 references (235 at PubMed and 221 from other sources) from the countries under study were initially screened. Of those, after selecting the eligible ones and deleting duplicated information, we analysed 164 references from 48 countries describing 14 800 patients that fulfilled our criteria. Forty-five (27.4%) papers were identified by searching PubMed (including four abstracts initially retrieved by our search that were subsequently published as full papers), and 119 (72.6%) were retrieved from other sources. A total of 118 references (73.5%) were written in English, and 46 (26.5%) were written in other languages. Thirty-two papers (19.5%) included patients treated before 1990; the remaining ones included only children treated after that time. ### Estimated survival Estimated survival, percentage of metastatic patients at diagnosis and treatment refusal varied among countries (tables 1 and 2) and differed significantly among the different country groups (figure 1).
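The univariate correlation step from the statistical analysis can be sketched as follows. The numbers below are hypothetical country-level pairs (HDI vs estimated survival) for illustration only, not the study's data, and the study's multiple regression and significance testing are omitted:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: human development index vs estimated survival (%)
hdi      = [0.40, 0.55, 0.65, 0.72, 0.78]
survival = [35,   52,   70,   77,   81]

r = pearson_r(hdi, survival)
print(f"R = {r:.2f}, R2 = {r * r:.2f}")  # → R = 0.99, R2 = 0.99
```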
Estimated survival was 40% (range, 23–70%) in LICs, 77% (range, 60–92%) in lower MICs and 79% (range, 54–93%) in upper MICs (p=0.001). The percentage of patients presenting with metastatic disease was 32% in LICs (range, 12–50%), 12% in lower MICs (range, 3–31%) and 9.5% in upper MICs (range, 3–24%; p=0.04). The percentage of patients who refused treatment was 35% in LICs (range, 30–50%), 11% in lower MICs (range, 2–37%) and 5% in upper MICs (range, 1–25%; p=0.002). Table 1 Outcome measures among patients with retinoblastoma in lower-income countries (LICs) Table 2 Outcome measures among patients with retinoblastoma in middle-income countries Figure 1 Estimated survival, percentage of metastatic disease at diagnosis and treatment refusal rate for countries grouped according to the World Bank classification. ### Impact of socioeconomic and health-related variables Several socioeconomic indicators correlated significantly with survival, the occurrence of metastasis and treatment refusal (table 3). However, after multivariate analysis, only HDI was independently correlated with survival (p=0.0001), and this indicator, together with physician density, correlated with the occurrence of metastatic disease (p=0.0001 and 0.006, respectively). Factors that independently correlated with treatment refusal were physician density (p=0.03), per capita total expenditure on health (p=0.002) and maternal mortality rate, which had the strongest correlation (p=0.001). Table 3 Correlations between socioeconomic indicators and the presence of metastatic disease at diagnosis, estimated survival and treatment refusal in patients with retinoblastoma in less-developed countries ## Discussion Our study showed that significant information about the outcome of retinoblastoma in LDCs is available in non-English reports, published as meeting abstracts or as papers in local journals, and confirmed that the outcome of retinoblastoma in LDCs is poorer than that in more affluent countries.
The inclusion of publications in languages other than English in systematic reviews of randomised clinical trials increases their precision and completeness.6 In addition, the exclusion of grey literature from systematic reviews of randomised interventions may exaggerate the apparent effectiveness of the intervention.7 However, most systematic reviews and meta-analyses are performed to analyse the validity of a randomised treatment intervention. Because there are few prospective studies and no randomised trials for retinoblastoma in LDCs, the quality of the available evidence is sub-optimal. Therefore, as suggested for other diseases with similar features,8 we designed this review to include grey-literature publications in many languages, which helped us to improve the completeness of our data. However, the lack of a commonly used staging system among groups,4 the retrospective design of most studies and the rarity of the disease impeded the use of meta-analysis methodology, and we only report descriptive outcome data under a variety of treatments. By analysing our data, we found that among LDCs, the outcome and clinical characteristics of patients with retinoblastoma differ based on the degree of socioeconomic development of the country. We also found that some socioeconomic and health-related indicators correlated with estimated survival. The creation of cooperative international groups, the discussion and publication of standards for staging9 and the update of staging systems such as the TNM to include the experience in LDCs could be helpful towards a better description of the current situation and possibly future clinical trials addressing problems prevalent in LDCs. In addition, the recent creation of national retinoblastoma registries in some LDCs such as Malaysia and India, as well as global registries, will help to give a more reliable description of the situation by providing a population-based estimation of survival.
As reported for other paediatric malignancies, socioeconomic and health-related indicators were significantly correlated with outcome, but some peculiarities have been found.10 A recent study showed that total annual government per capita healthcare expenditure was the strongest predictor of outcome of children with cancer in LDCs.10 Most paediatric malignancies require a rather sophisticated facility to be successfully treated, and such facilities require a high healthcare expenditure. However, indicators of mortality and burden of disease played a more prominent role in predicting survival in our study. Specifically, we found that indicators of societal development, especially those related to the health status of the mother and child, correlated significantly with survival and the occurrence of metastatic disease at diagnosis. The annual per capita healthcare expenditure was correlated with the occurrence of metastatic disease at diagnosis and with survival, but significance was lost in the multivariate analysis. Retinoblastoma can be cured by enucleation of the affected eye(s), a simple surgical procedure if the diagnosis is timely and the families accept it. We speculate that factors related to maternal health and societal development substantially influence the likelihood of early diagnosis before the onset of metastatic disease and, hence, outcome regardless of the facilities available. However, annual per capita healthcare expenditure was a stronger predictor of treatment refusal. Treatment refusal is a common problem, occurring in up to 50% in our study.11 This factor may be related to the type of cancer, and until recently, it has not received much attention in the retinoblastoma literature.11 12 Thus, treating institutions in LDCs should consider developing programs to counteract this problem.12–14 Our data showed that the situation in most LICs differs from that in MICs. 
In the former, patients tend to present with metastatic disease; families abandon therapy more frequently and survival is in the range from 20% to 30%. In MICs, survival is better, and significantly fewer patients present with metastatic disease. The use of prospective treatment protocols has improved the survival of patients with this condition in MICs.15 16 However, some MICs such as South Africa and Malaysia showed lower than expected survival figures.17 18 Despite being relatively affluent, these countries show a lower per capita healthcare expenditure and lower physician and nurse density. Also, contrary to other countries with similar development scores, no national retinoblastoma strategy was developed until recently in Malaysia, and treatment refusal plays a significant role there.17 Inequity in access to treatment might also be important.19 20 On the other hand, countries with relatively lower income figures such as Cuba or Costa Rica show better results. In these settings, a better organised and egalitarian healthcare system in a small and homogeneous country, even under different political regimes, could compensate for lower gross national income figures and be responsible for the good results.21 22 Despite collecting the best-available published evidence about retinoblastoma in LDCs, our data still have limitations. Most studies were retrospective, and no randomised trial was included. Estimating survival was a challenge because of the inconsistencies we found in reporting follow-up times and the various methods used to estimate survival. However, the main limitation of our study is the possibility of having missed significant publications, especially from the African continent, where our access to regional reports was more limited. Therefore, for this study, we considered data from all possible sources, even though the quality of survival reports was not optimal in many cases.
It is also important to consider that about 20% of the papers included patients treated before 1990, so the current results might be better in some settings. In addition, some countries moved from one country category to another during the study period. Early diagnosis campaigns directed at the general population have been carried out in some countries in order to improve the situation.2 14 The important role of maternal and child health indicators in the survival of patients with retinoblastoma justifies developing campaigns targeted to the public to increase awareness. In all cases, a cost-effective initiative would be to reduce the treatment refusal rate and improve the screening of familial cases, especially in MICs.12 23 Twinning programs between institutions or organisations in developed countries and centres in LDCs, as well as the creation of centres of excellence in such settings, have been successfully implemented and are also an alternative for improving results.24 ## Acknowledgments We are indebted to the following colleagues who contributed their data to this study: Drs Alegria Totah, Imelda Pifano, Gary Mercado, Carlos Leal-Leal, Evandro Lucena, Rita Sitorus, Julia Palma, Sandra Luna Fineman, Ofelia Cruz, Clara Perez, Doris Calle, Abby White, Trijn Israels, Juan Luis Garcia, Maja Beck-Popovic, Luis Castillo, La-Ongsri Atchananeeyasakul, Alp Ozkan, Najeeb Al Shourbaji, Mr Hatem Nour and Ms Aida Farha. We are also indebted to Dr Raul Ribeiro for his critical review of this manuscript and Mrs Angela Mc Arthur for editorial review.
{}
American Institute of Mathematical Sciences 2010, 4(2): 169-187. doi: 10.3934/amc.2010.4.169

Efficient implementation of elliptic curve cryptography in wireless sensors

University of Campinas (UNICAMP), Campinas - SP, CEP 13083-970, Brazil

Received June 2009; Revised December 2009; Published May 2010

The deployment of cryptography in sensor networks is a challenging task, given the limited computational power and the resource-constrained nature of the sensing devices. This paper presents an implementation of elliptic curve cryptography on the MICAz Mote, a popular sensor platform. We present optimization techniques for arithmetic in binary fields, including squaring, multiplication and modular reduction, at two different security levels. Our implementation of the field multiplication and modular reduction algorithms focuses on reducing memory accesses and appears to be the fastest result for this platform. Finite field arithmetic was implemented in C and Assembly, and elliptic curve arithmetic was implemented for Koblitz and generic binary curves. We illustrate the performance of our implementation with timings for key agreement and digital signature protocols. In particular, a key agreement can be computed in 0.40 seconds and a digital signature can be computed and verified in 1 second at the 163-bit security level. Our results strongly indicate that binary curves are the most efficient alternative for the implementation of elliptic curve cryptography on this platform.

Citation: Diego F. Aranha, Ricardo Dahab, Julio López, Leonardo B. Oliveira. Efficient implementation of elliptic curve cryptography in wireless sensors. Advances in Mathematics of Communications, 2010, 4 (2) : 169-187. doi: 10.3934/amc.2010.4.169
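As a concrete illustration of the binary-field arithmetic such implementations optimize, here is a minimal sketch of carry-less multiplication followed by polynomial reduction in GF(2^m). The choice of GF(2^8) with the AES polynomial 0x11B is an assumption made so the example stays small and checkable; the paper works over much larger fields (e.g. 163-bit) with specialized squaring and reduction routines that are not shown here.

```python
def gf2m_mul(a: int, b: int, poly: int = 0x11B, m: int = 8) -> int:
    """Multiply two elements of GF(2^m), represented as bit vectors in ints.

    Step 1: carry-less schoolbook multiplication (XOR of shifted copies of a).
    Step 2: reduction modulo the irreducible polynomial `poly`.
    """
    prod = 0
    for i in range(m):
        if (b >> i) & 1:
            prod ^= a << i           # addition in GF(2) is XOR, no carries
    for i in range(2 * m - 2, m - 1, -1):
        if (prod >> i) & 1:
            prod ^= poly << (i - m)  # cancel the high bit via the field polynomial
    return prod

# In the AES field, 0x53 and 0xCA are multiplicative inverses of each other:
print(hex(gf2m_mul(0x53, 0xCA)))  # → 0x1
```

Real sensor-node code replaces the bit loops with word-level shifts, precomputed tables and unrolled reduction to minimize memory accesses, which is precisely the optimization target the paper describes.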
{}
# All Questions

### Explaining Dark Matter and Dark Energy to layman
With my little knowledge, I know this: Dark Matter The center of a galaxy controls/attracts its objects (stars, planets, comets etc.) towards itself because of gravity. But the mass of the center of ...

### How are Galaxy Super Clusters Generated
I have seen pictures of clusters of galaxies, usually used in regards to theories of dark matter and galaxy formations. One of the most famous ones has the perceived shape of a stick-figure. If I am ...

### What are the current observational constraints on the existence of Nemesis?
Nemesis is a hypothetical companion to the Sun on a very eccentric, long-period orbit. The star supposedly returns every few tens of millions years, driving comets into the inner solar system and ...

### Relativistic effects in stellar dynamical systems
I am curious, if anybody knows of any stellar dynamical systems/environments, where relativistic effects could play a dynamical role on the motion of these stellar systems? As a subquestion - are ...

### How to navigate with possible maximal precision using moon phases?
The Moon is the brightest celestial body seen in the night sky, and it is possible to find even through moderate clouds. So it's a ...

### Are any Pluto-sized objects remaining to be discovered in the Kuiper Belt?
An object approximately the same size as Pluto, Eris, was discovered only 8 years ago (in 2005). Are there any Pluto-sized objects remaining to be discovered, and if so, how far away from the Sun ...

### How can we detect water on Mars-like exoplanets?
According to data from Curiosity, Mars' dust holds about 2% water by weight This wasn't previously detected, so the impression we have had of Mars being incredibly dry may need to be altered. Okay it ...

### Statistics of elements abundance in exoplanets
Recently, I encountered the concept of carbon planets - planets, which would be, unlike the Earth, formed mostly by carbon, instead of oxygen, silicon and magnesium. (I am not counting iron, which is ...

### How do we know Milky Way is a 'barred' spiral galaxy?
In reference to the question, "How can we tell that the milky way is a spiral galaxy?" The answers there clearly sum up the question asked. But Milky Way is not just a spiral galaxy. It is further ...

### What do we know about the lifecycle of the Milky Way (or any other spiral galaxy)?
I know that the Milky Way will collide with Andromeda in the distant future but based on what we know so far there is a supermassive black hole in the center of each galaxy and thus the Milky Way will ...

### What is the current accepted theory as to why Mercury, despite its size, has a similar density to Earth?
According to the NASA web page overview about Mercury, despite the planet being just a bit larger than our moon, it's density is about 98.4% of Earth's. This high density suggests a comparatively ...

### How do stellar temperatures vary?
The temperature of the surface of the Sun (photosphere) is between 4500° - 6000° Kelvin. Inside the core, it's around 15.7 million degrees Kelvin. In other types of stars (neutron stars, white ...

### What percent of planets are in the position that they could be viewed edge-on from Earth? (and thus able to undergo transits)
Star number 12644769 from the Kepler Input Catalog was identified as an eclipsing binary with a 41-day period, from the detection of its mutual eclipses (9). Eclipses occur because the orbital ...

### Why haven't asteroid belts turned into new large bodies?
If gravitation (attraction of mass) is the cause of the formation of all celestial bodies then how come the numerous small bodies found in asteroid belts spread over an orbit instead of clustering ...

### Which came first: black holes or galaxies?
In other terms, did galaxies grow around black holes at their center?

### Were effects of a planetary magnetic field reversal observed on other planets than Earth?
From geological records in rocks and minerals we know that the magnetic field of Earth changed its polarity multiple times in the history. See Geomagnetic reversal. Was a similar process of a ...

### How is it known that Callisto has no core?
My Astronomy book claims that scientists have discovered that Callisto, a moon of Jupiter, has no hot inner core. In fact, it says, Callisto has a core much like the nucleus of a comet. Is this still ...

### Where might a semi proficient amateur analyst participate in meaningful astronomical efforts
I am a retired engineer that has an ongoing interest in space efforts. In my youth I did work on the Apollo program but on propulsion and vehicle thermal control: not flight dynamics. I have ...

### Mechanisms of binary/multiple star formation
What are mechanisms of binary/multiple star formation in different mass ranges (low, intermediate and high stellar masses)?

### Stars at near break-up rotation rates
Accretion discs are ubiquitous in astrophisics. As a direct corollary, they are important for the following question. Consider the following model, representing one of the most simple models for ...

### Determining effect of small variable force on planetary perihelion precession
Is there an analytical technique for determining the effect of a small variable transverse acceleration upon the rate of aspides precession (strictly not a precession but rotation of the line of ...

### Can there be an object with planetary discriminant between Ceres and Neptune?
The planetary discriminant is a measure of how dominant a body is within its region of the solar system. For (true) planets, it is $>10000$ and for dwarf planets it is $<1$. (See this answer to ...

### What is the current accepted theory as to why Titan has retained its atmosphere?
Titan (moon of Saturn) is unique in that it possesses a very thick atmosphere. However, Titan is certainly is not the largest of the moons - Ganymede being larger. What is the current accepted ...

### Subterranean Oceans On Other Planets/Planetoids: How Do Astronomers Deduce This
Recently I have been looking into planetoids in our asteroid belt and I have found one that caught my interest, Ceres. One of the main points that was said about it was that it had a subterranean ...

### How can astronomers determine the difference between “hydrostatic equilibrium” and “just happens to be spherical”?
This is relevant for the definition of a dwarf planet. I presume the answer will be, well, if we can tell the mass of the body and guess the material. I don't find this very satisfactory because ...

### What if Earth and Moon revolved around each other like Pluto and Charon?
What would be different for us if Earth and Moon revolved around each other like Pluto and Charon do?

### How can we avoid needing a leap year/second?
Given the Earth's current speed around the sun and current rate & axis of rotation, what is the best way to keep time to avoid a leap year? How many hours should we have in the day and days in a ...

### How do we know dark matter/dark energy exists?
I've never quite understood the theory behind why dark matter and dark energy exist. I know it has something to do with gravitational pull being stronger than what we calculate it SHOULD be, could ...

### Why doesn't Earth's axis change during the year?
My understanding is that the Earth's axis points in the same direction in space during its entire orbit around the sun. And this is what causes our seasons. My question is why doesn't the axis follow ...

### Where does the Milky Way end?
I was reading this article and it says the following: Researchers measured the mass of the Milky Way and found that our galaxy is approximately half the weight of a neighbouring galaxy known as ...

### What will a lunar eclipse look like from moon?
What will a lunar eclipse look like from moon? Will earth become a completely dark circle?

### Is it a coincidence that both the sun and moon look of same size from earth?
The sun is huge when compared to moon. Despite the huge difference in their size and distance from earth, Is it purely coincidental that they both look almost the same from earth?

### What would the effects be on Earth if Jupiter was turned into a star?
In Clarke's book 2010, the monolith and its brethren turned Jupiter into the small star nicknamed Lucifer. Ignoring the reality that we won't have any magical ...

### Is it possible to witness a star's death?
Given that the stars' distances to Earth are measured in light-years (for example, Sirius is 8.6 light-years away from Earth), what we are seeing as Sirius now is actually its state 8.6 years ago, ...

### What limits the usable focal length of telescopes currently?
What barriers - of technology, physics and possibly economy (things that would be possible technologically but are simply too expensive) sets the upper bound on quality of telescopes for observation ...

### Layout of the universe
I've been working on a space game in my spare time, and lately I've been thinking on how to lay out the universe. Though I've searched around and found it hard to get a good view of what the universe ...

### Can impact craters on the moon act like giant radio telescopes?
Could large craters on the moon be used as reflective lenses for radio signals? Acting like a large radio telescope reflecting radio waves to a satellite positioned over the crater.

### Orbiting around a black hole
Is it possible (for either a satellite or a planet) to orbit around a black hole? Do they attract everything around themselves into the center? Or they just affect gravitational force just like stars? ...

### Why is there a black stripe in the Hubble images of Pluto?
While reading reports about the New Horizons misson, I noticed an odd vertical, black stripe in the images of Pluto. Here is an example: Source: Hubble Discovers a Fifth Moon Orbiting Pluto ...

### Why is the Moon receding from the Earth due to tides? Is this typical for other moons?
After reading the Q&A Is the moon moving further away from Earth and closer to the Sun? Why? about the tides transferring energy to the Moon and pushing it from Earth, I have a question: How is ...

### Can dark matter be found in the shape of planets, galaxies etc.?
If dark matter has gravity just like normal matter, does that mean it can also form planets, solar systems and so on? Any answer will be appreciated.

### How did New Horizons take such well-lit pictures of Pluto?
The photos of Pluto from New Horizons are truly beautiful. But considering that Pluto is so far away from its nearest start - our Sun - how is it so well lit up? Did the New Horizons have a massive ...

### Using the Sun as a Gravitational Lens
Can the Sun be used as a gravitational lens to achieve better telescopic viewing? Can this effect be practically used to view celestial objects?

### Why haven't Earth and Venus got any tiny moons? Or have they?
Why haven't some meteoroids gotten caught in Earth's or Venus' orbit? AFAIK most meteors are tiny fragments from comets. Shouldn't some comet tail sometime have passed Earth orbit at velocities ...
259 views ### Does any iron fuse in stars before they go supernova? I understand that iron and all heavier elements consume more energy to produce than they make, and that is what eventually leads to a supernova. I also understand that a lot of the heavier elements ... 327 views ### Why are there no green stars? There are red stars, and orange stars, and yellow stars, and blue stars, and they are all understandable save the fact that there is a 'gap': There are no green stars. Is this because of hydrogen's ... 537 views ### How do we know the big bang didn't happen in an existing universe? I understand the evidence for the big bang (expansion, background radiation, etc), but how do we know that it was the start of the universe? Why couldn't it have occurred in an existing, but very ... 603 views ### How to detect emission lines in optical spectra? Is there any handy module to detect emission lines in a spectrum like one we get from the Sloan Digital Sky Survey (SDSS)? You can see there are many emission lines like Ha,OI in the spectrum below. ...
# Hydroelectric station model

1. Mar 24, 2013

### vampslayer

Hey! It's about control systems for hydro power plants, where one of the main roles belongs to the turbine governor, which should be able to control the speed (frequency) of the generator. But can someone explain why some block diagrams have both speed and power feedback, while other block diagrams use only speed as feedback to hold frequency constant? What is the point of also having power as feedback? Look at the following picture and you will see what I am talking about. http://oi48.tinypic.com/iqdc08.jpg

Last edited: Mar 24, 2013

2. Mar 24, 2013

### jim hardy

When a synchronous machine is connected to the grid it cannot depart from synchronous speed - it is locked as if by gears. So one controls power output by controlling admission of water. Before synchronizing, one of course controls speed by admission of water in order to match the grid so he can synchronize. The block diagrams you presented are for understanding the control theory and may not resemble the actual hardware block diagram used to implement speed and load control. A typical steam turbine governor has a gain of about 20 to 30, so that a demand for ~3% to 5% overspeed translates to a demand for 100% power. So the terms speed and power become interchangeable in some control systems. Different textbook authors might use them differently. I assume hydro governors are similar - but I was never around one. Any help?

old jim

3. Mar 24, 2013

### vampslayer

Hi jim. Can you just tell me if I am right? Let's say a hydro power plant works in off-grid mode, and the plant uses a synchronous generator. We use the turbine governor to bring the speed up to synchronous speed (until that moment the generator and load are cut off). The moment the turbine (and hence the generator) reaches that speed, the load can be connected to the generator, and the generator's excitation system should start working.
Then we use speed as feedback to control the frequency against load disturbances. But if we control frequency this way, we control power too, so why do we need power as feedback as well? Also, power depends on the voltage, and voltage is controlled by the generator's excitation system (by the exciting current). So in general we control power with the generator's excitation system, but in case of a load disturbance we use the turbine governor. Am I right? And I don't understand the differences between controlling a hydro power plant that works in "off-grid" versus "on-grid" mode.

Last edited: Mar 24, 2013

4. Mar 24, 2013

### jim hardy

I will try. Okay, we are NOT connecting to the grid. We call that "islanded". Power produced by our generator will be used by nearby loads. Do I understand the question correctly? Yes, the governor will attempt to hold speed wherever we set it; frequency is the same as speed, presumably near 60 (or 50) Hz. Is this what you mean? A change in load will change the speed of the machine; the governor will sense that and try to restore speed by adjusting valves. As I mentioned, the governor does not have infinite gain, so a change in load will result in a speed & frequency offset of $\frac{\Delta \text{Load}}{\text{Governor Gain}}$ after the system settles. Remember it's a closed loop. In your islanded situation there is no need for power as automatic feedback. Just control frequency and voltage. Let the load take whatever power it needs and the governor must follow. That's what governors do. I don't think you have that relationship quite right. To a small extent the power drawn by a load is affected by voltage, yes. That's why we have an automatic voltage regulator. We hold voltage constant so the customer's lights stay the same brightness. What kind of loads are you powering? For resistive heating like water heaters power is linear with voltage.
For induction motors like household appliances power is in proportion to frequency and is comparatively independent of voltage. Think about it - induction motors are just a few % slip, so have fairly constant speed. Power is a strong function of frequency and a comparatively weak function of voltage. When islanded we have no control over power - it will be whatever the customer demands. It is our duty to provide him with normal voltage and frequency. I don't think you have that right. OFF GRID the power is determined by what load is connected to the generator. That is not under control of the plant operator. Plant operator can only set frequency and voltage, resulting power will be in accordance with Ohm's law. We do not depart from normal voltage to control power (except in very unusual circumstances ) OFF GRID: as above. ON GRID: Speed is set by the grid. One plant is not large enough to change grid frequency. So when unit is ON GRID the governor controls the exact same valves, but because speed is fixed the valves have a different effect. The valves cannot change the speed, so what do they control instead? If they move in the open direction they admit more water(or steam) . That produces more torque. Power = torque X speed X (conversion constant). So more power flows out into the grid. It only shows on your power meter. So the same exact knob that controlled speed when OFF GRID controls power when ON GRID. ------------------------------------------------------------------------------------------- That's the basics of the machine and grid. There's a couple reasons to add a power input to the governor. My plant has a simple circuit that compares generator's electrical power output against mechanical power coming into generator from the turbine. If it ever sees high turbine power simultaneous with low generator power it reaches into the governor and snaps the valves shut. The reason for that is to prevent a rapid acceleration and overspeed. 
That can happen if, for example, you lose the lines leaving the plant. Controls also exist to detect and mitigate power system oscillations, but most plants don't have them (or didn't in my day). Bear in mind that the whole grid is an interconnected system having both energy input and inertia. That's an invitation for harmonic motion. Indeed a large steam turbogenerator when connected to the grid has a natural frequency of about 1 Hz; that is, it can hunt like a rotating pendulum at 1 Hz. So control systems must not excite that frequency; in fact, they must damp it. I don't know what a hydro plant's natural frequency is. But that effect is local. An entire section of the country can go into oscillation against another part of the country. I've seen them at $\frac{2}{3}$ Hz and divergent. Systems exist to detect and snub those as well; search on "power system stabilizer". These add-ons generally are not simple linear control blocks as you'd think from looking at your diagrams.

I hope you become interested in power systems. The field needs specialists who love it. dlgoff has experience in central power system control - I was just at a steam plant so have only a view through a very small window. Good luck in your studies.

old jim

Last edited: Mar 24, 2013

5. Mar 25, 2013

### vampslayer

Yes, yes, that's what I thought. And Jim, thank you very much. You made me understand the theory, and you resolved my puzzles. Now this thing is a lot clearer. This was extremely helpful. Thank you one more time.

6. Mar 25, 2013

### jim hardy

I thank you for those kind words. Helps an old guy feel perhaps still a little bit useful.
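The droop relationship jim hardy describes (a steady-state frequency offset of ΔLoad/Gain when islanded, and power = torque × speed when on-grid) can be sketched numerically. This is an illustrative back-of-the-envelope calculation, not a model of any real governor; the gain of 25 is simply an assumed value in the 20–30 range mentioned in the thread:

```python
# Governor droop: with finite gain, a sustained load change settles at a
# steady-state speed/frequency offset of delta_load / gain (per-unit).
GOVERNOR_GAIN = 25.0  # assumed value in the 20-30 range from the thread

def steady_state_freq(nominal_hz, delta_load_pu):
    """Islanded machine: frequency after the closed loop settles."""
    offset_pu = delta_load_pu / GOVERNOR_GAIN
    return nominal_hz * (1.0 - offset_pu)

# A 1.0 per-unit (100%) load demand gives a 4% offset, consistent with
# the "~3% to 5% speed change translates to 100% power" rule of thumb.
settled = steady_state_freq(60.0, 1.0)

# On-grid, speed is fixed by the grid, so the valves set power instead:
def mech_power(torque, speed):
    return torque * speed  # Power = torque x speed (consistent units)
```

With these assumed numbers, a full-load step on an islanded machine settles about 4% below nominal frequency until some slower reset action re-centers it.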
Purpose

nag_rand_init_skipahead_power2 (g05kk) allows for the generation of multiple, independent, sequences of pseudorandom numbers using the skip-ahead method. The base pseudorandom number sequence defined by state is advanced ${2}^{n}$ places.

Syntax

[state, ifail] = g05kk(n, state)

Description

nag_rand_init_skipahead_power2 (g05kk) adjusts a base generator to allow multiple, independent, sequences of pseudorandom numbers to be generated via the skip-ahead method (see the G05 Chapter Introduction for details). If, prior to calling nag_rand_init_skipahead_power2 (g05kk), the base generator defined by state would produce random numbers ${x}_{1},{x}_{2},{x}_{3},\dots$, then after calling nag_rand_init_skipahead_power2 (g05kk) the generator will produce random numbers ${x}_{{2}^{n}+1},{x}_{{2}^{n}+2},{x}_{{2}^{n}+3},\dots$. One of the initialization functions nag_rand_init_repeat (g05kf) (for a repeatable sequence if computed sequentially) or nag_rand_init_nonrepeat (g05kg) (for a non-repeatable sequence) must be called prior to the first call to nag_rand_init_skipahead_power2 (g05kk). The skip-ahead algorithm can be used in conjunction with any of the six base generators discussed in the G05 Chapter Introduction.

References

Haramoto H, Matsumoto M, Nishimura T, Panneton F and L'Ecuyer P (2008) Efficient jump ahead for F2-linear random number generators. INFORMS J. on Computing 20(3) 385–390

Knuth D E (1981) The Art of Computer Programming (Volume 2) (2nd Edition) Addison–Wesley

Parameters

Compulsory Input Parameters

1: $\mathrm{n}$ – int64/int32/nag_int scalar
$n$, where the number of places to skip ahead is defined as ${2}^{n}$.
Constraint: ${\mathbf{n}}\ge 0$.
2: $\mathrm{state}\left(:\right)$ – int64/int32/nag_int array
Note: the actual argument supplied must be the array state supplied to the initialization routines nag_rand_init_repeat (g05kf) or nag_rand_init_nonrepeat (g05kg). Contains information on the selected base generator and its current state.

Optional Input Parameters

None.

Output Parameters

1: $\mathrm{state}\left(:\right)$ – int64/int32/nag_int array
Contains updated information on the state of the generator.

2: $\mathrm{ifail}$ – int64/int32/nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).

Error Indicators and Warnings

Errors or warnings detected by the function:

${\mathbf{ifail}}=1$: Constraint: ${\mathbf{n}}\ge 0$.

${\mathbf{ifail}}=2$: On entry, state vector has been corrupted or not initialized.

${\mathbf{ifail}}=3$: On entry, cannot use skip-ahead with the base generator defined by state.

${\mathbf{ifail}}=4$: On entry, the state vector defined on initialization is not large enough to perform a skip-ahead (applies to Mersenne Twister base generator). See the initialization function nag_rand_init_repeat (g05kf) or nag_rand_init_nonrepeat (g05kg).

${\mathbf{ifail}}=-99$

${\mathbf{ifail}}=-399$: Your licence key may have expired or may not have been installed correctly.

${\mathbf{ifail}}=-999$: Dynamic memory allocation failed.

Accuracy

Not applicable.

Calling nag_rand_init_skipahead_power2 (g05kk) and then generating a series of uniform values using nag_rand_dist_uniform01 (g05sa) is equivalent to, but more efficient than, calling nag_rand_dist_uniform01 (g05sa) and discarding the first ${2}^{n}$ values. This may not be the case for distributions other than the uniform, as some distributional generators require more than one uniform variate to generate a single draw from the required distribution.
Example This example initializes a base generator using nag_rand_init_repeat (g05kf) and then uses nag_rand_init_skipahead_power2 (g05kk) to advance the sequence ${2}^{17}$ places before generating five variates from a uniform distribution using nag_rand_dist_uniform01 (g05sa). ```function g05kk_example fprintf('g05kk example results\n\n'); genid = int64(1); subid = int64(1); seed = [int64(1762543)]; % Initialise the generator to a repeatable sequence [state, ifail] = g05kf( ... genid, subid, seed); % Advance the sequence 2**n places n = int64(17); [state, ifail] = g05kk( ... n, state); % Generate nv variates from a uniform distribution nv = int64(5); [state, x, ifail] = g05sa( ... nv, state); % Display the variates disp(x); ``` ```g05kk example results 0.7357 0.3521 0.4188 0.0046 0.0365 ```
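The skip-ahead principle itself can be illustrated outside NAG with a toy generator. This Python sketch uses a MINSTD-style multiplicative congruential generator (not one of the NAG base generators) to show that jumping $2^n$ places in one step via modular exponentiation gives the same state as stepping the generator $2^n$ times:

```python
# Toy multiplicative congruential generator: x_{k+1} = A * x_k mod M.
M = 2**31 - 1     # Mersenne prime modulus (MINSTD parameters)
A = 48271

def step(x):
    return (A * x) % M

def skip_ahead(x, k):
    # Jump k places in O(log k) work: x_k = A**k * x_0 mod M.
    return (pow(A, k, M) * x) % M

seed = 1762543
n = 17

# Stepping 2**17 times one-by-one...
x = seed
for _ in range(2**n):
    x = step(x)

# ...matches a single skip-ahead jump.
y = skip_ahead(seed, 2**n)
```

The same idea, applied to the recurrences of the actual base generators, is what lets g05kk advance a stream far enough that parallel streams do not overlap.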
# pyGAM

## Installation

`pip install pygam`

### scikit-sparse

To speed up optimization on large models with constraints, it helps to have scikit-sparse installed because it contains a slightly faster, sparse version of Cholesky factorization. The import from scikit-sparse references nose, so you'll need that too. The easiest way is to use Conda: `conda install -c conda-forge scikit-sparse nose`

scikit-sparse docs

## Contributing - HELP REQUESTED

Contributions are most welcome! You can help pyGAM in many ways including:

- Working on a known bug.
- Trying it out and reporting bugs or what was difficult.
- Helping improve the documentation.
- Writing new distributions and link functions.
- If you need some ideas, please take a look at the issues.

To start:

- Fork the project and cut a new branch.
- Install the testing dependencies: `conda install pytest numpy pandas scipy pytest-cov cython` and `pip install -r requirements.txt`

It helps to add a sym-link of the forked project to your python path. To do this, you should install flit:

- `pip install flit`
- Then from the main project folder (i.e. `.../pyGAM`) do: `flit install -s`

Make some changes and write a test...

- Test your contribution (e.g. from `.../pyGAM`): `py.test -s`
- When you are happy with your changes, make a pull request into the master branch of the main project.

Generalized Additive Models (GAMs) are smooth semi-parametric models of the form

g(E[Y|X]) = f_1(X_1) + f_2(X_2) + ... + f_p(X_p)

where X.T = [X_1, X_2, ..., X_p] are independent variables, y is the dependent variable, and g() is the link function that relates our predictor variables to the expected value of the dependent variable. The feature functions f_i() are built using penalized B splines, which allow us to automatically model non-linear relationships without having to manually try out many different transformations on each variable. GAMs extend generalized linear models by allowing non-linear functions of features while maintaining additivity.
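A minimal, plain-Python illustration of this additive structure (hypothetical fitted smooths and a log link; this is a sketch of the model form, not pyGAM's actual API):

```python
import math

# Hypothetical fitted feature functions (smooths) for a two-feature GAM.
def f1(x1):
    return math.sin(x1)       # non-linear effect of X_1

def f2(x2):
    return 0.5 * x2           # linear effect of X_2

def inverse_link(eta):
    """Inverse of a log link: maps the additive predictor to E[Y|X]."""
    return math.exp(eta)

def predict(x1, x2):
    # Additivity: the linear predictor is just the sum of the f_i(X_i).
    return inverse_link(f1(x1) + f2(x2))

# Because the model is additive, the partial effect of X_1 can be read
# off directly, holding X_2 fixed at any reference value:
partial_x1 = [f1(x1) for x1 in (0.0, 0.5, 1.0)]
```

In the real library the f_i are penalized B-spline expansions estimated from data, but the prediction step has exactly this sum-then-inverse-link shape.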
Since the model is additive, it is easy to examine the effect of each X_i on Y individually while holding all other predictors constant. The result is a very flexible model, where it is easy to incorporate prior knowledge and control overfitting.

## Citing pyGAM

Please consider citing pyGAM if it has helped you in your research or work:

Daniel Servén, & Charlie Brummitt. (2018, March 27). pyGAM: Generalized Additive Models in Python. Zenodo. DOI: 10.5281/zenodo.1208723

BibTex:

    @misc{daniel_serven_2018_1208723,
      author = {Daniel Servén and Charlie Brummitt},
      title = {pyGAM: Generalized Additive Models in Python},
      month = mar,
      year = 2018,
      doi = {10.5281/zenodo.1208723},
      url = {https://doi.org/10.5281/zenodo.1208723}
    }

## References

1. Simon N. Wood, 2006. Generalized Additive Models: an introduction with R
2. Hastie, Tibshirani, Friedman. The Elements of Statistical Learning. http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf
3. James, Witten, Hastie and Tibshirani. An Introduction to Statistical Learning. http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Sixth%20Printing.pdf
4. Paul Eilers & Brian Marx, 1996. Flexible Smoothing with B-splines and Penalties. http://www.stat.washington.edu/courses/stat527/s13/readings/EilersMarx_StatSci_1996.pdf
5. Kim Larsen, 2015. GAM: The Predictive Modeling Silver Bullet
6. Deva Ramanan, 2008. UCI Machine Learning: Notes on IRLS. http://www.ics.uci.edu/~dramanan/teaching/ics273a_winter08/homework/irls_notes.pdf
7. Paul Eilers & Brian Marx, 2015. International Biometric Society: A Crash Course on P-splines. http://www.ibschannel2015.nl/project/userfiles/Crash_course_handout.pdf
8. Keiding, Niels, 1991. Age-specific incidence and prevalence: a statistical perspective
# Tag Info ## Hot answers tagged dominant-strategy 8 Fixing the strategy of the opponent, a mixed strategy never yields a strictly higher utility if you are expected utility-maximizing. The reason is that the expected utility from a mixed strategy is at most as high as the highest utility from the pure strategies which this mixed strategy plays with positive probability. That is not to say that a mixed ... 4 Isn't the mixed extension of matching pennies a good example of this? The strategies $p,q$ are both elements of $[0,1]$, and \begin{align*} U_1(p,q) & = pq + (1-p)(1-q) \\ \\ U_2(p,q) & = p(1-q) + (1-p)q. \end{align*} There is a unique Nash-equilibrium at $p = q = 1/2$ but all $p$ and all $q$ are rationalizable, because in the equilibrium all ... 3 As I said in the comment, Nash's theorem shows the existence of a Nash equilibrium (possibly but not necessarily in mixed strategies). If you are interested in Nash equilibria is proper mixed strategies, i.e., NE in which all players play at least two actions with positive probability, you can easily show the following impossibility result: No such NE can ... 3 Consider the following game between P1 (row player) and P2 (column player): \begin{array}{|c|c|c|}\hline & L & R \\\hline T& 1,1 & 2,0 \\\hline B& 0,0 & 1,1 \\\hline \end{array} $T$ is P1's dominant strategy $T$ is P1's best response to both of P2's strategies $L$ and $R$ $L$ is P2's best response to P1's strategy $T$ $R$ is P2's ... 3 To ensure that the game has a single, completely-mixed, Nash equilibrium there needs to be "cyclic unique best responses", i.e., either BR(L) = O, BR(O) = R, BR(R) = U, BR(U) = L, or BR(L) = U, BR(U) = R, BR(R) = O, BR(O) = L. The strict cyclic preferences both rule out pure equilibria and allow a (unique) completely mixed equilibrium. There are ... 3 A Nash equilibrium that consists of weakly dominant strategies is a stronger solution concept than a NE itself. 
Consider the following simple matrix game where best replies have been marked with * \begin{array}{c|cc} P1/P2&\text{left}&\text{right}\\ \hline \text{Up}&1^*,1^*&0^*,0\\ \text{Down}&0,0^*&0^*,0^* \end{array} Both Up and ... 2 I remember slaving over the notation in this book when I was a bad undergraduate. It brings up some interesting memories, some which may help you. $F(x)$ is the cumulative distribution of a single bidder's valuation. $G(x)$ is the cumulative distribution of the highest bidder's valuation, given $N$ bidders. For the example you are referring to, values ... 2 I think you have a typo -- the equilibrium bidding strategy in the first-price auction you specify should be $$\beta(x) = \frac{N-1}{N}x$$ Here's a hint that might help. The CDF of the order statistic, $Y_1$, is $$G(x) = x^{N-1}$$ To see why this is the case, notice that this is exactly the probability that some given quantity $x$ is greater than or ... 2 Do you have a formal notion of what 'stable' means? Nash equilibria are often thought of informally as the strategies that support stable outcomes. If that's all that the term 'stable' means, then of course the outcomes $(5,10)$ and $(10,5)$ are also stable. As Lee Sin notes above, the outcome $(5,5)$ is also a Nash equilibrium outcome, and so is stable in ... 2 You're on the right track here. You need to check every outcome for its potential to be a NE. You're correct in stating that outcomes (5,10) and (10,5) are NEs however you didn't identify that (5,5) is also an NE. (5,5) If player one deviates he receives a payout of 5. If player two deviates he receives a payout of 5. Therefore no player has an incentive ... 2 \begin{array}{|c|c|c|} \hline &L&R\\\hline T&1,1&0,0\\\hline B&0,0&0,0\\\hline \end{array} In the game above, there are two pure strategy Nash equilibria: $(T,L)$ is an equilibrium in weakly dominant strategies; $(B,R)$ is an equilibrium in weakly dominated strategies. 
Noting that "dominant" and "dominated" are two different words,... 2 Repeated games and nonlinear utility Let's assume a trivial two-player game where each player has two options A and B; and the payout is +1/-1 if players pick the same and -1/+1 if players pick differently. Let's assume that the game is repeated 100 times with the strategies chosen and committed to beforehand. This means that if your opponent picks a fixed ... 2 A game outcome that is Pareto optimal or Pareto efficient is one where no one player can be made better off without making at least one player worse off. So a Nash equilibrium can easily be Pareto sub-optimal (or Pareto inefficient), which means that it is possible to one player can be made better off without making at least one player worse off. However, "... 2 In 2-player games, the strategies that survive iterated elimination of strictly dominated strategies are called rationalizable. Note that even if no strategy is strictly dominant, there can be strictly dominated strategies. If you cannot eliminate any strategy, then all strategies are rationalizable. Only if correlation of players' randomization is allowed, ... 1 The set of rationalizable strategies is the set of strategies that survive the iterated elimination of strictly dominated strategies, i.e., strategies that are never a best response. It is a weaker concept than Nash equilibrium. For player 1, you can eliminate strategy M, which is strictly dominated by T. You cannot eliminate any strategy for player 2 as ... 1 Hint Let $h_i^*(h_{-i}$) be firm $i$'s best response to the other $N-1$ firms' strategy profile $h_{-i}$. If $h_i^*(\cdot)$ is a dominant strategy, then it must be independent of the other firms' strategies, i.e. $h_i^*(h_{-i})=h_i^*$ for all $h_{-i}$. 1 The definition of a monopoly is one company having the exclusive control of a good or service. Since there are 2+ companies it does not fit the definition of a monopoly. 
A duopoly is a situation where 2 companies control the majority of the market. It sounds like this describes your situation. A more correct term may be oligopoly, which describes a market ...

1 Start with the following matrix. $$\begin{matrix}1 && 0 && 0\\ 1 && 1 && 0\\ 1 && 1 && 1 \end{matrix}$$ You will choose a $1$ in the matrix and change it to a $0$. The other player will choose a $0$ and change it to a $1$. Your goal is for the resulting matrix to have an even determinant. The other player's ...

1 Q1 Your table seems to be correct. Here is a quick Python implementation for generating the payoffs:

    def payoff_calculator(x, y):
        if x + y < 14:
            return (x + 1, y + 1)
        else:
            if x == y:
                return (x, y)
            else:
                return (y, 20 - x) if x < y else (20 - y, x)

    primes = [2, 3, 5, 7, 11, 13, 17, 19]
    payoffs = [[payoff_calculator(i,...

1 I agree with Herr, the payoff matrix looks right. Also, there are no strictly dominated strategies because a strictly dominated strategy cannot be a best response for any possible belief. However, if any player believes that the other player is choosing 19, then every strategy (both pure and mixed) is a best response.
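The mixed extension of matching pennies quoted earlier ($U_1(p,q)=pq+(1-p)(1-q)$, $U_2(p,q)=p(1-q)+(1-p)q$) can be checked numerically: at $q=1/2$ player 1 is indifferent over every $p$, and symmetrically for player 2, so neither has a profitable unilateral deviation from $p=q=1/2$. A quick sketch:

```python
def U1(p, q):  # row player's expected payoff in the mixed extension
    return p * q + (1 - p) * (1 - q)

def U2(p, q):  # column player's expected payoff
    return p * (1 - q) + (1 - p) * q

p_star = q_star = 0.5
grid = [i / 100 for i in range(101)]

# Best payoff each player can reach when the opponent plays 1/2:
best_dev_1 = max(U1(p, q_star) for p in grid)
best_dev_2 = max(U2(p_star, q) for q in grid)
# Both equal U_i(1/2, 1/2) = 1/2, confirming the equilibrium.
```

The grid search is, of course, only a numerical sanity check of the algebraic indifference argument, not a proof.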
# Asymptotic approximation of $x^\alpha$ by entire functions Given a non-integral real $\alpha$, is there an entire (see http://en.wikipedia.org/wiki/Entire_function) function $h(x)$ such that $x^{-\alpha}h(x)\longrightarrow 1$ for $x\rightarrow+\infty$ (with $x$ real non-negative)? Clearly, such a function if it exists is not unique since $h(x)+e^{-x}$ and similar functions work also. - What is the question? – Per Alexandersson May 28 '10 at 9:05 Existence, yes or no, and if yes, an example, of an entire function $h$ such that $x^{-\alpha}h(x)\rightarrow 1$ for $x\rightarrow+\infty$ with $x$ real. – Roland Bacher May 28 '10 at 9:20 Looking at the example of $1/\Gamma(1+x)\sim (e/x)^x(2\pi x)^{-1/2}$ as $x\to+\infty$, I would say "yes" but definitely a maitre in complex analysis is wanted. :) – Wadim Zudilin May 28 '10 at 10:11 Casorati-Weierstrass: If $f$ has an essential singularity at $a$, then the image under $f$ of any punctured disk around $a$ is dense in $\mathbb C$. Use for $a=\infty$. Take an entire function $f(z)$ and consider the entire $g(z)=zf(z)^2$; there exists a direction $\lambda$ along which $g(z)\to C\ne0$ as $z\to\infty$. Then $f(x)=c_0\sqrt{x}g(x/\lambda)$ will give an example with $\alpha=-1/2$. – Wadim Zudilin May 28 '10 at 10:28 Start with an entire function $f$ such that $f(x)=1/x + O(1/x^2)$ for $x>0$, $x\rightarrow\infty$. For example $f(z)= (1-e^{-z})/z$. Let F be some primitive for $f$: $F(z)=\int_1^z f(s)ds$. We have $F(x)= ln(x)+C+O(1/x)$, with C some constant ($\ C=\int_1^\infty \ (f(x)-{1\over x})\ dx$ ). Then consider $h(x)=exp(\alpha F(x)-\alpha C)$. We get ${h(x)\over x^\alpha} = exp(O(1/x))\rightarrow 1$. - Yes, it works. – Wadim Zudilin May 28 '10 at 10:33 It works up to a constant (which is easily dealt with) picked up by the integral. 
– Roland Bacher May 28 '10 at 15:20 As a matter of fact, real entire functions (that is, entire functions that map the real line into itself, or equivalently, functions represented by a power series centered in 0, with real coefficients and radius of convergence infinite) are dense in $C^0({\mathbb R}, \mathbb{R})$ in the sense of the order, that is: Theorem (T.Carleman, 1927). For any two continuous real valued functions f < g there exists a real entire function $\phi$ in between: $f(x)<\phi(x) < g(x)$ for all $x\in\mathbb{R}$. So in particular, an entire function may be asymptotic to any continuous real function, and also, it may grow as fast as any continuous function. - There seems to be an issue with the latex and markdown. The last line of Pietro's box should read: $f(x)<\phi(x)<g(x)$ for all $x\in\mathbb{R}$. – j.c. May 28 '10 at 18:20 @pietro: do you happen to have a reference that theorem (either a modern presentation or a reference to which paper of Carleman)? The internet suggests that it is his 1927 paper Sur un theoreme de Weierstrass in Arkivfor matematik, astronomi och fyski [sorry about lack of accents] but the journal is not available in my library so I'd like to be sure before I make the librarian go get me a copy. – Willie Wong Oct 20 '15 at 14:17 @Willie: yes that is Carleman paper. The proper keyword for a search is "Carleman approximation". For instance this paper : projecteuclid.org/euclid.mmj/1031710533 – Pietro Majer Oct 22 '15 at 7:35 @PietroMajer: much thanks! – Willie Wong Oct 22 '15 at 12:52
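The construction in the accepted answer can be sanity-checked numerically. Since $F(x)-\ln x=-\int_1^x e^{-s}/s\,ds$ converges to the constant $C$, the ratio $h(x)/x^\alpha=\exp\!\big(\alpha(F(x)-\ln x-C)\big)$ must tend to $1$. A rough Python sketch (the step count and the cutoff at $x=200$ are arbitrary choices for illustration):

```python
import math

def F_minus_log(x, steps=200_000):
    """Trapezoid approximation of F(x) - ln(x) = -integral_1^x e^{-s}/s ds,
    where F is a primitive of f(z) = (1 - e^{-z})/z from the answer."""
    h = (x - 1.0) / steps
    def g(s):
        return -math.exp(-s) / s
    total = 0.5 * (g(1.0) + g(x))
    for i in range(1, steps):
        total += g(1.0 + i * h)
    return total * h

alpha = 2.5                     # any non-integral real exponent
C = F_minus_log(200.0)          # numerically close to the limit constant
ratio = math.exp(alpha * (F_minus_log(50.0) - C))  # approximates h(50)/50**alpha
```

Because the integrand decays like $e^{-s}$, the tail beyond $x=50$ is already astronomically small, so the ratio differs from $1$ only by quadrature error.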
## Biancao9o3o12 3 years ago: 0<-8b<12 Solve compound inequality PLEASE THIS IS DUE TODAY!

1. MVLQML: divide each side by -8, so you get (0/-8) < b < (12/-8)

2. Biancao9o3o12: Is that the answer?

3. MVLQML: should be 0 < b < (-3/2)

4. satellite73: divide by $-8$ and flip the inequalities because $-8$ is negative

5. satellite73: no

6. satellite73: there is no such number, for one thing; for another, you have to change the inequalities when dividing by a negative number: $0<-8b<12\iff \frac{0}{-8}>b>\frac{12}{-8}$

7. Biancao9o3o12: I am really sorry, but I only need the answer and how to do it. This is due in an hour, so please, I need the answer and the steps to get it. I know it sounds rude and bad, but I don't get any of this.
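For the record, dividing all three parts of $0<-8b<12$ by $-8$ flips both inequality signs, giving $-\tfrac{3}{2}<b<0$. A quick numeric sanity check of that interval:

```python
# Solving 0 < -8b < 12: dividing every part by -8 (a negative number)
# reverses both inequality signs, so the solution is -3/2 < b < 0.
def satisfies(b):
    return 0 < -8 * b < 12

# Every b strictly inside (-1.5, 0) satisfies the original inequality...
inside = all(satisfies(-1.49 + i * 0.01) for i in range(149))

# ...while the endpoints and points outside the interval do not.
outside = any(satisfies(b) for b in (-2.0, -1.5, 0.0, 1.0))
```

The endpoint checks matter: at $b=-\tfrac{3}{2}$ we get $-8b=12$, which fails the strict `< 12`, and at $b=0$ we get $-8b=0$, which fails the strict `> 0`.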
## Doug Thompson: A NEW BOOK ON THE CANADIAN MUSIC SCENE IS IN BOOK STORES OR ON KINDLES, OR HOWEVER YOU READ BOOKS THESE DAYS

Posted in Opinion on June 15, 2015 by segarini

Before I begin my review, I have to say that I have the utmost respect for anybody who sits down (or stands up for that matter) to write a book…whether fiction or non-fiction.  The amount of time and effort in research and the blood, sweat and tears that it takes to actually write it, can be monumental.  Having contributed to over a dozen books, I know that it takes true dedication and can’t be undertaken lightly.  Writing a book can (and often does) take over your life.
# Line integrals in a vector field

After learning about line integrals in a scalar field, learn how line integrals work in vector fields.

## What we are building to

This animation will be described in more detail below. Animation credit: By Lucas V. Barbosa (Own work) [Public domain], via Wikimedia Commons

Let's say there is some vector field $\blueE{\textbf{F}}$ and a curve $\goldE{C}$ wandering through that vector field. Imagine walking along the curve, and at each step taking the dot product between the following two vectors:

• The vector from the field $\blueE{\textbf{F}}$ at the point where you are standing.
• The displacement vector associated with the next step you take along this curve.

If you add up those dot products, you have just approximated the line integral of $\blueE{\textbf{F}}$ along $\goldE{C}$. The shorthand notation for this line integral is \begin{aligned} \int_C \blueE{\textbf{F}} \cdot \redE{d\textbf{r}} \end{aligned} (Pay special attention to the fact that this is a dot product.) The more explicit notation, given a parameterization $\textbf{r}(t)$ of $\goldE{C}$, is \begin{aligned} \int_a^b \blueE{\textbf{F}(\textbf{r}(t))} \cdot \redE{\textbf{r}'(t)}dt \end{aligned} Line integrals are useful in physics for computing the work done by a force on a moving object. If you parameterize the curve such that you move in the opposite direction as $t$ increases, the value of the line integral is multiplied by $-1$.

## Whale falling from the sky

Let's say we have a whale, whom I'll name Whilly, falling from the sky. Suppose he falls along a curved path, perhaps because the air currents push him this way and that. In this example, I am assuming you are familiar with the idea from physics that a force does work on a moving object, and that work is defined as the dot product between the force vector and the displacement vector.
When there is a force, such as gravity, and an object moving in the region where the force acts, the force is said to "do work" on the object. For example, suppose Whilly falls straight down $100\text{m}$ from the sky (over some specified period of time): The force of gravity on this whale is

\begin{aligned} F &= mg \\ &= (170{,}000\text{kg}) \left(9.8 \frac{\text{m}}{\text{s}^2}\right) \end{aligned}

And the work that gravity does on Whilly is the force times displacement:

\begin{aligned} W &= Fs \\ &= \underbrace{ (170{,}000\text{kg}) \left(9.8 \frac{\text{m}}{\text{s}^2}\right) }_{\text{Force}} \overbrace{ (100\text{m}) }^{\text{displacement}} \end{aligned}

Actually, this formula is not quite right. Suppose Whilly does not move straight down. Perhaps instead his displacement vector is down and to the right, with a $y$-component of $-60\text{m}$ and an $x$-component of $80\text{m}$: Gravity still does work on Whilly, but all that matters is the component of Whilly's displacement vector in the direction of gravity. In other words, work is the dot product between the force vector and the displacement vector.

\begin{aligned} W &= \vec{F}\cdot\vec{s} \\ &= \left( -(170{,}000\text{kg}) \left(9.8 \frac{\text{m}}{\text{s}^2}\right) \hat{\textbf{j}} \right)\cdot \left( 80\text{m}\hat{\textbf{i}} - 60\text{m}\hat{\textbf{j}} \right) \\ &= -(170{,}000\text{kg}) \left(9.8 \frac{\text{m}}{\text{s}^2}\right) (- 60\text{m}) \\ &= (170{,}000)(9.8)(60) \dfrac{\text{kg}\,\text{m}^2}{\text{s}^2} \end{aligned}

In this case, since gravity points purely in the $-\hat{\textbf{j}}$ direction, performing the dot product ends up being the same as pulling out the vertical component of the displacement vector.

Key question: What is the work done on Whilly by gravity as he falls along the curved path $C$? Usually, computing work is done with respect to a straight force vector and a straight displacement vector, so what can we do with this curved path?
You can start by imagining the curve is broken up into many little displacement vectors: Go ahead and give each one of these displacement vectors a name, $\vec{\Delta s}_1$, $\vec{\Delta s}_2$, $\vec{\Delta s}_3$, $\dots$ The work done by gravity along each one of these displacement vectors is the gravity force vector, which I'll denote $\vec{F_g}$, dotted with the displacement vector itself: $\vec{F_g} \cdot \vec{\Delta s}_i$

The total work done by gravity along the entire curve is then estimated by

\begin{aligned} \sum_{n = 1}^N \vec{F_g} \cdot \vec{\Delta s}_n \end{aligned}

But of course, this is calculus, so we don't just look at a specific number of finite steps along the curve $C$. We consider what limiting value this sum approaches as the size of those steps shrinks smaller and smaller. This is captured with the following integral:

\begin{aligned} \int_C \vec{F_g} \cdot \vec{ds} \end{aligned}

This is very similar to line integration in a scalar field, but there is one key difference: The tiny step $\vec{ds}$ is now thought of as a vector, not a scalar length. In the integral above, I wrote both $\vec{F_g}$ and $\vec{ds}$ with little arrows on top to emphasize that they are vectors. A more subtle and more common way to emphasize that these are vector quantities is to write the variable in bold:

\begin{aligned} \int_C \textbf{F}_g \cdot d\textbf{s} \end{aligned}

Key takeaway: The thing we're adding up as we wander along $C$ is not the full value of $\textbf{F}_g$ at each point, but the component of $\textbf{F}_g$ pointed in the same direction as the vector $d\textbf{s}$. That is, the component of force in the direction of the curve.

## Example 1: Putting numbers on Whilly's fall

Let's see how this plays out when we go through the computation.
Suppose the curve of Whilly's fall is described by the parametric function

\begin{aligned} \textbf{s}(t) = \left[ \begin{array}{c} 100(t - \sin(t)) \\ 100(-t - \sin(t)) \\ \end{array} \right] \end{aligned}

The vector $d\textbf{s}$ representing a tiny step along the curve can be given as the derivative of this function, times $dt$: $d\textbf{s} = \dfrac{d\textbf{s}}{dt} dt = \textbf{s}'(t)dt$ If this seems unfamiliar, consider taking a look at the article describing derivatives of parametric functions. The way to visualize this is to think of a tiny increase to the parameter $t$ of size $dt$. This results in a tiny nudge along the curve described by $\textbf{s}(t)$, which is given by the vector $\textbf{s}'(t)dt$.

Evaluating this derivative vector simply requires taking the derivative of each component:

\begin{aligned} \dfrac{d\textbf{s}}{dt} &= \left[ \begin{array}{c} \dfrac{d}{dt} 100(t - \sin(t)) \\\\ \dfrac{d}{dt} 100(-t - \sin(t)) \\\\ \end{array} \right] \\\\ \dfrac{d\textbf{s}}{dt} &= \left[ \begin{array}{c} 100(1 - \cos(t)) \\\\ 100(-1 - \cos(t)) \\\\ \end{array} \right] \end{aligned}

The force of gravity is given by the acceleration $9.8 \dfrac{\text{m}}{\text{s}^2}$ times the mass of Whilly. Not that it matters, but I looked up the typical mass of a blue whale, and it's around $170{,}000\,\text{kg}$, so let's use that number. Since this force is directed purely downward, gravity as a force vector looks like this:

\begin{aligned} \textbf{F}_g &= \left[ \begin{array}{c} 0 \\ -(170{,}000)(9.8) \end{array} \right] \end{aligned}

Let's say we want to find the work done by gravity between times $t = 0$ and $t = 10$. What do you get when you plug all this information into the integral $\int_C \textbf{F}_g \cdot d\textbf{s}$ and evaluate it? Take a moment to try writing this out for yourself before peeking at the answer.
\begin{aligned} W &= \int_C \textbf{F}_g \cdot d\textbf{s} \\\\ &= \int_0^{10} \textbf{F}_g \cdot \textbf{s}'(t)dt \\\\ &= \int_0^{10} \left[ \begin{array}{c} 0 \\ -(170{,}000)(9.8) \end{array} \right] \cdot \left[ \begin{array}{c} 100(1 - \cos(t)) \\ 100(-1 - \cos(t)) \end{array} \right] dt \\\\ &= \int_0^{10} -(170{,}000)(9.8)100(-1 - \cos(t)) dt \\\\ &= 166{,}600{,}000 \int_0^{10} (1 + \cos(t)) dt \\\\ &= 166{,}600{,}000 \left[ t + \sin(t) \right]_0^{10} \\\\ &= 166{,}600{,}000 \left(10 + \sin(10) - (0 + \sin(0))\right) \\\\ &\approx 1.575 \times 10^9 \end{aligned}

(To those physics students among you who notice that it would be easier to just compute the gravitational potential of Whilly at the start and end of his fall and find the difference, you are going to love the topic of conservative fields!)

## Visualizing more general line integrals through a vector field

In the previous example, the gravity vector field is constant. Gravity points straight down with the same magnitude everywhere. With most line integrals through a vector field, the vectors in the field are different at different points in space, so the value dotted against $d\textbf{s}$ changes. The following animation shows what this might look like. (Note, the animation uses the variable $\textbf{r}$ instead of $\textbf{s}$ to parameterize the curve, but of course, it does not make a difference.) Animation credit: By Lucas V. Barbosa (Own work) [Public domain], via Wikimedia Commons

Let's dissect what's going on here. The line integral itself is written as

\begin{aligned} \int_C \blueE{\textbf{F}(\textbf{r})} \cdot \redE{d\textbf{r}} = \int_a^b \blueE{\textbf{F}(\textbf{r}(t))} \cdot \redE{\textbf{r}'(t)}dt \end{aligned}

where

• $\blueE{\textbf{F}}$ is a vector field, associating each point in space with a vector. You can think of this as a force field.
• $\goldE{C}$ is a curve through space.
• $\textbf{r}(t)$ is a vector-valued function parameterizing the curve $\goldE{C}$ in the range $a \le t \le b$.
• $\redE{\textbf{r}'(t)}$ is the derivative of $\textbf{r}$, representing the velocity vector of a particle whose position is given by $\textbf{r}(t)$ while $t$ increases at a constant rate. When you multiply this by a tiny step in time, $dt$, it gives a tiny displacement vector, which I like to think of as a tiny step along the curve. Technically it is a tiny step in the tangent direction to the curve, but for small enough $dt$ this amounts to the same thing.
• Note, in this animation the length of $\redE{\textbf{r}'(t)}$ stays constant. This is not necessarily true for most parameterizations of $\goldE{C}$, which may have you speeding up or slowing down as your position varies according to $\textbf{r}$. For example, Whilly was probably speeding up during his fall, making the velocity vector grow over time.
• The rotating circle in the bottom right of the diagram is a bit confusing at first. It represents the extent to which the vector $\blueE{\textbf{F}(\textbf{r}(t))}$ lines up with the tangent vector $\redE{\textbf{r}'(t)}$. The grey $x$ and $y$ vectors are shown to see how these vectors are oriented relative to the $xy$-plane as a whole.

Concept check: What does the dot product $\greenE{\textbf{F}(\textbf{r}(t)) \cdot \textbf{r}'(t)dt}$ represent? You can think of $\greenE{\textbf{F}(\textbf{r}(t)) \cdot \textbf{r}'(t)dt}$ as $\greenE{dW}$. That is, a tiny amount of work done by the force field $\blueE{\textbf{F}}$ on a particle moving along $\goldE{C}$.

## Example 2: Work done by a tornado

Consider the vector field described by the function

\begin{aligned} \textbf{F}(x, y) &= \left[ \begin{array}{c} -y \\ x \end{array} \right] \end{aligned}

The vector field looks like this: Thought of as a force, this vector field pushes objects in the counterclockwise direction about the origin. For example, maybe this represents the force due to air resistance inside a tornado.
This is a little unrealistic because it would imply that the force continually gets stronger as you move away from the tornado's center, but we can just euphemistically say it's a "simplified model" and continue on our merry way. Suppose we want to compute a line integral through this vector field along a circle of radius $1$ centered at $(2, 0)$. I should point out that orientation matters here. The work done by the tornado force field as we walk counterclockwise around the circle could be different from the work done as we walk clockwise around it (we'll see this explicitly in a bit).

If we choose to consider a counterclockwise walk around this circle, we can parameterize the curve with the function

\begin{aligned} \textbf{r}(t) &= \left[ \begin{array}{c} \cos(t) + 2 \\ \sin(t) \end{array} \right] \end{aligned}

where $t$ ranges from $0$ to $2\pi$. Again, to set up the line integral representing work, you consider the force vector at each point, $\textbf{F}(x, y)$, and you dot it with a tiny step along the curve, $d\textbf{r}$:

\begin{aligned} \int_C \textbf{F} \cdot d\textbf{r} \end{aligned}

### Step 1: Expand the integral

Concept check: Which of the following integrals represents the same thing as $\int_C \textbf{F} \cdot d\textbf{r}$?

### Step 2: Expand each component

Concept check: Based on the definitions above, what is $\textbf{F}(\textbf{r}(t))$? Concept check: What is $\textbf{r}'(t)$?

### Step 3: Solve the integral

Concept check: Put the last three answers together to solve the integral $\int_C \textbf{F} \cdot d\textbf{r}$. This final answer gives the amount of work that the tornado force field does on a particle moving counterclockwise around the circle pictured above.

Reflection question: Why should it be intuitive that this answer is positive? Since the circle is oriented counterclockwise, you walk up the right half, and down the left half.
The vectors touching the right half of the circle are relatively long, and pointing roughly in the same direction that you walk, thus contributing a lot of positive work. The vectors touching the left half of the circle are still pointing roughly up, which is now against the direction you are walking, and hence contribute negative work. However, these vectors are relatively short, so they do not cancel out the positive work done while walking up the right half.

## Orientation matters

What would have happened if in the preceding example, we had oriented the circle clockwise? For instance, we could have parameterized it with the function

\begin{aligned} \textbf{r}(t) &= \left[ \begin{array}{c} \cos(t) + 2 \\ -\sin(t) \end{array} \right] \end{aligned}

You can, if you want, plug this in and work through all the computations to see what happens. However, there is a simpler way to reason about what will happen. In the integral $\int_C \textbf{F} \cdot d\textbf{r}$, each vector $d\textbf{r}$ representing a tiny step along the curve will get turned around to point in the opposite direction.

Concept check: Suppose you have two vectors $\textbf{v}$ and $\textbf{w}$, and $\textbf{v} \cdot \textbf{w} = 3$. You turn $\textbf{v}$ around to point in the opposite direction, getting a new vector $\textbf{v}_{\text{new}} = -\textbf{v}$. What happens to the dot product? $\textbf{v}_{\text{new}} \cdot \textbf{w} = -\textbf{v} \cdot \textbf{w} = -3$

Since the dot product inside the integral gets multiplied by $-1$ when you swap the direction of each $d\textbf{r}$, we can conclude the following:

Key Takeaway: The line integral through a vector field gets multiplied by $-1$ when you reverse the orientation of a curve.
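The tornado example and the orientation rule can both be checked numerically. The following sketch (an addition, not part of the original article) approximates the line integral with a simple midpoint rule for each orientation of the circle:

```python
# Numerical check (added): line integral of F(x, y) = (-y, x) around the
# unit circle centered at (2, 0), once counterclockwise and once clockwise.

import math

def line_integral(r, r_prime, a=0.0, b=2 * math.pi, n=100_000):
    """Midpoint-rule approximation of the integral of F(r(t)) . r'(t) dt."""
    dt = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * dt
        x, y = r(t)
        fx, fy = -y, x          # the tornado field F(x, y) = (-y, x)
        dx, dy = r_prime(t)
        total += (fx * dx + fy * dy) * dt
    return total

ccw = line_integral(lambda t: (math.cos(t) + 2, math.sin(t)),
                    lambda t: (-math.sin(t), math.cos(t)))
cw = line_integral(lambda t: (math.cos(t) + 2, -math.sin(t)),
                   lambda t: (-math.sin(t), -math.cos(t)))

assert abs(ccw - 2 * math.pi) < 1e-6   # counterclockwise: +2*pi
assert abs(cw + 2 * math.pi) < 1e-6    # clockwise: -2*pi, sign flipped
```

Both assertions pass because, for the counterclockwise walk, the integrand works out to $1 + 2\cos(t)$, whose integral over $[0, 2\pi]$ is $2\pi$; reversing the orientation flips the sign.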
## Summary

• The shorthand notation for a line integral through a vector field is \begin{aligned} \int_C \blueE{\textbf{F}} \cdot \redE{d\textbf{r}} \end{aligned}
• The more explicit notation, given a parameterization $\textbf{r}(t)$ of $\goldE{C}$, is \begin{aligned} \int_a^b \blueE{\textbf{F}(\textbf{r}(t))} \cdot \redE{\textbf{r}'(t)}dt \end{aligned}
• Line integrals are useful in physics for computing the work done by a force on a moving object.
• If you parameterize the curve such that you move in the opposite direction as $t$ increases, the value of the line integral is multiplied by $-1$.
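As a closing numerical check (again an addition to the article), the work integral from Example 1 can be approximated directly from the parameterization $\textbf{s}(t)$ and compared against the closed form $166{,}600{,}000\,(10 + \sin 10)$:

```python
# Numerical sanity check (added) of the work integral W from Example 1:
# s(t) = (100(t - sin t), 100(-t - sin t)), F_g = (0, -(170000)(9.8)),
# integrated over t in [0, 10] with a midpoint rule.

import math

def integrand(t):
    # F_g . s'(t): only the y-components contribute since F_g has no x-part.
    fg_y = -170_000 * 9.8
    ds_y = 100 * (-1 - math.cos(t))
    return fg_y * ds_y

n = 100_000
dt = 10 / n
W = sum(integrand((k + 0.5) * dt) for k in range(n)) * dt

# Closed form from the article: 166,600,000 * (10 + sin 10)
W_exact = 166_600_000 * (10 + math.sin(10))
assert abs(W - W_exact) / W_exact < 1e-6
print(f"{W:.4e}")  # ~ 1.575e+09, matching the article's answer
```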
# How do you graph y = abs(x + 4)? Mar 10, 2018 graph{y=|x+4| [-10, 10, -5, 5]}

#### Explanation:

graph{y=|x+4| [-10, 10, -5, 5]} Noting that there's an absolute value sign, the $y$ value will never go below zero: the part of the line $y = x + 4$ that would fall below the $x$-axis is reflected straight up across it, which is what the absolute value does. The result is a V-shaped graph with its vertex at $(-4, 0)$.
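A quick numerical illustration of the answer (added here, not part of the original reply): the graph of $y = |x + 4|$ has its vertex at $(-4, 0)$, is symmetric about $x = -4$, and never dips below the $x$-axis.

```python
# Small check (added) of the key features of y = |x + 4|.

f = lambda x: abs(x + 4)

assert f(-4) == 0                                      # vertex at (-4, 0)
assert all(f(-4 + d) == f(-4 - d) for d in range(10))  # mirror symmetry about x = -4
assert all(f(x) >= 0 for x in range(-20, 21))          # y never goes below zero
print(f(-6), f(-4), f(0))  # 2 0 4
```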
Question

If $A$ is the set of all $x \in \mathbb{R}$ such that $x^{(\log x)^2 - 3\log x + 1} > 1000$, and $A = (a, \infty)$, then $\sqrt{10a}$ will be ___. (Base of $\log x$ is $10$.)

Solution

$A = (1000, \infty)$. Taking $\log_{10}$ of both sides of $x^{(\log x)^2 - 3\log x + 1} > 10^3$ gives $\left((\log x)^2 - 3\log x + 1\right)\log x > 3$. If $\log_{10} x = t$ then we have $(t^2 - 3t + 1)\,t > 3$, or $t^3 - 3t^2 + t - 3 > 0$, or $t(t^2 + 1) - 3(t^2 + 1) > 0$, or $(t^2 + 1)(t - 3) > 0 \Rightarrow t - 3 > 0$ as $t^2 + 1$ is always positive. $\therefore t > 3$, or $\log_{10} x > 3$, $\therefore x > 10^3 = 1000$, $\therefore x \in (1000, \infty)$. Hence $a = 1000$ and $\sqrt{10a} = \sqrt{10000} = 100$.
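The solution can be verified numerically: $h(x) = x^{(\log_{10} x)^2 - 3\log_{10} x + 1}$ equals $1000$ exactly at $x = 1000$ and exceeds $1000$ just above it, confirming $A = (1000, \infty)$. This check is an addition, not part of the original solution:

```python
# Numerical check (added): h(x) crosses 1000 exactly at x = 1000, so the
# solution set is the open interval (1000, inf), giving a = 1000.

import math

def h(x):
    t = math.log10(x)
    return x ** (t * t - 3 * t + 1)

assert abs(h(1000) - 1000) < 1e-6   # boundary: equality, so x = 1000 is excluded
assert h(1001) > 1000               # just inside A
assert h(999) < 1000                # just outside A

a = 1000
assert math.isqrt(10 * a) ** 2 == 10 * a  # 10a = 10000 is a perfect square
print(math.sqrt(10 * a))  # 100.0
```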
# Non-Equilibrium Quantum Dynamics of Ultra-Cold Atomic Mixtures: the Multi-Layer Multi-Configuration Time-Dependent Hartree Method for Bosons

## Abstract

We develop and apply the multi-layer multi-configuration time-dependent Hartree method for bosons, which represents an ab initio method for investigating the non-equilibrium quantum dynamics of multi-species bosonic systems. Its multi-layer feature allows for tailoring the wave function ansatz in order to describe intra- and inter-species correlations accurately and efficiently. To demonstrate the beneficial scaling and the efficiency of the method, we explore the correlated tunneling dynamics of two species with repulsive intra- and inter-species interactions, to which a third species with vanishing intra-species interaction is weakly coupled. The population imbalances of the first two species can feature a temporal equilibration and their time-evolution significantly depends on the coupling to the third species. Bosons of the first and of the second species exhibit a bunching tendency, whose strength can be influenced by their coupling to the third species.

###### pacs: 03.75.Kk, 05.30.Jp, 03.65.-w, 31.15.-p

## 1 Introduction

Due to the high degree of controllability and isolation, trapped ultra-cold atoms serve as an ideal system for observing many-body quantum phenomena [1] and can even be employed to simulate quantum systems of quite a broad physical context [2]. In particular, there is growing interest in the now accessible regime where a mean-field description [3, 4] as given by the Gross-Pitaevskii equation fails. Such states can be realized in e.g. optical lattices [5]. Feshbach [6] or confinement induced resonances [7, 8, 9] can be employed to tune the inter-atomic interaction strength.
In particular, quasi-one-dimensional trapping geometries can enhance correlation effects in the strong interaction regime, leading to fascinating novel phases [10, 8, 11, 12] and quantum phase transitions [13]. As the mean-field theory becomes exact for weak interactions and large particle numbers [14, 3, 4], beyond-mean-field physics can also be expected for small ensembles and finite interaction strengths. The latter regime is experimentally explored in e.g. arrays of decoupled one-dimensional tubes typically containing two to 60 atoms [13]. Therefore, the transition from few- to many-body behaviour is, in particular for the strongly correlated quantum dynamics, a subject of immediate interest. Moreover, it is by now also routinely achievable experimentally to trap and manipulate different components or species, which allows for studying distinguishable subsystems with indistinguishable constituents. Such mixtures can be realized e.g. by preparing alkali atoms in different hyperfine states [15] or by trapping different elements [16]. Due to the interplay between the intra- and inter-species interaction strengths, these systems show a number of intriguing features such as phase separation [17] including symbiotic excitations like interlacing vortex lattices with mutually filled cores [18] and dark-bright solitons [19], spin-charge separation [20], various tunneling effects [21, 22, 23, 24, 25, 26, 27, 28], collective excitations [29, 30] and counterflow and paired superfluidity [31, 32, 33]. The purpose of the present work is to develop a broadly applicable and efficient ab initio method for the quantum dynamics of such mixtures in order to explore the fundamental dynamical processes in trapped ultra-cold multi-species setups and to study the few- to many-body transition. Simulating the quantum dynamics of an interacting many-body system, however, is a tough task in general due to the exponential scaling of the state space with the number of particles.
Besides e.g. the time-dependent density matrix renormalization group approach [34, 35, 36, 37], a promising concept to soften this scaling is based on a many-body wave function expansion with respect to a time-dependent, with the system comoving basis. This idea has been incorporated in the multi-configuration time-dependent Hartree method (MCTDH) [38, 39]. Being based on time-dependent Hartree products as the many-body basis, MCTDH is designed for distinguishable particles, but has also been applied to bosonic few-body systems (e.g. [40, 28]). Later the MCTDH theory has been generalized and extended in several ways: There is the multi-layer MCTDH (ML-MCTDH) method [41, 42, 43], which takes correlations between various subsystems into account and is thus particularly suitable for system-bath problems with distinguishable degrees of freedom (e.g. [44]). Taking the fermionic or bosonic particle exchange (anti-) symmetry in the time-dependent many-body basis into account, MCTDH has been specialized to treat larger fermionic (MCTDHF) [45] or bosonic systems (MCTDHB) [46, 47]. Furthermore, a direct extension of MCTDHB and MCTDHF to treat bose-bose, bose-fermi and fermi-fermi mixtures has been developed [48], including the possibility of particle-conversions [49]. An alternative approach to systems of indistinguishable particles is the so-called ML-MCTDH method in second-quantization representation, which employs the factorization of the many-body Hilbert space into a direct product of Fock spaces [50]. In this work, we derive and apply a novel ab initio approach to the non-equilibrium dynamics of ultra-cold correlated bosonic mixtures, which takes all correlations of the many-body system into account. We call this method the multi-layer multi-configuration time-dependent Hartree method for bosons (ML-MCTDHB). 
The multi-layer structure of our many-body wave function ansatz allows us to adapt our many-body basis to system specific inter- and intra-species correlations, which leads to a beneficial scaling. Moreover, the bosonic exchange symmetry is directly employed for an efficient treatment of the indistinguishable bosonic subsystems. We apply ML-MCTDHB to simulate the correlated tunneling dynamics of a mixture of three bosonic species in a double well trap. It is shown that the dynamics of the population imbalances of the species significantly differ for ultra-weak and vanishing inter-species interaction strengths. In particular, bosons of different kind show a bunching tendency and the inter-species interaction strengths allow for tuning these correlations up to a certain degree. This paper is organized as follows: In section 2, the derivation and properties of the ML-MCTDHB method for bosonic mixtures are presented. ML-MCTDHB is then applied to simulate the complex tunneling behaviour of a mixture of three bosonic species in section 3. Finally, we summarize our results and embed the presented ML-MCTDHB theory for mixtures into a more general framework in section 4.

## 2 The ML-MCTDHB method

Let us consider an ensemble of $S$ bosonic species. In the ultra-cold regime, the interaction between neutral atoms can be modelled by a contact interaction [51, 3, 4]. For simplicity, we restrict ourselves to one-dimensional settings, which can be prepared by energetically freezing out the transversal degrees of freedom [3]. The Hamiltonian of such a mixture with $N_\sigma$ bosons of species $\sigma$ reads:

$$\hat{H}=\sum_{\sigma=1}^{S}\left(\hat{H}^{\sigma}+\hat{V}^{\sigma}\right)+\sum_{1\leq\sigma<\sigma'\leq S}\hat{W}^{\sigma\sigma'}. \tag{1}$$
Here, $\hat{H}^{\sigma}$ denotes the one-body Hamiltonian of the species $\sigma$ containing an in general species-dependent trapping potential $U_\sigma$:

$$\hat{H}^{\sigma}=\sum_{i=1}^{N_\sigma}\left(\frac{(\hat{p}^{\sigma}_{i})^{2}}{2m_\sigma}+U_\sigma(\hat{x}^{\sigma}_{i})\right), \tag{2}$$

and $\hat{V}^{\sigma}$, $\hat{W}^{\sigma\sigma'}$ refer to the intra-species interaction of species $\sigma$ and to the inter-species interaction between $\sigma$ and $\sigma'$ bosons, respectively:

$$\hat{V}^{\sigma}=g_\sigma\sum_{1\leq i<j\leq N_\sigma}\delta(\hat{x}^{\sigma}_{i}-\hat{x}^{\sigma}_{j}), \tag{3}$$

$$\hat{W}^{\sigma\sigma'}=g_{\sigma\sigma'}\sum_{i=1}^{N_\sigma}\sum_{j=1}^{N_{\sigma'}}\delta(\hat{x}^{\sigma}_{i}-\hat{x}^{\sigma'}_{j}). \tag{4}$$

Please note that the intra- and inter-species interaction strengths $g_\sigma$, $g_{\sigma\sigma'}$ have to be properly renormalized with respect to their 3d values as a consequence of the dimensional reduction [7]. We remark that the Hamiltonian may be explicitly time-dependent for studying driven systems.

### 2.1 Wavefunction ansatz

The ML-MCTDHB method is an ab initio approach to the time-dependent Schrödinger equation for systems like (1). To reduce the number of basis states necessary for a fair representation of the total wave function $|\Psi(t)\rangle$, we employ a time-dependent, with the system comoving basis and restrict ourselves to the following class of ansatzes: For each species $\sigma$, we take $M_\sigma$ time-dependent orthonormal species states $|\psi^{(\sigma)}_{i}(t)\rangle$ ($i=1,\dots,M_\sigma$), i.e. states of all the $N_\sigma$ bosons of species $\sigma$, into account. Due to the distinguishability of bosons of different species, the total wave function is expanded in terms of Hartree products of these many-body states:

$$|\Psi(t)\rangle=\sum_{i_1=1}^{M_1}\dots\sum_{i_S=1}^{M_S}A_{i_1,\dots,i_S}(t)\,|\psi^{(1)}_{i_1}(t)\rangle\dots|\psi^{(S)}_{i_S}(t)\rangle. \tag{5}$$

Each species state refers to a system of $N_\sigma$ indistinguishable bosons and should therefore be expanded in terms of bosonic number states $|\vec{n}\rangle^{\sigma}_{t}$:

$$|\psi^{(\sigma)}_{i}(t)\rangle=\sum_{\vec{n}|N_\sigma}C^{\sigma}_{i;\vec{n}}(t)\,|\vec{n}\rangle^{\sigma}_{t}, \tag{6}$$

where we allow each boson to occupy $m_\sigma$ time-dependent single particle functions (SPFs) $|\phi^{(\sigma)}_{j}(t)\rangle$, indicated by the time-dependence of the bosonic number states $|\vec{n}\rangle^{\sigma}_{t}$. The integer vector $\vec{n}=(n_1,\dots,n_{m_\sigma})$ contains the occupation numbers $n_j$ of the $j$-th SPFs such that all $n_j$'s sum up to $N_\sigma$, indicated by the symbol "$\vec{n}|N_\sigma$" in the summation. Summarizing, our wave function ansatz consists of three layers: The expansion coefficients $A_{i_1,\dots,i_S}(t)$ form the top layer.
Then we have the $C^{\sigma}_{i;\vec{n}}(t)$'s on the species layer, which allow the species states to move with the system and, finally on the particle layer, the SPFs $|\phi^{(\sigma)}_{j}(t)\rangle$ allow for rotations of the single particle basis. It is crucial to notice that, in contrast to the standard method of solving the time-dependent Schrödinger equation by propagating expansion coefficients while keeping the basis time-independent, ML-MCTDHB is based on an expansion with respect to a comoving basis with a two-fold time-dependence in terms of the species states $|\psi^{(\sigma)}_{i}(t)\rangle$ and the SPFs $|\phi^{(\sigma)}_{j}(t)\rangle$. This two-fold time-dependence allows for significantly reducing the number of basis states, leading to a very efficient algorithm. Please also note that our ML-MCTDHB approach to mixtures conceptually differs from ML-MCTDH in second-quantization representation [50] by the facts that we only employ two layers, one for the whole species and one for the single bosons, but allow for a time-dependent single particle basis. Having the number of grid points for representing the SPFs fixed, the numbers $M_\sigma$ of species states and $m_\sigma$ of SPFs serve as numerical control parameters: Taking $m_\sigma$ to be equal to the number of grid points and $M_\sigma$ equal to the number of number state configurations, i.e. $M_\sigma=\binom{N_\sigma+m_\sigma-1}{m_\sigma-1}$, the ansatz (5, 6) proves to be numerically exact. Opposite to this full CI limit, the choice $M_\sigma=m_\sigma=1$ leads to the mean-field or Gross-Pitaevskii approximation [3, 4]. In between these two limiting cases, any choice with $m_\sigma$ equal to or smaller than the number of grid points and $1\leq M_\sigma\leq\binom{N_\sigma+m_\sigma-1}{m_\sigma-1}$ is possible, which allows us to adapt our ansatz to system-specific intra- and inter-species correlations. If, for instance, the inter-species interactions are relatively weak compared to the intra-species interactions, a "species mean-field" ansatz with $M_\sigma=1$ but $m_\sigma>1$ might be sufficient.

### 2.2 Equations of motion

Our final task is to find appropriate equations of motion for the ansatz constituents $A_{i_1,\dots,i_S}$, $C^{\sigma}_{i;\vec{n}}$ and $|\phi^{(\sigma)}_{j}\rangle$, whose time dependence we will omit in the notation from now on.
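The size of the number-state basis entering ansatz (6) can be made concrete with a short sketch. The code below is our illustration (the function names and parameter values are not from the paper): it enumerates all bosonic occupation vectors of $N$ bosons in $m$ SPFs via stars-and-bars and confirms that their count is the binomial coefficient $\binom{N+m-1}{m-1}$ quoted in the text as the full-CI dimension:

```python
# Illustrative sketch (added; our naming, not the paper's): enumerate the
# bosonic number states |n1, ..., n_m> with n1 + ... + n_m = N spanning one
# species Hilbert space.  Their count is C(N + m - 1, m - 1).

from itertools import combinations
from math import comb

def number_states(N, m):
    """All occupation vectors (n1, ..., nm) of N bosons in m orbitals."""
    # Stars-and-bars: choose positions of the m-1 "bars" among N + m - 1 slots;
    # the gaps between consecutive bars are the occupation numbers.
    states = []
    for bars in combinations(range(N + m - 1), m - 1):
        prev, occ = -1, []
        for b in bars:
            occ.append(b - prev - 1)
            prev = b
        occ.append(N + m - 1 - prev - 1)
        states.append(tuple(occ))
    return states

states = number_states(N=3, m=3)
assert len(states) == comb(3 + 3 - 1, 3 - 1) == 10  # 10 configurations
assert all(sum(s) == 3 for s in states)             # each conserves N
print(states[:3])
```

The combinatorial growth of this count with $N$ and $m$ is exactly why the multi-layer truncation with $M_\sigma$ species states pays off.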
In order to find the variationally optimal wave function within our class of ansatzes for given $M_\sigma$, $m_\sigma$, we can employ the McLachlan variational principle, which enforces the minimization of the error of our equations of motion with respect to the exact Schrödinger equation [52]. In practice, however, it is easier to work with the Dirac-Frenkel variational principle $\langle\delta\Psi|(i\partial_t-\hat{H})|\Psi\rangle=0$, with $\delta\Psi$ being a variation within our ansatz class [53, 54], which turns out to be equivalent to McLachlan's variational principle on our manifold of wave function ansatzes [55]. The variation of the top layer coefficients gives us the usual linear equation of motion known from matrix mechanics:

$$i\partial_{t}A_{i_1,\dots,i_S}=\sum_{j_1=1}^{M_1}\dots\sum_{j_S=1}^{M_S}\langle\psi^{(1)}_{i_1}\dots\psi^{(S)}_{i_S}|\hat{H}|\psi^{(1)}_{j_1}\dots\psi^{(S)}_{j_S}\rangle\,A_{j_1,\dots,j_S}, \tag{7}$$

where the Hamiltonian matrix with respect to Hartree products of species states becomes time-dependent due to the coupling to the $C$ coefficients and to the SPFs. Its explicit form is given in [56]. Varying the species state expansion coefficients $C^{\sigma}_{i;\vec{n}}$, we obtain the following equations of motion on the species layer:

$$\begin{aligned} i\partial_{t}C^{\sigma}_{i;\vec{n}} = {}^{\sigma}\langle\vec{n}|\,(\mathds{1}-\hat{P}_{1;\sigma})\sum_{\vec{m}|N_\sigma}\Bigg(&\sum_{j,k=1}^{m_\sigma}[h_{\sigma}]_{jk}\,\hat{a}^{\dagger}_{\sigma j}\hat{a}_{\sigma k}\,|\vec{m}\rangle^{\sigma}\,C^{\sigma}_{i;\vec{m}}\\ &+\frac{1}{2}\sum_{j,k,q,p=1}^{m_\sigma}[v_{\sigma}]_{jkqp}\,\hat{a}^{\dagger}_{\sigma j}\hat{a}^{\dagger}_{\sigma k}\hat{a}_{\sigma q}\hat{a}_{\sigma p}\,|\vec{m}\rangle^{\sigma}\,C^{\sigma}_{i;\vec{m}}\\ &+\sum_{\sigma'\neq\sigma}\sum_{s,t=1}^{M_\sigma}\sum_{u,v=1}^{M_{\sigma'}}\sum_{j,k=1}^{m_\sigma}[\eta^{-1}_{1,\sigma}]_{is}\,[\eta_{2,\sigma\sigma'}]_{sutv}\,[w_{\sigma\sigma'}]_{jkuv}\,\hat{a}^{\dagger}_{\sigma j}\hat{a}_{\sigma k}\,|\vec{m}\rangle^{\sigma}\,C^{\sigma}_{t;\vec{m}}\Bigg), \end{aligned} \tag{8}$$

where $\hat{a}_{\sigma j}$ ($\hat{a}^{\dagger}_{\sigma j}$) denotes the bosonic annihilation (creation) operator corresponding to the SPF $|\phi^{(\sigma)}_{j}\rangle$, obeying the canonical commutation relations. $[h_{\sigma}]_{jk}$ and $[v_{\sigma}]_{jkqp}$ represent the matrix elements of the one-body Hamiltonian and the intra-species interaction potential with respect to the SPFs, respectively. The inter-species interaction leads to the mean-field matrix $[w_{\sigma\sigma'}]_{jkuv}$ coupling both SPFs and species states. The reduced density matrix $\eta_{1,\sigma}$ of the species $\sigma$ and the reduced density matrix $\eta_{2,\sigma\sigma'}$ of the subsystem constituted by the species $\sigma$ and $\sigma'$ ($\sigma\neq\sigma'$) enter (8) (cf. (15), (16)). The orthonormality of the species states is ensured by the projector $\hat{P}_{1;\sigma}$.
Formulas for the above ingredients are given in A and an efficient scheme for applying the annihilation and creation operators to the number states can be found in [56] (see also [57] in this context). Finally, the variation of the SPFs leads to the following non-linear integro-differential equations:

$$i\partial_{t}|\phi^{(\sigma)}_{i}\rangle=(\mathds{1}-\hat{P}_{2;\sigma})\Bigg(\hat{h}_{\sigma}|\phi^{(\sigma)}_{i}\rangle+\sum_{j,k,q,p=1}^{m_\sigma}[\rho^{-1}_{1,\sigma}]_{ij}\,[\rho_{2,\sigma\sigma}]_{jkqp}\,[\hat{v}_{\sigma}]_{kq}\,|\phi^{(\sigma)}_{p}\rangle+\sum_{\sigma'\neq\sigma}\sum_{j,q=1}^{m_\sigma}\sum_{k,p=1}^{m_{\sigma'}}[\rho^{-1}_{1,\sigma}]_{ij}\,[\rho_{2,\sigma\sigma'}]_{jkqp}\,[\hat{w}_{\sigma\sigma'}]_{kp}\,|\phi^{(\sigma)}_{q}\rangle\Bigg). \tag{9}$$

Here, $\rho_{1,\sigma}$ denotes the reduced density matrix of a $\sigma$ boson and $\rho_{2,\sigma\sigma}$, $\rho_{2,\sigma\sigma'}$ ($\sigma\neq\sigma'$) refer to the reduced two-body density matrix of two $\sigma$ bosons and of a $\sigma$ and a $\sigma'$ boson, respectively (cf. (19)-(21)). $\hat{h}_{\sigma}$ corresponds to the one-body Hamiltonian and the intra- and inter-species interactions enter these equations of motion in the form of the mean-field operator matrices $[\hat{v}_{\sigma}]_{kq}$ and $[\hat{w}_{\sigma\sigma'}]_{kp}$, respectively. All these ingredients are explicated in B. The projector $\hat{P}_{2;\sigma}$ again ensures the orthonormality of the SPFs. So we have arrived at a set of highly coupled evolution equations (7, 8, 9), whose general properties we analyse in the following section.

### 2.3 Properties of the ML-MCTDHB theory

Derived from the Dirac-Frenkel variational principle, the ML-MCTDHB evolution equations preserve both norm and energy [39]. Moreover, one can show that for a Hamiltonian with a (single particle) symmetry, ML-MCTDHB respects both the symmetry of the SPFs and the symmetry of the many-body state, given that initially the SPFs and the many-body state have a well-defined symmetry [56]. In the full CI limit, i.e. $m_\sigma$ equal to the number of grid points and $M_\sigma=\binom{N_\sigma+m_\sigma-1}{m_\sigma-1}$, the projectors in (8, 9) turn into unit operators such that both the species states and the SPFs become time-independent. In this numerically exact limit, ML-MCTDHB becomes equivalent to the standard method of solving the time-dependent Schrödinger equation by propagating only the $A$-coefficients.
The full CI limit, however, is numerically only manageable for extremely small particle numbers, whereas ML-MCTDHB, being based on a smaller but with the system optimally comoving basis, can treat much larger ensembles. In the opposite mean-field limit $M_\sigma=m_\sigma=1$, the time-dependence of the $A$- and the $C$-coefficients is given by trivial phase factors. With all the various reduced density matrices being equal to the c-number one, the equations (9) just differ from the coupled Gross-Pitaevskii equations of the mean-field theory for mixtures [3, 58] by a physically irrelevant phase factor as a consequence of the projector $\hat{P}_{2;\sigma}$. A converged ML-MCTDHB calculation takes all correlations into account. These correlations can be studied by means of reduced density matrices of various subsystems, which the ML-MCTDHB method provides for free. Single-particle coherence as well as correlations between two bosons of the same or of different species can be unravelled with the help of $\rho_{1,\sigma}$, $\rho_{2,\sigma\sigma}$ and $\rho_{2,\sigma\sigma'}$. The entropy of a species as well as correlations between two species can be deduced from $\eta_{1,\sigma}$ and $\eta_{2,\sigma\sigma'}$, for example. Moreover, an analysis of the natural populations and orbitals of various subsystems both serves as an internal convergence check (see below) [39] and can give physical insights [59]. In the case of just one species and in the full CI limit on the species level, the ML-MCTDHB theory becomes equivalent to MCTDHB [46, 47] and its generalization to mixtures [48], respectively. If, however, fewer species states are sufficient for a converged simulation, ML-MCTDHB proves to have a better scaling. With $n$ being the number of grid points, one has to pay

$$\prod_{\sigma=1}^{S}M_\sigma+\sum_{\sigma=1}^{S}\left(M_\sigma\binom{N_\sigma+m_\sigma-1}{m_\sigma-1}+m_\sigma n\right) \tag{10}$$

complex coefficients for storing a ML-MCTDHB wave function, which should be compared with the costs for a corresponding MCTDHB expansion:

$$\prod_{\sigma=1}^{S}\binom{N_\sigma+m_\sigma-1}{m_\sigma-1}+\sum_{\sigma=1}^{S}m_\sigma n. \tag{11}$$

For a detailed scaling comparison of the MCTDH type methods, we refer to [56].
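The scaling advantage expressed by (10) and (11) is easy to evaluate numerically. The following sketch is our illustration (the particle numbers, orbital numbers and grid size are made-up example values, not taken from the paper):

```python
# Comparison (added, with illustrative parameters) of the coefficient counts
# in equations (10) and (11): ML-MCTDHB versus MCTDHB storage for a mixture.

from math import comb, prod

def ml_mctdhb_size(N, M, m, n):
    """Eq. (10): prod_s M_s + sum_s ( M_s * C(N_s+m_s-1, m_s-1) + m_s * n )."""
    S = len(N)
    return prod(M) + sum(M[s] * comb(N[s] + m[s] - 1, m[s] - 1) + m[s] * n
                         for s in range(S))

def mctdhb_size(N, m, n):
    """Eq. (11): prod_s C(N_s+m_s-1, m_s-1) + sum_s m_s * n."""
    S = len(N)
    return (prod(comb(N[s] + m[s] - 1, m[s] - 1) for s in range(S))
            + sum(m[s] * n for s in range(S)))

# Three species of 10 bosons, 4 SPFs each, 6 species states, 256 grid points.
N, M, m, n = [10, 10, 10], [6, 6, 6], [4, 4, 4], 256

print(ml_mctdhb_size(N, M, m, n))  # 8436: a few thousand coefficients
print(mctdhb_size(N, m, n))        # 23396728: the product of binomials dominates
```

For these example values ML-MCTDHB stores roughly $8\times 10^3$ coefficients while the corresponding MCTDHB expansion requires over $2\times 10^7$, illustrating why truncating on the species layer pays off whenever a moderate number of species states suffices.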
## 3 Application to correlated tunneling dynamics

Let us now explore the tunneling dynamics of three bosonic species, referred to as the A, B and C species in the following, in a double well trap. This setup both unravels interesting correlation effects and illustrates the beneficial scaling of ML-MCTDHB by introducing the extra species layer. In the following, we assume that the three species are realized as different hyperfine states of an alkali element, resulting in equal masses for all the bosons. Furthermore, each species shall consist of bosons and shall experience the very same trapping potential, made of a harmonic trap superimposed with a Gaussian at the trap centre, i.e. in harmonic oscillator units . We choose and for the height and width of the barrier, respectively, which leads to three bands below the barrier, each consisting of two single particle eigenstates. The lowest band is separated by an energy difference of from the first excited band and its level spacing amounts to , leading to a tunneling period of for non-interacting particles. For the contact interaction strengths, we take , and . Furthermore, the C bosons are assumed to have no intra-species interaction, i.e. , but an attractive, vanishing or repulsive coupling to the bosons of species A and B: . Anticipating the results, we will show that this very weak interaction of strength has a significant impact on the correlation between the A and B bosons. As the particle numbers are the same for all species and because of the not too different interaction strengths, we provide for each species the same number of species states, , and SPFs3, . For preparing the initial state of the mixture, we block the right well by means of a high step function potential. All bosons are then put into the ground state of the resulting single particle Hamiltonian and, afterwards, we let the interacting many-body system relax to its ground state by propagating the ML-MCTDHB equations of motion in imaginary time.
Ramping down the step function potential instantaneously, the resulting many-body state is finally propagated in real time in the original double well trap. Afterwards, we infer the probability of a particle to be in a given well, and the probability of finding two particles of the same or of different species in the same well, from the corresponding reduced one-body and two-body density matrices. Here we would like to point out that we do not aim at an exhaustive study of this setup. Rather than showing a systematic parameter scan, we would like to present one striking example of multi-species non-equilibrium dynamics that is hardly accessible with this precision by methods other than ML-MCTDHB, thereby illustrating the beneficial scaling and efficiency of the method. As we shall see, this setup shows very interesting correlation effects.

### 3.1 Short time tunneling dynamics

Let us first focus on the tunneling dynamics for an attractive coupling of the C bosons to the bosons of the A and B species, i.e. , up to time . From figure 1, we see that the A, B and C bosons exhibit Rabi tunneling with respect to the tunneling period on this time interval. The amplitude of the probability oscillations, however, decreases in the course of time for the A and the B bosons. This decrease can be interpreted as a temporal equilibration of the occupation probability of the left well, as one can infer from the inset of figure 1, which shows a somewhat lower accuracy long time propagation (see below). We also clearly see that the decrease of the probability amplitude is a genuine many-body property, not present in the mean-field description via coupled Gross-Pitaevskii equations. In contrast to this, the tunneling amplitude of the C bosons is not damped, and its dynamics in the many-body calculation coincides with the mean-field description. This is a consequence of the vanishing intra-species and the very weak inter-species interaction strength.
A further phenomenon not captured in the mean-field picture is unravelled in figure 2: the probabilities for finding two bosons of the same species in the same well oscillate between 0.5 and 1.0 with the frequency in the mean-field calculation. In the many-body calculation, however, the probability for finding two A or B bosons in the same well features damped oscillations leading to a saturation at 0.73, which indicates a bunching tendency, while the probability of finding two C bosons keeps oscillating between 0.5 and 1.0. To assess the convergence of the simulation, the ML-MCTDHB calculation for is compared with the results for in figures 1 and 2. The single particle probabilities show an excellent agreement. Only for the joint probability of finding two particles of the same or different (not shown) kind in the same well are there marginal deviations. Hence, we can regard the simulation as converged. This judgement is also supported by the time evolution of the natural populations: from figure 3, we infer that most of the time only two natural orbitals significantly contribute to the reduced density matrix of the whole species A and, hence, to the total wave function. For times larger than , a third species state gains a weight of more than 1%. Thus, far fewer species states than , i.e. than in the full CI limit on the species layer, are enough for a fair representation of the total wave function. Figure 3 shows that the initially fully condensed state of the A bosons evolves into a two-fold fragmented state. Increasing the number of particle SPFs from to just leads to a reshuffling of the third-highest natural population without affecting the results. The natural populations corresponding to a B boson show a similar behaviour due to the similar intra-species interaction strengths (not shown). In contrast to this, the C bosons stay in a condensed state and become depleted only by in the long time propagation up to (not shown).
Please note that the extra species layer is crucial for this convergence check: our simulation lasted roughly a week4, while a corresponding MCTDHB calculation would require propagating 146 times as many coefficients.

### 3.2 Long time propagation and build-up of correlations

Now let us explore the tunneling on longer time scales with a somewhat lower accuracy calculation, choosing and . A comparison with a , simulation shows only very small, quantitative deviations in the observables under consideration (plots not shown). In the inset of figure 1, the time evolution of the probability to find an A boson in the left well is shown for four different situations, namely with , or and . Although all the inter-species interaction strengths are much smaller than and , their concrete values have a strong influence on the tunneling dynamics: for vanishing inter-species interactions, there is a partial revival of the tunneling oscillation after a temporal equilibration. In the case of , only for a vanishing or attractive coupling of the C species to the other species can one observe such a temporal equilibration with a subsequent partial tunneling revival, which, however, has a smaller amplitude in comparison to the former case. A repulsive coupling between the C and the other species does not result in a complete temporal equilibration to a probability of but rather leads to a reduction of the amplitude of the oscillations around the equilibration value to . While the B bosons show a dynamics similar to the A bosons, the C bosons tunnel almost unaffected by the inter-species interactions and exhibit mean-field tunneling oscillations (plots not shown). In order to measure the correlations between different species, we compare the conditional probability of finding a boson in e.g.
the left well given that a boson has already been found there with the marginal probability of finding a boson in the left well: Let denote the probability for finding a and a boson in the left well, and let () be the probability for finding a () boson in the left well. Then the above-mentioned correlation measure reads and we similarly define for the right well. Please note that is a straightforward extension of the diagonal elements of the coherence / correlation measure [60] to spatially discrete systems with distinguishable components. The dynamics of the centre of mass positions of the and the species has an impact on and , of course. In order to diminish this impact, we finally construct our correlation measure for finding a and a boson in the same well as . If the and the bosons tunnel independently, will be unity. A value greater (smaller) than one indicates an overall bunching (anti-bunching) tendency. In figure 4, we find that a bunching tendency between an A and a B boson clearly builds up, with a maximal correlation measure of up to above unity. For , this bunching tendency turns out to be most intense for the repulsive coupling of the C bosons to the other species and becomes least intense for an attractive coupling. In the absence of inter-species interactions with the C species, i.e. if is the only non-vanishing inter-species interaction strength, the correlation measure lies mostly in between these two curves. For a large fraction of the propagation time, the coupling of the C bosons to the other two species can thus control the inter-species correlations between species A and B up to a certain degree. Because the C bosons approximately perform Rabi oscillations with respect to the occupation probability of the left well, one might come to the conclusion that the C bosons provide a time-dependent potential for the other two species, hardly experiencing a backaction on the considered time-scale.
That this descriptive picture can only be approximately valid up to a certain time can be inferred from the second largest natural population of , which monotonically increases up to () and (), respectively.

## 4 Conclusion and outlook

We have presented a novel ab initio method for simulating the non-equilibrium dynamics of mixtures of ultra-cold bosons. In particular, ML-MCTDHB is suitable for dealing with explicitly time-dependent systems, which will be explored in future works. Being based on an expansion in terms of permanents and on a multi-layer ansatz, our ML-MCTDHB method takes the bosonic exchange symmetry within each species optimally and efficiently into account and allows for adapting the ansatz to system-specific intra- and inter-species correlations. Hereby, the numbers of provided single particle functions and species states serve as control parameters for ensuring convergence. For any choice of these numbers of basis functions, ML-MCTDHB rotates the species states and single particle functions such that one obtains a variationally optimal representation of the many-body wave function at any instant in time. This allows one to achieve convergence with a much smaller basis than methods based on a time-independent basis. Moreover, if the inter-species interactions are not too strong, i.e. do not require considering as many species states as there are number state configurations for a given number of single particle functions, ML-MCTDHB proves to have a much better scaling than the best state-of-the-art method MCTDHB [46, 47, 48]. In the case of only a single species state and one single particle function, ML-MCTDHB reduces to coupled Gross-Pitaevskii equations. Employing ML-MCTDHB for a tunneling scenario of three species, we have entered a parameter regime which is hardly accessible by other methods with such controlled precision.
Our simulations show that the imbalances of the populations can feature a temporal equilibration with a subsequent revival of the population oscillations, where the duration of and the fluctuations around the equilibration state as well as the degree of completeness of the revival crucially depend on the inter-species interaction strengths. In our setup, we have furthermore found two-body bunching correlations between the first two species. The strength of this correlation can be tuned by a weak attractive or repulsive coupling of the third species (which has no intra-species interaction) to the first two species, without significantly altering the tunneling dynamics of that third species. In this paper, ML-MCTDHB has been formulated for systems confined by quasi-one-dimensional traps and interacting via contact interactions. A direct generalization to arbitrary dimensions and interaction potentials is of course possible. Moreover, it is also feasible to generalize ML-MCTDHB further by applying the multi-layering concept on the level of the single particle functions, which allows for optimally describing bosons in quasi-one- or two-dimensional traps embedded in three-dimensional space, with or without internal degrees of freedom [56]. Incorporating internal degrees of freedom on the level of the SPFs then allows for taking particle-converting interactions into account.

The authors would like to thank Hans-Dieter Meyer and Jan Stockhofe for fruitful discussions on MCTDH methods and symmetry conservation. In particular, the authors would like to thank Jan Stockhofe for the DVR implementation of the ML-MCTDHB code. S.K. gratefully acknowledges financial support by the Studienstiftung des deutschen Volkes. L.C. and P.S. gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft in the framework of the SFB 925 “Light induced dynamics and control of correlated quantum systems”.
## Appendix A Ingredients for the evolution equations of the species states

The matrix elements entering (8) read:

$$[h_\sigma]_{jk}=\langle\varphi^{(\sigma)}_j|\Big[\frac{\hat p_\sigma^2}{2m_\sigma}+U_\sigma(\hat x_\sigma)\Big]|\varphi^{(\sigma)}_k\rangle, \tag{12}$$

$$[v_\sigma]_{jkqp}=g_\sigma\,\langle\varphi^{(\sigma)}_j\varphi^{(\sigma)}_k|\delta(\hat x_{\sigma 1}-\hat x_{\sigma 2})|\varphi^{(\sigma)}_q\varphi^{(\sigma)}_p\rangle, \tag{13}$$

$$[w_{\sigma\sigma'}]_{jkuv}=g_{\sigma\sigma'}\sum_{\vec l\,|\,N_{\sigma'}-1}\;\sum_{q,p=1}^{m_{\sigma'}}\langle\varphi^{(\sigma)}_j\varphi^{(\sigma')}_q|\delta(\hat x_{\sigma 1}-\hat x_{\sigma' 2})|\varphi^{(\sigma)}_k\varphi^{(\sigma')}_p\rangle\,Q_{\vec l}(q,p)\,\big(C^{\sigma'}_{u;\vec l+\hat q}\big)^*C^{\sigma'}_{v;\vec l+\hat p}, \tag{14}$$

where “$\sum_{\vec l\,|\,N}$” refers to summation over all occupation number vectors $\vec l$ with occupation numbers summing up to $N$, and $\hat q$ represents an occupation number vector with vanishing entries except for the $q$-component, which is set to one. The reduced density matrix corresponding to the $\sigma$ species can be calculated as:

$$[\eta_{1,\sigma}]_{is}=\sum_{J^\sigma}\big(A_{J^\sigma_i}\big)^*A_{J^\sigma_s}, \tag{15}$$

where the summation runs over all indices except for the $\sigma$ index, which is fixed to be $i$ ($s$) in the multi-index $J^\sigma$. For inversion, $\eta_{1,\sigma}$ has to be regularized [39]. In analogy, the reduced density matrix of the subsystem constituted by the $\sigma$ and $\sigma'$ species is given as:

$$[\eta_{2,\sigma\sigma'}]_{sutv}=\sum_{J^{\sigma\sigma'}}\big(A_{J^{\sigma\sigma'}_{su}}\big)^*A_{J^{\sigma\sigma'}_{tv}}, \tag{16}$$

where the summation runs over all indices except for the $\sigma$ and $\sigma'$ index, which are fixed to be $s$, $u$ and $t$, $v$ in $J^{\sigma\sigma'}$.

## Appendix B Ingredients for the evolution equations of the SPFs

In the particle layer, the mean-field operator matrices for the intra- and inter-species interaction are given as:

$$[\hat v_\sigma]_{kp}=g_\sigma\int\mathrm{d}x\,\big(\varphi^{(\sigma)}_k(x)\big)^*\varphi^{(\sigma)}_p(x)\,\delta(x-\hat x_\sigma), \tag{17}$$

$$[\hat w_{\sigma\sigma'}]_{kp}=g_{\sigma\sigma'}\int\mathrm{d}x\,\big(\varphi^{(\sigma')}_k(x)\big)^*\varphi^{(\sigma')}_p(x)\,\delta(x-\hat x_\sigma). \tag{18}$$

The one-body reduced density matrix of a $\sigma$ boson, which also has to be regularized [39], can be calculated as:

$$[\rho_{1,\sigma}]_{ij}=\frac{1}{N_\sigma}\sum_{u,v=1}^{M_\sigma}[\eta_{1,\sigma}]_{uv}\sum_{\vec l\,|\,N_\sigma-1}Q_{\vec l}(i,j)\,\big(C^{\sigma}_{u;\vec l+\hat i}\big)^*C^{\sigma}_{v;\vec l+\hat j}. \tag{19}$$

For the reduced density matrices of two $\sigma$ bosons and of a $\sigma$ and a $\sigma'$ boson ($\sigma'\neq\sigma$), one has the following expressions:

$$[\rho_{2,\sigma\sigma}]_{jkqp}=\frac{1}{N_\sigma}\sum_{u,v=1}^{M_\sigma}[\eta_{1,\sigma}]_{uv}\sum_{\vec l\,|\,N_\sigma-2}P_{\vec l}(j,k)\,P_{\vec l}(q,p)\,\big(C^{\sigma}_{u;\vec l+\hat j+\hat k}\big)^*C^{\sigma}_{v;\vec l+\hat q+\hat p}, \tag{20}$$

$$[\rho_{2,\sigma\sigma'}]_{jkqp}=\frac{1}{N_\sigma}\sum_{s,t=1}^{M_\sigma}\sum_{u,v=1}^{M_{\sigma'}}[\eta_{2,\sigma\sigma'}]_{sutv}\sum_{\vec l\,|\,N_\sigma-1}\;\sum_{\vec m\,|\,N_{\sigma'}-1}Q_{\vec l}(j,q)\,Q_{\vec m}(k,p)\,\big(C^{\sigma}_{s;\vec l+\hat j}\big)^*C^{\sigma}_{t;\vec l+\hat q}\,\big(C^{\sigma'}_{u;\vec m+\hat k}\big)^*C^{\sigma'}_{v;\vec m+\hat p}, \tag{21}$$

with $\delta$ denoting the Kronecker delta function.
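As a sketch of how a species reduced density matrix of the type in (15) can be evaluated in practice, the following toy example contracts a randomly chosen, normalized top-layer coefficient tensor over all species indices but one. The tensor dimensions here are illustrative assumptions, not those of any calculation in this work:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy top-layer coefficient tensor A for three species with
# (3, 4, 2) species states; normalized like a wave function.
A = rng.normal(size=(3, 4, 2)) + 1j * rng.normal(size=(3, 4, 2))
A /= np.linalg.norm(A)

# Analogue of (15): reduced density matrix of the first species --
# keep its index and sum over all remaining species indices.
eta1 = np.einsum('ijk,sjk->is', A.conj(), A)

# eta1 is Hermitian with unit trace, as a density matrix must be.
assert np.allclose(eta1, eta1.conj().T)
assert np.isclose(np.trace(eta1), 1.0)
```

Diagonalizing such a matrix yields the natural populations used as a convergence check in section 3.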
We remark that the ML-MCTDHB code employs a different strategy than MCTDHB [61] for evaluating the various density matrices and the action of annihilation and creation operators on number states (cf. (8)) and refer to [56] for the details.

### Footnotes

1. In the following, we will use the term species irrespectively of whether it refers to different elements, isotopes or internal states of an isotope.
2. For distinguishable particles. Indistinguishable particles lead to a binomial scaling.
3. The SPFs are represented by means of a harmonic discrete variable representation (DVR) [39].
4. For gridpoints on an Intel® Xeon® CPU E5530 with 2.40GHz.

### References

1. Bloch I, Dalibard J, and Zwerger W. Rev. Mod. Phys., 80:885, 2008.
2. Bloch I, Dalibard J, and Nascimbéne S. Nat. Phys., 8:267, 2012.
3. Pethick CJ and Smith H. Bose-Einstein Condensates in Dilute Gases. Cambridge University Press, 2nd edition, 2008.
4. Stringari S and Pitaevskii LP. Bose-Einstein Condensation. Oxford University Press, 2003.
5. Greiner M, Mandel O, Esslinger T, Hänsch TW, and Bloch I. Nature, 415:39, 2002.
6. Chin C, Grimm R, Julienne P, and Tiesinga E. Rev. Mod. Phys., 82:1225, 2010.
7. Olshanii M. Phys. Rev. Lett., 81:938, 1998.
8. Kinoshita T, Wenger T, and Weiss DS. Science, 305:1125, 2004.
9. Moritz H, Stöferle T, Günter K, Köhl M, and Esslinger T. Phys. Rev. Lett., 94:210401, 2005.
10. Girardeau M. J. Math. Phys., 1:516, 1960.
11. Paredes B, Widera A, Murg V, Mandel O, Fölling S, Cirac I, Shlyapnikov GV, Hänsch TW, and Bloch I. Nature, 429:277, 2004.
12. Haller E, Gustavsson M, Mark MJ, Danzl JG, Hart R, Pupillo G, and Nägerl H-C. Science, 325:1224, 2009.
13. Haller E, Hart R, Mark MJ, Danzl JG, Reichsöllner L, Gustavsson M, Dalmonte M, Pupillo G, and Nägerl H-C. Nature, 466:597, 2010.
14. Lieb EH and Seiringer R. Phys. Rev. Lett., 88:170409, 2002.
15. Myatt CJ, Burt EA, Ghrist RW, Cornell EA, and Wieman CE. Phys. Rev. Lett., 78:586, 1997.
16.
Modugno G, Ferrari G, Roati G, Brecha RJ, Simoni A, and Inguscio M. Science, 294:1320, 2001.
17. Hall DS, Matthews MR, Ensher JR, Wieman CE, and Cornell EA. Phys. Rev. Lett., 81:1539, 1998.
18. Schweikhard V, Coddington I, Engels P, Tung S, and Cornell EA. Phys. Rev. Lett., 93:210403, 2004.
19. Becker C, Stellmer S, Soltan-Panahi P, Dörscher S, Baumert M, Richter E-M, Kronjäger J, Bongs K, and Sengstock K. Nat. Phys., 4:496, 2008.
20. Kleine A, Kollath C, McCulloch IP, Giamarchi T, and Schollwöck U. Phys. Rev. A, 77:013607, 2008.
21. Sun B and Pindzola MS. Phys. Rev. A, 80:033616, 2009.
22. Juliá-Díaz B, Guilleumas M, Lewenstein M, Polls A, and Sanpera A. Phys. Rev. A, 80:023616, 2009.
23. Satija II, Balakrishnan R, Naudus P, Heward J, Edwards M, and Clark CW. Phys. Rev. A, 79:033616, 2009.
24. Naddeo A and Citro R. J. Phys. B: At. Mol. Phys., 43:135302, 2010.
25. Pflanzer AC, Zöllner S, and Schmelcher P. J. Phys. B: At. Mol. Opt. Phys., 42:231002, 2009.
26. Pflanzer AC, Zöllner S, and Schmelcher P. Phys. Rev. A, 81:023612, 2010.
27. Chatterjee B, Brouzos I, Cao L, and Schmelcher P. Phys. Rev. A, 85:013611, 2012.
28. Cao L, Brouzos I, Chatterjee B, and Schmelcher P. New Journal of Physics, 14:093011, 2012.
29. Maddaloni P, Modugno M, Fort C, Minardi F, and Inguscio M. Phys. Rev. Lett., 85:2413, 2000.
30. Modugno M, Dalfovo F, Fort C, Maddaloni P, and Minardi F. Phys. Rev. A, 62:063607, 2000.
31. Kuklov AB and Svistunov BV. Phys. Rev. Lett., 90:100401, 2003.
32. Hu A, Mathey L, Danshita I, Tiesinga E, Williams CJ, and Clark CW. Phys. Rev. A, 80:023619, 2009.
33. Hu A, Mathey L, Tiesinga E, Danshita I, Williams CJ, and Clark CW. Phys. Rev. A, 84:041609, 2011.
34. White SR and Feiguin AE. Phys. Rev. Lett., 93:076401, 2004.
35. Daley AJ, Kollath C, Schollwöck U, and Vidal G. Journal of Statistical Mechanics: Theory and Experiment, page P04005, 2004.
36. Schollwöck U. J. Phys. Soc. Jpn., 74S:246, 2005.
37. Schollwöck U. Ann. Phys. (NY), 326:96, 2011.
38.
Meyer H-D, Manthe U, and Cederbaum LS. Chem. Phys. Lett., 165:73, 1990.
39. Beck MH, Jäckle A, Worth GA, and Meyer H-D. Phys. Rep., 324:1, 2000.
40. Zöllner S, Meyer H-D, and Schmelcher P. Phys. Rev. Lett., 100:040401, 2008.
41. Wang H and Thoss M. J. Chem. Phys., 119:1289, 2003.
42. Manthe U. J. Chem. Phys., 128:164116, 2008.
43. Vendrell O and Meyer H-D. J. Chem. Phys., 134:044135, 2011.
44. Wang H and Shao J. J. Chem. Phys., 137:22A504, 2012.
45. Zanghellini J, Kitzler M, Fabian C, Brabec T, and Scrinzi A. Laser Phys., 13:1064, 2003.
46. Streltsov AI, Alon OE, and Cederbaum LS. Phys. Rev. Lett., 99:030402, 2007.
47. Alon OE, Streltsov AI, and Cederbaum LS. Phys. Rev. A, 77:033613, 2008.
48. Alon OE, Streltsov AI, Sakmann K, Lode AUJ, Grond J, and Cederbaum LS. Chem. Phys., 401:2, 2012.
49. Alon OE, Streltsov AI, and Cederbaum LS. Phys. Rev. A, 79:022503, 2009.
50. Wang H and Thoss M. J. Chem. Phys., 131:024114, 2009.
51. Huang K and Yang CN. Phys. Rev., 105:767, 1957.
52. McLachlan AD. Mol. Phys., 8:39, 1963.
53. Dirac PAM. Proc. Cambridge Philos. Soc., 26:376, 1930.
54. Frenkel J. Wave Mechanics. Clarendon Press, Oxford, 1934.
55. Broeckhove J, Lathouwers L, Kesteloot E, and Van Leuven P. Chem. Phys. Lett., 149:547, 1988.
56. Cao L, Krönke S, Vendrell O, and Schmelcher P. to be published.
57. Streltsov AI, Alon OE, and Cederbaum LS. Phys. Rev. A, 81:022124, 2010.
58. Kevrekidis PG, Frantzeskakis DJ, and Carretero-González R, editors. Emergent Nonlinear Phenomena in Bose-Einstein Condensates, volume 45 of Springer Series on Atomic, Optical, and Plasma Physics. Springer Berlin / Heidelberg, 2008.
59. Sakmann K, Streltsov AI, Alon OE, and Cederbaum LS. Phys. Rev. A, 78:023615, 2008.
60. Glauber RJ. Phys. Rev., 130:2529, 1963.
61. Streltsov AI, Alon OE, and Cederbaum LS. Phys. Rev. A, 81:022124, 2010.
How do you evaluate tan(arccos(2/3))?

Jun 10, 2015

$\tan\left(\arccos\left(\frac{2}{3}\right)\right) = \frac{\sqrt{5}}{2}$.

Explanation:

Let $\alpha = \arccos\left(\frac{2}{3}\right)$. $\alpha$ isn't a known value, but it's about 48.19°.

$\tan\alpha = \frac{\sin\alpha}{\cos\alpha}$

We can say something about $\cos\alpha$ and $\sin\alpha$:

$\cos\alpha = \frac{2}{3}$

$\sin\alpha = \sqrt{1 - \cos^2\alpha}$ (by the first fundamental relation*).

So $\sin\alpha = \sqrt{1 - \frac{4}{9}} = \frac{\sqrt{5}}{3}$.

$\tan\alpha = \frac{\sin\alpha}{\cos\alpha} = \frac{\frac{\sqrt{5}}{3}}{\frac{2}{3}} = \frac{\sqrt{5}}{2}$

So $\tan\left(\arccos\left(\frac{2}{3}\right)\right) = \frac{\sqrt{5}}{2}$.

*The first fundamental relation: $\cos^2\alpha + \sin^2\alpha = 1$, from which we can get $\sin\alpha$: $\sin^2\alpha = 1 - \cos^2\alpha$, so $\sin\alpha = \pm\sqrt{1 - \cos^2\alpha}$. But in this case we consider only the positive value.
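A quick numerical check of this result, sketched with Python's standard math module:

```python
import math

alpha = math.acos(2 / 3)       # the angle arccos(2/3), in radians
print(math.degrees(alpha))     # roughly 48.19 degrees, as stated above

# tan(arccos(2/3)) should equal sqrt(5)/2
assert math.isclose(math.tan(alpha), math.sqrt(5) / 2, rel_tol=1e-12)
```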
# The “unsigned for value range” antipattern

## Background: Signed Integers Are (Not Yet) Two’s Complement

At the WG21 committee meeting which is currently underway in Jacksonville, JF Bastien will be presenting a proposal to make C++’s int data type wrap around on overflow. That is, where today the expression INT_MAX + 1 has undefined behavior, JF would like to see that expression formally defined to come out equal to INT_MIN. I have written, but not submitted, a “conservative” fork of JF’s proposal, in which I eliminate the INT_MAX + 1 == INT_MIN part but leave some of the good stuff (such as -1 << 1 == -2). I’m not going to talk about the good stuff right now. (If you’re not a C++ compiler writer or committee member, you probably assume you’re getting all of the good stuff already, and would be surprised to learn how much of it is still undefined.) Anyway. On a mailing list for WG21 Study Group 12, Lawrence Crowl writes in defense of INT_MAX + 1 == undefined, and I agree with him:

> If integer overflow is undefined behavior, then it is wrong. Tools can detect wrong programs and report them. If integer overflow is wrapping, then one never knows whether or not the programmer is relying on wrapping or would be surprised at wrapping. No diagnostic is possible.

Another commenter in the same thread, Myriachan, gave the example of

    uint16_t x = 0xFFFF; // 65535
    x = (x * x);

In today’s C++, on “typical modern platforms” where int is 32 bits, the expression (x * x) has undefined behavior. This is because after integral promotion promotes uint16_t to int, the result is equivalent to (int(65535) * int(65535)), and the product of 65535 with itself — that is, 4294836225 — is not representable in a signed int. So we have signed integer overflow and undefined behavior. I can think of three ways to fix this:

• Eliminate the integral promotions entirely. x * x becomes simply uint16_t(4294836225), i.e., uint16_t(1).
• Tweak the integral promotions so that they preserve signedness.
x * x becomes unsigned(x) * unsigned(x), i.e., 4294836225u.
• Adopt something like JF Bastien’s proposal to make integer overflow well-defined. x * x becomes well-defined and equal to int(-131071).

## The “unsigned for value range” antipattern

Lawrence wrote back:

> So the application intended modular arithmetic? I was concerned about the normal case where unsigned is used to constrain the value range, not to get modular arithmetic.

Now, in my not-so-humble opinion, if anyone is using unsigned types “to constrain the value range,” they are doing computers wrong. That is not what signed versus unsigned types are for. As Lawrence himself wrote:

> If integer overflow is undefined behavior, then it is wrong. Tools can detect wrong programs and report them.

The contrapositive is: “If the programmer is using a type where integer overflow is well-defined to wrap, then we can assume that the program relies on that wrapping behavior” — because there would otherwise be a strong incentive for the programmer to use a type that detects and reports unintended overflow. The original design for the STL contained the “unsigned for value range” antipattern. Consequently, they ran into trouble immediately: for example, std::string::find returns an index into the string, naturally of type std::string::size_type. But size_type is unsigned! So instead of returning “negative 1” to indicate the “not found” case, they had to make it return size_type(-1), a.k.a. std::string::npos — which is a positive value! This means that callers have to write cumbersome things such as

    if (s.find('k') != std::string::npos)

where it would be more natural to write

    if (s.find('k') >= 0)

This is sort of parallel to my quotation of Lawrence above: If every possible value in the domain of a given type is a valid output (e.g. from find), then there is no value left over with which the function can signal failure at runtime. And if every possible value in the domain is a valid input (e.g.
to malloc), then there is no way for the function to detect incorrect input at runtime. If it weren’t for the STL’s size_type snafu continually muddying the waters for new learners, I doubt people would be falling into the “unsigned for value range” antipattern anymore. For more information on the undesirability of “unsigned for value range” and the general desirability of “signed size_type” going forward in C++, see: Posted 2018-03-13
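The arithmetic in the x * x example above can be checked numerically. This sketch uses Python (whose integers never overflow), so the uint16_t wraparound is modeled explicitly with a 16-bit mask:

```python
x = 0xFFFF                      # 65535, the largest uint16_t value

product = x * x                 # exact product: 4294836225
assert product == 4294836225

# uint16_t wraparound: keep only the low 16 bits
assert product & 0xFFFF == 1    # matches uint16_t(4294836225) == 1

# in 32-bit unsigned arithmetic, 4294836225u fits without wrapping
assert product < 2**32
```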
# Problem

Two folders contain files named Efile that store lines starting with hex codes. Print those hex codes that occur in both folders.

# Solution

```python
import os
import sys
import re

def process_cache(cache):
    event_code_set = set()
    for file_name, multi_line_content in cache.items():
        if file_name.endswith('Efile'):
            for line in multi_line_content.splitlines():
                line = line.rstrip('\\')  # trailing backslash, if exist
                if bool(re.search(r'^0[xX][0-9a-fA-F]+', line)):
                    # Take the hexcode
                    obj = re.search(r'^0[xX][0-9a-fA-F]+', line)
                    event_code_set.add(obj.group(0))
    return event_code_set

def scan_files(dir):
    cache = {}
    for root, dirs, files in os.walk(dir):
        for name in files:
            if name in ('Efile'):
                path = os.path.join(root, name)
                with open(path, 'r') as file:
                    cache[path] = file.read()
    return cache

cache1 = scan_files(sys.argv[1])
cache2 = scan_files(sys.argv[2])
cache1_event_code_set = process_cache(cache1)
cache2_event_code_set = process_cache(cache2)
overlap_event_codes = cache1_event_code_set & cache2_event_code_set
print(overlap_event_codes)
```

Three typical entries in a file, with comments (#):

```
0x00010d35 D 11 G 3,0x10009,N R XY.Condition, "({a 0x40001} == {H 0x166}) && ({a 0x11ff8} == {I 15})","0x0763ffc2 "
# Below event codes are from vendor xyz
0x84900c5 M 22 Y 1,0x03330469,4,5,6,7,8
0x04b60ff6 L 50 U \
    0x04c60e07,102 && ({a 0x11ff8} == {I 15})","0x0763ffc2 "
```

Picking 0x00010d35, 0x04b60ff6 & 0x84900c5 is the task. The rest of each line is supposed to be ignored. Some entries are multi-line with a trailing backslash. Each file is megabytes in size. Total files in both folders: 80.

1) Please suggest an optimization in the code below, because the pattern check is done twice:

```python
if bool(re.search(r'^0[xX][0-9a-fA-F]+', line)):
    # Take the hexcode
    obj = re.search(r'^0[xX][0-9a-fA-F]+', line)
```

2) Please suggest coding style optimizations for the tree walk, the cache and the command line arguments.

• Returning cache is a bug and wrong practice, despite it works...
cache1 should not point to local cache Jul 21 '18 at 12:48
• Please do not update the code in your question to incorporate feedback from answers, doing so goes against the Question + Answer style of Code Review. This is not a forum where you should keep the most updated version in your question. Please see what you may and may not do after receiving answers. – Mast Jul 21 '18 at 13:46

## Possible bugs

• In one place, you test for if file_name.endswith('Efile'), and elsewhere, you test for if name in ('Efile'). What do you really intend to do? Are they supposed to be the same test? Can't you just test the filename once? The name in ('Efile') test is particularly weird, since 'fil' would pass the test.
• Your regex is case insensitive, which implies that you expect to be able to handle both uppercase and lowercase hex strings. But your set operations would treat uppercase and lowercase versions of the same hex code as distinct from each other, which is counterintuitive. Your sample file shows that the hex codes may have leading zeroes (e.g. 0x00010d35), and that the codes may have varying lengths. So, 0x00010d35 would be treated differently from 0x10d35, which is counterintuitive. The solution to both of these issues is to normalize the codes when constructing the set. One way would be to strip any leading zeroes and convert the string to lowercase. A more efficient solution would be to parse them as integers, since integer comparisons would be more efficient than string comparisons.
• What's the point of line = line.rstrip('\\')? But what's the point of meddling with the ends of the lines, when you only care about the beginnings of the lines? On the other hand, your example file suggests that a backslash at the end of the line would be used to indicate that the following line is a continuation of the same record. In that case, what if the continuation line starts with something that looks like a hex code (with no leading whitespace)? Would you count that or not?
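The integer-parsing normalization suggested above can be demonstrated directly — parsing the hex strings as integers makes leading zeroes and letter case irrelevant:

```python
codes = ['0x00010d35', '0x10D35', '0X010d35']

# As strings these are three distinct set elements...
assert len(set(codes)) == 3

# ...but parsed as integers they are all the same code.
values = {int(c, 16) for c in codes}
assert values == {0x10d35}
```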
## Efficiency

You said that each of the ~100 files may be several megabytes long. Your cache object reads the entire contents of all of the files into memory! There is no good reason to store that much data, when all you care about is the hex codes at the beginning of each line. If you want to split the work into two functions, I would have one of them be responsible for discovering the relevant file paths, and the other one responsible for extracting the codes. If you run a regex search many times, it's worth compiling it first. I would design the regex so that it works on the entire file contents at once, rather than line by line.

## Suggested solution

```python
import os
import re
import sys

def efiles(dir):
    for root, dirs, files in os.walk(dir):
        for name in files:
            if name.endswith('Efile'):
                yield os.path.join(root, name)

def event_codes(file_paths):
    hex_re = re.compile(r'(?<!\\\n)^0[xX][0-9a-fA-F]+', re.MULTILINE)
    for path in file_paths:
        with open(path) as f:
            for event_code in hex_re.findall(f.read()):
                yield int(event_code, 16)

event_code_set1 = set(event_codes(efiles(sys.argv[1])))
event_code_set2 = set(event_codes(efiles(sys.argv[2])))
overlap_event_codes = set(hex(n) for n in (event_code_set1 & event_code_set2))
print(overlap_event_codes)
```

• for rstrip? stackoverflow.com/q/51409385/3317808 Jul 19 '18 at 14:10
• continuation line starts with something that looks like a hex code (with no leading whitespace)? Am not suppose to count... you can say my code is relying on indentation with white space... which is vulnerable... How do I deal with this? Do I need to remember the previous line read? Jul 19 '18 at 14:11
• I actually do not need endswith because the file name must be Efile... So name == 'Efile' should work. Jul 19 '18 at 14:16
• Yes, I am saying that you are relying on the whitespace. My suggested solution solves it by using a negative look-behind assertion in the regex, and applying the regex to the entire file at once rather than line by line.
Jul 19 '18 at 14:22 • As the files are MB-sized, can I avoid opening the complete file? Jul 19 '18 at 23:25

Repeating the match can be avoided by storing it in a variable first:

    match = re.search(r'^0[xX][0-9a-fA-F]+', line)
    if match:

Note also that you don't need to strip a trailing \, because it is ignored afterwards anyway: you are only interested in hex codes at the beginnings of lines, and all your multi-line records seem to have some whitespace at the beginning of the continued line. Note that you could have used re.match(r'0[xX][0-9a-fA-F]+', line) instead of re.search, because it always matches only at the beginning of the string. Python also caches your regex once you have used it, but you may still squeeze out a bit of performance with re.compile. I also used the match.group() method, which immediately returns the matching part of the string.

I would also not suggest caching the file content of all the files in the directories. This seems like a huge waste of memory. Instead, process them one at a time and add the codes to your event code sets, or even better, just generate a stream of event codes that you can consume with set:

    import os
    import sys
    import re

    def event_codes_file(path):
        with open(path) as file:
            for line in file:
                match = re.match(r'0[xX][0-9a-fA-F]+', line)
                if match:
                    yield match.group()

    def event_codes_dir(dir):
        for root, dirs, files in os.walk(dir):
            for name in files:
                if name.endswith('Efile'):
                    path = os.path.join(root, name)
                    yield from event_codes_file(path)

    if __name__ == "__main__":
        event_codes1 = set(event_codes_dir(sys.argv[1]))
        event_codes2 = set(event_codes_dir(sys.argv[2]))
        overlap_event_codes = event_codes1 & event_codes2
        print(overlap_event_codes)

yield from x was introduced in Python 3.3 and is mostly equivalent to for i in x: yield i. Also note that name in ('Efile') is almost the same as name == 'Efile', but matches a bit more (like 'E'). Unless of course you meant what you wrote in process_cache, name.endswith('Efile'). I am assuming the latter.
On a file "test" containing your example content, this produces:

    set(event_codes_file("test"))
    # {'0x00010d35', '0x04b60ff6', '0x84900c5'}

• Is the generator enabling the code to avoid the cache? Jul 18 '18 at 17:17
• @overexchange Basically yes, but it could have been done differently. The important part is processing each file as you discover it, avoiding having to load all of them into memory at once. The generator really only avoids having to carry around a separate set per file. Jul 18 '18 at 17:31
• Yes... pipeline processing... Jul 18 '18 at 17:44
• Does line not have a trailing backslash? Because you are passing it to re.search(). re.search(r'\\$', 'hellothere\\') does not match. Jul 18 '18 at 18:51
• name in ('Efile') is not the same as name == 'Efile'. For example, 'fi' in ('Efile') is True. Jul 18 '18 at 21:28
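The last comment's pitfall is worth spelling out: without a trailing comma, the parentheses in ('Efile') are just grouping, so the expression is the string 'Efile' and the in test becomes a substring check.

```python
# ('Efile') is just the string 'Efile'; ('Efile',) is a 1-tuple.
print('fi' in ('Efile'))      # True  -- substring test on a string!
print('fi' in ('Efile',))     # False -- membership test on a tuple
print('Efile' in ('Efile',))  # True
```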
# An Efficient Parallel Algorithm for Spectral Sparsification of Laplacian and SDDM Matrix Polynomials

2015. Cited by: 7

Abstract: For a "large" class $\mathcal{C}$ of continuous probability density functions (p.d.f.), we demonstrate that for every $w\in\mathcal{C}$ there is a mixture of discrete Binomial distributions (MDBD) with $T\geq N\sqrt{\phi_{w}/\delta}$ distinct Binomial distributions $B(\cdot,N)$ that $\delta$-approximates a discretized p.d.f. $\widehat{w}(i/$…
LaTeX forum ⇒ Graphics, Figures & Tables ⇒ span table across two columns

Information and discussion about graphics, figures & tables in LaTeX documents.

jamborta
Posts: 2
Joined: Thu Oct 14, 2010 3:33 pm

span table across two columns

Hi, I have been looking for the answer to this simple question all over the internet... I would like to span a table across two columns and place it at the top of the page. There is the \begin{table*}...\end{table*} environment, which does that, but it puts the table on the last page. Is there a way to place it on the actual page? Thanks

Last edited by jamborta on Thu Oct 14, 2010 5:07 pm, edited 1 time in total.

localghost
Site Moderator
Posts: 9206
Joined: Fri Feb 02, 2007 12:06 pm
Location: Braunschweig, Germany

The table(*) environment accepts optional parameters for placement.

    \begin{table*}[!ht]
      % Table content (and caption)
    \end{table*}

As far as I remember, the output will appear no earlier than at the top of the next page.

Best regards and welcome to the board
Thorsten
LaTeX Community Moderator
¹ System: openSUSE 42.2 (Linux 4.4.52), TeX Live 2016 (vanilla), TeXworks 0.6.1

jamborta
Posts: 2
Joined: Thu Oct 14, 2010 3:33 pm

Thanks, that solved it. I didn't realise that it puts it on the next page, not the actual one, as it usually does.

localghost
Site Moderator
Posts: 9206
Joined: Fri Feb 02, 2007 12:06 pm
Location: Braunschweig, Germany

As clearly written in the Board Rules, marking a topic as solved means editing the first post, not the last one. Please catch up on that and keep it in mind for the future so that further reminders will not be necessary.
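For reference, here is a minimal self-contained sketch of the situation discussed in this thread; the document class, filler text, and table contents are placeholders. In a twocolumn document, table* spans both columns, and with the [!ht] specifier it still floats no earlier than the top of the next page, as noted above.

```latex
\documentclass[twocolumn]{article}
\begin{document}

Some two-column body text before the table\ldots

% A starred float spans both columns; [!ht] relaxes the
% placement rules, but the float appears no earlier than
% the top of the following page.
\begin{table*}[!ht]
  \centering
  \begin{tabular}{ll}
    A & B \\
    1 & 2 \\
  \end{tabular}
  \caption{A table spanning both columns.}
\end{table*}

More body text after the table\ldots

\end{document}
```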
# PV Diagram

1. Aug 16, 2011

### Punkyc7

Can someone check my work? I'm not sure I understand how internal energy, heat, and work are related. Work is defined to be positive if the system does work on the environment:

$Q = \Delta U + W$

For each process I am indicating whether the quantity is positive, negative, or zero. Also, this is supposed to be an ideal gas.

#### Attached Files:

• ###### CyclicProcces.jpg
File size: 35.2 KB
Views: 69

Last edited: Aug 16, 2011

2. Aug 17, 2011
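As a numerical sanity check on the sign convention above (W positive when the gas does work on the environment), here is a sketch for two textbook processes of a monatomic ideal gas; the temperatures, volumes, and mole count are made-up values for illustration only.

```python
# Sign check for the first law, Q = dU + W, with W > 0 when the
# gas does work on the environment (monatomic ideal gas assumed).
import math

R = 8.314  # gas constant, J/(mol K)
n = 1.0    # amount of gas, mol (illustrative value)

def isothermal(T, V1, V2):
    """Isothermal process: dU = 0, W = n R T ln(V2/V1)."""
    W = n * R * T * math.log(V2 / V1)
    dU = 0.0
    Q = dU + W
    return Q, dU, W

def isochoric(T1, T2):
    """Constant-volume process: W = 0, dU = (3/2) n R (T2 - T1)."""
    W = 0.0
    dU = 1.5 * n * R * (T2 - T1)
    Q = dU + W
    return Q, dU, W

# Isothermal expansion: gas does positive work, so Q > 0 while dU = 0.
Q, dU, W = isothermal(300.0, 1.0, 2.0)
print(Q > 0, dU == 0, W > 0)   # all True

# Isochoric cooling: no work, dU < 0, so heat leaves the gas (Q < 0).
Q, dU, W = isochoric(300.0, 200.0)
print(Q < 0, dU < 0, W == 0)   # all True
```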
Copyright © University of Cambridge. All rights reserved.

## 'Negatively Triangular' printed from http://nrich.maths.org/

Four straight lines have the following equations:
\begin{align} 3x+8y&=59 \\ x-2y&=1 \\ y-4x&=3 \\ 3y+2x &= -19 \end{align}
How many points of intersection do you expect there to be?

Three of the four lines enclose a triangle on which every point has negative co-ordinates. Which three lines?
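A quick check of the first question (without giving away the second): no two of these lines are parallel, so every one of the $\binom{4}{2}=6$ pairs meets in exactly one point. A short sketch, solving each pair by Cramer's rule:

```python
# Intersect each pair of lines a*x + b*y = c by Cramer's rule.
from itertools import combinations
from fractions import Fraction

# The four lines, written as (a, b, c) for a*x + b*y = c.
lines = [
    (3, 8, 59),    # 3x + 8y = 59
    (1, -2, 1),    # x - 2y = 1
    (-4, 1, 3),    # y - 4x = 3
    (2, 3, -19),   # 3y + 2x = -19
]

def intersect(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel lines never meet
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

points = [intersect(l1, l2) for l1, l2 in combinations(lines, 2)]
print(len(points))  # 6 intersection points
```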
Griggio, Massimo (2020) Investigating the binary neutron star merger event GW170817 via general relativistic magnetohydrodynamics simulations. [Magistrali biennali]

## Abstract

Magnetohydrodynamics simulations performed in full general relativity represent the ideal tool to unravel the dynamics of binary neutron star (BNS) mergers as well as the post-merger evolution of the resulting remnant object. This approach allows us to study in particular (i) the magnetic field amplification and the possible formation of collimated relativistic outflows or jets, which is fundamental to make the connection with the resulting short gamma-ray bursts (SGRBs), (ii) the associated gravitational wave (GW) emission, and (iii) the properties of the massive and metastable neutron star remnant before it eventually collapses into a black hole (BH), depending on the properties of the progenitor binary system. In this Thesis we carry out this type of investigation for two models (with mass ratio $q=0.9$ and $q=1.0$) consistent with the observed properties of GW170817, the first BNS merger observed in GWs by the Advanced LIGO and Virgo interferometers. Specifically, we use the Lorene code to build the initial data for an irrotational BNS model with the same total mass of GW170817, where we employ the APR4 equation of state for the description of matter at supra-nuclear densities. We further assume a high initial magnetization corresponding to a maximum magnetic field strength of $5\times10^{15} \, \text{G}$. This system is then evolved up to merger and beyond with the numerical relativity evolution codes Einstein Toolkit and WhiskyMHD. Our results provide important hints for the interpretation of the multi-messenger observation of this breakthrough event.
Item Type: Magistrali biennali
Department: Scuola di Scienze > Physics
Keywords: multimessenger, binary neutron star mergers, GW170817, GRMHD, simulations
Subjects: Area 02 - Scienze fisiche > FIS/02 Fisica teorica, modelli e metodi matematici; Area 02 - Scienze fisiche > FIS/05 Astronomia e astrofisica
ID Code: 63865
Supervisors: Riccardo Ciolfi; Jean-Pierre Zendri
Date: 07 January 2020
Library: Polo di Scienze > Dip. Fisica e Astronomia "Galileo Galilei" - Biblioteca on-line per i full-text