## Intermediate Algebra (6th Edition)
The solution is $(-\infty,-3]\cap(-\infty,0]=(-\infty,-3]$.
$4x+2\le-10$ and $2x\le0$. Solve each inequality separately.
First inequality: $4x+2\le-10$. Subtract $2$ from both sides: $4x\le-12$. Divide both sides by $4$: $x\le-3$. The solution set is $(-\infty,-3]$.
Second inequality: $2x\le0$. Divide both sides by $2$: $x\le0$. The solution set is $(-\infty,0]$.
Since the compound inequality uses the word "and", the solution is the intersection of the two solution sets: $(-\infty,-3]\cap(-\infty,0]=(-\infty,-3]$.
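As a quick sanity check (not part of the textbook solution), the compound inequality can be tested numerically in Python at points around the boundary $x=-3$:

```python
# Numeric sanity check for the compound inequality
# 4x + 2 <= -10 and 2x <= 0 (illustrative; not from the textbook).
def satisfies_both(x):
    return (4 * x + 2 <= -10) and (2 * x <= 0)

assert satisfies_both(-3)        # boundary point of (-oo, -3] is included
assert satisfies_both(-100)      # points far to the left satisfy both
assert not satisfies_both(-2.9)  # just right of -3 fails the first inequality
assert not satisfies_both(1)     # positive values fail both
print("all checks passed")
```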
---
# calculus
Evaluate the definite integral of f(x) = 4 - x from x = 0 to x = 4.
An antiderivative is $F(x) = 4x - \dfrac{x^2}{2}$.
At $x = 4$: $4\cdot 4 - \dfrac{16}{2} = 8$.
At $x = 0$: $0 - 0 = 0$.
So the integral is $8 - 0 = 8$.
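The arithmetic above can be double-checked in Python (an illustrative sketch, not part of the original answer): evaluate the antiderivative at the endpoints, and cross-check with a midpoint Riemann sum.

```python
# F is an antiderivative of f(x) = 4 - x.
def F(x):
    return 4 * x - x ** 2 / 2

exact = F(4) - F(0)  # Fundamental Theorem of Calculus
print(exact)         # 8.0

# Cross-check with a midpoint Riemann sum on [0, 4].
n = 100_000
dx = 4 / n
riemann = sum((4 - (i + 0.5) * dx) * dx for i in range(n))
assert abs(riemann - exact) < 1e-9
```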
---
Compatibility - Maple Help
Compatibility Issues in Maple2021
The following is a brief description of the compatibility issues that affect users upgrading from Maple 2020 to Maple 2021.
Multiline Inputs in Code Edit Regions
• Pressing Enter in a code edit region now executes the code. To get a new line in the code edit region, press Shift + Enter. In Maple 2020 and earlier, Enter moved you to the next line of the code edit region.
User Interface
• In document mode, if you are in math mode, pressing Enter when your cursor is at the end or middle of a line of math executes the math and moves the cursor to the next math input (or inserts a new document block if you are at the end of the document).
• Note this method of stepping through the document works for math that is evaluated on a new line or evaluated inline. If you have some math that is evaluated inline, and you want to change it to evaluate on a new line, with your cursor on the input, from the Context Panel (or the Evaluate menu) clear the check box for Evaluate and Display Inline. Now the output displays on a new line.
• In document mode, pressing Enter when the cursor is at the start of the line (the Home position) inserts a new line above. In Maple 2020 and earlier, that was true only on a blank line; otherwise, Enter executed the current document block. This new ability to create white space is indicated visually by the cursor. For more information, see What's New in the User Interface?
• Note that the behavior is now different if you are at the start of a nonempty line of math: pressing Enter now creates white space, and does not execute the current line.
• The shortcut key F5 now toggles between three states: text, nonexecutable math, and executable math. Previously, it toggled between text and executable math.
Default Plotting Domains
• When you do not specify a domain for the plot or plot3d commands, Maple 2021 now tries to determine a reasonable domain. In Maple 2020 and earlier, the default domain was either $-10..10$ or $-2\mathrm{\pi }..2\mathrm{\pi }$. For more information on default plotting domains in Maple 2021, see What's New in Visualization?
Auto Save
• Auto Save has changed: it now saves backup files to the user's home directory, and by default it keeps those files when Maple is shut down normally. For more information, see What's New in the User Interface?
The queue and stack Commands
• The queue and stack commands have been deprecated. Use the superseding command DEQueue instead.
---
# American Institute of Mathematical Sciences
December 2019, 24(12): 6693-6724. doi: 10.3934/dcdsb.2019163
## Trait selection and rare mutations: The case of large diffusivities
Laboratoire Jacques-Louis Lions, CNRS UMR 7598, Paris Sorbonne Université, 4 place Jussieu, 75005 Paris, France
* Corresponding author: Idriss Mazari
Received July 2018 Revised March 2019 Published December 2019 Early access July 2019
Fund Project: The author was partially supported by the Project "Analysis and simulation of optimal shapes - application to lifesciences" of the Paris City Hall.
We consider a system of $N$ competing species, each of which has access to a different resource distribution and can disperse at a different speed. We fully characterize the existence and stability of steady states for large diffusivities. Indeed, we prove that the resource distribution yielding the largest total population size at equilibrium is, broadly speaking, always the winner when species disperse quickly. The criterion also involves the different dispersal rates. The methods used rely on an expansion of the solutions of the Lotka-Volterra system for large diffusivities, and are an extension of the "slowest diffuser always wins" principle.
Using this method, we also study the case of an equation modelling a trait-structured population with small mutations. We assume that each trait is characterized by its diffusivity and the resources it can access. We similarly derive a criterion mixing these diffusivities and the total population size functional for the single-species model, and show that for rare mutations and large diffusivities, the population concentrates in a neighbourhood of a trait maximizing this criterion.
Citation: Idriss Mazari. Trait selection and rare mutations: The case of large diffusivities. Discrete and Continuous Dynamical Systems - B, 2019, 24 (12) : 6693-6724. doi: 10.3934/dcdsb.2019163
---
Running PHG with singularity: Error at CreatePHG_step2_addHapsFromGVCF
3 months ago
bp • 0
Hi,
I am trying to set up PHG using singularity. Step 1 went fine, but I hit an iceberg at step 2D, Filter GVCF and add variants to database (https://bitbucket.org/bucklerlab/practicalhaplotypegraph/wiki/UserInstructions/CreatePHG_step2_addHapsFromGVCF.md).
I am working with 95 taxa, and the parameters required for step 2D, as stated in the wiki, are provided in the config file in the following manner:
LoadHaplotypesFromGVCFPlugin Parameters
wgsKeyFile: /tempFileDir/data/load_wgs_genome_key_file.txt
gvcfDir: /tempFileDir/data/outputs/gvcfs/
referenceFasta: /tempFileDir/data/reference/reference.fa
bedFile: /tempFileDir/data/bam/temp/intervals.bed
haplotypeMethodName: GATK_PIPELINE
haplotypeMethodDescription: GVCF_DESCRIPTION
numThreads: 3
maxNumHapsStaged: 10000
mergeRefBlocks: false
queueSize: 30
The rest of the parameters are unchanged from the previous step.
To run the script using singularity, I ran the following command:
singularity exec -B /tempFileDir/:/tempFileDir/ ~/phg_latest.sif /CreateHaplotypesFromGVCF.groovy -config /tempFileDir/data/config.txt
The script runs momentarily and then gives the following error:
[pool-1-thread-1] DEBUG net.maizegenetics.plugindef.AbstractPlugin - Error writing to the DB:
java.lang.IllegalStateException: Error writing to the DB:
at net.maizegenetics.pangenome.db_loading.LoadHaplotypesFromGVCFPlugin.processData(LoadHaplotypesFromGVCFPlugin.kt:226)
at net.maizegenetics.plugindef.AbstractPlugin.performFunction(AbstractPlugin.java:111)
at net.maizegenetics.plugindef.AbstractPlugin.dataSetReturned(AbstractPlugin.java:2017)
at net.maizegenetics.plugindef.ThreadedPluginListener.run(ThreadedPluginListener.java:29)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.NoSuchElementException: Collection is empty.
at kotlin.collections.CollectionsKt___CollectionsKt.first(_Collections.kt:184)
at net.maizegenetics.pangenome.db_loading.LoadHaplotypesFromGVCFPlugin$processKeyFileEntry$2.invokeSuspend(LoadHaplotypesFromGVCFPlugin.kt:312)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)
at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740)
I can't seem to figure out the problem here. Please help.
PHG singularity • 263 views
Please use the formatting bar (especially the code option) to present your post better. I've done it for you this time.
Thank you!
Many thanks, Genomax! I'm new to this and not fully acquainted with all the tricks/skills. I'll adhere to the posting guidelines next time.
3 months ago
zrm22 ▴ 20
Hello,
The error being thrown comes from your bedFile being empty. The CreateHaplotypesFromGVCF.groovy script will attempt to pull down a BED file based on what is in the DB, using the following parameter, which should have been set:
refRangeMethods=FocusRegion,FocusComplement
If you look in the log file, this should be set and correct. Are there any other errors higher up in the log file? Could you post the full file? I would also check whether the automatically generated BED file has any entries in it. It should be here (unless you set the tempFileDir parameter in the config file):
/phg/inputDir/loadDB/bam/temp/intervals.bed
In the meantime, I will add in a more informative error message which should be included in the next release.
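One way to do the "is the BED file empty?" check suggested above is a short Python sketch (the helper function is illustrative and not part of PHG; the path is the default quoted above):

```python
from pathlib import Path

def count_bed_intervals(text):
    """Count data lines in BED text, skipping comments, headers and blank lines."""
    return sum(
        1
        for line in text.splitlines()
        if line.strip() and not line.startswith(("#", "track", "browser"))
    )

# Default location of the auto-generated BED file
# (different if the tempFileDir parameter was set in the config file):
bed = Path("/phg/inputDir/loadDB/bam/temp/intervals.bed")
if bed.is_file():
    print(f"{bed}: {count_bed_intervals(bed.read_text())} interval(s)")
else:
    print(f"{bed} does not exist")
```

If the count is zero, the plugin will hit the same "Collection is empty" error regardless of the other parameters.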
Hi Zack,
Thanks for addressing my issue. Setting the parameter refRangeMethods to 'FocusRegion,FocusComplement' did solve the problem I was having.
I guess I got confused by what I read on the wiki and misinterpreted this statement:
refRangeMethods=refRegionGroup This is used to extract a BED file out of the DB before the GVCF file is processed. The BED file is then used to extract out regions of the GVCF used to become the haplotypes. Typically, refRegionGroup refers to the anchor Reference ranges. If "refRegionGroup,refInterRegionGroup" is used it will create a BED file representing both anchors and inter anchors. We strongly suggest not setting this parameter in the Config File
Update:
It did work after I set the parameter 'refRangeMethods=FocusRegion,FocusComplement'. However, it didn't run for long before hitting an out-of-memory error.
[pool-1-thread-1] INFO net.maizegenetics.pipeline.TasselPipeline - net.maizegenetics.pangenome.db_loading.LoadHaplotypesFromGVCFPlugin: time: Mar 17, 2021 19:25:40: progress: 100%
[pool-1-thread-1] INFO net.maizegenetics.plugindef.AbstractPlugin - net.maizegenetics.pangenome.db_loading.LoadHaplotypesFromGVCFPlugin Citation: Bradbury PJ, Zhang Z, Kroon DE, Casstevens TM, Ramdoss Y, Buckler ES. (2007) TASSEL: Soft$
[pool-1-thread-1] ERROR net.maizegenetics.plugindef.ThreadedPluginListener - Out of Memory: LoadHaplotypesFromGVCFPlugin could not complete task: Current Max Heap Size: 10225 Mb
Use -Xmx option in start_tassel.pl or start_tassel.bat to increase heap size. Included with tassel standalone zip.
Then I changed -Xmx to 100G and continued the run, which ended up giving me another error, 'Error writing to the DB: caused by PHGdbAccess:putHaplotypesForGamete: failed', for chromosome 7D of my first taxon, which I believe had been processed in my first attempt.
[pool-1-thread-1] DEBUG net.maizegenetics.plugindef.AbstractPlugin - Error writing to the DB:
java.lang.IllegalStateException: Error writing to the DB:
at net.maizegenetics.pangenome.db_loading.LoadHaplotypesFromGVCFPlugin.processData(LoadHaplotypesFromGVCFPlugin.kt:226)
at net.maizegenetics.plugindef.AbstractPlugin.performFunction(AbstractPlugin.java:111)
at net.maizegenetics.plugindef.AbstractPlugin.dataSetReturned(AbstractPlugin.java:2017)
at net.maizegenetics.plugindef.ThreadedPluginListener.run(ThreadedPluginListener.java:29)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: PHGdbAccess:putHaplotypesForGamete: failed
at net.maizegenetics.pangenome.db_loading.PHGdbAccess.putHaplotypesForGamete(PHGdbAccess.java:1617)
at net.maizegenetics.pangenome.db_loading.PHGdbAccess.processHaplotypesData(PHGdbAccess.java:1722)
at net.maizegenetics.pangenome.db_loading.PHGdbAccess.putHaplotypesData(PHGdbAccess.java:1635)
at net.maizegenetics.pangenome.db_loading.LoadHaplotypesFromGVCFPlugin$processDBUploading$2.invokeSuspend(LoadHaplotypesFromGVCFPlugin.kt:619)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)
at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740)
Suppressed: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.HashMap.newNode(HashMap.java:1750)
Would you be able to help me on this?
I'm updating the status here, as I resolved the issue. For future reference, or for anyone needing a solution to the problem stated above:
Apparently, if you get an error message about maximum memory usage (or possibly other errors) while running the CreateHaplotypesFromGVCF.groovy plugin, do not continue just by resolving that particular issue. One more issue that followed in my case was a corrupt database. I'm not sure if this happens every time, but a corrupt database can lead to myriad other "Error writing to the DB" errors. A simple solution was to re-create the DB and run it through again. It worked for me.
|
{}
|
# What non-monoidal functors on monoidal categories are used “in nature”?
## Background
For my PhD dissertation, I've developed a categorical generalization of many different systems of denotational semantics for light linear logic (LLL). I'd like to see if I can use this generalization to find a more "natural" (in the colloquial sense of the word) denotational semantics for LLL. At its core is a symmetric monoidal closed category with two functors on it. One of the functors is monoidal, and the other is not (well, it could be monoidal, but then it's a trivial example). There are some other requirements, but for the moment, I'm mostly curious about how common it is to have non-monoidal functors in the first place.
## Question
If you know of an example where someone uses a non-monoidal functor $T$ on a symmetric monoidal closed category $\mathbb{C}$, I'd like to hear about it. If you know of such a $T$ with natural transformations $d_A:TA \to TA\otimes TA$ and $e_A:TA \to 1$ forming comonoids for every object $A$, even better. If the category $\mathbb{C}$ also comes with a monoidal functor $S$, that would be even more fantastic. And if there's a natural transformation $T\Rightarrow S$, then I'll buy you dinner.
I've got examples (fibered phase spaces, stratified coherent spaces and locally bounded stratified cliques, games and discrete strategies, light length spaces), but they're all specifically created for this purpose, and I'm curious to see just how natural this kind of construction is.
-
Could you clarify a couple of points of terminology? First, some people use "monoidal functor" to mean what's sometimes called "lax monoidal" (so that you have not-necessarily-invertible maps $TA \otimes TB \to T(A \otimes B)$), while others use it to mean what's sometimes called "strong monoidal" (so that you have isomorphisms $TA \otimes TB \to T(A \otimes B)$). Second, what do you mean by a functor "on" a category $C$? Do you mean an endofunctor of $C$, or a functor $C \to \text{Set}$, or just a functor with domain $C$? – Tom Leinster Mar 20 '12 at 14:02
Actually, the more I think about this question, the more strange it seems. Take Set, with cartesian product. There are very many useful/natural endofunctors T of Set, and many of them aren't monoidal. Every set is a comonoid in a unique way, and for any endofunctor T of set, there are unique nat transfs d and e satisfying your conditions. Moreover, Set comes with a monoidal endofunctor S: the identity. It's surely the case that for some non-monoidal endofunctors T of Set, there exists a nat transf T => id. But I won't attempt to think of any, as I've just had lunch. – Tom Leinster Mar 20 '12 at 14:57
Because he's interested in models of linear logic, I'm going to speculate that Erik wants his symmetric monoidal category to not be cartesian. But what about $T(A)=$ the cofree comonoid on $A$? – Mike Shulman Mar 20 '12 at 20:58
By "monoidal", I mean lax monoidal, not strong monoidal, and by "a functor on $\mathbb{C}$" I mean an endofunctor on $\mathbb{C}$. Mike is absolutely correct. I'd prefer for the symmetric monoidal category to not be cartesian. Although now that I think about it, I forgot to mention that it should be a symmetric monoidal <em>closed</em> category. Technically, it's still a light linear category if it's cartesian, but it's not very interesting as a model of LLL. The same goes for when $S$ is the identity (it gets closer to a model for ordinary linear logic in that case). – Erik Wennstrom Mar 21 '12 at 13:54
@Erik I guess Mike means $C$ symmetric monoidal closed such that $U \colon C \to \mathbf{Comon}(C)$ has a right adjoint $!$. Then you get a comonad $T = U\circ !$ on $C$. We obviously have $d_A \colon TA \to TA \otimes TA$ and $e_A \colon TA \to 1$ as you require, these being natural transformations by the comonoid morphism equations for each $!f$. This looks like Lafont categories but without cocommutativity. However, I think that $T$ will be monoidal anyway: as $C$ is symmetric, the left adjoint $U$ is strong monoidal, and then by doctrinal adjunction $!$ is lax monoidal, so $T= U\circ !$ is lax monoidal – Eduardo Pareja Tobes Mar 23 '12 at 15:38
As in my comment above, I'm not sure precisely what's being asked, so the following might or might not be a useful answer. In any case, it doesn't get me dinner.
Let $\mathbf{D}$ be the category of finite totally ordered sets, which is monoidal under disjoint union. It doesn't matter whether you take "all" finite totally ordered sets or just a skeleton, but the important thing is that the empty set is included — so $\mathbf{D}$ is not $\Delta$.
Small theorem: colax monoidal functors $\mathbf{D} \to \mathbf{Set}$ (yes, covariant) are the same thing as simplicial sets.
Generally, for any category $\mathcal{E}$ with finite products, colax monoidal functors $\mathbf{D} \to \mathcal{E}$ amount to simplicial objects in $\mathcal{E}$. (Proof: Proposition 3.1.7 of this, where I'm afraid $\mathbf{D}$ is called $\Delta$ and $\Delta$ is called $\Delta^+$.)
For a functor $T$ to be colax monoidal means that it comes equipped with maps $$T(A \otimes B) \to TA \otimes TB, \qquad T(I) \to I$$ satisfying coherence axioms. In this case, they're not invertible unless the corresponding simplicial set is the nerve of a monoid. So, whether by "monoidal" you meant "lax monoidal" or "strong monoidal", the functors $\mathbf{D} \to \mathbf{Set}$ corresponding to simplicial sets are not usually monoidal.
Edit I see that Erik wanted examples of non-monoidal functors on symmetric monoidal categories. The monoidal category $\mathbf{D}$ isn't symmetric, so my example won't do. But there's something analogous in the symmetric world, concerning not simplicial sets but the $\Gamma$-sets of Segal.
Let $\mathbf{F}$ be the category of finite sets (including $\emptyset$), which is symmetric monoidal under disjoint union. Then a symmetric colax monoidal functor $\mathbf{F} \to \mathbf{Set}$ turns out to be the same thing as a $\Gamma$-set. Again, you can replace $\mathbf{Set}$ by any other category with finite products, and again these functors are not in general monoidal.
-
It seems like many of the standard examples of monads in functional programming can be transported to linear logic to produce examples of non-monoidal functors.
E.g., the linear state monad $T_S(A) = S \multimap S \otimes A$ has two evident natural transformations $T_S(A) \otimes T_S(B) \to T_S(A \otimes B)$ (corresponding to evaluating the left or the right argument first), but neither one will satisfy the coherence properties needed to be a monoidal functor. Likewise, the linear exception monad $E(A) = 1 \oplus A$ doesn't even have a natural transformation of the right type, and is not even strong.
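To make the two evaluation orders concrete, here is an illustrative Python sketch (not from the thread): state computations are modelled as functions `s -> (result, new_state)`, and the two candidate pairings disagree as soon as both components touch the state, which is one way to see that neither pairing is canonical.

```python
# A "state computation" is a function: state -> (result, new_state).

def tick():
    # Returns the current counter value and increments the state.
    return lambda s: (s, s + 1)

def pair_left(ta, tb):
    # Candidate map T(A) x T(B) -> T(A x B): run the left component first.
    def run(s):
        a, s1 = ta(s)
        b, s2 = tb(s1)
        return (a, b), s2
    return run

def pair_right(ta, tb):
    # The other candidate: run the right component first.
    def run(s):
        b, s1 = tb(s)
        a, s2 = ta(s1)
        return (a, b), s2
    return run

left = pair_left(tick(), tick())(0)    # ((0, 1), 2)
right = pair_right(tick(), tick())(0)  # ((1, 0), 2)
print(left, right)
```

Starting from state 0, the two pairings hand out the counter values in opposite orders, so the two natural transformations genuinely differ.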
Is there some extra condition you want? Perhaps if you could say something about the operational intuition I could be more helpful (your $T$ looks like the restricted exponential of light logic, and I guess $S$ is the "paragraph" modality?)
-
You're right; $T$ represents the restricted $!$ exponential of LLL and $S$ is the neutral $\mathsection$ exponential ("paragraph"). I kind of hinted at it above, but here it is explicitly: All I need to model intuitionistic multiplicative light linear logic is a symmetric monoidal closed category equipped with a (not necessarily monoidal) endofunctor $T$, a monoidal endofunctor $S$, and natural transformations $d_A: TA\to TA\otimes TA$, $e_A: TA\to 1$, and $l_A: TA\to SA$ such that for every object $A$, $(TA,d_A,e_A)$ forms a commutative comonoid. – Erik Wennstrom Mar 22 '12 at 15:35
(That "\mathsection" is supposed to be the section symbol, by the way. I wish you could preview comments.) <br> Of course, you can get these easily if you pick trivial versions. If $T$ is monoidal, then you end up with a model for elementary linear logic (by just ignoring $S$). And if in addition you've got a symmetric (lax) monoidal comonad, you get a model for non-light (heavy?) linear logic. <br> So that's why I'm looking for a specifically non-monoidal endofunctor. Most of the rest of the requirements are pretty ordinary, but I've never really worked with non-monoidal functors before. – Erik Wennstrom Mar 22 '12 at 15:43
I've only ever encountered CS monads in passing before. Is the exception monad the same as the "maybe" monad? If not, could you point me at a good place to read up on it? – Erik Wennstrom Mar 24 '12 at 16:58
Yes, it's the same thing. If you want a nice collection of examples, see Philip Wadler's notes "Monads for Functional Programming" <homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/…;. These notes will probably seem slow-paced to you, but there were (and still are) a lot of programming language researchers who don't know much category theory. – Neel Krishnaswami Mar 24 '12 at 18:44
I should add that Wadler's comments about "single-threadedness" are exactly about requiring a functor to be non-monoidal as part of accurately modelling computation. – Neel Krishnaswami Mar 24 '12 at 18:45
Consider the category of algebras over a fixed field k. Taking the dual as a k-vectorspace gives a (contravariant) monoidal functor that can be your S. On the other hand, you can also consider the ''finite dual'', usually denoted as ^\circ. As the finite dual turns every algebra into a coalgebra, this can act as your functor T. Moreover, the finite dual is a subspace of the dual vectorspace, this gives you your natural transformation T\to S. As you want to obtain cocommutative coalgebras, you can restrict to commutative algebras in the first place. The problem in this example is to get the functors as endofunctors on a suitable (sub)category. I don't know how essential this point is for you. If you start with (all) algebras, or (all) commutative algebras as above, then taking the linear dual or finite dual gets you to vectorspaces or coalgebras. The finite dual is however an endofunctor on the category of Hopfalgebras, so here you are out of problems. For the linear dual, I would suggest to replace it by the ''restricted dual'' on the category of multiplier Hopf algebras, and I think on this category you will have your example.
-
I'm not sure a contravariant functor could do the job here. Are these duals involutive? – Erik Wennstrom Mar 24 '12 at 17:01
|
{}
|
# The Proof of Knaster-Tarski Theorem
## Definition[Monotone Function]
A function $\mathcal{F}:\mathcal{P}(\mathcal{U})\rightarrow\mathcal{P}(\mathcal{U})$ on the powerset of some universal set $\mathcal{U}$ is monotone if $X\subseteq Y\implies \mathcal{F}(X)\subseteq\mathcal{F}(Y)$
## Definition[Fixed Point]
1. Set $X$ is $\mathcal{F}$-closed if $\mathcal{F}(X)\subseteq X$
2. Set $X$ is $\mathcal{F}$-consistent if $X\subseteq\mathcal{F}(X)$
3. Set $X$ is a fixed point of $\mathcal{F}$ if it is both $\mathcal{F}$-closed and $\mathcal{F}$-consistent, i.e., $\mathcal{F}(X)=X$. The greatest fixed point of $\mathcal{F}$ is denoted by $\nu\mathcal{F}$, and the least fixed point of $\mathcal{F}$ by $\mu\mathcal{F}$
## Theorem[Knaster-Tarski (Specialized on Sets)]
1. The intersection of all $\mathcal{F}$-closed sets is the least fixed point of $\mathcal{F}$
2. The union of all $\mathcal{F}$-consistent sets is the greatest fixed point of $\mathcal{F}$
### Proof
#### Part 1
Let $X$ be the set of all $\mathcal{F}$-closed sets, i.e., $X=\lbrace C \mid \mathcal{F}(C)\subseteq C \rbrace$. Let $P$ be the intersection of $X$, i.e., $P=\bigcap X$. We need to prove: 1. $\mathcal{F}(P)\subseteq P$; 2. $P\subseteq\mathcal{F}(P)$.
For the first, notice that $P\subseteq C$ for all $C\in X$. By the monotonicity of $\mathcal{F}$ we get $\mathcal{F}(P)\subseteq\mathcal{F}(C)$ for all $C\in X$, and from the definition of $X$ we can further conclude that $\mathcal{F}(P)\subseteq C$ for all $C\in X$. Since $P$ is the intersection of all the $C\in X$, any set contained in every $C$ is contained in $P$; hence $\mathcal{F}(P)\subseteq P$.
For the second, since we've proved that $\mathcal{F}(P)\subseteq P$, the monotonicity of $\mathcal{F}$ gives $\mathcal{F}(\mathcal{F}(P))\subseteq \mathcal{F}(P)$, so by the definition of $X$ we know that $\mathcal{F}(P)\in X$. Moreover, $P$ is the intersection of $X$, so $P\subseteq C$ for all $C\in X$; in particular, since $\mathcal{F}(P)\in X$, it follows that $P\subseteq \mathcal{F}(P)$.
#### Part 2
Let $X$ be the set of all $\mathcal{F}$-consistent sets, i.e., $X=\lbrace C \mid C\subseteq \mathcal{F}(C) \rbrace$. Let $P$ be the union of $X$, i.e., $P=\bigcup X$. We need to prove: 1. $P\subseteq\mathcal{F}(P)$; 2. $\mathcal{F}(P)\subseteq P$.
For the first, notice that $C\subseteq P$ for all $C\in X$. By the monotonicity of $\mathcal{F}$ and the definition of $X$ we get $C\subseteq\mathcal{F}(C)\subseteq \mathcal{F}(P)$ for all $C\in X$; since $P=\bigcup X$, it follows that $P\subseteq \mathcal{F}(P)$.
For the second, since we've proved $P\subseteq\mathcal{F}(P)$, the monotonicity of $\mathcal{F}$ gives $\mathcal{F}(P)\subseteq \mathcal{F}(\mathcal{F}(P))$, i.e., $\mathcal{F}(P)$ is $\mathcal{F}$-consistent. Hence, from the definition of $X$, we know $\mathcal{F}(P)\in X$, and since $P$ is the union of $X$, $\mathcal{F}(P)\subseteq P$ must hold. ∎
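On a finite universe, the least fixed point from Part 1 can also be computed by iterating $\mathcal{F}$ from the empty set. The sketch below (illustrative Python, not part of the proof; the example $\mathcal{F}$ is made up) checks that the result is a fixed point:

```python
def lfp(F):
    """Least fixed point of a monotone F on a finite powerset,
    computed by iterating F from the empty set until it stabilizes."""
    X = frozenset()
    while True:
        Y = F(X)
        if Y == X:
            return X
        X = Y

# Example: F(X) = {0} ∪ {x+1 : x ∈ X, x+1 < 5} on U = {0, ..., 4}.
F = lambda X: frozenset({0} | {x + 1 for x in X if x + 1 < 5})
P = lfp(F)
print(sorted(P))   # [0, 1, 2, 3, 4]
assert F(P) == P   # P is both F-closed and F-consistent
```

For a monotone $\mathcal{F}$ on a finite $\mathcal{U}$ this iteration always terminates, and it reaches the same set as the intersection of all $\mathcal{F}$-closed sets.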
|
{}
|
# Densities in a ball mill
### Grinding control strategy on the conventional milling ...
by a ball mill in series. Crusher product ( mm) is fed to the rod mill, and the water is ... cyclone feed densities, cyclone feed flow or cyclone overflow ... Grinding control strategy on the conventional milling circuit With the increase in ratio set point, a decrease in Cyclone 1
### The Selection and Design of Mill Liners - MillTraj
Figure 5. High–low wave ball mill liner Materials The selection of the material of construction is a function of the application, abrasivity of ore, size of mill, corrosion environment, size of balls, mill speed, etc. liner design and material of construction are integral and cannot be chosen in isolation.
Our ball mill loading guide provides fill levels for various applications. Mills can be loaded by volume or by weight based on the product's bulk density.
### Know-how on Improving Grinding Efficiency and Reducing ...
Therefore, large feeding sizes will affect ball mill efficiency. 3. Mineral slurry density: in wet grinding, material pass time, production efficiency and ball mill power are all influenced by mineral slurry density. Generally speaking, the suitable density is close to 80% when feeding size is large and circulating load is …
### Advanced Controller for Grinding Mills: Results from a ...
Advanced Controller for Grinding Mills: Results from a Ball Mill Circuit in a Copper Concentrator Anoop Mathur, Sujit Gaikwad, Honeywell Technology Center Robert Rodgers, Nels Gagnon ... density, particle size measurements), or with the hardness/softness of ore. These adjustments
### TECHNICAL NOTES 8 GRINDING R. P. King - Mineral Tech
The density of the charge must account for all of the material in the mill including the media which may be steel balls in a ball mill, or large lumps of ore in an autogenous mill or a mixture in a semi-autogenous mill, as well as the slurry that makes up the operating charge. Let Jt be the fraction of the mill volume that is occupied
### The grinding balls bulk weight in fully unloaded mill
This is done in order to accurately determine the grinding ball mass when measuring in a ball mill, and to exclude the possibility of overloading the mill with grinding balls. There are two methods for determining the grinding balls' bulk weight in a mill; one is the method with complete grinding media discharge from the mill's inner drum.
### density of ball mill
density of ball mill The density of the charge must account for all of the material in the mill including the media which may be steel balls in a ball mill, or large lumps of ore in an autogenous mill or a mixture in a semi-autogenous mill, as well as the slurry that makes up the operating charge.
### Paper # 25 - Copper Mountain Overview on the Grinding ...
The 24’ x 39.5’ Ball Mill The Copper Mountain ball mills are 7315 mm [24 feet] in diameter mm [39.5 feet] long. They are overflow discharge ball mills with inside diameters of 7315 mm [24 feet] and grinding lengths of 11887 mm [39 feet]. Each Ball mill is …
### Ball Mill: Operating principles, components, Uses ...
A ball mill, also known as a pebble mill or tumbling mill, is a milling machine that consists of a hollow cylinder containing balls, mounted on a metallic frame such that it can be rotated along its longitudinal axis. The balls, which can be of different diameters, occupy 30% of the mill volume, and their size depends on the feed and mill size.
### Comparison of grinding media—Cylpebs versus balls
ball mill, and a set of model parameter scale-up criteria to simulate the steady state performance of full-scale mill circuit from the laboratory results ( Man, 1999, 2001 ). Accordingly all the laboratory tests were con-ducted in a standard Bond ball mill loaded with various grinding media to treat the same feed ore.
### what is formula for calculating grinding media balls bulk ...
· Ball mill – Wikipedia, the free encyclopedia. Density: The media should be denser than the material being ground. … The grinding balls in the grinding jars are subjected to superimposed rotational movements, … »More detailed
### Ball Mill Design/Power Calculation
The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are; material to be ground, characteristics, Bond Work Index, bulk density, specific density, desired mill tonnage capacity DTPH, operating % solids or pulp density, feed size as F80 and maximum ‘chunk size’, product size as P80 and maximum and finally the type of circuit open/closed ...
### AMIT 135: Lesson 8 Rod Mills – Mining Mill Operator Training
Under a given load and particle size requirement, capacity is a function of mill length and diameter: Q = kLD^(2+N), where N is related to mill diameter (it decreases with larger diameters) and k is a constant equal to π/4. A chart showing rod mill capacity vs. mill diameter [image: ( 5)] A chart showing rod mill capacity vs. mill …
### Low Ball Mill Density - YouTube
Runs 65% solutions ALWAYS = poor grinding.
### Grinding mills -
Every mining operation has a unique grinding process. has experience of over 8,000 grinding mills globally, including manufacturing and delivering the largest SAG/AG mills in the world. Our experts welcome the opportunity to assist you with circuit and circuit control design as well as start-up, operation, and optimization of your mill.
### ATTRITOR GRINDING MILLS AND NEW DEVELOPMENTS
ATTRITOR GRINDING MILLS AND NEW DEVELOPMENTS I. INTRODUCTION AND PRINCIPLES In this presentation we will discuss the principle of the Attritor and its applications. The Attritor is a grinding mill containing internally agitated media. It has been generically referred to as a “stirred ball mill.”
### Grinding Media & Grinding Balls | Union Process, Inc.
As the developer and manufacturer of industry-leading particle size reduction equipment, including Attritors (internally agitated ball mills) and DMQX horizontal media mills, Union Process is uniquely positioned to help you identify and source the correct grinding media for your application.
### Ball mill - Wikipedia
The ball mill is a key piece of equipment for grinding crushed materials, and it is widely used in production lines for powders such as cement, silicates, refractory material, fertilizer, glass ceramics, etc. as well as for ore dressing of both ferrous and non-ferrous metals. The ball mill can grind various ores and other materials either wet ...
### Milling Media Review - Part 2: Bead Density Effect
· Part 1 of this series (see the April 2008 issue of PCI) detailed the main considerations of bead density effects on mill operation and efficiency. Equally important is the correct choice of bead size to suit particular formulations, applications and process. The following discussion reviews some of the major effects of bead size.
### Model Predictive Control for SAG Milling in Minerals ...
Model Predictive Control for SAG Milling in Minerals Processing | 5 Model Predictive Control on a SAG Mill and Ball Mills The solution for the SAG Mill is an adaptive controller which controls mill load using direct mill weight measurement or indirectly from bearing oil pressure.
### Bal-tec - Ball Weight and Density
Ball Weight and Density How much will a ball of a given diameter in a certain material weigh? The answer is calculated by multiplying the volume of the ball by the density of the material. $\text"Weight" = \text"Volume" ⋅ \text"Density"$ For example, calculate the weight of a two inch diameter lead ball:
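The Weight = Volume ⋅ Density calculation is easy to script. Here is a small Python sketch (the lead density value of 0.409 lb/in³ is an assumed figure, not taken from this page):

```python
import math

def ball_weight(diameter, density):
    """Weight of a solid sphere: volume (pi/6 * d^3) times density."""
    volume = math.pi / 6.0 * diameter ** 3
    return volume * density

# Two-inch-diameter lead ball; lead density assumed ~0.409 lb/in^3.
w = ball_weight(2.0, 0.409)
print(f"{w:.2f} lb")  # about 1.71 lb
```

The same function works in any unit system, as long as the diameter and density units are consistent.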
### Theory and Practice for - U. S. Stoneware
Theory and Practice for . Jar, Ball and Pebble Milling . Types of Mills . Ball and Pebble Mills: The expressions “ball milling” and “pebble milling” are frequently used interchangeably. Usually, however, a ball mill is referred to as one that uses steel balls as grinding media, while a pebble mill is one that uses
### AMIT 135: Lesson 7 Ball Mills & Circuits – Mining Mill ...
Ball Mill Operation. Ball mills ride on steel tires or supported on both ends by trunnions. Girth gears bolted to the shell drive the mill through a pinion shaft from a prime mover drive. The prime movers are usually synchronized motors. During rotation, a portion of the charge is lifted along the inside perimeter.
### INNOVATION NEWS Mills and GMDs - ABB Ltd
increasing, but also the ‘power density’, i.e. the ratio power/mill-diameter. A similar trend can be seen with ball mills. The first 26’ ball mills had a rated power of 15.5 MW, whereas today the range is typically from 16.4 to 17.5 MW. The figure shows the power density figure for SAG and ball mills …
### MILLING CONTROL & OPTIMISATION - Mintek
MILLING CONTROL & OPTIMISATION MillSTAR. ... Millstar Mill Discharge Density Estimator Millstar Ball Load Estimator ... • The standard deviation of the mill load, flotation feed flow and density was considerably less in MillStar mode, as can be seen in the table below.
|
{}
|
# Projection
Projection, projections or projective may refer to:
## Chemistry
• Fischer projection, a two-dimensional representation of a three-dimensional organic molecule
• Haworth projection, a way of writing a structural formula to represent the cyclic structure of monosaccharides
• Natta projection, a way to depict molecules with complete stereochemistry in two dimensions in a skeletal formula
• Newman projection, a visual representation of a chemical bond from front to back
## Biology
• Projection areas, areas of the brain where sensory processing occurs
• Projection fiber, in neuroscience, white matter fibers that connect the cortex to the lower parts of the brain or the spinal cord
## Other uses
• Projection (alchemy), a process in alchemy
• Projections (journal), an interdisciplinary academic journal related to cinema and visual media
• Power projection, the capacity of a state to implement policy by means of force, or the threat thereof
• Psychological projection, or "Freudian projection", a defense mechanism in which one attributes to others one's own unacceptable or unwanted attributes, thoughts, or emotions
• Forecasting, making predictions of the future based on past and present data
|
{}
|
doc/python/rips.rst
author Dmitriy Morozov Mon, 11 May 2009 12:45:49 -0700 branch dev changeset 134 c270826fd4a8 child 147 d39a20acb253 permissions -rw-r--r--
Added documentation; for now mostly for the Python bindings
:class:`Rips` class
======================
.. class:: Rips
.. method:: __init__(distances)
Initializes :class:`Rips` with the given distances object, whose main purpose
is to return the distance between two points given their indices. See
Distances_ below.
.. method:: generate(k, max, functor[, seq])
Calls functor with every simplex in the k-skeleton of the Rips
complex :math:`VR(max)`. If seq is provided, then the complex is
restricted to the vertex indices in the sequence.
.. method:: vertex_coface(v, k, max, functor[, seq])
Calls functor with every coface of the vertex v in the k-skeleton
of the Rips complex :math:`VR(max)`. If seq is provided, then the
complex is restricted to the vertex indices in the sequence.
.. method:: edge_cofaces(u, v, k, max, functor[, seq])
Calls functor with every coface of the edge (u, v) in the
k-skeleton of the Rips complex :math:`VR(max)`. If seq is
provided, then the complex is restricted to the vertex indices in the
sequence.
.. method:: cmp(s1, s2)
Compares simplices s1 and s2 with respect to their ordering in the
Rips complex. Note that, like Python's built-in cmp, this is a
three-outcome comparison, returning (-1, 0, 1) for (:math:`\leq`,
:math:`=`, :math:`\geq`), respectively.
.. method:: eval(s)
Returns the size of simplex s, i.e. the length of its longest edge.
.. _distances:
Distances
---------
An instance of distances passed to the constructor of :class:`Rips` should
know its length and the distances between the points. The length should be
retrievable via len(distances), and it determines how many points the complex
is built on. The distances between the points are obtained by the class
:class:`Rips` by calling distances with a pair of vertex indices as arguments.
For example, the following class represents 10 points on an integer lattice::
    import math

    class Distances:
        def __len__(self):
            return 10

        def __call__(self, x, y):
            return math.fabs(y - x)
The bindings provide a pure Python class :class:`PairwiseDistances` to deal with
explicit points in a Euclidean space. It is defined in
:sfile:`bindings/python/dionysus/distances.py`::
    class PairwiseDistances:
        def __init__(self, points, norm = l2):
            self.points = points
            self.norm = norm

        def __len__(self):
            return len(self.points)

        def __call__(self, p1, p2):
            return self.norm([x - y for (x, y) in zip(self.points[p1], self.points[p2])])
Another distances class is available that speeds up the computation of the Rips
complex at the expense of memory usage: :class:`ExplicitDistances`. It is
initialized with an instance of any class that behaves like a distances class,
and it stores all of its distances explicitly to not have to recompute them in
the future::
    distances = PairwiseDistances(points)
    distances = ExplicitDistances(distances)
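A minimal sketch of such a caching wrapper (illustrative only, not the actual implementation shipped with the bindings) might look like:

```python
class ExplicitDistancesSketch:
    """Illustrative stand-in for ExplicitDistances: precomputes all
    pairwise distances so that later calls are table lookups."""

    def __init__(self, distances):
        n = len(distances)
        # Store every pairwise distance up front (O(n^2) memory).
        self.table = [[distances(i, j) for j in range(n)] for i in range(n)]

    def __len__(self):
        return len(self.table)

    def __call__(self, p1, p2):
        return self.table[p1][p2]
```

Because it exposes the same `__len__`/`__call__` interface, it can be dropped in anywhere a distances object is expected.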
Example
-------
The following example reads in points from a file, and fills the list
simplices with the simplices of the 2-skeleton of the Rips complex built on
those vertices with distance cutoff parameter 50. Subsequently it computes the
persistence of the resulting filtration (defined by rips.cmp)::
    points = [p for p in points_file('...')]
    distances = PairwiseDistances(points)
    rips = Rips(distances)
    simplices = []
    rips.generate(2, 50, simplices.append)
    f = Filtration(simplices, rips.cmp)
    p = StaticPersistence(f)
    p.pair_simplices()
Essentially the same example is implemented in
:sfile:`examples/rips/rips-pairwise.py`, although it reads the k and max
parameters for the Rips complex on the command line, and uses a trick to speed
up the computation.
|
{}
|
# A class function $f$ is a character if and only if $(f,\chi_{q_i})_G$ is a non-negative integer, for all irreducible characters $\chi_{q_i}$
I'm currently revising representation theory and I'm a bit stuck trying to prove the converse of the above statement.
$(\Rightarrow)$ is straightforward: if $f$ is the character of a representation $\rho$, then $\rho$ decomposes as a direct sum of irreducible representations $q_i$. If $\rho\sim n_1q_1\oplus...\oplus n_kq_k$, then $(f,\chi_{q_i})_G =n_i$, which is a non-negative integer.
For $(\Leftarrow)$ I'm not sure how to do it.
I know that $\chi_{q_i}$ form a basis for the space of class functions. So $f=\sum_{i=0}^{k} c_i\chi_{q_i}$ where $c_i\in \Bbb{C}$
If $(f,\chi_{q_i})_G =n_i \in \Bbb{Z}_{\ge 0}$, then I'm guessing that $n_i=c_i$ (though I'm not sure why, using the definition of $(\cdot,\cdot)_G$).
And maybe $\rho$ will be the direct sum of $n_i$ copies of each $q_i$, but we haven't actually shown that $f$ is a character yet?
Any help would be appreciated.
The thing to remember is that the $\chi_{q_i}$s are orthonormal with respect to this inner product. In particular if $f=\sum_{i=0}^{k} c_i\chi_{q_i}$ then $(f,\chi_{q_i})_G = c_i$, so by assumption all the $c_i$s are non-negative integers. And indeed we get that $f$ is the character of $\rho\sim c_1q_1\oplus...\oplus c_kq_k$.
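To see the orthonormality argument numerically, here is an illustrative Python check (not part of the original answer) for the cyclic group of order 3: with $f=1\cdot\chi_0+2\cdot\chi_1$, the inner products $(f,\chi_j)_G=\frac{1}{|G|}\sum_g f(g)\overline{\chi_j(g)}$ recover the coefficients $1, 2, 0$.

```python
import cmath

n = 3                             # the cyclic group Z/3
w = cmath.exp(2j * cmath.pi / n)  # primitive 3rd root of unity
# Irreducible characters of Z/3: chi_j(g) = w^(j*g).
chi = [[w ** (j * g) for g in range(n)] for j in range(n)]

def inner(f, h):
    # (f, h)_G = (1/|G|) * sum over g of f(g) * conj(h(g))
    return sum(fg * hg.conjugate() for fg, hg in zip(f, h)) / n

f = [chi[0][g] + 2 * chi[1][g] for g in range(n)]  # f = chi_0 + 2*chi_1
coeffs = [inner(f, chi[j]) for j in range(n)]
print([round(c.real) for c in coeffs])  # [1, 2, 0]
```

Up to floating-point error the coefficients come back as non-negative integers, exactly as the criterion in the question requires.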
|
{}
|
% -*-texinfo-*-
% Follow the following instructions to print the VIP manual.
%
% Run tex on this file:
%
%     tex vip.texinfo
%
% This creates vip.dvi and some files for cross references and
% indices. Since the manual contains key index and concept
% index, it is necessary to create sorted index files for
% them. It is also necessary to edit the file vip.kys. This
% is done as follows.
%
%     texindex vip.ky vip.cp
%     sed -e '/\\initial/ d' -e 's/{[^ ]* /{/' vip.kys > tmp
%     mv tmp vip.kys
%     tex vip.texinfo
%
% The dvi file created by the second run of tex can be used
% for printing.
\input texinfo
VIP
A Vi Package for GNU Emacs (Version 3.5, September 15, 1987)
Masahiko Sato
|
{}
|
# How would I say two triangles are congruent? [duplicate]
There is a sign to signify that two triangles are congruent. It looks like this: ≅. How would I produce this in LaTeX?
## marked as duplicate by Werner, Au101, Stefan Pinnow, user36296, Phelype OleinikJan 22 at 19:45
• \usepackage{amssymb} and then \cong... – Werner Jan 22 at 18:52
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amssymb}
\begin{document}
\section{Introduction}
$\triangle A \cong \triangle B$
\end{document}
Output:
Use the \usepackage{amssymb} package and the \cong command.
I strongly recommend consulting the List of LaTeX mathematical symbols link. It will help you a lot.
• Why do you need to include \usepackage[utf8]{inputenc}? – Werner Jan 22 at 19:10
• Oh, that's not necessary for your problem. – Encipher Jan 22 at 19:19
|
{}
|
# Nanophotonics
Editor-in-Chief: Sorger, Volker
12 Issues per year
CiteScore 2017: 6.57
IMPACT FACTOR 2017: 6.014
5-year IMPACT FACTOR: 7.020
In co-publication with Science Wise Publishing
Open Access
Online ISSN: 2192-8614
# Magneto-optical response in bimetallic metamaterials
Evangelos Atmatzakis¹, Nikitas Papasimakis¹ (corresponding author), Vassili Fedotov¹, Guillaume Vienne²,³ and Nikolay I. Zheludev¹,⁴
• ¹ Optoelectronics Research Centre and Centre for Photonic Metamaterials, University of Southampton, Southampton SO17 1BJ, UK
• ² Data Storage Institute, Agency for Science, Technology and Research (A*STAR), Singapore 117608, Singapore
• ³ School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
• ⁴ The Photonics Institute and Centre for Disruptive Photonic Technologies, Nanyang Technological University, Singapore 637371, Singapore
Published Online: 2017-07-22 | DOI: https://doi.org/10.1515/nanoph-2016-0162
## Abstract
We demonstrate resonant Faraday polarization rotation in plasmonic arrays of bimetallic nano-ring resonators consisting of Au and Ni sections. This metamaterial design allows the optimization of the trade-off between the enhancement of magneto-optical effects and plasmonic dissipation. Nickel sections corresponding to as little as ~6% of the total surface of the metamaterial result in magneto-optically induced polarization rotation equal to that of a continuous nickel film. Such bimetallic metamaterials can be used in compact magnetic sensors, active plasmonic components, and integrated photonic circuits.
This article offers supplementary material which is provided at the end of the article.
## 1 Introduction
The ability to tailor light-matter interactions is equally important for the development of current and future technologies (telecommunications, sensing, data storage), as well as for the study of the fundamental properties of matter (spectroscopy). A typical example involves the exploitation of magneto-optical (MO) effects, where quasistatic magnetic fields can induce optical anisotropy in a material. This is a direct manifestation of the Zeeman effect, the splitting of electronic energy levels due to interactions between magnetic fields and the magnetic dipole moment associated with the orbital and spin angular momentum [1]. This energy splitting gives rise to numerous polarization phenomena, such as magnetically induced birefringence and dichroism, which enable dynamic control over the polarization state of light.
In recent years, magnetoplasmonics, the study of systems that combine plasmonic and magnetic properties of matter, holds promise to both enhance MO effects and to enable non-reciprocal, magnetic-field-controlled plasmonic devices [2], [3], [4], [5], [6], [7]. Ferromagnetic metals, such as Fe, Ni, and Co, are known to be magneto-optically active but have poor plasmonic properties in the near-infrared (NIR) spectral range due to high Joule losses. In contrast, noble metals suffer less from loss, but exhibit negligible MO response. Hence, by combining ferromagnetic and noble metals, one can construct a system with strong MO response. Indeed, early studies identified dramatic enhancement of the MO response due to strongly localized electromagnetic fields associated with plasmonic resonances. In particular, enhanced Faraday effect (FE) and magneto-optical Kerr effect (MOKE) have been observed in hybrid devices that simultaneously support plasmonic resonances and are MO active [8], [9], [10]. Furthermore, nanostructuring of magnetoplasmonic systems provides control over the frequency dispersion and phase of MO effects [3], [6], [7], [11], [12], [13], [14] while maintaining a compact form factor. In bulk homogeneous media, strong MO effects require long interaction lengths between light and matter. However, the light confinement originating from nanostructuring can offer similar response at significantly smaller dimensions, an often required property for the realization of compact modulators, isolators, and circulators [15]. On the other hand, the presence of external magnetic fields can be employed to control the plasmonic response, a desirable functionality in sensing applications [16], [17].
Magnetoplasmonic metamaterials, consisting of arrays of plasmonic resonators hybridized with MO materials, employ resonant plasmonic fields to maximize the enhancement of MO effects. The MO active material can be introduced either as substrate/superstrate or as part of the plasmonic resonator. Since the enhancement of MO effects occurs mainly where the resonant plasmonic fields overlap with the MO active components, very compact magnetoplasmonic devices can be realized. Here, we implement for the first time a design for magnetoplasmonic metamaterials, where the MO active component is integrated directly into the plasmonic resonators. The MO response is provided by a small Ni section, which is combined with a gold split-ring to form a bimetallic ring resonator (as illustrated in Figure 1A). This design allows the optimization of the trade-off between dissipation loss (which weakens the plasmonic fields) and the strength of the MO response. In particular, by increasing the size of the Ni section, dissipation loss increases due to the presence of a larger (lossy) Ni section. At the same time, the strength of the MO response is expected to increase, as the Ni sector also provides the MO response. Varying the composition of the ring allows one to maximize the strength of magnetoplasmonic response of the system. Moreover, changing the wavelength or polarization of the incident light allows one to shift the area of light confinement along the resonator and thus control the MO response. Finally, our approach is based on continuous metallic nanostructures and enables not only to realize complex resonators with prescribed MO and plasmonic response but also to exploit other types of physical response involving thermal and electric effects [18], [19].
Figure 1:
Linear optical properties of bimetallic ring resonator arrays.
(A) Schematic of a bimetallic metamaterial array. In the presence of an external magnetic field (represented by the green lines), the metamaterial induces rotation of the polarization azimuth angle, φ, on the incident beam. The inset is a scanning electron microscope (SEM) image of a fabricated sample. (B) Characteristic transmission spectra of a metamaterial sample with Ni sector that spans 90° illuminated with light polarized in the symmetry plane of the ring. Similar resonant behavior is observed for the orthogonal polarization (see Figure 1 of Supplementary Material). The dashed line is obtained by a computational analysis of an infinite 2D array, while the solid line corresponds to the experimentally measured spectra of a 100×100 μm2 metamaterial sample. Inset: Electric field distribution in the vicinity of the bimetallic ring at resonance (λ=850 nm). (C) Transmittance of regular arrays consisting of bimetallic rings with Ni sector varying from 0° to 360°, calculated numerically. The three colored lines correspond to different fabricated samples (90° – red/III, 135° – green/II and 180° – blue/I).
## 2 Results and discussion
A strong plasmonic resonance can be excited in a bimetallic metamaterial array under illumination with light polarized in the plane of symmetry of the resonators (yz plane – see Figure 1A). The characteristic case of a metamaterial with a 90° Ni sector is presented in Figure 1B, where the resonance is seen as a transmission dip at a wavelength of ~850 nm. The experimental transmission spectrum (solid red line) is in good agreement with the results of numerical modeling (dashed red line). At the resonance, the electric field profile exhibits two “hot spots” of charge concentration with currents oscillating in-phase between the hot spots (see inset to Figure 1B). This symmetric current configuration corresponds to the lowest order electric dipole mode of the ring. Owing to the symmetry of the bimetallic ring resonators and the absence of splits, only symmetric current configurations can be excited by an incident wave, while anti-symmetric excitations commonly encountered in the context of Fano resonances are not allowed [20]. Scattering and dissipation in the bimetallic metamaterial system are determined by the regions of high current density. Strong currents in the Ni sector reduce the total power scattered at resonance, as nickel is less conductive and exhibits higher Joule losses than gold. Accordingly, increasing the Ni sector size leads to damping of the resonance and to a decrease of its Q-factor, as can be deduced from Figure 1C. Such changes in the ring composition also alter the effective permittivity of the hybrid structure and shift the position of the resonance towards shorter wavelengths.
In the presence of an external magnetic field (H), the permittivity tensor of Ni becomes non-diagonal, which affects the polarization state of the light transmitted through the sample. The rotation of polarization azimuth induced by the magnetic field was experimentally measured, and the results are shown in Figure 2. For all three arc lengths of the Ni sector (90°, 135°, and 180°), the Faraday rotation spectra follow closely the linear optical response with the maxima of rotation occurring close to the plasmonic resonance frequency. Importantly, smaller Ni sectors lead to a stronger FE; reducing the arc length of the Ni sector by 50% (from 180° to 90°) leads to a 40% increase in peak rotation (from 0.29 mrad to 0.41 mrad). Simultaneously, the linewidth of the Faraday rotation resonance decreases significantly. These features are reproduced by finite element simulations (dashed curves) following a normalization of the numerical transmittance spectra to the experimental ones (described in the Materials and methods section). Moreover, in the numerically obtained Faraday spectra, the peak position experiences a relatively minor blue shift as the Ni section angle is varied from 90° to 180°. This is a direct result of changes in the effective permittivity of the ring. In the case of the experimental measurements, the Faraday maximum follows a non-monotonic dependence on the Ni sector arc length. We attribute this discrepancy to fabrication imperfections including inhomogeneities in the manufactured arrays and departures of the Ni sector from an ideal circular arc shape.
Figure 2:
Dots describe the experimental results for samples with Ni sectors that span over 90° (red), 135° (green), and 180° (blue) in the presence of a 100-mT external magnetic field. Solid lines act as guide for the eye. Dashed lines represent the corresponding computational results when transmission is normalized to the experimental values (see Materials and methods). The dashed grey line follows the peak position of simulation data, as seen in Figure 4A, while dot-dashed line shows the corresponding trend for the experimental results.
The counterintuitive dependence of the MO activity on the Ni sector arc length suggests that the Faraday rotation is controlled by resonant field enhancement in the vicinity of the Ni section. In particular, the incident electric field excites resonantly the Au part of the ring, which in turn induces currents in the Ni sector along the circumference of the arc. Owing to the off-diagonal elements of the Ni permittivity tensor in the presence of an external magnetic field, the induced currents in the Ni sector are coupled to radial charge oscillations resulting in a net component orthogonal to the polarization of the incident wave [21]. Hence, the bimetallic rings radiate fields polarized both parallel and orthogonal to the electric field of the incident wave leading to the observed Faraday rotation. In order to quantify the enhancement of Faraday rotation, we compare the MO response of the metamaterial to that of a continuous Ni film. As the latter exhibits negligible transmission, a straightforward comparison of the FE in these two systems is not instructive. Instead, as a figure of merit (FOM), we use the scattered electric field component, which is perpendicular to the incident polarization, normalized to the incident electric field and to the Ni filling factor. The FOM is calculated as $\text{FOM}=\frac{|{E}_{\text{sc}}|\text{sin}\left(\varphi \right)}{|{E}_{\text{inc}}|f}\simeq \frac{|{E}_{\text{sc}}|\varphi }{|{E}_{\text{inc}}|f}$ (in the small angle approximation), where ϕ is the Faraday rotation angle, f is the Ni filling factor, Esc and Einc are the amplitudes of the scattered and incident electric fields, respectively. This FOM allows for a comparison between the MO response of the metamaterial in transmission and that of a continuous nickel film in reflection. 
Taking into account that the metamaterial sample with a 90° Ni sector has a peak rotation of ~0.41 mrad at 880 nm, transmittance of ~10%, and Ni filling factor of 6%, the resulting value is ${\text{FOM}}_{\text{MM}}\simeq 2.16\times {10}^{-3}$. In comparison, a continuous 60-nm thick Ni film exhibits a reflectance of 70% and an MO rotation of 0.4 mrad at the same wavelength, resulting in ${\text{FOM}}_{\text{Ni}}\simeq 0.33\times {10}^{-3}$. Hence, the metamaterial leads to an enhancement of MO activity of $\frac{{\text{FOM}}_{\text{MM}}}{{\text{FOM}}_{\text{Ni}}}\simeq 6.5.$ If we use the power of incident and scattered waves instead of the electric field amplitudes for calculating the FOM, the enhancement reduces to ~2.55. Remarkably, at the resonance frequency, the metamaterial rotates the polarization of the incident wave as strongly as a nickel film of the same thickness, despite a nickel filling factor of only 6%.
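The figure-of-merit arithmetic above can be reproduced directly from the quoted numbers (a sketch; the field-amplitude ratios are approximated here as the square roots of the quoted transmittance and reflectance):

```python
import math

# Metamaterial with 90° Ni sector: rotation ~0.41 mrad at 880 nm,
# transmittance ~10%, Ni filling factor 6% (values quoted in the text).
phi_mm = 0.41e-3                 # Faraday rotation (rad)
amp_mm = math.sqrt(0.10)         # |E_sc|/|E_inc| from 10% transmittance
f_mm = 0.06                      # Ni surface filling factor
fom_mm = amp_mm * phi_mm / f_mm  # ~2.16e-3

# Continuous 60-nm Ni film: reflectance 70%, rotation 0.4 mrad, filling factor 1.
phi_ni = 0.4e-3
amp_ni = math.sqrt(0.70)
fom_ni = amp_ni * phi_ni / 1.0   # ~0.33e-3

enhancement = fom_mm / fom_ni    # ~6.5 using field amplitudes

# With powers instead of amplitudes, the square roots drop out;
# the rounded inputs used here give ~2.4 (the text reports ~2.55).
enhancement_power = (0.10 * phi_mm / f_mm) / (0.70 * phi_ni)
```

The small spread between ~2.4 and the quoted ~2.55 simply reflects the rounding of the input values above.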
Owing to the anisotropic nature of the bimetallic rings, the MO response depends strongly on the polarization of the incident wave. The geometrical position and span of the two metal segments create a system with a single plane of symmetry, deviating from which affects both the plasmonic and MO response. In Figure 3B, we can see that the geometrical anisotropy of the sample leads to polarization conversion (in the absence of external static magnetic fields). Varying the angle of polarization azimuth of the excitation beam from 0° to 360° results in a four-lobe rosette with the points of zero conversion at 0°, 90°, 180°, and 270°. These points correspond to the polarization eigenstates of the system. In contrast, the effect of anisotropy becomes maximum at n×45° (for n=1, 3, 5, 7), and it results in a rotation of the incident polarization state by about 100 mrad. The introduction of an external magnetic field creates an additional contribution to the polarization rotation due to magnetically induced anisotropy, which leads to an offset of the total rotation. For opposite directions of magnetic fields, this effect appears as a splitting of the rotation curve, where the separation between the two curves corresponds to the Faraday rotation (see Figure 3A). The angle of the incident polarization azimuth shifts the current distribution along the ring (across areas of different dielectric properties), which significantly affects the plasmonic enhancement of the Faraday rotation. For angles of incident polarization that drive currents through more lossy areas (nickel), the plasmonic resonance is damped and, thus, the Faraday rotation is reduced by almost 50% (Figure 3C). Here, we attribute the departure of the experimental curve from the theoretical one to small shifts of the beam within the sample in combination with the structural inhomogeneities in the metamaterial arrays.
Figure 3:
Sample optical anisotropy.
(A–B) Rotation of the polarization azimuth of a beam transmitted through the sample (135° Ni sector). Red lines represent computational analysis, and blue lines stand for experimental results. Solid and dashed lines represent opposite directions of external magnetic fields. θ is the angle of polarization azimuth in reference to the symmetry axis of the metamaterial array. (C) The effect due to the presence of the external magnetic field appears as the difference between the solid and dashed lines plotted in (A) and (B). The red line corresponds to numerical results, while blue circles represent experimental measurements. The blue line is a guide for the eye.
A detailed numerical study of the effects of the ring composition and the incident wave polarization on the Faraday rotation is presented in Figure 4A and B, respectively. In accordance with the experimental results of Figure 2, Figure 4A shows that the size of the nickel sector strongly impacts the Faraday rotation. Indeed, increasing the size of the Ni sector leads to an increase both in the dissipation loss (hence, a decrease in plasmonic field enhancement) and in the filling factor of the MO active material. These two competing mechanisms result in an optimum ring composition with an Ni sector of ~50°, away from which the Faraday rotation rapidly decreases. Moreover, rings with Ni sectors smaller than 180° are highly sensitive to the polarization of the incident wave (see Figure 4B). For polarization along the y-axis (see inset to Figure 1A), the resonant currents flow mainly in the Au sector, which leads to a strong plasmonic resonance and (through field enhancement) to a high value of the Faraday rotation. When the polarization is rotated by 90°, the current density in Ni increases, which damps the MO response. Conversely, for rings with large Ni sectors, the situation reverses and polarization angles normal to the symmetry plane of the ring offer a stronger MO response. In this case, due to the weaker plasmonic resonances, the modulation depth is reduced.
Figure 4:
Bimetallic ring composition study.
(A) Faraday rotation spectra calculated for different ring compositions. (B) At resonance, denoted with dashed line on (A), Faraday rotation is calculated for different polarization angles of the incident beam, and an external static magnetic field of 100 mT.
The dependence of the metamaterial MO response on the strength of the external magnetic field was investigated by hysteresis measurements, where the magnetic field intensity was varied while its orientation was maintained. The measurements reveal an almost linear magnetic field dependence and zero coercivity (Figure 5). These observations are explained by considering that the magnetic field is oriented normal to the plane of the nano-rings and that the field strength used in the experiment is not sufficient to saturate their magnetization. This behavior is strongly supported by the micromagnetic simulations, represented by the solid lines in Figure 5. The (numerically calculated) magnetization is connected by a linear relation to the (experimentally measured) Faraday rotation allowing, thus, for a direct comparison.
Figure 5:
Hysteresis in bimetallic ring resonators.
Simulated (solid lines) and experimentally measured (markers) hysteresis in bimetallic rings with 90° (red) and 135° (green) Ni sector. Simulated data correspond to the magnetization of the system, while experimental results are presented in the form of Faraday rotation, which is proportional to the former.
## 3 Conclusion
We have introduced bimetallic nano-ring resonators as compact building blocks for MO devices. In particular, we have demonstrated experimentally multi-fold enhancement of the Faraday rotation in metamaterial arrays of such resonators where a nickel (surface) filling factor of 6% is sufficient to achieve the same MO effect as a continuous film of nickel. We also studied the dependence of the effect on external static magnetic fields, polarization of the incident wave, and wavelength as means of controlling the MO response of the metamaterial. We expect bimetallic metamaterials to find applications in refractive index and magnetic field sensing, as well as in compact active components for integrated nanophotonic circuits. In addition, this design has been shown to support transient thermoelectric currents that give rise to magnetic pulses [18], [19].
## 4 Materials and methods

## 4.1 Fabrication
The materials of choice for the bimetallic metamaterial system are Au and Ni, which comprise two arc sectors that form a complete ring (Figure 1A). The angle over which the Ni sector spans controls the composition of the ring. Three samples have been fabricated on a glass substrate by electron-beam lithography with Ni sectors of 90°, 135°, and 180°. The mean diameter of the rings is 185±5 nm with a linewidth of 65 nm and a height of 60 nm. Each sample comprises a regular array of 250×250 unit cells, positioned with a period of 400 nm and covering a total area of 100×100 μm2. The fabrication process is separated into two steps, one for each of the two metals. In each step, metals are thermally evaporated on the substrate with a thickness of 60 nm and subsequently lifted off to reveal the design. A 50-nm-thick layer of co-polymer deposited underneath 200 nm of PMMA accelerates the lift-off process. The Ni part is deposited after Au to minimize any oxidization in the junctions between the two metals. Here, careful design and alignment are crucial in order to achieve good contact at the junctions. Discrepancies between the designed and fabricated samples can be attributed to the surface roughness, which is identified mainly in the Ni sector and has an RMS value that varies from 5 to 20 nm between individual unit cells. As Ni tends to form large grains, nanostructures suffer from small deformations, especially in the z-direction, due to the constraints that the e-beam lithography mask enforces in the x-y plane. Nevertheless, such deformations do not alter the plasmonic properties in a significant way as the latter mostly depend on the resonator dimensions in the plane perpendicular to light propagation (x-y plane). Furthermore, these artefacts are inconsistent across each sample and thus lead only to a small inhomogeneous broadening of the plasmonic resonances.
Finally, we would like to note that the response of the bimetallic ring metamaterials does not depend strongly on the thickness of the metallic film. An increase of thickness would result in a minor enhancement of the magnetoplasmonic response due to an increase in resonance strength of the Au sections and changes in the magnetic shape anisotropy of the Ni sections [22].
## 4.2 Experimental characterization
The characterization of the linear optical properties in the vis-NIR range of the fabricated samples was performed using a commercial microspectrometer (CRAIC Technologies Inc., San Dimas, CA, USA). A sensitive polarimeter setup, described in Ref. [23], was used in order to probe angles of polarization rotation with a precision of 10⁻⁵ mrad. The setup consists of a polarization modulator (Faraday modulator) positioned between two crossed polarizers. The sample is placed between the Faraday modulator and the second polarizer. The rotation can be derived by simultaneous lock-in detection at both the fundamental frequency and the second harmonic of the modulator. For incident laser power Ps, the detected signal can be described by the following equation:
$P_{\text{out}}=\frac{P_s}{2}\left(-4A\varphi\cos(\omega t+\psi)+A^2\cos(2\omega t+2\psi)+A^2\right),$(1)
where A is the rotation induced from the Faraday modulator, ω is its fundamental frequency, ψ is the arbitrary phase of the modulation, and φ is the polarization angle rotation we want to probe.
The two frequency components (the fundamental-frequency signal F and the second-harmonic signal S) are recorded by lock-in detection. The rotation is derived as follows:
$\varphi=-\frac{F}{S}\frac{A}{4}.$(2)
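Equations (1) and (2) can be sanity-checked numerically: synthesize the detected signal for a known rotation φ, extract the in-phase components at ω and 2ω, and recover φ. The modulator parameters below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical parameters (not values from the actual setup).
Ps = 1.0                 # incident laser power (arb. units)
A = 0.05                 # Faraday-modulator rotation amplitude (rad)
phi_true = 4e-4          # polarization rotation to recover (rad)
f_mod = 1e3              # modulator frequency (Hz)
omega = 2 * np.pi * f_mod
psi = 0.3                # arbitrary modulation phase (rad)

# Sample exactly ten modulation periods.
t = np.linspace(0.0, 10 / f_mod, 200000, endpoint=False)

# Eq. (1): power at the detector behind the crossed analyzer.
Pout = Ps / 2 * (-4 * A * phi_true * np.cos(omega * t + psi)
                 + A**2 * np.cos(2 * omega * t + 2 * psi) + A**2)

def lock_in(signal, ref):
    # In-phase component: average over an integer number of periods.
    return 2 * np.mean(signal * ref)

F = lock_in(Pout, np.cos(omega * t + psi))           # fundamental
S = lock_in(Pout, np.cos(2 * omega * t + 2 * psi))   # second harmonic

phi_rec = -F / S * A / 4   # Eq. (2)
```

The recovered value matches φ to numerical precision, confirming the convention of Eq. (2): F is proportional to −4Aφ, S to A², and taking their ratio cancels the laser power.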
The rotation spectrum was measured by using a tunable laser source (Mai Tai from Spectra-Physics, Santa Clara, CA, USA) and a photo-detector. The wavelength of the source was varied between 700 and 1000 nm, with a step of 10 nm. The external magnetic field was provided by ring-shaped neodymium magnets placed in the propagation direction, in such a way as to form a magnetic field with direction normal to the sample plane. In order to accommodate hysteresis measurements in the setup, the distance between the magnets and the sample was varied in controlled steps.
## 4.3 Numerical simulations
The plasmonic and MO properties of the bimetallic metamaterials were simulated using the finite element method implemented in a commercial solver (COMSOL Inc., Burlington, MA USA). An infinite 2D array of resonators is assumed to be positioned on a semi-infinite glass substrate. In the presence of external magnetic fields, the permittivity tensor of Ni becomes non-diagonal [24], [25], [26]. Limiting the magnetic field vector to a direction normal to the plane of the sample reduces the off-diagonal elements to only a single non-zero pair:
$\epsilon=\epsilon_r\begin{pmatrix}1&-i\beta&0\\i\beta&1&0\\0&0&1\end{pmatrix},$(3)
where ϵr is the (isotropic) complex relative permittivity of the magneto-optically active material and β is a frequency- and magnetic field-dependent MO parameter, obtained by interpolating values found in the literature [24], [27].
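The structure of Eq. (3) can be made concrete by diagonalizing the transverse (x-y) block of the tensor: its eigenvectors are the two circular polarizations, which experience slightly different permittivities ϵr(1±β), and the resulting splitting of the refractive indices is what produces Faraday rotation for propagation along z. A sketch with illustrative values (not the actual Ni parameters):

```python
import numpy as np

eps_r = 4.0 + 0.5j   # illustrative complex relative permittivity
beta = 1e-3          # illustrative magneto-optical parameter

# Transverse block of the permittivity tensor in Eq. (3).
eps_t = eps_r * np.array([[1.0, -1j * beta],
                          [1j * beta, 1.0]])

vals, vecs = np.linalg.eig(eps_t)
# Eigenvalues are eps_r*(1 ± beta); eigenvectors are circular polarizations.

# Index splitting of the two circular modes and the resulting
# Faraday rotation per unit length, theta = (pi / lambda) * Re(n+ - n-).
n_plus = np.sqrt(eps_r * (1 + beta))
n_minus = np.sqrt(eps_r * (1 - beta))
lam = 880e-9   # wavelength near the resonance discussed in the text (m)
theta_per_m = np.pi / lam * (n_plus - n_minus).real
```

In a bulk medium the rotation grows with propagation length; the nanostructuring discussed in the text replaces that length with resonant field enhancement.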
The Faraday effect depends strongly on the transmission of the medium, and even small differences between the experimentally and numerically obtained transmittance can lead to large discrepancies in the estimation of FE. In order to account for such effects, we normalize the amplitude of the calculated transmitted electric fields to the value we measure experimentally.
Hysteresis cycles of bimetallic nano-rings were simulated using OOMMF [28]. The magnetic parameters used are those commonly found in the literature for Ni [29]: a saturation magnetization of 0.47 MA/m and an exchange constant of 8.2 pJ/m. The maximum applied external field Hz is 100 mT, and it is oriented perpendicular to the plane of the sample. The mesh size was 4×4×4 nm3, thus allowing accurate simulations while keeping reasonable computational times. The results presented in Figure 5 show a typical hysteresis curve for soft, thick magnetic films (where perpendicular magnetic anisotropy is not present), although modulated due to the non-extended geometry of the Ni sectors.
## Acknowledgments
The authors would like to thank Anibal L. Gonzalez Oyarce for his advice on micromagnetic simulations and F. Javier Garcia de Abajo for numerous fruitful discussions. The authors acknowledge the support of the MOE Singapore (grant MOE2011-T3-1-005), the UK’s Engineering and Physical Sciences Research Council (grants EP/G060363/1, EP/M008797/1), and the Leverhulme Trust. The data from this paper can be obtained from the University of Southampton ePrints research repository: http://dx.doi.org/10.5258/SOTON/405057.
## References
• [1]
Zvezdin AK, Kotov VA. Modern magnetooptics and magnetooptical materials. CRC Press, New York, 1997.
• [2]
Safarov VI, Kosobukin VA, Hermann C, Lampel G, Peretti J, Marlière C. Magneto-optical effects enhanced by surface plasmons in metallic multilayer films. Phys Rev Lett 1994;73:3584–7.
• [3]
González-Daz JB, Garca-Martn A, Garca-Martn JM, et al. Plasmonic Au/Co/Au nanosandwiches with enhanced magneto-optical activity. Small 2008;4:202–5.
• [4]
Grunin AA, Zhdanov AG, Ezhov AA, Ganshina EA, Fedyanin AA. Surface-plasmon-induced enhancement of magneto-optical kerr effect in all-nickel subwavelength nanogratings. App Phys Lett 2011;97:261908.
• [5]
Wang L, Clavero C, Huba Z, et al. Plasmonics and enhanced magneto-optics in core-shell Co-Ag nanoparticles. Nano Lett 2011;11:1237–40.
• [6]
Chin JY, Steinle T, Wehlus T, et al. Nonreciprocal plasmonics enables giant enhancement of thin-film Faraday rotation. Nat Comm 2013;4:1599.
• [7]
Du GX, Mori T, Saito S, Takahashi M. Shape-enhanced magneto-optical activity: degree of freedom for active plasmonics. Phys Rev B Condensed Matter Mater Phys 2010;82:1–4.
• [8]
Feil H, Haas C. Magneto-optical kerr effect, enhanced by the plasma resonance of charge carriers. Phys Rev Lett 1987;58:65–8.
• [9]
Reim W, Weller D. Kerr rotation enhancement in metallic bilayer thin films for magneto-optical recording. App Phys Lett 1988;53:2453–4.
• [10]
Ferré J, Penissard G, Marliere C, Renard D, Beauvillain P, Renard JP. Magneto-optical studies of Co/Au ultrathin metallic films. App Phys Lett 1990;56:1588.
• [11]
Maccaferri N, Bergamini L, Pancaldi M, et al. Anisotropic nanoantenna-based magnetoplasmonic crystals for highly enhanced and tunable magneto-optical activity. Nano Lett 2016;16:2533–42.
• [12]
Belotelov VI, Akimov IA, Pohl M, et al. Enhanced magneto-optical effects in magnetoplasmonic crystals. Nat Nanotechnology 2011;6:370–6.
• [13]
Maccaferri N, Gregorczyk KE, de Oliveira TVAG, et al. Ultrasensitive and label-free molecular-level detection enabled by light phase control in magnetoplasmonic nanoantennas. Nat Comm 2015;6:6150.
• [14]
Kataja M, Pourjamal S, Maccaferri N, et al. Hybrid plasmonic lattices with tunable magneto-optical activity. Opt Express 2016;24:3652.
• [15]
Chau KJ, Irvine SE, Elezzabi AY. A gigahertz surface magneto-plasmon optical modulator. Quantum Elec IEEE 2004;40:571–9.
• [16]
Sepúlveda B, Calle A, Lechuga LM, Armelles G. Highly sensitive detection of biomolecules with the magneto-optic surface-plasmon-resonance sensor. Opt Lett 2006;31:1085–7.
• [17]
Zubritskaya I, Lodewijks K, Maccaferri N, et al. Active magnetoplasmonic ruler. Nano Lett 2015;15:3204–11.
• [18]
Tsiatmas A, Atmatzakis E, Papasimakis N, et al. Optical generation of intense ultrashort magnetic pulses at the nanoscale. New J Phys 2013;15:113035.
• [19]
Vienne G, Chen X, Teh YS, Ng YJ, Chia NO, Ooi CP. Novel layout of a bi-metallic nanoring for magnetic field pulse generation from light. New J Phys 2015;17:013049.
• [20]
Fedotov VA, Rose M, Prosvirnin SL, Papasimakis N, Zheludev NI. Sharp trapped-mode resonances in planar metamaterials with a broken structural symmetry. Phys Rev Lett 2007;99:147401.
• [21]
Maccaferri N, Berger A, Bonetti S, et al. Tuning the magneto-optical response of nanosize ferromagnetic Ni disk using the phase of localized plasmons. Phys Rev Lett 2013;111:167401.
• [22]
Atmatzakis E, Papasimakis N, Zheludev NI. Plasmonic absorption properties of bimetallic metamaterials. Microelectron. Eng. 2017;172:30–4.
• [23]
Bennett PJ. Novel polarization phenomena and their spectroscopic application in bulk solids and films. PhD thesis, University of Southampton, 1998.
• [24]
Goto T, Hasegawa M, Nishinomiya T, Nakagawa Y. Temperature dependence of magneto-optical effects in nickel thin films. J Phys Soc Jap 1977;43:494–8.
• [25]
Argyres PN. Theory of the Faraday and Kerr effects in ferromagnetics. Phys Rev 1955;97:334–45.
• [26]
Zharov AA, Kurin VV. Giant resonant magneto-optic Kerr effect in nanostructured ferromagnetic metamaterials. J App Phys 2007;102:123514.
• [27]
Snow C. The magneto-optical parameters of iron and nickel. Phys Rev 1913;2:29–38.
• [28]
Porter DG, Donahue MJ. OOMMF user's guide, version 1.0. Interagency Report NISTIR 6376, 1999.
• [29]
Talagala P, Fodor PS, Haddad D, et al. Determination of magnetic exchange stiffness and surface anisotropy constants in epitaxial Ni1−xCox(001) films. Phys Rev B 2002;66:144426.
## Supplemental Material:
The online version of this article (DOI: https://doi.org/10.1515/nanoph-2016-0162) offers supplementary material, available to authorized users.
Revised: 2016-11-30
Accepted: 2016-12-07
Published Online: 2017-07-22
Published in Print: 2018-01-01
Citation Information: Nanophotonics, Volume 7, Issue 1, Pages 199–206, ISSN (Online) 2192-8614,
# Induction Proof
## Homework Statement
The question asks me to prove inductively that 3^n ≥ n·2^n for all n ≥ 0.
## The Attempt at a Solution
I believe the base case is when n = 0, in which case this is true. However, I cannot for the life of me prove n = k+1 when n=k is true. I start with:
$3^k ≥ k2^k$
and then try:
$3^{k+1} ≥ (k+1)2^{k+1}$ which gets me nowhere.
I then try:
$3^{k+1} ≥ 3k2^k$
Whoops the Latex is all whack. Edit: Thanks
Mark44
Mentor
## Homework Statement
The question asks me to prove inductively that $3^n \ge n2^n$ for all $n \ge 0$.
## The Attempt at a Solution
I believe the base case is when n = 0, in which case this is true. However, I cannot for the life of me prove n = k+1 when n=k is true. I start with:
$3^k ≥ k2^k$
and then try:
$3^{k+1} ≥ (k+1)2^{k+1}$ which gets me nowhere.
This is unrelated to what you need to do.
Hat1324 said:
I then try:
$3^{k+1} ≥ 3k2^k$
The above is what you need to show, so you can't just start with it.
Start with $3^{k + 1}$, which is the same as $3 \cdot 3^k$, and use what you have already assumed in your induction hypothesis (i.e., that $3^k \ge k \cdot 2^k$).
Hat1324 said:
Whoops the Latex is all whack
It's fine, but you were doing it wrong. If an exponent has more than one character, surround it with braces, as in 3^{k + 1}. This renders as $3^{k + 1}$.
I think I've explained my steps a little wrong. The first thing I did after the base case was assume $$3^k≥k2^k$$
Then I setup $$3^{k+1}=3(3^k)≥3(k2^k)≥...$$
But I simply cannot find something to substitute ... that is actually true
PeroK
Homework Helper
Gold Member
Have you heard of proof by contradiction?
I have and I wish. But we're required to prove this directly using induction
PeroK
Homework Helper
Gold Member
There's no rule that says you can't use a contradiction argument as part of proving the inductive step!
Heh. I'm following as best I can... I really don't follow. :P I can't see how $∃\, 3^{k+1} \le (k+1)2^{k+1}$ is any easier to disprove than $∀\, 3^{k+1} \ge (k+1)2^{k+1}$ is to prove.
PeroK
Homework Helper
Gold Member
It's almost always a good idea if you're stuck proving something to turn it round and assume what you're trying to show is false and see where that leads.
So, if:
$3^k > k2^k$
But
$3^{k+1} < (k+1)2^{k+1}$
BvU
Homework Helper
2019 Award
I think I've explained my steps a little wrong. The first thing I did after the base case was assume $$3^k≥k2^k$$
Then I setup $$3^{k+1}=3(3^k)≥3(k2^k)≥...$$
But I simply cannot find something to substitute ... that is actually true
PeroK calls it contradiction; you could also try what I call working from the other end: write out $(k+1)2^{k+1}$ and see what you can strike off
OK. I think I have it. So what I ended up doing was brute forcing a base case of $0 \le n \le 2$ and testing it for 0, 1, and 2 (which holds). Then I showed that for all $k \ge 2$
$3^{k+1} = 3*3^k$
$2^{k+1} = 2*2^k$
$k+1 \le (1.5)k$
This got me to derive $3^{k+1} \ge (k+1)2^{k+1} \rightarrow (3)3^k \ge (1.5)(2)k2^k$ which is $(3)3^k \ge (3)k2^k$ which is obviously true. Does what I did count as induction?
Last edited:
Mark44
Mentor
OK. I think I have it. So what I ended up doing was brute forcing a base case of $0 \le n \le 2$ and testing it for 0, 1, and 2 (which holds).
You need only one base case.
Hat1324 said:
Then I showed that for all $k \ge 2$
$3^{k+1} = 3*3^k$
$2^{k+1} = 2*2^k$
$k+1 \le (1.5)k$
This got me to derive $(3)3^k \ge (1.5)(2)k2^k$ which is $(3)3^k \ge (3)k2^k$ which is obviously true. Does what I did count as induction?
I don't think so. As I said earlier, assume that the statement is true for n = k. Then use this assumption to show that the statement is true for n = k + 1. The work you have done above might be useful in doing this.
Yeah I know I only need one base case, but I couldnt do the work above without showing n=0,1,2 so I included them anyway
Ray Vickson
Homework Helper
Dearly Missed
## Homework Statement
The question asks me to prove inductively that $3^n \ge n2^n$ for all $n \ge 0$.
## The Attempt at a Solution
I believe the base case is when n = 0, in which case this is true. However, I cannot for the life of me prove n = k+1 when n=k is true. I start with:
$3^k ≥ k2^k$
and then try:
$3^{k+1} ≥ (k+1)2^{k+1}$ which gets me nowhere.
I then try:
$3^{k+1} ≥ 3k2^k$
Whoops the Latex is all whack. Edit: Thanks
You can write the desired statement as $(3/2)^n \geq n$; that may be a lot easier to deal with.
BvU
Homework Helper
2019 Award
I think I've explained my steps a little wrong. The first thing I did after the base case was assume $$3^k≥k2^k$$
Then I setup $$3^{k+1}=3(3^k)≥3(k2^k)≥...$$
But I simply cannot find something to substitute ... that is actually true
Let me go back to where you were after the first step: If $3^k≥k2^k$ is true, then you want to prove $3^{k+1}\ge (k+1)2^{k+1}$ and you started well with $3^{k+1}=3\;3^k$.
Now my suggestion (and in fact PerOK's as well) was to write out $(k+1)2^{k+1}$ and see what happens. Not so much as a sequence of $\ge$ , but as a sum of terms you might be able to strike off:$$(k+1)2^{k+1}= k\;2^{k+1} +2^{k+1}= 2k\;2^k + 2\;2^k$$Do you see what you can strike off (because it's assumed to satisfy the $\ge$ ) ? And what remains ?
If you can prove that remainder also satisfies the $\ge$ , then you have the ingredients to write it down in a decent sequence of .. is true for k = 0 & if .. then .. , therefore .. is true for all k.
But I think there's a nice little snag built in that forces you to backtrack one step with this approach: it doesn't go smoothly for small k. So you have to bootstrap the induction (true for k = .., .. and from then on induction).
Sorry about the vague description; I don't want to give it all away.
Open for more elegant approaches :)
Oh boy, Ray's post was right in front of me. More coffee, please...
Last edited:
BvU
Homework Helper
2019 Award
OK. I think I have it. So what I ended up doing was brute forcing a base case of $0 \le n \le 2$ and testing it for 0, 1, and 2 (which holds). Then I showed that for all $k \ge 2$
$3^{k+1} = 3*3^k$
$2^{k+1} = 2*2^k$
$k+1 \le (1.5)k$
This got me to derive $3^{k+1} \ge (k+1)2^{k+1} \rightarrow (3)3^k \ge (1.5)(2)k2^k$ which is $(3)3^k \ge (3)k2^k$ which is obviously true. Does what I did count as induction?
More egg , since you did as I suggested, and did quite well.
Let me recapitulate (a chance for still more egg :) ):
To me it counts as induction, especially if you write it down in a neat order:
You need to prove $P_k : 3^k > k 2^k$ for all k $\ge 0.$ You show $P_0, P_1, P_2$ and for k $\ge$ 2 you show
$$P_k \Rightarrow 3\; 3^k \ge 3k\;2^k \Rightarrow 3^{k+1} \ge 1.5\;k\;2^{k+1} \Rightarrow 3^{k+1} \ge (k+1)\;2^{k+1} = P_{k+1}$$Therefore $P_k \; \forall {k\ge 0}$
It's a slightly liberal interpretation of what my (1971!) math book calls 'an analogon of full induction' : if $P_{n_0}$ for a certain integer $n_0$, and .. etc.
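The claim itself is easy to sanity-check numerically before (or after) writing the induction. A short Python sketch, purely illustrative and no substitute for the proof:

```python
def holds(n):
    """Check the claimed inequality 3**n >= n * 2**n for a single n."""
    return 3**n >= n * 2**n

# Verify the base cases n = 0, 1, 2 used in the bootstrapped induction,
# plus a larger range for good measure.
print(all(holds(n) for n in range(0, 50)))  # → True
```

This also makes Ray's reformulation visible: the ratio $(3/2)^n/n$ grows without bound, so no failure appears for larger n.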
|
{}
|
Thursday, November 4, 2010
What is the Correct Stock Price?
How is a stock's price determined? I looked at SmartMoney.com's price evaluator and here's the definition: "Our Price Check Calculator can help you estimate a fair price to pay for a stock based on three main things: the company's earnings, the rate at which those earnings are projected to grow and the stock's volatility." So, it's determined using earnings, projected earnings growth rate, volatility. I also looked at MoneyChimp and they had a formula I got lost in. I went to Wikipedia and found this for the P/E ratio, just part of what goes into determining a stock's price:
$\mbox{P/E ratio}=\frac{\mbox{Price per Share}}{\mbox{Annual Earnings per Share}}$
I went through all that for a reason. The pieces of each formula are reported quarterly. So if either of these is the "correct" formula for determining stock prices, why do stocks fluctuate by the minute? For example, if the formula was A x B + C/(DxE) = T, if A thru E don't change, then T should not change, right? What if T constantly fluctuates? That would mean the formula must be wrong. I think the very smart people that came up with these formulas were trying to get close to actual price of a stock, using all known current information, and they explain their misses as buying opportunities (the stock is priced lower than the formula determines) or buying at a premium (the stock is priced higher than the formula determines).
Here's a very loose example. Say I decide I want a way to predict/estimate how heavy passenger vehicles are that come down a certain road. I assume a certain load per vehicle based on the tire size and multiply by 4. Later, when I check my results against the actual, I find that sometimes my results are too high, sometimes too low. So I decide that the formula is right, the tires are over or underinflated. The problem is not with my formula, reality is wrong. Yeah, that makes all the sense in the world. But that's the equivalent of what's being said when the experts say a stock is over priced or under priced based on the assets of the company, projected sales, etc. The price of the stock reflects what someone is willing to pay for it at that moment. Period.
Speaking of charts, I put the following chart together comparing the prices of all my holdings since the beginning of the year. I haven't held each of them that long, but I wanted to see in general which way my portfolio was heading. What this tells me is that, with few exceptions, my portfolio, and probably the market in general has been headed up most of the year. I think we're probably about to have a good run for maybe the next year or two. Fortunately I'm in the game.
|
{}
|
# Math Help - Function word problems
1. ## Function word problems
Hi, I need help on a couple of problems. I'll be having my math mid-terms on the 26th, which is a Friday, and we were told by the teacher that some of the questions appearing in the test would be similar to the problems below. Specifically, I need help on four of them.
1) 1000 copies of a souvenir program will be sold if the price is $50, and the number of copies sold decreases by 10 for each $1 added to the price. Write a function that determines gross revenue as a function of x. What price yields the largest gross sales revenue and what is the largest gross revenue? How many copies must be sold to yield the maximum gross revenue?

2) A rectangular field is fenced along a river bank, which is not fenced. The material for the fence costs $12 per foot for the side parallel to the river, while it costs $8 per foot for the two other sides. What are the dimensions of the largest possible rectangular field that can be enclosed with a budget of $3600? Assume all material would be used up.

3) A waterway whose cross-section is a parabola is being designed for a theme park boat ride. It has a depth of 12 ft. and a width of 6 ft. at the surface. Rectangular cross-sectioned boats are to be used, and they sink to a depth of 4 ft. What should be the proper width of the boat to facilitate easy passage? [Note: parabola opens upwards]

4) Describe each of the equations in the given non-linear system.
a) (x-3)^2 = 9 -- y + 9 2
b) (x-3)^2 t y^2 = 4

2. Originally Posted by archistrategos214
1) 1000 copies of a souvenir program will be sold if the price is $50, and the number of copies sold decreases by 10 for each $1 added to the price. Write a function that determines gross revenue as a function of x. What price yields the largest gross sales revenue and what is the largest gross revenue? How many copies must be sold to yield the maximum gross revenue?

The sales at price $x$ are given by the equation: $S(x)=1000-10(x-50)$, and the gross revenue then is: $R(x)=x\,S(x)=1500\,x-10\,x^2=x(1500-10\,x)$
Thus the graph of revenue against price is a parabola that opens downwards, and has its maximum midway between its roots. The roots of the quadratic are $x=0$ and $x=150$, so the price that maximises revenue is $\$75$.
Now plug the $\$75$ into the equation for revenue to get the maximum revenue: $R(75)=75(1500-750)=75\times 750=\$56{,}250$
RonL

3. Hello, archistrategos214!
2) A rectangular field is fenced along a river bank, which is not fenced. The material for the fence costs $12 per foot for the side parallel to the river,
while it costs $8 per foot for the two other sides. What are the dimensions of the largest possible field that can be enclosed with a budget of $3600?
Code:
- * - - - - - - - * -
| |
y| |y
| |
*---------------*
x
Let $x$ = length of side parallel to the river.
. . At $12/ft, this will cost: $12x$ dollars. Let $y$ = length of the other two sides. . . At $8/ft, they will cost: $8(2y) = 16y$ dollars.
Using all the budget: . $12x + 16y \:=\:3600\quad\Rightarrow\quad y \:=\:225 - \frac{3}{4}x$ [1]
The area of the field is: . $A \;=\;xy$ [2]
Substitute [1] into [2]: . $A \;=\;x\left(225 - \frac{3}{4}x\right) \;=\;225x - \frac{3}{4}x^2$
Differentiate and equate to zero:
. . $A' \;=\;225 - \frac{3}{2}x\:=\:0\quad\Rightarrow\quad\boxed{ x\,=\,150\text{ ft}}$
Substitute into [1]: . $y \:=\:225 - \frac{3}{4}(150)\quad\Rightarrow\quad\boxed{ y\,=\,112.5\text{ ft}}$
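The maximization above can be cross-checked numerically. A small Python sketch (a brute-force scan under the stated budget constraint, not part of the original post):

```python
def area(x):
    """Field area when the river-parallel side is x feet long and the
    whole $3600 budget is spent: 12x + 16y = 3600, so y = 225 - 0.75x."""
    return x * (225 - 0.75 * x)

# Scan x from 0 to 300 ft in 0.1 ft steps and keep the best.
candidates = [i * 0.1 for i in range(3001)]
best_x = max(candidates, key=area)
best_y = 225 - 0.75 * best_x
print(best_x, best_y, area(best_x))  # → 150.0 112.5 16875.0
```

The scan agrees with the calculus answer of 150 ft by 112.5 ft.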
4. Thanks for the help, guys! It is really appreciated.
Does anyone have any tips for number three? I tried solving it, but came out unsuccessful. Any help is most welcome!
5. Hello again, archistrategos214!
3) A waterway whose cross-section is a parabola is being designed for a theme park boat ride.
It has a depth of 12 ft and a width of 6 ft at the surface.
Rectangular cross-sectioned boats are to be used, and they sink to a depth of 4 ft.
What should be the proper width of the boat to facilitate easy passage?
Code:
|
* - -*------------+--------------*- - *(3,12)
| | |
* | | 4| *
* | x | x | *
*------------+--------------*
* | * :y
* | * :
- - - - - - - - - - * - - - - - - -+- - -
|
The parabola has the equation: . $y \,=\,ax^2$
The point $(3,12)$ is on the parabola.
. . We have: . $12 \:=\:a\cdot3^2\quad\Rightarrow\quad a \,=\,\frac{4}{3}$
The parabola is: . $y \,=\,\frac{4}{3}x^2$
Since the boats sink to 4 feet, then: . $12 - y = 4$
. . Then: . $12 - \frac{4}{3}x^2\:=\:4\quad\Rightarrow\quad x^2 = 6\quad\Rightarrow\quad x \,=\,\sqrt{6}$
Therefore, the boats should be at most $2x \,=\, 2\sqrt{6} \,\approx\, 4.9$ feet wide.
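Re-checking the arithmetic in this solution: the point (3, 12) gives 12 = a·3², so a = 4/3 (not 2), and the width comes out to 2√6 ≈ 4.9 ft. A quick numeric verification (illustrative):

```python
import math

a = 12 / 3**2          # parabola y = a*x^2 through (3, 12) gives a = 4/3
y_bottom = 12 - 4      # the boat bottom sits 4 ft below the 12 ft surface
half_width = math.sqrt(y_bottom / a)   # solve a*x^2 = y_bottom for x
width = 2 * half_width
print(round(width, 2))  # → 4.9
```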
6. "Given the equation 8y = x^2 + *x + 32 represents a parabola with vertical axis of symmetry; find the vertex, focus, directrix, axis of symmetry and x and y intercepts."
Just to double check. I solved this one and came up with a different graph than a parabola.
7. Originally Posted by archistrategos214
"Given the equation 8y = x^2 + *x + 32 represents a parabola with vertical axis of symmetry; find the vertex, focus, directrix, axis of symmetry and x and y intercepts."
Just to double check. I solved this one and came up with a different graph than a parabola.
|
{}
|
# Two forces of 60 N and 80 N acting at an angle of 60° with each other pull an object. What single pull would replace the given forces? Please explain how to calculate the direction properly. Is it true that we can get two angles w.r.t. the 2 different vectors? I think that's where I am getting confused. Please see this picture and tell me if I have made a mistake, and please answer my question.
Arun Kumar IIT Delhi
6 years ago
Hello Student,
The single force that replace them both is
$\vec F_{res}=60\,\hat n_1+80\,\hat n_2$ (in newtons), where $\hat n_1 \cdot \hat n_2=\cos 60^\circ=1/2$
Thanks & Regards
Arun Kumar
Btech, IIT Delhi
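To complete the calculation sketched above: squaring the vector sum and using n̂₁·n̂₂ = cos 60° gives the magnitude, and the direction follows from resolving the resultant parallel and perpendicular to the 60 N force. The "two angles" are simply the resultant's angle measured from each of the two given vectors. A quick Python check (illustrative):

```python
import math

F1, F2, angle_deg = 60.0, 80.0, 60.0   # the two pulls and the angle between them
th = math.radians(angle_deg)

# Parallelogram law: |F|^2 = F1^2 + F2^2 + 2*F1*F2*cos(theta)
magnitude = math.sqrt(F1**2 + F2**2 + 2 * F1 * F2 * math.cos(th))

# Angle of the resultant measured from the 60 N force
alpha = math.degrees(math.atan2(F2 * math.sin(th), F1 + F2 * math.cos(th)))
# Measured from the 80 N force instead -- the second of the "two angles"
beta = angle_deg - alpha

print(round(magnitude, 1), round(alpha, 1), round(beta, 1))  # → 121.7 34.7 25.3
```

Both angles describe the same resultant; they just use different reference vectors, which is why two different values appear.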
|
{}
|
# Public Key Encryption using ECDHE and AES-GCM
Using a text book example, Alice and Bob want to communicate securely using encrypted messages over an insecure channel (the Internet). Alice and Bob have decided to use ECDH (using ephemeral keys generated per session) and start off by each generating a public/private key pair using Curve P-256 - using SHA1PRNG for randomness. Alice sends Bob her public key and Bob generates a secret key using Alice's public key and his private key. Alice does the same using Bob's public key and her private one.
Eve who is potentially sitting in the middle isn't able to solve the discrete logarithm problem that EC requires so isn't able to compute the secret key. If Eve was actually sitting in the middle at the right time she could have sent Alice and Bob her public key instead. I'm not sure if ECDSA plays a part here, but I have assumed that Alice and Bob would verify they are talking to each other by comparing a fingerprint of their public key through another method. Alice and Bob now have the same 256 bit secret key that they can use for symmetric AES-GCM encryption.
Question 1 - AES-128 requires a 128 bit key and AES-256 recommends a stronger ECDH Curve than P-256 - this means the secret key generated by ECDH is always going to be longer than the encryption algorithm requires. I assume the recommended approach is to use a KDF function like HKDF, but what is the security implication of taking an SHA-256 hash and using it directly for AES-256 or truncating it for AES-128 (Alice and Bob are using Java which doesn't have a native implementation of HKDF and I don't think it is a good idea to try and write your own).
Using an approved method they manage to both derive the same 128 bit encryption key. Alice starts by sending a message to Bob - she generates a 96 bit random IV (she confirms she will never use the same IV with the same key again). She specifies an Authentication Tag length of 128 bits and encrypts the message (she doesn't include any additional authenticated data). She prefixes the IV to the ciphertext and sends to Bob. Bob then recovers the IV and decrypts the message - he knows the message hasn't been modified otherwise the Authentication Tag would be incorrect.
Question 2 - What advantage does ECDSA provide in this scenario or am I mixing things up? Assuming Alice and Bob have verified the public key fingerprints belong to each other, only Bob is in possession of his private key (stored in memory for the duration of the session) - Eve is unable to encrypt a message using the correct secret key because she doesn't have the required private key? For Alice to be able to decrypt the message it must have been encrypted by Bob.
• I suppose it is infeasible for Alice and Bob to verify the fingerprint of a public key every time they initiate a session. Do I assume the correct approach is for Alice and Bob to both generate a static ECDSA key pair - they send the public half to each other and verify the fingerprints offline? They then use their ECDSA private key to sign the ECDH public key before sending. The ECDH public key is then verified using the pre-verified ECDSA public key and authenticity is confirmed. Alice and Bob should both store their ECDSA private key using strong symmetric encryption on disk. – chrixm Nov 4 '15 at 8:19
I assume the recommended approach is to use a KDF function like HKDF, but what is the security implication of taking an SHA-256 hash and using it directly for AES-256 or truncating it for AES-128 (Alice and Bob are using Java which doesn't have a native implementation of HKDF and I don't think it is a good idea to try and write your own).
HKDF(-Expand) is easy to implement if you have access to HMAC, since for short keys it is just HMAC(s, 0x01) with s the shared secret; an optional context string can be prepended to the 1-byte. However, just a hash is fine as well.
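For reference, the full RFC 5869 construction is only a few lines on top of a built-in HMAC. The sketch below is Python; the same two steps carry over to Java's `javax.crypto.Mac` directly. This is an illustration, not vetted library code:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 extract step: PRK = HMAC-Hash(salt, IKM)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 expand step. For length <= 32 with SHA-256 this reduces
    to HMAC(PRK, info || 0x01) truncated, as described above."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# RFC 5869 test case 1 (SHA-256)
prk = hkdf_extract(bytes.fromhex("000102030405060708090a0b0c"), b"\x0b" * 22)
okm = hkdf_expand(prk, bytes.fromhex("f0f1f2f3f4f5f6f7f8f9"), 42)
```

Deriving an AES-128 key from an ECDH shared secret would then be `hkdf_expand(hkdf_extract(salt, shared_secret), context, 16)`.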
What advantage does ECDSA provide in this scenario or am I mixing things up?
If you only need to guarantee privacy and authenticity, there is no need for adding ECDSA into the mix. AES-GCM with the shared secret from the ECDH-exchange is sufficient. However, there are two other scenarios that may make ECDSA useful here:
1. First, ECDSA is one way for Alice and Bob to authenticate the ephemeral keys used in the key exchange, like you suggest in the comment. If they already know each other's ECDSA keys, they can use them to sign and verify the ephemeral keys.
2. If they require non-repudiation, they can use ECDSA to sign the messages before encryption. With only ECDH and AES-GCM there is no way to prove if Alice or Bob wrote a particular message, since either can encrypt under the same shared key. A signature would allow Alice to prove to others that Bob said something. (Whether you want non-repudiation or want to avoid it depends on the use case.)
• If you only need to guarantee privacy and authenticity - should that be "privacy and integrity" as opposed to "authenticity"? – chrixm Nov 4 '15 at 16:24
• @chrixm, AES-GCM guarantees both. Bob knows the message is from Alice if he didn't write it himself. – otus Nov 4 '15 at 17:06
• "Sometimes non-repudiation is required, but sometimes it is a failing." I'm not sure I understand that very last part of this otherwise fine answer. – Maarten Bodewes Nov 4 '15 at 17:21
• @otus on its own how does AES-GCM guarantee that the message is from Alice? Is this based on the assumption that Bob has already pre-verified that he definitely has Alice's public key via an offline method - either through comparing an SHA-256 hash or another method? If that is the case then AES-GCM provides authenticity as only Alice and Bob are able to generate the same shared secret. – chrixm Nov 4 '15 at 17:35
• @MaartenBodewes, in some situations you want public verifiability that someone really said something. In others you want a protocol that has deniable authentication. ECDH + AES-GCM gives you the latter, while adding a signature gives you the former. I'll try to rephrase. – otus Nov 4 '15 at 21:03
|
{}
|
# Maximum Distance Between the Leader and the Laggard for Three Brownian Walkers
### Satya N. Majumdar 1, Alan J. Bray 2
#### Journal of Statistical Mechanics (2010) P08023
We consider three independent Brownian walkers moving on a line. The process terminates when the left-most walker (the 'Leader') meets either of the other two walkers. For arbitrary values of the diffusion constants D_1 (the Leader), D_2 and D_3 of the three walkers, we compute the probability distribution P(m|y_2,y_3) of the maximum distance m between the Leader and the current right-most particle (the 'Laggard') during the process, where y_2 and y_3 are the initial distances between the leader and the other two walkers. The result has, for large m, the form P(m|y_2,y_3) \sim A(y_2,y_3) m^{-\delta}, where \delta = (2\pi-\theta)/(\pi-\theta) and \theta = \cos^{-1}\left(D_1/\sqrt{(D_1+D_2)(D_1+D_3)}\right). The amplitude A(y_2,y_3) is also determined exactly.
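As a quick consistency check of the exponent formula: for three identical walkers (D_1 = D_2 = D_3) the angle is θ = cos⁻¹(1/2) = π/3, giving δ = (2π − π/3)/(π − π/3) = 5/2. Evaluating the published formula numerically (illustrative only):

```python
import math

def delta(D1, D2, D3):
    """Decay exponent in P(m|y2,y3) ~ A * m**(-delta) from the abstract."""
    theta = math.acos(D1 / math.sqrt((D1 + D2) * (D1 + D3)))
    return (2 * math.pi - theta) / (math.pi - theta)

print(delta(1.0, 1.0, 1.0))  # identical walkers → 2.5
```

A faster Leader (larger D_1) shrinks θ and hence δ, so the distribution of the maximum distance develops a heavier tail... inverted: smaller δ means slower decay.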
• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS),
CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2. School of Physics and Astronomy,
University of Manchester
|
{}
|
# Konstantin Kashin
Institute for Quantitative Social Science, Harvard University
Konstantin Kashin is a Fellow at the Institute for Quantitative Social Science at Harvard University and will be joining Facebook's Core Data Science group in September 2015. Konstantin develops new statistical methods for diverse applications in the social sciences, with a focus on causal inference, text as data, and Bayesian forecasting. He holds a PhD in Political Science and an AM in Statistics from Harvard University.
## Bootstrap Confidence Interval Methods in R
| Tags: R bootstrapping
This post briefly sketches out the types of bootstrapped confidence intervals commonly used, along with code in R for how to calculate them from scratch. Specifically, I focus on nonparametric confidence intervals. The post is structured around the list of bootstrap confidence interval methods provided by Canty et al. (1996). This is just a quick introduction into the world of bootstrapping - for an excellent R package for doing all sorts of bootstrapping, see the boot package by Brian Ripley.
Suppose that $x_1, x_2, …, x_n \sim f(\theta)$, where $f(\cdot)$ is some arbitrary probability distribution. We are interested in estimating $\theta$. Let $\hat{\theta}$ be the point estimate of the parameter of interest obtained from the original dataset. Note that these are nonparametric confidence intervals (that is, we don’t assume that $f$ is a particular parametric family). The example comes from page 201 of DiCiccio and Efron (1996), which contains a dataset on five exams across 22 students (with some missing data). The parameter of interest is the maximum eigenvalue of the empirical covariance matrix. You may download the data here. The code for estimating the max eigenvalue is provided at the end of this post. The function is called calculate.max.eigen.
Using this function on the dataset, we obtain $\hat{\theta}=216.2$.
Now, suppose that we bootstrap the dataset M times and for each of the datasets, calculate $\hat{\theta}^*_{m}$, which is the point estimate of $\theta$ for the mth bootstrapped dataset. The empirical distribution of $\hat{\theta}^*$ is the bootstrap distribution and is an approximation to the sampling distribution for $\hat{\theta}$.
To create $M=10000$ bootstrap replicates in R:
BS.eigen <- function(data){
index <- sample(1:nrow(data),size=nrow(data),replace=TRUE)
bs.data <- data[index,]
return(calculate.max.eigen(bs.data))
}
bs.sampling <- replicate(10000,BS.eigen(data),simplify=TRUE)
Here is the distribution of these estimates (with the median of the bootstrapped estimates denoted with the vertical blue line and $\hat{\theta}$ denoted with a vertical red line):
We can then make use of the bootstrap distribution in a number of ways to obtained bootstrapped confidence intervals.
### Normal intervals
$$\hat{\theta} \pm z_{\alpha/2} \sigma_{bs},$$ where $z_{\alpha/2}$ is the $\alpha/2$-quantile of the standard normal distribution and $\sigma_{bs}$ is the standard deviation of the bootstrap distribution. These intervals are justified by asymptotic theory (Central Limit Theorem).
In R:
theta.hat + qnorm(0.025)*sd(bs.sampling)
theta.hat + qnorm(0.975)*sd(bs.sampling)
For our example, we obtain a confidence interval of [202.64,1050.01].
### Transformed normal intervals
$$t^{-1}[t(\hat{\theta}) \pm z_{\alpha/2} \sigma_{tbs}],$$ where $t(\cdot)$ is a variance-stabilizing square root transformation and $\sigma_{tbs}$ is the standard deviation of the transformed bootstrap distribution (distribution of $\hat{\theta}^*$).
In R:
(sqrt(theta.hat) + qnorm(0.025)*sd(sqrt(bs.sampling)))^2
(sqrt(theta.hat) + qnorm(0.975)*sd(sqrt(bs.sampling)))^2
For our example, we obtain a confidence interval of [263.67, 1143.33].
### Basic bootstrap intervals
$$\left[2\hat{\theta}-\hat{\theta}^*_{m=(1-\alpha/2)M}, 2\hat{\theta}-\hat{\theta}^*_{m=(\alpha/2)M}\right]$$
In R:
2*theta.hat - quantile(bs.sampling,0.975)
2*theta.hat - quantile(bs.sampling,0.025)
For our example, we obtain a confidence interval of [186.45, 1018.62].
### Transformed basic bootstrap intervals
These are basic bootstrap intervals with similar transformation to the transformed normal intervals.
In R:
(2*sqrt(theta.hat) - quantile(sqrt(bs.sampling),0.975))^2
(2*sqrt(theta.hat) - quantile(sqrt(bs.sampling),0.025))^2
For our example, we obtain a confidence interval of [302.76, 1208.00].
### Percentile confidence intervals
$$\left[\hat{\theta}^*_{m=(1-\alpha/2)M},\hat{\theta}^*_{m=(\alpha/2)M}\right]$$
In R:
quantile(bs.sampling,0.975)
quantile(bs.sampling,0.025)
For our example, we obtain a confidence interval of [233.93, 1066.10].
### BCa confidence intervals
A refinement on the percentile confidence interval method, designed to increase accuracy. Note that BCa reduces to standard percentile confidence intervals if the bootstrap distribution is unbiased (median of the distribution is equal to the original point estimate) and the acceleration (skewness) is zero.
Step 1: Calculate the bias-correction $\hat{z}_0$, which is the standard normal quantile of the proportion of bootstrapped estimates less than the original point estimate: $$\hat{z}_0 = \Phi^{-1} \left[\dfrac{\#\{\hat{\theta}^*_m < \hat{\theta}\}}{M} \right]$$
In R:
z0 <- qnorm(mean(bs.sampling < theta.hat))
For our example, $\hat{z}_0$ is 0.194, which indicates a positive bias correction. This follows from the fact that 57.7% of $\hat{\theta}^*$s are below the original point estimate $\hat{\theta}$ (downward bias). We can see this from the plot above since the median of the bootstrap distribution is to the left of the point estimate from the original dataset.
Step 2: Calculate the acceleration $a$, which has an interpretation of skewness (more specifically, it measures how rapidly standard error changes on a normalized scale). A non-parametric estimate of $a$ is: $$\hat{a} = \dfrac{1}{6}\dfrac{\sum_{i=1}^n (\hat{\theta}-\hat{\theta}_{(i)})^3}{(\sum_{i=1}^n (\hat{\theta}-\hat{\theta}_{(i)})^2)^{3/2}}.$$
where $\hat{\theta}_{(i)}$ is the jackknifed estimate of the parameter, obtained by omitting observation $i$ from the data.
In R:
jk.theta <- sapply(1:nrow(data),function(x){
jk.data <- data[-x,]
return(calculate.max.eigen(jk.data))
})
a <- sum((theta.hat-jk.theta)^3)/(6*sum((theta.hat-jk.theta)^2)^(3/2))
For our example, we obtain $a=0.103$.
Step 3: The bias-corrected $\alpha/2$ endpoints for the percentile bootstrap confidence interval are then calculated as: $$\hat{\theta}_{BC_a}[\alpha/2] = \hat{G}^{-1} \left( \Phi \left(z_0 + \dfrac{z_0+z_{\alpha/2}}{1-a(z_0+z_{\alpha/2})} \right)\right),$$
where $\hat{G}^{-1}(\cdot)$ is the quantile function of the bootstrap distribution.
In R:
q.lb <- pnorm(z0+(z0+qnorm(0.025))/(1-a*(z0+qnorm(0.025))))
q.ub <- pnorm(z0+(z0+qnorm(0.975))/(1-a*(z0+qnorm(0.975))))
quantile(bs.sampling,q.lb)
quantile(bs.sampling,q.ub)
We find that the BCa percentiles we should be using are actually 9.70% and 99.85% instead of 2.5% and 97.5% for a 95% confidence interval. Thus, we obtain a confidence interval of [322.0132,1320.345].
References:
• Canty, A.J., A.C. Davison and D.V. Hinkley. (1996). “Comment on ‘Bootstrap Confidence Intervals’.” Statistical Science 11(3):214-219.
• DiCiccio, Thomas J. and Bradley Efron. (1996). “Bootstrap Confidence Intervals.” Statistical Science 11(3):189-228.
• Efron, B. and Tibshirani R. (1993). An Introduction to the Bootstrap. Chapman and Hall, New York.
• Hall. (1988). “Theoretical comparison of bootstrap confidence intervals (with discussion).” Ann. Statist. 16: 927-985.
Code for calculating maximum eigenvalue:
library(reshape2)
calculate.max.eigen <- function(df){
n <- nrow(df)
k <- ncol(df)
df.melt <- melt(df)
colnames(df.melt) <- c("test","score")
df.melt[,3] <- as.factor(rep(1:n,times=k))
colnames(df.melt)[3] <- "student"
lm.out <- lm(score~test+student,data=df.melt)
score.hat <- predict(lm.out, df.melt)
data.hat <- matrix(data=score.hat, nrow=n, ncol=k, byrow=FALSE)
demeaned.hat <- apply(data.hat,MARGIN=1, function(x) x-colMeans(data.hat))
return(max(eigen(demeaned.hat%*%t(demeaned.hat)/n, only.values=TRUE)$values))
}
Code for density plot:
library(ggplot2)
df <- data.frame(bs.theta=bs.sampling)
ggplot(df, aes(x=bs.theta)) +
geom_density(alpha=.2, fill="coral", color="steelblue") +
theme_bw() +
geom_vline(aes(xintercept=theta.hat), color="red", size=1) +
geom_vline(aes(xintercept=median(bs.sampling)), color="steelblue", size=1) +
scale_x_continuous("Bootstrapped Estimate of Max Eigenvalue") +
scale_y_continuous("Density")
|
{}
|
## Editorial for IOI '02 P4 - Batch Scheduling
Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.
This problem can be solved using dynamic programming. Let be the minimum total cost of all partitionings of jobs into batches. Let be the minimum total cost when the first batch is selected as . That is, .
Then we have that
• for ,
• and .
#### O(n^2) Time Algorithm
The time complexity of the above algorithm is O(n^2).
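The editorial's formulas were lost in extraction, so here is a hedged sketch of the quadratic DP as it is usually stated for this problem (the notation is mine, not the editorial's): with S the batch setup time, T_i the processing times and F_i the cost factors, process jobs back to front and let f(i) be the minimum cost of scheduling jobs i..N, charging each batch's delay (S plus its processing time) to every job from i onward:

```python
def batch_min_cost(S, T, F):
    """O(n^2) DP for batch scheduling: f(i) = min over j > i of
    f(j) + (S + T[i] + ... + T[j-1]) * (F[i] + ... + F[n-1]),
    where the first batch consists of jobs i..j-1."""
    n = len(T)
    sumT = [0] * (n + 1)   # suffix sums of processing times
    sumF = [0] * (n + 1)   # suffix sums of cost factors
    for i in range(n - 1, -1, -1):
        sumT[i] = sumT[i + 1] + T[i]
        sumF[i] = sumF[i + 1] + F[i]
    INF = float("inf")
    f = [INF] * n + [0]    # f[n] = 0: no jobs left
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n + 1):
            batch_time = S + sumT[i] - sumT[j]
            f[i] = min(f[i], f[j] + batch_time * sumF[i])
    return f[0]

# Small example: S = 1, five jobs with times [1,3,4,2,1], factors [3,2,3,3,4]
print(batch_min_cost(1, [1, 3, 4, 2, 1], [3, 2, 3, 3, 4]))  # → 153
```

Because f(i) is linear in the suffix sums, this recurrence is exactly the form that the monotone-list trick in the next section accelerates to linear time.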
#### O(n) Time Algorithm
Investigating the properties of the recurrence, P. Brucker [1] showed that this problem can be solved in O(n) time.
From , we have that for ,
Let and .
Property 1: Assume that for . Then .
Property 2: Assume for some . Then for each with , or .
Property 2 implies that if for , is not needed for computing . Using this property, a linear time algorithm can be designed, which is given in the following.
##### Algorithm Batch
The algorithm calculates the values $E_i$ for $i = N$ down to $1$. It uses a queue-like list $Q = (j_1, j_2, \dots, j_m)$ with head $j_1$ and tail $j_m$ satisfying the following properties:
• $i < j_m < j_{m-1} < \dots < j_1 \le N+1$, and
• $\alpha(j_2, j_1) < \alpha(j_3, j_2) < \dots < \alpha(j_m, j_{m-1})$ — (1)
When $E_i$ is calculated,
1. // Using $F_i$, remove unnecessary elements at the head of $Q$.
If $\alpha(j_2, j_1) \le F_i$, delete $j_1$ from $Q$, since $E_{i',j_2} \le E_{i',j_1}$ for all $i' \le i$ by Property 1.
Continue this procedure until $\alpha(j_2, j_1) > F_i$ for the current head $j_1$, or $Q$ contains a single element.
Then, by (1) and Property 1, $E_{i,j_1} \le E_{i,j_r}$ for every $j_r \in Q$.
Therefore, $E_i$ is equal to $E_{i,j_1}$.
2. // When inserting $i$ at the tail of $Q$, maintain $Q$ for the condition (1) to be satisfied.
If $\alpha(i, j_m) \le \alpha(j_m, j_{m-1})$, delete $j_m$ from $Q$ by Property 2.
Continue this procedure until $\alpha(i, j_m) > \alpha(j_m, j_{m-1})$, or $Q$ contains a single element.
Append $i$ as the new tail of $Q$.
##### Analysis
Each index is inserted into $Q$ and deleted from $Q$ at most once, and each insertion and deletion takes a constant time. Therefore the time complexity is $O(N)$.
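For concreteness, here is a compact Python sketch of Algorithm Batch (illustrative; 0-indexed, assumes positive processing times so the $T$ values strictly decrease, and compares the $\alpha$ values by cross-multiplication to avoid division):

```python
from collections import deque

def batch_min_cost(S, t, f):
    """O(N) batch-scheduling DP using a queue of candidate indices."""
    n = len(t)
    T = [0] * (n + 1)  # suffix sums of processing times
    F = [0] * (n + 1)  # suffix sums of cost factors
    for i in range(n - 1, -1, -1):
        T[i] = T[i + 1] + t[i]
        F[i] = F[i + 1] + f[i]
    E = [0] * (n + 1)

    q = deque([n])  # candidate j's; head q[0] holds the largest index
    for i in range(n - 1, -1, -1):
        # Step 1: while alpha(q[1], q[0]) <= F[i], the head is dominated (Property 1)
        while len(q) >= 2 and E[q[1]] - E[q[0]] <= F[i] * (T[q[1]] - T[q[0]]):
            q.popleft()
        j = q[0]
        E[i] = E[j] + (S + T[i] - T[j]) * F[i]
        # Step 2: while alpha(i, q[-1]) <= alpha(q[-1], q[-2]), drop the tail (Property 2)
        while len(q) >= 2 and (E[i] - E[q[-1]]) * (T[q[-1]] - T[q[-2]]) <= (
            E[q[-1]] - E[q[-2]]
        ) * (T[i] - T[q[-1]]):
            q.pop()
        q.append(i)
    return E[0]
```

Each index enters and leaves the deque once, giving the linear bound described above.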
### The Basic Comparison Test
Theorem: If $\displaystyle{\sum_{n=1}^\infty a_n}$ and $\displaystyle{\sum_{n=1}^\infty b_n}$ are series with non-negative terms, then:
1. If $\displaystyle{\sum_{n=1}^\infty b_n}$ converges and $a_n \le b_n$ for all $n$, then $\displaystyle{\sum_{n=1}^\infty a_n}$ converges.
2. If $\displaystyle{\sum_{n=1}^\infty b_n}$ diverges and $a_n \ge b_n$ for all $n$, then $\displaystyle{\sum_{n=1}^\infty a_n}$ diverges.
In fact, 1. will work if $a_n\le b_n$ for all $n$ larger than some finite positive $N$, and similarly for 2.
Example 1:
The series $\displaystyle \sum_{n=1}^\infty\frac{2^n}{3^n+1}$ converges, since $$\frac{2^n}{3^n+1}\le \frac{2^n}{3^n}=\left(\frac{2}{3}\right)^n$$ and $\displaystyle \sum_{n=1}^\infty\left(\frac{2}{3}\right)^n$ is a convergent geometric series, with $r=\frac23<1$.
Example 2: Test the series $\displaystyle\sum_{k=1}^\infty\frac{\ln k}{k}$ for convergence or divergence.
DO: Do you think this series converges? Try to figure out what to compare this series to before reading the solution.
Solution 2: For $k\ge 3$ we have $\ln k\ge 1$, so $\displaystyle\frac{\ln k}{k}\ge\frac1k$. The harmonic series with terms $\frac1k$ diverges, so our series diverges.
----------------------------------------------------------------
Example 3: Test the series $\displaystyle\sum_{n=1}^\infty\frac{1}{5n+10}$ for convergence or divergence. DO: Try this before reading further.
Solution 3: The terms look much like the harmonic series, and when we compare terms, we see that $\displaystyle\frac{1}{5n+10}\le\frac1n$. But the harmonic series diverges. Our terms are smaller than those of a divergent series, so we know nothing. Let's compare to $\displaystyle\frac1{n^2}$. The series $\displaystyle\sum\frac{1}{n^2}$ is a convergent $p$-series, with $p=2$. But when we compare terms, we get $\displaystyle\frac{1}{5n+10}\ge\frac1{n^2}$ as long as $n\ge7$, so our terms are larger than those of a convergent series, and this comparison also tells us nothing. We will use the limit comparison test (coming up) to test this series.
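Although no direct comparison settles Example 3, a quick numerical check (illustrative only) shows the partial sums of $\sum \frac{1}{5n+10}$ creeping upward at a logarithmic rate, consistent with divergence like $\frac{1}{5}\ln n$:

```python
def partial_sum(N):
    """Partial sum of 1/(5n+10) for n = 1..N."""
    return sum(1.0 / (5 * n + 10) for n in range(1, N + 1))

# The gain per factor of 10 in N approaches (1/5) * ln(10), about 0.46,
# so the partial sums grow without bound, just very slowly.
gain = partial_sum(1000) - partial_sum(100)
```

The sums gain roughly 0.46 per decade and never level off — exactly the behaviour the limit comparison test with $\sum \frac1n$ will confirm.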
# IV international conference on particle physics and astrophysics
22-26 October 2018
Hotel Intourist Kolomenskoye 4*
Europe/Moscow timezone
## SEARCH FOR STATES WITH ENHANCED RADII IN TRIPLET 12B-12C-12N
22 Oct 2018, 15:40
2h 30m
Petrovskiy hall (Hotel Intourist Kolomenskoye 4*)
### Petrovskiy hall
#### Hotel Intourist Kolomenskoye 4*
Kashirskoye shosse, 39B, Moscow, Russia, 115409
Poster Nuclear physics
### Speaker
Mr. Andrey Danilov
### Description
Two independent methods, ANC (asymptotic normalization coefficients) [1,2] and MDM (modified diffraction model) [3,4], were applied to new and existing experimental data. The purpose of this analysis is to search for states with enhanced radii in the isobar-analog excited states of the A = 12 triplet: $^{12}$B – $^{12}$C – $^{12}$N.
In earlier experimental work [1], a halo was observed for two states of $^{12}$B: $2^-$, 1.67 MeV and $1^-$, 2.62 MeV. To check this result, new experimental data on the $^{11}$B(d,p)$^{12}$B reaction were obtained at E$_{d}$ = 21.5 MeV [5,6]. On the basis of an ANC analysis of these new data [5,6], the existence of a neutron halo was confirmed for the $2^-$, 1.67 MeV and $1^-$, 2.62 MeV states in $^{12}$B. An unexpected result was obtained for the unbound $3^-$, 3.39 MeV state, which lies 19 keV above the neutron emission threshold. Its halo radius was also found to be enhanced, at ~6.5 fm [5,6]. This result can be considered as evidence of a halo-like structure in this $^{12}$B state.
What can we expect for the isobar-analog states in $^{12}$C and $^{12}$N? Are these states also characterized by enhanced radii? To check this prediction, a preliminary analysis of existing $^{12}$C($^{3}$He,t)$^{12}$N and $^{12}$C($^{3}$He,$^{3}$He′)$^{12}$C experimental data using the modified diffraction model (MDM, [3,4]) was performed.
1. Z. H. Liu, Phys. Rev. C 64, 034312 (2001).
2. T. L. Belyaeva et al., Phys. Rev. C 90, 064610 (2014).
3. A.N. Danilov et al., Phys. Rev. C 80, 054603 (2009).
4. A.S. Demyanova et al., Phys. Atom. Nucl. 80, 831 (2017).
5. T.L. Belyaeva et al., EPJ Web Conf. 165, 01004 (2017).
6. T.L. Belyaeva et al., Phys. Rev. C 98, 034602 (2018).
### Presentation Materials
There are no materials yet.
# Thread: HSC 2012-2015 Chemistry Marathon (archive)
1. ## Re: HSC 2012 Chemistry Marathon
This is from Independent 09 Trial Exam
Multiple Choice Q3:
3. The molar heat of combustion of ethanol is 1367 kJ/mol.
What mass of ethanol is required to heat 1.0 mol of water by 10 °C?
a - 136.7g
b - 46.0g
c - 25.3g
d - 0.025g
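The intended working can be checked in a few lines (a sketch, assuming $c_{\text{water}} = 4.18$ J g⁻¹ K⁻¹, M(H₂O) = 18.02 g/mol and M(C₂H₅OH) = 46.07 g/mol):

```python
# Heat needed to warm 1.0 mol of water by 10 degrees C
q = 18.02 * 4.18 * 10 / 1000   # kJ, from q = m * c * dT
n_ethanol = q / 1367           # mol of ethanol that must combust
mass = n_ethanol * 46.07       # grams of ethanol
# mass comes out near 0.025 g, matching option (d)
```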
2. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by nightweaver066
Question:
A. Amphiprotic species are those that can act as proton donors as well as proton acceptors. For example, when HCl is added, HCO3- + H+ -> H2CO3. In this case the bicarbonate ion has accepted a proton, thus acting as a proton acceptor. However, when titrated against NaOH, HCO3- + OH- -> H2O(l) + CO32-, the bicarbonate ion acts as a proton donor, donating a hydrogen ion to the hydroxide. HCO3-, able to donate as well as accept hydrogen ions, is thus amphiprotic.
B. 8
QUESTIOn
Explain the process of filtration by microscopic membrane filters with reference to their design and composition.
3. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by EazyEEE
A. Amphiprotic species are those that can act as proton donors as well as proton acceptors. For example when HCl is added, HCO3- + H+ -> H2CO3. In this case the bicarbonate ion has accepted a proton thus act as a proton acceptor. However when titrated against NaOH, HCO3- + OH- -> H2O(l) + CO32-, where the bicaronate ion act as a proton donor, donating a hydrogen ion to the hydroxide. HCO3-, able to donate as well as accept hydrogen ions is thus amphiprotic.
B. 8
QUESTIOn
Explain the process of filtration by microscopic membrane filters with reference to their design and composition.
Microscopic membrane filters have microscopic pores and the use of appropriate sized filters can avoid the need to chemically treat the water. The filters can be classified as microfiltration, ultrafiltration, nanofiltration or reverse osmosis membranes depending on the size of the pore.
The membrane is made from synthetic polymers dissolved in a mixture of solvents.
Semi-permeable membranes used in reverse osmosis are either made of cellulose acetate or a layer of polyamide attached to another polymer. Under pressure these polymers allow the passage of water molecules but not that of most atoms, ions or other molecules.
Water is made to flow across the membrane not through it. This reduces the blockage of the pores and contaminants are carried away as waste. The membrane is housed in a pressure vessel and is either made as a wound spiral or hollow fibres.
Microfiltration removes protozoans, bacteria, colloids, some colouration and some viruses. The size of the pore determines which sized particle or organism may pass through the membrane. The finer the pore size the smaller the particles trapped and the more expensive the membrane.
Question- explain, with an example, how oxidation states allow chemists to decide whether or not a reaction involves electron transfer. (assume 3 marks for this quest)
4. ## Re: HSC 2012 Chemistry Marathon
bump
explain, with an example, how oxidation states allow chemists to decide whether or not a reaction involves electron transfer. (assume 3 marks for this quest)
5. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by VJ30
bump
explain, with an example, how oxidation states allow chemists to decide whether or not a reaction involves electron transfer. (assume 3 marks for this quest)
Your response to that membrane filter was great.
Of course, in an exam, diagram!
If the oxidation state of an element increases, oxidation has occurred and electrons have been released.
If the oxidation state decreases, reduction has occurred and electrons have been captured.
In redox reactions, if oxidation occurs, a reduction reaction must also occur.
If the oxidation state of one reactant increases and that of another decreases in the same reaction, an electron has been released by one reactant and captured by the other, revealing that the reaction involves an electron transfer.
Therefore, by monitoring the oxidation states of reactants in a reaction, if one increases and another decreases by the same total magnitude, a reaction involving an electron transfer has occurred.
Now,
Assess the effectiveness of monitoring and managing CFC & halon production & usage. (5)
6. ## Re: HSC 2012 Chemistry Marathon
In this option you studies one natural product that was not a fossil fuel. Describe the issues associated with shrinking world supplies of this natural product, and evaluate progress being made to solve the problems identified.
Marks : 10
Criteria
Identifies an appropriate natural product
Provides a judgement
Provides characteristics and features of at least TWO issues associated with shrinking world supplies of the natural product
Provides characteristics and features of progress being made to find replacement materials
Provides a response that demonstrate coherence and logical progression and includes correct use of scientific principles and idea
7. ## Re: HSC 2012 Chemistry Marathon
Industrial Chemistry Question ^
8. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by nightweaver066
- Ethene is usually produced from the cracking of large hydrocarbons.
- The cracking of a large alkane always produces a small alkene, usually ethene and occasionally propene, and a large alkane.
- The cracking process involves heating the hydrocarbons to 500C with a zeolite catalyst in the absence of oxygen to prevent combustion.
--> Example: C8H18 --> C2H4 + C6H14
- The zeolite catalyst adsorbs alkanes, weakening their bonds; a catalyst is required because the activation energy of cracking without one is very high, demanding immensely high temperatures and making the process uneconomical.
- Ethene can then be separated from the larger hydrocarbons by cooling the gas, where alkyl chains of five or more carbons condense into liquids.
- Ethene can be separated from the other smaller hydrocarbons, such as propene, by fractional distillation if necessary.
9. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by someth1ng
- Ethene is usually produced from the cracking of large hydrocarbons.
- The cracking of a large alkane always produces a small alkene, usually ethene and occasionally propene, and a large alkane.
- The cracking process involves heating the hydrocarbons to 500C with a zeolite catalyst in the absence of oxygen to prevent combustion.
--> Example: C8H18 --> C2H4 + C6H14
- The zeolite catalyst adsorb alkanes, weakening their bonds is required because the activation energy of cracking without a catalyst is very high, requiring immensely high temperatures making it uneconomical.
- Ethene can then be separated from the hydrocarbons by cooling the gas where 5-carbon alkyl chains will condense into liquids.
- Ethene can be separated from small the other smaller hydrocarbons such as propene by fractional distillation, if necessary.
Great response
10. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by nightweaver066
Great response
- There are many ways a radioisotope can be produced.
- Two such methods include neutron bombardment and nuclear fission.
--> Both of these methods are frequently used as induced nuclear transformations.
- Neutron bombardment usually involves bombarding a certain nucleus with a neutron but does not necessarily require the product to undergo fission to produce smaller products, as seen in the case of Technetium-99m.
[Equation] Mo-98+neutron-->Mo-99
[Equation] Mo-99-->Tc-99m+electron+gamma
- Nuclear fission often involves the bombardment of a certain nucleus by a particle (including neutrons), causing it to undergo fission to produce smaller products, as seen in the case of Strontium-90.
[Equation] U-235+neutron-->Xe-138+Sr-90+8neutrons+gamma
- Evidently, although both methods are different, they are both used to produce radioisotopes.
Fairly plain response but I think it'd get 3/4, not exactly a 4/4 response, in my opinion.
Assess the impact of atomic absorption spectroscopy (AAS) on our understanding of trace elements. (3 marks)
11. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by someth1ng
- There are many ways a radioisotope can be produced.
- Two such methods include neutron bombardment and nuclear fission.
--> Both of these methods are frequent used as induced nuclear transformations.
- Neutron bombardment usually involves bombarding a certain nuclei with a neutron but does not necessarily involve the product to undergo fission to produce smaller products, as seen in the case of Technetium-99m.
[Equation] Mo-98+neutron-->Mo-99
[Equation] Mo-99-->Tc-99m+electron+gamma
- Nuclear fission often involves the bombardment of a certain nuclei or particle (including neutrons) causing it to spontaneously undergo fission to produce smaller products, as seen in the case of Strontium-90.
[Equation] U-235+neutron-->Xe-138+Sr-90+8neutrons+gamma
- Evidently, although both methods are different, they are both used to produce radioisotopes.
Fairly plain response but I think it'd get 3/4, not exactly a 4/4 response, in my opinion.
I think you should include the technology/machinery involved or comparing something else to get 4/4
Assess the impact of atomic absorption spectroscopy (AAS) on our understanding of trace elements. (3 marks)
- AAS is a quantitative analysis technique for determining small concentrations of metals in samples.
- Using AAS, scientists have been able to monitor and understand the effect of certain concentrations of trace elements such as cobalt and copper in agricultural land for optimum crop growth
- However, AAS has detection limits in determining concentrations (e.g. detection limit of 1ppb for lead in water samples) and so cannot monitor the effects of such small concentrations
- Overall, AAS has made a profound impact on our understanding of trace elements as farmers are able to optimise plant growth as [Pb] < 1 ppb does not greatly affect plant growth (made this part up, is it fine? lol)
Question:
12. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by nightweaver066
I think you should include the technology/machinery involved or comparing something else to get 4/4
- AAS is a quantitative analysis technique for determining small concentrations of metals in samples.
- Using AAS, scientists have been able to monitor and understand the effect of certain concentrations of trace elements such as cobalt and copper in agricultural land for optimum crop growth
- However, AAS has detection limits in determining concentrations (e.g. detection limit of 1ppb for lead in water samples) and so cannot monitor the effects of such small concentrations
- Overall, AAS has made a profound impact on our understanding of trace elements as farmers are able to optimise plant growth as [Pb] < 1 ppb does not greatly effect plant growth (made this part up, is it fine? lol)
Yeah, I know...I felt that the marking scheme would expect at least two valid comparisons (2 marks, 1 mark for each comparison), identifying two methods (1 mark), identifying two examples (1 mark).
1. Another comparison could be:
Neutron bombardment usually does not require acceleration: as neutrons hold no electrical charge, they are not repelled by the positive nucleus through electrostatic repulsion, so they do not need to be forced into the nucleus.
Nuclear fission can often require bombardment by a positively charged nucleus; this means it must be accelerated to very high speeds to overcome electrostatic repulsion, allowing a new nucleus to be formed which can then decay into smaller fragments.
2. How? Give an example such as in South Australia, what appeared to be land very suitable for agriculture did not thrive and it was found that cobalt deficiency did not allow grass to grow for grazing.
3. Never make information up, you get 0 marks for it. If you said "extremely miniscule amounts of many heavy metals (eg lead) such that it is undetectable by AAS usually does not have a significant adverse effect on plant growth", you could have gotten a good mark for it and being general but "academic" makes your work more favourable.
13. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by someth1ng
Yeah, I know...I felt that the marking scheme would expect at least two valid comparisons (2 marks, 1 mark for each comparison), identifying two methods (1 mark), identifying two examples (1 mark).
1. Another comparison could be:
Neutron bombardment usually does not require acceleration as neutrons hold no electrical charge, as it is not repelled by the positive nuclei by electrostatic repulsion, it won't need to be forces into the nucleus.
Nuclear fission can often require bombardment of a positively charged nuclei, this means that it must be accelerated to very high speeds to overcome electrostatic repulsion and allowing a new nucleus to be formed which could spontaneously decay into smaller molecules.
2. How? Give an example such as in South Australia, what appeared to be land very suitable for agriculture did not thrive and it was found that cobalt deficiency did not allow grass to grow for grazing.
3. Never make information up, you get 0 marks for it. If you said "extremely miniscule amounts of many heavy metals (eg lead) such that it is undetectable by AAS usually does not have a significant adverse effect on plant growth", you could have gotten a good mark for it and being general but "academic" makes your work more favourable.
Didn't think i needed to be very specific with the example.
Thanks for the feedback
14. ## Re: HSC 2012 Chemistry Marathon
Originally Posted by nightweaver066
Didn't think i needed to be very specific with the example.
Thanks for the feedback
I just thought it wasn't detailed enough, how have they understood it? Those sort of questions, in a way, undermine your answer to some extent since it doesn't fully address all aspects of the question. It's probably better to do too much than too little.
That's my thoughts, at least. But otherwise, you're doing well.
15. ## HSC Chemistry Marathon
The HSC Chemistry Marathon is an open chain of questions between students. It works by answering a question then posting another question and allowing the cycle to repeat itself.
Rules:
- After answering a question, always provide a new one - this is what keeps the thread alive.
- Allocate a number of marks for any question that you post.
- Do not cheat, if you cannot answer a question, do not search how to answer the question but rather, allow other students to answer the question.
- No copyrighted questions (eg CSSA and Independent) should be posted.
Tips:
- You may post more than one question.
- When possible, after questions have been answered, you can peer mark using the marking scheme.
16. ## re: HSC Chemistry Marathon Archive
$HCl \rightarrow H^+ + Cl^-$
$n(HCl) = CV = 0.1 \cdot 0.005 = 0.0005 \ mol$
There is a one to one ratio between HCl and H+
Hence
$n(H^+)= 0.0005$
$\therefore [H^+]= \frac{0.0005}{0.105} =0.0047...$
We get 0.105 from the total volume of the solution, which is 105 mL.
We now have the concentration of hydrogen ions, so we take $-\log$ of it:
$pH = -\log 0.0047 = 2.32$
I hope I'm right lol
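The arithmetic checks out; as a quick sketch (assuming, as in the post, 5 mL of 0.1 mol/L HCl in a total volume of 105 mL):

```python
import math

n_h = 0.1 * 0.005        # mol of H+ delivered by the HCl
conc = n_h / 0.105       # mol/L after dilution to 105 mL
pH = -math.log10(conc)   # about 2.32, agreeing with the working above
```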
=======================
How is it possible to have NEUTRAL water at pH 6?
(just brief and straight to the point)
17. ## re: HSC Chemistry Marathon Archive
Correct.
Isn't neutral water defined with pH 7?
18. ## re: HSC Chemistry Marathon Archive
Originally Posted by bleakarcher
Correct.
Isn't neutral water defined with pH 7?
I would tell you why this is not the case, but it would give the answer away haha.
EDIT: I didn't want to kill the thread this early, here is another question in the mean time while people think about it
Upon analysis of mass of a hydrocarbon was found to contain 82.6% Carbon and 17.4% Hydrogen. Calculate its empirical formula
19. ## re: HSC Chemistry Marathon Archive
Originally Posted by Sy123
Upon analysis a hydrocarbon was found to contain 82.6% Carbon and 17.4% Hydrogen. Calculate its empirical formula
What sort of analysis?
Without knowing the type of analysis you don't know whether it is mol% or mass%
20. ## re: HSC Chemistry Marathon Archive
Originally Posted by Riproot
What sort of analysis?
Without knowing the type of analysis you don't know whether it is mol% or mass%
Fixed. But as far as I know it is impossible for there to be those numerical percentages and it meaning mol%, the Carbon - Hydrogen ratio is too high for it to be mol.
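For anyone checking their working later, the mole-ratio arithmetic for the question above (a sketch, using atomic masses 12.01 and 1.008 per 100 g of compound) lands on C2H5:

```python
mol_C = 82.6 / 12.01   # moles of carbon per 100 g of compound
mol_H = 17.4 / 1.008   # moles of hydrogen per 100 g of compound
ratio = mol_H / mol_C  # about 2.51, i.e. H : C is roughly 5 : 2
# Doubling the 1 : 2.5 ratio gives whole numbers, so the empirical formula is C2H5
```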
21. ## re: HSC Chemistry Marathon Archive
Umm just curious coz I'm kinda wtf-ing, but are both of your questions HSC level?
22. ## re: HSC Chemistry Marathon Archive
Originally Posted by Sy123
Fixed. But as far as I know it is impossible for there to be those numerical percentages and it meaning mol%, the Carbon - Hydrogen ratio is too high for it to be mol.
But still! It's the principle of the thing.
(It's gravimetric analysis or something, right?)
23. ## re: HSC Chemistry Marathon Archive
Originally Posted by theind1996
Umm just curious coz I'm kinda wtf-ing, but are both of your questions HSC level?
The pH 6 one would be one of the tricky ones but the percentage one is fine. I think we did more of that stuff in year 11 though iirc.
24. ## re: HSC Chemistry Marathon Archive
Originally Posted by theind1996
Umm just curious coz I'm kinda wtf-ing, but are both of your questions HSC level?
They are both within syllabus standards not sure what you mean by HSC level.
25. ## re: HSC Chemistry Marathon Archive
Originally Posted by Riproot
The pH 6 one would be one of the tricky ones but the percentage one is fine. I think we did more of that stuff in year 11 though iirc.
Oh yeah shit.
The empirical formula is from moles shit in Yr 11.
Post some more Production questions guys? I think most people haven't done Acidic in great depth.
# Mensuration - Quantitative Aptitude for GMAT | GMAT | Notes, Videos & Tests
Mensuration is a topic-wise collection of important notes, topic-wise tests, video lectures, NCERT textbook content, NCERT solutions, and previous year papers, designed so that you get a complete chapter-wise package for your preparation of Quantitative Aptitude for GMAT in one place. Here, the chapter-wise guide is framed by the best teachers, having tremendous knowledge in the respective streams, thereby making Mensuration - Quantitative Aptitude for GMAT the ultimate study source for the chapter.
## Notes, Videos & Tests you need for Mensuration
- Mensuration: Notes & Solved Examples — Doc | 1 page
- Test: Mensuration, Solid Geometry Questions — Test | 20 questions | 20 min
## Notes for Mensuration - Quantitative Aptitude for GMAT | GMAT
Mensuration Notes for GMAT is part of Quantitative Aptitude for GMAT Notes for Quick Revision. These Mensuration sections for Quantitative Aptitude for GMAT Notes are comprehensive and detailed yet concise enough to glance through for exam preparations. The Mensuration Topic is one of the critical chapters for GMAT aspirants to understand thoroughly to perform well in the Quantitative Aptitude for GMAT Section of the GMAT Examination. Many aspirants find this section a little complicated and thus they can take help from EduRev notes for GMAT, prepared by experts according to the latest GMAT syllabus.
## Tests for Mensuration - Quantitative Aptitude for GMAT | GMAT
After completing Mensuration, it becomes important for students to evaluate how much they have learned from the chapter. Here comes the role of the chapter-wise tests for Mensuration. EduRev provides three to four tests for each chapter. These MCQs (Multiple Choice Questions) for GMAT are designed to help students understand the types of questions that come up during the exam. By attempting these tests, students can not only evaluate themselves but also get a good hold on Quantitative Aptitude for GMAT. Taking tests helps them manage time during the exam and also builds their confidence. For proper learning, we have provided a number of tests here. Taking these tests will definitely help them improve their score.
## More chapters similar to Mensuration for GMAT
The Complete Chapterwise preparation package of Quantitative Aptitude for GMAT is created by the best GMAT teachers for GMAT preparation. 79592 students are using this for GMAT preparation.
Mensuration | Quantitative Aptitude for GMAT
Varna, Bulgaria (2010 – present)
LieVeiL was founded in 2010 at the Bulgarian sea-side capital – Varna. In the beginning it was only a musical…
# anvi-import-collection [program]
Import an external binning result into anvi'o.
See program help menu or go back to the main page of anvi’o programs and artifacts.
## Usage
This program, as one might think, allows you to import a collection. This allows you to easily import any binning that you’ve already done into a profile-db, since the bins within that collection will be carried over.
This information (in the form of a collection-txt) can either come from another anvi'o project (using anvi-export-collection), or you can export the coverage and sequence composition of your data with anvi-export-splits-and-coverages, bin your contigs with software outside of anvi'o, and then import the result into your database with this program.
You can run this program like so:
anvi-import-collection -C my_bins.txt -p profile-db -c contigs-db
This will import the collection indicated in my_bins.txt into your profile-db.
my_bins.txt should be a tab-delimited file where the first column lists a split name and the second lists the bin that it is placed in. You can see an example of this here.
You can also provide this information by listing your contigs instead of your splits (like this). Just add the --contigs-mode tag.
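For reference, a minimal collection-txt can be produced like this (the split and bin names below are made up for illustration):

```shell
# two tab-delimited columns: split name, then bin name
printf 'contig_0001_split_00001\tBin_1\n' >  my_bins.txt
printf 'contig_0001_split_00002\tBin_1\n' >> my_bins.txt
printf 'contig_0002_split_00001\tBin_2\n' >> my_bins.txt
```

Each line assigns one split to one bin; with `--contigs-mode`, the first column would hold contig names instead.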
You can also provide an information file to describe the source and/or colors of your bins. This file is an example of such an information file.
## 👓 ‘A Sort of Everyday Struggle’ | The Harvard Crimson
'A Sort of Everyday Struggle' by Hannah Natanson
Women in Harvard's math department report a bevy of inequalities—from a discouraging absence of female faculty to a culture of "math bro" condescension.
Oddly, the story made no reference to the Lawrence H. Summers incident of 2005. Naturally, one can't pin the issue on him, as this lack of diversity has spanned the life of the university, but apparently the math department didn't get the memo when the university president left.
I’ve often heard that the fish stinks from the head, but apparently it’s the whole fish here.
Syndicated copies to:
## 👓 Filing errors knock rent control in Glendale out of consideration — for now | Glendale News-Press
Filing errors knock rent control in Glendale out of consideration — for now by Jeff Landa (Glendale News-Press)
The petition hit an administrative setback.
## 👓 Trump offered a grieving military father \$25,000 in a call, but didn’t follow through | Washington Post
Trump offered a grieving military father \$25,000 in a call, but didn’t follow through by Dan Lamothe, Lindsey Bever and Eli Rosenberg (Washington Post)
‘No other president has ever done something like this,’ Trump told the late soldier’s father.
It kills me that he’s so unfeeling, unkind, and generally lacking in empathy. The fact that he hasn’t caught on that people are going to fact-check him and make him continually look like an even bigger loser is even more painful. The disrespect to our troops just becomes the icing on the cake. His actions really hurt my brain because they make no sense within the framework of humanity.
## 👓 Here’s a hack so you can tweet with 280 characters right now | The Verge
How to tweet with 280 characters right now by Tom Warren (The Verge)
Sadly Twitter has figured out the work around and disabled it so it doesn’t work anymore. Fortunately I can always write on my own site without character limits.
## 👓 Another USC medical school dean resigns | Washington Post
Another USC medical school dean resigns by Susan Svrluga (Washington Post)
The University of Southern California announced Thursday that Rohit Varma has resigned as dean of the Keck School of Medicine. He had replaced a dean who was banned from campus after allegations of drug use and partying.
I’ve been so busy in the last month, I had to do a double-take at the word ANOTHER!
The statement USC released seems highly disingenuous and inconsistent to me.
“As you may have heard, today Dr. Rohit Varma resigned as dean of the Keck School of Medicine of USC,” the school’s provost, Michael Quick, wrote in a message to the community.
“I understand how upsetting this situation is to all of us, but we felt it was in the best interest of the faculty, staff, and students for all of us to move in this direction. Today we learned previously undisclosed information that caused us to lose confidence in Dr. Varma’s ability to lead the school. Our leaders must be held to the highest standards. Dr. Varma understands this, and chose to step down.”
First they say Varma resigned as dean, which makes it seem as if he’s stepping aside of his own accord, when the next paragraph indicates that the university leadership lost confidence in him and forced him out. So which is it? Did he resign, or was he fired?
Second, they mention “undisclosed information.” This is painful because the so-called undisclosed information was something USC was not only aware of, but over which it actually paid off a person involved to the tune of more than \$100,000!
USC paid her more than \$100,000 and temporarily blocked Varma from becoming a full member of the faculty, according to the records and interviews.
“The behavior you exhibited is inappropriate and unacceptable in the workplace, reflects poor judgment, is contrary to the University’s standards of conduct, and will not be tolerated at the University of Southern California,” a USC official wrote in a 2003 letter of reprimand.”
Even the LA Times reports: “The sexual harassment allegation is well known in the upper echelons of the university, but not among many of the students and staff.” How exactly was this “undisclosed?!”
So, somehow, a person who was formally reprimanded years ago (and whose reprimands were later greatly lessened, by the way) was accidentally promoted to dean of an already embattled division of the university?? I’m not really sure how he even maintained his position after the original incident, much less how he was subsequently promoted and allowed to continue on to eventually be appointed dean years later.

Most shocking, there was no mention of his other positions at USC. I take this to mean that he’s still on the faculty, he’s still on staff at the hospital, and he still has all the rights and benefits of his previous positions at the university? I sincerely hope that he learned his lesson in 2003, but I suspect that he didn’t; if that is the case and others come forward, he will be summarily dispatched.

For the university’s sake, I further hope they’re looking into it internally with a fine-toothed comb before they’re outed again by the Los Angeles Times reporting staff, who seem to have a far higher level of morality than the USC leadership over the past several years.
During a month which has seen an inordinate amount of sexual harassment backlash, I’m shocked that USC has done so very little and has only acted (far too long after-the-fact) to sweep this all under the rug.
## 📖 Read chapter one of Weapons of Math Destruction by Cathy O’Neil
📖 Read chapter one of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
I don’t think she’s used the specific words in the book yet, but O’Neil is fundamentally writing about social justice and transparency. To a great extent both governments and increasingly large corporations are using these Weapons of Math Destruction inappropriately. Often it may be the case that the algorithms are so opaque as to be incomprehensible by their creators/users, but, as I suspect in many cases, they’re being used to actively create social injustice by benefiting some classes and decimating others. The evolving case of Facebook’s involvement in potentially shifting the outcome of the 2016 Presidential election especially via “dark posts” is an interesting case in point with regard to these examples.
In some sense these algorithms are like viruses running rampant in a large population without the availability of antibiotics to tamp down or modify their effects. Without feedback mechanisms and the ability to see what is going on as it happens the scale issue she touches on can quickly cause even greater harm over short periods of time.
I like that one of the first examples she uses for modeling is that of preparing food for a family. It’s simple, accessible, and generic enough that the majority of people can relate directly to it. It has lots of transparency (even more than her sabermetrics example from baseball). Sadly, however, there is a large swath of the American population that is poor, uneducated, and living in horrific food deserts, who may not grasp the subtleties of even this simple model. As I was reading, it occurred to me that there is a reasonable political football that gets pushed around from time to time in many countries that relates to food and food subsidies. In the United States it’s known as the Supplemental Nutrition Assistance Program (aka SNAP), and it’s regularly changing, though fortunately for many it has some nutritionists who help to provide a feedback mechanism for it. I suspect it would make a great example of the type of Weapon of Math Destruction she’s discussing in this book. Those who are interested in a quick overview of it and some of the consequences can find a short audio introduction via the Eat This Podcast episodes “How much does a nutritious diet cost? Depends what you mean by ‘nutritious’” or “Crime and nourishment: Some costs and consequences of the Supplemental Nutrition Assistance Program,” the latter of which discusses an interesting crime-related sub-consequence of something as simple as when SNAP benefits are distributed.
I suspect that O’Neil won’t go as far as to bring religion into her thesis, so I’ll do it for her, but I’ll do so from a more general moral philosophical standpoint, one which underpins much of the Judeo-Christian heritage so prevalent in our society. One of my pet peeves about moralizing (often Republican) conservatives (who often both wear their religion on their sleeves and beat others with it–here’s a good recent case in point) is that they never seem to follow the Golden Rule, which is stated in multiple ways in the Bible, including:
He will reply, ‘Truly I tell you, whatever you did not do for one of the least of these, you did not do for me.
Matthew 25:45
In a country that (says it) values meritocracy, much of the establishment doesn’t seem to put much, if any, value into these basic principles, despite indicating that they do.
I’ve previously highlighted the application of mathematical game theory briefly in relation to the Golden Rule, but from a meritocracy perspective, why can’t it operate at all levels? By this I’ll make tangential reference to Cesar Hidalgo’s thesis in his book Why Information Grows, in which he looks not just at individuals (person-bytes), but at larger structures like firms/companies (firmbytes), governments, and even nations. Why can’t these larger structures have their own meritocracy? When America “competes” against other countries, why shouldn’t it be doing so in a meritocracy of nations? To do this requires that we as individuals (as well as corporations and city, state, and even national governments) help each other out to do what we can’t do alone. One often hears the aphorism that “a chain is only as strong as its weakest link”; why then would we actively go out of our way to create weak links within our own society, particularly as many in government decry the cultures and actions of other nations that we view as trying to defeat us? To me the statistical mechanics of the situation require that we help each other to advance the status quo of humanity. Evolution and the Red Queen hypothesis dictate that humanity won’t regress back to the mean; otherwise it may be regressing itself toward extinction.
### Highlights, Quotes, & Marginalia
Chapter One – Bomb Parts: What is a Model
You can often see troubles when grandparents visit a grandchild they haven’t seen for a while.
Highlight (yellow) page 22 | Location 409-410
Added on Thursday, October 12, 2017 11:19:23 PM
Upon meeting her a year later, they can suffer a few awkward hours because their models are out of date.
Highlight (yellow) page 22 | Location 411-412
Added on Thursday, October 12, 2017 11:19:41 PM
Racism, at the individual level, can be seen as a predictive model whirring away in billions of human minds around the world. It is built from faulty, incomplete, or generalized data. Whether it comes from experience or hearsay, the data indicates that certain types of people have behaved badly. That generates a binary prediction that all people of that race will behave that same way.
Highlight (yellow) page 22 | Location 416-420
Added on Thursday, October 12, 2017 11:20:34 PM
Needless to say, racists don’t spend a lot of time hunting down reliable data to train their twisted models.
Highlight (yellow) page 23 | Location 420-421
Added on Thursday, October 12, 2017 11:20:52 PM
the workings of a recidivism model are tucked away in algorithms, intelligible only to a tiny elite.
Highlight (yellow) page 25 | Location 454-455
Added on Thursday, October 12, 2017 11:24:46 PM
A 2013 study by the New York Civil Liberties Union found that while black and Latino males between the ages of fourteen and twenty-four made up only 4.7 percent of the city’s population, they accounted for 40.6 percent of the stop-and-frisk checks by police.
Highlight (yellow) page 25 | Location 462-463
Added on Thursday, October 12, 2017 11:25:50 PM
So if early “involvement” with the police signals recidivism, poor people and racial minorities look far riskier.
Highlight (yellow) page 26 | Location 465-466
Added on Thursday, October 12, 2017 11:26:15 PM
The questionnaire does avoid asking about race, which is illegal. But with the wealth of detail each prisoner provides, that single illegal question is almost superfluous.
Highlight (yellow) page 26 | Location 468-469
Added on Friday, October 13, 2017 6:01:28 PM
judge would sustain it. This is the basis of our legal system. We are judged by what we do, not by who we are.
Highlight (yellow) page 26 | Location 478-478
Added on Friday, October 13, 2017 6:02:53 PM
(And they’ll be free to create them when they start buying their own food.) I should add that my model is highly unlikely to scale. I don’t see Walmart or the US Agriculture Department or any other titan embracing my app and imposing it on hundreds of millions of people, like some of the WMDs we’ll be discussing.
You have to love the obligatory parental aphorism about making your own rules when you have your own house.
Yet the US SNAP program does just this. It could be an interesting example of this type of WMD.
Highlight (yellow) page 28 | Location 497-499
Added on Friday, October 13, 2017 6:06:04 PM
three kinds of models.
namely: baseball, food, recidivism
Highlight (yellow) page 27 | Location 489-489
Added on Friday, October 13, 2017 6:08:26 PM
The first question: Even if the participant is aware of being modeled, or what the model is used for, is the model opaque, or even invisible?
Highlight (yellow) page 28 | Location 502-503
Added on Friday, October 13, 2017 6:08:59 PM
many companies go out of their way to hide the results of their models or even their existence. One common justification is that the algorithm constitutes a “secret sauce” crucial to their business. It’s intellectual property, and it must be defended,
Highlight (yellow) page 29 | Location 513-514
Added on Friday, October 13, 2017 6:11:03 PM
the second question: Does the model work against the subject’s interest? In short, is it unfair? Does it damage or destroy lives?
Highlight (yellow) page 29 | Location 516-518
Added on Friday, October 13, 2017 6:11:22 PM
While many may benefit from it, it leads to suffering for others.
Highlight (yellow) page 29 | Location 521-522
Added on Friday, October 13, 2017 6:12:19 PM
The third question is whether a model has the capacity to grow exponentially. As a statistician would put it, can it scale?
Highlight (yellow) page 29 | Location 524-525
Added on Friday, October 13, 2017 6:13:00 PM
scale is what turns WMDs from local nuisances into tsunami forces, ones that define and delimit our lives.
Highlight (yellow) page 30 | Location 526-527
Added on Friday, October 13, 2017 6:13:20 PM
So to sum up, these are the three elements of a WMD: Opacity, Scale, and Damage. All of them will be present, to one degree or another, in the examples we’ll be covering
Highlight (yellow) page 31 | Location 540-542
Added on Friday, October 13, 2017 6:18:52 PM
You could argue, for example, that the recidivism scores are not totally opaque, since they spit out scores that prisoners, in some cases, can see. Yet they’re brimming with mystery, since the prisoners cannot see how their answers produce their score. The scoring algorithm is hidden.
This is similar to anti-class action laws and arbitration clauses that prevent classes from realizing they’re being discriminated against in the workplace or within healthcare. On behalf of insurance companies primarily, many lawmakers work to cap awards from litigation as well as to prevent class action suits which show much larger inequities that corporations would prefer to keep quiet. Some of the recent incidences like the cases of Ellen Pao, Susan J. Fowler, or even Harvey Weinstein are helping to remedy these types of things despite individuals being pressured to stay quiet so as not to bring others to the forefront and show a broader pattern of bad actions on the part of companies or individuals. (This topic could be an extended article or even book of its own.)
Highlight (yellow) page 31 | Location 542-544
Added on Friday, October 13, 2017 6:20:59 PM
the point is not whether some people benefit. It’s that so many suffer.
Highlight (yellow) page 31 | Location 547-547
Added on Friday, October 13, 2017 6:23:35 PM
And here’s one more thing about algorithms: they can leap from one field to the next, and they often do. Research in epidemiology can hold insights for box office predictions; spam filters are being retooled to identify the AIDS virus. This is true of WMDs as well. So if mathematical models in prisons appear to succeed at their job—which really boils down to efficient management of people—they could spread into the rest of the economy along with the other WMDs, leaving us as collateral damage.
Highlight (yellow) page 31 | Location 549-552
Added on Friday, October 13, 2017 6:24:09 PM
##### Guide to highlight colors
Yellow–general highlights and highlights which don’t fit under another category below
Orange–Vocabulary word; interesting and/or rare word
Blue–Interesting Quote
Gray–Typography Problem
Red–Example to work through
I’m reading this as part of Bryan Alexander’s online book club.
## 👓 Going Indie. Step 2: Reclaiming Content | Matthias Ott
Going Indie. Step 2: Reclaiming Content by Matthias Ott (Matthias Ott | User Experience Designer)
We have lost control over our content. To change this, we need to reconsider the way we create and consume content online. We need to create a new set of tools that enable an independent, open web for everyone.
A nice narrative for the IndieWeb movement by Matthias.
Some of my favorite quotes from the piece:
Having your own website surely is a wonderful thing, but to be relevant, useful, and satisfactory, it needs to be connected to other sites and services. Because ultimately, human interactions are what fuels social life online and most of your friends will still be on social networks, for now.
…what the IndieWeb movement is about: Creating tools that enable a decentralized, people-focused alternative to the corporate web, putting you back in control, and building an active community around this idea of independence.
Tim Kadlec reminded us of the underlying promise of the web:
Wilson Miner put it in his 2011 Build conference talk:
“The things that we choose to surround ourselves will shape what we become. We’re actually in the process of building an environment, where we’ll spend most of our time, for the rest of our lives.”
This also reminds me that I ought to swing by room 3420 in Boelter Hall on my way to math class this week. I forget that I’m always taking classes just a few floors away from the room that housed the birth of the internet.
## 👓 Two alternatives to #WomenBoycottTwitter that don’t rely on women’s silencing | Another Angry Woman
Two alternatives to #WomenBoycottTwitter that don’t rely on women’s silencing by Zoe Stavri (Another Angry Woman)
After Twitter extending their risible “abuse” policy to a suspension of a celebrity white woman speaking out against sexual violence, the problems in their model have been laid bare, and to my pleasant surprise, people are talking about taking action (I’d been pessimistic about this). Unfortunately, it’s entirely the wrong kind of action: a women’s boycott. This is a problem, because once again, it forces us to do the heavy lifting. And once again, it forces us to silence ourselves: the very opposite of what we should be doing. So, here’s two things that can be done. One is an activity for men who consider themselves allies. The other is for all of us. Especially women.
I took part in #WomenBoycottTwitter today and it honestly wasn’t too difficult, though I did miss out on some of the scientific chatter that crosses my desk during the day. Since I post mostly to my own website more often and syndicate to Twitter only occasionally, the change didn’t feel too drastic to me, though there were one or two times I almost accidentally opened Twitter to track down people’s sites. Fortunately I’ve taken control of more of my online experience back for myself using IndieWeb principles.
This particular post has some seemingly interesting methods for fighting against the status quo on Twitter for those who are entrenched there. The first, #AmplifyWomen, sounds a lot like the great advice I heard from Valerie Alexander a few months ago at an Innovate Pasadena event.
Some of the others almost seek to reverse-gamify Twitter’s business model. People often complain about silos and how they work, but few ever seek to actively subvert them or do this type of reverse-gamification of their models. This is an interesting concept, though as useful as such tools might be, they may be somewhat difficult to accomplish in some cases and may hamper one’s experience on such platforms. This being said, having ultimate control over your domain, data, and interactions is still a far preferable model.
And while we’re thinking about amplifying women, do take a look at some of Zoe’s other content, she’s got a wealth of good writing. I’ll be adding her to my follow list/reader.
## 👓 Towards a more democratic Web | Tara Vancil
Towards a more democratic Web by Tara Vancil (Tara Vancil)
Many people who have suffered harassment on Twitter (largely women), are understandably fed up with Twitter’s practices, and have staged a boycott of Twitter today October 13, 2017. Presumably the goal is to highlight the flaws in Twitter’s moderation policies, and to push the company to make meaningful changes in their policies, but I’d like to argue that we shouldn’t expect Twitter’s policies to change.
It’s not going to get better.
I think there are a lot of people, including myself, who also think like she does here:
I want online media to work much more like a democracy, where users are empowered to decide what their experience is like.
The difference for her is that she’s actively building something to attempt to make things better not only for herself, but for others. This is tremendously laudable.
I’d heard of her Beaker project and of Mastodon before, but hadn’t heard anything about Patchwork, which sounds rather interesting.
h/t Richard Eriksson for highlighting this article on Reading.am, though I likely would have come across it tomorrow morning in my own feed reader.
## 👓 Twitter CEO promises to crack down on hate, violence and harassment with “more aggressive” rules | Tech Crunch
Twitter CEO promises to crack down on hate, violence and harassment with “more aggressive” rules by Matthew Panzarino (Tech Crunch)
Twitter CEO Jack Dorsey took to…Twitter today to promise a “more aggressive” stance in its rules and how it enforces them. The tweet storm was based in a response to the #WomenBoycottTwitter protest, as well as work that Dorsey says Twitter has been working ‘intensely’ on over the past few months. Dorsey says that critical decisions were made today in how to go about preventing the rampant and vicious harassment many women, minorities and other users undergo daily on the platform. “We decided to take a more aggressive stance in our rules and how we enforce them,” Dorsey says. “New rules around: unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorifies violence. These changes will start rolling out in the next few weeks. More to share next week.”
I don’t have very high hopes for the climate changing on this issue though I did participate in the Twitter boycott today.
## 📖 Read pages 112-121 of Abstract Algebra: An Introduction by Thomas W. Hungerford
📖 Read pages 112-121 of Abstract Algebra: An Introduction (First Edition) by Thomas W. Hungerford
Chapter 5: Congruence in $F[x]$ and Congruence-Class arithmetic, Sections 1 and 2
Reviewing over some algebra for my algebraic geometry class tonight. I always did love the pedagogic design of this textbook. The way he builds up algebraic structures is really lovely.
## 👓 Half the universe’s missing matter has just been finally found | New Scientist
Half the universe’s missing matter has just been finally found by Leah Crane (New Scientist)
About half the normal matter in our universe had never been observed – until now. Two teams have finally seen it by combining millions of faint images into one
Discoveries seem to back up many of our ideas about how the universe got its large-scale structure
Andrey Kravtsov (The University of Chicago) and Anatoly Klypin (New Mexico State University). Visualisation by Andrey Kravtsov
The missing links between galaxies have finally been found. This is the first detection of the roughly half of the normal matter in our universe – protons, neutrons and electrons – unaccounted for by previous observations of stars, galaxies and other bright objects in space.
You have probably heard about the hunt for dark matter, a mysterious substance thought to permeate the universe, the effects of which we can see through its gravitational pull. But our models of the universe also say there should be about twice as much ordinary matter out there, compared with what we have observed so far.
Two separate teams found the missing matter – made of particles called baryons rather than dark matter – linking galaxies together through filaments of hot, diffuse gas.
Continue reading “👓 Half the universe’s missing matter has just been finally found | New Scientist”
## 📗 Started reading Weapons of Math Destruction by Cathy O’Neil
📖 Read introduction of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
Based on the opening, I’m expecting some great examples, many of which are going to be as heavily biased as things like the redlining seen in lending practices in the last century. They’ll come about as the result of missing data, missing assumptions, and even incorrect assumptions.
I’m aware that one of the biggest problems in so-called Big Data is that one needs to spend an inordinate amount of time cleaning up the data (often by hand) to get something even remotely usable. Even with this done I’ve heard about people not testing out their data and then relying on the results only to later find ridiculous error rates (sometimes over 100%!)
Of course there is some space here for the intelligent mathematician, scientist, or quant to create alternate models to take advantage of overlays in such areas, and particularly markets. By overlay here, I mean the gambling definition of the word in which the odds of a particular wager are higher than they should be, thus tending to favor an individual player (who typically has more knowledge or information about the game) rather than the house, which usually relies on a statistically biased game or by taking a rake off of the top of a parimutuel financial structure, or the bulk of other players who aren’t aware of the inequity. The mathematical models based on big data (aka Weapons of Math Destruction or WMDs) described here, particularly in financial markets, are going to often create such large inequities that users of alternate means can take tremendous advantage of the differences for their own benefits. Perhaps it’s the evolutionary competition that will more actively drive these differences to zero? If this is the case, it’s likely that it’s going to be a long time before they equilibrate based on current usage, especially when these algorithms are so opaque.
I suspect that some of this book will highlight uses of statistical errors and logical fallacies like cherry picking data, but which are hidden behind much more opaque mathematical algorithms thereby making them even harder to detect than simple policy decisions which use the simpler form. It’s this type of opacity that has caused major market shifts like the 2008 economic crash, which is still heavily unregulated to protect the masses.
I suspect that folks within Bryan Alexander’s book club will find the example of Sarah Wysocki to be very compelling and damning evidence of how these big data algorithms work (or don’t work, as the case may be). In this particular example, there are so many signals which are not only difficult to measure, if at all, that the thing they’re attempting to measure is so swamped with noise as to be unusable. Equally interesting, but not presented here, would be the alternate case of someone tremendously incompetent (perhaps someone cheating, as indicated in the example) who actually scored tremendously high on the scale and was kept in their job.
### Highlights, Quotes, & Marginalia
Introduction
Do you see the paradox? An algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher. That probability is distilled into a score, which can turn someone’s life upside down. And yet when the person fights back, “suggestive” countervailing evidence simply won’t cut it. The case must be ironclad. The human victims of WMDs, we’ll see time and again, are held to a far higher standard of evidence than the algorithms themselves.
Highlight (yellow) – Introduction > Location xxxx
Added on Sunday, October 9, 2017
[WMDs are] opaque, unquestioned, and unaccountable, and they operate at a scale to sort, target or “optimize” millions of people. By confusing their findings with on-the-ground reality, most of them create pernicious WMD feedback loops.
Highlight (yellow) – Introduction > Location xxxx
Added on Sunday, October 9, 2017
The software is doing its job. The trouble is that profits end up serving as a stand-in, or proxy, for truth. We’ll see this dangerous confusion crop up again and again.
Highlight (yellow) – Introduction > Location xxxx
Added on Sunday, October 9, 2017
I’m reading this as part of Bryan Alexander’s online book club.
## 👓 Vladimir Voevodsky, 1966 – 2017 | John Carlos Baez
This mathematician died last week. He won the Fields Medal in 2002 for proving the Milnor conjecture in a branch of algebra known as algebraic K-theory. He continued to work on this subject until he helped prove the more general Bloch-Kato conjecture in 2010. Proving these results — which are too technical to easily describe to nonmathematicians! — required him to develop a dream of Grothendieck: the theory of motives. Very roughly, this is a way of taking the space of solutions of a collection of polynomial equations and chopping it apart into building blocks. But the process of 'chopping up', and also these building blocks, called 'motives', are very abstract — nothing simple or obvious.
There’s some interesting personality and history in this short post of John’s.
## 👓 The Next Platform | Pierre Levy
The Next Platform by Pierre Levy (Pierre Levy's Blog)
One percent of the human population was connected to the Internet at the end of the 20th century. In 2017, more than 50% is. Most of the users interact in social media, search information, buy products and services online. But despite the ongoing success of digital communication, there is a growing dissatisfaction about the big tech companies (the “Silicon Valley”) who dominate the new communication environment. The big techs are the most valued companies in the world and the massive amount of data that they possess is considered the most precious good of our time. The Silicon Valley owns the big computers: the network of physical centers where our personal and business data are stored and processed. Their income comes from their economic exploitation of our data for marketing purpose and from their sales of hardware, software or services. But they also derive considerable power from the knowledge of markets and public opinions that stems from their information control.
Transparency is the very basis of trust and the precondition of authentic dialogue. Data and people (including the administrators of a platform), should be traceable and audit-able. Transparency should be reciprocal, without distinction between rulers and ruled. Such transparency will ultimately be the basis of reflexive collective intelligence, allowing teams and communities of any size to observe and compare their cognitive activity.
The trouble with some of this is the post-truth political climate in which basic “facts” are under debate. What will the battle between these two groups look like, and how can actual facts win out in the end? Will the future Eloi and Morlocks be their descendants? I would have presumed that generally logical, intelligent, and educated people would come to a broad philosophical meeting of the minds as to how best to maximize life, but this is obviously not the case, thanks in part to the poorly educated who will seemingly believe almost anything. And this problem is generally separate from that of the terrifically selfish people who have differing philosophical stances on how to proceed. How will these differences evolve over time?
This article is sure to be interesting philosophy among some in the IndieWeb movement, but there are some complexities in the system which are sure to muddy the waters. I suspect that many in the Big History school of thought may enjoy the underpinnings of this as well.
I’m going to follow Pierre Levy’s blog to come back and read a bit more about his interesting research programme. There’s certainly a lot to unpack here.
#### Annotations
The Next Platform
Commonality means that people will not have to pay to get access to the new public sphere: all will be free and public property. Commonality means also transversality: de-silo and cross-pollination.
Openness is on the rise because it maximizes the improvement of goods and services, foster trust and support collaborative engagement.
We need a new kind of public sphere: a platform in the cloud where data and metadata would be our common good, dedicated to the recording and collaborative exploitation of our memory in the service of collective intelligence. According to the current zeitgeist, the core values orienting the construction of this new public sphere should be: openness, transparency and commonality
The practice of writing in ancient palace-temples gave birth to government as a separate entity. Alphabet and paper allowed the emergence of merchant city-states and the expansion of literate empires. The printing press, industrial economy, motorized transportation and electronic media sustained nation-states.
The digital revolution will foster new forms of government. We discuss political problems in a global public space taking advantage of the web and social media. The majority of humans live in interconnected cities and metropoles. Each urban node wants to be an accelerator of collective intelligence, a smart city.
# Investment strategies, lazy evaluation and memoization
This article will cover an interesting problem: given a set of possible investments, each with different tax rates, yearly rates and minimum time until withdrawal, what is the best investment strategy for the next 10, 20 or n years?
For instance, given the following investments:
• $i_1 = 9\%$ yearly rate, 25% taxes on profits upon withdrawal 1 year later
• $i_2 = 8\%$ yearly rate, 15% taxes on profits upon withdrawal 5 years later
• $i_3 = 7\%$ yearly rate, 0% taxes and withdrawal 3 years later
If we want to maximize earnings over 10 years, should we purchase $i_1$ ten times; $i_2$ twice; $i_3$ three times followed by $i_1$ once; $i_3$ once, then $i_2$ once and $i_1$ twice; or some other combination?
Before we go into programming, let’s do some basic math/algorithms. This is all simple math, so don’t worry. You can also skip to programming if you prefer.
Theorem 1: The final value to withdraw with any investment i such as the ones exemplified can be written as product of the initial value and a factor defined by the investment i, $e(i) = 1 + (1 - taxes) \cdot ((1 + yearlyRate)^{time} - 1)$
Proof: Given some initial value v and composite yearly interest rates r, investment time t and taxes on profits taxes:
$profitsBeforeTaxes = v \cdot (1 + r)^t - v = v \cdot ((1 + r)^t - 1)$
$profitsAfterTaxes = profitsBeforeTaxes \cdot (1 - taxes)$
$profitsAfterTaxes = v \cdot ((1 + r)^t - 1) \cdot (1 - taxes)$
$finalValue = v + profitsAfterTaxes = v + v \cdot ((1 + r)^t - 1) \cdot (1 - taxes)$
$finalValue = v \cdot (1 + ((1 + r)^t - 1) \cdot (1 - taxes))$
The previous line proves the existence of the factor. Now, since $finalValue=v \cdot e(i)$:
$e(i) = finalValue / v = 1 + (1 - taxes) \cdot ((1 + r)^t - 1)$
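To make the factor concrete, here is a small sketch (my own, not from the article; the function name is made up) that evaluates $e(i)$ for the three example investments:

```haskell
-- Sketch: numeric check of e(i) = 1 + (1 - taxes) * ((1 + rate)^time - 1)
-- for the three investments from the beginning of the article.
factor :: Double -> Double -> Int -> Double
factor rate taxes time = 1 + (1 - taxes) * ((1 + rate) ^ time - 1)

main :: IO ()
main = mapM_ print
  [ factor 0.09 0.25 1  -- i1: 1 + 0.75 * 0.09          = 1.0675
  , factor 0.08 0.15 5  -- i2: 1 + 0.85 * (1.08^5 - 1) ~ 1.3989
  , factor 0.07 0.00 3  -- i3: 1.07^3                  ~ 1.2250
  ]
```

Per year, $i_3$ compounds at roughly $1.2250^{1/3} \approx 1.0700$, slightly beating $i_1$'s $1.0675$ and $i_2$'s $1.3989^{1/5} \approx 1.0694$, which is consistent with the strategy the final program picks below.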
Theorem 2: Given a set of possible investments and a deadline $n$, if the best investment strategy is $s_n=e(i_1) \cdot e(i_2) \cdot ... \cdot e(i_j)$, then any sub-strategy contained in $s_n$ is the best strategy for the sum of the times of the investments it contains.
Proof: Without loss of generality, let us consider $s_n=s_a \cdot s_b$, with $a+b=n$ and $0 < a, b < n$, and assume the contrary: $s_a$ is not the best strategy for time $a$, but $s_n$ is the best strategy for time $n$. Then there must be some $s'_a > s_a$, and that would mean $s'_n = s'_a \cdot s_b > s_n$, which contradicts $s_n$ being the best strategy. Therefore, there can be no $s'_a > s_a$, and $s_a$ is optimal.
### Programming
Maybe you skipped the last part, but don’t worry. I’ll just roll out the recursive solution to the problem. How do we describe the list of investments to be made that maximizes earnings after some time n?
$s_0 = 1$
$s_n = max \{s_1 \cdot s_{n-1}, s_2 \cdot s_{n-2}, ..., s_{n-1} \cdot s_1, i \}\text{, with } i =\text{ investment with largest }e(i)\text{ of all possible investments of } time=n$
This basically means we test every possible combination, which is not very smart, of course. The advantage of finding a recursive solution is that we can compute and store calculations for use later on. This is what we call memoization.
Also, we only need to check up to $s_{floor(n/2)} \cdot s_{n - floor(n/2)}$, since every other check is redundant.
How do we implement this? In a procedural language we could use an array of size n and fill it up with all solutions from 0 to n. In Haskell we generally don’t want to use mutable data structures nor do we want to specify the order of evaluation of things, so we must find another tool in the toolbox to do this; it turns out that lazy evaluation is just that!
Lazy evaluation is, roughly speaking, a mechanism by which a value is only computed when required by some function. This means that we can define a data structure in terms of itself, and that it can even be infinite. Take the following example:
repeat :: a -> [a]
repeat x = x : repeat x
What happens here is that the function repeat takes an object of some type a and returns a possibly infinite list of a. The list will grow in size as more elements of it are demanded by evaluation. Let us take this idea to implement our bestStrategyBad:
-- We use the "investment" below to make sure the algorithm always returns some strategy, even if it means leaving your money in the bank
investmentLeaveInTheBank :: Investment
investmentLeaveInTheBank = Investment { name = "Leave it in the bank account", rate = 0, taxes = 0.0, time = 1 }
-- requires: import Data.List (foldl') and import Data.Maybe (maybeToList)
withMax :: Ord a => (b -> a) -> [b] -> Maybe b
withMax f xs = fmap snd maybeRes
  where maybeRes = foldl' (\acc el -> case acc of
          Just (maxVal, _) -> let cmp = f el in if cmp > maxVal then Just (cmp, el) else acc
          Nothing -> Just (f el, el)) Nothing xs
withMax1 :: Ord a => (b -> a) -> b -> [b] -> b
withMax1 f firstEl xs = snd $ foldl' (\acc@(maxVal, _) el -> let cmp = f el in if cmp > maxVal then (cmp, el) else acc) (f firstEl, firstEl) xs

bestStrategyBad :: Int -> [Investment] -> [Investment]
bestStrategyBad timeInYears invs' = go !! timeInYears
  where invs = investmentLeaveInTheBank : invs'
        factorStrategyBad is = product $ fmap factorInvestment is
        bestStrat desiredTime = withMax1 factorStrategyBad (maybeToList (bestInvestmentWithTime desiredTime)) (allCombinations desiredTime)
        bestInvestmentWithTime desiredTime = withMax factorInvestment $ filter (\i -> time i == desiredTime) invs
        -- For desiredTime=7, "allCombinations" returns strategies e1 ++ e6, e2 ++ e5 and e3 ++ e4
        allCombinations desiredTime = let halfTheTime = floor (fromIntegral desiredTime / 2) in fmap (\i -> go !! i ++ go !! (desiredTime - i)) [1..halfTheTime]
        go :: [[Investment]]
        go = [] : fmap bestStrat [1..]

There is nothing magical about the code above. When demanding go !! 20, for instance, the function bestStrat will be called with the value 20, which will demand all possible strategy combinations (as defined by our equations). Demanding all combinations will once again require go !! 19, go !! 18 and many others, which will repeat the process for a smaller n (the fact that they are smaller is crucial for our recursion to converge). What is different from recursion in imperative languages is that go is not a function: it is a list whose values are lazily calculated. As values are demanded from it, they are calculated only once, so you don't have to worry about what order to evaluate things in. In C# this is sort of like a List<Lazy<Investment[]>>. This is nice!

Still, there are two bad things about this solution:

1. go is a list, so accessing go !! n is $O(n)$. If this were an array this would be better. We will not tackle this issue for now, but feel free to do so!
2. We are creating a large number of lists with the (++) function, not to mention that once we combine two strategies we have to go through every investment in the combined strategy to calculate its complete factor, when we could do better.

So now let's go and solve issue number 2.

### More lazy evaluation

Combining two strategies leads to a strategy whose factor is the product of the factors of each strategy. There is no need to concatenate lists to discover the best strategy of some given size.
To avoid needless work, we need more lazy evaluation. Let's add some functions to our code and create the bestStrategyGood function:

data StrategyCalc = StrategyCalc [Investment] Double

factorStrategyGood :: StrategyCalc -> Double
factorStrategyGood (StrategyCalc _ x) = x

combine :: StrategyCalc -> StrategyCalc -> StrategyCalc
combine (StrategyCalc s1 f1) (StrategyCalc s2 f2) = StrategyCalc (s1 ++ s2) (f1 * f2)

bestStrategyGood :: Int -> [Investment] -> [Investment]
bestStrategyGood timeInYears invs' = let StrategyCalc res _ = go !! timeInYears in res
  where invs = investmentLeaveInTheBank : invs'
        bestStrat desiredTime = withMax1 factorStrategyGood (bestInvestmentWithTimeOr1 desiredTime) (allCombinations desiredTime)
        bestInvestmentWithTimeOr1 desiredTime = case withMax factorInvestment $ filter (\i -> time i == desiredTime) invs of
          Nothing -> StrategyCalc [] 1
          Just i -> StrategyCalc [i] (factorInvestment i)
        -- For desiredTime=7, "allCombinations" returns strategies e1 ++ e6, e2 ++ e5 and e3 ++ e4
        allCombinations desiredTime = let halfTheTime = floor (fromIntegral desiredTime / 2) in fmap (\i -> combine (go !! i) (go !! (desiredTime - i))) [1..halfTheTime]
        go :: [StrategyCalc]
        go = StrategyCalc [] 1 : fmap bestStrat [1..]
Take your time to digest this: the list of investments in each StrategyCalc will only be evaluated when the caller needs it to be evaluated. However, the combine function will create a StrategyCalc whose factor is calculated in constant time when combining two strategies. In fact, you could even obtain the final factor of the optimal strategy without ever having constructed a non-empty list. Nice!
### A little abstraction (skip to results if you prefer)
I thought a nice touch to finish this article would be to introduce an abstraction: the Monoid.
A Monoid is just a fancy name for an associative binary operation together with a value that is an identity for that operation. For example, the type Int, the addition function (+) and the value $0$ (zero) form an instance of Monoid, since any number plus zero equals itself and $(a+b)+c=a+(b+c)$ for any $a, b, c$ of type Int.
The same thing happens with investment strategies! So we can replace the combine function by the Monoidal append:
-- Note: on modern GHC (>= 8.4), Monoid requires a Semigroup instance,
-- so the associative append lives there and mappend defaults to it.
instance Semigroup StrategyCalc where
  StrategyCalc i1 f1 <> StrategyCalc i2 f2 = StrategyCalc (i1 ++ i2) (f1 * f2)

instance Monoid StrategyCalc where
  mempty = StrategyCalc [] 1
Don’t forget that <> is an infix alias for mappend!
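As a self-contained sketch of the idea (my own, using plain String names in place of the article's Investment type so it runs on its own), combining strategies with <> concatenates the investment lists and multiplies the factors, and mconcat folds a whole list of strategies at once:

```haskell
-- Standalone sketch of the Monoid for strategies; Strings stand in for
-- the article's Investment type so this snippet compiles by itself.
data StrategyCalc = StrategyCalc [String] Double deriving Show

instance Semigroup StrategyCalc where
  StrategyCalc i1 f1 <> StrategyCalc i2 f2 = StrategyCalc (i1 ++ i2) (f1 * f2)

instance Monoid StrategyCalc where
  mempty = StrategyCalc [] 1  -- the identity: the empty strategy multiplies by 1

main :: IO ()
main = print (mconcat [ StrategyCalc ["i3"] 1.2250
                      , StrategyCalc ["i3"] 1.2250
                      , StrategyCalc ["i2"] 1.3989 ])
-- the combined factor is the product 1.2250 * 1.2250 * 1.3989
```

Note that the identity StrategyCalc [] 1 is exactly the base case the article's go list starts with.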
### Results
Taking the code for bestStrategyGood and the three investments from the beginning of the article, let us devise the best strategy to maximize gains over the next 11 years:
terminal> ghci
ghci> :l Investments.hs
ghci> let availableInvestments = [ Investment { name = "Investment 1", rate = 0.09, taxes = 0.25, time = 1 }
                                 , Investment { name = "Investment 2", rate = 0.08, taxes = 0.15, time = 5 }
                                 , Investment { name = "Investment 3", rate = 0.07, taxes = 0, time = 3 } ]
ghci> fmap name $ bestStrategyGood 11 availableInvestments
["Investment 3","Investment 3","Investment 2"]
So it seems that buying investment 3, rebuying it and then buying investment 2 is the best strategy in this case.
That's it! I hope you liked it. If it helps, do know that this problem is still solvable with the same algorithm if the tax of each investment is a function of the amount of time since the investment title was purchased, and if the time until withdrawal is either an exact time or a minimum time. It is also possible to include inflation-correcting investments if you pass around some estimated inflation; all of this requires only minor modifications. Also, feel free to change the time unit to months and get something much more precise for your investments!
# Interfaces and typeclasses: Number APIs in C# and Haskell
In C# sometimes I sorely miss something like an INumber<T> interface with methods Add, Subtract, Multiply and others. The lack of this means it is cumbersome to write generic code on numbers. It means that instead of writing something like:
T Sum(IEnumerable<T> numbers) where T : INumber<T>;
We have to write all possible overloads:
double Sum(IEnumerable<double> numbers);
float Sum(IEnumerable<float> numbers);
decimal Sum(IEnumerable<decimal> numbers);
The implementation body of all these functions will be exactly the same, but we have to write it multiple times anyway. Some people work this out by creating generic methods for the operations they need while resorting to runtime type-checking:
T Add(T a, T b) {
if (a is double) {
return (double)a + (double)b;
}
else if (a is int) {
return (int)a + (int)b;
}
// .. and so on
}
This is not a terribly good solution, however, since we have no compile-time guarantees that the type T is a number at all, not to mention the performance costs of runtime type-checking. What’s more, this solution can’t be extended to new types; what if someone writes a Complex class to represent complex numbers? The implementation of Add would have to be open for modification, so this code could never be packaged in a library.
Sadly, it is hard to solve this conundrum without changing the Base Class Libraries themselves. The only thing we could hope for is for the people at Microsoft to design numeric interfaces such as INumber<T> (or others) and make our well-known primitive numeric types implement these interfaces.
Haskell is the programming language I’ve been playing with for the last year, and except for the steep learning curve, I have only good things to say. It can be extremely expressive and it is amusing to see that when my code builds it almost certainly works! It is also extremely terse, as you can easily see by this window manager’s less than 2000 lines of code, xmonad, and by the code on this post.
In Haskell, the problem shown above can be solved with typeclasses, which we can think of for now as something similar to interfaces, since they specify a contract that concrete types must obey. The big difference here is that when we create a typeclass, we can make types we don’t own implement (in Haskell: instantiate) it! This means we can design our numeric typeclasses and have Haskell’s standard numeric types, such as Data.Int and Data.Complex, instantiate them! What’s more: in Haskell we can create functions named “+”, “*”, “/”, “-” with infix application. No need to differentiate operators from regular functions: they are one and the same!
module Numeric where -- "Numeric" will be the namespace in which the definitions below will live
import qualified Prelude -- The prelude is a base set of types, typeclasses and functions that are used for common tasks
-- The "class" construct actually creates a typeclass (similar to an interface). Here we say that concrete types that instantiate this typeclass must implement functions called "+" and "*", both of them receiving two parameters of type "t" and returning an object of type "t" as well
class Number t where
(+) :: t -> t -> t
(*) :: t -> t -> t
zero :: t
-- Specifies the type "Int", which we don't own, as an instance of "Number". The Add and Multiply functions already exist in Haskell inside the Prelude. We'll use those.
instance Number Prelude.Int where
a + b = (Prelude.+) a b
a * b = (Prelude.*) a b
zero = 0
-- Now we can define a generic "sum" function which sums all numbers in a list. The following line says that type "t" must be an instance of the typeclass "Number", and that the function receives a list of t and returns t. There are better ways to write this in Haskell, but that is not important right now
sum :: Number t => [t] -> t
sum [] = zero -- This is our base case: empty list sums to zero
sum (x:xs) = x + sum xs -- This separates the first element in the list, "x", from the remaining list, "xs"
Note to the reader: Haskell’s prelude already comes with a Num typeclass with more than just addition and multiplication, and existing numeric types already implement those.
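A quick way to exercise the typeclass (a sketch of my own; it assumes the module above is saved as Numeric.hs next to this file):

```haskell
-- Assumes the Numeric module defined above is available for import.
import qualified Numeric
import Prelude (IO, Int, print)

main :: IO ()
main = print (Numeric.sum ([1, 2, 3, 4] :: [Int]))  -- prints 10
```

The type annotation is needed because the list literal could be any instance of Number, and no defaulting applies to our custom class.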
And that’s it! The syntax up there really is that short, and it really is type-checked! Also, it only scratches the surface of what Haskell is capable of. Believe me, just a tiny scratch.
It is important to notice here that in C# it is entirely possible to make new types that can be added to existing types by defining a public static T operator +(T a, T2 b) in the new type T. What we can’t do is specify generic type constraints that allow us to work with numeric types. In reality, this is not just about numeric APIs: it is just a consequence of the fact that we can’t make types we don’t own implement interfaces, combined to the fact that parametric polymorphism only allows restrictions based on subclassing or interface implementation (with the exception of the new(), struct and class constraints).
It is not hard to think of how useful typeclasses can be. Why doesn’t IList and ICollection implement IReadOnlyCollection anyways? Maybe we want both StringBuilder and System.String to implement IString, allowing for generic code that doesn’t need to convert between one and another. There are many possibilities out there.
Let’s take this a little further, because it can get pretty interesting: how about subtraction? In C# we can subtract a TimeSpan from a DateTime and get another DateTime. Subtracting an int from an int, however, yields another int. Can we encode this information in Haskell in a way that is checked by the compiler itself, allowing us to write generic code that is able to subtract one object from another? The answer is yes.
More: can we develop a set of typeclasses that makes sure that arithmetic operations will NEVER overflow? This would really help us write banking software, for example, allowing us to add and subtract enormous values without worrying about it. With two language extensions called MultiParamTypeClasses and TypeFamilies we can!
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}
module Numeric where
import qualified Prelude
import Prelude (Int, Integer, toInteger, negate) -- These types and functions will be available without the need to be prefixed by "Prelude."
import Data.Time
class Subtractive t1 t2 where
type Difference t1 t2 :: * -- This is just a fancy way of saying that the combination of "Difference" and two types is meant to represent another type
(-) :: t1 -> t2 -> Difference t1 t2
-- In Haskell the type "Integer" represents arbitrarily large integers. They are like "BigInteger" in .NET or Java. Our implementation of (-) is exactly the same as Prelude's in this case.
instance Subtractive Integer Integer where
type Difference Integer Integer = Integer
(-) = (Prelude.-)
instance Subtractive Int Int where
type Difference Int Int = Integer -- Here we say that the Difference between two Ints is an Integer, because no matter how small two Ints are, their difference is always representable as an Integer
a - b = (toInteger a) - (toInteger b)
-- Let's just enjoy ourselves a little and put in some date and time types in the mix
instance Subtractive UTCTime NominalDiffTime where
type Difference UTCTime NominalDiffTime = UTCTime
a - b = addUTCTime (negate b) a
-- The function below works for any two types t1 and t2 which allow for (t1 - t2). It takes in a list of tuples and returns a list of the differences between the two elements in each tuple.
someGenericDifferenceFunction :: Subtractive t1 t2 => [(t1, t2)] -> [Difference t1 t2]
someGenericDifferenceFunction [] = []
someGenericDifferenceFunction ((a, b) : xs) = (a - b) : someGenericDifferenceFunction xs -- Quick reminder: ":" is a function that takes an element and a list and prepends the element into the list
The example above is still incomplete: we need instances of Subtractive Integer Int and Subtractive Int Integer for this API to become more practical. This is left to the reader, however. Meanwhile, let’s try this out in ghci:
terminal> ghci
ghci> :l Numeric.hs
*Numeric> let list = [(1, 2), (5, 5), (10, 3)] :: [(Int, Int)] -- We need to specify the type of "list" because literals could be any type that implements "Num", including Int and Integer.
*Numeric> let difs = someGenericDifferenceFunction list
*Numeric> difs
[-1,0,7]
*Numeric> :t difs
difs :: [Integer]
C# is great, and a lot of what we achieved with Haskell could be achieved through an ISubtractive<T, T2, TResult>, if only we could make existing types implement it. We could also create structs that simply wrap existing types, write implicit coercion rules from (and to) them, and make these new types implement our custom interfaces, which would make a lot of things possible. But we wouldn't be able to pass an instance of IEnumerable<WrapperType> as a replacement for an IEnumerable<WrappedType> without explicit casting, for example, and we'd also have to watch out and stay away from built-in arithmetic operators, since they might no longer obey the relations between types specified through the interfaces (we might want int + int = BigInteger).
So that’s it. I hope you’ve enjoyed your reading, but most of all I hope any C# or Java (or any other mainstream language) developer that reads this gives Haskell a shot. It really is an amazing language.
Any comments and corrections are very welcome!
Which one of the following represents non-growth associated product formation kinetics in a bioprocess system? $X$ and $P$ denote viable cell and product concentrations, respectively.
# American Institute of Mathematical Sciences
July 2018, 14(3): 1105-1122. doi: 10.3934/jimo.2018001
## Portfolio procurement policies for budget-constrained supply chains with option contracts and external financing
1. School of Management and Economics, University of Electronic Science and Technology of China, Chengdu, China
2. Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
3. Department of Marketing and International Business, Valdosta State University, Valdosta, USA
* Corresponding author: Xu Chen, E-mail: xchenxchen@263.net, Tel: +86-28-83206622
Received October 2015; revised September 2017; published July 2018 (early access January 2018).
This study investigates a budget-constrained retailer's optimal financing and portfolio order policies in a supply chain with option contracts. To this end, we develop two analytical models: a basic model with wholesale price contracts as the benchmark and a model with option contracts. Each model considers both the financing scenario and the no-financing scenario. Our analyses show that the retailer uses wholesale price contracts for procurement, instead of option contracts, when its budget is extremely tight. The retailer starts to use a combination of these two types of contracts when the budget constraint is relieved. As the budget increases, the retailer adjusts the procurement ratio through both types until it can implement the optimal ordering policy with an adequate budget. In addition, the condition for seeking external financing is determined by the retailer's initial budget, financing cost, and profit margin.
Citation: Benyong Hu, Xu Chen, Felix T. S. Chan, Chao Meng. Portfolio procurement policies for budget-constrained supply chains with option contracts and external financing. Journal of Industrial & Management Optimization, 2018, 14 (3) : 1105-1122. doi: 10.3934/jimo.2018001
[Figures: the structure of the optimal order policies; the effects of option contracts without financing; the effects of option contracts with financing; the supplier's possible production quantity function]
Nomenclature
| Notation | Description |
|---|---|
| $D$ | Random variable for market demand, with $D\geq0$ |
| $f(x)$ | Probability density function of market demand |
| $F(x)$ | Cumulative distribution function of market demand; continuous, strictly increasing and invertible, with $F(0)=0$ |
| $F^{-1}(x)$ | Inverse function of $F(x)$ |
| $p$ | Product retail price ($/unit) |
| $c$ | Product manufacturing cost ($/unit) |
| $s$ | Product salvage value ($/unit) |
| $g$ | Retailer's shortage penalty ($/unit) |
| $w$ | Product wholesale price under wholesale price contracts ($/unit) |
| $w_1$ | Product wholesale price under option contracts ($/unit) |
| $b$ | Product option price ($/unit) |
| $w_2$ | Option exercise price ($/unit) |
| $q$ | Retailer's order quantity in the basic model |
| $q^1$ | Retailer's firm order quantity in the model with option contracts |
| $q^2$ | Retailer's option order quantity in the model with option contracts |
| $q^1+q^2$ | Retailer's portfolio order quantity in the model with option contracts, denoted $q^1+q^2=q$ |
| $Y$ | Retailer's initial budget |
| $H$ | Retailer's financing amount |
| $\lambda_i$ | Generalized Lagrange multiplier, $i=1, 2, 3$ |
| $x^+$ | $x^+=\max(0, x)$ |
| $u$ | Mean of market demand, $u=E(D)$ |
| $E(x)$ | Expected value of variable $x$ |
| $\min(x, y)$ | Minimum of $x$ and $y$ |
|
{}
|
## Covariant and Contravariant Tensors
Hey everyone, I am reading a Schaum's Outline on Tensor Calculus and came to something I can't seem to understand. I'm admittedly young to be reading this but so far I've understood everything except this. My question is: what is the difference between a contravariant tensor and a covariant tensor, and what do these terms mean conceptually? I've gone online and searched a variety of articles with no luck. I appreciate it, thanks in advance.
Blog Entries: 1 Recognitions: Gold Member Homework Help First, there are covariant and contravariant vectors. A multilinear function acting on covariant vectors is a contravariant tensor. A multilinear function acting on contravariant vectors is a covariant tensor. A multilinear function which acts on both is a mixed tensor.
Adding to what dx has said, a particular system/structure of tensors is defined with respect to a particular vector space. Vectors of this vector space are called contravariant vectors. Vectors of its dual space (i.e. scalar-valued linear functions of one vector each) are called covariant vectors (or covectors, dual vectors, linear functionals, etc.). As such, covariant vectors are the simplest kind of covariant tensor. Similarly, contravariant vectors can be thought of as scalar-valued linear functions of one covariant vector each, with the following definition: If w is a covariant vector, and v a contravariant vector, then v(w) is defined as w(v). Thus contravariant vectors (often called simply "vectors") are the simplest kind of contravariant tensor.
Ok I think I'm starting to understand this, thanks guys. So what would be some typical examples of contravariant and covariant vectors?
Recognitions: Gold Member Science Advisor The gradient would be a pretty good example of a covariant vector (although if you know the formalism of differential forms you could just use the more modern definition of a covariant vector as a one - form which actually makes more sense intuitively) and there are countless examples of contravariant vectors (or simply vectors) but in keeping with the previous example, one example of a contravariant vector is the directional derivative along a curve. Evaluated at some point p, this type of vector can form a basis for the tangent vector space that Rasalhague talked about. The gradient then, at that point, can form a basis for the cotangent space (this is all to some manifold).
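In index notation (a standard convention; primes denote components in the new coordinate system, with summation over the repeated index $j$), the two transformation laws being described here are:

```latex
v'^{\,i} = \frac{\partial x'^{\,i}}{\partial x^{j}}\, v^{j}
\quad \text{(contravariant)},
\qquad
w'_{i} = \frac{\partial x^{j}}{\partial x'^{\,i}}\, w_{j}
\quad \text{(covariant)}.
```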
Ok I can visualize that without trouble, thanks WannabeNewton :) Yeah as far as I can see this book doesn't cover the modern definition of the one-form, though I do believe I've heard of that, maybe that'd clear things up a bit too..
Recognitions: Science Advisor I never know which is which, but you can think of contravariant and covariant vectors as row and column vectors. By matrix multiplication, a row vector can be thought of as a function that takes column vectors as input and produces a number as an output.
Recognitions: Gold Member, Science Advisor
Quote by benk99nenm312 Ok I can visualize that without trouble, thanks WannabeNewton :) Yeah as far as I can see this book doesn't cover the modern definition of the one-form, though I do believe I've heard of that, maybe that'd clear things up a bit too..
Actually it does, with detail at the level of a typical Schaum's Outline book, so it's a good introduction to it, however brief; go to the very last chapter, titled Tensor Fields on Manifolds. It pretty much talks about everything I just said and then some (actually a lot more than some =D).
Quote by atyy I never know which is which, but you can think of contravariant and covariant vectors as row and column vectors. By matrix multiplication, a row vector can be thought of as a function that takes column vectors as input and produces a number as an output.
In this book it shows the contravariant with upper indices and the covariant with lower indices, so that helps too thanks.
Quote by WannabeNewton Actually it does with detail at the level of a typical Schaum's Outline book so its a good introduction to it however brief, go to the very last chapter titled Tensor Fields on Manifolds. It pretty much talks about everything I just said and then some (actually a lot more than some =D).
Oh awesome! Lol I'm only in chapter 3 :P
Mentor This post might be useful.
The terminology does seem deceptive. The principle of covariance is that 2 different views or perspectives of the same phenomenon should be symmetric or inversely related to one another. Ideally the term covariant tensor would have been used for the pair of tensors which are together covariant concerning a particular phenomenon.
Thanks Fredrik, yeah so many terms to understand.. that's great to have them all lined up in a single post :) I see, so if covariance is symmetrical or inversely symmetrical, what would contravariance be?
Recognitions: Science Advisor
Quote by benk99nenm312 I see, so if covariance is symmetrical or inversely symmetrical, what would contravariance be?
That's a very tangentially related use of the term. Just use the definitions in the book you are reading.
Or if we are allowed to change definitions mid-sentence, both covariant and contravariant vectors transform covariantly:P
OK, to be more serious, let's imagine we have a 2D surface covered by coordinates (x,y). Imagine that each point has a different temperature f(x,y). A vehicle, carrying a clock which reads time t, moves across the surface making a curve (x(t),y(t)). The variation in temperature versus time that the vehicle experiences is df/dt, which is just one-dimensional calculus. At a point p, df/dt=df/dx.dx/dt+df/dy.dy/dt, a scalar which we can rewrite as a row vector (df/dx,df/dy) multiplied by a column vector (dx/dt,dy/dt), with all derivatives evaluated at p. The column vector or contravariant vector is something like the velocity, and the row vector or covariant vector is something like the gradient of the temperature.
We don't conceive of the velocity at that point as belonging to only one curve, since many curves can have the same velocity at that point. We also conceive of the velocity of any particular curve being the same under a change of coordinates from (x,y) to (U(x,y),V(x,y)). The same temperature variation is now described by f(U,V), and the same path is now described by (U(t),V(t)). So the new column vector representing the same velocity will be (dU/dt,dV/dt)=(dU/dx.dx/dt+dU/dy.dy/dt,dV/dx.dx/dt+dV/dy.dy/dt), which is how the coordinate representation of a contravariant vector transforms. Similarly, the new row vector representing the same gradient will be (df/dU,df/dV)=(df/dx.dx/dU+df/dy.dy/dU,df/dx.dx/dV+df/dy.dy/dV), which is how the coordinate representation of a covariant vector transforms. df/dt=df/dU.dU/dt+df/dV.dV/dt remains unchanged.
Something to keep in mind for later, when the metric is introduced: at this stage, we can multiply row and column vectors, but we haven't defined what it means to "multiply" column vectors, which is a job the metric can do.
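The temperature example can be checked numerically. Below is a small Python sketch; the field f(x,y) = x² + y, the curve (x,y) = (t,t²), and the coordinate change U = 2x, V = x + y are made-up choices for illustration:

```python
t = 3.0
x, y = t, t**2

# Contravariant (column) components: the velocity of the curve.
dxdt, dydt = 1.0, 2 * t
# Covariant (row) components: the gradient of f(x, y) = x**2 + y.
dfdx, dfdy = 2 * x, 1.0
df_dt_old = dfdx * dxdt + dfdy * dydt          # row times column

# Change coordinates to U = 2x, V = x + y (inverse: x = U/2, y = V - U/2).
dUdt, dVdt = 2 * dxdt, dxdt + dydt             # velocity transforms with dU/dx etc.
dfdU = dfdx * 0.5 + dfdy * (-0.5)              # gradient transforms with dx/dU etc.
dfdV = dfdx * 0.0 + dfdy * 1.0
df_dt_new = dfdU * dUdt + dfdV * dVdt

print(df_dt_old, df_dt_new)  # 12.0 12.0 -- the scalar df/dt is unchanged
```

The velocity components mix with dU/dx, dV/dx (the forward Jacobian) while the gradient components mix with dx/dU, dy/dU (the inverse Jacobian), yet their pairing gives the same scalar in both coordinate systems.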
Quote by atyy [...] The column vector or contravariant vector is something like the velocity, and the row vector or covariant vector is something like the gradient of the temperature. [...]
This makes incredible sense! Thanks so much, that really cleared things up :)
Quote by WannabeNewton The gradient would be a pretty good example of a covariant vector (although if you know the formalism of differential forms you could just use the more modern definition of a covariant vector as a one - form which actually makes more sense intuitively) and there are countless examples of contravariant vectors (or simply vectors) but in keeping with the previous example, one example of a contravariant vector is the directional derivative along a curve. Evaluated at some point p, this type of vector can form a basis for the tangent vector space that Rasalhague talked about. The gradient then, at that point, can form a basis for the cotangent space (this is all to some manifold).
1) I understand the difference between covariant and contravariant coordinate systems, but I'm not sure why velocity is considered contravariant whereas grad is considered "co"variant. What does that have to do with how we lay out the coordinate system?
2) Why do we always write a covariant basis for contravariant components?
Mentor
Quote by Gadhav 1) I understand the difference between covariant and contravariant coordinate systems
A coordinate system is just a function that assigns n-tuples of real numbers ("coordinates") to points in the manifold. They aren't covariant or contravariant.
Quote by Gadhav but I'm not sure why velocity is considered contravariant whereas grad is considered "co"variant. What does that have to do with how we lay out the coordinate system?
It doesn't matter which coordinate system is used. Velocity is defined as a tangent vector to a curve. The gradient of ##\phi## has components ##\partial_i\phi##. Those components transform covariantly (the same way as the basis vectors) because the partial derivative operators ##\partial_i## are the basis vectors for the tangent space at the relevant point. To understand this better, I recommend that you read the post I linked to above, and the three posts I linked to in that one, in particular the first one.
Quote by Gadhav 2) Why do we always write a covariant basis for contravariant components?
The convention to write cotangent vectors as ##\omega_i e^i## and tangent vectors as ##v^i e_i## is just that, a convention.
|
{}
|
# functional analysis – For a compact operator $$T \in K(X, H)$$, with $$H$$ a Hilbert space, $$\overline{T(X)}$$ is separable
Let $$X$$ be a normed space, $$H$$ a Hilbert space, and let $$T \in K(X, H)$$, the set of compact operators. Show that $$\overline{T(X)}$$ is separable. I tried to use the fact that the finite-rank operators are dense in the compact operators, i.e. $$\overline{F(X,H)} = K(X,H)$$ when $$H$$ is a Hilbert space, to show that $$T(X)$$ contains a countable dense subset (since we need to show that $$\overline{T(X)}$$ is separable, that is, that it contains a countable dense subset)
|
{}
|
# Parsing hexadecimal numbers to binary and iterating over bits
I'm currently working on software for a graphic display, so I recreated it in TikZ for documentation purposes:
I currently use this to set individual pixels on the display:
\setpixel{x}{y}
Where x and y are coordinates between 0 and 132 (for x) or 0 and 64 (for y).
The real display is separated into 8 pages, each 8 pixels high, and a byte sent to the display is displayed as a column. See this image for details.
Since I don't want to always calculate the individual pixels, I'd like the LaTeX variant to behave like the real display, that is, I want some command like
\dispbyte{0x01}
\dispbyte{0x03}
\dispbyte{0x07}
\dispbyte{0x0F}
\dispbyte{0x1F}
\dispbyte{0x3F}
\dispbyte{0x7F}
\dispbyte{0xFF}
(this would create one of the triangles above) I have an idea how I'd implement counting of the current column and page switching and all that -- shouldn't be hard with some counters after all -- but I just can't find out how to parse the hexadecimal values and then iterate over every bit inside them.
I've found fmtcount and binhex.tex, but they only display a LaTeX counter in another format, and I can't understand their code at all.
The finished source code (display.sty) and some examples are now available at http://cmpl.cc/downloads/disp/
-
That sounds fascinating. Do you intent to publish the source to the pixel -> TikZ conversion? – Alexander Apr 25 '13 at 21:13
@Alexander I will release the complete source code in a few weeks, when the project I'm writing the documentation for is finished. – The Compiler Apr 26 '13 at 7:37
It is not absolutely clear to me what you want, hence I write this as a comment: Take a look at the bitset package by Heiko Oberdiek. It provides the abstraction of bitsets of various lenght, which can be initialized from a hex value by \bitsetSetHex. Individual bits can be tested (\bitsetGet) or all set lists transformed into a comma-separated list (\bitsetGetSetBitList) to be then iterated with, e.g., \foreach. – Daniel Apr 26 '13 at 10:07
Note also this related question. – Daniel Apr 26 '13 at 10:09
@TheCompiler: For \foreach you just have to expand it first, e.g., with \edef. I have now posted this as an answer. – Daniel Apr 26 '13 at 14:53
The following is less impressive than the other answers from the visual point of view (Mark, I like yours!), but addresses the actual question of the OP: Bit-wise iteration over hexadecimal values, which becomes fairly easy when using the bitset package by Heiko Oberdiek:
\documentclass{article}
\usepackage{bitset}
\usepackage{pgf,pgffor}
\begin{document}
\bitsetSetHex{mybitset}{AA}
% use \bitsetGetSetBitList
% expand first
\edef\mybits{\bitsetGetSetBitList{mybitset}}
\noindent
\foreach \bit in \mybits {%
Bit \bit{} is set! \\
}
% just itereate all bits
\noindent
\foreach \i in {0,...,7} {%
Bit \i: \bitsetGet{mybitset}{\i} \\
}
\end{document}
-
That was by far the most easy to use way. Took me like 5 minutes to implement what I wanted based on that. Thanks! – The Compiler Apr 27 '13 at 1:24
Not sure this is robust; it only works within the limited numerical range of PGFMath, and clearly I've gone for something a bit more over-the-top than the requirements.
EDIT: following Daniels example of the bitset package the code has been updated to be a bit more like that.
\documentclass[border=5pt]{standalone}
\usepackage{tikz}
\newcount\bitcount
\tikzset{
zeros/.style={
draw=black,
insert path={
(-\nbit-1/2, -1/2) rectangle ++(1,1)
}
},
ones/.style={
draw=black,
fill=gray,
insert path={
(-\nbit-1/2, -1/2) rectangle ++(1,1)
}
},
max bits/.store in=\maxbits,
max bits=0
}
\newcommand\dispbyte[2][]{%
\begingroup%
\tikzset{#1}%
\pgfmathsetcount\bitcount{#2}%
\pgfmathparse{int(\maxbits)}\let\maxbits=\pgfmathresult%
\pgfmathloop%
\ifnum\bitcount>0\relax%
\ifodd\bitcount%
\expandafter\def\csname bit\pgfmathcounter\endcsname{1}%
\else%
\expandafter\let\csname
bit\pgfmathcounter\endcsname=\relax%
\fi%
\divide\bitcount by2\relax%
\repeatpgfmathloop%
\pgfmathparse{int(\maxbits>\pgfmathcounter?\maxbits+1:\pgfmathcounter+1)}%
\let\nbits=\pgfmathresult%
\pgfmathloop%
\ifnum\pgfmathcounter=\nbits\relax%
\else%
\let\nbit=\pgfmathcounter%
\expandafter\ifx\csname bit\pgfmathcounter\endcsname\relax%
\path [zeros];
\else%
\path [ones];
\fi%
\repeatpgfmathloop%
\endgroup%
}
\begin{document}
\begin{tikzpicture}[x=10pt, y=10pt]
\foreach \d [count=\c from 0, evaluate={\x=floor(\c/8)*8; \y=-mod(\c,8);}]
in
{%
0x7f,0x49,0x08,0x08,0x08,0x08,0x08,0x1c,
0x00,0x00,0x08,0x00,0x18,0x08,0x08,0x1c,
0x08,0x08,0x10,0x12,0x14,0x28,0x24,0x22,
0x7f,0x02,0x04,0x08,0x08,0x10,0x20,0x7f}{
\dispbyte[max bits=8,shift={(\x,\y)}]{\d}
}
\foreach \d [count=\c from 0, evaluate={\x=floor(\c/8)*8; \y=-mod(\c,8);}]
in
{%
0x7f,0x49,0x08,0x08,0x08,0x08,0x08,0x1c,
0x00,0x00,0x08,0x00,0x18,0x08,0x08,0x1c,
0x08,0x08,0x10,0x12,0x14,0x28,0x24,0x22,
0x7f,0x02,0x04,0x08,0x08,0x10,0x20,0x7f}{
\dispbyte[zeros/.style={},
ones/.style={
fill=gray,
insert path={
(-\nbit-3/8, -3/8) rectangle ++(0.75,0.75)
}
},shift={(\x,\y-9)}]{\d}
}
\foreach \d [count=\c from 0, evaluate={\x=floor(\c/8)*8; \y=-mod(\c,8);}]
in
{%
0x7f,0x49,0x08,0x08,0x08,0x08,0x08,0x1c,
0x00,0x00,0x08,0x00,0x18,0x08,0x08,0x1c,
0x08,0x08,0x10,0x12,0x14,0x28,0x24,0x22,
0x7f,0x02,0x04,0x08,0x08,0x10,0x20,0x7f}{
\dispbyte[max bits=8,
zeros/.style={
fill=black,
insert path={
}
},
ones/.style={
fill=orange,
insert path={
}
},shift={(\x,\y-18)}]{\d}
}
\end{tikzpicture}
\end{document}
-
You can use LaTeX3, which has a function for converting integers to binary. Instead of \fbox use your preferred macro.
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
\NewDocumentCommand{\dispbyte}{m}
{
\compiler_dispbyte:n { #1 }
}
\tl_new:N \l_compiler_bits_tl
\tl_new:N \l_compiler_byte_tl
\cs_new_protected:Npn \compiler_dispbyte:n #1
{
% we need to remove the 0x
\tl_set:Nn \l_compiler_byte_tl { #1 }
\tl_remove_once:Nn \l_compiler_byte_tl { 0x }
% convert the number to a string of bits
\tl_set:Nx \l_compiler_bits_tl
{ \int_to_binary:n { "\l_compiler_byte_tl } }
% loop through the list of bits
\tl_map_inline:Nn \l_compiler_bits_tl
{ \fbox{ ##1 } }
}
\ExplSyntaxOff
\begin{document}
\dispbyte{0x01}
\dispbyte{0x03}
\dispbyte{0x07}
\dispbyte{0x0F}
\dispbyte{0x11}
\dispbyte{0x13}
\dispbyte{0x17}
\dispbyte{0x1F}
\end{document}
If you need the bits in reverse order, just add
\tl_set:Nx \l_compiler_bits_tl
{ \tl_reverse:V \l_compiler_bits_tl }
before the \tl_map_inline:Nn line.
-
I didn't get exactly what you want to do but pgfmath already understands Hex syntax.
\documentclass{article}
\usepackage{tikz}
\begin{document}
\pgfmathparse{bin(0x1F)}
\pgfmathresult
\pgfmathparse{bin(0X1F)}
\pgfmathresult
\end{document}
-
This loops over the binary digits from the hex, for testing it just prints them out (in reverse order to keep the code simple)
\documentclass{article}
\makeatletter
\def\dispbyte#1{\@displaybyte#1\relax}
\def\@displaybyte#1x#2\relax{\count@\string"#2\relax
\loop
\fbox{\ifodd\count@ 1\else0\fi}%
\divide\count@\tw@
\ifnum\count@>\z@
\repeat}
\makeatother
\begin{document}
\dispbyte{0x17}
\end{document}
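For comparison, the same low-bit-first loop expressed in Python (just a sketch of the logic, not part of the TeX answer):

```python
def bits_lsb_first(byte):
    # Emit the low bit, halve, repeat while nonzero -- mirroring the
    # \ifodd / \divide\count@\tw@ loop in the TeX code above.
    out = []
    n = byte
    while True:
        out.append(n & 1)
        n //= 2
        if n <= 0:
            break
    return out

print(bits_lsb_first(0x17))  # [1, 1, 1, 0, 1]
```

Like the TeX version, this emits the bits in reverse (least-significant-first) order to keep the loop simple.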
-
It can't:-) Do you have a non-standard format making " active even if babel is not loaded? – David Carlisle Apr 25 '13 at 21:01
add \tracingall before \dispbyte and post the log (or email it me: google my name for my gmail account) – David Carlisle Apr 25 '13 at 21:03
I didn't try it yet, but two things come to mind by looking at it: 1) how would I use that with babel? Wouldn't " be active as well then? 2) shouldn't the second \makeatletter be a \makeatother? – The Compiler Apr 26 '13 at 5:42
@TheCompiler you can use \string" to be safe if " is active and yes the second make... is a typo (although it doesn't really matter, I'll fix both:-) – David Carlisle Apr 26 '13 at 6:45
|
{}
|
## 18.11.12
### Redefine memoir Part to allow image in memoir class
First we define a command to set the image name for each part. This command takes the same arguments as the \includegraphics command from the graphicx package. Then we add the image to the memoir class \printparttitle command. You can play with the \vfil commands to change the vertical spacing as you require it.
\documentclass{memoir}
\usepackage[demo]{graphicx}
\makeatletter
% define a user command to choose the image
% this command also creates an internal command to insert the image
\newcommand{\partimage}[2][]{\gdef\@partimage{\includegraphics[#1]{#2}}}
\renewcommand{\printparttitle}[1]{\parttitlefont #1\vfil\@partimage\vfil}
\makeatother
\begin{document}
\partimage{foo.png}
\part{A part}
\partimage[width=\textwidth]{bar.png}
\part{Another part}
\end{document}
|
{}
|
CGAL 4.7 - 2D Triangulation
TriangulationTraits_2 Concept Reference
## Definition
The concept TriangulationTraits_2 describes the set of requirements to be fulfilled by any class used to instantiate the first template parameter of the class CGAL::Triangulation_2<Traits,Tds>. This concept provides the types of the geometric primitives used in the triangulation and some function object types for the required predicates on those primitives.
Has Models:
All the CGAL Kernels
CGAL::Projection_traits_xz_3<K>
CGAL::Triangulation_2<Traits,Tds>
## Types
typedef unspecified_type Point_2
The point type.
typedef unspecified_type Segment_2
The segment type.
typedef unspecified_type Triangle_2
The triangle type.
typedef unspecified_type Construct_segment_2
A function object to construct a Segment_2. More...
typedef unspecified_type Construct_triangle_2
A function object to construct a Triangle_2. More...
typedef unspecified_type Less_x_2
A function object to compare the x-coordinate of two points. More...
typedef unspecified_type Less_y_2
A function object to compare the y-coordinate of two points. More...
typedef unspecified_type Compare_x_2
A function object to compare the x-coordinate of two points. More...
typedef unspecified_type Compare_y_2
A function object to compare the y-coordinate of two points. More...
typedef unspecified_type Orientation_2
A function object to compute the orientation of three points. More...
typedef unspecified_type Side_of_oriented_circle_2
A function object to perform the incircle test for four points. More...
typedef unspecified_type Construct_circumcenter_2
A function object to compute the circumcenter of three points. More...
## Creation
Only a default constructor, copy constructor and an assignment operator are required.
Note that further constructors can be provided.
TriangulationTraits_2 ()
default constructor.
TriangulationTraits_2 (TriangulationTraits_2 gtr)
Copy constructor.
TriangulationTraits_2 operator= (TriangulationTraits_2 gtr)
Assignment operator.
## Predicate Functions
Construct_segment_2 construct_segment_2_object ()
Construct_triangle_2 construct_triangle_2_object ()
Compare_x_2 compare_x_2_object ()
Compare_y_2 compare_y_2_object ()
Orientation_2 orientation_2_object ()
Side_of_oriented_circle_2 side_of_oriented_circle_2_object ()
Required only if side_of_oriented_circle is called.
Construct_circumcenter_2 construct_circumcenter_2_object ()
Required only if circumcenter is called.
## Member Typedef Documentation
A function object to compare the x-coordinate of two points.
Provides the operator:
Comparison_result operator()(Point p, Point q)
which returns SMALLER, EQUAL or LARGER according to the $$x$$-ordering of points p and q.
A function object to compare the y-coordinate of two points.
Provides the operator:
Comparison_result operator()(Point p, Point q)
which returns (SMALLER, EQUAL or LARGER) according to the $$y$$-ordering of points p and q.
A function object to compute the circumcenter of three points.
Provides the operator:
Point operator()(Point p, Point q, Point r)
which returns the circumcenter of the three points p, q and r. This type is required only if the function Point circumcenter(Face_handle f) is called.
A function object to construct a Segment_2.
Provides:
Segment_2 operator()(Point_2 p,Point_2 q),
which constructs a segment from two points.
A function object to construct a Triangle_2.
Provides:
Triangle_2 operator()(Point_2 p,Point_2 q,Point_2 r ),
which constructs a triangle from three points.
A function object to compare the x-coordinate of two points.
Provides the operator:
bool operator()(Point p, Point q)
which returns true if p is before q according to the $$x$$-ordering of points.
A function object to compare the y-coordinate of two points.
Provides the operator:
bool operator()(Point p, Point q)
which returns true if p is before q according to the $$y$$-ordering of points.
A function object to compute the orientation of three points.
Provides the operator:
Orientation operator()(Point p, Point q, Point r)
which returns LEFT_TURN, RIGHT_TURN or COLLINEAR depending on $$r$$ being, with respect to the oriented line pq, on the left side, on the right side, or on the line.
A function object to perform the incircle test for four points.
Provides the operator:
Oriented_side operator()(Point p, Point q, Point r, Point s) which takes four points $$p, q, r, s$$ as arguments and returns ON_POSITIVE_SIDE, ON_NEGATIVE_SIDE, or ON_ORIENTED_BOUNDARY according to the position of point s with respect to the oriented circle through $$p,q$$ and $$r$$. This type is required only if the function side_of_oriented_circle(Face_handle f, Point p) is called.
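To illustrate what this predicate computes (this is just the underlying determinant test written out in Python as a sketch, not CGAL's implementation, which typically uses exact arithmetic):

```python
def side_of_oriented_circle(p, q, r, s):
    # Sign of the classic 3x3 incircle determinant: for p, q, r in
    # counterclockwise order, positive means s lies inside the circle
    # through p, q, r; negative means outside; zero means on it.
    rows = [(a[0] - s[0], a[1] - s[1],
             (a[0] - s[0])**2 + (a[1] - s[1])**2) for a in (p, q, r)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    det = (ax * (by * cw - bw * cy)
           - ay * (bx * cw - bw * cx)
           + aw * (bx * cy - by * cx))
    return (det > 0) - (det < 0)

print(side_of_oriented_circle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # 1 (inside)
```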
|
{}
|
# Increased evolutionary rate in the 2014 West African Ebola outbreak is due to transient polymorphism and not positive selection
### Abstract
Gire et al. (Science 345:1369–1372, 2014) analyzed 81 complete genomes sampled from the 2014 Zaire ebolavirus (EBOV) outbreak and reported “rapid accumulation of [. . . ] genetic variation” and a substitution rate that was “roughly twice as high within the 2014 outbreak as between outbreaks.” These findings have received widespread attention, and many have perceived Gire et al. (2014)’s results as implying rapid adaptation of EBOV to humans during the current outbreak. Here, we argue that, on the contrary, sequence divergence in EBOV is rather limited, and that the currently available data contain no robust signal of particularly rapid evolution or adaptation to humans. The doubled substitution rate can be attributed entirely to the application of a molecular-clock model to a population of sequences with minimal divergence and segregating polymorphisms. Our results highlight how subtle technical aspects of sophisticated evolutionary analysis methods may result in highly-publicized, misconstrued statements about an ongoing public health crisis.
Type
Publication
bioRxiv doi: 10.1101/011429
|
{}
|
# What is the limiting distribution of $Y_n = \sqrt{n}(\bar{X}_n-1)$ as $n \to \infty$?
Let $$X_1,\cdots,X_n$$ be independently and identically distributed with pdf $$f(x)=e^{-x}, 0 < x < \infty$$. Let $$Y_n = \sqrt{n}(\bar{X}_n-1)$$.
What is the limiting distribution of $$Y_n$$ as $$n \to \infty$$?
My work:
I decided to try an mgf approach. Clearly, $$X_1,\cdots,X_n \sim Exp(1)$$, so $$M_{X_i}(t)=\frac{1}{1-t}, t < 1$$. After a bit of work, I found that
$$M_{Y_n}(t)=[e^{t/\sqrt{n}}(1-\frac{t}{\sqrt{n}})]^{-n}, t < \sqrt{n}$$. This does not appear to resemble a known distribution's mgf. Should I change my approach?
Updated:
I think I'm supposed to solve the problem using an mgf method. Now I am getting this:
$$M_{Y_n}(t)=[[1 + \frac{t}{\sqrt{n}} + (\frac{t}{\sqrt{n}})^2\cdot1/2! + \cdots]-[\frac{t}{\sqrt{n}} +(\frac{t}{\sqrt{n}})^2+(\frac{t}{\sqrt{n}})^3\cdot 1/2!+\cdots]]^{-n}$$, which resembles some work that we have done in class, but I am not too sure how I can evaluate this.
Due to Glen_b's comment, I attempted using CLT. Here is my updated "updated work."
Since $$X_i \sim Exp(1), i=1,\cdots,n$$, then $$E(X_i)=1, Var(X_i)=1$$. So,
$$\sqrt{n}(\bar{X}_n-1) \to N(0,1)$$ in distribution by Central Limit Theorem, which matches the answers provided below.
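As a quick numerical sanity check of the mgf limit (a sketch added for illustration, not part of the original derivation):

```python
import math

def mgf_Yn(t, n):
    # M_{Y_n}(t) = [e^{t/sqrt(n)} (1 - t/sqrt(n))]^(-n), valid for t < sqrt(n)
    u = t / math.sqrt(n)
    return (math.exp(u) * (1 - u)) ** (-n)

# As n grows, this approaches exp(t^2 / 2), the standard normal mgf.
for n in (10**2, 10**4, 10**8):
    print(n, mgf_Yn(0.7, n))
print("limit:", math.exp(0.7**2 / 2))
```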
• Hint: $\bar{X}_n$ has a Gamma distribution Nov 23, 2019 at 21:46
• If you're using the mgf approach (NB I have not checked your mgf is correct), what happens to the function in the limit as $n\to \infty$? Nov 23, 2019 at 22:16
• Are you allowed to invoke the Central Limit Theorem? (Even if not, it tells you immediately what the limiting distribution is, which can guide your demonstration.) – whuber Nov 23, 2019 at 22:24
• @edison (i) regarding to your discussion with whuber -- what does the CLT say? (ii) On the other hand if (as I had previously assumed) you can't invoke the CLT, you have already expanded $M$ (though as I say I am not checking your work), what you'd need next is to use facts about limits. Nov 24, 2019 at 2:50
• No, it's incredibly easy to deal with. What's $\mu_Y$? What's $\sigma_Y$? Write down a standardized $\bar{Y}$ ... Nov 24, 2019 at 6:19
In your updated version, you will get $$\left[1 - \frac{t^2/2}{n} +o(\frac{1}{n})\right]^{-n} = \left[1 + \frac{t^2/2}{n} +o(\frac{1}{n})\right]^{n}$$
and the limit of that is $$e^{t^2/2}$$, which is the moment generating function of a standard normal distribution with mean $$0$$ and variance $$1$$, much as you might expect from the central limit theorem
• What is $o(\frac{1}{n})$? Sorry, I am not familiar with this notation. Additionally, how do you get that equality? Nov 24, 2019 at 1:49
• @Edison This is Little o notation and here means that the remainder, when divided by $\frac1n$, tends to $0$ as $n$ increases, so it is eventually very much smaller in magnitude than $\frac1n$ or $\frac{t^2/2}{n}$. The equality is similar to saying $\frac{1}{1-x} = 1+x +\cdots$ but here we have $\frac{t^2/2}{n}$ rather than $x$ Nov 24, 2019 at 2:03
• Thank you for linking the Wikipedia article- that was very helpful. I get that $\lim_{n\to\infty} \left(1+\frac{x}{n}\right)^n = e^x$, but how come we can neglect $o(\frac{1}{n})$? Also, how did you know to keep the $-\frac{t^2/n}{n}$ term out of $o(\frac{1}{n})$? Nov 24, 2019 at 3:17
|
{}
|
# How do you solve sin(x)+cos(x)=-1?
Mar 8, 2018
$x = 2 n \pi + \pi$ or $x = 2 n \pi - \frac{\pi}{2}$
#### Explanation:
Given: $\sin x + \cos x = - 1$
Square both sides
${\left(\sin x + \cos x\right)}^{2} = {\left(- 1\right)}^{2}$
${\sin}^{2} x + {\cos}^{2} x + 2 \sin x \cos x = 1$
$1 + 2 \sin x \cos x = 1$
$2 \sin x \cos x = 0$
Either $\sin x = 0 \text{ or } \cos x = 0$
If $\sin x = 0$
$\cos x = - 1$
and $x = 2 n \pi + \pi$
If $\cos x = 0$
$\sin x = - 1$
and $x = 2 n \pi - \frac{\pi}{2}$
Mar 8, 2018
$x = \pi + 2\pi n, n \in \mathbb{Z}$
$x = \frac{3\pi}{2} + 2\pi n, n \in \mathbb{Z}$
#### Explanation:
1. Square both sides
${\left(\sin \left(x\right) + \cos \left(x\right)\right)}^{2} = {\left(- 1\right)}^{2}$
$\sin {\left(x\right)}^{2} + {\cos}^{2} x + 2 \sin x \cos x = 1$
$1 + 2 \sin x \cos x = 1$
$2 \sin x \cos x = 0$
$2 \sin x \left(\cos x\right) = 0$
$\sin x = 0$
$\cos x = 0$
$x = \pi + 2\pi n, n \in \mathbb{Z}$
$x = \frac{3\pi}{2} + 2\pi n, n \in \mathbb{Z}$
$x = \frac{\pi}{2} + 2\pi n, n \in \mathbb{Z}$ EXTRANEOUS
$x = 2\pi n, n \in \mathbb{Z}$ EXTRANEOUS
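Squaring both sides also admits every solution of $\sin x + \cos x = +1$, which is why two of the four candidate families must be discarded. Substituting a representative of each family back into the original equation makes this explicit:

```latex
\begin{aligned}
x=\pi:             &\quad \sin\pi + \cos\pi = 0 + (-1) = -1 && \text{(valid)}\\
x=\tfrac{3\pi}{2}: &\quad \sin\tfrac{3\pi}{2} + \cos\tfrac{3\pi}{2} = -1 + 0 = -1 && \text{(valid)}\\
x=\tfrac{\pi}{2}:  &\quad \sin\tfrac{\pi}{2} + \cos\tfrac{\pi}{2} = 1 + 0 = +1 && \text{(extraneous)}\\
x=0:               &\quad \sin 0 + \cos 0 = 0 + 1 = +1 && \text{(extraneous)}
\end{aligned}
```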
Mar 8, 2018
$x = \frac{3}{2} \pi + 2 \pi n \text{ or } \pi + 2 \pi n$
#### Explanation:
$\sin \left(x\right) + \cos \left(x\right) = - 1$
$\implies \sin \left(x\right) = - 1 - \cos \left(x\right)$ equation-1
We have an identity
${\sin}^{2} \left(x\right) + {\cos}^{2} \left(x\right) = 1$
Use this to find the value of $\sin \left(x\right)$
$\implies {\sin}^{2} \left(x\right) = 1 - {\cos}^{2} \left(x\right)$
$\implies \sin \left(x\right) = \pm \sqrt{1 - {\cos}^{2} \left(x\right)}$
We got two values for $\sin \left(x\right)$
$+ \sqrt{1 - {\cos}^{2} \left(x\right)} \text{ and } - \sqrt{1 - {\cos}^{2} \left(x\right)}$
Put them one by one in equation-1.
$\implies + \sqrt{1 - {\cos}^{2} \left(x\right)} = - 1 - \cos \left(x\right)$
Squaring both sides
$\implies 1 - {\cos}^{2} \left(x\right) = 1 + {\cos}^{2} \left(x\right) + 2 \cos \left(x\right)$
$\implies 0 = 2 {\cos}^{2} \left(x\right) + 2 \cos \left(x\right)$
Divide by two both sides
$\implies 0 = {\cos}^{2} \left(x\right) + \cos \left(x\right)$
$\implies 0 = \cos \left(x\right) \left(\cos \left(x\right) + 1\right)$
It gives $\cos \left(x\right) = 0$
We get $\sin \left(x\right) = - 1$
The solution for this is
$x = \frac{3}{2} \pi + 2 \pi n$
Here, $\pi = {180}^{\circ}$ and $n$ is any integer.
Now , we also get $\cos \left(x\right) + 1 = 0$
$\implies \cos \left(x\right) = - 1$
It gives $\sin \left(x\right) = 0$ according to the given equation.
The solution for this is
$x = \pi + 2 \pi n$
If you put $\sin \left(x\right) = - \sqrt{1 - {\cos}^{2} \left(x\right)}$ you get the same results.
Hope it helps :)
|
{}
|
Full disclaimer: Much of this content was pillaged from the wonderful text R for Data Science (https://r4ds.had.co.nz/), specifically chapter 19. Also check out the functions chapter from Advanced R (http://adv-r.had.co.nz/).
## Setup
Let’s start fresh with a new project in RStudio.
1. Open RStudio and click “New project…” under the File menu
2. Name that project something simple like “functions_practice”
3. Once the project is created, open a new script from the file menu (or Ctrl/Cmd + Shift + N)
4. At the top of the script, write some informative stuff so you know what it’s for:
# Learning about functions in R
# Created March 10, 2020
# FWRI R club
library(tidyverse)
5. Save the script with an informative name, e.g., 01_functions.R
Use this project to follow along or take notes!
## Why functions?
Have you ever found yourself copying/pasting the same lines of code over and over for repeated tasks? Consider the following (bad) example. We start by creating a tibble object of random observations.
dat <- tibble::tibble(
a = rnorm(100),
b = rnorm(100),
c = rnorm(100),
d = rnorm(100)
)
Suppose we want to modify the columns somehow:
dat$a <- (dat$a - min(dat$a, na.rm = TRUE)) / (max(dat$a, na.rm = T) - min(dat$a, na.rm = T))
dat$b <- (dat$b - min(dat$b, na.rm = TRUE)) / (max(dat$b, na.rm = T) - min(dat$b, na.rm = T))
dat$c <- (dat$b - min(dat$c, na.rm = TRUE)) / (max(dat$c, na.rm = T) - min(dat$c, na.rm = T))
dat$d <- (dat$c - min(dat$d, na.rm = TRUE)) / (max(dat$d, na.rm = T) - min(dat$d, na.rm = T))
What does the above code do to the dat object? Why is this problematic?
We can improve this workflow by using a function for the repeated tasks. Here we write a function called rescale_fun() and use it to rescale each column. Now the working parts of the code are only in one spot.
rescale_fun <- function(x) (x - min(x, na.rm = TRUE)) / (max(x, na.rm = T) - min(x, na.rm = T))
dat$a <- rescale_fun(dat$a)
dat$b <- rescale_fun(dat$b)
dat$c <- rescale_fun(dat$c)
dat$d <- rescale_fun(dat$d)
Much better, yes? We can take this one step further with the mutate_all() function from dplyr. This applies the function to every column in the dataset.
dat <- mutate_all(dat, rescale_fun)
The benefits of writing a function:
• Shorter/cleaner code (fewer headaches for you)
• Minimize errors with copy/paste
• Increase reproducibility if data or requirements change!
There are differing opinions about when a function should be written. As a general rule, consider writing a function if you repeat a block of code more than twice. This follows the DRY principle of Don’t Repeat Yourself. Side note, this contrasts with the “WET” principle of “write everything twice”, “we enjoy typing”, or “waste everyone’s time”.
## Parts of a function
There are three steps to writing a function:
1. Choosing a name that is not too long, not too short, but describes simply what the function does.
2. Identifying a list of arguments that are required inputs or that control how the function works.
3. Writing the body of the function that does what you want based on the input arguments.
What are the elements of the following function?
add_one <- function(x){
x + 1
}
All functions have the same format. They are objects you create by choosing a name and assigning the function to that name (i.e., with the assignment operator, <-). Names should be chosen to be descriptive for what the function does - the shorter the better, but don’t sacrifice clarity for brevity. The arguments are included inside the function() and the body of the function is enclosed within the brackets.
Think of the body of the function as its own mini environment (or a workspace within a workspace). Anything you put within the body can only use what's defined by the arguments; conversely, anything in your global environment is not recognized by the function unless explicitly passed in as an argument.
Here’s an example that sheds some light on the function environment.
add_one(x = 2)
## [1] 3
Works as intended… but when we try to add something to the function that’s not defined by the arguments:
a <- 1
add_one(x = 2, a)
#> Error in add_one(x = 2, a) : unused argument (a)
Or we try to include something in the body that’s not in the arguments:
add_one <- function(x){
x + 1 + a
}
add_one(2)
#> Error in add_one(2) : object 'a' not found
Note: You don’t always have to name the arguments when using a function, i.e., add_one(x = 2) vs. add_one(2). For simplicity, you can omit the argument name but this can sometimes be confusing or lead to accidents if there are multiple arguments.
## A more involved exercise
Let’s think about a more practical example of how functions can improve your workflow. We can work with the FWC Fisheries Independent Monitoring dataset. For this example, I’ve created a subset of the data for the Old Tampa Bay area. You can import the data as follows.
url <- 'https://raw.githubusercontent.com/tbep-tech/tbep-r-training/013432d6924d278a9fbb151591ddcfd5b7de87ab/data/otbfimdat.csv'
otbfimdat <- read.csv(url, stringsAsFactors = F)
Let’s say we want to plot catch of red drum over time. The slot limit for this species is not less than 18" or more than 27", so maybe we want to subset the data within this range to evaluate effectiveness of the regulation.
Here’s a typical workflow for how we could wrangle the data and then plot a time series of total catch.
1. Filter by species
2. Filter by the slot limit
3. Format the date column into a date object
subdat <- otbfimdat %>%
filter(Commonname %in% 'Red Drum') %>%
filter(avg_size > 18 & avg_size < 27) %>%
mutate(Sampling_Date = as.POSIXct(Sampling_Date, format = '%Y-%m-%d %H:%M:%S', tz = 'America/New_York'))
We also know that not all sampling gear are created equally. How many records do we have for each gear type?
table(subdat$Gear)
##
## 20 300
## 69 4
According to the metadata, gear “20” is a 21.3-m seine. Let’s subset the data by this gear type since it has the most records.
subdat <- subdat %>%
filter(Gear %in% 20)
Now let’s plot the data with ggplot. We create a simple plot of total number (as points) over time. We also change the scale of the axis to log-scale for clarity and add a smooth line to identify a general trend. We also add/modify the labels for context.
p1 <- ggplot(subdat, aes(x = Sampling_Date, y = TotalNum)) +
geom_point() +
scale_y_log10() +
geom_smooth(method = 'lm') +
labs(
x = NULL,
y = 'Total catch',
title = "Red drum catch in gear 20",
subtitle = "Data subset to average size between 18-27 mm"
)
p1
Not a very interesting story here. Let's look at another species. The slot limit for spotted seatrout is between 15 and 20 inches.
subdat <- otbfimdat %>%
filter(Commonname %in% 'Spotted Seatrout') %>%
filter(avg_size > 15 & avg_size < 20) %>%
filter(Gear %in% 20) %>%
mutate(Sampling_Date = as.POSIXct(Sampling_Date, format = '%Y-%m-%d %H:%M:%S', tz = 'America/New_York'))
p2 <- ggplot(subdat, aes(x = Sampling_Date, y = TotalNum)) +
geom_point() +
scale_y_log10() +
geom_smooth(method = 'lm') +
labs(
x = NULL,
y = 'Total catch',
title = "Spotted Seatrout catch in gear 20",
subtitle = "Data subset to average size between 15-20 mm"
)
p2
What’s the potential problem with this workflow? How can we make it better?
Let’s write a function to automate the data wrangling and plotting. Where do we start to write our function?
It’s often helpful to work backwards and identify the parts of this workflow that might change depending on when the data change or when your needs for reporting/summarizing the data change. Some arguments could include:
• Common name
• Sizes to plot
• Gear to plot
plotcatch <- function(name, szrng, gearsel){}
What else is missing? Remember, the function acts as a mini-environment, so you want to include the required data as input.
plotcatch <- function(name, szrng, gearsel, datin){}
Now we add the body. We’ll need to “generalize” the places in the code where the arguments are added (i.e., replace specific names with the argument names). This happens in the workflow for creating subdat, but also note what we’ve done for the plot labels.
plotcatch <- function(name, szrng, gearsel, datin){
subdat <- datin %>%
filter(Commonname %in% name) %>%
filter(avg_size > szrng[1] & avg_size < szrng[2]) %>%
filter(Gear %in% gearsel) %>%
mutate(Sampling_Date = as.POSIXct(Sampling_Date, format = '%Y-%m-%d %H:%M:%S', tz = 'America/New_York'))
p <- ggplot(subdat, aes(x = Sampling_Date, y = TotalNum)) +
geom_point() +
scale_y_log10() +
geom_smooth(method = 'lm') +
labs(
x = NULL,
y = 'Total catch',
title = paste0(name, " catch in gear ", gearsel),
subtitle = paste0("Data subset to average size between ", szrng[1], '-', szrng[2], " inches")
)
p
}
Now we can use our function!
plotcatch('Red Drum', c(25, 30), 20, otbfimdat)
You can also include some default values for the arguments. These will help you (and others) understand the format of the required inputs and not require each argument to be used if the inputs don’t change.
plotcatch <- function(name = 'Red Drum', szrng = c(18, 27), gearsel = 20, datin = otbfimdat){
subdat <- datin %>%
filter(Commonname %in% name) %>%
filter(avg_size > szrng[1] & avg_size < szrng[2]) %>%
filter(Gear %in% gearsel) %>%
mutate(Sampling_Date = as.POSIXct(Sampling_Date, format = '%Y-%m-%d %H:%M:%S', tz = 'America/New_York'))
p <- ggplot(subdat, aes(x = Sampling_Date, y = TotalNum)) +
geom_point() +
scale_y_log10() +
geom_smooth(method = 'lm') +
labs(
x = NULL,
y = 'Total catch',
title = paste0(name, " catch in gear ", gearsel),
subtitle = paste0("Data subset to average size between ", szrng[1], '-', szrng[2], " inches")
)
p
}
What are some other ways we can improve this function?
## The RStudio cheat
RStudio has a useful feature that might help you write functions. The “Extract Function” shortcut (under the Code menu, or Ctrl/Cmd + Alt + X) can create a function by identifying the arguments and body in a block of code. It works pretty well for simple examples and kind of well for more complex examples.
A simple example:
x + 1
Using the extract function shortcut (select text, then hit Ctrl/Cmd + Alt + X):
addone <- function(x) {
x + 1
}
Our example:
subdat <- otbfimdat %>%
filter(Commonname %in% 'Spotted Seatrout') %>%
filter(avg_size > 15 & avg_size < 20) %>%
filter(Gear %in% 20) %>%
mutate(Sampling_Date = as.POSIXct(Sampling_Date, format = '%Y-%m-%d %H:%M:%S', tz = 'America/New_York'))
p <- ggplot(subdat, aes(x = Sampling_Date, y = TotalNum)) +
geom_point() +
scale_y_log10() +
geom_smooth(method = 'lm') +
labs(
x = NULL,
y = 'Total catch',
title = "Spotted Seatrout catch in gear 20",
subtitle = "Data subset to average size between 15-20 mm"
)
p
Using the extract function shortcut - not so good, but we can clean it up by hand with minimal effort.
plotcatch <- function(otbfimdat, Commonname, avg_size, Gear, Sampling_Date, TotalNum) {
subdat <- otbfimdat %>%
filter(Commonname %in% 'Spotted Seatrout') %>%
filter(avg_size > 15 & avg_size < 20) %>%
filter(Gear %in% 20) %>%
mutate(Sampling_Date = as.POSIXct(Sampling_Date, format = '%Y-%m-%d %H:%M:%S', tz = 'America/New_York'))
p <- ggplot(subdat, aes(x = Sampling_Date, y = TotalNum)) +
geom_point() +
scale_y_log10() +
geom_smooth(method = 'lm') +
labs(
x = NULL,
y = 'Total catch',
title = "Spotted Seatrout catch in gear 20",
subtitle = "Data subset to average size between 15-20 mm"
)
p
}
## How much should a function do?
Once you get comfortable, the tendency is to write more complex functions that accomplish multiple tasks (automate all the things!). Although this can be fun for a while, you'll quickly find that complexity is difficult to manage and not easily generalizable. What defines how much a function should do, and when should functions be split?
The short answer is that one function does one task. The long answer is that you’ll get a sense for what’s manageable within the scope of a function the more functions you write to automate your workflows. To start, try to think about the purpose of why you’re writing a function. Although exceptions may arise for how data could or should be processed within a function, reminding yourself of why you wanted to create a function in the first place should help with defining the limits of what it does.
|
{}
|
Physics: Trigonometry
1. Jan 25, 2005
shawonna23
A force vector F1 points due east and has a magnitude of 200N. A second force F2 is added to F1. The resultant of the two vectors has a magnitude of 400N and points along the east/west line. Find the magnitude and direction of F2.
I really don't know what equation I would use to solve this problem.
2. Jan 25, 2005
dextercioby
Write the addition of vectors, then choose 2 axes of coordinates and project it...
Daniel.
3. Jan 25, 2005
BobG
What are your choices for equations?
A vector pointing due East lies on the East/West line. This winds up being the same as doing arithmetic on a number line. You only have two choices for directions (East or West). Each direction would give you a different magnitude. (Double check to see if they specified which direction the resultant was pointing).
4. Jan 25, 2005
Staff: Mentor
They're telling you that
$$\bold {F}_1 + \bold {F}_2 = \bold {R}$$
They're giving you the values of $$\bold {F}_1$$ and $$\bold {R}$$.
So in order to find $$\bold {F}_2$$ you need to do this:
$$\bold {F}_2 = \bold {R} - \bold {F}_1$$.
Now, if you've learned how to add two vectors by now, how would you modify the procedure so as to subtract them instead of add them? :uhh:
If you haven't learned how to add two vectors by now, I suggest you go back and do that before tackling this problem.
5. Jan 25, 2005
shawonna23
Thanks for the help. I think the answers are 200N due east and 600N due west
6. Jan 25, 2005
dextercioby
Yes,the answers are correct.We only hope u've gotten them through a correct method...
Daniel.
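Since every force in this problem lies on the east/west line, the subtraction $\bold{F}_2 = \bold{R} - \bold{F}_1$ collapses to signed arithmetic with east taken as positive. A tiny check of both possible resultant directions (Python, illustrative only):

```python
# East is positive; all vectors lie on the east/west line, so
# F2 = R - F1 reduces to signed arithmetic on that line.
F1 = 200.0                      # N, due east
results = {}
for R in (+400.0, -400.0):      # the 400 N resultant may point east or west
    F2 = R - F1
    results[R] = F2
    direction = "east" if F2 > 0 else "west"
    print(f"R = {R:+.0f} N  ->  F2 = {abs(F2):.0f} N due {direction}")
```

This reproduces the two answers in the thread: 200 N due east and 600 N due west.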
|
{}
|
# Hoeffding's inequality within not a range of values but only two possible values
We have to create confidence intervals for a certain probability $p$ of $n$ i.i.d. Bernoulli random variables $(X_n)$ that can take only two specific values {red, blue}. The confidence interval must hold with exponentially high probability.
The thing is, I'm positive you can approach the problem with Hoeffding's inequality, however I'm not sure how to plug in those "red", "blue" values.
Any ideas?
-
Got it, there's a special case for Hoeffding's inequality for Bernoulli variables. That's enough. – Rotational Nov 13 '12 at 0:16
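Concretely, one can code the two outcomes numerically, say red → 1 and blue → 0 (this coding is my assumption; any 0/1 coding works), so the $X_i$ become ordinary $\{0,1\}$-valued Bernoulli variables. Hoeffding's inequality then gives $P(|\hat p - p| \ge \varepsilon) \le 2e^{-2n\varepsilon^2}$, and solving $2e^{-2n\varepsilon^2} = \alpha$ for $\varepsilon$ yields an interval whose error probability decays exponentially in $n$. A sketch:

```python
import math

def hoeffding_ci(sample, alpha=0.05):
    """Exponential (Hoeffding) confidence interval for P(red).

    Outcomes are coded red -> 1, blue -> 0, so they are bounded in
    [0, 1] and Hoeffding's inequality applies directly:
    P(|p_hat - p| >= eps) <= 2*exp(-2*n*eps^2).
    Solving 2*exp(-2*n*eps^2) = alpha for eps gives the half-width.
    """
    n = len(sample)
    p_hat = sum(x == "red" for x in sample) / n
    eps = math.sqrt(math.log(2 / alpha) / (2 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

lo, hi = hoeffding_ci(["red"] * 60 + ["blue"] * 40)
print(f"95% CI for P(red): [{lo:.3f}, {hi:.3f}]")
```

For 60 red out of 100 observations this gives roughly [0.464, 0.736].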
|
{}
|
A linear map having a left inverse which is not a right inverse
We consider a vector space $$E$$ and a linear map $$T \in \mathcal{L}(E)$$ having a left inverse $$S$$ which means that $$S \circ T = S T =I$$ where $$I$$ is the identity map in $$E$$.
When $$E$$ is of finite dimension, $$S$$ is invertible.
For the proof we take $$x \in E$$. We have $$S(T(x)) = I(x) = x$$, which means that $$T(x)$$ is a pre-image of $$x$$ for $$S$$ proving that $$S$$ is onto. If $$E$$ is supposed to be of finite dimension, $$S$$ is also one-to-one and hence invertible and equal to $$T^{-1}$$. Another way to prove that $$S$$ is invertible is to use the determinant. We have $$\det(S T) = \det(S) \det(T)=\det(I)=1$$, hence $$\det(S) \neq 0$$ and $$S$$ is invertible.
What about the case where $$E$$ is of infinite dimension? In that case, a left inverse might not be a right inverse. We provide below a counterexample.
Consider the space $$E$$ of real sequences, the linear mapping $$T$$ that maps a sequence $$(a_0,a_1, \dots)$$ to the sequence $$(0,a_0,a_1, \dots)$$ and the linear mapping $$S$$ that maps a sequence $$(a_0,a_1,a_2, \dots)$$ to the sequence $$(a_1, a_2, \dots)$$. It is clear that $$ST = I$$. Now consider the sequence $$a=(1,0,0,0, \dots)$$. We have $$S(a)=\textbf{0}$$ where $$\textbf{0}$$ is the sequence that vanishes identically and also $$T S(a) = \textbf{0}$$ hence $$T S \neq I$$.
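The counterexample is easy to play with numerically using finite truncations of the sequences (Python, for illustration; the actual sequences are of course infinite):

```python
# Finite illustration of the two shift maps (sequences truncated to
# lists): T prepends a zero (right shift), S drops the first entry
# (left shift).
def T(a):
    return [0] + list(a)

def S(a):
    return list(a[1:])

a = [1, 0, 0, 0]
print(S(T(a)))   # S(T(a)) == a, so ST acts as the identity
print(T(S(a)))   # [0, 0, 0, 0] != a, so TS is not the identity
```

The leading $1$ of $a$ is destroyed by $S$ and cannot be recovered by $T$, which is exactly why $TS \neq I$.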
|
{}
|
## Mathematical Statistics Lesson of the Day – Basu’s Theorem
Today’s Statistics Lesson of the Day will discuss Basu’s theorem, which connects the previously discussed concepts of minimally sufficient statistics, complete statistics and ancillary statistics. As before, I will begin with the following set-up.
Suppose that you collected data
$\mathbf{X} = X_1, X_2, ..., X_n$
in order to estimate a parameter $\theta$. Let $f_\theta(x)$ be the probability density function (PDF) or probability mass function (PMF) for $X_1, X_2, ..., X_n$.
Let
$t = T(\mathbf{X})$
be a statistic based on $\textbf{X}$.
Basu’s theorem states that, if $T(\textbf{X})$ is a complete and minimal sufficient statistic, then $T(\textbf{X})$ is independent of every ancillary statistic.
Establishing the independence between 2 random variables can be very difficult if their joint distribution is hard to obtain. This theorem allows the independence between a minimally sufficient statistic and every ancillary statistic to be established without their joint distribution – and this is the great utility of Basu’s theorem.
However, establishing that a statistic is complete can be a difficult task. In a later lesson, I will discuss another theorem that will make this task easier for certain cases.
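As a concrete illustration of the theorem (not part of the original lesson): for a $Normal(\mu, 1)$ sample, the sample mean is complete and sufficient for $\mu$ while the sample variance is ancillary, so Basu's theorem says they are independent. A quick simulation sketch in Python (a simulation cannot prove independence, it just illustrates the claim):

```python
import random

# For Normal(mu, 1) data the sample mean is complete and sufficient for
# mu and the sample variance is ancillary, so by Basu's theorem they are
# independent; the simulated correlation between them should be near 0.
random.seed(0)
means, variances = [], []
for _ in range(2000):
    x = [random.gauss(5.0, 1.0) for _ in range(10)]
    m = sum(x) / len(x)
    means.append(m)
    variances.append(sum((xi - m) ** 2 for xi in x) / (len(x) - 1))

# sample correlation, computed by hand to stay dependency-free
n = len(means)
mbar = sum(means) / n
vbar = sum(variances) / n
cov = sum((a - mbar) * (b - vbar) for a, b in zip(means, variances)) / n
sd = (sum((a - mbar) ** 2 for a in means) / n
      * sum((b - vbar) ** 2 for b in variances) / n) ** 0.5
r = cov / sd
print(f"corr(sample mean, sample variance) = {r:.3f}")
```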
## Eric’s Enlightenment for Thursday, May 14, 2015
1. Alcohol kills more people worldwide than HIV/AIDS, violence and tuberculosis combined.
2. Some crystals don’t recrystallize after heating and cooling, but form amorphous supercooled liquids. Modifying the molecular structure of diketopyrrolopyrrole using shear forces can induce this type of behaviour. Here is a video demonstration. Here is the original paper.
3. How pyrex was born out of an accident in cooking spongecake 100 years ago. (Hat Tip: Lauren Wolf)
4. Check out David Campbell’s graduate statistical computing course at SFU. It dives into some cool topics in his research that are not always covered in statistical computing, like approximate Bayesian computation and many computational Bayesian methods.
## Using the Golden Section Search Method to Minimize the Sum of Absolute Deviations
#### Introduction
Recently, I introduced the golden section search method – a special way to save computation time by modifying the bisection method with the golden ratio – and I illustrated how to minimize a cusped function with this script. I also wrote an R function to implement this method and an R script to apply this method with an example. Today, I will apply this method to a statistical topic: minimizing the sum of absolute deviations with the median.
While reading Page 148 (Section 6.3) in Michael Trosset’s “An Introduction to Statistical Inference and Its Applications”, I learned 2 basic, simple, yet interesting theorems.
If X is a random variable with a population mean $\mu$ and a population median $q_2$, then
a) $\mu$ minimizes the function $f(c) = E[(X - c)^2]$
b) $q_2$ minimizes the function $h(c) = E(|X - c|)$
I won’t prove these theorems in this blog post (perhaps later), but I want to use the golden section search method to show a result similar to b):
c) The sample median, $\tilde{m}$, minimizes the function
$g(c) = \sum_{i=1}^{n} |X_i - c|$.
This is not surprising, of course, since
$|X - c|$ is just a function of the random variable $X$
– by the law of large numbers,
$\lim_{n\to \infty} \frac{1}{n} \sum_{i=1}^{n} |X_i - c| = E(|X - c|)$
Thus, if the median minimizes $E(|X - c|)$, then, intuitively, it minimizes $\frac{1}{n} \sum_{i=1}^{n} |X_i - c|$ for large $n$; since the constant factor $\frac{1}{n}$ does not change the minimizer, it minimizes $\sum_{i=1}^{n} |X_i - c|$ as well. Let's show this with the golden section search method, and let's explore any differences that may arise between odd-numbered and even-numbered data sets.
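Before turning to the golden section search, claim c) is easy to sanity-check by brute force (Python used here purely for illustration; the R implementation lives in the companion posts):

```python
import random
import statistics

# Brute-force check that the sample median minimizes
# g(c) = sum_i |X_i - c|, for one odd- and one even-sized data set.
def g(c, xs):
    return sum(abs(x - c) for x in xs)

random.seed(1)
checks = {}
for n in (9, 10):                                  # odd and even n
    xs = [random.uniform(0.0, 100.0) for _ in range(n)]
    med = statistics.median(xs)
    grid = [i * 0.01 for i in range(10001)]        # candidate c in [0, 100]
    g_med = g(med, xs)
    g_best_grid = min(g(c, xs) for c in grid)
    checks[n] = g_med <= g_best_grid + 1e-9        # median is never beaten
    print(f"n={n}: g(median)={g_med:.4f}, best grid value={g_best_grid:.4f}")
```

Note that for even $n$ every point between the two middle observations attains the minimum; `statistics.median` returns their midpoint, which is one such minimizer.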
## Scripts and Functions: Using R to Implement the Golden Section Search Method for Numerical Optimization
In an earlier post, I introduced the golden section search method – a modification of the bisection method for numerical optimization that saves computation time by using the golden ratio to set its test points. This post contains the R function that implements this method, the R functions that contain the 3 functions that were minimized by this method, and the R script that ran the minimization.
I learned some new R functions while learning this new algorithm.
– the curve() function for plotting curves
– the cat() function for concatenating strings and variables and, hence, for printing debugging statements
## The Golden Section Search Method: Modifying the Bisection Method with the Golden Ratio for Numerical Optimization
#### Introduction
The first algorithm that I learned for root-finding in my undergraduate numerical analysis class (MACM 316 at Simon Fraser University) was the bisection method. It’s very intuitive and easy to implement in any programming language (I was using MATLAB at the time). The bisection method can be easily adapted for optimizing 1-dimensional functions with a slight but intuitive modification. As there are numerous books and web sites on the bisection method, I will not dwell on it in my blog post.
Instead, I will explain a clever and elegant way to modify the bisection method with the golden ratio that results in faster computation; I learned this method while reading “A First Course in Statistical Programming with R” by John Braun and Duncan Murdoch. Using a script in R to implement this special algorithm, I will illustrate how to minimize a non-differentiable function with the golden section search method. In a later post (for the sake of brevity), I will use the same method to show that the minimizer of the sum of the absolute deviations from a univariate data set is the median. The R functions and script for doing everything are in another post.
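For readers who want the bare algorithm before the R version (which is in the companion post), here is a minimal sketch in Python:

```python
import math

# Minimal golden section search for a unimodal f on [a, b].  Each
# iteration shrinks the bracket by the factor 1/phi ~ 0.618 while
# reusing one interior function evaluation per step, which is the
# computational saving the golden ratio buys.
def golden_section_min(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                           # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                 # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# works on a non-differentiable (cusped) function: minimum at x = 2
print(golden_section_min(lambda x: abs(x - 2) + 1, 0.0, 5.0))
```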
The Fibonacci spiral approximates the golden spiral, a logarithmic spiral whose growth factor is the golden ratio.
Source: Dicklyon via Wikimedia
|
{}
|
Code covered by the BSD License
# Chebfun V4
30 Apr 2009 (Updated )
Numerical computation with functions instead of numbers.
### Editor's Notes:
This file was selected as MATLAB Central Pick of the Week
Multiple BVP solutions by solving an IVP
# Multiple BVP solutions by solving an IVP
Asgeir Birkisson, May 2011
## Contents
(Chebfun example ode/TwoSolBVPfromIVP.m)
It is well known that nonlinear boundary-value problems (BVPs) can have multiple solutions. However, it is difficult to construct general numerical methods to find these solutions -- whereas we can often hope to find one solution with common numerical software, we have to use clever tricks to find more solutions.
One such trick is to start by solving an initial value problem (IVP) with initial conditions similar to the boundary conditions of the original BVP, and use the solution of the IVP as an initial guess for the solution of the BVP. Here, this method is demonstrated for the nonlinear BVP
u''+2usin(u) = 0, u'(0) = 0, u(5) = 1
We start by solving the BVP using a constant initial guess, then obtain another initial guess by solving the IVP
u''+2usin(u) = 0, u'(0) = 0, u(0) = 3
and find another solution to the BVP by using that initial guess.
## Obtaining the first solution, constant initial guess
Here, our initial guess of the solution is the constant function u(x) = pi.
Setup a BVP chebop
Nbvp = chebop(0,5);
Nbvp.op = @(u) diff(u,2)+2*u.*sin(u);
Nbvp.lbc = 'neumann'; Nbvp.rbc = 1;
Assign the initial guess u(x) = pi, and solve using nonlinear backslash
Nbvp.init = pi;
bvpSol1 = Nbvp\0;
disp(['Residual, first solution: ' num2str(norm(Nbvp(bvpSol1)))])
Residual, first solution: 1.9552e-11
## Obtaining an initial guess by solving an IVP
Here, we obtain an initial guess for the solution of the BVP by solving an IVP. The solution of the IVP will satisfy
u''+2usin(u) = 0, u'(0) = 0, u(0) = 3
Setup a IVP chebop and solve (the system will automatically construct an initial guess for this problem):
close all, clc
cheboppref('display','iter','plotting','on','damped','on')
Nivp = chebop(0,5);
x = chebfun('x',domain(Nivp));
Nivp.op = @(u) diff(u,2)+2*u.*sin(u);
Nivp.lbc = @(u) [diff(u),u-3];
Nivp.init = -x.^2 + 3;
plot(Nivp.init)
ivpSol = Nivp\0;
Iter. || update || length(update) stepsize length(u)
---------------------------------------------------------------------------
1 1.462e-01 134 1 134
2 1.726e+00 237 1 237
3 2.394e+00 432 1 432
4 8.210e-02 432 1 432
5 1.969e-03 267 1 432
6 8.460e-07 111 1 432
7 1.027e-12 22 1 432
7 iterations
Final residual norm: 2.19e-04 (interior) and 2.39e-07 (boundary conditions).
## Obtaining the second solution
We now assign the solution of the IVP as the initial guess to the original BVP chebop, and find another solution of the problem:
Nbvp.init = ivpSol;
bvpSol2 = Nbvp\0;
disp(['Residual, second solution: ' num2str(norm(Nbvp(bvpSol2)))])
Iter. || update || length(update) stepsize length(u)
---------------------------------------------------------------------------
1 1.633e-02 108 1 432
2 3.267e-02 90 1 432
3 6.841e-04 107 1 432
4 8.598e-06 106 1 432
5 3.049e-10 69 1 432
6 0.000e+00 1 1 432
6 iterations
Final residual norm: 2.55e-04 (interior) and 4.81e-07 (boundary conditions).
Residual, second solution: 0.0016488
## Plotting
A plot of both the solutions is shown below:
plot(bvpSol1,'linewidth',2), hold on, plot(bvpSol2,'r-.','Linewidth',2), grid on
legend('First soln.','Second soln.'), ylim([-6 6])
title('Multiple solutions of the BVP u''''+2usin(u) = 0, u''(0) = 0, u(5) = 1')
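For comparison, the underlying IVP can also be integrated with a plain fixed-step RK4 scheme. The sketch below (Python, purely illustrative; Chebfun's chebop machinery is adaptive and spectral, and the step size h = 1e-3 is my assumption) integrates u'' + 2u sin(u) = 0 with u(0) = 3, u'(0) = 0 up to t = 5:

```python
import math

# Fixed-step RK4 for the IVP u'' + 2*u*sin(u) = 0, u(0) = 3, u'(0) = 0,
# written as the first-order system (u, v)' = (v, -2*u*sin(u)).
def rhs(u, v):
    return v, -2.0 * u * math.sin(u)

def rk4(u, v, t_end=5.0, h=1e-3):
    for _ in range(round(t_end / h)):
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return u, v

u5, v5 = rk4(3.0, 0.0)
print(f"u(5) = {u5:.6f}, u'(5) = {v5:.6f}")
```

Because the equation is conservative (it derives from the potential V(u) = 2(sin u - u cos u)), the energy v^2/2 + V(u) should be preserved along the computed trajectory, which makes a convenient accuracy check.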
|
{}
|
# Learning MPLAB XC [closed]
I am trying to learn MPLAB XC8. I am familiar with C and C++ and have written some code in these languages. But while trying to learn MPLAB, I couldn't find any good source or book. For example, I know what TRIS means but I don't know how to write it in a program (btw I know how).
Neither the datasheet nor the MPLAB help section worked for me. I mean, for example, while learning the C language there are a lot of books which explain the code and how to use it.
Where should I start to study?
There are some resources about Microchip products on their developer website. You will find examples for 8-, 16-, and 32-bit microcontrollers and more.
You can also take a look on some code examples.
Have you ever written code for microcontrollers? If not, buy a development board; you will have plenty of code examples for a specific dev board.
• Thank you. Then you mean I can learn these concepts with examples. – Anıl Kırıkkaya May 22 '18 at 5:53
(Note that, when I last checked, MPLAB X IDE is supplied separately from the compilers XC8, XC16, and XC32.)
XC8 compiles C89 a.k.a. C90 with a few extensions, all documented in the User Guide. (Soon it will compile C99 and C11 when asked, but that has not yet been released.)
When you #include <xc.h>, that pulls in declarations for the various Special Function Registers on the chip you select at compile-time. These SFRs are specified as integers and bitfields of appropriate sizes. You simply read from them and write to them just like you would any other variable.
|
{}
|
# Prefix and conditional statement with biblatex
I would like to highlight the contributions of a given author in my list of bibliographic references. In order to do so, I'm thinking about adding his initials to the citation label. Something like what follows, knowing that the author to be highlighted is First Name:
• bib file
@article{label1,author = {First Name and author2},title = {title}}
@article{label2,author = {author2 and author3},title = {title}}
• main file
text: \cite{label1} \cite{label2}
• output:
text [FN1] [2]
• the list of references could be
[FN1] First Name 1, author2, title
[2] author1, author2, title
A very minimal MWE is suggested:
\documentclass{article}
\usepackage[style=numeric-comp,backend=biber,maxnames=99]{biblatex}
\begin{document}
text \cite{label1}, \cite{label2}
\end{document}
and refs.bib stores the following information
@article{label1,
author = {First Name and author2},
title = {title}
}
@article{label2,
author = {author2 and author3},
title = {title}
}
Automatic testing on the First Name author is preferred. Two separate lists would be acceptable but a single one is preferred.
• An MWE would nevertheless be appreciated. It gives us something to start from. It also shows us what style you use (numeric?) and saves us some work in making up .bib entries and stuff. Why is [2] not [A2] even though author1 contributed? Do you want biblatex to automatically detect the author or would you be OK with telling it yourself? Do you want the initials to be detected automatically or would you be OK with adding them to a special field yourself? – moewe Oct 14 '17 at 17:15
• @moewe Done. I've clarified what I could. – pluton Oct 14 '17 at 17:30
• Would shorthand = {FN1} help? – samcarter_is_at_topanswers.xyz Oct 14 '17 at 18:14
• @samcarter It looks like it replaces the expected citation number by FN and I do not want to enter the numbering by hand. – pluton Oct 14 '17 at 20:55
This is a little hackish, but it will do what you want. I've set the usera field with the prefix you want for a particular reference. Then I have used \AtEveryCitekey and \AtEveryBibitem to alter the label if usera is set.
MWE:
\documentclass{article}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@article{label1,
author = {First Name and author2},
title = {Title},
journaltitle = {Journal},
date = {2017},
usera = {FN}
}
@article{label2,
author = {author2 and author3},
title = {Title},
journaltitle = {Journal},
date = {2016}
}
\end{filecontents}
\pagestyle{empty}
\usepackage[style=numeric-comp,sorting=none,backend=biber,maxnames=99]{biblatex}
\AtEveryCitekey{%
\iffieldundef{usera}
{}
{\savefield*{usera}{\tempa}%
\restorefield{labelprefix}{\tempa}}}
\DeclareFieldFormat{labelnumberwidth}{%
\mkbibbrackets{%
\printfield{usera}%
#1}}
\begin{document}
text \cite{label1}, \cite{label2}
\printbibliography
\end{document}
|
{}
|
## LOG#082. Group Theory (II).
Basic definitions of group theory: that is the topic today! We need some background previous to the “group axioms”. Definition (1). Set is a collection of objects with some properties. Objects in the set are called “elements” or “members” of … Continue reading
## LOG#048. Thomas precession.
LORENTZ TRANSFORMATIONS IN NON-STANDARD FORM Let me begin this post with an uncommon representation of Lorentz transformations in terms of “uncommon matrices”. A Lorentz transformation can be written symbolically, as we have seen before, as the set of linear transformations … Continue reading
|
{}
|
A 10 x 1.5 x 0.2 cm³ glass plate weighs 8.6 gm in air. Now it is immersed half in water with the longest side vertical. What will be its apparent weight? (Surface tension of water is 70 dyne/cm.) (From Physics: Mechanical Properties of Fluids, Class 11, Manipur Board.)
The different forces acting on the plate are:
(i) Weight W vertically downward,
(ii) Upward thrust U vertically upward,
(iii) Surface tension force T in the downward direction.
These forces are given by:
Weight, W = 8.6 × 980 = 8428 dyne
Upward thrust, U = weight of water displaced
= volume of water displaced × density of water × g
= (10 × 1.5 × 0.2)/2 × 1 × 980 = 1470 dyne (only half the plate is immersed)
Surface tension force,
T = total length of the water line in contact with the plate × surface tension
= 2(1.5 + 0.2) × 70 = 238 dyne
Now the apparent weight,
W' = W + T − U = 8428 + 238 − 1470
= 7196 dyne
= 7.34 gf
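As a quick numeric check, the arithmetic can be reproduced in a few lines of Python (assuming g = 980 cm/s² and a water density of 1 g/cm³, as used in the solution):

```python
# Numeric check of the apparent-weight calculation (CGS units).
g = 980.0                                 # cm/s^2
rho_water = 1.0                           # g/cm^3
W = 8.6 * g                               # weight of the plate, dyne
U = (10 * 1.5 * 0.2) / 2 * rho_water * g  # upthrust on the half-immersed plate, dyne
T = 2 * (1.5 + 0.2) * 70                  # downward surface-tension pull, dyne
W_apparent = W + T - U
print(W_apparent, W_apparent / g)         # 7196.0 dyne, about 7.34 gf
```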
What is hydrostatics?
Hydrostatics is the branch of fluid mechanics that studies incompressible fluids at rest, and objects at rest in fluids.
What is hydrodynamics?
Hydrodynamics is the branch of science that studies the forces exerted by fluids in motion and the forces acting on them.
Why solids have definite shape while liquids do not have definite shape?
Solids: Intermolecular forces are very strong and thermal agitations are not sufficiently strong to separate the molecules from their mean position. Solids are rigid and hence they have definite shapes.
Liquids: In liquids intermolecular forces are not sufficiently strong to hold the molecules at definite sites, as a result they move freely within the bulk of liquid, therefore, do not possess definite shapes. Liquids take the same shape as that of the container.
Do intermolecular or inter-atomic forces follow inverse square law?
No. Intermolecular and inter-atomic forces do not obey the inverse square law.
What is fluid?
Any material that can flow is a fluid. Liquids and gases are examples of fluid.
## Pseudo-Hermiticity and generalized $$PT$$- and $$CPT$$-symmetries. (English) Zbl 1061.81076
Summary: We study certain linear and antilinear symmetry generators and involution operators associated with pseudo-Hermitian Hamiltonians and show that the theory of pseudo-Hermitian operators provides a simple explanation for the recent results of C. M. Bender, D. C. Brody and H. F. Jones [Phys. Rev. Lett. 89, No. 27, 270401 (2002), see also quant-ph/0208076] on the $$CPT$$-symmetry of a class of $$PT$$-symmetric non-Hermitian Hamiltonians. We present a natural extension of these results to the class of diagonalizable pseudo-Hermitian Hamiltonians $$H$$ with a discrete spectrum. In particular, we introduce generalized parity $$(P)$$, time-reversal $$(T)$$, and charge-conjugation $$(C)$$ operators and establish the $$PT$$- and $$CPT$$-invariance of $$H$$.
### MSC:
81U15 Exactly and quasi-solvable systems arising in quantum theory
47N50 Applications of operator theory in the physical sciences
Full Text:
### References:
[1] C. M. Bender, D. C. Brody, and H. F. Jones, ”Complex Extension of Quantum Mechanics,” arXiv: quant-ph/0208076. · Zbl 1267.81234
[2] DOI: 10.1063/1.532860 · Zbl 1057.81512
[3] DOI: 10.1063/1.532860 · Zbl 1057.81512
[4] DOI: 10.1063/1.1418246 · Zbl 1059.81070
[5] DOI: 10.1063/1.1461427 · Zbl 1060.81022
[6] DOI: 10.1063/1.1489072 · Zbl 1061.81075
[7] Mostafazadeh A., Nucl. Phys. B 640 pp 419– (2002) · Zbl 0997.81031
[8] Mostafazadeh A., J. Math. Phys. 43 pp 6343– (2002) · Zbl 1060.81023
[9] Scholtz F. G., Ann. Phys. 213 pp 74– (1992) · Zbl 0749.47041
[10] Mostafazadeh A., Class. Quantum Grav. 20 pp 155– (2003) · Zbl 1039.83005
[11] A. Mostafazadeh, ”On a Factorization of Symmetric Matrices and Antilinear Symmetries,” arXiv: math-ph/0203023.
[12] Mostafazadeh A., Mod. Phys. Lett. A 17 pp 1973– (2002) · Zbl 1083.81514
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# Maximal angle of Euler's pendulum [closed]
A small ball of mass $m$ is connected to one side of a string with length $L$ (see illustration). The other side of the string is fixed in the point $O$. The ball is being released from its horizontal straight position. When the string reaches vertical position, it wraps around a rod in the point $C$ (this system is called "Euler's pendulum"). Prove that the maximal angle $\alpha$, when the trajectory of the ball is still circular, satisfies the equality: $\cos \alpha = \frac{2}{3}$.
In the previous part of the question, they asked to find the velocity at any given angle $\alpha$, and I solved it right using the law of conservation of energy:
$V_{\alpha}=\sqrt{gL(1-\cos\alpha)}$
Now, they ask to find the maximal angle at which the ball will still be in circular motion. The ball is in circular motion as long as the tension of the string is not zero. Therefore, we should see when the tension force becomes zero. So I did the following:
In the radial axis we have the tension force and the weight component (it might be negative - depends on the angle). So the net force in that direction equals to the centripetal force: $T_{\alpha}+mg\cos\alpha=m \frac{V_{\alpha}^2}{L}$
If we substitute the velocity, we get:
$T_{\alpha}=m \left(\frac{V_{\alpha}^2}{L}-g\cos\alpha \right)=m \left[\frac{gL(1-\cos \alpha)}{L}-g\cos\alpha \right]=mg(1-2\cos\alpha)$
When the tension force becomes zero:
$0=1-2\cos\alpha\\ \cos\alpha=\frac{1}{2}$
And that's not the right angle. So my question is - why? Where's my mistake? I was thinking that maybe I've got some problem with the centripetal force equation, but I can't figure out what's the problem. Or maybe the book is wrong? Thanks in advance.
You have used the right formula for the centripetal force, but the wrong radius. You have used $L$ as radius but it should be $\frac{L}{2}$. Redo the same calculation and you should get the right answer.
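Following the answer's correction, here is a small Python sketch (illustrative, not from the book) that redoes the tension algebra with radius $\frac{L}{2}$ and locates the angle where the tension vanishes:

```python
import math

# With radius L/2 the radial equation gives T/(m*g) = 2(1 - cos a) - cos a = 2 - 3 cos a,
# using v^2 = g*L*(1 - cos a) from energy conservation.
def tension_over_mg(alpha):
    return 2.0 - 3.0 * math.cos(alpha)

# Bisection for the zero of the expression, which changes sign on [0, pi/2].
lo, hi = 0.0, math.pi / 2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if tension_over_mg(mid) < 0.0:
        lo = mid
    else:
        hi = mid
print(math.cos(lo))   # about 0.6667, i.e. cos(alpha) = 2/3
```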
## Differential and Integral Equations
### Jumping nonlinearities for a nonlinear equation with an $n$th order disconjugate operator
Marta García-Huidobro
#### Article information
Source
Differential Integral Equations, Volume 4, Number 6 (1991), 1281-1291.
Dates
First available in Project Euclid: 13 June 2013
https://projecteuclid.org/euclid.die/1371154287
Mathematical Reviews number (MathSciNet)
MR1133758
Zentralblatt MATH identifier
0733.34029
Subjects
Primary: 34B15: Nonlinear boundary value problems
Secondary: 47H15 47N20: Applications to differential and integral equations
#### Citation
García-Huidobro, Marta. Jumping nonlinearities for a nonlinear equation with an $n$th order disconjugate operator. Differential Integral Equations 4 (1991), no. 6, 1281--1291. https://projecteuclid.org/euclid.die/1371154287
Labeling Variables and Values
Variable labels
label variable income "Gross income in 2008, in Euro"
This command, which may be abbreviated as la var, has to be repeated for each variable that is to be labeled.
Value labels
Basics
Giving labels to values works like this: You first have to define one or several labels; in a second step the label(s) is or are attached to one or several variables. Therefore, two command lines are necessary
label define mstatus 0 "unmarried" 1 "married"
label value status mstatus
Note that "status" refers to the name of the variable and "mstatus" to the name of the label (both names may be identical, by the way). The advantage of this two-step procedure is that often several variables have the same values with the same "meanings" (for instance, in the case of Likert-scaled items), and this can be made explicit by attaching the same label to these variables. To return to our example, there may be a list of up to 10 household members, and for each member there is a variable indicating whether s/he is married or not. You still will define one label as in the example above and attach it to variables, say, status1 to status10:
label value status1-status10 mstatus
label define can be abbreviated as la de and label value as la val.
Choice of value labels is not easy, as often only a small number of characters will be displayed. I advise my students to create labels that convey significant information with the first 8 characters or so, otherwise labels may become indistinguishable in the output of some procedures.
label list mstatus
or
labelbook mstatus
will display a table showing the correspondence between values and labels of the variable "mstatus"; labelbook will present some additional information and will underline the first 12 characters of each label which helps you to judge whether the labels will be unique in a typical piece of Stata output. Note that "mstatus" is the name of the label, not of the variable. If you don't remember the name of the label attached to a variable, you can find it with the help of the describe or the codebook command (just insert the variable name after the respective command). As of Stata version 12, value labels are also shown in the "Variables" section of the Properties window.
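To pull the pieces together, here is a hypothetical do-file fragment combining the commands discussed so far (variable names are made up for illustration):

```stata
* Define a value label, attach it to a variable, and inspect the result
label define mstatus 0 "unmarried" 1 "married"
label value status mstatus
label list mstatus

* Give the variable itself a descriptive label as well
label variable status "Marital status of respondent"

* The same value label can serve several variables at once
label value status1-status10 mstatus
```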
Modifying existing value labels
Existing labels can be modified with the help of options. The most important options are:
label define mstatus 2 "divorced" 3 "widowed", add
add can be used to label values that have no label attached
label define mstatus 0 "cohabiting" 2 "divorced" 3 "widowed", modify
modify has to be used if existing labels are to be changed. It includes add; in other words, you can modify existing labels and at the same time add new ones.
Dropping value labels
A value label attached to a particular variable can be removed with the help of the dot. Look closely at the end of the following example – it terminates with a dot (and not a stain of dirt on your screen):
label values name-of-variable[s] .
I have deviated from the usual practice in this guide to use an example variable name to make it clear that you have to name the variable(s) from which the labels are to be removed and not the labels themselves.
On the other hand, you may remove a value label definition itself from the data set with the command
label drop name-of-label
© W. Ludwig-Mayerhofer, Stata Guide | Last update: 02 Aug 2015
# Demonstration Functions¶
To aid in exploration we include a variety of test functions that can be used in conjunction with the parameter space dimension reduction techniques. We divide these into two classes: those that are simple formulas and those that are the results of the numerical approximation of differential equations. The former are inexpensive to evaluate and equipped with analytic gradients; the latter can be expensive, may not have gradients, and may contain computational noise.
We provide two different ways to access these demo functions: a psdr.Function() interface working on the normalized domain and a low-level access to the underlying function and domain methods.
## Formula-based Test Functions¶
### Borehole Function¶
class psdr.demos.Borehole(domain='deterministic', dask_client=None)[source]
The borehole test function
A function implementing the borehole test function [VLSE_borehole]. This function has the form
$f(r_w, r, T_u, H_u, T_l, H_l, L, K_w) = \frac{ 2\pi T_u (H_u - H_l)}{ \ln(r/r_w) \left( 1 + \frac{2 L T_u}{\ln(r/r_w) r_w^2 K_w} + \frac{T_u}{T_l} \right) }$
where the input variables have the domain
Variable Interpretation
$$r_w\in [0.05, 0.15]$$ radius of borehole (m)
$$r \in [100, 50 \times 10^3]$$ radius of influence (m)
$$T_u \in [63070,115600]$$ transmissivity of upper aquifer (m^2/yr)
$$H_u \in [ 990, 1110]$$ potentiometric head of upper aquifer (m)
$$T_l \in [63.1, 116]$$ transmissivity of lower aquifer (m^2/yr)
$$H_l \in [700, 820]$$ potentiometric head of lower aquifer (m)
$$L \in [1120, 1680]$$ length of borehole (m)
$$K_w \in [9855, 12045]$$ hydraulic conductivity of borehole (m/yr)
An alternative to this deterministic domain is an uncertain domain where $$r_w \sim \mathcal{N}(0.10, 0.0161812)$$ and $$\log r \sim \mathcal{N}(7.71, 1.0056)$$ and the remainder come from a uniform distribution on the domain previously specified.
Parameters: domain (['deterministic', 'uncertain']) – Which domain to use when constructing the function dask_client (dask.distributed.Client or None) – If specified, allows distributed computation with this function.
References
[VLSE_borehole] Virtual Library of Simulation Experiments, Borehole Function https://www.sfu.ca/~ssurjano/borehole.html
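Since the formula is given in closed form, it is easy to sanity-check outside of psdr. The sketch below is a plain-Python transcription of the equation above (not psdr's own implementation); the evaluation point is simply the center of the deterministic domain:

```python
import math

def borehole(r_w, r, T_u, H_u, T_l, H_l, L, K_w):
    """Water flow through a borehole, transcribed from the formula above."""
    log_rr = math.log(r / r_w)
    denom = log_rr * (1.0 + 2.0 * L * T_u / (log_rr * r_w**2 * K_w) + T_u / T_l)
    return 2.0 * math.pi * T_u * (H_u - H_l) / denom

# Center of the deterministic domain listed in the table.
flow = borehole(0.10, 25050.0, 89335.0, 1050.0, 89.55, 760.0, 1400.0, 10950.0)
print(flow)
```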
### Golinski Gearbox¶
class psdr.demos.GolinskiGearbox(dask_client=None)[source]
The Golinski Gearbox Optimization Test Problem
This test problem, originally described by Golinski [Gol70] and subsequently appearing in, for example, [Ray03] and [MDO], seeks to design a gearbox (speed reducer) to minimize volume subject to a number of constraints.
Here we take our formulation following [Ray03], which reduces the original 25 constraints to 11 by removing redundancy. We also shift these constraints such that they are satisfied if the values returned are negative and the constraints are violated if their value is positive.
Parameters: dask_client (dask.distributed.Client or None) – If specified, allows distributed computation with this function.
References
[MDO] Langley Research Center: Multidisciplinary Optimization Test Suite, Golinski’s Speed Reducer
[Gol70] “Optimal Synthesis Problems Solved by Means of Nonlinear Programming and Random Methods”, Jan Golinski, J. Mech. 5, 1970, pp.287–309. https://doi.org/10.1016/0022-2569(70)90064-9
[Ray03] (1, 2) “Golinski’s Speed Reducer Problem Revisited”, Tapabrata Ray, AIAA Journal, 41(3), 2003, pp 556–558 https://doi.org/10.2514/2.1984
### Nowacki Beam¶
class psdr.demos.NowackiBeam[source]
The Nowacki Beam Problem
In this example, we seek to design a beam with fixed cross-sectional breath and height with respect to two objectives subject to five constraints described in [FSK08].
The two parameters for this model and their ranges are given below.
Variable Interpretation
$$b \in [0.005, 0.050]$$ breadth (m)
$$h \in [0.020, 0.250]$$ height (m)
The following are the seven quantities of interest associated with this problem.
Function name Formula Role
cross-sectional area $$a = bh$$ objective: minimize
bending stress $$\sigma = 6FL/(bh^2)$$ objective: minimize
tip deflection $$\delta = FL^3/(3EI_Y) \quad I_Y = bh^3/12$$ constraint: $$\delta \le 0.005$$
bending stress $$\sigma_B = 6FL/(bh^2)$$ constraint: $$\sigma_B \le \sigma_Y$$
shear stress $$\tau = 3F/(2bh)$$ constraint: $$\tau \le \sigma_Y/2$$
height to breadth ratio $$\rho = h/b$$ constraint: $$\rho \le 10$$
tip force for buckling $$F_T = (4/L^2)\sqrt{G I_T E I_Z/(1-\nu)}$$ constraint: $$F_T \ge 2F$$
where $$I_T = (b^3h+bh^3)/12$$ and $$I_Z = b^3h/12$$. The constants that appear above are taken from those values for mild steel:
Constant Value
tip force $$F=5\times 10^3$$ N
beam length $$L=1.5$$ m
yield stress $$\sigma_Y = 240\times 10^6$$ Pa
Young’s modulus $$E=216.62\times 10^9$$ Pa
Poisson’s ratio $$\nu=0.27$$
shear modulus $$G=86.65\times 10^9$$ Pa
This example originates in [Now80].
[FSK08] Engineering Design via Surrogate Modelling: A Practical Guide Alexander I. J. Forrester, Andras Sobester, and Andy J. Keane Wiley, 2008
[Now80] Modelling of Design Decisions for CAD Horst Nowacki In: Computer Aided Design Modelling, J. Encarnacao (editor) DOI:10.1007/BFb0040161
### OTL Circuit Function¶
class psdr.demos.OTLCircuit(dask_client=None)[source]
The OTL circuit test function
The OTL Circuit function “models an output transformerless push pull circuit” [VLSE]
$\begin{split}f(R_{b1}, R_{b2}, R_f, R_{c1}, R_{c2}, \beta) :=& \frac{ \beta (V_{b1} + 0.74)(R_{c2} + 9)}{\beta(R_{c2} + 9) + R_f} + \frac{11.35 R_f}{\beta(R_{c2} + 9) + R_f} \\ &+\frac{0.74 R_f \beta (R_{c2} + 9)}{R_{c1}(\beta(R_{c2} + 9) + R_f)}, \\ \text{where } V_{b1} :=& \frac{12 R_{b2}}{R_{b1} + R_{b2}}\end{split}$
Variable Interpretation
$$R_{b1} \in [50, 150]$$ resistance b1 (K-Ohms)
$$R_{b2} \in [25, 75]$$ resistance b2 (K-Ohms)
$$R_{f} \in [0.5, 3]$$ resistance f (K-Ohms)
$$R_{c1} \in [1.2, 2.5]$$ resistance c1 (K-Ohms)
$$R_{c2} \in [0.25, 1.2]$$ resistance c2 (K-Ohms)
$$\beta \in [50, 300]$$ current gain (Amperes)
Parameters: dask_client (dask.distributed.Client or None) – If specified, allows distributed computation with this function.
References
[VLSE] Virtual Library of Simulation Experiments, OTL Circuit https://www.sfu.ca/~ssurjano/otlcircuit.html
### Piston Function¶
class psdr.demos.Piston(dask_client=None)[source]
Piston test function
The Piston function “models the circular motion of a piston within a cylinder” [VLSE_piston].
$\begin{split}f(M, S, V_0, k, P_0, T_a, T_0) :=& 2\pi \sqrt{\frac{M}{k+ S^2 \frac{P_0V_0}{T_0} \frac{T_a}{V^2}}}, \text{ where } \\ V &= \frac{S}{2k}\left( \sqrt{A^2 + 4k \frac{P_0V_0}{T_0}T_a} - A \right) \\ A &= P_0 S + 19.62M - \frac{k V_0}{S}\end{split}$
Variable Interpretation
$$M \in [30, 60]$$ piston weight (kg)
$$S \in [0.005, 0.020]$$ piston surface area (m^2)
$$V_0 \in [0.002, 0.010]$$ initial gas volume (m^3)
$$k \in [1000, 5000]$$ spring coefficient (N/m)
$$P_0 \in [90\times 10^3, 110 \times 10^3]$$ atmospheric pressure (N/m^2)
$$T_a \in [290, 296]$$ ambient temperature (K)
$$T_0 \in [340, 360]$$ filling gas temperature (K)
Parameters: dask_client (dask.distributed.Client or None) – If specified, allows distributed computation with this function.
References
[VLSE_piston] Virtual Library of Simulation Experiments, Piston Function https://www.sfu.ca/~ssurjano/piston.html
### Robot Arm Function¶
class psdr.demos.RobotArm(dask_client=None)[source]
Robot Arm test function
A test function that measures the distance of a four segment arm from the origin [VLSE_robot].
$\begin{split}f(\theta_1, \theta_2, \theta_3, \theta_4, L_1, L_2, L_3, L_4) &:= \sqrt{u^2 + v^2}, \text{ where }\\ u &:= \sum_{i=1}^4 L_i \cos \left( \sum_{j=1}^i \theta_j\right) \\ v &:= \sum_{i=1}^4 L_i \sin \left( \sum_{j=1}^i \theta_j\right)\end{split}$
Variable Interpretation
$$\theta_i \in [0, 2\pi]$$ angle of the ith arm segment
$$L_i \in [0,1]$$ length of the ith arm segment
Parameters: dask_client (dask.distributed.Client or None) – If specified, allows distributed computation with this function.
References
[VLSE_robot] Virtual Library of Simulation Experiments, Robot Arm Function https://www.sfu.ca/~ssurjano/robot.html
### Wing Weight Function¶
class psdr.demos.WingWeight[source]
The wing weight test function
This function models the weight of a wing based on several design parameters [VLSE_wing]:
$\begin{split}&f(S_w, W_{fw}, A, \Lambda, q, \lambda, t_c, N_z, W_{dg}, W_p) := \\ &\quad 0.036 S_w^{0.758} W_{fw}^{0.0035} \left( \frac{A}{\cos^2 \Lambda} \right)^{0.6} q^{0.006} \lambda^{0.04} \left( \frac{100 t_c}{\cos \Lambda} \right)^{-0.3} (N_z W_{dg})^{0.49} + S_w W_p\end{split}$
Variable Interpretation
$$S_w \in [150, 200]$$ wing area (ft^2)
$$W_{fw} \in [220, 300]$$ weight of fuel in the wing (lb)
$$A \in [6,10]$$ aspect ratio
$$\Lambda \in [-10,10]$$ quarter-chord sweep (degrees)
$$q \in [16,45]$$ dynamic pressure at cruise (lb/ft^2)
$$\lambda \in [0.5,1]$$ taper ratio
$$t_c\in [0.08,0.18]$$ airfoil thickness to chord ratio
$$N_z \in [2.5, 6]$$ ultimate load factor
$$W_{dg} \in [1700, 2500]$$ flight design gross weight (lb)
$$W_p \in [0.025, 0.08]$$ paint weight (lb/ft^2)
References
[VLSE_wing] Virtual Library of Simulation Experiments, Wing Weight Function https://www.sfu.ca/~ssurjano/wingweight.html
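As with the other closed-form demos, the weight model can be transcribed directly. The sketch below is an independent Python version (an assumption here: the sweep angle $$\Lambda$$ is taken in degrees, as the table suggests), not psdr's own code:

```python
import math

def wing_weight(S_w, W_fw, A, Lam, q, lam, t_c, N_z, W_dg, W_p):
    """Wing weight per the formula above; Lam is the quarter-chord sweep in degrees."""
    cosL = math.cos(math.radians(Lam))
    return (0.036 * S_w**0.758 * W_fw**0.0035 * (A / cosL**2)**0.6
            * q**0.006 * lam**0.04 * (100.0 * t_c / cosL)**(-0.3)
            * (N_z * W_dg)**0.49 + S_w * W_p)

# Center of the domain listed in the table.
w = wing_weight(175, 260, 8, 0, 30.5, 0.75, 0.13, 4.25, 2100, 0.0525)
print(w)
```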
## Model-based Test Functions¶
### OpenAeroStruct¶
class psdr.demos.OpenAeroStruct[source]
A test problem from OpenAeroStruct
A test problem using OpenAeroStruct similar to that described in [JHM18].
References
[JHM18] Open-source coupled aerostructural optimization using Python. John P. Jasa, John T. Hwang, and Joaquim R. R. A. Martins. Structural and Multidisciplinary Optimization (2018) 57:1815-1827 DOI: 10.1007/s00158-018-1912-8
### MULTI-F¶
class psdr.demos.MULTIF(truncate=1e-07, level=0, su2_maxiter=5000, workdir=None, dask_client=None, **kwargs)[source]
An interface to the MULTI-F multiphysics jet nozzle model test problem.
The function describes a multiphysics model of a jet nozzle [FMAA18]. The domain consists of 96 design parameters and 40 uncertain variables and the output returns the mass, thrust, and many different failure constraints. This uses a Docker image to encapsulate the dependencies for MULTI-F; to use this function, install Docker and pull the multif image
>> docker pull jeffreyhokanson/multif:v25
This function encodes multiple different fidelity levels: in order of increasing fidelity and approximate one-core cost
Level Physics Mesh Mechanics Runtime (sec)
0 1d Non-ideal N/A linear 20
1 1d Non-ideal N/A nonlinear 205
2 2d Euler Coarse linear 82
3 2d Euler Medium linear 256
4 2d Euler Fine linear 713
5 3d Euler Coarse linear 1083
6 3d Euler Medium linear 3123
7 3d Euler Fine linear 13350
8 2d RANS Coarse linear
9 2d RANS Medium linear
10 2d RANS Fine linear
11 3d RANS Coarse linear 80690
12 3d RANS Medium linear 175888
13 3d RANS Fine linear 408512
Note at the present time the 2d RANS levels are buggy and not recommended for use.
Parameters
• truncate (float in [0,1]; default 1e-7) – If non-zero, truncate the random domains by the specified probability. Truncation is necessary to provide a bounded domain for use with most sampling strategies.
• level (int in [0, 13]; default 0) – What level of fidelity to run (see description above)
• su2_maxiter (int; default: 5000) – Number of iterations used to solve for the fluid flow in SU2
• workdir (str; default: None) – If defined, this specifies the location where temporary files are written while solving the problem
• keep_data (bool; default: False) – If True, do not delete the work files; i.e., use this to preserve the flow solution for later use
• verbose (bool; default: False) – If True, print the output of running MULTI-F to stdout
• dask_client (dask.distributed.Client or None) – If specified, allows distributed computation with this function.
References
[FMAA18] Reliability-Based Design Optimization of a Supersonic Nozzle Richard W. Fenrich, Victorien Menier, Philip Avery, and Juan J. Alonso, 6th European Conference on Computational Mechanics http://www.eccm-ecfd2018.org/admin/files/filePaper/p437.pdf
### NACA0012¶
class psdr.demos.NACA0012(n_lower=10, n_upper=10, fraction=0.01, dask_client=None, **kwargs)[source]
The lift and drag of a NACA0012 airfoil perturbed by Hicks-Henne bump functions
This function computes the lift and drag of a NACA0012 airfoil perturbed by Hicks-Henne bump functions. The lift and drag are computed using [SU2] using Euler flow at Mach number 0.8 and angle of attack 1.25.
This example slightly modifies the configuration file that appears in the SU2 quickstart folder: [inv_NACA0012.cfg](https://github.com/su2code/SU2/blob/master/QuickStart/inv_NACA0012.cfg).
Parameters
• n_lower (int, optional (default: 10)) – Number of bump functions for the lower surface
• n_upper (int, optional (default: 10)) – Number of bump functions for the upper surface
• fraction (float, optional (default: 0.01)) –
References
# Darlington pair. Purpose of components
Given the following schematic: what is the purpose of R1 and the diodes?
• Not enough context. Post the surrounding circuit, or where you got the schematic from? It looks to me like $R_1$, $R_2$ and the associated diode are there to speed up turning off $T_2$. The diode on the right is either intrinsic (if it's a monolithic Darlington pair) or it's there on purpose because the expected use has the collector voltage falling below the emitter voltage. – TimWescott Dec 10 '18 at 16:25
• That's the whole circuit. I found it as a "better design of a darlington pair". – Baciu Vlad-Eusebiu Dec 10 '18 at 16:27
• R1 and (R2 + diode) are there to speed up the discharging of the Cbe capacitor; they also make sure that leakage current does not turn on the Darlington. – G36 Dec 10 '18 at 16:29
• But I suspect that this left diode should be connected between B and E terminals. – G36 Dec 10 '18 at 16:33
• The left diode speeds up turning off the 2nd transistor when the B terminal is grounded. – Andy aka Dec 10 '18 at 16:34
## Basic College Mathematics (10th Edition)
$1\frac{17}{24}$
$\frac{5}{6}+\frac{7}{8}$ = $\frac{5\times4}{6\times4}+\frac{7\times3}{8\times3}$ (LCM of 6 & 8 is 24) = $\frac{20}{24}+\frac{21}{24}$ = $\frac{20+21}{24}$ (addition of fractions with same denominator) = $\frac{41}{24}$ = $1\frac{17}{24}$
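The same sum can be checked with Python's exact-arithmetic fractions module:

```python
from fractions import Fraction

# 5/6 + 7/8, computed exactly
total = Fraction(5, 6) + Fraction(7, 8)
print(total)                                     # 41/24
# split into whole part and remaining fraction
whole, rem = divmod(total.numerator, total.denominator)
print(whole, Fraction(rem, total.denominator))   # 1 17/24
```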
# Synopsis: More tau leptons than expected
An excess of tau leptons in bottom meson decays signals a puzzling departure from the standard model.
As reported in Physical Review Letters, the BaBar collaboration at SLAC has analyzed a large data set and found an excess of events containing tau leptons in the decay of bottom mesons that doesn’t agree with the predictions of the standard model of particle physics.
BaBar looked for the decays of bottom mesons (a bound state of a bottom quark and a light quark) into a charm meson, a charged lepton, and a neutrino. Compared to a previous analysis, they were able to increase the efficiency with which they identified signal events by more than a factor of $3$. BaBar determined the ratio of those decays that contained tau leptons to those that contained light charged leptons (electrons or muons), obtaining a larger ratio than predicted by the standard model by $3.4$ standard deviations.
This deviation could be due to some new particle, such as a charged Higgs boson, which couples more strongly to heavy particles like taus than to electrons or muons (though BaBar shows that one of the most commonly studied models with a charged Higgs boson does not work). Systematic errors or statistical fluctuations could also give rise to the apparent excess of tau leptons. Finally, it could be that the standard model theoretical prediction BaBar compares their data to will change. In a recent paper (Jon A. Bailey et al., Phys. Rev. Lett. 109 071802 (2012)) researchers recalculated one of the theoretical inputs into that prediction, and their results reduce the discrepancy BaBar finds to $3.2$ standard deviations. – Robert Garisto
# Probabilistic Programming
## Rapid, easy model development
BAli-Phy contains a simple language for expressing probabilistic models as programs. Inference on parameters can then be performed automatically. Such languages are called probabilistic programming languages (PPL). Other well-known PPLs include BUGS, RevBayes, and Stan. The goal of the language is to allow researchers to spend their time designing models instead of designing new inference programs. The inference should take care of itself after the model is specified.
#### Language properties
The modeling language is a functional language, and uses Haskell syntax. Features currently implemented include:
1. Functions work, and can be used to define random variables.
2. Modules work, and allow code to be factored in clean manner.
3. Packages work, and allow researchers to distribute their work separately from the BAli-Phy architecture.
4. Optimization works, and speeds up the code written by the user by using techniques such as inlining.
5. Composite Objects work, and can be used to define random data structures.
6. Random control flow works, allowing if-then-else and loops that depend on random variables.
Features that are expected to be completed during 2019 include:
• Random Trees. Random processes on trees that are not known in advance. (partially implemented)
• Rooted Trees. Rooted trees implemented as a data structure within the language. (partially implemented)
• JSON logging. This enables logging inferred parameters when their dimension and number is not fixed. (partially implemented)
• Random numbers of random variables. Random variables can be conditional created, without the need for reversible-jump methods.
• Lazy random variables. Infinite lists of random variables can be created. Random variables are only instantiated if they are accessed.
• Type checking. Type checking will enable polymorphism and give useful error messages for program errors.
## Examples
### Simple linear regression
Here is a short program that performs linear regression. The program samples variables from their prior distribution using the sample function, and then conditions on the data using the observe function.
import Data.ReadFile
import Distributions
import System.Environment
main = do
  b <- sample $ normal 0.0 1.0
  a <- sample $ normal 0.0 1.0
  s <- sample $ exponential 1.0
  let f x = b*x + a
  sequence_ [observe y (normal mu_y s) | (x,y) <- zip xs ys, let mu_y = f x]
  return $ log_all [b %% "b", a %% "a", s %% "s"]
#### Interpretation:
• let f x = b*x + a defines the prediction function f.
• b <- sample $ normal 0.0 1.0 places a prior on the slope of the line.
• observe y (normal mu_y s) says that the data y comes from a normal distribution with mean mu_y and standard deviation s.
You can find this file here and run it as bali-phy -m LinearRegression.hs --iter=1000. (The method of reading from files "xs" and "ys" here is kind of a hack. A high-quality interface for reading from CSV files will be easy to implement after type polymorphism is implemented.)
### Random data structures and Recursion
The iid distribution returns a list of random values from another distribution. We can apply the map and sum operations to such lists to sample a sum of squares.
module Demo2 where
import Distributions
main = do
  xs <- sample $ iid 10 (normal 0.0 1.0)
  let ys = map (\x -> x*x) xs
  return $ log_all [xs %% "xs", ys %% "squares", sum ys %% "sum"]
Here (\x -> x*x) describes an un-named function that takes an argument x and returns x*x. (Currently the number 10 of i.i.d. normal variables cannot be random. Soon we will allow random numbers of random variables, and this restriction will be relaxed.)
### Random control flow I: if-then-else
The modeling language can handle graphs that change. One thing that leads to a changing graph is control-flow statements like if-then-else:
module Demo3 where
import Distributions
main = do
  i <- sample $ bernoulli 0.5
  y <- sample $ normal 0.0 1.0
  let x = if (i==1) then y else 0.0
  return $ log_all [i %% "i", x %% "x"]
### Random control flow II: arrays with random subscripts
Traditional graphical modelling languages, like BUGS, allow arrays of random variables. However, they do not allow selecting a random element of these arrays. Dynamic graphs allow random subscripts. This can be used to divide observations into categories. Here different elements of ys will be exactly equal, if they belong to the same category:
module Demo4 where
import Distributions
main = do
  xs <- sample $ iid 10 (normal 0.0 1.0)
  categories <- sample $ iid 10 (categorical (replicate 10 0.1))
  let ys = [xs!!(categories!!i) | i <- [0..9]]
  return $ log_all [ys %% "ys"]

Haskell uses the !! operator to subscript a list. The C equivalent of xs!!(categories!!i) would be something like xs[categories[i]].

### Random sampling via recursive functions: Brownian Bridge

We can use recursive functions to randomly sample lists of random values. Here, we define a function random_walk that produces a list of random values starting from x0.

module Demo5 where

import Distributions

random_walk x0 n f | n < 1 = error "Random walk needs at least 1 element"
                   | n == 1 = return [x0]
                   | otherwise = do
                       x1 <- sample $ f x0
                       xs <- random_walk x1 (n-1) f
                       return (x0:xs)
-- 20 element brownian bridge
main = do
  zs <- random_walk 0.0 19 (\mu -> normal mu 1.0)
  observe 1.5 $ normal (last zs) 1.0
  return $ log_all [ zs %% "zs" ]
The argument f is a function. In Haskell, we write f x instead of f(x) to apply a function. Here, f x gives the distribution of the point after x.
The observe command specifies observed data. Here we observe that the next point after the last element of zs is 1.5. This constrains the random walk to end at 1.5, creating a Brownian Bridge.
comments and suggestions: benjamin . redelings * gmail + com
# qml.transforms.merge_rotations
merge_rotations = <function merge_rotations>[source]
Quantum function transform to combine rotation gates of the same type that act sequentially.
If the combination of two rotations produces an angle that is close to 0, neither gate will be applied.
Parameters
• qfunc (function) – A quantum function.
• atol (float) – After fusion of gates, if the fused angle $$\theta$$ is such that $$|\theta|\leq \text{atol}$$, no rotation gate will be applied.
• include_gates (None or list[str]) – A list of specific operations to merge. If set to None (default), all operations with the is_composable_rotation attribute set to True will be merged. Otherwise, only the operations whose names match those in the list will undergo merging.
Returns
the transformed quantum function
Return type
function
Example
Consider the following quantum function.
def qfunc(x, y, z):
    qml.RX(x, wires=0)
    qml.RX(y, wires=0)
    qml.CNOT(wires=[1, 2])
    qml.RY(y, wires=1)
    qml.Hadamard(wires=2)
    qml.CRZ(z, wires=[2, 0])
    qml.RY(-y, wires=1)
    return qml.expval(qml.PauliZ(0))
The circuit before optimization:
>>> dev = qml.device('default.qubit', wires=3)
>>> qnode = qml.QNode(qfunc, dev)
>>> print(qml.draw(qnode)(1, 2, 3))
0: ───RX(1)──RX(2)──────────╭RZ(3)──┤ ⟨Z⟩
1: ──╭C──────RY(2)──RY(-2)──│───────┤
2: ──╰X──────H──────────────╰C──────┤
By inspection, we can combine the two RX rotations on the first qubit. On the second qubit, we have a cumulative angle of 0, and the gates will cancel.
>>> optimized_qfunc = merge_rotations()(qfunc)
>>> optimized_qnode = qml.QNode(optimized_qfunc, dev)
>>> print(qml.draw(optimized_qnode)(1, 2, 3))
0: ───RX(3)─────╭RZ(3)──┤ ⟨Z⟩
1: ──╭C─────────│───────┤
2: ──╰X──────H──╰C──────┤
It is also possible to explicitly specify which rotations merge_rotations should merge using the include_gates argument. For example, if in the above circuit we wanted to merge only the RX gates, we could do so as follows:
>>> optimized_qfunc = merge_rotations(include_gates=["RX"])(qfunc)
>>> optimized_qnode = qml.QNode(optimized_qfunc, dev)
>>> print(qml.draw(optimized_qnode)(1, 2, 3))
0: ───RX(3)─────────────────╭RZ(3)──┤ ⟨Z⟩
1: ──╭C──────RY(2)──RY(-2)──│───────┤
2: ──╰X──────H──────────────╰C──────┤
This is “Linear Inequalities (Two Variables)”, section 3.8 from the book Beginning Algebra (v. 1.0). For details on it (including licensing), click here.
## 3.8 Linear Inequalities (Two Variables)
### Learning Objectives
1. Identify and check solutions to linear inequalities with two variables.
2. Graph solution sets of linear inequalities with two variables.
## Solutions to Linear Inequalities
We know that a linear equation with two variables has infinitely many ordered pair solutions that form a line when graphed. A linear inequality with two variablesAn inequality relating linear expressions with two variables. The solution set is a region defining half of the plane., on the other hand, has a solution set consisting of a region that defines half of the plane.
For the inequality, the line defines one boundary of the region that is shaded. This indicates that any ordered pair that is in the shaded region, including the boundary line, will satisfy the inequality. To see that this is the case, choose a few test pointsA point not on the boundary of the linear inequality used as a means to determine in which half-plane the solutions lie. and substitute them into the inequality.
Also, we can see that ordered pairs outside the shaded region do not solve the linear inequality.
The graph of the solution set to a linear inequality is always a region. However, the boundary may not always be included in that set. In the previous example, the line was part of the solution set because of the “or equal to” part of the inclusive inequality $≤$. If we have a strict inequality $<$, we would then use a dashed line to indicate that those points are not included in the solution set.
Consider the point (0, 3) on the boundary; this ordered pair satisfies the linear equation. It is the “or equal to” part of the inclusive inequality that makes it part of the solution set.
So far, we have seen examples of inequalities that were “less than.” Now consider the following graphs with the same boundary:
Given the graphs above, what might we expect if we use the origin (0, 0) as a test point?
Try this! Which of the ordered pairs (−2, −1), (0, 0), (−2, 8), (2, 1), and (4, 2) solve the inequality $y>−\frac{1}{2}x+2$?
Answer: (−2, 8) and (4, 2)
## Graphing Solutions to Linear Inequalities
Solutions to linear inequalities are a shaded half-plane, bounded by a solid line or a dashed line. This boundary is either included in the solution or not, depending on the given inequality. If we are given a strict inequality, we use a dashed line to indicate that the boundary is not included. If we are given an inclusive inequality, we use a solid line to indicate that it is included. The steps for graphing the solution set for an inequality with two variables are outlined in the following example.
Example 1: Graph the solution set: $y>−3x+1$.
Solution:
Step 1: Graph the boundary line. In this case, graph a dashed line $y=−3x+1$ because of the strict inequality. By inspection, we see that the slope is $m=−3=\frac{−3}{1}=\frac{\text{rise}}{\text{run}}$ and the y-intercept is (0, 1).
Step 2: Test a point not on the boundary. A common test point is the origin (0, 0). The test point helps us determine which half of the plane to shade.
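For instance, carrying out the substitution with the origin:

$y>−3x+1 \quad\Longrightarrow\quad 0\overset{?}{>}−3(0)+1 \quad\Longrightarrow\quad 0>1 \quad \text{False}$

Since the statement is false, (0, 0) is not a solution.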
Step 3: Shade the region containing the solutions. Since the test point (0, 0) was not a solution, it does not lie in the region containing all the ordered pair solutions. Therefore, shade the half of the plane that does not contain this test point. In this case, shade above the boundary line.
Consider the problem of shading above or below the boundary line when the inequality is in slope-intercept form. If $y>mx+b$, then shade above the line. If $y<mx+b$, then shade below the line. Use this with caution; sometimes the boundary is given in standard form, in which case these rules do not apply.
Example 2: Graph the solution set: $2x−5y≥−10$.
Solution: Here the boundary is defined by the line $2x−5y=−10$. Since the inequality is inclusive, we graph the boundary using a solid line. In this case, graph the boundary line using intercepts.
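For this boundary, the intercepts come from setting one variable to zero at a time:

$x\text{-intercept: } 2x−5(0)=−10 \;\Longrightarrow\; x=−5, \text{ giving } (−5, 0)$

$y\text{-intercept: } 2(0)−5y=−10 \;\Longrightarrow\; y=2, \text{ giving } (0, 2)$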
Next, test a point; this helps decide which region to shade.
Since the test point is in the solution set, shade the half of the plane that contains it.
In this example, notice that the solution set consists of all the ordered pairs below the boundary line. This may be counterintuitive because of the original ≥ in the inequality. This illustrates that it is a best practice to actually test a point. Solve for y and you see that the shading is correct.
In slope-intercept form, you can see that the region below the boundary line should be shaded. An alternate approach is to first express the boundary in slope-intercept form, graph it, and then shade the appropriate region.
Example 3: Graph the solution set: $y<2$.
Solution: First, graph the boundary line $y=2$ with a dashed line because of the strict inequality.
Next, test a point.
In this case, shade the region that contains the test point.
Try this! Graph the solution set: $5x−y≤10$.
### Key Takeaways
• Linear inequalities with two variables have infinitely many ordered pair solutions, which can be graphed by shading in the appropriate half of a rectangular coordinate plane.
• To graph the solution set of a linear inequality with two variables, first graph the boundary with a dashed or solid line depending on the inequality. If given a strict inequality, use a dashed line for the boundary. If given an inclusive inequality, use a solid line. Next, choose a test point not on the boundary. If the test point solves the inequality, then shade the region that contains it; otherwise, shade the opposite side.
• When graphing the solution sets of linear inequalities, it is a good practice to test values in and out of the solution set as a check.
### Topic Exercises
Part A: Solutions to Linear Inequalities (Two Variables)
Is the ordered pair a solution to the given inequality?
1. $y<5x+1$; (0, 0)
2. $y>−\frac{1}{2}x−4$; (0, −2)
3. $y≤\frac{2}{3}x+1$; (6, 5)
4. $y≥−\frac{1}{3}x−5$; (−3, −8)
5. $y<\frac{1}{5}x−\frac{1}{3}$; $\left(−\frac{1}{3}, −1\right)$
6. $4x−3y≤2$; (−2, −1)
7. $−x+4y>7$; (0, 0)
8. $7x−3y<21$; (5, −3)
9. $y>−5$; (−3, −1)
10. $x≤0$; (0, 7)
Part B: Graphing Solutions to Linear Inequalities
Graph the solution set.
11. $y<−3x+3$
12. $y<−\frac{2}{3}x+4$
13. $y≥−\frac{1}{2}x$
14. $y≥\frac{4}{5}x−8$
15. $y≤8x−7$
16. $y>−5x+3$
17. $y>−x+4$
18. $y>x−2$
19. $y≥−1$
20. $y<−3$
21. $x<2$
22. $x≥2$
23. $y≤\frac{3}{4}x−\frac{1}{2}$
24. $y>−\frac{3}{2}x+\frac{5}{2}$
25. $−2x+3y>6$
26. $7x−2y>14$
27. $5x−y<10$
28. $x−y<0$
29. $3x−2y≥0$
30. $x−5y≤0$
31. $−x+2y≤−4$
32. $−x+2y≤3$
33. $2x−3y≥−1$
34. $5x−4y<−3$
35. $\frac{1}{2}x−\frac{1}{3}y<1$
36. $\frac{1}{2}x−\frac{1}{10}y≥\frac{1}{2}$
37. $x≥−2y$
38. $x<2y+3$
39. $3x−y+2>0$
40. $3−y−2x<0$
41. $−4x≤12−3y$
42. $5x≤−4y−12$
43. Write an inequality that describes all points in the upper half-plane above the x-axis.
44. Write an inequality that describes all points in the lower half-plane below the x-axis.
45. Write an inequality that describes all points in the half-plane left of the y-axis.
46. Write an inequality that describes all points in the half-plane right of the y-axis.
47. Write an inequality that describes all ordered pairs whose y-coordinates are at least 2.
48. Write an inequality that describes all ordered pairs whose x-coordinate is at most 5.
1: Yes
3: Yes
5: Yes
7: No
9: Yes
11–41 (odd): graphs (images not reproduced here)
43: $y>0$
45: $x<0$
47: $y≥2$
# Alice and Bob moving in a circular ring of radius $R$
Alice and Bob are moving in opposite direction around a circular ring of Radius $R$, which is at rest in an inertial frame. Both move with constant speed $V$ as measured in that frame. Each carries a clock, which they synchronize to zero time at a moment when they are at the same position on the ring. Bob predicts that when they next meet, Alice's clock will read less than his because of the time dilation arising because she is moving relative to him. Alice predicts that Bob's clock will read less with the same reason. They both can't be right. What's wrong with their arguments? What will the clocks really read?
I have tried to answer it: they are both wrong. Since they are both moving at the same speed, they will each cover the same distance ($\pi R$) before meeting, so they will be at the same place at the same time. But I am not sure about my reasoning. Then the clock reading will be $\Delta t=\sqrt{1-\frac{V^2}{c^2}}\Delta t_B.$
Can any one give me a clear reasons on What's wrong with their arguments?
• So I think part of the issue here may be that neither Alice nor Bob is an inertial observer. They are both accelerating radially inward with acceleration V^2/R, so regular old special relativity isn't going to be valid in this problem. It seems clear to me by symmetry that they will read the same time when they pass next. What that time will be compared to a stationary observer, I have no clue (though obviously it ought to be less than the outside observer due to time dilation effects). – mhodel Mar 28 '14 at 2:27
Both move with constant speed V as measured in that frame.
Constant speed but not constant velocity; both Alice and Bob are accelerated, i.e., each observes the other's accelerometer to read non-zero so standard SR reasoning doesn't apply.
However, their worldlines, between the initial event and the event they next meet, are congruent thus, the proper time along each worldline is identical so their clocks will agree when they next meet.
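Since each of them covers half the circumference, $\pi R$, at the same speed $V$, both proper times between meetings equal $\sqrt{1-V^2/c^2}\,\pi R/V$. A quick numerical sketch of this (function name and sample values are illustrative, not from the original thread):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def proper_time_half_lap(R, V):
    """Proper time elapsed on a clock moving at constant speed V while
    covering half the ring's circumference (coordinate time pi*R/V)."""
    coordinate_time = math.pi * R / V
    return math.sqrt(1.0 - (V / C) ** 2) * coordinate_time

# Alice and Bob cover the same arc length at the same speed, so their
# proper times between meetings are identical.
R, V = 1000.0, 0.6 * C
tau_alice = proper_time_half_lap(R, V)
tau_bob = proper_time_half_lap(R, V)
assert tau_alice == tau_bob
```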
• Is it possible to measure the time dilation of Alice with respect to Bob? or Bob with respect to Alice? without considering the rest frame? – Lembris Mar 29 '14 at 10:09
• How would one go about actually calculating the time elapsed in the two different frames. I.e. calculate how much time elapsed on Alice's Clock from Bob's point of view. They are the same but I think it would be interesting to compute. @AlfredCentauri – ClassicStyle Jun 17 '14 at 4:42
• Should I post a new question for this? – ClassicStyle Jun 17 '14 at 4:43
Each carries a clock, which they synchronize to zero time at a moment when they are at the same position on the ring. Bob predicts that when they next meet, Alice's clock will read less than his because of the time dilation arising because she is moving relative to him. What's wrong [...] ?
There are two things wrong with Bob's argument as presented in the question:
1. There was no explicit assumption or expectation given as to how the proper "rates" of Alice's clock readings and of Bob's clock readings, at least on average between their meetings,
$$\frac{(t_A[ \text{reunion} ] - t_A[ \text{departure} ])}{\Delta \tau_A[ \small{ \text{from departure until reunion}} ]}$$
and
$$\frac{(t_B[ \text{reunion} ] - t_B[ \text{departure} ])}{\Delta \tau_B[ \small{ \text{from departure until reunion}} ]}$$,
ought to be related to each other; e.g. whether these two rates were assumed/expected to be equal to each other, or what else.
(And given nothing more than the initial synchronization of the two clocks, there is of course no reason at all to hold any particular expectation about how these two rates might be related to each other.)
And 2.:
Based on the setup description of the movements of Alice and Bob relative to each other, accounting for "time dilation" as usual, Bob (and Alice, and everyone) would determine
$\Delta \tau_A[ \small{ \text{from departure until reunion}} ] = \Delta \tau_B[ \small{ \text{from departure until reunion}} ] = \sqrt{ 1 - \frac{V^2}{c^2} } \, \pi \frac{R}{V}$.
So that's certainly no reason to predict/expect
$t_A[ \text{reunion} ] < t_B[ \text{reunion} ]$
(where according to the initial synchronization $t_A[ \text{departure} ] = t_B[ \text{departure} ] = 0$ ).
# Omit Canary Hosts Predicate
This extension may be referenced by the qualified name envoy.retry_host_predicates.omit_canary_hosts
Note
This extension is intended to be robust against untrusted downstream traffic. It assumes that the upstream is trusted.
## config.retry.omit_canary_hosts.v2.OmitCanaryHostsPredicate
[config.retry.omit_canary_hosts.v2.OmitCanaryHostsPredicate proto]
{}
## Fatou’s Theorem (Complex Analysis) I
Let $F: \mathbb{D} \rightarrow \mathbb{C}$ be a holomorphic function. What conditions do we have to impose on $F$ on the boundary $\partial\mathbb{D}$ to guarantee convergence (in some sense) to boundary values? We begin by formulating a suitable form of convergence to answer this question.
We say that a function $F: \mathbb{D} \rightarrow \mathbb{C}$ has a radial limit at the point $\theta \in [-\pi,\pi]$ on the circle if the limit
$\displaystyle \lim_{{r \rightarrow 1} \atop{r < 1}} F(re^{i\theta})$
exists.
Before we give sufficient conditions, we need to review some basic results about Fourier series. For $0 \leq r < 1$ and $\theta \in [-\pi,\pi]$, we have by the geometric series formula the following identity:
$\displaystyle \sum_{n=-\infty}^{\infty}r^{\left|n\right|}e^{in\theta} = \frac{1-r^{2}}{1 - 2r\cos(\theta) + r^{2}} =: P_{r}(\theta)$
We call $P_{r}$ the Poisson kernel on the unit disk $\mathbb{D}$.
Proof. Noting that, for $n \in \mathbb{Z}^{\geq 0}$, $(re^{\pm i\theta})^{n} = r^{n}e^{\pm i n \theta}$, we obtain from the geometric series formula that
$\begin{array}{lcl} \sum_{n=-\infty}^{\infty}r^{\left|n\right|}e^{in\theta} = \sum_{n=1}^{\infty}r^{n}e^{-in\theta} + \sum_{n=0}^{\infty}r^{n}e^{in\theta} = \frac{re^{-i\theta}}{1 - re^{-i\theta}} + \frac{1}{1 - re^{i\theta}} &=& \frac{re^{-i\theta}(1 - re^{i\theta}) + (1-re^{-i\theta})}{(1-re^{-i\theta})(1-re^{i\theta})}\\ &=& \frac{re^{-i\theta} - r^{2} + 1 - re^{-i\theta}}{1 - (re^{i\theta} + re^{-i\theta}) + r^{2}}\\ &=& \frac{1-r^{2}}{1 - 2r\cos(\theta) + r^{2}} \end{array},$
since $re^{i\theta} +re^{-i\theta} = 2\text{Real}(re^{i\theta}) = 2r\cos(\theta)$. $\Box$
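As a numerical sanity check of the identity, one can compare a truncated version of the series against the closed form (the truncation level and sample points below are chosen arbitrarily):

```python
import cmath
import math

def poisson_closed_form(r, theta):
    """P_r(theta) = (1 - r^2) / (1 - 2 r cos(theta) + r^2)."""
    return (1 - r**2) / (1 - 2 * r * math.cos(theta) + r**2)

def poisson_series(r, theta, N=200):
    """Truncated series sum over |n| <= N of r^{|n|} e^{i n theta};
    the imaginary parts of the +n and -n terms cancel, so the sum is real."""
    s = sum(r ** abs(n) * cmath.exp(1j * n * theta) for n in range(-N, N + 1))
    return s.real

r, theta = 0.5, 1.0
assert abs(poisson_closed_form(r, theta) - poisson_series(r, theta)) < 1e-9
```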
We use the Poisson kernel to prove the next theorem on the convergence of Fourier series. We will use (without proof) the result that $\left\{P_{r}\right\}_{0 \leq r < 1}$ is an approximate identity.
Theorem 1. Suppose $f \in L^{1}([-\pi,\pi])$. Denote the Fourier coefficients of $f$ by $a_{n}$, $n \in \mathbb{Z}$.
1. If $a_{n} = 0$ for all $n$, then $f = 0$ a.e.
2. $\sum_{n=-\infty}^{\infty}a_{n}r^{\left|n\right|}e^{inx} \rightarrow f(x)$ for a.e. $x$, as $r \rightarrow 1^{-}$.
Proof. (1) is a consequence of (2), so we only show the latter. Changing $f(\pi)$ to $f(-\pi)$ if necessary, we may assume without loss of generality that $f(-\pi) = f(\pi)$, so that $f$ has a clear $2\pi$-periodic extension to all of $\mathbb{R}$. By the dominated convergence theorem and the above identity,
$\begin{array}{lcl}\lim_{{r \rightarrow 1} \atop {0 \leq r < 1}}\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-y)P_{r}(y)dy &=& \lim_{{r \rightarrow 1} \atop {0 \leq r < 1}}\sum_{n=-\infty}^{\infty}\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-y)e^{iny}dy\right)r^{\left|n\right|}\\&=& \lim_{{r \rightarrow 1} \atop {0 \leq r < 1}}\sum_{n=-\infty}^{\infty}r^{\left|n\right|}\left(\frac{1}{2\pi}\int_{x-\pi}^{x+\pi}f(y)e^{in(x-y)}dy\right)\\&=& \lim_{{r \rightarrow 1} \atop {0 \leq r < 1}}\sum_{n=-\infty}^{\infty}a_{n}r^{\left|n\right|}e^{inx} \ \text{a.e.} \end{array},$
where the penultimate equality follows from translation invariance and periodicity. Since the Poisson kernel is an approximate identity,
$\displaystyle \lim_{{r \rightarrow 1} \atop {0 \leq r < 1}}\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-y)P_{r}(y)dy=\lim_{{r \rightarrow 1} \atop {0 \leq r < 1}} (f \ast P_{r})(x) = f(x)$
at every Lebesgue point of $f$, hence almost everywhere. $\Box$
As above, let $a_{n}$ denote the $n^{th}$ Fourier coefficient of a function $f \in L^{1}([-\pi,\pi])$. We now prove some fundamental results for Fourier series of functions in $L^{2}([-\pi,\pi])$.
Theorem 2. Suppose $f \in L^{2}([-\pi,\pi])$.
1. (Parseval’s Identity) $\sum\left|a_{n}\right|^{2}=\frac{1}{2\pi}\int_{-\pi}^{\pi}\left|f(x)\right|^{2}dx$
2. The mapping $f \mapsto (a_{n})_{n \in \mathbb{Z}}$ defines a unitary transformation from $L^{2}([-\pi,\pi])$ to $\ell^{2}(\mathbb{Z})$.
3. $\left\|f - S_{N}(f)\right\|_{L^{2}([-\pi,\pi])} \rightarrow 0$, where $S_{N}(f) := \sum_{\left|n\right| \leq N}a_{n}e^{inx}$.
Proof. Assertions (1), (2), and (3) follow from more general Hilbert space results for complete orthonormal bases. We know that $\left\{e^{inx}\right\}_{n \in \mathbb{Z}}$ is an orthonormal system, so it remains for us to show completeness. Recall that a system is complete if and only if the only element orthogonal to all the elements is $0$. By assertion (1) of Theorem 1,
$\displaystyle \langle{f,e_{n}}\rangle_{L^{2}([-\pi,\pi])} := \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx}dx = 0, \indent \forall n \in \mathbb{Z}$
implies that $f = 0$ a.e. $\Box$
Theorem 3. (Fatou) A bounded holomorphic function $F: \mathbb{D} \rightarrow \mathbb{C}$ has radial limits at almost every $\theta$.
Proof. Since $F$ is holomorphic on $\mathbb{D}$, we can write $F(z) = \sum_{n=0}^{\infty}a_{n}z^{n}$, for $a_{n} \in \mathbb{C}$, where convergence holds absolutely and uniformly for $z = re^{i\theta}$ with $r < 1$ fixed. For $r \in (0,1)$, define a curve $\gamma_{r} : [-\pi,\pi] \rightarrow \mathbb{C}$ by $\gamma_{r}(\theta) = re^{i\theta}$. By the uniqueness of Laurent series and the formula for Laurent coefficients,
$\begin{array}{lcl} & & a_{n} = \frac{1}{2\pi i}\int_{\gamma_{r}}\frac{F(z)}{z^{n+1}}dz = \frac{1}{2\pi i}\int_{-\pi}^{\pi}F(re^{i\theta})r^{-(n+1)}e^{-i(n+1)\theta}i(re^{i\theta})d\theta \\&\Longleftrightarrow& a_{n}r^{n} = \frac{1}{2\pi}\int_{-\pi}^{\pi}F(re^{i\theta})e^{-in\theta}d\theta\end{array}$
Note that the last integral vanishes for $n < 0$, since $F$ is holomorphic on $\mathbb{D}$. Hence, $\sum_{n=0}^{\infty}a_{n}r^{n}e^{in\theta}$ is the Fourier series of the function $\theta \mapsto F(re^{i\theta})$, for $0 \leq r < 1$ fixed.
Let $M > 0$ be such that $\left|F(z)\right| \leq M$ for all $z \in \mathbb{D}$. By Parseval’s identity,
$\displaystyle \sum_{n=0}^{\infty}\left|a_{n}\right|^{2}r^{2n} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|F(re^{i\theta})\right|^{2}d\theta, \indent \forall 0 \leq r < 1$
Since $\left|F\right|$ is bounded by $M$, we have $\sum_{n=0}^{\infty}\left|a_{n}\right|^{2}r^{2n} \leq M^{2}$ for every $0 \leq r < 1$; letting $r \rightarrow 1^{-}$, it follows from the monotone convergence theorem that $\sum_{n=0}^{\infty}\left|a_{n}\right|^{2} \leq M^{2}$. Let $F(e^{i\theta})$ denote the $L^{2}([-\pi,\pi])$ function whose Fourier coefficients are $a_{n}$ for $n \geq 0$ and $0$ for $n < 0$. By assertion (2) of Theorem 1 above,
$\displaystyle \sum_{n=0}^{\infty}a_{n}r^{n}e^{in\theta} \rightarrow F(e^{i\theta}),$
for a.e. $\theta$, which completes the proof of the theorem. $\Box$
We define the Hardy space $H^{2}(\mathbb{D})$ to be the set of all holomorphic functions $F: \mathbb{D} \rightarrow \mathbb{C}$ such that
$\displaystyle \sup_{0 \leq r < 1}\frac{1}{2\pi}\int_{-\pi}^{\pi}\left|F(re^{i\theta})\right|^{2}d\theta < \infty$
It follows from Minkowski’s inequality and basic limit properties that
$\displaystyle\left\|F\right\|_{H^{2}(\mathbb{D})} := \left(\sup_{0 \leq r < 1}\frac{1}{2\pi}\int_{-\pi}^{\pi}\left|F(re^{i\theta})\right|^{2}d\theta\right)^{\frac{1}{2}}$
defines a norm on $H^{2}(\mathbb{D})$. We will later see that we can extend the definition of Hardy spaces on the unit disk to arbitrary $p \in (0,\infty)$.
Stay tuned for the second installment on Fatou’s theorem!
# Two limit switch signals affecting each other
I'm trying to make circuit which will allow me to sense some platform presence at the ends of some rail.
I'm using the limit switches, here is schematic:
SENS-* signals are connected to MCU pins, which are configured to trigger an interrupt on a rising edge. When I manually trigger only one of the switches, both interrupts are triggered - the right one first and the wrong one second. It seems like the signals from the switches interfere with each other through the GND line, but I can't figure out how exactly.
How can I prevent the second interrupt from triggering? I feel that there is some well-known solution, but I could not find it.
• What is the outcome when you measure the condition of the pins? – PlasmaHH Jun 18 '15 at 10:00
• @PlasmaHH, What do you mean by condition of the pins? Their logical states? I dont read states of pins directly. I'm using STM32 MCU with an external interrupt controller (EXTI), configured to trigger an interrupt when signal on a pin goes from logical 0 to logical 1. And I'm using that handler routines, not the values from the pins. – Vasilly.Prokopyev Jun 18 '15 at 10:31
• @PlasmaHH, Sorry, now I've got what you meant, I'l try to measure pin states directly – Vasilly.Prokopyev Jun 18 '15 at 10:38
Since you are trying to manage mechanical limit switches using interrupts generated from the edge of the signal be aware that you could get multiple interrupts when the switch first closes. This is caused by the bounce of the switch contacts.
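The usual software fix is to ignore edges that arrive within a short window after the previous one. Here is a sketch of that timing logic (shown in Python for clarity; on an STM32 you would read a millisecond tick such as HAL_GetTick() inside the EXTI handler, and the 20 ms window is an assumed typical value):

```python
DEBOUNCE_MS = 20

class DebouncedSwitch:
    """Accept an edge only if the contacts have been quiet for DEBOUNCE_MS;
    the burst of extra edges caused by contact bounce is discarded."""

    def __init__(self):
        self.last_edge_ms = None

    def edge(self, now_ms):
        # Accept the first edge ever, or any edge after a quiet window.
        accept = (self.last_edge_ms is None
                  or now_ms - self.last_edge_ms >= DEBOUNCE_MS)
        self.last_edge_ms = now_ms
        return accept

sw = DebouncedSwitch()
print(sw.edge(100))  # True  -> first real edge
print(sw.edge(105))  # False -> bounce, within 20 ms of the last edge
print(sw.edge(200))  # True  -> next genuine press
```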
Depending upon how the platform limit switches are used in your application you may want to consider a more robust solution that is commonly used in motion control systems. Each end of travel uses two sets of sensing. The first one that the platform would trigger near the end of travel would be one that does not use a mechanical switch. Instead it may be an optical interrupter or an indexing pulse of an optical encoder. Such signal is your logical end of travel and will be much more repeatable in terms of position detect. In fact with a controlled positioner this may be repeatable to the same position each time.
The second limit to be triggered would be the mechanical switches as you have now, but connected as an emergency cutout. Such contacts would be wired directly to disable the drive signals to the platform positioner. Under normal operating conditions these contacts would never trigger, and there would be some range of platform motion between the logical trigger and the point where these emergency switches would engage.
Mechanical switches have a lot of hysteresis in them and will not give a repeatable end of travel position reading when you are using a servo or stepper to control the platform motion.
One final note on the emergency cutout switches. The limit cutout switch for the (+) direction wants to only cut out the drive control of (+) direction stepping. Likewise for the (-) direction of the platform. This is important because it still allows the control system to move the platform away from the emergency stop position to regain control of the platform in recovery positioning.
• Thank you, I wanted to delete question, but this answer... My project is easy and cheap DIY project, so there is no need for any precise or robust approaches. Still, could you provide me with an example of mentioned optical sensor? – Vasilly.Prokopyev Jun 18 '15 at 16:44
Thanks to PlasmaHH's comment, I found the problem - there was a solder error in my circuit. Now everything works.
zbMATH — the first resource for mathematics
Representations and characters of groups. 2nd ed. (English) Zbl 0981.20004
Cambridge: Cambridge University Press. viii, 458 p. (2001).
The first edition of the book (1993) was very well received, so that it was reprinted in 1995. This reviewer wrote the review on that (Zbl 0792.20006). Now in 2001, a second edition has been published. In comparison to the first edition, substantial additions have been made: In chapter 20 on Clifford’s theorem on normal subgroups, in chapter 23 on the Brauer-Fowler theorem, in the former chapter 28 on remarks regarding Brauer’s centralizer-of-an-involution techniques. Two new chapters have been added: chapter 28 on the character table of $$\text{GL}(2,p)$$ and chapter 29 on permutations and characters.
On the whole, this reviewer regards the second edition of the book as a gem for studying representation theory of finite groups.
MSC:
20C15 Ordinary representations and characters 20-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to group theory
# Traxxas Stampede VXL
This vehicle was chosen to understand how a Pixhawk could be used for wheeled platforms. We chose to use a Traxxas vehicle as they are very popular and it is a very strong brand in the RC community. The idea was to develop a platform that allows for easy control of wheeled UGVs with an autopilot.
## Assembly
The assembly consists of a wooden frame on which all the autopilot parts were attached. Tests showed that better vibration isolation should be used, especially for the Pixhawk and the Flow module.
For this particular mounting we chose to use the clip supplied with the rover to attach the upper plate. For this, two supports were 3D printed. The CAD files are provided here.
It is HIGHLY RECOMMENDED to set the ESC in training mode (see the Traxxas Stampede manual) so as to reduce the power to 50%.
## Output Connections
| PWM Output | Actuator |
| --- | --- |
| MAIN2 | Steering servo |
| MAIN4 | ESC input |
As documented in the Airframe reference here.
## Configuration
Rovers are configured using QGroundControl in the same way as any other vehicle.
The main rover-specific configuration is setting the correct frame:
1. Switch to the Basic Configuration section in QGroundControl
2. Select the Airframe tab.
3. Scroll down the list to find the Rover icon.
4. Choose Traxxas stampede vxl 2wd from the drop down list.
## Usage
At the current time, PX4 only supports MISSION and MANUAL modes when an RC remote is connected. To use mission mode, first upload a new mission to the vehicle with QGC. Then, BEFORE ARMING, select MISSION and then arm.
It is VERY IMPORTANT to do a mission composed ONLY of normal waypoints (i.e. NO TAKEOFF WAYPOINTS) and it is crucial to SET THE WAYPOINT HEIGHT OF EVERY WAYPOINT TO 0 for a correct execution. Failing to do so will cause the rover to continuously spin around a waypoint.
A correct mission setup looks as follows:
|
Match the following with respect to algorithm paradigms :
$\begin{array}{clcl} & \textbf{List-I} & {} & \textbf{List-II} \\ \text{a.} & \text{Merge sort} & \text{i.} & \text{Dynamic programming} \\ \text{b.} & \text{Huffman coding} & \text{ii.} & \text{Greedy approach} \\ \text{c.} & \text{Optimal polygon triangulation} & \text{iii.} & \text{Divide and conquer} \\ \text{d.} & \text{Subset sum problem} & \text{iv.} & \text{Back tracking} \\ \end{array}$
$\textbf{Codes :}$
1. $\text{a-iii, b-i, c-ii, d-iv}$
2. $\text{a-ii, b-i, c-iv, d-iii}$
3. $\text{a-ii, b-i, c-iii, d-iv}$
4. $\text{a-iii, b-ii, c-i, d-iv}$
### 1 comment
Merge sort - Divide and conquer
Huffman coding -Greedy approach
Optimal polygon triangulation - Dynamic programming
Subset sum problem - Back tracking
it is straight from book and no explanation required i think
Merge sort -> D&C
Huffman ->Greedy
Subset Sum->
We already have the answer: only option D matches these first two,
so D is the correct answer.
Ans will be 4
Merge sort -----> iii Divide and conquer
Huffman coding -----> ii Greedy approach
Optimal polygon triangulation -------> i. Dynamic Programming
Subset sum problem -------> iv. Back tracking
Greedy algorithm is an algorithm that follows the problem solving mechanism of making the locally optimal solution at each stage with thinking of finding a global optimum solution for the problem and In Huffman coding in every stage we try to find the the prefix free binary code and try to minimize expected code word for optimum solution for compress the data.
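To illustrate that greedy mechanism, here is a minimal Huffman construction (the symbol frequencies are the classic textbook example; it returns code lengths rather than full codewords for brevity):

```python
import heapq

def huffman_code_lengths(freqs):
    """Greedy Huffman construction: repeatedly merge the two least
    frequent subtrees. Returns each symbol's code length (tree depth)."""
    # heap entries: (weight, tiebreak, {symbol: depth_so_far})
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        # merging pushes every symbol in both subtrees one level deeper
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_code_lengths({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})
print(lengths['a'])  # → 1 : the most frequent symbol gets the shortest code
```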
just match first 2..
Merge sort is a sorting technique based on the divide and conquer technique. With worst-case time complexity Ο(n log n), it is one of the most respected algorithms. Merge sort first divides the array into equal halves and then combines them in a sorted manner.
We can do this question simply by elimination:
Merge sort is based on Divide and Conquer, because it first divides and then merges,
so options 2 and 3 are eliminated.
Huffman coding is a greedy approach,
so option 1 is eliminated.
Answer is option D
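To see why the last pairing in option D makes sense, here is a minimal backtracking search for subset sum (the example numbers are arbitrary):

```python
def subset_sum(nums, target):
    """Backtracking search: at each index, either include the current
    number or skip it, and backtrack when the branch cannot succeed."""
    def backtrack(i, remaining):
        if remaining == 0:
            return True
        if i == len(nums) or remaining < 0:
            return False
        # try including nums[i]; if that branch fails, exclude it
        return backtrack(i + 1, remaining - nums[i]) or backtrack(i + 1, remaining)
    return backtrack(0, target)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # → True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # → False
```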
# Game Actor and System Architecture
Hey guys, I've been looking through some books and online on the topic of game engine architectures and how actors factor in. A big one was from this thread right here (http://www.gamedev.net/topic/617256-component-entity-model-without-rtti/page-2). The way I understood it is like this:
Actor Component: Defines some relatively independent data that represents some isolated attribute of a larger being. For example, an actor for a gun might have a component for the gun's model, a component for the amount of ammo, and a component for the damage properties of the gun.
Actor: A list of actor components.
System: Runs the game logic and has a list of actors on which the system operates. For example, the physics system has a list of actors that have a physics object, which it uses to check for collisions; it notifies the actors and their components when a collision happens.
This is where things get kind of shady. A system is supposed to carry out game logic but it doesn't make sense for all the game logic to be done in a system. Using the physics system example, it makes sense for the system to find collisions but when a collision happens, it doesn't always mean calculate the reflection of both objects. Sometimes, I might be colliding with ammo so I should be picking it up instead. Stuff like that doesn't make sense to be done in the system but rather in the actor/their components.
This works nice but then it makes defining the components a bit more iffy. If the ammo actor is supposed to have some way of reacting to a collision, how does the physics system know which component it should be looking for? There might only be one type of component that is a physics collision model which could describe the collision model for the ammo, but that same component could be used for a rigid body on another actor which should react by physics laws to a collision.
So the way I understand it, here is how it roughly looks right now:
    class Actor
    {
        // pointers, since IActorComponent is an abstract base class
        std::vector<IActorComponent*> m_ActorComponents;
    };

    class IActorComponent
    {
        // will be overridden and will have some new properties
        virtual bool VInit();
        virtual bool VDestroy();
    };

    class ISystem
    {
        virtual void VInit();
        virtual void VUpdate(unsigned int deltaMs);
        virtual void VDestroy();
    };
And here is an implementation:
    class CollisionModelComponent : public IActorComponent
    {
        std::vector<Vertices> m_VertexArray;
    };

    class PhysicsSystem : public ISystem
    {
        std::list<Actor*> m_Actors;

        void VUpdate(unsigned int deltaMs)
        {
            for every actor
            {
                if actor collided
                {
                    // What do we look for here? How do we know to run the
                    // ammo collision response or the rigid body response?
                }
            }
        }
    };
You could make a collision response actor component which tells the physics system how to respond to a collision, but then you have an issue where the ammo collision response has to have access to the ammo component.
In my code, the actors are created from xml files, and each actor is created the same way through a factory class. In it, I loop through all the nodes of an xml file and apply the properties to the given component at hand. All components override the virtual VInit function, which takes no parameters. If I wanted to create a dependency between the ammo component and the collision response component, I would need to somehow pass the ammo instance to the collision response through the init, but not all components need a dependency, so it doesn't make sense to have it pass a pointer to some actor component through VInit by default. There could also be cases where we have multiple dependencies, which complicates the process.
Is there another way to do this, or some way to restructure or apply constraints in order to make this architecture work? It's a really clean architecture if one were able to make everything separable. Any help?
I haven't read (yet) the thread you've cited. The following is just the approach I'm following in my own implementation of CES (which uses sub-systems, and allows for both data components and behavior components, the latter as some kind of plug-in for sub-systems).
Actor entities can be equipped with components that belong to perception. Besides visual and aural perception (I'm playing with thoughts about olfactory perception), there is also touch perception. All of them are used for AI purposes, but the latter one is also used to apply physical damage. Conceptually, the interaction of a stimulus (something that can be perceived) and a receptor (something that detects stimuli and assesses them against thresholds) results in an actual perception.
So this concept distinguishes collision for touch perception purposes from general rigid body collision. It introduces a specific group of colliders (the geometry of the touch stimuli, usually line-strips) and collidees (the geometry of touch receptors, usually some bounding volumes), between which a unilateral collision detection has to be done. The immediate result of collision detection is just to hand the stimulus over to the receptor for further processing. The linked-in damage sub-system isn't sufficiently parametrized with the boolean "there is a collision". Instead, it uses attributes of the stimulus together with "the stats" to compute the resulting damage (which may perhaps be zero).
I'm running a sub-system called SpatialServices. This sub-system allows for unilateral collision detection (and proximity detection, e.g. for the other senses) as described above. The physics sub-system, perhaps a black box, isn't used for this purpose.
Hope that helps a bit
Edited by haegarr
What I have done for collision-type work is, first, define what objects each entity can collide with (collision meaning they cannot occupy the same space; think top-down 2D game). So, an entity can collide with a bullet, and it can collide with a wall. It CAN'T collide with a teleporter, but it can use it if it's touching it.
Also, I would create a collision component and give it to each entity that is colliding with another. So, the collision component has a link to the other entity that is being collided with. If it's a bullet hitting a player, then the damage system would remove the bullet entity (and thus the collision entities), and damage the player entity.
If it's a wall hitting a player, there isn't a system defined for it, so the physics system just handles it and does the proper collision response. Once the player stops touching the wall, the collision component is removed between the two.
That might not be the best way to do it, but it should give you an idea of other ways to do this.
Hmm okay I'm starting to see a path through this. The perception idea is super cool but I probably don't need to implement something as complicated as that for my game in terms of touch, sound, smell etc with the stimuli. It is something I will want to look at in the future just because it sounds so cool and plays well with some other ideas I have bouncing around in my head. Okay so I think I figured out a nice way of organizing everything:
To start off, actors are as always just an id and a collection of components, except now they are a collection of component ids instead of pointers.
Components of each type are instead stored in their own component managers, which might be just a list, a vector or a map. This would allow for future endeavors to compress the components, since we know each component manager only holds variables of one component type. As an example, if my component manager holds orientation components for all actors, it could apply some sort of spatial optimization to all the components or reuse similar coordinates if such compression were ever needed. It also has the capability of being more data oriented, although since my project isn't going to be huge, I'll just leave them as maps. Each component manager only has to implement simple methods like add/destroy component and get component manager type.
Components are the same as always, except now initialization will be a little different. Before, I just had an Init method for the actor to call Init on all components, but now I've added a post-init which can be used to resolve all component dependencies. An orientation component won't have a dependency, so its PostInit method will be empty. But something like a physics collision component could, in post-init, add a dependency on the orientation by going through the owner actor pointer and searching for a component of type orientation. I could also have it add a collision response component (or several) which could be used to resolve collisions in the physics system when they happen. The benefit of post-init is that we know all components are created and we are now just linking them together, sort of like how a program compiles everything and then links.
In PostInit, we could also attach our components to all the systems they have to be attached to. So a physics collision model could check to see if an orientation and a collision response component exist on its current actor. If they do, it can link them up and attach itself to the physics system. Otherwise, it could signal an error or maybe run a default, although I would rather signal an error.
As for BeerNutts' method of solving the collisions, I think with the system I described above you could implement it, since I've sort of wrapped my head around settling dependencies between components. I do have two options however. I could make multiple collision responses for collisions with different types of actors (although this creates an issue, since an actor doesn't really have a type; it's just a bundle of components). Or I could make one large collision response component that handles multiple collision types. Both are a bit weird since an actor doesn't have a type. Would you somehow grab it from the collision model component, which could potentially hold a type, or add a type to each actor by default?
There is another thing that bothers me that might be a bit more physics related, but still something I think needs to be understood by me. Let's say two teleporters collide (yes, this shouldn't ever happen, but there could be other similar situations). Both objects have equal priority in terms of resolving the collision, so which one would take priority and teleport the other to the teleporting location? Since it's very likely two colliding actors will try to somehow resolve a collision, if both collision responses want to modify the other actor, it has to be somehow decided which one gets priority over the other and applies the modification first.
I was also thinking of just having a collision response component that implements rigid body responses to collisions, instead of having it in the physics system. That way, the physics system only ever worries about finding collisions and calling the appropriate collision response calls based on whatever priority. By doing so, the actor components technically implement all the systems while the system manages them to have everything work as a group.
Thanks for all the help so far!!
EDIT: Hmm, looking at it now, I think it's better to skip having the component managers in the first place since, for now, they don't really make a difference other than taking extra implementation time. Maybe in the future I should add them in, but everything else should apply as long as ids are changed to pointers.
Edited by D.V.D
This is where things get kind of shady. A system is supposed to carry out game logic but it doesn't make sense for all the game logic to be done in a system. Using the physics system example, it makes sense for the system to find collisions but when a collision happens, it doesn't always mean calculate the reflection of both objects. Sometimes, I might be colliding with ammo so I should be picking it up instead. Stuff like that doesn't make sense to be done in the system but rather in the actor/their components.
This works nice but then it makes defining the components a bit more iffy. If the ammo actor is supposed to have some way of reacting to a collision, how does the physics system know which component it should be looking for? There might only be one type of component that is a physics collision model which could describe the collision model for the ammo, but that same component could be used for a rigid body on another actor which should react by physics laws to a collision.
Look at the situation from the opposing side and it might make a bit more sense. First, your physics system simply steps your simulation and determines where collisions have happened. It emits those collisions in some fashion for the related actors and it's done. You then have other systems that listen or inspect the actors for these collisions and respond accordingly.
Taking your example, the bullets which are to be collected upon collision are actors maintained in a pickup system because they have a pickup component. Upon a collision situation with these actors, it looks at the collision component to see whom it collided with. If the collision was with a player, it simply adds itself to the player's inventory in question and then signals the actor (bullet) to be removed. In the case of shooting a bullet at another player, this bullet is managed by a damage system because it has a damage component. Upon collision, the damage system determines the collision actor list, calculates the damage and emits the damage caused to the related actors. The damage system then signals the actor (bullet) to be removed.
Now you can add coins and other pickup items that go into the player's inventory by simply giving them the same pickup component. The pickup component can have flags that determine what it does upon pickup (add to inventory, increment ammo counter, increment powerup level, etc). But in the end, the reaction between the actors is identical, collision implies pickup.
Similarly, you can add additional things that do damage such as melee attacks, castable spells, falling rocks, etc. The damage system is then the place that inspects that collision list of actors for a given actor that can do damage, determines the damage done and emits those events to the related actors.
Like with anything in programming, break things down into their smallest yet reasonable interactions and you'll see how quickly things can come together, particularly when you are working with an actor/entity system.
# Quod Erat Demonstrandum
## 2010/01/04
### Definition of the improper integral
Filed under: Pure Mathematics — johnmayhk @ 2:57 p.m.
How should one define
$\int_{-\infty}^{\infty} f(x)dx$
As the symmetric limit
$\displaystyle \lim_{a \rightarrow \infty}\int_{-a}^{a} f(x)dx$
or as the iterated limit
$\int_{-\infty}^{\infty} f(x)dx = \displaystyle \lim_{a \rightarrow -\infty} \displaystyle \lim_{b \rightarrow \infty}\int_{a}^{b} f(x)dx$
Consider
$\int_{-\infty}^{\infty} xdx$
Under the symmetric definition,
$\int_{-\infty}^{\infty} xdx = \displaystyle \lim_{a \rightarrow \infty}\int_{-a}^{a} xdx = 0$ (or, more generally, with any odd function in place of $x$)
yet, truncating asymmetrically,
$\displaystyle \lim_{a \rightarrow \infty}\int_{-a}^{2a} xdx = +\infty$
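Evaluating this last limit explicitly shows why the asymmetric truncation diverges:

$\displaystyle \lim_{a \rightarrow \infty}\int_{-a}^{2a} xdx = \lim_{a \rightarrow \infty}\left[\frac{x^2}{2}\right]_{-a}^{2a} = \lim_{a \rightarrow \infty}\frac{4a^2 - a^2}{2} = \lim_{a \rightarrow \infty}\frac{3a^2}{2} = +\infty$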
So what is $\int_{-\infty}^{\infty} xdx$?
* Don’t worry, the topic of improper integrals is out-of-syllabus now.
## 3 comments »
1. You are using a limit of a Riemann integral to define an improper integral; that's why you see the problem. If you define the improper integral as a Lebesgue integral, the problem is solved because there is no need for the limit. You are just doing integration over the set of all real numbers. Indeed, to make the Lebesgue integral equal the Riemann integral, you need the upper limit and lower limit to be +a and -a respectively before taking the limit. Otherwise, you are weighting the positive part of the real line and the negative part of the real line differently.
Comment by Alienz Thame — 2010/01/05 @ 1:19 a.m. | Reply
• Thank you Alienz Thame!
That means, to match the Lebesgue and Riemann integrals, we MUST take +a and -a to ensure even weighting of the positive and negative parts of the real line.
Comment by johnmayhk — 2010/01/06 @ 5:21 p.m. | Reply
• I am not sure if taking +a and -a is a necessary condition, but surely it is sufficient. In mathematics, dealing with infinity is never intuitive!
Comment by Alienz Thame — 2010/01/07 @ 5:30 a.m.
# Egg timer in Bash
I just started to learn programming, going through a lot of different tutorials and trying out different programming languages, and I stumble over the same sorts of questions all over the place.
Here is a simple egg timer in Bash that I use for cooking.
    #!/bin/bash

    ### variables
    min=$1
    sec=$2
    message=$3

    ### run
    timer=$(($min * 60 + $sec))

    for i in $(seq 0 $timer)
    do
        remain=$(( $timer - $i ))
        echo -ne "$(($remain / 60)):$(($remain % 60))    \r"
        sleep 1
    done
    echo -ne "       \r"
    echo "$message"
    afplay ~/alarm.mp3
This egg timer is very simple and therefore usually reliable. I use it for cooking. Yet when I imagine doing it on a grander scale, there would be so many questions that are negligible in such a small application. But I am not sure they would stay negligible.
1. Is it a good idea to keep the additional variable timer in there? It helps readability, but wouldn't that hold another variable in my RAM? Shouldn't I rather forget about readability and directly calculate it in the echo command? The same goes for the variables set for min, sec and message. Those variables are basically duplicated for readability alone. I could directly use the positional parameters in the script.
2. Using sleep to count down time. As far as I know, things like sleep, wait and pause can be off when the CPU is under heavy usage. Something probably to consider for scripts on my old raspberry, that only has one core. Shouldn't I first calculate the time of the alarm and then use my system time to calculate the time left? Should I do that in every iteration of the loop, which has to come at least every second, or should I only do it every minute or hour?
3. If I check it with system time, should I rather put the checks in interlapping loops than add an if check? It feels like an interlapping loop would take less processing power than using if checks every time it runs.
4. While we're on the question of waiting. How do I find good timings for programs? I mean, let's assume my program needs to check something regularly. Like, a file on the computer. How do I find a good timing for it? Every .5 second would feel pretty responsive. Yet, am I clogging my CPU with unnecessary checks? Is there some rule of thumb, like check 10 times more often for things in your RAM instead of things on your HD?
Like I said, these programs won't suffer very much from such things, but I wonder whether I'm building on a wrong premise by trying to learn to code without considering how to write efficient code. Are there any materials you can point me to that answer these sorts of questions, and hopefully a lot more questions I haven't thought of yet?
Is it a good idea to keep the additional variable "timer" in there? It helps readability, but wouldn't that hold another variable in my RAM? Shouldn't I better forget about readability and directly calculate it in the "echo" command. Same goes with the variables set for "min", "sec" and "message". Those variables are basically doubled for readability alone. I could directly use them in the script.
Readability is extremely important. Write programs for people, not for computers. The computer has no problem reading a program that's written ugly. But it's not the computer that might have to search for bugs or implement the next feature, that's going to be a human. Code is read far more than it's written.
As far as RAM goes, the cost of an additional variable is negligible, you can safely ignore it.
Using "sleep" to count down time. As far as I know, things like "sleep", "wait" and "pause" can be off when the CPU is under heavy usage. Something probably to consider for scripts on my old raspberry, that only has one core. Shouldn't I first calculate the time of the alarm and then use my system time to calculate the time left? Should I do that in every iteration of the loop, which has to come at least every second, or should I only do it every minute or hour?
Yes, it will be more accurate to calculate the target end time, and in every iteration recalculate the remaining time to display. It's up to you how you pace the loop. Every second seems fine, with a sleep 1 like it is now.
If I check it with system time, should I rather put the checks in interlapping loops than add an if check? It feels like an interlapping loop would take less processing power than using if checks every time it runs.
I don't really get what an "interlapping" loop is, but in any case, you only need one loop, something like this: (see at the bottom a complete implementation)
    target_time=...
    while :; do
        current_time=$(date +%s)
        ((current_time >= target_time)) && break
        print_remaining_time current_time target_time
    done

While we're on the question of waiting. How do I find good timings for programs? I mean, let's assume my program needs to check something regularly. Like, a file on the computer. How do I find a good timing for it? Every .5 second would feel pretty responsive. Yet, am I clogging my CPU with unnecessary checks? Is there some rule of thumb, like check 10 times more often for things in your RAM instead of things on your HD?

That's a bit too broad to answer. It can depend on your hardware and specific circumstances and requirements. And looping is not always the right thing to do. In the example you gave, waiting for a file, it's better to look for a way to listen on filesystem events, rather than a loop with a sleep.

### Code review

Bash arithmetic can help you simplify a lot. For example, instead of this:

    timer=$(($min * 60 + $sec))
You can write like this:
    ((timer = min * 60 + sec))

That is, you can drop all the $ and use comfortable spacing around operators.

You did not indent the body of your for loop. This is not easy to read. It would be better this way:

    for i in $(seq 0 $timer)
    do
        remain=$(( $timer - $i ))
        echo -ne "$(($remain / 60)):$(($remain % 60))    \r"
        sleep 1
    done
seq is not standard, and therefore not recommended. And you can easily achieve the same thing using for:
    for ((i = 0; i < timer; ++i)); do
You did not validate your input. If the script is called without parameters, the behavior will be odd. It would be more friendly to display a help message to tell the user that something's wrong.
### Suggested implementation
Applying some of the above suggestions (and also borrowing a bit from the answer of @200_success), here is a cleaner, safer, more readable implementation:
    #!/bin/bash

    if test $# -lt 3; then
        echo "usage: $0 minutes seconds message"
        exit 1
    fi

    min=$1
    sec=$2
    message=$3

    ((target_time = $(date +%s) + min * 60 + sec))
    while :; do
        ((current_time = $(date +%s)))
        ((current_time >= target_time)) && break
        ((remain_sec_total = target_time - current_time))
        ((remain_min = remain_sec_total / 60))
        ((remain_sec = remain_sec_total % 60))
        printf '%4d:%02d \r' $remain_min $remain_sec
        sleep 1
    done
    printf '%-7s\n' "$message"
Honestly, this is a pretty good program for a beginner. Using a carriage return so that the same line gets overwritten repeatedly is smart. The program generally works well — not without concerns, but well enough.
1. The $timer variable is optional. You could do without it, since your main objective is to count down, so $remain is more important. But defining $timer is not an altogether bad idea.

2. Strictly speaking, yes, the use of sleep is somewhat problematic, but my main concern is not efficiency. If you were truly concerned about efficiency, you wouldn't be using shell scripting at all, since invoking external commands is so much less efficient than calling functions within a typical programming language. Rather, I'm more concerned with accuracy. sleep N guarantees that the process will be stopped for at least N seconds. Given enough time, the clock could drift. But fixing that problem is rather complicated, so I would just leave it.

3. I'm not sure what you mean by "overlapping loops".

4. Checking every second is fine. Checking every half-second is still within the right ballpark. Checking every two seconds is obviously too infrequent. The problem is, the sleep command doesn't necessarily handle intervals shorter than one second. For example, from the OS X sleep(1) man page:

   The sleep command will accept and honor a non-integer number of specified seconds (with a . character as a decimal point). This is a non-portable extension, and its use will nearly guarantee that a shell script will not execute properly on another system.

   Therefore, I think that a smart design decision would be to keep it simple, and just sleep for one second at a time. If the result is off by 5 seconds out of 5 minutes, the egg will still be OK.

The advice above applies to shell scripting. If I were implementing this in C, I would make a different set of design trade-offs — perhaps aiming more for accuracy than simplicity.

Considering the remarks above, I'd just make minor changes to the code.

    #!/bin/bash

    min="$1"
    sec="$2"
    message="$3"

    for remain in $(seq $((60 * $min + $sec)) -1 0); do
        printf '%4d:%02d\r' $(($remain / 60)) $(($remain % 60))
        sleep 1
    done

    # Ensure that the message is space-padded to at least 7 characters so as to
    # overwrite the "0:00".
    printf '%-7s\n' "$message"

Notably:

• Your for loop was not well formatted.
• Use seq to count down rather than up.
• Use printf(1) instead of echo so as to get the seconds to be zero-padded as necessary. A secondary consideration is that there are many implementations of echo (e.g. the Bash built-in echo interprets its options differently from GNU /bin/echo, which may be different from Solaris /usr/bin/echo), whereas printf is better standardized.
• One smart use of printf can take care of both invocations of echo at the end (one of which is for clearing the line, and the other for actually printing the message).

Overall, I'd say that this is a fine program, and, given the inherent limitations of Bash, the design decisions you have made are reasonable. You're on the right track!

Regarding #2: You can also count the time in advance and then just check whether the current time is still less than the final time.

    #!/bin/bash

    start_time=$(date +%s)
    min=$1
    sec=$2
    message=$3
    end_time=$((start_time + sec + 60 * min))

    until (( current_time >= end_time )); do
        sleep 1
        current_time=$(date +%s)
        echo -ne "$(( end_time - current_time )) seconds remaining..."$'\r'
    done
    echo $'\n'"$message"
You can also have a look at at(1).
## Basic College Mathematics (10th Edition)
$\frac{36}{x} = \frac{30}{100}$
The percent is 30%. The part is 36. The whole is unknown. $\frac{36}{x} = \frac{30}{100}$
# Section 8: Single-slot cache
This section introduces you to the notion of access pattern and a classification of different access patterns. It ends by walking through code for a single-slot cache—code that can be the foundation of your problem set.
A cache is a small amount of fast storage that helps speed up access to slower underlying storage. The underlying storage often has more capacity than the cache, meaning that it holds more data. When discussing caches in general, we use these terms:
• A block is a unit of data storage. Both the underlying storage and the cache are divided into blocks. In some caches, blocks have fixed size; in others, blocks have variable size. Some underlying storage has a natural block size, but in general, the block size is determined by the cache in question.
• Each block of underlying storage has an address. We represent addresses abstractly as integers, though in a specific cache, an address might be a memory address, an offset into a file, a disk position, or even a web page’s URL.
• Each block in the cache is a slot. A slot can either be empty or full. An empty slot has no data, while a full slot contains a cached version of a specific block of underlying storage. A full slot therefore has an associated address, which is the address of that block on underlying storage.
Note that the notion of address corresponds to the unit of access to underlying storage, which might differ from the accesses performed by software. For example, a processor cache for primary memory accesses primary memory in units of cache lines, which are aligned contiguous blocks of 64 or 128 bytes.
EXERCISE. When considering x86-64 general-purpose registers as a cache for words of primary memory (via processor caches), roughly how many slots are there in the cache, how big is a block, and what do cache addresses represent?
EXERCISE. When considering an x86-64 core’s level-1 processor cache as a cache for lines of primary memory, roughly how many slots are there in a cache, and what do cache addresses represent? You may assume a block (a cache line) is 64 aligned contiguous bytes of memory.
## Access patterns
The sequence of addresses accessed in slower storage has enormous impact on cache effectiveness. For instance, repeated reads to the same address (temporal locality of reference) are easy to cache. On the first access, the system must fetch a block from underlying storage into a cache slot, but all future accesses can use the cache. On the other hand, a truly random access sequence is very hard to cache when the underlying storage has large capacity: in expectation, every read will require fetching a block.
We can semi-formalize these issues into concepts called reference strings and access patterns.
A reference string is an ordered sequence of addresses representing access requests. For instance, the reference string 0R, 1R, 2R, 3R represents a sequence of four read requests for blocks—first for block 0, then blocks 1, 2, and 3. The operation (R or W) is often omitted.
An access pattern is a description of a class of reference string. It’s important to know at least these:
• Repeated access pattern: The same address over and over, as in 1, 1, 1, 1, 1, ….
• Sequential access pattern: A contiguous increasing sequence of addresses, as in 1, 2, 3, 4, 5, 6… or 1009, 1010, 1011, 1012, 1013…, with few or no skipped or repeated addresses.
• Reverse-sequential access pattern: A contiguous decreasing sequence of addresses, as in 60274, 60273, 60272, 60271, 60270, ….
• Strided access pattern: A monotonically increasing or decreasing sequence of addresses with uniform distance, as in 0, 4096, 8192, 12288, 16384, …. The distance between adjacent addresses is called the stride.
Sequential and reverse-sequential access are kinds of strided access with stride +1 and −1, respectively, but they are important enough to be assigned their own names. We are more likely to describe an access pattern as strided if it has a large stride.
• Random access pattern: An apparently-unpredictable sequence of addresses spread over some range.
Real reference strings often contain multiple patterns as subsequences. For example, the reference string 0, 1, 2, 3, …, 39998, 39999, 40000, 100000, 99999, 99998, 99997, … 1 consists of sequential access (0–40000) followed by reverse-sequential access (100000–1).
EXERCISE. Sketch an access pattern that contains three distinct subsequences of sequential access.
EXERCISE. Describe a scenario where random access might not be difficult to cache.
## File positions
We now turn from abstract descriptions of reference strings and caches to specific reference strings, as found in file I/O, and a specific type of cache, a standard I/O cache.
Standard I/O caches are intended to speed up access to data read and written from file descriptors, which are the Unix representation of data streams and random-access files. System calls like read and write perform the reading and writing. Standard I/O caches can improve performance by reducing the number of system calls performed.
Besides transferring data from or to a file, read and write also affect the file descriptor’s file position. This is a numeric byte offset representing the current position in the file. A read system call starts reading at the current file position and advances the file position by the number of characters read (and similarly for write). This means that successive calls to read progress sequentially through the file.
The kernel maintains all file positions: user processes can only observe or affect a file position by making a system call, such as read, write, or lseek. The lseek system call modifies the file position without transferring data, and is required to read or write a file out of sequential order.
## strace exercises
Test your knowledge by characterizing access patterns and cache types given some system call traces produced by strace!
Recall that strace is run like this:
```
$ strace -o strace.out PROGRAMNAME ARGUMENTS...
```

This runs PROGRAMNAME with the given ARGUMENTS, but simultaneously snoops on all the system calls PROGRAMNAME makes and writes a readable summary of those system calls to strace.out. You can examine that output using your favorite editor or by running less strace.out.

The first part of an strace comprises boilerplate caused by program startup. It’s usually pretty easy to tell where the boilerplate ends and program execution begins; for example, you might look for the first open calls that refer to data files, rather than program libraries, or the first accesses to standard input and standard output (file descriptors 0 and 1), which are not accessed during program startup.

You can find out about any system call in an strace by reading its manual page (e.g., man 2 read), but it’s best to focus your attention on a handful of system calls that correspond to file access: open, read, write, lseek, close. (File descriptor notes)

In the cs61-sections/storages1 directory, you’ll see a bunch of truncated strace output in files straceNN.out.

EXERCISE. Describe the access pattern for standard input in strace01.out. What block size is being accessed? What access pattern? Is a standard I/O cache likely in use by this program?

EXERCISE. Describe the access pattern for standard input in strace02.out. What block size is being accessed? What access pattern? Is a standard I/O cache likely in use by this program?

HOME EXERCISE. Describe the access patterns for standard output in strace01.out and strace02.out. Are they the same or different? What block sizes are being accessed? What access patterns? Is a standard I/O cache likely in use?

HOME EXERCISES. Describe the access patterns for the other strace*.out files.

## Single-slot cache

A single-slot cache is, as the name suggests, a cache comprising a single slot. The standard I/O cache implemented in most operating systems’ standard I/O libraries has a single slot.
In the rest of the section, we’re going to work on a specific representation of a single-slot I/O cache. The purpose is to explain such caches and to get you used to thinking in terms of file offsets. Here’s the representation.

```cpp
struct io61_file {
    int fd;
    static constexpr off_t bufsize = 4096;   // block size for this cache
    unsigned char cbuf[bufsize];
    // These “tags” are addresses—file offsets—that describe the cache’s contents.
    off_t tag;      // file offset of first byte in cache (0 when file is opened)
    off_t end_tag;  // file offset one past last valid byte in cache
    off_t pos_tag;  // file offset of next char to read in cache
};
```

pos_tag refers to the effective file position for users of io61 functions. That is, the next io61_read or io61_write function should begin at the file offset represented by pos_tag. Note that this is often different from the actual file position!

### Example single-slot cache for reading

To demonstrate how this single-slot cache representation might work, we walk through the file position example with a small bufsize of 4 for clarity. Recall that the goal of a cache for reads is to produce equivalent results to a series of system calls, so our io61_reads should return the same bytes in the same order as the reads above.

### Cache invariants

Here are some facts about this cache representation.

• tag <= end_tag. This is an invariant: an expression that is always true for every cache. You could encode the invariant as an assertion: assert(tag <= end_tag).
• The cache slot contains end_tag - tag bytes of valid data.
• If tag == end_tag, then the cache slot is empty (contains no valid data).
• If tag <= o < end_tag, then byte cbuf[o - tag] corresponds to the byte of file data at offset o.
• end_tag - tag <= bufsize. Another invariant: the cache slot contains space for at most bufsize data bytes.
• tag <= pos_tag && pos_tag <= end_tag. Another invariant: pos_tag lies between tag and end_tag, inclusive.
• The file descriptor’s file position equals end_tag for read caches of seekable files. It equals tag for write caches of seekable files.
• An io61_read or io61_write call should begin accessing the file at the offset equal to pos_tag. Thus, the first byte read by io61_read should come from file offset pos_tag.

### Fill read cache

We strongly recommend that you try these exercises yourself before looking at the solutions. As you implement, remember the invariants: we find the invariants and representation extremely good guidance for a correct implementation.

EXERCISE SS1. Implement this function, which should fill the cache with data read from the file. You will make a read system call.

```cpp
void io61_fill(io61_file* f) {
    // Fill the read cache with new data, starting from file offset `end_tag`.
    // Only called for read caches.

    // Check invariants.
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
    assert(f->end_tag - f->pos_tag <= f->bufsize);

    // Reset the cache to empty.
    f->tag = f->pos_tag = f->end_tag;

    // Recheck invariants (good practice!).
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
    assert(f->end_tag - f->pos_tag <= f->bufsize);
}
```

### Read from read cache (simple version)

EXERCISE SS2. Implement io61_read, assuming that the desired data is entirely contained within the current cache slot.

```cpp
ssize_t io61_read(io61_file* f, unsigned char* buf, size_t sz) {
    // Check invariants.
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
    assert(f->end_tag - f->pos_tag <= f->bufsize);

    // The desired data is guaranteed to lie within this cache slot.
    assert(sz <= f->bufsize && f->pos_tag + sz <= f->end_tag);
}
```

### Read from read cache (full version)

EXERCISE SS3. Implement io61_read without that assumption. You will need to call io61_fill, and will use a loop. You may assume that no call to read ever returns an error. Our tests do not check whether your IO61 library handles errors correctly.
This means that you may assume that no read and write system call returns a permanent error. But the best implementations will handle errors gracefully: if a read system call returns a permanent error, then io61_read should return -1. (What to do with restartable errors is up to you, but most I/O libraries retry on encountering EINTR.)

```cpp
ssize_t io61_read(io61_file* f, unsigned char* buf, size_t sz) {
    // Check invariants.
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
    assert(f->end_tag - f->pos_tag <= f->bufsize);
}
```

### Write to write cache (easy version)

When reading from a cached file, the library fills the cache using system calls and empties it to the user. Writing to a cached file is the converse: the library fills the cache with user data and empties it using system calls.

For a read cache, the cache buffer region between pos_tag and end_tag contains data that has not yet been read, and the region after end_tag (if any) is invalid. There are several reasonable choices for write caches; in these exercises we force pos_tag == end_tag as an additional invariant.

EXERCISE SS4. Implement io61_write, assuming that space for the desired data is available within the current cache slot.

```cpp
ssize_t io61_write(io61_file* f, const unsigned char* buf, size_t sz) {
    // Check invariants.
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
    assert(f->end_tag - f->pos_tag <= f->bufsize);

    // Write cache invariant.
    assert(f->pos_tag == f->end_tag);

    // The desired data is guaranteed to fit within this cache slot.
    assert(sz <= f->bufsize && f->pos_tag + sz <= f->tag + f->bufsize);
}
```

### Flush write cache

EXERCISE SS5. Implement a function that empties a write cache by flushing its data using a system call. As before, you may assume that the system call succeeds (all data is written).

```cpp
void io61_flush(io61_file* f) {
    // Check invariants.
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
    assert(f->end_tag - f->pos_tag <= f->bufsize);

    // Write cache invariant.
    assert(f->pos_tag == f->end_tag);
}
```

### Write to write cache (full version)

EXERCISE SS6. Implement write without the assumption from Exercise SS4. You will call io61_flush and use a loop.

```cpp
ssize_t io61_write(io61_file* f, const unsigned char* buf, size_t sz) {
    // Check invariants.
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
    assert(f->end_tag - f->pos_tag <= f->bufsize);

    // Write cache invariant.
    assert(f->pos_tag == f->end_tag);
}
```

### Seeks

To continue with this implementation offline, try adding an implementation of io61_seek. How should this function change the representation? Think about the invariants!

## Aside: Running blktrace

If you’re interested in exploring the pattern of references made to a physical hard disk, you can try running blktrace, a Linux program that helps debug the disk requests made by an operating system in the course of executing other programs. This is below the level of system calls: a single disk request might be made in response to multiple system calls, and many system calls (such as those that access the buffer cache) do not cause disk requests at all. Install blktrace with sudo apt install blktrace (on Ubuntu), and run it like this:

$ df .   # figure out what disk you’re on
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 61897344 18080492 41107316 31% /
$ sudo sh -c "blktrace /dev/sda1 &"   # puts machine-readable output in sda1.blktrace.*
$ PROGRAMNAME ARGUMENTS...
$ sudo killall blktrace
=== sda1 ===
  CPU  0:   282 events,   14 KiB data
  CPU  1:   658 events,   31 KiB data
  Total:    940 events (dropped 0),   45 KiB data
$ blkparse sda1 | less   # parses data collected by blktrace
The default output of blkparse has everything anyone might want to know, and as a result is quite hard to read. Try this command line to restrict the output to disk requests (“-a issue”, which show up as lines with “D”) and request completions (“-a complete” and “C”).
\$ blkparse -a issue -a complete -f "%5T.%9t: %2a %3d: %8NB @%9S [%C]\n" sda1 | less
With those arguments, blkparse output looks like this:
0.000055499: D R: 16384B @ 33902656 [bash]
0.073827694: C R: 16384B @ 33902656 [swapper/0]
0.074596596: D RM: 4096B @ 33556536 [bash]
0.090967020: C RM: 4096B @ 33556536 [swapper/0]
0.091078346: D RM: 4096B @ 33556528 [bash]
0.091270951: C RM: 4096B @ 33556528 [swapper/0]
0.091432386: D WS: 4096B @ 24284592 [bash]
0.091619438: C WS: 4096B @ 24284592 [swapper/0]
0.100106375: D WS: 61440B @ 17181720 [kworker/0:1H]
0.100526435: C WS: 61440B @ 17181720 [swapper/0]
0.100583894: D WS: 4096B @ 17181840 [jbd2/sda1-8]
Reading the fields of each line from left to right:

• time of request
• D = request issued to device, C = request completed
• Read or Write (plus other info)
• number of bytes read or written
• disk offset of the read or write (after “@”)
• responsible command (in brackets)
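Returning to the single-slot cache exercises: the sketch below shows one possible shape for io61_fill (SS1) and the simple io61_read (SS2). It is an illustrative solution under the representation given earlier, not the official one, and it omits error handling.

```cpp
#include <cassert>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

struct io61_file {
    int fd;
    static constexpr off_t bufsize = 4096;   // block size for this cache
    unsigned char cbuf[bufsize];
    off_t tag = 0;      // file offset of first byte in cache
    off_t end_tag = 0;  // file offset one past last valid byte in cache
    off_t pos_tag = 0;  // file offset of next char to read in cache
};

// SS1 sketch: empty the cache, then refill it starting at end_tag.
void io61_fill(io61_file* f) {
    f->tag = f->pos_tag = f->end_tag;        // reset the cache to empty
    ssize_t n = read(f->fd, f->cbuf, sizeof f->cbuf);  // one read system call
    if (n > 0) {
        f->end_tag = f->tag + n;             // cache now holds n valid bytes
    }
    // Invariants still hold.
    assert(f->tag <= f->pos_tag && f->pos_tag <= f->end_tag);
}

// SS2 sketch: the desired data is assumed to be resident in the cache.
ssize_t io61_read(io61_file* f, unsigned char* buf, size_t sz) {
    assert(f->pos_tag + (off_t) sz <= f->end_tag);
    // cbuf[o - tag] holds the byte of file data at offset o.
    memcpy(buf, &f->cbuf[f->pos_tag - f->tag], sz);
    f->pos_tag += sz;                        // advance the logical position
    return sz;
}
```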
1. File descriptors that represent infinite streams do not have file positions. ↩︎
2. It is not allowed to return 0 as a “short read”, because 0 always means end of file. ↩︎
# How to figure out what the question is actually asking??
Answer (3 votes):

Hey Kunal, don't get disheartened by seeing other people solve the question in just one attempt. At first all the questions seem tough. If you practice well, you'll get the question AC on the first attempt.

> Please go through my answer on how to get good at problem solving.

Hope it helped! (answered 18 Aug '17, 21:04)

Comments:

• Thank you, I'll try my best. Can you suggest how I can learn algorithms? Do you know any YouTuber who teaches algorithms, since I'm a visual learner? Thank you!! (18 Aug '17, 21:09) kunnu1202★
• YouTube videos by mycodeschool really helped me a lot in learning new algorithms. You can follow them: https://www.youtube.com/playlist?list=PL2_aWCzGMAwI3W_JlcBbtYTwiQSsOTa6P (18 Aug '17, 21:15)
• @sandeep_007 thank you so much for the help, I will check out the videos. (18 Aug '17, 21:16) kunnu1202★
Answer (1 vote):

It depends on many factors, most dominantly how exposed you are to competitive coding. There exists a category of setters (lmao, include me if I start setting problems) who make sure to twist the problem statements thoroughly, so that the contestant isn't able to make head or tail of it and has to go by his confidence and intuition on what is going on. If you look at the question of STRINGRA, the setter left no stone unturned to give people a tough job. Otherwise the problem is very easy, except for the difficulty in untangling it. It will come with practice.

Rainbow Array and Chef and Mover had simple statements; they weren't complicated or hard to understand. If you feel that they weren't clear, then please discuss them here so we can help you. More often than not, I find that it's a "one-directional" or lopsided way of thinking that makes things complicated for a contestant. Also, have a look at good coders' solutions; sometimes that really helps! (answered 18 Aug '17, 21:18)

Comments:

• I was able to solve the "rainbow" question but wasn't able to solve the "chef and mover" problem: https://www.codechef.com/viewsolution/15017438 Can you please look at my code and tell me what was wrong? If you don't have time that's okay, I know it takes time to understand someone's code. But thank you so much for the help!! (18 Aug '17, 21:30) kunnu1202★
• I will, just wait for 30 min plz, there is a contest going on atm. (18 Aug '17, 21:46)
• Check how your code is doing for this test case: 1 5 1 1 1 1 1 6 — the answer is 10 but you are printing only 1. (18 Aug '17, 22:16)
• Thank you so much @vijju123 (18 Aug '17, 23:14) kunnu1202★
• @vijju123 can you please tell me what atm is? the contest?? where? at codechef? Thanks (19 Aug '17, 05:59) kunnu1202★
• Atm is a short form for "at this moment". The contest was on Codeforces. (19 Aug '17, 09:44)
Answer (1 vote): hidden as the author is suspended. (answered 19 Aug '17, 10:04 by raj79, suspended)

Comment:

• @raj79 thank you for the advice. I will try it. Thanks again (19 Aug '17, 11:52) kunnu1202★
question asked: 18 Aug '17, 20:56
question was seen: 314 times
last updated: 19 Aug '17, 11:52
# Chapter 10 - Radical Expressions and Equations - Mid-Chapter Quiz - Page 619: 22
$3y^2\sqrt y$
#### Work Step by Step
The area of a triangle is $\frac{1}{2}bh$, where $b$ is the base and $h$ is the height.

$a=\frac{1}{2}\times\sqrt{18y}\times\sqrt{2y^4}\longrightarrow$ factor out the perfect squares

$a=\frac{1}{2}\times\sqrt{9\times y\times2}\times\sqrt{2\times y^4}\longrightarrow$ multiplication property of square roots

$a=\frac{1}{2}\times\sqrt9\times\sqrt y\times(\sqrt2\times\sqrt2)\times\sqrt {y^4}\longrightarrow$ simplify

$a=\frac{1}{2}\times3\times\sqrt y\times2\times y^2\longrightarrow$ simplify; combine like terms

$a=3y^2\sqrt y$
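A quick numeric spot-check of the simplification (the helper functions here are hypothetical; any $y \ge 0$ works):

```cpp
#include <cmath>

// Original form: (1/2) * sqrt(18y) * sqrt(2y^4)
double area_original(double y) {
    return 0.5 * std::sqrt(18 * y) * std::sqrt(2 * std::pow(y, 4));
}

// Simplified form: 3 y^2 sqrt(y)
double area_simplified(double y) {
    return 3 * y * y * std::sqrt(y);
}
```

At $y=2$ both forms give $\frac{1}{2}\cdot6\cdot\sqrt{32}=12\sqrt2\approx16.97$.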
# Equation (mathematics)
In mathematics, an equation is a statement that two quantities are equal. It is usually regarded as a kind of mathematical problem in which you have to find a value which makes the equation true. A simple example is the question: What do you have to fill in on the dots in … + 2 = 3? The answer is 1, because 1 + 2 = 3. The unknown quantity in such problems is frequently denoted by a letter, often x, so that the equation becomes x + 2 = 3. The solution of this equation is x = 1. In general, an equation may have no solution, one solution or many solutions.
By contrast, an identity is an equality which is stated to be universally true for all permissible values of the variables, rather than representing a condition on those values. For example, x + y = y + x is an identity for real numbers, since it is true for all real values of x and y.
Scientific laws are often formulated as equations, especially in physics and other natural sciences. Examples are Newton's laws, the equation of a harmonic oscillator and the Schrödinger equation. The term equation is used also in chemistry, indicating conservation of atomic or isotopic content (and rarely other forms of energy, e.g. light) during chemical reactions.
## Domain of the unknown
Technically, an equation has to indicate what values the unknown variable can take. This is called the domain of the variable. For instance, the equation x + 1 = 0 has no solutions if x is supposed to be a natural number, but it does have a solution (namely, x = −1) if x is supposed to be an integer number.
Similarly, the equation $x^2 = 2$ has no solutions if the domain is formed by the rational numbers, it has two solutions (namely, $x=\sqrt{2}$ and $x=-\sqrt{2}$) among the real numbers, and it has only one solution (namely, $x=\sqrt{2}$) among the positive real numbers.
Often, the domain is not specified explicitly, but it is assumed that the reader knows what it is supposed to be.
## Inverse function
The equation may have the form
(1) F(x) = 0
or
(2) F(x) = G(x)
where F and G are known functions and x is the unknown variable.
Function F in equation (1), or functions F and G in equation (2), may also depend on some parameter(s). In this case, the solution(s) x may also depend on the parameter(s). To indicate this dependence on a parameter, say b, it can be specified as a second argument, writing F(x,b), G(x,b), or as a subscript, writing $F_b(x)$, $G_b(x)$.
In relatively simple cases, function F depends only on the unknown variable, and G depends on the parameter; for example,
(3) F(x) = b .
In this case, the solution x is considered as an inverse function of b, which can be written as
(4) $x = F^{-1}(b)$.
Depending on the function F, the range of values of b, and the domain of x, there may exist no inverse function, one inverse function, or several inverse functions.
## Graphical solution of equations
Fig. 1. Example of graphical solution of the equation $x = \log_b(x)$ for $b=\sqrt{2}$ (two solutions, x = 2 and x = 4), $b = \exp(1/e)$ (one solution, x = e), and b = 2 (no real solutions).
When solving equations, it may be worth beginning with a graphical solution, which allows quick-and-dirty estimates. One plots both functions, F and G, on the same graph and looks for the point(s) of intersection of the curves. In figure 1, the function y = F(x) = x is plotted with a black line, and the function $y=G(x)=\log_{\sqrt{2}}(x)$ is plotted with a red curve. The intersections with the black line indicate the values of x that are solutions.
In the same figure, the cases $G(x) = \log_{\exp(1/e)}(x)$ (only one solution, x = e) and $G(x) = \log_2(x)$ (no solutions among the real numbers) are shown with green and blue curves.
## System of equations
In equations (1) or (2), x may denote several numbers at once, $x = \{x_0, x_1, \ldots, x_{n-1}\}$, and functions F and G may return values from a multidimensional space, $\{F_0, F_1, \ldots, F_{m-1}\}$. In this case, one says that there is a system of equations. For example, there is a well-developed theory of systems of linear equations, in which the unknown variables x are real or complex numbers.
## Differential equations and integral equations
In particular, the variable may denote a function of one or several variables, so that the set of possible values is some functional space, usually a Banach or even Hilbert space; the function F is then an operator on this space which may be expressed in terms of derivatives or integrals of the function elements. In these cases, the equation is called a differential equation or an integral equation.
## Operator equations
Equations can be used for objects of any origin, as soon as the operation of equality is defined. In particular, in quantum mechanics, the Heisenberg equation deals with non-commuting objects (operators).
An Enhanced Privacy Preserving, Secure and Efficient Authentication Protocol for VANET
1Department of IT Convergence Engineering, Gachon University, Seongnam, 13320, Korea
2ENSAIT, GEMTEX-Laboratoire de Genie et Materiaux Textiles, University of Lille, Lille, F59000, France
3School of Computing & Institute of Cyber Security (ICSS), University of Kent, United Kingdom
4Department of Computer Engineering, Gachon University, Seongnam, 13320, Korea
*Corresponding Author: Seong Oun Hwang. Email: sohwang@gachon.ac.kr
Received: 09 September 2021; Accepted: 20 October 2021
Abstract: Vehicular ad hoc networks (VANETs) have attracted growing interest in both academia and industry because they can provide a viable solution that improves road safety and comfort for travelers on roads. However, wireless communications over open-access environments face many security and privacy issues that may affect deployment of large-scale VANETs. Researchers have proposed different protocols to address security and privacy issues in a VANET, and in this study we cryptanalyze some of the privacy-preserving protocols to show that all existing protocols are vulnerable to the Sybil attack. The Sybil attack can be used by malicious actors to create fake identities that impair existing protocols, which allows them to imitate traffic congestion or at worst cause an accident that may result in the loss of human life. This vulnerability exists because those protocols store vehicle identities in an encrypted form, and it is not possible to search over the encrypted identities to find fake vehicles. This attack is serious in nature and very prevalent for privacy-preserving protocols. To cope with this kind of attack, we propose a novel and practical protocol that uses Public key encryption with an equality test (PKEET) to search over the encrypted identities without leaking any information, and eventually eliminate the Sybil attack. The proposed approach improves security and at the same time maintains privacy in VANET. Our performance analysis indicates that the proposed protocol outperforms state-of-the-art protocols: The proposed beacon generation time is constant compared to a linear increase in existing protocols, with beacon verification shown to be faster by 7.908%. Our communicational analysis shows that the proposed protocol with a beacon size of 322 bytes has the least communicational overhead compared to other state-of-the-art protocols.
Keywords: VANET; authentication protocol; cryptanalysis; privacy preserving; intelligent systems
1 Introduction
Vehicular ad hoc networks (VANETs) are a subset of Mobile ad hoc networks (MANETs) in which smart vehicles act as mobile nodes with their movement governed by road topologies. The vehicles communicate with each other using Vehicle-to-vehicle (V2V) communication and with Road-side unit (RSU), known as Vehicle-to-infrastructure (V2I) communication. Each vehicle is equipped with an On board unit (OBU) that can perform computations and establish communication. A vehicle in a VANET periodically broadcasts messages containing information related to its speed, location, traffic and accidents, known as beacons [1].
Despite these advantages, authentication, security and privacy are critical challenges for VANETs [2]. VANETs have an additional number of challenges, particularly in the domain of authentication, privacy and security [3]. Unauthenticated information in the network may lead to malicious attacks and service abuses that pose a threat to users [4]. In contrast to classical wired networks that have protections in terms of firewalls and gateways, security attacks on such wireless networks could come from various sources to target all nodes [5,6]. Additionally, VANETs are an instance of mobile ad hoc networks. Consequently, they inherit all known and unknown security flaws, such as Sybil attacks that are associated with MANETs [7]. VANETs are more challenging to secure due to their distinct characteristics and features, such as the high mobility of the end users and wide area of the network [8]. Therefore, a new mechanism to provide the desired security, including authentication, integrity, and nonrepudiation, needs to be proposed prior to the practical deployment of VANETs [9].
With regards to issues such as authentication privacy and security, researchers have proposed a number of protocols [10,11]. In [10], the authors proposed a hierarchical privacy preserving pseudonymous authentication protocol for the VANET. Their protocol makes use of two different types of pseudonyms, one is referred to as the primary pseudonym and the second is the secondary pseudonym. A primary pseudonym is provided to a vehicle upon successful verification by the Certification authority (CA). Using this primary pseudonym, the vehicle can request the RSU for a secondary pseudonym that is then broadcast along with the beacons in the VANET. Each pseudonym is associated with an expiration time after which the vehicle has to request new primary and secondary pseudonyms. Primary pseudonyms are associated with a relatively longer expiration time than the secondary pseudonyms.
Furthermore, in [10], the authors provided a brief security analysis to claim the security of the proposed protocol. Usually, in VANETs (e.g., in [10,11]), authorities (such as a CA and Social network (SN), respectively) store the real identities of the vehicles in an encrypted form. Consequently, searching on encrypted data becomes complicated because these authorities are required to first decrypt the encrypted identity prior to searching. The protocols in [12–15] work based on a fully trusted authority. If the trusted authority is compromised, these protocols do not provide security and privacy. We need protocols which can provide privacy even if partially/fully trusted authorities are compromised. Sometimes in an Internet of things (IoT) era (e.g., Internet of vehicles (IoV), smart grids, healthcare, etc.) we need a balance between user privacy and access to information by authorities (CA and SN). Searchable encryption is a cryptographic scheme that provides this kind of balance [16–22]. Public key encryption with equality test (PKEET) is a special type of searchable encryption scheme [23]: a kind of Public key encryption (PKE) that allows one to check whether two ciphertexts encrypted under (possibly) different public keys contain the same message. In other words, searchable encryption is a positive way to protect users’ sensitive data, while preserving search ability on the server side. It allows the server to search encrypted data without leaking any information in plaintext data. For more in-depth information about searchable encryption, readers are encouraged to read [24].
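To make the role of PKEET concrete, the toy sketch below shows only the shape of the interface: encrypt under (possibly) different public keys, then test two ciphertexts for plaintext equality. It is deliberately insecure (a plaintext digest stands in for the real trapdoor-based test machinery), and every name in it is hypothetical rather than taken from any PKEET library.

```cpp
#include <functional>
#include <string>

// Toy PKEET-shaped ciphertext: in a real scheme, enc_blob would be a
// randomized public-key encryption of the message, and the equality-test
// component would only be usable with a per-receiver trapdoor.
struct Ciphertext {
    std::string enc_blob;  // stand-in for the PKE ciphertext of the identity
    size_t digest;         // stand-in for the equality-test component
};

Ciphertext toy_encrypt(const std::string& pk, const std::string& msg) {
    // NOT encryption -- just enough structure to show the data flow.
    return {pk + ":" + msg, std::hash<std::string>{}(msg)};
}

// Authority-side check: do two encrypted identities hide the same VID?
// Works even though the two ciphertexts were produced under different keys.
bool toy_equality_test(const Ciphertext& a, const Ciphertext& b) {
    return a.digest == b.digest;
}
```

This is the capability the proposed protocol relies on: the authority can detect duplicate (Sybil) identities among encrypted registrations without ever decrypting them.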
We have surveyed many VANET related protocols and classify them into two categories. The protocols in the first category do not encrypt the real identities of vehicles, including [25,26], which can reveal information about the vehicle's daily route. Therefore, they do not provide privacy preservation. The protocols in the second category encrypt the real identities in order to preserve the privacy of the vehicles, which is more desirable for state-of-the-art methods. However, we found that a number of existing protocols including [10,11] in the second category are vulnerable to the Sybil attack since the identities are encrypted and cannot be verified. Thus, a malicious vehicle can create many fake identities, exploiting this vulnerability which may result in fake congestion in the network and subsequently cause serious accidents. To address this problem, we present a novel privacy-preserving protocol secure against the Sybil attack with practical performance. Therefore, this paper not only shows that the protocols are vulnerable against the Sybil attack, but also provides a novel protocol with stronger security along with efficient authentication, beacon generation and beacon verification. In addition, other parameters for the proposed algorithm like the transmit power and scheduling are based on adaptive-transmit power control algorithm [27], while radio resource allocation is performed using a supervised deep learning technique [28].
The contributions of this paper can be summarized as follows:
1. In this paper, we cryptanalyze existing protocols that protect privacy by encrypting vehicles’ real identities and are claimed to be secure and efficient protocols for VANETs. We show for the first time that they are vulnerable to the Sybil attack because they cannot verify the vehicle identity during primary pseudonym generation, which is encrypted. This attack may result in fake congestion in the network and subsequently cause catastrophic consequences, which should be resolved. However, protecting such protocols from the Sybil attack is complicated in nature because the vehicle identities in the existing protocols such as [10,11] are encrypted to protect user privacy. It is difficult to protect both user privacy and security against the Sybil attack at the same time. Hence, we present a new research direction when designing a secure and privacy-preserving protocol for VANET.
2. We propose a novel privacy-preserving protocol secure against the Sybil attack by introducing searchable encryption that allows for verification of encrypted vehicle identities. This proposed method is generic in the sense that it can be applied to all protocols vulnerable to the Sybil attack due to encrypted vehicle identities. We examine various attack scenarios and prove that the proposed protocol is secure against the Sybil attack along with satisfying the general security requirements for VANET protocols.
3. We also perform an in-depth analysis of the proposed protocol in terms of the performance over time. The proposed beacon generation has constant time with an increasing number of beacons, in contrast to the linearly increasing time for existing protocols. Regarding beacon verification time, it outperforms existing protocols by 7.908%. We also show that the proposed protocol with a beacon size of 322 bytes has the least communicational over-head compared to other state-of-the-art protocols. The proposed protocol shows its practical applicability in VANET as the beacons are generated and verified on a massive scale.
The rest of the paper is organized as follows: The background and relevant studies are introduced in Section 2. A cryptanalysis of the existing protocols is provided in Section 3. Section 4 defines the proposed protocol. Section 5 presents the analysis of the proposed protocol. Section 6 concludes the paper.
2 Background
In this section, we present background knowledge on a conventional VANET architecture as well as the assumptions and cryptographic tools used in this work.
2.1 Conventional VANET Architecture
Fig. 1 depicts a conventional VANET architecture, where a vehicle first needs to be registered with the Registration authority (RA) in order to receive and send beacons in the VANET. According to [29], each vehicle has a unique identifier associated with it; in this paper, we refer to it as VIDi. VIDi is an electronic license plate, and can be issued and installed in the vehicle's OBU by the vehicle registration authorities. VIDi serves as the real identity of a vehicle and uniquely identifies it. A vehicle in our model is required to provide VIDi to RA for registration. Our system model consists of the following participants.
1. Registration authority (RA): During the registration, RA generates a public/private key pair using PKEET [30] and encrypts the VIDi with a public key and sends the encrypted VIDi to the initial vehicle (Vi) and a trapdoor T to CA through a secure channel. Upon the request of legal authorities, RA provides the decryption key to reveal the VIDi.
2. Certification authority (CA): CA issues the primary pseudonyms and keeps associations between the encrypted VIDi and the primary pseudonym. Each primary pseudonym has an expiration time (TCA), after which a vehicle needs to get a new primary pseudonym. This can be done by two means: first by requesting through the RSU located in the area where the vehicle is currently traveling and the other by directly requesting it to CA through 3G/4G communication.
3. Roadside units (RSUs): Secondary pseudonyms are issued by RSU upon request from a vehicle. RSU maintains the association between the primary pseudonyms and secondary pseudonyms. Upon the request of secondary pseudonyms, RSU checks the validation of primary pseudonyms and issues the secondary pseudonyms if they are valid. Each secondary pseudonym has an expiration time (TRSU) relatively smaller than TCA. Once TRSU expires, the vehicle needs to acquire a new secondary pseudonym from RSU.
4. Sender/Initial vehicle: The sender vehicle, denoted by Vi in the rest of the paper, generates the beacons and broadcasts them.
5. Receiver vehicle: The receiver vehicle, denoted by Vr in the rest of the paper, verifies the received beacon. In case the message is forged, it reports this message to RSU. If TRSU has expired, the beacon is discarded.
Figure 1: Conventional VANET architecture
2.2 Assumptions
We have made a number of assumptions that underpin the cryptanalysis and the proposed protocol, and these are outlined next.
1. An honest but curious behavior is expected from CA, RA, and RSU.
2. We assume that CA, RA, and RSU do not collude.
3. Cryptographic credentials are kept safe by all parties.
4. All parties have synchronized clocks.
5. CA, RA, and RSU use a secure channel to communicate.
2.3 Cryptographic Tools
Elliptic curve cryptography (ECC) is one of the most widely used cryptographic schemes because it provides a high level of security and efficiency. However, it does not provide search over encrypted data, which is sometimes required. To support search over encrypted data, researchers have proposed schemes such as PKEET. Therefore, we use two different cryptographic schemes: PKEET where search over encrypted data is needed, and ECC otherwise, because ECC is computationally more efficient than PKEET. The two schemes are described as follows:
1. Elliptic curve cryptography (ECC): ECC [31,32] is widely used to implement encryption algorithms and digital signatures. Assume that Fp is a finite field where p is a large prime number. E denotes an elliptic curve over Fp and it is defined by the following equation.
y^2 = x^3 + ax + b mod p   (1)
Here, (4a^3 + 27b^2) mod p ≠ 0 and x, y, a, b ∈ Fp. Let O be the point at infinity, and let G be an additive group with order q and generator P. Let P and Q be two points on E; then the point addition operation in G is defined as P + Q = R. The elliptic curve discrete logarithm problem (ECDLP) [33] is computationally infeasible. Scalar multiplication in G is given by the following equation.
s.P = P + P + ... + P   (s times)   (2)
Given the points P and Q from G, the ECDLP is to find s ∈ Fp such that s.P = Q. ECC consists of the following algorithms:
a) Setup (π): It takes security parameter π as input and outputs the public parameter
b) KeyGen (pp): It takes the public parameter pp as input, and outputs a public/secret key pair (Pk, Sk).
c) Encrypt (m, Pk): It takes a message m and the receiver's public key Pk as input and outputs a ciphertext C.
d) Decrypt (C, Sk): It takes a ciphertext C and the receiver's secret key Sk as input and outputs a plaintext m.
e) Signature Generation (C, Sk): It takes a ciphertext C and the sender's secret key Sk as input and outputs a signature s.
f) Signature Verification (s, Pk): It takes a signature s and the sender's public key Pk as input and outputs 1 if the signature s is valid, otherwise 0.
Although the complete description of these modules is out of the scope of this paper, interested readers are encouraged to read [34] for more information about these modules.
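The group law and scalar multiplication of Eqs. (1) and (2) can be sketched in a few lines. The following is a didactic toy over a tiny prime field (the curve parameters and base point here are illustrative, not from the paper); real ECC uses standardized curves with roughly 256-bit primes, which is what makes the ECDLP computationally infeasible.

```python
# Toy elliptic-curve arithmetic over a small prime field, illustrating
# Eqs. (1) and (2). Parameters are illustrative only.
p, a, b = 97, 2, 3          # curve y^2 = x^3 + 2x + 3 (mod 97); 4a^3 + 27b^2 != 0 mod p
O = None                    # point at infinity (the group identity)

def add(P, Q):
    """Point addition P + Q on E(F_p)."""
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O            # P + (-P) = O
    if P == Q:              # doubling: slope = (3x^2 + a) / (2y)
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                   # chord: slope = (y2 - y1) / (x2 - x1)
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(s, P):
    """Scalar multiplication s.P via double-and-add (Eq. 2)."""
    R = O
    while s:
        if s & 1:
            R = add(R, P)
        P, s = add(P, P), s >> 1
    return R

# find any point on the curve by brute force (fine for a toy field)
G = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
assert mul(5, G) == add(G, add(G, add(G, add(G, G))))  # 5.G = G+G+G+G+G
```

The double-and-add loop needs only O(log s) group operations, whereas computing s from (P, s.P) is the ECDLP, believed infeasible at cryptographic field sizes.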
2. Public key encryption with equality test (PKEET): This is a special type of searchable encryption scheme that allows checking whether two ciphertexts encrypted under (possibly) different public keys contain the same message or not. The scheme in [23] does not provide a trapdoor for authorization to perform an equality test. Later, Ma et al. [30] proposed a variant of PKEET with different authorization methods that provide more control to the user over authorization. It consists of the following algorithms:
a) Setup (λ): It takes security parameter λ as input and outputs the public parameter pp.
b) KeyGen (pp): It takes the public parameter pp as input and outputs a public/secret key pair (Pk, Sk).
c) Encrypt (M, Pk): It takes a message M and the receiver's public key Pk as input and outputs a ciphertext C.
d) Decrypt (C, Sk): It takes a ciphertext C and the receiver's secret key Sk as input and outputs a plaintext M.
e) Aut1 (SKi): It takes the secret key SKi as input and outputs a trapdoor T1,i for user Ui.
f) Test1(Ci, T1,i, Cj, T1,j): It takes Ui's ciphertext Ci, the trapdoor T1,i, Uj's ciphertext Cj and the trapdoor T1,j as inputs, and outputs 1 if Ci and Cj contain the same message and 0 otherwise.
Note that [30] is only used by RA and CA in our scheme.
2.4 Security and Privacy Requirements
Both security and privacy are important for secure communications in VANETs. According to the latest research efforts [35–38], a scheme for VANETs should meet the following requirements:
1. Message authentication: RSUs are able to check the validity of the messages sent by vehicles. In addition, RSUs are able to detect any modification of the received message.
2. Privacy preservation: RSUs and other vehicles are not able to extract the vehicle's real identity. Any third party is not able to get the vehicle's real identity by analyzing intercepted messages.
3. Traceability/Vehicle revocation: The protocol is able to extract the vehicle's real identity by analyzing its messages when necessary, e.g., when a malicious vehicle sends a false or forged message to mislead others.
4. Un-linkability: RSUs and malicious vehicles are not able to link two messages sent by the same vehicle, i.e., they cannot trace the vehicle's action through its messages.
5. Resistance to attacks: The scheme is able to withstand various common attacks such as the Sybil attack that exist in VANETs.
3 Cryptanalysis
In this section, we cryptanalyze one of the state-of-the-art protocols [10] and demonstrate with an example how to mount the Sybil attack on it, because Sybil attacks are especially important to address in a VANET. In the Sybil attack, the attacker subverts the reputation of a network service by creating a large number of pseudonymous identities and using them to gain a disproportionately large influence. For ease of explanation, we assume the following scenario from [10] (a complete description of [10] is out of the scope of this article; we encourage interested readers to consult [10,11] to fully understand the attack). Suppose that, in the primary pseudonym generation step of [10], a malicious (internal) initial vehicle Vi wants to authenticate a fake vehicle Vf. Vi generates two random numbers n, n’, and public/private ECC key pairs PKi/SKi and PKi’/SKi’. Vi sends this information along with VIDi to CA in the following two steps.
Step 1: Vi → CA: n || PKi || VIDi.
RA generates a number of public/private ECC key pairs and provides CA with the public keys, which are later used by CA for VIDi encryption. CA validates the VIDi. Upon verification, it encrypts VIDi with one of the public keys of RA. CA encrypts n with its Paillier homomorphic encryption public key PKCAP, generates an expiration time TCA and creates the database entries shown in Tab. 1.
CA signs (TCA || PKi || (n)PKCAP) and assigns it to Vi as its primary pseudonym. Note that CA has VIDi in encrypted form using RA's public key. Hence, CA cannot see VIDi once encrypted (until RA provides the corresponding private key). Now Vi again sends n’, PKi’, and VIDi to CA in order to get another valid primary pseudonym.
Step 2: Vi → CA: n’ || PKi’ || VIDi.
CA validates VIDi. CA cannot check whether VIDi is already in its database, because VIDi is stored there in encrypted form. Upon verification, it encrypts VIDi with one of the public keys generated by RA, encrypts n’ with its Paillier public key PKCAP, generates an expiration time T’CA and creates the database entries shown in Tab. 2.
CA signs (T’CA || PK’i || (n’)PKCAP) and assigns it to Vi as its primary pseudonym. Hence, Vi has successfully obtained two valid primary pseudonyms at the same time. Vi can give one of its primary pseudonyms to Vf. Using this primary pseudonym, Vf can obtain a secondary pseudonym and communicate in the network. Vi can create a large number of such fake authenticated vehicles in the network and cause fake congestion. The Sybil attack is shown in Fig. 2.
Figure 2: Demonstration of Sybil attack in [10]
4 Proposed Protocol
In this section, we propose a new protocol that uses PKEET to eliminate the Sybil attack. The proposed protocol provides improved efficiency and security while maintaining privacy in the VANET. The dynamic network architecture of the proposed protocol is shown in Fig. 3 and the protocol is given as follows:
4.1 Off-line Registration and Initialization
At the time of offline registration, CA broadcasts security parameters. Each entity creates its public/private key pairs using ECC. In the case of RA, it also creates public/private key pairs using PKEET. Vi contacts RA with its VIDi and public key PKi through a secure channel (e.g., by physically visiting). RA uses PKEET [30] to encrypt and sign the VIDi and gives it to Vi. Furthermore, RA gives the trapdoor T (derived from its secret key) to CA through a secure channel (note that, in [10,11], RA gives just its public keys to CA and SN, respectively). We use the PKEET scheme of [30], which makes it possible to stop the Sybil attack by means of the equality test.
Step 1: Vi → RA: VIDi || PKi.
Step 2: RA → Vi: (PKEET(VIDi )PKRA)SKRA.
Step 3: RA → CA: T.
Figure 3: Working of proposed protocol
4.2 Primary Pseudonym Generation
Vi signs (PKEET(VIDi )PKRA)SKRA|| PKi using its private key SKi and encrypts it using CA's public key PKCA and sends it to CA to get a primary pseudonym for the first time (i.e., initially but only once).
Step 4: Vi → CA: (((PKEET(VIDi)PKRA)SKRA || PKi)SKi)PKCA.
CA verifies the signature of RA and checks, using the equality test function, whether PKEET(VIDi)PKRA already exists in its database. If it does not, CA issues the primary pseudonym (TCA || PKi)SKCA and stores it in its database as shown in Tab. 3.
CA encrypts (TCA || PKi)SKCA using PKi and sends it to Vi.
Step 5: CA → Vi: ((TCA || PKi)SKCA)PKi.
4.3 Secondary Pseudonym Generation
Vi decrypts the primary pseudonym, generates a random number r and a new public/private key pair PK’i/SK’i, and then concatenates r and PK’i with the primary pseudonym. Thereafter, Vi signs it using SKi, encrypts it using the public key PKRSU of RSU and sends it to RSU.
Step 6: Vi → RSU: (((TCA || PKi)SKCA || r || PK’i)SKi)PKRSU.
RSU decrypts (((TCA || PKi)SKCA || r || PK’i)SKi)PKRSU and verifies the signatures made with SKCA and SKi. If TCA is valid, RSU generates a secondary pseudonym Ss = (TRSU || PK’i) and signs it using its private key SKRSU. RSU also computes Z = Ss + r and signs it using SKRSU. RSU sends (Ss)SKRSU and (Z)SKRSU, encrypted using PK’i, to Vi. RSU maintains its database as shown in Tab. 4.
Step 7: RSU → Vi: ((Ss)SKRSU, (Z)SKRSU)PK'i.
4.4 Beacon Generation
Vi computes h1 =H(Ss + r + m + TRSU), where m is a beacon message and H is a hash function. Vi then sends (h1, m, (Z)SKRSU , TRSU) to Vr.
Step 8: Vi → Vr: (h1, m, (Z)SKRSU , TRSU).
4.5 Beacon Verification
Vr checks if TRSU is valid. If it is valid, Vr verifies the signature SKRSU and computes hx = H(Z + m + TRSU). If h1 = hx, Vr accepts the beacon. Otherwise it rejects the beacon.
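The beacon generation and verification steps above can be sketched as follows, under the assumption that the "+" in h1 = H(Ss + r + m + TRSU) denotes byte concatenation, so that H(Ss || r || m || TRSU) equals H(Z || m || TRSU) with Z = Ss || r. The RSU's signature on Z is elided here; a receiver would verify it before the hash check.

```python
# Minimal sketch of beacon generation (Step 8) and verification.
# "+" from the protocol is modelled as byte concatenation (an assumption).
import hashlib, os

def make_beacon(Ss: bytes, r: bytes, m: bytes, t_rsu: bytes):
    Z = Ss + r                                   # Z = Ss + r (signed by RSU in the protocol)
    h1 = hashlib.sha256(Ss + r + m + t_rsu).digest()
    return (h1, m, Z, t_rsu)

def verify_beacon(beacon) -> bool:
    h1, m, Z, t_rsu = beacon
    hx = hashlib.sha256(Z + m + t_rsu).digest()  # receiver recomputes hx
    return h1 == hx                              # accept iff h1 = hx

Ss, r = b"TRSU||PK'i", os.urandom(8)             # placeholder secondary pseudonym
b = make_beacon(Ss, r, b"brake warning", b"1700000000")
assert verify_beacon(b)
tampered = (b[0], b"all clear", b[2], b[3])      # forged message fails the check
assert not verify_beacon(tampered)
```

Note how Vr never needs r itself: the signed Z = Ss + r suffices to recompute the hash, which is what keeps beacon verification down to one hash plus one signature verification.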
4.6 Renewal of Primary Pseudonym
Once TCA expires, Vi requests CA, via 3G/4G communication or an RSU, for a new primary pseudonym. Vi generates a new public/private key pair PK’’i/SK’’i, concatenates PK’’i with its current primary pseudonym, signs it using the current SKi, encrypts it using PKCA and sends it to CA. CA generates a new primary pseudonym using PK’’i and a new expiration time T’CA and signs it using SKCA. CA encrypts it using PK’’i and sends the new primary pseudonym to Vi. CA then updates the corresponding primary pseudonym in its database.
Vi → CA: (((TCA || PKi)SKCA || PK’’i)SKi)PKCA.
CA → Vi: ((T’CA || PK’’i)SKCA)PK’’i.
4.7 Renewal of Secondary Pseudonym
To renew a secondary pseudonym once TRSU expires, Vi generates a new public/private key pair PK’’’i/SK’’’i and a random number r’. Vi then concatenates PK’’’i and r’ with its current secondary pseudonym and signs them using the current SK’i. Vi then encrypts them using PKRSU and sends them to RSU. RSU checks the signatures made with SKRSU and SK’i. RSU creates a new secondary pseudonym with a new expiration time TRSU’. RSU generates Z’ = Ss’ + r’, where Ss’ = (TRSU’ || PKi’’’). RSU signs Z’ and Ss’ using SKRSU. RSU then sends the signed Z’ and Ss’ to Vi, encrypted using PKi’’’. RSU updates the corresponding secondary pseudonym in its database with the new one. The optimal time to refresh the secondary pseudonym is 1.4 s, which means that each Vi requests a new secondary pseudonym after approximately 7 beacons. In case Vi tries to use an expired secondary pseudonym, it will not be accepted by RSU, as all the entities are synchronized. Thus, a vehicle using an expired secondary pseudonym will not be able to communicate.
5 Analysis of Proposed Protocol
This section analyzes our protocol from two perspectives. First is the security analysis, where we present various attack scenarios and explain the resilience of our protocol against them, showing the effectiveness of the protocol with respect to the design goals. Second, we report computational and communication efficiency in the form of the computation time and the bytes of communication required for beacon generation and verification.
5.1 Security Analysis
1. Message Integrity: The receiving vehicles ensure the integrity of the received message by verifying the signature of the RSU and comparing h1 and hx.
2. Vehicle Authentication: Vr authenticates Vi by verifying the signature of RSU in secondary pseudonym and validity of TRSU.
3. Non-repudiation: Vi broadcasts the beacons by computing h1 using r (which is known only to Vi) together with its secondary pseudonym, m and TRSU. Hence it provides non-repudiation. TRSU is valid for a very short time; therefore, each beacon is unique in itself, which also prevents the replay attack and keeps the vehicle anonymous.
4. Privacy Preserving: The primary and secondary pseudonyms change very often, which makes it very difficult to track a particular vehicle. Even if RA, CA, or RSU is compromised, the privacy of the vehicles remains secure.
5. Vehicle Revocation: If Vi is involved in a malicious activity, Vr reports it to the RSU with its secondary pseudonym. RSU blocks the malicious vehicle and sends its primary pseudonym to CA, which also blocks it and reports it to RA. RA can then reveal its real identity by sharing the respective private key of PKEET with CA. Thus, the offending vehicle is revoked.
6. Conditional Anonymity: The real identity of the offender vehicle in our scheme is traced and revoked from the VANET when malicious activities are detected, as described above.
5.2 Attack Scenarios
We show how the proposed protocol defends against the following attack scenarios.
Theorem 1: Sybil attack is not feasible in the proposed protocol.
Proof: If Vi tries to get another primary pseudonym by the Sybil attack, CA will check if PKEET(VIDi )PKRA exists in its database or not by using the equality test. If it exists, CA will not issue the primary pseudonym. Hence, the Sybil attack is not feasible.
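A minimal sketch of the CA-side check in this proof, assuming the PKEET equality test is modelled by comparing deterministic tags (a hypothetical stand-in; the real test of [30] operates on probabilistic ciphertexts via the trapdoor):

```python
# CA-side duplicate check from Theorem 1: before issuing a primary pseudonym,
# CA runs the equality test of the encrypted VID against every stored entry.
# pkeet_tag is a hypothetical stand-in for the PKEET Test algorithm.
import hashlib

ca_db = []  # rows of (encrypted_vid_tag, primary_pseudonym)

def pkeet_tag(vid: str) -> str:
    return hashlib.sha256(vid.encode()).hexdigest()  # stand-in for PKEET(VIDi)PKRA

def issue_primary(vid: str, pk: str, t_ca: str):
    tag = pkeet_tag(vid)
    if any(row[0] == tag for row in ca_db):    # equality-test hit: Sybil attempt
        return None                            # refuse a second primary pseudonym
    pseudonym = f"Sign_SKCA({t_ca}||{pk})"     # placeholder for CA's signature
    ca_db.append((tag, pseudonym))
    return pseudonym

assert issue_primary("VID-1", "PK1", "T1") is not None
assert issue_primary("VID-1", "PK2", "T2") is None   # second request is refused
```

This is exactly the step that fails in [10]: there, CA cannot compare encrypted VIDs, so the second request with fresh (n’, PK’i) succeeds.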
Theorem 2: Communication between all the participants is secure.
Proof: All the communication in our protocol is encrypted using ECC. By the hardness of the Diffie-Hellman problem, given an element h and the value h^x, it is computationally infeasible for an attacker to compute the secret x. Therefore, all the communication is secure.
Theorem 3: Vr cannot correlate the secondary pseudonyms of Vi.
Proof: Vi changes the secondary pseudonyms after a few beacon broadcasts. Thereafter, Vr receives the beacons containing different secondary pseudonyms. Therefore, it is very hard for a Vr to establish any correlation between rapidly changing secondary pseudonyms.
Theorem 4: If an attacker succeeds in compromising RSU, no valuable information is leaked.
Proof: RSU only stores the mapping between the current primary pseudonym and secondary pseudonym. The primary pseudonyms are changed after a short period of time. Therefore, it is very hard for an attacker to get any useful information by compromising any of the RSUs.
Theorem 5: If an attacker succeeds in compromising the CA database, no valuable information is leaked.
Proof: Since the database of CA contains encrypted VIDs of vehicles, no valuable information is leaked even though CA is compromised.
Theorem 6: Even if an RSU tries to correlate the pseudonyms of an initiator, it can hardly establish the linkability between them.
Proof: RSUs only issue secondary pseudonyms on the basis of a primary pseudonym. An initiator only uses a primary pseudonym for a short period of time, after which it securely gets another primary pseudonym from CA without the knowledge of RSU. It is, therefore, very hard for the RSU to establish any link between two primary pseudonyms.
If any vehicle is involved in malicious activity, the malicious participant can be reported to RSU with its secondary pseudonym. RSU can then report it to the CA which will further cooperate with RA to get the associated encryption key for primary pseudonym. Thereafter, they can revoke/track the malicious participant using its ID in primary pseudonym.
Tab. 5 shows the comparison of the proposed solution with [10,11]. We observed that the protocols in [10,11] are vulnerable to the Sybil attack, whereas the proposed solution is resistant to it. If CA and SN are compromised in [10,11], respectively, they do not provide privacy preservation because Vi sends its VIDi in plaintext. In the proposed solution, since CA is given an encrypted form of VIDi, a compromised CA cannot leak any valuable information regarding VIDi.
5.3 Performance Evaluation
We evaluate the performance of our proposed scheme on our testbed, which includes an Intel i5 processor with 8 GB of RAM; the computations are carried out in C++, as C++ supports a rich set of cryptographic libraries [39]. Execution times for the cryptographic operations are given in Tab. 6. We compare the computational overhead for the generation and verification of broadcast beacons in [10,11] with our proposed protocol.
Beacon generation: In Fig. 4a, it is evident that the proposed protocol outperforms the others in terms of beacon generation. For beacon generation, [10] computes one signature, so the total time required is 1.88 ms, whereas [11] computes one signature, one AES encryption, and one ECC key generation, so the total time required is 1.88 + 0.357 + 2 ≈ 4.237 ms. The proposed protocol computes only one hash (H), so the total time required is 0.0001 ms. Hence, the proposed protocol is the most efficient in beacon generation, taking almost constant time: unlike [10,11], the time for beacon generation does not grow noticeably with the number of beacons generated, because the proposed method uses only one hash per beacon, whereas the others use signature generation along with encryption, which are computationally expensive.
Beacon verification: For beacon verification, as shown in Fig. 4b, [10] computes two signature verifications, so the total time required for beacon verification is 2.946 + 2.946 ≈ 5.892 ms, whereas [11] computes one signature verification and one AES decryption for beacon verification, hence the time required for beacon verification is 2.946 + 0.253 ≈ 3.199 ms. For beacon verification, the proposed protocol computes one hash and one signature verification, so the total time required for message verification is 2.946 + 0.0001 ≈ 2.9461 ms. The proposed protocol provides 7.908% faster time for beacon verification compared to [11], because the proposed protocol uses hash which is computationally efficient compared to AES decryption in [11].
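The per-beacon totals quoted above can be reproduced from the single-operation timings (values in ms, as quoted in the text from Tab. 6):

```python
# Reproducing the per-beacon totals from the single-operation timings (ms).
t_sign, t_verify = 1.88, 2.946        # signature generation / verification
t_aes_enc, t_aes_dec = 0.357, 0.253   # AES encryption / decryption
t_keygen, t_hash = 2.0, 0.0001        # ECC key generation / one hash

gen_10   = t_sign                             # [10]: one signature
gen_11   = t_sign + t_aes_enc + t_keygen      # [11]: sign + AES enc + keygen
gen_ours = t_hash                             # proposed: one hash

ver_10   = 2 * t_verify                       # [10]: two signature verifications
ver_11   = t_verify + t_aes_dec               # [11]: one verification + AES dec
ver_ours = t_verify + t_hash                  # proposed: one verification + one hash

speedup = (ver_11 - ver_ours) / ver_11 * 100  # ~7.9% faster verification than [11]
```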
We also evaluate and compare beacon sizes and show that the proposed protocol has the lowest communication overhead. The size of each element in a beacon is given in Tab. 7. The size of a beacon in the proposed protocol is 322 bytes, whereas the beacon sizes in [10,11] are 362 and 364 bytes, respectively, as shown in Tab. 8. Hence the proposed protocol outperforms the others in terms of communication overhead.
Figure 4: Computational overhead for: (a) beacon generation, (b) beacon verification
The corresponding hardware implementation of this protocol consists of a hash function for beacon generation, and a hash function plus signature verification for beacon verification. The hash requirement can be met with a single lightweight hash function; the implementation in [40] can serve as the best option for this case on an FPGA platform. Similarly, for signature verification, ECC can be employed, for which efficient implementations can be found in [32]. Later, we intend to design the whole protocol in hardware, utilizing resource-sharing techniques for an efficient implementation of the protocol.
6 Conclusions and Future Work
In this paper, our cryptanalysis showed that existing VANET authentication protocols are vulnerable to Sybil attacks. To remove this flaw, we proposed a novel protocol using searchable encryption, enabling search over encrypted identities. As a result, security is improved while privacy is maintained. The proposed solution is generic and can be applied to existing protocols that are vulnerable to the Sybil attack. Simulation results show that the beacon generation time is constant, while the beacon verification time is 7.908% faster compared to the state-of-the-art protocols. In addition, the beacon size is reduced to 322 bytes, indicating that the protocol is efficient enough to be used in practical applications. In the future, we intend to adopt lightweight authentication [41] to improve the speed of beacon verification in the proposed protocol. Efficient implementation techniques can also be developed for specialized hardware (GPU and FPGA) to enhance the speed performance and allow for large-scale adoption. Furthermore, we will extend our proposed protocol to other applications such as Bluetooth Low Energy [42] to make them secure from such attacks.
Acknowledgement: We thank our families and colleagues who provided us with moral support.
Funding Statement: This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-00540, Development of Fast Design and Implementation of Cryptographic Algorithms based on GPU/ASIC).
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
References
1. F. J. Ros, P. M. Ruiz and I. Stojmenovic, “Acknowledgment-based broadcast protocol for reliable and efficient data dissemination in vehicular ad hoc networks,” IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 33–46, 2010.
2. W. Li and H. Song, “Art: An attack-resistant trust management scheme for securing vehicular ad hoc networks,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 960–969, 2015.
3. B. Mokhtar and M. Azab, “Survey on security issues in vehicular ad hoc networks,” Alexandria Engineering Journal, vol. 54, no. 1, pp. 1115–1126, 2015.
4. H. E. Sayed, M. Chaqfeh, H. E. Kassabi, M. A. Serhani, H. Alexander et al., “Trust enforcement in vehicular networks: Challenges and opportunities,” IET Wireless Sensor Systems, vol. 9, no. 5, pp. 237–246, 2019.
5. I. Memon, “A secure and efficient communication scheme with authenticated key establishment protocol for road networks,” Wireless Personal Communications, vol. 85, no. 3, pp. 1167–1191, 2015.
6. H. Sedjelmaci, S. M. Senouci and M. A. A. Rgheff, “An efficient and lightweight intrusion detection mechanism for service-oriented vehicular networks,” IEEE Internet of Things Journal, vol. 1, no. 6, pp. 570–577, 2014.
7. E. C. Eze, S. Zhang, E. Liu and J. C. Eze, “Advances in vehicular ad-hoc networks: Challenges and road-map for future development,” International Journal of Automation and Computing, vol. 13, no. 1, pp. 1–18, 2016.
8. X. Cheng, C. Wang, H. Wang, X. Gao, X. You et al., “Cooperative mimo channel modeling and multi-link spatial correlation properties,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 2, pp. 388–396, 2012.
9. L. Zhang, Q. Wu, A. Solanas and J. D. Ferrer, “A scalable robust authentication protocol for secure vehicular communications,” IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp. 1606–1617, 2010.
10. U. Rajput, F. Abbas and H. Oh, “A hierarchical privacy preserving pseudonymous authentication protocol for vanet,” IEEE Access, vol. 4, pp. 7770–7784, 2016.
11. S. A. Shah, C. Gongliang, L. Jianhua and Y. Glani, “A dynamic privacy preserving authentication protocol in vanet using social network,” in Int. Conf. on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, Springer, Cham, pp. 53–65, 2019.
12. D. He, S. Zeadally, B. Xu and X. Huang, “An efficient identity-based conditional privacy-preserving authentication scheme for vehicular ad hoc networks,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 12, pp. 2681–2691, 2015.
13. J. Zhang, J. Cui, H. Zhong, Z. Chen, L. Liu et al., “Pa-crt: Chinese remainder theorem based conditional privacy-preserving authentication scheme in vehicular ad-hoc networks,” IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 2, pp. 722–735, 2019.
14. J. Cui, L. Wei, H. Zhong, J. Zhang, Y. Xu et al., “Edge computing in vanets-an efficient and privacy-preserving cooperative downloading scheme,” IEEE Journal on Selected Areas in Communications, vol. 38, no. 6, pp. 1191–1204, 2020.
15. J. Cui, J. Zhang, H. Zhong and Y. Xu, “Spacf: A secure privacy-preserving authentication scheme for vanet with cuckoo filter,” IEEE Transactions on Vehicular Technology, vol. 66, no. 11, pp. 10283–10295, 2017.
16. C. Hu and P. Liu, “An enhanced searchable public key encryption scheme with a designated tester and its extensions,” Journal of Computer, vol. 7, no. 3, pp. 716–723, 2012.
17. D. Boneh, G. D. Crescenzo, R. Ostrovsky and G. Persiano, “Public key encryption with keyword search,” in Int. Conf. on the Theory and Applications of Cryptographic Techniques, Berlin, Heidelberg, pp. 506–522, 2004.
18. X. Guibao, M. Yubo and L. Jialiang, “Inclusion of artificial intelligence in communication networks and services,” ITU Journal: ICT Discoveries, vol. 1, pp. 1–6, 2017.
19. R. Chen, Y. Mu, G. Yang, F. Guo, X. Wang et al., “A new general framework for secure public key encryption with keyword search,” in Australasian Conf. on Information Security and Privacy, Springer, Cham, pp. 59–76, 2015.
20. L. Fang, W. Susilo, C. Ge and J. Wang, “Public key encryption with keyword search secure against keyword guessing attacks without random oracle,” Information Sciences, vol. 238, pp. 221–241, 2013.
21. C. Hu and P. Liu, “Decryptable searchable encryption with a designated tester,” Procedia Engineering, vol. 15, pp. 1737–1741, 2011.
22. C. Liu, L. Zhu, M. Wang and Y. Tan, “Search pattern leakage in searchable encryption: Attacks and new construction,” Information Sciences, vol. 265, pp. 176–188, 2014.
23. S. Ma, Q. Huang, M. Zhang and B. Yang, “Efficient public key encryption with equality test supporting flexible authorization,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 3, pp. 458–470, 2014.
24. Y. Wang, J. Wang and X. Chen, “Secure searchable encryption: A survey,” Journal of Communications and Information Networks, vol. 1, no. 4, pp. 52–65, 2016.
25. J. Huang, L. Yeh and H. Chien, “Abaka: An anonymous batch authenticated and key agreement scheme for value-added services in vehicular ad hoc networks,” IEEE Transactions on Vehicular Technology, vol. 60, no. 1, pp. 248–262, 2010.
26. D. A. Rivas, J. M. B. Ordinas, M. G. Zapata and J. D. M. Pozo, “Security on vanets: Privacy, misbehaving nodes, false information and secure data aggregation,” Journal of Network and Computer Applications, vol. 34, no. 6, pp. 1942–1955, 2011.
27. H. Amir and S. Hwang, “Adaptive transmit power control algorithm for sensing-based semi-persistent scheduling in c-v2x mode 4 communication,” Electronics, vol. 8, no. 8, pp. 1–18, 2019.
28. S. Ali, A. Haider, M. Rahman, M. Sohail, Y. B. Zikria et al., “Deep learning based joint resource allocation and rrh association in 5G multi-tier networks,” IEEE Access, vol. 9, pp. 118357–118366, 2021.
29. J. Petit, F. Schaub, M. Feiri and F. Kargl, “Pseudonym schemes in vehicular networks: A survey,” IEEE Communications Surveys & Tutorials, vol. 17, no. 1, pp. 228–255, 2014.
30. S. Ma, Q. Huang, M. Zhang and B. Yang, “Efficient public key encryption with equality test supporting flexible authorization,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 3, pp. 458–470, 2014.
31. V. S. Miller, “Use of elliptic curves in cryptography,” in Conf. on the Theory and Application of Cryptographic Techniques, Springer, Berlin, Heidelberg, pp. 417–426, 1985.
32. S. Khan, K. Javeed and Y. A. Shah, “High-speed fpga implementation of full-word montgomery multiplier for ecc applications,” Microprocessors and Microsystems, vol. 62, pp. 91–101, 2018.
33. S. D. Galbraith and P. Gaudry, “Recent progress on the elliptic curve discrete logarithm problem,” Designs, Codes and Cryptography, vol. 78, no. 1, pp. 51–72, 2016.
34. S. Vasundhara, “The advantages of elliptic curve cryptography for security,” Global Journal of Pure and Applied Mathematics, vol. 13, no. 9, pp. 4995–5011, 2017.
35. N. Lo and J. Tsai, “An efficient conditional privacy-preserving authentication scheme for vehicular sensor networks without pairings,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 5, pp. 1319–1328, 2015.
36. J. K. Liu, T. H. Yuen, M. H. Au and W. Susilo, “Improvements on an authentication scheme for vehicular sensor networks,” Expert Systems with Applications, vol. 41, no. 5, pp. 2559–2564, 2014.
37. Z. Jianhong, X. Min and L. Liying, “On the security of a secure batch verification with group testing for vanet,” International Journal of Network Security, vol. 16, no. 5, pp. 351–358, 2014.
38. X. Lin, X. Sun, P. Ho and X. Shen, “Gsis: A secure and privacy-preserving protocol for vehicular communications,” IEEE Transactions on Vehicular Technology, vol. 56, no. 6, pp. 3442–3456, 2007.
39. M. Scott, “Support for fp8, new bestpair program,” GitHub, 2018. [Online]. Available: https://github.com/miracl/MIRACL/blob/master/readme.txt.
40. S. Khan, W. Lee and S. O. Hwang, “A flexible gimli hardware implementation in fpga and its application to rfid authentication protocols,” IEEE Access, vol. 9, pp. 105327–105340, 2021.
41. K. M. Kay, L. Bassham, M. S. Turan and N. Mouha, “Report on lightweight cryptography,” NIST Interagency Report 8114, National Institute of Standards and Technology, pp. 1–23, 2016.
42. A. Raza, S. Khan and S. O. Hwang, “A secure authentication protocol against the co-located app attack in ble,” IEIE Transactions on Smart Processing & Computing, vol. 9, no. 5, pp. 399–404, 2020.
|
{}
|
# Math Help - Percentage Change
1. ## Percentage Change
I wasn't sure if this problem is categorized under statistics, algebra, or calc so I just assumed statistics. Anywho.. I am having difficulty understanding this question:
On May 16, 2000, the US Federal Reserve increased its target for the federal funds rate from 6.00% to 6.50%. This change of ___________ means that the Fed raised its target by approximately __________________.
For the first blank it is looking for a change in percentage points. Would it be a change of 0.50 percentage points? For the second blank it is looking for the percentage by which the Fed raised its target, which I cannot seem to figure out. Any suggestions/help would be awesome!
2. ## Re: Percentage Change
Yes, the first part is a change of 0.50 percentage points.
For the second part, percentage change is the change relative to the original
$\text{percentage change}=\frac{\text{final}-\text{original}}{\text{original}}$
$\text{percentage change}=\frac{6.5-6}{6}$
$\text{percentage change}=\frac{0.5}{6}\approx 0.0833 \approx 8.33\%$
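To make the two notions concrete, here is a quick Python sketch (the variable names are mine, for illustration):

```python
old_rate = 6.00  # Fed target before May 16, 2000, in percent
new_rate = 6.50  # Fed target after the increase, in percent

# Change in percentage points: a simple difference.
point_change = new_rate - old_rate  # 0.5 percentage points

# Percentage change: the difference relative to the original value.
percent_change = (new_rate - old_rate) / old_rate

print(point_change)                    # 0.5
print(round(100 * percent_change, 2))  # 8.33
```

So the second blank is approximately 8.33%.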
|
{}
|
1. Jan 28, 2005
I'm working on a mathematical model of Newton's cradle. I'm trying to explain mathematically why, when two balls are released, two balls pop up on the other side, etc.
I said that if there are N balls in the cradle, each with mass m, and the two balls you displace have gained a velocity v by the time they hit the others, then initially the momentum is 2mv and the KE is mv^2. Then I said that if n balls emerged from the other side with the same velocity, that velocity would need to equal 2v/n for conservation of momentum. Then the KE is equal to (2mv^2)/n, and we see KE can only be conserved when n=2.
I was just wondering if anyone can help me prove this result for when the velocities of the n balls emerging from the other side are not all equal. I.e. then the average velocity of the n balls would still need to be 2v/n, but I don't know how to prove that n must equal 2. Thanks
2. Jan 28, 2005
Basically, if you let v1, v2, ..., vn be the velocities of the n balls that 'pop out' after the collision, ordered so that v1 ≤ v2 ≤ ... ≤ vn, then from the conservation of momentum we get:
v1 + v2 + ... + vn = 2v
and from the conservation of KE we get:
(v1)^2 + (v2)^2 + ... + (vn)^2 = 2v^2
Is there any way I can show that the only solution to this is v_n = v_(n-1) = v and v1 = v2 = ... = v_(n-2) = 0?
3. Jan 28, 2005
Alternatively, can anyone prove in another way that if we have N balls and we displace 2, then 2 will 'pop out' the other side? Any ideas welcome!! Thanks
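The equal-velocity argument from the first post is easy to check numerically. A Python sketch (the function and variable names are my own):

```python
def equal_speed_outcome(n, m=1.0, v=1.0):
    """If n balls leave with a common speed u, conservation of momentum
    (n*m*u = 2*m*v) forces u = 2v/n; return the outcome's (momentum, KE)."""
    u = 2.0 * v / n
    return n * m * u, 0.5 * n * m * u ** 2

incoming_ke = 1.0  # m*v^2 with m = v = 1 (two balls arriving at speed v)

for n in range(1, 6):
    p, ke = equal_speed_outcome(n)
    # momentum is 2mv for every n by construction; KE matches only at n = 2
    print(n, p, ke, abs(ke - incoming_ke) < 1e-9)
```

Note, however, that the two conservation equations alone do not single out the two-ball outcome once unequal velocities are allowed: with m = v = 1, the velocities (1/3, 1/3, 4/3) also satisfy v1 + v2 + v3 = 2 and v1^2 + v2^2 + v3^2 = 2. So the observed behaviour of the cradle cannot follow from conservation of momentum and energy by themselves; extra physics of the sequential impacts is needed.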
|
{}
|
# How can one interpret the Stata output for Multiple Correspondence Analysis?
As an alternative to conducting exploratory factor analysis on a set of data, with binary responses, I have been suggested to use Multiple Correspondence Analysis (MCA).
Following is a curtailed and slightly modified version of the output that I receive from Stata.
. mca q*, method(burt) normalize(principal) compact
Multiple/Joint correspondence analysis
Number of obs = 61 Total inertia = .0440935
Method: Burt/adjusted inertias Number of axes = 2
| principal cumul
Dimension | inertia percent percent
------------+----------------------------------
dim 1 | .0078404 17.78 17.78
dim 2 | .0066631 15.11 32.89
dim 3 | .005856 13.28 46.17
dim 4 | .0035926 8.15 54.32
dim 5 | .0026133 5.93 60.25
dim 6 | .0020177 4.58 64.82
dim 7 | .0016913 3.84 68.66
dim 8 | .0012484 2.83 71.49
dim 9 | .0008873 2.01 73.50
dim 10 | .000839 1.90 75.41
dim 11 | .0006795 1.54 76.95
dim 12 | .0004584 1.04 77.99
dim 13 | .0002605 0.59 78.58
dim 14 | .0002364 0.54 79.11
dim 15 | .0001877 0.43 79.54
dim 16 | .0000679 0.15 79.69
dim 17 | .000035 0.08 79.77
dim 18 | .0000192 0.04 79.82
dim 19 | 1.73e-06 0.00 79.82
------------+----------------------------------
Total | .0440935 100.00
Statistics (x1000) for column categories in principal normalization
------------------- overall ---------- dimension 1 ------- dimension 2 ----
Categories| mass qualt %inert | coord sqcor contr | coord sqcor contr |
-------------+--------------------+-------------------+-------------------+
q1a | | | |
0 | 6 762 10 | 43 27 1 | 33 16 1 |
1 | 9 762 7 | -30 27 1 | -23 16 1 |
-------------+--------------------+-------------------+-------------------+
q1c | | | |
0 | 13 771 2 | -53 404 5 | -17 43 1 |
1 | 2 771 14 | 352 404 31 | 115 43 4 |
-------------+--------------------+-------------------+-------------------+
Can someone help me with interpreting this output and compare it to a factor analysis output.
I am assuming that for interpreting the MCA output, principal inertia should be interpreted similarly to eigenvalues. Does there exist a criterion, like eigenvalue > 1, for MCA as well for choosing the number of items/dimensions to retain?
Are the "contr" columns in the second table similar to factor loadings?
Thanks,
May
I'm not a Stata user and won't interpret the specific output you show, all the more so because you gave only results, not the data to analyze. Instead, I'll offer a few lines about the relationship between the types of analyses, just to guide you.
Multiple Correspondence Analysis (which is Correspondence Analysis of a 3+ dimensional contingency table, with optional computation of individuals' "object scores"), MCA, is an optimal-scaling dimensionality-reduction / mapping technique for the case where all the input variables are nominal.
It is actually a particular case of, and becomes equivalent to, Categorical Principal Component Analysis (CatPCA) when the latter uses multiple nominal quantification for all the input variables.
If all the variables are dichotomous, then MCA is equivalent to CatPCA using any type of quantification, because a variable with just 2 categories can be quantified in only one way: linearly. And in this way MCA/CatPCA becomes almost equivalent to usual linear PCA.
The equivalencies pertain to eigenvalues and object scores (= component scores). MCA does not, or normally should not, output component-variable loadings, because MCA uses "multiple nominal quantification", which treats every value (category) of a variable as a separate variable (a dummy). What corresponds in MCA to the variable "loadings" of PCA is the coordinates of the centroids of the categories of the variables.
But in your case, because all the variables are dichotomous and so each variable is fully represented by one nonredundant category (one dummy), you may use and interpret the results of just PCA with its loadings, in place of MCA with its centroid coordinates. To repeat: since all variables are binary, you actually don't need MCA; usual PCA suffices$^1$ $^2$.
Note that in the literature the word inertia can refer to either scale: squared (eigenvalues and their sum) or unsquared (singular values).
$^1$ You may expect just minor differences, primarily due to MCA being iterative and PCA not. If the convergence is not complete, as is often the case, these negligible differences show up.
$^2$ When we are considering to use PCA with binary data it is worth thinking about doing or not doing variable centering before the analysis. Typically, PCA is done on centered or standardized data (= done based on covariances or correlations, respectively). Not centering the data can produce very different PCA results, but they can make more sense in some studies with binary categorical data. For example, in text analytics PCA is frequently done based on cosines which implies data normalization without centering.
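To make the binary-variables point concrete, here is a small Python/NumPy sketch (my own illustration with randomly generated 0/1 data, not the asker's dataset): plain PCA via SVD of the centered data matrix, where the squared singular values play the role of the principal inertias in the Stata table.

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((61, 5)) < 0.4).astype(float)  # 61 observations, 5 binary items

Xc = X - X.mean(axis=0)                        # center each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

eigvals = s ** 2 / (len(X) - 1)       # "principal inertias" (squared scale)
explained = eigvals / eigvals.sum()   # analogue of the 'percent' column
scores = U * s                        # object (component) scores
loadings = Vt.T * s / np.sqrt(len(X) - 1)  # component-variable loadings

print(np.round(explained, 3))
```

Note how in the Stata table the two categories of each binary item (e.g. q1a's 0 and 1 rows) have coordinates of opposite sign: each variable carries only one nonredundant dummy, which is exactly why plain PCA is enough here.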
• Thanks for the detailed help ttnphns. This is helping me a lot in understanding the similarities and differences between mca and (cat)pca. – May Ank Mar 28 '16 at 9:29
@ttnphns's explanation is excellent. That said, Stata definitely decomposes the information in ways that aren't familiar to me, despite having used MCA for years. One thing that is consistent across all packages is the low-dimensional nature of all CA output: two dimensions are always reported. The thing that's missing from this Stata output is the comparable and symmetric output for the rows. Since MCA works on tabular information, that table is either row or column dominant; the choice of which one to focus on is the analyst's.
Regarding the terminology, it's too complex for a quick summary (here) except to note that a key insight to understanding these metrics lies in the geometry behind them.
One of the architects of CA is Michael Greenacre who has been publishing seminal works on it since the 80s. His most recent book on this topic is Correspondence Analysis in Practice on Amazon here:
http://www.amazon.com/Correspondence-Analysis-Practice-Interdisciplinary-Statistics/dp/1584886161/ref=sr_1_1?ie=UTF8&qid=1459161494&sr=8-1&keywords=Correspondence+Analysis+in+Practice
You can access this book online. It contains, in the first chapters, thorough explanations of this terminology as well as of the geometry underlying the decompositions.
• To judge from "works on tabular information, that table is either row or column dominant", I suspect you may be speaking about simple (2-way table) CA. But the OP's situation (although, unfortunately, the data have not been presented) seems different. It looks to me like there were 19 binary variables. That is, a 2x2x2....x2 19-way frequency table, a task for Multiple CA. – ttnphns Mar 28 '16 at 11:22
• @ttnphns That is true, but my point was different in noting that only two dimensions are reported. This is the standard output across CA software. – Mike Hunter Mar 28 '16 at 11:25
• Thank you very much for the Greenacre reference, DJohnson. I am now going through the text. :-) – May Ank Mar 31 '16 at 20:22
|
{}
|
# zbMATH — the first resource for mathematics
Some fixed point theorems in ordered $G$-metric spaces and applications. (English) Zbl 1217.54057
Summary: We study a number of fixed point results for the two weakly increasing mappings $f$ and $g$ with respect to partial ordering relation $\preceq$ in generalized metric spaces. An application to integral equations is given.
##### MSC:
54H25 Fixed-point and coincidence theorems in topological spaces
Full Text:
##### References:
[1] A. C. M. Ran and M. C. B. Reurings, “A fixed point theorem in partially ordered sets and some applications to matrix equations,” Proceedings of the American Mathematical Society, vol. 132, no. 5, pp. 1435-1443, 2004. · Zbl 1060.47056 · doi:10.1090/S0002-9939-03-07220-4
[2] R. P. Agarwal, M. A. El-Gebeily, and D. O’Regan, “Generalized contractions in partially ordered metric spaces,” Applicable Analysis, vol. 87, no. 1, pp. 109-116, 2008. · Zbl 1140.47042 · doi:10.1080/00036810701556151
[3] J. J. Nieto and R. Rodríguez-López, “Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations,” Order, vol. 22, no. 3, pp. 223-239, 2005. · Zbl 1095.47013 · doi:10.1007/s11083-005-9018-5
[4] J. J. Nieto and R. Rodríguez-López, “Existence and uniqueness of fixed point in partially ordered sets and applications to ordinary differential equations,” Acta Mathematica Sinica, vol. 23, no. 12, pp. 2205-2212, 2007. · Zbl 1140.47045 · doi:10.1007/s10114-005-0769-0
[5] J. J. Nieto, R. L. Pouso, and R. Rodríguez-López, “Fixed point theorems in ordered abstract spaces,” Proceedings of the American Mathematical Society, vol. 135, no. 8, pp. 2505-2517, 2007. · Zbl 1126.47045 · doi:10.1090/S0002-9939-07-08729-1
[6] D. O’Regan and A. Petruşel, “Fixed point theorems for generalized contractions in ordered metric spaces,” Journal of Mathematical Analysis and Applications, vol. 341, no. 2, pp. 2505-2517, 2007. · doi:10.1016/j.jmaa.2007.11.026
[7] Z. Mustafa and B. Sims, “A new approach to generalized metric spaces,” Journal of Nonlinear and Convex Analysis, vol. 7, no. 2, pp. 289-297, 2006. · Zbl 1111.54025
[8] Z. Mustafa and B. Sims, “Some remarks concerning D-metric spaces,” in Proceedings of the International Conference on Fixed Point Theory and Applications, pp. 189-198, Valencia, Spain, July 2003. · Zbl 1079.54017
[9] Z. Mustafa, W. Shatanawi, and M. Bataineh, “Existence of fixed point results in G-metric spaces,” International Journal of Mathematics and Mathematical Sciences, vol. 2009, Article ID 283028, 10 pages, 2009. · Zbl 1179.54066 · doi:10.1155/2009/283028 · eudml:45716
[10] Z. Mustafa, H. Obiedat, and F. Awawdeh, “Some fixed point theorem for mapping on complete G-metric spaces,” Fixed Point Theory and Applications, vol. 2008, Article ID 189870, 12 pages, 2008. · Zbl 1148.54336 · doi:10.1155/2008/189870 · eudml:54664
[11] M. Abbas and B. E. Rhoades, “Common fixed point results for noncommuting mappings without continuity in generalized metric spaces,” Applied Mathematics and Computation, vol. 215, no. 1, pp. 262-269, 2009. · Zbl 1185.54037 · doi:10.1016/j.amc.2009.04.085
[12] R. Chugh, T. Kadian, A. Rani, and B. E. Rhoades, “Property p in G-metric spaces,” Fixed Point Theory and Applications, vol. 2010, Article ID 401684, 12 pages, 2010. · Zbl 1203.54037 · doi:10.1155/2010/401684 · eudml:227131
[13] R. Saadati, S. M. Vaezpour, P. Vetro, and B. E. Rhoades, “Fixed point theorems in generalized partially ordered G-metric spaces,” Mathematical and Computer Modelling, vol. 52, no. 5-6, pp. 797-801, 2010. · Zbl 1202.54042 · doi:10.1016/j.mcm.2010.05.009
[14] W. Shatanawi, “Fixed point theory for contractive mappings satisfying \Phi-maps in G-metric spaces,” Fixed Point Theory and Applications, vol. 2010, Article ID 181650, 9 pages, 2010. · Zbl 1204.54039 · doi:10.1155/2010/181650 · eudml:232392
[15] H. Aydi, B. Damjanović, B. Samet, and W. Shatanawi, “Coupled fixed point theorems for nonlinear contractions in partially ordered G-metric spaces,” Mathematical and Computer Modelling. In press. · Zbl 1237.54043 · doi:10.1016/j.mcm.2011.05.059
[16] I. Altun and H. Simsek, “Some fixed point theorems on ordered metric spaces and application,” Fixed Point Theory and Applications, vol. 2010, Article ID 621469, 17 pages, 2010. · Zbl 1197.54053 · doi:10.1155/2010/621469 · eudml:226094
[17] B. Ahmed and J. J. Nieto, “The monotone iterative technique for three-point second-order integrodifferential boundary value problems with p-Laplacian,” Boundary Value Problems, vol. 2007, Article ID 57481, 9 pages, 2007. · Zbl 1149.65098 · doi:10.1155/2007/57481 · eudml:54721
[18] A. Cabada and J. J. Nieto, “Fixed points and approximate solutions for nonlinear operator equations,” Journal of Computational and Applied Mathematics, vol. 113, no. 1-2, pp. 17-25, 2000. · Zbl 0954.47038 · doi:10.1016/S0377-0427(99)00240-X
[19] J. J. Nieto, “An abstract monotone iterative technique,” Nonlinear Analysis: Theory Method and Applications, vol. 28, no. 12, pp. 1923-1933, 1997. · Zbl 0883.47058 · doi:10.1016/S0362-546X(97)89710-6
|
{}
|
# checking configuration history of Turing machine using PDA
I am trying to understand the technique of using configuration history in proofs.
To prove that: $$\{\langle M \rangle \mid M \text{ is a TM and } L(M)=\Sigma^* \}\notin RE$$
given $\langle M, w \rangle$ we have built a Turing machine that accepts all words except the accepting configuration history of M on w (and then a simple reduction).
To prove that: $$\{\langle P \rangle \mid P \text{ is a PDA and } L(P)=\Sigma^*\}\notin RE$$
we showed the same proof, except that we built a PDA that accepts all the words except the accepting configuration history of M on w.
Does a PDA's ability to determine whether the input is an accepting configuration history of M on w actually mean that I can simulate M's run on w with a PDA? Or is testing whether a string is an accepting configuration history different from a simulation?
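They are different. The standard trick is that a PDA can compare two adjacent strings in one pass when the second is written reversed, but the comparison consumes the stack, so it can be done only once per run. A small Python sketch of that stack discipline (the helper name is mine; this is plain code imitating a PDA, not a PDA construction):

```python
def stack_check(s):
    """One-pass, stack-only check that s has the form u + '#' + reverse(u).
    This mirrors the only kind of comparison a PDA can make between two
    adjacent configurations (one of which is written reversed)."""
    stack, seen_sep = [], False
    for ch in s:
        if ch == '#':
            if seen_sep:
                return False
            seen_sep = True
        elif not seen_sep:
            stack.append(ch)  # push the first configuration
        else:
            if not stack or stack.pop() != ch:
                return False  # mismatch found: this step of the history is refuted
    return seen_sep and not stack

print(stack_check("q0ab#ba0q"))  # True: second block is the reverse of the first
print(stack_check("q0ab#q1ab"))  # False: same-orientation comparison fails
```

After one comparison the stack is empty, so the machine cannot also check the next step; a full simulation would need every pair of consecutive configurations checked, which one stack cannot do. Refuting a claimed history needs only one faulty step, whose position the PDA guesses nondeterministically, and that is why the complement construction works while simulation does not.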
|
{}
|
# The 19th Graduate Colloquium of the Swiss Doctoral Program in Mathematics
Good to know
• All talks take place in Hörsaal 119 at Kollegienhaus (address: Petersplatz 1, 4051 Basel).
• We will have the social dinner at Restaurant Union (address: Klybeckstrasse 95, 4057 Basel) on Tuesday evening. It’s a burger place with nice vegetarian (and vegan) options.
• If the weather is nice, bring your swimsuit! Those who want might take a dip in the Rhine.
• The talks should take around 45 min.
• Organiser: Julia Schneider (Julia.noemi.schneider@unibas.ch)
The Plan
Monday, June 4
• 14:00 Welcome Coffee
• 14:30 Linda
• 15:30 Luc
• 16:30 Tour de Bâle
Tuesday, June 5
• 9:30 Good Morning Coffee
• 10:00 Mattias
• 11:00 Christian
• 12:00 Lunch
• 14:00 Lucas
• 15:00 Roman
• 19:00 Dinner
Wednesday, June 6
• 9:00 Good Morning Coffee
• 9:30 Gabriel
• 10:30 Giacomo
• 11:45 Birkhäuser Prize
## Abstracts
Linda Frey (University of Basel)
Introduction to heights, the Bogomolov property and elliptic curves
We will see some beauty of number theory without any technicalities. This talk will introduce even applied math PhD students to some hardcore algebraic number theory without even noticing it. We will introduce the notion of the height of an algebraic number, mention some properties and learn about the Bogomolov property for fields. Using elliptic curves we will construct such a field.
Luc Pétiard (University of Neuchâtel)
TBA
Mattias Hemmig (University of Basel)
TBA
Christian Schulze (University of Basel)
Cellular mixing with bounded palenstrophy
We study the problem of optimal mixing of a passive scalar $\rho$ advected by an incompressible flow on the two dimensional unit square. The scalar solves the continuity equation with a divergence-free velocity field $u$ with uniform-in-time bounds on the palenstrophy. We measure the degree of mixedness of the tracer $\rho$ via the two different notions of mixing scale commonly used in this setting, namely the functional and the geometric mixing scale. We analyze velocity fields of cellular type, which is a special localized structure often used in constructions of explicit analytical examples of mixing flows and can be viewed as a generalization of the self-similar construction by Alberti, Crippa and Mazzucato. We show that for any velocity field of cellular type both mixing scales cannot decay faster than polynomially.
Lucas Dahinden (University of Neuchâtel):
Counting moduli spaces of circular linkages
A (planar) linkage is a collection of bars of fixed length in the plane that are connected through joints around which the bars can freely rotate. Since a linkage has a certain liberty from the joints and rigidity from the bar lengths, it is an interesting task to study the space of its possible positions. As an exercise, try to find the space of positions of a quadrilateral (= circular linkage with four bars) with side lengths (4,3,3,1). Surprisingly, we can find topological invariants of this space by simple combinatorics. When we go a level up, look at the set of possible circular linkages, and count the different moduli spaces that can arise this way, the "simple combinatorics" develop into a hard problem.
Roman Prosanov (University of Fribourg)
A variational approach to Alexandrov-type results
How can we describe the boundary of a 3-dimensional Euclidean polytope from the intrinsic metric viewpoint? One can easily find that it is a metric on the 2-sphere, which is flat everywhere except vertices where it has conical singularities of total angle less than 2pi (we call it positive curvature). A natural question is if this description is complete. It was answered positively by Alexandrov, who proved that every flat metric on the 2-sphere with conical singularities of positive curvature can be uniquely (up to isometry) realized as the induced metric on the boundary of a 3-dimensional convex polytope.
In the 90s, Igor Rivin obtained a similar result for hyperbolic cusp-metrics on the 2-sphere and ideal hyperbolic 3-polytopes. Ten years later, Jean-Marc Schlenker gave a proof for the case of hyperbolic cusp-metrics on surfaces of genus > 1 and ideal Fuchsian 3-polytopes. All their original proofs were indirect.
In our talk we will discuss a new, more constructive approach to results of this type. It is based on resolving singularities in polytopal manifolds by a variational technique. We will also consider some other perspectives of this method.
Gabriel Dill (University of Basel)
Unlikely intersections: a fairy tale
Once upon a time, there were two polynomials G(T) and H(T). They liked roots of unity very much and were always happy when in the forest of complex numbers they found some number at which both their values were roots of unity. They wondered if they could always find another such number or if at some point there would be no more new ones left. They went to see three wise men called Ihara, Serre and Tate who told them the answer. And they lived happily ever after.
Giacomo Elefante (University of Fribourg)
QMC method for integration on manifolds with mapped low-discrepancy points and greedy minimal k_s-energy points
To integrate with the Quasi-Monte Carlo (QMC) method on two-dimensional manifolds we consider two sets of points.
The first is a set of low-discrepancy points mapped by a measure-preserving map. Low-discrepancy points are the best choice for integrating functions by QMC in the unit cube [0,1]^d, but to use them to integrate functions on a manifold we need to preserve their uniformity with respect to the Lebesgue measure.
The second is the set of greedy minimal Riesz s-energy points extracted from a suitable discretization of the manifold. We chose greedy minimal energy points since, thanks to the Poppy-seed Bagel Theorem (cf. Saff), we know that the class of points with minimal Riesz $s$-energy is, under suitable assumptions, asymptotically uniformly distributed with respect to the normalized Hausdorff measure. On the other hand, we do not know whether the greedy extraction produces a set of points that is a good choice for integrating functions by QMC on manifolds.
Through theoretical considerations, by showing some properties of these points, and by numerical experiments, we attempt to answer these questions.
Bibliography:
D. P. Hardin and E. B. Saff, Minimal Riesz energy point configurations for rectifiable $d$-dimensional manifolds., Adv. Math., vol. 193, no. 1, pp. 174-204, 2005.
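As a minimal illustration of the QMC ingredient above (my own Python sketch; the talk's actual setting maps such points onto a manifold via a measure-preserving map), here is integration over the unit square with 2-D Halton points, i.e. van der Corput sequences in bases 2 and 3:

```python
def van_der_corput(n, base):
    """n-th term of the van der Corput sequence in the given base (radical inverse)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton_2d(n_points):
    # 2-D Halton points: van der Corput in bases 2 and 3, a standard low-discrepancy set.
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n_points + 1)]

f = lambda x, y: x * y  # test integrand; its exact integral over [0,1]^2 is 1/4
pts = halton_2d(4096)
qmc_estimate = sum(f(x, y) for x, y in pts) / len(pts)
print(abs(qmc_estimate - 0.25))  # the QMC error, already small at 4096 points
```

The QMC estimate is just the equal-weight average of f over the point set; the whole question is how uniformly the points fill the domain.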
|
{}
|
# University of Hertfordshire
## The Origin of High Energy Emission in the Young Radio Source PKS 1718-649
Research output: Contribution to journal › Article › peer-review
### Documents
• Małgosia Sobolewska
• Giulia Migliori
• Luisa Ostorero
• Aneta Siemiginowska
• Łukasz Stawarz
• Matteo Guainazzi
• Martin Hardcastle
Original language: English
Journal: The Astrophysical Journal
Status: Submitted - 4 Nov 2021
### Abstract
We present a model for the broadband radio-to-$\gamma$-ray spectral energy distribution of the compact radio source, PKS 1718-649. Because of its young age (100 years) and proximity ($z=0.014$), PKS 1718-649 offers a unique opportunity to study nuclear conditions and the jet/host galaxy feedback process at the time of an initial radio jet expansion. PKS 1718-649 is one of a handful of young radio jets with $\gamma$-ray emission confirmed with the Fermi/LAT detector. We show that this $\gamma$-ray emission can be successfully explained by Inverse Compton scattering of the ultraviolet photons, presumably from an accretion flow, off non-thermal electrons in the expanding radio lobes. The origin of the X-ray emission in PKS 1718-649 is more elusive. While Inverse Compton scattering of the infrared photons emitted by a cold gas in the vicinity of the expanding radio lobes contributes significantly to the X-ray band, the data require that an additional X-ray emission mechanism is at work, e.g. a weak X-ray corona or a radiatively inefficient accretion flow, expected from a LINER type nucleus such as that of PKS 1718-649. We find that the jet in PKS 1718-649 has low power, $L_j \simeq 2.2 \times 10^{42}$ erg s$^{-1}$, and expands in an environment with density $n_0 \simeq 20$ cm$^{-3}$. The inferred mass accretion rate and gas mass reservoir within 50-100 pc are consistent with estimates from the literature obtained by tracing molecular gas in the innermost region of the host galaxy with SINFONI and ALMA.
### Notes
10 pages, 2 figures, submitted to ApJ
ID: 26269532
|
{}
|
# growth rate of $\mathbb{Z}^2\rtimes_{\sigma} \mathbb{Z}$?
I am interested in the growth rate of this type of group: $G=\mathbb{Z}^2\rtimes_{\sigma} \mathbb{Z}$, where $\sigma(a)=\begin{pmatrix}x&y\\z&w\end{pmatrix}\in SL_2(\mathbb{Z})$, where $a$ is the generator on the right copy of $\mathbb{Z}$ and the action is just by matrix multiplication.
Here are two examples:
For $\sigma(a)=\begin{pmatrix}1&1\\0&1\end{pmatrix}$, this gives us the discrete Heisenberg group $H_3$, which is nilpotent, and hence by Gromov's theorem, it has polynomial growth rate (see here).
When $\sigma(a)=\begin{pmatrix}2&1\\1&1\end{pmatrix}$, it was mentioned here this group has exponential growth rate.
So my first question is:
1, Could anyone give me a reference showing the link between whether the above group $G$ has polynomial growth rate or not and the properties, say the eigenvalues, of the matrix $\sigma(a)$?
Note that the above $G$ is a polycyclic-by-finite group. My question is:
2, Could anyone give me a polycyclic-by-finite group not of the type of $G$ with exponential growth rate?
• The theorem that $H_3$ and other (virtually) nilpotent groups have polynomial growth is a theorem of Milnor, with the exact degree of polynomial growth computed by Bass. Gromov's theorem is the converse: every group of polynomial growth is virtually nilpotent. – Lee Mosher Aug 11 '13 at 16:02
• @Lee, in your answer, you mentioned Milnor's paper, I checked it, but it is still not clear to me how to relate the nilpotentness of $G$ to the property of $\sigma(a)$, could you give more hints? – Jiang Aug 11 '13 at 22:14
• @Lee, especially, is the lemma 1 in Milnor's paper useful in our situation? – Jiang Aug 11 '13 at 22:17
• @Jiang: For your group to be (virtually) nilpotent, $\sigma(a)$ must fix a point (otherwise the center of $G$ would be trivial). You can also check this is a sufficient condition (quotient by the fixed subgroup, and check it is (virtually) abelian). – Steve D Aug 13 '13 at 2:47
To answer your 2nd question, simply generalize your second example to higher dimensions, e.g. take $\mathbb{Z}^3\rtimes_{\sigma} \mathbb{Z}$ where $\sigma \in SL_3(\mathbb{Z})$ has an eigenvalue not on the unit circle.
• thanks! For the type of $G$, I mean the type of $\mathbb{Z}^d\rtimes \mathbb{Z}$, maybe I should state it clearly next time. – Jiang Aug 11 '13 at 16:14
• Would one of type ${\mathbb Z}^d \lhd {\mathbb Z}^2$ suit? – Derek Holt Aug 11 '13 at 17:36
• @DerekHolt, what does $\vartriangleleft$ mean? – Jiang Aug 11 '13 at 19:28
• I typed the wrong symbol. I meant ${\mathbb Z}^d \rtimes {\mathbb Z}^k$ with $k>1$. You can construct examples like that from algebraic number fields $K$, where you let the torsion-free part of the group of units of $K$ act on the additive group of the integers of $K$, and the action is given by multiplication in $K$. – Derek Holt Aug 11 '13 at 20:15
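A brute-force numerical illustration of the eigenvalue dichotomy discussed in the comments (my own Python sketch, not taken from a reference): counting word-metric ball sizes in $\mathbb{Z}^2\rtimes_\sigma\mathbb{Z}$ by breadth-first search over the standard generators.

```python
def mat_mul(M, N):
    # 2x2 integer matrix product (matrices as tuples of row tuples)
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def ball_sizes(A, radius):
    """Sizes of the balls B(0), ..., B(radius) in Z^2 x|_A Z with generators
    (+-e1, 0), (+-e2, 0), (0, +-1), where (v, k)(w, l) = (v + A^k w, k + l)."""
    Ainv = ((A[1][1], -A[0][1]), (-A[1][0], A[0][0]))  # integer inverse since det A = 1
    P = {0: ((1, 0), (0, 1))}                          # cache of powers A^k
    for k in range(1, radius + 1):
        P[k] = mat_mul(P[k - 1], A)
        P[-k] = mat_mul(P[-(k - 1)], Ainv)
    gens = [((1, 0), 0), ((-1, 0), 0), ((0, 1), 0), ((0, -1), 0), ((0, 0), 1), ((0, 0), -1)]
    ball = {((0, 0), 0)}
    frontier = set(ball)
    sizes = [1]
    for _ in range(radius):
        new = set()
        for (v, k) in frontier:
            Ak = P[k]
            for (w, l) in gens:
                g = ((v[0] + Ak[0][0] * w[0] + Ak[0][1] * w[1],
                      v[1] + Ak[1][0] * w[0] + Ak[1][1] * w[1]), k + l)
                if g not in ball:
                    new.add(g)
        ball |= new
        frontier = new
        sizes.append(len(ball))
    return sizes

heis = ball_sizes(((1, 1), (0, 1)), 6)  # unipotent case: the discrete Heisenberg group
sol = ball_sizes(((2, 1), (1, 1)), 6)   # trace 3: an eigenvalue off the unit circle
print(heis)
print(sol)
```

The first sequence grows polynomially (degree 4, by Bass's formula) and the second exponentially; the dividing line is whether all eigenvalues of $\sigma(a)$ lie on the unit circle, equivalently $|\mathrm{tr}\,\sigma(a)|\le 2$ in $SL_2(\mathbb{Z})$. For references on this dichotomy, see the Milnor and Wolf papers on growth of solvable groups.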
|
{}
|
# Virtual knot diagrams on surfaces with genus?
To the best of my limited understanding, a virtual knot diagram may be thought of as the projection of an embedding of $\mathbb{S}^1$ in a 2-manifold with genus onto $\mathbb{R}^2$. That is to say, it is an extension of the notion of a classical knot diagram to two-dimensional surfaces with genus, allowing for "virtual crossings" wherein one part of the diagram may "cross" another part by passing on the other side of a handle.
My question, then, is as follows:
What sort of object would be represented by a diagram with virtual crossings drawn on such a manifold? For example: if one were to draw a virtual knot diagram on $\mathbb{T}^2$ in a non-trivial manner (i.e. such that the diagram is drawn in a neighborhood of the torus that is not homeomorphic to an open disk in $\mathbb{R}^2$), what would this diagram be the projection of?
Note: If I have misused any terminology please feel free to correct me. (I have yet to take any courses on topology)
|
{}
|
# Simple non-linear regression problem
I'm trying to model a simple use case: predicting the price of a car based on its mileage, with RStudio. I know it's a really naive model, just one variable, but it's for comprehension purposes.
My first attempt was to use the lm function:
predictions <- lm(price~mileage, data = ads_clean)
If I plot the model using the visreg function, I get a scatter plot of my prices/mileages with a straight line (negative slope) on it. I can see from that plot that I can obtain negative predictions (which seems normal given the negative coefficient of the mileage).
The second attempt was to eliminate such negative predictions by using a log10 on the price. What I'm predicting now is not the price, but log10(price). If I want to get back to the 'right' predicted price I use 10^(predictedPrice).
predictions <- lm(log10(price)~mileage, data = ads_clean)
If I plot the model I still get a straight line on my scatter plot, but without negative predictions this time.
How do I get a curve instead of a straight line? I suppose that lm can only generate straight lines (a1*x1 + a2*x2 + .... + b).
May I use another kind of function? glm?
I'd like to get such visreg (red curve):
• Could you provide a data sample as an example? – Tim Mar 19 '15 at 10:34
• These data look overcool! Is this public data? – Elvis Mar 19 '15 at 11:33
• Re "How to get a curve instead of a straight line?" All the lines you have drawn are curves. The blue line fit to the log-log data will, when drawn in the original units, will even look curved. – whuber Mar 19 '15 at 19:12
If you log-transformed your outcome variable and then fit a regression model, just exponentiate the predictions to plot it on the original scale.
In many cases, it's better to use some nonlinear functions such as polynomials or splines on the original scale, as @hejseb mentioned. This post might be of interest.
Here is an example in R using the mtcars dataset. The variables used here were chosen totally arbitrarily, just for illustration purposes.
First, we plot Log(Miles/Gallon) vs. Displacement. This looks approximately linear.
After fitting a linear regression model with the log-transformed Miles/Gallon, the prediction intervals on the log-scale look like this:
Exponentiating the prediction intervals, we finally get this graphic on the original scale:
This ensures that the prediction intervals never go below 0.
We could also fit a quadratic model on the original scale and plot the prediction intervals.
Using a quadratic fit on the original scale, we cannot be sure that the fit and prediction intervals stay above 0.
Here is the R-code that I used to generate the figures.
#------------------------------------------------------------------------------------------------------------------------------
#------------------------------------------------------------------------------------------------------------------------------
data(mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Scatterplot with log-transformation
#------------------------------------------------------------------------------------------------------------------------------
plot(log(mpg)~disp, data = mtcars, las = 1, pch = 16, xlab = "Displacement", ylab = "Log(Miles/Gallon)")
#------------------------------------------------------------------------------------------------------------------------------
# Linear regression with log-transformation
#------------------------------------------------------------------------------------------------------------------------------
log.mod <- lm(log(mpg)~disp, data = mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Prediction intervals
#------------------------------------------------------------------------------------------------------------------------------
newframe <- data.frame(disp = seq(min(mtcars$disp), max(mtcars$disp), length = 1000))
pred <- predict(log.mod, newdata = newframe, interval = "prediction")
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on log scale
#------------------------------------------------------------------------------------------------------------------------------
plot(log(mpg)~disp
, data = mtcars
, ylim = c(2, 4)
, las = 1
, pch = 16
, main = "Log scale"
, xlab = "Displacement", ylab = "Log(Miles/Gallon)")
lines(pred[,"fit"]~newframe$disp, col = "steelblue", lwd = 2)
lines(pred[,"lwr"]~newframe$disp, lty = 2)
lines(pred[,"upr"]~newframe$disp, lty = 2)
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on original scale
#------------------------------------------------------------------------------------------------------------------------------
plot(mpg~disp
, data = mtcars
, ylim = c(8, 38)
, las = 1
, pch = 16
, main = "Original scale"
, xlab = "Displacement", ylab = "Miles/Gallon")
lines(exp(pred[,"fit"])~newframe$disp, col = "steelblue", lwd = 2)
lines(exp(pred[,"lwr"])~newframe$disp, lty = 2)
lines(exp(pred[,"upr"])~newframe$disp, lty = 2)
#------------------------------------------------------------------------------------------------------------------------------
# Quadratic regression on original scale
#------------------------------------------------------------------------------------------------------------------------------
quad.lm <- lm(mpg~poly(disp, 2), data = mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Prediction intervals
#------------------------------------------------------------------------------------------------------------------------------
newframe <- data.frame(disp = seq(min(mtcars$disp), max(mtcars$disp), length = 1000))
pred <- predict(quad.lm, newdata = newframe, interval = "prediction")
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on log scale
#------------------------------------------------------------------------------------------------------------------------------
plot(mpg~disp
, data = mtcars
, ylim = c(7, 36)
, las = 1
, pch = 16
, main = "Original scale"
, xlab = "Displacement", ylab = "Miles/Gallon")
lines(pred[,"fit"]~newframe$disp, col = "steelblue", lwd = 2)
lines(pred[,"lwr"]~newframe$disp, lty = 2)
lines(pred[,"upr"]~newframe$disp, lty = 2)
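The back-transformation trick above is not specific to R. Here is a minimal Python sketch (synthetic data invented purely for illustration; numpy assumed available) showing that a line fit on the log scale, once exponentiated, can never produce a negative prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "price vs. mileage" data: price decays with mileage, stays positive
mileage = rng.uniform(0, 200_000, size=200)
price = 20_000 * np.exp(-mileage / 80_000) * rng.lognormal(0, 0.2, size=200)

# fit a straight line to log(price), as in the answer
slope, intercept = np.polyfit(mileage, np.log(price), deg=1)

# predictions on the original scale: exponentiate the linear predictor
grid = np.linspace(0, 400_000, 50)   # extrapolate far beyond the data on purpose
pred = np.exp(intercept + slope * grid)

print(pred.min() > 0)                # exp() of any real number is positive
```

Even extrapolating far past the observed mileages, the predictions stay positive (if increasingly unreliable), which is exactly the behavior the questioner wanted.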
If all you want is a quadratic term, you can use lm(y~x+I(x^2)). For your model that would mean predictions <- lm(price~mileage+I(mileage^2), data = ads_clean). For higher order polynomials, you can just add them in the same way. You could also try some nonparametric regression, for example locpoly. An example:
x <- rnorm(100)
y <- x + x^2 + rnorm(100)
plot(x, y)
model1 <- lm(y~ x+ I(x^2))
plotdata <- cbind(x, predict(model1))
lines(plotdata[order(x),], col = "red")
Please be aware that, depending on your goal, this might be associated with other problems such as heteroscedasticity. If you want to make inference, you need to pay extra care that the assumptions you would rely on actually appear to be satisfied. But, if you are truly only interested in how to get a curve instead of a straight line and you're just playing around, this is sufficient.
• All of your recommendations will suffer from heteroscedasticity with the example dataset in the question. – Roland Mar 19 '15 at 11:54
• @Roland, sure, but the question was how to "get a curve instead of a straight line". Judging by the first line of the post, finding a good model or make correct inference does not seem to be the objective. – hejseb Mar 19 '15 at 11:58
• On this site, even if OP asks for it, you should not give advice that would get them into trouble; at least not without a warning. – Roland Mar 19 '15 at 12:01
• @Roland if that were a formal guideline I'd have been banned by now – shadowtalker Mar 19 '15 at 12:25
• @Roland, can you provide a link to resources to that would assist the OP (and others) to understand and deal with heteroscedasticity? – Tony Ladson Mar 24 '15 at 22:27
## The Future of the Equity Premium: SUMMARY: PRELIMINARY AND INCOMPLETE
Konstantin Magin and I are closing in on the first draft of our "Future of the Equity Premium" paper...
SUMMARY: PRELIMINARY AND INCOMPLETE: DRAFT
The Future of the Equity Premium
J. Bradford DeLong and Konstantin Magin
December 2006
Suppose that, at the start of some year since the beginning of the twentieth century, you had taken $1,000,000 that you had invested in bonds and believed you would not want to touch for twenty years, and invested it instead in a diversified portfolio of equities. (Or suppose you had been able to borrow $1,000,000 at the long-term government bond rate.) And suppose you had then let both legs of that investment ride for twenty years. What would have been the results in dollars (adjusted for inflation) twenty years later?
The figure above tells you the answer, for each year from 1901, the start of the twentieth century, until 1986--twenty years ago.
In two years you lose: $50,000 if you invest in 1913, and $150,000 if you invest in 1929. In both years, you would have done better to invest in bonds. In all the other years--all 84 of them, 97.7% of the time--you win.
Often you win big. The average return is $4,356,000: more than four times the value of the initial long equity leg of the portfolio. 93% of the time you are up more than $500,000. 80% of the time you are up more than $1,000,000. 59% of the time you are up more than $2,000,000. 42% of the time you are up more than $4,000,000. 19% of the time you are up more than $8,000,000. And the champion moment to go long stocks and short bonds was 1942, just after Pearl Harbor, when your portfolio held until 1962 would have given you an edge of $15,600,000.

This is, in brief, the equity premium puzzle of Mehra and Prescott (1985). Throughout the twentieth century anybody with some initial wealth devoted to bonds and with a twenty-year or more time horizon had a near money machine at his or her disposal. And not a nickel-and-dime money machine either.

Rietz (1988) and Barro (2005) attribute the apparent equity premium to serious and severe risks of equity ownership that do not show up in the twentieth-century sample. The problem is that it is hard to think of what such risks might be. We have one Great Depression in the sample. Isn't that enough? And when the Day of the Revolution comes and the red flag flies from the towers of the Loop, investors in U.S. government bonds will be in no better shape than investors in corporate equities.

At a more subtle level, Weitzman (2006) argues that the equity premium is generated by unknown unknowns--risks that we cannot characterize and that have the potential to have any conceivable effect on our expected utility. In Weitzman's framework, the anomaly is not that stock prices are so low but that they are so high: that the equity premium return is not still bigger than it has been. But once again there is the question of just what these risks are, and why they depress the relative price of stocks rather than bonds.
Once you put aside the possibility that the ex ante distribution of stock relative to bond returns had a big, important, substantial lower tail that just does not show up in the data ex post, attempts to explain the equity premium return as some kind of rational-expectations equilibrium for investors with reasonable degrees of risk aversion are likely to be in vain. You have to construct a scenario in which investors and markets are such that investors do not turn the crank on a money machine. And that is very, very hard to do.

An enormous variety of institutions and investors ought to have taken advantage of this money machine--sold their bonds and put their money in stocks: young parents of newborns looking forward to their children's college, the middle-aged looking at rapidly-escalating health-care costs, the elderly looking forward to bequeathing some of their wealth to their descendants or to worthy causes, workers with defined-contribution pensions, businesses with defined-benefit pensions, life insurance companies, the Social Security Trust Fund, the reserve accounts of the world's central banks, businesses with reputational capital that do not expect to be blindsided by new technologies, hedge funds, and so on.

On the other side of the market, there are all the companies that appear underleveraged: replacing high-priced equity capital with low-priced debt capital would seem to have been as profitable a strategy for a long-lived company as investing in high-return equity rather than low-return debt is for a long-lived investor. Yet they have not done so, or have not done so on a sufficient scale to pick up all the $4,356,000 checks and the occasional $15,600,000 check left on the sidewalk.
We can explain why any one particular group might have good reason not to try to take advantage of the equity premium, but it is much harder to come up with the many good reasons why all of the groups of potential investors, and institutions, and organizations passed the $4,356,000 checks by.
It's not that the existence of an equity premium return is a new discovery. In 1923 financial analyst Edgar L. Smith pointed out that diversified investments in American equities had far outperformed bonds of all types. Edgar Smith's (1923) Common Stocks as Long-Term Investments was not the first publication to point out that stocks earned higher returns than bonds on average—that was well-known. But investments in equities were viewed then as the domain of the risk-loving and the entrepreneurial. Speculators—either possessing inside information about fundamentals or market microstructure, risk-loving, or naïve—owned stocks. But prudent investors did not: the risk of ruin was seen as too high. What Smith did was to publicize the fact that equity diversification worked: diversified stock portfolios produced higher rates of return without bearing higher systematic risk than bond portfolios, especially once one took inflation risk into proper account. A portfolio invested in one or two individual stocks was overwhelmingly risky: "subject," Smith wrote, "to the temporary hazard of hard times, and [might] be permanently lost as a result of a radical change in the arts or of poor corporate management." But, Smith pointed out, these risks could be diversified away: "effectively eliminated through the application of the same principles which make the writing of fire and life insurance policies profitable." By contrast, portfolio diversification did not work for bonds, which were all "subject to the same hazards" which were "not reduced by increasing the number of different bonds held."
Some economists--Blanchard (1993) is probably the best advocate for this position--see the large equity premium return in the past as a mistake on the part of the market, a mistake that the market should be correcting. They anticipate that the equity premium is smaller in the present than it was in the past and that it will vanish or nearly vanish in the future. Blanchard sees the high equity premiums of the 1950s and the 1970s as a combination of excessive salience of the memory of the Great Crash and the Great Depression in investors' minds, and as the result of simple money illusion a la Modigliani and Cohn (1979).
But we do not see any signs that Ms. Market has moved to eliminate past mistakes. The real return on the 20-year U.S. Treasury inflation-protected bond is 2.1%, while the current annual earnings yield on the S&P composite stock market index is 5.7%. These numbers suggest an expected equity premium today of 3.6%, lower than the 6% equity premium return of Mehra and Prescott (1985), but far, far higher than the value of 0.25% per year of Mehra's (2003) baseline representative-agent model for a coefficient of relative risk aversion of 2. And an equity premium of 3.6% per year is enough to double your relative wealth over twenty years, and quadruple it over forty. Plus you get substantial immunity from long-run inflation risk as well.
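The compounding arithmetic behind that last claim is easy to check; a quick back-of-the-envelope sketch in Python:

```python
# An equity premium of 3.6% per year compounds to roughly a doubling of
# relative wealth over twenty years and a quadrupling over forty.
premium = 0.036
twenty = (1 + premium) ** 20
forty = (1 + premium) ** 40
print(round(twenty, 2), round(forty, 2))   # → 2.03 4.12
```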
The equity premium remains a puzzle. And there is no reason to think that it is a puzzle about to vanish. As Rajnish Mehra (2003) wrote:
The data used to document the equity premium over the past 100 years are as good an economic data set as analysts have, and 100 years is a long series when it comes to economic data. Before the equity premium is dismissed, not only do researchers need to understand the observed phenomena, but they also need a plausible explanation as to why the future is likely to be any different from the past. In the absence of this explanation, and on the basis of what is currently known, I make the following claim: Over the long term, the equity premium is likely to be similar to what it has been in the past and returns to investment in equity will continue to substantially dominate returns to investment in T-bills for investors with a long planning horizon.
Or as Warren Buffett wrote twenty years ago:
[T]he secret has been out for fifty years, ever since Ben Graham and Dave Dodd wrote Security Analysis, yet I have seen no trend toward value investing in the 35 years I have practiced it. There seems to be some perverse human characteristic that likes to make easy things difficult. The academic world, if anything, has actually backed away from the teaching of value investing.... It's likely to continue that way.... [T]hose who read their Graham and Dodd will continue to prosper.
2. ## ONE easy trig question
Don't mind the Alvira bit. Please explain algebraically how the diagram gives this equation: d km = 3 tan((pi)x)
3. Originally Posted by delicate_tears
Don't mind the Alvira bit. Please explain algebraically how the diagram gives this equation: d km = 3 tan((pi)x)
note that on a right triangle, we have $\tan\theta=\frac{b}{a}$ where $b$ is the opposite side and $a$ is the adjacent side.
thus, from your figure, we have $\tan\theta=\frac{d}{3} \Rightarrow d = 3\tan\theta$..
now, how come that $\theta = \pi t$?
it was said that light spans a complete rotation (1 rev = $2\pi$) every two minutes and thus the rate of revolution is $\pi$ per minute.
so, at time t minutes, the light has swept through an angle of $\pi$ times $t$, and that is the angle of rotation, thus we have $d = 3\tan(\pi t)$
aww, you were using $x$.. well it is just the same.. they are just dummy variables.
4. and to answer the second part, note that tangent is not defined at $\frac{\pi}{2} + n\pi$.
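To make the model concrete, here is a small numerical check (a Python sketch; t in minutes, d in km) of $d = 3\tan(\pi t)$ and its singularity:

```python
import math

def d(t):
    # distance along the shore lit by the beam at time t (minutes)
    return 3 * math.tan(math.pi * t)

print(abs(d(0.25) - 3) < 1e-9)   # at t = 1/4 the angle is pi/4, so d = 3 km → True
# As t approaches 1/2 the angle approaches pi/2, where tan is undefined:
# the beam points parallel to the shore and d grows without bound.
print(d(0.4999) > 1000)          # → True
```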
5. Originally Posted by mr fantastic
Said where? Certainly not in this thread. Is this a continuation of a question discussed in another thread?
yeah, actually..
# Presentation
Here is the code for the diffusion equation. We took the Vibrating String code and modified it to show the diffusion phenomenon. The equation used is the well-known diffusion equation, also called the Heat Equation without a source term: \displaystyle f_t=\mu f_{xx} The solution method is the same as for the vibrating string: a march in time for the time derivative and a differentiation matrix for the spatial derivative.
clear all; close all;
% parameters
N=200; % number of gridpoints
L=50; % domain length
mu=0.5; % thermal diffusivity
dt=0.05; % time step
tmax=20; % final time
x0=L/2;
t0=0.5;
% differentiation matrices
scale=-2/L;
[x,DM] = chebdif(N,2);
dx=DM(:,:,1)*scale;
dxx=DM(:,:,2)*scale^2;
x=(x-1)/scale;
I=eye(N);
% system matrices
E=I;
A=mu*dxx;
# Boundary Conditions
We know that the solution is zero at the boundaries, so we impose this condition in the system matrices like this:
% boundary conditions
E([1 N],:)=0;
A([1 N],:)=I([1 N],:);
% march in time matrix
M=(E-A*dt/2)\(E+A*dt/2);
# Initial Condition
For the initial condition we use a Gaussian, which (for $\mu = 1/2$, as here) is the exact solution of the diffusion equation at time t0: \displaystyle T=\frac{1}{\sqrt{2\pi t_0}}exp\left(\frac{-(x-x_0)^2}{2t_0}\right)
% initial condition
q=1/sqrt(2*pi*t0)*exp(-(x-x0).^2/2/t0);
% marching loop
tvec=0:dt:tmax;
Nt=length(tvec);
filename = 'Diffusion.gif';
for ind=1:Nt
q=M*q; % one step forward
t=ind*dt;
Tt=1/sqrt(2*pi*(t+t0))*exp(-(x-x0).^2/2/(t+t0));
% plotting
subplot(1,2,1);
plot(x,q,'k+',x,Tt,'g-');
title('Diffusion');
xlabel('x'); ylabel('T');
axis([L/4,3*L/4,-0.1,0.5])
subplot(1,2,2);
plot(t,norm((q-Tt),2),'b.'); grid on; hold on
drawnow
title('Error')
xlabel('t'); ylabel('T');
end
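As a cross-check of the scheme (not of the Chebyshev code itself), the same Crank-Nicolson march can be sketched in Python with a plain finite-difference second derivative. Parameters are copied from the MATLAB listing, and the widening Gaussian serves as the exact solution since $\mu = 1/2$:

```python
import numpy as np

# Finite-difference Crank-Nicolson march for f_t = mu*f_xx, mirroring
# M = (E - A*dt/2) \ (E + A*dt/2) above (a uniform grid replaces Chebyshev).
mu, L, N, dt = 0.5, 50.0, 200, 0.05
t0, x0 = 0.5, L / 2
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

def exact(t):
    # widening Gaussian: exact solution for mu = 1/2
    return np.exp(-(x - x0) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# second-difference matrix; boundary rows left zero => boundary values frozen
A = np.zeros((N, N))
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A *= mu / dx ** 2

I = np.eye(N)
M = np.linalg.solve(I - dt / 2 * A, I + dt / 2 * A)  # march-in-time matrix

q = exact(t0)            # initial condition
steps = 200
for _ in range(steps):
    q = M @ q            # one step forward

err = np.max(np.abs(q - exact(t0 + steps * dt)))
print(err < 0.05)        # numerical solution tracks the analytic Gaussian
```

The printed check mirrors the error subplot of the MATLAB script: the marched solution stays close to the analytic Gaussian as it spreads.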
## (Solved) What Is The Bound For Your Error Of Your Estimate Tutorial
## Contents
Let's take a look at an example. Suppose you approximate cos(x) by its degree-6 Taylor polynomial T6(x) on |x| ≤ 1. The error is given by |cos(x)-T6(x)| = |R6|, and it can be bounded by the first omitted term, 1^8/8!: |cos(x)-T6(x)| < 1/8! = 1/40320 ≈ 0.0000248.
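That bound is easy to confirm numerically; a short Python check (T6 here is the degree-6 Taylor polynomial of cos about 0):

```python
import math

def T6(x):
    # degree-6 Taylor polynomial of cos(x) about 0
    return 1 - x**2 / 2 + x**4 / 24 - x**6 / 720

bound = 1 / math.factorial(8)      # first omitted term at |x| <= 1: x^8/8!
err = abs(math.cos(1.0) - T6(1.0))
print(err < bound)                 # → True
```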
Hence, if you divide your infinite series from (b) by k (the answer to (a)), you have found an estimate for the value of π in terms of an infinite series.
## Alternating Series Estimation Theorem
Since $$\frac{40}{x^2+4}$$ is the same as $$\frac{10}{1+\frac{x^2}{4}}$$, for all x in the interval of convergence, $${\int_0^2 f(x) dx} = 10 {\int_0^2 {\sum_n \left(\frac{-1}{4}\right)^n x^{2n} } dx} = 10{\sum_n \left(\frac{-1}{4}\right)^n \frac{2^{2n+1} }{2n+1}}$$
Before we get into how to estimate the value of a series, let's remind ourselves how series convergence works: it doesn't make any sense to talk about the value of a series that doesn't converge.
Your answer should be in the form kπ, where k is an integer. Suppose you approximate f(x) by T6(x).
Consider the power series $$\sum_n \frac{(2x)^n}{n}$$.
## Power Series Calculator
As we'll soon see, if we can get an upper and lower bound on the value of the remainder, we can use these bounds to get upper and lower bounds on the value of the series itself. If we are unable to get an idea of the size of Tn, then using the comparison test to help with estimates won't do us much good.
If |x| ≤ 1, find a bound on the error in your approximation by using the alternating series estimate.
CarYon — An OI/ACM Contest Test Case Generator based on C++
Revision en11, by luosw, 2020-08-03 15:48:05
# Foreword
Have you ever encountered the following problems when hosting your own OI contest:
• Want to quickly produce a paragraph of text?
• Want to quickly perform mathematical operations to generate data?
• Want to generate test data one by one without using freopen?
• Want to generate a set of random data or series?
• Quickly generate data to match the two programs?
Then you can use CarYon and C++ to quickly generate data. The currently supported features are:
• Randomly generate a paragraph, some words, or some letters
• Break free of the RAND_MAX limit and generate random numbers in any range
• Mathematics library under development, supporting multiple features
• Create some circles, regular polygons and fractions, and use them to perform calculations
Run test.cpp and get strong test data within a minute.
We hope you can help improve this project, and we hope it saves everyone time!
## Something wrong?
You are welcome to open an issue on the GitHub repository to ask questions, and you are also welcome to comment on this post.
# Instructions for use
## How to install?
### npm installation (stable version)
You can go to the GitHub repository to download the latest version (the link is under the next heading), or, with Node.js installed, use:
\$ npm install datamaker-caryon --save
to install the stable version of this data generator.
https://github.com/luosiwei-cmd/caryon
Everyone, remember to star~
### exe installation (stable version)
Visit http://luosw.fun/caryon/caryon-setup.exe to download the installation package and run it; in the installation directory (the default is C://Program Files(x86)/CarYon) you can find the corresponding caryon.h file.
## Data generation
You should know that nearly all functions are in the namespace ca.
The basic operations below assume you include the header file caryon.h. Note that the header must be placed in your program's folder and compiled first; only after the precompiled caryon.h.gch file has been produced can the data generator be used.
makein(1,10){
csh();
xxxxx;
}
This operation creates the files 1.in through 10.in; you can freely change the two parameters of makein, e.g. makein(3,5) produces 3.in through 5.in. When test.cpp finishes running, you will find an extra folder in the root directory containing the files 1.in through 10.in: this is the generated data.
The csh(); command must not be changed or replaced!
Here:
dataname="";
This sets the filename prefix, as in the following program:
#include"caryon.h"
using namespace std;
using namespace ca;
int main(){
dataname="chen_zhe-ak-ioi";
makein(1,10){
csh();
xxx;
}
}
It will create, in the folder data-chen_zhe-ak-ioi under the root directory, the files chen_zhe-ak-ioi1.in through chen_zhe-ak-ioi10.in.
Note that since the new version, no spaces may appear in the dataname field!
After everything is done, remember to use the
closefile();
function to free up memory. (Its effect is similar to fclose; you don't need to call fclose yourself.)
We have learned how to create in files; how do we create the corresponding out files? Let's extend the previous example:
#include"caryon.h"
using namespace std;
using namespace ca;
int main(){
dataname="chen_zhe-ak-ioi";
makein(1,10){
csh();
xxx;
}
makeout(1,10);
}
At this time, there must be a std.exe file in the directory where test.cpp is located, commonly known as the standard (reference) program. Note that it must be correct.
After the program is compiled, the std.exe file can produce the corresponding out file.
Let's create a random number below:
cyrand(a,b);
Its function is to return a random number between a and b .
The random numbers use MT19937 (the Mersenne Twister), which breaks through the limitation of C++'s native RAND_MAX.
(If you want to generate a random number in the long long range, use cyrand_ll()).
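As a point of comparison (this is not CarYon's API, just an illustration), Python's random module is also built on MT19937, so the same "bigger than RAND_MAX" draws look like this:

```python
import random

rng = random.Random(2020)      # MT19937 under the hood

def cyrand_like(a, b):
    """Uniform integer in [a, b], far beyond C's typical RAND_MAX of 32767."""
    return rng.randint(a, b)

x = cyrand_like(0, 10**18)     # comfortably in long long range
print(0 <= x <= 10**18)        # → True
```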
Let's take a look at how to store integer variables in the input file:
inint(a);
instring(b);
Both of these functions are used to input things into the in file. If we want to input a random number, we write:
inint(cyrand());
That's it.
For example:
#include"caryon.h"
using namespace std;
using namespace ca;
int main(){
dataname="test";
makein(1,10){
csh();
inint(cyrand(0,100));
}
}
You will find the files test1.in through test10.in in the data-test folder. Open them with Notepad and you will see that each file contains a random number.
If you don't know how to open in and out files with Notepad, right-click the file, choose "Open with", and pick Notepad. Alternatively, open Dev-C++ and drag the in file into it.
For this program, if we write std.cpp like this :
#include<bits/stdc++.h>
using namespace std;
int main(){
int a;
cin>>a;
cout<<a+10;
return 0;
}
After compiling, change test.cpp to:
#include"caryon.h"
using namespace std;
using namespace ca;
int main(){
dataname="test";
makein(1,10){
csh();
inint(cyrand(0,100));
}
makeout(1,10);
}
Then open the in and out files with Notepad; you will find that each out file contains the number in the corresponding in file plus $10$.
Thanks to the new version's features, a prompt is shown as each file is created, so you always know what the generator is doing!
This is the working principle of the entire data generator.
We can also generate many random things, such as:
cyrand_bool(); //Random Boolean value
cyrand_engs(); //Random lowercase English letter
cyrand_engb(); //Random uppercase English letter
cyrand_formatc(); //Random escape character
cyrand_word(a); //A random word of length a
cyrand_article(a); //A random paragraph of a words
cyrand_letter(); //Random character
These things can be used to DIY and achieve the desired effect.
### Graphs and trees
CarYon supports the function of creating graphs.
#### Create a graph
You can create a graph with the following command:
template<typename T> //No need to write this line
graph<T> example;
This generates a graph with edge weight type T.
#### Join the edge
To add edges to the generated graph, you need:
example.addedge(/*start*/,/*end*/,cyrand(/*min*/,/*max*/))
#### Make a random graph
rand_graph(n,m,min,max,randfunc);
It returns a random graph with n points, m edges, and edge weights between min and max.
If you want to assign a value to the generated graph, directly:
example=rand_graph(n,m,min,max);
#### Graph class member functions
Here are some useful functions:
example.is_connect();
This function returns a Boolean value, which represents whether the graph is connected.
example.output();
Output this graph.
example=rand_dag(n,m,min,max,randfunc);
Returns a directed acyclic graph.
example=rand_tree(n,k,min,max,randfunc);
Return a k-ary tree with n points.
example=connect_graph(n,m,min,max,randfunc);
Make a random connected graph.
### Tool function
For the needs of data generation, CarYon provides some simple tool functions.
1. The choice function:
The parameters are an array, a starting index, and an ending index.
It returns a random element of the array between those two indices.
E.g:
choice(a,1,10);
1. The doubleRandom function:
Returns a floating point number between 0 and 1.
E.g:
doubleRandom();
### Commonly used constants
CarYon provides some commonly used constants.
PI
That is the value of pi. 3.1415926...
E
The value of the natural base. 2.7182818...
ALPHABET_SMALL
A string containing all lowercase letters. "abcdefghijklmnopqrstuvwxyz".
ALPHABET_CAPITAL
A string containing all capital letters. "ABCDEFGHIJKLMNOPQRSTUVWXYZ".
ALPHABET
A string containing all letters. "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ".
NUMBERS
A string containing all numbers. "0123456789".
There is also a math library.
## Program match
During contest preparation, to check whether a low-complexity algorithm is correct, one usually writes a simple brute-force solution to the same problem and then runs both programs on the same large inputs, comparing their outputs.
Now CarYon finally supports this stress-testing (program-matching) feature!
Program matching can be divided into the following steps:
1. Write myprogram.cpp in the current directory and compile it into a myprogram.exe file;
2. Write test.cppand std.cpp according to the data generation module ;
3. Add the following line in test.cpp:
debug(/*start*/,/*end*/);
For example, suppose you are about to confidently submit a high-precision a+b and want to check it against a brute-force version.
First, put the following high-precision a+b into your myprogram.cpp and compile it into myprogram.exe:
#include<iostream>
#include<string>
#include<algorithm>
using namespace std;
int main()
{
string a,b;
int xa[500]={},xb[500]={},tot[500]={};
cin>>a>>b;
for(int i=0;i<a.length();i++)
xa[i]=a[a.length()-i-1]-'0';
for(int i=0;i<b.length();i++)
xb[i]=b[b.length()-i-1]-'0';
int len=max(a.length(),b.length());
for(int i=0;i<len;i++)
tot[i]=xa[i]+xb[i];
for(int i=0;i<len;i++)
{
tot[i+1]+=tot[i]/10;
tot[i]%=10;
}
if(tot[len]) cout<<tot[len];
for(int i=len-1;i>=0;i--)
cout<<tot[i];
cout<<endl;
}
Then fill in the simplest a+b in std.cpp ;
And write test.cpp like this
```cpp
#include"caryon.h" // already includes the universal header files
using namespace std;
using namespace ca; // namespace
int main(){
    dataname="a+btest"; // write your own prefix here
    makein(1,10){
        csh(); /* see the usage document and the two test examples */
    }
    makeout(/*start*/,/*number of times*/);
    debug(/*start*/,/*number of times*/);
    // program-matching command; omit it if you do not need matching
    // the value of makeout must be less than or equal to that of makein
    // compile std and put the resulting exe file in this folder
    return 0;
}
```
Note that, as of the new version, no spaces may appear in the dataname field!
After running, you will find not only the data-a+btest folder with a+btest1.in/out through a+btest10.in/out, but also a new folder debug-a+btest containing a+btest1.ans through a+btest10.ans, the output of myprogram.exe. You can then use cmd's comp command to compare each .out file with the corresponding .ans file!
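If you would rather not run comp by hand for every case, a tiny hand-rolled checker can do the comparison. `compareStreams` below is a hypothetical helper (not part of CarYon) that compares two streams line by line:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <cassert>

// Return true when both streams contain exactly the same lines.
bool compareStreams(std::istream& x, std::istream& y) {
    std::string lx, ly;
    while (true) {
        bool gx = static_cast<bool>(std::getline(x, lx));
        bool gy = static_cast<bool>(std::getline(y, ly));
        if (gx != gy) return false; // one stream has extra lines
        if (!gx) return true;       // both exhausted: files match
        if (lx != ly) return false; // first differing line found
    }
}
```

Opening a+btest1.out and the matching .ans file with std::ifstream and passing them in would report whether case 1 matches.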
## Instructions for using test.cpp
The original content of test.cpp in the root directory is as follows:
```cpp
#include"caryon.h" // already includes the universal header files
using namespace std;
using namespace ca; // namespace
int main(){
    dataname="a+btest"; // write your own prefix here
    makein(1,10){
        csh(); /* see the usage document and the two test examples */
    }
    makeout(/*start*/,/*number of times*/);
    debug(/*start*/,/*number of times*/);
    // program-matching command; omit it if you do not need matching
    // the value of makeout must be less than or equal to that of makein
    // compile std and put the resulting exe file in this folder
    return 0;
}
```
Remember not to change the overall framework of the program; otherwise there will be problems with your execution results.
1. dataname is the prefix of the input and output files; if you leave it blank, there will be no prefix;
2. The number of times in makein() is the number of .in files generated;
3. The csh(); inside makein must not be changed; changing it causes an unknown error;
4. The number of times in makeout must be less than or equal to that in makein; by default, .out files are generated starting from <prefix>1.in, and generation can be continued from there.
### Model test.cpp for generating a+b problem data
```cpp
#include"caryon.h"
using namespace std;
using namespace ca;
int main(){
    dataname="a+btest"; // no spaces allowed in the prefix
    makein(1,10){
        csh();
        inint(cyrand(-1000,1000));
        instring(" ");
        inint(cyrand(-1000,1000));
    }
    makeout(1,10);
    return 0;
}
```
Explanation: the data for a+b are two random numbers separated by a space, so you need the instring(" "); call to add the space; if you need a line break, use instring("\n");. The two random numbers themselves come from cyrand.
High-precision data can be generated with the following loop:
```cpp
inint(cyrand(1,9));       // leading digit must be non-zero
for(int i=0;i<LEN-1;i++){ // LEN is the desired number of digits
    inint(cyrand(0,9));
}
```
The loop above generates a single high-precision number of LEN digits.
The content above is enough to generate data for NOIP-style problems, so I will stop here and leave the rest for users to explore themselves. If you have any questions, please leave a comment, thank you!
There is an important patch; please re-download the library if you use it.
Thank you!
#### History

| Rev. | By | When | Δ | Comment |
|------|----|------|---|---------|
| en11 | luosw | 2020-08-03 15:48:05 | 1282 | |
| en10 | luosw | 2020-08-02 14:27:40 | 69 | |
| en9 | luosw | 2020-08-02 08:09:40 | 70 | |
| en8 | luosw | 2020-08-01 18:18:19 | 2 | Tiny change: '();\n\n### Comm' -> '();\n\n\n### Comm' |
| en7 | luosw | 2020-08-01 08:22:01 | 234 | |
| en6 | luosw | 2020-08-01 08:21:27 | 1379 | |
| en5 | luosw | 2020-07-31 16:51:48 | 26 | |
| en4 | luosw | 2020-07-31 09:28:18 | 20 | Tiny change: 'Test Case Maker based on' -> 'Test Case Generator based on' |
| en3 | luosw | 2020-07-31 08:28:32 | 12 | |
| en2 | luosw | 2020-07-31 08:27:11 | 396 | |
| en1 | luosw | 2020-07-31 07:15:02 | 11934 | Initial revision (published) |
# Remarks on a class of kinetic models of granular media: Asymptotics and entropy bounds
• We obtain new a priori estimates for spatially inhomogeneous solutions of a kinetic equation for granular media, as first proposed in [3] and, more recently, studied in [1]. In particular, we show that a family of convex functionals on the phase space is non-increasing along the flow of such equations, and we deduce consequences on the asymptotic behaviour of solutions. Furthermore, using an additional assumption on the interaction kernel and a "potential for interaction", we prove a global entropy estimate in the one-dimensional case.
Mathematics Subject Classification: Primary: 82C21, 82C22, 82C70.
[1] M. Agueh, Local existence of weak solutions to kinetic models of granular media, 2014. Available from: http://www.math.uvic.ca/ agueh/Publications.html.
[2] L. Ambrosio, N. Gigli and G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures, Lectures in Mathematics, Birkhäuser, Basel, 2005.
[3] D. Benedetto, E. Caglioti and M. Pulvirenti, A kinetic equation for granular media, RAIRO Model. Math. Anal. Numer., 31 (1997), 615-641.
[4] D. Benedetto, E. Caglioti and M. Pulvirenti, Erratum: A kinetic equation for granular media, M2AN Math. Model. Numer. Anal., 33 (1999), 439-441. doi: 10.1051/m2an:1999118.
[5] D. Benedetto and M. Pulvirenti, On the one-dimensional Boltzmann equation for granular flows, M2AN Math. Model. Numer. Anal., 35 (2001), 899-905. doi: 10.1051/m2an:2001141.
[6] A. L. Bertozzi, T. Laurent and J. Rosado, $L^p$ theory for multidimensional aggregation model, Comm. Pure Appl. Math., 64 (2011), 45-83. doi: 10.1002/cpa.20334.
[7] J.-M. Bony, Existence globale et diffusion en théorie cinétique discrète, in Advances in Kinetic Theory and Continuum Mechanics, Springer-Verlag, Berlin, 1991, 81-90.
[8] J. A. Carrillo, R. J. McCann and C. Villani, Kinetic equilibration rates for granular media and related equations: Entropy dissipation and mass transportation estimates, Rev. Matemàtica Iberoamericana, 19 (2003), 971-1018. doi: 10.4171/RMI/376.
[9] J. A. Carrillo, R. J. McCann and C. Villani, Contractions in the 2-Wasserstein length space and thermalization of granular media, Arch. Ration. Mech. Anal., 179 (2006), 217-263. doi: 10.1007/s00205-005-0386-1.
[10] J. A. Carrillo, M. DiFrancesco, A. Figalli, L. Laurent and D. Slepcev, Global-in-time weak measure solutions and finite-time aggregation for nonlocal interaction equations, Duke Math. J., 156 (2011), 229-271. doi: 10.1215/00127094-2010-211.
[11] C. Cercignani and R. Illner, Global weak solutions of the Boltzmann equation in a slab with diffusive boundary conditions, Arch. Ration. Mech. Anal., 134 (1996), 1-16. doi: 10.1007/BF00376253.
[12] E. DiBenedetto, Degenerate Parabolic Equations, Springer-Verlag, New York, 1993. doi: 10.1007/978-1-4612-0895-2.
[13] R. Illner and G. Rein, Time decay of the solutions of the Vlasov-Poisson system in the plasma physical case, Math. Methods Appl. Sci., 19 (1996), 1409-1413. doi: 10.1002/(SICI)1099-1476(19961125)19:17<1409::AID-MMA836>3.0.CO;2-2.
[14] T. Laurent, Local and global existence for an aggregation equation, Comm. Partial Differential Equations, 32 (2007), 1941-1964. doi: 10.1080/03605300701318955.
[15] H. Neunzert, An introduction to the nonlinear Boltzmann-Vlasov equation, in Kinetic Theories and the Boltzmann Equation, Lecture Notes in Mathematics, 1048, Springer-Verlag, Berlin, 1984, 60-110. doi: 10.1007/BFb0071878.
[16] G. Toscani, One-dimensional kinetic models for granular flows, RAIRO Modél. Math. Anal. Numér., 34 (2000), 1277-1292. doi: 10.1051/m2an:2000127.
# How is $\zeta(0)=-1/2$? [duplicate]
Possible Duplicate:
Why does $1+2+3+\dots = {-1\over 12}$?
Fermat's Dream by Kato et al. gives the following:
1. $\zeta(s)=\sum\limits_{n=1}^{\infty}\frac{1}{n^s}$ (the standard Zeta function) provided the sum converges.
2. $\zeta(0)=-1/2$
Thus, $1+1+1+\dots=-1/2$? How can this possibly be true? I guess I'm under the impression that $\sum 1$ diverges.
## marked as duplicate by J. M., Asaf Karagila, Jonas Teuwen, Rasmus, Chris Eagle Sep 21 '11 at 17:35
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
As you say, $\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}$ provided the sum converges. This says nothing directly about the value of $\zeta(s)$ when this sum diverges, for example when $s=0$. – Chris Eagle Sep 21 '11 at 17:04
1. is true for $\hbox{Re} \;s > 1$ only... 2. you will have to learn about "analytic continuation" to answer this. – GEdgar Sep 21 '11 at 17:04
See this and this related question. – J. M. Sep 21 '11 at 17:05
@Chris, go and try to explain that to well renowned physicists as Lubos Motl that still assert that the sum itself is what evaluates to minus one twelfth – lurscher Sep 21 '11 at 17:18
However, i would be happy with that assertion if i would be shown evidence that any analytic continuation of that sum needs to be equal to the Riemann Zeta wherever it is well defined – lurscher Sep 21 '11 at 17:22
## 1 Answer
As GEdgar noted, the zeta function is extended to values for which the series diverges via an analytic continuation.
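To sketch how that continuation pins down $\zeta(0)$ (a standard computation, not the only route): for $\operatorname{Re}\, s>1$ the alternating series
$$\eta(s)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^s}=(1-2^{1-s})\,\zeta(s)$$
holds, and $\eta$ extends to an entire function with $\eta(0)=\tfrac12$ (the Abel sum of $1-1+1-\cdots$). Solving for $\zeta$ at $s=0$ gives
$$\zeta(0)=\frac{\eta(0)}{1-2^{1-0}}=\frac{1/2}{-1}=-\frac12.$$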
Ok this helps, I guess I need to study analytic continuation. – Jason Smith Sep 21 '11 at 17:21
# Help figuring out appropriate transistor for LED Cube
#### jerseyguy1996
Joined Feb 2, 2008
206
I am trying to design my first LED Cube using STP16CPC26 LED drivers and an ATTINY85 microcontroller. I am using the TINY85 mostly because I have them and want to use them on something fun but I know it is not typical to use an 8 pin microcontroller in this type of application. My plan is that the first 5 bits sent from the microcontroller will specify the row of the cube to be turned on and the next 25 bits will specify the columns. If I am understanding the datasheet for the LED driver correctly it appears that they will sink current but not source current which (in my very limited understanding of electronics) suggests that I would need to drive PNP transistors with the LED Driver to light specific rows. I have included what I have so far in the attached schematic and Q1 through Q5 are the transistors in question.
The LEDs are driven at 20 mA based on a 1K resistor on R-EXT of the LED drivers which I think would mean that if all LED's in a row are lit that the transistor will have to supply 500 mA. I am horrible at reading transistor data sheets but it seems that all of the ones that I read show them sourcing closer to 10 mA at saturation so I am not sure how to get 500 mA. Can someone help me understand what I should be looking for to select an appropriate transistor for this application?
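To make the "5 bits of row, 25 bits of column" plan concrete, here is a host-side sketch. The `packFrame` helper and the exact bit layout are my assumptions for illustration, not anything dictated by the STP16CPC26 datasheet:

```cpp
#include <cstdint>
#include <cassert>

// framebuffer[z][y][x]: true = LED on, for a 5x5x5 cube.
// Assumed bit layout: bits 29..25 select the active layer (one-hot),
// bits 24..0 carry the 25 column states of that layer.
uint32_t packFrame(const bool fb[5][5][5], int layer) {
    uint32_t word = 1u << (25 + layer);    // one-hot layer select
    for (int y = 0; y < 5; ++y)
        for (int x = 0; x < 5; ++x)
            if (fb[layer][y][x])
                word |= 1u << (y * 5 + x); // column bit
    return word;
}
// Each refresh tick you would shift packFrame(fb, z) into the driver
// chain, pulse the latch, then advance z = (z + 1) % 5.
```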
#### thatoneguy
Joined Feb 19, 2009
6,359
The way I did mine was to only light one LED at a time per port.
My vertical layers were "Z", X was left to right, Y was front to back. Only one Z was enabled at any given moment, then I enabled the out x and y ports for that layer. Left them on while I went to next layer, once buffer was ready, disabled previous layer, put out new bits, enabled new layer. I did the output on an interrupt at about 1kHz, the entire array was mirrored in a RAM framebuffer which was manipulated using draw functions that were defined. When the interrupt happens, the framebuffer was put out to the LED cube. There wasn't any problems with the interrupt happening in the middle of drawing a line or lighting up a plane, as it was only 1/1000th of a second, so the function would finish and the result would look perfectly smooth. The entire cube was refreshed 200 times per second.
I used a PIC with more pins to direct drive the X/Y, but should be able to use shift registers for the high driver, since you only need a single 20mA port to drive one (x,y) LED (that flickers at 1kHz with high duty cycle to appear brightly lit), and the IC driver to sink all of the LEDs on that Z layer (few hundred mA max).
I structured mine like:
Rich (BB code):
zmax
| / ymax
| /
|/_____xmax
Last edited:
#### nickelflipper
Joined Jun 2, 2010
280
Using the STP16CPC26 as a pfet high side driver should work fine for the 5 levels of the cube. Driving the gate low with a cmos output will turn on a logic level pfet, if the total gate charge (Qg) is low enough (say less than 25 nC?, the lower the better). For instance, I have used a 74hc138 to drive the gate to an irf7304. which had a led load up to 500ma. There should be many pfets with similar characteristics.
#### jerseyguy1996
Joined Feb 2, 2008
206
Using the STP16CPC26 as a pfet high side driver should work fine for the 5 levels of the cube. Driving the gate low with a cmos output will turn on a logic level pfet, if the total gate charge (Qg) is low enough (say less than 25 nC?, the lower the better). For instance, I have used a 74hc138 to drive the gate to an irf7304. which had a led load up to 500ma. There should be many pfets with similar characteristics.
Is pfet synonymous with p-channel mosfet?
#### thatoneguy
Joined Feb 19, 2009
6,359
Is pfet synonymous with p-channel mosfet?
Yes. It's just abbreviated, used to switch high side voltage/source current rather than switch/sink current for n-fet/N-Channel MOSFET, although you can switch current with an N-MOSFET if you use a high side driver, which makes things complicated, high side drivers and N-Fet are pretty much only used when driving large loads such as motors in an H-Bridge.
#### jerseyguy1996
Joined Feb 2, 2008
206
Yes. It's just abbreviated, used to switch high side voltage/source current rather than switch/sink current for n-fet/N-Channel MOSFET, although you can switch current with an N-MOSFET if you use a high side driver, which makes things complicated, high side drivers and N-Fet are pretty much only used when driving large loads such as motors in an H-Bridge.
So I revised my schematic to use the p-fet. Does the attached schematic look correct? Would I need weak pullups on the fet gates to insure cutoff or will the LED driver provide a high enough logic level to shut the fet off?
#### jerseyguy1996
Joined Feb 2, 2008
206
The way I did mine was to only light one LED at a time per port.
My vertical layers were "Z", X was left to right, Y was front to back. Only one Z was enabled at any given moment, then I enabled the out x and y ports for that layer. Left them on while I went to next layer, once buffer was ready, disabled previous layer, put out new bits, enabled new layer. I did the output on an interrupt at about 1kHz, the entire array was mirrored in a RAM framebuffer which was manipulated using draw functions that were defined. When the interrupt happens, the framebuffer was put out to the LED cube. There wasn't any problems with the interrupt happening in the middle of drawing a line or lighting up a plane, as it was only 1/1000th of a second, so the function would finish and the result would look perfectly smooth. The entire cube was refreshed 200 times per second.
I used a PIC with more pins to direct drive the X/Y, but should be able to use shift registers for the high driver, since you only need a single 20mA port to drive one (x,y) LED (that flickers at 1kHz with high duty cycle to appear brightly lit), and the IC driver to sink all of the LEDs on that Z layer (few hundred mA max).
I structured mine like:
Rich (BB code):
zmax
| / ymax
| /
|/_____xmax
I have actually set mine up like that with each pin of the LED driver only sinking current from one LED at a time but my horizontal layers are also being driven by the LED Driver so I need to drive a transistor on the high side to source enough current to drive potentially 25 LED's all at once.
#### thatoneguy
Joined Feb 19, 2009
6,359
I have actually set mine up like that with each pin of the LED driver only sinking current from one LED at a time but my horizontal layers are also being driven by the LED Driver so I need to drive a transistor on the high side to source enough current to drive potentially 25 LED's all at once.
You could switch it around, use 74HC595 Shift Registers for X and Y on the high side (4 outputs from uC), which would supply your individual layer, and then sink that entire layer using the driver, which would enable all the features of the driver, such as PWM on some layers for fade effects, etc.
# siunitx + fouriernc = Size substitutions with differences? [duplicate]
Possible Duplicate:
xfrac + siunitx gives me a font warning
This document (New Century Schoolbook font in \SI)
```latex
\documentclass{scrreprt}
\usepackage[utf8]{inputenc}
\usepackage{fouriernc}
\usepackage[T1]{fontenc}
\usepackage{siunitx}
\begin{document}
\SI{1}{\metre\per\second}
\end{document}
```
produces this warning
LaTeX Font Warning: Size substitutions with differences
(Font) up to 2.01195pt have occurred.
Additional information: this only happens with the NC font in \SI with \per ("power to the -1"). Outside of \SI it does not happen (for example $a^{-1}$ does not produce the warning), and it only happens on the units, not on the magnitudes ($10^{-1}$).
• Why does this happen?
• Does siunitx require another font size for powers? Why?
• How to fix this warning?
## marked as duplicate by lockstep, Claudio Fiandrino, zeroth, Marco Daniel, Thorsten Oct 10 '12 at 17:02
Load the fix-cm package. See tex.stackexchange.com/questions/32378/… – lockstep Oct 10 '12 at 10:20
Thanks, this removes the warnings, but care to explain why this package works for the New Century font when it should fix problems in Computer Modern? – Foo Bar Oct 10 '12 at 10:23
Not sure myself, but siunitx seems to rely on Computer Modern for certain symbols even when other font packages are loaded. – lockstep Oct 10 '12 at 10:25
BTW, without fix-cm I do get a warning outside tabular. Please test again and, if true, edit your question. – lockstep Oct 10 '12 at 10:30
Thanks, you are right. Maybe I did something different on the first run. I'll edit the question. – Foo Bar Oct 10 '12 at 10:38
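Collecting the comments into a sketch of the fix (loading fix-cm before \documentclass is the placement its documentation recommends; I have not tested every font combination):

```latex
\RequirePackage{fix-cm}% load before \documentclass
\documentclass{scrreprt}
\usepackage[utf8]{inputenc}
\usepackage{fouriernc}
\usepackage[T1]{fontenc}
\usepackage{siunitx}
\begin{document}
\SI{1}{\metre\per\second}% the size-substitution warning should be gone
\end{document}
```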
The siunitx package jumps through a lot of 'hoops' to give the correct appearance of output as far as possible. That means quite a bit of font detection and math/text mode switching. The warning can be generated by an example such as
```latex
\documentclass{article}
\usepackage{fouriernc}
\usepackage{siunitx}
\begin{document}
\ensuremath{^{\text{{\unboldmath$-1$}}}}
\end{document}
```
where you'll note that there is no change of font at all (without \unboldmath the warning is slightly different). As far as I know, there should be no change in the fonts actually used in the output: a quick check shows that everything looks OK.
Thanks, but how does this solve the problem? It's just another example, a bit more "low-level" however. Why does your code produces the warning and what do you suggest to fix it? – Foo Bar Oct 10 '12 at 12:47
# How to determine Center of Gravity? [closed]
I came across this question while having a conversation with someone. We know that the center of gravity of a solid cube is at the intersection of the lines connecting opposite vertices of the cube. Suppose you have a hollow cube, filled with water/air. Now, how do you find the center of gravity of that object?
## closed as too localized by Waffle's Crazy Peanut, Nic, Manishearth♦Apr 27 '13 at 8:45
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
Hi Learner. Welcome to Physics.SE. How do you find... - Do you ask for some kinda formula-suggestion..? – Waffle's Crazy Peanut Apr 24 '13 at 8:25
intuition is enough – Learner Apr 24 '13 at 8:27
What you call "centre of gravity" is more commonly called center of mass (COM). The general formula is not too hard:
$$\mathbf{r}_{COM} = \frac{\sum \mathbf{r}_i m_i } {\sum m_i}$$
where $\mathbf{r}_i$ are the position vectors of all individual masses $m_i$.
In words: multiply each mass by its offset w.r.t. some coordinate system. Do this for all the masses you might have in a system, and add everything up. Divide the end result by the total mass of the system, i.e., the sum of all the masses.
Now, if you do not have individual masses but a certain density distribution throughout an arbitrary volume $V$, just turn the summation into an integral:
$$\mathbf{r}_{COM} = \frac{\int\int\int{\rho(\mathbf{r})\mathbf{r}dV}}{\int\int\int{\rho(\mathbf{r})dV}}$$
Lastly, the COM of a system of different objects of which you already know the COM and total mass $M$ (like your cube, and the cubical body of water it contains), can be computed by substituting their masses and relative positions into the first equation.
If the hollow cube of mass 1 with sides 1 is completely filled with water of mass 3, the calculation is simple. The hollow cube has its $\mathbf{r}_{COM}$ at the intersection of the diagonals, as does the body of water. Taking a coordinate system in the centre of the cube:
$$\frac{1 \cdot \mathbf{0} + 3 \cdot \mathbf{0} } {1 + 3} = \mathbf{0}$$
so, the COM is simply in the centre of the cube.
Filling the cube only with a third of the water changes things. The centre of mass for each 1/3 slice of water lies in the center of that slice, so, the centre of mass of only the bottom 1/3 slice of water lies at $z=-1/3$:
$$\frac{1 \cdot \mathbf{0} + \frac33 \cdot [0\,, 0\,, -\frac13] } {1 + \frac33} = [0\,, 0\,, -\frac{1}{6}]$$
Where $[x\,, y\,, z]$ indicates a vector at coordinates $x$, $y$ and $z$, respectively.
So, the COM now lies at 1/6 down from the centre of mass of the cube alone.
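A quick numerical check of that last result (a sketch with the example's masses and positions hardcoded; not general-purpose code):

```cpp
#include <cassert>
#include <cmath>

// Weighted average of z-coordinates: z_COM = sum(m_i * z_i) / sum(m_i)
double zCenterOfMass(const double m[], const double z[], int n) {
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; ++i) { num += m[i] * z[i]; den += m[i]; }
    return num / den;
}
// Cube shell of mass 1 at z = 0, plus one 1/3 slice of water
// (mass 3/3 = 1) centred at z = -1/3  =>  z_COM = -1/6.
```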
I didn't get that. Center of Mass is commonly Center of Gravity? – idiosincrasia23 Apr 24 '13 at 13:34
Center of mass and center of gravity is not the same thing. Admitted, they often are at the same point though. The center of gravity is the integral of the force of gravity, while the center of mass is the integral over the mass density. If you have a non-uniform acceleration of gravity they are different (such as for very tall buildings). – S. Gammelmark Apr 24 '13 at 13:49
@S.Gammelmark: Hmm, didn't know that, thanks. I can see why the distinction is useful. Perhaps you should adjust the wiki article on CoM (because centre of gravity just links to CoM) However, I think the OP simply meant the centre of mass :) – Rody Oldenhuis Apr 25 '13 at 5:31
There is actually a link at wikipedia: Centers of gravity in non-uniform fields. Turns out that I was not entirely accurate, since my description only applies to fields with a constant direction through space, not for general non-uniform fields. In any case, I think you are correct, that the OP was referring to CoM. – S. Gammelmark Apr 25 '13 at 7:47
Actually, a hollow cube (assuming it's hollowed out symmetrically) completely filled with water/air just has its center of gravity at the intersection of the space diagonals while a solid cube doesn't generally have that. It only has that if the mass is distributed homogeneously. And in a hollow cube completely filled with air/water, the mass is distributed homogeneously, so the center of gravity is where you expect it to be.
Now, if you only fill the hollow cube with water halfway, the situation is different. The center of gravity must then obviously move down. How far down depends on the mass of the left-over faces of the hollow cube and the mass of the water. The lower the mass of the faces, the closer the cg of the whole will be to the cg of the water itself (which is at the intersection of the space diagonals of the water cuboid). If the mass of the faces, however, is far greater than that of the water, the cg will only move down ever so slightly.
A hollow cube filled with air will always have its center of gravity at the intersection of the space diagonals since air is a gas which always spreads out homogeneously. You might now also see why - as Rody Oldenhuis points out in his answer - we more often refer to the center of gravity as the center of mass; it is determined completely by the mass distribution of the object.
# Invertible matrix
## Homework Statement
Let A be a square matrix.
If B is a square matrix satisfying AB=I
## Homework Equations
Prove that B=A^-1
## Answers and Replies
radou
Homework Helper
Well, you can first use the relation det(AB) = det(A)det(B). What does it tell you?
mjsd
Homework Helper
you can show this by showing that the inverse is unique.. ie. if AB=AC=I then B=C
radou
Homework Helper
you can show this by showing that the inverse is unique.. ie. if AB=AC=I then B=C
Or, you have to show that, for a regular matrix A, and matrices B, C (where C is the inverse of A, the existence of which you have the right to assume after working out the hint in post #2), the cancellation law holds.
AlephZero
Science Advisor
Homework Helper
You seem to be making this more complicated than it is. You don't know anything about A and B except that A has an inverse. So the only thing that you can use is the definition of what the inverse is:
A^-1.A = A.A^-1 = I
So just use that definition, to get from the equation AB = I to the equation B = A^-1.
Hint: the associative law for multiplication says (XY)Z = X(YZ) for any matrices X Y and Z.
HallsofIvy
Science Advisor
Homework Helper
I imagine that the point of this exercise is to show that the inverse is unique. You know that AB= I. Can you use that to prove that BA= I?
Suppose AC= CA= I. Can you prove that B= C (hint, if AB= I = AC, multiply on both sides, on the left, by C.)
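The uniqueness hint written out (assuming for the moment that a two-sided inverse $C$, with $AC = CA = I$, exists):
$$C = CI = C(AB) = (CA)B = IB = B,$$
so any two-sided inverse of $A$ must equal $B$, i.e. $B = A^{-1}$.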
Thanks so much to all of you for replying.
The main problem is to show that A is invertible;
then I can show that B=A^-1 easily.
Last edited:
I imagine that the point of this exercise is to show that the inverse is unique. You know that AB= I. Can you use that to prove that BA= I?
Suppose AC= CA= I. Can you prove that B= C (hint, if AB= I = AC, multiply on both sides, on the left, by C.)
Can you please show me how I can prove this???
radou
Homework Helper
Actually, the problem is already solved for you. Just re-read the replies.
Actually, the problem is already solved for you. Just re-read the replies.
Thanks.
But can you show me the solution more clearly, please?
I'm still confused about it.
radou
Homework Helper
Thanks.
But can you show me the solution more clearly, please?
I'm still confused about it.
No. You show us exactly which part you're confused about.
AlephZero
Science Advisor
Homework Helper
Thanks so much to all of you for replying.
The main problem is to show that A is invertible;
then I can show that B=A^-1 easily.
It A is not invertible, then det(A) = 0
That leads to a contradiction - see Radou's first hint.
I'm sorry, I need to solve this problem without using determinants.
My attempt at the solution is:_
First: If A is invertible:-
By multiplying by A^-1 on both sides on the left:_
(A^-1)(AB)=(A^-1)I
IB=A^-1
B=A^-1
Second: I need to show now, that A is invertible, to complete the solution.
This is the part which I'm confused about.
Please, I need the solution very quickly. And sorry for the disturbance.
radou
Homework Helper
I'm sorry, I need to solve this problem without using determinants.
My attempt at the solution is:_
First: If A is invertible:-
By multiplying by A^-1 on both sides on the left:_
(A^-1)(AB)=(A^-1)I
IB=A^-1
B=A^-1
Second: I need to show now, that A is invertible, to complete the solution.
This is the part which I'm confused about.
If you need to show that A is invertible, the upper attempt is invalid, since you can't multiply an equation with something you don't even know exists (A^-1).
Yes, but it becomes valid once we show that A is invertible.
There was another part of this problem about BA=I; the reference book solved it the way I've written above.
But that part was easier:
To show that A is invertible, we can show that AX=0 has only the trivial solution.
Multiplying AX=0 by B on the left:
B(AX)=B0
(BA)X=0 and, since BA=I,
IX=0
X=0, which is the trivial solution,
so A is invertible.
But in our case the problem is that we have [AB=I], not [BA=I], so we can't multiply by B and derive that so easily.
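For completeness, here is a determinant-free way to finish the $AB=I$ case, mirroring the trivial-solution argument above: if $Bx=0$, then
$$x = Ix = (AB)x = A(Bx) = A0 = 0,$$
so $Bx=0$ has only the trivial solution and the square matrix $B$ is invertible. Multiplying $AB=I$ on the right by $B^{-1}$ gives $A = B^{-1}$; hence $A$ is invertible and $B = A^{-1}$.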
# Cambridge Coincidences Collection
## Professor David Spiegelhalter of Cambridge University wants to know about your coincidences!
Back in High School, about 2005, my younger brother and I had made a friend named Bryan. We all hit it off and would hang out all the time and it didn't take long until I ended up at his place to hang out after school one day. He'd told me about his dad, actually his grandfather, and said I'd probably think he was cool. So he finally gets home from whatever he was doing that day and he's a big Irish guy from Chicago so he's loud and funny and full of smiles and politeness. After a couple minutes he says I looked just like his buddy from back in the day and that he threw the best parties. I asked him what his buddies name was and sure as hell it was my grandfather. They went to school together and joined the marines together and worked together after Korea. We lived about 40 miles from where they grew up and it just blew me away that after 50 years two old war buddies grandkids would meet and become friends when there's millions of people around and our grandparents hadn't lived near each other since the 60's.
## Classmate coincidence
I was camping in France when, after a day, I noticed a familiar person. It was one of my classmates. I come from the Netherlands, so of the hundreds of campsites we could have chosen, we chose the same one at around the same time.
## Bumping into your neighbor across the world.
My family happened to run into our front-door neighbor while in a train station in Zurich, Switzerland. We are from Florida.
## Ran into family friend at the perfect time in new city
I had moved to New York City a couple of months before. Things were not going well financially and I didn't know if I was going to be able to cut it. I had just paid my rent and I had $3 to my name and no food in my apartment. Figured I would eat a candy bar for dinner. I was leaving work and heading to the train near Times Square looking at my phone and I almost literally walked into my father's lifelong friend, who was visiting from over 4 hours away. He invited me to dinner and treated me to a burger, fries and beer. And gave me $40 for food until my next paycheck. I often wonder how I would have fared if it wasn't for this opportune coincidence.
## My step father’s mother died
My mother and stepfather are Rob and Cheri Robson. In New Orleans we met and became friends with a couple named Rob and Cheree Robinson. Both Robs' mothers died the same week of the same illness.
## A Perfectly Timed Phone Call
A few years ago as summer break was beginning I took a trip to see extended family in China. I had not informed my friends of the exact dates that I would be away, only that I would be gone for about the first chunk of summer break. Obviously, I would also not have regular cellular phone service due to being halfway around the world. Fast forward to my return home, our plane had landed on the runway not even five minutes before I receive a phone call, right as I regain connection to my local phone service. It is my friend - while hanging out with another friend of mine, he decided on a whim to call me just to see if I happened to be around, completely unaware of my travel schedule and if I was even in the country. Somehow, he managed to phone me at near-precisely the moment I touched the ground, the moment I was capable of receiving his phone call. To make things even weirder, it was his first attempt at calling me since summer break began. We were both freaked out by this coincidence, given that he could have called me at any moment nearly 2 months prior and not have gotten through. But it just so happened that he threaded the telecommunication needle at the perfect time.
## First a job, then a house
I moved from Oregon to Washington seven years ago after getting a job doing admin support at a college. The person I replaced had been there her whole career before retiring, so for the 3 years I was there I was known as "the new Pat". I left that position but stayed in the area, and a few years later purchased my first house. This year I was on the assessor's website looking at various data points for my property. I came across a list of previous owners, and one of the names listed looked very familiar. I did some digging, and it turns out the person listed is Pat's husband. So not only did I move up here and take over Pat's old job, but I also bought a house she'd lived in. They were the owners right before the people I'd purchased from.
## Finding a family friend in a park on the other side of the world
I’m from a low-income rural village in the Northern USA. Many people from my hometown have never left the state, much less the country. While traveling in Ireland in 2019, my group decided to walk a little hidden garden path. We walked by two people, and my husband pointed out that they had an accent similar to mine. I stopped them to ask where they were from. Turns out they were from my hometown and had even worked with my dad for over a decade. They had moved away years before for a job opportunity, which is why I didn’t know them. Just proves that if you are from a small town, you can’t go anywhere without the chance of being recognized (and possibly snitched on) by a family friend.
## Twin connection
I was watching a movie on Netflix in bed after getting a flu shot when my identical twin sister texted me to say hi and ask what I was up to. I told her and she said she was watching a movie too, and that she had gotten her flu shot the day before. I asked her what she was watching and it was the same movie I was watching! (Called “Carol,” about a pair of women in the 1950s who have an affair). I was 15 minutes ahead of my sister in the story.
## Neighbor I knew so much about but never met
Growing up, my mom used to talk about this guy she dated before my dad. His name was Chip. That wasn't a nickname, just his name. She said he was great. I knew lots about his family, even having never met him, because they went to the same school and grew up visiting each other's houses. After they broke up, she got together with my dad, and Chip started dating a girl from a different school. Both new significant others didn't like the idea of exes hanging out, so they lost track of each other. Over the next 27 years many things happened. I was born; we moved around a lot. My mom never heard from or saw Chip again. When I was 25, my husband and I purchased a house. This house was 3 hours out in the country, away from the city area where my mom lived as a kid. We went around meeting the neighbors, and one of them was named Chip. Not a nickname, just his name. I kind of chuckled and said I had only ever heard of that not being a nickname one other time: this guy my mom dated before I was born. He laughed and said, "Well, what's your mom's name?" Turns out, it was the same Chip.
TITLE
# Complexity for Extended Dynamical Systems
AUTHOR(S)
Bonanno, Claudio; Collet, Pierre
PUB. DATE
October 2007
SOURCE
Communications in Mathematical Physics;Oct2007, Vol. 275 Issue 3, p721
DOC. TYPE
Article
ABSTRACT
We consider dynamical systems for which the spatial extension plays an important role. For these systems, the notions of attractor, ϵ-entropy and topological entropy per unit time and volume have been introduced previously. In this paper we use the notion of Kolmogorov complexity to introduce, for extended dynamical systems, a notion of complexity per unit time and volume which plays the same role as the metric entropy for classical dynamical systems. We introduce this notion as an almost sure limit on orbits of the system. Moreover we prove a kind of variational principle for this complexity.
ACCESSION #
26430060
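The abstract above defines a complexity per unit time via Kolmogorov complexity of orbits. Kolmogorov complexity itself is uncomputable, but a standard illustrative proxy (not the paper's construction) is the compressed size of an orbit's symbolic itinerary divided by its length: regular orbits compress well, irregular ones do not. The sketch below, a toy assumption-laden illustration only, uses `zlib` for this.

```python
import random
import zlib

def compression_rate(symbols: str) -> float:
    """Approximate complexity per symbol of an orbit's symbolic itinerary,
    using compressed size in bits as a crude stand-in for Kolmogorov
    complexity. This is an illustrative proxy, not a rigorous estimator."""
    data = symbols.encode("ascii")
    return len(zlib.compress(data, 9)) * 8 / len(data)

# A periodic itinerary compresses to almost nothing per symbol...
periodic = "01" * 4000

# ...while an irregular (here: random) itinerary of the same length does not.
random.seed(0)
irregular = "".join(random.choice("01") for _ in range(8000))

print(compression_rate(periodic) < compression_rate(irregular))  # True
```

For an extended system one would further normalize by the spatial volume observed, mirroring the "per unit time and volume" normalization the paper formalizes.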
## Related Articles
• A variational principle for topological pressure for certain non-compact sets. Thompson, Daniel // Journal of the London Mathematical Society;Dec2009, Vol. 80 Issue 3, p585
Let (X, d) be a compact metric space, let f: X ↦ X be a continuous map with the specification property, and let ϕ: X ↦ ℝ be a continuous function. We prove a variational principle for topological pressure (in the sense of Pesin and Pitskel) for non-compact sets of the form...
• ENTROPY OPERATOR FOR CONTINUOUS DYNAMICAL SYSTEMS OF FINITE TOPOLOGICAL ENTROPY. RAHIMI, MEHDI; RIAZI, ABDOLHAMID // Bulletin of the Iranian Mathematical Society;Dec2012, Vol. 38 Issue 4, p883
In this paper we introduce the concept of entropy operator for a continuous system of finite topological entropy. It is shown that it generates the Kolmogorov entropy as a special case. If φ is invertible then the entropy operator is bounded by the topological entropy of φ as its norm.
• Pointwise Variation Growth and Entropy of the Descartes Product of a Few of Interval Maps. Risong Li; Zengxiong Cheng // Pure Mathematics;Oct2011, Vol. 1 Issue 3, p184
In this paper, the definition of pointwise variation growth of interval maps was extended to continuous self-maps on the k-dimensional space I1 × I2 × ⋯ × Ik, where Ii is a closed interval. Let fi: Ii ↦ Ii be a continuous map and the total variation [formula omitted due to image rights restrictions]...
• On the relation between topological entropy and entropy dimension. Saltykov, P. S. // Mathematical Notes;Jun2009, Vol. 86 Issue 1/2, p255
For the Lipschitz mapping of a metric compact set into itself, there is a classical upper bound on topological entropy, namely, the product of the entropy dimension of the compact set by the logarithm of the Lipschitz constant. The Ghys conjecture is that, by varying the metric, one can...
• Finite- and infinite-dimensional attractors for porous media equations. M. Efendiev; S. Zelik // Proceedings of the London Mathematical Society;Jan2008, Vol. 96 Issue 1, p51
The fractal dimension of the global attractors of porous media equations in bounded domains is studied. The conditions which guarantee this attractor to be finite dimensional are found and the examples of infinite-dimensional attractors that do not satisfy these conditions are constructed. The...
• Metric Entropy of Convex Hulls in Hilbert Spaces. Carl, Bernd // Bulletin of the London Mathematical Society;1997, Vol. 29 Issue 4, p452
We show in this note the following statement, which is an improvement over a result of R. M. Dudley and which is also of independent interest. Let X be a subset of a Hilbert space with the property that there are constants ρ, σ > 0, and for each n ∈ N, the set X can be covered by at most n...
• Computing Topological Entropy in a Space of Quartic Polynomials. Radulescu, Anca // Journal of Statistical Physics;Jan2008, Vol. 130 Issue 2, p373
This paper adds a computational approach to a previous theoretical result illustrating how the complexity of a simple dynamical system evolves under deformations. The algorithm targets topological entropy in the 2-dimensional family P Q of compositions of two logistic maps. Estimation of the...
• Entropy and collapsing of compact complex surfaces. Gabriel P. Paternain; Jimmy Petean // Proceedings of the London Mathematical Society;Nov2004, Vol. 89 Issue 3, p763 (G. P. Paternain was partially supported by CIMAT, Guanajuato, México; J. Petean is supported by grant 37558-E of CONACYT.)
We study the problem of existence of $\mathcal{F}$-structures (in the sense of Cheeger and Gromov, but not necessarily polarized) on compact complex surfaces. We give a complete classification of compact complex surfaces of Kähler type admitting $\mathcal{F}$-structures. In the...
• Localizing Entropies via an Open Cover. Kesong Yan; Fanping Zeng; Qi Wang // Journal of Concrete & Applicable Mathematics;Jan2008, Vol. 6 Issue 1, p209
For a given topological dynamical system (X, T), we introduce and study some entropies of open covers. The main result is as follows: (1) the inequality hp(T, U) ≤ hm(T, U) ≤ htop(T, U) ≤ hm(T, U) + hb(T), relating pointwise pre-image entropies of open covers hp(T, U), hm(T, U),...