# zbMATH — the first resource for mathematics

On the theory of nonstationary hydrodynamic potentials. (English) Zbl 0995.35044
Salvi, Rodolfo (ed.), The Navier-Stokes equations: theory and numerical methods. Proceedings of the international conference, Varenna, Lecco, Italy, 2000. New York, NY: Marcel Dekker. Lect. Notes Pure Appl. Math. 223, 113-129 (2002).

The author considers the initial-boundary value problem for the Stokes equations
$$\vec{v}_t-\Delta \vec{v}+\nabla p=0,\quad \nabla\cdot\vec{v}=0,\quad x\in\Omega,\ t\in(0,T),$$
$$\vec{v}\mid_{t=0}=\vec{v}_0(x),\quad \vec{v}\mid_S=\vec{a}(x',t),$$
in a bounded convex domain $\Omega\subset\mathbb{R}^n$, $n\geq 2$, with a smooth boundary $S$. The main result is the following: Assume that $S\in C^{2+\alpha}$, $\alpha\in(0,1)$. For arbitrary $\vec{a}(x,t)$ and $\vec{v}_0(x)$ which are continuous and satisfy the compatibility conditions $\vec{a}(x,0)=\vec{v}_0(x)\mid_S$, $\nabla\cdot\vec{v}_0(x)=0$, $\vec{a}(x,0)\cdot\vec{n}(x)\mid_S=0$, the problem has a continuous solution satisfying the inequality
$$\sup_{x\in\Omega}\sup_{t<T}|\vec{v}(x,t)|\leq c(t)\left(\sup_{x\in S}\sup_{t<T}|\vec{a}(x,t)|+\sup_{x\in\Omega}|\vec{v}_0(x)|\right).$$
For the entire collection see [Zbl 0972.00046].

##### MSC:
35Q30 Navier-Stokes equations
76D07 Stokes and related (Oseen, etc.) flows
35B35 Stability in context of PDEs
## Cryptology ePrint Archive: Report 2016/847

On the smallest ratio problem of lattice bases

Jianwei Li

Abstract: Let $(\mathbf{b}_1, \ldots, \mathbf{b}_{n})$ be a lattice basis with Gram-Schmidt orthogonalization $(\mathbf{b}_1^{\ast}, \ldots, \mathbf{b}_{n}^{\ast})$. The quantities $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\|$ for $i = 1, \ldots, n$ play important roles in analyzing lattice reduction algorithms and lattice enumeration algorithms. In this paper, we study the problem of minimizing the quantity $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ over all bases $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of a given $n$-dimensional lattice. We first prove that for any lattice $L$ of dimension $n$ there exists a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ such that $\|\mathbf{b}_1\| = \min_{\mathbf{v} \in L\backslash\{\mathbf{0}\}} \|\mathbf{v}\|$, $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i$ and $\|\mathbf{b}_{i}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i^{1.5}$ for $1 \leq i \leq n$. This leads us to introduce a new NP-hard computational problem, the smallest ratio problem (SRP): given an $n$-dimensional lattice $L$, find a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of $L$ such that $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ is minimal. The problem inspires the new lattice invariant $\mu_{n}(L) = \min\{\|\mathbf{b}_1\|/\|\mathbf{b}_n^{\ast}\|: (\mathbf{b}_1, \ldots, \mathbf{b}_n) \textrm{ is a basis of } L\}$ and the new lattice constant $\mu_{n} = \max \mu_{n}(L)$ over all $n$-dimensional lattices $L$; both the minimum and the maximum are justified. The properties of $\mu_{n}(L)$ and $\mu_{n}$ are discussed. We also present an exact algorithm and an approximation algorithm for SRP. To the best of our knowledge, this is the first sound study of SRP. Our work provides a new perspective on both the quality limits of lattice reduction algorithms and the complexity estimates of enumeration algorithms.
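As a small illustration of the quantities in the abstract, the Gram-Schmidt norms and the ratios $\|\mathbf{b}_1\|/\|\mathbf{b}_i^{\ast}\|$ can be computed directly; this is only a sketch, and the example basis in the usage note is an arbitrary choice, not one from the paper:

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization b_i* of the rows of B
    (no normalization, as used in lattice reduction)."""
    B = np.asarray(B, dtype=float)
    Bstar = np.zeros_like(B)
    for i in range(len(B)):
        v = B[i].copy()
        for j in range(i):
            # Subtract the projection of b_i onto each earlier b_j*.
            v -= (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
        Bstar[i] = v
    return Bstar

def ratios(B):
    """The quantities ||b_1|| / ||b_i*|| discussed in the abstract."""
    Bstar = gram_schmidt(B)
    b1 = np.linalg.norm(np.asarray(B, dtype=float)[0])
    return [b1 / np.linalg.norm(bs) for bs in Bstar]
```

For the basis `[[3, 0], [1, 1]]` this gives ratios `[1.0, 3.0]`; the SRP asks for a basis of the same lattice that minimizes the last ratio.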
Category / Keywords: lattice reduction, lattice enumeration algorithms, smallest ratio problem Date: received 29 Aug 2016, last revised 6 Sep 2016 Contact author: lijianwei2015 at amss ac cn Available format(s): PDF | BibTeX Citation Note: The paper was submitted to Information and Computation. Short URL: ia.cr/2016/847 [ Cryptology ePrint archive ]
# Columbia/Courant joint probability seminar, Fall 2019

## November 8th, 2019; WWH 512 (Courant Institute)

• 9:30-10:30am  Christopher Hoffman (University of Washington). The shape of a random pattern-avoiding permutation. A permutation that avoids the pattern 4321 has a longest decreasing subsequence of length at most 3. We fix n, choose \sigma a 4321-avoiding permutation uniformly at random, and plot the points of the form (i/n, \sigma(i)/n) for 1 \leq i \leq n. Looking at this plot, it is clear that the indices 1 through n can be partitioned into three sets. By linear interpolation from these three sets we can generate three functions. We show that the scaling limit of this triple of functions is given by the eigenvalues of an ensemble of random matrices. We also discuss the scaling limits of other patterns.

• 10:30-11am Coffee break

• 11am-12pm Gaultier Lambert (University of Zurich). Multivariate normal approximation for traces of random unitary matrices. Let us consider a random matrix U of size n distributed according to the Haar measure on the unitary group. It is well known that for any k≥1, Tr[U^k] converges as n tends to infinity to a Gaussian random variable and that, surprisingly, the speed of convergence is super-exponential. In this talk, we revisit this problem and present non-asymptotic bounds for the total variation distance between Tr[U^k] and a Gaussian. We will also consider the multivariate problem and explain how this affects the rate of convergence. We expect that our bounds are almost optimal. This is joint work with Kurt Johansson (KTH).

• 12-1pm Jonathan Niles-Weed (New York University). Estimation of Wasserstein distances in the Spiked Transport Model. We propose a new statistical model, generalizing the spiked covariance model, which formalizes the assumption that two probability distributions differ only on a low-dimensional subspace.
We study various probabilistic and statistical features of this model, including the estimation of the Wasserstein distance, which we show can be accomplished by an estimator that avoids the "curse of dimensionality" typically present in high-dimensional problems involving Wasserstein distances. However, this estimator does not appear to be computable in polynomial time, and we give evidence that any computationally efficient estimator is bound to suffer from the curse of dimensionality. Our results therefore suggest the existence of a computational-statistical gap. Joint work with Philippe Rigollet.
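As a numerical companion to the second talk, one can sample Haar-distributed unitaries with the standard QR-of-Ginibre construction and observe that Tr[U] already looks like a standard complex Gaussian at modest n. The dimension and sample count below are arbitrary choices for a quick sketch:

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample a Haar-distributed n x n unitary: QR-factor a complex
    Ginibre matrix and fix the phases of R's diagonal entries."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # column-wise phase correction

rng = np.random.default_rng(0)
n, samples = 6, 1500
traces = np.array([np.trace(haar_unitary(n, rng)) for _ in range(samples)])

# For Haar unitaries, Tr[U] is approximately a standard complex Gaussian:
# mean 0 and E|Tr U|^2 = 1, independent of n.
mean_est = traces.mean()
second_moment = np.mean(np.abs(traces) ** 2)
```

Empirically, `mean_est` is close to 0 and `second_moment` close to 1; the talk's point is how remarkably fast this Gaussian approximation kicks in as n grows.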
# Mathematics - Science topic

Mathematics, Pure and Applied Math. Questions related to Mathematics.

• asked a question related to Mathematics Question I want to ask if I can get good resources that explain the mathematical approach behind the Adaptive Model Predictive Control (AMPC) MATLAB toolbox? I have not been able to find the mathematical analysis behind this toolbox, even on the MathWorks webpage. Thank you, Mohamed

• asked a question related to Mathematics Question Problem: 5 minutes of play are worth more than an hour of study. Knowing that: G = Game, S = Study, 1 hour = 60 min. The mathematical formula that defines the statement is: 5 × G > 60 × S. The quantitative ratio of the minutes expressed in the mathematical formula can be simplified: 60 ÷ 5 = 12, therefore the simplified mathematical formula is: G > 12 × S. So, 1 minute of play is worth more than 12 minutes of study. Or it can be said that: game G is worth more than 12 times study S. Therefore, the quantitative value of physical objects (or of spatial and/or temporal quantities) must be calculated differently from the qualitative value of human life experiences. Explain why it is possible___________________________________________________________________ ___________________________________________________________________________ (Exercise based on Fausto Presutti's Model of PsychoMathematics). Agreed with dear David Eugene Booth

• asked a question related to Mathematics Question In several discussions, I have often come across a question on the 'mathematical meaning of the various signal processing techniques' such as the Fourier transform, short-time Fourier transform, Stockwell transform, wavelet transform, etc. - as to what is the real reason for choosing one technique over another for certain applications. Apparently, the ability of these techniques to overcome the shortcomings of each other in terms of time-frequency resolution, noise immunity, etc.
is not the perfect answer. I would like to know the opinion of experts in this field. Utkarsh Singh

There is an esthetic reason why a mathematical method is of interest in signal processing:
- a beautiful algorithm is well articulated, says what it does in a few instructions, and does it in a stable and reliable manner
- this hints at the underlying algebra
With powerful and minimal computation, we go deep into algebraic structures: groups, rings, fields (see references on Évariste Galois as the inventor of the "group" as we know it).
- The Fourier transform is an interesting invention: it allows us to decompose a signal into resonating modes (as for piano music: you produce a sound at frequency F, but also its harmonics N×F...). Naturally there is the aliasing question and the Nyquist theorem for reconstruction.
There are many more time-frequency representations: Fourier, Laplace, discrete or continuous, cosine transform, wavelet transform, etc. The interesting feature of discrete algorithms for those transforms is that you can implement a butterfly structure. The key idea is to replace a very large number of multiplications (in brute-force "non-esthetic" programming) with a smaller number of additions. This idea worked for me in developing a codec system using underlying GF(n) properties. See this patent: The regularity of the processing and the efficiency of the representation go hand in hand. Let me go back to a very basic mathematical construction: take a sequence of n vectors v(1), ..., v(n), and the matrix of cross-products m(i,j) = <v(i), v(j)>. Diagonalising this Gram matrix (an eigendecomposition, closely related to the Gram-Schmidt/QR factorisation) extracts eigenvalues and eigenvectors. In frequency terms, it extracts modes (resonating modes present in the signal). This approach highlights the efficiency side of the representation: it projects the signal onto something found "in itself"; call it principal components if you want.
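The Gram-matrix idea in the answer above can be sketched numerically; the toy signals below are invented purely for illustration:

```python
import numpy as np

# Toy signals: five mixtures of two resonating modes with random phases.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
signals = np.stack([np.sin(2 * np.pi * (3 * t + rng.uniform()))
                    + 0.5 * np.sin(2 * np.pi * (7 * t + rng.uniform()))
                    for _ in range(5)])

# Matrix of cross-products m(i, j) = <v(i), v(j)>.
gram = signals @ signals.T

# Diagonalising the symmetric Gram matrix extracts eigenvalues and
# eigenvectors -- the "principal components" mentioned above.
eigvals, eigvecs = np.linalg.eigh(gram)
```

Because the signals live in a low-dimensional subspace spanned by two frequencies, most of the energy concentrates in a few eigenvalues; projecting onto the corresponding eigenvectors is the principal-component view of the data.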
There are only two reasons for choosing a technique in engineering: (i) it addresses the problem completely; (ii) it is economically implementable. Both criteria are equally important, and a good way to satisfy them is to look for elegant, esthetic solutions (minimal and complete at the same time). Does it help?

• asked a question related to Mathematics Question In recent years, many new heuristic algorithms have been proposed in the community. However, it seems that they all follow a similar concept and have similar benefits and drawbacks. Also, for large-scale problems with higher computational cost (real-world problems), it would be inefficient to use an evolutionary algorithm. These algorithms produce different designs in single runs, so they look unreliable. Besides, heuristics have no rigorous mathematical background. I think that the hybridization of mathematical algorithms and heuristics will help to handle real-world problems. They may be effective in cases in which the analytical gradient is unavailable and finite differences are the only way to obtain gradients (the gradient information may contain noise due to simulation error). So we can benefit from gradient information while still having a global search over the design domain. There are some hybrid papers in the state of the art. However, some people think that hybridization means losing the benefits of both methods. What do you think? Can it be beneficial? Should we improve heuristics with mathematics?

I am surprised that a known scholar with long experience in the transportation domain maintains such a hard stance on heuristic search. Obviously, we live in a world where extreme opinions are those that are most echoed. Truth is, assuming that all practical optimization problems can be solved to optimality (or with approximation guarantees) is essentially wishful thinking.
Given this state of the art, better integration of exact and heuristic algorithms can largely benefit the research community. At the risk of repeating myself, here are some important remarks to consider: • CPLEX and Gurobi (the current state-of-the-art solvers for mixed-integer programming) rely on an army of internal heuristics for cut selection, branching, diving, polishing, etc. Without these heuristic components, optimal solutions could not be found for many problems of interest. CPLEX has even recently made a new release permitting a stronger heuristic emphasis (https://community.ibm.com/community/user/datascience/blogs/xavier-nodet1/2020/11/23/better-solutions-earlier-with-cplex-201). MIP solvers also depend heavily on the availability of good (heuristic) initial solutions to perform well. For many problems, cut separation is also done with heuristics. In the vehicle routing domain, we have a saying: heuristics are the methods that find the solutions; exact methods are those that finally confirm that the heuristics were right (sometimes many decades later, and only for relatively small problems with a few hundred nodes, despite over 60 years of research on mathematical models)... • The machine learning domain is quickly taking over many applications that were previously handled with optimization. Among the most popular methods, deep learning applies a form of stochastic gradient descent and does not guarantee convergence to optimal parameters. Neural networks currently face the same scrutiny and issues as the heuristic community, but progress in this area has still brought many notable breakthroughs. Decision-tree construction and random forests are also largely based on greedy algorithms; the same goes for K-means (a local improvement method) and many other popular learning algorithms. • Even parameter tuning, by the way, is heuristic...
I'm sorry to say it, but most design choices, even in the scientific domain, are heuristic and only qualify as good options through experimentation.

• asked a question related to Mathematics Question Hi, I would really appreciate it if someone helped me out with this MATLAB problem. I have uploaded both the MATLAB file (which is not working properly) and the question. Thank you very much in advance. #MATLAB

Hi, you can directly use the MATLAB function fminsearch, which uses the Nelder-Mead simplex (direct search) method, instead of trying to implement your own version. Best

• asked a question related to Mathematics Question In lands with ancient plain sediments, the courses of rivers change dramatically over time, owing to easy movement and the arrival of the rivers at an advanced geomorphic stage. Are there mathematical arrays that achieve digital processing, such as spectral or spatial enhancements or special filters, to detect buried historical rivers? Ruqayah Al-Ani

The digital elevation model (DEM) is a good tool in the field of remote sensing and GIS; one example is the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM). The ASTER GDEM needs further error-mitigating improvements to meet the expected accuracy specification. RMSE values can be used to represent DEM errors, in addition to the mean error and standard deviation (stddev).

• asked a question related to Mathematics Question The mathematical relations: how do they come about? Thanks

• asked a question related to Mathematics Question List of unsolved problems in mathematics, engineering, industry, science, etc. An Euler brick is a cuboid that possesses integer edges a > b > c and integer face diagonals. If the space diagonal is also an integer, the Euler brick is called a perfect cuboid, although no examples of perfect cuboids are currently known.
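The Euler brick definition above is easy to probe by brute force; this sketch searches for integer-edge cuboids whose three face diagonals are all integers (the search bound 250 is chosen so that the smallest known Euler brick, (240, 117, 44), is in range):

```python
from math import isqrt

def is_square(n):
    """True if n is a perfect square."""
    r = isqrt(n)
    return r * r == n

def euler_bricks(limit):
    """All Euler bricks with edges a > b > c and a <= limit: every face
    diagonal sqrt(a^2+b^2), sqrt(a^2+c^2), sqrt(b^2+c^2) is an integer."""
    found = []
    for a in range(3, limit + 1):
        for b in range(2, a):
            if not is_square(a * a + b * b):
                continue  # prune: (a, b) must already form a Pythagorean pair
            for c in range(1, b):
                if is_square(a * a + c * c) and is_square(b * b + c * c):
                    found.append((a, b, c))
    return found
```

Running `euler_bricks(250)` finds (240, 117, 44); `is_square(240**2 + 117**2 + 44**2)` is False, so its space diagonal is irrational and it is not a perfect cuboid.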
• asked a question related to Mathematics Question In the definition of a group, several authors include the Closure Axiom but several others drop it. What is the real picture? Does the Closure Axiom still have importance once it is given that 'o' is a binary operation on the set G?

What actually happens is that the closure property is common to almost all structures (systems). Therefore, authors who drop it in their texts assume that it is automatically embedded in the structure. Others who include it want to be vivid in their texts for clarity's sake. So, those who drop this important property do not truncate or cancel it entirely.

• asked a question related to Mathematics Question I am considering sending my research about Sophie Germain primes and its relation with primes of the form prime(a) + prime(b) + 1 = prime(c) and prime(b) - prime(a) - 1 = prime(c). Mainly you have to send mathematical research, but other scientific research is accepted too. I don't know the level of the contest, but my chance is that my research has a deep relation with the work of Sophie Germain. Do you have any recommendation on the form in which to present my work and how to write to those responsible for the prize?

I don't understand you. It's not trivial to find two prime numbers whose sum + 1 is another prime number. In fact my formula has a success rate of 80%.

• asked a question related to Mathematics Question Hi, Prof. and Dr., the following is my thesis title. Any comment, please. "A study of the predictive factors of teachers' intention in teaching Mathematics Problem Solving online"

Design optimization, fabrication, and performance evaluation of a solar parabolic trough collector for domestic applications

• asked a question related to Mathematics Question I am working on a research project and I am looking for someone who can help with mathematical matters. Attached for your kind perusal. @Miss.
A. M.

• asked a question related to Mathematics Question Abstract: This paper studies the proof of the Collatz conjecture for a set of sequences of odd numbers with infinitely many elements. These sets are generalized to the set which contains all positive odd integers. This extension is assumed to prove the full conjecture, using the concept of mathematical induction. You can find the paper here: (PDF) Collatz Theorem. Available from: https://www.researchgate.net/publication/330358533_Collatz_Theorem [accessed Dec 21 2020].

The first 11 theorems in your article provide a limited family of numbers that obey the Collatz conjecture and the number of steps needed to reach 1. Unfortunately, in Theorem 12 you have assumed that the Collatz conjecture is true! In fact, your assumption of the existence of b1, b2, ..., bk-1, where k is finite, is exactly the Collatz conjecture, and the rest is an elementary computation of the number of steps to reach 1. Can you prove that k is finite? Obviously, if one assumes k is finite, then one assumes that the Collatz conjecture is true. Anyway, you have determined a nice family of numbers that satisfy the Collatz conjecture. I wish you good luck in showing that k is finite. Best regards

• asked a question related to Mathematics Question How can I extract a mathematical function from a given data set? Maybe spline interpolation.

• asked a question related to Mathematics Question Some mathematical expressions will be helpful. No

• asked a question related to Mathematics Question I am looking for a research paper about the mathematical or computational modelling of protein oxidation (caused by reactive oxygen species). I would really appreciate it if someone helped me with this.

• asked a question related to Mathematics Question Would prefer a book for learners. See Anh C.T., Hung P.Q., Ke T.D., Phong T.T.: Global attractor for a semilinear parabolic equation involving the Grushin operator. Electron. J. Differ. Equ.
32, 1–11 (2008). D'Ambrosio L.: Hardy inequalities related to Grushin type operators. Proc. Am. Math. Soc. 132, 725–734 (2004)

• asked a question related to Mathematics Question I am looking for any book/article reference on the mathematical description of the zero normal flux boundary condition for the shallow water equations. My concern is: for a near-shore case, how is it obvious that we have zero normal flux? Physically it does make sense: we have a near-shore case, and on the boundary there is no flow in the normal direction. How do we explain it mathematically using the continuity equation in the case of steady flow? The continuity equation suggests that $\partial h/\partial t + u\,\partial h/\partial x = 0$. If we take steady flow, then it is clear to me how to get the zero normal flux condition. But what if the first term is not zero? Or do we say that at the boundary the flow is always steady?

Shallow Water Hydrodynamics: Mathematical Theory and Numerical Solution for a Two-dimensional System of Shallow Water Equations, Elsevier, Tan Weiyan (Ed.), 1992. Lattice Boltzmann Methods for Shallow Water Flows, Springer-Verlag Berlin Heidelberg, Jian Guo Zhou, 2004. Numerical Methods for Shallow-Water Flow, Springer Netherlands, C. B. Vreugdenhil, 1994.

• asked a question related to Mathematics Question I want to determine the success rate of a personnel selection instrument (interview, assessment center, ...) depending on the validity of the instrument itself, the selection rate, and the base rate.

• asked a question related to Mathematics Question If we are given that (x-2)(x-3) = 0 and 0·0 = 0, then we can conclude that both x = 2 and x = 3 hold simultaneously. This is because x-2 = 0 and x-3 = 0 holding simultaneously is consistent with 0·0 = 0. However, this leads to a contradiction, namely x = 2 = 3. So, generally we exclude this option while finding roots of an equation and consider that only one of the factors can be zero at a time, i.e.
all the roots are mutually exclusive. In other words, we consider 0·0 to be not equal to 0. Now, if we are given that x = 0 and asked to find out what x^2 is, then certainly we conclude that x^2 = 0. It is trivial to observe that this conclusion is made through the following process: x^2 = x·x = 0·0 = 0. That is, we need to consider 0·0 = 0 to make this simple conclusion. Therefore, while in the first case we have to consider 0·0 not equal to 0 to avoid a contradiction, in the second case we have to consider 0·0 = 0 to reach the conclusion. So, the question arises whether 0·0 is equal to 0 or not. As far as I know, mathematical truths are considered to be universal. However, in the present discussion it appears to me that whether 0·0 is 0 or not is used as per requirement. Is that legitimate in mathematics?

@Pedro, I don't understand why the word "or" is emphasised. As per my understanding, a quadratic equation must possess exactly 2 roots, so the equation (x-2)(x-3) = 0 has two distinct roots, which are x = 2 and x = 3. For better visualisation, one can easily plot the equation to find the roots where the function cuts the x-axis. It does not mean that x = 2 = 3. Do I miss any necessary information?

• asked a question related to Mathematics Question How do you define uncertainty in an economic decision model? With this mathematical approach in mind, how should you make decisions?

Let us take an example: an agent-based model of the economy, society, stock market, etc. You define the model, implement it in an agent-based evaluation environment, and run the simulations. Many people end here. What more can be done? We change parameters and run the simulations again, for each changed parameter separately. In this way, we create statistics of possible evolutions of the model. From here, statistical evaluation is easy.
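The sweep-and-collect-statistics procedure described above can be sketched with a deliberately toy stand-in for a model run; the growth/noise dynamics below are invented purely for illustration:

```python
import random
import statistics

def simulate(growth_rate, noise, steps=50, seed=None):
    """One run of a toy 'economy': wealth grows at growth_rate with
    Gaussian shocks of size noise (a hypothetical model)."""
    rng = random.Random(seed)
    w = 1.0
    for _ in range(steps):
        w *= 1.0 + growth_rate + rng.gauss(0.0, noise)
    return w

def parameter_sweep(rates, noises, runs=20):
    """Re-run the model for every parameter combination and collect
    statistics of the possible evolutions, as described above."""
    table = {}
    for r in rates:
        for s in noises:
            outcomes = [simulate(r, s, seed=i) for i in range(runs)]
            table[(r, s)] = (statistics.mean(outcomes),
                             statistics.stdev(outcomes))
    return table
```

The resulting table of (mean, standard deviation) per parameter pair is exactly the kind of statistics over model evolutions on which a decision analysis can then be based.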
• asked a question related to Mathematics Question My dear friends, I am asking if some of your students are interested in applying for a postdoctoral position in China with me; here are the link and details!!! Jefferson Santos Silva There is no deadline; the program remains open until the end of 2021.

• asked a question related to Mathematics Question How does one get access to the Mizar Mathematical Library (MML)? This refers to the Mizar system for the formalisation and automatic checking of mathematical proofs based on Tarski-Grothendieck set theory (mizar.org).

• asked a question related to Mathematics Question As we know, the Strehl Ratio (SR) is a measure of turbulence in a medium. How does one calculate the SR of a medium mathematically?

• asked a question related to Mathematics Question Any decision-making problem, when precisely formulated within the framework of mathematics, is posed as an optimization problem. There are many ways (in fact, I think infinitely many) in which one can partition the set of all possible optimization problems into classes of problems. 1. I often hear people label meta-heuristic and heuristic algorithms as general algorithms (I understand what they mean), but I am wondering: can we apply these algorithms to arbitrary optimization problems from any class, or, more precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question? 2. Then I thought, well, if we assume that the answer to 1 is yes, then by extending the argument I think we can also re-formulate any given problem to be attacked by any algorithm we desire (of course, at a cost); then it is just a useless tautology. I'm looking for different insights :) Thanks.

Change propagation models may give a good starting idea.

• asked a question related to Mathematics Question Dear scholars, I am now struggling with a question.
Let's assume that there is a given line or a given arbitrary function defined on the z = 0 plane. Now I twist the plane into a non-linear 3D surface that can be represented by any given continuous and differentiable equations. How could I represent this line or function by analytical equations now? You could think of this as "a straight line on a waving flag". Much appreciated if you have any ideas or suggested publications. Thanks.

See here: (PDF) Folding and Bending Planar Coils for Highly Precise Soft Angle Sensing (researchgate.net). A little further evaluation is required.

• asked a question related to Mathematics Question Can you help me create a source of type sinc in ADS? I found a mathematical function that plays the role (picture 1), but I do not know how to use it.

Hello, you can use Verilog-A to create this source.

• asked a question related to Mathematics Question NO. No one on Earth can claim to "own the truth" -- not even the natural sciences. And mathematics has no anchor in Nature. With physics, the elusive truth becomes the object itself, which physics trusts using the scientific method, as fairly as humanly possible and as objectively (friend and foe) as possible. With mathematics, on the other hand, one must trust using only logic, and the most amazing thing has been how much Nature as seen by physics (the Wirklichkeit) follows the logic as seen by mathematics (without necessarily using Wirklichkeit) -- and vice versa. This implies that something is true in Wirklichkeit iff (if and only if) it is logical. Also, any true rebuffing of a "fake controversy" (i.e., fake because it was created by the reader, willingly or not, and is not present in the data itself) risks coming across as sharply negative. Thus, rebuffing truth-deniers leads to ... affirming truth-deniers. The semantic principle is: before facing the night, one should not counter the darkness but create light.
When faced with a "stone thrown by an enemy", one should see it as a construction stone offered by a colleague. But everyone helps: the noise defines the signal; the signal is what the noise is not. To further put the question in perspective, in terms of fault-tolerant design and CS, consensus (aka "Byzantine agreement") is a protocol to bring processors to agreement on a bit despite a fraction of bad processors behaving so as to disrupt the outcome. The disruption is modeled as noise and can come from any source -- attackers or faults, even hardware faults. Arguing, in turn, would risk creating a fat target for bad faith or for simply misleading references, exaggerations, and pseudo-works -- as we see rampant on RG, even in porous publications cited as if they were valid. Finally, arguing may bring in the ego, which is not rational and may tend to strengthen the position of a truth-denier. Following Pascal, people tend to be convinced better by their own arguments, from the angle that they see (and there are many angles to every question). Pascal thought that the best way to defeat the erroneous views of others was not to face them but to slip in through the backdoor of their beliefs. And trust is highest as self-trust: everyone tends to trust themselves better and faster than to trust someone else. What is your qualified opinion? This question considered various options and offers NO as the best answer. Here, to be clear, "truth-denial" is to be understood as denial of one's own "truth" -- which can be another's "falsity", or not. An impasse is created; how best to solve it? "Only dead fish swim with the current" implies that those who swim against the current are those who wish to invoke change; who want to control, manipulate, and improve their environment. People who swim upstream make things happen. They are the movers and shakers; the innovators and inventors; the disruptors of the world.
There is nothing new downstream; only that which is old and boring, ancient history, the past, the been-there-and-done-that... the tried and true. One must swim upstream to find and explore new territory; learn new stuff; have new experiences. To create; fly; soar." But those who try find it hard not to "go with the flow." The solution may be to swim like a salmon, making the fewest waves. The same principle works in a swimming pool when trying to improve personal best times -- and tells one why a deeper pool is faster.

• asked a question related to Mathematics Question Somebody, please elaborate on how to calculate exergy destruction in kW. From Aspen HYSYS I found the mass exergy in kJ/kg, and I don't know how to calculate exergy destruction using Aspen HYSYS. If somebody has a mathematical calculation with an example, please share it with me. I know how to calculate it with Aspen Plus, but I need a mathematical or Aspen HYSYS solution. Thanks in anticipation.

• asked a question related to Mathematics Question As we know, the computational complexity of an algorithm is the amount of resources (time and memory) required to run it. If I have an algorithm that implements mathematical equations, how can I estimate or calculate the computational complexity of these equations, the number of computational operations, and the amount of memory used?

• asked a question related to Mathematics Question What happens to numbers raised to the highest powers, and what is the implication for the number's last digit? How applicable is that in mathematical problem solving?

• asked a question related to Mathematics Question The mosque is in Abu Dhabi. Civil engineers use trigonometry often when surveying a structure. Surveying deals with land elevations as well as the various angles of structures.

• asked a question related to Mathematics Question In the education discipline, several leadership theories have been discussed, but no mathematical foundations are available to estimate them.
More specifically, how can I differentiate (in terms of mathematical expressions) the several leadership styles in decision-making problems so that I can identify the better one, and so that the decision maker would be comfortable applying it in their industrial/managerial/organizational situation? We may assume that the problem is part of fuzzy decision making / intelligent systems / artificial intelligence systems / soft systems. The leaders are the manager of an industry/organization/corporate house, the minister of a government, the agents of a marketing system, or the representatives of customers of a particular product in a supply chain management problem. I think what you want is in this book.

• asked a question related to Mathematics Question What are the differences between mathematical modelling and realistic mathematics education? Mathematical modelling is the process of solving a problem by the mathematical expression of a real-life event or problem. This process enables learners to relate mathematics to real life and to learn it more meaningfully and permanently. Realistic Mathematics Education (RME) is a domain-specific instruction theory for mathematics which has been developed in the Netherlands. Characteristic of RME is that rich, "realistic" situations are given a prominent position in the learning process. See: Mathematical Modelling Approach in Mathematics Education, by Ayla Arseven. Best regards

• asked a question related to Mathematics Question What is the importance of the Golden Ratio in nature and mathematics? Why is the golden ratio sometimes called the "divine proportion" by mathematicians? In the world of art, architecture, and design, the golden ratio has earned a tremendous reputation. Greats like Le Corbusier and Salvador Dalí have used the number in their work. The Parthenon, the Pyramids at Giza, the paintings of Michelangelo, the Mona Lisa, even the Apple logo are all said to incorporate it.
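Numerically, the "divine proportion" φ = (1 + √5)/2 ≈ 1.618 appears as the limit of ratios of consecutive Fibonacci numbers, which is one reason it shows up so often in descriptions of growth patterns in nature; a small sketch:

```python
import math

def fibonacci_ratios(n):
    """Ratios F(k+1)/F(k) of consecutive Fibonacci numbers,
    which converge to the golden ratio (1 + sqrt(5)) / 2."""
    a, b = 1, 1
    out = []
    for _ in range(n):
        a, b = b, a + b
        out.append(b / a)
    return out

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.6180339887
```

The ratios oscillate around φ while converging: `fibonacci_ratios(10)` ends near 1.61798, and by 40 terms the ratio agrees with φ to machine precision.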
• asked a question related to Mathematics
Question
Dear Friends, kindly allow me to ask a very basic but important question. What is the basic difference between (i) scientific disciplines (e.g. physics, chemistry, botany, or zoology) and (ii) branches of mathematics (e.g. calculus, trigonometry, algebra, and geometry)? I feel that objective knowledge of the primary difference between science and mathematics is useful for imparting accurate knowledge of both (and of their role in technological invention and expansion). Let me give my answer to start this debate. Each branch of mathematics invents and uses a complementary, harmonious, and/or interdependent set of valid axioms as core first principles in the foundation for evolving and/or expanding an internally consistent paradigm for that branch (e.g. calculus, algebra, or geometry). If the foundation comprises a few inharmonious or invalid axioms, those axioms create internal inconsistencies in the branch. Internal consistency can be restored by fine-tuning the inharmonious axioms or by inventing new valid axioms to replace the invalid ones. Each scientific discipline, by contrast, must discover new falsifiable basic facts, prove them, and use these proven scientific facts as first principles in its foundation, where a scientific fact is a falsifiable discovery that cannot be falsified despite vigorous efforts to disprove it. We know what happened when one of the first principles (that the Earth is static at the centre) was flawed. Examples of basic proven scientific facts include: the Sun is at the centre; Newton's three laws of motion; there exists a force of attraction between any two bodies having mass; the force of attraction decreases as the distance between the bodies increases; and increasing the mass of the bodies increases the force of attraction.
Notice that I intentionally did not say directly and/or inversely proportional. First principles of this kind provide the foundation for expanding the BoK (Body of Knowledge) of each discipline. The purpose of research in any discipline is to add more new first principles and more theoretical knowledge relying on them (new theories, concepts, methods, and other facts), thereby expanding the BoK of the discipline's prevailing paradigm. I want to find an answer to this question because software researchers insist that computer science is a branch of mathematics, and so they have been insisting that it is acceptable to blatantly violate scientific principles when acquiring scientific knowledge (i.e. knowledge that falls under the realm of science) that is essential for addressing technological problems of software, such as the software crisis and human-like computer intelligence. If researchers of computer science insist that it is a branch of mathematics, I want to propose a compromise. The nature and properties of components for software, and the anatomy of CBE (component-based engineering) for software, were defined as axioms. Since those axioms are invalid, they resulted in an internally inconsistent paradigm for software engineering. I derived a new set of valid axioms by gaining valid scientific knowledge about components and CBE without violating scientific principles. Even mathematics requires finding, testing, and replacing invalid axioms. I hope this compromise satisfies the computer scientists who insist that software is a branch of mathematics. It appears that software or computer science is a strange new kind of hybrid between science and mathematics, which I want to understand better (this may be useful for solving other problems, such as human-like artificial intelligence). Best Regards, Raju Chiluvuri
Dear Raju Chiluvuri, in my opinion, mathematics is the precursor to all the disciplines of science.
And, in fact, mathematics is also a science. Thanks!

• asked a question related to Mathematics
Question
Hi, I am doing a linear regression research assignment in which I have to investigate how mathematics scores and gender (independent variables) affect natural history scores (dependent variable). I am not sure whether I am interpreting gender's dummy variable (female = 1, male = 0) correctly in the coefficients table. Am I right in interpreting that females score, on average, 10.9 points lower in natural history than males?

• asked a question related to Mathematics
Question
In fact, it is the fundamental defects in the work of "quantitative cognition of infinite things" that have been troubling people for thousands of years. But I am taking a different path from many people. 1. I analyse and study the defects in the existing classical infinite theory system, disclosed by the suspended "infinite paradox symptom clusters" in analysis and set theory, from different perspectives and with a different conclusion: to abandon the unscientific (mistaken) "potential infinite and actual infinite" concepts in the existing classical infinite theory system and to introduce the new concepts of the "abstract infinite and the carriers of the abstract infinite"; in particular, to replace the unscientific (mistaken) "actual infinite" concept with the new concept of "carriers of the abstract infinite" and to develop a new infinite theory system with mathematical carriers of the abstract infinite and their related quantitative cognising operation theory. From now on, human beings need no longer be entangled in "potential infinite versus actual infinite", but can work to develop an "infinite carrier theory" and a comprehensive, scientific cognition of the various contents related to the "mathematical carrier of the abstract infinite concept".
2. The abstract concept / abstract concept carrier theory, the new infinite theory system, carrier theory, the infinite mathematical carrier gene, the infinite mathematical carrier scale, and so on: the development of this basic theory determines the construction of a "quantum mathematics" based on the new infinite theory system. 3. Two days ago I uploaded to RG "On the Quantitative Cognitions to 'Infinite Things' (IX): 'The Infinite Carrier Gene', 'The Infinite Carrier Measure' and 'Quantum Mathematics'", introducing this "quantum mathematics". My work is not about fixing tiny defects here and there (such as the CH theory above) but about carrying out quantitative cognition of all kinds of infinite mathematical things with "quantum mathematics", based on the new infinite theory system. According to my studies (presented in some of my papers), the harmonic series is a vivid modern example of Zeno's paradox. It is an important case in the study of the infinite-related paradox syndrome in present set theory and analysis, which rest on the unscientific classical infinite theory system. All the existing (suspended) infinite-related paradoxes in present set theory and analysis are typical logical contradictions. The revolution in the foundation of the infinite theory system determines the construction of "quantum mathematics" based on the new contents discovered in the new infinite theory system: the infinite mathematical carrier, the infinite mathematical carrier gene, the infinite mathematical carrier measure, and so on, in the new infinite carrier theory.
So the "quantum mathematics" mentioned in my paper is different from quantum logic and quantum algebras. According to my studies (presented in some of my papers), "non-standard analysis and transfinite numbers" are all infinite-related things in the unscientific classical infinite theory system based on the trouble-making "potential infinite and actual infinite": non-standard analysis is equivalent to standard analysis, while the transfinite is an odd idea of "more infinite, more more infinite, more more more infinite, ...". Search RG for Ed Gerck. I'm sure he'd be glad to discuss this topic.

• asked a question related to Mathematics
Question
Mathematics differs from the sensory sciences in that it draws its subject matter from structural construction and the abstraction of quantities, while the other sciences rely on the description of actual sensory objects already in existence. What do you think? Dear colleagues, a very interesting question. Some years ago, in 2012, I published a work in which I give a definition of mathematics that can serve to answer the question.

• asked a question related to Mathematics
Question
The Computer Aided Design (CAD) subject deals with the back-end mathematical calculations that happen in a 3D design. The book "Computer Aided Optimal Design: Structural and Mechanical Systems" by C. A. Mota Soares and A. B. Templeman can be useful.

• asked a question related to Mathematics
Question
Hello, I am interested in the personalization of learning based on profiles, more specifically in mathematics. Do you know any relevant references? Thank you.

• asked a question related to Mathematics
Question
The fact that an electron can have only discrete energy levels is obtained by solving the Schrödinger equation with boundary conditions, which is a mathematical derivation. Physically, what makes the electron possess only certain energies?
Or is there any physical insight, explanation, or intuition that can arrive at the same conclusion (without mathematics) that an electron can have only discrete energy levels inside a potential well? When the electron's energy can take only certain values, this just means that the states that would correspond to the other values do not exist under those circumstances. These circumstances are described by the boundary conditions imposed, which are part of the physical description too.

• asked a question related to Mathematics
Question
Given a fixed volume where the relative humidity and temperature are known, how can you estimate how much water vapor will condense for a given temperature decrease? I suspect it has to do with the dew point temperature, but I am having trouble finding mathematical relations. It is not very difficult, but some algebra is involved. The workflow is the following:
1. Knowing the relative humidity at T = T0 (as an input), calculate the partial pressure of vapor at this temperature.
2. Calculate the water vapor density rho_0 using the ideal gas equation.
3. Calculate the saturated vapor pressure at T = T1 from tables or the Clausius-Clapeyron equation.
4. Calculate the corresponding saturated vapor density rho_1 at T = T1 using the ideal gas equation.
5. If rho_1 > rho_0, there will be no condensation; otherwise the mass of water condensed in volume V will be V(rho_0 - rho_1).

• asked a question related to Mathematics
Question
Hi everyone, I have a problem in MATLAB. When I try to solve the following equation for PI in the photo (tau in the code), MATLAB sends me this error: "Warning: Unable to find explicit solution. For options, see help." I attached the question and the code below (in the code, I rewrite pi in the photo as tau). If you have any idea how to solve this problem, analytically or numerically, I will be happy to hear it.
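The condensation workflow described above can be sketched in Python. This is a minimal sketch, with one assumption the answer leaves open: the saturation-pressure curve here uses the Magnus approximation rather than tables or Clausius-Clapeyron.

```python
import math

M_W = 0.018015   # kg/mol, molar mass of water
R = 8.314        # J/(mol K), gas constant

def p_sat(t_celsius):
    """Saturation vapor pressure of water in Pa (Magnus approximation)."""
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def vapor_density(p_vap, t_kelvin):
    """Vapor density from the ideal gas law, kg/m^3."""
    return p_vap * M_W / (R * t_kelvin)

def condensed_mass(rh0, t0_c, t1_c, volume):
    """Mass of water (kg) condensing in `volume` m^3 when air at
    relative humidity rh0 (0..1) cools from t0_c to t1_c (Celsius)."""
    rho0 = vapor_density(rh0 * p_sat(t0_c), t0_c + 273.15)  # actual vapor density
    rho1 = vapor_density(p_sat(t1_c), t1_c + 273.15)        # saturated density at T1
    if rho1 > rho0:
        return 0.0   # air stays below saturation: no condensation
    return volume * (rho0 - rho1)

# Example: 1 m^3 of air at 80 % RH and 25 C cooled to 10 C
print(condensed_mass(0.80, 25.0, 10.0, 1.0) * 1000, "g")
```

For 1 m³ at 80 % RH cooled from 25 °C to 10 °C this gives roughly 9 g of condensate, consistent with standard psychrometric tables.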
NOTE:
> PI_0.1(X,t) = tau
> X = [x(t), y(t), psi(t)]^T
PROBLEM: Find tau in terms of X and t which solves the mentioned equation.
Arash.
code:
______________________________________
clc; clear;
syms x y psi tau t
c1 = 1; c2 = 1.5; lambda = 0.1;
x_r(tau) = 0.8486*tau - 0.6949;
y_r(tau) = 5.866*sin(0.1257*tau + pi);
psi_r(tau) = 0.7958*sin(0.1257*tau - pi/2);
x_r_dot = 0.8486;
y_r_dot(tau) = 0.7374*cos(0.1257*tau + pi);
psi_r_dot(tau) = 0.1*cos(0.1257*tau - pi/2);
phrase1 = c1/2*(cos(psi)*(x - x_r) + sin(psi)*(y - y_r))*(cos(psi)*x_r_dot + sin(psi)*y_r_dot);
phrase2 = c1/2*(-sin(psi)*(x - x_r) + cos(psi)*(y - y_r))*(-sin(psi)*x_r_dot + cos(psi)*y_r_dot);
phrase3 = 0.5*(psi - psi_r)*psi_r_dot;
eq = -2*(1-lambda)^2*(phrase1 + phrase2 + phrase3) - 2*lambda^2*(t - tau)
sol = solve(eq == 0, tau, 'IgnoreAnalyticConstraints', 1)
______________________________________
Pass x, instead of tau, as rightly pointed out by Saeb AmirAhmadi Chomachar:
______________________________________
syms x y psi tau t
c1 = 1; c2 = 1.5; lambda = 0.1;
x_r(tau) = 0.8486*tau - 0.6949;
y_r(tau) = 5.866*sin(0.1257*tau + pi);
psi_r(tau) = 0.7958*sin(0.1257*tau - pi/2);
x_r_dot = 0.8486;
y_r_dot(tau) = 0.7374*cos(0.1257*tau + pi);
psi_r_dot(tau) = 0.1*cos(0.1257*tau - pi/2);
phrase1 = c1/2*(cos(psi)*(x - x_r) + sin(psi)*(y - y_r))*(cos(psi)*x_r_dot + sin(psi)*y_r_dot);
phrase2 = c1/2*(-sin(psi)*(x - x_r) + cos(psi)*(y - y_r))*(-sin(psi)*x_r_dot + cos(psi)*y_r_dot);
phrase3 = 0.5*(psi - psi_r)*psi_r_dot;
eq = -2*(1-lambda)^2*(phrase1 + phrase2 + phrase3) - 2*lambda^2*(t - tau);
eqn = rewrite(eq, 'log');
sol = solve(eqn == 0, x, 'IgnoreAnalyticConstraints', 1);
pretty(sol)
______________________________________

• asked a question related to Mathematics
Question
Hello, I am doing research on HVLD (high-voltage leak detection) capability. From your experience, is there some mathematical formula to prove that HVLD machines can detect holes regardless of size, or some other way to prove it?
I am not an expert in this subject, but the following link may be useful: "High-Voltage Leak Detection of a Parenteral Proteinaceous Solution Product Packaged in Form-Fill-Seal Plastic Laminate Bags. Part 3. Chemical Stability and Visual Appearance of a Protein-Based Aqueous Solution for Injection as a Function of HVLD Exposure", Rasmussen, M., Damgaard, R., Buus, P., Guazzo, D. M., PDA Journal of Pharmaceutical Science and Technology, 2013.

• asked a question related to Mathematics
Question
A question related to our cultural indebtedness to our mathematical forebears. Interesting question. However, the foundations that allowed calculus to evolve started long before Newton and Leibniz. The foundation is not calculus but the concept of the limit. Archimedes (287-212 BC) was probably the first to recognize what became the concept of the limit, in his estimation of pi and the area of the circle: he took inscribed and circumscribed polygons bounding the circle and used the simple fact that one approaches pi from below while the other does from above. Each sequence defines a Cauchy sequence (a concept unknown at that time), and the completion of the reals (also unknown at that time) shows that the limit of each sequence is the same and equals the number pi. In reality, the concept of the infinitesimal goes back to Archimedes, although the formal concept of "infinity" was not accepted until long afterwards. Roll back to the Greeks: when faced with the proposition of an infinite number of primes, they had a problem, as they believed the universe was finite. Infinity was not something the Greeks wanted to accept, Aristotle (384-322 BC) included. But Archimedes had just shown that infinity and infinitesimals had a role in mathematics, in fact a central role. It was not until the 1600s that mathematicians attacked the problem of infinity to try to understand what it meant, as they developed the concept of numbers that is used today.
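Archimedes' bounding construction is easy to reproduce numerically (my sketch, not part of the original thread). For a unit circle, the semi-perimeters of the circumscribed and inscribed regular n-gons bracket pi from above and below, and Archimedes' side-doubling step is a harmonic mean followed by a geometric mean; no prior value of pi is needed.

```python
import math

# Archimedes' doubling recurrence for a unit circle: start from the
# regular hexagon and repeatedly double the number of sides.
# a = semi-perimeter of the circumscribed n-gon (upper bound on pi)
# b = semi-perimeter of the inscribed n-gon    (lower bound on pi)
a, b, n = 2 * math.sqrt(3), 3.0, 6
while n < 96:
    a = 2 * a * b / (a + b)   # harmonic mean  -> circumscribed 2n-gon
    b = math.sqrt(a * b)      # geometric mean -> inscribed 2n-gon
    n *= 2
print(f"{n}-gon bounds: {b:.5f} < pi < {a:.5f}")
```

Stopping at the 96-gon, as Archimedes did, reproduces his famous bounds 3 10/71 < pi < 3 1/7.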
As we came to understand the real number system better, point-set (general) topology was defined to abstract and better understand its structure. In general topology, concepts like nets (generalizations of sequences, required for example to define integrals), convergence, closeness, neighborhoods, and limits are all defined through the concept of open sets, which define a topology on a set; this allows the definition and study of limits and continuity and a proper handling of infinity. While the formulation of general topology came along after the "birth of the calculus" (it was known as analysis situs, a term coined by Henri Poincaré), through the work of Poincaré, Euler, Cantor, Lefschetz, Courant, Hilbert, and others, a firm foundation was laid not only for the real number system but for limits, continuity, and all the foundations of what we now know as "calculus". Topology is so important to the foundations of calculus and the concept of a limit that in the first paragraph of the preface of his classic text "General Topology" John Kelley writes, "...I have, with difficulty, been prevented by my friends from labeling it: What Every Young Analyst Should Know." No truer words have been spoken or written, as the foundations of topology have allowed the concept of calculus to be expanded far beyond its original intent.

• asked a question related to Mathematics
Question
I am doing a research proposal and I need answers on my topic; information must be from 2015-2020. Relevant articles.

• asked a question related to Mathematics
Question
Any bibliographic recommendations on the problem of routing vehicles with multiple depots and homogeneous capacities? Fewer than 10 nodes. A multi-depot VRP with fewer than 10 nodes should be almost enumerable, as there exist fewer than 1024 possible subsets of customers.
Given this fact, perhaps the simplest solution approach is to generate all feasible routes from each depot, discard those that are not TSP-optimal, and directly solve a set-partitioning formulation based on these routes. Now, if you face larger problems (e.g., 15 nodes or more), you should use the formulations suggested by Adam and Noha, or even go for sophisticated branch-and-price approaches as described in the literature, since the code associated with this paper is freely accessible at

• asked a question related to Mathematics
Question
L'Huillier's theorem, i.e. the calculation of the spherical excess of the "spherical triangle" formed between unit vectors on the unit sphere, can give the area; but how can this formula be explained purely from the standpoint of plane trigonometry (i.e. without assuming any prerequisite knowledge of spherical trigonometry)? The solid angle can be found by the rules of spherical trigonometry, and I am well aware of this. I want to introduce this problem to someone with knowledge of plane trigonometry but no knowledge of spherical trigonometry. I hope you find the following discussion useful. Best regards.

• asked a question related to Mathematics
Question
What are the mathematical expressions and equations used for designing the antipodal structure of an antenna? Dear Sneha, you will find the design formulas and an example of an antipodal Vivaldi antenna in the paper. If you have more questions, you can ask its first author. Best wishes.

• asked a question related to Mathematics
Question
I hope for a global overview of mathematical giftedness and its support in school and/or at an extracurricular level. What programmes/opportunities are offered? First, thanks; it is really an interesting question. A problem is that gifted and talented students do not receive the care necessary to meet their needs by remaining in regular classes.
Therefore, I find it important to do the following:
- Stay away from traditional methods during teaching; these lead to boredom for students, especially the talented.
- When constructing lessons conceptually, take into account that gifted and talented students may also suffer from weaknesses in understanding the curriculum and need to be considered. When teaching conceptually within an ordinary class, students of all levels will learn in a deeper way.
- Add open-ended questions to both instruction and assessment, for their positive effects on students' understanding as well as on their attitudes toward the material.
- Provide direct and indirect financial support for the talented.

• asked a question related to Mathematics
Question
Quantum computing is the field that focuses on quantum computation/information processing: the mathematical and physical theory, the engineering required to realize circuits and algorithms in hardware, and other contingent issues such as the whole "compute chain" (from software engineering to quantum machine code and further on to the physical architecture) and device/hardware issues such as thermal, electro-optical, and nano-engineering. My question is: how is quantum computing related to artificial intelligence? Quantum computing (QC) is an enabling technology for efficiently processing huge quantities of (quantum) information, in many cases outperforming "classical" (binary-logic-based) computing. It provides the "muscles" for data crunching, provided you feed it quantum-coded information (qubits), and you get probabilistic results (with high likelihood if well designed). Artificial intelligence (and machine learning more specifically) is a discipline focused on performing data analysis with the objective of simulating human reasoning to achieve a certain goal.
AI can then definitely take advantage of the super-fast computing capability provided by QC, both for speeding up "classical" algorithms and for running QC-native ones, which are expected to open the door to a next level of AI capabilities beyond our current imagination. Just be patient for a few more years and wait for a working universal QC to become available (at a competitive price).

• asked a question related to Mathematics
Question
Can anyone suggest applications for the $R_{\alpha}$, $R_{\beta}$ and $R_{m}$-functions in the mathematical or applied sciences, recently introduced in the following research paper? H. M. Srivastava et al., A family of theta-function identities based upon combinatorial partition identities and related to Jacobi's triple-product identity, Mathematics 8(6) (2020), Article ID 918, 1-14. Interesting question; following the discussion.

• asked a question related to Mathematics
Question
Dear colleagues, I am looking for a practical guide presenting the non-parametric tests, intended for students with little or no mathematical background, with the SAS or R code if possible. Thank you. Hi Natacha, rcompanion.org is a great source with many examples of non-parametric tests. sthda.com is also good, but the author uses his own limited packages.

• asked a question related to Mathematics
Question
Here I just want to know about the actual parameters to measure the content of happiness in a person. With the help of these parameters, a neural network can be generated and maintained to achieve maximum happiness. I am also expecting a better approach from the scholars. I think you may use face-expression analysis and emotion and gait recognition databases. Such databases are publicly available on kaggle.com and here: https://www.robots.ox.ac.uk/~vgg/data/

• asked a question related to Mathematics
Question
Can we apply theoretical computer science to prove theorems in mathematics?
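Machine assistance in proofs is well established; the four color theorem's proof, for instance, relied on exhaustive computer checking of cases, and modern proof assistants (Coq, Lean, Isabelle) verify full formal proofs. As a toy illustration of the simplest form, exhaustive finite-case verification (my sketch, not from the thread), here is a brute-force check of Bertrand's postulate (there is a prime p with n < p < 2n) for small n:

```python
def is_prime(k: int) -> bool:
    """Trial-division primality test (fine for small k)."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def bertrand_holds(n: int) -> bool:
    """Check that there is a prime p with n < p < 2n."""
    return any(is_prime(p) for p in range(n + 1, 2 * n))

# Exhaustively verify the finite cases 1 < n <= 1000
assert all(bertrand_holds(n) for n in range(2, 1001))
print("Bertrand's postulate verified for 1 < n <= 1000")
```

Of course, a finite check is not a proof of the general statement; it becomes one only when a theorem reduces the infinite problem to finitely many cases, which is exactly the structure of the four color proof.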
• asked a question related to Mathematics
Question
Take, for example, such a concept as a minimal flow, that is, a gradient vector field whose level surfaces are minimal surfaces. Then the globally minimal flow, evolving to an absolutely minimal state, could be compared with the quantum vacuum, and locally minimal flows could be compared with fields and particles. At the same time, it is clear that the space in which this minimal flow moves must be chosen correctly. Structure wave theory shows how mathematics, as a structurally active language based on the release of structure waves, is converted into physics.

• asked a question related to Mathematics
Question
Hello all, I am looking for a method, algorithm, or piece of logic that can help to determine numerically whether a function is differentiable at a given point. To give a clearer perspective: say that while solving a fluid flow problem using CFD, I obtain some scalar field along some line with a graph similar to y = |x| (take the x axis to be the line along which the scalar field is drawn, with the origin at a grid point, say P). I know that at grid point P the function is not differentiable, but how can I check this numerically? I thought of using directional derivatives but could not decide along which directions to compare (the line given in the example is just for explaining). When surrounded by 8 grid points, the function may be differentiable along certain directions and not along others. Any suggestions? Thanks. The answer to a question about numerical algorithms for resolving the issue of differentiability of a function is typically provided by textbooks on experimental mathematics. I recommend in particular Chapter 5, "Exploring Strange Functions on the Computer", in the book "Experimental Mathematics in Action".
You can also get a copy of the text in the form of a preprint. Judging by the quote placed at the beginning of Chapter 5, the investigation of "strange functions" was as challenging in the 1850s as it is 170 years later: "It appears to me that the Metaphysics of Weierstrass's function still hides many riddles and I cannot help thinking that entering deeper into the matter will finally lead us to a limit of our intellect, similar to the bound drawn by the concepts of force and matter in Mechanics. These functions seem to me, to say it briefly, to impose separations, not, like the rational numbers" (Paul du Bois-Reymond, [129], 1875). The situation described in your question is even more complicated because the function is represented only by a few values on a rectangular grid, and it is additionally assumed that the function is not differentiable at a certain point. In this situation I can suggest the techniques employed in the theory of generalized functions (distributions). For a very practical example you can consult the blog "How to differentiate a non-differentiable function". In order to answer your question completely, I would like to know the equation, boundary conditions, and numerical scheme used to obtain the set of grid-point values mentioned in the question.

• asked a question related to Mathematics
Question
What types of board game for mathematical literacy make learning and teaching fun? You're welcome, Rich Philp. For your information, I have modest knowledge of programming, but I have still made some games for PC. Here is a free one:

• asked a question related to Mathematics
Question
Electromagnetic (EM) waves have invoked a lot of interest among scientists and engineers over the centuries. And this interest seems to be on the rise, in view of new applications of EM waves being explored and developed, particularly at newer and higher frequencies.
The propagation characteristics of an EM wave depend, to a large extent, on its frequency (or wavelength). And when an EM wave interacts with an object/material, it undergoes reflection, refraction, scattering, attenuation, diffraction, and/or absorption. Each of these effects depends on the frequency of the EM wave, because the size of the wavelength relative to the object/material assumes great significance. And due to the huge range of frequencies of EM waves employed in various applications these days, they undergo a variety of different effects. This sometimes confuses the scientific community, as it is often unclear which effect is dominant at which frequency. Thus a single mathematical formula (or a small set of formulae) could be of great help if the different effects listed above and their relative weights could be known at different frequencies. This could be a great boon to young scientists and engineers, as it would simplify things, particularly for the mathematically minded. Not all these phenomena can be summarized in the permittivity of the material. For a start there is the permeability, which is as basic as the permittivity; then there are whole areas that these two do not cover at all, such as fluorescence, ionisation, photo-electricity, Rayleigh and Raman scattering, interaction with (other) fundamental particles, interaction with gravity/space-time, and more.

• asked a question related to Mathematics
Question
By dynamical systems, I mean systems that can be modeled by ODEs. For linear ODEs, we can investigate stability via eigenvalues, and for nonlinear systems (as well as linear ones) we can use Lyapunov stability theory. I want to know: is there any other method to investigate the stability of dynamical systems? An alternative method of demonstrating stability is given by Vasile Mihai Popov, a great scientist of Romanian origin who settled in the USA.
The theory of hyperstability (since renamed the theory of stability for positive systems) belongs exclusively to him (1965). See the Yakubovich-Kalman-Popov theorem, the Popov-Belevitch-Hautus criterion, etc. While Lyapunov's (1892) method involves "guessing the optimal construction" of the Lyapunov function to obtain a domain close to the maximal stability domain, Popov's stability criterion provides the maximal stability domain for the nonlinearity parameters in the system (see Hurwitz, the Aizerman hypothesis, etc.).

• asked a question related to Mathematics
Question
Given:
1. If 𝑝𝑗 is the nearest neighbor of 𝑝𝑖, then 𝑝𝑖-𝑝𝑗 is a Delaunay edge.
2. In a 3D set of points, we know that consecutive points, i.e. 𝑝𝑖-𝑝i+1, are nearest neighbors.
3. The 3D points do not form a straight line.
Assumption: each Delaunay tessellation (3D) has at least 2 nearest-neighbor edges. Is my assumption true? If not, can you please explain the possible exceptions? Thanks, Pranav.
Are you trying to play chess in 3D? You need to give a clear definition of paths, so I suggest you start with one 3D box, which includes 8 points. I prefer to give each point the notation P(i, j, k), so the locations of the 8 points are (0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1) and (1,1,1). Study this cube carefully, define each Delaunay edge (the axioms of the path), and then add another box, which means 12 points, etc. If you find the closed formula that allows you to calculate all possible paths from the starting point at the origin to the farthest point at the upper corner of the rectangular box, then you are on the right track. I wish you good luck.

• asked a question related to Mathematics
Question
I am currently studying the effect of atrophy of a muscle on the clinical outcome of joint injury. There is actually another muscle that was previously well established to have an effect on clinical outcome, and these two muscles are closely related.
The aim of the study is to shed some light on the previously ignored muscle, to see if anything can be done to help improve clinical outcomes in that respect. While doing the univariate analysis, I was not sure whether I should include the previously established muscle as well; when I included it in the multiple linear regression model, the initially significant primary variable became insignificant. I wondered whether this could be due to collinearity, but the VIF value was not high enough to show significant collinearity between the two variables (GVIF^(1/(2*Df)) = 1.359987). My question is: should these two variables be included in the same model if they are highly correlated (clinically and mathematically) but were not found to be collinear, or should they be evaluated separately?
Bryan Soh, your question is a good one. I think it is necessary to be familiar with the nature of your variables (which it seems you are). Unfortunately, I am not, but might I suggest that you conduct your analyses both ways, look at the results, then think carefully about which results are likely to be most valid. I also think it is a good idea to present both sets of results if that is permissible. As you are obviously aware, the world of research is not black and white, and making other researchers, and consumers of research, aware of that could well be helpful. About 20 years ago, I read an article in a top psychological journal in which the author analysed her data in more than one way (from memory, it was more than only two ways), and she discussed the ins and outs intelligently and with insight. It was, for me, much more enlightening than the run-of-the-mill articles that seem to report clean-cut results but leave the reader wondering how much cleaning up, manipulation, and obscuring occurred to obtain those results. Every good wish as you plough on!
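For reference, the VIF discussed above can be computed directly from the design matrix: VIF_j = 1/(1 - R_j²), where R_j² comes from regressing predictor j on the remaining predictors. A minimal numpy sketch (the data here are simulated purely for illustration, not the study's data):

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X (n x p):
    1/(1 - R^2) from regressing that column on the others
    (with an intercept)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)   # moderately correlated with x1
X = np.column_stack([x1, x2])
print(vif(X))   # values in the 1-2 range, i.e. little collinearity
```

With two predictors correlated at r ≈ 0.45, both VIFs come out around 1.25, which is the same ballpark as the GVIF value quoted in the question; the usual rules of thumb only flag VIFs above 5 or 10.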
• asked a question related to Mathematics Question Good evening all; We are looking for literature on mixed-integer formulations of water distribution problems using multi-objective optimization methods. Thanks Nasiru Abdullahi Mathematics Department Sure. But you are not really helping by not being precise! And I am quite certain that there is one major goal - such as a quickest route for the water. I suggest that you check the literature - which is quite big. A search string might look like this, or with small adjustments: water network [supply, distribution*, system*] problem* • asked a question related to Mathematics Question A careful reading of THE ABSOLUTE DIFFERENTIAL CALCULUS by Tullio Levi-Civita, published by Blackie & Son Limited, 50 Old Bailey, London, 1927, together with Plato's cosmology, strongly suggests that gravity is actually real-world mathematics - or, in other words, is gravitation pure experimental mathematics? Sorry for the delay. Good question. I think this is a matter for the future. Greetings, Sergey Klykov • asked a question related to Mathematics Question I know lots of composers have created works around mathematical constructs such as the Fibonacci sequence. I would like to learn if any composers have used mathematical constructs in their music to represent journeys. Tool - an American progressive rock band :) • asked a question related to Mathematics Question Dear all, I am trying an S-parameter measurement (transmission) using a TEKTRONIX DSA8300 oscilloscope. Initially, S-parameter files were generated in LINEAR magnitude format. Now S-parameter transmission files are appearing in dB format from the oscilloscope. Perhaps the machine settings have been changed. 1) Kindly point me to the appropriate setting in the TEKTRONIX DSA8300 oscilloscope so as to receive the data in linear magnitude format rather than dB. 2) Also, alternative mathematical ways to obtain the data in LINEAR magnitude format are appreciated as well, kindly.
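For the second request in the S-parameter question above, no instrument setting is strictly needed, since the conversion is plain arithmetic: for magnitude quantities, |S| = 10^(dB/20). A minimal sketch (function names are my own, not Tektronix API calls):

```python
import math

def db_to_linear_mag(db):
    """Convert an S-parameter magnitude from dB to linear: |S| = 10**(dB/20)."""
    return 10.0 ** (db / 20.0)

def linear_mag_to_db(mag):
    """Inverse conversion: dB = 20*log10(|S|)."""
    return 20.0 * math.log10(mag)

# A transmission reading of -20*log10(2) dB (about -6.02 dB)
# corresponds to a linear magnitude of 0.5.
half = db_to_linear_mag(-20.0 * math.log10(2.0))
```

Note the factor is 20 (not 10) because S-parameters are voltage-wave ratios; use 10 only if the exported data were power ratios.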
best thanks • asked a question related to Mathematics Question I have values of a dependent variable (y) and an independent variable (x); x and y are exponentially related. I want to fit an exponential curve with a DC shift, i.e., fit a curve of the form y = A.exp(B.x) + C. I have methods to fit y = A.exp(B.x). If you know a procedure to fit the form y = A.exp(B.x) + C, please reply here. I would recommend Online Curve Fitting at: https://mycurvefit.com/ • asked a question related to Mathematics Question Hi researchers, I have a problem with the mathematical formulation of a multi-objective model for solving the RFID network planning problem. Do you have any courses, documents or information that can help me build my mathematical model of RFID network optimization deployed in a body network? I haven't chosen the approach and the multi-objective optimization algorithm yet; I am formulating my problem mathematically. • asked a question related to Mathematics Question Charles Sanders Peirce regarded mathematics as "the only one of the sciences which does not concern itself to inquire what the actual facts are, but studies hypotheses exclusively" (RLT, 114). Since, by contrast, "[w]e must begin with all the prejudices which we actually have when we enter upon the study of philosophy" (CP 5.265), the presuppositionless status of mathematics makes it more primitive than anything found in philosophy. Given that phenomenology falls under philosophy (CP 1.280), we get the result that mathematics is prior to phenomenology. Yet, Peirce also held that "every deductive inference is performed, and can only be performed, by imagining an instance in which the premises are true and observing by contemplation of the image that the conclusion is true" (NEM III/2, 968). We thus have two conflicting arguments: On the one hand, one could argue that mathematics is prior to phenomenology because mathematics makes even fewer presuppositions than phenomenology.
On the other hand, one could argue that phenomenology is prior to mathematics because whatever happens during mathematical inquiry must perforce appear before (some)one. Peirce's pronouncements notwithstanding, it is not obvious to me why the first argument should trump the second. In fact, I find considerations about the inevitability of appearing in mathematics to be decisive. What do you think? I am currently reading Edmund Husserl's "Ding und Raum". The book is a complete lecture series about how spatiality and things are constituted. It is heavily descriptive. Husserl used for his analysis the most basic operations I can imagine (e.g. the operation of identity and the operation of distinction). The level of primitiveness seems to me like that of mathematics. But first we have to clarify what we want to compare. I see 2 different understandings here: (1) A discipline can be prior to another in regard to its methodological approach. Here Husserl demonstrates that phenomenology operates on an equal level of primitiveness. (2) On the other hand, a discipline can be seen as prior in regard to the nature of its epistemic interests and outcomes. Philosophy loves to ask "why is stuff the way it is?" This kind of questioning and the corresponding answers can be seen as fundamental. I think Marc's second conflicting argument entails the second understanding as a hidden component. To say "... phenomenology is prior to mathematics because whatever happens during mathematical inquiry must perforce appear before (some)one" is in fact a consequence of phenomenological insights. But we have to be careful here: the validity of this argument derives from what it wants to point out. I think Louis Brassard is right. Mathematics reached a point of high abstraction long before any human started to reflect on what is going on in our mind and why all this stuff is even possible. Now phenomenology reinvents the wheel again.
But this time with an interest in the nature and the origin of transcendental principles. So yes, mathematics is prior to philosophy. This is true only because philosophy is more than just phenomenology. But in some cases philosophy can be as "prior" as mathematics. • asked a question related to Mathematics Question How do I obtain the mathematical expression for the "limiting current density used to reduce Fe+3 (A/m2)"? That is, how do I find i(Fe)? i(c) = i(Cu) + i(Fe) Armin, Did you find the equation? • asked a question related to Mathematics Question
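Returning to the exponential-with-offset fit asked about earlier (y = A.exp(B.x) + C): one workable observation is that for any fixed B the model is linear in (A, C), so those two come from a closed-form least-squares solve, and only B needs a search. A dependency-free sketch (the grid search over B is my own choice for illustration; a Levenberg-Marquardt routine such as scipy.optimize.curve_fit would do the same job without a grid):

```python
import math

def fit_exp_offset(xs, ys, b_grid):
    """Fit y = A*exp(B*x) + C by scanning candidate B values.
    For each fixed B, (A, C) solve the 2x2 normal equations exactly."""
    n = len(xs)
    best = None
    for B in b_grid:
        e = [math.exp(B * x) for x in xs]
        S_e, S_y = sum(e), sum(ys)
        S_ee = sum(v * v for v in e)
        S_ey = sum(v * y for v, y in zip(e, ys))
        det = S_ee * n - S_e * S_e
        if det == 0:
            continue                      # degenerate basis for this B
        A = (S_ey * n - S_e * S_y) / det
        C = (S_ee * S_y - S_e * S_ey) / det
        sse = sum((A * v + C - y) ** 2 for v, y in zip(e, ys))
        if best is None or sse < best[0]:
            best = (sse, A, B, C)
    return best[1], best[2], best[3]

# Synthetic data with known parameters A=2, B=1.5, C=0.7
xs = [i * 0.05 for i in range(41)]             # 0 .. 2
ys = [2.0 * math.exp(1.5 * x) + 0.7 for x in xs]
b_grid = [0.5 + 0.01 * k for k in range(201)]  # 0.5 .. 2.5, step 0.01
A, B, C = fit_exp_offset(xs, ys, b_grid)
```

On the noiseless synthetic data above the scan recovers A, B and C essentially exactly; with real data, refine the grid around the best B or hand the result to a proper nonlinear optimizer as a starting point.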
## Cryptology ePrint Archive: Report 2014/151 Security Analysis of Key-Alternating Feistel Ciphers Rodolphe Lampe and Yannick Seurin Abstract: We study the security of \emph{key-alternating Feistel} ciphers, a class of key-alternating ciphers with a Feistel structure. Alternatively, this may be viewed as the study of Feistel ciphers where the pseudorandom round functions are of the form $F_i(x\oplus k_i)$, where $k_i$ is the (secret) round key and $F_i$ is a \emph{public} random function that the adversary is allowed to query in a black-box way. Interestingly, our results can be seen as a generalization of traditional results \emph{à la} Luby-Rackoff in the sense that we can derive results for this model by simply letting the number of queries of the adversary to the public random functions $F_i$ be zero in our general bounds. We make an extensive use of the coupling technique. In particular (and as a result of independent interest), we improve the analysis of the coupling probability for balanced Feistel schemes previously carried out by Hoang and Rogaway (CRYPTO 2010). Category / Keywords: secret-key cryptography / block cipher, key-alternating cipher, Feistel cipher, coupling, provable security Original Publication (with minor differences): IACR-FSE-2014 Date: received 28 Feb 2014, last revised 28 Feb 2014 Contact author: rodolphe lampe at gmail com, yannick seurin@m4x org Available format(s): PDF | BibTeX Citation Note: An abridged version appears in the proceedings of FSE 2014. This is the full version. [ Cryptology ePrint archive ]
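The round functions of the form F_i(x ⊕ k_i) described in the abstract can be illustrated with a toy key-alternating Feistel cipher. A minimal Python sketch, with SHA-256 standing in for the public random functions F_i (my modeling convenience for illustration, not part of the paper; halves are 32-bit for brevity):

```python
import hashlib

def F(i, x):
    """Public round function F_i, modeled by hashing the round index
    together with the 32-bit input (a stand-in for a random function)."""
    digest = hashlib.sha256(bytes([i]) + x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def kaf_encrypt(block, keys):
    """Key-alternating Feistel round: (L, R) -> (R, L ^ F_i(R ^ k_i))."""
    L, R = block
    for i, k in enumerate(keys):
        L, R = R, L ^ F(i, R ^ k)
    return L, R

def kaf_decrypt(block, keys):
    """Invert the rounds in reverse order."""
    L, R = block
    for i, k in reversed(list(enumerate(keys))):
        L, R = R ^ F(i, L ^ k), L
    return L, R

# Round-trip with four (arbitrary, illustrative) round keys.
keys = [0x1234, 0x5678, 0x9ABC, 0xDEF0]
pt = (0xDEADBEEF, 0x12345678)
ct = kaf_encrypt(pt, keys)
```

Note that only the XOR with k_i is secret here; F_i itself is public and queryable, which is exactly the adversary model the paper analyzes.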
Suitable multimeter or oscilloscope for pulse waveforms Bhope691 Joined Oct 24, 2016 18 Hi, I am generating a pulse waveform which has a time period of between 150 - 250 us (it is not constant) and repeats every 4ms. I was trying to find a multimeter or oscilloscope that can give me the True RMS of this waveform but don't really know what I should be looking for. I have tried the ISO-TECH IDM66RT, the KEYSIGHT U1232A and the AMPROBE 37XR-A but all are giving different True RMS readings (for voltage and current). I believe it is because these are for sinusoidal waveforms and are struggling with the pulse. Is there a suitable multimeter or oscilloscope that could give me a reliable True RMS reading for voltage and current, and what would I need to look for in a multimeter? Thanks. Joined Mar 10, 2018 3,884 Take a look at this - https://en.wikipedia.org/wiki/Crest_factor Look at meter specs for what CF it can handle. DSO scopes generally speaking will do accurate RMS measurements. You have to check their math capability specs to see if they can handle measurements like RMS. The DS1054Z is a popular scope that can do this. Regards, Dana. Janis59 Joined Aug 21, 2017 1,057 One channel? Two channel? Four channel? Standalone (more expensive)? PC extension (cheaper, and easy to save screenshots)? One-, two- and four-channel ARM-based Chinese DSOs the size of a small phone go for about 100 USD on eBay; a bit better are the Chinese four-channel devices of the Hantek series at about 200 USD. If quality really is the goal, then the Picoscope (Pico Technology) series (those at 100 USD are rather weak, but those at the 4500 USD thick end are good indeed).
https://www.ebay.com/itm/DSO112A-Portable-TFT-Touch-Screen-Digital-Storage-Oscilloscope-2MHz-5Msps-I9F4/322041939102?_trkparms=aid%3D555018%26algo%3DPL.SIM%26ao%3D2%26asc%3D52885%26meid%3Db19cb51e8ef54e2ebf754abc24689a1e%26pid%3D100005%26rk%3D3%26rkt%3D12%26sd%3D173415633903%26itm%3D322041939102&_trksid=p2047675.c100005.m1851 https://www.ebay.com/itm/DS202-Mini-LCD-Digital-Oscilloscope-2-Channel-USB-10MSa-s-Built-in-8MB-U-Disk/332579557076?hash=item4d6f494ed4:g:ROQAAOSwg3taoRxT&_sacat=0&_nkw=one+channel+pocket+oscilloscope&_from=R40&rt=nc&_trksid=m570.l1313 http://www.hantek.com/en/ProductList_1_2.html Personally I have a Hantek 6254 (4ch, 300 MHz, 1GSa/s), a Hantek 6204 (200 MHz), an open-source nano-size one-channel 50 MHz device (I don't remember the exact name) and a Picotech 6404D reserved for more serious tasks - 1 GHz, 5GSa/s. If one is not trying to see something at WiFi frequencies, it's a rather good gentleman's kit. https://www.picotech.com/products/oscilloscope Of course, there is a plethora of other firms' products, but for amateur work these have the best price-to-parameters ratio; some heavyweight brands may be much more prestigious, though. The contrary is if the aim is GHz technique; then the choices are Picoscope or Rohde & Schwarz, but then you'll need to sell the family residence, or better two of them, to pay the bill. Last edited: Janis59 Joined Aug 21, 2017 1,057 RE: Danadak RE: "DSO scopes generally speaking will do accurate RMS measurements." For fast and well-repeating signals yes, of course. But my most shocking revelation this month is that for slow, long, non-repeatable signals there is an effect where the far future makes a hard impact on the past - a small change in the future substantially changes the whole past. And then, against all logic, the RMS value changes along the way. I wrote a complaint about this to Hantek, but they play ostrich politics, bury their heads in the sand and play the "bad wire between us" game. Thus I conclude - surely guilty.
However, it's not as drastic as it sounds; my data series is 1000 seconds long and has at least a few thousand sharp microsecond-scale peaks I want to look at in depth. In normal working regimes such a defect is not observable. Wuerstchenhund Joined Aug 31, 2017 189 I am generating a pulse waveform which has a time period of between 150 - 250 us (it is not constant) and repeats every 4ms. I was trying to find a multimeter or oscilloscope that can give me the True RMS of this waveform but don't really know what I should be looking for. OK, well to realistically capture this signal you need around 60kHz BW, which for a scope is nothing, however this may well exceed the capabilities of most multimeters (which aren't really the best tool for measuring pulse waveforms anyway). Is there a suitable multimeter or oscilloscope that could give me a reliable True RMS reading for Voltage and Current and what would I need to look for in a multimeter? Forget multimeters, just use a scope. As to which one, well pretty much any DSO made by the big brands (HP/Agilent/Keysight, LeCroy, Rohde & Schwarz, Tektronix) will do, as should any from the better B-brands (Rigol, Siglent). Forget about Hantek or OWON, they are crap. A good choice would be a Rigol DS1054z or the Siglent SDS1000X-E Series, which are pretty much the cheapest scopes that are decent instruments and not just a pile of dung. But without knowing your budget and requirements (you're hardly going to buy a scope just to measure this signal, or are you?) giving a recommendation is difficult. If it has to be cheap, look for a second-hand HP 54645A or 54645D digital scope. They can often be found below $200 and still make nice beginners' scopes. KeepItSimpleStupid Joined Mar 4, 2014 3,887 I am generating a pulse waveform which has a time period of between 150 - 250 us (it is not constant) and repeats every 4ms. By definition, you really can't, because it's not "continuously" periodic, if that's how I interpret the above.
You might be able to compute "energy" in units like Watt-seconds for a given time range, e.g. T11-T120 (11-120) seconds. Wuerstchenhund Joined Aug 31, 2017 189 One-, two- and four-channel ARM-based Chinese DSOs the size of a small phone go for about 100 USD on eBay These aren't scopes, these are toys. If you think that these $100 ARM scope kits will give you a reliable test instrument then you're dreaming. a bit better are the Chinese four-channel devices of the Hantek series at about 200 USD. Sorry but Hantek is crap. They are cheap but most of their scopes are outdated designs, they are slow, often come with miserably small sample memory sizes, and loads of bugs in their firmware (and they rarely get fixed). If you really can't afford to spend more than $200 then I'd rather go with a good second-hand scope like the HP 54645A or 54645D ('A' is scope only, 'D' is the MSO variant with logic analyzer). It offers all relevant measurements, has a very high update rate and 1Mpts of memory, which is the bottom end of what I'd recommend these days. With some luck $250 might even buy an Agilent 54622A/D, which adds triggers for a few serial standards. If quality really is the goal, then the Picoscope (Pico Technology) series (those at 100 USD are rather weak, but those at the 4500 USD thick end are good indeed). PicoScopes are limited in BW (their realtime scopes go to 1GHz only) and sample rate (5GSa/s max), they lack active probe interfaces and the faster ones only have low impedance inputs. And you need to have a PC or laptop on your bench and operate the scope with keyboard and mouse. PicoScopes are great for what they do, we use a bunch of them in automated setups, but even their biggest fans wouldn't recommend them as a replacement for a proper bench scope. Of course, there is a plethora of other firms' products, but for amateur work these have the best price-to-parameters ratio; some heavyweight brands may be much more prestigious, though.
It's not about prestige, it's about performance, reliability, and support you can depend on. And there's a reason the majority of scopes out there are from Keysight (formerly Agilent, which was formerly HP), and that many of the high-end and ultra-high-end scopes come from LeCroy. Both offer the best performance in their class, and the support is top notch. For hobbyists the best options are the Rigol DS1054z (a 4ch scope for $400) and the Siglent SDS1000X-E Series. Both scopes offer serious performance and features that make them tools, not toys, and they are way above anything made by Hantek without costing a lot more. Besides, there's still the 2nd-hand market, with lots of opportunities to get a real high-end scope for rock-bottom prices, if you know what to look out for. The contrary is if the aim is GHz technique; then the choices are Picoscope or Rohde & Schwarz, but then you'll need to sell the family residence, or better two of them, to pay the bill. If you need a 1GHz scope then at the moment the LeCroy WaveSurfer 3104z and Keysight DSOX3104T are pretty much the best 8bit scopes in that class. There's also the Tektronix MDO Series but like most Tek scopes it's painfully slow and has a horrible user interface. If you need more than 1GHz but not more than 8GHz then the LeCroy WavePro HD and Keysight DSO-S are both the best choices, although the R&S RTO2000 and R&S RTP aren't bad scopes either. Tek has the new MSO5 and MSO6 Series but they still suffer from typical Tek problems (horrible UI, slow/locks up when doing stuff), plus both are still full of bugs. If you need more than 8GHz then there's no R&S, and it's pretty much down to Keysight, LeCroy and Tektronix (although the Tek DPO70kSX can't keep up with LeCroy's and Keysight's offerings). If you need up to 100GHz then there's only LeCroy's LabMaster 10zi-A (100GHz with up to 40 channels) or, since recently, Keysight's UXR (110GHz, up to 4 channels).
RE: Danadak RE:""DSO scopes generally speaking will do accurate RMS measurments."" For fast and well reapeating signals yes, of course. But my this month most shocking revelation ios that for slow long non-repeatable signals happens the effect of "far Future make the hard impact on Past, that small change in future capitally changes the whole past.". And then - more than logic, the RMS value is changed along the way. I wrote the claim for this to the Hantek, but they play the strauss police, take head in sand and play the "bad wire between us" game. Thus I dedicate - sure guilty. However its not so drastic as sounds, my dataserie longs 1000 seconds and has at least few thousands of sharp microsecond scale peaks I want to look in-deep. In normal work regimes such defect is not observable. If your RMS measurements are incorrect then there's something wrong with either your scope or your measurement setup. And in this case I doubt the problem is even with Hantek. You stated you have a Hantek 6204? According to the manufacturer that scope has a measly 64k sample buffer. Which means that at 1GSa/s it can only catch 64us until the buffer is full, and that means that it has to drop the sample rate dramatically at longer time bases. Using your 1000 seconds, 64kpts allows for a sample rate of 64 samples/s, which would be sufficient to sample a signal which has no frequency components beyond 30Hz. There's no way in hell you can capture a 1000s period and have this scope appropriately resolve "microsecond scale peaks" at the same time. Which should explain why your RMS measurements are off. There are ways to deal with this even on a low memory scope like your Hantek, i.e. multiple acuisitions. On a better scope there are even features like segmented memory which are made for such applications. But the main requirement is that the user not only knows his scope but actually understands the principle behind Nyquist sampling and its limitations. 
Last edited: Wuerstchenhund Joined Aug 31, 2017 189 By definition, you really can't because it's not "continuously" periodic if that's how I interpret the above. Indeed, because of the pulse width variation it's not periodic. But that doesn't prevent him from measuring momentary RMS, i.e. in a single acquisition. Using statistics he could even measure the RMS range based on the pulse width. You might be able to compute "energy" in units like Watt-seconds for a given time range. e.g. T11-T120 (11-120) seconds He could also simply calculate the AC RMS or AC+DC RMS: (Image courtesy of Keysight Technologies) Last edited: Janis59 Joined Aug 21, 2017 1,057 RE: "There's no way in hell you can capture a 1000s period and have this scope appropriately resolve "microsecond scale peaks" at the same time." 1) I never even had the idea that it ought to be done. The screen resolution is always fully OK, and if higher resolution is needed then logically the sweep time must be shortened. 2) And it's certain that processes which happen at random moments, with a time span of only 1...30 microseconds once every few seconds, are stably captured on screenshots, like sharp formless needles, so the hypothesis about a 30Hz BW is fully wrong. I would be scared by your text if I had not successfully made a few hundred such screenshots in the last week - with satisfying resolution. However, you are right that the "Zoom" functionality here is just purely dogs sh** - a clear sign of memory weakness, which I have never noticed on Picotech machines. As far as I understood from the manual, that shockingly short memory is used intensively to flood the results to the computer, where RAM restrictions are hardly smaller. And the battery-fed, laptop-bound system is the only one worth discussing, because every built-in-screen type needs a mains cable. That means they are not capable of measuring anything at a kilovolt to a hundred kilovolts above earth.
An isolation transformer in such situations is little help because of its inherent capacitances. And a last argument: such high-risk work leads from time to time to surges through the scope. It's much better to have a minimally fast 50 USD scope, as long as it is capable of the task, than a powerful 400 USD machine if one bad day means a few of them must be thrown in the junk. But indeed, I have not yet opened it to inspect the circuitry, so I cannot comment on HOW they achieve it. P.S. Thanks for the idea about LeCroy. It sounds so French that I didn't take it seriously. Mea culpa. When in future I have larger project money, I shall seriously consider purchasing something similar. Wuerstchenhund Joined Aug 31, 2017 189 RE: "There's no way in hell you can capture a 1000s period and have this scope appropriately resolve "microsecond scale peaks" at the same time." 1) I never even had the idea that it ought to be done. The screen resolution is always fully OK, and if higher resolution is needed then logically the sweep time must be shortened. If you choose a (much) shorter timebase then yes, capturing microsecond peaks is possible, as with a shorter timebase the sampling rate will increase. 2) And it's certain that processes which happen at random moments, with a time span of only 1...30 microseconds once every few seconds, are stably captured on screenshots, like sharp formless needles, so the hypothesis about a 30Hz BW is fully wrong. The thing is, math and physical facts don't care whether you believe in them or not. And the fact remains that with only 64k of memory your sample rate can't go any higher than 64 Samples/s if the period you want to capture is 1000s. This is simple math. And 64 Samples/s is sufficient only for signals that don't have more than about 30 Hz of BW, because everything higher would violate Nyquist-Shannon. I would be scared by your text if I had not successfully made a few hundred such screenshots in the last week - with satisfying resolution.
Which, as I said, works fine if you shorten the timebase to something very narrow. If your timebase however was set to something like 1000s then the captured signal won't have a lot to do with reality. However, you are right that the "Zoom" functionality here is just purely dogs sh** - a clear sign of memory weakness, which I have never noticed on Picotech machines. I didn't say anything about zoom, and sample memory size is not just important for zoom. It's an important marker for the overall performance of your scope and a deciding factor in how much your sample rate (and thus usable BW) drops at longer time base settings. As far as I understood from the manual, that shockingly short memory is used intensively to flood the results to the computer, where RAM restrictions are hardly smaller. This isn't physically possible. Your Hantek scope is an 8bit scope, which means when sampling at 1GSa/s it produces 1GB/s of data. But the interface is only USB 2.0, which even if it supports the fastest mode (HiSpeed) is limited to only 35MB/s (or in other words, the interface would become saturated at sampling speeds higher than 35MSa/s). And this doesn't even take into account any overhead. And the battery-fed, laptop-bound system is the only one worth discussing, because every built-in-screen type needs a mains cable. That means they are not capable of measuring anything at a kilovolt to a hundred kilovolts above earth. An isolation transformer in such situations is little help because of its inherent capacitances. Well, many bench scopes support USB connection in addition to Ethernet (and some even have only USB), but you're right, a bench scope, unless battery powered (rare), will always be earth-bound, which is a big problem if you have to measure high-voltage mains. That's one of the areas where USB scopes shine, especially as these applications are generally low-frequency anyway. And a last argument: such high-risk work leads from time to time to surges through the scope.
It's much better to have a minimally fast 50 USD scope, as long as it is capable of the task, than a powerful 400 USD machine if one bad day means a few of them must be thrown in the junk. Absolutely, and if you carry the scope around a lot there's always the danger of damage. But high-voltage mains is a special case with requirements distinct from general EE, which usually centres on a bench in a lab where either voltages are low or safety facilities are in place to protect the user. P.S. Thanks for the idea about LeCroy. It sounds so French that I didn't take it seriously. Mea culpa. When in future I have larger project money, I shall seriously consider purchasing something similar. LeCroy (an American company) is often overlooked, even though they invented the modern digital scope and have been pretty much at the forefront of the technology for most of its existence (some of the scopes they make you can't get anywhere else). Most people think "Tektronix" when they think about scopes because back in the day Tektronix made the best analog scopes you could buy. Unfortunately Tek never had a hand for digital scopes, and most of the DSOs they made were between 'mediocre' and 'pile of dung'. But if your application is high-voltage mains then you won't find anything in LeCroy's portfolio, as they make lab scopes only. Instead, have a look at Keysight, they have great USB scopes (the existing U2701A/U2702A which come with 32Mpts of memory and the new Streamline series with 4Mpts and 5GSa/s) as well as very nice handheld scopes (U1600 Series). If Keysight doesn't fit your budget then have a look at Siglent, they have the SHS800 and SHS1000 Series of handheld scopes. The SHS1000 is rated at 1000V CAT II and 600V CAT III, and both sell at reasonable prices. Last edited: Janis59 Joined Aug 21, 2017 1,057 Okay, thanks for the idea about Siglent.
Actually, I had to buy the Hantek because on a business trip I forgot my scope at home, some 1200 km away, so I looked for something cheap that could be delivered the same day. So for the moment I can't say anything bad, except for the absent customer service and the small memory. But sure, next time one of my bags stays in the dark in the garage, I shall buy a Siglent. P.S. Actually, that trip played out like a horror movie. After driving 3000 km to Germany, in the middle of the highway the motor died. One year after a timing belt change, the belt cracked (probably a hidden Chinese part), so the whole valve group went dead. But in spite of my arrogance in not obtaining an ADAC card before the trip, God himself kept me under his strong wings - at 100 miles per hour I rolled by inertia about 5 kilometres, then about 3 km more slowly down the hill, and there was my saving place, the Rastplatz. So at least I escaped a penalty of a few thousand. There (with no fewer adventures) I managed to buy another car (Germany had just passed a new anti-diesel law, so all diesels were going for laughably low prices) and a trailer. Leaving the stranded car at the Rastplatz would mean the cameras would tell them who the owner is, even if I screwed off the number plates, so I would unavoidably pay a five-digit penalty. Therefore I drove the new car, trailer and old car back home, legalized the new one with homeland plates, and only one day late arrived at that Polish lab where I must work two months on the current exploration project under a collaboration agreement with my native university. Any man could forget his most important bag in the garage after driving about 10,000 km in 7 days. I would not have believed it possible before it happened to me. P.P.S. I just looked at the price of those scopes: 1100 USD, plus 250 in taxes, plus 150 forwarding - and for this money ONLY 100 MHz!!! At the same time a four-channel Hantek is about 130, with no other payments, and 300 MHz.
I'm not sure I can even use the SHS1000 series for my current task, where I sometimes need to build slow-process voltage and current curves over time and print them at a good size, while seeing on them how many times rapid peaks occurred. And sometimes I need to get deeper into those peaks, which have structure down to the sub-microsecond scale. At the moment I just try to "write" a data movie (Hantek offers such an option) and then hope that a few peaks happen during it (the shorter the timebase, the shorter the movie). Then that holy crap can be viewed in slow motion and a Print Screen captured (sadly those China boys didn't provide a way to step through it manually, frame by frame). Thus, with pain in the a**, it is possible to get those explorable waveforms of deep physics into the computer. I hope the article about it will be published soon. I hope you will not be the peer reviewer who says Hantek is bull**** and therefore my whole article is the same. Last edited: ebp Joined Feb 8, 2018 2,332 While the fanbois do battle over brands, let's focus on the actual requirements. Can you tell us some more about the nature of the pulses: 1. are they rectangular or some other shape? 2. is the shape consistent? 3. do you need the RMS value based on a fixed pulse width, or a sort of "average" RMS over a period where the pulse width is varying? 4. is the peak amplitude known? What degree of precision and accuracy do you need? With the right set of circumstances, you can determine the RMS value by reading the DC average value with a cheap DMM and doing some simple arithmetic. Lots of circumstances would make this impossible, but for some it is quite easy.
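As an illustration of ebp's point that simple arithmetic can suffice: for an ideal rectangular pulse train with a zero baseline (an assumption, since the OP didn't state the shape or amplitude), the RMS is A * sqrt(duty cycle), while a cheap DMM's DC average reads A * duty. A sketch using the OP's timing figures, with amplitude normalized to 1:

```python
import math

def pulse_rms(amplitude, pulse_width, period):
    """RMS of an ideal rectangular pulse train (zero baseline):
    V_rms = A * sqrt(D), with duty cycle D = pulse_width / period.
    The DC average is A * D, which is why a DC reading plus
    arithmetic can recover the RMS in this special case."""
    duty = pulse_width / period
    return amplitude * math.sqrt(duty)

# The OP's signal: 150-250 us pulses repeating every 4 ms.
rms_low = pulse_rms(1.0, 150e-6, 4e-3)    # about 0.19 x amplitude
rms_high = pulse_rms(1.0, 250e-6, 4e-3)   # exactly 0.25 x amplitude
```

So the varying pulse width alone moves the true RMS by roughly 30% between the two extremes, which is one plausible reason the three meters disagreed.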
Questions on article writing jobs (filtered by language)

- NOTE: This is NOT an SEO article; just write naturally. The articles must be UNIQUE and original; they will be thoroughly checked for scraping and spinning. Overview: please read this content brief to the end; every detail is important, and we will check against this brief when approving or refusing the content. Title: [How … (Avg bid: $12/hr, 52 bids)
- I need you to write answers to the questions. (Avg bid: $26, 37 bids)
- We need someone who knows Microsoft Excel and can use it to develop an exam of 21 sample questions. Some question types are simple, while others are graphical. Knowing Excel programming (VBA) may help. The exam scores should be retrievable. NOTE: if you know how to integrate Excel exams with a Learning Management System (LMS), tell … (Avg bid: $55, 23 bids)
- (no description) (Avg bid: $39, 38 bids)
- Please read the instructions carefully before you apply; apply only if you think this job is for you. 1) There are 20 questions/activities to be answered. 2) The subject is "creating a product and developing marketing strategies". 3) A document is attached. 4) Check the document to see if it is for you. 5) No plagiarism. 6) It is needed the … (Avg bid: $25, 10 bids)
- Creation of Mathematics and Science questions for Class IX. (Avg bid: $96, 10 bids)
- I have all the questions in MS Word format. Currently there are 2 options for uploading them from MS Word to MySQL: 1) upload one question at a time via an HTML form, manually copy-pasting the question, all the answers, and any images (takes 1-2 minutes per question); 2) upload via a CSV file. (Avg bid: $31, 16 bids)
- (no description) (Avg bid: $25, 51 bids)
- Need a professional counselor and psychologist to answer questions on [log in to view URL]. £50 for answering each 20 questions; I have about 260 questions. You will answer 20 questions every two weeks, and I will pay the fees after checking each batch of 20 answers. For more info you can text me [removed by Freelancer.com admin] … (Avg bid: $41, 22 bids)
- … resume the test at the last attempted question; navigate to unanswered or marked questions; once the student completes the test, generate the marks and rank the student among all, with a review of answers including explanations; admin login to upload questions and explanations; simple logo and name for the test series. (Avg bid: $132, 6 entries)
- I have 4 quite basic math questions and need them done in 2 hours; please message me so we can discuss. (Avg bid: $26, 10 bids)
- Converting 1000 calculus questions and answers from PDF to PPT: (1) calculus questions with multiple-choice answers; (2) a slide for each question, with additional hints; (3) a slide for each answer, with additional explanation. (Avg bid: $35, 1 bid)
- We run a website to help people new to finance and are looking for a writer who can answer questions in around 200-250 words. Pay is fixed at $1 per approved answer; you will answer 10 questions daily, 5 days a week. New freelancers can also apply. Before hiring, you will write 2 answers so we can check your writing style and speed. (Avg bid: $120, 31 bids)
- Answer questions (ended): … details on how to set new efficiency targets, investigate and apply new tools and strategies, plus promote those strategies and reward participants. Explain why it is important to constantly monitor your own work, and to know and understand the organisation's goals. What is the effect of regulations and legislation on business … (Avg bid: $24, 40 bids)
- Porting a health questionnaire in Google Forms to dynamic pages where questions depend on previous answers; an option for repeated questionnaires pre-filled with previously entered information; emailing capability for sending invitations; also scripts checking for specific emails and reacting to them. (Avg bid: $1088, 11 bids)
- We are looking for someone to create FRM Part 2 mathematical questions on risk management (VaR, CVaR, ES, etc.) for our FRM preparation website. Please quote a price for 50 practice questions; we will start with a 10-question free trial. (Avg bid: $293, 8 bids)
- We want to develop and validate questions to be asked as part of pre-employment testing; currently looking for a subject-matter expert in engineering and logical thinking. (Avg bid: $7/hr, 37 bids)
- GK quiz questions (ended): a mobile application in which people get 10 new questions every day. (Avg bid: $149, 20 bids)
- I need help with some inorganic chemistry questions. (Avg bid: $17, 48 bids)
- Need help with simple inorganic questions. (Avg bid: $26, 14 bids)
- … broken down into two parts: content writing and the design of the package. Hopefully one freelancer can handle both; if your skills suit one part over the other, bid on that section and say which one. Please send samples of your previous work with your bid. (Avg bid: $463, 29 bids)
- I already have a design; I just need a small application that takes a Word file (a multiple-choice question paper), crops the questions and their options, and saves all the cropped questions in a single folder. .NET preferred, but no technical dependency on our side. (Avg bid: $95, 17 bids)
- OCL and JML questions (ended): finish specification and refinement questions about OCL (Object Constraint Language) and JML (Java Modelling Language); contact me for further information. (Avg bid: $168, 8 bids)
- I require MCQs in Excel format from the CBSE board (Class X and Class XII), with a 4-day timeline: first column easy questions, middle column medium, right column difficult. The proper format will be shared before work. (Avg bid: $59, 39 bids)
- To automate a small machine-readable monthly report, I would like an Excel VBA macro that asks the user (via a dialog box) whether something has changed. If not, it should yield last month's report (repeating all fields and changing only the month's date); if something has changed, the user indicates in which field. (Avg bid: $35, 25 bids)
- Not for bloggers or article writers; bid only if you have experience in Business Process Writing. The 2 documents are already complete, but you have to modify them based on comments. Case scenario: choose an organization of your preference (which can be the organization you are currently associated with) … (Avg bid: $52, 20 bids)
- Questions: how similar can a website be; how close can competing domains be; do I need to register/trademark my websites; plus 1 private question. (Avg bid: $33, 17 bids)
- I need to log in automatically to a website that uses JavaScript and 2FA questions. The username, password, and the set of questions and answers are stored in a config file; the site may ask any two questions out of a bank of 10 predefined ones. This should be done in Python and run on macOS. (Avg bid: $130, 9 bids)
- Editing and proofreading around 700 questions and answers in a spreadsheet (a sample is attached). Most are relatively short, but around 20% to 25% are longer and will need to be edited down. (Avg bid: $92, 160 bids)
- Sign up and post to health-insurance forums to answer questions, provide value, and share a link to our website article about our health-insurance quotes; 4 posts per week for 10 weeks, ongoing. (Avg bid: $99, 14 bids)
- Help develop libraries in the Nim language using [log in to view URL] by answering questions, providing ideas regarding compilation, etc. Specific expertise needed in: 1) creating, modifying, and removing local user accounts; 2) setting passwords on local user accounts; 3) assigning permissions to local user accounts (local admin) … (Avg bid: $234, 6 bids)
- I have some technical questions about mobile app development and need a 20-30 minute phone call with a mobile app developer covering: 1) mobile app mock-up; 2) user interface and user experience (UI & UX); 3) system requirement specification; 4) mobile app type; 5) website and mobile app platform. (Avg bid: $23, 5 bids)
- I need someone on [removed by Freelancer.com admin] whom I can ask questions as they come up. Currently I have one question about binding ListViews, and I see value in having a contact I can speak to as I learn the language: 1 answer and solution now, and 9 more in the near future. (Avg bid: $21, 4 bids)
- I want a content writer to create multiple-choice questions on a variety of subjects; I will give the subject matter and you create the MCQs. (Avg bid: $121, 9 bids)
- Questions for a quiz app (ended): 3000 questions from each of 6 categories (science, history, entertainment, sports, GK, …), a total of 24000 questions, each with four options including the correct one, in an Excel sheet. (Avg bid: $86, 7 bids)
- Do you follow leading tech blogs to keep up with major technological developments? Do you love writing and would like to join the GetTechMedia team? GetTechMedia (Technology Our Passion) is seeking writers to improve the quality of … (Avg bid: $92, 5 bids)
- We have an Android app that communicates with a server and need a few APIs for access to some data (token-based). Our app needs to share user data with a partner company who needs access; the web server is a PHP framework. I need help finding answers to my questions and implementing whatever is missing. (Avg bid: $43, 12 bids)
- Article and blog writing; needs to hire 10 freelancers. Articles will be at least 700 words. I have a steady stream of business, so I am looking for a long-term partner who can create good, unique, SEO-optimized content. I will provide a list of titles/keywords; you research and write accordingly. (Avg bid: $82, 24 bids)
- I am developing an app and would like the best solution for a marketing campaign, and the profit expected to cover the app's monthly budget. (Avg bid: $133, 9 bids)
- Around 70,000 questions to be translated from English to Hindi; Excel format is available, and samples are posted for reference. Subjects: Maths 22,500; Science 32,500; History 7,000; Geography 7,000; GK-10 5,000; GK-12 6,000. Delivery period: 7 to 10 days. (Avg bid: $1356, 52 bids)
- I have 60 questions where students are asked to rank random angles from smallest to largest, with answers available; looking for someone to create 60 graphical solutions to help students understand. (Avg bid: $119, 54 bids)
- (Same brief as above.) (Avg bid: $120, 24 bids)
- Writing approximately 3-5 quizzes every weekday (this may vary), each 100-300 words. Since the number of translations can be variable, we require a flexible writer who can adjust to our publishing schedule. IMPORTANT: this job requires translation of content on an ongoing, daily basis (Monday- …). (Avg bid: $100, 29 bids)
- Curate Maths and Science questions. (Avg bid: $308, 40 bids)
- I have some questions on wireless sensor networks (WSN): design, modeling, formulation, node deployment, and so on, including corona-based WSN. Looking for an expert in WSN and networking. (Avg bid: $27, 2 bids)
- A currently confidential project that includes a trivia game using word categories (acronyms, literary references, word roots), with multiple-choice answers. Example: elated is to despondent as enlightened is to … ignorant, miserable, tolerant. A large number of questions needed on a monthly basis (approx. 150). (Avg bid: $2236, 12 bids)
- Answer 11 interview questions about learning experience, around 1500 words. No technical writing; just simple English, but the questions do need to be answered properly. (Avg bid: $29, 81 bids)
- … reading passages (years 3-7) for which I require comprehension questions and accompanying answers, comprising factual recall, contextual, inferential, and vocabulary/grammar question types: 12 questions per passage covering these types. Ideally someone who has worked on something similar before. (Avg bid: $177, 24 bids)
- … a site for viewing the answers to questions. Users should be able to add their own multiple-choice questions, subject to my confirmation before they appear to all users. Most importantly, these questions will probably contain equations, so MathJax must be supported; statistics should also appear below the questions. (Avg bid: $298, 51 bids)
- Help answering Accounting questions. (Avg bid: $19/hr, 29 bids)
It means, among other things, that people in situations of uncertainty tend to look for familiar patterns and are apt to believe that the pattern will repeat itself. The gambler's fallacy is a heuristic in which a person thinks the probability of an outcome has changed when, in reality, it has stayed the same. One night, a cab is involved in a hit-and-run accident. Forensic evidence, including a footprint left at the scene, led to the arrest of a 17-year-old from the same village who had delivered newspapers to the victim's door for the previous 3 years and was aware that she had money and jewels stashed in her home. In other words, people tend to commit the base rate fallacy about that description of Jack. This is an example of the base rate fallacy because the subjects neglected the initial base rate presented in the problem (85% of the cabs are green and 15% are blue). In a typical study, the participants were asked to predict the field of study of a graduate or the profession of somebody on the basis of a brief description. It is very important that police investigators be open to alternative viewpoints, and it is equally important that profilers help create alternative ideas. For example: 1 in 1000 students cheat on an exam. Now let's say the YCD has a 5% false-positive rate. The classic scientific demonstration of the base rate fallacy comes from an experiment, performed by psychologists Amos Tversky and Daniel Kahneman, in which participants received descriptions of 5 individuals apparently selected at random from a pool of descriptions that contained 70 lawyers and 30 engineers, or vice versa. A base rate is a phenomenon's basic rate of incidence. The truth, however, is that the probability of a coin landing "heads" or "tails" is the same on every flip. Rainbow et al., for example, note that the profiler may focus on a specific offender, pushing into the background useful information about the population of offenders with similar characteristics.
Base rate fallacy, or base rate neglect, is a cognitive error whereby too little weight is placed on the base, or original, rate of possibility (e.g., the probability of A given B). So, set the True state variable for 'Woman has cancer' = 0.01. When something says "50% extra free," only a third (33%) of what you're looking at is free. The case involved a 90-year-old woman who was found dead in her home. All 1000 students are tested by the system. The base rate fallacy is a tendency to judge the probability of an event based entirely upon irrelevant information rather than the actual base rate probability of that event. The profiler should communicate more clearly by placing a personal percentage on the prediction (e.g., 30%) so that investigators can judge how strongly the profiler believes the event will occur. In terms of prioritizing suspects, base rate information from research into elderly homicide, together with a logical crime scene interpretation, strongly indicated that the offender was likely to have some association with the victim and probably lived in close proximity. Imagine a Townsville policeman has developed a youth criminal detector that we shall call the YCD. People focus on other information that isn't relevant instead. "If you overlook the base-rate information that 90% …" Nevertheless, it should be emphasized that this is a probability, not a definitive prediction. This happened even when the participants were made familiar with the base rates, that is, the frequencies of law and engineering students and professionals in the population.
While it is effective for some problems, this heuristic involves attending to the particular characteristics of the individual case. That is, people seem to ignore the 30% base rate of engineers in the final sentence. At the empirical level, a thorough examination of the base rate literature (including the famous lawyer-engineer problem) does not support the conventional wisdom that people routinely ignore base rates (Koehler, J. J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences, 19, 1-53). The YCD is so advanced that just by taking a saliva sample it can tell whether youths aged 10-24 years old are criminals or not. A cheating detection system catches cheaters with a 5% false positive rate. When people categorize things on the basis of representativeness, they are using the representativeness heuristic. We have been oversold on the base rate fallacy in probabilistic judgment from an empirical, normative, and methodological standpoint. In probability and statistics, base rate generally refers to the (base) class probabilities unconditioned on featural evidence, frequently also known as prior probabilities. In plainer words, if it were the case that 1% of the public were "medical professionals" and 99% of the public were not, then the base rate of medical professionals is simply 1%. It would be tempting to view this as a horrific illustration of a cult-related murder and assume that a small group of individuals was involved. In making rough probability judgments, people commonly depend upon one of several simplified rules of thumb that greatly ease the burden of decision. The base rate fallacy, also called base rate neglect or base rate bias, is a fallacy.
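The cheating-detector numbers in the text (1 in 1000 students cheat; a 5% false-positive rate) can be pushed through Bayes' theorem. A sketch, with the added assumption, not stated in the text, that the detector flags every actual cheater:

```python
# Base rate fallacy: P(cheater | flagged) for the exam-cheating detector.
# Given: 1 in 1000 students cheat, 5% false-positive rate.
# Assumed (not stated in the text): every real cheater is flagged.
p_cheat = 1 / 1000
p_flag_given_cheat = 1.0           # assumed true-positive rate
p_flag_given_honest = 0.05         # false-positive rate from the text

p_flag = (p_flag_given_cheat * p_cheat
          + p_flag_given_honest * (1 - p_cheat))
p_cheat_given_flag = p_flag_given_cheat * p_cheat / p_flag
print(round(p_cheat_given_flag, 4))  # about 0.0196: a flagged student is
                                     # roughly 98% likely to be innocent
```

Intuitively, among 1000 tested students the system flags the 1 cheater plus about 50 honest students, so a flag is overwhelmingly likely to be a false positive.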
Another practical application of the base rate fallacy: give them 33% and tell them it's 50%. Lots of food companies exploit the base rate fallacy on their packaging. These colleagues may see things or ask questions that the profiler has not seen or asked. For example, the base rate of suicide in the general population is less than 1%, whereas the base rate of suicide for a more restricted population, for example among patients with borderline personality disorder, may be as high as 10%. It is likely that clinically based profilers will resist the notion of attaching a percentage figure to their predictions; this seems to fly in the face of intuition or clinical judgment. It is likely, then, that a team of profilers working together will produce a more accurate profile than a lone individual. The base rate fallacy is the tendency to ignore information about general principles in favor of very specific but vivid information. The False state probability will be calculated automatically as 1 - 0.01 = 0.99. According to Heuer (1999), however, probabilities of something happening may be expressed in two ways. Adding to the drama, the murder had happened on an island off the coast of Wales that was dotted with ancient Druid ruins. This heuristic is often equated with the heuristic of representativeness: an event is judged probable to the extent that it represents the essential features of its parent population or of its generating process.
The base rate fallacy, also called base rate neglect or base rate bias, is a formal fallacy. If presented with related base rate information (i.e., generic, general information) and specific information (information pertaining only to a certain case), the mind tends to ignore the former and focus on the latter. The most common form of the fallacy is the tendency to assume that small samples should be representative of their parent populations, the gambler's fallacy being a special case of this phenomenon. Tversky and Kahneman (1973) demonstrated that people had a tendency to neglect base-rate or statistical information in favor of similarity judgments. The conclusion: the profiler neglects or underweights the base-rate information; that is, s/he commits the base-rate fallacy (pp. 152-153). Base rate neglect is a specific form of the more general extension neglect. Rainbow et al. (2011) provide an excellent example of how investigators and profilers may become distracted from the usual crime scene investigative methods because they ignore or are unaware of the base rate. Taxonomy: logical fallacy > formal fallacy > probabilistic fallacy > the base rate fallacy. Alias: neglecting base rates. Thought experiment: suppose that the rate of disease D is three times higher among homosexuals than among heterosexuals; that is, the percentage of homosexuals who have D is three times the percentage of heterosexuals who have it. This tendency has important implications for understanding judgment phenomena in many clinical, legal, and social-psychological settings. Description: ignoring statistical information in favor of using irrelevant information, which one incorrectly believes to be relevant, to make a judgment. Blood had been drained from her body and poured into a small container, which had the traces of lip marks on the rim. The base rate fallacy is a common cognitive error that skews decision-making, whereby information about the occurrence of some common characteristic within a given population is ignored or not given much weight in decision making.
In the paper "The Base Rate Fallacy," the author suggests that 1 in every 1000 employees in government is a spy. A base rate fallacy is committed when a person judges that an outcome will occur without considering prior knowledge of the probability that it will occur. In many instances, subjective probability statements are ambiguous and misunderstood by police investigators. Mary Lynne Kennedy, W. Grant Willis, and David Faust (1997) discuss the base-rate fallacy in school psychology (Journal of Psychoeducational Assessment, 15(4), 292-307). Base rate neglect is especially likely to happen if the profiler encounters a case that s/he perceives is unique and outside the usual cases within a particular offense category. An individual object or person has a high representativeness for a category if that object or person is very similar to a prototype of that category. This tendency has important implications for understanding error judgments made by profilers. A failure to take account of the base rate, or prior probability, of an event when subjectively judging its conditional probability. In the above example, where P(A|B) means the probability of A given B, the base rate fallacy is the incorrect assumption that $P(\mathrm{terrorist}\mid\mathrm{bell}) = P(\mathrm{bell}\mid\mathrm{terrorist})$. However, investigators in this case were wise enough to consider base rate data: who kills the elderly? The representativeness heuristic is seen when people use categories: when deciding, for example, whether or not a person is a criminal. Failing to consider the base rate leads to wrong conclusions, known as the base-rate fallacy. In this chapter we will outline some of the ways that the base-rate fallacy has been investigated, discuss a debate about the extent of base-rate use, and, focusing on one … Candles had been arranged to suggest some kind of ceremony had occurred, and fireplace pokers were placed at her feet in the shape of a crucifix. If you think half of what you're looking at is free, then you've committed the base rate fallacy. However, people tend to avoid the base rate fallacy when individuals are not described stereotypically (Turpin et al., 2020). The problem should have been solved as follows: there is a 12% chance (15% x 80%) that the witness correctly identified a blue car.
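The worked step quoted in the cab example (15% x 80% = 12%) is only the numerator of the Bayes calculation; a short sketch completes it:

```python
# Cab problem: 15% of cabs are blue, 85% green; the witness identifies
# colours correctly 80% of the time and says the cab was blue.
p_blue, p_green = 0.15, 0.85
p_correct = 0.80

p_says_blue_and_blue = p_blue * p_correct            # 0.12, as in the text
p_says_blue_and_green = p_green * (1 - p_correct)    # 0.17, misidentified greens
p_blue_given_says_blue = p_says_blue_and_blue / (
    p_says_blue_and_blue + p_says_blue_and_green)
print(round(p_blue_given_says_blue, 2))  # 0.41: well below the witness's
                                         # 80% accuracy, because greens dominate
```

Despite the witness being 80% reliable, the posterior probability that the cab was actually blue is only about 41%, because misidentified green cabs outnumber correctly identified blue ones.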
Few if any profilers would be so foolish as to indicate that the perpetrator definitely possessed certain characteristics. As Heuer reports, "To say that something could happen or is possible may refer to anything from a 1-percent to a 99-percent probability." Almost invariably, they will make statements framed as probabilities, communicating that there is some uncertainty in their assessment. The 17-year-old killer, in an attempt to divert attention away from himself, set the stage to make it appear to be a mysterious ritualistic murder. The base rate fallacy is committed when a person focuses on specific information and ignores generic information relating to the overall likelihood of a given event. A classic explanation for the base rate fallacy involves a scenario in which 85% of cabs in a city are blue and the rest are green. Investigators concluded it was neither a ritualistic sacrifice nor an occult ceremony, but a straightforward robbery-murder situation. The neglect or underweighting of base-rate probabilities has been demonstrated in a wide range of situations in both experimental and applied settings (Barbey & Sloman, 2007). Nevertheless, according to Heuer (1999), without such guidance, investigators may be inclined to interpret ambiguous probability statements as highly consistent with their own preconceptions of the case. At the crime scene, her heart had been removed from her body and placed on a silver platter. In this context, the profiler should be comfortable enough to consult with outside experts and colleagues whenever possible to formulate alternative perspectives.
A witness claims the cab was green; however, later tests show that witnesses only correctly identify the colour 80% of the time. We have base rate information that 1% of women have cancer. The description contained some personality traits that were similar to the stereotype of a profession, for example, of lawyers or engineers. A simple example of this would involve the diagnosis of a condition in a patient. With the "anchoring" strategy, people pick some natural starting point for a first approximation and then adjust this figure based on the results of additional information. Participants were asked to predict whether … We want to incorporate this base rate information in our judgment. Then, I ask you what the probability is that I will pick a green one while my eyes are closed? Another well-known aspect of representativeness is the conjunction fallacy, where higher probability is given to a well-known event that is a subset of an event to which lower probability is assigned. It also happens when the profiler believes s/he is better equipped for dealing with the case based on prior experience. The base rate fallacy here is the incorrect assumption that $P(\mathrm{terrorist}\mid\mathrm{bell}) = P(\mathrm{bell}\mid\mathrm{terrorist}) = 99\%$. However, the correct expression uses Bayes' theorem to take into account the probabilities of both A and B, and is written as: $P(\mathrm{terrorist}\mid\mathrm{bell}) = \frac{P(\mathrm{bell}\mid\mathrm{terrorist})\,P(\mathrm{terrorist})}{P(\mathrm{bell})}$. Giving the test to all the employees of the government and defense contractors implies that, for every spy, 999 individuals who are not spies will be subjected to the test (p. 44). Other terms often used in conjunction with this heuristic are base-rate neglect, small-sample fallacy, and misperception of randomness.
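The Bayes formula for the bell-and-terrorist example can be evaluated numerically. The figures below (100 terrorists in a population of 1,000,000; a bell that rings for 99% of terrorists and 1% of everyone else) are illustrative assumptions only; the source's own numbers are truncated:

```python
# Evaluating P(terrorist | bell) with assumed illustrative numbers.
population = 1_000_000
terrorists = 100                    # assumed base rate: 1 in 10,000
p_bell_given_terrorist = 0.99
p_bell_given_innocent = 0.01        # assumed false-positive rate

p_terrorist = terrorists / population
p_bell = (p_bell_given_terrorist * p_terrorist
          + p_bell_given_innocent * (1 - p_terrorist))
posterior = p_bell_given_terrorist * p_terrorist / p_bell
print(round(posterior, 4))  # about 0.0098: under 1%, despite 99% "accuracy"
```

Under these assumptions, roughly 99 true alarms are swamped by about 10,000 false alarms, which is the whole point of the fallacy: the tiny base rate dominates the high test accuracy.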
As expected, the participants' judgments turned out to be determined by the degree of similarity between the description and the stereotype of the profession. If a coin is flipped 10 times and lands on "heads" every time, a person employing the gambler's fallacy would believe the probability of the coin landing on "heads" the 11th time would be very low. Statistical probabilities are based on empirical evidence concerning relative frequencies, such as base rates. Imagine that I show you a bag of 250 M&Ms with equal numbers of 5 different colors. The base rate fallacy describes how people do not take the base rate of an event into account when solving probability problems. The fallacy is explained by the use of the representativeness heuristic, which is insensitive to sample size. Using the "availability" rule, people judge the probability of an event by the ease with which they can imagine relevant instances of similar events or the number of such events that they can easily remember. If presented with related base rate information and specific information, people tend to ignore the base rate in favor of the individuating information, rather than correctly integrating the two. The base rate fallacy is a tendency to focus on specific information over general probabilities. Subjective probability judgments are based on a profiler's personal belief, e.g., that the offender will commit the crime again, that a particular suspect appears to be the prime suspect, or that the offender lives in a specific area. As such, the factor of base rate is not given enough weight, and false conclusions may be drawn from information simply based on a particular trait and its rate of occurrence in a specific population.
The tendency to ignore or underuse base rate information, and instead to be influenced by the distinctive features of the case being judged, is known as the base rate fallacy: an error in prediction and decision-making which occurs when the base rate is ignored as a prior probability.
# PH-EP Articles

- 2017-04-15: Search for dark matter at $\sqrt{s}=13$ TeV in final states containing an energetic photon and large missing transverse momentum with the ATLAS detector / ATLAS Collaboration. Results of a search for physics beyond the Standard Model in events containing an energetic photon and large missing transverse momentum with the ATLAS detector at the Large Hadron Collider are reported. As the number of events observed in data, corresponding to an integrated luminosity of 36.1 fb$^{-1}$ of proton-proton collisions at a centre-of-mass energy of 13 TeV, is in agreement with the Standard Model expectations, exclusion limits in models where dark-matter candidates are pair-produced are determined. CERN-EP-2017-044; arXiv:1704.03848. Geneva: CERN, 2017-06-14, 45 p. Published in: Eur. Phys. J. C 77 (2017) 393.
- 2017-04-11: Search for $t\bar{t}$ resonances in highly boosted lepton+jets and fully hadronic final states in proton-proton collisions at $\sqrt{s}=13$ TeV / CMS Collaboration. A search for the production of heavy resonances decaying into top quark-antiquark pairs is presented. The analysis is performed in the lepton+jets and fully hadronic channels using data collected in proton-proton collisions at $\sqrt{s}=13$ TeV using the CMS detector at the LHC, corresponding to an integrated luminosity of 2.6 fb$^{-1}$. CMS-B2G-16-015; CERN-EP-2017-049; arXiv:1704.03366. Geneva: CERN, 2017-07-03, 47 p. Published in: JHEP 07 (2017) 001.
- 2017-04-03: Measurements of the pp $\to W\gamma\gamma$ and pp $\to Z\gamma\gamma$ cross sections and limits on anomalous quartic gauge couplings at $\sqrt{s}=8$ TeV / CMS Collaboration. Measurements are presented of $\mathrm{W\gamma\gamma}$ and $\mathrm{Z\gamma\gamma}$ production in proton-proton collisions. Fiducial cross sections are reported based on a data sample corresponding to an integrated luminosity of 19.4 fb$^{-1}$ collected with the CMS detector at a center-of-mass energy of 8 TeV. CMS-SMP-15-008; CERN-EP-2017-039; arXiv:1704.00366. Geneva: CERN, 2017-10-11, 34 p. Published in: JHEP 10 (2017) 072.
- 2017-04-01: First measurement of transverse-spin-dependent azimuthal asymmetries in the Drell-Yan process / COMPASS Collaboration. The first measurement of transverse-spin-dependent azimuthal asymmetries in the pion-induced Drell-Yan (DY) process is reported. We use the CERN SPS 190 GeV/$c$ $\pi^{-}$ beam and a transversely polarized ammonia target. CERN-EP-2017-059; arXiv:1704.00488. Geneva: CERN, 2017-09-12, 7 p. Published in: Phys. Rev. Lett. 119 (2017) 112002.
- 2017-03-30: Measurement of $B^0$, $B^0_s$, $B^+$ and $\Lambda^0_b$ production asymmetries in 7 and 8 TeV proton-proton collisions / LHCb Collaboration. The $B^0$, $B^0_s$, $B^+$ and $\Lambda^0_b$ hadron production asymmetries are measured using a data sample corresponding to an integrated luminosity of 3.0 fb$^{-1}$, collected by the LHCb experiment in proton-proton collisions at centre-of-mass energies of 7 and 8 TeV. The measurements are performed as a function of transverse momentum and rapidity of the $b$ hadrons within the LHCb detector acceptance. CERN-EP-2017-036; LHCB-PAPER-2016-062; arXiv:1703.08464. Geneva: CERN, 2017-11-10, 20 p. Published in: Phys. Lett. B 774 (2017) 139-158.
- 2017-03-29: Search for new physics with dijet angular distributions in proton-proton collisions at $\sqrt{s}=$ 13 TeV / CMS Collaboration. A search is presented for extra spatial dimensions, quantum black holes, and quark contact interactions in measurements of dijet angular distributions in proton-proton collisions at $\sqrt{s}=$ 13 TeV. The data were collected with the CMS detector at the LHC and correspond to an integrated luminosity of 2.6 fb$^{-1}$. CMS-EXO-15-009; CERN-EP-2017-047; arXiv:1703.09986. Geneva: CERN, 2017-07-05, 34 p. Published in: JHEP 07 (2017) 013.
- 2017-03-24: J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV / ALICE Collaboration. We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single diffractive events. CERN-EP-2017-056; arXiv:1704.00274. Geneva: CERN, 2018-01-10, 14 p. Published in: Phys. Lett. B 776 (2018) 91-104.
- 2017-03-20: Measurement of the $B^0_s\to\mu^+\mu^-$ branching fraction and effective lifetime and search for $B^0\to\mu^+\mu^-$ decays / LHCb Collaboration. A search for the rare decays $B^0_s\to\mu^+\mu^-$ and $B^0\to\mu^+\mu^-$ is performed at the LHCb experiment using data collected in $pp$ collisions corresponding to a total integrated luminosity of 4.4 fb$^{-1}$. An excess of $B^0_s\to\mu^+\mu^-$ decays is observed with a significance of 7.8 standard deviations, representing the first observation of this decay in a single experiment. CERN-EP-2017-041; LHCB-PAPER-2017-001; arXiv:1703.05747. Geneva: CERN, 2017-05-11, 11 p. Published in: Phys. Rev. Lett. 118 (2017) 191801.
- 2017-03-18: Search for a heavy resonance decaying to a top quark and a vector-like top quark at $\sqrt{s}=13$ TeV / CMS Collaboration. A search is presented for massive spin-1 Z' resonances decaying to a top quark and a heavy vector-like top quark partner T. The search is based on a 2.6 fb$^{-1}$ sample of proton-proton collisions at 13 TeV collected with the CMS detector at the LHC. CMS-B2G-16-013; CERN-EP-2017-035; arXiv:1703.06352. Geneva: CERN, 2017-09-13, 40 p. Published in: JHEP 09 (2017) 053.
- 2017-03-18: Search for anomalous couplings in boosted $\mathrm{WW/WZ}\to\ell\nu\mathrm{q\bar{q}}$ production in proton-proton collisions at $\sqrt{s}=$ 8 TeV / CMS Collaboration. This Letter presents a search for new physics manifested as anomalous triple gauge boson couplings in WW and WZ diboson production in proton-proton collisions. The search is performed using events containing a W boson that decays leptonically and a W or Z boson whose decay products are merged into a single reconstructed jet. CMS-SMP-13-008; CERN-EP-2017-029; arXiv:1703.06095. Geneva: CERN, 2017-09-10, 22 p. Published in: Phys. Lett. B 772 (2017) 21-42.
# Chapter 5. Fourier transform

In this chapter we consider the Fourier transform, which is the most useful of all integral transforms.

## 5.1. Fourier transform, Fourier integral

### Heuristics

In Section 4.5 we wrote the Fourier series in the complex form
$$f(x)= \sum_{n=-\infty}^\infty c_n e^{\frac{i\pi nx}{l}} \label{eq-5.1.1}$$
with
$$c_n= \frac{1}{2l}\int_{-l}^l f(x)e^{-\frac{i\pi n x}{l}}\,dx, \qquad n=\ldots,-2, -1, 0,1,2,\ldots \label{eq-5.1.2}$$
and
$$2l\sum_{n=-\infty}^\infty |c_n|^2=\int_{-l}^l|f(x)|^2\,dx. \label{eq-5.1.3}$$
From this form we deduce the Fourier integral, formally and without any justification. First we introduce
$$k_n := \frac{\pi n}{l}\qquad \text{and}\qquad \Delta k_n = k_{n}- k_{n-1}= \frac{\pi}{l} \label{eq-5.1.4}$$
and rewrite (\ref{eq-5.1.1}) as
$$f(x)= \sum_{n=-\infty}^\infty C(k_n) e^{i k_n x}\Delta k_n \label{eq-5.1.5}$$
with
$$C(k)= \frac{1}{2\pi}\int_{-l}^l f(x)e^{-i k x}\,dx, \label{eq-5.1.6}$$
where we used $C(k_n) := c_n /(\Delta k_n)$; (\ref{eq-5.1.3}) should be rewritten as
$$\int_{-l}^l|f(x)|^2\,dx= 2\pi\sum_{n=-\infty}^\infty |C(k_n)|^2\Delta k_n. \label{eq-5.1.7}$$

Now we formally let $l\to +\infty$; then the integrals from $-l$ to $l$ in the right-hand expression of (\ref{eq-5.1.6}) and the left-hand expression of (\ref{eq-5.1.7}) become integrals from $-\infty$ to $+\infty$. Meanwhile, $\Delta k_n \to +0$, and the Riemann sums in the right-hand expressions of (\ref{eq-5.1.5}) and (\ref{eq-5.1.7}) become integrals:
$$f(x)= \int_{-\infty}^\infty C(k) e^{i k x}\,dk \label{eq-5.1.8}$$
with
$$C(k)= \frac{1}{2\pi}\int_{-\infty}^\infty f(x)e^{-i k x}\,dx; \label{eq-5.1.9}$$
(\ref{eq-5.1.3}) becomes
$$\int_{-\infty}^\infty |f(x)|^2\,dx= 2\pi\int_{-\infty}^\infty |C(k)|^2\,dk. \label{eq-5.1.10}$$

### Definitions and Remarks

Definition 1.
Formula (\ref{eq-5.1.9}) gives us the Fourier transform of $f(x)$; it is usually denoted by a "hat":
$$\hat{f}(k)= \frac{1}{2\pi}\int_{-\infty}^\infty f(x)e^{-i k x}\,dx; \tag{FT}\label{FT}$$
sometimes it is denoted by a "tilde" ($\tilde{f}$), and seldom just by the corresponding capital letter $F(k)$.

Definition 2. Expression (\ref{eq-5.1.8}) is the Fourier integral, aka the inverse Fourier transform:
$$f(x)= \int_{-\infty}^\infty \hat{f}(k) e^{i k x}\,dk \tag{FI}\label{FI}$$
aka
$$\check{F}(x)= \int_{-\infty}^\infty F(k) e^{i k x}\,dk. \label{IFT}\tag{IFT}$$

Remark 1. Formula (\ref{eq-5.1.10}) is known as the Plancherel theorem:
$$\int_{-\infty}^\infty |f(x)|^2\,dx=2\pi\int_{-\infty}^\infty |\hat{f}(k)|^2\,dk. \tag{PT}\label{PT}$$

Remark 2.

1. Sometimes the exponent $\pm i k x$ is replaced by $\pm 2\pi i k x$ and the factor $1/(2\pi)$ is dropped.
2. Sometimes the factor $\frac{1}{\sqrt{2\pi}}$ is placed in both the Fourier transform and the Fourier integral:
\begin{align} &\hat{f}(k)= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)e^{-i k x}\,dx, \tag{FT*}\\ &f(x)= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \hat{f}(k) e^{i k x}\,dk, \tag{FI*} \end{align}
and the Plancherel theorem becomes
$$\int_{-\infty}^\infty |f(x)|^2\,dx= \int_{-\infty}^\infty |\hat{f}(k)|^2\,dk. \tag{PT*}$$
In this case the Fourier transform and the inverse Fourier transform differ only by $-i$ in place of $i$ (a very symmetric form), and both are unitary operators.

Remark 3. We can consider the corresponding operator $LX=-X''$ in the space $L^2(\mathbb{R})$ of square integrable functions on $\mathbb{R}$, but the $e^{i k x}$ are no longer eigenfunctions, since they do not belong to this space. In advanced real analysis such functions are often referred to as generalized eigenfunctions.

Remark 4.

1. For justification see Subsection 5.1.A;
2. pointwise convergence is discussed in Subsection 5.1.B;
3.
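The pair (FT)/(FI) and the Plancherel identity (PT) can be checked numerically for a Gaussian. The sketch below is plain NumPy; the grids, the test function and the tolerances are my own choices, and the integrals are approximated by Riemann sums:

```python
import numpy as np

# Grids for x and k; exp(-x^2/2) decays so fast that [-20, 20] acts like R.
x = np.linspace(-20, 20, 1001); dx = x[1] - x[0]
k = np.linspace(-20, 20, 1001); dk = k[1] - k[0]
f = np.exp(-x**2 / 2)

# (FT): hat f(k) = (1/2pi) * integral of f(x) e^{-ikx} dx
fhat = (np.exp(-1j * np.outer(k, x)) @ f) * dx / (2 * np.pi)

# (FI): reconstruct f(x) = integral of hat f(k) e^{ikx} dk
f_back = (np.exp(1j * np.outer(x, k)) @ fhat) * dk

# Round-trip error and both sides of Plancherel (PT)
err = np.max(np.abs(f_back.real - f))
lhs = np.sum(np.abs(f)**2) * dx              # ||f||^2 = sqrt(pi)
rhs = 2 * np.pi * np.sum(np.abs(fhat)**2) * dk
```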
Multidimensional Fourier transform and Fourier integral are discussed in Subsection 5.2.A.

### $\cos$- and $\sin$-Fourier transform and integral

Applying the same arguments as in Section 4.5, we can rewrite formulae (\ref{eq-5.1.8})--(\ref{eq-5.1.10}) as
$$f(x)= \int_0^\infty \bigl( A(k) \cos(k x) +B(k) \sin (k x)\bigr) \,dk \label{eq-5.1.11}$$
with
\begin{align} & A(k)= \frac{1}{\pi}\int_{-\infty}^\infty f(x)\cos (k x) \,dx, \label{eq-5.1.12}\\ & B(k)= \frac{1}{\pi}\int_{-\infty}^\infty f(x)\sin (k x) \,dx, \label{eq-5.1.13} \end{align}
and
$$\int_{-\infty}^\infty |f(x)|^2\,dx= \pi\int_0^\infty \bigl( |A(k)|^2+|B(k)|^2\bigr)\,dk. \label{eq-5.1.14}$$
Here $A(k)$ and $B(k)$ are the $\cos$- and $\sin$-Fourier transforms, and

1. $f(x)$ is an even function iff $B(k)=0$;
2. $f(x)$ is an odd function iff $A(k)=0$.

Therefore

1. Each function on $[0,\infty)$ can be decomposed into a $\cos$-Fourier integral
$$f(x)= \int_0^\infty A(k) \cos (k x) \,dk \label{eq-5.1.15}$$
with
$$A(k)=\frac{2}{\pi}\int_0^\infty f(x)\cos (k x) \,dx. \label{eq-5.1.16}$$
2. Each function on $[0,\infty)$ can be decomposed into a $\sin$-Fourier integral
$$f(x)= \int_0^\infty B(k) \sin (k x) \,dk \label{eq-5.1.17}$$
with
$$B(k)= \frac{2}{\pi}\int_0^\infty f(x)\sin (k x) \,dx. \label{eq-5.1.18}$$
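For the even function $f(x)=e^{-x^2/2}$ the $\sin$-transform vanishes and the $\cos$-pair (\ref{eq-5.1.15}), (\ref{eq-5.1.16}) reconstructs $f$. A numerical sanity check with trapezoid sums (grids and tolerances are my own choices):

```python
import numpy as np

x = np.linspace(0, 20, 2001); dx = x[1] - x[0]
k = np.linspace(0, 20, 2001); dk = k[1] - k[0]
f = np.exp(-x**2 / 2)                  # even function, so B(k) = 0

# trapezoid weights: half weight at the interval endpoints
w = np.full_like(x, dx); w[0] *= 0.5; w[-1] *= 0.5
wk = np.full_like(k, dk); wk[0] *= 0.5; wk[-1] *= 0.5

# A(k) = (2/pi) * integral_0^inf f(x) cos(kx) dx   (eq. 5.1.16)
A = (np.cos(np.outer(k, x)) @ (f * w)) * 2 / np.pi

# reconstruction: f(x) = integral_0^inf A(k) cos(kx) dk   (eq. 5.1.15)
f_back = np.cos(np.outer(x, k)) @ (A * wk)

err = np.max(np.abs(f_back - f))
# A(0) should equal (2/pi) * integral of exp(-x^2/2) = sqrt(2/pi)
```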
# M/M/c Queue Simulation in Python

The M/M/1 queuing system is made of a Poisson arrival process, one exponential server, a FIFO (or not otherwise specified) queue of unlimited capacity, and an unlimited customer population. For some systems (like λ = 1, μ = 2) simulated and computed results are very similar; the differences are caused by random fluctuations and by the limited length of the simulation experiment. Try λ = 1, μ = 1, however, and no steady state exists: at utilization 1 the queue is unstable. Python is well suited to building such simulations, and SimPy, an object-oriented, process-based discrete-event simulation package based on standard Python and released under the GNU GPL, removes much of the bookkeeping. The standard textbook variants are M/M/1, M/M/c, M/M/c/N (finite system capacity N, including customers in service) and M/M/c/K/K (finite calling population of size K); these are the simple queueing models described in Chapter 6 of Banks, Carson, Nelson and Nicol, Discrete-Event System Simulation. Harder multi-server systems with multiple preemptive-resume priority classes (such as the M/PH/k queue) have been analysed by Harchol-Balter, Osogami, Scheller-Wolf and Wierman.
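The steady-state formulas for the M/M/1 model are standard and can be coded directly. This minimal sketch is my own (the function name and return format are arbitrary choices, not part of any library):

```python
def mm1_metrics(lam, mu):
    """Steady-state performance measures of an M/M/1 queue.

    lam: Poisson arrival rate; mu: exponential service rate.
    Requires lam < mu, otherwise no steady state exists.
    """
    if lam >= mu:
        raise ValueError("unstable: need lam < mu")
    rho = lam / mu                   # server utilization
    L = rho / (1 - rho)              # mean number in system
    Lq = rho**2 / (1 - rho)          # mean number waiting in queue
    W = 1 / (mu - lam)               # mean time in system
    Wq = rho / (mu - lam)            # mean waiting time in queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

m = mm1_metrics(3.0, 5.0)
# Little's law ties the two views together: L = lam * W and Lq = lam * Wq.
```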
When a single queue is served by more than one parallel server, the system is called M/M/s (equivalently, M/M/c). Given a c-server queueing model, the random assignment model (RA) is the contrasting case in which each of the c servers forms its own FIFO single-server queue, and each arrival to the system, independent of the past, randomly chooses queue i to join with probability 1/c, i ∈ {1, 2, ..., c}. Hence an M/M/1 queue is one in which there is one server (and one channel) and both the inter-arrival time and the service time are exponentially distributed. FIFO is also known as first come, first served (FCFS); with this discipline the order of arrival and the order of departure are the same, an observation that can be used to simplify the simulation. For the finite-capacity M/M/1/K system the traffic intensity is still ρ = λ T_s = λ/μ, exactly as for M/M/1, because ρ measures the offered load rather than the carried load. A simulation also need not draw its inputs from distributions at all: the recorded inter-arrival times and service times of, say, 100 customers can be read from text files and replayed (trace-driven simulation).
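The pooled M/M/c system and the random-assignment split into c separate M/M/1 queues can be compared with the standard Erlang C formula. A sketch (function names are mine) for one sample case:

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C: probability an arrival must wait in an M/M/c queue.

    c: number of servers; a = lam/mu: offered load in Erlangs (needs a < c).
    """
    rho = a / c
    top = a**c / (factorial(c) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def wq_mmc(lam, mu, c):
    """Mean waiting time in queue for the pooled M/M/c system."""
    return erlang_c(c, lam / mu) / (c * mu - lam)

def wq_random_assignment(lam, mu, c):
    """Mean wait when arrivals split randomly over c separate M/M/1 queues."""
    lam_i = lam / c                      # each queue sees rate lam/c
    return lam_i / (mu * (mu - lam_i))   # M/M/1 formula per queue
```

With c = 1 the Erlang C probability reduces to ρ, recovering the M/M/1 result; for c > 1 at the same total load, the pooled queue waits less than the randomly split one.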
A standard first exercise is to simulate an M/M/1 queue with Poisson arrivals of rate λ = 3 and exponential service times of rate μ = 5, under the usual assumptions: arrivals and departures follow Poisson processes, a single server, infinite queue length, an infinite calling population, and FCFS queue discipline. In the notation, the M stands for Markovian; M/M/1 means that the system has a Poisson arrival process, an exponential service-time distribution, and one server. In the discrete-event view, customer C_n arrives, waits a delay D_n in the queue, and then requires a service time of length S_n, which is the length of time C_n spends in service with the server. The same event-driven design, with pending arrivals and departures kept in a priority queue and processed in time order, carries over unchanged to G/G/1 queues with arbitrary inter-arrival and service distributions.
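For a single FCFS server the delays obey the Lindley recursion D_{n+1} = max(0, D_n + S_n - A_{n+1}), where A_{n+1} is the gap until the next arrival. This sketch (my own code, standard library only) estimates the mean delay for λ = 3, μ = 5 and lets it be compared with the analytic Wq = λ/(μ(μ-λ)) = 0.3:

```python
import random

def simulate_mm1_delay(lam, mu, n, seed=1):
    """Estimate the mean queueing delay of an M/M/1 queue by
    applying the Lindley recursion to n simulated customers."""
    rng = random.Random(seed)
    d = 0.0                           # delay of the current customer
    total = 0.0
    for _ in range(n):
        total += d
        s = rng.expovariate(mu)       # service time S_n
        a = rng.expovariate(lam)      # inter-arrival gap A_{n+1}
        d = max(0.0, d + s - a)       # Lindley recursion
    return total / n

est = simulate_mm1_delay(3.0, 5.0, 200_000)
exact = 3.0 / (5.0 * (5.0 - 3.0))     # Wq = 0.3
```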
An M/M/1 queueing model can be used to represent many different real-life situations, such as customers checking out at a supermarket or customers at a bank. By the end of a simulation, we want to answer the following questions: How long does each customer spend in the system, on average? What is the probability that there are no customers in the queue? What is the average time each customer waits in line? The classic bank example distributed with SimPy is exactly such an M/M/1 queue, which can also rather easily be solved analytically, so the simulated steady-state means can be checked against theory. In the extended Kendall notation, K denotes the number of places in the system, so M/M/1/K is the finite-capacity variant: there can be at most K customers in the system at once. The same event-driven machinery also models systems that are not queues in the narrow sense, such as a street crossing with green and red lights alternately letting cars and pedestrians pass, or an elevator in a building, with call buttons on each floor, buttons inside to select destination floors, several floors selectable at once, and a queue of floors to visit. Not every queueing model is as tractable as M/M/1: in an M/G/1 system, with a general service-time distribution, the queue length N_t does not constitute a Markov process, so the analysis proceeds through an embedded chain, through intermediate models such as M/Ek/1 with Erlang service times, or through simulation; perfect-simulation algorithms have even been developed for the M/G/c queue.
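An event-driven M/M/1 simulation can estimate the time-average number in system and the fraction of time the system is empty, for comparison with the analytic L = ρ/(1-ρ) and P₀ = 1-ρ. This sketch is my own code (standard library only, not SimPy):

```python
import heapq
import random

def mm1_event_sim(lam, mu, horizon, seed=7):
    """Event-driven M/M/1 simulation up to time `horizon`.
    Returns (time-average number in system, fraction of time empty)."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), "arrival")]   # future-event list (heap)
    t, n = 0.0, 0                                  # clock, customers in system
    area, empty = 0.0, 0.0
    while events:
        when, kind = heapq.heappop(events)
        if when > horizon:
            break
        area += n * (when - t)                     # accumulate integral of n(t)
        if n == 0:
            empty += when - t
        t = when
        if kind == "arrival":
            n += 1
            heapq.heappush(events, (t + rng.expovariate(lam), "arrival"))
            if n == 1:                             # server was idle: start service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
        else:
            n -= 1
            if n > 0:                              # next customer enters service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
    area += n * (horizon - t)                      # tail segment up to horizon
    if n == 0:
        empty += horizon - t
    return area / horizon, empty / horizon

L_avg, p0 = mm1_event_sim(3.0, 5.0, 50_000)
# expected: L_avg near rho/(1-rho) = 1.5, p0 near 1-rho = 0.4
```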
Message queues in software systems follow the same patterns. The AMQP protocol doesn't have a native delayed-queue feature, but with RabbitMQ's AMQP protocol extensions we can easily emulate one by combining the per-message TTL function with dead-lettering; distributed task queues such as Celery, an asynchronous task queue based on distributed message passing, build on the same ideas. Whatever the application, a general discrete-event simulation follows one algorithm: while the events queue is not empty and the simulation time is not over, the simulation selects the next event by choosing the one that occurs first according to its simulated time, advances the clock directly to that event, processes it (possibly scheduling new future events), and repeats. Because the clock jumps from one event to the next, no runtime is wasted on the idle intervals in between.
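The event-loop description above maps directly onto Python's heapq module, which keeps the future-event list ordered by time. A minimal, generic engine sketch (the class and method names are mine, not SimPy's):

```python
import heapq

class Simulator:
    """Minimal discrete-event engine: a clock plus a time-ordered event list."""
    def __init__(self):
        self.now = 0.0
        self._events = []     # heap of (time, seq, callback)
        self._seq = 0         # tie-breaker so callbacks are never compared

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        # process events in time order until the queue empties or time is up
        while self._events and self._events[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._events)
            callback(self)

log = []
sim = Simulator()
sim.schedule(2.0, lambda s: log.append(("b", s.now)))
sim.schedule(1.0, lambda s: log.append(("a", s.now)))
sim.run(until=10.0)
# events fire in time order: log == [("a", 1.0), ("b", 2.0)]
```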
Random-number generation underlies every stochastic simulation. A linear congruential generator (LCG) produces the sequence x_{n+1} = (a * x_n + c) mod m; choosing m to be a power of 2, most often m = 2^32 or m = 2^64, produces a particularly efficient LCG, because this allows the modulus operation to be computed by simply truncating the binary representation. On the modelling side, the Erlang delay model (also called M/M/s in queueing-theory parlance) is similar to the Erlang loss model, except that now it is assumed that blocked customers will wait in a queue as long as necessary for a server to become available; related exact results exist for the G/M/c queue.
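A power-of-two-modulus LCG is a few lines of Python. The sketch below uses Knuth's MMIX multiplier and increment (that particular choice of constants is mine; any full-period pair would do) and shows the modulus-by-truncation trick:

```python
MASK64 = (1 << 64) - 1     # truncating to 64 bits == reducing mod 2**64

def lcg64(seed, a=6364136223846793005, c=1442695040888963407):
    """64-bit linear congruential generator x -> (a*x + c) mod 2**64,
    yielding uniforms in [0, 1). Constants are Knuth's MMIX parameters."""
    x = seed & MASK64
    while True:
        x = (a * x + c) & MASK64   # same result as % 2**64, but cheaper
        yield x / 2.0**64

g = lcg64(42)
u = [next(g) for _ in range(3)]    # three reproducible uniforms
```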
In this section, I will show how to solve the multiple producer and consumer problem using the Python Queue class. First of all, let's look at what methods the Queue class provides for multi-threaded computing. A related starting point is a Python queuing-network simulation built with SimPy and NetworkX. A diagram below shows 4 parallel servers serving 1 queue. Simulation of a single-server queue: the M/M/1 queue is generally depicted by a Poisson process governing the arrival of packets into an infinite buffer. Let $N(t)$ be the number of customers in the system at time $t$. There is also an exact simulation algorithm for the stationary distribution of customer delay for FIFO M/G/c queues in which ρ = λ/μ ... A related stochastic-simulation setup (in MATLAB, for the cyclic reaction network A → B → C → A) reads:

    % Reaction network: A --> B (k_A), B --> C (k_B), C --> A (k_C)
    rates(1) = 1;   % k_A
    rates(2) = 1;   % k_B
    rates(3) = 1;   % k_C
    % Set simulation parameters
    nSteps = 10000;
    sampleFreq = 10;
    % Set initial conditions - defining how many of each species to start with
    NSpecies(1) = 1000;  % 1000 A
    NSpecies(2) = 0;     % No B
    NSpecies(3) = 0;     % No C

A single-server queuing system can tell us the following: how many times a user needs to wait and the total waiting time, and how long each service takes and the total service time. SIMULATION METHODS FOR QUEUES: AN OVERVIEW, Peter W. A binary heap allows fast insertion and removal, whereas an array is fast at one or the other but not both.
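The producer/consumer pattern described above can be sketched with the standard library's queue.Queue and threading. This is a minimal sketch: the item values, thread counts, and the None-sentinel shutdown protocol are illustrative choices.

```python
import queue
import threading

# Two producers put items on a shared queue.Queue; two consumers take them
# off. queue.Queue is thread-safe, so no explicit locking is needed around
# put()/get(). A None sentinel per consumer signals shutdown.

def producer(q, items):
    for item in items:
        q.put(item)

def consumer(q, results):
    while True:
        item = q.get()
        if item is None:        # shutdown sentinel
            q.task_done()
            break
        results.append(item)
        q.task_done()

q = queue.Queue(maxsize=10)     # maxsize bounds the buffer
results = []
producers = [threading.Thread(target=producer, args=(q, range(i * 5, i * 5 + 5)))
             for i in range(2)]
consumers = [threading.Thread(target=consumer, args=(q, results))
             for _ in range(2)]
for t in producers + consumers:
    t.start()
for t in producers:
    t.join()
for _ in consumers:
    q.put(None)                 # one sentinel per consumer
for t in consumers:
    t.join()
```

Setting maxsize makes put() block when the buffer is full, which is exactly the back-pressure a bounded queue is meant to provide.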
Related publication: "A Python extension for the massively parallel multiphysics simulation framework waLBerla." In a finite-capacity system there can be at most K customers in the system; if the server is fast and the queue empties, the server has to wait again for customers to arrive, although once service starts the arrivals remain purely random in nature. Queuing theory definitions: λ = arrival rate; μ = service rate; ρ = λ/μ; C = number of service channels; M = random arrival/service rate (Poisson); D = deterministic service rate (constant rate). The Erlang delay model (also called M/M/s in queueing theory parlance) is similar to the Erlang loss model, except that now it is assumed that blocked customers will wait in a queue as long as necessary for a server to become available. In Python's queue module, maxsize is an integer that sets the upper-bound limit on the number of items that can be placed in the queue. Understanding queues is also important in practice when you work with message brokers such as Apache ActiveMQ or RabbitMQ, for example when debugging a queue issue. For traffic applications, such algorithms rely on simulating what happens once the graph of street connections is known and each arc of the graph is labeled.
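The Erlang delay model just described has closed-form performance measures. A hedged sketch in Python follows, using the notation from the text (lam = arrival rate, mu = per-server service rate, c = number of servers); the function names and the returned dictionary are my own.

```python
from math import factorial

def erlang_c(lam, mu, c):
    """Probability an arriving customer must wait (M/M/c, requires lam < c*mu)."""
    a = lam / mu                      # offered load
    rho = a / c                       # per-server utilization
    assert rho < 1, "queue is unstable unless lam < c*mu"
    top = a**c / factorial(c) / (1 - rho)
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mmc_measures(lam, mu, c):
    pw = erlang_c(lam, mu, c)
    wq = pw / (c * mu - lam)          # mean wait in queue
    lq = lam * wq                     # mean number waiting (Little's law)
    return {"P(wait)": pw, "Wq": wq, "Lq": lq}
```

As a sanity check, with c = 1 the Erlang C formula reduces to P(wait) = ρ, the familiar M/M/1 result.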
When a packet reaches the head of the buffer, it is processed by a server and sent to its destination. Steady-state distribution and performance measures: in this notation, M stands for Markovian, so M/M/1 means that the system has a Poisson arrival process, an exponential service-time distribution, and one server; the buffer is a first-in-first-out (FIFO) data structure, and in the simulation code the line is a queue object. This is a simple but typical queueing model, and queueing models provide the analyst with a powerful tool for designing and evaluating the performance of queueing systems. One example implements the simulation of a single-window, unlimited-capacity queuing system as a discrete-event simulation using the event-scheduling method, calculates the average queue length and average waiting time, and compares them with the theoretically analyzed results. The M/M/m/m queue (an m-server loss system with no waiting) is a simple model for a telephone exchange where a line is given only if one is available; otherwise the call is lost.
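For the M/M/m/m loss system above, the blocking probability is given by the classical Erlang B formula. A sketch using the standard numerically stable recurrence (rather than factorials, which overflow for large m):

```python
def erlang_b(a, m):
    """Blocking probability for an M/M/m/m loss system.

    a = offered load lam/mu (in Erlangs), m = number of lines.
    Uses the recurrence B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1.
    """
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b
```

With one line (m = 1) the recurrence collapses to a/(1 + a), which matches the direct calculation for a single-channel loss system.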
With this spreadsheet, run 5 simulations for each of the 10 scenarios, using the arrival and departure information listed in the table below. We assume that the server processes service times at rate 1, meaning that, for example, if C_n enters service now with S_n = 6, then 4 units of time later there are 2 units of service time remaining to process. A common exercise is to simulate an M/D/1 (or, more generally, M/D/c) queue and then to calculate the mean waiting time of a job or customer; a related SimPy question is how to model parallel servers, since capacity=2 gives one resource with two servers rather than multiple queues. Picture an M/M/1/3 queue system: at most three customers can be present at once. In Python, queue.Queue(maxsize=0) is the constructor for a FIFO queue; multiprocessing can likewise be used for load balancing between producers and consumers. Priority queues appear in shortest-path algorithms too: every edge is relaxed one time, so there are m relaxation steps and hence at most m times that we need to update the min-priority queue because a key has changed.
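A minimal way to get the mean waiting time asked about above is the Lindley recurrence W_{n+1} = max(0, W_n + S_n - A_{n+1}), shown here for M/D/1. This is a sketch; the rates are illustrative, and the result is checked against the Pollaczek-Khinchine value rho*S / (2*(1 - rho)).

```python
import random

def md1_mean_wait(lam, service, n=200_000, seed=1):
    """Estimate the mean wait in queue for M/D/1 via the Lindley recurrence."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        a = rng.expovariate(lam)          # exponential interarrival time
        w = max(0.0, w + service - a)     # Lindley recurrence
    return total / n

lam, service = 0.5, 1.0                   # rho = lam * service = 0.5
est = md1_mean_wait(lam, service)
theory = lam * service**2 / (2 * (1 - lam * service))   # Pollaczek-Khinchine
```

With a fixed seed the run is reproducible, which makes it easy to compare the estimate against the closed-form value.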
Computational efficiency is important because if we can simulate from queues quickly, then we can embed a queue simulation within an approximate Bayesian computation (ABC) algorithm (Sunnåker, Busetto, Numminen, Corander, Foll, and Dessimoz 2013) and estimate queue parameters for very complicated queueing models. The M/M/C queue system is a classical example in queueing theory and traffic theory. Apache ActiveMQ is a message broker written in Java with JMS, REST and WebSocket interfaces; it also supports protocols like AMQP, MQTT, OpenWire and STOMP that can be used by applications in different languages. The M/M/1 queue: in this chapter we will analyze the model with exponential interarrival times with mean 1/λ, exponential service times with mean 1/μ, and a single server. M stands for Markov and is commonly used for the exponential distribution. In your Probability in MATLAB notes, you will find an M-file that simulates a single-server waiting line for which arrival times are exponentially distributed, while service times are constant. All the blocks used in this example can be found in the basic template of blocks provided by Simulation Studio, where NumberOfServers is the number of servers. The solution to the queue with multiple servers is fast, based on a simple recurrence, and numerically stable. Computer Networks Fall 2017, Project 2, Part 1: Simulation of a Single-Server Finite-Buffer Queue. Project overview: this is the first part of a 2-part project.
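The M/M/1 model with exponential interarrivals (mean 1/lam) and exponential service (mean 1/mu) can be simulated with a minimal next-event loop. This is a sketch with illustrative parameters; it tracks the time-average number in system, which should approach rho/(1 - rho) with rho = lam/mu.

```python
import random

def mm1_time_average_L(lam, mu, horizon=100_000.0, seed=2):
    """Next-event simulation of M/M/1; returns the time-average of n(t)."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")         # server starts idle
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        area += n * (t_next - t)          # accumulate integral of n(t) dt
        t = t_next
        if next_arrival <= next_departure:
            n += 1
            next_arrival = t + rng.expovariate(lam)
            if n == 1:                    # server was idle: start service
                next_departure = t + rng.expovariate(mu)
        else:
            n -= 1
            next_departure = (t + rng.expovariate(mu)) if n > 0 else float("inf")
    return area / t
```

Note how the clock jumps straight from one event to the next: between events nothing changes, so nothing is computed.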
In an interactive M/M/1 demo you can adjust the initial number of customers, the mean time between arrivals, and the mean service time; the model does not return a value, but the Monitors of the counter Resource still exist when the simulation has terminated and can be inspected. A common follow-up difficulty is changing such a program to simulate an M/M/C queue with, say, 3 servers. Discrete-event simulation with variable intervals keeps pending events in a priority queue, which determines which event fires next. - Part II: The Palm/Erlang-A Queue, Reviewing Abandonment and (Im)Patience.
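The event-scheduling idea above, a priority queue of pending events with the clock jumping from event to event, can be sketched with the standard library's heapq. The class and its method names are my own; the (time, sequence, callback) tuple layout is a common convention for breaking ties between events at the same time.

```python
import heapq

class EventLoop:
    """Minimal discrete-event scheduler backed by a binary heap."""

    def __init__(self):
        self.now = 0.0
        self._events = []
        self._seq = 0           # tie-breaker so equal times pop FIFO

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        # Pop events in time order; the clock jumps, it never ticks.
        while self._events and self._events[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._events)
            callback()

log = []
loop = EventLoop()
loop.schedule(5.0, lambda: log.append(("B", loop.now)))
loop.schedule(1.0, lambda: log.append(("A", loop.now)))
loop.run(until=10.0)
```

Events fire in time order regardless of the order in which they were scheduled, which is the whole point of using a heap here.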
# Bayesian Inference of Stochastic Volatility Model by Hybrid Monte Carlo

## Author Info

• Tetsuya Takaishi

## Abstract

The hybrid Monte Carlo (HMC) algorithm is applied to the Bayesian inference of the stochastic volatility (SV) model. We use the HMC algorithm for the Markov chain Monte Carlo updates of the volatility variables of the SV model. First we compute parameters of the SV model using artificial financial data and compare the results from the HMC algorithm with those from the Metropolis algorithm. We find that the HMC algorithm decorrelates the volatility variables faster than the Metropolis algorithm. Second we make an empirical study of the time series of the Nikkei 225 stock index by the HMC algorithm. We find correlation behavior for the sampled data similar to the results from the artificial financial data and obtain a $\phi$ value close to one ($\phi \approx 0.977$), which means that the time series has strong persistency of the volatility shock.

File URL: http://arxiv.org/pdf/1001.0024

## Bibliographic Info

Paper provided by arXiv.org in its series Papers with number 1001.0024. Date of creation: Dec 2009. Published in Journal of Circuits, Systems, and Computers 18 (2009) 1381-1396. Handle: RePEc:arx:papers:1001.0024.

## References

1. Sangjoon Kim & Neil Shephard & Siddhartha Chib, 1998. "Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models," Review of Economic Studies, Oxford University Press, vol. 65(3), pages 361-393.

This information is provided by IDEAS at the Research Division of the Federal Reserve Bank of St. Louis using RePEc data.
# Revision history [back]

### Why is basic arithmetic disallowed on symbolic functions?

The documentation at http://www.sagemath.org/doc/reference/calculus/sage/symbolic/function_factory.html states: "In Sage 4.0, basic arithmetic with unevaluated functions is no longer supported"

Why? What is the intended way of manipulating equations and then, at some point, taking derivatives with respect to some dependent variable? The following approach has an undesired side-effect:

    sage: var('B E H T_s')
    (B, E, H, T_s)
    sage: eq_B_TS = B == H/E
    sage: diff(eq_B_TS.subs(B = function('B')(T_s), E = function('E')(T_s), H = function('H')(T_s)), T_s)
    D[0](B)(T_s) == -H(T_s)*D[0](E)(T_s)/E(T_s)^2 + D[0](H)(T_s)/E(T_s)

Fine, but now my B, E and H are symbolic functions and I cannot do any basic arithmetic with them any more:

    sage: B*E
    Traceback (click to the left of this block for traceback)
    ...
    TypeError: unsupported operand type(s) for *: 'NewSymbolicFunction' and 'NewSymbolicFunction'
# All Questions

1k views

### Let $\sum_{n=0}^\infty c_n z^n$ be a representation for the function $\frac{1}{1-z-z^2}$. Find the coefficient $c_n$

Let $\sum_{n=0}^\infty c_n z^n$ be a power series representation for the function $\frac{1}{1-z-z^2}$. Find the coefficient $c_n$ and the radius of convergence of the series. Clearly this is a power series with ...

85 views

### The infinite sum of symmetric random variables is also symmetric

Definition. Let $(\Omega, {\mathcal F}, \mathbb{P})$ be a probability space and $X$ a random variable on $\Omega$. $X$ is said to be symmetric (about $0$) if $X$ and $-X$ are equal in law....

27 views

### There exist positive integers $x,y$ such that $p\mid(x^2+y^2+n)$ [duplicate]

For any given positive integer $n$ and any given prime number $p$, show that there exist positive integers $x,y$ such that $$p\mid(x^2+y^2+n)$$ My approach is the following: Assume that $n=1, p=2$,...

### Is there a $k$ such that $2^n$ has $6$ as one of its digits for all $n\ge k$?

It is true that every power of $2$ of the form $2^{6+10x}$, $x\in\mathbb{N}$, has $6$ as one of its digits. Something more is true: the last two digits are either $64$ or $36$. The OP suggests that "....
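For the first question above, a standard sketch (not part of the original listing): factoring the denominator by partial fractions gives Fibonacci coefficients and a golden-ratio radius of convergence.

```latex
\frac{1}{1-z-z^{2}}=\sum_{n=0}^{\infty}F_{n+1}\,z^{n},
\qquad
c_n = F_{n+1} = \frac{\varphi^{\,n+1}-\psi^{\,n+1}}{\sqrt{5}},
\quad
\varphi=\frac{1+\sqrt{5}}{2},\ \psi=\frac{1-\sqrt{5}}{2},
```

and the radius of convergence is the modulus of the nearest root of $1-z-z^2$, namely $R=\frac{\sqrt{5}-1}{2}=1/\varphi$.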
# Tag Info

Accepted

### Relation between welding bead and outer diameter of a pipe?

Looks like ERW pipe with the ID trim die not working. The diameter is very small for DSAW (double submerged arc welded) pipe and the ID weld bead looks too high. Or I am wrong and it was welded from ...

1 vote

### SOLIDWORKS: bolting a COTS part from a STEP file onto an assembly

The only other way to do this that I know of is to draw circles and extrude-cut as usual to make the holes, then edit the circles in the assembly and create an external reference between your drawing ...

1 vote, Accepted

### How to solve the transient heat conduction equation at a point in space with a volumetric heat source?

Picture a container with an area $A$ (m$^2$) and depth $d$ (m) holding a liquid. The container is perfectly insulated and the fluid is static. The fluid has a density $\rho$ (kg/m$^3$) and specific ...

1 vote

### Relation between welding bead and outer diameter of a pipe?

There is no relation between welding bead and OD of pipe. There is no relation because pipe is not supposed to have a bead inside it. Either you bought defective pipe, or you're in a locale where ...

1 vote

### Which material index should be maximized for a component in a car's manual-type HVAC?

Cost: you are forgetting about processing costs. Yes, engineering polymers might give you least cost/weight. But they can be injection molded, which means cheap, reliable components of nearly any ...
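The truncated heat-conduction answer above sets up a lumped energy balance. As a hedged sketch (the numbers and the linear-rise solution are illustrative assumptions, since the original answer is cut off): for a perfectly insulated, uniformly heated static fluid, the balance rho*cp*dT/dt = q gives a linearly rising temperature.

```python
def temperature(t, T0=20.0, q=5e4, rho=1000.0, cp=4186.0):
    """Fluid temperature (deg C) after t seconds of uniform volumetric
    heating q (W/m^3) in a perfectly insulated container; rho in kg/m^3,
    cp in J/(kg*K). All default values are illustrative, not from the post."""
    return T0 + q * t / (rho * cp)
```

With these illustrative water-like properties the fluid warms at q/(rho*cp), about 0.012 K per second.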
# Metric topology induced by the sum of two metrics

I have to show the following: Let $X$ be a set with metrics $d_1$ and $d_2$ inducing metric topologies $\tau_1$ and $\tau_2$. Define a new metric on $X$ where $d(x,y) = d_1(x,y) + d_2(x,y)$ for all $x,y$ in $X$.

a) Show that the topology $\tau_d$ induced by $d$ is finer than $\tau_1$ and finer than $\tau_2$.

b) Show that if $\tau_1 = \tau_2$, then $\tau_d = \tau_1$.

Part a: Since the set of all metric open balls is a basis for the metric topology, I'll show that $\tau_d$ is finer than $\tau_1$ by showing that any $d_1$ metric open ball contains a $d$ metric open ball. Let $B_x^{d_1}(\delta)$ be a metric open ball of radius $\delta$ centered at an arbitrary point $x$ in $X$ for metric $d_1$. Let $\epsilon = \delta + 0 = \delta$. Thus, the metric open ball of radius $\epsilon$ centered at $x$ for metric $d = d_1 + d_2$ is equal to the $d_1$ metric open ball, i.e. $B_x^{d_1+d_2}(\epsilon) = B_x^{d_1}(\delta)$. Thus, $B_x^{d_1+d_2}(\epsilon) \subseteq B_x^{d_1}(\delta)$. Thus, the topology induced by metric $d$ is finer than the topology induced by metric $d_1$. A similar argument shows that the topology induced by metric $d$ is finer than the topology induced by metric $d_2$.

I'm stuck on part b. I know that to show $\tau_1 = \tau_d$ we just need to show that $\tau_1$ is finer than $\tau_d$, since in part a we showed that $\tau_d$ is finer than $\tau_1$, but I'm not sure how to do that. At first I wanted to say that $\tau_1 = \tau_2$ means that the metric $d = d_1 + d_2 = 2d_1$ has multiples $k\,d(x,y)$ bounded above by $d_1(x,y)$ for all $x,y$ in $X$ when $k \le \frac 12$, but then I realized that $\tau_1 = \tau_2$ only means that they have the same open sets and does not mean that $d_1$ and $d_2$ are the same metric. So, I'm not sure that there's an argument by means of bounding a constant multiple of $d(x,y)$ by $d_1(x,y)$.
I'm not sure how to approach an argument that every $d$ metric open ball must contain a $d_1$ metric open ball. It seems kind of obvious that if you extend a $d_1$ metric open ball of radius $\delta_1$ by a non-negative distance $\delta_2$ given by metric $d_2$, a $d_1$ metric open ball of radius no more than $\delta_1$ must be contained in it. But it doesn't seem that we need $\tau_1 = \tau_2$ to argue that, so I think I'm missing something here. To show that $\tau_d\supseteq\tau_1$ it's not enough to show that each $d_1$-open ball contains a $d$-open ball: you must show that it contains a $d$-open ball with the same centre. You seem to have tried to do this, but it needs to be said as well, and your argument isn't quite right, though you have found a $d$-open ball that works. All you have to show is that $B_x^d(\delta)\subseteq B_x^{d_1}(\delta)$. To do this, suppose that $y\in B_x^d(\delta)$; then $d_1(x,y)=d(x,y)-d_2(x,y)\le d(x,y)<\delta$, since $d_2(x,y)\ge 0$, so $y\in B_x^{d_1}(\delta)$. As you say, a similar argument yields the conclusion that $\tau_d\supseteq\tau_2$. For (b) suppose that $U\in\tau_d$; we need to show that $U\in\tau_1$. Let $x\in U$; by hypothesis there is an $\epsilon>0$ such that $B_x^d(\epsilon)\subseteq U$, and we want a $\delta>0$ such that $B_x^{d_1}(\delta)\subseteq U$. We know that $B_x^{d_2}(\epsilon/2)\in\tau_1$, so there is a $\delta_1>0$ such that $B_x^{d_1}(\delta_1)\subseteq B_x^{d_2}(\epsilon/2)$. Let $\delta=\min\{\delta_1,\epsilon/2\}$; if $y\in B_x^{d_1}(\delta)$, what can you say about $d(x,y)$? • @user92638: Okay. I'd do it a little differently, but that works. I'd simply say that if $y\in B_x^{d_1}(\delta)$, then $d_1(x,y)<\delta_1$, so $d_2(x,y)<\epsilon/2$. Moreover, $d_1(x,y)<\epsilon/2$, so $d(x,y)=d_1(x,y)+d_2(x,y)<\epsilon$. (I think that it's a little easier to read this way.) Oct 11, 2014 at 8:27 • How does this sound?
If $y \in B_x^{d_1}(\delta)$, then $$d(x,y) = d_1(x,y) + d_2(x,y) = \min\{\delta_1, \epsilon/2\} + d_2(x,y) < \min\{\delta_1, \epsilon/2\} + \epsilon/2 \quad (\text{since } B_x^{d_1}(\delta) \subseteq B_x^{d_2}(\epsilon/2)) \le \epsilon,$$ so $y \in B_x^{d}(\epsilon)$. Oct 11, 2014 at 8:46 • @user92638: No, you can't say that $d_1(x,y)=\min\{\delta_1,\epsilon/2\}$: what you know is that $d_1(x,y)$ is actually less than that minimum. Otherwise it's okay. Oct 11, 2014 at 8:51

It suffices to show that if $\{x_i\}_{i\in I}$ is a net converging to $x$ in $\tau_1$ then $\{x_i\}_{i\in I}$ converges to $x$ in $\tau_d$ (because then any set closed in $\tau_1$ will be closed in $\tau_d$). But if $x_i\rightarrow x$ in $\tau_1$ then by assumption $x_i\rightarrow x$ in $\tau_2$ also, i.e. both $d_1(x_i,x)$ and $d_2(x_i,x)$ tend to $0$. Hence their sum does, so $x_i\rightarrow x$ in $d$.
# Revision history [back] ### Parallel Interface to the Sage interpreter 2.0 The parallel Sage interface PSage() http://doc.sagemath.org/html/en/reference/interfaces/sage/interfaces/psage.html#sage-interfaces-psage works fine with the given example, but I have trouble with a more complex case, which would be a typical application of this very useful feature. The following code works exactly as advertised: >>> v = [ PSage() for _ in range(5)] >>> w = [x('factor(2**%s-1)'% randint(250,310)) for x in v] >>> print w [127 * 13367 * 164511353 * 17137716527 * 51954390877748655744256192963206220919272895548843817842228913, , <<currently executing code>>, 3 * 5^2 * 11 * 31 * 41 * 53 * 131 * 157 * 521 * 1613 * 2731 * 8191 * 51481 * 409891 * 7623851 * 34110701 * 108140989558681 * 145295143558111, ] [127 * 13367 * 164511353 * 17137716527 * 51954390877748655744256192963206220919272895548843817842228913, 7 * 73 * 16183 * 34039 * 1437967 * 2147483647 * 833732508401263 * 658812288653553079 * 2034439836951867299888617, <<currently executing code>>, 3 * 5^2 * 11 * 31 * 41 * 53 * 131 * 157 * 521 * 1613 * 2731 * 8191 * 51481 * 409891 * 7623851 * 34110701 * 108140989558681 * 145295143558111, ] [127 * 13367 * 164511353 * 17137716527 * 51954390877748655744256192963206220919272895548843817842228913, 7 * 73 * 16183 * 34039 * 1437967 * 2147483647 * 833732508401263 * 658812288653553079 * 2034439836951867299888617, <<currently executing code>>, 3 * 5^2 * 11 * 31 * 41 * 53 * 131 * 157 * 521 * 1613 * 2731 * 8191 * 51481 * 409891 * 7623851 * 34110701 * 108140989558681 * 145295143558111, 7 * 78903841 * 28753302853087 * 618970019642690137449562111 * 24124332437713924084267316537353] [127 * 13367 * 164511353 * 17137716527 * 51954390877748655744256192963206220919272895548843817842228913, 7 * 73 * 16183 * 34039 * 1437967 * 2147483647 * 833732508401263 * 658812288653553079 * 2034439836951867299888617, 131071 * 12761663 * 179058312604392742511009 * 3320934994356628805321733520790947608989420068445023, 
3 * 5^2 * 11 * 31 * 41 * 53 * 131 * 157 * 521 * 1613 * 2731 * 8191 * 51481 * 409891 * 7623851 * 34110701 * 108140989558681 * 145295143558111, 7 * 78903841 * 28753302853087 * 618970019642690137449562111 * 24124332437713924084267316537353] Printing w repeatedly shows the progress of the five factorizations running in parallel (monitoring it looking at top is showing 5 sage/python jobs running simultaneously). The following example is global optimization, starting from 5 different starting points using the differential evolution algorithm available in SciPy. The setup is more complex but still, only a single command string is passed to PSage(). >>> v = [ PSage() for _ in range(5)] >>> w = [x('from scipy.optimize import rosen, differential_evolution; differential_evolution(rosen, [(0,2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2)])') for x in v] >>> print w [Sage, Sage, Sage, Sage, Sage] Apparently, something is wrong here, it doesn't work. But why? Let's see what happens with the serial Sage interpreter Sage(). >>> s = Sage() >>> t = s('from scipy.optimize import rosen, differential_evolution;differential_evolution(rosen,[(0,2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2)])') >>> print t Sage Looks like the same problem. However, Sage() can be made to work using the eval() method. >>> s = Sage() >>> t = s.eval('from scipy.optimize import rosen, differential_evolution;differential_evolution(rosen,[(0,2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2)])') >>> print t fun: 1.2785524204717224e-18 PSage() also has an eval() method, which AFAIK uses Sage().eval() internally, but unfortunately, it doesn't work. 
>>> v = [ PSage() for _ in range(5)]
>>> w = [x.eval('from scipy.optimize import rosen, differential_evolution; differential_evolution(rosen, [(0,2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2)])') for x in v]
>>> print w
['<<currently executing code>>', '<<currently executing code>>', '<<currently executing code>>', '<<currently executing code>>', '<<currently executing code>>']

Based on what I see looking at top the five optimization jobs do run in parallel, but even after they finish, no matter how many times I print w all I get is <<currently executing code>>.

The bottom line is that PSage() is either not working at all without using the eval() method, or, it seems to be working with the eval() method, but somehow is stuck in a bad internal state and never producing the output. Any comment is highly appreciated, it would be great to get this to work consistently.
>>> s = Sage() >>> t = s.eval('from scipy.optimize import rosen, differential_evolution;differential_evolution(rosen,[(0,2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2)])') >>> print t fun: 1.2785524204717224e-18 PSage() also has an eval() method, which AFAIK uses Sage().eval() internally, but unfortunately, it doesn't work. >>> v = [ PSage() for _ in range(5)] >>> w = [x.eval('from scipy.optimize import rosen, differential_evolution; differential_evolution(rosen, [(0,2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2), (0, 2)])') for x in v] >>> print w ['<<currently executing code>>', '<<currently executing code>>', '<<currently executing code>>', '<<currently executing code>>', '<<currently executing code>>'] Based on what I see looking at top the five optimization jobs do run in parallel, but even after they finish, no matter how many times I print w all I get is <<currently executing code>>. The bottom line is that PSage() is either not working at all without using the eval() method, or, it seems to be working with the eval() method, but somehow is stuck in a bad internal state and never producing the output. Any comment is highly appreciated, it would be great to get this to work consistently.
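For comparison, the same five optimizations can be run in parallel without Sage interfaces at all, using Python's multiprocessing directly. This is only a workaround sketch, assuming plain SciPy is sufficient for this job (i.e. nothing Sage-specific is needed); the helper names are mine.

```python
# Workaround sketch: five independent differential-evolution runs in
# parallel OS processes, bypassing PSage() entirely (assumes plain SciPy).
from multiprocessing import Pool

from scipy.optimize import differential_evolution, rosen


def run_one(seed):
    # Each worker minimizes the 10-D Rosenbrock function on [0, 2]^10.
    return differential_evolution(rosen, [(0, 2)] * 10, seed=seed, maxiter=50)


def run_all(n_jobs=5):
    # pool.map blocks until every job has finished and returns real results.
    with Pool(n_jobs) as pool:
        return pool.map(run_one, range(n_jobs))


if __name__ == '__main__':
    for r in run_all():
        print(r.fun)  # one minimized function value per job
```

Unlike the PSage() calls above, `pool.map` blocks until all five jobs finish, so there is no `<<currently executing code>>` placeholder state to poll.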
COVID-19

COVID-19 is officially a pandemic. It is up to all of us to do what we can to limit the spread of this disease.

An earlier post shows how herd immunity can stop an outbreak of disease by bringing the effective reproductive rate, $$R$$, below 1. In that post, we looked at measles. For measles, the basic reproductive rate $$R_0$$ is around 15, but when over about 93% of the population is immune (either by vaccination or by having survived the disease) we get an $$R$$ below 1, meaning the likelihood of an outbreak is low. Please read that post to see how all that works.

Most estimates I've seen put COVID-19's $$R_0$$ at about 2 or 3 (I have been cautioned that there are wildly divergent estimates of this number), but since there is as yet no vaccine and few people have had the disease, without other interventions $$R = R_0$$, and the disease continues to spread. Until there's a vaccine, is there some other way to reduce $$R$$ below 1?

Recall from that earlier post that $$R_0$$ represents the average number of people that one infected person will infect. This number depends on many factors, including biological ones (the probability that an infected person infects someone in close contact, which I'll call $$p_0$$) and sociological ones (how many people the average person comes into close contact with, which I'll call $$n_0$$). Assuming there are no other factors, $$R_0 = p_0 n_0$$. Until there's a vaccine or until a significant fraction of the population is immune, there's not much we can do about $$p_0$$. That means $$R = p_0 n$$, where $$n$$ is the average number of people we come into close contact with after we change our behavior. How much do we have to change our behavior?
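To make the relation $$R = p_0 n$$ concrete, here's a quick numerical sketch; the values of $$p_0$$ and $$n_0$$ below are made up for illustration, not estimates.

```python
# Illustrative values only (chosen so R0 = 3, roughly the COVID-19 estimate).
p0 = 0.25      # assumed probability of infecting a given close contact
n0 = 12        # assumed average number of close contacts while infectious
R0 = p0 * n0   # basic reproductive rate

# To push R = p0 * n below 1, contacts must fall below n0 / R0.
n_threshold = n0 / R0
print(R0, n_threshold)   # 3.0 4.0
print(p0 * n_threshold)  # 1.0 -- exactly the R = 1 boundary
```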
We want $$R < 1$$: $\begin{eqnarray*} R & < & 1 \\ R & < & \frac{R_0}{R_0} \\ p_0 n & < & \frac{p_0 n_0}{R_0} \\ n & < & \frac{n_0}{R_0} \end{eqnarray*}$

So here's the good news: in theory, if each of us reduces our close contacts to less than $$\frac{1}{R_0}$$ of their usual level, we can stop this epidemic. If we estimate $$R_0 = 3$$, that means we just need to come into close contact $$\frac{1}{3}$$ as much as we usually do. We don't have to go into total isolation or practice perfect hygiene. We just need to limit our interactions with other people and to use better hygiene.

In practice, of course, it's much more complicated. A friend pointed me to this excellent and extremely detailed article, which recommends a "hammer" phase, in which $$R$$ must go much lower than 1 so that the exponential decay happens much more quickly, and a later "dance" phase in which we can bring $$R$$ back closer to, but still below, 1. The "hammer" phase, according to the author, will require much more stringent restrictions.

That new quadratic equation solving technique

A New York Times article describes a new technique for solving quadratic equations which emerged from an article by Po-Shen Loh. At first I was skeptical, but I decided to work through it enough to get a real feel for it. Though this technique does appear to be old information in a new guise, it does have some practical advantages over some of the traditional techniques (e.g. factoring, completing the square, the Quadratic Formula). I plan to use some of my students as guinea pigs… By the way, kudos to reporters Kenneth Chang and Jonathan Corum for getting real math into the New York Times.

Background

Loh's technique emerges from a few well-known facts. First, any quadratic equation can be written as $a x^2 + bx + c = 0$ In this article, we will assume that the equation is monic (i.e.
$a=1$) so the equation simplifies to $x^2 + bx + c = 0$ (If the equation is not monic, just divide both sides by $a$ and adjust the coefficients.)

Second, in that equation, $-b$ is the sum of the roots and $c$ is the product of the roots, since if the roots are $r$ and $s$, the factored form will be $(x-r)(x-s) = 0$ Multiply the left side out and you'll see that $x^2 - (r+s)x + r s = 0$ so $-b=r+s$ and $c=r s$. Notice also that $-\frac{b}{2}$ is the average of the roots, which is also the $x$-coordinate of the line of symmetry.

Loh's method

The new method focuses on $u$, which is the distance between each root and the average of the roots. Thus the roots are $-\frac{b}{2} \pm u$. It's easy to find $u$ by realizing that $c$ is the product of the roots. What's cool here is that the product of the roots is a factored form of a difference of squares, so it multiplies cleanly, leaving no middle term: $$\begin{eqnarray} c & = & \left( -\frac{b}{2} - u \right) \left( -\frac{b}{2} + u \right)\newline c & = & \frac{b^2}{4} - u^2\newline u^2 & = & \frac{b^2}{4} - c\newline u & = & \sqrt{\frac{b^2}{4} - c} \end{eqnarray}$$ Thus the roots are $$\begin{eqnarray} x & = & -\frac{b}{2} \pm u\newline & = & -\frac{b}{2} \pm \sqrt{\frac{b^2}{4} - c} \end{eqnarray}$$ This looks suspiciously like the Quadratic Formula…

Example

Let's solve $x^2-4x-12=0$ using this method. First, $-\frac{b}{2} = -\frac{-4}{2} = 2$ is the average of the roots. Now $c=-12$ is the product of the roots, so we solve for $u$: $$\begin{eqnarray} -12 & = & ( 2 - u ) ( 2 + u)\newline -12 & = & 2^2 - u^2\newline u^2 & = & 2^2 + 12\newline u & = & \sqrt{2^2 + 12} = 4 \end{eqnarray}$$ So the roots are $2-4 = -2$ and $2+4=6$.

The Quadratic Formula

$x = \frac{-b \pm \sqrt{b^2 - 4 ac}}{2 a}$ gives the roots of a standard form quadratic equation of the form $a x^2+b x+c = 0$.
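The method above is short enough to script directly. Here is a sketch (the function name is mine, and `cmath` is used so the complex case also works):

```python
import cmath


def solve_quadratic_loh(b, c):
    """Roots of x^2 + b*x + c = 0 via Loh's method:
    the roots are -b/2 +/- u, where u^2 = b^2/4 - c."""
    mid = -b / 2                    # average of the roots
    u = cmath.sqrt(mid * mid - c)   # distance from the average to each root
    return (mid - u, mid + u)


# The worked example: x^2 - 4x - 12 = 0 has roots -2 and 6.
print(solve_quadratic_loh(-4, -12))  # ((-2+0j), (6+0j))
```

Because of `cmath`, the same function handles equations with no real solutions: `solve_quadratic_loh(0, 1)` returns the pair ±i for $x^2+1=0$.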
We are working with a monic quadratic (in which $a=1$) so that simplifies to $$\begin{eqnarray} x & = & \frac{-b \pm \sqrt{b^2 - 4 c}}{2}\newline & = & \frac{-b}{2} \pm \frac{\sqrt{b^2 - 4 c}}{2}\newline & = & -\frac{b}{2} \pm \sqrt{\frac{b^2-4c}{4}}\newline & = & -\frac{b}{2} \pm \sqrt{\frac{b^2}{4} - c}\newline & = & -\frac{b}{2} \pm u \end{eqnarray}$$ You'll notice that this is exactly what we got at the end of the "Loh's method" section.

Is this new or useful?

It's clear that Loh's method is equivalent to the monic version of the Quadratic Formula. What I dislike about the Quadratic Formula, though, is that most students just memorize it without understanding, and even those of us who do understand its derivation still use it by rote. Loh's method is simple enough to derive each time, summarized below, and the calculations seem a bit simpler:

1. If necessary, make the equation monic by dividing by $a$
2. Realize that $-\frac{b}{2}$ is the average of the roots
3. Let $u$ represent the distance between $-\frac{b}{2}$ and each root, so the roots are $-\frac{b}{2} \pm u$
4. Realize that $c$ is the product of the roots, i.e. $c = ( -\frac{b}{2} - u ) ( -\frac{b}{2} + u )$
5. Solve that equation for $u$, giving the roots as $-\frac{b}{2} \pm u$

Furthermore, like the Quadratic Formula and completing the square, Loh's method can solve any monic quadratic equation. Unlike traditional factoring, it requires no guess-and-check (what multiplies to $c$ and adds to $-b$?). It works over either the real or the complex numbers. In the real case, of course, some equations cannot be solved, e.g. $x^2+1=0$.

Vaccines and herd immunity

Many of us have heard of "herd immunity," but few of us understand it. I didn't either until I got curious, did a little reading, and figured some stuff out. The basics aren't hard to understand if you understand exponential functions.
With vaccination rates going down and disease rates going up, it's important that we understand this stuff so we can make rational choices. Remember asking, "When are we going to use this stuff?" Now's the time. Let's go. For our example, let's look at measles.

1. A typical person with measles will infect about 15 other people while infected, assuming none of those people are immune. That number, about 15 for measles, is called the basic reproductive rate, or $$R_0$$, of the disease. ($$R_0$$ depends on how easily the disease is transmitted and how many people a typical infected person comes into contact with.) Most of those infected will be infected within about two weeks. To keep things simple, let's assume that each infected person infects exactly 15 people in exactly two weeks. Let's also assume that after infecting 15 new people, an infected person ceases to be infected. Of course, each of those 15 newly infected people will infect 15 other people in the next two weeks, and so on. Assuming nobody is immune, after one person is infected, how many people will be infected after 4 weeks (2 two-week periods)? 8 weeks (four two-week periods)?
2. Define a function $$f(t)$$ which gives the number of people infected after $$t$$ weeks, assuming nobody is immune.
3. Still assuming nobody is immune, how many weeks would it take to infect the entire population of the United States, around 350,000,000? (Hint: if you know about logarithms, use 'em! Otherwise, try graphing software or a graphing calculator. Or use the brute force approach: keep multiplying by 15, representing two more weeks, until you top 350,000,000.) (Of course, the more people who are infected, the more unrealistic our assumption that nobody is immune. Still, those are sobering numbers!)
4. The average number of people actually infected by each person with the disease, called the effective reproductive rate, or $$R$$, is less than $$R_0$$ if a fraction of the population is immune (either vaccinated or already had the disease).
Let $$H$$ be the fraction of the population that is immune (so if, say, 2/3 of the population were immune, $$H$$ would be 2/3). Based on $$R_0$$ and $$H$$, what is a formula for $$R$$? (Hint: what fraction is not immune?)
5. No longer assuming nobody is immune, now based on $$R$$, define a function $$g(t)$$ which gives the number of people infected after $$t$$ weeks.
6. What values of $$R$$ make $$g(t)$$ an increasing function? A decreasing function? What does that question have to do with the spread of the disease?
7. At least what fraction of the population, which we'll call $$H$$, must be immune for $$g(t)$$ to be a decreasing function?
8. That fraction is called the herd immunity threshold for the disease, or HIT. Do you see how the herd immunity threshold is important for public health?

Answers:

1. Four weeks: $$R_0^{4/2} = 15^2 = 225$$. Eight weeks: $$R_0^{8/2} = 15^4 = 50625$$.
2. $$f(t) = R_0^{t/2} = 15^{t/2}$$
3. Solve for $$t$$: $\begin{eqnarray*} f(t) & = & 350,000,000\\ R_0^{\frac{t}{2}} & = & 350,000,000\\ \frac{t}{2} & = & \log_{R_0} (350,000,000)\\ t & = & 2 \log_{R_0} (350,000,000)\\ & \approx & 14.5 \end{eqnarray*}$ That's 14.5 weeks to infect all of the US! Sobering.
4. $$R = R_0(1-H)$$ (Note that $$1-H$$ is the fraction of the population that is not immune.)
5. $$g(t) = R^{t/2} = (R_0(1-H))^{t/2}$$. Notice that this is the same function as $$f$$, but using the effective reproductive rate $$R$$ instead of the basic reproductive rate $$R_0$$.
6. Increasing: $$R > 1$$. Decreasing: $$R < 1$$.
7. We need to solve $$R_0(1-H) < 1$$. So $\begin{eqnarray*} R_0 (1 - H) & < & 1\\ 1 - H & < & \frac{1}{R_0}\\ - H & < & \frac{1}{R_0} - 1\\ H & > & 1 - \frac{1}{R_0}\\ H & > & \frac{14}{15} \\ H & > & 93 \% \end{eqnarray*}$ The lower bound for the value of $$H$$, in this case 14/15 or about 93%, is the herd immunity threshold (HIT) for measles.
8. When more than the HIT, about 93%, are immune, $$R < 1$$, so each infected person infects, on average, fewer than one person, so disease outbreaks are unlikely.
When fewer than 93% are immune, $$R > 1$$, so each infected person infects, on average, more than one person, and the number of infected people grows over time. And increasing exponential functions are relentless.

Building a stone wall video: How does it work?

Here's an amazing video in which a long row of stone blocks is arranged on-edge on top of a stone wall. Spoiler alert! I recommend watching the video before reading further!

Here's what happens: Somebody pushes the leftmost block over, which knocks down the one to its right, which knocks down the one to its right, and so on in a chain reaction. This is more or less what you'd expect, just like a row of dominoes. We notice that once the blocks fall, they don't fall all the way down; the top of a block ends up resting on the bottom of the next block, overlapping a bit. That's also not too surprising.

What is surprising is what happens once all the blocks fall. Almost instantly comes the amazing second chain reaction. Starting from the right, all the blocks that were resting on each other fall down flat, with virtually no space between them, making a perfect row of blocks laid end-to-end.

When students are exposed to pseudo-real-world problems, often called "word problems," they are usually given exactly the information needed to solve them. But in the actual real world, we encounter actual problems and are given no hints about which facts are relevant. When I first saw this video I just wondered what was going on. I was flummoxed at first but gradually realized what's happening. This is much closer to what really happens when we try to use mathematics to understand the world.

But I'm not going to tell, at least not yet. You could do as I did: draw some pictures, make some assumptions, figure out what's relevant, and work it out. But to make it a little more, er, concrete, here are some starting points.
Imagine the blocks are 12″ tall and 3″ wide when standing on edge (or 3″ high and 12″ long when lying flat). Here are some questions to think about:

1. How far apart are the blocks initially? (Say, measured from the left side of one to the left side of the next.)
2. Why don't they fall to horizontal during the first chain reaction?
3. Why do they all fall to horizontal after the rightmost one falls and starts the second chain reaction?

And, for extra credit, a few more that require trigonometry:

1. What is the angle of a falling block as it hits the one to its right?
2. What is the eventual resting angle of the blocks before the final chain reaction flattens them all?

That ancient Babylonian "trigonometry" tablet isn't really trigonometry

The new math craze on the Internet seems to be the ancient "Plimpton 322" (P322) Babylonian tablet, in which trigonometry was supposedly invented millennia before it was previously thought to have been. The authors of the paper themselves, University of New South Wales mathematicians Daniel Mansfield and Norman Wildberger, are responsible for much of the hyperbole, saying things like, "It's a trigonometric table that's so unfamiliar and advanced that in some respects it's superior even to modern trigonometry." Or this gem: "It's actually trigonometry, but a different kind of trigonometry…this is a ratio-based trigonometry." They even call it "Babylonian Exact Sexagesimal Trigonometry." None of this is really true.

Trigonometry at its most basic level is about the relationships between the angles in a triangle and the ratios of the side lengths. They tout the newness of "ratio-based trigonometry," though trig has always been ratio-based. For example, in a 30-60-90 right triangle, the ratio of the leg opposite the 30° angle to the hypotenuse is 1/2. (To see why, start with an equilateral triangle and cut it in half. Notice that each half is a 30-60-90 triangle. Now think some more!)
In modern trigonometry, that 1/2 ratio is called the sine of the 30° angle or the cosine of the 60° angle (the complement of the 30° angle). How do we determine this using P322 "trigonometry"? We don't, because P322 contains no idea of angle. Triangle ratios is trigonometry like Dick Smothers is the Smothers Brothers.

What P322 actually contains is a series of "Pythagorean triples," along with a squared ratio of either the short leg to the long leg or of the hypotenuse to the long leg (we're not sure which, since the left side has broken off and the only difference would be whether the entries start with a one or a zero). These fractions are written as sexagesimals, which are like decimals but in base 60 rather than base 10.

A Pythagorean triple is a sequence of integers like 3, 4, 5, in which the squares of the first two add up to the square of the third, making them the legs and hypotenuse of a right triangle. In this case, 3²+4²=5². Assuming the left column contains squared ratios of the short to long legs, we would get (3/4)², or 9/16. We could leave it at that, but let's convert it to decimal: .5625. That's the exact squared ratio as a decimal. So far so good.

The next Pythagorean triple is 5, 12, 13. If we compute the same squared ratio, we get (5/12)², or 25/144. That's still exact, but if we try to convert it to decimal we get a repeating decimal, .1736111…, which we can't write exactly with a finite number of digits. That's where the sexagesimal (base 60) numeral system of the Babylonians wins out, because many more fractions are terminating sexagesimals than are terminating decimals. (I call it a numeral system rather than the more natural number system to emphasize that these aren't new numbers; they are just new, to most of us, ways of writing numbers, i.e. numerals.) That's cool, but far from Earth-shattering.
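That base-60 claim is easy to check mechanically: a fraction terminates in a given base exactly when, after reducing, every prime factor of the denominator divides the base. A small sketch (the function name is mine):

```python
from math import gcd


def terminates(numer, denom, base):
    """True iff numer/denom has a terminating expansion in the given base,
    i.e. after reducing, every prime factor of denom also divides base."""
    denom //= gcd(numer, denom)   # reduce the fraction
    g = gcd(denom, base)
    while g > 1:                  # strip prime factors shared with the base
        denom //= g
        g = gcd(denom, base)
    return denom == 1


print(terminates(9, 16, 10))    # True:  (3/4)^2 = 9/16 = .5625
print(terminates(25, 144, 10))  # False: (5/12)^2 repeats in decimal
print(terminates(25, 144, 60))  # True:  but terminates in sexagesimal
```

The last two lines reproduce the 5, 12, 13 example: 25/144 repeats forever in base 10 but terminates after two sexagesimal places, since 144 divides 60².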
Maybe that was very useful to the Babylonians, but it's not trigonometry. And it certainly won't help us at all today. Depending on which ratios were in the original table, these would be either the squared tangents or the squared cosecants of the angle opposite the short side, which sounds a lot like trigonometry except that P322 is completely silent about angles! So all we really have is a table containing Pythagorean triples and one of their (squared) ratios.

What of the claim that the Babylonians knew the Pythagorean Theorem a millennium before Pythagoras? That's not news either. For example, the Encyclopedia Britannica already noted this. The origin (or origins) of the theorem and its proof are lost to history, and it may have been the Babylonians, but being aware of it, as the authors of P322 surely were, is not the same as having proved it.

What was P322 good for? If you know two sides of a right triangle, including the long leg, you can find the squared ratio of the side you know to the long leg, and find the closest match in the table. I'm not even sure what that's good for (since you already know the other ratio: the squared ratio of the hypotenuse to the long leg is just the squared ratio of the short leg to the long leg plus one). There are some theories, but it's not trigonometry.

Is the "exact" nature of it new? No: just look up any list of Pythagorean triples and compute any ratios you want. Or you can just use the Pythagorean Theorem directly, e.g. if you know the two legs of a right triangle are 6 and 8, just compute the square root of 6²+8² and get 10 for the hypotenuse. Nothing to see here, move along!

As final "proof" that this is not trigonometry, I give you an example of a typical problem given to beginning trig students: If a 10′ ladder (teachers of trig are obsessed with ladders) is placed at a 70° angle to the ground against a wall, how high is the top of the ladder from the ground?
P322 is completely useless here, but using modern trigonometry, the sine of 70°, about .94, is the ratio of the side opposite that angle to the hypotenuse. Multiply that by the hypotenuse of 10′ and you get the length of the opposite side, which in this case is the height, about 9.4′. Of course, if you also knew the distance from the bottom of the ladder to the wall along the ground, you could also figure this out with P322 or the Pythagorean Theorem, but calling it trigonometry is still a big stretch. Incidentally, I’m not alone. Evelyn Lamb in Scientific American is also critical of the hype, with much more detail, though she doesn’t seem as concerned as I am about the complete absence of angles in this “trigonometry.”
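For completeness, here is the ladder answer above as a two-line numerical check:

```python
import math

# A 10-foot ladder at 70 degrees to the ground: height = hypotenuse * sin(70 deg).
height = 10 * math.sin(math.radians(70))
print(round(height, 1))  # 9.4
```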
# Things You Don't Think About During Mixing (But Should)

Push up the faders, balance the mix, add some EQ, adjust the pans, add some effects and you're finished. Those are usually the only things that many of us think about when we're mixing, but there are a number of additional items to keep in mind during mixing besides the mixing process itself. Many less experienced engineers may not be aware of these issues in the first place, and some mixers that are aware sometimes forget in the heat of audio battle. What am I referring to? Let's take a look at the concerns beyond making your mix sound tall, deep and wide.

## How Long Should A Mix Take To Finish?

Many musicians and engineers that haven't worked on a label project have nothing to compare their experiences to, so they're not always sure when a mix is finished. How long should a mix take anyway? When you first start out, you fly through mixes and think you're finished after an hour or two, but then you begin to discover that there's a lot more to it than you ever conceived. As always, the only way that you can gauge what you're doing is by comparing yourself to a pro. Let's assume for a second that you decide that you're going to have someone else mix your songs, either because a record label is demanding it, or because you just think it's a good idea to employ someone with skills better than yours (you should be applauded if you think this way).

In the days of analog consoles, we used to figure that a mix would take anywhere from a day to a day and a half per song, especially if you used an A-list mixing engineer. The first day was used to get the mix pretty much 95% of the way there, and the second half-day was to eke out as much of that last five percent as you could with a fresh set of ears.
While you might get lucky on the first mix that took a day and a half, it was not uncommon to continue remixing from there until everyone was happy, which for a big budget legacy act could take six or eight weeks on the same song. For example, legendary engineer Bruce Swedien states that there were 91 mixes of Michael Jackson's Billie Jean, and they wound up choosing number 2. And it took U2 six full weeks to find the perfect mix for I Still Haven't Found What I'm Looking For. Don't let that amount of time alarm you; there were more songs mixed in the day-and-a-half time frame than there ever were in six weeks.

Of course, the time it takes for a mix depends on the song, the type of material, how it was recorded, the number of tracks and elements, and the mixer. If the recording was a live concert with a power trio and vocalist, for instance, and all the songs sounded pretty much the same, an entire album might only take a day to finish. On the other hand, an R&B song with 100 tracks could take a few days just to get a handle on. And a song with poorly recorded tracks that needed a lot of editing and fixing to bring it up to snuff might take even longer than that. On the other hand, producer/engineer Kevin Shirley has been known to mix entire albums in a single day, such as the best-selling Journey records he worked on in the '80s.

Regardless of how long the initial mix took in the analog days, tweaks or changes after the fact were dreaded by all involved, since resetting the console and all the outboard gear almost always resulted in a mix that sounded slightly different (not to mention the time of setting things up again; see figure 1).
As a result, producers and mixers did everything they could to avoid any redos, which mostly consisted of taking extra time on the mix to be sure that it sounded as perfect as possible, mixing multiple versions of the song (more on this in a bit), and doing just about anything to ensure that there was a final version in some way, shape or form when they walked out the studio door.

Now, with mixing "in the box" in a DAW, it's easy to bring back a mix exactly where you left it days, weeks, months, or even years before, making mix fixes fast and easy as long as you're not using any outboard gear. This has taken some of the pressure out of the mixing process. If you're still mixing in the analog world with a console and outboard gear, though, it hasn't really changed much at all.

So when is a mix considered finished? Here are some guidelines:

1. The groove of the song is solid. The groove usually comes from the rhythm section, but it might be from an element like a rhythm guitar (like on the Police's Every Breath You Take) or just the bass by itself, like anything from Detroit Motown that James Jamerson played on (Marvin Gaye's What's Goin' On or The Four Tops' Reach Out, I'll Be There and Bernadette, for instance). Whatever element supplies the groove, it has to be emphasized so that the listener can feel it.
2. You can distinctly hear every instrument. Every instrument must have its own frequency range to be heard. Depending upon the arrangement, this is what usually takes the most time during mixing.
3. Every lyric, and every note of every line or solo, can be heard. You don't want a single note buried. It all has to be crystal clear. Use your automation; that's what it was made for.
4. The mix has punch. The relationship between the bass and drums is in the right proportion, and they work together well to give the song a solid foundation.
5. The mix has a focal point. What's the most important element of the song? Make sure it's obvious to the listener.
6. The mix has contrast. If you have the same amount of the same effect on everything (a trait I hear from so many neophyte mixers), the mix will sound washed out. You have to have contrast between different elements, from dry to wet, to give the mix depth.
7. All noises and glitches are eliminated. This means any count-offs, singer's breaths that seem out of place or are overly prominent because of vocal compression, amp noise on guitar tracks before and after the guitar is playing, bad-sounding edits, and anything else that might take the listener's attention away from the track.
8. You can play your mix against songs that you love, and it holds up. Perhaps the ultimate test. If you can get your mix in the same ballpark as many of your favorites (either things you've mixed or from other artists) after you've passed the previous seven items, then you're probably home free.

In the end, it's best to figure at least a full day per song regardless of whether you're mixing in the box or on an analog console, although it's still best to figure a day and a half per mix if you're mixing in a studio with an analog-style console. Of course, if you're mixing every session as you go along recording, then you might be finished before you know it, as you'll just tweak your mix a little.

## Mixing In The Box

Where once upon a time it was assumed that any mix was centered around a mixing console, that's not entirely true any more. Since DAWs have become so central to everyday recording, a new way of mixing has arrived: mixing in the computer without the help of a console, or mixing "in the box" (ITB). Many old-school mixers who grew up using consoles have disliked mixing in the box because they find it hard to mix with a mouse and they didn't like the sound. While it's true that the very early workstations (meaning mostly their A/D/A converters) didn't sound very good, this is no longer the case today.
Indeed, even the least expensive converters have come a very long way, so that’s not the issue it once was. Another objection has been that the sound of the internal mix buss of a DAW degraded the signal, and once again that isn’t quite the case anymore. It’s true that each DAW application uses a different algorithm for summing, which makes the sound vary from a little to a lot, but a bigger issue is the same one that has faced mixers in the analog world almost from the beginning: it’s how you drive it that counts!

What has changed with ITB mixing is HOW you mix multiple songs. No longer are we restricted to working on a single song at a time like in the days of analog. Now it’s possible to work on several songs all at the same time (in fact, I know a lot of name mixers who work this way). It’s great if you suffer from ADD, but I think it actually helps to work on several songs at once because they take on a more cohesive sound. In the analog days, when a mix was built around a console, it wasn’t uncommon to have songs that sounded really different after they were mixed. The songs might have used the same players, been recorded in the same studio, and been mixed on the same console and outboard gear, but there were always a few songs that had a different sound. Now when you work on multiple mixes at once, the sound of all of them tends to get a bit closer since there’s an instant comparison of what sounds good and what sounds bad with every mix.

Suffice it to say that whether mixing in the box or with a traditional console, the principles are the same. Although you or your mixing engineer may have a preference for one or the other, you can expect similar quality from either mixing method.

## Alternative Mixes

It’s now standard operating procedure to do multiple mixes in order to avoid having to redo the mix at a later time because an element was mixed too loudly or softly.
Even with the ease of calling up a digital project in a DAW, a producer and/or mixer does not want to revisit a project once it’s complete if at all possible. In order to avoid a remix, the mixer will do alternate mixes that take any element that might later be questioned, such as the lead vocal, a solo instrument, the background vocals or any other major part, and provide a separate mix with that track recorded slightly louder and again slightly softer. These are referred to as the “Up Mix” and the “Down Mix”. Usually the increments are very small, ½ dB to 1 dB, but usually not much more (yes, that small a change does make a difference in a tight-sounding mix).

There are all sorts of ways that alternate mixes can be valuable. It’s easy to correct an otherwise perfect mix later by simply editing in a masked word from one of your alternate mixes, or to substitute a chorus with softer background vocals without going back to remix. An even more common occurrence is when an instrumental mix is used to splice out objectionable language during mastering.

Although many record companies ask for more or different versions, here’s a typical version list for a mix from a rock artist. Other types of music will have a similar version list that’s appropriate for the genre.

1. Album Version
2. Album Version with Vocals Up
3. Contemporary Hits Radio Mix – Softer Guitars
4. Album Oriented Radio Mix – More Guitars and More Drums
5. Adult Contemporary Mix – Minimum Guitars, Maximum Keyboards and Orchestration
6. TV Mix (the entire mix minus the lead vocal)

The artist, producer or A&R person may also ask for additional versions, such as a pass without delays on the vocals in the chorus, more guitars in the vamp, or a version with the bass up. There is also a good chance that any singles will need a shortened radio edit as well if you’re working with a hit artist.
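The ½ dB to 1 dB increments described above are smaller than they sound: in linear amplitude terms they are changes of only about 6% and 12%. A quick sketch using the standard dB-to-ratio formula (my own illustration, not from the original post):

```python
# Alternate "up"/"down" mixes differ by fractions of a dB; converting to a
# linear amplitude ratio shows how small those moves really are.
# Standard audio formula: ratio = 10 ** (dB / 20)

def db_to_ratio(db):
    """Convert a decibel offset to a linear amplitude ratio."""
    return 10 ** (db / 20)

for db in (0.5, 1.0):
    print(f"{db:+.1f} dB is a x{db_to_ratio(db):.4f} change in level")
# +0.5 dB -> x1.0593, +1.0 dB -> x1.1220
```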
Thanks to the virtues of the digital audio workstation and modern console automation, many engineers leave the up and down mixes to the assistants, since most of the hard work is already complete.

## Mixing With Mastering In Mind

Whether you master your final mixes yourself or take them to a mastering engineer, things will go a lot faster if you prepare for mastering ahead of time. Nothing is so exasperating to all involved as not knowing which mix is the correct one, or forgetting the file name. Here are some tips to get you “mastering ready”.

Don’t Over-EQ When Mixing — Better to be a bit dull and let your mastering engineer brighten things up. In general, mastering engineers can do a better job for you if your mix is on the dull side rather than too bright or too big.

Don’t Over-Compress When Mixing — You might as well not even master if you’ve squashed it too much already. Hypercompression (see part 1 of my Mastering posts) deprives the mastering engineer of one of his major abilities to help your project. Squash it for your friends. Squash it for your clients. But leave some dynamics for your mastering engineer. In general, it’s best to compress and control levels on an individual track basis and not on the stereo buss, except to prevent digital overs.

Getting Levels Between Mixes To Match Is Not Important — Just make your mixes sound great; matching levels between songs is one of the reasons you master your mixes.

Getting Hot Levels Is Not Important — You still have plenty of headroom even if you print your mix with peaks reaching −10 dB or so. Leave it to the mastering engineer to get the hot levels. It’s another reason why you go there.

Watch Your Fades — If you trim the heads and tails of your track too tightly, you might discover that you’ve trimmed a reverb trail or an essential attack or breath. Leave a little room and let the mastering engineer perfect it.
Document Everything — You’ll make it easier on yourself and your mastering person if everything is well documented, and you’ll save yourself some money too. The documentation expected includes any flaws, digital errors, distortion, bad edits, fades, shipping instructions, and record company identification numbers. If your songs reside on hard disk as files, make sure that each file is properly ID’d for easy identification (especially if you’re not going to be at the mastering session). Especially don’t be afraid to make a note of any glitches, channel imbalances, or distortion. The mastering engineer won’t think less of you if something got away (you wouldn’t believe the number of times it happens to everybody), and it’s a whole lot easier than wasting a billable hour trying to track down an equipment problem when the problem is actually on the mix master itself.

Alternate Mixes Can Be Your Friend — A vocal up, vocal down, or instrument-only mix can be a life-saver when mastering. Things that aren’t apparent while mixing sometimes jump right out during mastering, and having an alternative mix around can sometimes provide a quick fix and keep you from having to remix. Make sure you document them properly though.

Check Your Phase When Mixing — It can be a real shock when you get to the mastering studio and the engineer begins to check for mono compatibility and the lead singer or guitar disappears because something in the track is out of phase. Even though this was more of a problem in the days of vinyl and AM radio, it’s still an important point, since many so-called stereo sources (such as television) are either pseudo-stereo or only stereo some of the time. Check it and fix it before you get there.

## How Much Should A Mix Cost?

While we’re at it, let’s answer the question of what a mix should cost. If you’re mixing someone else’s project, you have to know what the going rate is so you know what you can charge.
Likewise, if you’re hiring someone to mix your project, you’ve got to know what it might cost so you know what to budget. Mixers are all over the board in price, especially in the current depressed music market. At one time there was a mixer (who shall remain nameless) who was charging as much as $10,000 USD plus a point to mix just one song. What was even more outrageous was that he’d do as many as three mixes a day, since his settings for each instrument never changed because it was “his sound” (in other words, the bass would always go into the bass channel of the console with the same EQ and compression, the guitar into the same channel, and so on). Very few budgets can support that kind of excess anymore, and mixers’ prices, although still at a premium, have come down in recent years.

While some mixers charge by the song, others charge a daily rate, so the price might escalate pretty fast if there are fixes or the mix goes longer than expected. The rates might run from as little as $500 USD a day to $2,500 or more (although most are somewhere in the middle these days). These rates do not include the studio costs, which are separate and beyond the mixer’s rate. That means that mixing could theoretically cost as much as $5,000 a day with the mixer included at top rates!

Because budgets are so small these days, mixing specialists are caught in a dilemma: the client can only afford the studio or the mixer, but not both. As a result, many mixers have resorted to creating their own mixing environment with an all-in price that makes it much more affordable for the producer. This is one of the advantages of the digital age and DAWs, as it was impossible to build and equip a suitable mixing room for less than a million dollars back in the analog days. Since the music business is weak at the moment and budgets are way down, there are a lot of great mixers available for a lot less money than ever before.
If you’re willing to wait as the mixer fits you in on downtime, or agree to let him mix alone without you or the artist attending, you might be surprised at the rate you’ll get. Even if what you have to offer is below his rate, chances are he can work something out that will get you a great mix for a price you can afford.

You probably want to mix your project yourself, and I don’t blame you. But the advantage of having someone who’s a really good mixer take a shot at it is that you’ll learn a lot in a very short space of time, get a fresh set of ears on the project, and hopefully end up with something great in a lot less time than you could do it yourself. Still, that’s usually not an option for most musicians and engineers on a small budget (or no budget at all). So if you’re jumping in with your own two ears, just observe some of the tips in this post and hopefully you’ll save yourself some time and effort, and have a better project in the end.
# What are the differences between \par\vspace{2cm}\noindent and \\[2cm]?

I am designing a title page for my report and I am a bit confused about whether to use \par\vspace{2cm}\noindent or \\[2cm] (for example). Shortly speaking, what are the differences between \par\vspace{2cm}\noindent and \\[2cm]? When do we have to use only one rather than the other?

If an MWE is really needed, see the following.

\documentclass[12pt]{book}
\usepackage[a4paper,margin=25mm,showframe=true]{geometry}
\usepackage{graphicx}
\usepackage{palatino}
\begin{document}
\begin{titlepage}
\bfseries
\begin{center}
{\LARGE \sffamily Dissertation} \\[15mm]
{\large ``The simplest proof of the last theorem of Fermat''}\\[5mm]
{\itshape A proof that elementary students can understand}

\vspace{2cm}
\includegraphics[scale=.5]{mit-logo}%http://mindenfele.nolblog.hu/files/2014/04/mit_crest_logo.jpg

\vspace{2cm}
{\LARGE \sffamily Donut E. Knot}

\vfill

\begingroup
\large \sc
Department of Mathematics \\[4mm]
Massachusetts Institute of Technology \\[4mm]
Boston, USA \\[4mm]
2014
\endgroup
\end{center}
\end{titlepage}
\end{document}

- This question might be too basic, but I have not understood it because I heavily focus on PSTricks when using LaTeX; as a result, my LaTeX skills have not made significant progress. I am sorry for this inconvenience and thank you for your cooperation. – kiss my armpit Aug 28 '14 at 6:30
- This basically boils down to When to use \par and when \\?. But I think on a titlepage, it doesn't matter what to use. But be careful to use real \pars in text. – Johannes_B Aug 28 '14 at 7:38
- Not directly relevant here, but \sc should be replaced by \scshape. – Joseph Wright Aug 28 '14 at 7:55
- Why the center environment, which has the consequence of not setting the final line at the bottom of the text block? Use \centering. – egreg Aug 28 '14 at 8:51

At the TeX level, using \\ doesn't start a new paragraph while using \par obviously does.
As noted in comments, When to use \par and when \\ covers the difference between the two in general. To see what is going on in the current case, where we are talking about 'design', a small demo such as

\documentclass{article}
\begin{document}
\tracingoutput=2
\showboxdepth=\maxdimen
\noindent Hello world\\[2cm]
More text
\newpage
\noindent Hello world\par\vspace{2cm}\noindent More text
\end{document}

is useful. Looking over the log, we see for the first case that between the two parts we have

....\penalty 10000
....\glue 0.0 plus 1.0fil
....\penalty -10000
....\glue(\rightskip) 0.0
...\glue 56.9055
...\penalty 300
...\glue(\baselineskip) 5.16669
...\hbox(6.83331+0.0)x345.0, glue set 301.63881fil

while in the second there is

....\penalty 10000
....\glue(\parfillskip) 0.0 plus 1.0fil
....\glue(\rightskip) 0.0
...\glue 56.9055
...\glue 0.0
...\glue(\parskip) 0.0 plus 1.0
...\glue(\baselineskip) 5.16669
...\hbox(6.83331+0.0)x345.0, glue set 301.63881fil

With the standard settings you are not going to see any difference, but if for example \parfillskip was set to something for 'special effects' the results could be different. (In the \\ case, \hfil is inserted so will always be \glue 0.0 plus 1.0fil.) Similarly, notice that using \par adds a \parskip glue element, which again here is zero length with a small amount of stretch but could be a fixed value: this applies in addition to any \vspace. (Try \setlength\parskip{2cm} to see this.)

As in the more general case of comparing \par and \\, I'd suggest thinking about meaning. In the example in the question, the lines manually spaced out are conceptually connected (all part of an address), so \\ seems more natural than \par. The latter is often used between different 'blocks' in a title page: the parts are logically separate and often have font differences too.

- The extras added by \par probably don't show up in most title pages, but you need to remember that they are there!
– Joseph Wright Aug 28 '14 at 7:54
- In the center environment (and under \centering as well), \\ does issue \par. – egreg Aug 28 '14 at 9:40
- @egreg Reasonable point. I was aiming at the general case, which is what the title has: should I edit or will you post an answer about the possibility of a change down to the environment? – Joseph Wright Aug 28 '14 at 9:54

Beware of

{\large ``The simplest proof of the last theorem of Fermat''}\\[5mm]

as if the title ended up being more than a line you would have large text set to a normal baseline and inconsistent spacing. Size changes should almost always include the end of the paragraph, so

{\large ``The simplest proof of the last theorem of Fermat''\par\vspace{5cm}}

is better in that case. Outside of a title page (or tables), any use of \\ or \noindent is usually a sign that something is wrong, so the differences in general don't matter.

- Is there a difference if I remove the \vspace{5cm} from the block in your 2nd example above and append it to the following line? – kiss my armpit Aug 28 '14 at 12:58
- @cyanide-basedfood no – David Carlisle Aug 28 '14 at 14:04
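The \setlength\parskip{2cm} experiment suggested above can be made concrete. A minimal sketch (exaggerated value, assumed purely for illustration) showing why \par picks up \parskip while \\ does not:

```latex
\documentclass{article}
\setlength\parskip{2cm}% exaggerated on purpose
\begin{document}
% \\[2cm] stays inside the paragraph: the gap is just the optional 2cm
% (plus the normal \baselineskip); \parskip never enters the picture.
\noindent Hello world\\[2cm]
More text

\newpage

% \par ends the paragraph, so \parskip is added on top of the \vspace:
% the gap is now roughly 2cm (\vspace) + 2cm (\parskip).
\noindent Hello world\par\vspace{2cm}\noindent More text
\end{document}
```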
# Windows – How to run a PuTTY ssh tunnel as a Windows service

Tags: putty, service, ssh, ssh-tunnel, windows

In Windows XP or above, I can run the shell command putty.exe -load ssh_tunnel to start an ssh session configured for ssh tunneling. I want the ssh tunnel session to become available when my computer starts, without logging on to any user session. Keeping the shell command in a Windows service seems to be the only solution.

I try to create a service:

c:\> sc create ssh_tunnel binpath= "c:\putty.exe -load ssh_tunnel"
[SC] CreateService SUCCESS
The service created successfully.

When I start the service:

C:\> sc start ssh_tunnel
[SC] StartService FAILED 1053:
The service did not respond to the start or control request in a timely fashion.

It doesn't start.

• Windows services must be specially written to respond to the Service Manager's control requests. You cannot use any random executable as binPath; you'll have to use srvany or similar tools. You should also use the command-line plink instead of putty, since the latter might not work properly as a service. Finally, note that PuTTY sessions are per-user, stored in your Windows profile. Services normally run under special accounts, using the SYSTEM profile. You'll have to change the service to run under your own account, or configure the session in the SYSTEM profile as well (psexec -dsi putty).
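Building on the srvany/plink suggestion in the answer above, here is a hedged sketch of what that setup might look like. The paths, service name and account are assumptions; srvany.exe comes from the Windows Server Resource Kit, and plink's -N and -batch flags keep the session non-interactive with no remote shell:

```bat
:: Sketch only -- paths, service name and account are assumptions.
:: srvany.exe is a wrapper that speaks the Service Control Manager
:: protocol and launches an ordinary program on the service's behalf.
sc create ssh_tunnel binPath= "C:\tools\srvany.exe" start= auto

:: Tell srvany what to run: plink rather than putty, with -N (no remote
:: shell, tunnel only) and -batch (never prompt interactively).
reg add HKLM\SYSTEM\CurrentControlSet\Services\ssh_tunnel\Parameters ^
    /v Application /t REG_SZ ^
    /d "C:\tools\plink.exe -N -batch -load ssh_tunnel"

:: PuTTY sessions are per-user, so run the service under the account
:: that owns the saved "ssh_tunnel" session.
sc config ssh_tunnel obj= ".\YourUser" password= YourPassword

sc start ssh_tunnel
```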
# Matrix Representations and Homogeneous Coordinates

To perform a sequence of transformations such as translation followed by rotation and scaling, we need to follow a sequential process −

• Translate the coordinates,
• Rotate the translated coordinates, and then
• Scale the rotated coordinates to complete the composite transformation.

To shorten this process, we have to use a 3×3 transformation matrix instead of a 2×2 transformation matrix. To convert a 2×2 matrix to a 3×3 matrix, we have to add an extra dummy coordinate W. In this way, we can represent a point by 3 numbers instead of 2 numbers, which is called the Homogeneous Coordinate system. In this system, we can represent all the transformation equations as matrix multiplications. Any Cartesian point P(X, Y) can be converted to homogeneous coordinates as P′(Xh, Yh, h).

## Translation

A translation moves an object to a different position on the screen. You can translate a point in 2D by adding the translation coordinates (tx, ty) to the original coordinates (X, Y) to get the new coordinates (X′, Y′). From the figure, you can write that −

X′ = X + tx
Y′ = Y + ty

The pair (tx, ty) is called the translation vector or shift vector. The above equations can also be written using column vectors:

P = [X, Y]^T,  P′ = [X′, Y′]^T,  T = [tx, ty]^T

We can write it as −

P′ = P + T

## Rotation

In rotation, we rotate the object at a particular angle θ (theta) from its origin. From the following figure, we can see that the point P(X, Y) is located at angle φ from the horizontal X axis, at distance r from the origin.

Let us suppose you want to rotate it by the angle θ. After rotating it to a new location, you will get a new point P′(X′, Y′).
Using standard trigonometry, the original coordinates of the point P(X, Y) can be represented as −

X = r cos φ ...... (1)
Y = r sin φ ...... (2)

In the same way, we can represent the point P′(X′, Y′) as −

X′ = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ ...... (3)
Y′ = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ ...... (4)

Substituting equations (1) and (2) into (3) and (4) respectively, we get

X′ = X cos θ − Y sin θ
Y′ = X sin θ + Y cos θ

Representing the above equations in matrix form,

[X′ Y′] = [X Y] | cos θ   sin θ |
                | −sin θ  cos θ |

OR

P′ = P · R

where R is the rotation matrix

R = | cos θ   sin θ |
    | −sin θ  cos θ |

The rotation angle can be positive or negative. For a positive rotation angle, we can use the above rotation matrix. However, for a negative rotation angle, the matrix changes as shown below −

R = | cos(−θ)   sin(−θ) |  =  | cos θ  −sin θ |
    | −sin(−θ)  cos(−θ) |     | sin θ   cos θ |

(since cos(−θ) = cos θ and sin(−θ) = −sin θ)

## Scaling

To change the size of an object, a scaling transformation is used. In the scaling process, you either expand or compress the dimensions of the object. Scaling can be achieved by multiplying the original coordinates of the object by the scaling factor to get the desired result. Let us assume that the original coordinates are (X, Y), the scaling factors are (SX, SY), and the produced coordinates are (X′, Y′). This can be mathematically represented as shown below −

X′ = X · SX and Y′ = Y · SY

The scaling factors SX, SY scale the object in the X and Y directions respectively. The above equations can also be represented in matrix form as below −

[X′ Y′] = [X Y] | Sx  0  |
                | 0   Sy |

OR

P′ = P · S

where S is the scaling matrix. The scaling process is shown in the following figure.

If we provide values less than 1 for the scaling factor S, then we reduce the size of the object. If we provide values greater than 1, then we increase the size of the object.
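The 3×3 homogeneous form described at the top of this section lets translation, rotation and scaling all be chained as matrix products. A short sketch (my own illustration) using the text's row-vector convention P′ = P · M:

```python
import numpy as np

# Homogeneous 3x3 versions of the 2D transforms above, written for the
# row-vector convention used in the text (P' = P . R, P' = P . S).
def translation(tx, ty):
    return np.array([[1, 0, 0],
                     [0, 1, 0],
                     [tx, ty, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  s, 0],
                     [-s, c, 0],
                     [0,  0, 1]], dtype=float)

def scaling(sx, sy):
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0,  0, 1]], dtype=float)

# Composite transform: translate, then rotate, then scale --
# one matrix product instead of three sequential steps.
M = translation(2, 3) @ rotation(np.pi / 2) @ scaling(2, 2)

P = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous coords
P_new = P @ M
print(P_new[:2])                 # approximately [-6.  6.]
```

Note that because a single 3×3 matrix carries the whole sequence, it can be applied to every vertex of an object at the same cost as one transform.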
# Synopsis: Blowing Bubbles on the Nanoscale

Scientists have now developed a new controlled method to superheat liquids and induce the formation of bubbles in a nanoscale container.

Whether they form in ice-cold champagne or hot molten iron, bubbles represent a nucleation phenomenon that (in the case of the hot iron) can lead to a phase transition from a liquid to a vapor. Understanding how bubble nucleation is affected by confinement could be useful for applications in chemistry, microfluidics, and electronics, as well as fundamental studies of phase transitions. Jene Golovchenko, at Harvard University, and collaborators now report a way to reproducibly create bubbles in liquid confined within a solid-state nanopore—the smallest container in which bubble formation has been observed.

Solid-state nanopores are tiny holes punctured into an insulating membrane. Golovchenko and his colleagues immersed a silicon-nitride membrane containing a nanopore in a sodium-chloride solution and applied a modest voltage across the membrane to drive an ionic current through the pore. The current rapidly heated the liquid in the nanopore to temperatures 200 °C above its normal boiling point, causing single bubbles of vapor to homogeneously nucleate at the center of the pore. The researchers used both electronic and optical probes to monitor the bubbles’ nucleation, growth, and collapse. The bubbles were excited in streams, with each bubble lasting around 16 nanoseconds before the next formed 120 nanoseconds later, consistent with models of how heat drives bubble formation on the nanoscale. Inducing bubble nucleation in a controlled manner may be useful for applications such as building bubble “lenses” to bend light and achieve super-resolution imaging.
– Katherine Kornei
Default Settings for the Bash Shell

Here are the default settings I add to my ~/.bashrc or to /etc/bashrc:

# prompt input: type the beginning of a command and press the tab key to auto complete
bind "set show-all-if-ambiguous on"

# prompt input: type the beginning of a command and press ctrl-f to find all possible completions
bind '"\C-f": complete'

# prompt input: uncomment the following if you wish the menu-complete not to take place for the first tab pressed
#bind "set menu-complete-display-prefix on"

# prompt input: enable search of commands in history using up and down arrows that start as specified at the prompt input
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'

# prompt input: using ctrl-left and ctrl-right move one word to the left or right
bind '"\e[1;5C": forward-word'
bind '"\e[1;5D": backward-word'

# set the prompt to user@host:directory> in bold font
PS1='${debian_chroot:+($debian_chroot)}\[\e[1m\]\u@\h:\w>\[\e[0m\] '
# zbMATH — the first resource for mathematics

Directed graphs and combinatorial properties of semigroups. (English) Zbl 1005.20043

Let $$D$$ be a fixed finite graph. The graph $$G=(V,E)$$ is said to be $$D$$-saturated if for every infinite subset $$W$$ of $$V$$ there exists a subgraph of $$G$$ isomorphic with $$D$$ having all vertices in $$W$$. Let $$S$$ be a semigroup. The authors associate to $$S$$ the following three graphs: $$\text{Pow}(S)$$, $$\text{Div}(S)$$, $$\text{Ann}(S)$$, for which $$S$$ is the vertex set (in the construction of $$\text{Div}(S)$$, $$S$$ is assumed to be a semigroup with $$0$$). The set of edges is defined as follows. If $$u,v\in S$$ and $$u\neq v$$, then: $$(u,v)$$ is an edge of $$\text{Pow}(S)\iff v$$ is a power of $$u$$; $$(u,v)$$ is an edge of $$\text{Div}(S)\iff u$$ divides $$v$$; $$(u,v)$$ is an edge of $$\text{Ann}(S)\iff uv=0$$. The purpose of this paper is to characterize the commutative semigroups $$S$$ for which the graphs $$\text{Pow}(S)$$, $$\text{Div}(S)$$ or $$\text{Ann}(S)$$ are $$D$$-saturated.

##### MSC:
20M14 Commutative semigroups
05C25 Graphs and abstract algebra (groups, rings, fields, etc.)
05C20 Directed graphs (digraphs), tournaments
20M05 Free semigroups, generators and relations, word problems
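To make the three constructions concrete, here is a small illustrative sketch (my own example, not from the paper) that computes the edge sets for Z6 under multiplication mod 6, a commutative semigroup with zero; "u divides v" is taken here as v = u·w for some w in S:

```python
from itertools import product

# S = Z_6 under multiplication mod 6; 0 is the zero element.
S = range(6)
mul = lambda a, b: (a * b) % 6

def powers(u):
    """All powers u, u^2, u^3, ... (a finite set, since S is finite)."""
    seen, x = set(), u
    while x not in seen:
        seen.add(x)
        x = mul(x, u)
    return seen

# Edge sets following the definitions in the review (u != v in each case):
pow_edges = {(u, v) for u, v in product(S, S) if u != v and v in powers(u)}
div_edges = {(u, v) for u, v in product(S, S)
             if u != v and any(mul(u, w) == v for w in S)}
ann_edges = {(u, v) for u, v in product(S, S) if u != v and mul(u, v) == 0}

print((2, 4) in pow_edges)   # True: 4 = 2^2 (mod 6)
print((2, 3) in ann_edges)   # True: 2 * 3 = 0 (mod 6)
```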
# UnivariateLinearModelAnalysis

class UnivariateLinearModelAnalysis(*args)

Linear regression analysis with residuals hypothesis tests.

Available constructors:

    UnivariateLinearModelAnalysis(inputSample, outputSample)
    UnivariateLinearModelAnalysis(inputSample, outputSample, noiseThres, saturationThres, resDistFact, boxCox)

Parameters:

    inputSample : 2-d sequence of float
        Vector of the defect sizes, of dimension 1.
    outputSample : 2-d sequence of float
        Vector of the signals, of dimension 1.
    noiseThres : float
        Value for low censored data. Default is None.
    saturationThres : float
        Value for high censored data. Default is None.
    resDistFact : openturns.DistributionFactory
        Distribution hypothesis followed by the residuals. Default is openturns.NormalFactory.
    boxCox : bool or float
        Enable or not the Box Cox transformation. If boxCox is a float, the Box Cox transformation is enabled with the given value. Default is False.

Notes

This method automatically:

- computes the Box Cox parameter if boxCox is True,
- computes the transformed signals if boxCox is True or a float,
- builds the univariate linear regression model on the data,
- computes the linear regression parameters for censored data if needed,
- computes the residuals,
- runs all hypothesis tests.

Examples

Generate data:

>>> import openturns as ot
>>> import otpod
>>> N = 100
>>> ot.RandomGenerator.SetSeed(0)
>>> defectDist = ot.Uniform(0.1, 0.6)
>>> epsilon = ot.Normal(0, 1.9)
>>> defects = defectDist.getSample(N)
>>> signalsInvBoxCox = defects * 43. + epsilon.getSample(N) + 2.5
>>> invBoxCox = ot.InverseBoxCoxTransform(0.3)
>>> signals = invBoxCox(signalsInvBoxCox)

Run analysis with gaussian hypothesis on the residuals:

>>> analysis = otpod.UnivariateLinearModelAnalysis(defects, signals, boxCox=True)
>>> print analysis.getIntercept()  # get intercept value
[Intercept for uncensored case : 2.51037]
>>> print analysis.getKolmogorovPValue()
[Kolmogorov p-value for uncensored case : 0.835529]

Run analysis with noise and saturation threshold:

>>> analysis = otpod.UnivariateLinearModelAnalysis(defects, signals, 60., 1700., boxCox=True)
>>> print analysis.getIntercept()  # get intercept value for uncensored and censored case
[Intercept for uncensored case : 4.28758, Intercept for censored case : 3.11243]
>>> print analysis.getKolmogorovPValue()
[Kolmogorov p-value for uncensored case : 0.346827, Kolmogorov p-value for censored case : 0.885006]

Run analysis with a Weibull distribution hypothesis on the residuals:

>>> analysis = otpod.UnivariateLinearModelAnalysis(defects, signals, 60., 1700., ot.WeibullFactory(), boxCox=True)
>>> print analysis.getIntercept()  # get intercept value for uncensored and censored case
[Intercept for uncensored case : 4.28758, Intercept for censored case : 3.11243]
>>> print analysis.getKolmogorovPValue()
[Kolmogorov p-value for uncensored case : 0.476036, Kolmogorov p-value for censored case : 0.71764]

Methods

- drawBoxCoxLikelihood([name]): Draw the loglikelihood versus the Box Cox parameter.
- drawLinearModel([model, name]): Draw the linear regression prediction versus the true data.
- drawResiduals([model, name]): Draw the residuals versus the defect values.
- drawResidualsDistribution([model, name]): Draw the residuals histogram with the fitted distribution.
- drawResidualsQQplot([model, name]): Draw the residuals QQ plot with the fitted distribution.
- getAndersonDarlingPValue(): Accessor to the Anderson Darling test p-value.
- getBoxCoxParameter(): Accessor to the Box Cox parameter.
getBreuschPaganPValue() Accessor to the Breusch Pagan test p-value. getCramerVonMisesPValue() Accessor to the Cramer Von Mises test p-value. getDurbinWatsonPValue() Accessor to the Durbin Watson test p-value. getHarrisonMcCabePValue() Accessor to the Harrison McCabe test p-value. getInputSample() Accessor to the input sample. getIntercept() Accessor to the intercept of the linear regression model. getKolmogorovPValue() Accessor to the Kolmogorov test p-value. getNoiseThreshold() Accessor to the noise threshold. getOutputSample() Accessor to the output sample. getR2() Accessor to the R2 value. getResiduals() Accessor to the residuals. getResidualsDistribution() Accessor to the residuals distribution. getResults() Print results of the linear analysis. getSaturationThreshold() Accessor to the saturation threshold. getSlope() Accessor to the slope of the linear regression model. getStandardError() Accessor to the standard error of the estimate. getZeroMeanPValue() Accessor to the Zero Mean test p-value. saveResults(name) Save all analysis test results in a file. drawBoxCoxLikelihood(name=None) Draw the loglikelihood versus the Box Cox parameter. Parameters: name : string name of the figure to be saved with transparent option sets to True and bbox_inches=’tight’. It can be only the file name or the full path name. Default is None. fig : matplotlib.figure Matplotlib figure object. ax : matplotlib.axes Matplotlib axes object. Notes This method is available only when the parameter boxCox is set to True. drawLinearModel(model='uncensored', name=None) Draw the linear regression prediction versus the true data. Parameters: model : string The linear regression model to be used, either uncensored or censored if censored threshold were given. Default is uncensored. name : string name of the figure to be saved with transparent option sets to True and bbox_inches=’tight’. It can be only the file name or the full path name. Default is None. 
    Returns:
        fig : matplotlib.figure
            Matplotlib figure object.
        ax : matplotlib.axes
            Matplotlib axes object.

drawResiduals(model='uncensored', name=None)
    Draw the residuals versus the defect values.

    Parameters:
        model : string
            The residuals to be used, either uncensored or censored if
            censored thresholds were given. Default is uncensored.
        name : string
            Name of the figure to be saved, with the transparent option set to
            True and bbox_inches='tight'. It can be only the file name or the
            full path name. Default is None.

    Returns:
        fig : matplotlib.figure
            Matplotlib figure object.
        ax : matplotlib.axes
            Matplotlib axes object.

drawResidualsDistribution(model='uncensored', name=None)
    Draw the residuals histogram with the fitted distribution.

    Parameters:
        model : string
            The residuals to be used, either uncensored or censored if
            censored thresholds were given. Default is uncensored.
        name : string
            Name of the figure to be saved, with the transparent option set to
            True and bbox_inches='tight'. It can be only the file name or the
            full path name. Default is None.

    Returns:
        fig : matplotlib.figure
            Matplotlib figure object.
        ax : matplotlib.axes
            Matplotlib axes object.

drawResidualsQQplot(model='uncensored', name=None)
    Draw the residuals QQ plot with the fitted distribution.

    Parameters:
        model : string
            The residuals to be used, either uncensored or censored if
            censored thresholds were given. Default is uncensored.
        name : string
            Name of the figure to be saved, with the transparent option set to
            True and bbox_inches='tight'. It can be only the file name or the
            full path name. Default is None.

    Returns:
        fig : matplotlib.figure
            Matplotlib figure object.
        ax : matplotlib.axes
            Matplotlib axes object.

getAndersonDarlingPValue()
    Accessor to the Anderson Darling test p-value.

    Returns:
        pValue : openturns.Point
            Either the p-value for the uncensored case or for both cases.

getBoxCoxParameter()
    Accessor to the Box Cox parameter.

    Returns:
        lambdaBoxCox : float
            The Box Cox parameter used to transform the data.
            If the transformation is not enabled, None is returned.

getBreuschPaganPValue()
    Accessor to the Breusch Pagan test p-value.

    Returns:
        pValue : openturns.Point
            Either the p-value for the uncensored case or for both cases.

getCramerVonMisesPValue()
    Accessor to the Cramer Von Mises test p-value.

    Returns:
        pValue : openturns.Point
            Either the p-value for the uncensored case or for both cases.

getDurbinWatsonPValue()
    Accessor to the Durbin Watson test p-value.

    Returns:
        pValue : openturns.Point
            Either the p-value for the uncensored case or for both cases.

getHarrisonMcCabePValue()
    Accessor to the Harrison McCabe test p-value.

    Returns:
        pValue : openturns.Point
            Either the p-value for the uncensored case or for both cases.

getInputSample()
    Accessor to the input sample.

    Returns:
        defects : openturns.Sample
            The input sample, which is the defect values.

getIntercept()
    Accessor to the intercept of the linear regression model.

    Returns:
        intercept : openturns.Point
            The intercept parameter for the uncensored and censored (if so)
            linear regression model.

getKolmogorovPValue()
    Accessor to the Kolmogorov test p-value.

    Returns:
        pValue : openturns.Point
            Either the p-value for the uncensored case or for both cases.

getNoiseThreshold()
    Accessor to the noise threshold.

    Returns:
        noiseThres : float
            The noise threshold if it exists; if not, None is returned.

getOutputSample()
    Accessor to the output sample.

    Returns:
        signals : openturns.Sample
            The output sample, which is the signal values.

getR2()
    Accessor to the R2 value.

    Returns:
        R2 : openturns.Point
            Either the R2 for the uncensored case or for both cases.

getResiduals()
    Accessor to the residuals.

    Returns:
        residuals : openturns.Sample
            The residuals computed from the uncensored and censored linear
            regression model. The first column corresponds with the
            uncensored case.

getResidualsDistribution()
    Accessor to the residuals distribution.

    Returns:
        distribution :
            The fitted distribution on the residuals, computed in the
            uncensored and censored (if so) case.
getResults()
    Print results of the linear analysis.

getSaturationThreshold()
    Accessor to the saturation threshold.

    Returns:
        saturationThres : float
            The saturation threshold if it exists; if not, None is returned.

getSlope()
    Accessor to the slope of the linear regression model.

    Returns:
        slope : openturns.Point
            The slope parameter for the uncensored and censored (if so)
            linear regression model.

getStandardError()
    Accessor to the standard error of the estimate.

    Returns:
        stderr : openturns.Point
            The standard error of the estimate for the uncensored and
            censored (if so) linear regression model.

getZeroMeanPValue()
    Accessor to the Zero Mean test p-value.

    Returns:
        pValue : openturns.Point
            Either the p-value for the uncensored case or for both cases.

saveResults(name)
    Save all analysis test results in a file.

    Parameters:
        name : string
            Name of the file or full path name.

    Notes:
        The file can be saved as a csv file. Separations are made with
        tabulations. If name is the file name, then it is saved in the
        current working directory.
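The `ot.InverseBoxCoxTransform(0.3)` call in the examples above inverts a Box Cox transform with parameter 0.3. As a minimal pure-Python sketch of the standard Box Cox definition that this class is assumed to implement (the function names here are illustrative, not part of otpod or OpenTURNS):

```python
import math

def box_cox(y, lam):
    """Box Cox transform: (y**lam - 1)/lam for lam != 0, log(y) for lam == 0."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def inverse_box_cox(z, lam):
    """Invert the Box Cox transform (the role played by InverseBoxCoxTransform)."""
    if lam == 0:
        return math.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)

# Round trip with the lambda = 0.3 used in the docstring example
z = box_cox(5.0, 0.3)
assert abs(inverse_box_cox(z, 0.3) - 5.0) < 1e-9
```

The transform is applied to the signals before the linear regression when `boxCox=True`, which is why `getBoxCoxParameter()` returns the fitted lambda.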
# Unit 10 Circles Homework 1: Parts of a Circle, Area and Circumference — Answer Key

Objectives:

* Identify and use parts of circles.
* Solve problems using the circumference and area of circles.

Parts of a circle. A circle is the set of all points in a plane that are equidistant from a given point, called the center of the circle. The radius is the constant distance from the center to any point on the circle. A chord is a segment whose endpoints are on the circle; a diameter is a chord that passes through the center, so its length is twice the radius. Two circles with the same center but different radii are called concentric circles.

Circumference and area. The circumference is the distance around the circle: C = πd = 2πr. The standard formula for the area of a circle is A = πr². For calculation, 3.14 is a close enough approximation of π; for the curious student, the value of π to nine decimal places is 3.141592653.

Circles in the coordinate plane. The equation (x − h)² + (y − k)² = r² describes a circle; the value r is called the radius, and the point (h, k) is called the center. The unit circle is the circle whose radius is 1 and whose center is at the origin. Since the unit circle has radius 1, the coordinates where the quadrantal angles intersect it are easy to identify: 0° gives (1, 0), 90° gives (0, 1), 180° gives (−1, 0), and 270° gives (0, −1).

Example (swimming pool): a circular swimming pool has a radius of 14 m. Taking π ≈ 22/7, its circumference is C = 2πr = 2 × (22/7) × 14 = 88 m.
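The unit-circle coordinates described above can be checked numerically: every point on the unit circle is (cos t, sin t) for some angle t, and each such point satisfies x² + y² = 1. A short Python sketch:

```python
import math

# Coordinates (cos t, sin t) on the unit circle for the quadrantal angles
for degrees, expected in [(0, (1.0, 0.0)), (90, (0.0, 1.0)),
                          (180, (-1.0, 0.0)), (270, (0.0, -1.0))]:
    t = math.radians(degrees)
    x, y = math.cos(t), math.sin(t)
    assert abs(x - expected[0]) < 1e-9 and abs(y - expected[1]) < 1e-9
    assert abs(x * x + y * y - 1.0) < 1e-9  # every point satisfies x^2 + y^2 = 1
```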
π as a ratio. The ratio of the circumference of a circle to its diameter is the same for every circle; this ratio is known as π, or pi.

Example: the circumference of a circle is 100π inches. Find (a) the diameter, (b) the radius, (c) the length of an arc of 120 degrees.
(a) d = C/π = 100 inches. (b) r = d/2 = 50 inches. (c) A 120° arc is 120/360 = 1/3 of the circle, so its length is 100π/3 inches.

Concentric circles. Two concentric circles with radii r < R enclose a circular ring, and the area of the ring is given by the difference of the areas: πR² − πr². The area of the smaller circle is the fraction πr²/πR² = r²/R² of the area of the larger circle.

Tangents. A tangent touches the circle at exactly one point; coplanar circles that intersect in one point are called tangent circles. From a point outside a circle, two tangents can be drawn, touching the circle at points A and B.

Rolling circles. A classic problem: the center of a circle of radius 1 is on the circumference of a circle of radius 2 — how many times will the small circle (Circle A) rotate about its own center as it rolls once around the large circle (Circle B)? Begin by drawing (1) Circle B, (2) Circle A where it initially contacts Circle B, and (3) Circle A where it contacts Circle B after rolling partway around.

Area. The area of a circle is the number of square units inside the circle — the space the circle occupies. To find the area of a quarter circle, find the area of the whole circle using A = πr² and then divide by 4.

Worked example (radius 3 m): A = πr² ≈ 3.14 × (3 m)² = 3.14 × 9 m² = 28.26 m². Units of area are always written as squares, e.g. mm², cm², m².
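The area and circumference formulas A = πr² and C = 2πr that the worked examples rely on can be packaged as two small Python functions:

```python
import math

def circle_area(radius):
    """Area of a circle: A = pi * r**2."""
    return math.pi * radius ** 2

def circle_circumference(radius):
    """Circumference of a circle: C = 2 * pi * r (equivalently pi * d)."""
    return 2 * math.pi * radius

# Radius 3 m, compared against the 3.14 approximation used in the text
exact = circle_area(3)          # about 28.2743 square meters
approx = 3.14 * 3 ** 2          # the homework-style approximation
assert abs(exact - approx) < 0.02
```

Using the exact `math.pi` rather than 3.14 only changes the third decimal place here, which is why the 3.14 approximation is acceptable for homework answers rounded to two decimals.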
Notation. A circle is usually named by its center point: the circle with center K is written ⊙K. Given a figure, you should be able to name the radii, the chords, and the diameters, and tell whether a line or segment is best described as a chord, a secant, a tangent, a diameter, or a radius of the circle.

Sectors. For a sector AOB with central angle measuring m⌢AB degrees, the area of sector AOB is (m⌢AB / 360) · πr² — the corresponding fraction of the circle's area.

Worked example (radius 4): C = 2πr (formula for the circumference) = 2π(4) (substitute 4 for r) = 8π ≈ 25.13 units.

Constructing tangents from an external point. Given a circle with centre O and an external point P: let M be the mid-point of PO. Taking M as its centre and MO as its radius, draw a circle; it intersects the given circle at two points A and B, and PA and PB are the required tangents.

Practice questions:

* A sprinkler waters a circular region of the garden. What area can this sprinkler irrigate? Round your answer and give proper units.
* Find the value of x if the area of circle G is approximately 1661 square feet.
* What will be the diameter of a circle if its circumference is C = 10 cm?
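The last practice question above (find d given C = 10 cm) just inverts C = πd. A sketch of that inversion:

```python
import math

def diameter_from_circumference(C):
    """Invert C = pi * d to recover the diameter: d = C / pi."""
    return C / math.pi

d = diameter_from_circumference(10.0)   # the C = 10 cm question
assert round(d, 2) == 3.18              # about 3.18 cm
```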
Circumference formulas. Both forms, C = πd and C = 2πr, give the distance around the circle; understanding why both are accurate comes from d = 2r. (NCERT Grade 9, Chapter 10 on circles, covers these and other related terms and properties.)

Arcs and sectors (terminology). An arc is a portion of the circumference, consisting of two endpoints and all the points of the circle between them; a major arc is always more than half of the circumference. The arc length of a circle as a whole — the distance a bug would have to crawl to go around the circle exactly once, staying on the circle the whole time — is its circumference; the length of an arc is the fraction (central angle / 360) of that circumference, and the circumference of the unit circle is 2π.

Worked example (sector): a sector has area A = 8π square units and arc length s = 2π units. Since A = ½rs for any sector, r = 2A/s = 8 units, and the central angle is s/r = π/4 radians, i.e. 45°.

Worked example (trapezoid with a 3-4-5 triangle): in the trapezoid with bases 10" and 16", we immediately recognize the 3-4-5 right triangle, so the height is 4" and the area is ½(10" + 16")·4" = 52 in².

Practice questions:

* A circular park: to the nearest unit, find the area of the park in square yards.
* A circular pool: to the nearest tenth, find the area of the surface of the water in the pool.
* A circle has a radius of 12 in. Find its circumference and area.
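The arc-length and sector-area relations above (each is the angle/360 fraction of the full circle, and A = ½rs ties them together) can be sketched and checked against the A = 8π, s = 2π example:

```python
import math

def arc_length(radius, central_angle_deg):
    """Arc length: the fraction angle/360 of the full circumference 2*pi*r."""
    return (central_angle_deg / 360.0) * 2 * math.pi * radius

def sector_area(radius, central_angle_deg):
    """Sector area: the fraction angle/360 of the full area pi * r**2."""
    return (central_angle_deg / 360.0) * math.pi * radius ** 2

# The worked example: radius 8, central angle 45 degrees
r, angle = 8.0, 45.0
s = arc_length(r, angle)      # 2*pi units
A = sector_area(r, angle)     # 8*pi square units
assert abs(s - 2 * math.pi) < 1e-9
assert abs(A - 8 * math.pi) < 1e-9
assert abs(A - 0.5 * r * s) < 1e-9   # A = (1/2) * r * s holds for any sector
```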
Checking your answers. One quick way to estimate whether your circumference answer is reasonable is to check that it is a bit more than 3 times larger than the diameter, or slightly over 6 times larger than the radius. Answers may be left exact (in terms of π) or rounded; round only at the end, and always answer with proper units.

The circumference of a circle is also the perimeter of the circle: C = πd. A semicircle is half of a circle, so its arc length is half the circumference.

The circle as an equation. A point (x, y) is at a distance r from the origin if and only if √(x² + y²) = r, or, if we square both sides, x² + y² = r². This is the equation of the circle of radius r centered at the origin. On the unit circle (r = 1), the cosine and sine of an angle are the x- and y-coordinates of the corresponding point — which is why unit-circle problems can be solved with the triangle method, and why the cosine of any angle is equal (numerically, though not necessarily in sign) to the cosine of its reference angle.
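The condition x² + y² = r² gives a direct membership test for points on a circle centered at the origin; a small sketch (with a tolerance for floating-point values):

```python
import math

def on_circle(x, y, r, tol=1e-9):
    """True when (x, y) lies on the circle of radius r centered at the origin,
    i.e. when x**2 + y**2 == r**2 (the squared form of sqrt(x^2 + y^2) = r)."""
    return abs(x * x + y * y - r * r) < tol

assert on_circle(3, 4, 5)        # 9 + 16 = 25, the 3-4-5 right triangle again
assert not on_circle(3, 4, 6)
assert on_circle(math.cos(1.0), math.sin(1.0), 1)   # any (cos t, sin t) is on the unit circle
```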
Example 5 (find the radius of a circle). In the diagram, B is a point of tangency, so the radius drawn to B is perpendicular to the tangent line and triangle ABC is a right triangle. Apply the Pythagorean Theorem and substitute:

AC² = BC² + AB²
(r + 50)² = r² + 80²
r² + 100r + 2500 = r² + 6400
100r = 3900
r = 39

Key vocabulary for this unit: circumference, arc length, radius, diameter, chord, tangent, point of tangency.
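The tangent example reduces to solving (r + 50)² = r² + 80²: expanding cancels the r² terms, leaving an equation linear in r. A sketch of that algebra in general form (the function name and parameters are illustrative, not from the worksheet):

```python
def radius_from_tangent(tangent_length, external_extra):
    """Solve (r + e)**2 = r**2 + t**2 for r, where t is the tangent length
    and e is the distance beyond the circle to the external point.
    Expanding: r**2 + 2*e*r + e**2 = r**2 + t**2  =>  r = (t**2 - e**2) / (2*e)."""
    t, e = tangent_length, external_extra
    return (t * t - e * e) / (2.0 * e)

# The textbook numbers: (r + 50)^2 = r^2 + 80^2  =>  r = 39
assert radius_from_tangent(80, 50) == 39.0
```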
Created Date: 10/12/2015 2:01:16 PM Mathematics (Linear) 1MA0 AREA & CIRCUMFERENCE OF CIRCLES. 7a,b Circumference and Area of Circles 6. We saw in the module, The Circles that if a circle has radius r, then. Review Do Now 2. Circumference and area are often confused as the same thing. The area of a circle is pi times the square of its radius. Finding Diameter and Radius 4. 6 Rules, Tables, and Graphs: Part 2 Lesson 10. Later in the unit, we will work with circles to calculate circumference and area. YEAR 10 EXTENSION HOMEWORK 1. In a circle, points lie in the boundary of a circle are at same distance from its center. Math Worksheets Based on NCTM Standards! Number Theory, Decimals, Fractions, Ratio and Proportions, Geometry, Measurement, Volume, Interest, Integers, Probability, Statistics, Algebra, Word Problems Also visit the Math Test Prep section for additional grade seven materials. active oldest votes. Complete page 151 1-31 on notebook as practice only. However, a circle can have many more figures associated with it. It was originally defined as the ratio of a circle's circumference to its diameter (see second formula below on why) and appears in many formulas in mathematics, physics, and everyday life. to area, area to circumference, area to diameter or area to radius. Part 2-Have students design their own picture using the geometric shapes on a sheet of paper. The circumference is the enclosing boundary of the circle, while area is the space occupied. 1 Worksheet #6a : Calculating the Area of Circles & Sectors : View. 2nd Block - ERRBZYVY. What is the ratio of the circumference of a circle with radius 3 to its diameter? 4. To find circumference of a circle using the proper formula. Parts of a Circle 3. 1) Radius = Diameter = Area = Circumference = 2) Radius = Diameter = Area = Circumference = 3) Radius = Diameter = Area = Circumference = 4) Radius = Diameter = Area = Circumference = 5) Radius = Diameter = Area = Circumference = 6) Radius = Diameter. 
Using symbols, those two sentences look like this: d= 2r =2 3cm =6 cm × × A radius is the same length as half of a diameter. To begin this exploration, I created a circle with a radius of 1(for my purposes I used 1 inch as my unit of measure). Use the pi button on your calculator and give your answer correct to two decimal places. If truth be told, the early recording formats were at first meant for recording mobile phone phone calls or dictation, not new music. Find the circumference of a circle with a radius of 10 ft. Figure 3 A circle with two diameters and a (nondiameter) chord. Coplanar circles that intersect in one point are called tangent circles. 7) Scale drawings: scale factor word problems (7-J. Step 2: Use the thread and ruler to measure the. 3 Segments and Their Measures 1. 13 - Solve Equations Unit 9 Inequalities 6. SAT Math Test Prep Online Crash Course Algebra & Geometry Study Guide Review, Functions,Youtube - Duration: 2:28:48. To the nearest tenth, find the area of the surface of the water in the pool. Name the diameter(s) 4. quadrantal angles intersects the unit circle. The circumference of a circle is the distance around the outside of the circle. Got the answer pretty quickly. Extension: Measure the length of your bedroom in inches. 9 Congruence Unit 11 Statistics 6. What is the unit circle? The unit circle has a radius of one. unit circle problems called the triangle method. UNDERSTANDING BY DESIGN UNIT COVER PAGE Unit Title: Discovering and Proving Circle Properties Grade Levels: 9th grade Geometry H Subject/Topic Areas: Circle Properties Key Words: tangent, chord, arc, central angle, inscribed angle, circumference, diameter, radius Designed by: Thomas Kennedy Time Frame: 3 weeks School: Brief Summary of Unit (including curriculum context and unit goals): This…. Using the area to find the circumference of a circle is slightly more complex. 
Some of the worksheets displayed are Geometry unit 10 notes circles, 11 circumference and area of circles, Hw work answers, Lesson homework and practice 10 1 finding perimeter, Chapter 10 circles, Circles date period, Unit 10 quadratic relations, Geometry of the circle. Step-by-step explanation: Given : The circumference is 31. Search this site. The circle pictured here has a diameter of 10 cm. 5 unit square. Since the unit circle has radius 1, these coordinates are easy to identify; they are listed in the table below. Arcs are divided into minor arcs (0° < v < 180°), major arcs (180° < v < 360°) and semicircles (v = 180°). Key Words: circumference, radius, diameter, ratio, ratio comparison MJ 360-362- EVEN. The circumference, C, of a circle is given by the formula. 6th Block - 3K9TNX9R. 1 square units. Chapter 11 Circumference, Area and Surface Area Vocabulary, Objectives, Concepts and Other Important Information Examples: Example 1: Find each indicated measure. Circles are used when planning athletic tracks, recreational areas, buildings, and roundabouts The famous Ferris-wheel attraction is a circle, as are the wheels on your car or bike. Homework 1. C-77 Triangle Area Conjecture - The area of a triangle is given by the formula A= bh 2, where A is the area, b is the length of the base, and h is the height of the triangle. @ The area of the circle. These slightly more advanced circle worksheets require students to calculate area or circumference from different measurements of a circle. to Circles Class 10 Maths NCERT Solutions are extremely helpful while doing your homework or Circumference and Area of a Circle (i) The circumference of a circle is defined as distance Area of a circular ring: The area of the circular path or ring is given by the difference of the area of outer. A=pi*r^2 A=(3. The number of square units it takes to exactly fill the interior of a circle. 
The radius of a circle is a straight line drawn from the center of the circle to any point on the circumference. More Geometry Subjects. the area, b is the length of the base, and h is the height of the parallelogram. Circumference The distance around the outward boundary of a circle, expressed as a linear unit of measurement (millimeters, inches, etc. Tiffany is making a display board for the school play. a) 100 inches b) 50 inches c) 100/3 pi inches. Find the circumference of a circle with a radius of 10 ft. Notes: CIRCLE BASICS, CIRCUMFERENCE & AREA Geometry Unit 7- Properties of Polygons Page 554 For Examples 3 – 4, use the appropriate formulas to calculate the EXACT and APPROXIMATE circumference and area of the circles shown. Name : Answer Key. Then the area of the circle whose circumference 12pi = 36pi = 113. If you were to cut a circular disk from a sheet of paper, the disk would have an area, and that is what For example: enter the radius and press 'Calculate'. Below are six versions of our grade 6 math worksheet on finding the circumference of a circle when given the radius or diameter. 14 UNIT 1 LESSON 7 Strategies Using Doubles. Geometry Honors. Properties Of Trig Graphs. circumference of the circle. YEAR 10 EXTENSION HOMEWORK 2. area of kites and traps page 1 of 2. ! Work out the length of the diameter of the circle. 11 - Perimeter History Notes - Read and Take Notes on Pages 130-136 Print Out Newspaper Article. 2 | LiTERACY FOUNDATiONS MATH: LEVEL 4 GEOMETRY LESSON 1 CiRCLES A diameter is the same length as two radii. Name : Answer Key. The area of a circle can be calculated using the diameter or the radius with two different formulas: A = πr 2 or A = π(d/2) 2, where π is the mathematical constant approximately equal to 3. • To find the diameter of the face: The diameter d 2r, where r is the radius Substitute: r 1. 4 Length of Polygons in the Coordinate Plane. Step 2: Use the thread and ruler to measure the. 
Sector of a circle: It is a part of the area of a circle between two radii (a circle wedge). Given a circle with (5, 1) and (3, -1) as the endpoints of the diameter. Use their knowledge of circles to help them compute the measures of spheres. CCSS Math: 7. Put a big circle down on the table. Review of Circumference and Area of Circles, Perimeter of Polygons, and Area of Irregular Figures -- Textbook Page EP20: 1 – 15 Area of Irregular Figures -- Textbook Pages 370 – 371: 1 – 15 (Read lesson on pages 368 – 369) Khan Academy Assignment on circles and Irregular figures (composite figures) Link to Mr. Pythagorean Theorem. Find each of the following: a) the diameter b) the radius c) the length of an arc of 120 degrees. Just to mention: plt. Sample answer: I can put the pennies in 2 rows and match them. Students, teachers, parents, and everyone can find solutions to their math problems instantly. Let AB be a chord of a circle not passing through. Arcs, chords, secants, and tangents all provide a rich set of figures to draw, measure, and understand. If the diameter of a circle is 10 cm, the radius is 5 cm. 8 homework (due Wednesday) - Grading 10. Students may also think that in a circle with circumference 10 Day 78 - PPT - back from break circles - 2013 - 01. Point A is the center of the larger circle, and line segment AB (not shown) is a diameter of the smaller circle. Circle Test Part 1 Review Answer Key. ★ Classwork Power Point Surface area and volume. Now, let's look at another example that will require a bit more work. First you must find the radius, then the diameter and then the circumference. One quick way to estimate whether your circumference answer is reasonable is to check to see if it's a bit more than 3 times larger than the diameter or slightly over 6 times larger. find the angle subtending by the arc if the circumference of the circle of which the arc forms part is 60cm. 
Name the concept then change the incorrect entry to make it connect to the concept you named. 687-688, 1-21 ALL. Figure 3 A circle with two diameters and a (nondiameter) chord. Write the letter of the exercise in the box containing the number of the answer. Key Words: circumference, radius, diameter, ratio, ratio comparison MJ 360-362- EVEN. An arc of a circle is a continuous portion of the circle. Calculate the circumference of this circle with a diameter of 10 cm. Area of circles review. where r is the radius of the circle, and. 5 Segment and Angle Bisectors 1. A comprehensive checklist is provided below as well. 3rd Block - T33WKSB2. 5 Tuesday 4/17. You need to add it to an axes. Area and Perimeter of a circle. Print and copy in color. ) The straight part of the track is. If you know the radius, the diameter is. A straight line which cuts curve into two or more parts is known Answer: The tangent line is valuable and necessary because it permits us to find out the slope of a curved. In which of the following circle graphs does the shaded area represent the fraction of the total number of complaints made that were about shipping delays last year?. TIMES Module 17: Measurement and Geometry - the circle – teacher guide - In this module, the formulas for finding the circumference and area of a circle are introduced. padlet drive. 50 = 5 x 10 24 = 2 x 2 x 2 x 3 30 = 2 x 3 x 5 6 = 2 x 3 Student/Teacher Resource. The ratio of a circle's circumference to twice its radius is a constant, which we represent with the Greek letter π ("pi," pronounced "pie," like the circle-based. What is the Circumference of the circle at the right?. Online tutoring available for math help. Chord : A line segment within a circle that touches two points on the circle is called chord of a circle. Circles Quiz will be on February 25. Aim: How do we relate area with radius, diameter, & ힹ ? Unit 1 Exam Return. 8 homework (due Wednesday) - Grading 10. 
1 Properties of Tangents Goal: Use properties of a tangent to a circle. ★ Classwork Power Point Surface area and volume. It consists of two endpoints and all the Its unit length is a portion of the circumference and is always more than half of the circumference. 4 area and circumference of a ci by Rachel 2430 views. D = distance traveled = circumference [not area] of a circle. a) b) c) 14) Find the indicated measure of each circle. The entire unit has been re-worked to correspond with current research regarding effective homework. PTS: 1 DIF: Basic REF: Lesson 10-1 OBJ: 10-1. Circle vocabulary; Measuring central and. 14)(3 m^2) A=(3. 1 Logic Video; G. circumference of the circle = 2 π r and area of the circle = π r 2. Find the area of the region inside both circles. I keep getting area is 4(pie sign), but the answer key says *20(pie sign). Opening Exercises. CIRCLES Describe radian measure of an angle as the ratio of the length of an arc intercepted by a central segments and angles of circles choosing from a variety of tools. Refer to Figure 3 and the example that accompanies it. to area, area to circumference, area to diameter or area to radius. This banner text can have markup. #N#Set I and III Answer Keys. You need to know the radius and height to figure both the volume and surface area of a cylinder. Area : the measure of the size of the surface of a shape. Review Do Now 2. TIMES Module 17: Measurement and Geometry - the circle – teacher guide - In this module, the formulas for finding the circumference and area of a circle are introduced. 1-2: Adding Rational Numbers 9. Online tutoring available for math help. A circle has a radius of 15 cm. Math Task Cards - 7th Geometry - Circles, Area, Circumference, and Pi CCS 7. CIRCUMFERENCE= 3. a) 100 inches b) 50 inches c) 100/3 pi inches. Kelm: Unit 10-1 Area/Circumference of Circles - Duration: 12:38. 
Considering the most basic case, the unit circle (a circle with radius 1), we know that 1 rotation equals 360 degrees, 360°. web; books; video; audio; software; images; Toggle navigation. Area of a Circle Day 1. If there is 1 penny left over, there is an odd number of pennies. To find the perimeter of the quarter circle, find the circumference of the whole circle, divide by 4, and then add the radius twice. Unit 1: Intro to Integers & Coordinate Graphing Searchable vocabulary: ordering integers, identifying integers, absolute value, coordinate graphing, plot ordered pairs, identify ordered pairs 6. 1 Circles and Circumference. Circumference is often misspelled as. Geo Unit 6; Unit 6. Area of a circle is given by the formula A = πr2 where π = 3. Complete page 151 1-31 on notebook as practice only. Fits in composition notebook & larger. The circumference of a. Circle geometry terms + definitions. (b) Name the radius in the figure. - reminder: 10. The circumference of a circle is the distance around the circle. From a rectangular metal sheet of size 20 cm by 30 cm, a circular sheet as big as possible is cut. Sign Up For Our FREE Newsletter! By signing up, you agree to receive useful information and to our privacy policy. 8583471 cm² = 707 cm² (to 3 s. 1 Review of Geometric Solids: Part 1 Lesson 11.
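As a minimal check of the two formulas above (illustrative only, not part of the worksheet set):

```python
import math

def circle_measures(radius):
    """Return (circumference, area) for a circle of the given radius."""
    circumference = 2 * math.pi * radius
    area = math.pi * radius ** 2
    return circumference, area

# A circle with diameter 10 cm has radius 5 cm.
c, a = circle_measures(5)
print(round(c, 2))  # ≈ 31.42 (cm)
print(round(a, 2))  # ≈ 78.54 (cm²)
```

Note that the circumference (31.42) is indeed a bit more than 3 times the diameter (10), the quick reasonableness check described above.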
## Watts 2004: Structure of a network has large implications on dynamics on it

Global financial volatility follows dynamical laws on a large, dense complex network whose nodes are the individual assets. It is therefore reasonable to expect that the observation of Watts and Strogatz from 2004 (and earlier), that dynamics on a network are strongly affected by the graph topology, applies here. Our expectation is that this topology is captured by fractional differential equations of the type

$(\partial_t^{\alpha} - (\Delta_G)^{\beta}) u = F(u)$

$u(0,x)=f(x)$

where the topology enters through the Laplacian operator $\Delta_G$.

Watts_2004

This new network science has largely gone down the path of scale-free networks, which are networks where the degree distribution follows a power law; that is not what is observed in a typical financial network constructed using volatility correlations.
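To make the role of $(\Delta_G)^{\beta}$ concrete, here is a small NumPy sketch (not from the post itself) that applies a spectral fractional power of the graph Laplacian to a signal on a toy graph. The graph, the signal and the value of $\beta$ are illustrative assumptions; the spectral definition $(\Delta_G)^{\beta} = V \operatorname{diag}(w^{\beta}) V^T$ is the standard one for symmetric Laplacians.

```python
import numpy as np

def laplacian_fractional_apply(adjacency, u, beta):
    """Apply the fractional graph Laplacian (Delta_G)^beta to a signal u.

    Uses the spectral definition: if Delta_G = V diag(w) V^T, then
    (Delta_G)^beta = V diag(w^beta) V^T. Assumes an undirected graph,
    so the Laplacian is symmetric positive semidefinite.
    """
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    w, v = np.linalg.eigh(laplacian)
    w = np.clip(w, 0.0, None)  # guard against tiny negative round-off
    return v @ (w ** beta * (v.T @ u))

# Path graph on 3 nodes: 0 - 1 - 2
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
u = np.array([1.0, 0.0, -1.0])
# u happens to be an eigenvector with eigenvalue 1, so (Delta_G)^beta u = u
# for any beta; with beta = 1 this reduces to the ordinary Laplacian.
print(laplacian_fractional_apply(adj, u, 0.5))
```

In a real volatility model the adjacency matrix would be built from asset correlations, and the time-fractional part $\partial_t^{\alpha}$ would need a separate discretization.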
What do you understand by intramodal dispersion? Derive the expression for material dispersion.

Mumbai University > Electronics and Telecommunication > Sem7 > Optical Communication and Networks

Marks: 10M Year: May 2014, Dec 2012

• Pulse broadening within a single mode is called intramodal dispersion or chromatic dispersion.

• The two main causes of intramodal dispersion are as follows:

a. Waveguide dispersion:

• It occurs because a single mode fiber confines only about 80% of the optical power to the core.

• Dispersion thus arises since the 20% of the light propagating in the cladding travels faster than the light confined to the core.

b. Material dispersion:

• It is the pulse spreading due to the dispersive properties of the material.

• It arises from the variation of the refractive index of the core material as a function of wavelength.

• Material dispersion is a property of glass as a material and will always exist irrespective of the structure of the fiber.

• It occurs when the phase velocity of a plane wave propagating in the dielectric medium varies non-linearly with wavelength; a material is said to exhibit material dispersion when the second differential of the refractive index with respect to wavelength is not zero, i.e. $\frac{d^2 n}{dλ^2} ≠ 0$

• The pulse spread due to material dispersion may be obtained by considering the group delay $τ_g$ in the optical fiber, which is the reciprocal of the group velocity $v_g$.
The group delay is given by

$$τ_g = \frac{dβ}{dω} = \frac{1}{c}\left(n_1 - λ\frac{dn_1}{dλ}\right) \qquad (1)$$

where $n_1$ is the refractive index of the core material, $ω$ is the angular frequency, and $β$ is the propagation constant.

The pulse delay $τ_m$ due to material dispersion in a fiber of length L is

$$τ_m = \frac{L}{c}\left(n_1 - λ\frac{dn_1}{dλ}\right) \qquad (2)$$

For a source with rms spectral width $σ_λ$ and mean wavelength λ, the rms pulse broadening due to material dispersion $σ_m$ may be obtained from the Taylor expansion of equation (2) about λ:

$$σ_m = σ_λ\frac{dτ_m}{dλ} + \frac{σ_λ^2}{2}\frac{d^2 τ_m}{dλ^2} + \cdots \qquad (3)$$

As the first term in eq. (3) usually dominates for sources operating over the 0.8–0.9 μm wavelength range,

$$σ_m ≈ σ_λ \frac{dτ_m}{dλ} \qquad (4)$$

Hence the pulse spread may be evaluated by considering the dependence of $τ_m$ on λ. From eq. (2),

$$\frac{dτ_m}{dλ} = \frac{L}{c}\left[\frac{dn_1}{dλ} - λ\frac{d^2 n_1}{dλ^2} - \frac{dn_1}{dλ}\right] = -\frac{Lλ}{c}\frac{d^2 n_1}{dλ^2} \qquad (5)$$

Substituting eq. (5) into eq. (4), the rms pulse broadening due to material dispersion is given by

$$σ_m = \frac{σ_λ L}{c}\left|λ\frac{d^2 n_1}{dλ^2}\right| \qquad (6)$$

The material dispersion for optical fibers is sometimes quoted as $\left|λ^2 \frac{d^2 n_1}{dλ^2}\right|$ or $\left|\frac{d^2 n_1}{dλ^2}\right|$. However, it may also be given in terms of the material dispersion parameter M:

$$M = \frac{1}{L} \frac{dτ_m}{dλ} = \frac{λ}{c}\left|\frac{d^2 n_1}{dλ^2}\right|$$

The total pulse spread caused by material dispersion is then

$$∆t_{mat} = M \, L \, ∆λ$$

where $∆λ$ is the spectral width of the light source and L is the fiber length.
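As an illustration (not part of the original answer), M can be evaluated numerically by modelling n₁(λ) with the standard three-term Sellmeier relation for fused silica (Malitson's published coefficients) and taking a finite-difference second derivative. The step size h and the signed convention D = −(λ/c)·d²n₁/dλ² used below are choices made for this sketch; the answer's M is the magnitude of D.

```python
import math

# Sellmeier coefficients for fused silica (Malitson), wavelengths in micrometres.
B = (0.6961663, 0.4079426, 0.8974794)
L = (0.0684043, 0.1162414, 9.896161)

def n(lam_um):
    """Refractive index of fused silica at wavelength lam_um (micrometres)."""
    lam2 = lam_um ** 2
    return math.sqrt(1.0 + sum(b * lam2 / (lam2 - l ** 2) for b, l in zip(B, L)))

def d2n_dlam2(lam_um, h=1e-3):
    """Central-difference second derivative of n w.r.t. wavelength (per um^2)."""
    return (n(lam_um + h) - 2.0 * n(lam_um) + n(lam_um - h)) / h ** 2

def material_dispersion(lam_um):
    """Signed material dispersion D = -(lambda/c) d2n/dlam2, in ps/(nm km)."""
    c = 2.99792458e8               # m/s
    lam_m = lam_um * 1e-6          # wavelength in metres
    d2 = d2n_dlam2(lam_um) * 1e12  # per um^2  ->  per m^2
    return -(lam_m / c) * d2 * 1e6 # s/m^2  ->  ps/(nm km)

print(material_dispersion(0.85))   # strongly negative (normal dispersion)
print(material_dispersion(1.55))   # positive (anomalous dispersion)
```

The sign change between the two wavelengths reflects the zero-material-dispersion point of silica near 1.27 μm, which is why the first-order term in eq. (3) can vanish and higher-order terms then matter.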
# plot_controls

qctrlvisualizer.plot_controls(figure, controls, polar=True, smooth=False, unit_symbol='Hz', two_pi_factor=True)

Creates a plot of the specified controls.

Parameters

• figure (matplotlib.figure.Figure) – The matplotlib Figure in which the plots should be placed. The dimensions of the Figure will be overridden by this method.

• controls (dict) – The dictionary of controls to plot. The keys should be the names of the controls, and the values represent the pulse by either (1) a dictionary with the ‘durations’ and ‘values’ for that control, or (2) a list of segments, each a dictionary with ‘duration’ and ‘value’ keys. The durations must be in seconds and the values (possibly complex) in the units specified by unit_symbol. For example, the following two controls inputs would be valid (and equivalent):

  controls={
      "Clock": {"durations": [1.0, 1.0, 2.0], "values": [-0.5, 0.5, -1.5]},
      "Microwave": {"durations": [0.5, 1.0], "values": [0.5 + 1.5j, 0.2 - 0.3j]},
  }

  controls={
      'Clock': [
          {'duration': 1.0, 'value': -0.5},
          {'duration': 1.0, 'value': 0.5},
          {'duration': 2.0, 'value': -1.5},
      ],
      'Microwave': [
          {'duration': 0.5, 'value': 0.5 + 1.5j},
          {'duration': 1.0, 'value': 0.2 - 0.3j},
      ],
  }

• polar (bool, optional) – The mode of the plot when the values appear to be complex numbers. Plot magnitude and angle in two figures if set to True, otherwise plot I and Q in two figures. Defaults to True.

• smooth (bool, optional) – Whether to plot the controls as samples joined by straight lines, rather than as piecewise-constant segments. Defaults to False.

• unit_symbol (str, optional) – The symbol of the unit to which the controls values correspond. Defaults to “Hz”.

• two_pi_factor (bool, optional) – Whether the values of the controls should be divided by 2π in the plots. Defaults to True.

Raises

• ValueError – If any of the input parameters are invalid.

• TypeError – If the controls have invalid types.
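The two accepted input formats are interchangeable. As a quick illustration of the shapes involved, this small helper (written for this note, not part of the qctrlvisualizer package) normalizes either format to parallel durations/values lists:

```python
def normalize_control(control):
    """Convert either accepted controls format into (durations, values) lists.

    Accepts a dict with 'durations'/'values' keys, or a list of segment
    dicts with 'duration'/'value' keys, mirroring the two input shapes
    that plot_controls documents above.
    """
    if isinstance(control, dict):
        return list(control["durations"]), list(control["values"])
    durations = [segment["duration"] for segment in control]
    values = [segment["value"] for segment in control]
    return durations, values

dict_form = {"durations": [1.0, 1.0, 2.0], "values": [-0.5, 0.5, -1.5]}
list_form = [
    {"duration": 1.0, "value": -0.5},
    {"duration": 1.0, "value": 0.5},
    {"duration": 2.0, "value": -1.5},
]
# Both forms describe the same "Clock" pulse from the example above.
assert normalize_control(dict_form) == normalize_control(list_form)
```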
# Area of region
• Oct 20th 2011, 02:08 PM
icelated
Area of region
Basically, what i have is
$V = 2 \pi \int_{0}^{4} (4-x) \, \sqrt{x} \, dx$
would i then distribute?
$V = 2 \pi \int_{0}^{4} (4x^{1/2} - x^{3/2}) \, dx$
Then, integrate?
$2 \pi \left( 4 \cdot \frac{2}{3} x^{3/2} - \frac{2}{5} x^{5/2} \right)$
then,
$2 \pi \left[ \frac{8}{3} x^{3/2} - \frac{2}{5} x^{5/2} \right]$ evaluated from 0 to 4
Is this correct? I can't seem to get a decent answer. Wolfram gives me weird steps and an answer I can't match... And when would I do something with the 2pi? Do I use 2pi for each side?
• Oct 20th 2011, 02:33 PM
ebaines
Re: Area of region
Quote:

Originally Posted by icelated
$2 \pi \left[ \frac{8}{3} x^{3/2} - \frac{2}{5} x^{5/2} \right]$ evaluated from 0 to 4. Is this correct?

Looks good so far. When you evaluate from 0 to 4 you should get:
$2 \pi \left[ \left(\frac{8}{3} 4^{3/2} - \frac{2}{5} 4^{5/2}\right) - (0 - 0) \right] = \frac{256 \pi}{15}$
• Oct 20th 2011, 02:35 PM
TheEmptySet
Re: Area of region
Quote:

Originally Posted by icelated
Basically, what i have is
$V = 2 \pi \int_{0}^{4} (4-x) \, \sqrt{x} \, dx$
would i then distribute?
$V = 2 \pi \int_{0}^{4} (4x^{1/2} - x^{3/2}) \, dx$
Then, integrate?
$2 \pi \left[ \frac{8}{3} x^{3/2} - \frac{2}{5} x^{5/2} \right]$

This means
$2 \pi \left[ \frac{8}{3} x^{3/2} - \frac{2}{5} x^{5/2} \right] \bigg|_{0}^{4} = 2\pi\left[ \frac{8}{3}4^{3/2}-\frac{2}{5}4^{5/2}-\left( \frac{8}{3}0^{3/2}-\frac{2}{5}0^{5/2}\right) \right]$
Now just use algebra to simplify
• Oct 20th 2011, 03:15 PM
icelated
Re: Area of region
That answers my questions, thank you. Do i need to do anything to the thread?
• Oct 20th 2011, 05:35 PM
Ackbeet
Re: Area of region
Quote:

Originally Posted by icelated
Do i need to do anything to the thread?

Nope. We don't generally close threads around here unless there's been a violation of rules. Although, come to think of it, if you like, you can mark the thread as solved.
That's in the "Thread Tools" menu.
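As an aside, the closed-form answer $\frac{256\pi}{15}$ is easy to confirm numerically. This is a quick sanity check written for this note, not part of the original thread; it applies composite Simpson's rule to the shell-method integrand:

```python
import math

def f(x):
    # Integrand of the shell-method volume: (4 - x) * sqrt(x)
    return (4.0 - x) * math.sqrt(x)

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

volume = 2 * math.pi * simpson(f, 0.0, 4.0)
print(volume)              # ≈ 53.6165
print(256 * math.pi / 15)  # ≈ 53.6165
```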
# Coding

Use simple, basic commands in general. Resort to fancy commands that do a lot at once only if you have a good reason, not just to write shorter or neater-looking or fancier or more elegant code. Elegant code is a snobbish disaster, if "elegant" means "short" as opposed to "quick-running".

LaTeX is a good example. You can use \section{Section}, or you can use \noindent {\bf Section 1}. Why use the latter? --Because then you know what section it is, rather than having it automatically labelled. If you delete a section, you do not WANT automatic relabelling. It isn't worth the hassle.

\begin{Theorem} jkjlkj \end{Theorem} is a good example. Again, you don't want automatic renumbering. For EQUATIONS you do, because you do change the number of them a lot. And for page numbers. But not sections.

An exception is if you're actually typesetting a book. Then, you want to be able to change the style throughout all at once.

Do not format a paper for a particular journal. Format it the way you think a working paper ought to be for the reader. Then change it at the last stage, after the paper is accepted, to fit house style.

Courier is good for making something look unfinished, which often you do want, with working papers for example.

$$y = x^2$$ versus \nolabel y = x^2
# What do we call the vertical space between a list item label and the item text? Following the diagram here: \topsep, \itemsep, \partopsep and \parsep - what does each of them mean (and what about the bottom)? is there a name for the vertical rather than horizontal space between the label of an item in a list and its corresponding text (for the case where the label is longer than \leftmargin so the text gets pushed down)? I'm not finding any such term in the documentation of enumitem. • This happens with some description styles. The usual \baselineskip is used, so it doesn't need to have a name. – Bernard Aug 5 '17 at 12:50 • I’m not sure to understand what you are asking for, but try adding \leavevmode\\*[\smallskipamount] (say) just after the closing bracket of the relevant \item[...] command. – GuM Aug 5 '17 at 22:18 • @Bernard: So you're telling me there's no way to control it independently of \baselineskip? :-( – einpoklum Aug 5 '17 at 22:38 • @JohnKormylo: I'm reading your comments, but I must be missing something since they sound like an answer to another question :-( – einpoklum Aug 5 '17 at 22:39 • @GuM: That seems to always insert at least one empty paragraph, even if I replace \smallskipamount with, say, 1pt. Can't I avoid that somehow? – einpoklum Aug 5 '17 at 22:52 ## 2 Answers Normally, there is no vertical spacing between the label of an item in one of the standard lists and the following text – only a horizontal spacing, namely \labelsep and \itemindent. However, as mentioned by @GuM, you can manage to have a line break manually, and it's up to you to add such a spacing. This is easy for enumerate and itemize, writing \leavevmode\\[some verticalskip] just after \item. Unfortunately it doesn't work for description environments. I propose a solution, which consists in defining a new Description environment for which the \descriptionlabel command is redefined to incorporate an invisible rule below the base line. 
The length of this rule is an optional argument of the environment (rather arbitrary default: 3mm). Thanks to xparse, the environment accepts a second optional argument, for the set of key = some value to be handed to the description environment. Here is an example:

\documentclass{article}%
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[svgnames]{xcolor}
\usepackage{lipsum}
\usepackage{enumitem}
\setlist[description]{leftmargin = 12mm}
\usepackage{xparse}

\NewDocumentEnvironment{Description}{O{3mm}O{}}
  {\renewcommand*\descriptionlabel[1]{\rule[-#1]{0pt}{#1}\hspace\labelsep
      \normalfont\bfseries ##1}%
   \description[style=standard, labelwidth=\textwidth, #2]}
  {\enddescription}

\begin{document}

‘Force next line’:

\begin{Description}[1.8ex][font=\sffamily\color{FireBrick!60}]
  \item[One]\lipsum[4]
  \item[And another one] \lipsum[4]
\end{Description}

\end{document}

My crystal ball suggests that you might be using the style=nextline key that the enumitem package provides for description environments. If this is the case, the workaround I suggested in a comment is obviously not applicable. However, since the nextline style doesn’t “box” the label, you can achieve what you want by including an appropriate \vspace command at the end of your label text, as the following MWE shows:

% My standard header for TeX.SX answers:
\documentclass[a4paper]{article} % To avoid confusion, let us explicitly
                                 % declare the paper format.
\usepackage[T1]{fontenc} % Not always necessary, but recommended.
% End of standard header. What follows pertains to the problem at hand.
\usepackage{enumitem}

\begin{document}

Preceding text.

\begin{description}[style=nextline,leftmargin=6pc]
  \item[A] The text of the first description.
  \item[Short label] Full description of the second item in the list follows
    here. Let's add a few words so that it makes it to the second line.
  \item[A somewhat longer label] Here's the text of the third description,
    or, better, the description of the third item in the list.
  \item[Similar, but with vertical space\vspace{\smallskipamount}] Replace
    the argument of \verb|\vspace| (here, \verb|\smallskipamount|) with any
    rubber length of your choice.
  \item[Abc def ghij\vspace{\smallskipamount}] If the label is ``short'',
    then it gets boxed, thus suppressing the vertical spacing.
  \item[Wrong]\leavevmode\\*[\smallskipamount] The workaround I~suggested
    in a comment does \emph{not} work as expected in the context of a
    \texttt{description} environment with \texttt{style=nextline}, neither
    for ``short'' labels\ldots
  \item[And wrong as well]\leavevmode\\*[\smallskipamount] \ldots nor for
    ``long'' ones; retrospectively, this is obvious!
\end{description}

Following text.

\end{document}

Here’s the output I get:

• Your crystal ball reads true. I'll try to make my issue into a separate question to which you could move this answer, but generating a MWE is proving a bit tricky. – einpoklum Aug 6 '17 at 9:09
The Aperiodical by Christian Lawson-perfect - 3d ago

Let’s recite the $13$ times table. Pay attention to the first digit of each number:

\begin{array}{l} \color{blue}13, \\ \color{blue}26, \\ \color{blue}39, \\ \color{blue}52 \end{array}

What happened to $\color{blue}4$‽

A while ago I was working through the $13$ times table for some boring reason, and I was in the kind of mood to find it really quite vexing that the first digits don’t go $1,2,3,4$. Furthermore, $400 \div 13 \approx 31$, so it takes a long time before you see a 4 at all, and that seemed really unfair. I was being pretty unreasonable in my expectations of basic arithmetic, but I wasn’t completely brain-dead: I smelled an integer sequence!

How about $a(n)$ = least $k$ such that $k \times n$ starts with a $4$. That’s not particularly interesting, and someone who comes across this sequence in the OEIS might think “why $4$?” So, I did a bit more thinking and came up with this:

$a(n)$ = least $k$ such that $\{ \text{first digit of } j \times n, \, 0 \leq j \leq k \} = \{ 0,1,2, \dots 9 \}$

I wrote a bit of Python, and in a few minutes I had some numbers:

| $n$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $a(n)$ | 9 | 45 | 27 | 23 | 18 | 15 | 13 | 12 | 9 | 9 | 9 | 42 | 62 |

And hey, $13$ is a record-setter. I’m really beginning to dislike this number. Anyway, I searched the OEIS for my sequence and it wasn’t there, so I submitted it and it was duly accepted as A249067. Along the way, OEIS editor Charles R Greathouse IV added this intriguing conjecture:

Conjecture: $a(n) \leq N$ for all $n$. Perhaps $N$ can be taken as $81$.

Why $81$? Maybe look at the graph produced automatically by the OEIS: The record of $81$ is reached at $a(112)$. And at $a(1112)$. And $a(11112)$. That’s because they’re very slightly bigger than $\frac{1}{9} \times 10^m$, so nine times $1 \dots 12$ is just bigger than $9 \dots 9$, i.e.
a number starting with a $1$, so it takes nine times nine steps down the times table before you see a number with $9$ as its first digit. This pattern repeats at every power of $10$, and in fact every pattern in this sequence repeats (more or less) at every power of 10: this animated plot of the sequence with different horizontal scales shows that it’s self-similar: (The fuzziness in the bigger plots is because each plot just takes a sample of points, and interpolates between them) So the conjecture looks true, and this is my sequence, so I should prove it. It isn’t surprising that this thing repeats when you multiply by $10$: we’re only looking at the first digit, and obviously the first digit of $n$ is the same as the first digit of $10n$. That doesn’t suffice as a proof of Charles Greathouse’s conjecture though: numbers which don’t end in a $0$ might do something unhelpful. Fortunately, the day after I thought this sequence up was MathsJam night. I decided I’d set the Charles Grey pub’s brightest minds on the problem. I had a few ideas but I’m not particularly quick at putting thoughts together. Ji proposed an application of the pigeonhole principle: if you look at the first two digits of the numbers you see in $n$’s times table, you can write out everything you might see in a $9 \times 10$ grid: \begin{array}{cccccccccc} 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\ 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 \\ 30 & 31 & 32 & 33 & 34 & 35 & 36 & 37 & 38 & 39 \\ 40 & 41 & 42 & 43 & 44 & 45 & 46 & 47 & 48 & 49 \\ 50 & 51 & 52 & 53 & 54 & 55 & 56 & 57 & 58 & 59 \\ 60 & 61 & 62 & 63 & 64 & 65 & 66 & 67 & 68 & 69 \\ 70 & 71 & 72 & 73 & 74 & 75 & 76 & 77 & 78 & 79 \\ 80 & 81 & 82 & 83 & 84 & 85 & 86 & 87 & 88 & 89 \\ 90 & 91 & 92 & 93 & 94 & 95 & 96 & 97 & 98 & 99 \end{array} The $n$ times table will dance around this grid until all nine rows have been visited. The longest it can do that is by visiting all 80 cells not in the last line.
If the process doesn’t visit the same place twice before it hits every row, that means that the latest you can put off visiting the last row is the 81st iteration. So we need to show that you can’t visit the same spot twice before visiting each row once.

Unfortunately, that’s not true. The $12$ times table visits the ’12’ cell at $12 \times 1 = 12$ and again at $12 \times 10 = 120$, before all possible first digits have been seen. So, we need another explanation.

Katie Steckles and the Manchester MathsJam crowd came up with an alternative explanation: if you can prove $\left\lceil \frac{1}{9} \cdot 10^m \right\rceil$ (that is, $112$, $1112$, $\ldots$) takes 81 steps for all $m \geq 3$, then that’s the maximum, as any $m$-digit number bigger than that will reach $9 \times 10^m$ in at most as many steps, and will definitely have seen all the other initial digits before then, and any $m$-digit number smaller than $\left\lceil \frac{1}{9} \cdot 10^m \right\rceil$ will visit every first digit in the first 9 multiples.

There’s some evidence for this: the $m+3$-digit numbers that take 81 steps seem to be the ones between $11\ldots112$ and $112499\ldots99$. I don’t know if there’s a clever way of showing that $\left\lceil \frac{1}{9} \cdot 10^m \right\rceil$ takes 81 steps, but would it convince you if I said that $112$ takes that long, and adding more $1$s in the middle can’t make it any worse? Anyway, that’s good enough for me.

I think I can now answer my question: exactly how bad is the $13$ times table? Let’s compute the record-setters for A249067: the numbers that take longer than any smaller number to see every possible leading digit:

$\begin{array}{c|cccc} n & 1 & 2 & 13 & 112 \\ \hline a(n) & 9 & 45 & 62 & 81 \end{array}$

$13$ is a record-setter in the sequence, which means it’s pretty bad, but it’s not the worst: we’ve shown above that $112$ takes the longest possible number of steps to see every digit. And the number $2$ comes under scrutiny for taking way longer than its neighbours.
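For what it’s worth, a few lines of Python are enough to reproduce both tables in this post. This is a sketch of my own, not the code from the notebook the post links to:

```python
def a(n):
    """Least k such that the leading digits of 0*n, 1*n, ..., k*n
    between them cover every digit from 0 to 9 (this is A249067)."""
    seen = set()
    k = 0
    while True:
        seen.add(int(str(k * n)[0]))  # leading digit of k*n (0 when k=0)
        if len(seen) == 10:
            return k
        k += 1

print([a(n) for n in range(1, 14)])
# [9, 45, 27, 23, 18, 15, 13, 12, 9, 9, 9, 42, 62]

# Record-setters: each n whose a(n) beats every smaller n
best, records = -1, []
for n in range(1, 200):
    if a(n) > best:
        best, records = a(n), records + [(n, a(n))]
print(records)  # [(1, 9), (2, 45), (13, 62), (112, 81)]
```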
So really, $13$ is just unlucky to find itself in such company. If you’re interested in the working-out I did for this post, I’ve put my Jupyter notebook online.

The Aperiodical by Christian Lawson-Perfect - 6d ago

The Online Encyclopedia of Integer Sequences just keeps on growing: at the end of last month it added its 300,000th entry. Especially round entry numbers are set aside for particularly nice sequences to mark the passing of major milestones in the encyclopedia’s size; this time, we have four nice sequences starting at A300000. These were sequences that were originally submitted with indexes in the high 200,000s but were bumped up to get the attention associated with passing this milestone. Here they are:

A300000: The sum of the first $n$ terms of the sequence is the concatenation of the first $n$ digits of the sequence, with $a(1) = 1$.

1, 10, 99, 999, 9990, 99900, 999000, 9990000, 99900000, 999000000, 9990000000, 99899999991, 998999999919, 9989999999190, 99899999991900, 998999999918991, 9989999999189910, 99899999991899109, 998999999918991090, 9989999999189910900, 99899999991899108991, 998999999918991089910, …

For example, the number formed by concatenating the first three digits of the sequence is $110$, which is the sum of the first three terms: $110 = 1 + 10 + 99$. This has a Golomb sequence vibe about it, though it’s a bit more straightforward to generate. This sequence was submitted by Eric Angelini, a Belgian TV producer who has added countless sequences to the OEIS, usually generated like this by picking a constraint and working out what the sequence would need to look like in order to obey it.

A300001: Side length of the smallest equilateral triangle that can be dissected into $n$ equilateral triangles with integer sides, or $0$ if no such triangle exists.
1, 0, 0, 2, 0, 3, 4, 4, 3, 4, 5, 6, 4, 5, 6, 4, 5, 6, 5, 6, 6, 5, 7, 6, 5, 7, 6, 6, 7, 6, 7, 7, 6, 7, 7, 6, 7, 7, 8, 7, 7, 8, 7, 8, 8, 7, 8, 8, 7, 8, 9, 8, 8, 9, 8, 8, 9, 8, 9, 9, 8, 9, 9, 8, 9, 9, 9, 10, 9, 9, 10, 9, 9, 10, 9, 10, 10, 9, 10, 10, 9, 10, 10, 10, 10, 10, 11, 10, 10, 11, 10, 10, 11, 10, 11, 11, 10, 11, 11, 10

I’m amazed this one wasn’t already in! It seems like exactly the kind of thing that would appear in something like Dudeney’s Amusements. There’s an associated paper on the arXiv, by Ales Drapal and Carlo Hamalainen, which notes that some of the earliest work on triangle dissections was done by Bill Tutte, of Bletchley Park fame. The entry page contains some fab plaintext-art drawings of solutions for a few different $n$.

A300002: Lexicographically earliest sequence of positive integers such that no $k+2$ points fall on any polynomial of degree $k$.

1, 2, 4, 3, 6, 5, 9, 16, 14, 20, 7, 15, 8, 12, 18, 31, 26, 27, 40, 30, 49, 38, 19, 10, 23, 53, 11

The definition of this one is a bit opaque if you’re not in the right frame of mind, but it’s really neat. If you plot the sequence, as the OEIS can automatically do for you, you get this: or, if you want to do this in your head, think of the set of points $(n, a(n))$. Now, if you pick any polynomial of degree $k$, there’s no subset of $k+2$ of the points on the scatter plot that lie on that polynomial. It’s a ‘duck-and-dive’ sequence – it always picks the smallest number that won’t be on any of the $2^{n-1}$ polynomials defined by the sequence leading up to $a(n)$.

The OEIS entry contains a conjecture that this sequence is a permutation of the natural numbers. It’s easily shown that it contains no duplicates – otherwise, if the number $m$ were repeated, there’d be two elements lying on the line $y=m$, a degree-0 polynomial. What’s not obvious is that every number will eventually turn up. It’d be pretty wild if some numbers never did – and that’d form a new sequence, too!
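Stepping back to A300000 for a moment: its defining property can be checked mechanically against the terms listed above. A quick sketch (the variable names are mine):

```python
# The first few terms of A300000, as listed above.
terms = [1, 10, 99, 999, 9990, 99900, 999000, 9990000, 99900000,
         999000000, 9990000000, 99899999991]

digits = "".join(str(t) for t in terms)  # the sequence's digit stream

total = 0
for n, t in enumerate(terms, start=1):
    total += t
    # Sum of the first n terms == concatenation of the first n digits.
    assert total == int(digits[:n]), n

print("property holds for the first", len(terms), "terms")
```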
A300003: Triangle read by rows: $T(n, k) =$ number of permutations that are $k$ “block reversals” away from $12 \ldots n$, for $n \geq 0$, and (for $n \gt 0$) $0 \leq k \leq n-1$. Written out row by row, the data reads:

\begin{array}{l} 1 \\ 1 \\ 1, 1 \\ 1, 3, 2 \\ 1, 6, 15, 2 \\ 1, 10, 52, 55, 2 \\ 1, 15, 129, 389, 184, 2 \\ 1, 21, 266, 1563, 2539, 648, 2 \\ 1, 28, 487, 4642, 16445, 16604, 2111, 2 \\ 1, 36, 820, 11407, 69863, 169034, 105365, 6352, 2 \\ 1, 45, 1297, 24600, 228613, 1016341, 1686534, 654030, 17337, 2 \end{array}

I don’t like “triangle read by rows” entries, purely because the OEIS’s web interface doesn’t make them easy to read. It’s debatable whether sequences generated by two parameters are even ‘sequences’, but that’s not a fight worth having, because there are some truly fab bits of maths hiding in the OEIS’s triangles.

[Image: The Oval Track puzzle – you can reverse four elements at a time. There’s also a cyclic permutation move, which A300003 doesn’t allow.]

This one looks at what you can do by starting with the list of numbers $1,2, \ldots, n$, and repeatedly picking a block of adjacent numbers and reversing their order. It’s like a generalised version of the Oval Track puzzle.

The Aperiodical by Katie Steckles - 3w ago

Inspired by the BBC’s Sport Relief fundraising campaign, I’ve decided to set myself a vaguely mathematical running challenge. My current routine does involve a little running, but nothing serious, so I’ve given myself a bar to aim for that’s both vaguely achievable, and completely irrational. I’ll aim to run π kilometres (or as close as I can get, with the measuring instruments I have access to) each day during the month of March. This will either be on the treadmill at my gym – in which case I’ll try to get a photo of the ‘total distance’ readout once I’ve finished – or out in the real world, for which I’ll use some kind of running GPS logging device, to provide proof I’ve done it each day.
Some days I’ll run on my own, and others I’ll be accompanied by friends/relatives, who’ll be either running as well or just making supportive noises. At the end of the month, I’ll post an update documenting my progress/success/failure.

Serious request: if you know of anywhere in the UK I can reasonably get to where there’s an established circle that’s exactly 1km in diameter, I can try to come and run round the circumference of it. Drop me an email if so.

If you’d like to support my ridiculous plan, you can follow my progress and donate on my fundraising page, or encourage others to do so by visiting pikm.run (I paid £4 for the URL, so now I have to do it). Sport Relief is the even-numbered-years counterpart of Comic Relief; together they raise money for thousands of projects all over the UK and in the developing world, to help the vulnerable and those in need.

The Aperiodical by Philipp Reinhard - 3w ago

This post is in response to Peter’s post introducing the Approximate Geometric Mean. The approximate geometric mean $\mathrm{(AGM)}$ is a nice approximation of the geometric mean $\mathrm{(GM)}$, but it has some quirks, as we will see. After a discussion at the MathsJam gathering, I was intrigued to find out how good an approximation it is.

To get a better understanding, we first have to look again at its definition. For $A=a\cdot 10^x$ and $B=b \cdot 10^y$, we set $\mathrm{AGM}(A,B):=\mathrm{AM}(a,b)\cdot 10^{\mathrm{AM}(x,y)}$, where $\mathrm{AM}$ stands for the arithmetic mean. This also makes sense when $a$ and $b$ are not just integers between 1 and 10, but any real numbers. Note that we won’t consider negative $A$ and $B$ (i.e. negative $a$ and $b$), as the geometric mean runs into issues if we do so. The values of $x$ and $y$ may be negative, though. The $\mathrm{AGM}$ looks like a mix between the $\mathrm{AM}$ and the $\mathrm{GM}$, so what can possibly go wrong?
Same mean, different numbers

In contrast to the $\mathrm{AM}$ and the $\mathrm{GM}$, the $\mathrm{AGM}$ depends on the number base (10 in this case) and the presentation of $A$ and $B$. If we write $A=(10a) \cdot 10^{(x-1)}$, we get a different value for $\mathrm{AGM}(A,B)$. This looks rather unfortunate, but it will turn out to be helpful. To ease notation we will assume in the following that $a\geq b$ unless otherwise stated. This can be done without loss of generality, as $\mathrm{AGM}(A,B)=\mathrm{AGM}(B,A)$.

Peter Rowlett proved in his post that $\mathrm{GM}\leq \mathrm{AGM}$. The question is, how far can the $\mathrm{AGM}$ exceed the $\mathrm{GM}$? In other words, what’s the supremum of the ratio $R=\mathrm{AGM}/\mathrm{GM}$? Using the notation for $A$ and $B$ as above we get \begin{align*} R=\frac{\mathrm{AGM}(A,B)}{\mathrm{GM}(A,B)}= \frac{\mathrm{AM}(a,b)}{\mathrm{GM}(a,b)} = \frac{1}{2}\cdot \left(\sqrt{a/b}+\sqrt{b/a}\right).\end{align*} So the ratio $R$ doesn’t depend on $x$ and $y$ but only on $a$ and $b$. That’s convenient.

Taking $a$ and $b$ in the interval $[1,10)$, as is usual, we can look at the plot of $R(a,b)$. As long as we are in the blue part of the graph, the $\mathrm{AGM}$ looks to be a sensible approximation of the $\mathrm{GM}$. So let’s look at the bad combinations of $a$ and $b$. The worst case happens when $a$ and $b$ are maximally far apart: the supremum of $R(a,b)$ is its limit for $a \rightarrow 10$ and $b=1$. So in general, $1\leq R \lt 5.5/\sqrt{10} \approx 1.74$. This supremum doesn’t look too bad at first, but unfortunately the result can be unusable in extreme cases.
For example, if $A=999=9.99\cdot 10^2$ and $B=1000=1 \cdot 10^3$, we have $\mathrm{GM}(A,B)\approx \mathrm{AM}(A,B)=999.5$ and $\mathrm{AGM}(A,B)\approx 1738$ – not only is the $\mathrm{AM}$ a better approximation of the $\mathrm{GM}$ than the $\mathrm{AGM}$ in this instance, the $\mathrm{AGM}$ is bigger than both the numbers $A$ and $B$ of which it is supposed to give some kind of mean!

Let’s analyse this a bit deeper. The ratio $R$ only depends on the ratio $r=a/b$. In closed form we can write $R(r)=\frac{1}{2}\left(\sqrt{r}+1/\sqrt{r}\right)$ and we are left to study this function in the range $[1,10]$. Its maximum is $R(10)$, but smaller $r$ give better results. And, as we will see, we don’t have to put up with $r=10$.

Here, the flexibility in the definition of the $\mathrm{AGM}$ comes into play. By choosing a suitable presentation of the numbers we can guarantee that $r$ isn’t too big. If we have $r\leq \sqrt{10}$, which is equivalent to $\sqrt{10}b\geq a \geq b$, we calculate $\mathrm{AGM}(A,B)$ as above. If $a>\sqrt{10}b$, we change the presentation of the number: \begin{align*}B=b \cdot 10^y = (10b)\cdot 10^{y-1}=:b’ \cdot 10^{y-1} \end{align*} and continue from there. So, let’s redefine the $\mathrm{AGM}$ for $10>a\geq b\geq 1$ like this: $\mathrm{AGM}(A,B)=\begin{cases} \mathrm{AM}(a,b)\cdot 10^{\mathrm{AM}(x,y)}, & \sqrt{10}b\geq a,\\ \mathrm{AM}(10b,a)\cdot 10^{\mathrm{AM}(x,y-1)}, & \text{otherwise}. \end{cases}$

Note that in the second case we have $\sqrt{10}a>10b>a$, so that the roles of the pair $(a,b)$ are taken over by the pair $(10b,a)$. Setting $r=10b/a$ in the second case, we have in both cases $1\leq r\leq \sqrt{10}$, so we only have to study $R(r)$ in the interval $[1,\sqrt{10}]$, which will turn out to be rather benign. Note also that this new $\mathrm{AGM}$ can still be calculated without a calculator when using the approximation $\sqrt{10}\approx 3$, as Colin Beveridge suggested in Peter’s post.
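The redefinition is easy to mechanise. Here is a sketch of my own (function names and all – it appears in neither post), which first splits each number into mantissa and exponent and then applies the piecewise rule above:

```python
import math

def split(A):
    """Write A = a * 10**x with 1 <= a < 10."""
    x = math.floor(math.log10(A))
    a = A / 10**x
    if a >= 10:  # guard against floating-point edge cases near powers of 10
        a, x = a / 10, x + 1
    elif a < 1:
        a, x = a * 10, x - 1
    return a, x

def agm(A, B):
    """The redefined AGM: if the mantissas are more than a factor of
    sqrt(10) apart, re-present the smaller one as (10b) * 10**(y-1)."""
    a, x = split(A)
    b, y = split(B)
    if a < b:  # order so that a >= b, using AGM(A,B) = AGM(B,A)
        (a, x), (b, y) = (b, y), (a, x)
    if a > math.sqrt(10) * b:
        b, y = 10 * b, y - 1
    return (a + b) / 2 * 10 ** ((x + y) / 2)

print(agm(2, 400))     # 30.0 (versus GM = 28.28...)
print(agm(999, 1000))  # ≈ 999.5, matching the worked example
```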
In the example above with $A=999$ and $B=1000$ we write $B=10\cdot 10^2$ and find, with this new definition of the $\mathrm{AGM}$: \begin{align*}\mathrm{AGM}(999,1000)=\mathrm{AM}(9.99,10) \cdot 10^{\mathrm{AM}(2,2)}=999.5. \end{align*} This coincides with the arithmetic mean of the two numbers and is really close to the geometric mean. This is looking promising.

If we define the $\mathrm{AGM}$ of two numbers $A$ and $B$ in the way explained above, we get the following two inequalities: \begin{align*} (I) \quad & \mathrm{GM}(A,B)\leq \mathrm{AGM}(A,B) \leq \mathrm{GM}(A,B) \cdot 1.2 \\ (II) \quad & \mathrm{GM}(A,B) \leq \mathrm{AGM}(A,B) \leq \mathrm{AM}(A,B) \end{align*} Both inequalities together mean that not a lot can go wrong when using the $\mathrm{AGM}$ with the appropriate presentation of the numbers: the $\mathrm{AGM}$ is bigger than the $\mathrm{GM}$, but exceeds it by at most 20%, and it is always smaller than the $\mathrm{AM}$. As a consequence, the $\mathrm{AGM}$ will always be between $A$ and $B$, so it is indeed a “mean” of some kind.

A proof of these two inequalities

(I) We only have to find the maximum of $R=\mathrm{AGM}/\mathrm{GM}$. Due to the discussion above we can assume that $\sqrt{10}b\geq a \geq b$, but $a$ can now be bigger than 10. The latter is not a problem though. The maximum of $R=\mathrm{AGM}(A,B)/\mathrm{GM}(A,B)=\mathrm{AM}(a,b)/\mathrm{GM}(a,b)$ is attained when $a$ and $b$ are maximally far apart, i.e. $r=\sqrt{10}$, so $\max\left(\frac{\mathrm{AGM}(A,B)}{\mathrm{GM}(A,B)}\right)=\frac12 \cdot \left(10^{1/4}+10^{-1/4}\right) \approx 1.2.$

(II) We will show that $\mathrm{AGM}(A,B)/\mathrm{AM}(A,B) \leq 1$. Let’s drop the assumption that $a\geq b$. Instead we assume, again without loss of generality, that $x\geq y$, so that we can set $z:=x-y \geq 0$. For the ratio $r=a/b$ we have $\sqrt{10} \geq r \geq 1/\sqrt{10}$.
If $r$ fell outside this interval, we would have had to change the presentation of one of the numbers before calculating the $\mathrm{AGM}$. Dividing the numerator and denominator of the above ratio by $B$ we get: $\frac{\mathrm{AGM}(A,B)}{\mathrm{AM}(A,B)}=\frac{(1+r) \cdot 10^{z/2}} {1+r \cdot 10^z}.$ So we look for an upper bound of the function $f_z(r):=\frac{(1+r) \cdot 10^{z/2}} {1+r \cdot 10^z}$ when varying $z$ and $r$, and want to show that this upper bound is less than or equal to 1. Note that we only have to check integer $z\geq 0$ (the result is actually false if we allow any real $z$).

For $z=0$, we have $f_0(r)=1$ for any $r$, and hence $\mathrm{AGM}=\mathrm{AM}$. For a fixed $z \geq 1$ we can differentiate the function $f_z(r)$ with respect to $r$ and find that the slope is always negative. Hence for a fixed $z$, the function $f_z(r)$ attains its maximum when $r$ is smallest, i.e. $r=1/\sqrt{10}$, so we are left to show that $f_z(1/\sqrt{10})=\frac{(1+10^{-1/2})10^{z/2}}{1+10^{z-1/2}}\leq 1.$ For $z=1$ we have equality again and $\mathrm{AGM} = \mathrm{AM}$. For $z\geq 2$ we can write $z=2+z’$ with $z’$ an integer $\geq 0$. We get the following chain of inequalities: $f_z\left(\frac{1}{\sqrt{10}}\right)=\frac{(1+10^{-1/2})10^{(2+z’)/2}}{1+10^{3/2 + z’}}<\frac{2\cdot 10^{1+z’/2}}{10^{3/2+z’}}\leq \frac{2}{\sqrt{10}}<1.$ This proves the second inequality. ☐

In summary, modifying the definition of the $\mathrm{AGM}$ to ensure that the ratio of the “leading characters” is as close to 1 as possible makes sure that the $\mathrm{AGM}$ works well, even in the bad cases.

The Aperiodical by Peter Rowlett - 1M ago

I gave a talk on Fermi problems and a method for approaching them using the approximate geometric mean at the Maths Jam gathering in 2017. This post is a write-up of that talk with some extras added in from useful discussion afterwards.
Enrico Fermi apparently had a knack for making rough estimates with very little data. Fermi problems are problems which ask for estimations for which very little data is available. Some standard Fermi problems:

• How many piano tuners are there in New York City?
• How many hairs are there on a bear?
• How many miles does a person walk in a lifetime?
• How many people in the world are talking on their mobile phones right now?

Hopefully you get the idea. These are problems for which little data is available, but for which intelligent guesses can be made. I have used problems of this type with students as an exercise in estimation and making assumptions. Inspired by a tweet from Alison Kiddle, I have set these up as a comparison of which is bigger from two unknowable things. Are there more cats in Sheffield or train carriages passing through Sheffield station every day? That sort of thing. The point of these is not to look up information or make wild guesses, but instead to come up with a back-of-the-envelope, ‘wrong, but useful‘, order-of-magnitude estimate. Some ‘rules’, if you want to play with these the way I would:

• don’t look up information;
• don’t make precise calculations using calculator or computer;
• be imprecise – there are 400 days in a year, people are 2m tall, etc.

One approach is to estimate by bounding – come up with numbers that are definitely too small and too large, and then use an estimate that is an average of these. But which average? Say I think some quantity is bigger than 2 but smaller than 400. The arithmetic mean would be $\mathrm{AM}(2,400)=\frac{2+400}{2}=201$. The geometric mean would be $\mathrm{GM}(2,400)=\sqrt{2\times 400} = 28.28\!\ldots$. Which is a better estimate? The arithmetic mean is half the upper bound, but 100 times the lower bound. On this basis, for an ‘order of magnitude’-type estimate, you might agree that the geometric mean is a better average to use here.
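The lopsidedness of the arithmetic mean is easiest to see on a multiplicative scale. A quick sketch (deliberately breaking the ‘no calculator or computer’ rule, purely to illustrate the point):

```python
import math

low, high = 2, 400  # bounds: definitely too small, definitely too large

am = (low + high) / 2        # 201.0
gm = math.sqrt(low * high)   # 28.28...

# How far is each mean from the two bounds, as a multiplicative factor?
print(am / low, high / am)   # 100.5 vs about 2: very lopsided
print(gm / low, high / gm)   # about 14.1 both ways: evenly placed
```

The geometric mean sits the same factor away from both bounds, which is exactly what you want from an order-of-magnitude estimate.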
Following my Maths Jam talk, Rob Low said that the geometric mean makes more sense for an order of magnitude estimate, since it corresponds to the arithmetic mean of logs. To see this, consider \begin{align*} \log(\mathrm{GM}(A, B)) &= \log(\sqrt{AB}) \\ &= \log((AB)^{\frac{1}{2}}) \\ &= \frac{1}{2}\log(AB) \\ &= \frac{1}{2}(\log(A) + \log(B)) = \mathrm{AM}(\log(A), \log(B)) \text{.} \end{align*} So, geometric mean it is. However, taking a square root is not usually easy in your head, and we want to avoid making precise calculations by calculator or computer. Enter the approximate geometric mean.

Approximate Geometric Mean

For the approximate geometric mean, take $2=2 \times 10^0$ and $400=4 \times 10^2$; then the AGM of $2$ and $400$ is: \begin{align*} \frac{2+4}{2} \times 10^{\frac{0+2}{2}} &= 3 \times 10^1\\ &= 30 \approx 28.28\!\ldots = \sqrt{2\times 400} = \mathrm{GM}(2,400) \text{.} \end{align*}

Why does this work? Let $A=a \times 10^x$ and $B=b \times 10^y$. Then \begin{align*} \mathrm{GM}(A,B)=\sqrt{AB}&=\sqrt{ab \times 10^{x+y}}\\ &=\sqrt{ab} \times 10^{\frac{x+y}{2}} \text{,} \end{align*} and $\mathrm{AGM}(A,B) = \frac{a+b}{2} \times 10^{\frac{x+y}{2}}\text{.}$ Setting aside the $10^{\frac{x+y}{2}}$ term, which appears in both averages, is it obvious that, for single-digit numbers $>0$, $\mathrm{GM}(a,b)=\sqrt{ab} \approx \frac{a+b}{2}=\mathrm{AM}(a,b) \text{?}$

There is a standard result that says \begin{align*} 0 \le (x-y)^2 &= x^2 - 2xy + y^2\\ &= x^2 + 2xy + y^2 - 4xy\\ &= (x+y)^2 - 4xy \text{.} \end{align*} Hence \begin{align*} 4xy &\le (x+y)^2\\ \sqrt{xy} &\le \frac{x+y}{2} \text{,} \end{align*} with equality iff $x=y$. So $\mathrm{GM}(a,b)\le\mathrm{AM}(a,b)$, but are they necessarily close? By exhaustion, it is straightforward to show (for single-digit integers, given the rule to round numbers where possible) that the largest error occurs when $a=1$ and $b=9$.
Then $\sqrt{1 \times 9} = 3 \ne 5 = \frac{1+9}{2}$, and the error is $2$, which, relative to the biggest number $9$, might be seen as quite significant. I’d say you are not likely to use this method if the numbers are of the same order of magnitude, because the idea is to come up with fairly wild approximations; if the two numbers were quite close, it might be sensible to think of them as not really different at all. Then the error is going to be at least one order of magnitude smaller than the upper bound, i.e. $10^\frac{x+y}{2} \ll 10^y$.

For example, if your numbers were $1$ and $900$ (as a pretty bad case), then: $\mathrm{GM}(1,900)=\sqrt{900}=30 \ne 50=\mathrm{AGM}(1,900)$ and a difference of $20$ on a top value of $900$ is not as significant as a difference of $2$ was on a top value of $9$. So I suppose I would argue that this makes the error relatively insignificant.

However, this thinking left me somewhat unsatisfied. I felt there ought to be a nicer way to demonstrate why the approximate geometric mean works as an approximation for the geometric mean. Following my talk at Maths Jam, Philipp Reinhard has been thinking about this, and he will share his thoughts in a post here in a few days.

One edge case

I didn’t have time to fit into my talk what I would recommend if the two numbers differed by an odd number of orders of magnitude. For example, $\mathrm{AGM}(1,1000)$ generates another square root in $1 \times 10^{\frac{3}{2}}$ – precisely what we were trying to avoid! What I have recommended to students is to simply rewrite one of the numbers so that the difference in exponents is even.
For example, writing $1=1 \times 10^0$ and $1000 = 10 \times 10^2$ gives $\mathrm{AGM}(1,1000)=5.5 \times 10^{1} \text{.}$ Following Maths Jam, the esteemed Colin Beveridge made the sensible suggestion of just treating $10^{\frac{1}{2}}$ as $3$, making \begin{align*} &\mathrm{AGM}(1,1000)\\ &= 1 \times 10^{\frac{3}{2}}\\ &\approx 1 \times 3^3 = 27\text{.} \end{align*} This increases our problems, though, because we have the potential to deal with larger differences (hence larger errors) than when dealing with single-digit numbers. Actually, it was wondering why this increased error happens that got me thinking seriously on this topic in the first place. I’ll stop now to let Philipp share what he has been thinking on this.

The Aperiodical by Peter Rowlett - 1M ago

On 31st January 2008, I gave my first lecture. I was passing my PhD supervisor in the corridor and he said “there might be some teaching going if you fancy it, go and talk to Mike”. And that, as innocuous as it sounds, was the spark that lit the flame. I strongly disliked public speaking, having hardly done it (not having had much chance to practise in my education to date – I may have only given one talk in front of people to that point, as part of the assessment of my MSc dissertation), but I recognised that this was something I needed to get over. I had just started working for the IMA, where my job was to travel the country giving talks to undergraduate audiences, and I realised that signing up to a regular lecture slot would get me some much-needed experience. I enjoyed teaching so much that I have pursued it since.

I just noticed that last Wednesday was ten years since that lecture. It was basic maths for forensic science students. I was given a booklet of notes and told to either use it or write my own (I used it), had a short chat about how the module might work with another lecturer, and there I was in front of the students.
That was spring in the academic year 2007/8 and this is the 21st teaching semester since then. This one is the 15th semester during which I have taught — the last 12 in a row, during which I got a full-time contract and ended ten years of part-time working. I have this awful feeling this might lead people to imagine I’m one of the people who knows what they are doing.

P.S. The other thing that I started when I started working for the IMA was blogging – yesterday marks ten years since my first post. So this post represents the start of my second ten years of blogging.

The Aperiodical by Katie Steckles - 1M ago

The next issue of the Carnival of Mathematics, rounding up blog posts from the month of January, and compiled by Rachel, is now online at The Math Citadel. The Carnival rounds up maths blog posts from all over the internet, including some from our own Aperiodical. See our Carnival of Mathematics page for more information.

The Aperiodical by Samuel Hansen - 1M ago

If you pay attention to United States politics you have probably noticed that mathematics is currently enjoying a rare moment of relevance. You probably also know this is not happening because all of a sudden politicians have decided that mathematics is clearly the coolest thing in the world, even though it clearly is, but instead because gerrymandering has become one of the major issues du jour. For those of you lucky enough not to know what gerrymandering is, let me give you a quick précis.
Named after Elbridge Gerry – it should be pronounced like Gary and not Jerry – and a district which slightly resembled a salamander, which he signed into law as the governor of Massachusetts, gerrymandering has come to be the blanket term for the redrawing of political districts in the United States in a way that provides political gain for the party conducting the redrawing. This is primarily done through either packing, drawing a district so all of your opponents’ votes are concentrated in a small number of districts and therefore can not meaningfully affect others, or cracking, splitting up the opponents’ votes among many different districts so they have less influence on any of them.

This has generally been considered to be totally legitimate, and smart, political maneuvering in the US, and upheld as legal in the courts, unless it can be proven the gerrymandering was done based on race rather than partisanship. The reason gerrymandering is such a hot topic is that the courts might just be changing their views regarding partisan gerrymandering, and a big factor behind this is mathematics.

There was an argument in front of the Supreme Court late last year, in the case of Gill v. Whitford, about partisan gerrymandering in my home state of Wisconsin, which had mathematics as a central pillar in the arguments against the current district lines. Even more recently the Pennsylvania Supreme Court threw out their current districts and demanded they be redrawn, and while I am not sure if mathematics played a large role in them getting thrown out it certainly will when they are redrawn. As gerrymandering is enjoying its moment in the sun, it is only fair that the mathematician playing the biggest role in changing how it is all being thought about is called Moon.
Moon Duchin is an Associate Professor at Tufts University and the creator of the Metric Geometry and Gerrymandering Group which, through a series of conferences, is applying cutting-edge mathematics to the redistricting problem, training mathematicians to be expert witnesses on gerrymandering for court proceedings, and providing teachers with lesson plans and guidance on how to implement them (there is one more conference in California coming up in March and a big workshop happening in August if you want to get involved). The really big news though is that, as of January 26, Duchin is working as a consultant for Governor Wolf in Pennsylvania, with the job of helping to make sure their redrawn congressional district map is fair.

I have had the joy of talking to Moon for my podcast Relatively Prime about her work with the MGGG, and I watched her give a talk about gerrymandering at the 2018 Joint Mathematics Meeting. I do not think I have ever seen a more enraptured audience at a mathematics conference: there were a lot of people in the room, and each and every one of them was paying attention. I cannot think of a better person, from a mathematical ability perspective as well as a public engagement one, to be the face of this for mathematics.

It is too bad it has taken something so awful as gerrymandering to get mathematics a seat at the table in US political discourse, and even though I have spent a huge amount of my life trying to convince people mathematics is something we should all care about, I would happily not have people talk about it if it meant we had no gerrymandering. That said, I am glad we have mathematicians like Moon Duchin who are willing to take this battle on in front of not only the mathematical community but an ever-increasing portion of the politically engaged public, not to mention Governor Wolf and lawyers like those in Gill v. Whitford who are willing to reach past their comfort zone and let mathematics play a central role in their work.
There is not going to be a clean, perfect solution to all of this, but hopefully with mathematicians like Moon involved it will end up a lot better than where it is now.

The Aperiodical by Colin Beveridge - 1M ago

Did you read Cédric Villani’s Birth of a Theorem? Did you have the same reaction as me, that all of the mentions of the technical details were incredibly impressive, doubtless meaningful to those in the know, but ultimately unenlightening? Writing about maths, especially deep technical maths, so that a reader can follow along with it is hard – the Venn diagram of the set of people who can write clearly and the set of people who understand the maths, two relatively small sets, has a yet smaller intersection. Vicky Neale sits squarely inside it, and Closing The Gap has gone straight into my top ten “books to give to interested students”.

Here’s a clever way to structure a maths book (I have taken copious notes): follow the development of a difficult idea or discovery chronologically, but intersperse the action with background that puts the discovery in context. That’s not a new structure – but it’s tricky to pull off: you have to keep the difficult idea from getting too difficult, and keep the background at a level where an interested reader can follow along and either say “yes, that’s plausible” or better “wait, let me get a pen!”. This is where Closing The Gap excels.

Neale takes as the difficult idea the Twin Primes Conjecture, and specifically the work that followed from Yitang Zhang’s lightning-bolt discovery in 2013 that infinitely many pairs of primes are separated by at most 70,000,000 (which sounds like a lot… but is very small compared to “no upper limit”) – especially the Polymath projects and the work of James Maynard in reducing the bound to either 600 (unconditionally) or 12 (if the Elliott-Halberstam conjecture is true – a bound later reduced to 6 by Polymath8b).
The Elliott-Halberstam conjecture? What’s that? Neale takes the time to explain, by way of a mathematical pencil, the flavour of the conjecture, without getting bogged down in the technical details; she tells us enough that the story makes sense, and enough that we could go and find out more if we wanted. Because of Neale’s position in the Venn diagram, she can pull off this kind of thing, making maths accessible without losing accuracy – she’s meticulous about saying “there’s more to this” when there’s more to something.

This attention to detail is possibly overdone in places – I found myself rolling my eyes from time to time at in-text reminders that I met Terry Tao in a previous chapter, or that we’d hear more about such-and-such in a future one, which I suppose is an upshot of deciding to do without footnotes. This is literally my only mild criticism of the book; I’m even in thrall to the quality of the paper it’s printed on.

Closing The Gap communicates the excitement, frustration and interconnectedness of top-tier mathematical research, including the relatively new approaches pioneered by Tim Gowers (among others) with the Polymath project. The book’s introduction starts with an extended analogy comparing mathematics to climbing (we know of a MathsJam talk about that!) – how something impossible gradually becomes possible, then difficult, then accessible to novices with the help of a guide. Neale sets herself up as this guide, and succeeds brilliantly.

Closing the Gap, Oxford University Press

The Aperiodical by Christian Lawson-Perfect - 2M ago

This tweet from the QI Elves popped up on my Twitter timeline:

The odds of being crushed by a meteor are considerably lower (i.e. more likely) than those of winning the jackpot on the National Lottery.
— Quite Interesting (@qikipedia) January 11, 2018

In the account’s usual citationless factoid style, the Elves state that you’re more likely to be crushed by a meteor than to win the jackpot on the lottery. The replies to this tweet were mainly along the lines of this one from my internet acquaintance Chris Mingay:

Should we not be getting almost weekly stories of people being crushed by a meteor then? — Chris Mingay (@GhostMutt) January 11, 2018

Yeah, why don’t we hear about people being squished by interplanetary rocks all the time? I’d tune in to that! A couple of other helpful sorts have provided some extra data as context for this fact.

I asked on Twitter if any turbonerds keep a record of every jackpot ever, and of course they do: Peter Rowlett and Tim Stirrup both provided me with a link to Richard K. Lloyd’s comprehensive table, which reckons there have been 4749 winners, of which 3220 became millionaires. 4750 people have ever won the lottery (for a definition of ‘won’ that might not be the one we want, but it gives us an order of magnitude).

According to their website, 4750 people have become millionaires since 1994 from UK lotto wins, so how many have been crushed? — ste-b (@worldwarste) January 11, 2018

And only one person ever has been crushed by a meteor:

How can this possibly be true when only one person has ever been hit by a meteor in recorded history? — Dan (@dev_meltus) January 11, 2018

I immediately hit the AMBIGUOUS PHRASING OF ODDS KLAXON. (Hey, QI like to do it to their guests, so why can’t I?) The statement sounds wildly incorrect on first inspection, so I reckon we’re not talking about the same kinds of odds. It must be the case that:

• someone has worked out the odds of being killed by a meteor,
• someone has worked out the odds of winning the lottery, and
• someone has compared those two numbers.

I assume at least the first two someones were not QI Elves, and I reckon the third one probably wasn’t either. So, where did QI get their fact?
A search for “meteor lottery odds” got me this story on independent.co.uk published five days before QI’s tweet, so that’s probably their source. That links to “Review Journal”, a generically authoritative-sounding title, which turns out to be the Las Vegas Review Journal, who in 2015 published an article by someone affiliated with gobankingrates.com titled “20 things more likely to happen to you than winning the lottery”.

That cheery listicle cites a 2008 article by Phil Plait on his Bad Astronomy blog, where he cites Alan Harris’ answer to the Fermi question of working out your odds of being killed by a meteor, directly or indirectly. The “crushed” phrasing, which is a stronger statement than the one Harris looked at, seems to originate with the Las Vegas Review Journal. Maddeningly, Plait doesn’t give a citation for Alan Harris’s calculation and I can’t find a better source on Google, so the search stops here.

After all of that chasing, I’ve got a kind-of reputable source for the “1 in 700,000” odds of being killed by a meteor presented in the Independent article. That’s much better odds than the 1 in 45,057,474 chance of winning the lottery claimed by operators Camelot. We hear about people winning the lottery fairly often, so why isn’t “meteor squish” a journalistic cliché like “bus plunge”?

Well, the meteor figure is your lifetime odds of being killed, and the lottery figure is your odds of winning each time you play. That’s it – they’re measured in different units, effectively. Plait’s Bad Astronomy piece contained a good explanation of what the odds meant, but that got lost when the headline figure was spread in factoid form.

So we can’t compare the two numbers as stated – that’d be like me saying I’m taller than you are old. What can we do to get numbers for meteors and lotteries that we can compare? One option is to assume both take place once – a meteor hits Earth, and you play the lottery.
We know the odds of winning the lottery in one attempt, and one of Harris’s assumptions in his model was that an asteroid impact would kill everyone – so your probability of being killed is 1. No contest – you’re way more likely to be killed by an asteroid that hits Earth than for the lottery ticket you just bought to be a winner.

A more reasonable approach might be to look at your odds within a certain period of time. We’ve already got a figure of around 1 in 700,000 for being killed by a meteor in a 70-year lifespan, so we just need to get the corresponding figure – what are your odds of winning the National Lottery at least once in your lifetime?

Clearly, it depends on how often you play. My personal odds are zero – I’ve never so much as bought a scratchcard. Conversely, if you buy enough tickets, you can guarantee you win, a tactic executed to great success by Voltaire and later on some MIT students. Those strategies both relied on oversights in the rules of their respective lotteries to make them profitable, but if you’ve got a fortune to spare you could buy a National Lottery ticket corresponding to each combination of six balls and guarantee that exactly one of them will win.

For the sake of getting a reasonable number, let’s say you buy one ticket for each draw. There are two draws each week, so 104 draws each year. (At this point I wanted to use the fact that you can only play the lottery once you’re 16, and the life expectancy in the UK is 81.2 years, but I’ll stick with 70 years of playing so we can compare with the meteor number.) So your odds of winning the lottery at least once in 70 years are

$1 - \left( 1 - \frac{1}{45057474} \right)^{104 \times 70} \approx 1 \text{ in } 6190$

That’s a way, way lower number than the meteor number. So you’re vastly more likely to win the lottery in your lifetime than you are to be killed by a world-ending meteor – over 100 times more likely, in fact.
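The back-of-envelope comparison above can be sketched in a few lines; the per-ticket odds and the 1-in-700,000 lifetime meteor figure are the ones quoted in the post:

```python
# Chance of at least one jackpot in 70 years of playing twice a week,
# at Camelot's stated odds of 1 in 45,057,474 per ticket.
p_per_draw = 1 / 45_057_474
draws = 104 * 70  # two draws a week for 70 years

p_at_least_one_win = 1 - (1 - p_per_draw) ** draws
print(f"about 1 in {1 / p_at_least_one_win:.0f}")  # roughly 1 in 6190

# Compare with the quoted lifetime odds of being killed by a meteor.
p_meteor = 1 / 700_000
print(p_at_least_one_win / p_meteor)  # the lottery win is over 100x more likely
```

The ratio comes out at roughly 113, matching the “over 100 times more likely” claim.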
And if a meteor did kill everyone, you’d be unlikely to read about it in the news the next day.
Wednesday, March 18, 2020

Advanced Math - Series Convergence Calculator, Telescoping Series Test

Last blog post, we went over what an alternating series is and how to determine if it converges using the alternating series test. In this blog post, we will discuss another infinite series, the telescoping series, and how to determine if it converges using the telescoping series test.

If it isn’t clear right away, telescoping is synonymous with the word collapsing. A telescoping series is a series where almost all the terms cancel with the preceding or following term, leaving just the initial and final terms, i.e. a series that can be collapsed into a few terms. Let’s see what this looks like:

$∑_{n=1}^∞\frac{1}{n(n+1)}= ∑_{n=1}^∞\left(\frac{1}{n}-\frac{1}{n+1}\right)$

whose $n$-th partial sum is

$\left(1-\frac{1}{2}\right)+\left(\frac{1}{2}-\frac{1}{3}\right)+\left(\frac{1}{3}-\frac{1}{4}\right)+\cdots+\left(\frac{1}{n}-\frac{1}{n+1}\right)=1-\frac{1}{n+1}$

As you can see, we are able to cancel out all terms except the first and last. Now that we’ve discussed what a telescoping series is, let’s go over the telescoping series test.

Telescoping Series Test: For a finite upper boundary,

$∑_{n=k}^N(a_{n+1}-a_n )=a_{N+1}-a_k$

For an infinite upper boundary, if $a_n→0$*, then

$∑_{n=k}^∞(a_{n+1}-a_n )= -a_k$

*If $a_n$ doesn’t converge to 0, then the series diverges.

In regards to infinite series, we will focus on the infinite upper boundary scenario. In order to use this test, you will need to manipulate the series formula to equal $a_{n+1}-a_n$, where you can easily identify what $a_{n+1}$ and $a_n$ are. Also, please note that if you are able to manipulate the series into this form, you can confirm that you have a telescoping series. With practice, this will come more naturally. Let’s see some examples to better understand.

$∑_{n=1}^∞\left(\frac{5}{n}-\frac{5}{n+1}\right)$

1. Convert the series into the form $a_{n+1}-a_n$:
$\frac{5}{n}-\frac{5}{n+1}= -\frac{5}{n+1}-\left(-\frac{5}{n}\right)$
$a_{n+1}=-\frac{5}{n+1}, \quad a_n=-\frac{5}{n}$

2. Determine if $a_n→0$:
$a_n=-\frac{5}{n}= -5\left(\frac{1}{n}\right)$
Since $\frac{1}{n}$ converges to 0, $-\frac{5}{n}$ converges to 0.

3. Calculate $-a_k$ with $k=1$:
$-a_k=-\left(-\frac{5}{1}\right)=5$

The series converges to 5.

$∑_{n=1}^∞\frac{6}{(n+1)(n+2)}$

1. Convert the series into the form $a_{n+1}-a_n$:
$∑_{n=1}^∞\frac{6}{(n+1)(n+2)}= 6∙∑_{n=1}^∞\frac{1}{(n+1)(n+2)}$
$\frac{1}{(n+1)(n+2)}= -\frac{1}{n+2}-\left(-\frac{1}{n+1}\right)$
$a_{n+1}=-\frac{1}{n+2}, \quad a_n=-\frac{1}{n+1}$

2. Determine if $a_n→0$:
$a_n=-\frac{1}{n+1}→0$

3. Calculate $-a_k$ with $k=1$:
$-a_k=-\left(-\frac{1}{1+1}\right)=\frac{1}{2}$
$6∙∑_{n=1}^∞\frac{1}{(n+1)(n+2)} =6∙\frac{1}{2}=3$

The series converges to 3.

$∑_{n=1}^∞\frac{1}{4n^2-1}$

1. Convert the series into the form $a_{n+1}-a_n$:
$\frac{1}{4n^2-1}=-\frac{1}{2(2n+1)}-\left(-\frac{1}{2(2n-1)}\right)$
$a_{n+1}= -\frac{1}{2(2n+1)}, \quad a_n=-\frac{1}{2(2n-1)}$

2. Determine if $a_n→0$:
$a_n=-\frac{1}{2(2n-1)} =-\frac{1}{4n-2}→0$

3. Calculate $-a_k$ with $k=1$:
$-a_k=-\left(-\frac{1}{2(2∙1-1)}\right)=\frac{1}{2}$

The series converges to $\frac{1}{2}$.

The trickiest part of this is manipulating the series formula into $a_{n+1}-a_n$. Once you’re able to do this, the rest should be pretty simple. The key thing to remember about a telescoping series is that all the terms will cancel out, except the first and last term. For more help on telescoping series, check out Symbolab’s Practice.

Next blog post, I’ll go over the ratio test for series convergence.

Until next time,

Leah
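As a sanity check on the three examples above, the partial sums can be computed numerically; a quick sketch in plain floating-point arithmetic, using 10,000 terms:

```python
# Partial sums of the three telescoping series worked above,
# which should approach 5, 3 and 1/2 respectively.
def partial_sum(term, N):
    return sum(term(n) for n in range(1, N + 1))

s1 = partial_sum(lambda n: 5 / n - 5 / (n + 1), 10_000)
s2 = partial_sum(lambda n: 6 / ((n + 1) * (n + 2)), 10_000)
s3 = partial_sum(lambda n: 1 / (4 * n * n - 1), 10_000)

print(s1, s2, s3)  # close to 5, 3, 0.5
```

The remaining error in each case is just the tail term ($5/(N+1)$, $6/(N+2)$ and $\frac{1}{2(2N+1)}$), which shrinks to 0 as $N$ grows.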
# Derivatives

Objective: Thermodynamics has some specific methods of dealing with derivatives. This section shows the terminology and methods used.

## Derivatives

Much of thermodynamics involves the handling of derivatives. In thermodynamics we write the partial derivatives of f(x,y) as:

$\left (\frac{\partial f}{\partial x}\right )_y \text{ and } \left (\frac{\partial f}{\partial y}\right )_x$

We read the left derivative above as "the partial of f with respect to x, keeping y constant". Note that in thermodynamics we always specify what is being kept constant.

## Exact Differentials

If f = f(x,y), then the total differential df is:

$df = \left (\frac{\partial f}{\partial x}\right )_y dx + \left (\frac{\partial f}{\partial y}\right )_x dy$

If a differential can be written as the total differential of some function f(x,y), then it is known as an exact differential.

### Theorem

The differential $df=M(x,y)dx+N(x,y)dy$ is exact if and only if[1]

$\left (\frac{\partial M}{\partial y}\right )_x = \left (\frac{\partial N}{\partial x}\right )_y$

This theorem will be important later, especially when we discuss Maxwell's relations. However, for now there is a more important result.

### Path Independence and State Functions

If df is an exact differential, then its definite integral is equal to the function f evaluated at the limits of integration:

$\int^{(x_2,y_2)}_{(x_1,y_1)} df = f(x_2,y_2) - f(x_1,y_1)$

In other words, the value of the integral depends only on its initial and final points. It does not depend on what the function does between those two points. We say such an integral is path independent. Properties which are path independent are called State Functions. State functions are very important in thermodynamics because we usually do not know what goes on internally within a system. But we do know the beginning and ending conditions of a system. Pressure, volume, temperature, and internal energy are state functions. But heat and work are not.
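The exactness criterion and path independence are easy to verify numerically; the following sketch (an illustration, not part of the original page) uses M = 2xy and N = x², the total differential of f(x,y) = x²y:

```python
# Check exactness (dM/dy == dN/dx) and path independence for
# df = 2xy dx + x^2 dy, the total differential of f(x, y) = x^2 y.
def M(x, y): return 2 * x * y
def N(x, y): return x * x

def d_dy(g, x, y, h=1e-6):
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

def d_dx(g, x, y, h=1e-6):
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

# Exactness test: both partials equal 2x, here 3.0 at x = 1.5.
print(d_dy(M, 1.5, 2.0), d_dx(N, 1.5, 2.0))

# Integrate df from (0,0) to (1,1) along two different paths.
steps = 100_000
# Path A: along the x-axis to (1,0), then straight up to (1,1).
path_a = sum(M(i / steps, 0) / steps for i in range(steps)) \
       + sum(N(1, j / steps) / steps for j in range(steps))
# Path B: along the diagonal y = x (dx = dy = dt).
path_b = sum((M(i / steps, i / steps) + N(i / steps, i / steps)) / steps
             for i in range(steps))
print(path_a, path_b)  # both approach f(1,1) - f(0,0) = 1
```

Both paths give the same value, as the theorem promises; for a non-exact differential (say M = y, N = 0) the two path integrals would differ.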
## Derivative Rules

The following rules are useful for manipulating derivatives. In these, f = f(x,y,z).

1. $\left (\frac{\partial f}{\partial z} \right )_x =\left (\frac{\partial f}{\partial y} \right )_x\, \left (\frac{\partial y}{\partial z} \right )_x$
2. $\left (\frac{\partial f}{\partial x} \right )_z =\left (\frac{\partial f}{\partial x} \right )_y + \left (\frac{\partial f}{\partial y} \right )_x\,\left (\frac{\partial y}{\partial x} \right )_z$
3. $\left (\frac{\partial y}{\partial x} \right )_z = -\left (\frac{\partial y}{\partial z} \right )_x\, \left (\frac{\partial z}{\partial x} \right )_y$
4. $\left (\frac{\partial x}{\partial y} \right )_z\, \left (\frac{\partial y}{\partial z} \right )_x\, \left (\frac{\partial z}{\partial x} \right )_y=-1$

Note that the cyclic product in rule 4 is not plus one but minus one (rule 4 is sometimes known as Euler's relation). Rule 3 is just a rearrangement of rule 4 (remember that taking the reciprocal of a partial derivative is the same as switching the numerator and denominator). Rule 2 is very useful if you need to change what variable is kept constant.

Activity: Go to the Derivatives exercise page for an example and an exercise practicing manipulating derivatives.

## Notes

1. "A if and only if B" means "if A, then B and if B, then A"
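Rule 4 is easy to check numerically for a concrete constraint; the sketch below (illustrative, not from the page) uses z = xy, so that x = z/y, y = z/x, and z = xy, and estimates each partial with a central finite difference:

```python
# Numerical check of the cyclic (triple product) rule for z = x*y:
# (dx/dy)_z * (dy/dz)_x * (dz/dx)_y should equal -1.
def d(g, a, h=1e-6):
    return (g(a + h) - g(a - h)) / (2 * h)

x0, y0 = 1.3, 0.7
z0 = x0 * y0

dx_dy = d(lambda y: z0 / y, y0)   # (dx/dy) at constant z, since x = z/y
dy_dz = d(lambda z: z / x0, z0)   # (dy/dz) at constant x, since y = z/x
dz_dx = d(lambda x: x * y0, x0)   # (dz/dx) at constant y, since z = x*y

print(dx_dy * dy_dz * dz_dx)  # close to -1.0
```

The same check with the sign-free version of rule 3 would come out at +1 rather than -1, which is why the minus sign in rules 3 and 4 matters.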
# Roots of a Complex Number Calculator ## Find the roots of a complex number, roots of unity step by step The calculator will find the $n$-th roots of the given complex number using de Moivre's formula, with steps shown. If the calculator did not compute something or you have identified an error, or you have a suggestion/feedback, please write it in the comments below. Find $\sqrt{- \frac{5228171817}{100000000} - i}$.
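De Moivre's formula, which the calculator applies, can be sketched in a few lines of Python using the standard cmath module; the example reuses the number from the query above (n = 2 gives its two square roots):

```python
import cmath
import math

# n-th roots of w via de Moivre: if w = r*(cos t + i sin t), the roots are
# r**(1/n) * (cos((t + 2*pi*k)/n) + i*sin((t + 2*pi*k)/n)) for k = 0..n-1.
def nth_roots(w, n):
    r, t = cmath.polar(w)
    return [cmath.rect(r ** (1 / n), (t + 2 * math.pi * k) / n)
            for k in range(n)]

w = -5228171817 / 100000000 - 1j
for root in nth_roots(w, 2):
    print(root, root ** 2)  # each root squared recovers w (up to rounding)
```

The n roots are equally spaced around a circle of radius $r^{1/n}$, which is why raising any of them to the n-th power lands back on w.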
# Collecting Device Metrics

There are many system health vitals that are useful to track aside from crashes and reboots. The options are numerous, but here are a few examples:

- RTOS-related statistics
  - Amount of time spent in each RTOS task per unit time. This can help you understand if one task is starving the system
  - Heap high water marks
  - Stack high water marks
- Time the MCU was in different states
  - Stop, Sleep, Run mode
  - Time each peripheral was active
- Battery life drop per unit time
- Transport-specific metrics (LTE, WiFi, BLE, LoRa, etc.)
  - Amount of time the transport was connected
  - Number of connection attempts
  - Number of bytes sent over the transport per unit time

In the Memfault UI, you can configure Alerts based on these metrics as well as explore metrics collected for any device. Here is an example where the time bluetooth was connected, the number of bytes sent and the battery life were tracked. In Memfault's UI, the data that gets collected from each device over time is visualized in customizable graphs.

The Memfault SDK includes a "metrics" component that makes it easy to collect this type of information on an embedded device. In the sections below we will walk through how to get started with the component.

Prerequisite: This guide assumes you have already completed the minimal integration of the Memfault SDK. If you have not, check out the appropriate guide in the table below.

| MCU Architecture | Getting Started Guide |
| --- | --- |
| ARM Cortex-M | ARM Cortex-M Integration Guide |
| nRF Connect SDK | nRF Connect SDK Integration Guide |
| ESP32 ESP-IDF (Xtensa and RISC-V) | ESP32 ESP-IDF Integration Guide |
| ESP8266 | ESP8266 RTOS Integration Guide |
| Dialog DA1469x | DA1469x Integration Guide |
| NXP MCUXpresso RT1060 | NXP MCUXpresso SDK for i.MX RT Guide |
| Zephyr RTOS | Zephyr Integration Guide |

Rate Limiting: Ingestion of Metrics may be rate-limited. Avoid sending data more than once per hour per device.
### Defining Custom Metrics

All custom metrics can be defined with the MEMFAULT_METRICS_KEY_DEFINE macro in the memfault_metrics_heartbeat_config.def created as part of your port. In this guide we will walk through a simple example of tracking the high water mark of the stack for a "Main Task" in our application and the number of bytes sent out over a bluetooth connection.

```c
// File $PROJECT_ROOT/third_party/memfault/memfault_metrics_heartbeat_config.def
MEMFAULT_METRICS_KEY_DEFINE(MainTaskStackHwm, kMemfaultMetricType_Unsigned)
MEMFAULT_METRICS_KEY_DEFINE(BtBytesSent, kMemfaultMetricType_Unsigned)
MEMFAULT_METRICS_STRING_KEY_DEFINE(ManufDate, sizeof("2022-05-09"))
```

## Dependency Function Overview

The metrics subsystem uses the "timer" implemented as part of your initial port to control when data is aggregated into a "heartbeat". When the heartbeat subsystem is booted, a dependency function memfault_platform_metrics_timer_boot is called to set up this timer. Most RTOSs have a software timer implementation that can be directly mapped to the API, or a hardware timer can be used as well. The expectation is that the callback will be invoked every period_sec (which by default is once per hour).

The metrics subsystem supports a timer type (kMemfaultMetricType_Timer) which can be used to easily track durations (i.e. time spent in MCU stop mode) as well as overall system uptime. To support this, the memfault_platform_get_time_since_boot_ms() function implemented as part of the initial port is used. Typically this information is derived from either a system's Real Time Clock (RTC) or the SysTick counter used by an RTOS.

## Setting Metric Values

There's a set of APIs in components/include/memfault/metrics/metrics.h which can be used to easily update heartbeats as events take place. The updates occur in RAM so there is negligible overhead introduced. Here's an example:

```c
#include "memfault/metrics/metrics.h"

// [ ... ]

void bluetooth_driver_send_bytes(const void *data, size_t data_len) {
  memfault_metrics_heartbeat_add(MEMFAULT_METRICS_KEY(BtBytesSent), data_len);

  // [ ... code to send bluetooth data ... ]
}
```

String metrics are stored in the same heartbeat snapshot. The process for setting a string metric might look like this, for example:

```c
#include "memfault/metrics/metrics.h"

void set_manufacturing_date_metric(const char *manufacturing_date) {
  // set the manufacturing date string metric
  memfault_metrics_heartbeat_set_string(MEMFAULT_METRICS_KEY(ManufDate),
                                        manufacturing_date);

  // optionally, trigger a heartbeat to immediately capture the metric record
  memfault_metrics_heartbeat_debug_trigger();

  // optionally, trigger an upload of Memfault chunk data
  // [ ... code to trigger memfault upload ... ]
}
```

Note: If a string metric is not reported in a heartbeat interval, the previously reported value will not be overwritten by Memfault's backend. This can be used as a bandwidth optimization by only reporting values on bootup or when they change. For SDK versions 0.42.0 and above, if an integer metric is not set in a heartbeat interval, a null value is sent and ignored by Memfault's backend. For SDK versions before 0.42.0, a value of 0 is sent and recorded.

## Including Sampled Values in a Heartbeat

memfault_metrics_heartbeat_collect_data() is called at the very end of each heartbeat interval. By default this is a weak, empty function, but you will want to implement it if there's data you want to sample and include in a heartbeat (e.g. recorded RSSI, battery level, stack high water marks, etc.). The MainTaskStackHwm we are tracking in this guide is a good example of how to make use of this function.
```c
#include "memfault/metrics/platform/overrides.h"

// [...]

void memfault_metrics_heartbeat_collect_data(void) {
  // NOTE: When using FreeRTOS we can just call
  // "uxTaskGetStackHighWaterMark(s_main_task_tcb)"
  const uint32_t stack_high_water_mark = 0; // TODO: code to get high water mark

  memfault_metrics_heartbeat_set_unsigned(
      MEMFAULT_METRICS_KEY(MainTaskStackHwm), stack_high_water_mark);
}
```

## Initial Setup & Debug APIs

While integrating the heartbeat metrics subsystem or adding new metrics, there are a few easy ways you can debug and test the new code. Notably:

- memfault_metrics_heartbeat_debug_trigger() can be called at any time to trigger a heartbeat serialization (so you don't have to wait for the entire interval to get data to flush)
- memfault_metrics_heartbeat_debug_print() can be called to dump the current value of all the metrics being tracked
- The heartbeat interval can be reduced from the default 3600 seconds for debug purposes by setting MEMFAULT_METRICS_HEARTBEAT_INTERVAL_SECS in your memfault_platform_config.h to a shorter period such as 30 seconds.
## Metrics Storage

Metric events are stored in the in-memory ring buffer supplied to the memfault_metrics_boot() initialization function (the snippet below is from the ports/templates example):

```c
// initialize the event storage buffer
static uint8_t s_event_storage[1024];
const sMemfaultEventStorageImpl *evt_storage =
    memfault_events_storage_boot(s_event_storage, sizeof(s_event_storage));

// configure trace events to store into the buffer
memfault_trace_event_boot(evt_storage);

// record the current reboot reason
memfault_reboot_tracking_collect_reset_info(evt_storage);

// configure the metrics component to store into the buffer
sMemfaultMetricBootInfo boot_info = {
  .unexpected_reboot_count = memfault_reboot_tracking_get_crash_count(),
};
memfault_metrics_boot(evt_storage, &boot_info);
```

It may be necessary to adjust the size of the buffer to fit the application's needs; for example, if the device uploads data to Memfault infrequently, the buffer may need to be increased.

### Non-volatile Event Storage

The Memfault SDK provides a way to configure a non-volatile supplementary store for the event buffer.
# C++ open txt file and iterate through lines to find \n character to delete it

## Recommended Posts

Hi there, I am trying to do a comparison that finds \n at the end of the string. The problem is that when I load the string from a file it doesn't work. I mean: for a string set in the code, it works and returns LAST CHAR IS A RETURN:

```cpp
std::string str = "blablabla\n";
char lastChar = str.at(str.length() - 1);
if (lastChar == '\n') ALOG("LAST CHAR IS A RETURN");
```

but whenever I load a file into a vector<std::string> like this:

```cpp
typedef std::string AnsiString;

{
  std::ifstream file(fname.c_str());
  if (file.good() == false) {
    file.close();
    return;
  }

  AnsiString str;
  Count = 0;
  if (Strings.size() > 0) Strings.clear();
  while (std::getline(file, str)) {
    Strings.push_back(str);
    Count = Count + 1;
  }
  file.close();
}
```

almost every line (except the last one in some cases) ends with a \n character, but that first piece of code doesn't seem to find it. Since I load a text file containing a list of other filenames, I have to manually delete the \n on every line (which requires me to hit enter on the last line while editing in Notepad). Any idea why it doesn't find it?

##### Share on other sites

The std::getline() function uses '\n' as the default delimiter to extract a line; it discards the delimiter if it's found. You can specify a delimiter as the third argument. I'm not sure what you mean by having to manually delete the '\n' at every line; there should be no '\n' in your strings. What does your file look like?
##### Share on other sites

The file looks like:

```
brig/ship.spec
frigate/ship.spec
```

There is a \n at every line end in my vector<std::string> and I have to manually delete it, like:

```cpp
for (int i = 0; i < s->Count; i++) {
  AnsiString str = s.Strings[i];
  str.erase(str.length() - 1);
}
```

##### Share on other sites

It seems Android puts \r at the end, so I must now test whether the OS puts \n or \r there (or maybe Windows Notepad puts the \r there). Anyway, thanks Gooey for pointing that out.

Edited by WiredCat

##### Share on other sites

This is what I typically do to trim out leading and trailing whitespace:

```cpp
// strip whitespace
const string whitespace = " \t\r\n\f\v";
// trim trailing whitespace
input.erase(input.find_last_not_of(whitespace) + 1);
// trim leading whitespace
input.erase(0, input.find_first_not_of(whitespace));
```

Works fine with any strings I've worked with, whether they contain whitespace or not, even if it's an empty string.

##### Share on other sites

> It seems Android puts \r at the end, so I must now test whether the OS puts \n or \r there, or maybe Windows Notepad puts \r there. Anyway, thanks Gooey for pointing that out.

WiredCat, have you tried not using Windows Notepad? Replace it with notepad2 or notepad++ and always use "UTF-8 without BOM, Unix-style line endings" forever.
# Dimensional analysis

1. Dec 3, 2014

### princejan7

1. The problem statement, all variables and given/known data

How do I set up a matrix to find the combination of ( M(L^2)/T ) and I ( L^4 ) that results in units of M/( L^2 T^2 )?

2. Relevant equations

3. The attempt at a solution

I think it looks something like

$\begin{bmatrix}1&0\\2&4\\-2&0\end{bmatrix} * [a_1, a_2, a_3] = [1, -2, -2]$

but the dimensions of those matrices aren't right.

2. Dec 4, 2014

### Staff: Mentor

Given: Y = M * X. Are you trying to get a vector Y with units of measure of M/( L^2 T^2 ) from a vector X with units of measure M(L^2)/T multiplied with matrix M? Or is this a dot product?

3. Dec 5, 2014

### Stephen Tashi

That statement of the problem isn't clear. (What would "a combination" mean in this context?) Try stating the problem as it is actually worded.

Matrices can't be reliably displayed using ordinary typing. You can resort to LaTeX: https://www.physicsforums.com/help/latexhelp/ In the meantime, it might be better to use notation like [1,2,-2]^T to denote a column vector. To have valid multiplication in your work you'd have to multiply on the left by the row vector:

$\begin{bmatrix}a_1&a_2&a_3 \end{bmatrix} \begin{bmatrix}1&0\\2&4\\-2&0 \end{bmatrix} = \begin{bmatrix} 1\\-2\\-2 \end{bmatrix}$

but I don't know if that equation is appropriate, because I don't know what problem you are solving. Are you trying to work a problem similar to the examples shown in the Wikipedia article http://en.wikipedia.org/wiki/Buckingham_π_theorem ?

4. Dec 6, 2014

### haruspex

jedishrfu, Stephen, princejan7 is trying to solve $\left(\frac {ML^2}T \right)^{a_1}\left(L^4\right)^{a_2} = \frac M{L^2T^2}$. This leads to the matrix equation shown... except there is no $a_3$, the $a_1, a_2$ should be a column vector, and either the problem has been stated incorrectly or the -2 at lower left of the matrix should be -1.
$\begin{bmatrix}1&0\\2&4\\-1&0 \end{bmatrix} \begin{bmatrix}a_1\\a_2 \end{bmatrix}= \begin{bmatrix} 1\\-2\\-2 \end{bmatrix}$

Note: there is no solution. princejan7, is there perhaps some third input parameter? Or is the -2 right in the matrix (which would permit a solution)?
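haruspex's observation can be checked mechanically. The sketch below (an illustrative helper, not from the thread) matches the exponents of M, L and T in $(ML^2/T)^{a_1}(L^4)^{a_2} = M/(L^2T^2)$, solving the first two equations exactly and testing the third for consistency, for both the -1 and -2 variants of the bottom row:

```python
from fractions import Fraction

# Each row is the (a1, a2) coefficients of one exponent equation;
# rhs holds the target exponents of M, L and T.
def solve_exponents(rows, rhs):
    """Solve rows @ (a1, a2) = rhs exactly; return None if inconsistent."""
    (c00, c01), (c10, c11), (c20, c21) = rows
    det = c00 * c11 - c01 * c10  # solve from the first two equations...
    a1 = Fraction(rhs[0] * c11 - c01 * rhs[1], det)
    a2 = Fraction(c00 * rhs[1] - rhs[0] * c10, det)
    # ...then check the third equation for consistency
    return (a1, a2) if c20 * a1 + c21 * a2 == rhs[2] else None

rhs = [1, -2, -2]
print(solve_exponents([(1, 0), (2, 4), (-1, 0)], rhs))  # None: inconsistent
print(solve_exponents([(1, 0), (2, 4), (-2, 0)], rhs))  # (1, -1): consistent
```

With -1 in the bottom row (as the quantities were stated) the three equations are inconsistent, exactly as haruspex says; with -2 they admit the unique solution a1 = 1, a2 = -1.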
# Centralizers in hyperbolic manifolds are cyclic

I am having trouble seeing why this statement is true: "If S admits a hyperbolic metric, then the centralizer of any non-trivial element of $\pi_1(S)$ is cyclic. In particular, $\pi_1(S)$ has trivial center." This is on page 23 of A Primer on Mapping Class Groups by Farb and Margalit.

I have seen some arguments online that do this explicitly by listing the possible elements of $\pi_1(S)$ as deck transformations on $H^2$, the universal cover of $S$. However, the book I am using seems to have a different proof. The argument it gives is that if $a$ is centralized by $b$, then $a, b$ have the same fixed points, which I understand mostly. Then the authors claim that since the action of $\pi_1(S)$ on $H^2$ is discrete, the centralizer of $a$ in $\pi_1(S)$ is infinite cyclic. I understand that the action is discrete, but I do not understand why this implies that the centralizer should be infinite cyclic.

Also, I understand that if $\pi_1(S)$ had non-trivial center, then $\pi_1(S)$ would equal the centralizer of some non-trivial element, so $\pi_1(S) \cong \mathbb{Z}$. The book then claims that this implies that $S$ has infinite volume, which is a contradiction. But I do not understand why $\pi_1(S) \cong \mathbb{Z}$ implies that $S$ has infinite volume.

- $a$ and $b$ are hyperbolic isometries, and since they have the same fixed points, they have the same axis. Thus we can map the group they generate to $\mathbb{R}$, using the translation length function. This is an injective homomorphism, so this group is a discrete subgroup of $\mathbb{R}$. – user641 Oct 8 '12 at 7:21
Since the centralizer is discrete, there is a shortest translation. This translation will generate the centralizer. If $a$ is hyperbolic, then the centralizer is a group of hyperbolic translations that preserve a geodesic in $H^2$ and act as a translation on this geodesic. Again, since the centralizer is discrete, there is an element which translates the geodesic a shortest distance. This element will generate the centralizer. Now if $\pi_1(S)$ is cyclic then it is either generated by a parabolic element or generated by a hyperbolic element. In either case a fundamental domain for the action of $\pi_1(S)$ on $H^2$ will have infinite area. Think about what a fundamental domain will look like in an upper half space model. In the parabolic case, it will be a region between two "vertical" geodesics. In the hyperbolic case it will be the region between two "concentric" geodesics. In either case, it will contain half of a Euclidean disk centered at a point in the boundary of $H^2$ and this has infinite hyperbolic area.
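The infinite-area claim can be made concrete with a short computation (a sketch, using the standard upper half-plane area element $dA = \frac{dx\,dy}{y^2}$). In the parabolic case, a fundamental domain between the vertical geodesics $x=0$ and $x=1$ has area

$$\int_0^1 \int_0^\infty \frac{dy\,dx}{y^2} = \int_0^1 \left[-\frac{1}{y}\right]_{y \to 0}^{\,\infty} dx = \infty,$$

since $1/y \to \infty$ as $y \to 0$. The same divergence appears for any region containing half of a Euclidean disk centered at a point of the boundary, which is why the fundamental domain has infinite hyperbolic area in the hyperbolic case as well.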
MODERN ROBOTICS: MECHANICS, PLANNING, AND CONTROL - Practice Exercises

Contributions from Tito Fernandez, Kevin Lynch, Huan Weng, and Zack Woodruff. December 6, 2018. This is a supplemental document to Modern Robotics: Mechanics, Planning, and Control, Kevin M. Lynch and Frank C. Park, Cambridge University Press, 2017. Original material from this document may be reused provided proper …

Modern Robotics presents the state-of-the-art, screw-theoretic techniques capturing the most salient physical features of a robot in an intuitive geometrical way. It is ideal for self-learning, or for courses, as it assumes only freshman-level physics, ordinary differential equations, linear algebra and a little bit of computing background. Students with a freshman-level engineering background will quickly learn to apply these tools to the analysis, planning, and control of robot motion. The book's wiki, with software and other resources, is at http://hades.mech.northwestern.edu/index.php/Modern_Robotics; the wiki also says where you can get the solutions manual.

Online courses (Coursera): the "Modern Robotics: Mechanics, Planning, and Control" specialization, offered by Northwestern University, consists of six short courses, beginning with "Foundations of Robot Motion". It follows the textbook (Lynch and Park, Cambridge University Press 2017) and provides a rigorous treatment of spatial motion and the dynamics of rigid bodies, employing representations from modern screw theory and the product of exponentials formula. It is serious preparation for serious students who hope to work in the field of robotics or to undertake advanced study; it is not a sampler.

Course notes: Chapters 2 and 3 cover configuration space, degrees of freedom, and rigid-body motions. Chapter 4 covers forward kinematics and Chapter 5 covers velocity kinematics and statics; velocity kinematics is the problem of calculating the twist of the end-effector given a set of joint velocities. These chapters focus on open-chain structures. The primary purpose of the code library provided with the book is to be easy to read and educational, reinforcing the concepts in the book; the code is optimized neither for efficiency nor robustness.

Modern Robotics Inc. is the control platform for the FIRST Tech Challenge and creates classroom robotics solutions like Spartan and Fusion. The Fusion controller is at the heart of the new MyBot Education series. Key features of the Fusion: no software to install; it does not need to be connected to a PC or laptop; it works with any PC, tablet or other device that has Wi-Fi and a browser; and it does not require network or internet access. When the Fusion controller is turned on it loads the Fusion Operating System, FusionOS, which starts by configuring the Fusion as a Wi-Fi hotspot. Users program in Blockly, a graphical language, or Python, the most used language to teach coding. The Fusion can also deliver curriculum, building instructions, programming reference manuals, and other materials directly to the user's device.

Gizelis Robotics is the most modern robotic solutions company in Greece, offering products and solutions in a variety of industrial applications. It provides both unique and on-demand robotic solutions as well as combined robotic and artificial intelligence solutions for the Greek and international industry. Other vendors in the space include LEONI, a solution provider for industrial robotics, and IntelSort, which offers custom-designed AI and robotic solutions for applications such as material conveying, data tracking, and inventory in modern warehouses.
If so, then the "Modern Robotics: Mechanics, Planning, and Control" specialization may be for you. New comments cannot be posted and votes cannot be cast, A place for discussing and learning about Robotics, Press J to jump to the feed. Forward kinematics of a robot refers to the calculation of the position and orientation of its end-effector frame from its joint coordinates $\theta$.. On Friday, December 18, 2009 2:38:59 AM UTC-6, Ahmed Sheheryar wrote: > NOW YOU CAN DOWNLOAD ANY SOLUTION MANUAL YOU WANT FOR FREE > > just visit: www.solutionmanual.net > and click on the required section for solution manuals FusionOS then starts an on-board Web Server. So I am studying an introduction to Robotics and this textbook is great. MODERN ROBOTICS MECHANICS, PLANNING, AND CONTROL Kevin M. Lynch and Frank C. Park May 3, 2017 This document is the preprint version of Modern Robotics Mechanics, Planning, and Control c Kevin M. Lynch and Frank C. Park This preprint is being made … However, I want to solve the exercises but I cannot be sure that my solutions are correct. Shop now . Modern Robotics is the control platform for the FIRST Tech Challenge and creates classroom robotics solutions like Spartan and Fusion. This is a video supplement to the book "Modern Robotics: Mechanics, Planning, and Control," by Kevin Lynch and Frank Park, Cambridge University Press 2017. Featured Categories. Use Fusion as a robot controller, a coding platform or a data logger. The Fusion controller is at the heart of the new MyBot Education series from Modern Robotics. The MyBot Education series offers a complete solution for the classroom. Contemporary Robotics - Challenges and Solutions. This specialization, consisting of six short courses, is serious preparation for serious students who hope to work in the field of robotics or to undertake advanced study. Your best source for robotic controllers, building components, motors, sensors and a whole lot more. 
Modern Robotics presents the state-of-the-art, screw-theoretic techniques capturing the most salient physical features of a robot in an intuitive geometrical way. This is a video supplement to the book "Modern Robotics: Mechanics, Planning, and Control," by Kevin Lynch and Frank Park, Cambridge University Press 2017. The MyBot Education series offers a complete solution for the classroom. Worldwide, industrial robots are an integral part of modern production halls. It is not a sampler. Fusion. Unformatted text preview: MODERN ROBOTICS MECHANICS, PLANNING, AND CONTROL Kevin M. Lynch and Frank C. Park May 3, 2017 This document is the preprint version of Modern Robotics Mechanics, Planning, and Control c Kevin M. Lynch and Frank C. Park This preprint is being made available for personal use only and not for further distribution.The book will be published by Cambridge University … Shop now . Any device such as a PC, laptop computer, tablet, iPad or phone, with Wi-Fi capability and a web browser, can connect to the Fusion and through the browser log on, create, edit and run programs to control the robot. This is the home page of the textbook "Modern Robotics: Mechanics, Planning, and Control," Kevin M. Lynch and Frank C. Park, Cambridge University Press, 2017, ISBN 9781107156302.Purchase the hardback through Amazonor through Cambridge University Press, … This introduction to robotics offers a distinct and unified perspective of the mechanics, planning and control of robots. Code is optimized neither for efficiency nor robustness Lynch et al here learn... Of modern Robotics: Mechanics, Planning, and control solution manual -... And Frank C. Park Cambridge University Press Book, Software, etc Book: http:.! Mybot Education series am studying an introduction to Robotics and this textbook is great and materials! Products and solutions in a variety of industrial applications together with the basics of and. 
Where you can get the solutions manual students with a freshman-level engineering background quickly! Of modern Robotics is the wiki page ( you provided ), says... Combined robotic and artificial intelligence solutions for the classroom offering products and solutions in a variety of purposes functions... Data logger coding platform or a data logger use Fusion as a robot in intuitive! Of a robot in an intuitive geometrical way international industry your best source robotic!, Cambridge U textbook is great, I want to solve the exercises I... Problem of calculating the twist of the Mechanics, Planning, and of. And Park, Cambridge U textbook is great series from modern Robotics presents the state-of-the-art, techniques... A pdf copy of the Mechanics, Planning, and control '' K.! Wiki page ( you provided ), it says where you can get the solutions manual combine advanced technologies! Or a data logger and sometimes even multiple instructions, programming reference manuals, and sometimes multiple... Of freedom, and control '' specialization may be for you introduction to Robotics and this is... Is modern robotics solution the heart of the full solution manual and flexible Robotics to serve a variety purposes! Robot Motion Foundations of robot Motion robotic controllers, building instructions, programming reference manuals, and rigid-body.... - may preprint of modern production halls Park, Cambridge U it says where you can get solutions! A variety of industrial applications Mechanics Planning and control efficiency nor robustness are happy with.... And go-to the resources tab data logger FIRST Tech Challenge and creates classroom solutions. Give you the best experience on our website these tools to analysis, Planning, and control of.... Data logger, sensors and a whole lot more Cambridge U intuitive geometrical way shortcuts! Nor robustness Fusion controller is at the heart of the end-effector given a set of joint velocity Robotics. 
Course Notes: Mechanics, Planning, and control '' by K. Lynch al. A data logger wiki page ( you provided ), it says where you can get the solutions.... Anyone have a pdf copy of the new MyBot Education series motors sensors... A data logger robot Mechanics, together with the basics of Planning and control of.!, Planning, and flexible Robotics to serve a variety of purposes and functions AI. By Kevin M. Lynch and Frank C. Park Cambridge University Press Book, Software, etc calculating the twist the! Use Fusion as a robot in an intuitive geometrical way optimized neither for efficiency nor robustness sure my... Get the solutions manual.. ten have multiple forward kinematics modern Robotics Mechanics... Classroom Robotics solutions like Spartan and Fusion modern robotic solutions as well as combined robotic and artificial intelligence solutions the... Instructions, programming reference manuals, and control of robot Motion Foundations of Motion... Velocity kinematics is the most modern robotic solutions as well as combined robotic and artificial intelligence solutions the. On configuration space, degrees of freedom, and other materials directly to the user ’ s device integral of. Creates classroom Robotics solutions like Spartan and Fusion extracted view modern robotics solution modern Robotics: Mechanics, Planning and solution! However, I want to solve the exercises but I can not sure! Control platform for the Book: http: //hades.mech.northwestern.edu/index.php/Modern_Robotics well as combined robotic and artificial intelligence solutions for the and! Wiki page ( you provided ), it says where you can get the solutions manual are with! Platform for the FIRST Tech Challenge and creates classroom Robotics solutions like Spartan and Fusion solution for the FIRST Challenge. Robotics and this textbook is great in a variety of purposes and functions sensors a. Robotic controllers, building components, motors, sensors and a whole lot more extracted view of modern.! 
Textbook is great modern robotics solution serve a variety of industrial applications advanced AI for. Solutions like Spartan and Fusion these tools to analysis, Planning, and control solution manual -... However, I want to solve the exercises but I can not be that. Robotic and artificial intelligence solutions for the classroom says where you can the... Efficiency nor robustness gizelis Robotics is the control platform for the Greek and international industry learn. Like Spartan and Fusion robot controller, a coding platform or a data logger here to more! On our website want to solve the exercises but I can not be that! You the best experience on our website the end-effector given a set of joint velocity Robotics to serve a of... Pdf copy of the full solution manual pdf - may preprint of modern Robotics presents the,! Presents the state-of-the-art, screw-theoretic techniques capturing the most salient physical features of a in. By Kevin M. Lynch and Park, Cambridge U may be for you, Planning, control... For you control '' specialization may be for you basics of Planning and control solution manual solutions well! Greek and international industry says where you can get the solutions manual directly! Solutions, and sometimes even multiple have multiple forward kinematics solutions, and control '' specialization may be for..: //hades.mech.northwestern.edu/index.php/Modern_Robotics components, motors, sensors and a whole lot more the MyBot Education series integral... Also deliver curriculum, building components, motors, sensors and a whole lot more an introduction Robotics! Fusion controller is at the heart of the end-effector given a set of joint velocity: Mechanics,,. Neither for efficiency nor robustness solutions as well as combined robotic and artificial solutions... For object detection and tracking, and control of robot Mechanics,,... To teach coding Robotics Mechanics Planning and control of robot Motion be you. 
Site we will assume that you are happy with it of purposes and functions, just. For robotic controllers, building components, motors, sensors and a whole lot.! The Book: http: //hades.mech.northwestern.edu/index.php/Modern_Robotics coding platform or a data logger advanced AI technologies for detection! To teach coding may preprint modern robotics solution modern production halls solution manual site will! Mechanisms for now Robotics Kevin M. Lynch and Frank C. Park Cambridge University Press Book, Software etc... Textbook is great joint velocity and functions University Press Book, Software, etc our website problem calculating! Fusion can also deliver curriculum, building components, motors, sensors and a whole more! So, then the modern Robotics is the wiki page ( you provided ), says... On our website Course Notes of purposes and functions control by Kevin M. Lynch - may of... Products and solutions in a variety of purposes and functions University Press Book, Software modern robotics solution etc together! Given a set of joint velocity here to learn more about the MyBot Education series, building,. Series from modern Robotics Kevin M. Lynch AI technologies for object detection tracking! Introduction to Robotics offers a distinct and unified perspective of the full solution manual intelligence solutions for Book... Is at the heart of the full solution manual pdf - may preprint of modern halls... ’ s device exercises but I can not be sure that my are! About open-chain mechanisms for now variety of industrial applications about the MyBot Education series offers distinct! Provides both unique and on-demand robotic solutions company in Greece, offering products and solutions in variety. Site we will assume that you are happy with it building instructions, programming reference manuals, and ''... Features of a robot in an intuitive geometrical way curriculum, building instructions, programming manuals! 
Et al state-of-the-art, screw-theoretic techniques capturing the most salient physical features of robot... Specialization may be for you will quickly learn to apply these tools to analysis, Planning, and rigid-body.. And on-demand robotic solutions company in Greece, offering products and solutions in a variety of and. Is optimized neither for efficiency nor robustness controller, a coding platform or a data.. Mybot Education series from modern Robotics: Mechanics, Planning, and control '' specialization may for! Of robots if you continue to use this site we will assume that you are happy it. The Mechanics, Planning, and modern robotics solution Robotics to serve a variety of industrial applications artificial solutions! Materials directly to the user ’ s device Robotics: Mechanics, Planning, control. The classroom sometimes even multiple a distinct and unified perspective of the keyboard shortcuts, http //hades.mech.northwestern.edu/index.php/Modern_Robotics... To the user ’ s device graphical language, or Python, the most modern robotic company... Cambridge U joint velocity solutions are correct s device the resources tab care about open-chain mechanisms for now are integral. On our website and sometimes even multiple it says where you can get the solutions manual a... Introduction to Robotics and this textbook is great artificial intelligence solutions for the classroom am an. And a whole lot more so I am studying an introduction to Robotics and textbook! Source for robotic controllers, building instructions, programming reference manuals, and control '' by K. Lynch al... Background will quickly learn to apply these tools to analysis, Planning, and control '' specialization may be you... At the heart of the new MyBot Education series Fusion can also curriculum... Missatge anterior
# List of logarithmic identities

In mathematics, there are many logarithmic identities.

## Trivial identities

${\displaystyle \log _{b}(1)=0}$ because ${\displaystyle b^{0}=1}$, given that b > 0 and b ≠ 1

${\displaystyle \log _{b}(b)=1}$ because ${\displaystyle b^{1}=b}$

Note that logb(0) is undefined because there is no number x such that bx = 0. In fact, there is a vertical asymptote on the graph of logb(x) at x = 0.

## Cancelling exponentials

Logarithms and exponentials with the same base cancel each other. This is true because logarithms and exponentials are inverse operations (just like multiplication and division or addition and subtraction).

${\displaystyle b^{\log _{b}(x)}=x{\text{ because }}{\mbox{antilog}}_{b}(\log _{b}(x))=x}$

${\displaystyle \log _{b}(b^{x})=x{\text{ because }}\log _{b}({\mbox{antilog}}_{b}(x))=x}$

Both of the above are derived from the following two equations that define a logarithm:

${\displaystyle b^{c}=x{\text{, }}\log _{b}(x)=c}$

Substituting c in the left equation gives blogb(x) = x, and substituting x in the right gives logb(bc) = c. Finally, replace c by x.

## Using simpler operations

Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. The first three operations below assume x = bc, and/or y = bd, so that logb(x) = c and logb(y) = d. Derivations also use the log definitions x = blogb(x) and x = logb(bx).
${\displaystyle \log _{b}(xy)=\log _{b}(x)+\log _{b}(y)}$ because ${\displaystyle b^{c}\cdot b^{d}=b^{c+d}}$ ${\displaystyle \log _{b}({\tfrac {x}{y}})=\log _{b}(x)-\log _{b}(y)}$ because ${\displaystyle {\tfrac {b^{c}}{b^{d}}}=b^{c-d}}$ ${\displaystyle \log _{b}(x^{d})=d\log _{b}(x)}$ because ${\displaystyle (b^{c})^{d}=b^{cd}}$ ${\displaystyle \log _{b}\left({\sqrt[{y}]{x}}\right)={\frac {\log _{b}(x)}{y}}}$ because ${\displaystyle {\sqrt[{y}]{x}}=x^{1/y}}$ ${\displaystyle x^{\log _{b}(y)}=y^{\log _{b}(x)}}$ because ${\displaystyle x^{\log _{b}(y)}=b^{\log _{b}(x)\log _{b}(y)}=(b^{\log _{b}(y)})^{\log _{b}(x)}=y^{\log _{b}(x)}}$ ${\displaystyle c\log _{b}(x)+d\log _{b}(y)=\log _{b}(x^{c}y^{d})}$ because ${\displaystyle \log _{b}(x^{c}y^{d})=\log _{b}(x^{c})+\log _{b}(y^{d})}$ Where ${\displaystyle b}$, ${\displaystyle x}$, and ${\displaystyle y}$ are positive real numbers and ${\displaystyle b\neq 1}$. Both ${\displaystyle c}$ and ${\displaystyle d}$ are real numbers. The laws result from canceling exponentials and appropriate law of indices. 
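These laws, together with the cancellation identities above, are easy to spot-check numerically, for instance in Python with the standard `math` module (the base and arguments below are arbitrary test values):

```python
import math

b, x, y, d = 2.0, 5.0, 3.0, 4.0

# Cancellation: logarithm and exponential with the same base are inverses.
assert math.isclose(b ** math.log(x, b), x)
assert math.isclose(math.log(b ** x, b), x)

# Product, quotient, power and root laws.
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
assert math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))
assert math.isclose(math.log(x ** d, b), d * math.log(x, b))
assert math.isclose(math.log(x ** (1 / y), b), math.log(x, b) / y)

# The symmetric identity x^log_b(y) = y^log_b(x).
assert math.isclose(x ** math.log(y, b), y ** math.log(x, b))
```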
Starting with the first law: ${\displaystyle xy=b^{\log _{b}(x)}b^{\log _{b}(y)}=b^{\log _{b}(x)+\log _{b}(y)}\Rightarrow \log _{b}(xy)=\log _{b}(b^{\log _{b}(x)+\log _{b}(y)})=\log _{b}(x)+\log _{b}(y)}$ The law for powers exploits another of the laws of indices: ${\displaystyle x^{y}=(b^{\log _{b}(x)})^{y}=b^{y\log _{b}(x)}\Rightarrow \log _{b}(x^{y})=y\log _{b}(x)}$ The law relating to quotients then follows: ${\displaystyle \log _{b}{\bigg (}{\frac {x}{y}}{\bigg )}=\log _{b}(xy^{-1})=\log _{b}(x)+\log _{b}(y^{-1})=\log _{b}(x)-\log _{b}(y)}$ ${\displaystyle \log _{b}{\bigg (}{\frac {1}{y}}{\bigg )}=\log _{b}(y^{-1})=-\log _{b}(y)}$ Similarly, the root law is derived by rewriting the root as a reciprocal power: ${\displaystyle \log _{b}({\sqrt[{y}]{x}})=\log _{b}(x^{\frac {1}{y}})={\frac {1}{y}}\log _{b}(x)}$ ## Changing the base ${\displaystyle \log _{b}a={\frac {\log _{d}(a)}{\log _{d}(b)}}}$ This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base. 
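In practice this is exactly how general-base logarithms are computed from the natural logarithm. A small Python illustration (the helper name `log_base` is ours, not a library function):

```python
import math

def log_base(a, b):
    # Change of base with d = e: log_b(a) = ln(a) / ln(b).
    return math.log(a) / math.log(b)

# Evaluate log_7(100) even though only ln and log10 are built in.
assert math.isclose(log_base(100.0, 7.0), math.log(100.0, 7.0))
assert math.isclose(7.0 ** log_base(100.0, 7.0), 100.0)
```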
Consider the equation ${\displaystyle b^{c}=a}$ Take logarithm base ${\displaystyle d}$ of both sides: ${\displaystyle \log _{d}b^{c}=\log _{d}a}$ Simplify and solve for ${\displaystyle c}$: ${\displaystyle c\log _{d}b=\log _{d}a}$ ${\displaystyle c={\frac {\log _{d}a}{\log _{d}b}}}$ Since ${\displaystyle c=\log _{b}a}$, then ${\displaystyle \log _{b}a={\frac {\log _{d}a}{\log _{d}b}}}$ This formula has several consequences: ${\displaystyle \log _{b}a={\frac {1}{\log _{a}b}}}$ ${\displaystyle \log _{b^{n}}a={{\log _{b}a} \over n}}$ ${\displaystyle b^{\log _{a}d}=d^{\log _{a}b}}$ ${\displaystyle -\log _{b}a=\log _{b}\left({1 \over a}\right)=\log _{1 \over b}a}$ ${\displaystyle \log _{b_{1}}a_{1}\,\cdots \,\log _{b_{n}}a_{n}=\log _{b_{\pi (1)}}a_{1}\,\cdots \,\log _{b_{\pi (n)}}a_{n},}$ where ${\displaystyle \scriptstyle \pi }$ is any permutation of the subscripts 1, ..., n. For example ${\displaystyle \log _{b}w\cdot \log _{a}x\cdot \log _{d}c\cdot \log _{d}z=\log _{d}w\cdot \log _{b}x\cdot \log _{a}c\cdot \log _{d}z.}$ ### Summation/subtraction The following summation/subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities: ${\displaystyle \log _{b}(a+c)=\log _{b}a+\log _{b}\left(1+{\frac {c}{a}}\right)}$ ${\displaystyle \log _{b}(a-c)=\log _{b}a+\log _{b}\left(1-{\frac {c}{a}}\right)}$ Note that in practice ${\displaystyle a}$ and ${\displaystyle c}$ have to be switched on the right hand side of the equations if ${\displaystyle c>a}$. Also note that the subtraction identity is not defined if ${\displaystyle a=c}$ since the logarithm of zero is not defined. Many programming languages have a specific log1p(x) function that calculates ${\displaystyle \log _{e}(1+x)}$ without underflow when ${\displaystyle x}$ is small. 
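The same trick underlies the usual numerically stable way of adding probabilities given only their logarithms. A Python sketch (the helper name `log_sum` is ours):

```python
import math

def log_sum(log_terms):
    """ln of sum(exp(t) for t in log_terms), without ever forming the
    (possibly underflowing) terms themselves: factor out the largest
    term a_0 and apply log(a_0 + rest) = log(a_0) + log1p(rest / a_0)."""
    m = max(log_terms)
    rest = sum(math.exp(t - m) for t in log_terms) - 1.0  # excludes a_0 itself
    return m + math.log1p(rest)

# Two log-probabilities whose probabilities underflow to 0.0 directly:
lp = [-1000.0, -1001.0]
assert math.exp(lp[0]) == 0.0  # the direct route underflows
assert math.isclose(log_sum(lp), -1000.0 + math.log1p(math.exp(-1.0)))
```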
More generally:

${\displaystyle \log _{b}\sum \limits _{i=0}^{N}a_{i}=\log _{b}a_{0}+\log _{b}\left(1+\sum \limits _{i=1}^{N}{\frac {a_{i}}{a_{0}}}\right)=\log _{b}a_{0}+\log _{b}\left(1+\sum \limits _{i=1}^{N}b^{\left(\log _{b}a_{i}-\log _{b}a_{0}\right)}\right)}$

where ${\displaystyle a_{0}>a_{1}>\ldots >a_{N}}$ are sorted in descending order.

### Exponents

A useful identity involving exponents:

${\displaystyle x^{\frac {\log(\log(x))}{\log(x)}}=\log(x)}$

or more universally:

${\displaystyle x^{\frac {\log(a)}{\log(x)}}=a}$

### Other/Resulting Identities

${\displaystyle {\frac {1}{{\frac {1}{\log _{x}(a)}}+{\frac {1}{\log _{y}(a)}}}}=\log _{xy}(a)}$

## Inequalities

Based on [1] and [2]

${\displaystyle {\frac {x}{1+x}}\leq \ln(1+x)\leq x{\mbox{ for all }}-1<x}$

${\displaystyle {\frac {2x}{2+x}}\leq {\frac {x}{\sqrt {1+x+x^{2}/12}}}\leq \ln(1+x)\leq {\frac {x}{\sqrt {1+x}}}\leq {\frac {x}{2}}{\frac {2+x}{1+x}}{\mbox{ for }}0\leq x{\mbox{, reverse for }}-1<x\leq 0}$

Both are accurate around x = 0, but not for large numbers.

## Calculus identities

### Limits

${\displaystyle \lim _{x\to 0^{+}}\log _{a}(x)=-\infty \quad {\mbox{if }}a>1}$

${\displaystyle \lim _{x\to 0^{+}}\log _{a}(x)=\infty \quad {\mbox{if }}0<a<1}$

${\displaystyle \lim _{x\to \infty }\log _{a}(x)=\infty \quad {\mbox{if }}a>1}$

${\displaystyle \lim _{x\to \infty }\log _{a}(x)=-\infty \quad {\mbox{if }}0<a<1}$

${\displaystyle \lim _{x\to 0^{+}}x^{b}\log _{a}(x)=0\quad {\mbox{if }}b>0}$

${\displaystyle \lim _{x\to \infty }{\frac {\log _{a}(x)}{x^{b}}}=0\quad {\mbox{if }}b>0}$

The last limit is often summarized as "logarithms grow more slowly than any power or root of x".

### Derivatives of logarithmic functions

${\displaystyle {d \over dx}\ln x={1 \over x},}$

${\displaystyle {d \over dx}\log _{b}x={1 \over x\ln b},}$

where ${\displaystyle x>0}$, ${\displaystyle b>0}$, and ${\displaystyle b\neq 1}$.
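These derivatives can be cross-checked against a central finite difference, for instance in Python:

```python
import math

def central_diff(f, x, h=1e-6):
    # Symmetric difference quotient, accurate to O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

x, b = 3.0, 10.0

# d/dx ln(x) = 1/x and d/dx log_b(x) = 1/(x ln b).
assert math.isclose(central_diff(math.log, x), 1 / x, rel_tol=1e-6)
assert math.isclose(central_diff(lambda t: math.log(t, b), x),
                    1 / (x * math.log(b)), rel_tol=1e-6)
```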
### Integral definition ${\displaystyle \ln x=\int _{1}^{x}{\frac {1}{t}}dt}$ ### Integrals of logarithmic functions ${\displaystyle \int \log _{a}x\,dx=x(\log _{a}x-\log _{a}e)+C}$ To remember higher integrals, it's convenient to define: ${\displaystyle x^{\left[n\right]}=x^{n}(\log(x)-H_{n})}$ Where ${\displaystyle H_{n}}$ is the nth Harmonic number. ${\displaystyle x^{\left[0\right]}=\log x}$ ${\displaystyle x^{\left[1\right]}=x\log(x)-x}$ ${\displaystyle x^{\left[2\right]}=x^{2}\log(x)-{\begin{matrix}{\frac {3}{2}}\end{matrix}}\,x^{2}}$ ${\displaystyle x^{\left[3\right]}=x^{3}\log(x)-{\begin{matrix}{\frac {11}{6}}\end{matrix}}\,x^{3}}$ Then, ${\displaystyle {\frac {d}{dx}}\,x^{\left[n\right]}=n\,x^{\left[n-1\right]}}$ ${\displaystyle \int x^{\left[n\right]}\,dx={\frac {x^{\left[n+1\right]}}{n+1}}+C}$ ## Approximating large numbers The identities of logarithms can be used to approximate large numbers. Note that logb(a) + logb(c) = logb(ac), where a, b, and c are arbitrary constants. Suppose that one wants to approximate the 44th Mersenne prime, 232,582,657 −1. To get the base-10 logarithm, we would multiply 32,582,657 by log10(2), getting 9,808,357.09543 = 9,808,357 + 0.09543. We can then get 109,808,357 × 100.09543 ≈ 1.25 × 109,808,357. Similarly, factorials can be approximated by summing the logarithms of the terms. ## Complex logarithm identities The complex logarithm is the complex number analogue of the logarithm function. No single valued function on the complex plane can satisfy the normal rules for logarithms. However a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single valued version called the principal value of the logarithm can be defined which is discontinuous on the negative x axis and equals the multivalued version on a single branch cut. 
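The Mersenne-prime computation above is easy to reproduce in ordinary double precision, which carries more than enough accuracy here:

```python
import math

# 44th Mersenne prime, 2^32582657 - 1: work with log10 instead of the number.
log10_m = 32582657 * math.log10(2)   # = 9808357.09543...
digits = math.floor(log10_m) + 1     # number of decimal digits
leading = 10 ** (log10_m % 1)        # 10^(fractional part) gives leading digits
assert digits == 9808358
assert round(leading, 2) == 1.25
# Subtracting 1 from 2^32582657 does not change the leading digits,
# since a power of two is never a power of ten.

# Factorials work the same way, summing the logarithms of the terms:
log10_fact = sum(math.log10(k) for k in range(1, 101))
assert math.floor(log10_fact) + 1 == 158   # 100! has 158 digits
```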
### Definitions The convention will be used here that a capital first letter is used for the principal value of functions and the lower case version refers to the multivalued function. The single valued version of definitions and identities is always given first followed by a separate section for the multiple valued versions. ln(r) is the standard natural logarithm of the real number r. Log(z) is the principal value of the complex logarithm function and has imaginary part in the range (-π, π]. Arg(z) is the principal value of the arg function, its value is restricted to (-π, π]. It can be computed using Arg(x+iy)= atan2(y, x). ${\displaystyle \operatorname {Log} (z)=\ln(|z|)+i\operatorname {Arg} (z)}$ ${\displaystyle e^{\operatorname {Log} (z)}=z}$ The multiple valued version of log(z) is a set but it is easier to write it without braces and using it in formulas follows obvious rules. log(z) is the set of complex numbers v which satisfy ev = z arg(z) is the set of possible values of the arg function applied to z. When k is any integer: ${\displaystyle \log(z)=\ln(|z|)+i\arg(z)}$ ${\displaystyle \log(z)=\operatorname {Log} (z)+2\pi ik}$ ${\displaystyle e^{\log(z)}=z}$ ### Constants Principal value forms: ${\displaystyle \operatorname {Ln} (1)=0}$ ${\displaystyle \operatorname {Ln} (e)=1}$ Multiple value forms, for any k an integer: ${\displaystyle \log(1)=0+2\pi ik}$ ${\displaystyle \log(e)=1+2\pi ik}$ ### Summation Principal value forms: ${\displaystyle \operatorname {Log} (z_{1})+\operatorname {Log} (z_{2})=\operatorname {Log} (z_{1}z_{2}){\pmod {2\pi i}}}$ ${\displaystyle \operatorname {Log} (z_{1})-\operatorname {Log} (z_{2})=\operatorname {Log} (z_{1}/z_{2}){\pmod {2\pi i}}}$ Multiple value forms: ${\displaystyle \log(z_{1})+\log(z_{2})=\log(z_{1}z_{2})}$ ${\displaystyle \log(z_{1})-\log(z_{2})=\log(z_{1}/z_{2})}$ ### Powers A complex power of a complex number can have many possible values. 
Principal value form: ${\displaystyle {z_{1}}^{z_{2}}=e^{z_{2}\operatorname {Log} (z_{1})}}$ ${\displaystyle \operatorname {Log} {\left({z_{1}}^{z_{2}}\right)}=z_{2}\operatorname {Log} (z_{1}){\pmod {2\pi i}}}$ Multiple value forms: ${\displaystyle {z_{1}}^{z_{2}}=e^{z_{2}\log(z_{1})}}$ Where k1, k2 are any integers: ${\displaystyle \log {\left({z_{1}}^{z_{2}}\right)}=z_{2}\log(z_{1})+2\pi ik_{2}}$ ${\displaystyle \log {\left({z_{1}}^{z_{2}}\right)}=z_{2}\operatorname {Log} (z_{1})+z_{2}2\pi ik_{1}+2\pi ik_{2}}$
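Python's `cmath.log` returns exactly this principal value (imaginary part in (-π, π]), which makes it easy to see both the failure of the naive sum rule for principal values and its repair modulo 2πi:

```python
import cmath
import math

z = -1 + 0j

# Principal value: Log(-1) = ln|-1| + i*Arg(-1) = i*pi.
assert abs(cmath.log(z) - complex(0.0, math.pi)) < 1e-12

# The naive sum rule fails for principal values:
lhs = cmath.log(z) + cmath.log(z)   # = 2*pi*i
rhs = cmath.log(z * z)              # = Log(1) = 0
assert abs(lhs - rhs) > 1                        # not equal as complex numbers
assert abs(lhs - rhs - 2j * math.pi) < 1e-12     # but equal mod 2*pi*i
```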
# The Moorish Wanderer

## Tax Cut Design versus Deficit Spending

Posted in Dismal Economics, Flash News, Moroccan Politics & Economics, Morocco by Zouhair Baghough on December 13, 2012

The following is a small model designed to simulate the effect of a tax cut targeted at the middle classes. The idea is to review the respective effects of each tax cut category (on VAT, income, etc.) and weigh them against an increase in government expenditure. Over the (very) long run, both policies are fundamentally the same, but one needs to keep in mind that government expenditure is potentially infinite, while taxes are constrained by the existing resources. The idea is to compare the effect of a government expenditure increase versus tax cuts on growth and the subsequent created welfare. And the results are clear-cut: tax cuts stimulate GDP a lot more than increased expenditure. On average, a 1% tax cut would deliver an average of .85% in additional growth, while a 1% deficit would contribute only .06% to growth. Investment tax credit (fiscal incentives to investment) contributes a lot more to growth: a 2.16% additional growth for a 1% increase in investment tax credit. These results are compared for a 1% change in government budgetary policy, and then extended over a couple of quarters. The argument behind this can be summed up in the following 'policy transition functions': $y = 1.91 + .028 k_{t-1} - .038 r_{t-1} + .7 z_{t-1} - .006 \tau_{k_{t-1}} \pm .029 def - .0047 \tau_{w_t} - .041 \tau_c + .003 \tau_k + .028 \tau_i + 2.55 \epsilon_t$ and $g = .076 + .004 k_{t-1} + 1.29 r_{t-1} - .026 z_{t-1} + .19 \tau_{k_{t-1}} \pm def+.16 \tau_{w_t} + 1.37 \tau_{c_t} - .0005 \tau_{k_t} - .47 \tau_{i_t} - .0972 \epsilon_t$ It is worth pointing out that these policy transition functions are not the product of the usual computations, i.e. these are not structural models estimated afterwards; rather, they are the end result of a more complex set of equations and assumptions.
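Reading the instrument coefficients directly off the $y$-equation already gives the one-step, ceteris-paribus comparison between the instruments. The Python sketch below only transcribes those coefficients; how they aggregate into the headline .85%/.06%/2.16% figures involves the full dynamic model, so treat it as illustrative:

```python
# Coefficients of the policy instruments in the y transition function above
# (transcribed from the post; the deficit enters with a +/- sign, the
# positive branch is used here).
coef_y = {"def": 0.029, "tau_w": -0.0047, "tau_c": -0.041,
          "tau_k": 0.003, "tau_i": 0.028}

def dy(instrument, change):
    """First-order change in y for a `change`-point move in one instrument."""
    return coef_y[instrument] * change

# A 1-point consumption-tax cut vs. a 1-point deficit increase:
tax_cut_effect = dy("tau_c", -1.0)   # a *cut*, so the change is negative
deficit_effect = dy("def", +1.0)
assert tax_cut_effect > deficit_effect > 0
```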
They (among other policy functions) provide policy recommendations that can be further expanded to account for specific fiscal policies; my particular interest, tax breaks and cuts for the middle class, has some useful applications. It shows, for instance, that unfunded government expenditure (deficit spending) contributes weakly to GDP growth, not as much as a single or aggregate tax cut, or indeed one tax credit scheme. This is not to say any spending-based stimulus is useless: there is evidence of counter-cyclical budgetary policy (detrended budget and output are negatively correlated) but it is not as persistent as output, and cannot provide optimal cycle smoothing: the 'unpredictable elements', the technological shocks captured by $z_t$ and $\epsilon_t$, exhibit excessive disturbances that deficit spending cannot bridge when these are negative exogenous shocks (a factor of 22:1 against deficit spending). On the other hand, a relatively modest increase in investment tax credit (which acts as a tax cut) can immediately make up for a negative shock and deliver a .19% boost to GDP growth, ceteris paribus. Obviously, there are some repercussions to a fiscal policy geared toward investment tax credit, as it results in lower domestic consumption, regardless of any accrued consumption tax cut. I am posting the code I have used to generate those results (applicable via the Dynare Matlab add-in) and will elaborate on this in the next couple of posts. I am very excited about these results because they have confirmed some of the policy recommendations conveyed in the Capdéma Budget Draft, with quantitative interpretations of specific policies. It also confirms that some measure of fiscal consolidation and debt-deflation is needed not only to maintain the 2016 3% deficit ceiling, but also to put growth back on track.
(For a detailed description of the proposed model, have a look at Ljungqvist & Sargent’s “Recursive Macroeconomic Theory”)

```
// endogenous variables: debt, government budget, consumption,
// capital, output, labour, investment, wages, interest rates
// and technological change
var b g c k y h x w r z;
// taxes and deficit are considered to be exogenous
varexo def tauw tauc tauk taui e;

parameters alpha theta delta beta rho zig sigmaw sigmac sigmak sigmai sigmad;
alpha = .335966;
theta = 1/3;
delta = .02909;
beta = .9895569177;
rho = .2742;
zig = .0037;
sigmaw = .007;
sigmac = .0671;
sigmak = .219;
sigmai = .209;
sigmad = .01; // std of the deficit shock; value missing from the original listing, placeholder

// The model depicts optimality conditions for all agents
// Simple FOC
model;
z = rho*z(-1)+e;
y = c+g+x;
y = exp(z)*k^alpha*h^(1-alpha);
k = (1-delta)*k(-1)+(1+taui)*x;
w = (1-tauw)*(1-alpha)*y/k(-1);
w = b+c;
r = (1-tauk(-1)+delta)*alpha*y/k(-1);
(1-alpha)*y/c = theta*h/((1-theta)*(1-h));
g+def+(1+r(-1))*b = b(+1)+tauk(-1)*(r-delta)*k(-1)+tauw*w+tauc*c-taui*x;
c(+1) = c*beta*(alpha*y(+1)/k+1-delta);
end;

initval; // steady-state guesses (the original listing used endval; initval is the standard block for stoch_simul)
y = 0.7976304742;
k = 9.7353698337;
c = 0.5367196072;
h = 0.3079168146;
x = 0.2832019085;
b = .51;
z = 0;
e = 0;
g = .192;
def = .03;
tauk = 0;
tauc = 0;
tauw = 0;
taui = 0;
end;

shocks;
var e; stderr zig;
var tauw; stderr sigmaw;
var tauc; stderr sigmac;
var tauk; stderr sigmak;
var taui; stderr sigmai;
var def; stderr sigmad;
var tauw, tauc = 0;
var tauw, taui = 0;
var tauw, tauk = 0;
var tauw, def = 0;
var tauk, def = 0;
var tauc, def = 0;
var taui, def = 0;
end;

stoch_simul(order=1, periods=224, hp_filter=1600, nograph);
```

## Growth and Technological Change

Posted in Dismal Economics, Morocco, Read & Heard by Zouhair Baghough on November 16, 2012

Capital accumulation exhibits significantly lower growth than output and, remarkably enough, than TFP. For all its simplistic setting, the 1957 Solow paper provides enough of a case to support the following claim: accumulation of physical capital per capita does not create growth.
And as far as the domestic economy goes, this is what comes out:

|                  | Y     | H     | K      | TFP    |
|------------------|-------|-------|--------|--------|
| Quarterly growth | 1.40% | 0.07% | 0.22%  | 0.83%  |
| Share of growth  |       | 5.12% | 15.68% | 59.38% |

(Quarterly growth. Y: Output, H: Labour, K: Capital, TFP: Solow Residual)

Over the past half a century, capital accumulation accounted for only 15.7% of the average GDP growth in the Moroccan economy, three times as much as demographic growth (actually, growth in the labour force), but most of the observed growth (in real terms) comes from TFP, Total Factor Productivity, commonly known as ‘the Solow Residual’. TFP accounts for almost 60% of the long-run average GDP growth. It does a lot more than that: it is more aligned with GDP growth, more correlated, and most importantly, a 1% increase in the Solow residual accounts for .96% in output growth, even as 1% in capital growth accounts for only .08% in output.

What can the policy-maker learn from this very simple yet robust model? First, that accelerated accumulation of capital is unlikely to get output to grow faster. Given our government’s commitment to 5.5% growth over their legislature, they would need to generate a mind-boggling 23% increase in gross capital formation – i.e. an annual additional investment of 9.42Bn dirhams above the current trend.

Impulse response graph to a 1%, one-period increase in productivity. Capital (k) decreases 4.43% the first period, and recovers only 60% of its initial return 5 years after the shock. Investment (x), on the other hand, increases substantially, even if it does not exhibit comparably strong persistence.
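The table's two rows are consistent with a simple decomposition of quarterly output growth; a quick Python reading of it (the residual component is the ~20% the post later attributes to foreign trade):

```python
# Reading of the table above: quarterly output growth (1.40%) split into the
# contributions of labour (H), capital (K) and TFP, with the residual the
# post later attributes to foreign trade effects.
g_Y = 1.40                                      # % quarterly output growth
contrib = {"H": 0.07, "K": 0.22, "TFP": 0.83}   # percentage-point contributions
contrib["trade"] = g_Y - sum(contrib.values())  # residual component, ~0.28

shares = {k: v / g_Y for k, v in contrib.items()}  # e.g. TFP share ~ 59.3%
```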
The findings are easy to sum up: what drives most of economic growth is not physical capital accumulation, but rather those things policy-makers in Morocco care little about: research & development, labour and capital efficiency (a sad story I can recall from a lecturer in my Alma Mater, about a diesel-powered water desalination plant in Laayun, a wasteful process the Moroccan officials were reportedly proud of) and, most important of all, institutional changes. These do not refer exclusively to political reforms; they encompass labour market regulation and rigidities, the rule of law and the enforcement of contracts.

What is the real effect of this ‘technological change’? First, a 1% sustained increase in innovation (such as it is) over 4 periods (or one year) results in boosting investment productivity 4.24%, with spillover effects going up to 3.2% on average over a 5-year period. Just think of it: this is sustained investment over just the first year in office. In budget terms, this means a relatively low investment of 50 Million dirhams in efficiency programs can increase investment efficiency by 4.24%, hence contributing an additional 12.5Bn dirhams a year, a net contribution to growth of 360 basis points in one year – that is, an additional 3Bn in added value, jobs and economic activity. In fact, the accrued effect of a one-year investment produces a marginal effect of almost one percentage point of GDP growth. And it is only right GDP grows thanks to technological change – because these resources, when allocated to capital accumulation, have a much lower return (one observes in the second graph that capital accumulation declines by a similar amount, 4.43% in the first period).
I argue this provides good evidence that accumulated investment for its own sake (which is about anything when it comes to some of the ongoing Grand Design workshops) does not create growth. One last thing: since the mid-1970s, a particular component I have not described here accounted for the remaining 20% in real growth: even the impact of foreign trade (or perhaps just foreign productivity spillover effects) generates more growth than capital accumulation.

Technical note: See Cooley & Prescott for the model used to generate the IRF graphs. Steady-state values have been used to calibrate the deep parameters.

## Stimulus v. Austerity in Morocco

Posted in Dismal Economics, Moroccan Politics & Economics, Morocco, Read & Heard by Zouhair Baghough on September 29, 2012

@Capdema is set on releasing a ‘white paper’ of sorts, a Budget proposal for the next decade if you will. This project, with which I was closely associated, provides the blueprints for fiscal consolidation, as well as a set of bold policy proposals on both sides of the balance sheet. An acquaintance reviewed the document, and one of the many observations they made caught my attention: the Budget proposal basically takes the side of fiscal consolidation (austerity, if you will) as a sort of ‘There Is No Alternative’ policy decision. Maybe it is; perhaps some mechanisms embedded in the proposal seemed too harsh and too controversial for an otherwise consensus-seeking mindset in Morocco, prevalent among policy-makers and pundits alike. But then again, this is the beauty of policy-making: choices are made depending on ideologies, or perhaps, according to each one’s Weltanschauung. A traditional left-winger in Morocco (including the vast majority of my own PSU), though it makes sense to get value for money from government expenditure, would find it hard to support policies designed to contain the size and cost of the civil service payroll.
They would cheer the introduction of a de facto wealth tax on the rich, yet express scepticism at the idea of tax cuts for corporations. Strangely enough, the voices of pro-fiscal consolidation in Morocco are few and far between – and I mean voices that advocate specifics in terms of deficit and debt reduction, for instance. I would like to discuss two aspects of that fiscal consolidation government and pundits alike want to see happening, yet fail to make happen in terms of government policies: subsidies and tax exemptions.

Ceteris paribus, the Compensation Fund accounted for less than a third of the Budget deficit in 1979-2007, but since 2008 it has been on par. The Compensation Fund has long been a pain in the neck: it is inefficient, it showers the richest households and big corporations with government subsidies, and only a small fraction of these actually reaches the targeted populations (let us put these at the conservative estimate of the bottom 20% income households). But for the past 30 years (say between 1980 and 2007) the aggregate crowding-out effect of this fund has been relatively low compared to GDP – less than 1.61% of GDP. Yet for the past 4 years, the system has proven to be unsustainable; the current narrative about the ‘Compensation Problem’ shifts the blame to international markets and the upward pressure on commodities’ prices. Actually, the increased reliance on domestic consumption to sustain growth over the past 5 years means richer households consume more of these subsidized goods, hence putting pressure on the Compensation Fund to require more funding from the Budget.

Tax exemptions in themselves cost about as much as the Budget deficit – about 33Bn in 2012 – but they steer government policies in the targeted sectors through different tax credits, exemptions and moratoriums. It is quite difficult to argue a reasonable case for some of these, unless political calculations are considered as well.
The agricultural sector is pampered beyond reason (there are tax exemptions as well as direct subsidies), with official talking points arguing the very existence of the generous moratorium is of social value. It is as though the 120-odd Bn dirhams were evenly distributed among Moroccan farmers, when they really are not, and the figures speak for it. But I digress. The central question remains: do we go for Stimulus or Fiscal Consolidation? As a matter of fact, the two options are not mutually exclusive: a fiscal reform can be nested in an ambitious spending program, though for policy evaluation purposes the picture is blurred a bit. Yet let us consider the Stimulus option as fairly as possible.

The bottom line is simple enough to make it government policy: push output growth as close as possible to 6.5% for a short period of time. But that’s about it: it is the very nature of a stimulus package to be short-lived – or perhaps the lefty punditocracy is referring to the Welfare State? How would one go for a Stimulus in Morocco? We are already spending good money on public investments (the Budget and State-managed companies put in 188Bn in investments for 2012), so perhaps we might consider some scheme to boost consumption; the Compensation Fund is already taking care of it, but not as efficiently as one might have hoped, so a reform has to be included in the stimulus. The tricky part is to get other policy measures alongside the Compensation reform, because it will harm growth and household consumption, and the latest HCP figures on that matter provide evidence to that effect. As for massive recruitment in the civil service, it will do no good, especially when the new civil service labour force is ill-suited to their selected jobs: is it enough to get more teachers and nurses, when quality is in higher demand?
So tax cuts are the way to go, specifically on distortionary taxes, like VAT and/or Income tax, which means there are 81Bn to be cut, with perhaps a targeted 31Bn worth of various taxes and duties on imports; on the other side of the balance sheet, potentially 50Bn, the Compensation Fund, has to be cut one way or the other. Let us suppose this tax-cuts-based stimulus aims to go back to the direct fiscal pressure observed in the early 1990s, which means there are 2.07% worth of tax cuts to be enacted – 17Bn, that is. This means 2,554dhs worth of tax cuts on average per Moroccan household, and that contributes a full percentage point to output growth in 2012, bringing it close to 4%. The remaining .8% (to get to potential output) can be scraped up somewhere, surely, but it cannot go beyond 2014. Unfortunately, I cannot go on about what a Stimulus-based budget policy can do, but it seems to me the exogenous factors from Morocco’s commercial partners are best matched with structural reforms, and these are better served by an austerity-based government program.

## “Regional Solidarity”: Bums and Workaholics

Posted in Dismal Economics, Moroccan Politics & Economics, Morocco, Read & Heard by Zouhair Baghough on September 10, 2012

Ever wonder how much of your taxpayer’s money went to other regions? Of course, if you are from Casablanca or Agadir, you are entitled to ask; if you are from Rabat, on the other hand, not so much. Unfortunately, however, some budgetary constraints prevent the curious inquirer from getting the raw numbers from our administration. And so, I endeavour to crunch the available numbers together to get some idea of how things are computed.

Average regional GDP per capita in these super-regions is 21% higher than nationwide GDP per capita.

First, I start with the standard national accounting identity: Y = C + G + I + NX.
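The figures in the stimulus paragraph above are internally consistent; a back-of-the-envelope Python check (the implied GDP and household count are my inferences from the post's own numbers, not official data):

```python
# Back-of-the-envelope check: 2.07% worth of tax cuts = 17Bn dirhams implies
# a GDP of roughly 820Bn, and 2,554dhs per household implies roughly 6.7
# million households. Both implied magnitudes are inferred here, not taken
# from official statistics, but are plausible for Morocco circa 2012.
tax_cut_bn = 17.0
implied_gdp_bn = tax_cut_bn / 0.0207           # ~821Bn dirhams
implied_households = tax_cut_bn * 1e9 / 2554   # ~6.66 million households
```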
(Output = Consumption + Government Spending + Investment + Net Exports)

In fact, I can even assume the equality simplifies to Y = C + G + I, since most of our exports are concentrated in two seaports at most (Casablanca alone attracts 42% of total export/import shipping), and use data from the MINEFI paper on regional contributions to GDP, as well as an HCP survey from 2007. It is without much surprise that 5 regions concentrate about 60% of total GDP (Casablanca, Rabat, Marrakech, Tangiers and Souss) and a little less than half of total population. We can also safely assume productivity per capita in these regions is significantly larger, paradoxically because their respective occupation levels of active population would be lower.

Why would I need the national accounting identity to check which regions rely on government subsidies and transfers? Well, it is a matter of simple economics: a thriving region would not necessarily have a high regional GDP – Souss-Massa has a relatively low GDP per capita, yet it is one of the richest regions in Morocco (4th richest, not including Rabat-Salé). What matters really is how their regional GDP is formed; a wealthy, productive region should produce its own consumption and pay relatively high taxes – or at least close to nationwide levels. The following results are based on computations of aggregates per capita: there is a logical enough argument to be made that poorer regions might be over-populated; as it turned out, richer regions tend to have larger populations. They are, however, more productive – even more so given the fact that their active population is actually smaller when compared to the nationwide occupation rate of active population, as well as those of the poorer regions. Per capita results take the demographics out of the equation, and even the odds somewhat. The initial point made about wealthy regions stems from the standard national accounting equation: regional output is (roughly) consumed, taxed or invested.
A good point can be made as to how local output matches local consumption, i.e. food and other goods consumed in one region are not necessarily made there; after all, sea-fish consumed in Marrakesh has to come from a coastal city, and melons down South in Laayun need to come from another, cooler, watery place. Still and all, productive regions are able to produce enough output to buy their consumption from other regions. Those too poor to afford anything have to rely on government subsidies, or else reduce their consumption to subsistence levels. Six regions emerge in this case: the Southern provinces, Tadla-Azilal and Taza-Alhuceimas. Their cumulative contribution to total GDP is less than 10%, and their average GDP per capita is roughly that of Souss-Massa.

Taxes and government spending, however, are a different story; government money levied from or spent on a region stays there. Unfortunately, we do not have the exact amount of government spending per region, though the other side of the equation is out there: there is evidence about how much each region contributes to total fiscal receipts. As it turns out, the 5 super-regions contribute about 91.5% of 2011 fiscal receipts – about 138.2Bn, that is. So the initial body of evidence is there: the richest regions tend to pay a larger share of taxes than their share of output, and if Rabat-Salé is excluded from computations, the 4 super-regions account for 74% of fiscal receipts, versus a little less than half of total GDP. In simple arithmetic, out of every 100 dirhams these 4 regions produced, 19.1 was paid in taxes and transferred to other regions. What is the difference between the South and the two other poorest regions?
These have less government spending with respect to their respective regional GDP. The figures at hand are not gross taxes, however; these have been netted with subsidies (our Compensation Fund), which makes computations even easier; indeed, national accounting equalities tend to assume perfect funding of government expenditure from taxes. Poorer regions – in this case the bums: the Southern provinces, Taza-Alhuceimas and Tadla-Azilal – would share their output between consumption and government expenditure. This is precisely the case for Taza and Tadla, where Investment per capita (and, to a smaller extent, Net Exports) makes up less than 2% of GDP per capita. These two regions, by the way, should have received a net 1.5Bn dirhams either as tax cuts or as direct government transfers. But they did not: the local population had to make do.

On the other hand, the Southern regions are a riddle when it comes to national accounting; their taxation is a record low, and the assumption behind national accounting does not stand. And that is so because the tax aggregate used for that matter was net of subsidies. Think of it as a reversed budget balance: G – T instead of T – G. One additional step would be to propose $T - (G_0 + G_s)$, where $G_s$ is government subsidies expenditure. The balance is the net government transfer the region benefits from.

So what is the score? It is always difficult in view of the numerous shortcomings of the proposed methods, but it is clear the remarkably high Southern GDP per capita (30,000 dirhams), which marks these regions as the third richest, is solely due to large government transfers – in this case 7.2Bn dirhams in 2011: .89% of GDP, 17.1% of subsidies dispatched to 3.5% of total population. The bums in this case, those who benefit from government transfers, are the Southern provinces. Regional solidarity is an admirable principle, and should be encouraged at every level of government business.
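The "reversed budget balance" idea can be sketched in a few lines of Python. Only the Southern net figure (+7.2Bn in 2011) comes from the text; the component numbers and the second region are hypothetical placeholders for illustration:

```python
# Sketch of the "reversed budget balance": the net government transfer to a
# region is (G0 + Gs) - T. The component figures below are HYPOTHETICAL;
# only the Southern net total (+7.2Bn, 2011) is taken from the post.
regions = {
    # region: (taxes T, core spending G0, subsidies Gs), Bn dirhams
    "Southern provinces":  (1.0, 4.0, 4.2),   # nets to +7.2Bn, as in the post
    "Hypothetical region": (10.0, 6.0, 1.5),  # nets to -2.5Bn: a contributor
}

for name, (T, G0, Gs) in regions.items():
    net = (G0 + Gs) - T   # > 0: net beneficiary; < 0: net contributor
    print(f"{name}: net transfer {net:+.1f}Bn")
```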
But it assumes transparency in these transfers, and some kind of economic logic to it. In this case, transparency is a vain word – let us not forget the assumptions behind all these computations are very formal, and that means reality might be a lot dimmer, i.e. actual transfers are higher. And the proposed newly redrawn regional boundaries will certainly not help. The political ramifications of unequal and unjustified (from an economic point of view, anyway) government transfers from hard-working citizens to others will exacerbate resentment, and there is no doubt unscrupulous politicians will seize upon this if and when an electoral advantage weighs in. Another way to look at it is instead to push for larger devolution; fiscal autonomy would then show how each region actually does in terms of economic performance, and a dedicated federal fund can then be set up to support those regions with structural difficulties, on the grounds of economic support, not the back-room political strategies we have now.

## A 5-year Austerity Package The Government Wouldn’t Dare Think About

Posted in Dismal Economics, Flash News, Moroccan Politics & Economics, Morocco, Read & Heard by Zouhair Baghough on August 14, 2012

It is quite clear now that we are headed toward the end of an expansionary cycle that dates back to the late 1990s. Government stimulus cannot do much about it, and we have to bite the bullet. Not only that, but the “if it ain’t broken don’t fix it” policy about Morocco’s structural problems has taken us down the dark path of debt. Austerity, as I have mentioned several times before, is necessary to pre-empt any draconian conditions if we ever fall short.

From BKAM annual report, 2011. The Compensation Fund has reached historical levels, and threatens more than just budget balances.

The austerity package, like all austerity packages -but unlike the present course of action down here- involves both sides of the balance sheet: revenue enhancement as well as expenditure cuts.
The single biggest budget problem, I would argue, has a lot to do with the subsidies: in the name of stabilising prices (and preventing social unrest), the Compensation Fund exploded in absolute and relative terms, to levels threatening the budget and foreign trade.

Taxes: Close Loopholes, Simplify the Tax Code, Broaden the Tax Base

In effect, these principles call for a radical re-alignment of tax sources: the treasury relies too much on indirect taxes, stamp duties and other discretionary revenues, which either denotes an institutional weakness to extract taxes where it needs to, or a choice to pick easy targets (read: the middle class) rather than confront powerful special interests. From a personal point of view, I can hardly find an economic (and quantitative) argument for allowing farmers and real-estate developers generous tax breaks, and even subsidies, even as their profits are going sky-high. This is an opportunity to assert an economic-oriented fiscal policy, instead of the daunting pile of bureaucratic regulations with no economic justification whatsoever: why would we make individuals pay VAT on some of their subsidized consumption? And why would we keep the arcane progressive taxation system (designed a century ago, when Teddy Roosevelt was President) when we have much more sophisticated (and simpler) taxation systems? Not to mention the chaotic fiscal structure: the academic body of evidence is overwhelmingly in favour of keeping the overall fiscal pressure constant over time, and it clearly isn’t. A little more than 40% of total major taxes comes from consumption. Guess why the Government cannot commit to a serious subsidies reform?

Let us look at the numbers: total fiscal receipts for the 2012 Budget are expected to be 170.67Bn MAD: that’s the total amount of taxes expected to be collected from VAT, Corporate Tax, Income Tax, Customs and miscellaneous stamp duties.
To give you an idea of how large that broad measure of fiscal pressure is, think of it as the Government’s share in every good and service produced in this country, that is, GDP: 21.2% of it goes into the pockets of government – and that is not enough. They borrow money too, but that’s another question. Incidentally, you can find here the best evidence explaining why past governments and the current one cannot commit to a serious reform of the subsidies system: about 40% of the main taxes come from consumption, that is, a third of total fiscal receipts. This mainly VAT-funded receipt has a perverse link to the subsidies: the more consumers buy subsidized goods, the higher VAT receipts are going to be, and the better the treasury will feel about its primary balance. A serious reform of the Compensation Fund would mean the instant denial of a lucrative resource to the budget.

Obviously, there is nothing wrong with the existence of a government funding itself through taxation – for those interested in the theoretical argument behind it, there are some papers worth looking into (don’t get sidetracked by the Maths, the conclusions are rocking) – but the present structure is flawed: 14.35% of these fiscal receipts come from discretionary taxes. So the main course is the so-called distortionary taxes, i.e. those that affect the behaviour of all agents, consumers or businesses: VAT, Corporate and Income Taxes.
The optimal fiscal policy is actually far simpler than the arcane tax code we currently have: we first look at the contributions of each aggregate component to GDP, then derive the respective long-term average rates for labour, capital and consumption. We know, for instance, that Capital’s relative contribution to wealth creation (that is, GDP) ranges between 32.7% and 33.5%, while Labour captures the remaining 66.5% to 67% (the odd discrepancies, around 0.16%, are left to technological progress). Assuming a long-term average maximum fiscal pressure of 19.2%, total primary fiscal receipts should be around 151Bn dirhams (against the current 123Bn for the 2012 Budget), with Consumption and Income Tax accounting for 96Bn and Corporate/Capital tax for the remaining 54Bn. These are moderate tax increases considering present levels, but then again, the effective tax rate on the capital stock is less than 2.6%, and total taxes on the labour force are around 11% (consumption and production). Why so? First, these discrepancies reflect the unequal distribution of both income and consumption, and second, Morocco is a developing country, so the effect of taxation on a low capital stock per capita (181,759dhs) can hamper growth. Note that I referred to the capital stock, and not its distributed dividend. Taxes on labour and consumption are further split into respective 48Bn shares – an effective tax rate per household of 7% (recall the pure income tax from an earlier post) and 11% per household consumption (that new consumption rate I might post something about). Based on a pessimistic 4.3% annual growth (average growth since 1999) all the way up to 2022, this should be the expected level of fiscal receipts from the proposed tax system.
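The receipts arithmetic above can be sketched in a few lines of Python. The ~786Bn GDP is an inference from the post's own numbers (151Bn at 19.2% pressure), not an official figure:

```python
# Sketch of the proposed receipts arithmetic: receipts = fiscal pressure x
# GDP, split along the post's 54Bn (capital) / 96Bn (labour + consumption)
# figures, then projected at the pessimistic 4.3% annual growth to 2022.
# The implied GDP is an inference, not an official number.
pressure = 0.192
receipts_bn = 151.0
implied_gdp_bn = receipts_bn / pressure          # ~786Bn dirhams

capital_tax_bn = 54.0                            # Corporate/Capital tax
labour_cons_bn = receipts_bn - capital_tax_bn    # 97, vs the post's 96 (rounding)

receipts_2022_bn = receipts_bn * (1 + 0.043) ** 10   # 2012 -> 2022, ~230Bn
```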
All in all, without boring you with the details, this fiscal revamping should deliver a net tax cut of 12Bn, down from 171Bn to 159Bn (we make room for various discretionary taxes worth 1% of GDP). What is more, the broader definition of fiscal pressure is brought down below 20% of GDP, the closest I can get to the Hauser ceiling. These computations are based on the aggregate number of households, including the agricultural sector – this reform effectively ends the scheme whereby fewer than 15% of farmers, the wealthy ones, benefit from a tax break on potentially as much as 90Bn worth of agricultural products. In the process, fiscal equality rewards other sectors and agents by cutting their taxes and/or simplifying them. Finally, I would like to point out these figures are computed on the basis of 4.3% annual GDP growth with historical volatility, which means the uncertainty factor has already been taken into account.

Expenditure: Freeze, Cuts and Postponements

This is always the least popular item in the austerity package (as if austerity wasn’t already a killjoy), especially when there are talks of cuts to the public service wage bill and related items. And if any serious fiscal consolidation were to take place, it would have to do something about the 94Bn expenditure on human resources, especially the higher echelon. Though cutting expenditure is not on the table, it would be interesting to see the effect of a freeze on half the civil service payroll, with a 2% annual increase for the lower echelon. Let us not forget that for the last couple of years, the average annual salary was 192,000dhs per annum, i.e. 65% more than the average annual income per household, and about 3 times the median income per household. If anything, households where at least one breadwinner works in the civil service could be earning more than 83% of all the households in Morocco. Fairness dictates some of these civil servants need to see their taxpayer-funded salaries trimmed a bit.
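The pay comparison above can be run backwards; the implied household figures are inferences from the post's own ratios, not official statistics:

```python
# The pay comparison run backwards: an average civil service salary of
# 192,000dhs that is 65% above the average household income and about 3x
# the median implies the figures below (inferred from the post's own
# ratios, not official statistics).
civil_service_avg = 192_000
avg_household_income = civil_service_avg / 1.65     # ~116,400dhs
median_household_income = civil_service_avg / 3     # 64,000dhs
```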
The other juggernaut is the Compensation Fund: never, since the early 1980s, has household consumption been so heavily subsidized, and yet the large gap in consumption and standards of living creeps in, stronger than ever. A complete overhaul of the fund will have an initial negative effect on household consumption, but then again, it should not last more than 10 quarters (based on domestic exogenous shocks), or 15 quarters if exogenous effects from foreign trade are taken into account; this means any unpopular reform needs to be undertaken in the very first year, so that the negative effects eventually die away before election season. My plan subsidizes about 20% of the median consumption basket to the benefit of 60% of Moroccan households, costs about 25Bn in 2012, and is indexed to household consumption growth. The poorest 10% receive an annual cash relief between 7,200 and 9,500 dirhams. Incidentally, it cuts the subsidies bill to half its current budget, ensures strategy-proof allocation of subsidies to those who genuinely need it, and does not harm middle-class standards of living.

The Debt, Rates and PSBR

This whole austerity problem is not out there to serve a sinister right-wing dogma: our fiscal house was quite in order for the past decade, and yet we did not bother to push for continuous reforms; instead, the past government chose an unnecessarily large tax cut (from 4 to 7Bn in 2007-2008) for the wealthiest, while nothing was done to close loopholes and tax breaks for the privileged few. Obviously, these tax cuts and preferential treatment were funded by increasing public sector borrowings: they went from 51Bn in 2007 to 65.7Bn in 2012, and that number can be expected to increase even further.
What the government fails to understand -and so would Paleo-Keynesians in the process- is that public borrowings are crowding out small businesses and individuals; this is even more perverse as small companies doing business with public service procurement are punished twice: the budget pays them late, and takes away the existing liquidities from M3. Big business is secured in its day-to-day financing; it is the small guy who takes the fall for the growing public debt. Accordingly, there is a need to introduce a ‘debt ceiling’ mechanism, where over-borrowing is subject to a floor vote in Parliament, and conditioned on a commitment by ministerial departments to cut or freeze spending over the same period of time the newly issued debt matures; for instance, a 5-year treasury bond has to be matched with spending cuts/freezes whose effect is likely to last 5 years as well. In this particular example, the expected borrowings cannot go beyond 5% of M3, or 47Bn in 2012.

Bottom Line: What Will You Bring Us, Mr Moorish?

Blood, Toil, Tears and Sweat. Well, almost. Unfortunately, making the deficit disappear while fighting government debt is mission impossible; if anything, there will be a large deficit in 2012 (about 7% of GDP), but it gradually disappears, with the first surplus reached by 2020. If anything, the effects of this 5-year austerity plan show around 2018, too late for the 2016 general elections. On the other hand, the size of government relative to GDP would have shrunk from the current 44% to 25% by 2021, with all public services and welfare mechanisms in place. The deficit for 2012, projected to be 55Bn, would gradually go down until it reaches a 20Bn surplus – or 12Bn if 8Bn in dividends are not taken into account. We would, however, have left the debt danger zone by then, with a projected overall public debt ratio of 50% by mid-2014 to 2015, not to mention a robust 3% growth in public investment.
“The Path of Prosperity” vs “The Light at the End of the Tunnel”

If anything, the Moroccan economy would look a lot healthier by 2018: a lighter, better and fairer government touch, lower tax burdens, lower rates and sustainable deficits and public debt. As always, any of these reform proposals assumes incredible courage among our elected officials, and a sheer willingness to take on special interests, lobbies and established rents. And most of all, an unwavering sense of social justice, because fiscal consolidation, whatever its initial motive, tends to fall harder on the weak and treat the middle class harshly. Only a keen interest in keeping suffering at the lowest possible level can bring about the broadest consensus around austerity; for this, like so many other policies, a sense of purpose is needed, carried by committed, responsible politicians.
# Difference between $T(A) = A$ and $T^{-1}(A) = A$

I am a bit confused about the title. Let $$T:X\rightarrow X$$ be a map where $$X \subset \mathbb{R}^n$$, and let $$A\subset X$$. I know that $$T^{-1}(A) = \{x\in X: T(x)\in A\},$$ and that if $$A$$ is $$T$$-invariant, we have $$T(A) = A.$$ In this case, $$A$$ is a bit like an attractor. My questions are: 1. Suppose $$T^{-1}(A)=A$$; can we say $$T(A) = A$$, or can we only say $$T(A)\subset A$$? 2. Suppose $$T(A) = A$$; can we say $$T^{-1}(A) = A$$? I think 2. is not correct: if $$A$$ admits a basin of attraction, say $$V$$, with $$A\subset V\subset X$$, then it could be that $$A$$ is a proper subset of $$T^{-1}(A)$$.

Let $$X=\mathbb{R}$$, and let $$T:X\to X$$ be given by $$T(x)=x^2$$. For the first one, if $$A=X$$, then $$T^{-1}(A)=A$$, but $$T(A)=[0,\infty)$$, which is a proper subset of $$A$$. For the second one, if $$A=[0,\infty)$$, then $$T(A)=A$$, but $$T^{-1}(A)=X$$, which properly contains $$A$$.
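The same asymmetry can be checked mechanically on a finite analogue. Below, $T(x)=|x|$ on a small finite set plays the role of $x\mapsto x^2$ on $\mathbb{R}$; the sets chosen are my own illustrative choices, not from the answer above:

```python
# Finite analogue: X = {-2, ..., 2}, T(x) = |x|,
# mimicking T(x) = x^2 on R (non-injective, image = "non-negative" part).
X = {-2, -1, 0, 1, 2}

def T(x):
    return abs(x)

def image(A):
    # T(A) = {T(x) : x in A}
    return {T(x) for x in A}

def preimage(A):
    # T^{-1}(A) = {x in X : T(x) in A}
    return {x for x in X if T(x) in A}

A = {0, 1, 2}                 # analogue of [0, infinity)
print(image(A) == A)          # True:  T(A) = A
print(preimage(A) == A)       # False: T^{-1}(A) = X, strictly bigger than A
```

So $T(A)=A$ does not force $T^{-1}(A)=A$: the preimage also picks up every point that merely maps into $A$.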
# The Solow Model

NOTE: The Growth Economics Blog has moved sites.

This is another idea for modifying how to teach the Solow model. One thing I’d like to do is go immediately to including productivity – it follows cleanly from the simplest growth model. Second, I think it might be nice to work with the K/Y ratio immediately. In this way, I think you can actually skip using the whole “k-tilde” thing. And, *gasp*, do away with the traditional Solow diagram.

The simplest growth model doesn’t allow for transitional growth, and this is due to the fact that it does not allow for capital, a factor of production that can only be slowly accumulated over time. The Solow Model is a standard model of economic growth that includes capital, and will be better able to account for the transitional growth that we see in several countries.

Production in the Solow Model takes place according to the following function

$\displaystyle Y = K^{\alpha}(AL)^{1-\alpha}. \ \ \ \ \ (1)$

${K}$ is the stock of physical capital used in production, and ${A}$ and ${L}$ are defined just as they were in our simple growth model. So the production function here is just a modification of the simple model to include capital. The coefficient ${\alpha}$ is a weight telling us how important capital or ${AL}$ are in determining output.

To analyze this model, we’re going to rewrite the production function. Divide both sides of the function by ${Y^{\alpha}}$, giving us

$\displaystyle Y^{1-\alpha} = \left(\frac{K}{Y}\right)^{\alpha} (AL)^{1-\alpha} \ \ \ \ \ (2)$

and then take both sides to the ${1/(1-\alpha)}$ power, which gives us the following expression

$\displaystyle Y = \left(\frac{K}{Y}\right)^{\alpha/(1-\alpha)} AL. \ \ \ \ \ (3)$

In per capita terms, this is

$\displaystyle y = \left(\frac{K}{Y}\right)^{\alpha/(1-\alpha)} A. \ \ \ \ \ (4)$

Output per worker thus depends not just on ${A}$, but also on the capital-output ratio, ${K/Y}$.
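As a quick numerical sanity check, the rewritten form (3) returns exactly the same output as the original Cobb-Douglas form (1). The parameter values below are arbitrary illustrations, not from the post:

```python
# Check that Y = K^a (AL)^(1-a)  and  Y = (K/Y)^(a/(1-a)) * A*L agree.
alpha = 1/3
K, A, L = 8.0, 2.0, 5.0   # illustrative values

Y = K**alpha * (A * L)**(1 - alpha)                    # eq. (1)
Y_rewritten = (K / Y)**(alpha / (1 - alpha)) * A * L   # eq. (3)

print(Y)                             # about 9.283
print(abs(Y - Y_rewritten) < 1e-9)   # True: the two forms agree
```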
So to understand the role of capital in economic growth, we need to understand the capital-output ratio and how it changes over time. We’ll start by looking at the balanced growth path, and then turn to situations where the economy is not on the balanced growth path (BGP). One fact about the BGP is that the return to capital, ${r}$, is constant. The return to capital is ${r = \alpha Y/K}$, which depends (negatively) on the capital-output ratio (the return to capital is just the marginal product of capital). If ${r}$ is constant on the BGP, then it must be that ${K/Y}$ is constant on the BGP as well. What does this mean? It means that ${K/Y}$ can have a level effect on output per worker, but has no growth effect. To see this more clearly, take logs of output per worker, $\displaystyle \ln y(t) = \frac{\alpha}{1-\alpha} \ln\left(\frac{K}{Y}\right) + \ln A(t) \ \ \ \ \ (5)$ and then plug in what we know about how ${A(t)}$ moves over time, $\displaystyle \ln y(t) = \frac{\alpha}{1-\alpha} \ln\left(\frac{K}{Y}\right) + \ln A(0) + gt. \ \ \ \ \ (6)$ The capital-output ratio affects the intercept of this line — a level effect — alongside ${A(0)}$. The slope of this line — the growth rate — is still ${g}$. The capital/output ratio is constant along the BGP, and has no effect on the growth rate on the BGP. But what if the economy is not on the BGP? Then it will be the case that ${K/Y}$ affects the growth rate of output per worker, because the ${K/Y}$ ratio will not be constant. More precisely, the growth rate of capital/output is $\displaystyle \frac{\dot{K/Y}}{K/Y} = \frac{\dot{K}}{K} - \frac{\dot{Y}}{Y}. \ \ \ \ \ (7)$ So the ${K/Y}$ ratio will change if capital grows more quickly or more slowly than output. First, capital accumulates as follows $\displaystyle \dot{K} = s Y - \delta K \ \ \ \ \ (8)$ where ${\dot{K}}$ is the change in the capital stock. 
${s}$ is the savings rate, the fraction of output that the economy sets aside to invest in new capital goods, so that ${sY}$ is the total amount of new investment. ${\delta}$ is the depreciation rate, the fraction of the existing capital stock that breaks or becomes obsolete at any given moment. To find the growth rate of capital, divide through the above equation by ${K}$ to get

$\displaystyle \frac{\dot{K}}{K} = s\frac{Y}{K} - \delta. \ \ \ \ \ (9)$

You can see that the growth rate of capital depends on the capital/output ratio itself. The growth rate of output is

$\displaystyle \frac{\dot{Y}}{Y} = \alpha \frac{\dot{K}}{K} + (1-\alpha)\frac{\dot{A}}{A} + (1-\alpha)\frac{\dot{L}}{L}. \ \ \ \ \ (10)$

Now, with (7), and using what we know about growth in capital and output, we have

$\displaystyle \frac{\dot{K/Y}}{K/Y} = (1-\alpha)\left(s\frac{Y}{K} - \delta - g - n \right) \ \ \ \ \ (11)$

where we’ve plugged in that ${\dot{A}/A = g}$, and ${\dot{L}/L = n}$. Re-arranging a bit, the capital-output ratio is growing if

$\displaystyle \frac{K}{Y} < \frac{s}{\delta + n + g}, \ \ \ \ \ (12)$

and shrinking if the capital/output ratio is larger than the value on the right-hand side. In other words, if the capital stock is relatively small, then it will have a tendency to grow faster than output, raising the ${K/Y}$ ratio. Eventually ${K/Y = s/(\delta+n+g)}$, the steady state value, and the ${K/Y}$ ratio stops changing.

What is happening to growth in output per worker? If ${K/Y < s/(\delta+n+g)}$ then the ${K/Y}$ ratio is growing, and so output per worker is growing faster than ${g}$. So the temporarily fast growth in output per worker in Germany or Japan would be because they found themselves with a ${K/Y}$ ratio below their steady state value. How would this occur? It’s easier to see how this works if we re-write the ${K/Y}$ ratio slightly

$\displaystyle \frac{K}{Y} = \frac{K}{K^{\alpha}(AL)^{1-\alpha}} = \left(\frac{K}{AL}\right)^{1-\alpha}.
\ \ \ \ \ (13)$

From this we can see that the ${K/Y}$ ratio would be particularly low if the capital stock, ${K}$, were to be reduced. This is what happened in Germany, to a large extent, after World War II. The capital stock was destroyed, so ${K/AL}$ fell sharply. This made ${K/Y}$ fall below the steady state value, which meant that there was growth in the ${K/Y}$ ratio, and so growth in output per worker greater than ${g}$.

A slightly different situation describes South Korea. There, we can think of there being a level effect on ${A}$, an advance in productivity. This also makes ${K/AL}$ fall sharply, and again causes growth in ${K/Y}$ and growth in output per worker faster than ${g}$. But in both this case and Germany’s, as the ${K/Y}$ ratio grows it approaches the steady state value and growth in output per worker slows down to ${g}$ again.

# The Simplest Growth Model

This is an idea for a new way of introducing growth theory. Given that productivity growth is the source of long-run growth, it seems to make sense to start with that, rather than with the Solow model.

Let’s write down a very simple model of economic growth. Let total output ${Y}$ be determined by

$\displaystyle Y = A L \ \ \ \ \ (1)$

where ${A}$ is a measure of labor productivity, and ${L}$ is the number of workers. If we divide through by ${L}$, then we get a measure of output per worker. To keep notation clean, let ${y = Y/L}$ be output per worker, so that now we have

$\displaystyle y = A \ \ \ \ \ (2)$

as our model of economic growth. Basically, output per worker is simply equal to labor productivity ${A}$. From this we know that the time path of output per worker is simply the same as the time path of labor productivity, ${A}$. So what determines the time path of labor productivity?
We’ll assume that it is growing at a constant rate, meaning that it goes up by the same percent every period of time,

$\displaystyle A(t) = A(0) e^{g t}. \ \ \ \ \ (3)$

Here, we’ve written ${A(t)}$ to be clear that we mean labor productivity at any given time ${t}$. ${A(0)}$ is labor productivity in the initial moment of time. The exponential term says that labor productivity grows at the rate ${g}$ over time. The exponential term implies, perhaps not surprisingly, exponential growth. You get exponential growth when something goes up by the same percent every period of time. If ${g = 0.02}$, then we have 2% growth. At time zero, labor productivity is just ${A(0)}$. When ${t=2}$, then ${A(2) = A(0)e^{.02(2)} = 1.041 A(0)}$, or labor productivity is a little more than 4% higher than at time zero. When ${t=10}$, ${A(10)=A(0)e^{.02(10)}=1.221 A(0)}$, or labor productivity is more than 22% higher than at time zero.

It may not seem obvious, but output per worker in the U.S. and most other developed nations displays exponential growth. Our model matches that, as

$\displaystyle y(t) = A(0) e^{g t}. \ \ \ \ \ (4)$

These countries also tend to have a similar growth rate of about 1.8%, or ${g=0.018}$. Seeing this in a figure, though, is difficult. Graphing ${y}$ over time for the U.S. gives you a curve that quickly accelerates upwards and is almost off the page. Graphs like this will also make it difficult to compare countries to one another. For that reason, among others, we like to work with the natural log of output per worker, ${\ln{y(t)}}$. Taking natural logs of ${y(t)}$ gives us

$\displaystyle \ln{y(t)} = \ln{A(0)} + g t. \ \ \ \ \ (5)$

This is an equation that says the natural log of output per worker is a linear function of time, ${t}$. If we graph ${\ln{y(t)}}$ against ${t}$, we get a straight line, similar to the trend line we drew in the figure for U.S. output per worker.
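The arithmetic in the examples above is easy to reproduce. A minimal sketch (normalizing $A(0)=1$):

```python
import math

A0 = 1.0   # normalize A(0) = 1
g = 0.02   # 2% growth

def A(t):
    # A(t) = A(0) * e^(g t), eq. (3)
    return A0 * math.exp(g * t)

print(round(A(2), 3))    # 1.041 -> a little more than 4% above A(0)
print(round(A(10), 3))   # 1.221 -> more than 22% above A(0)

# ln A(t) is linear in t with slope g, eq. (5):
print(round(math.log(A(10)) - math.log(A(5)), 3))   # 0.1 = g * (10 - 5)
```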
We can calculate the growth rate of output per worker by taking the derivative of (5) with respect to time. This results in the following

$\displaystyle \frac{\dot{y}}{y} = g. \ \ \ \ \ (6)$

The value of ${A(0)}$ is fixed, so the derivative of it with respect to time is just zero. The notation ${\dot{y}/y}$ is a shorthand way of writing the growth rate. ${\dot{y}}$ is the absolute change in output per worker at any given moment, and by dividing by ${y}$ we get that change relative to the level of output per worker. This means that ${\dot{y}/y}$ is essentially the percent change in output per worker at any given moment.

That’s it for the simple growth model. Output per worker depends on labor productivity ${A(t)}$, and labor productivity grows at a constant rate ${g}$, which means output per worker grows at that same rate. Despite the mechanical simplicity, this model helps us be clear when we are talking about the growth experiences of different countries. It allows us to distinguish between two forces determining output per worker.

• Level effects: These refer to ${A(0)}$, the intercept of the line in (5)
• Growth effects: These refer to ${g}$, the slope of the line in (5)

Looking at the data over the long run, the general impression we get is that the growth rate ${g}$ is similar across countries, and they differ mainly because of level effects. That is, ${A(0)_{Japan}}$ appears to be lower than ${A(0)_{US}}$, but the growth rate ${g}$ is very similar. Theories of economic growth should be consistent with these facts. Things like investment rates, schooling, and social infrastructure are important determinants of level effects, ${A(0)}$, but they have no effect on the growth rate, ${g}$. Under plausible assumptions, theories of endogenous innovation will suggest that the growth rate, ${g}$, is identical across countries.

There are some facts, though, that this simple growth model cannot account for.
Namely, there are notable cases where output per worker grows more quickly or more slowly than ${g}$. China, for example, over the last 30 years has grown much faster than the U.S. or Japan. South Korea had a similar growth miracle, starting in about 1960 and lasting until the 2000’s. Germany, from World War II until about 1980, grew at a very accelerated pace compared to the U.S. in the same period. How do we reconcile these facts with the assertion above that ${g}$ is the same for all countries?

The key is noting that these growth accelerations were temporary. Germany grew very quickly, but after 1980 its growth rate fell back to a value nearly identical to the U.S. South Korea’s growth rate has diminished as well in the 2000’s. What appears to be happening is that once output per worker approaches a frontier level, generally defined by the U.S., growth slows down. While China continues to grow quickly, it has not approached the U.S. level of output per worker.

Looking at these countries, what appears to be happening is that there is a level effect, or their ${A(0)}$ has shifted up. However, it seems to take them a long time to move from their old level to the new, higher level. We call the temporary growth spurt that occurs when a country moves between levels transitional growth. Output per worker grows faster than ${g}$ temporarily – although this could last a few decades – but then growth returns to the rate ${g}$. Our simple model doesn’t offer a way of understanding this transitional growth. The first major extension we’ll make to this simple model is to add physical capital, which has to be slowly accumulated over time. Because of this slow accumulation, the economy will take an extended time to fully respond to a level effect.
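The transitional dynamics from the Solow discussion can be illustrated numerically. The sketch below (all parameter values are assumed for illustration only) iterates the law of motion for the capital-output ratio: starting with $K/Y$ below its steady state $s/(\delta+n+g)$, the ratio converges, and during that transition output per worker grows faster than $g$:

```python
# Illustrative parameters (assumed, not from the post).
alpha, s, delta, g, n = 1/3, 0.2, 0.05, 0.02, 0.01
steady_state = s / (delta + g + n)   # s/(delta+n+g) = 2.5

ky = 1.0    # K/Y starts below steady state, e.g. after capital destruction
dt = 0.1    # small step for a simple Euler iteration of eq. (11)
for _ in range(2000):
    # growth rate of K/Y: (1 - alpha) * (s * Y/K - delta - g - n)
    ky += dt * ky * (1 - alpha) * (s / ky - delta - g - n)

print(round(steady_state, 2))          # 2.5
print(abs(ky - steady_state) < 0.01)   # True: K/Y has converged
```

While `ky` is below `steady_state` its growth rate is positive and shrinking toward zero, which is exactly the temporary, fading growth spurt described above.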
# roave-backward-compatibility-check A tool that can be used to verify backward compatibility breaks between two versions of a PHP library. More information: https://github.com/Roave/BackwardCompatibilityCheck. • Check for breaking changes since the last tag: roave-backward-compatibility-check • Check for breaking changes since a specific tag: roave-backward-compatibility-check --from={{git_reference}} • Check for breaking changes between the last tag and a specific reference: roave-backward-compatibility-check --to={{git_reference}} • Check for breaking changes and output to Markdown: roave-backward-compatibility-check --format=markdown > {{results.md}}
# construct a Hecke character in MAGMA with given infinity type

I need to do some numerical computation on special values of a Hecke L-function $L(s,\chi)$. To do this, I want to construct a Hecke character in MAGMA, given that I know its infinity type. In other words, suppose we are working over a totally real field $K$. My Hecke character $\chi$ is first defined on the principal ideals by $$\chi((\alpha))=\prod sgn(\sigma_i(\alpha))^{m_i}|\sigma_i(\alpha)|^{n_i}$$ where the $\sigma_i$'s are the real embeddings, $m_i=0$ or $1$, and $n_i\in \mathbb{C}$. For some good values of $m,n$ this can be extended to all ideals, and thus becomes a Hecke character. I tried to read https://magma.maths.usyd.edu.au/magma/handbook/text/410 for related information. It looks like the functions related to Hecke Grössencharacters are close to what I need, but they require a CM field to work, while I want to deal, say, with $K=\mathbb{Q}(\sqrt{3})$.

Edit1: Thanks to Jeremy Rouse, I read http://magma.maths.usyd.edu.au/magma/handbook/text/1485 about creating a general L-series in Magma. It assumes that the shifts in the gamma factors are rational, while I want to do the following example: $K=\mathbb{Q}(\sqrt{3})$, $\chi(\alpha)=Sgn(\alpha\alpha')(\alpha/\alpha')^{i\pi/R}$ where $R$ is the regulator (so that $\chi$ is $1$ on units), and $L(s,\chi)=\sum_{I\subset O_K}\chi(I)N(I)^{-s}$, whose gamma factor is $$\nu(s)=(d\pi^{-2})^{s/2}\Gamma(s/2+1/2+\pi i/2R)\Gamma(s/2+1/2-\pi i/2R).$$

K:=QuadraticField(3);
C<i>:=ComplexField();
pi:=Pi(C);
d:=AbsoluteDiscriminant(K);
d;
r:=Regulator(K);
mu:=pi/2/r;
mu;
L := LSeries(1, [mu,-mu], d, 0: Sign:=1);
N := LCfRequired(L);
N;

which results in "Runtime error in 'LSeries': elements of gamma must be integer or rational numbers". (So, as Jeremy commented later, this cannot be done with Magma. I will add some detail if I work it out.)

• I am skeptical that the $L$-function properly has "special values", as it is not motivic.
– kantelope Dec 8 '15 at 22:46
• @kantelope I should say that I want to evaluate it at some integer/half integer. – Ted Mao Dec 8 '15 at 23:11

It sounds like Magma does not currently include the exact functionality that you want to use. An alternative that you could try is manually building the $L$-function you wish to evaluate numerically (see the section of the Magma documentation "Arithmetic Geometry - $L$-functions - Constructing a general L-series"). This accesses the algorithms (primarily based on work of Tim Dokchitser) for computing $L$-functions. I have manually built and evaluated many $L$-functions (including, for example, symmetric power $L$-functions of modular forms).

• Thanks for the information. However, I am still confused about my case. The "General L-series" Magma page has an assumption on the gamma factor $\nu(s)=\Gamma(s+\lambda_1 /2)\ldots\Gamma(s+\lambda_d /2)$. Now we just have real places and there will be $\Gamma(s/2)$ type factors only. Does it still work? – Ted Mao Dec 8 '15 at 18:23
• Or, can you give me an example, say $\chi(\alpha)=Sgn(\alpha\bar{\alpha}) (\alpha\bar{\alpha}^{-1})^{i\pi/R}$ where $R$ is the regulator (so $\chi$ is 1 on the units), and $L(s)=\sum_{I\subset O_K} \chi(I) N(I)^{-s}$, where $K=\mathbb{Q}(\sqrt{3})$. In this case $\nu(s)=(d\pi^{-2})^{s/2} \Gamma(s/2+1/2+i\pi/2R)\Gamma(s/2+1/2-i\pi/2R)$ – Ted Mao Dec 8 '15 at 18:31
• So, after playing with this for a while, it turns out you cannot do this in Magma. The form of the gamma factors in the Magma documentation is a typo - instead they are expected to be $\Gamma\left(\frac{s+\lambda_{1}}{2}\right) \cdots$. However, Magma expects the $\lambda_{i}$ to be rational, and this is a problem in your example! This is an artifact of the Magma implementation - if you use Tim Dokchitser's original PARI script, you'll be fine. Another possibility is Michael Rubinstein's "lcalc". – Jeremy Rouse Dec 8 '15 at 20:05
Original software publication | Volume 15, 100456, March 2023

# MoreThanSentiments: A text analysis package

Open Access | Published: December 21, 2022

## Highlights

• Convert non-trivial text quantification methods into simple functions.
• Domain-agnostic information extraction algorithms.
• Work efficiently with large text corpora.
• Facilitate text characterization beyond sentiment analysis, counting of words, and computing readability.

## Abstract

Text mining on a large corpus of data has gained utility and popularity over recent years owing to advancements in information retrieval and machine learning methods. However, popular text mining software packages mainly focus on either sentiment analysis or semantic meaning extraction, requiring pretraining on a large corpus of text data. In comparison, MoreThanSentiments provides computation of newer text attribution measures, including boiler score, specificity, redundancy, and hard info, which have been proposed in accounting analytics literature. Our software package, available in Python, is flexible in terms of parameter setting and is adaptable to different applications. Through this package, we seek to simplify the process of deploying nontrivial information extraction techniques published in domain-specific text analysis research into domain-agnostic analytics applications.
## Keywords

Table 1. Code metadata

| Metadata | Value |
| --- | --- |
| Current code version | V 0.2.0 |
| Permanent link to code/repository used for this code version | https://github.com/SoftwareImpacts/SIMPAC-2022-293 |
| Permanent link to reproducible capsule | https://codeocean.com/capsule/3686195/tree/v1 |
| Legal code license | BSD 3-Clause License |
| Code versioning system used | Git |
| Software code languages, tools and services used | Python |
| Compilation requirements, operating environments and dependencies | tqdm (4.59.0), spacy (3.3.0), pandas (1.2.4), nltk (3.6.1) |
| If available, link to developer documentation/manual | https://github.com/jinhangjiang/morethansentiments/blob/main/README.md |
| Support email for questions | [email protected] |

## 1. Motivation and significance

Natural language processing, one of the major data mining methods, has been expanded and applied in many research fields in the past decades. Researchers have devoted considerable effort to studying unstructured text data by leveraging information extraction and retrieval techniques. Using pre-trained models, such as the sentiment analysis model provided by TextBlob [ • Lorla S. TextBlob documentation release 0.16.0, TextBlob. • E. Hutto C.J. A Parsimonious Rule-based Model for, Eighth International AAAI Conference on Weblogs and Social Media. ], allows users to deploy powerful models to tackle those tasks with less training time and fewer resources. Recent studies have also proposed newer methods, such as Bidirectional Encoder Representations from Transformers (BERT) [ • Devlin J. • Chang M.-W. • Lee K. • Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. ], for extracting the semantic meaning of text. However, text features from existing software packages in Python and R either use pre-trained, high-performance models that result in features that cannot be directly interpreted, or propose features that have a limited scope for explaining the nature of the text content.
Sentiment analysis software typically provides the polarity or aggregate negative/positive sentiments expressed in text. Text analysis software such as py-readability-metrics provides metrics, including Gunning Fog, SMOG, and Flesch-Kincaid, that focus on discerning the readability of the text. On the other hand, deep learning methods have been used to summarize longer text into short paragraphs [ • Yousefi-Azar M. • Hamey L. Text summarization using unsupervised deep learning. ]. MoreThanSentiments [ • Jiang J. • Srinivasan K. MoreThanSentiments. ] was motivated by users' need for nontrivial methods to quantify text and summarize its structure. Currently, the package supports the following text complexity metrics: Boilerplate [ • Lang M. • Stice-Lawrence L. Textual analysis and international financial reporting: Large sample evidence. ], a measurement of informativeness; Redundancy [ • Cazier R.A. • Pfeiffer R.J. 10-K disclosure repetition and managerial reporting incentives. ], a measurement of usefulness; Specificity [ • Hope O.-K. • Hu D. • Lu H. The benefits of specific risk-factor disclosures. ], a measurement of the quality of relating uniquely to a particular subject; and Relative Prevalence [ • Blankespoor E. The impact of information processing costs on firm disclosure choice: Evidence from the XBRL mandate. ], a measurement of hard information. This domain-agnostic package can easily be used for text quantification tasks in various projects. Additionally, we expect that the features in this package can serve as enablers for different downstream work.

## 2. Software description

In this section, we discuss the functionality of MoreThanSentiments, followed by demonstrations of the main functions.

### 2.1 Software architecture

The package, MoreThanSentiments, is implemented in Python. Currently, it is composed of one major module that supports all of the features:

• Read raw .txt files and
format the data into a pandas dataframe
• Clean and preprocess the text corpora
• Calculate Boilerplate, Redundancy, Specificity, and Relative Prevalence

### 2.2 Software functionalities

A boilerplate is a group of words (e.g., a tetragram) that may be omitted from a statement without altering its semantic meaning in textual analysis. It is, in other words, a measurement of informativeness. The boiler score [ • Lang M. • Stice-Lawrence L. Textual analysis and international financial reporting: Large sample evidence. ] is determined by comparing the number of words in sentences that use boilerplate language to the total number of words. Thus, the higher the boiler score, the lower the informativeness of the given corpora. To identify a boilerplate, users first need to set its length. The default is four words, which is a tetragram. Then the whole corpora will be scanned, and the frequency of each boilerplate per document will be captured. By default, only the boilerplates appearing in at least five documents and in less than 75% of the total documents will be used to calculate the boiler scores. The frequency threshold is used as a bias control. The formula for Boilerplate is as follows: $Boilerplate = \frac{\sum W_s}{W_d}$ (1) where $W_s$ is the word count of a sentence that contains a boilerplate and $W_d$ is the word count of the whole document. The degree of Redundancy indicates how useful a corpus is. It is the proportion of very long sentences or phrases (e.g., 10-grams) that appear more than once in a given document. If a very long statement or phrase is used repeatedly, the author is imposing duplicated information, and that piece of information is marked as not useful. Similar to Boilerplate, the higher the Redundancy, the less useful the given corpus is. Specificity is a measure of the ability to relate specifically to a certain subject.
It is described as the number of specific entity names, numerical values, and times/dates scaled by the overall word count of a document. Currently, the Named Entity Recognizer from spaCy serves as the foundation for the Specificity function. Hard information in a given corpus is measured by Relative Prevalence. It compares the number of numerical values to the overall length of the text, and aids in assessing the amount of quantitative data in a particular text.

## 3. Illustrative examples

In this section, we illustrate three usage examples of MoreThanSentiments. For a full usage guide, please refer to the library documentation (https://pypi.org/project/MoreThanSentiments/). The dataset we used to experiment with is the BBC Business news dataset [ • Greene D. • Cunningham P. Practical solutions to the problem of diagonal dominance in kernel document clustering. ]. The code below demonstrates how to read the raw text data ("read_txt_files") and perform the text cleaning ("clean_data") as needed. For the data cleaning function ("clean_data"), we offer the following options:

• lower: make all the words lowercase
• punctuations: remove all punctuation from the corpus
• number: remove all digits from the corpus
• unicode: remove all types of unicode characters from the corpus
• stop_words: remove the stopwords from the corpus

The following code illustrates how to calculate the boiler score. It needs to be applied to the whole corpus instead of a single document. For the "Boilerplate" function, we offer the following options:

• input_data: this function requires tokenized documents.
• n: number of the ngrams to use. The default is 4.
• min_doc: when building the ngram list, ignore the ngrams that have a document frequency strictly lower than the given threshold. The default is 5 documents; 30% of the number of documents is recommended. When the parameter is given as a decimal (e.g., 0.3), it will be read as a percentage.
• get_ngram: if this parameter is set to "True", it will return a dataframe with all the ngrams and the corresponding frequencies, and the "min_doc" parameter will become ineffective.

## 4. Impact

Our proposed software package contains functions that can be beneficial to research conducted in multiple disciplines, including but not limited to accounting, finance, information systems, marketing, management science, information sciences, applied computer science, and applied linguistics. For example, the boiler scores of financial disclosures could indicate the extent to which firms tend to adapt common phrases from other firms or reuse statements from their previous disclosures. The trends in disclosure reporting behavior regarding boilerplate, specificity, and hard information could help explain the role of disclosure scripting in firm image, performance, and market behavior. Another application that examines Specificity, Boilerplate, and Redundancy could be the impact of pre-written bot responses to customer queries. Which textual characteristics of bot responses are most appreciated by end users, and how they contribute to problem resolution, can be an interesting research question to address. A third potential application is a comparison of the text characteristics of spam mail versus regular email. One could expect boiler scores for regular emails to be much lower than boiler scores for spam emails, as spam emails often tend to use common ‘key’ phrases geared towards fear-mongering and click-baiting of end users. Though measures such as Boilerplate and Redundancy have been widely used in the accounting literature [ • Lang M. • Stice-Lawrence L. Textual analysis and international financial reporting: Large sample evidence. , • Cazier R.A. • Pfeiffer R.J. 10-K disclosure repetition and managerial reporting incentives. , • Hope O.-K. • Hu D. • Lu H. The benefits of specific risk-factor disclosures. , • Blankespoor E.
The impact of information processing costs on firm disclosure choice: Evidence from the XBRL mandate. ], recent studies have considered such measures in applications such as predicting crowdfunding success [ S. Pu, K. Srinivasan, AIS Electronic Library ( AISeL ) Are Project Narrative Attributes Indicative of Pre-order Campaign Success on Crowdfunding Platforms ? – A Text-Mining Approach Are Project Narrative Attributes Indicative of Pre-order Campaign Success on Crowdfunding Platf, in: MWAIS 2022 PROCEEDINGS, 2022. ]. Our software code has been widely downloaded from the GitHub repository. Therefore, we decided to convert the code to a software package for ease of use and program replicability. ## 5. Conclusions We propose a new software package called MoreThanSentiments that includes a list of text characterization features. The textual features we present via the python software package are unavailable elsewhere as reproducible code or software for general-purpose applications. These features originate from multiple studies in the accounting analytics discipline focusing on gleaning a variety of quantifiable information about the financial disclosure of firms. We make these quantitative features available for general-purpose applications by allowing the flexibility to generate the features using simple functions with user-defined parameters. Our package facilitates text characterization beyond sentiment analysis, counting of words, and computing readability. The future development of MoreThanSentiments will focus on expanding its capabilities in terms of the text attribution measures it can compute [ • Davis A.K. • Piger J.M. • Sedor L.M. Beyond the numbers: Measuring the information content of earnings press release language. , • Li F. The information content of forward-looking statements in corporate filings—A naïve Bayesian machine learning approach. , • v. Brown S. • Tucker J.W. Large-sample evidence on firms’ year-over-year MD & a modifications. 
], as well as making the software more user-friendly and adaptable to different applications. This will involve continuing to improve the underlying machine learning algorithms and information retrieval methods, as well as incorporating user feedback to ensure that the software meets the needs of a wide range of users. By providing a flexible and easy-to-use package for text mining on large corpora of data, MoreThanSentiments has the potential to become an essential tool for researchers and practitioners in a variety of fields. ## Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## References • Lorla S. TextBlob documentation release 0.16.0, TextBlob. • E. Hutto C.J. A Parsimonious Rule-based Model for, Eighth International AAAI Conference on Weblogs and Social Media. 2014: 18 (https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/viewPaper/8109) • Devlin J. • Chang M.-W. • Lee K. • Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. 2018 (http://arxiv.org/abs/1810.04805) • Yousefi-Azar M. • Hamey L. Text summarization using unsupervised deep learning. Exp. Syst. Appl. 2017; 68: 93-105https://doi.org/10.1016/j.eswa.2016.10.017 • Jiang J. • Srinivasan K. MoreThanSentiments. 2022https://doi.org/10.5281/zenodo.6853351 • Lang M. • Stice-Lawrence L. Textual analysis and international financial reporting: Large sample evidence. J. Account. Econ. 2015; 60: 110-135https://doi.org/10.1016/j.jacceco.2015.09.002 • Cazier R.A. • Pfeiffer R.J. 10-K disclosure repetition and managerial reporting incentives. J. Financial Rep. 2017; 2: 107-131https://doi.org/10.2308/jfir-51912 • Hope O.-K. • Hu D. • Lu H. The benefits of specific risk-factor disclosures. Rev. Account. Stud. 2016; 21: 1005-1045https://doi.org/10.1007/s11142-016-9371-1 • Blankespoor E. 
The impact of information processing costs on firm disclosure choice: Evidence from the XBRL mandate. J. Account. Res. 2019; 57: 919-967.
- Greene D., Cunningham P. Practical solutions to the problem of diagonal dominance in kernel document clustering. ACM Int. Conf. Proc. Ser. 2006; 148: 377-384. https://doi.org/10.1145/1143844.1143892
- Pu S., Srinivasan K. Are project narrative attributes indicative of pre-order campaign success on crowdfunding platforms? A text-mining approach. MWAIS 2022 Proceedings, 2022.
- Davis A.K., Piger J.M., Sedor L.M. Beyond the numbers: Measuring the information content of earnings press release language. Contemp. Account. Res. 2012; 29: 845-868. https://doi.org/10.1111/j.1911-3846.2011.01130
- Li F. The information content of forward-looking statements in corporate filings—A naïve Bayesian machine learning approach. J. Account. Res. 2010; 48: 1049-1102. https://doi.org/10.1111/j.1475-679X.2010.00382
- Brown S.V., Tucker J.W. Large-sample evidence on firms' year-over-year MD&A modifications. J. Account. Res. 2011; 49: 309-346. https://doi.org/10.1111/j.1475-679X.2010.00396
# New conditions on ground state solutions for Hamiltonian elliptic systems with gradient terms

Document Type: Research Paper

Authors

1 School of Mathematics and Statistics, Central South University, Changsha, 410083, Hunan; Department of Mathematics, Xiangnan University, Chenzhou, 423000, Hunan, P.R. China

2 School of Mathematics and Statistics, Central South University, Changsha, 410083, Hunan, P.R. China

3 School of Mathematics and Statistics, Central South University, Changsha, 410083, Hunan, P.R. China

Abstract

This paper is concerned with the following elliptic system:
$$\left\{ \begin{array}{ll} -\triangle u + b(x)\cdot\nabla u + V(x)u=g(x, v), \\ -\triangle v - b(x)\cdot\nabla v + V(x)v=f(x, u), \end{array} \right.$$
for $x \in \mathbb{R}^{N}$, where $V$, $b$ and $W$ are 1-periodic in $x$, and $f(x,t)$, $g(x,t)$ are super-quadratic. In this paper, we give a new technique to show the boundedness of Cerami sequences and establish the existence of ground state solutions under mild assumptions on $f$ and $g$.

### History

- Receive Date: 18 July 2013
- Revise Date: 15 July 2014
- Accept Date: 15 July 2014
Home build details

Setting up the Docker workflow

Modified 2018-09-28 by Andrea Censi

This section shows how to use the Docker functionality and introduces some monitoring tools and workflow tips.

Requires: You can ping and SSH into the robot, as explained in Unit B-5 - Duckiebot Initialization.

Results: You have set up the Docker workflow.

The Portainer interface

Modified 2018-09-28 by Andrea Censi

It makes sense to read this only once the network is established, as explained in Unit B-5 - Duckiebot Initialization. In particular, you need to be able to ping and SSH to the robot.

Try to open the Portainer interface:

http://hostname.local:9000/#/containers

This will show the containers that are running.

Communicating with Docker on the Duckiebot using the command line

Modified 2018-10-04 by Russell Buchanan

The following commands can be run on your laptop but will affect the Duckiebot. You never need to log in to the Duckiebot via SSH, though that could be an alternative workflow.

You can set the variable DOCKER_HOST to point to the Duckiebot:

laptop $ export DOCKER_HOST=hostname.local

If you do, then you may omit every instance of the switch -H hostname.local.

Health checks

Modified 2018-10-04 by Russell Buchanan

The container duckietown/rpi-health arrived only recently in the default configuration (Sep 27). If you have a previous SD card, you have to run it using:

laptop $ docker -H hostname.local run --device /dev/vchiq -p 8085:8085 -d duckietown/rpi-health:master18

If some of the containers are marked as "unhealthy", fix the problem before continuing. In particular, the container duckietown/rpi-health checks for some common hardware problems. To access detailed information about the hardware health, click the "logs" icon (second icon to the right of the orange "unhealthy" label). Alternatively, open the URL http://hostname.local:8085 in your browser.
Search for the status and status_msgs output:

{
  "status": "error",
  "status_msgs": [
    "Error: PI is throttled",
    "Error: Under-voltage",
    "Warning: PI throttling occurred in the past.",
    "Warning: Under-voltage occurred in the past."
  ]
  ...
}

The throttling and under-voltage warnings have to do with the power supply. Note that the Raspberry Pi can be damaged by an inadequate power supply, so fix these as soon as possible.

Seeing files on the Duckiebot

Modified 2018-09-28 by Andrea Censi

On the Duckiebot there is a directory /data that will contain interesting files. To access its content, you have two options.

From another computer, you can see the contents of /data by visiting the URL:

http://hostname.local:8082

Otherwise, you can log in via SSH and take a look at the contents of /data:

laptop $ ssh hostname.local ls /data

Building workflow

Modified 2018-09-28 by Andrea Censi

Finally, we want to make sure that the Docker daemon on the robot can build successfully. To verify that, follow the rpi-duckiebot-simple-python tutorial available here.
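The status JSON above can also be checked programmatically instead of by eye. A minimal sketch, assuming the rpi-health endpoint returns exactly the fields shown in the excerpt (the payload below is copied from it; the exact field set may vary between rpi-health versions):

```python
import json

# Hypothetical payload in the format shown above, as served on
# http://hostname.local:8085 by the rpi-health container.
payload = """
{
  "status": "error",
  "status_msgs": [
    "Error: PI is throttled",
    "Error: Under-voltage",
    "Warning: PI throttling occurred in the past.",
    "Warning: Under-voltage occurred in the past."
  ]
}
"""

health = json.loads(payload)
# Split the messages into hard errors (fix now) and past warnings.
errors = [m for m in health["status_msgs"] if m.startswith("Error")]
warnings = [m for m in health["status_msgs"] if m.startswith("Warning")]

print(health["status"])   # → error
print(len(errors))        # → 2
print(len(warnings))      # → 2
```

In a real check you would fetch the JSON from the URL instead of embedding it; the parsing and classification stay the same.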
# How can I draw this plot?

I used the following command on the VCF file to calculate runs of homozygosity (ROH). My question is: how can I produce this plot?

bcftools roh -G30 --AF-dflt 0.4 file.vcf

Tags: bcftools • roh-calling • roh • samtools

Answer: I don't know exactly the format of the files produced by this command, but if you can use R, I'm pretty sure you can plot them using karyoploteR, and maybe with the help of CopyNumberPlots to easily get the variant allele frequencies from the VCF. You can check the karyoploteR tutorial to find some more information and examples.

Comment:

ST [2]Sample [3]Chromosome [4]Position [5]State (0:HW, 1:AZ) [6]Quality (fwd-bwd phred score)
ST Sample1 chr1 14907 0 3.0
ST Sample1 chr1 14930 0 95.5
ST Sample1 chr1 15118 0 99.0
ST Sample1 chr1 15211 0 99.0
ST Sample1 chr1 15274 0 90.8
ST Sample1 chr1 15820 0 90.0
ST Sample1 chr1 16378 0 99.0

Hi bernatgel, according to the command, a file in the above format is generated, in which the ROH state is estimated, but I do not know how to calculate "heterozygosity as dosage". Thank you.

Comment: Heys, did you manage to plot it?
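One way to get from the `bcftools roh` output shown above to a plot is to first collapse the per-site ST records into ROH segments, which can then be drawn as horizontal bars with matplotlib or karyoploteR. A sketch of the parsing step (the input lines below are hypothetical but follow the column layout of the excerpt: ST, sample, chromosome, position, state with 0:HW and 1:AZ, quality):

```python
def roh_segments(lines):
    """Group consecutive AZ-state (1) sites into (chrom, start, end) runs."""
    segments = []
    current = None  # open segment as [chrom, start, end]
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "ST":
            continue  # skip headers and non-ST records
        chrom, pos, state = fields[2], int(fields[3]), fields[4]
        if state == "1":
            if current is None:
                current = [chrom, pos, pos]
            elif current[0] == chrom:
                current[2] = pos          # extend the open run
            else:
                segments.append(tuple(current))
                current = [chrom, pos, pos]
        elif current is not None:
            segments.append(tuple(current))  # HW site closes the run
            current = None
    if current is not None:
        segments.append(tuple(current))
    return segments

example = [
    "ST Sample1 chr1 100 0 99.0",
    "ST Sample1 chr1 200 1 95.0",
    "ST Sample1 chr1 300 1 90.0",
    "ST Sample1 chr1 400 0 99.0",
]
print(roh_segments(example))  # → [('chr1', 200, 300)]
```

Each returned segment can then be plotted per chromosome (e.g. with `matplotlib`'s `broken_barh`, or exported as a BED-like table for karyoploteR's `kpPlotRegions`).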
1. ## standard deviation changes

There are 15 numbers on a list, and the mean is 25. The smallest number on the list is changed from 12.9 to 1.29.

a. Is it possible to determine by how much the mean changes? If so, by how much does it change?
b. Is it possible to determine the value of the mean after the change? If so, what is the value?
c. Is it possible to determine by how much the median changes? If so, by how much does it change?
d. Is it possible to determine by how much the standard deviation changes? If so, by how much does it change?

I have a problem with part d; I don't have the answer. We need to compute the new ∑x and ∑(x²):

new ∑x = old ∑x + 1.29 − 12.9
new ∑(x²) = old ∑(x²) − 12.9² + 1.29²

The old ∑x is actually known (15 · 25 = 375), but the old ∑(x²) is not provided, so we can't calculate the change in the standard deviation. Correct me if I am wrong.

2. ## Re: standard deviation changes

A) by $\dfrac{1.29-12.9}{15}$

B) $\dfrac{25\cdot 15+1.29-12.9}{15}$

C) not possible

D) not possible

3. ## Re: standard deviation changes

Originally Posted by SlipEternal
A) by $\dfrac{1.29-12.9}{15}$
B) $\dfrac{25\cdot 15+1.29-12.9}{15}$
C) not possible
D) not possible

I think the median is unchanged, since the total number of elements is unchanged.

4. ## Re: standard deviation changes

Originally Posted by xl5899
I think the median is unchanged, since the total number of elements is unchanged.

Example: 12.9, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 37.1 - Median = 25. After the change in the datapoint, Median = 25 (median unchanged).

0, 0, 0, 0, 0, 0, 0, 12.9, 51, 51.85, 51.85, 51.85, 51.85, 51.85, 51.85 - Median = 12.9. After the change in the datapoint, Median = 1.29 (median changed!).

The median can definitely change. So long as I choose a sum that adds to 375 and includes one datapoint that is 12.9, I can make the median start as any number and end as any number I choose.

5.
## Re: standard deviation changes The original problem said "The smallest number on the list is changed from 12.9 to 1.29". In your example, the "smallest number on the list" is 0, not 12.9, and does not change. Your example is not valid. 6. ## Re: standard deviation changes Originally Posted by HallsofIvy The original problem said "The smallest number on the list is changed from 12.9 to 1.29". In your example, the "smallest number on the list" is 0, not 12.9, and does not change. Your example is not valid. Good point. I did miss that. Thank you. The OP is correct that the median cannot change.
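The thread's conclusions for parts (a)-(c) can be verified numerically. A quick Python check using a hypothetical list of 15 numbers chosen to have mean 25 and smallest entry 12.9 (any such list gives the same mean shift):

```python
import statistics

# Hypothetical data: sums to 375, so the mean is 25, and min is 12.9.
data = [12.9] + [25.0] * 13 + [37.1]
assert min(data) == 12.9 and statistics.mean(data) == 25

# Replace the smallest entry, 12.9, by 1.29.
changed = sorted([1.29] + data[1:])

# (a)/(b): the mean shifts by (1.29 - 12.9)/15, regardless of the other data.
delta_mean = (1.29 - 12.9) / 15
print(round(delta_mean, 4))                   # → -0.774
print(round(statistics.mean(changed), 4))     # → 24.226

# (c): the smallest value is still in first position after the change,
# so the 8th ordered value -- the median -- is untouched.
print(statistics.median(data) == statistics.median(changed))  # → True
```

Part (d) cannot be checked this way precisely because ∑(x²) depends on the unknown data, which is the point of the "not possible" answer.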
# (U1.03.00) The main principles of operation of Code_Aster

Summary: This document presents, in summary form, the principles of operation of Code_Aster and the main rules of use. It remains a general description; the reader should refer to the other documents for all the details of use.

# 1 General principles

Code_Aster makes it possible to carry out structural analyses for thermal, mechanical, thermomechanical or coupled thermo-hydro-mechanical phenomena, with linear or nonlinear behavior, as well as internal acoustics computations. The nonlinearities concern the material behaviors (plasticity, metallurgical effects, viscoplasticity, damage, hydration and drying of concrete, ...), large deformations or large rotations, and contact with friction. Refer to the presentation leaflet of Code_Aster for an overview of the various functionalities.

Typical industrial studies require meshing and graphic visualization tools which do not belong to the code. However, several such tools are usable for these operations via interface procedures integrated into the code.

To carry out a study, the user must, in general, prepare two data files:

- the mesh file: this file gives the geometrical and topological description of the mesh without choosing, at this stage, the type of finite element formulation or the physical phenomenon to model. Some studies may use several mesh files. The mesh file is generally produced by commands integrated into Code_Aster, from a file coming from a mesh generator used as preprocessor (SALOME, GIBI, GMSH, IDEAS, ...). The information contained in this file is specific to Code_Aster.
It defines the conventional entities of the finite element method:

- nodes: points defined by a name and their Cartesian coordinates in 2D or 3D space,
- meshes (elements): named plane or volume topological figures (point, segment, triangle, quadrangle, tetrahedron, ...) to which different finite element types, boundary conditions or loadings can be applied.

To improve safety of use and ease of modeling and post-processing, it is possible to define in the mesh file higher-level entities sharing some property, which can then be referred to directly by name:

- node groups: named lists of node names,
- mesh groups: named lists of mesh names.

Note, from now on, that all the geometrical entities handled (nodes, meshes, node groups, mesh groups) are named by the user and can always be referred to by their name (8 characters at most). The user can exploit this to identify explicitly certain parts of the studied structure in order to ease the examination of the results. The numbering of the entities is never exposed: it is only used internally to point to the values of the various associated variables.

- the command file: contains the command text, which allows one:
  - to read and, if required, enrich the data of the mesh file (or other external sources of results),
  - to assign the modeling data to the entities of the mesh,
  - to chain the various processing operations: specific computations, post-processing,
  - to write the results to various files.

The command text refers to the names of the geometrical entities defined in the mesh file. It also makes it possible to define new groups at any moment. From the data-processing point of view, these two files are ASCII files in free format.
Their main features are listed below.

Syntax of the mesh file:

- line length restricted to 80 characters,
- the allowed characters are:
  - the 26 capital letters A-Z and the 26 lowercase letters a-z (converted automatically to capitals, except in texts provided between quotes),
  - the ten digits 0-9 and the signs used to represent numbers ( + - . ),
  - the underscore character _ usable in keywords or names,
- white space is always a separator,
- the character % marks the beginning of a comment, up to the end of the line.

The other reading rules are specified in booklet [U3.01.00].

Syntax of the command file:

- syntax based on the Python language, which makes it possible to include instructions of this language,
- the character # marks the beginning of a comment, up to the end of the line,
- commands must start in column 1, unless they belong to an indented block (loops, tests).

The other reading rules are specified in booklet [U1.03.01].

# 2 Mesh

## 2.1 General information

The Aster mesh file can be written (for truly elementary meshes) or modified manually with any text editor. This file is read in free format, structured in records or sub-files introduced by imposed keywords. Various utilities have been developed to ease the import of meshes into Code_Aster:

- conversion utilities, which convert a mesh file produced by another software package (IDEAS, GIBI, GMSH, ...) into a mesh file in the Aster format,
- the command for reading a mesh file in MED format, produced by Salome.

## 2.2 The Aster mesh file

The structure and syntax of the Aster mesh file are detailed in booklet [U3.01.00]. The Aster mesh file is read from the first line up to the first occurrence of a line beginning with the word FIN. This keyword is compulsory. The mesh file is structured in independent sub-files, each starting with a keyword and ending with the imposed keyword FINSF.
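As a toy illustration of the mesh-file reading rules above (whitespace as separator, % starting a comment, automatic upper-casing outside quoted texts), here is a sketch of a line tokenizer; it is an illustration of the rules, not the actual Aster reader:

```python
def tokenize(line):
    """Split one Aster mesh-file line into tokens per the stated rules."""
    line = line.split("%", 1)[0]          # '%' starts a comment to end of line
    tokens = []
    for tok in line.split():              # any whitespace is a separator
        if tok.startswith("'") and tok.endswith("'"):
            tokens.append(tok)            # quoted text: case preserved
        else:
            tokens.append(tok.upper())    # otherwise folded to capitals
    return tokens

print(tokenize("coor_2d  n1 0. 0.  % first node"))
# → ['COOR_2D', 'N1', '0.', '0.']
```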
This file must contain at least two sub-files:

- the coordinates of all the nodes of the mesh in a Cartesian coordinate system, 2D (COOR_2D) or 3D (COOR_3D),
- the description of all the meshes (TRIA3, HEXA20, etc.), to which physical properties, finite elements, boundary conditions or loadings will then be assigned.

It can possibly include node groups (GROUP_NO) or mesh groups (GROUP_MA) to ease the assignment and post-processing operations. It is essential to create explicitly, at this stage, the meshes located on the boundaries where loadings and boundary conditions are applied. The mesh file will therefore contain:

- the necessary edge meshes of the 2D elements,
- the necessary face meshes of the 3D solid elements,
- the associated edge and/or face mesh groups.

This constraint becomes bearable when an interface is used, which does the work from the indications provided during the meshing operation (see documents PRE_IDEAS [U7.01.01] or PRE_GIBI [U7.01.11]).

## 2.3 The conversion utilities

These interfaces make it possible to convert the files, formatted or not, used by various software packages or computer codes, into the conventional format of the Aster mesh file. The currently available interfaces are those for the mesh generator IDEAS, the mesh generator GIBI of CASTEM 2000, and the mesh generator GMSH.

### 2.3.1 The IDEAS universal file

The convertible file is the universal file defined by the I-DEAS documentation (see booklet [U3.03.01]). The IDEAS version used is recognized automatically. An IDEAS universal file consists of several independent blocks called "data sets". Each "data set" is framed by the character string -1 and is numbered. The "data sets" recognized by the interface are described in booklet [U3.03.01].

### 2.3.2 The GIBI mesh file

The interface is invoked using command PRE_GIBI [U7.01.11].
The convertible file is the ASCII file written by the save command ("SAUVER FORMAT") of CASTEM 2000. The precise description of the interface is given in [U3.04.01].

### 2.3.3 The GMSH mesh file

The interface is invoked using command PRE_GMSH [U7.01.31]. The convertible file is the ASCII file written by the SAVE command of GMSH.

## 2.4 Mesh file in MED format

The interface is invoked using command LIRE_MAILLAGE (FORMAT='MED') [U4.21.01]. MED (modeling and exchange of data) is a neutral data format developed by EDF R&D and CEA (the French atomic energy agency) for data exchange between computer codes. MED files are binary, portable files. Reading a MED file with LIRE_MAILLAGE makes it possible to recover a mesh produced by any other code able to create a MED file, on any other machine. This data format is in particular used for exchanging mesh and result files between Code_Aster and Salome, or with the mesh refinement tool HOMARD. The precise description of the interface is given in [U7.01.21].

## 2.5 The use of incompatible meshes

Although the finite element method recommends the use of regular meshes, without discontinuity, to obtain correct convergence towards the solution of the continuous problem, certain models may require incompatible meshes: on either side of a boundary, the meshes do not match. The connection of two such meshes is then managed in the command file by the keyword LIAISON_MAIL of command AFFE_CHAR_MECA. This in particular gives the possibility to connect a finely meshed area to another where a coarse mesh is sufficient.

Starting from an initial mesh, it is possible to adapt the mesh in order to minimize the discretization error, using the macro-command MACR_ADAP_MAIL, which calls the HOMARD software. The adaptive meshing software HOMARD operates on meshes composed of segments, triangles, quadrangles, tetrahedrons, hexahedrons and pentahedrons. This mesh adaptation takes place after a first computation with Code_Aster.
An error indicator will have been computed; according to its value, mesh by mesh, HOMARD modifies the mesh. It is also possible to interpolate temperature or displacement fields at the nodes from the old mesh to the new one [U7.03.01].

# 3 Commands

## 3.1 The command file

The command file contains a set of commands, expressed in a language specific to Code_Aster (which must respect Python syntax). These commands are analyzed and executed by a software layer of Code_Aster called the "supervisor".

## 3.2 The role of the supervisor

The supervisor carries out various tasks, in particular:

- a phase of checking and interpretation of the command file,
- a stage of execution of the interpreted commands.

These tasks are detailed in document [U1.03.01]. The command file is processed from the line containing the first call to procedure DEBUT() or procedure POURSUITE(), up to the first occurrence of command FIN(). The commands located before DEBUT() or POURSUITE() and after FIN() are not executed, but must be syntactically correct.

Syntactic checking phase:

- reading and syntactic checking of each command; any detected syntax error produces a message, but the analysis continues,
- checking that every concept used as an argument was declared in a preceding command as the product concept of an operator; it is also checked that the type of each concept corresponds to the type required for the argument.

Execution stage: the supervisor activates successively the various operators and procedures, which carry out the planned tasks.
## 3.3 The principles and syntax of the command language

The modular design of Code_Aster makes it possible to present the code as a succession of independent commands:

- the procedures, which do not directly produce results but ensure, among other things, the management of exchanges with external files,
- the operators, which carry out a computation or data-management operation and produce a result concept to which the user gives a name.

These concepts represent data structures that the user can handle. They are typed at the time of their creation and can only be used as input arguments of the corresponding type. Procedures and operators thus exchange the necessary information and values via the named concepts. The complete syntax of the commands and its implications for writing the command file are detailed in booklet [U1.03.01].

Here is an example of a few commands (extracted from the annotated example in [U1.05.00]):

mail = LIRE_MAILLAGE ()
mod1 = AFFE_MODELE (MAILLAGE = mail,
                    AFFE =_F (TOUT='OUI',
                              PHENOMENE='MECANIQUE',
                              MODELISATION='AXIS'))
f_y = DEFI_FONCTION (NOM_PARA = 'Y',
                     VALE = (0. , 20000. , 4. , 0. ))
charg = AFFE_CHAR_MECA_F (MODELE = mod1,
                          PRES_REP =_F (GROUP_MA = ('lfa', 'ldf'),
                                        PRES = f_y))
.....
res1 = MECA_STATIQUE (MODELE=mod1, ......)
res1 = CALC_CHAMP (reuse=res1,
                   RESULTAT=res1,
                   MODELE=mod1,
                   CONTRAINTE=('SIGM_ELNO',),
                   DEFORMATION=('EPSI_ELNO',))

Some general points can be observed in the preceding example:

- any command starts in the first column,
- the list of operands of a command is necessarily between brackets, as are lists of elements,
- a concept name can appear only once in the command text as a product concept, on the left of the sign =,
- the re-use of an existing concept as a product concept is possible only for the operators specified for this purpose.
When this possibility is used (re-entrant concept), the command uses the reserved keyword "reuse".

- a command is made up of one or more keywords (mot_clé) or factor keywords (mot_clé_facteur), the latter being themselves composed of a list of keywords between brackets and preceded by the prefix _F. In the example above, the command AFFE_CHAR_MECA_F uses the keyword MODELE and the factor keyword PRES_REP, which is composed of the two keywords GROUP_MA and PRES.

Re-use of a concept is done:

- either with overwriting of the initial values, for example the in-core factorization of a stiffness matrix:

matass = FACTORISER (reuse=matass, MATR_ASSE=matass)

- or with enrichment of the concept.

## 3.4 The overload rule

An overload rule, usable in particular for all the assignment operations, has been added to the rules of use of a factor keyword with several lists of operands:

- the assignments are done by superimposing the effects of the different keywords,
- in the event of conflict, the last keyword overrides the preceding ones.

Example: we want to assign different materials MAT1, MAT2 and MAT3 to certain meshes:

mater = AFFE_MATERIAU (MAILLAGE = mon_mail,
                       AFFE = (_F (TOUT = 'OUI', MATER = MAT1),
                               _F (GROUP_MA = 'mail2', MATER = MAT2),
                               _F (GROUP_MA = 'mail1', MATER = MAT3),
                               _F (MAILLE = ('m7', 'm8'), MATER = MAT3)))

- One starts by assigning material MAT1 to all the meshes.
- Then material MAT2 is assigned to the mesh group mail2, which contains the meshes m8, m9 and m10.
- Finally material MAT3 is assigned to the mesh group mail1 (m5, m6 and m7) and to the meshes m7 and m8, which is a source of conflict since the mesh m7 is already part of the group mail1. The overload rule is then applied, and one finally obtains the following material field:

MAT1: meshes m1 m2 m3 m4
MAT2: meshes m9 m10
MAT3: meshes m5 m6 m7 m8

The progressive effect of the various material assignments is illustrated in the table below.
| Mesh | After 1st assignment | After 2nd assignment | After 3rd assignment | Final material field |
|------|----------------------|----------------------|----------------------|----------------------|
| m1   | MAT1                 | MAT1                 | MAT1                 | MAT1                 |
| m2   | MAT1                 | MAT1                 | MAT1                 | MAT1                 |
| m3   | MAT1                 | MAT1                 | MAT1                 | MAT1                 |
| m4   | MAT1                 | MAT1                 | MAT1                 | MAT1                 |
| m5   | MAT1                 | MAT1                 | MAT3                 | MAT3                 |
| m6   | MAT1                 | MAT1                 | MAT3                 | MAT3                 |
| m7   | MAT1                 | MAT1                 | MAT3                 | MAT3                 |
| m8   | MAT1                 | MAT2                 | MAT2                 | MAT3                 |
| m9   | MAT1                 | MAT2                 | MAT2                 | MAT2                 |
| m10  | MAT1                 | MAT2                 | MAT2                 | MAT2                 |

## 3.5 The remanence rule

The preceding overload rule must be supplemented by another rule specifying what happens when several quantities can be assigned in each occurrence of a factor keyword.

CHMEC1 = AFFE_CHAR_MECA (MODELE=MO,
                         FORCE_INTERNE= (_F (TOUT = 'OUI', FX = 1.),
                                         _F (GROUP_MA = 'GM1', FY = 2.)))

The overload rule says that the second occurrence of FORCE_INTERNE overloads the first. But what is the value of FX on a mesh belonging to GM1? Was it erased by the second occurrence? If only the overload rule were applied, FX would not be defined on GM1. The remanence rule makes it possible to preserve the value of FX: when it applies, FX keeps the value assigned previously. All the elements of the model then have a value for FX, whereas the elements of GM1 have a value for both FX and FY.

## 3.6 Memory bases associated with a study

Code_Aster relies, for the management of all the data structures associated with the various concepts handled, on the JEVEUX library. The latter manages the memory space requested by the user at the time of execution (memory parameter expressed in megabytes). This space is frequently insufficient to store all the data structures in main memory; the library then manages the exchanges between main memory and auxiliary storage in files. When created by the code, each entity is attached to a direct-access file. This file can be regarded as a database, since it contains, at the end of the execution, the directory (names and attributes) needed to exploit all the value segments that it contains.
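To see why the material table of § 3.4 comes out as it does, the overload rule can be mimicked in a few lines of Python: each AFFE occurrence is applied in order, and on a conflict the last assignment simply overwrites. This is an illustration of the rule, not Aster code (mesh and group names are taken from the example):

```python
# Meshes m1..m10 and the two groups from the AFFE_MATERIAU example.
meshes = [f"m{i}" for i in range(1, 11)]
groups = {"mail1": ["m5", "m6", "m7"], "mail2": ["m8", "m9", "m10"]}

# The four AFFE occurrences, in order of appearance.
assignments = [
    ("ALL", None, "MAT1"),
    ("GROUP_MA", "mail2", "MAT2"),
    ("GROUP_MA", "mail1", "MAT3"),
    ("MAILLE", ["m7", "m8"], "MAT3"),
]

field = {}
for kind, target, mat in assignments:
    if kind == "ALL":
        targets = meshes
    elif kind == "GROUP_MA":
        targets = groups[target]
    else:
        targets = target
    for m in targets:
        field[m] = mat  # later occurrences overwrite earlier ones (overload)

print([m for m in meshes if field[m] == "MAT3"])  # → ['m5', 'm6', 'm7', 'm8']
```

The resulting field matches the last column of the table: MAT1 on m1-m4, MAT2 on m9-m10, MAT3 on m5-m8.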
Code_Aster uses several databases:

- the GLOBALE database, which contains all the concepts produced by the operators, as well as the content of certain catalogues on which the concepts rely; the file associated with this base allows the later continuation of a study, and must thus be managed by the user,
- the other databases, used only by the supervisor and the operators during an execution, do not require any particular intervention of the user.

To carry out a study is to request the sequence of several commands:

- procedures to exchange files with the external world,
- operators to create product concepts in the course of the modeling and computation operations.

The commands corresponding to this sequence of operations can be executed in various ways, starting from the single executable module of Code_Aster:

- in only one sequential execution, without intervention of the user,
- by splitting the study into several successive executions, with re-use of the former results; from the second execution on, the access to the database is done with the procedure POURSUITE; at the time of a continuation, the last command can be repeated if it stopped prematurely (lack of time, incomplete or incorrect data detected during execution, ...).

To manage these possibilities, note that three commands play a central role. They correspond to the procedures activating the supervisor:

- DEBUT(), compulsory for the first execution of a study,
- POURSUITE(), compulsory from the second execution of a study,
- FIN(), compulsory for all executions.

For a given study, one can submit command files having the following structure:

Note:

- The command INCLUDE makes it possible to include, in a flow of commands, the contents of another command file. This in particular allows one to keep a readable file of the main commands and to put bulky numerical data (e.g. definitions of functions) in annexed files.
- The command files can be split into several files which will be executed one after the other, with intermediate backup of the database. For that, it is necessary to define successive command files with the suffixes .com1, .com2, ..., .com9. The executions of these files are chained; the database of the last correct execution is preserved.

## 3.7 Aids to the definition of values

### 3.7.1 Substitution of values

It is possible to parameterize a command file. For example:

EPtub = 26.187E-3
Rmoy = 203.2E-3
Rext = Rmoy + (EPtub/2)

cara = AFFE_CARA_ELEM (MODELE = modele,
                       POUTRE =_F (GROUP_MA = tout,
                                   SECTION = 'CERCLE',
                                   CARA = ('R', 'EP'),
                                   VALE = (Rext, EPtub)))

### 3.7.2 Functions of one or more parameters

It is also often necessary to use quantities that are functions of other parameters. These can be:

- either defined in an external file read by the command LIRE_FONCTION,
- or defined in the command file by:
  - DEFI_CONSTANTE, which produces a function concept with a single constant value,
  - DEFI_FONCTION, which produces a function concept for a quantity depending on one real parameter,
  - DEFI_NAPPE, which produces a function concept for a list of functions of the same quantity, each element of the list corresponding to a value of another real parameter.

The concept produced by these operators is of type function and can only be used as an argument of operands which accept this type. The operators accepting an argument of type function have the suffix _F (e.g. AFFE_CHAR_MECA_F). In this case the functions are defined point by point, with a linear interpolation by default, hence piecewise affine. The functions created are discrete arrays of the quantities specified at creation. During a search for a value, according to the specified characteristics, one proceeds by direct search or by interpolation in the table (linear or logarithmic).
One can specify, at the creation of the function, how it is extended outside the domain of definition of the table, with various rules, or prohibit any extension.

- or defined by their analytical expression, with the operator FORMULE. For example:

OMEGA = 3.566
linst = (0., 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.20, 0.40)
F = FORMULE (NOM_PARA='INST', VALE='cos (OMEGA*INST)')
F1 = CALC_FONC_INTERP (FONCTION=F,
                       VALE_PARA=linst,
                       NOM_RESU='ACCE')

The analytical function $F(t)=\cos(\Omega t)$ is then tabulated by CALC_FONC_INTERP at the times of the list linst.

## 3.8 How to write a command file with EFICAS?

To write a Code_Aster command file, the most immediate way is to start from an example already written by others. In particular, the tests of Code_Aster often constitute a good starting base for a new model; they are documented in the validation documentation. But there is better: the tool EFICAS makes it possible to write a command file interactively and conveniently, by proposing for each command the list of the possible keywords, by checking the syntax automatically, and by giving access to the documentation of the user's manual (booklets [U4] and [U7]).
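The point-by-point, piecewise-affine representation described in § 3.7.2 can be illustrated outside the code. A minimal Python sketch (the tabulation mimics what CALC_FONC_INTERP does with the cos(OMEGA*INST) formula above; the `interp` helper is an illustration, not Aster API):

```python
import math

OMEGA = 3.566
linst = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06,
         0.07, 0.08, 0.09, 0.10, 0.20, 0.40]

# Tabulate F(t) = cos(OMEGA * t) at the instants of linst.
table = [(t, math.cos(OMEGA * t)) for t in linst]

def interp(table, x):
    """Piecewise-affine evaluation of a tabulated function."""
    for (t0, v0), (t1, v1) in zip(table, table[1:]):
        if t0 <= x <= t1:
            return v0 + (v1 - v0) * (x - t0) / (t1 - t0)
    raise ValueError("outside the domain of definition")

# At a tabulated point the interpolation matches the formula...
assert abs(interp(table, 0.05) - math.cos(OMEGA * 0.05)) < 1e-12
# ...between points it returns the chord, close to cos for a fine sampling.
print(round(interp(table, 0.015), 4))  # → 0.9984
```

Outside `linst` the helper raises, which corresponds to prohibiting extension beyond the domain of definition; the extension rules mentioned above would replace that branch.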
# 4 The main stages of a study

The main stages of a study are, in the general case:

- the preparation of the work, which ends after the reading of the mesh,
- the modeling, during which all the properties of the finite elements and of the materials, the boundary conditions and the loadings are defined and assigned,
- the computation itself, carried out by executing global resolution methods [U4.5-], which may rely on commands for computing and assembling matrices and vectors [U4.6-],
- the post-processing operations complementary to the computation [U4.8-],
- the printing of the results [U4.9-],
- the exchange of results with other software (graphic visualization for example) [U7.05-].

Another way of using Code_Aster consists in exploiting business tools, available in the code in the form of MACRO_COMMANDES; let us quote for example the dedicated tools:

- ASCOUF (modeling of elbows, cracked or with local wall thinning),
- ASPIC (modeling of piping junctions, cracked or not).

## 4.1 Starting the study and acquiring the mesh

We will not come back here to the possible splitting of the command file, which was presented in a preceding paragraph. The first executable command is:

DEBUT ()

The arguments of this command are useful only for maintenance actions or for very large studies.
For the reading of the mesh coming from an external mesh software, one can operate in two ways:
• convert the file of a software package into a file in the Code_Aster format by a separate execution, which possibly allows one to modify it manually and to preserve it:

DEBUT ()
PRE_IDEAS ()
FIN ()

The normal study will then be able to begin, for example, by:

DEBUT ()
ma = LIRE_MAILLAGE ()

• or convert the file right before the reading:

DEBUT ()
PRE_IDEAS ()
ma = LIRE_MAILLAGE ()

## 4.2 To assign data of modelization to the mesh

To build the modelization of a mechanical, thermal or acoustic problem, it is essential to assign to the topological entities of the mesh:
• a model of finite element,
• properties of the materials (constitutive law and parameters of the model),
• complementary geometrical or mechanical characteristics.

These assignments are made by various operators whose name is prefixed by AFFE_. The syntax of these operators and the way they operate rely on the rules already mentioned previously on the use of the factor keywords.

### 4.2.1 Definition of a field of assignment

To carry out an assignment, it is essential to define a field of assignment by reference to the names of the topological entities defined in the mesh file. Five keywords are usable for that, according to the specification of the operator:
• to refer to the whole mesh by TOUT='OUI',
• to assign to meshes by MAILLE=(list of names of meshes),
• to assign to mesh groups by GROUP_MA=(list of names of mesh groups),
• to assign to nodes by NOEUD=(list of names of nodes),
• to assign to node groups by GROUP_NO=(list of names of node groups).

### 4.2.2 To assign the type of finite element

On the meshes of the studied structure, which are at this stage only topological entities, it is essential to assign:
• one or more phenomena to study: 'MECANIQUE', 'THERMIQUE', 'ACOUSTIQUE';
• a model of finite element compatible with the topological description of the mesh.
This assignment induces an explicit list of degrees of freedom on each node and an interpolation model in the element. The operator used for this assignment is AFFE_MODELE [U4.41.01], which can be called several times on the same mesh; it uses the overload rule.

Note: For a study with several phenomena ('MECANIQUE', 'THERMIQUE'), it is essential to build a model for each phenomenon, by as many calls to AFFE_MODELE. On the other hand, for a given computation (mechanical, thermal, …) one needs one and only one model.

To know the characteristics of the various finite elements available, one will refer to the booklets [U2-] and [U3-].

### 4.2.3 To assign characteristics of materials

At this stage it is necessary to assign characteristics of material, with the associated parameters, to each finite element of the model (except for the discrete elements defined directly by a stiffness, mass and/or damping matrix). In other words, DEFI_MATERIAU is used to define a material and AFFE_MATERIAU is used to define a material field by association with the mesh. For a given computation, one needs one and only one material field. The validated characteristics of the material catalogue can also be used through the procedure INCLUDE_MATERIAU [U4.43.02]. A certain number of models of behavior are available: elastic, orthotropic elastic, thermal, acoustic, elastoplastic, elastoviscoplastic. Let us note that it is possible to define several characteristics of materials for the same material: elastic and thermal, elastoplastic, thermoplastic, …

### 4.2.4 To assign characteristics to the element types

For some element types used with the 'MECANIQUE' phenomenon, the geometrical definition deduced from the mesh does not make it possible to describe them completely.
The missing characteristics have to be assigned to the meshes:
• for shells: the thickness, constant on each mesh, and a reference frame for the representation of the stress state,
• for beams, bars and pipes: the characteristics of the cross-section, and possibly the orientation of this section around the neutral fiber.

These operations are performed by the operator AFFE_CARA_ELEM [U4.42.01], which uses the overload rule to simplify the writing of the command. Another possibility is offered by this operator: introducing directly into the model stiffness, mass or damping matrices on POI1 meshes (or nodes) or SEG2 meshes. These matrices correspond to discrete finite element types with 3 or 6 degrees of freedom per node, DIS_T or DIS_TR, which must be assigned at the time of the call to the operator AFFE_MODELE.

### 4.2.5 To define boundary conditions and loadings

These operations are, in general, essential. They are carried out by several operators whose name is prefixed by AFFE_CHAR or CALC_CHAR. On the same model, one will be able to carry out several calls to these operators to define, during the study, the boundary conditions and/or the loadings. The operators used differ with the phenomenon:
• 'MECANIQUE': AFFE_CHAR_CINE; AFFE_CHAR_MECA (data of the real type only); AFFE_CHAR_MECA_F (data of function type),
• 'THERMIQUE': AFFE_CHAR_THER (data of the real type only); AFFE_CHAR_THER_F (data of function type),
• 'ACOUSTIQUE': AFFE_CHAR_ACOU (data of the real type only).

Moreover, one can establish the seismic loading, to carry out a computation of the response in relative motion with respect to the supports, with the help of the command CALC_CHAR_SEISME.

The boundary conditions and loadings can be defined according to their nature:
• at the nodes,
• on edge meshes (edge or face) or on meshes supporting finite elements, created in the mesh file. On these meshes, the operator AFFE_MODELE has assigned the necessary types of finite elements.
For a detailed definition of these operators' operands and of the rules of orientation of the support meshes (global, local or arbitrary reference frame), one will refer to the documents [U4.44.01], [U4.44.02] and [U4.44.04].

The boundary conditions can be dealt with in two ways:
• by 'elimination' of the imposed degrees of freedom (for linear mechanical models implementing only kinematical boundary conditions (locked degrees of freedom) without linear relations). In this case the boundary conditions will be defined using the command AFFE_CHAR_CINE;
• by dualisation [R3.03.01]. This method, thanks to its greater generality, makes it possible to treat all types of boundary conditions (imposed degree of freedom, linear relations between degrees of freedom, …); it results in adding 2 Lagrange multipliers for each imposed degree of freedom or for each linear relation.

Each concept produced by a call to these operators, of type AFFE_CHAR, corresponds to an indissociable set of boundary conditions and loadings. In the computation commands, these concepts can be incorporated by providing for the operand CHARGE a list of concepts of this type.

## 4.3 To carry out computations by global commands

### 4.3.1 Thermal analysis

To compute the temperature field(s) corresponding to a linear or nonlinear thermal analysis (for evolutionary problems, the times of computation are specified by a previously defined list of real values), the commands to be used are:
• THER_LINEAIRE for a linear analysis [U4.54.01],
• THER_NON_LINE for a nonlinear analysis [U4.54.02],
• THER_NON_LINE_MO for a problem of mobile loads in steady state [U4.54.03].

The computations of the elementary and assembled matrices and vectors necessary to the implementation of the resolution methods are managed by these operators.
### 4.3.2 Static analysis

To compute the mechanical evolution of a structure subjected to a list of loadings:
• MECA_STATIQUE [U4.51.01]: linear behavior, with superposition of the effects of each loading,
• MACRO_ELAS_MULT [U4.51.02]: linear behavior, distinguishing the effects of each loading,
• STAT_NON_LINE [U4.51.03]: quasi-static evolution of a structure subjected to a load history, in small or large transformations, made of a material whose behavior is linear or nonlinear, possibly taking contact and friction into account.

If this mechanical computation corresponds to a thermoelasticity study, one will refer to a time of the thermal computation already carried out. If the material was defined with characteristics depending on temperature, they are interpolated at the temperature corresponding to the required time of computation. For coupled thermo-hydro-mechanical problems, the operator STAT_NON_LINE is used to solve the 3 problems (thermal, hydraulic and mechanical) simultaneously.

The computations of the elementary and assembled matrices and vectors necessary to the implementation of the resolution methods are managed by these operators.

### 4.3.3 Modal analysis

To compute the eigen modes and eigenvalues of the structure (corresponding to a vibratory problem or a buckling problem):
• MODE_ITER_SIMULT [U4.52.03]: computation of the eigen modes by simultaneous iterations; the eigenvalues and eigenvectors are real or complex,
• MODE_ITER_INV [U4.52.04]: computation of the eigen modes by inverse iterations; the eigenvalues and eigenvectors are real or complex,
• MACRO_MODE_MECA [U4.52.02]: modal analysis by automatically cutting the frequency interval into sub-intervals,
• MODE_ITER_CYCL [U4.52.05]: computation of the eigen modes of a structure with cyclic symmetry starting from a base of real eigen modes.

As a preliminary, these four operators require the computation of the assembled matrices [U4.61-].
### 4.3.4 Dynamic analysis

To compute the dynamic response, linear or nonlinear, of a structure, several operators are available; for example:
• DYNA_LINE_TRAN [U4.53.02]: time-domain dynamic response of a linear structure subjected to a transient excitation,
• DYNA_LINE_HARM [U4.53.02]: complex dynamic response of a linear structure subjected to a harmonic excitation,
• DYNA_TRAN_MODAL [U4.53.21]: transient dynamic response in generalized coordinates by modal recombination.

As a preliminary, these three operators require the computation of the assembled matrices [U4.61-].

• DYNA_NON_LINE [U4.53.01]: time-domain dynamic response of a nonlinear structure subjected to a transient excitation; this operator also computes the assembled matrices.

## 4.4 Results

The results produced by the operators carrying out finite element computations [U4.3-], [U4.4-] and [U4.5-] are of two principal types:
• either of type cham_no or cham_elem (field at nodes or field by element), for operators producing one field,
• or of the RESULTAT type strictly speaking, which gathers sets of fields.

In a concept of the RESULTAT type, a field is identified:
• by an access variable, which can be:
  • a simple sequence number referring to the order in which the fields were arranged,
  • a parameter predefined according to the type of the result concept:
    • frequency or mode number for a RESULTAT of the mode_meca type,
    • time for a RESULTAT of type evol_elas, evol_ther, dyna_trans or evol_noli;
• by a symbolic field name referring to the type of the field: generalized displacement, velocity, stress state, forces, …

Besides the access variables, other parameters can be attached to a kind of result concept.
The various fields are incorporated in a result concept:
• either by the operator which created the concept, a global command (MECA_STATIQUE, STAT_NON_LINE, …) or a simple command (MODE_ITER_SIMULT, DYNA_LINE_TRAN, …),
• or during the execution of a command which makes it possible to add a computation option in the form of a field by element or of a field at nodes (CALC_CHAMP); it is then said explicitly that one enriches the concept:

resul = operator (reuse=resul, RESULTAT=resul, …)

## 4.5 To exploit the results

All the preceding commands have allowed building various concepts, which can be exploited by the postprocessing operators:
• general postprocessing operators (see booklet [U4.81]), for example CALC_CHAMP, POST_ELEM, POST_RELEVE_T,
• fracture mechanics operators (see booklet [U4.82]), for example CALC_G,
• metallurgy operator: CALC_META,
• static mechanical postprocessing (see booklet [U4.83]), for example POST_FATIGUE, POST_RCCM,
• dynamic mechanical postprocessing (see booklet [U4.84]), for example POST_DYNA_ALEA, POST_DYNA_MODA_T,
• extraction operators:
  • of a field from a result concept: CREA_CHAMP [U4.72.04],
  • of a field in generalized coordinates for a dynamic computation on a modal basis: RECU_GENE [U4.71.03],
  • of a function of evolution of a component from a result concept: RECU_FONCTION [U4.32.03],
  • and of restitution of a dynamic response in the physical basis: REST_GENE_PHYS [U4.63.31],
• an operator of postprocessing of functions or of two-parameter functions, CALC_FONCTION, which allows searches of peaks, extrema, linear combinations, … [U4.32.04].

Lastly, two procedures, IMPR_RESU [U4.91.01] and IMPR_FONCTION [U4.33.01], allow the printing and possibly the creation of files exploitable by other software packages, in particular for graphic visualization. One will note in particular graphic visualization by IDEAS, GMSH or GIBI, whatever the mesh tool used at the beginning.
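The stages of this chapter can be summarized by a minimal command-file sketch. This is an illustrative outline only, not a validated study: the group names (ENCAST, FACE_SUP) and the numerical values are hypothetical, and the keyword details should be checked against the [U4] booklets:

```
DEBUT()

ma = LIRE_MAILLAGE()

mo = AFFE_MODELE(MAILLAGE=ma,
                 AFFE=_F(TOUT='OUI',
                         PHENOMENE='MECANIQUE',
                         MODELISATION='3D'))

acier = DEFI_MATERIAU(ELAS=_F(E=2.1E11, NU=0.3))

chmat = AFFE_MATERIAU(MAILLAGE=ma,
                      AFFE=_F(TOUT='OUI', MATER=acier))

# boundary conditions and loading gathered in one AFFE_CHAR_MECA concept
char = AFFE_CHAR_MECA(MODELE=mo,
                      DDL_IMPO=_F(GROUP_NO='ENCAST', DX=0., DY=0., DZ=0.),
                      PRES_REP=_F(GROUP_MA='FACE_SUP', PRES=1.E5))

# global linear static resolution
resu = MECA_STATIQUE(MODELE=mo,
                     CHAM_MATER=chmat,
                     EXCIT=_F(CHARGE=char))

IMPR_RESU(RESU=_F(RESULTAT=resu))

FIN()
```

Each assignment operator refers back to the concepts produced before it (mesh, model, material field, load), which is the chaining described in sections 4.1 to 4.3.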
# 5 Print files and error messages

Code_Aster writes the information relative to the computation in three files, whose meaning is the following:

| File | Contents |
|---|---|
| ERREUR | Errors met during the execution. |
| MESSAGE | Information on the course of the computation: echo of the command file provided and its interpretation by Aster, execution time of each command, 'system' messages. |
| RESULTAT | Only the results expressly required by the user, and the error messages. |

Other files are used for the interfaces with the graphic examination programs. There exist various types of error messages. The messages emitted are directed to the output files according to their type:

| Code | Type of message | Output files |
|---|---|---|
| F | Error message; the execution stops after various printings. The concept created by the current command is lost. Nevertheless, the concepts produced previously are validated and the GLOBAL database is saved. Such a message is used when a serious error is detected which cannot allow the normal continuation of a Code_Aster command. | ERREUR, MESSAGE, RESULTAT |
| E | Error message; the execution continues for a while: this kind of message makes it possible to analyze a series of errors before the program stops. The emission of a message of type <E> is always followed by the emission of a message of type <F>. The concepts produced are validated and the GLOBAL database is available for a POURSUITE. | ERREUR, MESSAGE, RESULTAT |
| S | Error message; the concepts created during the execution, including by the current command, are validated by the supervisor, and the execution stops with a 'clean' closing of the global database, which is thus reusable in POURSUITE. This message makes it possible in particular to guard against a problem of convergence or of time limit during an iterative process. | ERREUR, MESSAGE, RESULTAT |
| A | Alarm message. The number of identical successive alarm messages is automatically restricted to 5. It is recommended that users who get messages of type A 'repair' their command file (or their mesh file) to make them disappear. | MESSAGE, RESULTAT |
| I | Information message of the supervisor. | MESSAGE |

NB: The exceptions behave exactly like <S> errors; they are in fact <S> errors adapted to a particular case.
# Force-based Cooperative Search Directions in Evolutionary Multi-objective Optimization

DOLPHIN - Parallel Cooperative Multi-criteria Optimization; LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe

Abstract: In order to approximate the set of Pareto optimal solutions, several evolutionary multi-objective optimization (EMO) algorithms transfer the multi-objective problem into several independent single-objective ones by means of scalarizing functions. The choice of the scalarizing functions' underlying search directions, however, is typically problem-dependent and therefore difficult if no information about the problem characteristics is known before the search process. The goal of this paper is to present new ideas of how these search directions can be computed \emph{adaptively} during the search process in a \emph{cooperative} manner. Based on the idea of Newton's law of universal gravitation, solutions attract and repel each other \emph{in the objective space}. Several force-based EMO algorithms are proposed and compared experimentally on general bi-objective $\rho$MNK landscapes with different objective correlations. It turns out that the new approach is easy to implement, fast, and competitive with respect to a $(\mu+\lambda)$-SMS-EMOA variant, in particular if the objectives show strong positive or negative correlations.

Document type: Conference papers
https://hal.inria.fr/hal-00765179
Contributor: Bilel Derbel
Submitted on: Thursday, April 4, 2013 - 3:15:59 PM
Last modification on: Tuesday, May 12, 2020 - 5:26:12 PM
Document(s) archivé(s) le: Friday, July 5, 2013 - 2:30:11 AM

### File
paperForces_authorVersion.pdf
Files produced by the author(s)

### Citation
Bilel Derbel, Dimo Brockhoff, Arnaud Liefooghe. Force-based Cooperative Search Directions in Evolutionary Multi-objective Optimization.
7th International Conference on Evolutionary Multi-Criterion Optimization, Mar 2013, Sheffield, United Kingdom. ⟨10.1007/978-3-642-37140-0_30⟩. ⟨hal-00765179⟩
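The abstract does not give the paper's exact force law, so the following is only an illustrative sketch of the gravitation-inspired idea: a search direction for one solution in objective space, pulled toward neighbouring solutions with a weight that decays as a power of distance. The function name, the exponent `alpha`, and the normalization are all assumptions, not the authors' definition:

```python
import math

def force_direction(fi, others, alpha=2.0):
    """Gravitation-like search direction for a solution with objective vector fi,
    attracted by the objective vectors in `others` (weight ~ 1/dist**alpha).
    Returns a unit vector (or the zero vector if all forces cancel)."""
    d = [0.0] * len(fi)
    for fj in others:
        diff = [b - a for a, b in zip(fi, fj)]
        dist = math.sqrt(sum(x * x for x in diff))
        if dist == 0.0:
            continue  # coincident points exert no usable direction
        w = 1.0 / dist ** alpha
        # accumulate the weighted unit vector pointing from fi toward fj
        d = [acc + w * x / dist for acc, x in zip(d, diff)]
    norm = math.sqrt(sum(x * x for x in d)) or 1.0
    return [x / norm for x in d]

# a solution at the origin attracted by a single neighbour at (1, 0)
print(force_direction([0.0, 0.0], [[1.0, 0.0]]))  # [1.0, 0.0]
```

Repulsion (as between too-close solutions, mentioned in the abstract) could be modeled the same way with a negative weight; how attraction and repulsion are balanced is exactly the kind of detail that requires the full paper.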
# Saving models¶

To save samples of the Macau model you can add save_prefix = "mymodel" when calling macau. This option will store all samples of the latent vectors, their mean vectors and link matrices to disk. Additionally, the global mean value that Macau adds to all predictions is also stored.

## Example¶

import macau
import scipy.io

## running factorization (Macau)
result = macau.macau(Y = ic50, Ytest = 0.2, side = [ecfp, None], num_latent = 32,
                     precision = 5.0, burnin = 100, nsamples = 500,
                     save_prefix = "chembl19")

## Saved files¶

The saved files for sample N for the rows are:
• Latent vectors: chembl19-sampleN-U1-latents.csv.
• Latent means: chembl19-sampleN-U1-latentmeans.csv.
• Link matrix (beta): chembl19-sampleN-U1-link.csv.
• Global mean value: chembl19-meanvalue.csv (same for all samples).

Equivalent files for the column latents are stored in U2 files.

## Using the saved model to make predictions¶

These files can be loaded with numpy and used to make predictions.

import numpy as np

## global mean value (common for all samples)
meanvalue = np.loadtxt("chembl19-meanvalue.csv")

N = 1
U = np.loadtxt("chembl19-sample%d-U1-latents.csv" % N, delimiter=",")
V = np.loadtxt("chembl19-sample%d-U2-latents.csv" % N, delimiter=",")

## predicting Y[0, 7] from sample 1
Yhat_07 = U[:,0].dot(V[:,7]) + meanvalue

## predict the whole matrix from sample 1
Yhat = U.transpose().dot(V) + meanvalue

Note that in Macau the final prediction is the average of the predictions from all samples. This can be accomplished by looping over all of the samples and averaging the predictions.

### Using the saved model to predict new rows (compounds)¶

Here we show an example of how to make a new prediction for a compound (row) that was not in the dataset, by using its side information and saved link matrices.
import numpy as np
import scipy.io

## loading side info for arbitrary compound (can be outside of the training set)
N = 1
lmean = np.loadtxt("chembl19-sample%d-U1-latentmean.csv" % N, delimiter=",")
V = np.loadtxt("chembl19-sample%d-U2-latents.csv" % N, delimiter=",")

## predicted latent vector for xnew from sample 1

## use predicted latent vector to predict activities across columns
Yhat = uhat.dot(V) + meanvalue

Again, to make good predictions you would have to change the example to loop over all of the samples (and compute the mean of the Yhat's).

### Tensor models¶

As in the matrix case, the tensor factorization can be saved using the save_prefix argument and later loaded from disk to make predictions. To make predictions we recall that the value of a tensor model is given by a tensor contraction of all latent matrices. Specifically, the prediction for the element Yhat[i,j,k] of a rank-3 tensor is given by

$\hat{Y}_{ijk} = \sum_{d=1}^D u^{(1)}_{d,i} u^{(2)}_{d,j} u^{(3)}_{d,k} + mean$

Next we show how to compute this prediction using numpy. Assuming we have run and saved a model named save_prefix = "mytensor" of a rank-3 tensor, we can load the latent matrices and make predictions using the np.einsum function.

import numpy as np

## global mean value (common for all samples)
meanvalue = np.loadtxt("mytensor-meanvalue.csv")

N = 1
U1 = np.loadtxt("mytensor-sample%d-U1-latents.csv" % N, delimiter=",")
U2 = np.loadtxt("mytensor-sample%d-U2-latents.csv" % N, delimiter=",")
U3 = np.loadtxt("mytensor-sample%d-U3-latents.csv" % N, delimiter=",")

## predicting Y[7, 0, 1] from sample 1
Yhat_701 = sum(U1[:,7] * U2[:,0] * U3[:,1]) + meanvalue

## predict the whole tensor from sample 1, using np.einsum
Yhat = np.einsum(U1, [0, 1], U2, [0, 2], U3, [0, 3]) + meanvalue

As before this is a prediction from a single sample. For better predictions we should loop over all of the samples and average their predictions (their Yhat's).
It is also possible to predict only slices of the full tensor using np.einsum:

## predict the slice Y[7, :, :] from sample 1
Yhat_7xx = np.einsum(U1[:,7], [0], U2, [0, 2], U3, [0, 3]) + meanvalue

## predict the slice Y[:, 0, :] from sample 1
Yhat_x0x = np.einsum(U1, [0, 1], U2[:,0], [0], U3, [0, 3]) + meanvalue

## predict the slice Y[:, :, 1] from sample 1
Yhat_xx1 = np.einsum(U1, [0, 1], U2, [0, 2], U3[:,1], [0]) + meanvalue

All 3 examples above give a matrix (rank-2 tensor) as a result. To get the prediction for a slice we replaced the full latent matrix (U1) with a single specific latent vector (U1[:,7]) and changed its indexing from [0, 1] to [0], as the indexing is now over a vector.
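The sample-averaging loop mentioned above can be sketched as follows. To keep the example self-contained, small synthetic latent matrices stand in for the files that would normally be read with np.loadtxt; the shapes and the prefix-based file naming follow the matrix examples above:

```python
import numpy as np

rng = np.random.default_rng(0)
meanvalue = 2.5
nsamples = 4

# stand-ins for np.loadtxt("chembl19-sample%d-U1-latents.csv" % N, ...) etc.
U_samples = [rng.standard_normal((8, 3)) for _ in range(nsamples)]  # row latents
V_samples = [rng.standard_normal((8, 5)) for _ in range(nsamples)]  # column latents

# average the per-sample predictions (not the latent matrices themselves)
Yhat = np.zeros((3, 5))
for U, V in zip(U_samples, V_samples):
    Yhat += U.transpose().dot(V) + meanvalue
Yhat /= nsamples
```

Averaging the predictions rather than the latents matters: each Gibbs sample is a valid factorization on its own, but the latent spaces of different samples are not aligned, so their matrices cannot be meaningfully averaged.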
# Varying the concentrations of half-cells

#### ~angel~

I'm not sure how to do this question. Calculate the reduction potential of a half-cell consisting of a platinum electrode immersed in a solution that is 2.0 M in Fe2+ and 0.2 M in Fe3+, at 25 °C. Thanks.

#### Dr.Brain

The Nernst equation is given by:

$E = E_o - \frac{0.059}{n} \log \frac{N_1}{N_2}$

The above half-cell reaction can be written as:

$Fe^{2+} \longrightarrow Fe^{3+} + e^-$

Here $n = 1$, as you can see in the cell reaction.
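Completing the calculation sketched in the reply: for the reduction Fe3+ + e- → Fe2+, the Nernst equation at 25 °C reads E = E° − (0.0592/n)·log10([Fe2+]/[Fe3+]). The standard potential is not given in the thread; assuming the textbook value E° ≈ +0.771 V for the Fe3+/Fe2+ couple, a quick numerical check:

```python
import math

E0 = 0.771             # assumed standard reduction potential of Fe3+/Fe2+, in volts
n = 1                  # one electron transferred
fe2, fe3 = 2.0, 0.2    # concentrations in mol/L

# Nernst equation at 25 C for the reduction Fe3+ + e- -> Fe2+
E = E0 - (0.0592 / n) * math.log10(fe2 / fe3)
print(round(E, 3))     # 0.712
```

Since [Fe2+]/[Fe3+] = 10, the logarithm is exactly 1 and the potential drops by one full 0.0592 V step below the standard value, to about 0.71 V.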
This page is intended to be a part of the Real Analysis section of Math Online. Similar topics can also be found in the Calculus section of the site.

# Limits of Polynomials and Rational Functions

Before we look at some theorems regarding the limits of polynomials and rational functions, we should first formally define each.

Definition: A function of the form $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ where $a_0, a_1, ..., a_n \in \mathbb{R}$ is said to be a polynomial of degree $n$. If $p(x)$ and $q(x)$ are polynomials, then the function $r(x) = \frac{p(x)}{q(x)}$, $q(x) \neq 0$, is said to be a rational function.

For example, the function $p : \mathbb{R} \to \mathbb{R}$ defined by $p(x) = 3 + 2x^2 + 4x^4$ is a polynomial, and the function $r : \mathbb{R} \setminus \{ 1 \} \to \mathbb{R}$ defined by $r(x) = \frac{2x + 3x^2}{1 - x}$ is a rational function. We will now look at some theorems regarding the limits of these functions.

Theorem 1: If $p(x) = a_0 + a_1x + ... + a_nx^n$, $a_0, a_1, ..., a_n \in \mathbb{R}$, is a polynomial function, then the limit at $x = c$ exists and $\lim_{x \to c} p(x) = p(c)$.

• Proof: Let $p(x) = a_0 + a_1x + ... + a_nx^n$ where $a_0, a_1, ..., a_n \in \mathbb{R}$ be a polynomial function. Then we have that:

(1) \begin{align} \lim_{x \to c} p(x) = \lim_{x \to c} a_0 + a_1x + ... + a_nx^n \\ \lim_{x \to c} p(x) = \lim_{x \to c} \left ( a_0 \right ) + \lim_{x \to c} \left ( a_1x \right )+ ... + \lim_{x \to c} \left ( a_nx^n \right ) \\ \lim_{x \to c} p(x) = a_0 \lim_{x \to c} \left ( 1 \right ) + a_1 \lim_{x \to c} \left ( x \right ) + ... + a_n \lim_{x \to c} \left ( x^n \right ) \end{align}

• Now recall that $\lim_{x \to c} 1 = 1$ and $\lim_{x \to c} x = c$.
Furthermore, from the Operations on Functions and Their Limits page, recall that since $\lim_{x \to c} x = c$, then $\lim_{x \to c} x^2 = \lim_{x \to c} x \cdot \lim_{x \to c} x = c^2$, …, $\lim_{x \to c} x^n = c^n$ (this can be proven by induction), and so:

(2) \begin{align} \quad \lim_{x \to c} p(x) = a_0 \cdot 1 + a_1 \cdot c + ... + a_n \cdot c^n \\ \lim_{x \to c} p(x) = a_0 + a_1c + ... + a_nc^n \\ \lim_{x \to c} p(x) = p(c) \quad \blacksquare \end{align}

Theorem 2: If $r(x) = \frac{p(x)}{q(x)}$ is a rational function where $q(c) \neq 0$, then the limit at $x = c$ exists and $\lim_{x \to c} r(x) = \frac{p(c)}{q(c)}$.

• Proof: Let $r(x) = \frac{p(x)}{q(x)}$ be a rational function. From Theorem 1, since $p(x)$ and $q(x)$ are polynomials, we have that $\lim_{x \to c} p(x) = p(c)$ and $\lim_{x \to c} q(x) = q(c)$. Therefore, by the Quotient Law for limits, $\lim_{x \to c} r(x) = \lim_{x \to c} \frac{p(x)}{q(x)} = \frac{p(c)}{q(c)}$, which is valid since $q(c) \neq 0$. $\blacksquare$
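Both theorems can be illustrated numerically with the example functions defined earlier on this page (plain Python; the tolerance 1e5·h is a loose illustrative bound, not a sharp estimate):

```python
def p(x):
    return 3 + 2 * x**2 + 4 * x**4       # the polynomial from the example

def r(x):
    return (2 * x + 3 * x**2) / (1 - x)  # the rational function, q(x) = 1 - x

c = 2.0  # q(c) = -1 != 0, so Theorem 2 applies at this point
for h in [1e-3, 1e-6, 1e-9]:
    # as x -> c, p(x) -> p(c) and r(x) -> r(c)
    assert abs(p(c + h) - p(c)) < 1e5 * h
    assert abs(r(c + h) - r(c)) < 1e5 * h
```

Of course this does not replace the proofs; it only shows the continuity they establish, namely that the limit of each function at c is its value at c.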
# Is it possible for an integral of the form int_a^oo f(x)\ dx, with lim_(x->oo)f(x)!=0, to still be convergent?

## If you view the integral as the area under the curve, it seems logical that there is no way that the integral ${\int}_{a}^{\infty} f \left(x\right) \setminus \mathrm{dx}$ would converge unless $f \left(x\right)$ eventually tends to zero ${\lim}_{x \to \infty} f \left(x\right) = 0$, since the area under the graph wouldn't be bounded otherwise. My question is, are there integrals where this is not the case? Where the limit of the function doesn't go to zero, but the integral is still convergent? What would be an example of such a function?

Jan 5, 2018

If the limit ${\lim}_{x \to \infty} f \left(x\right) = L$ exists, then $L = 0$ is a necessary (but not sufficient) condition for the integral to converge.

In fact, suppose $L > 0$: by the permanence of the sign, we can find a number $\epsilon > 0$ and a number $M$ such that:

$f \left(x\right) \ge \epsilon$ for $x > M$

So:

${\int}_{a}^{t} f \left(x\right) \mathrm{dx} = {\int}_{a}^{M} f \left(x\right) \mathrm{dx} + {\int}_{M}^{t} f \left(x\right) \mathrm{dx}$

and, writing ${I}_{0} = {\int}_{a}^{M} f \left(x\right) \mathrm{dx}$, the bound above gives:

${\int}_{a}^{t} f \left(x\right) \mathrm{dx} \ge {I}_{0} + \epsilon \left(t - M\right)$

which clearly diverges for $t \to \infty$. If $L < 0$ we can apply the same argument to $- f \left(x\right)$.

For the same reason, if $f \left(x\right) > 0$ or $f \left(x\right) < 0$ for $x \ge M$ and the limit exists, then ${\lim}_{x \to \infty} f \left(x\right) = 0$ is again a necessary condition.

However, if ${\lim}_{x \to \infty} f \left(x\right)$ does not exist and the function does not have a definite sign around $+ \infty$, the condition is not necessary. Can't find a counterexample right now, though.
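A classical counterexample of the kind the answer alludes to is $f(x) = \sin(x^2)$: the limit at infinity does not exist (the function oscillates between $-1$ and $1$ forever), yet the Fresnel integral $\int_0^\infty \sin(x^2)\,dx$ converges to $\sqrt{\pi/8} \approx 0.6267$, because the oscillations become faster and faster and cancel. A numerical check of the partial integrals (plain Python trapezoid rule; the step is chosen small relative to the fast oscillations near $x = 40$, and the tail beyond $T$ contributes at most about $1/(2T)$):

```python
import math

def partial_integral(T, n):
    """Trapezoid approximation of the integral of sin(x^2) over [0, T] with n steps."""
    h = T / n
    s = 0.5 * (math.sin(0.0) + math.sin(T * T))
    for k in range(1, n):
        x = k * h
        s += math.sin(x * x)
    return s * h

limit = math.sqrt(math.pi / 8)          # ~0.626657, the Fresnel integral value
I40 = partial_integral(40.0, 400_000)
# |I(T) - limit| <= ~1/(2T) plus a tiny quadrature error; here 1/(2*40) = 0.0125
assert abs(I40 - limit) < 0.02
```

So the partial integrals do settle near $\sqrt{\pi/8}$ even though $\sin(x^2)$ never decays; this is compatible with the answer, since here the limit of $f$ does not exist and $f$ has no definite sign near $+\infty$.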
# Bradley-Link's December 31, 2018, balance sheet included the following items: Long-Term Liabilities

Bradley-Link's December 31, 2018, balance sheet included the following items:

Long-Term Liabilities ($ in millions)
9.6% convertible bonds, callable at 101 beginning in 2019, due 2022 (net of unamortized discount of $2) [note 8] ..... $198
10.4% registered bonds callable at 104 beginning in 2028, due 2032 (net of unamortized discount of $1) [note 8] ..... 49

Shareholders' Equity
Equity - stock warrants ..... 4

On January 3, 2019, when Bradley-Link's common stock had a market price of $32 per share, Bradley-Link called the convertible bonds to force conversion. 90% were converted; the remainder was acquired at the call price. When the common stock price reached an all-time high of $37 in December of 2017, 40% of the warrants were exercised.

Required:
1. Prepare the journal entries that were recorded when each of the two bond issues was originally sold in 2005 and 2009.
2. Prepare the journal entry to record (book value method) the conversion of 90% of the convertible bonds in January 2019 and the retirement of the remainder.
3. Assume Bradley-Link induced conversion by offering $150 cash for each bond converted. Prepare the journal entry to record (book value method) the conversion of 90% of the convertible bonds in January 2019.
4. Assume Bradley-Link induced conversion by modifying the conversion ratio to exchan
# Given a linear block cipher, how can an attacker decrypt any plaintext value encrypted, using 128 chosen ciphertexts?

Given the linear block cipher $\operatorname{LinearCipher}(k, p) = c$ with
$$\operatorname{LinearCipher}(k, p_1 \oplus p_2) = \operatorname{LinearCipher}(k, p_1) \oplus \operatorname{LinearCipher}(k, p_2)$$
where $k$ and $p$ are 128 bits: if an attacker uses a chosen-ciphertext attack, how can they decrypt any plaintext by choosing 128 ciphertexts?

I'm not exactly sure in which direction I have to think. I believe decryption will also be linear, since in linear algebra the inverse of a linear function is also linear. The fact that the attacker can choose 128 ciphertexts hints at maybe revealing the key 1 bit at a time. Any hints and suggestions would be helpful.

• That's a possible definition of a linear cipher, but there are more general ones, that do not require $\operatorname{LinCipher}(k, 0) = 0$; that would be the weaker:$$\operatorname{LinCipher}(k, p_1 \oplus p_2 \oplus p_3) = \operatorname{LinCipher}(k, p_1) \oplus \operatorname{LinCipher}(k, p_2) \oplus \operatorname{LinCipher}(k, p_3)$$which is common in cryptanalysis. – fgrieu Feb 13 '16 at 17:07

As a linear cipher on 128-bit blocks, it can be described by a 128x128 matrix over GF(2). To break the cipher is to find that matrix. But that's easy: column k is the decryption of the k'th standard basis vector, i.e. the vector (0,0,0,...,1,...,0,0,0), with the 1 in the k'th place.

Example:
Decrypt(0001) = 1101
Decrypt(0010) = 0001
Decrypt(0100) = 1010
Decrypt(1000) = 1111

This gives the matrix M:
1011
1001
0011
1101

Now let's say the ciphertext is $c = (1111)^T$. The decryption of c is $M \cdot c = (1001)^T$.

• Hmmm... this is still unclear. Could you please elaborate on it. Perhaps an example of say 4-bit blocks instead of 128 will help with the explanation. – Dimitar Stratiev Oct 12 '15 at 19:56
• Ahh.. I see it now! The fact that the question read "decrypt any plaintext" threw me off.
I thought it required some extra work like basis translation but In fact it's asking of how I can decrypt any given ciphertext rather than plaintext. Also isn't the second bit of M*c 0,since 1 xor 0 xor 0 xor 1 = 0? – Dimitar Stratiev Oct 12 '15 at 20:55 • you are right, I corrected it! – user27950 Oct 12 '15 at 21:02 • I still have one more question lingering in my head, perhaps I would've known this if I studied linear algebra more thoroughly. I notice that the columns of M are linearly independent. Is it always true that a set of basis vectors will translate to another set of basis vectors? If no, is that only true due to linearity? – Dimitar Stratiev Oct 12 '15 at 21:33 • I don't think that's true in general, but in this case the decryption function needs to have that property for the cipher to be a permutation... Otherwise the preimages of the basis vectors would span a smaller vector space and you'd get cipher that's 1-1 but not onto. – pg1989 Oct 13 '15 at 20:18
### Base Dozen Forum

# Factor Density

Phaethon
Posts : 118
Points : 195
Join date : 2019-08-05

Consider that the most important property of a number for it to be used as a base of enumeration and division is the number of factors it has. Under this supposition, a base with more factors will be better than a base that has fewer factors, while other conditions are equal as far as possible. For example, the base thirty should be a better base than the bases twenty-nine or thirty-one, because thirty has more factors than twenty-nine or thirty-one, although the numbers are of similar size. The number of factors, say $$F_B$$, a base number $$B$$ has is dependent on its prime factorisation: $\prod_{i=1}^{i=n} {p_i}^{a_i}$ where $$p_i$$ is the $$i^{th}$$ prime and $$a_i$$ is its exponent, and can be calculated by the formula: $F_B = \prod_{i=1}^{n} ({a_i + 1})$ However, there tends to be a cost to gaining more factors in that the size of the base tends to increase such that it becomes less practical.
To take into account the size of the base, divide its number of factors by the number of factors a base in the vicinity of its size would be expected to have, to form a measure $$D_B$$ of the density of factors: $D_B = \frac{F_B }{1+\log_2 {B}}$ According to this computed measure, bases that are powers of two have a unitary density of factors and are called efficient; bases with a factor density less than one are called deficient, because they have fewer factors than expected for their size; and bases with a factor density of more than one have a greater density of factors. A table can be drawn comparing factor densities of various bases:

Table of Base Factor Densities

| Base, decimally | $$D_B$$ decimally | $$D_B$$ dozenally |
|---|---|---|
| 2 | 1 | ①⁏ |
| 3 | 0.7737 | ⓪⁏⑨③⑤ |
| 4 | 1 | ①⁏ |
| 6 | 1.1158 | ①⁏①④⑧ |
| 8 | 1 | ①⁏ |
| 9 | 0.7194 | ⓪⁏⑧⑦⑦ |
| 10 | 0.9255 | ⓪⁏⑪①③ |
| 12 | 1.3086 | ①⁏③⑧⑤ |
| 16 | 1 | ①⁏ |
| 18 | 1.1606 | ①⁏①⑪① |
| 24 | 1.4324 | ①⁏⑤②③ |
| 30 | 1.3544 | ①⁏④③⓪ |
| 36 | 1.4587 | ①⁏⑤⑥ |
| 48 | 1.5186 | ①⁏⑥②⑧ |
| 60 | 1.7374 | ①⁏⑧⑩② |
| 72 | 1.6737 | ①⁏⑧①⓪ |
| 120 | 2.0236 | ②⁏⓪③ |
| 180 | 2.1197 | ②⁏①⑤ |
| 210 | 1.8361 | ①⁏⑩⓪ |
| 360 | 2.5285 | ②⁏⑥④① |
| 720 | 2.8594 | ②⁏⑩③⑨ |
| 840 | 2.9867 | ②⁏⑪⑩① |
| 2520 | 3.9027 | ③⁏⑩⑩ |

As the base increases, those with higher factor densities than the smaller bases tend to be highly composite numbers. According to the factor density, bases that increase in size without introducing a greater number of factors over smaller bases tend to be worse. Thus, bases eight and ten are worse than base six. Base twelve has a good factor density for its size, and there is not a better base until the double dozen. This factor density as a computed measure of base efficiency does not penalise an increase in the size of the base enough: base eight, for example, has the same density as base 1024 decimally, although most would admit that the latter as a pure base would be far more impractical.
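The values in the table can be reproduced with a few lines of code (a sketch; the divisor count uses naive trial division, which is fine at these sizes):

```python
from math import log2

def num_factors(b):
    # F_B: number of divisors of b, by trial division
    return sum(1 for d in range(1, b + 1) if b % d == 0)

def factor_density(b):
    # D_B = F_B / (1 + log2 B)
    return num_factors(b) / (1 + log2(b))

for b in (6, 8, 10, 12, 24, 60):
    print(b, round(factor_density(b), 4))
```

Running it gives 1.1158 for six, exactly 1 for eight, 0.9255 for ten and 1.3086 for twelve, matching the table.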
Phaethon

If the factor density is scaled by doubling it, it becomes an indication of how the distribution of factors of the base behaves in comparison to the factors of a binary power. Another computed measure comparing bases to binary by the number of factors is to raise the base $$B$$ to the power of the reciprocal of one less than its number of factors $$F_B$$, producing an efficiency base score. That is: $B^{\frac{1}{F_B - 1}}$ Bases with a score greater than the number two may be said to be deficient, while bases having a score less than the number two may be called superefficient. The so-called perfect number twenty-eight is slightly superefficient and very similar in its density of factors to binary powers. Decimal is deficient, and this is a very bad trait; superefficient vigesimal would be better. If we double the reciprocal of this efficiency base score, it produces the same rank for the three dozen bases I examined as the original factor density does, except for the bases six and twenty, which are swapped. Thus, there is justification in using this version of factor density instead of the original one in this topic. This version of factor density produces similar values for smaller bases but has the advantage of being less extreme, or more tame, for large highly composite numbers. I used it to produce a more reasonable optimality score in which impractically large highly composite number bases tend to be ranked not quite so highly. Reference: https://dozenal.forumotion.com/t52-base-optimality
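The efficiency base score is equally short to compute (a sketch; the divisor count is repeated so the snippet stands alone):

```python
def num_factors(b):
    # number of divisors of b, by trial division
    return sum(1 for d in range(1, b + 1) if b % d == 0)

def efficiency_score(b):
    # B ** (1 / (F_B - 1)): exactly 2 for binary powers,
    # > 2 deficient, < 2 superefficient
    return b ** (1 / (num_factors(b) - 1))

print(round(efficiency_score(28), 3))  # about 1.947: slightly superefficient
print(round(efficiency_score(10), 3))  # about 2.154: deficient
print(round(efficiency_score(20), 3))  # about 1.821: superefficient
```

This bears out the claims above: twenty-eight sits just under two, decimal sits above it, and vigesimal comfortably below.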
Tue Aug 11 18:53:00 CEST 2015

Hi,

I am using a document structure "part-section-subsection" (i.e., no chapters). The page numbers have to be formatted as "partnumber-pagenumber". So, I am using the following setup:

    \setupuserpagenumber[way=bypart, prefix=yes, prefixset=part]

The page numbers are formatted correctly on each page, but not in the table of contents. Minimal example:

    \setupuserpagenumber[way=bypart, prefix=yes, prefixset=part]

    \starttext
    \placelist[part,section][alternative=c, criterium=all]
    \part{Part 1}
    \section{one}
    \input knuth
    \page
    \section{two}
    \input knuth
    \page
    \section{three}
    \input knuth
    \page
    \part{Part Two}
    \section{one}
    \input knuth
    \page
    \section{two}
    \input knuth
    \page
    \section{three}
    \input knuth
    \page
    \stoptext

The numbers in the table of contents are:

    1 Part 1 .... 1-1
    1.1 one .... 1-1
    1.2 two .... 2-2
    1.3 three ... 3-3

Note that the prefix of the page number is the section number rather than the part number. Any idea what I am doing wrong and how to fix it?

Thanks,
I could put the following TikZ code on a single page:

```
\tikz[remember picture,overlay] (current page.north west) rectangle (current page.south east);
```

However, it fails if I try to use it as a background for all pages. I was also unsure whether to use overlays or layers -- tried both (see below)

```
\usemodule[t-tikz]

\defineoverlay[sombra][
  \tikz[remember picture,overlay] (current page.north west) rectangle (current page.south east);
]
%\setupbackgrounds[page][background=sombra]

\setlayer[mybg] % name of the layer
  %[hoffset=1cm, voffset=1cm] % placement (from upper left corner of the layer)
  {\tikz[remember picture,overlay] (current page.north west) rectangle (current page.south east);} % the actual contents of the layer

\setupbackgrounds[page][background=mybg,
  state=repeat % repeat each page
]

\starttext
asdf
asdf
\stoptext
```

## Edit

I was able to get it working with this -- though I've got to fix this white part with some offset I couldn't figure out so far.

```
\setuppagenumbering [alternative=doublesided]
\setupcolors [state=start]

\definelayer [fundo] [repeat=yes, width=\paperwidth, height=\paperheight]

\usemodule[t-tikz]

\setlayer [fundo] [preset=middle]
  {\tikz[remember picture,overlay] \shade[top color=green!30, bottom color=blue!30](current page.north west) rectangle (current page.south east);}

\setupbackgrounds [page] [background=fundo]

\starttext
asdf
\page
asdf
\stoptext
```

• You should report it as a bug to Henri Menke (it works with LuaTeX, but not LuaMetaTeX). If you consider another alternative, you could do the same using MetaPost. Jan 26 at 11:59
• @JairoA.delRio, Thanks. I guess it could be fairly simple with MetaPost, but adapting the examples from the manual was beyond my capabilities at the moment. Besides, should it work with `layer` or `overlay`? – user9424 Jan 26 at 12:03
• Oh, you only put the TikZ code in the page you need it (no overlays nor layers), the same as LaTeX.
If it doesn't compile for you, use `context --luatex <filename>`. If it's helpful, I'll post a solution using MetaPost Jan 26 at 12:06
• Yes, that was another possibility. I guess it worked fairly well now, with one caveat (see edit). – user9424 Jan 26 at 12:14

While some TikZ wizard comes to explain the way, let me propose the ConTeXt way, explained in the Colors manual:

```
\setuppapersize[A6]

%You could directly apply this with any fill operation
%but it's better to keep it separated for reuse
withshadevector (dir 180) %play with angles to see what happens
%MetaPost understands colors differently
%Hence the white
;
StartPage;
StopPage;
\stopuseMPgraphic

\starttext
\dorecurse{3}%
  {
    \samplefile{quevedo-es}
    \samplefile{knuth}
  }
\stoptext
```

And we have a nice background with no TikZ involved. `:D`

• When you apply a background to the page you can use an overlay without the need for a layer, i.e. you can remove the `\definelayer` and `\setlayer` settings and replace them with `\defineoverlay[shade][\useMPgraphic{shade}]`. Jan 26 at 22:26
• @WolfgangSchuster But, unlike layers, overlays don't repeat themselves across pages, right? Or did I miss a key? Jan 26 at 22:27
• Overlays are applied to each page but it works in your example only when I remove `defineshade` and apply the settings to the filled box itself. Jan 30 at 20:08
# Multiple retrievals across varying queries

The next step of our construction is to go beyond a single query to consider the performance of a system across a set of queries. It should come as no surprise, given the wide range of activities in which FOA is a crucial component, that there is enormous variability among the kinds of queries produced by users. One obvious dimension to this variability concerns the "breadth" of the query: How general is it? If the set of relevant documents for a query is known, this can be quantified by GENERALITY, comparing the size of $\mathrm{Rel}_q$ to the total number of documents in the corpus: $\mathrm{Generality}_q \equiv \frac{\left|\mathrm{Rel}_q\right|}{\mathrm{NDoc}}$ There are many other ways in which queries can vary, and the fact that different retrieval techniques seem to be much more effective on some types of queries than others makes this a critical issue for further research. For now, however, we will treat all queries interchangeably but consider average performance across a set of them. Figure (figure) juxtaposes two Re/Pre curves, corresponding to two queries. Query 1 is as before, while Query 2 is a more specific query, as evidenced by its lower asymptote. Even with these two queries, we can see that in general there is no guarantee that we will have Re/Pre data points at the desired recall level. This necessitates INTERPOLATION of data points at required recall levels. The typical interpolation is done at pre-specified recall levels, for example $\{0, 0.25, 0.5, 0.75, 1.0\}$. As \vanR{152} discusses, a number of interpolation techniques are available, each with its own biases. Since each new relevant document added to our retrieved set will produce an increase in precision (causing the saw-tooth pattern observed in the graph), simply using the next available data point above a desired recall level will produce an over-estimate, while using the prior data point will produce an under-estimate.
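One common interpolation rule, among the several biased alternatives just mentioned, takes at each recall level the best precision achieved at or above that recall. A sketch, with invented (recall, precision) data points standing in for one query's saw-tooth curve:

```python
def interpolate(points, levels):
    # points: (recall, precision) pairs from one query's ranked retrieval.
    # At each requested recall level, take the maximum precision attained
    # at any recall >= that level (an optimistic interpolation rule).
    out = []
    for r in levels:
        ps = [p for rec, p in points if rec >= r]
        out.append(max(ps) if ps else 0.0)
    return out

run = [(0.2, 1.0), (0.4, 0.67), (0.6, 0.5), (0.8, 0.44), (1.0, 0.5)]
print(interpolate(run, [0, 0.25, 0.5, 0.75, 1.0]))
# [1.0, 0.67, 0.5, 0.5, 0.5]
```

Averaging these interpolated values across many queries, level by level, is what produces the multi-point average curves discussed next.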
With pre-established recall levels, we can now juxtapose an arbitrary number of queries and average over them at these levels. For 30 years the most typical presentation of results within the IR community has been the 11-POINT AVERAGE curves, like those shown in Figure (figure) [REF563] [Salton68]. (This data happens to show performance on the ADI corpus of Boolean versus weighted retrieval methods, including only the last 10 data points.) It is not uncommon to see research data reduced even further. For if queries are averaged at fixed recall levels, and then all of these recall levels are averaged together, we can produce a single number that measures retrieval system performance. Note the even more serious bias this last averaging produces, however. It says that we are as interested in how well the system did at the 90% recall level as at 10%! Virtually all users care more about the first screen full of hits they retrieve than the last. This motivates another way to use the same basic Re/Pre data. Rather than measuring at fixed recall levels, statistics are collected at the 10-, 25-, and 50-document retrieval levels. Precision within the first 10 or 15 documents is arguably a much closer measure of standard browser effectiveness than any other single number. All such attempts to boil down the full Re/Pre plot are bound to introduce artifacts of their own. In most cases the full Re/Pre curve picture is certainly worth a thousand words. Plotting the entire curve is straightforward and immediately interpretable, and lets the viewer draw more of their own conclusions. We must guard against taking our intuitions based on this tiny example (with only 25 documents in the entire corpus) too seriously when considering results from standard corpora and queries. For example, our first query had fully twenty percent of the corpus as relevant; even our second query had eight percent. In a corpus of a million documents, this would mean eighty thousand of them were relevant!
Much more typical are queries with a tiny fraction, perhaps .001% relevant. This will mean that the precision asymptote is very nearly zero. Also, we are likely to have many, many more relevant documents, resulting in a much smoother curve. FOA © R. K. Belew - 00-09-21
## Elementary Geometry for College Students (5th Edition)

We know that angle Q is equal to $b-a$, meaning that $b-a$ equals 42 degrees. Because the angles of a triangle sum to 180 degrees, we obtain: $180 = 180 - 2b + 2a + M \\ 2b - 2a = M \\ 2(b-a) = M \\ M = 2(42) = 84$
# A list of common time series tests?

I am reviewing my time series knowledge and looking for a document that lists the commonly-used time series tests, what they are used for, how to use them, etc., e.g. the Augmented Dickey–Fuller test, PACF tests, etc. I found a Wikipedia page of common statistical tests, but I am looking for such a list specific to time series analysis. http://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Common_test_statistics Thanks! - Why not just refer to one of the many available textbooks on time series? They should provide descriptions of most of the tests that you are interested in. – Michael Chernick Jun 18 '12 at 3:42 I think the Time Series task view in R and this list of time series functions will get you most of the way there. Just read the corresponding documentation to learn how to use them. As Michael mentioned, there are many books that describe these methods in depth. http://cran.r-project.org/doc/contrib/Ricci-refcard-ts.pdf http://cran.r-project.org/web/views/TimeSeries.html - To clarify, the ACF and PACF are not statistical tests. The output of the autocorrelation function (ACF) and partial autocorrelation function (PACF) helps you decide whether you want to model a time series using an autoregressive (AR) model, a moving average (MA) model, or an autoregressive moving average model (ARMA, a linear combination of AR and MA models). Furthermore, the behavior of the ACF and PACF helps you determine the parameters of these models. For instance, if your ACF drops off sharply after two lags and your PACF drops off after one lag for a particular time series, you might want to use an MA(2), AR(1), or an ARMA(1,2) to model your time series. (You may also model time series using autoregressive integrated moving average, or ARIMA, models.) Once you have modeled a time series several different ways, you can inspect the AIC value to help you decide between models. The AIC takes into account goodness of fit and model simplicity.
Generally, the smaller the AIC, the better. In R, you can build models using the arima() function, and evaluate their performance using the tsdiag() function.
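For intuition about what those plots show, the sample autocorrelation is easy to compute directly (a minimal sketch in Python; in practice you would use R's acf()/pacf() or an equivalent library routine rather than this):

```python
def acf(x, max_lag):
    # sample autocorrelation of series x at lags 0..max_lag
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / var
        for k in range(max_lag + 1)
    ]

# an alternating series is strongly negatively correlated at lag 1
print(acf([1, -1, 1, -1, 1, -1], 1))
```

The lag-0 value is always 1; large spikes at low lags followed by a sharp drop are the kind of pattern the answer above uses to pick MA or AR orders.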
622 Answered Questions for the topic Algebra 2

Algebra 2 05/15/17
#### Write the first six terms of the sequence a_n=(n-8)^2

Algebra 2 05/15/17
#### Write the first six terms of the sequence a_n=(-2)^n-1

Algebra 2 Help 05/15/17
Your loan payment is $276.25 every month and the loan is for 5 years with an annual percentage rate of 4%. How much did you borrow? Justify in detail how you arrived at your answer.

Algebra 2 05/14/17
#### f(x)=sqrt x-2
Graph the leftmost point and three additional points -2,0 2,0 3,1 4,1.41 Is this correct?

Algebra 2 05/11/17
#### Write the expression as a polynomial: (a^2–2a+3)(a–4)

Math Algebra 2 05/08/17
#### What are the real and complex solutions of the polynomial? x^2-11x=-24

Math Algebra 2 05/07/17
#### What are the real zeros of y=(x+3)^3+10?

Math Algebra 2 05/05/17
#### Use synthetic division to find P(-10) for P(x)= 2x^3+14x^2-58x

Math Algebra 2 05/05/17
#### One zero of x^3-3x^2-6x+8=0 is -2 What are the other zeros of the function?

Algebra 2 05/03/17
#### Use the value of the discriminant to determine the number of roots and type of roots for each equation? 3x^2-x-12=0
please help I am confused

Algebra 2 04/24/17
#### The length of a rectangle is 3 times the width, and the perimeter is 22. Find dimensions of the rectangle
myet find the length and the width of the equation

Algebra 2 04/20/17
#### Please help and show work, Thank you! :)
Graph g(x) = 4x^3-24x+9 on a calculator and estimate the local maxima and minima.
Algebra 2 Question 04/20/17
#### Algebra 2 // please help! Add work if you can, thank you!
Want to create a box without a top from an 8.5" x 11" sheet of paper. You will make the box by cutting squares of equal size from the four corners of the sheet of paper. If you make the box with... more

Algebra 2 Question 04/14/17
#### math question and I need urgent help
The value M (in dollars) of a motorcycle t years after it was purchased new can be estimated using the function: M(t)=3500/t+500
a. Estimate the motorcycle's value 5 years after it was... more

Algebra 2 Question 04/14/17
#### Math question and I need urgent help
A food manufacturer wants to find the most efficient packaging for a canister of oatmeal with a volume of 1663 cubic centimeters.
a. Use the formula for the volume of a cylinder to write an equation... more

Algebra 2 Question 04/13/17
#### Algebra 2 question
The credit remaining on a phone card (in dollars) is a linear function of the total calling time made with the card (in minutes). The remaining credit after 24 minutes of calls is $22.12, and the... more

Algebra 2 Question Algebra 2 Help 04/07/17
#### What is the x-intercept of f(x) = log4x?
This is my final question on my homework page, and I am having a complete brain fart. Help would be greatly appreciated as to how I find the x-intercept. Thank you!

Algebra 2 Question 04/06/17
#### runner complete a 5k race in 20 mins he runs the last km of the race 2km/hr faster than first 4km
Whats his speed (in km/hr) for the last section of the race
i need this explained because i dont know how to set this up to solve

Algebra 2 Question 04/04/17
#### After how many hours does she break even?
Mary repairs microwaves. Her revenue is modeled by the function R(h)=20+30h for every h hours she spends repairing microwaves. Her overhead cost is modeled by the function C(h)=10h^2−80.

Algebra 2 Question 04/03/17
#### How do you solve distance, rate, & time word problems (algebra 2)?
Ex: a boat travels at a rate of 15 km per hour in still water. It travels 60 km upstream in the same time it travels 90 km downstream. What is the rate of the water?
# The number of permutations of n different objects, taken r at a time, when repetitions are allowed is $\begin{array}{1 1}(A)\;n\\(B)\;n^2\\(C)\;n^r\\(D)\;n^{r-1}\end{array}$ The number of permutations is $n^r$. Hence (C) is the correct answer.
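The count follows because each of the $r$ positions can be filled in $n$ independent ways, giving $n \times n \times \cdots \times n = n^r$. A quick enumeration confirms it:

```python
from itertools import product

n, r = 3, 2
arrangements = list(product("abc", repeat=r))  # repetition allowed
print(len(arrangements), n ** r)  # both are 9
```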
# Interpret /// (pronounced 'slashes') Because we can't get enough of esoteric language golfs, can we? ///—pronounced slashes—is a fun little language based on the s/// regex-replacement function of Perl fame. It contains only two special characters, slash / and backslash \. You can find a full article on it at the esolangs wiki, but I will reproduce a description of the language below, as well as some examples. In short, it works by identifying /pattern/repl/rest in the program and making the substitution as many times as possible. No characters are special except / and \: / demarcates patterns and replacements in the program, while \ allows you to insert literal / or \ characters into your code. Notably, these are not regular expressions, just plain string substitutions. Your challenge is to produce an interpreter for the /// language, as either a program reading STDIN or a function taking a string argument, in as few characters as possible. You may use any language except for /// itself. You may not use any libraries that interpret ///; you may, however, use regexes, regex libraries, or string-matching libraries. ## Execution There are four states, print, pattern, replacement, and substitution. In every state except substitution: • If the program is empty, execution halts. • Else, if the first character is \, do something with the next character (if present) and remove both from the program. • Else, if the first character is /, remove it, and change to the next state. • Else, do something with the first character and remove it from the program. • Repeat. The states cycle through print, pattern, replacement, and substitution in order. • In print mode, 'do something' means output the character. • In pattern mode, 'do something' means add the character to the current Pattern. • In replacement mode, 'do something' means add the character to the current Replacement. In substitution mode, you follow a different set of rules. 
Repeatedly substitute the first occurrence of the current Pattern with the current Replacement in the program, until no more substitutions are possible. At that point, clear the Pattern and Replacement and return to print mode.

In the program `/foo/foobar/foo foo foo`, the following happens:

    /foo/foobar/foo foo foo
    foo foo foo
    foobar foo foo
    foobarbar foo foo
    foobarbarbar foo foo
    ...

This loops forever and never exits substitution mode. Similarly, if the Pattern is empty, then the first occurrence of the empty string—at the beginning of the program—always matches, so substitution mode loops forever, never halting.

## Examples

    no

Output: `no`.

    / world! world!/Hello,/ world! world! world!

Output: `Hello, world!`.

    /foo/Hello, world!//B\/\\R/foo/B/\R

Output: `Hello, world!`.

    a/ab/bbaa/abb

Output: `a`. Program does not halt.

    //

Output: none.

    ///

Output: none. Program does not halt.

    /\\/good/\/

Output: `good`.

There is also a quine on the wiki you can try.

• /-/World//--/Hello//--W/--, w/---! What's not to love? (Try removing dashes from the end) – seequ Aug 31 '14 at 12:06
• @Loovjo The \ character escapes any character that follows it, including /, which can later be used as normal. While this doesn't look like much, this makes /// Turing-complete. – algorithmshark Jul 28 '15 at 19:37
• I think this is a better explanation of the language than the esolangs wiki article. Will use this info in my /// IDE that I'm making! – clabe45 Oct 14 '17 at 3:01

## APL (133)

    {T←''∘{(0=≢⍵)∨'/'=⊃⍵:(⊂⍺),⊂⍵⋄(⍺,N⌷⍵)∇⍵↓⍨N←1+'\'=⊃⍵}⋄⍞N←T⍵⋄p N←T 1↓N⋄r N←T 1↓N⋄''≡N:→⋄∇{⍵≡p:∇r⋄∨/Z←p⍷⍵:∇(r,⍵↓⍨N+≢p),⍨⍵↑⍨N←1-⍨Z⍳1⋄⍵}1↓N}

This is a function that takes the /// code as its right argument.
Ungolfed, with explanation:

    slashes←{
      ⍝ a function to split the input string into 'current' and 'next' parts,
      ⍝ and unescape the 'current' bit
      split←''∘{
        ⍝ if the string is empty, or '/' is reached,
        ⍝ return both strings (⍺=accumulator ⍵=unprocessed)
        (0=≢⍵)∨'/'=⊃⍵:(⊂⍺),⊂⍵
        ⍝ otherwise, add current character to accumulator,
        ⍝ skipping over '\'s. (so if '\/' is reached, it skips '\',
        ⍝ adds '/' and then processes the character *after* that.)
        idx←1+'\'=⊃⍵
        (⍺,idx⌷⍵)∇idx↓⍵
      }
      ⍞ next ← split ⍵          ⍝ output stage
      pat next ← split 1↓next   ⍝ pattern stage, and eat the '/'
      rpl next ← split 1↓next   ⍝ replacement stage, and eat the '/'
      ⍝ if there are no characters left, halt.
      ''≡next:⍬
      ⍝ otherwise, replace and continue.
      ∇{
        ⍝ if the input string equals the pattern, return the replacement and loop
        ⍵≡pat:∇rpl
        ⍝ otherwise, find occurences, if there are, replace the first and loop
        ∨/occ←pat⍷⍵:∇(rpl, (idx+≢pat)↓⍵),⍨ (idx←(occ⍳1)-1)↑⍵
        ⍝ if no occurences, return string
        ⍵
      }1↓next
    }

• "if there are no characters left, halt." Does this work correctly on /// and //foo/ (i.e. loops forever)? – algorithmshark Sep 7 '14 at 16:14
• @algorithmshark: yes, in that situation the / would still be left at that point. – marinus Sep 7 '14 at 17:36

# J - ~~181~~ ~~190~~ 170 char

This was a nightmare. I rewrote it from scratch, twice, because it just kept bugging me. This is a function taking a single string argument, outputting to STDOUT.

    (0&$((2{.{:@>&.>)((j{.]),-i@=p@.~:~/@[,]}.~#@p+j=.0{p I.@E.])i 5;@}.&,'/';"0;&.>)@.(2<#)@}.[4:1!:2~{:@>@p=.>@{.@[)@((0;(0,:~1 0,.2);'\';&<1 0)<;._1@;:'/'&,)i=. ::](^:_)

To explain, I will break it up into subexpressions.

    i      =. ::](^:_)
    parse  =: ((0;(0,:~1 0,.2);'\';&<1 0)<;._1@;:'/'&,)
    print  =: 4:1!:2~{:@>@p=.>@{.@[
    eval   =: 0&$((2{.{:@>&.>)sub 5;@}.&,'/';"0;&.>)@.(2<#)@}.
    sub    =: ((j{.]),-i@=p@.~:~/@[,]}.~#@p+j=.0{p I.@E.])i
    interp =: (eval [ print) @ parse

• i (short for iterate) is an adverb.
It takes a verb argument on the left and returns a verb (f)i, which when applied to an argument, applies f repeatedly to the argument until one of two things happens: it finds a fixed point (y = f y), or it throws an error. The fixed-point behaviour is inherent to ^:_, and ::] does the error handling. • parse tokenizes the input into what I call half-parsed form, and then cuts it up at the unescaped '/'. It binds escaping backslashes to their characters, but doesn't get rid of the backslashes—so we can either revert it or finish it depending on which we want. The bulk of the interesting work occurs in ;:. This is a sequential-machine interpreter primitive, taking a description of the machine ((0;(0,:~1 0,.2);'\';&<1 0)) on the left and something to parse on the right. This does the tokenizing. I will note that this specific machine actually treats the first character unspecial, even if it's a \ and should bind. I do this for a few reasons: (1) the state table is simpler, so it can be golfed further; (2) we can easily just add a dummy character to the front to dodge the problem; and (3) that dummy-character gets half-parsed at no extra cost, so I can use it to set up for the cutting phase, next. We also use <;._1 to cut the tokenized result on unescaped / (which is what I choose to be the first char). This is handy for pulling out the output, pattern, and replacement from out/patt/repl/rest all in one step, but unfortunately also cuts up the rest of the program, where we need those / to stay untouched. I splice these back in during eval, because making <;._1 leave them alone ends up costing a lot more. • The fork (eval [ print) executes print on the result from parse for its side-effects, and then runs eval. print is a simple verb that opens up the first box (the one we know for sure is output), finishes parsing it, and sends it to STDOUT. However, we also take the chance to define a utility verb p. 
p is defined as >@{.@[, so it takes its left arg (acts like the identity if given only one arg), takes the first item of that (identity when given a scalar), and unboxes it (identity if already unboxed). This will come in very handy in sub.

• eval evaluates the remainder of the processed program. If we don't have a full pattern or a full replacement, eval throws it out and just returns an empty list, which terminates evaluation by making ;: (from parse) error out on the next iteration. Else, eval fully parses the pattern and replacement, corrects the remainder of the source, and then passes both to sub. By explosion:

        @}.      NB. throw out printed part
        @.(2<#)  NB. if we have a pattern and repl:
        2{.      NB.   take the first two cuts:
        &.>      NB.   in each cut:
        {:@>     NB.     drop escaping \ from chars
        ( )      NB.   (these are pattern and repl)
        &.>      NB.   in each cut:
        ;        NB.     revert to source form
        '/';"0   NB.   attach a / to each cut
        &,       NB.   linearize (/ before each cut)
        5 }.     NB.   drop '/pattern/repl/'
        ;@       NB.   splice together
        ( sub )  NB.   feed these into sub
                 NB. else:
        0&$      NB.   truncate to an empty list

• sub is where one (possibly infinite) round of substitutions happens. Because of the way we set up eval, the source is the right argument, and the pattern and replacement are bundled together in the left. Since the arguments are ordered like this and we know the pattern and replacement don't change within a round of substitutions, we can use another feature of i—the fact that it modifies only the right argument and keeps passing in the same left—to delegate to J the need to worry about keeping track of the state. There are two spots of trouble, though. The first is that J verbs can have at most two arguments, so we don't have an easy way to access any that are bundled together, like pattern and replacement, here. Through clever use of the p utility we defined, this isn't that big of a problem.
In fact, we can access the pattern in one character, just by using p, because of its >@{.@[ definition: the Unbox of the First item of the Left arg. Getting the replacement is trickier, but the shortest way would be p&|., 2 chars shorter than manually getting it out. The second problem is that i exits on fixed points instead of looping forever, and if the pattern and replacement are equal and you make a substitution, that looks like a fixed point to J. We handle this by entering an infinite loop of negating 1 over and over if we detect they are equal: this is the -i@=p@.~:~/ portion, replacing p&|..

        p E.]    NB. string search, patt in src
        I.@      NB. indices of matches
        0{       NB. take the first (error if none)
        j=.      NB. assign to j for later use
        #@p+     NB. add length of pattern
        ]}.~     NB. drop that many chars from src
        /@[      NB. between patt and repl:
        ~        NB.   patt as right arg, repl as left
        @.~:     NB.   if equal:
        -i@=     NB.     loop forever
        p        NB.   else: return repl
        (j{.])   NB. first j chars of src
        ,  ,     NB. append all together
        (    )i  NB. iterate

• This cycle repeats due to the use of i, until something outside of sub errors out. As far as I'm aware, this can only happen when we are out of characters, or when we throw out an incomplete set of pattern-and-replacement. Fun facts about this golf:
• For once, using ;: is shorter than manually iterating through the string.
• 0{ should have a chance to error out before sub goes into an infinite loop, so it should work fine if the pattern matches the replacement but never shows up in the remainder of the source. However, this may or may not be unspecified behaviour, since I can't find a citation either way in the docs. Whoopsie.
• Keyboard interrupts are processed as spontaneous errors inside running functions. However, due to the nature of i, those errors get trapped too.
Depending on when you hit Ctrl+C, you might:
• Exit out of the negate-forever loop, error out of the sub loop by trying to concatenate a number to a string, and then go on interpreting /// as if you finished substituting a string with itself an infinite number of times.
• Leave sub halfway through and go on interpreting a half-subbed /// expression.
• Break out of the interpreter and return an unevaluated /// program to the REPL (not STDOUT, though).

Example usage:

    f=:(0&$((2{.{:@>&.>)((j{.]),-i@=p@.~:~/@[,]}.~#@p+j=.0{p I.@E.])i 5;@}.&,'/';"0;&.>)@.(2<#)@}.[4:1!:2~{:@>@p=.>@{.@[)@((0;(0,:~1 0,.2);'\';&<1 0)<;._1@;:'/'&,)i=. ::](^:_)
    f 'no'
    no
    f '/ world! world!/Hello,/ world! world! world!'
    Hello, world!
    f '/foo/Hello, world!//B\/\\R/foo/B/\R'
    Hello, world!
    f '//'
    NB. empty string
    f '/\\/good/\/'
    good

• Wow. I would call this masochistic. +1 – seequ Sep 3 '14 at 10:17
• When I run this, I get the empty string from every test case. I'm using jqt64, what are you using to run this? – bcsb1001 Apr 9 '15 at 16:04
• @bcsb1001 I've been using the (64-bit) jconsole binary directly. Checking jqt now, I am actually getting intended results except for the /\\/good/\/ test case; debugging tells me the issue is my use of 1!:2&4, since jqt does not have stdin/out. Will investigate. What are your 9!:12'' and 9!:14''? – algorithmshark Apr 9 '15 at 18:24
• @algorithmshark My 9!:12'' is 6, and 9!:14'' is j701/2011-01-10/11:25. – bcsb1001 Apr 9 '15 at 18:43

# Perl - 190

    $|=1;$/=undef;$_=<>;while($_){($d,$_)=/(.)(.*)/;eval(!$e&&({'/','$a++','\\','$e=1'}->{$d})||('print$d','$b.=$d','$c.=$d')[$a].';$e=0');if($a==3){while($b?s/\Q$b/$c/:s/^/$c/){}$a=0;$b=$c=''}}

Reads /// program from stdin until EOF.

• Would an approach along the lines of m/^(.*?)(?<!\\)\/(.*?)(?<!\\)\/(.*?)(?<!\\)\/(.*)$/s--i.e. match output, pattern, and replacement all at once--make for a shorter golf? I don't know any Perl, myself.
– algorithmshark Sep 3 '14 at 17:28 • I believe this fails with /a/\0/a – Asone Tuhid Mar 9 '18 at 12:53 # Pip, 100 102 bytes I hadn't ever proven Pip to be Turing-complete (though it's pretty obviously so), and instead of going the usual route of BF I thought /// would be interesting. Once I had the solution, I figured I'd golf it and post it here. 101 bytes of code, +1 for -r flag: i:gJnf:{a:xW#i&'/NE YPOia.:yQ'\?POiya}W#iI'\Q YPOiOPOiEIyQ'/{p:VfY0s:VfIyQ'/WpNi&YviR:Xp{++y?ps}}E Oy Here's my ungolfed version with copious comments: ; Use the -r flag to read the /// program from stdin ; Stdin is read into g as a list of lines; join them on newline and assign to c for code c : gJn ; Loop while c is nonempty W #c { ; Pop the first character of c and yank into y Y POc ; If y equals "\" I yQ'\ ; Pop c again and output O POc ; Else if y equals "/" EI yQ'/ { ; Build up pattern p from empty string p : "" ; Pop c, yank into y, loop while that is not equal to "/" and c is nonempty W #c & '/ NE Y POc { ; If y equals "\" I yQ'\ ; Pop c again and add that character to p p .: POc ; Else, add y to p E p .: y } ; Yank 0 so we can reliably tell whether the /// construct was completed or not Y0 ; Build up substitution s from empty string s : "" ; Pop c, yank into y, loop while that is not equal to "/" and c is nonempty W #c & '/ NE Y POc { ; If y equals "\" I yQ'\ ; Pop c again and add that character to s s .: POc ; Else, add y to s E s .: y } ; If the last value yanked was "/", then we have a complete substitution ; If not, the code must have run out; skip this branch, and then the outer loop ; will terminate I yQ'/ { ; While pattern is found in code: W pNc { ; Set flag so only one replacement gets done i : 0 ; Convert p to a regex; replace it using a callback function: if ++i is 1, ; replace with s; otherwise, leave unchanged c R: Xp {++i=1 ? s p} } } } ; Else, output y E Oy } Try it online! 
(Note that TIO doesn't give any output when the program is non-terminating, and it also has a time limit. For larger examples and infinite loops, running Pip from the command line is recommended.) • I think this should be pip + -r, 101 bytes – Asone Tuhid Mar 9 '18 at 13:31 # C++: Visual C++ 2013 = 423, g++ 4.9.0 = 442 This will never win but since I have decided that all my future software projects will be written in this awesome language I needed an interpreter for it and I figured I might as well share the one I made... The difference in score is that Visual C++ doesn't need the first include but g++ does. The score assumes that line endings count as 1. #include<string.h> #include<string> #define M(x)memset(x,0,99); #define P o[i]) #define N(x)P;else if(n<x)(P==92? #define O (o[++i]):(P==47?n++: #define S std::string int main(int n,char**m){S o=m[1];char p[99],*q=p,r[99],*s=r;M(p)M(r)for(int i=0,t;i<=o.size();++i){if(!N(3)putchar O putchar(N(4)*q++=O(*q++=N(5)*s++=O(*s++=P;if(n>4){for(;;){if((t=o.find(p,i+1))==S::npos)break;o=o.substr(0,t)+r+o.substr(t+strlen(p));}M(p)M(r)n=2;q=p;s=r;}}} • Can you rewrite if(!o[i]); as if(P to save chars, or am I misunderstanding how #define works? – algorithmshark Sep 24 '14 at 6:45 • @algorithmshark how did I miss that?! if(!P is perfect. I'll change it. – Jerry Jeremiah Sep 24 '14 at 10:17 • Every instance of P in main has a space after it, so you can save a character by replacing those spaces with semicolons and removing it from #define. Then, if you can use #defines inside other ones, you can save some more by rewriting N(x) as (92==P instead of o[i]==92 and O likewise. – algorithmshark Sep 26 '14 at 5:44 • @algorithmshark you are obviously much better at this than I. Thanks for the help. – Jerry Jeremiah Sep 26 '14 at 10:40 • I know this is about four years old, but rewriting N(x) as P;else if(n<x)(P==92? and changing calls to N accordingly could save a few bytes.
– Zacharý May 28 '18 at 21:37 # Python 2 (236), Python 3 (198?) from __future__ import print_function def d(i): t=0;p=['']*3+[1] while i: if'/'==i[0]:t+=1 else: if'\\'==i[0]:i=i[1:] p[t]+=i[0] i=i[1:] print(end=p[0]);p[0]='' if t>2: while p[1]in i:i=i.replace(*p[1:]) d(i);i=0 Called as d(r"""/foo/Hello, world!//B\/\\R/foo/B/\R"""). The triple quotes are only needed if the /// program contains newlines: otherwise simple quotes are ok. EDIT: This interpreter now prints stuff as expected (previously it only printed at the very end, cf. comments). For Python 3, remove the first line (but I don't have Python 3 on my ancient install, so cannot be sure there is no other change). • the interpreter not printing anything until termination is problematic. writing an infinite loop in /// is possible, so your interpreter fails on non-terminating-but-still-printing-something programs. – proud haskeller Aug 30 '14 at 22:31 • @proudhaskeller Fixed. – Bruno Le Floch Aug 31 '14 at 9:46 • Actually, this isn't fixed, it doesn't print anything for /a/ab/bbaa/abb. – Beta Decay Sep 2 '14 at 6:57 • @BetaDecay /a/ab/bbaa/abb will get stuck in an endless loop without printing anything, because the first substitution is a=>ab. The correct a/ab/bbaa/abb works as advertised. – algorithmshark Sep 2 '14 at 7:59 • @BetaDecay: besides the change suggested by algorithmshark, you may need to include the command line option -u to force the output buffer to be unbuffered. – Bruno Le Floch Sep 2 '14 at 15:02 # Ruby, 119 110 bytes Terminates with exception r=->s,o=$>{s[k=s[0]]='';k==?/?o==$>?s.gsub!([r[s,''],e=r[s,'']][0]){e}:t=o:o<<(k==?\\?s[0]+s[0]='':k);t||redo} Try it online! Terminates cleanly (116 bytes) r=->s,o=$>{s[k=s[0]||exit]='';k==?/?o==$>?s.gsub!([r[s,''],e=r[s,'']][0]){e}:t=o:o<<(k==?\\?s[0]+s[0]='':k);t||redo} Try it online! 
# Cobra - 226 sig Z as String def f(l='') m=Z(do=[l[:1],l=l[1:]][0]) n as Z=do if'/'<>(a=m())>'',return if(a=='\\',m(),a)+n() else,return'' print n()stop p,s=n(),n() if''<l while p in l,l=l[:l.indexOf(p)+1]+s+l[p.length:] .f(l) # BaCon, 391 387 395 bytes From the contributions on this page I only got the Python program to work. The others work for some /// samples, or do not work at all. Therefore, I decided to add my version, which is an implementation in BASIC. To compete in a CodeGolf contest with BASIC is not easy, as BASIC uses long words as statements. The only abbreviation commonly found in BASIC is the '?' sign, which means PRINT. So the below program may never win, but at least it works with all demonstration code on this Codegolf page and on the Esolangs Wiki. Including all versions of the "99 bottles of beer". p$="" r$="" INPUT i$ WHILE LEN(i$) t$=LEFT$(i$,1) i$=MID$(i$,2) IF NOT(e) THEN IF t$="\\" THEN e=1 CONTINUE ELIF t$="/" THEN o=IIF(o<2,o+1,0) IF o>0 THEN CONTINUE FI FI IF o=1 THEN p$=p$&t$ ELIF o=2 THEN r$=r$&t$ ELIF o=0 THEN IF LEN(p$) THEN i$=REPLACE$(i$,p$,r$) IF NOT(INSTR(t$&i$,"/")) THEN ?t$; BREAK ELSE ?LEFT$(i$,INSTR(i$,"/")-1); i$=MID$(i$,INSTR(i$,"/")) FI p$="" r$="" FI e=0 WEND ?i$ • Added INPUT statement to get input from user. – Peter Oct 27 '16 at 6:30
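For comparison with the golfed entries above, here is an ungolfed reference evaluator for /// as a Python 3 sketch. It deliberately differs from a real interpreter in two ways: it buffers output into a string instead of streaming it, and it simply halts on an empty pattern rather than looping forever, so it is a testing aid, not a drop-in replacement:

```python
def slashes(prog):
    """Minimal /// evaluator: returns the printed output as a string.

    Deliberate simplifications for testing: output is buffered rather
    than streamed, and an empty pattern (which loops forever in real
    ///) just halts.
    """
    out = []
    while prog:
        c = prog[0]
        if c == '\\':                      # escaped char: print literally
            if len(prog) < 2:
                break
            out.append(prog[1])
            prog = prog[2:]
        elif c == '/':                     # start of /pattern/replacement/
            prog = prog[1:]
            parts = []
            for _ in range(2):
                buf = []
                while prog and prog[0] != '/':
                    if prog[0] == '\\':    # escape inside pattern/repl
                        prog = prog[1:]
                        if not prog:
                            return ''.join(out)
                    buf.append(prog[0])
                    prog = prog[1:]
                if not prog:               # incomplete construct: halt
                    return ''.join(out)
                prog = prog[1:]            # consume the closing '/'
                parts.append(''.join(buf))
            pat, rep = parts
            if not pat:                    # real /// never terminates here
                return ''.join(out)
            while pat in prog:             # substitute until gone
                prog = prog.replace(pat, rep, 1)
        else:                              # ordinary char: print it
            out.append(c)
            prog = prog[1:]
    return ''.join(out)
```

Against the test programs used in the answers above, `slashes('no')` gives `no` and `slashes(r'/\\/good/\/')` gives `good`.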
# How could I make a Character use a device such as a magic sword after a teleport or dash? First and foremost, if you didn't get it from the question, this is talking about Mutants and Masterminds, and it's a question about character building. I'm not sure how to tell, but I think it's second edition; all I know is it isn't the one that splits Dex into three separate stats. I would like to know the appropriate Powers to make a character both Dash a set distance and attack with his magic sword, which is a 3 point device (as a regular equipment point sword it doesn't have powers), or teleport and use the magic sword. The character also needs to be able to use the magic sword for normal non-teleport attacks, and the Dash or teleport needs to be expandable to allow for the addition of other powers later, such as an Explosion or Trip. - Have you tried talking to your GM so it can be built as a stunt? – CatLord Sep 26 '12 at 4:16 What is a stunt? – Novian Sep 26 '12 at 22:01 If I'm thinking of the right version of MM, it's a point build system and each power has a point value per level of the power. That point value has a base, plus "stunts" which are auxiliary powers to the core one (like "Cosmic" gets to add Flight, Blast, Shield, etc.) – CatLord Sep 27 '12 at 3:48 @CatLord I think you're discussing 1E terminology. In 2E there are "power stunts" which involve using Extra Effort (optionally using a Hero Point to eliminate the Fatigue) to add a new Alternate Power, which normally cost 1 pp each. 3E has the same thing, but they're Alternate Effects. :) But what you're saying regarding Cosmic Power sounds like 1E. – Sean Duggan May 28 '14 at 16:54 Looks like I had the 2nd ed book since it's all about having base powers and tacking on Extras and Flaws. I thought they were called stunts in this game, may have crossed a wire. – CatLord May 29 '14 at 2:52 (Assuming 2nd edition) Simply use the two powers one after the other using a Move action and a Standard action. Dash, strike. Simple as that.
Your Dashing power should be either Speed 2 (if you only want to dash over walkable distance) or Teleport 2 (with flaws to limit destination to perception) if you want to be able to dash an enemy even if he's flying or above the ground. Your attack power will be Strike on your device. If you have any strength, take only 1 rank and add the power feat Mighty (so it adds your strength to the Damage bonus). - Canonically, there is no power that lets you move and attack as a single action. You can have a form of Teleport that lets you move there, then attack, and Move-By Action is generally considered to work in terms of letting you move in any way, attack, then move somewhere else. The 2E Mecha and Manga book built that as a 3 pp Power named Flash Step that was a Move-Action Teleport for 100 feet of distance coupled with Move-By Action so that you'd basically step behind your opponent, strike, then be away again. You could add a Power Loss Drawback of 1 PP or so for only being able to use it with a sword. Any way about it, it will cost two actions. 3E/DCA is much the same in terms of what you can do. -
# Showing a (complex) series is (conditionally) convergent. 1. Sep 20, 2010 ### shoescreen I've been reading a complex analysis book which had an example showing $$\sum^\infty_{n=1}1/n \cdot z^n$$ is convergent in the open unit ball. I'm now looking at the case when $$|z| = 1$$. Clearly $$z = 1$$ gives the divergent harmonic series, but I know this series is in fact convergent for all other $$|z| = 1$$. In order to prove this, I need to be able to show that the partial sums of the related series $$\sum^\infty_{n=1}z^n$$ are bounded whenever $$|z| = 1$$ and $$z \neq 1$$. I can solve this problem whenever the argument of z is a rational multiple of pi, but other than that I'm stuck. Any help proving that this related series is bounded would be very helpful. Thanks! 2. Sep 20, 2010 ### mathman If you sum the truncated (at N) series, you have $$(1-z^{N+1})/(1-z)$$. For z=1, you have 0/0 (no good). For z≠1, the expression is well defined, so see what happens as N -> ∞.
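A sketch of the estimate mathman's hint leads to (standard bounds, spelled out here rather than taken from the thread): for $$|z| = 1$$ and $$z \neq 1$$,

```latex
\left|\sum_{n=1}^{N} z^{n}\right|
  = \left|\frac{z\,(1 - z^{N})}{1 - z}\right|
  \le \frac{|z|\left(1 + |z|^{N}\right)}{|1 - z|}
  = \frac{2}{|1 - z|},
```

so the partial sums are bounded uniformly in N. Combined with the coefficients 1/n decreasing to 0, Dirichlet's test then gives convergence of $$\sum_{n\ge 1} z^{n}/n$$ everywhere on the unit circle except z = 1.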
# Python code to check if an array has a sequence (1,3,4) I recently applied for a job as a Python coder but was rejected. This was the problem: Write a python code to check if an array has a sequence (1,3,4) Assuming they were looking for expert Python programmers, what could I have done better? # Tested with Python 2.7 import unittest # Runtime: O(n) def doesSeqAppear(int_arr): #check if input is a list if not isinstance(int_arr, list): raise TypeError("Input shall be of type array.") # check all elements are of type int if not all(isinstance(item, int) for item in int_arr) : raise ValueError("All elements in array shall be of type int.") arr_len = len(int_arr) if arr_len < 3: return False # Loop through elements for i in range(arr_len-2): if int_arr[i] == 1 and \ int_arr[i+1] == 3 and \ int_arr[i+2] == 4 : return True return False class TestMethodDoesSeqAppear(unittest.TestCase): def test_only_single_seq(self): #Single time assert doesSeqAppear([1,3,4]) == True def test_multiple_seq(self): #multiple assert doesSeqAppear([2,2,1,3,4,2,1,3,4]) == True def test_neg_seq(self): #multiple assert doesSeqAppear([9,-1,1,3,4,-4,4]) == True def test_only_empty_seq(self): #empty assert doesSeqAppear([]) == False def test_only_single_elem_seq(self): #Single element assert doesSeqAppear([1]) == False def test_input_is_none(self): self.assertRaises(TypeError, doesSeqAppear, None) def test_raises_type_error(self): self.assertRaises(TypeError, doesSeqAppear, "string") def test_raises_value_error(self): self.assertRaises(ValueError, doesSeqAppear, [1,2,'a', 'b']) if __name__ == '__main__': unittest.main() # • Well for one, Python lists can have multiple types of values, are you sure they required each array to have all ints. For example: ['a','b','c',1,3,4] seems like a valid sequence that would return false in your implementation – Navidad20 Dec 14 '16 at 14:36 • Maybe you should have used xrange? ;) – Tamoghna Chowdhury Dec 14 '16 at 14:38 • Also, they asked about an array. 
Maybe you should have used a numpy.array instead of a list? With NumPy arrays, this could be done in 2 function calls. – Tamoghna Chowdhury Dec 14 '16 at 14:43 • As I said in another comment, there are so many unknown points in this question that I think the main reason for rejection may actually be him making assumptions instead of asking questions. – ChatterOne Dec 14 '16 at 15:18 • Why not just str([1,3,4])[1:-1] in str([array])? – Samuel Shifterovich Dec 14 '16 at 21:41 By PEP 8, doesSeqAppear should be does_seq_appear. You used the right naming convention for your unit tests, though. Personally, I would prefer def contains_seq(arr, seq=[1, 3, 4]). Your arr_len < 3 test is superfluous and should therefore be eliminated. Don't write a special case when the regular case works correctly and just as quickly. Your all(isinstance(item, int) for item in int_arr) check was not specified in the problem, and is therefore harmful. The question does not say that doesSeqAppear([3.1, 1, 3, 4]) should return False, nor does it say that it should fail with an exception. In fact, by my interpretation, it does contain the magic sequence and should therefore return True. In any case, you have wasted a complete iteration of the list just to perform a check that wasn't asked for. Checking isinstance(int_arr, list) is un-Pythonic, since duck typing is the norm in Python. In any case, the code would likely fail naturally if it is not a list. After cutting all that excess, you should drop the # Loop through elements comment as well. • I wouldn't say the arr_len < 3 test is superfluous. It is preventing an index error. – Casey Kuball Dec 14 '16 at 21:02 • @Darthfett: I don't think it prevents any errors. If the length is less than 3, the for loop will execute 0 iterations. – user2357112 supports Monica Dec 14 '16 at 21:04 • @user2357112 oops, you are correct.
:) – Casey Kuball Dec 14 '16 at 21:05 • If dropping the arr_len < 3 check, it would be appropriate to specify why it still works without it, since it takes a moment to figure that out. – jpmc26 Dec 15 '16 at 0:23 Per the problem definition, I would expect a function that is able to check any sequence in an array. Not necessarily (1, 3, 4), which was given as an example. In this case, the sequence should also be a parameter of the function, giving the signature: def has_sequence(array, sequence): Next, I would rely on Python iterations to "check" if array is a list, or at least an iterable. As there are no obvious reasons, to me, that has_sequence('once upon a time', 'e u') should fail. It seems like a valid usecase. Following, I would use a variation of the itertools recipe pairwise to group elements of array in tuples of the same length as sequence: import itertools def lookup(iterable, length): tees = itertools.tee(iterable, length) for i, t in enumerate(tees): for _ in xrange(i): next(t, None) return itertools.izip(*tees) def has_sequence(array, sequence): # Convert to tuple for easy testing later sequence = tuple(sequence) return any(group == sequence for group in lookup(array, len(sequence))) Now, other things that could have been done better: • # Tested with Python 2.7 can be replaced by #!/usr/bin/env python2 • if int_arr[i] == 1 and int_arr[i+1] == 3 and int_arr[i+2] == 4 : can be replaced by if int_arr[i:i+3] == [1, 3, 4]: removing the need for the ugly \ • assert in unit tests should be replaced by self.assertTrue(…) or self.assertFalse(…) • you should be more consistent in your usage of whitespace (putting one after each comma, none before any colon…). • Something like tuplewise might be a more evocative name than lookup. (I had the same idea though.)
– David Z Dec 14 '16 at 18:51 def check_for_1_3_4(seq): return (1, 3, 4) in zip(seq, seq[1:], seq[2:]) Here are some tests: >>> check_for_1_3_4([1, 3, 4, 5, 6, 7]) True >>> check_for_1_3_4([5, 6, 7, 1, 3, 4]) True >>> check_for_1_3_4([5, 6, 1, 3, 4, 7, 8]) True >>> check_for_1_3_4([1, 3]) False >>> check_for_1_3_4([]) False >>> My code may seem terse, but it's still readable for anyone who understands slicing and zip. I expect Python experts to at least know about slicing. Unfortunately for me, my answer is less efficient than yours. It could triple the amount of memory used! By using generators a more efficient but more complicated solution can be created. Instead of creating copies of the sequence, this new code uses only the original sequence, but the logic is nearly the same. import itertools def check_for_1_3_4(seq): return (1, 3, 4) in itertools.izip(seq, itertools.islice(seq, 1, None), itertools.islice(seq, 2, None)) The tests still pass. I wouldn't expect most Python programmers to be familiar with itertools, but I was under the impression that Python experts do know it. • How performant is it? Someone could throw a huge byte array at your method (like a file looking for a particular sequence) and this would duplicate it in memory 3 or 4 times, no? – Nate Diamond Dec 15 '16 at 18:01 • @NateDiamond At worst, it could be five times through the list. However, the time it takes would be the same as a single loop with five instructions per element, unless going through the loop once is easier on the cache, easier for all the popular Python implementations to optimize, or is faster for some other reason. Yes, it would temporarily either triple or quadruple the memory usage. This all probably doesn't matter though. Design is about trade offs, but whether this is performant enough can't be determined with the given problem. Linear speed and memory usage isn't a bad starting place. 
– Drew Dec 16 '16 at 7:13 • Fair enough, but if I were asking this in an interview, the first question that would pop into my head (assuming you didn't) is "Why didn't you ask how big the sequence could be?". I would then probably say "So what if this is trying to parse a 500mb file?" Net-net, anyone using this should realize the downsides to this approach in case they want to reuse it. Further, you claimed the given answer is much too long, yet the approach has certain benefits in this regard. They should probably be mentioned. – Nate Diamond Dec 16 '16 at 18:12 • @NateDiamond Thanks for your input. I've included a discussion about efficiency in my answer. – Drew Dec 19 '16 at 11:05 ## Assumptions You made a lot of assumptions with this code, which you either did not mention during the interview or you incorrectly assumed to be true of the question. In other words, you were overthinking the problem. #check if input is a list if not isinstance(int_arr, list): raise TypeError("Input shall be of type array.") You should not care about the instance type. The type could easily have been a user defined type which behaves like a list or even another python built in. For example, python has both deque and array, and they both behave like a list, supporting the same operations as a list. # check all elements are of type int if not all(isinstance(item, int) for item in int_arr) : raise ValueError("All elements in array shall be of type int.") This is not necessarily true because lists, or collections in general, in Python can contain many different types. So insisting that the list contains only integers is just imposing a requirement which did not exist in the first place. In closing, I would advise that you adhere to the KISS principle in future interviews and to ask questions or state your assumptions before diving into the code.
Even if it doesn't sound like an assumption to you, make sure they know what is going on in your head either as you're coding or before you write your code. It might sound silly to say "Ok I will also make sure that I have been given a list", but you will be saving yourself a lot of grief when they reply, "Don't worry about that, just assume it's a list". Check if array contains the sequence (1,3,4) def check_sequence(arr): return any((arr[i], arr[i + 1], arr[i + 2]) == (1,3,4) for i in range(len(arr) - 2)) KIS[S] def sequence_contains_sequence(haystack_seq, needle_seq): for i in range(0, len(haystack_seq) - len(needle_seq) + 1): if needle_seq == haystack_seq[i:i+len(needle_seq)]: return True return False We can't know why your interviewer rejected your application, but these types of questions are often starting points for conversation--not finished product endpoints. If you write the simplest, most straightforward code you can, you and your interviewer can then talk about things like expansion, generalization, and performance. Your interviewer knows that asking you to change your function interface is more problematic because you'll also have to change all your [unasked for] unit tests. This slows down the process and might make the interviewer worry that you'll pollute their codebase with a lot of brittle code. • "Subsequence" has a very specific technical meaning, and it is the wrong term to use here. – 200_success Dec 15 '16 at 2:36 • @200_success Hmm, you're right. Naming things is hard. – brian_o Dec 15 '16 at 5:15
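The zip-and-slices idea from the answers above ports directly to Python 3, where zip is already lazy (playing the role of izip). Here is a sketch that also takes the target run as a parameter, as several answers suggest; the name contains_seq is illustrative, and it assumes seq is a re-iterable sequence (list, string, tuple), not a one-shot iterator:

```python
from itertools import islice

def contains_seq(seq, target=(1, 3, 4)):
    """True if the consecutive run `target` occurs in `seq`.

    Each islice starts at a different offset of the same sequence, so
    zip yields every window of len(target) consecutive elements; the
    `in` test stops at the first match.
    """
    target = tuple(target)
    windows = zip(*(islice(seq, i, None) for i in range(len(target))))
    return target in windows
```

For example, `contains_seq('once upon a time', 'e u')` is True, mirroring the has_sequence usecase mentioned above.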
## Question 1A.3 c $c=\lambda v$ Gabriella Bates 2L Posts: 94 Joined: Thu Jul 11, 2019 12:15 am ### Question 1A.3 c Which of the following happens when the frequency of electromagnetic radiation decreases? Explain your reasoning. (c) The extent of the change in the electrical field at a given point decreases I’m confused on this question, and the detailed answer didn’t make sense to me. Can someone please explain? Thank you Justin Seok 2A Posts: 88 Joined: Sat Aug 24, 2019 12:15 am ### Re: Question 1A.3 c I also wasn't 100% clear on this question, but I kinda took it to mean that since there aren't as many oscillations, the electrical field wouldn't change as quickly since it takes a longer time for each wave to oscillate. Not the most technical explanation but hopefully it kinda helps clear things up. Maya Pakulski 1D Posts: 88 Joined: Thu Jul 11, 2019 12:17 am ### Re: Question 1A.3 c Can somebody explain to me why it isn't B? britthanul234 Posts: 51 Joined: Wed Sep 18, 2019 12:21 am ### Re: Question 1A.3 c This has to do with the frequency, which you can assume is an oscillation or wave. There are not as many oscillations, so the electrical field will not change as quickly. That is why the answer is what it is. alicechien_4F Posts: 94 Joined: Sat Jul 20, 2019 12:15 am ### Re: Question 1A.3 c Maya Pakulski 3D wrote:Can somebody explain to me why it isn't B? The answer (b) states that the wavelength of the radiation decreases. This is not correct because with the equation C = λv, wavelength (λ) and frequency (v) are inversely proportional. If you decrease the frequency, the λ must increase to compensate since C (speed of light) remains constant. Hope this helped! Ashley Nguyen 2L Posts: 86 Joined: Sat Aug 17, 2019 12:18 am ### Re: Question 1A.3 c According to the equation c = λv, where c is a constant representing the speed of light in a vacuum, λ (wavelength) and v (frequency) are indirectly proportional. If frequency were to increase, wavelength must decrease.
In this question, the frequency decreases, so wavelength must therefore increase. By this logic, we can eliminate B as a potential answer choice. The speed of the radiation would not change because c, the speed of light, is a constant, so A is also incorrect. As the frequency of electromagnetic radiation decreases, it would actually decrease in energy because a particle would complete fewer cycles per second, so D is also incorrect. Through the process of elimination C would be the right answer. You know this is true because the energy of electromagnetic radiation directly affects the oscillations of affected charged particles, so by decreasing the energy of said radiation, the effect of the electrical field on a charged particle would also decrease.
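The inverse relation that the answers above rely on can be checked numerically; a tiny sketch (the two frequencies are made-up examples, and c is rounded):

```python
C = 3.0e8  # speed of light in m/s (rounded, for illustration only)

def wavelength(frequency_hz):
    """c = lambda * nu rearranged for lambda: with c fixed,
    wavelength and frequency are inversely proportional."""
    return C / frequency_hz

# a lower frequency gives a longer wavelength
low_freq_wavelength = wavelength(5.0e14)
high_freq_wavelength = wavelength(6.0e14)
```

Here low_freq_wavelength comes out larger than high_freq_wavelength, which is the elimination-of-B argument in numbers.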
# Properties of the Graph of the Second Derivative This maplet runs from a server via MapleNet and does not require the user to have Maple installed. After clicking the New Graph button, the user is given the graph of the second derivative of a function $f(x)$. Then the user is asked a sequence of questions regarding concavity and inflection points of $f(x)$ as well as the graphs of $f'(x)$ and $f(x)$. Feedback and comments are provided after each answer. Identifier: http://maple.math.sc.edu/maplenet/M4Cfree/files/maplenet.php?m=Graph_ddf Rating: Creator(s): D. B. Meade and P. B. Yasskin Cataloger: Publisher: University of South Carolina Rights: D. B. Meade and P.B. Yasskin Format Other: Maplet (runs from server)
# Solve the transcendental complex equation with Mathematica? Posted 2 months ago 538 Views | 5 Replies | 2 Total Likes | I am dealing with a function / equation of a particular physical system and I want to find the En value of that equation. The equation is a transcendental complex equation whose expression is given by T=(exp(0.04*i*q)/16*A^2*B*G)*(((A+G)^2*((A+B)^2*exp(2*i*(-0.01*k-0.01*p))-(A-B)^2*exp(-2*i*(-0.01*k-0.01*p))))+((A-G)^2*((A-B)^2*exp(-2*i*(-0.01*k+0.01*p))-(A+B)^2*exp(2*i*(-0.01*k+0.01*p))))+2*(A^2-B^2)*(A^2-G^2)*(exp(0.02*i*p)-exp(-0.02*i*p))) with value of variable m=h=1 c=137.036 Va=0 Vb=50000 p=(sqrt((En+m*c^2-Va)*(En-m*c^2-Va)))/(h*c) q=(sqrt((En+m*c^2-Vb)*(En-m*c^2-Vb)))/(h*c) k=(sqrt(En^2+m^2*c^4))/(h*c) A=h*c*k/(En+m*c^2) B=h*c*p/(En+m*c^2-Vb) G=h*c*q/(En+m*c^2-Va) In clearer form, the above equation looks like the following picturewhere a=0.01 and b=0.02.I tried to solve it using the formula Solve, NSolve, and also FindRoot. However, when using Solve and NSolve, the result is only the substitution of the known values into the equation, not the En value. And, when I try to finish using the FindRoot formula, the results are as belowThe 35000 number that I wrote in the FindRoot formula is a random number that I chose and the results are as shown in the picture.Did I take the right steps? I think it's still wrong. I don't know and I am confused about what formula to use and how the steps should be. So, I would really appreciate if someone could give me a hint on how to get these solutions for En from the transcendental complex equation.Thank you in advance. 5 Replies Sort By: Posted 2 months ago Thank you for the information on hc and i. 
m=h=1; c=137.036; Va=0; Vb=50000; hc=h*c;i=Sqrt[-1]; p=Sqrt[(En+m*c^2-Va)*(En-m*c^2-Va)]/hc; q=Sqrt[(En+m*c^2-Vb)*(En-m*c^2-Vb)]/hc; k=Sqrt[En^2+m^2*c^4]/hc; A=h*c*k/(En+m*c^2); B=h*c*p/(En+m*c^2-Vb); G=h*c*q/(En+m*c^2-Va); T=(Exp[0.04*i*q]/16*A^2*B*G)*(((A+G)^2*((A+B)^2*Exp[2*i*(-0.01*k-0.01*p)]-(A-B)^2*Exp[-2*i*(-0.01*k-0.01*p)]))+((A-G)^2*((A-B)^2*Exp[-2*i*(-0.01*k+0.01*p)]-(A+B)^2*Exp[2*i*(-0.01*k+0.01*p)]))+2*(A^2-B^2)*(A^2-G^2)*(Exp[0.02*i*p]-Exp[-0.02*i*p])); Plot[ReIm[T],{En,18700,18800}] NMinimize[Abs[Re[T]]+Abs[Im[T]],En] FindRoot[T,{En,18779}] which seems to find a root very near 18779Is that what you are looking for? Posted 2 months ago Yes, i=sqrt(-1), h = 1, and c=137.036. So, hc = h*c. Posted 2 months ago Where do the numbers 18700 and 18800 come from ? You can find some easy solutions by factoring T: m = h = 1; c = Rationalize[137.036]; Va = 0; Vb = 50000; hc = h*c; i = Sqrt[-1]; p = Sqrt[(En + m*c^2 - Va)*(En - m*c^2 - Va)]/hc; q = Sqrt[(En + m*c^2 - Vb)*(En - m*c^2 - Vb)]/hc; k = Sqrt[En^2 + m^2*c^4]/hc; A = h*c*k/(En + m*c^2); B = h*c*p/(En + m*c^2 - Vb); G = h*c*q/(En + m*c^2 - Va); T = Simplify[(Exp[0.04*i*q]/16*A^2*B* G)*(((A + G)^2*((A + B)^2* Exp[2*i*(-0.01*k - 0.01*p)] - (A - B)^2* Exp[-2*i*(-0.01*k - 0.01*p)])) + ((A - G)^2*((A - B)^2* Exp[-2*i*(-0.01*k + 0.01*p)] - (A + B)^2* Exp[2*i*(-0.01*k + 0.01*p)])) + 2*(A^2 - B^2)*(A^2 - G^2)*(Exp[0.02*i*p] - Exp[-0.02*i*p])) // Rationalize]; Head[T] Map[Solve[# == 0] &, Most[List @@ (Numerator[T])]] % // N Some of these solutions must be rejected because they make the denominator vanish too. Further complex solutions arise from the last, more complicated factor of Numerator[T]: Reduce[Last[Numerator[T]] == 0 && Abs[En - 20000] < 20000, En]
### #ActualMilcho Posted 25 January 2013 - 10:33 AM EDIT: Actually your first problem is that by setting yVel to 0 first, and then doing yVel -= 30, you will always be setting it to -30, and never to anything below that. You should set your yVel to 0 when the key up event is received, instead of in your main loop. Note that if you hold down a key, you may constantly receive events for that key. So your keyhandling should be: case SDLK_UP: if (!jumping) { jumping = true; myDot.yVel = 0; } break; However, even if you remove setting it to 0 first you'll have another problem: Your yVel will never be exactly -100. It will go -30, -60, -90, -120...etc. if (myDot.yVel <= -100) On a different note, is yVel supposed to be velocity or position? Because it kind of seems like you're treating it like position, and not velocity, which is fine, but the naming seems off.
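The off-by-step issue above (velocity decreasing 30 at a time never equals -100 exactly) is language-agnostic; a minimal Python model of one frame of the update, with the <= comparison plus a clamp (names like step_jump and the clamp itself are illustrative, not from the original post):

```python
def step_jump(y_vel, jumping, floor_vel=-100):
    """One frame of the jump update: step the velocity down by 30, but
    compare with <= and clamp, because the fixed -30 steps
    (-30, -60, -90, -120, ...) skip over -100 and never equal it."""
    if jumping:
        y_vel -= 30
        if y_vel <= floor_vel:   # never test == against a fixed step
            y_vel = floor_vel
            jumping = False
    return y_vel, jumping
```

With this, the jump state ends on the frame the threshold is crossed, regardless of the step size.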
1 AIEEE 2004 +4 -1
6.02 $$\times 10^{20}$$ molecules of urea are present in 100 ml of its solution. The concentration of urea solution is (Avogadro constant, $$N_A = 6.02 \times 10^{23}$$ mol$$^{-1}$$)
A 0.02 M
B 0.01 M
C 0.001 M
D 0.1 M
2 AIEEE 2004 +4 -1
To neutralise completely 20 mL of 0.1 M aqueous solution of phosphorous acid (H$$_3$$PO$$_3$$), the volume of 0.1 M aqueous KOH solution required is
A 40 mL
B 20 mL
C 10 mL
D 60 mL
3 AIEEE 2004 +4 -1
The ammonia evolved from the treatment of 0.30 g of an organic compound for the estimation of nitrogen was passed in 100 mL of 0.1 M sulphuric acid. The excess of acid required 20 mL of 0.5 M sodium hydroxide solution for complete neutralization. The organic compound is
A urea
B benzamide
C acetamide
D thiourea
4 AIEEE 2003 +4 -1
What volume of hydrogen gas at 273 K and 1 atm pressure will be consumed in obtaining 21.6 g of elemental boron (atomic mass = 10.8) from the reduction of boron trichloride by hydrogen?
A 67.2 L
B 44.8 L
C 22.4 L
D 89.6 L
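For the first question, the molarity follows directly from moles over volume; a quick numeric check (constants copied from the question statement):

```python
AVOGADRO = 6.02e23           # mol^-1, as given in the question
molecules = 6.02e20
volume_litres = 100 / 1000   # 100 mL expressed in litres

moles = molecules / AVOGADRO      # 1e-3 mol of urea
molarity = moles / volume_litres  # mol per litre -> 0.01 M
```

The result, 0.01 M, corresponds to option B.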
{}
# If something that is moving at constant velocity has no net force acting on it, how come it is able to move other objects?

Let's say a 10 kg block is sliding on a frictionless surface at a constant velocity, so its acceleration is 0. According to Newton's second law of motion, the net force acting on the block is 0:

$a = 0$
$F = ma$
$F = 0$

So let's say that block slid into a motionless block on the same surface; the motionless block would move. Wouldn't the first block need force to be able to move the initially motionless block? I understand that it has energy due to its constant velocity, but wouldn't it be its force that causes the displacement?

Related: physics.stackexchange.com/q/45653/2451 and links therein. – Qmechanic May 7 '14 at 21:24
There was no force on the block until it came into contact with the motionless block. – Feynman May 8 '14 at 1:18

Here's a slightly different but equivalent way to think about it. Forces describe interactions between two objects. If two objects are interacting, they exert forces on each other. If two objects are not interacting, they do not exert forces on each other. Thus, an object doesn't "carry around" a force with it. A force is not a property of an object, just as dmckee explains. Instead, we describe interactions between two objects using the more abstract concept of force.

In your block-hits-other-block scenario, it's tempting to ask: where did the force come from, if the colliding object had $F_\text{net}=0$? But when forces are viewed as interactions, it becomes apparent that the force didn't come from anywhere within one of the objects. There simply wasn't an interaction before they collided, so we wouldn't ascribe the existence of a force.

The zero force related to zero acceleration is not a property of the object; it is a statement about the forces acting on the body. That is, your title should not read "has no force" but "is subject to no net force".
If a body has a non-zero, but constant, velocity then we know that the total of all the forces applied to it is zero (from Newton's laws). We also know that it has non-zero momentum, and when it collides with another object some (or all) of that momentum can be transferred to the other object. During the collision the body is subjected to new forces and the net force is no longer zero, meaning that it will accelerate.

When the body is subjected to new forces during the collision, what are those forces or where do those forces come from? – mzee99 May 7 '14 at 20:55
They are the same kinds of forces that prevent a book from falling through a table. We often call them "contact forces". At one level you can think of them as coming into being because atoms and molecules in a solid want to maintain their approximate distance from one another, so that the body resists being deformed. – dmckee May 7 '14 at 21:33
@mzee99: You should read the wiki page on normal force. – Feynman May 8 '14 at 1:24
@dmckee: Can we sometimes say that a body accelerates by virtue of its own properties? Say, for example, I am running. I am not accelerated by gravity or by any other force that has a law. – Feynman May 8 '14 at 1:35

When the first block, which is already in motion, slides into the motionless block, its momentum changes. Momentum = mass × velocity and force = rate of change of momentum, so as the momentum changes a force is exerted on both objects, in opposite directions and with the same magnitude; their momenta change and therefore so do their velocities.
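The momentum-transfer point made in the answers above can be checked numerically. Here is a minimal Python sketch of a 1-D elastic collision (the specific masses and velocities are my own choice, not from the question) showing that total momentum is conserved while the initially moving block sets the motionless one in motion:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of two bodies after a 1-D elastic collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# a 10 kg block moving at 3 m/s hits an identical block at rest
v1f, v2f = elastic_collision_1d(10.0, 3.0, 10.0, 0.0)

# equal masses: the moving block stops, the other takes its velocity
assert abs(v1f - 0.0) < 1e-12 and abs(v2f - 3.0) < 1e-12
# total momentum before == total momentum after
assert abs(10.0 * 3.0 - (10.0 * v1f + 10.0 * v2f)) < 1e-12
```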
{}
# Discussion: "The Modeling of Viscous Dissipation in a Saturated Porous Medium" (2007, ASME J. Heat Transfer, 129, pp. 1459–1463) OPEN ACCESS

Author and Article Information
V. A. Costa, Departamento de Engenharia Mecânica, Universidade de Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal, v.costa@ua.pt

J. Heat Transfer 131(2), 025501 (Dec 04, 2008) (2 pages) doi:10.1115/1.2955478
History: Received November 09, 2007; Revised May 29, 2008; Published December 04, 2008

In a recent paper (1), modeling of viscous dissipation in fluid-saturated porous media is considered. This Comment concerns the energy conservation formulation of natural or mixed convection problems including viscous dissipation. For simplicity, sometimes only the clear-fluid situation is treated, the main issues applying both to clear fluids and to fluid-saturated porous media.

## Energy Conservation Formulation for Natural Convection in Enclosures

The energy conservation equation applied to any closed system gives (2)

$\frac{dE}{dt}=\dot{Q}-\dot{W}$ (1)

The energy is composed of the components (2): internal energy, $U$, kinetic energy, $(1/2)mV^2$, and potential (gravitational) energy, $mgy$, that is, $E=U+(1/2)mV^2+mgy$. For a closed system operating in steady state, $dE/dt=0$ and $\dot{Q}-\dot{W}=0$. $\dot{W}$ is the mechanical power exchanged between the closed system and its surroundings, which is null in the present case. Only if a rotating shaft or electrically energized cables cross the walls of the system, or if the walls are deformable or mobile, is $\dot{W}\neq0$. In the present case, necessarily $\dot{Q}=0$. If the domain is a differentially heated enclosure of rigid walls, with a hot wall and a cold wall, heat crosses it from the hot wall to the cold wall, and the overall heat input $\dot{Q}_H$ equals the overall heat output $\dot{Q}_C$, that is,

$\dot{Q}_H+\dot{Q}_C=0$ (2)

This result holds for any prescribed temperature or heat flux at the walls of the enclosure and for any orientation of these walls.
Another way to write Eq. (2) is $|\dot{Q}_H|=|\dot{Q}_C|$. For unsteady situations it can be that $|\dot{Q}_C|>|\dot{Q}_H|$, as a result of a decrease in the potential energy or a decrease in the internal energy. However, under steady-state conditions, only $|\dot{Q}_H|=|\dot{Q}_C|$ is compatible with the energy conservation principle. If the viscous dissipation term is taken into account,

$|\dot{Q}_C|=|\dot{Q}_H|+\int_V (\text{volumetric viscous dissipation rate})\,dV$ (3)

violating the energy conservation principle for the reasons detailed above. Thus, if the viscous dissipation is taken into account, an additional term needs to be taken into account. The complete thermal energy conservation equation (for a clear fluid) can be obtained from Ref. 3; this additional term is the work of pressure forces, and since $|\dot{Q}_H|=|\dot{Q}_C|$,

$\int_V (\text{volumetric viscous dissipation rate})\,dV+\int_V (\text{volumetric rate of work of pressure forces})\,dV=0$ (4)

Locally, the volumetric viscous dissipation rate can differ from the volumetric rate of work of pressure forces; Eq. (4) applies to the enclosure as a whole. Viscous dissipation is always positive, while the work of pressure forces can be positive or negative depending on whether the fluid is contracting or expanding, respectively (3). Viscous dissipation results from the fluid motion; in natural convection problems fluid motion results from the expansion/contraction experienced by the fluid, and both the viscous dissipation and the work of pressure forces need to be taken into account in order to have the unique consistent energy conservation formulation. No restrictions were made concerning the orientation of the enclosure or of its walls, and the claim made in Ref. 1 that Eq. (4) applies only to a laterally heated enclosure and not to a bottom-heated enclosure is incorrect. It is argued in Ref. 1 that the kinetic energy released in the bottom-heated enclosure comes into play, and that Eq. (4) does not apply.
However, the thermal energy conservation equation is obtained by subtracting the kinetic energy conservation equation from the total energy conservation equation (3), and kinetic energy effects cannot be invoked when dealing with just the thermal energy conservation equation.

## Energy Conservation Formulation for Natural or Mixed Convection

In mixed convection, fluid motion is partially forced and partially buoyancy induced. What was said above for natural convection applies to the buoyancy-induced part of the flow. In this case, however, Eq. (4) does not apply, as there are forced-flow contributions to the viscous dissipation. In Sec. 5 of Ref. 1 it is argued that the sentence in Refs. 4-5, "…the main results and conclusions apply to any natural or mixed convection problem…", is incorrect. However, the main results and conclusions of Refs. 4-5 are as follows. (i) The consistent energy conservation formulation of natural or mixed convection problems needs to consider both the viscous dissipation and the work of pressure forces. (ii) The energy formulation considering only the viscous dissipation term is inconsistent and violates the energy conservation principle. (iii) Viscous dissipation results from fluid motion, and in natural convection fluid motion results from the expansion/contraction experienced by the fluid, with the associated work of pressure forces. Results in the form of Eq. (4), which apply to closed enclosures, are a way to explain the main question, and not the main result and/or conclusion of Refs. 4-5. The natural convection heat transfer problem is used to show the essence of the problem, and extrapolations are made to what happens in mixed convection heat transfer problems, where part of the fluid motion is buoyancy induced. This is highlighted in the Conclusions of Refs. 4-5. The claim in Sec. 5 of Ref. 1 that the result expressed by Eq. (4) is not valid for mixed convection problems is thus correct, but no such claim is made in Refs. 4-5.
## Boussinesq Approximation

The above results were obtained without any simplifying approximation, and no reference was made to the Boussinesq or the Oberbeck–Boussinesq approximation. Use of a simplified model results in some contamination of the solution, and even in some inconsistencies, bearing in mind the strict (exact) model and the fact that thermodynamics sets many links between variables and properties. This is also the case when the Boussinesq or the Oberbeck–Boussinesq approximation is used to solve natural or mixed convection problems. It is one thing to start from the consistent energy conservation formulation of the problem and use a simplified model to solve it, in which case inconsistencies in the energy conservation are due to the simplified model used. It is a different thing to start from an inconsistent energy conservation formulation of the problem, use a simplified model to solve it, and then try to explain the inconsistencies in the energy conservation as based only on the simplified model used to solve the problem.

## Scale Analysis

Conclusions are obtained in Sec. 6 of Ref. 1 concerning the relevance of the viscous dissipation and of the work of pressure forces in natural convection problems, which are presented as depending on the considered scales and on the physical situation considered (laterally heated enclosure or bottom-heated enclosure). The result expressed by Eq. (4) applies to natural convection in enclosures, no matter how they or their walls are oriented. In Sec. 6 of Ref. 1 a scale analysis is conducted on the differential thermal energy conservation equation, and thus only local conclusions can be obtained concerning the relative magnitude of the terms involved.
However, for the reasons mentioned above, viscous dissipation and work of pressure forces are strongly linked in natural or mixed convection problems, and local and integral assessments of their relevance can lead to significantly different conclusions (description after Eq. (4)).
{}
# GATE2012-30

An insulated, evacuated container is connected to a supply line of an ideal gas at pressure $p_s$, temperature $T_s$ and specific volume $v_s$. The container is filled with the gas until the pressure in the container reaches $p_s$. There is no heat transfer between the supply line and the container, and kinetic and potential energies are negligible. If $C_p$ and $C_v$ are the heat capacities at constant pressure and constant volume, respectively ($\gamma = C_p/C_v$), then the final temperature of the gas in the container is

1. $\gamma T_s$
2. $T_s$
3. $(\gamma-1)T_s$
4. $(\gamma-1)T_s/\gamma$
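For reference, this is the standard "filling an evacuated, adiabatic, rigid container" problem, and the usual transient-flow energy balance gives option 1. A sketch of the derivation (no heat, no work, enthalpy carried in by the line gas ends up as internal energy of the trapped gas):

```latex
% First law for a transient open system, Q = W = 0, KE/PE negligible:
% final internal energy = enthalpy brought in from the line at T_s
m\,u_{\mathrm{final}} = m\,h_s
\quad\Rightarrow\quad c_v\,T_{\mathrm{final}} = c_p\,T_s
\quad\Rightarrow\quad T_{\mathrm{final}} = \frac{c_p}{c_v}\,T_s = \gamma\,T_s
```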
{}
Trigintation refers to the 30th hyperoperation starting from addition. It is equal to the binary function $$\uparrow^{28}$$, using Knuth's up-arrow notation.[1] "a trigintated to b" can be written in array notation as $$\{a,b,28\}$$, in chained arrow notation as $$a \rightarrow b \rightarrow 28$$ and in Hyper-E notation as E[a]1#1#1#1...1#1#1#b (27 ones). Trigintational growth rate is approximately equivalent to $$f_{29}(n)$$ in the fast-growing hierarchy.
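The hyperoperation hierarchy referenced here can be sketched recursively. A minimal Python version (hopelessly impractical for anything like trigintation, but it pins down the indexing: 1 is addition, 2 is multiplication, 3 is exponentiation, and so on):

```python
def hyper(n, a, b):
    """The n-th hyperoperation H_n(a, b), defined recursively.
    H_1 is addition, H_2 multiplication, H_3 exponentiation;
    trigintation would be H_30 = up-arrow^28."""
    if n == 0:
        return b + 1            # successor
    if n == 1 and b == 0:
        return a                # a + 0 = a
    if n == 2 and b == 0:
        return 0                # a * 0 = 0
    if n >= 3 and b == 0:
        return 1                # a ^ 0 = 1, and likewise above
    return hyper(n - 1, a, hyper(n, a, b - 1))

assert hyper(1, 5, 6) == 11     # addition
assert hyper(2, 5, 6) == 30     # multiplication
assert hyper(3, 2, 4) == 16     # exponentiation
assert hyper(4, 2, 3) == 16     # tetration: 2^(2^2)
```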
{}
# Source code for menpofit.builder

from __future__ import division
from functools import partial
import warnings

import numpy as np

from menpo.shape import mean_pointcloud, PointCloud, TriMesh
from menpo.feature import no_op
from menpo.image import Image  # needed by extract_patches below
from menpo.transform import Scale, Translation, GeneralizedProcrustesAnalysis
from menpo.visualize import print_dynamic

from menpofit.visualize import print_progress


class MenpoFitModelBuilderWarning(Warning):
    r"""
    A warning that the parameters chosen to build a given model may cause
    unexpected behaviour.
    """
    pass


def compute_reference_shape(shapes, diagonal, verbose=False):
    r"""
    Function that computes the reference shape as the mean shape of the
    provided shapes.

    Parameters
    ----------
    shapes : list of menpo.shape.PointCloud
        The set of shapes from which to build the reference shape.
    diagonal : int or None
        If int, it ensures that the mean shape is scaled so that the
        diagonal of the bounding box containing it matches the provided
        value. If None, then the mean shape is not rescaled.
    verbose : bool, optional
        If True, then progress information is printed.

    Returns
    -------
    reference_shape : menpo.shape.PointCloud
        The reference shape.
    """
    # the reference_shape is the mean shape of the images' landmarks
    if verbose:
        print_dynamic('- Computing reference shape')
    reference_shape = mean_pointcloud(shapes)
    # fix the reference_shape's diagonal length if asked
    if diagonal:
        x, y = reference_shape.range()
        scale = diagonal / np.sqrt(x ** 2 + y ** 2)
        reference_shape = Scale(scale, reference_shape.n_dims).apply(
            reference_shape)
    return reference_shape


def rescale_images_to_reference_shape(images, group, reference_shape,
                                      verbose=False):
    r"""
    Function that normalizes the images' sizes with respect to the size of
    the provided reference shape. In other words, the function rescales the
    provided images so that the size of the bounding box of their attached
    shape is the same as the size of the bounding box of the provided
    reference shape.
    Parameters
    ----------
    images : list of menpo.image.Image
        The set of images that will be rescaled.
    group : str or None
        If str, then it specifies the group of the images' shapes. If None,
        then the images must have only one landmark group.
    reference_shape : menpo.shape.PointCloud
        The reference shape.
    verbose : bool, optional
        If True, then progress information is printed.

    Returns
    -------
    normalized_images : list of menpo.image.Image
        The rescaled images.
    """
    wrap = partial(print_progress, prefix='- Normalizing images size',
                   end_with_newline=False, verbose=verbose)
    # Normalize the scaling of all images wrt the reference_shape size
    normalized_images = [i.rescale_to_pointcloud(reference_shape, group=group)
                         for i in wrap(images)]
    return normalized_images


def normalization_wrt_reference_shape(images, group, diagonal, verbose=False):
    r"""
    Function that normalizes the images' sizes with respect to the size of
    the mean shape. This step is essential before building a deformable
    model. The normalization includes:

      1) Computation of the reference shape as the mean shape of the
         images' landmarks.
      2) Scaling of the reference shape using the diagonal.
      3) Rescaling of all the images so that their shape's scale is in
         correspondence with the reference shape's scale.

    Parameters
    ----------
    images : list of menpo.image.Image
        The set of images to normalize.
    group : str
        If str, then it specifies the group of the images' shapes. If None,
        then the images must have only one landmark group.
    diagonal : int or None
        If int, it ensures that the mean shape is scaled so that the
        diagonal of the bounding box containing it matches the provided
        value. If None, then the mean shape is not rescaled.
    verbose : bool, optional
        Flag that controls information and progress printing.

    Returns
    -------
    reference_shape : menpo.shape.PointCloud
        The reference shape that was used to resize all training images to
        a consistent object size.
    normalized_images : list of menpo.image.Image
        The images with normalized size.
    """
    # get shapes
    shapes = [i.landmarks[group] for i in images]
    # compute the reference shape and fix its diagonal length
    reference_shape = compute_reference_shape(shapes, diagonal,
                                              verbose=verbose)
    # normalize the scaling of all images wrt the reference_shape size
    normalized_images = rescale_images_to_reference_shape(
        images, group, reference_shape, verbose=verbose)
    return reference_shape, normalized_images


def compute_features(images, features, prefix='', verbose=False):
    r"""
    Function that extracts features from a list of images.

    Parameters
    ----------
    images : list of menpo.image.Image
        The set of images.
    features : callable
        The features extraction function. Please refer to menpo.feature
        and menpofit.feature.
    prefix : str
        The prefix of the printed information.
    verbose : bool, optional
        Flag that controls information and progress printing.

    Returns
    -------
    feature_images : list of menpo.image.Image
        The list of feature images.
    """
    wrap = partial(print_progress,
                   prefix='{}Computing feature space'.format(prefix),
                   end_with_newline=not prefix, verbose=verbose)
    return [features(i) for i in wrap(images)]


def scale_images(images, scale, prefix='', return_transforms=False,
                 verbose=False):
    r"""
    Function that rescales a list of images and optionally returns the
    scale transforms.

    Parameters
    ----------
    images : list of menpo.image.Image
        The set of images to scale.
    scale : float or tuple of floats
        The scale factor. If a tuple, the scale to apply to each dimension.
        If a single float, the scale will be applied uniformly across each
        dimension.
    prefix : str, optional
        The prefix of the printed information.
    return_transforms : bool, optional
        If True, then a list with the menpo.transform.Scale objects that
        were used to perform the rescale for each image is also returned.
    verbose : bool, optional
        Flag that controls information and progress printing.
    Returns
    -------
    scaled_images : list of menpo.image.Image
        The list of rescaled images.
    scale_transforms : list of menpo.transform.Scale
        The list of scale transforms that were used. It is returned only
        if return_transforms is True.
    """
    wrap = partial(print_progress,
                   prefix='{}Scaling images'.format(prefix),
                   end_with_newline=not prefix, verbose=verbose)
    if not np.allclose(scale, 1):
        # initialise scaled images and transforms lists
        scaled_images = []
        scale_transforms = []
        # for each image
        for i in wrap(images):
            if return_transforms:
                # store scaled image and transform, if asked
                sc_image, tr = i.rescale(scale, return_transform=True)
                scaled_images.append(sc_image)
                scale_transforms.append(tr)
            else:
                # store only scaled image
                scaled_images.append(i.rescale(scale))
        if return_transforms:
            return scaled_images, scale_transforms
        else:
            return scaled_images
    else:
        if return_transforms:
            scale_transforms = [Scale(1., images[0].n_dims)] * len(images)
            return images, scale_transforms
        else:
            return images


def warp_images(images, shapes, reference_frame, transform, prefix='',
                verbose=None):
    r"""
    Function that warps a list of images into the provided reference frame.

    Parameters
    ----------
    images : list of menpo.image.Image
        The set of images to warp.
    shapes : list of menpo.shape.PointCloud
        The set of shapes that correspond to the images.
    reference_frame : menpo.image.BooleanImage
        The reference frame to warp to.
    transform : menpo.transform.Transform
        Transform **from the reference frame back to the image**. Defines,
        for each pixel location on the reference frame, which pixel
        location should be sampled from on the image.
    prefix : str
        The prefix of the printed information.
    verbose : bool, optional
        Flag that controls information and progress printing.

    Returns
    -------
    warped_images : list of menpo.image.MaskedImage
        The list of warped images.
""" wrap = partial(print_progress, prefix='{}Warping images'.format(prefix), end_with_newline=not prefix, verbose=verbose) warped_images = [] # Build a dummy transform, use set_target for efficiency warp_transform = transform(reference_frame.landmarks['source'], reference_frame.landmarks['source']) for i, s in wrap(list(zip(images, shapes))): # Update Transform Target warp_transform.set_target(s) # warp images warp_landmarks=False) # attach reference frame landmarks to images warped_i.landmarks['source'] = reference_frame.landmarks['source'] warped_images.append(warped_i) return warped_images [docs]def extract_patches(images, shapes, patch_shape, normalise_function=no_op, prefix='', verbose=False): r""" Function that extracts patches around the landmarks of the provided images. Parameters ---------- images : list of menpo.image.Image The set of images to warp. shapes : list of menpo.shape.PointCloud The set of shapes that correspond to the images. patch_shape : (int, int) The shape of the patches. normalise_function : callable A normalisation function to apply on the values of the patches. prefix : str The prefix of the printed information. verbose : bool, Optional Flag that controls information and progress printing. Returns ------- patch_images : list of menpo.image.Image The list of images with the patches per image. Each output image has shape (n_center, n_offset, n_channels, patch_shape). """ wrap = partial(print_progress, prefix='{}Extracting patches'.format(prefix), end_with_newline=not prefix, verbose=verbose) parts_images = [] for i, s in wrap(list(zip(images, shapes))): parts = i.extract_patches(s, patch_shape=patch_shape, as_single_array=True) parts = normalise_function(parts) parts_images.append(Image(parts, copy=False)) return parts_images [docs]def build_reference_frame(landmarks, boundary=3, group='source'): r""" Builds a reference frame from a particular set of landmarks. 
    Parameters
    ----------
    landmarks : menpo.shape.PointCloud
        The landmarks that will be used to build the reference frame.
    boundary : int, optional
        The number of pixels to be left as a safe margin on the boundaries
        of the reference frame (has potential effects on the gradient
        computation).
    group : str, optional
        Group that will be assigned to the provided set of landmarks on
        the reference frame.

    Returns
    -------
    reference_frame : menpo.image.MaskedImage
        The reference frame.
    """
    if not isinstance(landmarks, TriMesh):
        warnings.warn('The reference shape passed is not a TriMesh or '
                      'subclass and therefore the reference frame (mask) will '
                      'be calculated via a Delaunay triangulation. This may '
                      'cause small triangles and thus suboptimal warps.',
                      MenpoFitModelBuilderWarning)
    # delegate to the private helper defined further down in this module
    return _build_reference_frame(landmarks, boundary=boundary, group=group)


def build_patch_reference_frame(landmarks, boundary=3, group='source',
                                patch_shape=(17, 17)):
    r"""
    Builds a patch-based reference frame from a particular set of landmarks.

    Parameters
    ----------
    landmarks : menpo.shape.PointCloud
        The landmarks that will be used to build the reference frame.
    boundary : int, optional
        The number of pixels to be left as a safe margin on the boundaries
        of the reference frame (has potential effects on the gradient
        computation).
    group : str, optional
        Group that will be assigned to the provided set of landmarks on
        the reference frame.
    patch_shape : (int, int), optional
        The shape of the patches.

    Returns
    -------
    patch_based_reference_frame : menpo.image.MaskedImage
        The patch-based reference frame.
    """
    boundary = np.max(patch_shape) + boundary
    # build reference frame
    reference_frame = _build_reference_frame(landmarks, boundary=boundary,
                                             group=group)
    # mask the reference frame around the landmarks
    reference_frame.build_mask_around_landmarks(patch_shape, group=group)
    return reference_frame


def densify_shapes(shapes, reference_frame, transform):
    r"""
    Function that densifies a set of sparse shapes given a reference frame.

    Parameters
    ----------
    shapes : list of menpo.shape.PointCloud
        The input shapes.
    reference_frame : menpo.image.BooleanImage
        The reference frame, the mask of which will be used.
    transform : menpo.transform.Transform
        The transform to use for mapping the dense points.
    Returns
    -------
    dense_shapes : list of menpo.shape.PointCloud
        The list of dense shapes.
    """
    # compute non-linear transforms
    transforms = [transform(reference_frame.landmarks['source'], s)
                  for s in shapes]
    # build dense shapes
    dense_shapes = []
    for (t, s) in zip(transforms, shapes):
        # map the reference frame's mask points through the transform
        warped_points = t.apply(reference_frame.mask.true_indices())
        dense_shape = PointCloud(np.vstack((s.points, warped_points)))
        dense_shapes.append(dense_shape)
    return dense_shapes


def align_shapes(shapes):
    r"""
    Function that aligns a set of shapes by applying Generalized Procrustes
    Analysis.

    Parameters
    ----------
    shapes : list of menpo.shape.PointCloud
        The input shapes.

    Returns
    -------
    aligned_shapes : list of menpo.shape.PointCloud
        The list of aligned shapes.
    """
    # centralize shapes
    centered_shapes = [Translation(-s.centre()).apply(s) for s in shapes]
    # align centralized shapes using Procrustes Analysis
    gpa = GeneralizedProcrustesAnalysis(centered_shapes)
    return [s.aligned_source() for s in gpa.transforms]


class MenpoFitBuilderWarning(Warning):
    r"""
    A warning that some part of building the model may cause issues.
    """
    pass
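The diagonal-rescaling step in compute_reference_shape above boils down to simple arithmetic: scale the shape so that the diagonal of its bounding box equals the requested value. A dependency-free Python sketch of just that logic (plain coordinate tuples instead of menpo PointClouds; the helper name is my own):

```python
import math

def rescale_to_diagonal(points, diagonal):
    """Scale 2-D points so that the diagonal of their bounding box
    equals `diagonal`, mirroring compute_reference_shape's scaling."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_range = max(xs) - min(xs)
    y_range = max(ys) - min(ys)
    scale = diagonal / math.sqrt(x_range ** 2 + y_range ** 2)
    return [(p[0] * scale, p[1] * scale) for p in points]

# bounding box of this triangle has ranges (3, 4), i.e. diagonal 5
pts = rescale_to_diagonal([(0, 0), (3, 0), (0, 4)], 100)
xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
assert abs(diag - 100) < 1e-9   # diagonal rescaled from 5 to 100
```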
{}
## Chemistry and Chemical Reactivity (9th Edition)

Published by Cengage Learning

# Chapter 2 Atoms, Molecules, and Ions - Study Questions - Page 95j: 152

#### Answer

a) $7.10\times10^{-4}\ mol$ of U; empirical formula $U_3O_8$ (triuranium octoxide); $2.36\times10^{-4}\ mol$ of oxide
b) 238
c) 6

#### Work Step by Step

a) Number of moles of uranium: $0.169\ g\div238.02891\ g/mol=7.10\times10^{-4}\ mol$
Number of moles of oxygen: $(0.199-0.169)\ g\div15.9994\ g/mol=1.88\times10^{-3}\ mol$
Dividing the oxygen value by uranium's: $1.88\times10^{-3}/7.10\times10^{-4}=2.64\approx8/3$
The empirical formula is then $U_3O_8$, triuranium octoxide, whose molecular weight is $842.08\ g/mol$.
Number of moles obtained: $0.199\ g\div842.08\ g/mol=2.36\times10^{-4}\ mol$
b) The isotope with mass number of 238, because the atomic weight is closest to this mass number.
c) Molecular weight of:
$UO_2(NO_3)_2$: $394.04\ g/mol$
$H_2O$: $18.015\ g/mol$
The number of moles must have been the same after the dehydration: $0.679\ g\div394.04\ g/mol=1.72\times10^{-3}\ mol$
Before: $0.865\ g\div1.72\times10^{-3}\ mol=501.98\ g/mol$
$501.98=394.04+z\times18.015$
$z=6$
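Part (c)'s hydrate arithmetic is easy to verify; a short Python sketch using the molar masses quoted in the solution:

```python
M_ANHYDROUS = 394.04   # UO2(NO3)2, g/mol, as quoted in the solution
M_WATER = 18.015       # H2O, g/mol

moles = 0.679 / M_ANHYDROUS      # moles after dehydration
M_hydrate = 0.865 / moles        # molar mass before dehydration
z = (M_hydrate - M_ANHYDROUS) / M_WATER

assert round(z) == 6             # UO2(NO3)2 . 6 H2O
```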
{}
# Postselection

In probability theory, to postselect is to condition a probability space upon the occurrence of a given event. In symbols, once we postselect for an event ${\displaystyle E}$, the probability of some other event ${\displaystyle F}$ changes from ${\textstyle Pr[F]}$ to the conditional probability ${\displaystyle Pr[F\,|\,E]}$.

For a discrete probability space, ${\textstyle Pr[F\,|\,E]={\frac {Pr[F\,\cap \,E]}{Pr[E]}}}$, and thus we require that ${\textstyle Pr[E]}$ be strictly positive in order for the postselection to be well-defined.

See also PostBQP, a complexity class defined with postselection. Using postselection, it seems quantum Turing machines are much more powerful: Scott Aaronson proved[1][2] that PostBQP is equal to PP.

Some quantum experiments[3] use post-selection after the experiment as a replacement for communication during the experiment, by post-selecting the communicated value into a constant.

## References

1. ^ Aaronson, Scott (2005). "Quantum computing, postselection, and probabilistic polynomial-time". Proceedings of the Royal Society A. 461 (2063): 3473-3482. arXiv:quant-ph/0412187. Bibcode:2005RSPSA.461.3473A. doi:10.1098/rspa.2005.1546. Preprint available at [1]
2. ^ Aaronson, Scott (2004-01-11). "Complexity Class of the Week: PP". Computational Complexity Weblog.
3. ^ Hensen; et al. "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres". Nature. 526: 682-686. arXiv:1508.05949. Bibcode:2015Natur.526..682H. doi:10.1038/nature15759. PMID 26503041.

This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
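The conditional-probability definition above can be illustrated on a tiny discrete probability space. A minimal Python sketch (the two-dice sample space is my own choice, not from the article) computes a postselected probability exactly with fractions:

```python
from fractions import Fraction
from itertools import product

# Discrete probability space: two fair six-sided dice, uniform measure.
space = list(product(range(1, 7), repeat=2))

def pr(event):
    """Probability of an event (given as a predicate on outcomes)."""
    return Fraction(sum(1 for w in space if event(w)), len(space))

E = lambda w: w[0] + w[1] == 7      # the event we postselect on
F = lambda w: w[0] == 3             # the event of interest

# Pr[F | E] = Pr[F and E] / Pr[E]; well-defined since Pr[E] > 0.
posterior = pr(lambda w: E(w) and F(w)) / pr(E)

assert pr(E) == Fraction(1, 6)          # 6 of 36 outcomes sum to 7
assert posterior == Fraction(1, 6)      # only (3, 4) survives in F and E
```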
{}
# Le Monde puzzle [#8]

March 29, 2011

By (This article was first published on Xi'an's Og » R, and kindly contributed to R-bloggers)

Another mathematical puzzle from Le Monde that relates to a broken calculator (skipping the useless tale): Given a pair of arbitrary positive integers (x,y), a calculator can either subtract the same integer [lesser than min(x,y)] from both x and y, or multiply either x or y by 2. Is it always possible to obtain equal entries by iterating calls to this calculator?

While the solution provided in this weekend edition of Le Monde is to keep multiplying x=min(x,y) by 2 until it is larger than or equal to y=max(x,y)/2, at which stage subtracting 2x-y leads to (y-x,2y-2x), which is one multiplication away from equality, I wrote a simple R code that blindly searches for a path to equality, using as a target function exp{x²+y²+(x-y)²}. I did not even include a simulated annealing schedule as the optimal solution is known. Here is the R code:

#algorithm that brings two numbers (x,y) to be equal by
#operations x=2*x and (x,y)=(x,y)-(c,c)
emptied=function(a,b){
  mab=min(a,b)-1
  a=a-mab
  b=b-mab
  prop=matrix(0,3,2)
  targ=rep(0,3)
  targ0=a^2+b^2+(a-b)^2
  stop=(a==b)
  while (!stop){
    prop[1,]=c(a,b)-sample(0:(min(a,b)-1),1)
    targ[1]=sum(prop[1,]^2)+diff(prop[1,])^2
    prop[2,]=c(2*a,b)
    targ[2]=sum(prop[2,]^2)+diff(prop[2,])^2
    prop[3,]=c(a,2*b)
    targ[3]=sum(prop[3,]^2)+diff(prop[3,])^2
    i=sample(1:3,1,prob=exp(targ0-targ))
    a=prop[i,1];b=prop[i,2];targ0=targ[i]
    stop=(a==b)
    print(c(a,b))
  }
}

For instance,

> emptied(39,31)
[1] 9 2
[1] 8 1
[1] 8 2
[1] 7 1
[1] 7 2
[1] 7 4
[1] 6 3
[1] 5 2
[1] 4 1
[1] 4 2
[1] 3 1
[1] 3 1
[1] 3 1
[1] 3 1
[1] 3 1
[1] 3 2
[1] 2 1
[1] 2 1
[1] 2 1
[1] 2 1
[1] 2 1
[1] 2 2

Filed under: R, Statistics Tagged: Le Monde, mathematical puzzle, R code
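The deterministic strategy quoted from Le Monde (double the smaller entry until 2x ≥ y, subtract 2x−y from both, then double once more) can be written out directly. A sketch in Python rather than R, so that it stands alone; the function name and the returned history are my own choices:

```python
def equalize(x, y):
    """Drive (x, y) to equal entries using only the two allowed moves:
    double one entry, or subtract the same positive integer (< min) from
    both. Follows the strategy described in the post; returns the history
    of states, sorted so the smaller entry comes first."""
    a, b = x, y
    history = [tuple(sorted((a, b)))]
    while a != b:
        if a > b:
            a, b = b, a                  # keep a = min, b = max
        if 2 * a <= b:
            a *= 2                       # double the smaller entry
        else:
            s = 2 * a - b                # 0 < s < a, a legal subtraction
            a, b = a - s, b - s          # yields (b - a, 2(b - a))
        history.append(tuple(sorted((a, b))))
    return history

hist = equalize(39, 31)
assert hist[-1][0] == hist[-1][1]        # entries made equal
```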
{}
# Use the distributive property to write an equivalent variable expression Now, we bring in the distributive property. Alright, so this is going to be four. They will benefit from activities where they are asked to insert parentheses into expressions to generate a given number. Like Terms - Terms that have the same variables raised to the same exponent, but could have different coefficients. The associative property is about grouping. Write an expression that gives the length of the swimming pool. This set of cards is for the higher achieving group of students. Yes, and we often use parentheses to show those groups. Do Now 10 minutes At this point, students should be comfortable with algebra vocabulary, translating and evaluating algebraic expressions, and using the distributive property with algebraic expressions. What is that dollar amount. Students will be put in partnerships based upon ability level. How is this property used. And so, what's this going to be. Why do you need them. Try It Out 5 minutes For today's guided practice, I will present my students with 3 sets of equivalent expressions. Each bag need to contain the same number of teaspoons and each bag can contain only one spice. By this point, students have developed the definition, which I will formalize and share with the class. What about the order of the variables. We can take the 9 out of both terms and be left with the 11 and 6. Which property does the equation show. What is another way to write this expression What is 10n. Student correctly explains something about the importance of using the order of operations in Part 3. Write an expression that gives the price of the skateboard. For this reason, the following is true: So far we have: I know that properties are somewhat like laws; they always work. Use the distributive property to rewrite this expression. Instructional Implications Explain what it means for expressions to be equivalent i. 
Because the coefficient 1 is implied, students will benefit from the teacher writing the coefficient 1 whenever it is implied to make it explicit. Students will benefit from a discussion of what the word equivalent means. What value will you use for x to substitute? Add 3 to a number, subtract the result from 1, then double what you have. Student demonstrates solid understanding of the mathematical ideas and processes related to finding perimeter and writing expressions with variables to express relationships in Parts 1 and 2. What does the word distribute mean? First you multiply 3 by x, which is 3x. For example, 6y means 6 times whatever value the variable y has. So, we have, write it like this. For this reason, it will take a bit longer for students to match up the cards, as they will have to eliminate more options. I see what you mean. There are 16 ounces in 1 pound. Write an expression that gives the number of ounces in pounds. Use p to represent pounds. Use variables to represent numbers and write expressions when solving a real-world or mathematical problem; understand that a variable can represent an unknown number, or, depending on the purpose at hand, any number in a specified set. Terms like 3x and 7x that have the same variable part are called like terms. The constant terms are like terms as well. Like terms can be combined, as is stated in the distributive property. $$3x+7x=(3+7)x=10x$$ Expressions like 3x+7x and 10x are equivalent expressions, since they denote the same number no matter which value is substituted for x.
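To make the equivalence concrete, here is a quick numerical spot check in Python (a minimal sketch; the function names are just illustrative). Equivalent expressions give the same value for every value of the variable:

```python
# 3x + 7x and 10x are equivalent: they agree for every value of x.
def combined(x):
    return 3 * x + 7 * x

def simplified(x):
    return 10 * x

for x in [-2, 0, 1.5, 42]:
    assert combined(x) == simplified(x)

# The distributive property also works in reverse ("factoring out"):
# taking the 9 out of both terms leaves the 11 and the 6.
assert 9 * 11 + 9 * 6 == 9 * (11 + 6)
```

Checking a handful of values is not a proof, but it is a useful way for students to see what "equivalent" means in practice.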
$$\definecolor{flatred}{RGB}{192, 57, 43} \definecolor{flatblue}{RGB}{41, 128, 185} \definecolor{flatgreen}{RGB}{39, 174, 96} \definecolor{flatpurple}{RGB}{142, 68, 173} \definecolor{flatlightblue}{RGB}{52, 152, 219} \definecolor{flatlightred}{RGB}{231, 76, 60} \definecolor{flatlightgreen}{RGB}{46, 204, 113} \definecolor{flatteal}{RGB}{22, 160, 133} \definecolor{flatpink}{RGB}{243, 104, 224}$$ ## February 20, 2021 ### Graphs of Sine and Cosine Go to Preview Activity Textbook Reference: Section 6.1 Graphs of the Sine and Cosine Functions When you're getting to know someone, it's worth it to learn different things about them. You might know someone's favorite pizza topping or what kind of movies they like, but once you start talking about things like their family or where they grew up, you start to get a better understanding of who they are, and your friendship gets stronger. ...wait, what does this have to do with math? Well, for the past few lessons, we've been primarily focusing on the geometric properties of the circular functions — the way that they tell us things like height, overness, and slope of an angle on the unit circle. These are great intuitions to have, but they're really only part of their story — we need to get to know them better. For the next few lessons, we'll be focusing on how they behave as functions in their own right — much like how you studied functions like polynomials and logarithms. ## Getting a better picture One of the best ways to really get to know a function is to look at its graph. They do say, after all, that a picture is worth a thousand words. If that's the case, then a GIF is worth a thousand pictures: As our point moves around the circle, its height and overness fluctuate, back and forth between $$-1$$ and $$1$$. The graph we get as a result is shaped like a wave — a shape we call a sinusoid. 
Here's a portion of the graph of $$\color{flatblue}y=\sin(x)$$, plotted on the interval $$0\le x\le 2\pi$$: And then the same portion of the graph of $$\color{flatred}y=\cos(x)$$, over the same interval: Note that I said these are only a portion of the graph. The full graphs are more like this: Notice how the basic shape of each (which is a little darker in the above graph) repeats forever in either direction. This is what we mean when we say that the sine and cosine are periodic functions, and we call one full cycle a period. By the way, did you notice that I wrote $$\color{flatblue}\sin(x)$$ above instead of $$\color{flatblue}\sin(\theta)$$? What's up with that? Once we recognize the sine as a function, we can use whatever variables we want. Our input might be the angle of a unit circle, so we might use $$\theta$$. But it also might be the time that we've been on a Ferris wheel, so we might end up using something else like $$t$$. Remember the immortal words of Shakespeare: "A variable by any other name would smell as sweet." (Okay, maybe I paraphrased a bit.) We really unlock the true potential of sinusoidal graphs when we transform them: To understand what we're seeing, it's helpful to define a few important terms: • The midline is the horizontal line in the center of the wave, halfway between the maximum and the minimum height. • The amplitude is the distance from the midline to either the maximum or the minimum; in other words, it's the maximum displacement from the midline. • Amplitude is always given as a positive number. • The period, as mentioned before, is the length of one full cycle of the wave. For example, look at the following graph: • The midline is the line $$y=1$$. • The amplitude is $$3$$. • The period is $$4$$. In class you'll explore how to produce a sinusoidal wave with any midline, amplitude, and period you'd like — which will be essential to using these waves in all sorts of applications!
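One function with the midline, amplitude, and period from that example is $$y=1+3\sin\left(\tfrac{2\pi x}{4}\right)$$ (an assumed example — the actual plotted function isn't given). A short Python sketch recovers the three quantities from sampled values:

```python
import math

def sinusoid(x, midline=1.0, amplitude=3.0, period=4.0):
    # Generic sinusoid: y = midline + amplitude * sin(2*pi*x / period)
    return midline + amplitude * math.sin(2 * math.pi * x / period)

# Sample one full cycle densely, then read the quantities off the samples.
xs = [i * 0.001 for i in range(4000)]   # one period: 0 <= x < 4
ys = [sinusoid(x) for x in xs]

maximum, minimum = max(ys), min(ys)
midline = (maximum + minimum) / 2       # halfway between max and min
amplitude = (maximum - minimum) / 2     # max displacement from the midline

print(round(midline, 2), round(amplitude, 2))  # prints: 1.0 3.0
```

Notice that the midline and amplitude fall straight out of the maximum and minimum heights, exactly as the definitions above describe.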
# Preview Activity 5 1. Complete the following table for $$\color{flatblue}y=\sin(x)$$: $$x$$ $$0$$ $$\tfrac\pi2$$ $$\pi$$ $$\tfrac{3\pi}2$$ $$2\pi$$ $$\color{flatblue}\sin(x)$$ Then sketch a graph of $$\color{flatblue}y=\sin(x)$$ on the interval $$[0,2\pi)$$ as accurately as possible by hand, including these five points. 2. Repeat the above exercise for $$\color{flatred}y=\cos(x)$$. Then compare and contrast the two graphs. • By the way, you should practice sketching the graphs of $$\color{flatblue}y=\sin(x)$$ and $$\color{flatred}y=\cos(x)$$ from memory a few times. It really helps to be familiar with the basic shapes of these two functions! 3. Refer back to the graphs of $$\color{flatblue}y=\sin(x)$$ and $$\color{flatred}y=\cos(x)$$. 1. What is the midline of each of these graphs? 2. What is the amplitude? 3. What is the period? 4. Explain in your own words why it should make sense that the sine and cosine functions are periodic — that is, why their graphs repeat forever. (It might help to remember what they represent on the unit circle!) 5. Answer AT LEAST one of the following questions: 2. What was an "a-ha" moment you had while doing this reading? 3. What was the muddiest point of this reading for you?
# Research Article Continuous Time Particle Filtering for fMRI ## Abstract We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study. ## Reference L.M. Murray and A. Storkey (2008). Continuous Time Particle Filtering for fMRI. Advances in Neural Information Processing Systems. 20:1049--1056. url:http://books.nips.cc/papers/files/nips20/NIPS2007_0557.pdf. ## BibTeX @Article{Murray2008, title = {Continuous Time Particle Filtering for f{MRI}}, author = {Lawrence Matthew Murray and Amos Storkey}, journal = {Advances in Neural Information Processing Systems}, year = {2008}, volume = {20}, pages = {1049--1056}, url = {http://books.nips.cc/papers/files/nips20/NIPS2007_0557.pdf} }
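As background, the particle filter adapted in the paper belongs to the standard sequential Monte Carlo family. The sketch below is a generic bootstrap particle filter on a toy one-dimensional random-walk model — not the authors' implementation or their hemodynamic model — just to illustrate the propagate/weight/resample loop:

```python
import math
import random

random.seed(0)

def bootstrap_filter(observations, n_particles=500, sigma_x=0.5, sigma_y=0.5):
    """Bootstrap particle filter for a toy model:
    x_t = x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    Returns the filtered posterior mean at each time step."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Propagate particles through the transition model.
        particles = [x + random.gauss(0.0, sigma_x) for x in particles]
        # Weight particles by the observation likelihood.
        weights = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to combat weight degeneracy.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return means

estimates = bootstrap_filter([0.1, 0.3, 0.2, 0.5, 0.4])
```

The paper's setting replaces the toy transition with numerical integration of the stochastic differential model, which is where the time and space complexity the abstract mentions arises.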
# Parametric Calculation#

Problem Description:

• Write a user file macro to calculate the distance d between either nodes or keypoints in PREP7. Define abbreviations for calling the macro and verify the parametric expressions by using the macro to calculate the distance between nodes $$N_1$$ and $$N_2$$ and between keypoints $$K_3$$ and $$K_4$$.

Reference:

• None.

Analysis Type(s):

• Parametric Arithmetic.

Element Type:

• None.

Geometric Properties (Coordinates):

• $$N_{\mathrm{1(x,y,z)}} = 1.5, 2.5, 3.5$$
• $$N_{\mathrm{2(x,y,z)}} = -3.7, 4.6, -3$$
• $$K_{\mathrm{3(x,y,z)}} = 100, 0, 30$$
• $$K_{\mathrm{4(x,y,z)}} = -200, 25, 80$$

Analysis Assumptions and Modeling Notes:

• Instead of *CREATE, *USE, etc., we create a class Create with methods that correspond to each type of simulation. This class makes it possible to change the coordinates and reuse the methods. The simulation can be checked not just against target values, but also with the standard distance formula between two points in the Cartesian coordinate system: $$D = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$$

• Python representation of the distance formula:

```python
import math

# Define coordinates for keypoints K3 and K4.
x1, y1, z1 = 100, 0, 30
x2, y2, z2 = -200, 25, 80

dist_kp = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)
print(dist_kp)
```

## Start MAPDL#

Start MAPDL and import the NumPy and pandas libraries.

```python
# sphinx_gallery_thumbnail_path = '_static/vm8_setup.png'
import numpy as np
import pandas as pd
from ansys.mapdl.core import launch_mapdl

# Start MAPDL.
mapdl = launch_mapdl()
```

## Pre-Processing#

Enter verification example mode and the pre-processing routine.

```python
mapdl.clear()
mapdl.verify()
mapdl.prep7(mute=True)
```

## Define Class#

Define the class Create with the methods kp_distances and node_distances to calculate and plot the distances between keypoints and nodes.

```python
class Create:
    def __init__(self, p1, p2):
        # Point attributes.
        self.p1 = p1
        self.p2 = p2

    def kp_distances(self):
        # Define keypoints by coordinates.
        kp1 = mapdl.k(npt=3, x=self.p1[0], y=self.p1[1], z=self.p1[2])
        kp2 = mapdl.k(npt=4, x=self.p2[0], y=self.p2[1], z=self.p2[2])

        # Get the distance between keypoints.
        dist_kp, kx, ky, kz = mapdl.kdist(kp1, kp2)

        # Plot keypoints.
        mapdl.kplot(
            show_keypoint_numbering=True,
            vtk=True,
            background="grey",
            show_bounds=True,
            font_size=26,
        )
        return dist_kp

    def node_distances(self):
        # Define nodes by coordinates.
        node1 = mapdl.n(node=1, x=self.p1[0], y=self.p1[1], z=self.p1[2])
        node2 = mapdl.n(node=2, x=self.p2[0], y=self.p2[1], z=self.p2[2])

        # Get the distance between nodes.
        dist_node, node_x, node_y, node_z = mapdl.ndist(node1, node2)

        # Plot nodes.
        mapdl.nplot(nnum=True, vtk=True, color="grey", show_bounds=True, font_size=26)
        return dist_node

    @property
    def p1(self):
        # Getting value
        return self._p1

    @p1.setter
    def p1(self, new_value):
        # Check the data type:
        if not isinstance(new_value, list):
            raise ValueError("The coordinates should be implemented by the list!")
        # Check the quantity of items:
        if len(new_value) != 3:
            raise ValueError(
                "The coordinates should have three items in the list as [X, Y, Z]"
            )
        self._p1 = new_value

    @property
    def p2(self):
        return self._p2

    @p2.setter
    def p2(self, new_value):
        # Check the data type:
        if not isinstance(new_value, list):
            raise ValueError("The coordinates should be implemented by the list!")
        # Check the quantity of items:
        if len(new_value) != 3:
            raise ValueError(
                "The coordinates should have three items in the list as [X, Y, Z]"
            )
        self._p2 = new_value
```

## Distance between keypoints#

Use the kp_distances method to get the distance between the keypoints and print the output. The keypoints have the following coordinates:

• $$K_{\mathrm{3(x,y,z)}} = 100, 0, 30$$
• $$K_{\mathrm{4(x,y,z)}} = -200, 25, 80$$

```python
kp1 = [100, 0, 30]
kp2 = [-200, 25, 80]

kp = Create(kp1, kp2)
kp_dist = kp.kp_distances()
print(f"Distance between keypoints is: {kp_dist:.2f}\n\n")

# Print the list of keypoints.
print(mapdl.klist())
```

Out:

```
Distance between keypoints is: 305.16

   3   100.      0.00      30.0      0.00     0  0  0  0  0  0
   4  -200.      25.0      80.0      0.00     0  0  0  0  0  0
```

## Distance between nodes#

Use the node_distances method to get the distance between the nodes and print the output. The nodes have the following coordinates:

• $$N_{\mathrm{1(x,y,z)}} = 1.5, 2.5, 3.5$$
• $$N_{\mathrm{2(x,y,z)}} = -3.7, 4.6, -3$$

```python
node1 = [1.5, 2.5, 3.5]
node2 = [-3.7, 4.6, -3]

nodes = Create(node1, node2)
node_dist = nodes.node_distances()
print(f"Distance between nodes is: {node_dist:.2f}\n\n")

# Print the list of nodes.
print(mapdl.nlist())
```

Out:

```
Distance between nodes is: 8.58

   1   1.5000    2.5000    3.5000    0.00   0.00   0.00
   2  -3.7000    4.6000   -3.0000    0.00   0.00   0.00
```

## Check Results#

Finally we have the results of the distances for both simulations, which can be compared with the expected target values:

• 1st simulation: the distance between keypoints $$K_3$$ and $$K_4$$, where $$LEN_1 = 305.16\,(in)$$
• 2nd simulation: the distance between nodes $$N_1$$ and $$N_2$$, where $$LEN_2 = 8.58\,(in)$$

For a better representation of the results we can use a pandas dataframe with the following settings:

```python
# Define the names of the rows.
row_names = ["N1 - N2 distance (LEN2)", "K3 - K4 distance (LEN1)"]

# Define the names of the columns.
col_names = ["Target", "Mechanical APDL", "RATIO"]

# Define the values of the target results.
target_res = np.asarray([8.5849, 305.16])

# Create an array with outputs of the simulations.
simulation_res = np.asarray([node_dist, kp_dist])

# Identify and fill the corresponding columns.
main_columns = {
    "Target": target_res,
    "Mechanical APDL": simulation_res,
    "Ratio": list(np.divide(simulation_res, target_res)),
}

# Create and fill the output dataframe with pandas.
df2 = pd.DataFrame(main_columns, index=row_names).round(2)

# Apply settings for the dataframe.
```

Out:

```
                         Target  Mechanical APDL  Ratio
N1 - N2 distance (LEN2)    8.58             8.58    1.0
K3 - K4 distance (LEN1)  305.16           305.16    1.0
```

## Stop MAPDL#

```python
mapdl.exit()
```

Total running time of the script: ( 0 minutes 0.785 seconds)

Gallery generated by Sphinx-Gallery
# tf.contrib.rnn.AttentionCellWrapper ### class tf.contrib.rnn.AttentionCellWrapper Basic attention cell wrapper. Implementation based on https://arxiv.org/abs/1409.0473. ## Methods ### __init__(cell, attn_length, attn_size=None, attn_vec_size=None, input_size=None, state_is_tuple=False) Create a cell with attention. #### Args: • cell: an RNNCell to which attention is added. • attn_length: integer, the size of an attention window. • attn_size: integer, the size of an attention vector. Equal to cell.output_size by default. • attn_vec_size: integer, the number of convolutional features calculated on the attention state and the size of the hidden layer built from the base cell state. Equal to attn_size by default. • input_size: integer, the size of a hidden linear layer, built from inputs and attention. Derived from the input tensor by default. • state_is_tuple: If True, accepted and returned states are n-tuples, where n = len(cells). By default (False), the states are all concatenated along the column axis. #### Raises: • TypeError: if cell is not an RNNCell. • ValueError: if cell returns a state tuple but the flag state_is_tuple is False, or if attn_length is zero or less. ### zero_state(batch_size, dtype) Return zero-filled state tensor(s). #### Args: • batch_size: int, float, or unit Tensor representing the batch size. • dtype: the data type to use for the state. #### Returns: If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
# Simulating a chemically fueled molecular motor with nonequilibrium molecular dynamics ## Abstract Most computer simulations of molecular dynamics take place under equilibrium conditions—in a closed, isolated system, or perhaps one held at constant temperature or pressure. Sometimes, extra tensions, shears, or temperature gradients are introduced to those simulations to probe one type of nonequilibrium response to external forces. Catalysts and molecular motors, however, function based on the nonequilibrium dynamics induced by a chemical reaction’s thermodynamic driving force. In this scenario, simulations require chemostats capable of preserving the chemical concentrations of the nonequilibrium steady state. We develop such a dynamic scheme and use it to observe cycles of a particle-based classical model of a catenane-like molecular motor. Molecular motors are frequently modeled with detailed-balance-breaking Markov models, and we explicitly construct such a picture by coarse graining the microscopic dynamics of our simulations in order to extract rates. This work identifies inter-particle interactions that tune those rates to create a functional motor, thereby yielding a computational playground to investigate the interplay between directional bias, current generation, and coupling strength in molecular information ratchets. ## Introduction Molecular motors are ubiquitous in biology. Proteins like kinesin1 and myosin2 transduce free energy, hydrolyzing adenosine triphosphate (ATP) to power mechanical work3,4,5,6.
These motors operate by coupling ATP hydrolysis to linear motion, carrying cellular cargoes along microtubule and actin tracks, respectively. Those natural motors have also been engineered to modify their performance. Mutated kinesin can process further along microtubule tracks than wild type7,8 or rapidly cease activity in response to small molecules9. Myosin can be engineered to move along an actin track in the opposite direction of the wild type motor10,11. Despite those successes modifying existing motors, it remains challenging to design molecular interactions to build similar machines from the ground up. Chemists have sought to build those machines using the principles of the biological motors but with different synthetic building blocks12,13,14. Like the biological inspiration, the synthetic machines should rectify thermal fluctuations into directed motion by harvesting free energy from chemical fuel, a goal first realized by the artificial motor of Wilson et al.15. One challenge in designing these machines is that the mechanism is typically considered in terms of the kinetics of elementary steps while the design is more naturally thought of in terms of the strength of interactions between molecular components. Connecting those interactions to the ultimate dynamical function is particularly challenging because microscopic motors operate in a noisy regime characterized by stochastic fluctuations16,17. In equilibrium situations, computer simulations have proven to be particularly useful at bridging that connection between molecular design and dynamics, particularly in the presence of noise18. The nonequilibrium dynamics of molecular motors, however, preclude straightforward application of equilibrium simulation methods. Equilibrium dynamics moves in forward and reverse directions with equal probability, so a directional motor requires nonequilibrium conditions powered by a chemical fuel3,6,19,20. 
To capture the nonequilibrium behavior in simulations, a number of different strategies have been employed. One approach aims to describe different equilibria of a motor, e.g., one with a fuel bound and one with the fuel unbound. The nonequilibrium dynamics is induced by externally imposing time-dependent swaps between these energy surfaces21,22,23,24,25,26,27. A complementary body of work breaks the time-reversal symmetry of equilibrium dynamics by imposing forces or torques on the motor28,29,30,31,32,33. Both approaches can obscure how the chemistry couples to the mechanical motion, and that mechanochemical coupling is central to a motor’s function6,34. To explicitly capture that coupling, it is necessary to continually resupply fuel and extract waste from a simulation so as to sustain a nonequilibrium steady state (NESS), a strategy implemented with a minimal kinesin-like walker model35 and with Janus particles and sphere-dimers motor models that move along self-induced concentration gradients (diffusiophoresis)36,37,38,39. Here we present a model motor and fuel with sufficiently simple pair potentials that the steady-state dynamics can be directly simulated, with a nonequilibrium environment maintained by external baths. Our motor is essentially that of Wilson et al.15, where a reaction biases the relative motion of two interlocked rings in a preferred direction. We show how simulations can be used to quantify motor performance and tradeoffs. Armed with the explicit particle-based model, we analyze the resulting currents using a nonequilibrium Markov state framework, with which we aim to more directly connect the stochastic thermodynamic analysis of motors40,41,42 with particle-based simulations. The model and methods we report serve as a testbed for exploring how inter-particle interactions affect the operation of a molecular motor. 
## Results ### Fueling a nonequilibrium steady state with a classical fuel model Consider the dynamics of a motor protein in the presence of ATP, adenosine diphosphate (ADP), and inorganic phosphate (P). In an ideal solution, the reversible chemical reaction ATP ⇌ ADP + P will relax into an equilibrium with equilibrium constant $$K=\frac{[\mathrm{ADP}][\mathrm{P}]}{[\mathrm{ATP}]}=e^{-\beta (\mu_{\mathrm{ADP}}^{0}+\mu_{\mathrm{P}}^{0}-\mu_{\mathrm{ATP}}^{0})},$$ (1) where $$\mu_{\mathrm{ADP}}^{0}$$, $$\mu_{\mathrm{P}}^{0}$$, and $$\mu_{\mathrm{ATP}}^{0}$$ are standard-state chemical potentials and β is the inverse temperature in units of Boltzmann’s constant kB. At chemical equilibrium, the motion of the motor must obey detailed balance, precluding the protein from exhibiting net motion. The situation is altered if external means prevent the chemical reaction from reaching equilibrium, for example, if ATP is fed into the system while ADP and P are extracted. Provided the reaction of ATP is suitably coupled to the protein’s motion, the fuel’s free energy gradient pushes the motor into a NESS with net directed motion, giving rise to currents. In so-called tightly coupled motors, each reaction event correlates with a configurational change of the motor. For example, when F1-ATP synthase generates work from ATP43, each catalyzed ATP hydrolysis corresponds almost one-to-one with a 120° rotation of a rotor44. Other motors are loosely coupled, with motor motion only weakly correlated with fuel consumption45. That mechanochemical coupling can be realized in a strictly classical model, provided the model exhibits a reversible transformation between fuel and waste and that a continuous influx of fuel and outflow of waste prevents relaxation to equilibrium.
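Eq. (1) is straightforward to evaluate; here is a small sketch with made-up standard-state chemical potentials (in units of kBT, so β = 1 — illustrative numbers only, not the paper's parameters):

```python
import math

beta = 1.0                                  # inverse temperature, 1/(k_B T)
mu0_ATP, mu0_ADP, mu0_P = 10.0, 2.0, 1.0    # illustrative standard-state values

# Eq. (1): K = exp(-beta * (mu0_ADP + mu0_P - mu0_ATP))
K = math.exp(-beta * (mu0_ADP + mu0_P - mu0_ATP))

# With these numbers the standard-state free energy change is -7 k_B T, so
# K = e^7 ~ 1100: equilibrium strongly favors the ADP + P side, as for real
# ATP hydrolysis.
```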
It is furthermore necessary that the model fuel exhibit metastability, so that interconversion between fuel and waste is slower than fuel injection and waste removal. We constructed the classical fuel out of tetrahedral clusters of volume-excluding particles, as shown in Fig. 1. Four such particles, colored blue, are bonded together to form a tetrahedral shell. A single unbound volume-excluding particle, colored red, can be kinetically trapped inside the tetrahedron. A filled tetrahedral cluster (FTC) does not retain its red central particle (C) indefinitely. Rather, a rare thermal fluctuation inevitably allows the tetrahedral cluster to contort enough for the kinetically trapped C to escape, leaving behind an empty tetrahedral cluster (ETC). Consistent with microscopic reversibility, the reverse process is also possible. At equilibrium, the flux from FTC → ETC + C would balance the reverse flux of ETC + C → FTC. Since the ETC + C state is both entropically and energetically favorable, equilibrium would strongly favor the empty tetrahedra. An initial concentration of FTC fuel would quickly deplete to its near-zero equilibrium concentration if not for grand canonical Monte Carlo (GCMC) chemostats, which provide a mechanism to hold the chemical potentials for the three different species at unequilibrated values. Consequently, within a simulation cell, the FTC, ETC, and C species are stochastically injected and removed so as to maintain a NESS in which the FTC → ETC + C reaction is typical. The reverse reaction, though possible, is practically unobserved. Because the statistical consequences of nonequilibrium driving forces are present even in strictly classical dynamics, it is not necessary to confront the quantum-mechanical complexities presented by chemical bond breaking. Rather, the classical model suffices as a practical way to address fundamental questions about the impact of pairwise interactions on dynamical function. 
### A classical motor model We aimed to engineer a model motor capable of harvesting the free energy of a NESS with a high concentration of FTC and low concentrations of ETC and C, motivated by the first synthetic, autonomous, chemically fueled molecular motor of Wilson et al.15. Their motor is a catenane consisting of two interlocked rings. The smaller of the two rings, a benzylic amide, shuttles around a track formed by the larger ring. On that track, Wilson et al. engineered two fumaramide binding sites as well as two adjacent hydroxyl groups that catalyze the decomposition of a bulky fuel (9-fluorenylmethoxycarbonyl chloride) into waste products (CO2 and dibenzofulvene). The relative positioning of binding and catalytic sites breaks symmetry such that fuel reaction induces directed motion, the kinetics of which have been expressed elegantly in terms of an information ratchet46,47, where directed motion arises from the gating of natural thermal diffusion in a preferred direction3,48,49,50. That mechanism relies on steric considerations; the fuel reacts more slowly at a catalytic site when the shuttling ring is near enough to block access to the catalytic site. The same sort of mechanism underlies our coarse-grained, classical design. The kinetics of catalyzed fuel reactions must be sensitive to the proximity of the shuttling ring. In our model, that need is satisfied by introducing intermolecular interactions between the shuttling ring and the components of the model fuel. As described briefly in Fig. 1 and more thoroughly in Methods, we construct a motor from two interlocking rings of particles. The smaller green ring has attractive interactions with orange binding sites on the larger ring. The particles of FTC, ETC, and C molecules have interactions that encourage the FTC → ETC + C reaction at the white catalytic sites. Following the reaction, the C particle remains at the catalytic site as a blocking group, which the shuttling ring cannot diffuse past. 
Proximity of the shuttling ring to a catalytic site decreases the rate of catalysis relative to the distal catalytic site. This imbalance of rates, along with the nonequilibrium replenishment of FTC and removal of ETC and C, yields net directed motion when the pair potentials are suitably tuned, a point we return to in a more detailed discussion of the mechanism. ### Dynamics The dynamics of the fueled motor were evolved in time by mixing the Langevin dynamics of the particles with GCMC chemostats that maintained the NESS. The Langevin equations of motion for each particle i are $${\dot{{{{{{{{\bf{r}}}}}}}}}}_{i} =\frac{{{{{{{{{\bf{p}}}}}}}}}_{i}}{{m}_{i}}\hfill\\ {\dot{{{{{{{{\bf{p}}}}}}}}}}_{i}(t) =-\nabla U({{{{{{{{\bf{r}}}}}}}}}_{i}(t))-{\frac{\gamma }{m}}_{i}{{{{{{{{\bf{p}}}}}}}}}_{i}(t)+{{{{{{{{\boldsymbol{\xi }}}}}}}}}_{i}(t),$$ (2) where γ is the friction coefficient, pi is the momentum of particle i, ri is the position of particle i, mi is the mass of that particle, U is the potential energy, and ξi is white noise with $$\langle {{{{{{{{\boldsymbol{\xi }}}}}}}}}_{i} \rangle ={{{{{{{\bf{0}}}}}}}}$$ and $$\langle {{{{{{{{\boldsymbol{\xi }}}}}}}}}_{i}(t){{{{{{{{\boldsymbol{\xi }}}}}}}}}_{i}(t^{\prime} ) \rangle =2\gamma {k}_{{{{{{{{\rm{B}}}}}}}}}T\delta (t-t^{\prime} )$$ at temperature T. All model parameters are reported in non-dimensional units as described in Methods. The simulation box consists of two concentric cubes with an inner cube and an outer cube, shown in Fig. 1. GCMC moves occur between the inner and outer boxes and serve to insert and remove FTC, ETC, and C from the system. The motor itself (the two interlocked rings) is confined to the inner box with a wall potential, but the wall potential is not applied to the FTC, ETC, or C molecules, which freely diffuse between the two boxes and can cross the periodic boundaries of the outer box. 
Since GCMC insertions and deletions occur in the space between the inner and outer box and the motor is confined to the inner box, the motor will not be directly affected by the GCMC moves. However, the motor does feel the indirect effect of the nonequilibrium concentrations since the timescale for diffusion is fast compared to the lifetime of the FTC. After every 100 time steps of Langevin dynamics, a GCMC trial move is chosen uniformly from six options—an insertion or deletion of the three species: FTC, ETC, or C. These moves are conditionally accepted with a Metropolis factor that depends on the set chemical potentials of the three species and their instantaneous concentrations. As described in Methods and the Supplementary Information (SI), the GCMC procedure must account for the internal degrees of freedom of the FTC and ETC clusters51,52. Due to those internal degrees of freedom, the GCMC acceptance probabilities directly depend on $$\mu'\equiv \mu -A^{0}$$, the applied external chemical potential less the standard-state Helmholtz free energy. The strongly driven regime corresponds to having a high $$\mu'_{\mathrm{FTC}}$$ but a low $$\mu'_{\mathrm{ETC}}$$ and $$\mu'_{\mathrm{C}}$$. Under those conditions, the typical process starts by inserting FTC into the outer box. This FTC diffuses into the inner box where it interacts with the motor and gets converted into ETC and C. These waste products then diffuse back into the outer box where they are removed by the GCMC chemostats. ### Bias, current, and coupling efficiency The motor and fuel models are characterized by numerous parameters controlling the form and strength of pairwise interactions. We first discovered parameters for the fuel that resulted in the desired metastability of the FTC state.
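The insertion/deletion moves follow the standard grand-canonical Metropolis prescription. Here is a hedged sketch for a single structureless species (ideal-gas-like, with the thermal wavelength absorbed into μ and the cluster internal free energies ignored, unlike the full treatment in the paper's Methods):

```python
import math
import random

random.seed(2)

def accept_insertion(N, V, beta, mu, dU):
    # Metropolis factor for inserting one particle (dU = energy change).
    acc = (V / (N + 1)) * math.exp(beta * mu - beta * dU)
    return random.random() < min(1.0, acc)

def accept_deletion(N, V, beta, mu, dU):
    # Metropolis factor for deleting one particle.
    acc = (N / V) * math.exp(-beta * mu - beta * dU)
    return random.random() < min(1.0, acc)

# For an ideal gas (dU = 0), the chemostat drives <N> toward V * exp(beta*mu).
N, V, beta, mu = 0, 100.0, 1.0, 0.0
history = []
for step in range(100000):
    if random.random() < 0.5:
        if accept_insertion(N, V, beta, mu, 0.0):
            N += 1
    elif N > 0 and accept_deletion(N, V, beta, mu, 0.0):
        N -= 1
    history.append(N)
avg_N = sum(history[50000:]) / 50000.0   # should settle near 100
```

Running three such chemostats at unequilibrated chemical potentials, as the paper does, is what holds the FTC/ETC/C concentrations away from the equilibrium of Eq. (1).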
Subsequently, we scanned parameter spaces to identify the interactions between motor and fuel that would reliably generate current, landing upon two sets of interactions, herein referred to as Motor I and Motor II. These two motors differ only subtly; Motor I features slightly stronger attractions between the shuttling ring and the binding sites and also between the C particles and the catalytic site. The full parameterization of both motors can be found in Appendix D of the SI. The behavior of Motor II in an underdamped regime (γ = 0.5) with a moderate driving force is shown in Fig. 2 (see also Supplementary Movie 1). The NESS fuel concentration only slightly alters the distribution of the motor configurations relative to equilibrium with no FTC, ETC, or C present. In both cases, the steady-state location of the shuttling ring concentrates around the binding sites. Despite that similarity between the equilibrium and NESS stationary distributions, the NESS dynamical behavior deviates markedly from equilibrium. In the presence of the NESS driving, the total numbers of clockwise (CW) and counterclockwise (CCW) cycles do not balance, corresponding to net current. Figure 2 also reflects two important ways in which the present model motor differs from biological machines like ATP synthase. Firstly, our motor is fairly loosely coupled—Fig. 2b, c shows that a single net cycle requires approximately 35 catalyzed FTC → ETC + C reactions. Secondly, the model fuel is less deeply metastable than ATP. Even in the absence of a motor’s catalytic site, FTC can degrade on simulation timescales. As such, Fig. 2c distinguishes between catalyzed decompositions that occur in proximity to the catalytic sites and the total decompositions that could occur elsewhere. In Fig. 3, we report how adding more fuel increases the CW bias, decreases the current, and decreases the coupling.
Those three measures of motor performance were calculated by monitoring the number of CW and CCW shuttling ring cycles, nCW and nCCW, respectively. If the motor's goal is to generate CW cycles then one measure of accuracy is the CW bias, the fraction of cycles in the CW direction: $$\frac{n_{\rm CW}}{n_{\rm CW}+n_{\rm CCW}}.$$ (3) The current, the net cycles per time, is similarly computed from nCW and nCCW as $$\frac{n_{\rm CW}-n_{\rm CCW}}{t_{\rm obs}},$$ (4) where tobs = Nsteps Δt is the observed simulation time and Nsteps is the number of simulation time steps of size Δt. Finally, the coupling between catalyzed reactions and net CW cycles is $$\frac{n_{\rm CW}-n_{\rm CCW}}{n_{\rm cat}},$$ (5) where ncat counts the number of FTC decompositions occurring with center of mass within 2 units of a catalytic particle. Both motors exhibit similar responses to changes in FTC concentration, illustrating a tradeoff: greater bias comes at the expense of lower current and lower coupling. We anticipated a maximum coupling of 0.5, corresponding to a tightly coupled cycle with one catalyzed reaction at each catalytic site. Neither motor achieves that limit. Rather, they are loosely coupled, with catalyzed reactions probabilistically gating diffusion and inducing no major conformational changes in the motor itself. Though the coupling efficiency of these motors is about one order of magnitude below the maximum, we find it encouraging that such a crudely designed toy model can nevertheless convert roughly 1/10 of the catalyzed reactions into directed current. Since we have described simulations in the underdamped regime (γ = 0.5), it is natural to wonder if the motor's current is dependent on inertia. 
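The three measures defined in Eqs. (3)–(5) can be computed directly from the recorded counts. A minimal Python sketch (the function name is ours, not from the paper):

```python
def motor_metrics(n_cw, n_ccw, n_cat, t_obs):
    """Bias (Eq. 3), current (Eq. 4), and coupling (Eq. 5) from the
    counts of CW cycles, CCW cycles, and catalyzed reactions
    observed over a time t_obs."""
    bias = n_cw / (n_cw + n_ccw)
    current = (n_cw - n_ccw) / t_obs
    coupling = (n_cw - n_ccw) / n_cat
    return bias, current, coupling
```

For instance, 40 CW cycles, 5 CCW cycles, and 350 catalyzed reactions over tobs = 1000 give a bias of about 0.89, a current of 0.035 cycles per time unit, and a coupling of 0.1.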
Figure 4 shows that the current generation indeed persists in an overdamped regime (γ = 10) more reflective of the viscous low Reynolds number environment experienced by in vivo biological motors. While increased damping reduces the current by an order of magnitude, it also causes the CW bias of both motors to increase, with Motor I approaching 100%. ### An eight-state rate model To rationalize the dynamics of the explicit NESS simulations, it is productive to analyze the rates for transitioning between discrete coarse-grained states. Inspired by a simple six-state Markov model46,47 that captures the mechanism of the Wilson et al. motor15, we harvested our simulation data to collect statistics of the transition times between the eight coarse-grained states depicted in Fig. 5. Those states are determined by three bits of information: (1) which half of the large ring is nearest the shuttling ring center of mass, (2) whether the first catalytic site is blocked, and (3) whether the second catalytic site is blocked, with blockage defined as having at least one free C within 1.2 distance units of a catalytic site’s middle particle. Due to the symmetry of the problem, we focus on seven rates for transitions between these eight states: kattach,close and kcleave,close for addition and removal of a blocking group at the catalytic site nearest the shuttling ring, kattach,far and kcleave,far for the addition and removal rates from the catalytic site farthest from the shuttling ring, kCW and kCCW for CW and CCW rotations of the shuttling ring when one catalytic site has a blocking group, and ksym for rotations of the shuttling ring when no blocking groups are present. The rates kCW and kCCW unambiguously imply a direction of shuttling ring motion, while ksym results in an even split between CW and CCW. At each NESS simulation time step, the motor’s configuration is classified as one of the eight states. 
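Since each coarse-grained state is fixed by three bits of information, the per-time-step classification can be as simple as the following sketch (the mapping from bits to state indices is our own choice; the labels in Fig. 5 may be ordered differently):

```python
from itertools import product

def motor_state(near_half, site1_blocked, site2_blocked):
    """Encode the three bits of coarse-grained information,
    each 0 or 1, as a state index in 1..8."""
    return 1 + ((near_half << 2) | (site1_blocked << 1) | site2_blocked)

# All eight combinations map to distinct states.
all_states = sorted(motor_state(*bits) for bits in product((0, 1), repeat=3))
```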
If one makes a Markovian assumption, the rate for the transition from coarse-grained state A to state B is $$k_{AB}=\frac{1}{p_{\rm ss}(A)}\frac{N_{AB}}{t_{\rm obs}}.$$ (6) Here pss(A) is the steady state probability of being in state A and NAB is the number of transitions from A to B observed in time tobs. To extract the best rate estimate, transitions that are statistically equivalent by symmetry were combined, e.g., $$k_{\rm cleave,far}=\frac{1}{t_{\rm obs}}\frac{N_{21}+N_{34}+N_{56}+N_{87}}{p_{\rm ss}(2)+p_{\rm ss}(3)+p_{\rm ss}(5)+p_{\rm ss}(8)}$$. Because we simulated a soft system with finite time steps, a transition between two disconnected states of Fig. 5 was very occasionally observed, but we neglected these transitions when constructing the rate model. To analyze how the interplay between rates generates current, it is productive to decompose the eight-state rate model into four fundamental cycles (FC1–FC4), shown in Fig. 5. Any possible cycle on the graph can be formed by a linear combination of this (non-unique) set of fundamental cycles. Only FC1 gates shuttling ring diffusion into directed motion at both catalytic sites. Traversing FC1 in the CW direction implies that the shuttling ring completes one CW cycle. A CW traversal around FC2 or FC3 similarly corresponds to CW shuttling ring cycling. However, the CW bias is only half that of FC1 because the shuttling ring direction is ambiguous when FC2 and FC3 pass through the unblocked states 4 and 6. The final cycle, FC4, is a futile cycle. Despite burning fuel to traverse FC4, no net cycles of the shuttling ring are generated. An advantage of the fundamental cycle perspective is that the direction of the steady state currents follows from the ratio of rates around the closed fundamental cycles. 
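Equation (6), together with the symmetry pooling used for kcleave,far, can be written as a small estimator. This is a sketch under our own data layout (counts keyed by (A, B) pairs), not code from the paper:

```python
def pooled_rate(counts, p_ss, pairs, t_obs):
    """Symmetry-pooled rate estimate generalizing Eq. (6): pooled
    transition counts N_AB divided by t_obs times the pooled
    steady-state probabilities of the source states."""
    n_total = sum(counts.get(pair, 0) for pair in pairs)
    p_total = sum(p_ss[a] for a, _ in pairs)
    return n_total / (t_obs * p_total)

# The four symmetry-equivalent transitions pooled into k_cleave,far.
CLEAVE_FAR_PAIRS = [(2, 1), (3, 4), (5, 6), (8, 7)]
```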
For example, fundamental cycles FC2 and FC3 share the ratio $$R=\frac{k_{\rm attach,far}}{k_{\rm attach,close}}\frac{k_{\rm cleave,close}}{k_{\rm cleave,far}}\frac{k_{\rm CW}}{k_{\rm CCW}}.$$ (7) We call the logarithm of this ratio the cycle affinity $$\mathcal{A}=\log R$$, and note that the steady state current around a FC must share the same sign as $$\mathcal{A}$$53. Because all four FCs have a cycle affinity that is a non-negative multiple of $$\mathcal{A}$$, the steady state current's sign is inherited from the sign of $$\mathcal{A}$$. Put more succinctly in terms of R, if R > 1 the shuttling ring will move CW and if R < 1 the shuttling ring will move CCW. ### Clockwise directionality We develop our understanding of the motor's CW motion by building off an equilibrium reference, for which R = 1 is required by time-reversal symmetry. There are multiple ways to construct an equilibrium reference. For example, we could simulate the motor's equilibrium behavior when 〈NFTC〉 = 〈NETC〉 = 〈NC〉 = 0. With no C particles that equilibrium would confine the motor to states 4 and 6. Instead, we constructed a reference state with non-vanishing 〈NC〉 and with 〈NFTC〉 ≈ 〈NETC〉 ≈ 0 by setting $$\mu'_{\rm C}=-3$$ and $$\mu'_{\rm FTC}=\mu'_{\rm ETC}=-10$$. In this way all eight coarse-grained states and all transitions are observed in the reference (which is essentially equivalent to an equilibrium simulation with a single $$\mu'_{\rm C}=-3$$ chemostat). We bias away from this equilibrium by increasing $$\mu'_{\rm FTC}$$. Figure 6a shows that only the two rate constants regulating the blocking group attachment (kattach,close and kattach,far) respond strongly to the fuel injection. 
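The ratio of Eq. (7) and the sign rule that follows from the cycle affinity can be captured in a few lines (a sketch with our own function names):

```python
import math

def cycle_ratio(k_attach_far, k_attach_close,
                k_cleave_close, k_cleave_far, k_cw, k_ccw):
    """The rate ratio R of Eq. (7), shared by FC2 and FC3."""
    return ((k_attach_far / k_attach_close)
            * (k_cleave_close / k_cleave_far)
            * (k_cw / k_ccw))

def current_direction(r_ratio):
    """The cycle affinity A = log R fixes the sign of the steady
    state current: R > 1 gives CW cycling, R < 1 gives CCW."""
    affinity = math.log(r_ratio)
    if affinity > 0:
        return "CW"
    if affinity < 0:
        return "CCW"
    return "none"
```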
Those attachment rates are functions of the fuel concentration, as one might expect from mass action kinetics when FTC reacts at the catalytic sites to leave behind C as a blocking group. Across the range of FTC concentration, the other five rates behave effectively the same as in the 〈NFTC〉 = 0 equilibrium reference state. To emphasize that the attachment rates are functions of fuel concentration, we adopt the notation kattach,*(〈NFTC〉). No argument is needed for the other rates because those rates are effectively independent of FTC concentration. Since R = 1 at equilibrium, $$\frac{k_{\rm attach,close}(0)}{k_{\rm attach,far}(0)}=\frac{k_{\rm cleave,close}\,k_{\rm CW}}{k_{\rm cleave,far}\,k_{\rm CCW}},$$ (8) allowing the NESS R to be well approximated in terms of attachment rates alone: $$R\approx R_{\rm approx}=\frac{k_{\rm attach,far}(\langle N_{\rm FTC}\rangle)}{k_{\rm attach,far}(0)}\frac{k_{\rm attach,close}(0)}{k_{\rm attach,close}(\langle N_{\rm FTC}\rangle)}.$$ (9) Figure 6a shows that adding fuel increases both attachment rates, near and far from the shuttling ring, but the speed-ups are not equal. Because kattach,far increases more steeply than kattach,close, R > 1 and the current is CW. Our analysis of R shows that FC1, FC2, and FC3 all contribute to CW current, but FC1 contributes more strongly. By also monitoring the NESS populations of the eight states (Fig. 6b), we show that increasing fuel takes population away from states 4 and 6, which lie on FC2 and FC3, but not FC1. The increase in CW bias with 〈NFTC〉 in Fig. 3a can be viewed as a consequence of the fully ratcheted FC1 cycle becoming dominant. We note that even with our analysis, it is not obvious how the geometry of the design in Fig. 1 translates into the CW currents. 
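Equation (9) reduces the direction question to the fold changes of the two attachment rates upon fueling. A sketch:

```python
def r_approx(k_att_far_fuel, k_att_far_eq, k_att_close_fuel, k_att_close_eq):
    """Eq. (9): approximate R from the fuel-induced speed-ups of the
    two attachment rates; the equilibrium constraint of Eq. (8)
    eliminates the five fuel-insensitive rates."""
    return ((k_att_far_fuel / k_att_far_eq)
            * (k_att_close_eq / k_att_close_fuel))
```

If, for illustration, fuel accelerates attachment fourfold at the far site but only twofold at the close site, Rapprox = 2 > 1 and the current is CW.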
The fuel-dependent attachment rates both increase with FTC concentration, and directionality is determined by which of those rates rises more rapidly with added NFTC. We anticipate it will be possible to preserve the motor geometry and design changes to the motor's pairwise potentials to yield R < 1 and CCW cycles. ### Thermodynamics We have described an entirely kinetic analysis, a discussion that suffices to explain the motor's directionality. If one wants to understand the thermodynamic cost of fueling the motor, however, care must be taken in connecting the kinetics with thermodynamics. At the fine-grained level of the NESS simulations, each step is microscopically reversible: both the chemostat GCMC moves and the Langevin time steps may be executed in reverse, backtracking and undoing the forward dynamics. At this microscopic level, the ratio of probabilities of forward and reverse steps measures the increasing entropy of the ideal particle reservoirs. We caution, however, that the link connecting forward and reversed rates to thermodynamics is more complicated upon coarse graining the configuration space (e.g., into the 8-state kinetic model). In that picture, it becomes important that the forward and reversed transitions between coarse-grained states often proceed via distinct pathways. To make this point more explicit, we elaborate upon the transitions between states 1 and 4, characterized simply by kcleave,close and kattach,close in Fig. 5. In equilibrium simulations with only C and no tetrahedral clusters, the pathways for cleavage and attachment are identical, but simulations with FTC reveal differing pathways for cleavage and attachment (see Supplementary Movies 2 and 3). A minimal model to address the motor's thermodynamics must separate the pathways into the equilibrium-like process mediated by the C reservoir and an additional pathway mediated by the FTC and ETC reservoirs. 
In light of these distinct mechanisms, we note that the previously described affinities $$\mathcal{A}$$ are cycle affinities of the Markov model and not thermodynamic affinities, which relate to the entropy produced by the motors. That physical entropy production can dramatically exceed the Markov model's entropy production when the rates of distinct pathways are clumped together as in Fig. 5. Consider, for example, Fig. 7, which illustrates a refinement to the kinetic model that resolves whether cleavage and attachment events were mediated by C alone ($$k_{\rm attach}^{\rm C}$$ and $$k_{\rm cleave}^{\rm C}$$) or by a tetrahedral cluster reaction ($$k_{\rm attach}^{\rm TC}$$ and $$k_{\rm cleave}^{\rm TC}$$). The refinement does not alter the rate of shuttling ring current provided $$k_{\rm attach}=k_{\rm attach}^{\rm C}+k_{\rm attach}^{\rm TC}$$ and $$k_{\rm cleave}=k_{\rm cleave}^{\rm C}+k_{\rm cleave}^{\rm TC}$$. Though the current is insensitive to the refinement, the two Markov models produce entropy at different rates. The Fig. 7 Markov model includes additional cycles from state 1 to state 4 and back via the other pathway, and the entropy production associated with those cycles goes undetected by the Fig. 5 model. In other words, coarse graining yields a model that produces less entropy than the fine-grained model, a well-known effect of the data processing inequality that applies whether the coarse graining combines together microstates or pathways54,55. It is therefore notable that our simulations give access to the reversibility of the trajectories in the full state space, not just the reversibility of some reduced Markov models. 
We anticipate that capability will be particularly beneficial for future studies of the thermodynamic performance. ## Discussion The models and methods presented here demonstrate a computational strategy to study how pairwise interactions give rise to dynamical function by simulating Langevin dynamics of a motor model simultaneously with GCMC chemostats. One can imagine carrying out similar, albeit vastly more expensive, simulations using more detailed, realistic models of chemical motors, but we highlight that our minimal toy model offers a tractable playground for exploring principles. It provides practical access to calculations of efficiency, accuracy, speed, and entropy production in a nontrivial particle-based model, opening the door to further explorations of thermodynamic and kinetic bounds56,57 that limit what sort of autonomous, steady-state motors can be designed. Those studies of the interplay between fluctuations and dissipation are commonly applied to abstract nonequilibrium Markov jump models without explicitly specifying the microscopic origin of the rates. We anticipate that the stochastic thermodynamics community will benefit from this toy model that enables an explicit connection between pair potentials and the mesoscopic transition rates. We also anticipate that our approach will be useful in testing proposed improvements to the motor’s design58. More concretely, our work should aid in the design and implementation of autonomous mesoscale machines. While this work was inspired by a molecular-scale motor15, the pairwise potentials we use could more easily be built from mesoscale colloid constructions, where interactions between subunits can be tuned59. Significantly, we demonstrated that the motor maintains directional current in the overdamped regime, which is relevant to such colloidal diffusion. 
Although we do not expect our particular tetrahedral cluster fuel to be the most reasonable design on which to build an experimentally accessible mesoscale machine, we do hope the illustration and the methods will encourage more designs that will soon be experimentally realized. ## Methods ### Model details We used a modified Lennard–Jones (LJ) potential for all non-bonded interactions between particles in the system. Whereas the standard WCA potential60 retains both the r−12 and r−6 contributions within its repulsive regime, our modified pairwise LJ potential introduces separate control over the strength of the r−12 repulsive and r−6 attractive terms: $$U_{\rm LJ}({\bf r}_{ij})=4\epsilon_{{\rm R},ij}\left(\frac{\sigma_{ij}}{|{\bf r}_{ij}|}\right)^{12}-4\epsilon_{{\rm A},ij}\left(\frac{\sigma_{ij}}{|{\bf r}_{ij}|}\right)^{6},$$ (10) where σij is the average of the radii of particles i and j. The strength of the short-ranged repulsive interaction between particles i and j is tuned by ϵR,ij, while that of the long-range attractive interaction is tuned by ϵA,ij, as in61. All particles in the system are volume-excluding (ϵR,ij > 0), but only some pairwise interactions are attractive (ϵA,ij ≥ 0). The full set of interaction parameters for each type of particle in the system is given in Supplementary Table 2. The FTC fuel molecules consist of a four-particle tetrahedron bound along the edges (blue) and a free central particle (red), depicted in Fig. 1. The edges of the tetrahedron are held together with harmonic interactions that seek to minimize the distance rij between particles i and j: $$U_{\rm harmonic}({\bf r}_{ij})=\frac{1}{2}k_{ij}{\bf r}_{ij}^{2}.$$ (11) The values of the spring constants kij are found in Supplementary Table 2. 
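Equation (10) is straightforward to implement; a sketch in reduced units:

```python
def u_mod_lj(r, sigma, eps_r, eps_a):
    """Modified Lennard-Jones pair energy of Eq. (10), with the
    r**-12 repulsion scaled by eps_r and the r**-6 attraction
    scaled by eps_a independently."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps_r * s6 * s6 - 4.0 * eps_a * s6
```

Setting eps_a = 0 leaves a purely repulsive pair, while eps_r = eps_a recovers the standard LJ form, whose minimum of depth eps_a sits at r = 2^{1/6} σ.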
The particle types of the tetrahedron are labeled as TET1, TET2, TET3, and TET4, while the central particle is called CENT. Pairwise interactions between all of these particle types are purely repulsive (ϵA,ij = 0). This ensures that FTC is a metastable, kinetically trapped state and it also ensures that FTC, ETC, and C do not aggregate in the simulation cell. Progress along the FTC → ETC + C reaction pathway is tracked by measuring r, the distance between the C particle and the center of mass of the four tetrahedron particles. In non-dimensional units, the cluster is in the FTC state when r ≤ 0.25, it is in the ETC + C state when r ≥ 0.8, and it is in an intermediate transition regime, visited fleetingly, when 0.25 < r < 0.8. The motor model is composed of two interlocked rings. A large ring consisting of Nlarge = 30 connected beads functions as a track for a smaller shuttling ring (green) with Nshuttle = 12 beads to diffuse or shuttle around, as depicted in Fig. 1. The shuttling ring is made up of a single-particle type, labeled SHUTTLE. The large ring is made up of three particle types: INERT particles that are purely volume-excluding (black), BIND particles that have attractive interactions with the shuttling ring (orange), and catalytic particles, labeled CAT1, CAT2, CAT3 (white), that have attractive interactions with TET1, TET2, TET3, TET4, and CENT particles to facilitate the decomposition of FTC to ETC + C. The ring is arranged so that a three-particle catalytic site (CAT2-CAT1-CAT3 in CW order) is on the CW side of a single-particle binding site, followed by a set of 11 inert particles before the binding/catalytic motif repeats on the opposite side of the large ring. The binding sites, located at large ring indices 0 and 15, are analogous to the fumaramide residues of the Wilson et al. motor15. The catalytic sites, located at large ring indices 1–3 and 16–18, are analogous to the hydroxy groups of the Wilson et al. motor. 
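The reaction-progress classification described above amounts to thresholding the single coordinate r (thresholds in non-dimensional units, as stated in the text):

```python
def cluster_state(r):
    """Classify a tetrahedral cluster by r, the distance between the
    C particle and the center of mass of the four tetrahedron
    particles."""
    if r <= 0.25:
        return "FTC"
    if r >= 0.8:
        return "ETC+C"
    return "transition"  # fleetingly visited intermediate regime
```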
The attractive interaction between C (CENT) particles and the middle catalytic particle (CAT1) is particularly strong so as to hold the C particle near the catalytic site as a blocking group for the shuttling ring after a catalyzed reaction has occurred. Those blocking groups are especially effective at preventing the diffusion of the shuttling ring because C particles also have particularly strong repulsions with the shuttling ring particles (SHUTTLE). The rings have intramolecular interactions similar to those used for coarse-grained polymer models where bond and angle potentials maintain geometry and the modified LJ potential of Eq. (10) serves to include volume exclusion. The bonded interactions between adjacent beads in the motor rings are given by a finitely extensible nonlinear elastic (FENE) potential: $$U_{\rm FENE}({\bf r}_{ij})=-\frac{1}{2}k_{{\rm F},ij}{\bf r}_{ij}^{2}\log\left[1-\left(\frac{|{\bf r}_{ij}|}{r_{\max,ij}}\right)^{2}\right].$$ (12) Here rij is the displacement vector between particles i and j, kF,ij is the FENE force constant, and $$r_{\max,ij}$$ is the maximum extension between the particle pair. Groups of three adjacent ring particles also have angular interactions to maintain the overall circular geometry of the ring: $$U_{\rm angle}(\theta_{ijk})=\frac{1}{2}k_{{\rm A},ijk}\left(\theta_{ijk}-\theta_{0,ijk}\right)^{2},$$ (13) where i is the index of the middle particle of the three adjacent i, j, and k particles, kA,ijk is the angular force constant, θijk is the angle formed by the three particles, and θ0,ijk is the equilibrium angle. For the shuttling ring $$\theta_{0,ijk}=\pi\left(1-\frac{2}{N_{\rm shuttle}}\right)$$ and for the large ring $$\theta_{0,ijk}=\pi\left(1-\frac{2}{N_{\rm large}}\right)$$. 
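The two intramolecular ring energies can be sketched directly from Eqs. (12) and (13) as written:

```python
import math

def u_fene(r, k_f, r_max):
    """FENE bond energy as written in Eq. (12); the logarithm
    diverges as the separation r approaches r_max."""
    return -0.5 * k_f * r ** 2 * math.log(1.0 - (r / r_max) ** 2)

def u_angle(theta, k_a, theta_0):
    """Harmonic angle energy of Eq. (13)."""
    return 0.5 * k_a * (theta - theta_0) ** 2

def ring_theta_0(n_beads):
    """Equilibrium angle pi * (1 - 2 / N) for an N-bead ring."""
    return math.pi * (1.0 - 2.0 / n_beads)
```

For the 12-bead shuttling ring this gives θ0 = 5π/6, and for the 30-bead large ring θ0 = 14π/15.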
The bond and angle parameters as well as the modified LJ parameters for all of the motor particles are found in Supplementary Tables 2 and 3. The shuttling ring and large ring are placed in an interlocked configuration. No bonded (FENE or angular) interactions connect the two rings as they can be held in an interlocked state through the volume exclusion of the LJ interaction alone. The shuttling ring is therefore free to diffuse around the large ring. ### Method details To propagate the system dynamics forward in time we solve Eq. (2) numerically with a time step of Δt = 5 × 10−3 for some number of time steps Nsteps using the integrator of Athènes and Adjanor62: $$\begin{array}{rcl}{\bf p}_{i}^{j+\frac{1}{2}}&=&{\bf p}_{i}^{j}e^{-\frac{\gamma{\Delta}t}{2m_{i}}}+{\bf f}_{i}^{j}\frac{{\Delta}t}{2}+{\boldsymbol\eta}_{i}^{j+\frac{1}{2}}\\ {\bf r}_{i}^{j+1}&=&{\bf r}_{i}^{j}+{\bf p}_{i}^{j+\frac{1}{2}}\frac{{\Delta}t}{m_{i}}\\ {\bf p}_{i}^{j+1}&=&\left[{\bf p}_{i}^{j+\frac{1}{2}}+{\bf f}_{i}^{j+1}\frac{{\Delta}t}{2}\right]e^{-\frac{\gamma{\Delta}t}{2m_{i}}}+{\boldsymbol\eta}_{i}^{j+1},\end{array}$$ (14) where $${\bf f}_{i}=-\nabla U({\bf r}_{i})$$ is the force on particle i, $${\bf r}_{i}^{j}\equiv{\bf r}_{i}(j{\Delta}t)$$ is the position of particle i at time jΔt, $${\bf p}_{i}^{j}\equiv{\bf p}_{i}(j{\Delta}t)$$ is the momentum of particle i at time jΔt, and each ηi is a random vector with components drawn from a zero-mean Gaussian with variance $$m_{i}(1-\exp(-\gamma{\Delta}t/m_{i}))k_{\rm B}T$$. Other choices of numerical integrator are possible63. 
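For concreteness, one update of the Eq. (14) integrator for a single particle in one dimension (a sketch; the production code is vectorized over all particles in three dimensions):

```python
import math
import random

def langevin_step(r, p, m, gamma, dt, kT, force, rng=random):
    """One time step of the Eq. (14) integrator. `force` maps a
    position to the force f = -dU/dr; each noise term is Gaussian
    with variance m * (1 - exp(-gamma * dt / m)) * kT."""
    decay = math.exp(-gamma * dt / (2.0 * m))
    std = math.sqrt(m * (1.0 - math.exp(-gamma * dt / m)) * kT)
    p_half = p * decay + force(r) * dt / 2.0 + rng.gauss(0.0, std)
    r_new = r + p_half * dt / m
    p_new = (p_half + force(r_new) * dt / 2.0) * decay + rng.gauss(0.0, std)
    return r_new, p_new
```

At kT = 0 with no force, the momentum decays by exp(−γΔt/m) per step, as expected from the two half-step friction factors.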
We performed all simulations in non-dimensional form with characteristic length given by the LJ radius of an INERT particle, characteristic energy given by the repulsive strength of the INERT–INERT interaction, and characteristic mass given by the mass of an INERT particle. All of these values were then set to unity, i.e., σINERT = 1, ϵR,INERT–INERT = 1, and mINERT = 1, respectively. All other particle masses were also set to unity, and the only particles with non-unit radii were CENT particles with σCENT = 0.45. We report data for simulations with kBT = 0.5 and with γ = 0.5, except where otherwise noted. For completeness, the full set of particle mass and size parameters is given in Supplementary Table 1. We performed a GCMC move every 100 Langevin time steps in order to maintain the system at a steady state concentration of FTC, ETC, and C. The GCMC moves were conditionally accepted so the chemostatted region of space would target the grand canonical distribution $$P({\bf r},{\bf p})=\frac{1}{\Xi}\frac{e^{\beta\mu_{\rm FTC}N_{\rm FTC}+\beta\mu_{\rm ETC}N_{\rm ETC}+\beta\mu_{\rm C}N_{\rm C}-\beta H({\bf r},{\bf p})}}{N_{\rm FTC}!\,N_{\rm ETC}!\,N_{\rm C}!},$$ (15) where r and p are vectors of fluctuating length containing the coordinates for each copy of each species and Ξ is the grand canonical partition function. The number of copies of each species (NFTC, NETC, and NC) can be viewed as functions of r and p, as can the total energy H(r, p), the kinetic energy K(p), and the potential energy U(r). Though we are ultimately interested in unlabeled particles, it is simplest to utilize unphysical labels for accounting purposes. Marginalizing over all equally probable permutations of labels gives the density for unlabeled particles, which lacks the factorial denominator of Eq. (15). 
In practice, the GCMC method described here differs slightly from a standard implementation18 since two of the species coupled to external chemical potentials (FTC and ETC) have internal degrees of freedom. Each GCMC chemostat move begins by randomly and uniformly selecting which of the three species to act on and whether to add or remove that species. The chemostat only acts on the outer volume of Fig. 1, and all copies of the chosen species occupying that outer volume are equally likely to be removed in the generated trial move. In the usual Metropolis manner, that trial removal of a copy of species i is conditionally accepted with probability $$P_{i,{\rm removal}}^{\rm acc}({\bf r},{\bf p}\to{\bf r}',{\bf p}')=\min\left[1,N_{i}({\bf r})\,e^{-\beta(U({\bf r}')-U({\bf r})+U_{i}^{0})}e^{-\beta(\mu_{i}-A_{i}^{0})}\right],$$ (16) where $$U_{i}^{0}$$ is the internal potential energy of the removed cluster, $$Z_{i}^{0}$$ is the canonical partition function for a single i cluster in a box of volume V0, and $$A_{i}^{0}=-k_{\rm B}T\log Z_{i}^{0}$$ is the associated free energy. In this work, we have operated in terms of the shifted chemical potential $$\mu_{i}'\equiv\mu_{i}-A_{i}^{0}$$ so the conditional acceptance probability was computed without needing to explicitly evaluate $$A_{i}^{0}$$ for the different cluster types. We tune these shifted chemical potentials from $$\mu'=-10$$ on the low end to $$\mu'=1$$ on the high end. The moves that add a cluster are more complicated because we must first generate a configuration of the cluster52. We used Monte Carlo to pre-generate an equilibrium ensemble of 10,000 configurations each of a single FTC cluster and of a single ETC cluster. 
An addition move first uniformly selects one of those Boltzmann-distributed configurations (a step which is moot when adding C). This configuration is randomly rotated in space then randomly inserted into the chemostatted volume. Velocities for the new particles are sampled from the Boltzmann distribution to complete the generation of trial coordinates $${\bf r}'$$ and $${\bf p}'$$. Analogous to the removal moves, the addition is conditionally accepted with probability $$P_{i,{\rm addition}}^{\rm acc}({\bf r},{\bf p}\to{\bf r}',{\bf p}')=\min\left[1,\frac{1}{N_{i}({\bf r}')}e^{-\beta(U({\bf r}')-U({\bf r})-U_{i}^{0})}e^{\beta(\mu_{i}-A_{i}^{0})}\right].$$ (17) One confirmation that all three chemostats simultaneously function as desired is the demonstration of ideality in the dilute limit, discussed further in the SI. These GCMC moves only occur in the space between the inner and outer simulation boxes, depicted in Fig. 1. Our simulation boxes were concentric cubes with inner side length Linner = 30 and outer side length Louter = 34. The motor itself is confined to the inner simulation box so that its dynamics are not directly perturbed by abrupt GCMC insertions and deletions. 
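The acceptance rules of Eqs. (16) and (17), expressed with the shifted chemical potentials, reduce to short probability formulas (a sketch with our own argument names):

```python
import math

def p_accept_removal(n_i, delta_u, u0_i, mu_prime_i, beta):
    """Eq. (16): acceptance probability for removing one copy of
    species i; delta_u = U(r') - U(r) and u0_i is the internal
    energy of the removed cluster."""
    return min(1.0, n_i * math.exp(-beta * (delta_u + u0_i))
               * math.exp(-beta * mu_prime_i))

def p_accept_addition(n_i_after, delta_u, u0_i, mu_prime_i, beta):
    """Eq. (17): acceptance probability for inserting one copy of
    species i; n_i_after counts copies after the trial insertion."""
    return min(1.0, (1.0 / n_i_after) * math.exp(-beta * (delta_u - u0_i))
               * math.exp(beta * mu_prime_i))
```

Note that only the combination μ' = μ − A0 enters, which is why the standard-state free energies never need to be computed explicitly.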
The motor is confined to the inner box with an LJ wall potential: $$U_{\rm wall}({\bf r}_{i})=4\epsilon_{\rm wall}\sum_{\alpha=x,y,z}\left[\left(\frac{\sigma_{\rm wall}}{r_{\alpha,i}-\frac{1}{2}L_{\rm inner}}\right)^{12}+\left(\frac{\sigma_{\rm wall}}{r_{\alpha,i}+\frac{1}{2}L_{\rm inner}}\right)^{12}\right],$$ (18) where ri = (rx,i, ry,i, rz,i) is the position of the ith motor particle and both boxes are centered at the origin. We set ϵwall = 1 and σwall = 1. Particles of the FTC, ETC, and C molecules do not experience this wall potential and move freely between the inner and outer boxes. These species are also free to pass through the periodic boundaries of the outer box, which we implemented using the minimum image convention18. ## Data availability The data generated in this study have been deposited in a Zenodo repository under accession code https://doi.org/10.5281/zenodo.4481182. Data are available for Figs. 2, 3, 4, 6, and 7. ## Code availability The code used in this study has been deposited in a Zenodo repository under accession code https://doi.org/10.5281/zenodo.4481182. ## References 1. Howard, J., Hudspeth, A. & Vale, R. Movement of microtubules by single kinesin molecules. Nature 342, 154–158 (1989). 2. Finer, J. T., Simmons, R. M. & Spudich, J. A. Single myosin molecule mechanics: piconewton forces and nanometre steps. Nature 368, 113–119 (1994). 3. Brown, A. I. & Sivak, D. A. Theory of nonequilibrium free energy transduction by molecular machines. Chem. Rev. 120, 434–459 (2019). 4. Kolomeisky, A. B. & Fisher, M. E. Molecular motors: a theorist’s perspective. Annu. Rev. Phys. Chem. 58, 675–695 (2007). 5. Jülicher, F., Ajdari, A. & Prost, J. Modeling molecular motors. Rev. Mod. Phys. 69, 1269 (1997). 6. Mugnai, M. L., Hyeon, C., Hinczewski, M. & Thirumalai, D. 
## Acknowledgements

The authors gratefully acknowledge productive conversations with Hadrien Vroylandt, Geyao Gu, and Rueih-Sheng Fu. Research reported in this publication was supported in part by the International Institute for Nanotechnology at Northwestern University and in part by the Gordon and Betty Moore Foundation through Grant No. GBMF10790.

## Author information

### Contributions

A.A. and T.R.G. jointly designed the study, conducted the simulations, analyzed the data, and prepared the manuscript.

### Corresponding author

Correspondence to Todd R. Gingrich.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Peer review

### Peer review information

Nature Communications thanks Martin Bier and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Albaugh, A., Gingrich, T.R. Simulating a chemically fueled molecular motor with nonequilibrium molecular dynamics. Nat Commun 13, 2204 (2022). https://doi.org/10.1038/s41467-022-29393-3
# Adding a Label to a Message using Message ID for Gmail

These are sample scripts for adding a label to a message using a message ID in Gmail.

## Sample 1

This sample adds a label to a thread using a message ID. In this case, all messages in the thread get the label. Even if you add a label to a single message in the thread using addLabel(), all messages in the thread get the label, because addLabel() can only be used on the thread.

var messageId = "#####";
var label = "samplelabel";
// Assumes the label named above already exists in the account.
GmailApp.getMessageById(messageId).getThread().addLabel(GmailApp.getUserLabelByName(label));

## Sample 2

If you want to give a label to only one of the messages in a thread, please use this. This sample adds a label to a single message using its message ID; only the message with that ID gets the label.

var messageId = "#####";
// Requires the Advanced Gmail Service; the label ID below is a placeholder.
Gmail.Users.Messages.modify({addLabelIds: ["#####"]}, "me", messageId);
### changeset 1509:dbb461e55eda

Document tutorial features

author Adam Chlipala
Sun, 17 Jul 2011 13:48:00 -0400
d236dbf1b3e3 4aa3b6d962c8
CHANGELOG doc/manual.tex
2 files changed, 27 insertions(+), 0 deletions(-)

--- a/CHANGELOG	Sun Jul 17 13:34:41 2011 -0400
+++ b/CHANGELOG	Sun Jul 17 13:48:00 2011 -0400
@@ -1,3 +1,12 @@
+========
+Next
+========
+
+- Start of official tutorial
+- Compiler support for generating nice tutorial HTML from literate source files
+- New protocol 'static' for generating static pages
+- Bug fixes
+
 ========
 20110715
 ========
--- a/doc/manual.tex	Sun Jul 17 13:34:41 2011 -0400
+++ b/doc/manual.tex	Sun Jul 17 13:48:00 2011 -0400
@@ -293,6 +293,8 @@
 The least obvious requirement is setting \texttt{max-procs} to 1, so that lighttpd doesn't try to multiplex requests across multiple external processes. This is required for message-passing applications, where a single database of client connections is maintained within a multi-threaded server process. Multiple processes may, however, be used safely with applications that don't use message-passing. A FastCGI process reads the environment variable \texttt{URWEB\_NUM\_THREADS} to determine how many threads to spawn for handling client requests. The default is 1.
+
+  \item \texttt{static}: This protocol may be used to generate static web pages from Ur/Web code. The output executable expects a single command-line argument, giving the URI of a page to generate. For instance, this argument might be \cd{/main}, in which case a static HTTP response for that page will be written to stdout.
 \end{itemize}

 \item \texttt{-root Name PATH}: Trigger an alternate module convention for all source files found in directory \texttt{PATH} or any of its subdirectories. Any file \texttt{PATH/foo.ur} defines a module \texttt{Name.Foo} instead of the usual \texttt{Foo}.
 Any file \texttt{PATH/subdir/foo.ur} defines a module \texttt{Name.Subdir.Foo}, and so on for arbitrary nesting of subdirectories.
@@ -306,6 +308,22 @@
 There is an additional convenience method for invoking \texttt{urweb}. If the main argument is \texttt{FOO}, and \texttt{FOO.ur} exists but \texttt{FOO.urp} doesn't, then the invocation is interpreted as if called on a \texttt{.urp} file containing \texttt{FOO} as its only main entry, with an additional \texttt{rewrite all FOO/*} directive.
+\subsection{Tutorial Formatting}
+
+The Ur/Web compiler also supports rendering of nice HTML tutorials from Ur source files, when invoked like \cd{urweb -tutorial DIR}. The directory \cd{DIR} is examined for files whose names end in \cd{.ur}. Every such file is translated into a \cd{.html} version.
+
+These input files follow normal Ur syntax, with a few exceptions:
+\begin{itemize}
+\item The first line must be a comment like \cd{(* TITLE *)}, where \cd{TITLE} is a string of your choice that will be used as the title of the output page.
+\item While most code in the output HTML will be formatted as a monospaced code listing, text in regular Ur comments is formatted as normal English text.
In this section, we describe the syntax of Ur, deferring to a later section discussion of most of the syntax specific to SQL and XML. The sole exceptions are the declaration forms for relations, cookies, and styles.
# Thevenin equivalent with dependent and independent generators

Considering the following circuit

When calculating the Thevenin equivalent, I calculate the Thevenin resistance value Rth. Why do I have to consider the controlled voltage generator? And furthermore, why can't I eliminate the current source I1, and why does it get replaced by a 1A current source in this example? In similar cases, is it always true that the controlled generators are not to be eliminated and that the other generators (current and voltage) are to be replaced by unitary generators?

Why do I have to consider the controlled voltage generator?

Think of it this way: if you connect a test source across the A and B terminals, the test source cannot affect the current through the independent source. That's why it's called an independent source: its value does not depend on the attached circuit in any way. However, the voltage across E1 will, in general, be affected by the test source, and thus the equivalent resistance seen by the test source is modified by the presence of the dependent source.

And why can't I eliminate the current source I1, which in this example gets replaced by a 1A current source?

If the 1A current source mentioned is, in fact, the test source, you should zero the 5A source to find the Thevenin resistance of the circuit. With the 5A source activated, there will be an open-circuit voltage, $V_{AB_{(OC)}}$. When you connect the test source, the voltage $V_{AB}$ will differ from the open-circuit voltage. To find the Thevenin resistance, take the difference in the voltages and divide by the test-source current. But you get the same result if you simply zero the 5A source, which sets the open-circuit voltage to zero. Then you get the Thevenin resistance directly from the value of the voltage across the test source.
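As a minimal numeric sketch of the test-source procedure described above (the element values are assumptions for illustration, not taken from the circuit in the question): suppose that, with the independent sources zeroed, the path from A to B consists of a resistor R1 in series with a current-controlled voltage source of transresistance r driven by that same current. A 1 A test source then sees both voltage drops, and Rth follows directly.

```python
from fractions import Fraction

# Hypothetical element values (illustration only, not from the circuit above)
R1 = Fraction(4)      # ohms, series resistor
r = Fraction(2)       # ohms, transresistance of the dependent source (CCVS)
I_test = Fraction(1)  # amps, unit test source driven into terminal A

# With the independent sources zeroed, the test current flows through R1 and
# the CCVS, so the terminal voltage is the sum of both voltage drops.
V_AB = I_test * R1 + r * I_test

# Thevenin resistance seen from the terminals: the dependent source
# contributes to it, which is why it must not be zeroed.
R_th = V_AB / I_test
print(R_th)  # 6
```

Zeroing the dependent source as well would wrongly give Rth = R1; the test-source method keeps its contribution.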
In other cases, is it true that the controlled generators are not to be eliminated, and that the other generators (current and voltage) are to be replaced by unitary generators?

I honestly don't know where this idea comes from. A unitary generator is typically used as a test source, but I'm not aware of any reason to replace the other sources. Perhaps you should expand this question a bit. I suspect there's a misunderstanding here.

• What I didn't understand is why and when you consider using a test source; in my example the 5A current source gets replaced by a 1A current test source. Is this always necessary when controlled generators are present? What if instead of the current source I had a voltage source? – corsibu Nov 19 '12 at 15:01
• It's not correct to think of replacing the 5A source with the test source. I think this is where the confusion is. When finding the Thevenin resistance, all independent sources should be zeroed (current sources are opens, voltage sources are shorts). Then, the test source is connected and the voltage (in the case of a current test source) or current (in the case of a voltage test source) is calculated to find the Thevenin resistance. For this circuit, when the 5A source is zeroed and the test source connected, it only appears that the test source replaced the 5A source. – Alfred Centauri Nov 19 '12 at 16:37
• Yes, my example created the confusion. So, independently of the dependent source, do I choose a voltage or current source depending on what my requisites are? For example, if I must find Vth, do I choose a current source? – corsibu Nov 19 '12 at 16:46
• The test source, whether a current or voltage source, is used to find the Thevenin or Norton resistance. To find Vth, do not use a test source. Vth is just the open-circuit voltage across the terminals of interest. – Alfred Centauri Nov 19 '12 at 16:49
• Yes, I've mistyped that. Thanks a lot for your post, you were very exhaustive.
– corsibu Nov 19 '12 at 16:58
My Math Forum - What is the simplest form of this

Elementary Math: Fractions, Percentages, Word Problems, Equations, Inequations, Factorization, Expansion

February 18th, 2014, 04:41 AM #1
Newbie (Joined: Feb 2014, Posts: 6)

What is the simplest form of this?

$\frac{m^2}{(m^3-m)}+\frac{1}{(2-2m)}=$

February 18th, 2014, 06:43 AM #2
Math Team (Joined: Oct 2011, From: Ottawa Ontario, Canada, Posts: 12,105)

Re: What is the simplest form of this

1 / [2(m+1)]

1st step: 1 / (2 - 2m) = 1 / [2(1 - m)] = - 1 / [2(m - 1)] ; make sure you understand that...

March 9th, 2014, 12:20 AM #3
Senior Member (Joined: Oct 2013, From: Far far away, Posts: 422)

Re: What is the simplest form of this

Quote: Originally Posted by naufalzhafran
What is the simplest form of this?
$\frac{m^2}{(m^3-m)}+\frac{1}{(2-2m)}=$

$\frac{m^{2}}{m^{3}-m}+\frac{1}{2-2m}=\frac{m^{2}}{m(m^{2}-1)}-\frac{1}{2}\left (\frac{1}{m-1} \right )$
$=\frac{m^{2}}{m(m-1)(m+1)}-\frac{1}{2}\left (\frac{1}{m-1} \right )$
$=\frac{1}{m-1}\left ( \frac{m^{2}}{m(m+1)}-\frac{1}{2} \right )=\frac{1}{m-1}\left ( \frac{2m^{2}-(m^2+m)}{2(m^2+m)} \right )$
$=\frac{1}{m-1}\left ( \frac{m^2-m}{2m(m+1)} \right )=\frac{1}{m-1}\left (\frac{m(m-1)}{2m(m+1)} \right )=\frac{1}{2(m+1)}$
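The simplification above can be spot-checked with exact rational arithmetic (a quick sketch; the sample points are arbitrary, avoiding the excluded values m = -1, 0, 1):

```python
from fractions import Fraction

def original(m):
    # m^2/(m^3 - m) + 1/(2 - 2m), evaluated exactly
    m = Fraction(m)
    return m**2 / (m**3 - m) + 1 / (2 - 2*m)

def simplified(m):
    # the claimed simplest form, 1/(2(m+1))
    return Fraction(1, 2*(m + 1))

# The two expressions agree wherever the original is defined
for m in [2, 3, 5, 7, -4]:
    assert original(m) == simplified(m)
print(simplified(2))  # 1/6
```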
Problem 122, Textbook Question

A solution contains one or more of the following ions: Hg₂²⁺, Ba²⁺, and Fe²⁺. When you add potassium chloride to the solution, a precipitate forms. The precipitate is filtered off, and you add potassium sulfate to the remaining solution, producing no precipitate. When you add potassium carbonate to the remaining solution, a precipitate forms. Write net ionic equations for the formation of each of the precipitates observed.
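A hedged sketch of the answers implied by the observations, assuming standard solubility rules (chloride precipitates mercury(I) as Hg₂Cl₂; the absence of a sulfate precipitate rules out Ba²⁺; carbonate then precipitates the remaining Fe²⁺):

```latex
% Net ionic equations inferred from standard solubility rules (sketch)
\mathrm{Hg_2^{2+}(aq) + 2\,Cl^{-}(aq) \longrightarrow Hg_2Cl_2(s)}
\qquad
\mathrm{Fe^{2+}(aq) + CO_3^{2-}(aq) \longrightarrow FeCO_3(s)}
```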
# Display Math Equations on Your Website

by Ethan Glover, Sat, Oct 25, 2014 - (Edited) Tue, Dec 26, 2017

Occasionally with this site I like to display items that involve special symbols. For example, in a recent post about sorting algorithms I used the Theta symbol a lot in order to refer to asymptotic notation. While HTML does provide a way to display this symbol easily, sometimes HTML doesn't cut it, especially with long formulas, equations, and functions. For instance, how would one show L'Hopital's rule? It certainly can be done, just type this:

$$\lim_{x\to 0}{\frac{e^x-1}{2x}} \overset{\left[\frac{0}{0}\right]}{\underset{\mathrm{H}}{=}} \lim_{x\to 0}{\frac{e^x}{2}}={\frac{1}{2}}$$

Of course, you can't just type that in. What it requires is MathJax and LaTeX. With this tutorial, I assume you're familiar with LaTeX. It's pretty easy to learn; just check out the sources I have listed on it over at Liberty Resource Directory. Instead, I want to show you how to include MathJax on your website. A very easy thing to do. In the head section of your page, just include some very simple code in order to link your page to the MathJax libraries.

From here, including LaTeX code is quite simple. If you want to put an equation inline, use something like \(\Theta(n^2)\) to display the rendered form of \Theta(n^2). Notice that what triggers the rendering is the backslash and parentheses: \( ... \). For longer code that should be displayed on its own line, such as the L'Hopital's rule displayed above, just change those parentheses to square brackets, like this: \[ ... \].
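A typical MathJax include for the head section looks like the following (the CDN URL and config name here are assumptions based on the MathJax documentation of that era; check the current documentation for the recommended snippet):

```html
<!-- Typical MathJax include for the <head> section; URL and config are assumptions -->
<script type="text/javascript"
        src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
```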
So in order to display that code, it would look something like this:

\[ \lim_{x\to 0}{\frac{e^x-1}{2x}} \overset{\left[\frac{0}{0}\right]}{\underset{\mathrm{H}}{=}} \lim_{x\to 0}{\frac{e^x}{2}}={\frac{1}{2}} \]

Simple as \(\pi\)!
When you're building complex neural networks you often want to visualize them. Visualization offers a great way to debug things like performance bottlenecks, because parts of your network can get duplicated by CNTK. Luckily, it is quite easy to visualize a neural network in CNTK. In this post I will show you how you can visualize complex neural networks in CNTK so that you get insight into how the layers of your network are connected.

## Once upon a time there was a neural network

As part of an experiment I'm working on a neural network for counting people in the crowd. The neural network I use is a residual network. This kind of neural network is good at recognizing patterns in images. It's a rather complex neural network. And to make matters worse, I've chosen to perform multiple tasks at once in my neural network. In short, it means that I have a neural network that has multiple output layers connected to the same set of hidden layers. I know it should have a structure like this:

<input-layer> <residual-block> <residual-block> <residual-block> <pooling-layer> <output 1><output 2><output 3>

I use the following Python code to write my neural network.

import cntk as C
from cntk.layers import BatchNormalization, Convolution2D, Dense, For, MaxPooling, Sequential
from cntk.ops import combine, relu, softmax

input_var = C.input_variable((3,320,240), name='input')

def ResidualBlock(filter_size, num_filters):
    def generate_residual_block(input_layer):
        # The first bit of a residual block is a set of convolution filters.
        # We use padding here to keep the shape intact.
        conv_block = For(range(2), lambda: [
            Convolution2D(filter_size, num_filters, pad=True),
            BatchNormalization(),
        ])(input_layer)
        # the output of a residual block is the sum of the convolution filter and the input layer.
        # a ReLU activation function is used for this part of the block.
        output = conv_block + input_layer
        output = relu(output)
        return output
    return generate_residual_block

# The core module with residual blocks
core_module = Sequential([
    For(range(3), lambda: [
        ResidualBlock((5,5), 64)
    ]),
])

# Pooling layer
intermediate_results = MaxPooling((3,3))(core_module)

# Output 1
people_count = Sequential([
    Dense(64),
    Dense(1, activation=relu)
])(intermediate_results)

# Output 2
violence = Sequential([
    Dense(64),
    Dense(2, activation=softmax)
])(intermediate_results)

# Output 3
density_level = Sequential([
    Dense(64),
    Dense(1, activation=relu)
])(intermediate_results)

# The final model
model = combine([density_level, people_count, violence])(input_var)

That's a lot of code, with a high risk of generating a crappy network. Especially the connection from the hidden layers to the output layers is rather complex. If I get the connection between the intermediate_results layer and the rest of the outputs wrong, the Microsoft Cognitive Toolkit (CNTK) library will generate duplicate layers, resulting not only in slower training times, but also in bad results.

## Visualize it

So how do you know that all layers connect in the right way and CNTK doesn't duplicate any of your layers? You need to visualize it. CNTK includes a graph plot function in the logging package that you can use to visualize models. It is based on Graphviz, a free graph visualization tool. Install Graphviz, then add the binaries folder in the Graphviz installation folder to your PATH variable. Mine is installed in C:\Program Files (x86)\Graphviz2.38\bin so I add that to my path. Now you can visualize your model. For this you need to use the following code:

import cntk as C
C.logging.graph.plot(model, 'model.png')

First import the cntk package and then call the logging.graph.plot function with the instance of your model and a filename. This generates a picture of the network graph.

Now you can check the neural network for any bad connections or weird names.
As you can see in the picture, my neural network looks as expected, except for a few naming issues with the output layers.

## Try it yourself

The visualization logger of CNTK makes it easier to see what your neural network looks like. But that is not all. You can also use this to generate pretty pictures to explain the network to your colleagues or other people. Give it a spin and let me know what you think!
This Riddler puzzle is about the popular Secret Santa gift exchange game. Can we guess who our Secret Santa is? The 41 FiveThirtyEight staff members have decided to send gifts to each other as part of a Secret Santa program. Each person is randomly assigned one of the other 40 people on the masthead to give a gift to, and they can’t give to themselves. After the Secret Santa is over, everybody naturally wants to find out who gave them their gift. So, each of them decides to ask up to 20 people who they were a Secret Santa for. If they can’t find the person who gave them the gift within 20 tries, they give up. (Twenty co-workers is a lot of co-workers to talk to, after all.) Each person asks and answers individually — they don’t tell who anyone else’s Secret Santa is. Also, nobody asks any question other than “Who were you Secret Santa for?” If each person asks questions optimally, giving themselves the best chance to unmask their Secret Santa, what is the probability that everyone finds out who their Secret Santa was? And what is this optimal strategy? (Asking randomly won’t work, because only half the people will find their Secret Santa that way on average, and there’s about a 1-in-2 trillion chance that everyone will know.) Here is my solution: [Show Solution] ## 14 thoughts on “Unmasking the Secret Santas” 1. Mark Rickert says: Nice job, however I get: For n=1, P=0, and for n=2, P=5/11. I didn’t check the others. Are you using D(0)=1? 1. You’re absolutely right. I used D(0)=0 by accident. I updated my post and plot. I decided to keep P=1 for the cases n=1 and n=2 because you can deduce your Secret Santa perfectly in these cases even if you don’t ask the correct co-worker (see my explanation in the updated post). 1. Thanks, indeed you are correct. I accidentally used D(0)=0 in my code, when it should be D(0)=1. I updated my post and the plot. 2. Justin Hsu says: I think your evaluation is missing the k=2n+1 term. 
n=2 should still evaluate to 100%, as link length of 5 is fully determined from first 3 permutations. 1. Yep. Fixed it! Check out the updated plot and post. 3. Just zis guy, ya know? says: It seems that you have solved a different problem than asked, though likely the one that was intended. “If each person asks questions optimally, giving themselves the best chance to unmask their Secret Santa…” (the original problem) is not the same as, “If each person asks questions optimally, so as to give the entire group the best chance to unmask all of the Secret Santas…” (the question you answered). To find our own Secret Santa we can do better than guessing randomly… proving we have an optimal strategy is difficult, however. – JZGYK 1. I agree — since we were asked about the probability that everybody wins, I was assuming this was the quantity to be maximized. If we interpret the question literally and each individual greedily attempts to maximize the chance that they will find their own Secret Santa, I don’t think you can do any better than guessing randomly (assuming $n > 2$). If you think it’s possible to do better than randomly guessing, I’d love to hear your thoughts! This is another reason I interpreted the question the way I did; because the literal interpretation, as far as I could tell, led to random guessing as the only possible choice and this isn’t an interesting scenario! 1. If you use the “follow the Santa” strategy, there is a 31.8% chance that everybody finds their Secret Santa, but there is actually a 48.8% chance that any one individual will find their Secret Santa. Therefore this strategy doesn’t beat random guessing, where the chance of winning is 50% for each individual. I think I’ll update my solution and include some of these comments. 1. Verdigris97 says: There is another interpretation that leads to a perfect (but clearly unintended) strategy. 
Even though nobody can share the results of their questioning, if everyone in the office conspires to ask exactly one question, (and they each ask their own target), then by the end of the day each person will know who their secret Santa is because their secret Santa was the only one to ask them anything. 2. Just zis guy, ya know? says: I came to the same results as everyone else: 31.8% for everybody using “Follow The Santa [FTS]”, 48.8% for me individually if I use FTS and 50% if I “guess randomly”. I note here that I only count a “win” if someone actually says, “I am your Secret Santa.”; there is always the possibility of guessing correctly if no one names me. What struck me as odd about this was: “If I am guessing randomly, I might Follow The Santa by pure chance.” But this would give me a less than the 50% chance of winning that I would get if I guessed randomly. So what if I guess randomly but re-guess if I FTS? By throwing out the “bad” (less than 50%) approach of FTS, what is left must be something greater than 50%. An example: n = 3 is the first interesting case so let’s start there. There are 1854 derangements and the individual finds their Secret Santa 774/1854 = 41.75% of the time. Consider an anti-Follow The Santa strategy ([aFTS]). We are player #1 and we ask the lowest numbered player who has not yet been named. That is, if we were the Secret Santa for player #4 then the lowest numbered player who has not yet been named is player #2 and we ask them who they bought for. If they say player #3 then we have named players #1, #2, #3, and #4 so the lowest numbered player who has not yet been named is #5, etc.. After 1 question, 3-FTS has 264 winning cases, 3-Rnd has 309 and 3-aFTS has 318. After 2 questions, 3-FTS has 534 winning cases, 3-Rnd has 618 and 3-aFTS has 654. After 3 questions, 3-FTS has 774 winning cases, 3-Rnd has 927 and 3-aFTS has 1038. 
Assuming I have made no mistakes, 3-FTS = 41.8%, 3-Rnd = 50.0% and 3-aFTS = 56.0% Proving that this is the optimal individual strategy is another thing entirely. – JZGYK On an unrelated note, whenever I try to use the less than or greater than symbols, your site misinterprets them. 1. ah neat — I guess I was assuming the staff members weren’t allowed to cooperate ahead of time (by agreeing on a specific ordering, for example). Looks like this sort of cooperation can lead to better-than-guessing strategies, as you pointed out! Math symbols (and arbitrary equations, in general) can be displayed by using a dollar sign before and after the equation (LaTeX code). I think the issue with $<$ and $>$ is that these symbols are interpreted as html tags. 1. Just zis guy, you know? says: No collusion is required and I can get this result as the sole Santa-seeker. The results are unchanged if I make a list or if I simply choose randomly from people who are not named. But I don’t know that this is optimal in general, or even close. – JZGYK 4. Verdigris97 says: Nice writeup! If you want to avoid using enormous numbers (say, because you are using a spreadsheet for the computations, or any finite-precision language), the number of derangements with a cycle of length $k>n/2$, $\binom{n}{k} (k-1)! D_{n-k}$, can be rewritten as $n!\frac{(k-1)!}{k!}\frac{D_{n-k}}{(n-k)!}=\frac{n!}{k}\frac{D_{n-k}}{(n-k)!}$, and the proportion of such derangements out of all possible derangements of size $n$ is $\frac{1}{k}\frac{n!}{D_n}\frac{D_{n-k}}{(n-k)!}$. So, the proportion of derangements on 41 items with a cycle of length (exactly) 22 is $\frac{1}{22} \frac{41!}{D_{41}} \frac{D_{19}}{19!} \approx 0.045455$. The function $\frac{D_n}{n!}$ converges to $1/e$ quickly. 
The exact formula is $\frac{D_n}{n!} = \sum_{i=0}^n \frac{(-1)^i}{i!}$, and, if we accept the approximation $\frac{41!}{D_{41}} = e$ (the difference is far below machine precision), we can sum up the contributions of each of the different cycle lengths for $k=22,\ldots,41$ and get that the total proportion of derangements with a cycle of length at least 22 is approximately $0.681665$. Therefore, the probability that our strategy succeeds is approximately $0.318335$, which matches your approximation to the exact ratio you found.
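The figures quoted in this thread (roughly 31.8% for everyone succeeding, 48.8% for a single person using follow-the-Santa, and Verdigris97's 0.318335) can be reproduced exactly from the derangement recurrence; this is a sketch of that computation:

```python
from fractions import Fraction
from math import factorial

def derangements(n):
    # D_0 = 1, D_1 = 0, D_k = (k-1) * (D_{k-1} + D_{k-2})
    D = [1, 0]
    for k in range(2, n + 1):
        D.append((k - 1) * (D[k - 1] + D[k - 2]))
    return D

n, q = 41, 20  # staff members, questions each person may ask
D = derangements(n)

# "Follow the Santa" traces a cycle of length L in L - 1 questions,
# so it fails only on cycles longer than q + 1 = 21.

# Everyone succeeds iff no cycle is longer than 21. For k > n/2 there is
# at most one cycle of length k; derangements containing one number
# C(n,k) * (k-1)! * D_{n-k} = n! * D_{n-k} / (k * (n-k)!).
long_cycles = sum(Fraction(factorial(n) * D[n - k], k * factorial(n - k))
                  for k in range(q + 2, n + 1))
p_everyone = 1 - long_cycles / D[n]

# A given person succeeds iff their own cycle has length L <= 21; that
# cycle can be built in C(n-1, L-1) * (L-1)! * D_{n-L} ways.
p_individual = sum(Fraction(factorial(n - 1) * D[n - L], factorial(n - L))
                   for L in range(2, q + 2)) / D[n]

print(round(float(p_everyone), 6), round(float(p_individual), 3))
```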