Solving Stable Generalized Lyapunov Equations for Hankel Singular Values Computation
Vasile Sima

Generalized Lyapunov equations are often encountered in systems theory, analysis and design of control systems, and in many applications, including balanced realization algorithms, procedures for reduced order models, or Newton methods for generalized algebraic Riccati equations. An important application is the computation of the Hankel singular values of a generalized dynamical system, whose behavior is defined by a regular matrix pencil. This application uses the controllability and observability Gramians of the system, given as the solutions of a pair of generalized Lyapunov equations. The left hand side of each of these equations follows from the other one by applying the (conjugate) transposition operator. If the system is stable, the solutions of both equations are non-negative definite, hence they can be obtained in a factorized form. But these theoretical results may not hold in numerical computations if the symmetry and non-negative definiteness are not preserved by a solver. The paper summarizes new related numerical algorithms for complex continuous- and discrete-time generalized systems. Such solvers are not yet available in the SLICOT Library or MATLAB. The developed solvers address the essential practical issues of reliability, accuracy, and efficiency.

Paper Citation in Harvard Style
Sima V. (2022). Solving Stable Generalized Lyapunov Equations for Hankel Singular Values Computation. In Proceedings of the 19th International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO, ISBN 978-989-758-585-2, pages 130-137. DOI: 10.5220/0011259900003271

in BibTeX Style
@inproceedings{sima2022,
  author={Vasile Sima},
  title={Solving Stable Generalized Lyapunov Equations for Hankel Singular Values Computation},
  booktitle={Proceedings of the 19th International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO},
  year={2022},
  pages={130-137},
  isbn={978-989-758-585-2},
  doi={10.5220/0011259900003271},
}

in EndNote Style
TY  - CONF
JO  - Proceedings of the 19th International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO
TI  - Solving Stable Generalized Lyapunov Equations for Hankel Singular Values Computation
SN  - 978-989-758-585-2
AU  - Sima V.
PY  - 2022
SP  - 130
EP  - 137
DO  - 10.5220/0011259900003271
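As a rough illustration of the computation involved (a naive dense sketch, not the factored solvers developed in the paper; the matrices and sizes below are arbitrary), a continuous-time generalized Lyapunov equation A X E^T + E X A^T + B B^T = 0 can be solved by Kronecker-product vectorization and checked against its residual:

import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
M = rng.standard_normal((n, n))
A = -(M @ M.T + n * np.eye(n))                 # symmetric negative definite, so the pencil (A, E) is stable for E close to I
E = np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# Column-major vec identities: vec(A X E^T) = (E kron A) vec(X),  vec(E X A^T) = (A kron E) vec(X)
K = np.kron(E, A) + np.kron(A, E)
rhs = -(B @ B.T).reshape(-1, order="F")
X = np.linalg.solve(K, rhs).reshape((n, n), order="F")

# Residual of A X E^T + E X A^T + B B^T should be near machine precision
print(np.linalg.norm(A @ X @ E.T + E @ X @ A.T + B @ B.T))

The observability Gramian satisfies the transposed equation, and in the standard case E = I the Hankel singular values are the square roots of the eigenvalues of the product of the two Gramians; the solvers summarized in the paper instead compute the solutions directly in factorized form, which is what preserves symmetry and non-negative definiteness in finite precision.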
{"url":"http://scitepress.net/PublishedPapers/2022/112599/","timestamp":"2024-11-05T20:03:10Z","content_type":"text/html","content_length":"7082","record_id":"<urn:uuid:71072980-2358-4680-b49a-f29993bd751f>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00099.warc.gz"}
Is there a general equation for convection of a fin's side surface • Thread starter kp3legend • Start date In summary, the conversation discussed the use of a circular fin with uniform cross section and attached ends to surfaces, and the search for a general equation to find the heat rate by convection along the fin's side surface. The speaker also mentioned trying to relate all the conditions and ended up with an equation that varies with position x, but was unsure if this was the correct approach. They asked if heat transfer along the fins can be represented by a function, and requested additional help or information. I'm wondering with the circular fin, with uniform cross section area with given length and diameter. both ends are attached to surfaces. is there a general equation to find the heat rate by convection of the fin side surface. I tried to relate all the conditions and ended up with a equation that varies with position x along the fin. But I don't know if its a right way or not. Can heat transfer by convection along the fins be presented by a function which will gives the overall heat transfer rate. thank you for your time. i know this is wordy but i really need you help so i can move on with the next stuffs. I'm sorry you are not generating any responses at the moment. Is there any additional information you can share with us? Any new findings? FAQ: Is there a general equation for convection of a fin's side surface 1. What is convection of a fin's side surface? Convection of a fin's side surface is the transfer of heat between the fin and the surrounding fluid due to the movement of the fluid. 2. Why is it important to have a general equation for convection of a fin's side surface? A general equation for convection of a fin's side surface allows for easier and more accurate calculations of heat transfer in various systems, such as in heat exchangers and cooling systems. 3. Is there a single equation that can be used to calculate convection of a fin's side surface? No, there is not a single equation that can be used for all situations. The equation used will depend on various factors such as the geometry of the fin, properties of the fluid, and flow conditions. 4. What are some factors that affect the convection of a fin's side surface? Some factors that affect convection of a fin's side surface include the shape and size of the fin, the temperature difference between the fin and the fluid, the velocity of the fluid, and the properties of the fluid such as viscosity and thermal conductivity. 5. How can the convection coefficient of a fin's side surface be determined? The convection coefficient can be determined experimentally or through empirical correlations. It can also be estimated using numerical methods, such as computational fluid dynamics simulations.
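For reference, under the standard one-dimensional fin assumptions (steady state, uniform cross-section $A_c$, perimeter $P$, constant conductivity $k$ and convection coefficient $h$, excess temperature $\theta(x) = T(x) - T_\infty$), the governing equation is

$$\frac{d^2\theta}{dx^2} - m^2\,\theta = 0, \qquad m^2 = \frac{hP}{kA_c},$$

with general solution $\theta(x) = C_1 e^{mx} + C_2 e^{-mx}$; for a fin attached to surfaces at both ends, $C_1$ and $C_2$ follow from the two base temperatures $\theta(0)$ and $\theta(L)$. The heat removed by convection from the side surface is then

$$q_{conv} = \int_0^L h\,P\,\theta(x)\,dx,$$

which, by an energy balance, also equals the conduction entering at $x = 0$ minus the conduction leaving at $x = L$. This matches the observation in the thread that the local heat transfer varies with position $x$; integrating over the length gives the overall rate.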
{"url":"https://www.physicsforums.com/threads/is-there-a-general-equation-for-convection-of-a-fins-side-surface.741556/","timestamp":"2024-11-09T16:10:35Z","content_type":"text/html","content_length":"75059","record_id":"<urn:uuid:6ff3cb87-5129-4b54-98bb-f0ca756a2f9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00780.warc.gz"}
73 pounds to usd - Coincrafty

When talking about a person's weight, we often think of a person's size, such as a size 12 shoe. But to us, it is actually the number of pounds that the person weighs. It is the number of pounds that a person has on them. There are different ways that a person can describe their weight, and the body mass index is the most common way of characterizing it. The standard way is through the equation called BMI, and it's the equation that we use to assess the weight of a person. It's based on height and weight. The BMI formula looks like this: BMI = weight (kg) / height (m)². I've seen some people say there are separate versions of the formula for women and for men, but the standard formula is the same for both: BMI = weight / height². The BMI formula is not specific to the US; it's an equation that has long been used by public-health agencies such as the CDC as a screening measure for obesity. Now, the BMI measure that the CDC uses is based on weight relative to height, so a tall, light person would come out underweight. However, it's usually used more for people in a Western country where they have more access to medical care. In the UK, the UK is the place where you can buy a pair of high heels. However, these heels are made of glass and are made from recycled plastic. The BMI formula is based on weight that would be calculated using a calculator and would actually be around the same weight as a person in the US. The formula would also be based on height, so if you are in a different town and you get the wrong answer, then you are probably getting that wrong answer. The problem is that, because of their low BMI, people who wear high heels are considered "obese" by the BMI, which makes it more difficult for them to measure their height for BMI calculations. If you are overweight, but not obese, then you are probably a little too thin. To calculate your BMI with imperial units, multiply your weight in pounds by 703 and divide by the square of your height in inches. If you are overweight, then you are probably slightly overweight. To be honest, it's best to avoid this calculator and just go with your height in inches. BMI calculations: BMI is the body mass index, which is a weight-to-height ratio. We'll talk more about BMI later, but basically BMI is a measurement used as a rough proxy for body fat. In what way is your body fat? To answer this question, I will address some of the most common body fat-related questions. I will be using the word fat as opposed to fat in the title (since it's the fat that is the main culprit), but I will also mention that most people are overweight. I will also talk about how some people have "fat issues" and others have "fat issues" themselves, so it doesn't have to be that way.
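For anyone who wants to compute it directly, here is a minimal sketch of both versions of the formula (function names are illustrative):

def bmi_metric(weight_kg, height_m):
    # BMI from metric units: weight in kilograms divided by height in metres squared
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    # Same index from pounds and inches, using the conventional factor of 703
    return 703 * weight_lb / height_in ** 2

print(bmi_metric(70, 1.75))      # about 22.9
print(bmi_imperial(154, 69))     # about 22.7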
{"url":"https://coincrafty.com/73-pounds-to-usd/","timestamp":"2024-11-03T17:07:09Z","content_type":"text/html","content_length":"50716","record_id":"<urn:uuid:1bc75013-3a8b-497c-8f57-1e4967a1dc4a>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00480.warc.gz"}
Go to the source code of this file.

subroutine ctzrzf (M, N, A, LDA, TAU, WORK, LWORK, INFO)

Function/Subroutine Documentation

subroutine ctzrzf (integer M, integer N, complex, dimension( lda, * ) A, integer LDA, complex, dimension( * ) TAU, complex, dimension( * ) WORK, integer LWORK, integer INFO)

Download CTZRZF + dependencies [TGZ] [ZIP] [TXT]

Purpose:
CTZRZF reduces the M-by-N (M <= N) complex upper trapezoidal matrix A to upper triangular form by means of unitary transformations. The upper trapezoidal matrix A is factored as A = ( R 0 ) * Z, where Z is an N-by-N unitary matrix and R is an M-by-M upper triangular matrix.

Parameters:
[in] M — M is INTEGER. The number of rows of the matrix A. M >= 0.
[in] N — N is INTEGER. The number of columns of the matrix A. N >= M.
[in,out] A — A is COMPLEX array, dimension (LDA,N). On entry, the leading M-by-N upper trapezoidal part of the array A must contain the matrix to be factorized. On exit, the leading M-by-M upper triangular part of A contains the upper triangular matrix R, and elements M+1 to N of the first M rows of A, with the array TAU, represent the unitary matrix Z as a product of M elementary reflectors.
[in] LDA — LDA is INTEGER. The leading dimension of the array A. LDA >= max(1,M).
[out] TAU — TAU is COMPLEX array, dimension (M). The scalar factors of the elementary reflectors.
[out] WORK — WORK is COMPLEX array, dimension (MAX(1,LWORK)). On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
[in] LWORK — LWORK is INTEGER. The dimension of the array WORK. LWORK >= max(1,M). For optimum performance LWORK >= M*NB, where NB is the optimal blocksize. If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA.
[out] INFO — INFO is INTEGER. = 0: successful exit; < 0: if INFO = -i, the i-th argument had an illegal value.

Author: Univ. of Tennessee; Univ. of California Berkeley; Univ. of Colorado Denver; NAG Ltd.

Date: April 2012

Contributors: A. Petitet, Computer Science Dept., Univ. of Tenn., Knoxville, USA

Further Details:
The N-by-N matrix Z can be computed by
  Z = Z(1)*Z(2)* ... *Z(M)
where each N-by-N Z(k) is given by
  Z(k) = I - tau(k)*v(k)*v(k)**H
with v(k) the kth row vector of the M-by-N matrix
  V = ( I  A(:,M+1:N) )
Here I is the M-by-M identity matrix, A(:,M+1:N) is the output stored in A on exit from CTZRZF, and tau(k) is the kth element of the array TAU.

Definition at line 152 of file ctzrzf.f.
{"url":"https://netlib.org/lapack/explore-html-3.4.2/df/d9f/ctzrzf_8f.html","timestamp":"2024-11-08T15:52:47Z","content_type":"application/xhtml+xml","content_length":"13439","record_id":"<urn:uuid:646c29bf-cdbc-4a1c-a238-cd7b3aa4b58e>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00702.warc.gz"}
nearest neighbor descent for approximate nearest neighbors

PyNNDescent is a Python library for approximate nearest neighbor search based on nearest neighbor descent. It provides a Python implementation of Nearest Neighbor Descent for k-neighbor-graph construction and approximate nearest neighbor search, as per the paper: Dong, Wei, Charikar Moses, and Kai Li. "Efficient k-nearest neighbor graph construction for generic similarity measures." Proceedings of the 20th International Conference on World Wide Web. ACM, 2011.

This library supplements that approach with the use of random projection trees for initialisation. This can be particularly useful for metrics that are amenable to such approaches (euclidean, minkowski, angular, cosine, etc.). Graph diversification is also performed, pruning the longest edges of any triangles in the graph. Currently this library targets relatively high accuracy (80%-100% accuracy rate) approximate nearest neighbor searches.
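A minimal usage sketch (the array sizes, metric and parameter values below are illustrative):

import numpy as np
from pynndescent import NNDescent

data = np.random.random((1000, 20)).astype(np.float32)

# Build the approximate k-neighbor graph of the data
index = NNDescent(data, metric="euclidean", n_neighbors=15)
neighbor_indices, neighbor_distances = index.neighbor_graph

# Query the index with new points
queries = np.random.random((5, 20)).astype(np.float32)
indices, distances = index.query(queries, k=10)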
{"url":"https://screenshots.debian.net/package/python3-pynndescent","timestamp":"2024-11-14T11:49:23Z","content_type":"text/html","content_length":"5488","record_id":"<urn:uuid:3d907767-47ab-4dc3-8c14-23fa98026392>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00868.warc.gz"}
Understanding Special Relativity Problems Welcome to our article on understanding special relativity problems! Special relativity is a fundamental theory in modern physics that revolutionized our understanding of space and time. It was developed by Albert Einstein in the early 20th century and has since been used to explain countless phenomena in the universe. In this article, we will delve into the world of special relativity and explore some of its most interesting and challenging problems. Whether you are a student studying physics or just someone with a curious mind, this article is for you. So buckle up and get ready to expand your understanding of the universe with us. Let's dive into the fascinating world of special relativity!To start, it's important to have a solid understanding of the basic concepts of special relativity. This includes understanding the principle of relativity, which states that the laws of physics are the same for all observers in uniform motion. This fundamental principle forms the basis of special relativity and is crucial to understanding its implications and applications. One of the key concepts in special relativity is time dilation. This phenomenon occurs when an object is moving at high speeds, causing time to appear to slow down for that object. This concept was first introduced by Albert Einstein in his famous theory of special relativity and has been confirmed by numerous experiments since then. Another important concept in special relativity is length contraction. This describes how an object's length appears to decrease when it is moving at high speeds. This may seem counterintuitive, but it is a direct consequence of the principles of special relativity and has been observed and measured in various experiments. Understanding these basic concepts is crucial for solving special relativity problems. It allows you to apply the correct formulas and equations and make accurate predictions about the behavior of objects in motion. Additionally, having a solid understanding of these concepts will also help you interpret the results of experiments and make sense of new research findings in the field. Special relativity has played a crucial role in modern physics, leading to groundbreaking discoveries such as the theory of general relativity, which explains the force of gravity. It has also been used in various practical applications, such as GPS technology and particle accelerators. As with any field of science, staying updated on the latest research and developments is essential for anyone pursuing a career in physics. With special relativity being such a fundamental theory, it is constantly being studied and refined by scientists around the world. Keeping up with new research findings will not only deepen your understanding of the subject but also allow you to contribute to the field in your own way. In conclusion, special relativity is a crucial topic in modern physics, and understanding it is essential for anyone pursuing a career in the field. By having a solid grasp of the basic concepts and staying updated on the latest research, you can confidently tackle special relativity problems and contribute to the ongoing advancements in this fascinating field. Formulas for Solving Special Relativity Problems When it comes to solving special relativity problems, having a good grasp of the relevant formulas is crucial. 
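In standard notation (with $v$ the relative speed, $c$ the speed of light, $\gamma$ the Lorentz factor, and $\Delta t_0$, $L_0$ the proper time and proper length measured in the object's rest frame), the key relations discussed below take the form:

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t = \gamma\,\Delta t_0, \qquad L = \frac{L_0}{\gamma}, \qquad x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right).$$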
Some of the most important formulas include the Lorentz transformation equations, which allow you to calculate how time and space coordinates change for objects moving at high speeds. Other important formulas include the time dilation equation and the length contraction equation. Conducting Experiments on Special Relativity One of the best ways to understand special relativity is by conducting experiments. This allows you to see firsthand how objects behave at high speeds and how they differ from our everyday experiences. Some popular experiments include measuring the speed of light and observing the effects of time dilation and length contraction. Applying Special Relativity in Real-World Scenarios Special relativity has many practical applications, such as in GPS technology and particle accelerators. By understanding the theory and its formulas, you can apply it to real-world problems and make important contributions to the field of physics. Staying Updated on the Latest Research As with any scientific field, staying updated on the latest research is crucial for understanding special relativity. Follow reputable scientific publications and attend conferences to stay informed about new discoveries and advancements in the field. Resources for Learning Special Relativity If you're looking to learn more about special relativity, there are plenty of resources available. Online tutorials and courses can provide a comprehensive overview of the theory, while textbooks and research papers can provide more in-depth information. Additionally, attending conferences and seminars can help you stay updated on the latest research and developments in the field. Common Misconceptions About Special Relativity While special relativity is a well-established theory, there are still some common misconceptions about it. For example, many people believe that time dilation means that time actually slows down, when in reality it only appears to slow down from an outside perspective. Addressing these misconceptions can help you gain a better understanding of the theory. In conclusion, special relativity is a fascinating and important theory that has revolutionized our understanding of the universe. By familiarizing yourself with its concepts, formulas, and applications, you can gain a deeper understanding of how objects behave at high speeds and contribute to the ongoing research in this field.
{"url":"https://www.onlinephysics.co.uk/modern-physics-problems-special-relativity-problems","timestamp":"2024-11-07T17:02:48Z","content_type":"text/html","content_length":"173498","record_id":"<urn:uuid:f41c347a-7aa5-459f-a999-b6c39bbba594>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00387.warc.gz"}
Hedges’ g: Definition, Formula

What is Hedges’ g?
Hedges’ g is a measure of effect size. Effect size tells you how much one group differs from another—usually a difference between an experimental group and control group. Hedges’ g and Cohen’s d are extremely similar. Both have an upwards bias (an inflation) in results of up to about 4%. The two statistics are very similar except when sample sizes are below 20, when Hedges’ g outperforms Cohen’s d. Hedges’ g is therefore sometimes called the corrected effect size.
• For very small sample sizes (<20) choose Hedges’ g over Cohen’s d.
• For sample sizes >20, the results for both statistics are roughly equivalent.
• If standard deviations are significantly different between groups, choose Glass’s delta instead. Glass’s delta uses only the control group’s standard deviation (SD[C]).

The Hedges’ g formula is:
g = (M1 − M2) / SD*pooled, where SD*pooled = sqrt[ ((n1 − 1)SD1² + (n2 − 1)SD2²) / (n1 + n2 − 2) ].
Here M1 and M2 are the two group means, SD1 and SD2 the group standard deviations, and n1 and n2 the group sizes. The main difference between Hedges’ g and Cohen’s d is that Hedges’ g uses the pooled weighted standard deviation (instead of the pooled standard deviation).

A note on small sample sizes: Hedges’ g (like Cohen’s d) is biased upwards for small samples (under 50). To correct for this, use the following formula:
corrected g ≈ g × (1 − 3 / (4(n1 + n2) − 9)).

Interpreting Results
A g of 1 indicates the two groups differ by 1 standard deviation, a g of 2 indicates they differ by 2 standard deviations, and so on. Standard deviations are equivalent to z-scores (1 standard deviation = 1 z-score).

Rule of Thumb Interpretation
Cohen’s d and Hedges’ g are interpreted in a similar way. Cohen suggested using the following rule of thumb for interpreting results:
• Small effect (cannot be discerned by the naked eye) = 0.2
• Medium effect = 0.5
• Large effect (can be seen by the naked eye) = 0.8
Cohen did suggest caution when using this rule of thumb. The terms “small” and “large” effects can mean different things in different areas. For example, a “small” reduction in suicide rates is invaluable, whereas a “small” weight loss may be meaningless. Durlak (2009) suggests referring to prior studies to see where your results fit into the bigger picture.

References:
Cohen, J. (1977). Statistical Power Analysis for the Behavioral Sciences. Routledge.
Durlak, J. (2009). How to Select, Calculate, and Interpret Effect Sizes. Journal of Pediatric Psychology, 34(9), 917–928.
Ellis, P. (2010). The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results.
Hedges, L. (1981). Distribution Theory for Glass’s Estimator of Effect Size and Related Estimators. Journal of Educational Statistics, 6(2), 107–128. Entire PDF available for free from JSTOR.
Hedges, L. V., & Olkin, I. (1985). Statistical Methods for Meta-Analysis. San Diego, CA: Academic Press.
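A small Python helper implementing the formulas above (illustrative; it assumes the two groups are given as sequences of numbers):

import numpy as np

def hedges_g(group1, group2):
    x1 = np.asarray(group1, dtype=float)
    x2 = np.asarray(group2, dtype=float)
    n1, n2 = len(x1), len(x2)
    # Pooled standard deviation weighted by each group's degrees of freedom
    s_pooled = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))
    g = (x1.mean() - x2.mean()) / s_pooled
    # Small-sample bias correction
    return g * (1 - 3 / (4 * (n1 + n2) - 9))

print(hedges_g([5.1, 4.9, 5.3, 5.0], [4.2, 4.4, 4.0, 4.3]))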
{"url":"https://www.statisticshowto.com/hedges-g/","timestamp":"2024-11-04T04:32:29Z","content_type":"text/html","content_length":"70017","record_id":"<urn:uuid:73f6208c-22ed-4e21-8ab9-872a4226afbb>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00740.warc.gz"}
Algorithm to find cliques of a given size k in O(n^k) time complexity

Reading time: 35 minutes

In this article, we will go through a simple yet elegant algorithm to find a clique of a given size. Clique is an interesting topic in itself, given that the clique decision problem is NP-Complete and cliques arise in almost all real-life applications involving graphs. Before we go into the algorithm, we will go through some basic ideas.

A clique is a subset of vertices of an undirected graph G such that every two distinct vertices in the clique are adjacent; that is, its induced subgraph is complete. Cliques are one of the basic concepts of graph theory and are used in many other mathematical problems and constructions on graphs. So we can say that a clique in an undirected graph is a subgraph that is complete. Learn more about Clique in general and related ideas and problems.

A maximal clique is a clique that cannot be extended by including one more adjacent vertex, that is, a clique which does not exist exclusively within the vertex set of a larger clique. Maximal cliques can be very small. A graph may contain a non-maximal clique with many vertices and a separate clique of size 2 which is maximal. While a maximum (i.e., largest) clique is necessarily maximal, the converse does not hold. Learn why the Clique decision problem is NP-Complete.

A clique of size k in a graph G is a clique of graph G containing k vertices, i.e. the degree of each vertex is k-1 within that clique. In particular, if there is a subset of k vertices that are all connected to each other in the graph G, we say that the graph contains a k-clique. A k-clique can be a maximal clique or can be a subset of a maximal clique, so if a graph contains a clique of size more than k then it definitely contains a clique of size k. For example, see the graph shown below.

We can find all the 2-cliques by simply enumerating all the edges. To find (k+1)-cliques, we can use the previous results: compare all the pairs of k-cliques; if the two cliques have k-1 vertices in common and the graph contains the missing edge, we can merge them into a (k+1)-clique.

def k_cliques(graph):
    # 2-cliques
    cliques = [{i, j} for i, j in graph.edges() if i != j]
    k = 2

    while cliques:
        # result
        yield k, cliques

        # merge k-cliques into (k+1)-cliques
        cliques_1 = set()
        for u, v in combinations(cliques, 2):
            w = u ^ v
            if len(w) == 2 and graph.has_edge(*w):
                cliques_1.add(tuple(u | w))

        # remove duplicates
        cliques = list(map(set, cliques_1))
        k += 1

The above algorithm for finding k-cliques in a graph G takes polynomial time (for a fixed k). The algorithm starts from the 2-cliques and uses them as base data to find 3-cliques and more. To generate 3-cliques from 2-cliques we take each combination pair of 2-cliques and compute the symmetric difference of the pair; by doing so we find the edge that would be missing when the two 2-cliques are joined, and if that edge is present in the graph we merge the pair into a 3-clique and store it. In a similar way we generate (k+1)-cliques from k-cliques. Let's understand it with a small graph with 4 vertices.

To find k-cliques, the merging step is repeated O(k) times. A simple way to justify the O(n^k) bound is that there are only O(n^k) vertex subsets of size k, and checking whether a given subset forms a clique takes time that depends only on k; so, for a fixed k, all k-cliques can be enumerated in O(n^k) time in the worst case.
Code in Python3

from itertools import combinations
import networkx as nx

def k_cliques(graph):
    # 2-cliques
    cliques = [{i, j} for i, j in graph.edges() if i != j]
    k = 2

    while cliques:
        # result
        yield k, cliques

        # merge k-cliques into (k+1)-cliques
        cliques_1 = set()
        for u, v in combinations(cliques, 2):
            w = u ^ v
            if len(w) == 2 and graph.has_edge(*w):
                cliques_1.add(tuple(u | w))

        # remove duplicates
        cliques = list(map(set, cliques_1))
        k += 1

def print_cliques(graph, size_k):
    for k, cliques in k_cliques(graph):
        if k == size_k:
            print('%d-cliques = %d, %s.' % (k, len(cliques), cliques))

nodes, edges = 6, 10   # the example graph below has 6 nodes and 10 edges
size_k = 3

graph = nx.Graph()
graph.add_edge(1, 2)
graph.add_edge(1, 3)
graph.add_edge(1, 5)
graph.add_edge(2, 3)
graph.add_edge(2, 4)
graph.add_edge(2, 6)
graph.add_edge(3, 4)
graph.add_edge(3, 6)
graph.add_edge(4, 5)
graph.add_edge(4, 6)

print_cliques(graph, size_k)

Output:
3-cliques = 5, [{3, 4, 6}, {2, 3, 6}, {2, 4, 6}, {1, 2, 3}, {2, 3, 4}].

Time Complexity
• The k-clique algorithm takes O(n^k) time (i.e. polynomial for a fixed k) in the worst case.

Space Complexity
• The k-clique algorithm takes O(n^2) auxiliary space in the worst case.

Related articles
Using Bron Kerbosch algorithm to find maximal cliques in O(3^(N/3))
Greedy approach to find a single maximal clique in O(V^2) time complexity
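As a quick cross-check, networkx's built-in clique enumeration can count the 3-cliques of the same graph directly (this reuses the graph object built in the program above):

three_cliques = [c for c in nx.enumerate_all_cliques(graph) if len(c) == 3]
print(len(three_cliques))   # 5, matching the output above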
{"url":"https://iq.opengenus.org/algorithm-to-find-cliques-of-a-given-size-k/","timestamp":"2024-11-09T20:37:35Z","content_type":"text/html","content_length":"57333","record_id":"<urn:uuid:f9df24a0-4c43-4331-a752-654557cfc6be>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00884.warc.gz"}
Map Projections - Solutions CBSE Class 11 Geography NCERT Solutions Chapter 27 Map Projections 1. Choose the right answer from the four alternatives given below: (i) A map projection least suitable for the world map: (a) Mercator (b) Simple Cylindrical (c) Conical (d) All the above Ans. (c) Conical. A map projection in which an area of the earth is projected on to a cone, of which the vertex is usually above one of the poles. (ii) A map projection that is neither the equal area nor the correct shape and even the directions are also incorrect (a) Simple Conical (b) Polar zenithal (c) Mercator (d) Cylindrical Ans. (a) Simple Conical is often used both in air and ocean navigation. (iii) A map projection having correct direction and correct shape but area greatly exaggerated polewards is: (a) Cylindrical Equal Area (b) Mercator (c) Conical (d) All the above Ans. (b) Mercator. The Mercator projection is a cylindrical map projection presented by the Flemish geographer and cartographer Gerardus Mercator in 1569. (iv) When the source of light is placed at the centre of the globe, the resultant projection is called: (a) Orthographic (b) Stereographic (c) Gnomonic (d) All the above Ans. (c) Gnomonic. A gnomonic map projection displays all great circles as straight lines, resulting in any straight line segment on a gnomonic map showing a geodesic, the shortest route between the segment's two endpoints. 2. Answer the following questions in about 30 words: (i) Describe the elements of map projection. Ans. (a) Reduced Earth: A model of the earth is represented by the help of a reduced scale on a fiat sheet of paper. This model is called the "reduced earth". This model should be more or less spheroid having the length of polar diameter lesser than equatorial and on this model the network of graticule can be transferred. (b) Parallels of Latitude: These are the imaginary circles running round the globe parallel to the equator and maintaining uniform distance from the poles. An example of a parallel of latitude is the Arctic Circle that runs east - west around the Earth at a latitude of 66° 33' 44". (c) Meridians of Longitude: These are semi-circles drawn in north-south direction from one pole to the other, and the two opposite meridians make a complete circle, i.e. circumference of the globe. Example of meridians of longitude is the Prime Meridian. (d) Global Property: In preparing a map projection the following basic properties of the global surface are to be preserved by using one or the other methods: (i) Distance between any given points of a region; (ii) Shape of the region; (iii) Size or area of the region in accuracy; (iv) Direction of any one point of the region bearing to another point. (ii) What do you mean by global property? Ans. The correctness of area, shape, direction and distances are the four major global properties to be preserved in a map. But none of the projections can maintain all these properties simultaneously. Therefore, according to specific need, a projection can be drawn so that the desired quality may be retained. Thus, on the basis of global properties, projections are classified into equal area, orthomorphic, azimuthal and equi-distant projections. Equal Area Projection is also called homolographic projection. It is that projection in which areas of various parts of the earth are represented correctly. Orthomorphic or True-Shape projection is one in which shapes of various areas are portrayed correctly. 
The shape is generally maintained at the cost of the correctness of area. Azimuthal or True-Bearing projection is one on which the direction of all points from the centre is correctly represented. Equi-distant or True Scale projection is that where the distance or scale is correctly maintained. However, there is no such projection which maintains the scale correctly throughout; it can be maintained correctly only along some selected parallels and meridians as per the requirement. In preparing a map projection the following basic properties of the global surface are to be preserved by using one or the other methods: (a) Distance between any given points of a region; (b) Shape of the region; (c) Size or area of the region in accuracy; (d) Direction of any one point of the region bearing to another point.

(iii) Not a single map projection represents the globe truly. Why?
Ans. No map projection is perfect for every task. One must carefully weigh pros and cons and how they affect the intended map's purpose before choosing its projection. Unfortunately, only a globe offers such properties for any points and regions. However, there is no such projection which maintains the scale correctly throughout; it can be maintained correctly only along some selected parallels and meridians as per the requirement. A projection is a shadow of the globe which has to be presented on a map. When the shape of the globe changes, inaccuracy certainly comes in. Therefore, it is rightly said that not a single map projection represents the globe truly.

(iv) How is the area kept equal in cylindrical equal area projection?
Ans. The area is kept equal in the cylindrical equal area projection because latitudes and longitudes intersect each other at right angles in straight-line form. The cylindrical equal area projection, also known as Lambert's projection, has been derived by projecting the surface of the globe with parallel rays on a cylinder touching it at the equator. Both the parallels and meridians are projected as straight lines intersecting one another at right angles. The pole is shown with a parallel equal to the equator; hence, the shape of the area gets highly distorted at the higher latitudes.

3. Differentiate between:
(i) Developable and non-developable surfaces
│ Basis │ Developable Surface │ Non-developable Surface │
│ Meaning │ A developable surface is one which can be flattened, and on which a network of latitudes and longitudes can be projected. │ A non-developable surface is one which cannot be flattened without shrinking, breaking or creasing. │
│ Example │ A cylinder, a cone and a plane have the property of a developable surface. │ A globe or spherical surface has the property of a non-developable surface. │
On the basis of the nature of the developable surface, the projections are classified as cylindrical, conical and zenithal projections.

(ii) Homolographic and orthographic projections
│ Basis │ Homolographic Projection │ Orthographic Projection │
│ Meaning │ A projection in which the network of latitudes and longitudes is developed in such a way that every graticule on the map is equal in area to the corresponding graticule on the globe. It is also known as the equal-area projection. │ A projection in which the correct shape of a given area of the earth's surface is preserved. │

(iii) Normal and oblique projections
│ Basis │ Normal Projection │ Oblique Projection │
│ Meaning │ If the developable surface touches the globe at the equator, it is called the equatorial or normal projection. │ If the projection is tangential to a point between the pole and the equator, it is called the oblique projection. │

(iv) Parallels of latitude and meridians of longitude
│ Basis │ Meridians of Longitude │ Parallels of Latitude │
│ Meaning │ The meridians of longitude refer to the angular distance, in degrees, minutes and seconds, of a point east or west of the Prime (Greenwich) Meridian. │ The parallels of latitude refer to the angular distance, in degrees, minutes and seconds, of a point north or south of the Equator. │
│ Name │ Lines of longitude are often referred to as meridians. │ Lines of latitude are often referred to as parallels. │
│ Reference point │ 0° longitude is called the Prime Meridian. │ 0° latitude is called the Equator. │
│ Division │ They divide the earth into the eastern hemisphere and the western hemisphere. │ They divide the earth into the northern hemisphere and the southern hemisphere. │
│ Number │ These are 360 in number: 180 in the eastern hemisphere and 180 in the western hemisphere. │ These are 180 in number: 90 in the southern hemisphere and 90 in the northern hemisphere. │
│ Importance │ They help to determine the time of a place. │ They help to determine the temperature of a place. │
│ Equality │ All meridians are of equal length. │ Parallels are not of equal length; they become shorter towards the poles. │

4. Answer the following questions in not more than 125 words:
(i) Discuss the criteria used for classifying map projection and state the major characteristics of each type of projection.
Ans. Types of Map Projection:
(a) On the basis of drawing techniques, map projections may be classified as perspective, non-perspective and conventional or mathematical. Perspective projections can be drawn taking the help of a source of light by projecting the image of a network of parallels and meridians of a globe on a developable surface. Non-perspective projections are developed without the help of a source of light or casting a shadow on surfaces which can be flattened. Mathematical or conventional projections are those which are derived by mathematical computation and formulae and have little relation with the projected image.
(b) On the basis of developable surface, a surface can be developable or non-developable. A developable surface is one which can be flattened, and on which a network of latitudes and longitudes can be projected. A globe or spherical surface has the property of a non-developable surface, whereas a cylinder, a cone and a plane have the property of a developable surface. On the basis of the nature of the developable surface, the projections are classified as cylindrical, conical and zenithal projections.
(c) The correctness of area, shape, direction and distances are the four major global properties to be preserved in a map. But none of the projections can maintain all these properties simultaneously. Therefore, according to specific need, a projection can be drawn so that the desired quality may be retained. Thus, on the basis of global properties, projections are classified into equal area, orthomorphic, azimuthal and equi-distant projections. Equal Area Projection is also called homolographic projection. It is that projection in which areas of various parts of the earth are represented correctly. Orthomorphic or True-Shape projection is one in which shapes of various areas are portrayed correctly.
The shape is generally maintained at the cost of the correctness of area. Azimuthal or True-Bearing projection is one on which the direction of all points from the centre is correctly represented. Equi-distant or True Scale projection is that where the distance or scale is correctly maintained. However, there is no such projection, which maintains the scale correctly throughout. It can be maintained correctly only along some selected parallels and meridians as per the requirement. (d) On the basis of location of source of light, projections may be classified as gnomonic, stereographic and orthographic. Gnomonic projection is obtained by putting the light at the centre of the globe. Stereographic projection is drawn when the source of light is placed at the periphery of the globe at a point diametrically opposite to the point at which the plane surface touches the globe. Orthographic projection is drawn when the source of light is placed at infinity from the globe, opposite to the point at which the plane surface touches the globe. The correctness of area, shape, direction and distances are the four major global properties to be preserved in a map. But none of the projections can maintain all these properties simultaneously. Therefore, according to specific need, a projection can be drawn so that the desired quality may be retained. (ii) Which map projection is very useful for navigational purposes? Explain the properties and limitations of this projection. Ans. Mercator's Projection is very useful for navigational purposes. A Dutch cartographer Mercator Gerardus Karmer developed this projection in 1569. The projection is based on mathematical formulae. (a) It is an orthomorphic projection in which the correct shape is maintained. (b) The distance between parallels increases towards the pole. (c) Like cylindrical projection, the parallels and meridians intersect each other at right angle. It has the characteristics of showing correct directions. (d) A straight line joining any two points on this projection gives a constant bearing, which is called a Laxodrome or Rhumb line. (e) All parallels and meridians are straight lines and they intersect each other at right angles. (f) All parallels have the same length which is equal to the length of equator. (g) All meridians have the same length and equal spacing. But they are longer than the corresponding meridian on the globe. (h) Spacing between parallels increases towards the pole. (i) Scale along the equator is correct as it is equal to the length of the equator on the globe; but other parallels are longer than the corresponding parallel on the globe; hence the scale is not correct along them. (j) Shape of the area is maintained, but at the higher latitudes distortion takes place. (k) The shape of small countries near the equator is truly preserved while it increases towards poles. (l) It is an azimuthal projection. (m) This is an orthomorphic projection as scale along the meridian is equal to the scale along the parallel. (a) There is greater exaggeration of scale along the parallels and meridians in high latitudes. As a result, size of the countries near the pole is highly exaggerated. (b) Poles in this projection cannot be shown as 90° parallel and meridian touching them are infinite. (iii) Discuss the main properties of conical projection with one standard parallel and describe its major limitations. Ans. 
A conical projection is one, which is drawn by projecting the image of the graticule of a globe on a developed cone, which touches the globe along a parallel of latitude called the standard parallel. As the cone touches the globe located along AB, the position of this parallel on the globe coinciding with that on the cone is taken as the standard parallel. (a) All the parallels are arcs of concentric circle and are equally spaced. (b) All meridians are straight lines merging at the pole. The meridians intersect the parallels at right angles. (c) The scale along all meridians is true. (d) An arc of a circle represents the pole. (e) The scale is true along the standard parallel but exaggerated away from the standard parallel. (f) Meridians become closer to each other towards the pole. (g) This projection is neither equal area nor orthomorphic. (a) It is not suitable for a world map due to extreme distortions in the hemisphere opposite the one in which the standard parallel is selected. (b) Even within the hemisphere, it is not suitable for representing larger areas as the distortion along the pole and near the equator is larger. (a) This projection is commonly used for showing areas of mid-latitudes with limited latitudinal and larger longitudinal extent. (b) A long narrow strip of land running parallel to the standard parallel and having east-west stretch is correctly shown on this projection. (c) Direction along standard parallel is used to show railways, roads, narrow river valleys and international boundaries. (d) This projection is suitable for showing the Canadian Pacific Railways, Trans-Siberian Railways, international boundaries between USA and Canada and the Narmada Valley.
{"url":"https://mobile.surenapps.com/2020/09/map-projections-solutions.html","timestamp":"2024-11-14T21:27:50Z","content_type":"application/xhtml+xml","content_length":"109370","record_id":"<urn:uuid:6ead066d-4e40-43f9-8654-62a63ba596fb>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00481.warc.gz"}
A car is traveling at a speed of 90 kilometers per hour. What is the car's speed in miles per hour? How many miles will the car travel in 5 hours? In your computations, assume that 1 mile is equal to 1.6 kilometers. Do not round your answers.
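Worked out with the given conversion factor:

$$\frac{90\ \text{km/h}}{1.6\ \text{km/mile}} = 56.25\ \text{miles per hour}, \qquad 56.25\ \text{mph} \times 5\ \text{h} = 281.25\ \text{miles}.$$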
{"url":"https://studydaddy.com/question/a-car-is-traveling-at-a-speed-of-90-kilometers-per-hour-what-is-the-car-s-speed","timestamp":"2024-11-08T14:45:43Z","content_type":"text/html","content_length":"26838","record_id":"<urn:uuid:c37c3bad-0d4c-4884-9700-c7110f196e72>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00838.warc.gz"}
Notation for Software Engineering and CS

CS and Math have a rich and well-defined syntax to express the difference between different primitives (e.g.: literal, set, list, ...). However, since ideas and documentation often need to be expressed in plain text, and there is no official plain-text standard for this notation, conventions are needed. This article attempts to achieve the following goals: 1. An all-in-one-place definition and explanation of the notation for common computer science primitives. 2. For all the primitives in 1., suggest a consistent annotation for plain-text files. 3. A reference place for myself so that I am consistent with my notation.

Elementary Types

Literals are simple values. For example, in int i = 1, i is a literal. Literals start with a lowercase letter. Examples: int i = 1; float pressure = 0.5; string name = "I, Claudius";

An array is a collection of elements. Those elements can be literals, or complex objects. Arrays use square brackets: [1, 2, 3, 4, 1, 2]. They are a collection of elements with no special restrictions (elements can repeat, no specific order). Arrays are usually contiguous in memory. Note that many languages (such as C++) use curly brackets as array initializers -- that conflicts with what is typically used in Mathematics: curly brackets '{' and '}' are usually reserved for sets.

A tuple is a sequence of elements where the order matters. An n-tuple is a sequence of n elements. For example, (1, 2, 3) is a 3-tuple. Since the order matters, (1, 2, 3) and (3, 2, 1) are two distinct tuples. Values can repeat in tuples and are not necessarily ordered. So (5, 5, 5, 1, 4) is a valid tuple. (5, 5) is a distinct tuple from (5). Square brackets '[]' are also sometimes used as a notation for tuples since tuples are very similar to arrays.
• http://en.wikipedia.org/wiki/Tuple

A set is a collection of distinct objects. Think of a set as telling you if an element is present or not. For example, the set of people at a party can either contain or not contain each of your friends once. Sets use curly brackets; {1, 2, 3} is a set of items 1, 2 and 3. Since order does not matter in sets, {1, 2, 3} = {3, 2, 1}; they are considered equivalent. However, it is less confusing to order the elements in a set as a matter of convention, so {1, 2, 3} would be preferred to {3, 2, 1} or {2, 3, 1}, although all three sets are equivalent. Note that {1, 2, 2} is not a valid set since sets do not repeat elements; 2 is present or not, and having it twice in the set is meaningless.

Set Relationship
A ⊆ B indicates that A is a subset of B. A ⊆ B holds true for: A = {1, 2}, B = {1, 2, 3}, since all elements of A are also in B.

Set References
• http://en.wikipedia.org/wiki/Set_(mathematics)
• http://en.wikipedia.org/wiki/Set_theory
• https://en.wikipedia.org/wiki/Set_notation
• http://en.wikipedia.org/wiki/Set-builder_notation
• http://0a.io/0a-explains-set-theory-and-axiomatic-systems-with-pics-and-gifs

Complex Types

Graphs are a set of vertices (singular: vertex, sometimes also called nodes) where some pairs of vertices are connected by edges (sometimes called links). A popular notation for graphs is G = (V, E), where V is a set of vertices which are connected by edges E, which are 2-element subsets of V. For example:
V = {1, 2, 3, 4, 5, 6}
E = {{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}}
... is a valid graph. Since E is a set of sets, it implies that the edges are not directed and that there are no self-loops. In order to have either of those, E would need to be a set of tuples; e.g.: E = {(1,2), (2,1), (2,2), ...}.
Here (2, 1) implies a directed edge from vertex 2 to 1. For a complete yet accessible review of graphs and their use in CS, see MIT6_042, section 5.1.1.

A matrix is a rectangular 2D array of elements arranged in rows and columns. A column is vertical whereas a row is horizontal. A common convention with matrices is that the first number (y) represents the column, and the second (x) the row. In general, columns represent different dimensions: they represent different types of entities, whereas the row elements represent another instance of the same type of entity. Matrices can be represented in plain-text files using a capital letter followed by an underscore, then the column and row number: M_y,x. For example, the last element of a 3x3 matrix 'M' is denoted as M_3,3. The complete matrix can be enumerated by using square brackets. Since it is hard to represent matrices in a text file, the semicolon (;) represents the end of a row. So [1, 2, 3; 4, 5, 6] is the same as
[1, 2, 3
 4, 5, 6]
A matrix transposition is denoted by appending a 'T' after the matrix closing bracket. So [1, 2, 3; 4, 5, 6]T is:
[1, 4
 2, 5
 3, 6]

Logic Symbols
∀: for all
∃: there exists

References and Links
• Commonly used mathematical symbols: http://en.wikipedia.org/wiki/ISO_31-11
• Graph notation and set examples: http://en.wikipedia.org/wiki/Graph_(mathematics)
• Hash functions: http://research.microsoft.com/pubs/64588/hash_survey.pdf, check section 2.1.
• http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-042j-mathematics-for-computer-science-fall-2010/readings/MIT6_042JF10_notes.pdf
• http://en.wikipedia.org/wiki/First-order_logic
{"url":"http://www.grokit.ca/cnt/NotationSoftwareEngineeringAndCS/","timestamp":"2024-11-07T03:32:57Z","content_type":"text/html","content_length":"9878","record_id":"<urn:uuid:df4b964b-9a5a-408e-b355-0c524cc109fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00191.warc.gz"}
BigGanEx: A Dive into the Latent Space of BigGan Using computer algorithms to create art used to be something only a small group of artists also skilled in the arcane arts of programming could do. But with the invention of Deep Learning algorithms such as style transfer and Generative Adversarial Networks (GANs - more on these in a bit), and the availability of open source code and pre-trained models, it is quickly becoming an accessible reality for anyone with some creativity and time. As Wired succinctly put it, “We Made Our Own Artificial Intelligence Art, and So Can You”: “I was soon looking at the yawning blackness of a Linux command line—my new AI art studio. Some hours of Googling, mistyped commands, and muttered curses later, I was cranking out eerie portraits. I may reasonably be considered “good” with computers, but I’m no coder; I flunked out of Codecademy’s easy-on-beginners online JavaScript course. And though I like visual arts, I’ve never shown much aptitude for creating my own. My foray into AI art was built upon a basic familiarity with the command line [and access to pre-existing code and models].” This level of accessibility becoming the norm was made clearer than ever with the release of BigGan, the latest and greatest in the world of GANs. Soon, all sorts of people who had nothing to do with its creation were playing around with it and spawning strange and wonderful AI creations. And these creations were so esoteric and interesting, that we were inspired to write this article highlighting them. BigGan Art by Niels Justesen (Source) A Primer on Image Synthesis Before we dive into the survey of fun BigGan creations, let’s get some necessary technical exposition out of the way (those already aware of these details can freely skip). Image Synthesis refers to the process of creating realistic images from random input data. This is done using generative adversarial networks (GANs), which was introduced in 2014 by Ian Goodfellow et al. GANs mainly consist of two networks: the discriminator and the generator. The discriminator is tasked with deciding whether an input image is real or fake, whereas the generator is given random noise and attempts to generate realistic images from the learnt distribution of the training data. 4 years of GAN progress (source: https://t.co/hlxW3NnTJP ) pic.twitter.com/kmK5zikayV — Ian Goodfellow (@goodfellow_ian) March 3, 2018 GANs are trained on datasets with hundreds of thousands of images. However, a common problem when working on a huge dataset is the stability of the training process. This problem causes the generated images to be not realistic or contain some artifacts. So, BigGan was introduced to resolve these issues. The Bigger the Better In order to capture the fine details of the synthesized images we need to train networks that are big i.e contain a lot of trainable parameters. Mainly, GANs contain a moderate number of trainable parameters i.e 50 - 100 million. In September 2018 a paper titled "Large Scale GAN Training for High Fidelity Natural Image Synthesis" by Andrew Brock et al. from DeepMind was published. BigGan are a scaled up version of previous approaches by providing larger networks and larger batch size. According to the paper: GANs benefit dramatically from scaling, and we train models with two to four times as many parameters and eight times the batch size compared to prior state of the art. The largest BigGan model has a whooping 355.7 million parameters. The models are trained on 128 to 512 cores of a Google TPU. 
It provides a state of the art results for image synthesis with IS of 166.3 and FID of 9.6, improving over the previous best IS of 52.52 and FID of 18.65. FID (the lower the better) and IS (the higher the better) are metrics to quantify the the quality of synthesized images. The results are impressive! Just look for yourself: BigGan models are conditional GANs, meaning they take the class index as an input to generate images from the same category. Moreover, the authors used a variant of hierarchical latent spaces, where the noise vector is inserted into multiple layers of the generator at different depths. This allows the latent vector to act on features extracted from different levels. In less jargon-y terms it makes it easier for the network to know what to generate. The authors characterized instabilities related to large-scale GANs, and created solutions to decrease the instabilities - we won’t get into the details, except to say the solutions work but also have a high computational cost. The Latent Space The latent vector is a a lower dimensional representation of the features of an input image. The space of all latent vectors is called the latent space. The latent vector denoted by the symbol $z$, represents an intermediate feature space in the generator network. A generator network follows the architecture of an autoencoder which contains two networks. The first part, the encoder, encodes the input images into a lower dimensional representation (latent vector) using down-sampling. The second part, called the decoder, reconstructs the shape of the image using upsampling. The size of the latent vector is lower than the size of the input (ie, an image) of the encoder. After training a generator network we could discard the encoder part and use the latent vector to construct the generated images. This is useful because it makes the size of the model smaller. The latent vector has a 1-dimensional shape and is usually sampled from a certain probability distribution, where nearby vectors represent similar generated images. BiGans latent vectors are sampled from a truncated normal distribution. The truncated normal distribution is a normal distribution where the values outside the truncation value are resampled to lie again the region inside the truncation region. Here is a simple graph showing truncated normal distribution in the region $[-2, 2]$: Apparently, the points are more dense near zero and more sparse near the truncation region. Reproducible Results Makes for Accessible Art So, BigGan is a cool model - but why has it been so easy for so many to play with it despite its huge size? In a word: reproducibility. In order to test any machine learning model first you need an implementation of the model that is simple to import and you need the compute power. The BigGan’s generator network was released for public on TF Hub, a library for reusable machine learning models implemented in TensorFlow. A notebook was also posted on seedbank, a website by Google that collects many notebooks on different fileds of machine learning. You can open the notebook using google collaboratory, and play around with the model even if you don’t own a GPU; collaboratory offers a kernel with a free GPU for research purposes. The notebook illustrates how to import three BigGan models with different resolutions 128, 256 and 512. 
Note that each model takes three inputs: the latent vector (a random seed to generate distinctive images), the class, and a truncation parameter (which controls the variation of the generated images - see the last section for a detailed explanation). Making such models public for artists and researchers has made it easy to create some cool results, without in-depth expertise on the concepts or access to Google-scale resources.

Making Art with BigGan

Since the release of the BigGan model by DeepMind, a lot of researchers and artists have been experimenting with it.

Phillip Isola, one of the authors of the pix2pix and CycleGAN papers, shows how to create a nice 3D effect on BigGan by interpolating between two different poses of a certain class.

Mario Klingemann, an artist resident at Google, shows a nice rotary motion in BigGan's latent space by using the sine function.

Joel Simon shows how to generate really nice and colorful art by breeding between different classes using GANbreeder.

Devin Wilson, an artist, created a simple style transfer effect between different classes of animals by keeping the noise seed and the truncation value the same.

Gene Kogan, an artist and a programmer, created a mutation of different classes by means of simple mathematical operations like addition.

And many, many, many more.

Understanding BigGan’s Latent Space

Let’s dive a bit more into how to create some cool experiments in the latent space. Manipulating the latent vector and truncation values can give us some indications about the distribution of the generated images; in the last few weeks I tried to understand the relations between these variables and the generated images. If you want to replicate these experiments you can run this notebook. Let us take a look at some examples.

In this experiment we breed between two different classes -- i.e. we create intermediate classes using a combination of two different classes. The idea is simple: we just average the encoded classes and use the same seed for the latent vector. Given two classes $y_1$ and $y_2$ we use the function $$\hat{y} = ay_1 + (1-a)y_2$$ Note that using $a = 0.5$ combines the two categories. If $a<0.5$ the generated image will be closer to $y_2$ and if $a>0.5$ it will be similar to $y_1$.

First class (left). Interpolated image with $a=0.5$ (middle). Second class (right).

Background Hallucination

In this experiment we try to change the background of an arbitrary image while keeping the foreground the same. Note that values near zero in the latent vector mainly control the dominant class in the generated image. We can use the function $f(x) = \sin(x)$ to resample different backgrounds because it preserves values near zero, since $\sin(x) \approx x$ for small $x$.

Pairs of hallucinated backgrounds for the same image.

Natural zoom

We try to zoom into a certain generated image to observe its fine details. To do that, we need to increase the weights of the latent vector. This doesn't work unless each value in the latent vector has either the value 1 or -1. This can be done by dividing each value in the latent vector by its absolute value, namely $\frac{z}{|z|}$. Then we can provide scaling by multiplying by increasingly negative values such as $-a \frac{z}{|z|}$.

Zooming at different levels by increasing the value of $a$.

Interpolation refers to the process of finding intermediate data points between specific known data points in the space. The closer the data points, the smoother the transition between these points.
The simplest form of an interpolation function is a linear interpolation. Given two vectors $v_1$ and $v_2$ and $N$ as the number of interpolated vectors, we evaluate the interpolation function as $$F(v_1, v_2, N) = x v_2 + (1-x) v_1 , \quad x \in \left\{0,\frac{1}{N}, \cdots, \frac{N}{N}\right\}$$ Note that if $x=0$ the interpolated point is $v_1$, and if $x=1$ it is $v_2$. I used interpolations to create a nice zooming effect called “The Life of a Beetle”.

The life of a beetle following (5/N) pic.twitter.com/hGvoJvLDMA — Zaid Alyafeai (زيد اليافعي ) (@zaidalyafeai) November 21, 2018

Truncation Trick

Previous work on GANs samples the latent vector $z \sim \mathcal{N}(0, I)$ from a normal distribution with the identity covariance matrix. On the other hand, the authors of BigGan used a truncated normal distribution on a certain region $[-a, a]$ for $a \in \mathbb{R}^+$, where the sampled values outside that region are resampled to fall in the region. This resulted in better IS and FID scores. The drawback is a reduction in the overall variety of vector sampling. Hence there is a trade-off between sample quality and variety for a given network G. Notice that if $a$ is small then the generated images from a truncated normal distribution will not vary a lot because all the values will be near zero. In the following figure we vary the truncation value from left to right with an increasing value. For each pair we use the same truncation value but we resample a new random vector. We notice that the variation of the generated images increases as we increase the truncation value.

Final Thoughts

The availability of reusable models, open source code, and free compute power has made it easy for researchers, artists and programmers to play around with such models and create some cool experiments. When BigGan was introduced on Twitter I had zero knowledge about how it works. But experimenting with DeepMind's shared notebook was an incentive for me to try to understand more than what is mentioned in the original paper and to share my thought process with the community. However, we are still far from understanding the hidden world of the latent space. How does the latent vector create the generated image? Can we define more controlled interpolated images by adjusting certain features of the latent vector? Maybe you as a reader will help answer such questions by exploring the available notebooks and trying to implement your own ideas.

For attribution in academic contexts or books, please cite this work as Zaid Alyafeai, "BigGanEx: A Dive into the Latent Space of BigGan", The Gradient, 2018.

BibTeX citation:

@article{alyafeai2018bigganex,
  author = {Alyafeai, Zaid},
  title = {BigGanEx: A Dive into the Latent Space of BigGan},
  journal = {The Gradient},
  year = {2018},
  howpublished = {\url{https://thegradient.pub/bigganex-a-dive-into-the-latent-space-of-biggan/}},
}

If you enjoyed this piece and want to hear more, subscribe to the Gradient and follow us on Twitter.
{"url":"https://thegradient.pub/bigganex-a-dive-into-the-latent-space-of-biggan/","timestamp":"2024-11-03T21:33:07Z","content_type":"text/html","content_length":"114158","record_id":"<urn:uuid:af43782c-434a-437c-95b0-a4517b5c2614>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00618.warc.gz"}
Exercise 2.33 of SICP

Exercise 2.33: Fill in the missing expressions to complete the following definitions of some basic list-manipulation operations as accumulations:

(define (map p sequence)
  (accumulate (lambda (x y) <??>) nil sequence))

(define (append seq1 seq2)
  (accumulate cons <??> <??>))

(define (length sequence)
  (accumulate <??> 0 sequence))

where accumulate is defined as:

(define (accumulate fn init-value items)
  (if (null? items)
      init-value
      (fn (car items) (accumulate fn init-value (cdr items)))))

The completed definitions:

(define (map p sequence)
  (accumulate (lambda (x y) (cons (p x) y)) '() sequence))

(define (append seq1 seq2)
  (accumulate cons seq2 seq1))

(define (length seq)
  (accumulate (lambda (x y) (+ 1 y)) 0 seq))

I find it interesting how length is created by discarding the x in the lambda, thus discarding (car items).
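For readers more comfortable outside of Scheme, the same accumulation pattern can be sketched in Python as a right fold; this is only an illustrative translation of the idea, not part of the exercise.

    def accumulate(fn, init_value, items):
        # Right fold: combine the head with the accumulation of the tail.
        if not items:
            return init_value
        return fn(items[0], accumulate(fn, init_value, items[1:]))

    def my_map(p, sequence):
        return accumulate(lambda x, y: [p(x)] + y, [], sequence)

    def my_append(seq1, seq2):
        return accumulate(lambda x, y: [x] + y, seq2, seq1)

    def my_length(sequence):
        # The first argument (the current element) is ignored, mirroring the Scheme version.
        return accumulate(lambda x, y: 1 + y, 0, sequence)

    print(my_map(lambda n: n * n, [1, 2, 3]))   # [1, 4, 9]
    print(my_append([1, 2], [3, 4]))            # [1, 2, 3, 4]
    print(my_length([7, 8, 9]))                 # 3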
{"url":"http://danboykis.com/posts/exercise-2-33-of-sicp/","timestamp":"2024-11-02T06:00:00Z","content_type":"text/html","content_length":"10369","record_id":"<urn:uuid:ef5b6077-0e32-42ab-ad1a-91afa6706f44>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00522.warc.gz"}
Egg Dropping | Brilliant Math & Science Wiki Egg dropping refers to a class of problems in which it is important to find the correct response without exceeding a (low) number of certain failure states. In a toy example, there is a tower of \(n \) floors, and an egg dropper with \(m\) ideal eggs. The physical properties of the ideal egg is such that it will shatter if it is dropped from floor \(n^*\) or above, and will have no damage whatsoever if it is dropped from floor \(n^*-1\) or below. The problem is to find a strategy such that the egg dropper can determine the floor \(n^*\) in as few egg drops as possible. This problem has many applications in the real world such as avoiding a call out to the slow HDD, or attempting to minimize cache misses, or running a large number of expensive queries on a database. \(2\) Eggs, \(100\) Floors Suppose you have two eggs and you want to determine from which floors in a one hundred floor building you can drop an egg such that is doesn't break. You are to determine the minimum number of attempts you need in order to find the critical floor in the worst case while using the best strategy. • If the egg doesn't break at a certain floor, it will not break at any floor below. • If the eggs break at a certain floor, it will break at any floor above. • The egg may break at the first floor. • The egg may not break at the last floor. One would be tempted to solve this problem using binary search, but actually this is not the best strategy, and you will see why. Try to work it out yourself and answer this question, or read on to see a detailed explanation of the problem. You are asked to find the highest floor of a 100-story building from which an egg can be dropped without breaking. You have two eggs to test-drop from the building, and you can continue to drop your eggs until they break. From which floor should you drop your first egg to optimize your chances of dropping the eggs as few times as possible? Using binary search, you would have to drop the first egg from the \(50^\text{th}\) floor. If it doesn't break, you would drop the same egg from the \(75^\text{th}\) and so on; in the best case scenario you would be able to cover all floors with 7 drops. But what if the egg broke in your first attempt, i.e. in the \(50^\text{th}\) floor? If this happens, you are obligated to drop the remaining egg from each floor until finding \(n^*\), which has the potential for \(49\) drop tests, which is \(O(n)\). Remember, the problem is to determine the critical floor \(n^*\), so you can't let your last egg break before finding it. Dropping your last egg from the \(25^\text{th}\) floor would be quite risky because if it broke you wouldn't be able to determine the critical floor. It is clear that binary search is not the optimal solution. Knowing that, what is the best strategy? From which floor should you start? What's the minimum number of drops you would have to do in the worst case while using the best strategy? Starting from the \(14^\text{th}\) floor is the best strategy because, as we will show, the number of attempts in the worst case is always 14. • What if the first egg breaks at floor 14? If the first egg breaks at the \(14^\text{th}\) floor, then we should check the first floor, then the second one, until the \(13^\text{th}\) floor. Doing this the total number of attempts would be 14. • What if it doesn't break? Then you should check the \(27^\text{th}\) floor. Why? 
Because if it breaks, you would have to check all the floors from the \(15^\text{th}\) until the \(26^\text{th}\) one (thirteen floors), which keeps the total number of attempts at 14 \(\big(\)first attempt at the \(14^\text{th}\) floor, second at the \(27^\text{th}\) floor, and the twelve remaining drops from the \(15^\text{th}\) floor until the \(26^\text{th}\) floor\(\big).\) And if it doesn't break, you would have to check the \(39^\text{th}\) floor; if it breaks you would have to check all the floors from the \(28^\text{th}\) until the \(38^\text{th}\) one. \(\big(\) Remember, one attempt at the \(14^\text{th}\) floor, the second attempt at the \(27^\text{th}\) floor, the third attempt at the \(39^\text{th}\) floor, and the 11 remaining attempts at the floors \(\ {28,29,30,31,32,33,34,35,36,37\}\) and 38, totaling 14 attempts in this case.\(\big)\) Using the same reasoning, you should check the \(50^\text{th}\) floor, the \(60^\text{th}\), the \(69^\text{th}\), the \(77^\text{th}\), the \(84^\text{th}\), the \(90^\text{th}\), the \(95^\text{th} \), the \(99^\text{th}\) and finally the \(100^\text{th}\) one. See? Using this strategy you would cover all the floors and the number of attempts would never be greater than 14, even in the worst Wonder how this table might look if there were 3 eggs!!... Using other strategies, like the binary search, fewer attempts would be required in some cases (like in our first example), but it would require a high number of attempts in the worst case \((\)in our second example, where the egg broke in the \(50^\text{th}\) floor and 50 drops were necessary in the worst case\().\) Therefore, we can conclude that using another strategy you would need more than 14 attempts in the worst case. Now let's try to find a solution for the case where you have 2 eggs and a building with \(k\) floors. A good way to start this problem is to ask "Are we able to cover all the floors with \(x\) drops?" Suppose that in the best strategy, the number of drops in the worst case is \( x \). Then, you should start at the \(x^\text{th}\) floor, because if the egg breaks, you will have to check floors \ (1,2,3,\ldots,x-2\) and \(x-1\), so the total number of drops will be \(x\). If it doesn't break, you will have to check the \(\big(x+(x-1)\big)^\text{th}\) floor. If the egg breaks, you will have to check the floors \(x+1,x+2, \ldots, \big(x+(x-1)-1\big)\). Hence, the number of drops will be \(\big(x+(x-1)-1\big)-(x+1)+1+2=x\). Do you realize what we are doing? Based on the assumption that the number of drops will always be \(x\) in the worst case, we find the floors where we should drop the egg. The crucial point here is understanding that we are not trying to find the minimum number of drops knowing the best strategy; actually, we are trying to find the best strategy supposing that the minimum number of drops is \( x \), and we have to determine if covering all the floors using at most \( x \) attempts is possible or not. We can find an analytical solution to this problem: Suppose the minimum number of attempts in the worst case, while using the best strategy, is \(x\). In our attempt, we will drop the egg at the \( x^\text{th}\) floor, covering \(x\) floors, then we will drop it at the \(\big(x+(x-1)\big)^\text{th}\) floor, covering \(x-1\) floors, and the third drop would be at the \( \big(x+(x-1)+(x-2)\big)^\text{th}\) floor, covering \( x-2\) floors. 
We can see that using this strategy we would cover \[ x+(x-1)+(x-2)+(x-3) + \cdots + 2+1=\frac{x(x+1)}{2}\] If we are able to cover \(\frac{x(x+1)}{2}\) floors using this strategy and the building has \(k\) floors, we just have to find the minimum value of \(x\) such that \[\frac{x(x+1)}{2} \geq k. \] \[ x^2+x-2k=0 \implies x = \frac{-1+\sqrt{1+8k} }{2}. \] But \(x\) must be an integer, implying \[ x =\left\lceil \frac{-1+\sqrt{1+8k} }{2} \right\rceil. \] In our first example, \(k=100\), so plugging it into the previous equation gives \(\lceil 13.65\rceil = 14\). Suppose you have \(N \) eggs and you want to determine from which floors in a \( k \)-floor building you can drop an egg such that is doesn't break. You are to determine the minimum number of attempts you need in order find the critical floor in the worst case while using the best strategy. Now the problem is a bit more complicated because we must find a general solution for any number of eggs and floors. There are three different solutions: • The recursive solution: This solution is more straightforward and can be implemented with ease, but it is also the slowest one. Using this solution on programming contests is not advisable due to its bad performance. • The dynamic programming solution: This solution is similar to the previous one, but it's faster and may be used to solve the problem for medium or small values of \(k \) and \( N\). • A solution that combines both binary search and recursion: This is the faster one, and once the strategy is understood, it is rather easy to implement. Imagine the following situation: you have \(n \) eggs and \( h\) consecutive floors yet to be tested, and afterward you drop the egg at floor \(i\) in this sequence of \(h\) consecutive floors: • If the eggs break: The problem reduces to \( n-1 \) eggs and \( i-1 \) remaining floors. • If it doesn't break: The problem reduces to \( n \) eggs and \(h-i\) remaining floors. This is an important point. The floors we want to test aren't important; in fact, the number of remaining floors is what matters. For example, testing the floors between 1 and 20 (both 1 and 20 included) would require the same number of drops to test the floors between 21 and 40, or between 81 and 100. In all three situations, we tested 20 floors. Now we can define a function \( W(n,h)\) that computes the minimum number of drops required to find the critical floor in the worst case scenario, whilst using the best strategy. 
We can codify the above findings to find the following recursion for determining \( W(n,h)\): Recursion for the egg dropping puzzle: \[ W[n,h]=1+\min\Big(\max \big(W(n-1,i-1),W(n,h-i)\big)\Big)\] \((\)Pay attention: \(n\)=current number of eggs, \(N\)=total number of eggs, \(h\)=number of consecutive floors that still have to be tested, \(k\)=number of floors in the building.\()\) The base cases are as follows: Because we need \(h\) drops if only \(1\) egg remains, \(W(1,h)=h.\) Because we need only one drop to test one floor, regardless of the number of eggs, \(W(n,1)=1.\) Because 0 floors requires no drops, \(W(n,0)=0.\) The pseudo-code for this algorithm is given by 1 def drops(n,h): 2 if(n == 1 or h == 0 or h == 1): 3 return h 4 end if 6 minimum = ∞ 8 for x = 1 to h: 9 minimum = min(minimum, 10 1 + max(drops(n - 1, x - 1), drops(n, h - x)) 11 ) 12 end for 14 return minimum C++ code that uses the recursive solution: 2 #include <iostream>#include <limits.h>using namespace std; 4 //Compares 2 values and returns the bigger one 5 int max(int a,int b) { 6 int ans=(a>b)?a:b; 7 return ans; 8 } 10 //Compares 2 values and returns the smaller one 11 int min(int a,int b){ 12 int ans=(a<b)?a:b; 13 return ans; 14 } 16 int egg(int n,int h){ 18 //Base case 19 if(n==1) return h; 20 if(h==0) return 0; 21 if(h==1) return 1; 23 int minimum=INT_MAX; 25 //Recursion to find egg(n,k). The loop iterates i: 1,2,3,...h 26 for(int x=1;x<=h;x++) minimum=min(minimum,(1+max(egg(n,h-x),egg(n-1,x-1)))); 28 return minimum; 29 } 31 int main() 32 { 33 int e;//Number of eggs 34 int f;//Number of floors 36 cout<<"Egg dropping puzzle\n\nNumber of eggs:"; 38 cin>>e; 40 cout<<"\nNumber of floors:"; 42 cin>>f; 44 cout<<"\nNumber of drops in the worst case:"<<egg(e,f); 46 return 0; 47 } The previous solution is very slow, and the same function is called more than once, which is not necessary. However, due to its overlapping subproblems, and to its optimal substructure property (we can find the solution to the problem using the subproblem's optimal solutions), we can solve the problem via dynamic programming. We can avoid recalculation of the same subproblems by memoizing the function egg(n,h) with a two-dimensional array numdrops[n][h]. Then, we just have to fill it up. 
Here's the pseudocode: 1 def solvepuzzle(N,k): 2 for i = 1 to N 3 numdrops(i,1) = 1 4 numdrops(i,0) = 0 5 end for 7 for i=1 to k 8 numdrops(1, i) = i 9 end for 11 for i = 2 to N 12 for j = 2 to k 14 numdrops[i][j] = ∞ 15 minimum = ∞ 17 for x = 1 to j 18 minimum = min(minimum, 19 1 + max(numdrops(i-1,x-1),numdrops(i,j-x)) 20 ) 21 end for 23 numdrops[i][j] = minimum 25 end for 26 end for 28 return numdrops(N,k) C++ code: 2 #include <iostream>#include <limits.h>using namespace std; 4 //Compares 2 values and returns the bigger one 5 int max(int a,int b) { 6 int ans=(a>b)?a:b; 7 return ans; 8 } 10 //Compares 2 values and returns the smaller one 11 int min(int a,int b){ 12 int ans=(a<b)?a:b; 13 return ans; 14 } 16 int solvepuzzle(int n,int k){ 18 int numdrops[n+1][k+1]; 19 int i,j,x; 21 for(i=0;i<=k;i++) numdrops[0][i]=0; 22 for(i=0;i<=k;i++) numdrops[1][i]=i; 23 for(j=0;j<=n;j++) numdrops[j][0]=0; 25 //This loop fills up the matrix 26 for(i=2;i<=n;i++){ 27 for(j=1;j<=k;j++){ 29 //Defines the minimum as the highest possible value 30 int minimum=INT_MAX; 32 //Evaluates 1+min{max(numeggs[i][j-x],numeggs[i-1][x-1])), for x:1,2,3...j-1,j} 33 for(x=1;x<=j;x++) minimum=min(minimum,(1+max(numdrops[i][j-x],numdrops[i-1][x-1]))); 35 //Defines the minimum value for numeggs[i][j] 36 numdrops[i][j]=minimum; 37 } 39 } 41 cout<<"\nArray:\n\n"; 43 //Prints numeggs 44 for(i=0;i<=n;i++){ 45 for(j=0;j<=k;j++){ 46 cout<<numdrops[i][j]<<" "; 47 } 48 cout<<"\n"; 49 } 51 cout<<"\nNumber of trials in the worst case using the best strategy:\n"; 53 return numdrops[n][k]; 54 } 56 int main() 57 { 58 int e;//Number of eggs 59 int f;//Number of floors 61 cout<<"Egg dropping puzzle\n\nNumber of eggs:"; 63 cin>>e; 65 cout<<"\nNumber of floors:"; 67 cin>>f; 69 cout<<solvepuzzle(e,f); 71 return 0; 72 } Before advancing to the next section, we must see some useful mathematical relations related to binomials. We know that \[C^n_k=C(n,k)= \binom{n}{k} = \frac{n!}{k!(n-k)!}.\] We also know that the Pascal triangle is And we can easily find a recursion if we write the Pascal triangle in this way: By looking at the table or by a simple mathematical proof we get the following recurrence: \[ C ( n , k ) = C( n - 1 , k ) + C ( n - 1 , k - 1 ) . \] And the base cases are \[ C(n,0) = \frac{n!}{0!(n-0)!} = 1 \quad \text{and} \quad C(n,n )= \frac {n!} {n!(n-n)!} = 1. \] With this knowledge in hand, let's define a function \( f(d,n) \) that represents the number of floors we can cover using \( n \) eggs and with \(d \) remaining drops. If the egg breaks, we will be able to cover \( f(d-1,n-1) \) floors; otherwise we'll be able to cover \( f(d-1,n) \) floors. Hence, the total number of floors we will be able to cover is \[ f(d,n) = 1 + f(d-1,n-1) + f(d-1,n) . \] We must find a function \( f(d,n) \) that's a solution for this recursion. First, we will define an auxiliary function \( g(d,n) \): \[ g(d,n)=f(d,n+1)-f(d,n) . \] Plugging it into our first equation gives \[ g(d,n) &= f(d,n+1)-f(d,n) \\ &= f(d-1,n+1)+f(d-1,n)+1-f(d-1,n)-f(d-1,n-1)-1\\ &=[f(d-1,n+1)-f(d-1,n)]+[f(d-1,n)-f(d-1,n-1)] \\ &=g(d-1,n)+g(d-1,n-1). \] This is precisely the same recursion that we saw in the previous section, and thus the function \( g(d,n) \) can be written as \[ g(d,n)= \binom{d}{n}. \] But we have a problem: \( f(0,n) \) is 0 for every \( n \), as well as \( g(0,n) \), according to the relation between \(f\) and \(g\). However, a contradiction occurs when \(n=0\) because \(g(0,0)=\ binom{0}{0}=1\). But \(g(0,n)\) should be \(0\) for every \(n\)! 
We can fix this problem by defining \(g(d,n)\) as follows: \[ g(d,n)= \binom{d}{n+1}. \] And the recursion is still valid (you can check it by yourself!). Now, using a telescopic sum for \( f(d,n) \), we can write it as \[ f(d,n)= &[f(d,n)-f(d,n-1)]\\ +&[f(d,n-1)-f(d,n-2)]\\ +&\cdots \\ +&[f(d,1)-f(d,0)] \\ +&f(d,0). \] We know that \(f(d,0)=0\), and therefore \[ f(d,n)=g(d,n-1)+g(d,n-2)+\cdots+g(d,0).\] And we also know that \[ g(d,n)=\binom{d}{n+1}. \] \[ g(d,n-1)+g(d,n-2)+\cdots+g(d,0)=\binom{d}{n}+\binom{d}{n-1}+\cdots+\binom{d}{1}.\] \[ f(d,n) = \sum_{i=1}^{n}{\binom{d}{i}} .\] Now that we have a nice formula for \( f(d,n),\) how can we find the minimum number of drops? It's simple! We know that \( f(d,N) \) is the number of floors we can cover in the building with \(k\) floors using \(N\) eggs and no more than \( d \) drops in the worst cases. We simply have to find a value for \( d \) such that \[ f(d,N) \geqslant k. \] Using our last formula, \[ \sum_{i=1}^{N}{\binom{d}{i}} \geqslant k. \] This solution is very fast. We can do a linear search to find a value for \( d \), or we can binary search it for an even faster solution! C++ code: 2 #include <iostream>#include <math.h>using namespace std; 4 //Evaluates C(n,k) and verifies if it's greater than or equal to k 5 long long binomial(int x,int n,int k){ 7 int i; 8 long long int answer=0; 9 double aux=1; 11 //Calculates C(n,k) using the formula: C(n,k): sum_i_0^k {(n-i+1)/i} 12 for(i=1;i<=n;i++){ 14 aux*=(float)x+1-i; 15 aux/=(float)i; 16 answer+=aux; 18 if(answer>k) break; 19 } 21 return answer; 22 } 24 int main() 25 { 26 int n; //Number of eggs 27 int k; //Number of floors 29 cout<<"Egg dropping puzzle: ( O(n log k) solution )\n\n"; 31 cout<<"Number of floors:"; 32 cin>>k; 34 cout<<"\nNumber of eggs:"; 35 cin>>n; 37 //Binary search variables: 38 //Mid: middle 39 //Upper: upper limit 40 //Inf: inferior limit 42 int mid,upper,inf; 44 upper=k; 45 inf=0; 46 mid=(upper+inf)/2; 48 //Binary search 49 while(upper-inf>1){ 51 //Finds the middle 52 mid=inf+(upper-inf)/2; 54 //Define new limits 55 if(binomial(mid,n,k)<k) inf=mid; 56 else upper=mid; 58 } 60 cout<<"\nNumber of drops in the worst case:"<<inf+1<<"\n"; 61 } • DP solution: • Solution using binomials:
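As a cross-check of the binomial formula derived above, here is a small Python sketch that computes \( f(d,n) = \sum_{i=1}^{n}\binom{d}{i} \) and searches for the smallest number of drops \( d \) with \( f(d,N) \geqslant k \). It mirrors the C++ solution (a linear scan is used instead of the binary search for brevity) and is meant only as an illustration.

    from math import comb

    def floors_covered(d, n):
        # f(d, n) = sum_{i=1}^{n} C(d, i): floors coverable with d drops and n eggs.
        return sum(comb(d, i) for i in range(1, n + 1))

    def min_drops(n_eggs, k_floors):
        # Smallest d such that f(d, n_eggs) >= k_floors; a binary search over d also works.
        d = 0
        while floors_covered(d, n_eggs) < k_floors:
            d += 1
        return d

    print(min_drops(2, 100))   # 14, matching the two-egg example above
    print(min_drops(3, 100))   # 9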
{"url":"https://brilliant.org/wiki/egg-dropping/?subtopic=algorithms&chapter=dynamic-programming","timestamp":"2024-11-05T16:57:30Z","content_type":"text/html","content_length":"92541","record_id":"<urn:uuid:1be026bf-ed02-404e-bba2-36caef015904>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00205.warc.gz"}
Introductory Work due to Friction equals Change in Mechanical Energy Problem Introductory Work due to Friction equals Change in Mechanical Energy Problem (8:58) The equation Work due to Friction equals Change in Mechanical Energy can often be confusing for students. This video is a step-by-step introduction in how to use the formula to solve a problem. This is an AP Physics 1 topic. Content Times: 0:09 The problem 1:29 Why we can use this equation in this problem 1:52 Expanding the equation 2:29 Identifying Initial and Final Points and the Horizontal Zero Line 3:00 Substituting into the left hand side of the equation 4:05 Deciding which Mechanical Energies are present 4:59 Where did all that Kinetic Energy go? 5:27 Identifying which variables we know and do not know 5:58 Solving for the Force Normal 6:57 Substituting Force Normal back into the original equation 8:09 Why isn’t our answer negative?
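The specific numbers from the video are not reproduced here, but the equation it expands follows this general pattern (a sketch, assuming kinetic friction acting over a displacement \(d\)):

\[ W_{friction} = \Delta ME = \left(KE_f + PE_f\right) - \left(KE_i + PE_i\right) \]
\[ W_{friction} = F_f \, d \, \cos 180^{\circ} = -\mu_k F_N d \]

Once the normal force \(F_N\) is found from the free-body diagram, the remaining unknown can be solved for. Both sides of the equation are negative (friction removes mechanical energy), which is why the final answer itself does not come out negative.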
{"url":"https://www.flippingphysics.com/intro-wf-problem.html","timestamp":"2024-11-05T09:59:58Z","content_type":"text/html","content_length":"41067","record_id":"<urn:uuid:c97898ec-6b8d-4cdb-8ea9-5215cd1af46a>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00820.warc.gz"}
Ship Resistance/Components of Resistance - Wikibooks, open books for an open world A ship moves on the surface of water. Water is normally taken to be an incompressible fluid, with low viscosity. However, in order to study the components of resistance, let us begin by assuming that the ship is submerged in an ideal fluid. We then see what changes if the fluid is now viscous and finally look at what happens if we bring the ship to the surface. Hydrostatic equilibrium Let us assume that the ship is submerged in an infinitely large ideal fluid, so that the ship is far away from the surface. The forces acting on the ship in static equilibrium are the weight of the ship acting at the centre of gravity, and the pressure forces acting all along the surface, normal to the surface. If the body is neutrally buoyant, then these forces will be equal and opposite. Therefore, the net effect of the hydrostatic forces is to oppose the weight of the body. In fact, this is what is known as the Archimedes' Principle. If the body is in motion, we first change our frame of reference so that we move with the body and therefore see the fluid in motion in the opposite direction. Now, the pressure is not purely hydrostatic. Due to the motion of the fluid, and the relation between pressure and velocity as given by the Bernoulli's Equation, a hydrodynamic pressure is set up. This pressure varies along the body, and is maximum at the start and end of the body. The forces acting on the body are everywhere normal to the body. We can split these forces into components along the direction of motion, and transverse to the direction of motion. Due to symmetry of the body, we can see that the transverse forces are in pairs, opposite and equal, and therefore will cancel out. There need not be symmetry in the longitudinal direction, but because the discontinuity in the flow, which was caused by the body, starts and ends with the body, the net opposing forces at the fore end will be the same as the net supporting forces at the aft. Therefore, the net force acting on the body will be zero. This paradox, that a body moving in an ideal fluid in steady state, experiences no resistance, was first proposed by D'Alembert and is known as D'Alemberts paradox. Body in Viscous Fluid If we now relax our assumption about the viscosity of the fluid, we need to account for viscous forces. Due to viscosity, the tangential velocity of water relative to the ship is zero at all points on the surface. As we move away from the surface, the velocity gradually increases until it reaches the ideal fluid velocity at some distance away from the body. This layer of high velocity gradients is called the boundary layer. Now the viscous force is directly proportional to the velocity gradients. Hence, a viscous force, opposite in direction to the velocity, acts on the body through the boundary layer. This force which acts on the surface of the body is called drag. Drag can be studied by decomposition. The mere presence of a surface leads to a drag, which is called the 2D drag. A ship also has a form, and the two ships with the same wetted surface area, but differing in form, will have different drag. This drag is called the form drag. The formation of the boundary layer also has an effect on the hydrodynamic pressure forces. As we said, the velocity reaches the ideal fluid velocity at a distance away from the surface. This distance is called the boundary layer thickness. The thickness increases from the bow of the ship to the stern. 
The pressure forces now act on the effective body, which is the boundary layer. The net force acting on this body is zero, but because the boundary layer thickness is not uniform, the ship faces the same resisting force, but gets only a part of the supporting stern pressure forces. Hence, there is a net resisting pressure force on the ship because of the effect of viscosity. This is called the viscous pressure drag. In addition to the resistance in the ideal and viscous fluids, when the body is at the surface it will subjected to wind resistance, body waves which generates when the body moves and the fluid waves Other components of resistance 1. Induced Resistance: If a vessel moves with a leeway, like in turn or when there is a wind force component sidewards, a lift force (directed sidewards) is developed. Associated with the lift as an induced resistance, which can be considerable, especially for sailing yachts and vessels. When the hull moves slightly sidewards a high pressure is developed on one side (leeward) and a low pressure on the other (windward). The pressure difference gives rise to a flow from the high to the low pressure, normally under the bottom or tip of the keel and rudder, and longitudinal vortices are generated. These vortices contain energy left behind and are thus associated with a resistance component, the Induced Resistance. 2. Appendage Resistance: This resistance is mainly the viscous resistance, hence can be included in the viscous resistance. There are reasons, however, to treat this component separately. First, the Reynold's number, based on the chord length of brackets, struts, etc is considerably smaller than that of the hull herself and therefore a separate scaling is required. Second, the appendages are normally streamlined sections, for which separate empirical relations apply. For sailing yachts the correct shape of the appendage sections is of utmost importance for good performance, particularly since these appendages normally operate at an angle of attack. 3. Blockage Effect: In restricted waters the flow around the hull and the wave making are influenced by the presence of the confining surface. This could be the seabed in shallow water or the banks of a canal. All resistance components may be influenced. Often the effect is modeled as an additional resistance component due to the Blockage Effect of the confining wall. 4. Air Resistance: An additional resistance, which may be considerable, for instance for fully loaded container ships is the wind resistance. The frontal area facing the relative wind on board the ship can be large and the containers do not have an aerodynamic shape, so large forces may be generated in strong winds. Even in still air, there is in fact a resistance component, however small. This component, the Air Resistance, is considered in the model-ship extrapolation procedure. 5. Added Resistance: A seaway will cause an additional resistance on the vessel. This is mainly due to the generation of waves by the hull when set in motion by waves, but also due to wave reflection in short waves.
{"url":"https://en.m.wikibooks.org/wiki/Ship_Resistance/Components_of_Resistance","timestamp":"2024-11-08T09:15:57Z","content_type":"text/html","content_length":"32857","record_id":"<urn:uuid:9ae6de95-5859-477b-9eb8-3d77c5bc3dd8>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00821.warc.gz"}
6,009 research outputs found Motivated by the hypothesis that financial crashes are macroscopic examples of critical phenomena associated with a discrete scaling symmetry, we reconsider the evidence of log-periodic precursors to financial crashes and test the prediction that log-periodic oscillations in a financial index are embedded in the mean function of this index. In particular, we examine the first differences of the logarithm of the S&P 500 prior to the October 87 crash and find the log-periodic component of this time series is not statistically significant if we exclude the last year of data before the crash. We also examine the claim that two separate mechanisms are responsible for draw downs in the S&P 500 and find the evidence supporting this claim to be unconvincing.Comment: 26 pages, 10 figures, figures are incorporated into paper, some changes to the text have been mad Thanks to the availability of new historical census sources and advances in record linking technology, economic historians are becoming big data genealogists. Linking individuals over time and between databases has opened up new avenues for research into intergenerational mobility, assimilation, discrimination, and the returns to education. To take advantage of these new research opportunities, scholars need to be able to accurately and efficiently match historical records and produce an unbiased dataset of links for downstream analysis. I detail a standard and transparent census matching technique for constructing linked samples that can be replicated across a variety of cases. The procedure applies insights from machine learning classification and text comparison to the well known problem of record linkage, but with a focus on the sorts of costs and benefits of working with historical data. I begin by extracting a subset of possible matches for each record, and then use training data to tune a matching algorithm that attempts to minimize both false positives and false negatives, taking into account the inherent noise in historical records. To make the procedure precise, I trace its application to an example from my own work, linking children from the 1915 Iowa State Census to their adult-selves in the 1940 Federal Census. In addition, I provide guidance on a number of practical questions, including how large the training data needs to be relative to the sample.This research has been supported by the NSF-IGERT Multidisciplinary Program in Inequality & Social Policy at Harvard University (Grant No. 0333403) By viewing the covers of a fractal as a statistical mechanical system, the exact capacity of a multifractal is computed. The procedure can be extended to any multifractal described by a scaling function to show why the capacity and Hausdorff dimension are expected to be equal.Comment: CYCLER Paper 93mar001 Latex file with 3 PostScript figures (needs psfig.sty For one dimensional maps the trajectory scaling functions is invariant under coordinate transformations and can be used to compute any ergodic average. It is the most stringent test between theory and experiment, but so far it has proven difficult to extract from experimental data. It is shown that the main difficulty is a dephasing of the experimental orbit which can be corrected by reconstructing the dynamics from several time series. From the reconstructed dynamics the scaling function can be accurately extracted.Comment: CYCLER Paper 93mar008. LaTeX, LAUR-92-3053. 
Replaced with a version with all figure We analyze the quarterly average sale prices of new houses sold in the USA as a whole, in the northeast, midwest, south, and west of the USA, in each of the 50 states and the District of Columbia of the USA, to determine whether they have grown faster-than-exponential which we take as the diagnostic of a bubble. We find that 22 states (mostly Northeast and West) exhibit clear-cut signatures of a fast growing bubble. From the analysis of the S&P 500 Home Index, we conclude that the turning point of the bubble will probably occur around mid-2006.Comment: 7 Elsaet Latex pages + 9 eps figure Obtaining and maintaining anonymity on the Internet is challenging. The state of the art in deployed tools, such as Tor, uses onion routing (OR) to relay encrypted connections on a detour passing through randomly chosen relays scattered around the Internet. Unfortunately, OR is known to be vulnerable at least in principle to several classes of attacks for which no solution is known or believed to be forthcoming soon. Current approaches to anonymity also appear unable to offer accurate, principled measurement of the level or quality of anonymity a user might obtain. Toward this end, we offer a high-level view of the Dissent project, the first systematic effort to build a practical anonymity system based purely on foundations that offer measurable and formally provable anonymity properties. Dissent builds on two key pre-existing primitives - verifiable shuffles and dining cryptographers - but for the first time shows how to scale such techniques to offer measurable anonymity guarantees to thousands of participants. Further, Dissent represents the first anonymity system designed from the ground up to incorporate some systematic countermeasure for each of the major classes of known vulnerabilities in existing approaches, including global traffic analysis, active attacks, and intersection attacks. Finally, because no anonymity protocol alone can address risks such as software exploits or accidental self-identification, we introduce WiNon, an experimental operating system architecture to harden the uses of anonymity tools such as Tor and Dissent against such attacks.Comment: 8 pages, 10 figure Scholars of presidential primaries have long posited a dynamic positive feedback loop between fundraising and electoral success. Yet existing work on both directions of this feedback remains inconclusive and is often explicitly cross-sectional, ignoring the dynamic aspect of the hypothesis. Pairing high-frequency FEC data on contributions and expenditures with Iowa Electronic Markets data on perceived probability of victory, we examine the bidirectional feedback between contributions and viability. We find robust, significant positive feedback in both directions. This might suggest multiple equilibria: a candidate initially anointed as the front-runner able to sustain such status solely by the fundraising advantage conferred despite possessing no advantage in quality. However, simulations suggest the feedback loop cannot, by itself, sustain advantage. Given the observed durability of front-runners, it would thus seem there is either some other feedback at work and /or the process by which the initial front-runner is identified is informative of candidate quality What was the return to education in the US at mid-century? In 1940, the correlation between years of schooling and earnings was relatively low. 
In this paper, we estimate the causal return to schooling in 1940, constructing a large linked sample of twin brothers to account for differences in unobserved ability and family background. We find that each additional year of schooling increased labor earnings by approximately 4%, about half the return found for more recent cohorts in contemporary twins studies. These returns were evident both within and across occupations and were higher for sons from lower SES families.First author draf We consider a selfish variant of the knapsack problem. In our version, the items are owned by agents, and each agent can misrepresent the set of items she owns---either by avoiding reporting some of them (understating), or by reporting additional ones that do not exist (overstating). Each agent's objective is to maximize, within the items chosen for inclusion in the knapsack, the total valuation of her own chosen items. The knapsack problem, in this context, seeks to minimize the worst-case approximation ratio for social welfare at equilibrium. We show that a randomized greedy mechanism has attractive strategic properties: in general, it has a correlated price of anarchy of $2$ (subject to a mild assumption). For overstating-only agents, it becomes strategyproof; we also provide a matching lower bound of $2$ on the (worst-case) approximation ratio attainable by randomized strategyproof mechanisms, and show that no deterministic strategyproof mechanism can provide any constant approximation ratio. We also deal with more specialized environments. For the case of $2$ understating-only agents, we provide a randomized strategyproof $\frac{5+4\sqrt{2}}{7} \approx 1.522$ -approximate mechanism, and a lower bound of $\frac{5\sqrt{5}-9}{2} \approx 1.09$. When all agents but one are honest, we provide a deterministic strategyproof $\frac{1+\sqrt{5}}{2} \approx 1.618$ -approximate mechanism with a matching lower bound. Finally, we consider a model where agents can misreport their items' properties rather than existence. Specifically, each agent owns a single item, whose value-to-size ratio is publicly known, but whose actual value and size are not. We show that an adaptation of the greedy mechanism is strategyproof and $2$-approximate, and provide a matching lower bound; we also show that no deterministic strategyproof mechanism can provide a constant approximation ratio
{"url":"https://core.ac.uk/search/?q=authors%3A(Feigenbaum)","timestamp":"2024-11-09T16:14:02Z","content_type":"text/html","content_length":"147325","record_id":"<urn:uuid:7c3fc434-d81d-40d9-84d2-b712e3d6c1ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00799.warc.gz"}
Irule to rewrite URL | DevCentral Forum Discussion

Please I need an irule to rewrite a URL. http://10.2.2.13/sav/dtl/anything/anything.html -> http://10.2.2.120:8081/anything/anything.html ; http://10.2.2.85:8081/anything/anything.html
10.2.2.13 is the VIP address and 10.2.2.120/85 are the pool member addresses. The most important part of the irule is that I want the /sav/dtl to be removed and any text entered after this to be carried over to the server side, as this is the path that exists on the server. It's a kind of reverse proxy; the url with /sav/dtl is what the world knows.

• Sorry I just noticed that you do not want a redirect but a pool assignment. The key is you want to remove /sav/dtl/ and /sav/jog/; to remove these characters use the string range command. pool1 is the first server and srvpool2 is the second server. Make sure the pools are using the 8081 port.

when HTTP_REQUEST {
    set Vuri [string tolower [HTTP::uri]]
    if { $Vuri starts_with "/sav/dtl/" } then {
        # set HTTP::uri to start 8 characters from the start of itself
        HTTP::uri [string range [HTTP::uri] 8 end]
        pool pool1
    } elseif { $Vuri starts_with "/sav/jog/" } then {
        HTTP::uri [string range [HTTP::uri] 8 end]
        pool srvpool2
    }
}

• I want the /sav/dtl or sav/jog URLs to stay the same when entered into the browser so doesn't want this to be transparent to the user. Thank you.

• Here this irule should work for you:

when HTTP_REQUEST {
    set Vuri [string tolower [HTTP::uri]]
    if { $Vuri starts_with "/sav/dtl/" } then {
        # set the new_Vuri variable starting 8 characters from the start
        set new_Vuri [string range [HTTP::uri] 8 end]
        HTTP::redirect "http://10.2.2.120:8081$new_Vuri"
    } elseif { $Vuri starts_with "/sav/jog/" } then {
        set new_Vuri [string range [HTTP::uri] 8 end]
        HTTP::redirect "http://10.2.2.85:8081$new_Vuri"
    }
}

• Sorry I just noticed that you do not want a redirect but a pool assignment. The key is you want to remove /sav/dtl/ and /sav/jog/; to remove these characters use the string range command. pool1 is the first server and srvpool2 is the second server. Make sure the pools are using the 8081 port.

when HTTP_REQUEST {
    set Vuri [string tolower [HTTP::uri]]
    if { $Vuri starts_with "/sav/dtl/" } then {
        # set HTTP::uri to start 8 characters from the start of itself
        HTTP::uri [string range [HTTP::uri] 8 end]
        pool pool1
    } elseif { $Vuri starts_with "/sav/jog/" } then {
        HTTP::uri [string range [HTTP::uri] 8 end]
        pool srvpool2
    }
}

□ Thanks for this Hectorm, I will test this tomorrow and revert back. It's the 'string range' command that I might have been looking for.
□ Yes also notice that the range command starts at 0 and not 1 so I think you may need to change the 8 for a 7 since you want the last bracket not to be removed.
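To make the index discussion in the replies concrete, here is a small Python illustration of the intended URI transformation. Python slicing is shown purely to visualise which characters are kept; it is not Tcl and says nothing about how `string range` itself counts.

    uri = "/sav/dtl/anything/anything.html"

    # "/sav/dtl" is 8 characters (indices 0-7), so keeping everything from index 8
    # onward preserves the slash that starts the backend path.
    print(uri[8:])   # /anything/anything.html

    # Starting one character earlier keeps part of the prefix instead.
    print(uri[7:])   # l/anything/anything.html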
{"url":"https://community.f5.com/discussions/technicalforum/irule-to-rewrite-url/79735","timestamp":"2024-11-12T20:24:10Z","content_type":"text/html","content_length":"345553","record_id":"<urn:uuid:6e2a7411-40c9-479d-b191-6b4ff5d2130f>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00347.warc.gz"}
Reflecting on Mathematics: The Greatest Show (part 1)Reflecting on Mathematics: The Greatest Show (part 1) This blog is the first of two reflections on the annual MANSW (Mathematical Association of NSW) conference. This year’s theme was “Mathematics: The Greatest Show” and it didn’t disappoint! What an amazing conference, ever year it gets bigger and better and the calibre of presentations and keynotes was out of this world! (I should save this phrase for next year’s conference actually as the theme will be A Maths Odyssey: Space … and geometry!) Matt Parker aka @standupmaths was the opening keynote. His presentation wove together aspects of comedy, entertainment, and the sheer joy of mathematics. As a previous mathematics teacher, Matt has the power as a speaker to not only engage people as the audience of a show (and it was tremendously funny!) but also to engage people as learners (and it was extremely knowledge-building). Matt enthralled the audience of some 600+ teachers with his ‘ooo’ and ‘ahhh’ causing Moebius loop tricks. Although as Matt highlighted himself, it’s the mathematics itself that is amazing, we just need to learn how to harness that in our classrooms for our students. I’m sure plenty of those teachers in the room would have tried out these ideas back in the classroom last week. If you are a NSW primary teacher and still have the CD ROM Teaching Space and Geometry, there are two lessons that link to Matt’s paper tricks – “Two Rings Puzzle” and “Moebius Strips”. The Moebius strip lesson poses questions such as: How many surfaces does the ring have? How many edges does the ring have? If the ring were cut along the middle, what would happen? What is your prediction and why? How do you think the Moebius strip was constructed? How could you check? Why am I learning this? Why does it work? In exploring different tasks with the Moebius loops Matt made the point of saying “Don’t tell this students this!” as he explained some of the ‘why it works’ parts of the task. This resonated with me, teaching isn’t about telling. Students need to feel the maths-burn and think hard. Thinking hard is part of learning mathematics. Matt also said that every mathematician thinks maths is hard. People who enjoy maths know it’s difficult, and enjoy this aspect of challenge. It’s hard work teaching your brain new thinking skills that lead to developing intuition. “Humble Pie: A comedy of maths errors” is Matt’s new book about some of the funny, and often unfortunate, ways in which things have gone wrong because of basic mathematical errors. Matt shared a few of these stories during his keynote like the McDonald’s combination meals where they used factorial instead of combinations to work out the number of different meals it was possible to make. And the story about the plane that had to make an emergency landing due to an error (twice!) with kg vs pounds of fuel. I can’t do Matt’s story-telling ability justice so you’ll just have to buy the book! Matt also shared some cool digital image investigations using spreadsheets (every mathematics teachers’ dream!). Information about this tasks and lesson plans to go with it can be found here on Matt’s website think-maths.co.uk He also has a couple of other websites mathsgear.co.uk and standupmaths.com You should also subscribe to his youtube collaboration Numberphile these videos are great for your own learning or to use as hooks during the launch of your mathematics lessons. 
Some of my favourites are the How to order 43 Chicken McNuggets, The scientific way to cut a cake, Calculating Pi with real pies and 43,252,003,274,489,856,000 Rubik’s Cube Combinations. I also attended Matt’s paper folding workshop – I’ll share my reflections on that workshop in part 2 of this blog. Building Number Sense in Early Primary, Doug Clarke Doug’s sessions are always great (Doug Clarke is a regular presenter both at the annual MANSW conference and at the K-8 PAM conference – keep an eye on the MANSW conference page for upcoming conferences). Doug provided some great new ways to utilise the hundreds chart jigsaw idea. Asking students as they are putting the chart back together: What number goes there? How do you know? There’s a piece missing, draw me a picture of what it looks like. If I know the numbers 21 and 43 are on the same piece, what might that piece look like? Domino games were also part of Doug’s workshop, I really like the foam dominoes Dough uses, the numbers are really clear and they don’t make any noise. We played roll two dice, then find the matching domino to the dice. Player with the most dots (not dominoes) at the end wins. This opens up a lovely investigation around how the students count the dots. Doug coined a nice phrase to use with the students “How can I used clever counting?” to get the students communicating about their ways of counting the dots. You could also pose questions like: What strategies did you use? What strategy is efficient? Why? You could also adapt to the player with most dominoes at the end wins (highest stack). Other domino games included Highest sum, turn over one domino each at the same time, add your domino dots, highest total keeps their domino for that round, continue play. At the end, the same scoring can be used as above, most dots wins. Lowest difference was another game, same as above, turn over one domino each at the same time, lowest difference between the two lots of dots on your own domino wins and keeps their domino for that round. This one was really interesting as students soon realise there are dominoes you want to get and ones that you don’t – I’ll let you work that out for yourselves! Crack the code was a good game to engage all students working collaboratively across groups. In this game, each pair of students work together to arrange their set of dominoes into two piles – all face up. They need to use a rule (that they keep secret) to organise the dominoes into the two piles. One member of each pair then visits other pairs to try to guess or predict the rule that group is using to separate their dominoes. A great discussion starter, the teacher could take photos of a group’s piles, put it up on the interactive whiteboard for a whole class discussion. Think it … Say it, Ayesha Ali Khan Ayesha Ali Khan (follow @missalikhan and @DoE_Mathematics) is currently the Primary Mathematics Advisor at the Department of Education and presented a workshop focusing on classroom conversations centred on mathematical concepts. She was accompanied by her colleagues Linda De Marcellis @linda_demar and Cathy Vogt @cathyvogt6. Ayesha’s session was really informative and referenced all the go-to researchers who currently are exploring the importance of talk in the mathematics classroom. One of the strategies Ayesha mentioned was the ‘convince yourself, convince a friend, convince a skeptic‘ that’s referenced by both Jo Boaler on youcubed and also by Robert Kaplinsky. 
It’s a way of exploring argumentation that I was first introduced to by Peter Gould, it originally comes from Mason, Burton and Stacey’s book “Thinking Mathematically” – a must read, and one that still holds true for the classroom some 30+ years after its first publication. This cycle—convince yourself, convince a friend, convince a skeptic —can be used in different ways. One technique is first to encourage students to understand the problem well enough that they believe they have come up with a correct solution. Next they produce a justification that could be convincing to someone else in the class. The final level is a justification that is complete enough to be convincing to someone who found a different solution or might disagree with the solution provided. With this cycle, students construct arguments that grow in sophistication. The Cycle of Challenge, excerpt from Back-pocket Strategies for argumentation, Graham & Lesseig In Ayesha’s session we also explored how to talk, to support students’ learning of mathematics, using the Talk Move strategies. I’ve discussed the benefits of talk moves in other blogs you can read such as; 6 practices that should be in your mathematical repertoire and valuing student voice in mathematics. The Department have created some video vignettes that showcase talk moves being used in classroom settings, these will be available soon – I hope! Contact Ayesha for more details. Ayesha also mentioned Mike Askew’s work around private (pair chats) and public (whole class) conversations and also Kazemi and Hintz‘s strategies such as compare and connect and why, let’s justify from their book Intentional Talk – another must-read. Well that was the end of day one! I’ll continue reflecting on day 2 of the conference in my next blog. If you are keen to find out what other conference-goers enjoyed about the conference you can check out the Twitter feed #manswgreatestshow Graham, M., & Lesseig, K. (2018). Back-Pocket Strategies for Argumentation. Mathematics Teacher, 112(3), 172-178. Mason, J, Burton, L, & Stacey, K. (1982). Thinking Mathematically. London: Addison-Wesley. National Council of Teachers of Mathematics (NCTM). Categories: Mathematics, OpinionBy Katherin Cartwright Tags: conferenceMANSWmathematicsnumeracyreflections Author: Katherin Cartwright Katherin Cartwright is a passionate mathematics educator and is currently a sessional lecturer and tutor at The University of Sydney teaching mathematics to pre-service teachers in primary education. She has just completed her PhD researching teacher noticing of mathematical fluency in primary students. Related Posts
{"url":"https://primarylearning.com.au/2019/09/30/reflecting-on-mathematics-the-greatest-show-part-1/","timestamp":"2024-11-02T21:37:14Z","content_type":"text/html","content_length":"141349","record_id":"<urn:uuid:60191929-0624-479e-b913-ec30c141f64e>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00686.warc.gz"}
Well-posedness Issues For the Half-Wave Maps Equation With Hyperbolic And Spherical Targets The conjugate heat transfer in mixtures of a fluid and single granular clusters is studied in this paper using a novel lattice Boltzmann method (LBM) programmed for parallel computation on the graphics processing unit (GPU). The LBM is validated for heat c ...
{"url":"https://graphsearch.epfl.ch/en/publication/304450","timestamp":"2024-11-12T00:13:48Z","content_type":"text/html","content_length":"106049","record_id":"<urn:uuid:68ae03ab-7e57-43e4-920b-fddd71d6ba8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00332.warc.gz"}
Are there any natural problems complete for NP INTER TALLY? NP INTER SPARSE? A is a tally set if A ⊆ 1*. A is a sparse set if there is a polynomial p such that the number of strings of length n is ≤ p(n). If there exists a sparse set A that is NP-hard under m-reductions (even btt-reductions) then P=NP. (See this post.) If there exists a sparse set A that is NP-hard under T-reductions then PH collapses. (See this post.) Okay then! I have sometimes had a tally set or a sparse set that is in NP and I think that it's not in P. I would like to prove, or at least conjecture, that it's NP-complete. But alas, I cannot since then P=NP. (Clarification: If my set is NP-complete then P=NP. I do not mean that the very act of conjecturing it would make P=NP. That would be an awesome superpower.) So what to do? A is NPSPARSE-complete if A is in NP, A is sparse, and for all B that are in NP and sparse, B ≤ A. Similar for NPTALLY, and one can also look at other types of reductions. So, can I show that my set is NPSPARSE-complete? Are there any NPSPARSE-complete sets? Are there NATURAL ones? (Natural is a slippery notion - see this post by Lance.) Here is what I was able to find out (if more is known then please leave comments with pointers.) 1) It was observed by Buhrman, Fenner, Fortnow, van Melkebeek that the following set is NPTALLY-complete: Let M_1, M_2, ... be a standard list of NP-machines. Let A = { 1^⟨i,t⟩ : M_i(1^⟨i,t⟩) accepts on some path within t steps }. The set involves Turing Machines so it's not quite what I want. 2) Messner and Toran show that, under an unlikely assumption about proof systems, there exists an NPSPARSE-complete set. The set involves Turing Machines. Plus it uses an unlikely assumption. Interesting, but not quite what I want. 3) Buhrman, Fenner, Fortnow, van Melkebeek also showed that there are relativized worlds where there are no NPSPARSE-complete sets (this was their main result). Interesting but not quite what I want. 4) If A is NE-complete then the tally version: { 1^x : x is in A } is likely NPTALLY-complete. This may help me get what I want. Okay then! Are there any other sets that are NPTALLY-complete? NPSPARSE-complete? The obnoxious answer is to take finite variants of A. What I really want is a set of such problems so that we can prove other problems NPTALLY-complete or NPSPARSE-complete with the ease we now prove problems NP-complete. 1 comment: A small note on "natural"; what is $1^a$ ... it's a number (a) in unary encoding. So we are asking for "natural" problems that have a single number (represented inefficiently) as input ... I think we also have serious problems to find natural problems (or NPC natural problems) that have a single number (represented efficiently) as input; i.e. problems that don't use that number as an encoding for something else. We have FACTORING but no others come to my mind ... Even COMPRESSIBILITY ("Is it N compressible?") is a kind of hack because it hides the fact that what we are really asking for is the compressibility of the 0-1 string that represents/is represented by N. Also note that TALLY could be extended to TALLY^k, i.e. a *fixed* number of unary strings (1^{a_1},..,1^{a_k}) and the corresponding hierarchy could be "investigated" ... I have never seen it before but I think it could be interesting. In binary representation, N^3 "contains" the famous natural NPC problem QUADRATIC DIOPHANTINE EQUATION (as long as a math problem can be considered natural :-). But with TALLY^3 it seems that you can't do too much.
{"url":"https://blog.computationalcomplexity.org/2019/09/are-there-any-natural-problems-complete.html?m=0","timestamp":"2024-11-08T23:44:12Z","content_type":"application/xhtml+xml","content_length":"177320","record_id":"<urn:uuid:e97bebbc-b0ff-4a95-ab63-eaddd15207d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00101.warc.gz"}
How do you rewrite cos^2 x - (1/2) using double angle formula? Thank you. | HIX Tutor
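The answer content did not survive on this page, so here is a short derivation of the requested rewrite (added for completeness; it is not the original HIX Tutor response). Start from the double angle formula cos(2x) = 2cos^2 x - 1. Rearranging gives cos^2 x = (1 + cos(2x))/2, and therefore
cos^2 x - 1/2 = (1 + cos(2x))/2 - 1/2 = (1/2) cos(2x).
So cos^2 x - (1/2) can be rewritten as cos(2x)/2.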
{"url":"https://tutor.hix.ai/question/how-do-you-rewrite-cos-2-x-1-2-using-double-angle-formula-thank-you-44068a4ab4","timestamp":"2024-11-05T20:31:49Z","content_type":"text/html","content_length":"562932","record_id":"<urn:uuid:da693e51-f44a-4c27-a02f-cee9c1193343>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00435.warc.gz"}
Simultaneous Equations & Inequalities Check solutions to simultaneous equations Find the point of intersection graphically Plot the curve and the line to find the two points of intersection Equations and inequalities: Page 44, Example 6 Explore which regions on the graph satisfy which inequalities Solving Simultaneous Linear and Quadratic Solution of simultaneous equations depends on whether or not their graphs intersect. This display will help you understand how you can solve a Quadratic and Linear equation simultaneously. You can find one or more common 'x's where the difference between the equations is zero, and then substitute these 'x's back into either equation to find the corresponding 'y's. The linear equation might be the simpler one to substitute your 'x' into, but it can be reassuring to check each 'x' and 'y' against both equations. Solving Two Simultaneous Quadratics Solution of simultaneous equations depends on whether or not their graphs intersect. This display will help you understand how you can solve two quadratic equations simultaneously; one marked in blue and one marked in red. You can find one or more common 'x's where the difference between the equations is zero, and then substitute these 'x's back into either equation to find the corresponding 'y's. It can be reassuring to check each 'x' and 'y' against both equations. How many solutions can two simultaneous quadratics have; at least and at most? Investigate Linear Inequalities The inequality $ax + b > cx + d$ can be investigated in this display. The line $y = ax + b$ is shown in blue and the line $y = cx + d$ is shown in red. $ax + b > cx + d$ where the blue line lies above the red line. The solution is shown on the number line at the bottom. Note that the solution is itself an inequality, but in terms of just x. Note also, that the intersection point is shown as an open circle on the number line, because it isn't itself included in the solution. When the solution lies to the left of the intersection you should be able to see how the direction of an inequality reverses when multiplying or dividing both sides by a negative number. Investigate Quadratic Inequalities The quadratic inequality $a{x^2} + bx + c < 0$ can be investigated in this display. The curve $y = a{x^2} + bx + c < 0$ is plotted, and the inequality holds where this curve lies below the x-axis. The solution is shown on the number line at the bottom. Note that the solution is itself an inequality, but in terms of just x. Note also, that intersection points are shown as open circles on the number line, because they aren't themselves included in the solution. Investigate Quadratic/Linear Inequalities The solutions to inequalities like $\color{green}{a{x^2} + bx + c \ge dx + e}$ can be illustrated graphically. Both the curve $\color{blue}{y = a{x^2} + bx + c}$ and the line $\color{red}{y = dx + e}$ are plotted, and the inequality holds where the curve lies on or above the line. The solution is shown on the number line at the bottom. Note that the solution is itself an inequality, but in terms of just x. Note also, that intersection points are shown as closed circles on the number line, because they are included in the solution.
{"url":"https://fineview.academy/SyllabusContent?syllabusContentId=6","timestamp":"2024-11-10T17:55:26Z","content_type":"text/html","content_length":"491260","record_id":"<urn:uuid:6ed7885a-a2ca-4fa4-8062-4741a89351d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00119.warc.gz"}
N Dimensional Rotations and Magnetism How many distinct rotations are possible in ND? This is the same thing as asking how many distinct magnets are possible. There is a crucial distinction. Is it possible to identify the different rotations? It makes a difference. This is true even here in 3D. If you have no way to orient yourself then there is only one possible rotation in 3D. If you look at the Earth from above the north pole then the earth is rotating counterclockwise. If you look at the Earth from above the south pole then the earth is rotating clockwise. If on the other hand there is some way for the observer to orient themselves, by using the magnetic field or stars or whatever, then there are two possible rotations. In 2D there are two distinct rotations, as the 2D observer can't help but have an orientation. So let's exclude that and consider the non-orientable observer cases. In 4D there are also two, as the two rotations can be compared against one another. In 5D only one. Essentially the distinction is that in odd dimensional spaces there is always an axis, a null dimension that isn't rotating. One can use this dimension to rotate so that any given rotational plane appears to change direction. So how many distinct rotations are there in a 2N dimensional space without orientation? There are always two. The only invariant is the parity of the rotations. Any rotation of the observer may change the sign of two rotations, leaving parity the same. Now suppose it is possible to order the planes of rotation. Perhaps the rotational periods give us an ordering. Then there are 2^(N-1) distinct rotations for 2N > 2. Same with the possible number of types of magnets. If the magnetic planes are of differing strength then there are the same number of possible different magnets. If you have some way to orient the first plane in the order, then you get a full 2^N distinct rotations. Last edited by PatrickPowers on Tue Feb 07, 2023 6:28 am, edited 1 time in total. Re: N Dimensional Rotations and Magnetism There is not a lot of what i could find, but i given up searching the literature, because of the wrong approach. In even dimensions, there is 'planetary rotation', which can be view in complex space, by the equation of a line. This means, in CEn, which maps onto E2n, the equations that generate a straight line in En, but treated as complex numbers, gives CEn. The lines that pass through the origin, can be multiplied by \(cis\ \omega\t\) which, for the mapping of CEn onto E2n, every point lies on a great arrow that orbits the origin at the same angular speed. The contention is then that every rotation in E2n is comprised of n possible sets of these, each one uniquely distinct. So in six dimensions, we ought find three 'orthogonal' planetry rotations, in eight, four such rotations. The planetry rotation in any given dimension has the same rank as not exceeding that dimension, so in 3d, it is 1 3-d rotation, in 4d, it's 2 3d rotations, in 5d it is 2 5d rotations, and so forth. In 2d, it is 1 1d rotation. That is to say, the full field of rotations, can be imagined by x controls each of y dimensions. x is half the even number not exceeding n, and y is the largest odd number not exceeding n. In two dimensions, you have 1 1d control. A slider, that goes from -u to +u representing its rotation speed. In three dimensions, you have 1 3d control. In essence, you move a point in 3-space, the radius represents the speed, and the position is the north pole. In four dimensions, you have two 3d controls. 
These control the left and right isoclinal rotations. Whether the resulting rotation is left- or right- handed, depends largely on which control has the bigger speed. When the two speeds are set equal, you get a 'wheel' rotation, where all axies except two are fixed. In a motor car, one has the wheel as in the hedrix (2space) of height and forward. The wheel works by swinging the axle against a succession of points at the rim, which produces motion. Any other rotation at the axle will turn the cabin of the carriage around, making travelling a dizzying experience. Five dimensions is still foggy, although the controlls are known to be 2 5d controlls. I have seen one of these. The dream you dream alone is only a dream the dream we dream together is reality. \ ( \(\LaTeX\ \) \ ) [no spaces] at https://greasyfork.org/en/users/188714-wendy-krieger Re: N Dimensional Rotations and Magnetism Magnetism in 3d is a curl, but it can also be regarded as the outcome of retarded potential. Part of the reason that i am looking at circulation in higher dimensions, is because of eddies in fluids, including magnetism. If one supposes that magnetism is the interaction between moving charges, then \(\vec H\) is a ray pointing from the source in all directions, but modified by the the direction of the motion, and \(\ vec B\) appears to a moving charge in the plane (n-1 space) orthogonal to its motion. The study with the out-vector tells us that any loop (boundary of an n-1 patch in n-space), bounds a specific vector area, there is an area moment given by the vector area by the intensity of circulation. This generalises the magnetic area dipole \(\vec m = I \vec A\), where A is n-1 space, and I is a numeric. This particular relation derives from the definition of volume = moment of area, and that the sum of the area-vector is zero, because the volume is not dependent on position. This means that if we take an n-balloon, and cut a hole into its interior, the necessary patch for it is the same vector area regardless of shape. The dream you dream alone is only a dream the dream we dream together is reality. \ ( \(\LaTeX\ \) \ ) [no spaces] at https://greasyfork.org/en/users/188714-wendy-krieger Re: N Dimensional Rotations and Magnetism Aha, I neglected to mention that I assumed the number of rotations is always maximal. If you allow some of the planes to be still then the number of "rotations" is greater. What I have in mind is the signs of these rotations, either + or -. So in 8D the signature of a maximal rotation would be something like +--+. If the observer can't tell which rotation is which then in even-D spaces rotation of the observer can transform this signature to any other signature with the same parity. Signature +--+ can go to +-+- but can't become +-++. So there are two equivalence classes of these maximal rotations. Re: N Dimensional Rotations and Magnetism wendy wrote:If one supposes that magnetism is the interaction between moving charges, then \(\vec H\) is a ray pointing from the source in all directions, but modified by the the direction of the motion, and \(\vec B\) appears to a moving charge in the plane (n-1 space) orthogonal to its motion. I would say that electromagnetism is the interaction between moving charges. By relativity we can always say that one of the charges is motionless. That point combined with the velocity vector of the other charge defines a two dimensional plane. Then we can say that the other charge is motionless and get a different 2D plane. 
The two charges see things differently. Re: N Dimensional Rotations and Magnetism I have plotted out the number of different rotations in 2N and 2N+1 dimensions to be N!. I'll play around with the model tomorrow, to see if it all makes sense. The dream you dream alone is only a dream the dream we dream together is reality. \ ( \(\LaTeX\ \) \ ) [no spaces] at https://greasyfork.org/en/users/188714-wendy-krieger Re: N Dimensional Rotations and Magnetism The phase space for rotations can be thought of as the background for a slider, where every possible rotation and speed is represented as a different point. This includes those rotations at different The model is as in radiant space, each point of which represents a cartesian product of the shape given at the sizes on the axies, against whatever the axies represents. So the point (1,2) would represent a product of sphere-surfaces (that being the stated axis here), of a size 1 sphere in prism product with a size 2 sphere. The solid product would require all the points in the rectangle 0,0 to 1,2 be included, The altitude space is essentially space divided by direction. For dimensions 2N and 2N-1, the sphere on the axis is the surface of a 2N-1 sphere. In odd dimensions, this corresponds to an polar axis to a 2N-2 rotations. In even dimensions, it corresponds to a swirlybob, such as taking the slope of a line in CEn, This has n-1 defining equations, (eg z=ax, z=by, ... in CE3), and the sphere is generated by projection from the point where the line from (1,0,0,...) to (0, a, b, ..) crosses the sphere whose diameter is (0-1, 0,0,...). For 2N and 2N+1, there are N separate forms of this sphere, each in a sense chiral to the others. This is represented by a coordinate system, where each axis represents one of these spheres, the coordinate of the axis represents the intensity or speed of rotation. The combined coordinate then represents a sum of these rotations, which can add and subtract the relative rotation in a given The point representing any rotation is a single point in this space, each axis supplying further a direction in space for its range of directions. Thus in six dimensions, the altitude space is 3d, ie x, y, z, and these are mapped onto 15 axies as x1, x2, x3, x4, x5, y1, y2, ..., y5, z1, ..., z5. Where x=y=z=..., this corresponds to a great arrow or wheel rotation. This is where two coordinates rotate, and the rest remain static. generally we can suppose x>y>z..., where all points are in motion, often in varying helix-on-helix motions rather than a simple orbit. This is the motion in 4d, when one speed is faster than the other is for the faster one to spiral around the slower one. If x, y is such a spiral, then y, x is the opposite hand of that same spiral. The thing is that we can tell a left-screw from a right screw in 4d, which is the outcome of x>y vs y>x, The thing then matters that where x>y>z against all other combinations, represent distinct rotations (free from coordinates). Also, does x>y=z represent a case with non-turning axies. The dream you dream alone is only a dream the dream we dream together is reality. \ ( \(\LaTeX\ \) \ ) [no spaces] at https://greasyfork.org/en/users/188714-wendy-krieger Re: N Dimensional Rotations and Magnetism wendy wrote:Five dimensions is still foggy, although the controlls are known to be 2 5d controlls. I have seen one of these. Why not just say there's one 10D control? Can the two 5D controls really be separated? 
wendy wrote:The point representing any rotation is a single point in this space, each axis supplying further a direction in space for its range of directions. Thus in six dimensions, the altitude space is 3d, ie x, y, z, and these are mapped onto 15 axies as x1, x2, x3, x4, x5, y1, y2, ..., y5, z1, ..., z5. What rotations do these 15 axes represent, exactly? In 4D space, with axes e1,e2,e3,e4, the rotation phase-space has axes x1,x2,x3, y1,y2,y3, where x is left-isoclinic and y is right-isoclinic. The x1 axis represents rotations generated by the bivector e1e4+e2e3; that is, a point v=(v1,v2,v3,v4) gets rotated to (v1 cosθ - v4 sinθ, v2 cosθ - v3 sinθ, v2 sinθ + v3 cosθ, v1 sinθ + v4 cosθ). The other axes correspond to these bivectors: x1, x2, x3: e1e4 + e2e3, e2e4 + e3e1, e3e4 + e1e2 y1, y2, y3: e1e4 - e2e3, e2e4 - e3e1, e3e4 - e1e2 So what would this look like in 6D space, with a 15D rotation phase-space?
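A small numerical check of the 4D closed form quoted above (my own Python illustration using numpy and scipy; it is not from the thread). Exponentiating the antisymmetric generator built from the bivector e1e4 + e2e3 reproduces the stated rotation:
import numpy as np
from scipy.linalg import expm

# Antisymmetric generator for the bivector e1e4 + e2e3: it rotates the (1,4)
# plane and the (2,3) plane by the same angle, i.e. an isoclinic rotation in 4D.
G = np.array([[0., 0., 0., -1.],
              [0., 0., -1., 0.],
              [0., 1., 0., 0.],
              [1., 0., 0., 0.]])

theta = 0.7
R = expm(theta * G)                      # the rotation exp(theta * G)

v1, v2, v3, v4 = 1.0, 2.0, 3.0, 4.0
v = np.array([v1, v2, v3, v4])
c, s = np.cos(theta), np.sin(theta)
expected = np.array([v1*c - v4*s, v2*c - v3*s, v2*s + v3*c, v1*s + v4*c])

print(np.allclose(R @ v, expected))      # True: matches the closed form above
print(np.allclose(R @ R.T, np.eye(4)))   # True: R is orthogonal
The same construction applied to the other five bivectors listed above gives the remaining phase-space axes.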
{"url":"http://hi.gher.space/forum/viewtopic.php?p=28677","timestamp":"2024-11-10T02:17:14Z","content_type":"application/xhtml+xml","content_length":"37515","record_id":"<urn:uuid:0813622a-ed56-41b5-9df5-11d83549e9dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00413.warc.gz"}
STA437H1 Lecture Notes - Fall 2010 - Royal Institute Of Technology, False Discovery Rate, Principal Component Analysis
Document Summary
Notes for STA 437/1005, Methods for Multivariate Data.
Let x be a random vector with p elements, so that x = [x1, ..., xp]′, where ′ denotes transpose. (By convention, our vectors are column vectors unless otherwise indicated.) We denote a particular realized value of x by x. The expectation (expected value, mean) of a random vector x is E(x) = ∫ x f(x) dx, where f(x) is the joint probability density function for the distribution of x. We often denote E(x) by μ, with μj = E(xj) being the expectation of the j'th element of x. The variance of the random variable xj is var(xj) = E[(xj − E(xj))²], which we sometimes write as σj². The standard deviation of xj is √var(xj) = σj. The covariance of xj and xk is cov(xj, xk) = E[(xj − E(xj))(xk − E(xk))], which we sometimes write as σjk.
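A short numerical illustration of these definitions (added here; it is not part of the posted notes). Given a matrix whose rows are realized values of a random vector, the sample mean vector and sample covariance matrix can be computed with numpy:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))        # 1000 realized values of a p = 3 random vector

mu_hat = X.mean(axis=0)               # estimate of the mean vector mu = E(x)
Sigma_hat = np.cov(X, rowvar=False)   # estimate of the covariance matrix

sigma_hat = np.sqrt(np.diag(Sigma_hat))   # standard deviations sigma_j
print(mu_hat)
print(Sigma_hat)   # diagonal entries: variances sigma_j^2; off-diagonal: covariances sigma_jk
print(sigma_hat)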
{"url":"https://oneclass.com/class-notes/ca/utsg/stat-sci/sta437h1/2241-online-notes.en.html","timestamp":"2024-11-02T20:54:25Z","content_type":"text/html","content_length":"1050588","record_id":"<urn:uuid:3808f834-6902-4978-861a-550e0c60319c>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00822.warc.gz"}
12.6 Motion of an Object in a Viscous Fluid - College Physics 2e | OpenStax
Learning Objectives
By the end of this section, you will be able to:
• Calculate the Reynolds number for an object moving through a fluid.
• Explain whether the Reynolds number indicates laminar or turbulent flow.
• Describe the conditions under which an object has a terminal speed.
A moving object in a viscous fluid is equivalent to a stationary object in a flowing fluid stream. (For example, when you ride a bicycle at 10 m/s in still air, you feel the air in your face exactly as if you were stationary in a 10-m/s wind.) Flow of the stationary fluid around a moving object may be laminar, turbulent, or a combination of the two. Just as with flow in tubes, it is possible to predict when a moving object creates turbulence. We use another form of the Reynolds number $N'_R$, defined for an object moving in a fluid to be $N'_R = \rho v L / \eta$ (object in fluid), where $L$ is a characteristic length of the object (a sphere's diameter, for example), $\rho$ the fluid density, $\eta$ its viscosity, and $v$ the object's speed in the fluid. If $N'_R$ is less than about 1, flow around the object can be laminar, particularly if the object has a smooth shape. The transition to turbulent flow occurs for $N'_R$ between 1 and about 10, depending on surface roughness and so on. Depending on the surface, there can be a turbulent wake behind the object with some laminar flow over its surface. For an $N'_R$ between 10 and $10^6$, the flow may be either laminar or turbulent and may oscillate between the two. For $N'_R$ greater than about $10^6$, the flow is entirely turbulent, even at the surface of the object. (See Figure 12.18.) Laminar flow occurs mostly when the objects in the fluid are small, such as raindrops, pollen, and blood cells in plasma.
Does a Ball Have a Turbulent Wake?
Calculate the Reynolds number $N'_R$ for a ball with a 7.40-cm diameter thrown at 40.0 m/s.
We can use $N'_R = \rho v L / \eta$ to calculate $N'_R$, since all values in it are either given or can be found in tables of density and viscosity. Substituting values into the equation for $N'_R$ yields $N'_R = \rho v L / \eta = (1.29\ \mathrm{kg/m^3})(40.0\ \mathrm{m/s})(0.0740\ \mathrm{m}) / (1.81 \times 10^{-5}\ \mathrm{Pa \cdot s}) = 2.11 \times 10^5$. This value is sufficiently high to imply a turbulent wake. Most large objects, such as airplanes and sailboats, create significant turbulence as they move. As noted before, the Bernoulli principle gives only qualitatively-correct results in such situations.
One of the consequences of viscosity is a resistance force called viscous drag $F_V$ that is exerted on a moving object. This force typically depends on the object's speed (in contrast with simple friction). Experiments have shown that for laminar flow ($N'_R$ less than about one) viscous drag is proportional to speed, whereas for $N'_R$ between about 10 and $10^6$, viscous drag is proportional to speed squared. (This relationship is a strong dependence and is pertinent to bicycle racing, where even a small headwind causes significantly increased drag on the racer. Cyclists take turns being the leader in the pack for this reason.) For $N'_R$ greater than $10^6$, drag increases dramatically and behaves with greater complexity. For laminar flow around a sphere, $F_V$ is proportional to fluid viscosity $\eta$, the object's characteristic size $L$, and its speed $v$.
All of which makes sense—the more viscous the fluid and the larger the object, the more drag we expect. Recall Stokes' law $F_S = 6 \pi r \eta v$. For the special case of a small sphere of radius $R$ moving slowly in a fluid of viscosity $\eta$, the drag force $F_S$ is given by $F_S = 6 \pi R \eta v$.
An interesting consequence of the increase in $F_V$ with speed is that an object falling through a fluid will not continue to accelerate indefinitely (as it would if we neglect air resistance, for example). Instead, viscous drag increases, slowing acceleration, until a critical speed, called the terminal speed, is reached and the acceleration of the object becomes zero. Once this happens, the object continues to fall at constant speed (the terminal speed). This is the case for particles of sand falling in the ocean, cells falling in a centrifuge, and sky divers falling through the air. Figure 12.19 shows some of the factors that affect terminal speed. There is a viscous drag on the object that depends on the viscosity of the fluid and the size of the object. But there is also a buoyant force that depends on the density of the object relative to the fluid. Terminal speed will be greatest for low-viscosity fluids and objects with high densities and small sizes. Thus a skydiver falls more slowly with outspread limbs than when they are in a pike position—head first with hands at their side and legs together.
Take-Home Experiment: Don't Lose Your Marbles
By measuring the terminal speed of a slowly moving sphere in a viscous fluid, one can find the viscosity of that fluid (at that temperature). It can be difficult to find small ball bearings around the house, but a small marble will do. Gather two or three fluids (syrup, motor oil, honey, olive oil, etc.) and a thick, tall clear glass or vase. Drop the marble into the center of the fluid and time its fall (after letting it drop a little to reach its terminal speed). Compare your values for the terminal speed and see if they are inversely proportional to the viscosities as listed in Table 12.1. Does it make a difference if the marble is dropped near the side of the glass?
Knowledge of terminal speed is useful for estimating sedimentation rates of small particles. We know from watching mud settle out of dirty water that sedimentation is usually a slow process. Centrifuges are used to speed sedimentation by creating accelerated frames in which gravitational acceleration is replaced by centripetal acceleration, which can be much greater, increasing the terminal speed.
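The calculations above translate directly into a few lines of Python (added for illustration; not part of the OpenStax text). The first function reproduces the Reynolds number of the ball example; the second estimates a terminal speed by balancing Stokes' drag against a small sphere's net weight, 6 pi R eta v = (4/3) pi R^3 (rho_s - rho_f) g, which gives v = 2 R^2 (rho_s - rho_f) g / (9 eta). The marble-in-honey numbers are made-up illustrative values, not data from the text.
import math

def reynolds_number(rho, v, L, eta):
    """N'_R = rho * v * L / eta for an object moving through a fluid."""
    return rho * v * L / eta

# The ball from the example above, moving through air.
print(reynolds_number(1.29, 40.0, 0.0740, 1.81e-5))   # about 2.11e5, so a turbulent wake

def stokes_terminal_speed(R, rho_sphere, rho_fluid, eta, g=9.81):
    """Terminal speed of a small sphere when Stokes' drag balances its net weight."""
    return 2.0 * R**2 * (rho_sphere - rho_fluid) * g / (9.0 * eta)

# Hypothetical marble (radius 8 mm, density 2500 kg/m^3) falling through honey
# (density about 1400 kg/m^3, viscosity about 10 Pa s); made-up illustrative values.
print(stokes_terminal_speed(8e-3, 2500.0, 1400.0, 10.0))   # roughly 0.015 m/s
The Stokes estimate is only meaningful while the resulting $N'_R$ stays of order one or less, which is the laminar regime assumed above.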
{"url":"https://openstax.org/books/college-physics-2e/pages/12-6-motion-of-an-object-in-a-viscous-fluid","timestamp":"2024-11-04T18:57:42Z","content_type":"text/html","content_length":"582322","record_id":"<urn:uuid:4828b8ce-2e4a-4c48-b34a-335e45bb3cf1>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00176.warc.gz"}
UE23CS251A: Digital Design and Computer Organization This course focuses on the structure, design and operation of a computer system at different levels of abstraction. The digital design part of the course describes low level digital logic building blocks while the computer organization part explains the structure and operation of microprocessors. Course Objectives: • Fundamental (combinational and sequential) building blocks of digital logic circuits. • Design of more complex logic circuits such as adders, multipliers and register files. • Design of Finite State Machines based on problem specification. • Construction, using above logic circuits, of a microprocessor and its functioning at the clock cycle level. Course Outcomes: • Perform analysis of given synchronous digital logic circuit. • Design and implement small to medium scale data path logic circuits from given specification. • Design and implement control logic using Finite State Machines. • Understand hardware level microprocessor operation, providing a foundation for the higher layers and utilize the concepts and techniques learnt to implement complex digital systems. Course Contents: Introduction, Boolean Functions, Truth Tables, The map method Four variable K-map, Product of Sums Simplification, Donâ t Care conditions, NAND and NOR implementation, Analysis procedure Design Procedure, Combinational logic-1: Binary Combinational logic: Adder- Subtractor, Decimal Adder, Binary multiplier, Magnitude comparator Decoders Encoders, Multiplexers. Introduction, Sequential circuits, Storage elements: Latches, Flip flops, Analysis of clocked sequential circuits, State reduction and assignment, Design procedure, Registers, Shift register, Ripple counters, Synchronous counters, Other counters. Unit 3: Basic structure of computers, Standard IO interface, Interrupts Computer Types, Functional Units: Input Unit, Memory Unit, ALU, Output Unit, Control Unit, Basic operational concepts, Number representation and arithmetic Operations, Character representation, Memory locations and addresses, Memory Operations, Instruction and instruction sequencing,Addressing modes,Assembly Languages, I/O Operations,Accessing I/O Devices, Interrupts,Standard I/O Interfaces Shift / add Multiplier / Divider, Integer division, Floating point number and operations with architecture. Some fundamental concepts, Execution of a complete instruction, Multiple Bus Organization,Hardwired control, Micro programmed control, Single-cycle, Multi-cycle processor data path and control.
{"url":"https://polarhive.net/wiki/uni/ue23cs251a/ddco","timestamp":"2024-11-06T13:56:04Z","content_type":"text/html","content_length":"37292","record_id":"<urn:uuid:1d7a84b8-7cac-47f3-818c-dd84c90cf3c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00747.warc.gz"}
How to find the Cosecant in Python
To calculate the cosecant in Python you can use the csc() function of the mpmath module.
from mpmath import csc
The parameter x is the angle measured in radians. The csc() function returns the cosecant of the angle.
What is the cosecant?
In trigonometry the cosecant is the reciprocal of the sine function. It is equal to infinity if the angle is 0 °, it is one if the angle is 90 ° (sine = 1) and -1 if the angle is 270 ° (sine = -1).
Alternative method
The cosecant can also be calculated as the reciprocal of the sine.
from math import sin
Practical examples
Example 1
To calculate the cosecant of 45 °
>>> from mpmath import csc, radians
>>> x=radians(45)
>>> csc(x)
The radians() function converts the sexagesimal degrees of the angle to radians. The csc() function calculates the cosecant. The cosecant of 45 ° is approximately 1.41421
Example 2
To calculate the cosecant of 90 °
>>> from math import sin, radians
>>> x=radians(90)
>>> 1/sin(x)
This time the cosecant is calculated as the reciprocal of the function sin(x). The output result is 1.0. The cosecant of an angle of 90 ° is 1.
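Example 3 (added here as an extra check; it is not part of the original page)
To calculate the cosecant of 30 °
>>> from mpmath import csc, radians
>>> x=radians(30)
>>> csc(x)
The result is approximately 2, as expected, because the sine of 30 ° is 0.5 and the cosecant is its reciprocal.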
{"url":"https://how.okpedia.org/en/python/how-to-find-the-cosecant-in-python","timestamp":"2024-11-07T13:55:20Z","content_type":"text/html","content_length":"12921","record_id":"<urn:uuid:420675f5-c80c-4b5d-897b-a9bbc10e0fad>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00893.warc.gz"}
People’s thoughts on changing to S40 now? Buckets are slightly hire spec at best Other option I’m toying with is just a standard style tilty hitch Well-known member People’s thoughts on changing to S40 now? Buckets are slightly hire spec at best Other option I’m toying with is just a standard style tilty hitch I looked at tilt hitch but its only half the job - go s40 and put a tilty on it later I looked at tilt hitch but its only half the job - go s40 and put a tilty on it later Tilting hitch would do what I need I think, as much as I like a steelwrist etc it’s not financially feasible at this time and I can’t see that machine seeing one. It’s really only a thought so I’m collecting the right style of attachments for the future. S40 tilting bucket seems silly though Well-known member Tilting hitch would do what I need I think, as much as I like a steelwrist etc it’s not financially feasible at this time and I can’t see that machine seeing one. It’s really only a thought so I’m collecting the right style of attachments for the future. S40 tilting bucket seems silly though S40 is deff the way to go. Tilt hitch seemed dear new when I was weighing it up verses engcon. do it as well as you can,but learn to do it better S40 is deff the way to go. Tilt hitch seemed dear new when I was weighing it up verses engcon. but will go 180 if you buy the right one Well-known member but will go 180 if you buy the right one Iirc the lad i know with a couple says be better lower build height and limit the tilt. do it as well as you can,but learn to do it better Iirc the lad i know with a couple says be better lower build height and limit the tilt. can't see how you could lower the build height by limiting the tilt on a rotary / helical --- it is what it is Well-known member can't see how you could lower the build height by limiting the tilt on a rotary / helical --- it is what it is Dont know tbh - seemed a gangly thing in the photos but didnt take a lot of notice otherwise, seemed to remember him saying something along those lines a year or 2 ago. can't see how you could lower the build height by limiting the tilt on a rotary / helical --- it is what it is I’d imagine he’s referring to a ram type tilt hitch- although you could bring the hitch up under a helac unit which would cause it to ground out before 90 degrees. do it as well as you can,but learn to do it better I’d imagine he’s referring to a ram type tilt hitch- although you could bring the hitch up under a helac unit which would cause it to ground out before 90 degrees. I had considered that, but we were discussing (or so I thought) helical type -- yes if the under bracket was too short, it'd be possible to clash with the upper, but it'd wanna be bloody tight Well-known member People’s thoughts on changing to S40 now? Buckets are slightly hire spec at best Other option I’m toying with is just a standard style tilty hitch I love the S40 system, best thing I ever did was flog 15 x 3cx buckets I had and buy into the S- type setup! I love the S40 system, best thing I ever did was flog 15 x 3cx buckets I had and buy into the S- type setup! Same, I’m going straight for s45 with the 8 tonner I love the S40 system, best thing I ever did was flog 15 x 3cx buckets I had and buy into the S- type setup! There is part of me wonders if thats the best plan of the whole lot Same, I’m going straight for s45 with the 8 tonner im guessing S45 be kinda big for my machine? It seems a bit of a no mans land where some are on s45 an some s40? 
Well-known member There is part of me wonders if thats the best plan of the whole lot im guessing S45 be kinda big for my machine? It seems a bit of a no mans land where some are on s45 a some s40? You want S40 on that Takeuchi - S45 is a lot bigger and not necessary on a 6 tonner. You want S40 on that Takeuchi - S45 is a lot bigger and not necessary on a 6 tonner. Thanks ill keep that in mind. Kinda doubt a total change is in the budget but i also guess the cheapest time is now Thanks ill keep that in mind. Kinda doubt a total change is in the budget but i also guess the cheapest time is now Well-known member Def s40 on that - i thought s40 looked a bit lightweight after coming off 60mm on the 8t but its obviously plenty man enough. Well-known member You want S40 on that Takeuchi - S45 is a lot bigger and not necessary on a 6 tonner. I got sent a 3ft bucket with a S45 top on it by accident direct from Engcon, it was blooming huge, the headstock hung over the front and back of the bucket- was a mess! They had it back and sent the correct one out Well-known member Euro Auctions Leeds this week. The Doosan had about 3000 hours. Would you have went for a fresher Chinese machine instead? Euro Auctions Leeds this week. The Doosan had about 3000 hours. Would you have went for a fresher Chinese machine instead? I would have went after the lad that did the quick hitch hoses on the doosan
{"url":"https://www.planttalk.co.uk/threads/2-7t.5584/page-5","timestamp":"2024-11-06T01:22:03Z","content_type":"text/html","content_length":"136674","record_id":"<urn:uuid:dcd2b581-db5a-44c5-b79c-00f394401949>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00763.warc.gz"}
Details on the supported Wavelet families in the DiscreteTransforms package
Note: All wavelets have been normalized to have L2 norm 1. This means that values given here may be different (usually by a factor of sqrt(2) or 1/sqrt(2)) from values listed in references.
Orthogonal Wavelet Families
The Daubechies Wavelets are a family of orthogonal wavelets with vanishing moments, and were developed by Ingrid Daubechies. In WaveletCoefficients(Daubechies,n), n can be any positive even number. n is the size of the resulting filters. The Daubechies wavelet of size n has n/2 vanishing moments. The values given by WaveletCoefficients for the Daubechies Coefficients, when multiplied by sqrt(2), agree with those in "Ten Lectures on Wavelets" by Ingrid Daubechies.
Symlets are also known as the Daubechies least asymmetric wavelets. Their construction is very similar to the Daubechies wavelets. Whereas the Daubechies wavelets have maximal phase, the Symlets have minimal phase. In WaveletCoefficients(Symlet,n), n can be any positive even number. n is the size of the resulting filters. The Symlet wavelet of size n has n/2 vanishing moments. The values given by WaveletCoefficients, when normalized, agree with those listed in "Ten Lectures on Wavelets" by Ingrid Daubechies.
Coiflets are a family of orthogonal wavelets designed by Ingrid Daubechies to have better symmetry than the Daubechies wavelets. Note: Currently, only Coiflets 1-7 are supported. In WaveletCoefficients(Coiflet,n), n can be 1,2,3,4,5,6, or 7. The nth Coiflet has size 6n. Coiflet scaling functions have 2n-1 vanishing moments, and their wavelet functions have 2n vanishing moments. The algorithm used to generate Coiflets is a modification of the one given in "Orthonormal Bases of Compactly Supported Wavelets II," by Ingrid Daubechies. The values generated agree with those in "Ten Lectures on Wavelets" by Ingrid Daubechies, when normalized.
Battle-Lemarie wavelets, also known as orthogonal spline wavelets, are a family of wavelets developed from a multiresolution analysis of spaces of piecewise polynomial, continuously differentiable functions. Unlike many other wavelets, they have closed form representations in the frequency domain. Battle-Lemarie wavelets use guarddigits=5 by default. This greatly speeds up WaveletCoefficients by allowing it to do hardware float integration. WARNING: Because of the low default setting of guarddigits, a call to WaveletCoefficients for BattleLemarie with Digits=10 will result in an answer that is not necessarily accurate to full hardware float precision. Battle-Lemarie wavelets do not have compact support. That is, the associated filters do not have finite length. WaveletCoefficients(BattleLemarie, 4, 5) will give the 4th Battle-Lemarie wavelet with 11 coefficients. In general, WaveletCoefficients(BattleLemarie, n, m) will give the nth Battle-Lemarie wavelet with 2m+1 coefficients. The coefficients in the Battle-Lemarie wavelets converge very quickly to zero, so although WaveletCoefficients(BattleLemarie,n,m) will give filters that are not quite orthogonal, they are usually almost orthogonal.
Increasing m will improve the orthogonality of the resulting wavelet. WaveletCoefficients(BattleLemarie, n, m) gives the middle 2m+1 coefficients of WaveletCoefficients(BattleLemarie, n, m+1). Because WaveletCoefficients(BattleLemarie,n,m) uses numerical integration, increasing the Digits setting will significantly affect performance.
Biorthogonal Wavelets
The Cohen-Daubechies-Feauveau 9-tap/7-tap wavelet, or CDF wavelet, is used in the JPEG 2000 image compression standard. WaveletCoefficients(CDF) gives the CDF wavelet. It in fact returns four length 10 Vectors. This is to allow for offsets.
Biorthogonal spline wavelets are a family of biorthogonal wavelets. In WaveletCoefficients(BiorthogonalSpline, b, c), b and c can be any positive integers whose sum is even. b and c are the number of vanishing moments of the analysis and synthesis filters, respectively.
References
Daubechies, Ingrid. "Orthonormal Bases of Compactly Supported Wavelets II: Variations on a Theme." SIAM J. Math. Anal. (March 1993).
Daubechies, Ingrid. "Ten Lectures on Wavelets." SIAM, 1992.
See Also: DiscreteWaveletTransform, InverseDiscreteWaveletTransform, WaveletCoefficients, Wavelet Examples and Applications, WaveletPlot
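As an independent check of the normalization note at the top of this page (a short Python snippet added here; it is not part of the Maple documentation), the four Daubechies D4 scaling coefficients in the convention of Daubechies' book have sum sqrt(2), squared sum 1, and are orthogonal to their own shift by two; the values returned by WaveletCoefficients may differ from these by the sqrt(2) factor mentioned above.
import math

s3 = math.sqrt(3.0)
h = [(1 + s3), (3 + s3), (3 - s3), (1 - s3)]
h = [c / (4 * math.sqrt(2.0)) for c in h]      # D4 scaling filter with l2 norm 1

print(sum(h))                       # sqrt(2) = 1.41421...
print(sum(c * c for c in h))        # 1.0  (the l2 normalization)
print(h[0] * h[2] + h[1] * h[3])    # 0.0  (orthogonal to its double shift)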
{"url":"https://de.maplesoft.com/support/help/content/2616/DiscreteTransforms-Wavelets.mw","timestamp":"2024-11-12T17:19:05Z","content_type":"application/xml","content_length":"28258","record_id":"<urn:uuid:dc467f37-4ad0-4a12-aab6-1eb4eb2dcee4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00838.warc.gz"}
The Death of Similar Mathematics With the assistance of a tutor, Mathematics can be as simple as ABC. Believe it or not, you may require many skills in math to maintain a budget current. You find a tutor, somebody who can guide and teach you. The standards are made to be certain that Indiana students are ready to enter and successfully complete postsecondary education, and that they’re ready for long-term, economically-viable career opportunities. Thus, this book’s aim is to assist undergraduates rapidly develop the fundamental understanding of engineering mathematics. Although it can be tempting to assist students, it’s important that students are NOT told what things to do by another person. How to Get Started with Similar Mathematics? They are sometimes different sizes, but they should have exactly the same form. You must be in a position to apply it. Geometric figures are supposedly similar if they’re identical in shape, whether or not they are identical in dimension. http://www.bu.edu/today/2012/bu-joins-association-of-american-universities/ You’ll also use precisely the same number again and again, which can feel weird sometimes. Search for the perfect one here. The history of mathematics can be regarded as an ever-increasing set of abstractions. In this and other associated lessons we’ll briefly explain fundamental math operations. If you want to earn Word useful for more educational and research work, take a look at the Chemistry Add-in for Word also! If this is the case, you should look at a major in mathematics. They can include math vocabulary, story problems, or even equations. Choose a category to discover a math crossword puzzle to fulfill your requirements, or create your own math crossword puzzle. Similar Mathematics Secrets Authors using LaTeX are advised to use the Magazine article template. In Word, you will have access to a wide selection of equation editing tools that are built-in. Take a look at the post for details research paper help on our neighborhood competition and the worldwide modeling competition. The Ultimate Similar Mathematics Trick Not every nautilus shell creates a Fibonacci spiral, though all of them adhere to some kind of logarithmic spiral. Nonetheless, it is completely your choice how you vector. Polynomial functions could be given geometric representation by way of analytic geometry. Similar shapes don’t have to be the exact same dimensions, nor do they need to be in an identical position. That’s since there aren’t any shapes very similar to the pink circle. If that’s the case, then both polygons are alike. Contemplating the broad cluster of alternatives that are accessible, investing considerable energy to investigate which computerized learning stage is the most reasonable to fulfill your scientific needs is something which will satisfy over the very long haul. Keep a look out for problems such as these, and you’ll reap the benefits with a greater score! Bear in mind you may choose to take either test on test day, no matter what test you registered for. These guys are graded on the physical portion of the game that’s only about 10% of what is necessary to be a superior lineman. Furthermore, the significant reason kids hate maths is they do not like to practice and solve many questions at one time. Lifelong learning is a brilliant approach to stay in contact with people, meet new friends, and relish life surrounded by the business of folks that are truly embracing the excitement of our later decades. 
Most Noticeable Similar Mathematics If you score a whole lot higher on a single test, pick it. As you are acting, you will need to observe your progress. You are able to also use the very same as to compare actions. Nobody enjoys loneliness. There’s no hope this is likely to get fixed. Any sensible person would agree this shouldn’t be happening. These puppies aren’t the exact same thing as the great old SAT. This is nothing to be worried about. It is very important that you always bear that in mind. How to Find Similar Mathematics on the Web Inside this section, we discuss sensitivity coefficients, and the way they may be used to ascertain key parameters. We can also utilize Pascal’s Triangle to discover the values. His strategy contains four phases. Every test is currently a one-hour timed test. After this we can discover the Scale Factor which exists between the 2 tiangles. This was the most innovative number system on the planet at the moment, apparently in use several centuries before the frequent era and well before the evolution of the Indian numeral system. The college application procedure is stressful enough without fretting about something previously you can’t change. For instance, the tuition company that must get an assurance, that in case the tutor you select is certainly not the best one, of course, when you end the expert services of the tutor in the very first hour of service, you are going to be waived out of your very first lesson’s charges. Functions have been put to use in mathematics for quite a long time, and a lot of unique names and ways of writing functions have come about. The truly amazing variety in ways of representing numbers is particularly intriguing. Similar Mathematics Options However I don’t understand the question and consequently I don’t understand how to get started. Mathematical ideas are often accepted dependent on deductive proofs, while ideas in different sciences are usually accepted dependent on the accumulation of several distinctive observations supporting the idea. They also give a thorough explanation of each answer. Irrespective of how you state the issue, your statement ought to be as specific as possible. The issue is known as the boundary-rigidity conjecture.
{"url":"https://charityschakras.com/2019/11/28/the-death-of-similar-mathematics/","timestamp":"2024-11-05T00:42:48Z","content_type":"text/html","content_length":"39845","record_id":"<urn:uuid:a8c19dff-ed9f-4b1e-9e5a-79f308f5058b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00299.warc.gz"}
• 【标题】Analytical solutions of H2 control and efficiency-based designs of structure systems equipped with a tuned viscous mass damper • 【摘要】In this study, an optimal design of a tuned viscous mass damper (TVMD) incorporated into a single-degree-of-freedom structure system subjected to earthquake-induced random vibrations is studied. Closed-form design formulas of the optimal frequency ratio and damping ratio are proposed for the control of structural displacement and acceleration responses based on H2 norm optimization. The optimal solutions exist when the mass ratio is smaller than 0.11. The calculated optimal parameters for displacement control and acceleration control are close to each other, which indicates that the displacement and acceleration responses can be controlled simultaneously with the TVMDs designed by the easy-to-use formulas proposed in this study. To expand the use of the proposed formulas for cases with large mass ratios, the alternative suggested design (ASD) is proposed to explicitly represent the recommended relationships between the design parameters. This paper also examines the control effect of a TVMD over linear viscous dampers, defining an efficiency index as a ratio of the response yielded by a TVMD system to that of a viscous damper system. The optimally designed TVMD is generally more efficient than directly adding damping with viscous dampers when the mass ratio is small, while this advantage is insignificant when the mass ratio grows. Based on this outcome, a new method termed efficiency-based design is proposed. This user-friendly method can assist practicing engineers in determining the mass ratio, the frequency ratio, and the damping ratio of TVMDs. • 【关键词】efficiency-based design; H2 control; optimization; structural control; tuned viscous mass damper (TVMD) • 【索引】 • 【获取全文】PDF下载
{"url":"http://jixiaodong.net/frames/resrc.php?lang=cn&type=sci&id=37","timestamp":"2024-11-13T04:35:37Z","content_type":"text/html","content_length":"3031","record_id":"<urn:uuid:3c0a4546-2d20-4be2-a88a-4497e6e7a336>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00870.warc.gz"}
Acceleration methods for image super-resolution When designing and deploying computer vision systems, engineers are often confronted with the limits of available cameras with respect to the image quality they can provide. Similar to other sensing devices, cameras alter the measured signal and often yield images representing a degraded version of the original scene. Degradation sources include blur induced by such factors as the lens and sensors, frequency aliasing due to subsampling, signal quantization and circuitry reading noise. The consequence is that the fine details in a scene are often lost during the image acquisition process. On the other hand, the importance of digital imaging technologies in consumer and other markets motivates the need for higher quality images. In particular, high-resolution (HR) images are desired and often required since they can provide details that are critical for subsequent analysis in many applications. To obtain higher camera resolution, one needs to increase the number of pixel sensors on the camera chip, which can be achieved by either reducing the size of the sensors or increasing the size of the chip. Unfortunately, the application of such solutions is limited due to constraints imposed by physics and image sensor technology. One such constraint is shot noise, which becomes more significant as the size of the pixel sensors decreases. Also, the higher level of capacitance associated with larger chips makes it difficult to accelerate the charge transfer rate in such structures. Finally, although high precision image sensors and optical components are sometimes available, they may also be prohibitively expensive for general purpose commercial applications. Due to the previous limitations, there is a growing interest in image processing algorithms capable of overcoming the resolution limits of imaging systems. Super-resolution (SR) refers to techniques for synthesizing a HR image or video sequence from a set of degraded and aliased low-resolution (LR) ones. Generally, this is done by exploiting knowledge of the relative subpixel displacements of each LR image with respect to a reference frame. In situations where high quality vision systems cannot be incorporated or are too expensive to deploy, such algorithms may constitute an alternative and allow for the recovery of high quality images from more affordable imaging hardware. Super-resolution is generally formulated as an optimization problem, in which the HR image sought is computed by iteratively improving an initial solution until the desired level of convergence is achieved. The efficiency of the restoration process is determined by the convergence rate of the iterative method employed, which, in turn, depends on the SR problem formulation. In this project, we are interested in the development of fast SR algorithms. The motivation comes from the fact that the computational complexity associated with many SR algorithms may hinder their use in time-critical applications. In particular, we developed an efficient method for accelerating computations associated with edge-preserving image SR problems in which a linear shift-invariant (LSI) point spread function (PSF) is employed. Our technique is suitable for SR scenarios where the motion between the observed LR images is translational. It is also possible to accelerate SR algorithms employing rational magnification factors. 
The use of such factors is motivated in part by the work of Lin and Shum suggesting that, under certain circumstances, optimal magnification factors for SR are non-integers. Related Work The super-resolution literature is very rich and good summaries of existing methods are given by Borman and Stevenson and Park et al.. Special issues on super-resolution edited by Kang and Chaudhuri, Ng et al. and Hardie et al. were also published recently. Finally, different topics related to super-resolution, including motionless super-resolution, are discussed in two books by Chaudhuri. To accelerate SR computations, some methods shift and interpolate all LR frames on a single HR grid, and then apply an iterative deblurring process to this temporary image in order to synthesize the HR image. However, the optimality of the HR images obtained in this manner is difficult to verify, since the interpolation process introduces errors in the input data. Nonetheless, when the motion model employed is translational, Elad and Hel-Or proposed a technique for fusing the LR images in a way that preserves the optimality of the SR reconstruction process in the maximum-likelihood sense. This optimality relies on the assumption that the translations between the LR frames are discretized on the HR grid. These authors assume that rounding the displacement values has a negligible effect on image quality, since it occurs on the HR grid. Farsiu et al. later adapted this method to the robust SR case. Tanaka and Okutomi also exploited such a discretization of the displacements between the observed pixels in order to accelerate computations in SR problems. Their method groups LR pixels whose positions are the same after discretization and computes the average of the pixel values in each group. The computations are then performed on this reduced set of averaged values instead of the original data. However, such discretizations can produce undesirable artifacts in the restored HR image, particularly when small magnification factors are employed. Unlike these techniques, the proposed method does not sacrifice the optimality of the SR problem formulation when noninteger displacements are present between the LR images. When both the computational speed and the observation model accuracy are deemed important, our method thus constitutes a valuable approach. We conduct a few restoration experiments using a set of 235 Bayer CFA images captured with a Point Grey Flea2 digital camera. During video acquisition, the camera is slightly rotated manually about its vertical and horizontal axes, which results in small translational motion between the LR frames. The resolution of this set of images is increased using magnification factors of two, three and four, as shown below. In the plots below, we compare the performance of the PNCG and NCG methods under different conditions by varying the number of LR images used as input. For each experiment, we indicate the number of PNCG and NCG iterations required to solve the problem, along with the computation time. Note that for both measurements, each number corresponds to the sum of the values for all color channels. To facilitate comparisons, the ratio of the number of iterations for the PNCG method to that of the NCG method is indicated for each experiment above the associated pair of bars in the plots on the left. Similarly, the ratio of the computation time for the PNCG method to that of the NCG method is provided in the plots on the right. 
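Before the experimental figures, here is a minimal sketch of the shift-and-add fusion idea reviewed above (a simplified Python illustration, not the code of this project or of the cited papers; it assumes the frame displacements are already expressed in HR-pixel units, and a deblurring step would still follow):
import numpy as np

def shift_and_add(frames, shifts_hr, scale, hr_shape):
    """Average the LR samples that land on each HR-grid position (rounded shifts)."""
    acc = np.zeros(hr_shape)
    count = np.zeros(hr_shape)
    for frame, (dy, dx) in zip(frames, shifts_hr):
        # Round the sub-pixel displacement to whole HR pixels, as the fast methods do.
        ry, rx = int(round(dy)), int(round(dx))
        ys = ry + scale * np.arange(frame.shape[0])   # HR rows hit by this frame
        xs = rx + scale * np.arange(frame.shape[1])   # HR columns hit by this frame
        ok_y = (ys >= 0) & (ys < hr_shape[0])
        ok_x = (xs >= 0) & (xs < hr_shape[1])
        yy, xx = np.meshgrid(ys[ok_y], xs[ok_x], indexing='ij')
        acc[yy, xx] += frame[np.ix_(ok_y, ok_x)]
        count[yy, xx] += 1
    fused = np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
    return fused, count   # count shows which HR pixels actually received observations
The rounding inside the loop is exactly the approximation whose visual cost is examined below for small magnification factors.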
[Result plots: magnification factors of two, three and four]

We now show that rounding the shift values can have a strong negative impact on the visual quality of the HR image. For this purpose, we repeat the SR experiments presented above; this time, however, we round all displacement values of the LR frames on the HR grid prior to SR restoration. The figure below shows a comparison of the two approaches. For better visualization, we only show a region of interest in the restored images. By comparing corresponding images in both sets of experiments, one can readily observe that the HR images synthesized after rounding the shift values contain more artifacts than those obtained without altering these displacements. Furthermore, such undesirable effects are more visible when smaller magnification factors are employed. Note that one possibility to reduce the visual impact of such artifacts is to employ a one-norm penalty function in the data consistency term, which is known to be more robust to misregistered LR frames. Although this might be an acceptable approach in the case where the errors in the motion vectors stem from limitations of the image registration algorithm, we argue that this solution is suboptimal when such errors result from a deliberate deterioration of the registration parameter values by enforcing the nearest neighbor displacement paradigm.

• S. Borman and R. Stevenson, "Spatial resolution enhancement of low resolution image sequences - a comprehensive review with directions for future research," tech. rep., University of Notre Dame,
• S. Chaudhuri, Super-Resolution Imaging. Norwell, MA, USA: Kluwer Academic Publishers, 2001.
• S. Chaudhuri and J. Manjunath, Motion-Free Super-Resolution. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2005.
• M. Elad and Y. Hel-Or, "Fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur," IEEE Trans. Image Processing, vol. 10, pp. 1187-1193, Aug.
• S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, "Fast and robust multiframe super resolution," IEEE Trans. Image Processing, vol. 13, pp. 1327-1344, Oct. 2004.
• R. C. Hardie, R. R. Schultz, and K. E. Barner, "Super-resolution enhancement of digital video (guest editorial)," EURASIP Journal on Advances in Signal Processing, special issue on super-resolution enhancement of digital video, vol. 2007, pp. Article ID 20984, 3 pages, 2007.
• M. G. Kang and S. Chaudhuri, "Super-resolution image reconstruction (guest editorial)," IEEE Signal Processing Magazine, special issue on super-resolution, vol. 20, pp. 19-20, May 2003.
• Z. Lin and H.-Y. Shum, "Fundamental limits of reconstruction-based superresolution algorithms under local translation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 1, pp. 83-97, 2004.
• M. Ng, T. Chan, M. G. Kang, and P. Milanfar, "Super-resolution imaging: analysis, algorithms, and applications (guest editorial)," EURASIP Journal on Applied Signal Processing, special issue on super-resolution, vol. 2006, pp. Article ID 90531, 2 pages, 2006.
• S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: A technical overview," IEEE Signal Processing Magazine, vol. 20, pp. 21-36, May 2003.
• M. Tanaka and M. Okutomi, "A fast algorithm for reconstruction-based superresolution and evaluation of its accuracy," Systems and Computers in Japan, vol. 38, no. 7, pp. 44-52, 2007.

Image datasets
Image datasets for performing super-resolution are available here.
Last update: 5 November 2009
{"url":"https://srl.cim.mcgill.ca/projects/superres/","timestamp":"2024-11-11T16:46:52Z","content_type":"text/html","content_length":"13772","record_id":"<urn:uuid:a6711cfb-2ca4-4260-b743-f9d3c8bfcd0c>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00564.warc.gz"}
Equality property of angles opposite to equal sides in a triangle

Equality of opposite angles when two sides of a triangle are equal: If two sides of a triangle are equal, then the angles opposite those two sides are also equal.

The lengths of two sides of a triangle can be equal; this happens in equilateral and isosceles triangles. Because the two sides are equal, the angles opposite them are also equal. This theorem can also be proved geometrically on the basis of symmetry. Therefore, the angles opposite the two sides of equal length are equal in a triangle. To prove this theorem, we construct either an equilateral or an isosceles triangle.

$\Delta ABC$ is a triangle whose two sides have equal length, so the triangle $ABC$ is an isosceles triangle. In this isosceles triangle, the lengths of the sides $\overline{AC}$ and $\overline{BC}$ are equal.

$AC = BC$

Draw a perpendicular to the side $\overline{AB}$ from point $C$. It meets $\overline{AB}$ exactly at its midpoint, and the point of intersection is $D$. Therefore, $AD = BD$. The line $\overline{DC}$ also divides $\Delta ABC$ into two right-angled triangles, $\Delta DAC$ and $\Delta DBC$. The perpendicular $\overline{DC}$ is a side common to both right-angled triangles. Now compare the lengths of the three sides of the two triangles.
1. $AC = BC$
2. $AD = BD$
3. $DC = DC$

Comparing the side lengths shows that the two triangles are the same triangle represented in two different ways. Therefore, $\Delta DAC$ and $\Delta DBC$ are congruent triangles; they satisfy the Right angle-Hypotenuse-Side (RHS) criterion.

$\Delta DAC \cong \Delta DBC$

Here, $\angle DAC$ is the angle opposite the side $\overline{BC}$, and $\angle DBC$ is the angle opposite the side $\overline{AC}$. The two angles are equal because the triangles are congruent.

$\angle DAC = \angle DBC$

Therefore, if two sides of a triangle are equal, then the angles opposite them are also equal. This property can also be verified geometrically by constructing a triangle in which two sides have equal length.

Take a ruler and draw a horizontal straight line $\overline{XY}$ of length $8 \, cm$. Set the distance between the needle point and pencil point of a compass to $6 \, cm$. Draw an arc from point $X$ and another arc from point $Y$ so that the two arcs intersect. Call the point of intersection $Z$. Using the ruler, join $X$ to $Z$ and $Y$ to $Z$ with straight lines. Thus the triangle $\Delta XYZ$ is constructed geometrically.

$\angle XYZ$ and $\angle YXZ$ are the angles opposite the equal-length sides $\overline{XZ}$ and $\overline{YZ}$ respectively. Now measure the angles $XYZ$ and $YXZ$ with a protractor. You will observe that $\angle XYZ$ and $\angle YXZ$ are equal, each measuring about $48^\circ$. Therefore, the angles opposite the two equal-length sides are equal.
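The construction above can also be checked numerically. The short Python sketch below is not part of the original page; it simply applies the law of cosines to the triangle with base 8 cm and equal sides 6 cm, and confirms that the two angles opposite the equal sides agree, each close to the 48° measured with the protractor.

    import math

    # Triangle XYZ from the construction: XY = 8 cm, XZ = YZ = 6 cm.
    XY, XZ, YZ = 8.0, 6.0, 6.0

    # Law of cosines: the angle at a vertex is computed from the two sides meeting there.
    angle_X = math.degrees(math.acos((XY**2 + XZ**2 - YZ**2) / (2 * XY * XZ)))
    angle_Y = math.degrees(math.acos((XY**2 + YZ**2 - XZ**2) / (2 * XY * YZ)))
    angle_Z = 180.0 - angle_X - angle_Y

    print(round(angle_X, 2), round(angle_Y, 2), round(angle_Z, 2))
    # 48.19 48.19 83.62 -> the angles opposite the equal sides agree, about 48 degrees as measured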
{"url":"https://www.mathdoubts.com/angles-opposite-two-equal-sides-triangle/","timestamp":"2024-11-11T00:43:45Z","content_type":"text/html","content_length":"30303","record_id":"<urn:uuid:62e2a267-6f75-4ac7-ac77-befe17a90502>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00325.warc.gz"}
When is a Locally Convex Space Eberlein–Grothendieck? The weak topology of a locally convex space (lcs) E is denoted by w. In this paper we undertake a systematic study of those lcs E such that (E, w) is (linearly) Eberlein–Grothendieck (see Definitions 1.2 and 3.1). The following results obtained in our paper play a key role: for every barrelled lcs E, the space (E, w) is Eberlein–Grothendieck (linearly Eberlein–Grothendieck) if and only if E is metrizable (E is normable, respectively). The main applications concern the space of continuous real-valued functions on a Tychonoff space X endowed with the compact-open topology C_k(X). We prove that (C_k(X), w) is Eberlein–Grothendieck (linearly Eberlein–Grothendieck) if and only if X is hemicompact (X is compact, respectively). Besides this, we show that the class of E for which (E, w) is linearly Eberlein–Grothendieck preserves linear continuous quotients. Various illustrative examples are provided.
• Barrelled space
• C(X) space
• Compact space
• Locally convex space
• Weak topology
ASJC Scopus subject areas
• Mathematics (miscellaneous)
• Applied Mathematics
{"url":"https://cris.bgu.ac.il/en/publications/when-is-a-locally-convex-space-eberleingrothendieck","timestamp":"2024-11-06T08:58:39Z","content_type":"text/html","content_length":"56745","record_id":"<urn:uuid:1b897f68-a4cc-414b-8c11-8209cf984317>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00510.warc.gz"}
6 Converting and Adjusting Recipes and Formulas

Recipes often need to be adjusted to meet the needs of different situations. The most common reason to adjust recipes is to change the number of individual portions that the recipe produces. For example, a standard recipe might be written to prepare 25 portions. If a situation arises where 60 portions of the item are needed, the recipe must be properly adjusted. Other reasons to adjust recipes include changing portion sizes (which may mean changing the batch size of the recipe) and better utilizing available preparation equipment (for example, you need to divide a recipe to make two half batches due to a lack of oven space).

Conversion Factor Method

The most common way to adjust recipes is to use the conversion factor method. This requires only two steps: finding a conversion factor and multiplying the ingredients in the original recipe by that factor.

Finding Conversion Factors

To find the appropriate conversion factor to adjust a recipe, follow these steps:
1. Note the yield of the recipe that is to be adjusted. The number of portions is usually included at the top of the recipe (or formulation) or at the bottom of the recipe. This is the information that you HAVE.
2. Decide what yield is required. This is the information you NEED.
3. Obtain the conversion factor by dividing the required yield (from Step 2) by the old yield (from Step 1). That is, conversion factor = (required yield)/(recipe yield) or conversion factor = what you NEED ÷ what you HAVE.

To find the conversion factor needed to adjust a recipe that produces 25 portions to produce 60 portions, these are steps you would take:
1. Recipe yield = 25 portions
2. Required yield = 60 portions
3. Conversion factor = (required yield) ÷ (recipe yield) = 60 portions ÷ 25 portions = 2.4

If the number of portions and the size of each portion change, you will have to find a conversion factor using a similar approach:
1. Determine the total yield of the recipe by multiplying the number of portions and the size of each portion.
2. Determine the required yield of the recipe by multiplying the new number of portions and the new size of each portion.
3. Find the conversion factor by dividing the required yield (Step 2) by the recipe yield (Step 1). That is, conversion factor = (required yield)/(recipe yield).

For example, to find the conversion factor needed to change a recipe that produces 20 portions with each portion weighing 150 g into a recipe that produces 40 portions with each portion containing 120 g, these are the steps you would take:
1. Old yield of recipe = 20 portions × 150 g per portion = 3000 g
2. Required yield of recipe = 40 portions × 120 g per portion = 4800 g
3. Conversion factor = required yield ÷ old yield = 4800 ÷ 3000 = 1.6

To ensure you are finding the conversion factor properly, remember that if you are increasing your amounts, the conversion factor will be greater than 1. If you are reducing your amounts, the factor will be less than 1.

Adjusting Recipes Using Conversion Factors

Now that you have the conversion factor, you can use it to adjust all the ingredients in the recipe. The procedure is to multiply the amount of each ingredient in the original recipe by the conversion factor. Before you begin, there is an important first step: Before converting a recipe, express the original ingredients by weight whenever possible. Converting to weight is particularly important for dry ingredients.
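The conversion-factor arithmetic described above is easy to express in code. The following Python sketch is only an illustration (it is not part of the original chapter and the function name is made up); it reproduces the two worked examples before the discussion returns to weights and measures.

    def conversion_factor(required_yield, recipe_yield):
        """conversion factor = what you NEED / what you HAVE"""
        return required_yield / recipe_yield

    # Example 1: same portion size, 25 portions scaled to 60 portions
    print(conversion_factor(60, 25))                 # 2.4

    # Example 2: portion count and portion size both change
    old_total = 20 * 150                             # 3000 g
    new_total = 40 * 120                             # 4800 g
    print(conversion_factor(new_total, old_total))   # 1.6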
Most recipes in commercial kitchens express the ingredients by weight, while most recipes intended for home cooks express the ingredients by volume. If the amounts of some ingredients are too small to weigh (such as spices and seasonings), they may be left as volume measures. Liquid ingredients also are sometimes left as volume measures because it is easier to measure a litre of liquid than it is to weigh it. However, a major exception is measuring liquids with a high sugar content, such as honey and syrup; these should always be measured by weight, not volume. Converting from volume to weight can be a bit tricky and may require the use of tables that provide the approximate weight of different volume measures of commonly used recipe ingredients. Once you have all ingredients in weight, you can then multiply by the conversion factor to adjust the recipe. When using U.S. or imperial recipes, often you must change the quantities of the original recipe into smaller units. For example, pounds may need to be expressed as ounces, and cups, pints, quarts, and gallons must be converted into fluid ounces. Converting a U.S. Measuring System Recipe The following example will show the basic procedure for adjusting a recipe using U.S. measurements. Adjust a standard formulation (Table 13) designed to produce 75 biscuits to have a new yield of 300 biscuits. Table 13: Table of ingredients for conversion recipe in U.S. system Ingredient Amount Flour 3¼ lbs. Baking Powder 4 oz. Salt 1 oz. Shortening 1 lb. Milk 6 cups 1. Find the conversion factor. 1. conversion factor = new yield/old yield 2. = 300 biscuits ÷ 75 biscuits 3. = 4 2. Multiply the ingredients by the conversion factor. This process is shown in Table 14. Table 14: Table of ingredients for recipe adjusted in U.S. system Ingredient Original Amount (U.S) Conversion factor New Ingredient Amount Flour 3¼ lbs. 4 13 lbs. Baking powder 4 oz. 4 16 oz. (= 1 lb.) Salt 1 oz. 4 4 oz. Shortening 1 lb. 4 4 lbs. Milk 6 cups 4 24 cups (= 6 qt. or 1½ gal.) Converting an Imperial Measuring System Recipe The process for adjusting an imperial measure recipe is identical to the method outlined above. However, care must be taken with liquids as the number of ounces in an imperial pint, quart, and gallon is different from the number of ounces in a U.S. pint, quart, and gallon. (If you find this confusing, refer back to Table 7 and the discussion on imperial and U.S. measurements.) Converting a Metric Recipe The process of adjusting metric recipes is the same as outlined above. The advantage of the metric system becomes evident when adjusting recipes, which is easier with the metric system than it is with the U.S. or imperial system. The relationship between a gram and a kilogram (1000 g = 1 kg) is easier to remember than the relationship between an ounce and a pound or a teaspoon and a cup. Adjust a standard formulation (Table 15) designed to produce 75 biscuits to have a new yield of 150 biscuits. Table 15: Table of ingredients for conversion recipe in metric system Ingredient Amount Flour 1.75 kg Baking powder 50 g Salt 25 g Shortening 450 g Milk 1.25 L 1. Find the conversion factor. 1. conversion factor = new yield/old yield 2. = 150 biscuits÷75 biscuits 3. = 2 2. Multiply the ingredients by the conversion factor. This process is shown in Table 16. 
Table 16: Table of ingredients for recipe adjusted in metric system Ingredient Amount Conversion Factor New Amount Flour 1.75 kg 2 3.5 kg Baking powder 50 g 2 100 g Salt 25 g 2 50 g Shortening 450 g 2 900 g Milk 1.25 L 2 2.5 L Cautions when Converting Recipes Although recipe conversions are done all the time, several problems can occur. Some of these include the following: • Substantially increasing the yield of small home cook recipes can be problematic as all the ingredients are usually given in volume measure, which can be inaccurate, and increasing the amounts dramatically magnifies this problem. • Spices and seasonings must be increased with caution as doubling or tripling the amount to satisfy a conversion factor can have negative consequences. If possible, it is best to under-season and then adjust just before serving. • Cooking and mixing times can be affected by recipe adjustment if the equipment used to cook or mix is different from the equipment used in the original recipe. The fine adjustments that have to be made when converting a recipe can only be learned from experience, as there are no hard and fast rules. Generally, if you have recipes that you use often, convert them, test them, and then keep copies of the recipes adjusted for different yields, as shown in Table 17. Recipes for Different Yields of Cheese Puffs Table 17.1: Cheese Puffs, Yield 30 Ingredient Amount Butter 90 g Milk 135 mL Water 135 mL Salt 5 mL Sifted flour 150 g Large eggs 3 Grated cheese 75 g Cracked pepper To taste Table 17.2: Cheese Puffs, Yield 60 Ingredient Amount Butter 180 g Milk 270 mL Water 270 mL Salt 10 mL Sifted flour 300 g Large eggs 6 Grated cheese 150 g Cracked pepper To taste Table 17.3: Cheese Puffs, Yield 90 Ingredient Amount Butter 270 g Milk 405 mL Water 405 mL Salt 15 mL Sifted flour 450 g Large eggs 9 Grated cheese 225 g Cracked pepper To taste Table 17.4: Cheese Puffs, Yield 120 Ingredient Amount Butter 360 g Milk 540 mL Water 540 mL Salt 20 mL Sifted flour 600 g Large eggs 12 Grated cheese 300 g Cracked pepper To taste Baker’s Percentage Many professional bread and pastry formulas are given in what is called baker’s percentage. Baker’s percentage gives the weights of each ingredient relative to the amount of flour (Table 18). This makes it very easy to calculate an exact amount of dough for any quantity. Table 18: A formula stated in baker’s percentage Ingredient % Total Unit Flour 100.0% 15 kg Water 62.0% 9.3 kg Salt 2.0% 0.3 kg Sugar 3.0% 0.45 kg Shortening 1.5% 0.225 kg Yeast 2.5% 0.375 kg Total weight: 171.0% 25.65 kg To convert a formula using baker’s percentage, there are a few options: If you know the percentages of the ingredients and amount of flour, you can calculate the other ingredients by multiplying the percentage by the amount of flour to determine the quantities. Table 19 shows that process for 20 kg flour. Table 19: Baker’s percentage formula adjusted for 20 kg flour Ingredient % Total Unit Flour 100.0% 20 kg Water 62.0% 12.4 kg Salt 2.0% 0.4 kg Sugar 3.0% 0.6 kg Shortening 1.5% 0.3 kg Yeast 2.5% 0.5 kg Total weight: 171.0% 34.20 kg If you know the ingredient amounts, you can find the percentage by dividing the weight of each ingredient by the weight of the flour. Remember, flour is always 100%. For example, the percentage of water is 6.2 ÷ 10 = 0.62 × 100 or 62%. Table 20 shows that process for 10 kg of flour. 
Table 20: Baker’s percentages given for known quantities of Ingredient % Total Unit Flour 100.0% 10 kg Water 62.0% 6.2 kg Salt 2.0% 0.2 kg Sugar 3.0% 0.3 kg Shortening 1.5% 0.15 kg Yeast 2.5% 0.25 kg Use baker’s percentage to find ingredient weights when given the total dough weight. For instance, you want to make 50 loaves at 500 g each. The weight is 50 × 0.5 kg = 25 kg of dough. You know the total dough weight is 171% of the weight of the flour. To find the amount of flour, 100% (flour) is to 171% (total %) as n (unknown) is to 25 (Table 21). That is, 1. 100 ÷ 171 = n ÷ 25 2. 25 × 100 ÷ 171 = n 3. 14.62 = n Table 21: Formula adjusted based on total dough weight Ingredient % Total Unit Flour 100.0% 14.62 kg Water 62.0% 9.064 kg Salt 2.0% 0.292 kg Sugar 3.0% 0.439 kg Shortening 1.5% 0.219 kg Yeast 2.5% 0.366 kg Total weight: 171.0% 25.00 kg As you can see, both the conversion factor method and the baker’s percentage method give you ways to convert recipes. If you come across a recipe written in baker’s percentage, use baker’s percentage to convert the recipe. If you come across a recipe that is written in standard format, use the conversion factor method. A formula that states the ingredients in relation to the amount of flour.
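As a rough companion to the tables above (not part of the original text), the following Python sketch scales an ingredient list by a conversion factor and converts between baker's percentages and weights. The ingredient lists are taken from the metric biscuit formula and the bread formula, and all function names are invented for illustration.

    def scale_recipe(ingredients, factor):
        """Multiply every ingredient amount by the conversion factor."""
        return {name: round(amount * factor, 1) for name, amount in ingredients.items()}

    def from_bakers_percent(percents, flour_weight):
        """Ingredient weights from baker's percentages (flour = 100%)."""
        return {name: round(flour_weight * pct / 100, 3) for name, pct in percents.items()}

    def flour_for_dough(percents, total_dough_weight):
        """Flour weight for a target dough weight: flour = total * 100 / total%."""
        return total_dough_weight * 100 / sum(percents.values())

    # Metric biscuit recipe (Table 15) doubled, as in Table 16 (grams / millilitres).
    biscuits = {"flour": 1750, "baking powder": 50, "salt": 25, "shortening": 450, "milk": 1250}
    print(scale_recipe(biscuits, 2))

    # Bread formula in baker's percentage (Table 18), for 50 loaves at 500 g each = 25 kg of dough.
    formula = {"flour": 100.0, "water": 62.0, "salt": 2.0, "sugar": 3.0, "shortening": 1.5, "yeast": 2.5}
    flour_kg = flour_for_dough(formula, 25)          # about 14.62 kg, as in Table 21
    print(round(flour_kg, 2), from_bakers_percent(formula, round(flour_kg, 2)))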
{"url":"https://opentextbc.ca/basickitchenandfoodservicemanagement/chapter/convert-and-adjust-recipes-and-formulas/","timestamp":"2024-11-07T17:16:44Z","content_type":"text/html","content_length":"94450","record_id":"<urn:uuid:c07edd3b-1bfb-4002-9ab6-f35e79c9752a>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00103.warc.gz"}
Accuracy of prediction equations to estimate submaximal V̇O2 during cycle ergometry: The HERITAGE Family Study
It was hypothesized that more accurate equations for estimating submaximal V̇O2 during cycle ergometry could be developed if more independent variables were used in the equation. Purpose: The purposes of this study were: (1) to develop new equations for estimating submaximal V̇O2 during cycle ergometry; and (2) to examine the accuracy of the newly developed equations and those of the American College of Sports Medicine (1995), Berry et al. (1993), Lang et al. (1992), Latin and Berg (1994), and Londeree et al. (1997). Methods: Subjects (715 men and women, ages 16-65 yr, from the HERITAGE Family Study) completed a maximal cycle ergometry test, two submaximal trials at 50 W and 60% of V̇O2max, hydrostatic weighing, and stature and body mass measures before and after 20 wk of cycle ergometry training. Regression analysis generated prediction equations using pretraining data from the 60% trials. Results: No equation with more independent variables was better than an equation that used only power output. This equation, HERITAGE-1, with only power output was cross-validated using the 'jackknife' technique. Paired t-tests, mean differences, SEEs, and Es were used to compare the V̇O2 estimated by HERITAGE-1 and those of previously published equations with the measured V̇O2 at 60% of V̇O2max. Conclusions: HERITAGE-1 was slightly better than the equations of ACSM, Lang et al., and Latin and Berg using pretraining data but was not better when using post-training data. All four of these equations were superior to the equations of Berry et al. and Londeree et al.
• Energy expenditure (EE)
• Estimating V̇O2
• Oxygen uptake
{"url":"https://profiles.wustl.edu/en/publications/accuracy-of-prediction-equations-to-estimate-submaximal-vosub2sub","timestamp":"2024-11-09T03:38:55Z","content_type":"text/html","content_length":"53333","record_id":"<urn:uuid:62402171-6d55-4c37-a6b1-9a0c1e83bc52>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00328.warc.gz"}
double category A comma double category is a generalization of a comma category to double categories and virtual double categories. There are two apparently-maximal levels of generality of this construction, which intersect nontrivially but do not coincide. The first is an ordinary comma object in the 2-category of virtual double categories (which includes as a full sub-2-category the 2-category of pseudo double categories and lax functors), which produces another virtual double category. The second is a colax/lax comma object relative to the 2-monad whose algebras are pseudo double categories. The two constructions intersect in the case of the comma object of a pseudo double functor over a lax one, between pseudo double categories. For virtual double categories For virtual double categories, the comma virtual double category has the universal property of a comma object in the 2-category of virtual double categories, functors and vertical transformations. Explicitly, it is constructed as follows. Let $C, D, E$ be virtual double categories and $F : C \to E, G : D \to E$ be functors of virtual double categories. The comma double category $F / G$ is defined as 1. Its vertical category is the ordinary comma category $F_v/G_v$ of the vertical components of the functors. We write an object as $A : F A_C \to G A_D$. 2. A horizontal arrow $R$ from $A$ to $B$ consists of horizontal arrows $R_C : A_C \to B_C$ and $R_D : A_D \to B_D$ and a 2-cell in $E$ from $F R_C$ to $G R_D$ along $A,B$. 3. A 2-cell $\alpha$ from $R_1,\ldots,R_n$ to $S$ consists of a pair of 2-cells $\alpha_C : (R_{1C},\ldots,R_{nC}) \to S_C$ and $\alpha_D : (R_{1D},\ldots,R_{nD}) \to S_D$ such that $G(\alpha_D) \ circ (R_1,\ldots,R_n) = S \circ F(\alpha_C)$. The oplax/lax case Let $K = Cat^{\rightrightarrows}$ be the 2-category of directed graphs internal to Cat. There is a 2-monad $T$ on $K$ whose algebras are (pseudo) double categories, and whose lax and colax morphisms are lax and colax double functors respectively. The oplax/lax comma double category is then an oplax/lax comma object for this 2-monad. Structures on a comma virtual double category Next, we consider what properties are required of the input data (in the virtual case) to determine that a comma virtual double category has units and composites. An analogy with the double category case gives some guidance. Since functors of virtual double categories correspond to lax functors of double categories, we don’t have any requirements for the functor $G$ on top of $D$ having composites or units. On the other hand, for $F$ to be “oplax”, we require that it be normal for units or furthermore strong for composites. If $C$, $D$ and $E$ have units and $F$ is a normal functor, then $F / G$ has units. If $C$, $D$ and $E$ have composites and $F$ is a strong functor, then $F / G$ has composites. Next, the comma has restrictions whenever the constituent categories do and the functors preserve them. If $C$, $D$ and $E$ have restrictions and $F,G$ preserve them, then $F / G$ has restrictions. In practice this proof burden can be reduced if we are interested in virtual equipments (i.e. having units and restrictions) because in that case restrictions are automatically preserved. We summarize this as follows: If $C, D$ and $E$ are virtual equipments and $F$ is a normal functor, then $F/G$ is a virtual equipment. 1. A monad $T$ in the horizontal bicategory of a double category $C$ is equivalent to a lax functor $T : 1 \to C$ from the terminal double category. 
In this case we might call $Id_C / T$ the slice double category?.
2. generalized multicategories can be constructed using a slice category when the monad $T$ is a polynomial monad. Specifically, let $C$ be the double category of polynomial functors in some locally cartesian closed category $E$; then a polynomial monad $T$ on $E$ can be identified with a horizontal monad in $C$ on the terminal object $1$. The slice $Id_C/T$ is then equivalent to the “horizontal Kleisli category” presented in Cruttwell-Shulman; $T$-multicategories are then monads in that comma double category.
3. The double category of decorated cospans is naturally constructed as a comma double category. Given a symmetric lax monoidal functor $F : (C,+) \to (D,\otimes)$, there is an associated lax double functor $F' : Cospan(C) \to BD$ where $BD$ is the delooping of $D$ into a double category whose horizontal category is $D$ and vertical category is the terminal category. Then there is a colax (pseudo even) double functor $* : 1 \to BD$ that picks out the unique object of $BD$. Then the double category of decorated cospans is $*/F'$.
4. poset-valued sets given by an endofunctor $F$ on $Rel$ and a poset $P$ can be viewed as the comma double category from $F$ to $P$, since a poset is a monad in $Rel$, and $F$ is a colax endofunctor of $Rel$. The “morphisms” of poset-valued sets are the horizontal morphisms in the resulting comma double category.
5. The Dialectica construction associated to an internal poset $\Omega$ in a monoidal category $C$ with pullbacks can be obtained as a comma double category. Let $Span(C)$ be the double category whose horizontal morphisms are spans in $C$, regard $C\times C^{op}$ as a double category in the horizontal direction, and let $F: C\times C^{op} \to Span(C)$ be the colax functor defined on objects by $(A,B) \mapsto A\otimes B$ and taking the pair $(f,g) : (A_1,B_1) \to (A_2,B_2)$ (so that $f:A_1\to A_2$ and $g:B_2\to B_1$) to the span $A_1\otimes B_1 \xleftarrow{1\otimes g} A_1\otimes B_2 \xrightarrow{f\otimes 1} A_2\otimes B_2$. The internal poset $\Omega$ is a monad in $Span(C)$, so we have a comma double category $F/\Omega$, whose horizontal category is the Dialectica construction $Dial(C,\Omega)$.
6. The double gluing construction relative to a pair of functors $L:C\to E$ and $K:C\to E^{op}$ can be phrased as a comma double category of the cospan $C \xrightarrow{(L,K)} \mathbb{C}hu(E,1) \leftarrow Chu(E,1)$, where $C$ and $Chu(E,1)$ are regarded as vertically discrete double categories and $\mathbb{C}hu(E,1)$ is the double Chu construction. To obtain the relevant monoidal structures we can consider this instead to be a comma object in double polycategories?.
An explicit description of the comma double category from an oplax to a lax functor is given in
{"url":"https://ncatlab.org/nlab/show/comma+double+category","timestamp":"2024-11-05T10:42:26Z","content_type":"application/xhtml+xml","content_length":"48960","record_id":"<urn:uuid:28d292c5-e4db-4180-a7f7-ccbb2274da52>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00372.warc.gz"}
B. Zhang and C. Liang, "High-Order Numerical Simulation of Flows over Rotating Cylinders of Various Cross-Sectional Shapes", AIAA Paper, 2019-3430, June, 2019. doi:10.2514/6.2019-3430 1. Flow field around a rotating elliptic cylinder at $\dot{\alpha}=\pi/4$ and $Re=200$. 2. Flow field around a rotating triangular cylinder at $\dot{\alpha}=\pi/4$ and $Re=200$. 3. Flow field around a rotating square cylinder at $\dot{\alpha}=\pi/4$ and $Re=200$. 4. Flow field around a rotating circular cylinder at $\dot{\alpha}=\pi/4$ and $Re=200$. 5. Flow field when the elliptic cylinder rotates to its initial position. 6. Flow field around a rotating elliptic cylinder at $\dot{\alpha}=\pi/4$ and $Re=1000$. 7. Flow field around a rotating triangular cylinder at $\dot{\alpha}=\pi/4$ and $Re=1000$. 8. Flow field around a rotating square cylinder at $\dot{\alpha}=\pi/4$ and $Re=1000$. 9. Flow field around a rotating circular cylinder at $\dot{\alpha}=\pi/4$ and $Re=1000$.
{"url":"https://binzhang.org/publications/movies/2019-aiaa2","timestamp":"2024-11-08T18:34:35Z","content_type":"text/html","content_length":"5642","record_id":"<urn:uuid:cfa80f41-2108-4917-a604-ee4f9b5dc6bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00435.warc.gz"}
Bringing the Real World to Your Math Class Every Day I’ve been spending the first 5-10 minutes of every 2-hour math class discussing graphs in the news with my students. I’ll give you a few examples of what came up naturally week by week: • Lots of social media graphs: Slope of the adoption rates for new users, the DAU (daily active users) and MAU (monthly active users) over time, and comparison of the adoption of new features in different platforms (Snapchat, Facebook, Instagram, and Twitter) • Tech industry: Growth of Amazon, Facebook, and Google, rise in employees at these companies, and comparison the money spent on Black Friday in the U.S. to the amount spent on Singles Day in China • Science: Money spent on the space industry over time, periodic nature of radiation on mars, distance of the ISS above the surface of the earth (over time), percent of people with HIV who receive drug treatment in the world, and all sorts of numbers and graphs about plastic • Game industry: VR game adoption rates, Fortnite, Fortnite compared to (fill in the blank) game Okay that is just a small sampling. If it’s in the news, and it’s a graph I’ve seen, we talk about it. I think that graphs (especially those that chart trends over time) give a much more realistic sense of what is going on than the small news blurbs and social media outrages that pop up in the moment. When you see the whole trend over many years or comparisons of one item with like items, it gives the reader a different context in which to frame the data. Plus, for many of my students, this seems to be the only time they actually discuss anything from the week of news (as most of it is a surprise to all but a few students). With each graph, I try to prod the class to think critically: • What does the graph mean? • Are there any inconsistencies on the axes? (large jumps in years, for example) • What questions do you have now that you’ve seen this graph? • What bothers you about the graph? • What would you want to see added to the graph? And then many times we go find extra information based on the suggestions and questions from the class. The students each semester seem to enjoy “Graphs Time” in class, so a few weeks ago I asked my students how they were going to keep up with graphs after the term was over. One of them said “Aren’t you going to keep sending us graphs?” (with a smile) and I thought … huh. So first I tried to get them to sign up for an excellent Chart of the Day email newsletter from Statista. There was almost no reaction. I had one taker. During our 5-minute break, I reflected on something that I myself had said during a social media workshop at a conference recently. If you want to reach today’s traditional-age students, you’ve got two choices: Snapchat and Instagram. So after the break I pointed the students to the Statista Instagram account. Ahh progress. Now 10 students turned on their phones to sign up. Reference: https://www.statista.com/chart/4823/teenagers-favorite-social-networks/ Unfortunately, I realized after class that the Statista Instagram account does not actually post graphs (mostly photos). So disappointing. But then I realized that I actually find the graphs in all sorts of places. My weekly reading includes all sorts of traditional, digital, and social media that post graphs as part of their normal offerings. This is when I decided that this “sharing of the graphs” is important to me. 
It is vital that students learn about the trends in the world, learn to see the long game, the history of the world, the context of comparisons. This is one small thing I can do to help create more well-informed citizens. I’m collecting the graphs anyways, I might as well share them. So, I decided to start an Instagram graphsintheworld account (for the young’uns) and a Facebook Graphs In the World page (for us older readers). Since I collect about 1-3 graphs a day, it should provide a way to rack up graphs for your classes, assignments, and discussions as well. This is not just for math folks. There are plenty of graphs in here with social and business implications. I am surprised by at least one graph every week that teaches me something I did not know. Please consider following a Graphs In the World account and inviting your colleagues and students (and friends and spouses) to follow as well. Then you too can be the annoying person in every conversation that says “Hold on a moment, I think I have a graph about this.” About Author
{"url":"https://edgeoflearning.com/bringing-the-real-world-to-your-math-class-every-day/","timestamp":"2024-11-04T20:15:31Z","content_type":"text/html","content_length":"199657","record_id":"<urn:uuid:eb5d020e-4fe6-49d6-81cd-40ad2aeefa6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00496.warc.gz"}
Mean entropy of States in classical statistical mechanics
ENTROPY LATTICES STATISTICAL PHYSICS COMPLEX SYSTEMS
Abstract : The equilibrium states for an infinite system of classical mechanics may be represented by states over Abelian C* algebras. We consider here continuous and lattice systems and define a mean entropy for their states. The properties of this mean entropy are investigated : linearity, upper semi-continuity, integral representations. In the lattice case, it is found that our mean entropy coincides with the KOLMOGOROV-SINAI invariant of ergodic theory.
RUELLE ROBINSON P/ 66/04 IHES 1966 A4 25 f. EN TEXTE PREPUBLICATION P_66_04.pdf 1966
On the Classification of Von Neumann algebras and their automorphisms
VON NEUMANN ALGEBRAS HILBERT SPACES ISOMORPHISMS
CONNES P/76/132 IHES 02/1976 A4 27 f. EN TEXTE PREPUBLICATION P_76_132.pdf 1976
{"url":"https://omeka.ihes.fr/items/browse?tags=ROKHLIN&sort_field=added&sort_dir=a&output=dcmes-xml","timestamp":"2024-11-06T05:38:40Z","content_type":"application/rdf+xml","content_length":"2696","record_id":"<urn:uuid:7c74cd15-108e-46d0-b01e-62c6b1e84fa6>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00878.warc.gz"}
Simple Investment Tracker A free downloadable template for Microsoft Excel® - by Jon Wittwer | Updated 7/10/2020 This spreadsheet was designed for people who want a simple way to track the value of their investment accounts over time. Every investment site or financial institution seems to have its own way of reporting results, and what I want to know most of all is simply the return on investment over time. That is why I have been using my own spreadsheet for the past decade to track my 401k and other accounts. This investment tracker template is what my spreadsheet evolved into. You can read more about it below. Investment Tracker for Excel or Google Sheets ⤓ Excel (.xlsx) For: Excel 2010 or later "No installation, no macros - just a simple spreadsheet" This template was designed to provide a simplified way to track an investment account. It boils everything down to tracking only what you have invested and the current value of that investment. It doesn't track cost basis and should not be used for tax purposes. Although some explanations are provided in the Help worksheet and in cell comments, the spreadsheet does not define every term and every calculation in detail. It is up to the individual to make sure they understand what is being calculated. For example, this spreadsheet does not distinguish between realized or unrealized gains. Disclaimer: This spreadsheet is NOT meant to be used for calculating anything to do with taxes. The spreadsheet and content on this page should not be used as financial advice. Why Track an Investment with a Spreadsheet? I use a spreadsheet only as an ADDITIONAL way to track accounts. I'm not suggesting that it is the best way or that it should be used in place of reports generated by the advisor or financial institution. Below are a few reasons why I use this spreadsheet to track investments. Reason #1 - A Consistent Way to Compare Different Types of Investments Having a consistent way to look at return on investment makes it possible to compare real estate investments to stock brokerage accounts or 401(k) accounts or simple savings accounts. Though there may be subtle differences or even major differences between accounts, especially when considering the effects of taxes, the simplest way I have found to compare different investments is to compare market value or total return to what I put into it (the total out-of-pocket investment). Although there are many metrics that can be used to compare returns for different types of investments, my favorite is to use the effective annualized compound rate of return. In this spreadsheet, that is calculated using the XIRR() function. For a one-time investment, this results in the same rate as the CAGR formula (see my CAGR Calculator page). However, the XIRR() function lets you take into account a series of cash flows - such as making additional monthly investments. Reason #2 - Learn how things work I like to try to understand how investments work, and that is why I like using a spreadsheet. I like to see and to try to understand the formulas so that I can better understand what is being Unfortunately, the ability to enter and edit formulas also makes a spreadsheet error-prone. I would not recommend using this investment tracker unless you are comfortable using Excel and can identify and fix errors that may be introduced. Reason #3 - Fees, Dividends, Interest Earned, Re-investments, Cost-Basis, Realized vs. Unrealized gains, ... 
All these issues are important, but they can also be distracting when I am only trying to compare my out-of-pocket investment to the total value of the investment. Sometimes the information about what has come out of my pocket is lost when using only the online reports generated by a brokerage or financial institution. This may be due to fees, re-invested dividends, or whatever. Using a separate spreadsheet allows me to track what I want to track instead of relying only on the financial institution's statements. Handling Investment Income Investment income that remains within your account as cash (or reinvested) will generally be included automatically in the total value of your account. However, how do you handle investment income that you withdraw or that you have automatically deposited into another account? You may want to track the investment income separately and do your own calculation for return on investment. You can use the blank columns to the right of the table to track whatever numbers you want (that's the great thing about using a spreadsheet). For example, you might calculate a separate ROI value that includes the total income withdrawn from the account using a formula like ( Current Market Value + Total Income Withdrawn - Total Invested ) / Total Invested. Recording a Withdrawal from an Account The market value you enter will already take into account the withdrawal, so the question is how to adjust the Total Invested amount. For some accounts, you might withdraw only from the principal invested, so you can enter a negative value in the Amount Invested column to adjust the Total Invested. If you want to continue using the overall %Gain/Loss as an indicator of how well your investment is doing, you can calculate the amount to subtract from the Total Invested using the formula = Withdrawal * Previous Total Invested / Previous Market Value. This formula was derived based on keeping the Total %Gain the same before and after the withdrawal (assuming no change in the market during that time). This represents withdrawing a portion of the principal as well as a portion of the gain. This formula is not meant for official cost basis calculations, but it can be useful for basic investment tracking. XIRR Function for Calculating Annualized Return This spreadsheet uses the XIRR() function to calculate the internal rate of return for a series of cash flows. In this case we are using it to calculate the annualized compounded rate of return. The spreadsheet also calculates a running XIRR value and a 6-period XIRR value (meaning the annualized rate of return based on the last 6 periods). These formulas are fairly complex because they are that use nested IF functions (thanks to TonySaunders for this idea). The 6-period XIRR function will get messed up if you delete rows from the table. Note: If you are only entering information from your monthly statement, you may not be capturing the actual date that investments are made because in this spreadsheet the XIRR() assumes that amounts invested in column B are invested on the date specified in column A. • XIRR Function at support.office.com - See this page or press F1 in Excel and search for XIRR to see the help regarding this function.
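The two calculations described above translate directly into code. The snippet below is only a sketch in Python, not a formula from the spreadsheet itself; the variable names are invented, and, as noted above, neither calculation is meant for cost-basis or tax purposes.

    def roi_with_income(market_value, income_withdrawn, total_invested):
        """(Current Market Value + Total Income Withdrawn - Total Invested) / Total Invested"""
        return (market_value + income_withdrawn - total_invested) / total_invested

    def invested_after_withdrawal(withdrawal, prev_total_invested, prev_market_value):
        """Reduce Total Invested so the overall %Gain/Loss is unchanged by the withdrawal."""
        return prev_total_invested - withdrawal * prev_total_invested / prev_market_value

    # Example: $10,000 invested, now worth $12,500, with $300 of income withdrawn along the way.
    print(round(roi_with_income(12_500, 300, 10_000), 4))      # 0.28 -> 28% total gain

    # Withdraw $1,000 from the account: Total Invested drops by 1,000 * 10,000 / 12,500 = 800.
    print(invested_after_withdrawal(1_000, 10_000, 12_500))    # 9200.0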
{"url":"https://totalsheets.com/ExcelTemplates/investment-tracker.html","timestamp":"2024-11-07T02:18:40Z","content_type":"text/html","content_length":"30917","record_id":"<urn:uuid:bd6249db-5a11-4c68-9861-57b345d2cffd>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00826.warc.gz"}
The Weekly Challenge - Perl & Raku
Raku Solutions Weekly Review
Task #1: Niven (or Harshad) Numbers

This is derived in part from my blog post made in answer to the Week 7 of the Perl Weekly Challenge organized by Mohammad S. Anwar as well as answers made by others to the same challenge. The challenge reads as follows: Print all the Niven numbers from 0 to 50 inclusive, each on their own line. A Niven number is a non-negative number that is divisible by the sum of its digits.

A Niven number or harshad number is a strictly positive integer that can be evenly divided by the sum of its digits. Note that this property depends on the number base in which the number is expressed (the divisibility property is intrinsic to a pair of numbers, but the sum of digits of a given number depends on the number base in which the number is expressed). Here we will consider only numbers in the common base 10. Please also note that 0 cannot be a divisor of a number. Therefore, 0 cannot really be a Niven number. We’ll start at 1 to avoid problems. This is an easy challenge, but I thought it was interesting to include it in this blog series because of the diversity of solutions suggested by the various challengers. Really, there is more than one way to do it.

My solution

For a simple problem like this, I can’t resist doing it with a Perl 6 one-liner:

    $ perl6 -e 'for 1..50 -> $num { my $sum = [+] $num.comb; say $num if $num %% $sum}'
    1 2 3 4 5 6 7 8 9 10 12 18 20 21 24 27 30 36 40 42 45 48 50

As it turns out, I’m afraid I was a bit sloppy here (because I simply translated to P6 my original P5 one-liner, instead of taking full advantage of Perl 6 expressive syntax): in Perl 6, the one-liner could be much more concise, as we will see in other challengers’ solutions below. And if you prefer a real script, this is one way it could be done, using a gather/take block:

    use v6;
    .say for gather {
        for 1..50 -> $num {
            my $sum = [+] $num.comb;
            take $num if $num %% $sum;
        }
    }

Alternative Solutions

Arne Sommer provided a script that could have been a Perl 6 one-liner and is significantly more concise than my own one-liner:

    .say if $_ %% $_.comb.sum for 0 .. 50;

Arne also provided a solution creating a lazy infinite list of Niven numbers and printing out those up to 50:

    unit sub MAIN (Int $limit where $limit > 0 = 50);
    my $niven := gather for 0..Inf {
        take $_ if $_ %% $_.comb.sum;
    }
    .say for $niven[^$limit];

This program is sort of slightly wrong, though, as it does not print the Niven numbers up to 50, as required in the challenge specification, but the first 50 Niven numbers (up to 153). But Arne knows that, and that’s really secondary: it would be no big deal to change the last code line to make it satisfy the requirement.

Finley provided a script, but the real code to compute Niven’s number holds in one single line, even slightly shorter than Arne’s solution:

    .say if ($_ %% [+] .comb) for 0..50;

Francis J. Whittle also provided a script that could be a simple Perl 6 one-liner:

    .put for (0..50).grep: { $_ %% [+] .comb };

Jo-Christian Oterhals’s script is the most concise so far and could also become a one-liner:

    .say if $_ %% [+] .comb for 0..50;

Ruben Westerberg also provided a script that could boil down to a one-liner:

    say join "\n", (0..50) .grep({($_ > 0) && $_ %% (.Str.comb>>.Int .sum)});

Jaldhar H. Vyas dedicated his script to the memory of his father, Dr. Harshad V. Vyas (remember that the Niven numbers are commonly called the Harshad Numbers).
His script is significantly longer than all those seen so far (but still fairly short):

    for 1 .. 50 -> $number {
        if $number % $number.comb.sum == 0 {
            say $number;
        }
    }

Joelle Maslak also wrote a fairly short full-fledged script:

    sub MAIN(UInt:D $max = 50) {
        for 0..$max -> $i {
            say $i if $i %% [+] $i.comb;
        }
    }

Ozzy made the most complicated solution so far:

    sub SumDigits ( Int $x is copy ) {
        my $sum = 0;
        while $x != 0 {
            $sum += $x mod 10;
            $x = $x div 10;
        }
        return $sum;
    }
    sub MAIN {
        for 0..50 -> $x {
            say $x if $x %% SumDigits($x);
        }
    }

Simon Proctor also somewhat over-engineered it (in my humble view) with multi subroutines:

    multi sub is-niven( 0 ) { False }
    multi sub is-niven( Int $num where * > 0 ) {
        $num %% [+] $num.comb;
    }
    sub MAIN( UInt() $max=50 ) {
        .say if is-niven($_) for 0..$max;
    }
{"url":"https://theweeklychallenge.org/blog/p6-review-challenge-007/","timestamp":"2024-11-10T15:43:58Z","content_type":"text/html","content_length":"27263","record_id":"<urn:uuid:8f8bf56d-68ac-4e2f-9491-b9f1619bc599>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00809.warc.gz"}
Whatsapp Only 1 Students solved it IAS exam Archives Here is a challenge for mathematical geniuses. Test your skills by solving this riddle. In the given series find the next number i.e value of ? 6, 12, 48, 96, 480 ? So were you able to solve the riddle? Leave your answers in the comment section below. You can check if your answer is … Read more Find The Missing Number In The Series: 10,11,19,28,??,117 Find the connection between the numbers given in the numerical riddle below. Find the Missing Number In The Series 10,11,19,28,??,117 Above mentioned is a series of numbers which are connected in some way or the other. You have to crack the connection and find the missing number. You can share it with your friends on … Read more Find the Dimensions of the Box Ready for some geometry puzzles?? Test your skills with geometry with this riddle. There is a box. The area of its top is 240 square units, the area of the front is 300 square units and the area of the end is 180 square units. Can you calculate the dimensions of this box with the … Read more Find the Code For September: If January = 1017, Then September = ? How good are you at cracking coded messages? Here’s a chance to test your skills. The following months are coded in a certain way, which you need to find and then represent September in the coded format. January = 1017 February = 628 March = 1335 April = 145 May = 1353 June = 1064 … Read more What Time Should the Last Watch Show? Picture Brain Teasers helps you to test your mental and visual ability. From the image you need to answer the question asked. in the below picture there are four watches showing a certain time, based on the previous watches, what time should the last watch show? So were you able to solve the riddle? Leave … Read more Find Value of x in (2^x) (30^3) = (2^3) (3^3) (4^3) (5^3) Have fun with numbers with the help of this riddle. Can you find out the value of x in the following equation? (2x) (303) = (23) (33) (43) (53) This one is for all the maths fans who like to play with the numbers. So were you able to solve the riddle? Leave your answers … Read more Murder Mystery: A Newly-Wed Couple Went For Hiking Trip A newly-wed couple went for hiking trip outside the country. After two days, the wife returned and informed the police that her husband fell while hiking and could not survive. Police registered the case and started their investigation. The next day, they went to her home and arrested her. On asking why they were arrested … Read more
{"url":"https://ec2-65-0-158-107.ap-south-1.compute.amazonaws.com/tag/whatsapp-only-1-students-solved-it-ias-exam/","timestamp":"2024-11-05T20:23:19Z","content_type":"text/html","content_length":"136688","record_id":"<urn:uuid:d17a4c31-54b9-42ea-8c78-f6b062f6d794>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00542.warc.gz"}
Study Guide - Linear Factorization and Descartes Rule of Signs

Learning Objectives
• Use linear factorization to find the equation of a polynomial function given its zeros and a point on its graph
• Use Descartes' rule of signs to determine the maximum number of possible real zeros of a polynomial function
• Solve a polynomial function application involving volume

A vital implication of the Fundamental Theorem of Algebra is that a polynomial function of degree n will have n zeros in the set of complex numbers, if we allow for multiplicities. This means that we can factor the polynomial function into n factors. The Linear Factorization Theorem tells us that a polynomial function will have the same number of factors as its degree, and that each factor will be in the form (x – c), where c is a complex number.

Let f be a polynomial function with real coefficients, and suppose [latex]a+bi\text{, }b\ne 0[/latex], is a zero of [latex]f\left(x\right)[/latex]. Then, by the Factor Theorem, [latex]x-\left(a+bi\right)[/latex] is a factor of [latex]f\left(x\right)[/latex]. For f to have real coefficients, [latex]x-\left(a-bi\right)[/latex] must also be a factor of [latex]f\left(x\right)[/latex]. This is true because any factor other than [latex]x-\left(a-bi\right)[/latex], when multiplied by [latex]x-\left(a+bi\right)[/latex], will leave imaginary components in the product. Only multiplication with conjugate pairs will eliminate the imaginary parts and result in real coefficients. In other words, if a polynomial function f with real coefficients has a complex zero [latex]a+bi[/latex], then the complex conjugate [latex]a-bi[/latex] must also be a zero of [latex]f\left(x\right)[/latex]. This is called the Complex Conjugate Theorem.

A General Note: Complex Conjugate Theorem

According to the Linear Factorization Theorem, a polynomial function will have the same number of factors as its degree, and each factor will be in the form [latex]\left(x-c\right)[/latex], where c is a complex number. If the polynomial function f has real coefficients and a complex zero in the form [latex]a+bi[/latex], then the complex conjugate of the zero, [latex]a-bi[/latex], is also a zero.

How To: Given the zeros of a polynomial function [latex]f[/latex] and a point [latex]\left(c\text{, }f(c)\right)[/latex] on the graph of [latex]f[/latex], use the Linear Factorization Theorem to find the polynomial function.
1. Use the zeros to construct the linear factors of the polynomial.
2. Multiply the linear factors to expand the polynomial.
3. Substitute [latex]\left(c,f\left(c\right)\right)[/latex] into the function to determine the leading coefficient.
4. Simplify.

Example: Using the Linear Factorization Theorem to Find a Polynomial with Given Zeros

Find a fourth degree polynomial with real coefficients that has zeros of –3, 2, [latex]i[/latex], such that [latex]f\left(-2\right)=100[/latex].

Answer: Because [latex]x=i[/latex] is a zero, by the Complex Conjugate Theorem [latex]x=-i[/latex] is also a zero. The polynomial must have factors of [latex]\left(x+3\right),\left(x - 2\right),\left(x-i\right)[/latex], and [latex]\left(x+i\right)[/latex]. Since we are looking for a degree 4 polynomial, and now have four zeros, we have all four factors. Let’s begin by multiplying these factors.
[latex]\begin{array}{l}f\left(x\right)=a\left(x+3\right)\left(x - 2\right)\left(x-i\right)\left(x+i\right)\\ f\left(x\right)=a\left({x}^{2}+x - 6\right)\left({x}^{2}+1\right)\\ f\left(x\right)=a\left({x}^{4}+{x}^{3}-5{x}^{2}+x - 6\right)\end{array}[/latex]

We need to find [latex]a[/latex] to ensure [latex]f\left(-2\right)=100[/latex]. Substitute [latex]x=-2[/latex] and [latex]f\left(-2\right)=100[/latex] into [latex]f\left(x\right)[/latex].

[latex]\begin{array}{l}100=a\left({\left(-2\right)}^{4}+{\left(-2\right)}^{3}-5{\left(-2\right)}^{2}+\left(-2\right)-6\right)\hfill \\ 100=a\left(-20\right)\hfill \\ -5=a\hfill \end{array}[/latex]

So the polynomial function is [latex]f\left(x\right)=-5\left({x}^{4}+{x}^{3}-5{x}^{2}+x - 6\right)[/latex]

Analysis of the Solution

We found that both [latex]i[/latex] and [latex]-i[/latex] were zeros, but only one of these zeros needed to be given. If [latex]i[/latex] is a zero of a polynomial with real coefficients, then [latex]-i[/latex] must also be a zero of the polynomial because [latex]-i[/latex] is the complex conjugate of [latex]i[/latex].

Q & A

If 2 + 3i were given as a zero of a polynomial with real coefficients, would 2 – 3i also need to be a zero?

Yes. When any complex number with an imaginary component is given as a zero of a polynomial with real coefficients, the conjugate must also be a zero of the polynomial.

Try It

Find a third degree polynomial with real coefficients that has zeros of 5 and –2i such that [latex]f\left(1\right)=10[/latex].

Answer: [latex]f\left(x\right)=-\frac{1}{2}{x}^{3}+\frac{5}{2}{x}^{2}-2x+10[/latex]

Descartes’ Rule of Signs

There is a straightforward way to determine the possible numbers of positive and negative real zeros for any polynomial function. If the polynomial is written in descending order, Descartes’ Rule of Signs tells us of a relationship between the number of sign changes in [latex]f\left(x\right)[/latex] and the number of positive real zeros. For example, if a polynomial function written in descending order has exactly one sign change, the function must have exactly 1 positive real zero. There is a similar relationship between the number of sign changes in [latex]f\left(-x\right)[/latex] and the number of negative real zeros; if [latex]f\left(-x\right)[/latex] has 3 sign changes, then [latex]f\left(x\right)[/latex] could have 3 or 1 negative real zeros.

A General Note: Descartes’ Rule of Signs

According to Descartes’ Rule of Signs, if we let [latex]f\left(x\right)={a}_{n}{x}^{n}+{a}_{n - 1}{x}^{n - 1}+...+{a}_{1}x+{a}_{0}[/latex] be a polynomial function with real coefficients:
• The number of positive real zeros is either equal to the number of sign changes of [latex]f\left(x\right)[/latex] or is less than the number of sign changes by an even integer.
• The number of negative real zeros is either equal to the number of sign changes of [latex]f\left(-x\right)[/latex] or is less than the number of sign changes by an even integer.

Example: Using Descartes’ Rule of Signs

Use Descartes’ Rule of Signs to determine the possible numbers of positive and negative real zeros for [latex]f\left(x\right)=-{x}^{4}-3{x}^{3}+6{x}^{2}-4x - 12[/latex].

Answer: Begin by determining the number of sign changes in [latex]f\left(x\right)[/latex]. The terms of [latex]f\left(x\right)=-{x}^{4}-3{x}^{3}+6{x}^{2}-4x - 12[/latex] change sign twice, so there are either 2 or 0 positive real roots. Next, determine the sign changes of [latex]f\left(-x\right)[/latex].

[latex]\begin{array}{l}f\left(-x\right)=-{\left(-x\right)}^{4}-3{\left(-x\right)}^{3}+6{\left(-x\right)}^{2}-4\left(-x\right)-12\hfill \\ f\left(-x\right)=-{x}^{4}+3{x}^{3}+6{x}^{2}+4x - 12\hfill \end{array}[/latex]

Again, there are two sign changes, so there are either 2 or 0 negative real roots.
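Sign-change counts like these are easy to verify programmatically. The following Python sketch is not part of the study guide; it simply counts sign changes in the coefficient list of f(x) and of f(−x) for the example above.

    def sign_changes(coeffs):
        """Count sign changes in a coefficient list written in descending order (zeros skipped)."""
        signs = [c > 0 for c in coeffs if c != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    def reflect(coeffs):
        """Coefficients of f(-x): negate the coefficients of the odd-degree terms."""
        n = len(coeffs) - 1
        return [c * (-1) ** (n - i) for i, c in enumerate(coeffs)]

    # f(x) = -x^4 - 3x^3 + 6x^2 - 4x - 12
    f = [-1, -3, 6, -4, -12]
    print(sign_changes(f))           # 2 -> 2 or 0 positive real zeros
    print(sign_changes(reflect(f)))  # 2 -> 2 or 0 negative real zeros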
Positive Real Zeros | Negative Real Zeros | Complex Zeros | Total Zeros
2 | 2 | 0 | 4
2 | 0 | 2 | 4
0 | 2 | 2 | 4
0 | 0 | 4 | 4

Analysis of the Solution
We can confirm the numbers of positive and negative real roots by examining a graph of the function. We can see from the graph that the function has 0 positive real roots and 2 negative real roots.

Try It
Use Descartes’ Rule of Signs to determine the maximum possible numbers of positive and negative real zeros for [latex]f\left(x\right)=2{x}^{4}-10{x}^{3}+11{x}^{2}-15x+12[/latex]. Use a graph to verify the numbers of positive and negative real zeros for the function.
Answer: There must be 4, 2, or 0 positive real roots and 0 negative real roots. The graph shows that there are 2 positive real zeros and 0 negative real zeros.

Solve real-world applications of polynomial equations
We have now introduced a variety of tools for solving polynomial equations. Let’s use these tools to solve the bakery problem from the beginning of the section.

Example: Solving Polynomial Equations
A new bakery offers decorated sheet cakes for children’s birthday parties and other special occasions. The bakery wants the volume of a small cake to be 351 cubic inches. The cake is in the shape of a rectangular solid. They want the length of the cake to be four inches longer than the width of the cake and the height of the cake to be one-third of the width. What should the dimensions of the cake pan be?
Answer: Begin by writing an equation for the volume of the cake. The volume of a rectangular solid is given by [latex]V=lwh[/latex]. We were given that the length must be four inches longer than the width, so we can express the length of the cake as [latex]l=w+4[/latex]. We were given that the height of the cake is one-third of the width, so we can express the height of the cake as [latex]h=\frac{1}{3}w[/latex]. Let’s write the volume of the cake in terms of the width of the cake.

[latex]\begin{array}{l}V=\left(w+4\right)\left(w\right)\left(\frac{1}{3}w\right)\\ V=\frac{1}{3}{w}^{3}+\frac{4}{3}{w}^{2}\end{array}[/latex]

Substitute the given volume into this equation.

[latex]\begin{array}{ll}\text{ }351=\frac{1}{3}{w}^{3}+\frac{4}{3}{w}^{2}\hfill & \text{Substitute 351 for }V.\hfill \\ 1053={w}^{3}+4{w}^{2}\hfill & \text{Multiply both sides by 3}.\hfill \\ \text{ } 0={w}^{3}+4{w}^{2}-1053 \hfill & \text{Subtract 1053 from both sides}.\hfill \end{array}[/latex]

Descartes' rule of signs tells us there is one positive solution. The Rational Zero Theorem tells us that the possible rational zeros are [latex]\pm 1,\pm 3,\pm 9,\pm 13,\pm 27,\pm 39,\pm 81,\pm 117,\pm 351[/latex], and [latex]\pm 1053[/latex]. We can use synthetic division to test these possible zeros. Only positive numbers make sense as dimensions for a cake, so we need not test any negative values. Let’s begin by testing values that make the most sense as dimensions for a small sheet cake. Use synthetic division to check [latex]x=1[/latex]. Since 1 is not a solution, check [latex]x=3[/latex]; since 3 is not a solution either, test [latex]x=9[/latex]. Synthetic division with 9 gives a remainder of 0, so 9 is a solution, and the width of the cake is 9 inches. We can use the relationships between the width and the other dimensions to determine the length and height of the pan.

[latex]l=w+4=9+4=13\text{ and }h=\frac{1}{3}w=\frac{1}{3}\left(9\right)=3[/latex]

The sheet cake pan should have dimensions 13 inches by 9 inches by 3 inches.

Try It
A shipping container in the shape of a rectangular solid must have a volume of 84 cubic meters. The client tells the manufacturer that, because of the contents, the length of the container must be one meter longer than the width, and the height must be one meter greater than twice the width. What should the dimensions of the container be?
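Before looking at the answer, one way to sanity-check a problem like this is to solve the volume equation numerically. The short R sketch below is only illustrative and is not part of the original lesson; it uses base R's polyroot(), which takes the polynomial's coefficients in increasing order of degree, and keeps the one positive real root. As with the bakery example, Descartes' Rule of Signs already tells us this cubic has exactly one positive real root.

# Volume condition: w(w + 1)(2w + 1) = 84, i.e. 2w^3 + 3w^2 + w - 84 = 0
roots <- polyroot(c(-84, 1, 3, 2))                        # coefficients from the constant term upward
w <- Re(roots[abs(Im(roots)) < 1e-8 & Re(roots) > 0])     # keep the real, positive root
c(width = w, length = w + 1, height = 2*w + 1)            # candidate dimensions in meters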
Answer: 3 meters by 4 meters by 7 meters Licenses & Attributions CC licensed content, Original CC licensed content, Shared previously • Question ID 19266. Authored by: Sousa,James, mb Lippman,David. License: CC BY: Attribution. License terms: IMathAS Community License CC-BY + GPL. • College Algebra. Provided by: OpenStax Authored by: Abramson, Jay et al.. Located at: https://openstax.org/books/college-algebra/pages/1-introduction-to-prerequisites. License: CC BY: Attribution . License terms: Download for free at http://cnx.org/contents/[email protected].
{"url":"https://www.symbolab.com/study-guides/ivytech-wmopen-collegealgebra/linear-factorization-and-descartes-rule-of-signs.html","timestamp":"2024-11-14T18:51:20Z","content_type":"text/html","content_length":"144141","record_id":"<urn:uuid:3378b771-3bab-4854-8328-12e70489348f>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00823.warc.gz"}
The Distance Formula | sofatutor.com Basics on the topic The Distance Formula The distance formula is a formula used to find the distance between two distinct points on a plane. The formula was derived from the Pythagorean theorem, which states that for any right triangle, the square of the hypotenuse is equal to the sum of the square of the two legs. Finding the distance between two distinct points on a plane is the same as finding the hypotenuse of a right triangle. From this perspective, the distance formula states that the distance of two distinct points on a plane is equal to the square root of the sum of the square of the rise and run. The distance formula comes with some uses in everyday life. It can be used as a strategy for easy navigation and distance estimation. For example, if you want to estimate the distance of two places on a map, simply get the coordinate of the two places and apply the formula. Or when a pilot wants to know the distance of an incoming plane and his plane, he can use the plane radar and find the coordinates of the two planes and then apply the formula. Use coordinates to prove simple geometric theorems algebraically Transcript The Distance Formula Deep in the Amazonian jungle, Carlos lives with his family in a teeny-tiny village. Carlos likes most things about his village, but he has to wake up at the crack of dawn if he wants to be at school on time. It’s not so much that his school is so far away, but there’s a huge canyon between the village and the school, so Carlos must walk around the canyon to get to his school. To make his journey faster, Carlos has a great idea. But to make sure his idea will work he’ll need to use the Distance Formula. Take a look at this map The scale's in yards. Here’s the path that Carlos usually walks to go to school, around this side of the canyon, across the bridge, and then along the other side of the canyon. It takes Carlos about 2 hours to walk to school every day. So, what's Carlos' great idea? He wants to build a zip line to go right across the canyon, allowing him to get to school in a fraction of the time! But he doesn't have any rope. What can he do? In a stroke of genius, Carlos decides to use the rope from his mom's clothesline. There are just two problems with this plan: the clothesline is only 350 yards long. Will that be enough? And what will his mom say about him using the rope from the clothesline to build the zip line? Using the Pythagorean Theorem to Calculate the Distance To answer the first question, he needs to calculate the distance between these two points. As for Carlos' mom and the missing clothesline? Only time will tell... We can't help Carlos out with his mom, but we can help him solve his little math problem. To find the distance between any two known points in a coordinate plane, first construct a right triangle. Then, modify the Pythagorean Theorem to solve for the unknown distance. Notice how we replaced 'a' and 'b' with the quantities 'x'-two minus 'x'-one and 'y'-two minus 'y'-one, respectively. Since 'c' is the distance we want to know, we'll now call this variable 'd'. After taking the square roots of both sides of the equation, we're left with the Distance Formula. The location of Carlos' village is at the ordered pair one hundred, one hundred and the location of his school is at the ordered pair two hundred, four hundred. Carlos' home will be point 1 and his school will be point 2. 
Now, using the known points, we can replace the variables in the expression and solve for the distance Now that there are no more variables, we can finish this off with PEMDAS to get the distance. To get across the canyon, the zip line only needs to be approximately 316.23 yards...so Carlos has enough rope! He's really excited to use the zip line for the first time, AND he was even able to sleep two hours longer than usual. There he goes. Whe! Oh man, now Carlos knows a bit more about the villagers than he wanted. The Distance Formula exercise Would you like to apply the knowledge you’ve learned? You can review and practice it with the tasks for the video The Distance Formula. • Explain how to calculate the distance between two points. The Pythagorean theorem states that the sum of the squares of the legs of any right triangle is the same as the square of the hypotenuse. In the picture above, the lengths of both legs are given by the difference of the coordinates of the two points. We don't want to know the squared distance. To determine the distance between two given points, $(x_1,y_1)$ and $(x_2,y_2)$, in a coordinate plane we first draw a right angle. Then we use the Pythagorean theorem to get where $c$ is the length of the hypotenuse. Because the length of the hypotenuse is the desired distance, we replace it by $d$: Lastly, we take the square root on both sides and change the sides to get the distance formula • Calculate the distance from Carlos' village to his school. Here you see the distance formula for two points in a coordinate plane. You get the points by just looking at the coordinate plane on the map pictured above. For Carlos' village: □ Draw a line parallel to the $x$-axis passing this point. The $y$-coordinate is the intersection of this line and the $y$-axis. □ Draw a line parallel to the $y$-axis passing this point. The $x$-coordinate is the intersection of this line and the $x$-axis. The coordinate for Carlos' school can be found in a similar fashion. PEMDAS means the order of operations: □ Paranthesis □ Exponent □ Multiplication □ Division □ Addition □ Subtraction The coordinates of Carlos' village as well as his school are: □ Home: $(100,100)$ □ School: $(200,400)$ The distance formula for two points in a coordinate plane is given by where, in our situation, we have that: □ $x_2=200$ □ $x_1=100$ □ $y_2=400$ □ $y_1=100$ Putting those values into the distance formula above, we get: Using PEMDAS, we get We can simplify this expression to get • Connect the ordered pairs with the right formula. Pay attention to the correct order of the points. Just check the exponents. Here you see an example of using the distance formula. So we have $d=\sqrt{(56-12)^2+(78-34)^2}$. The distance between two points is the square root of the sum of the squares of the differences of the coordinates: Let's practice it with a few pairs of points: The distance between $(10,20)$ and $(30,40)$ is Pay attention to the order: $(10,20)$ and $(40,30)$ leads to It looks quite similar, but it's totally different. The distance between $(40,20)$ and $(10,50)$ is while the distance between $(40,20)$ and $(50,10)$ is • Determine the distance. Use the distance formula for two points in a coordinate plane: If you have to round to the nearest hundredth, you just have to look at the third position after the decimal: if this position is less or equal $4$ then round down, otherwise round up. 
Here are two examples for rounding decimals: The point representing Carlos' location is $(100,100)$ and the point representing where his friend lives is $(150,300)$. We use the distance formula for two points in a coordinate plane: with the given coordinates above. So we put them into this formula to get Using PEMDAS, we get We can write this result as or as a rounded decimal • Find the right distance formula. Here you see a right triangle with the points $(x_1,y_1)$ and $(x_2,y_2)$. Use the Pythagorean property: The sum of the squares of the legs of any right triangle is the same as the square of the hypotenuse. The distance is the length of the hypotenuse. Two points, $(x_1,y_1)$ as well as $(x_2,y_2)$, of a right triangle are given. With the Pythagorean theorem, we get where $c$ is the length of the hypotenuse and $|x_2-x_1|$ and $|y_2-y_1|$ the lengths of the legs. Because the length of the hypotenuse is the desired distance, we replace it by $d$: Lastly, we take the square root on both sides and change the sides to get the distance formula • Examine the distances of the given points. Use the distance formula: Here you see an example for the calculation of the distance of the two points $(10,40)$ and $(40,20)$. Be careful with the order of the coordinates. The distance between two points is given as the square root of the sum of the squares of the differences of the coordinates: Let's practice the calculation of the distance for a few pairs of points: $(10,20)$ and $(30,40)$ gives $\begin{array}{rcl} d&=&~\sqrt{(30-10)^2+(40-20)^2}\\ &=&~\sqrt{(20)^2+(20)^2}\\ &=&~\sqrt{800}=20\sqrt2 \end{array}$ Pay attention to the order of the coordinates: $(10,20)$ and $(40,30)$ leads to $\begin{array}{rcl} d&=&~\sqrt{(40-10)^2+(30-20)^2}\\ &=&~\sqrt{(30)^2+(10)^2}\\ &=&~\sqrt{1000}=10\sqrt{10} \end{array}$ So you see, it's very important to be careful with the order of the coordinates. Those calculations look quite similar, but the result is totally different. The distance between $(10,30)$ and $(30,30)$ is $\begin{array}{rcl} d&=&~\sqrt{(30-10)^2+(30-30)^2}\\ &=&~\sqrt{(20)^2}\\ &=&~20 \end{array}$ Lastly, the distance between $(40,40)$ and $(10,90)$ is $\begin{array}{rcl} d&=&~\sqrt{(10-40)^2+(90-40)^2}\\ &=&~\sqrt{(-30)^2+(50)^2}\\ &=&~\sqrt{2500}=50 \end{array}$ More videos in this topic Radical Expressions / Equations
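For readers who want to try the computation themselves, here is a small sketch in R (the coordinates are the ones read off the map; the code is only illustrative and is not part of the original lesson). It reproduces the zip-line distance of about 316.23 yards, which is why Carlos' 350-yard clothesline is long enough.

# Carlos' village and school, in yards
village <- c(100, 100)
school  <- c(200, 400)
sqrt(sum((school - village)^2))   # distance formula in vector form: 316.2278
dist(rbind(village, school))      # same result from base R's dist()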
{"url":"https://us.sofatutor.com/math/videos/the-distance-formula","timestamp":"2024-11-12T23:45:36Z","content_type":"text/html","content_length":"155790","record_id":"<urn:uuid:e0e9ae48-0446-480a-9f59-e16af0c69f20>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00199.warc.gz"}
Lack of self-averaging in random systems - Liability or asset?

The study of quenched random systems is facilitated by the idea that the ensemble averages describe the thermal averages for any specific realization of the couplings, provided that the system is large enough. Careful examination suggests that this idea might have a flaw, when the correlation length becomes of the order of the size of the system. We find that certain bounded quantities are not self-averaging when the correlation length becomes of the order of the size of the system. This suggests that the lack of self-averaging, expressed in terms of properly chosen signal-to-noise ratios, may serve to identify phase boundaries. This is demonstrated by using such signal-to-noise ratios to identify the boundary of the ferromagnetic phase of the random field Ising system and compare the findings with more traditional measures.
• Correlations
• Ising
• Phase-transition
• Random
• Self-averaging
• Signal-to-noise
{"url":"https://cris.tau.ac.il/en/publications/lack-of-self-averaging-in-random-systems-liability-or-asset","timestamp":"2024-11-10T02:08:41Z","content_type":"text/html","content_length":"49074","record_id":"<urn:uuid:2048773b-4ab0-4e23-b680-6d5b2f249cd2>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00814.warc.gz"}
Circles - The Brainbox Tutorials It requires the right amount of practice in every topic, if a student aims to score high marks in any subject. Students can score good marks by referring to the MCQs prepared by The Brainbox Tutorials. MCQ on Circles (Theorems) ICSE Class 9 Maths has been made keeping in mind that all the important theorems of … Read more Circles MCQ questions Chapter 10 CBSE Class 9 Maths Circles MCQ questions Chapter 10 CBSE Class 9 Maths has been prepared by The Brainbox Tutorials in a very simple and comprehensive manner. The sums asked in this MCQ test on the chapter Circles are based on the theorems related to Circles given in NCERT Maths book for class 9. Students of CBSE Class … Read more MCQs on Circles (Theorems) CBSE Class 10 Maths Circles is a very important chapter in CBSE Class 10 Maths. MCQs on Circles (Theorems) CBSE Class 10 is provided by The Brainbox Tutorials. Take this Maths Practice Test on the chapter Circles here. This online aptitude test on theorems related to tangents of a circle will increase the confidence in students and they will never hesitate to do … Read more Circles MCQ Test ICSE Class 10 Maths Circles in ICSE Class 10 syllabus includes theorems on angle properties of circles, theorems on cyclic quadrilaterals and theorems on tangent and Chord properties of a circle. Before attempting this quiz it is advised to go through all the theorems once again. The Brainbox Tutorials has prepared Circles MCQ Test ICSE Class 10 Maths. The … Read more
{"url":"https://thebrainboxtutorials.com/category/circles","timestamp":"2024-11-02T17:34:35Z","content_type":"text/html","content_length":"76927","record_id":"<urn:uuid:81aedf99-ccd1-4a5f-802b-28d5b3d942bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00059.warc.gz"}
Introduction to Vector Space 17 Introduction to Vector Space Thus far we have covered various forms of counting. But more advanced methods in NLP often rely on comparing instead. To understand these methods, we must get comfortable with the idea of vector This chapter is a basic introduction to the concept of representing documents as vectors. We also introduce two basic vector-based measurement techniques: Euclidean distance and cosine similarity. A more advanced and in-depth guide to navigating vector space will be covered in Chapter 20. A fictional example^1: Daniel and Amos filled out a psychology questionnaire. The questionnaire measured three aspects of their personalities: extraversion, openness to experience, and neuroticism. On the extraversion scale, Amos scored a 2 and Daniel scored a 4: On the openness scale, Amos scored a 7 and Daniel scored a 6: On the neuroticism scale, Amos scored a 3 and Daniel scored a 5. We can now represent each person’s personality as a list of three numbers, or a three dimensional vector. We can graph these vectors in three dimensional vector space: Now imagine that we encounter a third person, Elizabeth. We would like to know whether Elizabeth is more similar to Daniel or to Amos. After graphing all three people in three-dimensional vector space, it becomes obvious that Elizabeth is more similar to Daniel than she is to Amos. Thinking of people (or any sort of observations) as vectors is powerful because it allows us to apply geometric reasoning to data. The beauty of this approach is that we can measure these things without knowing anything about what the dimensions represent. This will be important later. Thus far we have discussed three-dimensional vector space. But what if we want to measure personality with the full Big-Five traits—openness, conscientiousness, extraversion, agreeableness, and neuroticism? Five dimensions would make it impossible to graph the data in an intuitive way as we have done above, but in a mathematical sense, it doesn’t matter. We can measure distance—and many other geometric concepts—just as easily in five-dimensional vector space as in three dimensions. 17.1 Distance and Similarity When we added Elizabeth to the graph above, we could tell that she was more similar to Daniel than to Amos just by looking at the graph. But how do we quantify this similarity or difference? 17.1.1 Euclidean Distance The most straightforward way to measure the similarity between two points in space is to measure the distance between them. Euclidean distance is the simplest sort of distance—the length of the shortest straight line between the two points. The Euclidean distance between two vectors \(A\) and \(B\) can be calculated in any number of dimensions \(n\) using the following formula: \[ d\left( A,B\right) = \sqrt {\sum _{i=1}^{n} \left( A_{i}-B_{i}\right)^2 } \] A low Euclidean distance means two vectors are very similar. 
Let’s calculate the Euclidean distance between Daniel and Elizabeth, and between Amos and Elizabeth:

#> # A tibble: 3 × 4
#> person extraversion openness neuroticism
#> <chr> <dbl> <dbl> <dbl>
#> 1 Daniel 4 6 5
#> 2 Amos 2 7 3
#> 3 Elizabeth 8 4 6

# Elizabeth's vector
eliza_vec <- personality |>
  filter(person == "Elizabeth") |>
  select(extraversion:neuroticism) |>
  unlist()   # collapse the one-row tibble into a plain numeric vector

# Euclidean distance function
euc_dist <- function(x, y){
  diff <- x - y
  sqrt(sum(diff^2))
}

# distance between Elizabeth and each person
personality_dist <- personality |>
  rowwise() |>
  mutate(
    dist_from_eliza = euc_dist(c_across(extraversion:neuroticism), eliza_vec)
  )
personality_dist

#> # A tibble: 3 × 5
#> # Rowwise:
#> person extraversion openness neuroticism dist_from_eliza
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 Daniel 4 6 5 4.58
#> 2 Amos 2 7 3 7.35
#> 3 Elizabeth 8 4 6 0
# cosine similarity function
cos_sim <- function(x, y){
  dot <- x %*% y
  normx <- sqrt(sum(x^2))
  normy <- sqrt(sum(y^2))
  as.vector( dot / (normx*normy) )
}

# center at 0
eliza_vec_centered <- eliza_vec - 4
personality_sim <- personality |>
  mutate(across(extraversion:neuroticism, ~.x - 4))

# similarity between Elizabeth and each person
personality_sim <- personality_sim |>
  rowwise() |>
  mutate(
    similarity_to_eliza = cos_sim(c_across(extraversion:neuroticism), eliza_vec_centered)
  )
personality_sim

#> # A tibble: 3 × 5
#> # Rowwise:
#> person extraversion openness neuroticism similarity_to_eliza
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 Daniel 0 2 1 0.2
#> 2 Amos -2 3 -1 -0.598
#> 3 Elizabeth 4 0 2 1

Once again, we see that the most similar person to Elizabeth is Elizabeth herself, with a cosine similarity of 1. The next closest, as before, is Daniel.

If you are comfortable with cosines, you might be happy with the explanation we have given so far. Nevertheless, it might be helpful to consider the relationship between cosine similarity and a more familiar statistic that ranges between -1 and 1: the Pearson correlation coefficient (i.e. regular old correlation). Cosine similarity measures the similarity between two vectors, while the correlation coefficient measures the similarity between two variables. Now just imagine our vectors as variables, with each dimension as an observation. Since we only compare two vectors at a time with cosine similarity, let’s start with Elizabeth and Amos: Now imagine centering those variables at zero, like this: When seen like this, the correlation is the same as the cosine similarity. In other words, the correlation between two vectors is the same as the cosine similarity between them when the values of each vector are centered at zero.^2 Seeing cosine similarity as the non-centered version of correlation might give you extra intuition for why cosine similarity works best for vector spaces that are centered at zero.

17.2 Word Counts as Vector Space

The advantage of thinking in vector space is that we can quantify similarities and differences even without understanding what any of the dimensions in the vector space are measuring. In the coming chapters, we will introduce methods that require this kind of relational thinking, since the dimensions of the vector space are abstract statistical contrivances. Even so, any collection of variables can be thought of as dimensions in a vector space. You might, for example, use distance or similarity metrics to analyze groups of word counts.

An example of word counts as relational vectors in research: Ireland & Pennebaker (2010) asked students to answer essay questions written in different styles. They then calculated dictionary-based word counts for both the questions and the answers using 9 linguistic word lists from LIWC (see Section 14.6), including personal pronouns (e.g. “I”, “you”), and articles (e.g., “a”, “the”). They treated these 9 word counts as a 9-dimensional vector for each text, and measured the similarity between questions and responses with a metric similar to Euclidean distance. They found that students automatically matched the linguistic style of the questions (i.e. answers were more similar to the question they were answering than to other questions) and that women and students with higher grades matched their answers especially closely to the style of the questions.

Alammar, J. (2019). The illustrated Word2vec. In Jay Alammar – Visualizing machine learning one concept at a time.
Ireland, M. E., & Pennebaker, J. W. (2010).
Language style matching in writing: Synchrony in essays, correspondence, and poetry. Journal of Personality and Social Psychology, 99(3), 549–571. O’Connor, B. (2012). Cosine similarity, correlation, and coefficients. In AI and Social Science 1. This section is adapted from Alammar (2019)↩︎ 2. For proof, see O’Connor (2012)↩︎
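As a quick numeric check of the claim in Section 17.1.2 that correlation is just cosine similarity after centering, you can compare the two directly in R. The snippet below reuses the cos_sim() function defined earlier in the chapter and the scores for Amos and Elizabeth; it is only an illustrative sketch, not part of the original text.

amos      <- c(2, 7, 3)
elizabeth <- c(8, 4, 6)
cos_sim(amos - mean(amos), elizabeth - mean(elizabeth))   # cosine of the mean-centered vectors, about -0.945
cor(amos, elizabeth)                                      # ordinary Pearson correlation, the same value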
{"url":"https://ds4psych.com/vectorspace-intro","timestamp":"2024-11-13T06:31:36Z","content_type":"application/xhtml+xml","content_length":"69251","record_id":"<urn:uuid:8a43fa57-df87-48e7-b4f9-6014a532f4f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00816.warc.gz"}
Aggregation operators on backtrackable predicates Aggregate bindings in Goal according to Template. The aggregate_all/3 version performs findall/3 on Goal. Note that this predicate fails if Template contains one or more of min(X), max(X), min (X,Witness) or max(X,Witness) and Goal has no solutions, i.e., the minimum and maximum of an empty set is undefined. The Template values count, sum(X), max(X), min(X), max(X,W) and min(X,W) are processed incrementally rather than using findall/3 and run in constant memory. True when the conjunction of instances of Goal created from solutions for Generator is true. Except for term copying, this could be implemented as below. foreach(Generator, Goal) :- findall(Goal, Generator, Goals), maplist(call, Goals). The actual implementation uses findall/3 on a template created from the variables shared between Generator and Goal. Subsequently, it uses every instance of this template to instantiate Goal, call Goal and undo only the instantiation of the template and not other instantiations created by running Goal. Here is an example: ?- foreach(between(1,4,X), dif(X,Y)), Y = 5. Y = 5. ?- foreach(between(1,4,X), dif(X,Y)), Y = 3. The predicate foreach/2 is mostly used if Goal performs backtrackable destructive assignment on terms. Attributed variables (underlying constraints) are an example. Another example of a backtrackable data structure is in library(hashtable). If we care only about the side effects (I/O, dynamic database, etc.) or the truth value of Goal, forall/2 is a faster and simpler alternative. If Goal instantiates its arguments it is will often fail as the argument cannot be instantiated to multiple values. It is possible to incrementally grow an argument: ?- foreach(between(1,4,X), member(X, L)). L = [1,2,3,4|_]. Note that SWI-Prolog up to version 8.3.4 created copies of Goal using copy_term/2 for each iteration, this makes the current implementation unable to properly handle compound terms (in Goal’s arguments) that share variables with the Generator. As a workaround you can define a goal that does not use compound terms, like in this example: mem(E,L) :- % mem/2 hides the compound argument from foreach/2 ?- foreach( between(1,5,N), mem(N,L)). Find free variables in bagof/setof template. In order to handle variables properly, we have to find all the universally quantified variables in the Generator. All variables as yet unbound are universally quantified, unless 1. they occur in the template 2. they are bound by X^P, setof/3, or bagof/3 free_variables(Generator, Template, OldList, NewList) finds this set using OldList as an accumulator. - Richard O'Keefe - Jan Wielemaker (made some SWI-Prolog enhancements) Public domain (from DEC10 library). To be done - Distinguish between control-structures and data terms. - Exploit our built-in term_variables/2 at some places?
{"url":"https://eu.swi-prolog.org/pldoc/man?section=aggregate","timestamp":"2024-11-05T15:07:01Z","content_type":"text/html","content_length":"41301","record_id":"<urn:uuid:538ffe70-d5b7-4ffa-b87a-7c62cff50de5>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00415.warc.gz"}
Ruler and Segment Addition Postulates

Most geometry books start with the fundamentals, which can be confusing for students because the math itself is so easy. After working through the abstractness that is algebra, when we ask students to perform some simple addition and subtraction they wonder, “What’s the catch?” The initial focus is on “teaching the rules” of geometry and the symbols that are used. Once students get the hang of the notation used, those who are visual learners often find that they excel in geometry where they may have struggled in algebra. The ruler postulate, for example, basically states that you can use a number line or two points’ coordinates to find the distance between them. Students have been using a number line since early elementary school, so the math itself is not difficult. What they may not have been exposed to is the notation. AB is read as “the distance from A to B”. We can state things like AB = CD, meaning that segments AB and CD have equal length. Likewise, we add a new symbol, an equals sign with a tilde above it ≅, and state that it means “is congruent to”. Then we tell students that segments can be congruent but distances are equal. Students who are not used to that level of precision in their vocabulary can get lost if they do not notice the distinction. Interactive notebooks can help get students off on the right foot by encouraging them to take notes from their text, take notes from the class discussion and work through the examples as they are presented. Taking notes in math class requires a different skill set than taking notes in science, social studies, or English. Hopefully the foldable below can help students start off the year well. Ruler and Segment Addition Postulate Foldable – page 3 has two examples printed twice – cut in half to distribute to two students.
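A quick worked example (my own numbers, not from the foldable) may help make the notation concrete. If A sits at -2 and B sits at 5 on a number line, the Ruler Postulate gives AB = |5 - (-2)| = 7. If C lies between A and B with AC = 3, the Segment Addition Postulate says AC + CB = AB, so CB = 7 - 3 = 4. Writing AB = 7 is a statement about distance, while writing segment AC ≅ segment CB would be a claim about congruent segments, and here it would be false, since 3 ≠ 4.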
{"url":"https://systry.com/ruler-and-segment-addition-postulates/","timestamp":"2024-11-10T09:46:59Z","content_type":"text/html","content_length":"40806","record_id":"<urn:uuid:a1590fd7-f1f3-40f3-8105-698d2f151a78>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00151.warc.gz"}
Important Topics Of Quantitative Aptitude 101: What Does It Mean? Quantitative aptitude is a measure of an individual’s numeric ability and problem-solving skills. This sort of aptitude is highly valued in fields like computer science, engineering, and mathematics because it usually correlates with success in those fields. However, there is a push to further this understanding in qualitative fields like journalism and the digital arts too. Companies and people are becoming more focused on understanding and improving quantitative aptitude through training and testing in the workplace. Here’s what you need to know about quantitative aptitude and how to up yours. Here’s a Guide on How to Gauge Your Quantitative Aptitude How is Quantitative Aptitude Tested? Gauging quantitative aptitude is a requirement on many important entry tests for different fields. It is also used in the job selection process for many companies including well-known ones like Goldman Sachs. While quantitative aptitude is traditionally tested using, well, tests. There are other means of gauging that understanding, including through work-related projects like charting the spikes in google searches for a particular product or mapping demand for a service in a particular area. Types of Tests to Gauge Quantitative Aptitude There are a number of tests related to measuring quantitative aptitude, including common standardized tests like the SAT, ACT, and GRE. However, other tests for measuring numeric ability and problem-solving skills are IQ tests, math tests, and application exams. Testing is the most common way of gauging the quantitative aptitude of an individual. Major companies, especially tech companies, continue to use tests in their interview process to gauge this understanding. Topics Covered in Quantitative Aptitude Testing There are many topics covered in quantitative aptitude tests, which also vary by degree depending on how far along you are in understanding use-cases of quantitative aptitude. Some of the larger topic areas of quantitative aptitude testing include arithmetic, algebra, geometry, number systems, and modern mathematics. Most of the topics are covered in middle school and high school. Developing a strong quantitative aptitude takes time, patience, and hard work to make good progress. A Further Breakdown of Topics Some books, like Quantitative Aptitude by R.S. Aggarwal, covering the topic of quantitative aptitude development suggests breaking down the larger topic into arithmetic ability and data The first subtopic, arithmetic ability, includes topics like averages, percentages, roots, and simplifications. The second subtopic, data interpretation, looks at an understanding of tabulation and various graphs like line graphs, bar graphs, and histograms. What Jobs Value Quantitative Aptitude While all fields should value a strong aptitude for problem solving and working with numbers, there are a number of fields that require an aptitude for those skills. Some of the most common fields where numeric ability and problem-solving skills are tested include engineering, actuarial science, computer science, economics, statistics, and mathematics. Not all fields value quantitative aptitude in the same way, but there are commonalities. Those who work in quantitative fields are likely to have developed some of the following skills: Reasoning and Reverse Reasoning Logical reasoning is the ability to think through a problem step-by-step using analytics and deductive and inductive methodologies. 
The ability to think through a problem’s solution backwards is an important skill. It’s powerful in finding and understanding a methodology that works to solve the problem. Spatial and Abstract Relationships Knowing how to apply mathematical concepts and apply symbolism to express those concepts is important in communicating results in many heavily quantitative fields. It creates a standardized system to approaching problems among all practitioners within a particular field. The ability to choose from a number of mathematical models to communicate the best solutions to problems in various fields. Field experts, especially those in quantitative fields, need to model problems to communicate the best solutions to clients and to other stakeholders. Modeling also provides a methodology to approaching similar problems in the future. Attention to Detail Being accurate is important when working in quantitative fields, since job success depends on correct valuations and calculations. It is important to construct precise formulations and models to communicate solutions in mathematics and in writing. How to Further Develop your Quantitative Aptitude Quantitative Aptitude Grows with Age Research on productivity from the National Institute of Health shows that there are many ways in which changes in age can affect productivity, including on quantitative aptitude. Some of the studies also show a pattern where productivity peaks along with verbal and quantitative reasoning at around 40 years of age. Improving Quantitative Aptitude But, just like learning anything else, if you want to further your quantitative aptitude then you’ve got to practice using it. The trick to improving is to understand that it is not about solving as many problems as possible correctly, but developing an understanding about how to approach many different problems using logic. The logic and reasoning you develop from solving challenging problems leads to a better aptitude for numeric ability and problem-solving skills. Tips and Tricks to Solving Questions to Develop Quantitative Aptitude Some easy tricks to help you build a strong quantitative aptitude involve steps like underlining key bits of information in word problems and deriving formulas that are needed to solve the problem. It is better to grasp the logic given in the problem to find the next step instead of jumping to conclusions about given answers. Read Carefully Understanding all the components of a problem-solving question is an important starting point to building quantitative aptitude. Don’t underestimate the value of taking apart a question to understand what needs to be solved for. Pick at the Details Underline any details that would prove important pieces of information in solving through the problem. These will start to become relevant as you go through the step-by-step process of problem Do Not Jump to Conclusions Grasp the logic of a question first without jumping to a conclusion about how to approach the problem. It is important to identify all the moving parts in a problem before putting pen to paper. Solve as Many Types of Questions as Possible Keep attempting many different kinds of problems because this will help build your familiarity in applying the prior three tips and tricks. It will also help you build your familiarity in approaching a wide variety of problem, including word problems, graphic problems, and much more. 
Keep Timing your Problem Solving As you start becoming familiar with a large cache of problem types, start timing your progress in attempting each type of problem. Recording the time it takes you to solve particular problems will identify skills areas you need to improve on and those that you have already mastered. The point is to decrease your reaction time to many different problems by keeping up your practice. Improving Your Quantitative Aptitude Can Open New Doors While a quantitative aptitude can improve test taking capabilities and measure success for particular jobs, a strong quantitative aptitude also has the potential of strong personal enrichment. Computer Science and Data Science Projects Many fields like computer science and data science have general practitioners working on projects in their free time outside of their full-time jobs. Strengthening quantitative aptitude can lead to exciting project ideas to pursue in those two fields and a better understanding of how to complete such kinds of projects. One platform to look for these kinds of ideas and projects is GitHub. The Future is Interconnected The discovery of new fields by combining a focus on strengthening quantitative analysis and numeric ability with other skill areas is a huge boon to the world. It allows us to become more collaborative and innovative in coming up with new ways to improve society and ourselves. There is also a lot of potential to think up ideas to execute in other fields and work environments. One example is in journalism. Data journalism is a new and upcoming field within the last few years with a lot of potential to change the way we find and tell stories. Without enthusiasts with strong quantitative aptitudes finding ways to contribute to their own specialized fields there would likely not be discoveries of certain collaborations like data journalism.
{"url":"https://javabeat.net/quantitative-aptitude/","timestamp":"2024-11-08T14:16:02Z","content_type":"text/html","content_length":"102543","record_id":"<urn:uuid:ebdb5e75-112c-4e45-893a-d6db1f62ba5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00171.warc.gz"}
Mode Purity Improvement of High-Order Vortex Waves [1] B. Liu, Y. Cui, and R. Li, “Delay in space: Orbital angular momentum beams transmitting and receiving in radio frequency,” , vol.36, no.7, pp.409–421, 2016. doi: 10.1080/02726343.2016.1220909 [2] J. Wu, Z. X. Huang, X. A. Ren, et al ., “Wideband millimeter-wave dual-mode dual circularly polarized OAM antenna using sequentially rotated feeding technique,” IEEE Antennas and Wireless Propagation Letters , vol.19, no.8, pp.1296–1300, 2020. doi: 10.1109/LAWP.2020.2997057 [3] S. Feng and P. Kumar, “Spatial symmetry and conservation of orbital angular momentum in spontaneous parametric down-conversion,” Physical Review Letters , vol.101, no.16, article no.163602, 2008. doi: 10.1103/PhysRevLett.101.163602 [4] Z. Lin, X. Pan, J. Yao, et al ., “Characterization of orbital angular momentum applying single-sensor compressive imaging based on a microwave spatial wave modulator,” IEEE Transactions on Antennas and Propagation , vol.69, no.10, pp.6870–6880, 2021. doi: 10.1109/TAP.2021.3070067 [5] Z. K. Meng, Y. Shi, W. Y. Wei, et al ., “Multifunctional scattering antenna array design for orbital angular momentum vortex wave and RCS reduction,” IEEE Access , vol.8, pp.109289–109296, 2020. doi: 10.1109/ACCESS.2020.3001576 [6] Y. M. Zhang and L. Jia, “Analyses and full-duplex applications of circularly polarized OAM arrays using sequentially rotated configuration,” IEEE Transactions on Antennas and Propagation , vol.66, no.12, pp.7010–7020, 2018. doi: 10.1109/TAP.2018.2872169 [7] F. Qin, L. H. Li, and Y. Lin, “A four-mode OAM antenna array with equal divergence angle,” IEEE Antennas and Wireless Propagation Letters , vol.18, no.9, pp.1941–1945, 2019. doi: 10.1109/LAWP.2019.2934524 [8] K. Liu, H. Y. Lin, Y. L. Qin, et al ., “Generation of OAM beams using phased array in the microwave band,” IEEE Transactions on Antennas and Propagation , vol.64, no.9, pp.3850–3857, 2016. doi: 10.1109/TAP.2016.2589960 [9] L. Zhang, M. Deng, W. W. Li, et al ., “Wideband and high-order microwave vortex-beam launcher based on spoof surface plasmon polaritons,” Scientific Reports , vol.11, article no.23272, 2021. doi: 10.1038/s41598-021-02749-3 [10] Y. Pan, S. Zheng, J. Zheng, et al., “Generation of orbital angular momentum radio waves based on dielectric resonator antenna,” IEEE Antennas and Wireless Propagation Letters, vol.16, no.3, pp.385–388, 2017. [11] J. Liang and S. Zhang, “Orbital angular momentum (OAM) generation by cylinder dielectric resonator antenna for future wireless communication,” IEEE Access , vol.4, pp.9570–9574, 2016. doi: 10.1109/ACCESS.2016.2636166 [12] Z. Yu, N. Guo, and J. Fan, “Water spiral dielectric resonator antenna for generating multimode OAM,” IEEE Antennas and Wireless Propagation Letters , vol.19, no.4, pp.601–605, 2020. doi: 10.1109/LAWP.2020.2972969 [13] J. Ren and K. W. Leung, “Generation of microwave orbital angular momentum states using hemispherical dielectric resonator antenna,” Applied Physics Letters , vol.112, no.13, article no.131103, 2018. doi: 10.1063/1.5021951 [14] C. Guo, X. W. Zhao, C. Zhu, et al ., “An OAM patch antenna design and its array for higher order OAM mode generation,” IEEE Antennas and Wireless Propagation Letters , vol.18, no.5, pp.816–820, 2019. doi: 10.1109/LAWP.2019.2900265 [15] Y. H. Huang, X. P. Li, Q. W. Li, et al., “Generation of broadband high-purity dual-mode OAM beams using a four-feed patch antenna: Theory and implementation,” Scientific Reports, Vol.9, Article No.12977, 2019. [16] Z. T. Zhang, S. Q. 
Xiao, Y. Li, et al ., “A circularly polarized multimode patch antenna for the generation of multiple orbital angular momentum modes,” IEEE Antennas and Wireless Propagation Letters , vol.16, pp.521–524, 2017. doi: 10.1109/LAWP.2016.2586975 [17] G. Yang, Y. Liu, Y. Yan, et al ., “Vortex wave generation using single-fed circular patch antenna with arc segment,” Microwave and Optical Technology Letters , vol.63, pp.1732–1738, 2021. doi: 10.1002/mop.32806 [18] W. W. Li, L. Zhang, S. Yang, et al ., “A reconfigurable second-order OAM patch antenna with simple structure,” IEEE Antennas and Wireless Propagation Letters , vol.19, no.9, pp.1531–1535, 2020. doi: 10.1109/LAWP.2020.3008447 [19] W. W. Li, J. B. Zhu, Y. C. Liu, et al ., “Realization of third-order OAM mode using ring patch antenna,” IEEE Transactions on Antennas and Propagation , vol.68, no.11, pp.7607–7611, 2020. doi: 10.1109/TAP.2020.2990311 [20] R. J. Garbacz and R. H. Turpin, “A generalized expansion for radiated and scattered fields,” IEEE Transactions on Antennas and Propagation , vol.19, no.3, pp.348–358, 1971. doi: 10.1109/TAP.1971.1139935 [21] K. R. Schab, J. M. Outwater, M. W. Young, et al ., “Eigenvalue crossing avoidance in characteristic modes,” IEEE Transactions on Antennas and Propagation , vol.64, no.7, pp.2617–2627, 2016. doi: 10.1109/TAP.2016.2550098 [22] M. Cabedo-Fabres, E. Antonino-Daviu, A. Valero-Nogueira, et al ., “The theory of characteristic modes revisited: A contribution to the design of antennas for modern applications,” IEEE Antennas and Propagation Magazine , vol.49, no.5, pp.52–68, 2007. doi: 10.1109/MAP.2007.4395295 [23] M. Barbuto, F. Trotta, F. Bilotti, et al ., “Circular polarized patch antenna generating orbital angular momentum,” Progress in Electromagnetics Research , vol.148, pp.23–30, 2014. doi: 10.2528/PIER14050204 [24] A. Derneryd, “Analysis of the microstrip disk antenna element,” IEEE Transactions on Antennas and Propagation , vol.27, no.5, pp.660–664, 1979. doi: 10.1109/TAP.1979.1142159 [25] J. Huang, “Circularly polarized conical patterns from circular microstrip antennas,” IEEE Transactions on Antennas and Propagation , vol.32, no.9, pp.991–994, 1984. doi: 10.1109/TAP.1984.1143455 [26] R. Vescovo, “Characteristic modes for bodies endowed with mutually orthogonal symmetry planes,” Microwave and Optical Technology Letters , vol.2, no.11, pp.390–393, 1989. doi: 10.1002/mop.4650021109 [27] S. Dey, D. Chatterjee, E. J. Garboczi, et al., “Plasmonic nano-antenna optimization using characteristic mode analysis,” IEEE Transactions on Antennas and Propagation, vol.68, no.1, pp.43–53, [28] J. Zhao, Y. Chen, and S. Yang, “In-band radar cross section reduction of slot antenna using characteristic modes,” IEEE Antennas and Wireless Propagation Letters , vol.17, no.7, pp.1166–1170, 2018. doi: 10.1109/LAWP.2018.2836926 [29] S. L. Zheng, X. N. Hui, X. F. Jin, et al ., “Transmission characteristics of a twisted radio wave based on circular traveling-wave antenna,” IEEE Transactions on Antennas and Propagation , vol.63, no.4, pp.1530–1536, 2015. doi: 10.1109/TAP.2015.2393885 [30] L. C. Zhang, J. B. Jiang, Y. C. Lin, et al., “Single-fed patch antenna with reconfigurable orbit angular momentum order,” Radio Engineering, vol.30, no.1, pp.34–39, 2021. [31] N. Kumprasert and W. Kiranon, “Simple and accurate formula for the resonant frequency of the circular microstrip disk antenna,” IEEE Transactions on Antennas and Propagation , vol.43, no.11, pp.1331–1333, 1995. doi: 10.1109/8.475109 [32] I. Wolff and N. 
Knoppik, “Rectangular and circular microstrip disk capacitors and resonators,” IEEE Transactions on Microwave Theory and Technicals , vol.22, no.10, pp.857–864, 1974. doi: 10.1109/TMTT.1974.1128364 [33] G. Yang, L. Zhang, W. W. Li, et al ., “Orbital angular momentum spectrum of antenna vortex beam based on loop integration,” AIP Advances , vol.11, article no.115204, 2021. doi: 10.1063/5.0062179 [34] Z. Sipus, J. Bartolic, and Z. Milin, “On Symmetry in microstrip antenna analysis,” Microwave and Optical Technology Letters , vol.7, no.10, pp.464–466, 1994. doi: 10.1002/mop.4650071012 [35] H. Xue, H. Liu, Q. Shao, et al ., “Double-deflection vortex beam generation using a single elliptical patch with the theory of characteristic modes,” Optics Express , vol.28, no.8, pp.12322–12330, 2020. doi: 10.1364/OE.389729
{"url":"https://cje.ejournal.org.cn/article/doi/10.1049/cje.2022.00.115","timestamp":"2024-11-05T13:09:06Z","content_type":"text/html","content_length":"103955","record_id":"<urn:uuid:2a653841-5ad3-441d-a826-416a39abbffc>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00227.warc.gz"}
Exercises - Hypothesis Testing (Two Samples) The variances are not significantly different: Testing $H_0 : \sigma_1^2 = \sigma_2^2$ gave a test statistic of $F = 1.2922$. The critical value for $\alpha = 0.05$ is $F(9,9) = 4.0260$ (two-tailed test). Fail to reject $H_0$, the assumption of homogeneity of variances is met. There is not a significant difference in the brain size between the two groups: $H_0 : \mu_1 = \mu_2$ for small samples. Test statistic is $t = 1.84$. Critical value for $\alpha = 0.05$ is $t(18) = 2.101$. Fail to reject $H_0$. The variances are not significantly different. $H_0 : \sigma_1^2 = \sigma_2^2$ gave test statistic of $F = 1.44$. The critical value $F(11,11) = 3.4286$ or is approximated by $F(10,11) = 3.5257$ (for $\alpha = 0.025$ in one tail). Fail to reject $H_0$. Variances are not significantly different. The assumption of homogeneity of variances is met. The weights of the two brands are not significantly different: $H_0 : \mu_1 - \mu_2 = 0$ gives test statistic of $t = 0.7762$. (Note: $s_p^2 = 0.0488$) Critical value: $t(22) = 2.074$ at $\alpha = 0.05$. Fail to reject $H_0$. The weights are not significantly different between brands of raisin bran cereal. $H_0 : \mu_d = 0$ or $\overline{D} = 0$. The mean of the difference is $0.888^+$ and the standard deviation of the difference is $4.4566$. Test statistic, $t = 0.5984$. For $\alpha = 0.05$, $t(8) = 2.306$. Fail to reject $H_0$. The exercise program did not seem to make a significant difference in the weight of the subjects. Check to see if the variances are not significantly different. $H_0 : \sigma_1^2 = \sigma_2^2$ gives test statistic $F = 1.865$. Critical value at $\alpha = 0.05$ is (two-tailed) $F(20,60) = 1.9445$. Fail to reject $H_0$. The assumption of homogeneity of variances has been met. Test the claim of $\mu_1 \gt \mu_2$ with $H_1 : \mu_1 \gt \mu_2$ or use the notation $H_1 : \mu_1 - \mu_2 \gt 0$ where $ \mu_1$ represents freshmen. One of the samples is less than $30$, so small samples test statistic, $t = 2.359$. Critical value $t(74)$ is estimated on the standard normal, $z(0.01) = 2.327$ (Note: degrees freedom is $19 + 57 - 2 = 74$). Reject $H_0$, the freshmen have a significantly higher average than the sophomores in Math 111. (a) $H_0 : \mu_1 = \mu_2$, test statistic is $z = -13.229$, highly significant for a value of $z$ on the standard normal table; $p$-value is $p \lt 0.0001$. Reject the null hypothesis. There is a significant difference in mean waiting times for the two counties. (b) $-82.4 \lt \mu_1 - \mu_2 \lt -57.8$. The interval does not include zero and the null hypothesis in part a was rejected. These are consistent results. (a) $H_0 : \sigma_1 = \sigma_2$. Test statistic is $1.242$ with critical value of $F(14,14)$ needed, use $F(15,14) = 2.9493$. Fail to reject. There is not a significant difference between the variances. The assumption of homogeneity of variance is met. (b) $H_0 : \mu_4 = \mu_2$ (with alternative hyp $H_1 : \mu_4 \gt \mu_2$) Test statistic is $t = 2.363$. For $\alpha = 0.05$, $t(28) = 1.701$, one-tailed test. Note: $s_p^2 = 6.5$. Reject the null hypothesis at $\alpha = 0.05$. The fourth floor has significantly higher temperature than the second floor. strike data collected $H_0 : \mu_d = 0$. The mean of the differences is $0.168$ and the standard deviation of the differences is $0.411$. You need to be able to find these. The test statistic is $t = 1.29$. The test criterion is $t(9) = 2.262$ for $\alpha = 0.05$. Fail to reject the null hypothesis. 
There seems to be no difference in strength between the basic back kick and the reverse high punch. sample of newborn boy weights (in pounds) in a middle class neighborhood is taken. (a) Is there an outlier? If so, discard. Explain your decision. (b) It is assumed that the distribution of weights of newborn boys is normally distributed. Check to make sure that the distribution is not significantly skewed. (c) The national average of weights of newborn boys is 7.8 pounds. Is the above sample significantly different from the national average? (d) A second sample of newborn boy weights (in pounds) in a lower socioeconomic neighborhood hospital is taken. Check this sample for outliers and discard if needed. Is this distribution significantly skewed? Is this sample significantly different from the national average? (e) For the two samples, are the variances homogeneous? (f) It is believed that the birth weight of newborns in lower socioeconomic homes is lower than the birth weights of thos in middle or upper socioeconomic homes. With these samples, test this belief. (a) $2.3$ is an outlier by both definitions; (b) $I = 0.03$, not skewed; (c) $H_0 : \mu = 7.8; t = 1.223$ is the test statistic; $t(18) = 2.101$ is the critical value; Fail to reject the null hypothesis. The sample is not significantly different from the national average birth weight of boys. (d) No outliers; $I = -0.066$, not significantly skewed; $H_0 : \mu = 7.8; t = -3.051$ is the test statistic; $t(19) = 2.093$ is the critical value; reject the null hypothesis. This sample is significantly different from the national average. (e) $H_0 : \sigma_1 = \sigma_2$; $F = 1.306$ is the test statistic; $F(19,18) = 2.559$ is the critical value for $\alpha = 0.05$; The samples are homogeneous; (f) $H_0 : \mu_2 = \mu_1 , H_1 : \mu_2 \lt \mu_1 : t = -3.077, p = 0.002; t(37) = -1.645, \alpha = 0.05$; Reject the null hypothesis. The birth weight of boys from lower socioeconomic homes is significantly lower than the birth weight of boys from the middle socioeconomic homes. A study of reaction times was conducted with 12 volunteers. Their reaction times (in milliseconds) before and after the consumption of one alcoholic beverage was measured. Use hypothesis testing techniques to test the claim that reaction time is different after consumption of this beverage. Check all assumptions. The value $-713$ (or $713$ depending on which you subtracted) is an outlier, so remove it. $I = -0.403$, so not significantly skewed. Data set: mean is $-202.1$, $s = 45.4$, median is$-196$, $n = 11$. The claim is that the difference is not zero. $H_0 : \overline{D} = 0$; test statistic $t = -14.76$; critical value for $\alpha = 0.05$, $t(10) = \pm 2.228$; Reject the null hypothesis. There is significant evidence to support the claim that reaction time is different after consumption of the beverage. A regional sales manages selects two similar offices to study the effectiveness of a new training program aimed at increasing sales. One office institutes the program and the other does not. The office that does not, Office 1, has 47 sales people. For this office, the mean sales per person over the next month is $\$3,193$ with a standard deviation of $\$102$. Office 2, the office that does institute the training program, has 51 sales people. For this office, the mean sales per person over the next month is $\$3,229$ with a standard deviation of $\$107$. (a) At a 5% significance level, does the training program appear to increase sales? 
(Show all steps clearly) (b) Construct a 90% confidence interval for the difference in mean sales of the two offices. Interpret your findings. (c) Are the results of parts (a) and (b) consistent? Clearly explain your reasoning. (a) Claim: $\mu_2 \gt \mu_1$ and $H_0 : \mu_2 - \mu_1 = 0$. Test statistic, $z = 1.705$; Critical value at $\alpha = 0.05$, $z = 1.645$; $p$-value is $0.0436$ (area in right tail when using the test statistic value as the cut-off). Reject the null hypothesis. The training program seems to produce a significant increase in sales (b) $E = 34.7$ and the interval is $(1.3,70.7)$. We are 90% confident that the difference between the means is in the interval given. (c) Yes. The null hypothesis was rejected and the value $0$ is not in the above interval since all points in the interval are positive. The results are consistent. This problem used a one-tailed test because it was reasonable to assume that the training would increase sales. An instructor wants to compare the number of hours a student studies for a calculus exam to the number of hours a student studies for a statistics exam. The instructor takes random samples of his calculus students statistics students to this end. Determine, at a 5% significance level, whether the two courses have different mean study times for an exam. Clearly check all appropriate assumptions. Assumptions: No outliers; for skewness: $I = -0.21$ for the calculus group and $I = 0.836$ for the statistics group, therefore neither are significantly skewed. Test for homogeneity of variances: $H_0 : \sigma_1^2 = \sigma_2^2$. $F = 2.307$ and the critical value $F(9,13) = 3.3120$. Variances are homogeneous. Test the means: $H_0 : \mu_1 - \mu_2 = 0$. (Note: $s_p^2 = 2.284$) Test statistic $t = 1.486$ at $\alpha = 0.05$, the critical value is $t(22) = \pm 2.074$. Fail to reject the null hypothesis. There does not seem to be a significant difference in the mean study time for the students in the two courses.
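Several of the exercises above follow the same two-step recipe: an F-test for homogeneity of variances followed by a pooled two-sample t-test. As a hedged illustration of that recipe only, the short Python/SciPy sketch below runs both tests on made-up data; the sample values and sizes are hypothetical and are not the data behind any of the exercises above.

    import numpy as np
    from scipy import stats

    # Hypothetical samples -- NOT the data used in the exercises above
    x1 = np.array([5.2, 4.8, 5.5, 5.1, 4.9, 5.3, 5.0, 5.4, 5.2, 5.1])
    x2 = np.array([4.7, 4.9, 5.0, 4.6, 4.8, 5.1, 4.7, 4.9, 5.0, 4.8])

    # Step 1: F-test for homogeneity of variances, H0: sigma1^2 = sigma2^2
    s1, s2 = x1.var(ddof=1), x2.var(ddof=1)
    if s1 >= s2:
        F, dfn, dfd = s1 / s2, len(x1) - 1, len(x2) - 1
    else:
        F, dfn, dfd = s2 / s1, len(x2) - 1, len(x1) - 1
    p_F = 2 * stats.f.sf(F, dfn, dfd)          # two-tailed p-value
    print(f"F = {F:.4f}, p = {p_F:.4f}")

    # Step 2: pooled two-sample t-test, H0: mu1 = mu2 (equal variances assumed)
    sp2 = ((len(x1) - 1) * s1 + (len(x2) - 1) * s2) / (len(x1) + len(x2) - 2)
    t, p_t = stats.ttest_ind(x1, x2, equal_var=True)
    print(f"s_p^2 = {sp2:.4f}, t = {t:.4f}, p = {p_t:.4f}")

Comparing the printed F and t values against the tabulated critical values, exactly as done in the worked answers above, leads to the same reject / fail-to-reject decisions.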
{"url":"https://mathcenter.oxford.emory.edu/site/math117/probSetHypTestingTwoSamples/","timestamp":"2024-11-10T19:34:42Z","content_type":"text/html","content_length":"18607","record_id":"<urn:uuid:81f9e0e4-b760-486d-94dc-435468fa0ae6>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00882.warc.gz"}
Construction Of A Talon An illustration showing how to construct a talon, i.e., two circle arcs that are tangent to each other and that meet two parallel lines at right angles at the given points A and B. “Join A and B; divide AB into four equal parts, erect perpendiculars; then m and n are the centers of the circle arcs of the required talon.” geometric construction, circle arcs. Albert A. Hopkins, Scientific American: Handy Book Of Facts And Formulas (New York: Munn & Co., Inc., 1918), 37
{"url":"https://etc.usf.edu/clipart/49900/49944/49944_construction.htm","timestamp":"2024-11-10T07:53:01Z","content_type":"text/html","content_length":"21974","record_id":"<urn:uuid:c4072169-bd8d-46cb-8a5c-e7cfd95375c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00682.warc.gz"}
kap module controls kap module controls The MESA/kap parameters are given default values here. The actual values as modified by your inlist are stored in the Kap_General_Info data structure. They can be accessed by code at runtime using the kap_handle to get a pointer to it. Base metallicity The base metallicity for the opacity tables. This provides the reference metallicity necessary to calculate element variations. Physically, this usually corresponds to the initial metallicity of the model and remains static during the entirety of a calculation. Variations in metallicity from nuclear burning and mixing should not alter Zbase as its purpose is to serve as a reference metallicity for C and O enhancement during Helium burning. See use_Zbase_for_Type1. By default, Zbase does not typically affect the outer regions of a stellar model when the surface metallicity remains below the initial metallicity (Z < Zbase). Zbase is typically only used for Type2 opacities which operate deeper in the stellar interior. Specific scenarios such as high Z accretion onto a metal poor model can produce a situation where (Z > Zbase). During this scenario the surface metallicity can be enhanced beyond the initial metallicity. Users can opt to exclusively adopt Type1 opacities in this scenario with the use_Zbase_for_Type1 flag. NOTE: For wind schemes that scale with metallicity, we use Zbase rather than Z (as long as Zbase > 0). This is because wind mass loss rate is primarily determined by iron opacity, which is unlikely to change during the evolution. Zbase must be set if using kapCN, AESOPUS, or Type2 opacities or else MESA well draw an error. Table selection Select the set of opacity tables for higher temperature, hydrogen-rich conditions. Also referred to as Type1 tables. See Blend controls to understand precisely when these tables are used. These tables use the value of Zbase for Z, unless use_Zbase_for_Type1 = .false.. The Type1 tables cover a wider range of X and have a higher resolution in Z for each X than Type2. The OPAL/OP Type1 tables are for 126 (X,Z) pairs from the following sets: • X: 0.0, 0.1, 0.2, 0.35, 0.5, 0.7, 0.8, 0.9, 0.95, 1-Z • Z: 0.0, 1e-4, 3e-4, 1e-3, 2e-3, 4e-3, 1e-2, 2e-2, 3e-2, 4e-2, 6e-2, 8e-1, 1e-1 The OPLIB Type1 tables offer additional table density, for 1194 (X,Z) pairs from the following sets: • X: 0.0, 0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, • 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, • 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, • 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97, • 0.98, 0.99, 1-Z • Z: 0.0, 0.000001, 0.00001, 0.00003, 0.00007, 0.0001, • 0.0003, 0.0007, 0.001, 0.002, 0.003, 0.004,0.006, • 0.008, 0.01, 0.012, 0.014, 0.015, 0.016, 0.017, 0.018, • 0.019, 0.02, 0.021, 0.022, 0.023, 0.024, 0.025, 0.026, • 0.028, 0.03, 0.035, 0.04, 0.05, 0.06, 0.07, 0.08, • 0.09, 0.1, 0.15, 0.2 Available options: • 'gn93' • 'gs98' • 'a09' • 'OP_gs98' • 'OP_a09_nans_removed_by_hand' • 'oplib_gs98’ • 'oplib_agss09’ • 'oplib_aag21’ • 'oplib_mb22’ Select the set of opacity tables for higher temperature, hydrogen-poor/metal-rich conditions. Also referred to as Type2 tables. Critically, Type2 tables account for C and O enhancement during and after He burning. See Blend controls to understand precisely when these tables are used. abundances previous to any CO enhancement. Ignored if use_Type2_opacities = .false.. These tables use the value of Zbase as the base metallicity. 
The Type2 tables are for (X,Z) pairs from the following sets: • X: 0.0, 0.03, 0.10, 0.35, 0.70 • Z: 0.00, 0.001, 0.004, 0.01, 0.02, 0.03, 0.05, 0.1 Available options: • 'gn93_co' • 'gs98_co' • 'a09_co' kap_CO_prefix = 'gs98_co' Select a set of opacity tables for lower temperatures. Available options: • 'lowT_fa05_mb22' • 'lowT_fa05_aag21' • 'lowT_Freedman11' • 'lowT_fa05_gs98' • 'lowT_fa05_gn93' • 'lowT_fa05_a09p' • 'lowT_af94_gn93' • 'lowT_rt14_ag89' • 'kapCN' • 'AESOPUS' kap_CN uses tables from Lederer, M. T.; Aringer, B. (2009) Low temperature Rosseland opacities with varied abundances of carbon and nitrogen 'AESOPUS' uses tables from AESOPUS Marigo, P.; Aringer, B. (2009) Low-temperature gas opacity. ÆSOPUS: a versatile and quick computational tool Specify which file using AESOPUS_filename. The file is first looked for in the work directory. If not found, then data/kap_data is searched. Currently one set of opacities is provided, with the filename 'AESOPUS_AGSS09.h5'. To see more detail about the composition details of the tables set show_info = .true.. You can generate your own tables with their web interface at http://stev.oapd.inaf.it/cgi-bin/aesopus . See kap/preprocessor/AESOPUS/README for information on preparing the tables for MESA. kap_lowT_prefix = 'lowT_fa05_gs98' AESOPUS_filename = '' ! used only if kap_lowT_prefix = 'AESOPUS' Blend controls If true, then if use_Type2_opacities = .true., Type1 opacities will be computed using Zbase instead of Z when Z > Zbase. This helps with blending from Type1 to Type2. Ignored if use_Type2_opacities = use_Zbase_for_Type1 = .true. Select whether to use Type2 opacity tables (see kap_CO_prefix). Even when true, in regions where hydrogen is above a given threshold, or the metallicity is not significantly higher than Zbase, Type1 tables are used instead, with blending regions to smoothly transition from one to the other (see following controls). use_Type2_opacities = .true. Switch to Type1 if X too large. Type2 is full off for X >= kap_Type2_full_off_X Type2 can be full on for X <= kap_Type2_full_on_X. kap_Type2_full_off_X = 1d-3 kap_Type2_full_on_X = 1d-6 Switch to Type1 if dZ too small (dZ = Z - Zbase). Type2 is full off for dZ <= kap_Type2_full_off_dZ. Type2 can be full on for dZ >= kap_Type2_full_on_dZ. kap_Type2_full_off_dZ = 0.001d0 kap_Type2_full_on_dZ = 0.01d0 X and dZ terms are multiplied to get actual fraction of Type2. The fraction of Type2 is calculated for each cell depending on the X and dZ for that cell. So you can be using Type1 in cells where X is large or dZ is small, while at the same time you can be using Type2 where X is small and dZ is large. When frac_Type2 is > 0 and < 1, then both Type1 and Type2 are evaluated and combined linearly as (1-frac_Type2)*kap_type1 + frac_Type2*kap_type2. Add kap_frac_Type2 to your profile columns list to see frac_Type2 for each cell. Region to blend between higher temperature tables (see kap_file_prefix and kap_CO_prefix) and lower temperature tables (see kap_lowT_prefix). The upper/lower blend boundary will be clipped to the true extent of the opacity tables. The upper boundary will be min of kap_blend_logT_upper_bdy and the max logT for lowT tables. The lower boundary will be max of kap_blend_logT_lower_bdy and min logT for highT tables. The typical min logT of the higher temperature tables tables is 3.75. Check your tables to be sure. It is probably a good idea to keep the blend away from H ionization. logT upper of about 3.9 or a bit less will do that. 
kap_blend_logT_upper_bdy = 3.88d0 kap_blend_logT_lower_bdy = 3.80d0 Interpolation options type of interpolation in X. true is cubic; false is linear. cubic_interpolation_in_X = .false. type of interpolation in Z. true is cubic; false is linear. cubic_interpolation_in_Z = .false. Custom tables If the prefix options in Table selection above do not match one of the available options, MESA still searches for files in data/kap_data with the given prefix. This allows for custom tables. However, the user must also indicate the X and Z values for which the tables are provided. Separate controls exist for each class of prefix. Number of X values. X values for the tables (length user_num_kap_Xs). Values such that X + Z > 1 will have X reduced to 1-Z. Choose user_num_kap_Xs_for_this_Z such that at most 1 X value for each Z will be reduced in this way. ! user_kap_Xs = 0.0d0, 0.1d0, 0.2d0, 0.35d0, 0.5d0, 0.7d0, 0.8d0, 0.9d0, 0.95d0, 1.0d0 Number of Z values. Z values for the tables (length user_num_kap_Zs). ! user_kap_Zs = 0.000d0, 0.0001d0, 0.0003d0, 0.001d0, 0.002d0, 0.004d0, 0.01d0, 0.02d0, 0.03d0, 0.04d0, 0.06d0, 0.08d0, 0.100d0 At different values of Z, the number of values of X may change. In particular, tables with X > 1-Z will not exist. Use the first N (<= user_num_kap_Xs) X values for the tables of the corresponding Z (length user_num_kap_Zs). ! user_num_kap_Xs_for_this_Z = 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 9, 9, 8 Number of X values. X values for the tables (length user_num_kap_CO_Xs). ! user_kap_CO_Xs = 0.00d0, 0.03d0, 0.10d0, 0.35d0, 0.70d0 Number of Z values. Z values for the tables (length user_num_kap_CO_Zs). ! user_kap_CO_Zs = 0.000d0, 0.001d0, 0.004d0, 0.010d0, 0.020d0, 0.030d0, 0.050d0, 0.100d0 At different values of Z, the number of values of X may change. In particular, tables with X > 1-Z will not exist. Use the first N (<= user_num_kap_CO_Xs) X values for the tables of the corresponding Z (length user_num_kap_CO_Zs). ! user_num_kap_CO_Xs_for_this_Z = 5, 5, 5, 5, 5, 5, 5, 5 Number of X values. ! user_num_kap_lowT_Xs = 10 X values for the tables (length user_num_kap_lowT_Xs). Values such that X + Z > 1 will have X reduced to 1-Z. Choose user_num_kap_lowT_Xs_for_this_Z such that at most 1 X value for each Z will be reduced in this way. ! user_kap_lowT_Xs = 0.0d0, 0.1d0, 0.2d0, 0.35d0, 0.5d0, 0.7d0, 0.8d0, 0.9d0, 0.95d0, 1.0d0 Number of Z values. ! user_num_kap_lowT_Zs = 13 Z values for the tables (length user_num_kap_lowT_Zs). ! user_kap_lowT_Zs = 0.000d0, 0.0001d0, 0.0003d0, 0.001d0, 0.002d0, 0.004d0, 0.01d0, 0.02d0, 0.03d0, 0.04d0, 0.06d0, 0.08d0, 0.100d0 At different values of Z, the number of values of X may change. In particular, tables with X > 1-Z will not exist. Use the first N (<= user_num_kap_lowT_Xs) X values for the tables of the corresponding Z (length user_num_kap_lowT_Zs). ! user_num_kap_lowT_Xs_for_this_Z = 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 9, 9, 8 Conductive opacity options add conduction opacities to radiative opacities include_electron_conduction = .true. Use fits from Blouin et al. (2020) for H and He in the regime of moderate coupling and moderate degeneracy. use_blouin_conductive_opacities = .true. Miscellaneous controls if true, then output additional information as the opacities are loaded. this is particularly useful to see the detailed composition coverage of the AESOPUS opacity files. Other hooks Control whether to use other hooks. See kap/other. Replace electron conduction opacity routine use_other_elect_cond_opacity = .false. 
Replace Compton opacity routine use_other_compton_opacity = .false. Replace radiative opacity routine. The standard routine evaluates the opacity using the low-T and high-T tables. use_other_radiative_opacity = .false. User controls These are arrays of size(10) that can be used to pass in custom information to kap kap_ctrl(:) = 0d0 kap_integer_ctrl(:) = 0 kap_logical_ctrl(:) = .false. kap_character_ctrl(:) = '' Extra inlist controls One can split a kap inlist into pieces using the following parameters. It works recursively, so the extras can read extras too. If read_extra_kap_inlist(i) is true, then read &eos from the file extra_kap_inlist_name(i). read_extra_kap_inlist(:) = .false. extra_kap_inlist_name(:) = 'undefined' Debugging controls Specify a range of calls for which to receive debugging information. dbg = .false. logT_lo = -1d99 logT_hi = 1d99 logRho_lo = -1d99 logRho_hi = 1d99 X_lo = -1d99 X_hi = 1d99 Z_lo = -1d99 Z_hi = 1d99
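As a minimal, hypothetical example of how a few of the controls documented above might be combined, the fragment below follows the inlist syntax already shown in this page (assuming the namelist is named &kap, as in recent MESA releases). The specific option values are illustrative only, not recommendations, and any real inlist should be checked against the MESA version in use.

    &kap
       ! illustrative values only
       Zbase = 0.02d0

       kap_file_prefix = 'gs98'
       kap_CO_prefix = 'gs98_co'
       kap_lowT_prefix = 'lowT_fa05_gs98'

       use_Type2_opacities = .true.
       kap_Type2_full_off_X = 1d-3
       kap_Type2_full_on_X = 1d-6

       cubic_interpolation_in_X = .false.
       cubic_interpolation_in_Z = .false.
    / ! end of kap namelist

Here Zbase is set explicitly because Type2 opacities are requested, consistent with the note above that Zbase must be set when Type2, kapCN, or AESOPUS tables are used.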
{"url":"https://docs.mesastar.org/en/24.08.1/reference/kap.html","timestamp":"2024-11-13T15:46:59Z","content_type":"text/html","content_length":"49108","record_id":"<urn:uuid:34584054-6449-4116-8713-8267ff4cd28d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00459.warc.gz"}
The difference of a number divided by 3 and 6 is equal to 5. What is the number? | HIX Tutor Answer 1 Let x be the number, then the equation is: #x/3-x/6 = 5# Multiply both sides of the equation by 6: #2x - x = 30# #x = 30# Answer 2 Let #x# be the number #x/3 - 6 = 5# Multiplying both sides of the equation by 3, #x-18=15# #x=15+18# Therefore, #x=33# Answer 3 See a solution process below: First, let's call "a number": #n# Next, we can write "a number divided by 3" as: Then, we can write "the difference between" a number divided by 3 "and 6" as: #n/3 - 6# Now, we can complete this equation by write "is equal to 5" as: #n/3 - 6 = 5# To solve this, first, add #color(red)(6)# to each side of the equation to isolate the #n# term while keeping the equation balanced: #n/3 - 6 + color(red)(6) = 5 + color(red)(6)# #n/3 - 0 = 11# #n/3 = 11# Now, multiply each side of the equation by #color(red)(3)# to solve for #n# while keeping the equation balanced: #color(red)(3) xx n/3 = color(red)(3) xx 11# #cancel(color(red)(3)) xx n/color(red)(cancel(color(black)(3))) = 33# #n = 33# Answer 4 The number is either $+ 33$ or $- 3$ #color(red)("The difference of ")color(blue)("a number ")color(green)("divided by 3")color(red)(" and ")color(magenta)6color(brown)( " is equal to 5")# If we represent #color(blue)("a number")# by the variable #color(blue)n# this becomes: #color(red)("The difference of ")color(blue)(n)color(green)(/3)color(red)(" and ")color(magenta)6color(brown)(=5)# This has two possible interpretations #{: (color(blue)(n)color(green)(/3) color(red)(-) color(magenta)6color(brown)(=5),color(white)("xx")"and"color(white)("xx"),color(magenta)6color(red)-color(blue) ncolor(green)(/3)color(brown)(=5)), (rarr color(blue)ncolor(green)(/3)=11,,rarrcolor(red)-color(blue)ncolor(green)(/3)color(brown)=-1), (rarr color(blue)n=33,,rarr color(blue)n=-3) :}# Answer 5 Let's denote the number as ( x ). According to the given information: [\frac{x}{3} - 6 = 5] Now, solve for ( x ) by isolating it on one side of the equation: [\frac{x}{3} = 5 + 6] [\frac{x}{3} = 11] [x = 11 \times 3] [x = 33] So, the number is ( 33 ).
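The answers above disagree (30 versus 33) because the sentence can be parsed two ways. As a quick check of both readings — a hypothetical snippet, not part of the original page — SymPy makes the distinction explicit:

    import sympy as sp

    x = sp.symbols('x')
    # Reading 1: (a number divided by 3) minus 6 equals 5  ->  x/3 - 6 = 5
    print(sp.solve(sp.Eq(x/3 - 6, 5), x))      # [33]
    # Reading 2: x/3 minus x/6 equals 5        ->  x/3 - x/6 = 5
    print(sp.solve(sp.Eq(x/3 - x/6, 5), x))    # [30]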
{"url":"https://tutor.hix.ai/question/the-difference-of-a-number-divided-by-3-and-6-is-equal-to-5-what-is-the-number-84670bbceb","timestamp":"2024-11-01T23:52:38Z","content_type":"text/html","content_length":"594210","record_id":"<urn:uuid:2fce00e3-ce54-4847-86f6-b2ca0b1b430e>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00171.warc.gz"}
[color=teal][size=1][b]I made this poem, actually it's a song, or it could be a poemasong! Anyways here it is.It's mostly about me. If you're Psyco and you know it stab someone. If you're Psyco and you know it stab someone. If you're Psyco and you know it and you really wanna show it, stab someone. Is it good?[/b][/size][/color] Let's just say that calling it "bad" would be a compliment. If it was supposed to be funny, I missed that too lol :p Isn't it weird that if you edit your post, it says your old name did it? Never noticed that before. [color=crimson]*Stabs Him For How Much He Hates That Poem.* Irony is the lowest form of wit.[/color] That has to be the most awful personal rendition of "If You're Happy And You Know It..." I've ever read. I would suggest inputting a little thing called "Thought" into what you write. Just a suggestion. ^_~[/color] [QUOTE][i]Originally posted by Psyco [/i] [B][color=teal][size=1][b]I made this poem, actually it's a song, or it could be a poemasong! Anyways here it is.It's mostly about me. [strike]If you're[/strike] Psyco and you know it stab someone If you're [strike]Psyco[/strike]a visage [strike]and you know it[/strike] stab someone. [strike]If you're Psyco and you know it and you really wanna show it, stab someone.[/strike] Psycho and you know it blood red and shot it. Is it good? [color=red]Not really, to be truthful.[/color][/b][/size][/color] [/B][/QUOTE] [color=red] Hm. There's my rendition at some try with it. It's way too repetitive offhand, I must say. It just doesn't go anywhere. It needs more description...ah well. I couldn't really make it better, lol. It's just that bad. Don't feel bad though. You have to start somewhere with everything.[/color] [size=1]It aint badder than Mitch's banner thats for sure :rolleyes: but the last sentence dont fit to real song ya no. If you're Psyco and you know it, and you really wanna show it, [i][b]blah blah blah blah blah blah[/b][/i], stab someone. you left the part where i put blah blah blah ;)[/size] [QUOTE][i]Originally posted by Dark_Apocalyps [/i] [B][size=1]It aint badder than Mitch's banner thats for sure [/size] [/B][/QUOTE] [color=crimson]Its better than a chocobo on LSD discovering the world of Trippy-Land.[/color] I love Mitch's banner. I thought it was the best thing I saw yesterday, but that's me heh. [QUOTE][i]Originally posted by Semjaza Azazel [/i] [B]I love Mitch's banner. I thought it was the best thing I saw yesterday, but that's me heh. [/B][/QUOTE] [color=red] Glad you like it, [b]Se[/b][strike]m[/strike][b]j[/b][strike]azaz[/strike] [strike]Azazel[/strike]. It's a thing Zeh and I are doing. We put insulting phrases in our sig. If you look in Tasis's sig, you'll see the phrase I've made up, lol. It's much better than the one he gave me. The banner's by Zeh as well. He's forcing me to use it. And badder just doesn't sound good, D_A, heh. And it's not my fault that it's so hard to work with.[/color] [color=crimson]How about. PYSCHO [YOU KNOW IT] PYSCHO [WE SHOW IT] WE'RE PYSCHO AND YOU KNOW IT, CRIMSON ON THE FLOOR I dont know. I'm pyschopathic. Dont listen to my songs, they are demonized. :D[/color] [color=red] I listen to your songs nonetheless, Kenny. You've got a knack for words. And I need to start reading more poetry, and just everything in general. ;)[/color] [QUOTE][i]Originally posted by Dark_Apocalyps [/i] [B][size=1]It aint badder than Mitch's banner thats for sure :rolleyes: [/size] [/B][/QUOTE] [color=teal][size=1][b]Actually Mitch's banner [i]is[/i] better than my poem... 
Thank you for your compliments!I hope to become a singer/songwriter! I also want to thank my mother,father and family.Heh.[/b][/size][/color] [i]Last edited by Mztik_Gohan10 on 08-28-2001 at 07:00 PM Last edited by Psyco on 03-25-2003 at 02:55 AM[/i] ok >_> Now im gonna kill people >_> Arent I long enough on these boards to make you atleast know that I never use rolleyes or :p in a serious thread?! :nope: to SA: [I]I[/I], not anyone on this forum, but [I]I[/I] ;) Usually rolling eyes adds some sarcasm to what precedes it, so that's how everyone would have taken it.
{"url":"https://www.otakuboards.com/topic/13280-psyco/","timestamp":"2024-11-04T20:28:23Z","content_type":"text/html","content_length":"191489","record_id":"<urn:uuid:fbb8d894-1a40-4dec-b55d-6925e2440c75>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00534.warc.gz"}
NCERT Solutions Class 9 Science Chapter 10 Gravitation - Download Free PDFs NCERT Solutions Class 9 Science Chapter 10 – CBSE Free PDF Download *According to the CBSE Syllabus 2023-24, this chapter has been renumbered as Chapter 9. NCERT Solutions for Class 9 Science Chapter 10 Gravitation provides you with the necessary insights into the concepts involved in the chapter. Detailed answers and explanations provided by us in NCERT Solutions will help you in understanding the concepts clearly. Gravity is a fascinating topic that explains many things, from how our planet stays in orbit to why things fall down. Explore Science Chapter 10 – Gravitation of NCERT Solutions for Class 9 to learn everything you need to know about gravity. Content is crafted by highly qualified teachers and industry professionals with decades of relevant knowledge. Moreover, the solutions have been updated to include the latest content prescribed by the CBSE board. Furthermore, we ensure that relevant content on NCERT Solutions Class 9 is regularly updated as per the norms and prerequisites that examiners often look for in the CBSE exam. This ensures that the content is tailored to be class relevant without sacrificing the informational quotient. BYJU’S also strives to impart maximum informational value without increasing the complexity of topics. This is achieved by ensuring that the language is simple and that all technical jargon is explained at the required school level. Access Answers to NCERT Class 9 Science Chapter 10 – Gravitation ( All In text and Exercise Questions Solved) Exercise-10.1 Page: 134 1. State the universal law of gravitation. The universal law of gravitation states that every object in the universe attracts every other object with a force called the gravitational force. The force acting between two objects is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers. 2. Write the formula to find the magnitude of the gravitational force between the earth and an object on the surface of the earth. Consider F as the force of attraction between an object on the surface of earth and the earth Also, consider ‘m’ as the mass of the object on the surface of earth and ‘M’ as the mass of earth The distance between the earth’s centre and object = Radius of the earth = R Therefore, the formula for the magnitude of the gravitational force between the earth and an object on the surface is given as F = G Mm/R^2 Exercise-10.2 Page: 136 1. What do you mean by free fall? Earth’s gravity attracts each object to its center. When an object is dropped from a certain height, under the influence of gravitational force it begins to fall to the surface of Earth. Such an object movement is called free fall. 2. What do you mean by acceleration due to gravity? When an object falls freely from a certain height towards the earth’s surface, its velocity keeps changing. This velocity change produces acceleration in the object known as acceleration due to gravity and denoted by ‘g’. The value of the acceleration due to gravity on Earth is, Exercise-10.3 Page: 138 1. What are the differences between the mass of an object and its weight? The differences between the mass of an object and its weight are tabulated below. Mass Weight Mass is the quantity of matter contained in the body. Weight is the force of gravity acting on the body. It is the measure of inertia of the body. It is the measure of gravity. It only has magnitude. 
It has magnitude as well as direction. Mass is a constant quantity. Weight is not a constant quantity. It is different at different places. Its SI unit is kilogram (kg). Its SI unit is the same as the SI unit of force, i.e., Newton (N). 2. Why is the weight of an object on the moon 1/6th its weight on the earth? The mass of the moon is 1/100 times and its radius 1/4 times that of earth. As a result, the gravitational attraction on the moon is about one-sixth when compared to earth. The moon’s gravitation force is determined by the mass and the size of the moon. Hence, the weight of an object on the moon is 1/6th its weight on the earth. The moon is far less massive than the Earth and has a different radius(R) as well. Exercise-10.4 Page: 141 1. Why is it difficult to hold a school bag having a strap made of a thin and strong string? It is tough to carry a school bag having a skinny strap because of the pressure that is being applied on the shoulders. The pressure is reciprocally proportional to the expanse on which the force acts. So, the smaller the surface area, the larger is going to be the pressure on the surface. In the case of a skinny strap, the contact expanse is quite small. Hence, the pressure exerted on the shoulder is extremely huge. 2. What do you mean by buoyancy? The upward force possessed by a liquid on an object that’s immersed in it is referred to as buoyancy. 3. Why does an object float or sink when placed on the surface of water? An object floats or sinks when placed on the surface of water because of two reasons. (i) If its density is greater than that of water, an object sinks in water. (ii) If its density is less than that of water, an object floats in water. Exercise-10.5 Page: 142 1. You find your mass to be 42 kg on a weighing machine. Is your mass more or less than 42 kg? A weighing machine measures the body weight and is calibrated to indicate the mass. If we stand on a weighing machine, the weight acts downwards while the upthrust due to air acts upwards. So our apparent weight becomes less than the true weight. This apparent weight is measured by the weighing machine and therefore the mass indicated is less than the actual mass. So our actual mass will be more than 42 kg. 2. You have a bag of cotton and an iron bar, each indicating a mass of 100 kg when measured on a weighing machine. In reality, one is heavier than other. Can you say which one is heavier and why? The correct answer is the cotton bag is heavier than an iron bar. The bag of cotton is heavier than the bar of iron. The cotton bag experiences a larger air thrust than the iron bar. Therefore, the weighing machine indicates less weight than its actual weight for the cotton bag. The reason is True weight = (apparent weight + up thrust) The cotton bag’s density is less than that of the iron bar, so the volume of the cotton bag is more compared to the iron bar. So the cotton bag experience more upthrust due to the presence of air. Therefore, in the presence of air, the cotton bag’s true weight is more compared to the true weight of the iron bar. Exercises-10.6 Page: 143 1. How does the force of gravitation between two objects change when the distance between them is reduced to half? Consider the Universal law of gravitation, According to that law, the force of attraction between two bodies is m[1] and m[2] are the masses of the two bodies. G is the gravitational constant. r is the distance between the two bodies. 
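The numerical evaluation of exercise 3 above ("is given by ...") appears to have been lost in extraction, most likely because it was an embedded equation image. As a hedged check rather than a reconstruction of the original worked answer, plugging the given values into F = GMm/R^2 gives roughly 9.8 N:

    G = 6.67e-11   # N m^2 / kg^2
    M = 6e24       # mass of the earth, kg
    m = 1.0        # mass of the object, kg
    R = 6.4e6      # radius of the earth, m

    F = G * M * m / R**2
    print(F)       # about 9.8 N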
Given that the distance is reduced to half then, r = 1/2 r F = 4F Therefore once the space between the objects is reduced to half, then the force of gravitation will increase by fourfold the first force. 2. Gravitational force acts on all objects in proportion to their masses. Why then does a heavy object not fall faster than a light object? All objects fall from the top with a constant acceleration called acceleration due to gravity (g). This is constant on earth and therefore the value of ‘g’ doesn’t depend on the mass of an object. Hence, heavier objects don’t fall quicker than light-weight objects provided there’s no air resistance. 3. What is the magnitude of the gravitational force between the earth and a 1 kg object on its surface? (Mass of the earth is 6 × 10^24 kg and radius of the earth is 6.4 × 10^6m.) From Newton’s law of gravitation, we know that the force of attraction between the bodies is given by 4. The earth and the moon are attracted to each other by gravitational force. Does the earth attract the moon with a force that is greater or smaller or the same as the force with which the moon attracts the earth? Why? The earth attracts the moon with a force same as the force with which the moon attracts the earth. However, these forces are in opposite directions. By universal law of gravitation, the force between moon and also the sun can be d = distance between the earth and moon. m[1] and m[2] = masses of earth and moon respectively. 5. If the moon attracts the earth, why does the earth not move towards the moon? According to the universal law of gravitation and Newton’s third law, we all know that the force of attraction between two objects is the same, however in the opposite directions. So the earth attracts the moon with a force same as the moon attracts the earth but in opposite directions. Since earth is larger in mass compared to that of the moon, it accelerates at a rate lesser than the acceleration rate of the moon towards the Earth. Therefore, for this reason the earth does not move towards the moon. 6. What happens to the force between two objects, if (i) The mass of one object is doubled? (ii) The distance between the objects is doubled and tripled? (iii) The masses of both objects are doubled? According to universal law of gravitation, the force between 2 objects (m[1] and m[2]) is proportional to their plenty and reciprocally proportional to the sq. of the distance(R) between them. If the mass is doubled for one object. F = 2F, so the force is also doubled. If the distance between the objects is doubled and tripled If it’s doubled F = (Gm[1]m[2])/(2R)^2 F = 1/4 (Gm[1]m[2])/R^2 F = F/4 Force thus becomes one-fourth of its initial force. Now, if it’s tripled F = (Gm[1]m[2])/(3R)^2 F = 1/9 (Gm[1]m[2])/R^2 F = F/9 Force thus becomes one-ninth of its initial force. If masses of both the objects are doubled, then F = 4F, Force will therefore be four times greater than its actual value. 7. What is the importance of universal law of gravitation? The universal law of gravitation explains many phenomena that were believed to be unconnected: (i) The motion of the moon round the earth (ii) The responsibility of gravity on the weight of the body which keeps us on the ground (iii) The tides because of the moon and therefore the Sun (iv) The motion of planets round the Sun 8. What is the acceleration of free fall? Acceleration due to gravity is the acceleration gained by an object due to gravitational force. 
On Earth, all bodies experience a downward force of gravity which Earth’s mass exerts on them. The Earth’s gravity is measured by the acceleration of the freely falling objects. At Earth’s surface, the acceleration of gravity is 9.8 ms^-2 and it is denoted by ‘g’. Thus, for every second an object is in free fall, its speed increases by about 9.8 metres per second. 9. What do we call the gravitational force between the earth and an object? The gravitation force between the earth and an object is called weight. Weight is equal to the product of acceleration due to the gravity and mass of the object. 10. Amit buys few grams of gold at the poles as per the instruction of one of his friends. He hands over the same when he meets him at the equator. Will the friend agree with the weight of gold bought? If not, why? [Hint: The value of g is greater at the poles than at the equator.] The weight of a body on the earth’s surface; W = mg (where m = mass of the body and g = acceleration due to gravity) The value of g is larger at poles when compared to the equator. So gold can weigh less at the equator as compared to the poles. Therefore, Amit’s friend won’t believe the load of the gold bought. 11. Why will a sheet of paper fall slower than one that is crumpled into a ball? A sheet of paper has a larger surface area when compared to a crumpled paper ball. A sheet of paper will face a lot of air resistance. Thus, a sheet of paper falls slower than the crumpled ball. 12. Gravitational force on the surface of the moon is only 1/6 as strong as gravitational force on the earth. What is the weight in newton’s of a 10 kg object on the moon and on the earth? Given data: Acceleration due to earth’s gravity = g[e] or g = 9.8 m/s^2 Object’s mass, m = 10 kg Acceleration due to moon gravity = g[m] Weight on the earth= W[e] Weight on the moon = W[m] Weight = mass x gravity g[m] = (1/6) g[e] (given) So W[m] = m g[m] = m x (1/6) g[e] W[m] = 10 x (1/6) x 9.8 = 16.34 N W[e] = m x g[e] = 10 x 9.8 W[e] = 98N 13. A ball is thrown vertically upwards with a velocity of 49 m/s. (i) The maximum height to which it rises, (ii) The total time it takes to return to the surface of the earth. Given data: Initial velocity u = 49 m/s Final speed v at maximum height = 0 Acceleration due to earth gravity g = -9.8 m/s^2 (thus negative as ball is thrown up). By third equation of motion, 2gH = v^2 – u^2 2 × (- 9.8) × H = 0 – (49)^2 – 19.6 H = – 2401 H = 122.5 m Total time T = Time to ascend (T[a]) + Time to descend (T[d]) v = u + gt 0 = 49 + (-9.8) x T[a] Ta = (49/9.8) = 5 s Also, T[d] = 5 s Therefore T = T[a] + T[d] T = 5 + 5 T = 10 s 14. A stone is released from the top of a tower of height 19.6 m. Calculate its final velocity just before touching the ground. Given data: Initial velocity u = 0 Tower height = total distance = 19.6m g = 9.8 m/s^2 Consider third equation of motion v^2 = u^2 + 2gs v^2 = 0 + 2 × 9.8 × 19.6 v^2 = 384.16 v = √(384.16) v = 19.6m/s 15. A stone is thrown vertically upward with an initial velocity of 40 m/s. Taking g = 10 m/s^2, find the maximum height reached by the stone. What is the net displacement and the total distance covered by the stone? Given data: Initial velocity u = 40m/s g = 10 m/s^2 Max height final velocity = 0 Consider third equation of motion v^2 = u^2 – 2gs [negative as the object goes up] 0 = (40)^2 – 2 x 10 x s s = (40 x 40) / 20 Maximum height s = 80m Total Distance = s + s = 80 + 80 Total Distance = 160m Total displacement = 0 (The first point is the same as the last point) 16. 
Calculate the force of gravitation between the earth and the Sun, given that the mass of the earth = 6 × 10^24 kg and of the Sun = 2 × 10^30 kg. The average distance between the two is 1.5 × 10^ 11 m. Given data: Mass of the sun m[s] = 2 × 10^30 kg Mass of the earth m[e] = 6 × 10^24 kg Gravitation constant G = 6.67 x 10^-11 N m^2/ kg^2 Average distance r = 1.5 × 10^11 m Consider Universal law of Gravitation 17. A stone is allowed to fall from the top of a tower 100 m high and at the same time another stone is projected vertically upwards from the ground with a velocity of 25 m/s. Calculate when and where the two stones will meet. Given data: (i) When the stone from the top of the tower is thrown, Initial velocity u’ = 0 Distance travelled = x Time taken = t (ii) When the stone is thrown upwards, Initial velocity u = 25 m/s Distance travelled = (100 – x) Time taken = t From equations (a) and (b) 5t^2 = 100 -25t + 5t^2 t = (100/25) = 4sec. After 4sec, two stones will meet From (a) x = 5t^2 = 5 x 4 x 4 = 80m. Putting the value of x in (100-x) = (100-80) = 20m. This means that after 4sec, 2 stones meet a distance of 20 m from the ground. 18. A ball thrown up vertically returns to the thrower after 6 s. Find (a) The velocity with which it was thrown up, (b) The maximum height it reaches, and (c) Its position after 4s. Given data: g = 10m/s^2 Total time T = 6sec T[a] = T[d] = 3sec (a) Final velocity at maximum height v = 0 From first equation of motion:- v = u – gt[a] u = v + gt[a] = 0 + 10 x 3 = 30m/s The velocity with which stone was thrown up is 30m/s. (b) From second equation of motion The maximum height stone reaches is 45m. (c) In 3sec, it reaches the maximum height. Distance travelled in another 1sec = s’ The distance travelled in another 1sec = 5m. Therefore in 4sec, the position of point p (45 – 5) = 40m from the ground. 19. In what direction does the buoyant force on an object immersed in a liquid act? The buoyant force on an object that is immersed in a liquid will be in a vertically upward direction. 20. Why a block of plastic when released under water come up to the surface of water? The density of plastic is lesser than that of water. Therefore, the force of buoyancy on plastic block will be greater than the weight of plastic block. Hence, the acceleration of plastic block is going to be in the upward direction. So, the plastic block comes up to the surface of water. 21. The volume of 50 g of a substance is 20 cm^3. If the density of water is 1 g cm^–3, will the substance float or sink? To find the Density of the substance the formula is Density = (Mass/Volume) Density = (50/20) = 2.5g/cm^3 Density of water = 1g/cm^3 Density of the substance is greater than density of water. So the substance will sink. 22. The volume of a 500 g sealed packet is 350 cm^3. Will the packet float or sink in water if the density of water is 1 g cm^–3? What will be the mass of the water displaced by this packet? Density of sealed packet = 500/350 = 1.42 g/cm^3 Density of sealed packet is greater than density of water Therefore the packet will sink. Considering Archimedes Principle, Displaced water volume = Force exerted on the sealed packet. Volume of water displaced = 350cm^3 Therefore displaced water mass = ρ x V = 1 × 350 Mass of displaced water = 350g. NCERT Solutions for Class 9 Science Chapter 10 – Gravitation Chapter 10 – Gravitation is a part of Unit 3 – Motion, Force and Work, which carries a total of 27 out of 100. 
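The numerical result of exercise 16 above also seems to have been dropped during extraction. As a hedged sketch using only the values stated in the problem, the Earth–Sun force comes out to roughly 3.6 × 10^22 N, and the same few lines confirm the 4-second meeting time obtained in exercise 17:

    G = 6.67e-11
    m_earth, m_sun, r = 6e24, 2e30, 1.5e11
    F = G * m_earth * m_sun / r**2
    print(F)                    # about 3.6e22 N

    # Exercise 17: stone dropped from 100 m meets stone thrown up at 25 m/s
    # 5*t**2 + (25*t - 5*t**2) = 100  ->  25*t = 100  ->  t = 4 s
    t = 100 / 25
    print(t, 5 * t**2)          # 4.0 s; the dropped stone has fallen 80 m,
                                # so the stones meet 20 m above the ground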
Usually, 2 or 3 questions do appear from this chapter every year, as previous trends have shown. The topics usually covered under this chapter are: • Universal Law of Gravitation and its Importance • Characteristics of Gravitational Forces • Concept of Free Fall • Difference between Gravitation Constant and Gravitational Acceleration NCERT Solutions for Class 9 Science Chapter 10 – Gravitation Often times, the term gravity and gravitation are used interchangeably and this is wrong. However, these two terms are related to each other but their implications are quite different. Academically, Chapter 10 – Gravitation is an important concept as it carries a considerable weightage in the CBSE exam. Therefore, ensure that all relevant concepts, formulas and diagrams are studied thoroughly. Explore how gravitation works at the molecular level, discover its applications and learn other related important concepts by exploring our NCERT Solutions. Key Features of NCERT Solutions for Class 9 Science Chapter 10 – Gravitation 1. Solutions provided in an easy-to-understand language 2. Qualified teachers and their vast experience helps formulate the solutions 3. Questions updated to the latest prescribed syllabus 4. A detailed breakdown of the most important exam questions 5. Access to additional learning resources like sample papers and previous year question papers Dropped Topics – Following Box Items: a. Brief Description of Isaac Newton, b. How did Newton guess the inverse– square rule? 10.7 Relative Density and Example 10.7. Frequently Asked Questions on NCERT Solutions for Class 9 Science Chapter 10 Does the NCERT Solutions for Class 9 Science Chapter 10 help students grasp the features of gravitational force? Chapter 10 of NCERT Solutions for Class 9 Science is an important part of the syllabus. Students need to focus more on the classroom sessions and try to understand what the teacher is explaining during class hours. The chapters must be studied unit-wise, and the students must clear their doubts instantly using the solutions available on BYJU’S. The new concepts are also explained in an interactive manner to help students grasp them without any difficulty. How to solve the problems based on gravitation quickly in Chapter 10 of NCERT Solutions for Class 9 Science? Regular practice is the main key to remembering the concepts efficiently. Students are advised to solve the problems present in the textbook and understand the method of answering them. If they possess any doubts regarding the problems, they can refer to the NCERT Solutions for Class 9 Science Chapter 10 from BYJU’S. The problems are solved in the most systematic way by keeping in mind the marks weightage allotted for each step in the CBSE exam. Why should I use the NCERT Solutions for Class 9 Science Chapter 10 PDF from BYJU’S? Science is one of the important subjects for Class 9 students as most of the concepts are continued in higher levels of education. For this purpose, obtaining a strong foundation of the fundamental concepts is important. Students are recommended to answer the textbook questions using the solutions PDF available on BYJU’S to gain a grip on the important concepts. The PDF of solutions can be downloaded and referred to understand the method of answering complex questions. Leave a Comment 1. Byjus is very best app please 🥺🥺 all children and adults download the app for solving doubts.
{"url":"http://investinnorthcyprus.org/index-103.html","timestamp":"2024-11-05T13:01:09Z","content_type":"text/html","content_length":"702299","record_id":"<urn:uuid:af235c7a-1ea4-4aeb-9185-57ee9f2e5177>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00269.warc.gz"}
Monte Carlo Pi approximation 3 dimension | Intel DevMesh | Subarna Falguni, 02/22/2021 Monte Carlo Pi approximation 3 dimension Monte Carlo Simulation is a broad category of computation that utilizes statistical analysis to reach a result. This sample uses the Monte Carlo procedure to estimate the value of pi. Project status: Under Development Intel Technologies: oneAPI, DPC++, DevCloud Overview / Usage The Monte Carlo procedure for estimating pi is easily parallelized, as each calculation of a random coordinate point can be considered a discrete work item. The computations involved with each work item are entirely independent of one another, except for summing the total number of points inscribed within the circle. This code sample demonstrates how to utilize the DPC++ reduction extension for this purpose. The code will attempt to execute on an available GPU and fall back to the system's CPU if a compatible GPU is not detected. The device used for the computation is displayed in the output, along with the elapsed time to complete the computation. A rendered image plot of the computation is also written to a file. Methodology / Approach I have used oneAPI for the simulation, and I have added a z-axis to make the simulation more complex. Output in DevCloud Calculating estimated value of pi... Running on Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz The estimated value of pi (N = 40000) is: 2.1006 Computation complete. The processing time was 1.52605 seconds. The simulation plot graph has been written to 'MonteCarloPi.bmp' End of output for job 799384.v-qsvr-1.aidevcloud Date: Mon 22 Feb 2021 02:24:20 PM PST Job Completed in 22 seconds. Technologies Used: oneAPI
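The project description above explains the 3-D extension only in words. As a rough illustration of the same idea — a plain Python sketch, not the project's oneAPI/DPC++ implementation — one can sample points uniformly in the unit cube and count how many fall inside the unit-sphere octant, for which the expected fraction is pi/6:

    import random

    def estimate_pi_3d(n: int) -> float:
        """Monte Carlo estimate of pi from points in the unit cube.
        The fraction landing inside the unit-sphere octant tends to pi/6."""
        inside = 0
        for _ in range(n):
            x, y, z = random.random(), random.random(), random.random()
            if x * x + y * y + z * z <= 1.0:
                inside += 1
        return 6.0 * inside / n

    print(estimate_pi_3d(40_000))   # hovers around 3.14

Note that this simple estimator converges to pi, whereas the sample output above reports 2.1006 for N = 40000, so the project presumably uses a different normalization or counting rule; the sketch is only meant to show the structure of the 3-D sampling.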
{"url":"https://devmesh.intel.com/projects/monte-carlo-pi-approximation-3-dimension","timestamp":"2024-11-09T19:56:23Z","content_type":"text/html","content_length":"31060","record_id":"<urn:uuid:efc9d23b-0688-4a63-9d1e-c1c86b7417ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00531.warc.gz"}
sc_HSOSSCF - Linux Manuals (3) sc_HSOSSCF (3) - Linux Manuals sc::HSOSSCF - The HSOSSCF class is a base for classes implementing a self-consistent procedure for high-spin open-shell molecules. #include <hsosscf.h> Inherits sc::SCF. Inherited by sc::HSOSHF, and sc::HSOSKS. Public Member Functions HSOSSCF (StateIn &) HSOSSCF (const Ref< KeyVal > &) The KeyVal constructor. void save_data_state (StateOut &) Save the base classes (with save_data_state) and the members in the same order that the StateIn CTOR initializes them. void print (std::ostream &o=ExEnv::out0()) const Print information about the object. double occupation (int irrep, int vectornum) Returns the occupation. double alpha_occupation (int irrep, int vectornum) Returns the alpha occupation. double beta_occupation (int irrep, int vectornum) Returns the beta occupation. int n_fock_matrices () const RefSymmSCMatrix fock (int i) Returns closed-shell (i==0) or open-shell (i==1) Fock matrix in AO basis (excluding XC contribution in KS DFT). RefSymmSCMatrix effective_fock () Returns effective Fock matrix in MO basis (including XC contribution for KS DFT). void symmetry_changed () Call this if you have changed the molecular symmetry of the molecule contained by this MolecularEnergy. int spin_polarized () Return 1 if the alpha density is not equal to the beta density. RefSymmSCMatrix density () Returns the SO density. RefSymmSCMatrix alpha_density () Return alpha electron densities in the SO basis. RefSymmSCMatrix beta_density () Return beta electron densities in the SO basis. Protected Member Functions void set_occupations (const RefDiagSCMatrix &evals) void init_vector () void done_vector () void reset_density () double new_density () double scf_energy () Ref< SCExtrapData > extrap_data () void init_gradient () void done_gradient () RefSymmSCMatrix lagrangian () RefSymmSCMatrix gradient_density () void init_hessian () void done_hessian () void two_body_deriv_hf (double *grad, double exchange_fraction) Protected Attributes Ref< PointGroup > most_recent_pg_ int user_occupations_ int tndocc_ int tnsocc_ int nirrep_ int * initial_ndocc_ int * initial_nsocc_ int * ndocc_ int * nsocc_ ResultRefSymmSCMatrix cl_fock_ ResultRefSymmSCMatrix op_fock_ RefSymmSCMatrix cl_dens_ RefSymmSCMatrix cl_dens_diff_ RefSymmSCMatrix cl_gmat_ RefSymmSCMatrix op_dens_ RefSymmSCMatrix op_dens_diff_ RefSymmSCMatrix op_gmat_ RefSymmSCMatrix cl_hcore_ Detailed Description The HSOSSCF class is a base for classes implementing a self-consistent procedure for high-spin open-shell molecules. Constructor & Destructor Documentation sc::HSOSSCF::HSOSSCF (const Ref< KeyVal > &) The KeyVal constructor. .IP "total_charge" 1c This floating point number gives the total charge, $c$, of the molecule. The default is 0. This integer gives the total number of singly occupied orbitals, $n_mathrm{socc}$. If this is not given, then multiplicity will be read. This integer gives the multiplicity, $m$, of the molecule. The number of singly occupied orbitals is then $n_mathrm{socc} = m - 1$. If neither nsocc nor multiplicity is given, then if, in consideration of total_charge, the number of electrons is even, the default $n_mathrm{socc}$ is 2. Otherwise, it is 1. This integer gives the total number of doubly occupied orbitals $n_mathrm{docc}$. The default $n_mathrm{docc} = (c - n_mathrm{socc})/2$. socc This vector of integers gives the total number of singly occupied orbitals of each irreducible representation. 
By default, the $n_mathrm{socc}$ singly occupied orbitals will be distributed according to orbital eigenvalues. If socc is given, then docc must be given and they override nsocc, multiplicity, ndocc, and total_charge. docc This vector of integers gives the total number of doubly occupied orbitals of each irreducible representation. By default, the $n_mathrm{docc}$ singly occupied orbitals will be distributed according to orbital eigenvalues. If docc is given, then socc must be given and they override nsocc, multiplicity, ndocc, and total_charge. This has the same meaning as in the parent class, SCF; however, the default value is 100. This has the same meaning as in the parent class, SCF; however, the default value is 1.0. Member Function Documentation double sc::HSOSSCF::alpha_occupation (int irrep, int vectornum) [virtual] Returns the alpha occupation. The irreducible representation and the vector number within that representation are given as arguments. Reimplemented from sc::OneBodyWavefunction. double sc::HSOSSCF::beta_occupation (int irrep, int vectornum) [virtual] Returns the beta occupation. The irreducible representation and the vector number within that representation are given as arguments. Reimplemented from sc::OneBodyWavefunction. RefSymmSCMatrix sc::HSOSSCF::effective_fock () [virtual] Returns effective Fock matrix in MO basis (including XC contribution for KS DFT). Implements sc::SCF. RefSymmSCMatrix sc::HSOSSCF::fock (int i) [virtual] Returns closed-shell (i==0) or open-shell (i==1) Fock matrix in AO basis (excluding XC contribution in KS DFT). Use effective_fock() if you want the full KS Fock matrix. double sc::HSOSSCF::occupation (int irrep, int vectornum) [virtual] Returns the occupation. The irreducible representation and the vector number within that representation are given as arguments. Implements sc::OneBodyWavefunction. void sc::HSOSSCF::save_data_state (StateOut &) [virtual] Save the base classes (with save_data_state) and the members in the same order that the StateIn CTOR initializes them. This must be implemented by the derived class if the class has data. Reimplemented from sc::SCF. Reimplemented in sc::HSOSKS, and sc::HSOSHF. void sc::HSOSSCF::symmetry_changed () [virtual] Call this if you have changed the molecular symmetry of the molecule contained by this MolecularEnergy. Reimplemented from sc::SCF. Generated automatically by Doxygen for MPQC from the source code.
{"url":"https://www.systutorials.com/docs/linux/man/docs/linux/man/docs/linux/man/3-sc_HSOSSCF/","timestamp":"2024-11-12T16:59:31Z","content_type":"text/html","content_length":"14462","record_id":"<urn:uuid:195b20fc-4cdd-47e5-a02e-e828aedfe1b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00846.warc.gz"}
(PDF) Reliability Assessment of Passive Safety Systems for Nuclear Energy Applications: State-of-the-Art and Open Issues Reliability Assessment of Passive Safety Systems for Nuclear Energy Applications: State-of-the-Art and Open Issues Francesco Di Maio 1, * , Nicola Pedroni 2, Barnabás Tóth 3, Luciano Burgazzi 4and Enrico Zio 1,5 Citation: Di Maio, F.; Pedroni, N.; Tóth, B.; Burgazzi, L.; Zio, E. Reliability Assessment of Passive Safety Systems for Nuclear Energy Applications: State-of-the-Art and Open Issues. Energies 2021,14, 4688. Academic Editors: Jong-Il Yun and Hiroshi Sekimoto Received: 3 May 2021 Accepted: 19 July 2021 Published: 2 August 2021 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affil- Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// 1Energy Department, Politecnico di Milano, 20156 Milan, Italy; enrico.zio@polimi.it 2Dipartimento di Energia, Politecnico di Torino, 10121 Turin, Italy; nicola.pedroni@polito.it 3NUBIKI Nuclear Safety Research Institute Ltd, 1121 Budapest, Hungary; tothb@nubiki.hu 4ENEA Agenzia Nazionale per le Nuove Tecnologie, L’energia e lo Sviluppo Economico Sostenibile, 40121 Bologna, Italy; luciano.burgazzi@enea.it Centre for Research on Risk and Crises (CRC), MINES ParisTech, PSL Research University, 75006 Paris, France *Correspondence: francesco.dimaio@polimi.it Passive systems are fundamental for the safe development of Nuclear Power Plant (NPP) technology. The accurate assessment of their reliability is crucial for their use in the nuclear industry. In this paper, we present a review of the approaches and procedures for the reliability assessment of passive systems. We complete the work by discussing the pending open issues, in particular with respect to the need of novel sensitivity analysis methods, the role of empirical modelling and the integration of passive safety systems assessment in the (static/dynamic) Probabilistic Safety Assessment (PSA) framework. reliability assessment; Probabilistic Safety Assessment; passive safety systems; nuclear power plants 1. Introduction Passive systems are in use since the dawning of nuclear power technology. They have, then, received a renewal of interest after the major nuclear accidents in 1979, 1986 and 2011. However, in all, passive systems design has been the focus of a large number of researches and applications that have not led to a common understanding of the benefits and cons of passive safety systems implementation. On the contrary, a common understanding must be laid down in particular with respect to the reliability of passive systems, for demonstrating their qualification and use- fulness for nuclear safety. Specifically, the large uncertainty associated with inadequacies of the design codes used to simulate the complex passive systems physical behavior must be addressed for the reliability assessment, because it may lead to hidden large unreliability. In comparison to active systems, passive safety systems benefit from less dependence on external energy sources, no need for operator actions to activate them and reduced costs, including easier maintenance. Recognition of those advantages is shared among most stakeholders in the nuclear industry, as demonstrated by the number of nuclear reactor designs that make use of passive safety systems. 
Yet, it is still necessary to precisely assess and demonstrate the reliability of passive safety systems and the capacity to perform and complete the expected functions. In simple and direct words, passive safety systems may contribute to improving the safety of Nuclear Power Plants (NPPs), provided that their performance-based design and operation are demonstrated by tailored deterministic and reliability assessment methods, approaches and data (e.g., experimental databases) available to industry and regulators [1–5]. With reference to the passive natural circulation of fluid for emergency cooling, the complex set of physical conditions that occurs in the passive safety systems, where no ex- ternal sources of mechanical energy for the fluid motion are involved, has led the designers Energies 2021,14, 4688. https://doi.org/10.3390/en14154688 https://www.mdpi.com/journal/energies Energies 2021,14, 4688 2 of 17 of the present-generation reactors to position the main heat sink (i.e., the steam generators for pressurized water reactors and feed-water inlet for boiling water reactors) at a higher elevation with respect to the heat source location (i.e., the core). By so doing, should the forced circulation driven by centrifugal pumps become unavailable, the removal of the decay heat produced by the core is still allowed [ ]. For their reliability assessment, mathe- matical models are typically built [ ] to describe the mathematical relationship between the passive system physical parameters influencing the NPP behavior, then translated into detailed mechanistic Thermal-Hydraulic (T–H) computer codes for simulating the effects of various operational transients and accident scenarios on the system [7–15]. In practice, characteristics of the system under analysis are only partially captured and, therefore, simulated by the associated T–H code. Moreover, the uncertainties affecting the behavior of passive systems and its modeling are usually much larger than those associated with active systems, challenging the passive systems reliability assessment [ ]. This is due to [ ]: (i) stochastic transitions of intrinsically random phenomena occurring (such as component degradation and failures), and (ii) the lack of experimental results, that mine the completeness of the knowledge about some of that same phenomena [ Such uncertainties translate into uncertainty of the model output uncertainty that, for the sake of a realistic reliability assessment, must be estimated [22,26–28]. In this paper, we review the methodological solutions to the T–H passive safety systems reliability assessment. In particular, the approaches for the reliability assessment of nuclear passive systems are described in Section 2: independent failure modes, hardware failure modes, functional failure modes approaches are described in Sections 2.1–2.3, respectively. In Section 3, the advanced Monte Carlo simulation approaches are introduced. In Section 4, the existing coordinated procedures for reliability evaluation are presented. Open issues, along with the methods proposed in the literature to address these issues, are discussed in Section 5, that include (i) the identification of the most contributing model hypotheses and parameters to the output uncertainty (Section 5.1), (ii) the empirical regression modelling for reducing computational time (Section 5.2), and (iii) the integration of reliability assessment of passive systems into the current Probabilistic Safety Assessment (PSA) (Section 5.3). 2. 
2. Approaches for the Reliability Assessment of Passive Systems
In general, the reliability of passive systems depends on:
• systems/components reliability;
• physical phenomena reliability, which accounts for the physical boundary conditions and mechanisms.

This means that, to guarantee a large passive system reliability:
• well-engineered components (with at least the same reliability as active systems) are to be selected;
• the physical principles (e.g., gravity and density difference in T–H passive systems) and the effects of the surrounding/environmental conditions in which they occur and affect the parameter evolution during the accident development (e.g., flow rate and exchanged heat flux in T–H passive systems) are to be fully understood and captured.

Both aspects should be considered within a consistent approach to passive system reliability assessment. In what follows, a summary of three different approaches is provided for passive system performance assessment upon onset of system operation.

2.1. The Independent Failure Modes Approach
The independent failure modes approach entails [ ]: (i) identifying the failure modes leading to the unfulfillment of the passive system function, and (ii) evaluating the system failure probability as the probability of occurrence of the failure modes. Typically, failure modes are identified from the application of a Failure Modes and Effects Analysis (FMEA) procedure [29]. Conventional probabilistic failure process models commonly used for hardware components (i.e., the exponential distribution, e^(−λt), where λ is the failure rate and t is time) are not applicable to model physical process failures; in this case, each failure is characterized by specific critical physical parameter distributions and a defined failure mode, which implies, for each of the latter, the definition of the probability distributions and failure ranges of the critical physical parameters (for example, for a T–H passive system, these may include non-condensable gas build-up, undetected leakage, heat exchange reduction due to surface oxidation, piping layout, thermal insulation degradation, etc.). Eventually, to evaluate the probability of the event of failure of the system, Pe_t, the probabilities of the different failure mode events, Pe_i, i = 1, ..., n, are combined according to a series logic, assuming mutually non-exclusive independent events [29]:

Pe_t = 1.0 − [(1.0 − Pe_1) · (1.0 − Pe_2) · ... · (1.0 − Pe_n)]   (1)

Since this approach assesses the system failure probability assuming that a single failure mode event is sufficient to lose the system function, the resulting value of the system failure probability can be conservatively assumed as an upper bound for the unreliability of the system [29].
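A minimal numerical illustration of Eq. (1) is sketched below; the individual failure-mode probabilities are purely hypothetical placeholders, not values taken from the cited studies.

```python
import numpy as np

def series_failure_probability(p_modes):
    """Combine independent, mutually non-exclusive failure-mode probabilities
    according to the series logic of Eq. (1): the system fails if at least one
    failure mode event occurs."""
    p_modes = np.asarray(p_modes, dtype=float)
    return 1.0 - np.prod(1.0 - p_modes)

# Hypothetical failure-mode probabilities for a T-H passive system (e.g.,
# non-condensable gas build-up, undetected leakage, surface oxidation).
p_failure_modes = [1.0e-3, 5.0e-4, 2.0e-4]
p_system = series_failure_probability(p_failure_modes)
print(f"Upper bound on system unreliability: {p_system:.3e}")
```

For small probabilities the result is close to the simple sum of the individual probabilities (the rare-event approximation), consistent with its interpretation as a conservative upper bound.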
2.2. The Hardware Failure Modes Approach
In the hardware failure modes approach [ ], the unreliability of the passive system is obtained by accounting for the probabilities of occurrence of the hardware failures that degrade the physical mechanisms which the passive system relies upon for its function. For example, with reference to a typical Isolation Condenser [ ], natural circulation failure due to a high concentration of non-condensable gas is modelled in terms of the probability of occurrence of the failure of the vent lines to purge the gases [ ]; natural circulation failure because of insufficient heat transfer to an external source is assessed through two possible hardware failure modes: (1) insufficient water in the pool and make-up valve malfunctioning, and (2) degraded heat transfer conditions due to excessive fouling of the heat exchanger pipes. Thus, the probabilities of degraded physical mechanisms are expressed in terms of the unreliability of the components whose failures degrade the function of the passive system. Some critical aspects of this approach are: (i) the lack of completeness of the possible failure modes and corresponding hardware failures, (ii) the fact that failures due to unfavourable initial or boundary conditions are neglected, and (iii) the fact that the fault tree models typically adopted to represent the hardware failure modes may inappropriately replace the complex T–H code behavior and predict interactions among the physical phenomena of the system [3].

2.3. The Functional Failure Approach
The functional failure approach is based on the concept of functional failure [ ]; in the context of passive systems, this is defined as the probability of failing to achieve a safety function (i.e., the probability of a given safety variable, the load, exceeding a safety threshold, the capacity). To model uncertainties, probability distributions are assigned, mainly by subjective/engineering judgments, to both the safety threshold (for example, a minimum value of the water mass flow required) and the safety variable (i.e., the water mass flow circulating through the system).

3. Advanced Monte Carlo Simulation Approach
The functional failure approach (Section 2) relies on the deterministic T–H computer code model (mathematically represented by the nonlinear function f(·)) and on the Monte Carlo (MC) propagation of the uncertainties in the code inputs x (i.e., the Probability Density Functions (PDFs) q(x)) to the outputs y = f(x), with respect to which the failure event is defined according to given safety thresholds. The propagation consists in repeating the T–H code computer runs (or simulations) for different sets of the uncertain input values x, sampled from their PDF q(x) [ ]. The main strength of MC simulation is that it does not force the analyst to resort to simplifying approximations, since it does not suffer from any T–H model complexity, and it is, therefore, expected to provide the most realistic passive system assessment. On the other hand, it is challenged by the long calculations needed to run the detailed, mechanistic T–H code (one run for each batch of sampled input values) and by the computational effort, which increases with decreasing failure probability [ ]; incidentally, this probability is particularly small (e.g., less than 10 ) for the functional failure of T–H passive safety systems [5,28].
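To make the functional failure scheme concrete, the following sketch propagates illustrative input distributions through a cheap analytic stand-in for the T–H code and estimates the failure probability by crude Monte Carlo; the model form, the input PDFs, the threshold and the sample size are all assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def th_code_surrogate(x):
    """Stand-in for the mechanistic T-H code y = f(x): a simple analytic
    expression returning the delivered natural-circulation mass flow rate."""
    power, pressure, loss_coeff = x
    return 25.0 * np.sqrt(power / 10.0) * (pressure / 7.0) / (1.0 + loss_coeff)

def sample_inputs(n):
    """Illustrative PDFs q(x) of the uncertain inputs."""
    power = rng.normal(10.0, 1.0, n)         # decay power [MW]
    pressure = rng.normal(7.0, 0.5, n)       # system pressure [MPa]
    loss_coeff = rng.lognormal(0.0, 0.3, n)  # form-loss coefficient [-]
    return np.column_stack([power, pressure, loss_coeff])

threshold = 8.0   # minimum mass flow required by the safety function [kg/s]
n_samples = 100_000
outputs = np.array([th_code_surrogate(x) for x in sample_inputs(n_samples)])

# Functional failure: the delivered flow (capacity) falls below the required flow (load).
p_failure = np.mean(outputs < threshold)
std_err = np.sqrt(p_failure * (1.0 - p_failure) / n_samples)
print(f"Estimated functional failure probability: {p_failure:.3e} +/- {std_err:.1e}")
```

In a real application each call to the model would be a multi-hour code run, which is precisely why the advanced sampling schemes discussed next are needed.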
To reduce the number of T–H code runs and the computational time as much as possible, the alternatives to be considered are fast-running surrogate regression models (also called response surfaces or metamodels) and advanced Monte Carlo simulation methods [ ]. Fast-running surrogate regression models mimic the response of the original T–H model code, circumventing the long computing time (see Section 5.2 for details): to name a few, polynomial Response Surfaces (RSs) [ ], polynomial chaos expansions [ ], stochastic collocations [ ], Artificial Neural Networks (ANNs) [ ], Support Vector Machines (SVMs) [48] and kriging [49]. Advanced Monte Carlo Simulation allows the number of code runs to be limited while guaranteeing, at the same time, robust estimations [ ]. The present Section focuses on this latter class of methods.

Among these, Stratified Sampling consists in calculating the probability of each of the non-overlapping subregions (i.e., strata) of the sample space; by randomly sampling a fixed number of outcomes from each stratum (i.e., the stratified samples), the coverage of the sample space is ensured [ ]. However, the definition of the strata and the calculation of the associated probabilities is a major challenge [53]. Latin Hypercube Sampling (LHS), commonly used in PSA [ ] and in reliability assessment problems [ ], is a compromise between standard MCS and Stratified Sampling, but it does not sufficiently outperform standard MCS in the estimation of small failure probabilities [ ], as is the case in passive safety system reliability assessment.

Subset Simulation (SS) [ ] and Line Sampling (LS) [ ] have been proposed as advanced MCS methods to tackle the typical multidimensional load-capacity problems of structural reliability assessment: therefore, they can address the problem of the functional failure probability assessment of T–H passive systems [22,47,66]. In the SS approach, the problem is tackled by performing simulations of sequences of (more) frequent events in their conditional probability spaces: finally, the product of the conditional probabilities of such more frequent events is taken as the functional failure probability; Markov Chain Monte Carlo (MCMC) simulations are used to generate the conditional samples [ ], which, by sequentially populating the intermediate conditional regions, reach the final functional failure region. In the LS method, the failure domain of the high-dimensional problem under analysis is explored by means of lines, instead of random points [ ]. One-dimensional problems are solved along an "important direction" that optimally points towards the failure domain, in place of the high-dimensional problem [ ]. The approach outperforms standard MCS in a wide range of engineering applications [ ] and, ideally, allows the variance of the failure probability estimator to be reduced to zero if the "important direction" is perpendicular to the almost linear boundaries of the failure domain [64].
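The sketch below illustrates the SS idea described above on a two-dimensional standard normal problem with an analytic limit-state function whose exact failure probability is about 3.4 × 10⁻⁶; it uses a plain Metropolis random walk for the conditional sampling, so it is a didactic simplification rather than a reference implementation of the cited algorithmic variants.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(u):
    """Limit-state function in standard normal space: g(u) < 0 is functional failure.
    Here (u1 + u2)/sqrt(2) ~ N(0, 1), so P[g < 0] = Phi(-4.5) ~ 3.4e-6."""
    return 4.5 - (u[:, 0] + u[:, 1]) / np.sqrt(2.0)

def subset_simulation(g, dim=2, n_per_level=2000, p0=0.1, max_levels=10):
    """Estimate P[g(U) < 0] as a product of conditional probabilities ~p0,
    repopulating each conditional level with Metropolis random-walk chains."""
    u = rng.standard_normal((n_per_level, dim))
    y = g(u)
    p_f = 1.0
    for _ in range(max_levels):
        n_seeds = int(p0 * n_per_level)
        order = np.argsort(y)
        b = y[order[n_seeds - 1]]            # intermediate threshold (p0-quantile)
        if b <= 0.0:                         # failure region already reached
            return p_f * np.mean(y < 0.0)
        p_f *= p0
        cu, cy = u[order[:n_seeds]].copy(), y[order[:n_seeds]].copy()
        chains_u, chains_y = [cu.copy()], [cy.copy()]
        for _ in range(n_per_level // n_seeds - 1):
            cand = cu + rng.normal(0.0, 1.0, cu.shape)
            # Metropolis acceptance for the standard normal target density,
            # restricted to the current conditional level {g <= b}.
            log_alpha = 0.5 * (np.sum(cu**2, axis=1) - np.sum(cand**2, axis=1))
            accept = np.log(rng.uniform(size=len(cu))) < log_alpha
            cand_y = g(cand)
            accept &= cand_y <= b
            cu = np.where(accept[:, None], cand, cu)
            cy = np.where(accept, cand_y, cy)
            chains_u.append(cu.copy())
            chains_y.append(cy.copy())
        u, y = np.vstack(chains_u), np.concatenate(chains_y)
    return p_f * np.mean(y < 0.0)

print(f"Subset Simulation estimate: {subset_simulation(g):.2e} (exact ~ 3.4e-06)")
```

With roughly 10,000–15,000 model evaluations this reaches a probability level that crude Monte Carlo could only resolve with millions of runs, which is the practical motivation for the method.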
Finally, the most frequently adopted advanced MCS method is Importance Sampling (IS): in IS, the original PDF q(x) is replaced by an Importance Sampling Density (ISD) g(x) biased towards the MC samples that lead to outputs close to the failure region, so as to artificially increase the frequency of the (rare) failure event. To approximate the ideal ISD g*(x) (i.e., the one that would make the standard deviation of the MC estimator equal to zero), the Adaptive Kernel (AK) [ ], Cross-Entropy (CE) [ ], Variance Minimization (VM) [ ] and Markov Chain Monte Carlo-Importance Sampling (MCMC-IS) [78] methods have been proposed.

Adaptive Metamodel-based Subset Importance Sampling (AM-SIS) is a recently proposed method [ ] which combines SS and metamodels (for example, Artificial Neural Networks, ANNs) within an adaptive IS scheme, as follows [78,79]:
(1) Subset Simulation (SS) is used to create an input batch from the ideal, zero-variance ISD g*(x), relying on an ANN that (i) is adaptively refined in the proximity of the failure region by means of the samples iteratively produced by SS, and (ii) substitutes the expensive T–H code f(x);
(2) the g*(x) built at step (1) is used to perform IS and calculate the probability of failure of the T–H passive system.

Notice that the idea of integrating metamodels within efficient MCS schemes has been widely proposed in the literature: see, e.g., [80–88].
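As a minimal illustration of the IS idea (not of the adaptive AM-SIS scheme itself), the snippet below estimates the same rare failure probability used in the Subset Simulation sketch by shifting a Gaussian sampling density along an important direction that is assumed to be known, and reweighting the samples by the likelihood ratio q(x)/g(x).

```python
import numpy as np

rng = np.random.default_rng(1)

def g(u):
    """Same illustrative limit-state function: g(u) < 0 is functional failure."""
    return 4.5 - (u[:, 0] + u[:, 1]) / np.sqrt(2.0)

n = 20_000
# Importance Sampling Density: the standard normal q(u) shifted towards the
# failure region along the (assumed known) important direction (1, 1)/sqrt(2).
shift = (4.5 / np.sqrt(2.0)) * np.ones(2)
u = rng.standard_normal((n, 2)) + shift

# Likelihood ratio w(u) = q(u)/g_IS(u) for a unit-variance Gaussian mean shift.
log_w = -u @ shift + 0.5 * shift @ shift
w = np.exp(log_w)

fail = (g(u) < 0.0).astype(float)
p_hat = np.mean(fail * w)
std_err = np.std(fail * w) / np.sqrt(n)
print(f"Importance Sampling estimate: {p_hat:.2e} +/- {std_err:.1e} (exact ~ 3.4e-06)")
```

The adaptive schemes cited above (AK, CE, VM, AM-SIS) essentially construct such a biasing density automatically instead of assuming the important direction in advance.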
4. Frameworks for the Reliability Assessment of Passive Systems
A first framework for the reliability assessment of passive systems is the REPAS (Reliability Evaluation of Passive Systems) methodology [ ], later continued in the EU (European Union) project RMPS (Reliability Methods for Passive Systems) (https://cordis.europa.eu/project/id/FIKS-CT-2000-00073, accessed on 3 May 2021) [ ]. The RMPS methodology is aimed at: (1) the identification and quantification of the sources of uncertainties (combining often vague and imprecise expert judgments with the typically scarce experimental data available) and the determination of the critical parameters; (2) the propagation of the uncertainties through thermal-hydraulic (T–H) codes and the assessment of the passive system unreliability; and (3) the introduction of the passive system unreliability into the accident sequences for probabilistic risk analysis. The RMPS methodology has been successfully applied to passive systems providing natural circulation of the coolant flow in different types of reactors (BWR, PWR and VVER). A complete example of application, concerning the passive residual heat removal system of a CAREM (Central Argentina Reactor de Elementos Modulares) reactor, is presented in [ ]. Recently, the RMPS methodology has been applied by ANL (Argonne National Laboratory) in studies for the evaluation of the reliability of passive systems designed for Gen IV sodium fast reactors: see, for instance, [90].

In the APSRA (Assessment of Passive System ReliAbility) methodology [ ], a failure hyper-surface is generated in the space of the critical physical parameters by considering their deviations from the nominal values, after a root-cause analysis is performed to identify the causes of deviation of these parameters, assuming that the deviation of such physical parameters occurs only due to failures of mechanical components. Then, the probability of failure of the passive system is evaluated from the failure probability of these mechanical components. Examples of the application of APSRA (and of its evolution APSRA+) can be found in [91,92].

The two frameworks, RMPS and APSRA, have certain features in common, as well as distinctive characteristics. To name a few similarities, both methodologies use Best Estimate (BE) codes to estimate the T–H behavior of the passive systems and integrate both probabilistic and deterministic analyses to assess the reliability of the systems; with respect to the differences, while the RMPS framework proceeds with the identification and quantification of the parameter uncertainties using probability distributions and propagates their realizations via a T–H code or a response surface, the APSRA methodology assesses the causes of deviation of the parameters from their nominal values.

5. Open Issues
In the following Sections, the open issues regarding the methods and frameworks for the reliability assessment of passive safety systems and their application are discussed, in particular with respect to the need for novel sensitivity analysis methods, the role of empirical regression modelling and the integration of passive systems in PSA.

5.1. Sensitivity Analysis Methods
Safety margins are verified in practice by resorting to T–H codes [41,93]. Recently, these calculations have been performed with BE T–H codes, which provide realistic results and avoid over-conservatism [ ], and also require the demanding identification and quantification of the uncertainties in the code, which in turn calls for a large number of simulations [94]. To tackle this challenge, various approaches to Uncertainty Analysis (UA) have been developed, e.g., Code Scaling, Applicability and Uncertainty (CSAU) [ ], the Automated Statistical Treatment of Uncertainty Method (ASTRUM) and the Integrated Methodology for Thermal Hydraulics Uncertainty Analysis (IMTHUA) [ ]. In all the mentioned approaches, the assumption is that the input variables follow statistical distributions: this implies that, if N input sets are sampled from these distributions and fed to the BE code, the corresponding N output values can be calculated, propagating the variability of the input variables onto the output. To speed up the computation and substitute the T–H code with a simpler and faster surrogate, a combination of Order Statistics (OS) [ ] and Artificial Neural Networks [ ] has been proposed. However, this latter approach does not allow one to completely characterize the PDF of the output variable, but only some of its percentiles [5].

Sensitivity Analysis (SA) techniques can be categorized as Local, Regional and Global [ ]. The local approaches provide locally valid information, since they analyze the effect on the output of small variations around fixed values of the input parameters. Regional approaches analyze the effects on the output of partial ranges of the input distributions. Global approaches analyze the contribution of the entire distribution of the inputs to the output variability. This makes the global approaches more suitable when models are non-linear and non-monotone, with respect to which local and regional approaches may fail. The higher capabilities of the global approaches come at the price of larger computational costs. Examples of global methods are the Fourier Amplitude Sensitivity Test (FAST) [ ], the Response Surface Methodology (RSM) [43] and variance decomposition methods [26].
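To fix ideas about variance decomposition methods, the snippet below computes first-order and total-order Sobol-type indices with the standard pick-freeze estimators on a cheap analytic function standing in for the T–H code; the model and the input distributions are invented solely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Cheap stand-in for a T-H code output (e.g., a peak process variable)."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 50_000, 3
A = rng.standard_normal((n, d))     # two independent input sample matrices
B = rng.standard_normal((n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # "pick-freeze": swap only the i-th column
    yABi = model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y          # first-order index
    ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # total-order index (Jansen)
    print(f"x{i + 1}: first-order S = {S_i:.3f}, total-order ST = {ST_i:.3f}")
```

The price of such global indices is the N(d + 2) model runs they require, which is exactly the computational burden that the distribution-based approach described next aims to reduce.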
In this Section, we illustrate a relatively recent method for global SA, called the distribution-based approach [ ]. In practice, the PDF of the output variable is reconstructed, with fewer runs than variance decomposition-based methods, for conducting an SA. Polynomial Chaos Expansion (PCE) methods have been used to this end [ ], although a multimodal output variable distribution cannot be practically modeled by PCE (because, in order to reconstruct the PDF accurately enough, the order of the expansion and the computational cost become too large). In such cases, Finite Mixture Models (FMMs) [ ] can overcome the problem by naturally "clustering" the T–H code output (e.g., subdividing the inputs leading to output values with large, low or insufficient safety margins) into the probabilistic models (i.e., PDFs) composing the mixture. The advantages are (i) the availability of the analytical PDF of the model output and (ii) a lower computational cost than classical global SA methods.

To further reduce the computational cost related to the T–H code runs, a framework based on FMMs has been proposed in [ ]. The natural clustering performed by the FMM on the T–H code output [ ] (where one cluster corresponds to one Gaussian model of the mixture) is exploited to develop an ensemble of three SA methods that perform differently depending on the data at hand: input saliency [ ], Hellinger distance [ ] and Kullback–Leibler divergence [ ]. The advantage offered by the diversity of the methods is the possibility of overcoming possible errors of the individual methods that may occur due to the limited quantity of data. The applicability of the proposed framework to the reliability assessment of passive safety systems is challenging, because one must consider the uncertainties affecting the functional performance of the passive systems [1,16,66,92,103]. In [ ], the application of the framework to a Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000 during a Loss Of Coolant Accident (LOCA) is shown. The combination of multiple sensitivity rankings is shown to increase the robustness of the results, without any additional T–H code run. The work in [ ] has been extended in [ ] by considering three global SA methods (Input Saliency (IS), Hellinger Distance (HD) and Kullback–Leibler Divergence (KLD)) and the Bootstrap [ ], which (artificially, but without information bias) increases the amount of data available. The framework has been applied to a real case study of a Large Break Loss of Coolant Accident (LBLOCA) in the Zion 1 NPP [106], simulated by the TRACE code.

5.2. Role of Empirical Regression Modelling
To address the computational problem related to running the detailed, mechanistic T–H system code, either efficient sampling techniques can be adopted, as described in Section 3, or nonparametric order statistics [ ] can be employed (especially if only particular statistics of the code outputs, e.g., the 95th percentile, are needed [ ]), or fast-running surrogate regression models can be implemented to mimic the long-running T–H model. In general terms, the construction (i.e., training) of such regression models entails using a (reduced) number (e.g., 50–100) of input/output patterns of the T–H model code to fit, by statistical techniques, the response surface of the regression model to the input/output data.
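The following sketch shows the training step just described for the simplest of the cited surrogate families, a quadratic polynomial response surface, fitted by least squares to a small batch of runs of an analytic stand-in for the T–H code; the model, the sample sizes and the input ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def th_code(x):
    """Expensive T-H code stand-in (in practice, one call = one long simulation)."""
    return (20.0 + 3.0 * x[:, 0] - 2.0 * x[:, 1] + 0.8 * x[:, 0] * x[:, 1]
            + 0.5 * x[:, 1] ** 2 + 0.3 * np.sin(2.0 * x[:, 0]))

def quadratic_features(x):
    """Full quadratic basis in two inputs: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1**2, x2**2, x1 * x2])

# Small training set (e.g., ~80 runs of the long-running code) ...
x_train = rng.uniform(-2.0, 2.0, size=(80, 2))
y_train = th_code(x_train)
coeffs, *_ = np.linalg.lstsq(quadratic_features(x_train), y_train, rcond=None)

# ... and an independent validation set to quantify the regression error.
x_val = rng.uniform(-2.0, 2.0, size=(1000, 2))
residuals = th_code(x_val) - quadratic_features(x_val) @ coeffs
print(f"Response-surface validation RMSE: {np.sqrt(np.mean(residuals**2)):.3f}")

# The fitted (cheap) surrogate can then replace th_code() inside the MC loops of Section 3.
```

The same fit-then-validate workflow applies to the other surrogate families mentioned in the literature (ANNs, SVMs, kriging), with the regression error on held-out runs playing the role of the quantity to be controlled.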
Several examples can be found in the literature: in [ ], polynomial Response Surfaces (RSs) are used to calculate the structural failure probability; in [ ], the reliability analysis of a T–H passive system of an advanced nuclear reactor is performed with linear and quadratic polynomial RSs; Radial Basis Functions (RBFs), Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) are shown, in [ ], to provide local approximations of the failure domain in structural reliability problems and for the functional failure analysis of a passive safety system in a Gas-cooled Fast Reactor (GFR); finally, Gaussian meta-models have been used for the sensitivity analysis of the inputs driving radionuclide transport in groundwater, as modeled by complex hydrogeological models, in [113,114].

5.3. Integration of Passive Systems in PSA
The introduction of passive safety systems in the framework of PSA based on FTs and ETs deserves particular attention. The reason is that the reliability of these systems does not depend only on (mechanical) component failure modes, but also on the occurrence of phenomenological events. This makes the problem nontrivial (see Sections 2 and 3), because it is difficult to define the status of these systems along an accident sequence only in Boolean terms of 'success/failure'. An 'intermediate' mode of operation of a passive system or, equivalently, a degraded performance of the system (up to the failure point) should be considered, where the passive system might still be capable of providing a functional level sufficient for the mitigation of the accident progression.

5.3.1. Integration of Passive System Reliability into Static PSA
An ET describes, in a logically structured, graphical form, the sequences of events (scenarios) that can possibly originate from an initiating event, depending on the fulfilment (or not) of the functional requirements of the safety (and operational) systems involved in the accident scenario. For each of these systems, an FT displays in graphical/logic form all the combinations of the so-called basic events that cause the failure of the system, by connecting the events through logic gates. The basic events represent the fundamental failure modes of the system and can be assessed by different reliability models and data. With respect to active safety systems working in conventional, currently operating nuclear facilities, the following two fundamental failure modes are usually considered:
• Start-up failure: for standby active equipment (e.g., pumps, fans), the failure probability of start-up should be assessed, while for valves, the failure probability of opening and/or closing should be modelled.
• Failure during operation: the failure probability during operation of active components (e.g., pumps) should be quantified and modelled in the PSA. To this purpose, the most commonly applied reliability models employ the failure rate and the expected mission time (or functional time) of the component. For components with relatively short mission times (1–2 h), this kind of malfunction is usually modelled within the start-up failure framework.

With respect to passive systems, the applicability of the FT method depends on the passivity level (A, B, C and D), as defined by the IAEA [115].
Type 'B' passive systems do not contain any moving mechanical parts, and the start-up of the system is triggered by passive phenomena (with the exclusion of valve utilization): in this case, the start-up failure probability of the system is determined only on the basis of the probability that the passive physical phenomenon occurs or not (e.g., that natural circulation develops in the cooling circuit). Failure during operation is, instead, determined by the physical stability of the passive phenomenon (e.g., the long-term stability of the natural circulation), which is mainly influenced by the initial and boundary conditions. It is worth mentioning that, as pointed out before, modelling start-up failure and failure during operation requires the consideration of different physical phenomena, because alterations in the boundary conditions during accident mitigation can result in the degradation of the driving forces even after a successful start-up. When passive systems are concerned, other failure modes are to be considered, such as mechanical equipment failures (e.g., heat exchanger plugging, rupture or leak, etc.), which can also lead to failure during operation [ ] and alter the physical stability, and human errors, which can influence the long-term operation of a passive system. In some cases (for example, [89]), these failure modes are considered in a separate FT.

As an example of a type 'B' passive system, let us consider a passive residual heat removal system [ ] where the heat is transferred into a pool that must be refilled to ensure the fulfillment of the safety function in the long run. The resulting FT for the start-up and during-operation failure modes is shown in Figure 1: the failure probabilities of the 'phenomenological' basic events (i.e., 'natural circulation fails to start', NC-FS, and 'natural circulation fails to run', NC-FR) should be derived from the reliability assessment of the physical phenomenon, while the failure probabilities of the mechanical parts (i.e., 'component failure during operation', COMP-FAIL, and 'refill failure of ultimate sink', REFILL) are the result of classical FMEA or HAZOP methods.

Figure 1. FT for start-up failure and failure during operation for type 'B' passive systems.

Types 'C' and 'D' passive systems may contain moving mechanical parts (e.g., check valves in the case of type 'C' and motor-operated valves in the case of type 'D') in order to trigger the operation of the system. In this case, the system start-up failure is determined by both the malfunction of the active (or mechanical) component and the probability of the physical phenomenon developing, while the failure during operation is determined by the stability of the physical phenomena, the reliability of the mechanical parts and the possible failure of the refill procedure (if considered), similarly to type 'B' passive systems. Moreover, for type 'D' passive systems, the failures of the electric power supply and of the Instrumentation and Control (I&C) systems have to be considered along with the active component failure during start-up.

Typical FTs for start-up failure and failure during operation for type 'C' and 'D' passive systems are shown in Figure 2.

Figure 2. FTs for the start-up failure and failure during operation for type 'C' and 'D' passive systems.

As usual in traditional PSA, the FTs have to be linked to the ETs, where the passive system success/failure is considered among the ET header events [ ].
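Under one plausible reading of the fault tree of Figure 1 (start-up failure and failure during operation combined by an OR gate at the top, with independent basic events), the top-event probability can be quantified as sketched below; the basic-event values are invented placeholders, not data from the cited analyses.

```python
import numpy as np

# Illustrative basic-event probabilities: in practice, NC-FS and NC-FR come from
# the reliability assessment of the physical phenomenon, while COMP-FAIL and
# REFILL come from classical FMEA/HAZOP data.
p = {
    "NC-FS": 1.0e-3,       # natural circulation fails to start
    "NC-FR": 5.0e-4,       # natural circulation fails to run
    "COMP-FAIL": 2.0e-4,   # component failure during operation
    "REFILL": 1.0e-3,      # refill failure of the ultimate sink
}

def or_gate(probs):
    """OR gate for independent basic events: 1 - prod(1 - p_i)."""
    return 1.0 - np.prod(1.0 - np.asarray(list(probs)))

p_startup = p["NC-FS"]                                            # start-up failure
p_operation = or_gate([p["NC-FR"], p["COMP-FAIL"], p["REFILL"]])  # failure during operation
p_top = or_gate([p_startup, p_operation])                         # system fails

print(f"Start-up failure:         {p_startup:.2e}")
print(f"Failure during operation: {p_operation:.2e}")
print(f"Top event probability:    {p_top:.2e}")
```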
In general terms, the call into operation of a passive system results from the malfunction of an active system: therefore, the header representing passive systems is typically preceded by headers of active systems. Integration can be done, alternatively, with:
• separate headers for start-up failures and failures during operation;
• one header representing both types of failure.

The ETs representing these two alternatives are presented in Figure 3, left and right, respectively.

Figure 3. Possible approaches to integrating FTs of passive systems into ETs. Left: separate headers for start-up failures and failures during operation; right: one header representing both types of failure.

In the former case, the FTs presented in Figures 1 and 2 are placed behind the two distinct headers ('Passive System Successfully Starts' and 'Passive System Successfully Continues Operation'), whereas in the latter case, the two FTs are linked together by an 'OR' gate and placed behind the single header 'Passive System Successfully Starts and Continues Operation'.

In most cases, the two ET construction approaches result in the same minimal cut-set lists; however, the first approach should be applied cautiously for scenarios where more than one redundant train is available and the operation of a single train can fulfill the required safety function. In this particular case, some relevant minimal cut-sets are left out of the results. For illustration purposes, consider a passive system with two redundant trains. The top gate of the FT for the start-up failure is an 'AND' gate, which links the start-up failures of the two redundant trains. The FT for the failure during operation has the same structure. As a result, the passive system fails only if both trains fail to start or both trains fail to run, neglecting the minimal cut-set 'one train fails to start and the other train fails to run'. Therefore, in this case (when there are 100% redundant trains), the second option is preferable.

5.3.2. Integration of Passive Systems into Dynamic PSA
In PSA practice, accident scenarios, though dynamic in nature, are usually analyzed with the 'static' ETs and FTs, as discussed in the previous Section 5.3.1. The current 'static' PSA framework is limited when: (i) handling the actual timing of events, which ultimately influences the evolution of the scenarios; and (ii) modelling the interactions between the hardware components (i.e., failure rates) and the process variables (temperatures, mass flows, pressures, etc.) [ ]. In practice, with respect to (i), different orders of the same success and failure events (and/or different timings of their occurrence) along an accident scenario typically lead to different outcomes; with respect to (ii), the event/scenario occurrence probabilities are affected by the values of the process variables (temperatures, mass flows, pressures, etc.). This highlights another limitation of the 'static' PSA framework, which can only handle Boolean representations of system states (i.e., success or failure), neglecting any intermediate (partial operation) states, which, conversely, are fundamental when passive system operation is concerned. In fact, because of its specific features, defining the status of a passive system simply in terms of 'success' or 'failure' is limiting, since 'intermediate' modes of operation or, equivalently, degraded performance states (up to the failure point) are possible and may (still) guarantee some (even limited) operation.
This operation could be sufficient to recover a failed system (e.g., through a redundancy configuration) and, ultimately, to prevent a severe accident. In complex situations where several (multi-state) safety systems are involved and where human behavior still plays a relevant role, advanced solutions have been proposed and already used for dynamic PSA, such as Continuous Event Trees (CETs) [ ], Dynamic Event Trees (DETs) [ ], Discrete DETs (DDETs) [ ], Monte Carlo DETs (MCDETs) [ ] and Repairable DETs (RDETs) [ ], because they provide more realistic frameworks than static FTs and ETs, since they capture the interaction between the process parameters and the passive system states within the dynamic evolution of the accident. The most evident difference between DETs and static ETs is that, while ETs are constructed by expert analysts who draw their branches based on success/failure criteria set by the analysts, in DETs the branches are spawned by software that embeds the (deterministic) models simulating the plant dynamics and the (stochastic) models of component failures. Naturally, the DET generates a number of scenarios much larger than that of the classical static FT/ET approaches, so that the a posteriori retrieval of information can become quite burdensome and complex [ ]. Another challenge is related to the relevant effort in terms of computational time required for generating a large number of time-dependent accident scenarios by means of the Monte Carlo techniques that are typically employed to deeply and thoroughly explore the entire system state-space and to cover, in principle, all the possible combinations of events over long periods of time. For thermal-hydraulic passive systems this is even more relevant, since during the accident progression their reliability strongly depends (more than for other safety systems) upon time and the state/parameter evolution of the system. Therefore, also in this case, resorting to metamodels can help [ ], accomplishing the evaluation process of T–H passive systems in a consistent manner.

The goal of dynamic PSA is, therefore, to account for the interaction of the process dynamics and the stochastic nature/behavior of the system at various stages, and to embed the state/parameter evaluation by deterministic thermal-hydraulic codes within a DET generation [ ]. The framework should be able to estimate the physical variations of all the system parameters/variables and the frequency of the accident sequences, while properly taking into account the dynamic effects. If the (mechanical) component failure probabilities (e.g., the per-demand failure probability of a valve) are known, they can be combined with the probability distributions of the estimated parameters/variables in order to predict the probabilistic evolution of each scenario. In [ ], the T–H passive system behavior is represented as a non-stationary stochastic process, where natural circulation is modelled in terms of time-variant performance parameters (e.g., thermal power and mass flow rate) assumed to be stochastic variables. In that work, which can be considered a preliminary attempt to address the dynamic aspect in the context of passive system reliability, the statistics of such stochastic variables (e.g., mean values and standard deviations) change in time, so that the corresponding random variables assume different values in different realizations (i.e., each realization is different).
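A toy version of such a non-stationary description is sketched below: the natural-circulation mass flow rate is sampled at each time step from a distribution whose mean and standard deviation drift in time, and the instantaneous probability of falling below a required flow is estimated. The numbers, the drift laws and the absence of temporal correlation are simplifying assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.linspace(0.0, 72.0, 145)       # mission time [h]
mu = 12.0 - 0.04 * t                   # time-variant mean of the mass flow [kg/s]
sigma = 0.8 + 0.01 * t                 # time-variant standard deviation [kg/s]
threshold = 10.0                       # minimum flow required by the safety function

# Sample realizations of the performance parameter: the statistics change in
# time, so every realization follows a different trajectory (no temporal
# correlation is modelled in this simple sketch).
n_realizations = 10_000
flows = rng.normal(mu, sigma, size=(n_realizations, t.size))

p_t = np.mean(flows < threshold, axis=0)   # instantaneous failure probability
for hour in (0.0, 24.0, 48.0, 72.0):
    i = np.searchsorted(t, hour)
    print(f"t = {hour:4.0f} h: P(flow < {threshold} kg/s) = {p_t[i]:.3f}")
```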
6. Conclusions, Recommendations and Additional Issues
In this paper, we have laid down a common understanding of the state of the art and the open issues with respect to the reliability assessment of passive safety systems for their adoption in nuclear installations. Indeed, such safety systems rely on intrinsic physical phenomena, which makes the assessment of their performance quite challenging to carry out with respect to traditional (active) systems. This is due to the typical scarcity of data over a sufficiently wide range of operational conditions, which introduces relevant (aleatory and epistemic) uncertainties into the analysis. These issues could have a negative impact on the public acceptance of next-generation nuclear reactors, which instead, thanks to the use of passive systems, should be safer than the current ones. Thus, structured and sound frameworks and techniques must be sought, developed and demonstrated for a robust quantification of the reliability/failure probability of nuclear passive safety systems.

With respect to T–H passive systems, a review of the available approaches and frameworks for the quantification of the reliability of nuclear passive safety systems has been presented, followed by a critical discussion of the pending open issues. It has turned out that the massive use of expert judgement and subjective assumptions, combined with often scarce data, requires the propagation of the corresponding uncertainties by simulating the system behavior numerous times under different operating conditions. In this light, the most realistic assessment of the passive system is provided by the functional failure-based approach, thanks to MCS, which is flexible and is not negatively affected by any model complexity: therefore, it does not require any simplifying assumption. On the other hand, often prohibitive computational efforts are required, because a large number of MC-sampled model evaluations must often be carried out for an accurate and precise assessment of the frequently small (e.g., lower than 10 ) functional failure probability: actually, each evaluation requires the call of a long-running mechanistic code (several hours per run). Thus, we must resort to advanced methods to tackle the issues associated with the analysis. As open issues, we focused, in particular, on the role of empirical regression modelling, the need for advanced sensitivity analysis methods and the integration of passive systems in the (static/dynamic) PSA framework.

In this regard, we can provide general conclusions and recommendations for practitioners who tackle the issue of passive system reliability assessment:
• If the estimation of the passive system functional failure probability is of interest, we suggest combining metamodels with efficient MCS techniques, for example, by constructing and adaptively refining the metamodel by means of samples generated by the advanced MCS method in the proximity of the system functional failure region [78–86]. An example is represented by the Adaptive Metamodel-based Subset Importance Sampling (AM-SIS) method, recently proposed by some of the authors, which intelligently combines Subset Simulation (SS), Importance Sampling (IS) and iteratively trained Artificial Neural Networks (ANNs) [78,79].
• If thorough uncertainty propagations (e.g., the determination of the PDFs, CDFs and percentiles of the code outputs) and SA are of interest to the analyst, a combination of Finite Mixture Models (FMMs) and ensembles of global SA measures is suggested, as proposed by some of the authors in [94,98].

Finally, it is worth mentioning that, to foster the acceptance of these methods in the nuclear research community and to consequently promote the public acceptance of future reactor designs involving passive safety systems, other (open) issues should be addressed, such as:
• The methods proposed rely on the assessment of the uncertainty (both aleatory and epistemic) in the quantitative description provided by models of the phenomena pertaining to the functions of the passive systems. This requires systematic, sound and rigorous Inverse Uncertainty Quantification (IUQ) approaches to find a characterization of the input parameter uncertainty that is consistent with the experimental data, while limiting the associated computational burden. Approaches have already been proposed in the open literature, but not yet in the field of passive system reliability assessment [131–136].
• If we resort to empirical metamodels for estimating passive system failure probabilities and carrying out uncertainty and sensitivity analyses, the following problems should be considered:
  ◦ the regression error should be carefully quantified (and possibly controlled) throughout the process, in order to reduce its impact on the entire reliability assessment [81];
  ◦ the higher the input dimensionality (e.g., in the presence of time series data), the larger the training dataset should be to obtain metamodel accuracy. Rigorous (linear or nonlinear) approaches to reducing the input dimensionality (e.g., Principal Component Analysis, PCA, or Stacked Sparse Autoencoders) should be sought, with increased metamodel performance [137];
  ◦ the quality of metamodel training can be negatively affected by noisy data. Data filtering, carried out on the model code predictions, may impact the metamodel predictive performance [138].
• The introduction of passive safety systems in the framework of PSA deserves particular attention, in particular when accident scenarios are generated in a dynamic fashion. The reasons are the following:
  ◦ it is difficult to define the state of passive systems along an accident sequence only in the classical binary terms of 'success/failure'; rather, 'intermediate' modes of operation or degraded performance states should be considered, where the passive system might still be capable of providing a functional level sufficient for the mitigation of the accident progression;
  ◦ the number of accident scenarios to be handled is considerably larger than that associated with the traditional static fault/event tree techniques. Thus, the 'a posteriori' retrieval of information can be quite burdensome and difficult. In this view, artificial intelligence techniques could be embraced to address the problem [125–127];
  ◦ the thorough exploration of the dynamic state-space of the passive safety system is impracticable by standard (sampling) methods: advanced exploration schemes should be sought to intelligently drive the search towards 'interesting' scenarios (e.g., extreme unexpected events), while reducing the computational effort [139,140].

Author Contributions: All authors provided equal contributions to the technical work. In addition, F.D.M. and N.P. attended to the editing of the paper.
All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest. Pagani, L.P.; Apostolakis, G.E.; Hejzlar, P. The Impact of Uncertainties on the Performance of Passive Systems. Nucl. Technol. 149, 129–140. [CrossRef] Marquès, M.; Pignatel, J.; Saignes, P.; D’Auria, F.; Burgazzi, L.; Müller, C.; Bolado-Lavin, R.; Kirchsteiger, C.; La Lumia, V.; Ivanov, I. Methodology for the reliability evaluation of a passive system and its integration into a Probabilistic Safety Assessment. Nucl. Eng. Des. 2005,235, 2612–2631. [CrossRef] Jafari, J.; D’Auria, F.; Kazeminejad, H.; Davilu, H. Reliability evaluation of a natural circulation system. Nucl. Eng. Des. 79–104. [CrossRef] Thunnissen, D.P.; Au, S.-K.; Tsuyuki, G.T. Uncertainty Quantification in Estimating Critical Spacecraft Component Temperatures. J. Thermophys. Heat Transf. 2007,21, 422–430. [CrossRef] Fong, C.J.; Apostolakis, G.E.; Langewisch, D.R.; Hejzlar, P.; Todreas, N.E.; Driscoll, M.J. Reliability analysis of a passive cooling system using a response surface with an application to the flexible conversion ratio reactor. Nucl. Eng. Des. ,239, 2660–2671. Bousbia Salah, A.; Auria, F. Insights into Natural Circulation—Phenomena, Models, and Issues; LAP Lambert Academic Publishing: Saarbrücken, Germany, 2010. Aksan, N.; D’Auria, F. Relevant Thermal Hydraulic Aspects of Advanced Reactor Design-Status Report. NEA/CSNI/R. 1996. Available online: https://www.oecd-nea.org/jcms/pl_16144/relevant-thermal-hydraulic- aspects-of- advanced-reactors-design- status-report-1996?details=true (accessed on 3 May 2021). Aksan, N.; Boado, R.; Burgazzi, L.; Choi, J.H.; Chung, Y.J.; D’Auria, F.S.; De La Rosa, F.C.; Gimenez, M.O.; Ishii, M.; Khartabil, H.; et al. Natural Circulation Phenomena and Modeling for Advanced Water Cooled Reactors. IAEA-TECDOC-1677 978-92- 0-127410-6. 2012. Available online: https://www-pub.iaea.org/MTCD/Publications/PDF/TE-1677_web.pdf (accessed on 3 May 2021). Aksan, N.; D’Auria, F.S.; Marques, M.; Saha, D.; Reyes, J.; Cleveland, J. Natural Circulation in Water-Cooled Nuclear Power Plants Phenomena, Models, and Methodology for System Reliability Assessments. IAEA-TECDOC-1474 92-0-110605-X. 2005. Available online: https://www- pub.iaea.org/MTCD/Publications/PDF/TE_1474_web.pdf (accessed on 3 May 2021). 10. Aksan, N.; Choi, J.H.; Chung, Y.J.; Cleveland, J.; D’Auria, F.S.; Fil, N.; Gimenez, M.O.; Ishii, M.; Khartabil, H.; Korotaev, K.; et al. Passive Safety Systems and Natural Circulation in Water Cooled Nuclear Power Plants. IAEA-TECDOC-1624 978-92-0-111309-2. 2009. Available online: https://www-pub.iaea.org/MTCD/Publications/PDF/te_1624_web.pdf (accessed on 3 May 2021). Ricotti, M.E.; Zio, E.; D’Auria, F.; Caruso, G. Reliability Methods for Passive Systems(RMPS) Study—Strategy and Results. Unclassified NEA CSNI/WGRISK 2002,10, 149. Lewis, M.J.; Pochard, R.; D’auria, F.; Karwat, H.; Wolfert, K.; Yadigaroglu, G.; Holmstrom, H.L.O. Thermohydraulics of Emergency Core Cooling in Light Water Reactors—A State of the Art Report. CSNI Rep. 1989. Available online: https: //www.oecd-nea.org/upload/docs/application/pdf/2020-01/csni89-161.pdf (accessed on 3 May 2021). Aksan, N.; D’auria, F.; Glaeser, H.; Pochard, R.; Richards, C.; Sjoberg, A. 
A Separate Effects Test Matrix for Thermal-Hydraulic Code Validation: Phenomena Characterization and Selection of Facilities and Tests; OECD/GD OECD Guidance Document; OECD: Paris, France, 1993. Aksan, N.; D’Auria, F.; Glaeser, H.; Lillington, J.; Pochard, R.; Sjoberg, A. Evaluation of the Separate Effects Tests (SET) Validation Matrix; OECD/GD OECD Guidance Document; OECD: Paris, France, 1996. U.S. Nuclear Regulatory Commission. Others Compendium of ECCS Research for Realistic LOCA Analysis; NUREG-1230; Division of Systems Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission: Washington, DC, USA, 1988. 16. Burgazzi, L. Addressing the uncertainties related to passive system reliability. Prog. Nucl. Energy 2007,49, 93–102. [CrossRef] Burgazzi, L. State of the art in reliability of thermal-hydraulic passive systems. Reliab. Eng. Syst. Saf. ,92, 671–675. [CrossRef] Burgazzi, L. Thermal–hydraulic passive system reliability-based design approach. Reliab. Eng. Syst. Saf. ,92, 1250–1257. 19. Burgazzi, L. Evaluation of uncertainties related to passive systems performance. Nucl. Eng. Des. 2004,230, 93–106. [CrossRef] D’Auria, F.; Giannotti, W. Development of a Code with the Capability of Internal Assessment of Uncertainty. Nucl. Technol. 131, 159–196. [CrossRef] Ahn, K.-I.; Kim, H.-D. A Formal Procedure for Probabilistic Quantification of Modeling Uncertainties Employed in Phenomeno- logical Transient Models. Nucl. Technol. 2000,130, 132–144. [CrossRef] Zio, E.; Pedroni, N. Building confidence in the reliability assessment of thermal-hydraulic passive systems. Reliab. Eng. Syst. Saf. 2009,94, 268–281. [CrossRef] Apostolakis, G. The concept of probability in safety assessments of technological systems. Science ,250, 1359–1364. [CrossRef] Energies 2021,14, 4688 14 of 17 24. Helton, J.; Oberkampf, W. Alternative representations of epistemic uncertainty. Reliab. Eng. Syst. Saf. 2004,85, 1–10. [CrossRef] Zio, E. A study of the bootstrap method for estimating the accuracy of artificial neural networks in predicting nuclear transient processes. IEEE Trans. Nucl. Sci. 2006,53, 1460–1478. [CrossRef] Helton, J.; Johnson, J.; Sallaberry, C.; Storlie, C. Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliab. Eng. Syst. Saf. 2006,91, 1175–1209. [CrossRef] Zio, E.; Pedroni, N. Estimation of the functional failure probability of a thermal–hydraulic passive system by Subset Simulation. Nucl. Eng. Des. 2009,239, 580–599. [CrossRef] Pourgol-Mohamad, M.; Mosleh, A.; Modarres, M. Methodology for the use of experimental data to enhance model output uncertainty assessment in thermal hydraulics codes. Reliab. Eng. Syst. Saf. 2010,95, 77–86. [CrossRef] Burgazzi, L. Failure Mode and Effect Analysis Application for the Safety and Reliability Analysis of a Thermal-Hydraulic Passive System. Nucl. Technol. 2006,156, 150–158. [CrossRef] 30. Burgazzi, L. Passive System Reliability Analysis: A Study on the Isolation Condenser. Nucl. Technol. 2002,139, 3–9. [CrossRef] Burgazzi, L. Reliability Evaluation of Passive Systems through Functional Reliability Assessment. Nucl. Technol. 145–151. [CrossRef] Bassi, C.; Marques, M. Reliability Assessment of 2400 MWth Gas-Cooled Fast Reactor Natural Circulation Decay Heat Removal in Pressurized Situations. Sci. Technol. Nucl. Install. 2008,2008, 287376. [CrossRef] Mackay, F.J.; Apostolakis, G.E.; Hejzlar, P. Incorporating reliability analysis into the design of passive cooling systems with an application to a gas-cooled reactor. Nucl. 
Eng. Des. 2008,238, 217–228. [CrossRef] Mathews, T.S.; Ramakrishnan, M.; Parthasarathy, U.; Arul, A.J.; Kumar, C.S. Functional reliability analysis of Safety Grade Decay Heat Removal System of Indian 500MWe PFBR. Nucl. Eng. Des. 2008,238, 2369–2376. [CrossRef] Mathews, T.S.; Arul, A.J.; Parthasarathy, U.; Kumar, C.S.; Subbaiah, K.V.; Mohanakrishnan, P. Passive system reliability analysis using Response Conditioning Method with an application to failure frequency estimation of Decay Heat Removal of PFBR. Nucl. Eng. Des. 2011,241, 2257–2270. [CrossRef] Arul, A.J.; Iyer, N.K.; Velusamy, K. Adjoint operator approach to functional reliability analysis of passive fluid dynamical systems. Reliab. Eng. Syst. Saf. 2009,94, 1917–1926. [CrossRef] Arul, A.J.; Iyer, N.K.; Velusamy, K. Efficient reliability estimate of passive thermal hydraulic safety system with automatic differentiation. Nucl. Eng. Des. 2010,240, 2768–2778. [CrossRef] Mezio, F.; Grinberg, M.; Lorenzo, G.; Giménez, M. Integration of the functional reliability of two passive safety systems to mitigate a SBLOCA+BO in a CAREM-like reactor PSA. Nucl. Eng. Des. 2014,270, 109–118. [CrossRef] Schuëller, G.; Pradlwarter, H. Benchmark study on reliability estimation in higher dimensions of structural systems—An overview. Struct. Saf. 2007,29, 167–182. [CrossRef] Schuëller, G. Efficient Monte Carlo simulation procedures in structural uncertainty and reliability analysis—Recent advances. Struct. Eng. Mech. 2009,32, 1–20. [CrossRef] Zio, E.; Pedroni, N. How to effectively compute the reliability of a thermal–hydraulic nuclear passive system. Nucl. Eng. Des. 2011,241, 310–327. [CrossRef] Zio, E.; Pedroni, N. Monte Carlo simulation-based sensitivity analysis of the model of a thermal–hydraulic passive system. Reliab. Eng. Syst. Saf. 2012,107, 90–106. [CrossRef] Kersaudy, P.; Sudret, B.; Varsier, N.; Picon, O.; Wiart, J. A new surrogate modeling technique combining Kriging and polynomial chaos expansions—Application to uncertainty analysis in computational dosimetry. J. Comput. Phys. ,286, 103–117. 44. Kartal, M.E.; Ba¸sa˘ga, H.B.; Bayraktar, A. Probabilistic nonlinear analysis of CFR dams by MCS using Response Surface Method. Appl. Math. Model. 2011,35, 2752–2770. [CrossRef] Pedroni, N.; Zio, E.; Apostolakis, G. Comparison of bootstrapped artificial neural networks and quadratic response surfaces for the estimation of the functional failure probability of a thermal–hydraulic passive system. Reliab. Eng. Syst. Saf. ,95, 386–395. Zio, E.; Pedroni, N. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems. Reliab. Eng. Syst. Saf. 2010,95, 1300–1313. [CrossRef] Babuška, I.; Nobile, F.; Tempone, R. A stochastic collocation method for elliptic partial differential equations with random input data. SIAM Rev. 2010,52, 317–355. [CrossRef] Hurtado, J.E. Filtered importance sampling with support vector margin: A powerful method for structural reliability analysis. Struct. Saf. 2007,29, 2–15. [CrossRef] Bect, J.; Ginsbourger, D.; Li, L.; Picheny, V.; Vazquez, E. Sequential design of computer experiments for the estimation of a probability of failure. Stat. Comput. 2012,22, 773–793. [CrossRef] Rubino, G.; Tuffin, B. Rare Event Simulation Using Monte Carlo Methods; John Wiley & Sons: Hoboken, NJ, USA, 2009; ISBN Zio, E.; Apostolakis, G.; Pedroni, N. Quantitative functional failure analysis of a thermal–hydraulic passive system by means of bootstrapped Artificial Neural Networks. Ann. Nucl. 
How to Turn a Percent Into a Decimal in 3 Easy Steps

In the business world, it is important to be able to understand percentages. They can tell you a lot about how your business is doing and how you can improve it. However, sometimes it is necessary to convert these percentages into decimals in order to get a better understanding of them. In this blog post, we will teach you how to turn a percent into a decimal! We will walk you through three easy steps that will help you turn any percentage into a decimal in no time! So, without further ado, let's get started!

The first step is to take the number in front of the percent sign and move its decimal point two places to the left. For example, if we have a percentage of 24%, moving the decimal point two places to the left gives us 0.24.

The second step is to remove the percent sign. Dropping the percent sign and moving the decimal point two places to the left together amount to dividing the original number by 100, so 24% becomes 24 / 100 = 0.24.

The last step is to simplify the number if possible, for example by removing unnecessary trailing zeros. In our case, 0.24 cannot be simplified any further, so our final answer is 0.24.

Common mistakes that people make when trying to convert percentages into decimals:

First, they forget to move the decimal point. This is a crucial step, as it will give you an entirely different answer if you forget to do it! Second, they don't remove the percent sign. Again, this will change your final answer, so be sure to remember to do it. Third, they try to simplify the number too early. You should only simplify after you have completed all of the other steps, or you may end up with an incorrect answer.

Now that you know how to turn a percent into a decimal, put your new skills to the test! Try converting some percentages on your own and see how well you do. With a little practice, you'll be a pro in no time!

How to check your work: The easiest way to check your work is to take the decimal that you came up with and multiply it by 100. This should give you the original percentage that you started with. For example, if we took 24% and turned it into a decimal, we would get 0.24. If we then multiplied this by 100, we would get 24% again, which means that our answer is correct!

3 Things To Remember When Turning A Percent Into A Decimal: First, always move the decimal point two places to the left. Second, don't forget to remove the percent sign! And lastly, simplify if possible.

End note: We hope that this blog post was helpful in teaching you how to convert a percent into a decimal. While it may seem like a daunting task at first, with a little practice it will become second nature! Remember the three steps we outlined above, and you'll be able to do it without even thinking about it. Good luck!
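To make the three steps concrete, here is a minimal Python sketch of the same conversion (the function name and sample values are ours, not part of the original post):

 def percent_to_decimal(percent_text):
     # Steps 1 and 2: drop the percent sign and divide by 100,
     # which moves the decimal point two places to the left.
     number = float(percent_text.strip().rstrip('%'))
     return number / 100

 # Checking the work: multiplying by 100 should give the original percentage back
 for text in ['24%', '7.5%', '150%']:
     d = percent_to_decimal(text)
     print(text, '->', d, '| check:', d * 100)

Running this prints 0.24, 0.075 and 1.5, and the check column reproduces the original percentages.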
{"url":"https://sutherlandkovach.com/how-to-turn-a-percent-into-a-decimal/","timestamp":"2024-11-03T19:53:00Z","content_type":"text/html","content_length":"241814","record_id":"<urn:uuid:b0faeefe-923b-4db1-8c9f-9a24d046df41>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00238.warc.gz"}
Geodesy is the "science concerned with the study of the shape and size of the earth in the geometrical sense and with the study of certain physical phenomena, such as gravity, in seeking explanation of fine irregularities in the earth's shape." (D.H. Maling: Coordinate Systems and Map Projections, 2nd edition, Pergamon Press 1992). So what has Tcl to do with that? All geographic applications need geodesy for their calculations. One of the important calculations is the measurement of the distance between two points on the earth given the two geographical positions. This distance can be calculated using T. Vincenty's algorithm as published in Survey Review:

Vincenty T., (1975) "Direct and inverse solutions of geodesics on the ellipsoid with application of nested equations", Survey Review 23, Number 176, p 88-93

The only thing we have to know beyond the two positions are the parameters of the ellipsoid in which the positions are expressed. This is important since the result will vary according to the used ellipsoid. So we define:

 # all known Ellipsoids:
 # each entry is a list with: ("major semi-axis","minor semi-axis","name")
 # axes are in metres
 set App(ellipsoid,wgs84)      [list 6378137     6356750.321 "WGS84"]
 set App(ellipsoid,grs80)      [list 6378137     6356752.314 "GRS80"]
 set App(ellipsoid,clarke)     [list 6378206.4   6356583.8   "Clarke (1866) / NAD27"]
 set App(ellipsoid,int)        [list 6378388     6356911.946 "International (1909/1924)"]
 set App(ellipsoid,krassovsky) [list 6378245     6356863.019 "Krassovsky (1940)"]
 set App(ellipsoid,bessel)     [list 6377397.155 6356078.963 "Bessel (1841)"]
 set App(ellipsoid,wgs72)      [list 6378135     6356750.52  "WGS72"]
 set App(ellipsoid,wgs66)      [list 6378145     6356759.769 "WGS66"]
 set App(ellipsoid,airy)       [list 6377563.396 6356256.909 "Airy 1830"]

Often you will want to use the WGS84 or GRS80 ellipsoid since GPS data are expressed with reference to these. Now we need to define some constant values:

 set App(PI) [expr {atan(1)*4}]  ;# this is pi
 set App(nm) 1852.0              ;# this is the number of meters that go into a nautical mile

And then a conversion factor between degrees (ranging from 0 to 360) and radians (ranging from 0 to 2*pi):

 set App(d2r) [expr {2*$App(PI)/360.0}]

Now we are ready to go along with the algorithmic solution. It uses a loop to calculate the correct value iteratively until the result is good enough:

 # method -> one of the elements of the above ellipsoid array, e.g., 'ellipsoid,bessel'
 # x1 ...
y2 -> position of the points 1 and 2 in decimal degrees # returns the distance between the points expressed in metres proc DistEllipsoid {method x1 y1 x2 y2} { global App # set our precision set eps 0.00000000005 # convert to radians: set lon1 [expr {$x1 * $App(d2r)}] set lon2 [expr {$x2 * $App(d2r)}] set lat1 [expr {$y1 * $App(d2r)}] set lat2 [expr {$y2 * $App(d2r)}] # get ellipsoid data: foreach {a0 b0 name} $App($method) break # flatness: set flat [expr {($a0 - $b0) / $a0}] set r [expr {1 - $flat}] set tanu1 [expr {$r * tan($lat1)}] set tanu2 [expr {$r * tan($lat2)}] set x [expr {atan($tanu1)}] set cosu1 [expr {cos($x)}] set sinu1 [expr {sin($x)}] set x [expr {atan($tanu2)}] set cosu2 [expr {cos($x)}] set sinu2 [expr {sin($x)}] set omega [expr {$lon2 - $lon1}] set lambda $omega # loop untile the result is good enough: while {1} { set testlambda $lambda set s2s [expr {pow($cosu2 * sin($lambda),2) + \ pow($cosu1 * $sinu2 - $sinu1 * $cosu2 * cos($lambda),2)}] set ss [expr {sqrt($s2s)}] set cs [expr {$sinu1 * $sinu2 + $cosu1 * $cosu2 * cos($lambda)}] set tansigma [expr {$ss / $cs}] set sinalpha [expr {$cosu1 * $cosu2 * sin($lambda) / $ss}] set x [expr {asin($sinalpha)}] set cosalpha [expr {cos($x)}] set c2sm [expr {$cs - (2 * $sinu1 * $sinu2 / pow($cosalpha,2))}] set c [expr {$flat / 16.0 * pow($cosalpha,2) * \ (4 + $flat * (4 - 3 * pow($cosalpha,2)))}] set lambda [expr {$omega + (1 - $c) * \ $flat * $sinalpha * (asin($ss) + $c * $ss * ($c2sm + \ $c * $cs * (-1 + 2 * pow($c2sm,2))))}] # result good enough? if {abs($testlambda - $lambda) <= 0.00000000000005} {break} set u2 [expr {pow($cosalpha,2) * (pow($a0,2) - pow($b0,2)) / pow($b0,2)}] set a [expr {1 + ($u2 / 16384.0) * (4096 + $u2 * \ (-768 + $u2 * (320 - 175 * $u2)))}] set b [expr {($u2 / 1024.0) * (256 + $u2 * (-128 + $u2 * (74 - 47 * $u2)))}] set dsigma [expr {$b * $ss * ($c2sm + ($b / 4.0) * \ ($cs * (-1 + 2 * pow($c2sm,2)) - ($b / 6.0) * $c2sm * \ (-3 + 4 * pow($ss,2)) * (-3 + 4 * pow($c2sm,2))))}] return [expr {$b0 * $a * (asin($ss) - $dsigma)}] That's it! There are a large number of calculations for map projections available in the package in tcllib already - . If you're dealing with maps at the scale of a continent, or of most countries or US States, you won't notice the errors between the ellipsoid and the sphere. If you are dealing with larger scale maps, I'd certainly welcome code that fits a reference geoid into the framework! (In any case, even large scale maps are usually projected on a polyconic or transverse-Mercator projection, with geodetic latitude lifted from the geoid onto a sphere (and the map scale adjusted accordingly).Related pages:[ needs to make time to demonstrate how much simpler the calculation is for a sphere, and how little inaccuracy that introduces.] [Nice work above, though, Torsten.] - you are right, calculation for a sphere is set d [expr {acos(sin($y1)*sin($y2)+cos($y1)*cos($y2)*cos($x1-$x2))}] set distance_in_metres [expr {180*60.0/3.1415926535*$d*1852}] Canvas geometrical calculations for details. Under the assumption that 1 nautical mile is the same as 1 Minute in latitude, I can give an example:Two points: x1=10 , y1=54 , x1=10.5 , y2=54 (Btw: I live at 10°8'16''E 54°22'14''N in WGS84 coordinates) • spherical (1' = 1nm): 32657 metres • WGS84 (GPS): 32788 metres The difference is 131 metres. It depends on the application if this difference is significant enough.We can, however, come a little closer to a good result by using the sperical formula with the parameters of the WGS84 ellipsoid. 
The distance $d above is the arc distance. We just had to multiply by the earth's radius to get the distance in metres. So, if we multiply by 6378137 (the major semi-axis of the WGS84 ellipsoid), we get:

 • spherical (pseudo-WGS84): 32716 metres

Now we are "only" 72 metres away from the correct distance. This may still be too much for finding and digging up a treasure chest in a desert given a pair of coordinates ...

-- Here's two flavors of a spherical one I did:

 proc geographical_distance {lat1 long1 lat2 long2} {
     global pi
     # Radius of the earth. Assume a perfect sphere.
     set r_smi 3963.1
     set r_km  6378
     # convert to radians
     set y1 [expr {$lat1*($pi/180)}]
     set x1 [expr {$long1*($pi/180)}]
     set y2 [expr {$lat2*($pi/180)}]
     set x2 [expr {$long2*($pi/180)}]
     if {0} {
         # found @ http://jan.ucc.nau.edu/~cvm/latlon_formula.html
         set c [expr {acos(cos($y1)*cos($x1)*cos($y2)*cos($x2) + \
                cos($y1)*sin($x1)*cos($y2)*sin($x2) + sin($y1)*sin($y2))}]
     } else {
         # The "Haversine Formula"
         # found @ http://mathforum.org/library/drmath/view/51879.html
         set dlon [expr {$x2 - $x1}]
         set dlat [expr {$y2 - $y1}]
         set a [expr {pow((sin($dlat/2)),2) + \
                cos($y1) * cos($y2) * pow((sin($dlon/2)),2)}]
         set c [expr {2 * asin(sqrt($a))}]
     }
     # statute_miles kilometers
     return [list [expr {$r_smi * $c}] [expr {$r_km * $c}]]
 }

Is there any way to modify the ellipsoid equation at the top to return arc distance instead of meters? It looks so optimized, I can't understand the innards at all.

Lars H: Isn't there a problem that "arc distance" isn't all that well-defined on an ellipsoid? The angle between the two vectors from the midpoint to the two given points is one possible definition (and not at all hard to compute), but the problem is that this angle corresponds to the route from one point to the other that lies in the plane determined by the two given points and the midpoint, and this is in general not the shortest route! Of course, if you only want the distance "in degrees" you can just translate the length to an angle as for a circle or something. The numerically easiest conversion is probably that 1 meter = 9e-6 degrees, based on the original definition of the meter as 1/10000000 of the distance from the north pole to the equator (along the meridian of Paris (however that didn't come out quite right, see [ ])). If that is good enough depends on the application.

: At 2010-04-15 08:21:13, [email protected] edited the Haversine Formula to change from "set c [expr {2 * atan2(sqrt($a),sqrt(1-$a))}]" to "set c [expr {2 * asin(sqrt($a))}]". I'm afraid I don't understand this change. Could someone please explain it? If you think it's wrong (I suspect it might be), and you have more confidence in your assessment than I do, go ahead and revert the change back to page version 21.

Lars H: atan2($y,$x), by definition, returns a solution $theta of tan($theta) == $y/$x, while asin($y) by definition returns a solution $theta of sin($theta) == $y. With $y = sqrt($a) and $x = sqrt(1-$a), both formulae compute the same angle, since sqrt($a)^2 + sqrt(1-$a)^2 = 1. I must say those pow's and sqrt's in the code look a bit odd, though: anyone with a classical training in numerical analysis would have used the trigonometric identities for cosine of the double angle to get rid of them. Or on second thought, maybe not: doing it with the sqrt's and pow's avoids cancellation, which for short distances could be very noticeable.
Still, the following rewrite using might be a better alternative set c [expr {2*asin( hypot( sin($dlat/2), sqrt(cos($y1)*cos($y2)) * sin($dlon/2) ) )}] : Oops, when I saw the change I did a quick check for a random value of $a and got different results, hence my idea that the identity did not hold. Now I redid the test and got the same result. I must have made a typo. Sorry guys! :^)Longitude and latitude can be expressed in a number of ways. LongLat Converter converts between the different formats. : For the mathematical sluts among us who still think the earth is locally pretty darn flat, have forgotten the difference between the Gilead and the Geodesy, and who need to get a useful calculation done in a short amount of time, may I present the following two alternate distance calculations, cribbed from Wikipedia. It's very informative to compare these calculations with the Haversine / Great Circle version above. For distances under a few hundred kms, the flat earth variant performs as well as the Haversine formula and the FCC algorithm differs in ways that don't seem to justify it's use if you're in a hurry. Anyway, here they are: # assume all points are {lat lon} where lat and lon # are expressed in decimal degrees # spherical projection on a flat earth # surprisingly accurate for distances under a few hundred kilometers # and requires only one cosine and five multiplications. # (no divisions if divide by 2 == a right shift in fixed point :-) # The ellipsoidal FCC flat earth projection requires 5 cosines # and nine multiplications proc fedist {pt1 pt2} { set dlat [expr {[lat $pt1] - [lat $pt2]}] set dlon [expr {[lon $pt1] - [lon $pt2]}] set dlon [expr {$dlon * cos([d2r [expr {([lat $pt1] + [lat $pt2])/2.0}]])}] set dlat [deg2km $dlat] set dlon [deg2km $dlon] expr {sqrt($dlat * $dlat + $dlon * $dlon)} # for details on this algorithm see FCC 47 CFR 73.208 # it's an ellipsoidal (Clarke 1866) projection onto a flat earth # the FCC says not to use this for distances over 475km # probably isn't as good in Australia as in N. Amer. proc fccdist {pt1 pt2} { set K11 111.13209 set K12 0.56605 set K13 0.0012 set K21 111.41513 set K22 0.09455 set K23 0.00012 set dlat [expr {[lat $pt1] - [lat $pt2]}] set dlon [expr {[lon $pt1] - [lon $pt2]}] set midpt [d2r [expr {([lat $pt1] + [lat $pt2])/2.0}]] set mphi $midpt set K1 $K11 set K2 [expr {$K21 * cos($mphi)}]; # cos(phi) set mphi [expr {$mphi + $midpt}] set K1 [expr {$K1 - $K12 * cos($mphi)}]; # cos(2phi) set mphi [expr {$mphi + $midpt}] set K2 [expr {$K2 - $K22 * cos($mphi)}]; # cos(3phi) set mphi [expr {$mphi + $midpt}] set K1 [expr {$K1 + $K13 * cos($mphi)}]; # cos(4phi) set mphi [expr {$mphi + $midpt}] set K2 [expr {$K2 + $K23 * cos($mphi)}]; # cos(5phi) expr {sqrt(pow($K1*$dlat,2) + pow($K2*$dlon,2))} # some helper procs proc deg2km d {expr {$d * 111.19492664455873}} package require math::constants math::constants::constants {degtorad} proc lat p {lindex $p 0} proc lon p {lindex $p 1} proc d2r d {expr {$d * $::degtorad}}
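As a quick cross-check of the spherical figures quoted above, the same great-circle calculation can be done in a few lines of Python (this snippet is an addition of ours, not part of the original wiki page; the function name and argument order are our own):

 import math

 def sphere_distance(lon1, lat1, lon2, lat2, radius=6378137.0):
     # great-circle distance on a sphere of the given radius (law of cosines form)
     y1, y2 = math.radians(lat1), math.radians(lat2)
     dx = math.radians(lon1 - lon2)
     d = math.acos(math.sin(y1) * math.sin(y2) + math.cos(y1) * math.cos(y2) * math.cos(dx))
     return radius * d

 # the example points used above: 10E 54N and 10.5E 54N
 print(sphere_distance(10, 54, 10.5, 54))
 # about 32716 m with the WGS84 major semi-axis as the radius

 print(sphere_distance(10, 54, 10.5, 54, 360 * 60 * 1852 / (2 * math.pi)))
 # about 32657 m when 1 minute of latitude is taken as 1 nautical mile

Both numbers agree with the figures quoted earlier on this page, and the remaining gap to the 32788 m ellipsoidal result shows why the Vincenty solution is used when that accuracy matters.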
{"url":"http://oldwiki.tcl-lang.org/9325","timestamp":"2024-11-12T17:06:21Z","content_type":"text/html","content_length":"22722","record_id":"<urn:uuid:72b4a3bf-5a96-4fff-83f5-b7e1486b61d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00220.warc.gz"}
Shelf:Elementary arithmetic - Wikibooks, open books for an open world

Elementary arithmetic

Books on this shelf deal with elementary arithmetic: the most basic kind of mathematics: it concerns the operations of addition, subtraction, multiplication, and division.
{"url":"https://en.m.wikibooks.org/wiki/Shelf:Elementary_arithmetic","timestamp":"2024-11-15T03:53:56Z","content_type":"text/html","content_length":"34980","record_id":"<urn:uuid:7498cadd-a98d-44e6-bcef-d62fb85a13f6>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00327.warc.gz"}
Excel 2016 Line Chart Multiple Series 2024 - Multiplication Chart Printable

Excel 2016 Line Chart Multiple Series – You can make a multiplication chart in Excel using a template. There are numerous templates available, and you can learn how to format your multiplication chart with them. Here are a few tips and tricks for making a multiplication chart. Once you have a template, all you have to do is copy the formula and paste it into a new cell. You can then use this formula to multiply one set of numbers by another set.

Multiplication table template

You may want to learn how to write a simple formula if you need to create a multiplication table. First, lock row one as the header row, then multiply the number in column A by the number in row one. Another way to build a multiplication table is to use mixed references. In this case, you might put one set of factors down column A and the other across row 1, and use a formula such as =$A2*B$1 that anchors the column of one factor and the row of the other. The result is a multiplication table with a single formula that works for both rows and columns.

If you are using an Excel program, you can use the multiplication table template to create your table. Just open the spreadsheet with your multiplication table template and change the name to the student's name. You can also modify the page to suit your individual requirements. There is an option to change the colour of the cells to alter the look of the multiplication table, too. Then, you can change the range of multiples to suit your needs.

Creating a multiplication chart in Excel

When you are using a multiplication table application, you can easily build a simple multiplication table in Excel. Simply create a sheet with columns and rows numbered from one to 40. Where a column and a row intersect is the answer. For example, if a row has a value of three and a column has a value of five, then the answer is three times five. The same goes for the other cells.

First, enter the numbers that you need to multiply. For example, if you need to multiply a set of numbers by three, you can type a formula for each number, starting in cell A1. To extend the figures, select the cells from A1 to A8, then drag the selection to cover the range of cells you need. You can then type the multiplication formula into the cells in the other columns and rows.
{"url":"https://www.multiplicationchartprintable.com/excel-2016-line-chart-multiple-series/","timestamp":"2024-11-06T12:13:04Z","content_type":"text/html","content_length":"52619","record_id":"<urn:uuid:e860b27a-3674-48ab-8bfc-10a84ac91d68>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00465.warc.gz"}
Combining multiple Excel files with different shee… Problem description Eventhough Power BI can connect to hundreds of data sources, in lot of scenarios people choose Excel instead of a more technical solution. Excel is usually connected to Power BI in two ways: 1. One way, is to connect one big file, which changes over time and with every single refresh new data is pulled to Power BI. 2. Second way, is when people use folders, where multiple files are being stored. Typical scenario would be that each month there is a new file, which must be combined with other files from previous months. Problem I am going to explain is related to the second scenario, where you try to combine multiple files, while Excel sheets have different names. For example, one file contains sheet “Values 2020”, and the other sheet “Values 2021”. With the default autogenerared function in Power Query this is not possible. Let’s look at the real life scenario: Initial Setup In my demonstration, I connected one of my SharePoint using SharePoint Folder connector. Result of such a connection can be seen below. My SharePoint Folder contains Excel files named “2020”, “2021”, and “2022”. The data structure of all those excel files is the same. Of course, my goal here is to combine all three files into one single query. I have two sheets and each sheet contains a very simple table with only three columns and several rows: As we can see my file “2020” contains sheet “Revenue 2020” and “Cost 2020”. Logically my other file “2021” has sheets with “Revenue 2021”, “Cost 2021” etc. Problem Solving Now if I jump back to the Power Query editor, where I already connected my SharePoint folder, I will click on the option of combining files: Next window will pop up with additional options: I will combine sheets with revenue, which will create new Power Query function in my Query list: The most important is our function called “Transform File”. This particular query is responsible for our excel sheets combination. Unfortunately, by default it does not work as we usually intended. Let’s look at the original query and see how the data looks like: We can see that only rows from file “2022” were imported. The reason why other files were not properly imported is because none of them contains my sheet called “Revenue 2022”. How can we fix this problem? Instead of connecting to a specific sheet name, we can simply use the position of the sheet. For example, if the data with revenue is always in the first sheet, we will connect only the first sheet from each file – different name does not matter anymore. For this, we need to go to autogenerated function “Transform File” and open Advanced Editor: In the Advanced Editor we need to pay extra attention to the step #“Revenue 2022_Sheet“. In this step we need to remove the hardcoded name and replace it with a more dynamic approach. The solution is that we remove everything in the curly brackets and replace it with a number. In my case we will use 0, since 0 represents first sheet, 1 represents second sheet etc. Now when we return to our original query, we can see that all data is imported as we wanted: As you can see, the solution is quite easy but it has huge impact, especially if you use Excel as your main data source. Be the first to comment
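The same idea, picking the sheet by position rather than by name, can be illustrated outside of Power Query as well. As a rough sketch (not from the article; the folder path and file names are made up), this is how it might look in Python with pandas:

 import glob
 import pandas as pd

 # hypothetical folder containing the yearly workbooks (2020.xlsx, 2021.xlsx, ...)
 files = sorted(glob.glob("shared/revenue/*.xlsx"))

 # sheet_name=0 selects the first sheet of each workbook by position,
 # so it does not matter that one file calls it "Revenue 2020" and another "Revenue 2021"
 frames = [pd.read_excel(f, sheet_name=0) for f in files]
 combined = pd.concat(frames, ignore_index=True)
 print(combined.head())

Here sheet_name=0 plays the same role as replacing the hard-coded sheet name with 0 inside the autogenerated Transform File function.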
{"url":"https://quantinsightsnetwork.com/combining-multiple-excel-files-with-different-shee/","timestamp":"2024-11-07T17:34:55Z","content_type":"text/html","content_length":"165713","record_id":"<urn:uuid:8ba83ca7-d101-4e68-8b85-e393febe04f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00427.warc.gz"}
How to Check if Set is Empty in Python

We can easily check if a set is empty in Python. An empty set has length 0 and evaluates to False in a boolean context, so to check if a set is empty, we can just check one of these conditions.

 empty_set = set()

 # length check
 if len(empty_set) == 0:
     print("Set is empty!")

 # if statement check
 if not empty_set:
     print("Set is empty!")

 # comparing to empty set
 if empty_set == set():
     print("Set is empty!")

In Python, sets are collections of elements which are unordered and mutable. When working with sets, it can be useful to be able to easily determine if the set is empty.

There are a few ways you can determine if a set is empty. First, you can always test to see if the set is equal to another empty set. Second, the length of an empty set is 0. Finally, when converting an empty set to a boolean value, we get False. We can use any one of these conditions to determine if a set is empty or not.

In the following Python code, you can see the three ways you can check if a set is empty or not.

 empty_set = set()

 # length check
 if len(empty_set) == 0:
     print("Set is empty!")

 # if statement check
 if not empty_set:
     print("Set is empty!")

 # comparing to empty set
 if empty_set == set():
     print("Set is empty!")

Checking if Set is Empty with if Statement in Python

One fact we can use in Python is that an empty set is equivalent to the boolean value False. This means we can test if a set is empty with a simple if statement: "if not some_set" is True exactly when the set is empty.

 empty_set = set()

 # if statement check
 if not empty_set:
     print("Set is empty!")

Checking if Set is Empty Using Python len() Function

One of the ways we can easily check if a set is empty in Python is with the Python len() function. The length of an empty set is 0.

Checking to see if a set is empty using the Python len() function is shown in the following Python code.

 empty_set = set()

 if len(empty_set) == 0:
     print("Set is empty!")

Checking if Set is Empty By Comparing to Another Empty Set in Python

You can also check if a set is empty by comparing it to another empty set. The same idea works if you want to check if a tuple is empty (compare it to ()) or if a dictionary is empty (compare it to {}).

Below is how to compare a set to an empty set to determine whether it is empty or not.

 empty_set = set()

 if empty_set == set():
     print("Set is empty!")

Hopefully this article has been useful for you to learn how to check if a set is empty in Python.
{"url":"https://daztech.com/python-check-if-set-is-empty/","timestamp":"2024-11-07T14:08:41Z","content_type":"text/html","content_length":"246561","record_id":"<urn:uuid:d0656203-a13b-4880-8099-c932786bb0a2>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00035.warc.gz"}
If F(x)=x^2+5x how do I find in simplest form F(p)-F(q)/p-q ? | HIX Tutor

Answer 1

 (p^2+5p-(q^2+5q))/(p-q)      insert the function
 (p^2+5p-q^2-5q)/(p-q)        multiply through by -1
 (p^2-q^2+5p-5q)/(p-q)        rearrange
 ((p+q)(p-q)+5(p-q))/(p-q)    difference of squares
 ((p-q)(p+q+5))/(p-q)         factor out p-q
 p+q+5                        cancel factors

Answer 2

To find (\frac{F(p) - F(q)}{p - q}), where (F(x) = x^2 + 5x), substitute (p) and (q) into (F(x)) to find (F(p)) and (F(q)). Then simplify the resulting expression.

[ F(p) = p^2 + 5p ]
[ F(q) = q^2 + 5q ]

Now, substitute these values into the expression:

[ \frac{F(p) - F(q)}{p - q} = \frac{(p^2 + 5p) - (q^2 + 5q)}{p - q} ]

Now, simplify the numerator:

[ = \frac{p^2 + 5p - q^2 - 5q}{p - q} ]
[ = \frac{p^2 - q^2 + 5(p - q)}{p - q} ]
[ = \frac{(p - q)(p + q) + 5(p - q)}{p - q} ]
[ = \frac{(p - q)(p + q + 5)}{p - q} ]
[ = p + q + 5 ]

Therefore, (\frac{F(p) - F(q)}{p - q} = p + q + 5).
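For readers who want to double check the algebra, here is a small Python snippet using sympy (our addition, not part of either answer):

 from sympy import symbols, cancel

 p, q = symbols('p q')

 def F(x):
     return x**2 + 5*x

 # cancel() removes the common (p - q) factor from the rational expression
 print(cancel((F(p) - F(q)) / (p - q)))   # prints p + q + 5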
{"url":"https://tutor.hix.ai/question/if-f-x-x-2-5x-how-do-i-find-in-simplest-form-f-p-f-q-p-q-975aeb0fb7","timestamp":"2024-11-14T10:55:28Z","content_type":"text/html","content_length":"577553","record_id":"<urn:uuid:c440ff6a-da0b-46c0-9b81-eac760a58ac6>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00631.warc.gz"}
Wheatstone Bridge Test Applications

The Wheatstone bridge circuit is ideal for many accurate test applications and as a result various formats have been developed to enable its use in many ways.

The Wheatstone bridge has been used for many years to provide a very accurate means of measuring resistance. Although many people may think its use is confined to demonstrations in school physics laboratories, nothing could be further from the truth. Wheatstone bridges can be found in high accuracy resistance measuring instruments, and in addition to this they are finding increasing use in sensors, from strain gauges to load cells, as well as many other sensing applications.

There are several variants on the basic Wheatstone bridge theme, including four wire and six wire variants, as well as versions having multiple active measurement elements.

Wheatstone bridge measurement basics

The basics of the Wheatstone bridge have been used for many years. In its early days it was used mainly to determine the resistance of a single unknown resistor by having two fixed resistors and a third calibrated variable resistor or potentiometer.

To understand the operation of the bridge, it can be considered as two parallel potential dividers consisting of R1 and R4 and then R2 and R3.

The Wheatstone bridge used for measuring unknown resistance

Typically R1 and R2 would be the same value, and therefore when there is a balance, i.e. no potential difference between the two points 'b' and 'd' on the diagram, the unknown resistance is the same as that of the potentiometer, which may be calibrated so its resistance is known for any position.

Note on the Wheatstone Bridge: The Wheatstone bridge circuit consists of four resistors, two in each arm, with a meter bridging the centres of the two arms. Although it appears to be a basic circuit, it is widely used in measurement systems, and the fundamental electrical calculations are easy to understand. Read more about the Wheatstone Bridge.

When used for measurement systems, the basic way in which the bridge circuit is used can be slightly different. For modern measurement systems, the Wheatstone bridge may be operated in what may be termed an out of balance condition. Here the bridge circuit may be almost in balance for the basic condition, and any deviation will result in a potential difference change across the centre of the bridge circuit, which can be denoted as Vout.

The Wheatstone bridge used for measuring unknown resistance

It is possible to easily calculate the change in voltage across the points between 'b' and 'd', giving a way of determining the resistance and hence any change in R3. This method is far preferable to having to vary another resistor to achieve a full balance within the bridge circuit.

It is possible to calculate the voltage across the points 'b' and 'd' quite easily. With R1, R2 and R4 all equal to R, and R3 equal to R plus a small change ΔR:

${V}_{\mathrm{out}}={V}_{\mathrm{in}}\left(\frac{R+\Delta R}{2R+\Delta R}-\frac{R}{2R}\right)$

This simplifies down to:

${V}_{\mathrm{out}}=\frac{{V}_{\mathrm{in}}}{2}\left(\frac{\Delta R}{2R+\Delta R}\right)$

where:
R is the value of R1, R2, and R4, which are all equal
ΔR is the small change in the value of R3 from R as a result of the changes in the sensor causing the resistance to change
Vin is the supply voltage to the Wheatstone bridge
Vout is the output voltage across the points 'b' and 'd'.
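As a rough numerical illustration of the single active element case (the function, supply voltage and resistor values below are our own choices, not from the article), a few lines of Python show how nearly linear the output is for small resistance changes:

 def single_element_vout(v_in, r, delta_r):
     # exact out-of-balance output for one active element,
     # with the other three arms fixed at r
     return v_in * ((r + delta_r) / (2 * r + delta_r) - 0.5)

 v_in, r = 10.0, 350.0            # e.g. a 10 V supply and 350 ohm arms
 for delta_r in (0.35, 3.5, 35.0):
     exact = single_element_vout(v_in, r, delta_r)
     linear = v_in * delta_r / (4 * r)   # small-signal approximation Vin*dR/(4R)
     print(delta_r, exact, linear)

For the smallest change the exact output and the small-signal approximation agree to within a fraction of a percent, while for the largest change the non-linearity becomes visible.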
For small changes in the sensor resistance, ΔR there is a virtually linear change on the output voltage Vout because R dominates over ΔR on the denominator of the equation. However for the small changes normally encountered in sensors such as strain gauges, load cells, etc using a Wheatstone bridge, the non-linearity is not an issue. For systems where larger changes in ΔR are encountered it is possible to correct this using processing. Two active elements: half bridge The example above looked the situation where only one of the resistors varied - the remaining three resistors all remained constant. It is possible to have two resistors change - for example R1 and R3 may both change, and this can be used in some sensors to advantage, especially where the resistance changes are small. This type of bridge circuit is often referred to as a half bridge. Wheatstone bridge with two sensor resistors To see how this affects the circuit, it cna be deduced in a similar way to the single variable Wheatstone bridge: ${V}_{\mathrm{out}}={V}_{\mathrm{in}}\left(\frac{\Delta R}{2R+\Delta R}\right)$ It can be seen in this instance that the change in output voltage is twice what it was before for a given change in resistance. This can be very important when the changes may only be very small. It can help with reducing errors caused by noise and transients, etc as well as accommodating a larger signal that is easier to It's also possible to place two active elements within the same branch. This is still referred to as a half bridge because there are two active elements. In this case the active elements should act in the opposite direction to each other, one increasing in resistance while the other decreases. If they both acted in the same sense, then the effects would nullify each other as they are effectively in the same potential divider. Wheatstone bridge with two sensor resistors in same leg ${V}_{\mathrm{out}}={V}_{\mathrm{in}}\left(\frac{\Delta R}{2R}\right)$ Note that in this case the equation is rather different to that of the example where they are in different legs and acting in the same sense. Four active elements: full bridge The logical extension of the half bridge is to extend the number of variable or active elements to four in what is known as a full bridge. For the benefits of this topology to be gained, the resistors in each branch must act in opposite directions and then the other branch must act in the opposite sense to the first one as shown in the Wheatstone bridge using four active elements It is found that the sensitivity of this type of bridge is better than any version of the half bridge. ${V}_{\mathrm{out}}={V}_{\mathrm{in}}\left(\frac{\Delta R}{R}\right)$ It is possible to see from this that the full bridge has double the sensitivity of the half bridge circuits and four times that of the single active element bridge. This is hardly surprising seeing that four active elements are going to be better than one. There are several variations on the basic Wheatstone bridge that can all be used with various forms of measurement and sensing. The bridge is becoming particularly widespread with the huge number of sensors that are being incorporated into automated control and monitoring systems as well as accurate sensing technologies. Even though many might feel the Wheatstone bridge had been consigned to the basic experiments for school physics laboratories, nothing could be further from the truth. Ian Poole . Experienced electronics engineer and author. 
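To see these sensitivity ratios numerically, here is a short sketch (our own, with arbitrary example values) that evaluates the single element, half bridge and full bridge expressions from this section for the same small resistance change:

 def bridge_outputs(v_in, r, dr):
     single = v_in * 0.5 * dr / (2 * r + dr)   # one active element
     half = v_in * dr / (2 * r + dr)           # two active elements in different legs
     half_same_leg = v_in * dr / (2 * r)       # two active elements in the same leg
     full = v_in * dr / r                      # four active elements
     return single, half, half_same_leg, full

 print(bridge_outputs(10.0, 350.0, 0.35))
 # roughly (0.0025, 0.005, 0.005, 0.01): the full bridge output is about twice that of
 # the half bridge configurations and about four times that of the single element bridge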
{"url":"https://www.electronics-notes.com/articles/test-methods/bridge-measurement-circuits/wheatstone-bridge-applications-test.php","timestamp":"2024-11-07T21:39:56Z","content_type":"text/html","content_length":"38316","record_id":"<urn:uuid:f56e1f7d-3adc-4226-bb72-ecb2d027652f>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00483.warc.gz"}
Understanding Darcs/Patch theory - Wikibooks, open books for an open world Math and computer science nerds only (The occasional physicist will be tolerated) Casual users be warned, the stuff you're about to read is not for the faint of heart! If you're a day-to-day darcs user, you probably do not need to read anything from this page on. However, if you are interested in learning how darcs really works, we invite you to roll up your sleeves, and follow us in this guided tour of the growing Theory of Patches. What is the theory of patches? The darcs patch formalism is the underlying "math" which helps us understand how darcs should behave when exchanging patches between repositories. It is implemented in the darcs engine as data structures for representing sequences of patches and Haskell functions equivalent to the operations in the formalism. This section is addressed at two audiences: curious onlookers and people wanting to participate in the development of the darcs core. My aim is to help you understand the intuitions behind all this math, so that you can get up to speed with current conflictors research as fast as possible and start making contributions. You should note that I myself am only starting to learn about patch theory and conflictors, so there may be mistakes ahead. One difference between centralized and distributed version control systems is that "merging" is something that we do practically all the time, so it is doubly important that we get merging right. Turning the problem of version control into a math problem has two effects: it lets us abstract all of the irrelevant implementation details away, and it forces us to make sure that whatever techniques we come up with are fundamentally sound, that they do not fall apart when things get increasingly complicated. Unfortunately, math can be difficult for people who do not make use of it on a regular basis, so what we attempt to do in this manual is to ease you into the math through the use of concrete, illustrated examples. A word of caution though, "getting merging right" does not necessarily consist of having clever behaviour with respect to conflicts. We will begin by focusing on successful, non-conflicting merges and move on to the darcs approach to handling conflicts. Note that the notation we use follows that from FOSDEM 2006, and not the darcs patch theory appendix. Namely, patch composition is written left to right. ${\displaystyle AB}$ means that B is applied after A Context, patches and changes Let us begin with a little shopping. Arjan is working to build a shopping list for the upcoming darcs hackathon. As we speak, his repository contains a single file s_list with the contents 1 apples 2 bananas 3 cookies 4 rice Note:the numbers you see are just line numbers; they are not part of the file contents As we will see in this and other examples in this book, we will often need to assign a name to the state of the repository. We call this name a context. For example, we can say that Arjan's repository is a context ${\displaystyle o}$ , defined by there being a file s_list with the contents mentioned above. the initial context Arjan makes a modification which consists of adding a line in s_list. His new file looks like this: 1 apples 2 bananas 3 beer 4 cookies 5 rice When Arjan records this change (adding beer), we produce a patch which tells us not only what contents Arjan added ("beer") but where he added them, namely to line 3 of s_list. 
We can say that in his repository, we have moved from context ${\displaystyle o}$ to context ${\displaystyle a}$ via a patch A. We can write this using a compact notation like ${\displaystyle {}^{o}A^{a}}$ or using the graphical representation below: patch A Starting from this context, Arjan might decide to make further changes. His new changes would be patches that apply to the context of the previous patches. So if Arjan makes a new patch ${\ displaystyle B}$ on top of this, it would take us from context ${\displaystyle a}$ to some new context ${\displaystyle b}$ . The next patch would take us from this context to yet another new context ${\displaystyle c}$ , and so on and so forth. Patches which apply on top of each other like this are called sequential patches. We write them in left to right order as in the table below, either representing the contexts explicitly or leaving them out for brevity: │ with context │sans context (shorthand)│ │${\displaystyle {}^{o}A^{a}}$ │${\displaystyle A}$ │ │${\displaystyle {}^{o}A^{a}B^{b}}$ │${\displaystyle AB}$ │ │${\displaystyle {}^{o}A^{a}B^{b}C^{c}}$ │${\displaystyle ABC}$ │ All darcs repositories are simply sequences of patches as above; however, when performing a complex operation such as an undo or exchanging patches with another user, it becomes absolutely essential that we have some mechanism for rearranging patches and putting them in different orders. Darcs patch theory is essentially about giving a precise definition to the ways in which patches and patch-trees can be manipulated and transformed while maintaining the coherence of the repository. Let's return to the example from the beginning of this module. Arjan has just added beer to our hackathon shopping list, but in a sudden fit of indecisiveness, he reconsiders that thought and wants to undo his change. In our example, this might consist of firing up his text editor and remove the offending line from the shopping list. But what if his changes were complex and hard to keep track of? The better thing to do would be to let darcs figure it out by itself. Darcs does this by computing an inverse patch, that is, a patch which makes the exact opposite change of some other patch: Definition (Inverse of a patch): The Inverse of patch ${\displaystyle P}$ is ${\displaystyle P^{-1}}$ , which is the patch for which the composition ${\displaystyle PP^{-1}}$ makes no changes to the context and for which the inverse of the inverse is the original patch. So above, we said that Arjan has created a patch ${\displaystyle A}$ which adds beer to the shopping list, passing from context ${\displaystyle o}$ to ${\displaystyle a}$ , or more compactly, ${\ displaystyle {}^{o}A^{a}}$ . Now we are going to create the inverse patch ${\displaystyle A^{-1}}$ , which removes beer from the shopping list and brings us back to context ${\displaystyle o}$ . In the compact context-patch notation, we would write this as ${\displaystyle {}^{o}A^{a}{A^{-1}}^{o}}$ . Graphically, we would represent the situation like this: An inverse patch makes the opposite change of another patch Patch inverses may seem trivial, but as we will see later on in this module, they are a fundamental operation and absolutely crucial to make some of the fancier stuff -- like merging -- work correctly. One of the rules we impose in darcs is that every patch must have an inverse. These rules are what we call patch properties. A patch property tells us things which must be true about a patch in order for darcs to work. 
People often like to dream up new kinds of patches to extend darcs's functionality, and defining these patch properties is how we know that their new patch types will behave properly under darcs. The first of these properties is dead simple: Patch property: Every patch must have an inverse Arjan was lucky to realise that he wanted to undo his change as quickly as he did. But what happens if he was a little slower to realise his mistake? What if he makes some other changes before realising that he wants to undo the first change? Is it possible to undo his first change without undoing all the subsequent changes? It sometimes is, but to do so, we need to define an operation called commutation. Consider a variant of the example above. As usual, Arjan adds beer to the shopping list. Next, he decides to add some pasta on line 5 of the file: Beer and pasta The question is how darcs should behave if Arjan now decides that he does not want beer on the shopping list after all. Arjan simply wants to remove the patch that adds the beer, without touching the one which adds pasta. The problem is that darcs repositories are simple, stupid sequences of patches. We can't just remove the beer patch, because then there would no longer be a context for the pasta patch! Arjan's first patch ${\displaystyle A}$ takes us to context ${\displaystyle a}$ like so: ${\displaystyle {}^{o}A^{a}}$ , and his second patch takes us to context ${\displaystyle b}$ , notably starting from the initial context ${\displaystyle a}$ : ${\displaystyle {}^{a}B^{b}}$ . Removing patch ${\displaystyle A}$ would be pulling the rug out from under patch ${\displaystyle B}$ . The trick behind this is to somehow change the order of patches ${\displaystyle A}$ and ${\displaystyle B}$ . This is precisely what commutation is for: The commutation of patches ${\displaystyle X}$ and ${\displaystyle Y}$ is represented by ${\displaystyle XY\leftrightarrow {Y_{1}}{X_{1}}}$ , where ${\displaystyle X_{1}}$ and ${\displaystyle Y_{1}}$ are intended to perform the same change as ${\displaystyle X}$ and ${\displaystyle Y}$ Why not keep our old patches? To understand commutation, you should understand why we cannot keep our original patches, but are forced to rely on evil step sisters instead. It helps to work with a concrete example such as the beer and pasta one above. While we could write the sequence ${\displaystyle AB}$ to represent adding beer and then pasta, simply writing ${\displaystyle BA}$ for pasta and then beer would be a very foolish thing to do. Put it this way: what would happen if we applied ${\displaystyle B}$ before ${\displaystyle A}$ ? We add pasta to line 5 of the file: 1 apples 2 bananas 3 cookies 4 rice 5 pasta Does something seem amiss to you? We continue by adding beer to line 3. If you pay attention to the contents of the end result, you might notice that the order of our list is subtly wrong. Compare the two lists to see why: │${\displaystyle BA}$ (wrong!) │${\displaystyle AB}$ (right)│ │1 apples │1 apples │ │2 bananas │2 bananas │ │3 beer │3 beer │ │4 cookies │4 cookies │ │5 rice │5 pasta │ │6 pasta │6 rice │ It might not matter here because it is only a shopping list, but imagine that it was your PhD thesis, or your computer program to end world hunger. The error is all the more alarming because it is subtle and hard to pick out with the human eye. The problem is one of context, specifically speaking, the context between ${\displaystyle A}$ and ${\displaystyle B}$ . 
In order for instructions like "add pasta to line 5 of s_list" to make any sense, they have to be in the correct context. Fortunately, commutation is easy to do, it produces two new patches ${\displaystyle B_{1}}$ and ${\displaystyle A_{1}}$ which perform the same change as ${\displaystyle A}$ and ${\displaystyle B}$ but with a different context in between. Patch ${\displaystyle A_{1}}$ is identical to ${\displaystyle A}$ . It adds "beer" to line 3 of the shopping list. But what should patch ${\displaystyle B_{1}}$ do? One more important detail to note though! We said earlier that getting the context right is the motivation behind commutation -- we can't simply apply patches ${\displaystyle AB}$ in a different order, ${\displaystyle BA}$ because that would get the context all wrong. But context does not have any effect on whether A and B can commute (or how they should commute). This is strictly a local affair. Conversely, the commutation of A and B does not have any effect either on the global context: the sequences ${\displaystyle AB}$ and ${\displaystyle B_{1}A_{1}}$ (where the latter is the commutation of the former) start from the same context and end in the same context. The complex undo revisited Now that we know what the commutation operation does, let's see how we can use it to undo a patch that is buried under some other patch. The first thing we do is commute Arjan's beer and pasta patches. This gives us an alternate route to the same context. But notice the small difference between ${\displaystyle B}$ and ${\displaystyle B_{1}}$ ! commutation gives us two ways to get to the same patch The purpose of commuting the patches is essentially to push patch ${\displaystyle A}$ on to end of the list, so that we could simply apply its inverse. Only here, it is not the inverse of ${\ displaystyle A}$ that we want, but the inverse of its evil step sister ${\displaystyle A_{1}}$ . This is what applying that inverse does: it walks us back to the context ${\displaystyle b1}$ , as if we had only applied the pasta patch, but not the beer one. commutation gives us two ways to get to the same patch And now the undo is complete. To sum up, when the patch we want to undo is buried under some other patch, we use commutation to squeeze it to the end of the patch sequence, and then compute the inverse of the commuted patch. For the more sequentially minded, this is what the general scheme looks like: Using commutation to handle a complex undo Imagine the opposite scenario: Arjan had started by adding pasta to the list, and then followed up with the beer. 1. If there was no commutation, what concretely would happen if he tried to remove the pasta patch, and not the beer patch? 2. Work out how this undo would work using commutation. Pay attention to the line numbers. Commutation and patches Every time we define a type of patch, we have to define how it commutes with other patches. Most of time, it is very straightforward. When commuting two hunk patches, for instance, we simply adjust their line offset. For instance, we want to put something on line 3 of the file, but if we use patch ${\displaystyle Y}$ to insert a single line before that, what used to be line 3 now becomes line 4! So patch ${\displaystyle X_{1}}$ inserts the line "x" into line 4, much like ${\displaystyle X}$ inserts it into line 3. Some patches cannot be commuted. For example, you can't commute the addition of a file with adding contents to it. But for now, we focus on patches which can commute. 
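To make the offset adjustment concrete, here is a small Python sketch using the same invented single-line "add" hunk representation as before (real darcs hunks carry groups of old and new lines). It commutes two add hunks by shifting line numbers, and refuses to commute when they touch adjacent lines.

# Commute the sequence X;Y of two single-line 'add' hunks into Y1;X1.
# Returns None when the hunks are adjacent/overlapping in this simple model.

def commute_adds(x, y):
    (_, xpos, xtext), (_, ypos, ytext) = x, y
    if ypos <= xpos:                   # Y inserts at or above X's line:
        x1 = ('add', xpos + 1, xtext)  # so X's target shifts down by one
        y1 = ('add', ypos, ytext)
    elif ypos > xpos + 1:              # Y inserts below X's line:
        x1 = ('add', xpos, xtext)
        y1 = ('add', ypos - 1, ytext)  # without X applied first, Y's target moves up one line
    else:
        return None                    # adjacent lines: don't commute in this sketch
    return y1, x1                      # the sequence Y1;X1 replaces X;Y

A = ('add', 3, 'beer')    # beer on line 3
B = ('add', 5, 'pasta')   # pasta on line 5, counted after A is applied
B1, A1 = commute_adds(A, B)
print(B1, A1)             # ('add', 4, 'pasta') ('add', 3, 'beer')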
Note: this might be a good place to take a break. We are moving on to a new topic and new (but similar) examples. We have presented two fundamental darcs operations: patch inverse and patch commutation. It turns out these two operations are almost all that we need to perform a darcs merge. Arjan and Ganesh are working together to build a shopping list for the upcoming darcs hackathon. Arjan initialises the repository and adds a file s_list with the contents 1 apples 2 bananas 3 cookies 4 rice He then records his changes, and Ganesh performs a darcs get to obtain an identical copy of his repository. Notice that Arjan and Ganesh are starting from the same context (the initial context). Arjan makes a modification which consists of adding a line in s_list. His new file looks like this: 1 apples 2 bananas 3 beer 4 cookies 5 rice Arjan's patch brings him to a new context ${\displaystyle a}$ : patch A Now, in his repository, Ganesh also makes a modification; he decides that s_list is a little hard to decipher and renames the file to shopping. Remember, at this point, Ganesh has not seen Arjan's modifications. He's still starting from the original context ${\displaystyle o}$ , and has moved to a new context ${\displaystyle b}$ , via his patch ${\displaystyle B}$ : patch B At this point in time, Ganesh decides that it would be useful if he got a copy of Arjan's changes. Roughly speaking, we would like to pull Arjan's patch A into Ganesh's repository B. But, there is a major problem! Namely, Arjan's patch takes us from context ${\displaystyle o}$ to context ${\displaystyle a}$ . Pulling it into Ganesh's repository would involve trying to apply it to context ${\displaystyle b}$ , which we simply do not know how to do. Put another way: Arjan's patch tells us to add a line to file s_list; however, in Ganesh's repository, s_list no longer exists, as it has been moved to shopping. How are we supposed to know that Arjan's change (adding the line "beer") is supposed to apply to the new file shopping instead? Ganesh attempts to pull from Arjan Arjan and Ganesh's patches start from the same context o and diverge to different contexts a and b. We say that their patches are parallel to each other, and write it as ${\displaystyle A\vee B}$ . In trying to pull patches from Arjan's repository, we are trying to merge these two patches. The basic approach is to convert the parallel patches into the sequential patches ${\displaystyle BA_{1}}$ , such that ${\displaystyle A_{1}}$ does essentially the same change as ${\displaystyle A}$ does, but within the context of b. We want to produce the situation ${\displaystyle {}^{o}{B}^{b}{A_{1}}^{c}}$ . Performing the merge Converting Arjan and Ganesh's parallel patches into sequential ones requires little more than the inverse and commutation operations that we described earlier in this module: 1. So we're starting out with just Ganesh's patch. In context notation, we are at ${\displaystyle {}^{o}{B}^{b}}$ 2. We calculate the inverse patch ${\displaystyle B^{-1}}$ . The sequence ${\displaystyle BB^{-1}}$ consists of moving s_list to shopping and then back again. We've walked our way back to the original context: ${\displaystyle {}^{o}{B}^{b}{B^{-1}}^{o}}$ 3. Now we can apply Arjan's patch without worries: ${\displaystyle {}^{o}{B}^{b}{B^{-1}}^{o}A^{a}}$ , but the result does not look very interesting, because we've basically got the same thing Arjan has now, not a merge. 4. 
All we need to do is commute the last two patches, ${\displaystyle {B}^{-1}{A}}$ , to get a new pair of patches ${\displaystyle {A_{1}}{B_{1}}^{-1}}$ . Still, the end result doesn't seem to look very interesting since it results in exactly the same state as the last step: ${\displaystyle {}^{o}{B}^{b}{A_{1}}^{c}{{B_{1}}^{-1}}^{a}}$ 5. However, one crucial difference is that the second to last patch produces just the state we're looking for! All we now have to do to get at it is to ditch the ${\displaystyle {B_{1}}^{-1}}$ patch, which is only serving to undo Ganesh's precious work anyway. That is to say, by simply determining how to produce an ${\displaystyle A_{1}}$ which will commute with ${\displaystyle B}$ , we have determined the version of ${\displaystyle A}$ which will update Ganesh's repository. Merging via inverse and commutation The end result of all this is that we have the patch we're looking for, ${\displaystyle A_{1}}$ and a successful merge. We want a new patch A_1 Merging is symmetric Merging is symmetric Concretely, we've talked about Ganesh pulling Arjan's patch into his repository, so what about the other way around? Arjan pulling Ganesh's patch into his repository would work the same exact way, only that he is looking for a commuted version of Ganesh's patch ${\displaystyle B_{1}}$ that would apply to his repository. If Ganesh can pull Arjan's patch in, then Arjan can pull Ganesh's one too, and the result would be exactly the same: Merging is symmetric Definition (Merge of two patches): The result of a merge of two patches ${\displaystyle A}$ and ${\displaystyle B}$ is one of two patches ${\displaystyle A_{1}}$ and ${\displaystyle B_{1}}$ , which satisfy the relationship ${\ displaystyle A\vee B\Longrightarrow B{A_{1}}\leftrightarrow A{B_{1}}}$ The merge definition describes what should happen when you combine two parallel patches into a patch sequence. The built-in symmetry is essential for darcs because a darcs repository is defined entirely by its patches. Put another way, To be written The commutation with inverse property The definition of a merge tells us what we want merging to look like. How did we know how to actually perform that merge? The answer comes out of the following property of commutation and inverse: if you can commute the inverse of a patch ${\displaystyle A^{-1}}$ with some other patch ${\displaystyle B}$ , then you can also commute the patch itself against ${\displaystyle B_{1}}$ . ${\displaystyle B{A_{1}}\leftrightarrow A{B_{1}}}$ if and only if ${\displaystyle {B_{1}}{A_{1}}^{-1}\leftrightarrow {A^{-1}}B}$ , provided both commutations succeed. Note how the left hand side of this property exactly matches the relationship demanded by the definition of a merge. To see why this all works, To be written Definitions and properties definition of inverse ${\displaystyle AA^{-1}}$ has no effect inverse of an inverse ${\displaystyle (A^{-1})^{-1}=A}$ inverse composition property ${\displaystyle (AB)^{-1}=B^{-1}A^{-1}}$ definition of commutation ${\displaystyle AB\leftrightarrow {B_{1}}{A_{1}}}$ definition of a merge ${\displaystyle A\vee B\Longrightarrow B{A_{1}}\leftrightarrow A{B_{1}}}$ commutation with inverse property ${\displaystyle B{A_{1}}\leftrightarrow A{B_{1}}}$ if and only if ${\displaystyle {B_{1}}{A_{1}}^{-1}\leftrightarrow {A^{-1}}B}$
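Putting inverse and commutation together, a Python sketch of the merge recipe above might look like this. The two patch kinds ('addline' and 'move'), the commute convention and the function names are all invented for illustration; they are not darcs' real patch types or merge code.

# Sketch of the merge recipe: given parallel patches A (Arjan) and B (Ganesh),
# produce A1, the version of A that applies on top of B.

def commute(x, y):
    """Commute the sequence x;y into y1;x1 and return (y1, x1).
    Only the one case this demo needs is handled: a file move followed by an addline."""
    if x[0] == 'move' and y[0] == 'addline':
        _, old, new = x
        _, fname, pos, text = y
        # the addline retargets to the file's name before the move was applied
        y1 = ('addline', old if fname == new else fname, pos, text)
        return y1, x      # the move itself is unchanged in this simple model
    raise NotImplementedError

def invert(p):
    if p[0] == 'move':
        return ('move', p[2], p[1])
    raise NotImplementedError    # only what this demo needs

def merge(a, b):
    """Given parallel patches A and B (both starting from context o), return A1
    such that B;A1 has the same effect as A;B1."""
    # Recipe from the text: form B^-1;A, commute it into A1;B1^-1, keep A1.
    a1, _b1_inverse = commute(invert(b), a)
    return a1

A = ('addline', 's_list', 3, 'beer')        # Arjan: o -> a
B = ('move', 's_list', 'shopping')          # Ganesh: o -> b
print(merge(A, B))                          # ('addline', 'shopping', 3, 'beer')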
{"url":"https://en.m.wikibooks.org/wiki/Understanding_Darcs/Patch_theory","timestamp":"2024-11-10T21:38:16Z","content_type":"text/html","content_length":"210310","record_id":"<urn:uuid:660d0522-57f2-40a1-a5aa-22a13f88da3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00607.warc.gz"}
Coherent pairs and Sobolev-type orthogonal polynomials on the real line: An extension to the matrix case | Escuela Matemáticas y Estadística, UPTC In this contribution, we extend the concept of a coherent pair to two quasi-definite matrix linear functionals $u_{0}$ and $u_{1}$. Necessary and sufficient conditions for these functionals to constitute a coherent pair are determined when one of them satisfies a matrix Pearson-type equation. Moreover, we deduce algebraic properties of the matrix orthogonal polynomials associated with the Sobolev-type inner product $$<p,q>_{s} = <p,q>_{\mathbf{u}_0} + <p'\mathbf{M}_1,q'\mathbf{M}_2>_{\mathbf{u}_1},$$ where $\mathbf{M}_1$ and $\mathbf{M}_2$ are non-singular matrices and $p$ and $q$ are matrix polynomials. Journal of Mathematical Analysis and Applications, 518(1), 341-358. doi:10.1016/j.jmaa.2022.126674
{"url":"https://matematicas.netlify.app/publication/2023-02-01-coherent_pairs_sobolev-type_orthogonal_polynomials/","timestamp":"2024-11-10T10:39:24Z","content_type":"text/html","content_length":"20359","record_id":"<urn:uuid:1063a89a-b4de-4283-a9c3-bc71f78d9690>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00191.warc.gz"}
What is the typical bitcoin transaction? Have you ever wondered what a typical Bitcoin transaction is? We will arrive at the answer using onchain analysis. This is also an introduction to onchain analysis for beginners explaining the concepts of an UTXO and its fundamental properties. In this blog post, we'll explore how onchain analysis can help us better understand Bitcoin transactions. By breaking down transactions based on the spent bitcoin's value and age, we can gain insights into who is active in the market. Now, you might be wondering, what exactly is a typical Bitcoin transaction? This is not only a fun exercise, but also a great introduction to onchain analysis for beginners. What is a bitcoin transaction? A bitcoin transaction happening onchain is simply the transfer of bitcoin value from one UTXO to new UTXOs. We need to clarify what an UTXO is! An UTXO is the onchain record of unspent bitcoins. In some ways, UTXOs very much resembles the coins you have in your fiat wallet. An UTXO can have any value, e.g. 0.1 BTC or 100 BTC. To spend an UTXO you need to unlock it with your private key, e.g.the mnemonic seed phrase. Not your keys, not your bitcoin. Another curious property of an UTXO is that it can only be spent Once spent, the UTXO is destroyed, and it's value are "transacted" into one or more new UTXOs. These new UTXOs start with a reset lifespan of 0 days, but will age until they are spent. So, every UTXO has an age. We have just revealed the two most fundamental properties in onchain analysis; the age and value of an UTXO. By analyzing the value and age of all transactions, we can start to paint a picture of what a typical Bitcoin transaction might look like. So, let's dive in and explore the data! What is the value of a typical UTXO? Let's start with the chart below. All spent UTXOs on each day is grouped by increasing USD value visualized by different colors blue-to-red. How large a group is, is reflected by the number of UTXO in each group. The "Very small tx" and "Small tx" clearly dominates. As you can see form the chart, bitcoin transactions with low values account for the gross numerical majority. Spent Output Grouped by Value from 2013-2023. Transactions worth less than 1000 USD accounts for > 60% of the total number of transactions. What is the age of a typical spent UTXO? Now, let's take a look at the next chart, which groups transactions by how old the spent UTXOs are (in days). It turns out that the numerical majority of UTXOs being transacted are quite young – almost 80% of them are less than a month old. Bitcoins transacted are usually typically young, close to 80% are less than 1 month old. Spent Output Grouped by Age. Over 80% of all bitcoin transacted are usually less than 30 days old. By analyzing bitcoin transactions in this way, we can gain insight into the two key properties of a bitcoin: its value and age. These two properties are important for onchain analysis and can help us determine who is transacting at a larger scale. More about that in a later blog post. So, what can we conclude from all of this analysis? Well, based on our data (and a little more analysis), the typical Bitcoin transaction on April 18th, 2023 had a value of $21.80 and an age of 13.0 days (medians). UTXO = Unspent Transaction Output
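As a rough illustration of the grouping described above, here is a Python sketch that buckets a list of spent outputs by USD value and by age. The sample data, bucket edges and group names are made up for the example; they are not the article's actual dataset or thresholds.

import bisect

# Each spent output: (usd_value, age_days). Made-up sample data.
spent = [(12.5, 3), (850.0, 2), (40.0, 45), (21.8, 13), (1.2e6, 400), (5000.0, 9)]

value_edges = [100, 1000, 10000, 100000, 1000000]            # USD bucket boundaries
value_names = ['very small', 'small', 'medium', 'large', 'very large', 'whale']
age_edges   = [1, 7, 30, 180, 365]                           # days
age_names   = ['<1d', '1d-1w', '1w-1m', '1m-6m', '6m-1y', '>1y']

value_counts = {name: 0 for name in value_names}
age_counts = {name: 0 for name in age_names}
for usd, age in spent:
    value_counts[value_names[bisect.bisect_right(value_edges, usd)]] += 1
    age_counts[age_names[bisect.bisect_right(age_edges, age)]] += 1

print(value_counts)   # counts of spent outputs per value group
print(age_counts)     # counts of spent outputs per age group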
{"url":"https://researchbitcoin.net/what-is-a-typical-bitcoin-transaction/","timestamp":"2024-11-11T04:31:34Z","content_type":"text/html","content_length":"24768","record_id":"<urn:uuid:245cdc5d-0b30-433c-93a8-f1f0187cf339>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00116.warc.gz"}
Fraction Exponent Calculator A fraction exponent calculator is designed to find the fractional exponent of a number “X” of the form \(X^{\frac{n}{d}}\). Where “X” is the base, the numerator of the exponent is “n” and the denominator is “d”. What is a Fractional Exponent? "A Fractional Exponent can be defined as a number having power in terms of fraction rather than an integer." “X”, “n” and “d” = R If we enter the other than the real number then we are not able to find the value of exponential fractions. X(base)numerator/denominator = Xn/d How to Solve Fractions with Exponents? There can be two situations we can face when solving the fractions with exponents. These can be multiplied by each other or divided. We implement the two following laws of the exponents when solving the fractions with exponents: • The Law of Multiplication of Exponents • The Law of Division of Exponents The Law of Multiplication of Exponents: “ We need to add the exponents when multiplying the exponents.” This is the Law of the exponents for multiplication and we frequently apply it when simplifying fractional exponents in the multiplication of fractional exponents. Let's use the law of exponents for multiplication which depicts that we can add the exponents when multiplying two powers that have the same base: x^a * x^b= x^(a+b) Now consider how we can solve if n = 2 x * x=x¹ * x¹= x¹⁺¹=x² Try this with any exponent number you like, it's always true! The fraction exponent calculator is compatible with multiplying Fractional Exponents. The Law of Division of Exponents: “Subtract the exponents When we divide the exponents.” This is the Law of the exponents for division and we frequently apply it when simplifying fractional exponents in the division of fractional exponents. Let's use the law of exponents for the division which describe subtracting the exponents when dividing two powers that have the same base: x^a/x^b= x^(a-b) Now consider x^2 / x=x^2 / x¹= x^2-1=x¹ Try this with any number you like, it's always true! The Fraction exponent calculator is suited to divide Fractional Exponents. Properties of the Fractional Exponents: We need to simplify fractional exponents by knowing the properties of the fractional exponents. Fractional Exponents having Numerators “1”: The fractional exponent is a way to express the power and base(roots) in one notation. The fractional exponent having numerators are most commonly used in our calculation: There are different kinds of Fractional Exponents having Numerators “1”: • An exponent having an exponent like ½ (Square root exponent) • An exponent having an exponent like 1/3 (Cube root exponent) • An exponent having an exponent like 1/4 (Fourth root exponent) We can portray the exponent as having a numerator like 1/k as follows: X1/k= kx The fraction exponent calculator is convenient to solve the exponents Fractional Exponents having Numerators equal “1” Examples 1: Now consider a fraction having an exponent (½)or ⎷ Then, let's look at the fractional exponents of x: x ^(1/2) * x ^(1/2)= ⎷x* ⎷x= x (1/2 + 1/2)= x¹=x Convenient to use the square-root-calculator for solving the Square root, Fractional Exponents. 
Examples 2: Now consider a fraction having an exponent of cube root (1/3), or ∛. Then, let's look at the fractional exponents of x: x^(1/3) * x^(1/3) * x^(1/3) = \(\sqrt[3]{x}\) * \(\sqrt[3]{x}\) * \(\sqrt[3]{x}\) = x^(1/3 + 1/3 + 1/3) = x^1 = x Fractional Exponents having Different Numerators Than "1": To solve fractions with exponents having numerators different than "1" (that is, the numerator of the fraction ≠ 1), our fraction with an exponent looks like this: n = the whole-number part of the fractional exponent, 1/d = the root part of the fractional exponent X^(n/d) = X^(n · 1/d) = (X^n)^(1/d) = (X^(1/d))^n X^(n/d) = d⎷(X^n) = [d⎷X]^n The fraction exponent calculator easily solves fractional exponents having numerators different than "1". Let's try to understand with the base x = 4 and fractional exponent = 3/2: the numerator part 3 is applied first, then we apply the (1/2) denominator part. • 4^(3/2) = 4^(3 · 1/2) = (4^3)^(1/2) = √(4³) = √64 = 8 Alternative Method: • 4^(3/2) = 4^((1/2) · 3) = (4^(1/2))^3 = (√4)³ = 2³ = 8 The fraction exponent calculator accepts the values of the Base, and the numerator and denominator of the fractional exponent, separately. Fractional Exponents having a Negative Fraction: How to solve fractions with exponents when the exponent is negative. If our exponent is a negative number, then negative fraction exponents are solved like this: Consider: X^(-3) = 1/X · 1/X · 1/X = 1/X³ We can represent it as: X^(-n/d) = 1/(d⎷(X^n)) The negative fractional exponents calculator is a convenient way to solve negative fractional exponents. Working of the Fraction Exponent Calculator: We do believe the rational exponents calculator is straightforward, but just for the record we are going to explain the working of the calculator. • Enter the value of the Base • Enter the value of the numerator • Enter the value of the denominator • Calculate the fraction exponent Output: The fractional exponents calculator is a helpful way for students who find it difficult to solve fractions with exponents. • The answer is given above • All the steps involved are explained It can be convenient for us to simplify fractions with an exponents calculator, as the tool is extremely collaborative and interactive for students. How do you add fractional exponents having the same base? We combine the bases of the exponents and apply the addition process to the exponents. X^5 * X^3 = X^(5+3) = X^8 How do you add fraction exponents with the same denominators? When fractional exponents have the same denominator, we keep the denominator and add all the numerator values. Like ⅖ + ⅗ + ⅘ = 9/5 What is the quotient rule for the exponent of the same base? The quotient rule states that we can use the one base for fractional exponents having the same base but different exponential fractions. Like: X^(4/3) * X^(1/3) = X^(4/3 + 1/3) = X^(5/3) We can use the different quotient calculators for solving the quotients. We commonly use the fractional exponent in the field of mathematics but need to enter only real numbers in the fractions. The fraction exponent calculator is a straightforward and simple way to solve difficult fractional exponents. The fractional exponent can be difficult to solve when we are dealing with a higher power of the Base. From the source studypug.com: Quotient rule of exponents, Lessons, Basic Concepts From the source Tutorme.com: The Basics of Exponents, Fractions With Exponents, Solving Fractions With Exponents From the source mathsisfun.com: Whole Number Exponents, Fractional Exponents, Try Another Fraction
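To tie the rules above together, here is a small Python sketch that evaluates x^(n/d) by taking the root first and then the power, including a negative exponent. It only illustrates the arithmetic; it is not the site's actual calculator code, and the function name is invented.

def frac_pow(x, n, d):
    """Evaluate x**(n/d) for real x > 0 and integers n, d (d != 0)."""
    root = x ** (1.0 / d)        # take the d-th root first ...
    return root ** n             # ... then raise to the n-th power

# The worked example from the text: 4^(3/2) by both routes.
print(frac_pow(4, 3, 2))         # (4^(1/2))^3 = 2^3 = 8.0
print((4 ** 3) ** 0.5)           # (4^3)^(1/2) = 64^(1/2) = 8.0

# A negative fractional exponent: x^(-n/d) = 1 / (x^(n/d)).
print(frac_pow(4, -3, 2))        # 1 / 8 = 0.125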
{"url":"https://calculator-online.net/fraction-exponent-calculator/","timestamp":"2024-11-03T22:25:29Z","content_type":"text/html","content_length":"70220","record_id":"<urn:uuid:e381efb7-2064-4953-a38f-06a2400cd79e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00660.warc.gz"}
Inverse Variation Calculator - Nethercraft.net Inverse Variation Calculator What is an Inverse Variation Calculator? An inverse variation calculator is a tool used to determine the relationship between two variables that change in opposite directions. In an inverse variation, as one variable increases, the other variable decreases, and vice versa. This type of relationship is also known as a negative correlation. The calculator uses the formula y = k/x, where y and x are the two variables, and k is a constant value. How to Use an Inverse Variation Calculator To use an inverse variation calculator, you need to input the values of the two variables (y and x) into the corresponding fields. The calculator will then calculate the constant value (k) and show you the inverse variation equation. You can also use the calculator to find the value of one variable when given the value of the other variable. Example of Using an Inverse Variation Calculator For example, let’s say you have the following inverse variation equation: y = 6/x. If you know that x = 2, you can plug this value into the equation to find the value of y. Simply substitute x = 2 into the equation: y = 6/2 = 3. Therefore, when x = 2, y = 3 in this inverse variation. Benefits of Using an Inverse Variation Calculator Using an inverse variation calculator can help you quickly and accurately determine the relationship between two variables. Whether you are studying physics, economics, or any other field that involves inverse variations, this tool can save you time and effort in calculations. Additionally, the calculator provides a clear and easy-to-understand equation, making it simpler to analyze the relationship between the variables. Applications of Inverse Variation Inverse variation relationships can be found in various real-world scenarios. For example, as the number of workers increases, the time taken to complete a task decreases. Similarly, as the speed of a car increases, the time it takes to travel a certain distance decreases. Understanding and recognizing inverse variations can help in making predictions and decisions in a wide range of situations. Using Inverse Variation in Problem-Solving When faced with a problem that involves inverse variation, such as determining the relationship between two variables, an inverse variation calculator can be a valuable tool. By inputting the values of the variables into the calculator, you can quickly solve the equation and obtain the necessary information. This can be particularly helpful in academic settings, where inverse variations are commonly studied in mathematics and science courses. An inverse variation calculator is a useful tool for determining the relationship between two variables that change in opposite directions. By inputting the values of the variables, the calculator can calculate the constant value and provide the inverse variation equation. This tool is beneficial for students, professionals, and anyone working with inverse variations in various fields. Whether you are solving equations or making predictions, an inverse variation calculator can simplify the process and help you analyze the relationship between the variables effectively.
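As a sketch of the calculation the article describes, the following Python snippet recovers the constant k from one known (x, y) pair of the relation y = k/x and then predicts y for a new x. The function names are my own, not the calculator's.

def inverse_variation_constant(x, y):
    """For y = k / x, recover k from one observed pair."""
    return x * y

def predict_y(k, x):
    return k / x

k = inverse_variation_constant(2, 3)   # the article's example: y = 6/x, x = 2 gives y = 3
print(k)                                # 6
print(predict_y(k, 4))                  # as x doubles to 4, y halves to 1.5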
{"url":"https://nethercraft.net/inverse-variation-calculator/","timestamp":"2024-11-09T13:00:15Z","content_type":"text/html","content_length":"53182","record_id":"<urn:uuid:3305c22c-2671-4d89-b485-0dcf8ad3b945>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00026.warc.gz"}
The Michael-Simon Allard inequality Dr. Gyula Csato, Universitat Politecnica de Catalunya, Spain Speaker Dr. Gyula Csato, Universitat Politecnica de Catalunya, Spain When Feb 05, 2019 from 04:00 PM to 05:00 PM Where LH006 The Michael-Simon-Allard inequality, proven first independently by Allard and by Michael-Simon, is a generalization of the Sobolev (respectively isoperimetric) inequality to manifolds. It states the following: There is a universal constant \(C\) depending only on the dimension \(n\) and the exponent \(p,\) such that for any \(n\)-dimensional submanifold \(M\subset\mathbb{R}^{n+1}\) one has \(\|u\|_{L^{p^{\ast}}(M)}\leq C\left(\int_M|\nabla u|^p+\int_M|u|^p|H|^p\right)^{\frac{1}{p}}\quad\forall\, u\in C_c^{\infty}(M),\) where \(p^{\ast}=\frac{np}{n-p}\) and \(H\) is the mean curvature of \(M\). There is still no other proof known, except in dimension \(2.\) Neither is the best constant known, not even for \(p=1\) (the analogue of the isoperimetric inequality). I will give an overview of this inequality: some ideas of the proof, the connection to minimal surfaces and an application to mean curvature flow. If time permits I will talk about a first result on this type of inequality in the nonlocal setting, by Cabré and Cozzi.
{"url":"https://www.math.tifrbng.res.in/events/later-2","timestamp":"2024-11-01T22:37:32Z","content_type":"application/xhtml+xml","content_length":"33166","record_id":"<urn:uuid:98a5a558-b6eb-441e-a3dd-6997c7e106ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00028.warc.gz"}
Problem Model Prompt P_Inequality_22 GPT-3.5 5-Shot Ground Truth Answer: 2 Use in Code import champ_dataset # "pip install champ-dataset" if necessary dataset = champ_dataset.load('v0') problem = dataset['P_Inequality_22'] messages = problem.conversations['GPT-3.5|5-Shot'].messages for message in messages: print(f'Role: {message.role}') print(f'Text: {message.text}') print(f'Error: {message.error}') Full Conversation System Prompt User Input Imputed Model Response Model Output You are an expert on mathematics. Solve the following problem. Make sure to show your work before giving the final answer. Find the smallest value of the polynomial f(x)=x^3(x^3+1)(x^3+2)(x^3+3). Step-wise solution: 1. Let t=x^3, we have f(x)=t(t+1)(t+2)(t+3)=(t^2+3t)(t^2+3t+2). 2. Let u=t^2+3t, we have f(x)=u(u+2)=u^2+2u+1-1=(u+1)^2-1≥-1. 3. Thus, the minimum value is -1. Solve the following problem. Make sure to show your work before giving the final answer. For positive integers a, b, c, d, e satisfying a^5+b^5+c^5+d^5+e^5 mod 25=0, what are possible values of abcde mod 5? Step-wise solution: 1. If 5 ∤ abcde, then none of a, b, c, d, e is divisible by 5. 2. For 5 ∤ u, we have u=5k±1 or u=5k±2. 3. For u=5k±1, u^5=(5k±1)^5=(5k)^5±5*(5k)^4+10*(5k)^3±10*(5k)^2+5*(5k)±1, and u^5 mod 25=±1. 4. For u=5k±2, u^5=(5k±2)^5=(5k)^5±5*(5k)^4*2+10*(5k)^3*4±10*(5k)^2*8+5*(5k)*16±32, and u^5 mod 25=±7. 5. Thus, in a^5+b^5+c^5+d^5+e^5 mod 25, each u^5 contributes ±1 or ±7. 6. No addition of five of these numbers result in a multiple of 25. 7. Thus, 25 ∤ a^5+b^5+c^5+d^5+e^5. 8. This means that if a^5+b^5+c^5+d^5+e^5 mod 25=0, we must have abcde mod 5=0. Solve the following problem. Make sure to show your work before giving the final answer. For positive a, b, c, what is the smallest value of sqrt(ab+bc+ac)/(abc)^(1/3)? Step-wise solution: 1. We have ab+bc+ac≥3*(ab*bc*ac)^(1/3)=3*(abc)^(2/3). 2. Thus, sqrt(ab+bc+ac)≥sqrt(3)*(abc)^(1/3). 3. So sqrt(ab+bc+ac)/(abc)^(1/3)≥sqrt(3). 4. Thus, the smallest value is sqrt(3), which is achieved at a=b=c. Solve the following problem. Make sure to show your work before giving the final answer. Let f(x) be a polynomial of degree n with integer coefficients. If there are three different integers a, b, c, such that f(a)=f(b)=f(c)=-1, then at most how many integer-valued roots can this polynomial have? Step-wise solution: 1. Let g(x)=f(x)+1, so a, b, c are the three roots of g(x). 2. Thus, g(x)=(x-a)(x-b)(x-c)h(x) for some polynomial h(x). 3. If there exists integer k such that f(k)=0, then g(k)=f(k)+1=1. 4. This means that (k-a)(k-b)(k-c)h(k)=1. 5. The first three terms are all distinct integers, but 1 can be the product of at most one distinct integers (1 itself). 6. Thus, we have a contradiction, and no such k exists. Solve the following problem. Make sure to show your work before giving the final answer. Find all positive integer solutions to the equation x^3+3=4y(y+1). Step-wise solution: 1. We have x^3=4y^2+4y+1-4=(2y+1)^2-2^2=(2y+3)(2y-1). 2. gcd(2y+3, 2y-1)=gcd(2y+3-(2y-1), 2y-1)=gcd(4, 2y-1)=1 since 4 only has prime factor of 2 and 2y-1 is odd. 3. Thus, we have 2y+3=u^3 and 2y-1=v^3 with gcd(u,v)=1 and u,v≥1. 4. This means that u^3-v^3=4, but u^3-v^3=(u-v)(u^2+uv+v^2), which is at least 7 (for u=2, v=1). 5. Thus, the original equation has no solution. Solve the following problem. Make sure to show your work before giving the final answer. Let a, b, c be the lengths of the three sides of a (possibly degenerate) triangle. What is the largest value of (a^2+b^2+c^2)/(ab+bc+ac)? 
Step-wise solution: 1. We have (a^2+b^2+c^2)/(ab+bc+ac)=(a^2+b^2+c^2)/(a(b+c)+bc). 2. By the Cauchy-Schwarz inequality, we have (a^2+b^2+c^2)(1+1+1)≥(a+b+c)^2. 3. Simplifying, we get 3(a^2+b^2+c^2)≥(a+b+c)^2. 4. Expanding the right side, we have 3(a^2+b^2+c^2)≥a^2+b^2+c^2+2(ab+bc+ac). 5. Rearranging, we get 2(a^2+b^2+c^2)≥2(ab+bc+ac). 6. Dividing both sides by 2, we have a^2+b^2+c^2≥ab+bc+ac. 7. Therefore, the largest value of (a^2+b^2+c^2)/(ab+bc+ac) is 1, which is achieved when a=b=c. Now, summarize the answer above in one sentence, without any intermediate steps or explanations. The largest value of (a^2+b^2+c^2)/(ab+bc+ac) is 1, which is achieved when a=b=c.
{"url":"https://yujunmao1.github.io/CHAMP/solutions/p_inequality_22_gpt35_5shot.html","timestamp":"2024-11-05T03:28:54Z","content_type":"text/html","content_length":"9804","record_id":"<urn:uuid:33be2691-f01d-42b8-ae1f-5599d8640034>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00582.warc.gz"}
Geometry Problem 1409: Right Triangle, Incircle, Excircle, Collinear The figure below shows a right triangle ABC (angle B = 90 degree) with the incircle I tangent to AC and BC at D and F, respectively. The excircle E corresponding to BC is tangent to AB extended at G. Prove that D, F, and G are collinear points. See also: Typography and poster of problem 1408. Flyer of Problem 1409 using iPad Apps
{"url":"https://www.gogeometry.com/school-college/5/p1409-right-triangle-incircle-excircle-tangent-collinear.htm","timestamp":"2024-11-10T02:16:08Z","content_type":"text/html","content_length":"9025","record_id":"<urn:uuid:14118b1f-cece-4f4c-be7f-2823fe04b03e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00859.warc.gz"}
Corkboard Connections Why Your Students Might Not Be Classifying Trapezoids Correctly Do you teach quadrilateral classification? If so, did you know there are THREE ways to define a trapezoid Americans use either the inclusive or the exclusive definition depending on their curriculum. To complicate matters even more, teachers who live outside the United States define trapezoids in a completely different way! Believe it or not, the British English definition is the exact opposite of the two American definitions! Which definition are you supposed to be teaching? If you're not sure, it's entirely possible that you're teaching the wrong definition! But don't feel bad if you discover this to be true because you are not alone. In fact, until recently, I didn't even know which definition was used by the Common Core State Standards! Before we dig into this topic, you need to know which definition you're currently teaching. To find out, answer the trapezoid question below before you read the rest of this post. Then read the information under the 3 polygons that explains what your answer means. What Your Answer Reveals Because there are three ways to define a trapezoid, there are three correct answers to the question. Your response will reveal the definition you use to classify trapezoids. • If you only chose polygon 3, you use the exclusive definition which states that a trapezoid has EXACTLY one pair of parallel sides. This is the definition that I learned, and it's the one I thought the Common Core used (but I was wrong). • If you chose polygons 1 AND 3, you use the inclusive definition which states a trapezoid has AT LEAST one pair of parallel sides. Many educators favor this definition because the other quadrilateral definitions are inclusive. For example, a parallelogram is a 4-sided figure with both pairs of opposite sides parallel, which means that squares and rectangles are also • If you only chose polygon 2, you're using the British English classification system which states that a trapezoid is a quadrilateral with NO parallel sides. You teach your students that a quadrilateral with one pair of parallel sides is a trapezium, not a trapezoid. Which definition SHOULD you be teaching? Now you know which definition you use to classify trapezoids, but is that the definition you're supposed to be teaching ? If you aren't 100% sure, make a note to check on it. Until recently, I thought the Common Core used the exclusive definition, but I discovered that the CCSS actually uses the inclusive definition! I posted a question on my Facebook page to find out which trapezoid definition most teachers were using, and over 180 people responded. I was surprised to learn that most teachers who follow the CCSS teach the inclusive definition. How to Teach Kids to Classify Tricky Trapezoids If this is the first you've heard that there are three ways to define a trapezoid, you might be wondering how much to share with your students. I mean, quadrilateral classification is challenging enough to teach without having to explain that there are three different correct ways to define a trapezoid! I recommend that you find out which trapezoid definition you are expected to teach, and only teach that ONE definition. You could tell your students that they might learn a slightly different definition at some point in the future, but if you go into too much detail, your students will end up more confused than ever. 
After you know which definition you're supposed to be teaching, how do you introduce it to your students and help them learn to classify trapezoids correctly? I've found that the best way to help your kids teaching those tricky trapezoids is with a simple sorting activity. There are two versions of this activity, and it's best to use both of them if possible. The first is a printable, hands-on activity for math partners which is great for guided practice. The other is a Google Slides activity you can assign in Google Classroom for additional practice or assessment. Both activities are included in the Sorting Tricky Trapezoids (or Trapeziums) freebie below. The directions in this post explain how to conduct the teacher-guided partner activity; directions for using the Google Slides version are included in the freebie. Trapezoid (or Trapezium) Sorting Directions for Partners: 1. Begin the activity by introducing the characteristics of a trapezoid (or trapezium) according to the definition you are expected to teach. 2. Next, pair each student with a partner and give each pair one copy of the Quadrilaterals to Sort printable. Ask them to work together to cut out the polygons and stack them in a pile. 3. Explain that they will take turns sorting the quadrilaterals into one of two categories using the T-chart. Give each pair a copy of the T-chart or have one person in each pair draw the T-chart on a dry erase board. 4. Before guiding them through the sorting activity, assign the roles of Partner A and Partner B in each pair. Then ask Partner A to select the first quadrilateral and place it in the correct column on the T-chart. Partner A then explains the quadrilateral's placement to Partner B who gives a thumbs up if he or she agrees. If Partner B does not agree, the two students should discuss the proper placement of the quadrilateral and move it to the other column if needed. 5. Partner B then chooses one of the remaining quadrilaterals, places it on the chart, and explains its placement to Partner A. Partner A must approve the placement, or the two students discuss the definition and placement before continuing. 6. Students continue to switch roles throughout the activity. If they aren't able to agree on the placement of one of the quadrilaterals, they should set it aside for the time being. 7. As students are working, walk around and observe them to see if they are classifying the trapezoids correctly. Stop to help students who are confused or who can't agree on the placement of one or more quadrilaterals. Hands-on Activities for Classifying Quadrilaterals This simple sorting activity is actually one of the most effective ways to teach kids to classify any type of quadrilateral. In fact, it's so effective that I developed a complete lesson for classifying quadrilaterals based on this strategy. Classify It! Exploring Quadrilaterals includes several introductory activities as well as a challenging game and two assessments. One reason I wanted to bring the tricky trapezoid situation to your attention is that I've recently updated Classify It! Exploring Quadrilaterals to include all three definitions. There are now THREE versions of the lesson materials within the product file. No matter which definition you're supposed to be teaching, Classify It! Exploring Quadrilaterals has you covered. You'll find lessons, printables, task cards, answer keys, and assessments that are aligned with the quadrilateral classification system used by your curriculum. 
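For readers who like to see the three conventions side by side, here is a short Python sketch that labels a quadrilateral as a trapezoid (or not) under each definition, given how many pairs of parallel sides it has. It is purely illustrative and is not part of the Classify It! materials.

def is_trapezoid(parallel_pairs, convention):
    """parallel_pairs: 0, 1 or 2 pairs of parallel sides in the quadrilateral."""
    if convention == 'exclusive':        # exactly one pair of parallel sides
        return parallel_pairs == 1
    if convention == 'inclusive':        # at least one pair (the CCSS definition)
        return parallel_pairs >= 1
    if convention == 'british':          # trapezoid = NO parallel sides (one pair = trapezium)
        return parallel_pairs == 0
    raise ValueError(convention)

# The three polygons from the quiz above: 1 has two pairs of parallel sides,
# 2 has none, and 3 has exactly one pair.
for name, pairs in [('polygon 1', 2), ('polygon 2', 0), ('polygon 3', 1)]:
    print(name, {c: is_trapezoid(pairs, c) for c in ('exclusive', 'inclusive', 'british')})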
Not only are these activities engaging and fun for kids, the lessons will help them nail those quadrilateral classifications every time! If you don't believe me, head over to see this product on TpT where you can read feedback from 400 teachers who have used Classify It! Exploring Quadrilaterals with their students. By the way if you already own Classify It, you can download the updated version for free by clicking over to the Classify It! Exploring Quadrilaterals page on TpT . If you're logged in, you'll see a link at the top that says "Download Now! You own it!" If you teach quadrilaterals and haven't purchased it yet, take a few minutes to preview it on TpT. If you use it with your students, I think you'll agree that Classify It is the most effective and FUN way to foster a deep understanding of quadrilateral classification! Guest blog post by Dr. Shirley Disseler We know that current math standards require students to learn through modeling using manipulatives. I have been using LEGO bricks for many years to teach students math concepts throughout the elementary and middle school curriculum. It’s a perfect math manipulative, and students love using the bricks, since many students are very familiar with them. I’ve developed specific strategies for teaching math using LEGO bricks for modeling and have been thrilled over the years to watch students’ test scores improve after they learn math using these strategies. In recent years, I’ve taught many graduate students at High Point University how to teach with these methods, and they also report great success for their students when they use the techniques as new math teachers. I’ve recently published a series of books that show how to utilize LEGO bricks to teach all the major math topics in elementary school: Counting and Cardinality, Addition, Subtraction, Multiplication, Division, and Fractions. Free LEGO Fractions Book I’d like to share an example of how to teach using LEGO bricks. This is a strategy for teaching how to add fractions that have like denominators. It's one of the lessons in my book, Teaching Fractions Using LEGO® Bricks, which is a part of my Brick Math Series. If you'd like to see more fraction lessons, you can download the entire PDF of this book as a sample of the series! Click here to request your free copy. Adding Fractions with Like Denominators Teaching students to add fractions can be a challenge. Students must first understand that a fraction shows part of a whole. This method of modeling fractions with bricks helps students see clearly what the parts of the fractions mean, and how only the numerators are added, since the two fractions are part of the same whole. Let’s add the fractions 1/6 and 2/6 together to show how the process works. 1. First, build models of the two fractions on a baseplate using LEGO bricks. The baseplate is an important component of Brick Math, because it keeps all the bricks in place. 2. Start to model the two fractions, denominators first. Use a 2x3 brick (6 studs) to model the denominator of 6. Use two 2x3 bricks that are the same color, to help students understand that the denominators are the same. Leave a little space between the two 2x3 bricks. 3. Model the numerator of the fraction 1/6 by placing a 1x1 brick above the first 2x3 brick. Model the numerator of the fraction 2/6 by placing a 1x2 brick above the second 2x3 brick. Using different color bricks for the numerators helps to show they are not the same. 4. Now it’s time to model the action of adding the two fractions. 
Take another 2x3 brick and place it at the bottom of a baseplate. Place the 1x1 brick above this 2x3 brick. Then place the 1x2 brick above the 1x1 brick. Your model now shows 3 studs over 6 studs. Take three 1x1 bricks and stack them on each stud of the combined numerator bricks. Have students touch each stud to count 3 as the numerator of the solution fraction of 3/6 . 5. If your students are ready for it, you can demonstrate how 3/6 = 1/2 . Place a 1x3 brick on top of the three 1x1 bricks in the model and show students that the 1x3 brick (modeling the numerator) is 1/2 the 2x3 brick (modeling the denominator). 6. The final step in the process is to have students draw their brick models on baseplate paper. Drawing the models they have built helps students reinforce the visual depiction of the mathematical concepts. Baseplate paper is included in my book, Teaching Fractions Using LEGO® Bricks, which is a free sample of my Brick Math Series books. When you take students through the modeling process, you give them a powerful way to visualize the action of the math. For both visual and tactile learners, this method helps student understand how to add fractions that are part of the same whole. See Two Fraction Lessons in Action on YouTube Watch the YouTube video below to see two fraction lessons demonstrated step by step. Learn More If you want to learn more about how to teach using LEGO bricks, check the Brick Math program website. The books in the series are available as both printed books and as PDFs, and can be purchased on the website, on Amazon and Kindle, and on TpT. Brick sets that have been designed for the program are available from that site as well. You can also purchase individual LEGO bricks from LEGO Pick a Brick, or from online resellers of LEGO bricks such as www.bricklink.com or www.brickowl.com. Dr. Shirley Disseler is an associate professor at High Point University and chair of the Department of Elementary and Middle Grades Education, and the STEM coordinator for the BA to MEd program. She is a LEGO® Education Academy Trainer and has been instrumental in developing and testing several LEGO® Education products. Disseler serves on the LEGO® Education Ambassadors Panel and is the trainer for the High Point University Teacher Academy for LEGO® Education. She has over 25 years of educational experience from elementary school teaching through higher education, including gifted education and exceptional children. She has recently started a new business called BrickEd on the Move that offers camps, field trips, and events based on learning with LEGO bricks.
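The same addition can be checked numerically. Here is a tiny Python sketch using the standard-library fractions module (this is just a check of the arithmetic, not part of the Brick Math materials):

from fractions import Fraction

a = Fraction(1, 6)      # the 1x1 brick over the 2x3 brick
b = Fraction(2, 6)      # the 1x2 brick over the 2x3 brick
total = a + b           # only the numerators combine: 1/6 + 2/6 = 3/6
print(total)            # Fraction automatically reduces 3/6 to 1/2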
{"url":"https://corkboardconnections.blogspot.com/2017/07/?m=1","timestamp":"2024-11-01T19:11:21Z","content_type":"text/html","content_length":"57248","record_id":"<urn:uuid:a94cc19a-0a2e-4c7f-bea2-0a053205c6ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00205.warc.gz"}
NOC BDY Tools This is just a note to illustrate where I had got to with a few scripts to set up open boundaries for NEMO simulations using the BDY code. This set of tools was born out of a requirement to have a generic method by which to provide boundary data for use in regional NEMO configurations. The original code for these tools was written in Mathworks Matlab. It is was being translated into Python for wider distribution and to facilitate development, but this process stalled at the beginning of 2013. The Python port is 70% complete and the desire is to finish an alpha version within the next six months. At present the tools have only been tested by transferring data from global NEMO simulations to refined regional domains, although in principle could use non-NEMO input to generate BDY data. The BDY tools use grid information from the source data (e.g. a global NEMO 025 run) and destination simulation (i.e. the proposed regional simulation - at present you have to run the proposed regional simulation with nn_msh=3 to get the required mesh/mask files for the setup tools) to determine which source points are required for data extraction. This is done using a kdtree approximate nearest neighbour algorithm. The idea behind this targeted method is that if a NEMO style grid file is produced (tool to be written) for non-NEMO source data (e.g. we could use NCML [XML] to wrap non-NEMO netcdf files so they appear to have the common variables and dimensions e.g. votemper etc) in principle the BDY tools become more generic. At present the tools do not contain many options, but those that exist are accessed through a NEMO style namelist that is read in when the main python call is made. • Works using a NEMO style namelist to initiate BDY configuration. • Automatic identification of BDY points from a user chosen or predefined mask • KDTree nearest neighbour matchup between the identified BDY points and the associated locations on source grid • Data are first interpolated (horizontally) from source grid to destination BDY points using: □ Bi-linear □ Gauss-like – distance weighted function using nearest 9 points with a de-correlation distance r0 proportional to dx*cos(lat(j,i)) □ Nearest – takes closest point on distance (for use with similar res src and dst grids) • Then in the vertical • Output in NEMO v3.2/3.3 and 3.4 forms • Time stretching used to accommodate mismatch in source and destination calendars • Optional smoothing of BDY boundary (post interpolation) • Should be able to accept non NEMO netcdf src files, but not tested yet • Handles rotation of vector quantities and can accommodate rotated grids (e.g. pan arctic) • At present is only coded to use TPXO7.2 inverse tidal model to provide tidal boundary conditions • Recently add a hack to setup generic tracer boundary conditions (e.g. so one can include nutrients for coupled ecosystem simulations Example namelist: ! vertical coordinate ln_zco = .false. ! z-coordinate - full steps (T/F) ln_zps = .true. ! z-coordinate - partial steps (T/F) ln_sco = .false. ! s- or hybrid z-s-coordinate (T/F) rn_hmin = -10 ! min depth of the ocean (>0) or ! min number of ocean level (<0) ! s-coordinate or hybrid z-s-coordinate rn_sbot_min = 10. ! minimum depth of s-bottom surface (>0) (m) rn_sbot_max = 7000. ! maximum depth of s-bottom surface ! (= ocean depth) (>0) (m) ln_s_sigma = .false. ! hybrid s-sigma coordinates rn_hc = 150.0 ! critical depth with s-sigma ! 
grid information cn_src_hgr = 'some_dir/mesh_hgr_src.nc' cn_src_zgr = 'some_dir/mesh_zgr_src.nc' cn_dst_hgr = 'some_dir/mesh_hgr_dst.nc' cn_dst_zgr = 'some_dir/mesh_zgr_dst.nc' cn_src_msk = 'some_dir/mask_src.nc' ! I/O cn_src_dir = 'path_to_src_data/' cn_dst_dir = 'path_to_dst_data/' cn_fn = 'my_exp' ! prefix for output files nn_fv = -1e20 ! set fill value for output files ! unstructured open boundaries ln_coords_file = .true. ! =T : produce bdy coordinates files cn_coords_file = 'coordinates.bdy.nc' ! name of bdy coordinates files ! (if ln_coords_file=.TRUE.) ln_mask_file = .false. ! =T : read mask from file cn_mask_file = '' ! name of mask file (if ln_mask_file=.TRUE.) ln_dyn2d = .true. ! boundary conditions for barotropic fields ln_dyn3d = .true. ! boundary conditions for baroclinic velocities ln_tra = .true. ! boundary conditions for T and S ln_ice = .true. ! ice boundary condition ln_tide = .true. ! tide boundary condition nn_rimwidth = 9 ! width of the relaxation zone ! Time information nn_year_000 = 1982 ! year start nn_year_end = 1982 ! year end nn_month_000 = 1 ! month start (default = 1 is years>1) nn_month_end = 12 ! month end (default = 12 is years>1) cn_dst_calendar = 'gregorian' ! output calendar format nn_base_year = 1960 ! base year for time counter ! Additional parameters nn_wei = 1 ! smoothing filter weights rn_r0 = 0.0417 ! decorrelation distance use in gauss ! smoothing onto dst points. Need to ! make this a funct. of dlon cn_history = 'bdy files produced by a.nobody' ! history for netcdf file ln_nemo3p4 = .true. ! else presume v3.2 or v3.3 nn_alpha = 0 ! Euler rotation angle nn_beta = 0 ! Euler rotation angle nn_gamma = 0 ! Euler rotation angle
{"url":"https://forge.ipsl.jussieu.fr/nemo/wiki/WorkingGroups/ConfigurationManager/2014/NOC_tools?version=2","timestamp":"2024-11-11T11:30:00Z","content_type":"application/xhtml+xml","content_length":"23465","record_id":"<urn:uuid:e9954b6e-c116-4ac6-b605-db93793137f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00251.warc.gz"}
(x+1) is a factor of polynomial x^3 -2 x^2 -x +2 1 thought on “(x+1) is a factor of polynomial x^3 -2 x^2 -x +2<br />Please explain in detail …<br />please ” 1. Answer: Keep it in pic form not understanding. Plz follow. plz add me as brainliest Leave a Comment
{"url":"https://wiki-helper.com/1-is-a-factor-of-polynomial-3-2-2-2-please-eplain-in-detail-please-kitu-40580861-60/","timestamp":"2024-11-12T18:32:23Z","content_type":"text/html","content_length":"125879","record_id":"<urn:uuid:3c5361d8-945f-44c7-b65b-c75444dd5ca6>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00384.warc.gz"}
1992 USAMO Problems/Problem 4 Chords $AA'$, $BB'$, and $CC'$ of a sphere meet at an interior point $P$ but are not contained in the same plane. The sphere through $A$, $B$, $C$, and $P$ is tangent to the sphere through $A'$, $B'$ , $C'$, and $P$. Prove that $AA'=BB'=CC'$. Solution 1 Consider the plane through $A,A',B,B'$. This plane, of course, also contains $P$. We can easily find the $\triangle APB$ is isosceles because the base angles are equal. Thus, $AP=BP$. Similarly, $A'P =B'P$. Thus, $AA'=BB'$. By symmetry, $BB'=CC'$ and $CC'=AA'$, and hence $AA'=BB'=CC'$ as desired. By another person ^v^ The person that came up with the solution did not prove that $\triangle APB$ is isosceles nor the base angles are congruent. I will add on to the solution. There is a common tangent plane that pass through $P$ for the $2$ spheres that are tangent to each other. Since any cross section of sphere is a circle. It implies that $A$, $A'$, $B$, $B'$ be on the same circle ($\omega_1$), $A$, $B$, $P$ be on the same circle ($\omega_2$), and $A'$, $B'$, $P$ be on the same circle ($\omega_3$). $m\angle APB= m\angle A'PB'$ because they are vertical angles. By power of point, $(AP)(A'P)=(BP)(B'P)\rightarrow\frac{AP}{B'P}=\frac{BP}{A'P}$ By the SAS triangle simlarity theory, $\triangle APB \sim\triangle B'PA'$. That implies that $\angle ABP\cong\angle B'PA'$. Let's call the interception of the common tangent plane and the plane containing $A$, $A'$, $B$, $B'$, $P$, line $l$. $l$ must be the common tangent of $\omega_2$ and $\omega_3$. The acute angles form by $l$ and $\overline{AA'}$ are congruent to each other (vertical angles) and by the tangent-chord theorem, the central angle of chord $\overline{AP}$ and $\overline{A'P}$ are Similarly the central angle of chord $\overline{BP}$ and $\overline{B'P}$ are equal. The length of any chord with central angle $2\theta$ and radius $r$ is $2r\sin\left({\theta}\right)$, which can easily been seen if we drop the perpendicular from the center to the chord. Thus, $\frac{AP}{A'P}=\frac{BP}{B'P}$. By the SAS triangle simlarity theory, $\triangle APB \sim\triangle A'PB'$. That implies that $\angle ABP\cong\angle B'PA'$. That implies that $\angle ABP\cong\angle A'PB'\cong\angle B'PA'$. Thus, $\triangle A'PB'$ is an isosceles triangle and since $\triangle APB \sim\triangle A'PB'$, then$\triangle APB$ must be an isosceles triangle too. Solution 2 Call the large sphere $O_1$, the one containing $A$$O_2$, and the one containing $A'$$O_3$. The centers are $c_1$, $c_2,$ and $c_3$. Since two spheres always intersect in a circle , we know that A,B, and C must lie on a circle ($w_1$)completely contained in $O_1$ and $O_2$ Similarly, A', B', and C' must lie on a circle ($w_2$) completely contained in $O_1$ and $O_3$. So, we know that 3 lines connecting a point on $w_1$ and P hit a point on $w_2$. This implies that $w_1$ projects through P to $w_2$, which in turn means that $w_1$ is in a plane parallel to that of $w_2$. Then, since $w_1$ and $w_2$ lie on the same sphere, we know that they must have the same central axis, which also must contain P (since the center projects through P to the other center). So, all line from a point on $w_1$ to P are of the same length, as are all lines from a point on $w_2$ to P. Since AA', BB', and CC' are all composed of one of each type of line, they must all be See Also The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
{"url":"https://artofproblemsolving.com/wiki/index.php/1992_USAMO_Problems/Problem_4","timestamp":"2024-11-14T07:56:58Z","content_type":"text/html","content_length":"56250","record_id":"<urn:uuid:96aa4f4a-88eb-4145-8069-7fe9d52dfdb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00028.warc.gz"}
Redundant Information Information theory is fundamentally about We understand information itself in terms of questions and answers: 1 bit of information is the uncertainty in the answer to a question with a 50-50 outcome, e.g. "will this coin flip give tails?". Just as importantly though, the measures of information theory themselves are all about questions and answers too. For the basic measures, the questions they ask seem fairly obvious. The Shannon Entropy "How much uncertainty is there in the state of this variable X?". Mutual information asks " how much information does the state of variable X tell me about the state of Y? ", while conditional mutual information asks " how much information does the state of variable X tell me about the state of Y, given that I already know the state of Z? But I want to make a few more subtle points about these questions and answers. In my opinion (which is of course the only correct one), the answers that the measures give are correct. If you think they're wrong, then you're asking the wrong question, or have malformed the question in some way. There are plenty of ways to do this, or at least to inadvertently change the question that you're asking. I see the sample data itself as part of the question that a measure is answering. When you estimate the probability distribution functions (PDFs) empirically from a given sample data set, your original question about entropy really becomes: "How much uncertainty is there in the state of this variable X, given what we're assuming to be a representative sample of realisations x of X here?" Of course, your representative sample could simply be too short, and thereby completely misrepresent the PDF. Or you could get into trouble with stationarity (1) of the process - you might implicitly have appended "given what we're assuming to be a representative stationary sample here" to the question, but that assumption may not be true. In both cases, the measure will give the correct answer to your question, but it might not be the question you really intended to ask. As another way of inadvertently changing the question, one must realise that for the same information-theoretic measure, different estimators (or indeed different parameter settings for the same estimator) answer different questions. Take the mutual information, for example, which one could measure on continuous-valued data via (box) kernel estimation . Using this estimator, the measure asks: " how much information does knowing the state of variable X within radius r tell me about the state of variable Y within radius r? " Clearly, using different parameter values for r amount to asking different questions - potentially the questions are very different if one uses radically different scales for r. Going further, one could measure the mutual information using the enhanced Kraskov-Grassberger kernel estimation technique. With this estimator, the mutual information measure asks " how much information does knowing the state of variable X tell me about the state of variable Y, to the precision defined in their k closest neighbours of the sample data set in the joint X-Y space? " Apart from that being something of a mouthful, it's obviously a different question to what the box kernel estimation is asking. And again, changing the parameter k changes the question being asked as well. 
So to reiterate, information theory is fundamentally about questions and answers - the better you can keep that in mind, the better you will understand information theory and its tools.

UPDATE - 13/12/12 - My colleague Oliver Obst provided a perfect quote about this: "Better a rough answer to the right question than an exact answer to the wrong question" - attributed to Lord Kelvin.

(1) Here's a controversial statement: I suggest that it can be valid to make information-theoretic measurements on non-stationary processes. This simply changes the question that is being asked to something like: "how much uncertainty is there in the state of this non-stationary variable X, if we don't know how the joint probability distribution of the non-stationary process is operating at this specific time, given what we're assuming to be a representative sample of the joint probability distribution weighted over all possible ways it may operate?". Now, obviously that's quite a mouthful, but I'm trying to capture the intuition that one could validly consider how much information it takes to predict X if we don't know the specifics of the non-stationarity at this particular point in time, but do know the overall distribution of X (covering all possible behaviours). So long as one bears in mind that a different question is being asked (indeed a question that is quite different to the intended use of the measure), then certainly the answer can be validly interpreted. Of course, the bigger issue is in properly sampling the PDF of X over all possible behaviours, but that's another story.

I'm pleased to announce the NeFF-Workshop on Non-linear and model-free Interdependence Measures in Neuroscience and TRENTOOL course which will be held at Goethe University Frankfurt, Germany on April 26-27, 2012 (hosted by the MEG Unit of the Brain Imaging Centre, Frankfurt). The synopsis from the workshop announcement is as follows:

Understanding complex systems composed of many interacting units, such as neural networks, means understanding their directed and causal interactions. If the units in question interact in a nonlinear way, as it can be assumed in neural networks, we are faced with the problem that the analysis of interactions must be blind to the type of interaction if we want to cover all possible interactions in the network, as we may not know the type of nonlinear interaction a priori. Prematurely limiting our search to specific models, nonlinearities or, even worse, linear interactions may block the road to discovery. Novel model-free techniques for the quantification of directed interactions from information theory offer a promising alternative to more traditional methods in the field of interaction analyses, but also come with their own specific challenges. This symposium brings together the most active researchers in the field to discuss the state of the art, future prospects and challenges on the way to a model-free, information theoretic assessment of neuronal directed interactions.

I'm happy to be co-organising this workshop with Michael Wibral (head of the MEG Unit, Brain Imaging Center, Goethe University Frankfurt) and Raul Vicente (Frankfurt Institute for Advanced Studies). We've got several speakers lined up to talk about their work in this field, particularly using information-theoretic tools including the transfer entropy. The speakers include some collaborators of mine (e.g. Mikhail Prokopenko, Paul Williams), many others I'm looking forward to meeting (e.g.
Stefano Panzeri, Luca Faes), and the organisers of course :). Plus there will also be a workshop on Michael and Raul's Transfer Entropy toolbox (TRENTOOL), which is designed to provide effective network analysis on neuro data sets in Matlab. I'm looking forward to playing around with this more myself, I've already got a project and some data in mind. We're hoping to get lots of participants (though space is limited) - full details on how to register are available at the workshop website - http://www.neff-ffm.de/de/veranstaltungen/seminars/ I hope to see you there! Updated 26 October - our paper has been published in Europhysics Letters 99 68007 (2012) doi:10.1209/0295-5075/99/68007 - we've also updated the preprint on arXiv with the revised material. Update 3 December - MPI made a press release about the paper, and New Scientist (German edition) published an article about it (in German). The news at this end is that Frank Bauer and I just submitted a new preprint on arXiv: F. Bauer and J.T. Lizier, "Identifying influential spreaders and efficiently estimating infection numbers in epidemic models: a walk counting approach", MPI MIS Preprint 1/2012, arXiv:1203.0502, We introduce a new method to efficiently approximate the number of infections resulting from a given initially-infected node in a network of susceptible individuals. Our approach is based on counting the number of possible infection walks of various lengths to each other node in the network. We analytically study the properties of our method, in particular demonstrating different forms for SIS and SIR disease spreading (e.g. under the SIR model our method counts self-avoiding walks). In comparison to existing methods to infer the spreading efficiency of different nodes in the network (based on degree, k-shell decomposition analysis and different centrality measures), our method directly considers the spreading process and, as such, is unique in providing estimation of actual numbers of infections. Crucially, in simulating infections on various real-world networks with the SIR model, we show that our walks-based method improves the inference of effectiveness of nodes over a wide range of infection rates compared to existing methods. We also analyse the trade-off between estimate accuracy and computational cost, showing that the better accuracy here can still be obtained at a comparable computational cost to other methods. Epidemic spreading in biological, social, and technological networks has recently attracted much attention. The structure of such networks is generally complex and heterogeneous, so a key question in this domain is: "Given a first infected individual of the network (patient zero) - how likely is it that a substantial part of the network will become infected?" It is, of course, of particular interest to identify the most influential spreaders. This knowledge could, for instance, be used to prioritise vaccinations. The most obvious and direct way to address this question is to estimate the number of infections by running simulations of the infection model. There are several well-known infection models which can be used to simulate diseases with different properties. These include the SIR (susceptible-infected-removed) model for diseases where a subject may only be infected once (due to either recovery with full immunity or death), and the SIS (susceptible-infected-susceptible) model for diseases where infected subjects recover and become susceptible to reinfection. 
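To make the simulation-based estimate concrete before getting to its cost, here is a rough sketch of a single discrete-time SIR outbreak from one seed node (my own illustration in Python, with a made-up toy network; it is not the simulation code used for the paper):

import random

def sir_outbreak(adj, seed, beta, rng):
    """One discrete-time SIR outbreak on a network given as an adjacency
    list {node: [neighbours]}. Each infected node tries once to infect
    each susceptible neighbour with probability beta, then recovers.
    Returns the total number of nodes ever infected."""
    infected, recovered = {seed}, set()
    while infected:
        newly = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and v not in recovered and v not in newly:
                    if rng.random() < beta:
                        newly.add(v)
        recovered |= infected
        infected = newly
    return len(recovered)

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
sizes = [sir_outbreak(adj, seed=0, beta=0.3, rng=random.Random(k)) for k in range(1000)]
print(sum(sizes) / len(sizes))   # mean outbreak size from node 0

Repeating this many times per seed node, and again for every infection rate of interest, is exactly what becomes expensive on large networks.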
To run a simulation, one must consider the network structure connecting susceptible individuals, and the infection rate or probability β that an infected individual will infect a given neighbour. One problem with running simulations, however, is that it takes a lot of computational time to obtain appropriate accuracy. For example, for an SIR model we ran on a network of 27519 nodes and 116181 undirected edges, 10000 simulated initial infections per node for around 20 values of infection rate β took 2000 hours to simulate. Of course, this runtime does not scale well with the size of the network, while for real-world problems it is generally the large networks that we are genuinely interested in. As such, it would be useful to find a more efficient way than full simulation to estimate infection numbers, and/or to be able to use local network properties of nodes to understand their spreading efficiency.

So the problem that we are addressing here is two-fold:
1. How to efficiently estimate the number of infections resulting from a given initially-infected node in a network of susceptible individuals?
2. What network structural properties which are local to the initially-infected node are most useful for predicting how well disease will spread from it?

In fact, there has been a lot of work recently trying to find local network properties of nodes that are useful in predicting the relative spreading influence of different initially-infected nodes in a network. This attempts to address problem 1, but additionally gives very useful insight into how local network structure can promote or inhibit disease spreading (i.e. problem 2). The properties other authors have investigated range from simply examining out-degree, to k-shell analysis, to various measures of node centrality in the network (e.g. eigenvector centrality). And the good news is that you get a surprisingly accurate insight into the relative spreading efficiency of the various nodes with these very simple measures. Such work has attracted a lot of attention, for example being published in Nature Physics.

However, we observed two issues with these approaches:
1. They only infer the relative spreading efficiency of initially infected nodes; i.e. they do not provide an estimate of the actual numbers of infections resulting from each node. These actual infection numbers could be very important in many applications.
2. While the existing inference measures do a good job, they do not actually directly consider the mechanics of disease spreading. As such, there is still room for improvement. As an example of potential improvement areas: none of the above-mentioned measures change their relative inference with the rate of infection β.

So, we sat down and thought hard about the local network properties that best relate to the mechanics of disease spreading. We focussed on the fact that disease spreads on a network in the manner of a walk. The disease can only reach node B from node A on a walk from A to B, where every node on that walk is also infected. Our basic premise is that the count of the number of walks from the initially infected node to other susceptible nodes (an approach known as walk counting) should be a local network structural property that gives good insight into disease spreading. The idea had been previously raised in the literature, but not properly examined. We developed the idea further, working out the mathematics to turn these walk counts into estimates of infection numbers, as a function of infection rate β.
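For intuition about how walk counts can be turned into a β-dependent spreading score, here is a much-simplified sketch (my own illustration, not the estimator derived in the paper: it counts all walks via powers of the adjacency matrix and simply weights a length-k walk by β^k, with no correction for revisited nodes):

import numpy as np

def walk_spreading_scores(A, beta, max_len):
    """Crude spreading score for every source node: sum over walk lengths k
    of beta**k times the number of length-k walks to other nodes, read off
    the powers of the adjacency matrix A. Illustrative only; not the
    self-avoiding-walk treatment used for the SIR model in the paper."""
    n = A.shape[0]
    scores = np.zeros(n)
    Ak = np.eye(n)
    for k in range(1, max_len + 1):
        Ak = Ak @ A                                  # entries count length-k walks
        scores += (beta ** k) * (Ak.sum(axis=1) - np.diag(Ak))
    return scores

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(walk_spreading_scores(A, beta=0.3, max_len=4))

The method in the paper refines this considerably, in particular restricting to self-avoiding walks for SIR and converting the counts into proper infection-number estimates, as discussed next.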
Interestingly, different types of walks are involved for different disease spreading models; e.g. for SIR spreading one is only interested in self-avoiding walks (since no node can be infected twice), whereas for SIS spreading one is interested in any type of walk. Our estimates are not perfect, and we identify where and how known errors will occur. However, the estimates have several very important and useful properties compared to other approaches:
1. Our technique provides estimates of actual numbers of infections from a given initially-infected node, which is more useful than inferred relative spreading efficiency alone. It's useful to know who the most influential spreaders are, but you also want to know the extent to which they will spread the disease.
2. Importantly, testing with simulations on various social network structures reveals that our technique infers more accurate relative spreading efficiencies than those of the aforementioned previously published techniques over a wide range of infection rates β (up to the lower super-critical spreading regime). This is because our technique directly considers the mechanics of disease spreading.
3. And our technique has excellent computational efficiency. Note that there is a trade-off in our technique between computational efficiency and higher accuracy: by considering only short infection walks, our algorithm runs faster, but better accuracy is generally obtained by considering longer walks. Our short-walk estimates can be made in approximately the same run-time as the aforementioned techniques, but with greater accuracy in inferring relative spreading efficiency. Our longer-walk estimates produce better accuracy again, and do so with orders of magnitude less runtime than simulations of the disease spreading process which obtain the same accuracy.

As such, we show that our walk counting approach provides the following unique combination of features: walk counts are the most relevant local network structural feature for inferring relative disease spreading efficiency, they provide estimates of actual infection numbers, and they are computationally efficient.

Comments/suggestions welcome, of course ...
{"url":"https://redundantinformation.blogspot.com/2012/","timestamp":"2024-11-04T05:54:33Z","content_type":"application/xhtml+xml","content_length":"56353","record_id":"<urn:uuid:aeb6c50e-2a25-4d25-827c-052bf285570e>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00655.warc.gz"}
Cellular Automata Cave Generation

A cellular automaton is a collection of cells whose states change over time based on the states of adjacent cells. Cellular automata can be used to produce natural-looking patterns, such as the cave in the picture.

Perhaps the most well-known instance of a cellular automaton is Conway's Game of Life, shown below (click to start). Each cell is considered to be either alive or dead. In this example, the initial state of each cell is random. At each step, the neighbours of each cell are examined to determine if that cell will be alive or dead in the next step. This proceeds according to the following rules:
• if a cell is alive and has less than 2 living neighbours, the cell dies (as if of underpopulation)
• if a cell is alive and has more than 3 living neighbours, the cell dies (as if of overpopulation)
• if a cell is alive and has 2 or 3 living neighbours, the cell remains alive
• if a cell is dead and has exactly 3 living neighbours, the cell becomes alive (as if by reproduction)

Conway's game of life. Click to toggle animation.

Here's pseudocode for the function applied at each step of the simulation. In short, it takes a 2D array of cells and updates the state of each cell based on the states of its neighbours.

SURVIVE_MIN = 2
SURVIVE_MAX = 3
RESURRECT_MIN = 3
RESURRECT_MAX = 3

step(cells[HEIGHT][WIDTH]) {
    for i in 0..HEIGHT-1 {
        for j in 0..WIDTH-1 {
            count = 0
            for each neighbour of cells[i][j] {
                if neighbour is alive {
                    count = count + 1
                }
            }
            if cells[i][j].alive {
                if count >= SURVIVE_MIN and count <= SURVIVE_MAX {
                    cells[i][j].alive_next_step = true
                } else {
                    cells[i][j].alive_next_step = false
                }
            } else {
                if count >= RESURRECT_MIN and count <= RESURRECT_MAX {
                    cells[i][j].alive_next_step = true
                } else {
                    cells[i][j].alive_next_step = false
                }
            }
        }
    }
    for i in 0..HEIGHT-1 {
        for j in 0..WIDTH-1 {
            cells[i][j].alive = cells[i][j].alive_next_step
        }
    }
}

Notice the four constants at the start of the code. These encode the rules for Conway's game of life, described above. These values can be changed to obtain different behaviour of the cellular automaton. For example:
• if a cell is alive and has less than 4 living neighbours, the cell dies
• if a cell is alive and has 4 or more living neighbours, the cell remains alive
• if a cell is dead and has exactly 5 living neighbours, the cell becomes alive (as if by reproduction)

The constants that encode these rules are:

SURVIVE_MIN = 4
SURVIVE_MAX = 8
RESURRECT_MIN = 5
RESURRECT_MAX = 5

Variant of Conway's game of life used for cave generation.

This set of rules creates large clumps of living cells which can be used as the walls of a cave.

When generating terrain for a game level of finite size, players shouldn't be able to walk off the edge of the map. A simple way to enforce this is to place walls around the edge of the map. A solution that places natural-looking walls around the outside of the map is to ensure the border of the area is always made up of living cells. Enforcing this at each step of the simulation means that these cells cause other nearby cells to become (and stay) alive. The walls grow inwards and interact with interior walls to give the appearance of natural cavern walls.

Cells around border are always alive.

Within the first few steps, natural-looking caverns are generated. In the subsequent steps, the walls recede and the caverns become very vast. Limiting the number of steps to 4 seems to produce interesting-looking caves.
Rough edges and single dead cells can be smoothed or removed by killing any cells with less than 2 living neighbours, and resurrecting cells with more than 5 living neighbours. Finally, to ensure that there are no closed-off sections of the generated cave, all but the largest contiguous group of dead cells are resurrected, filling in closed-off open spaces.

Running for 4 steps, then cleaning.
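For reference, here is one possible end-to-end rendering of the above recipe in Python (my own sketch, not the author's code; the grid size, the 45% initial fill probability, and treating out-of-bounds neighbours as alive are my own choices):

import random
from collections import deque

W, H = 60, 40

def count_alive(cells, i, j):
    total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            # Treat out-of-bounds neighbours as alive (solid rock).
            if ni < 0 or ni >= H or nj < 0 or nj >= W or cells[ni][nj]:
                total += 1
    return total

def step(cells, survive_min, survive_max, res_min, res_max):
    nxt = [[False] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            n = count_alive(cells, i, j)
            if cells[i][j]:
                nxt[i][j] = survive_min <= n <= survive_max
            else:
                nxt[i][j] = res_min <= n <= res_max
    # Keep the border alive so the walls grow inwards.
    for i in range(H):
        nxt[i][0] = nxt[i][W - 1] = True
    for j in range(W):
        nxt[0][j] = nxt[H - 1][j] = True
    return nxt

def largest_open_region(cells):
    """Flood fill every open (dead-cell) region; keep only the largest one
    and resurrect the rest, filling in closed-off spaces."""
    seen = [[False] * W for _ in range(H)]
    regions = []
    for i in range(H):
        for j in range(W):
            if cells[i][j] or seen[i][j]:
                continue
            region, queue = [], deque([(i, j)])
            seen[i][j] = True
            while queue:
                ci, cj = queue.popleft()
                region.append((ci, cj))
                for ni, nj in ((ci - 1, cj), (ci + 1, cj), (ci, cj - 1), (ci, cj + 1)):
                    if 0 <= ni < H and 0 <= nj < W and not cells[ni][nj] and not seen[ni][nj]:
                        seen[ni][nj] = True
                        queue.append((ni, nj))
            regions.append(region)
    keep = max(regions, key=len) if regions else []
    filled = [[True] * W for _ in range(H)]
    for (i, j) in keep:
        filled[i][j] = False
    return filled

random.seed(0)
cells = [[random.random() < 0.45 for _ in range(W)] for _ in range(H)]
for _ in range(4):
    cells = step(cells, 4, 8, 5, 5)      # cave-generation rules, 4 steps
cells = step(cells, 2, 8, 6, 8)          # smoothing pass described above
cells = largest_open_region(cells)
print("\n".join("".join("#" if c else "." for c in row) for row in cells))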
{"url":"https://www.gridbugs.org/cellular-automata-cave-generation/","timestamp":"2024-11-11T05:03:15Z","content_type":"text/html","content_length":"12938","record_id":"<urn:uuid:dff93f1a-8917-4f2d-8fb2-639a09983b31>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00818.warc.gz"}
Profit and Loss

Every businessman wants to make a profit, and as large a profit as possible. This means selling your wares at a higher price than you bought them for, plus whatever costs have been incurred. Making a profit is not always possible though. Sometimes it is necessary to view your expenses as done deals, and your purpose then is to maximise your revenue from the goods you have to sell.

Example: A market trader sells potatoes. One day he buys 500 kg of potatoes in 25 kg bags at £8 per bag from the wholesaler, to be bagged into 5 kg bags and sold at £2.50 each. Unfortunately, none of the bags can be weighed at exactly 5 kg, and in order to avoid giving the customer less than 5 kg of potatoes, he always gives them slightly over. This means he only manages to get 95 5 kg bags. When he goes to the market and starts selling, he only manages to sell 80 bags at the advertised price of £2.50, and it being the day before his annual holiday, he reduces the price of the remaining bags to £1.50 in an attempt to sell the remaining bags before the end of the day - and avoid the possibility that they might be unsaleable after the holiday. Find his profit, and his percentage profit.

Each 25 kg bag costs £8, so the 500 kg of potatoes costs 20 times £8 = £160.
He sells 80 bags at £2.50 each, so his revenue from these is 80 times £2.50 = £200.
The remaining 15 bags sell for £1.50, so his revenue from these bags is 15 times £1.50 = £22.50.
His total revenue is £200 + £22.50 = £222.50.
His total costs are £160.
His profit is £222.50 - £160 = £62.50.
His percentage profit is (£62.50 / £160) × 100% = 39.0625%, or about 39.1% of his costs.
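A quick check of the arithmetic (a throwaway Python sketch):

cost = (500 / 25) * 8               # 20 wholesale bags at £8 each -> £160
revenue = 80 * 2.50 + 15 * 1.50     # full-price plus reduced-price sales -> £222.50
profit = revenue - cost             # £62.50
print(profit, 100 * profit / cost)  # 62.5  39.0625 (percentage profit on cost)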
{"url":"https://mail.astarmathsandphysics.com/igcse-maths-notes/497-profit-and-loss.html","timestamp":"2024-11-11T13:04:16Z","content_type":"text/html","content_length":"29024","record_id":"<urn:uuid:3ac4af91-bced-449e-9e89-540520e352d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00501.warc.gz"}
Re: AMBER: Non-Bonded Cutoff vs PME
From: Thomas E. Cheatham, III <cheatham.chpc.utah.edu>
Date: Wed, 20 Apr 2005 08:53:38 -0600 (Mountain Daylight Time)

> on the subject...May I ask one more related question. When in PBCs we say
> "infinitely repeated", what does it mean, how calculations are done on
> infinite scale? I am not able to capture this concept...or its that only a

Infinity is indeed a difficult concept to grasp, except perhaps for the Coulomb interaction: we can convert the sum over pairs to a double sum over all unit cells and all pairs. This is an infinite sum. If the system were net-charged, this total energy would diverge (however, the forces interestingly do not diverge). The sum is conditionally convergent, which means that the value obtained in the limit (assuming the system is net-neutral) will be different depending on how you sum the series. In the literature describing this, two means of summing are generally presented: adding more and more 2D slabs, or adding more and more layers, until the sum converges.

Ewald converted this conditionally convergent sum into a sum of two finite, converging series: the screened (erfc) direct space sum and the reciprocal sum. A missing factor is the surface term at the limit; almost all of the Ewald codes assume tin-foil (conducting) boundary conditions at the surface. This avoids the shape dependence. There are a few other details, but basically the sum converges in the limit; our goal is to make sure that we calculate it to sufficient accuracy. Implementations of PME, SPME, PPME, etc. can all reproduce the exact Ewald energetics/forces to whatever accuracy is desired. With sander, we tend to look at force errors in the 10**-6 range.

For more detail, please continue to dig through the literature.

The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Wed Apr 20 2005 - 16:53:00 PDT
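As a rough illustration of the screened (erfc) direct space sum mentioned in the message above, here is a minimal sketch (my own, in Python; it is not AMBER/sander code, and the reciprocal-space, self-energy, and minimum-image details are omitted, so the numbers are only schematic):

import math

def ewald_direct(charges, positions, alpha, cutoff):
    """Screened direct-space Coulomb sum: sum over pairs of
    q_i * q_j * erfc(alpha * r_ij) / r_ij for pairs within the cutoff."""
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            if r < cutoff:
                e += charges[i] * charges[j] * math.erfc(alpha * r) / r
    return e

print(ewald_direct([1.0, -1.0], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], alpha=0.3, cutoff=9.0))

The erfc screening is what makes this real-space part short-ranged enough to truncate at a cutoff, with the remainder of the interaction handled in the reciprocal-space sum.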
{"url":"http://archive.ambermd.org/200504/0329.html","timestamp":"2024-11-11T23:52:23Z","content_type":"application/xhtml+xml","content_length":"8543","record_id":"<urn:uuid:64650224-69dd-4d53-a8e7-c1cf1e8e9c30>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00467.warc.gz"}
R&amp;D finance and economic growth: a Schumpeterian model with endogenous financial structures | Macroeconomic Dynamics | Cambridge Core 1. Introduction In this paper, we investigate the impacts of fiscal policy (the dividend, corporate, and bond income taxes) and monetary policy (the nominal interest rate) in an endogenous growth model of R&D. We endogenize the firm’s financial structure (in terms of internal and external funds) and enable the financial market (the equity market and the bond market) to reshuffle loanable funds out of less productive firms toward others with greater productivity. The endogenous financing of R&D provides a potentially important channel to link finance and economic growth. We show that the financial structure-growth relationship is not monotonic and that it depends on the relative productivity between the existing and new firms and the allocation of loanable funds between them. Distinct policies end up with quite different consequences for firms’ financial structures (the demand for loanable funds) and households’ asset investments (the supply of loanable funds), with both jointly determining inflation, growth, and the market structure. High growth can be associated with either an intensive (a large number of firms with a small firm size) or extensive margin (a small number of firms with a large firm size) in terms of market structure. It is crucial to highlight R&D firms’ financial sources as we examine firms’ innovation activities and their influences on the economy’s performance. It is well documented that innovation activities require a lot of funding, but it is difficult for firms to raise sufficient R&D funds (Schumpeter (Reference Schumpeter1942), ch.VIII), leading to an underinvestment in R&D (Arrow (Reference Arrow 1962)). There are two main reasons for this phenomenon: the knowledge nonrivalry and the return divergence between the supply of and demand for funding. Knowledge is nonrivalrous, and this externality gives rise to a disincentive effect, discouraging firms from collecting enough funds for the R&D investment, particularly when they need to raise funds externally.Footnote ^1 This calls for the government’s intervention via a patent system (e.g. Segerstrom (Reference Segerstrom2000); Futagami and Iwaisako (Reference Futagami and Iwaisako2007); Iwaisako and Futagami (Reference Iwaisako and Futagami2013); Yang (Reference Yang2018); Chu et al. (Reference Chu, Lai and Liao2019)) or tax incentives (Chu and Cozzi (Reference Chu and Cozzi2018)) to remedy the underinvestment in R &D. Tax incentives may be more attractive to policymakers; for example, corporate taxes have been lowered in many industrial countries. From 1980 to 2018, the average worldwide statutory corporate income tax rate declined from 38.84% to 23.03%, representing a 41% reduction over the past 38 years. Corporate income tax rates declined the most in Europe, decreasing by 55% from 40.5% in 1980 to 18.38% in 2018. These interventions, however, still have difficulty increasing the availability of external finance. The problem of return divergence, as stressed by Arrow (Reference Arrow1962) and other financial economists, manifests itself when R&D firms do not have enough internal funding (retained earnings) to finance their innovation activities, while external funding is costly. 
Corporate investment decisions may be distorted because the loanable funds suppliers (households) and demanders (R&D firms) have different objectives and different expectations regarding their returns on R&D.Footnote ^2 Conventional R&D growth models ignore the importance of financial structures by simply assuming that firms have enough internal funds (retained earnings) to finance R&D expenditures without the need for external funds. Instead, this study endogenizes firms’ financial structures by considering the return divergence between the supply of and demand for R&D funding. Due to the costly nature of state verification, the agency cost between the supply of and demand for funding exists, giving rise to the return divergence (Jensen and Meckling (Reference Jensen and Meckling1976); Bernanke and Gertler (Reference Bernanke and Gertler1989)). The existence of return divergence vividly portrays the transformation from firms’ external finance to their productivity, which provides a microfoundation of macroeconomics (Carlstrom and Fuerst (Reference Carlstrom and Fuerst1997)). Our analysis accounts for the agency cost and, accordingly, endogenizes R&D firms’ financial structures (specifically, the financial leverage (measured by the debt-equity ratio) and the user cost of capital (measured by the weighted average cost of capital, WACC)). The financial structure of R&D firms will affect the demand for and supply of loanable funds, which in turn governs the effects of fiscal and monetary policy. Thus, we offer quite different policy implications from the conventional wisdom gleaned from growth models without endogenously determined financial structures.Footnote ^3 In this paper, we develop a growth monetary model in which (1) incumbents engage in R&D aimed at quality improvement (vertical R&D), while entrants engage in R&D aimed at variety expansion (horizontal R&D); (2) both the existing and new firms can access the financial market (the equity market and the bond market) for external funding; and (3) agency costs exist between the supply of and demand for funding. The first feature endogenizes the market structure (the firm size and the number of firms) whereby the competition between incumbents and entrants jointly determines the growth and the proliferation of product varieties to eliminate the scale effect on growth (see Peretto (Reference Peretto2003)).Footnote ^4 The second and third features endogenize the firm’s financial structure and enable the financial market to reshuffle loanable funds away from less productive firms toward others with greater productivity, as stressed by Chetty and Saez (Reference Chetty and Saez2006). The empirical literature on finance has shown that external finance (in terms of both equity and debt finance) is non-negligible, particularly for R&D-intensive firms. Compared to mature firms, younger firms rely more on equity finance due to the lack of retained earnings and the problem of financing frictions (see Brown and Petersen (Reference Brown and Petersen2011)). Based on the three novel features, we show that the endogenously determined debt-equity ratio and WACC will affect not only the market amount of loanable funds but also the allocation of loanable funds between incumbents and entrants, with both governing inflation, growth, and the market structure. Analytically, we show that there exists a unique balanced-growth path (BGP) equilibrium with an endogenous financial structure which is locally determinate. 
In the BGP equilibrium, the balanced-growth rate can be either positively or negatively correlated with the WACC. A higher WACC leads firms to set a higher price for passing through the increased user cost of capital, pushing up inflation. Higher inflation lowers the real returns on financial assets, decreasing the households’ supply of loanable funds. A scarcity of loanable funds, however, does not necessarily imply a lower growth rate. If entrants are more productive than incumbents, growth decreases with the WACC as loanable funds go to more productive entrants, which reduces the effectiveness of incumbents’ R&D aimed at product quality improvement. By contrast, if incumbents are more productive than entrants, growth increases with the WACC because the financial market can reshuffle funds toward more productive incumbents, which increases the effectiveness of quality-improving R&D, resulting in an increase in growth. This result implies that instead of the market amount of loanable funds, the allocation of loanable funds plays a more important role in determining economic growth. The importance of intersectoral capital allocation has been pointed out in McKinsey’s study report (by Lewis et al. (Reference Lewis, Agrawal, Buttgenbach, Findley, Jeddy, Petry, Kondo, Subramanian, Bőrsch-Supan, Huang and Greene 1996)) which shows that although Japan and Germany had much higher investment rates, US investment was able to be allocated to more profitable (i.e. higher productivity) sectors so that national income was considerably greater in the United States. We also show that higher financial leverage (a higher debt-equity ratio) is not necessarily favorable to economic growth. By bringing the allocation of loanable funds between incumbent R&D firms and new entrants into the picture, our result stands in contrast to the financial accelerator effect proposed by Bernanke and Gertler ( Reference Bernanke and Gertler1995) and Bernanke et al. (Reference Bernanke, Gertler and Gilchrist1996), in which higher financial leverage stimulates investment and boosts growth. The ambiguous growth effect in our analytical study explains why empirical findings not only are mixed but also vary greatly. For example, instead of positive growth effects, OECD (2017) and Shah et al. (Reference Shah, Abdul-Majid and Karim2019) find that corporate debt has negative effects on growth in OECD countries (see Section 3.1 for more details). Numerically, we show that while the dividend tax (levied on the households’ supply of loanable funds) and the corporate tax (levied on the firm’s demand for loanable funds) increase the equilibrium debt-equity ratio, both have quite different impacts on the balanced growth and market structure, given that the dividend tax raises the firm’s WACC but the corporate tax lowers it due to the “tax shield effect.” In spite of a lower WACC, the corporate tax unambiguously decreases growth since it substantially lowers firms’ after-tax profits, which forces incumbents to reduce their demand for loanable funds and impedes their R&D activities. The market structure, nevertheless, has an uncertain response to a higher corporate tax; specifically, the firm size (the number of firms) has an inverted U-shaped (U-shaped) relationship with the corporate tax. By contrast, the dividend tax has an ambiguous impact on growth but an unambiguous impact on the market structure. The dividend tax decreases (increases) growth if incumbents are less (more) productive than entrants. 
A higher dividend tax raises the firm’s WACC and hence the economy’s inflation, giving rise to an unfavorable effect on the households’ supply of loanable funds. If entrants are more productive, scarce loanable funds will be allocated to entrants, which reduces the effectiveness of incumbents’ R&D aimed at quality improvement and hence decreases growth. Otherwise, the financial market will allocate scarce loanable funds toward more productive incumbents, and, therefore, the balanced-growth rate rises. The calibration results show that the market structure exhibits an intensive margin (a small number of firms with a large firm size) in response to the dividend tax, regardless of whether incumbents are less or more productive. By focusing on the bond income tax, we also find uncertain responses in terms of the balanced growth and market structure. Again, these responses depend on the relative productivity between incumbents and entrants and the allocation of loanable funds between them. In the sensitivity analysis, we further find that in the presence of higher agency costs the market structure exhibits an intensive margin response to the corporate tax, while it exhibits an extensive margin response to the bond income tax. In regard to monetary policy, in response to a higher nominal interest rate, inflation can be positively related to growth, resembling the so-called Mundell−Tobin effect. Our channel, however, is different from their asset substitution effect. In effect, our model predicts that a higher nominal interest rate tends to decrease, rather than increase, the supply of loanable funds. Even though loanable funds become scarce, a higher nominal interest rate can enhance growth, provided that incumbents are more productive and that the financial market can effectively reshuffle loanable funds to them. In this case, inflation is positively related to growth. While the conventional macroeconomic model predicts a negative relationship (see, e.g. Cooley and Hansen (Reference Cooley and Hansen 1989); Wang and Yip (Reference Wang and Yip1992)), the empirical evidence gives rise to the possibility of a positive inflation-growth relationship (e.g. Bullard and Keating (Reference Bullard and Keating1995)).Footnote ^5 While the seminal works of Peretto (Reference Peretto2007, Reference Peretto2011) contribute to the literature, the growth effects of the dividend tax and the corporate tax in our analysis are in contradiction to those of his works. In his models, incumbents are assumed to have enough retained earnings (internal funds) to invest without the need for external funds, whereas entrants have to amass funds by issuing equities (external funds). The asymmetric financial structure eliminates the problem of the return divergence between the supply of and demand for funding for firms, and growth thereby unambiguously increases in response to the dividend income tax (Peretto (Reference Peretto2007, Reference Peretto2011)) and the corporate tax (Peretto (Reference Peretto2007)). In our model, both incumbents and entrants can access the equity and bond markets and optimally decide their financial structures so that the financial market can effectively reshuffle funds out of less productive firms toward more productive ones, as stressed by Chetty and Saez (Reference Chetty and Saez2006). The effects of taxation on growth crucially depend on the relative productivity and the allocation of loanable funds between the existing and new firms. 
Empirically, there is no consensus on the growth effect of either dividend (see Carroll (Reference Carroll2010) for a summary) or corporate taxation (see Auerbach (Reference Auerbach2013) for a summary).Footnote ^6 To complement Peretto’s study, our results provide convincing explanations for the mixed empirical findings. 2. The model The economy consists of households, firms, and a government. Households derive utility from consumption and leisure and make portfolio choices among various assets: money, equities, corporate bonds, and government bonds.Footnote ^7 There are two sectors: the final-good sector is perfectly competitive while the intermediate-good sector is characterized by monopolistic competition. Like Peretto ( Reference Peretto2007, Reference Peretto2011), the intermediate-good firms engage in two distinct types of R&D investment: vertical (in-house quality) R&D and entry investment (horizontal variety R& D). Unlike their setup, in addition to retained earnings, the intermediate-good firms can collect funding for their R&D investment by issuing not only equities but also corporate bonds. Accordingly, the debt-equity ratio is determined endogenously. The government (the monetary authority) runs a balanced budget and implements a nominal interest rate peg by purchasing/selling government bonds in the open market. Money is introduced into this model through a cash-in-advance (CIA) constraint. In line with Lucas (Reference Lucas1980), real money balances are required prior to purchasing the consumption good. Focusing the CIA constraint only on consumption will make our point (the balance sheet channel) more striking.Footnote ^8 Time $t$ is continuous. For compact notation, the time index is suppressed throughout the paper. 2.1. Households The economy is populated by a unit measure of identical infinitely lived households. For simplicity, there is no population growth. Each household, in facing the budget and CIA constraints, maximizes the discounted sum of future instantaneous utilities. To be specific, it optimally chooses consumption $c$ and working time $L$ ( $(1-L)$ is leisure) and also makes an asset portfolio allocation among nominal money balances $M$ , government bonds $B^{G}$ , outstanding equities $E_{j}$ , and corporate bonds $B_{j}^{F}$ issued by firm $j$ , taking the general price (the consumer price index) $P$ , wage offers $W$ , the market price for firm $j$ ’s share $V_{j}$ , the yield rates of equities $i_{j}^{E}$ , corporate bonds $i_{j}^{B}$ , government bonds $\overline{i}$ , and the government’s tax (lump-sum) $T$ as given. 
Thus, given a set of initial endowment assets $\{M(0),E_{j}(0),B_{j}^{F}(0),B^{G}(0)\}$ , the representative household’s optimization problem can be expressed as: (1) $$\max \int olimits _{0}^{\infty }\left [ \ln c+\delta \ln\!(1-L)\right ] \cdot e^{-\rho t}dt,\text{ with }0\lt \delta,\rho \lt \infty,$$ subject to the (real) budget constraint, \begin{equation*} \frac {\overset {\centerdot }{M}}{P}+\frac {\dot {B}^{G}}{P}+\frac {1}{P}\frac {\partial }{\partial t}\left ( \int _{0}^{N}B_{j}^{F}dj\right ) +\frac {1}{P}\frac {\partial }{\ partial t}\left ( \int _{0}^{N}V_{j}E_{j}dj\right ) =\left ( 1-\tau _{L}\right ) \frac {W}{P}L+(1-\tau _{B})\overline {i}\frac {B^{G}}{P}+ \end{equation*} (2) $$\int _{0}^{N}\left ( 1-\tau _{D}\right ) i_{j}^{E}\frac{V_{j}}{P}E_{j}dj+\int _{0}^{N}\left [ \left ( 1-\tau _{B}\right ) i_{j}^{B}-\sigma _{j}\right ] \frac{B_{j}^{F}}{P}dj+\int _{0}^{N}\left ( 1-\tau _{V}\right ) \frac{\dot{V}_{j}}{P}E_{j}dj-\left ( 1+\tau _{c}\right ) c-\frac{T}{P},$$ and the CIA constraint, (3) $$\frac{M}{P}\geqq c,$$ where $\rho$ is the time preference rate, $\delta$ measures the relative preference weight of leisure to consumption, $N$ is the mass of intermediate goods (the number of intermediate-good firms), $\ tau _{L}$ is the tax rate imposed on labor income, $\tau _{c}$ is the tax rate imposed on consumption, $\tau _{D}$ is the tax rate imposed on the dividend incomes from outstanding equities, $\tau _ {B}$ is the tax rate imposed on the yield incomes from government and corporate bonds, $\tau _{V}$ is the tax rate imposed on the capital gains of outstanding equities (i.e. $\dot{V}_{j}E_{j}$ ), and $\sigma _{j}$ is the agency cost of debt. In line with Osterberg (Reference Osterberg1989), households, as indicated in (2), incur an extra cost due to the risk associated with holding corporate bonds $\sigma _{j}$ , compared with holding equities. The extra cost $\sigma _{j}$ may stem from the default risk of private firms or the potential monitoring cost of debt issued by private firms. These potential costs of debt are attributable to the so-called agency cost, as stressed by Jensen and Meckling (Reference Jensen and Meckling1976) and Leland (Reference Leland1994). There is a costly financial contractual relationship—the difference in interests and the existence of information asymmetry—between debt holders (households) and debtors (corporations). Because corporate managers in general have more information about the prospects of the business compared to debt holders, they have incentives to misreport the true cash flow, and the debt holders (households) attempt to take various preventive measures to monitor the actions of the debtors (corporations). As a result, it is plausible that any additional resources devoted to increasing the intensity of monitoring of debtors or decreasing the conflicting interests between debtors and debt holders can be treated as the agency cost of debt $\sigma _{j}$ . Equities may also give rise to a similar agency cost between shareholders and corporations, but equity is better informed about the firm’s financial structure (Habib and Johnsen (Reference Habib and Johnsen2000)). In (2), the agency cost is specified as an extra risk cost of holding corporate bonds, compared with holding equities. This “relative agency cost” implies that debt holders (households) tolerate a higher risk cost if the debtors’ (corporations’) leverage (capital structure) relies more on debt financing, rather than equity financing. 
Our results, however, are robust even though the agency costs of both debt and equity holdings are separately included. Following Osterberg (Reference Osterberg1989) specification, we establish the following assumption: Assumption 1. The relative agency cost of debt to equity $\sigma _{j}$ is increasing and convex in the corporation’s debt-equity ratio, denoted by $\lambda _{j}(\!=B_{j}^{F}/V_{j}E_{j})$ , that is, $ \sigma _{j}^{\prime }(\!=\partial \sigma _{j}/\partial \lambda _{j})\gt 0$ and $\sigma _{j}^{\prime \prime }(\!=\partial (\sigma _{j})^{2}/\partial ^{2}\lambda _{j})\gt 0$ . Assumption 1 is consistent with the “costly contracting hypothesis” of Smith and Warner (Reference Smith and Warner1979) in the sense that the presence of bond covenants can be viewed as a method of controlling the conflict between debt holders and debtors, and bond covenants are negotiated to restrict the level of debt for a given value of equity. Thus, the higher the debt-equity ratio $\lambda _{j}$ , the more likely it is that the covenant will be violated, resulting in restrictions for investors (debt holders) on their investment activities in relation to corporate bonds. In the handbook of the economics of finance, Stein (Reference Stein2003) claims that such a reduced-form specification can capture the costly external finance, although it may appear ad hoc. The specification of agency cost shares the merit with a variant of the Townsend (Reference Townsend1979) and Gale and Hellwig (Reference Gale and Hellwig1985) costly state verification models, as shown by Froot et al. ( Reference Froot, Scharfstein and Stein1993), and an appropriately parameterized version of the Myers and Majluf (Reference Myers and Majluf1984) adverse-selection model, as shown by Stein (Reference In the analysis, we focus on the symmetric equilibrium. Thus, we can impose symmetry across firms to keep the notation simple. Let $\eta$ be the shadow price associated with the real budget constraint and $\zeta$ be the Lagrangian multiplier of the CIA constraint. The necessary conditions, in real terms, for this optimization problem are summarized as follows: (4) $$\frac{\delta c}{1-L}=\frac{1-\tau _{L}}{\left ( 1+\tau _{c}\right ) +\left ( 1-\tau _{B}\right ) \overline{i}}w,$$ (5) $$\frac{\zeta }{\eta }=\left ( 1-\tau _{B}\right ) \overline{i},$$ (6) $$m=c,$$ (7) $$\frac{\overset{\centerdot }{c}}{c}=-\frac{\overset{\centerdot }{\eta }}{\eta }=\left ( 1-\tau _{B}\right ) \overline{i}-\pi -\rho,$$ (8) $$\left ( 1-\tau _{B}\right ) \overline{i}=(1-\tau _{B})i^{B}-\sigma\! \left ( \lambda \right ) =(1-\tau _{D})i^{E}+\left ( 1-\tau _{V}\right ) \frac{\overset{\centerdot }{V}}{V},$$ and the transversality conditions are as follows: \begin{equation*} \lim _{t\rightarrow \infty }\eta m\cdot e^{-\rho t}=\lim _{t\rightarrow \infty }\eta b^{G}\cdot e^{-\rho t}=\lim _{t\rightarrow \infty }\eta b^{F}\cdot e^{-\rho t}=\lim _{t\ rightarrow \infty }\eta vE\cdot e^{-\rho t}=0, \end{equation*} where $w(\!=\frac{W}{P})$ is the real wage, $\pi (\!=\frac{\dot{P}}{P})$ is the inflation rate, $m(\!=\frac{M}{P})$ are real money balances, $b^{G}(\!=\frac{B^{G}}{P})$ are real government bonds, $v (\!=\frac{V}{P})$ is the relative price of equities to final goods, and $b^{F}(\!=\frac{B^{F}}{P})$ are real corporate bonds. Equation (4) describes how the household trades off consumption and leisure at the real tax-adjusted wage $\frac{1-\tau _{L}}{\left ( 1+\tau _{c}\right ) +\left ( 1-\tau _{B}\right ) \overline{i}}w$ . 
Equation (5) refers to the optimal condition for real money holdings, which equates the shadow price of real money balances to its opportunity cost, that is, the after-tax nominal yield on government bonds $ ( 1-\tau _{B} ) \overline{i}$ . While (6) is the CIA constraint, (7) refers to the consumption Euler equation. Equation (8) is a no-arbitrage condition, indicating that all the rates of after-tax yields on government bonds $ ( 1-\tau _{B} ) \ overline{i}$ , on corporate bonds $(1-\tau _{B})i^{B}-\sigma$ , and on equities $(1-\tau _{D})i^{E}+\left ( 1-\tau _{V}\right ) \frac{\overset{\centerdot }{V}}{V}$ must be equal. In our paper, government bonds are treated as a risk-free asset (such as Blanchard (Reference Blanchard1993)), and their return rate $\overline{i}$ can thus be viewed as the benchmark return for which households are willing to supply loanable funds (i.e. hold risk assets including equities and corporate bonds). Thus, from the no-arbitrage condition (8), we can further obtain the nominal rate of returns on corporate bonds, $i^{B}$ , and the capital gain or loss stemming from a change in the equity price, $\frac{\overset{\centerdot }{V}}{V}$ : (9) $$i^{B}=\overline{i}+\frac{\sigma \!\left ( \lambda \right ) }{1-\tau _{B}},$$ (10) $$\frac{\overset{\centerdot }{V}}{V}=\frac{1}{1-\tau _{V}}\left [ \left ( 1-\tau _{B}\right ) \overline{i}-(1-\tau _{D})i^{E}\right ],$$ Equation (9) indicates that the nominal rate of return no corporate bonds equals the sum of the nominal yield on riskless government bonds $\overline{i}$ and its risk premium adjusted by the corporate bond tax $\frac{\sigma }{1-\tau _{B}}$ . Equation (10) then indicates that the capital gain (loss) from the equity price appreciation (depreciation) is the wedge between the after-tax yield rate on government bonds $\frac{1-\tau _{B}}{1-\tau _{V}}\overline{i}$ and that on corporate equities $\frac{1-\tau _{D}}{1-\tau _{V}}i^{E}$ . 2.2. Firms The final-good and intermediate-good sectors make up the production side. To focus on R&D activities, physical capital is abstracted from the production of final and intermediate goods, for 2.2.1. Final-good firms In line with Romer (Reference Romer1990) and Aghion and Howitt (Reference Aghion and Howitt2005), we assume that in the final-good sector competitive firms produce a homogeneous final good $y$ . The final good, as in Peretto (Reference Peretto2007), can be consumed, used to produce intermediate goods, invested in R&D that raises the quality of existing intermediate goods, or invested in the creation of new intermediate goods. Final goods are produced by using labor $l_{j}$ (with the production share/elasticity $1-\theta$ ) and a continuum of intermediate goods $x_{j}$ , $j\in (0,N)$ (with the production share/elasticity $\theta$ ), according to the following Cobb−Douglas production technology: (11) $$y=\int _{0}^{N}x_{j}^{\theta }(A_{j}l_{j})^{1-\theta }dj, \, 0\lt \theta \lt 1,$$ where $A_{j}$ is the productivity parameter of workers $l_{j}$ (which are associated with the use of intermediate goods $x_{j}$ ). Specifically, $A_{j}$ depends on good $j$ ’s quality $z_{j}$ and on average quality $Z=\frac{1}{N}\int _{0}^{N}z_{j}dj$ (which captures the positive externality of R&D) according to: \begin{equation*} A_{j}=z_{j}^{\alpha }Z^{1-\alpha }, \, 0\lt \alpha \lt 1. 
\end{equation*} By defining $P_{x_{j}}$ as the price of intermediate goods, the final-good firms’ profit maximization problem is given by: \begin{equation*} \underset {x_{j},l_{j}}{\max }\!\left ( Py-\int _{0}^{N}P_{x_{j}}x_{j}dj-\int _{0}^{N}Wl_{j}dj\right ). \end{equation*} Thus, the first-order conditions for the final-good firms are as follows: (12) $$\theta Px_{j}^{\theta -1}(z_{j}^{\alpha }Z^{1-\alpha }l_{j})^{1-\theta }=P_{x_{j}},$$ (13) $$(1-\theta )Px_{j}^{\theta }(z_{j}^{\alpha }Z^{1-\alpha }l_{j})^{1-\theta }\frac{1}{l_{j}}=W.$$ Equation (12) is the demand function of the final-good firm for intermediate goods $x_{j}$ showing that the value of the marginal product of intermediate good $j$ equals its price, $P_{x_{j}}$ . Equation (13) is the demand function of the final-good firm for labor $l_{j}$ showing that the value of the marginal product of labor equals the wage rate, $W$ . 2.2.2. Intermediate-good firms There are two dimensions of technology change in the intermediate-good (or corporate) sector. In the vertical dimension, incumbents engage in in-house R&D to raise the quality of their products and earn higher profits. In the horizontal dimension, entrepreneurs make entry decisions and compete with incumbents for market share. The introduction of new firms (firm entry) expands the variety of intermediate goods (the number of firms) $N$ . Following Peretto (Reference Peretto2011), intermediate-good firm $j$ produces its differentiated good with a technology that requires one unit of final output per unit of intermediate good and a fixed operating cost $\phi Z$ . Moreover, the intermediate firm increases its product quality according to the technology: (14) $$\dot{z}_{j}=I_{j},$$ where $I_{j}$ is the R&D investment in terms of final goods. The R&D investment can be financed by either internal funds (retained earnings $R_{j}$ ) or external funds (issuing new corporate bonds $\ dot{B}_{j}^{F}$ and new equities $V_{j}\dot{E}_{j}$ ). Thus, the financing constraint facing an intermediate firm is as follows: (15) $$PI_{j}=R_{j}+V_{j}\dot{E}_{j}+\dot{B}_{j}^{F}.$$ Notice that Peretto (Reference Peretto2007, Reference Peretto2011) studies abstract from the possibility of external funds. The consideration of the external funds enables firms to collect funds by issuing equities and corporate bonds to households, and the debt-equity ratio $\lambda _{j}(\!=B_{j}^{F}/V_{j}E_{j})$ can be thereby determined optimally (see below). Define $Q$ as the average price of product quality $Z$ , that is, $Q=\frac{1}{N}\int _{0}^{N}Q_{j}dj$ , where $Q_{j}$ is the price of quality $z_{j}$ . Accordingly, intermediate-good firm $j$ ’s pretax gross profits are given by: (16) $$\Pi _{j}=\left ( P_{x_{j}}-P\right ) x_{j}-Q\phi Z-i_{j}^{B}B_{j}^{F}.$$ The firm’s pretax profits equal total revenues $P_{x_{j}}x_{j}$ minus total production costs $Px_{j}+Q\phi Z$ and the interest payment for corporate bonds $i_{j}^{B}B_{j}^{F}$ . Let $\tau _{\Pi }$ be the corporate tax rate. Thus, the post-tax gross profits are either transferred to stockholders as dividends $D_{j}$ or become the firm’s internal funds as retained earnings $R_{j}$ . That is, (17) $$(1-\tau _{\Pi })\Pi _{j}=D_{j}+R_{j}.$$ In line with Turnovsky (Reference Turnovsky2000, chs. 9 and 10), we assume that the intermediate-good firms offer a fixed dividend yield to stockholders, that is, $i^{E}=\frac{D_{j}}{V_{j}E_{j}}$ . This assumption enables us to isolate the dividend policy from the firm’s investment decisions (see, e.g. 
Myers and Majluf (Reference Myers and Majluf1984)), which allows us to easily construct the firm’s objective function and to attach more attention to the investment effect of an endogenous debt-equity ratio. The intermediate firm’s market value of total assets $\Omega _{j}$ is the sum of the market value of its equities $V_{j}E_{j}$ and debts $B_{j}^{F}$ , that is, $\Omega _{j}=V_{j}E_{j}+B_{j}^{F}$ . Differentiating $\Omega _{j}$ with respect to time, and utilizing (15), (17), and (8) yield: (18) $$\overset{.}{\Omega }_{j}=\Gamma _{j}\Omega _{j}-\omega _{j}.$$ In (18), as in Osterberg (Reference Osterberg1989), $\omega _{j}$ is defined as the firm’s post-tax cash flow and $\Gamma _{j}$ is the (nominal) WACC. Specifically, (19) $$\omega _{j}=(1-\tau _{\Pi })\left [ \left ( P_{x_{j}}-P\right ) x_{j}-Q\phi Z\right ] -PI_{j},$$ (20) $$\Gamma _{j}=\frac{1}{1\text{+}\lambda _{j}}C^{E}\text{+}\frac{\lambda _{j}}{1\text{+}\lambda _{j}}C^{B}=\frac{1}{1\text{+}\lambda _{j}}\left ( \frac{1-\tau _{B}}{1-\tau _{V}}\overline{i}+\frac {\tau _{D}-\tau _{V}}{1-\tau _{V}}i^{E}\right ) \text{+}\frac{\lambda _{j}}{1\text{+}\lambda _{j}}\left ( 1-\tau _{\Pi }\right ) \left ( \overline{i}+\frac{\sigma _{j}}{1-\tau _{B}}\right ),$$ where $C^{E}=\left ( \frac{1-\tau _{B}}{1-\tau _{V}}\overline{i}+\frac{\tau _{D}-\tau _{V}}{1-\tau _{V}}i^{E}\right )$ and $C^{B}=\left ( 1-\tau _{\Pi }\right ) \left ( \overline{i}+\frac{\sigma _ {j}}{1-\tau _{B}}\right )$ represent the cost of equity capital and the cost of debt capital, respectively. While the net cash flow (19) is related to the firm’s production, the nominal WACC is related to the firm’s financial structure.Footnote ^9 It is well documented in the finance literature (e.g. Arditti (Reference Arditti1973)) that the WACC plays a crucial role in determining a firm’s financial (capital) structure: it measures the user cost of capital for perpetuity companies, which is a decisive criterion in investment decision-making. In our model, the nominal WACC is a weighted average cost of issuing equity (equity capital) $C^{E}$ and issuing corporate bonds (debt capital) $C^{B}$ , with the weights being given by their relative structures $\frac{1}{1+\lambda _{j}}$ and $ \frac{\lambda _{j}}{1+\lambda _{j}}$ , respectively. The cost of equity capital consists of the tax-adjusted opportunity cost of issuing equity $\frac{1-\tau _{B}}{1-\tau _{V}}\overline{i}$ and the net tax burden on dividends $\frac{\tau _{D}-\tau _{V}}{1-\tau _{V}}i^{E}$ . The cost of debt capital is made up of the tax-adjusted opportunity cost of issuing corporate bonds $(1-\tau _{\Pi })\ overline{i}$ and the tax-adjusted agency cost of holding corporate bonds $\frac{1-\tau _{\Pi }}{1-\tau _{B}}\sigma _{j}$ . With the interest rate on risk-free government bonds as the benchmark return for which households are willing to supply loanable funds (i.e. hold risk assets including equities and corporate bonds), the nominal WACC can be alternatively expressed as: $\Gamma =\overline{i}+\ frac{1}{1+\lambda _{j}}(C^{E}-\overline{i})+\frac{\lambda _{j}}{1+\lambda _{j}}(C^{B}-\overline{i})$ , which conveys the viewpoint of Bernanke and Gertler (Reference Bernanke and Gertler1995) in the sense that the firm’s cost of capital consists of the riskless interest rate $\overline{i}$ and the weighted wedges between the cost of equity capital $(C^{E}-\overline{i})$ and that of debt capital $(C^{B}-\overline{i})$ . 
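As a purely numerical illustration of how the WACC in (20) varies with the leverage ratio, consider the following sketch (my own, in Python; the tax rates, yields, and the quadratic agency-cost function sigma(lambda) = sigma0 * lambda**2, chosen to satisfy Assumption 1, are made-up illustrative values and not the paper's calibration):

def wacc(lam, i_bar, i_E, tau_D, tau_B, tau_V, tau_Pi, sigma0):
    """Nominal WACC from equation (20), with sigma(lambda) = sigma0*lambda**2
    as an illustrative agency-cost function."""
    sigma = sigma0 * lam ** 2
    cE = ((1 - tau_B) / (1 - tau_V)) * i_bar + ((tau_D - tau_V) / (1 - tau_V)) * i_E
    cB = (1 - tau_Pi) * (i_bar + sigma / (1 - tau_B))
    return cE / (1 + lam) + lam * cB / (1 + lam)

# Illustrative numbers only:
for lam in (0.0, 0.5, 1.0, 2.0):
    print(lam, wacc(lam, i_bar=0.05, i_E=0.04, tau_D=0.20, tau_B=0.15,
                    tau_V=0.10, tau_Pi=0.25, sigma0=0.02))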
Most notably, in our model firms collect funds via issuing equities and corporate bonds, and, accordingly, their capital structure (the debt-equity ratio) is endogenously determined. Through the financial channel (or the balance sheet channel), any fiscal ( $\tau _{D},\tau _{B},\tau _{\Pi },\tau _{V}$ ) or monetary policy ( $\overline{i}$ ) which affects the firm’s WACC will influence economic growth. To construct the firm’s objective function, we solve (18) for $\Omega _{j}(t)$ . Accordingly, the intermediate-good firm’s objective is assumed to be its initial market value of total assets $\Omega _{j}(0)$ , as in Osterberg (Reference Osterberg1989) and Turnovsky (Reference Turnovsky1990), as follows:Footnote ^10 (21) $$\Omega _{j}(0)=\int olimits _{0}^{\infty }\omega _{j}(t)e^{-\int olimits _{0}^{t}\Gamma _{j}d\xi }dt.$$ Equation (21) indicates that the market value of total assets $\Omega _{j}(0)$ reflects the discounted value of the firm’s lifetime post-tax cash flow. It should be noted that $\Gamma _{j}$ is a function of variables related to financial structure, whereas $\omega _{j}$ is a function of variables related to production. Thus, the intermediate-good firm can make its optimal choice based on the following sequential procedure. As in Osterberg (Reference Osterberg1989), subject to the evolution of quality (14) and (15) and given the initial values of $z_{j}(0)$ , $B_{j}^{F}(0)$ , and $E_{j} (0)$ , the intermediate-good firm first chooses $x_{j}$ , $P_{x_{j}}$ , $I_{j}$ , and $z_{j}$ to maximize (21) and then chooses $\lambda _{j}$ to minimize the nominal WACC of (20). Under the symmetric equilibrium, the optimality conditions for a typical intermediate-good firm’s optimization problem are given by: (22) $$x=\theta ^{\frac{2}{1-\theta }}z^{\alpha }Z^{1-\alpha }l,$$ (23) $$P_{x}=\frac{P}{\theta },$$ (24) $$Q=P,$$ (25) $$(1-\tau _{\Pi })\alpha \frac{1-\theta }{\theta }\frac{x}{z}=\Psi ^{In}=\Gamma -\pi,$$ (26) $$\tilde{\tau }\!\left ( 1-\tau _{B}\right ) \overline{i}+\left ( \tau _{D}-\tau _{V}\right ) i^{E}=\left ( 1-\tilde{\tau }\right ) \left [ \sigma +\sigma ^{\prime }\cdot \lambda \!\left ( 1+\ lambda \right ) \right ],$$ (14), (15), and the transversality conditions \begin{equation*} \underset {t\rightarrow \infty }{\lim }B^{F}e^{-\int \nolimits _{0}^{t}\Gamma d\xi }=\underset {t\rightarrow \infty }{\lim }VEe^{-\int \nolimits _{0}^{t}\Gamma d\xi }=\underset {t\ rightarrow \infty }{\lim }Qze^{-\int \nolimits _{0}^{t}\Gamma d\xi }=0, \end{equation*} where $0\lt \tilde{\tau }=1-\frac{\left ( 1-\tau _{\Pi }\right ) \left ( 1-\tau _{V}\right ) }{1-\tau _{B}}\lt 1$ is defined as the effective tax advantage of issuing debt and $\Psi ^{In}=(1-\tau _{\ Pi })\alpha \frac{1-\theta }{\theta }\frac{x}{z}$ is denoted as the incumbent’s tax-adjusted marginal product of raising product quality. Equation (22) describes how the intermediate-good firm decides its optimal output $x$ . The intermediate-good firm’s pricing rule (23) indicates that the price of the intermediate goods $P_{x}$ decreases with the final-good production elasticity with respect to intermediate goods $\theta$ . Equation (24) equates the price of final goods to the average price of product quality, which reflects the fact that final goods can be either consumed or invested in R&D that raises the quality of intermediate goods. 
Equation (25) refers to the equality between the marginal product of quality and the user cost of capital; that is, the tax-adjusted marginal product of quality $\Psi ^{In}$ equals the real WACC (net of the inflation rate) $\Gamma -\pi$. Equation (26) pins down the optimal debt-equity ratio, indicating that the relative advantages of debt (corporate bonds) to equity financing should be balanced by the disadvantage of debt stemming from its agency costs. In particular, the relative advantages of debt to equity financing (the LHS of (26)) consist of two components. One is the tax shield effect (captured by $\tilde{\tau } ( 1-\tau _{B} ) \overline{i}$), first raised by Modigliani and Miller (Reference Modigliani and Miller1963). Intuitively, a higher corporate tax $\tau _{\Pi }$ induces the intermediate-good firm to raise its debt-equity ratio $\lambda$ because the interest payment on corporate bonds, as shown in (16) and (17), reduces the firm’s profits but is exempt from the corporate tax. Therefore, issuing corporate bonds provides a tax shield that lowers the firm’s taxable income and hence its corporate tax burden. The tax shield effect is crucial for our analysis. For example, the tax shield effect plays an important role in shaping the growth effect of monetary policy implemented through a nominal interest rate peg $\overline{i}$. The other component reflects the cost-efficiency effect of issuing corporate bonds (captured by $ ( \tau _{D}-\tau _{V} ) i^{E}$). In the presence of a higher dividend income tax (net of the tax rate imposed on the capital gains of outstanding equities), households are inclined to hold more corporate bonds and fewer equities due to a lower return on equities. This implies, as shown in (20), that the user cost of equity capital $C^{E}=\left ( \frac{1-\tau _{B}}{1-\tau _{V}}\overline{i}+\frac{\tau _{D}-\tau _{V}}{1-\tau _{V}}i^{E}\right )$ becomes higher as $ ( \tau _{D}-\tau _{V} )$ rises. As a result, firms can collect external funds more efficiently by issuing corporate bonds. In practice, the tax shield effect must be substantially large so that firms are willing to use relatively costly debt ($\lambda \gt 0$) as their external funds to engage in investment (see Strulik (Reference Strulik2003, Reference Strulik2008) for a more detailed illustration). The importance of the tax shield is supported by empirical studies, such as Bradley et al. (Reference Bradley, Jarrell and Kim1984) and Booth et al. (Reference Booth, Aivazian, Demirguc-Kunt and Maksimovic2001). In our model, we assume a substantially large tax shield effect (a sufficient but not necessary condition) in order to ensure a non-negative ratio of debt to equity. Accordingly, we have: Assumption 2. (Interior solution for the optimal debt-equity ratio) \begin{equation*} \tilde {\tau }\!\left ( 1-\tau _{B}\right ) \overline {i}+\left ( \tau _{D}-\tau _{V}\right ) i^{E}\gt 0.
\end{equation*} By the implicit-function theorem, we can use (26) with Assumption 2 to obtain the optimal debt-equity ratio, denoted by $\tilde{\lambda }$:Footnote ^11 (27) $$\tilde{\lambda }=F\!\left ( \underset{+}{\tau _{D}},\underset{+}{\tau _{\Pi }},\underset{-}{\tau _{B}},\underset{\pm }{\tau _{V}},\underset{+}{\overline{i}}\right ).$$ It easily follows from (20) that the user cost of issuing equity $C^{E}$ increases with the dividend income tax $\tau _{D}$ and decreases with the corporate bond income tax $\tau _{B}$, while the user cost of issuing corporate bonds $C^{B}$ decreases with the corporate tax $\tau _{\Pi }$ and increases with the corporate bond income tax $\tau _{B}$. Thus, to minimize the WACC, the debt-equity ratio $\tilde{\lambda }$ is positively related to $\tau _{D}$ and $\tau _{\Pi }$ but negatively related to $\tau _{B}$. Since the tax on the capital gains of outstanding equities $\tau _{V}$ may increase (via the tax-adjusted opportunity cost of issuing equity) or decrease (via the net tax burden on dividends) the user cost of issuing equity $C^{E}$, the relationship between $\tilde{\lambda }$ and $\tau _{V}$ is ambiguous. In addition, a higher nominal interest rate $\overline{i}$ induces firms to rely more on debt financing, rather than on equity financing, in the presence of the tax shield effect ($0\lt \tilde{\tau }\lt 1$). Finally, substituting (26) and (27) into (20) allows us to further obtain the optimal nominal WACC, denoted by $\tilde{\Gamma }$, as follows: (28) $$\tilde{\Gamma }=\left ( 1-\tau _{\Pi }\right ) \left [ \overline{i}+\frac{\sigma (F(\tau _{D},\tau _{\Pi },\tau _{B},\tau _{V},\overline{i}))+F(\tau _{D},\tau _{\Pi },\tau _{B},\tau _{V},\overline{i})\cdot \sigma ^{\prime }(F(\tau _{D},\tau _{\Pi },\tau _{B},\tau _{V},\overline{i}))}{1-\tau _{B}}\right ].$$ By following Peretto (Reference Peretto2007, Reference Peretto2011), setting up a firm is assumed to require $\beta z$ units of final output, where $\beta \gt 1$, capturing the fact that entrants have to pay additional setup costs ($P\beta z$) that incumbents have already paid. Upon paying this additional setup cost, new firms introduce a new good and engage in Bertrand competition with the incumbent monopolist. The entry of new firms thus expands product variety. Similar to incumbents, new entrants can issue equity and debt to finance their entry. While this specification is realistic, it is different from a common treatment in the literature; for example, Peretto (Reference Peretto2007, Reference Peretto2011) assumes that, for simplicity, new firms finance their entry by issuing equity only. In the spirit of Modigliani and Miller (Reference Modigliani and Miller1958, p. 268), the funds raised by an entrant equal the expected market value of the firm (i.e. $\Omega$, the discounted value of the firm’s lifetime net cash flow, as shown in (21)), which is the sum of the values of its equity and debt.Footnote ^12 That is, the post-entry profit that accrues to an entrant is given by the expression derived for a typical incumbent, as in Peretto (Reference Peretto2007, Reference Peretto2011). With an endogenous financial structure, if the discounted value of the entrant’s lifetime net cash flow (profit stream) $\Omega$ is larger (resp. less) than its setup cost, entry is positive (resp. negative). In equilibrium the free-entry (no-arbitrage) condition holds, that is: (29) $$\Omega =P\cdot \beta z.$$ Note that $\frac{1}{\beta }$, as we will see later, captures the entrant’s relative R&D productivity.
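To make (26)–(28) concrete, the sketch below (ours, not the authors' computational routine) numerically recovers the optimal debt-equity ratio from (26) and then evaluates the optimal nominal WACC (28), assuming the agency-cost form $\sigma (\lambda )=a_{0}\lambda ^{1+a_{1}}$ that is adopted later in the calibration of Section 4.1; the numerical inputs are only illustrative, chosen to be of the same magnitude as that benchmark.

```python
# Illustrative solver for the capital-structure condition (26) and the optimal
# WACC (28), assuming sigma(lam) = a0 * lam**(1 + a1) as in Section 4.1.
from scipy.optimize import brentq

def optimal_debt_equity(i_bar, i_E, tau_D, tau_Pi, tau_B, tau_V, a0, a1):
    tau_tilde = 1 - (1 - tau_Pi) * (1 - tau_V) / (1 - tau_B)       # effective tax advantage of debt
    lhs = tau_tilde * (1 - tau_B) * i_bar + (tau_D - tau_V) * i_E  # Assumption 2: lhs > 0

    def foc(lam):                                                  # equation (26): lhs - rhs = 0
        sigma = a0 * lam ** (1 + a1)
        sigma_p = a0 * (1 + a1) * lam ** a1
        return lhs - (1 - tau_tilde) * (sigma + sigma_p * lam * (1 + lam))

    # The right-hand side of (26) is increasing in lam, so the root is unique.
    return brentq(foc, 1e-8, 1e3)

def optimal_wacc(lam, i_bar, tau_Pi, tau_B, a0, a1):
    """Equation (28): Gamma_tilde = (1 - tau_Pi) * [i_bar + (sigma + lam*sigma')/(1 - tau_B)]."""
    sigma = a0 * lam ** (1 + a1)
    sigma_p = a0 * (1 + a1) * lam ** a1
    return (1 - tau_Pi) * (i_bar + (sigma + lam * sigma_p) / (1 - tau_B))

if __name__ == "__main__":
    lam = optimal_debt_equity(i_bar=0.0655, i_E=0.0362, tau_D=0.35, tau_Pi=0.335,
                              tau_B=0.245, tau_V=0.2, a0=0.0577, a1=0.033)
    print(lam, optimal_wacc(lam, 0.0655, 0.335, 0.245, 0.0577, 0.033))
    # With these inputs the output should be roughly 0.228 and 0.066, i.e. close to
    # the debt-equity ratio and WACC targeted in the Section 4.1 benchmark.
```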
To derive the equity price evaluated in the financial market, we take logs and time derivatives of (29). Substituting (14), (15), (16), (17), and (24) into the resulting equation, we have: (30) $$\frac{\overset{\centerdot }{V}}{V}=\left ( 1+\lambda \right ) \left ( \frac{\beta -1}{\beta }\frac{\dot{z}}{z}+\pi +\frac{1-\tau _{\Pi }}{\beta }\frac{\Pi }{Pz}\right ) -i^{E}.$$ Equation (30) illustrates the evolution of the reservation equity price under endogenous entry. If the issue price of the equity is higher (lower) than the reservation equity price, the funds raised will (will not) be large enough to cover the setup cost, leading the entrant to enter (stay out of) the market. It essentially describes how entrants (new firms) demand loanable funds. 2.3. The government (monetary authority) A nominal interest rate peg is implemented by targeting the nominal level of the interest rate on government bonds $\overline{i}$. By letting the growth rate of money be $\mu =\dot{M}/M$, the evolution of real money balances is $\frac{\dot{m}}{m}=\mu -\pi$. Accordingly, the government (monetary authority) endogenously adjusts the money growth rate $\mu$ (by purchasing/selling government bonds in the open market) to whatever level is needed for the targeted interest rate $\overline{i}$ to prevail. In addition, the government runs a balanced budget. It spends government consumption $Pg$, provides lump-sum transfers $T$ to households, and also pays interest to government-bond holders $\overline{i}B^{G}$. To finance these expenditures, the government taxes households' consumption, labor income, government- and corporate-bond yields, dividend income, and capital gains, while it levies the corporate tax on the profits of intermediate-good firms. Besides, the government’s expenditures can be financed by issuing bonds and money. Thus, the government budget constraint is given by: (31) $$Pg+\overline{i}B^{G}=\tau _{c}Pc+\tau _{L}WL+\tau _{B}\left (\overline{i}B^{G}+\int \nolimits _{0}^{N}i_{j}^{B}B_{j}^{F}dj\right )+\tau _{D}\int \nolimits _{0}^{N}D_{j}dj+\tau _{V}\int \nolimits _{0}^{N}\dot{V}_{j}E_{j}dj+\tau _{\Pi }\int \nolimits _{0}^{N}\Pi _{j}dj+\dot{B}^{G}+\dot{M}+T.$$ In the endogenous growth model, we must assume that government consumption is proportional to the total output of final goods, that is, $g=\varphi y$ with $0\lt \varphi \lt 1$, so that it remains consistent with balanced growth rather than becoming negligible relative to output over time. 3. Competitive equilibrium The equilibrium, as noted above, is symmetric across firms, implying that the total labor force is $L=Nl$ and $Z=z$. Thus, the competitive equilibrium is defined as a tuple of paths for prices $\{ w,i^{B},\pi \} _{t=0}^{\infty }$, real allocations $\{ c,L,z,I,\mu \} _{t=0}^{\infty }$, real assets $\{ b^{G},b^{F},m,E \} _{t=0}^{\infty }$, the debt-equity ratio $\{ \lambda \} _{t=0}^{\infty }$, and policy variables $\{ \overline{i},g,\tau _{D},\tau _{\Pi },\tau _{B},\tau _{V},\tau _{c},\tau _{L},i^{E},T \} _{t=0}^{\infty }$ that satisfy the optimality conditions above and the market-clearing conditions below. By putting (2), (11), (15), (16), (17), and (31) together, we have the economy-wide resource constraint: (32) $$y=c+\varphi y+\sigma b^{F}N+\left ( x+\phi z+I\right ) N+\beta z\dot{N},$$ which is also the clearing condition for the final-good market. Note that the agency cost (captured by $\sigma b^{F}N$) absorbs real resources and therefore enters as a component of resource utilization in the economy’s resource constraint reported in (32).
From (22) with $L=Nl$ and $Z=z$, the clearing condition for the intermediate-good market satisfies: (33) $$\frac{x}{z}=\theta ^{\frac{2}{1-\theta }}\frac{L}{N}.$$ To extract intuition from (33), we define the average gross profit to a typical incumbent brought by a quality-improving invention as $\frac{\left ( P_{x}-P\right ) x}{Qz}$. From (23), (24), and (33), the ratio of the gross profit to quality can be expressed as: (34) $$\frac{\left ( P_{x}-P\right )\!x}{Qz}=\frac{1-\theta }{\theta }\frac{x}{z}=\frac{1-\theta }{\theta }\theta ^{\frac{2}{1-\theta }}\frac{L}{N}.$$ This equation indicates that a higher total labor force $L$ shifts out the conditional demand for the intermediate good and, accordingly, increases the ratio of the firm’s gross profit to quality. By contrast, a larger number of firms $N$ implies a lower market share per firm and thus decreases the gross profit. Note that because the firm’s market share is defined as $s_{j}=\frac{P_{x_{j}}x_{j}}{\int _{0}^{N}P_{x_{\varsigma }}x_{\varsigma }d\varsigma }$ (a certain firm $j$’s output divided by the industry-wide output), under the symmetric equilibrium the firm’s market share is $s=\frac{1}{N}$. This implies that the growth rate of the quality innovation ($\frac{\dot{z}}{z}$), as we will see from (38), depends on the firm size $\frac{L}{N}=l$, rather than the total labor force $L$, and the scale effect is thereby eliminated by the endogenous market structure. Moreover, from (4) and (13) with $L=Nl$ and $Z=z$, the clearing condition for the labor market is: (35) $$\frac{c}{z}=\frac{1}{\Theta }\theta ^{\frac{2\theta }{1-\theta }}\left ( 1-L\right ),$$ where $\Theta =\frac{\delta }{1-\theta }\frac{\left ( 1+\tau _{c}\right ) +\left ( 1-\tau _{B}\right ) \overline{i}}{1-\tau _{L}}$. With regard to the bond market, we can obtain the clearing condition for the corporate bond market from (26) and (9): (36) $$i^{B}=\overline{i}+\frac{1}{1-\tau _{B}}\sigma \!\left ( \tilde{\lambda }\right ),$$ indicating that the equilibrium return rate on corporate bonds is jointly determined by the demand for and supply of bonds issued by private intermediate firms. On the other hand, the government implements a nominal interest rate peg (at the level of $\overline{i}$) by purchasing/selling government bonds in the open market. This implies that the equilibrium condition of the government-bond market is given by (7) with the inflation rate $\pi =\Gamma -(1-\tau _{\Pi })\alpha \frac{1-\theta }{\theta }\frac{x}{z}$ obtained from (25): (37) \begin{eqnarray} \frac{\dot{c}}{c} &=&\left \{ \left [ \left ( 1-\tau _{B}\right ) \overline{i}-\pi \right ] -(\widetilde{\Gamma }-\pi )\right \} +\Psi ^{In}-\rho \\[5pt] &=&\left ( 1-\tau _{B}\right ) \overline{i}-\widetilde{\Gamma }+\left ( 1-\tau _{\Pi }\right ) \alpha \frac{1-\theta }{\theta }\frac{x}{z}-\rho, \notag \end{eqnarray} where $\widetilde{\Gamma }$ is the firm’s optimal nominal WACC reported in (28). Equation (37) is the “modified” consumption Euler equation in the presence of the financial (loanable funds) market, i.e. the equity and bond (government and corporate bond) markets. If the financial friction caused by agency costs and the endogenously determined financial structure are ignored, equation (37) reduces to a standard consumption Euler equation, $\frac{\dot{c}}{c}=\Psi ^{In}-\rho$, recalling that $\Psi ^{In}$ is the marginal product of raising product quality.
Instead, the modified consumption Euler equation contains an additional force: the return divergence between the supply of and demand for loanable funds, $\{ [ ( 1-\tau _{B} ) \overline{i}-\pi ] -(\widetilde{\Gamma }-\pi ) \}$. Under the no-arbitrage condition (8), $[ ( 1-\tau _{B} ) \overline{i}-\pi ]$ captures the household’s real return from supplying loanable funds. As for the intermediate-good firm, the real user cost of capital $(\widetilde{\Gamma }-\pi )$ can be thought of as the required real return for demanding loanable funds. Under Assumption 2, in the presence of financial friction the return divergence (wedge) between the supply of and demand for loanable funds, $ ( 1-\tau _{B} ) \overline{i}-\widetilde{\Gamma }$, will play an important role in affecting the consumption growth (or the balanced growth), as will be clear below.Footnote ^13 Finally, the equity market equilibrium is obtained by equating the demand for equities (i.e. equation (10)) to the supply of equities (substituting (16), (23), and (25) into (30)): (38) $$\frac{\dot{z}}{z}=\frac{1}{1-\frac{1}{\beta }}\left ( \Psi ^{In}-\Psi ^{En}\right ) =\frac{1-\tau _{\Pi }}{1-\frac{1}{\beta }}\left [ \left ( \alpha -\frac{1}{\beta }\right ) \frac{1-\theta }{\theta }\frac{x}{z}+\frac{\phi }{\beta }\right ],$$ where $\Psi ^{In}=(1-\tau _{\Pi })\alpha \frac{1-\theta }{\theta }\frac{x}{z}$ is the incumbent’s tax-adjusted marginal product of R&D (see (25)) and $\Psi ^{En}=\frac{1-\tau _{\Pi }}{\beta }(\frac{1-\theta }{\theta }\frac{x}{z}-\phi )$ is the counterpart entrant’s tax-adjusted marginal product of R&D. The quantity-quality ratio of intermediate goods $\frac{x}{z}$ affects not only the gross profit (see (34)) but also the firm size (see (33) with $\frac{L}{N}=l$). Note that the existence of the extra sunk cost weakens the entrant’s R&D productivity, which is captured by $\frac{1}{\beta }$. In the model, incumbents engage in in-house R&D (the vertical R&D) to raise the product quality for higher profits, which provides the incentive for quality innovation $z$. New firms (entrepreneurs) enter the market by engaging in variety-expanding R&D (the horizontal R&D), and the new products compete with those of the incumbents for market share. If the competition from new products decreases the (endogenously determined) incumbent’s market share, entry gives rise to a disincentive effect on the quality innovation $z$. Thus, the growth rate of $z$, as shown in (38), increases with $\Psi ^{In}$ but decreases with $\Psi ^{En}$. As noted above, the competition between incumbents and entrants, on the one hand, endogenizes the market structure (the firm size and the number of firms) and, on the other hand, eliminates the scale effect (i.e. the growth rate $\frac{\dot{z}}{z}$ depends on the firm size $\frac{L}{N}=l$, rather than on the total labor force $L$). As stressed by Laincz and Peretto (Reference Laincz and Peretto2006), the endogeneity of the market structure allows the proliferation of product varieties to reduce the effectiveness of R&D aimed at quality improvement, by causing it to be spread more thinly over a larger number of different products in the process of the development of new products. Thus, the scale effect is eliminated via product proliferation. The interaction between the quality-improving and variety-expanding R&D determines economic growth.
3.1. Balanced-growth-path equilibrium A nondegenerate BGP equilibrium is a tuple of paths such that each of the quantity variables $c$, $x$, $y$, $z$, $m$, $b^{G}$, and $b^{F}$ grows at a constant common rate, while the financial structure variables $\lambda$ and $\Gamma$, the price variables $\pi$ and $i^{B}$, working time $L$, and the number of the intermediate-good firms $N$ are constant and positive. All firms (incumbents and entrants) can access the financial market (both the equity market and the bond market) for external funding. To solve for the common balanced-growth rate, we define the consumption-output ratio $\hat{c}=\frac{c}{y}$ as the transformed variable. Under symmetric equilibrium, combining (33) and (11), together with $L=Nl$, yields $y=\theta ^{\frac{2\theta }{1-\theta }}zL$ and, accordingly, from the clearing condition for the labor market (35) we can derive the total labor force as a function of the consumption-output ratio: (39) $$L=\frac{1}{1+\Theta \hat{c}}.$$ Thus, substituting (39) into (33) yields the ratio of the quantity of the intermediate goods to the quality as follows: (40) $$\frac{x}{z}=\theta ^{\frac{2}{1-\theta }}\frac{1}{N}\frac{1}{1+\Theta \hat{c}}.$$ With (40), which is a function of $\hat{c}$ and $N$, the dynamic system of our model can be expressed by the following two differential equations in terms of $\hat{c}$ and $N$ (see Appendix A for the derivation): (41) $$\frac{\overset{\cdot }{\hat{c}}}{\hat{c}}=(1+\Theta \hat{c})\left \{ \left ( 1-\tau _{B}\right ) \overline{i}-\left [ \tilde{\Gamma }-\left ( 1-\tau _{\Pi }\right ) \alpha \frac{1-\theta }{\theta }\frac{x}{z}\right ] -\rho -\frac{1-\tau _{\Pi }}{1-\frac{1}{\beta }}\left ( \frac{\alpha \beta -1}{\beta }\frac{1-\theta }{\theta }\frac{x}{z}+\frac{\phi }{\beta }\right ) \right \},$$ (42) $$\frac{\dot{N}}{N}=\frac{1}{\beta }\left \{ \left [ \left ( 1-\hat{c}-\varphi \right ) \theta ^{-2}\frac{x}{z}-\frac{\beta \tilde{\lambda }\sigma (\tilde{\lambda })}{1+\tilde{\lambda }}\right ] -\left [ \frac{x}{z}+\phi +\frac{1-\tau _{\Pi }}{1-\frac{1}{\beta }}\left ( \frac{\alpha \beta -1}{\beta }\frac{1-\theta }{\theta }\frac{x}{z}+\frac{\phi }{\beta }\right ) \right ] \right \}.$$ It is clear from (27) and (28) that $\tilde{\Gamma }=\left ( 1-\tau _{\Pi }\right ) \left [ \overline{i}+\frac{\sigma (\tilde{\lambda })+\tilde{\lambda }\sigma ^{\prime }(\tilde{\lambda })}{1-\tau _{B}}\right ]$ where $\tilde{\lambda }=F(\tau _{D},\tau _{\Pi },\tau _{B},\tau _{V},\overline{i})$. Let the superscript “$\ast$” denote the stationary values of the relevant variables in the steady state in which $\overset{\cdot }{\hat{c}}=\dot{N}=0$. Once the steady-state values of $\hat{c}^{\ast }$ and $N^{\ast }$ are solved, the growth rate in the BGP equilibrium, denoted by $\gamma ^{\ast }$, is determined by (38) with (40). Accordingly, we arrive at: Proposition 1 (Existence and Uniqueness of the Equilibrium). Under Assumptions 1 and 2, there exists a nondegenerate, unique balanced-growth equilibrium of the monetary model with the endogenous debt-equity ratio and the WACC. The steady-state BGP equilibrium is locally determinate. Proof: See Appendix A.
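As a rough numerical companion to (39)–(42) (our sketch, not the paper's Appendix A routine), the steady-state conditions $\overset{\cdot }{\hat{c}}=\dot{N}=0$ can be coded and solved with a standard root finder; the optimal $\tilde{\lambda }$ and $\tilde{\Gamma }$ are taken as given inputs here, and all numerical values are placeholders rather than the Table 1 calibration.

```python
# Illustrative steady-state computation for the system (41)-(42).
# lam_t (optimal debt-equity ratio) and Gamma_t (optimal nominal WACC) are
# treated as given inputs; sigma() is the agency-cost schedule.
# Parameter values below are placeholders, not the paper's calibration.
from scipy.optimize import fsolve

def steady_state(p, sigma, lam_t, Gamma_t, guess=(0.7, 0.05)):
    Theta = p['delta'] / (1 - p['theta']) * \
            ((1 + p['tau_c']) + (1 - p['tau_B']) * p['i_bar']) / (1 - p['tau_L'])

    def residuals(v):
        c_hat, N = v
        L = 1 / (1 + Theta * c_hat)                            # eq. (39)
        xz = p['theta'] ** (2 / (1 - p['theta'])) * L / N      # eq. (40)
        # quality-growth term common to (41) and (42), cf. eq. (38)
        g = (1 - p['tau_Pi']) / (1 - 1 / p['beta']) * \
            ((p['alpha'] * p['beta'] - 1) / p['beta'] *
             (1 - p['theta']) / p['theta'] * xz + p['phi'] / p['beta'])
        psi_in = (1 - p['tau_Pi']) * p['alpha'] * (1 - p['theta']) / p['theta'] * xz
        r1 = (1 - p['tau_B']) * p['i_bar'] - (Gamma_t - psi_in) - p['rho'] - g       # braces of (41)
        r2 = (1 - c_hat - p['varphi']) * p['theta'] ** (-2) * xz \
             - p['beta'] * lam_t * sigma(lam_t) / (1 + lam_t) \
             - (xz + p['phi'] + g)                                                   # braces of (42)
        return [r1, r2]

    return fsolve(residuals, guess)

if __name__ == "__main__":
    params = dict(delta=1.4, theta=0.28, tau_c=0.05, tau_B=0.245, tau_L=0.256,
                  i_bar=0.065, tau_Pi=0.335, alpha=0.14, beta=7.0, phi=0.165,
                  rho=0.009, varphi=0.143)
    sigma = lambda lam: 0.058 * lam ** 1.03
    print(steady_state(params, sigma, lam_t=0.23, Gamma_t=0.066))  # (c_hat*, N*)
```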
As shown in Appendix A, we can solve the steady-state rates of growth $\gamma ^{\ast }$ and inflation $\pi ^{\ast }$ as follows: (43) $$\gamma ^{\ast }=\frac{1}{1-\frac{1}{\beta }}\left [ (\Psi ^{In})^{\ast }-(\Psi ^{En})^{\ast }\right ] =\frac{\alpha \beta -1}{1-\alpha }\left [ \Gamma ^{\ast }-(1-\tau _{B})\overline{i}+\rho + \frac{1-\tau _{\Pi }}{\beta -1}\phi \right ] +\frac{1-\tau _{\Pi }}{\beta -1}\phi.$$ (44) $$\pi ^{\ast }=\Gamma ^{\ast }-(\Psi ^{In})^{\ast }=\Gamma ^{\ast }-\frac{\alpha (\beta -1)}{1-\alpha }\left [ \Gamma ^{\ast }-(1-\tau _{B})\overline{i}+\rho +\frac{1-\tau _{\Pi }}{\beta -1}\phi \right ],$$ Equation (43) indicates that the productivity wedge of incumbent to entrant firms, $(\Psi ^{In})^{\ast }-(\Psi ^{En})^{\ast }$ , plays a key role in determining economic growth in equilibrium. Thus, any policy (regardless of fiscal or monetary policy) that increases the productivity wedge of incumbent to entrant firms will induce more expenditure on quality-improving R&D, which leads to a higher balanced-growth rate $\gamma ^{\ast }$ . Equation (44) conveys a straightforward result that the steady-state inflation rate rises as the incumbents’ optimal nominal WACC $\Gamma ^{\ast }$ (user cost of capital) increases, but it falls as the incumbents’ tax-adjusted marginal product of R&D $(\Psi ^{In})^{\ast }$ increases. This is because the incumbent firms set a higher (lower) price when their user cost of capital (tax-adjusted marginal product of R&D) is higher. Of particular note, while the balanced-growth rate depends on the productivity wedge between incumbents and entrants, the steady-state inflation is related only to the incumbents’ behaviors. This difference implicitly points to the existence of a mixed relationship, breaking down the conventional tradeoff between growth and inflation. In the next section, our comparative statics results will show the non-monotonic relationship and thus provide a reconciliation with the mixed relationship between inflation and growth found in the empirical studies. Next, we investigate the correlation between firms’ WACC and the economy’s balanced growth. Proposition 2 (WACC and balanced growth). In the BGP equilibrium with a positive agency cost of debt ( $\sigma (\lambda ^{\ast })\gt 0$ ), the balanced-growth rate ( $\gamma ^{\ast }$ ) can be either positively or negatively correlated with the WACC ( $\Gamma ^{\ast }$ ), depending upon the relative productivity parameter of incumbents to entrants, that is, $\alpha -1/\beta$ . Proof: See Appendix A. Recall that $\alpha$ is the incumbent’s output elasticity of R&D and $1/\beta$ can be thought of as the counterpart entrant’s R&D productivity (given that $\beta$ is the sunk cost parameter of a new firm). We also note from (25) that a firm is willing to bear a higher user cost of capital WACC if its R&D productivity is higher. In our model, all firms (incumbents and entrants) can access the financial market (both the equity market and the bond market) to raise external funds, and dividend payments allow households to re-assess the market value of firms. Thus, as stressed by Chetty and Saez (Reference Chetty and Saez2006) and Gourio and Miao (Reference Gourio and Miao2011), the financial market can reshuffle funds away from less productive firms toward other ventures with greater productivity. Proposition 2 indicates that the balanced-growth rate could be either positively or negatively related to the user cost of capital. 
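A simple way to see the source of this correlation (our observation, stated in partial-equilibrium terms): differentiating (43) with respect to $\Gamma ^{\ast }$ while holding the policy parameters in the bracket fixed gives
\begin{equation*}
\frac{\partial \gamma ^{\ast }}{\partial \Gamma ^{\ast }}=\frac{\alpha \beta -1}{1-\alpha },
\end{equation*}
which is positive if and only if $\alpha \gt 1/\beta$ (given $0\lt \alpha \lt 1$ and $\beta \gt 1$), in line with the sign condition stated in Proposition 2.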
To make our analytical result clearer, in what follows the economic intuition behind Proposition 2 is stated with the aid of a graphical exposition. Figure 1 indicates that the balanced-growth rate ( $\gamma ^{\ast }$ ) is determined by equating the growth rates of consumption ( $\frac{\dot{c}}{c}$ reported in (37)) and quality-improving R&D ( $\frac{\dot{z}}{z}$ reported in (38)). $\frac{\dot{c}}{c}$ measures the willingness of households to supply loanable funds, while $\frac{\dot{z}}{z}$ measures the willingness of firms to seek loanable funds from the equity and bond markets. In the $\alpha \lt 1/\beta$ scenario where entrants are more productive than incumbents, the $\frac{\dot{c}}{c}$ locus is upward sloping but the $\frac{\dot{z}}{z}$ locus is downward sloping. In the $\alpha \gt 1/\beta$ scenario where incumbents are more productive than entrants, both the $\frac{\dot{c}}{c}$ and $\frac{\dot{z}}{z}$ loci are upward sloping, with the $\frac{\dot{c}}{c}$ locus being steeper. In response to a higher WACC $\Gamma ^{\ast }$ , (37) indicates that the $\frac{\dot{c}}{c}$ locus shifts downwards. Intuitively, a higher nominal WACC $\Gamma ^{\ast }$ , other things being equal, leads households to expect higher inflation because the intermediate firms will set a higher price for passing through the increased user cost of capital. With higher inflationary expectations, households will decrease the supply of loanable funds because the real rate of return on financial assets decreases with higher inflation. Thus, a higher WACC enlarges the return divergence between the firm’s demand for loanable funds and the household’s supply of loanable funds, thereby decreasing the total amount of loanable funds in the financial market. As scarce loanable funds go to entrants that are more productive in the $\alpha \lt 1/\beta$ scenario, the effectiveness of incumbents’ R&D aimed at quality improvement is reduced. Economic growth then decreases with the WACC, as shown in Figure 1. By contrast, in the $\alpha \gt 1/\beta$ scenario where incumbents are more productive than entrants, although a higher WACC induces households to reduce their supply of loanable funds, the balanced growth increases, rather than decreases. This is because the financial market reshuffles loanable funds away from less productive firms toward more productive firms, that is, incumbents. Because innovations aimed at product quality improvement increase, Figure 1 shows that the steady-state growth $\gamma ^{\ast }$ and WACC $\Gamma ^{\ast }$ have a positive correlation in the $\alpha \gt 1/\beta$ scenario. In the model, a lower market amount of loanable funds does not necessarily imply a lower growth rate, although a higher WACC widens the return wedge between the demand for and the supply of funding. Growth can increase even though a higher WACC decreases loanable funds in the financial market. Proposition 2 provides an important implication, in that instead of the market amount of loanable funds, the allocation of loanable funds (between firms with various productivity levels) is the key for determining economic growth. The importance of intersectoral capital allocation has been pointed out in McKinsey’s report (by Lewis et al. (Reference Lewis, Agrawal, Buttgenbach, Findley, Jeddy, Petry, Kondo, Subramanian, Bőrsch-Supan, Huang and Greene1996)), which shows that although Japan and Germany had much higher investment rates, US investment was able to be allocated to more profitable (i.e. 
higher productivity) sectors, so national income was considerably greater in the United States. One may expect from the model that higher financial leverage (a higher debt-equity ratio) is not necessarily associated with higher growth, given that an optimal debt-equity ratio is achieved by WACC minimization. This ambiguity contradicts the so-called “financial accelerator effect” proposed by Bernanke and Gertler (Reference Bernanke and Gertler1995) and Bernanke et al. (Reference Bernanke, Gertler and Gilchrist1996) in the sense that higher financial leverage can stimulate more investment projects and boost economic growth. Again, the allocation of loanable funds matters to the relationship between financial leverage and economic growth. Empirical findings vary greatly and are sensitive to the scale of debt. For example, corporate debt may affect growth negatively in OECD countries (as shown in OECD (2017); Shah et al. (Reference Shah, Abdul-Majid and Karim2019)). Cecchetti et al. (Reference Cecchetti, Mohanty and Zampolli2011) find a threshold effect for the debt-growth relationship; while corporate debt is favorable to growth, it becomes harmful when its scale is too high. Similarly, Zhu et al. (Reference Zhu, Asimakopoulos and Kim2020) show that although the overall effect of the private credit to GDP ratio on innovation is positive, this effect is substantially lower when the ratio exceeds a certain level. Our model can easily recover the argument of the irrelevance of capital structure, as in Modigliani and Miller (Reference Modigliani and Miller1958). In a perfect financial market without any distortion caused by agency costs ($\sigma (\lambda ^{\ast })=0$) and the government’s tax interventions ($\tau _{D}=\tau _{\Pi }=\tau _{B}=\tau _{V}=0$), there is an identical cost for the firm to issue equities and corporate bonds, that is, $i^{E}=i^{B}=\overline{i}$ (see the no-arbitrage condition (8)). As a result, the WACC reduces to $\Gamma =\overline{i}$ regardless of the firm’s debt-equity ratio $\lambda$ and, accordingly, the return divergence vanishes and the balanced-growth rate is independent of the firm’s capital structure. This case of a perfect financial market thus vividly conveys the capital-structure irrelevance argument. Proposition 2 simply discusses the correlation between the WACC and the balanced-growth rate while remaining silent on their causality. For a further discussion, we shall examine how the government’s policy affects the firm’s financial structure and the economy’s growth. It is difficult, however, to obtain clear comparative statics results analytically (see Appendix B for the algebra of the comparative statics). In the next section, we will numerically conduct the comparative statics exercises based on a reasonable parameterization of the economy developed above. 4. Quantitative analysis In this section, we calibrate our model to the US economy and numerically evaluate the effects of both fiscal policy (by changing $\tau _{D}$, $\tau _{B}$, $\tau _{\Pi }$, and $\tau _{V}$) and monetary policy (by changing $\overline{i}$). 4.1. Calibration To start with, we provide a numerical characterization of the steady-state equilibrium based on a reasonable parameterization of the model economy delineated in the last section. We assume that the agency cost of debt follows the functional form $\sigma (\lambda )=a_{0}\lambda ^{1+a_{1}}$, where, to meet Assumption 1, we impose $a_{0}\gt 0$ and $a_{1}\gt 0$.
While $a_{0}$ is a scaling parameter, $a_{1}$ measures the sensitivity of the agency costs with respect to the firm’s debt-equity ratio $\lambda$. Accordingly, our calibration can be fully characterized by 17 parameters: $\delta$, $\rho$, $a_{0}$, $a_{1}$, $\bar{\imath }$, $\tau _{L}$, $\tau _{B}$, $\tau _{D}$, $\tau _{V}$, $\tau _{c}$, $\tau _{\Pi }$, $\varphi$, $\theta$, $\phi$, $i^{E}$, $\alpha$, and $\beta$. The benchmark parameter values are summarized in Table 1. The parameters we set are adopted from commonly used values in the literature or calibrated to match the empirical data. By following Peretto (Reference Peretto2011), we choose the dividend income tax rate $\tau _{D}=0.35$, the corporate tax rate $\tau _{\Pi }=0.335$, and the tax rate imposed on the capital gains of outstanding equities $\tau _{V}=0.2$. We choose the bond income tax rate $\tau _{B}=0.245$, which is in accordance with Gordon and Lee (Reference Gordon and Lee2001), Strulik (Reference Strulik2003), Gourio and Miao (Reference Gourio and Miao2011), and Strulik and Trimborn (Reference Strulik and Trimborn2010). In line with commonly used values in the real business cycle literature, we set the labor income tax rate as $\tau _{L}=25.6\%$, the value-added (consumption) tax rate as $\tau _{c}=5\%$, and the government spending-output ratio as $\varphi =0.143$ (see, e.g. Cooley and Hansen (Reference Cooley and Hansen1992)). We calculate the nominal interest rate (the yield rate of government bonds) as $\overline{i}=6.55\%$ from the nominal yields on 10-year US Treasury Securities during 1971−2016. Regarding the firm’s finance-related parameters, we calculate the agency cost parameters as $a_{0}=0.0577$ and $a_{1}=0.0330$ by using (9) and (28) such that the before-tax cost of debt (the yield rate of corporate bonds) is $i^{B}=8.21\%$, the nominal WACC is $\Gamma =6.6\%$, and the debt-to-equity ratio is $\lambda =22.79\%$, which are consistent with the estimates of Moore (Reference Moore2016) and Damodaran (Reference Damodaran2018).Footnote ^14 From (26), we further calculate the before-tax cost of equities (the yield rate of equities) as $i^{E}=3.62\%$ in the steady state, which is close to the S&P 500 average dividend yield of $3.48\%$ during 1970−2000.Footnote ^15 We choose the growth rate as $\gamma =1.79\%$ and the inflation rate as $\pi =2.25\%$, which are consistent with the long-term US data.Footnote ^16 Accordingly, from (7) we have the time preference rate $\rho =0.91\%$. By following Peretto (Reference Peretto2011), we choose the consumption-output ratio $\hat{c}=0.69$, average hours worked of employees per firm $\frac{L}{N}=6.1773$, and working time $L=0.33$.Footnote ^17 Thus, we can calculate from (25) and (40) with (39) that the final-good elasticity with respect to intermediate goods is $\theta =0.2821$ and the preference weight for leisure is $\delta =1.4295$. Likewise, we follow Peretto (Reference Peretto2011) and combine (29) with (35) to derive the ratio of firms’ total assets ($N\Omega$) to GDP ($Py$) as $\frac{N\Omega }{Py}=\beta \frac{\Theta \hat{c}}{\theta ^{\frac{2\theta }{1-\theta }}(\frac{L}{N})}\frac{L}{1-L}$, which pins down the entrant’s sunk cost parameter $\beta =7.0216$. Finally, from (38) and (42) with $\dot{N}=0$, we calculate the incumbent’s unit operating cost as $\phi =0.1655$ and its output elasticity with respect to R&D as $\alpha =0.1414$.
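As a rough consistency check (ours, not the authors' code), plugging the benchmark values above into (36), (28), (43), and (44) approximately reproduces the targeted corporate-bond yield, WACC, growth rate, and inflation rate:

```python
# Rough consistency check of the Table 1 benchmark (our sketch).
a0, a1, lam = 0.0577, 0.0330, 0.2279
i_bar, tau_B, tau_Pi = 0.0655, 0.245, 0.335
rho, alpha, beta, phi = 0.0091, 0.1414, 7.0216, 0.1655

sigma = a0 * lam ** (1 + a1)                     # agency cost at the benchmark debt-equity ratio
sigma_p = a0 * (1 + a1) * lam ** a1
i_B = i_bar + sigma / (1 - tau_B)                # eq. (36): corporate-bond yield, ~8.21%
Gamma = (1 - tau_Pi) * (i_bar + (sigma + lam * sigma_p) / (1 - tau_B))   # eq. (28), ~6.6%

bracket = Gamma - (1 - tau_B) * i_bar + rho + (1 - tau_Pi) * phi / (beta - 1)
gamma_star = (alpha * beta - 1) / (1 - alpha) * bracket \
             + (1 - tau_Pi) * phi / (beta - 1)   # eq. (43): BGP growth, ~1.79%
pi_star = Gamma - alpha * (beta - 1) / (1 - alpha) * bracket  # eq. (44): inflation, ~2.24% (target 2.25%)

print(i_B, Gamma, gamma_star, pi_star)
```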
Proposition 2 indicates that the relative productivity parameter of incumbents to entrants ($\alpha -1/\beta$) is crucial to the relationship between the firms’ WACC ($\Gamma ^{\ast }$) and the economy-wide growth ($\gamma ^{\ast }$). Peretto (Reference Peretto2011) also stressed that different productivity regimes (namely, the low-$\alpha \beta$ regime or the high-$\alpha \beta$ regime) may end up with quite different growth effects of the government’s tax policies. While the benchmark features $\alpha =0.1414\lt 1/\beta =0.1424$ (in Peretto’s terminology, the low-$\alpha \beta$ regime with $\alpha \beta =0.9929\lt 1$), we also examine the regime with $\alpha =0.1414\gt 1/\beta =0.1412$ in order to trace out the effects of the government’s distinct policies in both cases. Because there is a wide range of estimated values of $\beta$ (see Peretto (Reference Peretto2011) for the details), here we vary $\beta$ to obtain the alternative regime. 4.2. Comparative statics We now examine the effects of both fiscal policy ($\tau _{D}$, $\tau _{\Pi }$, and $\tau _{B}$) and monetary policy ($\overline{i}$) under two distinct scenarios in which entrants are relatively productive ($\alpha \lt 1/\beta$) or incumbents are relatively productive ($\alpha \gt 1/\beta$). Gourio and Miao (Reference Gourio and Miao2011) have shown that the impacts of capital gains tax cuts on output, consumption, investment, and labor are qualitatively similar to those of dividend tax cuts in the long run. Moreover, Peretto (Reference Peretto2003) has shown that consumption and labor taxes are irrelevant for growth. Hence, we abstract from these taxes in our analysis. The comparative statics results of the dividend, corporate, and bond income taxes are shown in Figures 2–4 and those of the nominal interest rate are shown in Figure 5.
Result 1. (Dividend Tax) In response to an increase in the dividend income tax ($\tau _{D}$),
(i) regardless of the scenario where either $\alpha \lt 1/\beta$ or $\alpha \gt 1/\beta$,
(a) the firm’s debt-equity ratio ($\lambda ^{\ast }$) and the WACC ($\Gamma ^{\ast }$) increase;
(b) the market structure exhibits an intensive margin response in the sense that the firm size ($l^{\ast }=\frac{L^{\ast }}{N^{\ast }}$) increases but the number of firms ($N^{\ast }$) decreases.
(ii) In the scenario where $\alpha \lt 1/\beta$, growth ($\gamma ^{\ast }$) decreases but inflation ($\pi ^{\ast }$) increases, whereas in the scenario where $\alpha \gt 1/\beta$, growth ($\gamma ^{\ast }$) increases but inflation ($\pi ^{\ast }$) decreases.
Equation (20) shows that the dividend income tax $\tau _{D}$ raises the user cost of issuing equity $C^{E}$, leading firms to issue more corporate bonds. Thus, the equilibrium debt-to-equity ratio $\lambda ^{\ast }$ increases, which in turn raises the nominal WACC $\Gamma ^{\ast }$, as shown in Figure 2. It follows from (25) that a higher nominal WACC, other things being equal, leads households to expect higher inflation because the intermediate firms will set a higher price for passing through the increased user cost of capital. Given the fact that the real rate of return on financial assets decreases with higher inflation, households decrease their supply of loanable funds accordingly. Since a higher WACC widens the return wedge between the demand for and the supply of loanable funds, loanable funds in the market become scarce. Thus, entry declines and the number of firms $N^{\ast }$ falls.
As a result, the existing firm’s real gross profit $\frac{\left ( P_{x}-P\right ) x}{Qz}=\frac{1-\theta }{\theta }\theta ^{\frac{2}{1-\theta }}\frac{L}{N}$ increases and the firm’s size $l^{\ast }$ expands, leading the market to become more concentrated.Footnote ^18 Thus, the market structure exhibits an intensive margin response to dividend income taxation. As the market becomes more concentrated, the incumbent’s tax-adjusted marginal product of R&D $ ( \Psi ^{In} ) ^{\ast }$ and the entrant’s tax-adjusted marginal product of R&D $ ( \Psi ^{En} ) ^{\ast }$ both increase. In the $\alpha \lt 1/\beta$ scenario, incumbents are less productive than entrants, and therefore, the increment of $ ( \Psi ^{In} ) ^{\ast }$ is less than that of $ ( \Psi ^{En} ) ^{\ast }$.Footnote ^19 Thus, Proposition 2 (together with (43)) indicates that a higher WACC is unfavorable to economic growth $\gamma ^{\ast }$, because loanable funds go to the more productive firms, that is, entrants in the scenario where $\alpha \lt 1/\beta$. Such a reallocation of loanable funds reduces the effectiveness of incumbents’ R&D aimed at product quality improvement, so the balanced growth falls in response to an increase in the dividend income tax. Moreover, (44) indicates that the steady-state inflation rate increases with the incumbents’ nominal WACC $\Gamma ^{\ast }$ but decreases with their tax-adjusted marginal product of R&D $(\Psi ^{In}) ^{\ast }$. Because $ ( \Psi ^{In} ) ^{\ast }$ increases less in the $\alpha \lt 1/\beta$ scenario, a higher nominal WACC (the user cost of capital) pushes the price up and raises the inflation rate $\pi ^{\ast }$. By contrast, in the $\alpha \gt 1/\beta$ scenario, incumbents are more productive than entrants, and, accordingly, the increment of $ ( \Psi ^{In} ) ^{\ast }$ is more than that of $ ( \Psi ^{En} ) ^{\ast }$. In this case, a higher dividend tax favors economic growth, because loanable funds go to the more productive incumbents, leading them to engage in more quality-improving R&D. At the same time, due to the large increment in $ ( \Psi ^{In} ) ^{\ast }$, more quality-improving R&D lowers the equilibrium inflation rate. In the Peretto (Reference Peretto2007, Reference Peretto2011) models, incumbents are assumed to have enough retained earnings (internal funds) to invest without the need for external funds, whereas entrants have to collect funds by issuing equities (external funds) only. Given the asymmetric financial structure, he shows that the dividend income tax unambiguously increases the balanced-growth rate. The main reason is that a higher dividend tax does not affect the return to quality, but it shifts down the return to entry because only entrants issue equities for external funds. Since entrants get hurt by a higher dividend tax, economic resources shift from variety expansion to quality growth, thereby unambiguously increasing economic growth. The positive growth effect holds true regardless of whether incumbents are relatively productive ($\alpha \gt 1/\beta$) or entrants are relatively productive ($\alpha \lt 1/\beta$). Instead, in our model all firms (incumbents and entrants) can access the equity and bond markets and optimally decide the debt-equity ratio $\lambda _{j}(\!=B_{j}^{F}/V_{j}E_{j})$, so a dividend tax change allows households to reshuffle funds more easily out of less productive firms toward others with greater productivity.
In contradiction to these results, our comparative statics show that increasing the dividend income tax may either increase or decrease the balanced-growth rate, depending upon the relative productivity between incumbents and entrants and the response of the incumbents’ financial structure (the WACC and debt-equity ratio). Obviously, the financial structure is crucial for determining the effect of the dividend tax on growth. In practice, all firms, regardless of whether mature or young, have become more reliant on equity finance, and this trend has become more pronounced since about the year 2000 (see Brown and Petersen (Reference Brown and Petersen2011)). Debt finance also seems to be non-negligible, although it plays a relatively small part, being related to the availability of equity finance. Due to the lack of retained earnings, younger firms may face more significant financing frictions and rely more on external funding. In addition to the distinct financial structures between the existing and new firms, our results suggest that whether or not loanable funds can be effectively allocated to firms that are more productive is decisive for the positive growth effect. Empirically, there is no consensus about the impact of dividend taxation on economic growth. Some empirical studies (Gravelle (Reference Gravelle2003); Yagan (Reference Yagan2015)) report a positive but insignificant effect on growth and investment. Nevertheless, Poterba and Summers (Reference Poterba and Summers1984), Treasury Department (2006), and Dackehag and Hansson (Reference Dackehag and Hansson2016) show that the dividend tax is detrimental to investment and growth. Complementing Peretto’s study, the ambiguous growth effect in Result 1 convincingly explains these mixed empirical findings. With regard to the corporate tax, Peretto (Reference Peretto2007) finds that a corporate income tax cut may decrease economic growth. Corporate tax rates have greatly declined in OECD countries during the last two decades. As for the US, in the Tax Cuts and Jobs Act of 2017, the corporate tax rate was lowered from 35% to 21%. It is thus interesting to reexamine the effects of corporate income taxation. Based on Figure 3, we have:
Result 2. (Corporate Tax) In response to an increase in the corporate tax ($\tau _{\Pi }$), regardless of the scenario where either $\alpha \lt 1/\beta$ or $\alpha \gt 1/\beta$,
(i) the firm’s debt-equity ratio ($\lambda ^{\ast }$) increases while the WACC ($\Gamma ^{\ast }$) decreases;
(ii) the firm size ($l^{\ast }$) exhibits an inverted U-shaped response, while the number of firms ($N^{\ast }$) exhibits a U-shaped response;
(iii) growth ($\gamma ^{\ast }$) decreases while inflation ($\pi ^{\ast }$) increases.
As the corporate tax increases, the tax shield effect becomes more pronounced. On the one hand, the user cost of issuing debt $C^{B}$ decreases, leading firms to choose a higher debt-equity ratio $\lambda ^{\ast }$. On the other hand, taking advantage of the tax shield allows firms to have a lower WACC $\Gamma ^{\ast }$, as shown in Figure 3. Unlike the dividend tax, the corporate tax influences not only the household’s supply of loanable funds but also the firm’s demand for loanable funds. Due to these two effects, the responses of the firm size and the number of firms to corporate taxation are not monotonic.
An increase in the corporate tax, as shown in (25), lowers the tax-adjusted marginal product of R&D $ ( \Psi ^{In} ) ^{\ast }$, leading incumbents to reduce their demand for loanable funds. Thus, the existing firms decrease their labor and output, and the firm size declines as well. In addition, a lower tax-adjusted marginal product of R&D also leads households to expect a higher inflation rate, which lowers the real return on financial assets. Due to a lower return on financial assets, households decrease their supply of loanable funds in response to a higher corporate tax. A scarcity of loanable funds discourages entry, enhancing the size of the existing firms. The former effect becomes more and more pronounced as the corporate tax rate continues to increase. Therefore, Figure 3 shows that the firm size $l^{\ast }$ has an inverted U-shaped relationship with the corporate tax, with the firm-size-maximizing tax rate being $\tau _{\Pi }=0.65$. Because the corporate tax has totally opposite impacts on the number of firms $N^{\ast }$, the number of firms has a U-shaped relationship with the corporate tax. As a direct and dominating effect, increasing the corporate income tax unambiguously decreases the tax-adjusted marginal product of R&D for both incumbents $ ( \Psi ^{In} ) ^{\ast }$ and entrants $ ( \Psi ^{En} ) ^{\ast }$.Footnote ^20 In our parameterization, a tax increase for corporate income hurts incumbents more, and therefore, the decrement of $ ( \Psi ^{In} ) ^{\ast }$ is more than that of $ ( \Psi ^{En} ) ^{\ast }$, regardless of whether $\alpha \lt 1/\beta$ or $\alpha \gt 1/\beta$. Consequently, (43) indicates that the balanced-growth rate $\gamma ^{\ast }$ decreases with a higher corporate tax. Moreover, (44) indicates that since the decrease in $ ( \Psi ^{In} ) ^{\ast }$ is substantially large, inflation increases with a higher $\tau _{\Pi }$. Note that the corporate tax unambiguously decreases, rather than increases, economic growth, which is in contradiction to the result of Peretto (Reference Peretto2007) but is consistent with that of Peretto (Reference Peretto2011). Early empirical studies, for example, Dowrick (Reference Dowrick1993) and Widmalm (Reference Widmalm2001), show that corporate taxes have no significant effect on growth. Angelopoulos et al. (Reference Angelopoulos, Economides and Kammas2007) present evidence of a positive but fragile effect of changes in the corporate tax on growth. Some OECD studies, however, find evidence of a negative effect of corporate taxes on productivity growth; see Schwellnus and Arnold (Reference Schwellnus and Arnold2008) for firm-level productivity growth and Vartia (Reference Vartia2008) for industry-level productivity growth. Recent empirical results seem to support our result of a negative corporate tax-growth relationship (see Lee and Gordon (Reference Lee and Gordon2005); Arnold (Reference Arnold2008); Arnold et al. (Reference Arnold, Brys, Heady, Johansson, Schwellnus and Vartia2011); Gemmell et al. (Reference Gemmell, Kneller and Sanz2011, Reference Gemmell, Kneller and Sanz2014, Reference Gemmell, Kneller, McGowan, Sanz and Sanz-Sanz2018)). By focusing on the effects of the bond income tax, we have:
Result 3. (Bond Income Tax) In response to an increase in the bond income tax ($\tau _{B}$),
(i) regardless of the scenario where either $\alpha \lt 1/\beta$ or $\alpha \gt 1/\beta$,
(a) the firm’s debt-equity ratio ($\lambda ^{\ast }$) and the WACC ($\Gamma ^{\ast }$) decrease;
(b) the firm size ($l^{\ast }$) exhibits an inverted U-shaped response, while the number of firms ($N^{\ast }$) exhibits a U-shaped response;
(c) the inflation rate ($\pi ^{\ast }$) unambiguously decreases.
(ii) In the scenario where $\alpha \lt 1/\beta$, growth ($\gamma ^{\ast }$) decreases first and then increases. By contrast, in the scenario where $\alpha \gt 1/\beta$, growth ($\gamma ^{\ast }$) increases first and then decreases.
The bond income tax increases the user cost of issuing corporate bonds $C^{B}$. This induces firms, on the one hand, to reduce their debt, decreasing the equilibrium debt-equity ratio $\lambda ^{\ast }$, and, on the other hand, to lower their WACC $\Gamma ^{\ast }$. Figure 4 shows that because the decrease in the user cost of capital $\Gamma ^{\ast }$ is substantial, firms lower their prices and inflation $\pi ^{\ast }$ falls as a response. There are two conflicting effects governing loanable funds in the financial market. First, as a direct effect, the bond income tax discourages households from holding corporate bonds, decreasing loanable funds in the market. Second, a fall in inflation gives rise to an induced effect, leading households to expect a higher real return rate on financial assets, which in turn increases loanable funds. If the direct effect dominates, entry decreases, leading to a smaller number of firms $N^{\ast }$ and a larger firm size $l^{\ast }$. If the induced effect dominates, entry increases, leading to a larger number of firms $N^{\ast }$ and a smaller firm size $l^{\ast }$. Intuitively, a lower (higher) return wedge between the supply of and demand for loanable funds implies a lower (higher) financial friction (in terms of the agency cost), which is favorable (unfavorable) to the firm’s real gross profit per unit of quality. In response to increasing the bond income tax, the return wedge between the loanable-fund supply and demand ($ [ ( 1-\tau _{B} ) \overline{i}-\pi ] - [ \Gamma ^{\ast }-\pi ]$) decreases first and then increases, because the agency cost and hence the WACC $\Gamma ^{\ast }$ decrease more and more as $\tau _{B}$ increases. Our calibration results show that when the bond income tax rate is relatively low ($\tau _{B}\lt 0.27$), the direct effect dominates (a decrease in the return wedge); otherwise, the induced effect dominates (an increase in the return wedge). Therefore, similar to the corporate tax, the firm size exhibits an inverted U-shaped response, while the number of firms exhibits a U-shaped response, as shown in Figure 4. Due to the ambiguous effect on loanable funds, the bond income tax also has an uncertain impact on the tax-adjusted marginal product of R&D for incumbents $ ( \Psi ^{In} ) ^{\ast }$ and entrants $ ( \Psi ^{En} ) ^{\ast }$.Footnote ^21 Our calibration results show that if incumbents are less productive than entrants ($\alpha \lt 1/\beta$), both $ ( \Psi ^{In} ) ^{\ast }$ and $ ( \Psi ^{En} ) ^{\ast }$ increase when the bond income tax rate is relatively low ($\tau _{B}\lt 0.27$), which is associated with a lower return wedge, with the increment in $ ( \Psi ^{En} ) ^{\ast }$ being larger. Thus, it follows from (43) that the balanced growth $\gamma ^{\ast }$ decreases with a higher bond tax.
In contrast, when $\tau _{B}\gt 0.27$ (which is associated with a higher return wedge), both $ ( \Psi ^{In} ) ^{\ast }$ and $ ( \Psi ^{En} ) ^{\ast }$ decrease, with the decrement in $ ( \Psi ^{En} ) ^{\ast }$ being larger. This, as shown in (43), gives rise to a favorable effect on the balanced growth $\gamma ^{\ast }$. In the scenario where incumbents are more productive than entrants ($\alpha \gt 1/\beta$), we have opposite effects on $ ( \Psi ^{In} ) ^{\ast }$ and $ ( \Psi ^{En} ) ^{\ast }$. As a result, taxing bond incomes increases the balanced growth first and then decreases it after the tax rate becomes higher (i.e. $\tau _{B}\gt 0.27$). Overall, Results 1−3 indicate that the relationship between fiscal policy and economic growth is ambiguous. It does not seem likely that fiscal policy is able to solve the problem of R&D finance. With regard to monetary policy, we have:
Result 4. (Nominal Interest Rate) In response to an increase in the nominal interest rate ($\overline{i}$),
(i) regardless of the scenario where either $\alpha \lt 1/\beta$ or $\alpha \gt 1/\beta$,
(a) the firm’s debt-equity ratio ($\lambda ^{\ast }$) and the WACC ($\Gamma ^{\ast }$) increase;
(b) the firm size ($l^{\ast }$) increases but the number of firms ($N^{\ast }$) decreases;
(c) the inflation rate ($\pi ^{\ast }$) increases.
(ii) Growth ($\gamma ^{\ast }$) decreases in the scenario where $\alpha \lt 1/\beta$, but it increases in the scenario where $\alpha \gt 1/\beta$.
When monetary policy raises the nominal interest rate $\overline{i}$, the opportunity cost of holding money increases. Under the CIA constraint on consumption, households tend to decrease their money holding (and hence consumption) and increase other financial assets. That is, households substitute financial assets (equities and corporate bonds) for money. This asset substitution effect induces households to supply more loanable funds. A higher nominal interest rate also affects the demand for loanable funds through the financial effect. Equation (26) indicates that an increase in the nominal interest rate $\overline{i}$ amplifies the tax shield effect, which leads firms to rely more on external debt financing. Figure 5 shows that since the equilibrium debt-equity ratio $\lambda ^{\ast }$ increases, the user cost of capital (the WACC $\Gamma ^{\ast }$) becomes higher, as indicated by (27) and (28). Because the firm’s user cost of capital increases, inflation rises as a response. In our parameterization, the financial effect dominates the asset substitution effect. Thus, the return divergence (wedge) between the demand for and supply of loanable funds, $\Gamma ^{\ast }- ( 1-\tau _{B} ) \overline{i}$, becomes wider and the equilibrium amount of loanable funds decreases. As a result, entry decreases, leading to a decrease in the number of firms $N^{\ast }$ and an increase in the firm size $l^{\ast }$. However, a lower equilibrium amount of loanable funds does not imply a lower growth rate. In this model, the financial market can reshuffle funds away from less productive firms toward other ventures with greater productivity. A firm is willing to bear a higher user cost of capital (the WACC) if its R&D productivity is higher, since $\Psi ^{In}=\Gamma -\pi$, as shown in (25). If incumbents are less productive than entrants ($\alpha \lt 1/\beta$), loanable funds go to the more productive entrants, which reduces the effectiveness of incumbents’ R&D aimed at product quality improvement.
The balanced-growth rate therefore decreases with a higher nominal interest rate. By contrast, if incumbents are more productive than entrants ($\alpha \gt 1/\beta$), loanable funds go to the more productive incumbents, which allows them to increase their quality-improving R&D. Thus, the balanced-growth rate increases with a higher nominal interest rate.Footnote ^22 Notice that even though the equilibrium amount of loanable funds decreases, a higher nominal interest rate can enhance economic growth, provided that incumbents are more productive than entrants and loanable funds can effectively go to these more productive incumbents. The effective allocation of loanable funds, rather than the market amount of loanable funds, matters for economic growth. In this case, there is a positive growth-inflation relationship, resembling the so-called Mundell−Tobin effect. In our paper, monetary policy has a quite different impact on growth from previous studies when shedding light on the role of money in a firm’s R&D financial structure. In otherwise similar R&D-driven growth models without an endogenous financial structure, such as Chu and Ji (Reference Chu and Ji2016) and Huang et al. (Reference Huang, Chang and Lei2021), monetary policy has no impact on the BGP growth rate when only consumption is subject to the CIA constraint. The money superneutrality of growth holds since the scale effect is eliminated by the entry of new firms and, as a result, the firm size is unresponsive to changes in the nominal interest rate. In our analysis, by shedding light on the role of money in firms’ R&D financial structure, the nominal interest rate can affect economic growth, even though there is no scale effect.Footnote ^23 Bernanke and Gertler (Reference Bernanke and Gertler1995) and Bernanke et al. (Reference Bernanke, Gertler and Gilchrist1996) proposed the existence of a positive relationship between financial leverage and economic growth. Are more aggressive levels of financial leverage (higher debt-equity ratios) really favorable to growth? Our calibration results indicate that the financial accelerator effect is not always valid. The positive relationship between growth and the debt-equity ratio is valid only when incumbents are relatively productive ($\alpha \gt 1/\beta$) in the presence of an increase in the dividend tax or the nominal interest rate. In response to higher corporate taxes, higher financial leverage unambiguously decreases, rather than increases, the balanced growth. What is the relationship between the firm size and economic performance? The Lucas (Reference Lucas1978) hypothesis claims that average firm size and economic growth are positively related. In recent decades, several empirical studies using data over a long period of time have not supported such a positive relationship. More recent evidence seems to point to a negative relationship between average firm size and economic growth/development, contradicting the Lucas hypothesis. For several developed countries, this relationship seems to have changed from a positive relationship to a negative one (Congregado et al. (Reference Congregado, Golpe and Parker2012)). It is particularly the case that, from the 1970s onwards, self-employment levels started to increase in many advanced economies, first in the United States.
Our calibration results show that the market structure exhibits an intensive margin (the number of firms decreases, but each firm’s size becomes larger) in response to an increase in the dividend income tax or the nominal interest rate. In the two cases, the relationship between the firm size and economic growth is positive if incumbents are relatively productive ( $\alpha \gt 1/\beta$ ), but it is negative if entrants are relatively productive ( $\alpha \lt 1/\beta$ ). In response to the bond income tax, while the market structure may exhibit either an intensive or extensive margin, the positive (negative) firm size-growth relationship still holds in the scenario where $\alpha \gt 1/\beta$ ( $\alpha \lt 1/\beta$ ). In terms of the corporate tax, the relationship between the firm size and economic growth is also mixed, while the market structure effects are more complicated. 4.3. The role of agency costs In this subsection, we investigate the role of the agency cost in relation to the policy effectiveness by changing the value of the agency cost parameter $a_{0}$ . Because the effect of the nominal interest rate is not very sensitive to changes in $a_{0}$ within a reasonable range, we focus on the tax policy only. Result 5. (Agency Costs and Fiscal Policy Effects) In the face of a higher agency cost $a_{0}$ , the steady-state debt-equity ratio decreases for all tax policies ( $\tau _{D}$ , $\tau _{\Pi }$ , and $\tau _{B}$ ). 1. (i) The effects of the dividend tax ( $\tau _{D}$ ) on growth and market structure (in terms of the firm size and the number of firms) become more pronounced, regardless of the scenario where either $\alpha \lt 1/\beta$ or $\alpha \gt 1/\beta$ . 2. (ii) The effect of the corporate tax ( $\tau _{\Pi }$ ) on growth is amplified in the $\alpha \lt 1/\beta$ scenario but it is attenuated in the $\alpha \gt 1/\beta$ scenario. The corporate tax is more likely to increase the firm size but to reduce the number of firms. 3. (iii) The bond income tax ( $\tau _{B}$ ) becomes more likely to increase growth in the $\alpha \lt 1/\beta$ scenario but to reduce growth in the $\alpha \gt 1/\beta$ scenario. For any scenario, it is more likely for the firm size to decrease but for the number of firms to increase. It follows from Figure 6 that the larger the value of $a_{0}$ (10 times the benchmark value), the higher the agency cost, reducing the relative return of holding corporate bonds to equities for households. This, as shown in (26), increases the user cost of issuing corporate bonds and, as a result, the optimal debt-equity ratio $\lambda ^{\ast }$ becomes lower in equilibrium.Footnote ^24 This is true for any policy and any scenario. When the government raises the dividend tax $\tau _{D}$ , households tend to substitute corporate bonds for equities. This leads firms to increase their debt-equity ratio and hence WACC, as indicated in Result 1. In the presence of a higher agency cost parameter $a_{0}$ , the WACC effect becomes more pronounced, that is, $\frac{\partial \left \vert \frac{\partial \Gamma ^{\ast }}{\partial \tau _{D}}\right \vert }{\partial a_{0}}=\frac{-i^{E}}{(1-\tau _{V})\left ( 1+\lambda ^{\ast }\right ) ^{2}}\frac{\partial \lambda ^{\ast }}{\partial a_{0}}\gt 0$ . Thus, in addition to asset reallocation, households decrease their total assets, including corporate bonds and equities, because a higher WACC raises inflation, lowering the real return on financial assets. 
Therefore, the supply of loanable funds decreases and loanable funds become scarce in the financial market. Because firms’ levels of financial leverage become more aggressive, the impacts of the dividend tax on the balanced growth and the market structure turn out to be more pronounced, as shown in Figure 6. A policy implication is that to mitigate the impacts of a dividend tax increase, the government should take into account the agency cost issue, in order to ensure a low financial contract cost in terms of the conflicting interest and information asymmetry problems between debt holders and debtors before policy implementation. This is particularly important since dividend taxation may decrease, rather than increase, economic growth, as predicted in Result 1(ii). As suggested by Chetty and Saez (Reference Chetty and Saez2010), dividend taxation should be used relatively little if agency problems are prevalent. As noted above, the corporate tax decreases not only the household’s supply of loanable funds but also the firm’s demand for loanable funds. The decrease in the supply of loanable funds discourages entry and gives rise to a positive effect on the firms’ size, whereas the decrease in the demand for loanable funds leads firms to reduce their output and gives rise to a negative effect on the firms’ size. Intuitively, higher agency costs $a_{0}$ reinforce the impact stemming from the supply of loanable funds (given that households directly incur agency costs). Thus, Figure 7 shows that in the presence of a higher $a_{0}$ the corporate tax turns out to be more likely to increase the firm size. By contrast, the number of firms is more likely to decrease with the corporate tax. In other words, in the presence of high agency costs the market structure is more likely to exhibit an intensive margin in response to the corporate tax. As regards the balanced growth, the difference $(\Psi ^{In})^{\ast }-(\Psi ^{En})^{\ast }$ determines the growth effect. Result 2 indicates that an increase in the corporate tax decreases both $ ( \Psi ^{In} ) ^{\ast }$ and $ ( \Psi ^{En} ) ^{\ast }$ , with the decline in $ ( \Psi ^{In} ) ^{\ast }$ being greater than that of $ ( \Psi ^{En} ) ^{\ast }$ . In the face of higher agency costs $a_{0}$ , loanable funds become more scarce. If incumbents are less productive ( $\alpha \lt 1/\beta$ ) the decline in $ ( \Psi ^{In} ) ^{\ast }$ will be further amplified by higher agency costs $a_{0}$ . Thus, the negative growth effect of the corporate tax will become more pronounced. In contrast, if entrants are less productive ( $\alpha \gt 1/\beta$ ), higher agency costs will increase the decline in $ ( \Psi ^{En} ) ^{\ast }$ . Under such a situation, Figure 7 shows that the relative impact on $(\Psi ^{In})^{\ast }-(\Psi ^{En})^{\ast }$ becomes less significant, and therefore, higher agency costs alleviate the negative growth effect of the corporate tax. Next, we turn to the bond income tax. As shown in Result 3, there are two opposite effects, governing loanable funds in the financial market. On the one hand, the bond income tax directly discourages households from holding corporate bonds, resulting in a decrease in the supply of loanable funds. On the other hand, the bond income tax lowers the expected inflation and raises the real return on financial assets, resulting in an increase in loanable funds in the market. Higher agency costs lead households to substitute more equities for corporate bonds. This asset substitution amplifies the latter effect. 
It turns out that loanable funds increase, which attracts more entrants but reduces the firm size, regardless of the scenario where $\alpha \lt 1/\beta$ or $\alpha \gt 1/\beta$ . Thus, Figure 8 shows that the bond income tax can unambiguously reduce the firm size but increase the number of firms. In contrast to the dividend tax and the corporate tax, in the presence of higher agency costs there is an extensive margin in response to the bond income tax. Figure 8 further shows that the bond income tax is more likely to increase growth in the $\alpha \lt 1/\beta$ scenario but to reduce growth in the $\alpha \gt 1/\beta$ scenario. Result 3 has indicated that if incumbents are less productive ( $\alpha \lt 1/\beta$ ), both $ ( \Psi ^{In} ) ^{\ast }$ and $ ( \Psi ^{En} ) ^{\ast }$ increase when the bond income tax rate is relatively low ( $\tau _{B}\lt 0.27$ ), with the increase in the entrant’s tax-adjusted marginal product of R&D $ ( \Psi ^{En} ) ^{\ast }$ being larger. In the presence of higher agency costs $a_{0}$ , the bond income tax leads the market’s loanable funds to become more abundant, so not only do more productive entrants raise funds more easily but also less productive incumbents obtain sufficient funds. Consequently, the bond income tax can increase, rather than decrease, economic growth in the $\alpha \lt 1/\beta$ scenario, as shown in Figure 8. In addition, Result 3 indicates that there are opposite effects of the bond tax on $ ( \Psi ^{In} ) ^{\ast }$ and $ ( \Psi ^{En} ) ^{\ast }$ if incumbents are more productive ( $\alpha \gt 1/\beta$ ). Under such a situation, the bond income tax is more likely to decrease the balanced-growth rate because abundant loanable funds attract too many entrants that are less productive, so the over-proliferation of product varieties greatly reduces the effectiveness of incumbents’ quality-improving R&D. It is interesting to point out that in the presence of higher agency costs, the market structure exhibits an intensive margin (a small number of firms with a large firm size) in response to the dividend and corporate taxes, while it exhibits an extensive margin (a large number of firms with a small firm size) in response to the bond income tax. Nonetheless, high growth can be associated with either an intensive or extensive margin in terms of the market structure. 5. Concluding remarks In this paper, we build an endogenous growth model of R&D to investigate the effects of fiscal and monetary policy. We consider two dimensions of R&D, namely incumbents’ quality-improving R&D and entrants’ variety-expanding R&D, which endogenizes the market structure, enabling the proliferation of product varieties to eliminate the scale effect on growth. We allow both the existing and new firms to access the financial market to raise external funds, which endogenizes the firm’s financial structure, thereby enabling the financial market to reshuffle loanable funds out of less productive firms toward others with greater productivity. Thus, not only the market amount of loanable funds but also the allocation of loanable funds between incumbents and entrants governs economic growth. Analytically, we show that the balanced-growth rate can be either positively or negatively correlated with the WACC. A higher WACC raises inflation, which in turn reduces the market’s loanable funds. A decrease in loanable funds, however, does not necessarily imply a lower growth rate. 
Compared with the market amount of loanable funds, the allocation of loanable funds is more important in governing economic growth. This offers a counterexample for the financial accelerator effect of Bernanke and Gertler (Reference Bernanke and Gertler1995) and Bernanke et al. (Reference Bernanke, Gertler and Gilchrist1996). Numerically, we show that while the dividend tax (levied on the household’s supply of loanable funds) and the corporate tax (levied on the firm’s demand for loanable funds) both increase the equilibrium debt-equity ratio, they have quite different impacts on the balanced growth and market structure. The balanced-growth rate unambiguously decreases in response to higher corporate taxes but increases in response to higher dividend taxes, provided that incumbents are more productive than entrants. The market structure exhibits an intensive margin (a small number of firms with a large firm size) in response to the dividend tax, while it may exhibit an extensive response (a large number of firms with a small firm size) to the corporate tax, provided that the status-quo tax rate is relatively high. Our calibration results show that, in response to a higher nominal interest rate, inflation can be positively related to growth, generating the so-called Mundell−Tobin effect. The balanced-growth rate can increase even if loanable funds become scarce, provided that the financial market reshuffles funds toward the existing firms that have greater productivity. Our results also show that a larger firm size does not necessarily imply higher growth, as conjectured by Lucas (Reference Lucas1978). We are deeply grateful to the Editor, William Barnett, an anonymous associate editor, an anonymous referee, Been-Lon Chen, Chien-Yu Huang, Lei Ji, Robert Kaney, Ping Wang, and Chong Kee Yip for their helpful suggestions and insightful comments on an earlier version of this paper. We would like to express our gratitude for the financial support provided by Academia Sinica and National Science and Technology Council (MOST 104-2410-H-004-015). Any remaining errors are, of course, our own responsibility. APPENDIX A: PROOF FOR PROPOSITIONS 1 AND 2 Taking the logarithm derivatives of $\hat{c}=\frac{c}{y}$ , $y=\theta ^{\frac{2\theta }{1-\theta }}zL$ , and (39) with respect to time yields $\frac{\overset{\centerdot }{\hat{c}}}{\hat{c}}=\frac{\dot{c}}{c}-\frac{\dot{y}}{y}$ , $\frac{\dot{y}}{y}=\frac{\dot{z}}{z}+\frac{\dot{L}}{L}$ , and $\frac{\dot{L}}{L}=\frac{-\Theta \hat{c}}{1+\Theta \hat{c}}\frac{\overset{\centerdot }{\hat{c}}}{\hat{c}}$ , respectively. By manipulating these relationships, we have: \begin{equation*} \frac {\overset {\centerdot }{\hat {c}}}{\hat {c}}=(1+\Theta \hat {c})\left ( \frac {\dot {c}}{c}-\frac {\dot {z}}{z}\right ). \end{equation*} Substituting (37) and (38) into the above equation, we can obtain (41) in the text. From the debt-equity ratio $\lambda \equiv \frac{B^{F}}{VE}$ and (29), we have $b^{F}=\beta \frac{\lambda }{1+\lambda }z$ . With (27), substituting $y=\theta ^{\frac{2\theta }{1-\theta }}zL$ and $b^{F}=\beta \frac{\tilde{\lambda }}{1+\tilde{\lambda }}z$ into (32) and dividing the resultant expression by $zN$ yields: \begin{equation*} \theta ^{\frac {2\theta }{1-\theta }}\frac {L}{N}=\left ( \hat {c}+\varphi \right ) \theta ^{\frac {2\theta }{1-\theta }}\frac {L}{N}+\beta \frac {\tilde {\lambda }\sigma (\tilde {\lambda })}{1+\lambda }+\theta ^{\frac {2}{1-\theta }}\frac {L}{N}+\phi +\frac {\dot {z}}{z}+\beta \frac {\dot {N}}{N}. 
\end{equation*} Given (38), by combining (33) with the above equation, the evolution of the number of intermediate-good firms can be derived as (42) in the text. In the steady state, $\overset{\centerdot }{\hat{c}}=\overset{\centerdot }{N}=0$ holds true in (41) and (42). Then, from (27) and (28), in equilibrium the optimal debt-equity ratio and optimal nominal WACC are expressed as $\lambda ^{\ast }=\tilde{\lambda }$ and $\Gamma ^{\ast }=\tilde{\Gamma }$ , respectively. Given (27) and (28), we solve $\left ( \frac{x}{z}\right ) ^{\ast }$ , $\hat{c}^{\ast }$ , and $N^{\ast }$ from (41) and (42) with $\overset{\centerdot }{\hat{c}}=\overset{\centerdot }{N}=0$ and (40): (A1) $$\left ( \frac{x}{z}\right ) ^{\ast }=\frac{\Gamma ^{\ast }-(1-\tau _{B})\overline{i}+\rho +\frac{1-\tau _{\Pi }}{\beta -1}\phi }{(1-\tau _{\Pi })\frac{1-\theta }{\theta }\frac{1-\alpha }{\beta -1}},$$ (A2) $$\hat{c}^{\ast }=1-\varphi -\frac{\beta \lambda ^{\ast }\sigma (\lambda ^{\ast })}{1+\tilde{\lambda }}\frac{\theta ^{2}}{\left ( \frac{x}{z}\right ) ^{\ast }}-\left \{ \theta ^{2}\text{+}\phi \frac{\theta ^{2}}{\left ( \frac{x}{z}\right ) ^{\ast }}\text{+}\frac{\theta ^{2}}{\left ( \frac{x}{z}\right ) ^{\ast }}\left [ \frac{\alpha \beta -1}{\beta -1}\left ( 1-\tau _{\Pi }\right ) \frac{1-\theta }{\theta }\left ( \frac{x}{z}\right ) ^{\ast }\text{+}\frac{1-\tau _{\Pi }}{\beta -1}\phi \right ] \right \} \!.$$ (A3) $$N^{\ast }=\frac{\left ( 1-\tau _{\Pi }\right ) \frac{1-\theta }{\theta }\frac{1-\alpha }{\beta -1}}{\Gamma ^{\ast }-(1-\tau _{B})\overline{i}+\rho +\frac{1-\tau _{\Pi }}{\beta -1}\phi }\theta ^{\frac{2}{1-\theta }}\cdot L^{\ast },$$ where $L^{\ast }=\frac{1}{1+\Theta \hat{c}^{\ast }}$ , as reported in (39). Furthermore, by substituting (A1) into (25) and (38), the steady-state growth and inflation are given by: (A4) $$\gamma ^{\ast }=\frac{1}{1-\frac{1}{\beta }}\left [ (\Psi ^{In})^{\ast }-(\Psi ^{En})^{\ast }\right ] =\frac{\alpha \beta -1}{1-\alpha }\left [ \Gamma ^{\ast }-(1-\tau _{B})\overline{i}+\rho + \frac{1-\tau _{\Pi }}{\beta -1}\phi \right ] +\frac{1-\tau _{\Pi }}{\beta -1}\phi,$$ \begin{equation*} \pi ^{\ast }=\Gamma ^{\ast }-(\Psi ^{In})^{\ast }=\Gamma ^{\ast }-\frac {\alpha (\beta -1)}{1-\alpha }\left [ \Gamma ^{\ast }-(1-\tau _{B})\overline {i}+\rho +\frac {1-\tau _{\Pi }}{\beta -1}\phi \right ]. 
\end{equation*} From (A4), we can derive the relationship between the steady-state growth and the optimal nominal WACC as follows: \begin{equation*} \frac {\partial \gamma ^{\ast }}{\partial \Gamma ^{\ast }}=\frac {\alpha \beta -1}{1-\alpha }\gtrless 0\text {, if }\alpha \gtrless 1/\beta . \end{equation*} To examine the dynamic property of this model, we linearize the dynamic system (41) and (42) around the steady state as follows: (A5) $$\left [ \begin{array}{c} \overset{\centerdot }{\hat{c}} \\[5pt] \overset{\centerdot }{N}\end{array}\right ] =\left [ \begin{array}{c@{\quad}c} J_{11} & J_{12} \\[5pt] J_{21} & J_{22}\end{array}\right ] \left [ \begin{array}{c} \hat{c}-\hat{c}^{\ast } \\[5pt] N-N^{\ast }\end{array}\right ],$$ where $J_{11}=\left ( 1+\Theta \hat{c}^{\ast }\right ) (1-\tau _{\Pi })\frac{1-\alpha }{\beta -1}\frac{1-\theta }{\theta }\theta ^{\frac{2}{1-\theta }}\frac{\hat{c}^{\ast }L_{\hat{c}}}{N^{\ast }}\lt 0$ , $J_{12}=\left ( 1+\Theta \hat{c}^{\ast }\right ) (1-\tau _{\Pi })\frac{1-\alpha }{\beta -1}\frac{1-\theta }{\theta }\theta ^{\frac{2}{1-\theta }}\frac{-\hat{c}^{\ast }L_{\hat{c}}}{\left ( N^{\ast }\right ) ^{2}}\lt 0$ , $J_{21}=\frac{1}{\beta }\left ( -\frac{1}{\theta ^{2}}\theta ^{\frac{2}{1-\theta }}\frac{L^{\ast }}{N^{\ast }}+\Phi \frac{L_{\hat{c}}}{L^{\ast }}\right ) N^{\ast }\lt 0$ , $J_{22}=-\frac{1}{\beta }\Phi \lt 0$ , $L_{\hat{c}}=\frac{-\Theta }{\left ( 1+\Theta \hat{c}^{\ast }\right ) ^{2}}$ , and $\Phi \equiv \left ( \frac{1-\tau _{\Pi }}{\beta -1}+1\right ) \phi +\frac{\beta \lambda ^{\ast }\sigma (\lambda ^{\ast })}{1+\tilde{\lambda }}$ . It follows from (A5) that the trace and the determinant of the Jacobian matrix are as follows: \begin{equation*} \text{Tr}=\left ( 1+\Theta \hat {c}^{\ast }\right ) (1-\tau _{\Pi })\frac {1-\alpha }{\beta -1}\frac {1-\theta }{\theta }\theta ^{\frac {2}{1-\theta }}\frac {\hat {c}^{\ast }L_{\hat {c}}}{N^{\ast }}-\frac {1}{\beta }\Phi \lt 0, \end{equation*} \begin{equation*} \text{Det}=-\left ( 1+\Theta \hat {c}^{\ast }\right ) \frac {1}{\beta }(1-\tau _{\Pi })\frac {1-\alpha }{\beta -1}\frac {1-\theta }{\theta }\frac {\hat {c}^{\ast }}{\theta ^{2}}\left ( \theta ^{\frac {2}{1-\theta }}\frac {L^{\ast }}{N^{\ast }}\right ) ^{2}\lt 0. \end{equation*} Because there is one jump variable $\hat{c}$ and one state variable $N$ in the model, the Routh−Hurwitz stability criterion indicates that the dynamic system is locally determinate. B.1. Effects of the dividend tax From (27), we have $\lambda _{\tau _{D}}^{\ast }=\frac{\partial \lambda ^{\ast }}{\partial \tau _{D}}=\frac{1}{\Sigma }\frac{i^{E}}{1-\tau _{V}}\gt 0$ , where $\Sigma =\frac{1-\tau _{\Pi }}{1-\tau _{B}}\!\left [ 2\sigma ^{\prime }(\lambda ^{\ast })+\lambda ^{\ast }\sigma ^{\prime \prime }(\lambda ^{\ast })\right ]\! (1+\lambda ^{\ast })\gt 0$ . Accordingly, taking the partial derivative of (28) with respect to $\tau _{D}$ yields $\Gamma _{\tau _{D}}^{\ast }=\frac{1}{1+\lambda ^{\ast }}\frac{i^{E}}{1-\tau _{V}}\gt 0$ . 
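As a rough numerical sanity check on the closed-form expressions so far, the short Python sketch below (an added illustration, not part of the paper) plugs arbitrary placeholder parameter values — not the paper’s calibration — into (A4), the derivative $\partial \gamma ^{\ast }/\partial \Gamma ^{\ast }$, and the dividend-tax derivative $\Gamma _{\tau _{D}}^{\ast }$ just obtained, simply to make the sign logic tangible.

# Hypothetical illustration only; all parameter values are placeholders.
alpha, beta = 0.6, 2.0                 # here alpha*beta > 1 (incumbents relatively productive)
tau_B, tau_Pi, tau_V = 0.1, 0.2, 0.15  # assumed tax rates
i_bar, rho, phi = 0.04, 0.02, 0.05     # assumed nominal rate, discount rate, entry cost term
i_E, lam_star, Gamma_star = 0.05, 0.8, 0.07  # assumed equity return, debt-equity ratio, WACC

# Balanced growth from (A4)
core = Gamma_star - (1 - tau_B) * i_bar + rho + (1 - tau_Pi) / (beta - 1) * phi
gamma_star = (alpha * beta - 1) / (1 - alpha) * core + (1 - tau_Pi) / (beta - 1) * phi

# d gamma*/d Gamma* = (alpha*beta - 1)/(1 - alpha): positive iff alpha > 1/beta
dgamma_dGamma = (alpha * beta - 1) / (1 - alpha)

# Dividend-tax effect on the WACC: Gamma*_{tau_D} = i^E / ((1 - tau_V)(1 + lambda*)) > 0
Gamma_tauD = i_E / ((1 - tau_V) * (1 + lam_star))

print(f"gamma* = {gamma_star:.4f}, dgamma*/dGamma* = {dgamma_dGamma:.2f}, Gamma*_tauD = {Gamma_tauD:.4f}")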
Equipped with $(\Psi ^{In})^{\ast }=(1-\tau _{\Pi })\alpha \frac{1-\theta }{\theta }\left ( \frac{x}{z}\right ) ^{\ast }$ , $(\Psi ^{En})^{\ast }=\frac{1-\tau _{\Pi }}{\beta }\left [ \frac{1-\theta }{\theta }\left ( \frac{x}{z}\right ) ^{\ast }-\phi \right ]$ , and (A1), we take the differentials of $(\Psi ^{In})^{\ast }$ and $(\Psi ^{En})^{\ast }$ with respect to $\tau _{D}$ , respectively, yielding $\Psi _{\tau _{D}}^{In}=\frac{\partial (\Psi ^{In})^{\ast }}{\partial \tau _{D}}=\frac{\alpha (\beta -1)}{1-\alpha }\Gamma _{\tau _{D}}^{\ast }\gt 0$ and $\Psi _{\tau _{D}}^{En}=\frac{\partial (\Psi ^{En})^{\ast }}{\partial \tau _{D}}=\frac{\beta -1}{\beta (1-\alpha )}\Gamma _{\tau _{D}}^{\ast }\gt 0$ . Moreover, differentiating (43) and (44) with respect to $\tau _{D}$ , respectively, we obtain: \begin{equation*} \frac {\partial \gamma ^{\ast }}{\partial \tau _{D}}=\frac {\alpha \beta -1}{1-\alpha }\Gamma _{\tau _{D}}^{\ast }\gtrless 0, \, \text{if} \, \alpha \beta -1\gtrless 0, \end{equation*} \begin{equation*} \frac {\partial \pi ^{\ast }}{\partial \tau _{D}}=\left [ 1-\frac {\alpha (\beta -1)}{1-\alpha }\right ] \Gamma _{\tau _{D}}^{\ast }=\frac {1-\alpha \beta }{1-\alpha }\Gamma _{\tau _{D}}^{\ast }\lessgtr 0,\text { if }\alpha \beta -1\gtrless 0. \end{equation*} Differentiating (A2), (A3), and (39) with respect to $\tau _{D}$ further yields: \begin{equation*} \frac {\partial \hat {c}^{\ast }}{\partial \tau _{D}}=\frac {\theta ^{2}}{\left ( \frac {x}{z}\right ) ^{\ast }}\left [ \frac {\Phi \Gamma _{\tau _{D}}^{\ast }}{\Gamma ^{\ast }-(1-\tau _{B})\overline {i}+\rho +\frac {1-\tau _{\Pi }}{\beta -1}\phi }-\beta \frac {\sigma (\lambda ^{\ast })+\lambda ^{\ast }\sigma ^{\prime }(\lambda ^{\ast })\left ( 1+\lambda ^{\ast }\right ) }{\left ( 1+\lambda ^{\ast }\right ) ^{2}}\lambda _{\tau _{D}}^{\ast }\right ] \gtrless 0, \end{equation*} \begin{equation*} \frac {\partial N^{\ast }}{\partial \tau _{D}}=-N^{\ast }\Theta L^{\ast }\cdot \frac {\partial \hat {c}^{\ast }}{\partial \tau _{D}}-\frac {N^{\ast }}{(\Psi ^{In})^{\ast }}\frac {\alpha (\beta -1)}{1-\alpha }\Gamma _{\tau _{D}}^{\ast }\lessgtr 0, \end{equation*} \begin{equation*} \frac {\partial L^{\ast }}{\partial \tau _{D}}=-\Theta \!\left ( L^{\ast }\right ) ^{2}\cdot \frac {\partial \hat {c}^{\ast }}{\partial \tau _{D}}\lessgtr 0. \end{equation*} B.2. Effects of the corporate tax According to (27), we have $\lambda _{\tau _{\Pi }}^{\ast }=\frac{\partial \lambda ^{\ast }}{\partial \tau _{\Pi }}=\frac{1}{\Sigma }\left [ \overline{i}+\frac{\sigma (\lambda ^{\ast })+\lambda ^{\ast }\sigma ^{\prime }(\lambda ^{\ast })\left ( 1+\lambda ^{\ast }\right ) }{1-\tau _{B}}\right ] \gt 0$ . Taking the partial derivative of (28) with respect to $\tau _{\Pi }$ yields $\Gamma _{\tau _{\Pi }}^{\ast }=\frac{-\lambda ^{\ast }}{1+\lambda ^{\ast }}\left [ \overline{i}+\frac{\sigma (\lambda ^{\ast })}{1-\tau _{B}}\right ] \lt 0$ . By taking the differentials of $(\Psi ^{In})^{\ast }$ and $(\Psi ^{En})^{\ast }$ with respect to $\tau _{\Pi }$ , we can obtain $\Psi _{\tau _{\Pi }}^{In}=\frac{\partial (\Psi ^{In})^{\ast }}{\partial \tau _{\Pi }}=\frac{\alpha (\beta -1)}{1-\alpha }\left ( \Gamma _{\tau _{\Pi }}^{\ast }-\frac{\phi }{\beta -1}\right ) \lt 0$ and $\Psi _{\tau _{\Pi }}^{En}=\frac{\partial (\Psi ^{En})^{\ast }}{\partial \tau _{\Pi }}=\frac{\beta -1}{\beta \left ( 1-\alpha \right ) }\left ( \Gamma _{\tau _{\Pi }}^{\ast }-\frac{\alpha \phi }{\beta -1}\right ) \lt 0$ , respectively. 
Moreover, differentiating partially (43) and (44) with respect to $\tau _{\Pi }$ yields: \begin{align*} \frac {\partial \gamma ^{\ast }}{\partial \tau _{\Pi }} &= \frac {1}{1-\frac {1}{\beta }}\left ( 1-\frac {\Psi _{\tau _{\Pi }}^{En}}{\Psi _{\tau _{\Pi }}^{In}}\right ) \frac {\alpha (\beta -1)}{1-\alpha }\left ( \Gamma _{\tau _{\Pi }}^{\ast }-\frac {\phi }{\beta -1}\right ) \\[5pt] & = \frac {\alpha \beta -\Xi }{1-\alpha }\left ( \Gamma _{\tau _{\Pi }}^{\ast }-\frac {\phi }{\beta -1}\right ) \lessgtr 0, \, \text {if }\alpha \beta -\Xi \gtrless 0, \end{align*} \begin{equation*} \frac {\partial \pi ^{\ast }}{\partial \tau _{\Pi }}=\frac {\Xi -\alpha \beta }{1-\alpha }\left ( \Gamma _{\tau _{\Pi }}^{\ast }-\frac {\phi }{\beta -1}\right ) \lessgtr 0, \, \text {if} \, \alpha \beta -\Xi \gtrless 0, \end{equation*} where $\Xi =\frac{\Gamma _{\tau _{\Pi }}^{\ast }-\frac{\alpha \phi }{\beta -1}}{\Gamma _{\tau _{\Pi }}^{\ast }-\frac{\phi }{\beta -1}}\in (0,1)$ . In addition, differentiating partially (A2), (A3), and (39) with respect to $\tau _{\Pi }$ , respectively, yields: \begin{equation*} \frac {\partial \hat {c}^{\ast }}{\partial \tau _{\Pi }}=\frac {\theta ^{2}}{\left ( \frac {x}{z}\right ) ^{\ast }}\left [ \frac {\Phi\!\left ( \Gamma _{\tau _{\Pi }}^{\ast }-\frac {\phi }{\beta -1}\right ) }{\Gamma ^{\ast }\text {-}(1-\tau _{B})\overline {i}\text {+}\rho \text {+}\frac {1-\tau _{\Pi }}{\beta -1}\phi }\text {+}\frac {\phi }{\beta \text {-}1}-\beta \frac {\sigma (\lambda ^{\ast })\text {+}\lambda ^{\ast }\sigma ^{\prime }(\lambda ^{\ast })\left ( 1\text {+}\lambda ^{\ast }\right ) }{\left ( 1\text {+}\lambda ^{\ast }\right ) ^{2}}\lambda _{\tau _{\Pi }}^{\ast }\right ] \text {+}\frac {1\text {-}\hat {c}^{\ast }\text {-}\varphi \text {-}\theta ^{2}}{1-\tau _{\Pi }}\gtrless 0, \end{equation*} \begin{equation*} \frac {\partial N^{\ast }}{\partial \tau _{\Pi }}=-\frac {N^{\ast }}{1-\tau _{\Pi }}-N^{\ast }\Theta L^{\ast }\cdot \frac {\partial \hat {c}^{\ast }}{\partial \tau _{\Pi }}-\frac {N^{\ast }}{(\Psi ^{In})^{\ast }}\frac {\alpha (\beta -1)}{1-\alpha }\left ( \Gamma _{\tau _{\Pi }}^{\ast }-\frac {\phi }{\beta -1}\right ) \lessgtr 0, \end{equation*} \begin{equation*} \frac {\partial L^{\ast }}{\partial \tau _{\Pi }}=-\Theta \left ( L^{\ast }\right ) ^{2}\cdot \frac {\partial \hat {c}^{\ast }}{\partial \tau _{\Pi }}\lessgtr 0. \end{equation*} B.3. Effects of the bond income tax From (27), we have $\tilde{\lambda }_{\tau _{B}}=\frac{\partial \lambda ^{\ast }}{\partial \tau _{B}}=\frac{-1}{\Sigma \left ( 1-\tau _{V}\right ) }\left \{ \overline{i}+\frac{1-\tilde{\tau }}{1-\tau _{B}}\left [ \sigma (\lambda ^{\ast })+\lambda ^{\ast }\sigma ^{\prime }(\lambda ^{\ast })\left ( 1+\lambda ^{\ast }\right ) \right ] \right \} \lt 0$ and, accordingly, taking the partial derivative of (28) with respect to $\tau _{B}$ yields $\Gamma _{\tau _{B}}^{\ast }=\frac{1}{1+\lambda ^{\ast }}[\frac{(1-\tau _{\Pi })\lambda ^{\ast }\sigma (\lambda ^{\ast })}{\left ( 1-\tau _{B}\right ) ^{2}} -\frac{\overline{i}}{1-\tau _{V}}]\gtreqless 0$ . 
Given that $(\Psi ^{In})^{\ast }=(1-\tau _{\Pi })\alpha \frac{1-\theta }{\theta }\left ( \frac{x}{z}\right ) ^{\ast }$ and $(\Psi ^{En})^{\ast }=\frac{1-\tau _{\Pi }}{\beta }(\frac{1-\theta }{\theta }\left ( \frac{x}{z}\right ) ^{\ast }-\phi )$ (together with (A1)), taking the differentials of $(\Psi ^{In})^{\ast }$ and $(\Psi ^{En})^{\ast }$ with respect to $\tau _{B}$ yields $\Psi _{\tau _{B}}^{In}=\frac{\partial (\Psi ^{In})^{\ast }}{\partial \tau _{B}}=\frac{\alpha (\beta -1)}{1-\alpha }\left ( \Gamma _{\tau _{B}}^{\ast }+\overline{i} \right ) \gtrless 0$ and $\Psi _{\tau _{B}}^{En}=\frac{\partial (\Psi ^{En})^{\ast }}{\partial \tau _{B}}=\frac{\beta -1}{\beta \left ( 1-\alpha \right ) }\left ( \Gamma _{\tau _{B}}^{\ast }+\overline{i}\right ) \gtrless 0$ , respectively. Moreover, differentiating partially (43) and (44) with respect to $\tau _{B}$ , respectively, yields: \begin{equation*} \frac {\partial \gamma ^{\ast }}{\partial \tau _{B}}=\frac {\alpha \beta -1}{1-\alpha }\left ( \Gamma _{\tau _{B}}^{\ast }+\overline {i}\right ) \gtrless 0,\ \text {if }(\alpha \beta -1)\left ( \Gamma _{\tau _{B}}^{\ast }+\overline {i}\right ) \gtrless 0, \end{equation*} \begin{equation*} \frac {\partial \pi ^{\ast }}{\partial \tau _{B}}=\frac {1-\alpha \beta }{1-\alpha }\Gamma _{\tau _{B}}^{\ast }-\frac {\alpha (\beta -1)}{1-\alpha }\overline {i}\lessgtr 0. \end{equation*} Sequentially, differentiating partially (A2), (A3), and (39) with respect to $\tau _{B}$ yields: \begin{equation*} \frac {\partial \hat {c}^{\ast }}{\partial \tau _{B}}=\frac {\theta ^{2}}{\left ( \frac {x}{z}\right ) ^{\ast }}\left [ \frac {\Phi \left ( \Gamma _{\tau _{B}}^{\ast }+\overline {i} \right ) }{\Gamma ^{\ast }-(1-\tau _{B})\overline {i}+\rho +\frac {1-\tau _{\Pi }}{\beta -1}\phi }-\beta \frac {\sigma (\lambda ^{\ast })+\lambda ^{\ast }\sigma ^{\prime }(\lambda ^{\ast })\left ( 1+\lambda ^{\ast }\right ) }{\left ( 1+\lambda ^{\ast }\right ) ^{2}}\lambda _{\tau _{B}}^{\ast }\right ] \gtrless 0, \end{equation*} \begin{equation*} \frac {\partial N^{\ast }}{\partial \tau _{B}}=\frac {N^{\ast }(1-L^{\ast })\overline {i}}{(1+\tau _{c})+(1-\tau _{B})\overline {i}}-N^{\ast }\Theta L^{\ast }\cdot \frac {\partial \hat {c}^{\ast }}{\partial \tau _{B}}-\frac {N^{\ast }}{(\Psi ^{In})^{\ast }}\frac {\alpha (\beta -1)}{1-\alpha }\left ( \Gamma _{\tau _{B}}^{\ast }+\overline {i}\right ) \lessgtr 0, \end{equation*} \begin{equation*} \frac {\partial L^{\ast }}{\partial \tau _{B}}=\frac {L^{\ast }(1-L^{\ast })\overline {i}}{(1+\tau _{c})+(1-\tau _{B})\overline {i}}-\Theta \left ( L^{\ast }\right ) ^{2}\cdot \frac {\partial \hat {c}^{\ast }}{\partial \tau _{B}}\lessgtr 0. \end{equation*} B.4. Effects of the nominal interest rate From (27), we have $\lambda _{\overline{i}}^{\ast }=\frac{\partial \lambda ^{\ast }}{\partial \overline{i}}=\frac{1}{\Sigma }\frac{1-\tau _{B}}{1-\tau _{V}}\tilde{\tau }\gt 0$ . Taking the partial derivative of (28) with respect to $\overline{i}$ yields $\Gamma _{\overline{i}}^{\ast }=\frac{1}{1+\lambda ^{\ast }}\left [ \left ( 1-\tau _{\Pi }\right ) \lambda ^{\ast }+\frac{1-\tau _{B}}{1-\tau _{V}}\right ] \gt 0$ . 
Given $(\Psi ^{In})^{\ast }$ , $(\Psi ^{En})^{\ast }$ , and (A1), we take the differentials of $(\Psi ^{In})^{\ast }$ and $(\Psi ^{En})^{\ast }$ with respect to $\overline{i}$ and obtain $\Psi _{\overline{i}}^{In}=\frac{\partial (\Psi ^{In})^{\ast }}{\partial \overline{i}}=\frac{\alpha (\beta -1)}{1-\alpha }\left [ \Gamma _{\overline{i}}^{\ast }-(1-\tau _{B})\right ] \gtrless 0$ and $\Psi _{\overline{i}}^{En}=\frac{\partial (\Psi ^{En})^{\ast }}{\partial \overline{i}}=\frac{\beta -1}{\beta (1-\alpha )}\left [ \Gamma _{\overline{i}}^{\ast }-(1-\tau _{B})\right ] \gtrless 0$ , respectively. Furthermore, differentiating partially (43) and (44) with respect to $\overline{i}$ , respectively, yields: \begin{equation*} \frac {\partial \gamma ^{\ast }}{\partial \overline {i}}=\frac {\alpha \beta -1}{1-\alpha }\left [ \Gamma _{\overline {i}}^{\ast }-(1-\tau _{B})\right ] \gtrless 0, \, \text{if} \, \left ( \alpha \beta -1\right ) \left [ \Gamma _{\overline {i}}^{\ast }-(1-\tau _{B})\right ] \gtrless 0, \end{equation*} \begin{equation*} \frac {\partial \pi ^{\ast }}{\partial \overline {i}}=\frac {1-\alpha \beta }{1-\alpha }\Gamma _{\overline {i}}^{\ast }+\frac {\alpha (\beta -1)}{1-\alpha }(1-\tau _{B})\lessgtr 0. \end{equation*} Then, differentiating partially (A2), (A3), and (39) with respect to $\overline{i}$
{"url":"https://core-cms.prod.aop.cambridge.org/core/journals/macroeconomic-dynamics/article/rd-finance-and-economic-growth-a-schumpeterian-model-with-endogenous-financial-structures/41B58F676CFE18F568B0F2728C6CC5DF","timestamp":"2024-11-10T16:18:15Z","content_type":"text/html","content_length":"1049980","record_id":"<urn:uuid:b2750278-c0be-4a5b-a53f-ed44c322e23d>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00013.warc.gz"}
Equity Unveiled: Demystifying Advanced Poker Math - Poker Ace Network Equity Unveiled: Demystifying Advanced Poker Math Poker is a game of skill and strategy, where players must make calculated decisions based on the information they have at hand. While many factors contribute to success in poker, understanding and applying poker math is crucial for advanced players. In this article, we will demystify advanced poker math and highlight its importance in becoming a successful player. The Importance of Understanding Poker Math for Advanced Players Firstly, let’s clarify what poker math entails. Poker math refers to the application of mathematical principles in analyzing and making decisions during a poker game. It involves probability, statistics, and game theory to determine the best course of action in various situations. By understanding poker math, players can gain an edge over their opponents and make more informed decisions. One of the key aspects of poker math is calculating pot odds. Pot odds are a ratio that compares the current size of the pot to the cost of a contemplated call. By comparing these two figures, players can determine whether it is profitable to continue playing or fold their hand. This calculation allows players to assess the risk-reward ratio and make optimal decisions based on the potential value of their hand. Equally important is understanding implied odds. Implied odds take into account not only the current pot size but also the potential future bets that could be won if a favorable card comes on subsequent rounds. Advanced players consider implied odds when deciding whether to invest more chips in a hand with a strong drawing possibility. By factoring in the potential future winnings, players can make more accurate calculations and increase their chances of long-term profitability. Another crucial aspect of poker math is understanding equity. Equity represents a player’s share of the pot based on their chance of winning the hand at any given moment. Calculating equity requires considering the range of hands an opponent might hold and evaluating how our own hand matches up against that range. By accurately assessing equity, players can make decisions that maximize their expected value and minimize losses. Understanding equity also enables players to make effective bluffs and value bets. By evaluating the likelihood of their opponents folding or calling, players can determine when it is profitable to bluff or extract value from a strong hand. This calculation involves estimating the probability of various outcomes and weighing them against the potential gains or losses. Advanced players use their understanding of equity to manipulate their opponents’ decisions and exploit any weaknesses in their play. Furthermore, poker math helps in making optimal bet sizing decisions. By considering pot odds, implied odds, and equity, players can determine the ideal bet size that maximizes their expected value. A well-calculated bet size ensures that players neither give away too much information nor miss out on potential profits. It also helps in maintaining a balanced range of bets, preventing opponents from easily reading one’s hand strength based on bet sizes alone. In conclusion, understanding and applying poker math is crucial for advanced players seeking to elevate their game. By calculating pot odds, implied odds, equity, and making optimal bet sizing decisions, players gain a competitive edge over their opponents. 
Poker math enables players to make more informed decisions, maximize profitability, and manipulate their opponents’ actions. Therefore, aspiring advanced players should invest time and effort in demystifying and mastering the intricacies of poker math to improve their overall performance at the table. Mastering Poker Math: How to Improve Your Game and Make Better Decisions Poker is a game of skill, strategy, and calculated risks. To truly excel at the game, players must understand the underlying mathematics that govern it. While basic poker math, such as pot odds and expected value, are well-known concepts, advanced poker math can often be intimidating and shrouded in mystery. In this article, we will demystify advanced poker math and explain how it can help you make better decisions and improve your overall game. At its core, advanced poker math revolves around equity. Equity is a mathematical concept that represents a player’s share of the pot based on their chances of winning the hand. It is an essential tool for evaluating the profitability of different plays and making informed decisions. To calculate equity, players must consider their hand strength, the range of hands their opponents could have, and the community cards on the board. This information allows them to estimate their chances of winning the hand at any given moment. By comparing their equity with the size of the pot and the cost of calling or betting, players can determine whether a play is profitable in the long One fundamental concept in advanced poker math is expected value (EV). EV is a measure of the average amount of money a player can expect to win or lose over the long term. By calculating the EV of different plays, players can identify the most profitable options and avoid costly mistakes. Calculating EV involves multiplying the probability of each possible outcome by its associated payoff or loss. For example, if a player has a 50% chance of winning $100 and a 50% chance of losing $50, the EV of that play would be ($100 * 0.5) + (-$50 * 0.5) = $25. Positive EV plays are those with an expected profit, while negative EV plays are likely to result in losses. Another crucial aspect of advanced poker math is understanding pot odds. Pot odds compare the current size of the pot to the cost of a contemplated call. By comparing pot odds to the probability of winning the hand, players can determine whether calling is a profitable move. For example, if the pot is $100 and a player must call a $20 bet, their pot odds are 5:1 (100/20). If their chances of winning the hand are greater than 1 in 6 (16.67%), calling would be a profitable play in the long run. However, if their chances of winning are lower, folding would be the more prudent decision. To fully grasp advanced poker math, players must also understand implied odds. Implied odds take into account potential future bets that could be won if a player hits a strong hand. By factoring in these additional winnings, players can make more accurate calculations regarding the profitability of their decisions. In conclusion, mastering advanced poker math is essential for any serious player looking to improve their game and make better decisions. Understanding equity, expected value, pot odds, and implied odds allows players to evaluate the profitability of different plays and avoid costly mistakes. By developing proficiency in these concepts, players gain a strategic edge over their opponents and increase their chances of long-term success at the poker table. 
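To make the two worked numbers in this section concrete, here is a short Python sketch (an illustration added for clarity, not part of the original article) that reproduces the expected-value example and the pot-odds breakeven point quoted above:

# Expected value: 50% chance of winning $100, 50% chance of losing $50
ev = 0.5 * 100 + 0.5 * (-50)
print(f"EV = ${ev:.2f}")             # $25.00 -> a positive-EV play in the long run

# Pot odds: the pot is $100 and we must call $20, so we put in 20 to win a 120 pot
pot, call = 100, 20
breakeven = call / (pot + call)      # fraction of the time the call must win to break even
print(f"Need to win more than {breakeven:.2%} of the time")  # about 16.67%, i.e. 1 in 6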
So don’t let advanced poker math intimidate you; embrace it as a powerful tool to enhance your game. Exploring the Role of Probability in Advanced Poker Strategies Poker is a game that combines skill, strategy, and a little bit of luck. While many players are familiar with the basics of poker math, such as calculating pot odds and expected value, there is a whole realm of advanced poker math that can greatly enhance your understanding of the game. In this article, we will delve into the world of equity, a fundamental concept in poker math. Equity, simply put, is the share of the pot that belongs to you based on your chances of winning at any given point in the hand. It is a crucial concept because it allows you to make more informed decisions about whether to bet, call, or fold. By understanding your equity, you can maximize your profits and minimize your losses over the long run. To calculate your equity, you need to consider two key factors: your hand’s strength and the range of hands your opponents could have. This requires a deep understanding of probability theory and some mathematical calculations. But fear not, as we will demystify these concepts and show you how to apply them in real-world scenarios. Let’s say you hold a pair of kings, and the flop comes 8-9-10 with two hearts. You suspect that one of your opponents might have a flush draw. To determine your equity in this situation, you would need to calculate the probability of winning against all possible hands your opponent could have, taking into account the remaining cards to be dealt. This calculation involves considering the number of outs you have, which are the cards that improve your hand. For example, on this 8-9-10 board your pair of kings has two outs (the two remaining kings) to improve to three of a kind, while an opponent holding two hearts has nine outs to complete a flush by the river. By using a simple formula, you can estimate the probability of hitting one of these outs. Once you have determined the probability of hitting your outs, you can then calculate your equity. This is done by multiplying the probability of winning with each possible hand by the amount of money in the pot. By summing up all these values, you get your overall equity in the hand. Understanding your equity allows you to make more precise decisions about whether to bet, call, or fold. If your equity is high, it might be wise to put more money into the pot. Conversely, if your equity is low, folding might be the best option to avoid losing more chips. Equity calculations become even more complex when multiple opponents are involved. In such cases, you need to consider their ranges of hands and adjust your calculations accordingly. This requires a deep understanding of poker strategy and experience in reading your opponents’ tendencies. While advanced poker math may seem intimidating at first, it is an essential tool for serious players looking to take their game to the next level. By mastering equity calculations, you will gain a significant edge over your opponents and be able to make more informed decisions based on solid mathematical reasoning. In conclusion, equity is a fundamental concept in advanced poker math that allows players to determine their share of the pot based on their chances of winning. By calculating your equity, you can make more informed decisions and maximize your profits in the long run. While it may require some mathematical calculations and a deep understanding of probability theory, mastering equity will undoubtedly enhance your overall poker skills. 
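One way to estimate the probability of hitting a set of outs is a small Monte Carlo experiment, in the spirit of the simulation-based tools mentioned later in this article. The sketch below is a deliberately simplified illustration (added here, not from the original text): it only estimates how often a nine-out flush draw completes by the river, not full hand-versus-hand equity.

import random

def flush_completes(trials=100_000):
    # After the flop there are 47 unseen cards, 9 of which complete the flush ("outs").
    hits = 0
    for _ in range(trials):
        deck = [1] * 9 + [0] * 38          # 1 = an out, 0 = a blank
        turn, river = random.sample(deck, 2)  # deal turn and river without replacement
        if turn or river:
            hits += 1
    return hits / trials

print(f"Estimated chance of completing the flush by the river: {flush_completes():.1%}")  # roughly 35%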
So, embrace the world of advanced poker math and unveil the power of equity! Unveiling the Secrets Behind Poker Math: A Comprehensive Guide for Players Poker is a game of skill, strategy, and intuition. While many players rely on their instincts to make decisions at the table, there is another aspect of the game that can greatly enhance your chances of success โ poker math. Understanding the principles of poker math allows you to make more informed decisions based on probability and expected value. It provides a solid foundation for strategic play, enabling you to analyze situations and calculate your equity in a hand. Equity, in poker terms, refers to the share of the pot that belongs to each player. By calculating your equity, you can determine whether it is profitable to continue playing a particular hand or fold. This knowledge is especially crucial when facing bets or raises from opponents. To begin unraveling the mysteries of advanced poker math, let’s first discuss the concept of expected value (EV). EV is a measure of the average amount of money you can expect to win or lose over the long run. It takes into account both the probability of winning a hand and the potential payoff. Calculating EV involves multiplying the probability of each possible outcome by its respective payoff and summing them up. If the resulting value is positive, it means that continuing with the hand is profitable in the long run. Conversely, a negative EV suggests folding would be the more prudent decision. Equity calculations are closely tied to EV, as they help determine the likelihood of winning a hand at any given moment. To calculate equity, you need to consider your hole cards, the community cards, and the range of hands your opponents might hold. One commonly used method for estimating equity is known as the Monte Carlo simulation. This technique involves running numerous random simulations of a hand to approximate the likelihood of different outcomes. The more simulations performed, the more accurate the equity calculation becomes. Once you have a grasp of basic equity calculations, you can delve into more advanced concepts like pot odds and implied odds. Pot odds compare the size of the current bet to the potential payoff, helping you decide whether it is worth calling or folding. Implied odds take into account the potential future bets that can be won if you hit a strong hand. For example, if you have a flush draw and believe your opponent will make additional bets on later streets, the implied odds may justify continuing with the hand despite unfavorable pot odds. Understanding equity also allows you to evaluate different betting strategies. By comparing your equity against the pot odds, you can determine whether it is more profitable to bet for value or bluff. A solid understanding of poker math enables you to make these decisions based on logic rather than guesswork. In conclusion, advanced poker math is an essential tool for any serious player looking to improve their game. Equity calculations provide valuable insights into the profitability of each decision you make at the table. By demystifying poker math, you gain a deeper understanding of the game, allowing you to make more informed and strategic choices. So, embrace the numbers and let them guide you towards success in the world of poker. Taking Your Poker Skills to the Next Level with Advanced Mathematical Concepts Poker is a game that combines strategy, skill, and a little bit of luck. 
While many players rely on instinct and experience to make decisions at the poker table, there is another aspect of the game that can greatly enhance your chances of success: advanced mathematical concepts. Understanding the principles of equity and how to calculate it can take your poker skills to the next level. Equity, in the context of poker, refers to the share of the pot that belongs to you based on the strength of your hand and the likelihood of improving it. It is a fundamental concept that every serious poker player should understand. By calculating equity, you can make more informed decisions about whether to call, raise, or fold in any given situation. To calculate equity, you need to consider two main factors: the strength of your hand and the number of outs you have. The strength of your hand refers to how likely it is to win at showdown. For example, if you have a pair of Aces, your hand is considered very strong. On the other hand, if you have a lowly 2 and 7 offsuit, your hand is weak. Outs, on the other hand, are cards that can improve your hand. Let’s say you have a flush draw, meaning you have four cards of the same suit and need one more to complete the flush. In this case, there are nine remaining cards of that suit in the deck, so you have nine outs. Once you know the strength of your hand and the number of outs you have, you can use mathematical calculations to determine your equity. There are various formulas and methods to do this, but one common approach is known as the “rule of 4 and 2.” This rule allows you to quickly estimate your equity by multiplying your outs by either 4 or 2, depending on the stage of the hand. During the flop, when there are still two more cards to come, you multiply your outs by 4. For example, if you have a flush draw with nine outs, your equity would be approximately 36%. During the turn, when there is only one card left to come, you multiply your outs by 2. In this case, your equity would be around 18%. By understanding and calculating equity, you can make more informed decisions at the poker table. If the pot odds are greater than your equity, it may be profitable to call or raise. Conversely, if the pot odds are lower than your equity, folding would be the better option. It’s important to note that equity calculations are not an exact science. They are estimates based on probabilities and assumptions. However, by consistently making decisions that align with your calculated equity, you can improve your overall profitability in the long run. In conclusion, advanced mathematical concepts, such as equity calculations, can take your poker skills to the next level. By understanding the principles of equity and how to calculate it, you can make more informed decisions at the poker table. Whether you’re a beginner or an experienced player, incorporating advanced mathematical concepts into your game can greatly enhance your chances of success. So, demystify the world of advanced poker math and start taking your poker skills to new heights!
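The rule of 4 and 2 is easy to check against the exact combinatorial probabilities. The following sketch (an added illustration, not from the original text) compares the two for the nine-out flush draw discussed above:

from math import comb

outs = 9
rule_flop = outs * 4 / 100    # two cards to come -> approximately 36%
rule_turn = outs * 2 / 100    # one card to come  -> approximately 18%

# Exact probabilities: 47 unseen cards on the flop, 46 on the turn
exact_flop = 1 - comb(47 - outs, 2) / comb(47, 2)
exact_turn = outs / 46

print(f"Flop: rule of 4 gives {rule_flop:.1%}, exact is {exact_flop:.1%}")   # 36.0% vs ~35.0%
print(f"Turn: rule of 2 gives {rule_turn:.1%}, exact is {exact_turn:.1%}")   # 18.0% vs ~19.6%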
{"url":"https://pokeracenetwork.com/equity-unveiled-demystifying-advanced-poker-math/","timestamp":"2024-11-06T23:57:19Z","content_type":"text/html","content_length":"142794","record_id":"<urn:uuid:e5b829d4-7146-496d-93c1-47d29fd25b0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00750.warc.gz"}
LPeg: inserting previous captures by substitution • Subject: LPeg: inserting previous captures by substitution • From: Александр Машин <alex.mashin@...> • Date: Sat, 25 Jul 2015 01:45:19 +0700 Dear All, I wrote a simple LPeg grammar in re.lua syntax that matches a signless integer with digits grouped by spaces (there are several ways to do it in different cultures): int <- digits spaces int / digits digits <- [0-9]+ spaces <- %s+ It works, matching figures like "6 000 000". I can also remove the spaces with substitution capture: int <- {~ digits spaces -> '' int / digits ~} digits <- [0-9]+ spaces <- %s+ It will match "6 000 000" and return "6000000". But what if I want to return "6000000|6 000 000"? Can I memorise the original capture, before removing the spaces and insert it by substitution after processed string? I mean, can I do it with pure re syntax, without function captures? Thanks in advance, Alexander Mashin
{"url":"http://lua-users.org/lists/lua-l/2015-07/msg00507.html","timestamp":"2024-11-13T08:51:58Z","content_type":"text/html","content_length":"4723","record_id":"<urn:uuid:11d3df72-e454-413b-b206-bf9498219d45>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00120.warc.gz"}
GATE Agricultural Engineering Syllabus (AG) 2024 - GATE 2042 Syllabus GATE Agricultural Engineering Syllabus (AG) 2024 – Download GATE Syllabus in PDF IIT Kanpur is the organizing authority of Gate 2024. It is responsible for devising the syllabus for the exam. The GATE Agriculture Engineering Syllabus 2024 comprises 7 subjects. The 7 subjects are bifurcated into subsections. Almost 70% of the GATE Agriculture Engineering Syllabus is based on core agriculture-based subjects, and 15% of the total marks consist of General Aptitude and the Engineering Mathematics section. This article will give you complete information regarding the GATE Agricultural Engineering Syllabus. GATE Agricultural Engineering Syllabus (AG) 2024 GATE Agricultural Engineering Sections consist of some of the Arithmetic Problems, Engineering Mathematics, and Agricultural Engineering. GATE Engineering Mathematics Syllabus Linear Algebra Algebra of matrices: A matrix’s inverse and rank; a system of linear equations; Determinants; Symmetric, skew-symmetric, and orthogonal matrices; Eigenvalues and eigenvectors, matrices diagonalization, Cayley-Hamilton Theorem Functions of a single variable: Limitation, continuity, and differentiation; Mean value theorems, indeterminate forms, and L’Hospital’s rule; Maxima and minima Taylor’s theorem, Fundamental theorem, and mean value-theorems of integral calculus; Evaluation of definite and improper integrals; applications of definite integrals to evaluate areas and volumes. Functions of two variables: Limitation, continuity, and partial derivatives Directional derivative, total derivative; The normal line and the tangent plane; Maxima, minima, and saddle points, as well as the Lagrange multiplier method; Double and triple integrals and their applications Sequence and series: Convergence of sequence and series; Tests for convergence, Power series; Taylor’s series; Fourier Series; Half range sine and cosine series. Vector Calculus Gradient, divergence, and curl; Line and surface integrals; Green’s theorem, Stokes theorem, and Gauss divergence theorem (without proofs). Complex Variable Analytic functions; Cauchy-Riemann equations; Line integral, Cauchy’s integral theorem and integral formula (without proof); Taylor’s series and Laurent series; The Residue Theorem (without proof) and its applications. Ordinary Differential Equation First-order equations (linear and nonlinear); higher-order linear differential equations with constant coefficients; second-order linear differential equations with variable coefficients; The method of parameter variation; the Cauchy-Euler equation; power series solutions; Legendre polynomials; and the properties of Bessel functions of the first kind. Partial Differential Equation Classification of second-order linear partial differential equations; variable separation method; Laplace equation; one-dimensional heat and wave equation solutions. Axioms of probability; Conditional probability; Bayes’ Theorem; Discrete and continuous random variables: Binomial, Poisson, and normal distributions; Correlation and linear regression. Numerical Methods The solution of systems of linear equations using LU decomposition, Gauss elimination, and Gauss-Seidel methods; Solution of polynomial and transcendental equations by the Newton-Raphson method; Numerical integration by trapezoidal rule, Simpson’s rule, and Gaussian quadrature rule; Numerical solutions of first-order differential equations by Euler’s method and 4th order Runge-Kutta method. 
Farm Machinery Machine Design: Design and selection of machine elements – gears, pulleys, chains, sprockets, belts; overload safety devices used in farm machinery; measurement of force, torque, speed, displacement, and acceleration on machine elements. Farm Machinery: Soil tillage; forces acting on a tillage tool; hitch systems and hitching of tillage implements; functional requirements, principles of working, construction, and operation of manual, animal, and power-operated equipment for tillage, sowing, planting, fertilizer application, inter-cultivation, spraying, mowing, chaff cutting, harvesting, threshing, and transport; testing of agricultural machinery and equipment; calculation of performance parameters-field capacity, efficiency, application rate, and losses; cost analysis of implements and tractors. Farm Power Sources of Power: Sources of power on the farm – human, animal, mechanical, electrical, wind, solar, and biomass; biofuels. Farm Power: Thermodynamic principles of internal combustion engines; internal combustion engine cycles; engine components; fuels and combustion; lubricants and their properties; internal combustion engine systems – fuel, cooling, lubrication, ignition, electrical, intake and exhaust; selection, operation, maintenance, and repair of internal combustion engines; power efficiencies and measurement; calculation of power, torque, fuel consumption, heat load, and power losses. Tractors and Powertillers: tractor type, tractor selection, maintenance, and repair of tractors and power tillers; tractor clutches and brakes; power transmission systems–gear trains, differential, final drives, and power take-off; tractor chassis mechanics; traction theory; three-point hitches-free link and restrained link operations; mechanical, steering, and hydraulic control systems used in tractors; tractor tests and performance human engineering and safety in the design of tractors and agricultural implements. Soil and Water Conservation Engineering Fluid Mechanics: Ideal and real fluids, properties of fluids; hydrostatic pressure and its measurement; hydrostatic forces on plane and curved surface; continuity equation; Bernoulli’s theorem; laminar and turbulent flow in pipes, Darcy- Weisbach and Hazen-Williams equations, Moody’s diagram; flow through orifices and notches; flow in open channels. Soil Mechanics: Engineering properties of soils; fundamental definitions and relationships; index properties of soils; permeability and seepage analysis; shear strength, Mohr’s circle of stress, active and passive earth pressures; stability of slopes. Hydrology: Hydrological cycle and components; meteorological parameters, their measurement and analysis of precipitation data; runoff estimation; hydrograph analysis, unit hydrograph theory, and application; stream flow measurement; flood routing, hydrological reservoir, and channel routing. Surveying and Leveling: Measurement of distance and area; instruments for surveying and leveling; chain surveying, methods of traversing; measurement of angles and bearings, plane table surveying; types of leveling; theodolite traversing; contouring; computation of areas and volume. Soil and Water Erosion: Mechanics of soil erosion, soil erosion types, wind and water erosion, factors affecting erosion; soil loss estimation; biological and engineering measures to control erosion; terraces and bunds; vegetative waterways; gully control structures, drop, drop inlet and chute spillways; earthen dams. 
Watershed Management: Watershed characterization; land use capability classification; rainwater harvesting structures, check dams, and farm ponds. Irrigation and Drainage Engineering Soil-Water-Plant Relationship: Water requirement of crops; consumptive use and evapotranspiration; measurement of infiltration, soil moisture, and irrigation water infiltration. Irrigation Water Conveyance and Application Methods: Design of irrigation channels and underground pipelines; irrigation scheduling; surface, sprinkler, and micro-irrigation methods, design and evaluation of irrigation methods; irrigation efficiencies. Agricultural Drainage: Drainage coefficient; planning, design, and layout of surface and sub-surface drainage systems; leaching requirement and salinity control; irrigation and drainage water quality and reuse. Groundwater Hydrology: Groundwater occurrence; Darcy’s Law, steady flow in confined and unconfined aquifers, evaluation of aquifer properties; groundwater recharge. Wells and Pumps: Types of wells, steady flow through wells; classification of pumps; pump characteristics; pump selection and installation. Agricultural Processing Engineering Drying: Psychrometry – properties of air-vapors mixture; concentration and drying of liquid foods – evaporators, tray, drum and spray dryers; hydrothermal treatment; drying and milling of cereals, pulses, and oilseeds. Size Reduction and Conveying: Mechanics and energy requirement in size reduction of granular solids; particle size analysis for comminuted solids; size separation by screening; fluidization of granular solids-pneumatic, bucket, screw, and belt conveying; cleaning and grading; Effectiveness of grain cleaners; centrifugal separation of solids, liquids, and gases. Processing and By-product Utilization: Processing of seeds, spices, fruits, and vegetables; By-product utilization from processing industries. Storage Systems: Controlled and modified atmosphere storage; perishable food storage, godowns, bins, and grain silos. Dairy and Food Engineering Heat and Mass Transfer: Steady-state heat transfer in conduction, convection, and radiation; transient heat transfer in simple geometry; working principles of heat exchangers; diffusive and convective mass transfer; simultaneous heat and mass transfer in agricultural processing operations; material and energy balances in food processing systems; water activity, sorption and desorption Preservation of Food: Kinetics of microbial death – pasteurization and sterilization of milk and other liquid foods; preservation of food by cooling and freezing; refrigeration and cold storage basics and applications. Books for GATE Agricultural Engineering (AG) GATE Agricultural Engineering (AG) Name Buy Now Rhizobium Biology and Biotechnology (Soil Biology) Click Here Plant Nanotechnology: Principles and Practices Click Here Advances in Insect Control and Resistance Management Click Here Agricultural Engineering, or AE, is a lengthy exam that consists of 7 different examinations. The candidates must understand the syllabus and topics first and then start their preparation. It is critical to prepare using only the most credible and trustworthy study materials. The students must maintain a timetable to schedule their preparation. It is also essential to manage studies and health together. Do take care of yourself apart from your studies. Always keep in mind that a healthy mind and body produce more productivity. Work hard, work smart! Good luck! People are also reading:
{"url":"https://learndunia.com/gate-agricultural-engineering-syllabus/","timestamp":"2024-11-05T08:42:52Z","content_type":"text/html","content_length":"130146","record_id":"<urn:uuid:841f6368-0717-4c85-a3e8-b66d03939a2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00273.warc.gz"}
Integer Operations: In this section, we will apply our knowledge of integer addition and subtraction to solve word problems. We perform integer operations in our daily lives, whether we are calculating how much money we spent at the store or figuring out how many minutes we have left before our bus arrives. Negative integers represent decreasing values or downward movements and positive integers represent increasing values or upward movements. We encounter negative and positive integers all around us. For example, we deal with these integers when talking about temperature, altitude, money and even hockey scores!
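To make this concrete, here is a small, illustrative sketch (the numbers are made up for this example) showing how a typical word problem turns into integer addition and subtraction:

    # Word problem: the temperature was -3 degrees at sunrise, rose 8 degrees
    # by noon, then dropped 5 degrees by evening. What was it in the evening?
    sunrise = -3
    noon = sunrise + 8    # an increase is addition of a positive integer
    evening = noon - 5    # a decrease is subtraction (adding a negative integer)
    print(evening)        # prints 0, so the evening temperature was 0 degrees

The same pattern works for money: an amount spent is a negative integer, an amount earned is a positive one.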
{"url":"https://www.studypug.com/basic-math-help/application-of-integer-operations","timestamp":"2024-11-04T05:39:10Z","content_type":"text/html","content_length":"434813","record_id":"<urn:uuid:02a4f776-5141-42ee-aecc-968aa4417182>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00732.warc.gz"}
Research Focus Number theory I study number theory, specifically Diophantine approximation and its applications to solving equations in integers. I am also interested in computational number theory, an area where large scale calculations are used to provide evidence in support of existing conjectures and gather extensive data on various mathematical invariants. MATH Courses - Fall 2024 MATH Courses - Spring 2025
{"url":"https://math.cornell.edu/anton-mosunov","timestamp":"2024-11-11T17:29:54Z","content_type":"text/html","content_length":"57453","record_id":"<urn:uuid:2ebea78d-0f51-4fb6-a6ed-bcd848220d4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00534.warc.gz"}
17.7.2.3 Algorithms (Partial Least Squares)
Partial Least Squares is used to construct a model when there is a large number of correlated predictor variables or when the number of predictor variables exceeds the number of observations. In these cases, multiple linear regression techniques often fail to produce a predictive model due to over-fitting. Partial least squares is used in modeling industrial processes and for tasks such as calibrating and predicting component amounts in spectral analysis.
Partial least squares extracts factors as linear combinations of predictor variables, and projects predictor variables and response variables onto the extracted factor space. An observation containing one or more missing values is excluded from the analysis, i.e. excluded in a listwise way.
Let the numbers of observations, predictor variables, and response variables be n, m, and r respectively. Predictor variables are denoted by the matrix X of size $n \times m$, and response variables by Y of size $n \times r$. Subtract the mean from each column in matrices X and Y, and denote the results $X_0$ and $Y_0$.
• Scale Variables
Each column in the matrix $X_0$ is divided by its standard deviation.
Partial Least Squares Method
Origin supports two methods to compute extracted factors: Wold's Iterative and Singular Value Decomposition (SVD).
Wold's Iterative
Use an initial vector u. If r = 1, initialize u = Y; otherwise u can be a vector of random values.
• Repeat each iteration until w converges:
$w=X_0^Tu$, and normalize w by $w=w/|| w ||$
$t=X_0w$, and normalize t by $t=t/|| t ||$
$q=Y_0^Tt$, and normalize q by $q=q/|| q ||$
After w converges, update $t=X_0w$, and normalize t by $t=t/|| t ||$
where w, t, u, p, q are the x weights, x scores, y scores, x loadings, and y loadings for the first factor.
• Repeat the above process with the residual matrices k times, and k factors can be constructed. The x weights, x scores, y scores, x loadings, and y loadings for the k factors are denoted by the matrices W, T, U, P, and Q. Note that in Origin the signs of x scores, y scores, x loadings, and y loadings for each factor are normalized by forcing the sum of x weights for each factor to be positive.
Singular Value Decomposition (SVD)
• X Weights for the First Factor
w is the normalized first left singular vector of $X_0^TY_0$, and $t=X_0w$, normalized by $t=t/|| t ||$
• Repeat the above process with the residual matrices k times, and k factors can be extracted.
Cross Validation
Origin uses "leave-one-out" cross validation to find the optimal number of factors. It leaves out one observation at a time, uses the remaining observations to construct the model, and predicts the response for the left-out observation.
• PRESS
PRESS is the predicted residual sum of squares. It can be calculated by:
$\text{PRESS} = \sum_{i=1}^n \sum_{j=1}^r (Y_{ij} - \hat{Y}_{ij})^2$
where $\hat{Y}_{ij}$ is the Y value predicted by leave-one-out. Note that if variables are scaled, PRESS is computed on the scaled values. If the maximum number of factors is k, PRESS is calculated for 0, 1, ..., k factors. For 0 factors,
$\text{PRESS} = \sum_{i=1}^n \sum_{j=1}^r (Y_{ij} - \bar{Y}_{j})^2$
where $\bar{Y}_{j}$ is the mean value of the jth Y variable.
• Root Mean PRESS
Root mean PRESS is the root mean of PRESS. It is defined by:
$\text{Root Mean PRESS} = \sqrt{ \frac{\text{PRESS}}{ (n-1)r } }$
Origin uses the minimum Root Mean PRESS to find the optimal number of factors in Cross Validation.
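Before moving on to prediction, here is a minimal NumPy sketch of the iterative factor extraction described above. It is an illustration only, not Origin's implementation: the initialization with the first column of Y, the explicit y-score update u = Y0 q, the loadings step p = X0^T t, and the deflation X0 <- X0 - t p^T, Y0 <- Y0 - t q^T (chosen to be consistent with the residual formulas given later in this section) are all assumptions of this sketch.

    import numpy as np

    def wold_pls(X, Y, k, tol=1e-10, max_iter=500):
        """Extract k PLS factors from centered X (n x m) and Y (n x r).
        A sketch of the Wold-style iteration, not Origin's code."""
        X0, Y0 = X.astype(float).copy(), Y.astype(float).copy()
        W, T, U, P, Q = [], [], [], [], []
        for _ in range(k):
            u = Y0[:, [0]]                        # initial y-score vector (assumption)
            w_old = np.zeros((X0.shape[1], 1))
            for _ in range(max_iter):
                w = X0.T @ u; w /= np.linalg.norm(w)   # x weights
                t = X0 @ w;  t /= np.linalg.norm(t)    # x scores
                q = Y0.T @ t; q /= np.linalg.norm(q)   # y loadings
                u = Y0 @ q                             # y scores (assumed update)
                if np.linalg.norm(w - w_old) < tol:    # stop when w converges
                    break
                w_old = w
            p = X0.T @ t                               # x loadings (assumption)
            X0 -= t @ p.T                              # deflate: residual X matrix
            Y0 -= t @ q.T                              # deflate: residual Y matrix
            W.append(w); T.append(t); U.append(u); P.append(p); Q.append(q)
        return tuple(np.hstack(M) for M in (W, T, U, P, Q))

Each returned matrix stacks the per-factor vectors as columns, so W, P are m x k, T, U are n x k, and Q is r x k, matching the notation used in this section.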
Response Prediction
Once the model is constructed, responses can be predicted using the coefficients of the fitted model. The coefficients are calculated from the weights and loadings matrices:
$C = W(P^TW)^{-1}Q^T$
and the predicted responses are calculated as:
$\hat{Y}_0 = X_0 C$
Note that here the variables are centered. If variables are also scaled, the responses should be scaled back.
• Variance Explained for X Effects
Variance explained for the lth X variable:
$\frac{ \sum_{j=1}^{k} P_{lj}^2 }{ \sum_{i=1}^{n} {X_0}_{il}^2 }$
Variance explained for all X variables:
$\frac{ \sum_{l=1}^{m} \sum_{j=1}^{k} P_{lj}^2 }{ \sum_{l=1}^{m} \sum_{i=1}^{n} {X_0}_{il}^2 }$
• Variance Explained for Y Responses
Variance explained for the lth Y variable:
$\frac{ \sum_{j=1}^{k} Q_{lj}^2 }{ \sum_{i=1}^{n} {Y_0}_{il}^2 }$
Variance explained for all Y variables:
$\frac{ \sum_{l=1}^{r} \sum_{j=1}^{k} Q_{lj}^2 }{ \sum_{l=1}^{r} \sum_{i=1}^{n} {Y_0}_{il}^2 }$
• VIP Statistic
VIP (variable influence on projections) summarizes the contribution of each predictor variable, weighted by the variance explained in the responses.
• Residuals
X residuals: $X_r = X_0 - TP^T$
Y residuals: $Y_r = Y_0 - TQ^T$
When variables are scaled, residuals should be scaled back.
• Distances
Distance to the X model for the ith observation:
$\text{Dist}_x = \sqrt{ \sum_{j=1}^m X_{rij}^2 }$
Distance to the Y model for the ith observation:
$\text{Dist}_y = \sqrt{ \sum_{j=1}^r Y_{rij}^2 }$
• T Square
T square for the ith observation:
$T^2=\sum_{j=1}^k \frac{T_{ij}^2}{\text{Var}_j}$
where $\text{Var}_j$ is the variance of the X scores for the jth factor.
• Control Limit for T Square
• Radius for Confidence Ellipse in Scores Plot
$\sqrt{(n-1)^2/n \cdot \text{betainv}(0.95,1,(n-3)/2.0) \cdot \text{Var}_j}$
where $\text{Var}_j$ is the variance of the X scores or Y scores for the jth factor.
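As a companion to the formulas above, here is a small sketch (again illustrative, not Origin's code) that forms the coefficient matrix C = W(P^T W)^{-1} Q^T and predicts responses for new observations, assuming the variables were only centered (if they were also scaled, the scaling would have to be undone as described above):

    import numpy as np

    def pls_predict(X_new, x_mean, y_mean, W, P, Q):
        """Predict responses with the fitted PLS factors.
        W, P (m x k) and Q (r x k) come from the extraction sketch above."""
        C = W @ np.linalg.inv(P.T @ W) @ Q.T   # m x r coefficient matrix
        X0 = X_new - x_mean                    # center new data with training means
        return X0 @ C + y_mean                 # predicted responses, means added back

The k-by-k matrix P^T W is small, so inverting it is cheap; the cost of prediction is essentially one matrix product per new data set.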
{"url":"https://d2mvzyuse3lwjc.cloudfront.net/doc/Origin-Help/PLS-Algorithm","timestamp":"2024-11-03T07:22:27Z","content_type":"text/html","content_length":"160478","record_id":"<urn:uuid:10fb932c-d114-4db6-9dbb-27db5c1e86c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00235.warc.gz"}
Math 10 AW: April 23 ~ 4.8 Area of Irregular Polygons
Today’s lesson was tricky for sure. Let’s break it down into smaller, manageable steps. First, we need some sort of quadrilateral (a four-sided, irregular shape).
Steps to solve:
1. Using a ruler, draw a line from one vertex to the opposite vertex to split the shape into 2 triangles.
2. Split each triangle into 2 by drawing a perpendicular line from the remaining vertex to the line you drew in step 1.
3. Measure each side of the 4 right triangles using a ruler.
4. Calculate the area of each right triangle.
5. Add all the areas together.
Homework is on pg. 111, Q 1.
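To check the arithmetic from the steps above, here is a quick illustrative calculation (the measurements are made up for this example):

    # The diagonal splits the quadrilateral into two triangles; the perpendicular
    # from the remaining vertex splits each of those into two right triangles.
    # Triangle 1: height 4 cm, base segments 3 cm and 7 cm along the diagonal.
    # Triangle 2: height 6 cm, base segments 2 cm and 8 cm along the diagonal.
    areas = [
        0.5 * 3 * 4,   # right triangle 1a
        0.5 * 7 * 4,   # right triangle 1b
        0.5 * 2 * 6,   # right triangle 2a
        0.5 * 8 * 6,   # right triangle 2b
    ]
    print(sum(areas))  # 50.0 square centimetres in total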
{"url":"https://mrsdildy.com/math-10-aw-april-23-4-8-area-of-irregular-polygons/","timestamp":"2024-11-07T09:54:24Z","content_type":"text/html","content_length":"28190","record_id":"<urn:uuid:c6d79ed8-04bb-4458-ab4a-ded910ebefe5>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00591.warc.gz"}
Generating images with test objects or functions void dip::GaussianEdgeClip(dip::Image const& in, dip::Image& out, dip::Image::Pixel const& value = {1}, dip::dfloat sigma = 1.0, dip::dfloat truncation = 3.0) Maps input values through an error function, can be used to generate arbitrary band-limited objects. void dip::GaussianLineClip(dip::Image const& in, dip::Image& out, dip::Image::Pixel const& value = {1}, dip::dfloat sigma = 1.0, dip::dfloat truncation = 3.0) Maps input values through a Gaussian function, can be used to generate arbitrary band-limited lines. void dip::FillDelta(dip::Image& out, dip::String const& origin = "") Fills an image with a delta function. void dip::CreateDelta(dip::Image& out, dip::UnsignedArray const& sizes, dip::String const& origin = "") Creates a delta function image. auto dip::CreateDelta(dip::UnsignedArray const& sizes, dip::String const& origin = "") -> dip::Image Overload for the function above, which takes image sizes instead of an image. void dip::CreateGauss(dip::Image& out, dip::FloatArray const& sigmas, dip::UnsignedArray derivativeOrder = {0}, dip::dfloat truncation = 3.0, dip::UnsignedArray exponents = {0}) Creates a Gaussian kernel. void dip::CreateGabor(dip::Image& out, dip::FloatArray const& sigmas, dip::FloatArray const& frequencies, dip::dfloat truncation = 3.0) Creates a Gabor kernel. void dip::FTEllipsoid(dip::Image& out, dip::FloatArray radius = {1}, dip::dfloat amplitude = 1) Generates the Fourier transform of an ellipsoid. auto dip::FTEllipsoid(dip::UnsignedArray const& sizes, dip::FloatArray radius = {1}, dip::dfloat amplitude = 1) -> dip::Image Overload for the function above, which takes image sizes instead of an image. void dip::FTBox(dip::Image& out, dip::FloatArray length = {1}, dip::dfloat amplitude = 1) Generates the Fourier transform of a box. auto dip::FTBox(dip::UnsignedArray const& sizes, dip::FloatArray length = {1}, dip::dfloat amplitude = 1) -> dip::Image Overload for the function above, which takes image sizes instead of an image. void dip::FTCross(dip::Image& out, dip::FloatArray length = {1}, dip::dfloat amplitude = 1) Generates the Fourier transform of a cross. auto dip::FTCross(dip::UnsignedArray const& sizes, dip::FloatArray length = {1}, dip::dfloat amplitude = 1) -> dip::Image Overload for the function above, which takes image sizes instead of an image. void dip::FTGaussian(dip::Image& out, dip::FloatArray sigma, dip::dfloat amplitude = 1, dip::dfloat truncation = 3) Generates the Fourier transform of a Gaussian. auto dip::FTGaussian(dip::UnsignedArray const& sizes, dip::FloatArray sigma, dip::dfloat amplitude = 1, dip::dfloat truncation = 3) -> dip::Image Overload for the function above, which takes image sizes instead of an image. void dip::TestObject(dip::Image& out, dip::TestObjectParams const& params, dip::Random& random) Generates a test object according to params. auto dip::TestObject(dip::UnsignedArray const& sizes, dip::TestObjectParams const& params, dip::Random& random) -> dip::Image Overload for the function above, which takes image sizes instead of an image. void dip::TestObject(dip::Image& out, dip::TestObjectParams const& params = {}) Calls the main dip::TestObject function with a default-initialized dip::Random object. auto dip::TestObject(dip::UnsignedArray const& sizes = {256,256}, dip::TestObjectParams const& params = {}) -> dip::Image Overload for the function above, which takes image sizes instead of an image. 
void dip::FillPoissonPointProcess(dip::Image& out, dip::Random& random, dip::dfloat density = 0.01) Fills the binary image out with a Poisson point process of density. void dip::CreatePoissonPointProcess(dip::Image& out, dip::UnsignedArray const& sizes, dip::Random& random, dip::dfloat density = 0.01) Creates a binary image with a Poisson point process of density. void dip::FillRandomGrid(dip::Image& out, dip::Random& random, dip::dfloat density = 0.01, dip::String const& type = S::RECTANGULAR, dip::String const& mode = S::TRANSLATION) Fills the binary image out with a grid that is randomly placed over the image. void dip::CreateRandomGrid(dip::Image& out, dip::UnsignedArray const& sizes, dip::Random& random, dip::dfloat density = 0.01, dip::String const& type = S::RECTANGULAR, dip::String const& mode = Creates a binary image with a random grid. Class documentation Describes the parameters for a test object, used by dip::TestObject. dip::String objectShape Can be "ellipsoid", "ellipsoid shell", "box", "box shell", or "custom". dip::FloatArray objectSizes Sizes of the object along each dimension. dip::dfloat objectAmplitude Brightness of object pixels. bool randomShift If true, add a random sub-pixel shift in the range [-0.5,0.5]. dip::String generationMethod Can be "gaussian" (spatial domain method) or "fourier" (frequency domain method). dip::dfloat modulationDepth Strength of modulation, if 0 no modulation is applied. dip::FloatArray modulationFrequency Frequency of a sine modulation added to the object, units are periods/pixel. dip::String pointSpreadFunction PSF, can be "gaussian", "incoherent", or "none". dip::dfloat oversampling Determines size of PSF (Gaussian PSF has sigma = 0.9*oversampling). dip::dfloat backgroundValue Background intensity, must be non-negative. dip::dfloat signalNoiseRatio SNR = average object energy divided by average noise power. If SNR > 0, adds a mixture of Gaussian and Poisson noise. dip::dfloat gaussianNoise Relative amount of Gaussian noise. dip::dfloat poissonNoise Relative amount of Poisson noise. Function documentation Maps input values through an error function, can be used to generate arbitrary band-limited objects. in is a scalar, real-valued function whose zero level set represents the edges of an object. The function indicates the Euclidean distance to these edges, with positive values inside the object. out will have a value of value inside the object, zero outside the object, and a Gaussian profile in the transition. If sigma is larger or equal to about 0.8, and the input image is well formed, the output will be approximately bandlimited. The error function mapping is computed in band around the zero crossings where the input image has values smaller than sigma * truncation. If value has more than one element, the output will be a tensor image with the same number of elements. The following example draws a band-limited cross, where the horizontal and vertical bars both have 20.5 pixels width, and different sub-pixel shifts. The foreground has a value of 255, and the background of 0. dip::UnsignedArray outSize{ 256, 256 }; dip::Image xx = 20.5 - dip::Abs( dip::CreateXCoordinate( outSize ) + 21.3 ); dip::Image yy = 20.5 - dip::Abs( dip::CreateYCoordinate( outSize ) - 7.8 ); dip::Image cross = dip::GaussianEdgeClip( dip::Supremum( xx, yy ), { 255 } ); Maps input values through a Gaussian function, can be used to generate arbitrary band-limited lines. in is a scalar, real-valued function whose zero level set represents the lines to be drawn. 
The function indicates the Euclidean distance to these edges. out will have lines with a Gaussian profile and a weight of value (the integral perpendicular to the line is value), and a value of zero away from the lines. If sigma is larger or equal to about 0.8, and the input image is well formed, the output will be approximately bandlimited. The Gaussian function mapping is computed in band around the zero crossings where the input image has values smaller than sigma * truncation. If value has more than one element, the output will be a tensor image with the same number of elements. The following example draws a band-limited cross outline, where the horizontal and vertical bars both have 20.5 pixels width, and different sub-pixel shifts. The lines have a weight of 1500, and the background has a value of 0. dip::UnsignedArray outSize{ 256, 256 }; dip::Image xx = 20.5 - dip::Abs( dip::CreateXCoordinate( outSize ) + 21.3 ); dip::Image yy = 20.5 - dip::Abs( dip::CreateYCoordinate( outSize ) - 7.8 ); dip::Image cross = dip::GaussianLineClip( dip::Supremum( xx, yy ), { 1500 } ); Fills an image with a delta function. All pixels will be zero except at the origin, where it will be 1. out must be forged, and scalar. origin specifies where the origin lies: • "right": The origin is on the pixel right of the center (at integer division result of size/2). This is the default. • "left": The origin is on the pixel left of the center (at integer division result of (size-1)/2). • "corner": The origin is on the first pixel. This is the default if no other option is given. Creates a delta function image. All pixels will be zero except at the origin, where it will be 1. out will be of size sizes, scalar, and of type dip::DT_SFLOAT. See dip::FillDelta for the meaning of origin. Creates a Gaussian kernel. out is reforged to the required size to hold the kernel. These sizes are always odd. sigmas determines the number of dimensions. order and exponents will be adjusted if necessary to match. derivativeOrder is the derivative order, and can be a value between 0 and 3 for each dimension. If derivativeOrder is 0, the size of the kernel is given by 2 * std::ceil( truncation * sigma ) + 1. The default value for truncation is 3, which assures a good approximation of the Gaussian kernel without unnecessary expense. For derivatives, the value of truncation is increased by 0.5 * derivativeOrder. Truncation is limited to avoid unusefully small values. By setting exponents to a positive value for each dimension, the created kernel will be multiplied by the coordinates to the power of exponents. Creates a Gabor kernel. out is reforged to the required size to hold the kernel. These sizes are always odd. sigmas determines the number of dimensions. frequencies must have the same number of elements as sigmas. Frequencies are in the range [0, 0.5), with 0.5 being the frequency corresponding to a period of the size of the image. The size of the kernel is given by 2 * std::ceil( truncation * sigma ) + 1. The default value for truncation is 3, which assures a good approximation of the kernel without unnecessary expense. Truncation is limited to avoid unusefully small values. Generates the Fourier transform of an ellipsoid. The length of the axes of the ellipsoid are specified through radius, which indicates the half-length of the axes along each dimension. amplitude specifies the brightness of the ellipsoid. The function is defined for images between 1 and 3 dimensions. out must be forged, scalar, and of a floating-point type. 
Generates the Fourier transform of a box. The length of the sides of the box are specified through length, which indicates the half-length of the sides along each dimension. amplitude specifies the brightness of the box. out must be forged, scalar, and of a floating-point type. Generates the Fourier transform of a cross. The length of the sides of the cross are specified through length, which indicates the half-length of the sides along each dimension. amplitude specifies the brightness of the cross. out must be forged, scalar, and of a floating-point type. Generates the Fourier transform of a Gaussian. The size of the Gaussian is specified with sigma (note that the Fourier transform of a Gaussian is also a Gaussian). volume is the integral of the Gaussian in the spatial domain. out must be forged, scalar, and of a floating-point type. Generates a test object according to params. Generates a test object in the center of out, which must be forged, scalar and of a floating-point type. The test object can optionally be modulated using a sine function, blurred, and have noise params describes how the object is generated: • params.generationMethod can be one of: □ "gaussian": creates the shape directly in the spatial domain, the shape will have Gaussian edges with a sigma of 0.9. □ "fourier": creates the shape in the frequency domain, the shape will be truly bandlimited. • params.objectShape can be one of: □ "ellipsoid" or "ellipsoid shell": the shape is drawn with dip::DrawBandlimitedBall or dip::FTEllipsoid, depending on the generation method. In the case of "gaussian" (spatial-domain generation), the shape must be isotropic (have same sizes in all dimensions). In the case of "fourier", the image cannot have more than three dimensions. □ "box" or "box shell": the shape is drawn with dip::DrawBandlimitedBox or dip::FTBox, depending on the generation method. □ "custom": out already contains a shape, which is used as-is. In the case that params.generationMethod is "gaussian", out is taken to be in the spatial domain, and in the case of "fourier", in the frequency domain. • params.objectSizes determines the extent of the object along each dimension. Must have either one element or as many elements as image dimensions in out. • params.objectAmplitude determines the brightness of the object. • params.randomShift, if true, shifts the object with a random sub-pixel shift in the range [-0.5,0.5]. This sub-pixel shift can be used to avoid bias due to digitization error over a sequence of generated objects. params also describes what effects are applied to the image: Modulation is an additive sine wave along each dimension, and is controlled by: • params.modulationDepth controls the strength of the modulation. If this value is zero, no modulation is applied. • params.modulationFrequency controls the frequency along each image axis. The units are number of periods per pixel, and hence values below 0.5 should be given to prevent aliasing. Blurring is controlled by: • params.pointSpreadFunction determines the point spread function (PSF) used. It can be "gaussian" for Gaussian blurring, "incoherent" for a 2D, in-focus, diffraction limited incoherent PSF (applied through Fourier domain filtering), or "none" for no blurring. • params.oversampling determines the size of the PSF. In the case of "gaussian", the sigma used for blurring is 0.9 * params.oversampling. In the case of "incoherent", this is the oversampling parameter passed to dip::IncoherentOTF. 
Noise is controlled by: • params.backgroundValue determines the background intensity added to the image. This is relevant for the Poisson noise. • params.signalNoiseRatio determines the signal to noise ratio (SNR), which we define as the average object energy divided by average noise power (i.e. not in dB). If the SNR is larger than 0, a mixture of Gaussian and Poisson noise is added to the whole image. • params.gaussianNoise determines the relative amount of Gaussian noise used. • params.poissonNoise determines the relative amount of Poisson noise used. The magnitude of these two quantities is not relevant, only their relative values are. If they are equal, the requested SNR is divided equally between the Gaussian and the Poisson noise. random is the random number generator used for both the sub-pixel shift and the noise added to the image. Fills the binary image out with a Poisson point process of density. out must be forged, binary and scalar. On average, one of every 1/density pixels will be set. Creates a binary image with a Poisson point process of density. out will be of size sizes, binary and scalar. Fills the binary image out with a grid that is randomly placed over the image. This grid can be useful for random systematic sampling. type determines the grid type. It can be "rectangular" in any number of dimensions, this is the default grid. For 2D images it can be "hexagonal". In 3D it can be "fcc" or "bcc" for face-centered cubic and body-centered cubic, respectively. density determines the grid density. On average, one of every 1/density pixels will be set. The grid is sampled equally densely along all dimensions. If the density doesn’t lead to an integer grid spacing, the grid locations will be rounded, leading to an uneven spacing. density must be such that the grid spacing is at least 2. Therefore, density must be smaller than , with the image dimensionality, in the rectangular case. In the hexagonal case, this is . mode determines how the random grid location is determined. It can be either "translation" or "rotation". In the first case, only a random translation is applied to the grid, it will be aligned with the image axes. In the second case, the grid will also be randomly rotated. This option is used only for 2D and 3D grids. out must be forged, binary and scalar. Creates a binary image with a random grid. out will be of size sizes, binary and scalar. See dip::FillRandomGrid for the meaning of the remainder of the parameters, which define the grid.
{"url":"https://diplib.org/diplib-docs/generation_test.html","timestamp":"2024-11-13T08:29:44Z","content_type":"text/html","content_length":"69112","record_id":"<urn:uuid:fe1114a6-3347-4db8-8d89-a31765451267>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00440.warc.gz"}
Carroll Morgan Carroll Morgan Emeritus Professor Research Interests Carroll's research is on formal methods, semantics, security, program correctness and probability. • Microkit Verification Career Summary Australian Professorial Fellow, UNSW Lecturer/Reader Oxford University UK • TS Group Papers (2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2011, 2009, 2008, 2007) Best Papers Carroll Morgan, Mário Alvim, Konstantinos Chatzikokolakis, Annabelle McIver, Catuscia Palamidessi and Geoffrey Smith Additive and multiplicative notions of leakage, and their capacities Computer Security Foundations, pp. 308–322, Vienna, Austria, July, 2014 Winner of the 2015 NSA Best Scientific Cybersecurity Paper Award Trustworthy Systems Group Papers Mário Alvim, Natasha Fernandes, Annabelle McIver, Carroll Morgan and Gabriel Nunes Flexible and scalable privacy assessment for very large datasets, with an application to official governmental microdata Proc. Priv. Enhancing Technol. 2022(4), pp. 378–99, 2022 Natasha Fernandes, Annabelle McIver and Carroll Morgan How to develop an intuition for risk... and other invisible phenomena (invited talk) Proc. Computer Science Logic 2022, pp. 2:1–2:14, 2022 Richard S. Bird, Jeremy Gibbons, Ralf Hinze, Peter Hoefner, Johan Jeuring, Lambert G. L. T. Meertens, Bernhard Möller, Carroll Morgan, Tom Schrijvers, Wouter Swierstra and Nicolas Wu Volume 600 in IFIP Advances in Information and Communication Technology. Springer, 2021 Natasha Fernandes, Annabelle McIver and Carroll Morgan The Laplace mechanism has optimal utility for differential privacy over continuous queries 36th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2021 Jeremy Gibbons, Annabelle McIver, Carroll Morgan and Tom Schrijvers Quantitative information flow with monads in Haskell Foundations of Probabilistic Programming, pp. 391–448, Cambridge University Press, 2020 Annabelle McIver and Carroll Morgan Correctness by construction for probabilistic programs Leveraging Applications of Formal Methods, Verification and Validation: Verification Principles—9th International Symposium on Leveraging Applications of Formal Methods, ISoLA 2020, Rhodes, Greece, October 20-30, 2020, Proceedings, Part I, pp. 216–239, 2020 Carroll Morgan, Annabelle McIver and Tahiry Rabehaja Abstract hidden markov models: A monadic account of quantitative information flow Mathematical Structures in Computer Science, Volume 15, Issue 1, pp. 36:1-36:50, March, 2019 Mário S. Alvim, Konstantinos Chatzikokolakis, Annabelle McIver, Carroll Morgan, Catuscia Palamidessi and Geoffrey Smith An axiomatization of information flow measures Theoretical Computer Science, Volume 777, pp. 32-54, 2019 Annabelle McIver and Carroll Morgan Proving that programs are differentially private Programming Languages and Systems, pp. 3–18, 2019 Tahiry Rabehaja, Annabelle McIver, Carroll Morgan and Georg Struth Categorical information flow The Art of Modelling Computational Systems: A Journey from Logic and Concurrency to Security and Privacy: Essays Dedicated to Catuscia Palamidessi on the Occasion of Her 60th Birthday, pp. 329–343, Springer International Publishing, 2019 Annabelle McIver, Carroll Morgan, Benjamin Kaminski and Joost-Pieter Katoen A new proof rule for almost-sure termination ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Los Angeles, January, 2018 Annabelle McIver, Tahiry Rabehaja, Roland Wen and Carroll Morgan Privacy in elections: How small is "small"? 
Journal of Information Security and Applications, Volume 36, Issue 1, pp. 112-126, October, 2017 Nicolas Bordenabe, Annabelle McIver, Carroll Morgan and Tahiry Rabehaja Reasoning about distributed secrets FORTE, pp. 156-170, Shanghai, June, 2017 Annabelle McIver, Carroll Morgan and Tahiry Rabehaja Algebra for quantitative information flow International Conference on Relational and Algebraic Methods in Computer Science, pp. 3–23, Lyon, France, May, 2017 Carroll Morgan A demonic lattice of information Concurrency, Security, and Puzzles — Essays Dedicated to Andrew William Roscoe on the Occasion of His 60th Birthday, pp. 203–222, Volume 10160 in Lecture Notes in Computer Science, Springer, 2017 June Andronick, Corey Lewis, Daniel Matichuk, Carroll Morgan and Christine Rizkallah Proof of OS scheduling behavior in the presence of interrupt-induced concurrency International Conference on Interactive Theorem Proving, pp. 52–68, Nancy, France, August, 2016 Mario Alvim, Kostantinos Chatzikokolakis, Annabelle McIver, Carroll Morgan, Catuscia Palamidessi and Geoffrey Smith Axioms for information leakage Computer Security Foundations, pp. 77-92, Lisbon, June, 2016 June Andronick, Corey Lewis and Carroll Morgan Controlled Owicki-Gries concurrency: reasoning about the preemptible eChronos embedded operating system Workshop on Models for Formal Analysis of Real Systems (MARS 2015), pp. 10–24, Suva, Fiji, November, 2015 Carroll Morgan, Annabelle McIver and Tahiry Rabehaja Abstract hidden Markov models: A monadic account of quantitative information flow Annual IEEE Symposium on Logic in Computer Science, pp. 597–608, Tokyo, Japan, July, 2015 Carroll Morgan a nondeterministic lattice of information One-hour invited talk at Mathematics of Program Construction, Königswinter, Germany, June, 2015 Annabelle McIver, Larissa Meinicke and Carroll Morgan Hidden-markov program algebra with iteration Mathematical Structures in Computer Science, Volume 25, Number 2, pp. 320–360, 2015 Yuxin Deng, Rob van Glabbeek, Matthew Hennessy and Carroll Morgan Real reward testing for probabilistic processes Theoretical Computer Science, Volume 538, pp. 16–36, July, 2014 Carroll Morgan, Mário Alvim, Konstantinos Chatzikokolakis, Annabelle McIver, Catuscia Palamidessi and Geoffrey Smith Additive and multiplicative notions of leakage, and their capacities Computer Security Foundations, pp. 308–322, Vienna, Austria, July, 2014 Winner of the 2015 NSA Best Scientific Cybersecurity Paper Award Carroll Morgan, Annabelle McIver, Geoffrey Smith, Barbara Espinoza and Larisa Meinicke Abstract channels and their robust information-leakage ordering Principles of Security and Trust (ETAPS), pp. 83–102, Grenoble, France, April, 2014 Yuxin Deng, Rob van Glabbeek, Matthew Hennessy and Carroll Morgan Real reward testing for probabilistic processes (extended abstract) Ninth Workshop on Quantitative Aspects of Programming Languages (QAPL 2011), pp. 61–73, Saarbrücken, Germany, July, 2011 Yuxin Deng, Rob van Glabbeek, Matthew Hennessy and Carroll Morgan Testing finitary probabilistic processes (extended abstract) International Conference on Concurrency Theory (CONCUR), pp. 274–288, Bologna, Italy, August, 2009 Yuxin Deng, Rob van Glabbeek, Matthew Hennessy and Carroll Morgan Characterising testing preorders for finite probabilistic processes Logical Methods in Computer Science, Volume 4, Number 4, pp. 
1–33, October, 2008 Yuxin Deng, Rob van Glabbeek, Matthew Hennessy, Carroll Morgan and Cuicui Zhang Characterising testing preorders for finite probabilistic processes Annual IEEE Symposium on Logic in Computer Science, pp. 313–322, Wroclaw, Poland, July, 2007 Yuxin Deng, Rob van Glabbeek, Matthew Hennessy, Carroll Morgan and Cuicui Zhang Remarks on testing probabilistic processes Electronic Notes in Theoretical Computer Science, Volume 172, Number , pp. 359–397, April, 2007 Yuxin Deng, Rob van Glabbeek, Carroll Morgan and Chenyi Zhang Scalar outcomes suffice for finitary probabilistic testing European Symposium on Programming, pp. 363–378, Braga, Portugal, March, 2007 Research Theses Supervised Rob Sison Proving confidentiality and its preservation under compilation for mixed-sensitivity concurrent programs PhD Thesis, UNSW, Sydney, Australia, October, 2020
{"url":"https://trustworthy.systems/people/?cn=Carroll+Morgan","timestamp":"2024-11-03T07:27:19Z","content_type":"text/html","content_length":"36450","record_id":"<urn:uuid:07f87ef8-57b3-4b04-9976-ad654811943b>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00031.warc.gz"}
how to find area of a triangle Title: How to Find the Area of a Triangle: A Comprehensive Guide 📏🔺Introduction:Have you ever wondered how to find the area of a triangle? Look no further! In this article, we will take you through everything you need to know about finding the area of a triangle, step-by-step. From the simple formula to more complex methods, we’ve got you covered. So, whether you are a student, a teacher, or just someone looking to refresh your memory, keep reading to learn how to find the area of a triangle.Subheadings:1. The Basics: Understanding the Formula 🔢2. Types of Triangles 📐3. Finding the Area of an Equilateral Triangle 🌀4. Finding the Area of a Right-Angled Triangle 📏🧱5. Heron’s Formula 🧮6. Using Trigonometry 📐7. Barycentric Coordinates 📊8. Calculation Examples 📝💻9. Practical Applications 🏗️🌉10. Common Mistakes to Avoid ❌🚫11. Additional Tips and Tricks 🤫💡12. Frequently Asked Questions 🙋♀️🙋♂️13. Conclusion: Time to Apply What You’ve Learned! 🎓💪The Basics: Understanding the Formula 🔢The formula for finding the area of a triangle is simple: A = 1/2 * b * h, where A is the area, b is the base of the triangle, and h is the height of the triangle. The base is the length of the side that is perpendicular to the height, and the height is the length of the line drawn from the base to the opposite vertex.Types of Triangles 📐Before we dive into the methods for finding the area of a triangle, let’s familiarize ourselves with the different types of triangles:- Equilateral triangles: all sides and angles are equal- Isosceles triangles: two sides and two angles are equal- Scalene triangles: no sides or angles are equal- Right-angled triangles: one angle is 90 degreesFinding the Area of an Equilateral Triangle 🌀An equilateral triangle has three sides and three angles that are equal. To find the area of an equilateral triangle, you can use the formula A = √3/4 * s^2, where A is the area and s is the length of one side.Finding the Area of a Right-Angled Triangle 📏🧱A right-angled triangle has one angle that is 90 degrees. To find the area of a right-angled triangle, you can use the formula A = 1/2 * base * height, where A is the area, the base is the length of the side that is perpendicular to the height, and the height is the length of the line drawn from the base to the opposite vertex.Heron’s Formula 🧮Heron’s formula is a more complex method for finding the area of a triangle. It uses the lengths of all three sides of the triangle to calculate the area. The formula is A = √s(s-a)(s-b)(s-c), where A is the area, s is the semi-perimeter (half the perimeter), and a, b, and c are the lengths of the sides.Using Trigonometry 📐Another method for finding the area of a triangle is to use trigonometry. You will need to use the sine function to find the height of the triangle, given one angle and the length of one side. Once you have the height, you can use the basic formula A = 1/2 * b * h to find the area.Barycentric Coordinates 📊Barycentric coordinates are another way of finding the area of a triangle. This method involves using the vertices of the triangle to calculate the area. It is a more complicated method, but it can be useful in certain situations.Calculation Examples 📝💻Let’s work through some examples to see how these formulas and methods are applied in practice. 
We’ll cover different types of triangles and show you how to find the area step-by-step.Practical Applications 🏗️🌉Finding the area of a triangle is a crucial skill in many fields, including architecture, engineering, and mathematics. It is used to calculate surface areas, volumes, and more. Understanding how to find the area of any given triangle is essential for tackling such problems.Common Mistakes to Avoid ❌🚫As with any formula or method, there are common mistakes that people make when finding the area of a triangle. We’ll cover some of these mistakes and show you how to avoid them.Additional Tips and Tricks 🤫💡We’ve shared some of the most common methods for finding the area of a triangle, but there are other tricks and tips that you can use to simplify the process. We’ll share some of these with you to help you find the area of a triangle more easily.Frequently Asked Questions 🙋♀️🙋♂️1. How do you find the area of an irregular triangle?2. How do you find the height of a triangle?3. How do you find the length of the base of a triangle?4. How do you find the area of a triangle with three sides?5. How do you find the area of a triangle with given angles?6. Can a triangle have a negative area?7. What is the unit of measurement for the area of a triangle?8. How is the area of a triangle related to its perimeter?9. How is the area of a triangle related to its sides and angles?10. What is the Pythagorean Theorem, and how is it used?11. How do you find the area of a right-angled triangle when only one side is given?12. How do you find the area of a triangle when only the perimeter is given?13. How do you find the area of a triangle when only two sides and an angle are given?Conclusion: Time to Apply What You’ve Learned! 🎓💪Congratulations, you’ve reached the end of our comprehensive guide to finding the area of a triangle! We’ve covered everything from the basics to complex methods, examples, and FAQs. Now it’s time to put what you’ve learned into practice. Whether you are a student, teacher, engineer, or just someone who wants to brush up their math skills, we hope this article has been helpful. Remember to use the formula and method that best suits your needs, and don’t be afraid to ask for help or find additional resources if needed.Closing or Disclaimer:This article is intended to be a comprehensive guide to finding the area of a triangle. However, it is not meant to replace expert advice or professional consultation. Always check your work carefully and seek help if you are unsure about any step in the process. We cannot be held responsible for any errors, losses, or damages that may result from the use of this article. Cuplikan video:how to find area of a triangle
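As a quick, illustrative check of the formulas above (not part of the original article), the base–height formula and Heron's formula can be computed in a few lines:

    import math

    def area_base_height(base, height):
        return 0.5 * base * height          # A = 1/2 * b * h

    def area_heron(a, b, c):
        s = (a + b + c) / 2                 # semi-perimeter
        return math.sqrt(s * (s - a) * (s - b) * (s - c))

    print(area_base_height(8, 5))           # 20.0
    print(area_heron(3, 4, 5))              # 6.0 -- matches 1/2 * 3 * 4 for this right triangle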
{"url":"https://iffaustralia.com/how-to-find-area-of-a-triangle","timestamp":"2024-11-03T20:14:04Z","content_type":"text/html","content_length":"46727","record_id":"<urn:uuid:64daaef2-fb19-494a-a1c3-abcd2b7a570e>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00034.warc.gz"}
Multiplication 4 5 6 Worksheets Mathematics, specifically multiplication, develops the foundation of countless scholastic techniques and real-world applications. Yet, for lots of learners, understanding multiplication can pose an obstacle. To resolve this obstacle, educators and moms and dads have actually embraced a powerful device: Multiplication 4 5 6 Worksheets. Intro to Multiplication 4 5 6 Worksheets Multiplication 4 5 6 Worksheets Multiplication 4 5 6 Worksheets - Learning the facts is easy as 1 2 3 4 5 6 Teaching the Times Tables Teach the times tables in no time Free Multiplication Worksheets Download and printout our FREE worksheets HOLIDAY WORKSHEETS Free Secret Word Puzzle Worksheets New Years Multiplication facts with 4 s 6 s Students multiply 4 or 6 times numbers up to 12 Worksheet 1 is a table of all multiplication facts with four or siz as a factor 4 6 times tables Worksheet 1 49 questions Worksheet 2 Worksheet 3 100 questions Worksheet 4 Worksheet 5 3 More Similar Multiply by 5 and 10 Multiply by 7 and 8 What is K5 Significance of Multiplication Practice Understanding multiplication is critical, laying a solid foundation for advanced mathematical principles. Multiplication 4 5 6 Worksheets supply structured and targeted practice, cultivating a much deeper understanding of this basic math procedure. Development of Multiplication 4 5 6 Worksheets Fun Times Table Worksheets 4 Worksheets Multiplication Multiplication Times Tables Times Fun Times Table Worksheets 4 Worksheets Multiplication Multiplication Times Tables Times Multiplication by 4s Here are some practice worksheets and activities for teaching only the 4s times tables Multiplication by 5s These games and worksheets focus on the number 5 as a factor Multiplication by 6s If you re reviewing the 6 times tables this page has some helpful resources Multiplication by 7s Multiplication Math Worksheets Math explained in easy language plus puzzles games quizzes videos and worksheets For K 12 kids teachers and parents Multiplication Worksheets Worksheets Multiplication Mixed Tables Worksheets Individual Table Worksheets Worksheet Online 2 Times 3 Times 4 Times 5 Times 6 Times 7 Times 8 Times 9 Times From standard pen-and-paper exercises to digitized interactive styles, Multiplication 4 5 6 Worksheets have actually progressed, dealing with varied discovering designs and preferences. Kinds Of Multiplication 4 5 6 Worksheets Fundamental Multiplication Sheets Easy exercises focusing on multiplication tables, assisting learners build a solid math base. Word Issue Worksheets Real-life scenarios integrated into problems, improving crucial reasoning and application abilities. Timed Multiplication Drills Tests developed to improve rate and accuracy, aiding in fast mental mathematics. 
Benefits of Using Multiplication 4 5 6 Worksheets Math Worksheets Grade 4 Multiplication Word Problems Kidsworksheetfun Math Worksheets Grade 4 Multiplication Word Problems Kidsworksheetfun Using Arrays to Multiply Look closely at each array illustration Then tell how many columns how many rows and how many dots are in each Then write the corresponding multiplication fact for each array 2nd through 4th Grades View PDF Multiplication with Bar Models Learn the multiplication tables in an interactive way with the free math multiplication learning games for 2rd 3th 4th and 5th grade The game element in the times tables games make it even more fun learn Practice your multiplication tables Here you can find additional information about practicing multiplication tables at primary school Enhanced Mathematical Abilities Consistent technique hones multiplication proficiency, improving overall math capabilities. Improved Problem-Solving Abilities Word problems in worksheets create analytical reasoning and technique application. Self-Paced Learning Advantages Worksheets suit private knowing rates, fostering a comfy and versatile learning atmosphere. Just How to Create Engaging Multiplication 4 5 6 Worksheets Including Visuals and Shades Lively visuals and shades record focus, making worksheets visually appealing and engaging. Consisting Of Real-Life Circumstances Relating multiplication to day-to-day circumstances adds significance and usefulness to workouts. Customizing Worksheets to Different Skill Levels Personalizing worksheets based on varying proficiency degrees makes certain comprehensive knowing. Interactive and Online Multiplication Resources Digital Multiplication Tools and Games Technology-based resources supply interactive knowing experiences, making multiplication interesting and enjoyable. Interactive Internet Sites and Apps On the internet platforms give diverse and easily accessible multiplication practice, supplementing standard worksheets. Tailoring Worksheets for Different Discovering Styles Visual Students Visual aids and layouts aid comprehension for learners inclined toward visual knowing. Auditory Learners Spoken multiplication problems or mnemonics satisfy learners who grasp ideas with auditory methods. Kinesthetic Students Hands-on activities and manipulatives sustain kinesthetic students in understanding multiplication. Tips for Effective Implementation in Learning Uniformity in Practice Routine technique enhances multiplication skills, promoting retention and fluency. Stabilizing Rep and Selection A mix of repetitive workouts and varied issue styles keeps passion and understanding. Giving Useful Comments Feedback help in determining locations of improvement, motivating continued development. Obstacles in Multiplication Technique and Solutions Motivation and Involvement Difficulties Tedious drills can lead to disinterest; ingenious approaches can reignite inspiration. Getting Rid Of Concern of Mathematics Negative assumptions around mathematics can hinder progression; producing a positive knowing environment is crucial. Impact of Multiplication 4 5 6 Worksheets on Academic Efficiency Research Studies and Study Searchings For Study suggests a positive connection between consistent worksheet use and boosted mathematics performance. Final thought Multiplication 4 5 6 Worksheets emerge as flexible tools, fostering mathematical effectiveness in students while accommodating diverse knowing designs. 
From fundamental drills to interactive on the internet resources, these worksheets not only improve multiplication abilities however additionally advertise vital thinking and problem-solving abilities. 7th Grade Math Worksheets Multiplication Times Tables Worksheets Pin On Multiplication worksheets Ideas For Kids Check more of Multiplication 4 5 6 Worksheets below 4th Grade Multiplication Practice Worksheets Free Printable Multiplication Tables Check MTC Worksheets 1 5 Times Tables Worksheets Pdf Printable 7th Grade Math Multiplication Worksheets Free Printable Free Multiplication Worksheets 4 Times Tables Review Home Decor Multiplication Grade 2 Math Worksheets Multiply by 4 and 6 worksheets K5 Learning Multiplication facts with 4 s 6 s Students multiply 4 or 6 times numbers up to 12 Worksheet 1 is a table of all multiplication facts with four or siz as a factor 4 6 times tables Worksheet 1 49 questions Worksheet 2 Worksheet 3 100 questions Worksheet 4 Worksheet 5 3 More Similar Multiply by 5 and 10 Multiply by 7 and 8 What is K5 Multiplication Worksheets K5 Learning Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets Multiplication facts with 4 s 6 s Students multiply 4 or 6 times numbers up to 12 Worksheet 1 is a table of all multiplication facts with four or siz as a factor 4 6 times tables Worksheet 1 49 questions Worksheet 2 Worksheet 3 100 questions Worksheet 4 Worksheet 5 3 More Similar Multiply by 5 and 10 Multiply by 7 and 8 What is K5 Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets 7th Grade Math Multiplication Worksheets Free Printable Multiplication Tables Check MTC Worksheets Free Multiplication Worksheets 4 Times Tables Review Home Decor Multiplication Grade 2 Math Worksheets multiplication Worksheet For Kids Archives EduMonitor Commutative Property Of Multiplication Worksheets 2nd Grade Free Printable Commutative Property Of Multiplication Worksheets 2nd Grade Free Printable 4th Grade Multiplication Worksheets Best Coloring Pages For Kids 4th Grade Multiplication FAQs (Frequently Asked Questions). Are Multiplication 4 5 6 Worksheets appropriate for any age groups? Yes, worksheets can be tailored to different age and ability degrees, making them adaptable for different learners. Exactly how frequently should pupils practice making use of Multiplication 4 5 6 Worksheets? Regular practice is vital. Normal sessions, ideally a couple of times a week, can generate considerable renovation. Can worksheets alone enhance math skills? Worksheets are a beneficial tool yet should be supplemented with different discovering methods for comprehensive ability advancement. Exist on the internet systems providing free Multiplication 4 5 6 Worksheets? Yes, numerous educational web sites provide open door to a wide variety of Multiplication 4 5 6 Worksheets. Exactly how can moms and dads support their kids's multiplication technique in the house? 
Urging consistent technique, supplying aid, and producing a favorable understanding setting are beneficial steps.
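As one small, hypothetical illustration of the worksheet-creation idea discussed above (not connected to any of the products or sites named in this article), a short script can generate a quick times-table drill:

    import random

    def multiplication_drill(table, questions=10, max_factor=12):
        """Print a simple practice drill for one times table; the layout and
        ranges here are arbitrary choices for illustration."""
        for _ in range(questions):
            factor = random.randint(1, max_factor)
            print(f"{table} x {factor} = ____    (answer: {table * factor})")

    multiplication_drill(7)   # e.g. a ten-question drill for the 7 times table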
{"url":"https://crown-darts.com/en/multiplication-4-5-6-worksheets.html","timestamp":"2024-11-12T06:08:48Z","content_type":"text/html","content_length":"28755","record_id":"<urn:uuid:a6ca7d4f-e69f-4a9c-b128-98b8f5891cfd>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00281.warc.gz"}
Vector Sum Theorems Vector Sum Theorems We will now look at two very important theorems regarding the sum of vector subspaces. The first theorem will tell us that the sum of any subspaces of $V$ will result in a subspace of $V$, and this sum will contain all of the subspaces in its sum. The second theorem will tell us that $(U_1 + U_2)$ is the smallest subspace containing both $U_1$ and $U_2$. Theorem 1: Let $V$ be a vector space over the field $\mathbb{F}$ and let $U_1, U_2$ be subspaces of $V$. Then $(U_1 + U_2)$ is a subspace of $V$, and $(U_1 + U_2)$ contains $U_1$ and $U_2$. • Proof: We first note that $(U_1 + U_2) \subseteq V$ since all vectors in $U_1$ are in $V$ and all vectors in $U_2$ are in $V$. We want to show that $(U_1 + U_2)$ is a subspace of $V$ by verifying that $0 \in (U_1 + U_2)$, and that $(U_1 + U_2)$ is closed under addition and scalar multiplication. • First, we have that $0 \in U_1$ and $0 \in U_2$ since all vector subspaces must contain the zero vector. Therefore $0 = \underbrace{0}_{0 \in U_1} + \underbrace{0}_{0 \in U_2}$ and so since $0$ can be written as a sum of a vector from $U_1$ and a vector from $U_2$, it follows that $0 \in (U_1 + U_2)$. • Let $x, y \in (U_1 + U_2)$. Then it follows that there exists vectors $x_1, y_1 \in U_1$ and $x_2, y_2 \in U_2$ such that $x = \underbrace{x_1}_{x_1 \in U_1} + \underbrace{x_2}_{x_2 \in U_2}$ and $y = \underbrace{y_1}_{y_1 \in U_1} + \underbrace{y_2}_{y_2 \in U_2}$. Therefore $x + y = (x_1 + x_2) + (y_1 + y_2) = \underbrace{(x_1 + y_1)}_{(x_1 + y_1) \in U_1} + \underbrace{(x_2 + y_2)}_ {(x_2 + y_2) \in U_2}$, and so therefore $x + y$ can be written as the sum of one vector in $U_1$ and one vector in $U_2$ and so $(x + y) \in (U_1 + U_2)$, or in other words, $(U_1 + U_2)$ is closed under addition. • Let $x \in (U_1 + U_2)$. Then there exists a vector $x_1 \in U_1$ and a vector $x_2 \in U_2$ such that $x = \underbrace{x_1}_{x_1 \in U_1} + \underbrace{x_2}_{x_2 \in U_2}$. Let $a \in \mathbb{F} $. If we multiply both sides by $a$ we get that $ax = a(x_1 + x_2) = ax_1 + ax_2$. But $ax_1 \in U_1$ since $U_1$ is a subspace and closed under scalar multiplication. Similarly $ax_2 \in U_2$ since $U_2$ is a subspace and closed under scalar multiplication. So $ax = \underbrace{ax_1}_{ax_1 \in U_1} + \underbrace{ax_2}_{ax_2 \in U_2}$ can be written as the sum of a vector from $U_1$ and a vector from $U_2$ and so $ax \in (U_1 + U_2)$ or in other words, $(U_1 + U_2)$ is closed under scalar multiplication. • Therefore $(U_1 + U_2)$ is a vector subspace. • Lastly we need to show that $(U_1 + U_2)$ contains both $U_1$ and $U_2$, that is $U_1 \subseteq (U_1 + U_2)$ and $U_2 \subseteq (U_1 + U_2)$. Let $x \in U_1$. We have that $0 \in U_2$ by the definition of a vector space and so $x = \underbrace{x}_{x \in U_1} + \underbrace{0}_{0 \in U_2}$ and so $x$ can be written as the sum of a vector from $U_1$ and $U_2$ and so $x \in (U_1 + U_2)$. Therefore $U_1 \subseteq (U_1 + U_2)$. Similarly let $y \in U_2$. We have that $0 \in U_1$ by the definition of a vector space and so $y = \underbrace{0}_{0 \in U_1} + \underbrace{y}_{y \in U_2}$ and so $y$ can be written as the sum of a vector from $U_1$ and $U_2$ and so $y \in (U_1 + U_2)$. Therefore $U_2 \subseteq (U_1 + U_2)$. $\blacksquare$ Theorem 2: Let $V$ be a vector space over the field $\mathbb{F}$ and let $U_1$ and $U_2$ be subspaces of $V$. If $U$ is a subspace of $V$ such that $U_1 \subseteq U$ and $U_2 \subseteq U$ then $(U_1 + U_2) \subseteq U$. 
• Proof: Let $U$ be a subspace of $V$ such that $U_1 \subseteq U$ and $U_2 \subseteq U$. We want to show that $(U_1 + U_2) \subseteq U$.
• First let $x \in (U_1 + U_2)$. Then there exists an $x_1 \in U_1$ and an $x_2 \in U_2$ such that $x = \underbrace{x_1}_{x_1 \in U_1} + \underbrace{x_2}_{x_2 \in U_2}$.
• Now since $U_1 \subseteq U$ and $x_1 \in U_1$, it follows that $x_1 \in U$. Similarly, since $U_2 \subseteq U$ and $x_2 \in U_2$, it follows that $x_2 \in U$. Since $U$ is a subspace of $V$, it is closed under addition, and so $x_1 + x_2 = x \in U$.
• Therefore, since $x \in (U_1 + U_2)$ implies that $x \in U$, we have that $(U_1 + U_2) \subseteq U$. $\blacksquare$
Example 1
Suppose that $U_1$, $U_2$, and $U_3$ are subspaces of the vector space $V$ over the field $\mathbb{F}$, and suppose that $U_1 + U_2 = U_1 + U_3$. Is it true that then $U_2 = U_3$?
To understand this problem, we need a clear understanding of what a subspace sum is. We note that $(U_1 + U_2) := \{ u_1 + u_2 : u_1 \in U_1 \: \mathrm{and} \: u_2 \in U_2 \}$ and $(U_1 + U_3) := \{ u_1 + u_3 : u_1 \in U_1 \: \mathrm{and} \: u_3 \in U_3 \}$. In other words, if $(U_1 + U_2) = (U_1 + U_3)$, must it be that $U_2 = U_3$? The answer is no. Suppose that $V$ is a nonzero vector space, and let $U_1 = V$, $U_2 = V$, and $U_3 = \{ 0 \}$. Then we have that:
$$U_1 + U_2 = U_1 + U_3 \quad \text{becomes} \quad V + V = V + \{ 0 \}$$
Clearly $V + V = V + \{ 0 \}$, since any vector $x \in V$ can be written as $x = \underbrace{x_1}_{x_1 \in V} + \underbrace{x_2}_{x_2 \in V}$ and also as $x = \underbrace{x}_{x \in V} + \underbrace{0}_{0 \in \{ 0 \}}$. But $U_2 = V \neq \{ 0 \} = U_3$, since we asserted that $V$ is a nonzero vector space.
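As an added concrete illustration of Theorems 1 and 2 (this example is not from the original page), take $V = \mathbb{R}^3$, $U_1 = \{ (a, 0, 0) : a \in \mathbb{R} \}$, and $U_2 = \{ (0, b, 0) : b \in \mathbb{R} \}$. Then
$$U_1 + U_2 = \{ (a, b, 0) : a, b \in \mathbb{R} \},$$
the $xy$-plane. It is a subspace of $\mathbb{R}^3$ containing both axes, as Theorem 1 guarantees, and any subspace $U$ of $\mathbb{R}^3$ that contains both axes must contain every sum $(a, 0, 0) + (0, b, 0) = (a, b, 0)$, hence all of $U_1 + U_2$, which is exactly what Theorem 2 asserts.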
{"url":"http://mathonline.wikidot.com/vector-sum-theorems","timestamp":"2024-11-06T08:58:29Z","content_type":"application/xhtml+xml","content_length":"23259","record_id":"<urn:uuid:2aa9e531-bf21-496e-a23c-567abd09b6ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00034.warc.gz"}
Question | Amity BBA Solve Assignment For Business Economics Business Economics Case Study Suppose that Russ has budgeted $20 a month to buy candy bars, music downloads, or some combination of each. If Russ buys only candy bars he can obtain 40 bars a month; if he buys only downloads, he can buy 20 a month. Question 1: If an economy moves from producing 10 units of A and 4 units of B to producing 7 As and 5Bs, the opportunity cost of the 5th B is: a. 7As b. 10As c. 3As d. 1A Question 2: In a free market the combination of products produced will be determined by: a. Market forces of supply and demand b. The government c. The law d. The public sector Question 3: The basic economic problems will not be solved by: a. Market forces b. Government intervention c. A mixture of government intervention and the free market d. The creation of unlimited resources Question 4: The sacrifice involved when you choose a particular course of action is called the: a. Alternative b. Opportunity cost c. Consumer cost d. Producer cost Question 5: What is the opportunity cost of a candy bar? a. 0.6 b. 0.2 c. 0.5 d. 0.6 Question 6: What is the opportunity cost of a music download? a. 3 b. 2 c. 4 d. 5 Question 7: What is the price of a candy bar? a. 20 cents b. 50 cents c. 40 cents d. 10 cents Question 8: What is the price of a music download? a. $1 b. $3 c. $4 d. $5 Question 9: Which of the following is a normative statement in economics? a. More spending by the government will reduce poverty b. Higher taxes will lead to less desire to work c. The UK economy is growing fast relative to other European Union members d. The government should concentrate on reducing unemployment Question 10: Which one of the following is not one of the basic economic questions? a. What to produce b. Who to produce for c. How to produce d. How to minimize economic growth Case Study Suppose that the government of Zanzi decides that there is a need to reduce cigarette smoking in their country. The cigarette market in Zanzi can currently be described by the following demand and supply equations: Demand for cigarettes: Q = 1125 – 12.5P Supply of cigarettes: Q = 1100P – 1100 The government proposes implementing a quantity control of 500 units: this quantity control would limit the number of cigarettes that could be sold in Zanzi to exactly 500 units. The government has asked you to evaluate this program by answering the following series of questions. Question 1: Before implementing the quantity control, what is the equilibrium price of cigarettes in Zanzi? a. $4 b. $2 c. $3 d. $7 Question 2: Before implementing the quantity control, what is the equilibrium quantity of cigarettes in Zanzi? a. 700 cigarettes b. 900 cigarettes c. 1100 cigarettes d. 800 cigarettes Question 3: Before implementing the quantity control, what is the value of consumer surplus in the market for cigarettes in Zanzi? a. $48,400 b. $48,200 c. $43,400 d. $38,400 Question 4: Before implementing the quantity control, what is the value of producer surplus in the market for cigarettes in Zanzi? a. $220 b. $550 c. $5 d. $320 Question 5: Suppose the government implements the quantity control. What is the deadweight loss due to this program? a. $13, 656 b. $14,565 c. $12,343 d. $12,345 Question 6: Suppose the government implements the quantity control. What is the value of consumer surplus with this program? a. $6,000 b. $4,000 c. $10,000 d. $15,000 Question 7: Suppose the government implements the quantity control. What is the value of producer surplus with this program? a. 
Case Study
Consider the Beiswanger Company, a small firm engaged in engineering analysis. Beiswanger's president has estimated that the firm's output per month (Q) is related in the following way to the number of engineers (E) and technicians (T) used: Q = 20E – E^2 + 12T – 0.5T^2. The monthly wage of an engineer is $4,000, and the monthly wage of a technician is $2,000. The president allots $28,000 per month for the combined wages of engineers and technicians.

Question 1: By re-stating the firm's supply decision, we have the following: a. if at the best production level 'q*' price is greater than the average variable cost, then the firm should choose to produce 'q*' b. if at the best production level 'q*' price is greater than the average fixed cost, then the firm should choose to produce 'q*' c. if at the best production level 'q*' price is less than the average variable cost, then the firm should choose to produce 'q*' d. if at the best production level 'q*' price is greater than the marginal cost, then the firm should choose to produce 'q*'
Question 2: Diminishing marginal returns occur when: a. the opportunity cost of the extra output increases b. the opportunity cost of the extra output decreases c. output remains constant as more of a variable factor is added to a fixed factor, output initially increases, then peaks before finally declining d. output declines as more of a variable factor is added to a fixed factor, output initially increases, then peaks before finally declining
Question 3: If the president is to maximize output, he must choose a bundle of engineers and technicians such that: a. MP^e / P^e = MP^t / P^t b. MP^e / P^t = MP^t / P^e c. MP^t / P^e = MP^e / P^t d. None of the above
Question 4: The firm should consider a temporary shutdown when: a. if at output 'q*', price is greater than average variable cost b. if at output 'q*', price equals average variable cost c. if at output 'q*', price is less than average variable cost d. if at output 'q*', price is greater than marginal cost
Question 5: Total costs are: a. total fixed cost plus average variable costs b. total fixed costs plus total variable costs c. total average fixed costs plus total average variable costs d. total costs plus opportunity costs
Question 6: How many engineers should be hired? a. 2 b. 4 c. 6 d. 8
Question 7: How many technicians should be hired? a. 4 b. 2 c. 6 d. 8
Question 8: What will be the calculated MP^e? a. 10 – 2E b. 20 – 2E c. 30 – 3E d. 16 – 3E
Question 9: What will be the calculated MP^t? a. 12 – T b. 13 – T c. 22 – T d. 15 – T
Question 10: What will be the value of T? a. E – 2 (wrong answer) b. E + 2 c. E – 6 d. E – 4
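The Beiswanger staffing questions can be answered either from the equal-marginal-product-per-dollar condition or by brute force over the budget. The Python sketch below is an added illustration, not part of the assignment; the function and variable names are assumptions.

```python
# A brute-force check of the Beiswanger staffing problem (an added sketch;
# the production function and wages are taken from the case study text).

def output(E, T):
    # Monthly output as given in the case study.
    return 20 * E - E**2 + 12 * T - 0.5 * T**2

w_eng, w_tech, budget = 4000, 2000, 28000

best = max(
    ((output(E, T), E, T)
     for E in range(budget // w_eng + 1)
     for T in range((budget - w_eng * E) // w_tech + 1)),
    key=lambda item: item[0],
)
print(best)   # expect E = 4 engineers and T = 6 technicians

# The same answer follows from the condition MP_E / w_E = MP_T / w_T:
#   (20 - 2E) / 4000 = (12 - T) / 2000  ->  T = E + 2,
# combined with the budget line 4000E + 2000T = 28000, giving E = 4, T = 6.
```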
Case Study
Assume that two identical firms in a purely oligopolistic industry selling a homogeneous product agree to share the market equally. The total market demand function for the commodity is Qd = 240 – 10P. The cost schedules of the firms are given in the following table:

q1          40     50     60     80
SMC1 (Rs.)   8     10     12     16
SAC1 (Rs.)  13     12.3   12     13

q2          50     70     100
SMC2 (Rs.)   4      6      9
SAC2 (Rs.)   7      6      7

Question 1: Profits for this firm will be: a. Rs. 420 b. Rs. 130 (wrong) c. Rs. 350 d. Rs. 450
Question 2: When q1 = 40, what will be MR1? a. 2 b. 8 c. 5 d. 4
Question 3: When q1 = 40, what will be the profit maximising output for the first firm? a. 30 b. 60 c. 40 d. 20
Question 4: When q1 = 50, what will be MR1? a. 7 b. 2 c. 4 d. 3
Question 5: When q1 = 60, what will be MR1? a. 0 b. 2 c. 4 d. 6
Question 6: When q1 = 80, what will be MR1? a. 7 b. -4 c. 5 d. -8
Question 7: When q2 = 100, then MR2 will be: a. 16 b. 32 c. -32 d. -16
Question 8: When q2 = 50, the price at this level of output will be: a. 12 b. 14 c. 24 d. 32
Question 9: When q2 = 50, then MR2 will be: a. 2 b. 4 c. 5 d. 6
Question 10: When q2 = 70, then MR2 will be: a. 4 b. -9 c. -4 d. -5

Case Study
The president of the Martin Company is considering two alternative investments, X and Y. If each investment is carried out, there are four possible outcomes. The present value of net profit and the probability of each outcome follow:

Investment X                               Investment Y
Outcome  Net present value  Probability    Outcome  Net present value  Probability
1        $20 million        0.2            A        $12 million        0.1
2        $8 million         0.3            B        $9 million         0.3
3        $10 million        0.4            C        $6 million         0.1
4        $3 million         0.1            D        $11 million        0.5

Question 1: Risk management is the responsibility of the: a. Customer b. Investor c. Developer d. Project team
Question 2: The president of the Martin Company has the utility function U = 10 + 5P – 0.01P^2. Which investment should she choose? a. Investment X b. Neither investment X nor investment Y c. Investment Y d. None of the above
Question 3: What is the coefficient of variation of investment X? a. 37% b. 23% c. 47% d. 65%
Question 4: What is the coefficient of variation of investment Y? a. 37% b. 23% c. 47% d. 18%
Question 5: What is the expected present value of investment X? a. $6 million b. $10.7 million c. $4 million d. $11.09 million
Question 6: What is the expected present value of investment Y? a. $11 million b. $9 million c. $5 million d. $7 million
Question 7: What is the standard deviation of investment X? a. $5.06 million b. $6 million c. $5 million d. $4 million
Question 8: What is the standard deviation of investment Y? a. $5.06 million b. $1.95 million c. $5 million d. $4 million
Question 9: Which investment is riskier? a. Investment X b. Investment Y c. Investment X and Y both d. None of the above
Question 10: Which of the following techniques will ensure that the impact of risk is reduced? a. Risk avoidance technique b. Risk Mitigation technique c. Risk contingency technique d. All of the above
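The expected value, standard deviation, and coefficient of variation asked about in the Martin Company questions follow directly from the probability table. The Python sketch below is an added illustration, not part of the assignment; it uses my reading of the reconstructed table, and the investment Y figures may not line up exactly with the answer choices offered.

```python
# Expected value, standard deviation and coefficient of variation for the
# Martin Company investments (an added sketch based on the table above).

from math import sqrt

investments = {
    "X": [(20, 0.2), (8, 0.3), (10, 0.4), (3, 0.1)],   # ($ million, probability)
    "Y": [(12, 0.1), (9, 0.3), (6, 0.1), (11, 0.5)],
}

for name, outcomes in investments.items():
    mean = sum(v * p for v, p in outcomes)
    var = sum(p * (v - mean) ** 2 for v, p in outcomes)
    sd = sqrt(var)
    print(f"{name}: expected value = {mean:.2f}, std dev = {sd:.2f}, CV = {sd / mean:.0%}")

# Investment X comes out at roughly 10.7, 5.06 and 47%, which matches the
# figures offered among the options for questions 3, 5 and 7.
```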
Case Study
The market for study desks is characterized by perfect competition. Firms and consumers are price takers and in the long run there is free entry and exit of firms in this industry. All firms are identical in terms of their technological capabilities. Thus the cost function given below for a representative firm can be assumed to be the cost function faced by each firm in the industry. The total cost and marginal cost functions for the representative firm are given by the following equations:
TC = 2qs^2 + 5qs + 50
MC = 4qs + 5
Suppose that the market demand is given by: PD = 1025 – 2QD
Note: Q represents market values and q represents firm values. The two are different.

Question 1: At the new long-run equilibrium, how many firms will be in the industry? a. 32 b. 45 c. 150 d. 230
Question 2: At the new long-run equilibrium, what will be the output of each representative firm in the industry? a. 4 b. 5 c. 3 d. 2
Question 3: Determine the equation for average total cost for the firm: a. 2qs + 2 + 50/qs b. 2qs + 5 + 50/qs c. 3qs + 5 + 50/qs d. None of the above
Question 4: Determine the market quantity Q from the market demand curve, given that we know the above calculated market price. a. 23 b. 504 c. 34 d. 89
Question 5: In the long run, given this technological advance, how many firms will there be in the industry? a. 34 b. 84 c. 32 d. 56
Question 6: Now suppose that the number of students increases such that the market demand curve for study desks shifts out and is given by PD = 1525 – 2QD. What will be the new long-run equilibrium price in this industry? a. 25 b. 34 c. 23 d. 45
Question 7: Now consider another scenario where a technological advance changes the cost functions of each representative firm. The market demand is still the original one (before the increase in the number of students). The new cost functions are TC = qs^2 + 5qs + 36 and MC = 2qs + 5. What will be the new equilibrium price? a. 17 b. 4 c. 2 d. 6
Question 8: What is the long-run equilibrium price in this market? a. 12 b. 14 c. 25 d. None of the above
Question 9: What is the long-run output of each representative firm in this industry? a. 5 b. 3 c. 6 d. 7
Question 10: When this industry is in long-run equilibrium, how many firms are in the industry? a. 3 b. 80 c. 40 d. 100
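All of the long-run questions for the study-desk market follow the same recipe: each firm ends up producing at minimum average total cost, the market price equals that minimum, and the number of firms is the market quantity at that price divided by each firm's output. The rough Python sketch below is an added illustration, not part of the assignment; the helper name and the grid-search shortcut are assumptions.

```python
# Long-run perfectly competitive equilibrium for the study-desk market
# (an added sketch using the cost and demand curves given in the case study).
# In the long run each firm produces at minimum ATC, price equals that
# minimum, and the number of firms is market quantity / firm output.

def long_run(atc, demand_intercept, demand_slope):
    # Scan a fine grid for the ATC-minimising output; good enough here,
    # since the minima fall exactly on the grid.
    qs = [q / 100 for q in range(1, 5001)]
    q_star = min(qs, key=atc)
    price = atc(q_star)                               # long-run price = minimum ATC
    market_q = (demand_intercept - price) / demand_slope
    return q_star, price, market_q, market_q / q_star

# Original technology, original demand: TC = 2q^2 + 5q + 50, P = 1025 - 2Q.
print(long_run(lambda q: 2 * q + 5 + 50 / q, 1025, 2))   # about (5, 25, 500, 100)
# Larger market after the increase in students: P = 1525 - 2Q.
print(long_run(lambda q: 2 * q + 5 + 50 / q, 1525, 2))   # about (5, 25, 750, 150)
# Improved technology, original demand: TC = q^2 + 5q + 36.
print(long_run(lambda q: q + 5 + 36 / q, 1025, 2))       # about (6, 17, 504, 84)
```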
{"url":"https://guidetechs.com/question/amity-bba-solve-assignment-for-business-economics","timestamp":"2024-11-04T12:29:42Z","content_type":"text/html","content_length":"69882","record_id":"<urn:uuid:510b539a-8880-4fda-8e23-2e6ad3f1175d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00558.warc.gz"}