Raggy's Inc. offers an annuity due with quarterly payments for 25 years at 8 percent interest. The annuity costs $200,000 today. What is the amount of each annuity payment?

Information provided:
Present value = $200,000
Time = 25 years × 4 = 100 quarters
Quarterly interest rate = 8% / 4 = 2%

This is solved using a financial calculator. By default, the calculator is set to end mode; an annuity due is calculated by switching to beginning mode (BGN). To do this on the Texas BA II Plus calculator, press 2nd [BGN] 2nd [SET]. The amount of the annuity payment is then calculated by entering the following in BGN mode:
PV = -200,000
N = 100
I/Y = 2
Press the CPT key and then PMT to compute the amount of the annuity payment. The value obtained is 4,549.56. Therefore, the amount of each annuity payment is $4,549.56.
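As a cross-check, here is a minimal Python sketch (my addition, not part of the original answer) that computes the annuity-due payment from the standard present-value formula; the function name and structure are illustrative only.

```python
def annuity_due_payment(pv, rate, n):
    """Payment of an annuity due: PV = PMT * [(1 - (1+r)**-n) / r] * (1+r)."""
    ordinary_factor = (1 - (1 + rate) ** -n) / rate
    return pv / (ordinary_factor * (1 + rate))

# 25 years of quarterly payments at 8% nominal annual interest
pmt = annuity_due_payment(pv=200_000, rate=0.08 / 4, n=25 * 4)
print(f"{pmt:.2f}")  # -> 4549.56
```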
{"url":"https://justaaa.com/finance/16535-raggys-inc-offers-an-annuity-due-with-quarterly","timestamp":"2024-11-12T09:13:33Z","content_type":"text/html","content_length":"41312","record_id":"<urn:uuid:d3722dd8-a2cc-48a0-95cb-fea2a33a444c>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00096.warc.gz"}
Three classical approaches to hypothesis testing

Here I introduce three classical approaches to hypothesis testing in the frequentist approach of statistics: the likelihood-ratio test, the Lagrange multiplier test (or the score test), and the Wald test.

The likelihood-ratio test

The oldest is the likelihood-ratio test, sometimes abbreviated as 'LRT'. It compares the goodness of fit of two statistical models being considered by the ratio of their likelihoods, found either by maximization over the entire parameter space or by imposing certain constraints. The constraints, for instance, can be expressed as the null hypothesis. If the constraints are supported by the data, the two likelihoods should not differ by more than sampling error. Therefore, the likelihood-ratio test tests whether the ratio of the two likelihood values differs significantly from one (equivalently, whether its logarithm differs significantly from zero).

The Neyman-Pearson lemma states that when we compare two models, each of which has no unknown parameters, the likelihood-ratio test is the most powerful test among all statistical tests. The likelihood-ratio test is a standard statistical test for comparing nested models. Two models are nested if one model contains all the terms of the other, and at least one additional term. By simulation, it is also possible to use the test for non-nested models. For details, see Lewis, Butler, and Gilbert, 2010.

Besides the likelihood-ratio test, the other two approaches to hypothesis testing are the score test (also known as the 'Lagrange multiplier test') and the Wald test, which is named after the Hungarian mathematician Abraham Wald, also known for his work on survivorship bias.

The score test

The score test evaluates constraints on the parameter to be estimated based on the gradient of the likelihood function with respect to the parameter, known as the score, evaluated at the hypothesized parameter value under the null hypothesis. If the estimator is near the maximum of the likelihood function, then the score should not differ from zero by more than sampling error.

The Wald test

The Wald test in essence is based on the weighted distance between the estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate, namely the reciprocal of the variance. So its form closely resembles that of a t-test. Indeed, when testing a single parameter, the square root of the Wald statistic can be understood as a (pseudo) t-ratio, which is, however, not actually t-distributed except in the special case of linear regression.

The relationship between the three methods and their applications

The three approaches are asymptotically equivalent (their test statistics all converge to the same chi-square distribution), though with finite samples the results can differ strongly between the methods. In bioinformatics and computational biology, these methods are used in many settings, for instance estimating differential gene expression in bulk and single-cell RNA sequencing studies.

Inference and model selection in Bayesian statistics

In Bayesian statistics, suppose that we have a prior distribution of a parameter, which can encode our intuition or prior knowledge, and some data associated with that parameter, and we wish to estimate (infer) the value of the parameter given the prior distribution and the data. One can use the mode of the posterior distribution, which is the distribution of the parameter updated from the prior by the data, as the best guess of the parameter.
The mode of the posterior distribution is known as the maximum a posteriori probability estimate, or simply the MAP estimate. Hypothesis testing is then addressed by quantifying the credible interval of the MAP estimate with respect to the hypothesis. If we have two hypotheses, we can choose the one with the higher posterior probability. This is known as the MAP hypothesis test. See the course website of Introduction to Probability, Statistics and Random Processes for mathematical details and examples of the MAP hypothesis test. And see the blog post by Jonny Brooks-Bartlett, Probability concepts explained: Bayesian inference for parameter estimation, for graphical examples and more explanations.

But what happens if we have two competing hypotheses that differ strongly in complexity? Then we need to select among the models, following the principle of Occam's Razor: do not use a more complex model unless necessary. The most widely used information criteria for model selection are the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). They both reward models that maximize the likelihood, while penalizing complex models.

In the frequentist approach of statistics, one can use the likelihood-ratio test, the score test, or the Wald test to test one hypothesis against another. The likelihood-ratio test is used for model selection, either directly for nested models or via simulation for non-nested models. In the Bayesian approach, we often use MAP estimates and credible intervals to test a hypothesis. AIC and BIC use the likelihood together with penalties for model complexity to select the parsimonious model that fits the data best.
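As an illustration of how the likelihood-ratio test works for nested models, here is a minimal Python sketch (my addition, not from the original post); the toy data and the model choice (an exponential null nested inside a gamma alternative) are assumptions made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=200)  # toy data

# Null model: exponential (a gamma with shape fixed at 1).
# Alternative: gamma with a free shape -- the exponential is nested in it.
loc = 0  # fix the location at zero for both fits
shape_hat, _, scale_hat = stats.gamma.fit(x, floc=loc)
ll_alt = np.sum(stats.gamma.logpdf(x, a=shape_hat, loc=loc, scale=scale_hat))
ll_null = np.sum(stats.expon.logpdf(x, loc=loc, scale=x.mean()))  # MLE scale

# LRT statistic: twice the log-likelihood difference, asymptotically
# chi-square with df = difference in free parameters (1 here).
lrt = 2 * (ll_alt - ll_null)
p_value = stats.chi2.sf(lrt, df=1)
print(f"LRT = {lrt:.2f}, p = {p_value:.3g}")
```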
{"url":"https://accio.github.io/statistics/2020/06/19/hypothesis-testing.html","timestamp":"2024-11-03T07:36:42Z","content_type":"text/html","content_length":"14500","record_id":"<urn:uuid:b20afbfe-30c6-44fe-8966-8d332c91eac7>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00252.warc.gz"}
Scalar conservation laws with discontinuous flux function: I. The viscous profile condition. (1996) In Communications in Mathematical Physics 176. p.23-44

The equation u_t + (F(u, x))_x = 0, where F(u, x) = H(x)f(u) + (1 − H(x))g(u) and H is Heaviside's step function, appears for example in continuous sedimentation of solid particles in a liquid, in two-phase flow, in traffic-flow analysis and in ion etching. The discontinuity of the flux function at x = 0 causes a discontinuity of a solution, which is not uniquely determined by the initial data. The equation can be written as a triangular 2×2 non-strictly hyperbolic system. This augmentation is non-unique and a natural definition is given by means of viscous profiles. By a viscous profile we mean a stationary solution of u_t + (F)_x = u_xx, where F is a smooth approximation of the discontinuous flux, i.e., H is smoothed. In terms of the 2×2 system, the discontinuity at x = 0 is either a regular Lax, an under- or overcompressive, a marginal under- or overcompressive or a degenerate shock wave. In some cases, depending on f and g, there is a unique viscous profile (e.g. undercompressive and regular Lax waves) and in some cases there are infinitely many (e.g. overcompressive waves). The main purpose of the paper is to show the equivalence between a previously introduced uniqueness condition for the discontinuity of the solution at x = 0 and the viscous profile condition.

publishing date: 1996
publication type: contribution to journal
journal: Communications in Mathematical Physics, volume 176, pages 23-44
LUP id: 8a26a983-9a8e-4c9a-aa4b-08ae6e744ad5 (old id 792790)
date added to LUP: 2016-04-04 07:07:42
date last changed: 2022-01-29 01:44:13

abstract = {{The equation u_t + (F(u, x))_x = 0, where F(u, x) = H(x)f(u) + (1 − H(x))g(u) and H is Heaviside's step function, appears for example in continuous sedimentation of solid particles in a liquid, in two-phase flow, in traffic-flow analysis and in ion etching. The discontinuity of the flux function at x = 0 causes a discontinuity of a solution, which is not uniquely determined by the initial data. The equation can be written as a triangular 2×2 non-strictly hyperbolic system. This augmentation is non-unique and a natural definition is given by means of viscous profiles. By a viscous profile we mean a stationary solution of u_t + (F)_x = u_xx, where F is a smooth approximation of the discontinuous flux, i.e., H is smoothed. In terms of the 2×2 system, the discontinuity at x = 0 is either a regular Lax, an under- or overcompressive, a marginal under- or overcompressive or a degenerate shock wave. In some cases, depending on f and g, there is a unique viscous profile (e.g. undercompressive and regular Lax waves) and in some cases there are infinitely many (e.g. overcompressive waves). The main purpose of the paper is to show the equivalence between a previously introduced uniqueness condition for the discontinuity of the solution at x = 0 and the viscous profile condition.}},
author = {{Diehl, Stefan}},
issn = {{1432-0916}},
language = {{eng}},
pages = {{23--44}},
publisher = {{Springer}},
series = {{Communications in Mathematical Physics}},
title = {{Scalar conservation laws with discontinuous flux function: I. The viscous profile condition.}},
url = {{http://dx.doi.org/10.1007/BF02099361}},
doi = {{10.1007/BF02099361}},
volume = {{176}},
year = {{1996}},
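For readability, here is one way to typeset the equations that were garbled in the extraction; the combined form of the flux F(u, x) is my reconstruction from the abstract's wording, not a verbatim quote of the paper.

```latex
% Conservation law with a flux that switches at x = 0 (H is Heaviside's step):
\[
  u_t + \bigl(F(u,x)\bigr)_x = 0,
  \qquad
  F(u,x) = H(x)\,f(u) + \bigl(1 - H(x)\bigr)\,g(u),
\]
% and a viscous profile is a stationary solution of the regularized equation
\[
  u_t + \bigl(F^{\delta}(u,x)\bigr)_x = u_{xx},
\]
% where F^{\delta} denotes the flux with H smoothed.
```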
{"url":"https://lup.lub.lu.se/search/publication/792790","timestamp":"2024-11-03T19:53:48Z","content_type":"text/html","content_length":"39142","record_id":"<urn:uuid:b5638e57-79b0-422e-873d-5f6f465338df>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00384.warc.gz"}
What is the probability of rolling a 1 and then rolling a 2? Based on this, you correctly conclude that a one and a two occurs with probability 2/36, or 1/18. So far, so good. Next, suppose you roll two identical dice instead.

What is the probability of rolling a 2 on a single die?

Probability of a certain number with a single die:

Roll a… | Probability
1 | 1/6 (16.667%)
2 | 1/6 (16.667%)
3 | 1/6 (16.667%)
4 | 1/6 (16.667%)

What is the probability of rolling a 2 on a die and flipping a coin and getting heads at the same time? The first tool is the Product Rule. This states that the probability of the occurrence of two independent events is the product of their individual probabilities. For example, the probability of getting two heads on two coin tosses is 0.5 × 0.5, or 0.25. A visual representation would show the four equally likely outcomes of tossing two coins.

What is the probability of getting 3 heads on 4 coins? To get 3 heads when flipping 4 coins means getting exactly one tail. This tail can be the 1st coin, the 2nd coin, the 3rd, or the 4th coin. Thus there are only 4 outcomes which have three heads. The probability is 4/16 = 1/4.

What is the probability of rolling a die and getting either a 1 or a 6 if the events are mutually exclusive? If a die is rolled once, then the probability of getting 1 is 1/6, and the probability of getting 6 is also 1/6.

What are the probabilities for a single die? One die rolls: the basics of probabilities. There is only one outcome you're interested in, no matter which number you choose. Probabilities are given as numbers between 0 (no chance) and 1 (certainty), but you can multiply this by 100 to get a percentage. So the chance of rolling a 6 on a single die is 16.7 percent.

How do you solve dice probability problems? The working is as follows: Probability = Number of desired outcomes ÷ Number of possible outcomes = 3 ÷ 36 = 0.0833 (for example, the three ways of rolling a total of 4 with two dice). The percentage comes out to be 8.33 per cent. Also, 7 is the most likely result for two dice.

What is the probability of getting tails on a coin flip and then rolling a 3 on a die? The probability of rolling a 3 on a fair die and getting tails on a coin is 1/12.

How many possible outcomes are there if you flip 2 coins and roll a die? When you flip a coin there are two possible outcomes (heads or tails) and when you roll a die there are six outcomes (1 to 6). Putting these together means you have a total of 2 × 6 = 12 outcomes.

What is the probability of rolling a 3 and flipping tails? The probability of rolling a 3 on a fair die and getting tails on a coin is 1/12. Based on this probability, there are 12 possible outcomes. As only 1 out of 12 possibilities would give this outcome, it is not a likely outcome.

What is the difference between rolling a die and flipping a coin? When you flip a coin there are two possible outcomes (heads or tails) and when you roll a die there are six outcomes (1 to 6).

How many outcomes are there when you flip a coin? When you flip a coin there are two possible outcomes (heads or tails) and when you roll a die there are six outcomes (1 to 6). Putting these together means you have a total of 2 × 6 = 12 outcomes.

What is the probability of rolling a number on a die? The probability of rolling a specific number on a die is 1 out of 6 and the probability of a tail is 1 out of 2 (assuming a fair die and coin). Hence, the probability of both occurring is 1/6 × 1/2 = 1/12.

What is the probability of flipping a coin and getting heads? Assuming that the coin is 'fair' (i.e. a regular coin), the classical interpretation is that the probability of getting heads is exactly 0.5. Again, assuming a fair die, the probability of getting any value when rolling is uniformly distributed, so 1/6 for any given value.
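A short Monte Carlo sketch (my addition, not from the original page) that checks two of the quoted values: P(die shows 3 and coin shows tails) ≈ 1/12, and P(rolling a 1 then a 2) = 1/36.

```python
import random

random.seed(42)
trials = 1_000_000

# Die shows 3 AND a fair coin shows tails (modelled as random() < 0.5).
die_and_tails = sum(
    random.randint(1, 6) == 3 and random.random() < 0.5
    for _ in range(trials)
)

# First roll is a 1 AND second roll is a 2 (ordered outcome).
one_then_two = sum(
    random.randint(1, 6) == 1 and random.randint(1, 6) == 2
    for _ in range(trials)
)

print(die_and_tails / trials)  # ~0.0833 = 1/12
print(one_then_two / trials)   # ~0.0278 = 1/36
```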
{"url":"https://profoundadvices.com/what-is-the-probability-of-rolling-a-1-and-then-rolling-a-2/","timestamp":"2024-11-10T08:46:29Z","content_type":"text/html","content_length":"58827","record_id":"<urn:uuid:6185454d-6546-4a03-84f3-63d6d0993d47>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00145.warc.gz"}
How do you divide (7i+1) / (-3i+1) in trigonometric form? | HIX Tutor

How do you divide (7i + 1)/(-3i + 1) in trigonometric form?

Answer 1

(7i + 1)/(-3i + 1) = √5 [cos θ + i sin θ], where θ = π + arctan(-1/2).

We first convert them into trigonometric form. In this form a + bi = r(cos θ + i sin θ), or a + bi = r·e^(iθ), where r = √(a² + b²) and θ = arctan(b/a), adjusted for the quadrant in which a + bi lies.

Hence 7i + 1 = 1 + 7i = √50·e^(iα), where α = arctan 7, and -3i + 1 = 1 − 3i = √10·e^(iβ), where β = arctan(−3).

Hence (7i + 1)/(-3i + 1) = √(50/10)·e^(i(α−β)) = √5·e^(i(α−β)).

As tan(α − β) = (tan α − tan β)/(1 + tan α·tan β) = (7 − (−3))/(1 + 7·(−3)) = 10/(−20) = −1/2.

Note that α ≈ 1.4289 and β ≈ −1.2490, so α − β ≈ 2.6779, which lies in the second quadrant; hence α − β = π + arctan(−1/2), not arctan(−1/2) itself. Therefore

(7i + 1)/(-3i + 1) = √5·e^(i(π + arctan(−1/2))) = √5 [cos(π + arctan(−1/2)) + i sin(π + arctan(−1/2))] = −2 + i.
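A quick numerical check of the result (my addition), using Python's cmath to put the quotient in polar form:

```python
import cmath
import math

z = (1 + 7j) / (1 - 3j)
r, theta = cmath.polar(z)  # modulus and argument of the quotient

print(z)                                  # (-2+1j)
print(r, math.sqrt(5))                    # both ~2.23607
print(theta, math.pi + math.atan(-0.5))   # both ~2.6779 rad
```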
{"url":"https://tutor.hix.ai/question/how-do-you-divide-7i-1-3i-1-in-trigonometric-form-8f9afadb05","timestamp":"2024-11-02T02:56:09Z","content_type":"text/html","content_length":"565695","record_id":"<urn:uuid:0f76655c-3c75-4d31-bd60-de4bb891ee5c>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00738.warc.gz"}
F.INV() Formula in Google Sheets

Calculates the inverse of the left-tailed F probability distribution. Also called the Fisher-Snedecor distribution or Snedecor's F distribution.

Common questions about the F.INV Formula include:
- What is the F.INV Formula?
- What is the syntax of the F.INV Formula?
- What is the significance of each of the parameters in the Formula?
- How does the F.INV Formula calculate its values?

The F.INV Formula takes a cumulative probability and the two degrees-of-freedom parameters of an F distribution and returns the value at which the left-tailed cumulative probability equals the given probability. In other words, it is the inverse of the F cumulative distribution function; it does not concern the standard normal distribution.

The F.INV Formula can be commonly mistyped due to the number of parameters needed for the formula. Common mistakes include missing or reversing the order of the parameters, or incorrectly typing "sum" instead of "F.INV."

Common ways in which the F.INV Formula is used inappropriately include applying it to random variables that do not follow an F distribution, passing percentages where probabilities between 0 and 1 are required, and using it to calculate probabilities when it actually returns quantiles.

Common pitfalls when using the F.INV Formula include using it with an inappropriate data set, not properly ordering the parameters of the formula, and not understanding the significance of the parameters.

Common mistakes when using the F.INV Formula include using the wrong data set, mistyping the formula, and not understanding the syntax of the formula.

Common misconceptions with the F.INV Formula include thinking that it applies to normal random variables, thinking that it can calculate sample means, and thinking that it returns probabilities instead of quantiles.
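To make the behaviour concrete, here is a sketch (my addition) of the equivalent computation in Python with SciPy; the spreadsheet call shown in the comment is only an assumed illustration of the three-argument form, not text from the original page.

```python
from scipy import stats

# Spreadsheet analogue (assumed form): =F.INV(0.95, 5, 10)
# Inverse of the left-tailed F CDF: the x such that P(F(5, 10) <= x) = 0.95.
x = stats.f.ppf(0.95, dfn=5, dfd=10)
print(x)                               # ~3.3258

# Round-trip check: the CDF at that quantile recovers the probability.
print(stats.f.cdf(x, dfn=5, dfd=10))   # 0.95
```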
{"url":"https://www.bettersheets.co/formulas/f-inv","timestamp":"2024-11-14T06:58:33Z","content_type":"text/html","content_length":"31263","record_id":"<urn:uuid:8d006b4f-e59d-426b-916a-67b821a5b7df>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00740.warc.gz"}
A Cost-Effective Way to Get Statistical Analysis - Zzoomit

A Cost-Effective Way to Get Statistical Analysis

Statistical analysis is widely used by scientists, companies, and government agencies, and in the end we all want it at a cost-effective price. Using research proposal writing help is one cost-effective way to obtain statistical analysis. Additionally, this article contains a complete guide that will help you manage your project in the most cost-effective manner. With the help of two study cases, we'll walk you through a cost-effective way to get statistical analysis. In one case, a cause-and-effect relationship is explored, and in the other, whether different variables are connected.

1. Set Your Research Objectives And Hypothesis.

The strategy you choose for data collection should be based on your research hypothesis.

Hypothesis Writing In Statistics

In statistics, hypotheses express a formal prediction about a population. A research hypothesis is split into a null and an alternative hypothesis, which can be evaluated using a sample of data. For example:

Practising meditation for five minutes will not affect teens' arithmetic exam scores.

A college student's GPA does not correlate with the income of his or her parents.

Make Sure Your Research Strategy Is Well Thought Out.

The design of the study determines the entire process of gathering and analyzing data. Study design defines whether your study will be descriptive, correlational, or experimental. Unlike descriptive and correlational research, experimental research actually manipulates the variables, whereas the other two merely measure them. You also need to consider whether you will compare participants as a group, individually, or both.

The Variables Involved In The Measurement

Different factors are measured at different levels of precision. A data point such as age can be treated as categorical or quantitative. Determining the measurement level is essential to selecting relevant statistics and hypothesis tests. For most research projects, you may also collect data on relevant participant characteristics. In correlational research, the kind of variables you use will determine the test you use.

2. Collecting Data

Generally, collecting data from everyone in a group you're interested in is too time-consuming or expensive. In most cases, you'll collect information from samples. With acceptable sampling practices, statistical analysis lets you draw conclusions beyond your own sample. Your goal should be to select a population-representative sample.

Statistically Significant Samples

Drawing generalizable conclusions requires the use of probability sampling. In this way, you lower the sampling bias and ensure that you are representing the entire population. In practice, it is rare to find a perfect sample. Non-probability samples may be more biased, but they are also easier to collect and recruit for.

Develop A Sampling Process That Is Appropriate.

Do you plan on promoting your research widely outside the university, and do you have the means to do so? What chance do you have of getting a representative sample of a broad spectrum of people? Can you reach out to members of hard-to-reach groups and follow up? Then determine how you'll recruit participants.

Prepare The Sample Size Calculation

To determine the appropriate sample size for your project, you can use one of many sample size calculators online. The significance level (alpha) is the risk of rejecting a true null hypothesis that you are prepared to accept, usually set at 5%.
Statistical power is the likelihood of your study detecting an effect of a given size, generally set at 80 per cent or greater. The population standard deviation is an estimate of a population parameter based on existing research or a pilot study.

Get Feedback On Format, Structure, And Language.

Proofread your paper, or have it edited by a professional proofreader or editor who focuses on:
• Academic writing format
• Ambiguous sentences
• Language and grammar
• Stylistic consistency

3. Compile Descriptive Statistics From Your Data

Then, you can examine the data and compute descriptive statistics to summarize it.

Verify Your Facts.

Skewed distributions call for different descriptive statistics than symmetric ones. Depending on your data, you can analyze it in various ways, including grouping data from different variables into frequency distribution tables and presenting data from a key variable using a bar chart.

Find The Central Tendency Of The Data.

To calculate the mean, divide the sum of all the values by their count. The mode is the answer or value that occurs most often in the data. The median is the value in the exact middle of a data set sorted from low to high. Which of these measures is appropriate depends on the level of measurement of the variable.

Determine The Variability Of The Data.

Data structure and measurement level should guide your choice of variability statistics. Skewed distributions are best described by the interquartile range, whereas normal distributions are best described by the standard deviation and variance. In the meditation example, you would check whether the variances of the scores before and after meditation are comparable.

4. Use Inferential Statistics To Test Hypotheses Or Estimate Parameters

Parameters are numbers describing a population, in contrast to statistics, which describe a sample. Based on sample statistics, inferential statistics can be used to estimate population parameters. Two major statistical methodologies are frequently used by researchers, often together, in order to draw statistical conclusions:

• Estimation: using sample statistics to determine population parameters.
• Hypothesis testing: a systematic procedure for using samples to test claims about the population.

Two types of population parameter estimates can be derived from sample statistics. A point estimate gives a single approximate value for a parameter, while an interval estimate gives a range; you can express a confidence interval using the standard error of the sampling distribution and the z score from the standard normal distribution.

Research Hypothesis Testing

By using sample data, you can test hypotheses about variables in the population. Statistical tests are used to determine whether a null hypothesis should be retained or rejected. These tests produce two major results: a test statistic and a p-value, which indicates where your data fall within the distribution expected if the null hypothesis is true.

Parametric Testing

Parametric tests are used to infer a population's characteristics from sample data, but they depend on specific assumptions to be valid. If your data conflict with these assumptions, nonparametric tests or transformations of the data may be necessary. A regression model, for example, describes how changes in a predictor variable affect an outcome variable.

Comparative Tests

Comparing the mean of one sample to the mean of the population should be done with a one-sample test. Measurements from two unrelated groups need to be compared with an independent-samples test (between-subjects design).
Pearson's r tells you the strength of a linear relationship between two quantitative variables. Based on the sample size, a t-test can evaluate how far the correlation coefficient deviates from zero. Because you expect a specific direction of effect (a higher score on the test), you'll need a one-tailed test.

Using Pearson's r, you determine the degree of association between parental income and GPA. The t-test gives a t value of 3.00 and a p-value of 0.0028, from which a significance test determines whether the correlation is significant in the population.

5. Describe Your Findings

The statistical analysis ends with interpreting your findings.

Statistical Significance

A hypothesis test must be statistically significant before conclusions can be drawn. A statistical significance threshold (typically 0.05) is used to determine whether your results are statistically significant. Note that if the sample size is large, even very modest correlation values become statistically significant.

Decision Errors

Type I and Type II errors are incorrect conclusions drawn from study results. Rejecting the null hypothesis when it actually holds is called a Type I error. To reduce the risks, you can choose an appropriate significance level and ensure high statistical power. Balancing the two kinds of error is a delicate act.

Frequentist Vs Bayesian Statistics

In Bayesian statistics, hypotheses are continually updated based on prior expectations and observations, rather than starting from an assumed-true null hypothesis. The Bayes factor assesses the strength of evidence for the null hypothesis versus the alternative hypothesis, rather than simply deciding whether the null hypothesis should be rejected.
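To make steps 3-5 concrete, here is a minimal Python sketch (my addition, with made-up toy numbers) that computes descriptive statistics and then runs the kind of correlation test described above with SciPy:

```python
import numpy as np
from scipy import stats

# Toy data (illustrative only): parental income (k$) and student GPA.
income = np.array([35, 48, 52, 61, 70, 75, 82, 90, 101, 120])
gpa = np.array([2.8, 3.0, 2.9, 3.2, 3.1, 3.4, 3.3, 3.6, 3.5, 3.8])

# Step 3: descriptive statistics (mean, median, sample standard deviation).
print(np.mean(gpa), np.median(gpa), np.std(gpa, ddof=1))

# Steps 4-5: Pearson's r and its significance test. The test is two-sided
# by default; halve the p-value for a one-tailed test when the observed
# direction matches the expected one.
r, p_two_sided = stats.pearsonr(income, gpa)
print(f"r = {r:.3f}, two-sided p = {p_two_sided:.4f}")
```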
{"url":"https://www.zzoomit.com/a-cost-effective-way-to-get-statistical-analysis/","timestamp":"2024-11-11T18:10:35Z","content_type":"text/html","content_length":"60454","record_id":"<urn:uuid:c31c752e-c5fc-47a9-a492-72e58ce516f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00757.warc.gz"}
nth Degree Taylor Polynomial

Many functions can be approximated with an nth degree Taylor polynomial. An nth degree Taylor polynomial (named after the English mathematician Brook Taylor, 1685-1731) is a way to approximate a function with a partial sum — a series of additions and multiplications. The "nth" refers to the degree of the polynomial you're using to approximate the function. Many transcendental functions (e.g. logarithmic functions or trigonometric functions) can be expressed as these approximations; when you plug cos 35° into a calculator, it doesn't use a table to find that value: it uses a polynomial approximation.

Example of nth degree Taylor Polynomial

The function f(x) = e^-x can be represented by an nth degree Taylor polynomial. The higher the "n" (degree), the better the approximation. If it were possible to write an infinite number of terms, you would have an exact match to your function. However, the goal here (like in many areas of calculus) isn't to be "exact", but to have an approximation that's "good enough." As you can't write out an infinite number of sums, you choose a workable part (perhaps 10 or 20 terms): this small part is called a partial sum. A tenth degree polynomial does a fairly good job of approximating the function (better than a sixth degree polynomial). Depending on what part of the function you're working with, this may be good enough. Even the tenth degree polynomial doesn't do well at approximating the tail on the right of the graph, so if you're evaluating the function there you will want to go even higher with the number of degrees. How high should you go? How many terms is good enough? That can be answered with Taylor's inequality, which states that (Lahodny, 2019):

If |f^(n+1)(x)| ≤ M for |x − a| ≤ R, then the remainder R_n(x) of the Taylor series satisfies |R_n(x)| ≤ M·|x − a|^(n+1) / (n + 1)! for |x − a| ≤ R.

What's the Difference Between a Taylor Polynomial and a Taylor Series?

A Taylor Series is the entire infinite sum of additions and multiplications, which can be represented in summation notation by

f(x) = Σ_{n=0}^{∞} f^(n)(a)·(x − a)^n / n!

When you take a part of this series and create a smaller number of partial sums (like the ones used above to approximate e^-x), those sums are called nth degree Taylor polynomials (or just "Taylor Polynomials" for short).

References
Lahodny, G. (2019). Section 10.9: Applications of Taylor Polynomials. Retrieved July 21, 2020 from: https://www.math.tamu.edu/~glahodny/Math152/Section%2010.9.pdf
Schwartz, S. (2017). AP® Calculus AB & BC All Access Book. Research & Education Association.
Graph: Desmos.com
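A small sketch (my addition) of the partial-sum idea: build the nth degree Taylor polynomial of e^-x about a = 0 and compare how degrees 6, 10 and 20 behave away from the origin.

```python
import math

def taylor_exp_neg(x, n):
    """nth degree Taylor polynomial of e^-x about a = 0:
    the partial sum of (-x)^k / k! for k = 0..n."""
    return sum((-x) ** k / math.factorial(k) for k in range(n + 1))

x = 4.0  # a point out in the right-hand tail
exact = math.exp(-x)
for n in (6, 10, 20):
    approx = taylor_exp_neg(x, n)
    print(f"n={n:2d}: {approx:+.6f}  (error {abs(approx - exact):.2e})")
# The error shrinks as n grows, but far from a = 0 many terms are needed.
```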
{"url":"https://www.statisticshowto.com/nth-degree-taylor-polynomial/","timestamp":"2024-11-13T15:06:43Z","content_type":"text/html","content_length":"69788","record_id":"<urn:uuid:6b32bd05-5a80-48c5-b59f-b70a59ae0689>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00684.warc.gz"}
EENG 251 | Prof. Ebrahim Mattar

Professor Dr. Ebrahim A. Mattar
Professor of Robotics/AI/Cybernetics
College of Engineering, University of Bahrain
Research Interests: Robotics, Cybernetics, AI
Now working on Electroencephalography (EEG) Brainwaves Decoding for Building Robotics Cognition

EENG 251: Digital Systems I, (3-1-3)
Office Hours: Any time, please send an e-mail to: ebmattar@uob.edu.bh
Office No: 14-146-A
Office Location: College of Engineering, Building 14.
Office Telephone: ++973 17876286, or ++973 17876606

[1]: EENG 251: Digital Systems I: [3,1,3]: Number systems. Basic logic gates. Boolean algebra. Simplification of logic functions: Karnaugh maps, Quine-McCluskey method, Petrick's method. NAND and NOR gate networks. Multiple output networks. MSI combinational logic circuits: combinational logic circuit design with programmable logic devices. Memories: PLA, PAL. Flip-flops. Design and analysis of counter and register circuits.

Course Marking-Assessment Distribution:
Assignments (4 total) 10%
Project (Presentation) 05%
Midterm 30%
Final Exam 40%

Course Text Book:
Recommended books (in the order of their relevance):
Thomas L. Floyd, "Digital Fundamentals", Seventh Edition, Pearson Education, 2002; ISBN: 013-046411-2 (highly recommended)
Wakerly, John F., "Digital Design: Principles and Practices", 3rd edition, Upper Saddle River, N.J.; London: Prentice Hall, 2000
Stonham, Thomas J., "Digital Logic Techniques: Principles and Practice", 3rd edition, London: Chapman & Hall, 1996
Martin, Kenneth W., "Digital Integrated Circuit Design", Oxford University Press, 2000 (The Oxford Series in Electrical and Computer Engineering); ISBN: 0195125843
Fletcher, William I., "An Engineering Approach to Digital Design", Englewood Cliffs, London (etc.): Prentice-Hall, 1980

Note about the recommended books: The first book on the list covers about 90% of the material taught in the lectures and is an essential book for beginners. The second book is usually sold with CDs containing the software used for the laboratory sessions (Xilinx Foundations).

College of Engineering
Electrical Engineering Control Lab
Lab Location: (36-213)

Course Experiments: Written laboratory reports will be submitted at the end of the semester. The reports will include the diagrams of the simulated circuits, simulation results and explanations about the operation mode of each circuit. Where necessary, the reports will include truth tables, Boolean algebra calculations, Karnaugh maps and state diagrams.
Individual Logic Gates
Combinational Logic Circuit Design
Tutorials (First test)
Computer Simulation of Logic Circuits
Parallel Adder/Subtractor Circuit
Tutorials (Second test)
Synchronous Counter Design
Tutorials for final.
Samples: Course Assignments, Tutorials, Quizzes, Labs (previous years' work):

Course Materials (from the lecturer):

Week 1
The importance and the basic principles of digital electronics
Binary numbers: transformation from and to decimal format
Hexadecimal numbers
Boolean logic principles
Boolean functions
Defining Boolean functions with truth tables
Theorems of Boolean logic
Types of logic gates (AND, OR, NOT, NAND, NOR, XOR, XNOR)
Converting one type of logic gate into another

Week 2
Logic gate families
Schottky TTL
Main parameters defining a logic gate family
Open drain and open collector logic gates
Parameters of logic gate families
Transfer gates
TSL gates: operation and role in digital circuits
The role of pull-up and pull-down resistors
Using transfer gates to implement an XOR gate - advantage compared to normal CMOS implementation

Week 3
Two's complement representations
One's complement representations
Gray code
DNF and CNF forms of Boolean functions
Karnaugh maps for functions in DNF (SOP) format
Karnaugh maps for functions in CNF (POS) format
Karnaugh maps for incompletely defined functions

Week 4
Identifying XOR and XNOR functions in Karnaugh maps
Timing hazards
Static hazards
Dynamic hazards
Using Karnaugh maps to generate hazard-free circuits
Quine-McCluskey minimisation algorithm
Simple multiplexers
Bus multiplexers constructed with or without TSL circuits
Normal encoders
Priority encoders
Normal decoders
Seven-segment decoders
Decoders for bar displays

Week 5
Barrel shifters and their applications to microprocessors
Implementation of Boolean functions with multiplexers, demultiplexers and encoders
Basic arithmetic circuits
Binary addition and subtraction
Typical adder circuits
Adders with look-ahead carry generator
Carry-skip adders
Carry-select adders
Carry-save adders

Week 6
Basic arithmetic circuits (continued)
Simple combinational multipliers
Carry-save multipliers
Incrementation circuits
Decrementation circuits
Complementation circuits (two's complement calculation)
BCD numbers
BCD addition and subtraction

Week 7
General SR bistables
Flip-flop truth tables and excitation tables
Flip-flop conversions

Week 8
Asynchronous counters
Synchronous counters
General design procedure for synchronous counters
Simple applications of registers and counters
Pseudo-random number generators
Error detecting circuits based on the CRC algorithm
Registers used in conjunction with barrel shifters inside microprocessors

Week 9
Simple applications of registers and counters (continued)
Serial adders
Booth multipliers
Finite State Machines (FSM)
Types of FSMs (Moore, Mealy, synchronous, asynchronous)
Design procedures for synchronous FSMs
Design procedures for asynchronous FSMs

Week 10
Types of PLDs: PLA, PAL, FPGA
Applications of PLDs (implementing Boolean functions and FSMs)
Memory circuits
Flash memory
Memory expansion
Implementing Boolean functions with ROM circuits
Implementing FSMs with ROM circuits
The basic implementation procedure
Using multiplexers, demultiplexers and encoders to reduce the size of the ROM circuit

Week 11
Interface circuits
Sample-and-hold circuits
Shannon's theorem
A/D converters
D/A converters
The importance of decoupling capacitors in digital circuits

Week 12
Examples of complex digital systems
Mobile telephones
DVD players
Digital cameras
Digital television

Object Oriented Programming using MATLAB (OOP):
Object Oriented Programming using MATLAB (OOP) uses classes (class), objects (obj) and data structures (struct).
If you would like to use this approach in programming for this course, that would be great. This is optional, but it is always good to learn the latest advanced programming tools. Download slides about MATLAB OOP from the Advanced MATLAB for Scientific Computing course, Stanford University. (press here to download >>>) OOP-Matlab
{"url":"https://www.dr-e-mattar-uob.com/eeng-251","timestamp":"2024-11-11T01:19:18Z","content_type":"text/html","content_length":"417505","record_id":"<urn:uuid:969c9d52-21b1-42f4-970d-cc8606539288>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00410.warc.gz"}
Quantum Physics Explained in Simple Terms

Every science student knows something about quantum physics, but not every one of them can explain it. If you are an H2, JC or A level physics student, it is important for you to have a good understanding of the topic because modern science is incomplete without it. By enrolling in physics tuition, you can also gain a better grasp and a deeper understanding of the topic. So let's start with the meaning of quantum physics.

What is quantum physics?

Quantum physics, as you may already know, is the study of the behaviour of matter and energy at the smallest levels – molecular, atomic, nuclear and even smaller. This branch of physics became necessary after the discovery, in the early 20th century, that the laws of physics governing matter on the macroscopic scale do not hold in the realm of microscopic objects.

Quantum is a Latin word meaning "how much". In modern physics, it is used to refer to the smallest possible discrete unit of matter or energy that can be predicted and observed by various means.

Who developed the quantum theory?

Quantum theory was first proposed by Max Planck in his paper on blackbody radiation, which he presented to the German Physical Society in 1900. When seeking to discover why radiation from a glowing body changes in colour from red to orange to blue as it becomes hotter, he found that the question could be answered by assuming that energy exists in individual units in the way that matter does and is, therefore, quantifiable. To support his theory, Planck wrote a mathematical equation involving the smallest possible units of energy, which he called "quanta". With this equation, he successfully explained how the energy radiated by a glowing body occupies different areas of the colour spectrum at different discrete temperature levels. He was awarded the Nobel Prize in Physics in 1918 for his work.

In 1905, Einstein added another brick to the theory by proposing that not just energy but radiation itself is made of quanta. In 1924, physicist Louis de Broglie proposed that at the atomic and subatomic level there is no fundamental difference in the composition and behaviour of matter and energy: both behave as if they were made of either waves or particles. This theory is called the principle of wave-particle duality. In 1927, physicist Werner Heisenberg proposed that it is impossible to simultaneously measure two complementary quantities, such as the momentum and position of a subatomic particle, with arbitrary precision. This theory is called the uncertainty principle. Later, other physicists like Niels Bohr and Erwin Schroedinger made important contributions to the field.

What are the important ideas in quantum theory?

The most important ideas that you should understand in quantum theory are:
1. Everything in the universe is quantized. Quantities like energy, mass, electric charge and momentum all occur in discrete quantum units. Some theories propose that even space and time are quantized.
2. The behaviour of particles at the subatomic level cannot be described by classical (Newtonian) physics.
3. At the subatomic level, particles exist in different quantum configurations called "states". A state is characterized by its properties, such as energy and angular momentum.
4. The energy of electromagnetic radiation is transferred in discrete quantum packets known as photons (a small numeric illustration follows this list).
5. According to the Heisenberg uncertainty principle, it is not possible to determine the position and momentum of a subatomic particle simultaneously with infinite precision.
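As a small numeric illustration of idea 4 (my addition, not from the original article), the energy of a single photon follows the Planck relation E = hf:

```python
# Planck relation: E = h * f, the energy of one quantum (photon) of light.
h = 6.626e-34           # Planck constant, J*s
c = 2.998e8             # speed of light, m/s

wavelength = 500e-9     # green light, 500 nm
f = c / wavelength      # frequency, Hz
E = h * f               # photon energy, J

print(f"f = {f:.3e} Hz, E = {E:.3e} J")  # ~6.0e14 Hz, ~4.0e-19 J
```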
Quantum physics is one of the hardest topics in physics to master. This is the reason many H2, JC and A level students take physics tuition. If you are not confident you will get good grades, then you should consider taking tuition too.
{"url":"https://tuitionphysics.com/oct-2017/quantum-physics-explained-in-simple-terms/","timestamp":"2024-11-09T04:22:46Z","content_type":"text/html","content_length":"94566","record_id":"<urn:uuid:7eb32456-e745-4725-b201-3757b735d690>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00171.warc.gz"}
Study of Inverse Lithography Approaches based on Deep Learning

Abstract: Computational lithography (CL) has become an indispensable technology for improving the imaging resolution and fidelity of deep sub-wavelength lithography. The state-of-the-art CL approaches are capable of optimizing pixel-based mask patterns, effectively increasing the degrees of freedom of the optimization. However, as the data volume of photomask layouts grows, computational complexity has become a challenging problem that hinders the application of advanced CL algorithms. In the past, a number of innovative methods have been developed to improve the computational efficiency of CL algorithms, such as machine learning and deep learning methods. After a brief introduction to optical lithography, this paper reviews some recent advances in fast CL approaches based on deep learning. At the end, the paper briefly discusses some potential developments for future work.

Keywords: Computational lithography; inverse lithography technology (ILT); optical proximity correction (OPC); deep learning
{"url":"http://jommpublish.org/p/55/","timestamp":"2024-11-05T00:04:27Z","content_type":"text/html","content_length":"84315","record_id":"<urn:uuid:f4aaf3ea-b4b8-40a5-8d82-11f5124b9dc8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00560.warc.gz"}
Introduction to Abelian Model Structures and Gorenstein Homological Dimensions

Includes bibliographical references and index.

As self-contained as possible, this book presents new results in relative homological algebra and model category theory. The author also re-proves some established results using different arguments or from a pedagogical point of view. In addition, he proves folklore results that are difficult to locate in the literature.

Introduction to Abelian Model Structures and Gorenstein Homological Dimensions provides a starting point to study the relationship between homological and homotopical algebra, a very active branch of mathematics. The book shows how to obtain new model structures in homological algebra by constructing a pair of compatible complete cotorsion pairs related to a specific homological dimension and then applying the Hovey Correspondence to generate an abelian model structure.

The first part of the book introduces the definitions and notations of the universal constructions most often used in category theory. The next part presents a proof of the Eklof and Trlifaj theorem in Grothendieck categories and covers M. Hovey's work that connects the theories of cotorsion pairs and model categories. The final two parts study the relationship between model structures and classical and Gorenstein homological dimensions and explore special types of Grothendieck categories known as Gorenstein categories.
{"url":"https://opac.daiict.ac.in/cgi-bin/koha/opac-detail.pl?biblionumber=32798&shelfbrowse_itemnumber=43065","timestamp":"2024-11-08T02:59:21Z","content_type":"text/html","content_length":"57695","record_id":"<urn:uuid:893b1df0-54ff-44b8-a2f0-dd2994e7ca81>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00737.warc.gz"}
Supercapacitor Degradation and Life-time

Source: EPCI e-Symposium

PCNS paper by Vlasta Sedlakova, Josef Sikula, Jiri Majzner, Petr Sedlak, CEITEC, Brno University of Technology, Brno, Czech Republic; presented by V. Sedlakova at the 2nd PCNS, 10-13th September 2019, Bucharest, Romania, as paper 3.4.

Degradation of supercapacitors (SC) is evaluated during aging tests. Continuous current cycling for 100% energy and 75% energy and discontinuous cycling for 75% energy, respectively, was performed on two different types of supercapacitors. SC parameters are determined before the aging test and during 6×10^5 cycles of all three current cycling tests. Capacitance fading within the current cycling tests is correlated with the results of capacitance change within calendar life tests at different temperatures and operating voltages. The two studied SC technologies show different sensitivity to temperature and electric field during the calendar tests, as well as slightly different evolution of capacitance during cycling. We show that the capacitance fading is driven by two mechanisms. The first one can be described by an exponential function of the square root of the time of ageing, while the second one is described by a Gaussian function. The first ageing mechanism, probably related to degradation of the electrolyte parameters, is observed for all the tested samples, while the second mechanism emerges only under hard testing conditions – elevated temperature and/or increased operating voltage. We suppose that the second ageing mechanism is related to degradation of the electrode active area, caused probably by a decrease of the potential barrier at the electrode/electrolyte interface. Further, it is shown that, for the same cycling current, a longer charge/discharge time accelerates the SC's degradation.

A supercapacitor (SC) is an energy storage device with high energy density, low self-discharge rate and relatively long life-time. The lifetime is influenced by the operating temperature, the applied voltage and the charge/discharge current [1 to 3]. SC cells have a high cycle life because of the chemical and electrochemical inertness of the activated carbon electrodes. However, on the time scale of months, experience shows a performance fading of the SC, which consists of decreasing capacitance and increasing ESR [4 to 8]. The degradation rate in a cycling ageing test is much higher than the degradation rate in a calendar ageing test with equivalent voltage and temperature [1 to 3]. This research analyses the impact of ageing modes on SC performance by monitoring DC capacitance and DC ESR as well as the parameters of a physics-based equivalent circuit model (see Fig. 1). The supercapacitor is modelled by a circuit consisting of two ideal capacitors, two ideal resistors and one resistor with a time-dependent resistance value. The capacitors C1 and C2 represent the capacitance of the Helmholtz double layer C_H and the increase of capacitance due to the diffusion of charges in the electrolyte C_D, respectively. The resistors represent the equivalent series resistance R_1 and the parallel leakage resistance R_L, respectively. The resistor with time-dependent resistance R_2(t) represents the diffusion resistance between the Helmholtz and diffuse capacitances, which increases with the time of SC charge/discharge [4].

Fig. 1.
Equivalent electrical circuit model for supercapacitor [4]

Two experimental methods were used to investigate the SC ageing: (i) energy cycling tests – a discontinuous 75% energy cycling test (D75%), a continuous 75% energy test (C75%), and a continuous 100% energy cycling test (C100%), respectively, at a temperature of 25°C; and (ii) calendar life tests at different operating voltages and temperatures. A constant electric field at the given temperature set for the appropriate calendar test results in capacitance fading and ESR increase of the tested SC. We show that the capacitance fading is driven by two mechanisms. The first one can be described by an exponential function of the square root of the time of ageing, while the second one is described by a Gaussian function. The first ageing mechanism is observed for all the tested samples, while the second mechanism emerges only under hard testing conditions – elevated temperature and/or increased operating voltage.

The aim of the energy cycling tests is to determine how the variable electric field and the induced self-heating influence the degradation of capacitance and equivalent series resistance, respectively [1 to 3, 9]. From the results determined for the C100% and C75% tests, where a cycling current of the same value is used while the end-of-discharge voltage varies, we will show that a longer charge/discharge time accelerates the SC's degradation.

The experiments were carried out on two types of supercapacitors with nominal capacitance 10 F and nominal operating voltage Vop = 2.7 V. One set of samples, denoted "Old", is of the standard production, while the second set, denoted "New", represents a prototype with modified technology. The energy cycling test is based on periodic charge/discharge current pulses up to the maximum operating voltage (Vop). The charge/discharge current value was 2.7 A for both the C75% and C100% energy cycling tests and 1.35 A for the D75% test, respectively. Cycling was performed in the voltage range 0 to 2.7 V for the C100% test and in the range 1.35 to 2.7 V for both the C75% and D75% tests. The duration of one cycle depends on the capacitance of the SC. The charge/discharge current value was designed to give a cycle duration of 20 seconds for the C100% test, 10 seconds for the C75% test and 40 seconds for the D75% test (including 10 seconds of rest time after both charge and discharge of the SC), respectively. However, the real cycle duration decreases with the capacitance drop. The total length of each 10^5 cycles was measured, and the evolution of SC parameters is further shown as a dependence on the time of cycling.

Supercapacitor parameters were evaluated periodically after each hundred thousand cycles. In addition to DC capacitance (DC CAP) and DC ESR measured by the standard test for both the New and Old sets of samples, the parameters of a physics-based equivalent circuit model [1, 4] were determined for the set of New samples. In the calendar life tests, the stored energy of the SC is sustained by maintaining the voltage at a constant value at a given temperature. Five different test conditions (temperature vs. voltage value) were set for the SC life time estimation:
• -35°C/1.0Vop
• 22°C/1.0Vop
• 22°C/1.2Vop
• 45°C/0.8Vop
• 65°C/0.6Vop

Supercapacitor parameters DC CAP and DC ESR were evaluated after 200, 400, 700 and 1000 hours and then every 1000 hours up to 9000 hours.
Results and discussion

The parameters of the physics-based equivalent circuit model – Helmholtz capacitance C_H, diffuse capacitance C_D, equivalent series resistance R_1 and diffuse resistance parameter R_D0, respectively – determined for the New samples within the cycling tests C100%, C75% and D75% are shown in Figs. 2 and 3. The method for determining these parameters is explained in detail in [1, 4].

Fig. 2. Helmholtz capacitance C_H (left graph) and diffuse capacitance C_D (right graph) vs time of ageing for cycling tests C100% (blue dot), C75% (red square) and D75% (black triangle) for New samples

While the Helmholtz capacitance value continuously decreases with the time of ageing, the diffuse capacitance value drops from about 4.5 F to about 2 F within the initial 10^5 cycles for the C100% test, and 2×10^5 cycles for the C75% test, respectively. Then the C_D value remains constant for an additional 4×10^5 cycles for both the C100% and C75% tests. A moderate increase of the C_D value is detected with further ageing. Similar, but slower, behaviour of the C_D value is registered for the D75% test.

The diffuse resistance R_D between the Helmholtz and diffuse capacitances is given by Eq. (1) (see [4]), where:
• V_0 is the voltage at the beginning of diffuse capacitance charging,
• t_2 is the time constant of the diffusion process,
• V_1 is the expected value of the voltage at infinity,
• C_D is the diffuse capacitance,
• R_D0 is the diffuse resistance parameter, which is equal to the resistance R_D value at time t = 1 s.

The diffuse resistance parameter R_D0 increases with the time of ageing, with the cycling current, and with the energy transferred through the SC within one cycle (see Fig. 3 – left). The highest increase is observed at the beginning of ageing. Diffusion processes in the SC are related to the concentration and mobility of ions in the electrolyte and to their distribution/concentration gradient in the vicinity of the carbon electrode/electrolyte interface. The dependence of the diffuse capacitance as well as the diffuse resistance parameter on the time of ageing indicates that significant changes in electrolyte properties arise shortly after the electric field is applied. The value of the equivalent series resistance R_1 determined from the charge/discharge experiment (see [4]) is shown in Fig. 3 – right. No significant increase of ESR is observed within the cycling tests.

Fig. 3. Diffuse resistance parameter R_D0 (left graph) and equivalent series resistance R_1 (right graph) vs time of ageing for cycling tests C100% (blue dot), C75% (red square) and D75% (black triangle) for New samples

SC ageing by the energy cycling test induces an increase of the value of the resistance R_D between the Helmholtz and diffuse capacitances (see Fig. 4). This effect is more pronounced at the beginning of ageing. The value of the diffuse resistance R_D increases steeply after each change of current polarity. The longer the charge or discharge lasts, the higher the Joule heat generated on the diffuse resistance and the increase of the SC's internal temperature.

Fig. 4. Diffuse resistance R_D vs time of charge/discharge – values calculated according to Eq. (1) for the case before ageing and after 10^5, 2×10^5, 4×10^5 and 6×10^5 ageing cycles (from bottom to top) for the cycling test C100% for New samples

In addition to the parameters of the physics-based equivalent circuit model, DC capacitance (DC CAP) and DC ESR measured by the standard test were determined for both the New and Old sets of samples.
Figure 5 shows that the value of DC CAP is nearly equal to the Helmholtz capacitance C_H value determined from the charge/discharge test cycle [1], [4]. To assess the capacitance fading with ageing time, the dependence of DC CAP on the time of ageing was evaluated for both the cycling and calendar ageing tests. The dependences of DC CAP on the time of ageing for the C100% cycling test at room temperature and for the calendar life test performed at 22°C/1.0Vop for New samples are shown in Fig. 6 – left graph. We can see that the influence of current cycling on the capacitance fading is significant. The application of a static electric field results in a decrease of about 0.5 F, while a decrease of about 1.2 F is determined for the C100% cycling test after 2000 hours of ageing.

Fig. 5. Helmholtz capacitance C_H (blue square) and DC capacitance DC CAP (black triangle) vs time of ageing for cycling test C100% for New samples

Fig. 6. DC capacitance vs time of ageing for the C100% cycling test and the 22°C/1.0Vop calendar life test (left graph) and DC capacitance relative value vs time of ageing for the C100%, C75% and D75% cycling tests and the 22°C/1.2Vop and 65°C/0.6Vop calendar life tests (right graph) – New samples

Fig. 7. DC capacitance relative value vs time of ageing for the C100%, C75% and D75% cycling tests for both Old and New samples (left graph) and DC capacitance vs time of ageing for the C100%, C75% and D75% cycling tests with the corresponding fits of the capacitance decrease at the beginning of ageing – New samples (right graph)

The dependences of the DC capacitance relative value on the time of ageing for the C100%, C75% and D75% cycling tests and the 22°C/1.2Vop and 65°C/0.6Vop calendar life tests are shown in Fig. 6 – right graph for New samples. The capacitance decrease induced by the calendar life tests is lower at the beginning of ageing even under hard ageing conditions (applied voltage 3.24 V or elevated temperature 65°C, respectively). However, after 2000 hours of ageing in the case of elevated temperature, and after 4000 hours of ageing in the case of increased applied voltage, an abrupt decrease of capacitance is observed for the samples under the calendar life tests. The abrupt decrease of capacitance starts at the point when the capacitance value has decreased by about 7% due to the calendar life tests. This abrupt decrease of capacitance is also detected for the C100% cycling test, but there only after the capacitance value has decreased by about 12%.

The dependences of the DC CAP relative value on the time of ageing for the C100%, C75% and D75% cycling tests for both Old and New samples are shown in Fig. 7 – left. The capacitance decrease induced by the C75% and D75% cycling tests is the same for both the New and Old samples and can be described by the exponential function over the whole studied ageing interval. The results determined for the C100% cycling tests differ after 1000 hours of ageing. While the results determined for the Old samples can be described by the exponential function over the whole studied ageing interval of 0 to 4000 hours, the results for the New samples can be described by the exponential function in the range 0 to 2000 hours only; then an abrupt decrease of capacitance is detected for the C100% cycling test. The dependences of DC CAP on the time of ageing for the C100%, C75% and D75% cycling tests are shown in Fig. 7 – right.
These dependences are fitted by an exponential function of the square root of time, in the time interval 0 to 2000 hours for the C100% test and in the whole studied time interval for the C75% and D75% cycling tests, respectively:

C(t) = C[1] + C[2] · exp(−√(t/t[h]))    (2)

• t is time in hours,
• C[1] is the capacitance value for time of ageing at infinity,
• C[2] is the capacitance decrease due to cycling,
• t[h] is the time constant of the energy cycling degradation process in hours.

The values of these parameters obtained from the experimental data fits for New samples are shown in Table 1.

Table 1. Constants evaluated from the experimental data fit of DC capacitance C vs. time of cycling for C100%, C75% and D75% cycling tests – New samples

Cycling test   C (t = 0 h) / F   C[1] / F   C[2] / F   t[h] / hour
C100%          8.95              7.66       1.29       487
C75%           8.99              7.52       1.47       1646
D75%           9.10              7.93       1.17       1805

The value of the constant t[h] of the energy cycling degradation process depends on the amount of energy dissipated in the sample volume within one hour. For the C75% cycling test it is considerably higher than for the C100% cycling test. It is shown in Fig. 3 – right that the ESR value is comparable for the samples subjected to both the C75% and C100% cycling tests. We suppose that the additional energy dissipation occurs on the diffuse resistance R[D]. The diffuse resistance is time dependent and reaches values of up to 25 Ω within each charge/discharge cycle of the C75% test, while the value increases up to about 36 Ω within each C100% cycle at the beginning of ageing. These values increase to 40 Ω within each charge/discharge cycle of the C75% test, and to about 63 Ω within each C100% cycle, after 6×10^5 ageing cycles.

Fig. 8. DC capacitance relative value vs time of ageing (in the time interval 0 to 2000 hours) for calendar life tests -35°C/1.0Vop; 22°C/1.0Vop; 22°C/1.2Vop; 45°C/0.8Vop; 65°C/0.6Vop for both Old (solid lines) and New (dashed lines) samples.

The dependences of the DC capacitance relative value on the time of ageing during the initial 2000 hours of the calendar life tests -35°C/1.0Vop; 22°C/1.0Vop; 22°C/1.2Vop; 45°C/0.8Vop; 65°C/0.6Vop are shown in Fig. 8 for both Old and New samples. Arrows indicate the difference between the DC capacitance relative values of the Old and New samples observed within the initial 1000 hours of ageing. Here the capacitance of the samples of the New technology shows a much lower decrease with respect to the initial value than that of the samples of the Old technology. The samples of the New technology exhibit a slightly different sensitivity to the ageing conditions. While the capacitance decrease during the initial 1000 hours is comparable for the calendar life tests 22°C/1.2Vop and 65°C/0.6Vop of the Old technology, for the New technology the increased temperature of the 65°C/0.6Vop test induces a considerably higher decrease of capacitance than the high electric field of the 22°C/1.2Vop test.

Figure 9 shows the dependences of the DC capacitance relative value on the time of ageing during 10000 hours of the calendar life tests -35°C/1.0Vop; 22°C/1.0Vop; 22°C/1.2Vop; 45°C/0.8Vop; 65°C/0.6Vop for both Old and New samples. Contrary to the behavior within the initial 1000 hours of ageing, here the samples of the New technology show a larger decrease of capacitance than the Old technology samples. The decrease is evident especially for the calendar life test 65°C/0.6Vop. The modified technology is much more sensitive to the elevated temperature than the standard technology.
This is visible also in the results measured for the test 45°C/0.8Vop – New, where the DC CAP relative value after 9000 hours of ageing reaches 81 percent of the initial capacitance value and the capacitance decrease accelerates. On the other hand, the results measured for the test 45°C/0.8Vop – Old show that the DC CAP relative value after 10000 hours of ageing reaches 88 percent of the initial capacitance value and the capacitance decrease is stabilized between 3000 and 10000 hours of ageing.

Fig. 9. DC capacitance relative value vs time of ageing (in the time interval 0 to 10000 hours) for calendar life tests -35°C/1.0Vop; 22°C/1.0Vop; 22°C/1.2Vop; 45°C/0.8Vop; 65°C/0.6Vop for both Old (solid lines) and New (dashed lines) samples.

Fig. 10. DC ESR relative change vs time of ageing (in the time interval 0 to 10000 hours) for calendar life tests -35°C/1.0Vop; 22°C/1.0Vop; 22°C/1.2Vop; 45°C/0.8Vop; 65°C/0.6Vop for both Old (solid lines) and New (dashed lines) samples.

Figure 10 shows the dependences of the DC ESR relative change on the time of ageing in the time interval 0 to 10000 hours for the calendar life tests -35°C/1.0Vop; 22°C/1.0Vop; 22°C/1.2Vop; 45°C/0.8Vop; 65°C/0.6Vop for both Old and New samples. The increase of DC ESR above 100% of its initial value correlates with the decrease of the DC CAP relative value below 80% of the initial capacitance. For the samples of the standard technology the ESR increase is related to the increased applied electric field, while for the modified technology the ESR increase is caused by both the increased applied electric field and the elevated temperature.

Fig. 11. DC capacitance relative value vs time of ageing for calendar life test 22°C/1.2Vop – measured data (squares) and data fit (solid line) for both Old (red) and New (blue) samples

Fig. 12. DC capacitance relative value vs time of ageing for calendar life test 65°C/0.6Vop – measured data (squares) and data fit (solid line) for both Old (red) and New (blue) samples

The dependences of DC CAP on the time of ageing for all the calendar life tests can be fitted by the exponential function of the square root of time (see Eq. 2) either in the whole studied ageing time interval (the parts subjected to moderate ageing conditions) or at the beginning of ageing only (the parts under hard ageing conditions – high electric field and/or high temperature). Hard ageing conditions result in the abrupt decrease of capacitance, which is probably related to the decrease of the electrode effective area due to the degradation of the potential barrier on the carbon electrode/electrolyte interface. Similar behavior was observed also during the ageing of another carbon electrode/electrolyte based structure – Lithium-Sulfur cells – and is described in [10]. The capacitance decrease due to this degradation mechanism can be modeled by the Gaussian function:

C(t) = C[0] · exp(−(t/t[G])²)    (3)

• t is the time of ageing by the second degradation mechanism in hours,
• C[0] is the capacitance value at the point when the second degradation mechanism starts,
• t[G] is the time constant of the degradation process in hours.

Figures 11 and 12 show the dependences of the DC capacitance relative value on the time of ageing for the calendar life tests 22°C/1.2Vop and 65°C/0.6Vop. The measured data and the data fits are shown for both Old and New technology samples. The values of the parameters obtained from the experimental data fits for both New and Old samples are shown in Table 2.
The New technology, in comparison to the Old one, shows a lower relative capacitance decrease due to the first degradation mechanism, which is related to the modification of the electrolyte parameters; but the second degradation mechanism, related to the decrease of the electrode effective area, starts earlier and occurs due to ageing under both the increased electric field and the elevated temperature. The sensitivity of the Old technology to the elevated temperature is lower – the capacitance decrease within the calendar life test 65°C/0.6Vop is covered by the first ageing mechanism in the whole evaluated time interval of 0 to 10000 hours.

Table 2. Constants evaluated from the experimental data fit of DC capacitance C vs. time of ageing for calendar life tests 22°C/1.2Vop and 65°C/0.6Vop for both New and Old samples

Calendar life test    | First ageing mechanism – Eq. (2)    | Second ageing mechanism – Eq. (3)
                      | C[1] / %   C[2] / %   t[h] / hour   | C[0] / %   t[G] / hour
22°C/1.2Vop – New     | 93.5       6.56       403           | 93.2       1.3 × 10^4
22°C/1.2Vop – Old     | 84.9       14.8       890           | 86.8       1.7 × 10^4
65°C/0.6Vop – New     | 93.2       6.85       35            | 93.2       7110
65°C/0.6Vop – Old     | 88.1       11.5       492           | –          –

We show that the capacitance fading is driven by two mechanisms. The first one can be covered by an exponential function of the square root of the time of ageing, while the second one is described by a Gaussian function. The first ageing mechanism, probably related to the degradation of the electrolyte parameters, is observed for all the tested samples, while the second mechanism emerges only in the case of hard testing conditions – elevated temperature and/or increased operating voltage. We suppose that the second ageing mechanism is related to the degradation of the electrode active area, caused probably by the decrease of the potential barrier on the electrode/electrolyte interface. The capacitance decrease induced by the first ageing mechanism is lower for the samples of the New technology, which results in lower capacitance fading within the initial 1000 hours of the calendar life tests; but then the degradation due to the second ageing mechanism becomes dominant in this technology, for both the elevated temperature and the increased electric field test conditions, and leads to accelerated capacitance degradation. The results determined for the C100% and C75% tests, where a cycling current of the same value is used while the end-of-discharge voltage varies, show that a longer charge/discharge time accelerates the SC's degradation.

This research was carried out under the project CEITEC 2020 (LQ1601) with financial support from the Ministry of Education, Youth and Sports of the Czech Republic under the National Sustainability Programme II.

1. Sedlakova V, Sikula J, Majzner J, Sedlak P, Kuparowitz T, Buergler B and Vasina P. Supercapacitor degradation assessment by power cycling and calendar life tests. Metrol Meas Syst, 2016, vol. 23, no. 3, p. 345-358.
2. Kreczanik P, Venet P, Hijazi A and Clerc G. Study of supercapacitor ageing and lifetime estimation according to voltage, temperature and RMS current. IEEE Transactions on Industrial Electronics, 2014, vol. 61, no. 9, p. 4895-4902.
3. Murray D B and Hayes J G. Cycle testing of supercapacitors for long-life robust applications. IEEE Transactions on Power Electronics, 2015, vol. 30, no. 5, p. 2505-2516.
4. Sedlakova V, Sikula J, Majzner J, Sedlak P, Kuparowitz T, Buergler B and Vasina P. Supercapacitor equivalent electrical circuit model based on charges redistribution by diffusion. J. Power Sources, 2015, vol. 286, p. 58-65.
5. Sedlakova V, Sikula J, Valsa J, Majzner J and Dvorak P. Supercapacitor charge and self-discharge analysis. In Proceedings of Conference Passive Space Component Days, ESA/ESTEC, Noordwijk, The Netherlands, Sept. 24-26, 2013.
6. Zubieta L and Bonert R. Characterization of double-layer capacitors for power electronics applications. IEEE Trans. Ind. Appl., 2000, vol. 36, p. 199-205.
7. Graydon J W, Panjehshahi M and Kirk D W. Charge redistribution and ionic mobility in the micropores of supercapacitors. J. Power Sources, 2014, vol. 245, p. 822-829.
8. Faranda R. A new parameters identification procedure for simplified double layer capacitor two-branch model. Electr. Power Syst. Res., 2010, vol. 80, p. 363-371.
9. Kaus M, Kowal J and Sauer D U. Modelling the effects of charge redistribution during self-discharge of supercapacitors. Electrochimica Acta, 2010, vol. 55, p. 7516-7523.
10. Sedlakova V, Sikula J, Sedlak P, Cech O and Urrutia L. A simple analytical model of capacity fading for Lithium-Sulfur cells. IEEE Transactions on Power Electronics, 2019, vol. 34, no. 6, p. 5779-5786.
{"url":"https://passive-components.eu/supercapacitor-degradation-and-life-time/?amp=1","timestamp":"2024-11-03T22:07:14Z","content_type":"text/html","content_length":"131166","record_id":"<urn:uuid:614c9a99-1f7e-4d9d-a11d-1f4e8ec94831>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00408.warc.gz"}
A note on sparse least-squares regression
Information Processing Letters

We compute a sparse solution to the classical least-squares problem min_x ||Ax − b||_2, where A is an arbitrary matrix. We describe a novel algorithm for this sparse least-squares problem. The algorithm operates as follows: first, it selects columns from A, and then solves a least-squares problem only with the selected columns. The column selection algorithm that we use is known to perform well for the well-studied column subset selection problem. The contribution of this article is to show that it gives favorable results for sparse least-squares as well. Specifically, we prove that the solution vector obtained by our algorithm is close to the solution vector obtained via what is known as the "SVD-truncated regularization approach". © 2013 Elsevier B.V.
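The two-stage structure described in the abstract (select columns, then solve a reduced least-squares problem) can be sketched as follows. This is only an illustration, not the paper's algorithm: the column selection step here is stood in for by QR with column pivoting, a common heuristic for the column subset selection problem, whereas the paper uses a specific selection algorithm with proven guarantees.

```python
import numpy as np
from scipy.linalg import qr, lstsq

def sparse_least_squares(A, b, k):
    """Pick k columns of A, then solve least squares restricted to them,
    returning a solution vector with at most k nonzero entries."""
    # Stage 1 (stand-in): QR with column pivoting ranks the columns.
    _, _, piv = qr(A, mode='economic', pivoting=True)
    cols = np.sort(piv[:k])
    # Stage 2: ordinary least squares on the selected columns only.
    x_sub, *_ = lstsq(A[:, cols], b)
    x = np.zeros(A.shape[1])
    x[cols] = x_sub
    return x

# Tiny usage example on random data.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x = sparse_least_squares(A, b, k=5)
print(np.count_nonzero(x), np.linalg.norm(A @ x - b))
```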
{"url":"https://research.ibm.com/publications/a-note-on-sparse-least-squares-regression","timestamp":"2024-11-03T14:15:02Z","content_type":"text/html","content_length":"71707","record_id":"<urn:uuid:2c2a8d98-f8e1-4bed-aa4e-1f86243f6b1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00289.warc.gz"}
Coordinate Grid Mystery Picture Printable Free

Free Coordinate Graphing Mystery Picture Worksheets. By Michele Meleen, M.S.Ed. Updated April 20, 2020. Get excited about coordinate graphing with free hidden picture plotting pages. Print out the worksheets by clicking the image, then the print icon, and use the handy Adobe Guide for any troubleshooting.

Discover the mystery picture. Students uncover mystery pictures by plotting points on coordinate planes. 1 Quadrant: Worksheet 1, Worksheet 2. 4 Quadrants: Worksheet 3, Worksheet 4, Worksheet 5. Similar: coordinate grid messages, maps with coordinate grids, more geometry worksheets.

Coordinate Graphing Mystery Picture Worksheet. Practice plotting ordered pairs with this fun Back to School Owl coordinate graphing mystery picture. This activity is easy to differentiate by choosing either the first quadrant (positive whole numbers) or the four quadrants (positive and negative whole numbers) worksheet.

This Spring graphing coordinate plane activity is a great way to practice basic graphing skills by graphing ordered pairs in the first quadrant. Students will graph points along the coordinate plane and connect the lines to reveal a mystery Spring picture, with four free worksheets. Level 1: all coordinates are within the first quadrant; there are 41 points to plot. Level 2: all coordinates are within the first quadrant; there are 88 points to plot. Level 3: coordinates are in all four quadrants; there are 99 points to plot. Teachers can choose which coordinate grid to give to their students.

Coordinate Grid Mystery Picture. In this geometry worksheet, students will navigate a coordinate grid in order to find the mystery picture. First, students will have to plot a series of ordered pairs onto the coordinate plane. Then, once they have plotted each point, they will connect the dots to find the mystery shape.

Mystery Picture (Grade 4 Geometry Worksheet). Plot the coordinate points in order and draw a line between each point, then color and name the figure that appears. (The worksheet's list of coordinate points is not reproduced here.)
Fun printable or digital math practice: create a penguin mystery picture using the coordinate grid. Students will love discovering the mystery picture while coloring in the correct squares on the alphanumeric grid, using the colors and coordinates given. A great math activity for back to school.
{"url":"https://downstairspeople.org/en/coordinate-grid-mystery-picture-printable-free.html","timestamp":"2024-11-13T08:40:51Z","content_type":"text/html","content_length":"23814","record_id":"<urn:uuid:68f74b3a-7035-49a2-86c8-5b4be8d90ec5>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00120.warc.gz"}
Effective sample size

• Monte Carlo standard error (MCSE)
• Effective sample size (ESS)
• Mathematical underpinning: central limit theorem for Markov chains

After running an MCMC chain and convincing yourself the chain is mixing well, the next question is: how many digits are reliable when reporting an MCMC approximation (e.g., a posterior mean)? To answer this question, we will encounter an MCMC version of the notion of effective sample size. This is distinct from SNIS's ESS but has the same underlying intuition.

Two types of errors are involved in Bayesian analysis based on Monte Carlo (see also: the two kinds of asymptotics):

1. Statistical error: inherent uncertainty due to, e.g., the fact that we have finite data.
2. Computational error: additional error due to the fact that we use an approximation of the posterior instead of the exact posterior.

• This page focuses on the second type of error.
• The full picture should include both (i.e., the first step of this joke can be taken seriously).
• Interestingly, the mathematical toolbox to study 2 is similar to the non-Bayesian toolbox (i.e., normal approximation) for studying 1.

Bayesian statisticians using central limit theorems?

Earlier on, when building credible intervals (a Bayesian measure of statistical error), we avoided central limit theorems.

Question: why are Bayesians OK with using central limit theorems for MCMC error analysis (i.e., for computational error)?

1. Actually, Bayesians sometimes use CLTs for building credible intervals.
2. Because it is often harder to increase the number of MCMC iterations \(M\) compared to the number of data points.
3. Because it is often easier to increase the number of MCMC iterations \(M\) compared to the number of data points.
4. MCMC error analysis is less important.
5. Bayesians are not completely happy with CLT-based MCMC error analysis.

Best answer is: #3. In fact, in some situations it may be impossible to increase the number of data points. For example, in the Ariane 1 success rate analysis we will never get more data points since this type of rocket is discontinued.

Answer #1 is also acceptable: some approaches use the Laplace approximation instead of MCMC, which is motivated by the central limit theorem (more precisely, the Bernstein-von Mises theorem).

Answer #5 is also acceptable: there is indeed ongoing research on so-called "non-asymptotic" approaches to bound the error of MCMC algorithms. See for example Latuszynski et al., 2013 and Paulin.

Executive version

How many digits are reliable? Suppose you are approximating a posterior mean in Stan. We now show how to determine how many digits are reliable:

1. print the fit object,
2. roughly twice^1 the column se_mean provides the radius of a 95% confidence interval.

Example: our simple doomsday model…

Inference for Stan model: anon_model.
1 chains, each with iter=20000; warmup=10000; thin=1;
post-warmup draws per chain=10000, total post-warmup draws=10000.

       mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
x      1.16    0.03 1.29  0.07  0.18  0.57  1.76  4.56  1624    1
lp__  -0.39    0.02 0.68 -2.44 -0.43 -0.12 -0.04 -0.01  1580    1

Samples were drawn using NUTS(diag_e) at Tue Mar 12 22:45:46 2024.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).

Question: construct a 95% confidence interval for the posterior mean of \(X\). Is the true value contained in the interval?

1. \([1.16 \pm 0.03]\), and the true value is contained in the interval
2. \([1.16 \pm 0.03]\), and the true value is not contained in the interval
3. \([1.16 \pm 0.06]\), and the true value is contained in the interval
4. \([1.16 \pm 0.06]\), and the true value is not contained in the interval
5. None of the above

In the above, \([x \pm r]\) denotes the confidence interval \([x - r, x + r]\).

Reading off this table we have:

• the estimate is 1.16,
• twice the standard error gives 0.06 (from the column se_mean, which stands for "standard error for the posterior mean").

From previous calculations, we know the true answer is \(1.117\), so the true error is \(0.043\); hence the true value is contained in the interval, as expected for approximately^2 95% of the random seeds passed to Stan.

Mathematical underpinnings

We answer two questions:

• How can we compute Monte Carlo Standard Errors (MCSE) (i.e., numbers such as in se_mean)?
• What underlying theory justifies that computation?

Along the way we define the notion of Effective Sample Size (ESS) for MCMC.

• Recall the central limit theorem (CLT) for independent and identically distributed (i.i.d.) random variables:
  □ if some random variables \(V_i\) are i.i.d., and
  □ each has finite variance,
  then we have^3
  \[\sqrt{n}(\bar V - \mu) \to \mathcal{N}(0, \operatorname{SD}[V]), \tag{1}\]
  where \(\bar V = \frac{1}{n} \sum_{i=1}^n V_i\) and \(\mu = \mathbb{E}[V]\).
• From the central limit theorem, recall that standard frequentist arguments give:
  \[\mathbb{P}(\mu \in [\bar V \pm 1.96\,\text{SE}]) \approx 95\%, \tag{2}\]
  where the Standard Error (SE) is given by \(\text{SE} = \operatorname{SD}[V] / \sqrt{n}\).

Central limit theorem (CLT) for Markov chains

• We would like to have something like Equation 2 for our MCMC algorithm,
• however, the samples from MCMC are dependent; they are not i.i.d. …
• … so we cannot use the above i.i.d. central limit theorem.
• But fortunately there is a generalization of the central limit theorem that applies!
• Namely: the central limit theorem for Markov chains.

Definition: the random variables \(X^{(1)}, X^{(2)}, \dots\) are called a Markov chain if they admit the following "chain" graphical model.

• Here we state a version of the CLT for Markov chains specialized to our situation:
  □ Let \(X^{(1)}, X^{(2)}, \dots\) denote the states visited by an MH algorithm.
  □ \(\mu = \int x \pi(x) \mathrm{d}x\) is the quantity we seek to approximate,
    ☆ i.e., a posterior mean.
    ☆ Recall \(\pi(x) = \gamma(x) / Z\), where \(\gamma(x) = p(x, y)\) and \(Z = p(y)\).
  □ Let \(\bar X\) denote our MCMC estimator, i.e., the average of the \(M\) samples.

Theorem: assuming \(\sigma^2 = \int (x - \mu)^2 \pi(x) \mathrm{d}x < \infty\) (in our context: the posterior variance is finite) and under appropriate fast mixing conditions,^4

\[\sqrt{M}(\bar X - \mu) \to \mathcal{N}(0, \sigma_a), \tag{3}\]

where the constant \(\sigma^2_a > 0\) is known as the asymptotic variance.

Asymptotic variance

Notice: there is a difference between the limiting distributions in the i.i.d. and Markov CLTs (Equation 1 and Equation 3)!

• For the i.i.d. CLT: the variance of the limiting distribution is equal to the variance of \(X_1\).
• For the Markov CLT: we left the variance of the limiting distribution more vague (\(\sigma_a^2\)).

Intuition: because of the dependence between MCMC iterations, the noise of the approximation can be larger compared to the i.i.d. setting.^5

Effective sample size

The MCMC Effective Sample Size (ESS) is an answer to the following:

Question: How many i.i.d. samples \(n_e\) would be equivalent^6 to my \(M\) samples obtained from MCMC?
Apply the following formula, \(\operatorname{Var}[a X + b] = a^2 \operatorname{Var}[X]\), to both the CLT for i.i.d. samples, and then the CLT for Markov chains.

1. \[n_e = \frac{\sigma^2}{\sigma_a^2} M.\]
2. \[n_e = \frac{\sigma}{\sigma_a} M.\]
3. \[n_e = \frac{\sigma_a^2}{\sigma^2} M.\]
4. \[n_e = \frac{\sigma_a}{\sigma} M.\]
5. None of the above

The CLT for Markov chains gives:

\[\sqrt{M} (\bar X_\text{Markov} - \mu) \approx \sigma_a G,\]

where \(G\) is standard normal. Taking the variance on both sides:

\[M \operatorname{Var}[\bar X_\text{Markov}] \approx \sigma_a^2.\]

The CLT for i.i.d. samples gives:

\[\sqrt{n_e} (\bar X_\text{iid} - \mu) \approx \sigma G.\]

Taking the variance on both sides (we could actually bypass the CLT argument and show the equality below directly in the i.i.d. case):

\[n_e \operatorname{Var}[\bar X_\text{iid}] \approx \sigma^2.\]

We have that \(n_e\) is defined as having the property that \(\operatorname{Var}[\bar X_\text{iid}] = \operatorname{Var}[\bar X_\text{Markov}]\), so after some algebra:

\[n_e = \frac{\sigma^2}{\sigma_a^2} M.\]

Estimating the asymptotic variance with many chains

• There are many ways to estimate the asymptotic variance (see references below).
• We start here with the simplest possible scenario:
  □ suppose we have \(C\) independent chains,
  □ each with \(S\) MCMC samples.
• Since we have \(C\) chains, we get \(C\) different Monte Carlo estimators:
  □ \(E_1, E_2, \dots, E_C\).
  □ Denote also the overall estimator by:
    \[E = \frac{1}{C} \sum_{c=1}^C E_c.\]
• First, by the CLT for Markov chains: for any \(c\),
  \[S \operatorname{Var}[E_c] \approx \sigma^2_a. \tag{4}\]
• Second, since the \(E_1, \dots, E_C\) are i.i.d.,
  \[\operatorname{Var}[E_c] \approx \frac{1}{C} \sum_{c=1}^C (E_c - E)^2. \tag{5}\]

Question: combine Equation 4 and Equation 5 to obtain an estimator for the asymptotic variance.

1. \[\sigma_a \approx C \left( \frac{1}{S} \sum_{c=1}^C (E_c - E)^2\right)\]
2. \[\sigma_a \approx S \left( \frac{1}{C} \sum_{c=1}^C (E_c - E)^2\right)\]
3. \[\sigma_a^2 \approx C \left( \frac{1}{S} \sum_{c=1}^C (E_c - E)^2\right)\]
4. \[\sigma_a^2 \approx S \left( \frac{1}{C} \sum_{c=1}^C (E_c - E)^2\right)\]
5. None of the above

Combining the two equations, we obtain:

\[\sigma_a^2 \approx S \left( \frac{1}{C} \sum_{c=1}^C (E_c - E)^2\right).\]

Estimating the asymptotic variance with one chain

• View a trace of length \(M\) as \(C\) subsequent batches of length \(S\) for \(M = C \cdot S\).
• A common choice: \(C = S = \sqrt{M}\).
• This is known as the batch mean estimator (a code sketch is given after the footnotes below).

Additional references

• See Flegal, 2008 for a nice exposition on the technique described here for estimating the asymptotic variance, known as the batch mean estimator, as well as many other methods for asymptotic variance estimation.
• See Vats et al., 2017 for a multivariate generalization of the effective sample size.

1. Where does "twice" come from? More precisely, it is \(1.96\), which comes from the quantile function of the standard normal evaluated at \(100\% - 5\%/2 = 97.5\%\).↩︎
2. It is only approximate, since as we shall see the standard error estimator uses central limit theorem approximations.↩︎
3. In this page, \(\to\) refers to convergence in distribution.↩︎
4. Several different conditions can be used to state the central limit theorem for Markov chains; see Jones, 2004 for a review. For example, Corollary 4 in that review can be used since the MH algorithm is reversible, as we will see soon.
Corollary 4 requires the following conditions: reversibility; a finite variance, \(\sigma^2 < \infty\) (reasonable, as the CLT for i.i.d. requires this as well); geometric ergodicity (which we discussed in a footnote of MCMC diagnostics); and Harris ergodicity, which is more of a technical condition that would be covered in a Markov chain course.↩︎
5. Interestingly, the noise can also be lower in certain situations! These are called super-efficient MCMC algorithms. Consider, for example, an MH algorithm over the state space \(\{1, 2, 3\}\) that proposes, at each iteration, uniformly over \(\{1, 2, 3\}\) while excluding its current position. It can be shown that this algorithm will have lower variance compared to the i.i.d. simple Monte Carlo algorithm.↩︎
6. By equivalent, we mean: "have the same variance".↩︎
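To make the batch mean estimator concrete, here is a minimal sketch (written for illustration, not part of the course materials; the AR(1) chain stands in for MCMC output, and the stationary variance of the chain is used as the plug-in estimate of \(\sigma^2\)):

```python
import numpy as np

def batch_mean_diagnostics(chain):
    """Batch-mean estimates of the asymptotic variance, MCSE and ESS
    for a 1-D array of (possibly correlated) MCMC samples."""
    M = len(chain)
    S = int(np.sqrt(M))                 # batch length, common choice S = sqrt(M)
    C = M // S                          # number of batches
    batch_means = chain[:C * S].reshape(C, S).mean(axis=1)   # E_1, ..., E_C
    overall = batch_means.mean()                             # E
    var_a = S * np.mean((batch_means - overall) ** 2)        # sigma_a^2, Eqs. (4)-(5)
    mcse = np.sqrt(var_a / M)           # standard error of the MCMC average
    ess = M * chain.var() / var_a       # n_e = (sigma^2 / sigma_a^2) * M
    return mcse, ess

# Demo: an AR(1) chain with stationary N(0, 1) distribution and rho = 0.9.
rng = np.random.default_rng(0)
rho, M = 0.9, 100_000
x = np.empty(M)
x[0] = rng.standard_normal()
for i in range(1, M):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

mcse, ess = batch_mean_diagnostics(x)
print(f"mean = {x.mean():.4f} +/- {1.96 * mcse:.4f} (95% CI), ESS = {ess:.0f}")
```

For this AR(1) example the asymptotic variance is \((1+\rho)/(1-\rho) = 19\) times the i.i.d. variance, so the reported ESS should come out roughly \(M/19 \approx 5300\).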
{"url":"https://ubc-stat-ml.github.io/web447/w09_workflow/topic05_mcmc_ess.html","timestamp":"2024-11-07T07:09:45Z","content_type":"application/xhtml+xml","content_length":"92370","record_id":"<urn:uuid:912b3103-1452-4363-85ff-e6fb363633bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00064.warc.gz"}
Allowable Shear Stress

A 150 mm by 300 mm wooden beam having a simple span of 6 meters carries a concentrated load P at its midspan. It is notched at the supports as shown in the figure. For this problem, all calculations are based on shear alone, using the 2010 NSCP specification given below. Allowable shear stress of wood, F[v] = 1.0 MPa.

1. If P = 30 kN, calculate the maximum allowable depth (in millimeters) of the notches at the supports.
A. 88  B. 62  C. 238  D. 212

2. If the depth of the notches is 100 mm, what is the safe value of P (in kilonewtons) the beam can carry?
A. 26.67  B. 17.78  C. 8.89  D. 13.33

3. If P = 25 kN and the depth of the notches is 150 millimeters, what is the shear stress (in megapascals) near the supports?
A. 0.83  B. 6.67  C. 1.67  D. 3.33

NSCP 2010 Section 616.4: Horizontal Shear in Notched Beams
When rectangular-shaped girders, beams or joists are notched at points of support on the tension side, they shall meet the design requirements of that section in bending and in shear. The horizontal shear stress at such point shall be calculated by:

$f_v = \dfrac{3V}{2bd'}\left( \dfrac{d}{d'} \right)^2$

where
$d$ = total depth of the beam,
$d'$ = actual depth of the beam at the notch.
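A quick numerical check of the three parts is sketched below. It assumes the maximum shear at the notched supports is V = P/2 for a midspan point load on a simple span, and neglects the beam's self-weight; working in newtons and millimeters gives stresses directly in MPa.

```python
# f_v = 3V/(2 b d') * (d/d')^2  -- NSCP 2010 Section 616.4
b, d, Fv = 150.0, 300.0, 1.0    # width (mm), total depth (mm), allowable stress (MPa)

# Part 1: P = 30 kN -> solve Fv = 3 V d^2 / (2 b d'^3) for d'
V = 30_000 / 2
d_eff = (3 * V * d**2 / (2 * b * Fv)) ** (1 / 3)
print(f"d' = {d_eff:.0f} mm, notch depth = {d - d_eff:.0f} mm")   # ~238 mm -> 62 mm (B)

# Part 2: notch depth 100 mm -> d' = 200 mm, solve for the safe P = 2V
d_eff = 200.0
V = Fv * 2 * b * d_eff / 3 * (d_eff / d) ** 2
print(f"P = {2 * V / 1000:.2f} kN")                               # ~17.78 kN (B)

# Part 3: P = 25 kN, notch depth 150 mm -> d' = 150 mm, compute f_v
V, d_eff = 25_000 / 2, 150.0
f_v = 3 * V / (2 * b * d_eff) * (d / d_eff) ** 2
print(f"f_v = {f_v:.2f} MPa")                                     # ~3.33 MPa (D)
```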
{"url":"https://mathalino.com/tag/reviewer/allowable-shear-stress","timestamp":"2024-11-08T20:37:26Z","content_type":"text/html","content_length":"46561","record_id":"<urn:uuid:e0fe4c79-f08d-4f2e-aeaa-3d5747a584de>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00626.warc.gz"}
A Quick Description Of Rate Distortion Theory.

We want to encode a video, picture or piece of music optimally. What does "optimally" really mean? It means that we want to get the best quality at a given filesize OR we want to get the smallest filesize at a given quality (in practice, these 2 goals are usually the same).

Solving this directly is not practical; trying all byte sequences 1 megabyte in length and selecting the "best looking" sequence will yield 256^1000000 cases to try.

But first, a word about quality, which is also called distortion. Distortion can be quantified by almost any quality measurement one chooses. Commonly, the sum of squared differences is used, but more complex methods that consider psychovisual effects can be used as well. It makes no difference in this discussion.

First step: that rate distortion factor called lambda...
Let's consider the problem of minimizing:

distortion + lambda*rate

For a fixed lambda, rate would represent the filesize, while distortion is the quality. Is this equivalent to finding the best quality for a given max filesize? The answer is yes. For each filesize limit there is some lambda factor for which minimizing the above will get you the best quality (using your chosen quality measurement) at the desired (or lower) filesize.

Second step: splitting the problem.
Directly splitting the problem of finding the best quality at a given filesize is hard because we do not know how many bits from the total filesize should be allocated to each of the subproblems. But the formula from above:

distortion + lambda*rate

can be trivially split. Consider:

(distortion0 + distortion1) + lambda*(rate0 + rate1)

This creates a problem made of 2 independent subproblems. The subproblems might be 2 16x16 macroblocks in a frame of 32x16 size. To minimize:

(distortion0 + distortion1) + lambda*(rate0 + rate1)

we just have to minimize:

distortion0 + lambda*rate0

and

distortion1 + lambda*rate1

I.e., the 2 problems can be solved independently.

Author: Michael Niedermayer
Copyright: LGPL
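The splitting argument is easy to demonstrate numerically. The following toy sketch is not from the ffmpeg sources; the per-block (distortion, rate) operating points are made-up numbers, used only to show that an exhaustive joint search over both blocks selects exactly the per-block minimizers of distortion + lambda*rate:

```python
from itertools import product

# Hypothetical candidate (distortion, rate) pairs for two macroblocks.
options0 = [(10.0, 1_000), (4.0, 3_000), (1.0, 8_000)]
options1 = [(12.0, 1_200), (5.0, 2_500), (2.0, 7_000)]
lam = 1e-3  # the rate-distortion factor lambda

def cost(d, r):
    return d + lam * r

# Exhaustive joint search over all block combinations...
best_joint = min(product(options0, options1),
                 key=lambda p: cost(p[0][0] + p[1][0], p[0][1] + p[1][1]))

# ...agrees with minimizing each block independently, because the
# Lagrangian cost is additive across blocks.
best_split = (min(options0, key=lambda p: cost(*p)),
              min(options1, key=lambda p: cost(*p)))

assert best_joint == best_split
print(best_split)   # ((4.0, 3000), (5.0, 2500)) for this lambda
```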
{"url":"https://gfiber.googlesource.com/vendor/opensource/ffmpeg/+/a08e27a935844c13e4090bbd0fd2b626cd862a2b/doc/rate_distortion.txt","timestamp":"2024-11-07T10:49:18Z","content_type":"text/html","content_length":"17589","record_id":"<urn:uuid:14cd5afc-1c39-40ba-9371-2d1ec4c8988a>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00122.warc.gz"}
Examples and Applications Index

This worksheet provides access to the example worksheets and applications that are available through the help system. The worksheets have been organized into topical sections, with a complete alphabetical listing of all example worksheets at the end. Click the box to the left of a section name to expand the section and display the list of examples. Click the appropriate hyperlink in a section to open the corresponding worksheet.

Example worksheets and workbooks are also installed with the Maple software. Example worksheets can be found in the examples directory of your Maple installation. Sample workbooks can be found in the data/Workbook directory of your Maple installation.

For additional applications and other free resources, visit the Maple Application Center.

Language and System

• examples/binarytree: A demonstration of programming a Maple package, this worksheet implements a dictionary structure that uses binary trees.
• examples/CodeCoverage: A demonstration of programming a Maple package, this worksheet creates a package for code coverage profiling.
• examples/CommandTemplate: Template for top-level command help page.
• examples/ContextMenu: Examples of context menu use.
• examples/ContextMenu/PackageContextMenus: Examples of creating package-specific context menus.
• examples/DefaultUnits: Using units in the default Maple environment.
• examples/Domains: A tool for developing code for complicated algorithms.
• examples/evalntype: One of the extensions to the Maple type system.
• examples/FileTools: An overview of the FileTools package and selected examples.
• examples/GenericGraphAlgorithms: Illustrates generic programming in an example using simple graph algorithms.
• examples/GenericGroups: A demonstration of programming with Maple modules, using generic programming.
• examples/GMP: An introduction to the GNU Multiple Precision (GMP) integer library and Maple exact arbitrary-precision integer arithmetic.
• examples/lexical: An explanation of lexical scoping in Maple.
• examples/LinearAlgebraMigration: An overview of converting worksheets that use the superseded linalg package to use the LinearAlgebra and VectorCalculus packages.
• examples/LinkedListPackage: A demonstration of programming a Maple package, this worksheet implements a linked list structure and operations on the pairs.
• examples/matlab: A demonstration of the Matlab package.
• examples/memo: Using modules to implement memoization.
• examples/NaturalUnits: Using units in the Natural Units environment.
• examples/PackageCommandTemplate: Template for package command help page.
• examples/PackageOverviewTemplate: Template for package overview help page.
• examples/PriorityQueues: A demonstration implementing priority queues using Maple modules.
• examples/QuotientFields: A generic quotient field implementation using generic programming.
• examples/SearchEngine: Construct a simple search engine using Maple.
• examples/SimpleUnits: Use units in the Simple Units environment.
• examples/spread: A package for programmatically manipulating spreadsheets.
• examples/StandardUnits: Using units in the Standard Units environment.
• examples/string: Using strings in Maple.
• examples/SymbolicDifferentiator: Illustrates various module concepts in a symbolic differentiator example.
• examples/Task: Examples using the Task Programming Model for multithreaded programming.
• examples/Threads: An overview of the Threads package.
• examples/WorksheetPackage: An introduction to programmatic worksheet access with the Worksheet package.
• examples/CodeGeneration: An introduction to the Maple package that controls the translation of Maple code to other languages.
• examples/DatabaseGrades: An example of creating and modifying a database.
• examples/ExternalCalling: An introduction to the use of external compiled code.
• examples/ExternalCode: Sample command lines for building an OpenMaple application.
• examples/Quiz: An introduction to generating interactive tests using the Grading package. If desired, you can create a quiz that can then be exported as a Maple T.A. course module.
• examples/Calculus1Derivatives: Differentiation in the visualization component of the Student[Calculus1] package.
• examples/Calculus1DiffApps: Differentiation applications in the visualization component of the Student[Calculus1] package.
• examples/Calculus1IntApps: Integration applications in the visualization component of the Student[Calculus1] package.
• examples/Calculus1Integration: Integration in the visualization component of the Student[Calculus1] package.
• examples/Calculus1SingleStepping: An overview of single-step problem solving in the Student[Calculus1] package.
• examples/Calculus1Tangents: Tangents, function inverses, and plotting by sampling in the visualization component of the Student[Calculus1] package.
• examples/Calculus1Theorems: Differentiation theorems in the visualization component of the Student[Calculus1] package.
• examples/Calculus1Visualization: An overview of visualization in the Student[Calculus1] package.
• examples/MultivariateCalculus: An overview of the Student[MultivariateCalculus] package.
• examples/moreStudentMultivariateCalculus: Additional examples for the Student[MultivariateCalculus] package.
• examples/StudentPrecalculus: An overview of the Student[Precalculus] package.
• examples/StudentVectorCalculus: An overview of the Student[VectorCalculus] package.
• examples/VectorCalculus: An overview of the vector calculus package.
• DataFrame/Guide: A guide to working with data frames.
• examples/AirPassengers: An example of forecasting future air passenger data using TimeSeriesAnalysis.
• examples/DataFrame/Statistics: Examples of using commands from Statistics on data frames.
• examples/DataFrame/Subsets: Examples of finding subsets for data frames.
• examples/DataSets/BubblePlot: Example of visualizing multiple data sets using BubblePlot.
• examples/DataSets/Choropleth/CustomData: Example of visualizing custom data using choropleth maps.
• examples/GlobalTemperature: An example of forecasting future average global temperature data using TimeSeriesAnalysis.
• examples/IrisData: Examples of using summary statistics and principal component analysis on the Iris data set.
• examples/RobustStatistics: An overview of the Statistics package commands for describing data sets that have noisy measurements.
• examples/StatisticsDataSmoothing: An overview of the Statistics package commands for performing data smoothing.
• examples/StatisticsEstimation: An overview of the Statistics package commands for statistical estimation, including maximum likelihood estimation.
• examples/StatisticsHypothesisTesting: An overview of the Statistics package commands for hypothesis testing and inference.
• examples/StatisticsProbabilityDistributions: An overview of the Statistics package commands for statistical distributions and manipulating random variables.
• examples/SteadyStateMarkovChain: Compute the steady-state vector of a Markov chain.
• examples/Student/Statistics: A selection of examples covering material in an introductory statistics course.
• examples/Student/Statistics,DescriptiveStatisticsQuiz: An example worksheet showing how to create quizzes for descriptive statistics.

App Authoring

• examples/EmbeddedComponents/ExploreApp: Building an application with Explore.
• examples/EmbeddedComponents/NumberLine: Building an interactive number line.
• examples/Explore: Examples using the Explore command to construct and insert a collection of embedded components used to explore an expression or a plot.
• examples/ProgrammaticContentGeneration: Examples for programmatically generating worksheet content.

Maplet Applications

• examples/AdvancedMapletsLayout: An example Maplet application providing detailed information on how Maplet layouts work.
• examples/AlertMaplet: An example Maplet application providing advanced information for the Alert example Maplet application.
• examples/BezoutMatrixMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra BezoutMatrix Maplet application.
• examples/ConditionNumberMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra ConditionNumber Maplet application.
• examples/ConfirmMaplet: An example Maplet application providing advanced information for the example Confirm Maplet application.
• examples/ExampleMaplets: An overview of example Maplet applications.
• examples/GetColorMaplet: An example Maplet application providing advanced information for the GetColor example Maplet application.
• examples/GetEquationMaplet: An example Maplet application providing advanced information for the GetEquation example Maplet application.
• examples/GetExpressionMaplet: An example Maplet application providing advanced information for the GetExpression example Maplet application.
• examples/GetFileMaplet: An example Maplet application providing advanced information for the GetFile example Maplet application.
• examples/GetInputMaplet: An example Maplet application providing advanced information for the GetInput example Maplet application.
• examples/HilbertMatrixMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra HilbertMatrix Maplet application.
• examples/IntegrationMaplet: An example Maplet application providing advanced information for the advanced Integration Maplet application.
• examples/MapletBuilder: An introduction to creating Maplets using Maplet Builder.
• examples/MapletBuilderAdvanced: An introduction to creating Advanced Maplets using Maplet Builder.
• examples/MapletsLayout: An introduction to Maplet application layout and design.
• examples/MapletsStyleGuide: A list of guidelines for writing readable Maplet application code.
• examples/MapletsTutorial: An introduction to writing Maplet applications.
• examples/MatrixNormMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra MatrixNorm Maplet application.
• examples/MessageMaplet: An example Maplet application providing advanced information for the Message example Maplet application.
• examples/QRDecompositionMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra QRDecomposition Maplet application.
• examples/QuestionMaplet: An example Maplet application providing advanced information for the Question example Maplet application.
• examples/SelectionMaplet: An example Maplet application providing advanced information for the Selection example Maplet application.
• examples/ShowTableMaplet: An example Maplet application providing advanced information for the advanced ShowTable Maplet application.
• examples/SingularValuesMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra SingularValues Maplet application.
• examples/VectorNormMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra VectorNorm Maplet application.

Discrete Mathematics
Number Theory
Integral Transforms
Differential Equations
Differential-Algebraic Equations
Symbolic Calculations
Numeric Calculations
Mathematical Visualization
See the Plotting Guide for other sample graphs.
Grading and Assessment
Science and Engineering

Alphabetical Listing of all Examples and Applications
• examples/PackageOverviewTemplate: Template for package overview help page. • examples/PriorityQueues: A demonstration implementing priority queues using Maple modules. • examples/QuotientFields: A generic quotient field implementation using generic programming. • examples/SearchEngine: Construct a simple search engine using Maple. • examples/SimpleUnits: Use units in the Simple Units environment. • examples/spread: A package for programmatically manipulating spreadsheets. • examples/StandardUnits: Using units in the Standard Units environment. • examples/string: Using strings in Maple. • examples/SymbolicDifferentiator: Illustrates various module concepts in a symbolic differentiator example. • examples/Task: Examples using the Task Programming Model for multithreaded programming. • examples/Threads: An overview of the Threads package. • examples/WorksheetPackage: An introduction to programmatic worksheet access with the Worksheet package. • examples/binarytree: A demonstration of programming a Maple package, this worksheet implements a dictionary structure that uses binary trees. examples/binarytree: A demonstration of programming a Maple package, this worksheet implements a dictionary structure that uses binary trees. • examples/CodeCoverage: A demonstration of programming a Maple package, this worksheet creates a package for code coverage profiling. examples/CodeCoverage: A demonstration of programming a Maple package, this worksheet creates a package for code coverage profiling. • examples/Domains: A tool for developing code for complicated algorithms. • examples/evalntype: One of the extensions to the Maple type system. examples/evalntype: One of the extensions to the Maple type system. • examples/FileTools: An overview of the FileTools package and selected examples. examples/FileTools: An overview of the FileTools package and selected examples. • examples/GenericGraphAlgorithms: Illustrates generic programming in an example using simple graph algorithms. examples/GenericGraphAlgorithms: Illustrates generic programming in an example using simple graph algorithms. • examples/GenericGroups: A demonstration of programming with Maple modules, using generic programming. examples/GenericGroups: A demonstration of programming with Maple modules, using generic programming. • examples/GMP: An introduction to the GNU Multiple Precision (GMP) integer library and Maple exact arbitrary-precision integer arithmetic. examples/GMP: An introduction to the GNU Multiple Precision (GMP) integer library and Maple exact arbitrary-precision integer arithmetic. • examples/LinearAlgebraMigration: An overview of converting worksheets that use the superseded linalg package to use the LinearAlgebra and VectorCalculus packages. examples/LinearAlgebraMigration: An overview of converting worksheets that use the superseded linalg package to use the LinearAlgebra and VectorCalculus packages. • examples/LinkedListPackage: A demonstration of programming a Maple package, this worksheet implements a linked list structure and operations on the pairs. examples/LinkedListPackage: A demonstration of programming a Maple package, this worksheet implements a linked list structure and operations on the pairs. • examples/PriorityQueues: A demonstration implementing priority queues using Maple modules. • examples/QuotientFields: A generic quotient field implementation using generic programming. • examples/SymbolicDifferentiator: Illustrates various module concepts in a symbolic differentiator example. 
examples/SymbolicDifferentiator: Illustrates various module concepts in a symbolic differentiator example. • examples/Task: Examples using the Task Programming Model for multithreaded programming. examples/Task: Examples using the Task Programming Model for multithreaded programming. • examples/WorksheetPackage: An introduction to programmatic worksheet access with the Worksheet package. examples/WorksheetPackage: An introduction to programmatic worksheet access with the Worksheet package. • examples/CodeGeneration: An introduction to the Maple package that controls the translation of Maple code to other languages. • examples/DatabaseGrades: An example of creating and modifying a database. • examples/ExternalCalling: An introduction to the use of external compiled code. • examples/ExternalCode: Sample command lines for building an OpenMaple application. • examples/Quiz: An introduction to generating interactive tests using the Grading package. If desired, you can create a quiz that can then be exported as a Maple T.A. course module. • examples/CodeGeneration: An introduction to the Maple package that controls the translation of Maple code to other languages. examples/CodeGeneration: An introduction to the Maple package that controls the translation of Maple code to other languages. • examples/DatabaseGrades: An example of creating and modifying a database. • examples/ExternalCalling: An introduction to the use of external compiled code. examples/ExternalCalling: An introduction to the use of external compiled code. • examples/ExternalCode: Sample command lines for building an OpenMaple application. • examples/Quiz: An introduction to generating interactive tests using the Grading package. If desired, you can create a quiz that can then be exported as a Maple T.A. course module. examples/Quiz: An introduction to generating interactive tests using the Grading package. If desired, you can create a quiz that can then be exported as a Maple T.A. course module. • examples/Calculus1Derivatives: Differentiation in the visualization component of the Student[Calculus1] package. • examples/Calculus1DiffApps: Differentiation applications in the visualization component of the Student[Calculus1] package. • examples/Calculus1IntApps: Integration applications in the visualization component of the Student[Calculus1] package. • examples/Calculus1Integration: Integration in the visualization component of the Student[Calculus1] package. • examples/Calculus1SingleStepping: An overview of single-step problem solving in the Student[Calculus1] package. • examples/Calculus1Tangents: Tangents, function inverses, and plotting by sampling in the visualization component of the Student[Calculus1] package. • examples/Calculus1Theorems: Differentiation theorems in the visualization component of the Student[Calculus1] package. • examples/Calculus1Visualization: An overview of visualization in the Student[Calculus1] package. • examples/MultivariateCalculus: An overview of the Student[MultivariateCalculus] package. • examples/moreStudentMultivariateCalculus: Additional examples for the Student[MultivariateCalculus] package. • examples/StudentPrecalculus: An overview of the Student[Precalculus] package. • examples/StudentVectorCalculus: An overview of the Student[VectorCalculus] package. • examples/VectorCalculus: An overview of vector calculus package. • examples/Calculus1Derivatives: Differentiation in the visualization component of the Student[Calculus1] package. 
examples/Calculus1Derivatives: Differentiation in the visualization component of the Student[Calculus1] package. • examples/Calculus1DiffApps: Differentiation applications in the visualization component of the Student[Calculus1] package. examples/Calculus1DiffApps: Differentiation applications in the visualization component of the Student[Calculus1] package. • examples/Calculus1IntApps: Integration applications in the visualization component of the Student[Calculus1] package. examples/Calculus1IntApps: Integration applications in the visualization component of the Student[Calculus1] package. • examples/Calculus1Integration: Integration in the visualization component of the Student[Calculus1] package. examples/Calculus1Integration: Integration in the visualization component of the Student[Calculus1] package. • examples/Calculus1SingleStepping: An overview of single-step problem solving in the Student[Calculus1] package. examples/Calculus1SingleStepping: An overview of single-step problem solving in the Student[Calculus1] package. • examples/Calculus1Tangents: Tangents, function inverses, and plotting by sampling in the visualization component of the Student[Calculus1] package. examples/Calculus1Tangents: Tangents, function inverses, and plotting by sampling in the visualization component of the Student[Calculus1] package. • examples/Calculus1Theorems: Differentiation theorems in the visualization component of the Student[Calculus1] package. examples/Calculus1Theorems: Differentiation theorems in the visualization component of the Student[Calculus1] package. • examples/Calculus1Visualization: An overview of visualization in the Student[Calculus1] package. • DataFrame/Guide: A guide to working with data frames. • examples/AirPassengers: An example of forecasting future air passenger data using TimeSeriesAnalysis. • examples/DataFrame/Statistics: Examples of using commands from Statistics on data frames. • examples/DataFrame/Subsets: Examples of finding subsets for data frames. • examples/DataSets/BubblePlot: Example of visualizing multiple data sets using BubblePlot. • examples/DataSets/Choropleth/CustomData: Example of visualizing custom data using choropleth maps. • examples/GlobalTemperature: An example of forecasting future average global temperature data using TimeSeriesAnalysis. • examples/IrisData: Examples of using summary statistics and principal component analysis on the Iris data set. • examples/RobustStatistics: An overview of the Statistics package commands for describing data sets that have noisy measurements. • examples/StatisticsDataSmoothing: An overview of the Statistics package commands for performing data smoothing. • examples/StatisticsEstimation: An overview of the Statistics package commands for statistical estimation, including maximum likelihood estimation. • examples/StatisticsHypothesisTesting: An overview of the Statistics package commands for hypothesis testing and inference. • examples/StatisticsProbabilityDistributions: An overview of the Statistics package commands for statistical distributions and manipulating random variables. • examples/SteadyStateMarkovChain: Compute the steady-state vector of a Markov chain. • examples/Student/Statistics: A selection of examples covering material in a introductory statistics course. • examples/Student/Statistics,DescriptiveStatisticsQuiz: An example worksheet showing how to create quizzes for descriptive statistics. • examples/AirPassengers: An example of forecasting future air passenger data using TimeSeriesAnalysis. 
App Authoring

• examples/EmbeddedComponents/ExploreApp: Building an application with Explore.
• examples/EmbeddedComponents/NumberLine: Building an interactive number line.
• examples/Explore: Examples using the Explore command to construct and insert a collection of embedded components used to explore an expression or a plot. (A minimal sketch follows this list.)
• examples/ProgrammaticContentGeneration: Examples for programmatically generating worksheet content.
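As a taste of what the Explore examples cover, a one-slider exploration of a damped sine wave (an illustrative expression, not taken from the worksheet):

    # Explore builds the slider and plot components automatically.
    Explore(plot(exp(-k*x)*sin(x), x = 0 .. 4*Pi),
            parameters = [k = 0 .. 1]);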
Maplet Applications

• examples/AdvancedMapletsLayout: An example Maplet application providing detailed information on how Maplet layouts work.
• examples/AlertMaplet: An example Maplet application providing advanced information for the Alert example Maplet application.
• examples/BezoutMatrixMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra BezoutMatrix Maplet application.
• examples/ConditionNumberMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra ConditionNumber Maplet application.
• examples/ConfirmMaplet: An example Maplet application providing advanced information for the example Confirm Maplet application.
• examples/ExampleMaplets: An overview of example Maplet applications.
• examples/GetColorMaplet: An example Maplet application providing advanced information for the GetColor example Maplet application.
• examples/GetEquationMaplet: An example Maplet application providing advanced information for the GetEquation example Maplet application.
• examples/GetExpressionMaplet: An example Maplet application providing advanced information for the GetExpression example Maplet application.
• examples/GetFileMaplet: An example Maplet application providing advanced information for the GetFile example Maplet application.
• examples/GetInputMaplet: An example Maplet application providing advanced information for the GetInput example Maplet application.
• examples/HilbertMatrixMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra HilbertMatrix Maplet application.
• examples/IntegrationMaplet: An example Maplet application providing advanced information for the advanced Integration Maplet application.
• examples/MapletBuilder: An introduction to creating Maplets using Maplet Builder.
• examples/MapletBuilderAdvanced: An introduction to creating Advanced Maplets using Maplet Builder.
• examples/MapletsLayout: An introduction to Maplet application layout and design.
• examples/MapletsStyleGuide: A list of guidelines for writing readable Maplet application code.
• examples/MapletsTutorial: An introduction to writing Maplet applications. (A minimal sketch follows this list.)
• examples/MatrixNormMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra MatrixNorm Maplet application.
• examples/MessageMaplet: An example Maplet application providing advanced information for the Message example Maplet application.
• examples/QRDecompositionMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra QRDecomposition Maplet application.
• examples/QuestionMaplet: An example Maplet application providing advanced information for the Question example Maplet application.
• examples/SelectionMaplet: An example Maplet application providing advanced information for the Selection example Maplet application.
• examples/ShowTableMaplet: An example Maplet application providing advanced information for the advanced ShowTable Maplet application.
• examples/SingularValuesMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra SingularValues Maplet application.
• examples/VectorNormMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra VectorNorm Maplet application.
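For orientation before opening these worksheets: a Maplet is a nested list of elements handed to Maplets[Display]. A minimal sketch (an illustrative layout, not taken from any of the listed worksheets):

    with(Maplets[Elements]):
    # One label row and one button row; the button closes the Maplet.
    m := Maplet([
        ["A minimal example Maplet"],
        [Button("OK", Shutdown())]
    ]):
    Maplets[Display](m);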
• examples/LA_Linear_Solve: Using the LinearAlgebra package to solve systems. (A minimal sketch follows this list.)
• examples/LA_NAG: Using the LinearAlgebra package with NAG routines.
• examples/LA_options: LinearAlgebra package options.
• examples/LA_Syntax_Shortcuts: LinearAlgebra package shortcuts.
• examples/LinearAlgebraComputation: An overview of matrix and vector computations in the Student[LinearAlgebra] package.
• examples/LinearAlgebraInteractive: An overview of the Maplet interface routines in the Student[LinearAlgebra] package.
• examples/LinearAlgebraMigration: An overview of converting worksheets that use the superseded linalg package to use the LinearAlgebra and VectorCalculus packages.
• examples/LinearAlgebraVisualization1: An overview of visualization for vector, plane, and linear system problems in the Student[LinearAlgebra] package.
• examples/LinearAlgebraVisualization2: An overview of visualization for least squares approximation and eigenvector problems in the Student[LinearAlgebra] package.
• examples/StudentLinearAlgebra: A selection of examples illustrating the use of Student[LinearAlgebra] package commands.
• examples/Ore_algebra: The Ore algebras package.
• examples/PolynomialIdeals: Overview of the PolynomialIdeals package.
• examples/QuantifierElimination: An overview of the QuantifierElimination package.
• examples/RegularChains: Studying and solving polynomial systems with the RegularChains library.
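The kind of system-solving the LA_Linear_Solve worksheet walks through looks like this (illustrative 2x2 data):

    with(LinearAlgebra):
    A := Matrix([[2, 1], [1, 3]]):
    b := Vector([3, 5]):
    x := LinearSolve(A, b);   # exact solution of A . x = b, here [4/5, 7/5]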
• examples/algcurve: A package for working with algebraic curves.
• examples/archi: Archimedean solids.
• examples/ConvexHull: Examples of Convex Hulls in ComputationalGeometry and PolyhedralSets.
• examples/dual: Duality of polyhedra.
• examples/geometry: The Maple geometry package.
• examples/regular: Regular polygons.
• examples/stellate: Stellated polyhedra.
• examples/transform: Geometric transformations.

Discrete Mathematics

• Introduction to the Combinatorial Structures package: An introduction to combstruct. Learn the basics of specifications, how to get counting sequences, and how to use predefined structures (including subsets, permutations, and combinations).
• Combinatorial Structures Package, Sample Structures: A simple collection of combstruct examples showing how to generate random trees, investigate the distribution of height by simulation, enumerate functional graphs, alcohols, necklaces, expression trees, and more.
• The Combstruct Package, Generating Functions: It is possible to produce generating function equations and to solve some of them. Also, there is the allstructs function, which performs exhaustive structure generation. (A minimal sketch follows this list.)
• Attribute Grammars and Combinatorics: Attribute grammars are a way to express recursively defined properties of structures. They are available in the combstruct package.
• Q-Difference Equations: An overview of the QDifferenceEquations package.
• Wavelet Transforms: An introduction to the mathematical concepts behind the wavelet transforms available, and the ways in which you would implement such concepts in Maple.
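A minimal taste of combstruct, assuming a hypothetical binary-tree grammar for the random-drawing step (the worksheets use richer specifications):

    with(combstruct):
    count(Subset({a, b, c, d, e}), size = 3);        # 10 three-element subsets
    allstructs(Subset({a, b, c, d, e}), size = 3);   # exhaustively list them all
    # A grammar whose objects are binary trees with atoms at the leaves.
    bt := {B = Union(Z, Prod(B, B)), Z = Atom}:
    draw([B, bt, unlabelled], size = 7);             # one random tree with 7 leaves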
Number Theory

• examples/GaussInt: Examples on working with Gaussian Integers.
• examples/NumberTheory/ArithmeticFunctions: Selected examples on arithmetic functions.
• examples/NumberTheory/Divisibility: Selected examples on divisibility.
• examples/NumberTheory/MersennePrimes: Overview of commands relating to Mersenne primes.
• examples/NumberTheory/PrimeNumbers: Overview of working with prime numbers.
• examples/elliptic: Elliptic Integration examples.
• examples/elliptic2: More Elliptic Integration examples.

Integral Transforms

• examples/addtable: Extending the power of the integral transforms in Maple. (A minimal sketch of the inttrans commands follows this list.)
• examples/fourier: An illustration of the Fourier transform in Maple.
• examples/hankel: The Hankel transform.
• examples/hilbert: The Hilbert transform.
• examples/laplace: The Laplace transform.
• examples/mellin: The Mellin transform.
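The transforms these worksheets cover all live in the inttrans package; a minimal sketch:

    with(inttrans):
    laplace(t*exp(-3*t), t, s);      # 1/(s+3)^2
    invlaplace(1/(s^2 + 1), s, t);   # sin(t)
    fourier(exp(-abs(t)), t, w);     # 2/(w^2+1)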
Differential Equations

• examples/deplot: Special facilities for plotting solutions of differential equations. (A minimal sketch follows this list.)
• examples/deplot3d: Plotting solutions of differential equations in three dimensions.
• examples/DEplotSystems: Describes the default models for the DEplot[interactive] differential system tool.
• examples/DifferentialThomas: A package for differential elimination using Thomas decomposition.
• examples/diffop: A subpackage (of DEtools) for differential operators.
• examples/linearode: Examples of determining closed-form solutions using dsolve.
• examples/NumericDDEs: Examples of numeric differential equations with delay.
• examples/pdsolve_boundaryconditions: Describes how pdsolve can adjust arbitrary functions and constants of PDE solutions, such that boundary conditions are satisfied.
• examples/poincare: A package for Hamiltonian equations.
• examples/slode: The Slode package.
• examples/SuitcaseModel: An example of modeling a delay differential equation.
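The plotting facilities that examples/deplot describes come from DEtools. A minimal sketch for the logistic equation (an illustrative ODE, not the worksheet's own example):

    with(DEtools):
    ode := diff(y(x), x) = y(x)*(1 - y(x)):
    # Direction field plus two numerically integrated solution curves.
    DEplot(ode, y(x), x = 0 .. 8, [[y(0) = 0.1], [y(0) = 1.5]],
           y = 0 .. 2, linecolor = [blue, red]);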
Differential-Algebraic Equations

• examples/numeric_DAE: Overview of using numeric differential-algebraic equation solvers.

Symbolic Calculations

• examples/applyrl: Rule-based symbolic programming.
• examples/define: Using the Maple define command to specify a function by its behavior.
• examples/functionaloperators: Defining and using various forms of functional operators.
• examples/Mathieu: An overview of the Mathieu mathematical functions in Maple.
• examples/minimize: Using the Maple minimize command.
• examples/patmatch: How to use the Maple pattern-matching algorithms.
• examples/piecewise: Using the piecewise function.
• examples/RootOf: Using the RootOf function.
• examples/solve: The Maple equation solver.
• examples/UsefulMapleFunctions: An overview of some useful Maple functions.

Numeric Calculations

• examples/numeric_DAE: Overview of using numeric differential-algebraic equation solvers.
• examples/Optimization: Overview of using the Optimization package to find local optima. It includes linear programming, quadratic programming, nonlinear programming, and least squares examples. (A minimal sketch follows this list.)
• examples/OptimizationLPSolve: Examples of using the Optimization:-LPSolve command and a brief explanation of the algorithms used.
• examples/OptimizationMatrixForm: Overview of using the matrix-form calling sequences in the Optimization package.
• examples/specfcn: Illustrations of special functions available in Maple.
• examples/StudentNumericalAnalysis: An overview of the Student[NumericalAnalysis] package.
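A minimal sketch of two Optimization entry points the list above mentions (illustrative objective and constraints):

    with(Optimization):
    # Linear programming: maximize 3x + 5y over a small polytope.
    LPSolve(3*x + 5*y, {x + 2*y <= 14, x - y <= 2},
            assume = nonnegative, maximize);
    # Nonlinear programming: a constrained local minimum.
    NLPSolve((x - 1)^2 + (y - 2)^2, {x + y <= 2});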
Mathematical Visualization

• examples/algcurve: A package for working with algebraic curves.
• examples/BranchCuts: A visual exploration of branches and branch cuts for the inverse trig and hyperbolic functions.
• examples/Calculus1Derivatives: Differentiation in the visualization component of the Student[Calculus1] package.
• examples/Calculus1DiffApps: Differentiation applications in the visualization component of the Student[Calculus1] package.
• examples/Calculus1IntApps: Integration applications in the visualization component of the Student[Calculus1] package.
• examples/Calculus1Integration: Integration in the visualization component of the Student[Calculus1] package.
• examples/Calculus1Tangents: Tangents, function inverses, and plotting by sampling in the visualization component of the Student[Calculus1] package.
• examples/Calculus1Theorems: Differentiation theorems in the visualization component of the Student[Calculus1] package.
• examples/Calculus1Visualization: An overview of visualization in the Student[Calculus1] package.
• examples/CurveFitting: The CurveFitting package.
• examples/deplot: Special facilities for plotting solutions of differential equations.
• examples/deplot3d: Plotting solutions of differential equations in three dimensions.
• examples/Explore: Examples using the Explore command.
• examples/GraphTheory: An overview of the GraphTheory package.
• examples/Interpolation_and_Smoothing: An introduction to interpolation and smoothing of given two-dimensional and three-dimensional data in Maple.
• examples/knots: Examples for visualizing various knots.
• examples/StudentPrecalculus: An overview of the Student[Precalculus] package.

See the Plotting Guide for other sample graphs.

• applications/BlackScholes: Example to compute the option price using three different methods. (A formula-level sketch follows this list.)
• applications/FFTOptionPricing: An application that calculates an option price using FFTs.
• applications/TheGreeks: Compute various measurements of risk in mathematical finance.
• examples/finance: Examples of calculations for personal finance.
• Finance/Examples/AsianOptions: Example pricing Asian options using the Finance package.
• Finance/Examples/CalendarsAndDayCounters: Introduction to day count conventions and working with calendars in the Finance package.
• Finance/Examples/EuropeanOptions: Example pricing European options using the Finance package.
• Finance/Examples/LocalVolatility: Example that computes local volatility and implied volatility using the Finance package.
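One of the methods such option-pricing applications typically compare is the closed-form Black-Scholes formula; here it is transcribed directly in Maple (plain formula, not the Finance package's API; parameter values are illustrative):

    # Standard normal CDF via erf, then the Black-Scholes call price.
    Phi := z -> (1 + erf(z/sqrt(2)))/2:
    BSCall := proc(S, K, r, sigma, T)
        local d1, d2;
        d1 := (ln(S/K) + (r + sigma^2/2)*T)/(sigma*sqrt(T));
        d2 := d1 - sigma*sqrt(T);
        S*Phi(d1) - K*exp(-r*T)*Phi(d2);
    end proc:
    evalf(BSCall(100, 100, 0.05, 0.2, 1));   # about 10.45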
Grading and Assessment

• examples/Quiz: An introduction to generating interactive tests using the Grading package. If desired, you can create a quiz that can then be exported as a Maple T.A. course module.
• examples/Student/Statistics/DescriptiveStatisticsQuiz: An example worksheet showing how to create quizzes for descriptive statistics.

Science and Engineering

• applications/AmplifierGain: An application that plots the gain of an amplifier circuit for both ideal and non-ideal response using commands.
• applications/AmplifierGainApp: An application that plots the gain of an amplifier circuit for both ideal and non-ideal response using sliders and input fields.
• applications/AntennaArray: An application that calculates the array factor and directivity for a uniform linear antenna array.
• applications/BandpassFilter: An application to investigate the frequency response for a bandpass filter.
• applications/BeamDistributedPointLoad: An application that derives an expression for the deflection of a beam with a distributed load and a point load.
• applications/BinaryDistillation: An application that finds the required number of stages for separating two liquid components by using the McCabe-Thiele method.
• applications/BivariatePolynomialRegression: Example to find the surface of best fit for a 3-D data set.
• applications/BoltGroupCoefficient: An application that calculates the bolt coefficient for eccentrically loaded bolt groups.
• applications/BouncingBall: Example on how event modeling in the dsolve command can be used to model a ball bouncing on a hilly terrain. (A minimal event-handling sketch follows this list.)
• applications/CatalyticCrackingOfEthane: An application that calculates the equilibrium composition of the catalytic cracking of ethane.
• applications/ChemicalKineticsParameterEstimation: An application that estimates the rate parameters for a reversible chemical reaction.
• applications/CountercurrentHeatExchanger: An application that models the temperature dynamics of a countercurrent double-pipe heat exchanger.
• applications/DCMotor: Example to obtain the transfer function and state space model of a system, and then design an LQR controller.
• applications/DigitalFilterDesign: An application that demonstrates the design and analysis of a discrete filter.
• applications/EconomicPipeSizing: An application that minimizes the total cost of pipework across the lifetime of a plant.
• applications/FilteringAudioApp: An application that lets you apply filters to a WAV file.
• applications/FilteringFrequencyDomainNoise: An application that demonstrates frequency filtering.
• applications/FitHeadFlowRateData: An application that fits head-flow rate data to a pump curve.
• applications/FrequencyDomainSystemIdentification: Example of determining the parameters of a model when its structure is known.
• applications/FuelPod: An application for optimizing the design of a fuel pod.
• applications/GasOrifice: An application that calculates the flow rate through a large-diameter orifice.
• applications/GibbsEnergyOfFormationOfEthanol: An application that calculates the Gibbs energy of formation of ethanol (C[2]H[5]OH) at any temperature, employing thermodynamic data.
• applications/HarmonicOscillator: An application that illustrates a second-order harmonic oscillator under different control strategies.
• applications/HelicalSpring: An application that optimizes the design of a helical spring.
• applications/ImageProcessing: An application that tests the effectiveness of image processing algorithms.
• applications/InteractingTanks: An application that models liquid flow between three tanks connected by two pipes.
• applications/InterauralTimeDelay: An application that introduces a delay in one channel of a sound to change the characteristics of the output.
• applications/InvertedPendulum: An application that simulates the dynamics of an inverted pendulum on a cart.
• applications/MaxFlowRatePartiallyFilledPipe: An application that finds the maximum flow rate in a partially filled circular pipe.
• applications/MaxPressureSurge: An application that finds the excess pressure generated by water hammer due to instantaneous valve closure.
• applications/MOSFETParameterEstimation: An application that estimates the KP and VTO SPICE parameters for an n-channel enhancement mode MOSFET.
• applications/PaintProcess: An application that illustrates how Maple was used to identify and subsequently correct the source of error between the model-predicted and actual concentrations of paint produced.
• applications/PumpPower: An application that calculates the pump power for flow between two reservoirs.
• applications/PyramidalHorn: An application that calculates the optimum design parameters for an X-band pyramidal horn.
• applications/RadiatorDesign: An application that demonstrates the use of Maple's connectivity to CAD systems, and presents a typical use case for the design of a radiator assembly.
• applications/ReactionSpontaneity: An application that calculates the temperature at which the reaction of oxygen and nitrogen to form nitrogen monoxide becomes spontaneous.
• applications/RobotArm: An application that models a robot arm with three degrees of freedom.
• applications/SettlingVelocity: An application that finds the terminal velocity of a particle settling in a fluid.
• applications/SignalGeneration: Example that demonstrates how to efficiently generate signals.
• applications/SimplySupportedBeam: An application that performs a design analysis on a simply supported beam with torsional loading.
• applications/SingleStubMatching: An application that calculates the position of a load impedance on a transmission line, terminated by a short circuit.
• applications/SunspotPeriodicity: An application that finds the period of sunspots using FFTs as well as autocorrelation.
• applications/StabilityAnalysis: An application that shows how Maple can be used to control the re-entry path of a space shuttle by examining the boundaries of constant gain and phase margins.
• applications/ThreeReservoirProblem: An application that calculates the flow rates, flow directions, and head at the common junction connecting three reservoirs.
• applications/TunedMassDamper: An application that finds the optimum spring and damping constant for a tuned mass damper.
• applications/VehicleRide: Analyze vehicle ride and handling.
• applications/WaterHammer: An example of how differential equations can model pressure dynamics at a water valve.
• applications/WaveHeight: An application that fits wave height data to a probability distribution.
• applications/WeldedBeam: An application that optimizes the design of a welded beam to minimize cost.
• ElCentroEarthquakeAnalysis: An application that analyzes accelerometer data from the 1940 El Centro earthquake.
• examples/DynamicSystems: Examples of creating, manipulating, simulating, and plotting linear systems models.
• examples/Physics: Examples for the Physics package.
• examples/SCApps: Applications of the ScientificConstants package.
• examples/SEAApps: Applications of the ScientificErrorAnalysis package.
• examples/SignalProcessing: Examples of frequency domain analysis, windowing, filtering, and signal analysis using the Signal Processing package.
• Thermal Engineering with Maple: A collection of applications related to thermal engineering.
• Thermal Engineering/Heat Transfer/Energy Needed to Vaporize Ethanol: An application that calculates the energy needed to vaporize liquid ethanol at an initial temperature and pressure.
• Thermal Engineering/Heat Transfer/Heat Transfer Coefficient across Flat Plate: An application that calculates the heat transfer coefficient of air flowing across a flat plate.
• Thermal Engineering/Misc/Particle Falling through Air: An application that models a particle falling through air.
• Thermal Engineering/Psychrometric Modeling/Adiabatic Mixing of Air: An application that mixes humid air and plots the thermodynamic process on a psychrometric chart.
• Thermal Engineering/Psychrometric Modeling/Human Comfort Zone: An application that conditions air into the human comfort zone and plots the thermodynamic process on a psychrometric chart.
• Thermal Engineering/Misc/Saturation Temperature of Fluids: An application that plots the saturation temperature, or boiling point, of a user-selected fluid as a function of pressure.
• Thermal Engineering/Refrigeration/Flow through an Expansion Valve: An application that models the thermodynamics of flow through an expansion valve.
• Thermal Engineering/Refrigeration/Refrigeration Cycle Analysis 1: An application that analyzes a vapor-compression refrigeration cycle.
• Thermal Engineering/Thermodynamic Cycles/Optimize a Rankine Cycle: An application that optimizes the efficiency of a regenerative Rankine cycle.
• Thermal Engineering/Thermodynamic Cycles/Organic Rankine Cycle: An application that analyzes a subcritical organic Rankine cycle.
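The event mechanism that applications/BouncingBall builds on is the events option of dsolve/numeric. A stripped-down sketch: a ball dropped from 10 m onto flat ground (not the worksheet's hilly terrain), losing 10% of its speed at each bounce:

    sol := dsolve({diff(y(t), t, t) = -9.81, y(0) = 10, D(y)(0) = 0},
                  numeric,
                  events = [[y(t) = 0, diff(y(t), t) = -0.9*diff(y(t), t)]]):
    sol(3.0);                            # state between the first and second bounce
    plots[odeplot](sol, [t, y(t)], 0 .. 6);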
Alphabetical Listing of all Examples and Applications

• applications/AmplifierGain: An application that plots the gain of an amplifier circuit for both ideal and non-ideal response using commands.
• applications/AmplifierGainApp: An application that plots the gain of an amplifier circuit for both ideal and non-ideal response using sliders and input fields.
• applications/AntennaArray: An application that calculates the array factor and directivity for a uniform linear antenna array.
• applications/BandpassFilter: An application to investigate the frequency response for a bandpass filter.
• applications/BeamDistributedPointLoad: An application that derives an expression for the deflection of a beam with a distributed load and a point load.
• applications/BinaryDistillation: An application that finds the required number of stages for separating two liquid components by using the McCabe-Thiele method.
• applications/BivariatePolynomialRegression: Example to find the surface of best fit for a 3-D data set.
• applications/BlackScholes: Example to compute the option price using three different methods.
• applications/BoltGroupCoefficient: An application that calculates the bolt coefficient for eccentrically loaded bolt groups.
• applications/BouncingBall: Example on how event modeling in the dsolve command can be used to model a ball bouncing on a hilly terrain.
• applications/CatalyticCrackingOfEthane: An application that calculates the equilibrium composition of the catalytic cracking of ethane.
• applications/ChemicalKineticsParameterEstimation: An application that estimates the rate parameters for a reversible chemical reaction.
• applications/CountercurrentHeatExchanger: An application that models the temperature dynamics of a countercurrent double-pipe heat exchanger.
• applications/DCMotor: Example to obtain the transfer function and state space model of a system, and then design an LQR controller.
• applications/DigitalFilterDesign: An application that demonstrates the design and analysis of a discrete filter.
• applications/EconomicPipeSizing: An application that minimizes the total cost of pipework across the lifetime of a plant.
• applications/FFTOptionPricing: An application that calculates an option price using FFTs.
• applications/FilteringAudioApp: An application that lets you apply filters to a WAV file.
• applications/FilteringFrequencyDomainNoise: An application that demonstrates frequency filtering.
• applications/FitHeadFlowRateData: An application that fits head-flow rate data to a pump curve.
• applications/FrequencyDomainSystemIdentification: Example of determining the parameters of a model when its structure is known.
• applications/FuelPod: An application for optimizing the design of a fuel pod.
• applications/GasOrifice: An application that calculates the flow rate through a large-diameter orifice.
• applications/GibbsEnergyOfFormationOfEthanol: An application that calculates the Gibbs energy of formation of ethanol (C[2]H[5]OH) at any temperature, employing thermodynamic data.
• applications/HarmonicOscillator: An application that illustrates a second-order harmonic oscillator under different control strategies.
• applications/HelicalSpring: An application that optimizes the design of a helical spring.
• applications/ImageProcessing: An application that tests the effectiveness of image processing algorithms.
• applications/InteractingTanks: An application that models liquid flow between three tanks connected by two pipes.
• applications/InterauralTimeDelay: An application that introduces a delay in one channel of a sound to change the characteristics of the output.
• applications/InvertedPendulum: An application that simulates the dynamics of an inverted pendulum on a cart.
• applications/MaxFlowRatePartiallyFilledPipe: An application that finds the maximum flow rate in a partially filled circular pipe.
• applications/MaxPressureSurge: An application that finds the excess pressure generated by water hammer due to instantaneous valve closure.
• applications/MOSFETParameterEstimation: An application that estimates the KP and VTO SPICE parameters for an n-channel enhancement mode MOSFET. • applications/PaintProcess: An application that illustrates how Maple was used to identify and subsequently correct the source of error between the model-predicted and actual concentrations of paint produced. • applications/PumpPower: An application that calculates the pump power for flow between two reservoirs. • applications/PyramidalHorn: An application that calculates the optimum design parameters for an X-band pyramidal horn. • applications/RadiatorDesign: An application that demonstrates the use of Maple's connectivity to CAD systems, and presents a typical use-case for the design of a radiator assembly. • applications/RobotArm: An application that models a robot arm with three degrees of freedom. • applications/ReactionSpontaneity: An application that calculates the temperature at which the reaction of oxygen and nitrogen to form nitrogen monoxide becomes spontaneous. • applications/SettlingVelocity: Find the terminal velocity of a particle settling in a fluid. • applications/SignalGeneration: Example that demonstrates how to efficiently generate signals. • applications/SimplySupportedBeam: An application that performs a design analysis on a simply supported beam with torsional loading. • applications/SingleStubMatching: An application that calculates the position of a load impedance on a transmission line, terminated by a short circuit. • applications/StabilityAnalysis: An application that shows how Maple can be used to control the re-entry path of a space shuttle by examining the boundaries of constant gain and phase margins. • applications/SunspotPeriodicity: An application that finds the period of sunspots using FFTs as well as autocorrelation. • applications/TheGreeks: An application that computes various measurements of risk in mathematical finance. • applications/ThreeReservoirProblem: An application that calculates the flow rates, flow directions, and head at the common junction connecting three reservoirs. • applications/TunedMassDamper: An application that finds the optimum spring and damping constant for a tuned mass damper. • applications/VehicleRide: An application that analyzes vehicle ride and handling. • applications/WaterHammer: An example of how differential equations can model pressure dynamics at a water valve. • applications/WaveHeight: An application that fits wave height data to a probability distribution. • applications/WeldedBeam: An application that optimizes the design of a welded beam to minimize cost. • Canvas/ElementsAndLayout: Examples that illustrate the elements of a canvas in Maple and Maple Learn. • DataFrame/Guide: A guide to working with data frames. • ElCentroEarthquakeAnalysis: An application that analyzes accelerometer data from the 1940 El Centro earthquake. • Example/DirectionFieldPlot: Visualization of direction field plots using the DocumentTools,Canvas package. • Example/FindPrimes: Using DocumentTools/Canvas to check whether the user has input prime numbers or not. • Example/NormalDistribution: The histogram and density plot are computed for a list of student grades using the DocumentTools/Canvas subpackage. • Example/SolveFeedback: In this example, a user is asked to solve a linear equation. The DocumentTools/Canvas subpackage is used to check their work and provide feedback. • Example/WordProblem: Using DocumentTools/Canvas commands to create a word problem example.
• examples/addtable: Extending the power of the integral transforms in Maple. • examples/AdvancedMapletsLayout: An example Maplet application providing detailed information on how Maplet layouts work. • examples/AirPassengers: An example of forecasting future air passenger data using TimeSeriesAnalysis. • examples/AlertMaplet: An example Maplet application providing advanced information for the Alert example Maplet application. • examples/algcurve: A package for working with algebraic curves. • examples/applyrl: Rule-based symbolic programming. • examples/archi: Archimedean solids. • examples/BezoutMatrixMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra BezoutMatrix Maplet application. • examples/binarytree: A demonstration of programming a Maple package, this worksheet implements a dictionary structure that uses binary trees. • examples/BranchCuts: A visual exploration of branches and branch cuts for the inverse trig and hyperbolic functions. • examples/Calculus1Derivatives: Differentiation in the visualization component of the Student[Calculus1] package. • examples/Calculus1DiffApps: Differentiation applications in the visualization component of the Student[Calculus1] package. • examples/Calculus1IntApps: Integration applications in the visualization component of the Student[Calculus1] package. • examples/Calculus1Integration: Integration in the visualization component of the Student[Calculus1] package. • examples/Calculus1SingleStepping: An overview of single-step problem solving in the Student[Calculus1] package. • examples/Calculus1Tangents: Tangents, function inverses, and plotting by sampling in the visualization component of the Student[Calculus1] package. • examples/Calculus1Theorems: Differentiation theorems in the visualization component of the Student[Calculus1] package. • examples/Calculus1Visualization: An overview of visualization in the Student[Calculus1] package. • examples/CodeCoverage: A demonstration of programming a Maple package, this worksheet creates a package for code coverage profiling. • examples/CodeGeneration: An introduction to the Maple package that controls the translation of Maple code to other languages. • examples/combstruct_attributes: Attribute grammars are a way to express recursively defined properties of structures. They are available in the combstruct package. • examples/combstruct_gen_funcs: It is possible to produce generating function equations and to solve some of them. Also, there is the allstructs function which performs exhaustive structure generation. • examples/combstruct_grammars: An introduction to combstruct. Learn the basics of specifications, how to get counting sequences, and how to use predefined structures (including subsets, permutations, and combinations). • examples/combstruct_sample_structs: A simple collection of combstruct examples showing how to generate random trees, investigate the distribution of height by simulation, enumerate functional graphs, alcohols, necklaces, expression trees, and more. • examples/CommandTemplate: Template for top-level command help page. • examples/ConditionNumberMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra ConditionNumber Maplet application. • examples/ConfirmMaplet: An example Maplet application providing advanced information for the example Confirm Maplet application. • examples/ContextMenu: Examples of context menu use.
• examples/ContextMenu/PackageContextMenus: Examples of creating package-specific context menus. • examples/ConvexHull: Examples of Convex Hulls in ComputationalGeometry and PolyhedralSets. • examples/CurveFitting: The CurveFitting package. • examples/DatabaseGrades: An example of creating and modifying a database. • examples/DataFrame/Statistics: Examples of using commands from Statistics on data frames. • examples/DataFrame/Subsets: Examples of finding subsets for data frames. • examples/DataSets/BubblePlot: Example of visualizing multiple data sets using BubblePlot. • examples/DataSets/Choropleth/CustomData: Example of visualizing custom data using choropleth maps. • examples/DefaultUnits: Using units in the default Maple environment. • examples/define: Using the Maple define command to specify a function by its behavior. • examples/deplot: Special facilities for plotting solutions of differential equations. • examples/deplot3d: Plotting solutions of differential equations in three dimensions. • examples/DEplotSystems: Describes the default models for the DEplot[interactive] differential system tool. • examples/DifferentialThomas: A package for differential elimination using Thomas decomposition. • examples/diffop: A subpackage (of DEtools) for differential operators. • examples/Domains: A tool for developing code for complicated algorithms. • examples/dsolve_numeric_NewErrorControl: An introduction to error control methods for numerical ODE solutions. • examples/dual: Duality of polyhedra. • examples/DynamicSystems: Examples of creating, manipulating, simulating, and plotting linear systems models. • examples/elliptic: Elliptic integrals examples. • examples/elliptic2: Elliptic integrals examples. • examples/EmbeddedComponents/ExploreApp: Building an application with Explore. • examples/EmbeddedComponents/NumberLine: Building an interactive number line. • examples/evalntype: One of the extensions to the Maple type system. • examples/ExampleMaplets: An overview of example Maplet applications. • examples/Explore: Examples using the Explore command to construct and insert a collection of embedded components used to explore an expression or a plot. • examples/ExternalCalling: An introduction to the use of external compiled code. • examples/ExternalCode: Sample command lines for building an OpenMaple application. • examples/FileTools: An overview of the FileTools package and selected examples. • examples/finance: Examples of calculations for personal finance. • examples/fourier: An illustration of the Fourier transform in Maple. • examples/functionaloperators: Defining and using various forms of functional operators. • examples/GaussInt: Examples on working with Gaussian Integers. • examples/GenericGraphAlgorithms: Illustrates generic programming in an example using simple graph algorithms. • examples/GenericGroups: A demonstration of programming with Maple modules, using generic programming. • examples/geometry: The Maple geometry package. • examples/GetColorMaplet: An example Maplet application providing advanced information for the GetColor example Maplet application. • examples/GetEquationMaplet: An example Maplet application providing advanced information for the GetEquation example Maplet application. • examples/GetExpressionMaplet: An example Maplet application providing advanced information for the GetExpression example Maplet application. • examples/GetFileMaplet: An example Maplet application providing advanced information for the GetFile example Maplet application. 
• examples/GetInputMaplet: An example Maplet application providing advanced information for the GetInput example Maplet application. • examples/GlobalTemperature: An example of forecasting future average global temperature data using TimeSeriesAnalysis. • examples/GMP: An introduction to the GNU Multiple Precision (GMP) integer library and Maple exact arbitrary-precision integer arithmetic. • examples/GraphTheory: An overview of the GraphTheory package. • examples/hankel: The Hankel transform. • examples/hilbert: The Hilbert transform. • examples/HilbertMatrixMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra HilbertMatrix Maplet application. • examples/IntegrationMaplet: An example Maplet application providing advanced information for the advanced Integration Maplet application. • examples/Interpolation_and_Smoothing: An introduction to interpolation and smoothing of given two-dimensional and three-dimensional data in Maple. • examples/IrisData: Examples of using summary statistics and principal component analysis on the Iris data set. • examples/knots: Examples for visualizing various knots. • examples/LA_Linear_Solve: Using the LinearAlgebra package to solve systems. • examples/LA_NAG: Using the LinearAlgebra package with NAG routines. • examples/LA_options: LinearAlgebra package options. • examples/LA_Syntax_Shortcuts: LinearAlgebra package shortcuts. • examples/laplace: The Laplace transform. • examples/lexical: An explanation of lexical scoping in Maple. • examples/LinearAlgebraComputation: An overview of matrix and vector computations in the Student[LinearAlgebra] package. • examples/LinearAlgebraInteractive: An overview of the Maplet interface routines in the Student[LinearAlgebra] package. • examples/LinearAlgebraMigration: An overview of converting worksheets that use the superseded linalg package to use the LinearAlgebra and VectorCalculus packages. • examples/LinearAlgebraVisualization1: An overview of visualization for vector, plane, and linear system problems in the Student[LinearAlgebra] package. • examples/LinearAlgebraVisualization2: An overview of visualization for least squares approximation and eigenvector problems in the Student[LinearAlgebra] package. • examples/linearode: Examples of determining closed-form solutions using dsolve. • examples/LinkedListPackage: A demonstration of programming a Maple package, this worksheet implements a linked list structure and operations on the pairs. • examples/MapletBuilder: An introduction to creating Maplets using Maplet Builder. • examples/MapletBuilderAdvanced: An introduction to creating Advanced Maplets using Maplet Builder. • examples/MapletsLayout: An introduction to Maplet application layout and design. • examples/MapletsStyleGuide: A list of guidelines for writing readable Maplet application code. • examples/MapletsTutorial: An introduction to writing Maplet applications. • examples/Mathieu: An overview of the Mathieu mathematical functions in Maple. • examples/matlab: A demonstration of the Matlab package. • examples/MatrixNormMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra MatrixNorm Maplet application. • examples/mellin: The Mellin transform. • examples/memo: Using modules to implement memoization. • examples/MessageMaplet: An example Maplet application providing advanced information for the Message example Maplet application. • examples/minimize: Using the Maple minimize command. 
• examples/moreStudentMultivariateCalculus: Additional examples for the Student[MultivariateCalculus] package. • examples/MultivariateCalculus: An overview of the Student[MultivariateCalculus] package. • examples/NaturalUnits: Using units in the Natural Units environment. • examples/NumericDDEs: Examples of numeric differential equations with delay. • examples/NumberTheory/ArithmeticFunctions: Selected examples on arithmetic functions. • examples/NumberTheory/Divisibility: Selected examples on divisibility. • examples/NumberTheory/MersennePrimes: Overview of commands relating to Mersenne primes. • examples/NumberTheory/PrimeNumbers: Overview of working with prime numbers. • examples/numeric_DAE: Overview of using numeric differential-algebraic equation solvers. • examples/Optimization: Overview of using the Optimization package to find local optima. It includes linear programming, quadratic programming, nonlinear programming, and least squares examples. • examples/OptimizationLPSolve: Examples of using the Optimization:-LPSolve command and a brief explanation of the algorithms used. • examples/OptimizationMatrixForm: Overview of using the matrix-form calling sequences in the Optimization package. • examples/Ore_algebra: The Ore algebras package. • examples/PackageCommandTemplate: Template for package command help page. • examples/PackageOverviewTemplate: Template for package overview help page. • examples/patmatch: How to use the Maple pattern-matching algorithms. • examples/Physics: Examples for the Physics package. • examples/piecewise: Using the piecewise function. • examples/poincare: A package for Hamiltonian equations. • examples/PolynomialIdeals: Overview of the PolynomialIdeals package. • examples/PriorityQueues: A demonstration implementing priority queues using Maple modules. • examples/ProgrammaticContentGeneration: Examples for programmatically generating worksheet content. • examples/QDifferenceEquations: An overview of the QDifferenceEquations package. • examples/QRDecompositionMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra QRDecomposition Maplet application. • examples/QuantifierElimination: An overview of the QuantifierElimination package. • examples/QuestionMaplet: An example Maplet application providing advanced information for the Question example Maplet application. • examples/Quiz: An introduction to generating interactive tests using the Grading package. If desired, you can create a quiz that can then be exported as a Maple T.A. course module. • examples/QuotientFields: A generic quotient field implementation using generic programming. • examples/regular: Regular polygons. • examples/RegularChains: Studying and solving polynomial systems with the RegularChains library. • examples/RobustStatistics: An overview of the Statistics package commands for describing data sets that have noisy measurements. • examples/RootOf: Using the RootOf function. • examples/SCApps: Applications of the ScientificConstants package. • examples/SEAApps: Applications of the ScientificErrorAnalysis package. • examples/SearchEngine: Construct a simple search engine using Maple. • examples/SelectionMaplet: An example Maplet application providing advanced information for the Selection example Maplet application. • examples/ShowTableMaplet: An example Maplet application providing advanced information for the advanced ShowTable Maplet application. 
• examples/SignalProcessing: Examples of frequency domain analysis, windowing, filtering, and signal analysis using the Signal Processing package. • examples/SimpleUnits: Using units in the Simple Units environment. • examples/SingularValuesMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra SingularValues Maplet application. • examples/slode: The Slode package. • examples/solve: The Maple equation solver. • examples/specfcn: Illustrations of special functions available in Maple. • examples/spread: A package for programmatically manipulating spreadsheets. • examples/StandardUnits: Using units in the Standard Units environment. • examples/StatisticsDataSmoothing: An overview of the Statistics package commands for performing data smoothing. • examples/StatisticsEstimation: An overview of the Statistics package commands for statistical estimation, including maximum likelihood estimation. • examples/StatisticsHypothesisTesting: An overview of the Statistics package commands for hypothesis testing and inference. • examples/StatisticsProbabilityDistributions: An overview of the Statistics package commands for statistical distributions and manipulating random variables. • examples/SteadyStateMarkovChain: Example of how to compute the steady-state vector of a Markov chain. • examples/stellate: Stellated polyhedra. • examples/string: Using strings in Maple. • examples/StudentLinearAlgebra: A selection of examples illustrating the use of Student[LinearAlgebra] package commands. • examples/StudentNumericalAnalysis: An overview of the Student[NumericalAnalysis] package. • examples/StudentPrecalculus: An overview of the Student[Precalculus] package. • examples/Student/Statistics: A selection of examples covering material in an introductory statistics course. • examples/StudentVectorCalculus: An overview of the Student[VectorCalculus] package. • examples/SuitcaseModel: An example of modeling a delay differential equation. • examples/SymbolicDifferentiator: Illustrates various module concepts in a symbolic differentiator example. • examples/Task: Examples using the Task Programming Model for multithreaded programming. • examples/Threads: An overview of the Threads package. • examples/transform: Geometric transformations. • examples/UsefulMapleFunctions: An overview of some useful Maple functions. • examples/VectorCalculus: An overview of the VectorCalculus package. • examples/VectorNormMaplet: An example Maplet application providing advanced information for the advanced LinearAlgebra VectorNorm Maplet application. • examples/Wavelets: An introduction to the mathematical concepts behind the wavelet transforms available, and the ways in which you would implement such concepts in Maple. • examples/WorksheetPackage: An introduction to programmatic worksheet access with the Worksheet package. • Finance/Examples/AsianOptions: Example pricing Asian options using the Finance package. • Finance/Examples/CalendarsAndDayCounters: Introduction to day count conventions and working with calendars in the Finance package. • Finance/Examples/EuropeanOptions: Example pricing European options using the Finance package. • Finance/Examples/LocalVolatility: Example that computes local volatility and implied volatility using the Finance package. • Thermal Engineering with Maple: A collection of applications related to thermal engineering.
• Thermal Engineering/Heat Transfer/Energy Needed to Vaporize Ethanol: An application that calculates the energy needed to vaporize liquid ethanol at an initial temperature and pressure. • Thermal Engineering/Heat Transfer/Heat Transfer Coefficient across Flat Plate: An application that calculates the heat transfer coefficient of air flowing across a flat plate. • Thermal Engineering/Misc/Particle Falling through Air: An application that models a particle falling through air. • Thermal Engineering/Psychrometric Modeling/Adiabatic Mixing of Air: An application that mixes humid air and plots the thermodynamic process on a psychrometric chart. • Thermal Engineering/Psychrometric Modeling/Human Comfort Zone: An application that conditions air into the human comfort zone and plots the thermodynamic process on a psychrometric chart. • Thermal Engineering/Misc/Saturation Temperature of Fluids: An application that plots the saturation temperature, or boiling point, of a user-selected fluid as a function of pressure. • Thermal Engineering/Refrigeration/Flow through an Expansion Valve: An application that models the thermodynamics of flow through an expansion valve. • Thermal Engineering/Refrigeration/Refrigeration Cycle Analysis 1: An application that analyzes a vapor-compression refrigeration cycle. • Thermal Engineering/Thermodynamic Cycles/Optimize a Rankine Cycle: An application that optimizes the efficiency of a regenerative Rankine cycle. • Thermal Engineering/Thermodynamic Cycles/Organic Rankine Cycle: An application that analyzes a subcritical organic Rankine cycle.
{"url":"https://cn.maplesoft.com/support/help/errors/view.aspx?path=examples%2Findex","timestamp":"2024-11-04T15:19:35Z","content_type":"text/html","content_length":"499129","record_id":"<urn:uuid:1ab9679d-e52a-4868-af84-ba61d5b01275>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00708.warc.gz"}
Undergraduate Programs

Math Major (B.S. & B.A.)
The Math Department offers the Bachelor of Arts (B.A.) degree and the Bachelor of Science (B.S.) degree in mathematics. The mathematics requirements for the two degrees are identical; the difference between them is the number of science credits that are required. Mathematicians use theory, computational techniques, algorithms and computer technology to solve problems in various fields, including basic sciences, engineering, computer science, economics, business, finance, and social sciences. The study of mathematics is traditionally divided into pure (theoretical) mathematics, which involves discovering new mathematical principles and relationships, and applied mathematics, which develops mathematical techniques for solving practical problems. Statistics is a sub-field of applied mathematics that focuses on data analysis.

Mathematics Minor
A minor in mathematics consists of 18 credits in mathematics, with a grade of C or better in each course. Transfer students must complete a minimum of 9 of these credits at CSU to earn the minor.

Statistics Minor
Students interested in a career or graduate school that requires a research component involving data analysis should consider earning a Statistics minor. The study of mathematics, psychology, sociology, biology, environmental science, health science, social science, among others, can be greatly enhanced by a minor in statistics.

Mathematics Advisor for Majors and Minors
Dr. Leah Gold, Office: RT 1529, Phone: (216)-687-4565, Email: l.gold33@csuohio.edu. Advising appointments can be made through Starfish or via email.

Mathematics and Statistics Graduate Program
Dr. Richard Fan, Office: RT 1553, Phone: (216)-687-4706, Email: y.fan67@csuohio.edu

Mailing Address: Cleveland State University, Mathematics & Statistics, 2121 Euclid Ave., MTH, Cleveland, OH 44115-2214. Campus Location: Rhodes Tower (RT), 1860 E. 22nd Street, Rm. 1515. Contact Us: Phone: 216.687.4680
{"url":"https://artsandsciences.csuohio.edu/mathematics/undergraduate-programs","timestamp":"2024-11-15T03:17:27Z","content_type":"text/html","content_length":"82482","record_id":"<urn:uuid:a85e23f8-b411-4c18-9fd0-ef11d0667d00>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00129.warc.gz"}
Title: K-theoretic Torsion Invariants in Symplectic and Contact Topology.
Abstract: Torsion invariants can be used to detect nontrivial geometric phenomena when homological methods fail to do so — for example Reidemeister torsion was invented to distinguish lens spaces which are homotopy equivalent but not homeomorphic. These torsion invariants are K-theoretic in nature and can be understood from the viewpoint of Morse theory. Higher torsion invariants are their parametric analogues and can be understood from the viewpoint of parametrized Morse theory. In this talk I will describe a program joint with K. Igusa to develop Morse-theoretic torsion invariants in the context of symplectic and contact topology — more precisely to develop torsion invariants for Legendrian (resp. exact Lagrangian) submanifolds of 1-jet spaces (resp. cotangent bundles). Our invariants come in various flavors (Reidemeister torsion, Turaev torsion, Whitehead torsion) and also have parametric analogues. My main focus for this talk will be on higher Whitehead torsion, an invariant for Legendrians in 1-jet spaces whose nontriviality is guaranteed by the literature on Waldhausen's algebraic K-theory of spaces. After describing its definition, I will discuss work in progress to prove that the higher Whitehead torsion of exact Lagrangians in cotangent bundles must always vanish. We secretly hope our proof will fail, since a single example of an exact Lagrangian in a cotangent bundle with nontrivial higher Whitehead torsion would disprove the nearby Lagrangian conjecture — one of the outstanding open problems in symplectic topology.
**Meeting virtually through Zoom**
{"url":"https://www.math.columbia.edu/2021/01/12/january-12-daniel-alvarez-gavela/","timestamp":"2024-11-05T23:07:26Z","content_type":"text/html","content_length":"40592","record_id":"<urn:uuid:015cd86b-6142-4910-b9c2-074f5b1c1ce7>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00048.warc.gz"}
Area of a Quadrilateral Worksheets
The area of quadrilaterals worksheets with answers are a complete practice package for 6th grade, 7th grade, and 8th grade children. Featured here are exercises to identify the type and use appropriate formulas to find the area of quadrilaterals like rectangles, rhombuses, trapezoids, parallelograms and kites, with dimensions involving whole numbers and fractions, find the missing parameters, problems involving unit conversion and more. Three levels of PDFs are offered, categorized based on shapes. Plunge into practice with the free worksheets here! (The standard area formulas these worksheets rely on are collected after this listing.)
Level 1: Rectangle, Square and Parallelogram
Area of a Quadrilateral | Whole Numbers
Direct children of grade 6 to apply relevant formulas to calculate the area of quadrilaterals such as rectangles, parallelograms and squares, offered as figures and in word format, with whole number dimensions.
Area of a Quadrilateral | Fractions
Four-sided plane figures like rectangles, squares and parallelograms with dimensions given as fractions are featured in these printable area of a quadrilateral worksheets.
Level 2: Rectangle, Square, Parallelogram & Trapezoid
Area of a Quadrilateral | Whole Numbers
Level up your practice in calculating the area of a quadrilateral by introducing another quadrilateral, the trapezoid, in these calculate-the-area-of-quadrilaterals pdf worksheets for grade 7.
Area of a Quadrilateral | Fractions
Reiterate the concept of finding the area of quadrilaterals like squares, rectangles, parallelograms and trapezoids; plug the dimensions, presented as fractions, into appropriate formulas to determine the area.
Area of a Quadrilateral | Unit Conversion
The measures of the quadrilaterals are expressed in different units in these printable worksheets for middle school. Make the appropriate unit conversion to solve each problem.
Level 3: Rectangle, Square, Parallelogram, Trapezoid, Kite and Rhombus
Area of a Quadrilateral | Whole Numbers | Type 1
Raise the bar with these pdf quadrilaterals worksheets for grade 8, with the kite and rhombus being added. Identify the quadrilaterals, assign the given measures to the relevant formulas and compute the area.
Area of a Quadrilateral | Whole Numbers | Type 2
Apart from geometric figures, these pdfs enclose a few word-format questions on the area of quadrilaterals. Visualize the quadrilateral, recall its formula, plug in the whole-number lengths, and calculate the area.
Area of a Quadrilateral | Fractions
Escalate practice in finding the area of quadrilaterals by working out eight problems represented geometrically and in word format. Substitute the fractional dimensions in suitable formulas and find the area.
Area of a Quadrilateral | Unit Conversion
A side, a diagonal, the base, or the height – one of these lengths comes in a unit different from the square units indicated for the area of each quadrilateral. Convert them to the specified unit and proceed with the usual calculation.
Missing parameters, Area using grid
Find the Missing Parameter | Whole Numbers
Figure out the missing dimensions of the quadrilaterals using the given area. Rearrange the formula, make the unknown dimension the subject, substitute the known values and determine the missing parameter.
Find the Missing Parameter | Fractions
Plug the fractional value of the area and one of the given dimensions into relevant formulas and find the value of the missing parameters in this set of printable quadrilateral worksheets for 6th grade, 7th grade, and 8th grade.
Area of a Quadrilateral | Grid
The quadrilaterals are illustrated on grids. Figure out the dimensions using the grids and substitute them in relevant formulas to calculate the area of the quadrilateral. The answer key is included.
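For reference, these are the standard area formulas the worksheets above rely on, written with s for side, l and w for length and width, a and b for the parallel sides (or base), h for height, and d₁, d₂ for the diagonals:

\[
A_{\text{square}} = s^{2}, \quad
A_{\text{rectangle}} = l \times w, \quad
A_{\text{parallelogram}} = b \times h, \quad
A_{\text{trapezoid}} = \tfrac{1}{2}(a+b)\,h, \quad
A_{\text{rhombus}} = A_{\text{kite}} = \tfrac{1}{2}\,d_{1} d_{2}
\]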
{"url":"https://www.mathworksheets4kids.com/area-quadrilaterals.php","timestamp":"2024-11-03T13:41:21Z","content_type":"text/html","content_length":"48404","record_id":"<urn:uuid:f525a65c-eacc-44ad-85af-39a0e6253b6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00029.warc.gz"}
Marvin Harvested 16.5 Kilos of Pomelo While Melvin Got 5 Weighing 3.05 Kilos Each

Marvin harvested 16.5 kilos of pomelo while Melvin got 5 weighing 3.05 kilos each. Who harvested more kilos of pomelo? By how much?
A. What is asked in the problem?
B. What are the given facts?
C. What operation will be used?
D. What is the number sentence?
E. What is the answer?

A. In this problem, the question is asking which person harvested more kilos of pomelo and by how much.
B. The given facts are:
– Marvin harvested 16.5 kilos of pomelo.
– Melvin harvested 5 pomelos, and each pomelo weighs 3.05 kilos.
C. To determine who harvested more kilos of pomelo, we need to compare the total weight harvested by Marvin with the total weight harvested by Melvin. We can also use subtraction to find the difference in the amount harvested.
D. The number sentence representing this problem is: Total weight harvested by Marvin – Total weight harvested by Melvin = Difference in weight harvested
E. Calculating the answer:
– Total weight harvested by Marvin = 16.5 kilos
– Total weight harvested by Melvin = 5 pomelos x 3.05 kilos per pomelo = 15.25 kilos
Now, let's find the difference in weight harvested:
16.5 kilos (Marvin's harvest) – 15.25 kilos (Melvin's harvest) = 1.25 kilos
Marvin harvested 1.25 kilos more pomelo than Melvin.
Marvin harvested more kilos of pomelo by 1.25 kilos.

Step-by-step explanation:
A. What is asked in the problem? The problem asks who harvested more kilos of pomelo and by how much.
B. What are the given facts? Marvin harvested 16.5 kilos of pomelo. Melvin harvested 5 pomelos, with each weighing 3.05 kilos.
C. What operation will be used? To determine who harvested more kilos of pomelo, we need to compare the total weights of pomelos harvested by Marvin and Melvin.
D. What is the number sentence? To compare the total weights, we will subtract Melvin's total weight of pomelos from Marvin's total weight: Marvin's total weight – Melvin's total weight = Difference in weights
E. What is the answer? Let's calculate the total weight of pomelos harvested by Melvin: Melvin's total weight = 5 pomelos * 3.05 kilos/pomelo = 15.25 kilos
Difference in weights = Marvin's total weight – Melvin's total weight
Difference in weights = 16.5 kilos – 15.25 kilos = 1.25 kilos
Marvin harvested 1.25 kilos more pomelo than Melvin.
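As a quick sanity check on the arithmetic above, a tiny Python snippet (the variable names are mine) reproduces the comparison:

```python
marvin = 16.5           # kilos harvested by Marvin
melvin = 5 * 3.05       # 5 pomelos at 3.05 kilos each = 15.25
print(marvin - melvin)  # 1.25 -> Marvin harvested 1.25 kilos more
```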
{"url":"https://antivuvuzela.org/marvin-harvested-165-kilos-of-2632/","timestamp":"2024-11-09T03:49:34Z","content_type":"text/html","content_length":"149099","record_id":"<urn:uuid:2d30ba86-2dc1-4929-b957-d3d742f00ff3>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00242.warc.gz"}
Trainee Technology
You have been given a Snake and Ladder Board with 'N' rows and 'N' columns with the numbers written from 1 to (N*N) starting from the bottom left of the board, and alternating direction each row. For example, for a (6 x 6) board, the numbers are written as follows:

36 35 34 33 32 31
25 26 27 28 29 30
24 23 22 21 20 19
13 14 15 16 17 18
12 11 10  9  8  7
 1  2  3  4  5  6

You start from square 1 of the board (which is always in the last row and first column). On each square, say 'X', you can throw a dice which can have six outcomes; you have total control over the outcome of the dice throw, and you want to find out the minimum number of throws required to reach the last cell. Some of the squares contain Snakes and Ladders, and these are the possibilities of a throw at square 'X':
You choose a destination square 'S' with number 'X+1', 'X+2', 'X+3', 'X+4', 'X+5', or 'X+6', provided this number is <= N*N. If 'S' has a snake or ladder, you move to the destination of that snake or ladder. Otherwise, you move to S. A board square on row 'i' and column 'j' has a "Snake or Ladder" if board[i][j] != -1. The destination of that snake or ladder is board[i][j].
Note: You can only take a snake or ladder at most once per move: if the destination of a snake or ladder is the start of another snake or ladder, you do not continue moving - you have to ignore the snake or ladder present on that square.
For example, if the board is:

-1  1 -1
-1 -1  9
-1  4 -1

Let's say on the first move your destination square is 2 [at row 2, column 1], then you finish your first move at 4 [at row 1, column 2] because you do not continue moving to 9 [at row 0, column 2] by taking the ladder from 4.
A square can also have a Snake or Ladder which will end at the same cell. For example, if the board is:

-1  3 -1
-1  5 -1
-1 -1  9

Here we can see the Snake/Ladder on square 5 [at row 1, column 1] will end on the same square 5. (A BFS sketch for this problem appears below.)

Suppose list1 is [2, 133, 12, 12], what is max(list1) in Python?
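The interview problem above is stated without a solution, so here is one standard approach as a sketch: model squares 1..N*N as graph nodes, treat each of the six dice outcomes as an edge, and run a breadth-first search for the minimum number of throws. The function and helper names (`min_throws`, `coords`) are mine, and the coordinate mapping assumes the boustrophedon numbering described above.

```python
from collections import deque

def min_throws(board):
    """Minimum dice throws from square 1 to square n*n via BFS."""
    n = len(board)

    def coords(s):
        # Map square number s (1-based, zigzag from the bottom-left)
        # to (row, col) indices into the board matrix.
        q, r = divmod(s - 1, n)
        row = n - 1 - q
        col = r if q % 2 == 0 else n - 1 - r
        return row, col

    target = n * n
    dist = {1: 0}
    queue = deque([1])
    while queue:
        x = queue.popleft()
        if x == target:
            return dist[x]
        for s in range(x + 1, min(x + 6, target) + 1):
            r, c = coords(s)
            # Take the snake/ladder at most once, then stop moving.
            nxt = board[r][c] if board[r][c] != -1 else s
            if nxt not in dist:
                dist[nxt] = dist[x] + 1
                queue.append(nxt)
    return -1  # last cell unreachable
```

On the first 3 x 3 board above this returns 1: a single roll of 3 lands on square 4, whose ladder carries you straight to square 9.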
{"url":"https://www.naukri.com/code360/interview-experiences/travclan/interview-experience-by-deepali-on-campus-jan-2021-690","timestamp":"2024-11-03T16:45:30Z","content_type":"text/html","content_length":"509262","record_id":"<urn:uuid:5e5b982a-1e16-417e-ae94-1253b23f11b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00133.warc.gz"}
Class 10: DNA Melting Temperature

Polymerase Chain Reaction
• Developed by Kary Mullis in the 1980s
• Used to synthesize millions of copies of a given DNA sequence
PCR is based on
• Separating double-strand DNA using high temperatures, and
• Complementing each DNA single-strand using a polymerase that works at these high temperatures

Taq DNA polymerase
Isolated from Thermus aquaticus. It can polymerize deoxynucleotide precursors (dNTPs) in a temperature range of 75-80°C. The polymerase extends a pre-existing pairing, initially made by primers that bind spontaneously to specific sites on the DNA.

A typical PCR reaction
A series of thermal cycles involving
• template DNA denaturation,
• primer annealing,
• and extension of the annealed primers by DNA polymerase.
This three-step process is repeated 25-30 times, giving exponential accumulation of a specific fragment whose ends are defined by the 5' ends of the primers.

PCR reaction steps
They are repeated 20-30 times:
1. Denaturation of target DNA by heating to 90-95°C.
2. The temperature is lowered to about 5°C below the primers' melting temperature,
□ assuring specificity of primer annealing,
□ assuring specificity of the product.
3. The temperature is raised to 70-73°C,
□ the optimal temperature for primer extension.

Primers
Short DNA fragments with a defined sequence complementary to the target DNA that we want to detect and amplify. Each PCR assay requires the presence of template DNA, primers, nucleotides, and DNA polymerase. The primers in the reaction specify the exact DNA product to be amplified.

Melting temperature
The melting temperature (\(T_m\)) is the temperature at which half of the DNA molecules are matched and half are not matched. The \(T_m\) marks the transition from double-helical to random-coil formation. It corresponds to the midpoint of the spectroscopic absorbance shift. The \(T_m\) values are most simply measured by following the absorbance at 260 nm as a function of the temperature of the DNA solution and noting the midpoint of the hyperchromic rise.

Melting temperature depends on
• the DNA GC base content,
• the cation concentration of the buffer, and
• the DNA double-strand length.

Salt concentration
The \(T_m\) increases with higher concentrations of salt due to the stabilizing effects that cations have on DNA duplex formation. More cations bind to duplex DNA than to the single strands. Different cations may have different effects on \(T_m\).

Sodium and Magnesium concentration
Most \(T_m\) research is done using \(Na^+\) as the primary cation; from a \(T_m\) standpoint, sodium and potassium are equivalent. Divalent cations (such as \(Mg^{++}\)) also increase \(T_m\), but their effects are smaller than those of monovalent cations.

Oligonucleotide sequence
Sequences with a higher fraction of G-C base pairs have a higher \(T_m\) than do A-T-rich sequences. However, the \(T_m\) of an oligo is not simply the sum of A-T and G-C base content. Base stacking interactions must also be taken into account, such that the actual specific sequence must be known to accurately predict \(T_m\). The effects of neighboring bases as contributed through base stacking are called "nearest neighbor effects".

Formulas for \(T_m\) Calculation
Different formulas have been developed, which can be classified into two groups:
• nucleotide composition
• position-dependent
Nucleotide composition-based formulas depend on the GC content, the number of base pairs, and the salt concentration. Position-dependent methods depend on parameters such as enthalpy (\(ΔH^0\)), entropy (\(ΔS^0\)), and Gibbs
free energy (\(ΔG^0\)).

Wallace-Itakura Formula
It is one of the simplest ones, but it may not give an exact result:
\[T_m = 4(G+C)+2(A+T)\]
The formula was originally applied to the hybridization of probes in \(1 [mol/L]\) of \(NaCl\). This rule overestimates the \(T_m\) of long duplexes; it gives reasonable results only in the range of 14-20 bp.
(Figure: Wallace-Itakura vs. experimental \(T_m\).)

Marmur and Doty Formula (1962)
The linear relation between the GC content and the \(T_m\) was determined using absorbance shift analysis. For a solvent containing 0.2 mol/L of \(Na^+\), the melting temperature is
\[T_m =69.3+0.41(\%GC)\]
where \(T_m\) is in degrees Celsius. The measurement of the \(T_m\) is a way to determine the GC content of DNA.

Chester and Marshak Formula (1992)
Chester and Marshak added a term to account for DNA strand length (\(n\), in base pairs) to estimate primer \(T_m\):
\[T_m =69.3+0.41(\%GC)-\frac{650}{n}\]
It is easy to see that if the DNA molecule is big (for example, if \(n>10^6\)), then this formula gives the same result as Marmur and Doty.
(Figure: Chester-Marshak vs. experimental \(T_m\).)

The Marmur-Schildkraut-Doty Equation (1964)
For ionic strength, with a term for the \(Na^+\) concentration:
\[T_m =81.5+16.6\log_{10}([Na^+])+0.41(\%GC)-\frac{b}{n}\]
Values between 500 and 750 have been used for \(b\) (a value that may increase with the ionic strength). Usually, the value \(b=500\) is used.
(Figure: Marmur-Schildkraut-Doty vs. experimental \(T_m\).)

Wetmur Formula (1991)
\[
\begin{aligned}
T_m = & 81.5+16.6\log_{10}\left(\frac{[Na^+]}{1.0+0.7[Na^+]}\right) \\
& + 0.41(\%GC)-\frac{500}{n}
\end{aligned}
\]
This formula includes these variables with the salt concentration term modified to extend the range to 1 M \(Na^+\), a concentration routinely employed to maximize hybridization rates on blots. (Wetmur, 1991)
(Figure: Wetmur prediction vs. experimental \(T_m\).)
(These composition-based formulas are collected in a short code sketch at the end of this lecture.)

Position-Dependent Calculation
These methods use thermodynamic parameters (entropy \(ΔS^0\), enthalpy \(ΔH^0\), and Gibbs free energy \(ΔG^0\)). Experiments show that thermodynamic values for DNA melting do not depend only on base pair identity (A-T(U) or G-C). Theoretical melting temperature is typically calculated assuming that the helix-coil transition is two-state.

Two-state model
SantaLucia et al. suggest that the two-state model gives a reasonable approximation of melting temperature for duplexes with non-two-state transitions:
\[\text{single-strand} + \text{single-strand} ⇆ \text{double-strand}\]

Thermodynamic Rule for \(T_m\)
For self-complementary oligonucleotide duplexes, \(T_m\) is calculated from the predicted \(ΔH^0\) and \(ΔS^0\), and the total oligonucleotide concentration \(C_T\), by using the equation
\[T_m=\frac {ΔH^0}{ΔS^0+R\ln (C_T)}\]
where \(R\) is the universal gas constant (1.987 cal/(K·mol)) and temperature is measured in Kelvin degrees (SantaLucia, 1998; Borer, Dengler, Tinoco, & Uhlenbeck, 1974).

\(ΔG^0\) is the free energy
Each \(ΔG^0\) term has enthalpic, \(ΔH^0\), and entropic, \(ΔS^0\), components. The \(ΔG^0_{37}\) can also be calculated from the \(ΔH^0\) and \(ΔS^0\) parameters by using the equation:
\[ΔG^0_T=ΔH^0(\text{total})-TΔS^0(\text{total})\]
(SantaLucia, 1998)

Nearest-Neighbor Rule for energy
\[
\begin{aligned}
ΔG^0(\text{total})=∑_i n_iΔG^0(i)+ΔG^0(\text{init w/term G·C})\\
+ ΔG^0(\text{init w/term A·T})+ΔG^0(\text{sym})
\end{aligned}
\]
• \(ΔG^0(i)\) are the free-energy changes for the 10 possible Watson-Crick Nearest-Neighbors
□ \(ΔG^0(1)=ΔG^0_{37}(AA/TT),\)
□ \(ΔG^0(2)=ΔG^0_{37}(AT/TA),\) etc.
• \(n_i\) is the number of occurrences of each Nearest-Neighbor
• \(ΔG^0(\text{sym})\) is \(+0.43\ \text{kcal/mol}\) if the duplex is self-complementary and zero if it is not

\[
\underbrace{\text{primer}}_A + \underbrace{\text{single-strand DNA}}_B ⇆ \underbrace{\text{double-strand DNA}}_{AB}
\]
When the reaction is in equilibrium, we have
\[C=\frac{[A] [B]}{[AB]}=e^{-ΔG/RT}\]
thus
\[-RT\ln C = ΔG\]

Energy formula
We also have
\[ΔG=ΔH - TΔS\]
so
\[TΔS-RT\ln C = ΔH\]
and therefore
\[T = \frac{ΔH}{ΔS-R\ln C}\]

Finding \(C\)
Now
\[C=\frac{([A]_{ini}-[AB])([B]_{ini}-[AB])}{[AB]}\]
Assuming that the initial concentration of primers is much larger than the initial DNA concentration, \([B]_{ini} \ll [A]_{ini}\), DNA will be the limiting factor. At the melting point we have
\[[AB]=[B]_{ini}/2\]
thus
\[
\begin{aligned}
C&=\frac{([A]_{ini}-[B]_{ini}/2)([B]_{ini}-[B]_{ini}/2)}{[B]_{ini}/2}\\
&=\frac{([A]_{ini}-[B]_{ini}/2)⋅[B]_{ini}/2}{[B]_{ini}/2}\\
&=[A]_{ini}-[B]_{ini}/2
\end{aligned}
\]

Melting temperature formula
\[T_m = \frac{ΔH}{ΔS-R\ln([A]_{ini}-[B]_{ini}/2)}\]
Since the initial DNA concentration is small, we have
\[[A]_{ini}-[B]_{ini}/2 ≈ [A]_{ini}\]
which gives us the final formula
\[T_m = \frac{ΔH}{ΔS-R\ln([A]_{ini})}\]
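To make the formulas in this lecture concrete, here is a small Python sketch collecting the composition-based estimates and the printed two-state rule. The function names are mine; the two-state version expects association values of \(ΔH^0\) in kcal/mol and \(ΔS^0\) in cal/(K·mol) (both typically negative), and the nearest-neighbor parameter tables themselves are not reproduced here.

```python
import math

R = 1.987  # universal gas constant, cal/(K*mol)

def tm_wallace(seq):
    """Wallace-Itakura rule: Tm = 4(G+C) + 2(A+T), in degrees Celsius."""
    s = seq.upper()
    return 4 * (s.count("G") + s.count("C")) + 2 * (s.count("A") + s.count("T"))

def tm_marmur_doty(pct_gc):
    """Marmur-Doty (1962), for 0.2 mol/L Na+."""
    return 69.3 + 0.41 * pct_gc

def tm_chester_marshak(pct_gc, n):
    """Chester-Marshak (1992): Marmur-Doty plus a length term -650/n."""
    return 69.3 + 0.41 * pct_gc - 650.0 / n

def tm_msd(pct_gc, n, na_molar, b=500.0):
    """Marmur-Schildkraut-Doty (1964) with a Na+ salt term."""
    return 81.5 + 16.6 * math.log10(na_molar) + 0.41 * pct_gc - b / n

def tm_wetmur(pct_gc, n, na_molar):
    """Wetmur (1991): salt term extended up to 1 M Na+."""
    return 81.5 + 16.6 * math.log10(na_molar / (1.0 + 0.7 * na_molar)) \
        + 0.41 * pct_gc - 500.0 / n

def tm_two_state(dH0_kcal, dS0_cal, c_t):
    """SantaLucia two-state rule Tm = dH0 / (dS0 + R ln C_T),
    computed in kelvin and returned in degrees Celsius."""
    return dH0_kcal * 1000.0 / (dS0_cal + R * math.log(c_t)) - 273.15
```

For instance, `tm_wallace("ACGTACGTACGTAC")` applies the quick 2+4 rule to a 14-mer, the short-primer range where that rule is reasonable.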
{"url":"http://www.dry-lab.org/slides/2023/bioinfo/class10.html","timestamp":"2024-11-09T04:16:49Z","content_type":"text/html","content_length":"31031","record_id":"<urn:uuid:bffc6304-1a05-4877-a598-9c7b482105ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00684.warc.gz"}
Continuous Ranked Probability Score - CRPS

The Continuous Ranked Probability Score, known as CRPS, is a score to measure how well a proposed distribution approximates the data, without knowledge of the true distribution of the data. CRPS is defined as^1
$$ CRPS(P, x_a) = \int_{-\infty}^\infty \left( P(x) - H(x - x_a) \right)^2 dx, $$
• $x_a$ is the true value of $x$,
• $P(x)$ is our proposed cumulative distribution for $x$,
• $H(x)$ is the Heaviside step function
$$ H(x) = \begin{cases} 1, &\qquad x \geq 0\\ 0, &\qquad x < 0.\\ \end{cases} $$

Explain it
The formula looks abstract at first sight, but it becomes crystal clear once we understand it. Note that the distribution whose CDF is the Heaviside function $H(x - x_a)$ is the delta function $\delta(x-x_a)$. What this score is calculating is the difference between our distribution and a delta function. If we have a model that minimizes CRPS, then we are looking for a distribution that is close to the delta function $\delta(x-x_a)$. In other words, we want our distribution to place most of its mass around $x_a$.
To illustrate what the integrand $\left( P(x) - H(x - x_a) \right)^2$ means, we consider several scenarios:
• when the proposed CDF $P(x)$ reaches 1 faster;
• when the proposed CDF $P(x)$ reaches 1 slower;
• when the proposed CDF $P(x)$ is close to the Heaviside function;
• when the proposed CDF $P(x)$ is dispersed around $x_a$.
The shaded areas in those figures determine the integrand of the integral in CRPS. The only way to get a small score is to choose a distribution that is focused around $x_a$.
(Figure: densities of $P(x)$ and $H(x-x_a)$.)

Compared to f-Divergence
Compared to the KL divergence, or more generally the f-divergence, CRPS compares our proposed CDF to the Heaviside CDF rather than comparing two densities. The f-divergence is defined as
$$ \operatorname{D}_f = \int f\left(\frac{p}{q}\right) q\,\mathrm d\mu, $$
where $p$ and $q$ are two densities and $\mu$ is a reference distribution. The generating function $f$ is required to be convex, with $f(1) = 0$. For $f(x) = x \log x$ with $x = p/q$, the f-divergence reduces to the KL divergence:
$$ \int f\left(\frac{p}{q}\right) q\,\mathrm d\mu = \int \frac{p}{q} \log\left(\frac{p}{q}\right) q\,\mathrm d\mu = \int p \log\left(\frac{p}{q}\right) \mathrm d\mu. $$

Compared to Likelihood
Gebetsberger et al. found that CRPS is more robust but produces similar results if we have found a good assumption about the data distribution^2.
One quite interesting application of the CRPS is to write down the loss for the quantile function^3.
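As an illustration of the definition above, here is a small numerical sketch (the function name is mine; numpy and scipy are assumed available) that integrates the squared difference between a forecast CDF and the Heaviside step on a wide grid:

```python
import numpy as np
from scipy.stats import norm

def crps_numeric(cdf, x_a, lo=-20.0, hi=20.0, num=20001):
    """Approximate CRPS(P, x_a) = integral of (P(x) - H(x - x_a))**2 dx."""
    x = np.linspace(lo, hi, num)
    heaviside = (x >= x_a).astype(float)
    return np.trapz((cdf(x) - heaviside) ** 2, x)

# Forecast: standard normal CDF; observation x_a = 0.5
print(crps_numeric(norm.cdf, 0.5))
```

For a Gaussian forecast $N(\mu, \sigma)$ there is also a closed form, $\sigma\left[z(2\Phi(z)-1) + 2\varphi(z) - 1/\sqrt{\pi}\right]$ with $z = (x_a - \mu)/\sigma$, which makes a handy check on the numerical result.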
{"url":"https://datumorphism.leima.is/cards/time-series/crps/?ref=footer","timestamp":"2024-11-02T15:18:41Z","content_type":"text/html","content_length":"121403","record_id":"<urn:uuid:6d73ea2f-a4ba-4b77-a489-e21521da48bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00645.warc.gz"}
Oil Nozzle Calculator
The oil nozzle calculator is an essential tool for engineers, technicians, and anyone involved in fluid dynamics or oil distribution systems. It provides a way to estimate the flow rate through an oil nozzle based on various parameters, ensuring that systems operate efficiently and effectively. Understanding how to use this calculator can help optimize fluid delivery in different applications, including industrial processes and heating systems.
The formula for calculating the flow rate (FR) through an oil nozzle is: FR = (A * √(2 * g * H)) / √(1 – (d/D)⁴), where FR is the flow rate in gallons per minute (GPM), A is the nozzle area in square inches, H is the height in feet, d is the small diameter in inches, and D is the large diameter in inches. The variable g represents the acceleration due to gravity, approximately 32.2 ft/s².
How to Use
To use the oil nozzle calculator, simply input the nozzle area, height, and both diameters (small and large) into their respective fields. After entering these values, click the "Calculate" button to see the estimated flow rate displayed below. For example, if the nozzle area is 5 square inches, the height is 10 feet, the small diameter is 2 inches, and the large diameter is 4 inches, the calculation would be: FR = (5 * √(2 * 32.2 * 10)) / √(1 – (2/4)⁴) ≈ 131.05 GPM. Thus, the estimated flow rate through the nozzle would be approximately 131.05 gallons per minute. (A code sketch implementing this formula appears after the FAQ below.)
1. What is flow rate in the context of oil nozzles? Flow rate refers to the volume of oil that passes through the nozzle in a specific period, usually measured in gallons per minute (GPM).
2. Why is it important to calculate flow rate? Calculating flow rate is essential for ensuring that oil distribution systems operate efficiently, preventing over- or under-supply.
3. What units are used in the oil nozzle calculation? The calculation uses square inches for area, feet for height, and inches for diameters, with the result given in gallons per minute.
4. What does the variable g represent in the formula? The variable g represents the acceleration due to gravity, which is approximately 32.2 feet per second squared (ft/s²).
5. Can this calculator be used for different fluids? This calculator is primarily designed for oil; for other fluids, adjustments may be needed based on their specific properties.
6. How do I measure the area of the nozzle? The area can be calculated by using the formula A = π * (D/2)² for a circular nozzle.
7. What happens if I use incorrect measurements? Using incorrect measurements will lead to inaccurate flow rate calculations, potentially affecting system performance.
8. Can I input decimal values for the dimensions? Yes, you can input decimal values to achieve more precise calculations.
9. What is the significance of the small and large diameters? The small and large diameters help determine how the nozzle restricts flow and influences the overall flow rate.
10. What factors can affect the flow rate through an oil nozzle? Factors include the viscosity of the oil, temperature, and the condition of the nozzle.
11. Is there a limit to the height I can use in the calculations? No, the height can vary, but ensure that it is realistic for your specific application.
12. How do I ensure my measurements are accurate? Use calibrated measuring tools and double-check your measurements for precision.
13. Can I use this calculator for a nozzle with a non-circular shape?
This calculator is designed for circular nozzles; non-circular shapes may require different calculations. 14. What is the best way to improve flow rate in an oil system? Optimizing nozzle size, reducing obstructions, and maintaining equipment can enhance flow rate. 15. How often should I calculate flow rate? It is advisable to calculate flow rate whenever system conditions change, such as during maintenance or modifications. 16. What if the calculated flow rate is lower than expected? Investigate potential blockages, leaks, or equipment malfunctions that may be restricting flow. 17. Can I calculate flow rate for multiple nozzles? Yes, you can calculate the flow rate for each nozzle separately and sum them for the total flow rate. 18. What is the effect of temperature on flow rate? Higher temperatures typically decrease viscosity, which can increase flow rate, while lower temperatures may have the opposite effect. 19. Is this calculator useful for both residential and industrial applications? Yes, the calculator can be applied in both contexts, although specific parameters may vary. 20. Where can I find additional resources on oil flow calculations? Engineering textbooks, fluid dynamics courses, and industry publications provide further information on flow rate calculations. The oil nozzle calculator is a valuable resource for anyone involved in fluid dynamics, particularly in the oil and gas industry. By simplifying the process of calculating flow rates, it enables professionals to make informed decisions regarding oil distribution systems. Understanding and utilizing this calculator can lead to more efficient operations, cost savings, and improved system performance in various applications. Whether you are an engineer, technician, or operator, mastering the use of the oil nozzle calculator is essential for optimizing flow rates and ensuring effective fluid delivery.
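For readers who prefer to script the calculation rather than use the web form, a minimal Python sketch of the formula is below. The function name and the caveat in the docstring are mine, not the calculator's. Note that plugging the article's example values into the bare expression gives roughly 131 in raw mixed units, so the published 21.49 GPM evidently includes a unit conversion that the article does not state; treat this as the bare formula only.

```python
import math

def nozzle_flow_rate(area_in2, height_ft, d_small_in, d_large_in, g=32.2):
    """Evaluate the article's flow-rate expression:

        FR = (A * sqrt(2*g*H)) / sqrt(1 - (d/D)**4)

    Caution: the inputs mix square inches and feet, so the raw result is
    not directly in GPM; the article's 21.49 GPM figure implies an extra
    unit conversion that is not spelled out in the text.
    """
    ratio = d_small_in / d_large_in
    return (area_in2 * math.sqrt(2 * g * height_ft)) / math.sqrt(1 - ratio**4)

# Worked example from the article: A = 5 in^2, H = 10 ft, d = 2 in, D = 4 in
print(nozzle_flow_rate(5, 10, 2, 4))  # about 131.05 in raw units
```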
{"url":"https://calculatordoc.com/oil-nozzle-calculator/","timestamp":"2024-11-02T10:54:11Z","content_type":"text/html","content_length":"87415","record_id":"<urn:uuid:a90c1191-6fce-4e69-b549-0909cfb53fd2>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00395.warc.gz"}
Page ID: 2661
M | Matrix algebra | The basics | «Matrix equations»

The inverse of A = \pmatrix{ a & b \cr c & d} is A^{-1} = \displaystyle \frac{1}{ad-bc} \pmatrix{ d & -b \cr -c & a }

Remember that in general AB \ne BA

This question appears in the following syllabi:

Syllabus | Module | Section | Topic | Exam Year
AQA A-Level (UK - Pre-2017) | FP4 | Matrix algebra | The basics | -
AQA AS Further Maths 2017 | Pure Maths | Matrix Determinants and Inverses | Matrix Equations | -
AQA AS/A2 Further Maths 2017 | Pure Maths | Matrix Determinants and Inverses | Matrix Equations | -
CBSE XII (India) | Algebra | Matrices | Types, zero, identity, symmetric and skew symmetric | -
CCEA A-Level (NI) | FP1 | Matrix algebra | The basics | -
Edexcel A-Level (UK - Pre-2017) | FP1 | Matrix algebra | The basics | -
Edexcel AS Further Maths 2017 | Core Pure Maths | Matrices | Matrix Equations | -
Edexcel AS/A2 Further Maths 2017 | Core Pure Maths | Matrices | Matrix Equations | -
Methods (UK) | M5 | Matrix algebra | The basics | -
OCR A-Level (UK - Pre-2017) | FP1 | Matrix algebra | The basics | -
OCR AS Further Maths 2017 | Pure Core | Determinants, Inverses and Equations | Matrix Equations | -
OCR MEI AS Further Maths 2017 | Core Pure A | Determinants and Inverses | Matrix Equations | -
OCR-MEI A-Level (UK - Pre-2017) | FP1 | Matrix algebra | The basics | -
Pre-Calculus (US) | E2 | Matrix algebra | The basics | -
Scottish Advanced Highers | M3 | Matrix algebra | The basics | -
Scottish (Highers + Advanced) | AM3 | Matrix algebra | The basics | -
Universal (all site questions) | M | Matrix algebra | The basics | -
WJEC A-Level (Wales) | FP1 | Matrix algebra | The basics | -
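As a quick numerical check of the two facts stated above (the values chosen for a, b, c, d and for B are arbitrary illustrations of mine, not from the page):

```python
import numpy as np

# The 2x2 inverse formula from the page: A^{-1} = 1/(ad - bc) * [[d, -b], [-c, a]]
a, b, c, d = 2.0, 1.0, 5.0, 3.0           # any values with ad - bc != 0
A = np.array([[a, b], [c, d]])
A_inv = np.array([[d, -b], [-c, a]]) / (a * d - b * c)

print(np.allclose(A @ A_inv, np.eye(2)))  # True: the formula really inverts A

# And matrix multiplication is not commutative in general:
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.allclose(A @ B, B @ A))          # False for this A and B
```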
{"url":"https://www.mathsnet.com/2661","timestamp":"2024-11-06T01:08:25Z","content_type":"text/html","content_length":"39013","record_id":"<urn:uuid:eaf4eeda-ea70-44d4-bea9-c8374e03dd15>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00458.warc.gz"}
MU Applied Hydraulics - 2 - May 2014 Exam Question Paper | Stupidsid
Total marks: -- Total time: --
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary

Solve any four of the following
1 (a) Standing wave flume and venturi flume. 5 M
1 (b) Explain the surges in open channel. 5 M
1 (c) Explain the development of boundary layer. 5 M
1 (d) Write short notes on aerofoils. 5 M
1 (e) Boundary layer separation. 5 M

2 (a) Derive the dynamic equation for gradually varied flow. 10 M
2 (b) Calculate the friction drag on a plate 0.15 m wide and 0.45 m long placed longitudinally in a stream of oil flowing with a free stream velocity of 6 m/s. Also find the thickness of the boundary layer and the shear stress at the trailing edge. Sp. gr. of oil is 0.925 and its kinematic viscosity is 0.9 stokes. 10 M

3 (a) A cylinder 1.2 m dia. is rotated about its axis in air having a velocity of 128 km per hr. A lift of 5886 N per metre length of the cylinder is developed on the body. Assuming ideal fluid theory, find the rotational speed and the location of the stagnation points. Given ρ = 1.236 kg/m³. 10 M
3 (b) Show that the head loss in a hydraulic jump formed in a rectangular channel may be expressed as \[\Delta E=\dfrac{\left ( V_1-V_2 \right )^{3}}{2g\left ( V_1+V_2 \right )}\] 10 M

4 (a) A concrete-lined rectangular channel is our wide and has a bed slope of 1 in 2500. It conveys water at a rate of 10.90 m³/sec at a depth of 0.75 m. Determine for this discharge the critical depth, the critical velocity and the corresponding minimum specific energy head. Is the flow subcritical or supercritical? Find also the slope of the water surface. Take Manning's N = 0.015. 10 M
4 (b) Derive an expression for the lift acting on a rotating cylinder. 10 M

5 (a) Design an irrigation channel in alluvial soil according to Lacey's silt theory, given the following data: slope of the channel = 1:5000; Lacey's silt factor = 0.9; channel side slope = 1/2 : 1. Also find the maximum discharge which can be allowed to flow in it. 12 M
5 (b) Compare Kennedy's theory and Lacey's theory. 8 M

6 (a) Show that the hydraulic mean depth of a trapezoidal channel having the best proportion is half of the depth of flow. 10 M
6 (b) For a trapezoidal channel with bottom width 40 m and side slopes 2H:1V, Manning's N = 0.015 and the bottom slope is 0.0002. If it carries a discharge of 60 m³/s, determine the normal depth. 10 M

7 (a) For the velocity profile in a laminar boundary layer u/U = (3/2)(y/δ) − (1/2)(y/δ)³, find the thickness of the boundary layer and the shear stress at 1.8 m from the leading edge of the plate. The plate is 2.5 m long and 1.5 m wide and is placed in water moving with a velocity of 15 m/s. Find the drag on one side of the plate. Take μ(water) = 0.01 poise. 10 M

Write short notes on:
7 (b) (1) Specific energy curve. 5 M
7 (b) (2) Steady and unsteady flow in case of channels. 5 M

More question papers from Applied Hydraulics - 2
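As an illustration of the kind of computation question 6(b) above calls for: Manning's equation Q = (1/n) A R^{2/3} S^{1/2} has no closed-form solution for the depth of a trapezoidal section, so the normal depth is found iteratively. The sketch below is my own (helper names and the bisection approach are illustrative, with SI units assumed throughout); with the stated data it yields a normal depth of roughly 1.31 m.

```python
import math

def manning_Q(y, b=40.0, z=2.0, n=0.015, S=0.0002):
    """Discharge from Manning's equation for a trapezoidal section of depth y."""
    A = (b + z * y) * y                    # flow area
    P = b + 2 * y * math.sqrt(1 + z * z)   # wetted perimeter
    R = A / P                              # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)

# Bisection for the normal depth y_n with Q = 60 m^3/s (question 6(b) data)
lo, hi, Q_target = 0.01, 10.0, 60.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if manning_Q(mid) < Q_target:
        lo = mid
    else:
        hi = mid
print(f"normal depth ~ {0.5 * (lo + hi):.3f} m")   # about 1.31 m
```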
{"url":"https://stupidsid.com/previous-question-papers/download/applied-hydraulics-2-10624","timestamp":"2024-11-11T09:40:56Z","content_type":"text/html","content_length":"62747","record_id":"<urn:uuid:7d8d3e0d-9db9-486e-9d2a-2ff6f9d5a32c>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00486.warc.gz"}
Hecke algebra

Representation theory

Hecke algebra is a term for a class of algebras. They often appear as convolution algebras or as double coset spaces. For p-adic algebraic groups, Hecke algebras often play a role similar to the one a Lie algebra plays in the complex case (the Lie algebra still exists, but is too small). Typically the term refers to an algebra of endomorphisms of a permutation representation of a topological group, though some liberties have been taken with this definition, and often the term means some modification of such an algebra. For example:

• If we consider the general linear group $GL_n(\mathbb{F}_q)$ acting on the set of complete flags in $\mathbb{F}_q^n$, then we obtain an algebra generated by the endomorphisms $\sigma_i$ which send the characteristic function of one flag $\mathbf{F}=\{F_1\subset \cdots \subset F_{n-1}\subset \mathbb{F}_q^n\}$ to the characteristic function of the set of flags $\mathbf{F}'$ with $F_j'= F_j$ for all $j \neq i$ and $F_i' \neq F_i$. These elements satisfy the relations $\sigma_i\sigma_j=\sigma_j\sigma_i \qquad (|i-j| \gt 1)$

• If we look at $GL_n(\mathbb{F}_q(\!(t)\!))$ acting on the set of $\mathbb{F}_q[\![t]\!]$-lattices in $\mathbb{F}_q(\!(t)\!)^n$, then we obtain the spherical Hecke algebra.

• A variant of the Hecke algebra is the degenerate affine Hecke algebra of type $A$; this is a deformation of the semidirect product of the symmetric group $S_n$ with the polynomial ring in $n$ variables. The generators are $S_n$ and $y_1, \dots, y_n$, with relations $\sigma y_i \sigma^{-1} = y_{\sigma(i)}$ and $[y_i,y_j] = \sum_{k \neq i,j} (k\,i\,j)-(k\,j\,i)$; one can replace the $y_i$'s with commuting $x_i$'s with slightly messier relations. As George Lusztig showed, the representation theory of the affine Hecke algebra is related to the graded or degenerate case.

• There is a geometric construction, due to Victor Ginzburg, of the representations of Weyl algebras when these are realized as certain Hecke convolution algebras.

Generalized Hecke algebras

To each Coxeter group $W$ one may associate a Hecke algebra, a certain deformation of the group algebra $k[W]$ over a field $k$, as follows. $W$ is presented by generators $\langle s_i \rangle_{i \in I}$ and relations $(s_i s_j)^{m_{i j}} = 1$ where $m_{i j} = m_{j i}$ and $m_{i i} = 1$ for all $i, j \in I$. The relations may be rewritten:

$s_{i}^{2} = 1, \qquad s_i s_j \ldots = s_j s_i \ldots$

where each of the words in the second equation alternates in the letters $s_i$, $s_j$ and has length $m_{i j}$, provided that $m_{i j} \lt \infty$. The corresponding Hecke algebra has basis $W$, and is presented by

$s_{i}^{2} = \frac{q-1}{q} s_i + \frac1{q}, \qquad s_i s_j \ldots = s_j s_i \ldots$

These relations may be interpreted structurally as follows (for simplicity, we will consider only finite, aka spherical, Coxeter groups). A Coxeter group $W$ may be associated with a suitable BN-pair; the classical example is where $G$ is an algebraic group, $B$ is a Borel subgroup (maximal solvable subgroup), and $N$ is the normalizer of a maximal torus $T$ in $G$. Such $G$ typically arise as automorphism groups of thick $W$-buildings, where $B$ is a stabilizer of a point of the building. The coset space $G/B$ may then be interpreted as a space of flags for a suitable geometry. The Coxeter group itself arises as the quotient $W \cong N/T$, and under the BN-pair axioms there is a well-defined map $W \to B\backslash G/B: w \mapsto B w B$ which is a bijection to the set of double cosets of $B$.
(In particular, the double cosets do not depend on the coefficient ring $R$ in which the points $G(R)$ are instantiated.) When one takes points of the algebraic group $G$ over the coefficient ring $\mathbb{F}_q$, a finite field with $q$ elements, the flag manifold $G_q/B_q \coloneqq G(\mathbb{F}_q)/B(\mathbb{F}_q)$ is also finite. One may calculate

$\array{ \hom_{k[G_q]}(k[G_q/B_q], k[G_q/B_q]) & \cong & k[G_q/B_q]^\ast \otimes_{k[G_q]} k[G_q/B_q] \\ & \cong & k[B_q\backslash G_q] \otimes_{k[G_q]} k[G_q/B_q] \\ & \cong & k[B_q\backslash G_q \times_{G_q} G_q/B_q] \\ & \cong & k[B_q \backslash G_q / B_q] }$

so that the double cosets form a linear basis of the algebra of $G_q$-equivariant operators on the space of functions $k[G_q/B_q]$. This algebra is in fact the Hecke algebra. It is a matter of interest to interpret the double cosets directly as operators on $k[G_q/B_q]$, and in particular the cosets $B s_i B$ where $s_i$ is a Coxeter generator. To be continued…

Lecture notes:

• Garth Warner: Elementary Aspects of the Theory of Hecke Operators, University of Washington (1988) [pdf, pdf]

In relation to the Knizhnik-Zamolodchikov equation:

See also:

For the representation theory of the degenerate affine Hecke algebra see

• Takeshi Suzuki, Rogawski's conjecture on the Jantzen filtration for the degenerate affine Hecke algebra of type A, math.QA/9805035
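To make the quadratic relation above concrete in the smallest case: for $GL_2(\mathbb{F}_q)$ a complete flag is just a line in $\mathbb{F}_q^2$, there are $q+1$ of them, and the generator acts on functions on this set as $T = J - I$ (the all-ones matrix minus the identity). A quick numerical sanity check, as a sketch of my own, using the normalization $s = T/q$ matching the presentation above:

```python
import numpy as np

q = 3
N = q + 1                 # number of complete flags in F_q^2, i.e. lines through 0
J = np.ones((N, N))
I = np.eye(N)
T = J - I                 # sigma: send a flag to the sum of the flags differing from it

# Standard quadratic relation: T^2 = (q - 1) T + q
print(np.allclose(T @ T, (q - 1) * T + q * I))              # True

# The normalization s = T/q then satisfies s^2 = ((q-1)/q) s + 1/q, as on the page
s = T / q
print(np.allclose(s @ s, ((q - 1) / q) * s + (1 / q) * I))  # True
```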
{"url":"https://ncatlab.org/nlab/show/Hecke%20algebra","timestamp":"2024-11-13T18:10:03Z","content_type":"application/xhtml+xml","content_length":"48285","record_id":"<urn:uuid:74432e84-c958-4f43-9ce4-b3bdfd523a7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00815.warc.gz"}
How does thermodynamics explain the behavior of supercritical fluids? | Do My Chemistry Online Exam

How does thermodynamics explain the behavior of supercritical fluids? This isn't the first question of this kind I have looked at, so I'll try to summarize below. For starters, cooling is a key element of how topologically ordered fluids behave. Whether equilibrium conditions hold should depend on whether a particular order corresponds to the relevant physical phenomenon (e.g. temperature, pressure) or not. In the thermodynamic limit (and ultimately in a system containing entropy) we shouldn't simply assume thermal equilibrium. We already know that in the ordered state, entropy and thermodynamic equilibrium are related by the Kibble-Thomas theorem. It means that in general there must be an equilibrium point where the system holds: a point of equal entropy, and the second equality is essentially the problem of how the two fundamental types (thermodynamical and thermodynamic) should coexist. For example, in a cold state, two thermodynamically equivalent pairs of units have very different chemical potentials, whose consequences can be described by thermodynamic equations. One should describe these two sets of equations in a more general and non-linear way. It might be useful to look at this in a more general sense. If this treatment applies to a system already broken by the normal phase, then the temperature distribution in the system is analogous to that of the usual general thermodynamic description, but in a different picture. For instance, eq. 14 of the reference, giving the temperature of an ordered state at some temperature T via (8,16), yields two equations that are exactly the same as those from the Kibble-Thomas (KT) theorem. One therefore gets, by replacing the Kibble-Thomas formulas with the Newton-Wigner (AW) equation for entropy, $$Q(Q_{TF})=0, \qquad Q_{TF}-\dots$$

How does thermodynamics explain the behavior of supercritical fluids? Enright and Niedringhaus propose that it can be explained by examining the equilibrium thermodynamics of such fluids: all the thermodynamics of supercritical fluids must be studied as part of the theory of statistical physics, rather than as part of the theory of fluid physics. Finally, I must warn you that we cannot claim that thermodynamics has anything to do with condensed matter physics but only some other field. Any further insights can be obtained by additions to Newtonian thermodynamics or Gibbs' thermodynamics if you need them.

This is a sample of some of the posts I have written, so let's start with an example from among the earliest and most popular ones. Before typing up a blog post, we first do a bit of investigating. For a moment we wish to find the usual suspects. This is why physicists themselves use this technique, which involves the creation of objects when not strictly necessary in order to ascertain other objects in their universe. In some examples, depending on context, we may wish to hear about a particular object and its properties. These include matter dynamics, matter forms, and other phenomena using this technique. Here is one such example. Steady-state turbulence: if you take a large population of N ships, such as in Figure 1(b), for the purpose of calculating a stationary steady state (SSS), Eq. (3.8), which has equation (3.12), you will form a stationary steady state of type 1. Here we find again a higher order derivative of equation (3.5), and a higher order rest derivative for the higher order derivative.

How does thermodynamics explain the behavior of supercritical fluids? The problem of heat conduction in magnetized, supercritical, and liquid fluids has taken more than 300 years of modern research to resolve. Some advances in thermodynamics are well explored; however, the question of why some forms of thermodynamics actually lead to supercritical states is thus far new: it has attracted largely negative attention in theoretical physics and mathematics. The question of the supercritical state, apparently, is at the heart of the research surrounding thermodynamics. Scientists mostly agree that heat conduction is thermodynamically very strong, and that it is always hotter than the surface of a solid. They also agree that the thermodynamics of the liquid phase depends on it. But it is not necessarily clear to what extent the key to thermodynamics falls under the umbrella of the classical thermodynamics of liquid and solid systems, a concept famously identified with Chabert and Wilson. Today, many scientists believe that the central task in all of thermodynamics (the understanding of thermodynamic properties, especially in the cold gas phase) ought to be to understand what thermodynamics means. There might yet be some more exciting results that have led to the exploration of the possibility of describing heat conduction as a conical, geometrically simple system, with other systems even better explained by quantum mechanics [1]-[5]. It is not clear why this is so. The first step in understanding magnetism in what are known as supercritical fluids was published in Philosophical Transactions of the Royal Society of London in 1980. It has been argued that thermodynamics now offers something close to the framework of Chabert's 'thermographism'. However, even though it involved a broad range, the first step in the field of magnetism in different types of supercritical fluids was not yet fully understood. With this in mind, I have written on this issue, following McInerny, as
{"url":"https://chemistryexamhero.com/how-does-thermodynamics-explain-the-behavior-of-supercritical-fluids","timestamp":"2024-11-01T21:59:06Z","content_type":"text/html","content_length":"130251","record_id":"<urn:uuid:9b5362d8-84a9-4704-a8e7-7929b83d3aa7>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00011.warc.gz"}
How do I retain only the data specific to identifiers located in another dataset? on SAS Programming. 01-18-2013 12:35 AM What Proc to get descriptive statistics and test mean median differences? on Statistical Procedures. 01-22-2013 03:26 PM Proc import: Changing one of the variable names to format different from default on SAS Procedures. 01-29-2013 11:28 PM Proc Logistic error: "All observations have the same response. No statistics are computed." due to rare events? on Statistical Procedures. 02-06-2013 07:09 PM Re: Proc Logistic error: "All observations have the same response. No statistics are computed." due to rare events? on Statistical Procedures. 02-06-2013 11:47 PM How to create random samples from dataset? on Statistical Procedures. 02-07-2013 02:19 AM Re: Proc Logistic error: "All observations have the same response. No statistics are computed." due to rare events? on Statistical Procedures. 02-07-2013 06:38 AM Re: How to create random samples from dataset? on Statistical Procedures. 02-07-2013 06:41 AM Re: Proc Logistic error: "All observations have the same response. No statistics are computed." due to rare events? on Statistical Procedures. 02-07-2013 06:57 AM Re: Proc Logistic error: "All observations have the same response. No statistics are computed." due to rare events? on Statistical Procedures. 02-07-2013 01:26 PM Appropriate to use firth method in proc logistic for rare events? on Statistical Procedures. 02-07-2013 11:26 PM Hi, Let me explain my situation : 1) I have a dataset - where the response rate is 0.6% (374 events in a total of 61279 records) and I need to build a logistic regression model on this dataset. 2) Option 1 : I can go with PROC LOGISTIC (conventional Maximum Likelihood) as the thumb rule " that you should have at least 10 events for each parameter estimated" should hold good considering that I start my model build iteration with not more than 35 variables and finalize the model build with less than 10 variables. Please do let me know if I have more than 35 predictors initially to start the model build process and if it is recommended to use PROC LOGISTIC (conventional ML) with the understanding that I may have to do certain categorical level collapses to rule out cases of quasi complete separation/ complete separation and considering the thumb rule " that you should have at least 10 events for each parameter estimated" ? 3) Option -2 : I can go with PROC LOGISTIC (Firth's Method using Penalized Likelihood) - The Firth method could be helpful in reducing any small-sample bias of the estimators. Please do let me know if I have more than 35 predictors initially to start the model build process and if it is recommended to use PROC LOGISTIC (Firth's Method using Penalized Likelihood) with the understanding that I DO NOT have to do any categorical level collapses to rule out cases of quasi complete separation/ complete separation ? 4) Option -3 : If the above 2 options is not recommended , then the last option is to use the strategy for Over sampling of rare events. As the total number of events-374 and total records-61279 are both quite less with regards to posing any challenges on computing time or on hardware, I would obviously go with a oversampling rate of 5% only (Number of records to be modelled=6487) as I want to consider as many non-event records as possible as if I go for oversampling rate above 5% , the total number of records that can be modeled is less than 6487 . 
My thoughts on Options 1, 2, and 3 are given below:
-- With a 5.77% oversampled event rate (374 events and 6,113 non-events, for a total of 6,487 records) and a 70:30 split between TRAIN and VALIDATION, I can build my model on 4,541 records and perform in-time validation on 1,946 records.
-- By comparison, under Option 1 or Option 2 with a 70:30 split between TRAIN and VALIDATION, I can build my model on 42,896 records and perform in-time validation on 18,383 records.
Please help me with which of Option 1, Option 2, or Option 3 is recommended in my case. If Option 3, is it then recommended to use an oversampling rate of 2% or 3% instead, in order to increase the number of records to be modeled above 6,487?
Thanks, Surajit
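Not part of the original thread, but the arithmetic behind the poster's options is easy to sketch. A minimal Python illustration, assuming a synthetic dataset with the question's counts (the variable names and the 5.77% target rate are taken from the question itself, not from any SAS output):

```python
# Hypothetical illustration only (not from the thread): the events-per-parameter
# rule of thumb and the Option-3 down-sampling of non-events, in Python.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n_total, n_events = 61279, 374

df = pd.DataFrame({"y": np.zeros(n_total, dtype=int)})
df.loc[rng.choice(n_total, size=n_events, replace=False), "y"] = 1

# "At least 10 events per estimated parameter" caps the final model size:
print("max parameters by rule of thumb:", n_events // 10)   # 37

# Down-sample non-events so events make up ~5.77% of the modelling set:
target_rate = 0.0577
n_nonevents = int(n_events / target_rate) - n_events
model_df = pd.concat([df[df["y"] == 1],
                      df[df["y"] == 0].sample(n=n_nonevents, random_state=1)])
print(len(model_df), round(model_df["y"].mean(), 4))         # ~6481 rows, ~0.0577
```

Note that what the thread calls "oversampling" is, mechanically, a down-sampling of the non-events; the event count stays fixed at 374.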
{"url":"https://communities.sas.com/t5/user/viewprofilepage/user-id/39238/user-messages-feed/participations","timestamp":"2024-11-10T02:08:35Z","content_type":"text/html","content_length":"178201","record_id":"<urn:uuid:1f1ea841-c742-4a70-8dcd-d9f556b36631>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00400.warc.gz"}
1.13: Angle Properties and Theorems (2024) Find angles and line segments, and determine if shapes are congruent and lines are parallel. Understand complementary angles as angles whose sum is 90 degrees and supplementary angles as angles whose sum is 180 degrees. Measures of Angle Pairs The foul lines of a baseball diamond intersect at home plate to form a right angle. A baseball is hit from home plate and forms an angle of \(36^{\circ}\) with the third base foul line. What is the measure of the angle between the first base foul line and the path of the baseball? How can you use your knowledge of angles to figure out the measure of the angle? In this concept, you will learn about measures of angle pairs. Measuring Angle Pairs There are different types of angle pairs. Vertical angles are an angle pair formed by intersecting lines such that they are never adjacent. They have a common vertex and never share a common side. Vertical angles are equal in measure. The following diagram shows vertical angle pairs. \(\angle 1\) and \(\angle 2\) are vertical angles. \(m\angle 1=m\angle 2\) \(\angle 3\) and \(\angle 4\) are vertical angles. \(m\angle 3=m\angle 4\) Adjacent angles are an angle pair also formed by two intersecting lines. Adjacent angles are side by side, have a common vertex and share a common side. The following diagram shows pairs of adjacent angles. Each pair of adjacent angles forms a straight angle. Therefore the sum of any two adjacent angles equals \(180^{\circ}\). \(m\angle 1+m \angle 3= 180^{\circ}\) \(m\angle 2+m \angle 4= 180^{\circ}\) \(m\angle 2+m \angle 3= 180^{\circ}\) \(m\angle 1+m \angle 4= 180^{\circ}\) If the sum of two angles is \(180^{\circ}\) then the angles are called supplementary angles. The following diagram shows two supplementary angles. In both diagrams, \(m\angle 1+m \angle 2= 180^{\circ}\).
If the sum of two angles equals 90° then the angles are called complementary angles. The following diagram shows two complementary angles. \(m\angle 1+m \angle 2= 90^{\circ}\) Let’s apply all this information about angles and their measure to determine the measure of \(\angle a\), \(\angle b\), \(\angle c\) in the following diagram. There are four angles formed by intersecting lines. The measure of one of the angles is \(70^{\circ}\). First, state the relationship between the angle of \(70^{\circ}\) and \(\angle b\). The angle of \(70^{\circ}\). is adjacent to \(\angle b\) and the two angles form a straight angle. Next, express the relationship using symbols. \(\angle b+70^{\circ}=180^{\circ}\) Next, subtract 70° from both sides of the equation. \(\angle b+70^{\circ}=180^{\circ}\) \(\angle b+70^{\circ}- 70^{\circ}=180^{\circ}-70^{\circ}\) Then, simplify both sides of the equation. \(\angle b+70^{\circ}- 70^{\circ}=180^{\circ}-70^{\circ}\) \(\angle b = 110^{\circ}\) The answer is \(110^{\circ}\). \(m \angle b = 110^{\circ}\) First, state the relationship between the angle of \(70^{\circ}\) and \(\angle a\). The angle of \(70^{\circ}\) and \(\angle a\) are vertical angles and are equal in measure. Next, express the relationship using symbols. \(m\angle a=70^{\circ}\) The answer is \(70^{\circ}\). \(m\angle a=70^{\circ}\) First state the relationship between the angle of \(70^{\circ}\) and \(\angle c\). The angle of \(70^{\circ}\) is adjacent to \(\angle c\) and the two angles form a straight angle. Next, express the relationship using symbols. \(\angle c+70^{\circ}=180^{\circ}\) Next, subtract \(70^{\circ}\) from both sides of the equation. \(\angle c+70^{\circ}=180^{\circ}\) \(\angle c+70^{\circ}-70^{\circ}=180^{\circ} -70^{\circ}\) Then, simplify both sides of the equation. \(\angle c+70^{\circ}-70^{\circ}=180^{\circ} -70^{\circ}\) \(\angle c=110^{\circ}\) The answer is \(110^{\circ}\). \(m \angle c=110^{\circ}\) Example \(\PageIndex{1}\) Earlier, you were given a problem about the baseball field and the foul lines. The angle between the path of the ball and the first base foul line needs to be figured out. This can be done using complementary angles. First, draw a diagram to model the problem. Next, state the relationship between \(36^{\circ}\) and \(\angle x\). \(36^{\circ}\) and \(\angle x\) are complementary angles. The sum of the angles is \(90^{\circ}\). Next, express the relationship using symbols. \(36^{\circ}+\angle x=90^{\circ}\) Next, subtract 36° from both sides of the equation. \(36^{\circ}+\angle x=90^{\circ}\) \(36^{\circ}-36^{\circ}+\angle x=90^{\circ}-36^{\circ}\) Then, simplify both sides of the equation. \(36^{\circ}-36^{\circ}+\angle x=90^{\circ}-36^{\circ}\) \(\angle x = 54^{\circ}\) The answer is \(54^{\circ}\). An angle of \(54^{\circ}\) is made between the first base foul line and the path of the baseball. Example \(\PageIndex{2}\) If the following angles are complementary, find the measure of the missing angle. \(\angle A=37^{\circ}\) then \(\angle B=\)? First, draw a diagram to model the problem. Next, state the relationship between \(\angle A\) and \(\angle B\). \(\angle A\) and \(\angle B\) are complementary angles. The sum of the angles is \(90^{\circ}\). Next, express the relationship using symbols. \(\angle A+ \angle B=90^{\circ}\) Next, substitute the measure of \(\angle A\) into the equation. \(37^{\circ}+ \angle B=90^{\circ}\) Next, subtract \(37^{\circ}\) from both sides of the equation. 
\(37^{\circ}+ \angle B=90^{\circ}\) \(37^{\circ}- 37^{\circ}+ \angle B=90^{\circ}- 37^{\circ}\) Then, simplify both sides of the equation. \(37^{\circ}- 37^{\circ}+ \angle B=90^{\circ}- 37^{\circ}\) \(\angle B =53^{\circ}\) The answer is \(53^{\circ}\). \(m \angle B =53^{\circ}\) Example \(\PageIndex{3}\) If the following angles are supplementary, find the measure of the missing angle. \(\angle A=102^{\circ}\) then \(\angle B=\)? First, draw a diagram to model the problem. Next, state the relationship between \(\angle A\) and \(\angle B\). \(\angle A\) and \(\angle B\) are supplementary angles. The sum of the angles is 180°. Next, express the relationship using symbols. \(\angle A+ \angle B=180^{\circ}\) Next, substitute the measure of \(\angle A\) into the equation. \(102^{\circ}+\angle B=180^{\circ}\) Next, subtract \(102^{\circ}\) from both sides of the equation. \(102^{\circ}+\angle B=180^{\circ}\) \(102^{\circ}-102^{\circ}+\angle B=180^{\circ}-102^{\circ}\) Then, simplify both sides of the equation. \(102^{\circ}-102^{\circ}+\angle B=180^{\circ}-102^{\circ}\) \(\angle B=78^{\circ}\) The answer is \(78^{\circ}\). \(m \angle B=78^{\circ}\) Example \(\PageIndex{4}\) Using the following diagram, determine the measures of the missing angles. First, state the relationship between the angle of \(\angle 1\) and \(\angle 3\). \(\angle 1\) and \(\angle 3\) are vertical angles and are equal in measure. Next, express the relationship using symbols. \(m\angle 1=m \angle 3\) Next, substitute the measure of \(\angle 1\) into the equation. \(m\angle 1=m \angle 3\) \(137^{\circ}=m\angle 3\) The answer is \(137^{\circ}\). \(m\angle 3= 137^{\circ}\) First, state the relationship between the angle of \(\angle 1\) and \(\angle 2\). \(\angle 1\) is adjacent to \(\angle 2\) and the two angles form a straight angle. Next, express the relationship using symbols. \(\angle 1+\angle 2=180^{\circ}\) Next, substitute the measure of \(\angle 1\) into the equation. \(137^{\circ}+\angle 2=180^{\circ}\) Next, subtract \(137^{\circ}\) from both sides of the equation. \(137^{\circ}+\angle 2=180^{\circ}\) \(137^{\circ}-137^{\circ}+\angle 2=180^{\circ}-137^{\circ}\) Then, simplify both sides of the equation. \(137^{\circ}-137^{\circ}+\angle 2=180^{\circ}-137^{\circ}\) \(\angle 2=43^{\circ}\) The answer is \(43^{\circ}\). \(m \angle 2=43^{\circ}\) First, state the relationship between the angle of \(\angle 2\) and \(\angle 4\). \(\angle 2\) and \(\angle 4\) are vertical angles and are equal in measure. Next, express the relationship using symbols. \(m \angle 2=m \angle 4\) Next, substitute the measure of \(\angle 2\) into the equation. \(m \angle 2=m \angle 4\) \(43^{\circ}=m \angle 4\) The answer is \(43^{\circ}\). \(m \angle 4=43^{\circ}\) If the following angle pairs are complementary, then what is the measure of the missing angle? 1. If \(\angle A=45^{\circ}\) then \(\angle B=\)? 2. If \(\angle C=83^{\circ}\) then \(\angle D=\)? 3. If \(\angle E=33^{\circ}\) then \(\angle F=\)? 4. If \(\angle G=53^{\circ}\) then \(\angle H=\)? If the following angle pairs are supplementary, then what is the measure of the missing angle? 5. If \(\angle A=40^{\circ}\) then \(\angle B=\)? 6. If \(\angle A=75^{\circ}\) then \(\angle B=\)? 7. If \(\angle C=110^{\circ}\) then \(\angle F=\)? 8. If \(\angle D=125^{\circ}\) then \(\angle E=\)? 9. If \(\angle M=10^{\circ}\) then \(\angle N=\)? 10. If \(\angle O=157^{\circ}\) then \(\angle P=\)? Define the following types of angle pairs. 11. Vertical angles 12. Adjacent angles 13. 
Complementary angles 14. Supplementary angles 15. Interior angles
Review (Answers) To see the Review answers, open this PDF file and look for section 6.4.
Term Definition
Adjacent Angles: Two angles are adjacent if they share a side and vertex. The word 'adjacent' means 'beside' or 'next-to'.
Angle: A geometric figure formed by two rays that connect at a single point or vertex.
Intersecting lines: Intersecting lines are lines that cross or meet at some point.
Parallel: Two or more lines are parallel when they lie in the same plane and never intersect. These lines will always have the same slope.
Perpendicular lines: Perpendicular lines are lines that intersect at a \(90^{\circ}\) angle.
Straight angle: A straight angle is an angle of \(180^{\circ}\), whose sides form a straight line.
Additional Resources Video: Complementary, Supplementary, and Vertical Angles Practice: Angle Properties and Theorems
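The complement and supplement relationships used throughout this lesson are simple enough to check programmatically. A small Python illustration (ours, not part of the original lesson):

```python
def complement(angle_deg):
    """Complementary angles sum to 90 degrees."""
    return 90 - angle_deg

def supplement(angle_deg):
    """Supplementary angles sum to 180 degrees."""
    return 180 - angle_deg

print(complement(37))   # 53, matching Example 2
print(supplement(102))  # 78, matching Example 3
```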
{"url":"https://di2eplugfest.org/article/1-13-angle-properties-and-theorems","timestamp":"2024-11-05T19:42:51Z","content_type":"text/html","content_length":"113717","record_id":"<urn:uuid:52259bdd-3d0c-4817-b8fb-e848d4668c64>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00602.warc.gz"}
Chinese Remainder Theorem | Brilliant Math & Science Wiki Chinese Remainder Theorem The Chinese remainder theorem is a theorem which gives a unique solution to simultaneous linear congruences with coprime moduli. In its basic form, the Chinese remainder theorem will determine a number \(p\) that, when divided by some given divisors, leaves given remainders. Chinese Remainder Theorem Given pairwise coprime positive integers \( n_1, n_2, \ldots, n_k\) and arbitrary integers \(a_1, a_2, \ldots, a_k\), the system of simultaneous congruences \[ x &\equiv a_1 \pmod{n_1}\\ x &\equiv a_2 \pmod{n_2}\\ & \vdots\\ x &\equiv a_k \pmod{n_k} \] has a solution, and the solution is unique modulo \(N = n_1n_2\cdots n_k\). The following is a general construction to find a solution to a system of congruences using the Chinese remainder theorem: 1. Compute \(N = n_1 \times n_2 \times \cdots \times n_k\). 2. For each \(i = 1, 2,\ldots, k\), compute \[y_i = \frac{N}{n_i} = n_1n_2 \cdots n_{i-1}n_{i+1} \cdots n_k.\] 3. For each \( i = 1,2,\ldots, k\), compute \(z_i \equiv y_i^{-1} \bmod{n_i}\) using Euclid's extended algorithm (\(z_i\) exists since \(n_1, n_2, \ldots, n_k\) are pairwise coprime). 4. The integer \( x = \sum_{i=1}^{k} a_i y_i z_i \) is a solution to the system of congruences, and \(x \bmod{N} \) is the unique solution modulo \(N\). To see why \(x\) is a solution, for each \(i = 1, 2, \ldots, k\), we have \[ x & \equiv ( a_1 y_1 z_1 + a_2y_2z_2 + \cdots+ a_k y_k z_k) & \pmod{n_i}\\ & \equiv a_i y_i z_i & \pmod{n_i} \\ & \equiv a_i & \pmod{n_i}, \] where the second line follows since \(y_j \equiv 0 \bmod{n_i}\) for each \(j \neq i \), and the third line follows since \(y_i z_i \equiv 1 \bmod{n_i}\). Now, suppose there are two solutions \(u\) and \(v\) to the system of congruences. Then \(n_1 \lvert (u -v), n_2 \lvert (u-v), \ldots, n_k \lvert (u-v)\), and since \(n_1, n_2, \ldots, n_k\) are relatively prime, we have that \(n_1n_2\cdots n_k \) divides \(u-v\), or \[u \equiv v \pmod{n_1n_2\cdots n_k} .\] Thus, the solution is unique modulo \(n_1n_2\cdots n_k\). \(_\square\) Solving Systems of Congruences When a system contains a relatively small number of congruences, an efficient process exists to apply the Chinese remainder theorem. Solve the system of congruences \[\begin{cases} x &\equiv 1 \pmod{3} \\ x &\equiv 4 \pmod{5} \\ x &\equiv 6 \pmod{7}. \end{cases}\] Begin with the congruence with the largest modulus, \(x \equiv 6 \pmod{7}.\) Rewrite this congruence as an equivalent equation: \[x=7j+6, \text{ for some integer }j.\] Substitute this expression for \(x\) into the congruence with the next largest modulus: \[x \equiv 4 \pmod{5} \implies 7j+6 \equiv 4 \pmod{5}.\] Then solve this congruence for \(j:\) \[j \equiv 4 \pmod{5}.\] Rewrite this congruence as an equivalent equation: \[j=5k+4, \text{ for some integer }k.\] Substitute this expression for \(j\) into the expression for \(x:\) \[ x &= 7(5k+4)+6 \\ x &= 35k+34. \\ \] Now substitute this expression for \(x\) into the final congruence, and solve the congruence for \(k:\) \[ 35k+34 &\equiv 1 \pmod{3} \\ k &\equiv 0 \pmod{3}. \] Write this congruence as an equation, and then substitute the expression for \(k\) into the expression for \(x:\) \[ k &= 3l, \text{ for some integer }l. \\ x &= 35(3l)+34 \\ x &= 105l+34. 
\] This equation implies the congruence \[x \equiv 34 \pmod{105}.\] This is the solution to the system of congruences.\(\ _\square\) Process to solve systems of congruences with the Chinese remainder theorem: For a system of congruences with co-prime moduli, the process is as follows:
□ Begin with the congruence with the largest modulus, \(x \equiv a_k \pmod{n_k}.\) Re-write this congruence as an equation, \(x=n_kj_k+a_k,\) for some positive integer \(j_k.\)
□ Substitute the expression for \(x\) into the congruence with the next largest modulus, \(x \equiv a_k \pmod{n_k} \implies n_kj_k+a_k \equiv a_{k-1} \pmod{n_{k-1}}.\)
□ Solve this congruence for \(j_k.\)
□ Write the solved congruence as an equation, and then substitute this expression for \(j_k\) into the equation for \(x.\)
□ Continue substituting and solving congruences until the equation for \(x\) implies the solution to the system of congruences.
A box contains gold coins. If the coins are equally divided among six friends, four coins are left over. If the coins are equally divided among five friends, three coins are left over. If the box holds the smallest number of coins that meets these two conditions, how many coins are left when equally divided among seven friends? The Chinese remainder theorem can be applied to systems with moduli that are not co-prime, but a solution to such a system does not always exist. Solve the system of congruences \[\begin{cases} x &\equiv 5 \pmod{6} \\ x &\equiv 3 \pmod{8}. \\ \end{cases}\] Note that the greatest common divisor of the moduli is 2. The first congruence implies \(x \equiv 1\pmod {2}\) and the second congruence also implies \(x \equiv 1 \pmod{2}.\) Therefore, there is no conflict between these two congruences. In fact, the system of congruences can be reduced to a simpler system of congruences by dividing out the GCD of the moduli from the modulus of the first congruence: \[\begin{cases} x &\equiv 2 \pmod{3} \\ x &\equiv 3 \pmod{8}. \\ \end{cases}\] Write the second congruence as an equation: \[x=8j+3, \text{ for some integer }j.\] Substitute into the first congruence and solve for \(j:\) \[ 8j+3 &\equiv 2 \pmod{3} \\ j &\equiv 1 \pmod{3}. \] Write this congruence as an equation, and then substitute into the equation for \(x:\) \[ j &= 3k+1 \\ x &= 8(3k+1)+3 \\ x &= 24k+11. \] This gives \(x \equiv 11 \pmod{24}\) as the solution to the system of congruences. Note that \(\text{lcm}(6,8)=24.\) \(_\square\) The number of students in a school is between 500 and 600. If we group them into groups of 12, 20, or 36 each, 7 students are always left over. How many students are in this school? Whether or not a system of congruences has solutions depends on whether there are any conflicts between pairs of congruences. Show that there are no solutions to the system of congruences: \[\begin{cases} x &\equiv 2 \pmod{6} \\ x &\equiv 5 \pmod{9} \\ x &\equiv 7 \pmod{15}. \end{cases}\] Note that each modulus is divisible by 3. The first and second congruences imply that \(x \equiv 2 \pmod{3}.\) However, the third congruence implies that \(x \equiv 1 \pmod{3}.\) Since these both cannot be true, there are no solutions to the system of congruences. \(_\square\) A system of linear congruences has solutions if and only if, for every pair of congruences within the system \[ \begin{cases} x \equiv a_i \pmod{n_i} \\ x \equiv a_j \pmod{n_j}, \end{cases} \] it holds that \[ a_i \equiv a_j \pmod{\gcd(n_i,n_j)}.
\] Furthermore, if solutions exist, then they are of the form \[x \equiv b\ \ \big(\text{mod }\ {\text{lcm}(n_1,n_2, \ldots , n_k)}\big)\] for some integer \(b.\) Brahmagupta has a basket full of eggs. When he takes the eggs out of the basket 2 at a time, there is 1 egg left over. When he takes them out 3 at a time, there are 2 eggs left over. Likewise, when he takes the eggs out 4, 5, and 6 at a time, he finds remainders of 3, 4, and 5, respectively. However, when he takes the eggs out 7 at a time, there are no eggs left over. What is the least amount of eggs that could be in Brahmagupta's basket? The real life application of the Chinese remainder theorem might be of interest to the reader, so we will give one such example here. One use is in astronomy where \(k\) events may occur regularly, with periods \(n_{1}, n_{2}, \ldots, n_{k}\) and with the \(i^\text{th}\) event happening at times \(x = a_i, a_i+n_i, a_i+2n_i, \ldots.\) This means that the \(k\) events occur simultaneously at time \(x,\) where \(x = a_i \bmod{n_i}\) for all \({i}\). A simple illustration of this is the orbit of planets and moons, as well as eclipses. Comets 2P/Encke, 4P/Faye, and 8P/Tuttle have orbital periods of 3 years, 8 years, and 13 years, respectively. The last perihelions of each of these comets were in 2017, 2014, and 2008, respectively. What is the next year in which all three of these comets will achieve perihelion in the same year? For this problem, assume that time is measured in whole numbers of years and that each orbital period is constant. Sometimes, a problem will lend itself to using the Chinese remainder theorem "in reverse." That is, when a problem requires you to compute a remainder with a composite modulus, it can be worthwhile to consider that modulus's prime power divisors. Then, the Chinese remainder theorem will guarantee a unique solution in the original modulus. What are the last two digits of \(49^{19}?\) Observe that \(100 = 25 \times 4\) and \(\gcd(25,4) = 1\). Then by the Chinese remainder theorem, the value \(x \equiv 49^{19} \bmod{100}\) is in correspondence with the solutions to the simultaneous congruences \[ x \equiv 49^{19} &\pmod{25}\\ x \equiv 49^{19} &\pmod{4}. \] \[ 49^{19} \equiv (-1)^{19} &\equiv -1 &\pmod{25}\\ 49^{19} \equiv (1)^{19} &\equiv 1 &\pmod{4}. \] Then the Chinese remainder theorem gives the value \[ x &\equiv \big((-1)(4)(19) + (1)(25)(1)\big) &\pmod{100}\\ & \equiv ( -76 + 25) &\pmod{100}\\ & \equiv -51 &\pmod{100}\\ &\equiv 49 &\pmod{100}. \] Therefore, the last two digits of \(49^{19}\) are 49. Note that the above system of congruences is obtained for any odd exponent of 49, so the solution using the Chinese remainder theorem also gives that the last two digits of \(49^k\) are 49 for any positive odd value of \(k\). \(_\square\) The Chinese remainder theorem can be useful for proofs. Show that there exist \(99\) consecutive integers \(a_1, a_2, \ldots, a_{99}\) such that each \(a_i\) is divisible by the cube of some integer greater than 1. Let \(p_1, p_2, \ldots, p_{99}\) be distinct prime numbers. Now, consider the simultaneous congruences \[ x & \equiv -1 \pmod{p_1^3}\\ x & \equiv -2 \pmod{p_2^3}\\ & \vdots\\ x & \equiv -99 \pmod{p_{99}^3}.\\ \] Since \(p_i\) are pairwise coprime, this system of equations has a solution by the Chinese remainder theorem. Then the integers \(a_i = x+i\) for \(i = 1,2, \ldots, 99\) are 99 consecutive integers such that \(p_i^3 \) divides \(a_i\). \(_\square\) Try these problems to test what you know. 
Find the smallest whole number that, when divided by 5, 7, 9, and 11, gives remainders of 1, 2, 3, and 4, respectively. A general counts the number of surviving soldiers of a battle by aligning them successively in rows of certain sizes. Each time, he counts the number of remaining soldiers who failed to fill a row. The general initially had 1200 soldiers before the battle; after the battle • aligning them in rows of 5 soldiers leaves 3 remaining soldiers; • aligning them in rows of 6 soldiers leaves 3 remaining soldiers; • aligning them in rows of 7 soldiers leaves 1 remaining soldier; • aligning them in rows of 11 soldiers leaves 0 remaining soldiers. How many soldiers survived the battle? Four friends--let's call them A, B, C, and D--are planning to go to the concert, but they realize that they are a few dollars short to buy the tickets ($50 per ticket). We know that each of them has an integer amount of dollars and that if \(B\) borrowed $\(1\) from \(A\), then \(B\) would have \(\frac{2}{3}\) of \(A\)'s balance; if \(C\) borrowed $\(2\) from \(B\), then \(C\) would have \(\frac{3}{5}\) of \(B\)'s balance; if \(D\) borrowed $\(3\) from \(C\), then \(D\) would have \(\frac{5}{7}\) of \(C\)'s balance. At least how much more money (in $) do they need all together in order to afford 4 tickets? Find the last three digits of the number \[ 3 \times 7 \times 11 \times 15 \times \cdots \times 2003. \] You are building a rainbow building from \(N\) cubic unit blocks. First, you put together 3 cubes each to form a group of \(1\times 3\) columns and discard the remaining cubes. Then, you put together 5 columns each to form a group of \(3\times 5\) cubic bases and discard the remaining columns, as before. Finally, you stack 7 bases over one another to build the desired \(3\times 5\times 7\) cuboid structure, as shown above, and discard all the other bases, as usual. If there are a total of 52 discarded cubes, and \(N\) is a multiple of 11, what is the least possible value of \(N?\) What is the remainder when \(\Huge \color{red}{12}^{\color{green}{34}^{\color{blue}{56}^{\color{brown}{78}}}}\) is divided by \(\color{indigo}{90}?\) \[\large 2017!\] Find the last two non-zero digits in the number above.
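A direct transcription of the four-step construction from the beginning of this article may also help. The sketch below is our own (the function name crt and the second test case are ours; Python 3.8+ is assumed for the built-in modular inverse pow(y, -1, n)):

```python
# Our own transcription of the four-step CRT construction described above.
from math import prod

def crt(residues, moduli):
    """Solve x = a_i (mod n_i) for pairwise-coprime moduli n_i."""
    N = prod(moduli)                     # step 1
    x = 0
    for a_i, n_i in zip(residues, moduli):
        y_i = N // n_i                   # step 2
        z_i = pow(y_i, -1, n_i)          # step 3: modular inverse of y_i mod n_i
        x += a_i * y_i * z_i             # step 4
    return x % N

print(crt([1, 4, 6], [3, 5, 7]))  # 34, matching the worked example above
print(crt([3, 2], [5, 6]))        # 8: an extra sanity check of our own
```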
{"url":"https://brilliant.org/wiki/chinese-remainder-theorem/?subtopic=modular-arithmetic&chapter=basic-applications","timestamp":"2024-11-10T18:00:40Z","content_type":"text/html","content_length":"79045","record_id":"<urn:uuid:8613c63c-de1e-47b6-868a-9028f7615640>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00080.warc.gz"}
Why do we use coefficient of variation?
The most common use of the coefficient of variation is to assess the precision of a technique. It is also used as a measure of variability when the standard deviation is proportional to the mean, and as a means to compare the variability of measurements made in different units.
What is the difference between standard deviation and relative standard deviation?
The relative standard deviation (RSD) is a special form of the standard deviation (std dev). As the denominator is the absolute value of the mean, the RSD will always be positive. The RSD tells you whether the “regular” std dev is a small or large quantity when compared to the mean of the data set.
How do you express standard deviation?
To calculate the standard deviation of a set of numbers: work out the mean (the simple average of the numbers); then, for each number, subtract the mean and square the result; then work out the mean of those squared differences; finally, take the square root of that and we are done!
What is absolute standard deviation?
Mean absolute deviation (MAD) of a data set is the average distance between each data value and the mean. Mean absolute deviation is a way to describe variation in a data set, and it helps us get a sense of how “spread out” the values in a data set are.
Can you have a negative standard deviation?
As soon as you have at least two numbers in the data set which are not exactly equal to one another, the standard deviation has to be greater than zero, i.e., positive. Under no circumstances can the standard deviation be negative.
What does deviation mean?
Mean deviation is the mean of the absolute values of the numerical differences between the numbers of a set (such as statistical data) and their mean or median.
What is the difference between standard deviation and mean deviation?
Standard deviation is the deviation from the mean, and the standard deviation is nothing but the square root of the variance. Mean is an average of all the data available with an investor.
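To make these definitions concrete, here is a small NumPy companion (ours; the measurements are made up) computing the standard deviation, coefficient of variation / RSD, and mean absolute deviation of one data set:

```python
import numpy as np

x = np.array([12.1, 11.8, 12.4, 12.0, 11.7])  # made-up measurements

mean = x.mean()
sd = x.std(ddof=1)               # sample standard deviation
cv = sd / abs(mean)              # coefficient of variation (unitless)
rsd = 100 * cv                   # relative standard deviation, as a percentage
mad = np.mean(np.abs(x - mean))  # mean absolute deviation

print(f"mean={mean:.3f}  sd={sd:.3f}  CV={cv:.4f}  RSD={rsd:.2f}%  MAD={mad:.3f}")
```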
{"url":"https://bridgitmendlermusic.com/why-do-we-use-coefficient-of-variation/","timestamp":"2024-11-07T23:43:56Z","content_type":"text/html","content_length":"40332","record_id":"<urn:uuid:503b6195-4fd1-40f6-84c5-ee31e2f99620>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00647.warc.gz"}
What is a Tensor in Machine Learning? A tensor is a mathematical object that is used in machine learning. Tensors are used to represent data in a way that can be used by machines to learn. What is a Tensor? Tensors are mathematical objects that generalize scalars, vectors, and matrices. In machine learning, we represent data using tensors. Tensors are similar to matrices but can be of any rank. In this post, we will see what a tensor is and why it is useful in machine learning. A tensor is a generalization of a matrix. Matrices are 2-dimensional tensors and vectors are 1-dimensional tensors. A scalar is a 0-dimensional tensor. Tensors can be represented as arrays. The order of the array is the rank of the tensor. For example, a matrix would be represented as a 2-dimensional array and a vector would be represented as a 1-dimensional array. Tensors are useful in machine learning because they can represent data of any kind, including images, which are represented as 3-dimensional tensors. What is a Tensor in Machine Learning? A tensor is a mathematical object that encodes information about a certain type of physical phenomenon. In machine learning, tensors are used to represent data sets of various sizes and shapes. Tensors can be thought of as generalizations of vectors and matrices, which are two special cases of tensors. For example, a vector can be seen as a 1-dimensional tensor, while a matrix can be seen as a 2-dimensional tensor. What are the benefits of using a Tensor in Machine Learning? A tensor is an n-dimensional data structure that is used extensively in machine learning and deep learning. Tensors are similar to vectors and matrices but can be used to represent data with more than two dimensions. Tensors are useful for a variety of tasks including image classification, natural language processing, and predictive modeling. In general, tensors provide a convenient way of representing data in a form that can be processed by machine learning algorithms. There are many benefits of using tensors in machine learning, including:
- Tensors can represent data with more than two dimensions, which is often required for complex tasks such as image classification or natural language processing.
- Tensors are often faster to process than other data structures such as vectors or matrices. This is because tensors can be parallelized across multiple devices such as GPUs or CPUs.
- Tensors provide a compact representation of data, which can lead to improved performance when training machine learning models.
How can a Tensor be used in Machine Learning? In mathematical terms, a tensor is an array of numbers that can be used to represent various types of data. For example, a two-dimensional tensor can represent a matrix of numbers, while a three-dimensional tensor can represent a cube of numbers. In machine learning, tensors are often used to represent data such as images, which can be thought of as three-dimensional arrays of pixel values. Tensors can be used for a variety of tasks in machine learning, such as image classification and object detection. In general, any task that can be represented as a mathematical function can be expressed using tensors. For example, consider the task of classifying images of handwritten digits. Each image can be thought of as a collection of pixels, and each pixel can be represented by a single number (the intensity value).
Thus, the entire image can be represented by a two-dimensional tensor. To classify an image, we first need to extract features from the image that will be used by the classifier. This is typically done using a convolutional neural network (CNN), which is a type of neural network that is designed specifically for working with images. The output of the CNN will be a high-dimensional tensor that represents the extracted features. This tensor will then be passed to the classifier, which will use it to make predictions about the input image. What are the types of Tensors in Machine Learning? Tensors are one of the most important data structures in machine learning, and are used to represent data in a variety of different ways. There are three main types of tensors: scalars, vectors, and matrices. Scalars are single numbers, such as 3 or -2. Vectors are arrays of numbers, such as [1, 2, 3] or [-1, 0, 1]. Matrices are 2-dimensional arrays of numbers, such as [[1, 2], [3, 4]]. What are the applications of Tensors in Machine Learning? Tensors are a powerful tool for signal processing and are used in a variety of applications, including machine learning. Tensors can be used to represent data in a variety of ways, including vectorized data, images, and text. In machine learning, tensors are often used to represent data that is input into a neural network. A tensor is a mathematical object that can be represented as an array of numbers. In machine learning, tensors can be used to represent data in a variety of ways, including vectorized data, images, and text. In this article, we will explore the use of tensors in machine learning and the different types of tensors that are commonly used. What are the challenges of using Tensors in Machine Learning? Tensors are a powerful tool for machine learning, but they come with a few challenges. One challenge is that tensors are often high-dimensional, which can make them difficult to work with. Another challenge is that tensors are often used to represent data that is structured in a way that is difficult to interpret. For example, images can be represented as tensors, but the way the data is structured makes it difficult to understand what the tensor represents. Finally, tensors can be very computationally expensive to work with, which can make it difficult to train models using tensors. How can Tensors be used to improve Machine Learning? Tensors are powerful mathematical objects that can be used to improve the performance of machine learning algorithms. In this post, we will explore what tensors are and how they can be used to improve the performance of machine learning algorithms. Tensors are mathematical objects that can be used to represent data. Tensors are similar to vectors, but they can have any number of dimensions. For example, a vector is a 1-dimensional tensor, while a matrix is a 2-dimensional tensor. Tensors are often used to represent data in machine learning algorithms. Machine learning algorithms often require the use of large amounts of data. This data can be represented as a tensor. For example, if we have a dataset with 100 examples and each example has 10 features, we can represent this dataset as a 2-dimensional tensor with dimensions [100, 10]. Tensors can be used to improve the performance of machine learning algorithms in several ways. First, tensors can be used to represent data in a more efficient way than vectors or matrices.
This is because tensors can have any number of dimensions, while vectors and matrices are limited to one or two dimensions, respectively. Second, tensors can be used to make calculations more efficient. Finally, tensors can be used to regularize machine learning models, which we will discuss in more detail below. Regularization is a technique that is used to prevent overfitting in machine learning models. Overfitting occurs when a model learns the training data too well and does not generalize well to new data. Regularization techniques penalize certain terms in the model so that the model does not overfit the training data. One popular regularization technique is called early stopping. Early stopping is a technique that stops training a machine learning model when the validation error starts to increase (i.e., when the model starts overfitting the training data). Early stopping is effective but it requires extra computational resources because it requires multiple models to be trained (one for each stopping point). Another popular regularization technique is called L1 regularization (also called LASSO regularization). L1 regularization penalizes terms that have large values (i.e., it encourages sparse models). L1 regularization is effective but it is not computationally efficient because it requires solving an optimization problem with an expensive objective function (the L1 norm). L2 regularization is another popular regularization technique that penalizes terms that have large values (like L1 regularization) but it does not have an expensive objective function (unlike L1 regularization). Therefore, L2 regularization is computationally efficient and it often outperforms other regularization techniques like early stopping and L1 regularization. What are the future prospects of Tensors in Machine Learning? Tensors are a powerful tool for representing data in machine learning. They are able to capture complex relationships between data points and can be used for a variety of tasks such as classification, regression, and clustering. While there is still active research on the best way to use tensors for machine learning, the future prospects are promising. In conclusion, a tensor is a mathematical structure that can be used to represent data in a consistent way across different types of machine learning models. Tensors are especially important in deep learning, where they are used to represent data at multiple levels of abstraction. By understanding tensors and how they work, you can more effectively build and train machine learning models.
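The rank/shape vocabulary used throughout this post maps directly onto NumPy arrays. A quick, self-contained illustration (ours, not from the original article):

```python
import numpy as np

scalar = np.array(3.0)                        # rank-0 tensor (a single number)
vector = np.array([1.0, 2.0, 3.0])            # rank-1 tensor
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])   # rank-2 tensor
image = np.zeros((28, 28, 3))                 # rank-3 tensor: height x width x channels
dataset = np.zeros((100, 10))                 # 100 examples, 10 features, as in the text

for t in (scalar, vector, matrix, image, dataset):
    print(t.ndim, t.shape)
```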
{"url":"https://reason.town/what-is-a-tensor-in-machine-learning-2/","timestamp":"2024-11-13T00:57:59Z","content_type":"text/html","content_length":"99936","record_id":"<urn:uuid:e1f1e5fe-7414-443e-92e3-cbd69cae21a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00789.warc.gz"}
Frontiers | Geometric Complexity and the Information-Theoretic Comparison of Functional-Response Models • ^1Department of Integrative Biology, Oregon State University, Corvallis, OR, United States • ^2Centre for Integrative Ecology, School of Biological Sciences, University of Canterbury, Christchurch, New Zealand The assessment of relative model performance using information criteria like AIC and BIC has become routine among functional-response studies, reflecting trends in the broader ecological literature. Such information criteria allow comparison across diverse models because they penalize each model's fit by its parametric complexity—in terms of their number of free parameters—which allows simpler models to outperform similarly fitting models of higher parametric complexity. However, criteria like AIC and BIC do not consider an additional form of model complexity, referred to as geometric complexity, which relates specifically to the mathematical form of the model. Models of equivalent parametric complexity can differ in their geometric complexity and thereby in their ability to flexibly fit data. Here we use the Fisher Information Approximation to compare, explain, and contextualize how geometric complexity varies across a large compilation of single-prey functional-response models—including prey-, ratio-, and predator-dependent formulations—reflecting varying apparent degrees and forms of non-linearity. Because a model's geometric complexity varies with the data's underlying experimental design, we also sought to determine which designs are best at leveling the playing field among functional-response models. Our analyses illustrate (1) the large differences in geometric complexity that exist among functional-response models, (2) there is no experimental design that can minimize these differences across all models, and (3) even the qualitative nature by which some models are more or less flexible than others is reversed by changes in experimental design. Failure to appreciate model flexibility in the empirical evaluation of functional-response models may therefore lead to biased inferences for predator–prey ecology, particularly at low experimental sample sizes where its impact is strongest. We conclude by discussing the statistical and epistemological challenges that model flexibility poses for the study of functional responses as it relates to the attainment of biological truth and predictive ability. 1. Introduction Seek simplicity and distrust it. Whitehead (1919) Alfred North Whitehead, The Concept of Nature, 1919. The literature contains thousands of functional-response experiments (DeLong and Uiterwaal, 2018), each seeking to determine the relationship between a given predator's feeding rate and its prey's abundance. In parallel, dozens of functional-response models have been proposed (Jeschke et al., 2002, Table 1), each developed to encapsulate aspects of the variation that exists among predator and prey biologies. The desire to sift through these and identify the “best” model on the basis of data is strong given the frequent sensitivity of theoretical population-dynamic predictions to model structure and parameter values (e.g., Fussmann and Blasius, 2005; Aldebert and Stouffer, 2018). 
Information-theoretic model comparison criteria like the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) have rapidly become the preeminent tool for satisfying this desire in a principled and quantitative manner (Okuyama, 2013), mirroring their increasing ubiquity across the ecological literature as a whole (Ellison, 2004; Johnson and Omland, 2004; Aho et al., 2014). Generically, criteria like AIC and BIC make the comparison of model performance an unbiased and equitable process. For standard linear regression models (and most other models), increasing model complexity by including additional free parameters will always result in a better fit to the data. Therefore, by the principle of parsimony or because such increases in fit typically come at the cost of generality beyond the focal dataset, model performance is judged by the balance of fit and complexity when other reasons to disqualify a model do not apply (Burnham and Anderson, 2002; Höge et al., 2018; but see Evans et al., 2013; Coelho et al., 2019). Table 1. The deterministic functional-response models we considered for describing the per predator rate at which prey are eaten as a function of prey abundance N, predator abundance P, and the parameter(s) θ. While differing fundamentally in their underlying philosophies, motivations, and assumptions (Aho et al., 2014; Höge et al., 2018), both AIC and BIC implement the balance of fit and complexity in a formal manner by penalizing a model's likelihood with a cost that depends on its number of free parameters. Specifically, for each model in the considered set of models, $\mathrm{AIC} = -2\ln L(\theta_{\mathrm{mle}} \mid y) + 2k \qquad (1)$ $\mathrm{BIC} = -2\ln L(\theta_{\mathrm{mle}} \mid y) + k\ln(n), \qquad (2)$ with the model evidencing the minimum value of one or the other criterion being judged as the best-performing model. For both criteria, the first term is twice the model's negative log-likelihood (evaluated at its maximum likelihood parameter values θ[mle]) given the data y. This term reflects the model's goodness-of-fit to the data. The second term of each criterion is a function of the model's number of free parameters k. This term reflects a model's parametric complexity. For AIC, a model's complexity is considered to be independent of the data while for BIC it is dependent on the dataset's sample size n; that is, BIC requires each additional parameter to explain proportionally more for datasets with larger sample size. The statistical clarity of the best-performing designation is typically judged by a difference of two information units between the best- and next-best performing models (Kass and Raftery, 1995; Burnham and Anderson, 2002). An issue for criteria like AIC and BIC is that a model's ability to fit data is not solely a function of its parametric complexity and mechanistic fidelity to the processes responsible for generating the data. This can be problematic because all models—whether it be due to their deterministic skeleton or their stochastic shell—are phenomenological to some degree in that they can never faithfully encode all the biological mechanisms responsible for generating data (see also Connolly et al., 2017; Hart et al., 2018). Consequently, a given model may fit data better than all other models even when it encodes the mechanisms or processes for generating the data less faithfully. One way in which this can happen is when models differ in their flexibility. A model's flexibility is determined by its mathematical form and can therefore differ among models having the same parametric complexity.
For example, although the models y = α + βx and y = αx^β have the same number of parameters and can both fit a linear relationship, the second model has a functional form that is more flexible in that it can also accommodate nonlinearities. In fact, the second model may fit some data better than the first even if the first is responsible for generating them. The chance of this happening will vary with the design of the experiment (e.g., minimizing noise and maximizing the range of x) and decreases as sample size increases (i.e., as the ratio of signal to noise increases). Unfortunately, sample sizes in the functional-response literature are often not large (Novak and Stouffer, 2021), and the degree to which experimental design is important given the variation in mathematical forms that exists among functional-response models has not been addressed. Here our goal is to better understand the contrasting flexibility of functional-response models and its impact on their ranking under the information-theoretic model-comparison approach. We quantify model flexibility by geometric complexity (a.k.a. structural complexity) as estimated by the Fisher Information Approximation (FIA; Rissanen, 1996). Doing so for an encompassing set of functional-response models across experimental designs varying in prey and predator abundances, we find that geometric complexity regularly differs substantially among models of the same parametric complexity, that differences between some models can be reversed by changes to an experiment's design, and that no experimental design can minimize differences across all models. Although choices among alternative functional-response models should be informed by motivations beyond those encoded by quantitative or statistical measures of model performance and we do not here seek to promote the use of FIA as an alternative information criterion, our results add caution against interpreting information-theoretic functional-response model comparisons merely at face value. 2. Materials and Methods 2.1. Fisher Information Approximation The Fisher Information Approximation is an implementation of the Minimum Description Length principle (Rissanen, 1978) which Grünwald (2000) introduced as a means for making model comparisons (see Pitt et al., 2002; Myung et al., 2006; Ly et al., 2017, for details). The Minimum Description Length (MDL) principle considers the comparison of model performance as a comparison of how well each model can compress the information that is present in data, with the best-performing model being the one that describes the data with the shortest code length. In the extreme case of random noise, no compression is possible. FIA is asymptotically equivalent to the normalized maximum likelihood which Rissanen (1996) derived to operationalize the MDL principle, but is easier to implement (Myung et al., 2006). It is computed for each model as $\mathrm{FIA} = -\ln L(\theta_{\mathrm{mle}} \mid y) + \frac{k}{2}\ln\left(\frac{n}{2\pi}\right) + \ln \int_{D} \sqrt{\det I(\theta)}\, d\theta, \qquad (3)$ where the first term is the negative log-likelihood of the model given the data, the second term is a measure of a model's parametric complexity that is dependent on the data via the sample size n (Figure 1), and the third term is a measure of its geometric complexity (for which we henceforth use the symbol ${G}$). As described further in Box 1, FIA's geometric complexity reflects a model's ability to capture the space of potential outcomes that can be obtained given an experimental design.
It thereby depends only on the model's mathematical form and the structure underlying the observed data, but not on n. The contribution of geometric complexity to a model's FIA value consequently decreases with increasing sample size relative to the contributions of the likelihood and parametric complexity. This makes the effect of geometric complexity of greatest importance for datasets with low sample sizes. Figure 1. The dependence of parametric complexity on data sample size as estimated by the second term of the Fisher Information Approximation (FIA) for models with k = 1, 2, 3, and 4 free parameters. The potential importance of model flexibility to the information-theoretic ranking of functional-response models may be assessed by comparing their parametric and geometric complexity values or by comparing the geometric complexity values of models having the same parametric complexity because both measures of complexity are independent of the data beyond its sample size and structure (see main text and Box 1). For context, n = 80 was the median sample size of all functional-response datasets collated by Novak and Stouffer (2021). Box 1. Unpacking the third term of the Fisher Information Approximation. As described in greater detail in Pitt et al. (2002), Myung et al. (2006), and Ly et al. (2017), the Fisher Information Approximation estimates the geometric complexity ${G}$[M] of a model M as the natural log of the integration (over all parameters θ) of the square root of the determinant of the model's unit Fisher Information matrix I[M](θ): ${G}_{M} = \ln \int_{D_M} \sqrt{\det I_M(\theta)}\, d\theta. \qquad (4)$ The Fisher Information matrix I[M](θ) is a k × k matrix comprising the expected values of the second-order derivatives of the model's negative log-likelihood function with respect to each of its k parameters. It therefore reflects the sensitivities of the log-likelihood's gradient with respect to those parameters. The unit Fisher Information matrix is the expected value of these derivatives calculated across all potential experimental outcomes weighted by those outcomes' probabilities given the parameters θ. When an experimental design consists of multiple treatments the expectation is averaged across these. I[M](θ) therefore represents the expectation for a single observation (i.e., with a sample size of n = 1). For example, for a functional-response experiment having five prey-abundance treatment levels N ∈ {10, 20, 30, 40, 50} and a single predator-density level, the expectation is taken by associating a 1/5th probability to the unit Fisher Information matrix evaluated at each treatment level (see the Supplementary Materials for further details). The determinant of a matrix corresponds to its geometric volume. A larger determinant of the unit Fisher Information matrix therefore corresponds to a more flexible model that has higher gradient sensitivities for more of its parameters. Parameters that share all their information—such as parameters that only appear in a model as a product—result in matrix determinants of zero volume. Such non-identifiable models with statistically-redundant parameters require re-parameterization. Models can also be non-identifiable because of experimental design, such as when there is insufficient variation in predictor variables. For example, all predator-dependent functional-response models will be non-identifiable for designs entailing only a single predator abundance level (see Supplementary Figure S1).
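Equation (4) and the treatment-averaged unit Fisher Information are straightforward to approximate numerically. Below is a rough, self-contained Python sketch for a Holling Type II model with Poisson-distributed prey counts; the prey levels, the integration box for (a, h), and the crude Riemann-sum integration are our own illustrative stand-ins for the paper's outcome-based constraints and numerical methods, not the authors' code:

```python
import numpy as np

Ns = np.array([3.0, 5.0, 8.0, 13.0, 21.0])   # hypothetical prey-abundance levels
T = 1.0                                       # experiment duration; P = 1 predator

def unit_fim(a, h):
    """Unit Fisher Information for F_H2 = aN/(1+ahN) with Poisson counts,
    averaged over the prey-abundance treatments (cf. Box 1)."""
    denom = (1.0 + a * h * Ns) ** 2
    lam = a * Ns / (1.0 + a * h * Ns) * T     # expected number of prey eaten
    grad = np.stack([Ns / denom * T,                      # d(lam)/da
                     -(a ** 2) * Ns ** 2 / denom * T])    # d(lam)/dh
    # For Poisson data, I_ij = (1/lam) * dlam/dtheta_i * dlam/dtheta_j,
    # summed over treatments and divided by their number.
    return (grad / lam) @ grad.T / len(Ns)

# Riemann-sum approximation of ln ∫ sqrt(det I(a,h)) da dh over a finite box
a_grid = np.linspace(0.01, 2.0, 300)
h_grid = np.linspace(0.01, 1.0, 300)
da, dh = a_grid[1] - a_grid[0], h_grid[1] - h_grid[0]
vol = sum(np.sqrt(max(np.linalg.det(unit_fim(a, h)), 0.0))
          for a in a_grid for h in h_grid) * da * dh
print("approximate geometric complexity:", np.log(vol))
```

Note how a design with a single prey level would make the averaged matrix rank-1 (zero determinant), mirroring the non-identifiability point made just above.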
The domain ${D}$[M] of the integral reflects the range of values that the model's parameters could potentially exhibit. When a model is not over-specified, each location in parameter space also corresponds to a unique set of predicted model outcomes. As such, the domain of the integral reflects the space (volume) of potential experimental outcomes over which geometric complexity is calculated. Three closely related issues are pertinent in this regard: First, a closed-form solution of the indefinite integral in Equation (4) may not exist, and when it does it is often divergent. This means that numerical integration methods are necessary and that parameter ranges must typically be bounded (i.e., the domain ${D}$[M] must be finite and some outcomes must be rendered “impossible”). However, how to specify bounds on mathematical grounds is not always obvious. For example, for the ratio- and consumer-dependent models such as the Hassell–Varley (HV) model, the interference strength parameter is not mathematically limited but rather can take on any non-negative value to infinity if the “attack rate” parameter is similarly unconstrained. Second, for some experimental designs the range of parameter values may be more empirically restricted than is mathematically or even biologically permissible. For example, the handling time of the Holling Type II (H2) model (and all other models) is mathematically constrained only to be non-negative, and yet too large a handling time would mean that no prey are ever expected to be eaten except for prohibitively long experimental durations, an outcome few experimentalists would consider useful. Similarly, too large an attack rate would prevent an experimentalist from differentiating among models without the use of potentially intractable decreases in an experiment's duration. Experimental design thereby reduces the space of possible outcomes, particularly for designs in which eaten prey are continually replaced. Third, because a model's geometric complexity reflects the range of parameter values which are considered possible, two models can exhibit different relative geometric complexities for different experimental designs. However, different parameterizations of the same functional form must have the same geometric complexity for a given experimental design when the permissible range of their parameters is limited equivalently (see Box 2). This is an issue because recognizing that two models simply reflect alternative parameterizations is not always easy (e.g., contrast the original formulation of the Steady State Satiation model by Jeschke et al. (2002) in Supplementary Table S1 to our reformulation in Table 1). In our analyses, we overcome these three issues by imposing parameter constraints in a manner that is indirect and equitable across all models. We do so by imposing the same minimum and maximum constraints on the expected number of prey eaten (thus limiting the space of potential experimental outcomes) for all models, rather than on each model's parameters individually (see Methods: Parameter constraints). Box 2. Imposing equitable integration limits. Different parameterizations of the same functional form should always have the same geometric complexity for a given experimental design. However, this will only be true when the range of their parameter values over which the integration of Equation (4) is performed is limited equivalently, which can be challenging. 
This issue is irrelevant when solutions may be obtained in closed-form, but is not irrelevant when this is not possible, as we suspect is the case for almost all functional-response models applicable to experiments in which eaten prey are continually replaced. The challenge of determining equitable integration limits is well-demonstrated by a comparison of the Holling and Michaelis–Menten Type II functional-response models (Figure 2). These are typically written as $F_{\mathrm{H2}} = \frac{aN}{1 + ahN} \quad \text{and} \quad F_{\mathrm{MM}} = \frac{\alpha N}{\beta + N}, \qquad (5)$ the equivalence of which is demonstrated by substituting α = 1/h (the maximum feeding rate equals the inverse of the handling time) and β = 1/(ah) (the abundance at which half-saturation occurs is the inverse of the product of the attack rate and handling time). By definition, all four parameters (a, h, α and β) are limited only in that they must be non-negative; they could each, in principle, be infinitely large (i.e., ${D}$[H2] = {a ∈ [0, ∞), h ∈ [0, ∞)} and ${D}$[MM] = {α ∈ [0, ∞), β ∈ [0, ∞)}). If the integral in Equation (4) could then be computed analytically for the two models, we would always obtain ${G}$[H2] = ${G}$[MM] for any given experimental design. However, because the integrals in Equation (4) for the two models are divergent, finite limits to ${D}$[H2] and ${D}$[MM] must be applied. At first glance, it may seem intuitive to impose these limits on the maximum parameter values. For example, we might consider imposing a ∈ [0, a[max]] and h ∈ [0, h[max]]. Because of their inverse relationships, doing so means that the equivalent limits for the Michaelis–Menten model are α ∈ [1/h[max], ∞) and β ∈ [1/(a[max]h[max]), ∞), which are not finite and hence cannot solve our problem. Naively, we might therefore instead consider imposing both minima and maxima, a ∈ [a[min], a[max]] and h ∈ [h[min], h[max]], so that α ∈ [1/h[max], 1/h[min]] and β ∈ [1/(a[max]h[max]), 1/(a[min]h[min])]. This, however, still leaves a further problem in that the limits for β depend on the value of α (i.e., of 1/h). That is, we must also impose the additional constraint that β > α/a[max] (Figure 2), for only then will the computed ${G}$[M] of the two models be equal. Problems such as these only compound for models entailing a greater number of parameters. As alluded to in Box 1, our approach to circumventing these model-specific issues is to impose constraints on the expected number of eaten prey (Figure 2), rather than on the model parameters directly (see Methods: Parameter constraints). That is, we require that the minimum expected number of eaten prey is no less than one prey individual in the maximum prey abundance treatment(s) (i.e., 1 ≤ 𝔼[F(N[max], P, θ)PT] for all P in the experimental design) and that the maximum expected number of eaten prey is no greater than N[max] in any of the treatments (i.e., 𝔼[F(N, P, θ)PT] ≤ N[max] for all N × P combinations in the experimental design). Because of the mapping between parameter space and predicted model outcomes, these constraints impose natural limits for most (combinations of) parameters (e.g., the handling time or saturation parameters of all models). For other parameters, it does not impose hard limits, but nonetheless results in their contribution to ${G}$[M] tending asymptotically to zero as their value increases (Figure 2). This is most notably true for the “attack rate” parameter of all models. Figure 2.
Figure 2. Alternative parameterizations of the same functional form should have the same geometric complexity for any given experimental design, but this will only be true in practice when their parameter domains ${D}$ are equivalently constrained (see Box 2 for details). Top row: Illustration of the functional equivalence and parameter interpretations of the Holling (left column) and Michaelis–Menten (right column) models. Middle row: Direct constraints on ${D}$[H2] and ${D}$[MM] necessitate more than potentially arbitrary minimum and/or maximum limits, but must also account for the confounded relationships among parameters. Bottom row: We circumvent this challenge by imposing parameter constraints indirectly via the expected number of eaten prey, 𝔼[F(N, P, θ)PT]. Stars in the top row indicate the limits imposed under the assumed experimental design. The color-scale in the bottom row reflects $\sqrt{\det I_M(\theta)}$ from dark blue (low values) to orange (high values), but is re-scaled within each graph to visualize their contours and thus cannot be compared quantitatively.

For our purposes, because both parametric and geometric complexity are independent of the data beyond its sample size and experimental design, the potential importance of model flexibility to the information-theoretic ranking of models may be assessed by comparing their parametric and geometric complexity values or by comparing the geometric complexity values of models having the same parametric complexity. Because FIA converges on half the value of BIC as n becomes large, a one-unit difference in geometric complexity reflects a substantial impact on the relative support that two models of the same parametric complexity could receive.

2.2. Experimental Designs

We computed the geometric complexity of 40 different functional-response models across a range of experimental designs. We first describe the experimental designs we considered because aspects of these also determined our manner for equitably bounding the permissible parameter space of all functional-response models (Boxes 1, 2).

The experimental designs we considered exhibited treatment variation in prey N and predator P abundances. All designs had at least five prey-abundance levels, a minimum prey-abundance treatment of three prey individuals, and a minimum predator-abundance treatment of one predator individual. The designs varied by their maximum prey and predator abundances (N[max] and P[max]), which we achieved by correspondingly varying the number of prey and predator treatment levels (L[N] and L[P]); that is, by adding higher abundance levels to smaller experimental designs. We specified the spacing between prey and predator abundance levels to follow logarithmic series. This follows the recommendation of Uszko et al. (2020), whose simulations showed that a logarithmic spacing of prey abundance levels performed well for the purpose of parameter estimation. We used the golden ratio (ϕ = 1.618…) as the logarithmic base and rounded to the nearest integer to generate logistically-feasible abundance series that increase more slowly than typically used bases (e.g., log[2] or log[10]). We thereby approximated the Fibonacci series (1, 1, 2, 3, 5, 8, …), whose ratio of successive terms converges on ϕ for large n. We varied L[N] between 5 and 10 levels and varied L[P] between 1 and 5 levels, thereby achieving N[max] and P[max] abundances of up to 233 prey and up to 8 predator individuals. We assumed balanced designs whereby all treatments are represented equally.
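To illustrate the spacing scheme just described, the following sketch (our own, in Python rather than the Mathematica used for the analyses; the function name is hypothetical) reproduces the golden-ratio abundance series by iterative multiplication and rounding:

```python
# Illustrative sketch: golden-ratio-spaced abundance levels, rounded at each
# step so that abundances remain integer-valued.
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618

def abundance_levels(minimum, n_levels):
    levels = [minimum]
    for _ in range(n_levels - 1):
        levels.append(round(levels[-1] * PHI))
    return levels

print(abundance_levels(3, 10))  # prey:      [3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
print(abundance_levels(1, 5))   # predators: [1, 2, 3, 5, 8]
```

The resulting prey series tops out at 233 individuals and the predator series at 8, matching the maximum abundances reported above.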
All resulting designs are depicted in the Supplementary Materials.

An important aspect of experimental design which we assumed throughout our analyses was that all eaten prey are continually replaced. The constancy of available prey allowed us to treat observations as Poisson random variates and hence use a Poisson likelihood to express each deterministic functional-response model as a statistical model. This was necessary because computing geometric complexity requires an inherently statistical perspective (see Box 1 and below).

2.3. Functional-Response Models

The functional-response models we considered ranged from having one to four free parameters (Table 1). We included prey-, ratio-, and predator-dependent models that are commonly assessed in the functional-response literature, as well as many models that have received far less attention, such as those that encapsulate emergent interference, adaptive behavior, or both handling and satiation. We did not consider models that explicitly include more variables than just the abundances of a focal predator-prey pair. Given that our statistical framework was based on experimental designs within which eaten prey are continually replaced, we also did not include any models which explicitly account for prey depletion or reflect the selection of hosts by non-discriminatory parasitoids (e.g., Rogers, 1972).

All but two of the considered models are previously published. The exceptions were a three-parameter model (AS) which represents an illustrative generalization of the adaptive behavior A1 model of Abrams (1990), and a four-parameter predator-dependent model (SN2) that extends the Beddington–DeAngelis and Crowley–Martin models and may be interpreted as reflecting predators that cannot interfere when feeding and can partially feed when interfering (see Stouffer and Novak, 2021). That said, we do not concern ourselves with the biological interpretation of the models as this has been discussed extensively throughout the functional-response literature. Rather, we focus on the models' contrasting mathematical forms. Across the different models, these forms include rational, power, and exponential functions, as well as functions that are linear, sublinear, or superlinear with respect to prey or predator abundances.

To highlight their similarities, we reparameterized many models to "Holling form," noting that different parameterizations of the same functional form have the same geometric complexity for a given experimental design (Box 2). This included models that, as originally defined, had statistically-redundant parameters (e.g., the models of Abrams, 1990), were written in "Michaelis–Menten form" (e.g., Sokol and Howell, 1981), or were written with parameters affecting divisions (e.g., we replaced 1/c → c). This also included the Steady State Satiation (SSS) model of Jeschke et al. (2002) for which ${G}$[M] could not be computed. Fortunately, the SSS model can also be derived using the citardauq formula (rather than the quadratic formula), for which ${G}$[M] could be computed and which further reveals its similarity to the adaptive behavior A2 model of Abrams (1990) and the predator-dependent model of Ruxton et al. (1992). For simplicity and to further clarify similarities among models, we present all model parameters using the symbols a, b, c, and d for non-exponent parameters and u and v for exponent parameters, noting that their biological interpretations frequently differ among models.
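For reference, the citardauq formula mentioned above is the rationalized, algebraically equivalent form of the familiar quadratic formula: for $ax^2 + bx + c = 0$,

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{2c}{-b \mp \sqrt{b^2 - 4ac}},$$

the two forms being related by multiplying numerator and denominator by $-b \mp \sqrt{b^2 - 4ac}$. Although the roots are identical, which form is used can matter for subsequent symbolic and numerical manipulation, which is presumably why only the citardauq-derived version of the SSS model proved tractable here.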
2.4. Parameter Constraints

As mentioned above, we assumed a Poisson statistical model in computing the geometric complexity of each deterministic functional-response model. In a context of fitting models to actual data, the consequent log-likelihood function,

$$\ln L(\theta \,|\, y) = -\sum_{i=1}^{n} \ln(y_i!) + \sum_{i=1}^{n} \left( y_i \ln(\lambda_i) - \lambda_i \right), \qquad (6)$$

expresses the log-likelihood of a model's parameter values given the observed data, with λ[i] = F(N[i], P[i], θ)P[i]T; that is, the feeding rate of a predator individual in treatment i (as per the focal deterministic functional-response model) times the number of predators and the time period of the experiment, which we universally set to T = 1. In our context of quantifying ${G}$[M], observed data is not needed because the first term of Equation (6) drops out when taking derivatives with respect to model parameters and because I[M](θ) involves the expected value across the space of potential experimental outcomes y (Box 1).

Despite this independence from data, additional information is nonetheless necessary for computing the geometric complexity of models such as those we consider here (Box 1). This information entails the range of potential outcomes that could be obtained experimentally and hence the potential parameter values that a model could exhibit (i.e., its domain ${D}$[M] of integration). Encoding this information in an equitable manner that does not bias the inferred geometric complexity of some models over others has several potential issues associated with it (Boxes 1, 2), particularly because the nature of our assumed experimental design (i.e., eaten prey are immediately replaced) means that the range of potential outcomes for a given model (i.e., the number of prey eaten) is theoretically infinite.

To avoid these issues, we placed no direct constraints on the parameters themselves. Rather, we specified infinite domains on the parameters [i.e., {a, b, c, u, v} ∈ [0, ∞) and d ∈ (−∞, ∞)] and instead placed constraints on them in an indirect manner by restricting the allowable outcomes predicted by the models. Specifically, we imposed the requirement that, over time-period T, the expected number of eaten prey in all maximum prey abundance treatments was no less than 1 (i.e., 1 ≤ 𝔼[F(N[max], P, θ)PT] across all P treatments) and that the expected number of prey eaten in any treatment was no greater than the number of prey made available in the maximum prey treatment level (i.e., 𝔼[F(N, P, θ)PT] ≤ N[max] for all N × P treatment combinations). Under the assumed Poisson model, the lower bound corresponds to an expectation of observing zero prey being eaten in no greater than 37% of an experiment's maximum prey abundance replicates (since a Poisson variate with mean 1 equals zero with probability e^{−1} ≈ 0.37). The upper bound is similarly arbitrary in a mathematical sense but seems logistically feasible since researchers are unlikely to choose a prey abundance beyond which they could not continually replace consumed individuals. For the SN1 and SN2 models, we imposed the respective additional requirements that bd ≤ 1/max[F(N, P, θ)PT] and b ≤ 1/max[F(N, P, θ)PT] for all treatments to maintain biologically-appropriate (non-negative) predator interference rates (Stouffer and Novak, 2021). We note that our placement of constraints on the expected number of eaten prey is similar to the use of Bayesian prior predictive checks with a joint prior distribution in that we restrict the domain of permissible parameter values based on how their conditional inter-dependencies lead to predicted model outcomes.
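To make this computation concrete, the following minimal sketch (our own Python translation, not the authors' Mathematica code; all names are ours, and we assume Equation (4) takes the standard FIA form ${G}$[M] = ln ∫_D √det I[M](θ) dθ) evaluates the integrand and the indirect constraints for the Holling Type II model on a small design:

```python
# Sketch under stated assumptions: T = 1, balanced design, Poisson likelihood.
import numpy as np
from scipy import integrate

prey = [3, 5, 8, 13, 21]                   # prey-abundance levels
pred = [1, 2]                              # predator-abundance levels
design = [(N, P) for N in prey for P in pred]
N_max = max(prey)

def lam(a, h, N, P):                       # expected prey eaten: F(N, theta) * P * T
    return a * N / (1 + a * h * N) * P

def grad_lam(a, h, N, P):                  # analytic gradient of lambda wrt (a, h)
    denom = (1 + a * h * N) ** 2
    return np.array([N / denom * P, -(a * N) ** 2 / denom * P])

def sqrt_det_I(a, h):                      # root-determinant of the expected unit
    I = np.zeros((2, 2))                   # Fisher information (Box 1)
    for N, P in design:
        g = grad_lam(a, h, N, P)
        I += np.outer(g, g) / lam(a, h, N, P) / len(design)
    return np.sqrt(np.linalg.det(I))

def allowed(a, h):                         # indirect constraints on expected outcomes
    if a <= 0:
        return False
    lower = all(lam(a, h, N_max, P) >= 1 for P in pred)
    upper = all(lam(a, h, N, P) <= N_max for N, P in design)
    return lower and upper

integrand = lambda h, a: sqrt_det_I(a, h) if allowed(a, h) else 0.0
# Crude finite box over (a, h); the integrand's attack-rate tail decays toward
# zero, so truncation mainly affects precision, not the qualitative picture.
val, err = integrate.dblquad(integrand, 0, 20, 0, 20)
print("geometric complexity ~", np.log(val))
```

A production implementation would, as described below, use adaptive integration with explicit precision goals; the point here is only the structure: a Poisson expected-information matrix accumulated over the design, an indicator implementing the prey-eaten constraints, and the log of the integrated root-determinant.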
It is worth noting that some authors defined their models with parameters constrained to be greater than 1, rather than 0 as we did. For example, theoreticians often assume u ≥ 1 for the Hill exponent of the Holling–Real Type III (H3R) model, though Real (1977) did not do so. We consider non-negative values less than one to also be biologically and statistically possible (see discussion in Stouffer and Novak, 2021). Indeed, relaxing this constraint and redefining the statistically-redundant parameters of the original A3 model (Abrams, 1990) clarifies, for example, that it is mathematically equivalent to H3R with u = 0.5 (even if its assumed biological mechanism differs).

2.5. Model Comparisons

Comparisons of geometric complexity can only be made across models of the same parametric complexity; it is in conjunction with its second term that FIA enables comparisons across models in general. Therefore, for each set of models (i.e., for models with k = 1, 2, 3 or 4 parameters), we first assessed how an experiment's design determined the geometric complexity of a selected "baseline" model. Because their relationships to each other and most other models are readily apparent, we chose the Holling Type I (H1) model as the baseline for the k = 1 models, the Holling Type II (H2) model for the k = 2 models, the Holling–Real Type III (H3R) and the Beddington–DeAngelis (BD) models for the k = 3 models (H3R for the prey-dependent models and BD for the ratio- and predator-dependent models), and the Beddington–DeAngelis–Okuyama–Ruyle (BDOR) model for the k = 4 models. We then compared the geometric complexity of the other models within a given set to the set's baseline model(s) by calculating, for each experimental design, the difference between the two models' geometric complexity values (e.g., ${G}$[LR] − ${G}$[H1]). This difference enables a direct evaluation of the degree to which a model's flexibility influences its information-theoretic ranking because it has the same units of information as the likelihood and parametric complexity terms of the FIA criterion.

2.6. Sensitivity to Assumptions

We evaluated the sensitivity of our inferences to three aspects of experimental design, repeating our analyses for designs that:

1. varied in the number of prey and predator levels (L[N] and L[P]) but kept the maximum prey and predator abundances constant at N[max] = 233 and P[max] = 5 (based on results from the main analysis);

2. used arithmetically-uniform (rather than logarithmic) series of prey and predator abundances; and

3. relaxed the constraint on either the minimum or the maximum expected number of eaten prey by an order of magnitude (i.e., 𝔼[F(N[max], P, θ)PT] ≥ 1/10 or 𝔼[F(N, P, θ)PT] ≤ 10N[max]).

All analyses were performed in Mathematica (Wolfram Research Inc., 2020) using the Local Adaptive integration method and with precision and accuracy goals set to 3 digits.

3. Results

3.1. Baseline Models and Equivalent Models

The geometric complexity ${G}$[M] of all baseline models (H1, H2, H3R, BD, and BDOR) increased with increasing N[max] and decreasing P[max] (Figures 3–6). For these models, ${G}$[M] varied more greatly across the considered variation in N[max] than across the considered variation in P[max], with at most a very weak interactive effect occurring between these.
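As a back-of-the-envelope illustration of why the baseline surfaces behave this way (our own sketch, not part of the original analysis; it assumes Equation (4) takes the standard FIA form, sets T = 1, and ignores integer rounding), the single-parameter H1 model can be worked by hand. With λ[i] = aN[i]P[i], the expected unit Fisher information is

$$I_{H1}(a) = \frac{1}{n}\sum_{i=1}^{n}\frac{(N_i P_i)^2}{a N_i P_i} = \frac{\bar{S}}{a}, \qquad \bar{S} = \frac{1}{n}\sum_{i=1}^{n} N_i P_i,$$

and the prey-eaten constraints reduce to $a \in [1/N_{max},\, 1/P_{max}]$ (the lower bound binding at P = 1, the upper at the N[max] × P[max] treatment), so that

$$\mathcal{G}_{H1} = \ln\!\int_{1/N_{max}}^{1/P_{max}}\sqrt{\bar{S}/a}\;\mathrm{d}a = \ln\!\left[2\sqrt{\bar{S}}\left(\frac{1}{\sqrt{P_{max}}} - \frac{1}{\sqrt{N_{max}}}\right)\right],$$

which increases with N[max] and, for the designs considered here, decreases with P[max].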
The difference in ${G}$[M] between the smallest and largest N[max] for a given P[max] varied from about 2 information units for the parametrically simplest H1 model to about 5 units for the parametrically most complex BDOR model, with the difference for the other baseline models being intermediate and roughly proportional to their number of free parameters.

Figure 3. First panel: The geometric complexity ${G}$[H1] of the single-parameter (k = 1) baseline Holling Type I (H1) model as a function of an experiment's maximum prey and predator abundances (N[max] and P[max]). Other panels: The difference in ${G}$[M] of the linear ratio-dependent (LR) model and the square-root model of Barbier et al. (2021, BWL1) relative to the H1 model. Positive differences reflect experimental designs for which a focal model's mathematical flexibility would result in it being favored by information criteria like AIC and BIC that do not consider this form of model complexity.

As expected (Box 2), alternative parameterizations of the same functional form had the same ${G}$[M] for all designs, with numerical estimation errors accounting for deviations from exact equivalence. This was demonstrated by H2 and MM as well as GI and GIA (Figure 4), which differ only in the biological interpretation of their parameters. Likewise, all ratio-dependent models had the same ${G}$[M] as their "corresponding" Holling-type models when there was no variation in predator abundances (e.g., ${G}$[LR] ≈ ${G}$[H1], ${G}$[AG] ≈ ${G}$[H2] and ${G}$[AGK] ≈ ${G}$[H3] when P[max] = 1; Figures 3–5).

Figure 4. As in Figure 3 but for two-parameter (k = 2) functional-response models. First panel: The geometric complexity ${G}$[H2] of the baseline Holling Type II model (H2) as a function of an experiment's maximum prey and predator abundances (N[max] and P[max]). Other panels: The difference in ${G}$[M] of all other two-parameter models relative to the H2 model. As a visual aid, models with greater geometric complexity than H2 are colored in blue while those with less geometric complexity than H2 are colored in orange.

Figure 5. As in Figure 3 but for three-parameter (k = 3) functional-response models. First and tenth panels: The geometric complexity ${G}$[M] of the baseline Holling–Real Type III (H3R) and Beddington–DeAngelis (BD) models as a function of the experiment's maximum prey and predator abundances (N[max] and P[max]). Other panels: The difference in ${G}$[M] of the other three-parameter prey-dependent (top two rows) and ratio- and predator-dependent (bottom two rows) models relative to the baseline models. As a visual aid, models with greater geometric complexity than their baseline model are colored in blue while those with less geometric complexity are colored in orange.

Figure 6. As in Figure 3 but for four-parameter (k = 4) functional-response models. First panel: The geometric complexity ${G}$[BDOR] of the Beddington–DeAngelis–Okuyama–Ruyle (BDOR) model as a function of the experiment's maximum prey and predator abundances (N[max] and P[max]). Other panels: The difference in ${G}$[M] of all other four-parameter models relative to the BDOR model. As a visual aid, models with greater geometric complexity than BDOR are colored in blue while those with less geometric complexity than BDOR are colored in orange.

3.2. One-Parameter Models

For the one-parameter models (Figure 3), both ${G}$[LR] and ${G}$[BWL1] were always greater than ${G}$[H1] (excepting when P[max] = 1 for LR).
The degree to which the linear ratio-dependent (LR) model was more flexible than the Holling Type I (H1) model decreased with increasing N[max] and decreasing P[max]. This was also true for the ratio-dependent BWL1 model of Barbier et al. (2021) when P[max] ≥ 3, but for P[max] < 3 its difference to H1 increased with increasing N[max]. The most equitable designs capable of differentiating among all three models therefore consisted of only two predator levels (P[max] = 2), entailed a ${G}$[M] difference among models of about 0.2 information units or more, and caused LR to be slightly more flexible for small N[max] and BWL1 more so for large N[max] relative to H1. The least equitable design entailed large P[max] and small N[max] and caused the geometric complexity of LR and BWL1 to exceed that of H1 by more than 1 and 0.8 information units, respectively.

3.3. Two-Parameter Models

There were four categories of two-parameter models qualitatively distinguished by whether they exhibited equivalent, higher, lower or a design-dependent ${G}$[M] relative to the H2 baseline model (Figure 4): (i) MM was equivalent to H2 for all designs (as already mentioned above); (ii) H3, HT, GI, GIA, SH, AG, AGK, GB, CDAO and R were more flexible than H2 for all designs (had higher ${G}$[M], excepting for P[max] = 1 where ${G}$[AG] ≈ ${G}$[CDAO] ≈ ${G}$[H2]); (iii) A0, A1, and A3 were less flexible than H2 for all designs (had lower ${G}$[M]); and (iv) HV was more flexible than H2 for small N[max] designs and less flexible for large N[max], with large and small P[max] designs respectively increasing and decreasing its relative flexibility further.

H3 was the only model for which the difference from H2 was insensitive to experimental design, always being about 0.45 information units. For HT, GI, GIA, A0, A1, and A3, the difference to H2 was insensitive to P[max], but while it increased with increasing N[max] for HT, GI, GIA, and A0 (making small N[max] designs the most equitable), it decreased with increasing N[max] for A1 and A3 (making large N[max] designs the most equitable). The degree to which AG, AGK, GB, CDAO, and R were more flexible than H2 decreased with increasing N[max], but while it increased with increasing P[max] for AG, AGK, GB, and CDAO (making large N[max], small P[max] designs the most equitable), it decreased—albeit weakly—with increasing P[max] for R. For SH, the difference to H2 first increased from small to intermediate N[max] then slowly decreased from intermediate to large N[max], but was always minimized by large P[max]. Small N[max], large P[max] designs were therefore the most equitable for SH. Finally, for HV, which was either more or less flexible than H2 depending on design, the most equitable designs spanned N[max] ≈ 30 for P[max] = 2 to N[max] ≈ 120 for P[max] = 8.

Overall, A0 and AGK exhibited the greatest potential disparity in flexibility relative to H2, respectively being less and more flexible by about 1.4 information units under their least equitable design. The greatest potential disparity among all considered two-parameter models was about 2 information units and occurred between HV and A0 for small N[max], large P[max] designs in favor of HV.
3.4. Three-Parameter Models

Noting that all predator-dependent models are non-identifiable for P[max] = 1 designs (Supplementary Figure S1), there were three categories of three-parameter models that were qualitatively distinguished by whether they exhibited higher, lower or a design-dependent ${G}$[M] relative to the two baseline models—H3R for prey-dependent models and BD for ratio- and predator-dependent models (Figure 5): (i) FHM and BD were more flexible than H3R, and CM, W, SBB, and AA were more flexible than BD, for all designs (excepting for P[max] = 2 where ${G}$[AA] ≈ ${G}$[BD]); (ii) A2, HLB, MH, AS, SSS and T were less flexible than H3R, and TTA and RGD were less flexible than BD, for all designs; and (iii) BWL2 was more flexible than BD for small N[max], large P[max] designs and was less flexible for large N[max], small P[max] designs.

For the ratio- and predator-dependent models, differences to BD were more sensitive to variation in P[max] than to variation in N[max]. The degree to which CM, W, SBB, and AA were more flexible than BD increased with increasing P[max], reaching a difference in geometric complexity of 0.8 information units at P[max] = 8. For these models, the most equitable design therefore entailed small P[max] regardless of N[max]; but for TTA and RGD, for which the difference to BD decreased with increasing P[max], it was designs entailing large P[max] that kept their lower geometric complexity closest to that of BD (a deficit of no less than 1.4 and up to 2.9 information units). The degree to which the prey-dependent AS and T models were less flexible than H3R was also more sensitive to variation in P[max] than in N[max], but the degree to which A2, HLB, MH, and SSS were less flexible and the degree to which FHM was more flexible were relatively insensitive to variation in P[max]. As N[max] increased, T became even less flexible relative to H3R, the inflexibility of A2, HLB, MH, AS, and SSS relative to H3R diminished, and FHM became even more flexible than H3R. For BWL2, which could either be more or less flexible than BD depending on design, the most equitable designs spanned those that had the largest considered N[max] when P[max] was large to those that had the smallest considered N[max] when P[max] was small.

Overall, A2, SSS and TTA exhibited the greatest potential disparity relative to their H3R and BD baselines, respectively differing in their geometric complexity by up to almost 3.8 information units for the least equitable designs. The greatest potential disparity among all considered three-parameter models was about 4.6 information units and occurred between A2, SSS and CM for small N[max] designs in favor of CM.

3.5. Four-Parameter Models

Finally, among the four-parameter models, which exhibited the greatest amounts of numerical estimation noise (Figure 6): (i) AAOR was more flexible than BDOR for all designs (had higher ${G}$[M]); (ii) SN1 and SN2 were less flexible than BDOR for all designs (had lower ${G}$[M]); and (iii) CMOR tended to be more flexible for large N[max], large P[max] designs and less flexible for small N[max], small P[max] designs. For CMOR, AAOR and SN1, the difference to BDOR was less sensitive to variation in N[max] than to variation in P[max], but the opposite was true for SN2. Further, while the degree to which AAOR was more flexible than BDOR was minimized by P[max] = 2 designs (to about 0.2 information units), the degree to which SN1 was less flexible than BDOR was minimized by P[max] = 8 designs (to about 0.5 information units).
SN2 was non-identifiable for designs having P[max] ≤ 3 (Supplementary Figure S1), but for P[max] > 3 designs it was less flexible by at least 1 information unit. The most equitable designs for CMOR and BDOR entailed intermediate predator abundances (P[max] = 3–5). Overall, the greatest potential disparity to the BDOR baseline model occurred for the SN2 model (about 2.5 information units) at the largest N[max]. The greatest potential disparity among all considered four-parameter models occurred for the SN2 and AAOR models (about 3.5 information units) for the largest N[max], largest P[max] design in favor of AAOR.

3.6. Sensitivity Analyses

Fixing N[max] = 233 and P[max] = 5 and varying the number of prey and predator treatment levels (L[N] and L[P]) to below the numbers used in our primary analysis showed that ${G}$[M] was relatively insensitive to variation in L[N] for most models (Supplementary Figures S2–S5). In contrast, the degree to which models were more or less flexible relative to their baseline model was far more sensitive to variation in L[P]. For most of the L[P]-sensitive models, decreasing L[P] increased their difference to the baseline model, but for an almost equal number the difference decreased. The largest effects of L[P] most often occurred when reducing from two predator levels (P ∈ {1, 2}) to only a single predator level (or the corresponding reduction of three to two levels for the four-parameter models). Setting aside these last-mentioned and in some ways trivial changes to L[P], the greatest effect of changing L[N] and L[P] was to change the relative geometric complexity of models and their baseline models by up to about 0.6 information units (excepting T and SSS, for which changes of up to 2.5 units occurred).

The use of designs with arithmetic rather than logarithmic spacings of prey and predator abundances also had little to no effect on the geometric complexity of models relative to their baselines (Supplementary Figures S6–S9). The notable exceptions included the manner in which (i) HV was more flexible than H2 (arithmetic spacings making HV invariably more flexible rather than more or less flexible depending on N[max] and P[max]), (ii) BD was more flexible than H3R (arithmetic spacings making it more flexible for large rather than small N[max]), and (iii) CMOR was more flexible than BDOR (arithmetic spacings making CMOR invariably less flexible rather than more or less flexible depending on N[max] and P[max]).

Finally, relaxing the indirect constraints we imposed on the range of potential experimental outcomes (i.e., model parameters) by changing the minimum or the maximum expected number of eaten prey by an order of magnitude had similarly little effect (Supplementary Figures S10–S17). The notable consequences were that increasing the maximum expected number of eaten prey across all treatments from N[max] to 10N[max] caused (i) CDAO to become less rather than more flexible than H2; (ii) T and W to be more or less flexible than H3R and BD in a design-dependent rather than design-independent manner; (iii) CMOR to become more flexible than BDOR for a greater range of designs; and (iv) ${G}$[SN1] and ${G}$[SN2] to no longer be estimable, even after a month of computation on a high-performance computing cluster.

4. Discussion

The functional-response literature is replete with models, even among those that only consider variation in the abundances of a single predator-prey pair (Table 1, Jeschke et al., 2002).
Each of these many deterministic models was proposed to encapsulate a different aspect of predator-prey biology, though frequently even very different biological processes lead to very similar or even the same model form (Table 1). Information-theoretic criteria, which balance model fit and complexity, represent the principal, most general, and most accessible means for comparing the statistical performance of these models when they are given a statistical shell and confronted with data (Okuyama, 2013). The primary contribution of our analyses is to show that existing models, independent of the biology they are meant to reflect, frequently also differ in their flexibility to fit data, even among models having the same parametric complexity. Differences in model flexibility as assessed by the geometric complexity term ${G}$ of the FIA criterion were frequently greater than 0.5 information units, spanned values up to 13 information units, and for several models were never below 1 information unit even for the most equitable of considered experimental designs. Secondarily, our analyses demonstrate just how dependent a model's flexibility can be on the experimental design of the data (i.e., what the range and combinations of prey and predator abundances are). In some instances this design dependency was great enough to cause models that were less flexible than other models for some experimental designs to become more flexible than the same models for different designs.

Our use of the FIA criterion allows us to contextualize the importance of this variation in flexibility in two rigorous and quantitative ways: First, we can compare ${G}$ among models of the same parametric complexity for a given experimental design assuming their goodness-of-fit to a hypothetical dataset to be the same. In this scenario, the potential significance of model flexibility to the information-theoretic comparison of functional-response models is evidenced in a general manner by the fact that a 2-unit difference in AIC or BIC among competing models—equivalent to a 1-unit difference in FIA—represents "substantial" support (a weight-of-evidence of 2.7 to 1) for one model over another (Burnham and Anderson, 2002). (Such a difference reflects a probability of 0.73 that the first of only two competing models is "better" than the other.) Second, we can compare ${G}$ to a model's parametric complexity for hypothetical datasets of differing sample size assuming its goodness-of-fit to these data remains the same. In this scenario, the potential significance of model flexibility to the inferences of functional-response studies performed in the past is evidenced by the fact that our estimated differences in ${G}$ are comparable to the values of parametric complexity that are associated with the median and even maximum sample sizes seen in the large collection of datasets recently compiled by Novak and Stouffer (2021) (Table 2). That is, as feared by Novak and Stouffer (2021), sample sizes among existing empirical datasets are often sufficiently small that the likelihood and parametric complexity differences of many models are unlikely to have sufficiently outweighed the influence of their functional flexibility in determining their information-theoretic rankings.
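For readers wanting the arithmetic behind the weight-of-evidence figures just quoted (standard Akaike-weight algebra, not specific to this study): for two competing models separated by Δ = 2 criterion units,

$$\frac{w_1}{w_2} = e^{\Delta/2} = e \approx 2.72, \qquad w_1 = \frac{1}{1 + e^{-\Delta/2}} = \frac{1}{1 + e^{-1}} \approx 0.73.$$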
Table 2. The value of FIA's parametric complexity term (the second term of Equation (3) depicted in Figure 1) for models of k = 1, 2, 3, and 4 parameters evaluated at the sample sizes of the smallest (n = 10), median (n = 80), and largest (n = 528) sized datasets in the set of 77 functional-response datasets having variation in both prey and predator abundances compiled by Novak and Stouffer (2021).

4.1. What Makes Models (In)Flexible?

Given that the influence of model flexibility on information-theoretic model comparisons of the past is likely substantial, that its influence will likely not change dramatically in the future given the logistical challenges of standard experimental approaches, and because there is no experimental design that can make the comparison of functional-response models universally equitable with respect to their flexibility, an important question is: What aspects of their mathematical formulation make models more or less flexible for certain experimental designs?

For the one-parameter models the answer is relatively accessible given the specifics of our analyses. The linear ratio-dependent (LR) model is more flexible than the Holling Type I (H1) model because the division of prey abundances by a range of predator abundances allows a greater range of parameter a ("attack rate") values to satisfy the condition that the resulting expected numbers of eaten prey will lie within our specified minimum and maximum bounds (i.e., satisfying both 1 ≤ 𝔼[F(N[max], P, θ)PT] and 𝔼[F(N, P, θ)PT] ≤ N[max]). Relative to H1, for which high N[max] and low P[max] maximize the potential range of attack rates that an individual predator could express in an experiment, having many predators "interfering" in a ratio-dependent manner enables each individual predator to express an even greater attack rate without all predators in total consuming too many prey. The effects on the maximum vs. the minimum prey eaten are asymmetric in magnitude (i.e., the maximum potential value of a increases more than the minimum potential value of a) because division by P in LR has an asymmetric effect on the per predator number of prey eaten (relative to the multiplication by P that is common to all models); it is symmetric only on a logarithmic scale. The magnitude of this effect is dampened in the BWL1 model of Barbier et al. (2021) because it entails a ratio of the square roots of (is sublinear with respect to) prey and predator abundances, making BWL1 more flexible than H1 but less flexible than LR.

The same rationale applies to all other models and explains the varied (in)sensitivities that their model flexibility has with respect to experimental design. That said, the situation is often more complicated for models with multiple parameters because of (i) the interdependent influences that parameters have on the number of prey that are eaten, and (ii) the fact that, for some models, the minimum and the maximum boundaries on the expected number of eaten prey come into play at different points in parameter- and species-abundance space. For example, for the Holling Type II (H2) model, requiring that at least one prey on average be eaten in the highest prey abundance treatments causes high handling times to impose a lower limit on each individual's attack rates only if and when prey abundances are sufficiently high to effect saturation. The Holling Type III (H3) model experiences this same effect as well, hence its relative flexibility is insensitive to variation in maximum prey abundances.
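To see this interplay concretely (our own derivation from the constraints as stated, taking P = 1 and T = 1): for H2, the minimum-prey-eaten requirement at the maximum prey abundance is

$$\frac{a N_{max}}{1 + a h N_{max}} \ge 1 \;\Longleftrightarrow\; a N_{max}(1 - h) \ge 1 \;\Longleftrightarrow\; a \ge \frac{1}{N_{max}(1 - h)},$$

so the lower limit on the attack rate tightens as h grows, becomes unsatisfiable once h ≥ 1 (i.e., once the handling time exceeds the experiment's duration), and relaxes to the H1-like bound a ≥ 1/N[max] when ahN[max] ≪ 1, i.e., far from saturation.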
H3 is nonetheless more flexible than H2 because it is superlinear with respect to prey abundance (when handling times or prey abundances are low) and can therefore satisfy the minimum one-prey-eaten-per-predator constraint for smaller attack rate values than can H2. Similarly, the exponential form of the Gause–Ivlev models (GI and GIA) makes them more flexible than H2 because they are superlinear with respect to prey abundance, while the A1 and A3 models of Abrams (1990) are less flexible than H2 because they are sublinear with respect to prey abundance. The insensitivity of the relative flexibility of all these models to variation in predator abundances occurs because the total prey eaten they effect is determined by predator abundance in the same proportional manner as for H2. That is, just like most other two-parameter prey-dependent models, the relative flexibility of H2 and these models is similarly uninfluenced by the ratio of prey and predator abundances, in contrast to the way that all ratio- and predator-dependent models are affected (as per the contrast of H1, LR and BWL1 discussed above).

The prey-dependent Type IV model of Sokol and Howell (1981) (SH) represents an informative exception to all other two-parameter prey-dependent models in that its relative flexibility is sensitive to predator abundance. Whereas all monotonically increasing prey-dependent models only ever come up against the maximum prey abundance constraint as predator abundances increase, increasing predator abundances additionally alleviate the constraint that SH experiences uniquely due to the eventual decline of its feeding rate at high prey abundance; high predator abundances permit the total number of prey eaten to stay above the minimum-of-one-prey constraint for greater maximum prey abundances than is possible for low predator abundances given the parameter values.

The dependence of model flexibility on predator abundance emerges among the prey-dependent three-parameter models for similar reasons. For example, although the feeding rates of neither the HLB model of Hassell et al. (1977) nor the A2 model of Abrams (1990) decline with respect to prey abundance, increasing their c parameter does make their denominators more sensitive to maximum prey abundances where the minimum one-prey-eaten-per-predator constraint comes into play. Therefore, just as for SH, increasing predator abundances increase the number of prey eaten to allow for larger values of c to satisfy the minimum-of-one-prey constraint. That is, although increasing predator abundance would limit the range of c due to the minimum-of-one-prey constraint if all else were to be held constant, all else is not constant. Rather, high predator abundance enables a greater range of a values for a given value of c before the maximum-prey-eaten constraint is violated. This is also the reason why all predator-dependent models exhibit increasing relative flexibility as predator abundance increases even as the absolute flexibility of their respective baseline models decreases.

4.2. Additional Aspects of Experimental Design

Our sensitivity analyses on the role of experimental design reinforce the inferences of our main analysis. They also speak to the likely generality of our results to additional aspects of experimental design which we did not specifically address.
For the two-parameter models whose relative flexibility was insensitive to the ratio of prey and predator abundances, using arithmetic rather than logarithmic designs had little or no qualitative influence because arithmetic spacings did not alter the maximum prey abundances where the constraints on the number of prey eaten are incurred. By contrast, models for which changes to spacings or to the prey-eaten constraints did alter their relative flexibility were either ratio- or predator-dependent models, or were prey-dependent models whose additional (third) parameter made their flexibility sensitive to predator abundance. We conclude from this that the precise spacings of prey and predator abundances are less important from a model flexibility perspective than are their maxima and combinatorial range, but that these aspects of design become more important as the parametric complexity of the considered models increases.

Nonetheless, searching for equitable experimental designs as we did is different from searching for optimal designs for model-specific parameter uncertainty, bias, or identifiability (e.g., Sarnelle and Wilson, 2008; Zhang et al., 2018; Moffat et al., 2020; Uszko et al., 2020). A precedence of other motivations for an experiment, such as maximizing the precision of parameter estimates, may therefore lead to different and likely model-specific conclusions about which design aspects are important. Fortunately, given our results, some aspects of experimental design may be of little consequence. For example, independent of the maximum prey abundance used, the general utility of a logarithmic spacing of prey makes intuitive sense given that, for many models, most of the action that differentiates model form occurs at low prey abundances (i.e., their derivatives with respect to N are greatest at low values of N). Intuition likewise suggests that designs should preclude total prey consumption being overwhelmed by the overall effect of interference among predators and hence that predator abundances should not be high. In this regard our results indicate that just a little variation across a range of low predator abundances is often—though far from universally—best from a relative model flexibility standpoint, just as it would be expected to be best for parameter estimation.

Our analyses did not consider questions regarding the treatment-specific distribution of experimental replicates, important though these often are given logistical constraints. All of our analyses assumed uniformly-balanced designs, the effect of which future analyses could easily assess by changing the probability of each experimental treatment when computing the Expected unit Fisher Information matrix underlying ${G}$ (see Box 1; a sketch follows below). We anticipate, however, that shifting replicates from lower prey and predator abundances to higher abundances will have a similar effect to that seen in the comparison of logarithmic to arithmetic spacings. Therefore, from a model flexibility standpoint alone, we expect such a shift to have a greater effect for models of high parametric complexity.

A final important aspect of experimental design that our analyses did not address was the assumed likelihood function connecting each deterministic functional-response model to an experiment's design (i.e., the structure of the data). We assumed a Poisson likelihood and therefore that eaten prey are continually replaced, that the mean and variance of prey eaten are equal for a given combination of predator and prey abundances, and that all feeding events are independent.
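Returning to the treatment-probability point above, the modification to the earlier Python sketch (given after the Parameter Constraints section) is minor: relaxing the balanced-design assumption amounts only to replacing the uniform weight 1/len(design) with treatment-specific probabilities (the skew below is hypothetical):

```python
# Sketch, continuing the earlier H2 example: unbalanced designs enter only
# through the treatment probabilities used when accumulating the expected
# unit Fisher information (the weights must sum to one).
def sqrt_det_I_weighted(a, h, weights):
    I = np.zeros((2, 2))
    for (N, P), w in zip(design, weights):
        g = grad_lam(a, h, N, P)
        I += w * np.outer(g, g) / lam(a, h, N, P)
    return np.sqrt(np.linalg.det(I))

raw = [2.0 if P == max(pred) else 1.0 for N, P in design]  # hypothetical skew
weights = [w / sum(raw) for w in raw]
```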
Model flexibility as assessed by geometric complexity may be different under alternative likelihoods such as the binomial likelihood (which would be appropriate for non-replacement designs) or the negative binomial likelihood (which allows for under- or over-dispersion). Indeed, for the binomial likelihood even the linear Holling Type I deterministic functional response results in a non-linear statistical model (Novak and Stouffer, 2021), hence relative geometric complexity may be quite different for models that account for prey depletion (see Supplementary Materials for a comparison of Rogers' random Type II and Type III predator models). That said, the maximum likelihood parameter estimators under Gaussian and log-Normal likelihoods are the same as under a Poisson likelihood for many—and possibly all—of the models we considered (Novak and Stouffer, 2021), so it is likely that our inferences would be little changed under these commonly assumed alternatives.

4.3. Model Flexibility as Problem and Desirable Property

There are many perspectives on the purpose of models and why we fit models to data. Shmueli (2010) articulates two primary axes of motivation that align well to the functional-response literature: explanation (where the primary motivation is to infer biologically- and statistically-significant causal associations the nature of which models are meant to characterize) and prediction (where the primary motivation is to best describe as yet unseen out-of-sample data)^1. The ability to satisfy both motivations converges as the amount of data and the range of conditions the data reflect increase, thereby mirroring the inferential convergence of information criteria as sample sizes increase and cause differences in goodness-of-fit to dominate measures of model complexity. Model flexibility, and with it our analyses, would thus be irrelevant if the sample sizes of functional-response experiments were sufficiently large. Instead, sample sizes for many studies are such that model flexibility—as well as other forms of statistical and non-statistical bias (Novak and Stouffer, 2021)—preclude the conclusion that models deemed to perform best on the basis of their information-theoretic ranking are also closest to biological truth.

Empiricists fitting functional-response models to data must therefore make the explicit choice between explanation, for which criteria such as BIC and FIA are intended, and prediction, for which AICc, cross-validation, model-averaging, and most forms of machine learning are intended (Shmueli, 2010; Aho et al., 2014; Höge et al., 2018). If data is limited and explanation is the goal, then design-dependent differences in model flexibility represent a critical problem for commonly-used criteria like BIC because more flexible models will be conflated with the truth. In such contexts, it would be wise to identify the most equitable design for a specifically chosen subset of hypothesis-driven models (see also Burnham and Anderson, 2002), or, in lieu of a better reasoned solution, to use a design or multiple designs that stack the deck against leading hypotheses associated with the most flexible models. On the other hand, if data is limited and out-of-sample prediction is the goal, then model flexibility could be considered an advantage if it causes more-complex-than-true models to be selected because they are deemed to perform better, especially when the true model may not even be among those being compared (Höge et al., 2018).
More generally, there are clearly contexts in which ecologists wish to have generic, flexible functional-response models that merely approximate aspects of the truth in a coarse manner, be it in more descriptive statistical contexts or in theoretical contexts where the potential role of these aspects in determining qualitatively different regimes of population dynamics is of interest (e.g., Arditi and Ginzburg, 2012; AlAdwani and Saavedra, 2020; Barbier et al., 2021). In these contexts, and since all models are phenomenological and hence agnostic with respect to precise mechanistic detail (as Table 1 underscores; see also Connolly et al., 2017; Hart et al., 2018), we consider the results of our analyses to be useful for making a priori choices among models, given that more flexible models likely capture and exhibit a greater amount of biologically insightful variation in a more analytically tractable manner.

4.4. Conclusions

Several syntheses evidence that there is no single model that can characterize predator functional responses in general (Skalski and Gilliam, 2001; Novak and Stouffer, 2021; Stouffer and Novak, 2021). This is consistent with the fact that, to a large degree, the statistical models of the functional-response literature characterize aspects of predator-prey biology for which there is evidence in data, not whether specific mechanisms do or do not occur in nature (see also Connolly et al., 2017). In light of the fact that functional-response data are hard to come by, our study demonstrates that a model's functional flexibility should be considered when interpreting its performance. That said, we are not advocating for FIA as an alternative to more commonly-used information criteria; its technical nature and model-specific idiosyncrasies do not lend themselves to widespread adoption or straightforward implementation (e.g., in software packages). Moreover, more fundamental issues exist that pertain to the explicit consideration of study motivation.

Indeed, we submit that questions of motivation are ones that the functional-response literature as a whole needs to grapple with more directly. Even in the specific context of prediction, for example, functional-response studies rarely address explicitly what their study and their data are intending to help better predict (e.g., feeding rates or population dynamics). Valuable effort would therefore be expended in future work to consider the relationship of model flexibility to the parametric and structural sensitivities of models when it comes to drawing inferences for population dynamics (e.g., Aldebert and Stouffer, 2018; Adamson and Morozov, 2020). Likewise, it would also be useful to clarify the relevance of model flexibility to the rapidly developing methods of scientific machine learning, including the use of symbolic regression, neural ordinary differential equations, and universal differential equations for model discovery (e.g., Martin et al., 2018; Guimerà et al., 2020; Rackauckas et al., 2020; Bonnaffé et al., 2021).

Data Availability Statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://github.com/marknovak/

Code and Data Availability

All code has been archived at FigShare: https://doi.org/10.6084/m9.figshare.16807210.v1 and is available at https://github.com/marknovak/GeometricComplexity.

Author Contributions

MN led the study and wrote the first draft. Both authors contributed to the analyses and revisions.
Funding

DS received support from the Marsden Fund Council from New Zealand Government funding, managed by Royal Society Te Apārangi (grant 16-UOC-008). MN acknowledges the support of a David and Lucile Packard Foundation grant to the Partnership for Interdisciplinary Studies of Coastal Oceans.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We thank Thomas Hossie for inviting our submission and for pointing out some models we had missed, and Bryan Lynn and the two reviewers for suggestions that helped clarify the manuscript.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fevo.2021.740362/full#supplementary-material

Footnotes

1. ^A third axis, description, remains common in the functional-response literature and typically takes the form of fitting "non-mechanistic" polynomial models to evaluate the statistical significance of various non-linearities.

References

Abrams, P. A. (1982). Functional responses of optimal foragers. Am. Nat. 120, 382–390. doi: 10.1086/283996

Abrams, P. A. (1990). The effects of adaptive-behavior on the type-2 functional-response. Ecology 71, 877–885. doi: 10.2307/1937359

Adamson, M. W., and Morozov, A. Y. (2020). Identifying the sources of structural sensitivity in partially specified biological models. Sci. Rep. 10:16926. doi: 10.1038/s41598-020-73710-z

Aho, K., Derryberry, D., and Peterson, T. (2014). Model selection for ecologists: the worldviews of AIC and BIC. Ecology 95, 631–636. doi: 10.1890/13-1452.1

AlAdwani, M., and Saavedra, S. (2020). Ecological models: higher complexity in, higher feasibility out. J. R. Soc. Interface 17:20200607. doi: 10.1098/rsif.2020.0607

Aldebert, C., Nerini, D., Gauduchon, M., and Poggiale, J. C. (2016a). Does structural sensitivity alter complexity-stability relationships? Ecol. Complexity 28, 104–112. doi: 10.1016/

Aldebert, C., Nerini, D., Gauduchon, M., and Poggiale, J. C. (2016b). Structural sensitivity and resilience in a predator-prey model with density-dependent mortality. Ecol. Complexity 28, 163–173. doi: 10.1016/j.ecocom.2016.05.004

Aldebert, C., and Stouffer, D. B. (2018). Community dynamics and sensitivity to model structure: towards a probabilistic view of process-based model predictions. J. R. Soc. Interface 15:20180741. doi: 10.1098/rsif.2018.0741

Andrews, J. F. (1968). A mathematical model for the continuous culture of microorganisms utilizing inhibitory substrates. Biotechnol. Bioeng. 10, 707–723. doi: 10.1002/bit.260100602

Arditi, R., and Akçakaya, H. R. (1990). Underestimation of mutual interference of predators. Oecologia 83, 358–361. doi: 10.1007/BF00317560

Arditi, R., and Ginzburg, L. R. (1989). Coupling in predator-prey dynamics: ratio-dependence. J. Theor. Biol. 139, 311–326. doi: 10.1016/S0022-5193(89)80211-5

Arditi, R., and Ginzburg, L. R. (2012). How Species Interact: Altering the Standard View on Trophic Ecology. Oxford: Oxford University Press.

Barbier, M., Wojcik, L., and Loreau, M. (2021). A macro-ecological approach to predation density-dependence. Oikos 130, 553–570. doi: 10.1111/oik.08043
Beddington, J. R. (1975). Mutual interference between parasites or predators and its effect on searching efficiency. J. Anim. Ecol. 44, 331–340. doi: 10.2307/3866

Bonnaffé, W., Sheldon, B. C., and Coulson, T. (2021). Neural ordinary differential equations for ecological and evolutionary time-series analysis. Methods Ecol. Evol. 12, 1301–1315. doi: 10.1111/

Burnham, K. P., and Anderson, D. R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd Edn. New York, NY: Springer.

Coelho, M. T. P., Diniz-Filho, J., and Rangel, T. F. (2019). A parsimonious view of the parsimony principle in ecology and evolution. Ecography 42, 968–976. doi: 10.1111/ecog.04228

Connolly, S. R., Keith, S. A., Colwell, R. K., and Rahbek, C. (2017). Process, mechanism, and modeling in macroecology. Trends Ecol. Evol. 32, 835–844. doi: 10.1016/j.tree.2017.08.011

Cosner, C., DeAngelis, D. L., Ault, J. S., and Olson, D. B. (1999). Effects of spatial grouping on the functional response of predators. Theor. Popul. Biol. 56, 65–75. doi: 10.1006/tpbi.1999.1414

Crowley, P. H., and Martin, E. K. (1989). Functional responses and interference within and between year classes of a dragonfly population. J. North Am. Benthol. Soc. 8, 211–221. doi: 10.2307/1467324

DeAngelis, D. L., Goldstein, R. A., and O'Neill, R. V. (1975). A model for trophic interaction. Ecology 56, 881–892. doi: 10.2307/1936298

DeLong, J. P., and Uiterwaal, S. F. (2018). The FoRAGE (Functional responses from around the globe in all ecosystems) database: a compilation of functional responses for consumers and parasitoids. Knowl. Netw. Biocomplex. doi: 10.5063/F17H1GTQ

Ellison, A. M. (2004). Bayesian inference in ecology. Ecol. Lett. 7, 509–520. doi: 10.1111/j.1461-0248.2004.00603.x

Evans, M. R., Grimm, V., Johst, K., Knuuttila, T., de Langhe, R., Lessells, C. M., et al. (2013). Do simple models lead to generality in ecology? Trends Ecol. Evol. 28, 578–583. doi: 10.1016/

Fujii, K., Holling, C., and Mace, P. (1986). A simple generalized model of attack by predators and parasites. Ecol. Res. 1, 141–156. doi: 10.1007/BF02347017

Fussmann, G. F., and Blasius, B. (2005). Community response to enrichment is highly sensitive to model structure. Biol. Lett. 1, 9–12. doi: 10.1098/rsbl.2004.0246

Gause, G. F. (1934). The Struggle for Existence. Baltimore, MD: Williams and Wilkins.

Grünwald, P. (2000). Model selection based on minimum description length. J. Math. Psychol. 44, 133–152. doi: 10.1006/jmps.1999.1280

Guimerà, R., Reichardt, I., Aguilar-Mogas, A., Massucci, F. A., Miranda, M., Pallarès, J., et al. (2020). A Bayesian machine scientist to aid in the solution of challenging scientific problems. Sci. Adv. 6:eaav6971. doi: 10.1126/sciadv.aav6971

Gutierrez, A., and Baumgärtner, J. (1984). Multitrophic models of predator-prey energetics I: age-specific energetics models-Pea aphid Acrythosiphon pisum (Homoptera: Aphidae) as an example. Can. Entomol. 116, 923–932. doi: 10.4039/Ent116923-7

Hart, S. P., Freckleton, R. P., and Levine, J. M. (2018). How to quantify competitive ability. J. Ecol. 106, 1902–1909. doi: 10.1111/1365-2745.12954

Hassell, M. P., Lawton, J. H., and Beddington, J. R. (1977). Sigmoid functional responses by invertebrate predators and parasitoids. J. Anim. Ecol. 46, 249–262. doi: 10.2307/3959

Hassell, M. P., and Varley, G. C. (1969). New inductive population model for insect parasites and its bearing on biological control. Nature 223, 1133–1137. doi: 10.1038/2231133a0
Höge, M., Wöhling, T., and Nowak, W. (2018). A primer for model selection: The decisive role of model complexity. Water Resour. Res. 54, 1688–1715. doi: 10.1002/2017WR021902

Holling, C. S. (1959). Some characteristics of simple types of predation and parasitism. Can. Entomol. 91, 385–398. doi: 10.4039/Ent91385-7

Holling, C. S. (1965). The functional response of predators to prey density and its role in mimicry and population regulation. Mem. Entomol. Soc. Can. 45, 3–60. doi: 10.4039/entm9745fv

Jassby, A. D., and Platt, T. (1976). Mathematical formulation of the relationship between photosynthesis and light for phytoplankton. Limnol. Oceanogr. 21, 540–547. doi: 10.4319/lo.1976.21.4.0540

Jeschke, J. M., Kopp, M., and Tollrian, R. (2002). Predator functional responses: discriminating between handling and digesting prey. Ecol. Monogr. 72, 95–112. doi: 10.1890/0012-9615(2002)072

Johnson, J. B., and Omland, K. S. (2004). Model selection in ecology and evolution. Trends Ecol. Evol. 19, 101–108. doi: 10.1016/j.tree.2003.10.013

Kass, R. E., and Raftery, A. E. (1995). Bayes factors. J. Am. Stat. Assoc. 90, 773–795. doi: 10.1080/01621459.1995.10476572

Kratina, P., Vos, M., Bateman, A., and Anholt, B. R. (2009). Functional responses modified by predator density. Oecologia 159, 425–433. doi: 10.1007/s00442-008-1225-5

Lotka, A. J. (1925). Elements of Physical Biology. Baltimore, MD: Williams & Wilkins.

Ly, A., Marsman, M., Verhagen, J., Grasman, R. P., and Wagenmakers, E.-J. (2017). A tutorial on Fisher information. J. Math. Psychol. 80, 40–55. doi: 10.1016/j.jmp.2017.05.006

Martin, B., Munch, S., and Hein, A. (2018). Reverse-engineering ecological theory from data. Proc. R. Soc. B Biol. Sci. 285:20180422. doi: 10.1098/rspb.2018.0422

Moffat, H., Hainy, M., Papanikolaou, N. E., and Drovandi, C. (2020). Sequential experimental design for predator-prey functional response experiments. J. R. Soc. Interface 17, 20200156. doi: 10.1098/

Myung, J. I., Navarro, D. J., and Pitt, M. A. (2006). Model selection by normalized maximum likelihood. J. Math. Psychol. 50, 167–179. doi: 10.1016/j.jmp.2005.06.008

Novak, M., and Stouffer, D. B. (2021). Systematic bias in studies of consumer functional responses. Ecol. Lett. 24, 580–593. doi: 10.1111/ele.13660

Okuyama, T. (2013). On selection of functional response models: Holling's models and more. Biocontrol 58, 293–298. doi: 10.1007/s10526-012-9492-9

Okuyama, T., and Ruyle, R. L. (2011). Solutions for functional response experiments. Acta Oecol. 37, 512–516. doi: 10.1016/j.actao.2011.07.002

Pitt, M. A., Myung, I. J., and Zhang, S. (2002). Toward a method of selecting among computational models of cognition. Psychol. Rev. 109, 472–491. doi: 10.1037/0033-295X.109.3.472

Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., et al. (2020). Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385.

Real, L. A. (1977). The kinetics of functional response. Am. Nat. 111, 289–300. doi: 10.1086/283161

Rissanen, J. (1978). Modeling by shortest data description. Automatica 14, 465–471. doi: 10.1016/0005-1098(78)90005-5

Rissanen, J. J. (1996). Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 42, 40–47. doi: 10.1109/18.481776

Rogers, D. (1972). Random search and insect population models. J. Anim. Ecol. 41, 369–383. doi: 10.2307/3474

Rosenzweig, M. L. (1971). Paradox of enrichment: destabilization of exploitation ecosystems in ecological time. Science 171, 385–387. doi: 10.1126/science.171.3969.385
Paradox of enrichment: destabilization of exploitation ecosystems in ecological time. Science 171, 385–387. doi: 10.1126/science.171.3969.385 Ruxton, G., Gurney, W., and De Roos, A. (1992). Interference and generation cycles. Theor. Popul. Biol. 42, 235–253. doi: 10.1016/0040-5809(92)90014-K Sarnelle, O., and Wilson, A. E. (2008). Type III functional response in Daphnia. Ecology 89, 1723–1732. doi: 10.1890/07-0935.1 Schenk, D., Bersier, L.-F., and Bacher, S. (2005). An experimental test of the nature of predation: neither prey- nor ratio-dependent. J. Anim. Ecol. 74, 86–91. doi: 10.1111/j.1365-2656.2004.00900.x Shmueli, G. (2010). To explain or to predict? Stat. Sci. 25, 289–310. doi: 10.1214/10-STS330 Skalski, G. T., and Gilliam, J. F. (2001). Functional responses with predator interference: viable alternatives to the Holling Type II model. Ecology 82, 3083–3092. doi: 10.1890/0012-9658(2001)082 Sokol, W., and Howell, J. A. (1981). Kinetics of phenol oxidation by washed cells. Biotechnol. Bioeng. 23, 2039–2049. doi: 10.1002/bit.260230909 Stouffer, D. B., and Novak, M. (2021). Hidden layers of density dependence in consumer feeding rates. Ecol. Lett. 24, 520–532. doi: 10.1111/ele.13670 Sutherland, W. J. (1983). Aggregation and the ‘ideal free’ distribution. J. Anim. Ecol. 52, 821–828. doi: 10.2307/4456 Tostowaryk, W. (1972). The effect of prey defense on the functional response of Podisus modestus (Hemiptera: Pentatomidae) to densities of the sawflies Neodiprion swainei and N. pratti banksianae (Hymenoptera: Neodiprionidae). Can. Entomol. 104, 61–69. doi: 10.4039/Ent10461-1 Tyutyunov, Y., Titova, L., and Arditi, R. (2008). Predator interference emerging from trophotaxis in predator-prey systems: an individual-based approach. Ecol. Complexity 5, 48–58. doi: 10.1016/ Uszko, W., Diehl, S., and Wickman, J. (2020). Fitting functional response surfaces to data: a best practice guide. Ecosphere 11, e03051. doi: 10.1002/ecs2.3051 Volterra, V. (1926). Fluctuations in the abundance of a species considered mathematically. Nature 118:558–560. doi: 10.1038/118558a0 Watt, K. (1959). A mathematical model for the effect of densities of attacked and attacking species on the number attacked. Can. Entomol. 91, 129–144. doi: 10.4039/Ent91129-3 Whitehead, A. N. (1919). The concept of nature. Convergence 62:79. Zhang, J. F., Papanikolaou, N. E., Kypraios, T., and Drovandi, C. C. (2018). Optimal experimental design for predator-prey functional response experiments. J. R. Soc. Interface 15:20180186. doi: Keywords: consumer-resource interactions, model comparison, structural complexity, model flexibility, nonlinearity, experimental design, fisher information, prediction Citation: Novak M and Stouffer DB (2021) Geometric Complexity and the Information-Theoretic Comparison of Functional-Response Models. Front. Ecol. Evol. 9:740362. doi: 10.3389/fevo.2021.740362 Received: 13 July 2021; Accepted: 13 October 2021; Published: 11 November 2021. Copyright © 2021 Novak and Stouffer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. 
*Correspondence: Mark Novak, mark.novak@oregonstate.edu
†These authors have contributed equally to this work
{"url":"https://www.frontiersin.org/journals/ecology-and-evolution/articles/10.3389/fevo.2021.740362/full","timestamp":"2024-11-13T02:14:05Z","content_type":"text/html","content_length":"669170","record_id":"<urn:uuid:3a821857-3638-4faf-afff-f9c27d109841>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00037.warc.gz"}
Which Statement Is True Of Triangles P And Q (2024)

Triangles, those fascinating geometric shapes with their three sides and three angles, often hold secrets waiting to be uncovered. In the realm of mathematics, the comparison of triangles P and Q raises intriguing questions about their properties and relationships. So, what statements ring true when it comes to these geometric figures? Let's delve into the depths of triangles P and Q to decipher the truth behind their configurations and characteristics.

Understanding Triangles: A Brief Overview

Before we embark on our journey to unravel the mysteries of triangles P and Q, let's refresh our understanding of these fundamental shapes. A triangle is a polygon with three edges and three vertices. Each vertex connects two sides, forming three interior angles that always add up to 180 degrees.

Statement 1: Triangles P and Q Have Equal Areas

The first statement we encounter in our exploration is the assertion that triangles P and Q possess equal areas. This proposition implies that the two triangles occupy the same amount of space within their boundaries. To determine the validity of this claim, we must scrutinize the dimensions and configurations of triangles P and Q. Upon closer examination, we may discover that the base and height of each triangle are equal, resulting in identical areas. Since the area of a triangle is half the product of its base and height, the areas are equal exactly when those products are equal. So the triangles may have different bases and heights and still enclose the same area, provided a longer base is offset by a proportionally shorter height (see the short numeric check at the end of this article). If the products of base and height differ, the areas differ as well.

Statement 2: Triangles P and Q Are Congruent

The second statement posits that triangles P and Q are congruent, suggesting that they have the same size and shape. In the realm of geometry, congruent triangles exhibit corresponding sides and angles that are equal in measure. Thus, to verify the truth of this assertion, we must assess the congruence criteria between triangles P and Q. One method to establish congruence is through the Angle-Side-Angle (ASA) criterion, wherein two angles and the included side of one triangle are equal to their corresponding parts in the other triangle. Alternatively, the Side-Angle-Side (SAS) criterion and the Side-Side-Side (SSS) criterion can also be employed to prove congruence between triangles.

Statement 3: Triangles P and Q Are Similar

The third statement suggests that triangles P and Q are similar, indicating that their corresponding angles are congruent, but their sides may vary in length. Similar triangles possess proportional side lengths and congruent angles, enabling us to establish relationships between their corresponding parts. To ascertain the similarity of triangles P and Q, we can employ the Angle-Angle (AA) criterion or the Side-Angle-Side (SAS) similarity criterion. By comparing the measures of corresponding angles and the lengths of corresponding sides, we can determine whether the triangles exhibit the properties of similarity.

Statement 4: Triangles P and Q Have Different Perimeters

The fourth statement introduces the notion that triangles P and Q have different perimeters, implying that the total length of their boundaries varies. The perimeter of a triangle is the sum of the lengths of its three sides, providing insights into the overall size of the shape. To validate this assertion, we must calculate the perimeters of triangles P and Q based on their respective side lengths.
If the sum of the lengths of the sides differs between the two triangles, then their perimeters will indeed be distinct. However, if the total lengths of their sides are equal, the perimeters of triangles P and Q will also be identical.

Statement 5: Triangles P and Q Have Different Altitudes

The fifth statement asserts that triangles P and Q possess different altitudes, indicating variations in the perpendicular distances from their vertices to their opposite sides. The altitude of a triangle plays a crucial role in determining its area, as it serves as the height of the shape in geometric calculations. To verify this claim, we must examine the perpendicular distances from the vertices of triangles P and Q to their respective bases. If these distances differ between the two triangles, then their altitudes will also be distinct. However, if the perpendicular distances are equal, the altitudes of triangles P and Q will be identical.

Conclusion: Deciphering the Truth

In our quest to unravel the truth about triangles P and Q, we have encountered a series of statements that shed light on their properties and relationships. From exploring their areas and perimeters to delving into their congruence and similarity, we have navigated the intricacies of these geometric shapes with curiosity and precision.

While each statement presents a unique perspective on triangles P and Q, it is essential to approach them with discernment and analytical rigor. By applying geometric principles and criteria, we can unravel the mysteries of these shapes and discern the true nature of their configurations.

As we conclude our exploration, let us remember that the pursuit of knowledge is an ongoing journey, fueled by curiosity and guided by reason. In the realm of geometry, as in life, the quest for truth beckons us to embrace uncertainty and engage in the relentless pursuit of understanding.

FAQs (Frequently Asked Questions)

1. Are triangles P and Q necessarily congruent if their corresponding angles are equal?
• While equal corresponding angles indicate similarity between triangles, congruence requires equality in both angles and side lengths. Thus, additional criteria must be met to establish congruence.

2. Can triangles P and Q have the same area if their side lengths are different?
• Yes. Triangles with different side lengths can still have equal areas: the area equals half the product of base and height, so the areas match whenever those products are equal, even though the sides differ (see the short numeric check below).

3. How can I determine if triangles P and Q are similar without measuring all their angles and sides?
• You can employ similarity criteria such as Angle-Angle (AA) or Side-Angle-Side (SAS) to establish similarity between triangles without measuring all their angles and sides. These criteria rely on specific angle and side relationships to determine similarity.

4. Is it possible for triangles P and Q to have different perimeters if their side lengths are equal?
• No, if the side lengths of triangles P and Q are equal, then their perimeters will also be identical. The perimeter of a triangle is determined solely by the lengths of its sides, so equal side lengths result in equal perimeters.

5. Can triangles P and Q have different altitudes if they share the same base length?
• Yes, triangles with the same base length can have different altitudes if their heights (perpendicular distances from the base to the opposite vertex) vary.
The altitude is precisely this perpendicular height, which may differ between the two triangles even if the base length remains constant.
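To make the equal-area claim in FAQ 2 concrete, here is a minimal Python check (an illustrative addition, not part of the original article; the side lengths are hypothetical). Heron's formula computes a triangle's area from its three side lengths alone, and the two right triangles below have different sides, and hence different perimeters, yet exactly the same area:

import math

def herons_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Triangle P: right triangle with legs 6 and 4 -> area = 1/2 * 6 * 4 = 12
p = herons_area(6, 4, math.hypot(6, 4))
# Triangle Q: right triangle with legs 8 and 3 -> area = 1/2 * 8 * 3 = 12
q = herons_area(8, 3, math.hypot(8, 3))

print(p, q)  # both print 12.0 (up to floating-point rounding),
             # while the perimeters of P and Q clearly differ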
{"url":"https://zenzen.best/article/which-statement-is-true-of-triangles-p-and-q","timestamp":"2024-11-15T03:39:25Z","content_type":"text/html","content_length":"118074","record_id":"<urn:uuid:53b6e607-fbda-4ace-a65c-32acb79baa68>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00418.warc.gz"}
struct EvaluationSummary: used to assemble summary information about the existing connections so we can evaluate a new path.

Data fields:

unsigned int min_length: minimum length of any of our connections, UINT_MAX if we have none.
unsigned int max_length: maximum length of any of our connections, 0 if we have none.
GNUNET_CONTAINER_HeapCostType min_desire: minimum desirability of any of our connections, UINT64_MAX if we have none.
GNUNET_CONTAINER_HeapCostType max_desire: maximum desirability of any of our connections, 0 if we have none.
struct CadetPeerPath *path: path we are comparing against for evaluate_connection, can be NULL.
struct CadetTConnection *worst: connection deemed the "worst" so far encountered by evaluate_connection, NULL if we did not yet encounter any connections.
double worst_score: numeric score of worst, only set if worst is non-NULL.
int duplicate: set to GNUNET_YES if we have a connection over path already.
{"url":"https://docs.gnunet.org/doxygen/d9/da8/structEvaluationSummary.html","timestamp":"2024-11-01T19:06:12Z","content_type":"application/xhtml+xml","content_length":"24549","record_id":"<urn:uuid:ccb8f20f-1ac9-4040-8ba1-810abeeeb550>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00868.warc.gz"}
Trigonometry, Part 2 - David The Maths Tutor

Now let's use the unit circle to see some of the common trig identities. These identities (rules) will be used in future posts.

Let's assume we have an acute angle θ. An acute angle is one that is between 0 and π/2 (or 0 to 90°). The following identities are valid for any angle, not just acute ones – it is just easier to see the logic in the diagram if we assume this.

The following picture shows the relationship between an angle θ in the first quadrant, and an angle in the second quadrant which is symmetric with θ:

You can see that to measure this symmetric angle from the positive x-axis, you just subtract it from π. The coordinates of the intersected point on the unit circle are negative for the x coordinate but the same y coordinate as the original angle θ. So the following identities are evident from this picture:

cos(π − θ) = −cos θ
sin(π − θ) = sin θ
tan(π − θ) = sin θ/(−cos θ) = −tan θ

Again, these are true for any angle, not just acute ones. As an example, let θ = π/3 (60°). The following is true for π/3:

cos(π/3) = 1/2
sin(π/3) = √3/2
tan(π/3) = √3

Now π − π/3 = 2π/3. So using these identities, we know that

cos(2π/3) = −1/2
sin(2π/3) = √3/2
tan(2π/3) = −√3

Now let's look at a symmetric angle in the third quadrant. To measure this angle from the positive x-axis, you add it to π. The corresponding coordinates of the intersected point on the unit circle are both the negative of the coordinates for θ. So the following identities are shown in this picture:

cos(π + θ) = −cos θ
sin(π + θ) = −sin θ
tan(π + θ) = −sin θ/(−cos θ) = tan θ

Using our same example, π + π/3 = 4π/3. Using these identities:

cos(4π/3) = −1/2
sin(4π/3) = −√3/2
tan(4π/3) = √3

One more quadrant to go:

As was mentioned before, angles measured clockwise from the positive x-axis are negative. So the following trig identities are shown in the figure above:

cos(−θ) = cos θ
sin(−θ) = −sin θ
tan(−θ) = −sin θ/cos θ = −tan θ

cos(−π/3) = 1/2
sin(−π/3) = −√3/2
tan(−π/3) = −√3

There are a couple more identities I would like to show but I'll save that for next time.
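As a quick sanity check on these identities (a small illustrative script, not part of the original post), we can evaluate both sides numerically for a handful of sample angles:

import math

# Verify cos(pi - t) = -cos(t), sin(pi - t) = sin(t), tan(pi + t) = tan(t),
# and the negative-angle identities, for several angles t.
for t in (math.pi / 6, math.pi / 4, math.pi / 3, 1.0):
    assert math.isclose(math.cos(math.pi - t), -math.cos(t))
    assert math.isclose(math.sin(math.pi - t), math.sin(t))
    assert math.isclose(math.tan(math.pi + t), math.tan(t))
    assert math.isclose(math.cos(-t), math.cos(t))
    assert math.isclose(math.sin(-t), -math.sin(t))
    assert math.isclose(math.tan(-t), -math.tan(t))

print("all identities hold numerically")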
{"url":"https://davidthemathstutor.com.au/2019/11/16/trigonometry-part-2/","timestamp":"2024-11-03T14:05:00Z","content_type":"text/html","content_length":"46680","record_id":"<urn:uuid:74cd60bf-83e8-4216-9c3a-509473e61937>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00197.warc.gz"}
calcNormFactors: Calculate normalization factors in jqsunac/TCC: TCC: Differential expression analysis for tag count data with robust normalization strategies

This function calculates normalization factors using a specified multi-step normalization method from a TCC-class object. The procedure can generally be described as the STEP1 - (STEP2 - STEP3)^n pipeline.

Usage:

## S4 method for signature 'TCC'
calcNormFactors(tcc, norm.method = NULL, test.method = NULL,
    iteration = TRUE, FDR = NULL, floorPDEG = NULL,
    increment = FALSE, ...)

Arguments:

tcc: TCC-class object.

norm.method: character specifying a normalization method used in both STEP1 and STEP3. Possible values are "tmm" for the TMM normalization method implemented in the edgeR package, "edger" (same as "tmm"), and "deseq2" for the method implemented in the DESeq2 package. The default is "tmm".

test.method: character specifying a method for identifying differentially expressed genes (DEGs) used in STEP2: one of "edger", "deseq2", "bayseq", "voom" and "wad". See the "Details" field in estimateDE for details. The default is "edger".

iteration: logical or numeric value specifying the number of iterations (n) in the proposed normalization pipeline: the STEP1 - (STEP2 - STEP3)^n pipeline. If FALSE or 0 is specified, the normalization pipeline is performed only by the method in STEP1. If TRUE or 1 is specified, the three-step normalization pipeline is performed. Integers higher than 1 indicate the number of iterations in the pipeline.

FDR: numeric value (between 0 and 1) specifying the threshold for determining potential DEGs after STEP2.

floorPDEG: numeric value (between 0 and 1) specifying the minimum value to be eliminated as potential DEGs before performing STEP3.

increment: logical value. If increment = TRUE, the DEGES pipeline is performed again, starting from the current iterated result.

...: arguments to identify potential DEGs at STEP2. See the "Arguments" field in estimateDE for details.

The calcNormFactors function is the main function in the TCC package.
Since this pipeline employs the DEG identification method at STEP2, our multi-step strategy can eliminate the negative effect of potential DEGs before the second normalization at STEP3. To fully utilize the DEG elimination strategy (DEGES), we strongly recommend not to use iteration = 0 or iteration = FALSE. This function internally calls functions implemented in other R packages according to the specified value:

norm.method = "tmm": The calcNormFactors function implemented in edgeR is used for obtaining the TMM normalization factors at both STEP1 and STEP3.

norm.method = "deseq2": The estimateSizeFactors function implemented in DESeq2 is used for obtaining the size factors at both STEP1 and STEP3. The size factors are internally converted to normalization factors that are comparable to the TMM normalization factors.

Value: After performing the calcNormFactors function, the calculated normalization factors are populated in the norm.factors field (i.e., tcc$norm.factors). Parameters used for DEGES normalization (e.g., potential DEGs identified in STEP2, execution times for the identification, etc.) are stored in the DEGES field (i.e., tcc$DEGES) as follows:

iteration: the iteration number n for the STEP1 - (STEP2 - STEP3)^n pipeline.

pipeline: the DEGES normalization pipeline.

threshold: stores (i) the type of threshold (threshold$type), (ii) the threshold value (threshold$input), and (iii) the percentage of potential DEGs actually used (threshold$PDEG). These values depend on whether the percentage of DEGs identified in STEP2 is higher or lower than the value indicated by floorPDEG. Consider, for example, the execution of the calcNormFactors function with FDR = 0.1 and floorPDEG = 0.05. If the percentage of DEGs identified in STEP2 satisfying FDR = 0.1 was 0.14 (i.e., higher than the floorPDEG of 0.05), the values in the threshold fields will be threshold$type = "FDR", threshold$input = 0.1, and threshold$PDEG = 0.14. If the percentage (= 0.03) was lower than the predefined floorPDEG value of 0.05, the values in the threshold fields will be threshold$type = "floorPDEG", threshold$input = 0.05, and threshold$PDEG = 0.05.

potDEG: numeric binary vector (0 for non-DEG or 1 for DEG) after the evaluation of the percentage of DEGs identified in STEP2 against the predefined floorPDEG value. If the percentage (e.g., 2%) is lower than the floorPDEG value (e.g., 17%), 17% of elements become 1 as DEG.

prePotDEG: numeric binary vector (0 for non-DEG or 1 for DEG) before the evaluation of the percentage of DEGs identified in STEP2 against the predefined floorPDEG value. Regardless of the floorPDEG value, the percentage of elements with 1 is always the same as that of DEGs identified in STEP2.

execution.time: computation time required for normalization.
Examples:

data(hypoData)
group <- c(1, 1, 1, 2, 2, 2)

# Calculating normalization factors using the DEGES/edgeR method
# (the TMM-edgeR-TMM pipeline).
tcc <- new("TCC", hypoData, group)
tcc <- calcNormFactors(tcc, norm.method = "tmm", test.method = "edger",
                       iteration = 1, FDR = 0.1, floorPDEG = 0.05)
tcc$norm.factors

# Calculating normalization factors using the iterative DEGES/edgeR method
# (iDEGES/edgeR) with n = 3.
tcc <- new("TCC", hypoData, group)
tcc <- calcNormFactors(tcc, norm.method = "tmm", test.method = "edger",
                       iteration = 3, FDR = 0.1, floorPDEG = 0.05)
tcc$norm.factors
{"url":"https://rdrr.io/github/jqsunac/TCC/man/calcNormFactors.html","timestamp":"2024-11-03T02:41:07Z","content_type":"text/html","content_length":"39126","record_id":"<urn:uuid:c9503b7c-237a-421c-b663-df2ef3651046>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00217.warc.gz"}
Quantum Computers Are Starting to Simulate the World of Subatomic Particles

There is a heated race to make quantum computers deliver practical results. But this race isn't just about making better technology—usually defined in terms of having fewer errors and more qubits, which are the basic building blocks that store quantum information. At least for now, the quantum computing race requires grappling with the complex realities of both quantum technologies and difficult problems. To develop quantum computing applications, researchers need to understand a particular quantum technology and a particular challenging problem and then adapt the strengths of the technology to address the intricacies of the problem.

Assistant Professor Zohreh Davoudi, a member of the Maryland Center for Fundamental Physics, has been working with multiple colleagues at UMD to ensure that the problems that she cares about are among those benefiting from early advances in quantum computing. The best modern computers have often proven inadequate at simulating the details that nuclear physicists need to understand our universe at the deepest levels. Davoudi and JQI Fellow Norbert Linke are collaborating to push the frontier of both the theories and technologies of quantum simulation through research that uses current quantum computers. Their research is intended to illuminate a path toward simulations that can cut through the current blockade of fiendishly complex calculations and deliver new theoretical predictions. For example, quantum simulations might be the perfect tool for producing new predictions based on theories that combine Einstein's theory of special relativity and quantum mechanics to describe the basic building blocks of nature—the subatomic particles and the forces among them—in terms of "quantum fields". Such predictions are likely to reveal new details about the outcomes of high-energy collisions in particle accelerators and other lingering physics questions.

The team's current efforts might help nuclear physicists, including Davoudi, to take advantage of the early benefits of quantum computing instead of needing to rush to catch up when quantum computers hit their stride. For Linke, who is also an assistant professor of physics at UMD, the problems faced by nuclear physicists provide a challenging practical target to take aim at during these early days of quantum computing.

In a new paper in PRX Quantum, Davoudi, Linke and their colleagues have combined theory and experiment to push the boundaries of quantum simulations—testing the limits of both the ion-based quantum computer in Linke's lab and proposals for simulating quantum fields. Both Davoudi and Linke are also part of the NSF Quantum Leap Challenge Institute for Robust Quantum Simulation that is focused on exploring the rich opportunities presented by quantum simulations.

The new project wasn't about adding more qubits to the computer or stamping out every source of error. Rather, it was about understanding how current technology can be tested against quantum simulations that are relevant to nuclear physicists so that both the theoretical proposals and the technology can progress in practical directions.
The result was both a better quantum computer and improved quantum simulations of a basic model of subatomic particles.

"I think for the current small and noisy devices, it is important to have a collaboration of theorists and experimentalists so that we can implement useful quantum simulations," says JQI graduate student Nhung Nguyen, who was the first author of the paper. "There are many things we could try to improve on the experimental sides but knowing which one leaves the greatest impact on the result helps guide us in the right direction. And what makes the biggest impact depends a lot on what you try to simulate."

The team knew the biggest and most rewarding challenges in nuclear physics are beyond the reach of current hardware, so they started with something a little simpler than reality: the Schwinger model. Instead of looking at particles in reality's three dimensions evolving over time, this model pares things down to particles existing in just one dimension over time. The researchers also further simplified things by using a version of the model that breaks continuous space into discrete sites. So in their simulations, space only exists as one line of distinct sites, like a column cut off a chess board, and the particles are like pieces that must always reside in one square or another along that column.

Despite the model being stripped of so much of reality's complexity, interesting physics can still play out in it. "The Schwinger model kind of hits the sweet spot between something that we can simulate and something that is interesting," says Minh Tran, an MIT postdoctoral researcher and former JQI graduate student who is a coauthor on the paper. "There are definitely more complicated and more interesting models, but they're also more difficult to realize in the current experiments."

In this project, the team looked at simulations of electrons and positrons—the antiparticles of electrons—appearing and disappearing over time in the Schwinger model. For convenience, the team started the simulation with an empty space—a vacuum. The creation and annihilation of a particle and its antiparticle out of vacuum is one of the significant predictions of quantum field theory. Schwinger's work establishing this description of nature earned him, alongside Richard Feynman and Sin-Itiro Tomonaga, the Nobel Prize in physics in 1965. Simulating the details of such fundamental physics from first principles is a promising and challenging goal for quantum computers.

Nguyen led the experiment that simulated Schwinger's pair production on the Linke Lab quantum computer, which uses ions—charged atoms—as the qubits. "We have a quantum computer, and we want to push the limits," Nguyen says. "We want to see if we optimize everything, how long can we go with it and is there something we can learn from doing the experimental simulation."

The researchers simulated the model using up to six qubits and a preexisting language of computing actions called quantum gates. This approach is an example of digital simulation. In their computer, the ions stored information about whether particles or antiparticles exist at each site in the model, and interactions were described using a series of gates that can change the ions and let them influence each other.
In the experiments, the gates only manipulated one or two ions at a time, so the simulation couldn't include everything in the model interacting and changing simultaneously. The reality of digital simulations demands the model be chopped into multiple pieces that each evolve over small steps in time. The team had to figure out the best sequence of their individual quantum gates to approximate the model changing continuously over time.

"You're just approximately applying parts of what you want to do bit by bit," Linke says. "And so that's an approximation, but all the orderings—which one you apply first, and which one second, etc.—will approximate the same actual evolution. But the errors that come up are different from different orderings. So there's a lot of choices here."

Many things go into making those choices, and one important factor is the model's symmetries. In physics, a symmetry describes a change that leaves the equations of a model unchanged. For instance, in our universe rotating only changes your perspective and not the equations describing gravity, electricity or magnetism. However, the equations that describe specific situations often have more restrictive symmetries. So if an electron is alone in space, it will see the same physics in every direction. But if that electron is between the atoms in a metal, then the direction matters a lot: Only specific directions look equivalent.

Physicists often benefit from considering symmetries that are more abstract than moving around in space, like symmetries about reversing the direction of time. The Schwinger model makes a good starting point for the team's line of research because of how it mimics aspects of complex nuclear dynamics and yet has simple symmetries. "Once we aim to simulate the interactions that are in play in nuclear physics, the expression of the relevant symmetries is way more complicated and we need to be careful about how to encode them and how to take advantage of them," Davoudi says. "In this experiment, putting things on a one-dimensional grid is only one of the simplifications. By adopting the Schwinger model, we have also greatly simplified the notion of symmetries, which end up becoming a simple electric charge conservation. In our three-dimensional reality though, those more complicated symmetries are the reason we have bound atomic nuclei and hence everything else!"

The Schwinger model's electric charge conservation symmetry keeps the total amount of electric charge the same. That means that if the simulation of the model starts from the empty state, then an electron should always be accompanied by a positron when it pops into or out of existence. So by choosing a sequence of quantum gates that always maintains this rule, the researchers knew that any result that violated it must be an error from experimental imperfections. They could then throw out the obviously bad data—a process called post-selection. This helped them avoid corrupted data but required more runs than if the errors could have been prevented.
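To make the post-selection step concrete, here is a schematic Python sketch (an illustrative addition, not the team's actual analysis code; the site-to-charge encoding is a deliberate simplification and the sample shots are hypothetical). If a '1' on an even-numbered site stands for a particle and a '1' on an odd-numbered site stands for an antiparticle, then any measured bitstring from a vacuum-started run whose implied net charge is nonzero can be discarded as corrupted:

# Schematic post-selection on charge conservation (illustrative only).
def net_charge(bitstring: str) -> int:
    # Simplified encoding: '1' on an even site = particle,
    # '1' on an odd site = antiparticle.
    particles = sum(int(b) for i, b in enumerate(bitstring) if i % 2 == 0)
    antiparticles = sum(int(b) for i, b in enumerate(bitstring) if i % 2 == 1)
    return particles - antiparticles

# Hypothetical measurement outcomes from a 6-qubit run.
shots = ["000000", "110000", "100000", "110110", "010000"]

kept = [s for s in shots if net_charge(s) == 0]
discarded = [s for s in shots if net_charge(s) != 0]

print("kept:", kept)            # ['000000', '110000', '110110']
print("discarded:", discarded)  # ['100000', '010000']

In the real staggered-fermion encoding the bookkeeping is subtler, but the principle is the same: the symmetry turns some errors into outcomes that are easy to recognize and filter out.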
The team also explored a separate way to use the Schwinger model's symmetries. There are orders of the simulation steps that might prove advantageous despite not obeying the model's symmetry rules. So suppressing errors that result from orderings that don't conform to the symmetry could prove useful.

Earlier this year, Tran and colleagues at JQI showed there is a way to cause certain errors, including ones from a symmetry-defying order of steps, to interfere with each other and cancel out. The researchers applied the proposed procedure in an experiment for the first time. They found that it did decrease errors that violated the symmetry rules. However, due to other errors in the experiment, the process didn't generally improve the results and overall was not better than resorting to post-selection. The fact that this method didn't work well for this experiment provided the team with insights into the errors occurring during their simulations.

All the tweaking and trial and error paid off. Thanks to the improvements the researchers made, including upgrading the hardware and implementing strategies like post-selection, they increased how much information they could get from the simulation before it was overwhelmed by errors. The experiment simulated the Schwinger model evolving for about three times longer than previous quantum simulations. This progress meant that instead of just seeing part of a cycle of particle creation and annihilation in the Schwinger model, they were able to observe multiple complete cycles.

"What is exciting about this experiment for me is how much it has pushed our quantum computer forward," says Linke. "A computer is a generic machine—you can do basically anything on it. And this is true for a quantum computer; there are all these various applications. But this problem was so challenging, that it inspired us to do the best we can and upgrade our system and go in new directions. And this will help us in the future to do more."

There is still a long road before the quantum computing race ends, and Davoudi isn't betting on just digital simulations to deliver the quantum computing prize for nuclear physicists. She is also interested in analog simulations and hybrid simulations that combine digital and analog approaches. In analog simulations, researchers directly map parts of their model onto those of an experimental simulation. Analog quantum simulations generally require fewer computing resources than their digital counterparts. But implementing analog simulations often requires experimentalists to invest more effort in specialized preparation since they aren't taking advantage of a set of standardized building blocks that has been preestablished for their quantum computer.

Moving forward, Davoudi and Linke are interested in further research on more efficient mappings onto the quantum computer and possibly testing simulations using a hybrid approach they have proposed. In this approach, they would replace a particularly challenging part of the digital mapping by using the phonons—quantum particles of sound—in Linke Lab's computer as direct stand-ins for the photons—quantum particles of light—in the Schwinger model and other similar models in nuclear physics.

"Being able to see that the kind of theories and calculations that we do on paper are now being implemented in reality on a quantum computer is just so exciting," says Davoudi. "I feel like I'm in a position that in a few decades, I can tell the next generations that I was so lucky to be able to do my calculations on the first generations of quantum computers. Five years ago, I could have not imagined this day."

Original story by Bailey Bedford: https://jqi.umd.edu/news/quantum-computers-are-starting-simulate-world-subatomic-particles
{"url":"https://umdphysics.umd.edu/about-us/news/research-news/1802-qc-mcfp.html","timestamp":"2024-11-13T23:16:35Z","content_type":"text/html","content_length":"179975","record_id":"<urn:uuid:8b3feba3-0c34-4b1a-ae84-975e5e2ea96d>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00699.warc.gz"}
The compound interest on ₹5000 at 20% - WorkSheets Buddy

The compound interest on ₹5000 at 20% per annum for 1 1/2 years compounded half-yearly is
(a) ₹6655
(b) ₹1655
(c) ₹1500
(d) ₹1565
Answer: (b) ₹1655

If the number of conversion periods ≥ 2, then the compound interest is
(a) Less than simple interest
(b) Equal to simple interest
(c) Greater than or equal to simple interest
(d) Greater than simple interest
Answer: (d) Greater than simple interest
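A quick computation confirming the first answer (an illustrative script, not part of the worksheet): at 20% per annum compounded half-yearly, the rate per half-year is 10%, and 1 1/2 years gives 3 conversion periods.

principal = 5000
rate_per_period = 0.20 / 2       # 20% p.a. compounded half-yearly -> 10% per half-year
periods = 3                      # 1.5 years = 3 half-years

amount = principal * (1 + rate_per_period) ** periods
compound_interest = amount - principal

print(round(amount))             # 6655
print(round(compound_interest))  # 1655 -> option (b)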
{"url":"https://www.worksheetsbuddy.com/the-compound-interest-on-rs-5000-at-20percent/","timestamp":"2024-11-11T05:35:08Z","content_type":"text/html","content_length":"129758","record_id":"<urn:uuid:43d4fe97-0e85-4329-b3b7-897f30de7451>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00225.warc.gz"}
A Survey of Graph Theory and Applications in Neo4J

A while ago I had the idea to do a graph theory talk at our local Neo4J Meetup, the Baltimore Washington Graph Database Meetup, especially since Neo4J and graph databases are based on graph theory. So I finally sat down and spent quite a bit of time putting a talk together. I decided to go with the idea of an overview of graph theory with a primary focus on the theoretical aspects and with some applications like network theory and some specific examples of applications to Neo4J. My talk is titled "A Survey of Graph Theory and Applications in Neo4J" and I presented it on June 24; see below for video, slides, and references. This post is my thoughts on it and things leading up to its creation.

Some of my blog posts are created from a desire to make math more accessible and more relevant, especially those areas of math that seem to be neglected by the traditional educational system. One huge gaping hole in my math education, both in our American institutional learning facilities and in my college computer science curriculum, was that I was never exposed to graph theory. Once you start studying graph theory you realize that many of these concepts could be taught in middle and elementary schools. As I was preparing my presentation, I saw this: "Math for seven-year-olds: graph coloring, chromatic numbers, and Eulerian paths and circuits", reaffirming what I already knew. Sadly many of these concepts were new to me as an adult, and I'd guess the same was probably true of much of my audience.

Graph theory has been a topic that I have included in my blog. I have written two posts about it: one is a general math post, "Triangles, Triangular Numbers, and the Adjacency Matrix", and the other post, "The Object Graph", attempts to put it in part in a programming context. One post or series of posts that I have wanted to do is an introduction/overview of graph theory. I was thinking that my talk could be turned into those posts. However, I am not sure if and when I will get a chance to do that, so I figured until then I will make the talk itself that post, and that post is this post.

I found myself torn about publishing the video of my presentation: on one hand I feel that it could be much better, but on the other, as a friend pointed out, people might find it informative and enjoy watching it. That being said, I reserve the right to replace it with a better version if I ever do one. Writing a post about the talk provides me with an opportunity to make a few corrections, add some more source material, and mention a couple of things that I would have liked to include but that got cut in the interest of time.

I'll start by adding some additional source material. I followed the introductory remarks with a brief historical overview. For that, especially 20th century contributors, I made use of these PowerPoint slides from John Crowcroft's "Principles of Communications" course. I also referenced Graph Theory, 1736-1936 by Norman L. Biggs, E. Keith Lloyd, and Robin J. Wilson. I mentioned the Watts and Strogatz Model (pdf of original Nature paper). I also included Fan Chung, who is married to Ronald Graham and is known for her work in graph theory, especially spectral graph theory. I mentioned snarks, not to be confused with Lewis Carroll's snark, after which the name was coined. When I was talking about the topology part I mentioned Euler's Gem.
Since the World Cup was going on during the talk, I have to include these topology links: "The Topology and Combinatorics of Soccer Balls (pdf)" and some more "soccer ball math". Due to time constraints I cut a few topics; two of them were the ideas of graph isomorphisms and graph automorphisms, along with ideas like graph families defined by their automorphisms and the automorphism group. Another area that I didn't really get into, except the linear algebra parts, was algebraic graph theory.

I did want to offer up a couple of corrections: I called a truncated icosahedron a dodecahedron, how embarrassing. I called the Travelling Salesman Problem the Travelling Salesman Program, doh! When I said a bipartite graph has no connections between edges, it should have been vertices. And finally I misstated Kirchhoff's Theorem: it should be n-1 non-zero eigenvalues, obviously you only use the non-zero ones! (See the short sketch at the end of this post.) There are probably more, and if I decide to replace these videos in the future, I will just replace this paragraph as well.

On a lighter note, when I was creating my "Graphs are Everywhere" images and sharing them with my friend Wes, I created this one for fun:

Finally I'd like to thank our meetup sponsor Neo Technology for the refreshments. The part where we thanked them at the meetup didn't get recorded.

[Video: Part 1 and Part 2]
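As an aside on the Kirchhoff correction above, here is a short Python sketch (an illustrative addition, not part of the original talk) of the matrix-tree theorem: for a connected graph on n vertices, the number of spanning trees equals the product of the n-1 non-zero Laplacian eigenvalues divided by n.

import numpy as np

# Adjacency matrix of the triangle graph K3, which has exactly 3 spanning trees.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

L = np.diag(A.sum(axis=0)) - A       # graph Laplacian
eigenvalues = np.linalg.eigvalsh(L)  # sorted ascending; the first is ~0

n = A.shape[0]
# Product of the n-1 non-zero eigenvalues, divided by n.
spanning_trees = np.prod(eigenvalues[1:]) / n

print(round(spanning_trees))  # 3

For K3 the Laplacian eigenvalues are 0, 3, 3, so the count is (3 x 3) / 3 = 3, matching the three spanning trees you get by deleting any one of the triangle's edges.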
{"url":"http://www.elegantcoding.com/2014/07/a-survey-of-graph-theory-and.html","timestamp":"2024-11-11T00:44:59Z","content_type":"application/xhtml+xml","content_length":"82085","record_id":"<urn:uuid:6f9cf01e-e526-47c7-a278-956ae7d4a157>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00666.warc.gz"}
Global Existence of Weak Solutions and Concentration Phenomena for Multi-Dimensional Compressible Viscoelastic Fluids

Project: Research

As a fundamental model in complex fluids and plasma, the Hookean viscoelastic fluid is the study of a coupling system between the Navier-Stokes equations and a transport equation for the deformation gradient. The original study of viscoelastic fluids dates back to experiments by the physicists Maxwell, Boltzmann, and Kelvin in the nineteenth century. Despite its importance in physics, the global-in-time well-posedness theories of Hookean viscoelastic fluids with large initial data remain challenging open problems in mathematics, even though a state-of-the-art small perturbation theory near a constant equilibrium was successfully formulated during the last decade.

In this research proposal we intend to push forward the mathematical understanding of Hookean viscoelastic fluids. The first topic in this proposal focuses on the global-in-time well-posedness of weak solutions of Hookean compressible viscoelastic fluids in multi-dimensional spaces R^n with large initial data as the adiabatic constant. This proposed problem is difficult because of the strong coupling between the fluid (or the pressure) and the elasticity (or the deformation gradient), the lack of a priori estimates, and the non-compatibility between the weak convergence and the nonlinearity. Precisely, the lack of a priori estimates makes the compactness of the pressure and the elastic stress hard to establish, and the non-compatibility between the weak convergence and the nonlinearity implies that the possible oscillation and concentration phenomena of approximating solutions will be two main difficulties. To overcome these two difficulties, an improvement on the integrability of both the deformation gradient and the density is needed, and a compactness argument to deal with the quadratic term is required. The success of this proposed problem needs a careful analysis of concentration and oscillation phenomena, and will in turn shed light on a better understanding of the weak convergence method.

The second topic in this proposal aims to address the concentration phenomena of the kinetic energy of multi-dimensional Hookean compressible viscoelastic flows as the adiabatic constant. It is intended to construct a concentration set and study its Hausdorff dimension. The Hausdorff dimension is expected to depend on the adiabatic coefficient, and outside the concentration set the weak stability of compressible viscoelastic fluids holds true. The success of this proposed problem relies on a better understanding of the relation between the maximal function of the density and the concentration of the kinetic energy.

Project number: 9042537
Grant type: GRF
Status: Finished
Effective start/end date: 1/01/18 → 17/05/22
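For orientation, the Hookean compressible viscoelastic system referred to above is commonly written as follows (this display is added for context and follows the standard formulation in the literature; it does not appear in the original abstract). Here ρ denotes the density, u the velocity field, F the deformation gradient, P(ρ) the pressure, and μ, λ the viscosity coefficients:

\begin{aligned}
&\partial_t \rho + \nabla\cdot(\rho u) = 0, \\
&\partial_t(\rho u) + \nabla\cdot(\rho u \otimes u) + \nabla P(\rho)
   = \mu \Delta u + (\mu + \lambda)\,\nabla(\nabla\cdot u) + \nabla\cdot\left(\rho F F^{\top}\right), \\
&\partial_t F + u\cdot\nabla F = (\nabla u)\, F.
\end{aligned}

The first equation is conservation of mass, the second is the momentum balance in which the elastic stress ρFFᵀ couples the fluid to the elasticity, and the third transports the deformation gradient along the flow.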
{"url":"https://scholars.cityu.edu.hk/en/projects/global-existence-of-weak-solutions-and-concentration-phenomena-for-multidimensional-compressible-viscoelastic-fluids(45de2028-dc53-4beb-a5df-4c6a8203ebcb).html","timestamp":"2024-11-03T17:04:06Z","content_type":"text/html","content_length":"30399","record_id":"<urn:uuid:a0153f5e-6442-4548-89e2-c5fb0f39f997>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00411.warc.gz"}
Inverse versus Direct Cascades in Turbulent Advection

A model of scalar turbulent advection in compressible flow is analytically investigated. It is shown that, depending on the dimensionality d of space and the degree of compressibility of the smooth advecting velocity field, the cascade of the scalar is direct or inverse. If d > 4 the cascade is always direct. For a small enough degree of compressibility, the cascade is direct again. Otherwise it is inverse; i.e., very large scales are excited. The dynamical hint for the direction of the cascade is the sign of the Lyapunov exponent for particle separation. Positive Lyapunov exponents are associated with a direct cascade and Gaussianity at small scales. Negative Lyapunov exponents lead to an inverse cascade, Gaussianity at large scales, and strong intermittency at small scales.

Physical Review Letters
Pub Date: January 1998
Subjects: Nonlinear Sciences - Chaotic Dynamics; Condensed Matter; High Energy Physics - Theory
Comments: 4 pages, RevTex 3.0
{"url":"https://ui.adsabs.harvard.edu/abs/1998PhRvL..80..512C","timestamp":"2024-11-10T21:47:02Z","content_type":"text/html","content_length":"38237","record_id":"<urn:uuid:73329027-90be-4541-b694-6d74f1a4485a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00774.warc.gz"}
area of rectangles and parallelograms worksheets

Properties of Quadrilaterals
Quadrilaterals and Transformations Review
Parallelograms and Rectangles
Circles, Squares, Rectangles, Triangles, Parallelograms, Tri
Area of triangles, Parallelograms, Rectangles, and trapezoid
Quadrilaterals and Their Properties
Area of Rectangles, Parallelograms, Circles, and Triangles
Rhombuses, Rectangles, and Squares
Some Practice for the Polygon Test
Area and Perimeter of Squares, Rectangles, and Parallelograms
Area of Parallelograms and Rectangles
Area Practice (Parallelograms, Rectangles, Triangles)
Parallelograms and Rectangles
Special Quadrilaterals Quiz Review
Properties of Quadrilaterals
Area of Rectangles and Parallelograms
SSLC MATHS online quiz 9(EM)
Area of Rectangles and Parallelograms
Area of Parallelograms, trapezoids, and Triangles Review

Explore printable area of rectangles and parallelograms worksheets

Area of rectangles and parallelograms worksheets are essential tools for teachers who want to help their students master the fundamentals of Math and Geometry. These worksheets provide a variety of problems that challenge students to apply their knowledge of area formulas, allowing them to practice and reinforce their understanding of these important concepts. By incorporating these worksheets into their lesson plans, teachers can ensure that their students are well-prepared for more advanced topics in Geometry. Additionally, the use of area of rectangles and parallelograms worksheets can help teachers assess their students' progress and identify any areas where additional instruction or practice may be needed. With these valuable resources, teachers can create engaging and effective lessons that will set their students up for success in Math and Geometry.

Quizizz is an excellent platform for teachers to utilize in conjunction with area of rectangles and parallelograms worksheets, as it offers a wide range of interactive quizzes and activities that can supplement traditional worksheets. By incorporating Quizizz into their lessons, teachers can create a more dynamic and engaging learning experience for their students, helping them to better understand and retain the concepts being taught. Furthermore, Quizizz provides valuable data and insights into student performance, allowing teachers to easily track progress and identify areas where additional support may be needed. In addition to its offerings related to Math and Geometry, Quizizz also covers a variety of other subjects and grade levels, making it a versatile and valuable resource for teachers across all disciplines.
{"url":"https://quizizz.com/en-in/area-of-rectangles-and-parallelograms-worksheets","timestamp":"2024-11-13T22:21:22Z","content_type":"text/html","content_length":"184518","record_id":"<urn:uuid:f1ceeed8-7d69-49c8-b0dc-f7cad4403800>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00135.warc.gz"}
SPEJBL – THE BIPED WALKING ROBOT

Marek Peca, Michal Sojka, Zdeněk Hanzálek
Department of Control Engineering
Faculty of Electrical Engineering
Czech Technical University in Prague, Czech Republic

Abstract: Construction of a biped walking robot, its hardware, basic software and control design is presented. The primary goal achieved is static walking with a non-instantaneous double-support phase and a fixed trajectory in joint coordinates. The robot, with two legs and no upper body, is capable of walking with a fixed, manually created, static trajectory using a simple SISO proportional controller, yet it is extendable to use MIMO controllers, a flexible trajectory, and dynamic gait. Distributed servo motor control over a CAN fieldbus is used. Important points in construction and kinematics, motor current cascaded control and fieldbus timing are emphasized. The project is open and full documentation is available.

Keywords: biped robot, feedback control, static gait, cascaded controller, inverse kinematics

Fig. 1. CAD model and photograph of the built robot

1. INTRODUCTION

Our main objective was to build a real robot, not only a simulation by a mathematical model. The simplest kind of bipedal walking, a fixed-trajectory (see Sec. 2.2.3) static (2.2.2) gait with local feedback (SISO approach, 5.1), was chosen as a starting point. To achieve such operation, a manually created trajectory (2.2.4), an imperfect mechanical construction and an empirically tuned servo motor controller (5.2.1) are sufficient. This approach is common to low-cost, commonly available humanoid robots from the toy and modeller market, like Robonova, Graupner, etc.

On the other hand, as a design objective, we wanted to open further development beyond the limits of such a simple approach. We have used a fieldbus distributed architecture (4.1.1) and our own servo motor electronics design (4.1.2) to enable MIMO control (5.1) and extension of the system sensorics. We have followed a stiff and multiaxial mechanical construction (Sec. 3) to improve feedback performance and allow effective inverse kinematics computation (2.2.5).

Our main source of inspiration was the robot YaBiRo (Albero et al., 2005). The mechanical design of the robot Pino (Yamasaki et al., 2000) has also been consulted during the mechanical construction phase.

Our project is open: all the mechanical (CAD) and hardware construction plans, software source codes, documentation and experimental results (including a short video) are being published on the project Internet web site [1] as "open source".

Bipedal gait itself is discussed in Sec. 2, mechanical construction with references to other designs in Sec. 3, hardware and software system architecture in Sec. 4, controller design in Sec.
5, and experimental results in Sec. 6.

[1] http://dce.felk.cvut.cz/spejbl/, mirrored at

2. BIPEDAL GAIT

2.1 Definition

Bipedal walking is a motion where at least one of the feet stays on the floor at any time. The phase when the robot stays on only one foot is called the single-support phase, and the other, when the robot stays on both legs, is called the double-support phase. The question is to specify what it means to stay. For example, we do not accept gliding as a valid element of walking. The foot should be fixed to the floor by foot-floor friction forces, eliminating reactional forces induced by the motion of the robot.

There are two distinct bipedal gait models: walking with an instantaneous and with a non-instantaneous double-support phase. The instantaneous one is a "limit case" of walking, being close to running (a motion where at most one of the feet stays on the floor at any point). It also implies a dynamic gait (see 2.2.2), because the transition of the centre of gravity (COG) between footprints cannot be done in an infinitesimally short time. This approach is the base idea of many theoretical biped models (Chevallereau et al., 2004). However, no real biped using this approach and able to walk without external help [2] is known to us at the moment.

Our concept is to use a non-instantaneous double-support phase, although the instantaneous one is (at least theoretically) possible with our robot. The non-instantaneous one enables a static gait (2.2.2) and will be assumed further.

[2] The robot Rabbit is able to walk, but it requires a guideline to keep lateral balance (lateral motion has not been solved intentionally by its team).

2.2 Trajectory of gait

2.2.1. Definition  Let us define what we mean by the trajectory. During the motion of the robot, each part (rigid body) is moved by forces along some path in space, following the laws of mechanics.

The robot appears as a serial manipulator with 12 revolute joints (see Sec. 3). It has 12 degrees of freedom (DOF) in these joints, plus an additional 6 DOF, since it is free in space, i.e. it has no "grounded" link. It means that the trajectories (in the world, global sense) of all points of the robot are fully determined by a vector function of dimension 18. However, we work with a somewhat simpler trajectory, which describes only the relative positioning of the robot links to each other. It carries no information about the global robot position, so it stands for 12 DOF. This is the trajectory, and it is defined by the displacements (angles) at the n = 12 joints, q(t) : ⟨0, t_F⟩ → Q, (q_1, q_2, ..., q_n) ∈ Q, where t is time, t_F is the walk duration, q_k is the k-th joint angle and Q is the space of all possible joint configurations. It is a trajectory in the joint coordinate space Q.

It must be kept in mind that this 12 DOF trajectory description is not complete, because the actual position of the robot depends also on the impact of external forces. E.g.
Eg. a robot walking on the floor is pushed forward by the frictional reaction force of the floor, but if the same trajectory is executed while the robot is hanging above the floor, no forward motion will appear.

This trajectory can be a sufficient description in the case when all external forces have known properties (floor friction, sense of gravity) and are homogeneous (the floor is planar, but can be inclined). This will be assumed in the further text, where a horizontal floor is also mostly assumed. The influence of any differences, such as roughness or inclination of the real floor, is regarded as external disturbance forces.

2.2.2. Static vs. dynamic gait

The distinction between static and dynamic kinds of trajectory is a general issue known from mobile robotics (Choset et al., 2005). The trajectory can be decomposed into a path q(s) : ⟨0, 1⟩ → Q, a continuous function giving a curve in Q parametrized by s, and a time scaling s(t) : ⟨0, tF⟩ → ⟨0, 1⟩, ṡ ≥ 0, so that the trajectory is q(t) = q(s(t)).

Given a path q(s), several admissible time scalings may or may not exist. A time scaling is admissible if there exists a function of applied torques u(t) = (u1, u2, ..., un) satisfying the actuator limits |uk(t)| ≤ uk,max, which produces the trajectory q(t). Assuming the path, the state space of the system is reduced to the scalar position s and velocity ṡ. The actuator limits together with the system dynamics give a constraint on the acceleration, L(s, ṡ) ≤ s̈ ≤ U(s, ṡ). A curve in (s, ṡ) satisfying this condition gives an admissible time scaling for the given path.

The path is static if the curve ṡ = 0, s ∈ ⟨0, 1⟩ gives an admissible time scaling. It means the robot can stand in any position q(s) without falling, without any motion. The robot can follow this path arbitrarily slowly. At each point q(s), the robot satisfies a condition of statical stability. A trajectory scaled along this path by s(t) so that ṡ approaches zero is called a static trajectory. If a time scaling with ṡ = 0 is not admissible, but another one with ṡ > 0 is, the result is a dynamic trajectory.

If a static or dynamic trajectory together with external forces forms a gait (see 2.1), it is then called a static gait or dynamic gait, respectively. Further, a static gait will be assumed, although most of the rest can be applied to the dynamic gait as well.

Fig. 2. Flexible trajectory controller example (block diagram: the COG estimate is compared with the COG reference derived from the feet and pelvis trajectory reference; the balancing controller C2 produces a pelvis correction ∆x, ∆y, fed through inverse kinematics IK and positional controller C1 to the plant S, whose joint angles are measured).

2.2.3. Fixed vs. flexible trajectory

We distinguish between fixed and flexible trajectory control. In fixed trajectory control, q(t) is given and the system is controlled to follow this trajectory. The operation is reduced to the control of a dynamic system. In flexible trajectory control, q(t) is specified only partially, leaving room for ad hoc trajectory adaptation to the instantaneous situation by means of disturbance rejection. This flexibility effectively adds feedback to the system.
If the motors are already enclosed in a positional feedback loop, then the flexible trajectory generator is an outer feedback loop, forming together a cascaded controller.

Flexible trajectory control can provide better rejection of strong disturbances, such as a varying floor inclination or the consequences of a payload on the COG position. A simple example of such a controller is in Fig. 2. The trajectory is given in Cartesian coordinates of the feet and pelvis. The real position of the COG is estimated from sensors (inclinometry, feet pressure or motor current). It is compared with the reference position of the COG, and the error is fed back by a controller to displace the pelvis by some correction ∆x, ∆y. The pelvis then moves aside to balance the disturbance. To control by displacement in coordinates other than joint coordinates, a real-time computation of inverse kinematics is needed (see 2.2.5).

The most complicated way of flexible trajectory adaptation is to change the trajectory completely, eg. to place the feet intentionally to prevent falling, etc. No implementations of such bipeds are known to us. An implementation of this behaviour for other walking robots, quadrupeds, has been developed eg. by (Boston Dynamics, n.d.).

Further, only fixed trajectory control is assumed.

2.2.4. Trajectory creation

The trajectory of a static gait can be divided into two parts: displacement of the COG from above one footprint to the other, and displacement of the disengaged foot to another place. Then the same but mirrored sequence follows, with the feet exchanged.

Trajectory creation can be done manually by "animation". Several points q(sk), k = 1 ... m are sampled, ie. the robot is "frozen" by feedback at q(sk) = const., and if this position is statically stable (see 2.2.2), it can become a point of the whole trajectory. Then an interpolation of the sampled points in joint coordinates is done. There are two factors which can make the resulting trajectory flawed:

• sampling is too sparse – points between the interpolated ones are not guaranteed to satisfy the condition of static stability;
• speed is too high – the validity of a static trajectory is guaranteed only for infinitesimal speeds and can be broken at higher speed by robot inertia.

A helpful tool for trajectory creation is an inverse kinematics solver, see 2.2.5. The inverse kinematics can be used together with manual animation to set particular samples while maintaining user constraints, eg. feet to lie in planes parallel to the pelvis and to each other, feet to be parallel to each other, etc.

2.2.5. Inverse kinematics

The inverse kinematics allows to calculate the joint angles (q1, q2, ..., qn) from given relative coordinates between the pelvis and the feet. Our robot, when in the single-support phase, can be regarded as consisting of two 6 revolute joint (6R) serial manipulators 3 . The pelvis forms the base link of these manipulators, and the feet form their two end effectors.

Given a position of the foot relative to the pelvis in Cartesian coordinates and some representation of 3D rotation (eg. Euler angles), the inverse kinematic solver calculates the 6 joint angles to reach this position.
In the case of unconstrained joints (free 360° motion), there are up to 2³ = 8 solutions (2 knee configurations and ambiguity of the hip and ankle angles).

For a 6 joint serial manipulator containing a 3-axis joint, the inverse kinematics can always be solved analytically (see (Slotine and Asada, 1992)). The implementation is fast and numerically well-conditioned. The complete set of solutions is calculated using the following floating point math function calls: 5× atan2 4 , 5× sin and cos, 2× sqrt 5 , 1× acos, 25× multiplication, several additions or subtractions, no division. One particular solution is computed, then all 7 remaining solutions are obtained by additions and subtractions. In case the position is out of range, the solution is clamped to a straight leg pointing towards the desired position.

The set of 8 solutions is pruned by application of the joint angle constraints, then an arbitrary solution is chosen. In a more advanced approach, a collision detection could be used to prune the set. The implemented algorithm works in constant and reasonably short time, which allows its use in real-time, such as in controller feedback, as shown in Fig. 2.

3 This is exact during the single-support phase; however, during the double-support phase, the robot becomes a parallel manipulator.
4 generalized arctangent, see C language math library
5 square root
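For illustration, the following minimal sketch (ours, not the robot's actual 6R solver) shows the law-of-cosines core of such an analytic solution for a 2-link leg in the sagittal plane; the link lengths and target coordinates are made-up values. It also mimics the clamping to a straight leg for out-of-range targets described above.

import math

def leg_ik_2r(x, y, l1, l2):
    # Planar 2R inverse kinematics: thigh l1 and shank l2 reaching (x, y).
    d2 = x*x + y*y
    c = (d2 - l1*l1 - l2*l2) / (2.0*l1*l2)
    c = max(-1.0, min(1.0, c))  # out-of-range target -> straight leg (clamp)
    knee = math.acos(c)         # one of the two knee (elbow) configurations
    hip = math.atan2(y, x) - math.atan2(l2*math.sin(knee), l1 + l2*math.cos(knee))
    return hip, knee

print(leg_ik_2r(0.15, -0.25, 0.20, 0.20))  # hip and knee angles in radians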
3. MECHANICAL CONSTRUCTION

The mechanical construction of the biped robot is inspired by the composition and motion capabilities of human legs (see Sec. 1). The robot is composed of two legs, each with 6 DOF. No body above the pelvis is present, nor any additional DOF. The inspiration for this kind of biped, unable to balance by swinging an upper body, was taken mainly from the robot YaBiRo (Albero et al., 2005).

Each leg is composed of a large, stable foot. It enables a static gait, maintaining the COG above the footprint of the particular leg or in between them. This is in contrast to the point-like feet of walking multipeds (a quadruped (Ridderström et al., 2000), or the very common hexapods) or exclusively dynamically walking bipeds (Chevallereau et al., 2004). Although the large foot is present, a dynamic gait is also possible.

Each foot is linked to a shank by a 2-axis ankle joint, the shank is linked to a thigh by a single-axis knee joint, and the thigh is linked to a pelvis plate, linking both legs, by a 3-axis hip joint. The multiaxis joints (ankle and hip) are composed of multiple single-axis joints with axes intersecting in one point and perpendicular to each other. The multiaxis joints are present to simplify the kinematic description of the robot, in particular to enable a qualitatively simpler inverse kinematics computation (see 2.2.5). This is not respected in many modeller biped designs, nor in the robot Pino (Yamasaki et al., 2000).

Each axis is actuated by the same type of servo motor. The motor is a modeller servo motor HSR-5995TG by the HiTec company, with the original electronics replaced by our own design (see 4.1.2). This servo motor was chosen because of its compact dimensions and its ability to deliver a torque of 2.4 Nm. The modeller servo motor case served as a base for the mechanical construction concept. All axes are actuated directly by a motor shaft; no rods have been used, in order to eliminate actuator springiness. This approach is common to modeller biped designs and Pino (Yamasaki et al., 2000) and differs from YaBiRo (Albero et al., 2005), which uses rods.

Most of the mechanical parts have been designed to be cut by a CNC laser from duraluminium sheets (AlCu4Mg) of thicknesses 2 mm, 5 mm and 10 mm. Other parts are: brass distance spacers with inner M3 thread, bearings DIN 625, special tiny spindles, ankle blocks and metric screws M3, M2.5, M2 and M1.6. The overall weight of the robot is about 2.5 kg.

Fig. 3. Network block diagram (the main control computer, a time-trigger node and 12 SPEJBL-ARM servo motor boards share the CAN bus; additional sensors can be attached).

4. ARCHITECTURE

4.1 Hardware

Each of the 12 servo motors is equipped with a computer board "SPEJBL-ARM", containing a 32-bit microcontroller Philips LPC2119 with an ARM CPU core and with a CAN (Controller Area Network) bus interface led out. The board was designed specifically to fit onto a servo motor case in the restricted space (37 × 28 × 6.5 mm including connectors). The CPU runs at a clock speed of up to 60 MHz; the ARM7TDMI core supports 32-bit computation in integer arithmetics. This gives reasonably strong computing power for the implementation of local motor controllers. Although it may seem to be overkill to use such a strong CPU for each motor, it must be noted that the commercial price of an 8-bit microcontroller equipped with CAN was equal to the price of the LPC2119 at the time of design.

4.1.1. Network

The servo motor SPEJBL-ARM boards form 12 nodes on the CAN bus network (Fig. 3), representing actuators as well as position sensors at every DOF. Additional sensors, such as accelerometers or feet pressure sensors, could be appended to the system. There is a main control computer, connected to the CAN bus as well, gathering measured position data from the servo motor boards (eventually from other sensors) and distributing commands to the servo motor boards for actuation.
Currently, an IBM PC is used as the control computer; however, it is ready to be replaced by a more handy RISC computer board.

The robot strongly benefits from the usage of a fieldbus, mainly because

• the measured analogue values are not led over long distances in the neighbourhood of large actuation pulse currents;
• the simple overall cabling, consisting of only 4 wires (2 for power and 2 for the CAN bus), allows great freedom of motion without considerable collisions between mechanical parts and cabling;
• flexibility in the addition of new sensors;
• flexibility in the choice of the control computer system.

The CAN bus has been chosen because of its availability, reasonable component prices, hardware support in microcontrollers and great software support by the OCERA LinCAN driver (Píša and Vacek, 2003).

One additional SPEJBL-ARM board has been used to serve as a time trigger for the CAN bus communication, being a bus master for all obeying nodes, including the control computer. This role could be played by any other node present which has sufficiently exact timing. As the main control computer does not provide this functionality, it now has to fulfill much weaker hard real-time requirements. The time-trigger board generates a "tick" message at a frequency of 250 Hz, which serves as a global synchronization clock instant to allow synchronous timing in the sense of discrete-time control.

The robot is powered by a cable, because of its high power requirements (it is powered from a 7 V, 10 A supply). Powering from lithium-based accumulator batteries is theoretically possible, but we have not focused on fully autonomous operation. Accordingly, the control computer is a stationary system and is connected to the robot body by a CAN bus cable.

4.1.2. Servo motor electronics

The original modeller servo electronics did not satisfy our needs, especially due to

• the lack of a suitable interface (CAN or any other multinode network);
• the impossibility of implementing our own motor controller;
• design flaws regarding current limits (power transistors caught fire under heavy loads of the motor shaft),

so we have replaced it with our own design. The electric motor is a DC brushed, coreless one, with unknown parameters. The operating voltage, according to the HiTec datasheet, ranges between 4.8–7.4 V. The position (angle) is sensed by a potentiometer (at the gear output). The motor is actuated from the computer board by a PWM (pulse-width modulation) switched hard voltage source, and the actual current and position are measured by a 10-bit ADC (analog to digital converter) contained on the LPC2119 chip. The frequency of the PWM has been chosen to lie outside of the human acoustic band, at 20 kHz.

The servo motor electronics has been designed with hardware simplicity in mind, leaving most of the operation up to software.

Fig. 4. Traffic during a CAN bus cycle (a tick message at tk starts the period; the measured values x1, x2, ..., x12 are gathered, the computation time follows within the 4 ms period, and the actuation messages carrying xw1 ... xw12 are sent before tk+1).
The PWM H-bridge driver dead time is software defined (4 wire connection). The sensed current, suffering from a high content of high-frequency spurious signals, is filtered by a simple first-order RC low pass, followed by oversampling and averaging.

The current sensing circuitry is a fundamental part of the motor control, as it allows efficient software-defined current limitation using a cascaded controller with software saturation; besides, it allows estimation of the external load torque, which may be used to estimate the orientation of gravitation.

4.2 Software

4.2.1. Timing and network

The SPEJBL-ARM boards, controlling the servo motors, run in a system-less (without an operating system), interrupt-driven mode. They execute fast local control loops, in particular the current control loop (see 5.2.2), position and current sensing, and the CAN communication with the main control computer. Currently, they perform current sampling at 120 kHz, and position sampling and the current feedback loop at 20 kHz, which is also the base frequency of the PWM. At the moment, the local loop clock runs asynchronously to the global CAN bus clock (time-trigger). This causes a sampling jitter, which is considered negligible, as it is, in the case of the 20 kHz to 250 Hz ratio, less than or equal to 1.25% of the longer period.

The main control computer runs the Linux 2.6 operating system, used in the experiments with no real-time extensions. Thanks to the external time-trigger and the one-period (CAN period, ie. 4 ms) delay between sampling and actuation, only relatively soft real-time requirements are imposed on the main control computer. For jitter-less operation, it is sufficient if the actuation values are computed and sent between the last measurement of the k-th period and the arrival of the time-trigger message of the (k + 1)-th period, which is approximately 68% of the CAN period, ie. about 2.72 ms.

The traffic on the CAN bus during a CAN time-trigger period is shown in Figure 4. The time-trigger board periodically sends "tick" messages, containing an 8-bit timestamp to allow overrun detection. At the instant of tick message arrival, the servo motor potentiometers (eventually also other sensors) are sampled (or the most recently sensed values are taken). The gathered values are sent over the CAN bus to the main control computer; each device naturally uses its own CAN identifier. Then the main control computer determines the next values for actuation. They may or may not depend on the gathered values, depending on whether the controller follows the MIMO or SISO approach, respectively (see 5.1). The actuation values are sent to the servo motor computer boards. To save CAN bus traffic, there are fewer actuation messages than motors, as one message carries values for several motors (currently there are 4 16-bit values in 1 actuation message).

The CAN bus is operated at the maximum transfer speed of 1 Mb/s. The period of the time-trigger has been determined from the throughput at a reasonable bus load. To estimate the maximum network load, the case of total bit-stuffing (6/5 of the length of the stuffable area) has been considered. Given 1 time-trigger 1-byte message, 12 messages, one from each servo motor board, 2 bytes each (position-only sensing), and 3 actuation messages of 8 bytes, the CAN bus load at the 4 ms time-trigger period is up to 33.2%; if the servo motor sensing messages are extended to 6 bytes (including information about voltage, current and position), the load is up to 44.6%.
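The quoted load figures can be checked with a short back-of-the-envelope computation. The sketch below is ours and assumes standard 11-bit-identifier CAN frames with 34 + 8n stuffable bits inflated by the worst-case factor 6/5, plus 13 non-stuffable bits (CRC delimiter, ACK, EOF and intermission); with this accounting it reproduces the ~33% figure, and the small deviation from the quoted 44.6% presumably stems from slightly different frame overhead assumptions.

def worst_case_frame_bits(n_bytes):
    # Standard CAN 2.0A frame: stuffable area (34 + 8n bits) times 6/5,
    # plus 13 bits that are never stuffed (CRC delim., ACK, EOF, intermission).
    return (6.0/5.0) * (34 + 8*n_bytes) + 13

def bus_load(sensing_bytes, period_s=4e-3, bitrate=1e6):
    bits = (worst_case_frame_bits(1)                     # 1-byte tick message
            + 12 * worst_case_frame_bits(sensing_bytes)  # 12 servo boards
            + 3 * worst_case_frame_bits(8))              # 3 actuation messages
    return bits / (period_s * bitrate)

print("%.1f%%" % (100*bus_load(2)))  # position only: ~33%
print("%.1f%%" % (100*bus_load(6)))  # voltage, current and position: ~45%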
The SPEJBL-ARM board is equipped with a CAN bootloader firmware. After the robot is switched on, its SPEJBL-ARM boards are loaded on-the-fly with the controller software from the main control computer.

4.2.2. User software

The user software, executed on the main control computer, is used to control the robot and to create gait trajectories. The first version was a simple batch program with textual input/output; in the present version, the user software is based on a combination of a graphical user interface (GUI) with a text editor of the trajectory file. The software allows to:

• play a trajectory from a text file, using linear interpolation between points in the joint coordinate space, allowing to change the playback speed and to pause the playback;
• record a trajectory or its part in real-time, allowing fully manual "animation" of the robot like a marionette, when the motors are powered off and only the position is sensed;
• position an arbitrary joint by mouse, keyboard tap or keyboard number input;
• freeze an arbitrary subset of joint motors, allowing manual positioning of the rest of them.

With the SISO control approach (5.1), the user software acts as a sequencer of trajectory values. With the MIMO approach, the controller is part of the user software, operating with a base sampling frequency equal to the CAN bus time-trigger period (see 5.2.2).

Fig. 5. Controller topologies: a) SISO, b) MIMO, c) MIMO-SISO cascaded (T – trajectory generator, producing xwk = qk; C, C1 ... Cn – controllers; M1 ... Mn – DC motors).

The software will be integrated with the inverse kinematic solver (see 2.2.5) and an OpenGL visualization, allowing a smooth transition between positioning in joint and world coordinates.

5. CONTROL DESIGN

The robot is regarded as a mechanical system consisting of rigid bodies; the springiness of the motor shafts is neglected. According to the Lagrange formulation, a state vector contains 12 (angle) positions and 12 (angular) velocities, giving a system of order 24. The input forces to the system are internal: 12 motor torques and joint friction, and external: floor reaction, friction, gravitation and disturbances. The output is the 12 joint positions xk, desired to approach the reference trajectory xwk = qk.

5.1 SISO vs. MIMO approach

There are two base concepts of a biped trajectory controller. The simpler one is composed of multiple SISO (single-input, single-output) controllers, one controller per servo motor. Each motor is controlled independently by its dedicated controller (Fig. 5a).
A disadvantage of this approach is that the mutual dynamics between robot parts is ignored; the controllers are unable to "cooperate". The advantage is its simplicity.

In the SISO approach, each controller can be tuned separately, depending on its position (the joint where it acts); also, an online parameter change could be performed during distinct gait phases. However, for maximum simplicity, all controllers can be identical, robust enough to achieve stability for all joints and gait phases, at the cost of a lower control performance. Typically, a PID or a cascaded PID controller is a suitable choice for this kind of controller.

On the other hand, a single MIMO (multiple-input, multiple-output) controller can be employed instead (Fig. 5b). The command signal is a vector whose components are fed to all motors. All measured positions are returned to the controller as the measured system output, and could be accompanied by measurements from other sensors (gravitational inclinometry, feet pressure sensors) to refine the position estimation in the MIMO controller. The command signal can in theory set directly the PWM (voltage) value of the motor. In practice, at least a current controller should enclose the motor to protect it against overcurrent destruction, forming together a MIMO-SISO cascaded control system (Fig. 5c).

The advantage of the MIMO approach is that it can reflect the interacting system dynamics and that it allows a sensor fusion of the servo motor potentiometers combined with other sensors. Controllers suitable for this design are eg. LQG, a general observer-feedback controller designed eg. by pole placement, or maybe also a natively nonlinear controller (designed eg. by exact linearization).

5.2 Controllers implemented

Fig. 6. Block diagram of the type P controller (reference xw through ZOH at 250 Hz and CAN delay z⁻¹, controller C outputting u at 1 kHz, plant S).

5.2.1. SISO, simple type P

The first controller used in our robot was the simplest type P (proportional only) controller, taking the position as the system output and the voltage (PWM) as the command variable, see Fig. 6. The plant S consists of a DC motor linked to a rigid mass, with output position (angle) x, controlled by C outputting voltage u. The sample rate transition is done by a zero-order hold (ZOH); the inherent one-sample delay z⁻¹ of the CAN bus traffic lies outside of the loop.

A DC motor transfer function S(s) = K/(s(1 + sTm)) is assumed; the time constant has been identified, with no mass linked to the motor, as Tm = 0.014 s. The feedback loop is closed locally in the software of the servo motor control board. The sampling frequency of this loop has been chosen as fs1 = 1 kHz. The CAN bus is used only to deliver the reference value xw to the controller, and its sampling frequency is fs2 = 250 Hz.

5.2.2. SISO, cascaded PID-PI

The second controller has been designed to enable current limitation in the servo motors and to close the positional feedback through the CAN bus, allowing SISO to later be replaced by MIMO in the control design (leaving the inner current controller loops in all servo motors).

Fig. 7. Block diagram of the cascaded PID-PI controller (outer positional loop: xw through ZOH at 250 Hz and CAN delay z⁻¹ into controller C2 producing the current reference iw; inner loop: controller C1 driving S1 and S2 at 20 kHz; the measured position x passes through decimation ↓D and a moving average MA).
A cascaded controller topology has been used, see Fig. 7. The DC motor model is split into two parts: S1, producing the winding current i as a response to the drive voltage (PWM) u, and S2, producing the position (angle) x as a response to the current i. The measured electric admittance of the DC motor is fairly close to its DC conductance up to the order of 10⁵ Hz, so the electric dynamics is neglected and the subsystem is modelled statically as S1(s) = K1. The motion equation of the motor gives S2(s) = K2/(s(1 + sTm)).

The inner loop contains the current controller C1 with limitation, implemented as a software saturation imposed on the reference current variable iw (iw is clamped so as not to exceed the maximum allowable absolute value of the current). A controller of type PI has been used. The integral action has been employed to assure rejection of a constant load disturbance. A derivative action is not present, because the dynamics of the inner loop is much faster than that of the mechanical part of the system.

C2 is a positional controller with the current iw as its command variable. A full PID controller has been used. The integral action allows to follow a ramp with zero steady-state error and to reject a constant load disturbance. The inner current loop runs at the sampling frequency fs1 = 20 kHz, synchronous with the PWM; the outer loop is closed through the CAN bus at the sampling frequency fs2 = 250 Hz. The inherent one-sample delay z⁻¹ of the CAN bus lies inside the outer loop. The sample rate transition is done by ZOH and decimation ↓D. The measured value x is filtered by a moving average filter (MA) to suppress noise. Both elements, z⁻¹ and MA, have a significant impact on the phase response of the controlled system.
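A minimal discrete-time sketch of this cascaded structure is given below; it is our illustration with made-up gains, not the firmware code, and for brevity both loops are stepped at a single rate instead of the real 20 kHz/250 Hz pair. The software saturation of iw implements the current limitation.

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integ = kp, ki, dt, 0.0
    def step(self, e):
        self.integ += self.ki * e * self.dt
        return self.kp * e + self.integ

class PID(PI):
    def __init__(self, kp, ki, kd, dt):
        PI.__init__(self, kp, ki, dt)
        self.kd, self.e_prev = kd, 0.0
    def step(self, e):
        d = (e - self.e_prev) / self.dt
        self.e_prev = e
        return PI.step(self, e) + self.kd * d

def cascade_step(c2, c1, xw, x, i_meas, i_max):
    iw = c2.step(xw - x)              # outer loop: position error -> current reference
    iw = max(-i_max, min(i_max, iw))  # software saturation: current limitation
    return c1.step(iw - i_meas)       # inner loop: current error -> PWM voltage u

c2, c1 = PID(5.0, 1.0, 0.05, 1e-3), PI(2.0, 50.0, 1e-3)  # illustrative gains
print(cascade_step(c2, c1, xw=0.5, x=0.0, i_meas=0.0, i_max=3.0))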
6. EXPERIMENT

6.1 Experiment with the SISO, type P controller

The reference and measured trajectories of the first walk, controlled by the SISO type P controller, are in Fig. 8; a detailed plot of the left lateral hip axis joint is in Fig. 9. At some joints, especially the lateral ankle and hip joints, a relatively large steady state error is observed. It is caused by a strong gravitation load.

Fig. 8. Trajectory of one step (reference xw and measured x for all 12 joints: transversal, lateral and sagittal hip, (sagittal) knee, sagittal and lateral ankle of the right and left leg, over t = 0–6 s).

Fig. 9. Detailed trajectory of the left lateral hip joint during one step (xL5 in degrees, from 15 down to −45, over t = 0–6 s).

The steady state error, together with the mechanical hysteresis of the hip joint in the first version of the robot mechanics, made trajectory creation difficult. If a measured position was to be "frozen", an error in the actual configuration occurred. The hysteresis was a severe fault and has been corrected by a bearing redesign. The steady state error of the plain P type controller is eliminated by the use of integral action.

Under inadvisable conditions (heavy loads, mechanical short circuit), the hard voltage drive of the motors caused an overcurrent, sometimes followed by a burn down of the power H-bridge stage. This led us to the implementation of the current limiting controller.

7. CONCLUSION

This paper presented a fully operational walking biped robot built in our laboratory. It is designed as an open experimental system to test various controllers. As a first step, it has been experimentally proven that a 12 DOF walk is possible using a simple proportional controller and a fixed, manually created trajectory. The ultimate goal is to develop more advanced MIMO controllers, and automatic and flexible trajectory generation.

We have been convinced of the importance of a good mechanical construction. Stiffness and minimal hysteresis are crucial for a successful use of feedback. The construction of complicated multiaxis joints allowed us to compute the inverse kinematics effectively. The use of a current limiting inner loop controller has been considered necessary.

Our current and future work includes: automatic trajectory generation based on COG computation, Lagrangian model identification, MIMO controller design, addition of pressure or inclination sensors, and flexible trajectory control.

The project is open; future collaboration on the control design and experiments is welcome. This work was supported by the Academy of Sciences of the Czech Republic under Project 1ET400750406 and by the Ministry of Industry and Trade under Project RPN/57/06.

REFERENCES

Albero, M., F. Blanes, G. Benet, P. Perez, J. E. Simo and J. Coronel (2005). Distributed real time architecture for small biped robot YaBiRo. In: IEEE Intl. Symposium on Computational Intelligence in Robotics and Automation.
Boston Dynamics, Inc. (n.d.). The "BigDog" robot, http://www.bdi.com/content/sec.php?section=bigdog.
Chevallereau, C., E. R. Westervelt and J. W. Grizzle (2004). Asymptotic stabilization of a five-link, four-actuator, planar bipedal runner. In: 43rd IEEE Conference on Decision and Control. Paradise Island, Bahamas.
Choset, H., K. M. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. E. Kavraki and S. Thrun (2005). Principles of robot motion: theory, algorithms, and implementation. MIT Press. London, England.
Píša, P. and F. Vacek (2003). Open source components for the CAN bus. In: Proceedings of the 5th Real-Time Linux Workshop. Valencia, Spain.
Ridderström, C., J. Ingvast, F. Hardarson, M. Gudmundsson, M. Hellgren, J. Wikander, T. Wadden and H. Rehbinder (2000). The basic design of the quadruped robot Warp1. In: Intl. Conference on Climbing and Walking Robots. Madrid, Spain.
Slotine, Jean-Jacques E. and H. Asada (1992). Robot Analysis and Control. John Wiley & Sons. New York, USA.
Yamasaki, F., T. Miyashita, T. Matsui and H. Kitano (2000). Pino the humanoid: A basic architecture. In: 4th Intl. Workshop on RoboCup. Melbourne, Australia.
{"url":"https://www.yumpu.com/en/document/view/11330548/spejbl-the-biped-walking-robot-marek-peca-michal-sojka-","timestamp":"2024-11-02T02:54:55Z","content_type":"text/html","content_length":"230011","record_id":"<urn:uuid:11a42837-2aa9-400d-add6-2fb31806f83b>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00283.warc.gz"}
NHESS – Natural Hazards and Earth System Sciences (Nat. Hazards Earth Syst. Sci.), ISSN 1684-9981, Copernicus Publications, Göttingen, Germany
doi:10.5194/nhess-4-83-2004

Assessing fracture occurrence using "weighted fracturing density": a step towards estimating rock instability hazard

Jaboyedoff M., Baillifard F., Philippossian F., Rouiller J.-D.
CREALP – Research Centre on Alpine Environment, Industrie 45, 1951 Sion, Switzerland
Quanterra, Ch. Tour-Grise 28, 1007 Lausanne, Switzerland
Institute of Geology and Paleontology, University of Lausanne, BFSH2, 1015 Lausanne, Switzerland
Bureau d'études géologiques, Le Botza, 1963 Vétroz, Switzerland

Published: 9 March 2004, Vol. 4, No. 1, pp. 83–93
Copyright: © 2004 M. Jaboyedoff et al. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Generic License. To view a copy of this licence, visit https://creativecommons.org/licenses/by-nc-sa/2.5/
This article is available from https://nhess.copernicus.org/articles/4/83/2004/nhess-4-83-2004.html; the full text is available as a PDF file from https://nhess.copernicus.org/articles/4/83/2004/nhess-4-83-2004.pdf

Based on the assumption that a major class of rock instabilities is created by discontinuities, a method is proposed to estimate the fracture density by means of a digital elevation model (DEM). By using the mean orientation, the mean spacing and the mean trace length of the discontinuity sets potentially involved in slope instabilities, together with a DEM, it is possible to calculate the mean number of discontinuities of a given set per cell of the DEM. This allows an estimation of the probability of the presence of at least one discontinuity in a given area, or simply in a topographic cell of the DEM. The analysis highlights sites potentially affected by rockslides within a region. Depending on the available data, the mean number can be calculated either by area or along a line parallel to the mean apparent spacing. The effective use of the probability of occurrence depends on the size of the discontinuities, because short and closely spaced discontinuities will have a 100% probability of occurrence at each favorable location. The a posteriori prediction of a recent rockslide is discussed as an example.
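As a simple illustration of the last step, a Poisson assumption on the number of discontinuities per cell turns the computed mean number into an occurrence probability; this is our illustrative choice, and the paper's actual estimator may differ.

import math

def p_at_least_one(mean_count):
    # P(at least one discontinuity) = 1 - P(zero) under Poisson-distributed counts
    return 1.0 - math.exp(-mean_count)

for lam in (0.1, 0.5, 1.0, 3.0):  # mean discontinuities per DEM cell
    print("mean %.1f -> P(>=1) = %.2f" % (lam, p_at_least_one(lam)))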
{"url":"https://nhess.copernicus.org/articles/4/83/2004/nhess-4-83-2004.xml","timestamp":"2024-11-10T14:12:13Z","content_type":"application/xml","content_length":"5729","record_id":"<urn:uuid:5eccff0c-ffd2-4d84-acc5-25a92e7806a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00413.warc.gz"}
Semiflexible Chains at Surfaces: Worm-Like Chains and beyond

Institut Charles Sadron, CNRS-UdS, 23 rue du Loess, BP 84047, 67034 Strasbourg cedex 2, France
Equipe BioPhysStat, Université de Lorraine, 1 boulevard Arago, 57070 Metz, France
Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Strasse 3, 79104 Freiburg, Germany
Department of Physics, Sejong University, Neundongro 209, Seoul 05006, Korea
Author to whom correspondence should be addressed. Current address: Purdue University, Weldon School of Biomedical Engineering, 206 S. Martin Jischke Drive, West Lafayette, IN 47907-2032, USA
Submission received: 7 June 2016 / Revised: 29 July 2016 / Accepted: 29 July 2016 / Published: 8 August 2016

We give an extended review of recent numerical and analytical studies on semiflexible chains near surfaces undertaken at Institut Charles Sadron (sometimes in collaboration), with a focus on static properties. The statistical physics of thin confined layers, strict two-dimensional (2D) layers and adsorption layers (both at equilibrium with the dilute bath and from irreversible chemisorption) is discussed for the well-known worm-like-chain (WLC) model. There is mounting evidence that biofilaments (except stable ds-DNA) are not fully described by the WLC model. A number of augmented models, like the (super)helical WLC model, the polymorphic model of microtubules (MT) and a model with (strongly) nonlinear flexural elasticity, are presented, and some aspects of their surface behavior are analyzed. In many cases, we use approaches different from those in our previous work, give additional results and try to adopt a more general point of view, in the hope of shedding some light on this complex field.

1. Introduction

Polymers are large molecules comprising one or a few repeating chemical motifs called monomers. Classically, monomers are linked by chemical bonds, but polymers can also consist of associated surfactants or proteins. They are widespread in nature and industry. Most technical applications use the unique rheological properties of polymer solutions or melts [ ]. In nature, specific proteins can associate into biofilaments [ ]. Microtubules (MT), associated from tubulins, serve as beams, tracks for trafficking [ ] or stirring rods [ ] inside the cell. Proteins like G-actin build filaments (F-actin) that can further associate with molecular motors into active gels [ ].

Polymer backbones can be considered as slender objects, which can be described by a few geometrical parameters, like preferred curvature and twist, and mechanical parameters, like their bending and twist modulus. In some instances, the detailed microscopic mechanism of flexibility [ ] nonetheless matters. Most polymers possess a (possibly weak) bending modulus $B$. Being neither perfectly flexible nor perfectly stiff rods, they can be termed semiflexible. One way to quantify flexibility is to evaluate thermal bending fluctuations, which merely depend on the persistence length $\ell = B/(k_B T)$. If the contour length of the polymer chain is much larger than $\ell$, the polymer is essentially flexible at large scale, albeit fragments (chain sections) shorter than $\ell$ appear stiff (rod-like). Chains with a persistence length comparable to the monomer size appear flexible in most instances. Sometimes, the rigidity is measured by the aspect ratio $\ell/b$ of the persistent segment.
Several examples are listed in Table 1.

Models for ideal polymer chains, which neglect monomer/monomer interactions and only retain chain connectivity, were devised early [ ]. The simplest model neglects any type of correlation between neighboring monomers and represents a configuration as a realization of a random walk (RW). It is a reasonable model for a fully-flexible chain without interactions and, to a first approximation, for melts and dense solutions, where interactions between monomers are screened. The possibility for two neighboring monomers to overlap, or for the RW to allow for immediate return, is not realistic. This is cured in a model with a fixed bond angle, entailing orientation correlations between neighboring bonds, but still ignoring interactions. This model has a continuous counterpart called the worm-like-chain (WLC) model, which penalizes curvature away from the straight state. Similarly, twist rigidity can be introduced to complement the WLC model.

Ideal chain models generically fail qualitatively for very long chains dissolved in good solvents, which are swollen by excluded volume repulsions. For simplicity, let us consider a chain of spherical monomers of diameter $b$ each. For RW configurations comprising $N$ steps of step size $b$, the internal concentration $\sim N/(N b^2)^{D/2}$, with $D$ the dimension of space, leads to an average number of contacts distant along the chain $\sim N^2/N^{D/2}$, diverging with $N$ for $D < 4$. Let us detail the interaction a bit further in 3D and 2D. In a virial expansion, the interaction associated with the second virial $a_2$ reads $a_2 b^{-D} N^{2 - D/2}$. Let us introduce the dimensionless excluded volume $a_2/b^D$, which is of order one in good solvent. Unless the solvent is only marginal ($a_2 \ll b^D$), even a short chain experiences an interaction energy larger than the thermal energy $k_B T$ and is swollen; the ideal RW configurations are not relevant. The interaction is obviously less important for somewhat stiffer chains, which are less likely to fold back and self-interact. At large scales, the long WLC can still be qualitatively seen as an RW composed of $\tilde{N} \sim N b/\ell$ straight steps of length $\sim \ell$, with an overall geometrical size $R \sim \tilde{N}^{1/2} \ell$. The interaction becomes $a_2 \tilde{N}^2/R^D$. The second virial increases with the rigidity, as $a_2 \sim \ell^2 b$ in 3D and $a_2 \sim \ell^2$ in 2D. Nonetheless, in 3D, the overall self-interaction inside the coil scales as $\sqrt{N (b/\ell)^3}$ and strongly decreases with rigidity. In 3D, chains are almost ideal up to $\sim (\ell/b)^3$ monomers, which may be less than 10 monomers for some synthetic polymers, but is very large for stiff biopolymers like ds-DNA. In 2D, the interaction $\sim N b/\ell$ is less affected by rigidity, and the excluded volume matters as soon as the chain becomes "flexible", $N b > \ell$. In summary, in 3D, there is a regime $\ell/b < N < (\ell/b)^3$, which is pretty extended for rather stiff chains, where chains are flexible and ideal. The WLC model has some practical relevance in 3D. Such a regime barely exists in 2D, as also documented in simulations by Hsu, Paul and Binder [ ]. The flexible regime of the WLC model is hence hardly pertinent to the isolated chain in 2D. However, the WLC model can describe the weakly fluctuating chain of contour length $S \lesssim \ell$.
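As a quick numeric illustration of these estimates (our sketch, with illustrative aspect ratios and all prefactors of order one dropped), the 3D coil self-interaction $\sqrt{N (b/\ell)^3}$ crosses unity, i.e., ideality is lost, only at $N \sim (\ell/b)^3$ monomers:

def self_interaction_3d(N, l_over_b):
    # dimensionless coil self-interaction in 3D, prefactors of order one dropped
    return (N / l_over_b**3) ** 0.5

for l_over_b in (2, 10, 50):
    n_star = l_over_b**3
    print("l/b = %3d: z(N=%d) = %.2f -> nearly ideal up to N ~ %d"
          % (l_over_b, n_star, self_interaction_3d(n_star, l_over_b), n_star))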
Let us now consider an isotropic concentrated solution of semiflexible chains at monomer concentration $c$. In the mean-field, the interaction energy per unit volume reads $a_2 \tilde{c}^2$ with $\tilde{c} \sim c\, b/\ell$ the concentration of persistent segments of length $\ell$. The persistence length cancels out in the interaction energy density. For a high enough concentration, nematic order is anticipated, and the isotropic description is expected to fail. This happens when the correlation length of concentration fluctuations is shorter than the persistence length $\ell$. For stiff chains, the isotropic description is more pertinent to concentrated solutions than to strict melts. Away from this (important) transition, rigidity is hence less important in concentrated solutions than for isolated coils in 3D. Long ago, it was proposed by Flory [ ] that in dense solutions, or in melts, the excluded volume is screened and the polymer configurations remain ideal. This encourages us to use mean-field theory to describe these systems. Recently, we have shown theoretically, and also by simulations for rather flexible chains, that screening is not complete and that the tangent/tangent correlation function shows a power-law decay at large distances along the chain [ ]. We will come back to this below.

Polyelectrolytes are an important class of water-soluble polymers. Polyelectrolytes are electrically charged along their backbone and are accompanied by oppositely-charged counterions. The energetic ground state corresponds to counterions condensed onto the backbone charges, which ensures that the electrical field vanishes everywhere. Unless the electrostatic interaction is very strong, as in media with low dielectric constants, the counterions are (at least partially) dispersed in the solvent, say water, to optimize the free energy. The solution also typically contains salt, which entails Debye screening of the electrostatic interaction. In the simplest model, polyelectrolytes are seen as polymers whose monomers interact via the 3D screened electrostatic Yukawa-like potential $q^2 \frac{l_B}{r} e^{-\kappa_D r}$, expressed in thermal units between charges counted in elementary charges, with $l_B$ the Bjerrum length (≈0.7 nm in water) and $\kappa_D$ the inverse Debye screening length. Upon bending of the polymer, the energy associated with the screened potential increases, and there is so-called electrostatic stiffening. Electrostatic stiffening turned out to be a delicate matter. Beyond the case of an intrinsically stiff chain additionally stiffened by electrostatics (say, as a perturbation), where the electrostatic contribution to the persistence length is inversely proportional to $\kappa_D^2$ [ ], electrostatic stiffening remains debated [ ]. We will not enter the debate here. Intrinsically stiff biofilaments, which we consider explicitly below, are usually charged and could be termed polyelectrolytes. Under physiological conditions, the electrostatic interaction is strongly screened and short ranged: electrostatic stiffening is negligible. The electrostatic excluded volume associated with the second virial of the screened potential typically does not swell the stiff biofilaments, except for long ds-DNA's. Note, however, that close to a solution/wall interface, the effective interaction between test charges is more complicated [ ] than the Yukawa potential given above.
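For orientation, the classical Odijk-Skolnick-Fixman (OSF) estimate of this perturbative electrostatic contribution reads $\ell_{\rm OSF} = l_B/(4 A^2 \kappa_D^2)$ for a distance $A$ between backbone charges; the snippet below evaluates it for made-up, roughly DNA-like parameters and is only meant to illustrate the $\kappa_D^{-2}$ dependence quoted above.

def osf_persistence_nm(l_B=0.7, A=0.17, debye=1.0):
    # OSF electrostatic persistence length (all lengths in nm):
    # l_OSF = l_B / (4 A^2 kappa^2) with kappa = 1/debye
    return l_B * debye**2 / (4.0 * A**2)

for debye in (0.3, 1.0, 3.0):  # Debye lengths from high to low salt
    print("Debye %.1f nm -> l_OSF = %.1f nm" % (debye, osf_persistence_nm(debye=debye)))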
Many recent efforts were directed towards the dynamics of semiflexible chains, e.g., in flows [ ] or the growth of biofilaments. The former topic is covered by the contribution of Winkler to this issue [ ]. Some aspects linked to the coupling of longitudinal and transverse dynamics at short times, like the propagation of tension, were previously addressed by us and others [ ]. The growth of biofilaments, essentially actin, was considered by Carlier [ ], Mitchison [ ] and modeled by Lacoste [ ]. We will not discuss any such dynamical issues here.

Though the paper reviews some of the contributions by the theory and simulation group at Institut Charles Sadron (ICS), it proposes many new aspects, approaches or presentations. Section 2 gives results for the WLC in 2D and 3D. Section 3 is devoted to WLC adsorption on a flat surface. Section 4 treats filaments beyond the WLC. Section 5 mentions miscellaneous topics.

2. The WLC Model: Some Results in 3D and 2D

2.1. Single Chain Properties

The statistical physics of a single WLC is a classical topic of textbooks [ ]. We hence do not aim at being exhaustive, but give a few results which are used later. The WLC is described by the bending Hamiltonian $H = (B/2)\int ds\, \kappa^2(s)$, where $\kappa(s)$ is the local curvature and the integral runs over the arc length $s$ along the chain. In 2D, a configuration is determined by the set of angles $\theta(s)$ between the tangent to the chain and a reference direction. In particular, the local curvature at curvilinear abscissa $s$ is simply $\kappa = d\theta/ds$. Local bending of a small section of length $\delta s$ by an angle $\delta\theta$ is penalized by a bending energy cost $\delta H = (B/2)(\delta\theta)^2/(\delta s)$, quadratic in $\delta\theta$. The distribution of the increment $\delta\theta$ is hence Gaussian. Therefore, its Fourier transform is also Gaussian, $B(q) = e^{-q^2 \delta s\, k_B T/(2B)}$, where $q$ is conjugated to $\delta\theta$. The angular deflection along a finite chain is additive, and the global deflection also follows Gaussian statistics, $P(q) = e^{-q^2 s\, k_B T/(2B)}$, for a WLC of length $s$.

A widely-used geometrical quantity is the tangent/tangent correlation function $C(s) = \langle \cos\theta \rangle$, with the reference $\theta = 0$ taken at $s = 0$; $C(s)$ is the real part of $P|_{q=1}(s)$. It is convenient to introduce the persistence length $\ell = B/(k_B T)$, which characterizes the exponential decay of $C(s)$:

$C(s) = e^{-s/(2\ell)}$ (2D), $\quad C(s) = e^{-s/\ell}$ (3D),

where the 3D expression accounts for the two transverse directions, which contribute to decorrelate the tangent. Other geometrical quantities, like the mean-square end-to-end distance $R_e^2$ or the radius of gyration $R_g$, are, quite generally, obtained as multiple integrals of $C(s)$: $R_e^2(S) = \int_0^S ds \int_0^S ds'\, C(|s - s'|)$ and $2 S^2 R_g^2(S) = \int_0^S ds \int_0^S ds'\, R_e^2(|s - s'|)$. For the 2D WLC case, one obtains:

$R_e^2 = 4\ell S - 8\ell^2 \left( 1 - e^{-\frac{S}{2\ell}} \right), \quad R_g^2 = \frac{2\ell S}{3} - 4\ell^2 \left[ 1 - \frac{4\ell}{S} \left( 1 - \frac{1 - e^{-\frac{S}{2\ell}}}{S/(2\ell)} \right) \right].$

The 3D result is obtained by replacing $\ell$ by $\ell/2$.

Beyond polymer textbooks, more advanced results were reported. The end-to-end distribution of the 2D WLC has been studied theoretically by Lamprecht et al. [ ]. In the limit of the 2D SAW, Cardy and Saleur calculated the ratio of the end-to-end distance to the radius of gyration [ ]. The competition between stiffness and attraction is not addressed here; it is the topic of the contribution by Janke to this issue [ ]. In the dilute regime, a variety of filament structures emerge, including stable knots; further, multi-chain aggregates are discussed. The collapse transition was also studied on a 2D lattice [ ].
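These closed-form results are easily checked numerically. The following minimal sketch (ours; all parameter values are illustrative) samples discrete 2D worm-like chains with Gaussian bond-angle increments of variance $\delta s/\ell$ per step and compares the measured mean-square end-to-end distance with the 2D formula above.

import numpy as np

ell, ds, nsteps, nchains = 10.0, 0.1, 1000, 2000
rng = np.random.default_rng(0)

# theta(s) performs a 1D random walk: <(dtheta)^2> = ds * kB*T/B = ds/ell
theta = np.cumsum(rng.normal(0.0, np.sqrt(ds/ell), size=(nchains, nsteps)), axis=1)
x = np.cumsum(ds * np.cos(theta), axis=1)
y = np.cumsum(ds * np.sin(theta), axis=1)

S = nsteps * ds
re2_mc = np.mean(x[:, -1]**2 + y[:, -1]**2)
re2_th = 4*ell*S - 8*ell**2 * (1.0 - np.exp(-S/(2*ell)))
print(re2_mc, re2_th)  # agree within statistical error; here C(s) = exp(-s/(2 ell))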
2.2. Melt of Worm-Like Chains

2.2.1. Simulations of WLC Melts and Lattice Artifacts

Semiflexible and rigid polymer chains have also been considered numerically, first [ ] using lattice Monte Carlo (MC) simulations [ ], where coarse-grained polymer chains are represented by connected lattice sites [ ]. The monomers move either using local MC jumps between neighboring lattice sites or by means of global collective rearrangements, such as "slithering snake" moves along the chain contour [ ]. A particularly efficient lattice MC algorithm heavily used in the past is the so-called "bond-fluctuation model" (BFM) [ ], where the monomers are represented by cubes on a simple cubic lattice connected by a set of allowed bond vectors. Since a repeat unit in the lattice model represents a group of atoms in a real chain, one can model the chain stiffness by introducing an effective potential that prefers straight conformations. A particularly simple choice often used is $U(\theta) = \epsilon (1 - \cos\theta)$, with $\theta$ being the complementary angle to the one enclosed by two consecutive bonds [ ] and the energy parameter $\epsilon$ allowing one to tune the effective bond length $b_e$, as shown by the dashed line in Panel (a) of Figure 1. Most studies start with equilibrated samples of flexible chains ($\epsilon = 0$) and then gradually increase the stiffness energy $\epsilon$. The equilibration may be checked by comparing with systems obtained starting from rigid rods [ ].

We emphasize that the dimensionless parameter $x = \beta\epsilon$ (with $\beta = 1/(k_B T)$ being the inverse temperature) should not become too large, as seen in Figure 1 for a BFM melt of volume fraction $\phi = 0.5$ with short chains of length $N = 20$ [ ]. In fact, the center-of-mass self-diffusion coefficient $D_{\rm cm}$ levels off at $x \approx 3$ and even increases beyond $x \approx 7$, albeit the local jump acceptance rate decreases monotonously (not shown). As can be seen from Panel (b) for $x = 10$, chains and chain segments tend to align along the easy lattice axes of the respective model once $x$ exceeds unity. It is this breaking of the orientational symmetry of the chains that causes the unphysical non-monotonous behavior of the diffusion constant. The alignment along the easy axes of the model is closely related to the entropically-driven alignment of stiff chains along planar surfaces discussed below. Unfortunately, while the surface alignment is a true physical effect of experimental relevance, the alignment of lattice polymers is merely an algorithmic artifact. We emphasize that this is not an equilibration problem and should also occur if the chains are generated, say, using the pruned-enriched Rosenbluth method (PERM), as in various recent studies [ ]. Results obtained for very rigid lattice chains [ ] may thus be irrelevant for the continuous-space experimental reality and should be considered with care. In any case, it is not easy to properly subtract these artifacts, which may be quantified using an orientational order parameter [ ] corresponding to a second-order Legendre polynomial. In our opinion, future research on semiflexible and rigid chains should thus focus on off-lattice bead-spring models of the Kremer–Grest type [ ]. While this a priori does not exclude off-lattice MC schemes [ ], molecular dynamics has many advantages, since collective relaxation pathways are directly excited. Due to the strong correlations of the monomer positions in stiff chains, local jump MC schemes necessarily lead to exponentially long relaxation times (if the chains are not already aligned along an axis) and are thus intrinsically inefficient. Similar anomalous diffusion and chain relaxation are observed for collapsed polymers below the $\Theta$-point if local MC schemes are used [ ].
Similar anomalous diffusion and chain relaxation are observed for collapsed polymers below the -point if local MC schemes are used [ 2.2.2. Two-Dimensional Polymer Melts: Isotropic-Nematic Phase Transition Flexible Chains As first suggested by de Gennes [ ], it is now generally accepted theoretically [ ], numerically [ ] and experimentally [ ], that strictly two-dimensional ( $D = 2$ ) “self-avoiding walks” (SAWs) adopt compact conformations at sufficiently high concentrations , i.e., the chain size: $R ≈ ! d cm ≈ ( N / c ) ν with ν = 1 / D = 1 / 2$ is set by the typical distance $d cm$ between the chains (the exclamation mark stresses the key scaling assumption). It is assumed here that chain intersections are strictly forbidden. This must be distinguished from systems of so-called “self-avoiding trails” (SATs), which are characterized by a finite chain intersection probability [ ]. Relaxing thus the self-avoidance constraint, SATs have been argued to belong to a different universality class revealing mean-field-type statistics with rather strong logarithmic -corrections [ ]. The SATs’ universality class is relevant to the in-plane chain statistics for thin molten layers [ ]. It is important to stress that the compactness of dense, strictly 2D SAWs, Equation ( ), does not imply Gaussian chain statistics, since other critical exponents with non-Gaussian values have been shown to matter for various experimentally-relevant properties [ ]. It is thus incorrect to assume that excluded-volume effects are screened [ ] as is approximately the case for 3D melts [ ]. Furthermore, the segregation of the chains does by no means impose dislike shapes minimizing the contour perimeter of the (sub)chains; the chains adopt instead rather irregular shapes [ ]. Focusing on dense 2D melts, it has been shown both theoretically [ ] and numerically [ ] that the irregular chain contours are characterized by a perimeter scaling as: $L ∼ R D p ∼ N D p / D with D p = D - θ 2 = 5 / 4 .$ The fractal line dimension $D p$ is set by Duplantier’s contact exponent $θ 2 = 3 / 4$ characterizing the size distribution of inner chain segments [ ]. We remind that Duplantier’s theoretical predictions obtained using conformal invariance [ ] rely on both the topological constraint (no chain intersections) and the space-filling property of the melt. Interestingly, the fractal line dimension is experimentally accessible from the generalized Porod scattering of the intrachain coherent structure factor $F ( q )$ . It has been shown [ ] that: $N F ( q ) ≈ N 2 / ( q R ) 2 D - D p ∝ ! L$ should hold in the intermediate wavevector regime. Weak Persistence Length Effects Up to now, we have supposed that the chains are flexible down to the monomer scale. Obviously, most of the experimentally-relevant macromolecules on surfaces or confined to thin films are rather rigid [ ]. Following previous computational work [ ], we are currently re-investigating the scaling of semiflexible chains starting from the dilute limit [ ] up to high densities to see how a finite persistence length may change the scaling. Semiflexibility is readily included by adding the coarse-grained stiffness potential Equation ( ) to the widely-used Kremer–Grest bead-spring model [ ]. The effect of this potential is best first characterized for the dilute limit, which yields the curvilinear persistence length from the exponential decay of the bond-bond correlation function following Hsu et al. [ ]. Note that the bond length can be regarded as constant for all . 
It is found that the persistence length $\ell(\epsilon)$ remains smaller than the semidilute blob size $\xi$ for the experimentally mainly relevant low and moderate densities. In this regime, the finite rigidity is observed not to change the scaling properties of the flexible chains discussed above. This may be seen simply from the snapshots below the dashed line presented in Figure 2. In fact, since the number of monomers $g$ in the semidilute blob decreases rapidly with $\epsilon$ [ ], i.e., $N/g$ increases strongly as long as $g$ does not become too small ($g \gg 1$), a finite rigidity even speeds up the convergence to the predicted compact chain limit. As shown from the reduced sub-chain size $R_e^2(s)/(s-1)$ for different $\epsilon$ presented in Figure 3, the sub-chain size depends in fact very little on the local persistence length for large arc-lengths $s \to N$. Essentially, the chain size is set in this limit by the $\epsilon$-independent distance $d_{cm}$, cf. Equation ( ), between chains.

Strong Persistence Length Effects

If the persistence length strongly exceeds the blob size (estimated by extrapolation to finite densities using the conformational properties of the dilute limit), one expects to observe, with increasing persistence length and density, a transition to an ordered nematic state. The location of this transition is roughly sketched by the dashed line in Figure 2. Whether this transition is continuous, as claimed in some theoretical studies focusing on semiflexible polymer chains of a fully-occupied Flory lattice model confined to a volume of fixed shape [ ], or of first order, as suggested by the Monte Carlo simulations of multi-chain systems at a fixed finite density [ ], is an important and, in our view, yet open question. The data presented in Figures 2 and 3 stem from systems where we start with compact conformations of flexible chains ($\epsilon = 0$) and gradually increase the bending penalty. Additionally, a second set of data has been obtained starting with aligned rods at high $\epsilon$ and then gradually decreasing the stiffness (this dataset is limited up to now to short chains of length $N \le 256$). While for systems below the dashed line in Figure 2, the same chain conformations are readily reached with both protocols for all chain lengths available, this becomes computationally challenging around and above the dashed line, i.e., strong hysteresis effects have been observed. Due to the sluggish hairpin-like defects seen in the snapshots, the hysteresis might be a merely dynamical issue. However, for all systems for which both protocols yield the same configurational properties, i.e., where we are sure to have reached equilibrium, a sharp transition between isotropic and nematic states has been observed for the standard nematic order parameter [ ], the reduced chain size $R_e^2(N)/N$ or the second Legendre polynomial $P_2(r)$ of intra- and inter-chain bonds as a function of the distance between bonds (not shown). These results point to a first-order transition [ ], thus supporting the pioneering work by Baumgärtner and Yoon [ ]. From a mathematical point of view, the associated 2D lattice models [ ] are solved exactly. The same system has also been studied on a different lattice more recently [ ]; there, a Kosterlitz–Thouless (KT) transition is reported. Experimentally and in simulations, the KT transition is most often hidden by a first-order transition induced by short-range interactions.
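A sketch of the diagnostic quoted above: the standard 2D nematic order parameter from the Q-tensor of the bond vectors (conventions vary between studies; this normalization gives $S = 0$ for an isotropic and $S = 1$ for a perfectly aligned system):

```python
import numpy as np

def nematic_order_2d(bonds):
    """Largest eigenvalue S of Q = 2<u u^T> - I for unit 2D bond
    vectors u; in 2D this equals sqrt(<cos 2a>^2 + <sin 2a>^2),
    with a the bond angle."""
    u = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
    Q = 2.0 * np.einsum('ni,nj->ij', u, u) / len(u) - np.eye(2)
    return np.linalg.eigvalsh(Q).max()
```

Monitoring $S$ along both preparation protocols (stiffening flexible chains vs. softening aligned rods) is one way to make the hysteresis mentioned above quantitative.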
2.2.3. Three-Dimensional Polymer Melts: Corrections to Chain Ideality

As pointed out in the Introduction, away from the isotropic/nematic transition, rigidity effects are less important in three-dimensional (3D) concentrated solutions or polymer melts, and the classical view of flexible polymer chains should apply. This classical view supposes that all spatial correlations are short-ranged: they vanish for length scales larger than the correlation length $\xi$ characterizing the decay of the density fluctuations [ ]. However, over the past years, there has been increasing evidence that this classical view must be amended. For instance, the correlation functions of density fluctuations [ ] or bond orientations [ ] display long-range power-law decays. At the single-chain level, this implies corrections to ideal (Gaussian) chain statistics. The molecular origin of these deviations from chain ideality is related to the interplay of chain connectivity and the incompressibility of the melt. This interplay leads to an effective repulsion between chain segments due to a correlation hole effect, leading to $C(s) \sim s^{-3/2}$ for the tangent/tangent correlation function at large arc-length $s$. Since the corrections should manifest themselves best for (nearly) incompressible systems of long flexible chains, our numerical tests by simulations have employed fully-flexible generic polymer models, e.g., the bond-fluctuation lattice model (BFM) and a Kremer–Grest-like bead-spring model (BSM) [ ]. As an example, Figure 4 shows $C(s)$ for the BSM and chains of length $N = 1024$. The power law $C(s) \sim s^{-3/2}$ is clearly visible for the fully-flexible model ($\epsilon = 0$) and also persists if rigidity is switched on (see the data for $\epsilon = 1$ and 2), although the amplitude of the power law is decreased. A weak rigidity was recently suggested to be a good compromise if one wants to simulate pretty flexible chains with weak non-ideality effects [ ].

3. Adsorption of WLC

When a polymer can bind to a surface by any of its monomers, it tends to do so against translational and conformational entropy, a process called adsorption. Adsorption from solution is a very efficient coating process, and an amount equivalent to a few monolayers of monomers is usually enough to control surface properties. Several numerical studies describe polymer adsorption with rigidity [ ]. The adsorption process turns out to be rather complex. We focus on two extreme cases: reversible adsorption, where the layer is in full equilibrium with the bulk solution [ ], and chemisorption, where the polymer binds through an irreversible monomer/wall chemical reaction [ ]. In the former case, adsorbed polymers can reorganize freely and exchange freely with the bulk, whereas in the latter case, an established monomer/wall bond is permanent, and the polymers can only reorganize under this strong constraint and never desorb. Though these two adsorption processes are at the extremes of a spectrum extending from fully-equilibrated to totally-irreversible processes, it turns out that their gross features can be described with the same tools.

3.1. Loop and Tail Partition Function

Adsorption of a chain produces configurations that can be analyzed in terms of loops and tails [ ]. Loops are chain fragments that touch the surface only by their ends; an adsorbed configuration comprises a series of loops. Sometimes, in discrete models (say, if space is described by a discrete lattice), a series of monomers lying "flat" on the surface is called a train [ ].
A tail is an end-section leaving the surface without touching it again; there are two tails per adsorbed chain. The equilibrium distribution of loops and tails plays an essential role and turns out to rule over the structure of the adsorption layer for both reversible adsorption and chemisorption. Actually, the polymer statistics merely enters the adsorption problem via these distributions. In particular, the loop and tail distributions encode the quality of the solvent and, more importantly here, the chain stiffness. In line with Section 1, we neglect excluded volume interactions between persistent segments and restrict our attention to ideal loops and tails. We anticipate that fragments (loops or tails) shorter than $\ell$ are stiff, while longer ones are flexible and follow Gaussian statistics. Let us first notice that bringing a chain in contact with a repulsive flat surface by any of its monomers does not affect its configurations. To see this in a simple way, take an arbitrary bulk configuration and translate it down to the wall until the closest monomer comes into contact. Any bulk configuration generates one configuration touching the wall by any of its monomers, the reverse being obviously true. The equivalence of the two sets of configurations translates into simple relations on partition functions, which were derived in many specific cases, e.g., for excluded volume chains, using dedicated methods [ ]. Take a flexible chain in the bulk; omitting the exponential contribution associated with monomer fugacity, its partition function can be set to unity. The same holds true for a chain touching the wall by any of its monomers. Take now a chain specifically touching with one of the $\sim N$ middle monomers; its partition function is $Z_1 \sim 1/N$. The touching configuration can be seen as made out of two tails; hence, the tail partition function satisfies $Z_t^2 \sim 1/N$, i.e., $Z_t(n) \sim 1/\sqrt{n}$; a standard result that is easily recovered from other methods, for example, from the image principle. A loop of size $n$ can be considered as made out of two tails of size $\sim n$, which are within reach and with ends constrained to match in height [ ]; hence, the loop partition function satisfies $Z_l(n) \sim Z_t(n)^2/\sqrt{n}$, i.e., $Z_l(n) \sim n^{-3/2}$, as anticipated from first passage statistics. The advantage of this derivation over more standard ones is that it easily generalizes to semiflexible chains. Below, we use a very similar way to derive the partition functions of stiff loops and tails [ ]. Noticing that for a stiff bulk chain of size $s$, the typical global angular fluctuation is given by $\ell\theta^2/s \sim 1$, leading to $\theta \sim \sqrt{s/\ell}$, the chain partition function can be equated to this fluctuation. The partition function of a stiff chain touching the surface by any of its monomers is hence also $\propto \sqrt{s/\ell}$. The stiff chain touching the surface by a specific middle monomer can now be seen as composed of two tails of length $\sim s$; $Z_1 \sim \sqrt{s/\ell}/s \propto Z_t^2$, solved by $Z_t \propto s^{-1/4}$. We can then derive the partition function of a stiff loop of size $s$: a stiff loop is composed of two stiff tails matching both in orientation and height. The orientation and height of each tail fluctuate within $\theta \sim \sqrt{s/\ell}$ and $z \sim s\theta$, respectively. The partition function of the loop as a constrained state of the two tails reads $Z_l \sim Z_t^2/(\theta z)$. This leads to $Z_l \propto s^{-5/2}$. Although this derivation is similar to the one for a flexible chain, the additional constraint on the orientation had to be implemented.
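For quick reference, the scaling chain of this subsection can be condensed as follows (a recap of the derivation just given; $\ell$-dependent prefactors are suppressed):

```latex
% Flexible (Gaussian) fragments:
Z_t(n) \sim n^{-1/2}, \qquad
Z_l(n) \sim \frac{Z_t(n)^2}{\sqrt{n}} \sim n^{-3/2};
% stiff fragments (s < \ell), using \theta \sim \sqrt{s/\ell},\; z \sim s\theta:
Z_t(s) \propto s^{-1/4}, \qquad
Z_l(s) \sim \frac{Z_t(s)^2}{\theta z} \propto s^{-5/2}.
```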
Loop distributions obtained from numerical simulation are shown in Figure 5 together with the predicted asymptotic laws [ ].

3.2. Reversible Adsorption of an Ideal WLC from Dilute Solution

We may proceed with a simple description of the adsorption of an ideal WLC from solution and discuss some of the many additional subtleties in a second step. The very vicinity of the surface, $z < \ell$, the so-called proximal layer, is populated with short stiff loops. At distances from the wall smaller than $\ell$, stiff loops build up the concentration $c(z)$. Their size is distributed according to the partition function $Z_l \propto s^{-5/2}$. These stiff loops are pretty flat and extend into the solution over a distance $z \sim s^{3/2}/\ell^{1/2}$ according to their size. The sublayer $z < \hat z$ is merely filled with loops of size $s < \hat s \sim \hat z^{2/3}\ell^{1/3}$, which translates into:

$\int_0^{\hat z} c(z)\, dz = \int_0^{\hat s} Z_l(s)\, s\, ds$

Taking the derivative with respect to $\hat z$ on both sides of Equation ( ), we obtain the concentration profile $c(z) \propto z^{-4/3}$ (up to an $\ell$-dependent prefactor). Although this result is reminiscent of the celebrated self-similar concentration profile obtained by de Gennes for the adsorption of swollen chains from a good solvent [ ], it is unrelated; the latter decreases with an exponent which is only approximately $4/3$ in 3D and depends on the dimension of space. For strong enough adsorption, the adsorption layer is thin and limited to the proximal regime Equation ( ) with a proper cut-off function defining its thickness $h$. For ideal chains, the critical adsorption strength depends on the range $\Delta$ of the adsorption potential as $u_c \sim 1/(\Delta^2\ell)^{1/3}$ [ ]; this reflects the balance between the Odijk confinement energy [ ] and the gain in interaction energy. Strong adsorption with a layer thickness shorter than $\ell$ is found far enough from the adsorption threshold, for reduced temperatures $\tau = (u - u_c)/u_c$ larger than $(\Delta/\ell)^{2/3}$, a regime where $h \sim \Delta/\tau^{3/2} < \ell$. In practice, the adsorption layer is typically thin and described by Equation ( ) up to the cut-off. A very detailed analytical description of the proximal layer built by ideal WLCs is proposed in [ ], together with an analysis of weak adsorption. In the weak adsorption regime, the concentration profiles are found to be non-monotonic and to go through a local minimum. The work in [ ] proceeds from an eigenstate description of the partition function [ ]. The boundary condition is rather subtle and basically ensures that no fragment is coming out of/penetrating into the wall; it is neither a Neumann nor a Dirichlet boundary condition. Earlier attempts [ ] failed due to improper boundary conditions, which led to a ground state with a node, which cannot be. What is the impact of excluded volume on the layer description? As long as the chain thickness is small compared to the persistence length, $b \ll \ell$, the power-law decay of the concentration in the proximal regime Equation ( ) must hold. Excluded volume can become important in the concentrated quasi-2D layer within the adsorption well and may affect the adsorption transition. A recent single-mushroom adsorption study by numerical simulation [ ] finds that the critical adsorption energy varies with the persistence length as $u_c \propto 1/\ell$. It is fair to say that the numerical study of critical adsorption is very difficult and was performed on a cubic lattice in [ ], which affects the powers of $\Delta/\ell$ in various scaling behaviors, e.g., for $u_c$ (see also the remark in the final discussion in [ ]).
Recent numerical self-consistent mean-field calculations in Chen's group [ ] are compatible with the standard predictions for $u_c$ [ ], as they should be. Excluded volume can also affect the flexible loop layer in the weak adsorption regime, but merely for very large loops (see Section 1).

3.3. Irreversible Chemisorption of a WLC from Dilute Solution

Chemisorption is a very slow adsorption process limited by the kinetics of the chemical reaction, which ensures binding. Polymer configurations are equilibrated at every step under the constraint of the already existing bonds. In this sense, the adsorption can be looked at as an Eyring reaction. Under this condition, the reaction rate, defined as the number of surface bonds created per unit time, decomposes into the product of the (intrinsic) reaction rate (frequency) $q$ for monomers within the reaction distance from the surface and the concentration of monomers $c_0$ within the reaction distance ($z < b$). The latter is to be calculated for polymer configurations equilibrated under the constraints imposed by the already bound monomers [ ]. Below, we consider chemisorption from a dilute solution. Among the relevant questions are: given that a polymer has established a first bond with the surface, how does adsorption proceed further? Will a single chain zip on the surface out of the first bond, or will zipping be preempted by the formation of large bound loops accelerating the adsorption process? What layer structure will the chemisorption process ultimately build? The latter is of prime practical importance. Adsorption kinetics can also be addressed. Suppose one middle monomer is bound flat on the surface; because of chain stiffness, neighboring monomers stay merely within the reaction distance $\sim b$. The section that stays flat within the reaction distance has the typical length $s_0 \sim (\ell b^2)^{1/3}$; longer sections thermally fluctuate out beyond $z \sim b$. A monomer which is a chemical distance $s < s_0$ away from the first bond is hence also within the reaction distance with a probability $P(s) \sim 1$. For more distant monomers with $s > s_0$, $P(s) \propto Z_l(s)\, Z_t(S/2 - s)/Z_t(S/2)$. Provided $s \ll S/2$, only the loop partition function matters. Assuming a stiff loop, $s \ll \ell$, for which $Z_l \sim s^{-5/2}$, the probability $P(s)$ is proportional to $s^{-5/2}$. To match $P(s) \sim 1$ at $s = s_0$, we must have $P(s) \sim (s/s_0)^{-5/2}$ for $\ell > s > s_0$. Even more distant monomers, with $S/2 \gg s > \ell$, contact the wall by a flexible loop for which $Z_l \sim s^{-3/2}$ and $P(s) \propto s^{-3/2}$; matching this and the preceding regime at $s = \ell$ yields $P(s) \sim (\ell/s_0)^{-5/2}(s/\ell)^{-3/2}$. The three regimes for the monomer/wall "contact" probability are summarized below:

$P(s) \sim 1 \quad (s < s_0); \qquad P(s) \sim (s/s_0)^{-5/2} \quad (\ell > s > s_0); \qquad P(s) \sim (\ell/s_0)^{-5/2}(s/\ell)^{-3/2} \quad (S \gg s > \ell);$

with $s_0 \sim (\ell b^2)^{1/3}$. These relations were derived for the adsorption of a tail; the same relations hold for the internal adsorption inside a loop. The cumulated adsorption rate of the single tail (or loop), $Q_s \sim \int_0^S q\, P(s)\, ds$, is dominated by $s \sim s_0$. This indicates that the typical adsorption loop is a small flat loop of size $\sim s_0$, which we call the minimal loop. Adsorption out of the existing bond hence proceeds by zipping with finite steps of size $s_0$ and a rate of $q\,s_0/b$ steps per unit time. This is an effect of stiffness; were the chain flexible, zipping would proceed monomer by monomer.
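A small numerical sketch of the three contact-probability regimes (scaling form only; all order-one prefactors are set to one, and the parameter values are arbitrary) and of the claim that the cumulated rate is controlled by $s \sim s_0$:

```python
import numpy as np

def contact_probability(s, s0, ell):
    """Piecewise scaling form of the monomer/wall contact probability
    P(s) for chemisorption, cf. the three regimes above."""
    s = np.asarray(s, dtype=float)
    return np.where(s < s0, 1.0,
                    np.where(s < ell, (s / s0) ** (-2.5),
                             (ell / s0) ** (-2.5) * (s / ell) ** (-1.5)))

s0, ell, S = 10.0, 1.0e3, 1.0e6
s = np.logspace(0.0, np.log10(S), 4000)
Q = np.trapz(contact_probability(s, s0, ell), s)
print(Q / s0)  # of order one: the integral is dominated by s ~ s0
```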
The next question is whether zipping can be completed before a large adsorption loop forms. The zipping process is completed up to the length $s$ after the time $t_z \sim (s_0 q/b)^{-1}(s/s_0)$. The rate of nucleation of an adsorption loop larger than $s$ is given by $Q_n \sim \int_s^S q\, P(s')\, ds'/b$. Nucleation of a stiff loop ($\ell > s$) occurs with a rate $Q_n \sim q\,(s_0/b)(s/s_0)^{-3/2}$, while the nucleation rate for a large flexible loop ($S > s > \ell$) is $Q_n \sim q\,(\ell/b)(\ell/s_0)^{-5/2}(s/\ell)^{-1/2}$. Note that the nucleation of a loop larger than $s$ is dominated by the nucleation of loops of size $\sim s$. Calculating the probability $t_z \times Q_n$ to nucleate a loop larger than $s$ before the zipping time associated with the length $s$ shows that large-loop nucleation is negligible in the stiff regime ($\ell > s > s_0$) and only preempts zipping deeper inside the flexible regime, for $s > S^*$ with $S^* \sim \ell^{5/3}/b^{2/3}$. Table 2 gives the orders of magnitude of the length scales involved in single-WLC adsorption. At this stage, the length $s_0$ appears to play a special role: WLCs much shorter than $s_0$ are hyper-rigid; they hardly bend over each other in the multi-chain problem. One chain adsorbs at once (or not at all). We feel that this regime, which we did not study, shares some features with the random sequential adsorption (RSA) of perfectly stiff rods, which only adsorb without overlap. Ultimately, very stiff WLCs should cross over to the RSA regime studied by Viot and Tarjus in [ ]; it leads to a fractal 2D coverage. This regime may well be pertinent to fragments of the stiffest biofilaments in practice. What happens if chains adsorb from a dilute solution? Suppose one pretty stiff chain is already adsorbed and another one is coming in from the solution, attaches by one middle monomer, which is most likely, and starts to zip. Let us assume that the configurations are almost straight lines; if the chains were ideal, they would just cross. The orientations of the two lines are chosen randomly, yet given that they cross, their relative orientation is not random; parallel lines do not cross. It is easy to see that the distribution of the crossing angle $\theta$ is given by $P(\theta) = (\sin\theta)/2$ with $0 < \theta < \pi$. The most likely crossing angle is $\pi/2$, the distribution is symmetric about it, and small angles are suppressed. The probability to observe an angle smaller than $\theta$ is $P_{<\theta} = \theta^2/4$ for $\theta \ll 1$, which we may double by symmetry. Now, consider a non-ideal filament. It will freely zip until zipping is hampered by the already adsorbed line, at a distance along the filament of order $\sim s_0$. According to the partition function, the filament may now either cross by a loop of size $\sim s_0$ or bend over and align with the already adsorbed line. Bending over by an angle $\theta$ is associated with an energy penalty $\sim \ell\theta^2/s_0$, which is irrelevant if $\theta$ happens to be smaller than $\theta_{max} \sim (b/\ell)^{1/3}$, which occurs with the probability $P_{<\theta_{max}} \sim (b/\ell)^{2/3}$. Typically, a given crossing should thus be preferred to alignment. Now, a given chain experiences a number of crossings, especially the later adsorbing ones. The question whether the chain prefers to cross or align is meaningful when the typical spacing between crossings, which are Poisson distributed, is at least larger than $\sim s_0$. A typical chain experiences fewer than $S/s_0$ crossings and is unlikely to align during adsorption provided it is stiff ($s_0 < S < \ell$).
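The crossing-angle statistics invoked in this argument is easy to check numerically; a minimal sketch (box size, segment length and sample size are arbitrary choices) that conditions on intersection and returns the crossing angles:

```python
import numpy as np

rng = np.random.default_rng(2)

def crossing_angles(n_pairs=200_000, box=10.0, length=4.0):
    """Drop pairs of randomly placed, randomly oriented segments and keep
    the angle between their directions whenever the segments intersect;
    conditioning on intersection reweights the flat angle statistics."""
    p1 = rng.uniform(0.0, box, (n_pairs, 2))   # segment midpoints
    p2 = rng.uniform(0.0, box, (n_pairs, 2))
    a1 = rng.uniform(0.0, np.pi, n_pairs)      # undirected orientations
    a2 = rng.uniform(0.0, np.pi, n_pairs)
    d1 = np.stack([np.cos(a1), np.sin(a1)], axis=1)
    d2 = np.stack([np.cos(a2), np.sin(a2)], axis=1)
    # Solve p1 + t d1 = p2 + u d2; a crossing needs |t|, |u| <= length/2.
    cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    ok = np.abs(cross) > 1e-12
    dp = p2 - p1
    safe = np.where(ok, cross, 1.0)
    t = (dp[:, 0] * d2[:, 1] - dp[:, 1] * d2[:, 0]) / safe
    u = (dp[:, 0] * d1[:, 1] - dp[:, 1] * d1[:, 0]) / safe
    hit = ok & (np.abs(t) <= length / 2) & (np.abs(u) <= length / 2)
    theta = np.abs(a1 - a2)[hit]
    return np.minimum(theta, np.pi - theta)
```

A histogram of the returned angles follows $\sin\theta$ on $(0, \pi/2]$, i.e., the $P(\theta) = (\sin\theta)/2$ law folded by symmetry.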
This simple reasoning leaves open a number of questions. If it so happens that a chain aligns, to what extent does this favor alignment of the next incoming chain, and how does an alignment defect grow when new chains enter the surface? Even if alignment is not typical, alignment defects must appear in a macroscopic sample. In practice, part of these issues may actually depend on the local mechanism of flexibility. The adsorption loop size is power-law distributed; the scarcer adsorption loops larger than $s_0$ can bend over more easily. The crossover between crossing and aligning around $\theta_{c.o.} \sim \theta_{max}$ is smooth. For moderate rigidity, $\theta_{max}$ is not that small, and alignment is not strongly suppressed. These expectations are in part borne out in preliminary numerical simulations [ ] studying the crossing vs. alignment statistics as a function of the orientation of the incoming filament with respect to the adsorbed one. Simulations also suggest that the problem might be more complex than suggested by the simplistic arguments given above. In [ ], we address irreversible adsorption without alignment in some detail. Overall, stiff chains first build a Poisson array of almost straight lines characterized by its contour length per unit area $c_2 b$, fixed by the monomer surface concentration $c_2$. The distribution of the distance between crossings along a particular line decays exponentially, with an average $\xi \sim 1/(c_2 b)$ given by the inverse length density per unit area. When $\xi$ goes below $s_0$, adsorption in the typical holes is penalized by an energy barrier against bending, and available holes become scarce. A layer of larger yet stiff loops forms, jumping straight from one residual hole to another. These loops build a concentration profile $c(z) \propto 1/z$. At a later stage, somewhat larger loops, which explore a somewhat wider lateral region, build a concentration profile $c(z) \propto 1/z^2$. Finally, more separated holes are spanned by flexible loops, again building a profile $c(z) \propto 1/z$. To summarize:

$c(z) \sim c_{2,\infty}/z \quad (z < 1/(c_{2,\infty} b)); \qquad c(z) \sim 1/(b z^2) \quad (\ell > z > 1/(c_{2,\infty} b)); \qquad c(z) \sim 1/(b\ell z) \quad (\sqrt{S\ell} > z > \ell);$

with $c_{2,\infty}$ the ultimate 2D monomer concentration. Ultimately, the remaining holes cannot be spanned by a chain and are occupied by single grafted chains building the outermost layer.

4. Filaments in 2D beyond WLC

A number of experimental observations on biofilaments cannot simply be accounted for by the WLC model. For example, the circularization of short actin filaments is observed [ ], which would be absent for a WLC with the measured persistence length $\ell \approx 17\,\mu m$ of actin. Similarly, taxol-stabilized microtubules display an "intrinsic curvature" unexpected for a stiff WLC. They display a length-dependent persistence length [ ] when naively considered as WLCs, and their transverse relaxation time scales as the filament length cubed [ ]. Finally, microtubules curl up both in vivo [ ] and in vitro [ ] under moderate forces. These and more puzzles call for augmented models beyond the WLC. Rather than going into the details of the specific biological situations, which is beyond the scope here, we will review some generic model ideas (namely helicity, bistability and polymorphic transitions) and discuss the resulting behavior from the theoretical point of view.

4.1. The Helical Filament Squeezed in 2D

Helically-coiled filaments are a frequent motif in nature.
Plenty of biological filaments (FtsZ [ ], MreB [ ], bacterial flagella [ ], tropomyosin [ ], the intermediate filament vimentin [ ]), as well as artificial man-made materials (e.g., micelles [ ]), have a helical shape in 3D space. Even whole microorganisms exhibit helicity inherited from their constituent filaments [ ]. In situations commonly encountered in experiments, coiled helices are observed when they are squeezed flat onto a two-dimensional surface that coincides with the focal plane. This confinement changes the physical properties of the underlying objects, and peculiar squeezed conformations often resembling looped waves, spirals or circles, cf. Figure 6, are observed. Here, we will rationalize the various shapes of those squeezed helical filaments (coined squeelices in the following) and also consider their statistical mechanics properties.

4.1.1. A Bit of Mechanics

Helical worm-like chain model: To describe the shape of an elastic rod of length $L$, one can use (cf., e.g., [ ]) the Frenet–Serret basis $(\mathbf{n}, \mathbf{b}, \mathbf{t})$ attached to the rod's centerline parametrized by the arc-length $s$. In general, such a rod can have an internal twist, which can be taken into account by considering another local basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$, which rotates with the material. This material frame or director basis is such that $\mathbf{e}_3 = \mathbf{t}$, $\mathbf{e}_1 = \mathbf{n}\cos\psi + \mathbf{b}\sin\psi$ and $\mathbf{e}_2 = -\mathbf{n}\sin\psi + \mathbf{b}\cos\psi$, where $\psi$ is the twist angle. For a given spatial configuration of the filament, the evolution of this basis along the centerline is given by the twist equations $\mathbf{e}_i' = \mathbf{\Omega}(s) \times \mathbf{e}_i$, where $\mathbf{\Omega}(s) = (\Omega_1, \Omega_2, \Omega_3)$ is the strain vector and $(\,)'$ denotes the derivative with respect to $s$. The components of $\mathbf{\Omega}$ are [ ]:

$\Omega_1(s) = \kappa(s)\sin\psi(s), \qquad \Omega_2(s) = \kappa(s)\cos\psi(s), \qquad \Omega_3(s) = \tau(s) + \psi'(s)$

The local curvature is therefore $\kappa^2(s) = \Omega_1^2 + \Omega_2^2$, and the twist density $\Omega_3(s)$ is the sum of the torsion $\tau$ and the excess twist $\psi'$. The energy $E[\mathbf{\Omega}(s)]$ of a filament is a functional of the strain vector $\mathbf{\Omega}(s)$ that can be expanded à la Ginzburg–Landau:

$E = \sum_{ij} A_{ij} \int \Omega_i\Omega_j\, ds + \sum_{ijk} D_{ijk} \int \Omega_i\Omega_j\Omega_k\, ds + \ldots$

The shape of the filament at zero temperature is obtained by minimizing the elastic energy, $\delta E/\delta\mathbf{\Omega}(s) = 0$. In the linear elastic regime, where the stress is proportional to the strain, i.e., $\sigma(s) = \delta E/\delta\mathbf{\Omega}(s) \propto \mathbf{\Omega}(s)$, the expansion stops at the quadratic order in $\mathbf{\Omega}$. We limit ourselves to this regime for the moment. See Section 4.2.2 on filament crunching for nonlinear contributions.

The linear elasticity of an isotropic rod: The straight untwisted rod minimizes the energy:

$E = \frac{1}{2}\int ds\, \left[B\,\kappa^2(s) + C\,\Omega_3^2\right]$

with $B$ and $C$ the bending and torsional stiffness, respectively. Now, a helical elastic rod of circular cross-section is a curve with constant curvature and torsion. Therefore, its elastic energy is of the form:

$E = \int \left[\frac{B}{2}\left((\Omega_1 - \omega_1)^2 + (\Omega_2 - \omega_2)^2\right) + \frac{C}{2}\left(\Omega_3 - \omega_3\right)^2\right] ds,$

with $\omega_1$ and $\omega_2$ the principal intrinsic curvatures and $\omega_3$ the intrinsic torsion. These material constants are chosen as positive. Upon a proper definition of the material frame, one can always set $\omega_2 = 0$. In the 3D situation, one can minimize the bending and the twist parts of the energy given by Equation (11) independently, yielding a curve of constant curvature $\omega_1$ and torsion $\tau = \omega_3$. In the absence of an external torque, there is no excess twist, $\psi' = 0$.
This 3D ground state corresponds to a helix of radius $R = \omega_1/(\omega_1^2 + \omega_3^2)$ and pitch $H = 2\pi\,\omega_3/(\omega_1^2 + \omega_3^2)$ satisfying the preferred curvature and twist everywhere. To proceed further, it is convenient to introduce the Euler angles $\phi(s)$, $\theta(s)$ and $\psi(s)$, such that the components of the strain vector are given by:

$\Omega_1 = \phi'\sin\theta\sin\psi + \theta'\cos\psi, \qquad \Omega_2 = \phi'\sin\theta\cos\psi - \theta'\sin\psi, \qquad \Omega_3 = \phi'\cos\theta + \psi'$

In this formulation, the curvature is given by $\kappa^2(s) = \phi'^2\sin^2\theta + \theta'^2$, and the torsion is $\tau = \phi'\cos\theta$.

Ground state of a squeelix: Confining the helical rod to the plane amounts to putting $\theta = \pi/2$, and then:

$\Omega_1 = \phi'\sin\psi, \qquad \Omega_2 = \phi'\cos\psi, \qquad \Omega_3 = \psi'.$

Hence, the local curvature is simply given by $\kappa^2(s) = \phi'^2$, and the torsion $\tau = 0$, as the curve is now planar. The energy of a squeelix of length $L$ then reads:

$E = \frac{1}{2}\int_{-L/2}^{L/2} \left[B\left(\phi'^2 - 2\omega_1\phi'\sin\psi + \omega_1^2\right) + C\left(\psi' - \omega_3\right)^2\right] ds$

Note that under confinement, the curvature and twist are now coupled. The energy variation $\delta E = 0$ leads to the following Euler–Lagrange equations:

$\psi'' + \frac{B\omega_1^2}{2C}\sin(2\psi) = 0$

and the boundary conditions:

$\psi'(-L/2) = \omega_3 = \psi'(L/2)$

Thus, even in the absence of an external torque at the chain's ends, the confinement converts the intrinsic torsion into an intrinsic torque. Equation (14) is nothing but the pendulum equation (with $s$ replacing time), whose solution depends on the material parameters $B$, $C$, $\omega_1$ and also $\omega_3$ through the boundary condition Equation (15). The solution of this equation leads to the curvature $\kappa(s) = \phi'(s)$, which is slaved to the twist according to Equation (13). This is the most important consequence of the squeezing of a helical WLC. From the knowledge of $\phi'(s)$, the 2D shapes can be reconstructed from the Cartesian coordinates given by:

$x(s) = x_0 + \int_{-L/2}^{s} \cos\phi(s')\, ds', \qquad y(s) = y_0 + \int_{-L/2}^{s} \sin\phi(s')\, ds'.$

The solutions that minimize the elastic energy of the squeelix lead to a variety of shapes resembling loops, waves, spirals or circles (see Figure 6). Solving Equation (14) for a chain of finite length with Equation (15) turns out to be surprisingly complicated [ ]. To grasp a physical intuition of the squeelix from simple arguments, we consider a very long chain, with $L$ much larger than any characteristic length, and neglect the boundary condition Equation (15). We have the trivial solution $\psi = \pm\pi/2$, which corresponds to a shape of constant curvature $\kappa = \pm\omega_1$. The energy density for this circular configuration is $E_0/L = C\omega_3^2/2$. The non-trivial general solution of Equation (14), assuming the condition $\psi(0) = 0$ without loss of generality, is well known: $\psi(s) = \mathrm{am}(s/\lambda\,|\,m)$, with a characteristic length $\lambda = \frac{1}{\omega_1}\sqrt{\frac{mC}{B}}$. The function $\mathrm{am}$ is the Jacobi amplitude function, and $m > 0$. This solution corresponds to the oscillatory and the revolving regimes of the pendulum for $m > 1$ and $m < 1$, respectively. The case $m = 1$ is the homoclinic pendulum (a single revolution of the pendulum). To each value of the parameter $m$ corresponds a different filament shape. However, a helical filament of length $L$ with given material parameters will adopt a single ground state when squeezed onto the plane. It can be shown [ ] that the value of $m$ which minimizes the elastic energy density $E(m)/L$ of the chain lies in the interval $[0, 1]$. Shapes with $m > 1$ cannot be the ground state of a squeelix.
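To illustrate the shape reconstruction, here is a minimal numerical sketch. The material parameters are hypothetical; the slaving relation $\phi' = \omega_1\sin\psi$ used below follows from the $\phi$ Euler–Lagrange equation with free ends, and the finite-length boundary condition Equation (15) is only imposed approximately at one end rather than solved by shooting:

```python
import numpy as np
from scipy.integrate import solve_ivp

B, C, w1, w3, L = 1.0, 0.5, 0.26, 0.10, 200.0  # assumed parameters

def rhs(s, y):
    """y = (psi, psi', phi): the pendulum Equation (14) for the twist
    angle, plus phi' = w1*sin(psi) for the bending angle."""
    psi, dpsi, phi = y
    return [dpsi, -(B * w1 ** 2 / (2.0 * C)) * np.sin(2.0 * psi),
            w1 * np.sin(psi)]

sol = solve_ivp(rhs, (0.0, L), [0.0, w3, 0.0], max_step=0.05,
                dense_output=True)
s = np.linspace(0.0, L, 4001)
psi, dpsi, phi = sol.sol(s)
# Shape reconstruction via the x(s), y(s) integrals given above:
ds = s[1] - s[0]
x = np.cumsum(np.cos(phi)) * ds
y = np.cumsum(np.sin(phi)) * ds
```

Varying the ratio $B\omega_1^2/C$ and the initial twist rate reproduces circle-, wave- and spiral-like morphologies of the kind shown in Figure 6.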
For $m \le 1$, the minimum of $E(m)/L$ is the solution of the equation [ ]:

$m\,E(m) = \gamma \quad \text{with:} \quad \gamma = \frac{4\,\omega_1^2 B}{\pi^2\,\omega_3^2 C}$

with $E(m)$ the complete elliptic integral of the second kind. Equation ( ) shows that the ground state of a squeelix of infinite length is determined by the parameter $\gamma$. The left-hand side of Equation ( ) is a growing function of $m$ that goes from zero to one. Therefore, a minimum of $E(m)/L$ exists for $\gamma \le 1$ only. When $\gamma > 1$, the ground state thus has a circular shape. For $\gamma = 1$, Equation ( ) gives $m = 1$, i.e., the homoclinic pendulum. Equation (18) becomes $\psi(s) = 2\arctan(e^{s/\lambda}) - \pi/2$, which interpolates between $\psi(-\infty) = -\pi/2$ and $\psi(\infty) = \pi/2$. We called this configuration a twist-kink [ ]. Provided $L \gg \lambda$, as considered here, the twist-kink is localized on a distance $\sim\lambda$ and separates two curvature-flipped regions of constant curvature $\kappa \approx \pm\omega_1$. The associated energy is $E_1 = 2\sqrt{BC}\,\omega_1 - \pi C\omega_3 + C\omega_3^2 L/2$. Therefore, the self-energy of a single twist-kink is:

$\Delta E = E_1 - E_0 = \pi C\omega_3\left(\sqrt{\gamma} - 1\right)$

with $\gamma$ the twist-kink expulsion parameter. This terminology speaks for itself, as for $\gamma > 1$, twist-kinks are expelled from the filament. It forms a shape with multiple circles of the same radius $1/\omega_1$ on top of each other (see Figure 6). For $\gamma < 1$, we have $m < 1$, and $\psi(s)$ is a monotonic function of $s$. The ground state is populated by a finite density of twist-kinks. This density is limited by the mutual kink-kink repulsion (with inter-kink distance $d$), $U_{int} \sim \pi C\omega_3\,\gamma\, f(d/\lambda)$, with $f(x) \sim 1/x$ for $x \ll 1$ and $f(x) \sim e^{-x}$ for $x \gg 1$. This repulsion gets stronger with larger $\gamma$. For $\gamma \lesssim 1$, Equation ( ) gives $m \approx \gamma$, which corresponds to a dilute regime with well-separated twist-kinks. In sections of the filament with constant $\psi(s) \approx \pm\pi/2$, the curvature is constant too, and the filament locally forms a circular arc with radius $\approx 1/\omega_1$. In regions where $\psi(s)$ changes by $\pi$, the curvature reverses its orientation. The reversal points, the twist-kinks, separate circular arcs of opposite curvature orientation. The filament then consists of a succession of circular arcs (or circles or even spirals, depending on the parameters) with alternating orientations (see Figure 6). With decreasing $\gamma$, the density of twist-kinks increases until the notion of individual twist-kinks loses its meaning (when $d/\lambda \lesssim 1$). For $\gamma \ll 1$, Equation ( ) becomes $m \approx (\pi^2/4)\,\gamma$. In this regime, the twist evolves smoothly, like $\psi(s) \sim s$. The squeelix changes the orientation of its curvature periodically and has a sinusoid-like shape (see Figure 6). Note that for an oscillating pendulum, i.e., $m > 1$, where $\psi(s)$ oscillates periodically, the chain is populated by a finite density of alternating twist-kinks and anti-twist-kinks. This oscillatory regime is not the ground state of a squeelix, because the anti-kink configuration has $\psi'(s) < 0$ around the curvature inversion point, and this configuration maximizes the torsion contribution to the total energy. These solutions could nevertheless be thermally activated [ ]. The theory of squeelices should be useful to determine the material parameters of helical filaments that are often confined in experiments.

4.1.2. Thermally-Injected Twist-Kinks and Hyper Flexibility: The Arc/Arc Toy Model

From the mechanical study of the squeezed helical filament, we found that there is a twist expulsion regime where the ground state is circular.
Twist-kinks appear only as thermal excitations, which are penalized by the Boltzmann weight $e^{-E}$ (with $E$ the twist-kink self-energy in units of $k_BT$) and are localized. We can look at them as quasi-particles. If their separation remains on average much larger than their extension, we can neglect their extension, their mutual interaction and the interaction with the filament's ends. The model boils down to an ideal gas of twist-kinks separating regions of flipped curvature. We call this toy model the arc/arc model [ ]. The average number of twist-kinks is $\langle k\rangle = N e^{-E}$. As the twist-kinks do not interact, their number is Poisson distributed, and $P(k) = e^{-\langle k\rangle}\langle k\rangle^k/k!$. In the following, we derive the statistics of the deflection angle for this model. We will account for 2D flexural fluctuations governed by the persistence length $\ell$ and show that these fluctuations factorize out from the partition function in a convolution product. By construction, the toy model misses twist fluctuations beyond those associated with the fluctuation of the number and positions of twist-kinks. To ease the calculations, it is convenient to work in Fourier space for the bending angle and in Laplace space for the arc-length along the filament. In the absence of bending fluctuations, the distribution of the increment in angle $\theta - \theta_0$ along an arc of length $s$ between two twist-kinks is deterministic and obeys $g_\pm = \delta(\theta - \theta_0 \mp \omega s)$ for an arc with positive/negative curvature. This can be Fourier transformed with respect to the bending angle increment as $g_\pm(q, s) = e^{\pm iq\omega s}$. Accounting for the Gaussian flexural fluctuations (see Section 1), the angular distribution of the deflection along an arc of length $s$ is $g_\pm(q, s)\, e^{-q^2 s/(2\ell)}$. The deflection angle being additive along the filament, the statistical weight of a given sequence of arcs separated by twist-kinks is the product of the corresponding $g_\pm(q, s)\, e^{-q^2 s/(2\ell)}$ factors with the Boltzmann weights $e^{-E}$ associated with the twist-kinks. It is obvious that the Gaussian fluctuation factors multiply to $e^{-q^2 S/(2\ell)}$ for any sequence. It is hence enough to calculate the partition sum for infinite stiffness (in its discrete formulation, the problem with infinite stiffness maps on a finite one-dimensional Ising spin system on a segment), without the Gaussian bending fluctuations, which can be taken into account later by a convolution in angular space, as could have been anticipated. We hence just have to calculate the partition sum for the bare $g_\pm$ propagators. Note that these partition sums will have a bounded support in the variable $\theta$. It is now convenient to get rid of the convolution in the arc-length (the arc-lengths add up to $S = Nb$) by taking the Laplace transform of the $g_\pm(q, s)$ with respect to $n = s/b$ (in using the Laplace transform, rather than the z-transform, we take a quasi-continuum limit). This is simply $g_\pm(q, p) = 1/(p \mp iq\omega b)$, with $p$ the Laplace variable conjugate to $n$. The statistical weight of a sequence is now just the product of the associated Fourier–Laplace propagators with the Boltzmann weights of the twist-kinks. Taking all configurations starting out and ending with a positive curvature, their partition sum reads $Z_{++} = \sum_{j=0}^{\infty} g_+(q,p)\left(g_-(q,p)\, g_+(q,p)\right)^j e^{-2jE}$. Similar sums $Z_{--}$ and $Z_{+-}$ describe the other edge curvatures.
We may, for example, evaluate the partition sum for an unbiased starting condition with equal probability $1/2$ for either curvature, $Z = (Z_{++} + Z_{--} + 2Z_{+-})/2$. After some simple algebra:

$Z(q, p) = \frac{p + e^{-E}}{p^2 + q^2\omega^2 b^2 - e^{-2E}}$

which yields $Z(\theta, p)$ after a backwards Fourier transform:

$Z(\theta, p) = \frac{p + e^{-E}}{2\omega b\,\left(p^2 - e^{-2E}\right)^{1/2}}\; e^{-\frac{\left(p^2 - e^{-2E}\right)^{1/2}}{\omega b}\,|\theta|}$

which is readily inverted back to arc-length space. After normalization to the support $[-\omega S, \omega S]$, we obtain the even deflection angle distribution function, written below for $\theta > 0$:

$P_\infty(\theta > 0, S) = \frac{e^{-E}}{2\omega b}\, e^{-\langle k\rangle} \left[ I_0\!\left(\langle k\rangle\sqrt{1 - \hat\theta^2}\right) + \frac{I_1\!\left(\langle k\rangle\sqrt{1 - \hat\theta^2}\right)}{\sqrt{1 - \hat\theta^2}} \right] u(1 - \hat\theta) + \frac{e^{-\langle k\rangle}}{2}\,\delta(\theta - \omega S)$

with $\langle k\rangle = S e^{-E}/b$ and $\hat\theta = \theta/(S\omega)$. The Heaviside function $u(x)$ (defined as $u(x) = 0$ for $x < 0$, $u(0) = 1/2$, $u(x) = 1$ for $x > 0$) was introduced, and the $I_n$ are modified Bessel functions of the first kind. The last term stands for the ground-state contribution, which is exponentially suppressed for large $\langle k\rangle$; the other contributions come from excited states and have bounded support, as anticipated. To take into account bending fluctuations, it is enough to convolute $P_\infty(\theta)$ with the Gaussian $G(\theta) = e^{-\ell\theta^2/(2S)}/\sqrt{2\pi S/\ell}$: $P(\theta) = P_\infty(\theta) \star G(\theta)$. When the width of the bending fluctuations is small, this merely transforms the delta peak into a Gaussian and smoothens the Heaviside cut-off. It has little effect on the regular part. A series of deflection distributions is shown in Figure 7. Other approaches can be used to derive Equation ( ), among them the recursive scheme described in [ ], which is a continuous analog of the transfer matrix formulation. The (few) twist-kinks diffuse rather freely along the chain, which results in augmented shape fluctuations (also reflected by the distribution $P(\theta)$) and hyper-flexibility, despite the intrinsic flexural rigidity. The mean square end-to-end distance $R_e^2$ and the radius of gyration $R_g$ are measurable quantities and interesting on their own. As noticed below Equation ( ), they can be derived from the angular correlation $C(s)$, which is given by the real part of the Fourier-transformed normalized partition function $Z(q)$ at $q = 1$:

$C(s) = \left(\frac{1}{2} - \frac{1}{2\sqrt{1 - \omega^2 b^2 e^{2E}}}\right) e^{-s\left[\frac{e^{-E}}{b}\left(1 + \sqrt{1 - \omega^2 b^2 e^{2E}}\right) + \frac{1}{2\ell}\right]} + \left(\frac{1}{2} + \frac{1}{2\sqrt{1 - \omega^2 b^2 e^{2E}}}\right) e^{-s\left[\frac{e^{-E}}{b}\left(1 - \sqrt{1 - \omega^2 b^2 e^{2E}}\right) + \frac{1}{2\ell}\right]}$

The correlation $C(s)$ is oscillatory if one turn is not destroyed by twist-kinks ($\omega b\, e^{E} > 1$) and decays monotonically otherwise. As $R_e^2$ and $R_g^2$ are linear functionals of $C(s)$, they can be written as combinations of similar quantities for the WLC given in Equation ( ):

$R_{e,g}^2 = \frac{1}{2}\left[R_{e,g}^2(\ell_1) + R_{e,g}^2(\ell_2)\right] + \frac{1}{2\sqrt{1 - \omega^2 b^2 e^{2E}}}\left[R_{e,g}^2(\ell_1) - R_{e,g}^2(\ell_2)\right]$

with $1/(2\ell_1) = \frac{e^{-E}}{b}\left(1 - \sqrt{1 - \omega^2 b^2 e^{2E}}\right) + \frac{1}{2\ell}$ and $1/(2\ell_2) = \frac{e^{-E}}{b}\left(1 + \sqrt{1 - \omega^2 b^2 e^{2E}}\right) + \frac{1}{2\ell}$. The parameters $\ell_{1,2}$ are complex conjugates in the oscillatory $C(s)$ regime; so are the associated $R_{e,g}^2(\ell_{1,2})$, which are hence not physical on their own. However, the $R_{e,g}^2$ for the helical filament as given by Equation ( ) are real. Figure 8 shows the squared radius of gyration as a function of the contour length for a typical set of parameters. As suggested by Equation ( ), for long enough chains, there is a flexible regime with an effective average persistence length.
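The deflection distribution above is straightforward to evaluate numerically. A sketch (parameter values are illustrative; the delta-peak ground-state term is left out of the grid-based convolution and would have to be added separately):

```python
import numpy as np
from scipy.special import i0, i1

def deflection_pdf(theta, S, omega, b, E, ell):
    """Regular part of the deflection distribution P_inf convolved with
    the Gaussian bending kernel G; the delta peak at theta = omega*S
    (weight exp(-<k>)/2) is omitted here."""
    k_mean = S * np.exp(-E) / b
    t_hat = theta / (S * omega)
    core = np.zeros_like(theta)
    inside = np.abs(t_hat) < 1.0
    r = np.sqrt(1.0 - t_hat[inside] ** 2)
    core[inside] = (np.exp(-E) / (2.0 * omega * b) * np.exp(-k_mean)
                    * (i0(k_mean * r) + i1(k_mean * r) / r))
    g = np.exp(-ell * theta ** 2 / (2.0 * S)) / np.sqrt(2.0 * np.pi * S / ell)
    dt = theta[1] - theta[0]
    return np.convolve(core, g, mode='same') * dt

theta = np.linspace(-3.0, 3.0, 2001)
p = deflection_pdf(theta, S=100.0, omega=0.2, b=1.0, E=3.0, ell=50.0)
```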
In the linear response regime of vanishingly weak force, the fluctuation $R_e^2$ also gives the response function, and the average elongation along the direction of the force is $\langle x\rangle = \frac{R_e^2}{2k_BT}\,f$. As anticipated from mechanics, a somewhat larger force, typically in the pN range, affects the distribution of twist-kinks, entailing a non-linear regime. Below, we present the free energy map as a function of the twist angle $\psi$ (given in units of $\pi$) and the extension $D$. We first give the map without applied force for the full Hamiltonian of the squeezed helical filament and show how it deforms under an applied force.

4.1.3. The Free Energy Map

The free energy map $F(\psi, D)$, keeping track of the extension $D$ and of the number of twist-kinks related to the global twist $n = \psi/\pi$, is obtained from a Wang–Landau Monte Carlo simulation [ ]. The parameters appearing in the full Hamiltonian take the following values: $\omega_1 = 0.26$ and $\omega_3 = 0.1$ (in units of the inverse monomer size $b^{-1}$), which correspond to the twist-expulsion parameter $\gamma = 5.47$. The chain length is rather short, $N = 48$, and edge effects are present. The helical filament is squeezed to the vicinity of the plane $z = 0$ by a harmonic potential of rigidity $25\,k_BT/b^2$. Under an external force $f$, the Gibbs free energy is obtained as $G = F - k_BT\log(I_0(fD))$, where the relative orientation of the end-to-end vector and the force is integrated out and $I_0(x)$ stands for the modified Bessel function. At large force, the end-to-end vector merely aligns with the force, and the log simplifies to $\approx fD$; at small force, all orientations contribute, and we recover the linear response result. Figure 9 shows how a force destabilizes the ground state, which corresponds to a two-turn circular shape. The vicinity of the excited state $n = 1$ gets more populated under a force, and the associated free energy minimum is better defined. This documents the injection of twist-kinks and a better localization of a single twist-kink under the applied force. It is easy to get a good intuition of these phenomena from the arc/arc model.

4.2. Emergence of Bistability

4.2.1. Long Range Elasticity and Switchability

Nature is known to play many variations on its basic mechanical components. With semiflexible chains being so abundant in living organisms, it is no surprise that they come with many interesting extras and tweaks that often serve to fine-tune their function. Two of these anomalous filament characteristics are bistability and shape cooperativity. How and when do these features emerge? To answer this question, note that biofilaments (and semiflexible chains in general) are intrinsically prone to simple modifications that generate a long-range shape coherence along the filament contour and/or destroy their unique straight ground state, cf. Figures 10 and 11. As an example, consider a simple cross-linking of two semiflexible chains by soft spring connections (like, e.g., flexible polymers). This object behaves radically differently from a WLC on short and intermediate scales.
In addition to the bending energy per length $\sim B\,\theta'^2$, with $\theta(s)$ the tangent angle at position $s$ and $B$ twice the bending constant of a single chain, there is also an inter-chain shear deformation $\tau(s)$. The latter gives a second elastic contribution, the shear energy $\sim K\tau^2$. The elastic shear constant is given by $K \sim \rho_{ch} w^2 k$ for this simple (two-filament) bundle of typical lateral width $w$, with $k$ the elastic spring constant of the individual cross-connecting chains and $\rho_{ch}$ their line density:

$E = \frac{1}{2}\int \left[B\,\theta'^2(s) + K\,\tau^2(s)\right] ds$

In the limit where the axial stretching of the two filaments can be neglected, the shear deformation at any given location along the contour can be expressed in terms of $\theta(s) - \bar\theta$, with the average angular orientation:

$\bar\theta = L^{-1}\int \theta(s)\, ds.$

The energy ( ) with the constraint ( ) can be interpreted (for weak angular deformations $\theta \ll 1$) as a WLC under strong tension. This apparent tension is peculiar, as it is not applied from the outside, but rather behaves as an internal "self-tension" that acts with respect to the mean internal orientation $\bar\theta$. This gives rise to long-range curvature-curvature interactions. For instance, it is easy to see that if a fixed curvature $\kappa_0$ is imposed within a certain region of length $l$ around some position $s_0$, it will automatically induce opposite curvature in its proximal regions, cf. Figure 10b. Far away from $s_0$, the shear and bending deformations decay exponentially, $\theta' \propto \tau \sim \kappa_0 l\,\exp(-|s - s_0|/\lambda)$, with a characteristic screening length scale set by $\lambda = \sqrt{B/K}$. The elastic screening effect is a notable characteristic of such a model. If the preformed arc is short, $l \ll \lambda$, its total energy, $E_{arc} \sim \sqrt{BK}\,\kappa_0^2 l^2$, scales quadratically with its length, very much unlike the classical WLC model, where $E_{arc} \propto l$. For longer fixed-curvature arcs, the shear energy dominates over bending and grows even quicker, with $E_{arc} \sim K\kappa_0^2 l^3$. Each piece of the arc interacts with any other one in a non-local manner, and it becomes increasingly costly to form longer arcs (the energy density $E_{arc}/l$ grows in a size-dependent manner). In the non-stretchable filament limit, this model was dubbed "the railway track model" by Everaers et al. [ ]. More elaborate, but in spirit similar versions of it, called the "soft shear" models, have gained some attention in describing long-range deformation modes and correlated reshaping of microtubules [ ]. The main idea there is that the microtubule's subunits, the tubulin protofilaments, are very rigid filamentous entities coupled by very soft lateral bonds, like in the railway-track model. Although an attractive theoretical idea, no structural biology data so far support such a large anisotropy. Despite that, there is mounting evidence for some longer-range interaction along the microtubule lattice [ ]. However, based on current experimental evidence, the nature of microtubule long-range curvature deformations appears more likely to be positively cooperative rather than anti-cooperative, as the railway-track model and its derivatives would imply. This calls for more radical revisions of the WLC model.

4.2.2. Bistability and Cooperativity

Real biological filaments have many internal degrees of freedom, even at the level of a single monomer unit or the interface of two of them. For instance, DNA bases can flip, tilt and interact with the orientation of the sugar-phosphate backbone, giving rise to the discrete A, B and Z forms of DNA [ ].
The microtubule's elementary units (tubulin dimers) act as curvature-switchable elements [ ], and actin filaments can switch their inter-unit twist [ ]. Many other examples of biofilament multi-stability emerge due to the monomer's molecular complexity. However, even generic interactions along the backbone, like tail bridging, or geometrical constraints, including confinement to surfaces, can break the symmetry and uniqueness of the filament's ground state. In many cases, multi-stability cannot simply be averaged out on biologically-relevant scales, especially when positive cooperativity between the unit conformations comes into play. For illustration, reconsider the previous case of a 3D helix that is squeezed onto a 2D surface. By forcing it onto a surface, its elastic degrees of freedom (bending and twisting) become mutually strongly coupled. The induced non-linearities lead to a symmetry breaking of the ground state. The repulsion of the torsional twist-kinks, which coincide with the curvature-switching regions of such chains, induces an effective cooperative interaction between the neighboring curved monomers. This effectively leads to large blocks of monomers, which form positive and negative bistable curvature regions. Of course, in this example, one might rightly object that the monomers of the chain, when seen in their own material reference frame, do not really switch. Only the projected signed curvature (of the centerline) can assume two values, which from the perspective of the monomers only amounts to a rotation of the monomers around the filament axis. Another generic example of a prestress-induced bistability is a semiflexible biofilament with elastic tails that span along the filament's backbone and pairwise connect distant points at typical distance $d$ with spring constants $k$, cf. Figure 11. When the filament is straight, the springs are in their extended (prestressed) state with length $d$ and have an energy $E_{ch} \sim \frac{1}{2}k d^2$. In the following, let us consider only the simplest planar filament case. The shape of such a 2D "condensed tail WLC" is determined by a competition of the stretching energy of the flexible chains and the bending energy of the semiflexible filament. The chains have the tendency to buckle the filament and form positive/negative curvature buckles of a typical size set by $d$ for $k > k_{crit} = 12 B d^{-3}$. Close to the buckling transition, $k \gtrsim k_{crit}$, and on length scales $\gg d$, the energy can be expanded and is (up to a constant):

$E_b \approx \frac{k d^3}{2}\int \left[C_1\left(\frac{k_{crit}}{k} - 1\right)\kappa^2 + C_2\, d^2\,\kappa^4\right] ds$

with $C_1$ and $C_2 > 0$ numeric constants. Such a system is bistable and displays curved sections with switchable curvature, $\kappa \propto \pm\sqrt{1 - k_{crit}/k}\,/\,d$. If the chain attachment intervals do not overlap, the curvature switching is local and non-cooperative on distances larger than $d$. However, in the more generic case when they overlap (cf. red chains in Figure 11), there is an additional cooperative coupling term emerging:

$E_{couple} \approx \frac{H}{2}\int \kappa'^2\, ds$

with $H \propto k d^5$ a higher-order "hyper-stiffness" constant. Note that the latter gives rise to a persistence of curvature, rather than the usual persistence of the tangent angle, as for the WLC. This toy model is less academic than one would think. Classical biofilaments, like microtubules, have naturally built-in long amorphous polymer tails (one per monomer) that can span in their stretched state to nearest neighbors or beyond.
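As a one-line check, the switchable curvature quoted above follows from minimizing the integrand of $E_b$ with respect to $\kappa^2$:

```latex
\frac{\partial}{\partial(\kappa^2)}
\left[C_1\!\left(\frac{k_{crit}}{k}-1\right)\kappa^2 + C_2\,d^2\,\kappa^4\right] = 0
\;\Rightarrow\;
\kappa^2 = \frac{C_1}{2C_2\,d^2}\left(1-\frac{k_{crit}}{k}\right),
\qquad
\kappa \propto \pm\,\frac{\sqrt{1-k_{crit}/k}}{d}\,.
```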
Furthermore, actin filaments and microtubules often interact with other polymer tail-forming proteins that come as an integral part of the cytoskeleton (including tau proteins, various MAPs and formins). Switchability, in particular of microtubules, might partially relate to these tails. Whatever its origin, there has been mounting experimental evidence for the bistability of the microtubule lattice [ ]. The next section (Section 4.3) is devoted to microtubules. Another way for the monomer to become bistable and attain long-range interactions has been discussed in [ ], where units with a non-linear angular potential are constrained into a circular geometry, cf. Figure 12. What happens when protein monomers with an intrinsic curvature form a stiff polymer, which is forced to close in a ring of a different radius of curvature? When the ring curvature differs from the preferred curvature of each unit, the monomers transform from mono-stable into fluctuating bistable units. This has rather dramatic effects on the overall shape of the ring. In particular, the ground state becomes highly degenerate, with a number of realizations growing exponentially with the number of monomers $N$. When the filament thermally explores this exponentially-degenerate energy landscape, it exhibits anomalous fluctuations giving rise to an effectively reduced, length-dependent persistence length. That can be interpreted as a paradoxical effective softening of the filament despite the high frustration in the form of mechanical prestress, which would naively give rise to a hardening of the system. Another interesting theme in this example is that the global integral constraint on the local curvatures $\kappa(s)$, $\oint \kappa(s)\, ds = 2\pi$, not only induces bistability, but also gives rise to a long-range, whole-system-spanning conformational interaction between the units. Close to the ground state, switching of one unit's angle to one state forces another unit, anywhere along the ring, to switch to the opposite state to satisfy the global constraint. In a continuous representation, the degeneracy is lifted by a penalty $\propto \kappa'^2$ (see Equation ( )) or by the domain-wall energy between blocks of different curvature in a discrete model.

4.3. Polymorphic Model of Microtubules

There are numerous indications of microtubules having properties beyond the WLC model. Examples occurring in vivo are their curling up, with radii of curvature of the order of a micron, during the formation of synapses [ ] and during blood platelet formation by megakaryocytes [ ]. In vitro, similar curling-up events are found in kinesin-gliding assays, i.e., when microtubules are transported along a molecular motor-covered substrate [ ]. These observations can be rationalized by taking aspects of the microtubules' microscopic structure into account, namely the possibility of conformational changes of their subunits, the tubulin dimers. The hollow tube structure of microtubules is formed by 9–15, typically 13, protofilaments, which, in turn, are formed by head-to-tail polymerization of the tubulin dimers. Experiments on isolated, taxol-stabilized protofilaments have shown their propensity to be curved [ ], indicating that curved conformations are energetically preferred. The distribution is bimodal, with radii of curvature predominantly at 20–30 nm ("strongly-curved" state) and at 250 nm ("weakly-curved" state).
A second important piece of information, obtained earlier by electron cryo-microscopy, is that the microtubule lattice displays two elongational states [ ], one of them being about 2% shorter than the other. Both facts can be brought into harmony by assuming that the curved conformations of the dimers are slightly shorter (cf. also Figure 13). The polymorphic model of microtubules [ ] incorporates these facts as follows, cf. Figure 13: it assumes that a tubulin dimer can reduce its free energy by $|\Delta G|$ by switching to the curved state. In turn, the curved state is accompanied by elastic strains. The tubulins in the cross-section, marked as spheres in Figure 13a, are treated as a two-layered structure. If the tubulin is in the straight conformation (blue in Figure 13a), there is no preferred strain. However, in the curved conformation (purple in Figure 13a), the respective tubulin subunit experiences a preferred strain $\varepsilon_i > 0$ (tensile) in the inner layer (for radius $\rho \in [R_i, R_m]$) and $\varepsilon_o < 0$ (compressive) in the outer layer (for $\rho \in [R_m, R_o]$; with $R_i$, $R_o$ the inner and outer radius of the microtubule), reflecting preferred outwards curving. This simple model can explain the curling up as follows: when the curved protofilaments are constrained in the microtubule's lattice, they cannot all bend as they would intrinsically prefer; the microtubule is a frustrated, mechanically prestressed system. Nevertheless, states where a few neighboring protofilaments are curved, and hence the whole microtubule is, while the others have to accommodate and pay a bending energy penalty, can indeed have lower energy than the completely straight state. This conjecture can be quantified by calculating the total energy of a cross-section, having two parts: first, the switching energy $e_{switch} = \frac{\Delta G}{b}\sum_n \sigma_n$ (with $n \in [1, N]$ the index of each tubulin, e.g., $N = 13$, $b \approx 8\,\mathrm{nm}$ the dimer size, and $\sigma_n = 0$ ($\sigma_n = 1$) for the straight (curved) conformation, respectively). Second, the elastic energy:

$e_{el} = \frac{Y}{2}\int_{R_i}^{R_o} \rho\, d\rho \int_0^{2\pi} d\phi\, \left[\varepsilon(\rho, \phi) - \varepsilon_{pref}(\rho, \phi)\right]^2,$

with $Y$ an elastic modulus, where $\varepsilon(\rho, \phi) = -\vec\kappa\cdot\vec\rho + \bar\varepsilon$ is the inner strain (with $\vec\kappa = (\kappa_x, \kappa_y)$ the centerline curvature and $\bar\varepsilon$ the mean stretching strain) and $\varepsilon_{pref}$ the preferred strain discussed above (i.e., $\varepsilon_{pref} = 0$ if the respective tubulin at $\phi$ is straight, and $\varepsilon_{pref} = \varepsilon_{i/o}$ in the inner/outer layers when it is curved). This energy has been studied extensively in [ ]. Since switched dimers effectively strongly attract each other within the lattice, one can consider all switched dimers to form a single block. The angular size of the switched block, in turn, defines the "polymorphic variable" (or order parameter) $\phi_p$. In case $\phi_p = 0$, all tubulins and, hence, the microtubule are straight. In case $\phi_p = 2\pi$, all tubulins are switched to curved; hence, the microtubule is also straight, but shortened due to the intrinsic strains. In case $\phi_p$ is in between these values, the microtubule is curved, cf. Figure 13. Using this new variable, it is straightforward to evaluate $e_{tot} = e_{el} + e_{switch} + F\bar\varepsilon + M\kappa$, including an externally-applied force $F$ and torque $M$.
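To make this minimization concrete, here is a hedged toy discretization of the cross-sectional energy; all parameter values are illustrative assumptions, the switched block is centered at $\phi = 0$, and only the in-plane curvature component is kept (by symmetry, the optimal curvature then points along $x$):

```python
import numpy as np
from scipy.optimize import minimize

Ri, Rm, Ro = 8.4, 10.0, 12.6   # assumed radii (nm)
eps_i, eps_o = 0.02, -0.02     # assumed preferred strains
Y = 1.0                        # modulus in arbitrary units

rho = np.linspace(Ri, Ro, 80)
phi = np.linspace(-np.pi, np.pi, 360, endpoint=False)
P, R = np.meshgrid(phi, rho)

def e_el(x, phi_p):
    """Cross-sectional elastic energy for a switched block of angular
    size phi_p; x = (curvature kappa_x, mean stretching strain)."""
    kappa, eps_bar = x
    strain = -kappa * R * np.cos(P) + eps_bar
    pref = np.where(np.abs(P) <= phi_p / 2.0,
                    np.where(R < Rm, eps_i, eps_o), 0.0)
    return 0.5 * Y * np.trapz(np.trapz((strain - pref) ** 2 * R,
                                       phi, axis=1), rho)

for phi_p in (0.0, 0.5 * np.pi, np.pi, 2.0 * np.pi):
    res = minimize(e_el, x0=[0.0, 0.0], args=(phi_p,))
    print(phi_p, res.x)  # optimal curvature and strain vs. block size
```

For intermediate block sizes, the optimal curvature is nonzero, while $\phi_p = 0$ and $\phi_p = 2\pi$ both give straight (but differently strained) states, in line with the discussion of the polymorphic variable above.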
In doing so, a characteristic curvature can be identified, which reads:
$\kappa_1 = \frac{8}{3\pi}\,\frac{\varepsilon_i \left( R_m^3 - R_i^3 \right) + \varepsilon_o \left( R_o^3 - R_m^3 \right)}{R_o^4 - R_i^4}.$
Minimizing the total energy with respect to the strain ($\bar{\varepsilon}$) and the curvature ($\kappa$), assuming that these adjust to the external loads, yields the "polymorphic potential":
$e_{\mathrm{tot}}(\phi_p) = B \kappa_1^2 \left[ -c_p^2\, \phi_p^2 + f\, \phi_p - m \sin\frac{\phi_p}{2} - \frac{1}{2} \sin^2\frac{\phi_p}{2} \right],$
as a function of the rescaled, generalized force $f$ and torque $m$. The generalized force contains the external force, a contribution from the lattice constraints and, most importantly, a contribution linear in the switching energy, $\propto \Delta G$. Figure 13b shows that for appropriate loading situations (given $f$ and $m$), states with $\phi_p \neq 0, 2\pi$ can have minimum energy. The microtubule will hence be macroscopically curved, with a curvature close to the characteristic one, $\kappa_1$. Importantly, using the measured curvature of protofilaments (in the highly curved state) and the 2% shortening of the microtubule, the preferred strains can be determined; together with the known inner and outer radius of the microtubule, Equation ( ) can then be evaluated to yield $\kappa_1 \simeq (1\,\mu\mathrm{m})^{-1}$, as observed experimentally (for the weakly-curved protofilament state, one gets $\kappa_{1,\mathrm{weak}} \simeq (10\,\mu\mathrm{m})^{-1}$).

To exemplify the polymorphic microtubule dynamics, [ ] considered microtubules gliding on motor carpets, getting temporarily stuck and then continuing to move on. It could be shown that such intermittent buckling events, under forces of the order of 10 pN, can easily lead to a complete curl-up of microtubules, cf. Figure 13d, if the switching energy is about $\Delta G \simeq 5 \pm 1.5\,k_BT$. Interestingly, such buckling events correspond to hysteretic paths in the $m$-$f$ plane of the energy, Equation ( ). The finally obtained, curled-up state is only metastable (i.e., a local minimum). In fact, also experimentally, curled-up microtubules are stable only for several tens of seconds. In case the switching energy is low, the classical WLC behavior is regained; see Figure 13c.

5. Miscellaneous Topics and Outlook

Despite its long history, the WLC model is not yet completely explored. We only addressed some aspects of WLCs at interfaces; more are listed below. Even in this classical corpus, there remain rather subtle issues, often linked to excluded volume interactions in denser systems. The WLC model was very successful in describing stretched double-stranded DNA in its B-phase. This made the model very popular in biophysics, where it tends to be used for all kinds of biofilaments. There is increasing evidence that the mechanics and dynamics of biofilaments are not entirely captured by the WLC model. We discussed several models which go beyond the WLC. Dedicated experiments, like the study of doubly-confined actin or vimentin filaments in microfluidic channels [ ], computer simulations [ ] and analytical theory [ ] are all very recent. Understanding the physics of biofilaments in relation to their biological function appears as a major challenge.

Many important issues on semiflexible chains at interfaces, even some related to work in our group, could not be addressed. We list a few and refer the reader to published work, whenever possible.

• We restricted our study to adsorption from dilute solution and mentioned that nematic order is expected in very dense solutions or melts.
Even if the nematic order is not thermodynamically stable in the bulk, there may be a thin layer closest to the wall where orientational order prevails, as described in work by Milner [ ]. Strong adsorption of WLCs in the melt has been simulated recently [ ], with a focus on the distribution of loops and tails, layering effects closest to the wall and local mobility.
• We only considered adsorption on undeformable planar surfaces. Biofilaments can deform soft surfaces like membranes to optimize their surface binding [ ], for example, when the orientation for adsorption is not compatible with the in-plane orientation of the preferred curvature. WLCs have to adapt to the curvature of undeformable shells to adsorb, and this can lead to special arrangements, as found in [ ]. Surfaces bearing fixed obstacles alter the 2D dynamics of a WLC [ ].
• In this paper, we focused on WLC adsorption. A related issue is forced desorption under an applied tension [ ]. This was also analyzed in the context of histone/d-DNA complexes [ ].
• Another way to fix polymers on a surface is end-grafting, mentioned earlier in the context of the latest stage of irreversible adsorption. One recent simulation work compares end-grafted and middle-grafted WLCs in great detail [ ], which is related to the distributions of loops and tails discussed above (Section 3.1). The role of stiffness in densely-grafted brushes was considered early on by Pickett and Witten [ ]. More recently, simulations in the Binder group discuss grafted WLCs in the brush regime and the crossover to the dilute surface regime (so-called mushrooms) [ ].
• Anisotropic filaments (tapes) with two flexural moduli [ ] do occur; examples are synthetic ladder polymers or short (rigid) associated polypeptides. Helical polypeptide tapes [ ] can reassemble into a hierarchy of structures. Similar associations play a role in amyloid diseases [ ]. Mesini and coworkers studied a family of organogelators and report various self-assembled structures, including microtubes [ ]. Recently, the adsorption of helical tapes on rigid surfaces was considered by Quint [ ].
• It is sometimes argued that the reptation tube can be represented by some transverse (in a first approximation, harmonic) potential [ ]. The motion of a WLC confined to an effective tube by a transverse harmonic potential was considered by several authors [ ]. The loosely-related topic of polymers in random media was considered in [ ].
• Some larger-scale man-made chains can be considered as semiflexible. The mechanics of semiflexible chains formed by poly(ethylene glycol)-linked paramagnetic particles was studied by Gast and coworkers [ ] as a function of the length of the poly(ethylene glycol) spacer.
• Concerning the helical filaments squeezed in 2D discussed in Section 4.1, many open questions remain in the specific biological contexts. Open questions pertain also to the nonequilibrium behavior of squeelices: e.g., when they are transported along molecular motor-covered substrates, they display circular, spiraling or wavy trajectories [ ] that are also observed experimentally [ ].
• The investigation of cooperativity and switchability discussed in Section 4.2 and Section 4.3 has only just begun. For instance, there are indications of cooperative binding of molecular motors on microtubules [ ], for which cooperative conformational changes in the tubulins may be responsible [ ].
The polymorphic switching model discussed in Section 4.3 has so far only been employed for taxol-stabilized microtubules, for which there is the most experimental evidence (taxol being the main microtubule stabilizer preventing further polymerization/depolymerization). In fact, many other molecules, especially microtubule-associated proteins, may also induce conformational changes of tubulin upon binding, with different associated energy gains $\Delta G$. This subject deserves further study as a new route for microtubule regulation that goes beyond the regulation of spatial distribution and length (as typically discussed in biology) towards a regulation of their mechanical behavior and response.

Nam-Kyung Lee acknowledges support of the Korean government through NRF grant 2014R1A1A2055681.

Author Contributions
All authors contributed equally. Jörg Baschnagel, Joachim Wittmer and Hendrik Meyer were more specifically in charge of the WLC part, while Falko Ziebert, Gi-Moon Nam, Hervé Mohrbach and Igor Kulić mostly contributed to the biofilament part. Albert Johner and Nam-Kyung Lee wrote the introduction, conclusions and adsorption part, and contributed to the squeelix part.

Conflicts of Interest
The authors declare no conflict of interest.

1. Doi, M.; Edwards, S.F. The Theory of Polymer Dynamics; Clarendon Press: Oxford, UK, 1986.
2. Rubinstein, M.; Colby, R. Polymer Physics; Oxford University Press: New York, NY, USA, 2003.
3. Howard, J. Mechanics of Motor Proteins and the Cytoskeleton; Sinauer: Sunderland, MA, USA, 2001.
4. Amos, L.A.; Amos, W.G. Molecules of the Cytoskeleton; Guilford Press: New York, NY, USA, 1991.
5. Kulic, I.M.; Brown, A.E.X.; Kim, H.; Kural, C.; Blehm, B.; Selvin, P.R.; Nelson, P.C.; Gelfand, V.I. The role of microtubule movement in bidirectional organelle transport. Proc. Natl. Acad. Sci. USA 2008, 105, 10011.
6. Kruse, K.; Joanny, J.F.; Jülicher, F.; Prost, J.; Sekimoto, K. Asters, Vortices, and Rotating Spirals in Active Gels of Polar Filaments. Phys. Rev. Lett. 2004, 92, 078101.
7. Grosberg, A.Y.; Khokhlov, A.R. Statistical Physics of Macromolecules; AIP: New York, NY, USA, 1994.
8. Hsu, H.P.; Paul, W.; Binder, K. Breakdown of the Kratky-Porod wormlike chain model for semiflexible polymers in two dimensions. EPL 2011, 95, 68004.
9. Flory, P.J. Statistical Mechanics of Chain Molecules; Oxford University Press: New York, NY, USA, 1988.
10. Wittmer, J.P.; Beckrich, P.; Meyer, H.; Cavallo, A.; Johner, A.; Baschnagel, J. Intramolecular long-range correlations in polymer melts: The segmental size distribution and its moments. Phys. Rev. E 2007, 76, 011803.
11. Wittmer, J.P.; Cavallo, A.; Xu, H.; Zabel, J.; Polińska, P.; Schulmann, N.; Meyer, H.; Farago, J.; Johner, A.; Obukhov, S.; et al. Scale-free static and dynamical correlations in melts of monodisperse and Flory distributed homopolymers: A review of recent bond-fluctuation model studies. J. Stat. Phys. 2011, 145, 1017–1126.
12. Odijk, T. Polyelectrolytes near the rod limit. J. Polym. Sci. 1977, 15, 477.
13. Skolnick, J.; Fixman, M. Electrostatic Persistence Length of a Wormlike Polyelectrolyte. Macromolecules 1977, 10, 944.
14. Odijk, T. The statistics and dynamics of confined or entangled stiff polymers. Macromolecules 1983, 16, 1340.
15. Khokhlov, A.R.; Katchaturian, K.A. On the theory of weakly charged polyelectrolytes. Polymer 1982, 23, 1742.
16. Barrat, J.L.; Joanny, J.F. Theory of Polyelectrolyte Solutions. Adv. Chem. Phys. 1996, 94, 1.
17. Ha, B.Y.; Thirumalai, D. Persistence length of flexible polyelectrolyte chains. J. Chem. Phys. 1999, 110, 7533.
18. Manghi, M.; Netz, R.R. Variational theory for a single polyelectrolyte chain revisited. Eur. Phys. J. E 2004, 14, 67.
19. Everaers, R.; Milchev, A.; Yamakov, V. The electrostatic persistence length of polymers beyond the OSF limit. Eur. Phys. J. E 2002, 8, 3.
20. Fleck, C.; Netz, R.R.; von Gruenberg, H.H. Poisson-Boltzmann theory for membranes with mobile charged lipids and the pH dependent interaction of a DNA molecule with a membrane. Biophys. J. 2002, 82, 76.
21. Lee, N.K.; Schmatko, T.; Muller, P.; Maaloum, M.; Johner, A. Shape of adsorbed supercoiled plasmids: An equilibrium description. Phys. Rev. E 2012, 85, 051804.
22. Samaj, L.; Trizac, E. Counterions at Highly Charged Interfaces: From One Plate to Like-Charge Attraction. Phys. Rev. Lett. 2011, 106, 078301.
23. Winkler, R.G. Semiflexible Polymers in Shear Flow. Phys. Rev. Lett. 2006, 97, 128301.
24. Harasim, M.; Wunderlich, B.; Preleg, O.; Kröger, M.; Bausch, A.R. Direct Observation of the Dynamics of Semiflexible Polymers in Shear Flow. Phys. Rev. Lett. 2013, 110, 108302.
25. Winkler, R. Dynamics of semiflexible polymers. Polymers 2016, under review.
26. Nam, G.M.; Johner, A.; Lee, N.K. Drift and diffusion of a confined semiflexible chain. Eur. Phys. J. E 2010, 32, 119.
27. Nyrkova, I.; Semenov, A.N. Dynamic scattering of semirigid macromolecules. Phys. Rev. E 2007, 76, 011802.
28. Everaers, R.; Jülicher, F.; Ajdari, A.; Maggs, A.C. Dynamic Fluctuations of Semiflexible Filaments. Phys. Rev. Lett. 1999, 82, 3717.
29. Thüroff, F.; Obermayer, B.; Frey, E. Longitudinal response of confined semiflexible polymers. Phys. Rev. E 2011, 83, 021802.
30. Carlier, M.F.; Pantaloni, D. Control of Actin Assembly Dynamics in Cell Motility. J. Biol. Chem. 2007, 282, 23005.
31. Desai, A.; Mitchison, T.J. Microtubule Polymerization Dynamics. Annu. Rev. Cell Dev. Biol. 1997, 13, 83.
32. Padinhateeri, R.; Kolomeisky, A.; Lacoste, D. Random Hydrolysis Controls the Dynamic Instability of Microtubules. Biophys. J. 2012, 102, 1274.
33. Hamprecht, B.; Janke, W.; Kleinert, H. End-to-end distribution function of two-dimensional stiff polymers for all persistence lengths. Phys. Lett. A 2004, 330, 254–259.
34. Cardy, J.L.; Saleur, H. Universal distance ratios for two-dimensional polymers. J. Phys. A: Math. Gen. 1989, 22, L601–L604.
35. Zierenberg, J.; Marenz, M.; Janke, W. Dilute Semiflexible Polymers with Attraction: Collapse, Folding and Aggregation. Polymers 2016, under review.
36. Duplantier, B.; Saleur, H. Exact Tricritical Exponents for Polymers at the theta Point in Two Dimensions. Phys. Rev. Lett. 1987, 59, 537.
37. Blöte, H.W.J.; Nienhuis, B. Critical behaviour and conformal anomaly of the O(n) model on the square lattice. J. Phys. A: Math. Gen. 1989, 22, 1415–1438.
38. Vernier, E.; Jacobsen, J.L.; Saleur, H. A new look at the collapse of two-dimensional polymers. J. Stat. Mech. 2015, 2015, P09001.
39. Baumgärtner, A.; Yoon, D. Phase transition of lattice polymer systems. J. Chem. Phys. 1983, 79, 521.
40. Wittmer, J.P.; Paul, W.; Binder, K. Rouse and Reptation Dynamics at Finite Temperatures: A Monte Carlo Simulation. Macromolecules 1992, 25, 7211–7216.
41. Wittmer, J.P. Untersuchung des Einflusses der Kettensteifigkeit auf die Dynamik von Polymeren in der Schmelze mittels Monte-Carlo-Simulation. Master's Thesis (Diplomarbeit), Johannes-Gutenberg Universität, Mainz, Germany, 1991.
42. Milchev, A.; Landau, D. Monte-Carlo study of semiflexible living polymers. Phys. Rev. E 1995, 52, 6431.
43. Baschnagel, J.; Wittmer, J.P.; Meyer, H. Monte Carlo Simulation of Polymers: Coarse-Grained Models. In Computational Soft Matter: From Synthetic Polymers to Proteins; Attig, N., Ed.; NIC Series: Jülich, Germany, 2004; Volume 23, pp. 83–140.
44. Vanderzande, C. Lattice Models of Polymers; Cambridge University Press: Cambridge, UK, 2004.
45. Jacobsen, J.; Kondev, J. Continuous melting of compact polymers. Phys. Rev. Lett. 2004, 92, 210601.
46. Jacobsen, J.; Kondev, J. Conformal field theory of the Flory model of polymer melting. Phys. Rev. E 2004, 69, 066108.
47. Hsu, H.P.; Binder, K. Effect of Chain Stiffness on the Adsorption Transition of Polymers. Macromolecules 2013, 46, 2496.
48. Allen, M.; Tildesley, D. Computer Simulation of Liquids; Oxford University Press: Oxford, UK, 1994.
49. Frenkel, D.; Smit, B. Understanding Molecular Simulation—From Algorithms to Applications, 2nd ed.; Academic Press: San Diego, CA, USA, 2002.
50. Binder, K.; Herrmann, D. Monte Carlo Simulation in Statistical Physics: An Introduction, 5th ed.; Springer: Heidelberg, Germany, 2002.
51. Landau, D.P.; Binder, K. A Guide to Monte Carlo Simulations in Statistical Physics; Cambridge University Press: Cambridge, UK, 2000.
52. Carmesin, I.; Kremer, K. The bond fluctuation method: A new effective algorithm for the dynamics of polymers in all spatial dimensions. Macromolecules 1988, 21, 2819.
53. Carmesin, I.; Kremer, K. Static and dynamic properties of 2-dimensional polymer melts. J. Phys. 1990, 51, 915.
54. Deutsch, H.; Binder, K. Interdiffusion and self-diffusion in polymer mixtures: A Monte Carlo study. J. Chem. Phys. 1991, 94, 2294.
55. Grest, G.S.; Kremer, K. Molecular-dynamics simulation for polymers in the presence of a heat bath. Phys. Rev. A 1986, 33, 3628.
56. Plimpton, S.J. Fast Parallel Algorithms for Short-Range Molecular Dynamics. J. Comp. Phys. 1995, 117, 1–19.
57. Gerroff, I.; Milchev, A.; Binder, K.; Paul, W. A new off-lattice Monte-Carlo model for polymers—A comparison of static and dynamic properties with the bond-fluctuation model and application to random-media. J. Chem. Phys. 1993, 98, 6526.
58. Milchev, A.; Paul, W.; Binder, K. Off-lattice Monte-Carlo simulation of dilute and concentrated polymer-solutions under theta conditions. J. Chem. Phys. 1993, 99, 4786.
59. Milchev, A.; Binder, K. Anomalous diffusion and relaxation of collapsed polymer-chains. Europhys. Lett. 1994, 26, 671.
60. De Gennes, P.G. Scaling Concepts in Polymer Physics; Cornell University Press: Ithaca, NY, USA, 1979.
61. Duplantier, B. Exact contact critical exponents of a self-avoiding polymer chain in two dimensions. Phys. Rev. B 1987, 35, 5290–5293.
62. Duplantier, B. Statistical Mechanics of Polymer Networks of Any Topology. J. Stat. Phys. 1989, 54, 581.
63. Semenov, A.N.; Johner, A. Theoretical notes on dense polymers in two dimensions. Eur. Phys. J. E 2003, 12, 469.
64. Nelson, P.H.; Hatton, T.A.; Rutledge, G. General reptation and scaling of 2d athermal polymers on close-packed lattices. J. Chem. Phys. 1997, 107, 1269.
65. Yethiraj, A. Computer simulation study of two-dimensional polymer solutions. Macromolecules 2003, 36, 5854.
66. Meyer, H.; Kreer, T.; Aichele, M.; Cavallo, A.; Johner, A.; Baschnagel, J.; Wittmer, J.P. Perimeter length and form factor in two-dimensional polymer melts. Phys. Rev. E 2009, 79, 050802.
67. Meyer, H.; Wittmer, J.P.; Kreer, T.; Johner, A.; Baschnagel, J. Static properties of polymer melts in two dimensions. J. Chem. Phys. 2010, 132, 184904.
68. Schulmann, N.; Meyer, H.; Kreer, T.; Cavallo, A.; Johner, A.; Baschnagel, J.; Wittmer, J.P. Strictly two-dimensional self-avoiding walks: Density crossover scaling. Polym. Sci. Ser. C 2013, 55, 990–1020.
69. Maier, B.; Rädler, J.O. Conformation and self-diffusion of single DNA molecules confined to two dimensions. Phys. Rev. Lett. 1999, 82, 1911.
70. Maier, B.; Rädler, J.O. DNA on fluid membranes: A model polymer in two dimensions. Macromolecules 2000, 33, 7185.
71. Sun, F.; Dobrynin, A.; Shirvanyants, D.; Lee, H.; Matyjaszewski, K.; Rubinstein, G.; Rubinstein, M.; Sheiko, S. Flory theorem for structurally asymmetric mixtures. Phys. Rev. Lett. 2007, 99, 137801.
72. Nikomarov, E.; Obukhov, S. Extended description of a solution of linear polymers based on a polymer-magnet analogy. Sov. Phys. JETP 1981, 53, 328–335.
73. Cavallo, A.; Müller, M.; Binder, K. Anomalous Scaling of the Critical Temperature of Unmixing with Chain Length for Two-Dimensional Polymer Blends. Europhys. Lett. 2003, 61, 214–220.
74. Maier, B.; Rädler, J.O. Shape of self-avoiding walks in two dimension. Macromolecules 2001, 34, 5723.
75. Cavallo, A.; Müller, M.; Binder, K. Unmixing of Polymer Blends Confined in Ultrathin Films: Crossover between Two-Dimensional and Three-Dimensional Behavior. J. Phys. Chem. B 2005, 109, 6544.
76. Schulmann, N.; Meyer, H.; Wittmer, J.P.; Johner, A.; Baschnagel, J. Interchain monomer contact probability in two-dimensional polymer solutions. Macromolecules 2012, 45, 1646.
77. Gallyamov, M.; Tartsch, B.; Potemkin, I.; Börner, H.; Matyjaszewski, K.; Khokhlov, A.; Möller, M. Individual brush molecules in dense 2D layers restoring high degree of extension after collapse-decollapse cycle: Directly measured scaling exponents. Eur. Phys. J. E 2009, 29, 73–85.
78. Arriaga, L.R.; Monroy, F.; Langevin, D. Influence of backbone rigidity on the surface rheology of acrylic Langmuir polymer films. Soft Matter 2011, 7, 7754.
79. Kremer, K.; Grest, G. Dynamics of entangled linear polymer melts: A molecular dynamics simulation. J. Chem. Phys. 1990, 92, 5057–5086.
80. Schulmann, N.; Xu, H.; Meyer, H.; Polińska, P.; Baschnagel, J.; Wittmer, J.P. Strictly two-dimensional self-avoiding walks: Thermodynamic properties revisited. Eur. Phys. J. E 2012, 35, 93.
81. Jacobsen, J.L.; Alet, F. Semiflexible Fully Packed Loop Model and Interacting Rhombus Tilings. Phys. Rev. Lett. 2009, 102, 145702.
82. Semenov, A.N.; Obukhov, S.P. Fluctuation-induced long-range interactions in polymer systems. J. Phys.: Condens. Matter 2005, 17, S1747.
83. Hsu, H.P.; Kremer, K. Static and dynamic properties of large polymer melts in equilibrium. J. Chem. Phys. 2016, 144, 154907.
84. Janke, W. Computer Simulation Studies of Polymer Adsorption and Aggregation—From Flexible to Stiff. Phys. Procedia 2015, 68, 69–79.
85. Linse, P.; Kallrot, N. Polymer Adsorption from Bulk Solution onto Planar Surfaces: Effect of Polymer Flexibility and Surface Attraction in Good Solvent. Macromolecules 2010, 43, 2054.
86. De Gennes, P.G. Polymers at an interface; a simplified view. Adv. Colloid Interface Sci. 1987, 27, 189.
87. Semenov, A.N.; Bonet-Avalos, J.; Johner, A.; Joanny, J.F. Adsorption of Polymer Solutions onto a Flat Surface. Macromolecules 1996, 29, 2179.
88. Konstadinidis, K.; Prager, S.; Tirrell, M. Monte Carlo simulation of irreversible polymer adsorption: Single chains. J. Chem. Phys. 1992, 97, 7777.
89. Konstadinidis, K.; Thakkar, B.; Chakraborty, A.; Potts, L.W.; Tannenbaum, R.; Tirrell, M.; Evans, J.F. Segment level chemistry and chain conformation in the reactive adsorption of poly(methyl methacrylate) on aluminum oxide surfaces. Langmuir 1992, 8, 1307.
90. Fleer, G.; Stuart, M.A.; Scheutjens, J.M.H.M.; Cosgrove, T.; Vincent, B. Polymers at Interfaces; Chapman and Hall: London, UK, 1993.
91. Semenov, A.N.; Joanny, J.F. Structure of Adsorbed Polymer Layers: Loops and Tails. Europhys. Lett. 1995, 29, 279.
92. Semenov, A.N. Adsorption of a semiflexible wormlike chain. Eur. Phys. J. E 2002, 9, 353.
93. Lee, N.K.; Jung, Y.K.; Johner, A. Irreversible Adsorption of Worm-Like Chains. Macromolecules 2015, 48, 7681.
94. De Gennes, P.G. Polymer solutions near an interface. Adsorption and depletion layers. Macromolecules 1981, 14, 1637.
95. Maggs, A.C.; Huse, D.A.; Leibler, S. Unbinding Transitions of Semi-flexible Polymers. Europhys. Lett. 1989, 8, 615.
96. Kuznetsov, D.V.; Sung, W. A Green's function perturbation theory for nonuniform semiflexible polymers: Phases and their transitions near attracting surfaces. J. Chem. Phys. 1997, 107, 4729.
97. Kuznetsov, D.V.; Sung, W. Semiflexible Polymers near Attracting Surfaces. Macromolecules 1998, 31, 2679.
98. Kuznetsov, D.V.; Sung, W. A New Scaling Theory of Semiflexible Polymer Phases Near Attracting Surfaces. J. Phys. II 1997, 7, 1287.
99. Deng, M.; Jiang, Y.; Liang, H.; Chen, J.Z.Y. Adsorption of a wormlike polymer in a potential well near a hard wall: Crossover between two scaling regimes. J. Chem. Phys. 2010, 133, 034902.
100. O'Shaughnessy, B.; Vavylonis, D. Irreversibility and Polymer Adsorption. Phys. Rev. Lett. 2003, 90, 056103.
101. O'Shaughnessy, B.; Vavylonis, D. Irreversible adsorption from dilute polymer solutions. Eur. Phys. J. E 2003, 11, 213.
102. O'Shaughnessy, B.; Vavylonis, D. The slowly formed Guiselin brush. EPL 2003, 63, 895.
103. O'Shaughnessy, B.; Vavylonis, D. Non-Equilibrium in Adsorbed Polymer Layers. J. Phys.: Condens. Matter 2005, 17, R63–R99.
104. Tarjus, G.; Viot, P. Asymptotic results for the random sequential addition of unoriented objects. Phys. Rev. Lett. 1991, 67, 1875.
105. Viot, P.; Tarjus, G.; Ricci, S.; Talbot, J. Random sequential adsorption of anisotropic particles. I. Jamming limit and asymptotic behavior. J. Chem. Phys. 1992, 97, 5212.
106. Lee, N.K.; Jung, Y.K.; Johner, A. Crossing and alignment of irreversibly adsorbed Worm-Like-Chains. (in preparation)
107. Sanchez, T.; Kulić, I.M.; Dogic, Z. Circularization, Photomechanical Switching, and a Supercoiling Transition of Actin Filaments. Phys. Rev. Lett. 2010, 104, 098103.
108. Pampaloni, F.; Lattanzi, G.; Jonas, A.; Surrey, T.; Frey, E.; Florin, E.L. Thermal fluctuations of grafted microtubules provide evidence of a length-dependent persistence length. Proc. Natl. Acad. Sci. USA 2006, 103, 10248.
109. Taute, K.M.; Pampaloni, F.; Frey, E.; Florin, E.L. Microtubule Dynamics Depart from the Wormlike Chain Model. Phys. Rev. Lett. 2008, 100, 028102.
110. Roos, J.; Hummel, T.; Ng, N.; Klämbt, C.; Davis, G.W. Drosophila Futsch Regulates Synaptic Microtubule Organization and Is Necessary for Synaptic Growth. Neuron 2000, 26, 371.
111. Conde, C.; Cáceres, A. Microtubule assembly, organization and dynamics in axons and dendrites. Nat. Rev. Neurosci. 2009, 10, 319.
112. Amos, L.A.; Amos, W.B. The bending of sliding microtubules imaged by confocal light microscopy and negative stain electron microscopy. J. Cell Sci. Suppl. 1991, 14, 95.
113. Liu, L.; Tüzel, E.; Ross, J.L. Loop formation in microtubules during gliding at high density. J. Phys.: Condens. Matter 2011, 23, 374104.
114. Lu, C.; Reedy, M.; Erickson, H.P. Straight and Curved Conformations of FtsZ Are Regulated by GTP Hydrolysis. J. Bacteriol. 2000, 182, 164.
115. Van den Ent, F.; Amos, L.; Löwe, J. Prokaryotic origin of the actin cytoskeleton. Nature 2001, 413, 39.
116. Hasegawa, E.; Kamiya, R.; Asakura, S. Thermal transition in helical forms of Salmonella flagella. J. Mol. Biol. 1982, 160, 609.
117. Li, X.E.; Holmes, K.C.; Lehman, W.; Jung, H.; Fischer, S. The shape and flexibility of tropomyosin coiled coils: Implications for actin filament assembly and regulation. J. Mol. Biol. 2010, 395, 327.
118. Herrmann, H.; Aebi, U. Intermediate filaments: Molecular structure, assembly mechanism, and integration into functionally distinct intracellular scaffolds. Annu. Rev. Biochem. 2004, 73, 749.
119. Li, L.S.; Jiang, H.; Messmore, B.W.; Bull, S.R.; Stupp, S.I. A torsional strain mechanism to tune pitch in supramolecular helices. Angew. Chem. Int. Ed. 2007, 46, 5873.
120. Wolgemuth, C.W.; Inclan, Y.F.; Quan, J.; Mukherjee, S.; Oster, G.; Koehl, M.A.R. How to make a spiral bacterium. Phys. Biol. 2005, 2, 189.
121. Bouzar, L.; Müller, M.M.; Gosselin, P.; Kulic, I.M.; Mohrbach, H. Squeezed Helical Elastica. arXiv 2016, arXiv:1606.03611.
122. Kamien, R.D. The geometry of soft materials: A primer. Rev. Mod. Phys. 2002, 74, 953.
123. Chouaieb, N.; Goriely, A.; Maddocks, J.H. Helices. Proc. Natl. Acad. Sci. USA 2006, 103, 9398.
124. Nam, G.M.; Lee, N.-K.; Mohrbach, H.; Johner, A.; Kulić, I. Helices at interfaces. EPL 2012, 100, 28001.
125. Lee, N.-K.; Johner, A. Defects on semiflexible filaments: Kinks and twist kinks. JKPS 2016, 68, 923.
126. Fierling, J.; Mohrbach, H.; Kulić, I.K.; Lee, N.K.; Johner, A. Biofilaments as annealed semi-flexible copolymers. EPL 2014, 106, 58006.
127. Wang, F.; Landau, D.P. Efficient, Multiple-Range Random Walk Algorithm to Calculate the Density of States. Phys. Rev. Lett. 2001, 86, 2050.
128. Everaers, R.; Bundschuh, R.; Kremer, K. Fluctuations and Stiffness of Double-Stranded Polymers: Railway-Track Model. EPL 1995, 29, 263.
129. Mohrbach, H.; Kulić, I. Motor Driven Microtubule Shape Fluctuations: Force from within the Lattice. Phys. Rev. Lett. 2007, 99, 218102.
130. Heussinger, C.; Bathe, M.; Frey, E. Statistical Mechanics of Semiflexible Bundles of Wormlike Polymer Chains. Phys. Rev. Lett. 2007, 99, 048101.
131. Sekimoto, K.; Prost, J. Elastic Anisotropy Scenario for Cooperative Binding of Kinesin-Coated Beads on Microtubules. J. Phys. Chem. B 2016, 120, 5953.
132. Mohrbach, H.; Johner, A.; Kulić, I. Cooperative lattice dynamics and anomalous fluctuations of microtubules. Eur. Biophys. J. 2012, 41, 217.
133. Hartmann, B.; Lavery, R. DNA structural forms. Q. Rev. Biophys. 1996, 29, 309.
134. Son, A.; Kwon, A.Y.; Johner, A.; Hong, S.C.; Lee, N.K. Underwound DNA under tension: L-DNA vs. plectoneme. Europhys. Lett. 2014, 105, 48002.
135. Egelman, E.H. Actin allostery again? Nat. Struct. Biol. 2001, 8, 735.
136. Fierling, J.; Müller, M.M.; Mohrbach, H.; Johner, A.; Kulić, I.K. Crunching biofilament rings. EPL 2014, 107, 68002.
137. Italiano, J.E.; Lecine, P.; Shivdasani, R.A.; Hartwig, J.H. Blood Platelets Are Assembled Principally at the Ends of Proplatelet Processes Produced by Differentiated Megakaryocytes. J. Cell Biol. 1999, 147, 1299.
138. Müller-Reichert, T.; Chrétien, D.; Severin, F.; Hyman, A.A. Structural changes at microtubule ends accompanying GTP hydrolysis: Information from a slowly hydrolyzable analogue of GTP, guanylyl (alpha,beta)methylenediphosphonate. Proc. Natl. Acad. Sci. USA 1998, 95, 3661.
139. Elie-Caille, C.; Severin, F.; Helenius, J.; Howard, J.; Muller, D.J.; Hyman, A.A. Straight GDP-tubulin protofilaments form in the presence of taxol. Curr. Biol. 2007, 17, 1765.
140. Arnal, I.; Wade, R.H. How does taxol stabilize microtubules? Curr. Biol. 1995, 5, 900.
141. Ziebert, F.; Mohrbach, H.; Kulić, I.M. Why Microtubules Run in Circles: Mechanical Hysteresis of the Tubulin Lattice. Phys. Rev. Lett. 2015, 114, 148101.
142. Mohrbach, H.; Johner, A.; Kulić, I.M. Tubulin Bistability and Polymorphic Dynamics of Microtubules. Phys. Rev. Lett. 2010, 105, 268102.
143. Nöding, B.; Köster, S. Intermediate Filaments in Small Configuration Spaces. Phys. Rev. Lett. 2012, 108, 088101.
144. Zhang, W.; Gomez, E.D.; Milner, S.T. Surface-Induced Chain Alignment of Semiflexible Polymers. Macromolecules 2016, 49, 963.
145. Carrillo, J.M.Y.; Cheng, S.; Kumar, R.; Goswami, M.; Sokolov, A.P.; Sumpter, B.G. Untangling the Effects of Chain Rigidity on the Structure and Dynamics of Strongly Adsorbed Polymer Melts. Macromolecules 2015, 48, 4207.
146. Fierling, J.; Johner, A.; Kulić, I.M.; Mohrbach, H.; Müller, M.M. How Bio-Filaments Twist Membranes. Soft Matter 2016, 12, 5747.
147. Zhan, D.; Chai, A.; Wen, X.; He, L.; Zhang, L.; Liang, H. Ordered regular pentagons for semiflexible polymers on soft elastic shells. Soft Matter 2012, 8, 2152.
148. Nam, G.; Johner, A.; Lee, N.K. Reptation of a semiflexible polymer through porous media. J. Chem. Phys. 2010, 133, 044908.
149. Milchev, A. Single-polymer dynamics under constraints: Scaling theory and computer experiment. J. Phys.: Condens. Matter 2011, 23, 103101.
150. Lam, P.M.; Zhen, Y. A wormlike chain model of forced desorption of a polymer adsorbed on an attractive wall. J. Stat. Mech. 2014, 4, P04020.
151. Kulic, I.M.; Schiessel, H. DNA Spools under Tension. Phys. Rev. Lett. 2004, 92, 228101.
152. Kunze, K.K.; Netz, R.R. Complexes of semiflexible polyelectrolytes and charged spheres as models for salt-modulated nucleosomal structures. Phys. Rev. E 2002, 66, 011918.
153. Waters, J.T.; Kim, H.D. Equilibrium Statistics of a Surface-Pinned Semiflexible Polymer. Macromolecules 2013, 46, 6659.
154. Pickett, G.T.; Witten, T.A. End-grafted polymer melt with nematic interaction. Macromolecules 1992, 25, 4569.
155. Egorov, S.A.; Hsu, H.P.; Milchev, A.; Binder, K. Semiflexible polymer brushes and the brush-mushroom crossover. Soft Matter 2015, 11, 2604.
156. Nyrkova, I.; Semenov, A.; Joanny, J.F.; Khokhlov, A. Highly Anisotropic Rigidity of "Ribbon-like" Polymers: I. Chain Conformation in Dilute Solutions. J. Phys. II 1996, 6, 1411.
157. Nyrkova, I.; Semenov, A.; Joanny, J.F. Highly Anisotropic Rigidity of "Ribbon-Like" Polymers: II. Nematic Phases in Systems between Two and Three Dimensions. J. Phys. II 1997, 7, 625.
158. Davies, R.P.W.; Aggeli, A.; Beevers, A.J.; Boden, N.; Carrick, L.M.; Fishwick, C.W.G. Self-assembling beta-sheet tape forming peptides. Supramol. Chem. 2006, 18, 435–443.
159. Simon, F.X.; Nguyen, T.T.T.; Diaz, N.; Schmutz, M.; Demé, B.; Jestin, J.; Combet, J.; Mésini, P.J. Self-assembling properties of a series of homologous ester-diamides: From ribbons to nanotubes. Soft Matter 2013, 9, 8483.
160. Quint, D.A.; Gopinathan, A.; Grason, G.M. Conformational collapse of surface-bound helical filaments. Soft Matter 2012, 8, 9460.
161. Sussman, D.M.; Schweizer, K.S. Microscopic Theory of Entangled Polymer Melt Dynamics: Flexible Chains as Primitive-Path Random Walks and Supercoarse Grained Needles. Phys. Rev. Lett. 2012, 109, 168306.
162. Granek, R. From Semi-Flexible Polymers to Membranes: Anomalous Diffusion and Reptation. J. Phys. II 1997, 7, 1761.
163. Fricke, N.; Sturm, S.; Lämmel, M.; Schöbl, S.; Kroy, K.; Janke, W. Polymers in disordered environments. Diffus. Fundam. 2015, 23, 7.
164. Biswal, S.L.; Gast, A.P. Mechanics of semiflexible chains formed by poly(ethylene glycol)-linked paramagnetic particles. Phys. Rev. E 2003, 68, 021402.
165. Gosselin, P.; Mohrbach, H.; Kulic, I.M.; Ziebert, F. On complex, curved trajectories in microtubule gliding. Phys. D Nonlinear Phenom. 2016, 318–319, 105.
166. Böhm, K.J.; Stracke, R.; Vater, W.; Unger, E. Inhibition of kinesin-driven microtubule motility by polyhydroxy compounds. In Micro- and Nanostructures of Biological Systems; Hein, H.J., Bischoff, G., Eds.; Shaker Verlag: Aachen, Germany, 2001; pp. 153–165.
167. Muto, E.; Sakai, H.; Kaseda, K. Long-range Cooperative Binding of Kinesin to a Microtubule in the Presence of ATP. J. Cell Biol. 2005, 168, 691.

Figure 1. Stiffness effects and lattice artifacts for bond-fluctuation model (BFM) data obtained for $M = 200$ chains of length $N = 20$ at a volume fraction $\phi = 8NM/V = 0.5$ of occupied lattice sites at a temperature $T = 1/\beta = 1$ [ ]. Panel (a) shows the effective bond length $b_e = R_e(N)/(N-1)^{1/2}$, obtained from the root-mean-squared chain end-to-end distance $R_e(N)$, and the (rescaled) center-of-mass self-diffusion coefficient $D_{\mathrm{cm}}$, as a function of the dimensionless parameter $x = \beta\epsilon$. A snapshot of a configuration at $\epsilon = 10$ is given in Panel (b). The chains are seen to align along the three principal lattice axes.
Figure 2. Snapshots of semiflexible 2D polymers of length $N = 256$ obtained by means of molecular dynamics simulations of a Kremer–Grest bead-spring model [ ]. We show data for four concentrations, $c = NM/L^2 = 0.125$ ($M = 192$ chains, linear box size $L \approx 627$), $c = 0.250$ ($M = 192$, $L \approx 443$), $c = 0.500$ ($M = 192$, $L \approx 313$) and $c = 0.750$ ($M = 384$, $L \approx 362$), and five bending penalties, $\epsilon = 0$, 2, 4, 8 and 16 (from the bottom to the top). Only small subvolumes of much larger boxes are represented. The configurations have been sampled by increasing $\epsilon$, starting with flexible and compact chain systems ($\epsilon = 0$) [ ]. While the chains remain compact and segregated at low densities and stiffnesses below the dashed line, they are seen in the opposite limit to align (at least) locally, forming bundles of chains with hairpins, which are extremely difficult to equilibrate.

Figure 3. Rescaled sub-chain size $y = R_e^2(s)/(s-1)$, with $s$ being the arc-length (sub-chain length), for four concentrations and three stiffness penalties corresponding to systems below and around the dashed line in Figure 2. The symbols refer to flexible systems ($\epsilon = 0$); the line width for $\epsilon = 2$ and 4 increases with density. The dash-dotted line corresponds to the asymptotic slope for perfectly rigid chains. Note that $y(s)$ becomes strongly non-monotonous with increasing $\epsilon$. However, for a given density and $s \to N$, all $y(s)$ become similar as long as the system remains isotropic, i.e., $\epsilon$ is not too large. Independent of the rigidity, the overall chain size is thus ruled by the (persistence length independent) distance $d_{\mathrm{cm}} \approx (N/c)^{1/D}$ between chains.

Figure 4. Log-log plot of the tangent/tangent correlation function $C(s) = \langle \cos\theta(s) \rangle$ versus arc-length $s$ for a 3D polymer melt with chains of length $N = 1024$. The symbols show data from MD simulations for a Kremer–Grest-like bead-spring model [ ] with three bending penalties, $\epsilon = 0, 1, 2$. The model is similar to the one shown in Figure 2. Thus, the melt with $\epsilon \leq 2$ is isotropic, the value $\epsilon = 0$ corresponding to fully-flexible chains. The abscissa is scaled by the persistence length obtained from a fit of the initial decay of $C(s)$ to Equation ( ); see the dashed line in the figure. The solid lines indicate the power law, $C(s) \sim s^{-3/2}$, expected from corrections to chain ideality [ ].

Figure 5. Loop distribution as simulated by molecular dynamics for two chain lengths ($S = 30b$ and $S = 250b$). The persistence length is $\ell = 10.1b$ throughout. (left) Distribution of the internal loop size upon first adsorption of a loop of size $S$; only the smallest of the generated internal loops is taken into account; the full distribution is symmetric about $S/2$. We show the power-law fit by the single-loop partition function (dashed lines) for the stiff loop and for the flexible loop, which apply where they should. Note the rather narrow crossover around $s = 2\ell$. The product of flexible-loop partition functions nicely accounts for the flattening near $s = S/2$ required by symmetry. (right) Distribution of the size of the loop generated by first re-adsorption of a tail of length $S/2$. Again, the single-loop partition functions fit where they should; the narrow crossover is now located around $s = \ell$. The product of the flexible loop and tail partition functions accounts for the upturn near $s = 125b$, where the small-tail partition function dominates.

Figure 6. (a) Schematic squeelix with the angle that is slaved to the twist angle, given by the line in black; (b) typical shape of a squeelix for $\gamma = 1$ (see Equation ( )) with a single twist-kink; (c) a squeelix in the dilute regime of twist-kinks, with $\gamma = 0.997$; (d) a squeelix in the dense regime of twist-kinks, with $\gamma = 0.52$. The ground states (b), (c) and (d) are from [ ].
Figure 7. $P(\theta)$ for $S = 100b$, $\omega = 0.01/b$ and $\ell = 1000b$, obtained by convolution according to Equation ( ) using Equation ( ). The Boltzmann weight of a twist kink is $e^{-E}$. The lines correspond to $E = 4$ (green), 6 (blue) and 8 (black). The thick dashed line corresponds to $E = -\log(b\omega) \approx 4.6$.

Figure 8. $R_g^2(S)$ for $\omega = 0.01/b$ and $\ell_p = 1000b$. The Boltzmann weight of a twist kink is $e^{-E}$. The thin lines are for $E = 2, 4, 6, 8$ and 10, from top to bottom. The thick line indicates $R_g^2$ with $E = -\log(b\omega) \approx 4.6$. Undulations appear for $E > -\log(b\omega)$.

Figure 9. (a) Free energy landscapes without applied force ($f = 0$) and (b,c) under increasing force ($f = 0.25\,k_BT/b$, $f = 0.50\,k_BT/b$). To fix the force scale: for $b = 0.5$ nm, given $k_BT \approx 4$ pN nm, $k_BT/b \approx 8$ pN.

Figure 10. (a) Two filaments are coupled with elastic springs, forming a simple two-chain bundle; (b) the shear and bending degrees of freedom become strongly coupled, leading to long-range deformation effects. An arc formed in the region of length $l$ (blue line) around the origin induces two counter-arcs with opposite curvature in its immediate proximity. The deformations are screened and vanish only at length scales longer than the elastic screening length $\lambda$.

Figure 11. A semiflexible filament with elastic tails that cross-link points along its backbone becomes bistable. If the cross-linking point intervals overlap in addition (red and black chains), the curvature switching becomes cooperative. The single-arc state in the middle is the energetically most stable (indicated by bold and dashed arrows).

Figure 12. Polymorphic crunching: (a) nonlinear bendable units are coupled in-plane of bending by a ring closure constraint (b). The constraint modifies their effective free energy and gives rise to a bistable monomer potential. (c) Two out of exponentially many energetically-equivalent ground state realizations. Blue and light-blue indicate the regions of opposite curvature.

Figure 13. The polymorphic model of microtubules. (a) The switchability of tubulin dimers leads to a competition of three states of the microtubule's cross-section: straight and long (L), curved (C), and straight and short (S). (b) "Phase diagram" of the polymorphic microtubule model as a function of generalized force $f$ vs. torque $m$. The existing states in the respective regions are ordered by their polymorphic energies, the one at the bottom having minimum energy. (c) An intermittent buckling event, for a microtubule transported along a motor-covered surface, in case of $f \simeq 0.7$ (corresponding to small switching energies, $\Delta G \simeq 0$). The behavior observed is that of a regular WLC chain. (d) An intermittent buckling event in case of $f \simeq 0.4$ (corresponding to $\Delta G \simeq 5\,k_BT$). The microtubule curls up.

Table 1. Monomer size b along the backbone and persistence length ℓ for various polymers (in water if not specified otherwise) assuming worm-like-chain (WLC) statistics: polyethylene (PE) in dodecanol, polyisobutylene (PIB) in benzene, polydimethylsiloxane (PDMS) in hexane, atactic polystyrene (PS) in cyclohexane, poly(sodium styrene sulfonate) (NaPSS), poly(diallyl-dimethyl ammonium chloride) (PDADMAC), hyaluronan (HA), duplex-DNA (d-DNA), intermediate filament vimentin (IF) and F-actin. The polymers marked by ⋆ are polyelectrolytes measured in water at high salt; for HA, two sets of values for ℓ are found in the literature, possibly linked to association phenomena.
The last three polymers are biofilaments measured in physiological conditions. It is questionable whether the simple WLC model is directly applicable to them (see the last section), and the persistence length may be only indicative.

Polymer | ℓ (nm) | b (nm) | ℓ/b
PE (dodecanol) | 0.59 | 0.13 | 4.5
PIB (benzene) | 0.59 | 0.26 | 2.3
PS (cyclohexane) | 0.86 | 0.26 | 3.3
PDMS (hexane) | 0.57 | 0.29 | 2
⋆NaPSS (high salt) | 1.0 | 0.25 | 4
⋆PDADMAC (high salt) | 3.0 | 0.47 | 5.3
⋆HA (high salt) | 4–5; 7–10 | 1.0 | 4–5; 7–10
d-DNA | 50 | 0.34 | 150
IF | 10³ | ∼10 | ∼100
F-actin | 17 × 10³ | 5 | 3400

Table 2. Characteristic length scales for chemisorption of a single worm-like chain. The typical zipping loop size is $s_0$. For long chains $S > S^⋆$, zipping occurs from multiple nucleation points distant along the chain.

Polymer | ℓ (nm) | b (nm) | $s_0$ (nm) | $S^⋆$ (nm)
PE (dodecanol) | 0.59 | 0.13 | 0.21 | 1.6
PIB (benzene) | 0.59 | 0.26 | 0.34 | 1
PS (cyclohexane) | 0.86 | 0.26 | 0.38 | 1.9
PDMS (hexane) | 0.57 | 0.29 | 0.36 | 0.90
NaPSS (high salt) | 1.0 | 0.25 | 0.39 | 2.5
PDADMAC (high salt) | 3.0 | 0.47 | 0.80 | 7.6
HA (high salt) | 4–5; 7–10 | 1.0 | 1.6–2 | 12–35
d-DNA | 50 | 0.34 | ∼1.8 | ∼1.4 × 10³
IF | 10³ | ∼10 | ∼46 | ∼2 × 10⁴
F-actin | 17 × 10³ | 5 | ∼75 | 3 × 10⁶

© 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license.
{"url":"https://www.mdpi.com/2073-4360/8/8/286","timestamp":"2024-11-13T06:43:09Z","content_type":"text/html","content_length":"986409","record_id":"<urn:uuid:42aa9401-fc72-4bdd-b591-568815aa4bfb>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00146.warc.gz"}
CAGR Calculator: Compound Annual Growth Rate

Whether a given CAGR is good or bad depends on context: generally speaking, investors will evaluate it by thinking about their opportunity cost as well as the riskiness of the investment. For example, if a company grew by 25% in an industry with an average CAGR closer to 30%, then its results might seem lackluster by comparison. But if the industry-wide growth rates were lower, such as 10% or 15%, then its CAGR might be very impressive.

The compound annual growth rate (CAGR) is the rate of return that would be required for an investment to grow from the initial value invested to the maturity balance, assuming that the gains are reinvested at the end of each investment period. Calculating the CAGR of various options is useful if you want to invest a lump sum; using XIRR instead is more accurate if you're investing at different times. If, say, an investment grows from 100 to 175, it gave you an absolute return of 75% over its tenure.

Calculate CAGR

Contrary to a common misconception, the calculation of CAGR is not as simple as averaging the YoY growth rates. The first step in calculating CAGR is to determine the beginning value of the investment. This is the value of the investment at the start of the period you want to calculate. For example, if you want to calculate the CAGR of an investment over a 5-year period, you need to determine the value of the investment at the beginning of that 5-year period. There are various factors in the market that can influence the growth rate of an investment, which makes it difficult to interpret the year-to-year growth rates in isolation. A 5-year CAGR percentage indicates how much the investment has grown, on average, over the past 5 years.

Again, the Rule of 72 is an estimate, but it's often a quite good estimate. A Rule of 72 calculation would imply a required CAGR of 7.2% to double one's money in ten years; the actual figure is 7.18%. For seven years, the Rule of 72 suggests a 10.3% return, against 10.4% by the proper calculation. The Rule of 72 only works when considering an ending value that's double the beginning value. But for back-of-the-envelope CAGR calculations, the Rule of 72 is a handy rule indeed.

What is the Compound Annual Growth Rate (CAGR)?

In this way, comparing the CAGRs of measures within a company reveals strengths and weaknesses. In any given year during the period, one investment may be rising while the other falls. This could be the case when comparing high-yield bonds to stocks, or a real estate investment to emerging markets. Using CAGR would smooth the annual return over the period, so the two alternatives would be easier to compare. Scripbox's CAGR calculator helps investors calculate the CAGR on their investments, which in turn helps them analyze their investment decisions. Consequently, the CAGR may be used to clarify the progress of an investment. The rate can also be used to compare the growth of more than one investment. You only need to enter the initial value, the final value and the desired investment period, and the online CAGR calculator will take care of the rest. To compare bank offers which have different compounding periods, we need to calculate the Annual Percentage Yield, also called the Effective Annual Rate (EAR).
The most convenient way to figure it out is to use an APY calculator, which estimates the EAR from the interest rate and the compounding frequency.

Step 8: Multiply the Result from Step 7 by 100

Multiplying the growth ratio, raised to one over the number of years and reduced by one, by 100 expresses the CAGR as a percentage. CAGR also allows investors to see how similar investments have fared over the same length of time. If you're in need of a financial advisor, the CAGR formula can help you compare advisors and see who is getting their clients the most for their money. One of the greatest limitations of the compound annual growth rate is that it ignores volatility; thus, it is not advisable to use CAGR as the only metric to determine an investment's performance. The compound annual growth rate (CAGR) is a business and investment term that refers to the average annual growth rate of an investment over a certain period of time, usually longer than one year. It can be explained as a measure of the growth of an investment based on the assumption that the investment's value grows at a steady rate, compounded annually. To ensure that you understand the concept of CAGR, we have also computed the implied revenue, which we link to the $100 million assumption for Year 0 and then grow by the CAGR of 7.6%.
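Putting the steps together (take the ratio of ending to beginning value, raise it to one over the number of years, subtract one, and multiply by 100 as in Step 8), here is a minimal sketch in Python; the input figures are made up for illustration, and the EAR helper follows the standard compounding formula discussed above:

```python
def cagr(begin_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate, returned as a percentage."""
    return ((end_value / begin_value) ** (1.0 / years) - 1.0) * 100.0

def effective_annual_rate(nominal_rate: float, periods_per_year: int) -> float:
    """EAR/APY as a percentage for a nominal rate compounded n times a year."""
    return ((1.0 + nominal_rate / periods_per_year) ** periods_per_year - 1.0) * 100.0

# Illustrative numbers: 10,000 growing to 17,500 over 5 years
# (an absolute return of 75%).
print(f"CAGR: {cagr(10_000, 17_500, 5):.2f}% per year")       # ~11.84%

# Rule of 72 cross-check: the rate needed to double in n years is roughly 72/n.
for n in (7, 10):
    exact = (2.0 ** (1.0 / n) - 1.0) * 100.0
    print(f"double in {n} years: rule of 72 -> {72 / n:.1f}%, exact -> {exact:.2f}%")

# EAR example: a nominal 8% rate compounded quarterly.
print(f"EAR: {effective_annual_rate(0.08, 4):.2f}%")          # ~8.24%
```

Note how the 5-year example separates CAGR from the absolute return: 75% in total works out to about 11.84% per year once compounding is accounted for, not 75/5 = 15%.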
{"url":"https://backup.littleparis.co.in/cagr-calculator-compound-annual-growth-rate/","timestamp":"2024-11-12T03:08:41Z","content_type":"text/html","content_length":"188371","record_id":"<urn:uuid:8c836c68-acfe-4192-9503-35d40fdabc5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00161.warc.gz"}
Quantum Computing Statistics: Benefits and Negative Effects

You've probably heard that quantum computers can solve problems at a speed unfathomable to traditional computers. They can be used in physics, mathematics, economics, finance, and many more fields where there is a need to create and/or analyze a vast model impacted by a huge number of factors. This is still a new field, so there are not many quantum computing statistics. That's why we had to dig deep to bring you all the relevant stats and facts. Let's dive right in.

Top Quantum Computing Statistics: Editor's Choice

• The operating temperature of the D-Wave 2000Q quantum computer is 0.015 Kelvin.
• 1QBit, founded in 2012, is the first company dedicated to commercial quantum computing.
• Quantum tunneling is expected to result in a 100–1000x reduction in power consumption.
• 5 companies have made quantum chips so far.
• The global quantum computing market is projected to be worth $949 million by 2025.
• By 2030, the number of quantum computers worldwide could be between 2,000 and 5,000.
• A 30-qubit quantum computer can run trillions of operations per second.
• 90 new job postings for commercial quantum computing jobs appeared in 2018.

10 Awesome Facts About Quantum Computing

1. Quantum computers are too powerful for simple tasks.
Quantum computers are incredible, but they are not suitable for simple tasks like emailing. This means that traditional computers will not lose their place, since quantum computers should be used to solve incredibly complicated problems.

2. The operating temperature of the D-Wave 2000Q quantum computer is 0.015 Kelvin.
We've come across several amazing facts about quantum computers so far, and the temperature needed to keep them stable is another addition to the list. For example, the D-Wave 2000Q system operates at a temperature of 0.015 Kelvin. This is 180 times colder than interstellar space and incredibly close to absolute zero on the Kelvin scale.

3. It takes a few seconds for AI to learn huge amounts of information via quantum computers.
One of the greatest advantages of quantum computers is that they can speed up the learning process of AI tremendously. They can help an AI learn from vast amounts of information in mere seconds, a task that would previously have taken thousands of years.

4. Quantum computers will be able to simulate the natural world.
Although we do not completely understand the way quantum tunneling works, we know that quantum computers operate on the same principles at the sub-atomic level that nature does, which is why they are sometimes called natural computers. Because of that, we are one big step closer to a simulation of the natural world.

5. 1QBit is the name of the first dedicated quantum computing company.
The world's first company dedicated to commercial quantum computing, named 1QBit, was founded in 2012 in Vancouver, British Columbia.

6. Quantum tunneling is expected to result in a 100–1000x reduction in power consumption.
Besides the immense power and speed quantum computers have, they are also more power-efficient than traditional computers thanks to quantum tunneling, which reduces power consumption by massive amounts.

7. Deep Blue calculated 200 million potential moves per second in its 1997 chess match against Garry Kasparov.
Widely regarded as one of the greatest (if not the greatest) chess players of all time, Garry Kasparov was at the peak of his powers at the time. So, it came as a shock when he lost the long-anticipated match with Deep Blue. And that was back in 1997! The quantum computers of today are able to do around one trillion calculations per second.

8. 5 companies have made quantum chips so far.
One of the fun facts about quantum computers is that, so far, only a handful of companies have made quantum chips. They are Google with Bristlecone, IBM with IBM Experience and Q, Intel with Tangle Lake, Rigetti with 19Q, and D-Wave with Rainier.

9. IBM upgraded its quantum computer Q from 5 to 20 qubits of processing power in 2018.
IBM launched Q, a 5-qubit quantum computer whose services you could use via the cloud, in 2016. Two years later, the company upgraded it to 20 qubits of quantum processing power, which was a massive improvement.

10. The Quantum Artificial Intelligence Lab has released a 72-qubit processor.
According to artificial intelligence stats, in 2018 the Quantum Artificial Intelligence Lab, which is run by NASA, Google, and the Universities Space Research Association, released Bristlecone, a 72-qubit processor.

Important Quantum Computer Facts & Statistics

11. The global quantum computing market is expected to be worth $949 million by 2025.
In 2016, quantum computing had a global market value of $89 million. This figure is expected to grow more than 10x by 2025, reaching $949 million. This estimate was based on a CAGR of 30% from 2017 to 2025 (a quick compound-growth check of these figures appears a few facts below).

12. By 2030, the number of quantum computers worldwide could be between 2,000 and 5,000.
A new report discloses some remarkable facts about quantum computing, one of which is the skyrocketing number of quantum computers that could be in use worldwide about a decade from now. Given their ability to solve optimization problems, quantum computers would be widely used in finance, advanced manufacturing, and logistics.

13. A 30-qubit quantum computer can run trillions of operations per second.
In contrast, current desktop computers have the power to run billions of operations per second. So, the difference is vast. This kind of capability has made quantum computers so attractive for applications where this power is necessary, like modeling and encryption.

14. Quantum computers work thousands of times faster than traditional computers.
If you're wondering about the speed of quantum computers, facts show they're thousands of times faster compared to conventional computers. This means that, no matter how much data is created each day, quantum computers may well be able to process it.

15. IBM developed a 53-qubit quantum computer in 2019.
Statistics of quantum computers show that some companies have more than one quantum computer. In 2019, IBM developed a 53-qubit quantum computer, and Rigetti Computing announced a 128-qubit quantum computer.

Quantum Computing Jobs Statistics

16. 15,650 physicists were employed in 2015, including quantum computer research scientists.
The US Bureau of Labor Statistics does not collect data specifically for the position of a quantum computer research scientist. Instead, it counts these positions as physicists. In 2015 there were 15,650 physicists employed.

17. 90 new job postings for quantum computing commercial jobs appeared in 2018.
As for job growth, in July 2018 there were 188 new job openings. That may not sound like a lot, but the corresponding figure for October 2017 was 45. Quantum computing statistics show a massive increase in the last couple of years.
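A quick aside before moving on: the market projection in fact 11 above can be sanity-checked with one line of compound-growth arithmetic. The sketch below is illustrative C#; the $89 million base, the 30% CAGR and the 2016-2025 horizon are the figures quoted in fact 11, and the small gap to $949 million simply suggests the quoted CAGR was rounded.

using System;

class CagrCheck
{
    static void Main()
    {
        double startValue = 89;  // 2016 market size, in $ millions
        double cagr = 0.30;      // quoted compound annual growth rate
        int years = 9;           // 2016 -> 2025

        // Compound growth: FV = PV * (1 + r)^n
        double endValue = startValue * Math.Pow(1 + cagr, years);
        Console.WriteLine($"Projected 2025 market: ${endValue:F0} million");
        // Prints about $944 million, close to the ~$949M estimate above.
    }
}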
18. The annual wage for physicists in colleges, professional schools, and universities is $63,840.
Once again, the only data we have is the data from the BLS about physicists. Statistics in quantum computing reveal that salaries are likely to go up, since big companies like Intel, IonQ, Google, D-Wave, and others are trying to build bigger, faster, and better quantum computers, which will need more skilled professionals to operate.

19. The National Quantum Initiative is a 10-year program for federal science agencies to invest in quantum technology.
One of the essential quantum computing facts is that quantum technology is worth investing in for federal science agencies. Considering that future workloads will be too much for traditional computers to handle, classic binary logic will not be of much help. Therefore, this bill is proposed to guide and instruct the agencies to train people for quantum computing-related jobs by building five institutes dedicated to quantum computing. It will also help overcome the pending shortage of professionals. This bill is yet to be scheduled for a full vote in the Senate.

20. The operating power of the first quantum computers was 2 qubits.
One of the startling facts about quantum computers is that they keep on evolving. Looking from our present vantage point, the start was not that impressive. The first quantum computers operated with 2 qubits in 1998. Two years later, we got quantum computer systems of 5 and 7 qubits. In 2006 and 2008, quantum computer systems operated with 12 and 28 qubits, respectively.

21. 2017 was the first year with 50 qubits of quantum computer processing power.
Quantum computing statistics show that the first quantum computer system to operate with 50 qubits was designed in 2017, and we're expecting a 128-qubit one soon. However, these computers are not perfect. But what can be a flaw in one situation can become an advantage in the next.

Some Facts About the Benefits of Quantum Computing

22. Quantum computing will have its primary application in AI.
AI becomes more accurate when it gets feedback, and that feedback comes from calculating the probabilities for many possible alternatives. But it will not stop with artificial intelligence. Quantum computing is expected to benefit every industry and revolutionize the modern economy to the same extent electricity did when it was introduced.

23. Quantum computing offers precision modeling for chemical reactions.
Here's one of the interesting facts about quantum computers: they can determine the optimum configuration for a chemical reaction. Since these reactions can be very complex, only the simplest molecules can be analyzed by traditional computers.

24. Using quantum computing for improved online security can be very beneficial.
Today's online security relies on the difficulty of factoring large numbers into primes. This is a lot to ask of traditional computers, since searching through every possible factor can take an incredible amount of time, rendering it costly and impractical. But since that is just the kind of task a quantum computer excels at, these security methods will become obsolete in the future.

Negative Effects of Quantum Computing

Of course, nothing is as perfect as it seems. Quantum computing has many advantages, but there are some negatives.

25. Quantum computers' enemy is decoherence.
As you can imagine, quantum computers are extremely hard to build and program. Because of that, there can be many errors like faults, noise, or loss of quantum coherence (decoherence). This decoherence stems from temperature fluctuations, vibrations, electromagnetic waves, and other impacts from the outside environment, and it can destroy the properties quantum computers are known for.

26. Quantum computers can hack public key encryption.
There are more dangers of quantum computing. For instance, one powerful quantum computer could crack even the strongest cryptographic algorithms that keep our data safe. In the wrong hands, it could jeopardize the data held by the stock exchange, hospitals, banks, etc.

Key Takeaways

Although there are not many statistics on quantum computing, it's safe to say that quantum computing is here to stay. Bottom line: in the next decade, you can expect quantum supercomputers to be available from the cloud. Big web search and cloud AI services will utilize the advantages of the power of quantum computers, even though there are still a lot of people absolutely unaware of that. But now that you've read our article, you're no longer one of them!

Frequently Asked Questions

How many calculations can a quantum computer do?
It takes 10 years for Boolean logic computers to complete a task the latest Google quantum computers do in 3 seconds. The latest Google quantum-supremacy-class computers reached the speed of a couple of hundred trillion calculations in the blink of an eye! In comparison, traditional Boolean logic computers would need more than a decade to perform the same task. Let that sink in for a moment.

What is quantum computing and how does it work?
Everyday computers use transistors in order to work with zeroes and ones individually. In contrast, quantum computers can work with zero and one at the same time, using something called a superposition quantum state. Let's dig deeper into how this technology works in order to sharpen our definition of quantum computing. A superposition quantum state is a state of matter that we can think of as being zero and one at the same time. Features like superposition states and quantum entanglement enable quantum computers to perform simultaneous calculations and extract the results. These computers are, therefore, super fast and way better than classical computers at such tasks. In simple terms, superposition represents the ability of a quantum system to be, at the same time, here and there, up and down, left and right. Entanglement represents the extremely strong correlation that exists between quantum particles. In fact, this correlation is so strong that two or several quantum particles can be inextricably linked, even if they are very far away from each other.

How many quantum computers are there in the world?
There were 11 quantum computers in the world as of June 2018. These computers were owned by IBM, the University of Bristol in England, the Center for Quantum Computing and Communication Technology, the University of New South Wales in Australia, and the University of Science and Technology in China. All of these computers are small in size. D-Wave supplies Lockheed Martin, Google, NASA, and USRA with larger quantum computers.

How many times faster is a quantum computer?
Google announced a computer 100x faster than any other computer they are using.
This number is in accordance with the D-Wave quantum computer from Google and NASA in 2015, which solved an optimization problem in a couple of seconds. According to them, and as quantum computing statistics confirm, a classical computer would take 10,000 years to solve the same optimization problem. It does seem impossibly fast, but that is the charm of the exponential power of quantum mechanics.
{"url":"https://seedscientific.com/quantum-computing-statistics","timestamp":"2024-11-02T04:19:39Z","content_type":"text/html","content_length":"93560","record_id":"<urn:uuid:88a79059-7bcb-4a48-a113-3579600ea6e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00316.warc.gz"}
Unexpected behavior of C# Math.Round

When working with numerical data in C#, developers often rely on the Math.Round method to round numbers to a specified precision. However, some users encounter unexpected behavior that can lead to confusion and errors in their applications. This article aims to clarify how Math.Round works, analyze common pitfalls, and provide practical examples to better understand its functionality.

Original Problem Scenario

Some users may have faced situations where their rounding results did not match their expectations. For instance, consider the following code snippet:

double value = 2.5;
double roundedValue = Math.Round(value);
Console.WriteLine(roundedValue); // Output may be confusing

In this example, users may expect Math.Round(2.5) to return 3. However, due to the default rounding mode used by Math.Round, the result is actually 2. This can lead to unexpected outputs and frustration for developers.

How Math.Round Works

In C#, the Math.Round method follows the "Banker's Rounding" strategy, also known as "round half to even." This means that when a number is exactly halfway between two possible rounded values (like 2.5), Math.Round will round to the nearest even number. Therefore, Math.Round(2.5) will round down to 2, while Math.Round(3.5) will round up to 4. This approach helps to reduce the bias that might accumulate when rounding numbers in financial calculations.

Rounding Modes

To control the rounding behavior, Math.Round has several overloads that allow you to specify different rounding modes. Here's an example that demonstrates how to use the MidpointRounding enumeration to customize the rounding behavior:

double value1 = 2.5;
double roundedValue1 = Math.Round(value1, MidpointRounding.AwayFromZero);
Console.WriteLine(roundedValue1); // Outputs: 3

double value2 = 3.5;
double roundedValue2 = Math.Round(value2, MidpointRounding.AwayFromZero);
Console.WriteLine(roundedValue2); // Outputs: 4

In the above example, we explicitly tell Math.Round to round away from zero. By doing so, we get the expected outcomes where 2.5 and 3.5 round up to 3 and 4, respectively.

Practical Examples and Further Analysis

To fully grasp the behavior of Math.Round, consider the following scenarios:

1. Even Rounding with Negative Numbers:

double negativeValue = -2.5;
double roundedNegative = Math.Round(negativeValue);
Console.WriteLine(roundedNegative); // Outputs: -2

Just as with positive numbers, negative values also follow the rounding rules. Math.Round(-2.5) yields -2 because it rounds to the nearest even number.

2. Multiple Decimal Places:

double decimalValue = 2.555;
double roundedDecimal = Math.Round(decimalValue, 2);
Console.WriteLine(roundedDecimal); // Outputs: 2.56

In this case, rounding to two decimal places rounds up to 2.56. It's essential to specify the number of digits, and to keep in mind that a value like 2.555 cannot be represented exactly in binary floating point, so midpoint results with double can occasionally surprise you; the decimal type avoids this class of issue.

Understanding the behavior of Math.Round in C# is crucial for developers to avoid common pitfalls. By recognizing that it employs banker's rounding by default, we can better manage our numerical outputs. Using the MidpointRounding enumeration offers a way to customize the rounding strategy to fit specific needs, ensuring that your applications produce the expected results. By utilizing this information, developers can enhance their understanding of rounding in C# and ensure their applications handle numerical data accurately and efficiently.
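To make the points above easy to reproduce, here is a minimal, self-contained console sketch. Everything it prints follows directly from the behavior described in the article; the only addition is the decimal examples at the end, which use C#'s exact base-10 type so midpoint rounding is fully predictable:

using System;

class RoundingDemo
{
    static void Main()
    {
        // Banker's rounding (the default): midpoints go to the nearest even digit.
        Console.WriteLine(Math.Round(2.5)); // 2
        Console.WriteLine(Math.Round(3.5)); // 4

        // Explicit away-from-zero rounding via the MidpointRounding overload.
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3

        // decimal stores base-10 fractions exactly, so midpoints at any
        // number of decimal places behave predictably under round-half-to-even.
        Console.WriteLine(Math.Round(2.555m, 2)); // 2.56 (6 is the even digit)
        Console.WriteLine(Math.Round(2.545m, 2)); // 2.54 (4 is the even digit)
    }
}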
{"url":"https://laganvalleydup.co.uk/post/unexpected-behavior-of-c-math-round","timestamp":"2024-11-05T05:29:33Z","content_type":"text/html","content_length":"83063","record_id":"<urn:uuid:37c28022-408c-4e3b-a2b8-58a47350850c>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00491.warc.gz"}
Higher Categorical Tools for Quantum Phases of Matter Quantum phases have become a staple of modern physics, thanks to their appearance in fields as diverse as condensed matter physics, quantum field theory, quantum information processing, and topology. The description of quantum phases of matter requires novel mathematical tools that lie beyond the old symmetry breaking perspective on phases. Techniques from topological field theory, homotopy theory, and (higher) category theory show great potential for advancing our understanding of the characterization and classification of quantum phases. The goal of this workshop is to bring together experts from across mathematics and physics to discuss recent breakthroughs in these mathematical tools and their application to physical problems. Sponsored in part by the Simons Collaboration on Global Categorical Symmetries
{"url":"https://events.perimeterinstitute.ca/event/59/timetable/?view=standard_inline_minutes","timestamp":"2024-11-11T05:10:26Z","content_type":"text/html","content_length":"220332","record_id":"<urn:uuid:041361b3-1567-46a0-a8a0-8e98dcf84717>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00210.warc.gz"}
How Much Extra Vinyl Flooring To Buy - WHYIENJOY

Moreover, how much extra flooring should I buy for waste? Typically you can expect a waste factor of 5-7% for square rooms, 10% for rectangular rooms and 15% for rooms with multiple angles. Always round up to the nearest decimal point and add this waste factor to your initial square footage.

Also, how do I calculate how much vinyl flooring I need? To do so, use a tape measure to determine the room's length and width. Then multiply the length by the width to get your square footage. For instance, if the room is 12 feet wide and 12 feet long, you will need enough flooring for 144 square feet (12x12=144).

Additionally, how much extra does vinyl flooring cost? For vinyl and laminate flooring, 5 to 10% extra is recommended.

How much extra laminate should I buy? Measuring Laminate Floor Example: How much laminate do I need? If your room is 10 metres long by 9 metres wide, you will need 90 square metres of laminate flooring. We recommend adding 10% (multiply your figure by 1.1) to ensure any errors are accounted for.

Things to consider

Below are some things to consider when trying to figure out how much extra vinyl flooring to buy.

How many square feet are in a box of vinyl flooring? Each carton generally contains 20 square feet, but this can vary by style. While LVT products usually don't have consistency issues, you still have to allow for waste when calculating the amount of product to purchase.

How much does it cost to install 1000 square feet of vinyl floors? The estimated material, installation cost, and labor necessary to install vinyl plank floors for 1,000 sq ft (93 m2) are approximately $9,500. In most cases, the total cost is within a range from $3,000 to $16,000.

How many flooring packs do I need? Calculating the amount of packs: if you want to do it yourself, divide the room area by the pack size. E.g., for a room that is 15 m² and a floor with a pack size of 2.3 m²: 15/2.3 = 6.5, so buy 7 boxes of flooring.

How many 12x12 vinyl tiles are in a box? Champion® 12 x 12 Vinyl Tile – Moderate Traffic – Box Of 45 | HD Supply.

How much would it cost to floor a 1500 sq ft house? The cost to install new subfloor and floorboards in a new construction ranges from $7,500 to $36,750 to cover 1,500 square feet. On average, subfloor costs $2 to $2.50 per square foot. Floorboards run from $3 to $22 per square foot for materials and installation.

How hard is it to install vinyl plank flooring? Of all the do-it-yourself floor coverings, vinyl plank flooring (also known as luxury vinyl) is one of the simplest to install. It is easy to cut, requires no bonding to the subfloor, and snaps together edge-to-edge and end-to-end.

Where should I store extra flooring? You should really store your hardwood flooring in a controlled environment, never a garage or basement. If the hardwood is stored above 50% humidity, it will take in a lot of moisture and expand; this typically happens in the summer months, when humidity is at its highest.

How many square metres is my room? In order to calculate the size of a room or space in m^2, you simply multiply the length of the space (in metres) by the width of the space (in metres).

How many planks are in a box of laminate flooring? Each pack contains nine planks of high gloss laminate flooring, which equates to a coverage of 2.1 sq m.

How do you calculate price per square foot for flooring?
You can do it in the following way:
1. Measure the room that you're going to install the floor in.
2. Multiply the width by the length of the room to obtain the square footage.
3. Once you know the area of the room, you're good to go – this is the square footage of flooring materials you have to buy.

How many square feet is a 12x10 room? Area of the floor or ceiling: multiply the length by the width (10 feet x 12 feet = 120 square feet of area).

How do I calculate sq footage? Basic formula for square feet: multiply the length by the width and you'll have the square feet. Here's a basic formula you can follow: length (in feet) x width (in feet) = area in sq. ft.

How wide is a roll of vinyl flooring? No seam is needed because the standard roll is 12 feet wide by whatever length you need. Keep in mind that sheet vinyl is sold in standard 12-foot-wide rolls and accommodates usual lengths. If the room is wider than 12 feet each way, a seam is required.

How many boxes of flooring do I need for 600 square feet? You will need 22 boxes of laminate flooring to cover a room of 600 square feet. This amount includes the wastage that can occur due to miscalculation, mis-measuring and the like.

How much does a box of flooring cover? Determine the number of boxes you need before ordering your laminate flooring. Each box of flooring should include a notation of the total square footage it covers. One box of flooring may cover 30 square feet. Divide this number into the total square footage of the area you plan to cover.

How much does it cost to install 500 square feet of vinyl floors? Vinyl Flooring Prices per Square Foot (material only):
200 sq. ft.: $200 – $2,400
300 sq. ft.: $300 – $3,600
500 sq. ft.: $500 – $6,000
1,000 sq. ft.: $1,000 – $12,000

How much does it cost to install 1000 square feet of laminate floors? The national average cost to install a room of vinyl plank flooring is around $2,000 overall (according to HomeAdvisor). The average overall price ranges between $800 and $2,900. Average labor costs to install vinyl plank range between $1.50 and $6 per square foot.
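The sizing arithmetic above (area, waste factor, per-box coverage) is simple enough to automate. Here is an illustrative C# sketch; the 10% waste factor and 20 sq ft per carton are just the example figures quoted earlier, not universal constants, so substitute the numbers printed on your product:

using System;

class FlooringEstimator
{
    static void Main()
    {
        double lengthFt = 12, widthFt = 12; // room dimensions in feet
        double wasteFactor = 0.10;          // 10% extra for cuts and mistakes
        double sqFtPerBox = 20;             // coverage printed on the carton

        double area = lengthFt * widthFt;                   // 144 sq ft
        double needed = area * (1 + wasteFactor);           // 158.4 sq ft
        int boxes = (int)Math.Ceiling(needed / sqFtPerBox); // 8 boxes

        Console.WriteLine($"Buy {needed:F1} sq ft, i.e. {boxes} boxes");
    }
}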
{"url":"https://www.whyienjoy.com/how-much-extra-vinyl-flooring-to-buy/","timestamp":"2024-11-07T06:39:38Z","content_type":"text/html","content_length":"96514","record_id":"<urn:uuid:cd0bc1ab-24c7-4612-ad98-83e797c96acc>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00022.warc.gz"}
Logarithm and Antilogarithm Table: How to Use or View Value - Maths for Kids

Logarithm Table (download link can be found after the image: PDF and image format)
Download Logarithm Table in High Resolution: Image File
Download Print-Friendly PDF Version: File 1 and File 2

Antilogarithm Table (download link is after the image: PDF and image format)
Download Antilog Table: Image File
Download Print-Friendly Antilog Table PDF File: File 1 and File 2

How to View the Logarithm and Antilogarithm Table?
Note: the logarithm table is based on the standard log base, that is, base 10 (Log[10]).
Watch the video to understand how you can read the table and find the value of a log and an antilog.
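For readers who want a worked illustration before watching the video (the number 45.67 here is just an example): a base-10 logarithm splits into a characteristic (one less than the number of digits before the decimal point, so 1 for 45.67) plus a mantissa, which is the part read from the table (row 45, column 6, plus the mean difference for 7). The short C# check below mirrors that decomposition:

using System;

class LogTableCheck
{
    static void Main()
    {
        double x = 45.67;

        // log10(x) = characteristic + mantissa
        int characteristic = (int)Math.Floor(Math.Log10(x)); // 1
        double mantissa = Math.Log10(x) - characteristic;    // about 0.6596, as the table gives

        Console.WriteLine($"characteristic = {characteristic}, mantissa = {mantissa:F4}");

        // The antilog table reverses the lookup: 10^1.6596 is about 45.67
        Console.WriteLine($"antilog = {Math.Pow(10, characteristic + mantissa):F2}");
    }
}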
{"url":"https://maths.forkids.education/antilog-log-table-how-to-use-see-value/","timestamp":"2024-11-13T06:10:42Z","content_type":"text/html","content_length":"88796","record_id":"<urn:uuid:4f8a967f-cd07-4f3d-9159-0544a8a8a041>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00450.warc.gz"}
Solving Rational Equations

In the business world and technology fields there are times when math models a real-world situation with a rational equation. In order to solve these equations, we need to understand how to manipulate them, by combining terms or multiplying rational expressions. At times the equation will resemble a proportion. When this is the case, solve the equation by cross multiplication.

Example 1: Cross Multiplication

$\frac{5}{x+1}=\frac{10}{x-1}$

5(x - 1) = 10(x + 1) Cross multiply.
5x - 5 = 10x + 10 Use the distributive property.
-5x = 15 Subtract 10x and add 5 on each side.
x = -3 Divide both sides by -5.

When one side of the equation has addition or subtraction with ratios, solve by multiplying every term by the least common denominator (LCD).

Example 2: Multiply by the LCD

$\frac{8}{x}+\frac{14}{4}=4$

The LCD is 4x.
$4x\cdot\left(\frac{8}{x}+\frac{14}{4}\right)=4\cdot 4x$ Multiply both sides by the LCD.
32 + 14x = 16x Use the distributive property.
32 = 2x Subtract 14x from both sides.
16 = x Divide both sides by 2.

Often, when multiplying by the LCD, there will be two solutions.

Example 3: More than One Solution

$\frac{9}{x+3}+\frac{x-3}{x-1}=\frac{4}{x^2+2x-3}$

The LCD is (x + 3)(x - 1).
$(x+3)(x-1)\left(\frac{9}{x+3}+\frac{x-3}{x-1}\right)=\left(\frac{4}{x^2+2x-3}\right)(x+3)(x-1)$ Multiply by the LCD.
9(x - 1) + (x - 3)(x + 3) = 4 Eliminate common factors in the denominators.
9x - 9 + x^2 - 9 = 4 Multiply the binomials.
x^2 + 9x - 22 = 0 Combine like terms and subtract 4 on both sides.
(x - 2)(x + 11) = 0 Factor.
x = 2, or x = -11 Solve.

There are times that solutions are extraneous. When this happens, one of the solutions will return a denominator that is equal to zero.

Example 4: Extraneous Solutions

$\frac{6}{x-5}=\frac{8x^2}{x^2-25}-\frac{4x}{x+5}$

The LCD is (x - 5)(x + 5).
$(x-5)(x+5)\left(\frac{6}{x-5}\right)=\left(\frac{8x^2}{x^2-25}-\frac{4x}{x+5}\right)(x-5)(x+5)$ Multiply by the LCD.
6(x + 5) = 8x^2 - 4x(x - 5) Eliminate common factors in the denominators.
6x + 30 = 8x^2 - 4x^2 + 20x Use the distributive property.
4x^2 + 14x - 30 = 0 Move all terms to one side.
2(2x - 3)(x + 5) = 0 Factor.
$x= \frac{3}{2}$, or x = -5
x = -5 cannot be a solution, since the equation is undefined at this value.

Example 5: Batting Average

In the first 4 months of baseball season, Tommy hit 20 out of 85 at bats. How many consecutive hits does he need to make to have a batting average of 0.350?

Solution: The batting average is the number of hits divided by the number of at bats. Tommy's current average is 20/85 = 0.235. For Tommy's average to increase to 0.350, solve the equation
$0.35=\frac{20+x}{85+x}$
Solve by cross multiplication.
$0.35=\frac{20+x}{85+x}$ Set up the average.
0.35(85 + x) = 20 + x Cross multiply.
29.75 + 0.35x = 20 + x Use the distributive property.
9.75 = 0.65x Subtract 20 and 0.35x on both sides.
15 = x Divide both sides by 0.65.
Tommy needs 15 consecutive hits to raise his batting average to 0.350.

Example 6: Working Together

Sue and Ann work together to clean houses. It takes Sue 7 hours to clean one house. With the help of Ann, it only takes them 3 hours to clean. How long does it take Ann to clean the same house by herself?

The best way to answer this is to add the amount of work being done per hour. Sue cleans 1/7 of the house per hour. Ann cleans 1/A of the house per hour. Together, Sue and Ann clean 1/3 of the house in one hour. The LCD is 21A.
$21A\left(\frac{1}{7}+\frac{1}{A}\right)=\left(\frac{1}{3}\right)21A$ Multiply by the LCD.
3A + 21 = 7A Use the distributive property.
21 = 4A Subtract 3A from both sides.
5.25 = A Divide both sides by 4.
It takes 5 hours and 15 minutes for Ann to clean the house.
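For readers who like to check by machine, here is a small illustrative C# sketch (not part of the original lesson) that substitutes the candidate roots of Example 4 into each side of the equation and flags the extraneous one:

using System;

class RationalEquationCheck
{
    // Example 4: 6/(x-5) = 8x^2/(x^2-25) - 4x/(x+5)
    static double Lhs(double x) => 6 / (x - 5);
    static double Rhs(double x) => 8 * x * x / (x * x - 25) - 4 * x / (x + 5);

    static void Main()
    {
        foreach (double x in new[] { 1.5, -5.0 }) // candidate roots from factoring
        {
            // A candidate is extraneous if it makes any denominator zero.
            if (x == 5 || x == -5)
                Console.WriteLine($"x = {x}: extraneous (a denominator is zero)");
            else
                Console.WriteLine($"x = {x}: LHS = {Lhs(x):F4}, RHS = {Rhs(x):F4}");
        }
    }
}

Running it shows LHS = RHS = -1.7143 at x = 1.5, while x = -5 is rejected, matching the worked solution above.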
{"url":"https://www.softschools.com/math/algebra/topics/solving_rational_equations/","timestamp":"2024-11-14T21:06:36Z","content_type":"application/xhtml+xml","content_length":"29795","record_id":"<urn:uuid:2364809d-3b6c-4adb-9267-ad3ea1787c08>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00683.warc.gz"}
Long Downhill Braking and Energy Recovery of Pure Electric Commercial Vehicles

School of Automotive and Traffic Engineering, Jiangsu University of Technology, Changzhou 231001, China
Author to whom correspondence should be addressed.
Submission received: 19 December 2023 / Revised: 25 January 2024 / Accepted: 2 February 2024 / Published: 5 February 2024

The thermal decay of the brakes has a great impact on the long downhill braking stability of pure electric commercial vehicles. Based on the road slope and using the fuzzy control method, distribution strategies for the motor regenerative braking force and the friction braking force were designed to reduce the friction braking force, improve braking stability and recover braking energy. By establishing road driving conditions with different slopes, numerical analysis methods are used to verify the proposed control strategy. The results show that the vehicle maintains a constant downhill speed of 30 km/h under the 6% constant-slope condition, and the braking energy recovery rate reaches 50.93% with a 60% initial battery SOC, 50.89% with a 70% initial battery SOC, and 50.81% with an 80% initial battery SOC. The vehicle speed fluctuates slightly over the 18 km variable-slope route, but by adjusting the motor torque the vehicle can still be held at a constant 30 km/h, and the braking energy recovery rate reaches 49.96%. During constant-speed downhill driving, the friction braking force does not participate in braking, and the braking energy recovery rate is determined by the slope and the magnitude of the braking deceleration.

1. Introduction

New energy vehicles are developing rapidly, and pure electric commercial vehicles have attracted much attention in terms of energy saving, environmental protection and economy. With the continuous development of electric motor technology, the speed and torque that automotive electric motors can provide are also increasing, which enables the motor to supply continuous braking over long periods when a commercial vehicle drives downhill for a long time; this reduces the friction braking force, limits brake thermal degradation and recovers braking energy, improving energy utilization [ ].

The braking energy recovery rate is mainly determined by the driving conditions of the vehicle and the energy recovery control strategy. Control strategies can be formulated for different driving conditions to distribute the friction braking forces of the front and rear axles and the regenerative braking force of the motor in order to improve the braking energy recovery rate. Chenghu Ni et al. [ ] proposed using the drive motor's back-dragging torque as the auxiliary braking torque when the vehicle travels a long downhill and verified the feasibility through a dynamic model. Haichao Lan et al. [ ] proposed a dynamic-programming-based joint braking control strategy for long downhill driving of electric commercial vehicles, using a hierarchical control method for the braking force to effectively reduce the braking load borne by the brakes. Jiujian Chang et al. [ ] proposed an EMB-based braking energy recovery control strategy for pure electric vehicles to maximize the recovery of braking energy and effectively improve the braking energy recovery rate. Peilong Shi et al.
[ ] proposed a method for constructing and recognizing the long downhill braking conditions of heavy-duty trucks, which provides a basis for the continuous braking system to intervene in or withdraw from active control; the results show that it is able to effectively identify the braking status of the vehicle. Takuya Yabe et al. [ ] established a simulation model of the whole vehicle based on Matlab/Simulink to clarify the influence of motor capacity and battery current on the regenerated energy; relating the vehicle speed to the motor speed, and assuming an ideal condition in which the lateral motion of the vehicle is neglected, a variable transmission ratio is selected to optimize the braking energy recovery rate of the vehicle. In the study of Wei Zhang et al. [ ], a vehicle model is established based on Matlab/Simulink and ADVISOR, and a regenerative-braking-priority control strategy is adopted to distribute the axle load and braking force on uphill and downhill slopes to maximize the recovery of braking energy. In the study of Zhe Li et al. [ ], the braking mode is determined from the driver's braking intention by fuzzy theory and a logical threshold method, and braking energy is recovered by fuzzy control rules with road slope, braking intensity and speed as input parameters and the braking force proportional coefficient as the output parameter. Longlong Wei et al. [ ] proposed a brake power distribution control strategy for the front and rear wheels based on braking intent recognition, which takes the effect of the retarder on brake power distribution into account and maximizes braking energy recovery. The above studies, however, pay little attention to rear-axle-drive vehicles and to the coordination between continuous motor braking and friction braking.

Most traditional commercial vehicles need to be equipped with retarders to provide auxiliary braking and absorb part of the kinetic energy of the vehicle when traveling downhill. In contrast, pure electric commercial vehicles are themselves driven by electric motors, which can directly provide auxiliary braking; this avoids the need for a retarder, reduces the vehicle's own mass and allows braking energy to be recovered. While the electric motor acts as an auxiliary braking device, the friction braking force of the front and rear axles is distributed in a variable ratio to bring it close to the ideal braking force distribution curve, which improves braking efficiency compared with the traditional fixed-ratio distribution method. In this paper, a braking force distribution control strategy for a rear-axle-drive vehicle is proposed, which divides long downhill braking into two processes: first, the vehicle decelerates to the long-downhill constant driving speed in the shortest possible time; then the electric motor provides the main braking force and the friction brakes provide the residual braking force, holding the vehicle at a constant downhill speed.

2. Long Downhill Braking Control Strategy

This paper studies continuous braking during long downhill driving of pure electric commercial vehicles, where the main braking force is provided by the electric motor; the overall long downhill braking control strategy is proposed with the main purpose of reducing the friction braking force [ ], as shown in Figure 1. The overall strategy is divided into three parts:
Braking force calculation: referring to the vehicle dynamics model, calculate the required braking force of the entire vehicle based on the road slope i;
Braking force distribution: a fuzzy controller is established by taking the braking strength z, the vehicle speed v and the state of charge (SOC) of the battery as inputs, and the proportion k of the motor regenerative braking force in the required braking force of the entire vehicle as output; the remaining required braking force is distributed to the front and rear axles;
Actuation control: the motor regenerative braking force and the friction braking force are input into the established model, and the control strategy is verified by joint simulation using Simulink and Cruise (R2019.2) software.

2.1. Vehicle Demand Braking Force

When driving downhill for a long time, the vehicle is mainly subjected to the slope force, frictional resistance, air resistance and inertial force [ ]. Therefore, the required braking force for the entire vehicle is:

$F_{request} = F_i - F_f - F_w + F_j$

In the formula, $F_{request}$ is the required braking force for the entire vehicle; $F_i$ is the slope force; $F_f$ is the frictional resistance; $F_w$ is the air resistance; and $F_j$ is the inertial force.

2.2. Fuzzy Control

In the continuous long-downhill braking of pure electric commercial vehicles studied in this article, the regenerative braking force provided by the motor and the friction braking force act together in a nonlinear relationship. Fuzzy control can handle nonlinear systems using existing experience and knowledge, without requiring the specific structure or a mathematical model of the controlled object [ ]. Therefore, fuzzy control is adopted. A fuzzy controller is established with the braking strength z, the vehicle speed v and the battery SOC as inputs and the proportion k of the motor regenerative braking force in the required braking force of the entire vehicle as output. The fuzzy subset of the braking strength z is {L, M, H}, and the domain is [0, 1]. The fuzzy subset of the vehicle speed v is {L, M, H}, with a domain of [0, 100]. The fuzzy subset of the battery SOC is {L, M, H}, with a domain of [0, 1]. The fuzzy subset of the proportional coefficient k for the motor regenerative braking force is {LL, L, M, H, HH}, and the domain is [0, 1]. The membership functions of the braking strength z, the vehicle speed v, the battery SOC and the motor regenerative braking force ratio coefficient k are shown in Figure 2. Based on a large number of experiments and theoretical analysis, the fuzzy control rules have been formulated as shown in Table 1.

2.3. Calculation of Regenerative Braking Force for Motor

A synchronous motor has four-quadrant operating characteristics and can work as a generator when the vehicle brakes; its external characteristic curve when generating is similar to that when motoring [ ]. When the motor speed is below the rated speed, the motor operates below its rated power and the regenerative braking torque is limited by the rated torque of the motor; when the motor speed is above the rated speed, the regenerative braking torque is limited by the rated power of the motor.
From this, the mathematical model of the motor regenerative braking force can be derived as follows:

$F_e = \begin{cases} \dfrac{T_e i_0 \eta_t}{r}, & 0 < n < n_e \\ \dfrac{9550 P_e i_0 \eta_t}{n r}, & n \ge n_e \end{cases}$

In the formula, $F_e$ is the regenerative braking force provided by the motor; $T_e$ is the rated torque of the motor; $i_0$ is the transmission ratio; $\eta_t$ is the transmission efficiency; $r$ is the radius of the wheel; $P_e$ is the rated power of the motor; $n$ is the motor speed; and $n_e$ is the rated motor speed.

2.4. Remaining Demand Braking Force Distribution

When the vehicle brakes on a long downhill slope, the motor bears the main braking force. When the required braking force of the entire vehicle is higher than the maximum braking force that the motor can provide, the remaining required braking force is provided by the friction braking forces of the front and rear axles [ ].

If the required braking force were provided entirely by friction braking, the overall forces on the vehicle would be as shown in Figure 3. If the front- and rear-axle wheels are locked at the same time, the ground normal reaction forces on the front and rear axles are as follows:

$F_{z1} = \dfrac{G[b\cos\alpha + h_g(z + \sin\alpha)]}{L}, \quad F_{z2} = \dfrac{G[a\cos\alpha - h_g(z + \sin\alpha)]}{L}$

The front- and rear-axle braking forces of the vehicle at this point, on a long downhill road with any coefficient of adhesion, are:

$F_{x1} = \varphi F_{z1}, \quad F_{x2} = \varphi F_{z2}, \quad F_{x1} + F_{x2} = G\varphi\cos\alpha$

In the formula, $F_{z1}$ is the front normal reaction force; $F_{z2}$ is the rear normal reaction force; $F_{x1}$ is the front-axle friction braking force; $F_{x2}$ is the rear-axle friction braking force; $h_g$ is the height of the vehicle's center of mass; $L$ is the distance from the front axle to the rear axle; $a$ is the distance from the front axle to the center of mass; $b$ is the distance from the rear axle to the center of mass; $G$ is the vehicle's gravity; $\alpha$ is the slope angle of the road; and $\varphi$ is the coefficient of adhesion of the road surface.

According to Equations (3) and (4), the distribution curves of the front- and rear-axle friction braking forces are drawn as shown in Figure 4:

$F_{x1} = zF_{z1} = \dfrac{Gz[b\cos\alpha + h_g(z + \sin\alpha)]}{L}, \quad F_{x2} = zF_{z2} = \dfrac{Gz[a\cos\alpha - h_g(z + \sin\alpha)]}{L}$

The braking strength at point A on the I curve is z = 0.22. According to Equation (5), the coordinates of point A are $(F_{x1A}, F_{x2A})$, and the slope of the OA line segment is $K_{OA} = F_{x2A}/F_{x1A}$. Therefore, the distribution of the friction braking force on the OA segment is as follows:

$F_{x1} = \dfrac{Gz\cos\alpha}{1 + K_{OA}}, \quad F_{x2} = Gz\cos\alpha - F_{x1}$

The braking strength at point B is z = 0.53. Similarly, according to Equation (5), the coordinates of point B are $(F_{x1B}, F_{x2B})$, and the slope of segment AB is $K_{AB} = \dfrac{F_{x2B} - F_{x2A}}{F_{x1B} - F_{x1A}}$. Therefore, the distribution of the friction braking force on segment AB is:

$F_{x1} = \dfrac{Gz\cos\alpha}{1 + K_{AB}} + \dfrac{K_{AB}F_{x1A} - F_{x2A}}{1 + K_{AB}}, \quad F_{x2} = Gz\cos\alpha - F_{x1}$

Point C lies on the β line, and its braking strength z = 0.7 satisfies the following relationship:

$\dfrac{F_{x2}}{F_{x1}} = \dfrac{1-\beta}{\beta}, \quad F_{x1} + F_{x2} = Gz\cos\alpha$

In the formula, β is the distribution coefficient of the friction braking force between the front and rear axles. According to Equation (8), the coordinates of point C are $(F_{x1C}, F_{x2C})$, and the slope of segment BC is $K_{BC} = \dfrac{F_{x2C} - F_{x2B}}{F_{x1C} - F_{x1B}}$. Therefore, the distribution of the friction braking force on segment BC is:

$F_{x1} = \dfrac{Gz\cos\alpha}{1 + K_{BC}} + \dfrac{K_{BC}F_{x1B} - F_{x2B}}{1 + K_{BC}}, \quad F_{x2} = Gz\cos\alpha - F_{x1}$

When the braking strength z > 0.7, the braking is treated as emergency braking and the braking energy recovery mode is exited.
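To make the piecewise logic above concrete, here is a compact, illustrative C# sketch of Equations (2) and (6)–(9). The structure follows the paper, but several values are the editor's assumptions rather than the authors' code: the rated motor speed (3000 rpm is assumed; it is not listed in Table 2), and the segment slopes K_OA, K_AB, K_BC, which in the paper come from the points A, B, C on the I curve and are passed in here as plain parameters:

using System;

class FrictionBrakeDistribution
{
    // Motor and driveline data from Table 2 of the paper.
    const double Te = 420, Pe = 250, i0 = 7.05, eta = 0.95, r = 0.515;

    // Eq. (2): peak regenerative braking force available from the motor,
    // torque-limited below the rated speed, power-limited above it.
    static double MotorBrakingForce(double rpm, double ratedRpm) =>
        rpm < ratedRpm ? Te * i0 * eta / r
                       : 9550.0 * Pe * i0 * eta / (rpm * r);

    // Eqs. (6)-(9): front-axle share of the friction braking demand.
    // totalForce = G * z * cos(alpha); kSeg is the slope of the active
    // segment (K_OA, K_AB or K_BC); (fx1Anchor, fx2Anchor) is that
    // segment's anchor point (the origin for OA, point A for AB, B for BC).
    static double FrontAxleForce(double totalForce, double kSeg,
                                 double fx1Anchor, double fx2Anchor) =>
        totalForce / (1 + kSeg) + (kSeg * fx1Anchor - fx2Anchor) / (1 + kSeg);

    static void Main()
    {
        // Motor force while holding 30 km/h (assumed 3000 rpm rated speed).
        double wheelRadPerSec = 30.0 / 3.6 / r;
        double motorRpm = wheelRadPerSec * i0 * 60.0 / (2.0 * Math.PI); // about 1089 rpm
        Console.WriteLine($"Motor braking force at 30 km/h: {MotorBrakingForce(motorRpm, 3000):F0} N");

        // Friction split on the OA segment: anchoring at the origin with
        // kSeg = K_OA recovers Eq. (6) exactly.
        double total = 10000; // hypothetical G*z*cos(alpha), in N
        double kOA = 0.5;     // hypothetical segment slope
        double front = FrontAxleForce(total, kOA, 0, 0);
        Console.WriteLine($"Front {front:F0} N, rear {total - front:F0} N");
    }
}

Above z = 0.7 the strategy abandons energy recovery and falls back to the fixed β split of Eq. (10) in the next paragraph, which would be a one-line special case here.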
In order to distribute the braking force quickly and accurately to the front and rear axles, it is necessary to follow the β line allocation:

$F_{x1} = Gz\beta\cos\alpha, \quad F_{x2} = Gz\cos\alpha - F_{x1}$

2.5. Execution Control Constraint

In order to improve the service life of the vehicle's power battery, when the SOC of the battery is higher than 90%, the controller exits the regenerative braking mode and switches to friction braking, which then provides all the braking power [ ].

When the vehicle speed is very low, the speed of the motor is also very low, and the charging current generated is too small to charge the battery effectively. So, in order to stop the vehicle as soon as possible, when the vehicle speed is lower than 5 km/h, the controller exits the regenerative braking mode and switches to friction braking, which provides all the braking force [ ].

3. Establish Control Strategies and Vehicle Models

In order to verify the feasibility of the proposed control strategy and the effect of braking energy recovery, a pure electric commercial vehicle was selected [ ], and the parameters of the whole vehicle are shown in Table 2. MATLAB/Simulink is used to build the control strategy model, as shown in Figure 5, and Cruise (R2019.2) software is used to build the vehicle model; the General Map module in Cruise (R2019.2) sets the slope of the different driving sections, and the strategy written in Simulink is compiled into DLL files that are embedded in the vehicle model, as shown in Figure 6.

4. Results and Discussion

4.1. Analysis of Driving Conditions and Results of Fixed Slope and Long Downhill Driving

According to the requirements of the commercial vehicle braking test in GB12676-2014, the initial speed of the vehicle on the long downhill is 60 km/h, the constant downhill speed is 30 km/h, the road slope is 6%, and the driving distance is 6 km. Because different battery SOC values have different effects on energy recovery, the road slope and driving distance are kept unchanged, and the initial SOC of the power battery is set to 60%, 70% and 80%, respectively. The test results are shown in Figure 7, Figure 8 and Figure 9.

As can be seen from Figure 7, when the vehicle enters the long downhill, the vehicle speed is reduced from 60 km/h to 30 km/h within 200 m; the motor regenerative braking force then provides all the braking force, maintains the vehicle at a constant 30 km/h and completes the 6 km long downhill driving condition.

As can be seen from Figure 8, during the period when the vehicle speed is reduced from 60 km/h to 30 km/h, the vehicle decelerates, the motor provides a larger regenerative braking torque and the battery SOC rises rapidly; after the vehicle speed stabilizes, the motor regenerative braking torque tends to be stable and the battery SOC rises steadily and almost linearly, from the initial 60% to 63.3%, from the initial 70% to 73.15% and from the initial 80% to 82.98%. When the initial battery SOC is 60%, the braking energy recovery rate reaches 50.93%; when it is 70%, the recovery rate reaches 50.89%; and when it is 80%, the recovery rate reaches 50.81%. As the initial SOC of the battery increases, the braking energy recovery rate decreases, because when the battery SOC is high, the charging rate is lower than when the SOC is low. As a result, the braking energy recovery rate is reduced.
In the process of downhill driving at a constant speed of 30 km/h, the friction braking torque is always zero; the data from the first 500 m of driving are plotted in Figure 9 to show the change in friction braking torque. At the beginning of the test, as the driver presses the brake pedal, the friction braking torque and the motor regenerative braking torque rise rapidly; as the vehicle speed decreases, the driver releases the brake pedal, the friction braking torque gradually decreases to zero and no longer participates in braking, and at the same time the motor takes on all of the braking torque.

4.2. Analysis of Driving Conditions and Results of Variable Slope and Long Downhill Driving

The 18 km long road in the 122 km~140 km mileage section of National Highway 318 was selected as the test road condition; within this section, the 2% downhill distance was 3 km, the 3% downhill distance was 4 km, the 4% downhill distance was 3 km, the 5% downhill distance was 3 km, the 6% downhill distance was 2 km, the 2% uphill distance was 2 km, and the 3% uphill distance was 1 km. The road slope information is shown in Figure 10. The initial speed of the vehicle is set to 60 km/h, and the initial SOC of the power battery is set to 60% [ ]. Test results are shown in Figure 11, Figure 12 and Figure 13.

As can be seen in Figure 11, when the vehicle enters the long downhill, the vehicle speed is again reduced from 60 km/h to 30 km/h within 200 m. Since the gradient of the initial section of the road is 3%, the motor provides a small regenerative braking torque to keep the vehicle driving at 30 km/h. When the vehicle enters sections with different gradients, the vehicle speed fluctuates slightly, because the motor regenerative braking torque cannot be increased or decreased instantaneously. At the same time, the motor regenerative braking torque is adjusted with the change of the road gradient, which keeps the vehicle traveling downhill at a constant speed of 30 km/h.

As can be seen in Figure 12, the first 6 km are all downhill sections, and after the vehicle speed stabilizes, the battery SOC rises at a basically uniform rate. When driving into the uphill sections, the motor drives the vehicle until the next downhill section begins; no friction braking is needed to reduce the vehicle speed, the vehicle still maintains 30 km/h, and energy consumption is reduced. When the vehicle travels on a section of constant slope, the regenerative braking torque provided by the motor is basically constant. The growth rate of the battery SOC is determined by the road gradient and the magnitude of the regenerative braking torque: when the road gradient is larger, the motor needs to provide a larger torque, and the growth rate of the battery SOC increases; when the road gradient is smaller, the motor only needs to provide a smaller torque, and the growth rate of the battery SOC decreases. After the vehicle has traveled the 18 km section, the battery SOC increases from 60% to 63.75%, and the braking energy recovery rate reaches 49.96%.
As can be seen from Figure 13, the friction braking torque and the regenerative braking torque provided by the motor are smaller than those in Figure 9, because the gradient of the road that the vehicle first enters is smaller; the slope force on the vehicle therefore decreases, the braking torque demanded by the whole vehicle decreases, and the required deceleration distance decreases accordingly, but the overall trend of the braking force remains unchanged.

5. Conclusions

When pure electric commercial vehicles brake on long downhill slopes, the braking load rapidly increases, and the brakes are prone to thermal degradation. Based on the braking strength and the fuzzy control method, a braking force distribution strategy is designed. The following conclusions are drawn from the results of the long downhill driving conditions:

Under the constant-slope condition, the motor provides continuous braking so that the vehicle can maintain a constant downhill speed of 30 km/h; the braking energy recovery rate reaches 50.93% with an initial 60% battery SOC, 50.89% with an initial 70% battery SOC, and 50.81% with an initial 80% battery SOC. It follows that when the battery SOC reaches 80%, the braking energy recovery rate decreases, and, in order to improve battery service life, braking energy recovery is stopped when the battery SOC reaches 90%;

Under the variable-slope condition, the motor regenerative braking torque is determined by the magnitude of the road slope: when the slope decreases, the regenerative braking torque decreases and braking energy is recovered more slowly; when the slope increases, the regenerative braking torque increases and braking energy is recovered more quickly. At the same time, by adjusting the motor torque, the vehicle can still be kept at a constant speed of 30 km/h, and the braking energy recovery rate reaches 49.96%;

When the vehicle is kept at a constant driving speed of 30 km/h, the friction braking force is no longer involved in braking, which effectively prevents brake thermal degradation and improves the braking stability of the vehicle;

The joint continuous braking control strategy for pure electric commercial vehicles mentioned in the introduction mainly uses the electric motor to take over part of the auxiliary braking so as to reduce the temperature of the retarder, while the main continuous braking is still provided by the retarder; compared with the approach in this paper, it has a more complicated structure and its braking energy recovery rate is not very high. Using the motor's anti-drag characteristic for sustained braking is theoretically possible, and some authors have verified this by simulation, but research on sustained braking at high battery SOC is still lacking. To avoid overcharging the battery while still using the motor for sustained braking, the vehicle would inevitably need to be retrofitted with additional energy storage devices, or the recovered energy would have to be supplied directly to on-board equipment such as the vehicle's horn, air conditioning and other devices.

Author Contributions

Investigation, W.C. and C.L.; methodology, W.C. and C.L.; software, W.C.; resources, C.L.; writing—original draft, W.C.; writing—review and editing, W.C. and C.L. All authors have read and agreed to the published version of the manuscript.
This research was funded by the National Natural Science Foundation of China (Grant No. 11802108).

Data Availability Statement
The experimental data are contained within the article.

Conflicts of Interest
The authors declare no conflicts of interest.

Figure 2. (a) The membership functions of braking strength z; (b) the membership functions of vehicle speed v; (c) the membership functions of battery SOC; (d) the membership functions of motor regenerative braking force ratio coefficient k.
Figure 9. Front- and rear-axle friction braking torque and motor regenerative braking torque under constant gradient conditions.
Figure 13. Front- and rear-axle friction braking torque and motor regenerative braking torque under variable gradient conditions.

Table 1. Fuzzy control rules (inputs z, v, SOC; output k).
Number | z | v | SOC | k
1 | L | L | L | HH
2 | M | L | L | H
3 | H | L | L | M
4 | L | M | L | H
5 | M | M | L | M
6 | H | M | L | L
7 | L | H | L | M
8 | M | H | L | L
9 | H | H | L | LL
10 | L | L | M | HH
11 | M | L | M | H
12 | H | L | M | M
13 | L | M | M | H
14 | M | M | M | M
15 | H | M | M | L
16 | L | H | M | M
17 | M | H | M | L
18 | H | H | M | LL
19 | L | L | H | L
20 | M | L | H | L
21 | H | L | H | LL
22 | L | M | H | L
23 | M | M | H | L
24 | H | M | H | LL
25 | L | H | H | L
26 | M | H | H | L
27 | H | H | H | LL

Table 2. Whole-vehicle parameters.
Vehicle curb weight m/t: 3.05
Vehicle full weight m1/t: 6.15
Vehicle test mass m2/t: 4.05
Wheelbase L/m: 4.96
Distance from front axle to center of mass a/m: 2.05
Distance from rear axle to center of mass b/m: 2.91
Centroid height hg/m: 0.94
Windward area A/m^2: 5.3
Drag coefficient CD: 0.67
Wheel radius r/mm: 515
Rolling resistance coefficient f: 0.08
Main reducer reduction ratio i0: 7.05
Transmission efficiency ηt: 0.95
Motor peak power pm/kW: 320
Motor peak torque Tm/N·m: 500
Rated power of motor pe/kW: 250
Rated torque of motor Te/N·m: 420
Maximum battery voltage U/V: 480
Minimum battery voltage U/V: 400
Number of battery packs: 8

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
{"url":"https://www.mdpi.com/2032-6653/15/2/51","timestamp":"2024-11-14T17:29:15Z","content_type":"text/html","content_length":"429207","record_id":"<urn:uuid:358fd3e1-dd2c-454e-908a-9b219aa01f67>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00182.warc.gz"}
Stress inversion from initial polarities of a population of earthquakes: application to the Irpinia region (Southern Apennines).

The Southern Apennines are an active tectonic region of Italy that accommodates the differential motions between the Adria and Tyrrhenian microplates. Large destructive earthquakes have occurred both in historical and recent times, the last of which occurred in 1980 (Ms = 6.9). Even more than 30 years after the main event, the seismotectonic environment that encompasses the fault system on which the 1980 earthquake occurred shows continued background seismic activity, including moderate-sized events such as the 1996 (M 5.1), 1991 (M 5.1) and 1990 (M 5.4) events. In-situ stress data analysis and the analysis of seismological data for earthquake location, size and mechanisms have shown that this area is characterized by an extensional stress regime that is responsible for the present-day seismicity. We analyzed the instrumental seismicity of the Irpinia region (Southern Apennines), recorded by the ISNet (Irpinia Seismic Network, AMRA) and the nearby Italian National Seismic Network (Istituto Nazionale di Geofisica e Vulcanologia) stations during the last five years. We re-picked P- and S-wave arrival times for a total of 8663 P- and 4358 S-phases on a high-quality waveform dataset consisting of 62430 traces, for 980 microearthquakes with a local magnitude range of 0.1 ≤ ML ≤ 4.8, which occurred from August 2005 to April 2010. The seismic events inside the network have a maximum local magnitude of 3.2. To improve the quality of the hypocentral locations, we computed a one-dimensional (1D) velocity model and relocated the earthquakes using a double-difference (DD) technique. The relocated seismicity is more concentrated within the upper crust and is mostly clustered along the Apennine chain. The use of focal mechanisms to estimate the nature of the stress tensor in the seismogenic zone has frequently been employed to study the state of stress acting in a region. The best stress tensor that fits the data is represented by the orientation of the three principal stress axes and a scalar that describes the relative magnitudes of the principal stresses. We used the method of Rivera and Cisternas (1990), in which the raw first-motion polarities for a set of events are used directly for inversion. In order to obtain a more realistic error estimate on the orientation of the three principal axes, we decided to compute the confidence limits on the parameters of the model by using bootstrap resampling (Efron and Tibshirani, 1993). To simulate a repetition of the experiment, we resample the data (polarities) randomly from the original dataset. This new dataset has the same number of data as the original dataset, but some polarities are repeated two or more times while other polarities are absent. We re-grouped the polarities by event, inverted this dataset for the stress field, and repeated this process several times.
The method we use to define the ellipse works properly if the distribution is not very different from a Gaussian. All these analyses have allowed us to study the spatial variations of the stress field in detail. In particular, we identified the inner portion of the chain, characterized by shallow earthquakes (depths < 20 km) distributed along the chain axis (Irpinia area) with general NE-SW extension, and the external margin (Potentino area), characterized by dextral strike-slip kinematics and generally deeper seismicity.
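To make the resampling scheme concrete, here is a minimal Python sketch of the bootstrap described above. The function invert_stress is a hypothetical placeholder for the actual Rivera and Cisternas polarity inversion, which is not reproduced here:

import numpy as np

def bootstrap_stress_axes(polarities, invert_stress, n_boot=1000, rng=None):
    """Bootstrap the stress inversion as described in the abstract.

    polarities   : list of first-motion polarity observations (one per pick)
    invert_stress: callable mapping a polarity dataset to the three
                   principal stress axes (placeholder for the actual
                   Rivera & Cisternas inversion, not shown here)
    Returns one (n_boot, 3) array of unit vectors per principal axis.
    """
    rng = np.random.default_rng(rng)
    n = len(polarities)
    samples = {"sigma1": [], "sigma2": [], "sigma3": []}
    for _ in range(n_boot):
        # Resample with replacement: some polarities appear two or more
        # times, others are absent, mimicking a repeated experiment.
        idx = rng.integers(0, n, size=n)
        s1, s2, s3 = invert_stress([polarities[i] for i in idx])
        samples["sigma1"].append(s1)
        samples["sigma2"].append(s2)
        samples["sigma3"].append(s3)
    return {k: np.asarray(v) for k, v in samples.items()}

def confidence_ellipse(axis_samples):
    """Center and confidence ellipse of one axis population via the
    eigen-decomposition of its inertia (orientation) tensor."""
    X = axis_samples / np.linalg.norm(axis_samples, axis=1, keepdims=True)
    T = X.T @ X / len(X)          # orientation (inertia) tensor
    w, v = np.linalg.eigh(T)      # eigenvalues set the semi-axis sizes,
    center = v[:, np.argmax(w)]   # eigenvectors give the orientation
    return center, w, v

Each bootstrap run yields one point cloud per principal axis, from which confidence_ellipse recovers the center of the distribution and the 1-sigma and 2-sigma regions.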
{"url":"https://iris.unisannio.it/handle/20.500.12070/7047","timestamp":"2024-11-11T10:52:03Z","content_type":"text/html","content_length":"57391","record_id":"<urn:uuid:68bf32e0-4a6e-4524-a4e8-dbddf66fde05>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00596.warc.gz"}
MATH2048 - Honours Linear Algebra II - 2021/22
• Please be reminded to fill in the teaching evaluation form online. It will be available from 10:30 am on November 16 until 11:59 pm on November 18.
• (2021-09-18) All exercises in the textbook are collected in a single file (see attached) for students who don't have access to the textbook. [Download file]
• (2021-09-17) Homework 2 has been posted. Please submit your solutions via Blackboard on or before 2021-09-24.
• (2021-09-13) Submission of homework assignments
□ To reduce the risk of spreading the novel coronavirus, you are not recommended to submit your homework assignment physically. As such, you will submit your assignment by uploading a scanned copy via the Blackboard system.
□ Log onto https://blackboard.cuhk.edu.hk/ and click on our course 2021R1 Honours Linear Algebra II (MATH2048). Click on "course contents" and click on "Homework X (Due...)". Follow the instructions therein to upload your solution. An illustration can be downloaded below.
□ Please scan your solution into a single pdf file and save it with a name like YourStudentID_HW1.pdf. Upload it via the Blackboard system. There are several useful apps for taking a picture of your solution and scanning your document (such as CamScanner HD and Microsoft Lens). [Download file]
• There will be no tutorial in the first week.
General Information
• Prof. Ronald Lok Ming LUI □ Office: LSB 207 □ Tel: 3943-7975 □ Email:
Teaching Assistant
Time and Venue
• Lecture: Tues 10:30am-12:00pm (ERB 404); Thurs 5:30pm-6:15pm (MMW 707)
• Tutorial: Thurs 4:30pm-5:15pm (MMW 707)
Course Description
This course is a continuation of Honours Linear Algebra I (MATH 1038). It is a second course on linear algebra and will cover basic concepts of abstract vector spaces over a general field, direct sums, direct products, quotient spaces, existence of a basis by Zorn's lemma, linear transformations, dual spaces, eigenvalues and eigenvectors, diagonalizability, operators on inner product spaces, orthogonality and the Gram-Schmidt process, adjoint, normal and self-adjoint operators, spectral theorems, bilinear forms and Jordan canonical forms. More emphasis will be put on the theoretical understanding of basic concepts in linear algebra.
Textbook
• Friedberg, Insel and Spence, Linear Algebra, 4th edition, Pearson.
Class Notes
Tutorial Notes
Assessment Scheme
Homework 10%
Midterm exam 1 (Oct 7, in class) 20%
Midterm exam 2 (Nov 11, in class) 20%
Final exam 50%
Honesty in Academic Work
The Chinese University of Hong Kong places very high importance on honesty in academic work submitted by students, and adopts a policy of zero tolerance on cheating and plagiarism. Any related offence will lead to disciplinary action including termination of studies at the University. Although cases of cheating or plagiarism are rare at the University, everyone should make himself / herself familiar with the content of the following website: and thereby help avoid any practice that would not be acceptable.
Assessment Policy
Last updated: December 07, 2021 22:01:59
{"url":"https://www.math.cuhk.edu.hk/course/2122/math2048","timestamp":"2024-11-03T04:27:19Z","content_type":"text/html","content_length":"61535","record_id":"<urn:uuid:c5d17d64-b38c-4f70-a1c7-465aa5368445>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00000.warc.gz"}
Learning Tip: The 5 Second Gutcheck
To check if you're flexible, you don't need a battery of tests. Just bend down and touch your toes. Was it effortless? If it's not (and it's not), you know you need to stretch more. The goal isn't the splits, just some self-determined level.
My math goals are similar. I don't need expert proficiency, just a "touch your toes" understanding for the topics I care about.
Example: Exponent Gutcheck
Here's an internal dialog I might have to verify my understanding of exponents.
Gutcheck: Roughly speaking, what's $2^{100}$?
It's a large, even, positive number. (This intuition should appear almost instantly. If it takes 10 seconds of thinking to realize it's large, even, or positive, exponents aren't natural.)
Gutcheck: Roughly speaking, what's $2^{-100}$?
It's a tiny, almost undetectable positive decimal. Intuition: It's like going "back in time" by 100 doublings.
Gutcheck: Roughly speaking, what's $2^i$?
Uh oh. Imaginary exponents! With enough intuition, you realize: "It's on the unit circle, at about ln(2) ~ 0.693 radians."
There's a few gutchecks here. The first is that an imaginary exponent puts you on the unit circle (no matter the base). The next level is a rough "important constant" gutcheck, where you remember ln(2) ~ 0.693. (Not as important, but good to remember. It helps with things like the Rule of 72.)
Gutcheck: Roughly speaking, what's $i^i$?
Oh, here's a tricky one. Remember how we blurted out that $2^{100}$ was large, even, and positive? How proud we were of our quick thinking? Well, what can you say about $i^i$, hotshot?
Yikes. Realizing I couldn't instantly rattle off any properties of $i^i$ meant my intuition for exponents wasn't complete. After getting an intuition for imaginary exponents, the thought becomes: $i^i$ starts as growth pointing sideways, whose direction is rotated again. It's a positive real number less than 1.0. Phew.
If I truly understand exponents, the gutcheck for $2^{100}$ and $i^i$ should be similar in speed and detail. A painful stretch means I need more understanding.
Additional Examples
The gutcheck process doesn't quite translate to text. These internal back-and-forths happen pre-verbally: I think of a question and quickly feel/visualize/remember an analogy. (It's a gutcheck, not a think-aloud-for-minutes check.) Here's a few examples I run through from time to time:
Imaginary numbers: What's the cube root of -1?
• Thought: Ok, $i^2 = -1$ means we go from 1 to -1 in two steps. Getting there in 3 steps means a 60-degree rotation (180/3). Oh, we can go the other way too (-60 degrees). Oh, we can flip 180 degrees (180 + 180 + 180 = 360 + 180 = net 180-degree rotation). So there are 3 cube roots of -1.
Fourier Transform: What's the transform of [1 0 0 0]?
• Thought: We want 4 equally strong frequencies (0Hz, 1Hz, 2Hz, 3Hz). They split the strength "1" between them, so we have [.25 .25 .25 .25] (using the notation in the Fourier Transform article).
Trigonometry: What's the connection between the 6 major trig functions?
• Thought: I think "dome, wall, ceiling" and visualize this trigonometry diagram:
Calculus: Explain the derivative of $x^3$
• Thought: $x^3$ is really $x \cdot x \cdot x$. We have 3 perspectives, each seeing a change of $x \cdot x$. The result is $x^2 + x^2 + x^2 = 3x^2$. I also visualize a cube with plates added to it.
Bayes Theorem: What's the plain-English description?
• Thought: chance evidence is real = true positives / (true positives + false positives)
Exponents: What does discrete vs. compound exponential growth look like?
• Thought: I see continuous exponential growth as "filling in the gaps" left by discrete growth.
The key element is speed: an intuitive response should bubble up as you hear the question. Struggling for an hour to touch my toes, though admirable, still means I'm not flexible enough. The goal isn't learning minutia, it's a working understanding of an idea, enough to solve a problem without tremendous effort.
It's a diagnostic, not a value judgement. If I struggle, I simply need a better intuition.
Strangely enough, not everyone wants to keep math insights top-of-mind. But pick something that's important to you and occasionally try a 5 second gutcheck on the essentials.
Happy math.
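P.S. For anyone who wants a numeric sanity check of these gutchecks, a few lines of Python (using the standard math and cmath modules) confirm them:

import cmath, math

# 2^i should land on the unit circle at angle ln(2) ~ 0.693 radians.
z = 2 ** 1j
print(abs(z), cmath.phase(z), math.log(2))   # ~1.0, ~0.693, ~0.693

# i^i should be a positive real number less than 1.0 (it's e^(-pi/2)).
w = 1j ** 1j
print(w, math.exp(-math.pi / 2))             # (0.2078...+0j), 0.2078...

# The three cube roots of -1: rotations of 60, 180, and -60 degrees.
roots = [cmath.exp(1j * math.pi * k / 3) for k in (1, 3, 5)]
print([r ** 3 for r in roots])               # each one is ~ -1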
{"url":"https://betterexplained.com/articles/gutcheck/","timestamp":"2024-11-05T02:26:07Z","content_type":"text/html","content_length":"35838","record_id":"<urn:uuid:002c7fc0-8eba-4b80-85ea-03ec85bf0f25>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00465.warc.gz"}
RLC Parallel circuit analysis with solved problem
An RLC parallel circuit is a circuit in which all the components are connected in parallel across the alternating current source. In contrast to the RLC series circuit, the voltage drop across each component is common, and that's why it is treated as the reference for phasor diagrams. In the circuit diagram, it can be observed that the voltage across the resistor, capacitor, and inductor is the same, while the resistor current I_R, capacitor current I_C, and inductor current I_L are different. All these individual currents differ from each other as well as from the source current I_S, and the vector summation of the individual currents is equal to the source current.
The Impedance of the RLC Parallel Circuit:
The reciprocal of impedance is called admittance, symbolized by Y; its unit is the mho, the inverse of the ohm. In contrast to the series RLC circuit, in a parallel RLC circuit the total impedance follows from
1/Z = √((1/R)² + (1/X_C − 1/X_L)²)
A convenient way to calculate the same impedance is to use the admittance Y:
Z = 1/Y
As impedances add up in an RLC series circuit, admittances add up in an RLC parallel circuit.
Conductance, Susceptance, and Admittance:
The reciprocal of impedance is admittance, represented by Y; its unit is the mho or siemens. The physical meaning of admittance is how easily AC flows in a circuit containing resistance and reactance.
Conductance is the reciprocal of resistance, represented by the symbol G; its unit is the mho or siemens. The physical meaning of conductance is how easily AC flows through a circuit that contains only a resistance component:
G = 1/R
The reciprocal of reactance is called susceptance, B, and its unit is the siemens, S. Its physical meaning is how easily current can flow in a reactance-containing circuit:
B_L = 1/X_L, B_C = 1/X_C
Susceptance has the opposite sign compared to the corresponding reactance: capacitive susceptance is positive and inductive susceptance is negative.
Admittance Triangle in the RLC Parallel Circuit:
Admittance is comparatively easier to calculate for parallel circuits than impedance. The admittance, conductance, and susceptance can be drawn as a triangle, known as the admittance triangle. The admittance triangle is a reflection of the impedance triangle along the horizontal axis, where resistance is replaced with conductance, reactance with susceptance, and impedance with admittance. Applying the Pythagorean theorem to the admittance triangle, the admittance is
Y = √(G² + (B_C − B_L)²)
and the corresponding impedance is
Z = 1/Y = 1/√((1/R)² + (1/X_C − 1/X_L)²)
Power Factor in the RLC Parallel Circuit:
The power factor is a ratio that tells how much of the power is utilized for performing actual work. The power factor is the cosine of the phase angle. The cosine of the angle in the admittance triangle gives
PF = cos φ = G/Y
Example with Solution:
For the given circuit, find the source current I_S, each branch current I_R, I_L, and I_C, and the impedance Z. Also, draw the admittance triangle and the current triangle. (The component values were given in a figure that is not reproduced here, so the numerical steps are summarized.)
To calculate the impedance, first compute the inductive and capacitive reactances X_L and X_C, then the corresponding susceptances B_L and B_C together with the conductance G. From these, the admittance Y and the impedance Z = 1/Y follow, and the values of G, B, and Y can be shown on the admittance triangle.
The power factor of the circuit is cos φ = G/Y, where φ is the angle between current and voltage.
To find the current in each branch, divide the common voltage by that branch's resistance or reactance; the source current is the vector combination of all these individual currents. The same result can be drawn as a phasor (vector) representation.
Conclusion:
1. The resistor, inductor, and capacitor are connected in parallel across an AC source.
2. The voltage is common to all components and is taken as the reference for the phasor diagram.
3. The impedance of an RLC parallel circuit follows from the addition of admittances.
4. The source current is the vector summation of the individual currents.
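As a worked illustration (with hypothetical component values, since the article's original circuit figure is not reproduced), the formulas above can be evaluated in a few lines of Python:

import math

# Hypothetical component values, chosen only for illustration.
R, L, C = 50.0, 0.20, 10e-6        # ohms, henries, farads
V, f = 230.0, 50.0                 # volts, hertz

w = 2 * math.pi * f
XL, XC = w * L, 1 / (w * C)        # inductive and capacitive reactance
G, BL, BC = 1 / R, 1 / XL, 1 / XC  # conductance and susceptances

Y = math.hypot(G, BC - BL)         # admittance from the admittance triangle
Z = 1 / Y                          # impedance is the reciprocal of admittance
pf = G / Y                         # power factor = cos(phase angle)
phi = math.degrees(math.acos(pf))

IR, IL, IC = V / R, V / XL, V / XC # branch currents share the same voltage
IS = V * Y                         # source current = vector sum of branches
print(f"Z = {Z:.1f} ohm, Y = {Y:.4f} S, pf = {pf:.3f}, phi = {phi:.1f} deg")
print(f"IR = {IR:.2f} A, IL = {IL:.2f} A, IC = {IC:.2f} A, IS = {IS:.2f} A")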
{"url":"https://electric-shocks.com/rlc-parallel-circuit-analysis-problem/","timestamp":"2024-11-13T20:59:33Z","content_type":"text/html","content_length":"75918","record_id":"<urn:uuid:3a6f549c-02bb-4e77-806e-88f7b67ab7df>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00838.warc.gz"}
Kirchhoff's Voltage Law
Kirchhoff's Voltage Law: Understanding the Conservation of Energy in Electrical Circuits
Fundamentals of Kirchhoff's Voltage Law
KVL, a cornerstone of electrical circuit theory, states that the total of all electrical potential differences around any closed loop is zero.
$$ \sum_{k=1}^{n} V_k = 0 $$
• $V_k$ represents the voltage across the k-th component in the loop
Introduction to KVL
Kirchhoff's Voltage Law, one of Kirchhoff's two fundamental circuit laws, is essential for analyzing complex electrical circuits. It enables engineers and technicians to calculate unknown voltages, currents, and resistances by ensuring that the energy supplied by sources equals the energy consumed by circuit elements.
Historical Background of KVL
In 1845, the German physicist Gustav Kirchhoff introduced KVL, building upon Georg Ohm's work. This law provided a systematic method for solving intricate circuits, marking a significant advancement in electrical engineering and physics. Kirchhoff's contributions extended beyond voltage laws, including current laws and other principles foundational to circuit theory and analysis.
Core Principles of KVL
KVL is based on several key principles that ensure its effectiveness in various electrical scenarios:
• Energy Conservation: The total energy gained per charge around a loop equals the total energy lost per charge.
• Closed Loop Application: KVL applies to any closed loop in a circuit, regardless of the number of components.
• Sign Convention: Properly account for voltage rises and drops using consistent sign conventions.
• Passive Sign Convention: For passive components like resistors, the voltage drop is positive when current enters the positive terminal.
• Path Independence: KVL holds true irrespective of the path taken around the loop, provided the loop is closed.
Applications of KVL
KVL is widely utilized across various fields and applications, including:
• Circuit Analysis: Solving for unknown voltages, currents, and resistances in intricate electrical circuits.
• Electrical Engineering: Designing and optimizing electrical systems and components.
• Power Distribution: Analyzing and managing power flow in electrical grids and distribution networks.
• Electronics: Designing electronic devices and ensuring proper voltage levels across components.
• Automotive Industry: Diagnosing electrical issues and designing vehicle electrical systems.
Related Electrical Equations
KVL often works alongside other fundamental electrical formulas, such as Ohm's Law and Kirchhoff's Current Law, to analyze and design circuits.
Practical Examples of KVL
Applying KVL is crucial for solving real-world electrical problems. Here are some practical examples:
Example 1: Calculating Unknown Voltages
Consider a simple loop with three components: a 12 V battery, a resistor with a voltage drop of 5 V, and another resistor with an unknown voltage drop. Applying KVL:
$$ 12\,\text{V} - 5\,\text{V} - V_{unknown} = 0 $$
Solving for $V_{unknown}$:
$$ V_{unknown} = 12\,\text{V} - 5\,\text{V} = 7\,\text{V} $$
Therefore, the voltage drop across the unknown resistor is 7 volts.
Example 2: Analyzing a Complex Circuit
In a circuit loop with a 24 V battery and three resistors with voltage drops of 8 V, 6 V, and an unknown voltage drop, apply KVL:
$$ 24\,\text{V} - 8\,\text{V} - 6\,\text{V} - V_{unknown} = 0 $$
Solving for $V_{unknown}$:
$$ V_{unknown} = 24\,\text{V} - 8\,\text{V} - 6\,\text{V} = 10\,\text{V} $$
Thus, the voltage drop across the unknown resistor is 10 volts.
Common Misconceptions About KVL
While KVL is straightforward, several misconceptions can lead to misunderstandings:
• KVL Applies Only to Simple Circuits: KVL is valid for any closed loop in both simple and complex circuits.
• Sign Convention Doesn't Matter: Properly accounting for voltage rises and drops using sign conventions is crucial for accurate application of KVL.
• KVL Ignores Internal Resistance: KVL accounts for all voltage drops, including those from internal resistances of power sources.
• KVL Can Be Applied Independently of KCL: While KVL and KCL are separate laws, they are often used together for comprehensive circuit analysis.
• KVL Isn't Applicable in AC Circuits: KVL applies to both DC and AC circuits, though it must account for phase differences in AC.
Limitations of KVL
While KVL is fundamental in electrical engineering, it has certain limitations that are important to understand:
• Non-Ideal Components: Real-world components like inductors and capacitors introduce complexities such as inductive and capacitive reactances.
• High-Frequency Circuits: At very high frequencies, parasitic inductances and capacitances can affect KVL applications.
• Distributed Elements: In circuits with significant distributed elements, such as transmission lines, KVL may need additional considerations.
• Magnetic Fields: Changing magnetic fields can induce electromotive forces (emf), complicating KVL applications.
• Power Sources with Internal Dynamics: Batteries and other power sources with internal processes may not strictly adhere to KVL under certain conditions.
Understanding these limitations is crucial for accurate circuit analysis and design, especially in advanced or high-performance electrical systems.
Frequently Asked Questions (FAQs)
What is Kirchhoff's Voltage Law?
KVL states that the sum of all electrical potential differences around any closed loop in a circuit is zero. This principle is based on the conservation of energy within electrical circuits.
Who formulated Kirchhoff's Voltage Law?
Gustav Kirchhoff, a German physicist, formulated KVL in 1845 as part of his contributions to electrical circuit theory.
How is KVL applied in circuit analysis?
KVL is applied by writing equations for the sum of voltage drops and rises around closed loops in a circuit. These equations are then solved simultaneously to find unknown voltages, currents, or resistances.
Does KVL apply to AC circuits?
Yes, KVL applies to both DC and AC circuits. However, in AC circuits, it must account for the phase differences between voltage and current due to inductive and capacitive elements.
Can KVL be used for non-linear components?
KVL can be applied to circuits with non-linear components, but the analysis becomes more complex. Non-linear components require additional considerations, such as piecewise analysis or iterative methods.
What is the difference between KVL and Ohm's Law?
Ohm's Law describes the relationship between voltage, current, and resistance in a single component using the formula V = I × R. KVL, on the other hand, applies to entire loops in a circuit, ensuring that the sum of all voltage rises and drops equals zero.
Practical Tips for Using KVL
• Identify Closed Loops: Clearly identify all closed loops in the circuit before applying KVL.
• Consistent Sign Convention: Use a consistent sign convention for voltage rises and drops to avoid calculation errors.
• Use Multiple Equations: For complex circuits, write multiple KVL and KCL equations to solve for all unknowns.
• Simplify Circuits: Where possible, simplify circuits by combining series and parallel components before applying KVL.
• Check Your Work: Verify that the sum of voltages around each loop equals zero to ensure accuracy.
• Consider All Components: Remember to include all voltage sources and drops, including those from internal resistances.
Frequently Used Tools for KVL
Several tools can assist in applying KVL effectively:
• Multimeter: Measures voltage, current, and resistance in electrical circuits.
• Circuit Simulation Software: Tools like the Falstad Circuit Simulator allow for virtual experimentation with circuits.
• Ohm's Law Calculators: Online tools that compute voltage, current, or resistance based on input values.
• Graphing Calculators: Useful for solving simultaneous equations derived from KVL and KCL.
• Electrical Design Software: Software like Autodesk EAGLE helps in designing and analyzing complex circuits.
Check Out Our KVL Calculator
Need to perform quick calculations for voltages in your circuits? Our interactive KVL Calculator makes it easy to compute electrical values accurately and efficiently.
KVL is an essential tool in electrical engineering, providing a foundational understanding of energy conservation within electrical circuits. Mastery of this law enables engineers and technicians to design efficient electrical systems, troubleshoot issues, and innovate new technologies. Whether you're a student, a professional, or an enthusiast, a solid grasp of KVL is indispensable for navigating the complexities of electrical circuits. By leveraging KVL alongside other electrical principles, you can enhance your ability to analyze and create robust electrical solutions that meet diverse needs and challenges.
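As a sketch of KVL in practice, the loop equations of a small resistive network can be solved numerically. The two-mesh circuit below is hypothetical and chosen only to illustrate the method:

import numpy as np

# Hypothetical two-loop resistive circuit (values are illustrative, not
# taken from the examples above): a 24 V source and R1 = 8 ohm in mesh 1,
# R2 = 6 ohm in mesh 2, and R3 = 4 ohm shared by both meshes.
R1, R2, R3, Vs = 8.0, 6.0, 4.0, 24.0

# KVL around each closed loop, written in matrix form:
#   mesh 1: (R1 + R3) i1 - R3 i2 = Vs
#   mesh 2: -R3 i1 + (R2 + R3) i2 = 0
A = np.array([[R1 + R3, -R3],
              [-R3, R2 + R3]])
b = np.array([Vs, 0.0])
i1, i2 = np.linalg.solve(A, b)

# Verify KVL: the sum of rises and drops around each loop should be ~0.
loop1 = Vs - R1 * i1 - R3 * (i1 - i2)
loop2 = -R3 * (i2 - i1) - R2 * i2
print(i1, i2, round(loop1, 12), round(loop2, 12))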
{"url":"https://turn2engineering.com/equations/kirchhoffs-voltage-law","timestamp":"2024-11-13T09:41:44Z","content_type":"text/html","content_length":"220147","record_id":"<urn:uuid:f4adc62f-880b-47c4-94fa-26012aa9ec5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00047.warc.gz"}
What is a tree data structure?
A tree is a widely used, powerful data structure that is a viable addition to a programmer's toolkit. A lot of complex problems can be solved relatively easily if the data is stored in a tree. It is a nonlinear data structure, compared to arrays, linked lists, stacks and queues, which are linear data structures. A tree stores information naturally in a hierarchical style.
Terminology in Trees
Root
• The topmost node in a tree. The tree originates from this node.
• This is the only node in the tree that has no parent.
• There is exactly one root in each tree data structure.
Edge
• The line that connects two nodes is known as an edge.
• If there are n nodes in a tree, there exist n-1 edges.
• An edge is also referred to as a 'branch' in trees.
Child
• A node that is a descendant of some node is called a child node.
• All nodes (except the root) are child nodes.
Parent
• The parent node is the immediate predecessor of a node.
• Any node which has one or more children is a parent node.
Siblings
• A group of nodes with the same parent is called siblings.
Leaf
• A node that does not have any child node is a leaf node.
• Leaf nodes are also known as external or terminal nodes.
Internal node
• A node with at least one child node is called an internal node.
Height
• The height of a node is the maximum number of edges between that node and a leaf node.
• The height of the tree is the height of the root node.
Level
• The level of a node is the number of edges on the path between that node and the root node.
• The level of the root is 0.
Nodes in a tree can have any number of children, but the simplest and most common type of tree is a binary tree.
Ancestor: Any node which precedes a node, i.e. itself, its parent, or an ancestor of its parent.
Descendant: Any node which follows a node, i.e. itself, its child, or a descendant of its child.
Ordered Tree: A rooted tree in which an ordering is specified for the children of each vertex. For example: Binary Search Tree, Heap, etc.
Size of a tree: The number of nodes in the tree.
A node in a binary tree can have a maximum of 2 child nodes.
Binary Tree Properties
(Using the definitions above, where height counts edges and the root is at level 0:)
1. Maximum number of nodes in a tree of height h = 2^(h+1) - 1
2. Minimum number of nodes in a tree of height h = h + 1 (one node at each level)
3. Maximum number of nodes on a level l = 2^l
4. In a binary tree with N nodes, the minimum possible number of levels is ⌈log2(N+1)⌉, i.e. the minimum height is ⌈log2(N+1)⌉ - 1.
5. In a binary tree where every node has 0 or 2 children, the number of leaf nodes is always one more than the number of nodes with two children.
Binary Tree Types
There are primarily 5 types of binary trees:
1. Rooted Binary Tree
A rooted binary tree is the simplest type of binary tree, where each node has either 0, 1 or 2 child nodes.
2. Full Tree
Every node has either 0 or 2 children. No node should have only 1 child node. It is also known as a strictly binary tree.
3. Complete Tree
In a complete tree, all internal nodes have 2 children and all leaf nodes are on the same level. In other words, all levels in a complete tree are entirely filled to maximum capacity. It is also known as a perfect binary tree.
4. Almost Complete Tree
In an almost complete tree, all levels are entirely filled except the last level. The last level is filled from left to right.
5. Skewed Tree
A skewed tree is one in which all nodes have only one child except the leaf node. In a left-skewed tree, all child nodes are left child nodes and in a right-skewed tree, all child nodes are right child nodes.
Advantages and applications of trees
As mentioned above, a tree is a very powerful data structure and has a lot of advantages and applications. Some of these are:
• It is used to represent structural relationships between its data elements.
• It is an effective tool to represent hierarchies like a folder structure, XML data or an organization structure.
• A heap is an almost complete binary tree implemented using arrays.
• There are also variations of trees like tries, B+ trees, suffix trees, syntax trees, etc., which are designed to provide specific benefits in their applications. For example, B+ trees are used to implement indexing in databases.
• The binary search tree (BST) is the most common variation of the binary tree, providing extremely efficient insertion, deletion and searching operations.
• Like linked lists and unlike arrays, trees don't have an upper limit on the number of nodes, as nodes are linked using pointers.
Representation of Binary Trees
We've all learnt what a binary tree is, what its properties are and what its advantages are, but do you know how to represent a binary tree? Confused? No problem. You are given a simple binary tree; how do you suppose you're going to store it in your program? This is known as the representation of a binary tree. There are basically two primary ways to represent binary trees:
1. Linked Representation: The parent nodes contain the addresses of their children.
2. Sequential Representation: Represent the level-order sequence in an array.
Linked Representation
In the linked representation of a binary tree, every node is an object of a class. The class represents a node of the tree, and each class has three main data members: the value of the node, the reference to the left child and the reference to the right child.
class Node
    int val
    Node left
    Node right
left holds the reference to the left child of a node and right holds the reference to the right child of the node. The entire tree is constructed in the same way. But what if there is no left child or no right child? Well, if there is no left child, the reference to the left child will hold the value NULL.
• Difficult to implement, but simple to understand.
• Easy to augment the binary tree, for example with additional data or references.
• Does not waste memory.
Sequential Representation
In the sequential representation, an array is used which stores the values of the nodes of each level from left to right. The size of this array will be 2^(h+1) - 1, where h is the height of the tree; that is the number of positions in a complete tree of height h. If a node is not present (known as a hole), it can be represented by a special character such as '-'.
As you can see, whether a node is present or not, memory is allocated for a complete binary tree of height h. So if there are a lot of holes in the tree, the sequential representation might be a bad idea, as it takes up a lot of space.
But how do we identify the child or parent node of a specific node in the sequential representation? Let us consider a node at index i (using 1-based indexing, with the root at index 1).
Parent of a node: at index ⌊i/2⌋
Left child of a node: at index i*2
Right child of a node: at index i*2+1
• Difficult to understand but simple to implement.
• Less prone to bugs during implementation.
• Easy to compress and portable, as its structure does not depend on memory addresses.
These two are the most commonly used representations of a binary tree. You can also create some other representation that suits your needs.
Happy Coding! Enjoy Algorithms!
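As an addendum, here is a minimal Python sketch of both representations; it is one possible illustration, not the only way to implement them:

class Node:
    """Linked representation: each node stores a value and references
    to its left and right children (None when a child is absent)."""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

# Sequential (array) representation helpers, 1-based; index 0 is unused.
def parent(i):      return i // 2
def left_child(i):  return 2 * i
def right_child(i): return 2 * i + 1

# Build the tree with root 1 and children 2, 3 both ways.
root = Node(1)
root.left, root.right = Node(2), Node(3)

arr = [None, 1, 2, 3]          # None plays the role of '-' for a hole
assert arr[parent(3)] == 1     # parent of the node at index 3 is the root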
{"url":"https://afteracademy.com/blog/what-is-a-tree-data-structure/","timestamp":"2024-11-07T09:27:30Z","content_type":"application/xhtml+xml","content_length":"83421","record_id":"<urn:uuid:5cfa8483-0aee-48e2-a225-2e7f66b48288>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00143.warc.gz"}
Department of Mathematics - MathLab
Teaching assistants are available to answer questions and help you with your mathematics. The lab is open daily from 9 am until 3 pm.
The purpose of the MathLab is to aid students in developing their mathematical abilities. The lab is staffed by teaching assistants who are available to help students in their mathematics courses. Instruction in the MathLab is very informal. Students are welcome to come to the lab with questions whenever they need help in understanding math course work. The lab has two computers which students may use.
Our goal in the lab is to increase each student's understanding of her or his course material. This takes time and active participation on the student's part. Please do not expect the lab to provide quick answers for the purpose of completing homework assignments. This does you a disservice. If we can help you understand the material so you can complete your assignment, we will be glad to do so.
11 KH, Basement.
MathLab Rules and Regulations: Kindly avoid irrelevant discussions in the MathLab.
{"url":"https://math.sci.kuniv.edu.kw/activities/study-club","timestamp":"2024-11-14T08:53:16Z","content_type":"text/html","content_length":"166566","record_id":"<urn:uuid:23cdaf57-233f-49dd-8540-7db298c48d33>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00505.warc.gz"}
How Does The Rook Move In Chess? (In-Depth Guide!)
How Does The Rook Move In Chess?
Every piece in chess moves in a unique way across the playing field. In order to master the game of chess, one has to learn the movement of all the pieces first, which is exactly why we created this guide. Today, we'll look at the movement of the rook in detail and explain everything the rook can, and can't, do on the board.
Rooks Move in Horizontal and Vertical Lines
Luckily, the rook's movement is relatively easy to understand. The rules of chess allow the rook to move any number of vacant squares in a straight line, in either a horizontal or vertical direction. Have a look at this animation to get a better grasp of how the rook can move in chess:
The Rook Movement in Chess: Animated
As you can see, a rook always moves in a straight line, either up and down or across the chess board.
How Does The Rook Attack in Chess?
Generally, most chess pieces attack the same way they move (the exception being the pawn). Whenever a rook can reach another piece by moving onto that square, the rook is attacking (i.e. threatening to capture) said piece. Here you can see the rook attacking and subsequently capturing a queen:
How the rook attacks in chess
How The Rook Can't Move
As mentioned, the general movement of the rook is quite easy to understand. However, there are some restrictions the rook faces in its movement. Some of them are related to how chess pieces move in general, others are specific to the rook itself.
Moves Chess Pieces Can't Do In General:
• The rook can't move "half squares". The rules of chess do not allow the rook to move half-spaces and thus stand in between squares. A move is only finished once the rook is placed onto a single square.
• Neither can the rook share a square with any other piece. In a similar sense to the previous point, it is not allowed for the rook to share a square with another piece. While enemy pieces may be captured by the rook, it can't move onto squares that are already occupied by other friendly pieces. Meaning, the rook always has to stand on a square completely on its own.
Moves The Rook in Particular Can't Do:
• The rook always has to move in straight lines. In chess, the rook is only allowed to move in straight lines, either vertically or horizontally. It is not allowed for the rook to combine those two movements into one move, meaning the rook can't move like a knight.
• The rook can't move diagonally. The rook is the only chess piece besides the knight that is not allowed to move diagonally. While a diagonal is still a straight line, the movement rules of the rook prohibit it from moving along a diagonal on the chess board.
• The rook can't jump over other pieces (exception: castling). As with every other chess piece (except the knight), the rook is not allowed to jump over pieces, either friendly or enemy. However, there is one exception to this rule, which is a special move called castling. Let's look at it in detail.
A Special Rule: Castling
Castling is a special move in chess that is performed by the rook together with its king. Essentially, castling is a double-move, in which both the rook and king move together in an effort to improve the position by activating the rook and protecting the king.
There are four distinct requirements that need to be met in order for a player to be allowed to castle in the game:
1. The king and (castling) rook have not yet moved in the game
2. The king is not currently in check
3. No square the king would castle through is under attack
4. The squares between king and rook are unoccupied
Generally, these conditions are most likely to be met early in the game, which is also why you are supposed to castle during the opening. So, once all these requirements are met, you are allowed to castle, but how do we do that? Let's examine!
Basically, castling involves two separate moves by your king and rook:
1. The king moves two squares towards the rook
2. The rook jumps over the king and gets placed directly next to it
Have a look at this animation of castling:
An Animation of Castling with Rook and King
Since both players start the game of chess with two rooks, you have two different ways of castling: a short castle (with the rook closer to your king) and a long castle (with the rook closer to your queen). In the animation above you can see an example of a short (or kingside) castle.
If you've been paying attention, you might've noticed that we said that the rook can't jump over other pieces. Well, technically, it is possible for a rook to jump over other pieces, even if it is just its own king when castling.
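For readers who like to see the rules as code, here is a small, hypothetical Python sketch of rook move generation. It encodes only the straight-line and no-jumping rules discussed above and ignores checks, pins, and castling:

def rook_moves(board, r, c):
    """All squares a white rook on (r, c) can move to on an 8x8 board.
    board[r][c] is '.', 'W', or 'B' (empty, white piece, black piece)."""
    moves = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # straight lines only
        nr, nc = r + dr, c + dc
        while 0 <= nr < 8 and 0 <= nc < 8:
            if board[nr][nc] == '.':      # vacant square: keep sliding
                moves.append((nr, nc))
            else:
                if board[nr][nc] == 'B':  # enemy piece: capture, then stop
                    moves.append((nr, nc))
                break                     # rooks cannot jump over pieces
            nr, nc = nr + dr, nc + dc
    return moves

empty = [['.'] * 8 for _ in range(8)]
empty[0][0] = 'W'
print(len(rook_moves(empty, 0, 0)))  # 14 squares from a corner of an empty board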
{"url":"https://chessily.com/questions/how-does-the-rook-move-in-chess/","timestamp":"2024-11-10T07:40:44Z","content_type":"text/html","content_length":"199605","record_id":"<urn:uuid:404d66ac-8bb1-4cf8-9957-3bec564ffc9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00056.warc.gz"}
What is x?
How do you introduce "x" to your students? I'm guessing you start teaching informal algebra in this kind of way:
10 + ? = 20
Where the question mark obviously represents the mysterious-sounding "unknown". Except it's not really unknown, because it's obviously 10.
Eventually you replace the question marks (or empty boxes) with letters. Not just the letter x, obviously, but you have to admit x is a popular one.
In this number sentence, what is x? I don't mean what its value is, I mean, what is it?
2x + 4 = 12
It's an unknown. It is a number that definitely exists and has one particular value which at this very moment is unknown to us but in a matter of seconds will be completely known. Two quick steps and we and x will be on first-name terms.
In this number sentence, what is x?
2x + 4 = y
Suddenly, x is no longer an "unknown". It is a "variable". Meaning, its identity is still a secret, but it's not one specific number, it could be any (any any?) number in the world.
2x + 4 = y and y = 5. Now, suddenly, although x is a variable, it has been forced to stop varying, and simply be unknown.
Can you see how the dual nature of x (or any letter really) could be very confusing for students? If students think of a letter as representing one particular number (even if they realise that number can change on a daily basis), this might hinder them when it comes to studying linear graphs, or functions.
Maybe we should try to introduce x as a variable instead of an unknown. Think about how you could do this, perhaps with your brand new, untainted year sevens with their clean-blanket-of-snow brains.
Let me know how you get on,
Emma x x x
2 comments:
1. Hi Emma, I decided to try this with my bottom set year 7 last week. I started by writing the following on the board:
a + 2 = b
I paused for a few seconds while I worked out exactly how I was going to phrase a question for them, then turned to face the class again to find that, to my surprise, over half of them had their hands up before I had even said anything. Clearly what I had written on the board does not have an "answer", but they were all very keen to add their opinion. One said with great confidence that "a is 3 and b is 5"; another was then sure the answer must be 2 and 4. This then led to an excellent discussion that sometimes the value of letters can change, and sometimes their value is fixed. Thank you for helping me to get my math-phobic class to talk about maths. Charlie V.
2. x is a letter.
{"url":"http://www.notjustsums.co.uk/2013/09/what-is-x.html","timestamp":"2024-11-03T06:25:38Z","content_type":"application/xhtml+xml","content_length":"77238","record_id":"<urn:uuid:7df6efae-4afa-4be9-893f-22e9638ec64e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00316.warc.gz"}
What is DFT B3LYP?
B3LYP is the most widely used density-functional theory (DFT) approach because it is capable of accurately predicting molecular structures and other properties. However, B3LYP is not able to reliably model systems in which noncovalent interactions are important.
What is the difference between B3LYP and CAM-B3LYP?
The main difference between the B3LYP and CAM-B3LYP functionals is the amount of exact Hartree-Fock (HF) exchange included.
What is a dispersion correction?
A dispersion correction is just an empirical correction to the energy of a system. It doesn't alter the electron density. It may, however, affect the optimization process, since this energy contribution is taken into account during optimization. Hence, using a dispersion correction you may end up with a different geometry than without it.
Is ab initio the same as DFT?
In principle, DFT is exact, so it's an ab initio method.
Is ab initio DFT?
Our definition of ab initio DFT consists of five elements: (1) All calculations are done in a basis set (usually Gaussian functions), as in ab initio methods.
Property | Gradient-corrected hybrid methods | ab initio DFT
Weak interactions | No | Yes
What is DFT-D2?
DFT-D and DFT-D2 both consist of a pairwise -C6/r^6 attraction, like a Lennard-Jones attraction, scaled by a parameter C6 and a damping function which prevents the energy from going to -∞ at r = 0. The distance parameter r0 is just the sum of the van der Waals radii of the interacting pair of atoms.
What is DFT-D3?
DFT-D3 is a dispersion correction for density functionals, Hartree-Fock and semi-empirical quantum chemical methods.
What is Ab Initio used for?
Ab Initio Software is a widely used business intelligence data processing platform. It is used to build a vast majority of business applications, from operational systems, distributed application integration, and complex event processing to data warehousing and data quality management systems.
What does B3LYP stand for?
One of the most commonly used versions is B3LYP, which stands for "Becke, 3-parameter, Lee–Yang–Parr".
What is a dispersion correction in DFT?
Is DFT first-principles?
First-principles methods are based on the density functional theory (DFT) developed by Kohn et al. The major advantage of this approach is the transition from a wave function, depending on the coordinates of all electrons, to a charge density depending on the three spatial coordinates only.
How good is B3LYP for this problem?
B3LYP on its own generally doesn't do great across a wide variety of problems. However, if one is able to include dispersion corrections, in this case D3(BJ), B3LYP's performance is much improved, becoming basically on par with ωB97X-D.
How good is B3LYP at isomerization?
What you can see is that B3LYP does fairly poorly, notably for isomerization energies and equilibrium bond length/energy. So B3LYP on its own generally doesn't do great across a wide variety of problems.
Is B3LYP a hybrid density functional?
Performance of B3LYP Density Functional Methods for a Large Set of Organic Molecules: testing of the commonly used hybrid density functional B3LYP with the 6-31G(d), 6-31G(d,p), and 6-31+G(d,p) basis sets has been carried out for 622 neutral, closed-shell organic compounds containing the elements C, H, N, and O.
Does D3 matter for dispersion correction?
It depends on the functional you choose. I was recently at a talk by Grimme (who developed the D3 dispersion correction), and he showed that for functionals like Truhlar's M06-2X and M11, which were parameterized for dispersion, the addition of D3 did not have any significant effect.
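For reference, the pairwise correction described in the DFT-D2 answer above takes the following general form (this is Grimme's 2006 D2 formulation; D3 and D3(BJ) refine the C6 coefficients and the damping function, so treat the details below as specific to D2):

$$ E_{\text{DFT-D}} = E_{\text{KS-DFT}} + E_{\text{disp}}, \qquad E_{\text{disp}} = -s_6 \sum_{i<j} \frac{C_6^{ij}}{R_{ij}^{6}}\, f_{\text{dmp}}(R_{ij}), \qquad f_{\text{dmp}}(R_{ij}) = \frac{1}{1 + e^{-d\,(R_{ij}/r_0^{ij} - 1)}} $$

Here the damping function $f_{\text{dmp}}$ is what prevents the $-C_6/r^6$ term from diverging to $-\infty$ at $r = 0$, and $r_0^{ij}$ is the sum of the van der Waals radii of atoms $i$ and $j$.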
{"url":"https://www.squarerootnola.com/what-is-dft-b3lyp/","timestamp":"2024-11-11T21:46:42Z","content_type":"text/html","content_length":"42715","record_id":"<urn:uuid:644893d3-24da-4d71-86e8-a7f31d9c1310>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00862.warc.gz"}
seminars - Doi-Naganuma lift and the derivative of certain generalized Rankin-Selberg L-series
Incoherent Eisenstein series are non-holomorphic Hilbert modular forms with parallel weight 1, whose first derivative at s = 0 plays a crucial role in the work of Gross and Zagier on singular moduli. In this talk, we will show that incoherent Eisenstein series can be realized as the Doi-Naganuma lifting of elliptic Eisenstein series, and give some examples. As an application, this is used to obtain explicit formulas for certain generalized Rankin-Selberg L-series occurring in CM values of harmonic Maass forms. This is joint work in progress with Yingkun.
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&l=en&sort_index=date&order_type=desc&page=7&document_srl=1211902","timestamp":"2024-11-14T21:08:54Z","content_type":"text/html","content_length":"45889","record_id":"<urn:uuid:c4ff988e-aca3-4751-b188-fb3383891e84>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00521.warc.gz"}
Why Is Geometry Hard for Students? - Edubrain
Although Geometry is not extremely difficult like organic chemistry or quantum physics, it takes time and effort to learn it well. Many college and high school learners find Geometry hard because creative thinking and logical reasoning are necessary. But why is Geometry so hard, then? It is less analytical, as things must be seen as visuals. This is why so many younger students find it difficult to transition from algebraic concepts to Geometry, even though its rules may sound less complex. To make Geometry easier, it's also necessary to apply spatial reasoning skills and use logic to understand each concept and see how even a subtle change can be critical. The trick is understanding Geometry as a science and using one's imagination to make things easier. Let's find out how to overcome the fear and make Geometry and Algebra a walk in the park as you learn how to find an answer!
Geometry is the branch of Math that deals with shapes and angles.
Why is Geometry hard? If you wish to understand Geometry better, you should always start with visualization. This means that you have to try to imagine the problem and then try to draw it with the help of single lines or a diagram. When it comes to angles, they have to be pictured. The same is true of formulas, which must be written down as well. If you wish to learn more about vertical shapes or some exceptions to the rules, it is always easier to see them in an existing diagram sample or to draw them yourself.
The most important thing for a student is to find the appropriate geometric relationships. This is the key to success and to making your life easier! You only have to fill in all the missing information in your school diagram, making each step more complete!
It's hard to deny that most youngsters find Geometry difficult, with very few exceptions. It's often said that Geometry is all about being creative, yet even students who are good at the arts can still find it hard to solve Math problems. To the best of our knowledge, at least three major factors come into play. Now, is Geometry hard for everyone, or is it slightly easier for some?
Most problems in Geometry (unlike Algebra II concepts) are presented as pictures and abstract concepts in Math classes, which take time and effort to understand first. If students fail to pick up on all the clues in the pictures, they won't be able to solve the problem with known concepts alone. The same applies to vocabulary in Geometry class, as one has to know the words and understand what a bisector stands for and what it means!
As a rule, we all face misconceptions about Geometry. Let's take a look at some of them here:
• It is easy to misclassify a shape because of its orientation. Students often do so when they are only exposed to shapes that are oriented the same way all the time.
• Failing to identify the reference angle when using the S.O.H.C.A.H.T.O.A. model. You cannot go without it in any Math class! In truth, this happens when students have merely memorized the formula and do not yet know how to operate with it.
• Not using operators, which is wrong because it removes the reasoning.
• Assuming most Geometry concepts carry over directly to high school Algebra problems. You must apply only suitable formulas and understand the problem's roots as you visualize things and equations.
Most of these misconceptions can easily be resolved by working on your answers and doing more training!
Taking Geometry and Algebra together, many parts become clearer if you use logic. You also have to achieve a degree of automation in your brain. Our experts have narrowed things down to a few key aspects that you must keep in mind:
• If you fail to understand something like equations at the beginning, you won't be able to move further. Stop and think again until you understand how things work. If something fundamental is missed, things just won't work later on.
• Geometry requires patience, as you have to decode. Sometimes trial and error will help you find a solution. Do not take shortcuts, and check things twice.
• Many students fail because they search for a quick solution example instead of trying to see the entire picture. Every step matters here!
• Learn to explore equations all the time and locate Geometry problems online, with worked answer examples that go beyond your study book. It will help you stay more inspired through all the challenging parts of Geometry and see things naturally!
• Do not do the math when you are in a bad mood or anxious! Take your time until you can fully concentrate and feel calm.
All of these Geometry tips will help you with anything in Geometry and Algebra, so take time to read through them twice and apply them as necessary.
The most popular real-world areas where Geometry is used are construction and the use of shapes. You cannot build a simple fence or a small building without knowing the rules of Geometry. Other areas, for Math and non-Math majors alike, include 3D modeling, civil engineering, design, drawing, and engineering work. Then we also have robotics, astronomy, the arts, sports, the automotive industry, and even music, where the shapes of musical instruments cannot be measured without knowing Geometry well. Just name it: Geometry will be there!
Imagine a world without TVs, computers, or even modern medicine. It would be different. Well, physicists helped make these things—and... Choosing a major is a big decision. Geology is an exciting option if you are curious about the Earth. This... It seems that you are trying to generate the same text
{"url":"https://edubrain.ai/blog/why-is-geometry-hard-for-students/","timestamp":"2024-11-07T09:30:06Z","content_type":"text/html","content_length":"99660","record_id":"<urn:uuid:f586badc-d140-4e35-85bf-7e4b6177bc86>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00582.warc.gz"}
Multi-Objective Accelerated Process Optimization of Part Geometric Accuracy in Additive Manufacturing The goal of this work is to minimize geometric inaccuracies in parts printed using a fused filament fabrication (FFF) additive manufacturing (AM) process by optimizing the process parameters settings. This is a challenging proposition, because it is often difficult to satisfy the various specified geometric accuracy requirements by using the process parameters as the controlling factor. To overcome this challenge, the objective of this work is to develop and apply a multi-objective optimization approach to find the process parameters minimizing the overall geometric inaccuracies by balancing multiple requirements. The central hypothesis is that formulating such a multi-objective optimization problem as a series of simpler single-objective problems leads to optimal process conditions minimizing the overall geometric inaccuracy of AM parts with fewer trials compared to the traditional design of experiments (DOE) approaches. The proposed multi-objective accelerated process optimization (m-APO) method accelerates the optimization process by jointly solving the subproblems in a systematic manner. The m-APO maps and scales experimental data from previous subproblems to guide remaining subproblems that improve the solutions while reducing the number of experiments required. The presented hypothesis is tested with experimental data from the FFF AM process; the m-APO reduces the number of FFF trials by 20% for obtaining parts with the least geometric inaccuracies compared to full factorial DOE method. Furthermore, a series of studies conducted on synthetic responses affirmed the effectiveness of the proposed m-APO approach in more challenging scenarios evocative of large and nonconvex objective spaces. This outcome directly leads to minimization of expensive experimental trials in AM. Issue Section: Research Papers Objective and Hypothesis. The goal of this work is the optimization of additive manufacturing (AM) process parameters at which parts with the least geometric inaccuracy are obtained. This goal is a key milestone in ensuring the commercial viability of AM. Despite extensive automation, the poor geometric consistency of AM parts prevents their use in mission-critical components in aerospace and defense applications [1]. Currently, the cumbersome factorial-based design of experiments (DOE) tests are used to find the optimal set of AM process parameters that will minimize the geometric inaccuracy of the part, often assessed in terms of the geometric dimensioning and tolerancing (GD&T) characteristics. To overcome this challenge, the objective of this work is to develop and apply a multi-objective optimization approach to balance between multiple requirements, and thereby produce parts with the least geometric inaccuracy with the fewest trials. The central hypothesis is that decomposing the multi-objective problem of minimizing the geometric inaccuracy into a series of simpler single-objective optimization problems leads to a reduction of experimental trials compared to conventional full factorial DOE method. This hypothesis is tested against experimental data from the fused filament fabrication (FFF) process. This work leads to an understanding of how to balance different GD&T characteristics specified for a part using FFF process parameters, namely, infill percentage ($If$) and extruder temperature ($te$), as the primary control. 
The motivation for this work, stemming from our previous studies, is demonstrated in Fig. 1 [2–5]. At the outset, we note that the intent of this example is not to claim that there is an inadequacy with the standardized GD&T characteristics, but to illustrate the difficulty in balancing between several tight geometric accuracy requirements for an AM part (in terms of GD&T characteristics). For instance, Fig. 1 shows the flooded contour plots for four different parts fabricated by the FFF process. A flooded contour plot translates the geometric deviations of the part to the corresponding spatial locations in terms of colors. Each flooded contour plot in Fig. 1 is constituted from 2 million three-dimensional (3D) coordinate data points for a benchmark test artifact called circle–square–diamond used in this research; see Sec. 3.1 for details. It is visually evident in Fig. 1 that printing under different infill percentages ($I_f$)—an FFF process parameter—results in different part geometric accuracies. It is apparent that the parts in Figs. 1(a)–1(c), with $I_f$ = 70, 80, and 90%, respectively, are better overall than the last one shown in Fig. 1(d) with $I_f$ = 100% in terms of geometric accuracy; however, it is often difficult to find a set of process parameters that will globally minimize all the specified geometric inaccuracies. The geometric accuracy for the parts shown in Fig. 1 was specified in terms of GD&T characteristics, such as circularity, flatness, cylindricity, etc. (see Sec. 3.1 for details). Experimental data plotted in Fig. 2 reveal the difficulty in balancing between the geometric accuracy requirements by adjusting a few process parameters. Specifically, in Fig. 2, it is evident that the concentricity and flatness deviations for parts made by varying infill ($I_f$) and extruder temperature ($t_e$) cannot both be minimized simultaneously. The combination of high $I_f$ and low $t_e$ tends to result in low deviations in concentricity (area in the upper left corner of Fig. 2(a)). However, the same process parameter set results in a high level of geometric deviations in terms of part flatness (area in the upper left corner of Fig. 2(b)). Accordingly, this work addresses the following open research question in AM: What experimental plan is required to optimize process parameters with respect to multiple geometric accuracy requirements?

Research Challenges and Overview of the Proposed Approach. AM offers the unique opportunity to create complex geometries and tailored surface morphologies with multiple materials; enable rapid repairs and replacement of parts in battlefield environments; simplify the overall prototyping cycle; and significantly shorten the logistical supply chain [6]. Despite these paradigm-shifting capabilities, poor process repeatability remains an imposing impediment to the commercial-scale exploitation of AM capabilities. The uncertainty associated with the morphology of the subsurface and microstructure, surface finish, and geometry of the finished part is a major concern for the use of AM parts in strategic industries important to the national interest, such as aerospace and defense [1]. Efforts are ongoing in industry and academia to understand the causal AM process interactions that govern part quality. Thermomechanical models have been proposed as an avenue for predicting part properties, e.g., geometric shrinkage of AM parts [7,8]. However, the current research is in its embryonic stage and largely restricted to simple geometries such as cubes and cylinders.
These elementary models cannot, as yet, capture the complex process interactions that affect parts with intricate geometries. Accordingly, data-driven approaches for compensating for AM part geometry distortions have been proposed recently [9–11]. However, these studies are also restricted to elementary shapes and are limited to the modeling of uniaxial geometry deviations. Hence, in the absence of practically applicable physical models, DOE methods are employed to identify, quantify, and optimize key process parameters with respect to mechanical properties, such as surface roughness, fatigue, and tensile strength, among others, in AM processes. These methods traditionally involve identifying existing patterns in experimental data, sampling the process space at predefined levels, and subsequently developing surrogate statistical models to approximate the targeted objective functions. For most practical cases, several geometric accuracy requirements must be satisfied together. This is a multi-objective process optimization problem with a multitude of open research challenges:

• Understanding how to balance different GD&T characteristics specified for the part using process parameters as the primary control.
• The correlations among responses of interest are not typically known a priori. There is often a tradeoff between different responses. A process parameter set that produces favorable results for one geometric characteristic of the part (e.g., concentricity) may be detrimental for another geometric characteristic (e.g., flatness).
• Developing a reliable empirical model for multiple responses requires a large number of experimental runs. Hence, building parts with various AM processing parameters and subsequently testing each response mandates a significant investment of both time and resources.

This work addresses these challenges by forwarding a multi-objective accelerated process optimization (m-APO) approach to find the AM process parameters that minimize part geometric inaccuracies with fewer experimental trials compared to existing DOE methodologies. This method, presented in Sec. 3, consists of the following sequential steps:

(1) The concept of scalarization is used to convert the multi-objective problem into a sequence of single-objective subproblems.
(2) The accelerated process optimization (APO) method—developed in a previous study [12]—is used to solve the single-objective subproblems.
(3) A stopping criterion is defined for the current subproblem.
(4) Subproblems are chosen to uncover intermediate sections of the Pareto front (the Pareto front is the set of nondominated, or noninferior, solutions in the objective space).

By applying the proposed multi-objective optimization scheme, experimental results from previous subproblems are leveraged as prior data for the remaining subproblems to accelerate the multi-objective process optimization. Information captured from previous subproblems facilitates an experimental design for accelerated optimization of the remaining subproblems. This eschews designing experiments from scratch for each subproblem, which subsequently reduces the experimental burden. The practical utility of the proposed methodology is validated in Sec. 4.1 for the geometric accuracy optimization of parts made using the FFF polymer AM process. The applicability of the approach to a broader set of challenging nonconvex optimization domains is demonstrated in a series of simulation studies in Sec. 4.2. The remainder of this paper is organized as follows:
In Sec. 2, we review the existing literature addressing the problem of geometric accuracy optimization in AM processes and the relevant multi-objective optimization techniques. In Sec. 3, the proposed approach is described in depth. In Sec. 4, the proposed approach is demonstrated in the context of the FFF process, as well as simulated cases. Finally, in Sec. 5, conclusions are summarized and directions for future work are discussed.

Literature Review

We divide the literature review into two subsections: (1) the existing literature in AM process optimization specific to geometric accuracy, and (2) relevant multi-objective optimization approaches.

Existing Literature in Geometric Accuracy Optimization in AM Processes. We summarize some of the existing research efforts pertaining to geometric accuracy in AM processes. Bochmann et al. [13] studied the causes of imprecision in FFF systems with respect to surface quality, accuracy, and precision. They found that the magnitude of errors varies significantly in the $x$, $y$, and $z$ directions. Mahesh et al. [14] proposed a benchmark part incorporating critical geometric features for evaluating the performance of rapid prototyping systems with respect to geometric accuracy. The proposed benchmark includes geometric features such as freeform surfaces and pass–fail features. El-Katatny et al. [15] measured and analyzed the error in major geometric characteristics of specific landmarks on anatomical parts fabricated by the FFF process. Weheba and Sanchez-Marsa [16] determined the optimal process settings for the stereolithography (SLA) process with respect to surface finish, flatness, and deviations of diameter measures from nominal values. Second-order empirical models were developed for the different characteristics, but only a single set of process parameters was given as the optimal design; indeed, there should be tradeoffs between the different responses. Tootooni et al. [2,3] and Rao et al. [17] used a spectral graph theory methodology to quantify and assess the geometric accuracy of FFF parts using deviations of 3D point cloud coordinate measurements from design specifications. Although the proposed indicator facilitates comparing the geometric accuracy of parts, it does not propose a relationship between process parameters and geometric accuracy in terms of GD&T characteristics. Huang et al. [9–11] developed a framework to model part shrinkage in SLA, thereby optimizing shrinkage for better geometric accuracy. This work is limited to elementary geometric shapes and does not determine an optimal range of process parameters for the best geometric accuracy of parts. Experimental data from our initial screening studies (see Sec. 3.1) show that the optimization of geometric accuracy for AM parts is a multi-objective optimization problem (i.e., some of the correlations among geometric characteristics are negative). As an example of multi-objective optimization in AM, Fathi and Mozaffari [18] optimized the directed energy deposition process in terms of three response characteristics—clad height (deposition layer thickness), melt pool depth, and dilution—in a two-stage manner. First, empirical models that represent the relationship between the key process parameters, i.e., laser power, powder mass flow rate, and scanning speed, and the response characteristics were developed using a smart bee algorithm and a fuzzy inference system.
Then, in the second stage, the nondominated sorting genetic algorithm (NSGA-II)—a well-known multi-objective evolutionary optimization algorithm—is employed to achieve the best Pareto points in the objective space. Although this work was able to handle a multi-objective process optimization problem, it required several experimental runs (50 experiments) to establish a set of viable empirical models. As mentioned previously, the prohibitive experimental effort remains an intrinsic challenge with conventional DOE methods.

Background in Multi-Objective Optimization. Multi-objective optimization methods can be grouped into two main categories: (i) scalarization or aggregation-based methods and (ii) evolutionary algorithms [19]. Scalarization methods, which represent a classic approach, combine multiple objective functions into a single-objective problem, enabling the use of single-objective optimization methods to solve the problem [20]. These methods are not directly adaptable to the current multi-objective geometric accuracy optimization problem in AM because the individual objective functions are not explicitly known. In contrast, evolutionary algorithms iteratively generate groups of potential solutions that represent acceptable compromises between objective functions [21]. The disadvantage of this approach is that the objective functions require a large number of candidate solutions to be evaluated, i.e., many AM experiments for the current multi-objective geometric accuracy optimization problem. Other multi-objective optimization approaches share similar disadvantages. For instance, Kunath et al. [22] applied a full factorial design of experiments for three process parameters to develop a set of regression models as the functional form of objective functions representing the binding properties of a molecularly imprinted polymer. Then, by assigning predefined desirability values, i.e., weight coefficients, for the dependent response variables, a range of process parameters resulting in the highest desirability values is introduced as optimal. Again, this approach presents challenges for most AM processes, since many process parameters are involved and a large number of experiments may be required to fit regression models within tight confidence bounds. Therefore, there are critical research gaps and numerous technical challenges pertaining to process optimization for multiple AM part geometric characteristics.

Description of the Experimentally Obtained Geometric Accuracy Data. A systematic optimization approach for improving the geometric accuracy of AM parts is motivated by the experimental response data collected for a benchmark part. The experimental data presented in this work were generated in the authors' previous research [2–5]. The so-called circle–diamond–square part is designed as the benchmark part of interest for geometric optimization (Fig. 3) [23,24]. It is based on the NAS 979 standard part used in industry for assessing the performance of machining centers [25]. The circle–diamond–square is useful for assessing the performance of AM machines with respect to part geometric accuracy. For instance, the outside square (lowest layer) can be used to measure the straightness of an individual axis and the squareness across two axes in FFF. The diamond feature (middle layer) can be used to measure the rotation between two axes (i.e., from the bottom to the middle layer).
The large circle feature (top layer) can be used to measure the circular interpolation of two axes [23,26]. By axes we refer to the FFF machine gantry on which the nozzle is rastered. Five important GD&T characteristics are specified: flatness, circularity, cylindricity, concentricity, and thickness (see Fig. 4). These GD&T characteristics are chosen because they are independent of the feature size—also called regardless-of-feature-size characteristics. More detail about GD&T characteristics can be found in Ref. [27]. We did not specify positional tolerances on the part because matching a datum surface from laser-scanned point cloud data was found to be exceedingly error prone. Additionally, we concede another weakness of this work; it is likely that the GD&T characteristics specified for the circle–square–diamond test artifact entail that the part is over-toleranced or over-constrained, and probably beyond the capability of the desktop FFF machine used in this work. Our rationale is to use this test artifact, albeit as an extremely contrived case, to explore a larger theme—understanding how to balance different GD&T characteristics specified for the part using FFF process parameters, namely, infill percentage ($I_f$) and extruder temperature ($t_e$), as the primary control. This is the primary contribution of this work. The aim is to minimize the magnitude of deviations of these five GD&T characteristics of parts from the targeted design specifications. A NextEngine (Santa Monica, CA) HD 3D laser scanner was used to capture part geometric data, and the QA Scan 4.0 software (Santa Monica, CA) was used to estimate the deviations from the targeted design specifications [2–5]. Figure 2 (Sec. 1.2) shows examples of contour plots of the absolute values of deviations in flatness and concentricity versus two controllable FFF process parameters: infill percentage ($I_f$) and extruder temperature ($t_e$). In practice, laser scanning is a heuristic process that requires adhering to a carefully attuned procedure, particularly in the manner in which the part is aligned to the computer-aided design (CAD) model, to obtain consistent results. The alignment step requires matching at least four landmark points from the raw point cloud data with the CAD model. Several trials were conducted, and herewith summarized is the method that showed the least variability. Four points each on the square and diamond portions are used to align the part, as depicted in Fig. 5. Additionally, laser scanning was conducted on a sturdy, vibration-free table in a darkened enclosure, and the part was coated with a thin layer of antireflective gray modeling paint. The data scatter plot matrix for the GD&T characteristics, called the correlation matrix, is shown in Fig. 6. The slopes of the lines represent the Pearson correlation coefficients ($ρ$) for pairs of GD&T characteristics. It is evident from Figs. 2 and 6 and Table 1 (see Sec. 4.1) that flatness and concentricity are not positively correlated ($ρ = −0.21$). Similarly, thickness and concentricity are negatively correlated ($ρ = −0.63$). In other words, it is not possible to optimize all GD&T characteristics simultaneously. Hence, optimization for geometric accuracy is best considered as a multi-objective optimization problem, and the chosen process parameter setups should consider the tradeoff between multiple geometric characteristics.
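For readers reproducing this step, the pairwise correlations behind Fig. 6 and Table 1 can be computed directly from the measured deviations. A minimal sketch in Python (the file and column names are hypothetical placeholders, not the study's actual data):

import pandas as pd

# Hypothetical table of measured GD&T deviations, one row per printed part
df = pd.read_csv("gdt_deviations.csv")
cols = ["flatness", "circularity", "cylindricity", "concentricity", "thickness"]

# Pairwise Pearson correlation coefficients, as in the scatter plot matrix
print(df[cols].corr(method="pearson").round(2))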
Multi-Objective Process Optimization. The aim of this section is to elucidate the mathematical foundation for multi-objective process optimization in AM. We illustrate the case with two objective functions; the proposed methodology is extensible to cases with more objectives.

Scalarization of Multi-Objective Optimization and Pareto Front. The methodology developed herein is a generalization of an existing APO methodology, developed in the form of maximization [12]. A minimization problem can be expressed in the form of maximization by multiplying the objective function by a negative sign (and vice versa). Suppose the problem is to maximize two objective functions (or response variables) $Y_1(s)$ and $Y_2(s)$. The bi-objective maximization problem is expressed as follows:

$Max\ Y(s) = (Y_1(s), Y_2(s))′$  s.t. $s ∈ S$    (1)

where $Y(s)$ denotes the vector of objective functions $(Y_1(s), Y_2(s))′$, $s$ is the vector of process parameters (e.g., infill percentage ($I_f$) and extruder temperature ($t_e$)), and $S$ denotes the design space, which includes all possible values of $s$. The objective space, i.e., the set of all possible response vectors $Y$ corresponding to the design space, is denoted by $C = {(Y_1(s), Y_2(s))′ : s ∈ S}$. For most AM applications, the functional expressions of $(Y_1(s), Y_2(s))$ are unknown; the empirical relationship between geometric accuracy responses and AM process parameters is yet to be understood and quantified. Moreover, the correlation between $Y_1(s)$ and $Y_2(s)$ is also unknown. A higher value of $Y_1(s)$ may result in a lower value of $Y_2(s)$, or conversely. In other words, the optimized process parameters for $Y_1$ may not necessarily result in a favorable $Y_2$, due to the possibly low or even negative correlation between the two response variables. For instance, concentricity and flatness, shown in Fig. 2, are negatively correlated. Consequently, improving the response value of flatness will result in worsening concentricity. Therefore, the optimal solution to the multi-objective optimization problem is nonunique. Our objective is to develop a systematic and sequential DOE procedure that efficiently identifies sets of optimal solutions as a tradeoff between such contradictory response behaviors. For this purpose, the optimization problem is converted into a sequence of single-objective problems by defining weight coefficients for each objective. In this way, the bi-objective optimization problem can be presented as a sequence of single-objective optimization problems, each defined by the subproblem index $h$, as follows:

$Max\ Z^h(s) = γ_1^h Y_1(s) + γ_2^h Y_2(s)$  s.t. $s ∈ S$, where $γ_1^h + γ_2^h = 1$ and $γ_k^h ≥ 0$ for $k = 1, 2$    (2)

For each subproblem with index $h$, $γ_k^h$ denotes the weight coefficient corresponding to the $k$th objective function, $k = 1, 2$, satisfying the constraint $γ_1^h + γ_2^h = 1$. Different weight coefficients correspond to different subproblems and will accordingly lead to different optimal solutions. For example, consider a subproblem with $γ_1^1 = 0.8$ and $γ_2^1 = 0.2$. In this case, the single-objective optimization problem is expressed in the form $Max\ (0.8 Y_1(s) + 0.2 Y_2(s))$. The weight coefficients ($γ_k^h$) are graphically shown in Fig. 7 by the tangent of a line, which represents the desired search direction for the current single-objective maximization function (subproblem $h$). Changing the corresponding weight coefficients changes the single-objective function being optimized. For example, consider a second subproblem with $γ_1^2 = 0.2$ and $γ_2^2 = 0.8$; the optimum solution for the problem $Max\ (0.2 Y_1(s) + 0.8 Y_2(s))$ is not the same as that in the first subproblem.
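The scalarization of Eq. (2) is mechanical and easy to express in code. A brief sketch (an illustration only; the response values below are stand-ins for real experiments):

import numpy as np

def scalarize(y1, y2, gamma1):
    # Weighted single-objective response Z^h for one subproblem
    gamma2 = 1.0 - gamma1                  # enforces gamma1 + gamma2 = 1
    return gamma1 * np.asarray(y1) + gamma2 * np.asarray(y2)

# Example: subproblem with gamma^h = (0.8, 0.2) over three tested designs
y1 = [0.4, 0.9, 0.6]                       # placeholder measured responses Y1(s)
y2 = [0.7, 0.1, 0.5]                       # placeholder measured responses Y2(s)
print(scalarize(y1, y2, gamma1=0.8))       # Z^h(s) for each design point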
In real-world applications, simultaneously achieving the best individual solution for two negatively correlated (or uncorrelated) objectives is intractable. Hence, in these cases, the optimum solution is a subset of the objective space $C$ that identifies the best tradeoffs between the values of $Y_1$ and $Y_2$. In what follows, we discuss in detail the approach to identify the sets of process parameter setups that result in an optimal objective value with different weight coefficients. We focus on identifying the Pareto optimal solutions associated with the multi-objective optimization problem. A Pareto optimal solution is not dominated by any other feasible solution and represents the best compromise between multiple objective functions. We define a design point $s^* ∈ S$ as a member of the Pareto optimum if and only if there is no other $s ∈ S$ such that $Y_k(s) ≥ Y_k(s^*)$ for $k = 1, 2$, with strict inequality for at least one $k$. Here, $s^*$ is called a nondominated design point, and its corresponding response vector in the objective space, $Y(s^*)$, is a Pareto point. Regarding geometric accuracy optimization, a Pareto point indicates an optimal design point for which no other solution is at least as good in all geometric responses and strictly better in at least one. The Pareto optimum set is denoted by $E$. In the bi-objective optimization problem shown in Eq. (1), the Pareto front, representing the response vectors of the Pareto set in the objective space $C$, is defined by $H$, that is, $H = {(Y_1(s), Y_2(s))′ ∈ ℝ^2 : s ∈ E}$. Given two controllable process parameters for the purpose of demonstration, the terms design space, objective space, nondominated design point, Pareto point, and Pareto front for a bi-objective optimization problem are illustrated in Fig. 7.
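Extracting the nondominated points from a set of tested designs follows directly from this definition. A short sketch for the maximization setting (illustrative only):

import numpy as np

def pareto_mask(Y):
    # Boolean mask of nondominated rows of Y (n points x 2 objectives),
    # assuming both objectives are to be maximized
    Y = np.asarray(Y)
    n = Y.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i: at least as good everywhere, better somewhere
            if i != j and np.all(Y[j] >= Y[i]) and np.any(Y[j] > Y[i]):
                mask[i] = False
                break
    return mask

Y = np.array([[0.4, 0.7], [0.9, 0.1], [0.6, 0.5], [0.3, 0.3]])
print(pareto_mask(Y))   # [ True  True  True False ]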
Multi-Objective Accelerated Process Optimization. Our approach is to solve the bi-objective optimization problem presented in Eq. (1) by obtaining a well-distributed set of Pareto points, and thereby approximate the Pareto front with a reduced number of experimental runs. Conventional scalarization divides the multi-objective optimization problem into individual single-objective problems and optimizes them individually. The proposed m-APO method, in contrast, leverages a knowledge-guided optimization approach based on the similarity among different subproblems. The proposed methodology was initially developed in preliminary studies [28,29] to deal with multi-objective AM process optimization problems. Each single-objective subproblem is solved using the APO method, which uses results from prior experiments to accelerate the process optimization [12]. APO balances two important properties simultaneously, i.e., optimization and space-filling. For optimization, more trial runs are needed in the regions of $s$ that potentially result in the maximum value of the response function $Z^h(s)$. In contrast, to avoid being trapped in a local optimum, the space-filling aspect is also considered. In the APO approach, each design point is assigned a so-called positive charge, denoted by $q^h(s_j)$. Selection of the charge function $q^h(s)$ relies on the optimization objective. Considering the maximization objective in our case, the charge function $q^h(s)$ should be inversely proportional to the weighted single-objective response values $Z^h(s)$ from Eq. (2) [12,30]. Thus, higher magnitude charges are assigned to design points with lower $Z^h(s)$, and vice versa. Analogous to the physical laws of static charged particles, the design points repel each other to minimize the total electrical potential energy among them. Hence, design points with lower $Z^h(s)$, i.e., with a higher charge, strongly repel other design points. On the other hand, design points with higher $Z^h(s)$, i.e., with a lower electrical charge, accommodate more design points in their neighborhood. The resulting positions correspond to the minimum potential energy among charged particles. Accordingly, more design points with lower charge (i.e., higher $Z^h(s)$ values) are selected, to sequentially maximize the objective function of interest in the current subproblem, i.e., $Z^h(s)$. The potential energy between any two design points $s_i$ and $s_j$ is equal to $q^h(s_i)q^h(s_j)/d(s_i, s_j)$, where $d(s_i, s_j)$ represents the Euclidean distance between $s_i$ and $s_j$. Hence, the total potential energy function within subproblem $h$, including the $n$th new design point, is formulated as follows:

$E_n^h = ∑_{i=1}^{n} ∑_{j=i+1}^{n} q^h(s_i) q^h(s_j) / d(s_i, s_j)$    (3)

The new design point can be obtained by solving $s_n = arg min\ E_n^h$.
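A toy version of this selection rule, assuming responses shifted so that all $Z^h$ values are positive, a simple inverse-proportional charge, and candidate designs on a finite grid (the exact charge function of Ref. [12] may differ):

import numpy as np

def next_design(candidates, tested, z_tested, z_hat):
    # Pick the untested candidate that adds the least potential energy.
    # candidates: (m, d); tested: (n, d); z_tested: (n,) observed Z^h;
    # z_hat: (m,) predicted Z^h at the candidates (none of which are tested)
    q_tested = 1.0 / np.asarray(z_tested)       # charge ~ 1 / Z^h
    best_idx, best_e = None, np.inf
    for k, (c, zc) in enumerate(zip(np.asarray(candidates), z_hat)):
        d = np.linalg.norm(np.asarray(tested) - c, axis=1)
        e = np.sum(q_tested * (1.0 / zc) / d)   # only terms involving c change
        if e < best_e:
            best_idx, best_e = k, e
    return best_idx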
At each step, Pareto points are identified based on the nondomination concept. Afterward, the appropriate weight coefficients for the next subproblem are chosen to steer the next subproblem's optimization in a way that maximizes the spread of the Pareto points. Instead of solving each optimization subproblem individually and independently, experimental data obtained from previous subproblems are used as prior data to accelerate the optimization process for the subsequent subproblems. For example, in Fig. 8, experimental data from subproblems 1 and 2 (represented by segments a-b-c-d and e-f-g, respectively) accelerate the optimization process for subproblem 3 (segment h-i). In other words, fewer experiments are needed to reach the Pareto point corresponding to subproblem 3. This is because experimental data obtained from previous subproblems contribute to designing experiments for the next subproblems; hence, we do not need to design the experiments from scratch for each subproblem. This process is continued until the improvement in the resulting Pareto front is insignificant. The area dominated by the Pareto points in the objective space is used to measure the efficiency of the resulting Pareto points. The proposed method accelerates the bi-objective optimization process by jointly solving the subproblems in a systematic manner. In fact, the method maps and scales experimental data from previous subproblems to guide the remaining subproblems, improving the Pareto front while reducing the number of experiments required. The algorithm is described herewith in detail and summarized in Fig. 9.

Step 1: Decomposing the master problem into subproblems. The master bi-objective optimization problem $Max\ Y(s) = (Y_1(s), Y_2(s))′$ (see Eq. (1)) is decomposed into a sequence of single-objective functions, each of which is expressed as a convex combination of the objective functions (see Eq. (2)). We initialize the algorithm by optimizing the two boundary subproblems with $(γ_1^1 = 0, γ_2^1 = 1)$ and $(γ_1^2 = 1, γ_2^2 = 0)$. The solutions to these first two subproblems give the two end points of the Pareto front (i.e., points d and g in Fig. 8).

Step 2: Solving subproblems via accelerated process optimization. Using APO [12], we sequentially design experiments to optimize the constructed single-objective subproblems. Experimental data generated from previous subproblems are treated as prior data for subsequent subproblems. Assuming that the weight coefficients $(γ_1^h, γ_2^h)$ are determined, all the design points represented in response vector form, $(s_i, Y_i)$, are converted to weighted single-objective response data $(s_i, Z_i^h)$ in the framework of APO. The design points and corresponding weighted single-objective responses are incorporated and applied throughout the APO algorithm, i.e., $s_i$ and $Z_i^h(s_i) = γ_1^h Y_1(s_i) + γ_2^h Y_2(s_i)$. All the experimental data attained during the optimization of prior subproblems (i.e., subproblems $1, 2, …, h−1$) are transformed and fed into the APO of the $h$th subproblem as prior data, accelerating the optimization of the current subproblem by predicting the weighted single-objective responses more accurately. The new design point can be obtained by solving $s_n = arg min\ E_n^h$, where $E_n^h$ is the total energy function defined in Eq. (3). A detailed discussion of the computation of the energy function and the prediction of single-objective response values for new untested design points can be found in Ref. [12].

Step 3: Defining stopping criteria for subproblems. To define the stopping criteria, we use the hypervolume ($HV$) metric as a measure of the Pareto points' contribution [31]. By definition, $HV$ is the volume in the objective space dominated by the resulting Pareto points; a higher $HV$ indicates better coverage of the Pareto front and thus a better solution. In Fig. 10, the light gray area is the $HV$ associated with the gray Pareto points. $ΔHV$, the contribution of a new Pareto point, is represented by the dark gray rectangular area. The algorithm keeps designing experiments for the current subproblem until $ΔHV$ is less than a prespecified threshold (i.e., $ΔHV < ε_1$). The proposed algorithm stops introducing further subproblems and designing more experiments when no significant improvement in $ΔHV$ is observed (i.e., $ΔHV < ε_2$).

Step 4: Determining weight coefficients of subsequent subproblems. Based on the resulting Pareto points obtained at the end of each subproblem, the weight coefficients for the next subproblem, $γ^h = (γ_1^h, γ_2^h)$, are calculated as follows. Note that upper case letters represent unknown variables in this paper, while lower case letters are used for known variables. Assume that, after solving the previous subproblems, a Pareto set of $m$ nondominated design points and their corresponding actual response vectors are obtained. All the existing optimal parameter setups are then sorted in increasing order of $y_1$ and labeled $s_{(1)}^*, …, s_{(m)}^*$. At this stage, the Euclidean distance between all neighboring Pareto points is calculated as follows:

$δ_j = ‖y(s_{(j+1)}^*) − y(s_{(j)}^*)‖$  for $j = 1, …, m−1$

Then, the maximum gap on the existing Pareto front is determined by $Δ = max_{j=1,…,m−1} δ_j$. If the two neighboring Pareto points corresponding to $Δ$ are $s_a^*$ and $s_b^*$, where $y_1(s_a^*) < y_1(s_b^*)$, the weight coefficients for the next subproblem are computed as $γ^h = c^h (y_2(s_a^*) − y_2(s_b^*),\ y_1(s_b^*) − y_1(s_a^*))$, where $c^h$ is a normalizing constant ensuring $γ_1^h + γ_2^h = 1$. Accordingly, we can achieve a uniform coverage of the Pareto front in an efficient manner.
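Step 4 is likewise a few lines of code. A sketch (assuming maximization, with the Pareto responses already extracted):

import numpy as np

def next_weights(pareto_Y):
    # Weight vector (gamma1, gamma2) aimed at the largest Pareto-front gap.
    # pareto_Y: (m, 2) array of Pareto responses, m >= 2, max-max front
    Y = np.asarray(pareto_Y)
    Y = Y[np.argsort(Y[:, 0])]                  # sort by increasing y1
    gaps = np.linalg.norm(np.diff(Y, axis=0), axis=1)
    j = int(np.argmax(gaps))                    # widest gap, between a and b
    a, b = Y[j], Y[j + 1]                       # y1(a) < y1(b)
    g = np.array([a[1] - b[1], b[0] - a[0]])    # (y2(a)-y2(b), y1(b)-y1(a))
    return g / g.sum()                          # c^h normalization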
Experimental and Numerical Studies

We now apply and demonstrate the proposed approach on experimental and simulated data. We first apply our method to a real-world case study for minimizing the deviations in geometric characteristics of parts produced using the FFF AM process. Since the experimental data include measurements of five GD&T characteristics, we first use the principal component analysis (PCA) procedure to reduce the dimension of the objective space to the first two PCs, which account for 88.15% of the total data variation. Subsequently, the proposed optimization method is applied to minimize the absolute values of the first two PCs. The results show that the m-APO methodology achieves all true Pareto points in the objective space with 20% fewer experiments compared to a full factorial DOE plan. To further validate our methodology and test its robustness, we also conducted numerical studies with different numbers of input parameters and different characteristics of the objective space and Pareto fronts.

Experimental Case Study: Multigeometric Characteristic Optimization of Parts Fabricated by the FFF System. The aim of this section is to apply the proposed m-APO method to optimizing the geometric accuracy of AM parts. Samples are fabricated using a polymer extrusion AM process called FFF. They are made with acrylonitrile butadiene styrene thermoplastic on a desktop machine (MakerBot Replicator 2X). A schematic of the FFF process is shown in Fig. 11. Based on the initial screening designs, we take two important controllable process parameters, infill percentage ($I_f$) and extruder temperature ($t_e$) [2–5]. Extruder temperature is the temperature at which the filament is heated in the extruder. Infill relates to the density of the part; for instance, 100% infill corresponds to a completely solid part. The target is to minimize the absolute deviations of five major GD&T characteristics, namely, flatness, circularity, cylindricity, concentricity, and thickness, from the targeted design specifications. Since the m-APO methodology is expressed in the form of a maximization problem, the response values in the case study are multiplied by a negative sign. The 20 experimental data points used in the present study result from previously published works [2–5], which used a full factorial DOE plan. Factors and levels corresponding to this design are listed in Table 2. The correlations among deviations of GD&T characteristics are presented in Table 1 in terms of the Pearson correlation coefficient ($ρ$) for pairs of GD&T characteristics. The higher correlation coefficients stand out compared with the low ones. The correlation between cylindricity and circularity is extremely high ($ρ = 0.93$), in that both contribute to describing the circular feature of the test part in different ways. There is no discernible pattern among the other GD&T correlations. In other words, the GD&T characteristics are variously positively correlated (e.g., $ρ = 0.93$ for cylindricity and circularity), negatively correlated (e.g., $ρ = −0.63$ for concentricity and thickness), and nearly uncorrelated (e.g., $ρ = 0.06$ for cylindricity and thickness). Principal component analysis is first applied to reduce the dimension of the objective space from five to two; accordingly, the proposed bi-objective process optimization can be applied directly to the geometric accuracy optimization problem for the FFF system. The PCA results (see Table 3) show that 88.15% of the variability within the parts' geometric characteristics data is captured by the first two principal components (i.e., $PC_1$ and $PC_2$). Hence, the first two PCs can sufficiently describe the data variations with very negligible loss of information.
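This reduction step is reproducible with standard tooling. A sketch (X stands for the 20 × 5 matrix of GD&T deviations; random placeholder data are used here, so the printed numbers match Table 3 only for the study's actual measurements):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(20, 5)    # placeholder: 20 runs x 5 GD&T deviations

Xs = StandardScaler().fit_transform(X)           # standardize before PCA
pca = PCA(n_components=5).fit(Xs)
print(pca.explained_variance_ratio_)             # "proportion of variance"
print(np.cumsum(pca.explained_variance_ratio_))  # "cumulative proportion"

scores = pca.transform(Xs)[:, :2]                # first two PCs per run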
All five variables contribute to $PC_1$ with positive coefficients (Table 4). Flatness, circularity, and cylindricity are about equally important to $PC_1$, with the largest weights. Although concentricity's contribution to $PC_1$ is negligible, thickness plays a significant role in this component. Hence, $PC_1$ can be considered representative of the average deviations across all GD&T characteristics. However, we see a unique pattern within $PC_2$'s coefficients. The variables that are related to analogous features—i.e., circularity, cylindricity, and concentricity—contribute to $PC_2$ with negative coefficients. Hence, $PC_2$ can be interpreted as the difference of deviations between two clusters of geometric characteristics: roundness characteristics (circularity, cylindricity, and concentricity) and the others (flatness and thickness).

Table 3
                        PC1     PC2     PC3      PC4      PC5
Standard deviation      1.576   1.3866  0.58981  0.46713  0.16190
Proportion of variance  0.497   0.3845  0.06958  0.04364  0.00524
Cumulative proportion   0.497   0.8815  0.95112  0.99476  1

Table 4
               PC1     PC2      PC3      PC4      PC5
Flatness       0.5046  0.2895   −0.7181  −0.3824  0.0481
Circularity    0.5854  −0.1598  0.4859   −0.1831  0.6016
Cylindricity   0.5607  −0.3161  0.1710   0.0901   −0.7404
Concentricity  0.0650  −0.6677  −0.4659  0.5000   0.2877
Thickness      0.2895  0.5895   0.0425   0.7496   0.0687

Using the first two PCs, the geometric accuracy optimization problem is formulated as follows:

$Min\ PC(s) = (|PC_1(s)|, |PC_2(s)|)′$  s.t. $s ∈ S$

where $PC(s)$ denotes the vector of the first two principal components of the deviations of the part's GD&T characteristics, $s$ is the vector of process parameters, and $S$ is the design space. After conducting 20 experiments using the full factorial DOE plan, we attain three Pareto points in the objective space (red dots in Fig. 12). Note that the Pareto set in this case study naturally forms a convex Pareto front. After choosing a random initial experiment (blue dot in Fig. 12), we iteratively apply m-APO. The m-APO methodology leads to the same Pareto points after 16 experimental runs, which translates to a 20% reduction in experimental runs compared with the full factorial design. The optimal process parameters and GD&T characteristics corresponding to the Pareto points are presented in Table 5.

Table 5
t_e (°C)  I_f (%)  Flatness  Circularity  Cylindricity  Concentricity  Thickness
220       90       0.1869    0.3905       0.5011        0.2061         0.1861
230       90       0.1823    0.3783       0.4407        0.1733         0.2604
240       90       0.1887    0.3500       0.4624        0.2001         0.1910

Numerical Simulation Studies for Nonconvex Pareto Fronts. As shown in the FFF case study (Sec. 4.1), the m-APO methodology is effective for a convex bi-objective problem. In the numerical simulation studies, the aim is to evaluate the robustness of m-APO in the case of more challenging nonconvex Pareto fronts, which are usually difficult for multi-objective optimization.
To simulate various experimental conditions, three different combinations of design space structures and Pareto front characteristics are considered: (a) nonconvex Pareto front and well-distributed objective space, (b) nonconvex Pareto front and congested objective space, and (c) high-dimensional design space. The ultimate goal of the simulation study is to achieve a set of high-quality, uniformly spread Pareto points representing the true ones. We note that in reality the functional forms of the objectives, i.e., $Y_1(s)$ and $Y_2(s)$, are unknown; here we present them only to simulate real experimentation. We measure the efficiency of the m-APO methodology using the general distance ($GD$) and the proportional hypervolume ($PHV$), defined as follows:

(1) General distance quantifies the difference between the true Pareto points and those obtained with m-APO. Assuming that $n$ Pareto points are obtained at the end of the simulation, $GD$ is calculated as $GD = (1/n)(∑_{i=1}^{n} d_i^2)^{1/2}$, where $d_i$ represents the minimum Euclidean distance between the $i$th Pareto point from m-APO and the true Pareto points. Smaller values of $GD$ indicate that the Pareto points obtained from m-APO are closer to the true ones; in the ideal case, $GD = 0$.

(2) Proportional hypervolume is the ratio of the hypervolume of the Pareto points obtained using m-APO to the hypervolume of the true Pareto points. By definition, $PHV$ falls within [0, 1]; in the ideal case, $PHV = 1$.

We benchmark the m-APO method against full factorial DOE by comparing $GD$ and $PHV$ within a fixed number of experiments. The results show that the m-APO method achieves significantly higher $PHV$ and lower $GD$ compared to full factorial DOE.
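Both measures are straightforward to compute once the true front is known. A sketch for the two-objective maximization case (the hypervolume uses a simple sorted-rectangle sweep with a fixed reference point; illustrative only):

import numpy as np

def gd(approx, true):
    # RMS distance from each obtained Pareto point to its nearest true one
    true = np.asarray(true)
    d = [np.min(np.linalg.norm(true - a, axis=1)) for a in np.asarray(approx)]
    return np.sqrt(np.mean(np.square(d)))

def hypervolume2d(front, ref):
    # Area dominated by a 2D maximization front, relative to the point ref;
    # assumes the points in front are mutually nondominated
    pts = sorted(map(tuple, front), reverse=True)   # y1 descending
    hv, cur_y2 = 0.0, ref[1]
    for y1, y2 in pts:
        hv += (y1 - ref[0]) * (y2 - cur_y2)
        cur_y2 = y2
    return hv

def phv(approx, true, ref):
    return hypervolume2d(approx, ref) / hypervolume2d(true, ref)  # in [0, 1]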
Case A: Nonconvex Pareto Front and Well-Distributed Objective Space. The m-APO method is applied to an equally spaced discrete design space from a commonly used bi-objective optimization test problem, OKA1. This test problem was used to evaluate the performance of the ParEGO algorithm proposed in Refs. [ ] and [ ], and to benchmark it against NSGA-II. The relationship between the process parameters and the two objective functions is as follows:

$Y_1(s) = s_1′$,  $Y_2(s) = √(2π) − √|s_1′| + 2|s_2′ − 3 cos(s_1′) − 3|^{1/3}$

where $s_1 ∈ [6 sin(π/12), 6 sin(π/12) + 2π cos(π/12)]$ and $s_2 ∈ [−2π sin(π/12), 6 cos(π/12)]$. A design space that includes 342 design points is chosen to construct a well-distributed objective space. This objective space, with normalized values and consisting of 11 true Pareto points, is illustrated in Fig. 13. Because many design points with different sets of process parameters result in the same points in the objective space, the number of design points visible in the objective space appears to be less than 342.

Case B: Nonconvex Pareto Front and Congested Objective Space. An equally spaced discrete design space is selected based on another bi-objective test problem. This is a more challenging case in that the objective space includes very congested points in the middle, farther from the Pareto front. This test problem was constructed to test the performance of an adaptive weighted-sum method for solving bi-objective optimization problems [33]; the functional form of its objective functions is given in Ref. [33]. A discretized design space comprising 441 design points is chosen. The objective space consists of 19 true Pareto points (Fig. 14).

Case C: High-Dimensional Design Space. To test the performance of our methodology in cases with more than two process parameters, we test a bi-objective problem with four process parameters, SK2 [34]. The design space includes 625 design points, and the objective space consists of five true Pareto points, as illustrated in Fig. 15. The second objective function has the following form:

$Y_2(s) = (sin\ s_1 + sin\ s_2 + sin\ s_3 + sin\ s_4) / (1 + (s_1^2 + s_2^2 + s_3^2 + s_4^2)/100)$

where $s_1, s_2, s_3, s_4 ∈ [−3, 3]$.

Simulation Results: Pareto Front Estimation. The performance of the m-APO methodology is compared with full factorial DOE. The estimated Pareto fronts achieved by each method with 25 experiments are depicted in Figs. 16–18, overlaid with the true Pareto fronts. The m-APO method converges toward the true Pareto points much faster than the full factorial DOE. Table 6 shows the improvement in the performance measures ($GD$, $PHV$) achieved by applying the m-APO methodology compared with the full factorial DOE. Because a smaller $GD$ is preferable to a larger one, the $GD$ improvement is reported with a negative sign; and since a larger $PHV$ is preferred, an improvement in $PHV$ is reported as a positive number. We observe significant $PHV$ and $GD$ improvements in all cases by applying the m-APO methodology compared with the full factorial DOE. It is therefore concluded that the proposed methodology outperforms the full factorial DOE in multi-objective process optimization cases. This is because conventional DOE methods are performed simultaneously—as opposed to the sequential approach developed in this work. Furthermore, this work forwards an approach to balance multiple, and potentially negatively correlated (or uncorrelated), geometric accuracy requirements, whereas conventional empirical approaches are not capable of this tradeoff.

Table 6
Test problem specifications                                   GD (%)  PHV (%)
Nonconvex Pareto front and well-distributed objective space   −55     42
Nonconvex Pareto front and congested objective space          −57     24
High-dimensional design space                                 −93     29

Conclusions. This work presented an approach invoking the concept of m-APO to optimize AM process parameters such that parts with the least geometric inaccuracy are obtained. The proposed m-APO technique decomposes a multi-objective optimization problem into a series of simpler single-objective optimization problems. The essence of the approach is that prior knowledge is used to determine the parameter settings for the next trials. This sequential approach guides experiments toward optimal parameter settings more quickly than conventional design of experiments. In other words, instead of conducting experimental trials in the vicinity of process parameter setups where poor results are more probable, the m-APO methodology suggests experimentation at process parameter setups more inclined to favorable results. This approach was tested against both experimental datasets obtained from the FFF AM process and numerically generated data. The specific outcomes are as follows:

• The proposed approach was able to effect a tradeoff among geometric accuracy requirements and reached the optimal process parameter settings with 20% fewer trials compared to full factorial experimental plans.
• We further tested the performance of the proposed approach across various simulated cases, such as a nonconvex Pareto front, a well-distributed objective space, a congested objective space, and an increased number of process parameters.
The results indicate that the proposed methodology outperforms the full factorial designs for such complex cases. The performance metrics—GD and PHV—obtained from the proposed approach significantly surpassed those of the full factorial design; there were 55–93% and 24–42% improvements in GD and PHV, respectively, in the simulated test cases. The results presented in this work are practically important. Given the time- and cost-intensive nature of AM experimental trials, a prudent approach to balance the tradeoff between multiple geometric accuracy requirements is needed in practice. To this end, this work answers the following research question in the context of AM process optimization: What approach is required to balance between multiple geometric accuracy requirements with the minimal number of experimental trials? A gap in the current work is that it is demonstrated in the case of nonfunctional polymer AM parts; the authors are currently researching functional metal AM parts with m-APO as a means for in situ process optimization.

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

One of the authors (PKR) acknowledges the work of Dr. M. Samie Tootooni and Mr. Ashley Dsouza. The experimental data for this work are from their doctoral dissertation and MS thesis research, respectively [4,5].

Funding Data
• Army Research Laboratory (Cooperative Agreement No. W911NF-15-2-0025).

References
• M. C., et al., "Additive Manufacturing: Current State, Future Potential, Gaps and Needs, and Recommendations," ASME J. Manuf. Sci. Eng.
• M. S., et al., "Assessing the Geometric Integrity of Additive Manufactured Parts From Point Cloud Data Using Spectral Graph Theoretic Sparse Representation-Based Classification," Paper No. MSEC2017-2794.
• M. S., et al., "Classifying the Dimensional Variation in Additive Manufactured Parts From Laser-Scanned Three-Dimensional Point Cloud Data Using Machine Learning Approaches," ASME J. Manuf. Sci. Eng.
• A. B., V. V., et al., Additive Manufacturing Handbook: Product Development for the Defense Industry, CRC Press, Boca Raton, FL.
• "Process Maps for Controlling Residual Stress and Melt Pool Size in Laser-Based SFF Processes," Solid Freeform Fabrication Symposium, Austin, TX, Aug. 7–9.
• N. W., P. A., and H. L., "Effects of Process Variables and Size-Scale on Solidification Microstructure in Beam-Based Fabrication of Bulky 3D Structures," Mater. Sci. Eng. A.
• "Optimal Offline Compensation of Shape Shrinkage for Three-Dimensional Printing Processes," IIE Trans.
• "An Analytical Foundation for Optimal Compensation of Three-Dimensional Shape Deformation in Additive Manufacturing," ASME J. Manuf. Sci. Eng.
• "Statistical Predictive Modeling and Compensation of Geometric Deviations of Three-Dimensional Printed Products," ASME J. Manuf. Sci. Eng.
• A. M., S. M., et al., "Accelerated Process Optimization for Laser-Based Additive Manufacturing by Leveraging Similar Prior Studies," IIE Trans.
• "Understanding Error Generation in Fused Deposition Modeling," Surf. Topogr.: Metrol. Prop.
• Y. S., J. Y. H., and H. T., "Benchmarking for Comparative Evaluation of RP Systems and Processes," Rapid Prototyping J.
• S. H., and Y. S., "Error Analysis of FDM Fabricated Medical Replicas," Rapid Prototyping J.
• "Using Response Surface Methodology to Optimize the Stereolithography Process," Rapid Prototyping J.
• P. K., C. E., R. J., and L. J., "Assessment of Dimensional Integrity and Spatial Defect Localization in Additive Manufacturing Using Spectral Graph Theory," ASME J. Manuf. Sci. Eng.
• "Vector Optimization of Laser Solid Freeform Fabrication System Using a Hierarchical Mutable Smart Bee-Fuzzy Inference System and Hybrid NSGA-II/Self-Organizing Map," J. Intell. Manuf.
• L. T., and R. A., "Pareto Front Approximation Using a Hybrid Approach," Procedia Comput. Sci.
• Adaptive Scalarization Methods in Multiobjective Optimization.
• "Evolutionary Multiobjective Optimization," Evolutionary Multiobjective Optimization.
• "Multi-Objective Optimization and Design of Experiments as Tools to Tailor Molecularly Imprinted Polymers Specific for Glucuronic Acid."
• Alkan Donmez, J. E., and P. D., "A Review of Test Artifacts for Additive Manufacturing," National Institute of Standards and Technology, Gaithersburg, MD, Report No. NISTIR 7858.
• and Alkan Donmez, "An Additive Manufacturing Test Artifact," J. Res. Natl. Inst. Stand. Technol.
• "Dimensioning and Tolerancing—Engineering Drawing and Related Documentation Practices," American Society of Mechanical Engineers, New York, Standard No. ASME Y14.5.
• A. M., S. M., and P. K., "Multi-Objective Process Optimization of Additive Manufacturing: A Case Study on Geometry Accuracy Optimization," Annual International Solid Freeform Fabrication Symposium, Austin, TX, Aug. 8–10.
• A. M., and S. M., "Systematic Optimization of Laser-Based Additive Manufacturing for Multiple Mechanical Properties," IEEE International Conference on Automation Science and Engineering, Fort Worth, TX, Aug. 21–25.
• "ParEGO: A Hybrid Algorithm With On-Line Landscape Approximation for Expensive Multiobjective Optimization Problems," IEEE Trans. Evol. Comput.
• "On Test Functions for Evolutionary Multi-Objective Optimization," International Conference on Parallel Problem Solving From Nature, Edinburgh, UK, Sept. 17–21.
• I. Y., and de Weck, O. L., "Adaptive Weighted-Sum Method for Bi-Objective Optimization: Pareto Front Generation," Struct. Multidiscip. Optim.
• "A Review of Multiobjective Test Problems and a Scalable Test Problem Toolkit," IEEE Trans. Evol. Comput.
• Van Veldhuizen, D. A., and G. B., "Multiobjective Evolutionary Algorithm Research: A History and Analysis," Air Force Institute of Technology, Greene, OH, Technical Report.
• "Pareto Front Approximation With Adaptive Weighted Sum Method in Multiobjective Simulation Optimization," Winter Simulation Conference, Austin, TX, Dec. 13–16, pp. 623–633.
{"url":"https://thermalscienceapplication.asmedigitalcollection.asme.org/manufacturingscience/article/139/10/101001/377147/Multi-Objective-Accelerated-Process-Optimization","timestamp":"2024-11-01T22:10:23Z","content_type":"text/html","content_length":"479842","record_id":"<urn:uuid:9e5d4783-7851-4b83-a882-d1041ba9551f>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00299.warc.gz"}
Labour in production budgets

Currently doing MAC 06 task 1.2 and for some reason I am struggling with the above. The re-calculation is as follows:

4Fones Ltd employs 16 people in production who each work a standard 7-hour day and 5-day week.
• Each month has exactly 20 working days.
• Each 4G phone takes 0.5 labour hours to produce.
• The basic labour rate is £10 per hour and overtime is paid at one and a half times the normal hourly rate.
• Budgeted overtime premium is included as part of the direct labour cost.
• 4Fones Ltd's 16 employees are each prepared to work an additional 15 hours per week for the 4-week period. These additional hours will be paid at a premium of 50%.

They predict sales of 25,000 units if the price were set at £50 per phone, instead of the current £110. A local recruitment company can provide temporary workers to cope with the additional demand. The hourly cost for these temporary workers will be £20 per hour.

Any help would be great.

• The easiest way is just to recalculate it completely. First calculate how many units you need to produce. Then calculate the hours you need. Calculate how many of those are normal hours. As overtime is cheaper than temp workers, calculate the overtime next. Then calculate the remaining hours needed, so you know how many temp hours you require. Finally, apply the hourly rates to the hours you just calculated to get the total cost.

Expected sales are 25,000 units. Opening stock is 1,350 units. Closing stock is 8,000 units. The production requirement is then 25,000 - 1,350 + 8,000 = 31,650 units.

The phones take half an hour each to produce, so you need 15,825 hours to produce them all. The normal staff work 7 hours per day for 20 days. There are 16 people, so your normal hours are 2,240 (7 x 20 x 16). They do at most 15 hours of overtime per week, so that's 16 x 15 x 4 = 960 hours. So you need temporary workers to make up the remaining 12,625 hours (15,825 - 2,240 - 960).

The costs of these hours are listed separately:
Normal hours: 2,240 x 10 = 22,400
Overtime: 960 x 15 = 14,400
Temp workers: 12,625 x 20 = 252,500

Add it all together and you come to 289,300. (I didn't check the answers, but I think that is about right.) Hope this helps!

• Thanks, just worked out where I was going wrong: I was doing it for just one week (16 x 15) but forgot the 4 weeks, doh. Cheers
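• If you want to double-check the figures quickly, here's a throwaway Python sketch using the numbers from this thread:

# Production requirement: sales - opening stock + closing stock
units = 25000 - 1350 + 8000            # 31,650 phones
hours_needed = units * 0.5             # 0.5 labour hours per phone -> 15,825

# Capacity over the 4-week period
normal_hours = 16 * 7 * 20             # 16 staff x 7 hrs/day x 20 days = 2,240
overtime_hours = 16 * 15 * 4           # 16 staff x 15 hrs/week x 4 weeks = 960
temp_hours = hours_needed - normal_hours - overtime_hours   # 12,625

cost = normal_hours * 10 + overtime_hours * 15 + temp_hours * 20
print(cost)                            # 289300.0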
{"url":"https://forums.aat.org.uk/Forum/discussion/27917/labour-in-production-budgets","timestamp":"2024-11-03T16:40:31Z","content_type":"text/html","content_length":"290176","record_id":"<urn:uuid:c45040b4-2a3a-4fa5-bbe3-c82d41313ebb>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00525.warc.gz"}
A function from type equality to Leibniz

The Scala standard library provides evidence of two types being equal at the data level: a value of type (A =:= B) witnesses that A and B are the same type. Accordingly, it provides an implicit conversion from A to B. So you can write Int-summing functions on your generic foldable types.

final case class XList[A](xs: List[A]) {
  def sum(implicit ev: A =:= Int): Int = xs.foldLeft(0)(_ + _)
}

That works because ev is inserted as an implicit conversion over that lambda's second parameter. That's not really what we want, though. In particular, flipping A and Int in the ev type declaration will break it:

….scala:5: overloaded method value + with alternatives:
  (x: Int)Int <and>
  (x: Char)Int <and>
  (x: Short)Int <and>
  (x: Byte)Int
 cannot be applied to (A)
       xs.foldLeft(0)(_ + _)

That doesn't make sense, though. Type equality is symmetric: Scala knows it goes both ways, so why is this finicky? Additionally, we apply the conversion for each Int. It is a logical implication that, if A is B, then List[A] must be List[B] as well. But we can't get that cheap, single conversion without a cast.

Scalaz instead provides Leibniz, a more perfect type equality. A simplified version follows, which we will use for the remainder.

sealed abstract class Leib[A, B] {
  def subst[F[_]](fa: F[A]): F[B]
}

This reads "Leib[A, B] can replace A with B in any type function". That "any" is pretty important: it gives us both the theorem that we want, and a tremendous consequent power that gives us most of what we can get in Scala from value-level type equality, by choosing the right F type parameter to subst.

What could it be? Following the Scalazzi rules, where no null, type testing or casting, or AnyRef-defined functions are permitted, what might go in the body of that function? Even if you know what A is, as a Leib implementer, it's hidden behind the unknown F. Even if you know that B is a supertype of A, you don't know that F is covariant, by scalac or otherwise. Even if you know that A is Int and B is Double, what are you going to do with that information? So there's only one thing this Leib could be, because you do have an F of something.

implicit def refl[A]: Leib[A, A] = new Leib[A, A] {
  override def subst[F[_]](fa: F[A]): F[A] = fa
}

Every type is equal to itself. Every well-formed Leib instance starts out this way, in this function.

So, it's great that we know the implication of the subst method's generality. But that's not good enough; we had that with =:= already. We want to write well-typed operations that represent all the implications of the Leib type equality as new Leibs representing those type equalities.

First, let's solve the original problem, using infix type application to show the similarity to =:=:

def sum2(implicit ev: A Leib Int): Int =
  ev.subst[List](xs).foldLeft(0)(_ + _)

There is no more implicit conversion, the result of subst is the same object as the argument, and [List] would be inferred, but I have merely specified it for clarity in this example.

This doesn't compose, though. What if, having substed Int into that List type, I now want to subst List[A] for List[Int] in some type function? Specifically, what about a Leib that represents that type equality? To handle that, we can subst into Leib itself!

def lift[F[_], A, B](ab: Leib[A, B]): Leib[F[A], F[B]] =
  ab.subst[Lambda[X => Leib[F[A], F[X]]]](Leib.refl[F[A]])

Again, the final [F[A]] could be inferred.
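With lift, the earlier sum can be written with a single whole-list substitution instead of an element-wise conversion. A sketch (XList2 and sum3 are names made up for this example, and it assumes the lift above is in scope):

final case class XList2[A](xs: List[A]) {
  def sum3(implicit ev: Leib[A, Int]): Int = {
    type Id[X] = X                     // identity type function
    // One subst rewrites List[A] as List[Int]; no per-element work.
    lift[List, A, Int](ev).subst[Id](xs).foldLeft(0)(_ + _)
  }
}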
As an exercise, define the symm and compose operations, which represent that Leib is symmetric and transitive as well. Hints: the symm body is the same except for the type parameters given, and compose doesn't use refl.

def symm[A, B](ab: Leib[A, B]): Leib[B, A]
def compose[A, B, C](ab: Leib[A, B], bc: Leib[B, C]): Leib[A, C]

Leib power

In Scalaz, Leibniz is already defined, and used in a few places. Though their subst definitions are completely incompatible at the scalac level, they have a weird equivalence due to the awesome power of subst.

import scalaz.Leibniz, Leibniz.===

def toScalaz[A, B](ab: A Leib B): A === B = ab.subst[A === ?](Leibniz.refl)
def toLeib[A, B](ab: A === B): A Leib B = ab.subst[A Leib ?](Leib.refl)

…where ? is to type-lambdas as _ is to Scala lambdas, thanks to the Kind Projector plugin. And so it would be with any pair of Leibniz representations with such subst methods that you might define.

Unfortunately, =:= cannot participate in this universe of isomorphisms; it lacks the subst method that serves as the Leibniz certificate of authenticity. You can get a =:= from a Leibniz, but not vice versa. Why would you want that weak sauce anyway?

Looking up

These are just the basics. Above:
• The weakness of Scala's own =:=,
• the sole primitive Leibniz operator subst,
• how to logically derive other type equalities,
• the isomorphism between each Leibniz representation and all others.

In the next part, we'll look at:
• Why it matters that subst always executes to use a type equality,
• the Haskell implementation,
• higher-kinded type equalities and their Leibnizes,
• why the =:= singleton trick is unsafe,
• simulating GADTs with Leibniz members of data constructors.

This article was tested with Scala 2.11.1, Scalaz 7.0.6, and Kind Projector 0.5.2.
{"url":"https://typelevel.org/blog/2014/07/02/type_equality_to_leibniz.html","timestamp":"2024-11-01T19:28:56Z","content_type":"text/html","content_length":"14330","record_id":"<urn:uuid:ccf1aade-58ea-4520-9f6a-642ed099d2a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00055.warc.gz"}
Model Dimensionality and Overfitting

Overfitting is one of the most common situations any data scientist will face, yet it remains one of the most challenging to understand and master in depth. Overfitting is the production of a model that resembles a portion of (noisy) data too closely, thus losing the ability to capture the real features of the underlying phenomenon. The immediate result of overfitting is a great fit in-sample, while the results do not extend out of sample, which in turn makes the model unusable.

Several causes can lead to overfitting. In the literature, the most commonly reported one is noisy data. This is a quite common condition and it can be easily addressed: the literature is rich in methods for the detection of ill-posed problems and for regularization. However, when working with real data there are many other causes of overfitting. One that is quite common, and much more difficult to point out, is model bias, i.e. the poor ability of the model to capture and explain the underlying phenomenon. The effects of model bias are multiple and can span from omitted variables to partial and unsuitable predictors.

In this experiment we show how a sufficiently broad set of very poor (random) predictors can fit a noiseless function to high levels of accuracy, while not being able to capture the underlying process. This example is based on linear regression, as it is a very simple and intuitive method of forecasting, but the results we achieve here easily extend to more complex machine learning methods.

This article is suitable for mid- to senior-level statisticians and can be proposed to junior levels under due supervision. We assume the reader is at ease with
• mathematical notation,
• the core concepts of linear regression,
• the R-square, what it represents and how it is interpreted.

We generate noiseless observations from a sinusoidal target function, and a set of monotonic predictors by sorting random values in the range [0, 1). We then run a linear fit of the synthetic random predictors on the noise-free observations and observe the quality of the fit

    b ≈ A x,

where A is the matrix formed by compounding the random predictors, b are the observations, and x are the coefficients of the linear model fitted by the linear regression.

import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
%matplotlib inline

n_predictors = 10
n_samples = 64
t_max = 2*np.pi  # renamed from `range` so the Python builtin is not shadowed

# noiseless observations
t = np.linspace(0, t_max, n_samples)
b = np.sin(t)

# random monotonic predictors
A = np.sort(np.random.uniform(0.0, 1.0, size=(n_samples, n_predictors)), 0)

# Linear model
model = LinearRegression()
reg = model.fit(A, b)
b_pred = model.predict(A)

print('Intercept of the model:', model.intercept_)
print('Coefficients of the model:', model.coef_)
r_sq = model.score(A, b)
print('Coefficient of determination:', r_sq)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))  # 1 row, 2 columns
# The plot calls below were lost in extraction and are reconstructed from the
# figure description in the text.
ax1.plot(t, A)
ax1.set_title('A set of random predictors')
ax2.plot(t, b)       # noise-free target (blue)
ax2.plot(t, b_pred)  # fitted response (orange)
ax2.set_title('A noiseless sin function and fitted response')
plt.tight_layout()  # Optional, often improves the layout
Intercept of the model: 0.08399225895835502
Coefficients of the model: [-2.19188871 -5.98560269 2.20475203 -0.14862545 -3.75417644 0.75595747 3.48943476 -0.21138062 0.51693198 4.69958404]
Coefficient of determination: 0.9673453287496954

On the left graph we can observe the predictors: they are somewhat noisy, monotonically increasing functions whose actual shape strongly depends on the different realizations of the noise. However, nothing hints at the fact that they could actually fit a sine function. On the right-hand side we can see the actual noise-free sine function (blue) and the response predicted by the linear regression (orange). Quite surprisingly, the predicted response fits the curve quite well: in most of the cases the R-square is above 0.90, indicating a very good quality of fit.

Also note that the coefficients of the fit are much larger than one would expect. This situation usually points to some issue with the model, most likely omitted variable bias, i.e. the model is unable to properly explain the underlying process, so the regression compounds the predictors and uses the differences in their noise to reproduce the missing variable.

Although the result might look quite unexpected to the novice, it is quite usual for the more experienced practitioner, and it is a direct effect of the high dimensionality of the model: under fairly mild conditions, given a sufficient number of degrees of freedom (in the case of regression, the DoF are simply the predictors), any ML algorithm is smart enough to find an apparently good fit to any set of observations, for no reason at all.

The only mild condition we need to verify for this example to work is the monotonicity of the predictors. Therefore, the only little trick we use in this example is sorting the random values so that we get monotonic predictors; in our specific case the predictors are also quite correlated, but that is not a necessary condition. Note that most neural networks have monotonic (usually sigmoidal) response functions; therefore, they fully qualify to fit (and overfit) any continuous function with extreme ease (see the Kolmogorov reference below).

A. N. Kolmogorov, On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition, Dokl. Akad. Nauk SSSR, 114 (1957), pp. 953--956.

Indeed, the higher the complexity of the model, the higher the risk of seeing unexpected behaviours. Hence the need to use a meaningful set of predictors, control for collinearity and correlation, regularize the fit or the learning process, and validate the results out of sample, as illustrated by the sketch below.
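To illustrate the last point, here is a minimal out-of-sample check. This is a sketch, not part of the original experiment: it continues the notebook above (it assumes n_samples, A, b and the LinearRegression import from the previous cell), and the half/half split is an arbitrary illustrative choice.

# Minimal out-of-sample sketch (assumes the variables defined above).
# Refit on the first half of the samples, score on the held-out second half.
n_train = n_samples // 2

model_oos = LinearRegression()
model_oos.fit(A[:n_train], b[:n_train])

print('in-sample R^2: ', model_oos.score(A[:n_train], b[:n_train]))
print('out-of-sample R^2:', model_oos.score(A[n_train:], b[n_train:]))

# The in-sample R^2 stays high, while the out-of-sample R^2 typically collapses
# (it can even become negative): the random predictors carry no information
# about the sine target outside the fitted region.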
{"url":"https://betamile.co.uk/model-dimensionality-and-overfitting/","timestamp":"2024-11-03T01:07:20Z","content_type":"text/html","content_length":"117596","record_id":"<urn:uuid:4993150f-6c0a-4d0b-b1c2-9280fb1bc221>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00854.warc.gz"}
Numerical Mathematics and Scientific Computing

• Y. Hadjimichael, Ch. Merdon, M. Liero, P. Farrell, An energy-based finite-strain model for 3D heterostructured materials and its validation by curvature analysis, International Journal for Numerical Methods in Engineering, e7508 (2024), pp. 7508/1--7508/28, DOI 10.1002/nme.7508.
This paper presents a comprehensive study of the intrinsic strain response of 3D heterostructures arising from lattice mismatch. Combining materials with different lattice constants induces strain, leading to the bending of these heterostructures. We propose a model for nonlinear elastic heterostructures such as bimetallic beams or nanowires that takes into account local prestrain within each distinct material region. The resulting system of partial differential equations (PDEs) in Lagrangian coordinates incorporates a nonlinear strain and a linear stress-strain relationship governed by Hooke's law. To validate our model, we apply it to bimetallic beams and hexagonal hetero-nanowires and perform numerical simulations using finite element methods (FEM). Our simulations examine how these structures undergo bending under varying material compositions and cross-sectional geometries. In order to assess the fidelity of the model and the accuracy of simulations, we compare the calculated curvature with analytically derived formulations. We derive these analytical expressions through an energy-based approach as well as a kinetic framework, adeptly accounting for the lattice constant mismatch present at each compound material of the heterostructures. The outcomes of our study yield valuable insights into the behavior of strained bent heterostructures. This is particularly significant as the strain has the potential to influence the electronic band structure, piezoelectricity, and the dynamics of charge carriers.

• M. O'Donovan, P. Farrell, J. Moatti, T. Streckenbach, Th. Koprucki, S. Schulz, Impact of random alloy fluctuations on the carrier distribution in multi-color (In,Ga)N/GaN quantum well systems, Physical Review Applied, 21 (2024), pp. 024052/1--024052/12, DOI 10.1103/PhysRevApplied.21.024052.
In this work, we study the impact that random alloy fluctuations have on the distribution of electrons and holes across the active region of a (In,Ga)N/GaN multi-quantum well based light emitting diode (LED). To do so, an atomistic tight-binding model is employed to account for alloy fluctuations on a microscopic level and the resulting tight-binding energy landscape forms input to a drift-diffusion model. Here, quantum corrections are introduced via localization landscape theory and we show that when neglecting alloy disorder our theoretical framework yields results similar to commercial software packages that employ a self-consistent Schroedinger-Poisson-drift-diffusion solver. Similar to experimental studies in the literature, we have focused on a multi-quantum well system where two of the three wells have the same In content while the third well differs in In content. By changing the order of wells in this multicolor quantum well structure and looking at the relative radiative recombination rates of the different emitted wavelengths, we (i) gain insight into the distribution of carriers in such a system and (ii) can compare our findings to trends observed in experiment. Our results indicate that the distribution of carriers depends significantly on the treatment of the quantum well microstructure.
When including random alloy fluctuations and quantum corrections in the simulations, the calculated trends in the relative radiative recombination rates as a function of the well ordering are consistent with previous experimental studies. The results from the widely employed virtual crystal approximation contradict the experimental data. Overall, our work highlights the importance of a careful and detailed theoretical description of the carrier transport in an (In,Ga)N/GaN multi-quantum well system to ultimately guide the design of the active region of III-N-based LED structures.

• S. Haberland, P. Jaap, S. Neukamm, O. Sander, M. Varga, Representative volume element approximations in elastoplastic spring networks, Multiscale Modeling & Simulation. A SIAM Interdisciplinary Journal, 22 (2024), pp. 588--638, DOI 10.1137/23M156656X.
We study the large-scale behavior of a small-strain lattice model for a network composed of elastoplastic springs with random material properties. We formulate the model as an evolutionary rate-independent system. In an earlier work we derived a homogenized continuum model, which has the form of linearized elastoplasticity, as an evolutionary Γ-limit as the lattice parameter tends to zero. In the present paper we introduce a periodic representative volume element (RVE) approximation for the homogenized system. As a main result we prove convergence of the RVE approximation as the size of the RVE tends to infinity. We also show that the hysteretic stress-strain relation of the effective system can be described with the help of a generalized Prandtl--Ishlinskii operator, and we prove convergence of a periodic RVE approximation for that operator. We combine the RVE approximation with a numerical scheme for rate-independent systems and obtain a computational scheme that we use to numerically investigate the homogenized system in the specific case when the original network is given by a two-dimensional lattice model. We simulate the response of the system to cyclic and uniaxial, monotonic loading, and numerically investigate the convergence rate of the periodic RVE approximation. In particular, our simulations show that the RVE error decays with the same rate as the RVE error in the static case of linear elasticity.

• N. Ahmed, V. John, X. Li, Ch. Merdon, Inf-sup stabilized Scott--Vogelius pairs on general shape-regular simplicial grids for Navier--Stokes equations, Computers & Mathematics with Applications. An International Journal, 168 (2024), pp. 148--161, DOI 10.1016/j.camwa.2024.05.034.
This paper considers the discretization of the time-dependent Navier--Stokes equations with the family of inf-sup stabilized Scott--Vogelius pairs recently introduced in [John/Li/Merdon/Rui, Math. Models Methods Appl. Sci., 2024] for the Stokes problem. Therein, the velocity space is obtained by enriching the H1-conforming Lagrange element space with some H(div)-conforming Raviart--Thomas functions, such that the divergence constraint is satisfied exactly. In these methods arbitrary shape-regular simplicial grids can be used. In the present paper two alternatives for discretizing the convective terms are considered. One variant leads to a scheme that still only involves volume integrals, and the other variant employs upwinding known from DG schemes. Both variants ensure the conservation of linear momentum and angular momentum in some suitable sense.
In addition, a pressure-robust and convection-robust velocity error estimate is derived, i.e., the velocity error bound does not depend on the pressure and the constant in the error bound for the kinetic energy does not blow up for small viscosity. After condensation of the enrichment unknowns and all non-constant pressure unknowns, the method can be reduced to a Pk-P0-like system for arbitrary velocity polynomial degree k. Numerical studies verify the theoretical findings.

• R. Araya, A. Caiazzo, F. Chouly, Stokes problem with slip boundary conditions using stabilized finite elements combined with Nitsche, Computer Methods in Applied Mechanics and Engineering, 427 (2024), pp. 117037/1--117037/16, DOI 10.1016/j.cma.2024.117037.
We discuss how slip conditions for the Stokes equation can be handled using Nitsche's method, for a stabilized finite element discretization. Emphasis is made on the interplay between stabilization and Nitsche terms. Well-posedness of the discrete problem and optimal convergence rates, in natural norm for the velocity and the pressure, are established, and illustrated with various numerical experiments. The proposed method fits naturally in the context of a finite element implementation while being accurate, and allows an increased flexibility in the choice of the finite element spaces.

• R. Araya, C. Cárcamo, A.H. Poza, E. Vino, An adaptive stabilized finite element method for the Stokes--Darcy coupled problem, Journal of Computational and Applied Mathematics, 443 (2024), pp. 115753/1--115753/24, DOI 10.1016/j.cam.2024.115753.
For the Stokes--Darcy coupled problem, which models a fluid that flows from a free medium into a porous medium, we introduce and analyze an adaptive stabilized finite element method using Lagrange equal order elements to approximate the velocity and pressure of the fluid. The interface conditions between the free medium and the porous medium are given by mass conservation, the balance of normal forces, and the Beavers--Joseph--Saffman conditions. We prove the well-posedness of the discrete problem and present a convergence analysis with optimal error estimates in natural norms. Next, we introduce and analyze a residual-based a posteriori error estimator for the stabilized scheme. Finally, we present numerical examples to demonstrate the performance and effectiveness of our scheme.

• G.R. Barrenechea, V. John, P. Knobloch, Finite element methods respecting the discrete maximum principle for convection-diffusion equations, SIAM Review, 66 (2024), pp. 1--86, DOI 10.1137/22M1488934.
Convection-diffusion-reaction equations model the conservation of scalar quantities. From the analytic point of view, solutions of these equations satisfy, under certain conditions, maximum principles, which represent physical bounds of the solution. That the same bounds are respected by numerical approximations of the solution is often of utmost importance in practice. The mathematical formulation of this property, which contributes to the physical consistency of a method, is called the discrete maximum principle (DMP). In many applications, convection dominates diffusion by several orders of magnitude. It is well known that standard discretizations typically do not satisfy the DMP in this convection-dominated regime. In fact, in this case it turns out to be a challenging problem to construct discretizations that, on the one hand, respect the DMP and, on the other hand, compute accurate solutions.
This paper presents a survey on finite element methods, with the main focus on the convection-dominated regime, that satisfy a local or a global DMP. The concepts of the underlying numerical analysis are discussed. The survey reveals that for the steady-state problem there are only a few discretizations, all of them nonlinear, that at the same time both satisfy the DMP and compute reasonably accurate solutions, e.g., algebraically stabilized schemes. Moreover, most of these discretizations have been developed in recent years, showing the enormous progress that has been achieved lately. Similarly, methods based on algebraic stabilization, both nonlinear and linear, are currently the only finite element methods that combine the satisfaction of the global DMP and accurate numerical results for the evolutionary equations in the convection-dominated scenario.

• W. Lei, S. Piani, P. Farrell, N. Rotundo, L. Heltai, A weighted hybridizable discontinuous Galerkin method for drift-diffusion problems, Journal of Scientific Computing, 99 (2024), pp. 33/1--33/26, DOI 10.1007/s10915-024-02481-w.
In this work we propose a weighted hybridizable discontinuous Galerkin method (W-HDG) for drift-diffusion problems. By using specific exponential weights when computing the L2 product in each cell of the discretization, we are able to mimic the behavior of the Slotboom variables, and eliminate the drift term from the local matrix contributions, while still solving the problem for the primal variables. We show that the proposed numerical scheme is well-posed, and validate numerically that it has the same properties as classical HDG methods, including optimal convergence, and superconvergence of postprocessed solutions. For polynomial degree zero, dimension one, and vanishing HDG stabilization parameter, W-HDG coincides with the Scharfetter--Gummel finite volume scheme (i.e., it produces the same system matrix). The use of local exponential weights generalizes the Scharfetter--Gummel scheme (the state-of-the-art for finite volume discretization of transport-dominated problems) to arbitrary high order approximations.

• H. Liang, H. Rui, A parameter robust reconstruction nonconforming virtual element method for the incompressible poroelasticity model, Applied Numerical Mathematics. An IMACS Journal, 202 (2024), pp. 127--142, DOI 10.1016/j.apnum.2024.05.001.
A reconstruction nonconforming virtual element method for the incompressible poroelasticity model is developed and analyzed. We investigate how to determine the divergence-free displacement in the incompressible poroelasticity model on polygons. The presented method is robust with respect to the parameter mu, which can also be regarded as real pressure robustness for the virtual element method. Using the space on the polygons to build the reconstruction operator, the method can be applied on general polygonal meshes, satisfies pressure robustness, and overcomes Poisson locking in the incompressible model. The results are corroborated by theoretical derivations as well as numerical results.

• G. Padula, F. Romor, G. Stabile, G. Rozza, Generative models for the deformation of industrial shapes with linear geometric constraints: Model order and parameter space reductions, Computer Methods in Applied Mechanics and Engineering, 423 (2024), pp. 116823/1--116823/36, DOI 10.1016/j.cma.2024.116823.
• S. Piani, P. Farrell, W. Lei, N. Rotundo, L. Heltai, Data-driven solutions of ill-posed inverse problems arising from doping reconstruction in semiconductors, Applied Mathematics in Science and Engineering, 32 (2024), pp. 2323626/1--2323626/27, DOI 10.1080/27690911.2024.2323626.
The non-destructive estimation of doping concentrations in semiconductor devices is of paramount importance for many applications ranging from crystal growth and the recent redefinition of the kilogram to defect and inhomogeneity detection. A number of technologies (such as LBIC, EBIC and LPS) have been developed which allow the detection of doping variations via photovoltaic effects. The idea is to illuminate the sample at several positions and detect the resulting voltage drop or current at the contacts. We model a general class of such photovoltaic technologies by ill-posed global and local inverse problems based on a drift-diffusion system that describes charge transport in a self-consistent electrical field. The doping profile is included as a parametric field. To numerically solve a physically relevant local inverse problem, we present three different data-driven approaches, based on least squares, multilayer perceptrons, and residual neural networks. Our data-driven methods reconstruct the doping profile for a given spatially varying voltage signal induced by a laser scan along the sample's surface. The methods are trained on synthetic data sets (pairs of discrete doping profiles and corresponding photovoltage signals at different illumination positions) which are generated by efficient physics-preserving finite volume solutions of the forward problem. While the linear least squares method yields an average absolute ℓ^∞ error of around 10%, the nonlinear networks roughly halve this error to about 5%. Finally, we optimize the relevant hyperparameters and test the robustness of our approach with respect to noise.

• M. Demir, V. John, Pressure-robust approximation of the incompressible Navier--Stokes equations in a rotating frame of reference, BIT. Numerical Mathematics, 64 (2024), pp. 36/1--36/19, DOI 10.1007/s10543-024-01037-6.
A pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, H1-conforming mixed finite element methods like Scott--Vogelius pairs. An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results.

• D. Frerichs-Mihov, L. Henning, V. John, On loss functionals for physics-informed neural networks for convection-dominated convection-diffusion problems, Communications on Applied Mathematics and Computation, published online in August 2024, DOI 10.1007/s42967-024-00433-7.
In the convection-dominated regime, solutions of convection-diffusion problems usually possess layers, which are regions where the solution has a steep gradient. It is well known that many classical numerical discretization techniques face difficulties when approximating the solution to these problems. In recent years, physics-informed neural networks (PINNs) for approximating the solution to (initial-)boundary value problems received a lot of interest. In this work, we study various loss functionals for PINNs that are novel in the context of PINNs and are especially designed for convection-dominated convection-diffusion problems.
They are numerically compared to the vanilla and a hp-variational loss functional from the literature based on two benchmark problems whose solutions possess different types of layers. We observe that the best novel loss functionals reduce the L^2(Ω) error by 17.3% for the first and 5.5% for the second problem compared to the methods from the literature.

• V. John, X. Li, Ch. Merdon, H. Rui, Inf-sup stabilized Scott--Vogelius pairs on general simplicial grids by Raviart--Thomas enrichment, Mathematical Models & Methods in Applied Sciences, 34 (2024), pp. 919--949, DOI 10.1142/S0218202524500180.
This paper considers the discretization of the Stokes equations with Scott--Vogelius pairs of finite element spaces on arbitrary shape-regular simplicial grids. A novel way of stabilizing these pairs with respect to the discrete inf-sup condition is proposed and analyzed. The key idea consists in enriching the continuous polynomials of order k of the Scott--Vogelius velocity space with appropriately chosen and explicitly given Raviart--Thomas bubbles. This approach is inspired by [Li/Rui, IMA J. Numer. Anal., 2021], where the case k=1 was studied. The proposed method is pressure-robust, with optimally converging H1-conforming velocity and a small H(div)-conforming correction rendering the full velocity divergence-free. For k>d, with d being the dimension, the method is parameter-free. Furthermore, it is shown that the additional degrees of freedom for the Raviart--Thomas enrichment and also all non-constant pressure degrees of freedom can be condensated, effectively leading to a pressure-robust, inf-sup stable, optimally convergent Pk×P0 scheme. Aspects of the implementation are discussed and numerical studies confirm the analytical results.

• V. John, X. Li, Ch. Merdon, Pressure-robust L^2(Ω) error analysis for Raviart--Thomas enriched Scott--Vogelius pairs, Applied Mathematics Letters, 156 (2024), pp. 109138/1--109138/12, DOI 10.1016/j.aml.2024.109138.
Recent work shows that it is possible to enrich the Scott--Vogelius finite element pair by certain Raviart--Thomas functions to obtain an inf-sup stable and divergence-free method on general shape-regular meshes. A skew-symmetric consistency term was suggested for avoiding an additional stabilization term for higher order elements, but no L^2(Ω) error estimate was shown for the Stokes equations. This note closes this gap. In addition, the optimal choice of the stabilization parameter is studied numerically.

• J.P. Thiele, Th. Wick, Numerical modeling and open-source implementation of variational partition-of-unity localizations of space-time dual-weighted residual estimators for parabolic problems, Journal of Scientific Computing, 99 (2024), pp. 25/1--25/40, DOI 10.1007/s10915-024-02485-6.
In this work, we consider space-time goal-oriented a posteriori error estimation for parabolic problems. Temporal and spatial discretizations are based on Galerkin finite elements of continuous and discontinuous type. The main objectives are the development and analysis of space-time estimators, in which the localization is based on a weak form employing a partition-of-unity. The resulting error indicators are used for temporal and spatial adaptivity. Our developments are substantiated with several numerical examples.

• D. Abdel, N.E. Courtier, P. Farrell, Volume exclusion effects in perovskite charge transport modeling, Optical and Quantum Electronics, 55 (2023), pp. 884/1--884/14, DOI 10.1007/s11082-023-05125-9.
Due to their flexible material properties, perovskite materials are a promising candidate for many semiconductor devices such as lasers, memristors, LEDs and solar cells. For example, perovskite-based solar cells have recently become one of the fastest growing photovoltaic technologies. Unfortunately, perovskite devices are far from commercialization due to challenges such as fast degradation. Mathematical models can be used as tools to explain the behavior of such devices, for example drift-diffusion equations portray the ionic and electric motion in perovskites. In this work, we take volume exclusion effects on ion migration within a perovskite crystal lattice into account. This results in the formulation of two different ionic current densities for such a drift-diffusion model -- treating either the mobility or the diffusivity as density-dependent while the other quantity remains constant. The influence of incorporating each current density description into a model for a typical perovskite solar cell configuration is investigated numerically, through simulations performed using two different open source tools.

• D. Abdel, C. Chainais-Hillairet, P. Farrell, M. Herda, Numerical analysis of a finite volume scheme for charge transport in perovskite solar cells, IMA Journal of Numerical Analysis, published online on 10.6.2023, DOI 10.1093/imanum/drad034.
In this paper, we consider a drift-diffusion charge transport model for perovskite solar cells, where electrons and holes may diffuse linearly (Boltzmann approximation) or nonlinearly (e.g. due to Fermi-Dirac statistics). To incorporate volume exclusion effects, we rely on the Fermi-Dirac integral of order −1 when modeling moving anionic vacancies within the perovskite layer which is sandwiched between electron and hole transport layers. After non-dimensionalization, we first prove a continuous entropy-dissipation inequality for the model. Then, we formulate a corresponding two-point flux finite volume scheme on Voronoi meshes and show an analogous discrete entropy-dissipation inequality. This inequality helps us to show the existence of a discrete solution of the nonlinear discrete system with the help of a corollary of Brouwer's fixed point theorem and the minimization of a convex functional. Finally, we verify our theoretically proven properties numerically, simulate a realistic device setup and show exponential decay in time with respect to the L^2 error as well as a physically and analytically meaningful relative entropy.

• S. Katz, A. Caiazzo, V. John, Impact of viscosity modeling on the simulation of aortic blood flow, Journal of Computational and Applied Mathematics, 425 (2023), pp. 115036/1--115036/18, DOI 10.1016/j.cam.2022.115036.
Modeling issues for the simulation of blood flow in an aortic coarctation are studied in this paper. From the physical point of view, several viscosity models for non-Newtonian fluids as well as a Newtonian fluid model will be considered. From the numerical point of view, two different turbulence models are utilized in the simulations. The impact of both, the physical and the numerical modeling, on clinically relevant biomarkers is investigated and compared.

• S. Katz, A. Caiazzo, B. Moreau, U. Wilbrandt, J. Brüning, L. Goubergrits, V. John, Impact of turbulence modeling on the simulation of blood flow in aortic coarctation, International Journal of Numerical Methods in Biomedical Engineering, 39 (2023), pp. e3695/1--e3695/36, DOI 10.1002/cnm.3695.
Numerical simulations of pulsatile blood flow in an aortic coarctation require the use of turbulence modeling. This paper considers three models from the class of large eddy simulation (LES) models (Smagorinsky, Vreman, σ-model) and one model from the class of variational multiscale models (residual-based) within a finite element framework. The influence of these models on the estimation of clinically relevant biomarkers used to assess the degree of severity of the pathological condition (pressure difference, secondary flow degree, normalized flow displacement, wall shear stress) is investigated in detail. The simulations show that most methods are consistent in terms of severity indicators such as pressure difference and stenotic velocity. Moreover, using second-order velocity finite elements, different turbulence models might lead to considerably different results concerning other clinically relevant quantities such as wall shear stresses. These differences may be attributed to differences in numerical dissipation introduced by the turbulence models.

• F. Galarce Marín, K. Tabelow, J. Polzehl, Ch.P. Papanikas, V. Vavourakis, L. Lilaj, I. Sack, A. Caiazzo, Displacement and pressure reconstruction from magnetic resonance elastography images: Application to an in silico brain model, SIAM Journal on Imaging Sciences, 16 (2023), pp. 996--1027, DOI 10.1137/22M149363X.
This paper investigates a data assimilation approach for non-invasive quantification of intracranial pressure from partial displacement data, acquired through magnetic resonance elastography. Data assimilation is based on a parametrized-background data weak methodology, in which the state of the physical system -- tissue displacements and pressure fields -- is reconstructed from partially available data assuming an underlying poroelastic biomechanics model. For this purpose, a physics-informed manifold is built by sampling the space of parameters describing the tissue model close to their physiological ranges, to simulate the corresponding poroelastic problem, and compute a reduced basis. Displacements and pressure reconstruction is sought in a reduced space after solving a minimization problem that encompasses both the structure of the reduced-order model and the available measurements. The proposed pipeline is validated using synthetic data obtained after simulating the poroelastic mechanics on a physiological brain. The numerical experiments demonstrate that the framework can exhibit accurate joint reconstructions of both displacement and pressure fields. The methodology can be formulated for an arbitrary resolution of available displacement data from pertinent images. It can also inherently handle uncertainty on the physical parameters of the mechanical model by enlarging the physics-informed manifold accordingly. Moreover, the framework can be used to characterize, in silico, biomarkers for pathological conditions, by appropriately training the reduced-order model. A first application for the estimation of ventricular pressure as an indicator of abnormal intracranial pressure is shown in this contribution.

• C. Chainais-Hillairet, R. Eymard, J. Fuhrmann, A monotone numerical flux for quasilinear convection diffusion equations, Mathematics of Computation, 93 (2024), pp. 203--231 (published online in June 2023), DOI 10.1090/mcom/3870.
• R. Araya, C. Cárcamo, A.H. Poza, A stabilized finite element method for the Stokes--Temperature coupled problem, Applied Numerical Mathematics. An IMACS Journal, 187 (2023), pp. 24--49, DOI 10.1016/j.apnum.2023.02.002.
In this work, we introduce and analyze a new stabilized finite element scheme for the Stokes--Temperature coupled problem. This new scheme allows equal order of interpolation to approximate the quantities of interest, i.e. velocity, pressure, temperature, and stress. We analyze an equivalent variational formulation of the coupled problem inspired by the ideas proposed in [3]. The existence of the discrete solution is proved, decoupling the proposed stabilized scheme and using the help of continuous dependence results and Brouwer's theorem under the standard assumption of sufficiently small data. Optimal convergence is proved under classic regularity assumptions of the solution. Finally, we present some numerical examples to show the quality of our scheme, in particular, we compare our results with those coming from a standard reference in geosciences described in [38].

• D. Budáč, V. Miloš, M. Carda, M. Paidar, J. Fuhrmann, K. Bouzek, Prediction of electrical conductivity of porous composites using a simplified Monte Carlo 3D equivalent electronic circuit network model: LSM--YSZ case study, Electrochimica Acta, 457 (2023), pp. 142512/1--142512/12, DOI 10.1016/j.electacta.2023.142512.
Multiphase electric charge conductors composed of materials with various properties are widely utilized in both research and industrial applications. The composite materials include porous electrodes and other components mainly applied in fuel cell and battery technologies. In this study, a simplified Monte Carlo equivalent electronic circuit (EEC) network model is presented. In comparison to similar models, the present EEC network model allows an accurate prediction of the electrical properties of such materials, thus saving time-consuming experimental determination. The distinct feature of this EEC network model is that it requires only experimentally easily obtainable data as the input parameters: phase composition, porosity and bulk electrical conductivity of the individual constituents. During its run, the model generates a large number of artificial cubically shaped specimens based on random distribution of individual phases according to the input composition. Each of the specimens generated was modelled by a corresponding EEC network. The EEC networks were solved using Kirchhoff's laws, resulting in impedance response simulation for the prediction of composite conductivity values. The EEC network model was validated using lanthanum strontium manganite mixed with yttria-stabilized zirconia. Excellent agreement was obtained between the experimentally determined and the calculated electrical conductivity for sample porosities of 0 to 60 %. Due to its variability, the EEC network model can be suitable for a wide range of practical applications. The presented approach has high potential to save an enormous amount of experimental effort, while maintaining sufficient accuracy, when designing corresponding multiphase electrode structures.

• R. Finn, M. O'Donovan, P. Farrell, J. Moatti, T. Streckenbach, Th. Koprucki, S. Schulz, Theoretical study of the impact of alloy disorder on carrier transport and recombination processes in deep UV (Al,Ga)N light emitters, Applied Physics Letters, 122 (2023), pp. 241104/1--241104/7, DOI 10.1063/5.0148168.
Aluminum gallium nitride [(Al,Ga)N] has gained significant attention in recent years due to its potential for highly efficient light emitters operating in the deep ultra-violet (UV) range (<280 nm).
However, given that current devices exhibit extremely low efficiencies, understanding the fundamental properties of (Al,Ga)N-based systems is of key importance. Here, using a multi-scale simulation framework, we study the impact of alloy disorder on carrier transport, radiative and non-radiative recombination processes in a c-plane Al0.7Ga0.3N/Al0.8Ga0.2N quantum well embedded in a p-n junction. Our calculations reveal that alloy fluctuations can open "percolative" pathways that promote transport for the electrons and holes into the quantum well region. Such an effect is neglected in conventional and widely used transport simulations. Moreover, we find that the resulting increased carrier density and alloy induced carrier localization effects significantly increase non-radiative Auger--Meitner recombination in comparison to the radiative process. Thus, to suppress such non-radiative processes and potentially related material degradation, a careful design (wider well, multi-quantum wells) of the active region is required to improve the efficiency of deep UV light emitters.

• B. García-Archilla, V. John, J. Novo, Second order error bounds for POD-ROM methods based on first order divided differences, Applied Mathematics Letters, 146 (2023), pp. 108836/1--108836/7, DOI 10.1016/j.aml.2023.108836.
This note proves for the heat equation that using BDF2 as time stepping scheme in POD-ROM methods with snapshots based on difference quotients gives both the optimal second order error bound in time and pointwise estimates.

• B. García-Archilla, V. John, J. Novo, POD-ROMs for incompressible flows including snapshots of the temporal derivative of the full order solution, SIAM Journal on Numerical Analysis, 61 (2023), pp. 1340--1368, DOI 10.1137/22M1503853.
In this paper we study the influence of including snapshots that approach the velocity time derivative in the numerical approximation of the incompressible Navier--Stokes equations by means of proper orthogonal decomposition (POD) methods. Our set of snapshots includes the velocity approximation at the initial time from a full order mixed finite element method (FOM) together with approximations to the time derivative at different times. The approximation at the initial velocity can be replaced by the mean value of the velocities at the different times so that when implementing the method to the fluctuations, as done mostly in practice, only approximations to the time derivatives are included in the set of snapshots. For the POD method we study the differences between projecting onto L2 and H1. In both cases pointwise in time error bounds can be proved. Including grad-div stabilization in both the FOM and the POD methods, error bounds with constants independent of inverse powers of the viscosity can be obtained.

• B. García-Archilla, V. John, S. Katz, J. Novo, POD-ROMs for incompressible flows including snapshots of the temporal derivative of the full order solution: Error bounds for the pressure, Journal of Numerical Mathematics, published online on 26.08.2023, DOI 10.1515/jnma-2023-0039.
Reduced order methods (ROMs) for the incompressible Navier--Stokes equations, based on proper orthogonal decomposition (POD), are studied that include snapshots which approach the temporal derivative of the velocity from a full order mixed finite element method (FOM). In addition, the set of snapshots contains the mean velocity of the FOM. Both the FOM and the POD-ROM are equipped with a grad-div stabilization.
A velocity error analysis for this method can be found already in the literature. The present paper studies two different procedures to compute approximations to the pressure and proves error bounds for the pressure that are independent of inverse powers of the viscosity. Numerical studies support the analytic results and compare both methods.

• A. Jha, V. John, P. Knobloch, Adaptive grids in the context of algebraic stabilizations for convection-diffusion-reaction equations, SIAM Journal on Scientific Computing, 45 (2023), pp. B564--B589, DOI 10.1137/21M1466360.
Three algebraically stabilized finite element schemes for discretizing convection-diffusion-reaction equations are studied on adaptively refined grids. These schemes are the algebraic flux correction (AFC) scheme with the Kuzmin limiter, the AFC scheme with the Barrenechea--John--Knobloch limiter, and the recently proposed monotone upwind-type algebraically stabilized method. Both conforming closure of the refined grids and grids with hanging vertices are considered. A nonstandard algorithmic step becomes necessary before these schemes can be applied on grids with hanging vertices. The assessment of the schemes is performed with respect to the satisfaction of the global discrete maximum principle, the accuracy, e.g., smearing of layers, and the efficiency in solving the corresponding nonlinear problems.

• A. Jha, O. Pártl, N. Ahmed, D. Kuzmin, An assessment of solvers for algebraically stabilized discretizations of convection-diffusion-reaction equations, Journal of Numerical Mathematics, 31 (2023), pp. 79--103, DOI 10.1515/jnma-2021-0123.
We consider flux-corrected finite element discretizations of 3D convection-dominated transport problems and assess the computational efficiency of algorithms based on such approximations. The methods under investigation include flux-corrected transport schemes and monolithic limiters. We discretize in space using a continuous Galerkin method and P1 or Q1 finite elements. Time integration is performed using the Crank-Nicolson method or an explicit strong stability preserving Runge-Kutta method. Nonlinear systems are solved using a fixed-point iteration method, which requires solution of large linear systems at each iteration or time step. The great variety of options in the choice of discretization methods and solver components calls for a dedicated comparative study of existing approaches. To perform such a study, we define new 3D test problems for time dependent and stationary convection-diffusion-reaction equations. The results of our numerical experiments illustrate how the limiting technique, time discretization and solver impact the overall performance.

• P. Ral, A.K. Giri, V. John, Instantaneous gelation and non-existence of weak solutions for the Oort--Hulst--Safronov coagulation model, Proceedings of The Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences, 479 (2023), pp. 20220385/1--20220385/13, DOI 10.1098/rspa.2022.0385.
The possible occurrence of instantaneous gelation to the Oort--Hulst--Safronov (OHS) coagulation equation is investigated for a certain class of unbounded coagulation kernels. The existence of instantaneous gelation is confirmed by showing the non-existence of mass-conserving weak solutions. Finally, it is shown that for such kernels, there is no weak solution to the OHS coagulation equation at any time interval.
• B. Spetzler, D. Abdel, F. Schwierz, M. Ziegler, P. Farrell, The role of vacancy dynamics in two-dimensional memristive devices, Advanced Electronic Materials, published online on 08.11.2023, DOI 10.1002/aelm.202300635.
Two-dimensional (2D) layered transition metal dichalcogenides (TMDCs) are promising memristive materials for neuromorphic computing systems as they could solve the problem of the excessively high energy consumption of conventional von Neumann computer architectures. Despite extensive experimental work, the underlying switching mechanisms are still not understood, impeding progress in material and device functionality. This study reveals the dominant role of mobile defects in the switching dynamics of 2D TMDC materials. The switching process is governed by the formation and annihilation dynamics of a local vacancy depletion zone. Moreover, minor changes in the interface potential barriers cause fundamentally different device behavior previously thought to originate from multiple mechanisms. The key mechanisms are identified with a charge transport model for electrons, holes, and ionic point defects, including image-charge-induced Schottky barrier lowering (SBL). The model is validated by comparing simulations to measurements for various 2D MoS2-based devices, strongly corroborating the relevance of vacancies in TMDC devices and offering a new perspective on the switching mechanisms. The insights gained from this study can be used to extend the functional behavior of 2D TMDC memristive devices in future neuromorphic computing systems.

• D. Frerichs-Mihov, L. Henning, V. John, Using deep neural networks for detecting spurious oscillations in discontinuous Galerkin solutions of convection-dominated convection-diffusion equations, Journal of Scientific Computing, 97 (2023), pp. 36/1--36/27, DOI 10.1007/s10915-023-02335-x.
Standard discontinuous Galerkin (DG) finite element solutions to convection-dominated convection-diffusion equations usually possess sharp layers but also exhibit large spurious oscillations. Slope limiters are known as a post-processing technique to reduce these unphysical values. This paper studies the application of deep neural networks for detecting mesh cells on which slope limiters should be applied. The networks are trained with data obtained from simulations of a standard benchmark problem with linear finite elements. It is investigated how they perform when applied to discrete solutions obtained with higher order finite elements and to solutions for a different benchmark problem.

• V. John, P. Knobloch, U. Wilbrandt, A posteriori optimization of parameters in stabilized methods for convection-diffusion problems -- Part II, Journal of Computational and Applied Mathematics, 428 (2023), pp. 115167/1--115167/17, DOI 10.1016/j.cam.2023.115167.
Extensions of algorithms for computing optimal stabilization parameters in finite element methods for convection-diffusion equations are presented. These extensions reduce the dimension of the control space, in comparison to available methods, and thus address the long computing times of these methods. One method is proposed that considers only relevant mesh cells, another method that uses groups of mesh cells, and the combination of both methods is also studied. The incorporation of these methods within a gradient-based optimization procedure, via solving an adjoint problem, is explained. Numerical studies provide impressions on the gain of efficiency as well as on the loss of accuracy if control spaces with reduced dimensions are utilized.
• Ch. Merdon, W. Wollner, Pressure-robustness in the context of optimal control, SIAM Journal on Control and Optimization, 61 (2023), pp. 342--360, DOI 10.1137/22M1482603.
This paper studies the benefits of pressure-robust discretizations in the scope of optimal control of incompressible flows. Gradient forces that may appear in the data can have a negative impact on the accuracy of state and control and can only be correctly balanced if their L^2-orthogonality onto discretely divergence-free test functions is restored. Perfectly orthogonal divergence-free discretizations or divergence-free reconstructions of these test functions do the trick and lead to much better analytic a priori estimates that are also validated in numerical examples.

• O. Pártl, U. Wilbrandt, J. Mura, A. Caiazzo, Reconstruction of flow domain boundaries from velocity data via multi-step optimization of distributed resistance, Computers & Mathematics with Applications. An International Journal, 129 (2023), pp. 11--33, DOI 10.1016/j.camwa.2022.11.006.
We reconstruct the unknown shape of a flow domain using partially available internal velocity measurements. This inverse problem is motivated by applications in cardiovascular imaging where motion-sensitive protocols, such as phase-contrast MRI, can be used to recover three-dimensional velocity fields inside blood vessels. In this context, the information about the domain shape serves to quantify the severity of pathological conditions, such as vessel obstructions. We consider a flow modeled by a linear Brinkman problem with a fictitious resistance accounting for the presence of additional boundaries. To reconstruct these boundaries, we employ a multi-step gradient-based variational method to compute a resistance that minimizes the difference between the computed flow velocity and the available data. Afterward, we apply different post-processing steps to reconstruct the shape of the internal boundaries. To limit the overall computational cost, we use a stabilized equal-order finite element method. We prove the stability and the well-posedness of the considered optimization problem. We validate our method on three-dimensional examples based on synthetic velocity data and using realistic geometries obtained from cardiovascular imaging.

• Z. Amer, Numerical methods for coupled drift-diffusion and Helmholtz models for laser applications, Leibniz MMS Days 2024, April 10 - 12, 2024, Leibniz Network "Mathematical Modeling and Simulation", Leibniz Institut für Verbundwerkstoffe GmbH (IVW), Kaiserslautern, April 11, 2024.

• Z. Amer, Numerical methods for coupled drift-diffusion and Helmholtz models for laser applications, International Conference on Simulation of Organic Electronics and Photovoltaics, SimOEP, September 2 - 4, 2024, ZHAW - Zurich University of Applied Sciences, Winterthur, Switzerland, September 4, 2024.

• S. Katz, Turbulence modeling in aortic blood flow: traditional models and perspectives on machine learning, VPH (Virtual Physiological Human) Conference 2024, September 4 - 6, 2024, Universität Stuttgart, September 4, 2024.

• L. Ermoneit, M. Kantner, Th. Koprucki, J. Fuhrmann, B. Schmidt, Optimal control of a Si/SiGe quantum bus for scalable quantum computing architectures, QUANTUM OPTIMAL CONTROL From Mathematical Foundations to Quantum Technologies, Berlin, May 21, 2024.
• B. Schmidt, J.-P. Thiele, Code and perish?! How about publishing your software?, Leibniz MMS Days 2024, Mini Workshop, April 10 - 12, 2024, Leibniz-Institut für Verbundwerkstoffe (IVW), Kaiserslautern, April 10, 2024.

• D. Abdel, Modeling and simulation of vacancy-assisted charge transport in innovative semiconductor devices, Applied Mathematics and Simulation for Semiconductor Devices (AMaSiS 2024), September 10 - 13, 2024, WIAS Berlin, September 11, 2024.

• M. Demir, Pressure-robust approximation of the Navier--Stokes equations with Coriolis force, 9th European Congress of Mathematics (9ECM), July 15 - 19, 2024, Congress of the European Mathematical Society, School of Engineering of the University of Seville, Spain, July 15, 2024.

• M. Demir, Pressure-robust approximation of the incompressible Navier--Stokes equations in a rotating frame of reference, Exploring Scientific Research: Workshop for Early Career Researchers, November 11 - 12, 2024, Gulf University for Science and Technology, Center for Applied Mathematics and Bioinformatics, Kuwait-Stadt, Kuwait, November 11, 2024.

• M. Demir, Time filtered second order backward Euler method for EMAC formulation of Navier--Stokes equations, 20th Annual Workshop on Numerical Methods for Problems with Layer Phenomena, May 23 - 24, 2024, University of Cyprus, Department of Mathematics and Statistics, Protaras, Cyprus, May 24, 2024.

• M. Zainelabdeen, Augmenting the grad-div stabilization for Taylor--Hood finite elements with a vorticity stabilization, The Chemnitz Finite Element Symposium 2024, September 9 - 11, 2024, Technische Universität Chemnitz, September 11, 2024.

• M. Zainelabdeen, Physics-informed neural networks for convection-dominated convection-diffusion problems, 20th Annual Workshop on Numerical Methods for Problems with Layer Phenomena, May 22 - 25, 2024, Department of Mathematics and Statistics, University of Cyprus, Protaras, May 24, 2024.

• M. Zainelabdeen, Physics-informed neural networks for convection-dominated convection-diffusion problems, International Conference on Boundary and Interior Layers, BAIL 2024, June 10 - 14, 2024, University of A Coruña, Department of Mathematics, Spain, June 11, 2024.

• A. Caiazzo, Data-driven reduced-order modeling and data assimilation for the characterization of aortic coarctation, 8th International Conference on Computational and Mathematical Biomedical Engineering (CMBE24), June 24 - 26, 2024, George Mason University, Arlington, Virginia, USA, June 24, 2024.

• A. Caiazzo, Multiscale FSI for the effective modeling of vascular tissues, VPH (Virtual Physiological Human) Conference 2024, September 4 - 6, 2024, Universität Stuttgart, September 4, 2024.

• A. Caiazzo, Validation of an open-source lattice Boltzmann solver (OpenLB) for the simulation of airflow over dairy buildings, Leibniz MMS Days 2024, April 10 - 12, 2024, Leibniz Network "Mathematical Modeling and Simulation", Leibniz Institut für Verbundwerkstoffe GmbH (IVW), Kaiserslautern, April 11, 2024.

• C. Cárcamo, Frequency-domain formulation and convergence analysis of Biot's poroelasticity equations based on total pressure, Computational Techniques and Applications Conference (CTAC 2024), November 19 - 22, 2024, Monash University, School of Mathematics, Melbourne, Australia, November 20, 2024.
Cárcamo, Frequency-domain formulation and convergence analysis of Biot's poroelasticity equations based on total pressure, The Chemnitz Finite Element Symposium 2024, September 9 - 11, 2024, Technische Universität Chemnitz, September 9, 2024. • C. Cárcamo, Total pressure-based frequency-domain formulation and convergence analysis of Biot's poroelasticity equations with a new finite element stabilization, Minisymposium "Full and reduced-order modeling of multiphysics problems", WONAPDE 2024: Seventh Chilean Workshop on Numerical Analysis of Partial Differential Equations, Concepción, January 15 - 19, 2024, Universidad de Concepción, Barrio Universitario s/n, Region of Bío-Bío, Chile, January 16, 2024. • P. Farrell, Charge transport in perovskite solar cells: modeling, analysis and simulations, Inria-ECDF Partnership Kick-Off, Robert-Koch-Forum, Wilhelmstraße 67, Berlin, June 7, 2024. • J. Fuhrmann, S. Maass, S. Ringe, Monolithic coupling of a CatMAP based microkinetic model for heterogeneous electrocatalysis and ion transport with finite ion sizes, Applied Mathematics and Simulation for Semiconductor Devices (AMaSiS 2024), Berlin, September 10 - 13, 2024. • J. Fuhrmann, Development of numerical methods and tools for drift-diffusion simulations, Applied Mathematics and Simulation for Semiconductor Devices (AMaSiS 2024), Berlin, September 10 - 13, 2024. • J. Fuhrmann, What's new with VoronoiFVM.jl, JuliaCon 2024, July 9 - 13, 2024, JuliaCon.org with TU and PyData Eindhoven, Netherlands, July 11, 2024. • Y. Hadjimichael, Strain distribution in zincblende and wurtzite GaAs nanowires bent by a one-sided (In,Al)As shell, Applied Mathematics and Simulation for Semiconductor Devices (AMaSiS 2024), Berlin, September 10 - 13, 2024. • V. John, Finite element methods respecting the discrete maximum principle for convection-diffusion equations, International Conference on 'Latest Advances in Computational and Applied Mathematics' (LACAM) 2024, February 21 - 24, 2024, Indian Institute of Science Education and Research Thiruvananthapuram, Kerala, India, February 21, 2024. • V. John, Finite element methods respecting the discrete maximum principle for convection-diffusion equations, Mathematical Fluid Mechanics in 2024, August 19 - 23, 2024, Czech Academy of Sciences, Institute of Mathematics, Prague, Czech Republic, August 21, 2024. • V. John, Finite element methods respecting the discrete maximum principle for convection-diffusion equations, Trends in Scientific Computing - 30 Jahre Wissenschaftliches Rechnen in Dortmund, May 21 - 22, 2024, TU Dortmund, Fakultät für Mathematik, LSIII, May 21, 2024. • V. John, On two modeling issues in aortic blood flow simulations, Seminar of Dr. Nagaiah Chamakuri, Scientific Computing Group (SCG), School of Mathematics, Indian Institute of Science Education and Research, Thiruvananthapuram, Kerala, India, February 20, 2024. • V. John, On using machine learning techniques for the numerical solution of convection-diffusion problems, ALGORITMY 2024, Central-European Conference on Scientific Computing, Minisymposium ``Numerical methods for convection-dominated problems'', March 16 - 20, 2024, Department of Mathematics and Descriptive Geometry, Slovak University of Technology in Bratislava, High Tatra Mountains, Podbanske, Slovakia, March 19, 2024. • V. John, On using machine learning techniques for the numerical solution of convection-diffusion problems, Seminar-talk, Prof.
Sashikumaar Ganesan, Indian Institute of Science Bangalore, Department of Computational and Data Sciences, Bangalore, India, February 13, 2024. • Ch. Merdon, Mass-conservative reduced basis approach for heterogeneous catalysis, Leibniz MMS Days 2024, Kaiserslautern, April 10 - 12, 2024. • Ch. Merdon, Mass-conservative reduced basis approach for heterogeneous catalysis, Leibniz MMS Days 2024, April 10 - 12, 2024, Leibniz Network "Mathematical Modeling and Simulation", Leibniz Institut für Verbundwerkstoffe GmbH (IVW), Kaiserslautern. • Ch. Merdon, Pressure-robustness in Navier--Stokes finite element simulations, 10th International Conference on Computational Methods in Applied Mathematics (CMAM-10), June 10 - 14, 2024, Universität Bonn, Institut für Numerische Simulation, June 11, 2024. • Ch. Merdon, Pressure-robustness in the context of the weakly compressible Navier--Stokes equations, The Chemnitz Finite Element Symposium 2024, September 9 - 11, 2024, Technische Universität Chemnitz, September 9, 2024. • O. Pártl, Fracture-controlled reservoir performance optimization via 3D numerical modeling and simulation, Leibniz MMS Days 2024, April 10 - 12, 2024, Leibniz Network "Mathematical Modeling and Simulation", Leibniz Institut für Verbundwerkstoffe GmbH (IVW), Kaiserslautern, April 11, 2024. • O. Pártl, Optimization of geothermal energy production from fracture-controlled reservoirs via 3D numerical modeling and simulation, General Assembly 2024 of the European Geosciences Union (EGU), April 14 - 19, 2024, European Geosciences Union (EGU), Vienna, Austria, April 15, 2024, DOI 10.5194/egusphere-egu24-4164 . • F. Romor, Efficient numerical resolution of parametric partial differential equations on solution manifolds parametrized by neural networks, 9th European Congress on Computational Methods in Applied Sciences and Engineering, June 3 - 7, 2024, ECCOMAS, scientific organization, Lisbon, Portugal, June 4, 2024. • F. Romor, Registration-based data assimilation of aortic blood flow, Leibniz MMS Days 2024, Kaiserslautern, April 10 - 12, 2024. • F. Romor, Registration-based data assimilation of aortic blood flow, Leibniz MMS Days 2024, April 10 - 12, 2024, Leibniz Network "Mathematical Modeling and Simulation", Leibniz Institut für Verbundwerkstoffe GmbH (IVW), Kaiserslautern. • D. Runge, Mass-conservative reduced basis approach for heterogeneous catalysis, Leibniz MMS Days 2024, Kaiserslautern, April 10 - 12, 2024. • D. Runge, Mass-conservative reduced basis approach for heterogeneous catalysis, Leibniz MMS Days 2024, April 10 - 12, 2024, Leibniz Network "Mathematical Modeling and Simulation", Leibniz Institut für Verbundwerkstoffe GmbH (IVW), Kaiserslautern. • T. Siebert, The general purpose algorithmic differentiation wrapper ADOLC.jl, JuliaCon 2024, July 9 - 13, 2024, JuliaCon.org with TU and PyData Eindhoven, Netherlands, July 12, 2024. • J.P. Thiele, RSE / PostDoc am WIAS, PhoenixD Research School für Promovierende, Exzellenzcluster PhoenixD, Leibniz Universität Hannover, April 4, 2024. • J.P. Thiele, RSE training and professional development BoF, Research Software Engineering Conference, RSECon24, September 3 - 5, 2024, Society of Research Software Engineering (SocRSE), a charitable incorporated organisation based in the UK, Newcastle, UK, September 5, 2024. • J.P. Thiele, RSE und RDM: Code and perish?! How about publishing your software (and data)?, Oberseminar Numerik und Optimierung, Institut für Angewandte Mathematik, Leibniz Universität Hannover, May 2, 2024. • J.P.
Thiele, The Research Software Engineer (RSE): Who is that? And what skills do they have to help you?, European Trilinos & Kokkos User Group Meeting 2024 (EuroTUG 2024), June 24 - 26, 2024, Helmut-Schmidt-Universität, Universität der Bundeswehr Hamburg, June 24, 2024. • S. Katz, Impact of turbulence modeling on the full and reduced simulations of aortic blood flow, 22nd Computational Fluids Conference (CFC 2023), April 25 - 28, 2023, International Association for Computational Mechanics (IACM), Cannes, France, April 28, 2023. • L. Ermoneit, B. Schmidt, J. Fuhrmann, Th. Koprucki, M. Kantner, Coherent spin-qubit shuttling in a SiGe quantum bus: Device-scale modeling, simulation and optimal control, Leibniz MMS Days 2023, Potsdam, April 17 - 19, 2023. • Y. Hadjimichael, An energy-based finite-strain constitutive model for bent heterostructured materials, GAMM 93rd Annual Meeting of the International Association of Applied Mathematics and Mechanics, May 30 - June 2, 2023, Technische Universität Dresden, June 2, 2023. • D. Runge, Reduced basis approach for convection-diffusion equations with non-linear boundary reaction conditions, Finite Volumes for Complex Applications 10 (FVCA10), Université de Strasbourg, France, November 2, 2023. • D. Runge, Mass-conservative reduced basis approach for convection-diffusion equations with non-linear boundary reaction conditions, Leibniz MMS Days 2023, Leibniz Network "Mathematical Modeling and Simulation", Leibniz Institute for Agricultural Engineering and Bioeconomy Potsdam (ATB), Potsdam, April 18, 2023. • M. Demir, Subgrid artificial viscosity modelling based defect-deferred correction method for fluid-fluid interaction, 2023 International CMMSE conference and the Second conference on high performance computing (CHPC), July 3 - 8, 2023, Universidad de Cádiz, Spain, July 6, 2023. • M. Demir, Subgrid artificial viscosity modelling based defect-deferred correction method for fluid-fluid interaction, European Conference on Numerical Mathematics and Advanced Applications (ENUMATH), September 4 - 8, 2023, Instituto Superior Técnico, Lisboa, Portugal, September 7, 2023. • A. Caiazzo, Multiscale and reduced-order modeling for poroelasticity, European Conference on Numerical Mathematics and Advanced Applications (ENUMATH), September 4 - 8, 2023, Instituto Superior Técnico, Lisboa, Portugal. • C. Cárcamo Sanchez, F. Galarce Marín, A. Caiazzo, I. Sack, K. Tabelow, Quantitative tissue pressure imaging via PDE-informed assimilation of MR-data, MATH+ Day, Humboldt-Universität zu Berlin, October 20, 2023. • P. Farrell, Charge transport in perovskite devices: modeling, numerical analysis and simulations, Workshop on Applied Mathematics: Quantum and Classical Models, Università degli Studi di Firenze, Dipartimento di Matematica e Informatica 'Ulisse Dini', Italy, November 29, 2023. • P. Farrell, Modeling and numerical simulation of two-dimensional TMDC memristive devices, 10th International Congress on Industrial and Applied Mathematics (ICIAM 2023), Tokyo, Japan, August 20 - 25, 2023. • P. Farrell, Modeling and numerical simulation of two-dimensional memristive devices, 22nd European Consortium for Mathematics in Industry (ECMI) Conference on Industrial and Applied Mathematics, June 26 - 30, 2023, Wrocław University of Science and Technology, Poland. • P. Farrell, Device physics characterization and interpretation in perovskite and organic materials (DEPERO), October 3 - 5, 2023, Eidgenössische Technische Hochschule Zürich, nanoGe, Switzerland. • D.
Frerichs-Mihov, On deep learning techniques for solving convection-dominated convection-diffusion equations, 10th International Congress on Industrial and Applied Mathematics (ICIAM), Minisymposium 00747 ``Analysis and Numerics on Deep Learning Based Methods for Solving PDEs'', August 20 - 25, 2023, Waseda University, Tokyo, Japan, August 23, 2023. • D. Frerichs-Mihov, Using deep learning techniques for solving convection-dominated convection-diffusion equations, 22nd Computational Fluids Conference (CFC 2023), April 25 - 28, 2023, International Association for Computational Mechanics (IACM), Cannes, France, April 28, 2023. • J. Fuhrmann, Ch. Keller, M. Landstorfer, B. Wagner, Development of an ion-channel model-framework for in-vitro assisted interpretation of current voltage relations, MATH+ Day, Humboldt-Universität zu Berlin, October 20, 2023. • J. Fuhrmann, Thermodynamically consistent finite volume schemes for electrolyte simulations, 10th International Congress on Industrial and Applied Mathematics (ICIAM 2023), August 20 - 25, 2023, Waseda University, Tokyo, Japan, August 22, 2023. • J. Fuhrmann, Two entropic finite volume schemes for a Nernst--Planck--Poisson system with ion volume constraints, Finite Volumes for Complex Applications 10 (FVCA10), Université de Strasbourg, France, November 1, 2023. • J. Fuhrmann, VORONOIFVM.JL -- A multiphysics finite volume solver for elliptic and parabolic systems, SIAM Conference on Computational Science and Engineering (CSE23), Minisymposium MS67 ``Research Software Engineering with Julia'', February 26 - March 3, 2023, Amsterdam, Netherlands, February 27, 2023. • J. Fuhrmann, Voronoi finite volume methods for complex applications in Julia, International Conference on Numerical Analysis of Partial Differential Equations (ANEDP 2023), October 16 - 18, 2023, Moulay Ismail University, Faculty of Sciences, Meknes, Morocco. • V. John, A SUPG-stabilized POD-ROM method for convection-diffusion-reaction problems (online talk), Numerical Analysis of Galerkin ROMs seminar series (Online Event), February 28, 2023. • V. John, On slope limiters in discontinuous Galerkin discretizations of convection-diffusion problems, European Conference on Numerical Mathematics and Advanced Applications (ENUMATH), Minisymposium MS06 ``Theoretical and computational aspects of the discontinuous Galerkin method'', September 4 - 8, 2023, Instituto Superior Técnico, Lisboa, Portugal, September 5, 2023. • V. John, Finite element methods respecting the discrete maximum principle for convection-diffusion equations I, 19th Workshop on Numerical Methods for Problems with Layer Phenomena, Charles University, Faculty of Mathematics and Physics, Department of Numerical Mathematics, Prague, Czech Republic, May 26, 2023. • V. John, On recent topics in the finite element analysis of convection-diffusion problems (online talk), Numerical Analysis Seminar (Hybrid Event), University of Waterloo, Applied Mathematics, Canada, April 11, 2023. • M. Kantner, L. Ermoneit, B. Schmidt, J. Fuhrmann, A. Sala, L.R. Schreiber, Th. Koprucki, Optimal control of a SiGe-quantum bus for coherent electron shuttling in scalable quantum computing architectures, Silicon Quantum Electronics Workshop 2023, Kyoto, Japan, October 31 - November 2, 2023. • Ch. Merdon, Gradient-robust hybrid discontinuous Galerkin discretizations for the compressible Stokes equations, Forschungsseminar von Prof. Carsten Carstensen, Humboldt-Universität zu Berlin, Institut für Mathematik, October 17, 2023. • Ch.
Merdon, Raviart--Thomas enriched Scott--Vogelius finite element methods for the Navier--Stokes equations (online talk), City University of Hong Kong, Department of Mathematics, Hong Kong, January 18, 2023. • Ch. Merdon, Raviart--Thomas enriched Scott--Vogelius FEM for the Navier--Stokes equations, Capita Selecta Seminar, SACS - Systems, Analysis and Computational Sciences, Department of Mathematics University of Twente (DAMUT), Enschede, Netherlands, May 10, 2023. • Ch. Merdon, Raviart--Thomas enriched Scott--Vogelius finite element methods for the Navier--Stokes equations, GAMM 93rd Annual Meeting of the International Association of Applied Mathematics and Mechanics, May 30 - June 2, 2023, Technische Universität Dresden, June 2, 2023. • Ch. Merdon, Raviart--Thomas-enriched Scott--Vogelius finite element methods for the Stokes equations on general meshes, The 29th Biennial Numerical Analysis Conference 2023, June 27 - 30, 2023, University of Strathclyde, Department of Mathematics and Statistics, Glasgow, UK, June 27, 2023. • O. Pártl, A computational framework for sustainable geothermal energy production in fracture-controlled reservoirs based on well placement optimization, Leibniz MMS Days 2023, Potsdam, April 17 - 19, 2023. • O. Pártl, Finite element methods respecting the discrete maximum principle for convection-diffusion equations III, 19th Workshop on Numerical Methods for Problems with Layer Phenomena, Charles University, Faculty of Mathematics and Physics, Department of Numerical Mathematics, Prague, Czech Republic, May 26, 2023. • O. Pártl, Reconstruction of flow domain boundaries from velocity data via multi-step optimization of distributed resistance, European Conference on Numerical Mathematics and Advanced Applications (ENUMATH), September 4 - 8, 2023, Instituto Superior Técnico, Lisboa, Portugal. • J.P. Thiele, Competencies and responsibilities of an RSE and how to acquire them (in Germany), FG RSE 2023: Fachgruppentreffen, Gesellschaft für Informatik, October 10 - 11, 2023, Leibniz Universität Hannover, October 10, 2023. • P.C. Africa, D. Arndt, W. Bangerth, B. Blais, M. Fehling, R. Gassmöller, T. Heister, L. Heltai, S. Kinnewig, M. Kronbichler, M. Maier, P. Munch, M. Schreter-Fleischhacker, J.P. Thiele, B. Turcksin, D. Wells, V. Yushutin, The deal.II Library, Version 9.6, Report, https://www.dealii.org/, 2024. • G. Alì, P. Farrell, N. Rotundo, Forward lateral photovoltage scanning problem: Perturbation approach and existence-uniqueness analysis, Preprint no. 2404.10466, Cornell University, 2024, DOI 10.48550/arXiv.2404.10466 . In this paper, we present analytical results for the so-called forward lateral photovoltage scanning (LPS) problem. The (inverse) LPS model predicts doping variations in a crystal by measuring the current leaving the crystal generated by a laser at various positions. The forward model consists of a set of nonlinear elliptic equations coupled with a measuring device modeled by a resistance. Standard methods to ensure the existence and uniqueness of the forward model cannot be used in a straightforward manner due to the presence of an additional generation term modeling the effect of the laser on the crystal. Hence, we scale the original forward LPS problem and employ a perturbation approach to derive the leading order system and the correction up to the second order in an appropriate small parameter.
While these simplifications pose no issues from a physical standpoint, they enable us to demonstrate the analytic existence and uniqueness of solutions for the simplified system using standard arguments from elliptic theory adapted to the coupling with the measuring device. • R. Araya, A. Caiazzo, F. Chouly, Stokes problem with slip boundary conditions using stabilized finite elements combined with Nitsche, Preprint no. 2404.08810, Cornell University, 2024, DOI 10.48550/arXiv.2404.08810 . We discuss how slip conditions for the Stokes equation can be handled using the Nitsche method, for a stabilized finite element discretization. Emphasis is placed on the interplay between stabilization and Nitsche terms. Well-posedness of the discrete problem and optimal convergence rates, in natural norm for the velocity and the pressure, are established, and illustrated with various numerical experiments. The proposed method fits naturally in the context of a finite element implementation while being accurate, and allows an increased flexibility in the choice of the finite element spaces. • J.P. Thiele, ideal.II: a Galerkin space-time extension to the finite element library deal.II, Preprint no. 2408.08840, Cornell University, 2024, DOI 10.48550/arXiv.2408.08840 . The C++ library deal.II provides classes and functions to solve stationary problems with finite elements on one- to three-dimensional domains. It also supports the typical way to solve time-dependent problems using time-stepping schemes, either with an implementation by hand or through the use of external libraries like SUNDIALS. A different approach is the usage of finite elements in time as well, which results in space-time finite element schemes. The library ideal.II (short for instationary deal.II) aims to extend deal.II to simplify implementations of the second approach. • R. Araya, A. Caiazzo, F. Chouly, Stokes problem with slip boundary conditions using stabilized finite elements combined with Nitsche, Preprint no. hal-04077986, Hyper Articles en Ligne (HAL). We discuss how slip conditions for the Stokes equation can be handled using the Nitsche method, for a stabilized finite element discretization. Emphasis is placed on the interplay between stabilization and Nitsche terms. Well-posedness of the discrete problem and optimal convergence rates are established, and illustrated with various numerical experiments. • R. Finn, M. O'Donovan, P. Farrell, J. Moatti, T. Streckenbach, Th. Koprucki, S. Schulz, Theoretical study of the impact of alloy disorder on carrier transport and recombination processes in deep UV (Al, Ga)N light emitters, Preprint no. hal-04037215, Hyper Articles en Ligne (HAL), 2023. Aluminium gallium nitride ((Al,Ga)N) has gained significant attention in recent years due to its potential for highly efficient light emitters operating in the deep ultra-violet (UV) range (< 280 nm). However, given that current devices exhibit extremely low efficiencies, understanding the fundamental properties of (Al,Ga)N-based systems is of key importance. Here, using a multi-scale simulation framework, we study the impact of alloy disorder on carrier transport, radiative and non-radiative recombination processes in a c-plane Al0.7Ga0.3N/Al0.8Ga0.2N quantum well embedded in a p-i-n junction. Our calculations reveal that alloy fluctuations can open "percolative" pathways that promote transport for the electrons and holes into the quantum well region. Such an effect is neglected in conventional and widely used transport simulations.
Moreover, we also find that the resulting increased carrier density and alloy-induced carrier localization effects significantly increase non-radiative Auger-Meitner recombination in comparison to the radiative process. Thus, to avoid such non-radiative processes and potentially related material degradation, a careful design (wider well, multi quantum wells) of the active region is required to improve the efficiency of deep UV light emitters. • F. Galarce, J. Mura, A. Caiazzo, Bias and multiscale correction methods for variational state estimation algorithms, Preprint no. arXiv:2311.14031, Cornell University, 2023, DOI 10.48550/arXiv.2311.14031 . The integration of experimental data into mathematical and computational models is crucial for enhancing their predictive power in real-world scenarios. However, the performance of data assimilation algorithms can be significantly degraded when measurements are corrupted by biased noise, altering the signal magnitude, or when the system dynamics lack smoothness, such as in the presence of fast oscillations or discontinuities. This paper focuses on variational state estimation using the so-called 'Parameterized Background Data Weak' method, which relies on a parameterized background defined by a set of constraints, enabling state estimation by solving a minimization problem on a reduced-order background model, subject to constraints imposed by the input measurements. To address biased noise in observations, a modified formulation is proposed, incorporating a correction mechanism to handle rapid oscillations by treating them as slow-decaying modes based on a two-scale splitting of the classical reconstruction algorithm. The effectiveness of the proposed algorithms is demonstrated through various examples, including discontinuous signals and simulated Doppler ultrasound data. • B. García-Archilla, V. John, S. Katz, J. Novo, POD-ROMs for incompressible flows including snapshots of the temporal derivative of the full order solution: Error bounds for the pressure, Preprint no. arXiv:2304.08313, Cornell University, 2023, DOI 10.48550/arXiv.2304.08313 . Reduced order methods (ROMs) for the incompressible Navier--Stokes equations, based on proper orthogonal decomposition (POD), are studied that include snapshots which approach the temporal derivative of the velocity from a full order mixed finite element method (FOM). In addition, the set of snapshots contains the mean velocity of the FOM. Both the FOM and the POD-ROM are equipped with a grad-div stabilization. A velocity error analysis for this method can be found already in the literature. The present paper studies two different procedures to compute approximations to the pressure and proves error bounds for the pressure that are independent of inverse powers of the viscosity. Numerical studies support the analytic results and compare both methods. • F. Goth, R. Alves, M. Braun, L.J. Castro, G. Chourdakis, S. Christ, J. Cohen, F. Erxleben, J.-N. Grad, M. Hagdorn, T. Hodges, G. Juckeland, D. Kempf, A.-L. Lamprecht, J. Linxweiler, M. Schwarzmeier, H. Seibold, J.P. Thiele, H. von Waldow, S. Wittke, Foundational competencies and responsibilities of a research software engineer, Preprint no. arXiv:2311.11457, Cornell University, 2023, DOI 10.48550/arXiv.2311.11457 . The term Research Software Engineer, or RSE, emerged a little over 10 years ago as a way to represent individuals working in the research community but focusing on software development.
The term has been widely adopted and there are a number of high-level definitions of what an RSE is. However, the roles of RSEs vary depending on the institutional context they work in. At one end of the spectrum, RSE roles may look similar to a traditional research role. At the other extreme, they resemble the role of a software engineer in industry. Most RSE roles inhabit the space between these two extremes. Therefore, providing a straightforward, comprehensive definition of what an RSE does and what experience, skills and competencies are required to become one is challenging. In this community paper we define the broad notion of what an RSE is, explore the different types of work they undertake, and define a list of fundamental competencies as well as values that define the general profile of an RSE. On this basis, we elaborate on the progression of these skills along different dimensions, looking at specific types of RSE roles, proposing recommendations for organisations, and giving examples of future specialisations. An appendix details how existing curricula fit into this framework. • P.L. Lederer, Ch. Merdon, Gradient-robust hybrid DG discretizations for the compressible Stokes equations, Preprint no. arXiv:2311.06098, Cornell University, 2023, DOI 10.48550/arXiv.2311.06098 . This paper studies two hybrid discontinuous Galerkin (HDG) discretizations for the velocity-density formulation of the compressible Stokes equations with respect to several desired structural properties, namely provable convergence, the preservation of non-negativity and mass constraints for the density, and gradient-robustness. The latter property dramatically enhances the accuracy in well-balanced situations, such as the hydrostatic balance where the pressure gradient balances the gravity force. One of the studied schemes employs an H(div)-conforming velocity ansatz space which ensures all mentioned properties, while a fully discontinuous method is shown to satisfy all properties but the gradient-robustness. Also higher-order schemes for both variants are presented and compared in three numerical benchmark problems. The final example shows the importance also for non-hydrostatic well-balanced states of the compressible Navier-Stokes equations. • B. Spetzler, D. Abdel, F. Schwierz, M. Ziegler, P. Farrell, The role of mobile point defects in two-dimensional memristive devices, Preprint no. arXiv:2304.06527, Cornell University, 2023, DOI 10.48550/arXiv.2304.06527 . Two-dimensional (2D) layered transition metal dichalcogenides (TMDCs) are promising memristive materials for neuromorphic computing systems as they could solve the problem of the excessively high energy consumption of conventional von Neumann computer architectures. Despite extensive experimental work, the underlying switching mechanisms are still not understood, impeding progress in material and device functionality. This study reveals the dominant role of mobile defects in the switching dynamics of 2D TMDC materials. The switching process is governed by the formation and annihilation dynamics of a local vacancy depletion zone. Moreover, minor changes in the interface potential barriers cause fundamentally different device behavior previously thought to originate from multiple mechanisms. The key mechanisms are identified with a charge transport model for electrons, holes, and ionic point defects, including image-charge-induced Schottky barrier lowering (SBL).
The model is validated by comparing simulations to measurements for various 2D MoS2-based devices, strongly corroborating the relevance of vacancies in TMDC devices and offering a new perspective on the switching mechanisms. The insights gained from this study can be used to extend the functional behavior of 2D TMDC memristive devices in future neuromorphic computing systems. • P. Farrell, J. Moatti, M. O'Donovan, S. Schulz, Th. Koprucki, Importance of satisfying thermodynamic consistency in light emitting diode simulations, Preprint no. hal-04012467, Hyper Articles en Ligne (HAL), 2023. We show the importance of using a thermodynamically consistent flux discretization when describing drift-diffusion processes within light emitting diode simulations. Using the classical Scharfetter-Gummel scheme with Fermi-Dirac statistics is an example of such an inconsistent scheme. In this case, for an (In,Ga)N multi quantum well device, the Fermi levels show steep gradients on one side of the quantum wells which are not to be expected. This result originates from neglecting the diffusion enhancement associated with Fermi-Dirac statistics in the numerical flux approximation. For a thermodynamically consistent scheme, such as the SEDAN scheme, the spikes in the Fermi levels disappear. We will show that thermodynamic inconsistency has far-reaching implications for the current-voltage curves and recombination rates.
{"url":"https://www.wias-berlin.de/research/rgs/fg3/publications.jsp?lang=1","timestamp":"2024-11-04T18:30:48Z","content_type":"text/html","content_length":"190286","record_id":"<urn:uuid:ce9ab1cb-d138-4662-acb7-9c7300aea030>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00471.warc.gz"}
The Trilinos/packages/trilinoscouplings package is used by developers to combine multiple Trilinos packages together to test integrated capabilities.
Overview of TrilinosCouplings
TrilinosCouplings contains the following classes.
Scaling Examples
The following set of drivers, which solve PDEs using the finite element method, has been developed to test Trilinos package capabilities and to use in scaling tests. The drivers use the following Trilinos packages.
Browse all of trilinoscouplings as a single doxygen collection
You can browse all of TrilinosCouplings as a single doxygen collection. Warning: this is not the recommended way to learn about TrilinosCouplings software. However, this is a good way to browse the directory structure of trilinoscouplings, to locate files, etc.
{"url":"https://docs.trilinos.org/latest-release/packages/trilinoscouplings/doc/html/index.html","timestamp":"2024-11-12T17:08:11Z","content_type":"application/xhtml+xml","content_length":"10364","record_id":"<urn:uuid:9a1cb054-4369-4728-ba8b-ca35d8df7f6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00589.warc.gz"}
Kurtosis is a term used to describe probability distributions, and comes up frequently during discussions of hedge funds. While normal distributions, or bell curves, all have the same general shape, kurtosis helps us describe exactly how these curves differ. Generally speaking, the higher the kurtosis, the heavier the “tails” of the curve are, meaning the likelihood of extreme values is greater. When speaking about something like risk, higher kurtosis means that extreme events are more likely to occur, hence the existence of “fat-tail” funds, which are designed to perform well during such extreme scenarios, unlikely though they may be.
Q. Another geeky term for investors looking at hedge funds, right? It goes along with Sharpe and Sortino ratios?
A. Yes, it’s a way of talking about tail risk: excess kurtosis means fat tail risk. As we’ve discussed, a really critical issue for evaluating hedge fund managers is how much volatility their results have shown, because more volatility means more risk. You can think of this whole question as being a picture of a bell curve, what folks call a “normal distribution” of outcomes.
Q. OK, so where does “kurtosis” fit in?
A. The exact shape of that curve tells you what the pattern of outcomes is likely to be. If the ends of the curve are “fat,” we have lots of risk of extreme events. Kurtosis is a good shorthand way to understand this. You can picture it pretty easily if you remember that kurtosis is also called “peakedness”. The more pointy the bell curve is in the middle, the higher the kurtosis.
Q. So what kind of numbers are we looking for here?
A. Normal or low kurtosis is nice: that would be a number of 3 or less. Excess kurtosis means that we’re looking at a bell curve with fat tails: an enhanced possibility of events that are out on the extreme ends of the curve.
Q. And why is that, exactly?
A. Well, imagine a normal smooth bell curve. We’ve just said that it has a kurtosis of 3. And a high-kurtosis curve looks very pointy in the middle, instead of being smooth. Now, at first that sounds good, because it means the outcomes are closely grouped around the middle. But the problem is that, to compensate for that compression of results, the tails of the curve are fatter than normal. And that shows non-normal results are more likely. So high kurtosis is usually a sign that very extreme events are more likely than is typical. That’s why it’s also known as “the volatility of volatility”.
Q. So how does this relate to some of the other ratios? Where does it fit in?
A. This one is better for understanding not just average risks, but the possibility of extreme ones. Investors should ask about it, right along with Sharpe, Sortino, and the others.
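To put numbers on this, here is a minimal Python sketch (an illustration, not part of the interview; it uses NumPy and SciPy, with fisher=False so that a normal bell curve scores about 3) comparing a normal sample with a fat-tailed Student-t sample:

import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
normal_returns = rng.normal(size=100_000)                # thin tails
fat_tailed_returns = rng.standard_t(df=3, size=100_000)  # heavy tails

# fisher=False reports kurtosis on the scale where a normal curve is ~3;
# the default (fisher=True) reports excess kurtosis, where normal is ~0.
print(kurtosis(normal_returns, fisher=False))      # close to 3
print(kurtosis(fat_tailed_returns, fisher=False))  # well above 3: fat tails

A sample that scores far above 3 is exactly the excess-kurtosis case described above: extreme outcomes are more likely than a normal bell curve would suggest.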
{"url":"https://altanswer.com/video/kurtosis/","timestamp":"2024-11-03T22:14:25Z","content_type":"application/xhtml+xml","content_length":"30798","record_id":"<urn:uuid:d95e5a75-48d3-4504-9d63-90ce643598df>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00408.warc.gz"}
Precalculus Honors
Course No: 2313
Subject: Mathematics
Grade Level: 11
Course Length: Year
Course Type: AP/Honors, Elective
UC/CSU Subject Approval: C
Prerequisite: Grades of B or better in both semesters of Algebra 2H (or B- or below in Algebra 2H/successful completion of Algebra 1 Accelerated and a successful petition to department chair given suitable grade and recommendation from Geometry, Geometry Honors, or Geometry Accelerated).
Criteria for Enrollment: Approval of Department Chair, based on: teacher recommendation; satisfactory national test scores; past performance in Math, especially Algebra 2H. Students who take Algebra 1 Accelerated and are moving into the Honors level cannot do so via the summer Geometry course. If a student takes Algebra 1 Accelerated and the summer Geometry course, then they must take Precalculus Accelerated as a sophomore.
Fulfillments: Elective for juniors fulfilling six-semester graduation requirement
Precalculus mathematics is a course designed for the student who intends to continue the study of mathematics in the direction of the natural or physical sciences and is an intensive preparation for Calculus. Most of the course is an analysis of families of functions and relations – polynomials; rational functions; radical functions; trigonometric functions, including an intense study of right triangle trigonometry, its applications to vectors, circular functions, and trigonometric identities; logarithmic functions; and exponential functions — and their graphs, both algebraically and through the graphing calculator, including an introduction to the fundamental aspects of Calculus. Significant independent work is considered a requirement for this course – students will be asked to perform independent study tasks, including (but not limited to) viewing and taking notes from screencasts, taking online quizzes, and collaborative learning. A Texas Instruments TI-83 or TI-84 series graphing calculator is required.
*Class receives honors weighting in SI weighted GPA and UC/CSU GPA calculations
{"url":"https://curriculum.siprep.org/courses/precalculus-honors/","timestamp":"2024-11-13T18:00:37Z","content_type":"text/html","content_length":"73192","record_id":"<urn:uuid:e9e8bfc3-0c35-4b2a-aa10-a7366afec8ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00200.warc.gz"}
DUALSP01: Dual Spaces and Hahn-Banach's Theorem for X being for v, w being of X for v1, w1 being (RLSp2RVSp X) st v v1 & w w1 holds ( v w1 & v w1 ) for X being for v, w being of X for v1, w1 being (RVSp2RLSp X) st v v1 & w w1 holds ( v w1 & v w1 ) for V being for f, g, h being (V *') ( h g iff for x being of V holds h = (f . x) + (g . x) for V being for f, h being (V *') for a being ( h f iff for x being of V holds h * (f . x) X be ( RLSStruct(# (LinearFunctionals X),(Zero_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ ((LinearFunctionals X),( RealVectSpace the carrier of X))) #) is Abelian & RLSStruct(# (LinearFunctionals X),(Zero_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ ((LinearFunctionals X),(RealVectSpace the carrier of X))) #) is add-associative & RLSStruct(# (LinearFunctionals X),(Zero_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ ((LinearFunctionals X),(RealVectSpace the carrier of X))) #) is right_zeroed & RLSStruct(# (LinearFunctionals X),(Zero_ (( LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ ((LinearFunctionals X),(RealVectSpace the carrier of X))) #) is right_complementable & RLSStruct(# (LinearFunctionals X),(Zero_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ (( LinearFunctionals X),(RealVectSpace the carrier of X))) #) is scalar-distributive & RLSStruct(# (LinearFunctionals X),(Zero_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ (( LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ ((LinearFunctionals X),(RealVectSpace the carrier of X))) #) is vector-distributive & RLSStruct(# (LinearFunctionals X),(Zero_ (( LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ ((LinearFunctionals X),(RealVectSpace the carrier of X))) #) is scalar-associative & RLSStruct(# (LinearFunctionals X),(Zero_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Add_ ((LinearFunctionals X),(RealVectSpace the carrier of X))),(Mult_ (( LinearFunctionals X),(RealVectSpace the carrier of X))) #) is scalar-unital ) by Th17, RSSPACE:11 theorem Th20b for X being for f, g, h being (X *') ( h g iff for x being of X holds h = (f . x) + (g . x) theorem Th21b for X being for f, h being (X *') for a being ( h f iff for x being of X holds h * (f . 
x) Th21X: for X being RealNormSpace for F being Functional of X st F = the carrier of X --> 0 holds ( F is linear-Functional of X & F is Lipschitzian ) X be ( RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),(Mult_ ((BoundedLinearFunctionals X),(X *'))) #) is Abelian & RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),(Mult_ ((BoundedLinearFunctionals X),(X *'))) #) is add-associative & RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),(Mult_ ((BoundedLinearFunctionals X),(X *'))) #) is right_zeroed & RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),(Mult_ ((BoundedLinearFunctionals X),(X *'))) #) is right_complementable & RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),(Mult_ ((BoundedLinearFunctionals X),(X *')) ) #) is vector-distributive & RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),(Mult_ ((BoundedLinearFunctionals X), (X *'))) #) is scalar-distributive & RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),(Mult_ (( BoundedLinearFunctionals X),(X *'))) #) is scalar-associative & RLSStruct(# (BoundedLinearFunctionals X),(Zero_ ((BoundedLinearFunctionals X),(X *'))),(Add_ ((BoundedLinearFunctionals X),(X *'))),( Mult_ ((BoundedLinearFunctionals X),(X *'))) #) is scalar-unital ) by Th22, RSSPACE:11 Th27: for X being RealNormSpace for g being Lipschitzian linear-Functional of X holds PreNorms g is bounded_above theorem Th35 for X being for f, g, h being (DualSp X) ( h g iff for x being of X holds h = (f . x) + (g . x) theorem Th36 for X being for f, h being (DualSp X) for a being ( h f iff for x being of X holds h * (f . x) theorem Th40 for X being for f, g, h being (DualSp X) ( h g iff for x being of X holds h = (f . x) - (g . x) Lm3: for e being Real for seq being Real_Sequence st seq is convergent & ex k being Nat st for i being Nat st k <= i holds seq . i <= e holds lim seq <= e
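Read in conventional notation, these theorems formalize the familiar pointwise structure of the dual space; the following plain-LaTeX restatement is offered as a gloss (it is not part of the Mizar source, whose identifiers above remain authoritative):

% Pointwise operations on functionals f, g on X (cf. Th20b/Th21b, Th35/Th36):
%   h = f + g  iff  h . x = (f . x) + (g . x) for every x,
%   h = a * f  iff  h . x = a * (f . x)       for every x.
\[
  (f+g)(x) = f(x) + g(x), \qquad (a f)(x) = a\, f(x).
\]
% For a Lipschitzian (bounded) linear functional g, the set PreNorms g is
% bounded above (Th27), so the dual norm is well defined:
\[
  \| g \|_{X^{*}} = \sup_{\| x \| \le 1} \lvert g(x) \rvert < \infty .
\]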
{"url":"https://mizar.uwb.edu.pl/version/current/html/dualsp01.html","timestamp":"2024-11-06T19:01:27Z","content_type":"text/html","content_length":"236961","record_id":"<urn:uuid:51a8c744-ae3c-44a9-a5f6-19a7ba382789>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00378.warc.gz"}
Go Figure!
As a member of Teachers Pay Teachers, I often read and share on their Seller's Forum. As the Common Core State Standards (CCSS) become more "common", many teachers are asking about things being omitted or totally left out. Let's start this discussion with what the Common Core is. CCSS is a state-led effort coordinated by the National Governors Association and the Council of Chief State School Officers. The standards establish common goals for reading, writing and math skills that students should develop from grades K-12. Although classroom curriculum is left to the states (which actually had no input into the process), the standards emphasize critical thinking and problem solving and encourage thinking in-depth about fewer topics.
With that said, this is the way I perceive these standards. When I started teaching (I have been at it for 30+ years), the curriculum was a nice, juicy apple. Included were subjects like spelling, geography, history, and cursive writing. In addition, areas such as effort and behavior were evaluated. I can't remember ever giving a state or national test, but I did have to teach art, music and P.E. The majority of the children went home for lunch where some adult was waiting for them. Later, the arts were added to the curriculum and qualified teachers were hired to teach art, music and P.E. (Thank goodness!)
Then came the slicing of the apple. One test was introduced and given each year. (We gave the ITBS.) Objectives were written that were different from the textbook, and more children were staying at school for lunch. As time progressed, additional slices of the apple were removed as "before and after" school programs became necessary for children and free lunches became commonplace. In addition, more than one test was required because now the district and the state wanted data. History and geography became social studies, and phonics and spelling were replaced with the whole language approach. I was directly a part of our district's benchmark test writing project where much money and time were devoted to the test that would reveal all, make teachers better, and students smarter. Of course, none of those things occurred, and the money wasted could have been better spent on teachers who really make the difference in the classroom. (By the way, all of those assessments are now gone.)
Many advocate that the CCSS will become a tool that can successfully turn around education. Let's remember that these standards are merely what each grade level is to master. Think of the CCSS as the core of an apple; there is no "meat" on the core, just the leftover part of the apple. The basics are there, but teachers need to add the meat. Will they, especially since the high-stakes tests that are imminent will most likely only test the core? My question is: how much testing will be required, how often, and at what expense in money and time? And who will pay the price?
I did some of my own reading of the CCSS, particularly those for the "key" grades of K-3. Yes, the CCSS requires the multiplication tables be taught through 10, but does that mean a teacher shouldn't go to 12? I personally want all of my algebra students to know the perfect squares through 25 because it makes finding the square root so much easier. Since the multiplication fact is in the student's head, no calculator is required! I also observed that money is not mentioned anywhere in the common core for grades K-3 except in second grade.
Covering that standard is going to be a daunting task for 2nd grade teachers if students have never seen it before. (I did find the money standard in the 4th, 6th, and 7th grades, and after that, money was considered Consumer Science.) In addition, there is no standard for patterns in kindergarten, which I find quite disturbing since all math is based on patterns. Time and introductory place value have also been deleted. If students do not get these basic concepts in kindergarten, it is obvious they cannot grasp the more complex ones in later grades.
As I read the many different responses on the TPT Forum as well as various articles about the Common Core, I realized that many teachers are viewing them as the all-in-all. If that is all that will be taught, education is in BIG trouble. I would suggest reading a rather thought-provoking and eye-opening article by Carol Burris, principal of South Side High School in New York. She was named the 2010 New York State Outstanding Educator by the School Administrators Association of New York State, and she is co-author of Opening the Common Core.
My primary concern is that the Common Core will become so focused and fixated on a limited number of standards that little will be left of well-rounded education except a very inadequate and flimsy core. If a teacher only follows the Common Core and nothing more, students will miss important building blocks in between. I believe education is an exciting and engaging lifetime journey, not a final destination or a binding contract with any government. I trust and hope parents (families) are the constant in this equation (a math word!) while schools and teachers are the variable (both will change over time). How can any test adequately measure that? Here is a ten-minute video which explains the Common Core.
I have decided to post (this is an updated previous post from 2011) some questions about zero that my college students have asked me in class. I will say this, "Zero can surely give you a severe headache unless one knows its properties."
Question #1 - Do you know why zero is an even number?
All mathematics is based on patterns. Because of this, I know that an even plus an even number will always give me an even answer; an odd number added to an odd also gives me an even answer, and an odd number plus an even gives me an odd answer. In other words:
E + E = E
O + O = E
O + E = O
The numbers 4 and -4 are both even numbers. If we add them together, their sum is zero. Based on the math pattern of E + E = E, then zero has to be even as well. If we substitute zero in other problems such as 1 + 0 = 1, it fits the O + E = O rule just as 2 + 0 = 2 fits the E + E = E rule. In Algebra, even numbers can be written as 2 x n where n is an integer. Odd numbers can be written in the form of 2 x n + 1. If we have n represent 0, then 2 x n = 0 (even) and 2 x n + 1 = 1 (odd).
I say all of this to relate an actual incident that occurred in my classroom. I wrote the number 934 on the whiteboard, and commented that since it was even it was divisible by 2. One of my students was perplexed because he did not understand how 934 could be even when it contained two odd digits and only one even digit. He actually thought that all the digits of a number had to be even for the number to be even. Funny? Not really! Amazingly, he had made it through 12 years of school without understanding place value as it relates to even numbers.
Unfortunately, I had assumed that everyone (especially my college students) knew what an even number was. I no longer make assumptions about students and their math knowledge!
Question #2 - Is zero positive or negative?
The definition for positive numbers is all numbers greater than zero, and the definition for negative numbers is all numbers less than zero. Therefore zero can be neither positive nor negative.
Question #3 - Is zero a prime or composite number?
To be a prime number, a number must have only two positive divisors, itself and one. Zero has an infinite number of divisors so it is not prime. A composite number can be written as a product of two factors, neither of which is itself. Since zero cannot be written as a product of two factors without including itself, zero, it is not composite.
Question #4 - Why can't you divide by zero?
I love this question. Back in the dark ages when I asked it, I was always told, "Because I said so." Being an inquisitive student was not a blessing when I was growing up. Math teachers who knew all did not want to be questioned!!!! Anyway, I don't mind the question, and here is my practical answer. First, we must understand division. Division means putting or separating a number of items into a number of specific groups or sets. When you divide, such as in the problem 12 divided by 2, you are really putting 12 things into two groups or two sets. Therefore, if you have the problem 8 divided by 0, it is impossible to put eight things into no groups. You cannot put something into nothing!
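If you like to check these facts with a computer, a couple of lines of Python will do; the snippet below is just a sketch using the 2 x n parity test and Python's built-in error for division by zero:

# Even means "2 x n for some integer n": k is even exactly when k % 2 == 0.
for k in [-4, 0, 1, 934]:
    print(k, "is even" if k % 2 == 0 else "is odd")  # 0 and 934 come out even

# Division by zero: putting 8 things into no groups is impossible.
try:
    print(8 / 0)
except ZeroDivisionError as reason:
    print("8 divided by 0 is undefined:", reason)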
{"url":"https://gofigurewithscipi.blogspot.com/2014/06/","timestamp":"2024-11-07T23:13:34Z","content_type":"application/xhtml+xml","content_length":"157071","record_id":"<urn:uuid:c2f21f05-7506-4480-a83c-78d4c103dc48>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00485.warc.gz"}
• INSTANCE: Collection C of nets, i.e., 3-sets of gates.
• SOLUTION: Wires following rectilinear paths connecting the gates in each net.
• MEASURE: The largest number of wires in the same channel between two gates in the array.
• Good News: Admits a [414].
• Comment: Approximable within PX if
Viggo Kann
{"url":"https://www.csc.kth.se/~viggo/wwwcompendium/node116.html","timestamp":"2024-11-04T11:27:56Z","content_type":"text/html","content_length":"4338","record_id":"<urn:uuid:f5d5af2c-e187-44eb-b21d-2a77194a8f3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00277.warc.gz"}
Pyramid scheme model for consumption rebate frauds
School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190, China
Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences, Beijing 100190, China
Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100190, China
† Corresponding author. E-mail:
Project supported by the National Natural Science Foundation of China (Grant Nos. 71771204 and 91546201).
There are various types of pyramid schemes that have inflicted or are inflicting losses on many people in the world. We propose a pyramid scheme model which has the principal characteristics of many pyramid schemes that have appeared in recent years: promising high returns, rewarding the participants for recruiting the next generation of participants, and the organizer takes all of the money away when they find that the money from the new participants is not enough to pay the previous participants interest and rewards. We assume that the pyramid scheme is carried out in the tree network, Erdős–Rényi (ER) random network, Strogatz–Watts (SW) small-world network, or Barabási–Albert (BA) scale-free network. We then give the analytical results of the generations that the pyramid scheme can last in these cases. We also use our model to analyze a pyramid scheme in the real world and we find that the connections between participants in the pyramid scheme may constitute a SW small-world network.
1. Introduction
On June 18, 2017, China Central Television’s (CCTV’s) ‘Focus’ program, which is one of the most-watched shows on CCTV, exposed a so-called consumption rebate platform named “RenRenGongYi”, which was actually a pyramid scheme.^[1] The alleged operation pattern of RenRenGongYi was to let the franchisees give a fixed proportion of the customers’ consumption to the platform to form a fund pool, such as 24%, 12%, or 6%. The platform then returned the money to the customers and franchisees by instalments. However, most transactions were fabricated in RenRenGongYi, and the platform in fact lured the participants by promising high returns to invest in the fund pool and rewarded the participants who attracted the next generation of participants. Most of the franchisees were fictitious because most transactions were fabricated, and the participants of the pyramid scheme were mainly consumers. For example, if a participant fabricated a 100 yuan consumption (the participant was both the franchisee and the consumer) and gave 24 (or 12 or 6) yuan to the platform, then he/she could gradually get a consumption rebate of nearly 100 yuan, which was more than 4 (or 8 or 16) times the principal. In less than a month, 5267 franchisees and 48505 consumers were involved in the platform. From the time that the project officially opened on December 1, 2016 to its crash at the end of the month, the amount absorbed reached 1 billion yuan. Besides RenRenGongYi, there are many more platforms of this form, and the Chinese government has warned about the risk of consumption rebate platforms.^[2] There are many other forms of pyramid schemes, such as IGOFX^[3] that originated in Malaysia and MMM^[4] that originated in Russia.
Pyramid schemes are different from ordinary Ponzi schemes, which are named after the eponymous fraudster Charles Ponzi, though in both Ponzi and pyramid schemes, existing investors are paid by the money from new investors.^[5] In a Ponzi scheme, the participants believe they are actually earning returns from their investment, while in a pyramid scheme, the participants are aware that they are earning money by finding new participants—they become part of the scheme. Pyramid schemes and Ponzi schemes have been researched from many different perspectives. Joseph Gastwirth proposed a probability model of a pyramid scheme and concluded that the vast majority of participants have less than a 10% chance of recouping their initial investment.^[6] Stimulated by the Madoff investment scandal in 2008, Marc Artzrouni put forward a first-order linear differential equation to describe Ponzi schemes.^[7] The model that was developed by Marc Artzrouni depends on the following parameters: a promised but unrealistic interest rate, the actual realized nominal interest rate, the rate at which new deposits are accumulated, and the withdrawal rate. Marc Artzrouni gave the conditions on these parameters for the Ponzi scheme to be solvent or to collapse. The model was fitted to data available on Charles Ponzi’s 1920 eponymous scheme and illustrated with a philanthropic version of the scheme. Tyler Moore et al. conducted an empirical analysis of high yield investment programs but did not put forward a mathematical model.^[8] A high yield investment program (HYIP) is considered to be an online Ponzi scheme because it pays outrageous levels of interest using money from new investors. In contrast to traditional Ponzi schemes, there are many sophisticated investors who understand the fraud but who hope to profit by joining early, while the investors cannot withdraw their money at any time. Peihua Fu et al. researched some problems when the Ponzi scheme diffuses in complex networks.^[9,10] Since the introduction of random networks, small-world networks, and scale-free networks, complex networks have attracted great attention from researchers in various fields, such as management and statistical physics. Research shows that many natural and social phenomena have small-world or scale-free characteristics. At present, complex networks have been successfully applied to improve transportation networks,^[11,28] analyze innovative networks,^[28] and research the spread of infectious diseases and rumors.^[28,28] Most of the existing literature focuses on the research of Ponzi schemes, while the research on pyramid schemes is relatively scarce. To the best of our knowledge, no model has yet been put forward for pyramid schemes of the consumption rebate type. To understand and explain the operation mechanism and characteristics of the pyramid schemes of consumption rebate type, provide ideas for monitoring this kind of pyramid scheme, and offer a basis for further research, we propose a pyramid scheme model which has the principal characteristics of many pyramid schemes that have appeared in recent years: promising high returns, rewarding the participants for recruiting the next generation of participants, and the organizer will take all the money away when he finds the money from the new participants is not enough to pay the previous participants interest and rewards.
We assume that the pyramid scheme is carried out in the tree network, Erdős–Rényi (ER) random network, Watts–Strogatz (WS) small-world network, or Barabási–Albert (BA) scale-free network, respectively, and then give the analytical results of how many generations the pyramid scheme can last in these cases. We also use our model to analyze a pyramid scheme in the real world, and we find that the connections between participants in the pyramid scheme may constitute a WS small-world network.

This paper is organized as follows. In Section 2, we briefly introduce the tree network, the random network, the small-world network, and the scale-free network. In Section 3, we propose our pyramid scheme model. In Section 4, we analyze a pyramid scheme in the real world. Some discussions and conclusions are given in Section 5.

2. Networks

2.1. Tree network

Tree networks are connected acyclic graphs. The word tree suggests branching out from a root and never completing a cycle. Tree networks are hierarchical, and each node can have an arbitrary number of child nodes. Trees as graphs have many applications, especially in data storage, searching, and communication.^[28]

2.2. Random network

A random network, also known as a stochastic network or stochastic graph, is a complex network created by a stochastic process. The most typical random network is the ER model proposed by Paul Erdős and Alfréd Rényi.^[28] The ER model is based on a natural construction method: suppose there are n nodes, and assume that the probability of a connection between each pair of nodes is a constant p. The network constructed in this way is an ER model network. Scientists first used this model to explain real-life networks.

2.3. Small-world network

The original small-world model was first proposed by Watts and Strogatz, and it is the most classical model of a small-world network; it is called the WS small-world network.^[28] The WS small-world network model can be constructed as follows: take a one-dimensional lattice of L vertices with connections or bonds between nearest neighbors and periodic boundary conditions (the lattice is a ring), then go through each of the bonds in turn and independently, with some probability ϕ, "rewire" it. Rewiring in this context means shifting one end of the bond to a new vertex chosen uniformly at random from the whole lattice, with the exception that no two vertices can have more than one bond running between them and no vertex can be connected by a bond to itself. The most striking feature of small-world networks is that most nodes are not neighbors of one another, but the neighbors of any given node are likely to be neighbors of each other, and most nodes can be reached from every other node by a small number of hops or steps. It has been found that many networks in real life have the small-world property, such as social networks,^[28] the connections of neural networks,^[28] and the bond structure of long macromolecules in chemistry.^[28]

2.4. Scale-free network

A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. The first model of a scale-free network was proposed by Barabási and Albert, and it is called the BA scale-free network.^[28] The BA model describes a growing open system: starting from a group of core nodes, new nodes are constantly added to the system.
The two basic assumptions of the BA scale-free network model are as follows: (i) starting from m[0] nodes, a new node is added at each time step, and m nodes among the existing nodes are selected to be connected to the new node (m ≤ m[0]); (ii) the probability that the new node is connected to an existing node i satisfies Π[i] = k[i]/∑[j]k[j], where k[i] denotes the degree of the node i and N denotes the number of nodes (the sum runs over all N existing nodes). In this way, when enough new nodes have been added, the network generated by the model reaches a stable evolution state, and the degree distribution follows a power-law distribution. In Ref. [28], it was shown that the degree distribution of many real-world networks approximately or exactly follows a power-law distribution.

3. The model

3.1. Assumptions

We consider a simple pyramid scheme that has the basic features of many pyramid schemes in the real world, especially the consumption rebate platforms. First, it has an organizer who attracts participants by promising a high rate of return compared to the normal interest rate. Besides the promised return, any participant will be rewarded by the organizer with a proportion of the total investment of the participants he or she directly attracted; thus the early participants will be motivated enough to recruit the next-generation participants, and the next-generation participants will do the same thing in order to get more returns. Secondly, we assume all the participants at the current generation are recruited by the participants at the previous generation, and the organizer pays the participants at the previous generations the interest and rewards when all possible participants at the current generation have joined the scheme. The third assumption is that the organizer will take all the money away when he finds that the money from the new participants is not enough to pay the previous participants interest and rewards. To simplify the model, we also assume all the participants invest the same amount of money and invest only once. Figure 1 is a schematic diagram of the pyramid scheme, which has one organizer and two generations of participants. Based on these assumptions, we discuss below how the pyramid scheme spreads in the tree network, random network, small-world network, and scale-free network.

3.2. Tree network case

If the pyramid scheme expands in the form of a tree network that has a constant branching coefficient α and the root node of the tree network represents the organizer, we can simply write the number of participants at the g-th generation as n[g] = n[1]α^(g−1) and the total amount of money entering the pyramid scheme at the g-th generation as m·n[g], where n[1] is the number of participants at the first generation and m is the amount of money that every participant invests. For simplification, we assume n[1] = α and m = 1, so that n[g] = α^g. We suppose the number of all potential participants is N in this case. Removing the interest and rewards, the relationship between the net inflow of money M of the pyramid scheme and the generation g, when all possible participants at the g-th generation have joined the scheme, can be given by Eq. (1), where r[0] is the promised rate of return of the organizer, and r[1] is the ratio of the money rewarded to a participant to the total investment of the participants he or she directly recruited. Normally in real pyramid scheme cases, r[0] and r[1] are between 0% and 50%. The first term of Eq.
(1) represents the investment of all the participants, the second term represents the interest paid to the participants before generation g, and the third term represents the rewards paid to the recruiters of participants at the g-th generation. Notice that in our pyramid scheme, the participants at the g-th generation are all recruited by the participants at the (g−1)-th generation. The second term of Eq. (1) is a sum of geometric sequences; after summing them up, equation (1) can be rewritten as Eq. (2). Through Eq. (2) we can find that if the branching coefficient α satisfies the condition of Eq. (3), the inflow of money of the pyramid scheme is always positive, so the pyramid scheme would continue forever under these circumstances. However, the potential participants are limited to N, and the pyramid scheme will stop eventually. The maximum generation G of the pyramid scheme is given by Eq. (4), where ⌊x⌋ denotes the integer part of x. At the G-th generation, all the potential participants have joined the pyramid scheme, and the organizer will take away all the money and not pay the interest and rewards any more. We can write the final income of the pyramid as Eq. (5), and the income of the participants at the i-th generation as Eq. (6).

Figure 2(a) shows the analytical result and the simulative result of the maximum generation G when the branching coefficient α changes, and we take the parameters N = 10000, r[0] = 0.1, r[1] = 0.1. Figure 2(b) shows the analytical result and the simulative result of the maximum generation G when the number of possible participants N changes, and we take the parameters r[0] = 0.1, r[1] = 0.1, with α fixed. Figure 2 illustrates intuitively that in the tree network case, if the other conditions of the pyramid scheme remain unchanged, the larger the branching coefficient (that is, the more new participants each person recruits), the fewer generations the pyramid scheme can last. Meanwhile, when other conditions remain unchanged, the larger the number of potential participants, the more generations the pyramid scheme can sustain, but every new generation needs more participants and this growth of new participants is exponential.

3.3. Random network case

If the pyramid scheme takes place in an ER random network that has an average degree k and N nodes, we assume the organizer is a random node in the network and the other nodes represent the potential participants. The organizer recruits the potential participants nearest to him as the first generation of participants, the first generation participants recruit the potential participants nearest to them as the second generation of participants, and so on. So the generation of any participant in the pyramid scheme is given by the shortest path length to the node representing the organizer. Reference [28] has given approximate analytical results for the distribution of shortest path lengths in ER random networks: the number of nodes at the i-th generation is about k^i while i is below the average shortest path length, and all the nodes are included in the pyramid scheme beyond that. Therefore, the pyramid scheme in the ER random network is approximately the same as the tree network case above, and the difference is that the branching coefficient α should be replaced by the average degree k. First, as in the tree network case, r[0], r[1], and k should satisfy the condition of Eq. (3) with α replaced by k. The approximate maximum generation G of the pyramid scheme in this case is given by Eq. (8). In addition, we can also write approximate expressions for the organizer's and participants' income, which have the same form as Eq. (6); we omit them here.
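To make the generation-by-generation bookkeeping concrete, here is a minimal Python sketch of the tree-network case (the random-network case is the same with α replaced by k). The payout schedule is an assumption on our part, since Eq. (1) is not reproduced above: we take n[1] = α, m = 1, interest r[0] paid once per new generation to every earlier participant, and recruiter rewards of r[1] times the new generation's investment.

```python
# Minimal sketch of the tree-network pyramid scheme (assumed payout model).
def net_inflow(alpha, r0, r1, g):
    """Net money held by the scheme after generation g has joined."""
    M = 0.0
    joined = 0.0              # participants already in the scheme
    for i in range(1, g + 1):
        n_i = alpha ** i      # size of generation i (n_1 = alpha)
        M += n_i              # new investment
        M -= r0 * joined      # interest owed to earlier generations
        M -= r1 * n_i         # rewards to the recruiters
        joined += n_i
    return M

def max_generation(alpha, r0, r1, N):
    """Last generation before the pool of N potential participants is
    exhausted or the net inflow turns negative (organizer flees)."""
    joined, g = 0.0, 0
    while True:
        n_next = alpha ** (g + 1)
        if joined + n_next > N or net_inflow(alpha, r0, r1, g + 1) < 0:
            return g
        joined += n_next
        g += 1

print(max_generation(alpha=2.0, r0=0.1, r1=0.1, N=10000))  # 12 generations
```

With α = 2 and N = 10000 the pool is exhausted after 12 generations, matching the exponential growth of new participants described above.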
Figure 3(a) shows the analytical result and the simulative result of the maximum generation G[ER] when the average degree k changes, and we take the parameters N = 1000, r[0] = 0.1, r[1] = 0.1. Figure 3(b) shows the analytical result and the simulative result of the maximum generation G[ER] when the number of possible participants N changes, and we take the parameters k = 4, r[0] = 0.1, r[1] = 0.1. The simulative results in the figures are averaged over 100 simulations. From Fig. 3, we can find that in the ER random network case, the relationship between the maximum generation G[ER] and the mean degree k, and the relationship between G[ER] and N, are similar to those in the tree network case, where the mean degree k represents the average number of participants that each participant can recruit. We can also find that within the range of parameters we have chosen, the analytical results and simulative results are very close.

3.4. Small-world network case

Now we consider a pyramid scheme carried out in a WS small-world network; to some extent this case is similar to the case of the ER random network. We again randomly choose a node as the organizer, the other nodes represent the potential participants, and r[0] and r[1] represent the interest rate and reward ratio, respectively. The generation of any participant in the pyramid scheme is the shortest path length to the node representing the organizer. Reference [28] pointed out that the number of nodes increases exponentially with the shortest path length when the number of nodes is infinite. The approximate surface area A(r) of a sphere of radius r on the WS small-world network is given in Ref. [28], where ϕ is the rewiring probability and k is the degree of the corresponding regular graph. Changing r to g, we can obtain the approximate number of participants at the g-th generation. Because of the exponential form of A(g), we can deal with this case just like the cases of the tree network and the ER random network. The branching coefficient α should be replaced by the corresponding exponential growth rate, and the condition of Eq. (10) should be satisfied. If the number of nodes is finite, then the number of nodes reaches its maximum when the distance from the node to the organizer is near the average shortest path length. If the distance is greater than the average shortest path length, the number of nodes quickly reduces to 0, so it can be approximately considered that the maximum generation G is close to the average shortest path length. The average path length of the WS small-world network is given in Ref. [28]. The number of nodes at the average shortest path length from the node representing the organizer is the largest. So we can infer that the maximum generation G[SW] of the pyramid scheme is approximately this average shortest path length.^[28] In the simulation, we find that the values of r[0] and r[1] are very important. Generally speaking, the greater the values of r[0] and r[1] satisfying Eq. (10) are, the closer the simulation results and numerical results are. This happens because when the values of r[0] and r[1] are larger, the pyramid scheme terminates easily once the number of generations exceeds the average shortest path length. Figure 4(a) shows the analytical result and the simulative result of the maximum generation G[SW] when the rewiring probability ϕ changes, and we take the parameters N = 1000, K = 3, r[0] = 0.2, r[1] = 0.2.
Figure 4(b) shows the analytical result and the simulative result of the maximum generation G[SW] when the number of possible participants N changes, and we take the parameters K = 3, ϕ = 0.1, r[0] = 0.2, r[1] = 0.2. The simulative results in the figures are averaged over 100 simulations. In Fig. 4, we find that within the range of parameters we selected, the maximum generation G[SW] of the pyramid scheme is not very sensitive to the rewiring probability ϕ or to the number of potential participants, and the analytical results are basically in accordance with the simulative results.

3.5. Scale-free network case

If the pyramid scheme expands in a BA scale-free network, then, similar to the cases of the ER random network and WS small-world network above, we also randomly choose a node as the organizer and the other nodes represent the potential participants. The organizer recruits participants and the participants recruit the next generation of participants through the network connections. To ensure positive inflows, a condition analogous to Eq. (3) must be satisfied, where n(g) represents the number of participants at the g-th generation, and n(g+1) represents the number of participants at the (g+1)-th generation. The distribution of shortest path lengths approximates the normal distribution, and the position corresponding to the highest point of the normal distribution is the average shortest path length.^[28] The average path length of the BA scale-free network is given in Ref. [28]. Before the peak, the number of participants per generation grows faster than exponentially. But after that, the number of participants per generation declines rapidly, so the condition can no longer be satisfied. So we can infer that the maximum generation G[BA] is close to the average shortest path length. Figure 5 shows the analytical result and the simulative result of the maximum generation G[BA] when the number of possible participants N changes. We take the parameters r[0] = 0.2, r[1] = 0.2, and the simulative result is averaged over 100 simulations. In Fig. 5, we can find that in scale-free networks the maximum generation G[BA] is not very sensitive to the number of potential participants. The analytical results basically reflect this characteristic.

4. A pyramid scheme in the real world

Although real cases of pyramid schemes are easy to find in news reports, there are few cases that give details of the number of people involved and the number of pyramid generations. Usually, when the organizer of the pyramid scheme disappears, the participants who suffered losses report the case to the police, who then investigate the case. On July 23, 2018, China News Network's Guangzhou Station reported a pyramid scheme that had 75663 account numbers and 46 generations, and the pyramid scheme had amassed 76 million yuan in three months.^[28] This is the same type of pyramid scheme as described in the introduction. Using the analysis in Section 3, we assume that the pyramid scheme is carried out in the tree network, ER random network, WS small-world network, or BA scale-free network, and then check which network can describe the real-world pyramid scheme well. We assume that one account number represents a participant. If this real pyramid scheme expands in a tree network, we can calculate the tree network's branching coefficient α from the total number of participants and the number of generations; a numerical sketch is given below. This means that, on average, fewer than two participants are recruited by each participant. However, we cannot know more about the connections between the participants, except the branching coefficient.
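A small numerical sketch (ours, not from the paper) backs out α for the real case, assuming the tree model above with n[1] = α and the organizer excluded from the count, so that α + α² + ⋯ + α⁴⁶ = 75663:

```python
# Solve sum_{i=1}^{46} alpha^i = 75663 for alpha by bisection.
def total(alpha, G=46):
    return sum(alpha ** i for i in range(1, G + 1))

lo, hi = 1.0001, 3.0
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    if total(mid) < 75663:
        lo = mid
    else:
        hi = mid
print(round(lo, 3))  # ~1.231: fewer than two recruits per participant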
If this real pyramid scheme spreads in an ER random network, we can calculate the average degree through Eq. (8). Each node is then connected to 1.28 nodes on average, and the connection probability in the ER random network is about k/N ≈ 1.7 × 10⁻⁵, which is very small, so isolated nodes and nodes with degree 1 easily appear in the network. Although this case is similar to that of the tree network, the branching coefficient in the random network is not stable, and it is easy for the pyramid scheme to end if equation (3) is not satisfied (the minimum of the formula is greater than 1). So we think the pyramid scheme can hardly happen in the ER random network. If this real pyramid scheme is carried out in a BA scale-free network, then, through the analysis and simulation, we find that developing to 46 generations needs far more than 75663 participants. Therefore, the connections between participants cannot form a BA scale-free network. If this real pyramid scheme takes place in a WS small-world network and accords with all our assumptions, then we can find a simulative result that fits the result of the real pyramid scheme. The parameters we select are N = 100000, ϕ = 0.02, K = 4, r[0] = 0.1, r[1] = 0.1, and each participant invests 23500 yuan. The simulated pyramid scheme has 74652 participants, develops to 46 generations, and the fund pool of the pyramid is about 76 million yuan. The simulation results are in good agreement with the real pyramid scheme. Figures 6(a) and 6(b) show the cumulative number of participants N[cum], the number of participants N[g] in each generation, and the cumulative money M[cum] changing over generation g in the simulated pyramid scheme. Figure 6 shows that the number of participants and the amount of accumulated money of the pyramid scheme grow slowly in the initial stage and explosively in the later stage. Once the growth rate slows down, the amount of the pyramid scheme's accumulated money soon reaches a peak and the organizer escapes. The rewiring probability in the simulation is 0.02, which accords with the actual situation and means that participants tend to recruit new participants from people they know. In fact, according to our investigation and many news reports, such pyramid frauds always arise in small cities, and most of the participants recruit new participants from people they know. As the generations go on, the network constituted by all participants has the properties of a small world: clustering and the presence of tightly knit groups, which is similar to interpersonal networks. Although our model has been simplified and approximated, it is enlightening for explaining the real case. Through this simulation analysis, we can speculate that the connections between participants in the real case may constitute a WS small-world network.

5. Conclusion

In summary, we have proposed a pyramid scheme model that has the principal characteristics of many of the pyramid schemes that have appeared in recent years: promising high returns and rewarding the participants for attracting the next generation of participants. Assuming that the pyramid scheme spreads in the tree network, ER random network, WS small-world network, or BA scale-free network, we give the conditions for the continuity of the pyramid scheme, and the analytical results of how many generations the pyramid scheme can last if the organizer of the pyramid scheme takes all the money away when he/she finds that the new money is not enough to pay interest and incentives.
We also use our model to analyze a pyramid scheme in the real world, and the results show that the connections between participants in the pyramid may have the small-world property. Our work is helpful for understanding the operation mechanism and characteristics of pyramid schemes of the consumption rebate type. Our model may also apply to some current illegal high-interest loans, if these illegal projects promise a high interest rate and reward the investors who encourage others to invest in such projects, while the money accumulated is not actually invested in any real projects. Our work shows that pyramid schemes of the consumption rebate type are not easily detected by regulators because of the small amount of funds and the small number of participants accumulated in the initial stage. By the time funds and participants have grown rapidly, this kind of pyramid fraud is usually near its end, and the organizers have often already fled. So for regulators, it is better to nip such platforms in the bud to avoid more people suffering losses. In addition, to some extent, our work provides a basis for further study of such frauds. For example, we will further consider how the participants' beliefs about always having enough new participants affect the operation of these frauds.

[1] http://tv.cctv.com/2017/06/18/VIDE8gtfpFkpiBv2QYnNuGIF170618.shtml [2019-3-15]
[2] http://www.mps.gov.cn/n2253534/n2253543/c6108362/content.html [2019-3-15]
[3] https://news.china.com/news100/11038989/20170707/30931598_all.html [2019-3-15]
[4] https://en.wikipedia.org/wiki/MMM_(Ponzi_scheme_company) [2019-3-15]
[5] http://www.51voa.com/VOA_Special_English/pyramid-vs-ponzi-69196.html [2019-3-15]
[6] Gastwirth J L 1977 31 79 https://www.jstor.org/stable/2683046
[7] Artzrouni M 2009 58 190 https://www.researchgate.net/publication/46490932_The_mathematics_of_Ponzi_schemes
[8] Moore T Han J Clayton R 2012 41
[9] Zhu A Fu P Zhang Q Chen Z 2017 479 128 https://www.sciencedirect.com/science/article/pii/S0378437117302303
[10] Fu P Zhu A Ni H Zhao X Li X 2017 490 S0378437117308087 https://www.researchgate.net/publication/
[11] Háznagy A Fi I London A Nemeth T 2015 371
[12] Liu J Xiong Q Y Wang K Shi W R 2015 24 076401 https://cpb.iphy.ac.cn/EN/abstract/abstract64721.shtml
[13] Brantle T F Fallah M 2007 540
[14] Pastor-Satorras R Vespignani A 2001 86 3200 https://www.researchgate.net/publication/12044107_Epidemic_Spreading_in_Scale-Free_Networks
[15] Moreno Y Nekovee M Pacheco A F 2004 69 066130 https://www.researchgate.net/publication/8463949_Dynamics_of_Rumor_Spreading_in_Complex_Networks
[16] West D B 2001 30 73 https://www.researchgate.net/publication/221933085_Introduction_To_Graph_Theory
[17] Erdős P Rényi A 2011 286 257 http://www.researchgate.net/publication/216636436_On_the_evolution_of_random_graphs
[18] Watts D J Strogatz S H 1998 393 440 http://www.nature.com/nature/journal/v393/n6684/abs/393440a0.html
[19] Grossman J W 2002 31 74 http://www.researchgate.net/publication/220327252_Small_Worlds_The_Dynamics_of_Networks_between_Order_and_Randomness
[20] Santos V M L D Moreira F G D Longo R L 2004 390 157 http://www.researchgate.net/publication/
[21] Barabási A L Albert R 1999 286 509 https://www.researchgate.net/publication/12779869_Albert_R_Emergence_of_Scaling_in_Random_Networks_Science_286_509-512
[22] Albert R Barabási A L 2001 74 12 https://www.researchgate.net/publication/1827666_Statistical_Mechanics_Of_Complex_Networks
[23] Katzav E Nitzan M ben-Avraham D Krapivsky
P L Kühn R Ross N Biham O 2015 111 26006 https://iopscience.iop.org/article/10.1209/0295-5075/111/26006
[24] Newman M E Watts D J 1999 60 7332 https://www.ncbi.nlm.nih.gov/pubmed/11970678
[25] Barrat A Weigt M 1999 13 547 https://link.springer.com/article/10.1007%2Fs100510050067
[26] Ventrella A V Piro G Grieco L A 2018 12 3869 https://ieeexplore.ieee.org/document/8344557
[27] Cohen R Havlin S 2003 90 058701 https://www.researchgate.net/publication/10860135_Scale-Free_Networks_Are_Ultrasmall
[28] http://www.gd.chinanews.com/2018/2018-07-24/2/397998.shtml [2019-3-15]
{"url":"https://cpb.iphy.ac.cn/article/2019/1992/cpb_28_7_078901.html","timestamp":"2024-11-05T20:06:54Z","content_type":"text/html","content_length":"83384","record_id":"<urn:uuid:f8497f4f-1b76-4b17-996a-f85e7d552ec0>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00028.warc.gz"}
The Last Total Solar Eclipse…Ever-Updated

One year ago, I posted a fun problem of predicting when we will have the very last total solar eclipse viewable from Earth. It was a fun calculation to do, and the answer seemed to be 700 million years from now, but I have decided to revisit it with an important new feature added: the slow but steady evolution of the sun's diameter. For educators, you can visit the Desmos module that Luke Henke and I put together for his students.

The apparent lunar diameter during a total solar eclipse depends on whether the moon is at perigee or apogee, or at some intermediate distance from Earth. This is represented by the two red curved lines and the red area in between them. The upper red line is the angular diameter viewed from Earth when the moon is at perigee (closest to Earth) and will have the largest possible diameter. The lower red curve is the moon's angular diameter at apogee (farthest from Earth) when its apparent diameter will be the smallest possible. As I mentioned in the previous posting, these two curves will slowly drift to smaller values because the Moon is moving away from Earth at about 3 cm per year. Using the best current models for lunar orbit evolution, these curves will have the shapes shown in the above graph and can be approximately modeled by the quadratic equations:

Perigee: Diameter = T^2 − 27T + 2010 arcseconds
Apogee: Diameter = T^2 − 23T + 1765 arcseconds

where T is the time since the present in multiples of 100 million years, so a time 300 million years ago is T = −3, and a time 500 million years in the future is T = +5.

The blue region in the graph shows the change in the diameter of the Sun and is bounded above by its apparent diameter at perihelion (Earth closest to Sun) and below by its farthest distance called aphelion. This is a rather narrow band of possible angular sizes, and the one of interest will depend on where Earth is in its orbit around the Sun AND the fact that the elliptical orbit of Earth is slowly rotating within the plane of its orbit so that at the equinoxes when eclipses can occur, the Sun will vary in distance between its perihelion and aphelion distances over the course of 100,000 years or so. We can't really predict exactly where the Earth will be between these limits so our prediction will be uncertain by at least 100,000 years. With any luck, however, we can estimate the 'date' to within a few million years.

Now in previous calculations it was assumed that the physical diameter of the Sun remained constant and only the Earth-Sun distance affected the angular diameter of the Sun. In fact, our Sun is an evolving star whose physical diameter is slowly increasing due to its evolution 'off the Main Sequence'. Stellar evolution models can determine how the Sun's radius changes. The figure below comes from the Yonsei-Yale theoretical models by Kim et al. 2002 (Astrophysical Journal Supplement, v.143, p.499) and Yi et al. 2003 (Astrophysical Journal Supplement, v.144, p.259). The blue line shows that between 1 billion years ago and today, the solar radius has increased by about 5%. We can approximate this angular diameter change using the two linear equations:

Perihelion: Diameter = 18T + 1973 arcseconds
Aphelion: Diameter = 17T + 1908 arcseconds

where T is the time since the present in multiples of 100 million years, so a time 300 million years ago is T = −3, and a time 500 million years in the future is T = +5.

When we plot these four equations, we find four intersection points of interest.
They can be found by setting the lunar and solar equations equal to each other and using the Quadratic Formula to solve for T in each of the four possible cases:

Case A: T = 456 million years ago. The angular diameters of the Sun and Moon are 1890 arcseconds. At apogee, this is the smallest angular diameter the Moon can have at the time when the Sun has its largest diameter at perihelion. Before this time, you could have total solar eclipses when the Moon is at apogee. After this time the Moon's diameter is too small for it to block out the large perihelion Sun disk, and from this time forward you only have annular eclipses at apogee.

Case B: T = 330 million years ago and the angular diameters are 1852 arcseconds. At this time, the apogee disk of the Moon, when the Sun disk is smallest at aphelion, just covers the solar disk. Before this time, you could have total solar eclipses even when the Moon was at apogee and the Sun was between its aphelion and perihelion distance. After this time, the lunar disk at apogee was too small to cover even the small aphelion solar disk, and you only get annular eclipses from this time forward.

Case C: T = 86 million years from now and the angular diameters are both 1988 arcseconds. At this time the large disk of the perigee Moon covers the large disk of the perihelion Sun and we get a total solar eclipse. However, before this time, the perigee lunar disk is much larger than the Sun, and although this allows a total solar eclipse to occur, more and more of the corona is covered by the lunar disk until the brightest portions can no longer be seen. After this time, the lunar disk at perigee is smaller than the solar disk between perihelion and aphelion, and we get a mixture of total solar eclipses and annular eclipses.

Case D: T = 246 million years from now and the angular diameters are 1950 arcseconds. The largest lunar disk size at perigee is now as big as the solar disk at aphelion, but after this time, the maximum perigee lunar disk becomes smaller than the solar disk and we only get annular eclipses. This is approximately the last epoch when we can get total solar eclipses regardless of whether the Sun is at aphelion or perihelion, or the Moon is at apogee or perigee. The Sun has evolved so that its disk is always too large for the Moon to ever cover it again, even when the Sun is at its farthest distance from Earth.

The answer to our initial question is that the last total solar eclipse is likely to occur about 246 million years from now when we include the slow increase in the solar diameter due to its evolution as a star.

Once again, if you want to use the Desmos interactive math module to explore this problem, just visit the Solar Eclipses – The Last Total Eclipse? The graphical answers in Desmos will differ from the four above cases due to rounding errors in the Desmos lab, but the results are in close accord with the above analysis solved using quadratic roots.
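For readers who want to check the arithmetic, here is a short Python sketch (my own illustration, using only the fitted equations quoted above) that intersects each lunar curve with each solar line; the roots nearest the present reproduce Cases A through D, and the small offsets from the quoted 86 and 246 million years come from rounding of the fitted coefficients.

```python
import math

# Fitted angular-diameter curves from the post (arcseconds; T in units of
# 100 Myr, negative T is the past).
moon = {"perigee": (1, -27, 2010), "apogee": (1, -23, 1765)}
sun = {"perihelion": (18, 1973), "aphelion": (17, 1908)}

def crossings(moon_coeffs, sun_coeffs):
    a2, a1, a0 = moon_coeffs
    s1, s0 = sun_coeffs
    # Solve a2*T^2 + (a1 - s1)*T + (a0 - s0) = 0 with the quadratic formula.
    b, c = a1 - s1, a0 - s0
    disc = b * b - 4 * a2 * c
    return [(-b - math.sqrt(disc)) / (2 * a2),
            (-b + math.sqrt(disc)) / (2 * a2)]

for mk, mv in moon.items():
    for sk, sv in sun.items():
        t = min(crossings(mv, sv), key=abs)  # root nearest the present
        print(f"{mk} Moon vs {sk} Sun: T = {t:+.2f} (x100 Myr)")
```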
{"url":"https://sten.astronomycafe.net/lasttotaleclipseupdate/","timestamp":"2024-11-07T09:00:21Z","content_type":"text/html","content_length":"51056","record_id":"<urn:uuid:38adc8b9-def1-4834-9dd7-4ed0fa6df514>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00774.warc.gz"}
Strategies for Teaching Layered Networks Classification Tasks

Part of Neural Information Processing Systems 0 (NIPS 1987)

Ben Wittner, John Denker

There is a widespread misconception that the delta-rule is in some sense guaranteed to work on networks without hidden units. As previous authors have mentioned, there is no such guarantee for classification tasks. We will begin by presenting explicit counterexamples illustrating two different interesting ways in which the delta rule can fail. We go on to provide conditions which do guarantee that gradient descent will successfully train networks without hidden units to perform two-category classification tasks. We discuss the generalization of our ideas to networks with hidden units and to multicategory classification tasks.

The Classification Task

Consider networks of the form indicated in figure 1. We discuss various methods for training such a network, that is, for adjusting its weight vector, w. If we call the input v, the output is g(w · v), where g is some function. The classification task we wish to train the network to perform is the following. Given two finite sets of vectors, F1 and F2, output a number greater than zero when a vector in F1 is input, and output a number less than zero when a vector in F2 is input. Without significant loss of generality, we assume that g is odd (i.e., g(−s) = −g(s)). In that case, the task can be reformulated as follows. Define^2

F := F1 ∪ {−v such that v ∈ F2}

and output a number greater than zero when a vector in F is input. The former formulation is more natural in some sense, but the latter formulation is somewhat more convenient for analysis and is the one we use. We call vectors in F training vectors.

A Class of Gradient Descent Algorithms

We denote the solution set by

W := {w such that g(w · v) > 0 for all v ∈ F},

1 Currently at NYNEX Science and Technology, 500 Westchester Ave., White Plains, NY 10604
2 We use both A := B and B =: A to denote "A is by definition B".

© American Institute of Physics 1988
{"url":"https://proceedings.nips.cc/paper_files/paper/1987/hash/3ef815416f775098fe977004015c6193-Abstract.html","timestamp":"2024-11-06T11:39:17Z","content_type":"text/html","content_length":"9847","record_id":"<urn:uuid:4ca7ffd1-37c0-4bc8-a982-6a8813336395>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00456.warc.gz"}
Why the response time could be high: A Queuing Theory approach

Usually when we talk about software performance we face two kinds of problems. First, the system is given while the load is going to increase. In this case we want to know how the response time could change and how many servers we need to add to handle the load. The second is when we're trying to improve system performance. Considering the load as constant, we want to know how to reduce the response time, and which changes could bring the most significant benefit and lower the infrastructure costs. This article is a gentle introduction to Queueing Theory that could help in both cases.

Queuing theory terminology

First of all, let's look at how queuing theory models a system. Requests arrive at a system with some random intervals between them. The time between two consecutive request arrivals is called the inter-arrival time. The inter-arrival time follows some distribution that doesn't change over time. The average number of arrivals over some time period is called the arrival rate and is denoted by the Greek letter lambda ($\lambda$).

Once a request arrives, it may get into a "queue" where it waits for a free server. The time spent by a request in the queue is called the queue waiting time. Once there's a free server, request processing starts. In queuing theory, a server is something that does some work: in the supermarket context a server is a cashier, while in the software context it's a single CPU core. Different requests require different amounts of wall-clock time for processing. The time between the start of request processing and the finish of request processing is called the service time. Service time doesn't include time spent in the queue. Request processing time follows some distribution that doesn't change over time. The reciprocal of the average service time is called the service rate and is denoted by the Greek letter mu ($\mu$). The service rate shows the average number of requests the system can handle during a given time interval, for example one minute.

Once request processing is finished, the request leaves the system. For a web application, this means that the response is sent back. The time between request arrival and sending the response back is called the time in the system, or response time.

There's a subtle difference between queuing theory terminology and the concepts we use in the software domain; still, under some circumstances we may use them interchangeably. Response time is measured on the client side: it's the time between sending a request and receiving a response. Hence, response time usually includes the network round trip time. Processing time is measured on the server side: it's the time between the last byte of the request being received and the first byte of the response being sent. In this article, response time means queuing theory's time in the system.

Queuing theory: system utilization

Let's start from the simplest possible model and increase the model complexity step by step to uncover important patterns and properties. First of all, let's consider what happens if both the service time and the inter-arrival time are constant. When the arrival rate is less than or equal to the processing rate, request processing starts as soon as a new request arrives and finishes before the next request arrives, so the queue length is always zero. Response time equals the service time. Once the arrival rate becomes greater than the processing rate, the queue starts to grow infinitely. In this case, response time also tends to infinity.
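A small sketch of these two regimes (my own illustration, not from the article; the rates are made-up): at the same 50% utilization, perfectly regular arrivals never wait, while random arrivals already do. The following sections explain why.

```python
# Minimal single-server simulation: constant service time of 0.1 s, one
# arrival every 0.2 s on average (utilization 0.5). Deterministic arrivals
# keep the queue empty; random (exponential) arrivals do not.
import random

def mean_wait(next_gap, n=100_000, service=0.1):
    t, free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n):
        t += next_gap()
        start = max(t, free_at)   # wait if the server is still busy
        total_wait += start - t
        free_at = start + service
    return total_wait / n

random.seed(1)
print(mean_wait(lambda: 0.2))                      # 0.0: no queue ever forms
print(mean_wait(lambda: random.expovariate(5.0)))  # ~0.05 s of waiting
```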
Utilization shows what part of the time the system is busy on average. We use the Greek letter rho ($\rho$) to denote it. We may express utilization as the ratio of the arrival and processing rates:

$$ \rho = \frac{\lambda}{\mu} $$

Let's check an example. When the system is able to process 10 requests per second while it has only 5 incoming requests per second, it's obvious that half of the time it has nothing to do, so the utilization is 50%.

Queuing theory: the root cause of queues

Time to make the model more complex. Imagine that all the requests are the same, hence the processing time is constant, but now requests arrive with random intervals between them. The request arrival rate is less than the processing rate. The system has only one server, so only one request can be processed at any point in time. Given the $\lambda < \mu$ constraint, random intervals between two consecutive request arrivals lead to queueing. What are the possible system states in this case?

So, the response time increases very fast once requests start queuing. Even when there's only a single request in the queue when a new request arrives, the new request's response time at least doubles! The average response time of the system is highly impacted by the average number of requests in its queue. How likely is it that a new request, on arrival, sees a busy system with multiple requests already waiting in the queue? When the inter-arrival interval is random, the system utilization gains an additional meaning. We may consider it as the time-average probability that an arriving request sees the system busy processing another request. So, the lower the utilization, the shorter the expected queue. Let's add more details to our current model to get some quantitative metrics.

Inter-arrival interval and exponential distribution

If the service time is still constant, the main factor impacting the time requests spend in the queue and, consequently, the response time is the distribution of the time intervals between request arrivals. So, we have to answer two questions here: what is the shape of that distribution, and what are its parameters.

Let's quote some parts of Erlang's works here to answer this very important question. Let's assume that request arrivals are independent. There's no greater probability that a request arrives at one particular moment compared to any other moment. Also, there are n requests on average over the time interval a. Then $\frac{na}{r}$ is the probability of having one call during the time $\frac{a}{r}$ when $r$ is infinitely great. Let's stop here for a moment. As $r$ tends to infinity, the time interval we're analyzing tends to zero. So, we're trying to estimate the probability of receiving one request at a specific point in time! There are no simultaneous requests, as the time interval is infinitely small. The probability of receiving zero requests during the $\frac{a}{r}$ interval is $1-\frac{na}{r}$. Hence, the probability of receiving zero requests during the whole interval a is:

$$ P_0 = \lim_{r\to\infty} \left(1-\frac{na}{r}\right)^r = e^{-na} $$

The power of r in the previous formula comes from the rule of joint probability of independent events. What is the probability of having exactly $x$ requests during some time interval? Let's divide the interval a into r equal parts. We need to estimate the joint probability of having exactly $r-x$ intervals where no requests arrive and $x$ intervals where we receive one request. There are $C_x^r$ possible combinations of such intervals.
$$ P_x = \lim_{r\to\infty} C_x^{r} \left(\frac{na}{r}\right)^x \left(1-\frac{na}{r}\right)^{r-x} = \frac{(na)^x}{x!}e^{-na} $$

Let's introduce $\lambda = na$ to denote the average number of requests arriving during the time a:

$$ P_x=\frac{\lambda^x}{x!}e^{-\lambda} $$

The formula above is the Poisson distribution, which represents the probability of seeing exactly x events during some time interval if there are $\lambda$ events on average over this interval. However, the exact times when events occur are unknown. To return to our queueing model, we need to move from the distribution of the number of requests to the distribution of inter-arrival times. Arrivals are independent and the arrival rate doesn't change over time. Hence, everything that's true about the time interval from some "zero point" to the first event is also true for the time between any two consecutive events. Imagine that no request has arrived yet, and we divide the timeline into two parts at point T. The probability that the first request arrives at a time greater than T is equal to the probability of seeing zero events over the time [0, T]:

$$ P_{T_{arrival}>T} =\frac{(\lambda T)^0}{0!}e^{-\lambda T} = e^{-\lambda T} $$

In this case, we may denote the probability that the first request arrives at a time less than or equal to T using the complement rule:

$$ P_{T_{arrival} \le T} = 1 - e^{-\lambda T} $$

The formula above is the CDF of the exponential distribution. In other words, if arrivals are independent, then the distribution of inter-arrival intervals is always exponential. On the diagram below, horizontal lines show the constant service time and the red curve is the exponentially distributed inter-arrival time. If the inter-arrival time is less than the service time, then the new request goes directly into the queue. The heavy tail of the response time distribution is inevitable if the inter-arrival time follows the exponential distribution. Still, it's possible to make the absolute value of the response time outliers smaller by making the service time smaller.

How adequate is it to use exponentially distributed inter-arrival times for a typical web application? If users access only a single page of the application and this results in a single request to a back end, the model describes the real process exactly. If users access multiple pages in a row, or a single page load requires multiple calls to a back end, the first assumption about independence of request arrivals is not true anymore. Once the server receives a first request, it also shortly receives several additional requests with a very small delay between them. In other words, second and subsequent requests have a higher probability of spending some time in the queue. Is the model based on the exponential distribution of inter-arrival times still useful in these circumstances? Sure! The trends and patterns it allows us to grasp stay the same; still, it's important to remember that in this case the real response time would be higher than the theory predicts.
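To check the equivalence numerically, here is a small sketch (my own, with illustrative parameters) that generates exponential inter-arrival times and compares the counts per unit interval with the Poisson formula derived above:

```python
# Exponential inter-arrival times should yield Poisson counts per interval.
import math, random

random.seed(2)
lam, horizon = 4.0, 100_000
t, counts = 0.0, [0] * horizon
while True:
    t += random.expovariate(lam)     # sample the next inter-arrival gap
    if t >= horizon:
        break
    counts[int(t)] += 1              # count arrivals per unit interval

for x in range(8):
    empirical = counts.count(x) / horizon
    poisson = lam ** x / math.factorial(x) * math.exp(-lam)
    print(f"P(X={x}): empirical {empirical:.4f}  vs  Poisson {poisson:.4f}")
```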
In queuing theory, a system that consists of a single server processing requests with exponentially distributed arrivals and any distribution of the service time is denoted as M/G/1. We may express the expected response time of the system as the sum of the expected service time and the expected waiting time. The expected service time comes directly from the service time distribution, while the waiting time depends on the expected queue length. We're going to focus on the waiting time only, because the service time's part of the average response time is constant and doesn't depend on the system utilization, while the waiting time does. Let's do some math to evaluate the expected waiting time:

$$ E[T_{waiting}] = E[N_Q] \cdot E[T_{service}] + \rho \cdot E[T_{service~time~excess}], $$

where:
• $E[T_{waiting}]$ is the expected waiting time in the queue
• $E[N_Q]$ is the expected number of requests in the queue
• $E[T_{service}]$ is the expected value of the request service time
• $\rho$ is the system utilization, which equals the probability of the server being busy
• $E[T_{service~time~excess}]$ is the expected value of the service time excess

The service time excess is the amount of time left to finish processing the current request. The key to the next equation transformation is that we may treat the server's queue as an independent system with a known request arrival rate $\lambda$ and a known expected response time that equals the queue waiting time. The arrival rate for the server's queue is the same as for the whole system, and its "response time" equals the expected time spent in the queue. Let's use Little's Law to express the expected queue length:

$$ E[N_Q] = \lambda \cdot E[T_{waiting}] $$

Now we may substitute the expected queue length into the initial formula:

$$ E[T_{waiting}] = \lambda \cdot E[T_{waiting}] \cdot E[T_{service}] + \rho \cdot E[T_{service~time~excess}] $$

Another way to express the system's utilization is to use the request arrival rate and the expected processing time:

$$ \rho = \lambda \cdot E[T_{service}] $$

Using the utilization formula from above, we receive:

$$ E[T_{waiting}] = \rho \cdot E[T_{waiting}] + \rho \cdot E[T_{service~time~excess}] $$

Now, let's get an equation for $E[T_{waiting}]$ using very basic transformations:

$$ E[T_{waiting}] = \rho \cdot E[T_{waiting}] + \rho \cdot E[T_{service~time~excess}] \Leftrightarrow $$
$$ E[T_{waiting}] - \rho \cdot E[T_{waiting}] = \rho \cdot E[T_{service~time~excess}] \Leftrightarrow $$
$$ E[T_{waiting}](1 - \rho) = \rho \cdot E[T_{service~time~excess}] \Leftrightarrow $$
$$ E[T_{waiting}] = \frac{\rho}{1 - \rho} \cdot E[T_{service~time~excess}] $$

The derivation of the expected service time excess is quite long, so let's use a ready-made formula:

$$ E[T_{service~time~excess}] = \frac{E[T_{service}^2]}{2E[T_{service}]}, $$

where $E[T_{service}^2]$ is the second moment of the service time distribution; it equals $Var(T_{service}) + (E[T_{service}])^2$. That finally gives us the Pollaczek–Khinchine formula:

$$ E[T_{waiting}] = \frac{\rho}{1 - \rho} \cdot \frac{E[T_{service}^2]}{2E[T_{service}]} = \frac{\lambda \cdot E[T_{service}^2]}{2(1-\rho)} $$

Utilization, service time and the response time.

To make things easier to imagine, we're going to use the coefficient of variation instead of the second moment:

$$ CV(T_{service}) = \frac{\sqrt{Var(T_{service})}}{mean(T_{service})} $$

Since $E[T_{service}^2] = (E[T_{service}])^2 (1 + CV(T_{service})^2)$, the waiting time can be rewritten as $E[T_{waiting}] = \frac{\rho}{1-\rho} \cdot \frac{E[T_{service}](1 + CV(T_{service})^2)}{2}$. The coefficient of variation is another way to evaluate how much each specific measurement or observation is expected to differ from the mean value.
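The formula is easy to sanity-check by simulation. The sketch below is my own illustration (the lognormal service time and its parameters are arbitrary assumptions); it simulates an M/G/1 queue and compares the measured mean wait with the Pollaczek–Khinchine prediction:

```python
# Simulate M/G/1 and compare against the Pollaczek-Khinchine formula.
import random, statistics

random.seed(3)
lam = 5.0
services = [random.lognormvariate(-2.5, 0.8) for _ in range(200_000)]
mean_s = statistics.fmean(services)
second_moment = statistics.fmean([s * s for s in services])
rho = lam * mean_s                   # utilization, must stay below 1

t, free_at, waits = 0.0, 0.0, []
for s in services:
    t += random.expovariate(lam)     # Poisson arrivals
    start = max(t, free_at)
    waits.append(start - t)
    free_at = start + s

print(f"rho = {rho:.2f}")
print(f"simulated  E[W] = {statistics.fmean(waits):.4f}")
print(f"P-K        E[W] = {lam * second_moment / (2 * (1 - rho)):.4f}")
```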
High CV values are not rare in practice; for example, the exponential distribution has a CV of 100%. The curves on the diagram below represent how the average queue waiting time depends on the utilization. Different curves correspond to different coefficients of variation of the service time distribution. Several common causes of high service time variance are:

1. Remote calls. A remote service's response time distribution could have a heavy tail, which leads to a heavy tail of the service time. Usually, distributions with heavy tails have high variance.
2. CPU sharing policies. Sharing the CPU between multiple processes introduces random delays into request processing. Hence, the variance increases.
3. Algorithms with time complexity greater than O(log(N)).

It's worth mentioning that a high average response time at low utilization levels could also be caused by the nature of the inter-arrival distribution. What would be the average queue size in the following scenario: 10 requests arrive simultaneously, then a pause, then 10 more requests arrive simultaneously, then a pause, and so forth?

A few very simple things could help to decrease the variance of the service time and, hence, improve the response time:

1. Use asynchronous IO, if request processing requires remote calls
2. Align the number of worker threads with the number of CPU cores allocated to a workload
3. Configure CPU core affinity for a workload. It's possible in Linux and Kubernetes.

There are two ways to reduce utilization. The first is to send fewer requests; the second is to process requests faster. The diagram below shows how the average response time depends on the average service time.

So far we have reasoned about systems that have only a single CPU core; however, modern computers have multiple cores. Simulation is the easiest way to get performance metrics for multicore systems; still, the current model can be used for rough estimates and for finding potential system performance improvements. If you're interested in the second part about multi-core systems, connect on LinkedIn to not miss it!

Notes and references
{"url":"https://andrewpakhomov.com/posts/why-the-response-time-could-be-high-queuing-theory-approach/","timestamp":"2024-11-11T09:45:05Z","content_type":"text/html","content_length":"33608","record_id":"<urn:uuid:b2be9e3d-bb99-49ad-a93e-56b1a74ee389>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00772.warc.gz"}
Re: Efficient Evaluation of Image Expressions

From: Nick <Ibeam2000@gmail.com>
Newsgroups: comp.compilers
Date: Sun, 22 Jul 2007 08:53:58 -0000
Organization: Compilers Central
References: 07-07-066
Keywords: optimize, analysis
Posted-Date: 22 Jul 2007 23:38:47 EDT

> I have a language used perform map/image algebra calculations using
> expressions like the following:
> if isNull(i1) or isNull(i2) or isNull(i3) or isNull(i4) or isNull(i5)
> or isNull(i6) or isNull(i7) or isNull(i8)
> then null
> else
> (if (-4113.26 + 0.07*i7 - 5.08*i8/100 + 47.89*abs(i1) + 99.72*abs(i2) + 214.96*
> (
> -2.332 + 2.561*i4 + 0.55*i6
> ) +
> 1.57*i3) > 0 then .....

This sounds to me like a typical array programming problem. If the code fragment you've supplied is the inner part of a larger loop which iterates across eight potentially large bitmaps (arrays), and the interpretive overhead of an expression far exceeds the time to actually do the arithmetic, then one of your choices is to attempt the operations as arrays.

As a programmer, I would definitely write the expression having eliminated the common subexpression myself. (This was mentioned earlier.) At the very least, your code is much, much clearer. Or use a max function.

r := max(0, (-4113.26 + 0.07*i7 - 5.08*i8/100 .....

But if you can change your computational procedure to deal with i1, i2, i3 as entire matrices, then at least you have a force multiplier where you do the parsing and tree traversal once, but get the answers for a million or more pixels. The operations +, -, /, *, abs, max, etc. should have the correct missing value handling for nulls. 1 2 3 + 4 MV 6 should be 5 MV 9 and so on.

If a substantial fraction of the pixels are nulls, then you can build a mask, weed out only the relevant values, perform the computations, then build a result using a null or the computed value for the corresponding element, depending on the mask.

If i1 through i8 are small integer values always in the range of 0..255, then you can do the computations with pre-calculated tables. (You can return scaled large integer values with just enough digits behind the decimal places.) If you truncate to get back a pixel value, all that excess precision will be cut off.

Use 4 byte floats, rather than 8 byte doubles. (This is not a big

I would suggest reading Jim Blinn's books, and Jon Bentley's book "Writing Efficient Programs". Also, have a look at "J", www.jsoftware.com, for ideas.

Regards, Nick
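[A modern NumPy illustration of the array-with-mask approach Nick describes; the band names, the hoisted subexpression, and the 5% null fraction are made-up assumptions, not code from the thread.]

```python
# Evaluate the whole expression once over entire images; nulls are NaN.
import numpy as np

rng = np.random.default_rng(0)
shape = (1024, 1024)
bands = {f"i{k}": rng.uniform(0, 255, shape) for k in range(1, 9)}
null_mask = rng.random(shape) < 0.05        # pretend 5% of pixels are null
for b in bands.values():
    b[null_mask] = np.nan                   # NaN plays the role of "null"

i1, i2, i3, i4, i5, i6, i7, i8 = (bands[f"i{k}"] for k in range(1, 9))
common = -2.332 + 2.561 * i4 + 0.55 * i6    # hoisted common subexpression
r = (-4113.26 + 0.07 * i7 - 5.08 * i8 / 100
     + 47.89 * np.abs(i1) + 99.72 * np.abs(i2) + 214.96 * common
     + 1.57 * i3)
# NaN propagates through the arithmetic, so null inputs stay null; the
# max(0, ...) rewrite replaces the "> 0 then" branch. (i5 only matters for
# the isNull test in the original expression.)
result = np.where(np.isnan(r), np.nan, np.maximum(r, 0))
```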
{"url":"https://www.compilers.iecc.com/comparch/article/07-07-079","timestamp":"2024-11-04T00:54:18Z","content_type":"text/html","content_length":"6330","record_id":"<urn:uuid:80968d52-fc8d-4699-91d2-bdd93ced6a50>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00282.warc.gz"}
Convert Logarithmic Equation In Exponential Form Worksheets [PDF]: Algebra 2 Math

How Will This Worksheet on "How to Convert Logarithmic Equation in Exponential Form" Benefit Your Student's Learning?

• Converting logarithmic equations to exponential form helps students find unknowns more easily.
• Understanding both forms enables students to see the link between logarithms and exponents.
• Practicing these conversions improves students' skills in handling algebraic equations.
• Converting equations provides another method for solving math problems.
• Knowing how to convert these equations is essential for calculus and higher-level math courses.

How to Convert Logarithmic Equation in Exponential Form?

• Identify the base, argument, and result in the logarithmic equation.
• Note that the base is the number after "log," the argument is the number inside the logarithm, and the result is what the logarithm equals.
• Convert \(\log_a(b) = c\) to exponential form \(a^c = b\).

Q. Write the logarithmic equation in exponential form.
\(\log_7 343 = 3\)
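A short worked solution for the question above (added here as an example), applying the rule \(\log_a(b) = c \Leftrightarrow a^c = b\):

```latex
% Identify the parts: base a = 7, argument b = 343, result c = 3.
\log_{7} 343 = 3
\quad\Longleftrightarrow\quad
7^{3} = 343
```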
{"url":"https://www.bytelearn.com/math-algebra-2/worksheet/convert-logarithmic-equation-in-exponential-form","timestamp":"2024-11-12T09:04:42Z","content_type":"text/html","content_length":"122797","record_id":"<urn:uuid:76f8b250-6953-419b-badf-4cbad5b470e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00180.warc.gz"}
Orbits of the Integral Unimodular Group in the Upper Half-Plane Free Video Tutorials and Notes Lectures An Introduction to Riemann Surfaces and Algebraic Curves: Complex 1-Tori and Elliptic Curves by Dr. T.E. Venkata Balaji, Department of Mathematics, IIT Madras. For more details on NPTEL visit http:// Goals: * To ask for a description of the set of holomorphic isomorphism classes of complex tori * To state the Theorem on the Moduli of Elliptic Curves that not only answers the question above but also shows that the set above has a beautiful God-given geometry * To see how the upper half-plane and the unimodular group (integral projective special linear group) enter into the discussion * To use the theory of covering spaces to prove a part of the Theorem on the Moduli of Elliptic Curves, namely that the set of holomorphic isomorphism classes of complex 1-dimensional tori is in a natural bijective correspondence with the set of orbits of the unimodular group in the upper half-plane Keywords: Real torus, complex torus, Moebius transformation, translation, abelian group, holomorphic universal covering, admissible neighborhood, fundamental group, deck transformation group, biholomorphism class (or) holomorphic isomorphism class, locally biholomorphic map, upper half-plane, projective special linear group, unimodular group, orbits of a group action, action of a subgroup, underlying fixed geometric structure, superimposed (or) overlying (or) extra geometric structure, variation of extra structure for a fixed underlying structure (or) moduli problem, quotient by a group, equivalence relation induced by a group action, universal property of the universal covering, unique lifting property, moduli of elliptic curves, forming the fundamental group is a functor
{"url":"https://learnerstv.in/lessons/orbits-of-the-integral-unimodular-group-in-the-upper-half-plane-free-video-tutorials-and-notes-lectures/","timestamp":"2024-11-14T06:33:45Z","content_type":"text/html","content_length":"54854","record_id":"<urn:uuid:b54804b3-26c4-4965-932f-4b5339c5bc57>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00483.warc.gz"}
Gauge Theory as an Integrable System - HU Berlin Forschungsportal - Forschungsinformationssystem der HU Berlin

Gauge Theory as an Integrable System

A multi-partner Initial Training Network on Gauge Theory as an Integrable System. Gauge theories provide the most successful framework for the description of nature, and in particular of high energy physics. However, extracting reliable predictions relevant for experiment from gauge theory has remained a major challenge which so far requires massive use of computer algebra. Over the last decade an entirely new approach to quantum gauge theories has begun to emerge, initiated by a celebrated duality between gauge and string theory. This has brought an area of science into gauge theory that seemed unrelated a few years before, namely the theory of low-dimensional statistical systems and strongly correlated electron systems. The paradigm governing this is to view "Gauge Theory as an Integrable System". The partners of this network represent different communities from gauge theory, statistical physics and computer algebra. With the proposed Initial Training Network we will carry the emerging multidisciplinary interaction to an entirely new level, bridging the gaps between our research fields in the context of graduate training activity. We believe that coordinated education of young scientists in all the tools under development from the different communities offers tremendous potential to make progress in the understanding and application of gauge theory. A group of carefully selected private sector partners will assist dissemination of results, methods and ideas into neighboring scientific disciplines as well as to the general public. At the same time, they will also be vital in preparing the early stage researchers for active and leading roles in academia and beyond.

Participating external organisations
European Union (EU) - HU as participant
Duration: project start 01/2013, project end 12/2016
{"url":"https://fis.hu-berlin.de/converis/portal/detail/Project/401939651?auxfun=&lang=de_DE","timestamp":"2024-11-11T23:48:50Z","content_type":"application/xhtml+xml","content_length":"21810","record_id":"<urn:uuid:f5a09a98-f4e7-46f4-8f9e-a9da8539f74a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00172.warc.gz"}
Identify and draw two-dimensional figures based on their defining attributes. Figures are limited to triangles, rectangles, squares, pentagons, hexagons and octagons.

Clarification 1: Within this benchmark, the expectation includes the use of rulers and straight edges.

General Information
Subject Area: Mathematics (B.E.S.T.)
Grade: 2
Strand: Geometric Reasoning
Date Adopted or Revised: 08/20
Status: State Board Approved

Benchmark Instructional Guide
Connecting Benchmarks/Horizontal Alignment
Terms from the K-12 Glossary
• Hexagon
• Octagon
• Pentagon
• Polygon
• Rectangle
• Square
• Triangle

Vertical Alignment
Previous Benchmarks
Next Benchmarks

Purpose and Instructional Strategies
The purpose of this benchmark is to build on the work of grade 1 by including the task of drawing specific two-dimensional figures based on a defined attribute. At this grade level, five- and eight-sided figures have been included, and a ruler would be used to create straight edges.
• Instruction includes experience with a variety of examples and non-examples that lack a defining attribute.
• Instruction includes defining attributes such as number of sides, sides of equal length, number of vertices, whether figures are closed or not, and whether the edges are curved or straight.

Common Misconceptions or Errors
• Students may misidentify a figure based on a non-defining attribute.
• Students may not recognize figures that have been rotated or that are irregular.

Strategies to Support Tiered Instruction
• Teacher provides a geoboard for students to make a series of closed shapes, following instructions like "make a closed shape with three straight sides and three corners" or "make a closed shape with 5 straight sides and 5 corners." Students use the geoboard and draw a picture of the shape. Teacher asks questions like, "How did you know to make this shape?" to draw attention to the defining attributes. It may be helpful to have students compare their shapes with other students.
• Teacher provides a geoboard to make a series of closed shapes (e.g., a closed shape with three straight sides and three corners, a closed shape with 5 straight sides and 5 corners).
  For example, students draw a picture of the shape. Teacher asks questions like, "How did you know to make this shape?" to draw attention to the defining attributes. Teachers may limit the type of shapes students work with at this level.
• Instruction includes opportunities to build shapes on a geoboard as the teacher calls out defining attributes (e.g., "make a two-dimensional figure with three vertices"). After creating a correct figure, the teacher has students rotate the geoboard 90 degrees to see that it is still the same figure.
• Teacher provides similar instruction as above but limits the number and types of shapes students build on a geoboard (e.g., only build a square or triangle).

Instructional Tasks
Instructional Task 1 (MTR.4.1)
Provide pairs of students with figure cards, geoboards and rubber bands. Students will play a game of "describe and build" to support identifying figures.
• Part A. Partner A uses the figure card to describe a two-dimensional figure. As Partner A describes the figure, Partner B uses the geoboard to construct the figure that is being described. Neither partner should be able to see the other's card or geoboard.
• Part B. Once Partner B has constructed the figure based on the defining attributes, the partners finish by comparing the figure on the figure card to the figure that was created.
Discussion should include language about specific defining attributes.

Enrichment Task 1
Part A. Partition a regular hexagon into two or three equal parts.
Part B. Partition a regular octagon into two, four or eight equal parts.

Instructional Items
Instructional Item 1
Which word best identifies the figure below?
• a. Triangle
• b. Pentagon
• c. Hexagon
• d. Square

*The strategies, tasks and items included in the B1G-M are examples and should not be considered comprehensive.

Related Courses
This benchmark is part of these courses.

Related Access Points
Alternate version of this benchmark for students with significant cognitive disabilities. Identify and produce two-dimensional figures when given defining attributes. Figures are limited to triangles, rectangles, hexagons and squares.

Related Resources
Vetted resources educators can use to teach the concepts and skills in this benchmark.
Formative Assessments
Lesson Plans
Original Student Tutorial
Perspectives Video: Teaching Idea

MFAS Formative Assessments
Three Sided Figures: Students are asked to draw a triangle and justify their drawn shape. Then students are shown a figure that is not a triangle, but that has three sides, and asked to determine if the figure on the worksheet is a triangle.

Original Student Tutorials Mathematics - Grades K-5
Shapes in Space: Learn how to recognize and draw triangles, pentagons and hexagons using the shapes' attributes in this space-themed, interactive tutorial.

Student Resources
Vetted resources students can use to learn the concepts and skills in this benchmark.
Original Student Tutorial
Shapes in Space: Learn how to recognize and draw triangles, pentagons and hexagons using the shapes' attributes in this space-themed, interactive tutorial.
Type: Original Student Tutorial

Parent Resources
Vetted resources caregivers can use to help students learn the concepts and skills in this benchmark.
{"url":"https://www.cpalms.org/PreviewStandard/Preview/15300","timestamp":"2024-11-06T06:13:46Z","content_type":"text/html","content_length":"107194","record_id":"<urn:uuid:1dbd3a1e-1220-48ca-815c-b217dd97b60c>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00482.warc.gz"}
Introduction to Linear Algebra, Fourth Edition, Strang (PDF)

Linear algebra moves steadily to n vectors in m-dimensional space. Linear algebra is something all mathematics undergraduates and many other students study, and this book is an excellent introduction: it is the standard textbook for MIT's linear algebra course 18.06. The text provides a solid introduction to both the computational and theoretical aspects of linear algebra. These subjects include matrix algebra, vector spaces, eigenvalues and eigenvectors, symmetric matrices, linear transformations, and more. Emphasis is given to topics that will be useful in other disciplines, including systems of equations, determinants, similarity, and positive definite matrices. Introduction to Linear Algebra, Fourth Edition incorporates challenge problems to supplement the review problems that have been highly praised in earlier editions.

This is a basic subject on matrix theory and linear algebra. Chapters 1-7 form the foundation for understanding linear algebra; the basic course is followed by seven applications. The interplay of columns and rows is the heart of linear algebra. First we have to introduce matrices and vectors and the rules for multiplication. We still want combinations of the columns in the column space, and we still get m equations to produce b, one for each row. Many of the concepts in linear algebra are abstract, and formulating proofs and logical reasoning are skills the subject demands.

This leading textbook for first courses in linear algebra comes from the hugely experienced MIT lecturer and author Gilbert Strang, whose textbooks have changed the entire approach to learning linear algebra. As with his classic Linear Algebra and Its Applications (Academic Press, 1976) from forty years ago, Strang's new edition of Introduction to Linear Algebra keeps one eye on the theory, the other on applications, and has the stated goal of "opening linear algebra to the world" (preface, page x). The textbook covers many important real-world applications and is aimed at the serious undergraduate student. The fifth edition of this hugely successful textbook retains all the qualities of earlier editions; Introduction to Linear Algebra, Fifth Edition (2016) is published by Wellesley-Cambridge Press. See Wellesley-Cambridge Press and SIAM for ordering information. Introduction to Linear Algebra, Indian edition, will be published by Wellesley Publishers. Many universities use the textbook Introduction to Linear Algebra. One course page notes: "Introduction to Linear Algebra, 4th edition, Gilbert Strang; the three midterm exams will be held in Walker during lecture hours."

A reader's note: "I wanted a re-introduction to linear algebra after taking a course in elementary linear algebra with differential equations as an engineer back in college. I have gotten my hands on Introduction to Linear Algebra, 4th edition, by Gilbert Strang, and it's not sufficient for my learning needs, at least not on its own. As a note, I have only worked through chapters 1-6 and looked over other portions of the text. I have access to the solutions of the problems located at this website."

On solutions: Access Introduction to Linear Algebra, 4th edition, Chapter 1 solutions; our solutions are written by Chegg experts, so you can be assured of the highest quality. Unlike static PDF Introduction to Linear Algebra 4th Edition solution manuals or printed answer keys, our experts show you how to solve each problem step-by-step; no need to wait for office hours or assignments to be graded to find out where you took a wrong turn. Download the ebook Introduction to Linear Algebra Solutions Manual, G. Strang, in PDF or EPUB format and read it directly on your mobile phone, computer or any device. Download the solution manual of Linear Algebra and Its Applications by Gilbert Strang, 4th edition, free in PDF format. Please note that this is a PDF digital format and not a hardcover printed book; the PDF file will be sent to your email once the payment has been made, and it can be read on all computers, smartphones, tablets, etc. Solution Builder allows instructors to create customized, secure PDFs.

Related titles and downloads: Linear Algebra and Its Applications, 4th edition, India edition (97881501726), Gilbert Strang. Introduction to Linear Algebra, 5th edition, Gilbert Strang; Linear Algebra and Its Applications, 5th edition, David C. Lay; Probability Theory: The Logic of Science, Edwin Thompson Jaynes. Elementary Linear Algebra, 5th edition, by Stephen Andrilli and David Hecker, is a textbook for a beginning course in linear algebra for sophomore or junior mathematics majors. Marc Lipson, Schaum's Outline of Linear Algebra, 4th edition; Mahmood Nahvi and Joseph Edminister, Schaum's Outline of Electric Circuits. An Analysis of the Finite Element Method, with George Fix, Prentice-Hall 1973; Linear Algebra and Its Applications, Academic Press 1976. Gilbert Strang's bibliography is also available in LaTeX and PDF. Contribute to dragonbook introduction-to-linear-algebra-5th-edition ee16a.
{"url":"https://mindtopkzahlse.web.app/1374.html","timestamp":"2024-11-10T09:07:03Z","content_type":"text/html","content_length":"16182","record_id":"<urn:uuid:233b2d53-e421-4506-bdac-88b41347b439>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00009.warc.gz"}
Punto Banco Policies and Strategy

Dec 20 2015

Baccarat Standards

Baccarat is played with eight decks in a shoe. Cards below 10 are counted at face value, while 10, Jack, Queen and King count as zero, and aces count as one. Wagers are made on the 'bank', the 'player', or on a tie (these aren't actual people; they just represent the two hands that are dealt).

Two cards are dealt to both the 'bank' and the 'player'. The total for each hand is the sum of the cards, but the first digit is ignored. For example, a hand of 5 and 6 has a score of one (5 plus 6 equals eleven; ignore the initial '1'). An additional card may be dealt according to the rules below:

- If the player or banker has a score of 8 or 9, both hands stand.
- If the player has less than 5, the player hits; otherwise the player stands.
- If the player stands, the house hits on 5 or lower. If the player hits, a table is used to determine whether the bank stands or takes a card.

Baccarat Chemin de Fer Odds

The higher of the two totals wins. Winning wagers on the bank pay 19 to 20 (even money less a 5% commission; commissions are recorded and settled when you leave the table, so be sure to have money left before you head out). Winning wagers on the player pay 1 to 1. Winning wagers on the tie normally pay 8:1 but occasionally 9:1. (This is a bad wager, as ties occur less than once in every 10 rounds; be cautious of betting on a tie. That said, 9 to 1 is substantially better than 8 to 1.)

Wagered properly, punto banco gives relatively good odds, apart from the tie bet of course.

Punto Banco Scheme

As with all games, baccarat chemin de fer has a few established misunderstandings, one of which is similar to a myth in roulette: the past is not a harbinger of future outcomes. Keeping score of previous results at a table is a poor use of paper and a snub to the tree that gave its life for our paper needs.

The most established and possibly the most accomplished scheme is the one-three-two-six method. This tactic is deployed to pump up profits and limit losses.

Start by betting one chip. If you win, add another to the two on the table for a total of 3 chips on the second bet. If you win again you will have 6 on the table; remove 4 so you have 2 on the third wager. If you win the third bet, add 2 to the 4 on the table for a total of 6 on the fourth round.

Should you not win the first wager, you take a loss of 1. A win on the first bet followed by a loss on the second creates a loss of 2. Wins on the first two with a loss on the third leaves you with a gain of 2. Wins on the first three with a loss on the fourth means you break even. Winning all four rounds leaves you with 12, a take of 10. This means you can lose the second wager five times for every successful run of four rounds and still balance the books.
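The staking cycle is mechanical enough to simulate. Below is a minimal Python sketch of the 1-3-2-6 progression (my own illustration, not from the article): even-money payouts are assumed, the banker commission is ignored, and the win probability is a placeholder rather than a real baccarat figure.

import random

def one_three_two_six(bankroll=100, rounds=1000, p_win=0.49, unit=1):
    """Play the 1-3-2-6 progression on even-money bets.

    Any loss, or completing all four wins, restarts the cycle.
    """
    stakes = [1, 3, 2, 6]
    stage = 0
    for _ in range(rounds):
        bet = stakes[stage] * unit
        if bet > bankroll:
            break                        # cannot cover the next stake
        if random.random() < p_win:
            bankroll += bet              # even-money win
            stage = (stage + 1) % 4      # advance; restart after 4 wins
        else:
            bankroll -= bet
            stage = 0                    # any loss restarts the cycle
    return bankroll

print(one_three_two_six())

Because the expected value of each individual bet is unchanged, the progression reshapes the distribution of outcomes rather than the house edge, which averaging this simulation over many runs makes easy to see.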
{"url":"http://beamultimillionaire.com/2015/12/20/punto-banco-policies-and-strategy/","timestamp":"2024-11-02T06:19:28Z","content_type":"application/xhtml+xml","content_length":"27471","record_id":"<urn:uuid:66ceb687-44d5-4319-9eb9-cca5f5f33e8b>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00439.warc.gz"}
Gordon Kane | U-M LSA Physics Victor Weisskopf Distinguished University Professor of Physics, and Professor of Art, Penny W Stamps School of Art and Design Office Information: 3464 Randall Lab phone: 734.764.4451 Theoretical Cosmology and Astrophysics; Theoretical Elementary Particle Physics University of Illinois, Ph.D. 1963 University of Minnesota, B.A.
{"url":"https://prod.lsa.umich.edu/physics/people/emeritus/gkane.html","timestamp":"2024-11-11T10:30:12Z","content_type":"text/html","content_length":"62790","record_id":"<urn:uuid:e31aed7e-7951-4f07-967c-b07c9a489706>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00454.warc.gz"}
Optimal solution is solver-specific

I am facing the following strange problem. I have a simple cost-minimization problem:

* Defining in-line comments
$inlineCom # #

Set i 'type of input' / water, fertilizer, energy, labor, land /;

Positive Variable X(i);
Variable Z;
Equation cost, con1, con2;

cost..  Z =e= sum(i, X(i));
con1..  X('water') + X('fertilizer') + X('energy') =e= 2;
con2..  X('water') + X('fertilizer') + X('energy') + X('land') =g= 2;

Model base /all/;
Solve base min Z using lp;

By default, the solver is set to CPLEX and the optimal solution is energy=2. However, if I change the solver, e.g. Option lp=Xpress; then I obtain a different solution, water=2. The same arises if I set the Xpress optimizer. Why does this happen? I run GAMS 30.0.3 on Windows 10 (64-bit).

Alternative optimal solutions? Both solutions have objective 2. So you get different (primal) solutions depending on pure chance. This is not uncommon. Just search the internet for "degenerate linear programs".

Thanks for the recommendation!
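To see the alternative optima outside GAMS, the same LP can be restated with SciPy (my own translation of the model, not from the thread): both energy=2 and water=2 are feasible with objective value 2, and which vertex a solver reports depends on its pivoting choices.

import numpy as np
from scipy.optimize import linprog

# minimize water + fertilizer + energy + labor + land, all >= 0
c = np.ones(5)

# water + fertilizer + energy == 2
A_eq = [[1, 1, 1, 0, 0]]
b_eq = [2]

# water + fertilizer + energy + land >= 2, rewritten as <= for linprog
A_ub = [[-1, -1, -1, 0, -1]]
b_ub = [-2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 5, method="highs")
print(res.x, res.fun)   # some optimal vertex; the objective is always 2.0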
{"url":"https://forum.gams.com/t/optimal-solution-is-solver-specific/3027","timestamp":"2024-11-09T16:19:14Z","content_type":"text/html","content_length":"17704","record_id":"<urn:uuid:ec9d9687-0efc-4a67-a215-f86c4f2cc7ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00265.warc.gz"}
How the Standards for Mathematical Practice Support Teachers

How can we break the cycle of frustrated students who "drop out of math" because the procedures just don't make sense to them? Or who memorize the procedures for the test but don't really understand the mathematics? Max Ray and his colleagues at the Math Forum @ Drexel University say "problem solved," offering their collective wisdom about how students become proficient problem solvers through the lens of the CCSS Standards for Mathematical Practice. They unpack the process of problem solving in fresh new ways and turn the Practices into activities that teachers can use to foster the habits of mind required by the Common Core.

In this clip, Max and his colleague from the Math Forum, Suzanne Alejandre, discuss how the Common Core Standards for Mathematical Practice can support teachers. Max explains that some students may have difficulty seeing patterns across problems. The Standards can help teachers pay attention "not just to what content knowledge the students can show that they already know and can do, but it lets us pay attention to [the students'] thinking."

Max Ray is a Professional Collaboration Facilitator at the Math Forum @ Drexel, a leading online resource for improving math learning, teaching, and communication. He is a former secondary mathematics teacher who presents at national conferences on fostering problem solving, communication, and valuing student thinking.
{"url":"https://blog.heinemann.com/how-the-standards-for-mathematical-practice-support-teachers","timestamp":"2024-11-08T14:31:17Z","content_type":"text/html","content_length":"72074","record_id":"<urn:uuid:f8305593-d833-44fa-a82c-b5ab011aacc0>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00521.warc.gz"}
Brain Teaser: How To Solve This Math Challenge CB + CB = ABB? Find The Value of ABC

Brain teasers are exciting puzzles that need thinking to solve. They make you think outside the box and exploit your mind's potential. One of the most recent brain teasers circulating on social media and boggling many minds is: how do you solve the math challenge CB + CB = ABB and find the value of ABC?

Let us first have a look at what this puzzle is.

Image source: Fresherslive

Brain Teaser: How To Solve This Math Challenge CB + CB = ABB? Find The Value of ABC - Solution

If you are still trying to get the answer, we have the solution to this puzzle. This brain teaser is a great way to test your observation skills and how sharp you are. If you still haven't gotten the answer, the explanation below will make the solution clear.

ABB is the sum of two 2-digit numbers, which is at most 99 + 99 = 198, so A is clearly 1. The last digit of every number in the sum is B, and B + B can only end in B if B = 0, so B = 0. Then ABB = 100, so CB = 50 and C = 5. Therefore ABC = 105.

Image source: Fresherslive
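A quick brute-force check (my own sketch, not part of the article) confirms that 105 is the only solution:

# Search all digit assignments with CB + CB = ABB,
# where CB is a 2-digit number and ABB a 3-digit number.
for c in range(1, 10):
    for b in range(10):
        for a in range(1, 10):
            cb = 10 * c + b
            abb = 100 * a + 11 * b
            if cb + cb == abb:
                print(f"A={a} B={b} C={c} -> ABC={100*a + 10*b + c}")
# Output: A=1 B=0 C=5 -> ABC=105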
{"url":"https://starsunfold.com/brain-teaser-how-to-solve-this-math-challenge-cb-cb-abb-find-the-value-of-abc/","timestamp":"2024-11-07T06:40:16Z","content_type":"text/html","content_length":"116255","record_id":"<urn:uuid:3dc4cb1a-4eae-4252-8e00-08b265e47aa4>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00409.warc.gz"}
Non-parametric test for clustering in a sequence of binary outcomes

The Δ[m, n, 2] statistic on this page allows you to test for clustering of binary outcomes in a sequence. This test can be thought of as a test of clustering in one dimension or a rank test of clustering.

As an example, if you start with this sequence of binary outcomes, which has exactly m = 30 red bars and n = 30 blue bars from left to right, you may want to answer the question of whether the red and blue bars are randomly arranged or not, given that there must be 30 of each.

One way to test this is with a Mann-Whitney U-test, also called a Wilcoxon rank-sum test. This test will tell you if the rank of the blue outcomes is significantly greater than the rank of the red outcomes, or vice versa. Essentially, you will be able to determine if one type of outcome clusters on the left-hand side or right-hand side of the sequence.

Another way to test this is with the Δ[30, 30, 2] statistic, whose exact distribution depends on how many red and blue bars there are (see footnote^(a) for the meaning of the "2" subscript).

The above sequence appears to have a random arrangement of reds and blues. The sequence below seems to have clustering of blue bars on the right-hand side (Mann-Whitney p = 2.01 × 10^−4). However, this sequence does not show a significant rank difference among reds and blues (Mann-Whitney p = 0.80) because the clustering occurs in the middle of the sequence.

A new statistic is needed to determine if the red bars above are clustered within the sequence of blue bars. To construct this new statistic, we borrow from the theory of stochastic processes and define a random walk that follows the order of the blue bars and red bars. For every blue bar, the random walk takes a step up. For every red bar, the random walk takes a step down. Going back to our first example, the random walk and the sequence together look like this:

You can see that when the sequence of reds and blues is truly random, the walk never gets too far away from the horizontal axis. Looking at our third example, where all of the red bars cluster perfectly in the middle, we see in the random walk that there is a descent of 30 steps in the middle of the walk. If these bars had been arranged at random, there would be about 1.2 × 10^17 possible arrangements, with only 31 of these arrangements having a descent of 30 steps. Thus, the chance that a descent of 30 steps occurs at random is approximately 2.6 × 10^−16. This case is an easy one to identify as non-random, and we conclude that the reds and blues were not ordered randomly. See footnote^(b) on maximum ascents.

Our test statistic above is the maximum descent in the walk, which is 30, i.e.

Δ[30, 30, 2] = 30

And we write that

P(Δ[30, 30, 2] = 30) = 2.6 × 10^−16
P(Δ[30, 30, 2] ≥ 30) = 2.6 × 10^−16

since a descent of 31 steps or more is not possible.

Of course, all of the interesting cases occur with some noise. Let's take a look at this sequence of reds and blues below. Again, we have 30 red bars and 30 blue bars. And it does appear that the blues cluster on the left and right, while the reds cluster in the middle.

The maximum descent in the above walk is 18 units: at one point, the walk falls from 10 units above the axis to 8 units below the axis. Hence,

Δ[30, 30, 2] = 18

The calculator on this page computes the exact probability^(c) of situations like the one above.
If you have a random walk with exactly 30 up-steps and 30 down-steps, the calculator will give you the probability that at some point in the walk there is a descent of 18 steps or more. In other words, you will be computing

P(Δ[30, 30, 2] ≥ 18)

Try this out yourself. You should get a p-value of about 1.73 × 10^−4. This means that if you were to arrange 30 red bars and 30 blue bars at random in a sequence, there is only a 1.73 × 10^−4 probability that this sequence would show the red-clustering that you see above.

^(a) You may have noticed that the subscript "2" is not used for anything in the description of Δ[m, n, 2]. This "2" simply means that when these red-and-blue sequences are interpreted in the context of nucleotide sequences and recombination, they allow you to compare the alternative hypothesis of 2-breakpoint recombination versus the null hypothesis of clonal evolution. Equivalent statistics can be defined for 3-breakpoint recombination (Δ[m, n, 3]) and 1-breakpoint recombination (Δ[m, n, 1]), but these have different clustering interpretations than the one presented here.

^(b) You can also compute the probability of a maximum ascent in a random walk (technically, a hypergeometric random walk). The math is exactly the same, as all you need to do is switch the reds and the blues and compute the probability of a maximum descent. You will see a 'swap' button in the calculator that allows you to do this. What we are not able to compute quickly is the probability that a hypergeometric random walk contains either a descent of greater than k or an ascent of greater than k.

^(c) Approximate probabilities are also presented for comparison. These are the continuous and discrete approximations as presented in Hogan and Siegmund (1986).
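The exact distribution requires the combinatorics sketched above, but a Monte Carlo approximation is easy to write down. Here is a minimal Python sketch of my own (an illustration, not the site's calculator):

import random

def max_descent(steps):
    """Maximum descent of the walk (peak-so-far minus current height)."""
    height = peak = descent = 0
    for s in steps:
        height += s
        peak = max(peak, height)
        descent = max(descent, peak - height)
    return descent

def p_value(m, n, k, trials=200_000):
    """Monte Carlo estimate of P(Delta[m, n, 2] >= k): shuffle a walk
    with m down-steps (red) and n up-steps (blue) uniformly at random."""
    walk = [-1] * m + [1] * n
    hits = 0
    for _ in range(trials):
        random.shuffle(walk)
        if max_descent(walk) >= k:
            hits += 1
    return hits / trials

print(p_value(30, 30, 18))   # should land near the exact value 1.73e-4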
{"url":"https://mol.ax/software/delta/","timestamp":"2024-11-09T01:38:54Z","content_type":"text/html","content_length":"62681","record_id":"<urn:uuid:73b37165-71ea-4333-9f24-b1bd10414867>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00707.warc.gz"}
A motor boat whose speed is 18 km/hr in still water takes 1 hr more to go 24 km upstream than to return downstream to the same spot. Find the speed of the stream.

Hint: First of all, let the speed of the stream be $u\ km/hr$. Then the downstream speed of the boat is $(18+u)\ km/hr$ and the upstream speed is $(18-u)\ km/hr$. Then use: (time taken by the boat to go 24 km upstream) - (time taken by the boat to go 24 km downstream) = 1. Here, find time by using $\text{time} = \dfrac{\text{Distance}}{\text{Speed}}$.

Complete step-by-step answer:
Here, we are given a motor boat whose speed is 18 km/hr in still water. It takes 1 hour more to go 24 km upstream than to return downstream to the same spot. We have to find the speed of the stream.

Let us consider the speed of the stream to be $u\ km/hr$.

We know that whenever a boat goes downstream, the direction of the stream and the direction of the boat are the same. Hence the stream supports the boat while going downstream, so we get:
downstream speed of boat = speed of boat in still water + speed of stream ....(i)

We also know that whenever a boat goes upstream, the direction of the stream and the direction of the boat are opposite. Hence the stream opposes the boat while going upstream, so we get:
upstream speed of boat = speed of boat in still water - speed of stream ....(ii)

As the speed of the motor boat in still water is 18 km/hr and we have assumed the speed of the stream to be $u\ km/hr$, putting these into equations (i) and (ii) gives:
downstream speed of boat $= (18+u)\ km/hr$ and upstream speed of boat $= (18-u)\ km/hr$.

Now, let us consider the time taken by the boat to go 24 km downstream to be $t_d$. Since $\text{time} = \dfrac{\text{distance}}{\text{speed}}$, putting distance = 24 km and downstream speed $= (18+u)\ km/hr$, we get
$$t_d = \dfrac{24}{18+u} \quad \text{....(iii)}$$

Now, let us consider the time taken by the boat to go 24 km upstream to be $t_u$. Putting distance = 24 km and upstream speed $= (18-u)\ km/hr$, we get
$$t_u = \dfrac{24}{18-u} \quad \text{....(iv)}$$

We are given that the time taken by the boat to go 24 km upstream is greater than the time taken to go 24 km downstream by 1 hour, therefore
$$t_u - t_d = 1 \text{ hour}$$

By putting the values of $t_d$ and $t_u$ from equations (iii) and (iv) into the above equation, we get
$$\dfrac{24}{18-u} - \dfrac{24}{18+u} = 1$$

By taking 24 common and simplifying, we get
$$24\left[\dfrac{(18+u) - (18-u)}{(18+u)(18-u)}\right] = 1 \;\Rightarrow\; 24\left[\dfrac{2u}{(18+u)(18-u)}\right] = 1$$

By cross-multiplying, we get
$$24(2u) = (18+u)(18-u)$$

We know that $(a+b)(a-b) = a^2 - b^2$. Applying this, we get
$$48u = 18^2 - u^2 \quad\text{or}\quad u^2 + 48u - 324 = 0$$

Here we can split $48u = 54u - 6u$. Therefore we get
$$u^2 + 54u - 6u - 324 = 0$$
which we can also write as
$$u(u+54) - 6(u+54) = 0$$
By taking $(u+54)$ common, we get
$$(u+54)(u-6) = 0$$

Therefore we get $u = -54$ or $u = 6$. As we have assumed that $u$ is the speed of the stream, it cannot be negative. Hence we get $u = 6\ km/hr$.

Therefore, the speed of the stream is $6\ km/hr$.
Note: If we have the speed of the boat in still water as $v\ km/hr$ and the speed of the stream as $u\ km/hr$, students often make the mistake of taking the downstream speed as $(u+v)\ km/hr$ and the upstream speed as $(u-v)\ km/hr$. While the downstream speed is correct, the upstream speed is wrong: the correct upstream speed of the boat is $(v-u)\ km/hr$, so this mistake must be avoided. Students can remember it like this: while going upstream, the stream opposes the motion of the boat, so the speed of the stream must be subtracted from the speed of the boat and not the other way around.
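As a quick numerical sanity check (my own sketch, not part of the original solution), the time condition can be handed directly to SymPy:

from sympy import symbols, solve

u = symbols('u', positive=True)          # stream speed must be positive
eq = 24/(18 - u) - 24/(18 + u) - 1       # hours upstream minus hours downstream
print(solve(eq, u))                      # [6] -> stream speed is 6 km/hr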
{"url":"https://www.vedantu.com/question-answer/a-motor-boat-whose-speed-is-18-kmhr-in-still-class-8-maths-cbse-5ee705f247f3231af2447718","timestamp":"2024-11-14T17:19:30Z","content_type":"text/html","content_length":"162001","record_id":"<urn:uuid:340a8177-55aa-4a86-a4ca-b6a5ce824472>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00474.warc.gz"}
Quivers in String Theory

Why does a physicist, particularly a string theorist, care about quivers? Essentially what I'm interested to know is the origin of quivers in string theory and why studying quivers is a natural thing in string theory. I've heard that there is some sort of equivalence between the category of D-branes and the category of quiver representations in some sense, which I don't understand. It would be very helpful if somebody could explain this. Also, there are quiver-type gauge theories; what are those and how are they related to the representation theory of quivers?

This is pretty broad, but I'll give it a shot. The origin (or at least one origin) of quivers in string theory is that, at a singularity, it is often the case that a D-brane becomes marginally stable against decay into a collection of branes that are pinned to the singularity. These are called "fractional branes". To describe the gauge theory that lives on the D-brane at the singularity, we get a gauge group for each fractional brane, and for the massless string states stretching between the D-branes, we get bifundamental matter. Thus, a quiver gauge theory.

The fractional branes and the bifundamental matter are essentially holomorphic information, so you can get at them by looking at the topological B-model. Since the B-model doesn't care about Kähler deformations, you can take a crepant resolution of the singularity, which lets you deal with nice smooth things. The connection to the derived category of coherent sheaves comes about because the B-model (modulo some Hodge-theoretic stuff) is essentially equivalent to the derived category (even though it doesn't matter so much any more, I can't resist plugging my paper, 0808.0168).

The equivalence of categories, in some ways, can be thought of as a tool for getting a handle on the derived category (representations are easier to deal with than sheaves) and the fractional branes, but I always thought there was some real physics there. Was never quite able to make those ideas work, though.

For the relation between representations and quiver reps, the easiest thing to say is that a representation of the quiver is the same as giving a vev to all bifundamentals.

I apologize, @Aaron, but wouldn't it be more logical to refer to Douglas+Moore http://arxiv.org/abs/hep-th/9603167 - the original paper about this topic with 1,000+ citations - rather than your 2008 paper?

I was referring to my paper for the derived category stuff (and I really ought to be referring to Douglas's original paper for that, too).

@Aaron Can you give some references to start reading about quivers and quiver gauge theory? Something pedagogical that begins from "What is a quiver?" I also recently saw this talk - http://

It's a broad, broad subject. Which part are you most interested in?

@Aaron Where should I start to be able to understand literature like the talk I linked in my previous comment? I had earlier seen some exposition on what a quiver is, thought of as a graph whose nodes are vector spaces and paths are homomorphisms, but that doesn't seem to be how the recent papers by Gaiotto, Pestun et al. are thinking about it! What is a "quiver gauge theory"?

I guess I'd start with Klebanov and Witten (hep-th/9807080).
There are earlier papers on quiver gauge theories (Douglas and Moore being the most prominent stringy one), but Klebanov and Witten probably gets you closer to the modern papers.
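For reference, the standard mathematical definition behind the comments above (stated here for orientation; it is not spelled out in the thread): a quiver is a directed graph, and a representation assigns a vector space to each node and a linear map to each arrow.

% Quiver Q = (Q_0, Q_1): vertices Q_0, arrows (a : i \to j) \in Q_1.
% A representation V of Q assigns V_i to each vertex and a map to each arrow:
V = \Bigl( \{ V_i \}_{i \in Q_0},\ \{ \phi_a : V_i \to V_j \}_{(a : i \to j) \in Q_1} \Bigr)
% Gauge-theory reading: \dim V_i sets the rank of the i-th gauge-group
% factor, and each arrow contributes a bifundamental field.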
{"url":"https://www.physicsoverflow.org/457/quivers-in-string-theory","timestamp":"2024-11-04T17:56:38Z","content_type":"text/html","content_length":"136970","record_id":"<urn:uuid:e8274d85-8324-4f81-b55f-3d8ff2d61663>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00445.warc.gz"}