Black body irradiance spectrum - Floating in the Clouds This is WELL beyond my small sphere of understanding, but while reading around on solar spectra I came across Planck's law for the theoretical black-body radiation of an object (the sun, in my case – and stars are a pretty close approximation... apparently). On various graphs (like the one above), I had seen the black-body irradiance for the sun plotted alongside the various measured spectra, so I wanted to include it on my graph – see this post. I decided to work it out for myself. The radiance at any given wavelength depends solely on temperature – in the case of our sun, ~5800 K (though the graph above uses 6000 K, and I have seen 5777 K elsewhere). Planck's law gives you radiance (in W/m²/sr), which is independent of any distances. I struggle to get my head round radiance and find irradiance (in W/m²) a little more relatable. You can't directly convert from radiance to irradiance, since there are some other parameters to consider – this question on Physics Stack Exchange helped. The conversion does make it an estimate, since it requires the radius of the source (the sun) and the distance to the point of interest (the earth). The graph below is just me screwing around on CodePen. [codepen_embed height="703" theme_id="light" slug_hash="eaWPGa" default_tab="result" user="aegis1980"]See the Pen eaWPGa by Jon Robinson (@aegis1980) on CodePen.[/codepen_embed] If you are a physicist or physics student then this is probably all obvious, and if you are not I cannot imagine why you would be trying to do it, but I have included my maths below, for posterity.
I have been screwing around with R, so here it is in that weird language:

```r
k <- 1.3806485279e-23  # Boltzmann constant, J/K
h <- 6.62607015e-34    # Planck constant, J*s
c <- 2.99792458e8      # speed of light, m/s
Rsol <- 0.0046491      # radius of sun, in AU
Re <- 1.0              # distance to earth from sun, in AU (= 1!)

# lambda = wavelength in nm, T = black body temp (K)
# output in W/m^2/sr (per metre of wavelength, not per nm)
PLANK_RADIANCE <- function(lambda, T) {
  l <- lambda / 1e9  # nm -> m
  ((2.0 * h * c^2) / l^5) * (1 / (exp((h * c) / (k * T * l)) - 1.0))
}

# lambda = wavelength in nm, T = black body temp (K); sun ~5800 K
# output in W/m^2 (per metre of wavelength, not per nm)
PLANK_IRRADIANCE <- function(lambda, T = 5800) {
  f <- pi * (Rsol / Re)^2  # 'f' accounts for Lambert's law and the distance to earth
  f * PLANK_RADIANCE(lambda, T)
}
```

I was going to use R for my graphs, but I ended up using Google Sheets, as its graphs are regenerated in my posts. So I rewrote the above as a custom function in Google Sheets (which is surprisingly easy). Here it is in JavaScript – function names in caps because that's how Google Sheets likes it:

```javascript
var k = 1.3806485279e-23;  // Boltzmann constant, J/K
var h = 6.62607015e-34;    // Planck constant, J*s
var c = 2.99792458e8;      // speed of light, m/s
var Rsol = 0.0046491;      // AU
var Re = 1.0;              // AU

function PLANK_RADIANCE(lambda, T) {
  var l = lambda / 1000000000.0; // convert from nm to m
  return ((2.0 * h * Math.pow(c, 2.0)) / Math.pow(l, 5.0)) * (1 / (Math.exp((h * c) / (k * T * l)) - 1.0));
}

function PLANK_IRRADIANCE(lambda, T) {
  var f = Math.PI * Math.pow(Rsol / Re, 2.0);
  return f * PLANK_RADIANCE(lambda, T);
}
```

The only downside of the Google Sheets custom function is that (I think) it outsources all the calculations to Google's servers rather than doing them on your local processor – maybe you can change that somewhere? – so for about 50% of my 500+ wavelength values I got the too-many-requests error below when I filled down. An aside: I really don't quite get black-body radiation – it is quite fascinating to think that EVERYTHING glows.
There is an interesting calculation on Wikipedia where it is used to work out a human’s wattage (~100W), a number which actually coincides, oddly-but-nicely, with something I was doing at work today.
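As a cross-check of the radiance-to-irradiance conversion above, here is the same formula as a Python sketch (my own function names, not from the original post; I assume T = 5772 K, a commonly quoted effective solar temperature). Integrating the spectral irradiance over wavelength should land near the measured solar constant, ~1361 W/m²:

```python
import math

k = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
R_SOL = 0.0046491    # radius of the sun, AU
R_E = 1.0            # sun-earth distance, AU

def planck_radiance(wavelength_nm, T):
    """Spectral radiance B(lambda, T) in W/m^2/sr per metre of wavelength."""
    l = wavelength_nm * 1e-9  # nm -> m
    return (2.0 * h * c**2) / l**5 / (math.exp(h * c / (k * T * l)) - 1.0)

def planck_irradiance(wavelength_nm, T=5800):
    """Spectral irradiance at the earth in W/m^2 per metre of wavelength."""
    return math.pi * (R_SOL / R_E) ** 2 * planck_radiance(wavelength_nm, T)

# Riemann sum from 10 nm to 100 um in 1 nm steps (dl = 1e-9 m); by the
# Stefan-Boltzmann law this should come out close to 1361 W/m^2.
total = sum(planck_irradiance(wl, T=5772) * 1e-9 for wl in range(10, 100000))
```

That the sum lands close to the solar constant is a decent confidence check that π(Rsol/Re)² is the right radiance-to-irradiance factor.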
{"url":"https://floatingintheclouds.com/black-body-irradiance-spectrum/","timestamp":"2024-11-05T20:23:42Z","content_type":"text/html","content_length":"78002","record_id":"<urn:uuid:fd7b9339-5fc6-4aab-a53e-539e82c751fa>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00362.warc.gz"}
What is Yarn Count and How Does the Measurement System Work? - Emateks When it comes to yarns, yarn count is one of the most important terms. It expresses how thin or thick a yarn is, and the thinness or thickness of the yarn indicates what kind of fabric it is suitable for. There are basically two systems used to indicate this value: direct and indirect. In the direct system, the weight per unit length is calculated, and it is usually expressed as either Tex or Denier. Indirect systems are calculated as the length per unit weight and are expressed as Nm or Ne. The calculation of each parameter is different. Direct systems are directly proportional to the thickness of the yarn, whereas indirect systems are inversely proportional. Knowing this distinction is of great importance in making the right yarn choice. Yarn manufacturers give each yarn a count based on a particular system; even odd and even numbers are chosen deliberately to express the thickness and fineness of the yarn. In this article, we discuss in detail what each value means. Definition of Yarn Count Yarn count is a numerical expression of how thin or thick a yarn is. By definition, it can be summarized as the mass of the yarn per unit length (or the length per unit mass). Yarn count is of great importance because it determines the basic properties of the yarn. Yarns play a large role in the production of a textile product and in the construction of fabrics, so using yarn of the right thickness is one of the most basic requirements. For this reason, knowing what the number means is very important when selecting yarn. This number is expressed by different parameters, such as Tex or Denier, where larger values express thicker yarn. The Ne and Nm values, another type of parameter, follow the opposite logic: as the value increases, the yarn gets thinner.
For example, in an indirect system a yarn with a count of 50 is thinner than a yarn with a count of 20. How to Measure Yarn Count? Yarn count can be measured in a direct or an indirect system. In the direct system, the weight per unit length of yarn is calculated, so the yarn size is directly proportional to the measurement number: a higher value expresses a thicker yarn. For yarns such as silk and jute this system is preferred, and it is expressed as Tex or Denier. Tex refers to the weight in grams of a fixed length of yarn, 1000 meters. This method is one of the universal yarn counting systems. For example, a 30 tex yarn means that 1000 meters of it weighs 30 grams. Denier refers to the weight of 9000 meters of yarn in grams. The indirect system refers to the length of yarn per unit weight and is therefore inversely proportional to the measurement number: larger values in this system indicate a thinner yarn. The cotton count, indicated by Ne, refers to the number of 840-yard hanks per pound of yarn. Another parameter is the metric count, Nm, which refers to the number of 1000-meter hanks per kilogram of yarn. Because these values are inversely proportional, 40 Nm refers to a yarn thinner than 20 Nm. Types of Yarn Count There are many different numbering systems used to determine the yarn count correctly, and these systems are divided into two categories, direct and indirect. The Tex, Denier, and Jute systems are direct counting systems, while the English, Metric, and Worsted systems are indirect. These numbering types and how they are calculated are as follows. 1. English system: the number of hanks (840 yds) per pound. Mathematically, Ne = length (yds) / 840 yds (per hank) x 1 lb / weight (lb) 2. Tex: the weight in grams of every 1000 meters of yarn. Mathematically, Tex = weight (g) x 1000 m / length (m) 3.
Denier: the weight in grams of every 9000 meters of yarn. Mathematically, Denier = weight (g) x 9000 m / length (m) 4. Jute system: the weight in pounds of 14,400 yards of yarn. Mathematically, Spindle = weight (lb) x 14,400 yds / length (yds) 5. Worsted system: the number of hanks (560 yds) per pound. Mathematically, Worsted count = length (yds) / 560 yds x 1 lb / weight (lb) 6. Metric system: the number of 1000 m hanks per kg. Mathematically, Nm = length (m) / 1000 m x 1 kg / weight (kg) How Long is a "Hank"? A hank is a coiled bundle of yarn. Its length is not defined by a single standard, but a football-field analogy helps: 20 hanks of cotton are about 168 football fields long and weigh 1 pound. In general, hank length differs according to the type of yarn, its thickness, and the techniques used. According to Which System is the Yarn Count Determined? Which system to use when calculating the yarn count depends on a number of criteria. In general, the parameters Tex, Denier, Nm, or Ne are preferred to express this value. Tex and Denier, from the direct measuring system, are the most commonly used parameters. Which parameter to choose varies according to industry standards, yarn type, or regional textile practice. Usually the Tex system is preferred in technical textiles, while Denier is used for filament yarns. Ne falls into the category of indirect systems and is an ideal choice for expressing the thickness of cotton yarns, while Nm is ideal for natural fibers such as wool. How Do Odd and Even Yarn Counts Differ? It also matters whether the yarn count is expressed as an odd or an even number. As noted, the counts expressed by Tex and Denier are directly proportional to yarn thickness: as the value grows, so does the thickness.
Yarn counts expressed in Nm and Ne are inversely proportional to thickness. Odd numbers represent yarns that are thinner and lighter in structure, while even numbers are used for thicker and more durable yarns. Knowing this distinction is very important for the correct choice of yarn, because the thickness can be read not only from the specified parameter but also from whether the value given is odd or even. Conversion of Yarn Counts It is possible to convert between Tex, Denier, and Ne, the commonly used measuring parameters. It is quite easy to convert to the desired unit by applying a small set of formulas, listed below. 1. "Denier = Tex x 9" is used to convert Tex to Denier. 2. "Tex = Denier / 9" is used to convert Denier to Tex. 3. "Tex = 590.5 / Ne" is used to convert Ne to Tex. 4. "Ne = 590.5 / Tex" is used to convert Tex to Ne. 5. "Ne = 5315 / Denier" is used to convert Denier to Ne. 6. Lastly, "Denier = 5315 / Ne" is used to convert Ne to Denier. These formulas make it easy to convert on paper, but yarn count can be expressed in many other units as well, so it can also be useful to use conversion websites for more comprehensive conversions.
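The six conversion formulas above can be expressed as one-line helpers — a minimal Python sketch (the function names are my own, not from the article; 590.5 and 5315 are the conversion constants given above):

```python
# Yarn count conversions between Tex, Denier, and Ne (cotton count),
# following the formulas in the text.

def tex_to_denier(tex):
    return tex * 9.0

def denier_to_tex(denier):
    return denier / 9.0

def ne_to_tex(ne):
    return 590.5 / ne

def tex_to_ne(tex):
    return 590.5 / tex

def denier_to_ne(denier):
    return 5315.0 / denier

def ne_to_denier(ne):
    return 5315.0 / ne

# A 30 tex yarn (1000 m weighs 30 g):
d = tex_to_denier(30)   # 270 denier
n = tex_to_ne(30)       # ~19.7 Ne (higher Ne = thinner yarn)
```

Note that the two constants are consistent with each other: 590.5 × 9 ≈ 5315, so converting Tex → Denier → Ne agrees with converting Tex → Ne directly.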
{"url":"https://emateks.com.tr/en/what-is-yarn-count-and-how-does-the-measurement-system-work/","timestamp":"2024-11-14T08:16:12Z","content_type":"text/html","content_length":"105639","record_id":"<urn:uuid:f999f370-e5e6-4a8c-a5a8-6fc07a2a56e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00075.warc.gz"}
Export Reviews, Discussions, Author Feedback and Meta-Reviews Submitted by Assigned_Reviewer_7 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper tackles the problem of discovering mid-level features using discriminative mean-shift. The algorithm proposed by the authors involves the modification of the standard mean-shift algorithm to a discriminative setting such that it can be applied to discover mid-level patches. The authors also propose a more principled way of evaluating the results of mid-level patch discovery using purity-coverage curves, and show that their algorithm outperforms existing methods in these metrics. In addition, their algorithm significantly outperforms existing methods on the commonly used MIT 67-scene dataset when used without fisher vectors, and improves slightly when combined with them. While the overall performance improvement as compared to BoP+IFV [8] is "only" 2.7% (still significant), the performance when using mid-level patches alone improves significantly as compared to BoP [8] alone, i.e. 46% vs. 64%. Overall, I feel that the use of the discriminative mean-shift algorithm presented is novel, especially in this detection-like setting, and the results are very promising. The paper is well written and the results are well analyzed. Thus, I would recommend that this paper be accepted to NIPS. Some minor concerns/suggestions: - What is the running time of your algorithm, and how does it compare to existing methods? - Will you release the source code after publication? - It might be worthwhile citing this paper, since it shares a common name, and mentioning why it's different: Wang, Junqiu, and Yasushi Yagi. "Discriminative mean shift tracking with auxiliary particles." Computer Vision–ACCV 2007. Springer Berlin Heidelberg, 2007. 576-585.
- It would be interesting to try this approach on SUN, where object segmentation is available, to identify object categories that are commonly selected by the mid-level patches. There is likely to be some semantic structure in the automatic discovery. Q2: Please summarize your review in 1-2 sentences The paper proposes the idea of discriminative mean-shift for the discovery of mid-level visual elements. The paper has very promising results and is well written/analyzed. Submitted by Assigned_Reviewer_10 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper proposes a discriminative clustering algorithm inspired by mean shift and the idea of finding local maxima of the density ratio (the ratio of the densities of positive and negative points). The work is motivated by recent approaches of [4,8,16] aimed at discovering distinctive mid-level parts or patches for various recognition tasks. In the authors' own words from the Intro, "The idea is to search for clusters of image patches that are both 1) representative... and 2) visually discriminative. Unfortunately, finding patches that fit these criteria remains rather ad-hoc and poorly understood. While most current algorithms use a discriminative clustering-like procedure, they generally don't optimize elements for these criteria, or do so in an indirect, procedural way that is difficult to analyze. Hence, our goal in this work is to quantify the terms 'representative' and 'discriminative', and show that a generalization of the well-known, well-understood Mean-Shift algorithm can produce visual elements that are more representative and discriminative than those of previous approaches." I find this motivation very compelling and like the formulation of discriminative clustering in terms of maximizing the density ratio.
The derivation of the clustering objective starts out promisingly enough on pp. 2-3, but by p. 4 it makes a number of confusing leaps that seem to take it pretty far from the original formulation. Specifically, on lines 188-190, the authors comment on eq. (5): "representing a patch multiple times is treated the same as representing different patches. Ideally, none of the patches should be double-counted." This leads to the introduction of "sharing coefficients" that ensure competition between clusters. However, isn't "double-counting" actually necessary in order to accurately estimate the local density of the patches, i.e., the more similar patches there are close to a given location in the feature space, the higher the density should be? Or does "double-counting" refer to something else? This needs to be clarified. Next, the discussion following eq. (6) appears to introduce heuristic criteria for setting the sharing coefficients, and even more heuristics are needed to deal with overlapping patches (lines 199-211). As a result, by the time we get to the final objective (eq. 7), the original one (eq. 1) seems to have been abandoned or changed beyond recognition. While the authors start out rightly decrying the heuristic nature of approaches such as [4,8,16], they end up deriving something no less heuristic. + The idea of deriving a discriminative clustering algorithm by maximizing the density ratio is novel and compelling. + The experimental results are the main point in favor of accepting the paper. According to Table 1, the proposed approach outperforms even the very recent results of [8] on the MIT Indoor Scene dataset (however, see below). - The derivation of the clustering objective makes several poorly explained leaps (see above). The authors need to better motivate the different steps of their derivation and explain the relationship to related methods.
For one, while the paper is titled "Discriminative Mean Shift", the connection of the proposed method and mean shift is less than apparent: the original mean shift formulation is purely local (each point in the feature space finds its closest local maximum), while the method derived in this paper appears to globally optimize over cluster centers by introducing a "competition" criterion. If I understand correctly, this is not simply a difference of maximizing local density vs. density ratio. A better discussion would be helpful. Also, the final objective (eq. 7) has a strong margin-based feel to it. Similarity (or lack thereof) to other existing margin-based clustering approaches should be discussed. - For that matter, there should be citations to other existing discriminative clustering formulations, such as Linli Xu, James Neufeld, Bryce Larson, and Dale Schuurmans, Maximum margin clustering, NIPS 2005. - The proposed optimization algorithm does not appear to have any theoretical guarantees (lines 257-259). - While the experimental results appear extremely strong, it is hard to get any insight as to why the proposed method outperforms very recent well-engineered systems such as those of [8]. Is this due to the clustering objective or to other implementation details described in lines 406-422? Since there are many potential implementation differences between the proposed approach and the baselines it compares against (including feature extraction, pre- and post-processing, classification), the causes behind the superior performance reported in this paper are not at all clear. Some of the claims that are made in the experimental section are not supported by any quantitative or qualitative evidence shown in the paper, e.g., "small objects within the scene are often more useful features than global scene statistics: for instance, shoe shops are similar to other stores in global layout, but they mostly contain shoes." 
Q2: Please summarize your review in 1-2 sentences This paper is above the acceptance threshold owing primarily to the strong experimental results, but the derivation of the clustering method is not clearly presented and appears to have several poorly motivated heuristic steps. Submitted by Assigned_Reviewer_11 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The paper proposes a new method for learning discriminative image patches predictive of the image class. The procedure starts by considering all (?) patches in an image collection. The discriminative patches are then found as the centres of patch clusters, obtained by a discriminative version of the mean-shift algorithm. Discriminativeness is incorporated into mean shift by dividing the kernel density estimate of the positive patches by that of the negative ones. - The problem of learning discriminative patches/parts is an important one. - The classification performance of the proposed discriminative patches is very good, at least on MIT Scene 67. - There are a few interesting insights in the formulation. - The authors start from mean-shift and gradually transform it into a very different algorithm. In the end, I am not sure that the mean-shift interpretation is useful at all. - Several aspects of the formulation and derivation are heuristics. Some of the heuristics are changed on a dataset-by-dataset basis. - The learning algorithm is also heuristic, with no formal guarantee of correctness/convergence. - There are several small errors in the formal derivation. Detailed comments: The empirical results are sufficiently strong that this paper should be considered for publication. However, the formulation and technical derivation should be improved significantly, as detailed below. ### l.125: The notation should be clarified.
The expression max(d(x_i,w) - b, 0) is the triangular kernel assuming that d() is the negative of the Euclidean distance (as stated on l. 130) *and* that the bandwidth b is negative, which is counter-intuitive. max(b - d(x_i,w), 0) is more natural. ### Eq. (1). arglocalmax is never defined. ### Eq. (2). This equation indicates that, contrary to what is stated in the manuscript, the algorithm is not maximizing a ratio of density values, but an energy function computed with a sort of adaptive bandwidth. This energy is given by E(w) = sum_i max(d(x_i^+, w) - b(w), 0), where the bandwidth b(w) is selected as a function of the current point w as the value b satisfying sum_i max(d(x_i^-, w) - b, 0) = epsilon. Several aspects of this formulation should be clarified: (a) The triangular kernel should be normalized by its mass to get a proper density estimator. Interestingly, this normalization factor, which depends on w, cancels out in the ratio (2), which perhaps "saves the day". (b) This formulation should be contrasted with standard adaptive mean shift (e.g. B. Georgescu, I. Shimshoni, and P. Meer. Mean shift based clustering in high dimensions: A texture classification example. In Proc. ICCV, 2003). There the bandwidth is chosen as a function of x_i rather than w, and the normalization of the kernels becomes crucial. (c) What happens if there is more than one value of b(w) satisfying the constraint in (2)? ### Eq. (3) l. 157: It is the _squared_ Euclidean distance d^2() that reduces to the inner product, not the Euclidean distance d(). Note that restricting the domain to the unit sphere in principle requires modifying all the densities to have this manifold as domain. Fortunately, the required modification (normalising factors) does not seem to have a consequence in this case. l. 177: it seems to me that changing lambda _does_ change the solution w, not just its norm. To keep the direction of w invariant while changing lambda, epsilon must change as well.
Therefore, choosing different values of lambda should have an effect on the solution, unless all values of epsilon are equally good (but then why have epsilon in the first place?). ### Eq. (5) This is where the proposed method diverges substantially from mean shift. Mean shift applies hill climbing to each w independently, starting from w = x_i for all data points, in order to determine a clustering of the data x_i themselves. Here, instead, the authors formulate (5) and (6) as a method to "explain" all the data. Practical differences include: - mean-shift is non-parametric, in the sense that the number of clusters is not specified a priori. Here the authors start with a fixed number of cluster centers w1...wK and optimise those to fit the data, which is more similar to k-means. - the authors worry about the fact that "patches should not be double counted" and introduce a set of soft data-cluster associations alpha_ij. This is difficult to map onto the standard semantics of mean-shift clustering, where the association of a data point x_i to a mode wk is obtained implicitly by the locality of the kernel estimator only. As the authors argue, the alpha_ij establish a "competition" between modes to explain data points, which again is more similar to k-means. - the way the alpha_ij are updated has little to do with the optimization of (6) and is completely ad hoc (l. 199-211). ### Optimization method Unfortunately this method is just a heuristic (l. 212-259). ### Experiments [5,8] have mechanisms to avoid or remove redundant patches, which do not seem to be incorporated in this baseline. Removing such redundant patches might affect Figs. 3 and 4. Scene-67 experiments: There are several tunings of the representation (e.g. the number of HOG cells in a descriptor) that probably help the method achieve state-of-the-art results. While this is ok, the authors should consider re-running this experiment with the baseline discriminative patches obtained as in Fig. 4.
### Other minor comments l.315: I have seen [5] and [8] in CVPR 2013, and it seems to me that they both have LDA retraining. Q2: Please summarize your review in 1-2 sentences The problem of learning discriminative parts of visual classes is important and the results in this paper are very good. However, there are several minor technical problems in the derivation of the method. [Meta-review:] As noted by R10 and me, the derivation makes several unclear leaps. In fact, there are several minor formal errors in the paper that were highlighted in the reviews, none of which is addressed in the authors' rebuttal. The fact that several aspects of the method are heuristics, and such heuristics are tuned on a dataset basis, was not addressed either. All reviewers agree to accept the paper on the grounds of the solid experimental results; however, the AC may want to take into account the existence of these formal problems before reaching a final decision. Q1: Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 6000 characters. Note however that reviewers and area chairs are very busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point. We would like to thank the reviewers for their careful reading of the paper, insightful comments, and helpful suggestions. Particularly in regards to the clarity of the presentation, we now see what the confusing points are and will use the reviewers' suggestions to clean up the story (as well as fix some notational bugs and missed citations). Overall, reviewers seem positive about our idea, results and analysis; but R10 & R11 voice concerns about some heuristic aspects of the algorithm, particularly after "competition" is added (line 181). We agree that this aspect of the formulation is not as simple or elegant as one might hope for.
Perhaps we should have emphasized more strongly that the simpler algorithm, without competition, actually accounts for the majority of the performance gains we see. The yellow line in Figs. 3 and 4 is pure discriminative mean-shift from equation (5), where each element is optimized locally. Furthermore, we tried the experiment R11 suggests, running the algorithm using only the mean-shift formulation, without competition, on Indoor-67. The result was 61.64%, a loss of 2.5% from the full algorithm with competition, but still far outperforming other patch-based methods. In the revision, we plan to cleanly separate (into two sections) the principled, discriminative mean-shift formulation and the "competition heuristic", and discuss the pros/cons of the latter. Specific answers: R10: "Some of the claims that are made in the experimental section are not supported" These claims are based on findings in previous papers. The example given by R10 is a rephrasing of the claim in [8]: "scenes are ....characterized by their global layout (e.g. corridor) and also those that are best characterized by the objects they contain (e.g. bookshop)". We will add citations for these claims. R11: "What happens if there is more than one value of b(w) satisfying the constraint in (2)?" \sum_{x} max(w^T x - b, 0) is strictly decreasing in b, except when b is so large that the entire sum is 0; hence the b which satisfies the constraint is guaranteed to be unique for fixed w and epsilon. R11: "(line 177) it seems to me that changing lambda _does_ change the solution w... epsilon must change as well" Correct; epsilon must be scaled by the inverse of lambda. We will clarify this in the final version. R7: Running time? Around 2000 CPU-hours for both experiments; i.e. it's comparable to [4], although the current Matlab implementation could likely be sped up substantially. R7: Source code release? Yes, we are committed to releasing all the source code and results.
R11: [5,8] have mechanisms to avoid or remove redundant patches. De-duplication of elements is an important part of all patch discovery algorithms, but unfortunately previous algorithms use hand-tuned thresholds for de-duplication, which makes the algorithms difficult to compare. So instead, our work uses a greedy selection criterion that is the same for all algorithms; this selection process is designed to optimize for the coverage metric plotted in the curve, and represents the "best" de-duplication for this metric. So, while we could have used the de-duplication schemes of these previous works to measure their performance, it would have actually resulted in significantly lower performance for those algorithms. R10: Isn't "double-counting" actually necessary... Or does "double-counting" refer to something else? Yes, the intended meaning of "double counting" in this section is that a single patch may be a member of two or more different element clusters. This indicates that the two elements are representing the same thing, which is wasteful from a computational standpoint. We will clarify this. R11: "I have seen [5] and [8] in CVPR 2013 and it seems to me that they both have LDA retraining" We have asked the authors of both papers for clarification, and it seems that [8] does use LDA retraining, so it is indeed more similar to the "LDA retrained" baseline. [5] uses LDA for initialization only, and then switches to SVM retraining. We will clarify this. R10: The algorithm with competition bears a resemblance to k-means. We agree that this resemblance exists, especially insofar as both k-means and the competition portion of our algorithm try to prevent data points from becoming members of multiple clusters (though perhaps EM for Gaussian mixture models is even more relevant, due to the soft cluster membership). We will mention this.
R10: It is hard to get any insight as to why the proposed method outperforms... Is this due to the clustering objective or to other implementation details described in lines 406-422? While we have a number of implementation details, they are all fairly standard and mostly in common with previous work, e.g. [16].
{"url":"https://proceedings.neurips.cc/paper_files/paper/2013/file/f9b902fc3289af4dd08de5d1de54f68f-Reviews.html","timestamp":"2024-11-03T00:39:46Z","content_type":"text/html","content_length":"30549","record_id":"<urn:uuid:8001db6f-789e-4c7e-a975-d641052c4235>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00882.warc.gz"}
Welcome to the ColossusCoinXT project! • Need some COLX? Visit our Exchange listings page. Now let's put those COLX coins to work and get some staking rewards! This guide will explain all staking requirements in depth; however, for advanced users or for those already familiar with staking configuration, here are the primary COLX staking requirements: 1. Your wallet must be updated to the latest version (see the current version here: https://github.com/ColossusCoinXT/ColossusCoinXT/releases/). 2. Your wallet must be online and synchronized. 3. Your wallet must be encrypted and unlocked for anonymization, automint, and staking only. 4. Your wallet must hold COLX for 8 hours or more. If you've completed those four tasks, you're ready to stake! Relaunch your wallet and check the lower right corner. The staking status icon is the COLX icon between the lock and the auto-mint icon. Green means you're currently staking; gray means you're not staking yet. (Sometimes it takes a minute or two after launching before the icon turns green.) If your staking icon is GREEN, you're good to go! Hover the mouse over the COLX staking icon to confirm that the "Staking is active" banner appears. Congrats, you're now staking COLX, and will begin receiving staking rewards based on the amount of coins you have staked. Ready to receive staking rewards? See the Receiving Staking Rewards section to see what staking rewards will look like and learn about orphan blocks! Been staking for a while but still haven't gotten a reward? See Calculating Your Staking Rewards. Questions about coin splitting? Read the Coin Splitting Guide. If your staking icon is GRAY, there's a problem: COLX staking is simple, but there are a lot of small requirements, and failure to meet one of them is usually the reason you are not receiving staking rewards. Review each section and verify that you're meeting each requirement.
Follow the directions given to fix any issues you encounter, and keep going until your staking icon is green.

Mac users: This guide was written with Windows screenshots, but the Mac wallet is virtually identical. The main difference is that the menu items are on the menu bar at the top of the screen instead of the top of the wallet app, so look there when the File, Settings, or Tools menus are referred to. For any other significant differences, there will be a note labeled "Mac users:".

This guide contains information and fixes on every common staking issue. However, rather than check them all, let's start by seeing if we can skip straight to the problem! The COLX console includes a utility to help troubleshoot staking issues. Click Tools, Debug Console. This will open the COLX console. Type the command getstakingstatus. This will give you an output with several true / false properties. A false on any of these properties will keep your staking icon from turning green. The names of these properties are clues about what is wrong.

validtime : false usually indicates a synchronization issue. Skip to the section Wallet must be synchronized.

haveconnections : false is also a synchronization issue. Again, skip to the section Wallet must be synchronized.

walletunlocked : false indicates that your wallet is locked. Skip to the section Wallet must be unlocked for anonymization, automint, and staking only.

mintablecoins : false indicates that you have no mature coins. See Wallet must contain mature coins.

enoughcoins : false indicates that your balance is less than the "reserve" balance. (The COLX reserve balance defaults to 0, so this is almost never an issue.)

mnsync : false means that masternode data is not synced. Skip to the section Wallet must be synchronized.

stakingstatus : false means that either one or more of the previous properties was false, or there is another issue. If this is the ONLY false, it usually means one of two things.
First, it could be that your wallet is unlocked, but you forgot to check the box that says "for anonymization, automint, and staking only". See Wallet must be unlocked for anonymization, automint, and staking only. Second, sometimes the wallet simply fails to begin staking. Close and reopen your wallet, unlock it for staking, and allow five minutes for it to begin staking. After making a correction for one of these issues, close and reopen your wallet, wait for it to synchronize, and re-run the getstakingstatus command to confirm that all previous false flags are now marked true. Make one change at a time. Sometimes it will take a few attempts to get everything set to true. Normally, fixing the issues flagged as false will quickly get your wallet staking. However, if you are still having issues, or just want to know more about COLX staking, read on for a more complete listing of possible issues and their fixes. There is also a guide for calculating staking rewards, and a guide for coin splitting.

COLX Staking Troubleshooting Table of Contents:

Wallet must be updated to the latest version
Wallet must be unlocked for anonymization, automint, and staking only
Wallet must contain mature coins
Calculating Your Staking Rewards
Appendix A: Firewall configuration

Wallet must be updated to the latest version

To begin staking, your wallet must be updated to the current version. The latest version install is available on the official wallets download page here. There is also an update guide available here. It is no longer necessary to add staking=1 or seed nodes to your ColossusXT.conf. These settings are now built into the application.

Wallet must be encrypted

Your COLX wallet must be encrypted to stake. Look for the locked or unlocked padlock icon in the lower right corner of the wallet app, like this: or this: If the wallet is not encrypted, the padlock icon will be missing entirely, like this: This means that your wallet is not encrypted. Click on Settings, Encrypt Wallet.
You will be prompted to create a password, and will receive a stern warning about password safety. Be sure that your password is safe and secure, and click OK. The app may appear unresponsive for up to a minute as it encrypts your wallet. Be patient. When it is complete, the wallet will notify you that it needs to close. Restart the wallet and verify that you now have the padlock icon in the lower right.

Wallet must be unlocked for anonymization, automint, and staking only

A locked wallet cannot stake COLX. This is indicated by the padlock icon in the lower right corner of the wallet app. (If the padlock icon is missing, the wallet is not encrypted. See the previous section.) An open gray padlock is an unlocked wallet; a closed green padlock is a locked one. Hover over the lock icon for additional details. A correctly configured staking wallet will say "Wallet is encrypted and currently unlocked for anonymization and staking only": If this is not the case, click on Settings, Unlock Wallet. Enter your password, click the checkbox marked "For anonymization and staking only", and click OK. Recheck the padlock icon to verify that it's now gray and unlocked, and hover over the padlock icon to verify that it's unlocked for staking and anonymization. Important Note: The wallet app ALWAYS locks your COLX wallet by default every time the wallet is opened. This unlocking step will always need to be repeated, EVERY time your wallet or PC is restarted.

Wallet must be synchronized

Check the lower right corner of your COLX wallet. If it's correctly synchronized, there will be a green check mark. Hover the mouse over the green check mark to verify that it says "Synchronization Finished. Up to date." and displays the current block number, like this: If your wallet is not synchronized, it might look like this: The network connectivity icon or "No block source available" error indicates that there is no connectivity to the COLX network. Be patient!
Usually this will resolve itself on its own by generating its own node list using the hardcoded seed nodes. Within 3-5 minutes, it should look like this: Hovering over the yellow circle arrows icon will give you the synchronization status and the current block number. Synchronization can take anywhere from a few minutes to several hours, depending on your internet connection. Once the yellow circle arrows turn to a green check mark, synchronization is complete. Seed nodes are built into the wallet application; it is no longer necessary to add nodes to the ColossusXT.conf configuration file. However, wallets updated from previous installs may still contain legacy nodes that were previously added. If syncing doesn't begin within a few minutes, try closing your wallet, deleting any addnode= entries from your ColossusXT.conf, deleting your peers.dat, and then restarting your wallet. To locate the ColossusXT.conf file in Windows, press Windows+R, then type %appdata%\ColossusXT and click OK. To locate the ColossusXT.conf file on a Mac, press Command+Shift+G, then type ~/Library/Application Support/ColossusXT and click Go. If synchronization does not start after 5 more minutes, you may have a network connectivity issue. See Appendix A: Firewall Configuration.

Wallet must contain mature coins

COLX staking requires that coins be held for 8 hours before staking can begin. To check if your coins are mature enough, click on Send, and then Open Coin Control. If the Coin Control button isn't available, you need to enable it. To do that, click on Settings, Options, Wallet, and check the box for "Enable coin control features", then restart your wallet. Mac users: On a Mac, access the coin control option from the menu bar at the top of the screen, under PIVX Core, Preferences, Wallet. From the Coin Control screen, you can see all of your coins, their addresses, and the number of confirmations each transaction has.
At 1 minute per block, 8 hours is equal to 480 confirmations. Check the confirmations column for each input and verify that the confirmation count exceeds 480. If it does, you are good to go. If it does not, then all you need to do is wait. In this wallet example, all available inputs are mature. Note: You don't need to leave your wallet open while waiting for coins to mature. Additionally, closing your wallet will NOT reset coin maturity. Confirmations continue to take place on the blockchain whether or not your wallet is online. Once you reopen your wallet, it will synchronize and you'll see all the confirmations that took place while you were offline.

Receiving Staking Rewards

Now that you're staking COLX, you'll begin to receive COLX rewards. Rewards will appear on the main "Overview" screen of the COLX wallet as recent transactions: The green hammer and pick icon signifies a staking reward. Staking rewards are currently 480 COLX per reward. Note the address label of "Payment" below the transaction. This is the label assigned by the user when the address was created. If the label field was left blank, the address will be displayed instead, like this: Every now and then, you may see a staking reward transaction in red, with the amount in brackets: This is what's known as an "Orphan Block" reward. Hover the mouse over this transaction to see additional details. Orphan blocks are a normal part of blockchain technology and are nothing to worry about. When you see an orphan block transaction, it means that your wallet and another COLX wallet attempted to create the same block. The network automatically voted to resolve this conflict, and your wallet lost the vote. It's always disappointing to see a transaction that won't pay off, but this is a normal part of staking. Remember, this will not affect your holdings, your coin maturity, or your staking status. Your wallet will continue staking normally with no interruption.
Calculating Your Staking Rewards

COLX staking is a random raffle reward system. Each block is a drawing, each coin you stake is a ticket, and you're competing with everyone else who is staking COLX. So, depending on how many coins you have, it can take a while to receive a staking reward. There is a component of luck, but you can calculate your average reward time based on a few key factors. Let's call your number of personal staked coins "P". This is the number of mature coins (480 or more confirmations) in your own personal staking wallet. Let's call the total number of staked coins "T". This is the total number of COLX currently staked by all COLX holders worldwide. You can get the exact number from the COLX block explorer here. Hover your mouse above the blue section of the staking donut graph to see the current COLX stake count. As of writing this guide, the current T number is 2.5B. There are 1,440 (60 minutes * 24 hours) blocks per day. Thus, we can calculate an average staking reward time using this equation: COLX Rewards Equation: T / (P * 1440) = average number of days per reward. The inverse of this number, (P * 1,440) / T, is the daily odds of being rewarded. So, for example, let's say you're staking 840,000 COLX. Your average staking reward time can be calculated by 2.5B / (840,000 * 1,440). This equals about 2.07, so you'll get a reward, on average, every 2.07 days. The inverse, (840,000 * 1,440) / 2.5B, equals about .48, meaning that every day, you have a 48% chance of receiving a staking reward.

Coin Splitting Guide

As of April 28th, 2018, the staking coin maturity requirement was reduced from 7 days, or 10,080 confirmations, to 8 hours, or 480 confirmations. One benefit of this change is that coin splitting is no longer necessary to optimize staking payouts. As such, it is no longer recommended that users split staking coins into smaller inputs. However, there is still some automatic coin splitting taking place.
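The rewards arithmetic above is easy to sanity-check in a few lines of Python (a quick sketch using the example figures from this guide; the block time and total stake will vary over time):

```python
BLOCKS_PER_DAY = 60 * 24  # one COLX block per minute = 1,440 blocks per day

def avg_days_per_reward(personal, total_staked):
    """T / (P * 1440): average number of days per staking reward."""
    return total_staked / (personal * BLOCKS_PER_DAY)

def daily_reward_odds(personal, total_staked):
    """(P * 1440) / T: the chance of receiving a reward on any given day."""
    return (personal * BLOCKS_PER_DAY) / total_staked

P, T = 840_000, 2.5e9  # the example numbers from above
print(round(avg_days_per_reward(P, T), 2))  # → 2.07
print(round(daily_reward_odds(P, T), 2))    # → 0.48
```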
Staking coin inputs are automatically split in half when a staking reward is received. The COLX wallet has a built-in limit on this automatic coin splitting. By default, this limit is 2000; rewarded staking inputs will not be split into outputs smaller than 2000 COLX. Users should consider setting this threshold to a higher number (100K - 1M) to avoid having to manage too many small inputs, which can lead to higher transaction fees when the coins are moved at a later date. To change this setting, click Tools, Debug Console. In the console window, you can type getstakesplitthreshold to view the current splitting threshold setting. To change the setting, use the command setstakesplitthreshold followed by the desired number. The maximum threshold value is 999,999. Note: Your wallet needs to be unlocked to change this setting.

Appendix A: Firewall configuration

On Windows and in OSX (Mac), a prompt comes up the first time the Wallet app is launched. If you clicked through it without selecting the correct settings, you can have firewall issues which prevent your wallet from synchronizing. This guide assumes that you are using the built-in firewall apps for Windows and OSX. If you're using an alternative product, study these steps and replicate them within the product you are using. For Windows machines, first, close your COLX wallet. Then, right-click the Start button and click "run". Type wf.msc and click OK. This will launch the Windows firewall app. Under Inbound rules, locate and select the three rules associated with COLX. Right-click these rules and click "Delete". Click "yes" on the "are you sure?" prompt, and close the firewall app. Relaunch your COLX wallet, and wait for the Windows Defender Firewall prompt. Ensure that both "public" and "private" are checked, and click Allow Access. Exit and relaunch the COLX wallet. If everything else is configured correctly, your wallet should now begin to sync.
Mac Users: Click the Apple icon in the upper left corner, then click System Preferences: Double-click the “Security and Privacy” icon, then click the “Firewall” button. Click the “Unlock” button, and enter your password: Click the “Firewall Options” button now that it’s enabled. Select the rule for COLX, then click the minus button to delete it, then click OK. Close out of the security settings, then close and restart your COLX wallet. You should now be prompted to allow COLX-Qt-app to accept incoming connections. Click “Allow”. Exit and restart your COLX wallet. If everything else is configured correctly, your wallet should now begin to sync. Additional Resources If you have any additional questions or staking issues, or would like to provide feedback or suggestions for this document, please reach out to us in #general-support on the ColossusXT Discord:
Online Number to Word Converter | How to Convert Numbers into Word?

Check out the following guidelines for converting a number to a word:

• With the help of the number words from one to ten, you can make words of higher value.
• Learn the number words from eleven to twenty to make other words of higher value.
• Once you learn the words from one to twenty, remember the number words such as thirty, forty, fifty, sixty, and so on up to hundred.
• As the numbers increase in value and become larger with more digits, the names start to change.

Numbers of Higher Values

10 - Ten
100 - Hundred
1,000 - Thousand
10,000 - Ten Thousand
100,000 - Hundred Thousand
1,000,000 - One Million

Question: Write 735, 152, and 2003 in words.

Using the basic words above, we can write these numbers as follows:

735 - Seven Hundred Thirty Five
152 - One Hundred and Fifty Two
2003 - Two Thousand and Three

Make use of the handy tools available for math concepts all at one place on onlinecalculator.guru and clear all your concerns.
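The rules above can be sketched as a small Python function (illustrative only; it covers numbers below one million and omits the optional "and"):

```python
ONES = ["", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
        "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen",
        "Seventeen", "Eighteen", "Nineteen"]
TENS = ["", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy",
        "Eighty", "Ninety"]

def to_words(n):
    """Convert 0 <= n < 1,000,000 to English words using the rules above."""
    if n == 0:
        return "Zero"
    parts = []
    if n >= 1000:  # thousands group reuses the same rules recursively
        parts.append(to_words(n // 1000) + " Thousand")
        n %= 1000
    if n >= 100:
        parts.append(ONES[n // 100] + " Hundred")
        n %= 100
    if n >= 20:
        word = TENS[n // 10]
        if n % 10:
            word += " " + ONES[n % 10]
        parts.append(word)
    elif n:
        parts.append(ONES[n])
    return " ".join(parts)

print(to_words(735))   # → Seven Hundred Thirty Five
print(to_words(2003))  # → Two Thousand Three
```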
An Introduction on Liquid Pressure Drop and How to Select a Pump

Advancements in technology continue to drive the complexity and functionality of new products. As these products increase in complexity, OEMs are adding more components to meet the increasing functionality demands of their customers, which comes at additional cost in materials, manufacturing, logistical complexity, and assembly. Boyd designs thermal systems for maximum performance at a specific flow rate. Less flow will cause the system to underperform. Flow rate is dependent upon the system's pressure drop and the pump's head pressure. This application note reviews how to determine your pressure drop and how to select a pump for your system. It also provides tips on how to minimize pressure drop.

Determining System Pressure Drop

Pressure drop is a term used to describe the differential pressure that a fluid must overcome to flow through a system. Pressure drop is a result of resistance caused by friction (shear stresses) or other forces (such as gravity) acting on a fluid. The pressure drop is roughly proportional to the square of the flow rate: when the flow rate doubles, the pressure drop increases by a factor of four. The pressure drop of a system is equal to the sum of each component's pressure drop within the system, which includes hoses, cooling component(s), and any other sections of the system. In order to determine the system pressure drop curve, pressure drop at various flow rates needs to be calculated and plotted. For example, if a system has a CP10 tubed cold plate attached to a 6105 copper heat exchanger with 10 feet of 3/8″ tubing, add the CP10, 6105, and hosing liquid pressure drop curves together.
1-2 psi is a good assumption for the standard pressure drop of 10 feet of tubing at 1-2 GPM. When the results are plotted, the graph should look similar to Figure 1.

Selecting a System Pump

In general, the flow rate provided by a pump is inversely proportional to pressure, which means that the flow rate will increase as pressure decreases (see Figure 2). In order to select a pump with appropriate head pressure, the pump curve provided by the pump manufacturer should be plotted over the system pressure drop curve. The system will operate at the intersection of the two curves. In our example, the pump will operate at 1.6 GPM and 13.5 psi (see Figure 3) because the two plotted lines intersect at this point. If the pressure drop of the system is known for one point, the curve can be estimated by drawing a straight line from no flow and no pressure drop to the known pressure drop point. The line's intersection with the pump curve provides a good estimate of the expected flow rate. In our example, assume a known system pressure drop point of 18 psi at 2 GPM (see Figure 4). Using this method, the estimated system flow rate is 1.5 GPM, close to the 1.6 GPM determined using the more precise method.

Minimizing Pressure Drop

In most cases minimal pressure drop through a system is desirable. Some tips on how to reduce pressure drop are:

• When feasible, keep the number of 90° bends to a minimum. Like a kink in a garden hose, a sharp bend causes pressure drop.
• Keep hose lengths as short as possible. Longer hose or tube lengths create greater surface area in contact with the fluid, causing additional fluid friction and pressure drop.
• Work with large diameter hoses. Ever try to drink through a narrow coffee stirrer? The small diameter makes you work much harder than you would with a regular straw.
• Where possible, use a fluid that has low viscosity. Fluid viscosity is a liquid's ability to flow (think about water as opposed to molasses).
Using a liquid that has high viscosity will adversely affect the pressure drop of your system.
• Quick disconnect fittings should be avoided, as they often cause unnecessary loss of pressure.

Remember the importance of pressure drop and match the pump curve to your system pressure drop curve. The pressure drop can be minimized by removing the kinks; avoiding long and thin hoses; and keeping the system on the same level. Follow these simple steps and your thermal solution will deliver the promised performance.
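The curve-intersection idea described above can be made concrete with a short Python sketch. The numbers here are hypothetical, not Boyd data: the system curve is modeled as ΔP = k·Q² (with k fit through one assumed known point), the pump curve as a straight line, and the operating point is where the two curves cross:

```python
def system_dp(q_gpm, k=4.5):
    """System pressure drop curve, psi; k chosen so 2 GPM -> 18 psi (hypothetical)."""
    return k * q_gpm ** 2

def pump_head(q_gpm, max_head=20.0, max_flow=3.0):
    """Hypothetical linear pump curve: max_head psi at zero flow, zero head at max_flow GPM."""
    return max_head * (1.0 - q_gpm / max_flow)

# The system operates where the curves intersect; bisect on their difference,
# which is monotonic (pump head falls, system drop rises, with flow).
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if pump_head(mid) > system_dp(mid):
        lo = mid  # pump still delivers more head than the system needs
    else:
        hi = mid
q_op = (lo + hi) / 2.0
print(f"operating point: {q_op:.2f} GPM at {system_dp(q_op):.1f} psi")
# → operating point: 1.49 GPM at 10.0 psi
```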
Add fractions with unlike denominators and rewrite the sum as a mixed number

Learn how to add fractions with unlike denominators. Multiply the denominators together to find the common denominator, then create equivalent fractions with the same denominator. Add the renamed fractions. If the sum is an improper fraction, rewrite it as a mixed number.
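The steps above can be sketched in a few lines of Python (illustrative; 3/4 + 5/6 is an arbitrary example):

```python
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2 and return the sum as a mixed number (whole, num, den)."""
    # Common denominator by multiplying the denominators, then rename each fraction
    num = n1 * d2 + n2 * d1
    den = d1 * d2
    g = gcd(num, den)              # reduce the sum to lowest terms
    num, den = num // g, den // g
    whole, rem = divmod(num, den)  # rewrite an improper fraction as a mixed number
    return whole, rem, den

print(add_fractions(3, 4, 5, 6))  # → (1, 7, 12), i.e. 1 7/12
```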
OLGA: Fast computation of generation probabilities of B- and T-cell receptor amino acid sequences and motifs

Motivation: High-throughput sequencing of large immune repertoires has enabled the development of methods to predict the probability of generation by V(D)J recombination of T- and B-cell receptors of any specific nucleotide sequence. These generation probabilities are very non-homogeneous, ranging over 20 orders of magnitude in real repertoires. Since the function of a receptor really depends on its protein sequence, it is important to be able to predict this probability of generation at the amino acid level. However, brute-force summation over all the nucleotide sequences with the correct amino acid translation is computationally intractable. The purpose of this paper is to present a solution to this problem.

Results: We use dynamic programming to construct an efficient and flexible algorithm, called OLGA (Optimized Likelihood estimate of immunoGlobulin Amino-acid sequences), for calculating the probability of generating a given CDR3 amino acid sequence or motif, with or without V/J restriction, as a result of V(D)J recombination in B or T cells. We apply it to databases of epitope-specific T-cell receptors to evaluate the probability that a typical human subject will possess T cells responsive to specific disease-associated epitopes. The model prediction shows an excellent agreement with published data. We suggest that OLGA may be a useful tool to guide vaccine design.
Key Terms

binding energy: the energy equivalent of the difference between the mass of a nucleus and the masses of its nucleons

ether: scientists once believed there was a medium that carried light waves; eventually, experiments proved that ether does not exist

frame of reference: the point or collection of points, arbitrarily chosen, in relation to which motion is measured

general relativity: the theory proposed to explain gravity and acceleration

inertial reference frame: a frame of reference where all objects follow Newton's first law of motion

length contraction: the shortening of an object as seen by an observer who is moving relative to the frame of reference of the object

mass defect: the difference between the mass of a nucleus and the masses of its nucleons

postulate: a statement that is assumed to be true for the purposes of reasoning in a scientific or mathematical argument

proper length: the length of an object within its own frame of reference, as opposed to the length observed by an observer moving relative to that frame of reference

relativistic: having to do with modern relativity, such as the effects that become significant only when an object is moving close enough to the speed of light for $\gamma$ to be significantly greater than 1

relativistic energy: the total energy of a moving object or particle, $E = \gamma m c^2$, which includes both its rest energy $mc^2$ and its kinetic energy

relativistic factor: $\gamma = \frac{1}{\sqrt{1 - \frac{u^2}{c^2}}}$, where $u$ is the velocity of a moving object and $c$ is the speed of light

relativistic momentum: $p = \gamma m u$, where $\gamma$ is the relativistic factor, $m$ is the rest mass of an object, and $u$ is the velocity relative to an observer

relativity: the explanation of how objects move relative to one another

rest mass: the mass of an object that is motionless with respect to its frame of reference

simultaneity: the property of events that occur at the same time

special relativity: the theory proposed to explain the consequences of requiring the speed of light and the laws of physics to be the same in all inertial frames

time dilation: the slowing of time as seen by an observer in a frame of reference that is moving relative to the observer
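As a quick numerical illustration of the relativistic factor and the formulas it appears in (a sketch; the 0.8c speed, 100 m proper length, and 1 kg mass are arbitrary example values):

```python
from math import sqrt

C = 2.99792458e8  # speed of light, m/s

def gamma(u):
    """Relativistic factor: gamma = 1 / sqrt(1 - u^2 / c^2)."""
    return 1.0 / sqrt(1.0 - (u / C) ** 2)

u = 0.8 * C                    # example speed
g = gamma(u)                   # ≈ 1.667, so relativistic effects are significant
proper_len = 100.0             # proper length, m, in the object's own frame
contracted = proper_len / g    # length contraction seen by a relatively moving observer
m = 1.0                        # rest mass, kg
rest_energy = m * C ** 2       # rest energy m c^2, J
total_energy = g * m * C ** 2  # relativistic energy E = gamma m c^2, J
print(round(g, 3), round(contracted, 1))  # → 1.667 60.0
```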
Study on the Stability and Seepage Characteristics of Underwater Shield Tunnels under High Water Pressure Seepage

Key Laboratory of In-Situ Property-Improving Mining of Ministry of Education, Taiyuan University of Technology, Taiyuan 030024, China
Department of Mining Engineering, Taiyuan University of Technology, Taiyuan 030024, China
College of Water Science and Engineering, Taiyuan University of Technology, Taiyuan 030024, China
Author to whom correspondence should be addressed.
Submission received: 13 September 2023 / Revised: 12 October 2023 / Accepted: 30 October 2023 / Published: 2 November 2023

The construction of underwater shield tunnels under high water pressure conditions and seepage action will seriously impact the stability of the surrounding rock. In this study, an analytical model for the strength of the two-lane shield tunneling construction under anisotropic seepage conditions was established, and a series of simulations were carried out against the engineering background of the underwater section of Line 2 of the Taiyuan Metro in China, which passes through Yingze Lake. The results show that: (1) the surface settlement has a superposition effect, and the late consolidation settlement of the soil body under seepage will affect segment deformation, so monitoring should be strengthened; (2) under the influence of the weak permeability of the lining and grouting layers, the pore pressure on both sides of the tunnel arch girdle is reduced by about 72% compared with the initial value, with a larger hydraulic gradient, and by 30% at the top of the arch; (3) within a specific range, the tunneling pressure can be increased, and the grouting pressure and the thickness of the grouting layer can be reduced, to control segment deformation; (4) the higher the overlying water level, the larger the maximum consolidation settlement and the wider the influence range of surface settlement.
This study can provide a reliable reference for underwater double-lane shield tunnel design and safety control.

1. Introduction

In recent years, with the rapid development of coastal cities, underground tunnels, undersea tunnels, and river-crossing tunnels, mainly in the form of shield tunnels, have been constructed in large quantities [ ] and have become an essential means of traversing the barrier of water [ ]. The underwater shield tunnel excavation process changes the equilibrium state of the initial stress field and the seepage field within the geotechnical body. It is subjected to the coupling effect of geotechnical medium deformation and water seepage, and its mechanical coupling response is pronounced [ ]. In particular, for underwater shield tunnels across rivers and seas, groundwater seepage has a significant influence on tunnel stability [ ]. The whole of underwater shield tunnel construction takes place in a region where the overlying geotechnical bodies and water pressure change substantially, which leads to serious safety hazards in the main tunnel structure [ ], so consideration of the stress field–seepage coupling effect is necessary for design and construction. Theoretical, numerical, and experimental methods have been used to investigate seepage stability in shield tunnel excavations. In academic studies, Anagnostou et al. [ ] assumed that seepage flow was isotropic. They proposed a classical "prism-wedge" limit equilibrium model for calculating the limit of effective support force under steady-state seepage. Zhang et al. [ ] analyzed the effect of seepage on the pore pressure around the tunnel and the long-term settlement of the ground surface and the tunnel by analytical methods. Tang et al. [ ] derived an analytical solution for the seepage field of an underwater circular tunnel considering seepage anisotropy based on the complex function theory of conformal mapping.
Still, the surface of the formation will be deformed during the mapping process, so the result of this calculation is only an approximate solution. In terms of experimental research, only a few scholars have conducted tests on the stability problem of tunnel excavation under seepage conditions. Chen et al. [ ] investigated the instability problem of the tunnel excavation face under steady-state seepage conditions with different head heights by centrifugal modeling tests. They derived the relationship between the adequate support pressure and the water pressure difference between groundwater and soil compartments. Feng and Chen [ ] combined a self-developed catastrophe visualization test model with transparent soil technology to conduct experimental research on the problem of water surge and sand disasters in the construction of shield tunnels in saturated sandy soil strata. In addition, Zhang et al. [ ] explored the distribution of the seepage field and its streamline changes through seepage tracer tests. However, although experimental studies can fairly accurately restore the actual situation of seepage field distribution and can be used to analyze the influence of seepage on tunnel stability, they have the disadvantages of high cost and poor operability and are time-consuming. Numerical simulation is convenient and efficient and can reveal the characteristics of multi-physical field coupling. Lu et al. [ ] used the finite element software PLAXIS, V22.00.02.1078, to study the pore water pressure distribution of the tunnel working face under seepage conditions. They obtained the ultimate support pressure for stabilizing the tunnel's working face. Lü et al. [ ] used the finite element software Abaqus to simulate the soil deformation induced by tunnel construction. They concluded that groundwater seepage would significantly increase the size and scope of ground subsidence deformation and change the ground deformation pattern into a funnel shape.
It should be noted that the previous studies considered only the effect of groundwater seepage alone on the stability of the surrounding rock and did not take into account the fluid–solid coupling between the pore water and the soil particles. To improve calculation accuracy, Li et al. [ ], Wongsaroj et al. [ ], Avgerinos et al. [ ], and Li et al. [ ] considered the fluid–solid coupling effect in numerical simulations of shield tunnels, but most of them assumed the permeability of the soil to be isotropic. The layered structure formed during deposition makes the permeability coefficient of a soil anisotropic. After tunnel excavation, the permeability coefficient of the soil around the tunnel increases in the horizontal direction while changing little in the vertical direction, so the degree of permeability anisotropy increases, which significantly influences the groundwater inflow rate and the pore pressure distribution around the tunnel [ ]. Qiao et al. [ ] concluded from theoretical studies and numerical simulations that neglecting the permeability anisotropy of the soil around the tunnel underestimates the seepage flow and the head outside the lining, which poses a safety risk to the design of the tunnel support structure. In this study, based on stress–seepage coupling theory, a relationship is established between the anisotropic permeability coefficient and the strain field, taking the Yingze Lake underpass project of Taiyuan Metro Line 2 in China as the engineering background. The finite difference software FLAC3D 5.01 is adopted to establish an anisotropic three-dimensional fluid–solid coupling model that accounts for the change of the soil permeability coefficient with the state of stress.
Research is conducted on the stability of the surrounding rock and structure during the construction period of the submerged shield tunnel, as well as the seepage characteristics of the shield tunnel under high water pressure. The results can provide reference and guidance for engineering applications. 2. Theoretical Method and Numerical Models Since the displacement field and the seepage field in saturated soils are two physicomechanical environments with different laws of motion, the mathematical model of fluid–solid coupling must contain the governing differential equations of both fields. The following presents the basic equations of the stress field and the seepage field used in this study. 2.1. Basic Principles of Fluid–Solid Coupling 2.1.1. Stress Balance Equation In the coupled analysis of the soil stress field and seepage field, the soil body is considered a porous continuous medium, and its stress field conforms to the fundamental equations of continuum mechanics. The saturated soil body consists of a soil particle skeleton and pore water. Assuming that the soil particles and water are incompressible and neglecting acceleration, the stress equilibrium differential equation is:

$\frac{\partial (\sigma'_{ij} + \alpha p \delta_{ij})}{\partial x_i} + f_j = 0, \quad i, j = 1, 2, 3$ (1)

Under the action of pore water pressure, according to the principle of effective stress, the constitutive relation of the geotechnical body [ ] can be expressed as follows:

$\sigma'_{ij} = \sigma_{ij} - \alpha p \delta_{ij}$ (2)

$\alpha = a_1 + a_2 \Theta + a_3 p + a_4 \Theta p$ (3)

where $\sigma'_{ij}$ is the effective stress tensor, $\sigma_{ij}$ is the total stress tensor, $\alpha$ is the equivalent pore pressure coefficient (0 < $\alpha$ < 1), $\delta_{ij}$ is the Kronecker delta, $f_j$ is the body force, $p$ is the pore water pressure, $a_1$, $a_2$, $a_3$, and $a_4$ are experimental constants, and $\Theta$ is the volumetric stress. 2.1.2. Seepage Continuity Equation For saturated soils, the soil deformation is assumed to be small and linearly elastic.
Groundwater is considered incompressible, and the seepage satisfies Darcy's law. The seepage continuity equation can then be obtained [ ]:

$\frac{\partial}{\partial x}\left(k_x \frac{\partial p}{\partial x}\right) + \frac{\partial}{\partial y}\left(k_y \frac{\partial p}{\partial y}\right) + \frac{\partial}{\partial z}\left(k_z \frac{\partial p}{\partial z}\right) = -\gamma_w \left(n \beta_w \frac{\partial p}{\partial t} + \frac{\partial \varepsilon_V}{\partial t}\right)$ (4)

where $k_x$, $k_y$, and $k_z$ are the permeability coefficients in the x, y, and z directions, respectively; $p$ is the pore pressure; $t$ is the time; $\gamma_w$ is the unit weight of water; $\beta_w$ is the volumetric compressibility of water; and $\varepsilon_V$ is the volumetric strain. 2.1.3. Theoretical Derivation of an Anisotropic Permeability Coefficient Expression It is necessary to consider the anisotropy of the permeability coefficient and to establish the relationship between the permeability coefficient and the principal strains when performing the stress–seepage coupling analysis, so as to reflect the influence of the stress field on the seepage field under different stress paths. The initial porosity of the soil can be expressed as follows:

$n_0 = \frac{V_0 - V_S}{V_0}$ (6)

where $n_0$ is the initial porosity, $V_0$ is the initial total volume of the soil body, and $V_S$ is the volume of the soil particles. Assuming that the soil particles are incompressible and that the deformation of the soil is mainly due to compression of the voids, the porosity after deformation can be expressed as

$n = \frac{V - V_S}{V}$ (7)

where $n$ is the porosity of the porous medium and $V$ is the total volume of the soil after deformation. For small-strain problems, the volumetric strain can be expressed as follows:

$\varepsilon_V = \frac{V - V_0}{V_0}$ (8)

Combining Equations (6)–(8) gives:

$n = \frac{n_0 + \varepsilon_V}{1 + \varepsilon_V}$ (9)

The Kozeny–Carman equation is expressed as follows:

$k = \frac{1}{C S^2} \cdot \frac{n^3}{(1 - n)^2}$

where $C$ and $S$ are the Kozeny–Carman constant and the specific surface area of the solid phase, respectively.
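The porosity–strain relation and the Kozeny–Carman scaling above can be checked numerically. The sketch below (Python; the function names are illustrative, and the 1% dilation is an assumed value, with only the porosity taken from Table 1) verifies that composing the two relations reduces to the single closed-form ratio $(1 + \varepsilon_V/n_0)^3 / (1 + \varepsilon_V)$ used for the permeability update:

```python
def porosity(n0, eps_v):
    # Eqs. (6)-(8): with V = V0*(1 + eps_v) and incompressible grains, Vs = V0*(1 - n0),
    # so n = (V - Vs)/V = (n0 + eps_v)/(1 + eps_v)   -- Eq. (9)
    return (n0 + eps_v) / (1.0 + eps_v)

def kc_ratio(n0, eps_v):
    # Kozeny-Carman k ~ n^3/(1-n)^2 gives k/k0 = (n/n0)^3 * ((1-n0)/(1-n))^2,
    # which simplifies to (1 + eps_v/n0)^3 / (1 + eps_v)
    n = porosity(n0, eps_v)
    return (n / n0) ** 3 * ((1.0 - n0) / (1.0 - n)) ** 2

n0, eps_v = 0.386, 0.01   # layer 2-4 porosity from Table 1; 1% dilation is illustrative
closed_form = (1.0 + eps_v / n0) ** 3 / (1.0 + eps_v)
assert abs(kc_ratio(n0, eps_v) - closed_form) < 1e-12  # composition matches the closed form
print(round(kc_ratio(n0, eps_v), 4))  # -> 1.0691, i.e. ~6.9% permeability increase
```

Note that dilation (positive volumetric strain) increases the permeability ratio above 1, while compression reduces it, which is the mechanism by which the stress field feeds back into the seepage field.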
Combining Equations (6) and (7) with the Kozeny–Carman equation, the constitutive relation for fluid–solid coupling can be obtained as follows:

$k_x = k_{0x} \frac{(1 + \varepsilon_V / n_0)^3}{1 + \varepsilon_V}, \quad k_y = k_{0y} \frac{(1 + \varepsilon_V / n_0)^3}{1 + \varepsilon_V}, \quad k_z = k_{0z} \frac{(1 + \varepsilon_V / n_0)^3}{1 + \varepsilon_V}$ (10)

where $k_{0x}$, $k_{0y}$, and $k_{0z}$ are the initial permeability coefficients in the x, y, and z directions, respectively. Equations (1), (4), and (10), supplemented by initial and boundary conditions, are sufficient for fluid–solid coupling analysis. 2.2. Numerical Modeling This paper uses the finite difference software FLAC3D 5.01 to establish a three-dimensional numerical model. Secondary development of FLAC3D 5.01 is carried out in the FISH language to implement the functional relationship between the anisotropic permeability coefficients and the strains, which is assigned to the model zones so that the permeability coefficients are updated dynamically, reflecting the effect of the stress field on the seepage field. One limitation of the methodology is that the lake-bottom stratigraphy is simplified, with each geological layer assumed to be horizontal; moreover, the excavation process of the shield is simplified. 2.2.1. Modeling Assumptions The proposed model assumes the following: The solid elements of the model follow the Mohr–Coulomb yield criterion; The properties of the soil and structural elements do not change during excavation, and the soil particles and fluids are incompressible; The geotechnical body is regarded as a porous medium, and the flow of fluid in the pores conforms to Darcy's law; The shield shell and lining elements are considered perfectly elastic, and both are impermeable. 2.2.2. Engineering Background The Taiyuan Metro Line 2 project is the first urban rail transit line in Taiyuan, China. The tunnel between Shuangta Xijie Station and Dananmen Station is a double-lane shield tunnel constructed with an EPB shield.
The left tunnel is excavated first, followed by the right tunnel. The distance between the left and right tunnels is 14.2 m. The arch crown of the left tunnel is about 16.9~18.2 m below the lake bottom, the arch crown of the right tunnel is about 13.5~17.67 m below the lake bottom, and the length of the shield tunnel under the lake is about 156 m. The depth of Yingze Lake is about 1.5~5.0 m, and its width is about 40~300 m. The groundwater table depth in the area is 2~7.5 m; the groundwater is phreatic water in the Quaternary loose soils, locally passing through a slightly confined aquifer, and the water level varies seasonally, with an annual variation of about 1.0 m. The groundwater table is shallow, the aquifer is thick, and the permeability coefficient is large; affected by seepage and recharge from Yingze Lake, the groundwater greatly influences construction in this zone. The tunnel mainly passes through layer 2-4 fine sand and layer 2-5 medium sand. A schematic diagram of the project area is shown in Figure 1. The tunnel adopts a standard single-circle shield lining structure, with an outer diameter of D = 6200 mm and an inner diameter of d = 5500 mm. The tunnel is lined in a single layer, with a segment thickness of 350 mm and a ring width of 1200 mm; each lining ring is assembled from six prefabricated reinforced concrete segments. The total length of the shield machine is 9.2 m, and its outer diameter is 6.4 m. 2.2.3.
Modeling To reduce the influence of boundary effects, and considering that the soil body is semi-infinite and that the range of soil stress redistribution after tunnel excavation is 3–5D (D is the tunnel diameter) in the horizontal direction, the calculation range of the model is established as follows: the upper boundary is at the actual thickness of the overlying rock and soil layer, i.e., the distance between the tunnel crown and the lake bottom; 4D is taken vertically downward from the tunnel center, and 5D is taken horizontally outward in each direction [ ]; the model length is the length of the tunnel crossing under the lake bottom. As shown in Figure 2, the constructed numerical model measures 156 m (length) × 77 m (width) × 46 m (height) and is divided into 237,640 zones and 244,332 nodes. The mesh is denser near the tunnel and becomes sparser radially away from it, which satisfies the computational accuracy requirements. Each soil layer and the grouting layer are modeled with the Mohr–Coulomb elastic–plastic model, and the lining segments are modeled with shell elements. 2.2.4. Calculation Parameters The parameters of the grouting materials and lining are taken from the literature [ ]. The segments have a density of 2500 kg/m³, a Poisson's ratio of 0.3, an elastic modulus of 34.5 GPa, and a stiffness reduction coefficient of 0.8, and are modeled with shell elements; the shield shell of the machine has a density of 7850 kg/m³, a Poisson's ratio of 0.2, and an elastic modulus of 210 GPa, and is modeled with liner elements; a homogeneous, equal-thickness "equivalent layer" is adopted to simulate the grouting process, and the slurry hardening process is simplified.
To simulate the grouting process, the hardening of the slurry is simplified: within 2.4 m of the shield tail, a fluid equivalent layer is used, with a thickness of 0.06 m, a density of 2000 kg/m³, an elastic modulus of 0.75 MPa, and a Poisson's ratio of 0.35; beyond 2.4 m of the shield tail, an initial-set equivalent layer is used, with a thickness of 0.06 m, a density of 2300 kg/m³, an elastic modulus of 10 MPa, and a Poisson's ratio of 0.25. According to the geological investigation report, the physicomechanical parameters of each soil layer are listed in Table 1. 2.2.5. Simulation Process The excavation of the shield tunnel is simulated with the stiffness migration method, with one cycle for each ring width (1.2 m) excavated: (1) material parameters are assigned to the geotechnical body, model boundary conditions are imposed, the initial ground stress equilibrium is computed, and the displacements are zeroed; (2) a 1.2 m length of soil in front of the face is removed with the "null" model, the seepage boundary condition at the excavation face is imposed, and the chamber pressure is applied at the face; (3) within the excavation range of the previous step, shell structural elements are applied to simulate the shield shell (as the machine advances 1.2 m, the 1.2 m lining and gap elements at the shield tail are activated and assigned their parameters); (4) a circumferentially uniform grouting pressure is applied to the inner wall of the soil within 2.4 m of the shield tail, and the gap elements beyond 2.4 m of the tail are replaced with hardened grouting material; (5) the shield is advanced step by step until the excavation is complete.
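The five-step cycle above can be sketched as a simple driver loop. The helper and its action strings below are illustrative only (in the actual model this sequence is driven through FISH in FLAC3D); only the ring width, tunnel length, and grout-zone length come from the text:

```python
RING_WIDTH = 1.2       # m, one lining ring per advance
TUNNEL_LENGTH = 156.0  # m, length of the under-lake crossing
TAIL_GROUT_LEN = 2.4   # m, fluid grout equivalent layer behind the shield tail

def one_cycle(face_position):
    """One stiffness-migration cycle (hypothetical helper; names are illustrative)."""
    return [
        f"null soil {face_position:.1f}-{face_position + RING_WIDTH:.1f} m; apply face pressure",
        "activate shield-shell elements over the excavated ring",
        "activate lining + gap elements at the shield tail",
        f"apply grouting pressure within {TAIL_GROUT_LEN} m of tail; harden grout beyond",
    ]

n_rings = int(TUNNEL_LENGTH / RING_WIDTH)
print(n_rings)  # -> 130 advances per tunnel line, matching the 1~130 step range in the text
```

The bookkeeping also shows why the text counts 130 steps per line: 156 m of tunnel divided by the 1.2 m ring width gives exactly 130 advances.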
The location of each excavation step in the asynchronous excavation of the double-lane shield tunnel is shown in Figure 3. Considering that the lining ring width is 1.2 m and to maintain numerical efficiency, 2.4 m is selected as one numerical excavation step. The left and right lines are excavated asynchronously, with steps 1~130 for the left tunnel and steps 131~260 for the right tunnel, for a total of 260 steps. The middle section of the model (Y = 78 m), at the middle of the lake-bottom crossing, is selected as the monitoring section for the analysis of the calculation results. 2.3. Boundary Conditions and Initial Conditions Displacement boundary conditions: the 5 m deep lake water is represented by an equivalent uniform compressive stress of 5 × 10^4 Pa applied to the upper surface of the model; the four lateral faces of the model (front, back, left, and right) are constrained against horizontal displacement; the bottom surface is fully fixed. Initial stress field: the initial stress is calculated from the self-weight of the soil. Seepage boundary conditions: a pore pressure boundary of 5 × 10^4 Pa is applied on the upper surface of the model. The permeability coefficient of the soft clay is small, and in the transient deformation stage of shield construction the external boundaries of the model barely drain, so all boundaries other than the upper surface (the front and rear boundaries, the left and right boundaries, the bottom boundary, and the tunnel boundary) are assumed impervious.
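The 5 × 10^4 Pa surface boundary above is simply the hydrostatic pressure of the 5 m deep lake. Assuming γw = 10 kN/m³ (the rounded unit weight implied by that boundary value), the hydrostatic profile can be sketched as:

```python
GAMMA_W = 10.0e3  # N/m^3, unit weight of water implied by the stated 5e4 Pa boundary

def hydrostatic_p(depth_m):
    # p = gamma_w * h below the free water surface
    return GAMMA_W * depth_m

print(hydrostatic_p(5.0))         # -> 50000.0 Pa, the surface boundary for a 5 m lake
# crown roughly 17 m below the lake bottom (illustrative depth within the stated 16.9~18.2 m range)
print(hydrostatic_p(5.0 + 17.0))  # -> 220000.0 Pa, ~0.22 MPa near the left-tunnel crown
```

The same expression supplies the initial pore-pressure field described below: every node starts at the hydrostatic value for its depth below the lake surface.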
In the long-term process of consolidation and settlement of the tunnel, the pore water pressure at the front and rear, left and right, and bottom boundaries should remain at the initial hydrostatic value because these boundaries are far from the tunnel; hence, they are treated as permeable boundaries with fixed pore pressure. The pore water pressure at the inner boundary of the segments is fixed at 0 during shield excavation. Seepage initial conditions: the initial pore water pressure within the soil before tunnel excavation is the hydrostatic pressure within the rock and soil layers. 3. Results and Discussions 3.1. Analysis of Displacement Field After the asynchronous excavation and penetration of the double-lane shield tunnel, the deformation of the strata in the monitoring section Y = 78 m is shown in Figure 4. It can be seen that: (1) Strata settlement is mainly distributed above the tunnel axis, with its magnitude gradually decreasing from the surface toward the tunnel axis, while strata uplift mainly occurs below the tunnel base owing to the stress-relief effect of the excavation. The strata deformation pattern caused by the double-lane shield tunnel excavation in Figure 4 is similar to the pattern obtained from calculations based on random field theory by Li et al. [ ] and to the pattern obtained from field monitoring and numerical calculations by Qiu et al. [ ]. (2) After the left tunnel excavation is completed, the maximum surface settlement is −6.7 mm, occurring above the tunnel centerline. The subsequent right tunnel excavation causes the settlement areas above the two tunnels to overlap, enlarging the surface settlement zone to a lateral extent of about ±15 m (about 2.5D).
The maximum surface settlement is −13.0 mm, located to the left of the midline between the two tunnels. (3) From left-line penetration to double-line penetration, the surface settlement above the centerline of the left tunnel increases by 5.56 mm, an increase of 82%, which shows that the excavation of the later tunnel in a double-line shield tunnel has a continuing impact on the first tunnel and causes a certain increase in its surface settlement. Figure 4. Vertical displacement distribution of monitoring section A. (a) Distribution after excavation and penetration of the left tunnel; (b) distribution after excavation and penetration of the right tunnel; (c) vertical deformation curve of the strata after excavation and penetration of the left tunnel; (d) vertical deformation curve of the strata after excavation and penetration of the right tunnel. The shield excavation process disturbs the soil around the tunnel, resulting in varying degrees of soil deformation. The settlement of the soil above the tunnel vault directly affects the stability of the segments, so it needs to be monitored during shield excavation, and appropriate prevention and control measures should be taken according to the monitoring results. Seven monitoring points are deployed at the vault and on both sides of it in the Y = 78 m monitoring section of the left tunnel (corresponding to the 65th excavation step): one at the tunnel vault and the other six on either side of it, all at the same elevation with a horizontal spacing of 2 m. The displacement curves of the monitoring points during shield boring are shown in Figure 5.
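Transverse settlement distributions of the kind recorded at these monitoring points are classically described by Peck's Gaussian trough, S(x) = S_max·exp(−x²/2i²), which the analysis below compares against. A minimal sketch, with illustrative parameters rather than values fitted to this project:

```python
import math

def peck_trough(x, s_max, i):
    """Peck settlement trough: S(x) = S_max * exp(-x^2 / (2 i^2)).
    x: horizontal offset from the tunnel centerline (m); i: trough width parameter (m)."""
    return s_max * math.exp(-(x * x) / (2.0 * i * i))

s_max, i = 13.0, 9.3  # mm and m, illustrative values only
print(peck_trough(0.0, s_max, i))            # maximum settlement at the centerline
print(round(peck_trough(i, s_max, i), 2))    # about 0.61 * S_max at one trough width x = i
```

By construction the curve is symmetric about the centerline and decays smoothly toward both sides, which is the shape the monitoring-point settlements trace out below.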
It can be seen that, up to the 57th excavation step, the vertical displacement at each monitoring point is 0. When excavation reaches the 61st step, the shield slightly disturbs the soil at the monitoring points, which undergo a tiny settlement. With the continued advance of the working face, the influence range of settlement around the tunnel vault gradually expands, and the vertical displacement decreases horizontally from the tunnel center toward both sides, similar to the Peck curve. During the excavation of the left tunnel, when the excavation face is 4.8 m before the monitoring section, the soil at the monitoring points starts to settle, with a maximum settlement of 0.8 mm; when the excavation face just crosses the monitoring section, the maximum settlement is 3 mm; after the excavation face passes the monitoring section, the maximum settlements are 7.0 mm and 7.6 mm at 4.8 m and 9.6 m beyond the section, respectively. It can be seen that, as the working face crosses the monitoring section, the increment of soil settlement first increases and then decreases, with the maximum increment occurring 4.8 m beyond the monitoring section. After the excavation of the right tunnel is completed, the maximum settlement at the monitoring points is 9.5 mm, an increment of 3 mm, accounting for 25% of the total settlement. This indicates that the soil under seepage undergoes a large late consolidation settlement, which affects the stability of the segments, so monitoring of late consolidation settlement should be strengthened. 3.2.
Analysis of Seepage Field The disturbance of the surrounding soil caused by shield excavation inevitably changes the permeability properties of the soil and thus the pore water pressure. Monitoring points A and B are arranged at the arch crowns of the left and right tunnels in the Y = 78 m monitoring section to record the pore water pressure changes, and the results are shown in Figure 6. It can be seen from Figure 6 that, as the left tunnel excavation advances, the pore water pressure at monitoring point A decreases, dropping dramatically after the shield machine crosses the monitoring section; after the left tunnel is holed through, the pore water pressure at point A stabilizes at 0.198 MPa and no longer changes. During the excavation of the left tunnel, the pore water pressure at monitoring point B does not change significantly. During the excavation of the right tunnel, the pore water pressure at point B gradually decreases, dropping dramatically after the shield machine crosses the monitoring section, and finally stabilizes at 0.198 MPa, about 30% below its initial value. It can be seen that shield excavation greatly influences the pore water pressure around the tunnel, which decreases remarkably after excavation compared with the initial value. The distribution of the seepage field of the surrounding rock in the monitoring section after the asynchronous excavation of the left and right tunnels is presented in Figure 7.
It can be seen that: (1) After the left tunnel excavation is completed, the pore water pressure around the segments decreases significantly compared with the initial value; the groundwater flows toward the excavation surface under the head pressure, and the seepage field forms a funnel-like distribution centered on the excavated tunnel. After the right tunnel excavation is completed, the seepage field is symmetrically distributed about the central axis between the two tunnels. (2) After the right tunnel excavation is completed, the pore water pressure around the left tunnel decreases further, mainly because the right tunnel provides a larger drainage boundary during excavation, so more pore water is discharged within the same period. (3) The pore water pressure on both sides of the arch waist decreases from the initial 255 kPa to 70 kPa, a reduction of about 73%, which is considerable. The main reason is that the groundwater surges toward the tunnel under the pressure head; at the same time, the low permeability of the lining and grouting layer prevents the water from entering, and the water moves along the grouting layer toward the arch bottom, reducing the pore water pressure at the arch waist. The seepage field distribution of the high-water-pressure shield tunnel in Figure 7 is similar to the centrifugal test results of Niu et al. [ ] and to the seepage field distributions at the excavation face computed by Wang et al. [ ] and Zhang et al. [ ]. 3.3. Analysis of Key Construction Parameters Shield tunnel excavation is a dynamic construction process.
For the construction of underwater shield tunnels in the complex environment of high water pressure and shallow burial, selecting reasonable construction parameters can effectively reduce the disturbance of tunnel excavation and improve tunnel stability [ ]. Based on previous studies, the tunneling pressure, grouting pressure, and grouting layer thickness, which have the most significant effects on tunnel stability, are selected for analysis. After completion of the left shield tunnel, the deformations of the segments at the Y = 78 m monitoring section under different tunneling pressures, grouting pressures, and grouting layer thicknesses are presented in Figures 8–10. The figures show that: (1) The vertical and horizontal deformations of the segments exhibit the same trends under the different tunneling pressures, grouting pressures, and grouting layer thicknesses. The vertical deformation of the segments is approximately distributed in the shape of a "butterfly", manifesting as settlement of the arch crown, heave of the arch bottom, and inward offset of the arch waist toward the axis; the deformation of the arch crown is greater than that of the arch bottom, and the difference between the vertical deformations of the two arch waists is relatively small. The horizontal deformation of the segments is approximately "∞"-shaped, gradually increasing from the arch crown and arch bottom toward the arch waists on both sides. (2) With increasing tunneling and grouting pressure, the overall deformation of the segments gradually decreases; however, once the tunneling pressure exceeds 200 kPa and the grouting pressure exceeds 0.4 MPa, their influence on the overall deformation becomes slight.
Furthermore, with increasing grouting layer thickness, the overall deformation of the segments gradually increases; when the thickness exceeds 10 cm, the deformation of the segment vault exceeds 20 mm, beyond the permissible value in the code. The grouting layer thickness therefore has a significant influence on the vault deformation and should be treated as a key control point during construction. 3.4. Seepage Characteristics of Shield Tunnel under High Water Pressure 3.4.1. Distribution Law of Seepage Field of Surrounding Rock Based on the numerical model established in Section 2.2, the shield tunnel construction process is simulated under four overlying water levels of 10 m, 20 m, 30 m, and 40 m to investigate the distribution of the seepage field in the surrounding rock and the deformation of the lining segments. The results are shown in Figure 11. It can be seen from Figure 11 that, after asynchronous excavation under the different overlying water levels, the distribution of the seepage field of the surrounding rock is broadly the same, with a funnel-like pore pressure distribution forming around the tunnel. Under the four overlying water levels of 10 m, 20 m, 30 m, and 40 m, the pore water pressure of the tunnel grouting ring varies within the ranges of 0.05–0.2 MPa, 0.05–0.3 MPa, 0.05–0.4 MPa, and 0.05–0.5 MPa, respectively. The larger the head pressure, the more the pore water pressure of the surrounding rock increases, and the hydraulic gradient on both sides of the arch waist changes especially significantly. This is similar to the findings reported by Fu et al. [ ] and Ying et al.
[ ] that, when an underwater shield tunnel is excavated, the groundwater flow converges toward the excavation surface, and the hydraulic gradient near it increases with the water level. The increase in water level thus produces a stronger seepage force, which is why the water pressure outside the segments and the surface settlement both increase with water depth. 3.4.2. Surface Settlement The surface settlement trough curves for shield excavation at different overlying water levels are presented in Figure 12. It can be found that, at overlying water depths of 10 m, 20 m, 30 m, and 40 m, the shapes of the settlement troughs are broadly the same and their widths are close. The influence range of surface settlement is about 8.0D, and the maximum surface settlements are 25.92 mm, 34.27 mm, 42.86 mm, and 50.90 mm, respectively. The deeper the overlying water, the greater the maximum surface settlement and the influence range of settlement, and there is still 10–15 mm of settlement beyond the settlement trough. Moreover, the deeper the overlying water, the greater the water pressure, the greater the decrease in pore water pressure around the segments after excavation, and the greater the dissipation of the high pore water pressure, which leads to a larger consolidation settlement of the surface. 4. Conclusions Based on the study of underwater shield tunnel excavation under high-water-pressure seepage, the following conclusions can be drawn: After shield excavation, the late consolidation settlement of the soil under seepage is considerable, accounting for about 25% of the total settlement, and the later tunnel further enhances the seepage around the first tunnel.
During the construction of the underwater shield tunnel, the pore water pressure at the arch waists on both sides and at the arch crown is reduced by about 72% and 30%, respectively, compared with the initial values, so the arch waist area requires focused monitoring. Within a certain range, increasing the tunneling pressure and grouting pressure and reducing the grouting layer thickness can effectively control the vertical deformation of the segments, and reducing the grouting pressure and grouting layer thickness can effectively control their horizontal deformation. The higher the overlying water level, the more pronounced the seepage effect and the larger the maximum consolidation settlement and the influence range of the surface settlement; the influence of the water level on the segment forces should be considered in the structural design of the segments. The results can provide a theoretical basis for the stability control of underwater shield tunnels. However, the numerical simulation results should be further validated against experimental results, so further work is needed to examine the reliability of the numerical calculations. Author Contributions This work was conducted in collaboration among all authors. L.C.: conceptualization, writing—original draft, investigation, software, writing—review and editing. B.X.: conceptualization, software, formal analysis, funding acquisition, writing—review and editing. Y.D., S.H. and Y.S.: conceptualization, methodology, supervision. Q.G., K.L. and N.Z.: conceptualization, methodology, supervision. All authors have read and agreed to the published version of the manuscript. The authors acknowledge the financial support provided by the General Program of the National Natural Science Foundation of China (Grant no. 51874207). Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable.
Data Availability Statement The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest The authors declare no conflict of interest. Figure 7. Cloud diagram of seepage field distribution in monitoring section A. (a) After the left tunnel is excavated through; (b) after the right tunnel is excavated through. Figure 8. Deformation curves of the segments of the left-line shield tunnel in monitoring section A under different tunneling pressures. (a) Vertical deformation; (b) horizontal deformation. Figure 9. Deformation curves of the segments of the left-line shield tunnel in monitoring section A under different grouting pressures. (a) Vertical deformation; (b) horizontal deformation. Figure 10. Deformation curves of the segments of the left-line shield tunnel in monitoring section A under different grouting layer thicknesses. (a) Vertical deformation; (b) horizontal deformation. Figure 11. Pore water pressure distribution around the tunnels after asynchronous excavation and penetration of the double-lane tunnel under different overlying water levels. (a) Overlying water depth 10 m; (b) overlying water depth 20 m; (c) overlying water depth 30 m; (d) overlying water depth 40 m.
Floor Number | Thickness (m) | Porosity | Permeability Coefficient, Vertical (cm·s^−1) | Permeability Coefficient, Level (cm·s^−1) | Specific Weight (kN/m^3) | Elastic Modulus (MPa) | Cohesion (kPa) | Poisson's Ratio | Friction Angle (°)
2-4 | 3.1 | 0.386 | 2.69 × 10^−4 | 6.37 × 10^−4 | 20 | 12 | 6.78 | 0.25 | 35.8
2-3-1 | 7.1 | 0.418 | 1.72 × 10^−6 | 2.91 × 10^−6 | 19.7 | 9.3 | 10.9 | 0.31 | 17.9
2-2-1 | 2.2 | 0.456 | 1.65 × 10^−7 | 2.72 × 10^−7 | 19.3 | 11.5 | 22.2 | 0.34 | 10.7
2-4 | 8.7 | 0.386 | 2.69 × 10^−4 | 6.37 × 10^−4 | 20 | 12 | 6.78 | 0.25 | 35.8
2-5 | 10.5 | 0.363 | 8.01 × 10^−4 | 1.89 × 10^−3 | 20.4 | 10 | 8.53 | 0.29 | 35.9
2-3-3 | 6.7 | 0.418 | 1.55 × 10^−6 | 2.34 × 10^−6 | 19.8 | 7.6 | 11.6 | 0.3 | 14.7
3-6 | 7.7 | 0.319 | 9.61 × 10^−4 | 2.26 × 10^−3 | 20.4 | 10.03 | 13.6 | 0.28 | 32.58
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Chen, L.; Xi, B.; Dong, Y.; He, S.; Shi, Y.; Gao, Q.; Liu, K.; Zhao, N. Study on the Stability and Seepage Characteristics of Underwater Shield Tunnels under High Water Pressure Seepage. Sustainability 2023, 15, 15581. https://doi.org/10.3390/su152115581 AMA Style Chen L, Xi B, Dong Y, He S, Shi Y, Gao Q, Liu K, Zhao N. Study on the Stability and Seepage Characteristics of Underwater Shield Tunnels under High Water Pressure Seepage. Sustainability. 2023; 15 (21):15581. https://doi.org/10.3390/su152115581 Chicago/Turabian Style Chen, Luhai, Baoping Xi, Yunsheng Dong, Shuixin He, Yongxiang Shi, Qibo Gao, Keliu Liu, and Na Zhao. 2023. 
"Study on the Stability and Seepage Characteristics of Underwater Shield Tunnels under High Water Pressure Seepage" Sustainability 15, no. 21: 15581. https://doi.org/10.3390/su152115581 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
Axiom of Choice To choose one sock from each of infinitely many pairs of socks requires the Axiom of Choice, but for shoes the Axiom is not needed. — Bertrand Russell Antonio Luis’s latest post points to Eric Schechter’s absolutely wonderful Axiom of Choice Homepage. The latter discusses the AC, and a whole range of related topics. Here, for instance, is his discussion of the Banach-Tarski Paradox: Banach and Tarski used the Axiom of Choice to prove that it is possible to take the 3-dimensional closed unit ball, $B = \{(x,y,z)\in \mathbb{R}^3: x^2 + y^2 + z^2 \leq 1\}$ and partition it into finitely many pieces, and move those pieces in rigid motions (i.e., rotations and translations, with pieces permitted to move through one another) and reassemble them to form two copies of $B$. At first glance, the Banach-Tarski Decomposition seems to contradict some of our intuition about physics – e.g., the Law of Conservation of Mass, from classical Newtonian physics. Consequently, the Decomposition is often called the Banach-Tarski Paradox. But actually, it only yields a complication, not a contradiction. If we assume a uniform density, only a set with a defined volume can have a defined mass. The notion of “volume” can be defined for many subsets of $\mathbb{R}^3$, and beginners might expect the notion to apply to all subsets of $\mathbb{R}^3$, but it does not. More precisely, Lebesgue measure is defined on some subsets of $\mathbb{R}^3$, but it cannot be extended to all subsets of $\mathbb{R}^3$ in a fashion that preserves two of its most important properties: the measure of the union of two disjoint sets is the sum of their measures, and measure is unchanged under translation and rotation. Thus, the Banach-Tarski Paradox does not violate the Law of Conservation of Mass; it merely tells us that the notion of “volume” is more complicated than we might have expected. 
By the way, the sets in the Banach-Tarski Decomposition cannot be described explicitly; we are merely able to prove their existence, like that of a choice function. One or more of the sets in the decomposition must be Lebesgue unmeasurable; thus a corollary of the Banach-Tarski Theorem is the fact that there exist sets that are not Lebesgue measurable. The existence of unmeasurable sets has a much shorter and easier proof, which can be found in every introductory textbook on measure theory. That proof also uses the Axiom of Choice, but doesn’t mention the Banach-Tarski Decomposition. Great stuff! Posted by distler at January 8, 2004 10:37 AM Re: Axiom of Choice Right. I have always winced at the name Banach-Tarski paradox, since the result follows directly from the Axiom of Choice (if you choose to accept it). This result has never bothered me physically; it’s not like you can cut up a real ball into the pieces required by the theorem. Not being a Platonist, to me this result just signals that, like other models of the world we create, the real number system can also be pushed beyond its domain of validity. We accept the Axiom of Choice, not because it is true, but because it is useful. Posted by: Bryan Van de Ven on January 9, 2004 9:45 AM | Permalink | Reply to this
helmholtz-ellis-ji-notation – Beautiful in-line microtonal just intonation accidentals The Helmholtz-Ellis JI Pitch Notation (HEJI), devised in the early 2000s by Marc Sabat and Wolfgang von Schweinitz, explicitly notates the raising and lowering of the untempered diatonic Pythagorean notes by specific microtonal ratios defined for each prime. It provides visually distinctive “logos” distinguishing families of justly tuned intervals that relate to the harmonic series. These take the form of strings of additional accidental symbols based on historical precedents, extending the traditional sharps and flats. Since its 2020 update, HEJI ver. 2 (“HEJI2”) provides unique microtonal symbols through the 47-limit. This package is a simple LaTeX implementation of HEJI2 that allows for in-line typesetting of microtonal accidentals for use within theoretical texts, program notes, symbol legends, etc. Documents must be compiled using XeLaTeX.
Sources: /fonts/helmholtz-ellis-ji-notation
Home page: http://plainsound.org/
Version: 1.1 (2020-05-19)
Licenses: CC BY 4.0
Copyright: 2020 Thomas Nicholson
Maintainer: Thomas Nicholson
Contained in: TeXLive as helmholtz-ellis-ji-notation; MiKTeX as helmholtz-ellis-ji-notation
Topics: OTF Font, Music Font
Cost of Capital | Formula + Calculator What is Cost of Capital? The Cost of Capital is the minimum rate of return, or hurdle rate, required on a particular investment for the incremental risk undertaken to be rational from a risk-reward standpoint. Fundamentally, the cost of capital reflects the opportunity cost to investors, such as debt lenders and equity shareholders, at which the implied return is deemed sufficient given the risk attributable to an investment. How is Cost of Capital Used in Finance? In corporate finance, the cost of capital is a central piece in analyzing a potential investment opportunity and performing a cash flow-based valuation. In short, a rational investor should not invest in a given asset if there is a comparable asset with a more attractive risk-reward profile. Conceptually, the cost of capital estimates the expected rate of return given the risk profile of an investment. The cost of capital is contingent on the opportunity cost, where alternative, comparable assets are critical factors that contribute toward the specific hurdle rate set by an investor. The decision to allocate capital toward a given investment and to risk incurring a monetary loss is economically feasible only if the potential return is deemed to be a reasonable trade-off. Hence, the cost of capital is also referred to as the “discount rate” or “minimum required rate of return”. Why Does the Cost of Capital Matter? Suppose an investor commits to a particular investment, at a time when there are other less risky opportunities in the market with comparable upside potential in terms of returns. Ultimately, the decision to proceed with the investment would be perceived as irrational from a pure risk perspective. Why? The investor deliberately chose a higher-risk investment without the gain of further compensation for incremental risk, which is contradictory to the core premise of the risk-return trade-off. 
The risk-return trade-off in investing is a theory that states an investment with higher risk should rightfully reward the investor with a higher potential return. Therefore, the capital allocation and investment decisions of an investor should be oriented around selecting the option that presents the most attractive risk-return profile. The cost of capital is analyzed to determine the investment opportunities that present the highest potential return for a given level of risk, or the lowest risk for a set rate of return. Of course, quantifying the risk of an investment (and potential return) is a subjective measure specific to an investor. However, as a general statement, the more risk tied to a specific investment, the higher the expected return should be – all else being equal. Cost of Capital Formula The cost of capital is the rate of return expected to be earned per each type of capital provider. In particular, two groups of capital providers contribute funds to a company: • Equity Capital Providers → Common Shareholders and Preferred Stockholders • Debt Capital Providers → Banks (Senior Lenders), Institutional Investors, Specialty Lenders (Mezzanine Funds) The incentive to provide funds to a company, whether the financing is in the form of debt or equity, is to earn a sufficient rate of return relative to the risk of providing the capital. The weighted average cost of capital (WACC) is the rate of return that reflects a company’s risk-return profile, where each source of capital is proportionately weighted. The formula to calculate the weighted average cost of capital (WACC) is as follows. Cost of Capital (WACC) = [kd × (D ÷ (D + E))] + [ke × (E ÷ (D + E))] • WACC → Weighted Average Cost of Capital • kd → After-Tax Cost of Debt • ke → Cost of Equity • D / (D + E) → Debt Weight (%) • E / (D + E) → Equity Weight (%) How to Calculate Cost of Capital The step-by-step process to calculate the weighted average cost of capital (WACC) is as follows. 
• Step 1 → Calculate After-Tax Cost of Debt (kd) • Step 2 → Calculate Cost of Equity (ke) with the Capital Asset Pricing Model (CAPM) • Step 3 → Determine the Capital Weights (%) • Step 4 → Multiply Each Capital Cost by the Corresponding Capital Weight • Step 5 → Sum of the Capital Structure Weight-Adjusted Capital Costs is the Cost of Capital (WACC) One crucial rule to abide by is that the cost of capital and the represented stakeholder group must match. The cost of capital metric and corresponding group of represented stakeholder(s) are each outlined here: • Weighted Average Cost of Capital (WACC) → All Stakeholders, Including Debt, Common Equity, and Preferred Stock • Cost of Equity (ke) → Common Equity Shareholders • Cost of Debt (kd) → Debt Lenders • Cost of Preferred Stock (kp) → Preferred Stockholders Step 1. Calculate Cost of Debt (kd) The starting point to compute a company’s weighted average cost of capital (WACC) is the cost of debt (kd) component. The cost of debt (kd) is the minimum yield that debt holders require to bear the burden of structuring and offering debt capital to a specific borrower. Conceptually, the cost of debt can be thought of as the effective interest rate that a company must pay on its long-term financial obligations, assuming the debt issuance occurs at present. Or said differently, the cost of debt is the minimum required yield that lenders expect to receive on a financing arrangement, where there is adequate compensation for the potential risk of incurring a capital loss from providing debt to a specific borrower. Estimating the cost of debt is relatively straightforward in comparison to the cost of equity since existing debt obligations such as loans and bonds have interest rates that are readily observable in the market via Bloomberg and other 3rd party data platforms. 
Referencing the market-based yield from Bloomberg (or related resources) is the preferred option, and the pre-tax cost of debt can also be manually determined by dividing a company’s annual interest expense by its total debt balance. The formula to calculate the pre-tax cost of debt, or “effective interest rate,” is as follows. Pre-Tax Cost of Debt = Annual Interest Expense ÷ Total Debt Balance Since the interest paid on debt is tax-deductible, the pre-tax cost of debt must be converted into an after-tax rate using the following formula. After-Tax Cost of Debt = Pre-Tax Cost of Debt × (1 – Tax Rate) Contrary to the cost of equity, the cost of debt must be tax-affected by multiplying by (1 – Tax Rate) because interest expense is tax-deductible, i.e. the interest “tax shield” reduces a company’s pre-tax income (EBT) on its income statement. The yield to maturity (YTM) on a company’s long-term debt obligations, namely corporate bonds, is a reliable estimate for the pre-tax cost of debt for issuers who are ascribed investment-grade credit ratings, which are determined by independent credit agencies (S&P Global, Fitch, and Moody’s). Investment-grade debt is deemed to carry less credit risk and the borrower is at a lower risk of default; hence the designation of a higher credit rating. Usually, the book value of debt is a reasonable proxy for the market value of debt, assuming the issuer’s debt is trading near par, instead of at a premium or discount to par. The higher the cost of debt, the greater the credit risk and risk of default (and vice versa for a lower cost of debt). Step 2. Calculate Cost of Equity (ke) The cost of equity (ke) is the minimum required rate of return for common equity investors that reflects the risk-reward profile of a given security. 
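The Step 1 formulas above reduce to two one-line helpers. Below is an illustrative Python sketch; the function names are ours, and the figures are made-up round numbers rather than data from any real filing:

```python
def pre_tax_cost_of_debt(annual_interest_expense: float, total_debt: float) -> float:
    """Effective interest rate implied by existing debt obligations."""
    return annual_interest_expense / total_debt

def after_tax_cost_of_debt(pre_tax_kd: float, tax_rate: float) -> float:
    """Interest is tax-deductible, so the interest 'tax shield'
    lowers the effective cost of debt."""
    return pre_tax_kd * (1 - tax_rate)

# Hypothetical borrower: $50M of annual interest on $1,000M of total debt,
# taxed at 20%.
kd_pre = pre_tax_cost_of_debt(50, 1_000)               # 5.0% pre-tax
print(round(after_tax_cost_of_debt(kd_pre, 0.20), 4))  # 0.04 -> 4.0% after-tax
```

Note that only the cost of debt is tax-affected in this way; the tax shield does not apply to the cost of equity.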
If an investor decides to contribute capital to purchase an ownership stake in the common equity of a company, the cost of equity is the expected return on the security that should compensate the investor appropriately for the degree of risk undertaken. The most common method to calculate the cost of equity (ke) by practitioners is via the capital asset pricing model (CAPM). The capital asset pricing model (CAPM) implies the expected rate of return on a security is a function of the underlying security’s sensitivity to systematic risk, which refers to the non-diversifiable component of risk. The CAPM theorizes that the return on a security, or “cost of equity,” can be determined by adding the risk-free rate (rf) to the product of a security’s beta and equity risk premium (ERP). Cost of Equity (ke) = Risk-Free Rate (rf) + β (Equity Risk Premium) The risk-free rate (rf) is most often the yield on the U.S. 10-year Treasury note on the date of the analysis. The U.S. 10-year Treasury note is deemed risk-free since such issuances are backed by the “full faith and credit” of the U.S. government. Given the unlikely scenario whereby the U.S. government is at risk of default, the government has the discretion to print more money to ensure it does not default on any of its financial obligations. In corporate finance, beta measures the sensitivity of a specific security to the systematic risk of the broader market, i.e. the historical relationship of a particular company’s stock price movements relative to the overall market (e.g. S&P 500). With that said, the higher the beta, the higher the cost of equity (and vice versa) — all else being equal. The risk associated with investing in a security can be segmented into two parts: • Unsystematic Risk ➝ Unsystematic risk, or “company-specific risk,” can effectively be mitigated via portfolio diversification. Because unsystematic risk can be diversified, this form of risk is thus ignored in the calculation of beta. 
• Systematic Risk ➝ In contrast, systematic risk, or “market risk,” is deemed to be non-diversifiable because the security’s price fluctuations (and sensitivity) to systematic risk cannot be diminished through portfolio diversification. • Equity Risk Premium (ERP) ➝ The equity risk premium (ERP), often used interchangeably with the term “market risk premium,” is the incremental risk of investing in the stock market as opposed to risk-free securities issued by the government. The CAPM states that equity shareholders require a minimum rate of return equal to the return from a risk-free security plus a return for bearing the “extra,” incremental risk. The extra risk component is equivalent to the equity risk premium (ERP) of the broader stock market multiplied by the security’s beta. The formula to calculate the equity risk premium (ERP) is the difference between the expected market return and the risk-free rate, most often proxied by the yield on the 10-year Treasury note. Equity Risk Premium (ERP) = Expected Market Return (rm) – Risk-Free Rate (rf) Historically, the equity risk premium (ERP) in the U.S. has ranged between 4.0% and 6.0%. Step 3. Determine Capital Weights (%) Once the cost of debt (kd) and cost of equity (ke) components have been determined, the final step is to compute the capital weights attributable to each capital source. The capital weight is the relative proportion of the entire capital structure composed of a specific funding source (e.g. common equity, debt), expressed in percentage form. To calculate the percent contribution of debt and equity relative to the total capitalization, the market values of debt and equity should be used to reflect the fair value rather than the book values recorded for bookkeeping purposes. Why? A valuation is performed on a forward-looking basis, so using the current values per the open markets aligns more closely with the underlying objective. 
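Putting the Step 2 pieces together, the CAPM calculation can be sketched as follows. The inputs mirror the 4.3% risk-free rate, 1.20 beta, and 6.0% equity risk premium used in the worked example later in the article; the function names are ours:

```python
def equity_risk_premium(expected_market_return: float, risk_free_rate: float) -> float:
    """ERP: the spread of the expected market return over the risk-free rate."""
    return expected_market_return - risk_free_rate

def cost_of_equity_capm(risk_free_rate: float, beta: float, erp: float) -> float:
    """CAPM: risk-free rate plus the beta-scaled equity risk premium."""
    return risk_free_rate + beta * erp

erp = equity_risk_premium(0.103, 0.043)   # 6.0% premium
ke = cost_of_equity_capm(0.043, 1.20, erp)
print(round(ke, 4))  # 0.115 -> 11.5% cost of equity
```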
The formula to calculate the capital weight for debt and equity is as follows. • Debt Weight (%) = D ÷ (D + E) • Equity Weight (%) = E ÷ (D + E) • E → Market Value of Equity (MVE), i.e. Market Capitalization (or “Market Cap”) • D → Market Value of Debt (or Book Value of Debt) • (D + E) → Total Capitalization (Entire Capital Structure) While the calculation of the market value of equity is relatively straightforward, since the process involves multiplying the company’s current stock price as of the latest closing date by its total number of diluted shares outstanding, the book value of debt is acceptable to use for the market value of debt input. Barring unusual circumstances, the market value of debt seldom deviates too far from the book value of debt, unlike the market value of equity. Cost of Capital vs. Cost of Equity: What is the Difference? The weighted average cost of capital (WACC) is the blended required rate of return, representative of all stakeholders. Hence, in an unlevered DCF model, the WACC is the appropriate discount rate to apply, as the rate must align with the cash flow metric in the represented stakeholder group(s). In comparison, the cost of equity is the right discount rate to use in a levered DCF, which forecasts the levered free cash flows of a company, as the two metrics are both attributable solely to equity holders. • Cost of Capital (WACC) → Free Cash Flow to Firm (FCFF), or Unlevered Free Cash Flow • Cost of Equity (ke) → Free Cash Flow to Equity (FCFE), or Levered Free Cash Flow If there is no debt in a company’s capital structure, the cost of capital and the cost of equity will be equivalent. How Does the Capital Structure Impact Cost of Capital? The cost of equity is higher than the cost of debt because common equity represents a junior claim that is subordinate to all debt claims. Because the interest expense paid on debt is tax-deductible, debt is considered the “cheaper” source of financing relative to equity. 
So, why aren’t corporations financed entirely with debt? The main drawback to debt financing is that it comes with fixed charges owed to the lender, namely interest and mandatory principal amortization, which causes the risk of default to rise – i.e. the addition of debt to the capital structure introduces the risk of financial distress and bankruptcy. Corporations obtain financing from external capital providers, such as equity shareholders and debt lenders, to allocate the newly raised capital into investments that earn a rate of return (or yield) in excess of the cost of capital. Because debt holders are of higher priority in the capital structure compared to equity holders – where in the event of default, lenders must receive full recovery and be repaid in full before equity holders can be distributed a portion of the proceeds, barring unusual circumstances – the expected risk and return from debt lenders is lower relative to equity investors. In particular, senior debt lenders possess the most senior claim on the cash flows and assets belonging to the underlying company. Therefore, senior lenders – most often corporate banks – often tend to prioritize capital preservation and risk mitigation in lieu of a higher yield. On the other hand, common equity is perceived to be the riskiest piece of the capital structure, as common shareholders represent the lowest priority class in the order of repayments. But unlike debt securities, where the return is relatively fixed for the most part (i.e. via the collection of interest and repayment of the original principal in full at maturity), the potential upside in returns from investing in equity securities is “uncapped”. The lower the cost of capital (WACC), the higher the present value (PV) of a company’s discounted future free cash flows (FCFs) – all else being equal. 
In closing, the optimal capital structure is therefore the mix of debt and equity that minimizes a company’s cost of capital (WACC) while maximizing its firm valuation. Cost of Capital Calculator We’ll now move on to a modeling exercise. 1. Cost of Debt Calculation Example Suppose we’re tasked with estimating the weighted average cost of capital (WACC) for a company given the following set of initial assumptions. • Pre-Tax Cost of Debt = 5.0% • Tax Rate (%) = 20.0% The first step toward calculating the company’s cost of capital is determining its after-tax cost of debt. Since the pre-tax cost of debt was provided as an assumption, we’ll apply the 20.0% tax rate to compute the after-tax cost of debt, which comes out to be 4.0%. • After-Tax Cost of Debt (kd) = 5.0% × (1 – 20.0%) = 4.0% 2. Cost of Equity Calculation Example In the next step, the cost of equity of our company will be calculated using the capital asset pricing model (CAPM). The three assumptions relevant to the CAPM are as follows: 1. Risk-Free Rate (rf) = 4.3% 2. Beta (β) = 1.20 3. Equity Risk Premium (ERP) = 6.0% The risk-free rate (rf) is the yield on the 10-year Treasury as of the present date. The beta of 1.20 signifies the company’s equity securities are 20% riskier than the broader market. Therefore, if the S&P 500 were to rise 10%, the company’s stock price would be expected to rise by 12%. The equity risk premium (ERP) is the spread between the expected market return and the 4.3% risk-free rate, so the 6.0% risk premium implies the expected market return is approximately 10.3%. • Market Return (rm) = 6.0% + 4.3% = 10.3% Upon inputting those figures into the CAPM formula, the cost of equity (ke) comes out to be 11.5%. • Cost of Equity (ke) = 4.3% + (1.20 × 6.0%) = 11.5% 3. 
Cost of Capital Calculation Example In the final step, we must now determine the capital weights of the debt and equity components, or in other words, the percentage contribution of each funding source. Often, practitioners use the net debt metric – i.e. total debt less cash and equivalents – rather than the gross debt figure while calculating the capital weights. Why? The cash and cash equivalents sitting on a company’s balance sheet, such as marketable securities, can hypothetically be liquidated to help pay down a portion (or the entirety) of its outstanding gross debt. The market value of equity will be assumed to be $100 billion, whereas the net debt balance is assumed to be $25 billion. • Market Value of Equity = $100 billion • Net Debt = $25 billion While the market value of debt should be used, the book value of debt shown on the balance sheet is usually fairly close to the market value (and can be used as a proxy should the market value of debt not be available). The sum of the $100 billion in equity value and $25 billion in net debt results in the total capitalization, which equals $125 billion. • Total Capitalization = $100 billion + $25 billion = $125 billion Of that $125 billion, we can determine the percent composition of the company’s capital structure by dividing each capital source’s value by the total capitalization. • Equity Weight (%) = $100 billion ÷ $125 billion = 80.0% • Debt Weight (%) = $25 billion ÷ $125 billion = 20.0% In total, the sum of the equity and debt weights must equal 100% (or 1.0), which is true in our case (80% + 20% = 100%). Since we have the necessary inputs to calculate our company’s cost of capital, the sum of each capital source cost can be multiplied by the corresponding capital structure weight to arrive at 10.0% for the implied cost of capital. 
• Cost of Capital (WACC) = (4.0% × 20.0%) + (11.5% × 80.0%) = 10.0%
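The three calculation steps above can be chained into one short script that reproduces the article's 10.0% result; the assumptions are hard-coded from the worked example:

```python
# Step 1: after-tax cost of debt
pre_tax_kd = 0.05
tax_rate = 0.20
kd = pre_tax_kd * (1 - tax_rate)   # 4.0%

# Step 2: cost of equity via CAPM
rf, beta, erp = 0.043, 1.20, 0.06
ke = rf + beta * erp               # 11.5%

# Step 3: capital weights from market value of equity and net debt
equity, net_debt = 100, 25         # $ billions
total_cap = equity + net_debt      # $125 billion
w_e = equity / total_cap           # 80%
w_d = net_debt / total_cap         # 20%

# Steps 4-5: weight each capital cost and sum
wacc = kd * w_d + ke * w_e
print(f"{wacc:.1%}")               # 10.0%
```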
Multiplication Chart Class Playground | Multiplication Chart Printable – A multiplication chart is a valuable tool for children learning to multiply, divide, and find common denominators. There are many uses for a multiplication chart. What is a Printable Multiplication Chart? A multiplication chart can be used to help children learn their multiplication facts. Multiplication charts come in many forms, from full-page times tables to single-page ones. While individual tables are useful for presenting portions of information, a full-page chart makes it easier to review facts that have already been mastered. A multiplication chart will normally include a left column and a top row. To find the product of two numbers, pick the first number from the left column and the second number from the top row. Multiplication charts are valuable learning tools for both adults and children. Full-size printable multiplication charts are available on the Internet and can be printed out and laminated for durability. Why Do We Use a Multiplication Chart? A multiplication chart is a layout that shows how to multiply two numbers: you choose the first number in the left column, follow it along its row, and then choose the second number from the top row. Multiplication charts are practical for many reasons, including helping children learn how to divide and simplify fractions. They can also help children learn how to choose a common denominator. Multiplication charts can likewise be helpful as desk resources because they serve as a constant reminder of the student's progress. These tools help us develop independent learners who understand the basic concepts of multiplication. 
Multiplication charts are also useful for helping pupils memorize their times tables. As with any skill, memorizing multiplication tables takes time as well as practice. If you're searching for a full-size printable multiplication chart, you've come to the right place. Multiplication charts are available in different formats, including full size, half size, and a range of decorative designs. Some are vertical, while others feature a horizontal layout. You can also find worksheet printables that include multiplication equations and math facts. Multiplication charts and tables are essential tools for children's education. These charts are excellent for use in homeschool math binders or as classroom posters. A full-size printable multiplication chart is a valuable tool to reinforce math facts and can help a child learn multiplication quickly. It's also a good tool for skip counting and learning the times tables.
Amps Draw Calculator - Calculator Wow The Amps Draw Calculator is a vital tool used to estimate the electrical current draw of devices or appliances based on their power ratings and the voltage of the power source. Understanding amperage draw is essential in electrical engineering, home improvement projects, and energy management to ensure safe and efficient operation of electrical systems. Importance of Amps Draw Calculation Calculating amps draw helps in several crucial aspects: • Safety: Ensures that electrical circuits and components do not exceed their current-carrying capacity, preventing overheating and potential hazards. • Efficiency: Optimizes energy consumption by accurately determining the electrical load of devices, aiding in energy conservation efforts. • Design and Planning: Guides engineers, electricians, and homeowners in designing circuits, selecting appropriate wiring, and planning electrical installations. How to Use the Amps Draw Calculator Using the Amps Draw Calculator is straightforward: 1. Enter Power Rating: Input the power rating of the device or appliance in watts. 2. Enter Source Voltage: Input the voltage of the power source (e.g., 120V, 240V). 3. Calculate Amps: Click the calculate button to obtain the amperage draw, calculated as Amps = Power Rating ÷ Source Voltage. 4. Interpret Results: The calculator displays the amperage draw, helping users understand the electrical load and plan accordingly. 10 FAQs About the Amps Draw Calculator 1. Why is amperage draw important in electrical systems? • Amperage draw indicates the amount of current flowing through electrical components, crucial for preventing overloads and ensuring system reliability. 2. How does amperage draw affect electrical safety? • Knowing the current draw helps in selecting proper wire sizes, breakers, and fuses to prevent overheating and fire hazards. 3. 
What units are used to measure amperage draw? • Amperage is measured in Amperes (A), commonly referred to as Amps. 4. Can the Amps Draw Calculator be used for both AC and DC circuits? • Yes, the calculator applies to both alternating current (AC) and direct current (DC) circuits, considering the voltage and power rating. 5. What are typical amperage ratings for household appliances? • Appliances vary widely; for example, a toaster may draw around 10 Amps, while a refrigerator might draw 5-8 Amps. 6. How can understanding amperage draw save energy? • By accurately assessing device loads, users can identify energy-efficient appliances and optimize usage patterns to reduce electricity consumption. 7. Is amperage draw affected by power factor? • Yes, the power factor influences the relationship between watts (power) and amps (current) in AC circuits. 8. How often should amperage draw be calculated? • It should be calculated whenever adding new appliances, conducting electrical upgrades, or troubleshooting circuit issues. 9. What safety precautions should be taken when working with electrical currents? • Always ensure circuits are de-energized before making adjustments, and follow local electrical codes and safety guidelines. 10. Can the Amps Draw Calculator be used for automotive applications? • Yes, it can help calculate the current draw of automotive accessories and devices, aiding in vehicle electrical system design and modifications. In conclusion, the Amps Draw Calculator is an indispensable tool for professionals and homeowners alike, offering valuable insights into electrical consumption and safety. By accurately calculating amperage draw, individuals can safeguard electrical systems, optimize energy efficiency, and make informed decisions regarding electrical installations and usage. Understanding the principles of amperage draw empowers users to manage electricity effectively, reduce operational costs, and promote sustainable energy practices. 
Whether for residential, commercial, or industrial applications, leveraging the Amps Draw Calculator improves electrical planning, strengthens safety measures, and supports reliable electrical system performance.
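In code, the calculation the site describes is a one-liner; a minimal sketch (the example wattage and voltage values below are my own, not from the calculator):

```python
def amps_draw(power_watts, voltage_volts):
    # Amps = Power Rating / Source Voltage
    return power_watts / voltage_volts

# A 1200 W appliance on a 120 V circuit draws 10 A.
print(amps_draw(1200, 120))  # 10.0
```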
Areas Of Parallelograms And Triangles - Edu Spot- NCERT Solution, CBSE Course, Practice Test

Note: As per the revised CBSE curriculum, this chapter has been removed from the syllabus for the 2020-21 academic session.

The area represents the amount of planar surface covered by a closed geometric figure.

Area of a parallelogram

The area of a parallelogram is the product of any of its sides and the corresponding altitude.
Area of a parallelogram = b×h
Where 'b' is the base and 'h' is the corresponding altitude (height).

Area of a triangle

Area of a triangle = (1/2)×b×h
Where 'b' is the base and 'h' is the corresponding altitude.

Parallelograms on the Common Base and Between the Same Parallels

Theorem: Parallelograms that lie on the common base and between the same parallels are equal in area.
Two parallelograms are said to be on the common/same base and between the same parallels if
a) They have a common side.
b) The sides parallel to the common side lie on the same straight line.

Triangles on the Common Base and Between the Same Parallels

Theorem: Triangles that lie on the same or the common base and also between the same parallels are equal in area. Here, ar(ΔABC)=ar(ΔABD)
Two triangles are said to be on the common base and between the same parallels if
a) They have a common side.
b) The vertices opposite the common side lie on a straight line parallel to the common side.

Two Triangles Having the Common Base & Equal Areas

If two triangles have equal bases and are equal in area, then their corresponding altitudes are equal.

A Parallelogram and a Triangle Between the Same Parallels

Theorem: If a triangle and a parallelogram are on the common base and between the same parallels, then the area of the triangle is equal to half the area of the parallelogram.
A triangle and a parallelogram are said to be on the same base and between the same parallels if
a) They have a common side.
b) The vertices opposite the common side lie on a straight line parallel to the common side.
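The two area formulas above can be sketched in code (the base and height values are illustrative, not from the lesson):

```python
def parallelogram_area(b, h):
    # Area of a parallelogram = b * h
    return b * h

def triangle_area(b, h):
    # Area of a triangle = (1/2) * b * h
    return 0.5 * b * h

# A triangle on the same base and between the same parallels as a
# parallelogram has half the parallelogram's area:
print(parallelogram_area(6, 4))  # 24
print(triangle_area(6, 4))       # 12.0
```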
Search Algorithms and Techniques Expand the project developed - Superb Essay Writers

Exercise 2: Search Algorithms and Techniques

Expand the project developed in the previous exercise to perform the following experiment: Time the sequential search and the binary search methods several times each for randomly generated values, and record the results in a table. Do not time individual searches, but groups of them. For example, time 100 searches together or 1,000 searches together. Compare the running times of these two search methods that are obtained during the experiment. Regarding the efficiency of both search methods, what conclusion can be reached from this experiment? Both the table and your conclusions should be included in a separate Word document.

Exercise 3: Searching Applications

The following problem is a variation of Project 4 in the "Projects" section of Chapter 18 in our textbook: Consider an array data of n numerical values in sorted order and a list of two numerical target values. Your goal is to compute the smallest range of array indices that contains both of the target values. If a target value is smaller than data[0], the range should start with -1. If a target value is larger than data[n-1], the range should end with n. For example, given the array

index: 0  1  2  3  4  5  6  7
value: 5  8 10 13 15 20 22 26

the following table illustrates target values and their corresponding ranges:

2, 8    [-1, 1]
9, 14   [1, 4]
12, 21  [2, 6]
14, 30  [3, 8]

Devise and implement an algorithm that solves this problem.

Exercise 4: Hashing

Suppose that the type of key in a hashing application you are implementing is string (Sections 21.7 and 21.8 in our textbook explain hash functions for strings). Design and implement a hash function that converts a key to a hash value. Test your function by computing the hash values for the Java keywords. Was a key collision produced?
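One way to approach Exercise 3 (my own sketch, not the textbook's solution) is two binary searches: the left end of the range is the largest index whose value is at most the smaller target (or -1), and the right end is the smallest index whose value is at least the larger target (or n). Python's bisect module does both lookups directly:

```python
from bisect import bisect_left, bisect_right

def smallest_range(data, t1, t2):
    """Smallest index range [lo, hi] of sorted `data` containing both targets."""
    lo_t, hi_t = min(t1, t2), max(t1, t2)
    lo = bisect_right(data, lo_t) - 1  # -1 when lo_t < data[0]
    hi = bisect_left(data, hi_t)       # len(data) when hi_t > data[-1]
    return [lo, hi]

data = [5, 8, 10, 13, 15, 20, 22, 26]
print(smallest_range(data, 2, 8))    # [-1, 1]
print(smallest_range(data, 9, 14))   # [1, 4]
print(smallest_range(data, 12, 21))  # [2, 6]
print(smallest_range(data, 14, 30))  # [3, 8]
```

The printed ranges match the example table above; each query costs O(log n).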
Tatistic, is calculated, testing the association between transmitted/non-transmitted and high-risk | AMPK Inhibitor

A statistic is calculated, testing the association between transmitted/non-transmitted and high-risk/low-risk genotypes. The phenomic evaluation procedure aims to assess the impact of PC on this association. For this, the strength of association between transmitted/non-transmitted and high-risk/low-risk genotypes within the different PC levels is compared using an analysis of variance model, resulting in an F statistic. The final MDR-Phenomics statistic for each multilocus model is the product of the C and F statistics, and significance is assessed by a non-fixed permutation test.

Aggregated MDR

The original MDR approach does not account for the accumulated effects from multiple interaction effects, due to the selection of only one optimal model during CV. The Aggregated Multifactor Dimensionality Reduction (A-MDR), proposed by Dai et al. [52], makes use of all significant interaction effects to build a gene network and to compute an aggregated risk score for prediction. Cells cj in each model are classified either as high risk, if n1j/nj exceeds n1/n, or as low risk otherwise. Based on this classification, three measures to assess each model are proposed: predisposing OR (ORp), predisposing relative risk (RRp) and predisposing χ² (χ²p), which are adjusted versions of the usual statistics. The unadjusted versions are biased, as the risk classes are conditioned on the classifier. Let x be OR, relative risk or χ²; then ORp, RRp or χ²p are the corresponding statistics adjusted using F0, where F0 is estimated by a permutation of the phenotype and F is estimated by resampling a subset of samples. Using the permutation and resampling data, P-values and confidence intervals can be estimated. Instead of a fixed α = 0.05, the authors propose to pick an α ≤ 0.05 that maximizes the area under a ROC curve (AUC). For each α, the models with a P-value less than α are selected. For each sample, the number of high-risk classes among these selected models is counted to obtain an aggregated risk score. It is assumed that cases will have a higher risk score than controls. Based on the aggregated risk scores a ROC curve is constructed, and the AUC can be determined. Once the final α is fixed, the corresponding models are used to define the 'epistasis enriched gene network' as an adequate representation of the underlying gene interactions of a complex disease and the 'epistasis enriched risk score' as a diagnostic test for the disease. A considerable side effect of this approach is that it has a large gain in power in case of genetic heterogeneity, as simulations show.

The MB-MDR framework

Model-based MDR (MB-MDR) was first introduced by Calle et al. [53] while addressing some major drawbacks of MDR, including that important interactions may be missed by pooling too many multi-locus genotype cells together and that MDR cannot adjust for main effects or for confounding factors. All available data are used to label each multi-locus genotype cell. The way MB-MDR carries out the labeling conceptually differs from MDR, in that each cell is tested versus all others using suitable association test statistics, depending on the nature of the trait measurement (e.g. binary, continuous, survival). Model selection is not based on CV-based criteria but on an association test statistic (i.e. final MB-MDR test statistics) that compares pooled high-risk with pooled low-risk cells.
Lastly, permutation-based techniques are applied to MB-MDR's final test statistics.
The 6/2(1+2) Ambiguity

I recently saw a post about a simple math problem, to which I was sure there was a clear answer. I was impressed by how many people get it wrong for so many different reasons, and even more impressed that everybody was focusing on the "wrong part" of the equation.

C++ Sample - Doesn't Mean It Follows Math Rules

6/2(1+2) = 1 // Requires operator () to work
6/2*(1+2) = 9

Precedences by Layout

Before going to actual math rules, I want to make it clear that the way a formula looks indicates a lot to me. I am tempted to say that, without any parentheses, a formula like 10+20 / 1+2 seems to tell us that we should calculate (10+20) / (1+2) instead of 10 + (20/1) + 2. The math rules differ. Division comes before addition. Yet, why would someone add those extra spaces clearly visible in the formula if they expected 20/1 to be parsed before anything else? I would agree to use just math rules if the formula was written like 10+20/1+2. It would also make a lot of sense if it was written like 10 + 20/1 + 2. But when it is written like 10+20 / 1+2, I am at least curious (or maybe concerned) whether this is a tricky question leading us to a mistake or whether the person who wrote the formula really expected the division to take place last. Maybe they wanted to express this:

And as that didn't have any parentheses, they kept the formula with no parentheses but with some extra spaces. Who knows?

The Initial Problem

Going back to the initial problem, it was presented with this equation:

If I translate this image to text, I am tempted to say it would look like this: 6 / 2(1+2). I really see that there is extra space around the division symbol, and the entire 2(1+2) seems to be packed together. That, alone, gives me the impression we should calculate 2(1+2) before the division.

Again, I am talking about visual spaces and not math rules. So, to try to focus just on math rules, let me rewrite the expression:

Now this problem only needs to follow math rules to be solved.
I saw some people saying that multiplication happens before division, but those were largely outnumbered. Almost everybody agreed: first we need to resolve the parentheses, then multiplication and division have the same priority, so we go left to right. Then, they would do something like this:

6/2(1+2) -> We focus on (1+2), which becomes 3.

And then many people continued:

6/2*3 -> So we divide 6/2

Yet, my impression is that those people were just skipping one step here. When we calculate what's inside the parentheses, we actually end up with:

6/2(3)

We never said what happens to 2(3). And it seems that it is here where the real problem lies. Many people said that at that point, we had already finished with the parentheses and that the expression should be rewritten as 6/2*3. Yet, when I was in elementary school, we could never do that, as that would be changing the priority of things. 2(3) should be seen as a packet that comes together, and we would not have solved the parentheses until the multiplication was finished. So, if we really wanted to rewrite the formula instead of doing the multiplication directly, we would rewrite the formula as 6/(2*3) and we would still have parentheses to solve. That, I remember, was a clear difference between something like:

6/2(1+2) and consequently 6/2(3)
6/2*(1+2) and consequently 6/2*(3).

The 2(3) was a group that needed to be solved together. Not because multiplication comes first, but because the 2 "owns" the 3 (or if you prefer, it is a kind of a function call like two(three)). We might also say that to be able to get rid of the parentheses we needed to apply a special kind of multiplication. Something like 6/2*(3), on the other hand, had the (3) clearly isolated and would become 6/2*3, to then follow the simple left-to-right rules. This is so true that many calculators treat 6/2(3) as a different expression than 6/2*3 or 6/2*(3).
The interpretation that considers the expressions different is the strong grouping one (the one that I learned as a kid). The interpretation that considers the expressions as being equal is the weak grouping one. I was actually surprised to learn that it is considered "right by the new rules" to treat 2(3) as a simple multiplication, like 2*3, instead of a higher-priority one, like (2*3), allowing the division 6/2 to happen before multiplying by 3. Apparently, Google and Bing decided to use that rule. Yet, I see that many people agree that 2(3) seems to have an "implicit multiplication" and must be executed first. It might be just a difference in how schools teach these things, and there doesn't seem to be a final answer or consensus here, to the point that many just say the formula is "ambiguous" (or broken) and needs parentheses. Yet, why do we have "priority rules" if we need to use parentheses? I understand them when we want a different priority, not when we just have a plain expression. More important, though, the entire issue is happening with parentheses and how to resolve the implicit multiplication done with them.

Implicit Parentheses, "Packed Items" and Building Formulas

Although there are still differences in how that particular formula needs to be resolved, I think it is important to understand how formulas are created and combined together. For example, the diameter of a circle is two times its radius. For this formula, let's write:

d = 2r

Now, let's say I want to calculate how many circles can be aligned together in a particular length. For example:

n = L/d

If I don't have d, but have r, I could replace d by 2r and end up with this formula:

n = L / 2r

Notice that here I either:
• Did it wrong. Now people are going to do L/2 and then multiply that by r. The formula should use parentheses and be L/(2r).
• Did it right.
Not only does the / have spaces around it, visually indicating lower precedence, but 2r is a single "packet" that needs to be evaluated before the division. Of course, if we were drawing it, it would look like:

And that would probably be clear enough without any need for parentheses. Needless to say, precedence rules or not, writing clean formulas is like writing clean code. It doesn't matter if a compiler can parse it and do the right thing. Are other people (or even us in the future) going to get it right?

Another Source

I personally loved this explanation from Quora. These are some examples found there:

Going Back to the Original Problem

Maybe it's because I am a programmer, but to me 2(3) or similar really looks like a function call, and I believe it should have higher priority. I know that was the rule when I was a kid, but never mind that; I tried to replicate a multiplication behavior with just parentheses. I was only able to do that in C++ (from the programming languages I normally use). The C++ sample evaluates 6/2(1+2) and 6/2*(1+2), but I had to use named variables, like six/two(one+two), for things to work. What I wanted was just to implement the operator () for ints but, as I couldn't do that, I did the minimum implementation of an "int like" class with the operators for +, *, / and the "implicit multiplication" operator () to make it work. I did not alter in any way the order in which the C++ compiler evaluates the expression. I literally just added the () operator to "integer like" objects. That led to the results I was expecting. You can execute the code at https://onlinegdb.com/Qul1J9eHe. And, in case that's not working, this is a copy of the code:

// Written by Paulo Zemek on December First, 2021.
// The purpose of this sample app is to show that 6/2(1+2)
// matches so well some programming standards that we should
// be evaluating 2(1+2) differently than 2*(1+2).
// This is all about trying to show some standards to things that seem
// to have forgotten what a standard is.

#include <iostream>

class MyInt
{
public:
    MyInt(int value) : _value(value)
    {
    }

    // The "implicit multiplication" operator: two(three) multiplies,
    // but with function-call precedence.
    MyInt operator ()(const MyInt &other)
    {
        return MyInt(_value * other._value);
    }

    int intValue() const
    {
        return _value;
    }

    MyInt operator + (const MyInt &other)
    {
        return MyInt(_value + other._value);
    }

    MyInt operator / (const MyInt &other)
    {
        return MyInt(_value / other._value);
    }

    MyInt operator * (const MyInt &other)
    {
        return MyInt(_value * other._value);
    }

private:
    int _value;
};

int main()
{
    // Unfortunately I don't know if I can replace the int behavior to treat
    // the 2(3) as a multiplication, so I created names for 1, 2 and 6, and used
    // the named variables for my calculations.
    MyInt one(1);
    MyInt two(2);
    MyInt six(6);

    auto equation1 = six/two(one+two);
    std::cout << "6/2(1+2) = " << equation1.intValue() << std::endl;

    auto equation2 = six/two*(one+two);
    std::cout << "6/2*(1+2) = " << equation2.intValue() << std::endl;

    return 0;
}

The output of this app is:

6/2(1+2) = 1
6/2*(1+2) = 9

Well... my only conclusion is that although sites like Google and Bing decided to treat 2(3) as a simple multiplication, which can get their parsing order scrambled if it follows a division, it is not really ambiguous by the formula what to do. The ambiguity seems to come from the (lack of) math rules around the issue. At least when programming in C++, we can actually find a non-ambiguous answer to how to interpret these odd expressions. I hope this helps to unify Math and C++ parsing logic in the future, but if that doesn't happen, it was just a fun exercise for me and I hope you have fun with it too.

• 4th December, 2021: Added the image about the different results from calculators
• 3rd December, 2021: Added some external Algebra examples and tried to clarify how the C++ sample works
• 2nd December, 2021: Moved the C++ Sample result to come after the Introduction
• 1st December, 2021: Initial version
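As an aside not in the original article: a mainstream language like Python also parses 2(1+2) as a function call, but since a plain int is not callable, the "implicit multiplication" reading is not even expressible there; only the explicit form works:

```python
# Explicit multiplication, left to right: (6/2) * 3
print(6/2*(1+2))  # 9.0

# 2(1+2) is parsed as calling the integer 2, which fails at runtime.
try:
    6/2(1+2)
except TypeError as e:
    print("not callable:", e)
```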
COMP 3270 Introduction to Algorithms Homework 1 solved

(1. 6pts) Computational problem solving: Estimating problem solving time

Suppose that there are three algorithms to solve a problem: an O(n) algorithm (A1), an O(n log n) algorithm (A2), and an O(n ) algorithm (A3), where log is to the base 2. Using the techniques and assumptions presented in slide set L2-Buffet(SelectionProblem), determine how long in seconds it will take for each algorithm to solve a problem of size 200 million. You must show your work to get credit, i.e., a correct answer without showing how it is derived will receive zero.

(2. 4pts) Computational problem solving: Problem specification

Problem Definition: You are a member of a software engineering team, which is tasked to develop a mobile application for the SmartCity initiatives sponsored by an enterprise. The application is a transport sharing tool that will connect a driver with empty seats to people traveling to a specific target location. The driver's application will query a remote service to determine specific requests submitted by customers, where each request is defined by the position of the person making the ride sharing request. The location of a person is specified as a pair (latitude, longitude). Given a list of requests as input and the computed shortest distances in terms of turn-by-turn directions between each pair of locations, the application generates the shortest possible route that visits each location exactly once (to pick up a customer) and returns to the target location. Specify the problem to a level of detail that would allow you to develop solution strategies and corresponding algorithms. State the problem specification in terms of (1) inputs, (2) discrete structures/data representation, and (3) desired outputs. No need to discuss solution strategies.

(3. 8pts) Computational problem solving: Developing strategies

Given a set of n numbers, explain a correct and efficient strategy to find the i largest numbers in sorted order.
Your description should be such that the strategy is clear, but at the same time the description should be at the level of strategy (e.g., sort the numbers and list the i largest – you should devise a different strategy than this obvious one). Then state the total number of steps an algorithm that implements the strategy will have to consider as a function of n.

(4. 12pts) Computational problem solving: Understanding an algorithm and its strategy by simulating it on a specific input

Understand the following algorithm. Simulate it mentally on the following four inputs, and state the outputs produced (value returned) in each case:
(a) A: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
(b) A: [-1, -2, -3, -4, -5, -6, -7, -8, -9, -10];
(c) A: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
(d) A: [-1, 2, -3, 4, -5, 6, 7, -8, 9, -10].

Algorithm Mystery(A: array[1..n] of integer)
  sum, max: integer
  1 sum = 0
  2 max = 0
  3 for i = 1 to n
  4   sum = 0
  5   for j = i to n
  6     sum = sum + A[j]
  7     if sum > max then
  8       max = sum
  9 return max

Output when input is array (a) above:
Output when input is array (b) above:
Output when input is array (c) above:
Output when input is array (d) above:

What does the algorithm return when the input array contains all negative integers? What does the algorithm return when the input array contains all non-negative integers?

(5. 10pts) Computational problem solving: Calculating approximate complexity

Using the approach described in class (L5-Complexity), calculate the approximate complexity of the Mystery algorithm above by filling in the table below.

(6. 13pts) Calculate the detailed complexity T(n) of algorithm Mystery in the table below, then determine the expression for T(n) and simplify it to produce a polynomial in n.

T(n) =

(7. 5pts) Computational problem solving: Proving correctness/incorrectness

Is the algorithm below correct or incorrect? Prove it! It is supposed to count the number of all identical integers that appear consecutively in a file of integers.
E.g., if f contains "1 2 3 3 3 4 3 5 6 6 7 8 8 8 8" then the correct answer is 9.

Count(f: input file)
  count, i, j: integer
  count = 0
  while (end-of-file(f) = false)
    i = read-next-integer(f)
    if (end-of-file(f) = false) then
      j = read-next-integer(f)
      if i = j then
        count = count + 1
  return count

(8. 5pts) Computational problem solving: Proving Correctness

Let A be an algorithm that finds the kth largest of n elements by a sequence of comparisons. Prove by contradiction that A collects enough information to determine which elements are greater than the kth largest and which elements are less than it. In other words, you can partition the set around the kth largest element without making more comparisons.

(9. 8pts) Computational problem solving: Proving Correctness

Function g(n: nonnegative integer)
  if n <= 1 then return(n)

Prove by induction that algorithm g is correct, if it is intended to compute the function 3 n − 2 for all n ≥ 0.

(10. 5pts) Computational problem solving: Proving Correctness

Complete the proof by loop invariant method to show that the following algorithm is correct.

Algorithm Max(A: Array[1..n] of integer)
  max = A[1]
  for i = 2 to n
    if A[i] > max then max = A[i]
  return max

(11. 10pts) Computational problem solving: Proving Correctness

Complete the proof by loop invariant method to show that the following algorithm is correct.

Algorithm Convert(n: positive integer)
  output: b (an array of bits corresponding to the binary representation of n)
  t = n
  k = 0
  while (t > 0)
    k = k + 1
    b[k] = t mod 2
    t = t div 2   (div refers to integer division)

Use the following loop invariant: If m is the integer represented by the binary array b[1..k], then n = t·2^k + m.

(12. 14pts) Computational problem solving: Algorithm Design

(a. 10pts) Describe a recursive algorithm to reverse a string that uses the strategy of swapping the first and last characters and recursively reversing the rest of the string.
Assume the string is passed to the algorithm as an array A of characters, A[p..q], where the array has starting index p and ending index q, and the length of the string is n = q − p + 1. The algorithm should have only one base case, when it gets an empty string. Assume you have a swap(A[i], A[j]) function available that will swap the characters in cells i and j. Write the algorithm using pseudocode without any programming language specific syntax. Your algorithm should be correct as per the technical definition of correctness. (b) (4pts) Draw your algorithm’s recursion tree on input string “i<33270!” – remember to show inputs and outputs of each recursive execution including the execution of any base cases.
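As a cross-check for Exercise 4, the Mystery pseudocode transcribes directly to Python; this sketch and its printed values are my own, not part of the assignment handout:

```python
def mystery(A):
    # Direct transcription of Algorithm Mystery: for every start index i,
    # accumulate sums A[i..j] and track the largest one seen (starting at 0).
    max_sum = 0
    n = len(A)
    for i in range(n):
        s = 0
        for j in range(i, n):
            s += A[j]
            if s > max_sum:
                max_sum = s
    return max_sum

print(mystery([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))            # 55
print(mystery([-1, -2, -3, -4, -5, -6, -7, -8, -9, -10]))  # 0
print(mystery([0] * 10))                                   # 0
print(mystery([-1, 2, -3, 4, -5, 6, 7, -8, 9, -10]))       # 14
```

In other words, the algorithm returns the maximum contiguous subarray sum, floored at 0: it returns 0 for all-negative input and the total sum for all-non-negative input.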
Summative in Precalculus | Quizalize
• Q1 In which quadrant does a −285° angle lie?
• Q2 To convert radians to degrees, we use the formula of ___________.
• Q3 Using a calculator, express 95° 45′ 45″ in decimal degrees.
• Q4 How many degrees is 1 and 1/5 of a complete revolution?
• Q5 How many radians is 11/5 of a complete revolution?
• Q6 What is the length of an arc of a circle with radius 4 cm that subtends a central angle of 216°?
• Q7 Find the length of an arc of a circle with a radius of 6/π cm that subtends a central angle of 99°.
• Q8 What is the smallest positive angle coterminal with 2110°?
• Q9 What is the coterminal angle of 85°?
• Q10 If tan A = 4/5, determine (2 sin A − cos A)/(3 cos A).
• Q11 Given sec θ = −25/24 and π ≤ θ ≤ 3π/2, find sin θ + cos θ.
• Q12 What is the reference angle of −29π/6?
• Q13 For what angle θ in the third quadrant is cos θ = sin 5π/3?
• Q14 If cos θ > 0 and tan θ = −2/3, find (sec θ + tan θ)/(sec θ − tan θ).
• Q15 What is sec²θ − csc²θ, if the terminal point of an arc of length θ lies on the line joining the origin and the point (2, −6)?
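Several of the questions above (Q2, Q6, Q8) reduce to the same few conversions. Here is a small Python sketch of those conversions (mine, not part of the quiz; note that `deg % 360` returns 0, not 360, for exact multiples of a revolution):

```python
from math import pi

def to_radians(deg):
    """Q2 direction in reverse: degrees to radians -- multiply by pi/180."""
    return deg * pi / 180

def arc_length(r, deg):
    """Q6: arc length s = r * theta, with theta in radians."""
    return r * to_radians(deg)

def smallest_positive_coterminal(deg):
    """Q8: reduce modulo 360 to find the smallest nonnegative coterminal angle."""
    return deg % 360
```

For Q6, `arc_length(4, 216)` gives 4 · 1.2π ≈ 15.08 cm, and for Q8, 2110° reduces to 310°.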
Hypothesis Testing with One Sample: Additional Information and Full Hypothesis Test Examples
• In a hypothesis test problem, you may see words such as "the level of significance is 1%." The "1%" is the preconceived or preset α.
• The statistician setting up the hypothesis test selects the value of α to use before collecting the sample data.
• If no level of significance is given, a common standard to use is α = 0.05.
• When you calculate the p-value and draw the picture, the p-value is the area in the left tail, the right tail, or split evenly between the two tails. For this reason, we call the hypothesis test left-, right-, or two-tailed.
• The alternative hypothesis, H[a], tells you if the test is left-, right-, or two-tailed. It is the key to conducting the appropriate test.
• H[a] never has a symbol that contains an equal sign.
• Thinking about the meaning of the p-value: a data analyst (and anyone else) should have more confidence that he made the correct decision to reject the null hypothesis with a smaller p-value (for example, 0.001 as opposed to 0.04) even if using the 0.05 level for alpha. Similarly, for a large p-value such as 0.4, as opposed to a p-value of 0.056 (alpha = 0.05 is less than either number), a data analyst should have more confidence that she made the correct decision in not rejecting the null hypothesis. This makes the data analyst use judgment rather than mindlessly applying rules.
The following examples illustrate a left-, right-, and two-tailed test.
H[0]: μ = 5, H[a]: μ < 5. This is a test of a single population mean. H[a] tells you the test is left-tailed. The picture of the p-value is as follows:
Try It: H[0]: μ = 10, H[a]: μ < 10. Assume the p-value is 0.0935. What type of test is this? Draw the picture of the p-value.
H[0]: p ≤ 0.2, H[a]: p > 0.2. This is a test of a single population proportion. H[a] tells you the test is right-tailed.
The picture of the p-value is as follows:
Try It: H[0]: μ ≤ 1, H[a]: μ > 1. Assume the p-value is 0.1243. What type of test is this? Draw the picture of the p-value.
H[0]: p = 50, H[a]: p ≠ 50. This is a test of a single population mean. H[a] tells you the test is two-tailed. The picture of the p-value is as follows.
Try It: H[0]: p = 0.5, H[a]: p ≠ 0.5. Assume the p-value is 0.2564. What type of test is this? Draw the picture of the p-value.
Full Hypothesis Test Examples
Jeffrey, as an eight-year-old, established a mean time of 16.43 seconds for swimming the 25-yard freestyle, with a standard deviation of 0.8 seconds. His dad, Frank, thought that Jeffrey could swim the 25-yard freestyle faster using goggles. Frank bought Jeffrey a new pair of expensive goggles and timed Jeffrey for 15 25-yard freestyle swims. For the 15 swims, Jeffrey's mean time was 16 seconds. Frank thought that the goggles helped Jeffrey to swim faster than the 16.43 seconds. Conduct a hypothesis test using a preset α = 0.05. Assume that the swim times for the 25-yard freestyle are normal.
Set up the hypothesis test: Since the problem is about a mean, this is a test of a single population mean.
H[0]: μ = 16.43, H[a]: μ < 16.43
For Jeffrey to swim faster, his time will be less than 16.43 seconds. The "<" tells you this is left-tailed.
Determine the distribution needed:
Random variable: \(\overline{X}\) = the mean time to swim the 25-yard freestyle.
Distribution for the test: \(\overline{X}\) is normal (population standard deviation is known: σ = 0.8).
\(\overline{X}\sim N\left(\mu ,\frac{{\sigma }_{X}}{\sqrt{n}}\right)\) Therefore, \(\overline{X}\sim N\left(16.43,\frac{0.8}{\sqrt{15}}\right)\)
μ = 16.43 comes from H[0] and not the data. σ = 0.8, and n = 15.
Calculate the p-value using the normal distribution for a mean:
p-value = P(\(\overline{x}\) < 16) = 0.0187, where the sample mean in the problem is given as 16.
p-value = 0.0187 (This is called the actual level of significance.)
The p-value is the area to the left of the sample mean, which is given as 16. μ = 16.43 comes from H[0]. Our assumption is μ = 16.43.
Interpretation of the p-value: If H[0] is true, there is a 0.0187 probability (1.87%) that Jeffrey's mean time to swim the 25-yard freestyle is 16 seconds or less. Because a 1.87% chance is small, a mean time of 16 seconds or less is unlikely to have happened randomly. It is a rare event.
Compare α and the p-value: α = 0.05, p-value = 0.0187, so α > p-value.
Make a decision: Since α > p-value, reject H[0]. This means that you reject μ = 16.43. In other words, you do not think Jeffrey swims the 25-yard freestyle in 16.43 seconds but faster with the new goggles.
Conclusion: At the 5% significance level, we conclude that Jeffrey swims faster using the new goggles. The sample data show there is sufficient evidence that Jeffrey's mean time to swim the 25-yard freestyle is less than 16.43 seconds.
The p-value can easily be calculated. Press STAT and arrow over to TESTS. Press 1:Z-Test. Arrow over to Stats and press ENTER. Arrow down and enter 16.43 for μ[0] (null hypothesis), .8 for σ, 16 for the sample mean, and 15 for n. Arrow down to μ: (alternative hypothesis) and arrow over to < μ[0]. Press ENTER. Arrow down to Calculate and press ENTER. The calculator not only calculates the p-value (p = 0.0187) but it also calculates the test statistic (z-score) for the sample mean. μ < 16.43 is the alternative hypothesis. Do this set of instructions again except arrow to Draw (instead of Calculate). Press ENTER. A shaded graph appears with z = −2.08 (test statistic) and p = 0.0187 (p-value). Make sure when you use Draw that no other equations are highlighted in Y= and the plots are turned off.
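The calculator steps for this example can be cross-checked without a TI: Python's standard-library `statistics.NormalDist` (Python 3.8+) reproduces the same z-score and left-tailed p-value (this check is mine, not part of the original example):

```python
from math import sqrt
from statistics import NormalDist

# Jeffrey example: H0: mu = 16.43, Ha: mu < 16.43, sigma known
mu0, sigma, n, xbar = 16.43, 0.8, 15, 16.0
z = (xbar - mu0) / (sigma / sqrt(n))   # test statistic, about -2.08
p_value = NormalDist().cdf(z)          # left-tail area, about 0.0187
```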
When the calculator does a Z-Test, the Z-Test function finds the p-value by doing a normal probability calculation using the central limit theorem: \(P\left(\overline{x}<16\right)=\) 2nd DISTR normcdf\(\left(-{10}^{99},16,16.43,0.8/\sqrt{15}\right)\).
The Type I and Type II errors for this problem are as follows: The Type I error is to conclude that Jeffrey swims the 25-yard freestyle, on average, in less than 16.43 seconds when, in fact, he actually swims the 25-yard freestyle, on average, in 16.43 seconds. (Reject the null hypothesis when the null hypothesis is true.) The Type II error is that there is not evidence to conclude that Jeffrey swims the 25-yard freestyle, on average, in less than 16.43 seconds when, in fact, he actually does swim the 25-yard freestyle, on average, in less than 16.43 seconds. (Do not reject the null hypothesis when the null hypothesis is false.)
Try It: The mean throwing distance of a football for Marco, a high school freshman quarterback, is 40 yards, with a standard deviation of two yards. The team coach tells Marco to adjust his grip to get more distance. The coach records the distances for 20 throws. For the 20 throws, Marco's mean distance was 45 yards. The coach thought the different grip helped Marco throw farther than 40 yards. Conduct a hypothesis test using a preset α = 0.05. Assume the throw distances for footballs are normal. First, determine what type of test this is, set up the hypothesis test, find the p-value, sketch the graph, and state your conclusion.
Press STAT and arrow over to TESTS. Press 1:Z-Test. Arrow over to Stats and press ENTER. Arrow down and enter 40 for μ[0] (null hypothesis), 2 for σ, 45 for the sample mean, and 20 for n. Arrow down to μ: (alternative hypothesis) and set it either as <, ≠, or >. Press ENTER. Arrow down to Calculate and press ENTER. The calculator not only calculates the p-value but it also calculates the test statistic (z-score) for the sample mean.
Select <, ≠, or > for the alternative hypothesis. Do this set of instructions again except arrow to Draw (instead of Calculate). Press ENTER. A shaded graph appears with the test statistic and p-value. Make sure when you use Draw that no other equations are highlighted in Y= and the plots are turned off.
Since the problem is about a mean, this is a test of a single population mean.
H[0]: μ = 40, H[a]: μ > 40
p = 0.0062
Because p < α, we reject the null hypothesis. There is sufficient evidence to suggest that the change in grip improved Marco's throwing distance.
The traditional way to compare the two probabilities, α and the p-value, is to compare the critical value (z-score from α) to the test statistic (z-score from the data). The calculated test statistic for the p-value is −2.08. (From the central limit theorem, the test statistic formula is \(z=\frac{\overline{x}-{\mu }_{X}}{\left(\frac{{\sigma }_{X}}{\sqrt{n}}\right)}\). For this problem, \(\overline{x}\) = 16, μ[X] = 16.43 from the null hypothesis, σ[X] = 0.8, and n = 15.) You can find the critical value for α = 0.05 in the normal table (see 15.Tables in the Table of Contents). The z-score for an area to the left equal to 0.05 is midway between −1.65 and −1.64 (0.05 is midway between 0.0505 and 0.0495). The z-score is −1.645. Since −1.645 > −2.08 (which demonstrates that α > p-value), reject H[0]. Traditionally, the decision to reject or not reject was done in this way. Today, comparing the two probabilities α and the p-value is very common. For this problem, the p-value, 0.0187, is considerably smaller than α, 0.05. You can be confident about your decision to reject. The graph shows α, the p-value, the test statistic, and the critical value.
A college football coach thought that his players could bench press a mean weight of 275 pounds. It is known that the standard deviation is 55 pounds. Three of his players thought that the mean weight was more than that amount.
They asked 30 of their teammates for their estimated maximum lift on the bench press exercise. The data ranged from 205 pounds to 385 pounds. The actual different weights were (frequencies are in parentheses):
Conduct a hypothesis test using a 2.5% level of significance to determine if the bench press mean is more than 275 pounds.
Set up the hypothesis test: Since the problem is about a mean weight, this is a test of a single population mean.
H[0]: μ = 275, H[a]: μ > 275. This is a right-tailed test.
Calculating the distribution needed:
Random variable: \(\overline{X}\) = the mean weight, in pounds, lifted by the football players.
Distribution for the test: It is normal because σ is known. \(\overline{x}=286.2\) pounds (from the data). σ = 55 pounds (Always use σ if you know it.) We assume μ = 275 pounds unless our data shows us otherwise.
Calculate the p-value using the normal distribution for a mean and using the sample mean as input (see [link] for using the data as input): p-value = 0.1323.
Interpretation of the p-value: If H[0] is true, then there is a 0.1323 probability (13.23%) that the football players can lift a mean weight of 286.2 pounds or more. Because a 13.23% chance is large enough, a mean weight lift of 286.2 pounds or more is not a rare event.
Compare α and the p-value: α = 0.025, p-value = 0.1323
Make a decision: Since α < p-value, do not reject H[0].
Conclusion: At the 2.5% level of significance, from the sample data, there is not sufficient evidence to conclude that the true mean weight lifted is more than 275 pounds.
The p-value can easily be calculated. Put the data and frequencies into lists. Press STAT and arrow over to TESTS. Press 1:Z-Test. Arrow over to Data and press ENTER. Arrow down and enter 275 for μ[0], 55 for σ, the name of the list where you put the data, and the name of the list where you put the frequencies. Arrow down to μ: and arrow over to > μ[0]. Press ENTER. Arrow down to Calculate and press ENTER.
The calculator not only calculates the p-value (p = 0.1331, a little different from the previous calculation; there we used the sample mean rounded to one decimal place instead of the data) but it also calculates the test statistic (z-score) for the sample mean, the sample mean, and the sample standard deviation. μ > 275 is the alternative hypothesis. Do this set of instructions again except arrow to Draw (instead of Calculate). Press ENTER. A shaded graph appears with z = 1.112 (test statistic) and p = 0.1331 (p-value). Make sure when you use Draw that no other equations are highlighted in Y= and the plots are turned off.
Statistics students believe that the mean score on the first statistics test is 65. A statistics instructor thinks the mean score is higher than 65. He samples ten statistics students and obtains the scores 65, 65, 70, 67, 66, 63, 63, 68, 72, 71. He performs a hypothesis test using a 5% level of significance. The data are assumed to be from a normal distribution.
Set up the hypothesis test: A 5% level of significance means that α = 0.05. This is a test of a single population mean.
H[0]: μ = 65, H[a]: μ > 65
Since the instructor thinks the average score is higher, use a ">". The ">" means the test is right-tailed.
Determine the distribution needed:
Random variable: \(\overline{X}\) = average score on the first statistics test.
Distribution for the test: If you read the problem carefully, you will notice that there is no population standard deviation given. You are only given n = 10 sample data values. Notice also that the data come from a normal distribution. This means that the distribution for the test is a Student's t. Use t[df]. Therefore, the distribution for the test is t[9], where n = 10 and df = 10 − 1 = 9.
Calculate the p-value using the Student's t-distribution: p-value = P(\(\overline{x}\) > 67) = 0.0396, where the sample mean and sample standard deviation are calculated as 67 and 3.1972 from the data.
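The sample statistics quoted for the test-scores example (x̄ = 67, s ≈ 3.1972) and the resulting test statistic t ≈ 1.9781 can be verified with Python's standard library (my check, not from the text; the stdlib has no Student's-t CDF, so the p-value of 0.0396 still comes from a calculator or t-table):

```python
from math import sqrt
from statistics import mean, stdev

scores = [65, 65, 70, 67, 66, 63, 63, 68, 72, 71]
xbar = mean(scores)                               # sample mean, 67
s = stdev(scores)                                 # sample standard deviation, ~3.1972
t_stat = (xbar - 65) / (s / sqrt(len(scores)))    # ~1.9781, with df = 9
```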
Interpretation of the p-value: If the null hypothesis is true, then there is a 0.0396 probability (3.96%) that the sample mean is 67 or more.
Compare α and the p-value: Since α = 0.05 and the p-value = 0.0396, α > p-value.
Make a decision: Since α > p-value, reject H[0]. This means you reject μ = 65. In other words, you believe the average test score is more than 65.
Conclusion: At a 5% level of significance, the sample data show sufficient evidence that the mean (average) test score is more than 65, just as the math instructor thinks.
The p-value can easily be calculated. Put the data into a list. Press STAT and arrow over to TESTS. Press 2:T-Test. Arrow over to Data and press ENTER. Arrow down and enter 65 for μ[0], the name of the list where you put the data, and 1 for Freq:. Arrow down to μ: and arrow over to > μ[0]. Press ENTER. Arrow down to Calculate and press ENTER. The calculator not only calculates the p-value (p = 0.0396) but it also calculates the test statistic (t-score) for the sample mean, the sample mean, and the sample standard deviation. μ > 65 is the alternative hypothesis. Do this set of instructions again except arrow to Draw (instead of Calculate). Press ENTER. A shaded graph appears with t = 1.9781 (test statistic) and p = 0.0396 (p-value). Make sure when you use Draw that no other equations are highlighted in Y= and the plots are turned off.
Try It: It is believed that a stock price for a particular company will grow at a rate of $5 per week with a standard deviation of $1. An investor believes the stock won't grow as quickly. The changes in stock price are recorded for ten weeks and are as follows: $4, $3, $2, $3, $1, $7, $2, $1, $1, $2. Perform a hypothesis test using a 5% level of significance. State the null and alternative hypotheses, find the p-value, state your conclusion, and identify the Type I and Type II errors.
H[0]: μ = 5, H[a]: μ < 5
p = 0.0082
Because p < α, we reject the null hypothesis.
There is sufficient evidence to suggest that the stock price of the company grows at a rate less than $5 a week.
Type I Error: To conclude that the stock price is growing slower than $5 a week when, in fact, the stock price is growing at $5 a week (reject the null hypothesis when the null hypothesis is true).
Type II Error: To conclude that the stock price is growing at a rate of $5 a week when, in fact, the stock price is growing slower than $5 a week (do not reject the null hypothesis when the null hypothesis is false).
Joon believes that 50% of first-time brides in the United States are younger than their grooms. She performs a hypothesis test to determine if the percentage is the same or different from 50%. Joon samples 100 first-time brides and 53 reply that they are younger than their grooms. For the hypothesis test, she uses a 1% level of significance.
Set up the hypothesis test: The 1% level of significance means that α = 0.01. This is a test of a single population proportion.
H[0]: p = 0.50, H[a]: p ≠ 0.50
The words "is the same or different from" tell you this is a two-tailed test.
Calculate the distribution needed:
Random variable: P′ = the percent of first-time brides who are younger than their grooms.
Distribution for the test: The problem contains no mention of a mean. The information is given in terms of percentages. Use the distribution for P′, the estimated proportion.
\({P}^{\prime }\sim N\left(p,\sqrt{\frac{p\cdot q}{n}}\right)\) Therefore, \({P}^{\prime }\sim N\left(0.5,\sqrt{\frac{0.5\cdot 0.5}{100}}\right)\) where p = 0.50, q = 1 − p = 0.50, and n = 100.
Calculate the p-value using the normal distribution for proportions: p-value = P(p′ < 0.47 or p′ > 0.53) = 0.5485, where x = 53 and p′ = \(\frac{x}{n}=\frac{53}{100}\) = 0.53.
Interpretation of the p-value: If the null hypothesis is true, there is a 0.5485 probability (54.85%) that the sample (estimated) proportion p′ is 0.53 or more OR 0.47 or less (see the graph in [link]).
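The two-tailed calculation in the Joon example can be double-checked in Python (my sketch, not part of the text; by symmetry, the two-tailed p-value is twice the upper-tail area beyond |z|):

```python
from math import sqrt
from statistics import NormalDist

# H0: p = 0.50; x = 53 of n = 100 brides are younger than their grooms
p0, x, n = 0.50, 53, 100
p_hat = x / n                                   # 0.53
se = sqrt(p0 * (1 - p0) / n)                    # 0.05 (uses p0 under H0)
z = (p_hat - p0) / se                           # 0.6
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-tailed, ~0.5485
```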
μ = p = 0.50 comes from H[0], the null hypothesis. p′ = 0.53. Since the curve is symmetrical and the test is two-tailed, the p′ for the left tail is equal to 0.50 − 0.03 = 0.47, where μ = p = 0.50. (0.03 is the difference between 0.53 and 0.50.)
Compare α and the p-value: Since α = 0.01 and the p-value = 0.5485, α < p-value.
Make a decision: Since α < p-value, you cannot reject H[0].
Conclusion: At the 1% level of significance, the sample data do not show sufficient evidence that the percentage of first-time brides who are younger than their grooms is different from 50%.
The p-value can easily be calculated. Press STAT and arrow over to TESTS. Press 5:1-PropZTest. Enter .5 for p[0], 53 for x, and 100 for n. Arrow down to Prop and arrow to not equals p[0]. Press ENTER. Arrow down to Calculate and press ENTER. The calculator calculates the p-value (p = 0.5485) and the test statistic (z-score). Prop not equals .5 is the alternative hypothesis. Do this set of instructions again except arrow to Draw (instead of Calculate). Press ENTER. A shaded graph appears with z = 0.6 (test statistic) and p = 0.5485 (p-value). Make sure when you use Draw that no other equations are highlighted in Y= and the plots are turned off.
The Type I and Type II errors are as follows: The Type I error is to conclude that the proportion of first-time brides who are younger than their grooms is different from 50% when, in fact, the proportion is actually 50%. (Reject the null hypothesis when the null hypothesis is true.) The Type II error is that there is not enough evidence to conclude that the proportion of first-time brides who are younger than their grooms differs from 50% when, in fact, the proportion does differ from 50%. (Do not reject the null hypothesis when the null hypothesis is false.)
Try It: A teacher believes that 85% of students in the class will want to go on a field trip to the local zoo. She performs a hypothesis test to determine if the percentage is the same or different from 85%.
The teacher samples 50 students and 39 reply that they would want to go to the zoo. For the hypothesis test, use a 1% level of significance. First, determine what type of test this is, set up the hypothesis test, find the p-value, sketch the graph, and state your conclusion.
Since the problem is about percentages, this is a test of a single population proportion.
H[0]: p = 0.85, H[a]: p ≠ 0.85
p = 0.7554
Because p > α, we fail to reject the null hypothesis. There is not sufficient evidence to suggest that the proportion of students that want to go to the zoo is not 85%.
Suppose a consumer group suspects that the proportion of households that have three cell phones is 30%. A cell phone company has reason to believe that the proportion is not 30%. Before they start a big advertising campaign, they conduct a hypothesis test. Their marketing people survey 150 households with the result that 43 of the households have three cell phones.
Set up the hypothesis test: H[0]: p = 0.30, H[a]: p ≠ 0.30
Determine the distribution needed: The random variable is P′ = proportion of households that have three cell phones. The distribution for the hypothesis test is \({P}^{\prime }\sim N\left(0.30,\sqrt{\frac{\left(0.30\right)\cdot \left(0.70\right)}{150}}\right)\)
a. The value that helps determine the p-value is p′. Calculate p′.
a. p′ = \(\frac{x}{n}\), where x is the number of successes and n is the total number in the sample. x = 43, n = 150, so p′ = \(\frac{43}{150}\).
b. What is a success for this problem?
b. A success is having three cell phones in a household.
c. What is the level of significance?
c. The level of significance is the preset α. Since α is not given, assume that α = 0.05.
d. Draw the graph for this problem. Draw the horizontal axis. Label and shade appropriately. Calculate the p-value.
e. Make a decision. _____________ (Reject/Do not reject) H[0] because ____________.
e. Assuming that α = 0.05, α < p-value.
The decision is do not reject H[0] because there is not sufficient evidence to conclude that the proportion of households that have three cell phones is not 30%.
Try It: Marketers believe that 92% of adults in the United States own a cell phone. A cell phone manufacturer believes that number is actually lower. 200 American adults are surveyed, of which 174 report having cell phones. Use a 5% level of significance. State the null and alternative hypotheses, find the p-value, state your conclusion, and identify the Type I and Type II errors.
H[0]: p = 0.92, H[a]: p < 0.92
p-value = 0.0046
Because p < 0.05, we reject the null hypothesis. There is sufficient evidence to conclude that fewer than 92% of American adults own cell phones.
Type I Error: To conclude that fewer than 92% of American adults own cell phones when, in fact, 92% of American adults do own cell phones (reject the null hypothesis when the null hypothesis is true).
Type II Error: To conclude that 92% of American adults own cell phones when, in fact, fewer than 92% of American adults own cell phones (do not reject the null hypothesis when the null hypothesis is false).
The next example is a poem written by a statistics student named Nicole Hart. The solution to the problem follows the poem. Notice that the hypothesis test is for a single population proportion. This means that the null and alternative hypotheses use the parameter p. The distribution for the test is normal. The estimated proportion p′ is the proportion of fleas killed to the total fleas found on Fido. This is sample information. The problem gives a preconceived α = 0.01, for comparison, and a 95% confidence interval computation. The poem is clever and humorous, so please enjoy it!
My dog has so many fleas,
They do not come off with ease.
As for shampoo, I have tried many types
Even one called Bubble Hype,
Which only killed 25% of the fleas,
Unfortunately I was not pleased.
I've used all kinds of soap,
Until I had given up hope
Until one day I saw
An ad that put me in awe.
A shampoo used for dogs
Called GOOD ENOUGH to Clean a Hog
Guaranteed to kill more fleas.
I gave Fido a bath
And after doing the math
His number of fleas
Started dropping by 3's!
Before his shampoo
I counted 42.
At the end of his bath,
I redid the math
And the new shampoo had killed 17 fleas.
So now I was pleased.
Now it is time for you to have some fun
With the level of significance being .01,
You must help me figure out
Use the new shampoo or go without?
Set up the hypothesis test: H[0]: p ≤ 0.25, H[a]: p > 0.25
Determine the distribution needed: In words, CLEARLY state what your random variable \(\overline{X}\) or P′ represents. P′ = the proportion of fleas that are killed by the new shampoo. State the distribution to use for the test.
Test statistic: z = 2.3163
Calculate the p-value using the normal distribution for proportions: p-value = 0.0103
In one to two complete sentences, explain what the p-value means for this problem. If the null hypothesis is true (the proportion is 0.25), then there is a 0.0103 probability that the sample (estimated) proportion is 0.4048 \(\left(\frac{17}{42}\right)\) or more.
Use the previous information to sketch a picture of this situation. CLEARLY label and scale the horizontal axis and shade the region(s) corresponding to the p-value.
Compare α and the p-value: Indicate the correct decision ("reject" or "do not reject" the null hypothesis), the reason for it, and write an appropriate conclusion, using complete sentences.
alpha: 0.01; decision: do not reject \({H}_{0}\); reason for decision: α < p-value
Conclusion: At the 1% level of significance, the sample data do not show sufficient evidence that the percentage of fleas that are killed by the new shampoo is more than 25%.
Construct a 95% confidence interval for the true mean or proportion. Include a sketch of the graph of the situation.
Label the point estimate and the lower and upper bounds of the confidence interval.
Confidence interval: (0.26, 0.55). We are 95% confident that the true population proportion p of fleas that are killed by the new shampoo is between 26% and 55%.
This test result is not very definitive since the p-value is very close to alpha. In reality, one would probably do more tests by giving the dog another bath after the fleas have had a chance to return.
The National Institute of Standards and Technology provides exact data on conductivity properties of materials. Following are conductivity measurements for 11 randomly selected pieces of a particular type of glass: 1.11; 1.07; 1.11; 1.07; 1.12; 1.08; .98; .98; 1.02; .95; .95. Is there convincing evidence that the average conductivity of this type of glass is greater than one? Use a significance level of 0.05. Assume the population is normal. Let's follow a four-step process to answer this statistical question.
State the question: We need to determine if, at a 0.05 significance level, the average conductivity of the selected glass is greater than one. Our hypotheses will be H[0]: μ ≤ 1 and H[a]: μ > 1.
Plan: We are testing a sample mean without a known population standard deviation. Therefore, we need to use a Student's-t distribution. Assume the underlying population is normal.
Do the calculations: We will input the sample data into the TI-83 as follows.
State the conclusions: Since the p-value (p = 0.036) is less than our alpha value, we will reject the null hypothesis. It is reasonable to state that the data supports the claim that the average conductivity level is greater than one.
In a study of 420,019 cell phone users, 172 of the subjects developed brain cancer. Test the claim that cell phone users developed brain cancer at a greater rate than that for non-cell phone users (the rate of brain cancer for non-cell phone users is 0.0340%). Since this is a critical issue, use a 0.005 significance level.
Explain why the significance level should be so low in terms of a Type I error. We will follow the four-step process.
1. We need to conduct a hypothesis test on the claimed cancer rate. Our hypotheses will be H[0]: p ≤ 0.00034 and H[a]: p > 0.00034. If we commit a Type I error, we are essentially accepting a false claim. Since the claim describes cancer-causing environments, we want to minimize the chances of incorrectly identifying causes of cancer.
2. We will be testing a sample proportion with x = 172 and n = 420,019. The sample is sufficiently large because we have np = 420,019(0.00034) = 142.8, nq = 420,019(0.99966) = 419,876.2, two independent outcomes, and a fixed probability of success p = 0.00034. Thus we will be able to generalize our results to the population.
3. The associated TI results are as follows.
4. Since the p-value = 0.0073 is greater than our alpha value = 0.005, we cannot reject the null. Therefore, we conclude that there is not enough evidence to support the claim of higher brain cancer rates for the cell phone users.
According to the US Census there are approximately 268,608,618 residents aged 12 and older. Statistics from the Rape, Abuse, and Incest National Network indicate that, on average, 207,754 rapes occur each year (male and female) for persons aged 12 and older. This translates into a percentage of sexual assaults of 0.078%. In Daviess County, KY, there were 11 reported rapes for a population of 37,937. Conduct an appropriate hypothesis test to determine if there is a statistically significant difference between the local sexual assault percentage and the national sexual assault percentage. Use a significance level of 0.01. We will follow the four-step plan.
1. We need to test whether the proportion of sexual assaults in Daviess County, KY is significantly different from the national average.
2. Since we are presented with proportions, we will use a one-proportion z-test. The hypotheses for the test will be H[0]: p = 0.00078 and
H[a]: p ≠ 0.00078.
3. The following screen shots display the summary statistics from the hypothesis test.
4. Since the p-value, p = 0.00063, is less than the alpha level of 0.01, the sample data indicate that we should reject the null hypothesis. In conclusion, the sample data support the claim that the proportion of sexual assaults in Daviess County, Kentucky is different from the national average proportion.
Chapter Review
The hypothesis test itself has an established process. This can be summarized as follows:
• Determine H[0] and H[a]. Remember, they are contradictory.
• Determine the random variable.
• Determine the distribution for the test.
• Draw a graph, calculate the test statistic, and use the test statistic to calculate the p-value. (A z-score and a t-score are examples of test statistics.)
• Compare the preconceived α with the p-value, make a decision (reject or do not reject H[0]), and write a clear conclusion using English sentences.
Notice that in performing the hypothesis test, you use α and not β. β is needed to help determine the sample size of the data that is used in calculating the p-value. Remember that the quantity 1 − β is called the Power of the Test. A high power is desirable. If the power is too low, statisticians typically increase the sample size while keeping α the same. If the power is low, the null hypothesis might not be rejected when it should be.
Assume H[0]: μ = 9 and H[a]: μ < 9. Is this a left-tailed, right-tailed, or two-tailed test? This is a left-tailed test.
Assume H[0]: μ ≤ 6 and H[a]: μ > 6. Is this a left-tailed, right-tailed, or two-tailed test?
Assume H[0]: p = 0.25 and H[a]: p ≠ 0.25. Is this a left-tailed, right-tailed, or two-tailed test? This is a two-tailed test.
Draw the general graph of a left-tailed test.
Draw the graph of a two-tailed test.
A bottle of water is labeled as containing 16 fluid ounces of water. You believe it is less than that. What type of test would you use?
Your friend claims that his mean golf score is 63. You want to show that it is higher than that. What type of test would you use? A bathroom scale claims to be able to identify correctly any weight within a pound. You think that it cannot be that accurate. What type of test would you use? You flip a coin and record whether it shows heads or tails. You know the probability of getting heads is 50%, but you think it is less for this particular coin. What type of test would you use? If the alternative hypothesis has a not equals ( ≠ ) symbol, you know to use which type of test? Assume the null hypothesis states that the mean is at least 18. Is this a left-tailed, right-tailed, or two-tailed test? This is a left-tailed test. Assume the null hypothesis states that the mean is at most 12. Is this a left-tailed, right-tailed, or two-tailed test? Assume the null hypothesis states that the mean is equal to 88. The alternative hypothesis states that the mean is not equal to 88. Is this a left-tailed, right-tailed, or two-tailed test? This is a two-tailed test. For each of the word problems, use a solution sheet to do the hypothesis test. The solution sheet is found in [link]. Please feel free to make copies of the solution sheets. For the online version of the book, it is suggested that you copy the .doc or the .pdf files. If you are using a Student’s-t distribution for one of the following homework problems, you may assume that the underlying population is normally distributed. (In general, you must first prove that assumption, however.) A particular brand of tires claims that its deluxe tire averages at least 50,000 miles before it needs to be replaced. From past studies of this tire, the standard deviation is known to be 8,000 miles. A survey of owners of that tire design is conducted. From the 28 tires surveyed, the mean lifespan was 46,500 miles with a standard deviation of 9,800 miles. Using alpha = 0.05, are the data highly inconsistent with the claim? 1.
H[0]: μ ≥ 50,000 2. H[a]: μ < 50,000 3. Let \(\overline{X}\) = the average lifespan of a brand of tires. 4. normal distribution 5. z = –2.315 6. p-value = 0.0103 7. Check student’s solution. 1. alpha: 0.05 2. Decision: Reject the null hypothesis. 3. Reason for decision: The p-value is less than 0.05. 4. Conclusion: There is sufficient evidence to conclude that the mean lifespan of the tires is less than 50,000 miles. 9. (43,537, 49,463) From generation to generation, the mean age when smokers first start to smoke varies. However, the standard deviation of that age remains constant at around 2.1 years. A survey of 40 smokers of this generation was done to see if the mean starting age is at least 19. The sample mean was 18.1 with a sample standard deviation of 1.3. Do the data support the claim at the 5% level? The cost of a daily newspaper varies from city to city. However, the variation among prices remains steady with a standard deviation of 20¢. A study was done to test the claim that the mean cost of a daily newspaper is $1.00. Twelve costs yield a mean cost of 95¢ with a standard deviation of 18¢. Do the data support the claim at the 1% level? 1. H[0]: μ = $1.00 2. H[a]: μ ≠ $1.00 3. Let \(\overline{X}\) = the average cost of a daily newspaper. 4. normal distribution 5. z = –0.866 6. p-value = 0.3865 7. Check student’s solution. 1. Alpha: 0.01 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.01. 4. Conclusion: There is insufficient evidence to conclude that the mean cost of daily papers differs from $1. The mean cost could be $1. 9. ($0.84, $1.06) An article in the San Jose Mercury News stated that students in the California state university system take 4.5 years, on average, to finish their undergraduate degrees. Suppose you believe that the mean time is longer. You conduct a survey of 49 students and obtain a sample mean of 5.1 with a sample standard deviation of 1.2.
Do the data support your claim at the 1% level? The mean number of sick days an employee takes per year is believed to be about ten. Members of a personnel department do not believe this figure. They randomly survey eight employees. The number of sick days they took for the past year are as follows: 12; 4; 15; 3; 11; 8; 6; 8. Let x = the number of sick days they took for the past year. Should the personnel team believe that the mean number is about ten? 1. H[0]: μ = 10 2. H[a]: μ ≠ 10 3. Let \(\overline{X}\) = the mean number of sick days an employee takes per year. 4. Student’s t-distribution 5. t = –1.12 6. p-value = 0.300 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.05. 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the mean number of sick days is not ten. 9. (4.9443, 11.806) In 1955, Life Magazine reported that the 25 year-old mother of three worked, on average, an 80 hour week. Recently, many groups have been studying whether or not the women’s movement has, in fact, resulted in an increase in the average work week for women (combining employment and at-home work). Suppose a study was done to determine if the mean work week has increased. 81 women were surveyed with the following results. The sample mean was 83; the sample standard deviation was ten. Does it appear that the mean work week has increased for women at the 5% level? Your statistics instructor claims that 60 percent of the students who take her Elementary Statistics class go through life feeling more enriched. For some reason that she can’t quite figure out, most people don’t believe her. You decide to check this out on your own. You randomly survey 64 of her past Elementary Statistics students and find that 34 feel more enriched as a result of her class. Now, what do you think? 1. H[0]: p ≥ 0.6 2. H[a]: p < 0.6 3.
Let P′ = the proportion of students who feel more enriched as a result of taking Elementary Statistics. 4. normal for a single proportion 5. z = –1.12 6. p-value = 0.1308 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.05. 4. Conclusion: There is insufficient evidence to conclude that less than 60 percent of her students feel more enriched. 9. Confidence Interval: (0.409, 0.654) The “plus-4s” confidence interval is (0.411, 0.648) A Nissan Motor Corporation advertisement read, “The average man’s I.Q. is 107. The average brown trout’s I.Q. is 4. So why can’t man catch brown trout?” Suppose you believe that the brown trout’s mean I.Q. is greater than four. You catch 12 brown trout. A fish psychologist determines the I.Q.s as follows: 5; 4; 7; 3; 6; 4; 5; 3; 6; 3; 8; 5. Conduct a hypothesis test of your belief. Refer to Exercise 9.119. Conduct a hypothesis test to see if your decision and conclusion would change if your belief were that the brown trout’s mean I.Q. is not four. 1. H[0]: μ = 4 2. H[a]: μ ≠ 4 3. Let \(\overline{X}\) = the average I.Q. of a set of brown trout. 4. two-tailed Student’s t-test 5. t = 1.95 6. p-value = 0.076 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.05. 4. Conclusion: There is insufficient evidence to conclude that the average IQ of brown trout is not four. 9. (3.8865, 5.9468) According to an article in Newsweek, the natural ratio of girls to boys is 100:105. In China, the birth ratio is 100:114 (46.7% girls). Suppose you don’t believe the reported figures of the percent of girls born in China. You conduct a study. In this study, you count the number of girls and boys born in 150 randomly chosen recent births. There are 60 girls and 90 boys born of the 150. Based on your study, do you believe that the percent of girls born in China is 46.7?
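The one-proportion z-tests in the solutions above all follow the same computation: standardize the sample proportion using the standard error under H[0], then read a tail area off the normal distribution. The following sketch (standard library only; the function names are mine, not the text's) checks itself against the "feeling more enriched" solution above, where x = 34, n = 64, and H[0]: p ≥ 0.6:

```python
from math import erf, sqrt

def normal_cdf(z):
    """P(Z <= z) for a standard normal variable, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def one_prop_z_test(x, n, p0):
    """Return (z statistic, left-tailed p-value) for a one-proportion z-test."""
    p_hat = x / n
    se = sqrt(p0 * (1 - p0) / n)   # standard error computed under H0
    z = (p_hat - p0) / se
    return z, normal_cdf(z)

z, p_value = one_prop_z_test(34, 64, 0.60)
# z ≈ -1.12 and p-value ≈ 0.1308, matching the printed solution
```

For a right-tailed test use 1 − normal_cdf(z), and for a two-tailed test double the smaller tail area, exactly as the solution sheets in this section do.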
A poll done for Newsweek found that 13% of Americans have seen or sensed the presence of an angel. A contingent doubts that the percent is really that high. It conducts its own survey. Out of 76 Americans surveyed, only two had seen or sensed the presence of an angel. As a result of the contingent’s survey, would you agree with the Newsweek poll? In complete sentences, also give three reasons why the two polls might give different results. 1. H[0]: p ≥ 0.13 2. H[a]: p < 0.13 3. Let P′ = the proportion of Americans who have seen or sensed angels 4. normal for a single proportion 5. z = –2.688 6. p-value = 0.0036 7. Check student’s solution. 1. alpha: 0.05 2. Decision: Reject the null hypothesis. 3. Reason for decision: The p-value is less than 0.05. 4. Conclusion: There is sufficient evidence to conclude that the percentage of Americans who have seen or sensed an angel is less than 13%. 9. (0, 0.0623). The “plus-4s” confidence interval is (0.0022, 0.0978) The mean work week for engineers in a start-up company is believed to be about 60 hours. A newly hired engineer hopes that it’s shorter. She asks ten engineering friends in start-ups for the lengths of their mean work weeks. Based on the results that follow, should she count on the mean work week to be shorter than 60 hours? Data (length of mean work week): 70; 45; 55; 60; 65; 55; 55; 60; 50; 55. Use the “Lap time” data for Lap 4 (see [link]) to test the claim that Terri finishes Lap 4, on average, in less than 129 seconds. Use all twenty races given. 1. H[0]: μ ≥ 129 2. H[a]: μ < 129 3. Let \(\overline{X}\) = the average time in seconds that Terri finishes Lap 4. 4. Student’s t-distribution 5. t = 1.209 6. p-value = 0.8792 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.05. 4. Conclusion: There is insufficient evidence to conclude that Terri’s mean lap time is less than 129 seconds. 9.
(128.63, 130.37) Use the “Initial Public Offering” data (see [link]) to test the claim that the mean offer price was $18 per share. Do not use all the data. Use your random number generator to randomly survey 15 prices. The following questions were written by past students. They are excellent problems! “Asian Family Reunion,” by Chau Nguyen Every two years it comes around. We all get together from different towns. In my honest opinion, It’s not a typical family reunion. Not forty, or fifty, or sixty, But how about seventy companions! The kids would play, scream, and shout One minute they’re happy, another they’ll pout. The teenagers would look, stare, and compare From how they look to what they wear. The men would chat about their business That they make more, but never less. Money is always their subject And there’s always talk of more new projects. The women get tired from all of the chats They head to the kitchen to set out the mats. Some would sit and some would stand Eating and talking with plates in their hands. Then come the games and the songs And suddenly, everyone gets along! With all that laughter, it’s sad to say That it always ends in the same old way. They hug and kiss and say “good-bye” And then they all begin to cry! I say that 60 percent shed their tears But my mom counted 35 people this year. She said that boys and men will always have their pride, So we won’t ever see them cry. I myself don’t think she’s correct, So could you please try this problem to see if you object? 1. H[0]: p = 0.60 2. H[a]: p < 0.60 3. Let P′ = the proportion of family members who shed tears at a reunion. 4. normal for a single proportion 5. z = –1.71 6. p-value = 0.0438 7. Check student’s solution. 1. alpha: 0.05 2. Decision: Reject the null hypothesis. 3. Reason for decision: p-value < alpha 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the proportion of family members who shed tears at a reunion is less than 0.60.
However, the test is weak because the p-value and alpha are quite close, so other tests should be done. 9. We are 95% confident that between 38.29% and 61.71% of family members will shed tears at a family reunion. (0.3829, 0.6171). The “plus-4s” confidence interval (see chapter 8) is (0.3861, 0.6139) Note that here the “large-sample” 1-PropZTest provides the approximate p-value of 0.0438. Whenever a p-value based on a normal approximation is close to the level of significance, the exact p-value based on binomial probabilities should be calculated whenever possible. This is beyond the scope of this course. “The Problem with Angels,” by Cyndy Dowling Although this problem is wholly mine, The catalyst came from the magazine, Time. On the magazine cover I did find The realm of angels tickling my mind. Inside, 69% I found to be In angels, Americans do believe. Then, it was time to rise to the task, Ninety-five high school and college students I did ask. Viewing all as one group, Random sampling to get the scoop. So, I asked each to be true, “Do you believe in angels?” Tell me, do! Hypothesizing at the start, Totally believing in my heart That the proportion who said yes Would be equal on this test. Lo and behold, seventy-three did arrive, Out of the sample of ninety-five. Now your job has just begun, Solve this problem and have some fun. “Blowing Bubbles,” by Sondra Prull Studying stats just made me tense, I had to find some sane defense. Some light and lifting simple play To float my math anxiety away. Blowing bubbles lifts me high Takes my troubles to the sky. POIK! They’re gone, with all my stress Bubble therapy is the best. The label said each time I blew The average number of bubbles would be at least 22. I blew and blew and this I found From 64 blows, they all are round! But the number of bubbles in 64 blows Varied widely, this I know. 20 per blow became the mean They deviated by 6, and not 16.
From counting bubbles, I sure did relax But now I give to you your task. Was 22 a reasonable guess? Find the answer and pass this test! 1. H[0]: μ ≥ 22 2. H[a]: μ < 22 3. Let \(\overline{X}\) = the mean number of bubbles per blow. 4. Student’s t-distribution 5. –2.667 6. p-value = 0.00486 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Reject the null hypothesis. 3. Reason for decision: The p-value is less than 0.05. 4. Conclusion: There is sufficient evidence to conclude that the mean number of bubbles per blow is less than 22. 9. (18.501, 21.499) “Dalmatian Darnation,” by Kathy Sparling A greedy dog breeder named Spreckles Bred puppies with numerous freckles The Dalmatians he sought Possessed spot upon spot The more spots, he thought, the more shekels. His competitors did not agree That freckles would increase the fee. They said, “Spots are quite nice But they don’t affect price; One should breed for improved pedigree.” The breeders decided to prove This strategy was a wrong move. Breeding only for spots Would wreak havoc, they thought. His theory they want to disprove. They proposed a contest to Spreckles Comparing dog prices to freckles. In records they looked up One hundred one pups: Dalmatians that fetched the most shekels. They asked Mr. Spreckles to name An average spot count he’d claim To bring in big bucks. Said Spreckles, “Well, shucks, It’s for one hundred one that I aim.” Said an amateur statistician Who wanted to help with this mission. “Twenty-one for the sample Standard deviation’s ample: They examined one hundred and one Dalmatians that fetched a good sum. They counted each spot, Mark, freckle and dot And tallied up every one. Instead of one hundred one spots They averaged ninety six dots Can they muzzle Spreckles’ Obsession with freckles Based on all the dog data they’ve got? “Macaroni and Cheese, please!!” by Nedda Misherghi and Rachelle Hall As a poor starving student I don’t have much money to spend for even the bare necessities. 
So my favorite and main staple food is macaroni and cheese. It’s high in taste and low in cost and nutritional value. One day, as I sat down to determine the meaning of life, I got a serious craving for this, oh, so important, food of my life. So I went down the street to Greatway to get a box of macaroni and cheese, but it was SO expensive! $2.02!!! Can you believe it? It made me stop and think. The world is changing fast. I had thought that the mean cost of a box (the normal size, not some super-gigantic-family-value-pack) was at most $1, but now I wasn’t so sure. However, I was determined to find out. I went to 53 of the closest grocery stores and surveyed the prices of macaroni and cheese. Here are the data I wrote in my notebook: Price per box of Mac and Cheese: • 5 stores @ $2.02 • 15 stores @ $0.25 • 3 stores @ $1.29 • 6 stores @ $0.35 • 4 stores @ $2.27 • 7 stores @ $1.50 • 5 stores @ $1.89 • 8 stores @ $0.75. I could see that the cost varied but I had to sit down to figure out whether or not I was right. If it does turn out that this mouth-watering dish is at most $1, then I’ll throw a big cheesy party in our next statistics lab, with enough macaroni and cheese for just me. (After all, as a poor starving student I can’t be expected to feed our class of animals!) 1. H[0]: μ ≤ 1 2. H[a]: μ > 1 3. Let \(\overline{X}\) = the mean cost in dollars of macaroni and cheese in a certain town. 4. Student’s t-distribution 5. t = 0.340 6. p-value = 0.36756 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.05 4. Conclusion: The mean cost could be $1, or less. At the 5% significance level, there is insufficient evidence to conclude that the mean price of a box of macaroni and cheese is more than $1. 9.
(0.8291, 1.241) “William Shakespeare: The Tragedy of Hamlet, Prince of Denmark,” by Jacqueline Ghodsi THE CHARACTERS (in order of appearance): • HAMLET, Prince of Denmark and student of Statistics • POLONIUS, Hamlet’s tutor • HORATIO, friend to Hamlet and fellow student Scene: The great library of the castle, in which Hamlet does his lessons Act I (The day is fair, but the face of Hamlet is clouded. He paces the large room. His tutor, Polonius, is reprimanding Hamlet regarding the latter’s recent experience. Horatio is seated at the large table at right stage.) POLONIUS: My Lord, how cans’t thou admit that thou hast seen a ghost! It is but a figment of your imagination! HAMLET: I beg to differ; I know of a certainty that five-and-seventy in one hundred of us, condemned to the whips and scorns of time as we are, have gazed upon a spirit of health, or goblin damn’d, be their intents wicked or charitable. POLONIUS: If thou doest insist upon thy wretched vision then let me invest your time; be true to thy work and speak to me through the reason of the null and alternate hypotheses. (He turns to Horatio.) Did not Hamlet himself say, “What piece of work is man, how noble in reason, how infinite in faculties?” Then let not this foolishness persist. Go, Horatio, make a survey of three-and-sixty and discover what the true proportion be. For my part, I will never succumb to this fantasy, but deem man to be devoid of all reason should thy proposal of at least five-and-seventy in one hundred hold true. HORATIO (to Hamlet): What should we do, my Lord? HAMLET: Go to thy purpose, Horatio. HORATIO: To what end, my Lord? HAMLET: That you must teach me. But let me conjure you by the rights of our fellowship, by the consonance of our youth, by the obligation of our ever-preserved love, be even and direct with me, whether I am right or no. (Horatio exits, followed by Polonius, leaving Hamlet to ponder alone.)
Act II (The next day, Hamlet awaits anxiously the presence of his friend, Horatio. Polonius enters and places some books upon the table just a moment before Horatio enters.) POLONIUS: So, Horatio, what is it thou didst reveal through thy deliberations? HORATIO: In a random survey, for which purpose thou thyself sent me forth, I did discover that one-and-forty believe fervently that the spirits of the dead walk with us. Before my God, I might not this believe, without the sensible and true avouch of mine own eyes. POLONIUS: Give thine own thoughts no tongue, Horatio. (Polonius turns to Hamlet.) But look to’t I charge you, my Lord. Come Horatio, let us go together, for this is not our test. (Horatio and Polonius leave together.) HAMLET: To reject, or not reject, that is the question: whether ‘tis nobler in the mind to suffer the slings and arrows of outrageous statistics, or to take arms against a sea of data, and, by opposing, end them. (Hamlet resignedly attends to his task.) (Curtain falls) “Untitled,” by Stephen Chen I’ve often wondered how software is released and sold to the public. Ironically, I work for a company that sells products with known problems. Unfortunately, most of the problems are difficult to create, which makes them difficult to fix. I usually use the test program X, which tests the product, to try to create a specific problem. When the test program is run to make an error occur, the likelihood of generating an error is 1%. So, armed with this knowledge, I wrote a new test program Y that will generate the same error that test program X creates, but more often. To find out if my test program is better than the original, so that I can convince the management that I’m right, I ran my test program to find out how often I can generate the same error. When I ran my test program 50 times, I generated the error twice. 
While this may not seem much better, I think that I can convince the management to use my test program instead of the original test program. Am I right? 1. H[0]: p = 0.01 2. H[a]: p > 0.01 3. Let P′ = the proportion of errors generated 4. normal for a single proportion 5. z = 2.13 6. p-value = 0.0165 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Reject the null hypothesis 3. Reason for decision: The p-value is less than 0.05. 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the proportion of errors generated is more than 0.01. 9. Confidence interval: (0, 0.094). The “plus-4s” confidence interval is (0.004, 0.144). “Japanese Girls’ Names” by Kumi Furuichi It used to be very typical for Japanese girls’ names to end with “ko.” (The trend might have started around my grandmothers’ generation and its peak might have been around my mother’s generation.) “Ko” means “child” in Chinese characters. Parents would name their daughters with “ko” attaching to other Chinese characters which have meanings that they want their daughters to become, such as Sachiko—happy child, Yoshiko—a good child, Yasuko—a healthy child, and so on. However, I noticed recently that only two out of nine of my Japanese girlfriends at this school have names which end with “ko.” More and more, parents seem to have become creative, modernized, and, sometimes, westernized in naming their children. I have a feeling that, while 70 percent or more of my mother’s generation would have names with “ko” at the end, the proportion has dropped among my peers. I wrote down all my Japanese friends’, ex-classmates’, co-workers’, and acquaintances’ names that I could remember. Following are the names. (Some are repeats.) Test to see if the proportion has dropped for this generation.
Ai, Akemi, Akiko, Ayumi, Chiaki, Chie, Eiko, Eri, Eriko, Fumiko, Harumi, Hitomi, Hiroko, Hiroko, Hidemi, Hisako, Hinako, Izumi, Izumi, Junko, Junko, Kana, Kanako, Kanayo, Kayo, Kayoko, Kazumi, Keiko, Keiko, Kei, Kumi, Kumiko, Kyoko, Kyoko, Madoka, Maho, Mai, Maiko, Maki, Miki, Miki, Mikiko, Mina, Minako, Miyako, Momoko, Nana, Naoko, Naoko, Naoko, Noriko, Rieko, Rika, Rika, Rumiko, Rei, Reiko, Reiko, Sachiko, Sachiko, Sachiyo, Saki, Sayaka, Sayoko, Sayuri, Seiko, Shiho, Shizuka, Sumiko, Takako, Takako, Tomoe, Tomoe, Tomoko, Touko, Yasuko, Yasuko, Yasuyo, Yoko, Yoko, Yoko, Yoshiko, Yoshiko, Yoshiko, Yuka, Yuki, Yuki, Yukiko, Yuko, Yuko. “Phillip’s Wish,” by Suzanne Osorio My nephew likes to play Chasing the girls makes his day. He asked his mother If it is okay To get his ear pierced. She said, “No way!” To poke a hole through your ear, Is not what I want for you, dear. He argued his point quite well, Says even my macho pal, Mel, Has gotten this done. It’s all just for fun. C’mon please, mom, please, what the hell. Again Phillip complained to his mother, Saying half his friends (including their brothers) Are piercing their ears And they have no fears He wants to be like the others. She said, “I think it’s much less. We must do a hypothesis test. And if you are right, I won’t put up a fight. But, if not, then my case will rest.” We proceeded to call fifty guys To see whose prediction would fly. Nineteen of the fifty Said piercing was nifty And earrings they’d occasionally buy. Then there’s the other thirty-one, Who said they’d never have this done. So now this poem’s finished. Will his hopes be diminished, Or will my nephew have his fun? 1. H[0]: p = 0.50 2. H[a]: p < 0.50 3. Let P′ = the proportion of friends that has a pierced ear. 4. normal for a single proportion 5. –1.70 6. p-value = 0.0448 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Reject the null hypothesis 3. Reason for decision: The p-value is less than 0.05. (However, they are very close.) 4. 
Conclusion: There is sufficient evidence to support the claim that less than 50% of his friends have pierced ears. 9. Confidence Interval: (0.245, 0.515): The “plus-4s” confidence interval is (0.259, 0.519). “The Craven,” by Mark Salangsang Once upon a morning dreary In stats class I was weak and weary. Pondering over last night’s homework Whose answers were now on the board This I did and nothing more. While I nodded nearly napping Suddenly, there came a tapping. As someone gently rapping, Rapping my head as I snore. Quoth the teacher, “Sleep no more.” “In every class you fall asleep,” The teacher said, his voice was deep. “So a tally I’ve begun to keep Of every class you nap and snore. The percentage being forty-four.” “My dear teacher I must confess, While sleeping is what I do best. The percentage, I think, must be less, A percentage less than forty-four.” This I said and nothing more. “We’ll see,” he said and walked away, And fifty classes from that day He counted till the month of May The classes in which I napped and snored. The number he found was twenty-four. At a significance level of 0.05, Please tell me am I still alive? Or did my grade just take a dive Plunging down beneath the floor? Upon thee I hereby implore. Toastmasters International cites a report by Gallup Poll that 40% of Americans fear public speaking. A student believes that less than 40% of students at her school fear public speaking. She randomly surveys 361 schoolmates and finds that 135 report they fear public speaking. Conduct a hypothesis test to determine if the percent at her school is less than 40%. 1. H[0]: p = 0.40 2. H[a]: p < 0.40 3. Let P′ = the proportion of schoolmates who fear public speaking. 4. normal for a single proportion 5. z = –1.01 6. p-value = 0.1563 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.05. 4.
Conclusion: There is insufficient evidence to support the claim that less than 40% of students at the school fear public speaking. 9. Confidence Interval: (0.3241, 0.4240): The “plus-4s” confidence interval is (0.3257, 0.4250). Sixty-eight percent of online courses taught at community colleges nationwide were taught by full-time faculty. To test if 68% also represents California’s percent for full-time faculty teaching the online classes, Long Beach City College (LBCC) in California was randomly selected for comparison. In the same year, 34 of the 44 online courses LBCC offered were taught by full-time faculty. Conduct a hypothesis test to determine if 68% represents California. NOTE: For more accurate results, use more California community colleges and this past year’s data. According to an article in Bloomberg Businessweek, New York City’s most recent adult smoking rate is 14%. Suppose that a survey is conducted to determine this year’s rate. Nine out of 70 randomly chosen N.Y. City residents reply that they smoke. Conduct a hypothesis test to determine if the rate is still 14% or if it has decreased. 1. H[0]: p = 0.14 2. H[a]: p < 0.14 3. Let P′ = the proportion of NYC residents that smoke. 4. normal for a single proportion 5. z = –0.2756 6. p-value = 0.3914 7. Check student’s solution. 1. alpha: 0.05 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The p-value is greater than 0.05. 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the proportion of NYC residents who smoke is less than 0.14. 9. Confidence Interval: (0.0502, 0.2070): The “plus-4s” confidence interval (see chapter 8) is (0.0676, 0.2297). The mean age of De Anza College students in a previous term was 26.6 years old. An instructor thinks the mean age for online students is older than 26.6. She randomly surveys 56 online students and finds that the sample mean is 29.4 with a standard deviation of 2.1. Conduct a hypothesis test.
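For the tests of a mean in this section, the test statistic is t = (x̄ − μ0)/(s/√n) with n − 1 degrees of freedom. A minimal sketch (the helper name is mine), checked against the "Blowing Bubbles" solution above, where x̄ = 20, s = 6, n = 64, and H[0]: μ ≥ 22:

```python
from math import sqrt

def t_statistic(x_bar, mu0, s, n):
    """One-sample t statistic: (sample mean - hypothesized mean) / (s / sqrt(n))."""
    return (x_bar - mu0) / (s / sqrt(n))

t = t_statistic(20, 22, 6, 64)
# t = -2/(6/8) ≈ -2.667, matching the printed solution
```

The p-value then comes from a Student's t table (or calculator function) with n − 1 = 63 degrees of freedom, which is how the solution arrives at p-value = 0.00486.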
Registered nurses earned an average annual salary of $69,110. For that same year, a survey was conducted of 41 California registered nurses to determine if the annual salary is higher than $69,110 for California nurses. The sample average was $71,121 with a sample standard deviation of $7,489. Conduct a hypothesis test. 1. H[0]: μ = 69,110 2. H[a]: μ > 69,110 3. Let \(\overline{X}\) = the mean salary in dollars for California registered nurses. 4. Student’s t-distribution 5. t = 1.719 6. p-value: 0.0466 7. Check student’s solution. 1. Alpha: 0.05 2. Decision: Reject the null hypothesis. 3. Reason for decision: The p-value is less than 0.05. 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the mean salary of California registered nurses exceeds $69,110. 9. ($68,757, $73,485) La Leche League International reports that the mean age of weaning a child from breastfeeding is age four to five worldwide. In America, most nursing mothers wean their children much earlier. Suppose a random survey is conducted of 21 U.S. mothers who recently weaned their children. The mean weaning age was nine months (3/4 year) with a standard deviation of 4 months. Conduct a hypothesis test to determine if the mean weaning age in the U.S. is less than four years old. Over the past few decades, public health officials have examined the link between weight concerns and teen girls’ smoking. Researchers surveyed a group of 273 randomly selected teen girls living in Massachusetts (between 12 and 15 years old). After four years the girls were surveyed again. Sixty-three said they smoked to stay thin. Is there good evidence that more than thirty percent of the teen girls smoke to stay thin? After conducting the test, your decision and conclusion are 1. Reject H[0]: There is sufficient evidence to conclude that more than 30% of teen girls smoke to stay thin. 2.
Do not reject H[0]: There is not sufficient evidence to conclude that less than 30% of teen girls smoke to stay thin. 3. Do not reject H[0]: There is not sufficient evidence to conclude that more than 30% of teen girls smoke to stay thin. 4. Reject H[0]: There is sufficient evidence to conclude that less than 30% of teen girls smoke to stay thin. A statistics instructor believes that fewer than 20% of Evergreen Valley College (EVC) students attended the opening night midnight showing of the latest Harry Potter movie. She surveys 84 of her students and finds that 11 of them attended the midnight showing. At a 1% level of significance, an appropriate conclusion is: 1. There is insufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is less than 20%. 2. There is sufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is more than 20%. 3. There is sufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is less than 20%. 4. There is insufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is at least 20%. Previously, an organization reported that teenagers spent 4.5 hours per week, on average, on the phone. The organization thinks that, currently, the mean is higher. Fifteen randomly chosen teenagers were asked how many hours per week they spend on the phone. The sample mean was 4.75 hours with a sample standard deviation of 2.0. Conduct a hypothesis test. At a significance level of α = 0.05, what is the correct conclusion? 1. There is enough evidence to conclude that the mean number of hours is more than 4.75 2. There is enough evidence to conclude that the mean number of hours is more than 4.5 3. There is not enough evidence to conclude that the mean number of hours is more than 4.5 4.
There is not enough evidence to conclude that the mean number of hours is more than 4.75

Instructions: For the following ten exercises, answer each question.

1. State the null and alternate hypothesis.
2. State the p-value.
3. State alpha.
4. What is your decision?
5. Write a conclusion.
6. Answer any other questions asked in the problem.

According to the Center for Disease Control website, in 2011 at least 18% of high school students have smoked a cigarette. An Introduction to Statistics class in Davies County, KY conducted a hypothesis test at the local high school (a medium-sized school of approximately 1,200 students with a small-city demographic) to determine if the local high school’s percentage was lower. One hundred fifty students were chosen at random and surveyed. Of the 150 students surveyed, 82 have smoked. Use a significance level of 0.05 and, using appropriate statistical evidence, conduct a hypothesis test and state the conclusions.

A recent survey in the N.Y. Times Almanac indicated that 48.8% of families own stock. A broker wanted to determine if this survey could be valid. He surveyed a random sample of 250 families and found that 142 owned some type of stock. At the 0.05 significance level, can the survey be considered to be accurate?

1. H[0]: p = 0.488 H[a]: p ≠ 0.488
2. p-value = 0.0114
3. alpha = 0.05
4. Reject the null hypothesis.
5. At the 5% level of significance, there is enough evidence to conclude that the proportion of families that own stock is not 48.8%.
6. The survey does not appear to be accurate.

Driver error can be listed as the cause of approximately 54% of all fatal auto accidents, according to the American Automobile Association. Thirty randomly selected fatal accidents are examined, and it is determined that 14 were caused by driver error. Using α = 0.05, is the AAA proportion accurate?

The US Department of Energy reported that 51.7% of homes were heated by natural gas.
A random sample of 221 homes in Kentucky found that 115 were heated by natural gas. Does the evidence support the claim at the α = 0.05 level in Kentucky? Are the results applicable across the country? Why?

1. H[0]: p = 0.517 H[a]: p ≠ 0.517
2. p-value = 0.9203.
3. alpha = 0.05.
4. Do not reject the null hypothesis.
5. At the 5% significance level, there is not enough evidence to conclude that the proportion of homes in Kentucky that are heated by natural gas is different from 0.517.
6. However, we cannot generalize this result to the entire nation. First, the sample’s population is only the state of Kentucky. Second, it is reasonable to assume that homes in the extreme north and south will have extremely high usage and extremely low usage, respectively. We would need to expand our sample base to include these possibilities if we wanted to generalize this claim to the entire nation.

For Americans using library services, the American Library Association claims that at most 67% of patrons borrow books. The library director in Owensboro, Kentucky feels this is not true, so she asked a local college statistics class to conduct a survey. The class randomly selected 100 patrons and found that 82 borrowed books. Did the class demonstrate that the percentage was higher in Owensboro, KY? Use the α = 0.01 level of significance. What is the possible proportion of patrons that do borrow books from the Owensboro Library?

The Weather Underground reported that the mean amount of summer rainfall for the northeastern US is at least 11.52 inches. Ten cities in the northeast are randomly selected and the mean rainfall amount is calculated to be 7.42 inches with a standard deviation of 1.3 inches. At the α = 0.05 level, can it be concluded that the mean rainfall was below the reported average? What if α = 0.01? Assume the amount of summer rainfall follows a normal distribution.

1. H[0]: µ ≥ 11.52 H[a]: µ < 11.52
2. p-value = 0.000002, which is almost 0.
3. alpha = 0.05.
4. Reject the null hypothesis.
5.
At the 5% significance level, there is enough evidence to conclude that the mean amount of summer rain in the northeastern US is less than 11.52 inches, on average.
6. We would make the same conclusion if alpha were 1% because the p-value is almost 0.

A survey in the N.Y. Times Almanac finds the mean commute time (one way) is 25.4 minutes for the 15 largest US cities. The Austin, TX chamber of commerce feels that Austin’s commute time is less and wants to publicize this fact. The mean for 25 randomly selected commuters is 22.1 minutes with a standard deviation of 5.3 minutes. At the α = 0.10 level, is the Austin, TX commute significantly less than the mean commute time for the 15 largest US cities?

A report by the Gallup Poll found that a woman visits her doctor, on average, at most 5.8 times each year. A random sample of 20 women results in these yearly visit totals. At the α = 0.05 level can it be concluded that the sample mean is higher than 5.8 visits per year?

1. H[0]: µ ≤ 5.8 H[a]: µ > 5.8
2. p-value = 0.9987
3. alpha = 0.05
4. Do not reject the null hypothesis.
5. At the 5% level of significance, there is not enough evidence to conclude that a woman visits her doctor, on average, more than 5.8 times a year.

According to the N.Y. Times Almanac the mean family size in the U.S. is 3.18. A sample of a college math class resulted in the following family sizes. At the α = 0.05 level, is the class’ mean family size greater than the national average? Does the Almanac result remain valid? Why?

The student academic group on a college campus claims that freshman students study at least 2.5 hours per day, on average. One Introduction to Statistics class was skeptical. The class took a random sample of 30 freshman students and found a mean study time of 137 minutes with a standard deviation of 45 minutes. At the α = 0.01 level, is the student academic group’s claim correct?

1. H[0]: µ ≥ 150 H[a]: µ < 150
2. p-value = 0.0622
3. alpha = 0.01
4. Do not reject the null hypothesis.
5.
At the 1% significance level, there is not enough evidence to conclude that freshman students study less than 2.5 hours per day, on average.
6. The student academic group’s claim appears to be correct.

Data from Amit Schitai, Director of Instructional Technology and Distance Learning, LBCC.
Data from Bloomberg Businessweek. Available online at http://www.businessweek.com/news/2011-
Data from energy.gov. Available online at http://energy.gov (accessed June 27, 2013).
Data from Gallup®. Available online at www.gallup.com (accessed June 27, 2013).
Data from Growing by Degrees by Allen and Seaman.
Data from La Leche League International. Available online at http://www.lalecheleague.org/Law/BAFeb01.html.
Data from the American Automobile Association. Available online at www.aaa.com (accessed June 27, 2013).
Data from the American Library Association. Available online at www.ala.org (accessed June 27, 2013).
Data from the Bureau of Labor Statistics. Available online at http://www.bls.gov/oes/current/oes291111.htm.
Data from the Centers for Disease Control and Prevention. Available online at www.cdc.gov (accessed June 27, 2013).
Data from the U.S. Census Bureau. Available online at http://quickfacts.census.gov/qfd/states/00000.html (accessed June 27, 2013).
Data from the United States Census Bureau. Available online at http://www.census.gov/hhes/socdemo/language/.
Data from Toastmasters International. Available online at http://toastmasters.org/artisan/detail.asp?CategoryID=1&SubCategoryID=10&ArticleID=429&Page=1.
Data from Weather Underground. Available online at www.wunderground.com (accessed June 27, 2013).
Federal Bureau of Investigations. “Uniform Crime Reports and Index of Crime in Daviess in the State of Kentucky enforced by Daviess County from 1985 to 2005.” Available online at http://www.disastercenter.com/kentucky/crime/3868.htm (accessed June 27, 2013).
“Foothill-De Anza Community College District.” De Anza College, Winter 2006.
Available online at http://research.fhda.edu/factbook/DAdemofs/Fact_sheet_da_2006w.pdf.
Johansen, C., J. Boice, Jr., J. McLaughlin, J. Olsen. “Cellular Telephones and Cancer—a Nationwide Cohort Study in Denmark.” Institute of Cancer Epidemiology and the Danish Cancer Society, 93 (3):203-7. Available online at http://www.ncbi.nlm.nih.gov/pubmed/11158188 (accessed June 27, 2013).
Rape, Abuse & Incest National Network. “How often does sexual assault occur?” RAINN, 2009. Available online at http://www.rainn.org/get-information/statistics/frequency-of-sexual-assault (accessed June 27, 2013).

Central Limit Theorem
Given a random variable (RV) with known mean \(\mu\) and known standard deviation \(\sigma\). We are sampling with size n and we are interested in two new RVs – the sample mean, \(\overline{X}\), and the sample sum, \(\Sigma X\). If the size n of the sample is sufficiently large, then \(\overline{X} \sim N\left(\mu, \frac{\sigma}{\sqrt{n}}\right)\) and \(\Sigma X \sim N\left(n\mu, \sqrt{n}\sigma\right)\). If the size n of the sample is sufficiently large, then the distribution of the sample means and the distribution of the sample sums will approximate a normal distribution regardless of the shape of the population. The mean of the sample means will equal the population mean and the mean of the sample sums will equal n times the population mean. The standard deviation of the distribution of the sample means, \(\frac{\sigma}{\sqrt{n}}\), is called the standard error of the mean.
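The standard-error claim can be checked by simulation. This sketch is my own illustration (not part of the text): draw many samples from a Uniform(0, 1) population and compare the spread of the sample means to σ/√n.

```python
import math
import random
import statistics

random.seed(0)

n = 36          # sample size
trials = 20000  # number of samples drawn

# Population: Uniform(0, 1), whose standard deviation is 1/sqrt(12).
sigma = 1 / math.sqrt(12)

# Draw many samples and record each sample mean.
sample_means = [
    statistics.fmean(random.random() for _ in range(n))
    for _ in range(trials)
]

# The spread of the sample means should be close to sigma / sqrt(n).
empirical_se = statistics.stdev(sample_means)
theoretical_se = sigma / math.sqrt(n)
```

With 20,000 samples the empirical standard deviation of the means lands within a few percent of the theoretical standard error, regardless of the fact that the underlying population is flat rather than bell-shaped.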
Auslan: “area (surface area)” As a Noun: 1. In mathematics, geometry and surveying, the measurement of the size of the surface of a geometric shape or a piece of land. English = area, surface area. Note: 1. Originally only an alternative Australasian Signed English sign for the Auslan sign ‘area’. Now this variant is used by some teachers in maths and geometry specifically for ‘surface area’.
Formula to calculate age in years & months
May 07, 2020 10:56 PM
Hi all, just trying to decide if AirTable will work for our business. So far so good except for this snag. We are in childcare and NEED to calculate our babies’ ages in Months not years, based on their date of birth. This is the section of the child table so far. I want the Age field to return ages under 3yrs (<=35 months) in months and the rest in years…

Formulas so far:
M_age: DATETIME_DIFF(TODAY(),{C1_DOB},'M') & "M" (works fine)
Y_age: DATETIME_DIFF(TODAY(),{C1_DOB},'Y') & "Y" (works fine)

Age — I’ve tried… to get under 3 years in Months and over in years:
IF(M_age <= 35,(DATETIME_DIFF(TODAY(),{C1_DOB},'M') & "M"),Y_age)
IF(M_age <= 35,M_age,Y_age)

No matter what I do the age field returns age in years.
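A likely reading of the problem (my diagnosis, not confirmed in the thread): M_age already has the letter "M" concatenated onto it, so it is a text value, and comparing text against 35 will not behave like a numeric comparison. The intended month/year logic is sketched below in Python with an illustrative helper name; in Airtable itself the analogous fix would be to compare the raw DATETIME_DIFF number first and append the unit letter only in the output branch.

```python
from datetime import date

def age_label(dob: date, today: date) -> str:
    """Return age as months ('15M') up to 35 months, else whole years ('4Y')."""
    # Compare as a plain number first; only attach the unit letter at the end.
    months = (today.year - dob.year) * 12 + (today.month - dob.month)
    if today.day < dob.day:      # the day-of-month hasn't come around yet
        months -= 1
    return f"{months}M" if months <= 35 else f"{months // 12}Y"
```

For example, a child born 15 January 2019 is "15M" on 7 May 2020, while a child born 7 May 2016 is "4Y" on 7 May 2020.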
Optimizing algorithm
Hello there... I'm legolizard, and I've been watching the tek syndicates' videos for a while now, and thought it would be a good idea to finally join the forum. :D It really is awesome, I must say... I was wondering if anyone could help with an algorithm I'm designing. It's mainly for the giggles. I mean, who doesn't like seeing lines and lines of numbers strung across the console window in an endless flurry of some computational mess? I've written this program in C++; it is relatively simple, yet I wish to optimize it so that it will output the answer faster. To outline the basic goal of the program: Find all the different series of numbers that are the summation of a set number, n. For example, assuming the user enters in five the output would be: For small n my current method is super fast, but that's not much of an accomplishment, lawl. For n > 40 the process can take up to a minute+. Could anyone offer some advice on how to optimize my current method? My current function:

void getSums( const std::string &str, int x, int y ){
    for( ; x >= 2 * y + 1; y++ )
        getSums( str + ' ' + toString(y), x - y, y + 1 );
    std::cout << str << ' ' << x << std::endl;
}

I know that seems kind of... weird at first, I asked my CS teacher and he gave me ?_? face. It really isn't all too complicated though... at least from my point of view it isn't... then again I spent days thinking about it, lawl. toString is a function I declared earlier in the program to parse an integer data type to a string (Templates ftw). Any help would be appreciated. Thanks. :D Oh and thanks for this awesome forum. ^_^

Dang, I wish I could help you, my background is in C# though so your example doesn't help me all that much and I'm mathematically... crap anyway. Hang on. I might have thought of something. Let me just get back to you.
#include <iostream>
#include <string>

int main() {
    int input;
    std::cin >> input;
    for(int i = 0; i < input/2; i++)
        std::cout << i << " " << input - i << '\n';
    return 0;
}

ran to 1000 in 3 seconds. if you need it to be crazy fast, i could get into openCL or asm optimizations, but that's a bit complex :) edit: that format of the code went out of whack there

I wanted to give it a try. This only shows the sum of two numbers, so I guess it's not really what you were looking for, but a little exercise for me nonetheless.

#include <iostream>

int main() {
    int x;
    int count = 0;
    std::cin >> x;
    for(int i = 0; i < x/2; ++i) {
        std::cout << i << " " << x - i << std::endl;
    }
    if(x % 2 == 0)
        std::cout << count << " " << count << std::endl;
    else
        std::cout << count << " " << count + 1 << std::endl;
    return 0;
}

Nah, mine didn't work. Never mind. ztrain has the right idea. Divide the limit of your loop by 2. For instance if I'm printing out all the numbers that sum to 6: I print 6, I print 1 and 5, I print 2 and 4, I print 3 and 3, I print 4 and 2.... wait! No need! I've already done that. This applies to 5 and 1 too. Here is what I came up with in Java: x is whatever the user inputs: ...3000 for example

long x = 3000;
for (long i = 1; i <= x / 2; i++) {
    long y = x - i;
    if(i == 1){
        System.out.println(x + " = " + x);
    }
    System.out.println(y + " + " + i + " = " + x);
}

Ran to 50000 in 4 seconds. Ran to 1,000,000 in 1 min 38 seconds. Ran to 1,000,000,000 in... I'll get back to you haha

Good post legolizard!
you could start a binary tree that divides each input/2 step into its own leaf and then once it has generated a leaf that is down to the simplest input/2, you can recursively step back up the array. since i understand the basics of binary trees and recursion and not much about programming, i leave an Idea nugget for you to code. =) good luck!

I wrote a small program in c++ that does the Collatz conjecture and shows how many steps. Interesting how every positive number I tried always goes to 1. I'm enjoying these little programs, making me exercise my brain.

#include <iostream>
using namespace std;

int main(){
    unsigned long long n;
    cin >> n;
    cout << endl;
    int count = 0;
    while (n > 1){
        cout << n << endl;
        if(n % 2 != 0){
            n = 3 * n + 1;
        } else {
            n = n / 2;
        }
        count++;
    }
    cout << n << endl << "Total steps: " << count << endl;
    return 0;
}

idk if this works, i really don't know programming, but just assign different leaves to different cpu cores
I've already been able to speed my function slightly, and now I think by deleting the duplicates it should run much faster. :D Oh, sorry Commissar, you posted while I was still typing.... :( If only I knew how to do that... lol, I can't even get as far as you did... all I know is HTML5 and CSS. can't really count yaml. done a bit with lua Actually i don't really think multithreading would be good for this... threading is only good for concurrent tasks... but you would actually have to loop through numbers for this so i'm not sure multithreading would be efficient http://codepad.org/yufCdlqE - since the recurrency is so simple limited only by i/o. for example n = 100 - 444792 answers, generated in 0.4s on my home pc - but can still be improved by optimizing various parts: 1) custom number to string conversion: http://codepad.org/OYOOHMlH - runtime drops down to 0.09s, 2) buffer output: http://codepad.org/CLpJ3w5Q - runtime drops down to 0.04s. That is actually really interesting; I never thought I would be bottlenecked by I/O. Maybe my computer is just really slow though, since running 100 took much longer than .4 seconds. Custom string and output sounds really interesting, though! It would, of course, be faster if the number was passed in by argument(like what you did), rather than via input. Well the overall time of the program would be much shorter.
Earth Systems
This class focuses on the numerical solution of problems arising in the quantitative modeling of Earth systems. The focus is on continuum mechanics problems as applied to geological processes in the solid Earth, but the numerical methods have broad applications including in geochemistry or climate modeling. We briefly review math and continuum mechanics fundamentals, then discuss ordinary differential equations (ODEs), and spend the majority of the class discussing finite difference (FD) and finite element (FE) solutions to partial differential equations (PDEs). The class targets graduate students from the Earth sciences, and consists of lectures and joint and individual computer lab exercises. Grading is based on homework/programming project assignments, a final project, and class participation. This class was last taught in 2016, but lecture notes and problem sets may still be useful and are provided in individual chunks below.
Prerequisites: None. Recommended: Intro Earth Science, Geodynamics. Example syllabus for Spring 2016 at USC
Online material
• Complete set of lecture notes:
□ Becker, T. W. and Kaus, B. J. P. (2020 update): Numerical Modeling of Earth Systems: An introduction to computational methods with focus on solid Earth applications of continuum mechanics, The University of Texas at Austin, v. 1.2.2, 2020. (PDF)
□ Becker, T. W. and Kaus, B. J. P. (2015): Numerical Modeling of Earth Systems: An introduction to computational methods with focus on solid Earth applications of continuum mechanics. figshare, doi:10.6084/m9.figshare.1555628, 2015.
• All accompanying Matlab code for exercises (solutions available for instructors upon request).
• Individual handouts and problem sets: Please note that individual PDFs will have broken cross-references; use the complete lecture notes for a consistent document.
Updated: October 31, 2024 (c) Thorsten Becker, 1997-2024.
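The finite difference material in such a course typically starts from something like the 1D heat (diffusion) equation u_t = κ u_xx. The following is a minimal illustrative sketch of one explicit FD step; it is my own example, not taken from the course notes (which use Matlab).

```python
def heat_step(u, kappa, dx, dt):
    """One explicit finite-difference step for u_t = kappa * u_xx,
    with fixed (Dirichlet) boundary values at both ends."""
    r = kappa * dt / dx**2          # must be <= 0.5 for stability
    new = u[:]                      # boundaries are left untouched
    for i in range(1, len(u) - 1):
        # Second-difference approximation of u_xx at node i.
        new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
    return new

# A spike of heat in the middle of a cold rod, smoothed over 50 steps.
u = [0.0] * 21
u[10] = 1.0
for _ in range(50):
    u = heat_step(u, kappa=1.0, dx=1.0, dt=0.25)  # r = 0.25, stable
```

With r = 0.25 each update is a weighted average of a node and its neighbors, so the spike spreads out and its peak decays, which is the qualitative behavior of diffusion.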
What is the Cartesian form of #(10, (-7pi)/3)#?

Answer 1
$x = 5$
$y = -5\sqrt{3}$

The given data: #(r, theta) = (10, (-7pi)/3)#
#x = r cos theta = 10*cos((-7pi)/3) = 10*cos(-pi/3) = 10*(1/2)#
#x = 5#
#y = r sin theta = 10*sin((-7pi)/3) = 10*sin(-pi/3) = 10*((-sqrt3)/2)#
#y = -5sqrt3#
God bless....I hope the explanation is useful.

Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from.
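The conversion above is easy to sanity-check numerically; here is a quick Python sketch of the same computation:

```python
import math

r, theta = 10, -7 * math.pi / 3   # the given polar coordinates

# -7*pi/3 is coterminal with -pi/3 (add 2*pi), so:
x = r * math.cos(theta)   # 10 * cos(-pi/3) = 10 * (1/2) = 5
y = r * math.sin(theta)   # 10 * sin(-pi/3) = 10 * (-sqrt(3)/2) = -5*sqrt(3)
```

Evaluating gives x = 5 and y ≈ -8.660, matching -5√3 from the worked solution.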
Pairs of Numbers Search for Pairs of Numbers It's easiest to describe the search for pairs of numbers by an example: A sparse Sudoku This Sudoku does not have a unique solution, but it is a good example nonetheless. When you try normal crosshatching you don't find a number, but you find that the 2 and 8 have only two possible cells in the upper left block. The 2 and 8 have only two possible cells each. You should mark this in your Sudoku because it is useful later on. The places for the two numbers are marked. The 2 and 8 have to be in these two cells, therefore no other numbers can be in these cells. The two cells are blocked for all other numbers. Knowing that, you can again use crosshatching: You can find a place for the 4 using crosshatching. The 4 previously had three possible positions in the upper left 3x3 block; now only one of them is left: The Sudoku with the newly found 4 This technique is very useful when using pen and paper to solve a Sudoku. It works with triples instead of pairs as well, if the same three cells are possible for all three numbers.
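The elimination step described above is easy to state in code. The following is my own minimal Python sketch (not from the page), working over the candidate sets of a single unit (row, column, or block): when two numbers share exactly the same two possible cells, those cells are blocked for every other number.

```python
from itertools import combinations

def block_pairs(cands):
    """cands maps cell -> set of candidate numbers within one unit.
    When two numbers can only go in the same two cells, strip all
    other candidates from those cells. Returns the updated dict."""
    # Invert the map: where can each number still go?
    places = {}
    for cell, nums in cands.items():
        for num in nums:
            places.setdefault(num, set()).add(cell)
    # A pair of numbers confined to the same two cells locks those cells.
    for a, b in combinations(sorted(places), 2):
        if places[a] == places[b] and len(places[a]) == 2:
            for cell in places[a]:
                cands[cell] = {a, b}
    return cands
```

For instance, with cells A, B, C where A could hold {2, 4, 8}, B could hold {2, 5, 8}, and C could hold {4, 5, 9}, the 2 and 8 fit only in A and B, so A and B reduce to {2, 8}; crosshatching can then place the 4 in C.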
Chapter 5 Predicate Logic
In this chapter, we consider predicate logic, with functions and quantifiers. The discussion is broken up into syntax, semantics, and proofs. Predicate logic is a richer system than sentential logic and allows us to move closer to the kind of system we need for natural language semantics. It allows us to introduce model theory at an introductory level, a set-theoretic characterization of predicate logic semantics. The syntax of predicate logic can be broken up into a vocabulary and a set of rules for constructing formulas out of that vocabulary. Basically, the primitives are constants, variables, predicates, connectives, quantifiers, and delimiters. Constants and variables correspond to objects in our world.
Constants are like names, e.g. Ernie or Hortence. Variables are more like pronouns, e.g. they or it. Predicates allow us to describe properties of objects and sets of objects. Quantifiers allow us to refer to sets of things. The connectives and delimiters are the same from sentential logic. These are the elements of the vocabulary of first-order predicate logic:

Constants a, b, c, . . .; with or without superscripts. The superscripts allow us to convert a finite set of letters into an infinite set of constants.
Variables x, y, z, . . .; with or without superscripts. Again, the superscripts allow us to convert a finite set of letters into an infinite set of variables. We do not consider identity or function terms.
Terms Constants ∪ Variables = Terms. That is, all the constants and variables grouped together constitute the terms of predicate logic.
Predicates F, G, H, . . .. Each takes a fixed number of terms. These too can be superscripted to get an infinite set of predicates.
Connectives The usual suspects: ∧, ∨, ¬, →, ↔. We use the same connectives as in sentential logic.
Quantifiers ∀ and ∃. The first is the universal quantifier and the second is the existential quantifier.
Delimiters Parentheses.

With superscripts, there are an infinite number of constants, variables, and predicates. The vocabulary is used to define the set of WFFs recursively.

1. If P is an n-ary predicate and t1, . . . , tn are terms, then P(t1, . . . , tn) is a WFF.
2. If ϕ and ψ are WFFs, then ¬ϕ, (ϕ ∧ ψ), (ϕ ∨ ψ), (ϕ → ψ), and (ϕ ↔ ψ) are WFFs.
3. If ϕ is a WFF and x is a variable, then (∀x)ϕ and (∃x)ϕ are WFFs. (The scope of the quantifiers in these WFFs is ϕ.)
4. That's it.

Notice that there is no requirement that the variable next to the quantifier be paired with the same variable in the formula in its scope. Thus (∃x)F(x) is as well-formed a formula as (∃x)F(y). In fact, (∃x)(∀y)(∀y)F(z) is just as well formed. Notice too that we've defined a notion of scope.
For example, in a WFF like (∀x)(∀y)G(x, y), the scope of (∀y) is G(x, y) and the scope of (∀x) is (∀y)G(x, y). Notice too that the scope of a quantifier is not simply everything to the right. In a WFF like ((∃x)G(y) ∧ F(a)), the scope of (∃x) is only G(y). On the other hand, in a WFF like (∀x)(F(a) ∧ G(b)), the scope of (∀x) is (F(a) ∧ G(b)). Notice how important parentheses and the syntax of a WFF are to determining scope. We will make use of scope in the semantics of predicate logic.

The semantics of predicate logic can be understood in terms of set theory. First, we define the notion of model.

Definition 6 (Model) A model is a set D and a function f.
1. f assigns each constant to a member of D.
2. f assigns each one-place predicate to a subset of D.
3. f assigns each two-place predicate to a subset of D × D.
4. etc.

The basic idea is that a set of constants and predicates are paired with elements from the set of elements provided by the model. We can think of those elements as things in the world. Each constant can be paired directly with an element of the model. Thus if our model includes the individual Ernie, we might pair the constant a with that individual. Predicates are a little more complex. We can think of each predicate as holding either for a set of individual elements of the model or a set of ordered tuples defined in the model. Each predicate thus defines a set of elements or tuples of elements. Thus, if the predicate G is taken to define the set of individuals that might be in the kitchen at some particular time, we might take G to be defined as follows: f(G) = {Ernie, Hortence}. Predicates with more than one argument are mapped to a set of tuples. For example, if our model is the set of sounds of English and F is defined as the set of consonants in English that are paired for voicing, we would have f(F) = {⟨p, b⟩, ⟨t, d⟩, ⟨k, g⟩, . . .}. Notice that there is no requirement that there be a single model.
A logical system can be paired with any number of models. Thus a predicate G could be paired with a model of individuals and rooms and define a set of individuals that are in the kitchen, or it could be paired with a model of sounds in English and define the set of nasal consonants. We can use model theory to understand how predicate logic formulas are evaluated. (Voicing, mentioned above, refers to vibration of the vocal folds.) As with simple sentential logic, predicate logic formulas evaluate to one of two values: T or F. The logical connectives have their usual function, but we must now include a mechanism for understanding predicates and quantifiers. Consider predicates first. An expression like G(a) is true just in case f(a) is in the subset of D that f assigns G to. For example, if a is paired with Ernie and Ernie is in the set of D that G is paired with, f(a) = Ernie and Ernie ∈ f(G), then G(a) is true. Likewise, H(a, b) is true just in case ⟨a, b⟩ is in the subset of D × D that f assigns H to. For example, if we take D to be the set of words of English and we take H to be the relation ‘has fewer letters than’, then H(a, b) is true just in case the elements we pair a and b with are in the set of ordered pairs defined by f(H). For example, if f(a) = hat and f(b) = chair, then ⟨hat, chair⟩ ∈ f(H) and H(a, b) is true. Quantified variables are then understood as follows. Variables range in value over the members of D. An expression quantified with ∀ is true just in case the expression is true of all members of the model D; an expression quantified with ∃ is true just in case the expression is true of at least one member of the model D. Let's go through some examples of this. Assume for these examples that D = {m, n, N}. For example, an expression like (∀x)G(x) is true just in case every member of D is in the set f assigns to G. In the case at hand, we might think of G as ‘is nasal’, in which case f(G) = {m, n, N}.
Since f(G) = D, (∀x)G(x) is true. On the other hand, if we interpret G as 'is coronal', then f(G) = {n} and (∀x)G(x) is false, since f(G) ⊂ D. (Recall that nasal sounds are produced with air flowing through the nose; coronal sounds are produced with the tip of the tongue.) Likewise, (∃x)G(x) is true just in case the subset of D that f assigns to G has at least one member. If we interpret G as 'is coronal', then this is true, since f maps G to the subset {n}, which has one member, i.e. |f(G)| ≥ 1.

More complex expressions naturally get trickier. Combining quantified variables and constants is straightforward. For example, (∀x)H(a, x) is true just in case every member of D can be the second member of each tuple in the set of ordered pairs that f assigns to H, when a is the first member. Thus, if f(a) = m, then H(a, x) = {⟨m, m⟩, ⟨m, n⟩, ⟨m, N⟩}. To get this set of ordered pairs, we might interpret H(a, b) as saying that a is at least as anterior as b, where anteriority refers to how far forward in the mouth the primary constriction for the sound is. On this model, f(H) = {⟨m, m⟩, ⟨m, n⟩, ⟨m, N⟩, ⟨n, n⟩, ⟨n, N⟩, ⟨N, N⟩}.

We can use this latter interpretation of H to treat another predicate logic formula: (∀x)H(x, x). Here there is still only one quantifier and no connectives, but there is more than one quantified variable. The interpretation is that both arguments must be the same. This expression is true if H can pair all elements of D with themselves. This is true in the just preceding case since {⟨m, m⟩, ⟨n, n⟩, ⟨N, N⟩} ⊆ f(H). That is, every sound in the set D is at least as anterior as itself!

Let's now consider how to interpret quantifiers and connectives in formulas together. The simplest case is where some connective occurs 'outside' any quantifier, e.g. ((∀x)G(x) ∧ (∀y)H(y)). This is true just in case f(G) = D and f(H) = D, that is, if G is true of all the members of D and H is true of all the members of D, e.g. (f(G) = D ∧ f(H) = D).
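The worked examples over D = {m, n, N} can be checked mechanically: Python's built-in `all()` and `any()` play the role of ∀ and ∃. A sketch (the variable names are my own):

```python
# Evaluating the chapter's examples over D = {m, n, N}.
# all() plays the role of the universal quantifier, any() the existential.

D = {"m", "n", "N"}
nasal   = {"m", "n", "N"}   # f(G) when G is read as 'is nasal'
coronal = {"n"}             # f(G) when G is read as 'is coronal'
# f(H): 'is at least as anterior as'
H = {("m","m"), ("m","n"), ("m","N"), ("n","n"), ("n","N"), ("N","N")}

forall_nasal   = all(x in nasal for x in D)     # (∀x)G(x), G = nasal: True
forall_coronal = all(x in coronal for x in D)   # (∀x)G(x), G = coronal: False
exists_coronal = any(x in coronal for x in D)   # (∃x)G(x), G = coronal: True
forall_Hxx     = all((x, x) in H for x in D)    # (∀x)H(x,x): True
forall_Hax     = all(("m", x) in H for x in D)  # (∀x)H(a,x), f(a) = m: True

print(forall_nasal, forall_coronal, exists_coronal, forall_Hxx, forall_Hax)
```

The generator expressions mirror the definitions directly: membership in the predicate's extension is truth of the atomic formula, and the quantifier decides how many members of D must satisfy it.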
If the universal quantifier ∀ is 'outside' the connective, (∀x)(G(x) ∧ H(x)), the formula ends up having the same truth value, but the interpretation is a little different. This latter formula is true on the model where G and H apply to every member of D. Here, we interpret the conjunction within the scope of the universal quantifier in its set-theoretic form: intersection. We then intersect f(G) and f(H). If this intersection is D, then the original expression is true. Thus (∀x)(G(x) ∧ H(x)) is true just in case (f(G) ∩ f(H)) = D is true. (It is a theorem of set theory that if A ∩ B = U, then A = B = U.) What this means is that a universal quantifier can be raised from or lowered into a conjunction, changing the scope of the quantifier, with no difference in truth value.

With the existential quantifier, scope forces different interpretations. The expression with the existential quantifier inside the conjunction, ((∃x)G(x) ∧ (∃y)H(y)), does not have the same value as the expression with the existential quantifier outside the conjunction, (∃x)(G(x) ∧ H(x)). The first is true just in case there is some element of D that G holds of and there is some element of D that H holds of; the two elements need not be the same. In formal terms: |f(G)| ≥ 1 and |f(H)| ≥ 1, e.g. (|f(G)| ≥ 1 ∧ |f(H)| ≥ 1). The two sets, f(G) and f(H), need not have any elements in common. The second is true only if there is some element of D that both G and H hold of, that is, in f(G) and in f(H); that is, they must have at least one element in common: |(f(G) ∩ f(H))| ≥ 1.

The point of these latter two examples is twofold. First, nesting connectives within the scope of quantifiers requires that we convert logical connectives to set-theoretic operations. Second, nesting quantifiers and connectives can result in different interpretations depending on the quantifier and depending on the connective.

The relative scope of more than one quantifier also matters. Consider the following four formulas.
(5.1) a. (∀x)(∀y)G(x, y)
b. (∃x)(∃y)G(x, y)
c. (∀x)(∃y)G(x, y)
d. (∃x)(∀y)G(x, y)

When the quantifiers are the same, their relative nesting is irrelevant. Thus (5.1a) is true just in case every element can be paired with every element, i.e. f(G) = D × D. Reversing the universal quantifiers gives exactly the same interpretation: (∀y)(∀x)G(x, y). Likewise, (5.1b) is true just in case there's at least one pair of elements in D that G holds of, i.e. |f(G)| ≥ 1. Reordering the existential quantifiers does not change this interpretation: (∃y)(∃x)G(x, y).

On the other hand, when the quantifiers are different, the interpretation changes depending on which quantifier comes first. Example (5.1c) is true just in case every member of D can occur as the first member of at least one of the ordered pairs of f(G). Reversing the quantifiers produces a different interpretation. Thus (∃y)(∀x)G(x, y) means that there is at least one element y that can occur as the second member of a pair with all elements. These different interpretations can be depicted below. The required interpretation for (5.1c) is this:

(5.2) G(m, ?) G(n, ?) G(N, ?)

It doesn't matter what the second member of each of these pairs is, as long as f(G) includes at least three with these first elements. When we reverse the quantifiers, we must have one of the following three situations: a, b, or c.

(5.3) a. G(m, m) G(n, m) G(N, m)
b. G(m, n) G(n, n) G(N, n)
c. G(m, N) G(n, N) G(N, N)

All members of D must be paired with some unique member of D. The interpretation of (5.1d) is similar to the interpretation of (5.1c) with reversed quantifiers; it is true just in case there is some element that can be paired with every element of D as the second member of the ordered pair. We must have m, n, and N as the second member of the ordered pairs, but the first member must be the same across all three.

(5.4) a. G(m, m) G(m, n) G(m, N)
b. G(n, m) G(n, n) G(n, N)
c.
G(N, m) G(N, n) G(N, N)

Some unique member of D must be paired with all members of D. Reversing the quantifiers in (5.1d), (∀y)(∃x)G(x, y), is interpreted like this:

(5.5) G(?, m) G(?, n) G(?, N)

Every member of D must occur as the second member of some pair in f(G). The interpretation of quantifier scope can get quite tricky in more complex formulas.

Finally, notice that this system provides no interpretation for unquantified or free variables. Thus an expression like G(x) has no interpretation. This would seem to be as it should be.

Laws and Rules

We can reason over formulas with quantifiers, but we need some additional Laws and Rules of Inference. As we've already seen, quantifiers can distribute over logical connectives in various ways. The Laws of Quantifier Distribution capture the relationships between the quantifiers and conjunction and disjunction.

(5.6) Laws of Quantifier Distribution
Law 1: QD1 ¬(∀x)ϕ(x) ⇐⇒ (∃x)¬ϕ(x)
Law 2: QD2 (∀x)(ϕ(x) ∧ ψ(x)) ⇐⇒ ((∀x)ϕ(x) ∧ (∀x)ψ(x))
Law 3: QD3 (∃x)(ϕ(x) ∨ ψ(x)) ⇐⇒ ((∃x)ϕ(x) ∨ (∃x)ψ(x))
Law 4: QD4 ((∀x)ϕ(x) ∨ (∀x)ψ(x)) =⇒ (∀x)(ϕ(x) ∨ ψ(x))
Law 5: QD5 (∃x)(ϕ(x) ∧ ψ(x)) =⇒ ((∃x)ϕ(x) ∧ (∃x)ψ(x))

Consider first Law 1 (QD1). This says that if ϕ(x) is not universally true, then there must be at least one element for which ϕ(x) is not true. This actually follows directly from what we have said above and the fact that the set-theoretic equivalent of negation is complement. The set-theoretic translation of ¬(∀x)ϕ(x) is ¬(f(ϕ) = D). If this is true, then it follows that the complement of f(ϕ) contains at least one element. The model-theoretic interpretation of the right side of the first law says just this: |f(ϕ)′| ≥ 1.

Law 2 allows us to move a universal quantifier down into a conjunction. The logic is that if something is true for a whole set of predicates, then it is true for each individual predicate. Law 3 allows us to move an existential quantifier down into a disjunction.
The logic is that if something is true for at least one of a set of predicates, then it is true for at least one of them, each considered independently. Both Law 2 and Law 3 should be expected given the example we went through in the previous section to explain the relationship of quantifiers and connectives.

In general, the relationships above can be made sense of if we think of the universal quantifier as a conjunction of all the elements of the model D and the existential quantifier as a disjunction of all the elements of the model D. Thus:

(∀x)F(x) = ⋀_{xᵢ ∈ D} F(xᵢ)

The big wedge symbol is interpreted as indicating that every element x in the subscripted set D is conjoined together. In other words, (∀x)F(x) is true just in case we apply F to every member of D and conjoin the values. The same is true for the existential quantifier:

(∃x)F(x) = ⋁_{xᵢ ∈ D} F(xᵢ)

The big 'v' symbol is interpreted as indicating that every element x in the subscripted set D is disjoined together. In other words, (∃x)F(x) is true just in case we apply F to every member of D and disjoin the values.

Given the associativity of disjunction and conjunction (4.33), it follows that the universal quantifier can raise and lower into a conjunction and the existential quantifier can raise and lower into a disjunction. For example, imagine that our universe D is composed of only two elements a and b. It would then follow that an expression like (∀x)(F(x) ∧ G(x)) is equivalent to

((F(a) ∧ G(a)) ∧ (F(b) ∧ G(b)))

Using Associativity, we can move terms around to produce

((F(a) ∧ F(b)) ∧ (G(a) ∧ G(b)))

Translating each conjunct back, this is equivalent to

((∀x)F(x) ∧ (∀x)G(x))

Law 1 can also be cast in these terms, given DeMorgan's Law (4.38). ¬(∀x)F(x) ⇐⇒ (∃x)¬F(x) is really the same thing as:

¬⋀_{xᵢ ∈ D} F(xᵢ) ⇐⇒ ⋁_{xᵢ ∈ D} ¬F(xᵢ)

The only difference is that the quantifiers range over a whole set of values from D, not just a pair of values.

Law 4 says that a universal quantifier can be raised out of a disjunction.
This is a logical consequence, not a logical equivalence. Thus, if we know that ((∀x)G(x) ∨ (∀x)H(x)), then we know (∀x)(G(x) ∨ H(x)), but not vice versa. For example, if we know that either everybody in the room likes logic or everybody in the room likes rock climbing, then we know that everybody in the room either likes logic or likes rock climbing. However, if we know the latter, we cannot conclude the former. The latter is consistent with a situation where some people like logic, but other people like rock climbing. The former does not have this interpretation. Casting this in model-theoretic terms, if we have (f(G) = D ∨ f(H) = D), then we have (f(G) ∪ f(H)) = D.

Law 5 says that an existential quantifier can be lowered into a conjunction. As with Law 4, this is a logical consequence, not a logical equivalence. Thus, if we know (∃x)(G(x) ∧ H(x)), then we know ((∃x)G(x) ∧ (∃x)H(x)), but not vice versa. For example, if we know that there is at least one person in the room who likes both logic and rock climbing, then we know that at least one person in the room likes logic and at least one person in the room likes rock climbing. The converse implication does not hold. From the fact that somebody likes logic and somebody likes rock climbing, it does not follow that that is the same somebody. Casting this in model-theoretic terms, if we have |(f(G) ∩ f(H))| ≥ 1, then we have (|f(G)| ≥ 1 ∧ |f(H)| ≥ 1).

Now consider the Laws of Quantifier Scope. These govern when quantifier scope is and is not relevant.

(5.7) Laws of Quantifier Scope
Law 6: QS6 (∀x)(∀y)ϕ(x, y) ⇐⇒ (∀y)(∀x)ϕ(x, y)
Law 7: QS7 (∃x)(∃y)ϕ(x, y) ⇐⇒ (∃y)(∃x)ϕ(x, y)
Law 8: QS8 (∃x)(∀y)ϕ(x, y) =⇒ (∀y)(∃x)ϕ(x, y)

The first two are straightforward. The first says the relative scope of two universal quantifiers is irrelevant. The second says the relative scope of two existential quantifiers is irrelevant. The last reflects an implicational relationship between the antecedent and the consequent.
The antecedent is true just in case there is some x that bears ϕ to every y. The consequent is true just in case for every y there is at least one x, not necessarily the same one, that it bears ϕ to.

Rules of Inference

There are four Rules of Inference for adding and removing quantifiers. You can use these to convert quantified expressions into simpler expressions with constants, that our existing Laws and Rules will apply to, and then convert them back. These are thus extremely useful.

Universal Instantiation (U.I.) allows us to replace a universal quantifier with any arbitrary constant.

(5.8) Universal Instantiation (U.I.)
(∀x)ϕ(x)
∴ ϕ(c)

The intuition is that if ϕ is true of everything, then it is true of any individual thing we might cite.

Universal Generalization (U.G.) allows us to assume an arbitrary individual v and establish some fact about it. If something is true of v, then it must be true of anything. Hence, v can be replaced with a universal quantifier.

(5.9) Universal Generalization (U.G.)
ϕ(v)
∴ (∀x)ϕ(x)

The constant v is special; only it can be replaced with the universal quantifier. The intuition here is that if we establish some property holds of an arbitrary individual, then it must hold of all individuals. Notice that Universal Generalization (5.9) can proceed only from v, but Universal Instantiation (5.8) can instantiate to any constant, including v.

Existential Generalization (E.G.) allows us to proceed from any constant to an existential quantifier.

(5.10) Existential Generalization (E.G.)
ϕ(c)
∴ (∃x)ϕ(x)

Thus if some property holds of some specific individual, we can conclude that it holds of at least one individual.

Finally, Existential Instantiation (E.I.) allows us to go from an existential quantifier to a constant, as long as the constant has not been used yet in the proof. It must be a new constant.

(5.11) Existential Instantiation (E.I.)
(∃x)ϕ(x)
∴ ϕ(w), where w is a new constant

This one is a bit tricky to state in intuitive terms.
The basic idea is that if we know that some property holds of at least one individual, we can name that individual (as long as we don't use a name we already know).

Let's now look at some simple proofs using this new machinery. First, we consider an example of Universal Instantiation. We prove H(a) from G(a) and (∀x)(G(x) → H(x)).

(5.12)
1. (∀x)(G(x) → H(x))
2. G(a)
3. G(a) → H(a)   1, U.I.
4. H(a)   2,3, M.P.

First, we remove the universal quantifier with Universal Instantiation and then use Modus Ponens to get the desired conclusion.

Next, we have an example of Universal Generalization. We try to prove (∀x)(R(x) → W(x)) from (∀x)(R(x) → Q(x)) and (∀x)(Q(x) → W(x)).

(5.13)
1. (∀x)(R(x) → Q(x))
2. (∀x)(Q(x) → W(x))
3. (R(v) → Q(v))   1, U.I.
4. (Q(v) → W(v))   2, U.I.
5. (R(v) → W(v))   3,4 H.S.
6. (∀x)(R(x) → W(x))   5 U.G.

First, we use Universal Instantiation on the two initial assumptions, massaging them into a form appropriate for Hypothetical Syllogism. We then use Universal Generalization to convert that result back into a universally quantified expression. Notice how we judiciously chose to instantiate to v, anticipating that we would be using Universal Generalization later.

Finally, we consider a case of Existential Instantiation (E.I.). We prove ((∃x)P(x) ∧ (∃x)Q(x)) from (∃x)(P(x) ∧ Q(x)).

(5.14)
1. (∃x)(P(x) ∧ Q(x))
2. (P(w) ∧ Q(w))   1 E.I.
3. P(w)   2 Simp.
4. (∃x)P(x)   3 E.G.
5. Q(w)   2 Simp.
6. (∃x)Q(x)   5 E.G.
7. ((∃x)P(x) ∧ (∃x)Q(x))   4,6 Conj.

First, we use Existential Instantiation to strip the quantifier and replace the variable with a new constant. We then split off the conjuncts with Simplification and use Existential Generalization on each to add separate new existential quantifiers. We then conjoin the results with Conjunction.

Notice that the basic strategy in most proofs is fairly clear. Simplify the initial formulas so that quantifiers can be removed. Manipulate the instantiated formulas using the Laws and Rules from the preceding chapter. Finally, generalize to appropriate quantifiers.
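The proofs above are purely syntactic, but their conclusions can be sanity-checked semantically: enumerate every model over a small domain and confirm that whenever the premises are true, the conclusion is true. Here is a sketch for (5.13); the brute-force search is my own illustration, not a technique from the chapter, and checking one small domain is evidence rather than a proof.

```python
from itertools import chain, combinations

# Brute-force semantic check of (5.13): from (∀x)(R(x) → Q(x)) and
# (∀x)(Q(x) → W(x)), conclude (∀x)(R(x) → W(x)).
# We enumerate every assignment of R, Q, W to subsets of a small domain D.

D = {1, 2}

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

ok = True
for R in map(set, subsets(D)):
    for Q in map(set, subsets(D)):
        for W in map(set, subsets(D)):
            # x → y is rendered as 'x not in R or x in Q', etc.
            prem1 = all((x not in R) or (x in Q) for x in D)
            prem2 = all((x not in Q) or (x in W) for x in D)
            concl = all((x not in R) or (x in W) for x in D)
            if prem1 and prem2 and not concl:
                ok = False   # a counter-model would falsify the sequent

print(ok)  # True: no counter-model found over this domain
```

The same loop, with the premise and conclusion lines swapped out, will also confirm that the one-way laws QD4, QD5, and QS8 fail in the reverse direction: a counter-model turns up almost immediately.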
Indirect Proof

We can use our other proof techniques with predicate logic too. Here we show how to use indirect proof. We prove ((∀x)G(x) → (∃x)G(x)) from no premises.

(5.15)
1. ¬((∀x)G(x) → (∃x)G(x))   Auxiliary Premise
2. ¬¬((∀x)G(x) ∧ ¬(∃x)G(x))   1 Cond.
3. ((∀x)G(x) ∧ ¬(∃x)G(x))   2 Compl.
4. (∀x)G(x)   3 Simp.
5. ¬(∃x)G(x)   3 Simp.
6. (∀x)¬G(x)   5 Law 1
7. G(a)   4 U.I.
8. ¬G(a)   6 U.I.
9. (G(a) ∧ ¬G(a))   7,8 Conj.
10. ((∀x)G(x) → (∃x)G(x))   1–9 Indirect Proof

We start off by negating our conclusion and then attempting to produce a contradiction. The basic idea is to convert the negated conditional into a conjunction. We then extract the conjuncts. We use Law 1 to shift the negation inward with the second conjunct and then we instantiate both. We conjoin them into a single contradiction and that completes the indirect proof.

Conditional Proof

We can prove the same thing by Conditional Proof. We assume the antecedent (∀x)G(x) and then attempt to prove the consequent (∃x)G(x) from that assumption.

(5.16)
1. (∀x)G(x)   Auxiliary Premise
2. G(a)   1 U.I.
3. (∃x)G(x)   2 E.G.
4. ((∀x)G(x) → (∃x)G(x))   1–3 Conditional Proof

We begin by assuming the antecedent. From that assumption, we can instantiate to a constant and then generalize to an existential quantifier. That completes the proof.

In this chapter, we have treated the basics of predicate logic, covering syntax, semantics, and proof mechanisms. We began with an introduction of the basic syntax of the system. Well-formed formulas of predicate logic (WFFs) are built on well-formed atomic statements. Atomic statements are built up from a finite alphabet of (lowercase) letters and a potentially infinite number of primes, e.g. p, q, r, p′, q′, q′′. These, in turn, are combined via a restricted set of connectives into well-formed formulas. There are three important differences with respect to simple sentential logic. First, we have predicate symbols, e.g. F(a) or G(b, c), etc. In addition, we have the universal and existential quantifiers: ∀ and ∃. Finally, we have a notion of scope with respect to quantifiers which is important in the semantics of predicate logic.
The semantics of predicate logic is more complex than that of sentential logic. Specifically, formulas are true or false with respect to a model, where a model is a set of individuals and a mapping from elements of the syntax to sets of elements drawn from the set of elements in the model. Quantifiers control how many individuals must be in the range of the mapping. For example, (∃x)F(x) is true only if the predicate F is mapped to at least one individual; (∀x)G(x) is true only if the predicate G is mapped to all individuals. These restrictions hold for any predicate in the scope of the quantifier with an as yet free variable.

All the Laws and Rules of Inference of sentential logic apply to predicate logic as well. However, there are additional Laws and Rules that govern quantifiers. The Laws of Quantifier Distribution and Scope govern the relations between quantifiers and between quantifiers and connectives. The Rules of Instantiation and Generalization govern how quantifiers can be added to or removed from formulas. The chapter concluded with demonstrations of these various Laws and Rules in proofs. We also showed how Conditional Proof and Indirect Proof techniques are applicable in predicate logic.

1. Identify the errors in the following WFFs:
(a) (∀x)G(y) → H(x)
(b) (∀z)(F(z) ⇐⇒ G(z))
(c) (F(x) ∧ G(y) ∧ H(z))
(d) F(X′)
(e) ¬(¬(∀z)(F(x) ∨ K(w)))
(f) (F(x) ← ¬F(x))

2. In the following WFFs, mark the scope of each quantifier with labelled brackets.
(a) (∀x)(∀y)(∀z)F(a)
(b) ((∀x)F(y) ∧ (∃y)G(x))
(c) (∃x)((∀y)F(x) ↔ F(y))
(d) (∃z)(F(a) ∧ (F(b) ∧ (G(c) → F(z))))
(e) ¬(∃y)¬¬F(y)

3. For the following questions, assume this model:
D = {canto, cantas, canta, cantamos, cantan}
f(FIRST) = {canto, cantamos}
f(SECOND) = {cantas}
f(THIRD) = {canta, cantan}
f(SG) = {canto, cantas, canta}
f(PL) = {cantamos, cantan}
For each of the following WFFs, indicate whether it is true or false with respect to this model.
(a) (∃x)FIRST(x)
(b) (∀x)FIRST(x)
(c) (∀x)(SECOND(x) ∨ PL(x))
(d) (∀y)(SG(x) ∨ PL(x))
(e) (∃z)(SG(z) ∧ SECOND(z))

4. Prove ¬(∀z)(F(z) ∧ G(z)) from (¬F(a) ∨ ¬G(a)).
5. Prove G(a) from (∃x)(G(x) ∧ F(x)).
6. Prove (∀x)(∃y)G(x, y) from (∀z)(∀x)G(x, z).
7. Prove that (∀x)(G(x) ∨ ¬G(x)) is a tautology.
8. Prove that ((∀x)F(x) → (∃x)F(x)) is a tautology.
9. Prove F(a) from (∀x)F(x) using Indirect Proof.
10. Prove ¬(∀x)F(x) from ¬F(a) using Indirect Proof.
11. Use Conditional Proof to prove ((∃x)F(x) → (∃x)G(x)) from the WFFs ((∃x)F(x) → (∀z)H(z)) and (H(a) → G(b)).
12. Construct a set of predicates and a model for a small area of language.
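The model in exercise 3 is small enough to evaluate mechanically. A sketch follows (it prints the truth values, so skip it if you want to work the exercise by hand; (d) is omitted from the code because x occurs free in it, and per the chapter free variables receive no interpretation):

```python
# The model from exercise 3: present-tense forms of Spanish 'cantar'.
D      = {"canto", "cantas", "canta", "cantamos", "cantan"}
FIRST  = {"canto", "cantamos"}
SECOND = {"cantas"}
SG     = {"canto", "cantas", "canta"}
PL     = {"cantamos", "cantan"}

a = any(x in FIRST for x in D)               # (∃x)FIRST(x)
b = all(x in FIRST for x in D)               # (∀x)FIRST(x)
c = all(x in SECOND or x in PL for x in D)   # (∀x)(SECOND(x) ∨ PL(x))
e = any(z in SG and z in SECOND for z in D)  # (∃z)(SG(z) ∧ SECOND(z))
# (d) (∀y)(SG(x) ∨ PL(x)) is skipped: x is free, so it has no interpretation.

print(a, b, c, e)  # True False False True
```

For instance, (c) fails because canto is neither second person nor plural, and (e) succeeds because cantas is both singular and second person.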
Andrea Bertozzi Prior Research Funding

NSF DMS-2027277, ATD: Algorithms for Threat Detection in Knowledge Graphs, 9/2020-8/2024, with Jeff Brantingham, coPI.
NSF grant DGE-1829071, NRT-HDR: Modeling and Understanding Human Behavior: Harnessing Data from Genes to Social Networks, 9/2018-8/2024, ALB current PI; Wei Wang original PI.
NSF DMS-1952339, 8/15/20-7/31/24, FRG: Collaborative Research: Robust, Efficient, and Private Deep Learning Algorithms, with S. J. Osher, B. Wang (Univ. Utah).
Collaborative project with J. Xin at UC Argonne National Laboratory, 1F-60314, 1/1/21-12/30/21, Developing Machine Learning Models and Algorithms for including Machine-Learning Surrogates into Design and Control Problems, subaward to UCLA, Sven Leyffer (ANL) PI (graduate student support).
NSF grant DMS-2027438, (Mason Porter coPI), RAPID: Analysis of Multiscale Network Models for the Spread of COVID-19, 4/15/2020 - 3/31/2022
DARPA, Variational Methods on Graphs for Identification on Transaction Networks, Feb 1 2018-Jan 29, 2022. Award number FA8750-18-2-0066.
ATD: Sparsity Models for Forecasting Spatio-Temporal Human Dynamics, NSF DMS-1737770, 9/15/17-8/31/21, with P. J. Brantingham and S. J. Osher
NGA NURI grant HM02101410003, "Sparsity models for spatiotemporal analysis and modeling of human activity and social networks in a geographic context", 10/24/14-10/30/20, with S. Osher, P. J. Brantingham and G. Tita (UC Irvine)
NSF grant DMS-1659676, REU Site: Mathematical Modeling at UCLA, 7/2017-6/2020, M. Roper PI.
NIJ graduate fellowship (Baichuan Yuan), award number 2018-R2-CX-0013, "Large-Scale Deep Point Process Models for Crime Forecasting", Jan 2019-Dec 2020.
NIJ Grant Number: 2014-R2-CX-0101, Testing and Evaluating Body Worn Video Technology in The Los Angeles Police Department, Cecilia Glassman (PI), Los Angeles Police Foundation, 1/1/15-9/31/19, subaward to UCLA with P. J. Brantingham.
NSF grant DMS-1417674, Extreme-scale algorithms for geometric graphical data models in imaging, social and network science, 8/1/14-12/31/18.
NSF grant CMMI-1435709, Collaborative Research: Modeling, Analysis, and Control of the Spatio-temporal Dynamics of Swarm Robotic Systems, 9/1/14-8/31/18, joint with Spring Berman, Arizona State
ONR grant N00014-16-1-2119, Machine Reasoning and Intelligence for Naval Sensing, July 1, 2012 - Jan. 31, 2018, S. Osher PI, joint with Larry Carin at Duke University.
NSF grant DMS-1118971, Algorithms for Threat Detection in Sensor Systems for Analyzing Chemical and Biological Systems Based on Compressive Sensing and L1 Related Optimization, joint grant with Stanley Osher (PI), 8/15/11-9/30/17
NSF grant DMS-1312543, Particle laden flows - theory, analysis and experiment, Sept 2013-Aug. 2017, joint grant with Marcus Roper
NSF grant DMS-1045536, California Research Training Program in Computational and Applied Mathematics, training grant for a summer REU program and graduate/postdoc mentorship program, 6/15/11-5/31/17.
ARO MURI grant W911NF-11-1-0332, with M. Short and J. Brantingham, Scalable, Stochastic and Spatiotemporal Game Theory for Real-World Human Adversarial Behavior, August 2011-May 2017, subaward from USC, Milind Tambe PI.
UC Lab Fees Research grant 12-LR-236660, Sparse modeling for high dimensional data, joint with Stan Osher and Luminita Vese (UCLA), Rick Chartrand and Brendt Wohlberg (Los Alamos National Lab), July 1, 2012-Dec. 31, 2015, $1.5M total.
AFOSR MURI grant FA9550-10-1-0569, "Inferring Structure and Forecasting Dynamics on Evolving Networks", coPI, P. J. Brantingham lead PI, 9/30/10-9/29/15.
ONR grant N000141210040, Analysis and Design of Fast Graph Based Algorithms for High Dimensional Data, November 2011-December 2014
NSF grant DMS-0968309, FRG: Collaborative Research: Mathematics of large scale urban crime, PIs: Bertozzi (lead), Tita (UCI), Brantingham, Mohler, Short, Chayes, Schoenberg, 9/2010-8/2014
NSF grant DMS-0907931, Dynamics of aggregation and collapse in multidimensional swarming models, 9/09-8/14.
ARO grant W911NF1010472, reporting number 58344-MA, Dynamic Models of Insurgent Activity, Aug 2010-February 1, 2014, joint with Jeff Brantingham and George Mohler.
NSF grant CBET-0940417, CDI Type I: Real-time adaptive imaging algorithms for atomic force microscopy, 1/10-12/13, joint grant with Paul Ashby and Jim DeYoreo, Molecular Foundry, Lawrence Berkeley National Laboratory.
NSF grant EFRI-1024765, Collaborative Research: Characterization and Control of Emergent Behavior in Complex Systems, joint with Manish Kumar and Subramanian Ramakrishnan, University of Cincinnati, 9/15/10-8/31/
NSF grant DMS-0914856, Algorithms for Threat Detection (ATD): adaptive sensing and sensor fusion for real time chemical and biological threats, 9/09-8/13.
UC Lab Fees Research Grant 09-LR-04-116741-BERA, "Multiscale methods of fracture and multimaterial debris flow", with Stan Osher, Joey Teran, and Lawrence Livermore National Laboratory (David Eder).
ONR grant N000141010221, Information Fusion of Human Activity, Social Networks, and Geography Using Fast Compressive Sensing, 1/10-12/12, co-PIs Stan Osher, Jeff Brantingham, George Tita (UCI), $1.2M total, approx.
NSF grant DMS-1048840, RAPID: Modeling and experiments of oil-particulate mixtures of relevance to the Gulf of Mexico oil spill, Sept. 2010 - Aug. 2012.
NSF grant DMS-0601395, Research Training Group in Applied Differential Equations and Scientific Computing, 6/06-5/12, (EMSW21 Workforce Grant for training REU students and first year Ph.D. students).
NGA grant HM1582-06-1-2034, Novel Segmentation, Reconstruction, and Learning Models for Hyperspectral Image Processing and Analysis, 9/06-9/10, joint with Stan Osher.
ARO MURI grant 50363-MA-MUR, Spatio-temporal event pattern recognition, subcontract from USC/Brown, Boris Rozovsky, PI. UCLA portion involves T. Chan and P. J. Brantingham, 5/06-4/11.
ARO grant 57590-MA-RIP, Parallel Computing Architecture for Analysis of Spatio-temporal Event Pattern Recognition, May 2010-May 2011, DURIP equipment grant, $180K for new parallel cluster environment at UCLA.
ONR grant N000140810363, Geometry Based Image Analysis and Understanding, 1/08-12/10, $240,000.
ARO grant (STIR) W911NS-09-1-0559, Mathematical modeling of insurgent activities as compared to urban street crime, 10/09-6/10, $50,000.
NSF grant BCS-0527388, DHB: Mathematical and Simulation Modeling of Crime Hot Spots, P. Jeffrey Brantingham (PI), also with L. Chayes and G. Tita (UCI), 1/06-12/09.
ONR grant N000140710431, Fundamental Problems in Microfluidic Mixing and Multiphase Flows, 10/06-12/09
ONR grant N000140610059, Pattern formation and control of distributed swarming vehicles, 10/05-10/08, joint grant with Ira Schwartz, Naval Research Lab, approximately $330,000 to UCLA.
NSF grant ACI-0321917, 9/03-8/09, Collaborative Research-ITR-High Order Partial Differential Equations: Theory, Computational Tools, and Applications in Image Processing, Computer Graphics, Biology, and Fluids (total project includes S. Osher (UCLA), G. Sapiro (UMN), A. Hosoi (MIT), and R. Fedkiw (Stanford)). $730,000 awarded to UCLA (Osher and Bertozzi)
ARO grant W911NF-05-1-0112, Fundamental Principles of Biological Swarming with Application to Artificial Platforms, $240,000 (approx), 4/05-3/08.
NSF grant AST-0442037, 8/04-8/05, ACT/SGER: Object Identification and Classification in Aerial Images, $200,000.
NSF grant DMS-0244498, 9/03-8/07, FRG-Collaborative Research: New Challenges in the Dynamics of Thin Films and Fluid Interfaces, $941,381, original PI at Duke University.
ONR grant N000140410078, Higher Order PDEs in Microfluidics and Image Analysis, $480,000, 10/03-9/06
ONR grant N000140410054, Modeling, Design, and Control of Distributed Mobile Sensors, $300,000, 10/03-9/06
ARO grant DAAD19-02-1-0055, "Swarming in two and three dimensions", $229,389, 4/02-12/04.
ONR grant N000140110290, Transport and Diffusion in Surfactant Driven Thin Films and Higher Order Methods for Image Processing, $120,000, 6/01-10/03.
NSF grant DMS-0074049, 8/00-8/04, Collaborative proposal: Focused Research Group on Fundamental Problems in the Dynamics of Thin Viscous Films and Fluid Interfaces, $781,988 to Duke as PI on project, with R. Behringer, T. Witelski, M. Shearer.
NSF grant DMS-9983320, 7/00-6/06, Duke University Program for Vertically Integrated, Interdisciplinary Research (VIGRE), original PI, $2,389,032.
NSF grant HRD-99799478, 1/00-12/01, PGE/SEP: Project ADVANCE: Developing A Resilient Cohort of Women in Quantitative Sciences, co-PI with Robert Thompson (PI), $99,924.
ONR YIP/PECASE Award, grant number N00014-96-1-0656, "Interface motion and lubrication-type equations", $526,066, 6/96-5/01.
Sloan Research Fellowship, 1995-9, $30,000
ONR grant $50,000, 1995-6.
NSF grant DMS-940484, Mathematical Sciences: Hydrodynamic Interface Motion, $18,000, 7/94-12/95.
NSF grant DMS-9107916, Mathematical Sciences: Postdoctoral Research Fellowship, $75,000, 7/91-6/94.
Properties of Relations

A binary relation \(R\) defined on a set \(A\) may have the following properties:

• Reflexivity
• Irreflexivity
• Symmetry
• Antisymmetry
• Asymmetry
• Transitivity

Next we will discuss these properties in more detail.

Reflexive Relation

A binary relation \(R\) is called reflexive if and only if \(\forall a \in A,\) \(aRa.\) So, a relation \(R\) is reflexive if it relates every element of \(A\) to itself.

Examples of reflexive relations:

1. The relation \(\ge\) ("is greater than or equal to") on the set of real numbers.
2. Similarity of triangles.
3. The relation \({R = \left\{ {\left( {1,1} \right),\left( {1,2} \right),}\right.}\) \({\left.{\kern-2pt\left( {2,2} \right),\left( {3,3} \right),\left( {3,1} \right)} \right\}}\) on the set \(A = \left\{ {1,2,3} \right\}.\)

Figure 1.

Reflexive relations are always represented by a matrix that has \(1\) on the main diagonal. The digraph of a reflexive relation has a loop from each node to itself.

Irreflexive Relation

A binary relation \(R\) on a set \(A\) is called irreflexive if \(aRa\) does not hold for any \(a \in A.\) This means that there is no element in \(R\) which is related to itself.

Examples of irreflexive relations:

1. The relation \(\lt\) ("is less than") on the set of real numbers.
2. Relation of one person being son of another person.
3. The relation \({R = \left\{ {\left( {1,2} \right),\left( {2,1} \right),}\right.}\) \({\left.{\kern-2pt\left( {1,3} \right),\left( {2,3} \right),\left( {3,1} \right)} \right\}}\) on the set \(A = \left\{ {1,2,3} \right\}.\)

Figure 2.

The matrix of an irreflexive relation has all \(0'\text{s}\) on its main diagonal. The directed graph for the relation has no loops.
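Both definitions are easy to test on a finite set when the relation is given as a set of ordered pairs. A sketch in Python (the function names are my own), using the relations from example 3 of each list above:

```python
# Checking reflexivity and irreflexivity of a finite relation R on a set A,
# where R is represented as a set of ordered pairs.

def is_reflexive(A, R):
    # Every element must be related to itself: (a, a) in R for all a in A.
    return all((a, a) in R for a in A)

def is_irreflexive(A, R):
    # No element may be related to itself.
    return all((a, a) not in R for a in A)

A = {1, 2, 3}
R_refl  = {(1, 1), (1, 2), (2, 2), (3, 3), (3, 1)}   # reflexive example 3
R_irref = {(1, 2), (2, 1), (1, 3), (2, 3), (3, 1)}   # irreflexive example 3

print(is_reflexive(A, R_refl), is_irreflexive(A, R_refl))    # True False
print(is_reflexive(A, R_irref), is_irreflexive(A, R_irref))  # False True
```

Note that a relation missing only some diagonal pairs is neither reflexive nor irreflexive, which is why both checks take the full set \(A\) as an argument.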
Symmetric Relation

A binary relation \(R\) on a set \(A\) is called symmetric if for all \(a,b \in A\) it holds that if \(aRb\) then \(bRa.\) In other words, the relative order of the components in an ordered pair does not matter - if a binary relation contains an \(\left( {a,b} \right)\) element, it will also include the symmetric element \(\left( {b,a} \right).\)

Examples of symmetric relations:

1. The relation \(=\) ("is equal to") on the set of real numbers.
2. The relation "is perpendicular to" on the set of straight lines in a plane.
3. The relation \({R = \left\{ {\left( {1,1} \right),\left( {1,2} \right),}\right.}\) \({\left.{\kern-2pt\left( {2,1} \right),\left( {1,3} \right),\left( {3,1} \right)} \right\}}\) on the set \(A = \left\{ {1,2,3} \right\}.\)

Figure 3.

For a symmetric relation, the logical matrix \(M\) is symmetric about the main diagonal. The transpose of the matrix \(M^T\) is always equal to the original matrix \(M.\) In a digraph of a symmetric relation, for every edge between distinct nodes, there is an edge in the opposite direction.

Antisymmetric Relation

A binary relation \(R\) on a set \(A\) is said to be antisymmetric if there is no pair of distinct elements of \(A\) each of which is related by \(R\) to the other. So, an antisymmetric relation \(R\) can include both ordered pairs \(\left( {a,b} \right)\) and \(\left( {b,a} \right)\) if and only if \(a = b.\)

Examples of antisymmetric relations:

1. The relation \(\ge\) ("is greater than or equal to") on the set of real numbers.
2. The subset relation \(\subseteq\) on a power set.
3. The relation \({R = \left\{ {\left( {1,1} \right),\left( {2,1} \right),}\right.}\) \({\left.{\kern-2pt\left( {2,3} \right),\left( {3,1} \right),\left( {3,3} \right)} \right\}}\) on the set \(A = \left\{ {1,2,3} \right\}.\)

Figure 4.
In a matrix \(M = \left[ {{a_{ij}}} \right]\) representing an antisymmetric relation \(R,\) the elements symmetric about the main diagonal are never both equal to \(1:\) if \({a_{ij}} = 1\) for \(i \ne j,\) then \({a_{ji}} = 0.\) The digraph of an antisymmetric relation may have loops, but connections between two distinct vertices can only go one way.

Asymmetric Relation

An asymmetric binary relation is similar to an antisymmetric relation. The difference is that an asymmetric relation \(R\) never contains both \(aRb\) and \(bRa,\) even if \(a = b.\) Every asymmetric relation is also antisymmetric, but the converse is not true: if an antisymmetric relation contains an element of the form \(\left( {a,a} \right),\) it cannot be asymmetric. Thus, a binary relation \(R\) is asymmetric if and only if it is both antisymmetric and irreflexive.

Examples of asymmetric relations:

1. The relation \(\gt\) ("is greater than") on the set of real numbers.
2. The family relation "is father of".
3. The relation \(R = \left\{ {\left( {2,1} \right),\left( {2,3} \right),\left( {3,1} \right)} \right\}\) on the set \(A = \left\{ {1,2,3} \right\}.\)

Figure 5.

The matrix of an asymmetric relation is not symmetric with respect to the main diagonal and has no \(1'\text{s}\) on the diagonal. The digraph of an asymmetric relation has no loops and no pair of edges between distinct vertices in both directions.

Transitive Relation

A binary relation \(R\) on a set \(A\) is called transitive if for all \(a,b,c \in A\) it holds that if \(aRb\) and \(bRc,\) then \(aRc.\) This condition must hold for all triples \(a,b,c\) in the set. If there exists some triple \(a,b,c \in A\) such that \(\left( {a,b} \right) \in R\) and \(\left( {b,c} \right) \in R,\) but \(\left( {a,c} \right) \notin R,\) then the relation \(R\) is not transitive.

Examples of transitive relations:

1. The relation \(\gt\) ("is greater than") on the set of real numbers.
2. The relation "is parallel to" on the set of straight lines.
3. The relation \(R = \left\{ {\left( {1,2} \right),\left( {1,3} \right),\left( {2,2} \right),\left( {2,3} \right),\left( {3,3} \right)} \right\}\) on the set \(A = \left\{ {1,2,3} \right\}.\)

Figure 6.

In a matrix \(M = \left[ {{a_{ij}}} \right]\) of a transitive relation \(R,\) whenever the \(\left({i,j}\right)\)- and \(\left({j,k}\right)\)-entries are \(1,\) the \(\left({i,k}\right)\)-entry is also \(1.\) The presence of \(1'\text{s}\) on the main diagonal does not violate transitivity.
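The remaining four properties can be checked the same way. This sketch (not part of the original text; function names are my own) verifies the numbered example relations from each section above:

```python
from itertools import product

def is_symmetric(R):
    # every (a, b) is accompanied by (b, a)
    return all((b, a) in R for (a, b) in R)

def is_antisymmetric(R):
    # (a, b) and (b, a) both present only when a == b
    return all(a == b for (a, b) in R if (b, a) in R)

def is_asymmetric(R):
    # never both (a, b) and (b, a), even for a == b
    # (equivalently: antisymmetric and irreflexive)
    return not any((b, a) in R for (a, b) in R)

def is_transitive(A, R):
    # (a, b) and (b, c) in R must imply (a, c) in R
    return all((a, c) in R
               for a, b, c in product(A, repeat=3)
               if (a, b) in R and (b, c) in R)

A = {1, 2, 3}
print(is_symmetric({(1, 1), (1, 2), (2, 1), (1, 3), (3, 1)}))        # True
print(is_antisymmetric({(1, 1), (2, 1), (2, 3), (3, 1), (3, 3)}))    # True
print(is_asymmetric({(2, 1), (2, 3), (3, 1)}))                       # True
print(is_transitive(A, {(1, 2), (1, 3), (2, 2), (2, 3), (3, 3)}))    # True
```

The transitivity check brute-forces all \(|A|^3\) triples, which is fine for small sets; for large relations one would instead compose the relation with itself.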
The Quest to Decode the Mandelbrot Set, Math’s Famed Fractal | Quanta Magazine

In the mid-1980s, like Walkman cassette players and tie-dyed shirts, the buglike silhouette of the Mandelbrot set was everywhere. Students plastered it to dorm room walls around the world. Mathematicians received hundreds of letters, eager requests for printouts of the set. (In response, some of them produced catalogs, complete with price lists; others compiled its most striking features into books.) More tech-savvy fans could turn to the August 1985 issue of Scientific American. On its cover, the Mandelbrot set unfolded in fiery tendrils, its border aflame; inside were careful programming instructions, detailing how readers might generate the iconic image for themselves. By then, those tendrils had also extended their reach far beyond mathematics, into seemingly unrelated corners of everyday life. Within the next few years, the Mandelbrot set would inspire David Hockney’s newest paintings and several musicians’ newest compositions — fuguelike pieces in the style of Bach. It would appear in the pages of John Updike’s fiction, and guide how the literary critic Hugh Kenner analyzed the poetry of Ezra Pound. It would become the subject of psychedelic hallucinations, and of a popular documentary narrated by the sci-fi great Arthur C. Clarke. The Mandelbrot set is a special shape, with a fractal outline. Use a computer to zoom in on the set’s jagged boundary, and you’ll encounter valleys of seahorses and parades of elephants, spiral galaxies and neuron-like filaments. No matter how deep you explore, you’ll always see near-copies of the original set — an infinite, dizzying cascade of self-similarity. That self-similarity was a core element of James Gleick’s bestselling book Chaos, which cemented the Mandelbrot set’s place in popular culture. “It held a universe of ideas,” Gleick wrote.
“A modern philosophy of art, a justification of the new role of experimentation in mathematics, a way of bringing complex systems before a large public.” The Mandelbrot set had become a symbol. It represented the need for a new mathematical language, a better way to describe the fractal nature of the world around us. It illustrated how profound intricacy can emerge from the simplest of rules — much like life itself. (“It is therefore a real message of hope,” John Hubbard, one of the first mathematicians to study the set, said in a 1989 video, “that possibly biology can really be understood in the same way that these pictures can be understood.”) In the Mandelbrot set, order and chaos lived in harmony; determinism and free will could be reconciled. One mathematician recalled stumbling across the set as a teenager and seeing it as a metaphor for the complicated boundary between truth and falsehood. The Mandelbrot set was everywhere, until it wasn’t. Within a decade, it seemed to disappear. Mathematicians moved on to other subjects, and the public moved on to other symbols. Today, just 40 years after its discovery, the fractal has become a cliché, borderline kitsch. But a handful of mathematicians have refused to let it go. They’ve devoted their lives to uncovering the secrets of the Mandelbrot set. Now, they think they’re finally on the verge of truly understanding it. Their story is one of exploration, of experimentation — and of how technology shapes the very way we think, and the questions we ask about the world.

The Bounty Hunters

In October 2023, 20 mathematicians from around the world congregated in a squat brick building on what was once a Danish military research base. The base, built in the late 1800s in the middle of the woods, was tucked away on a fjord on the northwest coast of Denmark’s most populous island. An old torpedo guarded the entrance.
Black-and-white photos, depicting navy officers in uniform, boats lined up at a dock, and submarine tests in progress, adorned the walls. For three days, as a fierce wind whipped the water outside the windows into frothing whitecaps, the group sat through a series of talks, most of them by two mathematicians from Stony Brook University in New York: Misha Lyubich and Dima Dudko. In the workshop’s audience were some of the Mandelbrot set’s most intrepid explorers. Near the front sat Mitsuhiro Shishikura of Kyoto University, who in the 1990s proved that the set’s boundary is as complicated as it can possibly be. A few seats over was Hiroyuki Inou, who alongside Shishikura developed important techniques for studying a particularly high-profile region of the Mandelbrot set. In the last row was Wolf Jung, the creator of Mandel, mathematicians’ go-to software for interactively investigating the Mandelbrot set. Also present were Arnaud Chéritat of the University of Toulouse, Carsten Petersen of Roskilde University (who organized the workshop), and several others who had made major contributions to mathematicians’ understanding of the Mandelbrot set. Karen Dias for Quanta Magazine And at the whiteboard stood Lyubich, the world’s foremost expert on the topic, and Dudko, one of his closest collaborators. Together with the mathematicians Jeremy Kahn and Alex Kapiamba, they have been working to prove a long-standing conjecture about the geometric structure of the Mandelbrot set. That conjecture, known as MLC, is the final obstacle in the decades-long quest to characterize the fractal, to tame its tangled wilderness. By building and sharpening a powerful set of tools, mathematicians have wrestled control of the geometry of “almost everything in the Mandelbrot set,” said Caroline Davis of Indiana University — except for a few remaining cases. 
“Misha and Dima and Jeremy and Alex are like bounty hunters, trying to track down these last ones.” Lyubich and Dudko were in Denmark to update other mathematicians on recent progress toward proving MLC, and the techniques they’d developed to do so. For the past 20 years, researchers have gathered here for workshops dedicated to unpacking results and methods in the field of complex analysis, the mathematical study of the kinds of numbers and functions used to generate the Mandelbrot set. It was an unusual setup: The mathematicians ate all their meals together, and talked and laughed over beers into the wee hours. When they finally did decide to go to sleep, they retired to bunk beds or cots in small rooms they shared on the second floor of the facility. (Upon our arrival, we were told to grab sheets and pillowcases from a pile and take them upstairs to make our beds.) In some years, conference-goers brave a swim in the frigid water; more often, they wander through the woods. But for the most part, there’s nothing to do except math. Typically, one of the attendees told me, the workshop attracts a lot of younger mathematicians. But that wasn’t the case this time around — perhaps because it was the middle of the semester, or, he speculated, because of how difficult the subject matter was. He confessed that at that moment, he felt a bit intimidated about the prospect of giving a talk in front of so many of the field’s greats. Adam Wasilewski for Quanta Magazine But given that most mathematicians in the broader area of complex analysis are no longer working on the Mandelbrot set directly, why dedicate an entire workshop to MLC? The Mandelbrot set is more than a fractal, and not just in a metaphorical sense. It serves as a sort of master catalog of dynamical systems — of all the different ways a point might move through space according to a simple rule. To understand this master catalog, one must traverse many different mathematical landscapes. 
The Mandelbrot set is deeply related not just to dynamics, but also to number theory, topology, algebraic geometry, group theory and even physics. “It interacts with the rest of math in a beautiful way,” said Sabyasachi Mukherjee of the Tata Institute of Fundamental Research in India. To make progress on MLC, mathematicians have had to develop a sophisticated set of techniques — what Chéritat calls “a powerful philosophy.” These tools have garnered much attention. Today, they constitute a central pillar in the study of dynamical systems more broadly. They’ve turned out to be crucial for solving a host of other problems — problems that have nothing to do with the Mandelbrot set. And they’ve transformed MLC from a niche question into one of the field’s deepest and most important open conjectures. Lyubich, the mathematician arguably most responsible for molding this “philosophy” into its current form, stands tall and straight, and speaks quietly. When other mathematicians at the workshop approach him to discuss a concept or ask a question, he closes his eyes and listens attentively, his thick eyebrows furrowed. He answers carefully, in a Russian accent.

Karen Dias for Quanta Magazine

But he’s also quick to break into loud, warm laughter, and to make wry jokes. He’s generous with his time and advice. He has “really nurtured quite a few generations of mathematicians,” said Mukherjee, one of Lyubich’s former postdocs and a frequent collaborator. As he tells it, anyone interested in the study of complex dynamics spends some time at Stony Brook learning from Lyubich. “Misha has this vision of how we should go about a certain project, or what to look at next,” Mukherjee said. “He has this grand picture in his mind. And he is happy to share that with people.” For the first time, Lyubich feels he’s able to see that grand picture in its totality.

The Prize Fighters

The Mandelbrot set began with a prize.
In 1915, motivated by recent progress in the study of functions, the French Academy of Sciences announced a competition: In three years’ time, it would offer a 3,000-franc grand prize for work on the process of iteration — the very process that would later generate the Mandelbrot set. Iteration is the repeated application of a rule. Plug a number into a function, then use the output as your next input. Keep doing that, and observe what happens over time. As you continue to iterate your function, the numbers you get might rapidly rise toward infinity. Or they might be pulled toward one number in particular, like iron filings moving toward a magnet. Or end up bouncing between the same two numbers, or three, or a thousand, in a stable orbit from which they can never escape. Or hop from one number to another without rhyme or reason, following a chaotic, unpredictable path. Left (Fatou): Collection familiale. Right (Julia): Deutsches Museum, Munich, Archive, PR 01671/01 The French Academy, and mathematicians more broadly, had another reason to be interested in iteration. The process played an important role in the study of dynamical systems — systems like the rotation of planets around the sun or the flow of a turbulent stream, systems that change over time according to some specified set of rules. The prize inspired two mathematicians to develop an entirely new field of study. First was Pierre Fatou, who in another life might have been a navy man (a family tradition), were it not for his poor health. He instead pursued a career in mathematics and astronomy, and by 1915 he’d already proved several major results in analysis. Then there was Gaston Julia, a promising young mathematician born in French-occupied Algeria whose studies were interrupted by World War I and his conscription into the French army. 
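The iteration behaviors described above — attraction to a fixed point, escape to infinity, a stable cycle — can be seen directly by repeatedly applying a function. A minimal sketch (not from the article; the example maps are illustrative choices):

```python
def iterate(f, z0, n):
    """Apply f repeatedly, returning the orbit [z0, f(z0), f(f(z0)), ...]."""
    orbit = [z0]
    for _ in range(n):
        orbit.append(f(orbit[-1]))
    return orbit

# pulled toward a fixed point: x -> x/2 drives every start toward 0
print(iterate(lambda x: x / 2, 1.0, 5))   # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]

# escape to infinity: x -> x^2 with |x| > 1 grows without bound
print(iterate(lambda x: x * x, 2.0, 4))   # [2.0, 4.0, 16.0, 256.0, 65536.0]

# a stable 2-cycle: x -> -x bounces between 1 and -1 forever
print(iterate(lambda x: -x, 1.0, 4))      # [1.0, -1.0, 1.0, -1.0, 1.0]
```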
At the age of 22, after suffering a severe injury shortly after beginning his service — he would wear a leather strap across his face for the rest of his life, after doctors were unable to repair the damage — he returned to mathematics, doing some of the work he would submit for the Academy prize from a hospital bed. The prize motivated both Fatou and Julia to study what happens when you iterate functions. They worked independently, but ended up making very similar discoveries. There was so much overlap in their results that even now, it’s not always clear how to assign credit. (Julia was more outgoing, and therefore received more attention. He ended up winning the prize; Fatou didn’t even apply.) Due to this work, the two are now considered the founders of the field of complex dynamics. “Complex,” because Fatou and Julia iterated functions of complex numbers — numbers that combine a familiar real number with a so-called imaginary number (a multiple of i, the symbol mathematicians use to denote the square root of −1). While real numbers can be laid out as points on a line, complex numbers are visualized as points on a plane, like so:

Merrill Sherman/Quanta Magazine

Fatou and Julia found that iterating even simple complex functions (not a paradox in the realm of mathematics!) could lead to rich and complicated behavior, depending on your starting point. They began to document these behaviors, and to represent them geometrically. But then their work faded into obscurity for half a century. “People didn’t even know what to look for. They were limited on what questions to even ask,” said Artur Avila, a professor at the University of Zurich. This changed when computer graphics came of age in the 1970s. By then, the mathematician Benoît Mandelbrot had gained a reputation as an academic dilettante. He’d dabbled in many different fields, from economics to astronomy, all while working at IBM’s research center north of New York City.
When he was appointed an IBM fellow in 1974, he had even more freedom to pursue independent projects. He decided to apply the center’s considerable computing power to bringing complex dynamics out of hibernation. At first, Mandelbrot used the computers to generate the kinds of shapes that Fatou and Julia had studied. The images encoded information about when a starting point, when iterated, would escape to infinity, and when it would become trapped in some other pattern. Fatou and Julia’s drawings from 60 years earlier had looked like clusters of circles and triangles — but the computer-generated images that Mandelbrot made looked like dragons and butterflies, rabbits and cathedrals and heads of cauliflower, sometimes even disconnected clouds of dust. By then, Mandelbrot had already coined the word “fractal” for shapes that looked similar at different scales; the word evoked the notion of a new kind of geometry — something fragmented, fractional or broken. The images appearing on his computer screen — today known as Julia sets — were some of the most beautiful and complicated examples of fractals that Mandelbrot had ever seen.

Merrill Sherman/Quanta Magazine

Fatou and Julia’s work had focused on the geometry and dynamics of each of these sets (and their corresponding functions) individually. But computers gave Mandelbrot a way to think about an entire family of functions at once. He could encode all of them in the image that would come to bear his name, though it remains a matter of debate whether he was actually the first to discover it. The Mandelbrot set deals with the simplest equations that still do something interesting when iterated. These are quadratic functions of the form f(z) = z^2 + c. Fix a value of c — it can be any complex number. If you iterate the equation starting with z = 0 and find that the numbers that you generate remain small (or bounded, as mathematicians say), then c is in the Mandelbrot set.
If, on the other hand, you iterate and find that eventually your numbers start growing toward infinity, then c is not in the Mandelbrot set. It’s straightforward to show that values of c close to zero are in the set. And it’s similarly straightforward to show that big values of c aren’t. But complex numbers live up to their name: The set’s boundary is magnificently intricate. There is no obvious reason that changing c by tiny amounts should cause you to keep crossing the boundary, but as you zoom in on it, endless amounts of detail appear. What’s more, the Mandelbrot set acts like a map of Julia sets, as can be seen in the interactive figure below. Choose a value of c in the Mandelbrot set. The corresponding Julia set will be connected. But if you leave the Mandelbrot set, then the corresponding Julia set will be disconnected dust. The first published picture of the set, a rough plot of just a couple hundred asterisks, appeared in 1978 in a paper by the mathematicians Robert Brooks and J. Peter Matelski, who were studying a seemingly unrelated question in group theory and hyperbolic geometry. It was Mandelbrot who recognized and popularized the set. After using IBM’s computers to graph hundreds of Julia sets, he sought to represent them all simultaneously. In 1980, armed with much more sophisticated computing power than Brooks and Matelski, he ended up generating a far better version of the Mandelbrot set (though still crude by today’s standards). He immediately fell in love and decided to make the fractal as public an image as possible. It’s for this reason that the set was named after him. (Mandelbrot himself was unpopular among mathematicians, because of his habit of jumping from one subject to another without proving deep results, and because he was often strident in his quest to take credit for discoveries like the Mandelbrot set.) The computer images immediately captured the attention of some of math’s deepest thinkers. 
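The membership rule just described is easy to turn into an escape-time test. This is a sketch, not the article's code; the iteration cap is an arbitrary cutoff (a point still bounded after it is only *likely* in the set), while the escape radius of 2 is a standard choice, since any orbit that leaves the disk of radius 2 provably diverges:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0; treat |z| > 2 as escape to infinity."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # orbit has provably escaped
            return False
    return True          # still bounded after max_iter steps: likely in the set

print(in_mandelbrot(0j))        # True  — small values of c stay bounded
print(in_mandelbrot(-1 + 0j))   # True  — the orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1 + 0j))    # False — 0, 1, 2, 5, 26, ... escapes
```

Sampling this test over a grid of c values in the complex plane is exactly how the images described in the article are generated.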
“Everybody became very interested, once we could actually see what was going on,” said Kapiamba, who is currently a postdoc at Brown University.

Merrill Sherman/Quanta Magazine

No one had anticipated how rich the world of quadratic equations could be. “It’s like when you open a geode, a simple-looking stone, and inside you find all these crystals — this amazing complex structure,” said Anna Benini of the University of Parma in Italy. “Mathematicians saw things that they didn’t imagine before,” Avila said. “We all nowadays owe a lot to those explorations.” Within just a couple of years, Hubbard and the mathematician Adrien Douady had proved a huge number of results about both the Mandelbrot set and the Julia sets it represented. But their proofs were handwritten, “mainly understandable only to Douady and me,” Hubbard wrote. And so in 1983, Douady wrote and delivered a series of lectures to explain those early results. Afterward, he compiled the material from his lectures into a single document, dubbed the Orsay notes. Nearly 200 pages long, it quickly became the field’s bible. In the Orsay notes, Douady and Hubbard proved several major theorems that were motivated by the computer images they’d seen. They showed that the Mandelbrot set was connected — that you can draw a line from any point in the set to any other without lifting your pencil. Mandelbrot had initially suspected the opposite: His first images of the set looked like one big island with lots of little ones floating in a sea around it. But later, after seeing higher-resolution pictures — including ones that used color to illustrate how quickly equations outside the set flew off to infinity — Mandelbrot changed his guess. It became clear that those little islands were all connected by very thin tendrils. The introduction of color “is a very mundane thing, but it’s important,” said Søren Eilers of the University of Copenhagen.
Douady’s interest in the Mandelbrot set was contagious. He would host elaborate meals, parties and concerts at his apartment, and was known to walk barefoot through the corridors of the universities he taught at in France — and to sing, loudly, in public. (He was often mistaken for a busker.) In his later years, he never read math papers; he instead invited their authors to visit and explain the work to him directly. “I would compare him with painters of the Renaissance who had a school of disciples around them,” said Xavier Buff, a mathematician at the University of Toulouse and one of Douady’s former doctoral students. “It was very exciting.” A key part of the Orsay notes was a humble statement that would soon become the most important question about the Mandelbrot set: the MLC conjecture. MLC posits that the Mandelbrot set isn’t just connected; it’s locally connected — no matter how much you zoom in on the Mandelbrot set, it will always look like one connected piece. For instance, a circle is locally connected. An extremely fine-toothed comb, on the other hand, is not. Though the entire shape is connected, if you skip over the shaft and instead zoom in on the tips of some of its teeth, you’ll just see a bunch of separate line segments.

Merrill Sherman/Quanta Magazine

Despite being a straightforward statement about the Mandelbrot set’s geometry, MLC quickly gained a reputation for being incredibly hard. Many mathematicians were hesitant to work on it. It seemed so technical and time-consuming — a risky problem to set one’s sights on. More than one mathematician ended up leaving mathematics because of it. Avila actively steers his students away from MLC and related areas of research until they have time to learn all the mathematics required to make headway. “I quote The Lion King and say, ‘Look, there is the whole of dynamics. All you can see is your domain.
But there’s that dark corner that you should not explore … because if you do explore this part, you get trapped and never get out,’” he said. “There’s so much you need to learn to get into it.” But some mathematicians couldn’t resist.

Only Connect

Misha Lyubich grew up in the 1960s in Kharkiv, the second-largest city in Ukraine. Stalin was dead; Nikita Khrushchev briefly held power, but was soon replaced by Leonid Brezhnev. The Soviet economy flourished, only to stagnate as the decade wore on. Tensions with the West were at an all-time high. Lyubich’s father was a professor of mathematics at Kharkiv University, his mother a programmer; he remembers other mathematicians coming to his home when he was young, where math was always in the air, a frequent topic of conversation. “Life all around me was mathematics,” he said. As a Jew in the Soviet Union — where “there were state policies which tried to eliminate Jews from being actively involved in various fields,” Lyubich said — he had trouble getting into top universities. He applied to Moscow State University but was rejected. Despite being a top student and one of the highest-ranking participants in the Soviet Union’s prestigious Math Olympiad competitions, he was told he hadn’t passed his oral exam. The examiners refused to tell him where he’d gone wrong.

George M. Bergman (Douady)/Archives of the Mathematisches Forschungsinstitut Oberwolfach

He ended up attending Kharkiv University, one of the top undergraduate institutions that accepted Jewish students on merit. His father taught subjects that students would typically only be able to find at Moscow universities. (Moscow was the center of mathematical progress in the Soviet Union.) “It was a unique opportunity that my father was providing at that time … to get a broader vision of mathematics,” Lyubich said.
In particular, his father encouraged him to start thinking about problems in complex dynamics — a field that wasn’t getting attention in the Soviet Union at all. “At that time, we didn’t see anybody working in this area,” Lyubich said. He quickly got hooked: It was in those university years that he started to think about math “essentially nonstop.” Though he graduated second in his class, he struggled to get into graduate programs. He ended up more than 2,000 miles away at Tashkent State University in Uzbekistan, where his father had colleagues. He continued to study complex dynamics, isolated from and unaware of the work Douady and Hubbard were doing in France. “I was kind of alone,” he said. “It was quite lonely.” University students were required to do agricultural labor during the autumn months. “The universities essentially emptied in October and November,” Lyubich said. And so he found himself picking cotton — Uzbekistan was the Soviet Union’s main cotton supplier at the time — in the fields outside Tashkent. From sunrise to sunset, in 90-degree heat, he bent over the plants, which stood only a couple feet high. He considered himself fortunate, though. Undergraduates had to meet a quota — high enough that “it required skill,” he said, and turned into back-breaking work that “would not have been possible for me to do.” Graduate students didn’t have to. And so, “I was just walking around the cotton fields thinking about mathematics,” Lyubich said. In particular, he started to think about the parameter space of complex quadratic equations. Even though the first computer images had already emerged in the West, Lyubich had no access to them. Instead, the basic features of the Mandelbrot set took shape in his mind — the fractal’s central heart-shaped region, called the main cardioid, and aspects of the set’s backbone, which bisects the shape horizontally along the x-axis. “I just built up a picture in my mind and tried to understand it,” he said. 
“I had no idea how deep the questions hidden inside of this picture were.” In March 1982 — while Lyubich was still a graduate student — John Milnor, one of the most distinguished American mathematicians of his generation (then a professor at the Institute for Advanced Study), visited Moscow to give a talk. Because the university was flexible about where Lyubich spent his time, so long as he completed his exams and dissertation (as well as his cotton-picking duties), he often went to Moscow to attend seminars and meet with mathematicians who worked there. It just so happened that he was there when Milnor visited. After Milnor finished his talk, he and Lyubich spoke for a bit.

Courtesy of Misha Lyubich

Due to the language barrier, they either wrote things down or had one of Lyubich’s colleagues help translate. It became clear to Lyubich that related work was happening on the other side of the Iron Curtain. “It was my first contact with Western mathematics in this direction,” he said. After returning home, Milnor spread the word about some of Lyubich’s research. “The communication was very poor, but it was my good luck that I met Milnor,” Lyubich said. And so later, Douady sent Lyubich a copy of the Orsay notes, where Lyubich first learned about the MLC problem. Lyubich wouldn’t truly start thinking about MLC for a few more years, though. He was working on other problems, and after completing his doctorate in 1984, he and his wife, also a mathematician, moved to Leningrad (now St. Petersburg), where he was once again barred from academic jobs because he was Jewish. Over the next five years, he worked instead as a high school teacher, as a programmer at what he called a “quasi-research institute” (focused on medical technologies), and finally as a modeler at a scientific institute that did comprehensive studies of the Arctic and Antarctic.
With each new job, he inched closer and closer to being able to focus on his mathematical interests in dynamical systems. Throughout those years, he kept working on his math problems. He attended seminars, met with other mathematicians, and continued to produce results. “I never stopped,” Lyubich said. “You see, if you stop, it is very difficult to recover. You should not stop.” It was draining. Lyubich recalls feeling particularly exhausted after teaching high schoolers all day, only to then force himself to spend the rest of the evening working on math. “I was frustrated that I could not dedicate myself fully to mathematics, which is what I wanted to do,” he said. But “I sort of decided for myself that I would do mathematics, no matter what.” “I was lucky that perestroika came and I was allowed to leave,” he added. “I don’t know for how long I would be able to keep this going.” In 1989, he and his wife obtained a visa that allowed them to leave the Soviet Union as refugees. With just a few hundred dollars in their pockets, they made their way first to Vienna, then to Italy, where they applied to move to the United States. After spending a few months at a refugee camp in Italy, waiting for their paperwork to be processed — during this time, Lyubich made extra income by giving guest lectures at local universities — he and his wife finally arrived in New York. There, Lyubich had a job waiting for him: Milnor (with whom Lyubich had kept in touch) had invited him to work at the new Institute for Mathematical Sciences he was starting at Stony Brook University. While in Italy, Lyubich gained access to email for the first time — and it was there that he received an email from Douady. (Douady was an early advocate of using email for mathematical discussions and collaborations. “He worked a lot exchanging ideas with faraway collaborators, which was something new in the ’80s,” said Pierre Lavaurs, one of his former graduate students.) 
The email informed Lyubich and other mathematicians in the field that Jean-Christophe Yoccoz had proved local connectivity at almost all points in the Mandelbrot set: MLC was true for values of c that did not reside inside an infinite nest of smaller self-similar copies of the full set. (Yoccoz would later be awarded the Fields Medal, considered math’s highest honor, in part for this work.)

Gerd Fischer/Archive of the Mathematisches Forschungsinstitut Oberwolfach

In the email, Douady went on to say that the full solution to MLC was just around the corner. He wasn’t the only one who felt optimistic. “There were people who thought they could deal with the local connectivity of the Mandelbrot set in just a few years,” said Davoud Cheraghi of Imperial College London. Instead, decades of work remained. MLC turned out to be a very subtle, almost impossibly difficult problem that only a handful of mathematicians were able to keep working on. It would require tools from all over math, and the development of a new theory that would forever change the field of complex dynamics. Leading the way, armed with the persistence that had been a part of his mathematical journey all along, was Lyubich.

A City Within a City

We tend to think of math as the purest of the sciences — when we think of it as a science at all. The subject has a reputation for being abstract, detached, driven by beauty and logic. It doesn’t get its hands dirty or concern itself with anything as concrete as “applications.” (It’s even in the name: We distinguish “pure math” from “applied math.”) The way math papers are written doesn’t help: Only the final proofs and theorems are usually published, not the meandering process that led to them. But this is a modern conception of mathematics, one that only started to solidify in the late 19th century.
It’s a conception that grew as mathematicians sought to make their definitions more rigorous, and as writing formal proofs became the only way for them to get jobs and build careers. It was further bolstered in the 1930s, when a powerful, secretive group of mathematicians began to publish joint work under the pseudonym Nicolas Bourbaki. Their ethos came to dominate mathematical thinking, intent on stripping the discipline to its foundations and making it as formal as possible. Yet long before this, mathematicians — just like physicists or biologists or chemists — relied on experimentation to discover and prove new phenomena. They made guesses, discarded hypotheses, looked for patterns by trial and error. They performed computations, made observations, gathered data. They took note of similarities, of certain numbers or sequences arising in unexpected places. The giants of 18th- and 19th-century math — Euler, Gauss, Riemann — were all experimentalists who relied on massive amounts of computation, done laboriously by hand. Gauss conjectured the prime number theorem (a crucial formula that describes how the primes are distributed among the integers) a century before it was actually proved. That’s because, as a teenager, he pored over tables of primes and decided to count how many of them there were in blocks of a thousand numbers, all the way up to a million. (No doubt Gauss would have been thankful for today’s computers.) Similarly, Riemann posed his eponymous hypothesis, the biggest open problem in mathematics, only after doing pages of calculations. Those pages weren’t discovered for decades; until then, many mathematicians heralded the Riemann hypothesis as an example of what could be achieved by “pure thought alone.” There’s no such thing. All thinking, mathematical or otherwise, is influenced by the world around us, by the technologies and philosophical movements and aesthetics of our time. 
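Gauss’s teenage experiment is easy to rerun today. The sketch below is a modern reconstruction, not Gauss’s actual tables: the cutoffs and the comparison against the n/ln(n) estimate from the prime number theorem are my own illustrative choices. It sieves out every prime below one million and tallies the running count:

```python
import math

N = 1_000_000
sieve = bytearray([1]) * (N + 1)   # sieve[n] == 1 means "n is prime" (so far)
sieve[0] = sieve[1] = 0
for p in range(2, math.isqrt(N) + 1):
    if sieve[p]:
        # Cross out every multiple of p starting at p*p.
        sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))

for bound in (10**3, 10**4, 10**5, 10**6):
    count = sum(sieve[: bound + 1])
    print(f"pi({bound}) = {count:6d}   n/ln(n) ~ {bound / math.log(bound):9.1f}")
```

At n = 10⁶ the sieve finds 78,498 primes, while n/ln(n) gives roughly 72,382; staring at exactly this kind of gap is what pushed Gauss toward better logarithmic estimates.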
In this regard, Bourbaki’s philosophy — its requirement for total rigor, and its emphasis on general statements over concrete examples — represented a detour of sorts. Mathematicians’ perspective on Bourbaki is divided. Some claim it gave certain fields a much-needed push toward rigor. Others say it was confining, closed-minded, cutting math off from other sources of inspiration. Since the 1970s, the pendulum has begun to swing back, pushed by modern computers, which have offered mathematicians entirely new ways to experiment and play. “I think people generally agree that the Bourbaki thing was sort of a mistake,” Eilers said. “This very abstract view, this is not so human-friendly … this is just not how the field should evolve.” In the experimental spirit of Gauss and Riemann, mathematicians posed one of today’s most famous open problems — the Birch and Swinnerton-Dyer conjecture, a question about elliptic curves that, if solved, comes with a $1 million reward — only after using a computer to generate mountains of data. Many other problems have arisen in similar ways. “This is how the sausage is made,” said Roland Roeder of Indiana University–Purdue University Indianapolis. “It’s not as advertised as it should be.” Mathematicians have used computers to look for counterexamples to both established conjectures and nascent hypotheses. They’ve used them to find, and fix, mistakes in old proofs. They’ve turned to them to forge new connections between disparate fields. And in many areas, mathematicians have come to rely on computers to make key calculations and perform other steps in the mathematical argument. In the case of the Mandelbrot set, computers helped to jump-start an entire field. To hear mathematicians tell it, computers have allowed them to treat the Mandelbrot set like a city — a physical space to explore.
They’ve spent hours, days, years strolling its neighborhoods and streets, getting lost, familiarizing themselves with the terrain. “You start to understand more and more and more, and every time you come back, it’s like coming back home,” said Luna Lomonaco of the National Institute for Pure and Applied Mathematics in Brazil. “It really becomes part of you.” This familiarity is clear whenever you speak to mathematicians in the field. They navigate different computer programs with ease, zooming in to specific spots to show different properties. Dudko describes these images as “like a language in complex dynamics.” Buff can predict exactly where he thinks a small copy of the set will pop up before it becomes visible, just based on how certain branches and tendrils look. Chéritat was once asked to reproduce a decades-old poster of a region deep within the Mandelbrot set, without any additional information — and he did it. Douady could apparently look at a Julia set and know which value of c in the Mandelbrot set it came from. Hubbard still refers to Julia sets as “old friends.” “Studying the Mandelbrot set really feels like an experimental field of math. It almost feels like an applied field of math, as opposed to a pure field of math,” Kapiamba said. “You’re just taking something that is out there, and then trying to dissect and analyze it in a way that to me feels like you have some natural phenomenon that you’re trying to uncover.” “It’s not something you create. It’s something which is there, and that you explore,” Buff added. “It’s clearly there on my computer. I visit the Mandelbrot set. And maybe there are some places in the Mandelbrot set that I have not discovered yet.” This area of study is riddled with such discoveries. There was the discovery of smaller copies of the set within itself, and of specific patterns in the way its antennae, hairs and other decorations appear.
There was the discovery of the Fibonacci sequence, encoded in the set — as well as an approximation of π. And there was the discovery of Mandelbrot sets in other contexts entirely, as in the search for numerical solutions to cubic equations. “Computers show us stuff that’s tantalizing, that’s crying out for someone to come and explain it,” said Kevin Pilgrim of Indiana University Bloomington. Which in turn motivates the right questions, if not the answers. When computers revealed all those smaller copies of the Mandelbrot set within itself, Douady and Hubbard wanted to explain their presence. They ended up turning to what’s known as renormalization theory, a technique that physicists use to tame infinities in the study of quantum field theories, and to connect different scales in the study of phase transitions. It had previously held little interest for mathematicians; by their standards, it wasn’t even rigorous. But in the 1970s, the physicist Mitchell Feigenbaum brought renormalization theory into the world of dynamics, using it as a way to explain a particular self-similar pattern that emerges when you iterate quadratic equations using real numbers. Douady and Hubbard realized that renormalization was precisely what they needed to explain the more complicated self-similar patterns they were seeing on their computer screens. And so they figured out how to apply renormalization theory to complex dynamics. Since then, work on MLC by Lyubich and his colleagues has pushed that theory further than anyone thought possible.

A Name for Every Dot

Once Lyubich arrived in New York in February 1990, months after he’d left Moscow, he had the chance to learn more about the work that Douady had written so excitedly about in his email. At first, it wasn’t the MLC result that fascinated Lyubich, but rather the techniques Yoccoz had developed to prove it. “Somehow, it clicked very well with me,” he said.
He had been interested in real dynamics, and in answering questions that had arisen based on Feigenbaum’s work on renormalization. For most of the 1990s, Lyubich focused on developing Yoccoz’s methods further, to address those open problems. By the end of the decade, he felt that he’d “essentially gotten the full description of dynamics on the real line, using this machinery,” he said. As a natural consequence of this work, Lyubich ended up proving MLC for many, though not all, of the cases that Yoccoz’s result had not covered. That wouldn’t have come as a surprise. Yoccoz’s proof showed MLC for all points on the Mandelbrot set except those known as “infinitely renormalizable” parameters — points that lived inside infinitely nested baby Mandelbrot copies. His result instantly turned MLC into a problem that was intimately connected to renormalization theory. That link was exciting. On the surface, MLC seemed to belong to an entirely different corner of the field. “Renormalization theory had developed completely independently,” Lyubich said. “And then everything became part of the same story.” And so Lyubich also grew interested in addressing the MLC problem. Even before renormalization entered the fray, there were already signs that MLC was a question with deeper resonances. In the Orsay notes, Douady and Hubbard showed that if MLC is true, then it also has implications for properties of the interior of the Mandelbrot set. Not every point inside the set behaves the same way. Points in the main cardioid correspond to functions that, when iterated from a starting value of zero, converge to a single number. Points in other lobes correspond to functions that end up oscillating between a particular number of different values. The largest lobe on top of the main cardioid, for example, represents functions that oscillate between three values. 
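Those interior behaviors can be checked directly. In the sketch below, the sample parameters are my own illustrative picks, not values from the article: c = 0 sits in the main cardioid, c = −1 in the large disk to its left, and c = −0.1226 + 0.7449i in the period-3 lobe on top, so after a long transient its orbit cycles through three values.

```python
def orbit_tail(c, burn_in=1000, tail=6):
    """Iterate z -> z*z + c from z = 0, discard a long transient,
    then return the next few orbit values."""
    z = 0j
    for _ in range(burn_in):
        z = z * z + c
    values = []
    for _ in range(tail):
        values.append(z)
        z = z * z + c
    return values

for c in (0j,                         # main cardioid: orbit settles on one value
          -1 + 0j,                    # period-2 disk: orbit flips between two values
          complex(-0.1226, 0.7449)):  # period-3 lobe: orbit cycles through three
    print(c, [format(z, ".3f") for z in orbit_tail(c)])
```

The printed tails make the article’s claim concrete: one repeated value, then a two-cycle, then a three-cycle.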
For carefully chosen points, however, a function might produce sequences that remain bounded but never oscillate — they keep jumping between new, distinct values. But if MLC is true, Douady and Hubbard showed that such non-oscillating sequences must be rare — a property called “density of hyperbolicity” that mathematicians want to prove or disprove for any dynamical system they happen to be studying. “It’s basically the most important question in dynamics, not just complex dynamics,” Lomonaco said. Density of hyperbolicity deals with the Mandelbrot set’s interior. But MLC would also enable mathematicians to assign an address to every point on the set’s boundary. “It gives a name to every dot. And then, once you have been able to name every dot of the boundary of the Mandelbrot set, you can hope to really understand it completely,” Hubbard said. In this way, MLC tells mathematicians that the picture they have of the set isn’t missing anything. But without a proof, there could still be some regions, tucked away in the deepest corners of this infinitely complex landscape, that have not yet appeared on computer screens — that behave in some fundamentally different way. It would mean that mathematicians are still missing part of the story.

Think Deeply About Simple Things

Jeremy Kahn grew up in New York City in the 1970s, the son of a social worker and a science writer. As a child, he quickly proved to be something of a math prodigy. He skipped years ahead in the subject. In sixth grade he scored a 790 on the math section of the SAT. And he wrote his own computer programs to explore various mathematical concepts in greater depth. When he was 13 years old, he became the youngest person (at the time) to win a spot on the U.S.’s International Mathematical Olympiad team. He participated in the competition throughout high school, winning two silver medals and two gold.
During this time, he also started taking math courses at Columbia University, and he re-proved several theorems (without knowing they’d been proved) on a blackboard he kept in his bedroom. After he graduated from high school, he went to Harvard University to major in math. There he became captivated by the Mandelbrot set. By his senior year, he was devoting all his energy to understanding it. Since no one at Harvard was working on it at the time, he would bike over to Boston University to learn from a mathematician there about fractals and dynamical systems. After he graduated and enrolled in a doctoral program at the University of California, Berkeley, he focused on hyperbolic geometry — a field that mathematicians had previously connected to complex dynamics, back when the Mandelbrot set was first becoming popular. Kahn wanted to strengthen that connection. As a graduate student, he re-proved Yoccoz’s famous MLC result, building on seminal work done by the mathematicians Dennis Sullivan and Curt McMullen. He also began to think about how to apply ideas from hyperbolic geometry to renormalization. Kahn’s classmate Kevin Pilgrim remembers seeing him fill massive sheets of paper with drawings of curves and annuli, of geometric objects that degenerated and grew distorted. “He started to think very, very deeply about these things,” Pilgrim said. “And when I say ‘deeply,’ I mean for 15 years.” “Jeremy’s tenacity for thinking really hard about something is pretty amazing,” he added. Kahn thought particularly hard about renormalization. He studied Lyubich’s work, and Douady and Hubbard’s. In all these contexts, renormalization is a way to relate different scales of a dynamical system to one another. Consider the dynamics of one quadratic equation. Points will bounce around the complex plane in certain ways. Renormalization allows you to describe the dynamics of all those points by focusing on just a small subset of them. 
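Renormalization itself is too involved for a short sketch, but the raw dynamics it organizes are easy to sample. The fragment below uses my own illustrative choices (the map z → z² − 1 and a handful of starting points, none taken from the article) simply to show the “points bounce around the complex plane” picture: some orbits escape to infinity, others stay bounded forever.

```python
def orbit_fate(z, c, limit=200):
    """Follow z -> z*z + c; report 'escapes' once |z| > 2, else 'bounded'."""
    for _ in range(limit):
        if abs(z) > 2:
            return "escapes"
        z = z * z + c
    return "bounded"

c = -1 + 0j  # this map has an attracting 2-cycle: 0 -> -1 -> 0 -> ...
for z0 in (0j, 0.5 + 0j, 0.2 + 0.2j, 1j, 2 + 0j):
    print(z0, orbit_fate(z0, c))
```

The boundary between the two fates is the Julia set of the map; renormalization is a tool for relating such pictures across scales, which this sketch does not attempt.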
“Renormalization acts like a super powerful microscope that allows you to understand structures which lie at the deepest level,” said Romain Dujardin of Sorbonne University in France. The extent to which you can do this depends on the equation you’re iterating. Sometimes you simply cannot describe its dynamics in terms of a smaller part of the system. Or you might be able to use the microscope of renormalization to magnify things once, or twice, or 10 times, before reaching a point where you can no longer say anything meaningful about the smaller scales. But for the functions associated with infinitely renormalizable parameters, it’s possible to keep applying renormalization forever. It’s a delicate procedure. “It cannot be done in a random way,” Lyubich said. You have to rigorously show that you can move from one scale to another without losing too much precision. The first step toward doing that involves gaining a rough sort of control over the geometry of the different scales. It’s this step that can then be used to show MLC for a given value of c in the Mandelbrot set. As a graduate student, Kahn was already thinking about how to apply his knowledge of hyperbolic geometry to the problem. His research garnered attention, and in his third year of graduate school, he accepted a tenure-track job at the California Institute of Technology. Everything seemed to be lining up perfectly. And then he froze. At Caltech, he couldn’t write. He had results from his time in graduate school — but every time he sat down at a computer, he would lose any willpower he had. “I wasn’t good at writing,” he said. “I wasn’t good even at sitting down to write. So I wasn’t getting the stuff written up.” He couldn’t focus his mathematical attention either. “I would sometimes lose myself in the extremes of wanting to prove truly great theorems, like MLC, or P versus NP. And then I’d come back to reality,” he said. 
“I was lost, and unhappy.” In four years at Caltech, Kahn didn’t write a single paper. He lost his job. And so, in the fall of 1998, at just under 30 years old, his once-promising career in tatters, “I kind of wandered back home” to New York, Kahn said. He called Milnor, asking for advice. Milnor put him back in touch with Lyubich, whom Kahn had met a few times in graduate school. And so, “I just showed up at Stony Brook,” Kahn said. “Misha was incredibly welcoming.” The two would discuss math for hours. Kahn recalls going to Lyubich’s house all the time, eating dinner with his family — by then, Lyubich and his wife had a daughter; they would later have a second — and soon becoming friends. “He really took me in,” Kahn said. “He was this world-famous mathematician, and he treated me as an equal, not some lost child.” “He became practically a second father to me,” he added. Lyubich found a temporary position for Kahn at Stony Brook, without teaching duties. From the late 1990s into the mid-2000s, Lyubich helped the younger mathematician out. When Lyubich spent a year working at the University of Toronto, he found a place for Kahn; when he returned to Stony Brook, he did the same. When Kahn left academia to work at a hedge fund for a year, only to decide that it wasn’t for him, Lyubich helped him out once again. When Kahn’s father was diagnosed with cancer and later died, Kahn wasn’t able to work. But he eventually made his way back to Lyubich, and Lyubich welcomed him. To hear Lyubich tell it, he recognized that Kahn had very interesting, sometimes brilliant ideas. “He just had this psychological block he needed to overcome,” Lyubich said. “So I kept supporting him as much as possible.” Although Kahn still often felt lost during these years, he and Lyubich developed what Kahn called “quite an intense collaboration.” It kept him grounded.
The two mathematicians unified their approaches to renormalization, which also allowed them to prove MLC for many more parameters. “The sort of collapse of my career gave the opportunity for me to just follow Misha around” and get this work done, Kahn said. “It was putting off a lot of elements of living, not deliberately, but in effect for the sake of proving these theorems.” Kahn and Lyubich’s work marked a massive breakthrough in renormalization theory, and in MLC. But “the Mandelbrot set is tremendously devious,” Lyubich said, because it is not exactly self-similar, and it exhibits different kinds of self-similarity. As Avila put it, “it has different personalities as you move inside it.” These different kinds of self-similarity correspond to very different dynamics and therefore require different types of renormalization to relate one scale to another. Kahn and Lyubich had developed one type, but they’d pushed their techniques as far as they could. “They hit a wall, and they knew that they’d hit a wall,” Mukherjee said. To prove MLC for other parts of the Mandelbrot set, they would have to get a similar kind of geometric control, but using some other type — or types — of renormalization. And Kahn and Lyubich disagreed on how best to proceed. Progress stalled. They each started to work on other problems. Kahn turned back to hyperbolic geometry. Lyubich thought about ways he could apply the MLC work to other parts of complex dynamics (and even to questions in physics). “This is why, in a way, you’re never really stuck,” said Lyubich, who in 2004 became the director of Stony Brook’s Institute for Mathematical Sciences. “If tomorrow someone will find a one-line proof of MLC in all cases, would it annihilate everything we have done before? No. There are so many problems that rely on this technique.” That’s part of the reason he never felt frustrated when things didn’t seem to be progressing quite so smoothly on the MLC front.
“Every step in MLC is an opening to many other problems,” he said. Meanwhile, Kahn made significant advances in hyperbolic geometry. Tenure offers began to come in. Hoping to make a fresh start, he moved to Providence, Rhode Island, in 2011 to take up a professorship at Brown University. Neither Lyubich nor Kahn stopped thinking about MLC, but they drifted apart, busy with their own responsibilities. Other mathematicians working in complex dynamics started to move in different directions — focusing on parameter spaces even more complicated than the Mandelbrot set, and on the connection between complex dynamics and number theory. But in recent years, Lyubich and Kahn have each taken on apprentices and renewed their efforts to prove MLC.

Squaring Up

About a decade ago, Lyubich began working with Dima Dudko. Dudko grew up in the 1980s in Belarus, where his mathematical prowess quickly became obvious to those around him. (He represented Belarus in the International Math Olympiad 15 years after Kahn aged out. Like Kahn, he won a gold medal.) Later, when he was a graduate student in Germany, his adviser consulted Lyubich about what problem Dudko should work on for his dissertation. They decided on a question about the Mandelbrot set that they didn’t expect Dudko to be able to answer. The statement would follow automatically from MLC; they figured that, without MLC to help him, he’d be able to make partial progress on it at best. Dudko found a way around MLC and solved the problem completely. After finishing his graduate program in 2012, he continued to work in Germany as a postdoc — but also started collaborating with Lyubich. With a third mathematician, Nikita Selinger of the University of Alabama, Birmingham, they developed a new renormalization theory.
Lyubich and Dudko then used it to show that MLC holds for some of the most difficult infinitely renormalizable parameters in the Mandelbrot set — precisely the ones that Lyubich and Kahn’s methods couldn’t be applied to. (Lyubich’s former student Davoud Cheraghi and Mitsuhiro Shishikura of Kyoto University have also been developing techniques to address some of these outstanding cases.) “This case is so different that it took another couple decades,” Lyubich said. It also took some original thought. Dudko, who led the recent MLC seminar with Lyubich in Denmark, is seen as a star in the area, and he has an intriguing way of looking at things. This is perhaps best exemplified by how he sometimes sketches the Mandelbrot set as a bunch of squares, rather than the circles that most mathematicians tend to draw. “It’s taken me by surprise that it’s possible to solve these problems,” Lyubich said. “What we have been doing recently, it goes beyond anything I had done before.” In an effort to assemble all of these results in one place, Lyubich has been writing a series of textbooks about the Mandelbrot set, MLC and related work in complex dynamics. So far, he’s produced over 700 pages, split into two volumes out of a planned four. “Hopefully, when I finish with volume 4, MLC will be there,” he said. Like Lyubich, Kahn has found a younger protégé. The idea of recruiting Alex Kapiamba first came to Kahn in a dream. He was at a conference in 2022. For several months, he, Lyubich and Dudko had been meeting regularly to discuss progress on MLC — something that was immediately reflected in the dream, where the three of them were on a bus. “And then I see this fourth person get onto the bus, and that’s the whole dream, essentially,” Kahn said. “And then I wake up, and I’m like, Alex Kapiamba is this fourth person.” The next day, he arranged to meet with Kapiamba to discuss his research. Kapiamba now works with Kahn as a postdoc at Brown, and will move to Harvard in the fall. 
When I met Kapiamba last year, his arm was in a sling; he’d dislocated his shoulder a few days earlier playing ultimate Frisbee. (He played semiprofessionally for the Detroit Mechanix while in graduate school, and continues to play in a club league.) He was modest about how much he thought he’d be able to contribute to the MLC effort. “It’s sort of a little scary,” he said. “I definitely feel some imposter syndrome.” “I just want to get in and do a little bit before it’s too late,” he added. Kapiamba hadn’t set out to study mathematics. As an undergraduate at Oberlin College in Ohio, he started as a biochemistry major; it was only at the end of his junior year, after he took a topology course, that he grew interested in math. “In biochemistry, what I really liked was understanding the structure of things,” Kapiamba said. “And math is really just trying to study structure in its barest form. It really felt like it was the parts of biology or chemistry that I really enjoyed, distilled down into a pure form. I could just do that part.” After graduating in 2014, he was unsure about what he wanted to do. He moved to Washington, D.C., to be near his family, and found jobs working at a bakery and as a tutor. During this time, he began to contemplate pursuing a career in math. He soon quit his baking job, and for the next two years, he continued to tutor while studying higher-level mathematics on his own time — reviewing the material he’d learned during his undergraduate years (“to get a different vantage point,” he said) and taking online courses. “I wanted to feel very prepared,” he said. In 2016, he enrolled in a master’s program at the University of Michigan. As a master’s student, he started to work on a question about the geometry of the Mandelbrot set near the cusp of its main cardioid, where a parade of elephants marches out of a shallow valley. As you approach the valley, the elephants seem to get closer and closer together. 
And so it’s been conjectured that as you approach the valley’s deepest point, the distance between the elephants will shrink to zero. “I was like, obviously,” Kapiamba said, motioning at his computer screen, where he’d zoomed in on the elephants for me to see. They really did look as if they were touching. A key part of his argument rested on an offhand remark made in an old doctoral thesis paper. The 73-page dissertation, written entirely in French, was completed in 1989 but never published. Its author had left mathematics just one year later, after growing disillusioned and frustrated with the problem he’d hoped to solve: MLC. Kapiamba combed through the text, often getting lost in its pages without realizing the clock had long since ticked past midnight, relying on the French he knew from high school and Google Translate. He lamented that he hadn’t been raised to speak French. Both his father, who’s from the Democratic Republic of Congo, and his mother, who met him there while serving in the Peace Corps, spoke the language fluently. But the couple had moved to Maryland shortly before Kapiamba was born, and in an effort to help his father learn English as quickly as possible, they only spoke English at home. Eventually, Kapiamba realized that he wasn’t failing to grasp some step in the thesis paper’s logic. Its author had made a mistake. His claim was likely correct, but the reasoning behind it didn’t hold up. And so Kapiamba set his sights on fixing the error. He let things simmer, the way he waits for bread to rise. (He still bakes to focus his mind. He enjoys the opportunity it gives him to make something with his hands.) Over the next few years, he finally figured out the proof. To do so, he had to strengthen a theorem that Yoccoz had used in his original MLC proof, about the size of the elephants. The work took the complex dynamics community by complete surprise.
Computer images had already indicated that certain regions of the Mandelbrot set seemed to shrink much, much faster than Yoccoz’s theorem suggested, meaning that his statement could be strengthened. “If you just plot some pictures and look at them, you can see, oh, it seems like the bound Yoccoz gives us is very, very bad,” Kapiamba said. But no one had been able to improve it. Until Kapiamba. His work only applied to certain regions in the Mandelbrot set; mathematicians hope that the stronger version of Yoccoz’s statement can be shown for the entire set. Even so, “people got really excited,” Benini said. “Everyone working on this knows this must be true; they just didn’t know how to prove it.” Lomonaco and other mathematicians have already used Kapiamba’s result to prove theorems of their own. But it’s also seen as a potential linchpin in a future proof of MLC.

A Laboratory and a Guide

Last year’s conference marked the last time mathematicians will gather at the old military base in Denmark. Roskilde University, which sponsors the workshop series, gave up its lease on the location this year. If Lyubich, Kahn, Dudko and Kapiamba can combine their different approaches to finally prove MLC, it will mark the end of another era — an era that began when Mandelbrot and Hubbard and Douady first saw the fractal appear on their computer screens. The last half-century of exploration of the Mandelbrot set was made possible by the development of computer graphics. The math that generates the fractal is simple: You really only need to know how to add and multiply. But the drawings that made the set famous could not have been done by hand. They relied on carrying out those easy computations millions of times, something that wasn’t feasible without computers. In principle, a visionary mathematician might have held a snapshot of the set in their mind’s eye hundreds of years ago.
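That “add and multiply” loop is short enough to show in full. The sketch below is the standard escape-time rendering (the grid size and the viewing window are my own choices, not anything from the article): each character position is a value of c, drawn as part of the set if the orbit of 0 under z → z² + c stays within radius 2 for every iteration we try.

```python
def stays_bounded(c, limit=60):
    """Iterate z -> z*z + c from 0; True if |z| stays within 2 for `limit` steps."""
    z = 0j
    for _ in range(limit):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

WIDTH, HEIGHT = 64, 24
lines = []
for row in range(HEIGHT):
    y = 1.2 - 2.4 * row / (HEIGHT - 1)      # imaginary part, top to bottom
    line = ""
    for col in range(WIDTH):
        x = -2.1 + 2.8 * col / (WIDTH - 1)  # real part, left to right
        line += "#" if stays_bounded(complex(x, y)) else " "
    lines.append(line)
print("\n".join(lines))
```

Even at this coarse resolution the cardioid and its bulbs appear; the famous pictures differ only in grid density and in coloring escaped points by how quickly they leave.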
But in the unfolding of history, though genius can sometimes glimpse over the horizon, technology has modulated what can be imagined. Fatou, for instance, “was able to formulate conjectures without having been able to see the Mandelbrot set,” Buff said. But Fatou could only go so far. However powerful his imagination might have been, there is a world of richness swirling beneath the Mandelbrot set that was inaccessible to him, but readily visible to an average person. Lyubich does not tend to use computers in his work. “My way of thinking is very visual,” he said. “It’s very geometric. I think in terms of pictures — but I just draw more or less primitive pictures, by hand or in my mind. I never use computers in any substantial way.” (He jokes that perhaps the programming job he briefly held in Leningrad before emigrating is to blame. “It repelled me,” he said.) Nevertheless, he lives in a world steeped in computation. Back in Uzbekistan’s cotton fields, he too only got so far by letting his imagination run wild. “It was Douady and Hubbard who viewed the next level of depth,” he said — using the computers available in the 1980s. In the decades since, Lyubich has seen his collaborators use computers as a laboratory and as a guide. In his one joint paper with Milnor, he recalls, Milnor ran several computer experiments to help steer their proof in the right direction. And Dudko returns again and again to the computer while working with Lyubich. “He’s very good at interpreting what he sees,” Lyubich said, “to translate these pictures into mathematical language and formulate very deep conjectures.” Galileo discovered the moons of Jupiter not only because he had developed the right theory to make sense of what he saw, but because he had a telescope. Similarly, there are entire swaths of the mathematical universe that remain hidden until technological change makes them visible.
They can no more be discovered with pure thought than Jupiter’s moons can be discerned by squinting. If the computational revolution of the 1970s and ’80s opened up the continent of the Mandelbrot set for exploration, mathematicians might today be on the cusp of another such tipping point. Artificial intelligence is only beginning to be used to formulate substantive conjectures and prove significant mathematical results. It is hard — perhaps impossible — to gauge its potential with confidence. (“We’ve got to try to train a neural network to zoom around the Mandelbrot set,” Kapiamba joked.) But if the story of the Mandelbrot set is one of how mathematicians can use pure thought to survey a vista opened up by technology, the next chapter remains to be written. “I never had the feeling that my imagination was rich enough to invent all those extraordinary things,” Mandelbrot once said. “They were there, even though nobody had seen them before.” Correction: January 30, 2024 This article originally stated that Jeremy Kahn recently submitted work from his time at Caltech for publication. The recently submitted work was done in 2001, three years after he left Caltech. The article also originally misstated the date of a dream by Jeremy Kahn. The dream took place in 2022, not 2019. Clarification: January 30, 2024 An image depicting what it means to be globally connected but not locally connected did not clearly show that locally disconnected tines of a comb are those that are arbitrarily close to the comb’s left edge. That image has been updated.
Packages are thrown down an incline at A with a velocity of 1m/... | Filo
Question asked by Filo student

Packages are thrown down an incline at with a velocity of . The packages slide along the surface to a conveyor belt which moves with a velocity of . Knowing that between the packages and the surface , determine the distance if the packages are to arrive at with a velocity of .

Step by Step Solution:
Step 1. Resolve the initial velocity of the package into horizontal and vertical directions.
Step 2. Determine the time taken for the package to reach the conveyor belt using the vertical kinematic equation.
Step 3. Determine the distance the package travels on surface AB using the horizontal kinematic equation and the time found in Step 2.
Step 4. Determine the final velocity of the package in the horizontal direction using the horizontal kinematic equation.
Step 5. Determine the distance the package travels on the conveyor belt using the final velocity found in Step 4 and the velocity of the conveyor belt.
Step 6. Add the distances found in Steps 3 and 5 to get the total distance travelled by the package.
Step 7. Check that the total distance found in Step 6 matches the provided answer.

Updated: Apr 19, 2024 · Subject: Physics · Class: Class 11
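The numeric data in the question did not survive extraction, so the sketch below only illustrates the style of reasoning in the steps, with placeholder values (the incline angle, friction coefficient and speeds are assumptions, not the original data). For simplicity it resolves motion along the incline rather than into horizontal and vertical components:

```python
import math

# Hypothetical values, chosen only to illustrate the method:
g = 9.81                   # gravitational acceleration, m/s^2
theta = math.radians(30)   # incline angle (assumed)
mu_k = 0.25                # kinetic friction coefficient (assumed)
v_A = 1.0                  # speed at A, m/s (assumed)
v_B = 2.0                  # required speed at B, m/s (assumed)

# Acceleration along the incline: gravity component minus friction.
a = g * (math.sin(theta) - mu_k * math.cos(theta))

# Kinematics along the incline: v_B^2 = v_A^2 + 2*a*d  =>  d = (v_B^2 - v_A^2) / (2a)
d = (v_B**2 - v_A**2) / (2 * a)
print(f"a = {a:.3f} m/s^2, d = {d:.3f} m")
```

With these made-up numbers the package accelerates at about 2.78 m/s² and needs roughly half a metre of incline; the original problem's values would of course give different results.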
Publications with W. Woodin
All publications by W. Hugh Woodin and S. Shelah
Click the Sh-number to get to the paper’s detail page (which may include PDFs of the paper). Can’t find an Sh-number (in particular, an “F-number”)? You can try your luck.

Sh:159 Shelah, S., & Woodin, W. H. (1984). Forcing the failure of CH by adding a real. J. Symbolic Logic, 49(4), 1185–1189. DOI: 10.2307/2274270. MR: 771786
Sh:241 Shelah, S., & Woodin, W. H. (1990). Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math., 70(3), 381–394. DOI: 10.1007/BF02801471. MR:
Sh:339 Judah, H. I., Shelah, S., & Woodin, W. H. (1990). The Borel conjecture. Ann. Pure Appl. Logic, 50(3), 255–269. NB: A correction of the third section has appeared in 8.3.B of [Bartoszyński, Judah: Set theory. ISBN 1-56881-044-X]. DOI: 10.1016/0168-0072(90)90058-A. MR: 1086456
What are Lasso and Ridge regression?

Lasso and Ridge regression

In ridge regression, the OLS loss function is augmented so that we not only minimize the sum of squared residuals but also penalize the size of the parameter estimates, in order to shrink them towards zero:

minimize ‖Y − Xβ‖² + λ‖β‖²

Solving this for β̂ gives the ridge regression estimates

β̂_ridge = (X′X + λI)⁻¹ X′Y,

where I denotes the identity matrix. The λ parameter is the regularization penalty. Ridge regression assumes the predictors are standardized and the response is centered.

Ridge, LASSO and Elastic Net work on the same principle. They all penalize the beta coefficients so that we can identify the important variables (all of them in the case of Ridge, only a few in the case of LASSO). They shrink the beta coefficients towards zero for unimportant variables. These techniques are widely used when we have more predictors/features than observations. The only difference between the three techniques is the alpha value. If you look at the formula you can see the role of alpha: lambda is the penalty coefficient and is free to take any allowed value, while alpha is selected based on the model you want to fit. So if we take alpha = 0 it becomes Ridge, alpha = 1 is LASSO, and anything between 0 and 1 is Elastic Net.
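As a minimal numerical sketch of the closed-form ridge estimate above (NumPy, with made-up data; `lam` plays the role of λ, and the data-generating coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 predictors, 40 observations (arbitrary choices for illustration).
n, p = 40, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Standardize predictors and center the response, as ridge assumes.
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = y - y.mean()

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)      # lam = 0 recovers ordinary least squares
b_ridge = ridge(X, y, 10.0)   # a positive penalty shrinks the coefficients

# Shrinkage towards zero: the penalized estimate has smaller norm.
print(np.linalg.norm(b_ridge) < np.linalg.norm(b_ols))  # True
```

In practice you would reach for `sklearn.linear_model.Ridge`, `Lasso` or `ElasticNet` rather than the closed form, but the explicit formula makes the role of λ transparent (note LASSO has no closed-form solution, which is why it is solved iteratively).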
Swinburne’s Case for God – Part 10

1. Don’t Criticize what you don’t understand.
I have been following this principle in my approach to Richard Swinburne. For more than a year now I have studied his case for God in The Coherence of Theism and The Existence of God. As an atheist, the objective of finding significant problems in his case for theism is of interest to me, so that I can refute his case as part of a defense of my own viewpoint. But I’m not anxious to achieve that goal. So far, I have made no serious attempt to refute Swinburne’s arguments or even to raise objections to them, other than what just naturally pops into my mind as apparent potential problems or errors in his thinking.

Swinburne’s books defending theism have been available for over two decades, and lots of smart and well-informed philosophers and critical thinkers have read and commented on Swinburne’s case for God. If there are major problems or errors in his thinking, it is very likely that most of those problems have already been discovered and written about. It is thus unlikely that I will discover some major error in Swinburne’s thinking about God that no other philosopher or critical thinker has previously noticed and pointed out.

I’m in no hurry to evaluate Swinburne’s case for God. I can take my time to figure it out first, to get clear in my own mind how his arguments work and the logic of his case. If there are major flaws or errors in his reasoning, then slow and careful analysis of his case will eventually reveal most of those problems. If one first achieves a clear understanding of Swinburne’s case for God, then one will be in a good position to properly evaluate the quality and strength of that case.

2. Thinking out loud
In this particular post (and the next), I don’t plan to lay out an argument for a specific conclusion.
I’m just going to do some thinking out loud about Swinburne’s use of Bayes’ theorem in his case for God, in the hope that some clarity or insight might be gained. So, I’m not promising any definite conclusions, or even that I have important insights; just an honest effort to get a bit clearer about this topic.

3. GIGO: Garbage In, Garbage Out
GIGO is probably the most natural objection to make about Swinburne’s use of Bayes’ theorem to make his case for God. But I think this natural response is somewhat misguided. Bayes’ theorem serves as a conclusion-generating mechanism, in a way similar to the use of valid deductive argument forms by philosophers. Often analytic philosophers will summarize an argument in a deductively valid argument form (e.g. 1. If P, then Q. 2. P. Therefore: 3. Q. — this is known as a modus ponens inference). This is done for a number of reasons.

First, by putting an argument into a valid deductive form, one simplifies the thinking by eliminating the issue of faulty logic. Anyone with a basic understanding of or familiarity with deductive logic can verify that the logic of the argument is valid. The focus of thinking can thus be shifted to the question of the truth or justification of the premises of the argument. Furthermore, a deductive argument form ensures that all of the bases have been covered: if one evaluates the truth (or justification) of each of the premises, then one has covered all of the issues necessary in order to arrive at a comprehensive evaluation of the argument. Also, in the case of arguments that have more than just one premise, the argument breaks down the thinking into pieces, so that one can focus attention on each of the premises, thinking about their truth (or justification) one at a time, thus helping one to achieve greater clarity about the argument, and greater confidence in one’s evaluation of the strength and quality of the argument.
It seems to me that Bayes’ theorem plays a similar role in Swinburne’s case for God, or at least it has the potential to play this kind of role. It is a bit of logic that takes bits of information as input, and then generates a conclusion based on that information. The inputs in this case are three conditional probabilities:

The posterior probability of the evidence: P(e | h & k)
The prior probability of the hypothesis: P(h | k)
The prior probability of the evidence: P(e | k)

The output is also a conditional probability:

The posterior probability of the hypothesis: P(h | e & k)

Like a valid deductive argument form, Bayes’ theorem helps us to set aside the issue of faulty reasoning (at least for the logic of the overall argument). Like a valid deductive argument form, Bayes’ theorem helps to ensure that our thinking covers all the bases, that our reasoning is comprehensive. Like a valid deductive argument form, Bayes’ theorem helps to break down the intellectual work into smaller bite-sized pieces, so we can focus on each piece one at a time, and achieve greater clarity about the argument, and greater confidence in our evaluation of the quality and strength of the argument.

The temptation is to object that the conditional probabilities given as input for use with Bayes’ theorem are speculation, guesses, subjective opinion, unfounded, or dubious for some reason or other. Such objections have some force, no doubt, but I think they are a bit misguided. So far as I can see, Swinburne is very reluctant to make estimates of conditional probabilities, and this means that he does not provide enough information to actually perform the necessary calculations that would make use of Bayes’ theorem.
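To make the mechanics concrete, the theorem with the three inputs above is a one-line computation (the numbers below are purely illustrative; Swinburne supplies no such estimates):

```python
def posterior_h(p_e_given_hk, p_h_given_k, p_e_given_k):
    """Bayes' theorem: P(h | e & k) = P(e | h & k) * P(h | k) / P(e | k)."""
    return p_e_given_hk * p_h_given_k / p_e_given_k

# Purely illustrative inputs:
p = posterior_h(p_e_given_hk=0.9, p_h_given_k=0.2, p_e_given_k=0.5)
print(round(p, 3))  # 0.36
```

The point of the sketch is only that, given the three inputs, the output is mechanically determined; the whole dispute is about whether the inputs can be responsibly estimated at all.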
The big problem, it seems to me, is that Swinburne ought to have provided more estimates of conditional probabilities, or at least more claims that clearly constrain the range of acceptable estimates for the conditional probabilities that are needed to make use of Bayes’ theorem. I would prefer “Garbage In, Garbage Out” over “Insufficient Data”, because if Swinburne had made some additional estimates of the relevant conditional probabilities, I would have a clearer understanding of his thinking, and a clearer target to aim at. Since he has not done so, I’m reduced to trying to fill in the missing data on my own; hence the next post on “Playing with the …”

(To be continued…)
A cubical block of side 7 cm is surmounted by a hemisphere. What is the greatest diameter the hemisphere can have? Find the surface area of the solid.

The greatest diameter the hemisphere can have when surmounted on a cubical block with a side of 7 cm is equal to the side of the cube, which is 7 cm. This is because the diameter of the hemisphere must be the same as or smaller than the side of the cube to fit perfectly on top of it. To find the surface area of the solid, we take the surface area of the cube, subtract the circular patch of the top face that the hemisphere covers, and add the curved surface area of the hemisphere. The surface area of the cube is 6 × side² = 6 × 7² = 294 cm². The hemisphere has radius r = 3.5 cm (half of the diameter), so its curved surface area is 2πr² = 2 × (22/7) × 3.5² = 77 cm², and the covered circular patch has area πr² = (22/7) × 3.5² = 38.5 cm². The total surface area of the solid is therefore 294 − 38.5 + 77 = 332.5 cm².

Combining Geometric Shapes
In the fascinating study of geometry, combining different shapes to form a new solid is a common practice. A perfect example of this is a cubical block surmounted by a hemisphere. This combination not only presents an interesting visual but also a challenging problem in terms of calculating its dimensions and surface area. The key to solving this problem lies in understanding the properties of both the cube and the hemisphere and how they interact when combined. This exercise is not just about applying formulas; it is about visualizing how different geometric shapes can coalesce to form a unique structure.

Understanding the Dimensions of the Cube
The first step in our analysis is to understand the dimensions of the cube. In this case, we have a cube with each side measuring 7 cm. The cube, being a regular solid, has equal lengths on all sides, which simplifies our calculations. The significance of the cube’s dimensions becomes apparent when we consider the size of the hemisphere that can be placed on it. The cube’s uniformity provides a base with a specific area on which the hemisphere can rest, thereby determining the maximum possible diameter of the hemisphere.

Determining the Hemisphere’s Maximum Diameter
The greatest diameter that the hemisphere can have when placed on the cube is directly related to the dimensions of the cube. Since the cube has a side length of 7 cm, the maximum diameter of the hemisphere that can sit on it without overhanging is also 7 cm. This is because the diameter of the hemisphere must be equal to or less than the side of the cube to ensure a perfect fit. This constraint is crucial in maintaining the structural integrity and aesthetic of the combined solid.

Calculating the Surface Area of the Cube
To calculate the total surface area of the combined solid, we start with the cube. The surface area of a cube is calculated using the formula 6 × side². For our cube with a side of 7 cm, the surface area is 6 × 7² = 294 cm², covering all six faces. Note, however, that the hemisphere hides a circular patch of the top face, of area πr² = (22/7) × 3.5² = 38.5 cm², which must be subtracted because it is no longer part of the exposed surface.

Computing the Surface Area of the Hemisphere
Next, we calculate the curved surface area of the hemisphere. The formula for this is 2πr², where r is the radius of the hemisphere. Since the diameter of the hemisphere is the same as the side of the cube (7 cm), the radius is half of this, which is 3.5 cm. Substituting this into the formula, we get 2 × (22/7) × 3.5² = 77 cm². This calculation gives us the area of the curved surface of the hemisphere, which is the visible part of the hemisphere when mounted on the cube.

Total Surface Area of the Combined Solid
Finally, the total surface area of the solid is the cube’s surface area, minus the hidden circular patch, plus the hemisphere’s curved surface: 294 − 38.5 + 77 = 332.5 cm². Equivalently, 6 × side² + πr² = 294 + 38.5 = 332.5 cm². This total surface area represents the entire exposed surface of the combined solid, providing insight into the material requirements if the solid were to be constructed or covered.

Synthesizing Geometric Knowledge
In conclusion, the process of calculating the total surface area of a cubical block surmounted by a hemisphere demonstrates the precision of geometric calculations. By dissecting the problem into smaller parts, applying specific formulas, and combining the results while accounting for the hidden base of the hemisphere, we arrive at a comprehensive understanding of the surface area of the combined solid. This exercise not only reinforces our knowledge of geometry but also showcases its practical applications in real-world scenarios, emphasizing the interconnectedness of different geometric shapes.

Last Edited: June 12, 2024
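A quick numeric check of the computation. One subtlety worth making explicit: the hemisphere covers a circular patch of the cube’s top face, so that patch is no longer exposed and must be subtracted from the cube’s area before adding the dome’s curved surface:

```python
import math

side = 7.0            # cube edge, cm
r = side / 2          # largest possible hemisphere radius, cm

cube_faces = 6 * side**2        # 294 cm^2: all six faces of the cube
dome = 2 * math.pi * r**2       # curved surface of the hemisphere
covered = math.pi * r**2        # patch of the top face hidden under the dome

total = cube_faces - covered + dome   # = 6*side^2 + pi*r^2
print(round(total, 1))  # 332.5
```

With π taken as 22/7, as is conventional in this exercise, the result is exactly 332.5 cm².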
Search results for: hamiltonian chaos
Commenced in January 2007

105 Hamiltonian Factors in Hamiltonian Graphs
Authors: Sizhong Zhou, Bingyuan Pu
Let G be a Hamiltonian graph. A factor F of G is called a Hamiltonian factor if F contains a Hamiltonian cycle. In this paper, two sufficient conditions are given, which are two neighborhood conditions for a Hamiltonian graph G to have a Hamiltonian factor.
Keywords: graph, neighborhood, factor, Hamiltonian factor.

104 A Sufficient Condition for Graphs to Have Hamiltonian [a, b]-Factors
Authors: Sizhong Zhou
Let a and b be nonnegative integers with 2 ≤ a < b, and let G be a Hamiltonian graph of order n with n ≥ (a+b−4)(a+b−2)/(b−2). An [a, b]-factor F of G is called a Hamiltonian [a, b]-factor if F contains a Hamiltonian cycle. In this paper, it is proved that G has a Hamiltonian [a, b]-factor if |N_G(X)| > ((a−1)n + |X| − 1)/(a+b−3) for every nonempty independent subset X of V(G) and δ(G) > ((a−1)n + a + b − 4)/(a+b−3).
Keywords: graph, minimum degree, neighborhood, [a, b]-factor, Hamiltonian [a, b]-factor.

103 Mutually Independent Hamiltonian Cycles of Cn x Cn
Authors: Kai-Siou Wu, Justie Su-Tzu Juan
In a graph G, a cycle is a Hamiltonian cycle if it contains all vertices of G. Two Hamiltonian cycles C_1 = 〈u_0, u_1, u_2, ..., u_{n−1}, u_0〉 and C_2 = 〈v_0, v_1, v_2, ..., v_{n−1}, v_0〉 in G are independent if u_0 = v_0 and u_i ≠ v_i for all 1 ≤ i ≤ n−1. In G, a set of Hamiltonian cycles C = {C_1, C_2, ..., C_k} is mutually independent if any two Hamiltonian cycles of C are independent. The mutually independent Hamiltonicity IHC(G) = k means that k is the maximum integer such that there exist k mutually independent Hamiltonian cycles starting from any vertex of G. In this paper, we prove that IHC(C_n × C_n) = 4, for n ≥ 3.
Keywords: Hamiltonian, independent, cycle, Cartesian product, mutually independent Hamiltonicity

102 The Panpositionable Hamiltonicity of k-ary n-cubes
Authors: Chia-Jung Tsai, Shin-Shin Kao
The hypercube Q_n is one of the most well-known and popular interconnection networks, and the k-ary n-cube Q^k_n is an enlarged family from Q_n that keeps many pleasing properties of hypercubes. In this article, we study the panpositionable Hamiltonicity of Q^k_n for k ≥ 3 and n ≥ 2. Let x, y ∈ V(Q^k_n) be two arbitrary vertices and C be a Hamiltonian cycle of Q^k_n. We use d_C(x, y) to denote the distance between x and y on the Hamiltonian cycle C. Define l as an integer satisfying d(x, y) ≤ l ≤ (1/2)|V(Q^k_n)|. We prove the following: • When k = 3 and n ≥ 2, there exists a Hamiltonian cycle C of Q^k_n such that d_C(x, y) = l. • When k ≥ 5 is odd and n ≥ 2, we require that l ∉ S, where S is a set of specific integers; then there exists a Hamiltonian cycle C of Q^k_n such that d_C(x, y) = l. • When k ≥ 4 is even and n ≥ 2, we require l − d(x, y) to be even; then there exists a Hamiltonian cycle C of Q^k_n such that d_C(x, y) = l. The result is optimal, since the restrictions on l are due to the structure of Q^k_n by definition.
Keywords: Hamiltonian, panpositionable, bipanpositionable, k-ary n-cube.

101 The Balanced Hamiltonian Cycle on the Toroidal Mesh Graphs
Authors: Wen-Fang Peng, Justie Su-Tzu Juan
The balanced Hamiltonian cycle problem is a quite new topic of graph theory. Given a graph G = (V, E) whose edge set can be partitioned into k dimensions, for positive integer k, and a Hamiltonian cycle C on G, the set of all i-dimensional edges of C, which is a subset of E(C), is denoted as E_i(C).
Keywords: Hamiltonian cycle, balanced, Cartesian product.
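The independence condition defined in abstract 103 is easy to check mechanically. A small sketch, with cycles written as vertex sequences without the repeated final vertex (the example cycles live in the complete graph K4, chosen only for illustration):

```python
def independent(c1, c2):
    """Two Hamiltonian cycles (as vertex sequences) are independent if they
    start at the same vertex and differ at every later position."""
    return (c1[0] == c2[0]
            and all(u != v for u, v in zip(c1[1:], c2[1:])))

def mutually_independent(cycles):
    """A set of Hamiltonian cycles is mutually independent if every pair
    of cycles in it is independent."""
    return all(independent(a, b)
               for i, a in enumerate(cycles)
               for b in cycles[i + 1:])

# In K4, these two Hamiltonian cycles from vertex 0 disagree at every
# position after 0, so they are independent:
print(independent([0, 1, 2, 3], [0, 2, 3, 1]))  # True
# These two traverse the same 4-cycle in opposite directions and share
# the middle vertex, so they are not:
print(independent([0, 1, 2, 3], [0, 3, 2, 1]))  # False
```

Computing IHC(G) then amounts to finding the largest such set of cycles from every start vertex, which is the combinatorial content of the C_n × C_n result above.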
100 A Hamiltonian Decomposition of 5-star
Authors: Walter Hussak, Heiko Schröder
Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is a preferred interconnection network topology to a hypercube for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e. its edges can be partitioned into disjoint Hamiltonian cycles, and therefore a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the star graph 5-star of degree 4, by defining an automorphism for 5-star and a Hamiltonian cycle which is edge-disjoint with its image under the automorphism.
Keywords: interconnection networks, paths and cycles, graphs and groups.

99 A Further Study on the 4-Ordered Property of Some Chordal Ring Networks
Authors: Shin-Shin Kao, Hsiu-Chunj Pan
Given a graph G, a cycle of G is a sequence of vertices of G such that the first and the last vertices are the same. A Hamiltonian cycle of G is a cycle containing all vertices of G. The graph G is k-ordered (resp. k-ordered Hamiltonian) if for any sequence of k distinct vertices of G, there exists a cycle (resp. Hamiltonian cycle) in G containing these k vertices in the specified order. Obviously, any cycle in a graph is 1-ordered, 2-ordered and 3-ordered. Thus the study of any graph being k-ordered (resp. k-ordered Hamiltonian) always starts with k = 4.
Most studies about this topic work on graphs with no real applications. To our knowledge, the chordal ring families were the first ones utilized as the underlying topology in interconnection networks and shown to be 4-ordered. Furthermore, based on our computer experimental results, it was conjectured that some of them are 4-ordered Hamiltonian. In this paper, we intend to give some possible directions in proving the conjecture.
Keywords: Hamiltonian cycle, 4-ordered, chordal rings, 3-regular.

98 A Systematic Approach for Finding Hamiltonian Cycles with a Prescribed Edge in Crossed Cubes
Authors: Jheng-Cheng Chen, Chia-Jui Lai, Chang-Hsiung Tsai
The crossed cube is one of the most notable variations of the hypercube, but some properties of the former are superior to those of the latter. For example, the diameter of the crossed cube is almost half that of the hypercube. In this paper, we focus on the problem of embedding a Hamiltonian cycle through an arbitrary given edge in the crossed cube. We give a necessary and sufficient condition for determining whether a given permutation with n elements over Z_n generates a Hamiltonian cycle pattern of the crossed cube. Moreover, we obtain a lower bound for the number of different Hamiltonian cycles passing through a given edge in an n-dimensional crossed cube. Our work extends some recently obtained results.
Keywords: Interconnection network, Hamiltonian, crossed cubes, prescribed edge.

97 The Frequency Graph for the Traveling Salesman Problem
Authors: Y. Wang
The traveling salesman problem (TSP) is hard to solve when the number of cities and routes becomes large. The frequency graph is constructed to tackle the problem. A frequency graph maintains the topological relationships of the original weighted graph.
The numbers on the edges are the frequencies of the edges emulated from the local optimal Hamiltonian paths. The simplest kind of local optimal Hamiltonian paths are computed based on the four vertices and three lines inequality. A search algorithm is given to find the optimal Hamiltonian circuit based on the frequency graph. The experiments show that the method can find the optimal Hamiltonian circuit within several trials.
Keywords: Traveling salesman problem, frequency graph, local optimal Hamiltonian path, four vertices and three lines inequality.

96 Lyapunov Type Inequalities for Fractional Impulsive Hamiltonian Systems
Authors: Kazem Ghanbari, Yousef Gholami
This paper deals with the study of fractional order impulsive Hamiltonian systems and fractional impulsive Sturm-Liouville type problems derived from these systems. The main purpose of this paper is to obtain so-called Lyapunov type inequalities for the mentioned problems. Also, in view of the applicability of the obtained inequalities, some qualitative properties such as stability, disconjugacy, nonexistence and oscillatory behaviour of fractional Hamiltonian systems and fractional Sturm-Liouville type problems under impulsive conditions will be derived. At the end, we want to point out that for studying fractional order Hamiltonian systems, we will apply the recently introduced fractional conformable operators.
Keywords: Fractional derivatives and integrals, Hamiltonian system, Lyapunov type inequalities, stability, disconjugacy.

95 Dense Chaos in Coupled Map Lattices
Authors: Tianxiu Lu, Peiyong Zhu
This paper is mainly concerned with a kind of coupled map lattices (CMLs).
New definitions of dense δ-chaos and dense chaos (which is a special case of dense δ-chaos with δ = 0) in discrete spatiotemporal systems are given, and sufficient conditions for these systems to be densely chaotic or densely δ-chaotic are derived.
Keywords: Discrete spatiotemporal systems, coupled map lattices, dense δ-chaos, Li-Yorke pairs.

94 Quantum Localization of Vibrational Mirror in Cavity Optomechanics
Authors: Madiha Tariq, Hena Rabbani
Recently, cavity optomechanics has become an extensive research field that manipulates the mechanical effects of light for coupling of the optical field with other physical objects, specifically with regard to dynamical localization. We investigate the dynamical localization (both in momentum and position space) for a vibrational mirror in a Fabry-Pérot cavity driven by a single-mode optical field and a transverse probe field. The weak probe field phenomenon results in classical chaos in phase space, and spatiotemporal dynamics in position space |ψ(x)|² and momentum space |ψ(p)|² versus time show quantum localization in both momentum and position space. Also, we discuss the parametric dependencies of dynamical localization for a designated set of parameters to be experimentally feasible. Our work opens an avenue to manipulate other optical phenomena, and the applicability of the proposed work can be extended to tunable laser sources in the future.
Keywords: Dynamical localization, cavity optomechanics, Hamiltonian chaos, probe field.

93 Chaos Synchronization Using Sliding Mode Technique
Authors: Behzad Khademian, Mohammad Haeri
In this paper, an effective sliding mode design is applied to chaos synchronization. The proposed controller can make the states of two identical modified Chua's circuits globally asymptotically synchronized.
Numerical results are provided to show the effectiveness and robustness of the proposed method.
Keywords: Sliding mode, chaos synchronization, modified Chua's circuit.

92 Robust Conversion of Chaos into an Arbitrary Periodic Motion
Authors: Abolhassan Razminia, Mohammad-Ali Sadrnia
One of the most attractive and important fields of chaos theory is control of chaos. In this paper, we try to present a simple framework for chaotic motion control using the feedback linearization method. Using this approach, we derive a strategy which can be easily applied to other chaotic systems. This work presents two novel results: the desired periodic orbit need not be a solution of the original dynamics, and the response is robust against parameter variations. The illustrated simulations show the ability of both. In addition, by a comparison between a conventional state feedback and our proposed method, it is demonstrated that the introduced technique is more efficient.
Keywords: chaos, feedback linearization, robust control, periodic motion.

91 Observer Design for Chaos Synchronization of Time-delayed Power Systems
Authors: Jui-Sheng Lin, Yi-Sung Yang, Meei-Ling Hung, Teh-Lu Liao, Jun-Juh Yan
The global chaos synchronization for a class of time-delayed power systems is investigated via an observer-based approach. By employing the concepts of quadratic stability theory and the generalized system model, a new sufficient criterion for constructing an observer is deduced. In contrast to previous works, this paper proposes a theoretical and systematic design procedure to realize chaos synchronization for master-slave power systems. Finally, an illustrative example is given to show the applicability of the obtained scheme.
Keywords: Chaos, synchronization, quadratic stability theory, observer

90 PSS and SVC Controller Design by Chaos and PSO Algorithms to Enhancing the Power System Stability
Authors: Saeed Jalilzadeh, Mohammad Reza Safari Tirtashi, Mohsen Sadeghi
This paper focuses on designing PSS and SVC controllers based on chaos and PSO algorithms to improve the stability of the power system. A single machine infinite bus (SMIB) system with the SVC located at the terminal of the generator has been considered to evaluate the proposed controllers, where both the SVC and the PSS have the same controller. The coefficients of the PSS and SVC controllers have been optimized by chaos and PSO algorithms. Finally, the system with the proposed controllers has been simulated for a particular disturbance in the input power of the generator, and the dynamic responses of the generator have been presented. The simulation results showed that the system with the recommended controller performs outstandingly in fast damping of power system oscillations.
Keywords: PSS, chaos, PSO, stability

89 An Efficient Hamiltonian for Discrete Fractional Fourier Transform
Authors: Sukrit Shankar, Pardha Saradhi K., Chetana Shanta Patsa, Jaydev Sharma
The fractional Fourier transform, which is a generalization of the classical Fourier transform, is a powerful tool for the analysis of transient signals. Discrete fractional Fourier transform Hamiltonians have been proposed in the past with varying degrees of correlation between their eigenvectors and Hermite-Gaussian functions. In this paper, we propose a new Hamiltonian for the discrete fractional Fourier transform and show that the eigenvectors of the proposed matrix have a higher degree of correlation with the Hermite-Gaussian functions.
Also, the proposed matrix is shown to give better Fractional Fourier responses with various transform orders for different signals. Keywords: Fractional Fourier Transform, Hamiltonian, Eigenvectors, Discrete Hermite Gaussians.

Anti-Synchronization of Two Different Chaotic Systems via Active Control. Authors: Amir Abbas Emadzadeh, Mohammad Haeri. This paper presents anti-synchronization of chaos between two different chaotic systems using the active control method. The proposed technique is applied to achieve chaos anti-synchronization for the Lü and Rössler dynamical systems. Numerical simulations are implemented to verify the results. Keywords: Active control, Anti-synchronization, Chaos, Lü system, Rössler system.

On Deterministic Chaos: Disclosing the Missing Mathematics from the Lorenz-Haken Equations. Authors: Belkacem Meziane. The original 3D Lorenz-Haken equations, which describe laser dynamics, are converted into two second-order differential equations from which the so-far-missing mathematics is extracted. Leaning on high-order trigonometry, important outcomes are obtained: a fundamental result attributes chaos to forbidden periodic solutions inside a precisely delimited region of the control parameter space that governs self-pulsing. Keywords: chaos, Lorenz-Haken equations, laser dynamics, nonlinearities.

A Necessary Condition for the Existence of Chaos in Fractional Order Delay Differential Equations. Authors: Sachin Bhalekar. In this paper we propose a necessary condition for the existence of chaos in delay differential equations of fractional order. To illustrate the proposed theory, we discuss a fractional order Liu system and a financial system involving delay.
Keywords: Caputo derivative, delay, stability, chaos.

On Chvátal's Conjecture for the Hamiltonicity of 1-Tough Graphs and Their Complements. Authors: Shin-Shin Kao, Yuan-Kang Shih, Hsun Su. In this paper, we show that the conjecture of Chvátal, which states that any 1-tough graph is either a Hamiltonian graph or its complement contains a specific graph denoted by F, does not hold in general. More precisely, it is true only for graphs with six or seven vertices, and is false for graphs with eight or more vertices. A theorem is derived as a correction for the conjecture. Keywords: Complement, degree sum, Hamiltonian, tough.

An Augmented Automatic Choosing Control Designed by Extremizing a Combination of Hamiltonian and Lyapunov Functions for Nonlinear Systems with Constrained Input. Authors: Toshinori Nawata, Hitoshi Takata. In this paper we consider a nonlinear feedback control called augmented automatic choosing control (AACC) for nonlinear systems with constrained input. Constant terms which arise from sectionwise linearization of a given nonlinear system are treated as coefficients of a stable zero dynamics. Parameters included in the control are suboptimally selected by extremizing a combination of Hamiltonian and Lyapunov functions with the aid of a genetic algorithm. This approach is applied to a field excitation control problem of a power system to demonstrate the effectiveness of the AACC. Simulation results show that the new controller can improve performance remarkably well.
Keywords: Augmented Automatic Choosing Control, Nonlinear Control, Genetic Algorithm, Hamiltonian, Lyapunov function.

Synchronization of Chaos in a Food Web in Ecological Systems. Authors: Anuraj Singh, Sunita Gakkhar. The three-species food web model proposed and investigated by Gakkhar and Naji is known to have chaotic behaviour for a choice of parameters. An attempt has been made to synchronize the chaos in the model using bidirectional coupling. Numerical simulations are presented to demonstrate the effectiveness and feasibility of the analytical results. Numerical results show that for higher values of the coupling strength, chaotic synchronization is achieved. Chaos can be controlled to achieve stable synchronization in natural systems. Keywords: Lyapunov Exponent, Bidirectional Coupling, Chaos Synchronization, Synchronization Manifold.

An Optimal Control Problem for Rigid Body Motions on Lie Group SO(2, 1). Authors: Nemat Abazari, Ilgin Sager. In this paper smooth trajectories are computed in the Lie group SO(2, 1) as a motion planning problem, by assigning a Frenet frame to the rigid body system in order to optimize the cost function of the elastic energy spent to track a timelike curve in Minkowski space. A method is proposed to solve the motion planning problem that minimizes the integral of the square norm of the Darboux vector of a timelike curve. This method uses the coordinate-free Maximum Principle of optimal control and leads to the theory of integrable Hamiltonian systems. The presence of several conserved quantities inherent in these Hamiltonian systems aids in the explicit computation of the rigid body motions. Keywords: Optimal control, Hamiltonian vector field, Darboux vector, maximum principle, Lie group, rigid body motion, Lorentz metric.
Linear Cryptanalysis for a Chaos-Based Stream Cipher. Authors: Ruming Yin, Jian Yuan, Qiuhua Yang, Xiuming Shan, Xiqin Wang. Linear cryptanalysis methods are rarely used to improve the security of chaotic stream ciphers. In this paper, we apply linear cryptanalysis to a chaotic stream cipher which was designed by strictly following the basic design criteria of cryptosystems: confusion and diffusion. We show that this well-designed chaos-based stream cipher is still insecure against a distinguishing attack. This distinguishing attack motivates further improvement of the cipher. Keywords: Stream cipher, chaos, linear cryptanalysis, distinguishing attack.

Chaotic Dynamics of Cost Overruns in Oil and Gas Megaprojects: A Review. Authors: O. J. Olaniran, P. E. D. Love, D. J. Edwards, O. Olatunji, J. Matthews. Cost overruns are a persistent problem in oil and gas megaprojects. Whilst the extant literature is filled with studies on incidents and causes of cost overruns, underlying theories to explain their emergence in oil and gas megaprojects are few. Yet, a way to contain the syndrome of cost overruns is to understand the bases of 'how and why' they occur. Such knowledge will also help to develop pragmatic techniques for better overall management of oil and gas megaprojects. The aim of this paper is to explain the development of cost overruns in hydrocarbon megaprojects through the perspective of chaos theory. The underlying principles of chaos theory and its implications for cost overruns are examined, and practical recommendations are proposed. In addition, directions for future research in this fertile area are provided. Keywords: Chaos theory, oil and gas, cost overruns, megaprojects.
Planning Rigid Body Motions and Optimal Control Problem on Lie Group SO(2, 1). Authors: Nemat Abazari, Ilgin Sager. In this paper smooth trajectories are computed in the Lie group SO(2, 1) as a motion planning problem, by assigning a Frenet frame to the rigid body system in order to optimize the cost function of the elastic energy spent to track a timelike curve in Minkowski space. A method is proposed to solve the motion planning problem that minimizes the integral of the Lorentz inner product of the Darboux vector of a timelike curve. This method uses the coordinate-free Maximum Principle of optimal control and leads to the theory of integrable Hamiltonian systems. The presence of several conserved quantities inherent in these Hamiltonian systems aids in the explicit computation of the rigid body motions. Keywords: Optimal control, Hamiltonian vector field, Darboux vector, maximum principle, Lie group, rigid body motion, Lorentz metric.

Chua's Circuit Regulation Using a Nonlinear Adaptive Feedback Technique. Authors: Abolhassan Razminia, Mohammad-Ali Sadrnia. Chua's circuit is one of the most important electronic devices used in chaos and bifurcation studies, and it plays a central role in secure communication. Since adaptive control is widely used in linear systems control, we introduce here a new application of the adaptive method to the field of chaos control. In this paper, we derive a new adaptive control scheme for regulating Chua's circuit, because control of chaos is often very important in practical operations. The novelty of this approach lies in its robustness against external perturbations, which are simulated as additive noise in all measured states, and it can be generalized to other chaotic systems.
Our approach is based on Lyapunov analysis, and the adaptation law is formulated for the feedback gain; because of this, we have named it NAFT (Nonlinear Adaptive Feedback Technique). Finally, simulations show the capability of the presented technique for Chua's circuit. Keywords: Chaos, adaptive control, nonlinear control, Chua's circuit.

Chaos-based Secure Communication via Continuous Variable Structure Control. Authors: Cheng-Fang Huang, Meei-Ling Hung, Teh-Lu Liao, Her-Terng Yau, Jun-Juh Yan. The design of chaos-based secure communication via synchronized modified Chua's systems is investigated in this paper. A continuous control law is proposed to ensure synchronization of the master and slave modified Chua's systems by using the variable structure control technique. In particular, the concept of extended systems is introduced such that a continuous control input is obtained to avoid the chattering phenomenon. It then becomes possible to ensure that the message signal embedded in the transmitter can be recovered in the receiver. Keywords: Chaos, Secure communication, Synchronization, Variable structure control (VSC).

Solving SPDEs by a Least Squares Method. Authors: Hassan Manouzi. We present in this paper a useful strategy to solve stochastic partial differential equations (SPDEs) involving stochastic coefficients. Using the Wick product of higher order and the Wiener-Itô chaos expansion, the SPDE is reformulated as a large system of deterministic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. To obtain the chaos coefficients in the corresponding deterministic equations, we use a least squares formulation. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.
Keywords: Least squares, Wick product, SPDEs, finite element, Wiener chaos expansion, gradient method.
What Are Complex Numbers?

So, what exactly are complex numbers? Picture a number line stretching infinitely in both directions. Real numbers, like 2, -5, or 3.14, live comfortably on this line. They're our everyday numbers. But then, there's a whole new dimension we need to explore: the imaginary axis. This is where complex numbers come into play. A complex number is essentially a combination of a real number and an imaginary number. You can think of it as having a real part, like 3 or -7, and an imaginary part, represented by a multiple of "i," where "i" is the square root of -1. So, a complex number looks like this: 3 + 4i. Here, 3 is the real part, and 4i is the imaginary part. The magic of complex numbers unfolds when you realize they allow us to solve equations that don't have solutions in the realm of real numbers. For instance, the equation x² + 1 = 0 doesn't have a real solution because no real number squared gives you -1. Enter complex numbers, and we find that x = ±i is a perfectly valid solution. In the real world, complex numbers are more than just theoretical musings. They're crucial in fields like engineering, physics, and even computer graphics. They help in analyzing electrical circuits, describing wave behaviors, and creating stunning visual effects.

Unlocking the Mysteries of Complex Numbers: A Beginner's Guide

At first glance, complex numbers might seem like a mathematical trick, but they're incredibly practical. They take the form of (a + bi), where (a) and (b) are real numbers, and (i) represents the square root of -1. Think of (a) as the real part and (b) as the imaginary part. Together, they form a coordinate on a plane, not unlike plotting a point on a map. Why bother with this? Complex numbers make it easier to solve equations that can't be solved using only real numbers. For instance, if you want to find the square root of a negative number, real numbers simply can't help.
Here, complex numbers come to the rescue, expanding the horizon of what's possible. And it's not just theory—complex numbers pop up in real-world scenarios, too. They're used in engineering, physics, and even in computer graphics. They help model everything from electrical circuits to quantum mechanics. So, the next time you encounter a complex number, don't shy away. Instead, appreciate how they unlock new dimensions of problem-solving and make our understanding of the world a bit richer.

From Imaginary to Real: How Complex Numbers Transform Mathematics

So, what exactly are complex numbers? Picture them as a combination of two parts: a real number and an imaginary number. The real part is just like the numbers you use every day, while the imaginary part introduces a whole new dimension. It's like mixing a splash of fantasy into a mundane recipe. This might sound strange, but trust me, complex numbers are crucial to modern mathematics. These numbers are not just theoretical curiosities; they have practical applications that transform various fields. For example, in electrical engineering, complex numbers help us understand alternating current (AC) circuits. They allow engineers to calculate voltages and currents in ways that would be nearly impossible with real numbers alone. It's like having a superpower that makes complex problems simpler and more manageable. In physics, complex numbers help describe wave functions and quantum states. They turn what seems like an abstract concept into a powerful tool for predicting how particles behave at the smallest scales. It's as if complex numbers are the secret code to the universe's most intricate mysteries. And let's not forget about computer graphics. Ever wonder how video games create such stunning visuals? Complex numbers are part of the algorithmic magic that brings your favorite digital worlds to life. They help in transforming and manipulating images with breathtaking precision.
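The a + bi form and the solution x = ±i described above can be checked directly in Python, whose built-in complex type writes the imaginary unit as j; a quick sketch:

```python
# Python has complex numbers built in; the imaginary unit is written 1j.
z = 3 + 4j
print(z.real, z.imag)          # real part 3.0, imaginary part 4.0

# i squared is -1, so both i and -i solve x**2 + 1 = 0.
print((1j) ** 2)               # (-1+0j)
print((1j) ** 2 + 1 == 0)      # True
print((-1j) ** 2 + 1 == 0)     # True
```

Python follows the engineering convention of writing the imaginary unit as j rather than i, but the arithmetic is exactly the same.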
Complex Numbers 101: Why These Mathematical Wonders Matter

Why should you care about complex numbers? For starters, they simplify the way we handle oscillations and waves. Think about electrical engineers working with AC circuits. Without complex numbers, dealing with the alternating current's phase and amplitude would be like trying to navigate a maze blindfolded. Complex numbers provide a streamlined way to manage these alternating signals, making calculations not only easier but also more intuitive. In the realm of quantum physics, complex numbers play a starring role. Quantum mechanics, the science that describes the behavior of particles at the smallest scales, uses complex numbers to represent the probability of finding a particle in a certain state. Imagine trying to predict where a tiny particle might be without these mathematical tools—it would be like trying to catch a shadow in the dark. Moreover, complex numbers aren't just confined to abstract theory. They're used in algorithms that power everything from image processing to telecommunications. When you snap a photo or make a phone call, complex numbers help make sense of the data and ensure everything runs smoothly. In essence, complex numbers are more than just a mathematical curiosity; they're essential tools that make modern technology and scientific understanding possible. Whether you're delving into the mysteries of the quantum world or just trying to make sense of a signal, complex numbers are the unsung heroes making it all work seamlessly.

The Fascinating World of Complex Numbers: Beyond the Imaginary

Let's break it down. Complex numbers are like the superheroes of the number world. Imagine your regular numbers are like the heroes in our everyday life—straightforward and reliable. Now, complex numbers? They're the superheroes with a special power: they can solve equations that real numbers can't even touch. They're composed of two parts: a real number and an imaginary number.
The imaginary part, denoted by (i), is the square root of -1. Yes, you heard that right—an actual number that, when squared, gives you a negative result. It's as mind-bending as it sounds! In practical terms, complex numbers show up everywhere from electrical engineering to fluid dynamics. Think of them as the secret sauce behind many technological marvels. For instance, when engineers design circuits or analyze signals, they use complex numbers to simplify and solve problems that would be way too complicated otherwise. But let's get back to the imagination aspect. Visualizing complex numbers can be like dreaming in color. On a graph, you plot the real part on one axis and the imaginary part on another. This gives you a plane where every point is a complex number, making it easier to work with and understand these enigmatic entities. So next time you're marveling at the wonders of modern technology or trying to solve a tricky math problem, remember: behind the scenes, complex numbers are the unsung heroes making the impossible possible.

Complex Numbers in Action: How They Solve Real-World Problems

Let's dive into an everyday example: electrical engineering. When designing circuits, engineers frequently use complex numbers to analyze alternating current (AC) circuits. Imagine AC as a constantly changing current that flips direction, making it tricky to handle. Complex numbers simplify this by representing both the magnitude and phase of the current in one neat package. It's like using a single tool to manage both the height and weight of an object—much more efficient than handling each characteristic separately. Another fascinating application is in signal processing. Ever heard of noise-canceling headphones? They use complex numbers to filter out unwanted sounds from your favorite tunes. The headphones process sound signals using complex algorithms that adjust the phase and amplitude, essentially 'cleaning up' the audio.
Without complex numbers, this technology wouldn't be as effective. But the magic doesn't stop there. Complex numbers are also crucial in fluid dynamics, where they help model the flow of liquids and gases. Picture a turbulent river where you want to predict how the water moves and swirls. Complex numbers help engineers and scientists create simulations that make these predictions more accurate, which can be vital for designing efficient transport systems or understanding environmental impacts. So, while complex numbers might seem like a quirky mathematical curiosity, they're actually essential tools in solving a wide range of real-world problems. They help us design better technology, improve efficiency, and even protect our environment.

Demystifying Complex Numbers: What They Are and Why They're Important

Imagine you're navigating a 2D plane: real numbers let you move left and right, while imaginary numbers let you move up and down. Together, they give you full control over this plane, making complex numbers a powerful tool for solving problems that real numbers alone can't handle. They're like having a GPS system that helps you plot a course through tricky mathematical landscapes. In practical terms, complex numbers are crucial in electrical engineering for analyzing circuits and signals. They simplify the equations needed to model AC currents and electromagnetic waves. They're also essential in control systems, which regulate everything from autopilot systems in aircraft to robotic arms in manufacturing. And let's not forget about signal processing. Here, complex numbers help us break down signals into their component frequencies, making it easier to filter noise or compress data. If you've ever listened to your favorite song on a digital device, complex numbers played a behind-the-scenes role in making sure it sounds crystal clear.
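The claim that complex numbers let us break a signal into its component frequencies can be illustrated with a toy discrete Fourier transform. This naive O(n^2) version is my own sketch for illustration; real systems use the FFT:

```python
import cmath

def naive_dft(samples):
    """Naive O(n^2) discrete Fourier transform built from complex exponentials."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A pure tone: exactly one full cycle across 8 samples.
signal = [cmath.cos(2 * cmath.pi * t / 8) for t in range(8)]
spectrum = naive_dft(signal)

# All the energy lands in the +/- frequency pair, bins 1 and 7.
print([round(abs(c), 6) for c in spectrum])
```

A single cosine produces exactly two nonzero bins (the positive and negative frequency), which is the "component frequencies" picture the article describes.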
So, while complex numbers might seem like a mathematical curiosity, they are the unsung heroes in many technological advancements, bridging the gap between abstract theory and real-world applications. They make the impossible possible, showing that sometimes, the most extraordinary solutions come from embracing a bit of mathematical imagination.

The Role of Complex Numbers in Modern Science and Engineering

Imagine complex numbers as a supercharged version of our everyday numbers. They consist of two parts: a real number and an imaginary number. It might sound like something out of a fantasy novel, but these "imaginary" numbers have real-world applications that are nothing short of magical. In engineering, complex numbers are crucial for analyzing electrical circuits. Picture them as a high-tech toolkit that helps engineers understand how alternating current (AC) flows through circuits. Without complex numbers, predicting and managing the behavior of these currents would be like trying to read a book with half the pages missing. But the magic doesn't stop there. In modern science, complex numbers are used in quantum mechanics, the study of the tiniest particles in our universe. They're like the compass guiding scientists through the mysterious world of quantum states and probabilities. If you've ever marveled at how GPS systems pinpoint your exact location, complex numbers played a silent but powerful role in making that possible. Ever heard of signal processing? Complex numbers are again the unsung heroes here. They help transform signals into a form that's easier to analyze, making technologies like mobile phones and Wi-Fi work seamlessly. So, the next time you enjoy a crystal-clear video call, you can thank complex numbers for their behind-the-scenes magic. In essence, complex numbers are like the hidden gems of mathematics—discreet but absolutely vital. They're the secret sauce that makes a lot of today's technological wonders possible.
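The AC-circuit idea that runs through this article, one complex number carrying both magnitude and phase, can be sketched with Python's cmath module. The series resistor-inductor circuit and the component values below are hypothetical, chosen only for illustration:

```python
import cmath
import math

# Hypothetical series resistor-inductor circuit driven at 50 Hz.
R = 100.0                 # resistance, ohms
L = 0.5                   # inductance, henries
f = 50.0                  # supply frequency, hertz
V = 230.0                 # RMS source voltage, taken as the zero-phase reference

omega = 2 * math.pi * f
Z = R + 1j * omega * L    # impedance: magnitude and phase in one complex number
I = V / Z                 # Ohm's law carries over unchanged to phasors

mag, phase = cmath.polar(I)
print(f"|I| = {mag:.3f} A, phase = {math.degrees(phase):.1f} deg (current lags voltage)")
```

The payoff is exactly what the article claims: dividing two complex numbers simultaneously handles the amplitude ratio and the phase shift that would otherwise need separate trigonometric bookkeeping.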
A purely combinatorial proof of the Hadwiger Debrunner (p, q) conjecture

A family of sets has the (p, q) property if among any p members of the family some q have a nonempty intersection. The authors have proved that for every p ≥ q ≥ d + 1 there is a c = c(p, q, d) < ∞ such that for every family F of compact, convex sets in R^d which has the (p, q) property there is a set of at most c points in R^d that intersects each member of F, thus settling an old problem of Hadwiger and Debrunner. Here we present a purely combinatorial proof of this result.
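To make the (p, q) property concrete, here is a small brute-force check in one dimension, where the convex sets are closed intervals. The family and helper names below are my own toy construction for illustration, not anything from the paper:

```python
from itertools import combinations

def intervals_intersect(group):
    """Closed intervals [a, b] share a point iff the largest left
    endpoint does not exceed the smallest right endpoint."""
    return max(a for a, b in group) <= min(b for a, b in group)

def has_pq_property(family, p, q):
    """Among ANY p members of the family, SOME q must have a common point."""
    return all(
        any(intervals_intersect(sub) for sub in combinations(chosen, q))
        for chosen in combinations(family, p)
    )

family = [(0, 2), (1, 3), (2, 4), (5, 7), (6, 8)]
print(has_pq_property(family, p=3, q=2))   # True: any 3 intervals contain an intersecting pair
print(has_pq_property(family, p=2, q=2))   # False: (0, 2) and (5, 7) are disjoint
```

In R^1 the relevant Helly number is d + 1 = 2, matching the p ≥ q ≥ d + 1 condition in the abstract.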
In binary searching, the first thing to do is sorting, because binary search can only be performed on a sorted list. Recently I have written a blog post about Big O Notation. Searching is the process of finding or locating an element or number in a given list, and the performance of searching algorithms varies depending on sequence characteristics (distribution). Sorted lists are all around us: a dictionary is a sorted list of word definitions, and a telephone directory is also a sorted list of names, addresses and numbers.

Binary search is an algorithm of searching used with sorted data: it requires that the elements of the list be in sorted order, and it is more efficient than linear search. It finds an element in a sorted array in log N time, where N equals the number of items in the sequence. Begin with an interval covering the whole array; because we use the information that all the elements are sorted, each comparison lets us discard half of the remaining interval. If the middle element m is greater than the search element e, then e must be in the left subarray; if m is less than e, then e must be in the right subarray. If we have 1,000,000 elements in the array, log2(1,000,000) is about 20, i.e. at most about 20 steps are needed.

As a running example, take this sorted list, with indices 0 to 8 below the values:

12 15 15 19 24 31 53 59 60
 0  1  2  3  4  5  6  7  8

Step 1 - Read the search element from the user.
Step 2 - Find the middle element in the sorted list.
Step 3 - Compare the search element with the middle element in the sorted list.
Step 4 - If both are matched, then display "Given element is found!!!" and terminate the function.

A related structure is the binary search tree: in data structures, the binary search tree is a binary tree in which each node contains smaller values in its left subtree and larger values in its right subtree; that is, the data of all the nodes in the right subtree of the root node should be greater than the data of the root. The binary search tree is sometimes called BST for short.

Challenge #1: Write a Python program to search using binary search. As we have sorted elements in the array, the binary search method can be employed to find data in the array. Below I have written a function which accepts the following parameters: an array and a value I want to find.
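The halving argument can be checked directly: the worst-case number of probes for n sorted items is floor(log2 n) + 1, which in Python is just the bit length of n. This quick check is my addition, not part of the original post:

```python
def max_probes(n):
    """Worst-case binary-search probes over n sorted items: floor(log2 n) + 1."""
    return n.bit_length()

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} items -> at most {max_probes(n)} probes")
```

A thousand items need at most 10 probes, a million at most 20, a billion at most 30: doubling the data adds only one extra comparison.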
Binary search begins by comparing an element in the middle of the array with the target value.

Pseudocode:

    BINARY_SEARCH(A, lower_bound, upper_bound, VAL)
    Step 1: [INITIALIZE] SET BEG = lower_bound, END = upper_bound, POS = -1
    Step 2: Repeat Steps 3 and 4 while BEG <= END
    Step 3:     SET MID = (BEG + END)/2
    Step 4:     IF A[MID] = VAL
                    SET POS = MID
                    PRINT POS
                    Go to Step 6
                ELSE IF A[MID] > VAL
                    SET END = MID - 1
                ELSE
                    SET BEG = MID + 1
                [END OF IF]
            [END OF LOOP]
    Step 5: IF POS = -1
                PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
            [END OF IF]
    Step 6: EXIT

Binary search in Python (recursive):

    # Binary search in Python
    def binarySearch(array, x, low, high):
        if high >= low:
            mid = low + (high - low) // 2
            if array[mid] == x:
                # If found at mid, then return it
                return mid
            elif array[mid] > x:
                # Search the left half
                return binarySearch(array, x, low, mid - 1)
            else:
                # Search the right half
                return binarySearch(array, x, mid + 1, high)
        else:
            return -1

    array = [3, 4, 5, 6, 7, 8, 9]
    x = 4
    result = binarySearch(array, x, 0, len(array) - 1)
    if result != -1:
        print("Element found at index", result)
    else:
        print("Element not found")

Binary Search Example in Java using Arrays.binarySearch():

    import java.util.Arrays;

    class BinarySearchExample2 {
        public static void main(String args[]) {
            int arr[] = {10, 20, 30, 40, 50};
            int key = 30;
            int result = Arrays.binarySearch(arr, key);
            if (result < 0)
                System.out.println("Element is not found!");
            else
                System.out.println("Element is found at index: " + result);
        }
    }
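As a side note of mine (not from the original post), Python's standard library already ships a well-tested binary search in the bisect module; here it is on the sorted example list used earlier:

```python
import bisect

data = [12, 15, 15, 19, 24, 31, 53, 59, 60]

def bsearch(sorted_list, value):
    """Return the index of value in sorted_list, or -1 if it is absent."""
    i = bisect.bisect_left(sorted_list, value)
    if i < len(sorted_list) and sorted_list[i] == value:
        return i
    return -1

print(bsearch(data, 31))   # 5
print(bsearch(data, 30))   # -1
```

bisect_left also tells you where a missing value would be inserted to keep the list sorted, which the hand-written recursive version does not.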
Searching is the process of finding or locating an element or number in a given list. Binary search is a searching algorithm that finds the position of a target value within a sorted array. It begins by comparing the search element (read from the user) with the middle element of the array. If the target value matches the middle element, its position in the array is returned. If the target value is less than the middle element, the search continues in the left subarray; if it is greater, it must be in the right subarray. By repeatedly dividing the search interval in half like this, binary search finds an item in a sorted array in log N time - the original post traces an example of how the number of candidates is reduced for the value x = 31.

Because the search space is halved every time, at most log2 N steps are performed for a sorted list of N elements. If there are 1,000,000 elements in the array, log2 of 1,000,000 is about 20, so roughly 20 comparisons suffice. This makes binary search far more efficient than linear search; the performance of other searching algorithms (such as interpolation search) varies depending on sequence characteristics (distribution).

The precondition is that the list must be in sorted order. To improve the performance of searching, the first thing to do is sorting, because binary search can only be performed on sorted data. A telephone directory is a familiar example: it is a sorted list of names, addresses and numbers, so the binary search method can be employed to find data in it.

The binary search tree (sometimes called BST for short) applies the same idea to tree-shaped data: all the elements smaller than a node sit in its left subtree and all the larger ones in its right subtree, and a search starts at the root node (for example, a root with data = 10) and proceeds left or right after each comparison.

This is the third blog post in this series. In upcoming posts I will go over sorting algorithms; check out the earlier post about Big O Notation and the one about Time and Space Complexity, in which I provide an explanation of time and space complexity for each algorithm, along with examples and a step-by-step guide to implementing each.
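The log2 N bound mentioned above can be checked empirically with a small instrumented version of the search. This is my own sketch (an iterative variant that counts loop iterations), confirming that a million-element array never needs more than about 20 comparisons:

```python
def binary_search_steps(arr, x):
    """Iterative binary search that also counts how many comparisons it makes."""
    low, high, steps = 0, len(arr) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid, steps
        elif arr[mid] < x:
            low = mid + 1      # continue in the right half
        else:
            high = mid - 1     # continue in the left half
    return -1, steps

data = list(range(1_000_000))  # one million sorted elements
worst = max(binary_search_steps(data, x)[1] for x in (0, 314_159, 999_999))
# worst stays at or below floor(log2(1_000_000)) + 1 = 20 iterations
```

Even probing the extreme ends of the array, the loop runs at most 20 times, matching the log2(1,000,000) ≈ 20 estimate in the text.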
Calculating Thrust & Initial Acceleration of Saturn V
Thread starter: Amber_

In summary, the task involves calculating the thrust produced by the first stage of a Saturn V space vehicle, which consumes fuel at a rate of 47400 kg/s with an exhaust speed of 2820 m/s and experiences an acceleration of gravity of 9.8 m/s². The solution requires dividing the thrust by the initial mass of the vehicle and subtracting the acceleration due to gravity to find the initial acceleration of the vehicle on the launch pad, which is 2.3 m/s².

Homework Statement
The first stage of a Saturn V space vehicle consumes fuel at the rate of 47400 kg/s, with an exhaust speed of 2820 m/s. The acceleration of gravity is 9.8 m/s².
(Part 1 of 2) Calculate the thrust produced by these engines. Answer in units of N.
(Part 2 of 2, 10.0 points) Note: You must include the force of gravity. Find the initial acceleration of the vehicle on the launch pad if its initial mass is 1.1 × 10⁷ kg. Answer in units of m/s².

Homework Equations
thrust = v_rel × (fuel burn rate) = ma

The Attempt at a Solution
For the first part, I got 133668000 N, which is right. For the second part I divided the answer to the first part by the given mass. I got 12.51 m/s². Then to account for the acceleration of gravity, I subtracted 9.81 m/s² and got 2.3 m/s². However, that's not the correct answer. What am I doing wrong here? Can I not subtract accelerations like that? It seems to me I should be able to - they're vector quantities acting in opposite directions.

Replies: You need to formulate the expression using Newton's 2nd Law. You'd need to divide by the mass of the fuel to get its acceleration. ma = Thrust - mg, find 'a'.

FAQ: Calculating Thrust & Initial Acceleration of Saturn V

1. How is thrust calculated for the Saturn V rocket?
The thrust of the Saturn V rocket is calculated by multiplying the average exhaust velocity of the rocket's engines by the mass flow rate of the propellant being burned.

2. What is the initial acceleration of the Saturn V rocket?
The initial acceleration of the Saturn V rocket is approximately 9.8 m/s², which is the standard acceleration due to gravity on Earth. However, this value may vary slightly depending on the weight of the rocket and its payload. 3. How is the mass flow rate of the propellant determined? The mass flow rate of the propellant is determined by dividing the total weight of the propellant by the burn time of the engines. This information is typically provided by the rocket's manufacturer. 4. What factors affect the thrust and initial acceleration of the Saturn V rocket? The thrust and initial acceleration of the Saturn V rocket are affected by various factors such as the weight of the rocket and its payload, the efficiency of the engines, and the atmospheric conditions during launch. 5. Can the thrust and initial acceleration of the Saturn V rocket be modified? Yes, the thrust and initial acceleration of the Saturn V rocket can be modified by adjusting the design of the engines or by using different types of propellant with higher exhaust velocities.
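The arithmetic in the thread can be checked with a few lines of Python. This is my own sketch using the numbers from the problem statement (variable names are mine); with g = 9.8 m/s² as given, the net initial acceleration comes out to about 2.35 m/s²:

```python
# Numbers from the problem statement
fuel_rate = 47400.0   # kg/s, fuel consumption rate (dm/dt)
v_exhaust = 2820.0    # m/s, exhaust speed
m0 = 1.1e7            # kg, initial mass on the launch pad
g = 9.8               # m/s^2, acceleration of gravity as given

thrust = fuel_rate * v_exhaust   # thrust = v_exhaust * (dm/dt)
a0 = thrust / m0 - g             # Newton's 2nd law: m*a = thrust - m*g

print(f"Thrust = {thrust:.4g} N")                 # 1.337e+08 N
print(f"Initial acceleration = {a0:.2f} m/s^2")   # 2.35 m/s^2
```

This matches the 133668000 N from part 1 and suggests the 2.3 m/s² figure in the thread differs from 2.35 m/s² mainly through rounding and the use of 9.81 instead of the given 9.8.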
Collapse of the Wave Function

Why is it that more than half of the modern interpretations of quantum mechanics deny the "collapse of the wave function"? Why are so many serious physicists and philosophers of science so unhappy with this concept, which was a fundamental part of the "orthodox" theory proposed in the late 1920's by the "founders" of quantum mechanics - Werner Heisenberg, Niels Bohr, Max Born, Paul Dirac, Wolfgang Pauli, and Pascual Jordan? We can give the simplest answer in a single word - chance. Albert Einstein, the foremost scientist of all time (and ironically the discoverer of chance in quantum mechanics, which he disliked but never denied was a part of the quantum theory, as far as it could go in his time) adamantly disliked the idea of "uncertainty" or "indeterminism," the thought that some things in the universe were not caused (or only statistically caused). The idea of the wave function in quantum mechanics and its indeterministic collapse during a measurement is without doubt the most controversial problem in physics today. Of the several "interpretations" of quantum mechanics, more than half deny the collapse of the wave function. Some of these deny quantum jumps and even the existence of particles! So it is very important to understand the importance of what Dirac called the projection postulate in quantum mechanics.

The "collapse of the wave function" is also known as the "reduction of the wave packet." This describes the change from a system that can be seen as having many possible quantum states (Dirac's principle of superposition) to its randomly being found in only one of those possible states. Although the collapse is historically thought to be caused by a measurement, and thus dependent on the role of the observer in preparing the experiment, collapses can occur whenever quantum systems interact (e.g., collisions between particles) or even spontaneously (radioactive decay). The claim that an observer is needed to collapse the wave function has injected a severely anthropomorphic element into quantum theory, suggesting that nothing happens in the universe except when physicists are making measurements. An extreme example is Hugh Everett's Many Worlds theory, which says that the universe splits into two nearly identical universes whenever a measurement is made.

What is the Wave Function?

Perhaps the best illustration of the wave function is to show it passing through the famous slits in a two-slit experiment. It has been known for centuries that water waves passing through a small opening create circular waves radiating outward from that opening. If there are two openings, the waves from each opening interfere with those from the other, producing waves twice as tall at the crests (or deep in the troughs) and cancelling perfectly where a crest from one meets a trough from the other. When we send light waves through tiny slits, we see the same phenomenon. Most of the light that reaches light detectors at the back lands right behind the barrier between the slits. At some places, no light appears in the interference pattern. Today we know that light actually consists of large numbers of individual photons, quanta of light. Our experiment can turn down the amount of light so low that we know there is only a single photon, a single particle of light in the experiment at any time. What we see is the very slow accumulation of photons at the detectors, but with exactly the same interference pattern. And this leads to what Richard Feynman called not just "a mystery," but actually "the only mystery" in quantum mechanics. How can a single particle of light interfere with itself, without going through both slits? We can see what would happen if it went through only one slit by closing one or the other slit. We get a completely different interference pattern. Feynman was right. If you can comprehend, though perhaps not "understand," this highly non-intuitive phenomenon, one that is impossible in classical physics, you are well on your way to appreciating quantum mechanics.

The wave function in quantum mechanics is a solution to Erwin Schrödinger's famous wave equation that describes the evolution in time of his wave function ψ, i(h/2π) ∂ψ/∂t = Hψ. Max Born interpreted the wave function ψ(x) at a position x as telling us that the complex square of the wave function, ψ*(x)ψ(x) = |ψ(x)|², gives us the probability of finding a particle at that position. So the quantum wave going through the slits (and this probability amplitude wave ψ(x) does go through both slits) is an abstract number, neither material nor energy, just a probability. It is information about where particles of light (or particles of matter if we shoot electrons at the slits) will be found when we record them.

If we imagine a single particle being sent from a great distance away toward the two slits, the wave function that describes its "time evolution" or motion through space looks like a plane wave - the straight lines of the wave crests approaching the slits from below in the figure to the left. We have no information about the exact position of the particle. It could be anywhere. Einstein said that quantum mechanics is "incomplete" because the particle has no definite position before a measurement. He was right. When the particle lands on one of the detectors at the screen in back, we can represent it by the dot in the figure below.

[Animation of a wave function collapsing - click to restart]

The interfering probability amplitude waves disappear instantly everywhere once the particle is detected, but we have left a small fragment of interfering waves in the upper left corner to ask a question first raised by Einstein in 1905. What happens to the small but finite probability that the particle might have been found at the left side of the screen? How has that probability instantaneously (at faster than light speed) been collected into the unit probability at the dot? To be clear, when Einstein first asked this question, he thought of the light wave as energy spread out everywhere in the wave. So it was energy that he thought might be traveling faster than light, violating his brand new principle of relativity (published two months after his light quantum paper). At the Solvay conference in Brussels in 1927, twenty-two years after Einstein first tried to understand what is happening when the wave collapses, he noted:

"If |ψ|² were simply regarded as the probability that at a certain point a given particle is found at a given time, it could happen that the same elementary process produces an action in two or several places on the screen. But the interpretation, according to which |ψ|² expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen."

Einstein came to call this spukhafte Fernwirkung, "spooky action at a distance." It is known as nonlocality. Niels Bohr recalled Einstein's description. He drew Einstein's figure on a blackboard, but he did not understand what Einstein was saying:

"Einstein referred at one of the sessions to the simple example, illustrated by Fig. 1, of a particle (electron or photon) penetrating through a hole or a narrow slit in a diaphragm placed at some distance before a photographic plate. On account of the diffraction of the wave connected with the motion of the particle and indicated in the figure by the thin lines, it is under such conditions not possible to predict with certainty at what point the electron will arrive at the photographic plate, but only to calculate the probability that, in an experiment, the electron will be found within any given region of the plate. The apparent difficulty, in this description, which Einstein felt so acutely, is the fact that, if in the experiment the electron is recorded at one point A of the plate, then it is out of the question of ever observing an effect of this electron at another point (B), although the laws of ordinary wave propagation offer no room for a correlation between two such events."

Information Physics Explains the Two-Slit Experiment

Although we cannot say anything about the particle's whereabouts, we can say clearly that what goes through the two slits and interferes with itself is information. The wave function tells us the abstract probability of finding the particle somewhere. The idea of probability - or possibilities - "collapsing" is much easier to understand. When a die is rolled and the number 6 shows up, the possibilities of 1 through 5 disappear instantly. When the wave function collapses to unity in one place and zero elsewhere, nothing physical is moving from one place to the other. Consider a horse race. When the nose of one horse crosses the finish line, his probability of winning goes to certainty, and the finite probabilities of the other horses, including the one in the rear, instantaneously drop to zero. This happens faster than the speed of light, since the last horse is in a "space-like" separation from the first. Although horse races are not (normally) influenced by quantum mechanics, the idea of probability collapsing applies to both. The only difference is that in quantum mechanics, we are dealing with a complex probability amplitude that can interfere with itself. Note that probability, like information, is neither matter nor energy.

When a wave function "collapses" or "goes through both slits" in the dazzling two-slit experiment, nothing material is traveling faster than the speed of light or going through the slits. No messages or signals can be sent using this collapse of probability. Only the information has changed. This is similar to the Einstein-Podolsky-Rosen experiments, where measurement of one particle transmits nothing physical (matter or energy) to the other "entangled" particle. Instead instantaneous information has come into the universe at the new particle positions. That information, together with conservation of angular momentum, makes the state of the coherently entangled second particle certain, however far away it might be after the measurement. The standard "orthodox" interpretation of quantum mechanics includes the projection postulate. This is the idea that once one of the possibilities becomes actual at one position, the probabilities for actualization at all other positions become instantly zero. New information has appeared. The principle of superposition tells us that before a measurement, a system may be in any one of many possible states. In the two-slit experiment, this includes all the possible positions where |ψ(x)|² is not zero. Once the quantum system (the photon or electron) interacts with a specific detector at the screen, all other possibilities vanish. It is perhaps unfortunate that the word "collapse" was chosen, since it suggests some great physical motion. Just as in philosophy, where the language used can be the source of confusion, we find that thinking about the information involved clarifies the problem.
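The Born-rule picture of |ψ(x)|² as pure probability can be made concrete with a small numerical sketch. This is my own illustrative toy model, not from the article: the slit separation, wavelength and screen distance are arbitrary assumptions, and "collapse" is modeled simply as drawing one detection position at random from the normalized |ψ|² distribution.

```python
import cmath
import math
import random

def psi(x, d=1.0, wavelength=0.25, L=100.0):
    """Superposed amplitude at screen position x from two slits
    separated by d, a distance L from the screen (toy far-field model)."""
    k = 2 * math.pi / wavelength
    r1 = math.hypot(L, x - d / 2)   # path length from slit 1
    r2 = math.hypot(L, x + d / 2)   # path length from slit 2
    return cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)

xs = [i * 0.1 for i in range(-200, 201)]    # detector positions on the screen
probs = [abs(psi(x)) ** 2 for x in xs]      # Born rule: probability ~ |psi|^2
total = sum(probs)
probs = [p / total for p in probs]          # normalize to unit total probability

# Interference: some detector positions receive essentially zero probability.
# "Collapse" as probability actualization: one position becomes certain and
# every other probability instantly drops to zero - nothing material moves.
hit = random.choices(xs, weights=probs, k=1)[0]
```

Accumulating many such single draws reproduces the interference fringes photon by photon, just as the article describes for the dim-light single-photon experiment.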
Two-Qubit Separability Functions and Probabilities Organizing Committee: □ Masahito Hayashi (Chair), Tohoku University □ Hiroshi Imai, NII □ Gen Kimura, AIST Matthias Christandl (Ludwig-Maximilians-University, Germany) Title: Post-selection technique for quantum channels with applications to quantum cryptography Abstract: We propose a general method for studying properties of quantum channels acting on an n-partite system, whose action is invariant under permutations of the subsystems. Our main result is that, in order to prove that a certain property holds for any arbitrary input, it is sufficient to consider the special case where the input is a particular de Finetti-type state, i.e., a state which consists of n identical and independent copies of an (unknown) state on a single subsystem. A similar statement holds for more general channels which are covariant with respect to the action of an arbitrary finite or locally compact group. Our technique can be applied to the analysis of information-theoretic problems. For example, in quantum cryptography, we get a simple proof for the fact that security of a discrete-variable quantum key distribution protocol against collective attacks implies security of the protocol against the most general attacks. The resulting security bounds are tighter than previously known bounds obtained by proofs relying on the exponential de Finetti theorem [Renner, Nature Physics 3,645(2007)]. This is joint work with Robert Koenig and Renato Renner. Related manuscript: http://arxiv.org/abs/0809.3019v1 Tokishiro Karasawa (NII) Title: Practicable verifications of performance for mode-qubit systems Abstract: We consider how to experimentally characterize a mode-qubit system used in quantum repeaters and quantum key distributions with the optical continuous variable. In this talk, we will discuss on a state tomography, performance of a quantum gate, entanglement verifications, and their practical issues for this system. 
In particular, we will focus on the state tomography using a Wigner function on this system, which can be experimentally given by local measurements with the homodyne detection. Toyohiro Tsurumaru (Mitsubishi Electric) Title: curity proof for QKD systems with threshold detectors Abstract: In this presentation, we rigorously prove the intuition that in security proofs for BB84 one may regard an incoming signal to Bob as a qubit state. From this result, it follows that all security proofs for BB84 based on a virtual qubit entanglement distillation protocol, which was originally proposed by Lo and Chau [H.-K. Lo and H. F. Chau, Science 283, 2050 (1999)], and Shor and Preskill [P. W. Shor and J. Preskill, Phys. Rev. Lett. 85, 441 (2000)], are all valid even if Bob's actual apparatus cannot distill a qubit state explicitly. As a consequence, especially, the well-known result that a higher bit error rate of 20% can be tolerated for BB84 protocol by using two-way classical communications is still valid even when Bob uses threshold detectors. Using the same technique, we also prove the security of Bennett-Brassard-Mermin 1992 (BBM92) protocol where Alice and Bob both use threshold detectors. Shun Watanabe (Tokyo Institute of Technology) Title: Tomography increases key rates of quantum-key-distribution protocols Abstract: We construct a practically implementable classical processing for the BB84 protocol and the six-state protocol that fully utilizes the accurate channel estimation method, which is also known as the quantum tomography. Our proposed processing yields at least as high key rate as the standard processing by Shor and Preskill. We show two examples of quantum channels over which the key rate of our proposed processing is strictly higher than the standard processing. In the second example, the BB84 protocol with our proposed processing yields a positive key rate even though the so-called error rate is higher than the 25% limit. 
Related manuscript: http://jp.arxiv.org/abs/0802.2419 Nilanjana Datta (Cambridge University, UK) Title: Relative entropies and entanglement monotones Abstract: We introduce two new relative entropy quantities, called the min- and max-relative entropies. The well-known min- and max-entropies, introduced by Renner, are obtained from these. This leads us to define a new entanglement monotone, which we refer to as the max-relative entropy of entanglement. Its properties are investigated and its operational significance as the one-shot perfect catalytic entanglement dilution rate is discussed. We also generalize the min- and max-relative entropies to obtain smooth min- and max-relative entropies. These act as parent quantities for the smooth Renyi entropies, which are known to correspond to one-shot rates of various protocols of Quantum Information Theory. Further, these allow us to define smooth min- and max-relative entropies of entanglement, which we can relate to the regularised relative entropy of entanglement in the asymptotic limit. Related manuscripts: arXiv:0807.2536 : Max-Relative Entropy of Entanglement, alias Log Robustness, arXiv:0803.2770 : Min- and Max-Relative Entropies and a New Entanglement Measure and, to some extent, the following paper written jointly with Renato Renner. arXiv:0801.0282 : Smooth Renyi Entropies and the Quantum Information Spectrum Igor Bjelakovic (TU-Berlin, Germany) Title: Approximate quantum error correction under channel uncertainty Abstract: Compound channels, either classical, classical-quantum (cq) or purely quantum, are among the simplest non-trivial models for communication via an unknown channel. In this scenario the transmitter and receiver do not know the channel they are communicating over. They are merely provided with prior information that the actual channel belongs to a given set of memoryless channels. In this talk we will present 1. capacity results on compound cq-channels, 2.
coding theorem for finite compound quantum channels based on a channel estimation technique introduced by Datta and Dorlas, and 3. work in progress on general compound quantum channels. This is joint work with Holger Boche and Janis Noetzel. Related manuscripts: http://arxiv.org/abs/0710.3027, http://arxiv.org/abs/0808.1007. Masahito Hayashi (Tohoku University) Title: Universal protocols in quantum information Abstract: We have constructed universal codes for quantum lossless source coding and classical-quantum channel coding. In this construction, we essentially employ group representation theory. In order to treat quantum lossless source coding, universal approximation of multi-copy states is discussed in terms of the quantum relative entropy. Related manuscripts: http://arxiv.org/abs/0806.1091, http://arxiv.org/abs/0805.4092 Arleta Szkola (Max Planck Institute for Mathematics, Germany) Title: Operational meaning of selected entropy-like quantities for finite-dimensional quantum systems Abstract: We review the quantum extensions of entropy-like quantities like von Neumann entropy, relative entropy, Chernoff distance or Hoeffding bound and discuss their operational meaning in the context of quantum hypothesis testing and/or quantum data compression. Andreas Winter (Bristol University, UK) Title: Distinguishability of quantum states under restricted families of measurements Abstract: We investigate distinguishability of pairs of states on a quantum system under restricted classes of measurements. Any such restriction leads to a norm on trace class operators, in generalisation of the trace norm (which is recovered if all POVMs are allowed, thanks to Helstrom's classic result). We analyse properties of these norms for various classes of POVMs, in particular the constants of domination w.r.t. the trace norm: single POVMs (especially 2-, 4-, and $\infty$-designs) and POVMs obeying a locality restriction.
The latter investigation is strongly related to the subject of "data hiding" [Terhal et al.], and we show that the performance of the originally proposed data hiding states is essentially optimal with regard to guessing probability. This is joint work with William Matthews and Stephanie Wehner. Related manuscript: http://arxiv.org/abs/0810.2327 Gen Kimura (AIST) Title: On Generic Probability Models --- classicality and optimal state discrimination Abstract: We investigate generic probability models in which probability plays a central role, including both classical and quantum theory. Two topics on the theory will be presented. First, we investigate a hidden variable problem in generic probability models, and show that a general theory is essentially classical if and only if it is embeddable in a large classical theory. Second, a state discrimination problem in generic probability models is investigated. A family of ensembles is introduced which provides a geometrical method to find an optimal measurement for state discrimination. We illustrate the method in 2-level quantum systems, and reproduce the Helstrom bound for binary state discrimination and symmetric quantum states. Finally, the existence of the family is shown both in quantum and classical theory. This work was done in collaboration with Takayuki Miyadera and Hideki Imai. Milan Mosonyi (Tohoku University) Title: Generalized relative entropies and the capacity of classical-quantum channels (joint work with N. Datta) Abstract: By the Holevo-Schumacher-Westmoreland theorem, the asymptotic information transmission capacity of a classical-quantum channel is equal to its Holevo capacity. By a result of Ohya, Petz and Watanabe, this in turn coincides with the divergence radius of the range of the channel, giving the capacity a geometric interpretation.
In real-world applications, however, one has only finite resources, and hence it is important to have a good understanding of finite (i.e., non-asymptotic) performance measures, like the single-shot capacity of a channel. Here we define generalizations of the Holevo capacity and the divergence radius using Hoeffding distances and the max-relative entropy, respectively, and show that bounds on the single-shot capacity of a channel can be obtained in terms of these quantities. These results contribute to a better understanding of the operational significance of generalized relative entropies, and also give additional insight into the deep relation between hypothesis testing and channel coding problems. Related manuscript: http://arxiv.org/abs/0810.3478. Jonas Kahn (Centre National de la Recherche Scientifique, France) Title: Quantum local asymptotic normality and asymptotically optimal estimation procedure for d-dimensional states Abstract: In classical statistics, the theory of local asymptotic normality allows one to treat experiments with n independent identically distributed samples as if we were sampling only once from a normal law whose unknown parameter is its mean. Similarly, we show asymptotic strong equivalence (that is, using embeddings) between experiments where we are given n copies of a qudit and one copy of a (multi-dimensional) Gaussian state. We can then transpose results of one experiment to the other. In particular, from the optimal estimation procedure for Gaussian states we obtain an asymptotically optimal estimation procedure for n copies of a qudit. Hiroshi Imai (NII) Title: Fourier Analytic Method in Phase Estimation Problem Abstract: For a unified analysis of phase estimation, we focus on the limiting distribution. It is shown that the limiting distribution can be analyzed by treating the square of the Fourier transform of an L^2 function whose support belongs to [-1,1].
Using this relation, we study the relation between the variance of the limiting distribution and its tail probability. As our result, we prove that the protocol minimizing the asymptotic variance does not minimize the tail probability. Depending on the width of the interval, we derive the estimation protocol minimizing the tail probability outside a given interval. Such an optimal protocol is given by a prolate spheroidal wave function, which often appears in wavelet or time-limited Fourier analysis. Also, the minimum confidence interval is derived within the framework of interval estimation that assures a given confidence coefficient. This is joint work with Masahito Hayashi. Related manuscript: http://arxiv.org/abs/0810.5602 Takayuki Miyadera (AIST) Title: Uncertainty relation between arbitrary POVMs Abstract: Heisenberg's uncertainty principle is often considered one of the most important features of quantum theory. In every textbook on quantum theory one can find its explanation proposed by Heisenberg himself and its ``derivation" by Robertson. However, as recently claimed by several researchers, the above explanation and the derivation have a certain gap between them. On the one hand, Heisenberg is concerned with a simultaneous measurement of position and momentum; on the other hand, Robertson's formulation treats two distinct (parallel) measurements. In this talk I will show our recent results on both formulations. For the former, Heisenberg's original formulation, a limitation on simultaneous measurement of two arbitrary positive operator valued measures will be discussed. As a byproduct, a necessary condition for two positive operator valued measures to be simultaneously measurable is obtained. For the latter, Robertson's formulation, I will show a generalization of the Landau-Pollak type uncertainty relation. If possible, I would like to show some applications of our results to the security proof of quantum key distribution.
Related manuscript: http://arxiv.org/abs/0809.1714 Go Kato (NTT) Title: Quantum cloning of qubits with orthogonal states as hints Abstract: A universal cloning machine is derived. The machine produces $c$ clones of an unknown qubit from $s$ identical replicas of the qubit and $k$ identical replicas of its orthogonal qubit. For the standard cloning machine, i.e., k=0, the universal NOT machine, s=0, and some other cases, the optimum machine is well known. The universal cloning machine derived in this paper gives clones whose fidelity is the same value as that for the optimum machine. With some other numerical calculations, we extrapolate that the cloning machine we derived is the optimum machine. Peter Turner (Univ. Tokyo) Title: Continuous variable two designs Abstract: I will discuss our ongoing attempts to construct Symmetric Informationally Complete Positive Operator Valued Measures, or (minimal) two designs, out of Gaussian states. This poses difficulties both in principle, such as how to introduce a measure on the noncompact group of symplectic transformations, as well as in practice, such as how to truncate the space of states in an experimentally useful way. Joint work with Robin Blume-Kohout. Jun Suzuki (NII) Title: Symmetric construction of reference-frame-free qudits Abstract: By exploiting a symmetric scheme for coupling N spin-1/2 constituents (the physical qubits) to states with total angular momentum N/2 - 1, we construct rotationally invariant logical qudits of dimension d = N - 1. One can encode all qudit states, and realize all qudit measurements, by this construction. The rotational invariance of all relevant objects enables one to transmit quantum information without having aligned reference frames between the parties that exchange the qudits.
We illustrate the method by explicit constructions of reference-frame-free qubits and Related manuscript: http://arxiv.org/abs/0802.1609 Aram Harrow (Bristol University, UK) Title: Pseudo-random quantum states and operations Abstract: The idea of pseudo-randomness is to use little or no randomness to simulate a random object such as a random number, permutation, graph, quantum state, etc... The simulation should then have some superficial resemblance to a truly random object; for example, the first few moments of a random variable should be nearly the same. This concept has been enormously useful in classical computer science. In my talk, I'll review some quantum analogues of pseudo-randomness: unitary k-designs, quantum expanders (and their new cousin, quantum tensor product expanders), extractors. I'll talk about relations between them, efficient constructions, and possible applications. Some of the material is joint work with Matt Hastings and Richard Low. Related manuscript: http://arxiv.org/abs/0804.0011 Ryo Namiki (Kyoto Univ.) Title: Verification of quantum-domain process using two non-orthogonal states Abstract: If a quantum channel or process cannot be described by any measure-and-prepare scheme, we may say the channel is in quantum domain (QD) since it can transmit quantum correlations. The concept of QD clarifies the role of quantum channel in quantum information theory based on the local-operation-and-classical-communication (LOCC) paradigm: The quantum channel is only useful if it cannot be simulated by LOCC. We construct a simple scheme to verify that a given physical process or channel is in QD by using two non-orthogonal states. We also consider the application for the experiments such as the transmission or storage of quantum optical coherent states, single-photon polarization states, and squeezed vacuum states. Related manuscript: http://arXiv.org/abs/0807.0046 (Phys. Rev. A 78, 032333 (2008) ) Koji Azuma (Osaka Univ.) 
Title: Quantum catalysis of information and its implications Abstract: About 80 years ago, Heisenberg introduced the groundbreaking notion that measurement on a quantum system inevitably disturbs its state, in contrast to classical systems. This suggests that our accessibility to the information contained in a quantum system is severely limited if we are to keep its state unchanged, namely if we are to use it as a ‘catalyst’ of information. In fact, the no-cloning theorem and subsequent no-go theorems, as well as the no-deleting theorem, have corroborated the idea that we can never access quantum information without causing disturbance. Here, however, we deny this presumption by exhibiting a novel process, ‘quantum catalysis of information.’ In this process, a system (catalyst) helps to make a clone or to delete information without changing its state. Nevertheless, it requires interaction strong enough to enable transmission of a qubit in an arbitrary state, or to inevitably consume one ebit of entanglement if it is implemented by local operation and classical communication. This urges us to interpret that the information exchanged by the catalyst is not classical but quantum. Our result suggests that the boundary between the classical and the quantum world will not be determined on the basis of the fragility of physical states, as one would expect from the current no-go theorems. This talk is based on a paper [quant-ph/0804.2426] entitled ``Quantum catalysis of information'' by KA, Masato Koashi, and Nobuyuki Imoto. Related manuscript: http://arxiv.org/abs/0804.2426 Kae Nemoto (NII) Title: Scalable architecture and Qubus computation Abstract: Quantum computation requires two types of operations: single-qubit manipulation and two-qubit operation. However, it is very difficult to realize these two types of operations while keeping quantum coherence at the same time.
Qubus computation (quantum computation via communication) was recently introduced to address these fundamental issues in the implementation of scalable quantum information processing. We explain the concept of qubus computation and show some new developments and applications. We discuss the advantages and disadvantages of qubus computation towards scalable quantum information processing. Satoshi Ishizaka (NEC) Title: Retrieving quantum operation from quantum state Abstract: It is well known that completely positive (CP) maps are associated with quantum states via the Choi-Jamiolkowski isomorphism. The associated state is easily obtained from a CP map if we apply the CP map to the half of a maximally entangled state. In this presentation, we consider the converse process: how to retrieve a CP map from the associated state. We propose and discuss a teleportation scheme to achieve the converse process in an asymptotic way. Paul Slater (University of California, USA) Title: Two-Qubit Separability Functions and Probabilities Abstract: We describe our efforts over the past several years to determine the probability that a generic two-qubit state is separable, in terms of various metrics of widespread interest, in particular, the Hilbert-Schmidt and Bures metrics. A useful concept, in this regard, is that of a separability function. In the two-qubit context, this can be a three-dimensional function of either the eigenvalues or the diagonal entries of the corresponding 4 x 4 density matrix. We have investigated the possibility that these separability functions can be exactly re-expressed as one-dimensional or univariate functions. This has, then, led us--making use of the random matrix theoretical concept of "Dyson indices"--to conjecture that the Hilbert-Schmidt separability probability for the 15-dimensional convex set of (complex) two-qubit states is 8/33, and for the 9-dimensional convex set of real two-qubit states, 8/17.
The Bures case, substantially different in nature apparently, remains under active investigation. Related manuscripts: http://arxiv.org/abs/0806.3294, http://arxiv.org/abs/0805.0267, http://arxiv.org/abs/0802.0197, http://arxiv.org/abs/0704.3723, http://arxiv.org/abs/quant-ph/0609006.
Images from the Aperiodic Time J. J. García Escudero Departamento de Física Universidad de Oviedo 33007 Oviedo, Spain A diffraction image in solid state physics is a measure of the order inside a distribution of points (masses). The image is related to the Fourier transform of the distribution. Some 1D aperiodic distributions have discrete Fourier components. We use these models in order to structure time in a non-periodic way with a discrete frequency spectrum. The Fibonacci sequence illustrates this
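As a minimal sketch of the idea the abstract describes, one can generate a Fibonacci word by the standard substitution L → LS, S → L and read it as an aperiodic sequence of time intervals. This code is illustrative, not from the paper; the function names and the choice of interval lengths (golden ratio vs. 1) are our own assumptions.

```python
# Illustrative sketch: a 1D aperiodic distribution of event times built
# from the Fibonacci substitution L -> LS, S -> L. The resulting point
# set is aperiodic yet has a discrete Fourier (frequency) spectrum.

GOLDEN = (1 + 5 ** 0.5) / 2  # assumed ratio between the two interval lengths

def fibonacci_word(n_iterations):
    """Apply the substitution L -> LS, S -> L starting from 'L'."""
    word = "L"
    for _ in range(n_iterations):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

def event_times(word, long=GOLDEN, short=1.0):
    """Cumulative onset times: each letter contributes one interval."""
    times, t = [0.0], 0.0
    for c in word:
        t += long if c == "L" else short
        times.append(t)
    return times

word = fibonacci_word(6)
onsets = event_times(word)
# Successive word lengths follow the Fibonacci numbers: 1, 2, 3, 5, 8, 13, 21, ...
```

The onset list could then be used to place musical events non-periodically, with the word lengths growing as Fibonacci numbers at each substitution step.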
Data containers¶ class mchammer.DataContainer(structure, ensemble_parameters, metadata={})[source]¶ Data container for storing information concerned with Monte Carlo simulations performed with mchammer. ☆ structure (Atoms) – Reference atomic structure associated with the data container. ☆ ensemble_parameters (dict) – Parameters associated with the underlying ensemble. ☆ metadata (dict) – Metadata associated with the data container. analyze_data(tag, start=None, max_lag=None)[source]¶ Returns detailed analysis of a scalar observable. ○ tag (str) – Tag of field over which to average. ○ start (Optional[int]) – Minimum value of trial step to consider. By default the smallest value in the mctrial column will be used. ○ max_lag (Optional[int]) – Maximum lag between two points in data series. By default the largest length of the data series will be used. Used for computing autocorrelation. ○ ValueError – If observable is requested that is not in data container. ○ ValueError – If observable is not scalar. ○ ValueError – If observations are not evenly spaced. Calculated properties of the data including mean, standard_deviation, correlation_length and error_estimate (95% confidence). Return type: append(mctrial, record)¶ Appends data to data container. ○ mctrial (int) – Current Monte Carlo trial step. ○ record (Dict[str, Union[int, float, list]]) – Dictionary of tag-value pairs representing observations. TypeError – If input parameters have the wrong type. Adds observer data from observer to data container. The observer will only be run for the mctrials for which the trajectory has been saved. The interval of the observer is ignored. observer (BaseObserver) – Observer to be used. Return type: property data: DataFrame¶ Data as pandas.DataFrame. property ensemble_parameters: dict¶ Parameters associated with Monte Carlo simulation. get(*tags, start=0)¶ Returns the accumulated data for the requested observables, including configurations stored in the data container.
The latter can be achieved by including 'trajectory' as one of the tags. ○ tags (str) – Names of the requested properties. ○ start (int) – Minimum value of trial step to consider. By default the smallest value in the mctrial column will be used. ○ ValueError – If tags is empty. ○ ValueError – If observables are requested that are not in data container. Return type: Union[ndarray, List[Atoms], Tuple[ndarray, List[Atoms]]] Below the get() method is illustrated but first we require a data container. >>> from ase.build import bulk >>> from icet import ClusterExpansion, ClusterSpace >>> from mchammer.calculators import ClusterExpansionCalculator >>> from mchammer.ensembles import CanonicalEnsemble >>> # prepare cluster expansion >>> prim = bulk('Au') >>> cs = ClusterSpace(prim, cutoffs=[4.3], chemical_symbols=['Ag', 'Au']) >>> ce = ClusterExpansion(cs, [0, 0, 0.1, -0.02]) >>> # prepare initial configuration >>> structure = prim.repeat(3) >>> for k in range(5): ... structure[k].symbol = 'Ag' >>> # set up and run MC simulation >>> calc = ClusterExpansionCalculator(structure, ce) >>> mc = CanonicalEnsemble(structure=structure, calculator=calc, ... temperature=600, ... dc_filename='myrun_canonical.dc') >>> mc.run(100) # carry out 100 trial swaps We can now access the data container by reading it from file by using the read() method. For the purpose of this example, however, we access the data container associated with the ensemble >>> dc = mc.data_container The following lines illustrate how to use the get() method for extracting data from the data container. 
>>> # obtain all values of the potential represented by >>> # the cluster expansion along the trajectory >>> p = dc.get('potential') >>> import matplotlib.pyplot as plt >>> # as above but this time the MC trial step is included as well >>> s, p = dc.get('mctrial', 'potential') >>> _ = plt.plot(s, p) >>> plt.show(block=False) >>> # obtain configurations along the trajectory along with >>> # their potential >>> p, confs = dc.get('potential', 'trajectory') get_average(tag, start=None)[source]¶ Returns average of a scalar observable. ○ tag (str) – Tag of field over which to average. ○ start (Optional[int]) – Minimum value of trial step to consider. By default the smallest value in the mctrial column will be used. ○ ValueError – If observable is requested that is not in data container. ○ ValueError – If observable is not scalar. Return type: get_trajectory(*args, **kwargs)¶ Returns trajectory as a list of ASE Atoms objects. Return type: property metadata: dict¶ Metadata associated with data container. property observables: List[str]¶ Observable names. classmethod read(infile, old_format=False)¶ Reads data container from file. ○ infile (Union[str, BinaryIO, TextIO]) – File from which to read. ○ old_format (bool) – If True use old json format to read runtime data. ○ FileNotFoundError – If infile is not found. ○ ValueError – If file is of incorrect type (not a tarball). Writes data container to file. outfile (Union[bytes, str]) – File to which to write. class mchammer.WangLandauDataContainer(structure, ensemble_parameters, metadata={})[source]¶ Data container for storing information concerned with Wang-Landau simulation performed with mchammer. ☆ structure (Atoms) – Reference atomic structure associated with the data container. ☆ ensemble_parameters (dict) – Parameters associated with the underlying ensemble. ☆ metadata (dict) – Metadata associated with the data container. append(mctrial, record)¶ Appends data to data container. 
○ mctrial (int) – Current Monte Carlo trial step. ○ record (Dict[str, Union[int, float, list]]) – Dictionary of tag-value pairs representing observations. TypeError – If input parameters have the wrong type. Adds observer data from observer to data container. The observer will only be run for the mctrials for which the trajectory has been saved. The interval of the observer is ignored. observer (BaseObserver) – Observer to be used. Return type: property data: DataFrame¶ Data as pandas.DataFrame. property ensemble_parameters: dict¶ Parameters associated with Monte Carlo simulation. property fill_factor: float¶ Final value of the fill factor in the Wang-Landau algorithm. property fill_factor_history: DataFrame¶ Evolution of the fill factor in the Wang-Landau algorithm. get(*tags, fill_factor_limit=None)[source]¶ Returns the accumulated data for the requested observables, including configurations stored in the data container. The latter can be achieved by including 'trajectory' as one of the tags. ○ tags (str) – Names of the requested properties. ○ fill_factor_limit (Optional[float]) – Return data recorded up to the point when the specified fill factor limit was reached, or None if the entropy history is empty or the last fill factor is above the limit; otherwise return all data. ○ ValueError – If tags is empty. ○ ValueError – If observables are requested that are not in the data container. Return type: Union[ndarray, List[Atoms], Tuple[ndarray, List[Atoms]]] Below the get() method is illustrated but first we require a data container.
>>> from ase import Atoms >>> from icet import ClusterExpansion, ClusterSpace >>> from mchammer.calculators import ClusterExpansionCalculator >>> from mchammer.ensembles import WangLandauEnsemble >>> # prepare cluster expansion >>> prim = Atoms('Au', positions=[[0, 0, 0]], cell=[1, 1, 10], pbc=True) >>> cs = ClusterSpace(prim, cutoffs=[1.1], chemical_symbols=['Ag', 'Au']) >>> ce = ClusterExpansion(cs, [0, 0, 2]) >>> # prepare initial configuration >>> structure = prim.repeat((4, 4, 1)) >>> for k in range(8): ... structure[k].symbol = 'Ag' >>> # set up and run Wang-Landau simulation >>> calculator = ClusterExpansionCalculator(structure, ce) >>> mc = WangLandauEnsemble(structure=structure, ... calculator=calculator, ... energy_spacing=1, ... dc_filename='ising_2d_run.dc', ... fill_factor_limit=0.3) >>> mc.run(number_of_trial_steps=len(structure)*3000) # in practice one requires more steps We can now access the data container by reading it from file by using the read() method. For the purpose of this example, however, we access the data container associated with the ensemble >>> dc = mc.data_container The following lines illustrate how to use the get() method for extracting data from the data container. >>> # obtain all values of the potential represented by >>> # the cluster expansion and the MC trial step along the >>> # trajectory >>> import matplotlib.pyplot as plt >>> s, p = dc.get('mctrial', 'potential') >>> _ = plt.plot(s, p) >>> # as above but this time only included data recorded up to >>> # the point when the fill factor reached below 0.6 >>> s, p = dc.get('mctrial', 'potential', fill_factor_limit=0.6) >>> _ = plt.plot(s, p) >>> plt.show(block=False) >>> # obtain configurations along the trajectory along with >>> # their potential >>> p, confs = dc.get('potential', 'trajectory') Returns the (relative) entropy from this data container accumulated during a Wang-Landau simulation. Returns None if the data container does not contain the required information. 
fill_factor_limit (Optional[float]) – Return the entropy recorded up to the point when the specified fill factor limit was reached, or None if the entropy history is empty or the last fill factor is above the limit. Otherwise return the entropy for the last state. Return type: Returns the histogram from this data container accumulated since the last update of the fill factor. Returns None if the data container does not contain the required information. Return type: get_trajectory(*args, **kwargs)¶ Returns trajectory as a list of ASE Atoms objects. Return type: property metadata: dict¶ Metadata associated with data container. property observables: List[str]¶ Observable names. classmethod read(infile, old_format=False)[source]¶ Reads data container from file. ○ infile (Union[str, BinaryIO, TextIO]) – file from which to read ○ old_format (bool) – If true use old json format to read runtime data; default to false ○ FileNotFoundError – if file is not found (str) ○ ValueError – if file is of incorrect type (not a tarball) Writes data container to file. outfile (Union[bytes, str]) – File to which to write.
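For intuition about the quantities that analyze_data() is documented to return above (mean, standard_deviation, correlation_length and a 95% error_estimate), here is a rough, self-contained NumPy sketch of how such statistics can be computed from a scalar Monte Carlo series. This is not mchammer's actual implementation; the exp(-1) autocorrelation threshold and the 1.96-sigma confidence factor are assumptions made for illustration.

```python
import numpy as np

def analyze_scalar_series(data, max_lag=None):
    """Illustrative sketch of analyze_data()-style statistics for a
    scalar series. Assumes a non-constant, evenly spaced series."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    if max_lag is None:
        max_lag = n - 1
    mean = data.mean()
    std = data.std()
    # normalized autocorrelation function up to max_lag
    fluct = data - mean
    acf = np.array([np.mean(fluct[:n - k] * fluct[k:]) for k in range(max_lag)])
    acf /= acf[0]
    # correlation length: first lag at which the ACF drops below exp(-1)
    below = np.where(acf < np.exp(-1.0))[0]
    correlation_length = int(below[0]) if below.size else max_lag
    # 95% confidence error on the mean, discounting correlated samples
    n_independent = n / correlation_length
    error_estimate = 1.96 * std / np.sqrt(n_independent)
    return dict(mean=mean,
                standard_deviation=std,
                correlation_length=correlation_length,
                error_estimate=error_estimate)
```

For uncorrelated data the correlation length comes out as one trial step and the error estimate reduces to the familiar 1.96·σ/√n; correlated series yield a longer correlation length and correspondingly larger error bars.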
Standard - 2.G.3: Partition circles and rectangles into two, three, or four equal shares, describe the shares using the words halves, thirds, half of, a third of, etc., and describe the whole as two halves, three thirds, four fourths. Recognize that equal shares of identical wholes need not have the same shape.
Proof mode¶ Proof mode is used to prove theorems. Coq enters proof mode when you begin a proof, such as with the Theorem command. It exits proof mode when you complete a proof, such as with the Qed command. Tactics, which are available only in proof mode, incrementally transform incomplete proofs to eventually generate a complete proof. When you run Coq interactively, such as through CoqIDE, Proof General or coqtop, Coq shows the current proof state (the incomplete proof) as you enter tactics. This information isn't shown when you run Coq in batch mode with coqc. Proof State¶ The proof state consists of one or more unproven goals. Each goal has a conclusion (the statement that is to be proven) and a local context, which contains named hypotheses (which are propositions), variables and local definitions that can be used in proving the conclusion. The proof may also use constants from the global environment such as definitions and proven theorems. (Note that conclusion is also used to refer to the last part of an implication. For example, in A -> B -> C, A and B are premises and C is the conclusion.) The term "goal" may refer to an entire goal or to the conclusion of a goal, depending on the context.
The conclusion appears below a line and the local context appears above the line. The conclusion is a type. Each item in the local context begins with a name and ends, after a colon, with an associated type. Local definitions are shown in the form n := 0 : nat, for example, in which nat is the type of 0. The local context of a goal contains items specific to the goal as well as section-local variables and hypotheses (see Assumptions) defined in the current section. The latter are included in the initial proof state. Items in the local context are ordered; an item can only refer to items that appear before it. (A more mathematical description of the local context is here.) The global environment has definitions and proven theorems that are global in scope. (A more mathematical description of the global environment is here.) When you begin proving a theorem, the proof state shows the statement of the theorem below the line and often nothing in the local context: Parameter P: nat -> Prop. P is declared Goal forall n m: nat, n > m -> P 1 /\ P 2. 1 goal ============================ forall n m : nat, n > m -> P 1 /\ P 2 After applying the intros tactic, we see hypotheses above the line. The names of variables (n and m) and hypotheses (H) appear before a colon, followed by their type. The type doesn't have to be a provable statement. For example, 0 = 1 and False are both valid and useful types. 1 goal n, m : nat H : n > m ============================ P 1 /\ P 2 Some tactics, such as split, create new goals, which may be referred to as subgoals for clarity. Goals are numbered from 1 to N at each step of the proof to permit applying a tactic to specific goals. The local context is only shown for the first goal. 2 goals n, m : nat H : n > m ============================ P 1 goal 2 is: P 2 "Variables" may refer specifically to local context items introduced from forall variables for which the type of their type is Set or Type. 
"Hypotheses" refers to items that are propositions, for which the type of their type is Prop or SProp, but these terms are also used interchangeably. let t_n := type of n in idtac "type of n :" t_n; let tt_n := type of t_n in idtac "type of" t_n ":" tt_n. type of n : nat type of nat : Set let t_H := type of H in idtac "type of H :" t_H; let tt_H := type of t_H in idtac "type of" t_H ":" tt_H. type of H : (n > m) type of (n > m) : Prop A proof script, consisting of the tactics that are applied to prove a theorem, is often informally referred to as a "proof". The real proof, whether complete or incomplete, is the associated term, the proof term, which users may occasionally want to examine. (This is based on the Curry-Howard isomorphism [How80][Bar81][GLT89][Hue89], which is a correspondence between between proofs and terms and between propositions and types of λ-calculus. The isomorphism is also sometimes called the "propositions-as-types correspondence".) The Show Proof command displays the incomplete proof term before you've completed the proof. For example, here's the proof term after using the split tactic above: Show Proof. (fun (n m : nat) (H : n > m) => conj ?Goal ?Goal0) The incomplete parts, the goals, are represented by existential variables with names that begin with ?Goal. (Note that some existential variables are not goals.) The Show Existentials command shows each existential with the hypotheses and conclusion for the associated goal. Show Existentials. Existential 1 = ?Goal : [n : nat m : nat H : n > m |- P 1] Existential 2 = ?Goal0 : [n : nat m : nat H : n > m |- P 2] Users can control which goals are displayed in the context by focusing goals. Focusing lets the user (initially) pick a single goal to work on. Focusing operations can be nested. Tactics such as eapply create existential variables as placeholders for undetermined variables that become shelved goals. 
Shelved goals are not shown in the context by default, but they can be unshelved to make them visible. Other tactics may automatically resolve these goals (whether shelved or not); the purpose of shelving is to hide goals that the user usually doesn't need to think about. See Existential variables and this example. Coq's kernel verifies the correctness of proof terms when it exits proof mode by checking that the proof term is well-typed and that its type is the same as the theorem statement. After a proof is completed, Print <theorem_name> shows the proof term and its type. The type appears after the colon (forall ...), as for this theorem from Coq's standard library: Print proj1. Fetching opaque proofs from disk for Coq.Init.Logic proj1 = fun (A B : Prop) (H : A /\ B) => match H with | conj x x0 => (fun (H0 : A) (_ : B) => H0) x x0 end : forall A B : Prop, A /\ B -> A Arguments proj1 [A B]%type_scope _ Many tactics accept terms as arguments and frequently refer to them with wording such as "the type of term". When term is the name of a theorem or lemma, this wording refers to the type of the proof term, which is what's given in the Theorem statement. When term is the name of a hypothesis, the wording refers to the type shown in the context for the hypothesis (i.e., after the colon). For terms that are more complex than just an ident, you can use Check term to display their type. Entering and exiting proof mode¶ Coq enters proof mode when you begin a proof through commands such as Theorem or Goal. Coq user interfaces usually have a way to indicate that you're in proof mode. Tactics are available only in proof mode (currently they give syntax errors outside of proof mode). Most commands can be used both in and out of proof mode, but some commands only work in or outside of proof mode. When the proof is completed, you can exit proof mode with commands such as Qed, Defined and Save. 
Example: Declaring section variables When a section is closed with End, section variables declared with Proof using are added to the theorem as additional variables. You can see the effect on the theorem's statement with commands such as Check, Print and About after the section is closed. Currently there is no command that shows the section variables associated with a theorem before the section is closed. Adding the unnecessary section variable radixNotZero changes how foo' can be applied. Require Import ZArith. [Loading ML file ring_plugin.cmxs (using legacy method) ... done] [Loading ML file zify_plugin.cmxs (using legacy method) ... done] [Loading ML file micromega_plugin.cmxs (using legacy method) ... done] [Loading ML file btauto_plugin.cmxs (using legacy method) ... done] Section bar. Variable radix : Z. radix is declared Hypothesis radixNotZero : (0 < radix)%Z. radixNotZero is declared Lemma foo : 0 = 0. 1 goal radix : Z radixNotZero : (0 < radix)%Z ============================ 0 = 0 Proof. reflexivity. Qed. No more goals. Lemma foo' : 0 = 0. 1 goal radix : Z radixNotZero : (0 < radix)%Z ============================ 0 = 0 Proof using radixNotZero. reflexivity. Qed. (* radixNotZero is not needed *) No more goals. Print foo'. (* Doesn't show radixNotZero yet *) foo' = eq_refl : 0 = 0 foo' uses section variables radix radixNotZero. End bar. Print foo. (* Doesn't change after the End *) foo = eq_refl : 0 = 0 Print foo'. (* "End" added type radix (used by radixNotZero) and radixNotZero *) foo' = fun (radix : Z) (_ : (0 < radix)%Z) => eq_refl : forall radix : Z, (0 < radix)%Z -> 0 = 0 Arguments foo' radix%Z_scope radixNotZero Goal 0 = 0. 1 goal ============================ 0 = 0 Fail apply foo'. (* Fails because of the extra variable *) The command has indeed failed with message: Unable to find an instance for the variable radix. apply (foo' 5). 
(* Can be used if the extra variable is provided explicitly *) 1 goal ============================ (0 < 5)%Z Proof using options¶ The following options modify the behavior of Proof using. Option Default Proof Using "section_var_expr"¶ Set this option to use section_var_expr as the default Proof using value. E.g. Set Default Proof Using "a b" will complete all Proof commands not followed by a using part with using a b. Note that section_var_expr isn't validated immediately. An invalid value will generate an error on a subsequent Proof or Qed command. Flag Suggest Proof Using¶ When this flag is on, Qed suggests a using annotation if the user did not provide one. Name a set of section hypotheses for Proof using¶ Command Collection ident := section_var_expr¶ This can be used to name a set of section hypotheses, with the purpose of making Proof using annotations more compact. Define the collection named Some containing x, y and z: Collection Some := x y z. Define the collection named Fewer containing only x and y: Collection Fewer := Some - z Define the collection named Many containing the set union or set difference of Fewer and Some: Collection Many := Fewer + Some Collection Many := Fewer - Some Define the collection named Many containing the set difference of Fewer and the unnamed collection x y: Collection Many := Fewer - (x y) Deprecated since version 8.15: Redefining a collection, defining a collection with the same name as a variable, and invoking the Proof using command when collection and variable names overlap are deprecated. See the warnings below and in the Proof using command. Error "All" is a predefined collection containing all variables. It can't be redefined.¶ When issuing a Proof using command, All used as a collection name always means "use all variables". Warning New Collection definition of ident shadows the previous one.¶ Redefining a Collection overwrites the previous definition. 
Warning ident was already a defined Variable, the name ident will refer to Collection when executing "Proof using" command.¶ The Proof using command allows specifying both Collection and Variable names. In case of ambiguity, a name is assumed to be a Collection name. Proof modes¶ When entering proof mode through commands such as Goal and Proof, Coq picks by default the L[tac] mode. Nonetheless, there exist other proof modes shipped in the standard Coq installation, and furthermore some plugins define their own proof modes. The default proof mode used when opening a proof can be changed using the following option. Option Default Proof Mode string¶ This option selects the proof mode to use when starting a proof. Depending on the proof mode, various syntactic constructs are allowed when writing a proof. All proof modes support commands; the proof mode determines which tactic language and set of tactic definitions are available. The possible option values are: Activates the L[tac] language and the tactics with the syntax documented in this manual. Some tactics are not available until the associated plugin is loaded, such as SSR or micromega. This proof mode is set when the prelude is loaded. No tactic language is activated at all. This is the default when the prelude is not loaded, e.g. through the -noinit option for coqc. Activates the Ltac2 language and the Ltac2-specific variants of the documented tactics. This value is only available after Requiring Ltac2. Importing Ltac2 sets this mode. Some external plugins also define their own proof mode, which can be activated with this command. Command Proof Mode string¶ Sets the proof mode within the current proof. Managing goals¶ Command Undo To? natural?¶ Cancels the effect of the last natural commands or tactics. The To natural form goes back to the specified state number. If natural is not specified, the command goes back one command or tactic. Command Restart¶ Restores the proof to the original goal.
Error No focused proof to restart.¶ Focusing goals¶ Focusing lets you limit the context display to (initially) a single goal. If a tactic creates additional goals from a focused goal, the subgoals are also focused. The two focusing constructs are curly braces ({ and }) and bullets (e.g. -, + or *). These constructs can be nested. Curly braces¶ Tactic natural[ ident ] :? {¶ Tactic }¶ { (without a terminating period) focuses on the first goal. The subproof can only be unfocused when it has been fully solved (i.e., when there is no focused goal left). Unfocusing is then handled by } (again, without a terminating period). See also an example in the next section. Note that when a focused goal is proved a message is displayed together with a suggestion about the right bullet or } to unfocus it or focus the next goal. Focuses on the natural-th goal to prove. [ ident ]: { Focuses on the goal named ident even if the goal is not in focus. Goals are existential variables, which don't have names by default. You can give a name to a goal by using refine ?[ident]. Example: Working with named goals Ltac name_goal name := refine ?[name]. (* for convenience *) name_goal is defined Set Printing Goal Names. (* show goal names, e.g. "(?base)" and "(?step)" *) Goal forall n, n + 0 = n. 1 goal (?Goal) ============================ forall n : nat, n + 0 = n induction n; [ name_goal base | name_goal step ]. 2 goals, goal 1 (?base) ============================ 0 + 0 = 0 goal 2 (?step) is: S n + 0 = S n (* focus on the goal named "base" *) [base]: { reflexivity. 1 goal (?base) ============================ 0 + 0 = 0 This subproof is complete, but there are some unfocused goals. Try unfocusing with "}". 1 goal goal 1 (?step) is: S n + 0 = S n 1 goal (?step) n : nat IHn : n + 0 = n ============================ S n + 0 = S n This can also be a way of focusing on a shelved goal, for instance: Goal exists n : nat, n = n. 1 goal ============================ exists n : nat, n = n eexists ?[x]. 
1 focused goal (shelved: 1) ============================ ?x = ?x All the remaining goals are on the shelf. 1 goal goal 1 is: nat [x]: exact 0. No more goals. Error This proof is focused, but cannot be unfocused this way.¶ You are trying to use } but the current subproof has not been fully solved. Error No such goal (natural).¶ Error No such goal (ident).¶ Error Brackets do not support multi-goal selectors.¶ Brackets are used to focus on a single goal given either by its position or by its name if it has one. See also The error messages for bullets below. Alternatively, proofs can be structured with bullets instead of { and }. The first use of a bullet b focuses on the first goal g. The same bullet can't be used again until the proof of g is completed, then the next goal must be focused with another b. Thus, all the goals present just before the first use of the bullet must be focused with the same bullet b. See the example below. Different bullets can be used to nest levels. The scope of each bullet is limited to the enclosing { and }, so bullets can be reused as further nesting levels provided they are delimited by curly braces. A bullet is made from -, + or * characters (with no spaces and no period afterward): Tactic -+++*+¶ When a focused goal is proved, Coq displays a message suggesting use of } or the correct matching bullet to unfocus the goal or focus the next subgoal. In Proof General (Emacs interface to Coq), you must use bullets with the priority ordering shown above to have correct indentation. For example - must be the outer bullet and + the inner one in the example below. Example: Use of bullets For the sake of brevity, the output for this example is summarized in comments. Note that the tactic following a bullet is frequently put on the same line with the bullet. Observe that this proof still works even if all the bullets in it are omitted. Goal (1=1 /\ 2=2) /\ 3=3. 1 goal ============================ (1 = 1 /\ 2 = 2) /\ 3 = 3 split. 
(* 1 = 1 /\ 2 = 2 and 3 = 3 *) 2 goals ============================ 1 = 1 /\ 2 = 2 goal 2 is: 3 = 3 - (* 1 = 1 /\ 2 = 2 *) 1 goal ============================ 1 = 1 /\ 2 = 2 split. (* 1 = 1 and 2 = 2 *) 2 goals ============================ 1 = 1 goal 2 is: 2 = 2 + (* 1 = 1 *) 1 goal ============================ 1 = 1 trivial. (* subproof complete *) This subproof is complete, but there are some unfocused goals. Focus next goal with bullet +. 2 goals goal 1 is: 2 = 2 goal 2 is: 3 = 3 + (* 2 = 2 *) 1 goal ============================ 2 = 2 trivial. (* subproof complete *) This subproof is complete, but there are some unfocused goals. Focus next goal with bullet -. 1 goal goal 1 is: 3 = 3 - (* 3 = 3 *) 1 goal ============================ 3 = 3 trivial. (* No more subgoals *) No more goals. Error Wrong bullet bullet[1]: Current bullet bullet[2] is not finished.¶ Before using bullet bullet[1] again, you should first finish proving the current focused goal. Note that bullet[1] and bullet[2] may be the same. Error Wrong bullet bullet[1]: Bullet bullet[2] is mandatory here.¶ You must put bullet[2] to focus on the next goal. No other bullet is allowed here. Error No such goal. Focus next goal with bullet bullet.¶ You tried to apply a tactic but no goals were under focus. Using bullet is mandatory here. Error No such goal. Try unfocusing with }.¶ You just finished a goal focused by {, you must unfocus it with }. Use Default Goal Selector with the ! selector to force the use of focusing mechanisms (bullets, braces) and goal selectors so that it is always explicit to which goal(s) a tactic is applied. Option Bullet Behavior "None""Strict Subproofs"¶ This option controls the bullet behavior and can take two possible values: □ "None": this makes bullets inactive. □ "Strict Subproofs": this makes bullets active (this is the default behavior). Other focusing commands¶ Command Unfocused¶ Succeeds if there are no unfocused goals. Otherwise the command fails. 
Command Focus natural?¶ Focuses the attention on the first goal to prove or, if natural is specified, the natural-th. The printing of the other goals is suspended until the focused goal is solved or unfocused. Deprecated since version 8.8: Prefer the use of bullets or focusing braces with a goal selector (see above). Command Unfocus¶ Restores to focus the goals that were suspended by the last Focus command. Deprecated since version 8.8. Shelving goals¶ Goals can be shelved so they are no longer displayed in the proof state. Shelved goals can be unshelved with the Unshelve command, which makes all shelved goals visible in the proof state. You can use the goal selector [ ident ]: { to focus on a single shelved goal (see here). Currently there's no single command or tactic that unshelves goals by name. Reordering goals¶ Tactic cycle int_or_var¶ Reorders the selected goals so that the first integer goals appear after the other selected goals. If integer is negative, it puts the last integer goals at the beginning of the list. The tactic is only useful with a goal selector, most commonly all:. Note that other selectors reorder goals; 1,3: cycle 1 is not equivalent to all: cycle 1. See … : … (goal selector). Example: cycle Parameter P : nat -> Prop. P is declared Goal P 1 /\ P 2 /\ P 3 /\ P 4 /\ P 5. 1 goal ============================ P 1 /\ P 2 /\ P 3 /\ P 4 /\ P 5 repeat split. (* P 1, P 2, P 3, P 4, P 5 *) 5 goals ============================ P 1 goal 2 is: P 2 goal 3 is: P 3 goal 4 is: P 4 goal 5 is: P 5 all: cycle 2. (* P 3, P 4, P 5, P 1, P 2 *) 5 goals ============================ P 3 goal 2 is: P 4 goal 3 is: P 5 goal 4 is: P 1 goal 5 is: P 2 all: cycle -3. (* P 5, P 1, P 2, P 3, P 4 *) 5 goals ============================ P 5 goal 2 is: P 1 goal 3 is: P 2 goal 4 is: P 3 goal 5 is: P 4 Tactic swap int_or_var int_or_var¶ Exchanges the position of the specified goals. 
Negative values for integer indicate counting goals backward from the end of the list of selected goals. Goals are indexed from 1. The tactic is only useful with a goal selector, most commonly all:. Note that other selectors reorder goals; 1,3: swap 1 3 is not equivalent to all: swap 1 3. See … : … (goal selector). Example: swap Goal P 1 /\ P 2 /\ P 3 /\ P 4 /\ P 5. 1 goal ============================ P 1 /\ P 2 /\ P 3 /\ P 4 /\ P 5 repeat split. (* P 1, P 2, P 3, P 4, P 5 *) 5 goals ============================ P 1 goal 2 is: P 2 goal 3 is: P 3 goal 4 is: P 4 goal 5 is: P 5 all: swap 1 3. (* P 3, P 2, P 1, P 4, P 5 *) 5 goals ============================ P 3 goal 2 is: P 2 goal 3 is: P 1 goal 4 is: P 4 goal 5 is: P 5 all: swap 1 -1. (* P 5, P 2, P 1, P 4, P 3 *) 5 goals ============================ P 5 goal 2 is: P 2 goal 3 is: P 1 goal 4 is: P 4 goal 5 is: P 3 Tactic revgoals¶ Reverses the order of the selected goals. The tactic is only useful with a goal selector, most commonly all:. Note that other selectors reorder goals; 1,3: revgoals is not equivalent to all: revgoals. See … : … (goal selector). Example: revgoals Goal P 1 /\ P 2 /\ P 3 /\ P 4 /\ P 5. 1 goal ============================ P 1 /\ P 2 /\ P 3 /\ P 4 /\ P 5 repeat split. (* P 1, P 2, P 3, P 4, P 5 *) 5 goals ============================ P 1 goal 2 is: P 2 goal 3 is: P 3 goal 4 is: P 4 goal 5 is: P 5 all: revgoals. (* P 5, P 4, P 3, P 2, P 1 *) 5 goals ============================ P 5 goal 2 is: P 4 goal 3 is: P 3 goal 4 is: P 2 goal 5 is: P 1 Proving a subgoal as a separate lemma: abstract¶ Tactic abstract ltac_expr2 using ident[name]?¶ Does a solve [ ltac_expr2 ] and saves the subproof as an auxiliary lemma. If ident[name] is specified, the lemma is saved with that name; otherwise the lemma is saved with the name ident_subproof natural? where ident is the name of the current goal (e.g. the theorem name) and natural is chosen to get a fresh name.
If the proof is closed with Qed, the auxiliary lemma is inlined in the final proof term. This is useful with tactics such as discriminate that generate huge proof terms with many intermediate goals. It can significantly reduce peak memory use. In most cases it doesn't have a significant impact on run time. One case in which it can reduce run time is when a tactic foo is known to always pass type checking when it succeeds, such as in reflective proofs. In this case, the idiom "abstract exact_no_check foo" will save half the type checking time compared to "exact foo". abstract is an l3_tactic. The abstract tactic, while very useful, still has some known limitations. See #9146 for more details. We recommend caution when using it in some "non-standard" contexts. In particular, abstract doesn't work properly when used inside quotations ltac:(...). If used as part of typeclass resolution, it may produce incorrect terms when in polymorphic universe mode. Provide ident[name] at your own risk; explicitly named and reused subterms don’t play well with asynchronous proofs. Tactic transparent_abstract ltac_expr3 using ident?¶ Like abstract, but saves the subproof in a transparent lemma with a name in the form ident_subtermnatural?. Use this feature at your own risk; building computationally relevant terms with tactics is fragile, and explicitly named and reused subterms don’t play well with asynchronous proofs. Error Proof is not complete.¶ Showing differences between proof steps¶ Coq can automatically highlight the differences between successive proof steps and between values in some error messages. Coq can also highlight differences in the proof term. For example, the following screenshots of CoqIDE and coqtop show the application of the same intros tactic. The tactic creates two new hypotheses, highlighted in green. The conclusion is entirely in pale green because although it’s changed, no tokens were added to it.
The second screenshot uses the "removed" option, so it shows the conclusion a second time with the old text, with deletions marked in red. Also, since the hypotheses are new, no line of old text is shown for them. This image shows an error message with diff highlighting in CoqIDE: How to enable diffs¶ Option Diffs "on""off""removed"¶ This option is used to enable diffs. The “on” setting highlights added tokens in green, while the “removed” setting additionally reprints items with removed tokens in red. Unchanged tokens in modified items are shown with pale green or red. Diffs in error messages use red and green for the compared values; they appear regardless of the setting. (Colors are user-configurable.) For coqtop, showing diffs can be enabled when starting coqtop with the -diffs on|off|removed command-line option or by setting the Diffs option within Coq. You will need to provide the -color on|auto command-line option when you start coqtop in either case. Colors for coqtop can be configured by setting the COQ_COLORS environment variable. See section Environment variables. Diffs use the tags diff.added, diff.added.bg, diff.removed and diff.removed.bg. In CoqIDE, diffs should be enabled from the View menu. Don’t use the Set Diffs command in CoqIDE. You can change the background colors shown for diffs from the Edit | Preferences | Tags panel by changing the settings for the diff.added, diff.added.bg, diff.removed and diff.removed.bg tags. This panel also lets you control other attributes of the highlights, such as the foreground color, bold, italic, underline and strikeout. Proof General, VsCoq and Coqtail can also display Coq-generated proof diffs automatically. Please see the PG documentation section "Showing Proof Diffs" and Coqtail's "Proof Diffs" for details. How diffs are calculated¶ Diffs are calculated as follows: 1. Select the old proof state to compare to, which is the proof state before the last tactic that changed the proof. 
Changes that only affect the view of the proof, such as all: swap 1 2, are ignored. 2. For each goal in the new proof state, determine what old goal to compare it to—the one it is derived from or is the same as. Match the hypotheses by name (order is ignored), handling compacted items specially. 3. For each hypothesis and conclusion (the “items”) in each goal, pass them as strings to the lexer to break them into tokens. Then apply the Myers diff algorithm [Mye86] on the tokens and add appropriate highlighting. • Aside from the highlights, output for the "on" option should be identical to the undiffed output. • Goals completed in the last proof step will not be shown even with the "removed" setting. This screenshot shows the result of applying a split tactic that replaces one goal with 2 goals. Notice that the goal P 1 is not highlighted at all after the split because it has not changed. Diffs may appear like this after applying an intro tactic that results in compacted hypotheses: "Show Proof" differences¶ To show differences in the proof term: • In coqtop and Proof General, use the Show Proof Diffs command. • In CoqIDE, position the cursor on or just after a tactic to compare the proof term after the tactic with the proof term before the tactic, then select View / Show Proof from the menu or enter the associated key binding. Differences will be shown applying the current Show Diffs setting from the View menu. If the current setting is Don't show diffs, diffs will not be shown. Output with the "added and removed" option looks like this: Delaying solving unification constraints¶ Tactic solve_constraints¶ Flag Solve Unification Constraints¶ By default, after each tactic application, postponed typechecking unification problems are resolved using heuristics. Unsetting this flag disables this behavior, allowing tactics to leave unification constraints unsolved. Use the solve_constraints tactic at any point to solve the constraints. Proof maintenance¶ Experimental.
Many tactics, such as intros, can automatically generate names, such as "H0" or "H1" for a new hypothesis introduced from a goal. Subsequent proof steps may explicitly refer to these names. However, future versions of Coq may not assign names exactly the same way, which could cause the proof to fail because the new names don't match the explicit references in the proof. The following Mangle Names settings let users find all the places where proofs rely on automatically generated names, which can then be named explicitly to avoid any incompatibility. These settings cause Coq to generate different names, producing errors for references to automatically generated names. Flag Mangle Names¶ When this flag is set (it is off by default), generated names use the prefix specified in the following option instead of the default prefix. Option Mangle Names Prefix string¶ This option specifies the prefix to use when generating names. Flag Mangle Names Light¶ When this flag is set (it is off by default), the names generated by Mangle Names only add the Mangle Names Prefix to the original name. Controlling proof mode¶ Option Hyps Limit natural¶ This option controls the maximum number of hypotheses displayed in goals after the application of a tactic. All the hypotheses remain usable in the proof development. When unset, it goes back to the default mode which is to print all available hypotheses. Flag Nested Proofs Allowed¶ When turned on (it is off by default), this flag enables support for nested proofs: a new assertion command can be inserted before the current proof is finished, in which case Coq will temporarily switch to the proof of this nested lemma. When the proof of the nested lemma is finished (with Qed or Defined), its statement will be made available (as if it had been proved before starting the previous proof) and Coq will switch back to the proof of the previous assertion. 
Flag Printing Goal Names¶ When this flag is turned on, the name of the goal is printed in proof mode, which can be useful in cases of cross references between goals. Flag Printing Goal Tags¶ Internal flag used to implement Proof General's proof-tree mode. Controlling memory usage¶ Command Print Debug GC¶ Prints heap usage statistics, which are values from the stat type of the Gc module described here in the OCaml documentation. The live_words, heap_words and top_heap_words values give the basic information. Words are 8 bytes or 4 bytes, respectively, for 64- and 32-bit executables. When experiencing high memory usage the following commands can be used to force Coq to optimize some of its internal data structures. Command Optimize Proof¶ Shrink the data structure used to represent the current proof. Command Optimize Heap¶ Perform a heap compaction. This is generally an expensive operation. See: OCaml Gc.compact There is also an analogous tactic optimize_heap. Memory usage parameters can be set through the OCAMLRUNPARAM environment variable.
{"url":"https://coq.inria.fr/doc/V8.19.1/refman/proofs/writing-proofs/proof-mode.html","timestamp":"2024-11-08T09:18:29Z","content_type":"text/html","content_length":"313741","record_id":"<urn:uuid:4482943e-b59c-4a86-948f-9072ac7e0ee8>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00258.warc.gz"}
How to Interpolate Along a Linestring A linestring is simply a sequence of two or more points (or zero points), where each pair of points defines a line segment. Thus the sequence can be thought of as a string of line segments that define some curve. To interpolate along a linestring is, effectively, to walk along the linestring until we get to the point we want. The length of the linestring is the sum of the lengths of all its segments. For this problem, we will assume we know the length. (If we don’t, this can be computed easily.) We will also assume that the point we want to find is a specific distance along the linestring and not a ratio from [0, 1]. If we do want to use a ratio, the distance we require is also easily computed. The Simple Case: Interpolate on a Line Segment If we have a single line segment of two points, we want to do a linear interpolation (or “lerp”) between them. Since a linear interpolation is the interpolation between two values in a straight line, we can do this for the x and y independently to get the new point. Linear Interpolation

def lerp(start: Double, end: Double, ratio: Double): Double = {
  start * (1 - ratio) + end * ratio
}

def lerp(start: Point, end: Point, ratio: Double): Point = {
  Point(lerp(start.x, end.x, ratio), lerp(start.y, end.y, ratio))
}

Note that start + (end - start) * ratio was not used. This is due to the floating-point arithmetic error that could result in not getting the end when the ratio is 1. Also, in this case, we used a ratio instead of distance since we need the ratio for linear interpolation. In the case of a linestring, we are using distance. This is because we need to know how far we have traveled along the linestring to determine the ratio for interpolating along the line segment where our desired point lies. Finding the Ratio for the Remaining Distance If the current line segment we are on will contain the point we want, then we know how far we have traveled (including this segment).
From this, we can compute the distance we truly have remaining and use that to find the ratio needed on the current segment. The distance remaining is the distance wanted minus the distance we have traveled up until this segment. The ratio is then just the distance remaining over the length of the current segment.

def findRatioBetweenPointsToGetDistanceWanted(start: Point, end: Point, distanceTraveled: Double, distanceWanted: Double): Double = {
  val distanceFromStartToEnd = calculateDistanceBetweenPoints(start, end)
  val distanceRemaining = distanceWanted - (distanceTraveled - distanceFromStartToEnd)
  distanceRemaining / distanceFromStartToEnd
}

Putting it All Together

To traverse the linestring up until the segment we need to interpolate on, we can just add up the lengths of each segment until we have traveled farther than the distance we want.

Interpolating Along a Linestring

def getPointAlongLineString(points: Seq[Point], distanceWanted: Double): Point = {
  var distanceTraveled = 0.0
  var currentPoint = points.head
  var previousPointIndex = 0
  for (nextPointIndex <- 1 until points.length if distanceTraveled < distanceWanted) {
    val nextPoint = points(nextPointIndex)
    distanceTraveled += calculateDistanceBetweenPoints(currentPoint, nextPoint)
    currentPoint = nextPoint
    previousPointIndex = nextPointIndex - 1
  }
  val previousPoint = points(previousPointIndex)
  val ratio = findRatioBetweenPointsToGetDistanceWanted(previousPoint, currentPoint, distanceTraveled, distanceWanted)
  lerp(previousPoint, currentPoint, ratio)
}
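For readers outside Scala, the algorithm can be sketched as a Python port (helper names like point_along_linestring are mine, not from the post; this assumes a well-formed linestring of at least two points):

```python
import math

def lerp(start, end, ratio):
    # start*(1 - ratio) + end*ratio lands exactly on `end` at ratio = 1
    return start * (1 - ratio) + end * ratio

def lerp_point(start, end, ratio):
    return (lerp(start[0], end[0], ratio), lerp(start[1], end[1], ratio))

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def point_along_linestring(points, distance_wanted):
    distance_traveled = 0.0
    current = points[0]
    prev_index = 0
    # Walk segments until we have traveled at least as far as we want.
    for next_index in range(1, len(points)):
        if distance_traveled >= distance_wanted:
            break
        next_point = points[next_index]
        distance_traveled += dist(current, next_point)
        current = next_point
        prev_index = next_index - 1
    # Interpolate on the segment where the wanted distance falls.
    prev_point = points[prev_index]
    segment_length = dist(prev_point, current)
    remaining = distance_wanted - (distance_traveled - segment_length)
    return lerp_point(prev_point, current, remaining / segment_length)

# 5 units along a 3-4 "L" shape ends 2 units up the vertical segment:
print(point_along_linestring([(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)], 5.0))  # (3.0, 2.0)
```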
{"url":"https://spin.atomicobject.com/interpolate-along-linestring/","timestamp":"2024-11-03T13:03:13Z","content_type":"text/html","content_length":"105226","record_id":"<urn:uuid:dfbf7748-8797-4346-8b1c-0e9a7c4492d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00544.warc.gz"}
Igcse Maths Tutor in Noida - Rajdhani Tutors

May 30, 2024

IGCSE Maths tutoring can help students grasp challenging concepts, stay motivated, and succeed in their exams. If you're looking for an IGCSE Maths tutor in Gurgaon, here's a comprehensive guide to help you find the right tutor for your needs, along with some specific considerations and tips.

1. Understanding Key Concepts:
• Algebra: Simplifying Expressions. Break down the process of simplifying algebraic expressions with step-by-step examples. Use real-world analogies to make abstract concepts more relatable.
• Geometry: Properties of Shapes. Discuss the properties of different geometric shapes, including triangles, quadrilaterals, and circles. Use diagrams to illustrate points and explain how to solve related problems.
• Trigonometry: Sine, Cosine, and Tangent. Explain the basics of trigonometry, including how to use sine, cosine, and tangent in various problems. Provide practical applications, such as measuring heights and distances.
• Statistics: Mean, Median, Mode. Clarify the differences between mean, median, and mode. Offer tips on when to use each measure and provide example problems for practice.

2. Exam Preparation Tips:
• Revision Techniques. Share effective revision strategies, such as creating summary notes, using flashcards, and practicing past papers. Emphasize the importance of consistent, daily practice.
• Time Management. Offer advice on how to manage time during exam preparation and the actual exam. Suggest techniques like the Pomodoro method for study sessions and tips for pacing during the exam.
• Dealing with Exam Stress. Provide tips on how to handle exam stress and anxiety. Discuss the benefits of a healthy lifestyle, regular breaks, and mindfulness exercises.
• Weekly Problem Sets. Post weekly sets of practice problems covering different topics. Include detailed solutions and explanations to help students understand their mistakes.
• Past Paper Analysis. Analyze past exam papers, highlighting common question types and frequent mistakes. Offer strategies for tackling these questions effectively.

4. Real-Life Applications:
• Math in Daily Life. Write about how various mathematical concepts are used in everyday life. Examples can include budgeting, cooking, and travel planning.
• Math in Careers. Discuss how different careers use math. Interview professionals in fields like engineering, finance, and technology to provide insights into how they apply mathematical concepts in their jobs.

Simplifying Algebraic Expressions: A Step-by-Step Guide

Algebra can be a daunting topic for many IGCSE Maths students, but with the right approach, you can simplify even the most complex expressions. Let's break down the process step by step.

Step 1: Understand the Basics
First, remember the fundamental properties of algebra:
• Commutative Property: a + b = b + a and ab = ba
• Associative Property: (a + b) + c = a + (b + c) and (ab)c = a(bc)
• Distributive Property: a(b + c) = ab + ac

Step 2: Combine Like Terms
Like terms are terms that have the same variable raised to the same power. For example, in the expression 3x + 5x, both terms are like terms because they contain the variable x.
Example: Simplify 3x + 5x.
Solution: 3x + 5x = (3 + 5)x = 8x

Step 3: Use the Distributive Property
When you encounter an expression that involves parentheses, use the distributive property to simplify.
Example: Simplify 2(x + 3).
Solution: 2(x + 3) = 2x + 6

Step 4: Simplify Fractions
Sometimes, algebraic expressions include fractions. Simplify the numerator and denominator separately before dividing.
Example: Simplify (2x + 6)/2.
Solution: (2x + 6)/2 = 2(x + 3)/2 = x + 3

Practice Problems
1. Simplify 4y + 7 − 2y + 3.
2. Simplify 3a(2 + b) − 4a.
3. Simplify (5x + 10)/5.

Answers
1. 4y + 7 − 2y + 3 = 2y + 10
2. 3a(2 + b) − 4a = 6a + 3ab − 4a = 2a + 3ab
3. (5x + 10)/5 = 5(x + 2)/5 = x + 2
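One way to sanity-check simplifications like these is to evaluate both sides at a few sample values; here is a small Python sketch (my own illustration, not from the tutorial):

```python
# Numerically spot-check each algebraic identity at a few sample values.
def agree(f, g, samples):
    return all(abs(f(*s) - g(*s)) < 1e-9 for s in samples)

xs = [(-2.0,), (0.5,), (3.0,)]

assert agree(lambda x: 3*x + 5*x, lambda x: 8*x, xs)        # Step 2
assert agree(lambda x: 2*(x + 3), lambda x: 2*x + 6, xs)    # Step 3
assert agree(lambda x: (2*x + 6) / 2, lambda x: x + 3, xs)  # Step 4

# Practice problem 2 has two variables:
ab = [(-1.0, 2.0), (0.5, -3.0), (2.0, 4.0)]
assert agree(lambda a, b: 3*a*(2 + b) - 4*a, lambda a, b: 2*a + 3*a*b, ab)
```

This does not prove the identities, but a mismatch at any sample value would immediately expose an algebra mistake.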
{"url":"https://rajdhanitutors.com/?p=532","timestamp":"2024-11-05T10:48:16Z","content_type":"text/html","content_length":"96777","record_id":"<urn:uuid:e1238eb5-d3cd-4d84-b1c0-6cada9ff6b7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00058.warc.gz"}
The goal of kerntools is to provide R tools for working with a family of Machine Learning methods called kernel methods. It can be used to complement other R packages like kernlab. Right now, kerntools implements several kernel functions for treating non-negative and real vectors, real matrices, categorical and ordinal variables, sets, and strings. Several tools for studying the resulting kernel matrix or to compare two kernel matrices are available. These diagnostic tools may be used to infer the suitability of the kernel matrix (or matrices) for training models. This package also provides functions for computing the feature importance of Support Vector Machines (SVMs) models, and for displaying customizable kernel Principal Components Analysis (PCA) plots. For convenience, widespread performance measures and feature importance barplots are available for the user.

Installation and loading

Installing kerntools is easy. In the R console:

install.packages("kerntools")

Once the package is installed, it can be loaded anytime typing:

library(kerntools)

kerntools requires R (>= 2.10). Currently, it also relies on the following packages:
• dplyr
• ggplot2
• kernlab
• methods
• reshape2
• stringi

Usually, if some of these packages are missing in your library, they will be installed automatically when kerntools is installed.

A quick example: kernel PCA

Imagine that you want to perform a (kernel) PCA plot but your dataset consists of categorical variables. This can be done very easily with kerntools! First, you choose an appropriate kernel for your data (in this example, the Dirac kernel for categorical variables), and then you pass the output of the Dirac() function to the kPCA() function.
#> Favorite.color Favorite.actress Favorite.actor Favorite.show #> 1 red Sophie Turner Josh O'Connor The crown #> 2 black Soo Ye-jin Hyun Bin Bridgerton #> 3 red Lorraine Ashbourne Henry Cavill Bridgerton #> 4 blue Sophie Turner Alvaro Morte La casa de papel #> 5 red Sophie Turner Michael K Williams The wire #> 6 yellow Sophie Turner Kit Harington Game of Thrones #> Liked.new.show #> 1 Yes #> 2 No #> 3 Yes #> 4 No #> 5 Yes #> 6 No KD <- Dirac(showdata[,1:4]) dirac_kpca <- kPCA(KD,plot=c(1,2),title="Survey", name_leg = "Liked the show?", y=showdata$Liked.new.show, ellipse=0.66) You can customize your kernel PCA plot: apart from picking which principal components you want to display (in the example: PC1 and PC2), you may want to add a title, or a legend, or use different colors to represent an additional variable of interest, so you can check patterns on your data. To see in detail how to customize a kPCA() plot, please refer to the documentation. The projection matrix is also returned (dirac_kpca$projection), so you may use it for further analyses and/or creating your own plot. Main kerntools features Right now, kerntools can deal effortlessly with the following kinds of data: • Real vectors: Linear, RBF and Laplacian kernels. • Real matrices: Frobenius kernel. • Counts or Frequencies (non-negative numbers): Bray-Curtis and Ruzicka (quantitative Jaccard) kernels. • Categorical data: Overlap / Dirac kernel. • Sets: Intersect and Jaccard kernels. • Ordinal data and rankings: Kendall’s tau kernel. • Strings and Text: Spectrum kernel. Several tools for visualizing and comparing kernel matrices are provided. Regarding kernel PCA, kerntools allows the user to: • Compute a kernel PCA from any kernel matrix, be it computed with kerntools or provided by the user. • Display customizable PCA plots • (When possible) Compute and display the contribution of variables to each principal component. 
• Compare two or more PCAs generated from the same set of samples using Co-inertia and Procrustes analysis.

When using some specific kernels, kerntools computes the importance of each variable or feature in a Support Vector Machine (SVM) model. kerntools does not train SVMs or other prediction models, but it can recover the feature importance of models fitted with other packages (for instance kernlab). These importances can be sorted and summarized in a customizable barplot.

Finally, the following performance measures for regression, binary and multi-class classification are implemented:
• Regression: Normalized Mean Squared Error
• Classification: accuracy, specificity, sensitivity, precision and F1 with (optional) confidence intervals, computed using normal approximation or bootstrapping.

Example data

kerntools contains a categorical toy dataset called showdata and a real-world count dataset called soil.

To see detailed and step-by-step examples that illustrate the main use cases of kerntools, please have a look at the vignettes. The basic vignette covers the typical kerntools workflow. Thorough documentation about the kernel functions implemented in this package is in the "Kernel functions" vignette. If you want instead to know more about kernel PCA and Co-inertia analysis, you can refer to the corresponding vignette too.

Additional help

Remember that detailed, argument-by-argument documentation is available for each function through R's built-in help system. The documentation of the example datasets is available in an analogous way.

More about kernels

To know more about kernel functions, matrices and methods, you can consult the following reference materials:
• Bishop, C. M., & Nasrabadi, N. M. (2006). Pattern recognition and machine learning (Vol. 4, No. 4, p. 738). Chapter 6, pp. 291-323. New York: Springer.
• Müller, K. R., Mika, S., Tsuda, K., & Schölkopf, B. (2018). An introduction to kernel-based learning algorithms. In Handbook of neural network signal processing (pp. 4-1).
CRC Press.
• Shawe-Taylor, J., & Cristianini, N. (2004). Kernel methods for pattern analysis. Cambridge University Press.
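kerntools' Dirac() is an R function; as a rough, language-neutral sketch of what an overlap/Dirac kernel computes on categorical data (my own illustration, not the package's implementation):

```python
# Overlap/Dirac kernel: k(a, b) = fraction of categorical features
# on which samples a and b take the same category.
def dirac_kernel(X):
    n, p = len(X), len(X[0])
    return [[sum(X[i][f] == X[j][f] for f in range(p)) / p
             for j in range(n)]
            for i in range(n)]

# Tiny survey-style example (two categorical features per respondent):
samples = [["red", "Sophie Turner"],
           ["red", "Soo Ye-jin"],
           ["blue", "Soo Ye-jin"]]
K = dirac_kernel(samples)
# K[0][1] == 0.5: respondents 0 and 1 agree on 1 of 2 features.
```

The resulting matrix K is symmetric with ones on the diagonal, which is what a kernel PCA routine like kPCA() expects as input.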
{"url":"https://cran.case.edu/web/packages/kerntools/readme/README.html","timestamp":"2024-11-04T07:42:26Z","content_type":"application/xhtml+xml","content_length":"15654","record_id":"<urn:uuid:6b7b72e1-7a73-4429-9368-c3cd839b3e1f>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00446.warc.gz"}
An object travels 7.5 m/s toward the west. Under the influence of a constant net force of 5.2 kN, it comes to rest in 3.2 s. What is its mass?
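The page leaves the working to the answerer; under the usual constant-acceleration model (my working, not from the page), the deceleration is a = v0 / t and Newton's second law gives m = F / a:

```python
# Constant net force brings the object to rest, so a = v0 / t and m = F / a.
v0 = 7.5     # initial speed toward the west, m/s
t = 3.2      # time to come to rest, s
F = 5.2e3    # magnitude of the constant net force, N

a = v0 / t   # magnitude of the deceleration, m/s^2
m = F / a    # Newton's second law
print(round(m, 1))  # 2218.7 kg, i.e. about 2.2e3 kg
```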
{"url":"https://documen.tv/question/an-object-travels-7-5-m-s-toward-the-west-under-the-influence-of-a-constant-net-force-of-5-2-kn-17396861-65/","timestamp":"2024-11-06T01:47:39Z","content_type":"text/html","content_length":"80235","record_id":"<urn:uuid:7d08987c-19e8-42df-aecf-4874d17119b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00198.warc.gz"}
Variable growth rate formula

Growth rates refer to the percentage change of a specific variable within a specific time period, given a certain context. For investors, growth rates typically represent the compounded annualized

End Amount; Additional Contribute; Return Rate; Start Amount; Invest Length: the Investment Calculator can help determine one of many different variables

31 Jan 2019 The dividend growth model is a method that investors use for estimating. Inserting those inputs into the dividend growth rate formula, we get a multi-year analysis with variable growth rate, a process that is a bit more complex

Keywords: Earnings growth, PE ratio, Equity valuation, Cost of capital. In order to control other variables in equation two, we take different approaches

To calculate Compound Annual Growth Rate (CAGR) in Excel, the average rate of return for an investment: in the example shown, the formula in H7 is a geometric mean, which can be used to calculate the average rate of return with variable rates.

As such the formula is vulnerable to the distortions of 'Garbage in - Garbage Out'. The Variable-Growth Rate Model can take many different forms assuming two

future value / present value = (1+i)^n (growth rate equation); to use it you need to know the value of the other variables, namely N₀, r and a point in time N(t).

27 Apr 2016 Manipulate these two variables to explore different scenarios your current town or city may experience in the future. Population Growth Simulator

Solow has assumed technical coefficients of production to be variable, so that the right hand side of equation (4) shows the rate of growth of the labour force

27 May 2012 Stock valuation: Two-stage Dividend Growth Model (Critical Review), the formula method to calculate the two-stage dividend growth model.
If we solve the above equation for g, we get the implied growth rate as 8.13%.

#3 – Variable-Growth Rate DDM Model (Multi-stage Dividend Discount Model)

The Variable Growth rate Dividend Discount Model or DDM Model is much closer to reality as compared to the other two types of dividend discount model.

The formula to calculate a growth rate given a beginning and ending population is Pop Future = Pop Present × (1 + i)^n, where Pop Future = Future Population, Pop Present = Present Population, and i = Growth Rate (unknown).

Calculating Growth Rates. The economic growth rate can be measured as the annual percentage change of real GDP. Growth rate formula for any variable (1):
25 Jun 2019 Every dividend payment in the future was discounted back to the present and added together. We can use the following formula to determine this

23 Nov 2012 Here, a key variable is the biomass concentration which is required for further calculation of variables describing the metabolic state of the culture

14 Aug 2012 In fact, growth rates are an endogenous variable, which is estimated. This equation suggests that prices lead earnings in the sense of that
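The growth-rate equation quoted above, future value / present value = (1+i)^n, can be rearranged to solve for the unknown rate i; a small Python sketch (numbers are my own illustration):

```python
# Rearranging future = present * (1 + i)**n to solve for the unknown rate i:
# i = (future / present)**(1 / n) - 1
def implied_growth_rate(present, future, periods):
    return (future / present) ** (1.0 / periods) - 1.0

# e.g. a population growing from 1000 to 1210 over 2 periods:
rate = implied_growth_rate(1000.0, 1210.0, 2)
print(round(rate, 4))  # 0.1, i.e. 10% per period
```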
{"url":"https://dioptioneusfnaxc.netlify.app/semetara9105jat/variable-growth-rate-formula-204","timestamp":"2024-11-11T07:54:45Z","content_type":"text/html","content_length":"34067","record_id":"<urn:uuid:48703a32-468d-470b-8b00-1aa104c973d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00686.warc.gz"}
Peg ratio stock valuation

20 Feb 2013 P/E is the most popular valuation ratio used by investors. It is equal to a stock's market price divided by the earnings per share for the most

2 Oct 2011 Investing has a similar set of four basic elements that investors use to break down a stock's value. In this article, we will look at the four ratios and

11 Apr 2019 This is a detailed guide on the price-to-earnings ratio (P/E ratio), and then shows its shortcomings and presents several superior valuation

6 Nov 2018 Simply put, PEG ratios put a stock's attractiveness in perspective by dividing a company's price-to-earnings ratio (a common valuation measure,

7 Nov 2019 This screen looks for large-cap stocks above $5 billion in market capitalization with good valuation based on Book Value and Earnings multiples

The term "PEG ratio" or Price/Earnings to Growth ratio refers to the stock valuation method based on the growth potential of the company's earnings.

How the price/earnings ratio and the PEG ratio of a company are calculated: the P/E ratio is the most widely published ratio on stocks, equal to the company's share price divided by earnings per share. Some companies have such high P/E's that their market value can be greater

The price-earnings ratio (P/E) is one of the most basic metrics of stock valuation. It is calculated by dividing a stock's current price by its earnings, giving a relative

Price to earnings, growth ratio and value growth based strategies. Social Science Research Network, 19(4). The strategies of investing in stocks.

The price earnings to growth ratio, also known as the PEG ratio, takes the price earnings ratio one step further. This valuation ratio compares a company's current share price with its current earnings per share, and then measures that P/E ratio against the rate at which the firm's earnings are growing.
PEG Ratio Vs Price To Earnings: Why Peter Lynch Wins Here

Nov 04, 2016 · "The PEG ratio (price-earnings to growth ratio) is a valuation metric for determining the relative trade-off between the price of a stock, the earnings generated per share (EPS), and the company's expected growth. In general, the P/E ratio is higher for a company with a higher growth rate."

Stock Screeners - Yahoo Finance

Find Yahoo Finance predefined, ready-to-use stock screeners to search stocks by industry, index membership, and more. Create your own screens with over 150 different screening criteria.

Apr 02, 2020 · The PEG ratio (price-earnings to growth) is a valuation metric that describes the relationship between the price of a stock, the earnings generated per share and the growth rate. It is obtained by dividing the price-to-earnings ratio of a company by its growth rate.

Chapter 9 - The Valuation of Stock Flashcards | Quizlet

Start studying Chapter 9 - The Valuation of Stock. Learn vocabulary, terms, and more with flashcards, games, and other study tools. "The PEG ratio multiplies a stock's earnings, price, and growth rate." False. its historic high price to book ratio d. a stock should be purchased if it is selling near
Costco Wholesale Corp. (NASDAQ:COST) | Valuation Ratios

P/OP ratio: because the P/E ratio is calculated using net income, the ratio can be sensitive to nonrecurring earnings and capital structure, so analysts may use price to operating profit. Costco Wholesale Corp.'s P/OP ratio increased from 2017 to …

The PEG ratio and other valuation multiples - Security ...

A rule of thumb is that the PE ratio should be roughly equal to the growth in earnings or dividends. In other words, the ratio of the PE ratio to growth in earnings, which is called the PEG ratio, should be close to 1. The PE ratio for gender model stock is currently five.

Advantages of the PEG Ratio in Stock Valuation - Financial Web

The advantages of the PEG ratio in stock valuation are concentrated around the ratio's ability to be applied across industries. The PEG ratio is the relationship between the price-to-earnings ratio (P/E) and the company's projected growth rate. The P/E ratio is commonly used to value stocks because it …

PEG Ratio: how accurate is it?
- Moneychimp: Stock Market ... The PEG approach is a simple valuation tool, popularized by Peter Lynch and The Motley Fool among many others. Here is how Lynch puts it in One Up on Wall Street: "The p/e ratio of any company that's fairly priced will equal its growth rate." In other words, P/E = G where P/E is the stock's P/E ratio, and G is its earnings growth rate.
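Putting the definitions above together, PEG = (price / EPS) / expected earnings growth (in percent). A short Python illustration with hypothetical numbers of my own:

```python
# PEG ratio from its definition: P/E divided by expected earnings growth (%).
def pe_ratio(price, eps):
    return price / eps

def peg_ratio(price, eps, growth_pct):
    return pe_ratio(price, eps) / growth_pct

# Hypothetical stock: $50 share price, $2.50 EPS, 10% expected growth.
pe = pe_ratio(50.0, 2.50)         # 20.0
peg = peg_ratio(50.0, 2.50, 10)   # 2.0
```

A PEG of 2.0 here flags the stock as expensive under Lynch's P/E = G rule of thumb, since a "fairly priced" stock would have PEG close to 1.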
{"url":"https://topoptionsmdihhsc.netlify.app/ciccarone32996qyxy/peg-ratio-stock-valuation-173.html","timestamp":"2024-11-08T23:37:17Z","content_type":"text/html","content_length":"33052","record_id":"<urn:uuid:a0c78be0-e0a9-4791-bd96-71c7d6e58e4f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00522.warc.gz"}
Haskell/The Functor class - Wikibooks, open books for an open world

In this chapter, we will introduce the important Functor type class. In Other data structures, we saw operations that apply to all elements of some grouped value. The prime example is map, which works on lists. Another example we worked through was the following Tree:

data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Show)

The map function we wrote for Tree was:

treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f (Leaf x) = Leaf (f x)
treeMap f (Branch left right) = Branch (treeMap f left) (treeMap f right)

As discussed before, we can conceivably define a map-style function for any arbitrary data structure. When we first introduced map in Lists II, we went through the process of taking a very specific function for list elements and generalizing to show how map combines any appropriate function with all sorts of lists. Now, we will generalize still further. Instead of map-for-lists and map-for-trees and other distinct maps, how about a general concept of maps for all sorts of mappable types?

Functor is a Prelude class for types which can be mapped over. It has a single method, called fmap. The class is defined as follows:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

The usage of the type variable f can look a little strange at first. Here, f is a parametrized data type; in the signature of fmap, f takes a as a type parameter in one of its appearances and b in the other. Let's consider an instance of Functor: by replacing f with Maybe we get the following signature for fmap...

fmap :: (a -> b) -> Maybe a -> Maybe b

... which fits the natural definition:

instance Functor Maybe where
  fmap f Nothing = Nothing
  fmap f (Just x) = Just (f x)

(Incidentally, this definition is in Prelude; so we didn't really need to implement maybeMap for that example in the "Other data structures" chapter.)

The Functor instance for lists (also in Prelude) is simple:

instance Functor [] where
  fmap = map

...
and if we replace f with [] in the fmap signature, we get the familiar type of map. So, fmap is a generalization of map for any parametrized data type.^[1]

Naturally, we can provide Functor instances for our own data types. In particular, treeMap can be promptly relocated to an instance:

instance Functor Tree where
  fmap f (Leaf x) = Leaf (f x)
  fmap f (Branch left right) = Branch (fmap f left) (fmap f right)

Here's a quick demo of fmap in action with the instances above (to reproduce it, you only need to load the data and instance declarations for Tree; the others are already in Prelude):

*Main> fmap (2*) [1,2,3,4]
[2,4,6,8]
*Main> fmap (2*) (Just 1)
Just 2
*Main> fmap (fmap (2*)) [Just 1, Just 2, Just 3, Nothing]
[Just 2, Just 4, Just 6, Nothing]
*Main> fmap (2*) (Branch (Branch (Leaf 1) (Leaf 2)) (Branch (Leaf 3) (Leaf 4)))
Branch (Branch (Leaf 2) (Leaf 4)) (Branch (Leaf 6) (Leaf 8))

Beyond [] and Maybe, there are many other Functor instances already defined. Those made available from the Prelude are listed in the Data.Functor module.

When providing a new instance of Functor, you should ensure it satisfies the two functor laws. There is nothing mysterious about these laws; their role is to guarantee fmap behaves sanely and actually performs a mapping operation (as opposed to some other nonsense).^[2] The first law is:

fmap id = id

id is the identity function, which returns its argument unaltered. The first law states that mapping id over a functorial value must return the functorial value unchanged. Next, the second law:

fmap (g . f) = fmap g . fmap f

It states that it should not matter whether we map a composed function or first map one function and then the other (assuming the application order remains the same in both cases). At this point, we can ask what benefit we get from the extra layer of generalization brought by the Functor class.
There are two significant advantages: • The availability of the fmap method relieves us from having to recall, read, and write a plethora of differently named mapping methods (maybeMap, treeMap, weirdMap, ad infinitum). As a consequence, code becomes both cleaner and easier to understand. On spotting a use of fmap, we instantly have a general idea of what is going on.^[3] Thanks to the guarantees given by the functor laws, this general idea is surprisingly precise. • Using the type class system, we can write fmap-based algorithms which work out of the box with any functor - be it [], Maybe, Tree or whichever you need. Indeed, a number of useful classes in the core libraries inherit from Functor. Type classes make it possible to create general solutions to whole categories of problems. Depending on what you use Haskell for, you may not need to define new classes often, but you will certainly be using type classes all the time. Many of the most powerful features and sophisticated capabilities of Haskell rely on type classes (residing either in the standard libraries or elsewhere). From this point on, classes will be a prominent presence in our studies. 1. ↑ Data structures provide the most intuitive examples; however, there are functors which cannot reasonably be seen as data structures. A commonplace metaphor consists in thinking of functors as containers; like all metaphors, however, it can be stretched only so far. 2. ↑ Some examples of nonsense that the laws rule out: removing or adding elements from a list, reversing a list, changing a Just-value into a Nothing. 3. ↑ This is analogous to the gain in clarity provided by replacing explicit recursive algorithms on lists with implementations based on higher-order functions.
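The two functor laws are not Haskell-specific; as an outside illustration (not from the wikibook), here they are checked for Python lists, where map plays the role of fmap and function composition stands in for (.):

```python
# Functor laws checked for Python lists, with map playing the role of fmap.
def compose(g, f):
    return lambda x: g(f(x))

identity = lambda x: x
xs = [1, 2, 3, 4]
f = lambda x: x + 1
g = lambda x: 2 * x

# First law: fmap id == id
assert list(map(identity, xs)) == xs

# Second law: fmap (g . f) == fmap g . fmap f
assert list(map(compose(g, f), xs)) == list(map(g, map(f, xs)))
```

Of course, checking a handful of inputs is evidence, not proof; in Haskell the laws are obligations on the instance author, since the compiler does not verify them.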
{"url":"https://en.m.wikibooks.org/wiki/Haskell/The_Functor_class","timestamp":"2024-11-08T16:00:50Z","content_type":"text/html","content_length":"44934","record_id":"<urn:uuid:d4061545-6116-43a5-92b5-35a46402b22a>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00649.warc.gz"}
Numerical integration

trapz - integrate sampled values using trapezoidal rule

Returns the trapezoidal rule integral of an array y representing discrete samples of a function. The integral is computed assuming either equidistant abscissas with spacing dx or arbitrary abscissas x.

result = trapz (y, x)
result = trapz (y, dx)

y: Shall be a rank-one array of type real.
x: Shall be a rank-one array of type real having the same kind and size as y.
dx: Shall be a scalar of type real having the same kind as y.

Return value

The result is a scalar of type real having the same kind as y. If the size of y is zero or one, the result is zero.

program example_trapz
  use stdlib_quadrature, only: trapz
  implicit none
  real, parameter :: x(5) = [0., 1., 2., 3., 4.]
  real :: y(5) = x**2
  print *, trapz(y, x)   ! 22.0
  print *, trapz(y, 0.5) ! 11.0
end program example_trapz

trapz_weights - trapezoidal rule weights for given abscissas

Given an array of abscissas x, computes the array of weights w such that if y represented function values tabulated at x, then sum(w*y) produces a trapezoidal rule approximation to the integral.

result = trapz_weights (x)

x: Shall be a rank-one array of type real.

Return value

The result is a real array with the same size and kind as x. If the size of x is one, then the sole element of the result is zero.

program example_trapz_weights
  use stdlib_quadrature, only: trapz_weights
  implicit none
  real, parameter :: x(5) = [0., 1., 2., 3., 4.]
  real :: y(5) = x**2
  real :: w(5)
  w = trapz_weights(x)
  print *, sum(w*y) ! 22.0
end program example_trapz_weights

simps - integrate sampled values using Simpson's rule

Returns the Simpson's rule integral of an array y representing discrete samples of a function. The integral is computed assuming either equidistant abscissas with spacing dx or arbitrary abscissas x. Simpson's ordinary ("1/3") rule is used for odd-length arrays. For even-length arrays, Simpson's 3/8 rule is also utilized in a way that depends on the value of even.
If even is negative (positive), the 3/8 rule is used at the beginning (end) of the array. If even is zero or not present, the result is as if the 3/8 rule were first used at the beginning of the array, then at the end of the array, and these two results were averaged. result = simps (y, x [, even]) result = simps (y, dx [, even]) y: Shall be a rank-one array of type real. x: Shall be a rank-one array of type real having the same kind and size as y. dx: Shall be a scalar of type real having the same kind as y. even: (Optional) Shall be a default-kind integer. Return value The result is a scalar of type real having the same kind as y. If the size of y is zero or one, the result is zero. If the size of y is two, the result is the same as if trapz had been called instead. program example_simps use stdlib_quadrature, only: simps implicit none real, parameter :: x(5) = [0., 1., 2., 3., 4.] real :: y(5) = 3.*x**2 print *, simps(y, x) ! 64.0 print *, simps(y, 0.5) ! 32.0 end program example_simps simps_weights - Simpson's rule weights for given abscissas Given an array of abscissas x, computes the array of weights w such that if y represented function values tabulated at x, then sum(w*y) produces a Simpson's rule approximation to the integral. Simpson's ordinary ("1/3") rule is used for odd-length arrays. For even-length arrays, Simpson's 3/8 rule is also utilized in a way that depends on the value of even. If even is negative (positive), the 3/8 rule is used at the beginning (end) of the array and the 1/3 rule used elsewhere. If even is zero or not present, the result is as if the 3/8 rule were first used at the beginning of the array, then at the end of the array, and then these two results were averaged. result = simps_weights (x [, even]) x: Shall be a rank-one array of type real. even: (Optional) Shall be a default-kind integer. Return value The result is a real array with the same size and kind as x. 
If the size of x is one, then the sole element of the result is zero. If the size of x is two, then the result is the same as if trapz_weights had been called instead. program example_simps_weights use stdlib_quadrature, only: simps_weights implicit none real, parameter :: x(5) = [0., 1., 2., 3., 4.] real :: y(5) = 3.*x**2 real :: w(5) w = simps_weights(x) print *, sum(w*y) ! 64.0 end program example_simps_weights gauss_legendre - Gauss-Legendre quadrature (a.k.a. Gaussian quadrature) nodes and weights Computes Gauss-Legendre quadrature (also known as simply Gaussian quadrature) nodes and weights, for any N (number of nodes). Using the nodes x and weights w, you can compute the integral of some function f as follows: integral = sum(f(x) * w). Only double precision is supported - if lower precision is required, you must do the appropriate conversion yourself. Accuracy has been validated up to N=64 by comparing computed results to tablulated values known to be accurate to machine precision (maximum difference from those values is 2 epsilon). subroutine gauss_legendre (x, w[, interval]) x: Shall be a rank-one array of type real(real64). It is an output argument, representing the quadrature nodes. w: Shall be a rank-one array of type real(real64), with the same dimension as x. It is an output argument, representing the quadrature weights. interval: (Optional) Shall be a two-element array of type real(real64). If present, the nodes and weigts are calculated for integration from interval(1) to interval(2). If not specified, the default integral is -1 to 1. 
program example_gauss_legendre use iso_fortran_env, dp => real64 use stdlib_quadrature, only: gauss_legendre implicit none integer, parameter :: N = 6 real(dp), dimension(N) :: x, w call gauss_legendre(x, w) print *, "integral of x**2 from -1 to 1 is", sum(x**2*w) end program example_gauss_legendre gauss_legendre_lobatto - Gauss-Legendre-Lobatto quadrature nodes and weights Computes Gauss-Legendre-Lobatto quadrature nodes and weights, for any N (number of nodes). Using the nodes x and weights w, you can compute the integral of some function f as follows: integral = sum (f(x) * w). Only double precision is supported - if lower precision is required, you must do the appropriate conversion yourself. Accuracy has been validated up to N=64 by comparing computed results to tablulated values known to be accurate to machine precision (maximum difference from those values is 2 epsilon). subroutine gauss_legendre_lobatto (x, w[, interval]) x: Shall be a rank-one array of type real(real64). It is an output argument, representing the quadrature nodes. w: Shall be a rank-one array of type real(real64), with the same dimension as x. It is an output argument, representing the quadrature weights. interval: (Optional) Shall be a two-element array of type real(real64). If present, the nodes and weigts are calculated for integration from interval(1) to interval(2). If not specified, the default integral is -1 to 1. program example_gauss_legendre_lobatto use iso_fortran_env, dp => real64 use stdlib_quadrature, only: gauss_legendre_lobatto implicit none integer, parameter :: N = 6 real(dp), dimension(N) :: x, w call gauss_legendre_lobatto(x, w) print *, "integral of x**2 from -1 to 1 is", sum(x**2*w) end program example_gauss_legendre_lobatto
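The weight-based view of these rules can be cross-checked outside Fortran. The sketch below (in Python, purely illustrative and not part of stdlib) builds trapezoidal weights the way trapz_weights describes them - half the adjacent spacing at each endpoint, half the surrounding span at each interior point - and reproduces the 22.0 result from the examples above.

```python
def trapz_weights(x):
    # Trapezoidal-rule weights for (possibly non-uniform) abscissas x,
    # so that sum(w[i] * y[i]) equals the trapezoidal integral of y over x.
    n = len(x)
    if n <= 1:
        return [0.0] * n
    w = [0.0] * n
    w[0] = (x[1] - x[0]) / 2.0
    w[-1] = (x[-1] - x[-2]) / 2.0
    for i in range(1, n - 1):
        w[i] = (x[i + 1] - x[i - 1]) / 2.0
    return w

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [v ** 2 for v in x]
w = trapz_weights(x)
integral = sum(wi * yi for wi, yi in zip(w, y))
print(integral)  # matches the trapz/trapz_weights examples above: 22.0
```

For the uniform grid above the weights come out as [0.5, 1, 1, 1, 0.5], which is the familiar "half weight at the ends" pattern of the composite trapezoidal rule.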
Free Math Activities for K-5 and 6-12

Make Math Connect

• What you’ve heard is true: math is a subject that builds. It’s a progressive discipline, and the foundational skills acquired in kindergarten are essential for algebra success and beyond.
• Do you have the math activities you need to help your students make these vital connections?
• Check out our free Make Math Connect activity packs. Inside, you’ll see how number lines are an essential building block of mathematical know-how from kindergarten through Algebra 2.

The Make Math Connect packs include easy-to-facilitate math activities that show how number lines are used from kindergarten through Algebra 2.

Math activities for K-5 students

• Engaging, play-based activities and games
• Resources that can be printed or projected
• Facilitation instructions for each activity
• And more!

Math activities for 6-12 students

• Printable activities that encourage collaboration and discussion
• Facilitation instructions for each activity
• Discourse supports
• And more!
xgboost gamma regression

These days, XGBoost gets more and more popular and is used widely in data science, especially in competitions like those on Kaggle. Regardless of the data type (regression or classification), it is well known to provide better solutions than other ML algorithms. Unlike Gradient Boost, XGBoost makes use of regularization parameters that help against overfitting; gamma is one of them, and its range is [0, Infinite[.

XGBoost improves on the regular Gradient Boosting method by: 1) improving the process of minimization of the model error; 2) adding regularization (L1 and L2) for better model generalization; 3) adding parallelization. The higher gamma is, the stronger the regularization, and finding a "good" gamma is very dependent on both your data set and the other parameters you are using.

Before we start to talk about the math, here is a brief review of XGBoost regression. When a candidate split is evaluated, we compare its gain against the gamma threshold (and against the gains of other thresholds, to find the biggest one for a better split): if the gain minus gamma is positive, we keep the branch. If we prune all the way back to the root, all we are left with is the initial prediction, which is an extreme case of pruning. In this post, we'll also learn how to define the XGBRegressor model and predict regression data in Python.
XGBoost gained much popularity and attention recently as the algorithm of choice for many winning teams of machine learning competitions; it is described in the paper "XGBoost: A scalable tree boosting system". For gamma regression specifically, the objective "reg:gamma" performs gamma regression with a log link.

To find the optimal output value of a leaf, the first thing XGBoost does is multiply the whole (downward-opening, parabolic) equation by -1, which flips the parabola so that maximizing it becomes a minimization problem. The lambda term prevents over-fitting the training data. The tree_method parameter offers the choices auto, exact, approx, hist and gpu_hist; auto uses a heuristic to choose the fastest method, and some of these are a combination of commonly used updaters (for other updaters, like refresh, set the parameter updater directly).

Two knobs force pruning directly: gamma is the first controller, pruning on the gain of a split, and min_child_weight is the second controller, pruning on the sum of the derivatives (hessian weights) in a leaf.
XGBoost is known for its good performance as compared to all other machine learning algorithms: it works on parallel tree boosting, predicting the target by combining the results of multiple weak models. It is a scalable machine learning system for tree boosting and a powerful approach for building supervised regression models; the validity of this statement can be inferred by knowing about its objective function and base learners. Just like Gradient Boost, XGBoost is the extreme version of it - this extreme implementation of gradient boosting, created by Tianqi Chen, was published in 2016.

Before running XGBoost, we must set three types of parameters: general parameters, booster parameters and task parameters. General parameters relate to which booster we are using to do boosting, commonly tree or linear model; booster parameters depend on which booster you have chosen (experimental support for external memory is available for approx and gpu_hist). Check out the official documentation for some tutorials on how XGBoost works. You can use XGBoost either as an Amazon SageMaker built-in algorithm or as a framework to run training scripts in your local environment, alongside libraries such as pandas and scikit-learn.

When we use XGBoost, it starts with an initial prediction, and we use the loss function to evaluate whether that prediction works well or not. We then keep building trees on the new residuals, each new prediction giving smaller residuals, until the residuals are very small or we have reached the maximum number of trees. The gain quantifies how much better the leaves cluster similar residuals than the root does. With a high depth such as 15 in this data set, you can train yourself using gamma.
If the gain is less than the gamma value, then the branch is cut and no further splitting takes place; otherwise splitting continues. Gamma is a pseudo-regularization hyperparameter in gradient boosting, used for complexity control. How do we find the range for this parameter? Unfortunately, a gamma value tuned for a specific max_depth does NOT work the same with a different max_depth, so always start with 0, use xgb.cv, and look at how the train/test scores are faring.

XGBoost is a very fast, scalable implementation of gradient boosting, with models using XGBoost regularly winning online data science competitions and being used at scale across different industries; it was presented at the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. It is a tree-based ensemble machine learning algorithm with high predicting power and performance, achieved by improving on the Gradient Boosting framework with accurate approximation algorithms. A decision tree is a simple rule-based system, built around a hierarchy of branching true/false statements. You can use XGBoost for regression, classification (binary and multiclass), and ranking problems. (This post was originally at Kaggle.)

A common question: my dataset has all positive values, but some of the predictions are negative - what if we set the XGBoost objective to minimize the deviance function of a gamma distribution, instead of minimizing RMSE? For the leaf output, the optimal output value represents the x-axis of the highest point in the parabola, and the corresponding y-axis value is the similarity score. Now let us do some simple algebra based on that result. This article will explain the math behind it in a simple way to help you understand the algorithm: we build the XGBoost regression model in 6 steps, then talk about tree pruning based on its gain value.
XGBoost uses the loss function to build trees by minimizing the following value: the first part is the loss function itself, which calculates the pseudo-residuals between the predicted value and the true value in each leaf, and the second part is the regularization term. Using a second-order Taylor approximation, XGBoost works with the simplified expression (g1+g2+….+gn)ft(xi) + 1/2(h1+h2+…..+hn+lambda)ft(xi)*ft(xi) to determine the output value and the similarity score. After we build the tree, we determine the output value for each leaf.

The xgboost demo folder contains a script that demonstrates how to fit a gamma regression model (with log link function); before running the demo you need to generate the autoclaims dataset by running gen_autoclaims.R, located in xgboost/demo/data. The impact of the system has been widely recognized in a number of machine learning and data mining challenges. You can find more about the model in the official documentation, which also explains these regularization parameters without going into the theoretical details. For learning how to implement the XGBoost algorithm for regression problems, we are going to build one with sklearn's famous Boston house price dataset; the datasets for this tutorial are from scikit-learn. Another typical and most preferred tuning choice: step max_depth down.
Tuning gamma in practice: lower gamma when the test CV cannot follow the train CV (a good relative move if you don't know better: cut 20% of gamma away until your test CV grows without having the train CV frozen). Easy question - when should you use a high gamma? When you want to use shallow trees because you expect them to do better. Depending on the speed at which the train/test CV curves increase, you try to find an appropriate gamma. Reasonable companion settings: colsample_bytree = ~0.70 (tune this if you don't manage to get 0.841 by tuning gamma) and nrounds = 100000 (use early.stop.round = 50); a very high depth calls for a high gamma (like 3 or more).

The regression tree itself is a simple machine learning model that can be used for regression tasks: at every split we calculate the similarity for each group (left and right). However, many people may find the equations in XGBoost too complicated to understand.
We start by picking a number as a threshold, which is gamma. (I was playing around with XGBoost on financial data and wanted to try out the gamma regression objective - or, if not gamma deviance, what other objectives might you minimize for a regression problem?) Now, let us first check the first part of the equation: by substituting gi and hi, we can rewrite it as 1/2*(g1+g2+…+gn)*(g1+g2+…+gn)/(h1+h2+…+hn+lambda). The gain of a split is then: Gain = Left similarity + Right similarity - Root similarity. If Gain minus gamma is positive, we keep the branch; note that we will never remove the root if we do not remove the first branch. To see the effect of regularization, we go back to the original residuals and build a tree just like before, the only difference being that we change the lambda value to 1. The final prediction is the initial prediction plus the learning rate (eta) times the output value.

Using gamma will always yield a higher performance than not using gamma, as long as you find the best set of parameters for gamma to shine - regardless of the type of prediction task at hand, regression or classification. If you understood the sentences above, you can now understand why tuning gamma is dependent on all the other hyperparameters you are using. Take the following example: you sleep in a room during the night, and you need to wake up at a specific time (but you don't know when you will wake up by yourself!). You know the dependent features of "when I wake up" are: noise, time, cars - and noise is made of 1000 other features. Put a higher gamma when needed (a good absolute move if you don't know better: +2, until your test CV can follow your slower train CV; your test CV should be able to peak). In the example runs, the models in the middle (gamma = 1 and gamma = 10) are superior in terms of predictive accuracy.
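To make the pruning arithmetic concrete, here is a small sketch in Python. The residual values are made up for illustration (they are not from any dataset in this post), and it uses the simplified squared-error case described above, where the similarity score reduces to (sum of residuals)^2 / (number of residuals + lambda) and a branch survives only if Gain minus gamma is positive.

```python
def similarity(residuals, lam=0.0):
    # Simplified similarity score: (sum of residuals)^2 / (count + lambda).
    return sum(residuals) ** 2 / (len(residuals) + lam)

def gain(left, right, lam=0.0):
    # Gain = Left similarity + Right similarity - Root similarity,
    # where the root holds all residuals of both children.
    root = left + right
    return similarity(left, lam) + similarity(right, lam) - similarity(root, lam)

# Hypothetical residuals on either side of one candidate split:
left, right = [-10.5, -7.5], [6.5, 7.5]
g = gain(left, right, lam=0.0)
print(g)  # 256.0: 162.0 (left) + 98.0 (right) - 4.0 (root)

# With gamma as the pruning threshold, keep the branch only if g - gamma > 0:
gamma = 1.0
print("keep branch" if g - gamma > 0 else "prune branch")
```

Raising lambda shrinks every similarity score, which makes gains smaller and leaves easier to prune - exactly the regularizing effect described in the text.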
How to Apply the Empirical Rule in Excel

by Tutor Aspire

The Empirical Rule, sometimes called the 68-95-99.7 rule, states that for a given dataset with a normal distribution:

• 68% of data values fall within one standard deviation of the mean.
• 95% of data values fall within two standard deviations of the mean.
• 99.7% of data values fall within three standard deviations of the mean.

In this tutorial, we explain how to apply the Empirical Rule in Excel to a given dataset.

Applying the Empirical Rule in Excel

Suppose we have a normally-distributed dataset with a mean of 7 and a standard deviation of 2.2. The following screenshot shows how to apply the Empirical Rule to this dataset in Excel to find which values 68% of the data falls between, which values 95% of the data falls between, and which values 99.7% of the data falls between:

From this output, we can see:

• 68% of the data falls between 4.8 and 9.2
• 95% of the data falls between 2.6 and 11.4
• 99.7% of the data falls between 0.4 and 13.6

The cells in columns F and G show the formulas that were used to find these values. To apply the Empirical Rule to a different dataset, we simply need to change the mean and standard deviation in cells C2 and C3.
For example, here is how to apply the Empirical Rule to a dataset with a mean of 40 and a standard deviation of 3.75:

From this output, we can see:

• 68% of the data falls between 36.25 and 43.75
• 95% of the data falls between 32.5 and 47.5
• 99.7% of the data falls between 28.75 and 51.25

And here is one more example of how to apply the Empirical Rule to a dataset with a mean of 100 and a standard deviation of 5:

From this output, we can see:

• 68% of the data falls between 95 and 105
• 95% of the data falls between 90 and 110
• 99.7% of the data falls between 85 and 115

Finding What Percentage of Data Falls Between Certain Values

Another question you might have is: what percentage of the data falls between certain values?

For example, suppose you have a normally-distributed dataset with a mean of 100, a standard deviation of 5, and you want to know what percentage of the data falls between the values 99 and 105.

In Excel, we can easily answer this question by using the function =NORM.DIST(), which takes the following arguments:

NORM.DIST(x, mean, standard_dev, cumulative)

• x is the value we’re interested in
• mean is the mean of the distribution
• standard_dev is the standard deviation of the distribution
• cumulative takes a value of “TRUE” (returns the CDF) or “FALSE” (returns the PDF) – we’ll use “TRUE” to get the value of the cumulative distribution function.

The following screenshot shows how to use the NORM.DIST() function to find the percentage of the data that falls between the values 99 and 105 for a distribution that has a mean of 100 and a standard deviation of 5:

We see that 42.1% of the data falls between the values 99 and 105 for this distribution.

Helpful Tools:
Empirical Rule Calculator
Empirical Rule (Practice Problems)
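If you want to sanity-check these Excel results outside a spreadsheet, the same calculation can be done with Python's standard library: NormalDist.cdf plays the role of NORM.DIST(x, mean, standard_dev, TRUE). This is just an illustrative cross-check, not part of the Excel workflow above.

```python
from statistics import NormalDist

# The third example above: mean = 100, standard deviation = 5.
dist = NormalDist(mu=100, sigma=5)

# Empirical-rule bands around the mean (about 68%, 95%, 99.7%):
for k in (1, 2, 3):
    lo, hi = 100 - k * 5, 100 + k * 5
    pct = dist.cdf(hi) - dist.cdf(lo)
    print(f"within {k} sd: {lo} to {hi} ({pct:.1%})")

# Percentage of data between 99 and 105, as in the NORM.DIST example:
share = dist.cdf(105) - dist.cdf(99)
print(f"{share:.1%}")  # about 42.1%, matching the Excel result
```

The subtraction of two cumulative values is exactly what the spreadsheet formula does: the CDF at the upper bound minus the CDF at the lower bound.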
Figure 3-2. Division

The point where the diagonals cross is the center of the rectangle or square. This simple rule is invaluable; it enables you to solve problems that are seemingly unsolvable.

Figure 3-2.

a. At other times it is necessary to divide an area, or a diagonal, into a number of parts. Here a ruler alone will not suffice. As with most division of space, the vertical (fig 3-2) or horizontal line (not shown) parallel to the picture plane is the key. Aspect B of Figure 3-2 shows the subdivision of a cube, portions removed. For example, to divide a receding plane into any number of units, divide the left vertical height into the desired number of parts with a ruler as shown in Figure 3-2. Draw lines from the points of division on the vertical line out to the vanishing point. Then draw a line from corner to corner as shown, and the intersections of the diagonal and the horizontal lines drawn to the vanishing point are the correct points to add the other vertical lines.

b. Figure 3-3 shows the correct method of dividing a rectangular area into uniform rectangular patterns, such as floor tiles. The width of the squares is first measured on a horizontal line (A). Two vanishing points are established and lines are drawn from the divided horizontal line to the left vanishing point, then the depth is established by drawing lines to the right vanishing point. A diagonal line is drawn from corner to corner, points 1 and 2. Where the diagonal intersects the lines drawn to the left vanishing point are the correct points for the receding lines to be drawn to the right vanishing point. Notice that the lower drawing is a one-point perspective.
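The diagonal trick in paragraph a can also be verified numerically. The sketch below (Python, with made-up coordinates: a receding plane whose near vertical edge runs from (0, 0) to (0, 3) and is divided into three equal parts, a vanishing point at (9, 1.5), and a far edge at x = 4) finds where the corner-to-corner diagonal crosses the lines drawn to the vanishing point. The crossings land progressively closer together toward the far edge, which is exactly the foreshortening the construction is meant to produce.

```python
def intersect(p1, p2, p3, p4):
    # Intersection point of the infinite lines through p1-p2 and p3-p4.
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

vp = (9.0, 1.5)        # vanishing point on the horizon (assumed)
near_top = (0.0, 3.0)  # top of the near vertical edge, divided into 3 parts
far_x = 4.0            # depth of the plane's far vertical edge

# Far-top corner: where the line from the top of the near edge to the
# vanishing point crosses the far edge.
far_top = intersect(near_top, vp, (far_x, 0.0), (far_x, 1.0))

# Diagonal from the near-bottom corner to the far-top corner.
diag = ((0.0, 0.0), far_top)

# Crossings of the diagonal with the receding lines from the 1/3 and 2/3
# division points give the depths of the intermediate vertical lines.
xs = [intersect(diag[0], diag[1], (0.0, h), vp)[0] for h in (1.0, 2.0)]
print(xs)  # depths of the division verticals, compressed toward the far edge
```

Note how the spacing between successive verticals shrinks with depth, matching what the manual asks you to draw by hand.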
Function parameters with default values

In the previous article, we saw a lot of cool things we could do with function parameters. We also learned from the examples that JavaScript is quite flexible when it comes to function parameters - it doesn't care too much about whether you supply the expected arguments when calling a function or not. Instead, by default, any function parameter which isn't passed into the function by the caller will be "undefined".

However, sometimes it makes sense to provide a default value for a parameter instead of "undefined". Prior to the ES6 specification, JavaScript didn't allow you to do this directly, but it could be done with a workaround like this one:

function AddNumbers(n1, n2)
{
	if(n1 === undefined)
		n1 = 0;
	if(n2 === undefined)
		n2 = 0;
	return n1 + n2;
}

alert(AddNumbers());

Notice how the function defines two parameters, but when I call it in the last line, I don't supply any arguments for it. However, in the function, I actually do a couple of checks for this, to make sure that we define a value for the parameters (n1 and n2) if no value is supplied by the function caller - the parameters now have a default/fallback value of 0.

One of the issues with this approach, besides the fact that it adds several lines of boring code to your function, is the fact that the caller of the function can't know that these parameters are actually not required, because they have a fallback/default value. Fortunately for us, default parameters were introduced in the ES6 specification, allowing you to define a default value for a function parameter.

Simple default values

The syntax for defining default values for a parameter is quite simple - you just assign a value to it directly in the function declaration. This value will act as a fallback value, instead of "undefined", in case the caller of the function doesn't supply it.
Here's an example of it:

function AddNumbers(n1 = 0, n2 = 0, n3 = 0)
{
	return n1 + n2 + n3;
}

alert(AddNumbers());
alert(AddNumbers(5, 10));
alert(AddNumbers(5, 5, 10));

I have now defined default values for all the parameters (0), and as you can see, I can then call the same function with zero or more arguments, without having to check the parameters for "undefined" in the function.

When calling a function with default parameter values, sometimes you will find yourself in a situation where you want to use the fallback value for the first parameter, but provide another value for one of the other parameters. The tricky part here is of course that if you want to supply a value for parameter #2, how do you let JavaScript know that you don't want to supply a value for parameter #1? Here's an example where we do just that:

function Greet(greeting = "Hello", target = "World")
{
	alert(greeting + ", " + target + "!");
}

Greet();
Greet(undefined, "Universe");

We call the Greet() function with various parameters, but notice the last line, where we call it with two arguments: The first argument is simply set to undefined, because when a function has a default value declared, JavaScript will always use this instead of undefined. This allows us to supply a real value for the second parameter, while still using the fallback value for the first.

Complex default values

In some programming languages, default function parameter values are restricted to constant values, e.g. simple numbers and text strings. However, JavaScript is much more flexible in this regard. In fact, you can do pretty much anything - JavaScript will simply evaluate the statement you supply as any other kind of JavaScript code, when calling the function. Let's look at some of the things this will allow you to do.
Math and referencing other parameters

Please notice: This example is completely useless on its own but simply shows you that you can do math, even referencing other parameters, when defining default values for parameters:

function AddNumbers(n1 = 0, n2 = 2 + 2, n3 = n2 - 8) {
    return n1 + n2 + n3;
}

Instantiating objects

You can instantiate new objects and use them as default values, e.g. a Date object, like this:

function Greet(greeting = "Hello", target = "World", date = new Date()) {
    alert("A greeting from " + date.toString() + ": " + greeting + ", " + target + "!");
}

Calling functions

Just as well as you can instantiate an object, you can of course call a simple function and use the returned value as the default value for the parameter:

function AddNumbers(n1 = 0, n2 = Math.random() * 10) {
    return n1 + n2;
}

In this example, I call the Math.random() function to get a random number between 0 and 1, then multiply it by 10, for the second parameter.

Required parameters using default values

You are of course free to call your own functions as well, and in fact, this allows us to do something pretty cool. We previously talked about the fact that JavaScript doesn't enforce parameters when calling a function - if a function declares one or several parameters, you are free to call this function without specifying any arguments. In this situation, the function will have to handle the fact that it can't rely on the parameters having any value, usually by supplying a default value or by checking, before using the parameter, if it has the expected value.
However, sometimes this is not optimal and you would wish that JavaScript would just force the caller of the function to supply the expected number of arguments - and hallelujah, we can actually accomplish just that, using default values:

function Required() {
    // Leave out the alert if using this code
    alert("Please specify a value for this function parameter!");
    throw new Error("Please specify a value for this function parameter!");
}

function AddNumbers(n1 = Required(), n2 = Required()) {
    return n1 + n2;
}

alert(AddNumbers(5, 10));

First things first: You should probably not include the alert() part if using this code - it's only included to show you that something is happening if you run this example. In a real-world scenario, you probably don't want to bother the user of your website with an implementation detail like this. So, what I'm doing here is that I have defined a function called Required(). I set this as the default value for the two parameters of the AddNumbers() function, which means that whenever AddNumbers() is called with insufficient arguments, the Required() function is called automatically. In the Required() function, we throw an Error, which will halt the execution of the code. By doing so, we have made sure that nothing will really work if the parameters are not provided by the caller of the function, thereby making the parameters required. In other words, we have just added a feature to the JavaScript language which didn't exist before, thanks to the extreme flexibility of the language in general and the default function values in particular. Pretty cool, right?

With default parameter values, you can supply your functions with fallback values, which will be used if the caller of the function fails to supply them. The default parameter values can be simple types like numbers or strings, or even more complex operations like invoking a function.
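A small variation on this trick (my own sketch, not from the article above): pass the parameter name into Required(), so the thrown Error tells the caller exactly which argument is missing.

```javascript
function Required(name) {
    throw new Error("Missing required parameter: " + name);
}

function AddNumbers(n1 = Required("n1"), n2 = Required("n2")) {
    return n1 + n2;
}

console.log(AddNumbers(5, 10)); // 15

try {
    AddNumbers(5); // n2 is omitted, so Required("n2") runs and throws
} catch (e) {
    console.log(e.message); // "Missing required parameter: n2"
}
```

Because the default value expression is only evaluated when the argument is missing, Required() costs nothing on normal calls.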
Who offers services to take multivariable calculus exams for others? | Hire Someone To Do Calculus Exam For Me

Who offers services to take multivariable calculus exams for others? Can we help in any kind of way? I am about to tell you a story which I am not really familiar with. I was in class myself and we were at a desk job and like most students, we were planning on doing something we no longer wanted to do. So we started out as students and after we applied a mathematical term in an area, we suddenly entered as part of a group of study for a group of school kids to study. This group of kids had completed mathematics of a year and we had run and have a group of students. They took all the classes in this group and asked me what I had started in this group. I said, ‘We started in your area in maths and in courses in special education so since you cannot build a group of some school kids into one school and they need you to get in university, you must be considered a non-Maths student…’ Well my interest was in wanting to play a part and it was definitely accepted as a professional way of saying, ‘Thank you’ and so since the class were not thinking about what the group aims to do but what they have now, we ended up having them work on the group but nothing was accepted as it seemed to them to be ‘good enough’ or ‘I will pay you back’ and so on. Their job was being as student as the one who taught them and after several questions as an added reward, they would then finish the group and enter as a group. So that was it. There was a bunch of kids who needed help, who didn’t have the experience of a group but were very comfortable to know that they could help with something. They were almost ready for the idea but offered up to us even after their numbers and their difficulty were very different…’ Well we went up and worked on the group teaching for about 10 days with total success.
Now we took the money and went back to my department and then out.

Who offers services to take multivariable calculus exams for others? Make sure to check out their website for more details though. The site has information about which different kinds of exams are available, so keep in mind to enter your information in the search for these online exams. After downloading 10 of this year’s editions of Calculus X in the pdf format, for the first time a comparison of the student’s college-grade tests gives a link and provides some comparisons between all student tests in these products. Not a lot of studies between Calculus X and Math in the past, but Math is extremely popular nowadays and once you read the manual you can check with your Google search page to see which one is more suitable for you.

About Us

This is a Google Scholar search! You can find additional links, graphics and links of each of the popular US products such as Google Images, Keywords and other sites on this page. So if you are looking for some more Google Scholar related articles please get in touch anyway.
Who offers services to take multivariable calculus exams for others? Join the 24 January 2016! [FLEE4] FLEE4: the challenge of using fractional calculus to solve the differential structure of many equations with unknown variables. The purpose of this talk is to explore more about how fractions can be used to solve most equations with unknown variables of differential nature. By seeing the history of methods in the field of fractional calculus, we are able to take into consideration possible advances we are aware of in these fields. These publications are typically held in electronic form. As we have seen, the topic’s text version comes formatted in plain text in our free eBook guide. If you have any queries or questions about the book or the source text, as well as your own publications, click below. You can go to the audio and the pdf directly to listen. This book covers many topics related to fractional calculus and has tons of tips and tricks. Fractional calculus can give you a lot of valuable lessons, and there are lots of ways to use it. Download this book on Google or click here to read the latest chapter.

Jed FLEE 4: a lot of tutorials or interactive lectures help you understand the complex differential equation. This book is much easier to understand and should be sold or bought as PDF. Click here to read the book. Download this book on Google or click here to read the latest chapter.

About the Author

FLEE4 has proven itself to be a successful and logical method to solve complex differential equations.
Generating a Synthetic Clustering Dataset (SPMF documentation)

This example explains how to generate a synthetic clustering dataset using the SPMF open-source data mining library.

How to run this example?

• If you are using the graphical interface,
□ (1) choose the "Generate_a_clustering_dataset" algorithm,
□ (2) set the output file name (e.g. "output.txt")
□ (3) set the algorithm parameters as follows:
☆ Point per cluster = 300
☆ Attribute count = 2
☆ Visualize generated data = true
☆ Distribution cluster 1 = Normal(10,3) Normal(20,3)
☆ Distribution cluster 2 = Uniform(-5,5) Uniform(-5,5)
☆ Distribution cluster 3 = Normal(20,2) Normal(0,2)
□ (4) click "Run algorithm".
• If you are using the source code version of SPMF, launch the file "MainTestGenerateClusteringData.java" in the package ca.pfv.SPMF.tests.

What is this tool?

This tool is a random generator for creating a synthetic dataset to be used for testing clustering algorithms. The generator can create a dataset that contains a given number of clusters that follow some distributions such as the Normal or Uniform distribution, and that have a selected number of attributes and number of instances per cluster. If running this tool from the graphical interface of SPMF, the number of clusters must be from 1 to 5. But if using the source code version of SPMF, the number of clusters is not restricted. Synthetic databases are often used in the data mining literature to evaluate algorithms. In particular, they are useful for comparing the scalability of algorithms, or for evaluating an algorithm with respect to some ground truth.

What is the input?

The tool for generating a clustering dataset takes these parameters as input:

1) the number of points (instances) per cluster (positive integer). In this example, it is set to 300 so that each cluster will have 300 points. In the graphical interface of SPMF, all clusters must have the same number of points.
In the source code version of SPMF, it is possible to also generate clusters with different numbers of points.

2) the number of attributes (positive integer). For instance, in this example, it is set to 2. This means that each instance (point) in the generated database will be a vector of 2 values (it will be in 2 dimensions).

3) Visualize generated data (boolean). If set to true, the generated data will be displayed using the Instance Viewer Tool of SPMF after the data has been generated. Otherwise, the data will not be displayed and will only be saved to the output file.

4) The distributions to be used for generating the attributes of each cluster. For instance, in this example, we have three clusters and two dimensions. For the first cluster, we set Normal(10,3) Normal(20,3), which means that points in the first cluster will be generated with two attributes. The first attribute will follow a normal distribution with a mean of 10 and a standard deviation of 3, while the second attribute will follow a normal distribution with a mean of 20 and a standard deviation of 3. The second cluster also has two attributes. The first one follows a uniform distribution with a minimum value of -5 and a maximum value of 5. The second one also follows a uniform distribution with a minimum value of -5 and a maximum value of 5. The distributions of the third cluster can be explained in the same way. Note that all clusters should have the same number of dimensions, which is 2 in this example.

What is the output?

The algorithm outputs a database of instances respecting the parameters provided. It is a text file. A random number generator is used to generate the database, and thus if the tool is run several times, the output will be different. The format of the output file is as follows. Each line is an instance (point), described as a list of double values separated by single spaces. These values are the values for the attributes (dimensions).
For example, here are a few lines from a file generated for this example:

8.223056957474475 18.305806094148416
18.53173409031428 17.12800320556778
11.78815889838998 15.172209880225482
14.465913286699111 18.73693056669763
12.132693927624041 20.89521556764735
8.034607447696011 24.794130080242603
6.79806783859077 28.83137892160805
9.11577562802012 18.94254052873174
10.092137470883381 17.236132146276898
11.063982605866594 18.171935123957358
9.523453050588738 19.51941579740386
...

The first line is an instance (point) with the value 8.223056957474475 for the first attribute (dimension) and the value 18.305806094148416 for the second attribute. The other lines follow the same format.

Visualizing the output

If the parameter Visualize generated data is set to true, when running this tool, the generated dataset will be displayed visually using the Instance Viewer Tool of SPMF.

Applying clustering algorithms on the generated dataset

Then, after the dataset is generated, we can apply some clustering algorithms on it, such as K-Means, DB-Scan and others, which are offered in SPMF. You may see the documentation of each specific clustering algorithm to see how to use them.
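For readers who want to reproduce this kind of dataset outside SPMF, the generation process described above can be sketched in a few lines of Python. This is an illustrative sketch, not SPMF's own implementation; the cluster specifications and the file name "output.txt" simply mirror the example parameters above.

```python
import random

# Cluster specifications mirroring the example above: each attribute of a
# cluster is drawn from either a Normal(mean, sd) or a Uniform(min, max).
clusters = [
    [("normal", 10, 3), ("normal", 20, 3)],
    [("uniform", -5, 5), ("uniform", -5, 5)],
    [("normal", 20, 2), ("normal", 0, 2)],
]

def draw(spec):
    kind, a, b = spec
    return random.gauss(a, b) if kind == "normal" else random.uniform(a, b)

def generate(clusters, points_per_cluster=300, seed=42):
    """Generate points cluster by cluster; each point is one attribute
    vector drawn from that cluster's per-attribute distributions."""
    random.seed(seed)
    rows = []
    for attribute_specs in clusters:
        for _ in range(points_per_cluster):
            rows.append([draw(spec) for spec in attribute_specs])
    return rows

rows = generate(clusters)

# Write the dataset in the same format as described above:
# one instance per line, attribute values separated by single spaces.
with open("output.txt", "w") as f:
    for row in rows:
        f.write(" ".join(str(v) for v in row) + "\n")
```

As with the SPMF tool, changing the seed (or removing it) yields a different dataset on each run.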
Property Values, Water Quality, and Benefit Transfer: A Nationwide Meta-analysis

We construct a comprehensive, publicly available meta-dataset based on 36 hedonic studies that examine the effects of water quality on housing values in the United States. The meta-dataset includes 656 unique estimates and entails a cluster structure that accounts for price effects at different distances. Focusing on water clarity, we estimate reduced-form meta-regressions that account for within-market dependence, statistical precision, housing market and waterbody heterogeneity, publication bias, and methodological practices. Although we find evidence of systematic heterogeneity, the out-of-sample transfer errors are large. We discuss the implications for benefit transfer and future work to improve transfer performance. (JEL Q51, Q53)

1. Introduction

The hedonic literature examining the effects of surface water quality on residential property values began over 50 years ago with David’s (1968) report. Since then, the literature has evolved significantly. To assess this literature’s suitability to support management decisions related to water quality, we use meta-analytic methods to synthesize and draw key conclusions from 36 unique studies in the United States. There are several existing meta-analyses of hedonic property value studies, including applications to air quality (Smith and Huang 1993, 1995), contaminated sites (Messer et al. 2006; Kiel and Williams 2007; Schütt 2021), open space (Mazzotta, Besedin, and Speers 2014), and noise (Nelson 2004). To our knowledge, this study is the first comprehensive and rigorous meta-analysis of the hedonic literature examining surface water quality.^1 The results from meta-analyses can help make predictions for benefit transfer—where an analyst uses the predicted outcomes to infer ex ante or ex post effects of some policy action, in lieu of conducting a new original study.
Analyses of public policies often rely on benefit transfer because original studies require a lot of time and money or are infeasible because of data constraints. In fact, benefit transfer is one of the most common approaches used to complete benefit-cost analyses at the U.S. Environmental Protection Agency (U.S. EPA 2010; Newbold et al. 2018a). Improving benefit transfer, as well as combining limited but heterogeneous information for surface water quality changes, remains a priority for policy makers (Newbold et al. 2018a). Several meta-analyses of stated preference studies on water quality have been published (e.g., Johnston, Besedin, and Stapler 2017; Newbold et al. 2018b; Johnston, Besedin, and Holland 2019; Moeltner et al. 2019), and these studies are the workhorse for benefit analyses of federal water quality policies (Griffiths et al. 2012; U.S. EPA 2015; Corona et al. 2020). Our meta-analysis complements these efforts. Hedonic property value studies provide a revealed preference-based estimate of values for a subset of households living close to a body of water and thus circumvent concerns related to the use of stated values based on hypothetical scenarios. Our study aggregates the hedonic property value literature examining water quality and systematically calculates comparable within- and cross-study elasticity estimates by accounting for differences in functional forms, assumed price-distance gradients, and baseline water quality conditions. We convert the primary study coefficient estimates to common elasticity measures for waterfront and near-waterfront homes, and we use Monte Carlo simulations to estimate the corresponding standard errors. Each study yields numerous meta-observations because of multiple study areas, water quality metrics, and model specifications, leading to a meta-dataset that contains more than 650 unique observations. 
We find considerable differences across the studies in terms of how water quality is quantified, the type of waterbody studied, and the region of the United States examined. We often find it difficult to convert the disparate water quality measures to a common metric. The analysis in this study focuses on water clarity, where there is a sufficient number of observations (n = 260) for regression analysis. We test for systematic heterogeneity in the housing price elasticities across different regions and types of waterbodies, and we account for best methodological practices and publication bias in the literature. Benefit transfer performance across the different models is compared using an out-of-sample transfer error exercise. Unit value transfers and the simplest weighted least squares (WLS) meta-regression models yield the lowest median transfer error, and we discuss the implications for benefit transfer. Along with recommendations to practitioners, we provide guidance on combining our results with available data to assess local, regional, and national policies affecting water quality. We highlight gaps in the literature regarding the types of waterbodies and regions covered, and the disconnect between the water quality metrics examined by economists versus those by water quality modelers and policy makers.

2. Meta-dataset

Identifying Candidate Studies and Inclusion Criteria

In developing the meta-dataset, we followed recent meta-analysis guidelines for searching and compiling the literature (Stanley et al. 2013; Havránek et al. 2020).^2 We focused on studies examining the relationship between residential property values and measures of surface water quality.^3 In total, we identified 65 studies in the published and gray literature that were potentially relevant. To facilitate linkages between water quality models and economic valuation, and ultimately to perform more defensible benefit transfers for U.S.
policies, focus was drawn to the 36 primary studies that examined surface water quality in the United States using objective water quality measures.^4 A full list of the studies is provided in Appendix A. Although it was published after the construction of the meta-dataset, the list of identified studies was compared with an extensive literature review by Nicholls and Crompton (2018), which provided additional assurance that our identified set of studies is comprehensive. The final meta-dataset is publicly available on the U.S. Environmental Protection Agency’s (EPA) Environmental Dataset Gateway.^5

Meta-dataset Structure and Details

From the selected 36 studies, 26 are published in peer-reviewed academic journals, 3 are working papers, 3 are master’s or Ph.D. theses, 2 are government reports, 1 is a presentation, and 1 is a book chapter. The year of publication ranges from 1979 to 2017. The majority of primary studies examine freshwater lakes (24 studies), followed by estuaries (6 studies), rivers (2 studies), and small rivers and streams (3 studies). One study examines both lakes and rivers. As shown in Figure 1, spatial coverage is limited in the southwest, west-central, and parts of the southern United States, while the Northeast and some parts of the Midwest and South have the most studies. The meta-dataset consists of a panel or cluster structure, where each study can contribute multiple observations. Individual studies may analyze multiple study areas, water quality metrics, distances, and model specifications. When selecting observations for inclusion in the meta-data, researchers tend to follow one of two different approaches (Boyle and Wooldridge 2018). The first approach is to only include the “preferred” estimate from each study, but in the current context, and as described by Boyle and Wooldridge (2018) more generally, this leads to two practical issues. First, in many cases the primary study authors do not identify a preferred model.
For example, among the 18 hedonic studies examining water clarity, in only three cases do the primary study authors identify a preferred model (Hsu 2000; Olden and Tamayo 2014; Zhang and Boyle 2010). Second, even in cases where a preferred estimate is explicitly claimed, the decision criteria differ across researchers and are often unknown to the meta-analyst. To avoid introducing additional subjectivity and potential biases associated with choosing a single estimate (Viscusi 2015; Vesco et al. 2020), we take the second approach described by Boyle and Wooldridge (2018). We include all applicable observations in our meta-dataset, even in cases where the primary estimates do not differ in terms of population, water quality measure, and study area. Recent meta-analyses have taken this same approach (Havránek, Horvath, and Zeynalov 2016; Klemick et al. 2018; Jachimowicz et al. 2019; Johnston, Besedin, and Holland 2019; Penn and Hu 2019; Subroy et al. 2019; Brouwer and Neverre 2020; Vedogbeton and Johnston 2020; Vesco et al. 2020; Schütt 2021). Each primary study estimate, even if pertaining to the same commodity and population, provides a unique observation of the underlying data-generating process for which we want to estimate the parameters.^6 There are 30 different measures of water quality examined in the literature. To be fully transparent and provide the most information for practitioners to choose from when conducting benefit transfers, the meta-dataset includes all water quality measures. The pooling of estimates across different water quality measures, however, is not necessarily appropriate. 
Even when converted to elasticities, a 1% change in Secchi disk depth (i.e., how many meters you can see down into the water) means something very different from a 1% change in fecal coliform counts, pH levels, or nitrogen concentrations, for example.^7

Formatting Comparable Elasticity Estimates

A key challenge in constructing any meta-dataset is to ensure that all the outcomes of interest are comparable across studies (Nelson and Kennedy 2009). By focusing on a single methodology, the outcome of interest itself is always the same—the price effects on residential property values. However, we must still account for two other factors that would otherwise diminish the comparability of results across studies, both of which pertain to assumptions in the original hedonic regression models. The first form of cross-study differences is a common obstacle for meta-analysts. Differences in functional form lead to coefficient estimates that have different interpretations. In the hedonic literature, some studies estimate semi-log, double-log, and even linear models. Other studies include interaction terms between the water quality measure and various attributes of the waterbody (e.g., surface area) to model heterogeneity. To address these differences, we convert the coefficient estimates from the primary studies to common elasticity and semi-elasticity estimates based on study-specific model-by-model derivations, which are carefully detailed in Appendix A.2. These calculations sometimes include the mean transaction price and mean values of observed covariates, as reported in the primary study. These variables enter the elasticity calculations due to interaction terms or other functional form assumptions in the primary studies. The second form of cross-study differences involves how the home price effects of water quality are allowed to vary with distance to the waterbody.
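To make the first conversion concrete, the standard mappings from hedonic coefficients to price elasticities, evaluated at sample means, can be sketched as follows. This is an illustration of the general idea with hypothetical numbers, not the study-specific derivations detailed in Appendix A.2:

```python
def price_elasticity(form, beta, q_mean=None, p_mean=None):
    """Percent change in price for a 1% change in water quality Q,
    evaluated at the sample means reported by the primary study."""
    if form == "double-log":   # ln P = ... + beta * ln Q  ->  elasticity = beta
        return beta
    if form == "semi-log":     # ln P = ... + beta * Q     ->  beta * mean(Q)
        return beta * q_mean
    if form == "linear":       # P = ... + beta * Q        ->  beta * mean(Q) / mean(P)
        return beta * q_mean / p_mean
    raise ValueError("unknown functional form: " + form)

# Hypothetical numbers, for illustration only: a semi-log model with a
# coefficient of 0.05 on Secchi depth and a sample mean clarity of 3 m
# implies a house price elasticity of roughly 0.15.
e = price_elasticity("semi-log", beta=0.05, q_mean=3.0)
```

For specifications with interaction terms, the derivative (and hence the elasticity) also depends on the interacted covariates, which is why the meta-dataset stores the relevant sample means from each primary study.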
In a meta-analysis of stated preference studies on water quality, Johnston, Besedin, and Holland (2019) point out that no published meta-regression studies in the valuation literature include a mechanism to incorporate the relationship between households’ values for an environmental commodity and distance to the resource. Johnston, Besedin, and Holland (2019) account for this relationship by estimating the mean distance among the survey sample in each primary study, and then include that mean distance as a control variable in the right-hand side of their meta-regression models. We take a different approach that explicitly incorporates spatial heterogeneity into the structure of the meta-dataset. In the hedonic literature, different primary studies make different functional form assumptions when it comes to the price-distance gradient with respect to water quality, including discrete distance bins and continuous gradients (e.g., linear, inverse distance, polynomial). In a recent meta-analysis of hedonic property value studies examining the price effects of proximity to waste sites, Schütt (2021) circumvents the issue of different distance gradient forms by simply excluding discrete distance specifications. In doing so, his meta-analysis disregards 32% of the otherwise eligible observations. In contrast, we address this issue directly by including multiple observations from the same primary hedonic regression but where each meta-observation corresponds to house price effects at different distances from the resource. In other words, we calculate the elasticity estimates for “representative” homes at the same, predetermined distances across primary studies, but we do so based on the assumed form of the distance gradients in the original hedonic regressions. This adds a novel dimension to the cluster structure of our meta-dataset. Except for internal meta-analyses by Klemick et al. (2018) and Guignet et al.
(2018), our meta-analysis is the first to incorporate this distance dimension into the meta-dataset. In an internal meta-analysis, the researchers estimate the primary regressions themselves, and thus have the luxury of assuming consistent functional forms and distance gradients in their initial hedonic models. In the current meta-analysis, we do not have this advantage; adapting the elasticity estimates to be comparable across different distance gradient specifications in different studies is a unique challenge. To minimize any potential sample selection bias corresponding to greater distances, we limit our meta-data and analysis to only price effects within 500 m of a waterbody. Although some studies have found evidence that water quality affects home values at greater distances (e.g., Walsh, Milon, and Scrogin 2011; Netusil, Kincaid, and Chang 2014; Klemick et al. 2018; Kung, Guignet, and Walsh 2022), 16 of the 36 studies in the meta-dataset exclusively analyze price effects on waterfront homes. It is unknown whether some primary studies limited the spatial extent of the analysis because no significant price effects were found or believed to be present at greater distances, or for other reasons (e.g., data or computational limitations). The same reasoning applies to why other studies decided to limit the spatial extent of the analysis at a certain distance. We standardize the elasticities across different studies with different distance gradient functional form assumptions by “discretizing” distance into two bins: waterfront homes and nonwaterfront homes within 500 m. This allows us to calculate elasticities in a consistent fashion, no matter the form of the price-distance gradient assumed in the original hedonic regressions. If a primary study only examined waterfront homes, then it only contributes observations to the meta-dataset corresponding to the waterfront distance bin.
If a study examined waterfront and nonwaterfront homes, then it contributes separate observations for each distance bin, even if the observations are derived from the same underlying regression coefficients. The elasticity calculations for waterfront and nonwaterfront homes are model-specific and depend on the assumed specifications in the primary studies (see Appendix A.2 for details). Generally, for elasticity estimates corresponding to waterfront homes, any waterfront indicators are set to one, and a distance of 50 m is plugged into the study-specific elasticity derivations as needed. This assumed distance for a “representative” waterfront home is based on observed mean distances among waterfront homes across the primary studies. For nonwaterfront homes within 0-500 m, the midpoint of 250 m is plugged into the study-specific elasticity derivations when applicable. Finally, meta-analysis often requires a measure of statistical precision around the outcome of interest, in this case, the inferred elasticity estimates. To obtain the corresponding standard error of those estimates, we conduct Monte Carlo simulations. The meta-dataset contains intermediate variables representing all relevant sample means, coefficient estimates, variances, and covariances from the primary studies. Often only the variance for the single coefficient entering the study-specific elasticity calculations is needed for these simulations, and it is common in the literature to report coefficient standard errors. However, some study-specific elasticity calculations include multiple coefficients, requiring both the variances and covariances among that set of coefficients. Hedonic studies do not usually report the full variance-covariance matrix.
When needed, we contacted the primary study authors to obtain the covariance estimates required for the Monte Carlo simulations.^8 However, in 25 cases (from four different studies), we assume the corresponding covariances are zero because we were unsuccessful in acquiring the information.^9 None of these cases pertain to water clarity, chlorophyll a, or fecal coliform, however, so this assumption does not affect our later unit value and meta-regression results. Using the primary study coefficient, variance, and covariance estimates, the Monte Carlo simulations entail 100,000 random draws from the joint normal distributions estimated by each primary study. The simulations are carried out separately for each observation in the meta-dataset. After each draw of the relevant coefficients, the inferred elasticity is recalculated, resulting in an empirical distribution from which we obtain the elasticity standard deviation for each observation in the meta-dataset. The set of 36 studies provides 665 observations for the meta-dataset. We focus on the subset of 598 observations where a house price elasticity and corresponding standard error could be inferred (see Appendix A.1 for details). Water clarity is by far the most common water quality measure analyzed in the literature (with 260 elasticity estimates), followed by fecal coliform (56) and chlorophyll a (36). Several other water quality measures have been examined in the literature and also contribute unique elasticity estimates to the meta-dataset (see Appendix B.2).

Mean Elasticity Estimates and Weighting

Mean elasticity estimates provide useful summary measures and can be used for benefit transfer when unit value transfers are deemed appropriate.
Although the literature still generally finds function transfer approaches that explicitly account for various dimensions of heterogeneity preferable (Johnston and Rosenberger 2010), simpler unit value transfers have performed better in some contexts (Barton 2002; Lindhjem and Navrud 2008; Johnston and Duke 2010; Bateman et al. 2011; Klemick et al. 2018). Table 1, column (1), displays the unweighted mean elasticity estimates for the three most common water quality measures in the hedonic literature: water clarity, chlorophyll a, and fecal coliform.^10 We present separate mean elasticities for waterfront homes and nonwaterfront homes within 500 m of a waterbody. The underlying elasticity estimates come from hedonic regressions that condition on other variables affecting house prices; therefore, the mean house price elasticities can be interpreted as the percent change in price, holding all other observables constant. We note that often the original hedonic regressions do not condition on other measures of water quality.^11 Our interpretation of the literature is that the included water quality measures are often understood to be an indicator or proxy for perceived quality in general (e.g., Taylor 2017). The unweighted mean elasticities for chlorophyll a are seemingly counterintuitive, and only marginally significant at best. The unweighted mean elasticities with respect to fecal coliform counts are more in line with expectations. The unweighted mean elasticity with respect to water clarity among waterfront homes is positive, as expected, but it is surprising that it is statistically insignificant. The unweighted mean elasticities can be misleading because of the clustered nature of the meta-data. For example, a single primary study may include multiple regression specifications that estimate the price effects for the same waterbody and housing market, so the weight given to those estimates must be reduced accordingly (Mrozek and Taylor 2002).
We next present cluster-weighted means, where we define each cluster as a unique study and housing market combination. Meta-observations estimated from a common transaction dataset in terms of the study area, time period, and waterbodies are really just different estimates of the same underlying “true” elasticity. No matter how many estimates are provided by a study for a specific location, each cluster as a whole is given the same overall weight. This holds regardless of whether the estimates for different clusters (i.e., housing markets) are from the same study or a different study. For example, Boyle, Poor, and Taylor (1999) estimate the effects of lake clarity on four different housing markets in Maine. These estimates are each given a weight of one and thus are weighted the same as if they were estimates for four different housing markets from four different studies.^12 More formally, let ε[idj] denote elasticity estimate i, at distance d, for cluster j, and k[dj] is the number of elasticity estimates for distance bin d in each cluster j. The cluster-weighted mean elasticity for each distance bin d is

[1] ε̄[d] = (1/K[d]) Σ[j] (1/k[dj]) Σ[i] ε[idj],

where the same weight is given to each meta-observation in cluster j and for that distance bin. The total number of clusters in the meta-dataset for distance bin d is K[d]. The cluster-weighted mean elasticities are presented in Table 1, column (2). The results are generally similar, suggesting a marginally significant increase in waterfront home values in response to an increase in the concentration of chlorophyll a, and again an insignificant effect on the value of nonwaterfront homes. The negative price elasticity among waterfront homes with respect to fecal coliform is now insignificant. The mean elasticity with respect to fecal coliform counts for nonwaterfront homes is only marginally significant but larger in magnitude, suggesting a 0.06% decrease in value due to a 1% increase in fecal coliform counts.
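The cluster-weighted mean in equation [1] can be sketched with a hypothetical two-cluster meta-dataset (the column names are illustrative); the point of the weighting is that a cluster contributing many estimates does not swamp one contributing a single estimate.

```python
import pandas as pd

def cluster_weighted_mean(df, value="elasticity", cluster="cluster"):
    """Equation [1]: each estimate gets weight 1/k_dj within its cluster,
    and each cluster counts once, so this averages the K_d cluster means."""
    return df.groupby(cluster)[value].mean().mean()

# Hypothetical meta-data for one distance bin: cluster A contributes
# three estimates, cluster B only one.
df = pd.DataFrame({
    "cluster": ["A", "A", "A", "B"],
    "elasticity": [0.10, 0.12, 0.14, 0.02],
})
print(round(cluster_weighted_mean(df), 4))  # (0.12 + 0.02) / 2 = 0.07
```

An unweighted mean of the four estimates would instead give 0.095, pulled toward cluster A simply because it reports more specifications.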
The cluster-weighted mean elasticities with respect to water clarity are similar to the unweighted means, suggesting a positive but insignificant effect on waterfront home prices, and a significant 0.04% increase in nonwaterfront home prices due to a 1% increase in Secchi disk depth. We propose an adjustment to the above cluster weights that redistributes the weight given to each observation within a cluster and distance bin. Consider the incorporation of a reallocation parameter (r[idj]) to the above cluster weights, as follows:

[2] w[idj] = r[idj]/(K[d] k[dj]),

where 0 ≤ r[idj] ≤ k[dj]. This is a generalization of the above cluster weights, where r[idj] = 1. This is also a generalization of the approach taken by some meta-analysts who select a single “preferred” estimate from each cluster, in which case r[idj] = k[dj] for the preferred estimate and zero otherwise. As discussed, choosing a single preferred or best estimate can be challenging, disregards information, and introduces additional subjectivity to the meta-analysis. Instead, we propose an adjustment to the cluster weights based on the inverse variance, or fixed effect size (FES) weights commonly used in the meta-analysis literature (Nelson and Kennedy 2009; Borenstein et al. 2010; Nelson 2015; Havránek, Horvath, and Zeynalov 2016; Vesco et al. 2020; Schütt 2021). Under the FES framework, each meta-observation is considered a draw from the same underlying population distribution, which makes sense in the context of multiple estimates from the same cluster (i.e., housing market). Our proposed variance-adjusted cluster (VAC) weights give more weight to more precise estimates within a cluster while ensuring that equal influence is given to each cluster (or housing market) examined in the literature. For example, Walsh, Milon, and Scrogin (2011) provide six elasticity estimates with respect to Secchi disk depth for lakefront homes in Orange County, Florida.
The initial one-sixth weight given to each of these estimates is now redistributed so that more weight is given to more precise estimates in that cluster. The cluster itself (i.e., the elasticity for lakefront properties in Orange County) is still given the same overall weight of one. This is a specific case of the weights in equation [2], where the reallocation parameter is equal to the usual FES or inverse-variance weights, normalized to sum to one within each cluster and distance bin, and multiplied by k[dj]:

r[idj] = k[dj] (1/v[idj]) / Σ[i′] (1/v[i′dj]),

where v[idj] denotes the variance of elasticity estimate i for homes in distance bin d, in cluster j. Plugging this into equation [2] and canceling out common terms yields our proposed VAC weights:

[3] w[idj] = (1/v[idj]) / (K[d] Σ[i′] (1/v[i′dj])).

The VAC weighted mean elasticity for distance bin d is calculated as

[4] ε̄[d] = Σ[j] Σ[i] w[idj] ε[idj].

Under the VAC weighting scheme, every housing market and set of waterbodies analyzed in the literature is given equal influence. This is appropriate if one believes the primary study estimate(s) for a particular housing market and waters are a relatively accurate approximation, regardless of statistical precision. At the same time, statistical precision relative to multiple estimates within a cluster is still given consideration by giving more weight to more precise estimates. The VAC weighted mean elasticities are presented in Table 1, column (3). The waterfront elasticity with respect to chlorophyll a now has the expected negative sign, suggesting a 0.02% decrease in price when chlorophyll a increases by 1%. The possibly counterintuitive positive elasticity for nonwaterfront homes remains, however, and is now statistically significant. The mean elasticities with respect to fecal coliform and water clarity are similar to the cluster-weighted means.
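The VAC construction in equations [3] and [4] amounts to inverse-variance weights renormalized within each cluster so that every cluster keeps the same total weight. A minimal sketch with hypothetical data:

```python
import pandas as pd

def vac_weighted_mean(df, value="elasticity", var="variance", cluster="cluster"):
    """Equations [3]-[4]: inverse-variance weights, rescaled so each
    cluster's weights sum to 1/K_d (equal influence per cluster)."""
    inv_var = 1.0 / df[var]
    within = inv_var / inv_var.groupby(df[cluster]).transform("sum")
    weights = within / df[cluster].nunique()
    return float((weights * df[value]).sum())

# Hypothetical data: within cluster A, the 0.10 estimate is four times
# as precise as the 0.20 estimate, so it gets 80% of A's weight.
df = pd.DataFrame({
    "cluster": ["A", "A", "B"],
    "elasticity": [0.10, 0.20, 0.02],
    "variance": [0.01, 0.04, 0.02],
})
print(round(vac_weighted_mean(df), 4))  # (0.12 + 0.02) / 2 = 0.07
```

Cluster A's internal mean shifts toward its more precise estimate (0.8 × 0.10 + 0.2 × 0.20 = 0.12), but A and B still count equally in the overall mean.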
In contrast to the motivation behind our VAC weights, if one believes that estimates from certain studies, and for specific housing markets and waterbodies, should not be given equal weight because they are of poorer quality, then even the weight given to the cluster as a whole should be reduced. To accommodate this thought, we develop an alternative weighting scheme based on the commonly employed random effect size (RES) weights (Nelson and Kennedy 2009; Borenstein et al. 2010; Nelson 2015). The RES weighting scheme is preferred if the meta-observations are believed to be estimates of different “true” elasticities from different distributions (Harris et al. 2008; Borenstein et al. 2010; Nelson 2015), as is the case when considering variation across housing markets. One would expect the true home price elasticities with respect to water quality to be different across waterbodies that differ in size, baseline water quality, and the provision of recreational, aesthetic, and ecosystem services. Heterogeneity in terms of housing bundles and preferences and income of buyers and sellers would also lead to different elasticities across markets. Our proposed RES cluster-adjusted (RESCA) weights take the conventional RES weights, which discount the weight given to elasticities estimated with relatively less precision compared with estimates in and across clusters, and then further discount the weight given to observations where multiple estimates are provided for the same cluster. This is done by taking the product of the RES weight and the cluster weight 1/k[dj]. A similar weighting scheme was proposed by Van Houtven, Powers, and Pattanayak (2007), but they were forced to use primary study sample size as a proxy for statistical precision because of a lack of information on the estimated variances in their meta-dataset. For our study, we observe (or can infer) the variance for virtually all elasticity observations.
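Appendix B.1 contains the authors' actual derivation; the sketch below only illustrates the general shape of such weights. It assumes the common DerSimonian-Laird estimator for the between-study variance τ² (standard in the RES literature the authors cite) and a cluster adjustment of 1/k[dj], both of which are assumptions here rather than the paper's exact construction.

```python
import numpy as np

def dl_tau2(y, v):
    """DerSimonian-Laird estimate of between-study variance tau^2."""
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)          # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

def resca_weights(y, v, k_cluster):
    """Sketch of RESCA-style weights: the RES weight 1/(v_i + tau^2),
    multiplied by 1/k_dj as an assumed cluster adjustment."""
    tau2 = dl_tau2(y, v)
    return (1.0 / (v + tau2)) / k_cluster

# Toy data: three equally precise estimates from one three-estimate cluster.
y = np.array([0.10, 0.10, 0.10])
v = np.array([0.01, 0.01, 0.01])
w = resca_weights(y, v, np.array([3, 3, 3]))
```

When the estimates are homogeneous (Q below its degrees of freedom), τ² is truncated at zero and the RES weights collapse to the FES inverse-variance weights; the cluster adjustment then simply splits that weight across the cluster's estimates.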
Weights based on the inverse variance or standard error are recommended over those based on the inverse of the study sample size (Van Houtven, Powers, and Pattanayak 2007; Subroy et al. 2019). Details on the interpretation and derivation of the standard RES weights and our proposed RESCA weights are provided in Appendix B.1. The RESCA weighted mean elasticity for distance bin d is calculated as follows, and the results are presented in Table 1, column (4):

[5] ε̄[d] = Σ[j] Σ[i] w*[idj] ε[idj] / Σ[j] Σ[i] w*[idj],

where w*[idj] denotes the RESCA weight. The mean elasticity estimates are now all statistically significant, and often of the expected sign. Similar to the VAC weights, we see a −0.026 price elasticity associated with chlorophyll a for waterfront homes; but still see a counterintuitive, small but positive elasticity corresponding to nonwaterfront homes. For both waterfront and nonwaterfront homes, we now see the expected negative and significant mean price elasticities corresponding to fecal coliform. Perhaps most striking, this is the first case where the mean price elasticity with respect to water clarity is statistically significant for waterfront homes, suggesting a 0.11% increase due to a 1% increase in Secchi disk depth. The price elasticity with respect to water clarity among nonwaterfront homes is similar to the previous mean calculations.

Water Clarity: Descriptive Statistics and Publication Bias

Water clarity is the most common water quality measure in the meta-dataset, with 260 elasticity estimates from 18 studies covering 66 different housing markets. This relatively large sample allows us to estimate meta-regressions for purposes of function transfers. Descriptive statistics of the elasticity observations with respect to water clarity appear in Table 2. Of the 260 estimates, 56% correspond to water clarity in freshwater lakes or reservoirs, and the other 44% correspond to estuaries. About 68% of the observed elasticity estimates are for waterfront homes.
The average of the mean baseline clarity levels reported in the primary studies is a Secchi disk depth of 2.34 m. Of course, this varies by waterbody type. Estuaries have a mean Secchi disk depth of only 0.64 m, whereas freshwater lakes have a mean Secchi disk depth of 3.68 m. Most estimates correspond to the South (48%) or Northeast (29%) regions of the United States, with the remainder corresponding to the Midwest (19%) or West (3%).^13 Sociodemographics of the primary study areas were obtained from the U.S. Census Bureau by matching each observation to data for the corresponding jurisdictions and year of the decennial census. We chose the finest level of census geography possible while still ensuring that each primary study area was fully encompassed. The identified census jurisdictions are coarse, corresponding to counties, multicounty areas, or states. Median household income is, on average, $59,078 in the primary study areas (2017$). Interestingly, the percent of the population with a college degree is low (only 14%, on average), as is population density, suggesting an average of about 50 households per square kilometer. These statistics suggest that homes near lakes and estuaries generally tend to be in more rural areas. It is important to recognize the spatial coarseness of these sociodemographic measures. For example, in many cases income levels among waterfront property owners are likely higher than elsewhere in a county. Finally, mean house prices as reported in the primary studies were $211,314, on average. In terms of methodological choices, the assumed functional form of the primary study hedonic regressions varies considerably. Most use double-log specifications (43%), followed by linear-log (31%), log-linear (22%), and even linear (4%). 
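The functional-form shares above matter because the coefficient-to-elasticity conversion differs by specification. The sketch below uses the standard textbook conversions evaluated at sample means; the study-specific derivations in Appendix A.2 may differ, and the coefficient values shown are hypothetical.

```python
def price_elasticity(beta, form, mean_q=None, mean_p=None):
    """Convert a hedonic water-clarity coefficient into a price
    elasticity at the sample means (standard textbook conversions)."""
    if form == "double-log":    # ln P = ... + b ln Q  ->  elasticity = b
        return beta
    if form == "log-linear":    # ln P = ... + b Q     ->  b * mean(Q)
        return beta * mean_q
    if form == "linear-log":    # P = ... + b ln Q     ->  b / mean(P)
        return beta / mean_p
    if form == "linear":        # P = ... + b Q        ->  b * mean(Q) / mean(P)
        return beta * mean_q / mean_p
    raise ValueError(f"unknown functional form: {form}")

# Hypothetical log-linear estimate: b = 0.08 per meter of Secchi depth,
# evaluated at the 2.34 m mean clarity reported in Table 2.
eps = price_elasticity(0.08, "log-linear", mean_q=2.34)
```

Under a double-log specification the coefficient needs no conversion, which is one reason it is a convenient reference category in the meta-regressions.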
As can be seen by the “no spatial methods” variable, 38% of the elasticity estimates with respect to water clarity were derived from models that did not use econometric methods to account for spatial dependence (i.e., spatial fixed effects, spatial lag of neighboring house prices, or accounting for spatial autocorrelation via a formal spatial autocorrelation coefficient or cluster-robust standard errors). Although the majority of primary studies use in situ Secchi disk measurements (12 studies), we do see that 61% of the elasticity observations are based on hedonic regressions that used clarity measures other than in situ measures.^14 About 22% of the observed elasticities with respect to water clarity are from hedonic regressions that also control for other measures of water or ecological quality. A time trend variable, as reflected by the last year of transaction data in the primary study, is also included and ranges from 1994 to 2014. This is converted into an index representing the number of years since 1994, which corresponds to the first study of water clarity in the meta-dataset. We were able to identify and include three unpublished hedonic studies examining water clarity (15% of the observations), but publication bias is still a concern given our goal of obtaining an accurate estimate of the true underlying elasticities for purposes of benefit transfer. Following recommendations by Stanley and Doucouliagos (2012), we examine a series of funnel plots and implement more formal funnel-asymmetry and precision-effect tests. In doing so, we find clear evidence that publication bias is a concern with the meta-data collected from this literature (see Appendix B.3 for details). In the meta-regression models discussed next, we include the elasticity variances as a right-hand-side covariate to minimize such publication bias (Stanley and Doucouliagos 2012, 2014).

3. Meta-regression Methodology

Function transfers based on meta-regressions can be a useful approach for benefit transfer (Nelson 2015). The approach takes advantage of the full amount of information provided by the literature while accounting for key dimensions of heterogeneity in the outcome of interest. Consider the following reduced-form meta-regression model:

[6] ε[idj] = β[0] + x[idj]′β[1] + z[idj]′β[2] + e[idj],

where the parameters to be estimated are β[0], β[1], and β[2]. The right-hand-side moderator variables include a vector of characteristics of the primary estimate, study area, and corresponding waterbody (x[idj]) and a vector of methodological variables (z[idj]), which describe attributes of the primary study and model assumptions. The error term e[idj] is assumed to be normally distributed and is allowed to be correlated within clusters. The vector x[idj] includes indicators of whether the elasticity estimate corresponds to waterfront homes, water quality in an estuary (as opposed to freshwater lakes),^15 and the mean baseline water clarity level corresponding to the respective waterbody or portion of the waterbody. The vector x[idj] also includes characteristics of the study area and housing market, such as median income, proportion of the population with a college degree, mean house prices, and indicators denoting each of the four broad U.S. regions: the Northeast, Midwest, South, or West. The vector z[idj] captures differences in elasticities due to estimate quality and methodological choices made by the primary study authors. If particular values of z[idj] denote best practices, then such information can be exploited when predicting values for purposes of benefit transfer (Boyle and Wooldridge 2018). The vector z[idj] includes the variance of the corresponding elasticity estimate. Under the assumption that better-quality estimates have a lower variance, Stanley and Doucouliagos (2012, 2014) argue that this attribute should be set to zero in any subsequent benefit transfer exercise.
In addition, z[idj] includes indicator variables denoting unpublished studies, whether a study used assessed values (as opposed to actual transaction prices), different functional forms, and when the model did not account for spatial dependence. If a primary study model did not include spatial fixed effects, a spatial lag of housing prices, or account for spatial autocorrelation in some fashion, then the no spatial methods indicator is set to one, and is zero otherwise. We also include a study year trend variable to possibly reflect changes in empirical methods, data, tastes and preferences, and awareness of water quality over time (Rosenberger and Johnston 2009). Time trends in meta-analyses of stated preference studies are typically based on the year the primary study survey was conducted, which is different from the year of publication (e.g., Van Houtven, Powers, and Pattanayak 2007; Rosenberger and Johnston 2009; Johnston, Besedin, and Holland 2019). For a hedonic meta-analysis, the choice is not as clear because the observed revealed preference data in a primary study often spans several years. To capture changes over time, we use the last year of transaction data in the primary study sample.^16 When estimating equation [6], the observations are weighted according to the same VAC or RESCA weights discussed in Section 2, but an additional complication arises from the cluster structure of the meta-data. There may be cluster-specific effects associated with a particular housing market and the waterbodies examined in that housing market. A fixed effect (FE) panel meta-regression model to directly estimate cluster-specific effects could be implemented, but this approach is not viable in the current context. First, the fixed effects would absorb much of the variation of interest and disregard a lot of observations. Many of the moderators in the meta-regression do not vary within a cluster.
Even when there is some within-cluster variation, it is often only among a subset of the observations. Second, out-of-sample inference for purposes of benefit transfer would not be valid because we cannot estimate the corresponding fixed effects for housing markets and waterbodies that are not in the current meta-dataset. Conventional random effects (RE) panel models are sometimes recommended in cases when a meta-regression is estimated using multiple estimates from a primary study (Nelson and Kennedy 2009). However, the cluster-specific effect could be correlated with observed right-hand-side variables, which leads to inconsistent estimates (Wooldridge 2002). Stanley and Doucouliagos (2012) point out that the necessary assumptions for consistent estimates in a RE panel model will often be violated, especially when a measure of precision (e.g., estimate variance) is included on the right-hand side to control for publication bias. They recommend a simple WLS meta-regression that allows for cluster-robust standard errors. We follow this recommendation in our meta-regression analysis.^17

4. Results

Meta-regression Results

We first estimate the WLS meta-regressions using our VAC weights, which account for the relative statistical precision within each cluster, but we ultimately treat estimates for each housing market and waterbody as a unique and unbiased glimpse into how water clarity affects home prices in that area, regardless of the relative precision of the underlying estimates across clusters. Recognizing that such an interpretation might be overly naive and runs counter to conventional meta-analysis (Nelson and Kennedy 2009; Borenstein et al. 2010; Nelson 2015), we reestimate the models using the proposed RESCA weights, which discount the cluster as a whole if its elasticities are estimated with relatively less precision.
The WLS meta-regression models are all estimated using cluster-robust standard errors, where the clusters are defined according to the 66 unique study-housing market combinations. The VAC WLS results are presented in Table 3. Model 1 is our base meta-regression and includes only a constant term, an indicator of whether an elasticity estimate corresponds to waterfront homes, and the variance of that estimate to control for publication bias. The positive and statistically significant constant term suggests that, on average, a 1% increase in water clarity leads to a 0.04% increase in the price of nonwaterfront homes within 500 m of the waterbody. As expected, the price elasticity with respect to clearer waters is significantly higher among waterfront homes (by 0.1457 percentage points). Together these estimates suggest that, on average, a 1% increase in water clarity leads to a 0.19% increase in waterfront home prices. The statistically significant coefficient corresponding to elasticity variance suggests that publication bias is a concern, but setting this variable to zero when predicting values for function transfers controls for such bias (Stanley and Doucouliagos 2012). Model 2 in Table 3 includes indicator variables denoting the four regions of the United States, with the Northeast being the omitted category. The negative region coefficients suggest that housing price elasticities with respect to water clarity in the Midwest, South, and West tend to be less than those in the Northeast, but such differences in this model are only significant in the South. For example, a 1% increase in water clarity in the Northeast would lead to an average increase in value of 0.27% for waterfront homes and 0.25% for nonwaterfront homes within 500 m. In the South, the results suggest that a 1% increase in clarity corresponds to much smaller but still significant 0.04% and 0.02% increases in price among waterfront and nonwaterfront homes. 
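The model 1 arithmetic above can be reproduced from the rounded coefficients reported in the text (with the elasticity-variance term set to zero, per the publication-bias correction):

```python
# Model 1 (VAC WLS, Table 3), rounded coefficients as reported in the text.
const = 0.0414              # constant term: nonwaterfront elasticity
waterfront_premium = 0.1457 # additional elasticity for waterfront homes

nonwaterfront = const
waterfront = const + waterfront_premium
print(round(waterfront, 4))  # 0.1871
```

That is, the roughly 0.19 waterfront elasticity quoted in the text is the sum of the constant and the waterfront coefficient.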
Considering the average house price of $211,314, this suggests that a 1% increase in water clarity (an average of 2.34 cm or just under 1 in.) would increase the value of a waterfront and nonwaterfront home in the Northeast by $559 (p = 0.000) and $522 (p = 0.001), respectively. This same improvement for otherwise similar waterfront and nonwaterfront homes in the South would be only $84 (p = 0.061) and $48 (p = 0.041). Models 3 and 4 assess potential heterogeneity in the housing price effects based on characteristics of the environmental commodity, in this case, the type of waterbody and whether the waters are already relatively clear. Model 3 includes an indicator denoting whether the elasticity estimates correspond to an estuary (as opposed to a freshwater lake or reservoir) as well as a corresponding interaction term with the waterfront indicator. The negative and statistically significant coefficient on estuary suggests that an increase in water clarity has a smaller effect on the price of homes near an estuary, compared with an increase in lake water clarity. Such a finding seems reasonable given that surrounding residents may not generally expect the water to be clear in estuaries because brackish waters are often naturally opaque. However, the opposite is found among waterfront homes, as suggested by the positive and statistically significant 0.0873 (p = 0.049) sum of the estuary and waterfront × estuary coefficients. To better illustrate the implications of model 3, consider a 1% increase in clarity in an estuary in the Northeast. This would lead to a 0.35% increase in the value of a bayfront home and a 0.29% increase in the value of a nonbayfront home that was still within 500 m of the estuary. In contrast, the same 1% increase in water clarity would lead to a 0.26% increase among lakefront homes in the Northeast and a larger 0.33% increase among nonlakefront properties that are within 500 m. 
Although it may seem counterintuitive that the elasticity among lakefront properties is less, this does not necessarily violate the intuition that those closest to the resource should hold a higher value for an improvement. Baseline house values tend to be much larger among waterfront homes, so a smaller elasticity does not necessarily translate to a smaller marginal implicit price. For example, Walsh, Milon, and Scrogin (2011) report a mean sale price of $452,646 and $199,982 (2002$) among lakefront and nonlakefront homes, respectively. Applying these mean values to the aforementioned elasticities suggests an implicit price for a 1% increase in clarity of $1,197 for lakefront homes and $664 for nonlakefront homes. Model 4 explores whether the house price elasticity with respect to clearer waters tends to be systematically different depending on baseline water clarity.^18 The model includes the mean clarity level corresponding to each primary study estimate and an interaction term between mean clarity and the waterfront indicator. The main effect is statistically insignificant, as is the sum of the main effect and the coefficient on the waterfront × mean clarity interaction term. This suggests that the price elasticity for nonwaterfront and waterfront homes, respectively, does not vary with baseline water clarity. In subsequent models not reported here, we find little evidence of systematic heterogeneity with respect to the other study area and commodity variables reported in Table 2, including household median income, percent of population with a college degree, mean house prices, and waterbody size. Although mean house prices and waterbody characteristics are taken from the primary studies, the lack of significant findings pertaining to household median income, education, and other census-derived sociodemographics may be partly attributed to the spatial coarseness of the county-or state-level variables and the resulting measurement error. 
Models 5 and 6 build on the previous two models by controlling for methodological characteristics of the primary studies. The time trend variable is added, as well as indicators denoting the functional form of the hedonic price function that was assumed by the primary study authors (double-log is the omitted category). The positive and significant time trend suggests that, all else constant, the elasticity estimates have been increasing over time. The specification indicators are largely insignificant, with the exception of linear in model 5. This suggests that assuming a linear hedonic price function tends to yield higher elasticity estimates. This result is not robust to model 6, and there is otherwise little evidence that functional form assumptions yield significant differences in the elasticity estimates. Including these methodological variables, particularly the time trend, strengthens the earlier findings pertaining to the study area and waterbody characteristics. All regional indicators are now negative and statistically significant, demonstrating that the price effects with respect to water clarity in other regions tend to be lower, compared with the Northeast. The negative coefficient on estuary is now larger in magnitude, suggesting that at least among nonwaterfront homes, water clarity in estuaries tends to have a lesser effect on house values. In model 6, the positive and now significant 0.0799 coefficient corresponding to mean water clarity implies that nonwaterfront homes surrounding waterbodies with already relatively clear waters experience larger increases in value in response to further improvements. This “pristine premium” seems to be isolated to nonwaterfront homes, however. The sum of the mean clarity and waterfront × mean clarity coefficients is insignificant, suggesting that among waterfront homes, the baseline average clarity levels are not associated with systematically higher or lower price effects due to further improvements. 
Although not reported here, models including the other methodological variables in Table 2 revealed statistically insignificant effects (e.g., length of the study period, used assessed values, was unpublished, used water clarity data other than in situ measurements, and controlled for other measures of water or ecological quality). In particular, we find no significant differences in price elasticity estimates when the primary studies included spatial fixed effects, spatial autoregressive (SAR) models, or accounted for spatial autocorrelation using a formal spatial error model (LeSage and Pace 2009) or by allowing for geographically clustered standard errors. We reestimate the six WLS meta-regressions using the proposed RESCA weights. The results are presented in Table 4 and are generally similar to the VAC WLS models. One notable difference is that the coefficient corresponding to the elasticity variance term is no longer significant in any of the six RESCA WLS models. This suggests that after accounting for relative statistical precision both within and across clusters, selection bias is no longer a concern. The RESCA WLS models also tend to predict slightly lower elasticity estimates. For example, model 1 in Table 4 predicts an elasticity of 0.1085 and 0.0257 among waterfront and nonwaterfront homes, respectively, compared with the 0.1871 and 0.0414 elasticity estimates from the corresponding VAC WLS model in Table 3. To demonstrate how the meta-regression results can be used for function-based transfers, consider one of the most comprehensive meta-regressions, model 6 from Table 4.^19 For this illustration, we predict the elasticity estimates by plugging in the cluster-weighted mean values for the study area and waterbody characteristics,^20 but practitioners should plug in values specific to their policy site. 
In cases where best practices are clearly discerned, the corresponding values for methodological variables should be used when predicting for benefit transfer (Boyle and Wooldridge 2018). Following this guidance, we set the linear specification indicator to zero. Economic theory and simulation evidence suggest that assuming a linear hedonic price function is generally inappropriate (Bishop et al. 2020; Bockstael and McConnell 2006). A similar motivation lends itself to setting the elasticity variance to zero for our illustrative elasticity calculation (Stanley and Doucouliagos 2012). In contrast, among the remaining specifications observed in the metadata (double-log, linear-log, and log-linear), the most appropriate functional form is unclear. If there are no clear best practices for a methodological variable, then Boyle and Wooldridge (2018) suggest using the average value across the literature.^21 We plug in the unweighted sample proportions across the remaining three specification indicators. More specifically, among the remaining 250 observations that used one of these specifications to estimate the price elasticity with respect to water clarity, we see that 45% were based on a double-log (the omitted category), 32% on a linear-log, and 23% on a log-linear specification. To infer an elasticity that is based on the most recent methods and data possible, the value for the time trend index is set to 20 (which corresponds to 2014, the most recent year observed in the metadata). This illustrative exercise yields an “average” elasticity for waterfront homes of 0.2698, suggesting that a 1% increase in Secchi disk depth (an increase of 2.34 cm, on average) leads to an average increase in waterfront home values of 0.27% (p = 0.000). A slightly smaller 0.2564 elasticity is estimated for the “average” nonwaterfront home (p = 0.000).
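The plug-in arithmetic can be illustrated as follows. The coefficients below are hypothetical placeholders (not the paper's Table 4 estimates, which are not reproduced here), and the mean home value used in the last step is an assumed round figure:

```python
# Hypothetical coefficients for illustration only -- NOT the paper's
# Table 4 estimates.
beta = {
    "intercept":   0.03,
    "waterfront":  0.12,
    "trend":       0.006,   # per unit of the time trend index
    "linear":      0.05,
    "linear_log":  0.01,
    "log_linear": -0.01,
    "variance":    0.50,
}

# Plug-in values following the best-practice guidance in the text.
x = {
    "intercept":  1.0,
    "waterfront": 1.0,     # predict for waterfront homes
    "trend":      20.0,    # 2014, the latest year in the metadata
    "linear":     0.0,     # best practice: rule out the linear specification
    "linear_log": 0.32,    # literature shares of the remaining specs
    "log_linear": 0.23,
    "variance":   0.0,     # evaluate at zero variance (publication bias)
}

elasticity = sum(beta[k] * x[k] for k in beta)
print(round(elasticity, 4))  # → 0.2709 with these made-up coefficients

# Converting the paper's reported 0.2698 waterfront elasticity to an
# implicit price, assuming a mean home value of $211,000 (illustrative):
implicit_price = (0.2698 / 100) * 211_000
print(round(implicit_price))  # → 569, in line with the reported ~$570
```

Practitioners would substitute their own policy-site covariate values and the published coefficient estimates in place of these placeholders.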
Based on the average home price from Table 2, these results translate to a mean implicit price of $570 (p = 0.000) and $542 (p = 0.000) for waterfront and nonwaterfront homes, respectively. Overall, the literature yields plausible and statistically significant estimates of how water clarity is capitalized in surrounding home values, even after empirically controlling for key dimensions of heterogeneity, publication bias, and methodological assumptions.

Best-Performing Model for Benefit Transfer

To examine out-of-sample transfer error and assess the performance of the various models and weighting schemes, we iteratively leave out the observations corresponding to each of the 66 housing market study clusters and then reestimate the mean unit values and meta-regression models using the remaining sample. The predicted elasticities are then estimated for the excluded cluster. This is repeated, excluding the 66 clusters one at a time. After completing all 66 iterations, we calculate the median absolute transfer error. Similar out-of-sample transfer error exercises have been implemented in the literature (e.g., Lindhjem and Navrud 2008; Stapler and Johnston 2009). We conduct this out-of-sample transfer exercise in two ways. In the first approach, we construct a synthetic observation for each distance bin d in cluster j and then compare the elasticity value for this synthetic observation to the predicted elasticity from the meta-regression models. The synthetic observation is constructed using the same VAC weights, that is, an inverse variance weighted mean across all elasticity estimates i for distance bin d in cluster j: ε̄_jd = Σ_i w_ijd ε_ijd / Σ_i w_ijd, where w_ijd = 1/var(ε_ijd). The corresponding right-hand-side variables for the synthetic observations are calculated in the same fashion. Those variable values are then plugged into the estimated meta-regressions to yield a predicted elasticity ε̂_jd, which is compared to the “actual” elasticity ε̄_jd for each synthetic observation.
A similar exercise is done using the mean unit values, where the waterfront or nonwaterfront means are calculated in each iteration and then used as the unit value prediction for the excluded observations. The transfer error is calculated as the absolute value of the percent difference between the predicted and “actual” elasticities:

[7] TE_jd = 100 × |(ε̂_jd − ε̄_jd) / ε̄_jd|.

Our synthetic observation approach for measuring out-of-sample transfer error weights the “actual” observed elasticity estimates and the sample used to parameterize the meta-regression models in the same way. When dealing with a panel- or cluster-structured meta-dataset, the more common practice of comparing predicted and observed elasticity estimates for all left-out observations within each iteration (e.g., Londoño and Johnston 2012; Fitzpatrick, Parmeter, and Agar 2017; Subroy et al. 2019) potentially inflates the transfer error. The parameterized meta-regressions, and hence the predicted elasticities, would discount less precise estimates, but the excluded elasticity observations that these are compared with in each iteration would all be treated equally when assessing the transfer error. This inconsistent weighting across the predicted and observed elasticities automatically puts the predictive performance of the meta-regression models at a disadvantage. Nonetheless, we carry out our transfer error exercise using this conventional approach and find similar results. The median absolute transfer error results for each model and weighting scheme are presented in Table 5. The top panel shows the median transfer errors using our out-of-sample synthetic observation approach. The lower panel shows the median transfer errors when all excluded observations are treated equally and used for comparison. The results suggest a median absolute transfer error of 76%–119% under the synthetic observation comparison versus 83%–131% when comparing all excluded observations. Although errors of this size are not unheard of, the transfer errors for this study are in the high range. Kaul et al.
(2013) examined 1,071 transfer errors reported by 31 studies and report that the absolute value of the transfer errors ranged from 0% to 7,496%, with a median of 39%. Rosenberger (2015) summarized the results for 38 studies that statistically analyzed transfer errors and reported a median transfer error of 36% for function transfers. In their leave-one-study-out transfer error analysis, Londoño and Johnston (2012) report a 59% median transfer error using all available studies. Similar to our study, Subroy et al. (2019) used a leave-one-cluster-out approach and estimated a median transfer error of 21% for nonmarket values of threatened species. Overall, considering the unit value transfers, all meta-regression models and weighting schemes, and both sets of out-of-sample comparisons, we find that the RESCA weighted models outperform the VAC weighted models. This is reasonable given that the RESCA weighting scheme gives less weight to imprecise elasticity observations, relative to other estimates both within and across markets and studies, whereas under the VAC weighting scheme, only within-cluster relative precision is considered. The VAC weighting scheme is more sensitive to less precise, possibly outlying elasticity estimates, because even if all elasticity values for a particular housing market and waterbody are imprecisely estimated, the cluster is still given the same overall weight as any other market and waterbody examined in the literature. Among the RESCA-weighted estimates, the simplest unit value transfer and model 1 yield the lowest out-of-sample transfer error. Both the mean unit values and model 1 account for differences across waterfront and nonwaterfront homes, and model 1 adjusts for publication bias. Otherwise, these simple transfers do not account for any form of heterogeneity across study areas and the waterbodies being analyzed or any methodological choices made in the primary studies. 
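The leave-one-cluster-out procedure described earlier is straightforward to implement. A minimal sketch with a generic estimator and made-up data (not the paper's models or metadata):

```python
import numpy as np

def loco_median_transfer_error(clusters, X, y, fit, predict):
    """Leave each cluster out, refit on the rest, predict the held-out
    observations, and return the median absolute percent error."""
    errors = []
    for c in np.unique(clusters):
        hold = clusters == c
        model = fit(X[~hold], y[~hold])
        pred = predict(model, X[hold])
        errors.extend(np.abs((pred - y[hold]) / y[hold]) * 100.0)
    return float(np.median(errors))

# Deterministic toy: y is exactly linear in x, so every held-out cluster
# is predicted perfectly and the median transfer error is ~0.
x = np.arange(12, dtype=float)
X = np.column_stack([np.ones(12), x])
y = 2.0 + 3.0 * x
clusters = np.repeat(np.arange(4), 3)
ols_fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
ols_predict = lambda b, X: X @ b
print(loco_median_transfer_error(clusters, X, y, ols_fit, ols_predict))  # → ~0.0
```

With real meta-data the held-out predictions would not be exact, and the same loop applies to unit value transfers by replacing `fit`/`predict` with a (weighted) group mean.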
Although simpler transfers have been found to perform better in some contexts (Barton 2002; Lindhjem and Navrud 2008; Johnston and Duke 2010; Bateman et al. 2011; Klemick et al. 2018), it is surprising that accounting for such (often statistically significant) heterogeneity does not improve transfer performance. In fact, Table 5 suggests that transfer performance generally decreases with model complexity. The one exception is RESCA model 6, which accounts for heterogeneity in baseline water clarity, study attributes, and across regions. Model 6 using the RESCA weights yields a median out-of-sample transfer error of 79% under our preferred synthetic observation comparison. This is just slightly worse than the 76% transfer error from the RESCA weighted unit value transfer or simple function transfer using model 1. When using our meta-analysis results for benefit transfer, practitioners should balance the findings of our out-of-sample transfer error exercise against the potential need to account for heterogeneity across markets and the environmental commodity. Based on these considerations, we recommend function transfers based on the RESCA weighted models 1 and 6.^22 The most accurate benefit transfer approach, however, may well be case-specific. When a specific policy context is in mind and resources are available, researchers should consider using our meta-dataset directly to tailor the set of studies to their particular context and conduct their own meta-analysis. One could simply compare the most relevant characteristics across the study and policy sites or pursue more sophisticated model search algorithms to identify the optimal subset of meta-observations to inform benefit transfer to a specific context (Moeltner and Rosenberger 2008, 2014; Johnston and Moeltner 2014; Moeltner et al. 2019). 5. 
Discussion

A primary objective of this study is to help practitioners make use of the large body of hedonic property value studies examining surface water quality and ultimately facilitate ex ante and ex post assessments to better inform local, regional, and national policies. Based on the constructed meta-dataset, limited unit value transfers could be conducted to assess policies affecting one of several different water quality measures (e.g., chlorophyll a and fecal coliform). Given the limited number of studies on any one water quality measure, unit value transfers are often the only viable option. In the context of water clarity, a function transfer using meta-regression results may improve accuracy by tailoring the estimates to a particular policy and by adjusting for best methodological practices and publication bias. Although statistically significant heterogeneity in the property price effects of water clarity is identified, we find that accounting for such heterogeneity did not improve transfer performance. As with any benefit transfer exercise, our results must be given appropriate caveats. The median out-of-sample transfer errors of even our best-performing unit value or function transfers are at the upper end of errors found in the meta-analytic literature valuing environmental commodities. Examining the distributions of transfer errors revealed no evidence of better transfer performance for different regions or types of waterbodies. The capitalization of water quality changes in surrounding housing values is a very local phenomenon.
Surely local unobserved factors remain that affect the accuracy of any transferred estimates, at least in this general setting.^23 In addition, when using reduced-form meta-regression models like ours for benefit transfer, one must consider the trade-offs between a potentially better model fit from a reduced-form specification versus the theoretical consistency of a more structural meta-regression model (Newbold et al. 2018b; Johnston and Bauer 2020). In the context of stated preference studies, Newbold et al. (2018b) and Moeltner (2019) have argued for more theoretically consistent meta-regression models, in particular for models that satisfy the adding-up condition (Diamond 1996). Formal incorporation of the results from the hedonic property value literature in benefit-cost analysis is a broader topic in need of research. Much of the applied hedonic literature, including all of the estimates in our meta-dataset, are of marginal price effects based on regressions of Rosen’s (1974) first-stage hedonic model. As such, a welfare interpretation generally only holds at the margin (Kuminoff and Pope 2014). Although advancements have been made to infer formal nonmarginal welfare measures (e.g., Bartik 1987; Zabel and Keil 2000; Ekeland, Heckman, and Nesheim 2004; Bajari and Benkard 2005; Zhang, Boyle, and Kuminoff 2015; Bishop and Timmins 2018, 2019; Banzhaf 2020, 2021), such methods are not widely applied, and a commonly agreed-on “best” approach remains an open question (Bishop et al. 2020). Nonetheless, our meta-analysis results, or the results of subsequent case-specific meta-analyses using our meta-dataset, can be combined with spatially explicit data of the relevant surface waterbodies, housing locations, baseline housing values, and the number of homes to project the total capitalization effects of a policy affecting water quality. 
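A back-of-envelope projection of total capitalization effects multiplies, for each waterbody and distance bin, the number of homes, the baseline home value, the transferred elasticity, and the percent change in clarity. All figures below are hypothetical:

```python
# (homes in bin, mean home value, % change in Secchi depth, elasticity)
scenario = [
    (1200, 250_000, 10.0, 0.27),   # waterfront homes (hypothetical)
    (8500, 200_000, 10.0, 0.26),   # nonwaterfront homes within 500 m
]

# A 1% clarity gain raises a home's value by (elasticity)%, so each
# home's gain is value * (elasticity / 100) * pct_change.
total = sum(n * value * (elast / 100.0) * pct
            for n, value, pct, elast in scenario)
print(f"${total:,.0f}")  # → $52,300,000 for this made-up scenario
```

In practice the home counts, baseline values, and projected clarity changes would come from the spatial housing and waterbody data sources discussed below, and the elasticities from the transferred meta-regression predictions.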
Ideally, such a benefit transfer exercise can be carried out using detailed, high-resolution data on waterbodies and individual residential properties. In the absence of such data, one can combine our results with publicly available waterbody quality and location data provided by the National Water Quality Monitoring Council’s Water Quality Portal and the National Hydrography Dataset, along with aggregated data on housing and land cover, from the U.S. Census Bureau and National Land Cover Dataset, for example.^24 Our metadata development and meta-analysis can complement benefit transfer efforts based on stated preference studies, which are the workhorse for benefit analyses of federal water quality policies (Griffiths et al. 2012; U.S. EPA 2015; Corona et al. 2020). In fact, colleagues at the U.S. EPA plan to incorporate our meta-analysis in an integrated assessment model called the Benefits Spatial Platform for Aggregating Socioeconomics and H2O Quality (or BenSPLASH), which is designed as a flexible, modular tool for water quality benefits estimation (Corona et al. 2020). Although stated preference methods are generally more comprehensive in capturing total values, hedonic studies provide a revealed preference estimate that circumvents concerns related to the use of stated values based on hypothetical scenarios.^25 In future work, we hope to expand our meta-dataset in two ways. First, for tractability we decided early in the development of the meta-dataset to limit the distance bins to waterfront homes and nonwaterfront homes within 500 m of a waterbody. Based on our review of the literature, this seemed reasonable, although some studies are finding price effects farther away (Walsh, Milon, and Scrogin 2011; Netusil, Kincaid, and Chang 2014; Klemick et al. 2018; Kung, Guignet, and Walsh 2022).
Adding meta-observations that pertain to farther distance bins will provide a more comprehensive meta-analysis in the future (but one must still consider the sample selection concerns discussed in Section 2). Second, new studies should be periodically added to the meta-dataset. When conducting new hedonic studies, we encourage researchers to consider some of the gaps in the literature. Our review reveals limitations in the types of waterbodies and geographic areas covered. More hedonic studies examining surface water quality in the mountain states in the West, parts of the Midwest, and the South-Central portions of the United States are needed, as are studies examining how property values respond to water quality changes in estuaries, rivers, and streams. Such primary studies will facilitate nationwide coverage and ultimately more accurate benefit transfers. Finally, our review highlights a disconnect between the water quality metrics used by economists and those by water quality modelers and policy makers. Water clarity is the most common metric in the hedonic literature. It is a convenient measure for nonmarket valuation because households are able to directly observe it. In certain cases, it also acts as a reasonable proxy for other measures of water quality (e.g., nutrients or sediments). Even so, water clarity is not a good measure of quality in all contexts (Keeler et al. 2012). For example, waters with low pH levels due to acid rain or acid mine drainage may be very clear but of poor quality. This disconnect between water clarity and quality is an issue in the nonmarket valuation literature more broadly (Abt Associates 2016). 
Although the majority of hedonic studies focus on water clarity, water quality models such as the Soil and Water Assessment Tool (SWAT), Hydrologic and Water Quality System (HAWQS), and SPAtially Referenced Regressions on Watershed Attributes (SPARROW) tend to focus on changes in nutrients, sediments, metals, dissolved oxygen, and organic chemicals (Tetra Tech 2018). There are some process-based water quality models and estimated conversion factors that can be used to calculate changes in Secchi disk depth, but such approaches require location-specific relationships and waterbody characteristics as inputs (Hoyer et al. 2002; Wang, Linker, and Batiuk 2013; Park and Clough 2018), thus deterring their broader application to project water clarity changes resulting from a policy. Further research is necessary to improve the link between water quality and economic models and ultimately better inform policy. Closing this gap can entail one of two things, or some combination of both. First, when choosing the appropriate water quality metric, economists should keep the application of their results in mind. Doing so will allow economic results to be more readily used to monetize the quantified policy changes projected by water quality models. Second, water quality modelers could develop models that directly project changes in water clarity or perhaps develop more robust conversion factors. Such a call is not a new idea. Desvousges, Naughton, and Parsons (1992, 682) recommended that, at the very least, analyses establish “the correlation between policy variables and variables frequently used as indicators of water quality.” Developing such conversion factors would be challenging and would likely need to be watershed- and perhaps even waterbody-specific. 6.
Conclusion

Despite the large number of studies of the capitalization of surface water quality into home values, this literature has not generally been used to directly inform decision-making in public policy. In fact, hedonic property value studies in general tend not to be quantitatively used in regulatory analyses of regional and nationwide regulations enacted by the U.S. EPA (Petrolia et al. 2021). In the water quality context, heterogeneity in local housing markets, the types of waterbodies examined, the model specifications estimated, and the water quality metrics used are key reasons the results of these local studies have not been applied to broader policies. This meta-analysis overcame these obstacles through the meticulous development of a detailed and comprehensive meta-dataset. The relative out-of-sample transfer performance of our reduced-form meta-regression models suggests caution when conducting benefit transfers. The proper use of our results will depend on the relative accuracy necessary for decision-making (Bergstrom and Taylor 2006). Nonetheless, in the absence of resources for an original study, this meta-dataset and meta-analysis provide a path for practitioners to conduct benefit transfer and assess how improvements in water quality from local, regional, and even national policies are capitalized into housing values. The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of the U.S. EPA or Abt Associates. The research described has been funded wholly or in part by the U.S. EPA, under contract EP-C-13-039 to Abt Associates. Any mention of trade names, products, or services does not imply an endorsement by the U.S. Government or the U.S. EPA. The authors declare they have no actual or potential competing financial or other conflicts of interest. This article was prepared by a U.S.
Government employee as part of the employee’s official duties and is in the public domain in the United States. The authors thank Elena Besedin, Joel Corona, Ben Holland, Matthew Ranson, and Patrick Walsh for helpful feedback early in the development of this project. We also thank Charles Griffiths, James Price, Brenda Rashleigh, Stephen Swallow, Hale Thurston, participants at the tenth Annual Conference of the Society for Benefit-Cost Analysis and the U.S. Department of Agriculture (USDA) 2019 Workshop “Applications and Potential of Ecosystem Services Valuation within USDA: Advancing the Science,” and three anonymous reviewers for helpful comments. • Appendix materials are freely available at http://le.uwpress.org and via the links in the electronic version of this article. • ↵1 There are three notable unpublished studies. In her M.A. thesis, Fath (2011) conducted a limited meta-analysis of 13 hedonic studies. Ge et al. (2013) conducted a meta-analysis that combined contingent valuation, travel cost, and hedonic studies. They estimated one meta-regression using only hedonic studies for comparison (10 studies with 127 observations). Abt Associates (2015) estimated the capitalization effects of large-scale changes in water clarity of lakes using a simple weighted-average across nine hedonic studies. We are aware of one published meta-analysis that focused on urban rivers and property values, but it did not examine measures of water quality (Chen et al. 2019). • ↵2 The first author developed how the data would be coded with feedback from all authors. The fourth author did most of the data entry with quality checks by all authors throughout the process. • ↵3 The search began with reviewing reports (e.g., Van Houtven, Clayton, and Cutrofello 2008; Abt Associates 2016) or other literature reviews and meta-analyses on related topics (e.g., Crompton 2004; Braden, Feng, and Won 2011; Fath 2011; Alvarez and Asci 2014; Abt Associates 2015).
The next step was to search a variety of databases and working paper series, which included Google Scholar, Environmental Valuation Reference Inventory, JSTOR, AgEcon Search, EPA’s National Center for Environmental Economics Working Paper Series, Resources for the Future (RFF) Working Paper Series, Social Science Research Network (SSRN), and ScienceDirect. Keywords when searching these databases included all combinations of the terms: house, home, property, value, price, or hedonic with terms such as water quality, water clarity, Secchi disk, pH, aquatic, and sediment. Requests were also submitted to ResEcon and Land and Resource Economics Network. Seven additional studies were provided from the first request on October 24, 2014. One additional study was added from a second request on January 21, 2016. After this lengthy process, we attempted one final literature search through the U.S. EPA’s internal library system. • ↵4 Specifically, 29 studies were dropped after further screening because an objective water quality measure was not used, the study area was outside of the United States, a working paper or other gray literature study became redundant with a later peer-reviewed publication in the meta-dataset, or the research was not a primary study (e.g., a literature review). • ↵5 U.S. EPA’s Environmental Dataset Gateway: “Meta-dataset for property values and water quality,” available at https://doi.org/10.23719/1518489. • ↵6 Although we disagree with the idea of limiting a meta-analysis to only a set of subjectively identified “preferred” estimates and throwing out potentially valuable information, we note that the results of our later meta-regression analysis of water clarity are qualitatively similar. The sign and magnitude of the coefficient estimates are similar, but many of the estimates become insignificant when focusing on this limited sample, especially as the meta-regression models increase in complexity. 
• ↵7 That said, when a valid approach could be found, the primary study estimates are converted to a common water quality measure. Such a conversion is only undertaken for two hedonic studies where an appropriate conversion factor for the corresponding study area was available in the literature (Guignet et al. 2017; Walsh et al. 2017). In these cases, the meta-dataset includes unique observations corresponding to the inferred water quality measure (Secchi disk depth) and the original measure (light attenuation). To our knowledge, valid conversion factors or other approaches are not currently available for the other water quality measures and primary study areas. • ↵8 We are extremely grateful to Okmyung Bin, Allen Klaiber, Tingting Liu, and Patrick Walsh for providing the variance-covariance estimates needed to complete the Monte Carlo simulations. We also thank Kevin Boyle for providing details on the functional form assumptions in Michael, Boyle, and Paterson (2000). • ↵9 This assumption could lead to an over- or underestimate for the variance of the corresponding elasticity, depending on the covariance between the two primary study coefficients. Consider the simple case where the elasticity estimate ε is the sum of two coefficients in the primary study hedonic regression, a and b. Then we have that var(ε) = var(a + b) = var(a) + var(b) + 2cov(a, b). The need to account for multiple parameters often arises due to the inclusion of interaction terms with water quality in the original hedonic regression; and it is often unclear, a priori, what the sign of cov(a, b) should be. • ↵10 Mean elasticity estimates for all 30 water quality measures examined in the literature are provided in Appendix B.2. • ↵11 E.g., only 2 (out of 36) of the price elasticity estimates with respect to chlorophyll a come from studies where the original hedonic regressions controlled for other measures of water or ecological quality.
Similarly, only 34 (out of 56) and 57 (out of 260) of the price elasticity estimates with respect to fecal coliform and water clarity, respectively, are based on models that control for other quality measures. • ↵12 Ara (2007) statistically identified and separately analyzed several submarkets when estimating the housing price effects around Lake Erie. These submarkets sometimes overlap because of different statistical strategies, and so in our meta-analysis we treat all estimates from Ara (2007) as being from the same broader housing market. • ↵13 Regions of the United States are displayed in Figure 1 and are defined following the U.S. Census Bureau’s four census regions, available at https://www2.census.gov/geo/pdfs/maps-data/maps/reference/us_regdiv.pdf (accessed June 11, 2019). • ↵14 This large proportion is primarily due to Ara (2007) and Walsh et al. (2017), who provide 30 and 112 elasticity estimates for homes around Lake Erie and the Chesapeake Bay, respectively. Those studies, along with Guignet et al. (2017), use water clarity values based on spatial interpolations. Two other studies use measurements predicted from water clarity models (Boyle and Taylor 2001; Liu et al. 2014), and one study uses satellite data (Horsch and Lewis 2009). • ↵15 As described in Section 2, the meta-dataset includes price elasticities corresponding to freshwater lakes, estuaries, rivers, and small rivers and streams. However, the hedonic studies in the meta-dataset that examine water clarity focus solely on lakes and estuaries. • ↵16 This proxy is not without possible error, however; e.g., Zhang et al. (2015) conduct a more recent analysis using older transaction data, and so our trend variable may not reflect methodological trends well in that case. • ↵17 As a robustness check, we estimate the corresponding RE panel models and find virtually identical results (see Appendix C).
When benefit transfer is the primary objective, Boyle and Wooldridge (2018) suggest an alternative model first proposed by Mundlak (1978). The Mundlak model parametrically estimates the cluster-specific effects by including the cluster average of the relevant modifier variables in the right-hand side of the meta-regression. This model slightly relaxes the assumptions needed for consistent estimates from a RE panel model. It also has some advantages compared to a FE panel model because it does not disregard variation with respect to cluster-invariant variables and allows for out-of-sample inference (Boyle and Wooldridge 2018). In earlier versions of this meta-analysis, we estimated a series of meta-regressions following the Mundlak approach (Guignet et al. 2020), but we do not pursue these models in the current study for two reasons. First, the WLS models performed better when assessing out-of-sample transfer error. Second, few of the right-hand-side variables vary within a cluster, and those that do are mainly methodological variables. Therefore, the parametrically estimated cluster-specific effects based on cluster means would capture methodological choices and not cluster-specific effects associated with a particular housing market or waterbody, which is the primary interest for purposes of benefit transfer. • ↵18 Because of collinearity concerns, we do not account for both waterbody type and baseline water clarity in the same model. • ↵19 Step-by-step guidance for a similar benefit transfer exercise is provided in Appendix D.2. • ↵20 See Appendix D.1 for the cluster-weighted mean values of all covariates. 
• ↵21 In the case of a linear meta-regression, this approach of plugging in the sample means is similar to a more generalizable procedure proposed by Moeltner, Boyle, and Paterson (2007), where the meta-analyst identifies the set of all possible combinations of methodological variable values, assigns a probability to each, and then predicts the benefit transfer estimates for each methodological variable combination and takes the average across all combinations. • ↵22 One can use the coefficient estimates in Table 4 for benefit transfer. The full variance-covariance matrices for RESCA WLS models 1 and 6 are presented in Appendix E. These are needed to derive the corresponding confidence intervals via the delta method (Greene 2003, 70) or Monte Carlo simulations. An illustrative step-by-step benefit transfer example based on RESCA WLS model 6 is provided in Appendix D.2. • ↵23 In future work, with specific policy applications in mind, it may prove fruitful to identify the optimal scope (i.e., the subset of meta-observations that should be used for benefit transfer estimates) (Moeltner and Rosenberger 2008, 2014; Johnston and Moeltner 2014; Moeltner et al. 2019). The optimal subset of the metadata can be identified using model search algorithms (e.g., Moeltner 2019) but will vary across policy contexts. • ↵24 Website links to these publicly available data sources are as follows: Water Quality Portal, https://www.waterqualitydata.us/; National Hydrography Dataset, https://www.usgs.gov/core-science-systems/ngp/national-hydrography/; U.S. Census Bureau, https://www.census.gov/; National Land Cover Dataset, https://www.mrlc.gov/ (accessed February 20, 2019). The Lake Multiscaled Geospatial and Temporal Database is another useful data source specific to lake water quality, available at https://lagoslakes.org/projects/ (accessed February 10, 2021). • ↵25 One would not necessarily want to add estimates across these methods because of potential double counting.
Although a large portion of total values derived by stated preference studies may reflect nonuse values (e.g., Freeman, Herriges, and Kling 2014; Moore et al. 2018), there is still likely overlap in the endpoints valued; e.g., both could capture use values from waterfront recreation affected by water quality.
Li Li
Professor, Department of Mathematics and Statistics, Oakland University
Office: 350 MSC
Phone: 248-370-3447
Fax: 248-370-4184
E-mail: li2345@oakland.edu

• Cluster algebras and related geometry and combinatorics.
• Algebra, geometry and combinatorics of Schubert varieties.
• Algebra, geometry and combinatorics of points in the plane or in higher-dimensional spaces, including: the ideal defining the diagonal locus in (C^2)^n and related combinatorial objects such as the q,t-Catalan numbers; the Hilbert scheme of points on a Deligne-Mumford stack.
• Algebro-geometric, topological and combinatorial properties of objects related to arrangements of subvarieties, including hyperplane arrangements and subspace arrangements; wonderful compactifications of arrangements of subvarieties; the relation of arrangements to the study of singularities.
• The theory of Lawson homology and morphic cohomology.
• Groups and Cayley graphs.

For information about the classes I am teaching, please log in to Moodle.
velocity not smooth in tracker 4.85
dynamic model for free fall with air drag

Hi ?? My drag_viscous.trz file with your model and no video at all is at http://www.cabrillo.edu/~dbrown/tracker/test/drag_viscous.trz. Your freeFall_5g.trz file wouldn't open for me and was not even recognized as a valid zip file, so it must have gotten corrupted. Open my drag_viscous.trz and check the video clip and model settings. They're almost identical to your 5g numbers, but you can see the plots are all nice and smooth. The difference is in the missing times that you see clearly on your position plot. I suspect your video camera has dropped occasional frames (many do this automatically when they can't keep up 30fps). This means the deltaT is not constant, which is not what the dynamic model assumes. Using a video with no dropped frames would fix the problem... Hope this is helpful. Doug

PS I see your name as ??? ??????. Is this just my computer? Can you change it to something more widely readable?
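Doug's diagnosis (non-constant deltaT from dropped frames) can be checked directly from the frame timestamps. A minimal sketch, with made-up timestamps for a nominal 30 fps clip (`find_dropped_frames` is a hypothetical helper, not part of Tracker):

```python
from statistics import median

def find_dropped_frames(times, tolerance=1.5):
    """Return indices i where the gap times[i+1] - times[i] exceeds
    `tolerance` times the median frame interval, suggesting a dropped frame."""
    gaps = [t1 - t0 for t0, t1 in zip(times, times[1:])]
    dt = median(gaps)
    return [i for i, g in enumerate(gaps) if g > tolerance * dt]

# Illustrative 30 fps timestamps with one frame missing after index 3:
times = [0.000, 0.033, 0.067, 0.100, 0.167, 0.200, 0.233]
print(find_dropped_frames(times))  # → [3]
```

If this list is non-empty, the constant-deltaT assumption of the dynamic model is violated.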
Roth vs. 'regular' 401(k): the math

The Roth 401(k) contribution was introduced by the Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA). It was then (and has been since) largely a budget gimmick – a way to reduce the size of the ‘tax expenditure’ on retirement savings. Further ‘Rothification’ of Tax Code retirement savings provisions is currently being considered as a way to finance other tax policy objectives, like reducing rates. This strategy is clearly behind the proposal of Congressman Dave Camp (R-MI), Chairman of the House Ways and Means Committee, to mandate that 401(k) plans include a Roth option and provide that, after a participant has ‘used up’ half her 401(k) contribution limit with ‘regular’ 401(k) contributions, additional contributions have to be Roth. (See our article Congressman Camp’s comprehensive tax reform proposal.) Minor Roth changes have been used (and continue to be proposed) as a way to raise revenues to pay for policy initiatives unrelated to retirement savings. For instance, the Small Business Jobs Act of 2010 (SBJA10) included, as a revenue-raiser, a provision allowing in-the-plan Roth conversions in certain circumstances. Proposals to further liberalize Roth conversion rules often come up when policymakers are searching for revenue. From the point of view of plan participants and plan sponsors, Roth math can be a little confusing. For some participants, Roth contributions will produce greater benefits (net of taxes) than regular contributions. For others, they produce smaller benefits. Which outcome applies often depends on the participant’s marginal tax rate when the contribution is made and when it is distributed. In this article we begin with a discussion of Roth tax benefit math (that is, Roth math from the participant’s point of view) – the relative value of Roth tax benefits in different scenarios.
We’ll then briefly describe Congress’s Roth budget math – which is what drives Roth policy generally. We’ll conclude with a brief review of possible Roth policy changes.

Roth vs. regular 401(k) contributions

The basics of participant Roth vs. regular contribution math can be described in the following three propositions:

1. Assuming identical tax rates at contribution and distribution, $1,000 saved as a Roth contribution produces the same tax benefit as it would if saved as a regular 401(k) contribution.

While not particularly intuitive, this equivalence is a commonplace. Here’s a simple version of the math: Assume two taxpayers in the highest marginal bracket (39.6%) at the time of contribution (year 1) and the time of distribution (year 10) and a 5% earnings rate. They both want to save $1,000. One makes a regular 401(k) contribution; one makes a Roth contribution. The following table shows what they will get at distribution.

Year   Regular   Roth
  1     $1,000   $604
  2     $1,050   $634
  3     $1,103   $666
  4     $1,158   $699
  5     $1,216   $734
  6     $1,276   $771
  7     $1,340   $809
  8     $1,407   $850
  9     $1,477   $892
 10     $1,551   $937
Net       $937   $937

Bottom line: the regular contributor doesn’t pay taxes on the way in, but does pay on the way out; the Roth contributor pays taxes on the way in but not on the way out. They both wind up with the same amount at distribution, net of taxes.

2. Assuming identical tax rates at contribution and distribution, the maximum Roth contribution produces greater tax benefits than the maximum regular contribution.

Again, this proposition isn’t very intuitive. Because taxes have already been paid on the Roth contribution, it is, effectively, ‘bigger’ than a regular contribution. Thus, a $17,500 (the current limit on 401(k) contributions) Roth contribution produces a bigger tax benefit than a $17,500 regular contribution.
Here’s the math: Assume two taxpayers in the highest marginal bracket (39.6%) at the time of contribution (year 1) and the time of distribution (year 10) and a 5% earnings rate. One makes a maximum $17,500 regular 401(k) contribution, one makes a maximum $17,500 Roth contribution. The following table shows what they will get at distribution.

Year   Regular    Roth
  1    $17,500    $17,500
  2    $18,375    $18,375
  3    $19,294    $19,294
  4    $20,258    $20,258
  5    $21,271    $21,271
  6    $22,335    $22,335
  7    $23,452    $23,452
  8    $24,624    $24,624
  9    $25,855    $25,855
 10    $27,148    $27,148
Net    $16,398    $27,148

The reason for this outcome is that the amounts going in aren’t both ‘apples’ (that is, they aren’t comparable). The Roth contributor started with $28,974, paid $11,474 in taxes (39.6%), and then contributed the $17,500 balance to the Roth 401(k). The regular contributor simply contributed $17,500. Taking account of tax effects, the Roth contributor contributed more.

3. If tax rates are higher in the year of contribution than in the year of distribution, regular contributions produce a bigger ultimate benefit net of taxes; if tax rates are lower in the year of contribution than in the year of distribution, Roth contributions produce a bigger benefit.

This is a relatively intuitive proposition. Evaluating it, however, is difficult. There are at least two major variables. The first is personal: an individual may expect to be in a higher tax bracket when she retires – a young worker may legitimately have this expectation. Or she may expect to be in a lower one, e.g., a ‘peak earner’ who doesn’t expect a rich retirement. The second depends on federal tax policy: there are a lot of proposals to reduce marginal tax rates (in connection with the elimination or reduction of certain tax preferences). Congressman Camp’s proposal (noted above) would do so.
Let’s go back to our first table and compare outcomes where the highest marginal rate, for the year of distribution, is reduced to 25% (as it is, for instance (and with a lot of qualifications), in the Camp proposal).

Year   Regular   Roth
  1     $1,000   $604
  2     $1,050   $634
  3     $1,103   $666
  4     $1,158   $699
  5     $1,216   $734
  6     $1,276   $771
  7     $1,340   $809
  8     $1,407   $850
  9     $1,477   $892
 10     $1,551   $937
Net     $1,163   $937

Now let’s consider the opposite possibility, an increase in marginal tax rates. There are certainly those who argue for one. Here’s the same table with the highest marginal rate for the year of distribution set at 50%.

Year   Regular   Roth
  1     $1,000   $604
  2     $1,050   $634
  3     $1,103   $666
  4     $1,158   $699
  5     $1,216   $734
  6     $1,276   $771
  7     $1,340   $809
  8     $1,407   $850
  9     $1,477   $892
 10     $1,551   $937
Net       $776   $937

As noted, these results are pretty intuitive. In a sense, the Roth contributor is speculating on the size of future (that is, year-of-distribution) tax rates. If those rates remain the same as the year-of-contribution rates, his choice is tax neutral. If they go down, he ‘loses.’ If they go up, he ‘wins.’

The challenge for participants and sponsors

In a plan that offers both regular and Roth 401(k) contribution options, these complexities – the non-intuitive nature of the outcomes and their dependence on future unknowns – make participant decision-making difficult and, for some, daunting. In addition, there are other considerations – the 5-year holding period and qualified distribution rules may reduce the liquidity of Roth 401(k) contributions. And it’s generally thought that there is a behavioral bias towards getting the 401(k) tax deduction up front. For sponsors, in addition, Roth 401(k)s present administrative, recordkeeping and communications challenges. The result: some sponsors have been reluctant to include a Roth 401(k) option in their plans.

Budget math

The tax benefit for 401(k) retirement savings – regular or Roth contributions – reduces tax revenues.
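The arithmetic behind the three propositions can be checked in a few lines. A minimal sketch using the article's illustrative assumptions (5% annual return, 39.6% rate in the contribution year, amounts rounded to whole dollars as in the tables; this is an illustration, not tax advice):

```python
# Roth vs. regular 401(k) net-of-tax arithmetic, per the article's tables.
def regular_net(saved, years, r, tax_out):
    # Pre-tax contribution grows untaxed, then is taxed once at distribution.
    return saved * (1 + r) ** (years - 1) * (1 - tax_out)

def roth_net(saved, years, r, tax_in):
    # Taxed up front; growth and distribution are tax-free.
    return saved * (1 - tax_in) * (1 + r) ** (years - 1)

r, t_in = 0.05, 0.396
# Proposition 1: equal rates in and out -> equal nets ($937 in the table).
print(round(regular_net(1000, 10, r, 0.396)), round(roth_net(1000, 10, r, t_in)))
# Proposition 3: a lower distribution-year rate favors regular ($1,163 vs $937)...
print(round(regular_net(1000, 10, r, 0.25)))
# ...and a higher one favors Roth ($776 vs $937).
print(round(regular_net(1000, 10, r, 0.50)))
```

Proposition 2 follows the same way: grossing up the $17,500 Roth maximum by the 39.6% rate ($17,500 / 0.604 ≈ $28,974) shows the Roth contributor effectively put in more.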
This tax benefit is (along with, e.g., the Child Tax Credit, the Earned Income Tax Credit, the mortgage deduction and health care tax benefits) on a short list of tax preferences that tax reformers have targeted. The strategy of the reformers is to reduce some or all of these tax preferences and use the resulting revenue increase to fund a decrease in marginal tax rates. Given the theoretical equivalence of the tax benefits for either regular or Roth contributions, you would think it wouldn’t matter which style of contribution was preferred. But it turns out that, because of the way the cost of tax preferences is calculated for Congressional budgeting, regular contributions ‘cost’ more than Roth contributions. That cost – the lost tax revenue from the 401(k) tax benefit – is generally calculated on a ‘cash flow’ basis. Estimates of the revenue loss generally include: (1) deductions/exclusions for contributions (revenue lost); (2) untaxed trust earnings (revenue lost); and (3) taxes paid on distributions (revenue gained). They may also include certain taxpayer behavioral responses to any proposed change. The element of this process that is critical for understanding the budget ‘magic’ of Roth 401(k) contributions is that these estimates are made for the current year and over a budget window of, generally, 10 years. This ‘budget window cash flow’ approach may work well for certain tax expenditures, but many have criticized its application to 401(k) tax incentives. Assuming constant contributions, distributions, and tax rates, it is possible that a cash flow approach would capture, over time, the ‘true cost’ of 401(k) tax benefits. But given what has actually happened over the past 20 years, cash flow numbers are likely to overestimate the cost of regular (as opposed to Roth) 401(k) contributions.
Because 401(k) plans are relatively new, and baby boomers have only just begun to retire, deductions/exclusions for contributions are relatively high, and taxes paid on distributions are relatively low. Again, because of this budget window cash flow methodology, the cost of Roth 401(k) contributions is underestimated. Because Roth contributions are taxed upfront, more revenue comes in in the early period, that is, inside the budget window. Because the tax benefit is only ‘realized’ on distribution, more of the tax loss occurs in the later period, outside the budget window. So for policymakers, simply changing regular 401(k) contributions to Roth contributions ‘increases tax revenues’ without affecting the real tax benefit (assuming constant tax rates).

Policy proposals

There are basically two types of Roth proposals being considered in Congress: (1) major, comprehensive changes to the 401(k) contribution rules designed to generate significant revenue to help fund cuts in marginal rates; and (2) less major (or even minor) changes designed to produce revenue for specific, lower-cost policy proposals.

The Camp proposal

An example of the first type of proposal was included in Congressman Camp’s comprehensive tax reform proposal released in February 2014. Here are the Camp Roth provisions: Except for certain plans of small employers (100 employees or less), the dollar limit on regular (pre-tax) 401(k) contributions would be reduced by one-half. Thus, based on 2014 limits, under the proposal the limit on pre-tax contributions would be $8,750 (one half of $17,500), and the catch-up contribution limit would be $2,750 (one half of $5,500). Current limits would remain in place for the combination of pre-tax and Roth contributions – an employee could still make the maximum 401(k) contribution, but half of that contribution would have to be made on a Roth basis.
401(k) plan sponsors would generally be required to include a Roth contribution option in the plan (there would be an exception for certain small employer plans). Oversimplified version: sponsors would have to include a Roth 401(k) option in their plans, and contribution maximizers would have to make half their contributions as Roth contributions. For some participants, these changes might actually improve retirement outcomes (net of taxes). For others, e.g., those whose highest marginal tax rate in retirement is lower than it was when the contribution was made, they might worsen outcomes. It would, as we discussed above, make the entire process of figuring out what to contribute more difficult (and conceivably less appealing). And, because of the preference for (bias towards) ‘deductions now, taxes later,’ participants would probably perceive it as a reduction in tax benefits, whatever the long-term outcome. Sponsors would have to hire staff, build systems and develop communications to implement this Roth program. And it’s possible that participant dissatisfaction with, and bias against, Roth contributions might reduce overall 401(k) plan participation. That would present a long-term retirement policy challenge: if your participants are not saving enough in the 401(k), how will you help them prepare for an adequately funded retirement? Might something like this pass? Unlikely, but maybe, if the House, the Senate and the Administration return to the idea of a ‘grand bargain’ on the budget, entitlements and tax reform, or even if they just try to do a stand-alone bipartisan tax reform bill – it has happened before. These possibilities will depend in part on the results of the November 2014 elections.

Targeted proposals

With respect to the second type of proposal – a less comprehensive Roth 401(k) change designed to raise revenues for a specific policy initiative – an example from the past is the change (as noted above) included in SBJA10.
That change generally allowed participants to convert money in their regular 401(k) account to a Roth 401(k). Generally, only money that could have been distributed was eligible for this treatment. In connection with the conversion, the participant would pay taxes (and then pay no taxes on actual distribution) – hence the short-term revenue-raising appeal. This was a very minor change – the participant could, before SBJA10 passed, get the same tax result by simply taking a distribution, rolling it into an IRA, and then converting the IRA to a Roth IRA. A bigger change, which would produce more revenue, would be to allow the conversion of non-distributable regular 401(k) accounts to Roth accounts. Proposals to do this have (informally) been considered and are likely to continue to come up when Congress is hunting for revenue to fund proposals (generally bipartisan and generally popular) that cost money. These more targeted Roth proposals are generally voluntary both at the sponsor level (e.g., the sponsor generally does not have to implement a Roth conversion program if it doesn’t want to) and at the participant level (e.g., the participant doesn’t have to convert). One can imagine a scenario in which they are made mandatory.

Revenue policy, not retirement policy

Finally, let’s note that none of these policy initiatives has anything to do with retirement savings policy. They are only being considered because they raise revenues that (generally) will be used for other, non-retirement policy purposes. Nevertheless, they produce real-world consequences for participants and sponsors.

* * *

Roth 401(k) contributions present both participants and sponsors with some interesting options, although, as we have discussed, there are obvious tradeoffs. Not all policymakers like Roth 401(k) contributions. But the budget math is, for many, compelling.
401(k) plan sponsors will want to consider familiarizing themselves generally with the ‘Roth option’ and related issues – we may be dealing with more of them in the relatively near future.
Window Replacement in context of estimated cost
27 Aug 2024

Title: Estimating the Cost of Window Replacement: A Comprehensive Analysis

Abstract: This article provides a comprehensive analysis of the estimated cost of window replacement, considering the various factors that influence the overall expenditure. The study aims to develop a formula-based approach to estimating the cost of window replacement, taking into account the type and size of windows, the materials used, labor costs, and other relevant variables.

Introduction: Window replacement is a common renovation project in residential and commercial buildings. The estimated cost of window replacement can vary significantly depending on several factors, including the type and size of windows, the materials used, labor costs, and location-specific considerations. This study aims to develop a formula-based approach to estimating the cost of window replacement, providing a comprehensive framework for architects, builders, and homeowners.

Literature Review: Previous studies have attempted to estimate the cost of window replacement using various approaches, including regression analysis (Kumar et al., 2018) and decision trees (Lee et al., 2020). However, these studies have limitations, such as relying on limited datasets or neglecting important factors that influence the estimated cost.

Methodology: This study employs a mixed-methods approach, combining both qualitative and quantitative data. The analysis is based on a comprehensive review of existing literature, industry reports, and expert opinions.
A formula-based approach is developed to estimate the cost of window replacement, considering the following variables:

• Type of window (e.g., single-hung, double-hung, casement)
• Size of window (in square feet)
• Material used (e.g., vinyl, wood, aluminum)
• Labor costs (per hour)
• Location-specific factors (e.g., climate, zoning regulations)

The formula is represented as follows:

Estimated Cost = (Type of Window × Size of Window) + (Material Used × Quantity of Materials) + (Labor Costs × Hours Worked) + (Location-Specific Factors × Adjustment Factor)

Results: The developed formula provides a comprehensive framework for estimating the cost of window replacement. The study highlights the importance of considering the various factors that influence the estimated cost, including the type and size of windows, the materials used, labor costs, and location-specific considerations.

Discussion: The findings of this study demonstrate the value of a formula-based approach to estimating the cost of window replacement. The proposed formula provides a practical tool for architects, builders, and homeowners, taking into account the various factors that influence the overall expenditure.

Conclusion: This study contributes to the existing body of knowledge on window replacement by providing a comprehensive framework for estimating its cost. Future studies can build upon this research by exploring additional variables and refining the proposed formula.

References:
Kumar, P., et al. (2018). Estimating the cost of window replacement using regression analysis. Journal of Building Engineering, 19, 241-248.
Lee, S., et al. (2020). Decision trees for estimating the cost of window replacement. Journal of Construction Engineering and Management, 146(4), 04020004.
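The additive formula above can be sketched in code. A minimal illustration only: every rate, factor and dictionary below is a made-up placeholder (the article provides no numerical values), so `estimated_cost` and its constants are hypothetical, not industry data.

```python
# Illustrative implementation of the article's additive cost estimate.
# All rates below are invented placeholders for demonstration purposes.
WINDOW_TYPE_FACTOR = {"single-hung": 1.0, "double-hung": 1.2, "casement": 1.4}
MATERIAL_COST_PER_SQFT = {"vinyl": 15.0, "wood": 30.0, "aluminum": 20.0}

def estimated_cost(window_type, size_sqft, material,
                   labor_rate_per_hour, hours_worked,
                   location_adjustment=1.0, base_rate_per_sqft=10.0):
    """Additive estimate following the article's formula: a type/size term,
    a material term, and a labor term, scaled by a location adjustment."""
    type_term = WINDOW_TYPE_FACTOR[window_type] * size_sqft * base_rate_per_sqft
    material_term = MATERIAL_COST_PER_SQFT[material] * size_sqft
    labor_term = labor_rate_per_hour * hours_worked
    return (type_term + material_term + labor_term) * location_adjustment

# A 12 sq ft double-hung vinyl window, $60/hr labor, 2.5 hours:
print(estimated_cost("double-hung", 12, "vinyl", 60, 2.5))
```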
Finomaton 0.9 review

License: GPL (GNU General Public License)
File size: 69K
Developer: Markus Triska

The Finomaton project lets users comfortably draw and typeset finite state machines (automata). The resulting graphs can be exported to plain MetaPost and subsequently included in TeX and LaTeX documents for excellent typesetting quality. Contrary to many other packages, the mouse can be used to interactively move objects around. States are magnetic to facilitate connections, and any TeX command can be embedded in the labels of states and lines.

What's New in This Release: Spread functions, more keyboard shortcuts, and rudimentary support for exporting directed graphs were added.
Renormalization Group Flow

The standard model contains three gauge couplings, which are very different in strength. This is not really a problem of the standard model, because we can simply put these measured values in by hand. However, Grand Unified Theories (GUTs) provide a beautiful explanation for this difference in strength. A simple group $G_{GUT}$ implies that we have only one gauge coupling as long as $G_{GUT}$ is unbroken. The gauge symmetry $G_{GUT}$ is broken at some high energy scale in the early universe. Afterwards, we have three distinct gauge couplings with approximately equal strength. The gauge couplings are not constant, but depend on the energy scale. This is described by the renormalization group equations (RGEs). The RGEs for a gauge coupling depend on the number of particles that carry the corresponding charge. (This is discussed in detail below.) Therefore, we can use the known particle content of the standard model to compute how the three couplings change with energy. This is shown schematically in the figure below. The couplings change differently with energy, because the numbers of particles that carry, for example, color ($SU(3)$ charge) or isospin ($SU(2)$ charge) are different. The known particle content of the standard model and the hypothesis that there is one unified coupling at high energies therefore provide a beautiful explanation of why strong interactions are strong and weak interactions are weak. This is not just theory. For example, the strong coupling “constant” $\alpha_S$ has been measured at very different energy scales. Some of these measurements are summarized in the following plot.

To understand how all this comes about, recall that in quantum field theory we have a cloud of virtual particle-antiparticle pairs around each particle. This situation is similar to the classical situation of an electron inside a dielectric medium.
Through the presence of the electron, the electrically neutral molecules around it get polarized, which is illustrated in the figure below. As a result, the electrical charge of the electron gets partially hidden or screened. This is known as dielectric screening. Analogous to what happens in a dielectric medium, the virtual particle-antiparticle pairs get polarized and the charge of the particle is screened. Concretely, this means that if we are close to a particle, we measure a different charge than from far away, because there are fewer virtual particle-antiparticle pairs that screen the charge. This screening effect happens not only for electrical charge but for color charge and weak isospin, too. In particle physics, the notion of distance is closely related to the notion of energy. If we shoot a particle with lots of energy onto an electron, it comes closer to the electron before it gets deflected than a particle with less energy. Therefore, the particle with more energy feels a larger charge. It may now seem that our three coupling strengths all get bigger if we measure them at higher energies. However, it turns out that gauge bosons have the opposite effect to fermions. They anti-screen and thus make a given charge bigger at larger distances. Recall that we have:

• One gauge boson for $U(1)$.
• Three gauge bosons for $SU(2)$, because the adjoint representation is $3$-dimensional.
• Eight gauge bosons for $SU(3)$, because the adjoint representation is $8$-dimensional.

These numbers tell us that $SU(3)$ couplings are most affected by the gauge boson anti-screening effect, simply because there are more $SU(3)$ gauge bosons. In fact, it can be computed that their effect outweighs the screening effect of the quarks, and thus the $SU(3)$ coupling constant gets weaker at smaller distances. For $SU(2)$ the gauge boson and fermion effects are almost equal, and therefore the coupling strength is approximately constant.
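The competition between boson anti-screening and fermion screening can be made quantitative with the standard one-loop coefficient for an $SU(N)$ gauge theory with $n_f$ quark flavors, $b_0 = (11N - 2n_f)/3$: the $11N$ piece is the gauge-boson anti-screening, the $-2n_f$ piece is the fermion screening, and $b_0 > 0$ means the coupling shrinks at high energies (asymptotic freedom). A tiny sketch:

```python
# One-loop beta coefficient for SU(N) with n_f quark flavors in the
# fundamental representation: b0 = (11*N - 2*n_f) / 3.
# The 11*N term comes from gauge-boson anti-screening, the -2*n_f term
# from quark screening; their relative size decides which effect wins.
def b0(n_colors, n_flavors):
    return (11 * n_colors - 2 * n_flavors) / 3

print(b0(3, 6))        # QCD with 6 flavors: 7.0 > 0, anti-screening wins
print(b0(3, 17) > 0)   # with 17 or more flavors, screening would win: False
```

This is why the eight $SU(3)$ gauge bosons outweigh the six quark flavors, while for $SU(2)$ the two contributions nearly cancel.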
For $U(1)$ the ordinary screening effect dominates and the corresponding coupling strength becomes stronger at smaller distances. Given the coupling strengths at some energy scale, we can compute at which energy scale they become approximately equal. This energy scale is closely related to the mass of the GUT gauge bosons $m_X$. From an effective field theory point of view, at energies much higher than $m_X$ the breaking of the GUT symmetry has a negligible effect and therefore the gauge coupling constants unify. The mathematical description of this change of the coupling strengths with energy, known as the renormalization group equations, is the topic of the next section. The coupling constants change so slowly with the energy that the scale where they are approximately equal is incredibly high. This means the GUT gauge bosons are so heavy that it is no wonder they have not been seen in experiment yet.

The Renormalization Group Equations

To illustrate the arguments that lead to the famous renormalization group equations, we briefly discuss arguably the simplest example in quantum field theory: the Coulomb potential $V(r) = \frac{e^2}{4 \pi r}$. In QFT it corresponds to the exchange of a single photon, and $\frac{1}{4 \pi r}$ is the Fourier transform of the propagator. A $1$-loop correction to this diagram is, for example, the vacuum polarization, which yields a correction of order $e^4$ to the Coulomb potential. In momentum space, the Coulomb potential then reads
$$ \tilde{V}(p) = e^2 \, \frac{1 - e^2 \Pi_2(p^2)}{p^2} \, , $$
where
$$ \Pi_2(p^2) = \frac{1}{2\pi^2} \int_0^1 dx \, x(1-x)\left[ \frac{2}{\epsilon} + \ln \left(\frac{\tilde{\mu}^2}{m^2-p^2x(1-x)}\right) \right] $$
(see, for example, the QFT book by Schwartz or the similar free chapter here). The usual problem in QFT is now that $\Pi_2(p^2)$ is infinite and we need to renormalize.
For this reason, we demand that the potential between two particles separated by some distance $r_0$ should be $V(r_0) = \frac{e_R^2}{4\pi r_0}$, where $e_R$ denotes the renormalized charge. In momentum space this means $\tilde{V}(p_0)=\frac{e_R^2}{p_0^2}$. This defines the renormalized charge
$$ e_R^2 := p_0^2 \tilde{V}(p_0) = e^2 - e^4 \Pi_2(p_0^2) + \ldots \, , $$
where the dots denote higher order corrections. Equally, we can solve for the bare charge
\begin{equation} \label{eq:defbarecharge}
e^2 = e_R^2 + e_R^4 \Pi_2(p_0^2) + \ldots
\end{equation}
At another momentum scale $p$, the potential reads
\begin{align}
\tilde{V}(p) &= \frac{e^2}{p^2} - \frac{e^4 \Pi_2(p^2)}{p^2} + \ldots \stackrel{\text{Eq. \ref{eq:defbarecharge}}}{=} \frac{e_R^2}{p^2} - \frac{e_R^4\left[\Pi_2(p^2)-\Pi_2(p_0^2) \right]}{p^2} + \ldots \notag \\
&= \frac{e_R^2}{p^2} \left( 1 + \frac{e_R^2}{2\pi^2} \int_0^1 dx \, x(1-x) \ln \left( \frac{p^2x(1-x)-m^2}{p_0^2x(1-x)-m^2} \right) \right) + \ldots
\end{align}
For large momenta $|p^2| \gg m^2$ the mass drops out and we have
$$ \tilde{V}(p) \approx \frac{e_R^2}{p^2} \left( 1 + \frac{e_R^2}{12 \pi^2} \ln\left( \frac{p^2}{p_0^2}\right) \right) + \mathcal{O}(e_R^6) = \frac{e_{\text{eff}}^2(p)}{p^2} + \mathcal{O}(e_R^6) \, , $$
with
$$ e_{\text{eff}}^2(p) := e_R^2 \left( 1 + \frac{e_R^2}{12 \pi^2} \ln\left( \frac{p^2}{p_0^2}\right) \right) \, . $$
This means we introduce an effective charge $e_{\text{eff}}(p)$, such that the potential looks for momentum transfer $p$ like the usual Coulomb potential, but with charge $e_{\text{eff}}(p)$ instead of $e_R$. This describes exactly the screening effect discussed at the beginning of this chapter. For large momenta, which means at short distances, we have an effective charge $e_{\text{eff}}(p)$ which is larger than the renormalized $e_R$. In analogy to the dielectric medium discussed above, here the virtual $e^+ e^-$ pair acts like a dipole.
Including additional loops in the series yields analogously
\begin{align}
\tilde{V}(p) &= \frac{e_R^2}{p^2} \left( 1 + \frac{e_R^2}{12 \pi^2} \ln\left( \frac{p^2}{p_0^2}\right) + \left(\frac{e_R^2}{12 \pi^2} \ln\left( \frac{p^2}{p_0^2}\right) \right)^2 + \ldots \right) \notag \\
&= \frac{1}{p^2} \left( \frac{e_R^2}{1- \frac{e_R^2}{12\pi^2} \ln\left( \frac{p^2}{p_0^2} \right)} \right) = \frac{e_{\text{eff}}^2(p)}{p^2} \, ,
\end{align}
with
\begin{equation} \label{eq:effetoallorders}
e_{\text{eff}}^2(p) := \frac{e_R^2}{1- \frac{e_R^2}{12\pi^2} \ln\left( \frac{p^2}{p_0^2} \right)} \, .
\end{equation}
It is convenient to rewrite Eq. \ref{eq:effetoallorders} as
$$ \frac{1}{e_{\text{eff}}^2(p)} = \frac{1}{e_R^2} - \frac{1}{12\pi^2} \ln\left( \frac{p^2}{p_0^2} \right) \, . $$
The main idea of the renormalization group is that the choice of the reference scale $p_0$ does not matter. What is actually measured in experiments is $e_{\text{eff}}$ and not $e_R$. For example, if we want our renormalized charge $e_R$ to correspond to the macroscopic electric charge, we need to use $p_0=0$, which corresponds to $r_0 = \infty$. Thus $e_R = e_{\text{eff}}(0)$. In contrast, for $p_0=m_e$, we have
$$ \frac{1}{e_{\text{eff}}^2(p)} = \frac{1}{e_R^2} - \frac{1}{12\pi^2} \ln\left( \frac{p^2}{m_e^2} \right) $$
and therefore $e_R = e_{\text{eff}}(m_e)$. In general
$$ \frac{1}{e_{\text{eff}}^2(p)} = \frac{1}{e_{\text{eff}}^2(\mu)} - \frac{1}{12\pi^2} \ln\left( \frac{p^2}{\mu^2} \right) \, . $$
Taking the derivative with respect to the scale $\mu$ yields
$$ 0 = - \frac{2}{e_{\text{eff}}^3(\mu)} \frac{d e_{\text{eff}}(\mu)}{d\mu} + \frac{1}{12\pi^2} \, \frac{2}{\mu} \, , $$
which we can rewrite as
\begin{equation} \label{eq:incompleteRGE}
\mu \frac{d e_{\text{eff}}(\mu)}{d \mu} = \frac{e_{\text{eff}}^3(\mu)}{12\pi^2} \, .
\end{equation}
This is called a renormalization group equation (RGE) and it enables us to compute how $e_{\text{eff}}$ depends on the scale $\mu$, i.e. the screening of the charge through the vacuum polarizations. Eq.
\ref{eq:incompleteRGE} is not the complete RGE for the electric charge, because we only considered a virtual $e^+ e^-$ pair in the loops, although other particles contribute, too. The derivation of the complete RGEs for various gauge couplings and models is the topic of the next section. In general, the right-hand side is called the $\beta$-function, and for a gauge coupling $g$ we have

\begin{equation}
\mu \frac{d g(\mu)}{d \mu} = \beta (g(\mu)) \, .
\end{equation}

In this post I describe how we compute the $\beta$-functions in practice and here how we can solve them.

One last thing: maybe you wonder about the name "renormalization group". Here's how the book "Quantum Field Theory for the Gifted Amateur" explains it:

"The renormalization group is a bit of a misnomer as it is not really a group. The name arises from the study of how a system behaves under rescaling transformations and such transformations do of course form a group. However, the "blurring" that occurs when we rescale and then integrate up to a cut-off, thereby removing fine structure (and this is the very essence of the renormalization group procedure) is not invertible (the fine details are lost and you can't put them back). Thus the transformations consisting of rescaling and integrating up to a cut-off do not form a mathematical group because the inverse transformation does not exist."

P.S. I wrote a textbook, which is in some sense the book I wished had existed when I started my journey in physics. It's called "Physics from Symmetry".
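As a numerical aside, the statement that "the choice of the reference scale $p_0$ does not matter" can be checked directly from the resummed relation $1/e_{\text{eff}}^2(p) = 1/e_{\text{eff}}^2(\mu) - \ln(p^2/\mu^2)/12\pi^2$. A small sketch with made-up numbers (not from the original post): running in two steps through an intermediate scale must agree with running directly.

```python
import math

def inv_e_eff_sq(p_sq, mu_sq, inv_e_sq_at_mu):
    """Leading-log, all-orders relation:
    1/e_eff^2(p) = 1/e_eff^2(mu) - ln(p^2/mu^2)/(12*pi^2)."""
    return inv_e_sq_at_mu - math.log(p_sq / mu_sq) / (12.0 * math.pi ** 2)

inv0 = 137.0 / (4.0 * math.pi)   # made-up reference value of 1/e^2 at mu^2 = 1

# Running mu^2 = 1 -> 10 -> 100 in two steps must agree with the
# direct run mu^2 = 1 -> 100: the intermediate reference scale drops out.
step = inv_e_eff_sq(10.0, 1.0, inv0)
two_step = inv_e_eff_sq(100.0, 10.0, step)
direct = inv_e_eff_sq(100.0, 1.0, inv0)
print(two_step, direct)
```

The two results agree exactly because the relation is additive in $\ln \mu^2$, which is precisely the content of the RGE.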
Introductory Chemical Engineering Thermodynamics, 2nd ed.

Solving for the saturation pressure using PREOS.xls simply involves setting the temperature and guessing pressure until the fugacities in vapor and liquid are equal. (5min, learncheme.com) It is not shown, but it would also be easy to set the pressure and guess temperature until the fugacities were equal in order to solve for the saturation temperature. One added suggestion would be to type in the shortcut vapor pressure (SCVP) equation to give an initial estimate of the pressure. Rearranging the SCVP can also give an initial guess for Tsat when given P. This presentation illustrates a sample calculation for toluene to explore when the vapor is the stable phase, when the liquid is the stable phase, and when the phases are roughly in equilibrium.

Comprehension Questions:

1. Estimate the vapor pressure (MPa) of n-pentane at 450K according to the PREOS. Compare your result to the value from Eq. 2.47 (SCVP) and to the Antoine equation using the coefficients given in Appendix E. What do you think explains the observations that you make?
2. Estimate the saturation temperature (K) of n-pentane at 3.3 MPa according to the PREOS. Compare your result to the value from Eq. 2.47 (SCVP) and to the Antoine equation using the coefficients given in Appendix E. What do you think explains the observations that you make?
3. Estimate the vapor pressure (MPa) of n-pentane at 223K according to the PREOS. Compare your result to the value from Eq. 2.47 (SCVP) and to the Antoine equation using the coefficients given in Appendix E. What do you think explains the observations that you make?
4. Estimate the saturation temperature (K) of n-pentane at 3.3 kPa according to the PREOS. Compare your result to the value from Eq. 2.47 (SCVP) and to the Antoine equation using the coefficients given in Appendix E. What do you think explains the observations that you make?
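The suggested SCVP initial guess can be sketched in a few lines. This is an added illustration, not part of the original screencast: it assumes the shortcut equation has the common form log10(Psat/Pc) = (7/3)(1 + omega)(1 - 1/Tr), and the critical constants for n-pentane are approximate literature values.

```python
import math

# Assumed SCVP form: log10(Psat/Pc) = (7/3)*(1 + omega)*(1 - 1/Tr).
def scvp_psat(T, Tc, Pc, omega):
    """Shortcut vapor pressure estimate; result in the units of Pc."""
    return Pc * 10.0 ** ((7.0 / 3.0) * (1.0 + omega) * (1.0 - Tc / T))

def scvp_tsat(P, Tc, Pc, omega):
    """SCVP solved for temperature: an initial guess for Tsat given P."""
    return Tc / (1.0 - math.log10(P / Pc) / ((7.0 / 3.0) * (1.0 + omega)))

# Approximate critical constants for n-pentane (literature values):
Tc, Pc, omega = 469.7, 3.37, 0.252   # K, MPa, acentric factor

print(scvp_psat(450.0, Tc, Pc, omega))   # initial P guess (MPa) for the 450 K question
print(scvp_tsat(3.3, Tc, Pc, omega))     # initial T guess (K) for the 3.3 MPa question
```

Either guess is then refined by iterating the PREOS until the vapor and liquid fugacities match.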
We can combine the definition of fugacity in terms of the Gibbs Energy Departure Function with the procedure of visualizing an equation of state to visualize the fugacity as characterized by the PR EOS. (21min, uakron.edu) This amounts to plotting Z vs. density, similar to visualizing the vdW EOS. Then we simply type in the departure function formula. Since the PR EOS describes both vapors and liquids, we can calculate fugacity for both gases and liquids. Taking the reciprocal of the dimensionless density ( V/b=1/(bρ) ) gives a dimensionless volume. When the dimensionless pressure (bP/RT) is plotted vs. the dimensionless volume, the equal area rule indicates the pressure where equilibrium occurs and this can be checked by comparing the ln(f/P) values for the liquid and vapor roots. When the pressure is not exactly saturated, we may still be in the 3-root region. Then you need to check the fugacity to determine which phase is stable. Concept Questions: 1. What equation can we use to estimate the fugacity of a compressed liquid relative to its saturation value? 2. How accurate is that equation relative to the change in pressure when we are close to saturation? 3. The video shows a graph of ln(f/P) vs. P. Which phase gives the lower value of fugacity when you are to the right of the intersection point? (ie. vapor or liquid?)
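Concept question 1 above asks for the equation that estimates a compressed liquid's fugacity relative to its saturation value; the standard answer is the Poynting correction, f = f_sat * exp(V_liq*(P - P_sat)/RT). A minimal sketch with made-up numbers (not from the original screencast):

```python
import math

def poynting_fugacity(f_sat, v_liq, p, p_sat, t, r=8.314):
    """Compressed-liquid fugacity estimate via the Poynting correction:
    f = f_sat * exp(v_liq*(p - p_sat)/(r*t)).
    With v_liq in cm^3/mol and p in MPa, v_liq*(p - p_sat) is in J/mol."""
    return f_sat * math.exp(v_liq * (p - p_sat) / (r * t))

# Made-up numbers for a liquid slightly above its saturation pressure:
f = poynting_fugacity(f_sat=0.5, v_liq=100.0, p=1.2, p_sat=1.0, t=400.0)
print(f)   # barely above 0.5: near saturation the correction is tiny
```

Because the liquid molar volume is small, the exponent is nearly zero close to saturation, which is why the liquid fugacity changes very little with pressure there.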
Rocky Mountain Simulation

This page describes a numerical simulation of gravity waves over the Rocky Mountains.

Computational Domain

The layout of the simulation is shown in the figure below. The origin is located at Mount Blue Sky (39.6 N, 105.7 W). Note that the computational domain is rotated 7 degrees clockwise with respect to lines of constant latitude. The mesh is clustered in both horizontal directions in order to achieve 250 x 250 meter spacing over the mountain range in the region shown by the black rectangle. Weak stretching of ~1.5% is used toward the edges. The domain extends to an altitude of 140 km and uses uniform vertical spacing of 250 m. A total of 768 x 460 x 560 mesh points are used. Characteristic boundary conditions are used on all sides except the surface.

Wind and Thermodynamic Profiles

The mean winds and temperature profile are taken from radiosonde data on February 19th, 2016, from the launch site in Grand Junction. These profiles extend to an altitude of 31 km. MERRA wind profiles are used above the radiosonde measurements to an altitude of 80 km. A third order interpolating polynomial is then used to smoothly extend the winds above this altitude using the condition that U=V=0 at the upper boundary. The temperature is extended using the NRLMSIS Atmosphere Model (Composition). Plots of various profiles are shown below.

Wind Condition

The mean winds near the surface are increased in time. Forcing terms gradually introduce winds near the surface with the objective of achieving the wind profile within a two hour period. A hyperbolic tangent function is used in order to produce gentle acceleration of the wind near the beginning and end of the forcing period. The maximum forcing rate is equivalent to that of a linear ramp with a duration of thirty minutes.
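The hyperbolic-tangent ramp described above can be sketched as follows. This is an added illustration, not the simulation's actual forcing code: the function shape and parameter values (center time, width) are assumptions chosen so that the maximum rate equals that of a 30-minute linear ramp within a two-hour spin-up window.

```python
import math

def wind_ramp(t, t0=3600.0, tau=900.0):
    """Smooth 0 -> 1 forcing ramp: r(t) = 0.5*(1 + tanh((t - t0)/tau)).
    The maximum rate, 1/(2*tau) at t = t0, matches a linear ramp of
    duration 2*tau (here 1800 s = 30 min); t0 centers the ramp in a
    two-hour window. All parameter values are illustrative."""
    return 0.5 * (1.0 + math.tanh((t - t0) / tau))

# Sample the ramp over the two-hour spin-up period (seconds).
for t in (0.0, 1800.0, 3600.0, 5400.0, 7200.0):
    print(t, wind_ramp(t))
```

The tanh shape gives near-zero acceleration at both ends of the forcing period, avoiding the impulsive start and stop of a plain linear ramp.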
Built-in Types

The following sections describe the standard types that are built into the interpreter.

The principal built-in types are numerics, sequences, mappings, classes, instances and exceptions.

Some collection classes are mutable. The methods that add, subtract, or rearrange their members in place, and don't return a specific item, never return the collection instance itself but None.

Some operations are supported by several object types; in particular, practically all objects can be compared for equality, tested for truth value, and converted to a string (with the repr() function or the slightly different str() function). The latter function is implicitly used when an object is written by the print() function.

Truth Value Testing

Any object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below.

By default, an object is considered true unless its class defines either a __bool__() method that returns False or a __len__() method that returns zero, when called with the object. Here are most of the built-in objects considered false:

• constants defined to be false: None and False.
• zero of any numeric type: 0, 0.0, 0j, Decimal(0), Fraction(0, 1)
• empty sequences and collections: '', (), [], {}, set(), range(0)

Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated. (Important exception: the Boolean operations or and and always return one of their operands.)

Boolean Operations — and, or, not

These are the Boolean operations, ordered by ascending priority:

• x or y: if x is false, then y, else x (1)
• x and y: if x is false, then x, else y (2)
• not x: if x is false, then True, else False (3)

Notes:

1. This is a short-circuit operator, so it only evaluates the second argument if the first one is false.
2. This is a short-circuit operator, so it only evaluates the second argument if the first one is true.
3.
not has a lower priority than non-Boolean operators, so not a == b is interpreted as not (a == b), and a == not b is a syntax error.

Comparisons

There are eight comparison operations in Python. They all have the same priority (which is higher than that of the Boolean operations). Comparisons can be chained arbitrarily; for example, x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false).

This table summarizes the comparison operations:

• < : strictly less than
• <= : less than or equal
• > : strictly greater than
• >= : greater than or equal
• == : equal
• != : not equal
• is : object identity
• is not : negated object identity

Objects of different types, except different numeric types, never compare equal. The == operator is always defined but for some object types (for example, class objects) is equivalent to is. The <, <=, > and >= operators are only defined where they make sense; for example, they raise a TypeError exception when one of the arguments is a complex number.

Non-identical instances of a class normally compare as non-equal unless the class defines the __eq__() method.

Instances of a class cannot be ordered with respect to other instances of the same class, or other types of object, unless the class defines enough of the methods __lt__(), __le__(), __gt__(), and __ge__() (in general, __lt__() and __eq__() are sufficient, if you want the conventional meanings of the comparison operators).

The behavior of the is and is not operators cannot be customized; also they can be applied to any two objects and never raise an exception.

Two more operations with the same syntactic priority, in and not in, are supported by types that are iterable or implement the __contains__() method.

Numeric Types — int, float, complex

There are three distinct numeric types: integers, floating point numbers, and complex numbers. In addition, Booleans are a subtype of integers. Integers have unlimited precision.
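The unlimited precision of integers is easy to demonstrate (an added example, not from the original page):

```python
# Integers never overflow; they simply grow as many digits as needed.
googol = 10 ** 100

print(len(str(2 ** 1000)))          # decimal digits in 2**1000
print(googol % 7)                   # exact modular arithmetic on a 101-digit number
print((googol + 1) - googol == 1)   # no rounding, unlike large floats
```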
Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info. Complex numbers have a real and imaginary part, which are each a floating point number. To extract these parts from a complex number z, use z.real and z.imag. (The standard library includes the additional numeric types fractions.Fraction, for rationals, and decimal.Decimal, for floating-point numbers with user-definable precision.)

Numbers are created by numeric literals or as the result of built-in functions and operators. Unadorned integer literals (including hex, octal and binary numbers) yield integers. Numeric literals containing a decimal point or an exponent sign yield floating point numbers. Appending 'j' or 'J' to a numeric literal yields an imaginary number (a complex number with a zero real part) which you can add to an integer or float to get a complex number with real and imaginary parts.

Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the "narrower" type is widened to that of the other, where integer is narrower than floating point, which is narrower than complex. A comparison between numbers of different types behaves as though the exact values of those numbers were being compared.

The constructors int(), float(), and complex() can be used to produce numbers of a specific type.
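A quick illustration of mixed arithmetic, exact cross-type comparison, and the type constructors (an added example, not from the original page):

```python
from fractions import Fraction

# Mixed arithmetic widens the narrower operand: int -> float -> complex.
print(type(1 + 2.0))             # <class 'float'>
print(type(2.0 + 3j))            # <class 'complex'>

# Cross-type comparisons compare exact values:
print(1 == 1.0 == Fraction(1))   # True

# The constructors produce a number of a specific type:
print(int(3.9), float(2), complex(1, 2))   # 3 2.0 (1+2j)
```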
All numeric types (except complex) support the following operations (for priorities of the operations, see Operator precedence):

• x + y: sum of x and y
• x - y: difference of x and y
• x * y: product of x and y
• x / y: quotient of x and y
• x // y: floored quotient of x and y (1)
• x % y: remainder of x / y (2)
• -x: x negated
• +x: x unchanged
• abs(x): absolute value or magnitude of x
• int(x): x converted to integer (3)(6)
• float(x): x converted to floating point (4)(6)
• complex(re, im): a complex number with real part re, imaginary part im; im defaults to zero (6)
• c.conjugate(): conjugate of the complex number c
• divmod(x, y): the pair (x // y, x % y) (2)
• pow(x, y): x to the power y (5)
• x ** y: x to the power y (5)

1. Also referred to as integer division. The resultant value is a whole integer, though the result's type is not necessarily int. The result is always rounded towards minus infinity: 1//2 is 0, (-1)//2 is -1, 1//(-2) is -1, and (-1)//(-2) is 0.
2. Not for complex numbers. Instead convert to floats using abs() if appropriate.
3. Conversion from floating point to integer may round or truncate as in C; see functions math.floor() and math.ceil() for well-defined conversions.
4. float also accepts the strings "nan" and "inf" with an optional prefix "+" or "-" for Not a Number (NaN) and positive or negative infinity.
5. Python defines pow(0, 0) and 0 ** 0 to be 1, as is common for programming languages.
6. The numeric literals accepted include the digits 0 to 9 or any Unicode equivalent (code points with the Nd property). See http://www.unicode.org/Public/12.1.0/ucd/extracted/DerivedNumericType.txt for a complete list of code points with the Nd property.

All numbers.Real types (int and float) also include the following operations. For additional numeric operations see the math and cmath modules.

Bitwise Operations on Integer Types

Bitwise operations only make sense for integers.
The result of bitwise operations is calculated as though carried out in two's complement with an infinite number of sign bits. The priorities of the binary bitwise operations are all lower than the numeric operations and higher than the comparisons; the unary operation ~ has the same priority as the other unary numeric operations (+ and -).

This table lists the bitwise operations sorted in ascending priority:

• x | y: bitwise or of x and y (4)
• x ^ y: bitwise exclusive or of x and y (4)
• x & y: bitwise and of x and y (4)
• x << n: x shifted left by n bits (1)(2)
• x >> n: x shifted right by n bits (1)(3)
• ~x: the bits of x inverted

1. Negative shift counts are illegal and cause a ValueError to be raised.
2. A left shift by n bits is equivalent to multiplication by pow(2, n).
3. A right shift by n bits is equivalent to floor division by pow(2, n).
4. Performing these calculations with at least one extra sign extension bit in a finite two's complement representation (a working bit-width of 1 + max(x.bit_length(), y.bit_length()) or more) is sufficient to get the same result as if there were an infinite number of sign bits.

Additional Methods on Integer Types

The int type implements the numbers.Integral abstract base class. In addition, it provides a few more methods:

int.bit_length()

Return the number of bits necessary to represent an integer in binary, excluding the sign and leading zeros:

>>> n = -37
>>> bin(n)
'-0b100101'
>>> n.bit_length()
6

More precisely, if x is nonzero, then x.bit_length() is the unique positive integer k such that 2**(k-1) <= abs(x) < 2**k. Equivalently, when abs(x) is small enough to have a correctly rounded logarithm, then k = 1 + int(log(abs(x), 2)). If x is zero, then x.bit_length() returns 0.

Equivalent to:

def bit_length(self):
    s = bin(self)        # binary representation: bin(-37) --> '-0b100101'
    s = s.lstrip('-0b')  # remove leading zeros and minus sign
    return len(s)        # len('100101') --> 6

int.to_bytes(length, byteorder, *, signed=False)

Return an array of bytes representing an integer.
>>> (1024).to_bytes(2, byteorder='big')
b'\x04\x00'
>>> (1024).to_bytes(10, byteorder='big')
b'\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00'
>>> (-1024).to_bytes(10, byteorder='big', signed=True)
b'\xff\xff\xff\xff\xff\xff\xff\xff\xfc\x00'
>>> x = 1000
>>> x.to_bytes((x.bit_length() + 7) // 8, byteorder='little')
b'\xe8\x03'

The integer is represented using length bytes. An OverflowError is raised if the integer is not representable with the given number of bytes.

The byteorder argument determines the byte order used to represent the integer. If byteorder is "big", the most significant byte is at the beginning of the byte array. If byteorder is "little", the most significant byte is at the end of the byte array. To request the native byte order of the host system, use sys.byteorder as the byte order value.

The signed argument determines whether two's complement is used to represent the integer. If signed is False and a negative integer is given, an OverflowError is raised. The default value for signed is False.

classmethod int.from_bytes(bytes, byteorder, *, signed=False)

Return the integer represented by the given array of bytes.

>>> int.from_bytes(b'\x00\x10', byteorder='big')
16
>>> int.from_bytes(b'\x00\x10', byteorder='little')
4096
>>> int.from_bytes(b'\xfc\x00', byteorder='big', signed=True)
-1024
>>> int.from_bytes(b'\xfc\x00', byteorder='big', signed=False)
64512
>>> int.from_bytes([255, 0, 0], byteorder='big')
16711680

The argument bytes must either be a bytes-like object or an iterable producing bytes.

The byteorder argument determines the byte order used to represent the integer. If byteorder is "big", the most significant byte is at the beginning of the byte array. If byteorder is "little", the most significant byte is at the end of the byte array. To request the native byte order of the host system, use sys.byteorder as the byte order value.

The signed argument indicates whether two's complement is used to represent the integer.

Additional Methods on Float

The float type implements the numbers.Real abstract base class. float also has the following additional methods.
float.as_integer_ratio()

Return a pair of integers whose ratio is exactly equal to the original float and with a positive denominator. Raises OverflowError on infinities and a ValueError on NaNs.

float.is_integer()

Return True if the float instance is finite with integral value, and False otherwise:

>>> (-2.0).is_integer()
True
>>> (3.2).is_integer()
False

Two methods support conversion to and from hexadecimal strings. Since Python's floats are stored internally as binary numbers, converting a float to or from a decimal string usually involves a small rounding error. In contrast, hexadecimal strings allow exact representation and specification of floating-point numbers. This can be useful when debugging, and in numerical work.

float.hex()

Return a representation of a floating-point number as a hexadecimal string. For finite floating-point numbers, this representation will always include a leading 0x and a trailing p and exponent.

classmethod float.fromhex(s)

Class method to return the float represented by a hexadecimal string s. The string s may have leading and trailing whitespace.

Note that float.hex() is an instance method, while float.fromhex() is a class method.

A hexadecimal string takes the form:

[sign] ['0x'] integer ['.' fraction] ['p' exponent]

where the optional sign may be either + or -, integer and fraction are strings of hexadecimal digits, and exponent is a decimal integer with an optional leading sign. Case is not significant, and there must be at least one hexadecimal digit in either the integer or the fraction. This syntax is similar to the syntax specified in section 6.4.4.2 of the C99 standard, and also to the syntax used in Java 1.5 onwards. In particular, the output of float.hex() is usable as a hexadecimal floating-point literal in C or Java code, and hexadecimal strings produced by C's %a format character or Java's Double.toHexString are accepted by float.fromhex().

Note that the exponent is written in decimal rather than hexadecimal, and that it gives the power of 2 by which to multiply the coefficient.
For example, the hexadecimal string 0x3.a7p10 represents the floating-point number (3 + 10./16 + 7./16**2) * 2.0**10, or 3740.0:

>>> float.fromhex('0x3.a7p10')
3740.0

Applying the reverse conversion to 3740.0 gives a different hexadecimal string representing the same number:

>>> float.hex(3740.0)
'0x1.d380000000000p+11'

Hashing of numeric types

For numbers x and y, possibly of different types, it's a requirement that hash(x) == hash(y) whenever x == y (see the __hash__() method documentation for more details). For ease of implementation and efficiency across a variety of numeric types (including int, float, decimal.Decimal and fractions.Fraction) Python's hash for numeric types is based on a single mathematical function that's defined for any rational number, and hence applies to all instances of int and fractions.Fraction, and all finite instances of float and decimal.Decimal. Essentially, this function is given by reduction modulo P for a fixed prime P. The value of P is made available to Python as the modulus attribute of sys.hash_info.

CPython implementation detail: Currently, the prime used is P = 2**31 - 1 on machines with 32-bit C longs and P = 2**61 - 1 on machines with 64-bit C longs.

Here are the rules in detail:

• If x = m / n is a nonnegative rational number and n is not divisible by P, define hash(x) as m * invmod(n, P) % P, where invmod(n, P) gives the inverse of n modulo P.
• If x = m / n is a nonnegative rational number and n is divisible by P (but m is not) then n has no inverse modulo P and the rule above doesn't apply; in this case define hash(x) to be the constant value sys.hash_info.inf.
• If x = m / n is a negative rational number define hash(x) as -hash(-x). If the resulting hash is -1, replace it with -2.
• The particular values sys.hash_info.inf, -sys.hash_info.inf and sys.hash_info.nan are used as hash values for positive infinity, negative infinity, or nans (respectively). (All hashable nans have the same hash value.)
• For a complex number z, the hash values of the real and imaginary parts are combined by computing hash(z.real) + sys.hash_info.imag * hash(z.imag), reduced modulo 2**sys.hash_info.width so that it lies in range(-2**(sys.hash_info.width - 1), 2**(sys.hash_info.width - 1)). Again, if the result is -1, it's replaced with -2.

To clarify the above rules, here's some example Python code, equivalent to the built-in hash, for computing the hash of a rational number, float, or complex:

import sys, math

def hash_fraction(m, n):
    """Compute the hash of a rational number m / n.

    Assumes m and n are integers, with n positive.
    Equivalent to hash(fractions.Fraction(m, n)).
    """
    P = sys.hash_info.modulus
    # Remove common factors of P.  (Unnecessary if m and n already coprime.)
    while m % P == n % P == 0:
        m, n = m // P, n // P

    if n % P == 0:
        hash_value = sys.hash_info.inf
    else:
        # Fermat's Little Theorem: pow(n, P-1, P) is 1, so
        # pow(n, P-2, P) gives the inverse of n modulo P.
        hash_value = (abs(m) % P) * pow(n, P - 2, P) % P
    if m < 0:
        hash_value = -hash_value
    if hash_value == -1:
        hash_value = -2
    return hash_value

def hash_float(x):
    """Compute the hash of a float x."""
    if math.isnan(x):
        return sys.hash_info.nan
    elif math.isinf(x):
        return sys.hash_info.inf if x > 0 else -sys.hash_info.inf
    else:
        return hash_fraction(*x.as_integer_ratio())

def hash_complex(z):
    """Compute the hash of a complex number z."""
    hash_value = hash_float(z.real) + sys.hash_info.imag * hash_float(z.imag)
    # do a signed reduction modulo 2**sys.hash_info.width
    M = 2**(sys.hash_info.width - 1)
    hash_value = (hash_value & (M - 1)) - (hash_value & M)
    if hash_value == -1:
        hash_value = -2
    return hash_value

Iterator Types

Python supports a concept of iteration over containers. This is implemented using two distinct methods; these are used to allow user-defined classes to support iteration. Sequences, described below in more detail, always support the iteration methods.
One method needs to be defined for container objects to provide iteration support: Return an iterator object. The object is required to support the iterator protocol described below. If a container supports different types of iteration, additional methods can be provided to specifically request iterators for those iteration types. (An example of an object supporting multiple forms of iteration would be a tree structure which supports both breadth-first and depth-first traversal.) This method corresponds to the tp_iter slot of the type structure for Python objects in the Python/C API. The iterator objects themselves are required to support the following two methods, which together form the iterator protocol: Return the iterator object itself. This is required to allow both containers and iterators to be used with the for and in statements. This method corresponds to the tp_iter slot of the type structure for Python objects in the Python/C API. Return the next item from the container. If there are no further items, raise the StopIteration exception. This method corresponds to the tp_iternext slot of the type structure for Python objects in the Python/C API. Python defines several iterator objects to support iteration over general and specific sequence types, dictionaries, and other more specialized forms. The specific types are not important beyond their implementation of the iterator protocol. Once an iterator’s __next__() method raises StopIteration, it must continue to do so on subsequent calls. Implementations that do not obey this property are deemed broken. Generator Types Python’s generators provide a convenient way to implement the iterator protocol. If a container object’s __iter__() method is implemented as a generator, it will automatically return an iterator object (technically, a generator object) supplying the __iter__() and __next__() methods. More information about generators can be found in the documentation for the yield expression. 
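The protocol described above can be shown with a minimal custom container and iterator (an added example, not from the original page):

```python
class Countdown:
    """Container: its __iter__ returns a fresh iterator object each time."""
    def __init__(self, start):
        self.start = start
    def __iter__(self):
        return CountdownIterator(self.start)

class CountdownIterator:
    """Implements the iterator protocol: __iter__ and __next__."""
    def __init__(self, current):
        self.current = current
    def __iter__(self):
        return self              # iterators return themselves
    def __next__(self):
        if self.current <= 0:
            raise StopIteration  # and must keep raising it on later calls
        self.current -= 1
        return self.current + 1

print(list(Countdown(3)))        # [3, 2, 1]
```

Writing Countdown.__iter__ as a generator (yielding values in a loop) would replace the explicit iterator class entirely, which is the convenience the paragraph above describes.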
Sequence Types — list, tuple, range

There are three basic sequence types: lists, tuples, and range objects. Additional sequence types tailored for processing of binary data and text strings are described in dedicated sections.

Common Sequence Operations

The operations in the following table are supported by most sequence types, both mutable and immutable. The collections.abc.Sequence ABC is provided to make it easier to correctly implement these operations on custom sequence types.

This table lists the sequence operations sorted in ascending priority. In the table, s and t are sequences of the same type, n, i, j and k are integers and x is an arbitrary object that meets any type and value restrictions imposed by s.

The in and not in operations have the same priorities as the comparison operations. The + (concatenation) and * (repetition) operations have the same priority as the corresponding numeric operations.

• x in s: True if an item of s is equal to x, else False (1)
• x not in s: False if an item of s is equal to x, else True (1)
• s + t: the concatenation of s and t (6)(7)
• s * n or n * s: equivalent to adding s to itself n times (2)(7)
• s[i]: ith item of s, origin 0 (3)
• s[i:j]: slice of s from i to j (3)(4)
• s[i:j:k]: slice of s from i to j with step k (3)(5)
• len(s): length of s
• min(s): smallest item of s
• max(s): largest item of s
• s.index(x[, i[, j]]): index of the first occurrence of x in s (at or after index i and before index j) (8)
• s.count(x): total number of occurrences of x in s

Sequences of the same type also support comparisons. In particular, tuples and lists are compared lexicographically by comparing corresponding elements. This means that to compare equal, every element must compare equal and the two sequences must be of the same type and have the same length. (For full details see Comparisons in the language reference.)

1.
While the in and not in operations are used only for simple containment testing in the general case, some specialised sequences (such as str, bytes and bytearray) also use them for subsequence testing.

2. Values of n less than 0 are treated as 0 (which yields an empty sequence of the same type as s). Note that items in the sequence s are not copied; they are referenced multiple times. This often haunts new Python programmers; consider:

>>> lists = [[]] * 3
>>> lists
[[], [], []]
>>> lists[0].append(3)
>>> lists
[[3], [3], [3]]

What has happened is that [[]] is a one-element list containing an empty list, so all three elements of [[]] * 3 are references to this single empty list. Modifying any of the elements of lists modifies this single list. You can create a list of different lists this way:

>>> lists = [[] for i in range(3)]
>>> lists[0].append(3)
>>> lists[1].append(5)
>>> lists[2].append(7)
>>> lists
[[3], [5], [7]]

Further explanation is available in the FAQ entry How do I create a multidimensional list?.

3. If i or j is negative, the index is relative to the end of sequence s: len(s) + i or len(s) + j is substituted. But note that -0 is still 0.

4. The slice of s from i to j is defined as the sequence of items with index k such that i <= k < j. If i or j is greater than len(s), use len(s). If i is omitted or None, use 0. If j is omitted or None, use len(s). If i is greater than or equal to j, the slice is empty.

5. The slice of s from i to j with step k is defined as the sequence of items with index x = i + n*k such that 0 <= n < (j-i)/k. In other words, the indices are i, i+k, i+2*k, i+3*k and so on, stopping when j is reached (but never including j). When k is positive, i and j are reduced to len(s) if they are greater. When k is negative, i and j are reduced to len(s) - 1 if they are greater. If i or j are omitted or None, they become "end" values (which end depends on the sign of k). Note, k cannot be zero. If k is None, it is treated like 1.

6.
Concatenating immutable sequences always results in a new object. This means that building up a sequence by repeated concatenation will have a quadratic runtime cost in the total sequence length. To get a linear runtime cost, you must switch to one of the alternatives below: □ if concatenating str objects, you can build a list and use str.join() at the end or else write to an io.StringIO instance and retrieve its value when complete □ if concatenating bytes objects, you can similarly use bytes.join() or io.BytesIO, or you can do in-place concatenation with a bytearray object. bytearray objects are mutable and have an efficient overallocation mechanism □ if concatenating tuple objects, extend a list instead □ for other types, investigate the relevant class documentation 7. Some sequence types (such as range) only support item sequences that follow specific patterns, and hence don’t support sequence concatenation or repetition. 8. index raises ValueError when x is not found in s. Not all implementations support passing the additional arguments i and j. These arguments allow efficient searching of subsections of the sequence. Passing the extra arguments is roughly equivalent to using s[i:j].index(x), only without copying any data and with the returned index being relative to the start of the sequence rather than the start of the slice. Immutable Sequence Types The only operation that immutable sequence types generally implement that is not also implemented by mutable sequence types is support for the hash() built-in. This support allows immutable sequences, such as tuple instances, to be used as dict keys and stored in set and frozenset instances. Attempting to hash an immutable sequence that contains unhashable values will result in TypeError. Mutable Sequence Types The operations in the following table are defined on mutable sequence types. 
The collections.abc.MutableSequence ABC is provided to make it easier to correctly implement these operations on custom sequence types.

In the table s is an instance of a mutable sequence type, t is any iterable object and x is an arbitrary object that meets any type and value restrictions imposed by s (for example, bytearray only accepts integers that meet the value restriction 0 <= x <= 255).

Operation                Result (Notes)
s[i] = x                 item i of s is replaced by x
s[i:j] = t               slice of s from i to j is replaced by the contents of the iterable t
del s[i:j]               same as s[i:j] = []
s[i:j:k] = t             the elements of s[i:j:k] are replaced by those of t (1)
del s[i:j:k]             removes the elements of s[i:j:k] from the list
s.append(x)              appends x to the end of the sequence (same as s[len(s):len(s)] = [x])
s.clear()                removes all items from s (same as del s[:]) (5)
s.copy()                 creates a shallow copy of s (same as s[:]) (5)
s.extend(t) or s += t    extends s with the contents of t (for the most part the same as s[len(s):len(s)] = t)
s *= n                   updates s with its contents repeated n times (6)
s.insert(i, x)           inserts x into s at the index given by i (same as s[i:i] = [x])
s.pop([i])               retrieves the item at i and also removes it from s (2)
s.remove(x)              removes the first item from s where s[i] is equal to x (3)
s.reverse()              reverses the items of s in place (4)

1. t must have the same length as the slice it is replacing.

2. The optional argument i defaults to -1, so that by default the last item is removed and returned.

3. remove() raises ValueError when x is not found in s.

4. The reverse() method modifies the sequence in place for economy of space when reversing a large sequence. To remind users that it operates by side effect, it does not return the reversed sequence.

5. clear() and copy() are included for consistency with the interfaces of mutable containers that don’t support slicing operations (such as dict and set).
copy() is not part of the collections.abc.MutableSequence ABC, but most concrete mutable sequence classes provide it.

New in version 3.3: clear() and copy() methods.

6. The value n is an integer, or an object implementing __index__(). Zero and negative values of n clear the sequence. Items in the sequence are not copied; they are referenced multiple times, as explained for s * n under Common Sequence Operations.

Lists are mutable sequences, typically used to store collections of homogeneous items (where the precise degree of similarity will vary by application). Lists may be constructed in several ways:

□ Using a pair of square brackets to denote the empty list: []
□ Using square brackets, separating items with commas: [a], [a, b, c]
□ Using a list comprehension: [x for x in iterable]
□ Using the type constructor: list() or list(iterable)

The constructor builds a list whose items are the same and in the same order as iterable’s items. iterable may be either a sequence, a container that supports iteration, or an iterator object. If iterable is already a list, a copy is made and returned, similar to iterable[:]. For example, list('abc') returns ['a', 'b', 'c'] and list( (1, 2, 3) ) returns [1, 2, 3]. If no argument is given, the constructor creates a new empty list, []. Many other operations also produce lists, including the sorted() built-in.

Lists implement all of the common and mutable sequence operations. Lists also provide the additional method sort(*, key=None, reverse=False). This method sorts the list in place, using only < comparisons between items. Exceptions are not suppressed - if any comparison operations fail, the entire sort operation will fail (and the list will likely be left in a partially modified state).

sort() accepts two arguments that can only be passed by keyword (keyword-only arguments): key specifies a function of one argument that is used to extract a comparison key from each list element (for example, key=str.lower).
The key corresponding to each item in the list is calculated once and then used for the entire sorting process. The default value of None means that list items are sorted directly without calculating a separate key value. The functools.cmp_to_key() utility is available to convert a 2.x style cmp function to a key function.

reverse is a boolean value. If set to True, then the list elements are sorted as if each comparison were reversed.

This method modifies the sequence in place for economy of space when sorting a large sequence. To remind users that it operates by side effect, it does not return the sorted sequence (use sorted() to explicitly request a new sorted list instance).

The sort() method is guaranteed to be stable. A sort is stable if it guarantees not to change the relative order of elements that compare equal — this is helpful for sorting in multiple passes (for example, sort by department, then by salary grade). For sorting examples and a brief sorting tutorial, see Sorting HOW TO.

CPython implementation detail: While a list is being sorted, the effect of attempting to mutate, or even inspect, the list is undefined. The C implementation of Python makes the list appear empty for the duration, and raises ValueError if it can detect that the list has been mutated during a sort.

Tuples are immutable sequences, typically used to store collections of heterogeneous data (such as the 2-tuples produced by the enumerate() built-in). Tuples are also used for cases where an immutable sequence of homogeneous data is needed (such as allowing storage in a set or dict instance). Tuples may be constructed in a number of ways:

□ Using a pair of parentheses to denote the empty tuple: ()
□ Using a trailing comma for a singleton tuple: a, or (a,)
□ Separating items with commas: a, b, c or (a, b, c)
□ Using the tuple() built-in: tuple() or tuple(iterable)

The constructor builds a tuple whose items are the same and in the same order as iterable’s items.
iterable may be either a sequence, a container that supports iteration, or an iterator object. If iterable is already a tuple, it is returned unchanged. For example, tuple('abc') returns ('a', 'b', 'c') and tuple( [1, 2, 3] ) returns (1, 2, 3). If no argument is given, the constructor creates a new empty tuple, ().

Note that it is actually the comma which makes a tuple, not the parentheses. The parentheses are optional, except in the empty tuple case, or when they are needed to avoid syntactic ambiguity. For example, f(a, b, c) is a function call with three arguments, while f((a, b, c)) is a function call with a 3-tuple as the sole argument.

Tuples implement all of the common sequence operations. For heterogeneous collections of data where access by name is clearer than access by index, collections.namedtuple() may be a more appropriate choice than a simple tuple object.

The range type represents an immutable sequence of numbers and is commonly used for looping a specific number of times in for loops.

class range(start, stop[, step])

The arguments to the range constructor must be integers (either built-in int or any object that implements the __index__ special method). If the step argument is omitted, it defaults to 1. If the start argument is omitted, it defaults to 0. If step is zero, ValueError is raised.

For a positive step, the contents of a range r are determined by the formula r[i] = start + step*i where i >= 0 and r[i] < stop. For a negative step, the contents of the range are still determined by the formula r[i] = start + step*i, but the constraints are i >= 0 and r[i] > stop. A range object will be empty if r[0] does not meet the value constraint.

Ranges do support negative indices, but these are interpreted as indexing from the end of the sequence determined by the positive indices. Ranges containing absolute values larger than sys.maxsize are permitted but some features (such as len()) may raise OverflowError.
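The r[i] = start + step*i description above can be checked directly; the helper below is my own illustrative sketch, not part of the documentation:

```python
# Sketch: rebuild the contents of range(start, stop, step) from the
# formula r[i] = start + step*i, with i >= 0 and r[i] < stop for a
# positive step (or r[i] > stop for a negative step).
def range_by_formula(start, stop, step):
    result, i = [], 0
    while True:
        value = start + step * i
        if (step > 0 and value >= stop) or (step < 0 and value <= stop):
            break
        result.append(value)
        i += 1
    return result

assert range_by_formula(0, 10, 3) == list(range(0, 10, 3))
assert range_by_formula(0, -10, -1) == list(range(0, -10, -1))
```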
Range examples:

>>> list(range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> list(range(1, 11))
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> list(range(0, 30, 5))
[0, 5, 10, 15, 20, 25]
>>> list(range(0, 10, 3))
[0, 3, 6, 9]
>>> list(range(0, -10, -1))
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
>>> list(range(0))
[]
>>> list(range(1, 0))
[]

Ranges implement all of the common sequence operations except concatenation and repetition (due to the fact that range objects can only represent sequences that follow a strict pattern and repetition and concatenation will usually violate that pattern).

The advantage of the range type over a regular list or tuple is that a range object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the start, stop and step values, calculating individual items and subranges as needed).

Range objects implement the collections.abc.Sequence ABC, and provide features such as containment tests, element index lookup, slicing and support for negative indices (see Sequence Types — list, tuple, range):

>>> r = range(0, 20, 2)
>>> r
range(0, 20, 2)
>>> 11 in r
False
>>> 10 in r
True
>>> r.index(10)
5
>>> r[5]
10
>>> r[:5]
range(0, 10, 2)
>>> r[-1]
18

Testing range objects for equality with == and != compares them as sequences. That is, two range objects are considered equal if they represent the same sequence of values. (Note that two range objects that compare equal might have different start, stop and step attributes, for example range(0) == range(2, 1, 3) or range(0, 3, 2) == range(0, 4, 2).)

Changed in version 3.2: Implement the Sequence ABC. Support slicing and negative indices. Test int objects for membership in constant time instead of iterating through all items.

Changed in version 3.3: Define ‘==’ and ‘!=’ to compare range objects based on the sequence of values they define (instead of comparing based on object identity).
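The equality semantics and the constant memory footprint described above can both be observed directly; this is a small sketch of my own, not taken from the documentation (the size check reflects CPython behaviour):

```python
import sys

# Two ranges compare equal when they yield the same sequence of values,
# even though their start/stop/step attributes may differ.
assert range(0) == range(2, 1, 3)          # both empty
assert range(0, 3, 2) == range(0, 4, 2)    # both yield 0, 2
assert range(0, 3, 2).stop != range(0, 4, 2).stop

# On CPython, a range object only stores start, stop and step, so its
# size does not grow with the length of the sequence it represents.
small, big = range(10), range(10**9)
assert sys.getsizeof(small) == sys.getsizeof(big)
```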
See also

• The linspace recipe shows how to implement a lazy version of range suitable for floating point applications.

Text Sequence Type — str

Textual data in Python is handled with str objects, or strings. Strings are immutable sequences of Unicode code points. String literals are written in a variety of ways:

• Single quotes: 'allows embedded "double" quotes'
• Double quotes: "allows embedded 'single' quotes"
• Triple quoted: '''Three single quotes''', """Three double quotes"""

Triple quoted strings may span multiple lines - all associated whitespace will be included in the string literal.

String literals that are part of a single expression and have only whitespace between them will be implicitly converted to a single string literal. That is, ("spam " "eggs") == "spam eggs".

See String and Bytes literals for more about the various forms of string literal, including supported escape sequences, and the r (“raw”) prefix that disables most escape sequence processing.

Strings may also be created from other objects using the str constructor.

Since there is no separate “character” type, indexing a string produces strings of length 1. That is, for a non-empty string s, s[0] == s[0:1]. There is also no mutable string type, but str.join() or io.StringIO can be used to efficiently construct strings from multiple fragments.

Changed in version 3.3: For backwards compatibility with the Python 2 series, the u prefix is once again permitted on string literals. It has no effect on the meaning of string literals and cannot be combined with the r prefix.

class str(object=b'', encoding='utf-8', errors='strict')

Return a string version of object. If object is not provided, returns the empty string. Otherwise, the behavior of str() depends on whether encoding or errors is given, as follows.

If neither encoding nor errors is given, str(object) returns object.__str__(), which is the “informal” or nicely printable string representation of object.
For string objects, this is the string itself. If object does not have a __str__() method, then str() falls back to returning repr(object).

If at least one of encoding or errors is given, object should be a bytes-like object (e.g. bytes or bytearray). In this case, if object is a bytes (or bytearray) object, then str(bytes, encoding, errors) is equivalent to bytes.decode(encoding, errors). Otherwise, the bytes object underlying the buffer object is obtained before calling bytes.decode(). See Binary Sequence Types — bytes, bytearray, memoryview and Buffer Protocol for information on buffer objects.

Passing a bytes object to str() without the encoding or errors arguments falls under the first case of returning the informal string representation (see also the -b command-line option to Python). For example:

>>> str(b'Zoot!')
"b'Zoot!'"

For more information on the str class and its methods, see Text Sequence Type — str and the String Methods section below. To output formatted strings, see the Formatted string literals and Format String Syntax sections. In addition, see the Text Processing Services section.

String Methods

Strings implement all of the common sequence operations, along with the additional methods described below. Strings also support two styles of string formatting, one providing a large degree of flexibility and customization (see str.format(), Format String Syntax and Custom String Formatting) and the other based on C printf style formatting that handles a narrower range of types and is slightly harder to use correctly, but is often faster for the cases it can handle (printf-style String Formatting). The Text Processing Services section of the standard library covers a number of other modules that provide various text related utilities (including regular expression support in the re module).

str.capitalize()
Return a copy of the string with its first character capitalized and the rest lowercased.
Changed in version 3.8: The first character is now put into titlecase rather than uppercase. This means that characters like digraphs will only have their first letter capitalized, instead of the full character.

str.casefold()
Return a casefolded copy of the string. Casefolded strings may be used for caseless matching. Casefolding is similar to lowercasing but more aggressive because it is intended to remove all case distinctions in a string. For example, the German lowercase letter 'ß' is equivalent to "ss". Since it is already lowercase, lower() would do nothing to 'ß'; casefold() converts it to "ss". The casefolding algorithm is described in section 3.13 of the Unicode Standard.

str.center(width[, fillchar])
Return the string centered in a string of length width. Padding is done using the specified fillchar (default is an ASCII space). The original string is returned if width is less than or equal to len(s).

str.count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in the range [start, end]. Optional arguments start and end are interpreted as in slice notation.

str.encode(encoding="utf-8", errors="strict")
Return an encoded version of the string as a bytes object. Default encoding is 'utf-8'. errors may be given to set a different error handling scheme. The default for errors is 'strict', meaning that encoding errors raise a UnicodeError. Other possible values are 'ignore', 'replace', 'xmlcharrefreplace', 'backslashreplace' and any other name registered via codecs.register_error(), see section Error Handlers. For a list of possible encodings, see section Standard Encodings.

Changed in version 3.1: Support for keyword arguments added.

str.endswith(suffix[, start[, end]])
Return True if the string ends with the specified suffix, otherwise return False. suffix can also be a tuple of suffixes to look for. With optional start, test beginning at that position. With optional end, stop comparing at that position.

str.expandtabs(tabsize=8)
Return a copy of the string where all tab characters are replaced by one or more spaces, depending on the current column and the given tab size.
Tab positions occur every tabsize characters (default is 8, giving tab positions at columns 0, 8, 16 and so on). To expand the string, the current column is set to zero and the string is examined character by character. If the character is a tab (\t), one or more space characters are inserted in the result until the current column is equal to the next tab position. (The tab character itself is not copied.) If the character is a newline (\n) or return (\r), it is copied and the current column is reset to zero. Any other character is copied unchanged and the current column is incremented by one regardless of how the character is represented when printed.

>>> '01\t012\t0123\t01234'.expandtabs()
'01      012     0123    01234'
>>> '01\t012\t0123\t01234'.expandtabs(4)
'01  012 0123    01234'

str.find(sub[, start[, end]])
Return the lowest index in the string where substring sub is found within the slice s[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 if sub is not found.

The find() method should be used only if you need to know the position of sub. To check if sub is a substring or not, use the in operator:

>>> 'Py' in 'Python'
True

str.format(*args, **kwargs)
Perform a string formatting operation. The string on which this method is called can contain literal text or replacement fields delimited by braces {}. Each replacement field contains either the numeric index of a positional argument, or the name of a keyword argument. Returns a copy of the string where each replacement field is replaced with the string value of the corresponding argument.

>>> "The sum of 1 + 2 is {0}".format(1+2)
'The sum of 1 + 2 is 3'

See Format String Syntax for a description of the various formatting options that can be specified in format strings.
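Positional indices and keyword names can be mixed in the same template; the sample strings below are my own, not from the documentation:

```python
# Replacement fields may reference positional arguments by index and
# keyword arguments by name in the same template string.
template = "{0} scored {points} in {1}"
s = template.format("Ada", "round 2", points=97)
assert s == "Ada scored 97 in round 2"

# Empty braces use automatic numbering, as long as no explicit index
# appears anywhere in the template.
assert "{}-{}".format("a", "b") == "a-b"
```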
When formatting a number (int, float, complex, decimal.Decimal and subclasses) with the n type (ex: '{:n}'.format(1234)), the function temporarily sets the LC_CTYPE locale to the LC_NUMERIC locale to decode decimal_point and thousands_sep fields of localeconv() if they are non-ASCII or longer than 1 byte, and the LC_NUMERIC locale is different than the LC_CTYPE locale. This temporary change affects other threads.

Changed in version 3.7: When formatting a number with the n type, the function sets temporarily the LC_CTYPE locale to the LC_NUMERIC locale in some cases.

str.format_map(mapping)
Similar to str.format(**mapping), except that mapping is used directly and not copied to a dict. This is useful if for example mapping is a dict subclass:

>>> class Default(dict):
...     def __missing__(self, key):
...         return key
...
>>> '{name} was born in {country}'.format_map(Default(name='Guido'))
'Guido was born in country'

str.index(sub[, start[, end]])
Like find(), but raise ValueError when the substring is not found.

str.isalnum()
Return True if all characters in the string are alphanumeric and there is at least one character, False otherwise. A character c is alphanumeric if one of the following returns True: c.isalpha(), c.isdecimal(), c.isdigit(), or c.isnumeric().

str.isalpha()
Return True if all characters in the string are alphabetic and there is at least one character, False otherwise. Alphabetic characters are those characters defined in the Unicode character database as “Letter”, i.e., those with general category property being one of “Lm”, “Lt”, “Lu”, “Ll”, or “Lo”. Note that this is different from the “Alphabetic” property defined in the Unicode Standard.

str.isascii()
Return True if the string is empty or all characters in the string are ASCII, False otherwise. ASCII characters have code points in the range U+0000-U+007F.

str.isdecimal()
Return True if all characters in the string are decimal characters and there is at least one character, False otherwise. Decimal characters are those that can be used to form numbers in base 10, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.
Formally a decimal character is a character in the Unicode General Category “Nd”.

str.isdigit()
Return True if all characters in the string are digits and there is at least one character, False otherwise. Digits include decimal characters and digits that need special handling, such as the compatibility superscript digits. This covers digits which cannot be used to form numbers in base 10, like the Kharosthi numbers. Formally, a digit is a character that has the property value Numeric_Type=Digit or Numeric_Type=Decimal.

str.isidentifier()
Return True if the string is a valid identifier according to the language definition, section Identifiers and keywords.

Call keyword.iskeyword() to test whether string s is a reserved identifier, such as def and class.

>>> from keyword import iskeyword
>>> 'hello'.isidentifier(), iskeyword('hello')
(True, False)
>>> 'def'.isidentifier(), iskeyword('def')
(True, True)

str.islower()
Return True if all cased characters 4 in the string are lowercase and there is at least one cased character, False otherwise.

str.isnumeric()
Return True if all characters in the string are numeric characters, and there is at least one character, False otherwise. Numeric characters include digit characters, and all characters that have the Unicode numeric value property, e.g. U+2155, VULGAR FRACTION ONE FIFTH. Formally, numeric characters are those with the property value Numeric_Type=Digit, Numeric_Type=Decimal or Numeric_Type=Numeric.

str.isprintable()
Return True if all characters in the string are printable or the string is empty, False otherwise. Nonprintable characters are those characters defined in the Unicode character database as “Other” or “Separator”, excepting the ASCII space (0x20) which is considered printable. (Note that printable characters in this context are those which should not be escaped when repr() is invoked on a string. It has no bearing on the handling of strings written to sys.stdout or sys.stderr.)

str.isspace()
Return True if there are only whitespace characters in the string and there is at least one character, False otherwise.
A character is whitespace if in the Unicode character database (see unicodedata), either its general category is Zs (“Separator, space”), or its bidirectional class is one of WS, B, or S.

str.istitle()
Return True if the string is a titlecased string and there is at least one character, for example uppercase characters may only follow uncased characters and lowercase characters only cased ones. Return False otherwise.

str.isupper()
Return True if all cased characters 4 in the string are uppercase and there is at least one cased character, False otherwise.

str.join(iterable)
Return a string which is the concatenation of the strings in iterable. A TypeError will be raised if there are any non-string values in iterable, including bytes objects. The separator between elements is the string providing this method.

str.ljust(width[, fillchar])
Return the string left justified in a string of length width. Padding is done using the specified fillchar (default is an ASCII space). The original string is returned if width is less than or equal to len(s).

str.lower()
Return a copy of the string with all the cased characters 4 converted to lowercase. The lowercasing algorithm used is described in section 3.13 of the Unicode Standard.

str.lstrip([chars])
Return a copy of the string with leading characters removed. The chars argument is a string specifying the set of characters to be removed. If omitted or None, the chars argument defaults to removing whitespace. The chars argument is not a prefix; rather, all combinations of its values are stripped:

>>> '   spacious   '.lstrip()
'spacious   '
>>> 'www.example.com'.lstrip('cmowz.')
'example.com'

static str.maketrans(x[, y[, z]])
This static method returns a translation table usable for str.translate(). If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters (strings of length 1) to Unicode ordinals, strings (of arbitrary lengths) or None. Character keys will then be converted to ordinals.
If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.

str.partition(sep)
Split the string at the first occurrence of sep, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing the string itself, followed by two empty strings.

str.replace(old, new[, count])
Return a copy of the string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced.

str.rfind(sub[, start[, end]])
Return the highest index in the string where substring sub is found, such that sub is contained within s[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure.

str.rindex(sub[, start[, end]])
Like rfind() but raises ValueError when the substring sub is not found.
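A short sketch exercising several of the methods described above (the sample strings are my own, not from the documentation):

```python
# join() concatenates with the receiver string as separator:
assert "-".join(["2024", "11", "13"]) == "2024-11-13"

# partition() splits on the first occurrence of the separator only:
assert "key=value=extra".partition("=") == ("key", "=", "value=extra")

# replace() with a count limits the number of substitutions:
assert "aaaa".replace("a", "b", 2) == "bbaa"

# rfind() searches from the right, find() from the left:
assert "abcabc".rfind("b") == 4
assert "abcabc".find("b") == 1

# maketrans()/translate() apply a character mapping in one pass;
# the third argument lists characters to delete:
table = str.maketrans("abc", "xyz", "!")
assert "a b c!".translate(table) == "x y z"
```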
Relative Velocity Calculator | Online Calculators

Relative Velocity Calculator

The Relative Velocity Calculator helps determine the relative speed between two objects when the velocities of each relative to a third object are known. Enter any two values and the calculator provides the missing relative velocity, which is useful in physics and motion calculations.

How to Use

1. Enter any two values from the following:
□ Vab: the relative velocity of object A with respect to object B.
□ Vbc: the relative velocity of object B with respect to object C.
□ Vac: the relative velocity of object A with respect to object C.
2. Calculate: click the “Calculate” button to solve for the missing value.
3. Reset: click “Reset” to clear all fields and start a new calculation.

Example Calculations

Example 1:
Vbc = 15 m/s
Vac = 25 m/s
Calculation: Vab = Vac - Vbc
Result: Vab = 10 m/s

In this example, with Vbc at 15 m/s and Vac at 25 m/s, Vab (velocity of A relative to B) calculates to 10 m/s.

Example 2:
Vab = 8 m/s
Vbc = 12 m/s
Calculation: Vac = Vab + Vbc
Result: Vac = 20 m/s

Here, with Vab at 8 m/s and Vbc at 12 m/s, Vac (velocity of A relative to C) results in 20 m/s.
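The calculator's rule can be written as one small function; this is my own sketch (the function name is mine), using the page's Vab, Vbc, Vac naming, and the fact that velocities compose along a chain of frames as Vac = Vab + Vbc:

```python
def relative_velocity(vab=None, vbc=None, vac=None):
    """Given any two of Vab, Vbc, Vac (in m/s), return the missing one.

    The three quantities satisfy Vac = Vab + Vbc, so the missing
    value is found by addition or subtraction.
    """
    if vac is None:
        return vab + vbc   # Vac = Vab + Vbc
    if vab is None:
        return vac - vbc   # Vab = Vac - Vbc
    return vac - vab       # Vbc = Vac - Vab

print(relative_velocity(vbc=15, vac=25))  # Example 1: Vab = 10 m/s
print(relative_velocity(vab=8, vbc=12))   # Example 2: Vac = 20 m/s
```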
Multiplicative theory of ideals

RING THEORY

ERNST-AUGUST BEHRENS
Department of Mathematics, McMaster University, Hamilton, Ontario, Canada

TRANSLATED BY CLIVE REIS
Department of Mathematics, University of Western Ontario, London, Ontario, Canada

1972
ACADEMIC PRESS, New York and London

COPYRIGHT © 1971, BY ACADEMIC PRESS, INC.
ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, RETRIEVAL SYSTEM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.

ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003

United Kingdom Edition published by
ACADEMIC PRESS, INC. (LONDON) LTD.
Berkeley Square House, London W1X 6BA

LIBRARY OF CONGRESS CATALOG CARD NUMBER: 72-137621
AMS (MOS) 1970 Subject Classification 13F05; 13A05, 13B20, 13C15, 13E05, 13F20

PRINTED IN THE UNITED STATES OF AMERICA

To Lillie and Jean

Preface

The viability of the theory of commutative rings is evident from the many papers on the subject which are published each month. This is not surprising, considering the many problems in algebra and geometry, and indeed in almost every branch of mathematics, which lead naturally to the study of various aspects of commutative rings. In this book we have tried to provide the reader with an introduction to the basic ideas, results, and techniques of one part of the theory of commutative rings, namely, multiplicative ideal theory.

The text may be divided roughly into three parts. In the first part, the basic notions and technical tools are introduced and developed. In the second part, the two great classes of rings, the Prufer domains and the Krull rings, are studied in some detail. In the final part, a number of generalizations are considered. In the appendix a brief introduction is given to the tertiary decomposition of ideals of noncommutative rings.

The lengthy bibliography begins with a list of books, some on commutative rings and others on related subjects.
Then follows a list of papers, all more or less concerned with the subject matter of the text.

This book has been written for those who have completed a course in abstract algebra at the graduate level. Preceding the text there is a discussion of some of the prerequisites which we consider necessary. At the end of each chapter are a number of exercises. They are of three types. Some require the completion of certain technical details - they might possibly be regarded as busy work. Others contain examples - some of these are messy, but it will be beneficial for the reader to have some experience with examples. Finally, there are exercises which enlarge upon some topic of the text or which contain generalizations of results in the text - the bulk of the exercises are of this type. A number of exercises are referred to in proofs, and those proofs cannot be considered to be complete until the relevant exercises have been done.

We wish to thank those of our colleagues and students who have commented on our efforts over the years. Special thanks goes to Thomas Shores for his careful reading of the entire manuscript, and to our wives for their patience.

Prerequisites

A graduate level course in abstract algebra will provide most of the background knowledge necessary to read this book. In several places we have used a little more field theory than might be given in such a course. The necessary field theory may be found in the first two chapters of "Algebraic Extensions of Fields" by McCarthy, which is listed in the bibliography.

One thing that is certainly required is familiarity with Zorn's lemma. Let S be a set. A partial ordering on S is a relation ≤ on S such that

(i) s ≤ s for all s ∈ S;
(ii) if s ≤ t and t ≤ s, then s = t; and
(iii) if s ≤ t and t ≤ u, then s ≤ u.

The set S, together with a partial ordering on S, is called a partially ordered set. Let S be a partially ordered set.
A subset T of S is called totally ordered if for all elements s, t ∈ T either s ≤ t or t ≤ s. Let S′ be a subset of S. An element s ∈ S is called an upper bound of S′ if s′ ≤ s for all s′ ∈ S′. An element s ∈ S is called a maximal element of S if for an element t ∈ S, s ≤ t implies that t = s. Note that S may have more than one maximal element.

Zorn's Lemma. Let S be a nonempty partially ordered set. If every totally ordered subset of S has an upper bound in S, then S has a maximal element.

If A and B are subsets of some set, then A ⊆ B means that A is a subset of B, and A ⊂ B means that A ⊆ B but A ≠ B. If S is a set of subsets of some set, then S is a partially ordered set with ⊆ as the partial ordering. Whenever we refer to a set of subsets as a partially ordered set we mean with this partial ordering.

Let S and T be sets and consider a mapping f : S → T. The mapping can be described explicitly in terms of elements by writing s ↦ f(s). If A is a subset of S, we write f(A) = {f(s) | s ∈ A}, and if B is a subset of T, we write f⁻¹(B) = {s | s ∈ S and f(s) ∈ B}. Thus, f provides us with two mappings, one from the set of subsets of S into the set of subsets of T and another in the opposite direction. We assume that the reader can manipulate with these mappings. If S, T, and U are sets and f : S → T and g : T → U are mappings, their composition gf : S → U is defined by (gf)(s) = g(f(s)) for all s ∈ S.

On several occasions we shall use the Kronecker delta δ_ij, which is defined by δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j.

CHAPTER 1: Modules

1. RINGS AND MODULES

We begin by recalling the definition of ring. A ring R is a nonempty set, which we also denote by R, together with two binary operations (a, b) ↦ a + b and (a, b) ↦ ab (addition and multiplication, respectively), subject to the following conditions:

(i) the set R, together with addition, is an Abelian group;
(ii) a(bc) = (ab)c for all a, b, c ∈ R;
(iii) a(b + c) = ab + ac and (b + c)a = ba + ca for all a, b, c ∈ R.
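The ring axioms can be checked numerically for a small finite example; the sketch below is my own illustration (not from the text), using the commutative ring Z/5Z with addition and multiplication taken mod 5:

```python
# Check conditions (ii) and (iii), plus the unity law, for Z/5Z.
n = 5
R = range(n)

def add(a, b):
    return (a + b) % n

def mul(a, b):
    return (a * b) % n

# (ii) associativity of multiplication: a(bc) = (ab)c
assert all(mul(a, mul(b, c)) == mul(mul(a, b), c)
           for a in R for b in R for c in R)

# (iii) both distributive laws: a(b + c) = ab + ac and (b + c)a = ba + ca
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           and mul(add(b, c), a) == add(mul(b, a), mul(c, a))
           for a in R for b in R for c in R)

# unity: 1a = a1 = a for all a
assert all(mul(1, a) == a == mul(a, 1) for a in R)
```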
Let R be a ring. The identity element of the group of (i) will be denoted by 0; the inverse of an element a ∈ R considered as an element of this group will be denoted by -a; a + (-b) will be written a - b. The reader may verify for himself such statements as

0a = a0 = 0 for all a ∈ R,
a(-b) = (-a)b = -(ab) for all a, b ∈ R,
a(b - c) = ab - ac for all a, b, c ∈ R.

A ring R is said to be commutative if ab = ba for all a, b ∈ R. An element of R is called a unity, and is denoted by 1, if 1a = a1 = a for all a ∈ R. If R has a unity, then it has exactly one unity. We shall assume throughout this book that all rings under consideration have unities. By a subring of a ring R we mean a ring S such that the set S is a subset of the set R and such that the binary operations of R yield the binary operations of S when restricted to S × S. By our assumption concerning unities, both R and S have unities. We shall consider only those subrings of a ring R which have the same unity as R.

By a left ideal of a ring R we mean a nonempty subset A of R such that a - b ∈ A and ra ∈ A for all a, b ∈ A and r ∈ R. By a right ideal of R we mean a nonempty subset B of R such that a - b ∈ B and ar ∈ B for all a, b ∈ B and r ∈ R. A left ideal of R which is at the same time a right ideal of R is called an ideal of R. If R is a commutative ring, then the left ideals, right ideals, and ideals of R coincide. A left ideal, right ideal, or ideal of R is called proper if it is different from R. The ideal consisting of the element 0 alone is called the zero ideal of R and will be denoted simply by 0; it is a proper ideal if and only if R has more than one element.

Let R be a ring. If a ∈ R, then the set Ra = {ra | r ∈ R} is a left ideal of R, while aR = {ar | r ∈ R} is a right ideal of R. Since R has a unity, we have a ∈ Ra and a ∈ aR.
The smallest ideal of R containing a, called the principal ideal generated by a, is the set of elements of R of the form Σᵢ rᵢasᵢ, where rᵢ, sᵢ ∈ R and the sum is finite. This ideal will be denoted by (a). If R is commutative, Ra = aR = (a).

1.1. Definition. Let R be a ring. A left R-module M is an Abelian group, written additively, together with a mapping (a, x) ↦ ax from R × M into M such that
(1) a(x + y) = ax + ay,
(2) (a + b)x = ax + bx,
(3) (ab)x = a(bx),
for all a, b ∈ R and x, y ∈ M. A right R-module N is an Abelian group, written additively, together with a mapping (x, a) ↦ xa from N × R into N such that
(1′) (x + y)a = xa + ya,
(2′) x(a + b) = xa + xb,
(3′) x(ab) = (xa)b,
for all a, b ∈ R and x, y ∈ N.

We shall adopt the custom of referring to left R-modules simply as R-modules. Many of the results concerning modules, left and right, will be stated for R-modules only. Analogous results hold for right R-modules. Let R be a commutative ring and let M be an R-module. If a ∈ R and x ∈ M, we define xa to be ax. This makes M into a right R-module. It is immediate that (1′) and (2′) hold. As for (3′), we note that for all a, b ∈ R and x ∈ M we have x(ab) = (ab)x = (ba)x = b(ax) = (xa)b. If R is not commutative, then an R-module cannot necessarily be made into a right R-module in this way.

An R-module M is called unital if 1x = x for all x ∈ M. A similar definition applies to right R-modules. We shall assume throughout this book that all modules considered, whether left or right modules, are unital. An R-module N is called a submodule of an R-module M if N is a subgroup of M and if the module operation R × N → N is the restriction of the module operation R × M → M to R × N. Suppose that the nonempty set N is a subset of M. Suppose further that x − y ∈ N and ax ∈ N for all x, y ∈ N and a ∈ R. Then N, together with the induced operation of addition, is a subgroup of M.
The mapping (a, x) ↦ ax from R × N into N makes this subgroup into a submodule of M. We refer simply to the submodule N. If L and N are submodules of an R-module M, then their intersection L ∩ N is also a submodule of M. More generally, if {Nα | α ∈ I} is an arbitrary nonempty family of submodules of M, then ∩α∈I Nα is a submodule of M. The sum of L and N, denoted by L + N, is defined by

L + N = {x + y | x ∈ L and y ∈ N}.

It is easily seen that L + N is a submodule of M. If N₁, …, Nₖ are submodules of M, then
Math Placement Practice Test (Example Questions)

Math Placement Test Practice

If you’re headed for college or university, a math placement test might be in your very near future! Keep reading to find out more about what the test is like and how you can study.

What’s on a Math Placement Test?

A math placement test for college usually covers a wide range of topics, from basic operations to probability and statistics. Every college math placement test is a little different, but the topics you’re tested on are generally about the same:

• Basic operations (addition, subtraction, multiplication, division)
• Fractions (addition, subtraction, multiplication, division)
• Decimals (conversion between fractions and decimals, operations)
• Percentages (calculations, increases, decreases)
• Ratios and proportions

Elementary Algebra
• Simplifying expressions
• Solving linear equations and inequalities
• Graphing linear equations and inequalities
• Operations with polynomials (addition, subtraction, multiplication, division)
• Factoring polynomials
• Solving quadratic equations (factoring, quadratic formula)
• Rational expressions and equations
• Radical expressions and equations
• Systems of linear equations

Intermediate Algebra
• Functions and their properties (domain, range, evaluation)
• Graphing functions and transformations
• Exponents and logarithms (properties, solving equations)
• Complex numbers (operations, polar form)
• Sequences and series (arithmetic, geometric)
• Conic sections (parabolas, ellipses, hyperbolas)
• Polynomial and rational functions (graphing, asymptotes)

• Points, lines, planes, and angles
• Triangles (properties, congruence, similarity)
• Quadrilaterals and polygons
• Circles (properties, equations, tangents, arcs)
• Coordinate geometry (distance, midpoint, slope)
• Perimeter, area, and volume of various shapes

• Trigonometric functions (sine, cosine, tangent, etc.)
• Right triangle trigonometry
• Unit circle and radian measure
• Graphing trigonometric functions
• Trigonometric identities and equations
• Law of Sines and Law of Cosines
• Polar coordinates

• Functions and their inverses
• Polynomial and rational functions
• Exponential and logarithmic functions
• Trigonometric functions and their inverses
• Analytic geometry (conic sections, parametric equations)
• Vectors and vector operations

Statistics and Probability
• Descriptive statistics (mean, median, mode, standard deviation)
• Probability (basic concepts, combinations, permutations)
• Probability distributions (binomial, normal)
• Inferential statistics (confidence intervals, hypothesis testing)

Why a Math Placement Test is Required

College admissions representatives can see your high school transcripts and the courses you’ve taken, but that’s not always enough. Different schools teach in different ways, and not all students pick up the information the same way. It’s also likely that college math courses won’t exactly match high school courses, so a college-level math placement test score gives a better picture of what you know and where you should start.

If students are placed in math courses that are too advanced or too easy, they can get frustrated. This frustration can lead to poor learning, bad grades, and dropping out. Your college wants you to succeed, just as you do. When you are placed in the right math course, you will perform better, be happier, and be more likely to stay in school!

Math Placement Online Course

If you want to be fully prepared, Mometrix offers an online math placement test prep course. The course is designed to provide you with any and every resource you might want while studying. The math placement test course includes:

• Review Lessons Covering Every Topic
• 1,100+ Math Placement Test Practice Questions
• Over 130 Instructional Videos
• Money-back Guarantee
• Free Mobile Access
• and More!
The math placement test prep course is designed to help any learner get everything they need to prepare for their math placement test. Click below to check it out! How to Study for a Math Placement Test One of the most important parts of preparing for a test is determining which topics you need to brush up on as you study. To get started with a self-assessment of your knowledge, click on one of the modules below!
Cockpit coaming template

I was thinking about how to easily draw and make a jig for a good-looking and functioning cockpit coaming, and came up with this idea that lends itself to easy drawing with a pencil and a piece of string. The original idea was someone posting on FB how he had made a mathematical calculation for the line of his coaming, but that one had some bends that were too steep for easy wood bending. Couldn’t find that post again.

Here’s my very feeble first take drawn in Adobe Illustrator. It will not help you to a nice shape. Read on and nerd on! So, if you’re getting confused don’t worry, who isn’t? I shall explain it to you in simple words worthy of a commoner below! Trust me, I had almost no grades in math from school.

To make your jig, start on your jig’s plywood bottom. Draw a circle with whatever means you have. You can drive a small screw or nail in, tie a string and a pencil to it, and make sure the string is 16cm long when you tighten it all. That’s your R[0]. In our case we use 16cm. Two and one-third radii below that, drive in another screw or nail. That would be approximately 37,3cm straight down from the first if we used an R[0] of 16cm. Draw a fraction of a line at the bottom where the coaming will end. If you got it right, it will be about 53cm back from the top.

Now for some simple math: take your R[0] of 16cm and multiply by 0,813 and you get 13. That is your R value. I just made that factor up because it’s convenient for an example, and it doesn’t make a steeper radius than about 20% smaller than the first. That means you won’t have so much wood cracking!

Ok, so a couple of days later and I almost don’t know what this post is about. …so read on!

MUCH better update

I was looking for a site that can take some numbers and stumbled upon Wolfram Alpha, which charges you a lot for a few downloads of Your Own Equations (won’t even post the link). On the other hand The Great Site Desmos doesn’t charge you anything!
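Before moving on to the formulae, the string-and-pencil numbers above can be sanity-checked with a few lines of arithmetic (a quick sketch; the variable names are mine):

```python
# Jig measurements from the post, as plain arithmetic
R0 = 16.0                    # first string length, cm
pin_drop = R0 * (2 + 1 / 3)  # second nail: two and one-third radii below, ~37.3 cm
R = R0 * 0.813               # smaller radius, ~13.0 cm
reduction = 1 - R / R0       # ~18.7%, under the ~20% wood-cracking threshold
```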
Paste in one of these formulae for an egg shape that I found at John D Cooke’s blog and play around with them! Desmos even gives you drawbars to tweak, yayy!

plot x^{2}/10+y^{2}(1+0.18x)/4=1

Or a standing egg formula:

A more rounded egg shape:

Here you can modify the back (bottom of the egg shape) with your own:

9x^{2}+16y^{2}+2xy^{2}+y^{2}-144=10 (10 is round, 500 flat in the back)

Now you can make egg-shaped ocean or keyhole cockpits! The problem with a simple formula for an egg shape is that the same parameters that control the roundness of the bottom also make it sharper on the top! Dang. So in comes reliable German math and fixes everything. See screenshots below.

This post is probably to be continued after some more research and a good night’s sleep… I bet I can find a way to do this in Javascript and an svg file, but not right now 🙂
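Pending that JavaScript version, here is a rough Python sketch of the same idea (my own naming and scaling, not from this post): sample the first egg formula above, solve it for y, and write the outline out as an SVG polygon.

```python
import math

def egg_y(x):
    """Positive-y branch of x^2/10 + y^2*(1 + 0.18*x)/4 = 1."""
    t = (1 - x**2 / 10) / (1 + 0.18 * x)
    return 2 * math.sqrt(max(t, 0.0))

def egg_points(n=200):
    """Sample the closed egg outline as (x, y) pairs."""
    a = math.sqrt(10)  # half-width of the egg along x
    xs = [-a + 2 * a * i / (n - 1) for i in range(n)]
    top = [(x, egg_y(x)) for x in xs]
    bottom = [(x, -y) for x, y in reversed(top)]
    return top + bottom

def egg_svg(scale=40, size=300):
    """Render the outline as a standalone SVG polygon."""
    cx = cy = size / 2
    coords = " ".join(f"{cx + scale * x:.1f},{cy - scale * y:.1f}"
                      for x, y in egg_points())
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            f'<polygon points="{coords}" fill="none" stroke="black"/></svg>')

with open("egg.svg", "w") as f:
    f.write(egg_svg())
```

The curve could then be scaled to real coaming dimensions the same way as the string-and-pencil version.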
How to Graph a Hyperbola - dummies

Think of a hyperbola as a mix of two parabolas — each one a perfect mirror image of the other, each opening away from the other. The vertices of these parabolas are a given distance apart, and they open either vertically or horizontally. The mathematical definition of a hyperbola is the set of all points where the difference in the distance from two fixed points (called the foci) is constant. There are two kinds of hyperbolas: horizontal and vertical.

The equation for a horizontal hyperbola is (x − h)²/a² − (y − v)²/b² = 1.

The equation for a vertical hyperbola is (y − v)²/a² − (x − h)²/b² = 1.

Notice that x and y switch places (as well as the h and v with them) to name horizontal versus vertical, compared to ellipses, but a and b stay put. So, for hyperbolas, a-squared should always come first, but it isn’t necessarily greater. More accurately, a is always squared under the positive term (either x-squared or y-squared). Basically, to get a hyperbola into standard form, you need to be sure that the positive squared term is first.

The center of a hyperbola is not actually on the curve itself, but exactly in between the two vertices of the hyperbola. Always plot the center first, and then count out from the center to find the vertices, axes, and asymptotes. A hyperbola has two axes of symmetry. The one that passes through the center and the two foci is called the transverse axis; the one that’s perpendicular to the transverse axis through the center is called the conjugate axis. A horizontal hyperbola has its transverse axis at y = v and its conjugate axis at x = h; a vertical hyperbola has its transverse axis at x = h and its conjugate axis at y = v. You can see the two types of hyperbolas in the above figure: a horizontal hyperbola on the left, and a vertical one on the right.

If the hyperbola that you are trying to graph is not in standard form, then you need to complete the square to get it into standard form. For example, the equation (y − 3)²/16 − (x + 1)²/9 = 1 is a vertical hyperbola. The center (h, v) is (–1, 3).
Here a = 4 and b = 3 (which means that you count horizontally 3 units from the center both to the left and to the right). The distance from the center to the edge of the rectangle marked “a” determines half the length of the transverse axis, and the distance to the edge of the rectangle marked “b” determines the conjugate axis. In a hyperbola, a could be greater than, less than, or equal to b. If you count out a units from the center along the transverse axis, and b units from the center in both directions along the conjugate axis, these four points will be the midpoints of the sides of a very important rectangle. This rectangle has sides that are parallel to the x- and y-axis (in other words, don’t just connect the four points, because they are the midpoints of the sides, not the corners of the rectangle). This rectangle will be a useful guide when it is time to graph the hyperbola.

But as you can see in the above figure, hyperbolas contain other important parts that you must consider. For instance, a hyperbola has two vertices. There are two different equations — one for horizontal and one for vertical hyperbolas:

• A horizontal hyperbola has vertices at (h ± a, v).
• A vertical hyperbola has vertices at (h, v ± a).

The vertices for the above example are at (–1, 3 ± 4), or (–1, 7) and (–1, –1).

You find the foci of any hyperbola by using the equation F² = a² + b², where F is the distance from the center to the foci along the transverse axis, the same axis that the vertices are on. The distance F moves in the same direction as a. Continuing this example, F² = 16 + 9 = 25, so F = 5.

To name the foci as points in a horizontal hyperbola, you use (h ± F, v); to name them in a vertical hyperbola, you use (h, v ± F). The foci in the example would be (–1, 3 ± 5), or (–1, 8) and (–1, –2). Note that this places them inside the hyperbola.

Through the center of the hyperbola run the asymptotes of the hyperbola. These asymptotes help guide your sketch of the curves because the curves cannot cross them at any point on the graph.
To graph a hyperbola, follow these simple steps:

1. Mark the center. Sticking with the example hyperbola (y − 3)²/16 − (x + 1)²/9 = 1, you find that the center of this hyperbola is (–1, 3). Remember to switch the signs of the numbers inside the parentheses, and also remember that h is inside the parentheses with x, and v is inside the parentheses with y. For this example, the quantity with y-squared comes first, but that does not mean that h and v switch places. The h and v always remain true to their respective variables, x and y.

2. From the center in Step 1, find the transverse and conjugate axes. Go up and down the transverse axis a distance of 4 (because 4 is under y), and then go right and left 3 (because 3 is under x). But don’t connect the dots to get an ellipse! Up until now, the steps of drawing a hyperbola were exactly the same as for drawing an ellipse, but here is where things get different. The points you marked as a (on the transverse axis) are your vertices.

3. Use these points to draw a rectangle that will help guide the shape of your hyperbola. Because you went up and down 4, the height of your rectangle is 8; going left and right 3 gives you a width of 6.

4. Draw diagonal lines through the center and the corners of the rectangle that extend beyond the rectangle. This gives you two lines that will be your asymptotes.

5. Sketch the curves. Draw the curves, beginning at each vertex separately, so that they hug the asymptotes more and more closely the farther away from the vertices they get. The graph approaches the asymptotes but never actually touches them.

The above figure shows the finished hyperbola.
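The numbers in the example can be checked programmatically. Here is a small Python sketch for a vertical hyperbola (y − v)²/a² − (x − h)²/b² = 1 (the function name is mine, not from the article):

```python
import math

def vertical_hyperbola_features(h, v, a, b):
    """Key features of (y - v)^2/a^2 - (x - h)^2/b^2 = 1."""
    F = math.hypot(a, b)  # center-to-focus distance, from F^2 = a^2 + b^2
    return {
        "center": (h, v),
        "vertices": [(h, v + a), (h, v - a)],
        "foci": [(h, v + F), (h, v - F)],
        "asymptote_slopes": (a / b, -a / b),  # lines y - v = +/-(a/b)(x - h)
    }

# The article's example: (y - 3)^2/16 - (x + 1)^2/9 = 1
features = vertical_hyperbola_features(h=-1, v=3, a=4, b=3)
```

For the example this gives center (–1, 3), vertices (–1, 7) and (–1, –1), and foci (–1, 8) and (–1, –2), matching the values worked out above.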
Statistics: Hypothesis Testing - Critical Value Method

Now that we know about the Null and Alternate Hypotheses and how to choose our Null Hypothesis, let’s see how we can use the Critical Value Method (also called the Critical Region Method) to make decisions about our Null Hypothesis. Before we move on to that, keep the following clear in your mind. We NEVER accept a Null Hypothesis. Either:

We Reject it, meaning we found sufficient evidence to say that the Null Hypothesis is not supported by the data.

We FAIL to reject it, meaning we could not find sufficient evidence to reject the Null Hypothesis.

Note: The second part does not mean that we are accepting the Null Hypothesis. It only says that we could not find sufficient evidence to reject it. Lack of evidence to reject something is not equivalent to accepting it.

So, how do we decide whether to reject or fail to reject the Null Hypothesis? Let’s take the same example that we took in our discussion on inferential statistics in this earlier post. We had a plant that was manufacturing tablets (1 lakh a day), so in 30 days we had 30 lakh tablets. Each of these tablets has a chemical called X. Now, we make a claim that the average amount of chemical X in these 30 lakh tablets is equal to 9.8 mg.

So, our Null Hypothesis is H0: mean = 9.8 mg, and the Alternate Hypothesis is H1: mean ≠ 9.8 mg.

Since we can go wrong in both directions (underestimating and overestimating), this would be a two-tailed test. Now, we need to find a cut-off value on both sides. Beyond these cutoff points we would reject the Null Hypothesis. The upper cutoff is known as the Upper Critical Value and the lower cutoff is known as the Lower Critical Value.

How to find these Critical Values? We take a sample of, let’s say, 100 tablets and find its mean to be 10.3 and its Standard Deviation to be 2.1.
The following formula is used for calculating the Critical Values:

UCV = U + Zc * Sigma / Sqrt(n)
LCV = U - Zc * Sigma / Sqrt(n)

where:
U is the Claimed Mean (in our Null Hypothesis),
Zc is the Z score associated with the given level of significance (we will see in the next post how to calculate this),
Sigma is the Standard Deviation of the Sample, and
n is the Sample Size.

We know the value of U (9.8) from our Null Hypothesis, the value of Sigma (2.1) from the sample we took, and the value of n (100) from the sample we took. Only Zc is what we do not know yet. For now, let’s take it as 2.17 (in the next post we will discuss how to calculate this value).

So, putting these values in the formula we have:

UCV = 9.8 + 2.17 * 2.1 / Sqrt(100) = 9.8 + 0.46 = 10.26
LCV = 9.8 - 2.17 * 2.1 / Sqrt(100) = 9.8 - 0.46 = 9.34

So, our acceptable region is from 9.34 mg to 10.26 mg (LCV to UCV). However, our sample mean was 10.3 mg (the mean of the 100 samples we took). Since this mean (10.3) lies outside our acceptable region (9.34 to 10.26), our decision would be to Reject the Null Hypothesis. Meaning, from the sample we took, we found sufficient evidence to say that the claim of 9.8 mg for the entire population was not good.

This is the Critical Value Method of making decisions on the Hypothesis, and hence it is known as Hypothesis Testing. The only thing we missed was the calculation of Zc. In the next post we will discuss how to calculate the value of Zc.
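The whole calculation can be reproduced in a few lines of Python (a sketch; the variable names are mine):

```python
import math

def critical_values(mu0, sigma, n, z_crit):
    """Two-tailed lower/upper critical values around the claimed mean mu0."""
    margin = z_crit * sigma / math.sqrt(n)
    return mu0 - margin, mu0 + margin

lcv, ucv = critical_values(mu0=9.8, sigma=2.1, n=100, z_crit=2.17)
sample_mean = 10.3
reject_null = not (lcv <= sample_mean <= ucv)  # True: 10.3 lies outside the region
```

With these inputs, lcv is about 9.34 and ucv about 10.26, so the sample mean of 10.3 falls outside the acceptable region and the Null Hypothesis is rejected, matching the decision above.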
Lesson 20 Rational Equations (Part 1) Lesson Narrative In previous lessons, students studied rational functions. They made sense of graphs in context, learned about asymptotes, and rewrote expressions to reveal end behavior. The goal of this lesson is for students to write rational equations to model simple situations, such as average cost and batting averages, and then use their equations to answer questions and interpret the meaning of their answer in context (MP2) while making use of the structure of rational equations (MP7). This is the first of three lessons on solving rational equations. In this lesson, students write and solve equations involving a rational expression that is equal to a rational number. As the lessons progress, students will solve increasingly challenging rational equations, with the third lesson focusing on understanding how extraneous solutions may arise. Technology isn’t required for this lesson, but there are opportunities for students to choose to use appropriate technology to solve problems. We recommend making technology available. Learning Goals Teacher Facing • Calculate solutions to equations involving a single rational expression. • Create simple equations involving a rational expression to solve a problem in context. Student Facing • Let’s write and solve some rational equations. Student Facing • I can write rational expressions that represent averages to answer questions about the situation. CCSS Standards Building On Building Towards Print Formatted Materials Teachers with a valid work email address can click here to register or sign in for free access to Cool Down, Teacher Guide, and PowerPoint materials. Student Task Statements pdf docx Cumulative Practice Problem Set pdf docx Cool Down Log In Teacher Guide Log In Teacher Presentation Materials pdf docx Additional Resources Google Slides Log In PowerPoint Slides Log In
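As a small illustration of the kind of rational-equation model the lesson describes (the cost figures and function below are my own, not taken from the lesson materials): if producing x items costs a $50 setup fee plus $2 per item, the average cost per item is (50 + 2x)/x, and asking when that average equals $4 gives a rational equation students can solve.

```python
from fractions import Fraction

def items_for_target_average(setup, per_item, target_avg):
    """Solve the rational equation (setup + per_item*x) / x = target_avg.

    Clearing the denominator: setup + per_item*x = target_avg*x,
    so x = setup / (target_avg - per_item)."""
    return Fraction(setup) / (Fraction(target_avg) - Fraction(per_item))

# $50 setup, $2 per item; when does the average cost drop to $4?
x = items_for_target_average(setup=50, per_item=2, target_avg=4)
```

Interpreting the answer in context, as the lesson asks students to do: producing 25 items brings the average cost down to the $4 target.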
Question Video: Applications of the Counting Principle (Product Rule)
Mathematics • Second Year of Secondary School

Liam must create a password for his new computer. The password is not case sensitive and consists of 4 English letters. Determine how many different passwords can be created if letters cannot be repeated.

Video Transcript

Liam must create a password for his new computer. The password is not case sensitive and consists of four English letters. Determine how many different passwords can be created if letters cannot be repeated.

In order to answer this question, we need to use our knowledge of permutations. A permutation is an arrangement of a collection of items with no repetition and where order matters. The notation for this is 𝑛 P 𝑟, where 𝑟 is the number of items being selected and 𝑛 is the total number of items. This can be calculated using the formula 𝑛 factorial divided by (𝑛 minus 𝑟) factorial.

Liam is selecting from the letters in the English alphabet. Therefore, 𝑛 is equal to 26. As he needs four of them for his password, 𝑟 is equal to four. This means that we need to calculate 26 P four. Liam needs to select four items from a group of 26 with no repetition and where order matters. This is equal to 26 factorial divided by 22 factorial, as 26 minus four is equal to 22.

We know that 𝑛 factorial can be rewritten as 𝑛 multiplied by (𝑛 minus one) factorial. This means that we can rewrite 26 factorial as 26 multiplied by 25 multiplied by 24 multiplied by 23 multiplied by 22 factorial. We can then divide the numerator and denominator by 22 factorial. This means that we are left with the product of the integers from 26 down to 23. Multiplying these four values gives us 358,800. This is the total number of different passwords that Liam can create if the letters are not repeated.

An alternative method here would be just to use a scientific calculator.
Typing 26 followed by the 𝑛 P 𝑟 button followed by four and then pressing equals gives us an answer of 358,800.
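The same count can be checked in Python (not part of the transcript; `math.perm` requires Python 3.8 or later):

```python
import math

# 26 letters, choose 4 in order with no repetition: 26 P 4
n_passwords = math.perm(26, 4)

# The product form derived in the transcript: 26 * 25 * 24 * 23
product_form = 26 * 25 * 24 * 23
```

Both expressions give 358,800, the answer obtained in the video.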
Voting and Collective Decision-Making: Bargaining and Power

Every day thousands of decisions are made by all kinds of committees, parliaments, councils and boards by a ‘yes–no’ voting process. Sometimes a committee can only accept or reject the proposals submitted to it for a decision. On other occasions, committee members have the possibility of modifying the proposal and bargaining an agreement prior to the vote. In either case, what rule should be used if each member acts on behalf of a different-sized group? It seems intuitively clear that if the groups are of different sizes then a symmetric rule (e.g. the simple majority or unanimity) is not suitable. The question then arises of what voting rule should be used. Voting and Collective Decision-Making addresses this and other issues through a study of the theory of bargaining and voting power, showing how it applies to real decision-making contexts.

Annick Laruelle is Professor of Economics at the University of Caen, Lower Normandie. Federico Valenciano is Professor of Mathematics and Game Theory at the University of the Basque Country, Bilbao.
Voting and Collective Decision-Making: Bargaining and Power

Annick Laruelle and Federico Valenciano

Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo
Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521873871

© Federico Valenciano and Annick Laruelle 2008

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2008
ISBN-13 978-0-511-42913-2 eBook (EBL)
ISBN-13 978-0-521-87387-1

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
To my parents (Annick)
For loved ones gone and those still around (Federico)

List of figures
1 Preliminaries
  1.1 Basic set-theoretic notation
  1.2 Some combinatorics
    1.2.1 Permutations and combinations
    1.2.2 Some useful approximations
  1.3 Voting rules
    1.3.1 Dichotomous voting rules
    1.3.2 Some particular voting rules
  1.4 Expected utility theory
    1.4.1 Players, games and game theory
    1.4.2 Preferences and utility
    1.4.3 Lotteries and expected utility
    1.4.4 Expected utility preferences
  1.5 Some basic game theory notions
    1.5.1 Equilibrium
    1.5.2 Cooperative and non-cooperative game theory
    1.5.3 Subgame perfect equilibrium
    1.5.4 Basic cooperative models
  1.6 Exercises
2 Seminal papers, seminal ambiguities
  2.1 Seminal papers and seminal ambiguities
    2.1.1 Nash (1950): The bargaining problem
    2.1.2 Shapley (1953): The value of a TU game
    2.1.3 Shapley–Shubik (1954): A power index
    2.1.4 Banzhaf (1965): Power as decisiveness
    2.1.5 Penrose (1946), Rae (1969) and Coleman (1971)
Utilitarianism in a committee of representatives 3.9 Exercises 3.10 Appendix 4 Bargaining committees 4.1 The bargaining scenario 4.2 A model of a bargaining committee: voting rule and voters’ preferences 4.3 Cooperative game-theoretic approach 4.3.1 Rationality conditions 4.3.2 Axiomatic characterizations 4.3.3 Discussion 4.4 A non-cooperative model of a bargaining committee 4.4.1 Probabilistic protocols 4.4.2 Bargaining protocols under a voting rule 4.4.3 Discussion 4.5 Egalitarianism and utilitarianism in a bargaining committee 4.6 The neutral voting rule in a committee of representatives 4.7 Exercises 5 Application to the European Union 5.1 Voting rules in the European Council 5.2 The Council as a take-it-or-leave-it committee 5.2.1 Criteria based on probabilities 5.2.2 Criteria based on utilities 5.3 The Council as a bargaining committee 5.4 Exercises Battle of the sexes in sequential form A bargaining problem: (a) Classical à la Nash model (b) Assuming ‘free disposal’ The Nash bargaining solution A three-person bargaining problem Continuation payoffs after the choice of proposer in a three-person problem The important changes that have taken place in the European Union as a result of the latest enlargements have made it necessary to redesign decision-making procedures again and again. This has contributed to a renewal of interest in issues related to the choice and design of dichotomous voting procedures in recent years, to a conspicuous increase in the number of academic papers, both theoretical and applied, related in one way or another to these issues and to heated debates within the scientific community. As a result of this ‘fever’ there have been various movements within this community that have gone beyond the academic realm, including press articles and explicit attempts to influence politicians or their advisers on the choice of voting rule for the EU Council of Ministers. 
At the basis of some of these recommendations is what is called, perhaps a little ostentatiously, ‘a priori voting power theory’. The main purpose of this book is to provide a critical revision of the foundations of this theory and of the recommendations that stem from it, based on more than ten years of joint research on the subject. Prior to this collaboration, the first author of this book was preparing her Ph.D. One of the chapters of her thesis sets out the application to the EU Council of the two-stage model of the decision-making process in committees of representatives1 . This model assumes that each representative follows the will of the majority in his/her constituency on every issue. Then, assuming that each citizen votes ‘yes’ or ‘no’ independently with probability 1/2, one can calculate the probability of a citizen being crucial or decisive for a given voting rule in the committee. This (usually very small) probability is interpreted as the ‘a priori voting power’ of the citizen and is known as the citizen’s ‘Banzhaf index’. If this interpretation is accepted, egalitarianism recommends choosing a voting rule that gives equal Banzhaf indices to all citizens whatever their constituency. This recommendation is known as the (first) ‘square 1 Joint work with Mika Widgrén [52]. root rule’ because it entails choosing the rule for which each representative in the committee has a Banzhaf index proportional to the square root of the population that he/she represents. This model was also the starting point of our joint research. Since then our views have changed considerably, to the extent that we refused to sign a letter addressed to the EU Governments and supported by a group of scientists endorsing the square root rule as the choice of voting rule for the EU Council2 . Ten years of work lie behind this shift of views. As shown in [52], citizens of different countries had different Banzhaf indices for the qualified majority rule in the fifteen-member EU. 
Our first endeavour was to seek a measure of inequality in this context [41, 43], but we soon turned our attention to the foundations. Why the Banzhaf index? Why not the Shapley–Shubik index, apparently preferred by game theorists, or any other ‘power index’? We first addressed the question of the axiomatic foundations of power indices in the framework of simple games [39, 40, 42] only to honestly conclude that there were no conclusive arguments for the superiority of any of them on these grounds alone. We then turned our attention to the probabilistic approach [37, 44, 45, 46]. In this approach voters’ behaviour is described by a probability distribution over vote configurations, and power indices are interpreted as probabilities either of being decisive or of obtaining one’s preferred outcome. This point of view led us to adhere for a while to the Banzhaf index as the best-founded index, but we soon grew increasingly dubious about its consistency. One of the factors that contributed to these doubts was our critical examination of the so-called ‘postulates and paradoxes’ so popular in the literature on power indices, their inconsistencies and their lack of real discriminating capacity [47]. To our surprise, the notion of success or satisfaction, i.e. the likelihood of obtaining one’s preferred outcome, which is inextricably intermingled with decisiveness in any pre-conceptual notion of voting power, behaved even better than decisiveness with respect to some postulates. This sparked doubts concerning the soundness of the notion of voting power as the likelihood of being decisive, and led us to consider the notion of success or satisfaction as the relevant issue in certain voting situations [38]. 
On the other hand, a most inspiring interview in 2002 with David Galloway, who had twenty years of experience working for the Council of Ministers of the European Union, made it clear to us that bargaining was a (if not 'the') crucial ingredient in the workings of the Council. Bargaining is a genuine game situation that calls for a game-theoretic approach. We thus considered an alternative model, one of whose primitives was the preference profile over the feasible agreements [48, 51]. It thus seemed clear that the analysis of voting situations required a preliminary description of the voting environment: a small committee does not make the same use of a voting rule as a Parliament. The model cannot include the voting rule as the unique ingredient; it must be enriched to describe the specificity of the voting environment. In this way, a gradual process of accumulative reflection drove us finally to a radical change in our way of looking at several basic issues. This book presents our proposal for new foundations along with a systematic presentation of the changes that this entails in the whole theoretic edifice. To begin with, a clear distinction must be drawn at the level of the environment between two extreme types of collective decision-making bodies or committees: committees with the capacity solely to accept or reject proposals submitted to them, and committees with the capacity to bargain among feasible agreements. This at first sight obvious distinction proves rich in conceptual consequences. First, it clarifies what one is talking about, something which has been established only vaguely from the outset in the voting power tradition, where the voting rule is the only clearly specified ingredient. Second, each type of situation requires a different model and a separate analysis. This neat distinction also clarifies the different issues posed by each type of decision-making environment.

² Available at www.esi2.us.es/~mbilbao/pdffiles.letter.pdf
It is worth remarking here that the question of power is not the primary or basic issue in either case. Moreover, in the first type of committee (which we call ‘take-it-or-leave-it committees’), where behaviour immediately follows preferences, the notion of voting power does not even make sense. In contradistinction, in a ‘bargaining committee’ the notion of bargaining power in a genuinely game-theoretic sense emerges as related to the likelihood of being decisive. The normative recommendations that stem from this approach differ conspicuously from those based on the traditional approach, particularly for the choice of voting rule in a committee of representatives whose members act on behalf of groups of different sizes. The square root rule recommendation alluded to above appears in this light as correct but distorting and ill-founded if the goal is to obtain the preferred outcome. It can be re-founded in expected utility terms, but its possible validity (as well as that of the so-called ‘second square root rule’) is restricted to ‘take-it-or-leave-it committees’, where the very notion of voting power is irrelevant. By contrast, in the case of bargaining committees of representatives, the model yields completely different recommendations. But this is not the place to anticipate our conclusions in detail (impatient readers may skip to the Conclusions section at the end of the book). We hope that by this point the readers will have a clear idea of what this book is about, and will perhaps understand how hard it was for us to find a title that was clear and concise enough. We were reluctant to include the words ‘voting power’ in the title, in spite of the fact that it is precisely those interested in voting power issues who will probably be most interested in the book. We believe that the ‘sex-appeal’ of these words is responsible to some extent for the obscurities that have survived for so long at the root of the topic. 
The feeling of importance that comes with the use of the word 'power' only makes a humble, rigorous and detached analysis more difficult (just the opposite of 'game theory', a frivolous name for an ambitious research programme). The monograph most closely related to this book is The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes by Felsenthal and Machover [22], published in 1998. It was a valuable attempt to conduct a critical revision of the foundations of traditional voting power theory, a tradition in which inertia, disregard of its inconsistencies and obscurities and the mechanical application of different indices on no clear grounds were the rule. These authors stress a distinction between two notions of voting power: 'I-power', or power to influence the outcome, and 'P-power', the expected share in a fixed prize. They hold that there are several points which support this distinction. For instance, I-power is a probabilistic notion related to 'policy seeking', while P-power is a game-theoretic notion related to 'office seeking'. The Banzhaf index is considered as the right measure of a priori I-power, and other candidates are rejected on the basis that they violate some I-power 'postulates' (i.e. supposedly desirable properties whose violations are referred to as 'paradoxes'). As to the second type of power, the Shapley–Shubik index is presented, with reservations, as the most serious known candidate for a measure, but doubts are explicitly cast on the coherence of the very notion of P-power. When their book appeared, we agreed with their critical views and found the intuition behind their I/P-power distinction to be basically correct. However, we felt that the distinction was too vague and insufficient. Moreover, as time passed, we felt that accepting it as a satisfactory remedy for the obscurities at the level of the foundations was only a conformist way of hindering real progress in resolving the lack of clarity at a deeper level.
We also found the foundations of the P-power notion unconvincing, to the extent that, as stated above, we adhered for a while to the notion of voting power as influence as the only coherent notion of voting power, and discarded the Shapley–Shubik index. Interested readers will find a brief account of the different implications of their approach and ours in the Conclusions section. Another related book is Morriss' Power: A Philosophical Analysis [58, 59]. This author is also critical of the frequently unjustified applications of power indices. He conducts a careful discussion of the semantics of the word 'power', distinguishing between 'power-as-ability' and 'power-as-ableness'. Indeed, the book is intended to be more philosophical and less formal than ours. It may provide interesting additional reading (especially Part IV). Our purpose is not to survey the huge amount of material on the topic published over more than fifty years, though we do, of course, pay attention to what we consider the most significant contributions in the field. A few seminal contributions are presented in some detail. As is clear from its title, in this book we consider only dichotomous voting rules that specify collective acceptance or collective rejection for each possible yes–no vote profile. We do not consider the possibilities of abstention or not showing up. These conditions preclude the difficulties evidenced by Arrow's [2] impossibility theorem when more than two alternatives are involved, and the possibility of 'strategic' voting [27]. Consequently, the copious social choice literature on these issues is orthogonal to this book. We hope the book may be of interest and of use to students and researchers alike in political and social science, as well as in game theory and economics, especially the public economics, public choice, and social choice families. We try to present results and, especially, normative recommendations in an honest, humble, precise 'if . . .
then’ form, in which the ‘if ’ part is explicit and transparent. This requires a formal formulation. However, to make the book accessible to as wide an audience as possible, we have tried to keep the level of formalization low enough not to discourage readers with less mathematical backgrounds but at the same time high enough to be precise. The meaning of formal statements is always expressed in plain words. Some proofs, especially in the case of technically complex published results, have been omitted. Chapter 4 may perhaps be the most difficult for those readers not familiar with game theory. Nevertheless, it is our hope that readers with less mathematical backgrounds can get a grasp of the main ideas presented in the book by skipping the mathematical details and just reading the rest. The book is organized as follows. Chapter 1 presents the basic set-theoretic notation and some combinatorics, along with the notation and terminology on dichotomous (acceptance/rejection) voting rules used throughout the book. This chapter also contains two brief sections devoted to the basics of expected utility theory and a summary overview of a few basic concepts of game theory. In Chapter 2 a few seminal papers are briefly and critically reviewed, and the basic distinction in this book between ‘take-it-or-leave-it’ and ‘bargaining’ committees is introduced. Chapter 3 is devoted to ‘take-it-or-leave-it’ committees, a probabilistic model of which allows us: (i) to address the question of the conceptual and analytical distinction between the notions of success and decisiveness; (ii) to provide a common perspective in which several ‘power indices’ from the literature can be seen as variations of two basic ideas; and (iii) to address the question of the optimal voting rule in a take-it-or-leave-it committee from two points of view: egalitarianism and utilitarianism. Chapter 4 deals with bargaining committees. 
A game-theoretic model is proposed, and the question of the players' expectations is addressed first from a cooperative-axiomatic point of view, and then from a non-cooperative point of view. This chapter concludes with a recommendation for bargaining committees of representatives. Finally, in Chapter 5 the different rules used in the EU Council and some more recent proposals are examined from the point of view of the models presented in Chapters 3 and 4. A section with exercises at the end of each chapter is intended to provide readers with a means to check how well they have understood the chapters. We would like to end this preface by expressing our gratitude to some people and institutions. We would like to thank people of various fields such as Fuad Aleskerov, Steve Brams, Nimrod Megiddo, Hans Peters, and Stef Tijs, who independently suggested to us the stimulating notion of writing a book on this subject before the idea had crossed our minds. We also thank Dan Felsenthal and Moshé Machover, our main scientific opponents in recent years. In spite of our sometimes overheated scientific arguments (particularly in their interesting and controversial 'petit comité' 'VPP's' meetings, to which they have never failed to invite us every year since the first meeting in 2001), our disagreements have always been stimulating and inspiring. We also thank Jon Benito, Arri Chamorro, Elena Iñarra, Jean Lainé, Vincent Merlin, Maria Montero, Stefan Napel and Norma Olaizola, who read some chapters and made valuable suggestions on how to improve them. Thanks also go to William Thomson, who taught the second author to make drawings, and to Chris Pellow, who did his best to make our English sound better. It goes without saying that all mistakes and defects are entirely our responsibility. Finally, we thank Chris Harrison and Philip Good, of Cambridge University Press. Chris encouraged us to present our project to Cambridge, and Philip has been in charge in the latter stages.
We are also grateful for the financial support received since we committed ourselves to this project, from the Spanish Ministerio de Educación y Ciencia under projects BEC2003-08182 and SEJ200605455, the latter co-funded by the ERDF, from Acción Integrada HF-2006-0021, and from the French Government under the EGIDE Picasso project. The first author also acknowledges financial support from the Spanish M.E.C. under the Ramón y Cajal Program at the earliest stages of this project. Though less apparent, support from colleagues, friends and families was also important and is warmly acknowledged.

March 2008
Annick Laruelle
Federico Valenciano

This chapter provides some basic background material. Basic set-theoretic notation is introduced in Section 1.1, and some combinatorics in Section 1.2. Formal descriptions of voting rules and related notation and terminology to be used throughout the book are given in Section 1.3. In Section 1.4 the basics of decision-making under risk and expected utility theory are provided. Section 1.5 contains a short overview of a few basic concepts from game theory.

1.1 Basic set-theoretic notation

In general, sets are denoted by capitals (e.g. N, S, M, etc.), and when they are finite the corresponding lower-case letter (n, s, m, . . .) denotes their number of elements or cardinality (sometimes denoted by #N, #S, #M, . . .). We write a ∈ S to express that an element a belongs to a set S, and a ∉ S otherwise. Given two sets A and B we write A ⊆ B to express that all elements in A are also in B, and we write A ⊂ B if A ⊆ B and A ≠ B. Symbols '∪' and '∩' denote the usual operations on sets of 'union' and 'intersection', while A \ B denotes the set of those elements in A that do not belong to B. When B contains a single element i, i.e. B = {i}, we will often write A \ i instead of A \ {i} and A ∪ i instead of A ∪ {i}. The set consisting of all subsets of a set N is denoted by 2^N (note that its cardinality is 2^n).
1 Voting and Collective Decision-Making

We write f : A → B to express that f is a map or a function from set A to set B, and, if x ∈ A, x ↦ f(x) to express that f maps x onto f(x). If C ⊆ A, f(C) = {f(x) : x ∈ C}. A map f : A → B is said to be injective if whenever x, y ∈ A and x ≠ y, we have f(x) ≠ f(y); and is said to be surjective if for all z ∈ B there exists an x ∈ A such that f(x) = z. A map f is said to be bijective if it is both injective and surjective. The set of real numbers is denoted by R. The subset of R consisting of all non-negative (≥0) numbers is denoted by R+, while R++ denotes the set of positive (>0) real numbers. For any pair of real numbers a, b, [a, b] denotes the segment [a, b] = {x ∈ R : a ≤ x ≤ b}. If N = {1, 2, . . . , n}, R^N denotes the set of n-tuples x = (x1, . . . , xn), where xi ∈ R; or in other terms the set of maps x (i ↦ xi) from N to R. We will write, for any x, y ∈ R^N, x ≤ y (x < y) if xi ≤ yi (xi < yi) for all i = 1, . . . , n. We write 'P ⇒ Q' (or 'Q ⇐ P') to express that 'P implies Q', and 'P ⇔ Q' or 'P iff Q' to express that 'P is equivalent to Q'. The symbols '∀' (for all), '∃' (there exists), '∄' (there does not exist) are also used. We use the symbol 'x :=' to mean that 'x is by definition equal to'.

1.2 Some combinatorics

1.2.1 Permutations and combinations

Let A = {a1, a2, . . . , an} be a set of n elements. A permutation of A is an arrangement of its n elements in a certain order. For instance, if A = {1, 2, 3}, the possible permutations of A are 123, 132, 213, 231, 312, 321. Alternatively a permutation of A is often defined as a bijection π : A → A, because there is a one-to-one correspondence between the set of ordered arrangements of its n elements and the set of bijections A → A. For instance, in the above example, the permutation 213 can be associated with the bijection π : 1 ↦ π(1) = 2, 2 ↦ π(2) = 1, 3 ↦ π(3) = 3. Thus in the sequel we use the term 'permutation' in either sense without distinction.
The number of permutations of a set of n elements is given by n! = n(n − 1) · · · 3 · 2 · 1. A combination of (a set of) n elements of order r (0 ≤ r ≤ n) is a subset of r elements. The number of subsets of r elements of a set of n elements is denoted by C_n^r, and is given by

C_n^r = n! / ((n − r)! r!),   (1)

or, with the usual binomial-coefficient notation, C_n^r = (n choose r). Note that from (1) it follows immediately that for 0 ≤ r ≤ n:

C_n^r = C_n^(n−r),   (2)

and, as the total number of subsets of a set with n elements is given by 2^n, we have

C_n^0 + C_n^1 + · · · + C_n^(n−1) + C_n^n = 2^n.   (3)

These two equalities enable us to derive others that will be useful later. If n is odd, i.e. n = 2r + 1 for some positive integer r, by (2) and (3) we have

C_n^0 + · · · + C_n^r = C_n^(r+1) + · · · + C_n^n = 2^(n−1).   (4)

If n is even, i.e. n = 2r for some integer r, by (2) we have

C_n^0 + C_n^1 + · · · + C_n^r = C_n^r + C_n^(r+1) + · · · + C_n^n,   (5)

and by (3) we have

C_n^r + C_n^(r+1) + · · · + C_n^n = 2^(n−1) + (1/2) C_n^r,   (6)

and

C_n^(r+1) + · · · + C_n^n = 2^(n−1) − (1/2) C_n^r.

1.2.2 Some useful approximations

It will sometimes be necessary to calculate expressions involving permutations or combinations. As the number of elements increases these calculations become increasingly laborious, and in some cases it will be useful to have well-known formulae that provide sufficiently good approximations. Some of them are based on Stirling's formula, which provides a good approximation of n! for big enough n, given by³

n! ≈ n^n e^(−n) √(2πn).   (7)

For instance, we will later need to calculate the number of subsets of size n/2 of a set of n elements for a given even number n, which by (1) is given by

C_n^(n/2) = n! / ((n − n/2)! (n/2)!) = n! / ((n/2)! (n/2)!).

If n is big enough this can be approximated by using (7), yielding

C_n^(n/2) ≈ 2^n √(2/(πn)).

1.3 Voting rules

This book is mainly concerned with collective decision-making. This means situations in which a set of agents make decisions by means of a decision procedure.
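As a numerical sanity check (not from the book), the combinatorial identities and Stirling's approximation of Section 1.2 can be verified with Python's standard library; the helper name `stirling` is our own:

```python
import math

def stirling(n):
    # Stirling's formula (7): n! ≈ n^n e^(-n) sqrt(2*pi*n)
    return n ** n * math.exp(-n) * math.sqrt(2 * math.pi * n)

n = 10
# (2) symmetry, and (3) the binomial coefficients sum to 2^n
assert all(math.comb(n, r) == math.comb(n, n - r) for r in range(n + 1))
assert sum(math.comb(n, r) for r in range(n + 1)) == 2 ** n

# (4) for odd n = 2r + 1: C_n^0 + ... + C_n^r = 2^(n-1)
n, r = 11, 5
assert sum(math.comb(n, k) for k in range(r + 1)) == 2 ** (n - 1)

# (6) for even n = 2r: C_n^r + ... + C_n^n = 2^(n-1) + C_n^r / 2
n, r = 10, 5
assert sum(math.comb(n, k) for k in range(r, n + 1)) == 2 ** (n - 1) + math.comb(n, r) // 2

# relative error of (7) at n = 100, the case quoted in footnote 3
rel_err = (math.factorial(100) - stirling(100)) / math.factorial(100)
print(round(rel_err, 5))
```

For n = 100 the printed relative error matches the 0.00083 figure quoted in the footnote.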
By a decision procedure we mean a well-defined rule for making collective choices based on individual choices. A decision procedure is thus a rather general notion that may include a wide variety of ways of mapping the profiles of individual actions to group decisions. By 'individual actions' we mean votes in a general sense, as specified by the decision procedure itself: for instance marking a candidate in a list, marking the approved alternatives within a set in the 'approval voting' system or assigning points to them according to certain constraints, or just voting 'yes' or 'no' on a proposal. In this book we focus our attention on dichotomous voting rules that specify a collective 'yes' (acceptance) or a collective 'no' (rejection) for each possible profile of 'yes' or 'no' by individuals. We assume that voters are never indifferent between the two outcomes, and abstention is not possible. In this section we introduce the notation and terminology related to such voting rules that will be used throughout the book.

³ It will be used for n ranging from the order of hundreds of thousands to the order of millions. But even for n = 100 the quotient (n! − n^n e^(−n) √(2πn)) / n! is 0.00083.

1.3.1 Dichotomous voting rules

Throughout this book a voting rule is a well-specified procedure for making binary decisions (i.e. acceptance or rejection) by the vote of a committee of any kind with a certain number of members. That is, a voting rule associates a final outcome with any possible vote configuration (or result of a vote). If n is the number of seats in the committee, let us label them by 1, 2, . . . , n, and let N = {1, 2, . . . , n}. The same labels are also used to represent the voters that occupy the corresponding seats. The precise result of a particular vote is specified by a vote configuration: a list indicating the vote cast by the voter occupying each seat.
As we assume that voters are never indifferent between the two options and abstention is not possible, there are 2^n possible configurations of votes, and each configuration can be represented by the set of labels of the 'yes'-voters' seats. So, for each S ⊆ N, we refer to the result of a vote where the voters in S vote 'yes' while the voters in N \ S vote 'no' as 'vote configuration S'. The number of 'yes'-voters in the configuration S is denoted by s. In these conditions, an N-voting rule can be specified and represented by the set W_N of vote configurations that would lead to a final 'yes' (the others would lead to a final 'no'):

W_N = {S : S leads to a final 'yes'}.

A vote configuration S is winning if S ∈ W_N, and losing if S ∉ W_N. When N is obvious from the context we will omit the 'N' in 'N-voting rule' and write W instead of W_N. In order to exclude unreasonable and inconsistent voting rules the following conditions are assumed for the set W ⊆ 2^N:

1. The unanimous 'yes' leads to a final 'yes': N ∈ W.
2. The unanimous 'no' leads to a final 'no': ∅ ∉ W.
3. If a vote configuration is winning, then any other configuration with a larger set of 'yes'-voters is also winning: if S ∈ W, then T ∈ W for any T containing S.
4. The possibility of a proposal and its negation both being accepted should be prevented. Namely, if a proposal is supported by S and its negation by N \ S, these two vote configurations cannot both be winning⁴. That is, if S ∈ W then N \ S ∉ W.

Definition 1 A voting rule of n seats or an N-voting rule is a set W ⊆ 2^N that satisfies the above four conditions. The set of all voting rules with set of seats N is denoted VR_N.

We use the term improper rule to describe a set W ⊆ 2^N that only satisfies the first three properties. A minimal winning vote configuration is a winning configuration that does not include any other winning configuration.
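For small committees, the four conditions of Definition 1 can be checked by brute force. A minimal sketch under our own (not the book's) encoding of a rule W as a set of frozensets of seat labels:

```python
from itertools import combinations

def all_configs(N):
    # all 2^n vote configurations, as frozensets of 'yes'-seats
    return [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]

def is_voting_rule(W, N):
    """Check the four conditions of Definition 1 for a set W of configurations."""
    seats = frozenset(N)
    if seats not in W or frozenset() in W:          # conditions 1 and 2
        return False
    for S in W:                                     # condition 3: monotonicity
        if any(S <= T and T not in W for T in all_configs(N)):
            return False
    return all(seats - S not in W for S in W)       # condition 4: properness

# simple majority on N = {1, 2, 3}: winning iff at least two 'yes' votes
N = [1, 2, 3]
W_SM = {S for S in all_configs(N) if len(S) > len(N) / 2}
assert is_voting_rule(W_SM, N)
# adding the empty configuration violates condition 2: an improper set
assert not is_voting_rule(W_SM | {frozenset()}, N)
```

Brute force over all 2^n configurations is only viable for small n, but it suffices to experiment with the definitions.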
An equivalent way to specify and represent a voting rule is by listing the minimal winning configurations, denoted M(W). Given an N-voting rule W, and a permutation π : N → N, we denote by πW the voting rule πW = {π(S) : S ∈ W}. Some special seats must be mentioned. A veto seat can prevent the passage of a proposal: if the vote from a veto seat is 'no' then the proposal is rejected. That is, i is a veto seat in W if

i ∉ S ⇒ S ∉ W (or equivalently: S ∈ W ⇒ i ∈ S).

In other words, i's support is necessary for a proposal to be accepted. The voter sitting in such a seat is referred to as a vetoer. A null seat is a seat such that the vote cast by the voter occupying it never makes a difference. In other words the votes of the other voters determine the outcome irrespective of this voter's vote. That is, i is a null seat in W if

S ∈ W ⇔ S \ i ∈ W.

The voter sitting in such a seat is referred to as a null voter.

⁴ In some cases no inconsistency arises from dropping the last condition. For instance, if the rule is used to include issues on the agenda, all proposals submitted to the vote have the form 'shall we put A on the agenda?' and cannot be 'shall we not put A on the agenda?'.

In voting rule W, seat j weakly dominates seat i (denoted j ≽_W i) if for any vote configuration S such that i, j ∉ S,

S ∪ i ∈ W ⇒ S ∪ j ∈ W.

If seat j weakly dominates seat i but seat i does not weakly dominate seat j, we say that seat j dominates seat i (j ≻_W i). Note that the domination relationship is not complete: seats cannot always be compared. Seats i and j are symmetric if j ≽_W i and i ≽_W j. In other words, in the environment specified by the voting rule these seats are interchangeable. A voting rule is symmetric if any two seats are symmetric. Symmetric rules are also called anonymous rules because a voting rule W is symmetric if and only if (see Exercise 4) for any permutation π : N → N, πW = W.
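Veto seats, null seats and weak dominance can be tested the same brute-force way; the predicates below and the set-of-frozensets encoding are our own illustration, not the book's notation:

```python
from itertools import combinations

def all_configs(N):
    return [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]

def is_veto_seat(W, i):
    # i is a veto seat iff every winning configuration contains i
    return all(i in S for S in W)

def is_null_seat(W, N, i):
    # i is null iff S ∈ W ⇔ S \ i ∈ W for every configuration S
    return all((S | {i} in W) == (S - {i} in W) for S in all_configs(N))

def weakly_dominates(W, N, j, i):
    # j weakly dominates i: whenever i, j ∉ S and S ∪ i wins, S ∪ j wins too
    return all(S | {j} in W for S in all_configs(N)
               if i not in S and j not in S and S | {i} in W)

# seat 1's dictatorship on N = {1, 2, 3}: W^1 = {S : 1 ∈ S}
N = [1, 2, 3]
W1 = {S for S in all_configs(N) if 1 in S}
assert is_veto_seat(W1, 1)
assert is_null_seat(W1, N, 2) and is_null_seat(W1, N, 3)
assert weakly_dominates(W1, N, 1, 2) and not weakly_dominates(W1, N, 2, 1)
```

As expected, in a dictatorship the dictator is a vetoer who dominates every other seat, and every other seat is null.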
1.3.2 Some particular voting rules

A few special voting rules are specified in this section. In a dictatorship, the final outcome always coincides with the vote cast by one specific seat: the dictator's seat. Denoting seat i's dictatorship by W^i we have

W^i = {S ⊆ N : i ∈ S}.

In a T-oligarchy or T-unanimity rule, only the votes from a set T of seats count: the final result is 'yes' if and only if all voters from the 'oligarchy' are in favour of the proposal. Denoting this rule by W^T, we can write

W^T = {S ⊆ N : S ⊇ T}.

In the unanimity rule (denoted W^N) a proposal is accepted only with unanimous support, that is

W^N = {N}.

In the simple majority, a proposal is passed if the number of votes in favour of the proposal is strictly greater than half the total number of votes. That is, denoting the simple majority rule by W^SM,

W^SM = {S : s > n/2}.

Simple majority and unanimity are special cases of q-majority rules, where a proposal is passed if the proportion of votes in favour of the proposal is greater than q. That is, denoting the q-majority rule by W^qM,

W^qM = {S : s/n > q}.

In order to prevent improper rules we require 1/2 ≤ q < 1. Note that in all q-majority rules all seats are symmetric. A weighted majority rule is specified by a system of positive weights w = (w1, . . . , wn), and a quota Q > 0, so that the final result is 'yes' if the sum of the weights in favour of the proposal is larger than the quota. Denoting this rule by W^(w,Q), we have

W^(w,Q) = {S ⊆ N : Σ_{i∈S} w_i > Q}.

Alternatively the quota can be expressed as a proportion q = Q / Σ_{j∈N} w_j of the total weight, and the rule denoted W^(w,q). That is,

W^(w,q) = {S ⊆ N : (Σ_{i∈S} w_i) / (Σ_{j∈N} w_j) > q}.

Again the condition 1/2 ≤ q < 1 prevents improper rules. As the reader can easily check, all the above examples can be specified as weighted majority rules. Nevertheless, not all voting rules can be so represented⁵ (see Exercises 5 and 6).
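Weighted majority rules lend themselves to the same small-n enumeration; a sketch (again our own encoding) recovering a q-majority and unanimity as equal-weight special cases:

```python
from itertools import combinations

def weighted_rule(weights, Q):
    """W^(w,Q): S wins iff the total 'yes' weight strictly exceeds the quota Q.
    weights is a dict mapping seat label to its positive weight."""
    N = list(weights)
    return {frozenset(c)
            for r in range(len(N) + 1)
            for c in combinations(N, r)
            if sum(weights[i] for i in c) > Q}

# q-majority with q = 1/2 on n = 5 seats: equal weights and quota q * n,
# so winning means at least 3 'yes' votes
W_half = weighted_rule({i: 1 for i in range(1, 6)}, 0.5 * 5)
assert all(len(S) >= 3 for S in W_half) and any(len(S) == 3 for S in W_half)

# unanimity W^N: equal weights and quota n - 1, so only N itself wins
W_N = weighted_rule({i: 1 for i in range(1, 6)}, 4)
assert W_N == {frozenset(range(1, 6))}
```

The strict inequality in the definition is what makes the quota Q = q * n (rather than q * n + 1) the right choice here.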
A double weighted majority rule is specified by a double system of positive weights w = (w1, . . . , wn) and w′ = (w′1, . . . , w′n), and a double quota Q and Q′, with each quota corresponding to a system of weights. The final result is 'yes' if each sum of the weights in favour of the proposal is larger than its corresponding quota. Denoting this rule by W^((w,Q),(w′,Q′)), we have

W^((w,Q),(w′,Q′)) = {S ⊆ N : Σ_{i∈S} w_i > Q and Σ_{i∈S} w′_i > Q′}.

⁵ Taylor and Zwicker [85, 86] give necessary and sufficient conditions for a voting rule to be representable as a weighted majority rule.

Note that W^((w,Q),(w′,Q′)) = W^(w,Q) ∩ W^(w′,Q′). In fact two general ways of combining voting rules are intersection and union under certain conditions. Namely, if W and W′ are voting rules their intersection W ∩ W′ is also a proper voting rule, while their union W ∪ W′ is sure to inherit conditions 1, 2, and 3 in 1.3.1, but not necessarily 4. A different way of combining voting rules is by composition. Consider the following two-stage indirect voting procedure for a set M of m voters. Voters are not asked to vote directly but to elect representatives who report their preferences in the following way. The voters are divided into n disjoint groups (not necessarily of equal sizes): M = M_1 ∪ · · · ∪ M_n. For each vote, the proposal is submitted to a vote within each group, and it is assumed that W_{M_j} is the voting rule used in group M_j to set the group's position. In a second stage, each representative reports his/her group's final decision ('yes' or 'no') as prescribed by the vote and the group's voting rule, and the decisions of the different groups are aggregated by means of W_N, where N = {1, 2, . . . , n}. The whole voting procedure is equivalent to an M-voting rule, denoted W_N[W_{M_1}, . . . , W_{M_n}], which can be formally described as follows. For each j ∈ N, and each S ⊆ M, denote S_j = S ∩ M_j and C(S) := {j ∈ N : S_j ∈ W_{M_j}}.
Therefore, S_j is the set of voters in S that belong to group M_j, that is, those in M_j that vote 'yes', and C(S) is the set of representatives of groups in which the 'yes' won. Thus

W_M = W_N[W_{M_1}, . . . , W_{M_n}] = {S ⊆ M : C(S) ∈ W_N}.

1.4 Expected utility theory

1.4.1 Players, games and game theory

A game situation is one in which there exists interdependence between the decisions of two or more agents. That is, each agent has to choose an action or a sequence of actions, and whatever the level of information about the consequences of his/her acts or about the other agents, those consequences also depend on the other players' decisions. Game theory provides formal models, called games, for the analysis of such situations. There is a great variety of such models, depending on the amount of detail that is incorporated, the environment in which the players make decisions and the purpose of the model. In game-theoretic models in which players are rational agents⁶, as is the case here, an important ingredient is the players' assessments or preferences concerning the possible outcomes of the game. This element should be factored into any analysis of what may be considered as the most advisable action for a player or what outcome can be expected as a result of rational interaction among the players. This is formally incorporated into the model by means of the following assumption: each agent can express his/her preferences over the feasible outcomes by means of a binary relation (complete and transitive, see next paragraph), so that he/she acts accordingly, trying to obtain the most preferred one of the feasible alternatives.

1.4.2 Preferences and utility

Let A denote a set of feasible alternatives. A binary relation ⪯ over A is complete if for all x, y ∈ A, either x ⪯ y or y ⪯ x; and it is said to be transitive if x ⪯ z whenever x ⪯ y and y ⪯ z.
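Returning to the composite rule W_N[W_{M_1}, . . . , W_{M_n}] of Section 1.3.2, a small sketch (our own encoding, with simple majority both inside each group and among the representatives, on an invented nine-voter example) also shows how the indirect outcome can diverge from the direct popular vote:

```python
from itertools import combinations

def all_configs(M):
    return [frozenset(c) for r in range(len(M) + 1) for c in combinations(M, r)]

def majority(M):
    # simple majority rule on seat set M: winning iff s > m/2
    return {S for S in all_configs(M) if len(S) > len(M) / 2}

def composite(WN, groups):
    """W_N[W_M1, ..., W_Mn]: S ⊆ M wins iff C(S), the set of groups whose
    internal (here: majority) rule accepts, is winning in WN."""
    M = [i for Mj in groups.values() for i in Mj]
    rules = {j: majority(Mj) for j, Mj in groups.items()}
    def C(S):
        return frozenset(j for j, Mj in groups.items()
                         if S & frozenset(Mj) in rules[j])
    return {S for S in all_configs(M) if C(S) in WN}

# three groups of three citizens each; majority within and among groups
groups = {'g1': [1, 2, 3], 'g2': [4, 5, 6], 'g3': [7, 8, 9]}
WM = composite(majority(list(groups)), groups)

# four well-placed 'yes' votes carry two groups and hence the committee ...
assert frozenset({1, 2, 4, 5}) in WM
# ... while five badly-placed 'yes' votes carry only one group and lose
assert frozenset({1, 2, 3, 4, 7}) not in WM
```

The two assertions illustrate why a two-stage procedure is a genuinely different M-voting rule from direct simple majority over M.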
In a game, that is, in a game-theoretic model of a game situation, in which A represents the set of possible outcomes or results, a complete and transitive binary relation ⪯ over A associated with a player is interpreted as the expression of his/her preferences over A. If x ⪯ y, we say that the player weakly prefers y to x. If x ⪯ y and y ⪯ x, we say that the player is indifferent between x and y, and we write x ∼ y; while if x ⪯ y and not y ⪯ x we say that the player (strictly) prefers y to x, and write x ≺ y. We write indistinctly x ⪯ y or y ⪰ x, and x ≺ y or y ≻ x.

⁶ Originally only rational interaction was considered as the object of game theory (in fact 'rational interaction' was the name suggested by R. Selten as a more adequate alternative to 'game theory', a term already consecrated by use). Nevertheless, some of the most successful applications of notions and results of game theory have been to 'irrational interaction', as is the case of the evolution of species.

In general it is more convenient to work with functions than with binary relations, and one way of doing so is by scoring alternatives by numbers according to their 'utility', so that more preferred alternatives are given more points. Formally, a utility function over a set of alternatives A is a map u : A → R which is interpreted as a representation of the preferences ⪯_u given by (for all x, y ∈ A)

x ⪯_u y if and only if u(x) ≤ u(y).

We say that u represents ⪯_u. Obviously any map from A to R can be interpreted as representing a preference relation on A, and it can immediately be checked that, whatever u : A → R, the associated binary relation ⪯_u is complete and transitive. If A is finite the converse is also true, that is to say, any complete and transitive binary relation over a finite set can be represented by a utility function.
But this correspondence between preferences and utility functions is not one-to-one: if a binary relation is representable by a utility function then there are infinitely many utility functions representing it. For instance, if ≾ = ≾u for some u : A → R, then ≾ = ≾ϕ∘u for every strictly increasing map ϕ : R → R (that is, such that x ≤ y ⇔ ϕ(x) ≤ ϕ(y)).

Example 1.1: Let A = [0, M] be a continuum of quantities of a good (money, land, gold, etc.), and let ≾ be the preference over quantities (assuming non-satiety) given by 'the more the better'. This can be represented by the map u1(x) = x. But, for instance, taking u2(x) = 2x + 100, u3(x) = x², or u4(x) = e^x, we obtain alternative representations of the same preference relation.

As we consider utility functions as representations of preferences, that is, only the ranking provided by the utility function matters, we say that two utility functions u1, u2 : A → R are A-equivalent, and we write u1 ≈A u2, if they represent the same preferences on A; that is, if for all x, y ∈ A, u1(x) ≤ u1(y) ⇔ u2(x) ≤ u2(y).

1.4.3 Lotteries and expected utility

Let A denote a set of alternatives. A lottery over A is a random mixture of a finite number of alternatives in A. A lottery is thus a random experiment whose possible outcomes are a finite number of alternatives in A, each of them occurring with a certain probability. As from a mathematical point of view the relevant information is encapsulated in these probabilities, a lottery will for all effects be identified with its associated probability measure. Any such probability measure can be represented by (and identified with) a map l : A → R+ whose support, i.e. the set spt(l) := {x ∈ A : l(x) > 0}, is finite, and such that ∑_{x∈spt(l)} l(x) = 1, associating with each alternative x its probability l(x). The set of all such maps or lotteries is denoted by L(A).
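In this representation a lottery is just a finitely-supported probability map, which is easy to validate mechanically. A quick sketch (the dictionary encoding is our own, not the book's notation):

```python
def is_lottery(l, tol=1e-9):
    """A map from alternatives to probabilities represents a lottery in L(A)
    if every probability in its support is positive and they sum to 1."""
    return all(p > 0 for p in l.values()) and abs(sum(l.values()) - 1) < tol

degenerate = {"x": 1.0}             # the alternative x viewed as a lottery
binary = {"x": 0.25, "y": 0.75}     # the binary lottery 0.25x ⊕ 0.75y
print(is_lottery(degenerate), is_lottery(binary))   # True True
print(is_lottery({"x": 0.5, "y": 0.4}))             # False: mass sums to 0.9
```

Only the support is listed, which mirrors the finite-support requirement in the definition above.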
We identify each alternative x in A for all effects with the degenerate lottery in L(A) whose support is {x}, that is to say, with the lottery that gives x with probability 1. Given x, y ∈ A and µ (0 ≤ µ ≤ 1), we denote indistinctly by µx ⊕ (1 − µ)y or by (1 − µ)y ⊕ µx the binary lottery l such that l(x) = µ and l(y) = 1 − µ.

Given a function u : A → R, the expected utility function associated with u, denoted ū, is the map ū : L(A) → R given by^7

ū(l) := E[u] = ∑_{x∈A} l(x)u(x).

Thus ū associates with each lottery the expected utility (as measured by u) of its outcome. It is important to remark that A-equivalent utility functions (in the sense specified in Section 1.4.2) may have associated expected utility functions that are not L(A)-equivalent. In other words: u1 ≈A u2 does not imply ū1 ≈L(A) ū2.

Example 1.2: Let A = {a, b, c}, and let ≾ be the preference such that a ≻ b ≻ c. Among the infinitely many utility functions that represent this binary relation consider the following three:

u1(a) = 10, u1(b) = 5, u1(c) = 0;
u2(a) = 10, u2(b) = 8, u2(c) = 0;
u3(a) = 10, u3(b) = 2, u3(c) = 0.

Obviously ≾ = ≾u1 = ≾u2 = ≾u3. But lotteries in L(A) are ranked differently by the associated expected utility functions. For instance, let l be the lottery that gives each alternative with probability 1/3. If we compare l with alternative b according to each expected utility function we obtain

ū1(b) = 5, ū1(l) = (1/3)(10 + 5 + 0) = 5,
ū2(b) = 8, ū2(l) = (1/3)(10 + 8 + 0) = 6,
ū3(b) = 2, ū3(l) = (1/3)(10 + 2 + 0) = 4.

Thus b ∼ū1 l, b ≻ū2 l, b ≺ū3 l. Therefore u1 ≈A u2 ≈A u3, but no two of ū1, ū2, ū3 are equivalent in L(A).

7 As the support of any lottery l is assumed to be finite, for the sake of brevity we write ∑_{x∈A} instead of the more precise ∑_{x∈spt(l)}.

1.4.4 Expected utility preferences

In many game situations randomization is a natural ingredient.
In some cases it is part of the objective description of the rules of the game (e.g. the initial shuffling in card games, the throw of a die, etc.). In other cases there are 'mixed strategies' among the feasible actions of a player, that is, random choices of action, each with a certain probability. Also, when the players' information is not complete, a probability distribution over the 'states of the world' may represent the (incomplete) information of a player about the environment. Thus the players' preferences should be extended so as to encompass and rank random outcomes. Formally, if A denotes a set of deterministic alternatives, we have the following model of rational behaviour in the face of risk.

Definition 2 A binary relation ≾ on L(A) is a von Neumann–Morgenstern preference (or an expected utility preference) if there exists u : A → R such that ≾ū = ≾.

Thus, the rational behaviour of a player with such preferences can be described as maximizing the expected utility ū for a certain u. In view of Example 1.2, given a preference over a set of deterministic alternatives, this model does not prescribe a particular ranking of the lotteries over those alternatives. In other words, given a preference relation on A, there is an infinite number of different von Neumann–Morgenstern (vNM) preferences over L(A) consistent with that relation. In fact, Definition 2 only postulates some form of consistency in the way of ranking lotteries. In order to see this more clearly we need to make explicit, in terms of preferences, what Definition 2 amounts to assuming. This is what the following theorem does. In order to simplify the proof we assume that in the set A there are most and least preferred alternatives. Thus we assume that there exist two alternatives a, b ∈ A such that b ≾ x ≾ a for all x ∈ A; and in order to avoid a trivial case we also assume that a ≻ b. Then we have the following characterization:

Theorem 3 A binary relation ≾
on L(A) is a von Neumann–Morgenstern preference on L(A) if and only if the following conditions hold:

(i) ≾ is complete and transitive.
(ii) For all x ∈ A there exists µx ∈ [0, 1] such that
  (ii-1) x ∼ µx a ⊕ (1 − µx)b, and
  (ii-2) for all l ∈ L(A), l ∼ (∑_{x∈A} l(x)µx) a ⊕ (1 − ∑_{x∈A} l(x)µx) b.
(iii) For all µ, µ′ ∈ [0, 1], µa ⊕ (1 − µ)b ≾ µ′a ⊕ (1 − µ′)b ⇔ µ ≤ µ′.

Proof. (Necessity (⇒)): Let ≾ be a von Neumann–Morgenstern preference relation on L(A). This means (Definition 2) that ≾ = ≾ū for some u : A → R.

(i) As any binary relation representable by a utility function (ū in this case), ≾ is necessarily complete and transitive.

(ii) Let x ∈ A. As b ≾ x ≾ a, and ≾ = ≾ū, then u(b) ≤ u(x) ≤ u(a). Therefore there exists µx ∈ [0, 1] such that u(x) = µx u(a) + (1 − µx)u(b). Then we have

ū(µx a ⊕ (1 − µx)b) = µx u(a) + (1 − µx)u(b) = u(x) = ū(x).

Thus x ∼ū µx a ⊕ (1 − µx)b, and we have (ii-1). Now let l ∈ L(A), and let us denote by µx the number that satisfies (ii-1), whose existence has just been proved for each x ∈ A. Then

ū(l) = ∑_{x∈A} l(x)u(x) = ∑_{x∈A} l(x)(µx u(a) + (1 − µx)u(b))
     = (∑_{x∈A} l(x)µx) u(a) + (1 − ∑_{x∈A} l(x)µx) u(b)
     = ū((∑_{x∈A} l(x)µx) a ⊕ (1 − ∑_{x∈A} l(x)µx) b).

Then, as ≾ = ≾ū, we have (ii-2).

(iii) Let µ, µ′ ∈ [0, 1]:

µa ⊕ (1 − µ)b ≾ µ′a ⊕ (1 − µ′)b ⇔ ū(µa ⊕ (1 − µ)b) ≤ ū(µ′a ⊕ (1 − µ′)b)
⇔ µu(a) + (1 − µ)u(b) ≤ µ′u(a) + (1 − µ′)u(b) ⇔ (µ − µ′)(u(a) − u(b)) ≤ 0,

which, as a ≻ b, is equivalent to saying that µ ≤ µ′.

(Sufficiency (⇐)): Let ≾ be a binary relation on L(A) satisfying conditions (i)–(iii). By (ii-1), for all x ∈ A there exists a number µx ∈ [0, 1] such that x ∼ µx a ⊕ (1 − µx)b. Note that, as a ≻ b, (i) and (iii) ensure that this µx is unique. Let u : A → R be the map defined by u(x) := µx s.t. x ∼ µx a ⊕ (1 − µx)b. Then, by (i) and (ii-2), l ≾ l′ if and only if

(∑_{x∈A} l(x)µx) a ⊕ (1 − ∑_{x∈A} l(x)µx) b ≾ (∑_{x∈A} l′(x)µx) a ⊕ (1 − ∑_{x∈A} l′(x)µx) b,

which by (iii) is equivalent to saying that ∑_{x∈A} l(x)µx ≤ ∑_{x∈A} l′(x)µx, which in turn is equivalent to ∑_{x∈A} l(x)u(x) ≤ ∑_{x∈A} l′(x)u(x).
In other words, if and only if ū(l) ≤ ū(l′). Thus we have ≾ = ≾ū.

Thus Theorem 3 characterizes vNM preferences, establishing three necessary and sufficient conditions for a binary relation on L(A) to be within this class. Let us examine these conditions one by one. Condition (ii-1) imposes that each deterministic alternative be indifferent to a certain lottery between the best and the worst alternatives. Condition (ii-2) amounts to requiring 'respect' for, or consistency with, basic probability calculus. Specifically, it states that indifference should prevail between any lottery and the binary one that results from replacing each alternative x in its support by its binary equivalent postulated in (ii-1). Condition (iii) seems very plausible: it just requires that, of two lotteries that can only yield the best and the worst alternatives, the one giving a higher probability to the best should be preferred. Finally there is condition (i), which requires the most obviously necessary conditions for a preference representable by a utility function: completeness and transitivity of preferences. Nevertheless, this is perhaps the least plausible condition if one seeks to interpret the expected utility model in positive terms, that is, as a prediction of rational 'spontaneous' behaviour: it is not credible that an individual facing a choice between an arbitrary pair of lotteries, even one with clear preferences on the set of deterministic alternatives, should have the sensitivity to feel an unequivocal, immediate preference for one or the other (think of lotteries involving several, possibly different, alternatives), still less that he/she will avoid inconsistencies with some of the other conditions after a few choices. In fact, experiments show that this is not the case in general^8.

8 For a classic example (Allais's paradox [1]) in which the expected utility model is often contradicted, see Exercise 8.
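The construction used in the sufficiency proof (and again in Corollary 4 below) is easy to mimic numerically: fix a µx for each alternative, reduce any lottery to its binary equivalent between the best alternative a and the worst b via condition (ii-2), and compare lotteries by their weight on a, as condition (iii) prescribes. A sketch, where the µx values are illustrative assumptions:

```python
# Illustrative values µx for alternatives between the best (a) and worst (b):
mu = {"a": 1.0, "x": 0.6, "y": 0.3, "b": 0.0}

def weight_on_best(lottery):
    """Weight on a of the binary lottery equivalent to l by condition (ii-2):
    the sum of l(x) * µx over the support of l."""
    return sum(p * mu[x] for x, p in lottery.items())

l1 = {"x": 0.5, "y": 0.5}
l2 = {"a": 0.25, "b": 0.75}
# By condition (iii), l1 is preferred to l2 exactly when its weight on a is
# larger (here roughly 0.45 against 0.25):
print(weight_on_best(l1) > weight_on_best(l2))  # True
```

Taking u(x) := µx, as in the proof, this comparison is exactly the comparison of expected utilities.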
But, inverting the point of view, it is not easy to find arguments against the rationality of these conditions, which can be used 'normatively' to guide a consistent way of choosing. In view of the proof of sufficiency in Theorem 3, an agent who finds these conditions reasonable need only specify for each deterministic alternative x the number µx such that x ∼ µx a ⊕ (1 − µx)b. Then, taking u(x) := µx, the expected utility function ū would represent a vNM preference consistent with them. Thus we have the following corollary.

Corollary 4 Let ≾ be a von Neumann–Morgenstern or expected utility preference on L(A). If there are a best and a worst alternative in A, i.e. two alternatives a, b ∈ A such that a ≻ b and a ≿ x ≿ b for all x ∈ A, then ≾ = ≾ū for the utility function u : A → R defined by u(x) := µx such that x ∼ µx a ⊕ (1 − µx)b.

In short, conditions (i)–(iii) provide an acceptable model of rational behaviour in the face of risk. Later, when we want to incorporate players' preferences into the model of a voting situation, we will do so by assuming expected utility preferences.

As shown by Example 1.2, equivalent utility functions may have non-equivalent associated expected utility functions. The following theorem establishes the relation that must exist between two utility functions for their expected utility functions to be equivalent. We omit the proof, which is an easy exercise.

Theorem 5 Two maps u, u′ : A → R have equivalent associated expected utility functions, i.e. such that ≾ū = ≾ū′, if and only if there exist α ∈ R++ and β ∈ R such that u′ = αu + β.

Remarks. (i) The assumption that there are both most and least preferred alternatives is not crucial. It has been made only to simplify the proof of Theorem 3, but the results remain valid without this assumption. Only in Theorem 3 do conditions (ii) and (iii) need to be required for any two alternatives a, b such that b ≾ a, condition (ii-1) for any x such that b ≾ x ≾
a, and (ii-2) for any lottery with support between a and b.

(ii) If A is not finite, other probability measures more complex than lotteries can be considered. This would involve greater technical complexity, and implicitly more technically sophisticated players (they, like the reader, would need to be familiar with measure theory). This is not necessary for our purpose.

(iii) It is possible to give different sets of necessary and sufficient conditions for Theorem 3 that make the assumption of vNM preferences appear less demanding^9. We prefer the simplicity of the conditions in Theorem 3, given that in any case they are both necessary and sufficient.

1.5 Some basic game theory notions

As mentioned in Section 1.4.1, game theory provides formal models, called games, for the analysis of what we have called game situations, in which the outcome is the result of the decisions made by the interacting agents. More than sixty years after von Neumann and Morgenstern's foundational book, the ramifications of game theory and the variety of models proposed are enormous. It is beyond the scope and possibilities of this book to overview this huge field even summarily, but in Chapters 3 and 4 some models and results from game theory are needed. This section is aimed at readers not familiar with game theory, and seeks to provide the minimal background required. To that end we present a few basic game-theoretic notions. To make the section accessible to as many readers as possible and to keep the space devoted to it within reasonable limits, we avoid formal details as far as possible and concentrate on the main ideas. Readers interested in going deeper into any of them should turn to the specific literature^10.

The goal of the formal models analysed by game theory is to contribute to a better understanding of game situations. If we go a step further and ask what a better understanding means, at least two answers can be given.
A phenomenon can be considered thoroughly understood if we are able to 'guess' or better predict the outcome (as meteorology seeks to do with the weather). This is a positive goal, or a positive sort of knowledge. In other cases attempts can be made to give well-founded advice as to the best way to proceed in a given situation to achieve a given goal. This may be the case if an analyst uses game theory to found a recommendation about the best course of action (for the interests of whoever asks for advice). In this case the goal is normative, using the term without necessarily implying a moral or ethical connotation. There are still cases in which a game-theoretic recommendation can be interpreted in normative terms in a sense in which a notion of fairness is explicit or implicit. Whenever human behaviour is involved, as is the case in game theory, the distinction is often confusing, and sometimes the two points of view are complementary.

9 See for instance [29].
10 In recent years a number of books on game theory have been written, some of them excellent. Three in particular must be mentioned: Binmore's [13], Osborne and Rubinstein's [66], and Osborne's [65].

1.5.1 Equilibrium

Either point of view can lead to the notion of Nash equilibrium [61]. Assume that in a given game situation theory recommends or predicts a certain action for each of the players involved. Unless each of those actions is the best response to the other players' recommended/predicted actions, the recommendation/prediction is self-contradictory: if it were not so, at least one player would have an incentive to act otherwise, thus breaking the recommendation/prediction. This takes us to the notion of 'equilibrium'. A profile of actions, i.e. one action for each player, is a Nash equilibrium if each player's action is the best for that player given the other players' actions.
Only if this necessary condition is satisfied will no player regret following the recommendation if all the others do, and will knowledge of the predicted behaviour of the others not cause any player to deviate.

Example 1.3: (Prisoner's dilemma) Two individuals face a symmetric game situation. Each has two feasible actions: cooperate (C) or defect (D), and the preferences of the players are represented by the utility functions given in the following table, where player 1 chooses the row and player 2 the column:

        C     D
C    5, 5  0, 6
D    6, 0  1, 1

If, for instance, player 1 chooses C and player 2 chooses D, the utility 'payoffs' are 0 for player 1 and 6 for player 2. In this case the only Nash equilibrium is the pair of actions (D, D). If players cannot communicate, or communication is possible but no possibility of enforcing agreements exists, (D, D) seems the most plausible outcome^11. It should be emphasized that equilibrium is not a panacea, but a logically necessary condition for a consistent theory of rational behaviour in the general terms formulated.

Example 1.4: (Battle of the sexes) Two individuals have two strategies each: going to the cinema (C) and going to the theatre (T). Both would rather go together to either place than alone, but player 1 prefers the cinema, and player 2 the theatre. It is assumed that going separately would completely spoil the evening. Thus their preferences about the four possible situations are represented by

        C     T
C    2, 1  0, 0
T    0, 0  1, 2

Note that there are two equilibria: (C, C) and (T, T), the situations in which the players go together to one place or the other, with each player preferring a different one. Given the entire symmetry of the situation there is no argument that can discriminate either of these two equilibria as superior in any sense to the other. This simple classic example shows the possible multiplicity of equilibria even in very simple situations.
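Both examples can be checked mechanically: a profile is a pure Nash equilibrium precisely when neither player gains by a unilateral deviation. A sketch using the payoff tables of Examples 1.3 and 1.4:

```python
from itertools import product

def pure_nash_equilibria(payoffs, rows, cols):
    """payoffs[(r, c)] = (payoff to player 1, payoff to player 2), with
    player 1 choosing the row r and player 2 the column c."""
    equilibria = []
    for r, c in product(rows, cols):
        # No profitable unilateral deviation for either player:
        best_for_1 = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        best_for_2 = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if best_for_1 and best_for_2:
            equilibria.append((r, c))
    return equilibria

prisoners_dilemma = {("C", "C"): (5, 5), ("C", "D"): (0, 6),
                     ("D", "C"): (6, 0), ("D", "D"): (1, 1)}
battle_of_sexes = {("C", "C"): (2, 1), ("C", "T"): (0, 0),
                   ("T", "C"): (0, 0), ("T", "T"): (1, 2)}

print(pure_nash_equilibria(prisoners_dilemma, "CD", "CD"))  # [('D', 'D')]
print(pure_nash_equilibria(battle_of_sexes, "CT", "CT"))    # [('C', 'C'), ('T', 'T')]
```

The output reproduces the unique equilibrium (D, D) of the dilemma and the two equilibria of the battle of the sexes.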
1.5.2 Cooperative and non-cooperative game theory

Nash established the distinction between cooperative and non-cooperative game theory, and the basic notions and methodological paradigms in each field: the notion of equilibrium [61] and the cooperative solution to the bargaining problem [60] respectively, along with what has later been called the 'Nash programme' to bridge them. The non-cooperative approach addresses game situations in which players may or may not subscribe to agreements, and proceeds by explicitly modelling the players' possible actions with some level of detail and trying to predict the equilibrium outcome under the given conditions. The cooperative approach addresses game situations in which players have the capacity to subscribe to agreements and means to enforce them, and proceeds by ignoring details and trying either to predict the reasonable outcome or to assess the players' expectations based on ideal rationality conditions that an agreement should satisfy. Alternatively, the cooperative approach may adopt a normative point of view, prescribing a good compromise based on suitable 'fairness' conditions. In cooperative game situations, where agreements are enforceable, both cooperative and non-cooperative approaches can be applied and, as suggested by Nash, both should be applied.

11 In this case, in addition to being a Nash equilibrium, (D, D) is a combination of dominant strategies: for both players strategy D is the best choice whatever the choice of the other.

Example 1.5: (Prisoner's dilemma in a cooperative context) Consider the situation in Example 1.3. If the players have to decide in a non-cooperative environment, i.e. when either no chance of communication exists or communication is possible but no possibility of enforcing agreements exists, (D, D) seems the most plausible outcome.
But now assume that the players are given the possibility of communicating and signing an agreement on the outcome, and that once signed the agreement will automatically be implemented. In this case it seems clear that the most plausible outcome is (C, C): it is better for both than (D, D), and either of them would surely refuse either of the other two alternatives (C, D) and (D, C). This simple example shows how the change of environment from non-cooperative to cooperative dramatically changes the expectations. Note also that the difference lies not in the players' preferences or their readiness to cooperate but in the environment.

1.5.3 Subgame perfect equilibrium

In Examples 1.3 and 1.4 each player has only one move or one choice to make: their choices of strategy are simultaneous and determine an outcome. Thus the situation can be directly described in strategic form by listing the actions or strategies available to each player and the utility that each outcome provides for each of them. But it is often the case that the players have to make a sequence of choices in a certain order that depends on the particular rules of the game and all the previous choices by all players. In this case the situation has to be described by a decision tree that incorporates all possible histories of the game for all possible sequences of decisions of the players.

[Figure 1.1. Battle of the sexes in sequential form: player 1 first chooses C or T; player 2, knowing that choice, then chooses C or T; the payoffs at the four terminal nodes are (2, 1), (0, 0), (0, 0) and (1, 2).]

Games represented in this way are called games in extensive form. In this context a pure strategy of a player should specify his choice in any conceivable situation in the game (or node in the tree representing the game) in which that player is the one who must make a choice for the game to continue.
Therefore a pure strategy profile, that is, a pure strategy for each player, completely determines the course of the game, the only degree of freedom left being due to possible random moves that the game may include (for instance, throwing a die). In this way, by listing all the available pure strategies of each player, a game in extensive form can be described in strategic form^12.

12 At least theoretically, because the number of pure strategies becomes astronomical for even relatively simple games.

Example 1.6: (Battle of the sexes in sequential form) Let us now consider the following variation of the game of Example 1.4. Assume that player 1 chooses first, and only then can player 2 choose where to go. This new (and entirely different) game situation can be represented in extensive form by the tree shown in Figure 1.1. In terms of pure strategies, player 1 has only two choices (C or T), but player 2 has four: CC (going to the cinema whatever the choice of 1), TT (going to the theatre whatever the choice of 1), CT (going with 1 whatever 1's choice), and TC (going alone whatever 1's choice). The following table summarizes the strategic form, with player 1 choosing the column and player 2 the row, and payoffs listed as (player 1, player 2):

          C     T
CC     2, 1  0, 0
CT     2, 1  1, 2
TC     0, 0  0, 0
TT     0, 0  1, 2

There are three Nash equilibria: (C, CC), (C, CT), and (T, TT). In general there may be a great many equilibria in pure strategies, i.e. pure strategy profiles such that each strategy is an optimal response to the others in the same profile. Nevertheless, sometimes it is possible to discriminate between them. To illustrate this, consider the three equilibria in Example 1.6. On closer examination their plausibility levels differ. Consider (T, TT). Player 1 cannot improve his situation because player 2 commits himself by the choice TT: he will go to the theatre whatever 1 does. But if player 1 happens to choose C, it would be irrational (i.e. against his preferences) on the part of player 2 to go to the theatre. Now consider (C, CC).
Again there is something similar in the plan of player 2: if player 1 happens to choose T, player 2 will regret not having played CT instead of CC. Thus in both equilibria one of the pure strategies has this undesirable property: there are situations (or nodes in the tree) that will never occur if both players follow the strategies that make up the equilibrium, but that would, if for whatever reason they were reached in the course of the game, cause some of the players to change their plans, or to regret their choices if such changes were not possible. The only equilibrium free from this problem is (C, CT). In order to provide a general formulation we need the notion of a subgame of a game in extensive form, which we state informally below. Take any non-terminal node in the tree that describes a game in extensive form, and consider 'cutting down' the tree exactly at this node, so that all that remains is the rest of the tree, describing all the possible continuations of the game from this node on. Note that this subtree in itself specifies another game in extensive form. Any game obtained in this way is called a subgame of the original game. Note also that any pure strategy profile of the original game (that is, an exhaustive plan for playing the game for each player) determines a pure strategy profile for every subgame. Thus we have the following definition.

Definition 6 A subgame perfect equilibrium is a strategy profile whose restriction to any subgame is also a Nash equilibrium.

As can easily be checked, the only subgame perfect equilibrium in Example 1.6 is (C, CT): player 1, who chooses first, chooses his preferred option, and player 2 follows 1's choice. An important branch of game theory deals with repeated games.
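The subgame perfect equilibrium of Example 1.6 can also be recovered by backward induction: solve player 2's two one-node subgames first, then let player 1 choose anticipating those replies. A sketch using the payoffs of the example:

```python
# Terminal payoffs (player 1, player 2) of the sequential battle of the sexes:
payoffs = {("C", "C"): (2, 1), ("C", "T"): (0, 0),
           ("T", "C"): (0, 0), ("T", "T"): (1, 2)}

# In each subgame, player 2 picks the reply maximizing his own payoff:
reply = {a1: max("CT", key=lambda a2: payoffs[(a1, a2)][1]) for a1 in "CT"}

# Player 1 then chooses anticipating those replies:
a1_star = max("CT", key=lambda a1: payoffs[(a1, reply[a1])][0])

print(reply)    # {'C': 'C', 'T': 'T'}  -- i.e. player 2's strategy CT
print(a1_star)  # C
print(payoffs[(a1_star, reply[a1_star])])  # (2, 1)
```

The computed profile is exactly (C, CT) with outcome (2, 1), confirming the check made by hand above.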
Take Examples 1.3 or 1.4 and assume that the same game is to be played again and again, and that at the end of each round the game recommences with probability r (0 < r < 1), and ends with probability 1 − r. Alternatively, it can be assumed that after each round the game recommences but the payoffs are reduced by a discount factor of r. Even in the first case, in which the probability of a play of infinite length is 0, it is very complex to specify a pure strategy, because it requires that a choice be specified at each round t (t = 1, 2, 3, . . .), which is in principle dependent on the ‘history’ of the game so far (i.e. the sequence of choices made by all the players so far). The implementation of such strategies entails some difficulties. For instance, it requires unlimited recall or storage capacity. There are different ways of implementing simpler strategies in these games, e.g. by using the limited capacity of finite automata. The simplest type of strategy in this context is what is called a stationary strategy, which consists of specifying the same choice for every round regardless of the history so far. A stationary subgame perfect equilibrium (SSPE) is a subgame perfect equilibrium in which all strategies are stationary. 1.5.4 Basic cooperative models We end this summary overview of game-theoretic notions by introducing some basic ‘cooperative games’, that is, models of game situations in which the players have the capacity to subscribe agreements and the means to enforce them. As commented in Section 1.5.2, cooperative models ignore details about how the players interact and incorporate only some basic features of the situation. In this section we review three basic models that will play a role later. Von Neumann and Morgenstern [87] introduce transferable utility games. 
A transferable utility game (or TU game for short) is a summary of a game situation in which the only relevant information is a real number for each subset of players, representing the amount of utility (assumed to be transferable between players) that the players in that subset can guarantee for themselves if they join forces. Formally, a TU game consists of a pair (N, v), where the set N = {1, 2, . . . , n} labels the players, and v is a map v : 2^N → R associating its worth v(S) with each subset of players or coalition S ⊆ N (with v(∅) = 0). For short we sometimes refer to the map v as a game. G^N denotes the set of all n-person (labelled by N) TU games. A TU game v is monotonic if (T ⊆ S) ⇒ (v(T) ≤ v(S)). This in particular entails that the worth of any coalition is nonnegative, and that adding new members can never decrease its worth. A game v is superadditive if two disjoint coalitions can always make at least as much by joining forces as they can separately, namely, if for all S, T ⊆ N such that S ∩ T = ∅, v(S ∪ T) ≥ v(S) + v(T). The situation behind a TU game v can be assumed to be the following: the players negotiate a distribution of v(N), the utility that the grand coalition can obtain. TU games in which v(S) takes only the values 0 or 1 are especially simple. If in addition v(N) = 1, then v is called a simple game. The notion of simple games was also introduced by von Neumann and Morgenstern in [87], where the first example proposed is that of 'majority games'. A more general model, which includes TU games as well as bargaining problems as particular cases, is that of non-transferable utility (NTU for short) games. The model consists of a pair (N, V), where N = {1, 2, . . . , n} is the set of players and V = {V(S)}_{S⊆N} is a collection of nonempty sets, one for each coalition S ⊆ N, such that for each S, V(S) ⊆ R^S represents the set of utility payoff vectors x ∈ R^S which are feasible for coalition S.
In other words, V(S) is the set of all payoff vectors that coalition S can guarantee by itself for its members if it forms. These sets are usually assumed to be closed, convex and comprehensive, i.e. such that x ≤ y and y ∈ V(S) ⇒ x ∈ V(S). Further specifications of these sets are possible depending on the context. TU games can be embedded as a subclass of NTU games. It suffices to associate with each TU game (N, v) the NTU game (N, Vv) defined by

Vv(S) := {x ∈ R^S : ∑_{i∈S} xi ≤ v(S)}.

Evidently both v and Vv encapsulate the same information in different forms. The NTU model also includes classical n-person bargaining problems (introduced in 2.1.1). These correspond to the case in which V(N) is a set D ⊆ R^N and there is a point d ∈ D such that for each S ⊂ N,

V(S) = {x ∈ R^S : xi ≤ di (∀i ∈ S)}.

That is, in a bargaining problem only the grand coalition can guarantee payoffs better than those at d for its members.

1.6 Exercises

1. In weighted majorities (1.3.2), the weights and the quota usually meet the following conditions: (1/2) ∑_{i∈N} wi < Q < ∑_{i∈N} wi. (a) Prove that if both conditions are satisfied, then W^(w,Q) is a proper voting rule. (b) Is either condition alone necessary or sufficient for this?

2. Prove or disprove with a counterexample the following statements relative to weighted majority rules: (a) Two seats are symmetric if and only if they have the same weight. (b) A seat is null if and only if its weight is zero.

3. In a weighted majority, what should the weight of a seat be in order for it to be a dictator's seat? What should the weight of a seat be in order for it to have a veto?

4. Let W be an N-voting rule. Prove that the following conditions are equivalent: (a) W is symmetric (see 1.3.1); (b) W is anonymous (that is, for any permutation π : N → N, πW = W); (c) W is a q-majority voting rule for some q s.t. 1/2 ≤ q < 1.

5.
Consider an eight-seat committee divided into two subgroups: N = A ∪ B with A = {1, 2, 3, 4, 5} and B = {6, 7, 8}, and two possible rules: Rule 1: A proposal is accepted if it has the support of at least 3 votes from A and at least 2 votes from B. Rule 2: A proposal is accepted if it has the support of at least 5 votes, of which 3 votes must be those from B. For each voting rule: (a) Give the set of winning configurations. (b) What seats are symmetric? (c) Is the dominance relationship complete? (d) Can the rule be represented by a weighted majority? If not, by a double majority? 6. The following rule, which was proposed to amend the Canadian constitution, involves the ten Canadian provinces: Quebec, Ontario, the four Atlantic provinces (New Brunswick, Nova Scotia, Prince Edward Island, and Newfoundland), the three Central provinces (Alberta, Saskatchewan, and Manitoba) and British Columbia. To pass an amendment, a proposal must get at least the support of Ontario and Quebec, two of the Atlantic provinces, and either British Columbia and a central province or all three central provinces. (a) Give the set of winning configurations of the voting rule. (b) Show that the dominance relationship is not complete. 7. The United Nations Security Council currently comprises fifteen members: five permanent members (China, France, Russia, United Kingdom, and United States of America) and ten non-permanent members. (a) If we ignore the possibility of abstention, the voting rule requires the approval of its five permanent members and at least four of the ten non-permanent members in order for a decision of substance to be adopted. Give the set of winning configurations of the decision rule. Show that this rule can be represented as a weighted majority. (b) If we take into account the possibility of abstention, a proposal can be passed if there are at least nine members in favour of the proposal and no veto member is against. 
How should the model be modified to distinguish between votes against and abstention?

8. (Allais's paradox): Consider the following four situations:
Option A: 100 million euros for certain.
Option B: a 10% probability of 500 million euros, an 89% probability of 100 million euros, and a 1% probability of 0 euros.
Option C: an 11% probability of 100 million euros, an 89% probability of 0 euros.
Option D: a 10% probability of 500 million euros, a 90% probability of 0 euros.
Many individuals claim to prefer A to B, and D to C. Are these preferences compatible with the expected utility model?

9. Explain the fallacy underlying the following statement: If an individual with expected utility preferences prefers a lottery ticket in which only one out of 10 000 tickets will win 1000 euros to one euro for certain, then if each ticket costs 1 euro and he/she has 100 euros then he/she will spend it all on tickets for this lottery.

10. Discuss whether any of the following behaviour patterns of Mr X is inconsistent with vNM's model:
(a) Mr X buys a one euro lottery ticket for a draw in which only one out of 1000 tickets will win 10 euros.
(b) Mr X, who claims to prefer life to death, agrees to play Russian roulette (with one bullet in a six-bullet revolver) for the promise of a bike if he survives.

11. Let A = {a1, a2, a3, a4}, where a1 = 110 euros, a2 = 100 euros, a3 = 10 euros, and a4 = 0 euros. Two individuals have vNM preferences ≿1 and ≿2 on L(A) such that a1 ≻i a2 ≻i a3 ≻i a4 (i = 1, 2), and a3 ∼1 (3/4) a2 ⊕ (1/4) a4 and a3 ∼2 (1/5) a2 ⊕ (4/5) a4.
(a) If 1 has 25 tickets of a lottery in which one out of 100 will win 100 euros, and 2 has 10 euros, would either of them be interested in a swap?
(b) If a2 ∼1 (13/14) a1 ⊕ (1/14) a4 and 1 had all the tickets for this lottery, would they both be interested in the same swap (10 euros for 25 tickets)?
(c) In the conditions of (b), what is the maximum number of tickets that 1 will be willing to exchange for 10 euros? And what is the minimum number of tickets that 2 would need to be offered for 2 to be willing to pay 10 euros in exchange?

12. An analyst is in charge of making decisions that can involve lotteries with four alternatives: a, b, c, and d. He is ordered to make decisions consistent with expected utility theory, and such that a ≻ b ≻ c ≻ d.
(a) Do these conditions determine a choice between (1/2) a ⊕ (1/2) c and (1/2) b ⊕ (1/2) d?
(b) If in addition he is also told that b ∼ (2/3) a ⊕ (1/3) d and c ∼ (1/2) b ⊕ (1/2) d, which of the following two should he choose: (1/6) a ⊕ (1/6) b ⊕ (4/6) c or (1/2) b ⊕ (1/2) c?

13. An individual wants to insure a good worth w euros against a risk of damage of r euros (r < w). An insurance company offers a policy at a price p according to which the individual will receive r if the damage occurs. The estimated likelihood (according to both the individual and the company) of the damage occurring is 0.1%, and the company is indifferent to risk in the sense that its preferences follow the expected monetary benefit.
(a) What is the minimum price p at which the company would be interested in offering the policy?
(b) If the preferences of the individual are vNM and are represented, for the range of monetary values between w − r and w, by the map u(x) = (x − w + r)/r, within what price interval would both the company and the individual be interested in signing the policy?

Seminal papers, seminal ambiguities

In this chapter a few important seminal papers are briefly and critically reviewed in Section 2.1. Then the basic distinction in this book between 'take-it-or-leave-it' and 'bargaining' committees is introduced in Section 2.2. The related literature is summarily reviewed in Section 2.3.
2.1 Seminal papers and seminal ambiguities

In the wake of the seminal contribution of Shapley and Shubik [78] in 1954, a copious literature on so-called 'power indices' and 'voting power' in general has been, and continues to be, produced. In this section we review only a few basic and seminal papers, Shapley and Shubik's [78] and Banzhaf's [4], as well as other previous or subsequent papers that laid the conceptual framework for later developments in voting power analysis. This will put things into historical perspective, and will also allow us to introduce some important classic models that will play a role in the book. This brief critical overview also raises some doubts about the foundations, thus motivating the endeavour and goal of this book. It is also our ambitious hope that a second reading of this chapter after reading the rest of the book will provide a test of our success. This would be the case if, to some extent, the reader had the impression that the ideas in the book had provided him/her with new 'glasses' to look at and understand the issues raised in these papers and the lights and shadows in the answers proposed in them.

2.1.1 Nash (1950): The bargaining problem

It may seem surprising to start a book on voting issues with Nash, but as will be seen later in Chapter 4 it is perfectly justified. John F. Nash is a central figure in Game Theory. A few years after von Neumann and Morgenstern's foundational book The Theory of Games and Economic Behavior [87], Nash established in a few papers the distinction between cooperative and non-cooperative game theory (see 1.5.2), and the basic notions and methodological paradigms in the two fields: the non-cooperative equilibrium notion and the cooperative solution to the bargaining problem, respectively, along with what has later been called the 'Nash programme' to bridge them.^13
In a renowned paper [60] Nash addresses the bargaining problem, an old problem in economics that had been given up as too complex for rational analysis: What can the outcome of negotiations between two rational agents be when both can benefit from cooperating? In other words, what is the satisfaction each individual should expect to obtain from bargaining, or how much should the opportunity to engage in such a situation be worth to each of them? In order to provide an answer the situation is idealized by several assumptions. It is assumed that both individuals are 'highly rational', i.e. that their preferences when facing risk are consistent with the von Neumann–Morgenstern utility theory reviewed in Section 1.4, which at that time had recently been introduced in [87]. In this case the preferences of each individual can be represented by a utility function determined up to the choice of a zero and a unit of scale (Theorem 5 in 1.4.4). It is also assumed that lotteries over feasible agreements are also feasible agreements and that a particular alternative representing the case of no agreement enters the specification of the situation. The problem can thus be graphically summarized (Figure 2.1(a)) by plotting the utility vectors associated with all feasible agreements on a plane (set D) as well as the utility vector d associated with the case of disagreement. Figure 2.1(b) represents the problem obtained by adding all points which are dominated by any feasible one as feasible payoff vectors. This alternative model, which is reasonable assuming 'free disposal' (i.e., that any level of utility inferior to a feasible one is also feasible), is used in Chapter 4. In [60] it is assumed that the set of feasible utility vectors D ⊂ R² is compact and convex, and contains the disagreement or status quo point d, and some point that strictly dominates d. The bargaining problem is thus summarized by the pair B = (D, d).
[Figure 2.1. A bargaining problem: (a) Classical à la Nash model. (b) Assuming 'free disposal'.]

Then Nash proceeds by asking for reasonable conditions for a rational agreement, that is, a point Φ(B) or Φ(D, d) in R² deserving of the name. In this way he characterizes by a set of conditions the unique 'solution' or map Φ : B₂ → R² satisfying them, where B₂ denotes the set of all such bargaining problems. Namely, consistently with the interpretation of the solution as a vector of rational expectations of gain by the two bargainers, the following conditions are imposed on rationality grounds.

1. Efficiency.^14 If (x1, x2), (x'1, x'2) ∈ D and x'i > xi (for i = 1, 2), then (x1, x2) ≠ Φ(D, d).

A problem (D, d) is symmetric if d1 = d2, and (x2, x1) ∈ D whenever (x1, x2) ∈ D.

2. Symmetry. If (D, d) is symmetric, then Φ1(D, d) = Φ2(D, d).

3. Independence of irrelevant alternatives. Given two problems with the same disagreement point, (D, d) and (D', d), if D' ⊆ D and Φ(D, d) ∈ D', then Φ(D', d) = Φ(D, d).

Given that von Neumann–Morgenstern utility functions are determined up to a positive affine transformation, the solution should not depend on the zero and the unit of scale chosen to represent the utilities, that is, it must be invariant w.r.t. positive affine transformations.

4. For any problem (D, d) and any ai, bi ∈ R (ai > 0, i = 1, 2), if T(D, d) = (T(D), T(d)) is the problem that results from (D, d) by the affine transformation T(x1, x2) = (a1 x1 + b1, a2 x2 + b2), then Φ(T(D, d)) = T(Φ(D, d)).

^13 The collected works of Nash on game theory are reunited in [63], with an excellent introduction by Binmore. Nash's main papers on game theory and mathematics can be found in [35].
^14 Nash did not actually name his conditions. The names we use here were given to them later.

The first condition expresses that rational individuals will not accept an agreement if another better for both is feasible. The second states
that, given that in the model the two individuals are ideally assumed to be equally rational, when the mathematical description of the problem is entirely symmetric the solution must also be symmetric (later, in [62] Nash replaces this condition by anonymity, requiring that the labels, 1 or 2, identifying the players do not influence the solution). The third expresses a condition of consistency: an agreement considered satisfactory should still be considered satisfactory if it remains feasible after the feasible set shrinks (and the disagreement point remains unchanged).

Under the conditions assumed for B, these four conditions determine a unique solution Φ(B) for every bargaining problem, now known as the 'Nash solution of the bargaining problem', which is given by

Nash(B) = arg max_{x∈D, x≥d} (x1 − d1)(x2 − d2).

Namely, the point in D for which the product of utility gains (w.r.t. d) is maximized. The following simpler and equivalent geometrical specification of this point may prove useful. Assuming that the scales of utilities for both players have been chosen so that d = 0, Nash(B) is (see Figure 2.2) the point on the boundary of D which is the middle point of the segment between the intersections with the positive axes of a straight line that leaves D below it.

[Figure 2.2. The Nash bargaining solution.]

Although Nash only considered the two-player case, the whole construction works for the n-player case, yielding the same result. So for three players the aspect of a bargaining problem (assuming free disposal and assuming that the scales of utilities are chosen so that d = 0) would be as illustrated in Figure 2.3.

[Figure 2.3. A three-person bargaining problem.]

In this case Nash(B) is the point on the boundary of D which is the barycenter of the triangle whose vertices are the intersections of a supporting hyperplane that leaves D
below with the positive axes (that is, if these points are Pi (i = 1, 2, 3): Nash(B) = (1/3) P1 + (1/3) P2 + (1/3) P3). In short, in an n-person bargaining problem as idealized by Nash's model, assuming that rational players' expectations should satisfy these conditions amounts to concluding that such expectations are given by

Nash(B) = arg max_{x∈D, x≥d} ∏_{i=1}^{n} (xi − di).

Note the model implicitly assumes 'complete information', that is, all the information within the model B = (D, d) is shared by all the players. But note that even assuming this, Nash's characterization gives no clue about how players can interact to reach this point. Later Nash [62] re-examined the bargaining problem from a non-cooperative point of view, starting what is known now as 'the Nash programme', i.e. modelling the bargaining situation as a non-cooperative game in which the players' steps of negotiation (proposals and threats) become moves in a non-cooperative game, and obtaining the cooperative solution as an equilibrium.^15

^15 In fact, as a limit of equilibrium points of 'smoothed games'.

2.1.2 Shapley (1953): The value of a TU game

In 1953, in the wake of Nash's success, Lloyd S. Shapley, in [76], addresses a relatively similar problem with a similar approach, but concerning a model introduced by von Neumann and Morgenstern in [87] involving n players: a transferable utility game (see 1.5.4). Although different stories can be provided to motivate it, there is not so clearly a real-world situation behind this model as there is behind the case of Nash's bargaining model.^16 If the situation behind a TU game (N, v) is that the players negotiate a distribution of an amount of utility v(N), assumed to be 'objective and transferable', the problem can be put as follows: What is a rational agreement on how to divide v(N)? Answering this question entails assessing the 'value' of the game for each player.
In other words, how much is the opportunity to engage in such a situation worth to each of them? Then, proceeding like Nash, Shapley asks for the following conditions^17 for a vector Φ(v) = (Φ1(v), . . . , Φn(v)) ∈ R^N to be considered as a rational evaluation of the prospect of playing game v.

1. Efficiency. Σ_{i∈N} Φi(v) = v(N).

Given a permutation π : N → N and a game v, πv denotes the permuted game defined by πv(π(S)) := v(S), where π(S) = {π(i) : i ∈ S}. That is, πv is the game that results from v by relabelling the players according to π, so that i is in v what π(i) is in πv.

2. Anonymity. For any permutation π, Φ_{π(i)}(πv) = Φi(v).

A player i is a null player in a game v if his/her entering or leaving any coalition never changes its worth, that is, if v(S ∪ i) = v(S), for all S.

3. Null player. If i is a null player in a game v, then Φi(v) = 0.

The addition of two TU games, v + w, can be defined by (v + w)(S) := v(S) + w(S).

4. Additivity. Φ(v + w) = Φ(v) + Φ(w).

The first condition expresses that the distribution of utility is feasible and efficient in the sense that no utility is wasted. Anonymity means that if the players are relabelled their values are relabelled consistently. The third condition imposes that irrelevant players should receive 0. Finally, additivity, in spite of Shapley's argument that it 'is a prime requisite for any evaluation scheme designed to be applied eventually to systems of interdependent games', is the least compelling condition.

^16 In fact, TU games originally appear in [87] as a 'second order' abstraction, that is, as a model which is the result of abstracting some information from a previous and more complex model. Readers interested in the Shapley value should consult [74].
^17 In [76] the formulations and names of some of the conditions are slightly different.
These conditions characterize a single map or 'value' v → Φ(v) ∈ R^N, which is now known as the Shapley value; or more properly, as Φ(v) is a vector, each of its components Φi(v), denoted by Shi(v) here, is the Shapley value of player i in game v, and is given by

Shi(v) = Σ_{S : i∈S⊆N} [(n − s)!(s − 1)!/n!] (v(S) − v(S\i)),

where s = |S|. As v(S) − v(S\i) is the marginal contribution of player i to the worth of coalition S, that is, the increase in worth that his/her joining S\i causes, Shi(v) is a weighted average of the marginal contributions of that player to the worth of all coalitions he/she belongs to. These coefficients or 'weights' admit several interpretations. Shapley proposes a bargaining model based on one of them from which the value can be derived as the expected outcome. Assume that players agree to form the grand coalition and distribute its worth in the following way: players join the coalition one at a time in a given order, all orders being equally probable, and each player receives his/her marginal contribution to the worth of the coalition formed when he/she enters. It is then easy to show that if the allocation is done in this way, (n − s)!(s − 1)!/n! is the probability of S being the coalition formed when i enters. Therefore the expected payoff (i.e., the expected marginal contribution in this probabilistic model) of each player is his/her Shapley value. It is worth stressing Shapley's comment: '[this bargaining model] lends support to the view that the value is best regarded as an a priori assessment of the situation, based on either ignorance or disregard of the social organization of the players'.

Example 2.1: Let N = {1, 2, 3}, and let v : 2^N → R be the TU game such that v(N) = 8, v(1, 2) = 2, v(1, 3) = 3, v(2, 3) = 4, v(1) = v(2) = 1, and v(3) = 0. In the following table the 3! = 6 possible orders are in the left-hand column, and the marginal contributions of each player to the coalition formed when he/she enters for each order are in the other three.
Order     v(S) − v(S\1)   v(S) − v(S\2)   v(S) − v(S\3)
1, 2, 3         1               1               6
1, 3, 2         1               5               2
2, 1, 3         1               1               6
2, 3, 1         4               1               3
3, 1, 2         3               5               0
3, 2, 1         4               4               0

As each order occurs with probability 1/6, the expected marginal contribution of each player is obtained by adding up each column and dividing by 6. Thus Sh(v) = (14/6, 17/6, 17/6).

2.1.3 Shapley–Shubik (1954): A power index

Many collective decision procedures can be presented as simple games (see Section 1.5.4), in particular all those described as voting rules in Section 1.3. As defined in 1.3.1, a voting rule can be specified by the set of seats N = {1, 2, . . . , n} and the set W ⊆ 2^N of winning vote configurations, that is, those that can make a decision. So the simple TU game vW can be associated with each voting rule W, defined by

vW(S) := 1 if S ∈ W, and vW(S) := 0 if S ∉ W.   (9)

In [78] Shapley and Shubik propose the Shapley value of the associated simple game as an 'a priori evaluation of the division of power among the various bodies and members of a legislature or committee system'. Since then the Shapley value of game vW for each i ∈ N has been known as the Shapley–Shubik index. In accordance with the interpretation of the Shapley value, the most coherent interpretation of the Shapley–Shubik index of the simple game associated with a voting rule seems to be the following: A unit of objective and transferable utility is to be distributed among n individuals, under the condition that any group of individuals that forms a winning configuration according to the specification of a given voting rule W can enforce any feasible distribution. The Shapley–Shubik index of each voter can then be interpreted as an expected share of the available unit.
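The table and the resulting value can be checked mechanically. Below is a minimal sketch (ours, not the book's) that enumerates all 3! orders of Example 2.1 and averages each player's marginal contributions with exact fractions:

```python
from itertools import permutations
from fractions import Fraction

def shapley(n, v):
    """Shapley value as the average marginal contribution over all
    n! orders; v maps frozensets of players to their worth."""
    players = range(1, n + 1)
    sh = {i: Fraction(0) for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            # marginal contribution of i to the coalition formed so far
            sh[i] += Fraction(v[coalition | {i}] - v[coalition], len(orders))
            coalition = coalition | {i}
    return sh

# The game of Example 2.1
v = {frozenset(): 0,
     frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 0,
     frozenset({1, 2}): 2, frozenset({1, 3}): 3, frozenset({2, 3}): 4,
     frozenset({1, 2, 3}): 8}
print(shapley(3, v))  # player 1: 14/6 = 7/3, players 2 and 3: 17/6 each
```

Exact fractions are used so that the column sums of the table are reproduced without rounding.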
On the other hand, as the marginal contribution of a player, vW(S) − vW(S\i), can only be either 0 or 1, and is 1 only when the presence/absence of a player in S makes it winning/losing, Shapley and Shubik also propose a probabilistic interpretation in terms of the likelihood of being 'pivotal' or 'critical to the success of a winning coalition'. They reinterpret Shapley's bargaining model assuming an 'order of voting as an indication of the relative degree of support by the different members, with the most enthusiastic members "voting" first, etc.' Then what in the case of a TU game is the expected marginal contribution of a player becomes now the probability of his/her being pivotal (i.e. the probability of vW(S) − vW(S\i) = 1) if a winning coalition is formed according to this probabilistic sequential model.

Example 2.2: Let N = {1, 2, 3}, and W = {{1, 2}, {1, 3}, {1, 2, 3}}. Then proceeding with vW as in Example 2.1, we have:

Order     vW(S) − vW(S\1)   vW(S) − vW(S\2)   vW(S) − vW(S\3)
1, 2, 3          0                 1                 0
1, 3, 2          0                 0                 1
2, 1, 3          1                 0                 0
2, 3, 1          1                 0                 0
3, 1, 2          1                 0                 0
3, 2, 1          1                 0                 0

We obtain Sh(vW) = (4/6, 1/6, 1/6). Note that for each row (i.e. each order) there is a single '1', corresponding to the pivotal player for that order.

A critical examination of the whole construction and its interpretation casts some doubts on the soundness of its foundations. The first source of confusion lies in the implicit identification of a voting rule with a TU game. The specification of a voting rule (see 1.3.1) involves neither players nor their preferences (different players with different preference profiles may use a same voting rule), while a simple game is a particular type of TU game. Thus, identifying a voting rule with a TU game amounts to assuming a very special configuration of players' preferences.^18
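The pivotal counting of Example 2.2 admits the same mechanical check (again our own sketch, not the book's): in each order, the player whose entry first turns the coalition winning is the pivot and receives probability 1/6.

```python
from itertools import permutations
from fractions import Fraction

def shapley_shubik(n, winning):
    """Shapley-Shubik index as the probability of being pivotal when
    players join in a uniformly random order. 'winning' is the set of
    winning configurations (frozensets); monotonicity of the rule lets
    us stop at the first winning coalition in each order."""
    players = range(1, n + 1)
    idx = {i: Fraction(0) for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            coalition = coalition | {i}
            if coalition in winning:   # i is the pivotal player
                idx[i] += Fraction(1, len(orders))
                break
    return idx

# Example 2.2: W = {{1,2}, {1,3}, {1,2,3}}
W = {frozenset({1, 2}), frozenset({1, 3}), frozenset({1, 2, 3})}
print(shapley_shubik(3, W))  # (4/6, 1/6, 1/6), i.e. (2/3, 1/6, 1/6)
```

Note how the single '1' per row of the table corresponds to the `break` after the first winning coalition.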
Second, if the compellingness of additivity is not clear in Shapley's axiomatic system, now the condition does not even make sense in the context of simple games, where the sum of two simple games is not a simple TU game.^19 Third, if we turn to the pivotal interpretation, the relevance of the 'degree of support' to a proposal to be voted upon in a yes/no decision is not clear, and nor is the importance of being 'pivotal' for a given order. Why (see Example 2.2) should only the pivotal player for a given order keep all the credit and have the whole cake if other players may also be critical in the coalition formed?

In short, in addition to somewhat dubious foundations, the seminal paper contains a seminal duality or ambiguity that remained unresolved and pervaded most of the subsequent related literature: What is the meaning of Shapley–Shubik's 'power index'? Is it the 'value' or 'cooperative solution' of a sort of bargaining situation, or is it an assessment of the likelihood of being critical in the making of a decision? Is it an expected share of a prize, or is it a probability of being decisive? Or is it both these things? The same questions can be posed in regard to what is supposed to be evaluated: What is 'voting power'? On the other hand, where does the index's credibility come from?

^18 We come back to this point in Chapter 4 (see 4.2), where we consider a model of a committee consisting of a voting rule and a preference profile. In the light of this more general model, vW appears as a particular case. Only then can this objection be fully understood.
From its axiomatic foundation or from its probabilistic interpretation?

2.1.4 Banzhaf (1965): Power as decisiveness

In [4] Banzhaf makes a devastating critique of the current practice of assigning voting weights proportional to the numbers of citizens in several legislative bodies as a means of implementing the 'one man, one vote' requirement without disturbing the existing arrangement of districts of unequal size. He provides the following examples in which some of the representatives with positive weight have null capacity to influence the outcome, or in which representatives with rather different weights have exactly the same capacity to affect it.

Example 2.3: Let W(w,Q) be the five-person qualified majority rule with weights w = (5, 1, 1, 1, 1) and Q = 4. In this case player 1 concentrates all decision power because this rule is 1's dictatorship. Now let W(w',Q') with w' = (8, 8, 8, 8, 1) and Q' = 16. In spite of the different weights the rule specified is equivalent to a simple majority rule.

This proves that 'voting power is not proportional to the number of votes a legislator may cast', and that 'the number of votes is not even a rough measure of the voting power of the individual legislator'. To give a more precise foundation to this assertion he proposes 'to think of voting power as the ability of a legislator, by his vote, to affect the passage or defeat of a measure', and provides a measure of voting power based on this idea. This measure of voting power of a voter is given by the number of 'swings', that is, the number of vote combinations in which that voter is able to determine the outcome. The ratio of the power of any two voters is then given by the ratio of such combinations for each of them.

^19 The first axiomatization of the Shapley–Shubik index, that is to say of the Shapley value in the domain of simple games, was due to Dubey [18]. It entailed additivity being replaced by a no more compelling condition (see 2.1.6).
He makes no explicit use of any probabilistic model, but the assumption that all vote combinations are equally probable is implicit.^20

Example 2.4: Let N = {1, 2, 3}, and W = {{1, 2}, {1, 3}, {1, 2, 3}}. Then we have:

Vote configuration   1's swings   2's swings   3's swings
{1, 2, 3}                1            0            0
{1, 2}                   1            1            0
{1, 3}                   1            0            1
{2, 3}                   1            0            0
{1}                      0            1            1
{2}                      1            0            0
{3}                      1            0            0
∅                        0            0            0
Total swings             6            2            2

Note that each row corresponds now to a possible vote configuration (order plays no role), and in some of them there are several '1's, because in some vote configurations two or more players can be swingers. Note also that the ratio of 'power' according to the Banzhaf index (3 : 1 : 1) is different from the ratio obtained from the Shapley–Shubik index (4 : 1 : 1) for the same voting rule.

^20 The term 'probability' never occurs in [4], but in [5] the probabilistic model is almost explicit when he asserts that 'Because a priori all voting combinations are equally possible, any objective measure of voting power must treat them as equally significant (. . .) no one can say beforehand which combinations will occur most often', and even closer when he says that equal voting power means allowing 'each voting member an opportunity to affect the outcome in an equal number of equally likely voting combinations.'
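The swing counts of Example 2.4 can be reproduced by brute force (our sketch, not from the book): enumerate all 2³ vote configurations and count, for each voter, those in which flipping that voter's vote alone changes the outcome.

```python
from itertools import combinations

def swings(n, winning):
    """Banzhaf swing counts: for each voter, the number of vote
    configurations in which flipping that voter's vote changes the
    outcome. 'winning' is the set of winning configurations."""
    players = list(range(1, n + 1))
    # all 2^n configurations of 'yes' voters
    configs = [frozenset(c) for r in range(n + 1)
               for c in combinations(players, r)]
    count = {i: 0 for i in players}
    for s in configs:
        for i in players:
            flipped = s - {i} if i in s else s | {i}
            if (s in winning) != (flipped in winning):
                count[i] += 1
    return count

W = {frozenset({1, 2}), frozenset({1, 3}), frozenset({1, 2, 3})}
print(swings(3, W))  # {1: 6, 2: 2, 3: 2} -> power ratio 3 : 1 : 1
```

Counting flips in both directions (joining or leaving the 'yes' camp) is what yields the totals 6, 2, 2 of the table rather than the one-sided counts 3, 1, 1; the ratios are the same.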
Although he acknowledges that his definition 'is based in part on the idea of Shapley–Shubik', he rejects their index, arguing against the relevance of the order in which votes are cast in a voting situation, and asserting that their definition, 'based as it is upon mathematical game theory in which each "player" seeks to maximize his "expected winnings", seems to make unnecessary and unreasonable assumptions about the legislative process in order to justify a more complicated measure of voting power.' In short, Banzhaf takes decisiveness as the source of power, dismissing the axiomatic approach and favouring a notion of power based on the likelihood of being decisive or critical in a decision made by vote, that is, assuming that one exerts power when one's vote is determinant.

2.1.5 Penrose (1946), Rae (1969) and Coleman (1971)

Independently and earlier, Penrose [68] (see also [69]) had in 1946 reached basically the same conclusions as Banzhaf. Unfortunately, in this case the seed failed to germinate and his work was widely ignored, in particular by the game-theoretic power indices mainstream. Only relatively recently has his pioneering work been rediscovered and recognized. The contribution of Coleman [15, 16], independent of and coinciding in part with that of Banzhaf, and the seminal work of Rae [70] are also worth mentioning. In the next chapter their contributions are put into perspective within the conceptual framework proposed in this book.

2.1.6 Through the axiomatic glasses: Dubey (1975), Dubey–Shapley (1979)

In two classic papers Dubey [18], and Dubey and Shapley [20] provided the first axiomatic characterizations of the Shapley–Shubik index and the Banzhaf index. Before proceeding we need a few formal definitions. Recall (see 1.5.4) that a simple game is a TU game v : 2^N → R such that v(S) takes only the values 0 and 1, and such that v(N) = 1.
But note that if v = vW is the simple game associated with a voting rule W defined by (9) in Section 2.1.3, then in addition to these conditions v will satisfy the following. As we assume that (S ⊇ T ∈ W) ⇒ (S ∈ W), the associated game is monotonic. Also, as we assume that (S ∈ W) ⇒ (N\S ∉ W), the associated game is superadditive. It is also easy to check that all simple monotonic and superadditive games are generated by (9). In other words the map W → vW is a one-to-one correspondence between the set of N-voting rules and the set of N-person simple monotonic and superadditive games, which we denote by SG^N. As in the sequel we restrict our attention to SG^N, whenever we say 'simple games' we mean 'simple monotonic and superadditive games'. For any v, w ∈ SG^N, define two operations^21 on SG^N:

(v ∧ w)(S) := min{v(S), w(S)},    (v ∨ w)(S) := max{v(S), w(S)}.

Note that both v ∧ w and v ∨ w are TU games, but v ∨ w may fail to be a superadditive monotonic simple game. Dubey [18] introduces the following condition:

Transfer. A map Φ : SG^N → R^N satisfies the 'transfer' condition if for any v, w ∈ SG^N such that v ∨ w ∈ SG^N,

Φ(v) + Φ(w) = Φ(v ∧ w) + Φ(v ∨ w).

Combining this condition with three of the conditions in Shapley's characterization gives the following result.

Theorem 7 (Dubey [18]) The only map Φ : SG^N → R^N that satisfies efficiency, anonymity, null player, and transfer is the Shapley–Shubik index.

Later Dubey and Shapley [20] provide an adaptation of this characterization in order to axiomatize the Banzhaf index by replacing 'efficiency' by the following somewhat ad hoc condition^22:

Σ_{i∈N} Φi(v) = Σ_{i∈N} Bzi(v),   (11)

to obtain the following result.

^21 In fact both operations make sense for arbitrary TU games. As the reader may check, these operations are the exact counterparts for simple games of '∩' (intersection) and '∪' (union) for voting rules (see 1.3.2).
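The transfer condition can be verified numerically on a concrete pair of rules (our illustration, using the correspondence of ∧ and ∨ with ∩ and ∪ of the sets of winning configurations). Here W1 requires the joint approval of seats 1 and 2, and W2 that of seats 1 and 3; their union happens to remain monotonic and superadditive, so the condition applies:

```python
from itertools import permutations
from fractions import Fraction

def shapley_shubik(n, winning):
    """Shapley-Shubik index via pivotal counting ('winning' is a set of
    frozensets, assumed monotonic so the first winning coalition in
    each order identifies the pivot)."""
    players = range(1, n + 1)
    idx = {i: Fraction(0) for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            coalition = coalition | {i}
            if coalition in winning:
                idx[i] += Fraction(1, len(orders))
                break
    return idx

# W1: seats 1 and 2 must both approve; W2: seats 1 and 3 must both approve.
W1 = {frozenset({1, 2}), frozenset({1, 2, 3})}
W2 = {frozenset({1, 3}), frozenset({1, 2, 3})}

# v ∧ w corresponds to W1 ∩ W2 (unanimity of {1,2,3});
# v ∨ w corresponds to W1 ∪ W2, which here is still in SG^N.
for i in (1, 2, 3):
    lhs = shapley_shubik(3, W1)[i] + shapley_shubik(3, W2)[i]
    rhs = shapley_shubik(3, W1 & W2)[i] + shapley_shubik(3, W1 | W2)[i]
    assert lhs == rhs  # the transfer condition holds componentwise
print("transfer condition verified on this pair")
```

This is of course only a check on one pair, not a proof; Theorem 7 says that transfer, together with the other three axioms, pins the index down uniquely.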
Theorem 8 (Dubey and Shapley [20]) The only Φ : SG^N → R^N that satisfies condition (11), anonymity, null player, and transfer is the Banzhaf index.

Thus from the point of view of these characterizations the Shapley–Shubik index and the Banzhaf index 'share' three axioms (anonymity, null player and transfer^23) and differ in only one. But there are two problems with these characterizations. First, if we assume that we want to characterize either a distribution of the pie (i.e. of a unit of utility among the players), or a vector of rational expectations of shares, then 'efficiency' seems a compelling rationality condition. Moreover, if we are trying to characterize axiomatically a measure of power or of influence, whatever this might mean, then neither this condition nor (11) is compelling at all. Second, the lack of compellingness of the transfer condition in either case means a flaw in both characterizations. This condition can be seen as a sort of adaptation of 'additivity' to the narrower domain of simple games. But what is the motivation or the justification for requiring it? This is not clear at all for either a vector of rational expectations or for a measure of influence. In short, neither of the two above characterizations provides compelling support for either of the two indices.

^22 This formulation may seem rather unsatisfactory in the sense that the index it helps to characterize appears in the axiom. Although this can be avoided by an equivalent formulation in which the right-hand side of (11) is replaced directly by the required quantity (a function of the game v), namely

Σ_{i∈N} Φi(v) = (1/2^{n−1}) Σ_{i∈N} Σ_{S : i∈S⊆N} (v(S) − v(S\i)),

the bottom line is the same: to require the sum to be what it is for the Banzhaf index.
^23 In fact these three conditions are satisfied by all semivalues. Semivalues are basically the family of 'values' that result by dropping 'efficiency' in Shapley's characterizing system of his value. They were introduced by Weber [88] (see also [19]). When restricted to simple games, semivalues include the Shapley–Shubik and Banzhaf indices, and can be seen as a family of 'generalized power indices', which share with these two indices their most compelling properties (anonymity and null player) as well as transfer (see [42]).

Axioms, paradoxes and postulates^24

The lack of conclusive arguments in favour of either of the two indices gave rise to a proliferation of new power indices. Several authors then provided examples of more or less counterintuitive behaviour of some of them. These examples are referred to, somewhat exaggeratedly, as 'paradoxes' in the literature on power indices, where they have been widely discussed.^25 According to this point of view, each paradox entails the violation of a desirable property or 'postulate' by a power index, and these paradoxes/postulates can be used to judge and select between power indices. This is indeed very close to the axiomatic approach, at least to its earliest versions where compellingness of the axioms was a priority^26, though no characterization has so far been provided based on really compelling 'postulates' for a measure of voting power. Nevertheless, the weakest point of the paradoxes/postulates approach lies in trying to grasp the rather abstract notion of a measure of 'power' (or 'voting power' associated with a voting rule) directly, with no sufficient prior clarification of what such a 'power' may mean. Moreover, such a chimeric measure tends to be founded solely on the voting rule, disregarding any other elements from the environment.

2.2 Clear-cut models to dissipate ambiguity

Recapitulating after this brief review of the basic landmarking seminal papers, one is left with a number of doubts concerning the issues raised and the answers given to them. What is the meaning of the Shapley–Shubik index?
Is it a 'value' in the cooperative-axiomatic sense à la Nash, i.e. an assessment of the expected payoff of a rational player at the prospect of engaging in a sort of bargaining situation? Or is it an assessment of the likelihood of playing a critical or decisive role in the making of a decision by a vote? Or are the two interpretations compatible? Which is 'better': the Shapley–Shubik index or the Banzhaf index? What provides sounder foundations for a measure of voting power: axioms or probabilistic interpretations? In fact the tension between these two indices, like the tension between the merits of the axiomatic and probabilistic approaches, has pervaded all the subsequent related work to date. Both indices have been defended and attacked with axiomatic and probabilistic arguments. But going further, all these question marks can be transferred to the underlying notion that is to be evaluated: What is 'power' or 'voting power' about? It is our view that a serious attempt to solve this riddle requires a more radical issue to be clarified first. To put it bluntly: What are we talking about? However provoking it may sound, this basic issue has been sidelined in much of the literature on the topic27. In most cases a considerable dose of ambiguity surrounds the situation under consideration. A situation in which a matter is to be decided upon by vote (or at least under conditions among which a voting rule plays a prominent role) by a collective body is the common denominator underlying this literature.

24 Readers not familiar with the 'voting power paradoxes' literature can skip this comment.
25 See [22] for a discussion of the main paradoxes. See [47] (briefly commented at the end of 3.5.5) for a further critique of the whole approach.
26 As pointed out in 2.1.1, the conditions proposed by Nash are not called 'axioms' in [60]. Only later, given their characterizing power, were they called so, and such an approach 'axiomatic'.

Seminal papers, seminal ambiguities
But such a vague specification as this may include an extremely wide and heterogeneous constellation of voting situations: law-making in a parliament, a parliament vote for the endorsement of a government after elections, a referendum, governmental cabinet decision-making, a shareholders' meeting, an international or intergovernmental council, etc. The only way to get rid of ambiguity and to clarify the analysis is to take the bull by the horns and start with clear-cut models of well-specified voting situations. In accordance with this idea, in this book we take a new departure with respect to previous literature related to voting power. We distinguish neatly between two different types of voting situation that a collective body can face, or, in the terms chosen here, two types of committee: 'take-it-or-leave-it' committees and 'bargaining' committees. An ingredient common to both types of committee is a dichotomous voting rule (as introduced in Section 1.3) specifying the winning vote configurations. A 'take-it-or-leave-it' committee (i) votes upon different independent proposals over time; (ii) these proposals are submitted to the committee by some external agency; and (iii) the committee is only entitled to accept or reject proposals, but cannot modify them.

27 But not by all, as Peter Morriss' words testify: 'Before we can start constructing an account of power we need to know what sort of thing we are dealing with: we must decide just what it is that we are trying to analyse. And we must decide, as well, how we go about deciding that. Most writers in the social sciences pay far too little attention to these preliminary problems, with the result that they go rushing off in the wrong direction, pursuing the wrong quarry. When they eventually catch it, they may claim to have caught the beast they sought; but how do they know, if they didn't know what they were looking for?' ([59], p. 2).
By contrast a 'bargaining' committee (i) deals with different issues over time; (ii) bargains about each issue in search of a unanimous agreement, in which task it is entitled to adjust the proposal; (iii) this negotiation takes place under the condition that any winning coalition has the capacity to enforce agreements; and (iv) for each issue a different configuration of preferences emerges in the committee over the set of feasible agreements concerning the issue at stake. Although in reality it is often the case that the same committee acts sometimes like a 'take-it-or-leave-it' committee, at other times like a 'bargaining' committee, or even at times like something between the two, this clear differentiation of two clear-cut types of situation provides benchmarks for a better understanding of many less clear real-world situations. A thorough understanding of simple situations should be obtained before more complex ones are tackled. In this case this basic distinction requires different models and different conceptual analyses. It also permits more precise answers to more precisely formulated questions28. Moreover, as we will see, this neat double point of view dissipates ambiguities and allows for a clarification of the meaning and limits of some commonplaces (both ideas and recommendations) sustained by inertia and ambiguity. In particular it enables us to provide coherent answers to some of the questions raised in the previous paragraphs.

28 The conceptual relationship of the basic distinction in this book with the vague distinction, widely extended in the literature, between 'I-power' and 'P-power' introduced in [24] is discussed in the Conclusions at the end of the book.

2.3 Further reading

After half a century of research a huge number of papers can be seen one way or other as related to the few seminal papers briefly reviewed
in this chapter. It would be tiresome and futile to attempt to draw up an exhaustive list of references. On the specific topic of voting power there is Felsenthal and Machover's [22] influential book. In the following we make a personal selection of the work we consider most relevant to the issues addressed in this book. We separate the selected works into two subsections. One subsection is devoted to some of the main contributions in the axiomatic approach, the other to those related to the probabilistic approach. Some of the papers will be mentioned again in subsequent chapters. Some contributions are best left for quote and comment in later chapters, and are therefore omitted in these comments.

2.3.1 Axiomatic approach

The success of the elegant paper by Nash on the bargaining problem (immediately followed by Shapley's) enabled the cooperative 'axiomatic' approach to flourish. Each of Nash's characterizing requirements [60] seeks to embody a compelling condition about the outcome of negotiation among rational bargainers. Later this crucial aspect was often forgotten, and characterizations of these and other 'values', 'solutions' or 'power indices' proliferated. Here we give a few relevant contributions in this line of work. The most often criticized of Nash's conditions is that of 'independence of irrelevant alternatives', and the most respectable alternative to it was 'monotonicity', which gave rise to the Kalai–Smorodinsky solution [33]. Kalai also explored the effects of dropping 'symmetry' in [32], and Roth showed how 'efficiency' and 'symmetry' can be replaced by a condition of 'individual rationality' in [72]. In an interesting paper by Roth [73] the Shapley–Shubik index and the Banzhaf index are reinterpreted as utility functions representing von Neumann–Morgenstern preferences over lotteries on 'roles' in voting procedures. In [54] Lehrer characterizes the Banzhaf index by means of a property relative to the 'amalgamation' of two players.
More recently, in [39] and [40] we have provided alternative characterizations to those of [20] and [73], only to honestly conclude that there is no good reason either to reject or accept any of the two indices on pure axiomatic grounds in the framework of simple games.

2.3.2 Probabilistic approach

As commented, both the Shapley value and the Shapley–Shubik index admit probabilistic interpretations. A probabilistic model also underlies Banzhaf's index. This is also the case with some cooperative game-theoretic 'solutions' born out of different axiomatic explorations. But in general terms probabilistic interpretations have been overlooked by game theorists, maybe because one can always find one or more. Game theorists are in general more interested in axiomatic characterizations. Nevertheless in some cases a probabilistic model provides a clearer interpretation. Political scientists seem to have been more interested in probabilistic models. An early example of this is the collection of contributions edited by Niemi and Weisberg in 1972 [64], after the interest aroused by Rae [70]. Some important contributions that deserve to be mentioned in this respect are those of Straffin [79, 81, 82] and Weber [88, 89].

2.4 Exercises

1. A buyer and a seller discuss and bargain over the price of a good. The seller (player 1) is interested in selling at any price greater than p, and at that price would be indifferent between selling and not selling. The buyer (player 2) is interested in buying at any price lower than P, and at that price would be indifferent between buying and not buying. Assuming that p < P and that both players have vNM preferences, calculate the final price according to the Nash bargaining solution, assuming that their preferences on the range of prices [p, P] can be represented by the utility function u1(x) = (x − p)/(P − p) and a function u2(x), for each of the following cases:
(a) u2(x) = ((P − x)/(P − p))²;
(b) u2(x) = (P − x)/(P − p);
(c) u2(x) = ((P − x)/(P − p))^{1/2}.

2. Consider the following prisoner's dilemma situation in a cooperative context. Two individuals have two strategies each: cooperate (C) and defect (D). Their preferences about the four possible situations are

(D, C) ≻1 (C, C) ≻1 (D, D) ≻1 (C, D),
(C, D) ≻2 (C, C) ≻2 (D, D) ≻2 (D, C).

If lotteries on the four outcomes are admitted and both have vNM preferences on them, calculate and interpret the Nash bargaining solution in each of the following cases (in order to make the comparison easier, take u1(C, D) = u2(D, C) = 0, u1(D, C) = u2(C, D) = 10 and d = (u1(D, D), u2(D, D)) as the disagreement point in both cases):
(a) If

(C, C) ∼1 (4/5)(D, C) ⊕ (1/5)(C, D),   (C, C) ∼2 (4/5)(C, D) ⊕ (1/5)(D, C),
(D, D) ∼1 (1/5)(D, C) ⊕ (4/5)(C, D),   (D, D) ∼2 (1/5)(C, D) ⊕ (4/5)(D, C).

(b) If ≿2 is as in (a), and

(C, C) ∼1 (3/5)(D, C) ⊕ (2/5)(C, D),   (D, D) ∼1 (1/2)(D, C) ⊕ (1/2)(C, D).

3. Consider the following variant of the 'battle of the sexes' in a cooperative context. Two individuals have two strategies each: going to the cinema (C) and going to the theatre (T). Player 1 prefers going to the cinema, and player 2 going to the theatre, but both prefer going together to either place to going alone. Thus their preferences about the four possible situations are

(C, C) ≻1 (T, T) ≻1 (C, T) ≻1 (T, C),
(T, T) ≻2 (C, C) ≻2 (C, T) ≻2 (T, C).

If lotteries on the four possibilities are admitted and both have vNM preferences on them, calculate and interpret the Nash bargaining solution in each of the following cases (in order to make the comparison easier, take u1(T, C) = u2(T, C) = −5 and u1(C, C) = u2(T, T) = 10 in both cases, and d = (u1(C, T), u2(C, T)) as the disagreement point):
(a) If

(T, T) ∼1 (2/3)(C, C) ⊕ (1/3)(T, C),   (C, C) ∼2 (2/3)(T, T) ⊕ (1/3)(T, C),
(C, T) ∼1 (1/3)(C, C) ⊕ (2/3)(T, C),   (C, T) ∼2 (1/3)(T, T) ⊕ (2/3)(T, C).

(b) If ≿2 is as in (a), and

(T, T) ∼1 (4/5)(C, C) ⊕ (1/5)(T, C),   (C, T) ∼1 (2/5)(C, C) ⊕ (3/5)(T, C).

4. Two individuals with vNM preferences bargain over the division of a pie. If x (0 ≤ x ≤ 1) denotes 1's fraction and 1 − x denotes 2's fraction, and the following utility functions represent the vNM preferences of either player: u1(x) = ax² + (1 − a)x, and u2(x) = 1 − ax² − (1 − a)x, obtain and compare the divisions corresponding to the Nash bargaining solution in the following cases: (a) when a = 0; (b) when a = 1.

5. Check which of the conditions that characterize the Nash bargaining solution are satisfied by the following solutions.
(a) The 'egalitarian' solution: choose the feasible payoff vector for which the utility gains with respect to the status quo are equal for all players and those gains are maximal. Formally, it selects d + μ̄1, where μ̄ := max{μ : d + μ1 ∈ D} and 1 = (1, . . . , 1) ∈ R^N.
(b) The 'utilitarian' solution: choose the feasible payoff vector for which the sum of the gains is maximal. Formally, it selects arg max_{x∈D_d} Σ_{i∈N} (x_i − d_i).

6. Calculate the egalitarian solution and the utilitarian solution (see Exercise 5) of the bargaining problems in Exercises 2 and 3.

7. Check which of the conditions that characterize the Shapley value are satisfied by the following values.
(a) For all i ∈ N, ψ_i(v) := v(N)/n. (Allocate v(N) in the egalitarian way.)
(b) For all i ∈ N, ψ_i(v) := v(N) − v(N \ i). (Allocate to each player his/her marginal contribution to v(N).)
(c) Allocate 0 to every null player and allocate v(N) in an egalitarian way among the non-null players.
(d) ψ_i(v) := v({1}) if i = 1, and ψ_i(v) := v({1, 2, . . . , i}) − v({1, 2, . . . , i − 1}) if i > 1. (Let the players join in a fixed order 1, 2, . . . , n, with each player receiving his/her marginal contribution to the coalition he/she joins.)

8. For each three-person voting rule, calculate the Shapley–Shubik index and compare it with the number of swings.
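Exercise 8 can be checked computationally. The sketch below assumes a voting rule is encoded as its set of winning vote configurations; the weighted three-person rule (weights 2, 1, 1 and quota 3) and all function names are our own illustration, not the book's:

```python
import math
from itertools import chain, combinations, permutations

def configurations(players):
    """All vote configurations (subsets) of a set of players."""
    s = list(players)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def shapley_shubik(players, winning):
    """Fraction of the n! voter orderings in which each voter is pivotal,
    i.e. his/her entry turns the growing coalition from losing to winning."""
    pivots = dict.fromkeys(players, 0)
    for order in permutations(players):
        coalition = set()
        for i in order:
            coalition.add(i)
            if frozenset(coalition) in winning:
                pivots[i] += 1
                break
    n = len(players)
    return {i: pivots[i] / math.factorial(n) for i in players}

def swings(players, winning):
    """Number of winning configurations in which each voter is decisive."""
    return {i: sum(1 for S in configurations(players)
                   if i in S and S in winning and S - {i} not in winning)
            for i in players}

# Example three-person weighted rule: quota 3, weights (2, 1, 1).
weights, quota = {1: 2, 2: 1, 3: 1}, 3
winning = {S for S in configurations(weights)
           if sum(weights[i] for i in S) >= quota}

ss = shapley_shubik([1, 2, 3], winning)   # {1: 2/3, 2: 1/6, 3: 1/6}
sw = swings([1, 2, 3], winning)           # {1: 3, 2: 1, 3: 1}
```

Note that the two measures rank the voters identically here, though the Shapley–Shubik index is a probability distribution over the voters while the swing counts are raw tallies.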
'Take-it-or-leave-it' committees29

29 The material from Sections 3.1–3.6 is partly drawn from [45] and [38].

This chapter is concerned with what was introduced in Section 2.2 as 'take-it-or-leave-it' committees. The take-it-or-leave-it scenario is explained in detail in Section 3.1. In Section 3.2 we introduce the basic notions of ex post (i.e. after a vote) success and decisiveness. Section 3.3 introduces the probabilistic representation of voters' behaviour or preferences, which provides a formal setting for addressing the likelihood of decisions being passed, the likelihood of success and that of decisiveness in Section 3.4. The normative goal leads to the assumption that all vote configurations are equally probable, which provides a common perspective in which several 'power indices' in the literature are seen as assessments of the likelihood of being decisive or of being successful in a take-it-or-leave-it committee (Section 3.5). The conceptual and analytical distinction between the notions of success and decisiveness is discussed in Section 3.6, where arguments in support of the notion of success as the relevant concept in the take-it-or-leave-it scenario are given. The question of the optimal voting rule in a take-it-or-leave-it committee is addressed in Section 3.7. Two points of view are considered, egalitarianism and utilitarianism, and the recommendations that stem from each of them are presented. The question of the optimal voting rule from either point of view is also addressed in Section 3.8 for committees of representatives.

3.1 The take-it-or-leave-it scenario

We consider voting situations in which a set of voters or committee handles collective decision-making by means of a voting rule, and is entitled only to vote for or against proposals submitted to it by an external agency. It is assumed that there is no possibility of the committee amending or modifying the proposals: hence the name
'take-it-or-leave-it' committees. Moreover, we assume that there is no room either for linking decisions on different proposals. This rules out agreements among members of the committee of the type 'I'll vote "yes" on this issue even though I'd prefer it to be rejected if you vote "yes" on this other issue that is more important to me'. It is also assumed that no voter is indifferent between acceptance and rejection. In these conditions there is no room for negotiating or bargaining, nor for forming coalitions. In other words, there is no room for strategic considerations. The best any voter can do is to vote 'yes' or 'no' according to his/her preferred outcome (acceptance or rejection). Approximate examples of this type of situation could be a referendum or an academic committee that decides by a vote of its members on whether to admit students to a doctoral programme or a summer school without capacity constraints. In fact, such crisp conditions as the ones stated above are seldom found in real-world committees, where there is usually some margin either to link decisions that are separate and only formally independent, or to modify proposals to some extent. In other words, in real-world voting situations there is usually some room for negotiation. Nevertheless, this 'pure' take-it-or-leave-it scenario, free from ambiguity, provides a point of view that allows for a clear interpretation of some 'power indices' in the literature, as well as the existence of some relevant overlooked 'gaps' in the different aspects assessed by them. This point of view is complemented by the picture obtained from the alternative situation considered in Chapter 4, which addresses the game-theoretic analysis of 'pure' bargaining committees. As mentioned in Section 2.2, a common ingredient in the models of both types of voting situation is the voting rule.
Thus in this chapter, using the notation and terminology introduced in Section 1.3, N labels the seats of an N-voting rule W that specifies the outcome (acceptance or rejection) after a vote in a take-it-or-leave-it committee of n members. In a pure take-it-or-leave-it situation several issues can be addressed: for instance, the ease with which proposals are accepted or rejected, the likelihood of a voter obtaining the result that he/she voted for, and the likelihood of a voter being decisive in a vote. Obviously, the voting rule affects these probabilities. Intuition suggests that it is easier to pass a proposal under a simple majority than under the unanimity rule. Similarly, in a dictatorship the dictator always gets the result that he/she votes for and is always decisive, while the rest of the voters can get the outcome they vote for only if their votes coincide with the dictator's, but they are never decisive. But dictatorship is a very special case: in general, knowledge of the voting rule alone is not sufficient to assess these likelihoods. They all depend on the voting rule and also on the voters' votes.

3.2 Success and decisiveness in a vote

To pin down these notions with more precision, let us bring the voters onto the scene and label them with the labels of their respective seats (1, 2, . . . , n). First let us consider the situation ex post, that is, once they have voted on a given proposal, a vote configuration has emerged, and the voting rule has prescribed the final outcome, i.e. passage or rejection of the proposal. A distinction can be made between voters. If the proposal is accepted (rejected), those voters who have voted in favour (against) are satisfied with the result, while the others are not. Following Barry30 [8], we will say that they have been successful. Thus, being successful means obtaining the outcome – acceptance or rejection – that one voted for.
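The intuition above, that proposals pass more easily under simple majority than under unanimity, can be quantified once each vote configuration is given a probability (anticipating the equiprobable model p* of Section 3.3). A minimal sketch, with helper names of our own choosing:

```python
from itertools import chain, combinations

def configurations(n):
    """All 2^n vote configurations over voters 0, ..., n-1."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(range(n), r) for r in range(n + 1))]

def acceptance_probability(n, winning):
    """Probability that a proposal passes when each of the 2^n vote
    configurations is equally likely."""
    return sum(1 for S in configurations(n) if S in winning) / 2 ** n

n = 5
majority = {S for S in configurations(n) if len(S) > n / 2}
unanimity = {frozenset(range(n))}
# Simple majority accepts with probability 0.5; unanimity only 1/32.
```

The gap widens rapidly with n: under unanimity the acceptance probability is 1/2^n, while under simple majority (with n odd) it stays at exactly one half.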
We also say that a successful voter has been decisive in a vote if his/her vote was crucial: i.e., had he/she changed his/her vote, the outcome would have been different.

Definition 9 After a decision is made according to an N-voting rule W, if the resulting configuration of votes is S:
(i) Voter i is said to have been successful if the decision coincides with voter i's vote, i.e. iff (i ∈ S ∈ W) or (i ∉ S ∉ W).
(ii) Voter i is said to have been decisive if he/she was successful and his/her vote was critical to that success, i.e. iff (i ∈ S ∈ W and S\i ∉ W) or (i ∉ S ∉ W and S ∪ i ∈ W).

These notions are indeed 'ex post': they depend on the voting rule used to make decisions and the resulting vote configuration after a vote is cast. To reflect this, we will say that 'i is (is not) successful in (W, S)', or 'i is (is not) decisive in (W, S)'. Also note that these are Boolean notions, in the sense that there is no quantification: a voter merely may or may not be successful or decisive in a vote. Barry [8] also uses the notion of 'luck', considering a successful voter who is not decisive as 'lucky'. Formally, voter i is said to have been (ex post) 'lucky'31 (for short, 'i is lucky in (W, S)') iff (i ∈ S ∈ W and S\i ∈ W) or (i ∉ S ∉ W and S ∪ i ∉ W). Thus we have the obvious relationship:

i is successful in (W, S) ⇔ (i is decisive in (W, S)) ⊻ (i is 'lucky' in (W, S)),

where '⊻' stands for an exclusive 'or'. Can these notions be extended ex ante, that is, once voters have occupied their seats but before the decision is made? Except for the trivial case where the voting configuration that will emerge is known with certainty (in which case ex ante becomes anticipated ex post), a meaningful assessment of success and decisiveness ex ante requires additional information, possibly imperfect, about voters' behaviour.

30 The notion can be traced back under different names to [68] or [70] (see also [14] and [83]).
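Definition 9 and the exclusive-or decomposition of success can be checked mechanically. A minimal sketch; the encoding of a rule as its set of winning configurations, the example rule, and the function names are ours:

```python
from itertools import chain, combinations

def successful(i, S, winning):
    """Definition 9(i): the outcome coincides with i's vote."""
    return (i in S) == (frozenset(S) in winning)

def decisive(i, S, winning):
    """Definition 9(ii): i is successful and flipping i's vote
    would have changed the outcome."""
    if not successful(i, S, winning):
        return False
    flipped = set(S) - {i} if i in S else set(S) | {i}
    return (frozenset(S) in winning) != (frozenset(flipped) in winning)

def lucky(i, S, winning):
    """Barry's 'luck': successful but not decisive."""
    return successful(i, S, winning) and not decisive(i, S, winning)

# Check the exclusive-or decomposition on a 3-voter weighted rule
# (weights 2, 1, 1 and quota 3 -- our example, not the book's):
winning = {frozenset(s) for s in [(1, 2), (1, 3), (1, 2, 3)]}
configs = [set(c) for c in
           chain.from_iterable(combinations([1, 2, 3], r) for r in range(4))]
decomposition_holds = all(
    successful(i, S, winning) == (decisive(i, S, winning) ^ lucky(i, S, winning))
    for S in configs for i in [1, 2, 3])
```

For instance, in the configuration {1, 2, 3} voter 2 is successful but not decisive (dropping his/her vote leaves {1, 3}, still winning), i.e. voter 2 is 'lucky' there.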
3.3 Preferences, behaviour and probabilities

As has been already pointed out, in a pure take-it-or-leave-it situation there is no room for strategic considerations: the best a voter can do is to vote in accordance with his/her preferred outcome. In other words, rational behaviour follows immediately from preferences. Uncertainty about voters' behaviour or preferences can be formally treated and represented in probabilistic terms by means of a probability distribution over all possible vote configurations. That is, we assume that we know (or at least have an estimate of) the probability of occurrence of any vote configuration that may arise. In other words, the elementary events are the vote configurations in 2^N. As their number is finite (2^n), we can represent any such probability distribution by a map p_N : 2^N → R that associates with each vote configuration S its probability of occurrence p_N(S), i.e. p_N(S) gives the probability that voters in S will vote 'yes', and those in N\S will vote 'no'. To keep notation as simple as possible below, when N is clear from the context we will write p. Of course, 0 ≤ p(S) ≤ 1 for any S ⊆ N, and Σ_{S⊆N} p(S) = 1. The probability of i voting 'yes' is denoted by γ_i(p), that is,

γ_i(p) := Prob(i votes 'yes') = Σ_{S: i∈S} p(S).

Let P^N denote the set of all distributions of probability over 2^N. This set can be interpreted as the set of all conceivable voting behaviours or preference profiles of n voters within the present probabilistic setting. The following special distributions of probability will be considered. A distribution is anonymous if the probability of a vote configuration depends only on the number of 'yes'-voters, that is, p(S) = p(T) whenever s = t. A distribution is independent if each voter i independently votes 'yes' with probability t_i and 'no' with probability (1 − t_i).

31 For a thorough discussion of the different uses of the term 'luck', see [59] pp. xxxvii–xli.
The probability of the configuration S is then given by

p(S) = Π_{i∈S} t_i · Π_{j∈N\S} (1 − t_j).

As the reader can easily check, these two conditions are independent, and if a distribution is both anonymous and independent then each voter independently votes 'yes' with a probability t and 'no' with probability (1 − t), so that the probability of the configuration S is then given by p(S) = t^s (1 − t)^{n−s}. A special case that will play a central role is when all vote configurations have the same probability, denoted by p*. That is,

p*(S) = 1/2^n for all configurations S ⊆ N.

This is equivalent to assuming that each voter, independently of the others, votes 'yes' with probability 1/2, and votes 'no' with probability 1/2. Thus this probability accumulates all symmetries: it is anonymous and independent, with equal inclination towards 'yes' and 'no'.

3.4 Success and decisiveness ex ante

Assume that a probability distribution over vote configurations p enters the picture as a second input besides the voting rule W that governs decisions in a take-it-or-leave-it committee. In a voting situation thus described by the pair (W, p) the ease of passing proposals or probability of acceptance is given by

α(W, p) := Prob(acceptance) = Σ_{S∈W} p(S). (14)

Furthermore, success and decisiveness can be defined ex ante. It suffices to replace the sure configuration S by the random vote configuration specified by p in the ex post definitions (12) and (13). This yields the following extension of these concepts.
Definition 10 Let (W, p) be an N-voting situation, where W is the voting rule for making decisions and p ∈ P^N is the probability distribution over vote configurations, and let i ∈ N:
(i) Voter i's (ex ante) success is the probability that i is successful:

Ω_i(W, p) := Prob(i is successful) = Σ_{S: i∈S∈W} p(S) + Σ_{T: i∉T∉W} p(T). (15)

(ii) Voter i's (ex ante) decisiveness is the probability that i is decisive:

Φ_i(W, p) := Prob(i is decisive) = Σ_{S: i∈S∈W, S\i∉W} p(S) + Σ_{T: i∉T∉W, T∪i∈W} p(T). (16)

Note that strictly speaking i's decisiveness depends only on the other voters' behaviour, not on his/her own. To see this, voter i's decisiveness can be rewritten as

Φ_i(W, p) = Σ_{S: i∈S∈W, S\i∉W} (p(S) + p(S\i)).

Observe that for each S, p(S) + p(S\i) is the probability of all voters in S\i voting 'yes' and those in N\S voting 'no'; in this case, whatever voter i's vote, he/she is decisive. Success Ω_i, by contrast, depends on all voters' behaviour. Therefore there is no way to derive one of these notions from the other, and the only relations in general are the obvious Φ_i(W, p) ≤ Ω_i(W, p) and Barry's [8] equation: 'Success' = 'Decisiveness' + 'Luck', which remains valid in a much more precise and general version:

Ω_i(W, p) = Φ_i(W, p) + Λ_i(W, p),

where Λ_i(W, p) denotes voter i's (ex ante) 'luck' or probability of being 'lucky', that is:

Λ_i(W, p) := Σ_{S: i∈S, S\i∈W} p(S) + Σ_{S: i∉S, S∪i∉W} p(S).

Definition (15) of Ω_i(W, p) (the same occurs with (16) for Φ_i(W, p)) aggregates as equivalent the likelihood of being successful in case of voting 'yes' and in case of voting 'no'. In some cases it is interesting to consider the case in which the relative importance that voters attach to having a proposal rejected or accepted in accordance with their preferences is not the same.
We introduce the following notation:

Ω⁺_i(W, p) := Prob(i is successful & i votes 'yes') = Σ_{S: i∈S∈W} p(S),
Ω⁻_i(W, p) := Prob(i is successful & i votes 'no') = Σ_{T: i∉T∉W} p(T),

so that voter i's probability of success is Ω_i(W, p) = Ω⁺_i(W, p) + Ω⁻_i(W, p). We refer to Ω⁺_i(W, p) as positive success, and to Ω⁻_i(W, p) as negative success. Voter i's probability of being decisive can be decomposed in a similar way. Conditional probabilities of success and decisiveness under different conditions are also meaningful. The conditional probability of event A given event B, that is, the probability of A given that B is sure, is given by

P(A | B) = P(A ∩ B) / P(B).

Here A may stand for 'voter i is successful/decisive' and B is the condition32. The following questions arise naturally in this context:

Q.1: What is voter i's conditional probability of success (decisiveness), given that voter i votes in favour of (against) the proposal?
Q.2: What is voter i's conditional probability of success (decisiveness), given that the proposal is accepted (rejected)?

This makes for eight possible conditional probabilities33. A little notation is necessary. We will superindex the measures (Ω_i or Φ_i) with the condition. The superindex 'i+' ('i−') expresses the condition 'given that i votes "yes" ("no")'. So the answers to Q.1 are given by Ω_i^{i+}, Φ_i^{i+}, Ω_i^{i−} and Φ_i^{i−}, respectively34. The superindex 'Acc' ('Rej') expresses the condition 'given that the proposal is accepted (rejected)'. Thus the answers to Q.2 are given by Ω_i^{Acc}, Φ_i^{Acc}, Ω_i^{Rej} and Φ_i^{Rej}, respectively. As an illustration, we formulate two of them explicitly. Voter i's conditional probability of being decisive, given that voter i votes in favour of the proposal, is given by

Φ_i^{i+}(W, p) = Prob(i is decisive | i votes 'yes') = (1/γ_i(p)) Σ_{S: i∈S∈W, S\i∉W} p(S).

Voter i's conditional probability of success, given that the proposal is accepted, is given by

Ω_i^{Acc}(W, p) = Prob(i is successful | acceptance) = (1/α(W, p)) Σ_{S: i∈S∈W} p(S).

32 It is implicitly assumed that p(B) ≠ 0 whenever we refer to a conditional probability for condition B.
33 Of course, other questions involving different conditions (e.g. conditional to 'i and j voted the same', etc.) are possible. We highlight these particular questions because some power measures proposed in the literature can be reinterpreted as one of these conditional probabilities for a particular probability distribution.
34 Note the difference between Ω_i^{i+} and Ω⁺_i, related by Ω⁺_i(W, p) = γ_i(p) Ω_i^{i+}(W, p).

Table 3.1. Ten different unconditional and conditional probabilities of success and decisiveness

Condition:      (none)   i votes 'yes'   i votes 'no'   Acceptance   Rejection
Success:        Ω_i      Ω_i^{i+}        Ω_i^{i−}       Ω_i^{Acc}    Ω_i^{Rej}
Decisiveness:   Φ_i      Φ_i^{i+}        Φ_i^{i−}       Φ_i^{Acc}    Φ_i^{Rej}

The ten different unconditional and conditional probabilities of success and of decisiveness considered so far are summarized in Table 3.1. They can all be used in principle for a positive or descriptive evaluation of a voting situation if an estimate of the voters' voting behaviour/preferences (i.e. of p) is available. In each particular real-world case the better the estimate of the probability distribution over vote configurations, the better the measure of actual success or decisiveness. In the next section, in which we set p = p* with normative purposes, we will see how seven out of these ten variants (eight out of eleven if we include α(W, p)) are related to power indices. Section 3.6 discusses the difference between success and decisiveness and their different conditional variants.

3.5 A priori assessments based on the voting rule

The probabilistic assessments considered in the previous section can be used for normative purposes in the evaluation of a voting rule, irrespective of which voters occupy the seats, or for the comparison of different rules in the design of decision-making procedures.
In this case, the particular personality or preferences of the voters or, equivalently in a pure take-it-or-leave-it scenario, their actual patterns of behaviour, should not be taken into account. Then we arrive at a logical deadlock: in our setup, measurement is based on a probability distribution over vote configurations, but the relevant information for estimating this probability has to be ignored. What can be done? One way out of the difficulty is to assume that all vote configurations are equally probable a priori:

p*(S) := 1/2^n for any configuration S ⊆ N.

This 'unbiased' choice seems consistent with the normative point of view according to which any information beyond the voting rule itself should be ignored. This a priori probabilistic model of behaviour is not beyond argument. It has often been criticized, even when a normative point of view is assumed. Other models have been considered35, but this one seems to us reasonable and the simplest, and it makes sense when the objective is not to assess a particular voting situation but the voting rule itself, keeping any further information behind a 'veil of ignorance'. For this special, totally symmetric, distribution of probability some special relations that we use later hold. One concerns α(W, p*) (see (14)), which can be interpreted as an index of the a priori ease of passing proposals with rule W. For all voting rules we have

α(W, p*) ≤ 1/2.

We also have

Ω⁺_i(W, p*) = 0.5Ω_i(W, p*) + 0.5α(W, p*) − 0.25,
Ω⁻_i(W, p*) = 0.5Ω_i(W, p*) − 0.5α(W, p*) + 0.25.

The following relationship, as will be commented later (see 3.6.1), is the source of some confusion between success and decisiveness:

Ω_i(W, p*) = 0.5 + 0.5Φ_i(W, p*). (21)

Finally, unconditional decisiveness and conditional decisiveness given a positive vote or given a negative vote are indistinguishable:

Φ_i(W, p*) = Φ_i^{i+}(W, p*) = Φ_i^{i−}(W, p*). (22)
But it is worth remarking that these relationships do not hold in general³⁶ for p ≠ p*.

³⁵ See for instance [82].
³⁶ Relationship (22) holds for all p such that the vote of every voter is independent from the vote of the remaining voters.

Some 'power indices' in the literature can be seen as the particularization of some of the measures introduced in the previous section (Table 3.1) for this specific probability distribution. We review them here. Readers not interested in this review can skip the rest of this section and proceed to Section 3.6.

3.5.1 Rae index

Rae [70] studies the symmetric voting rule that maximizes the correspondence between a single anonymous individual vote and the collective decision. He defines an index of such a correspondence based on two assumptions³⁷: the votes are independent from one another, and each voter votes 'yes' with probability 1/2 and 'no' with probability 1/2. Dubey and Shapley [20] suggest that the index can be generalized to any voting rule and to any voter, leading to what can be referred to as the Rae index, given by

 Rae_i(W) := #{S : i∈S∈W}/2^n + #{S : i∉S∉W}/2^n = Σ_{S: i∈S∈W} p*(S) + Σ_{S: i∉S∉W} p*(S).

That is, Rae's index of a player i for a given rule is i's probability of success (15) for the particular distribution p*:

 Rae_i(W) = Ω_i(W, p*).

3.5.2 Banzhaf(–Penrose) index

Banzhaf's [4] original or 'raw' index (see 2.1.4) for a seat i and voting rule W is given by

 rawBz_i(W) := number of winning configurations in which i is decisive.

³⁷ In fact he makes a third assumption, namely that the probability of no member supporting the proposal is zero. But this must be dropped because under the other conditions the probability of no one supporting the proposal is necessarily 1/2^n.
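The identities above at p* are easy to verify computationally. The following is a small sketch, not taken from the text: it enumerates all vote configurations of an invented 4-voter weighted rule (weights 3, 2, 1, 1 and quota 4 are chosen purely for illustration) and checks that the Rae index, i.e. success under p*, equals 0.5 plus half of decisiveness:

```python
from fractions import Fraction
from itertools import combinations

n = 4
voters = range(n)
w, quota = [3, 2, 1, 1], 4   # hypothetical weighted majority rule, for illustration only

def subsets():
    for r in range(n + 1):
        for s in combinations(voters, r):
            yield frozenset(s)

W = {S for S in subsets() if sum(w[i] for i in S) >= quota}  # winning configurations
p_star = Fraction(1, 2 ** n)  # every configuration equally probable

def omega(i):   # success: the collective outcome agrees with i's vote
    return sum(p_star for S in subsets() if (i in S) == (S in W))

def phi(i):     # decisiveness: flipping i's vote flips the outcome
    return sum(p_star for S in subsets() if ((S | {i}) in W) != ((S - {i}) in W))

for i in voters:
    # Rae_i(W) = Omega_i(W, p*) = 0.5 + 0.5 * Phi_i(W, p*)
    assert omega(i) == Fraction(1, 2) + Fraction(1, 2) * phi(i)
```

The assertion holds for every monotonic rule, not just this one: when i is decisive he/she succeeds for sure, and when i is not decisive he/she succeeds in exactly one of the two completions of the others' votes.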
Dubey and Shapley [20] (see also [67]) proposed the following normalization of this index as a ratio:

 Bz_i(W) := (number of winning configurations in which i is decisive) / (total number of vote configurations containing i).

Therefore in the current notation the Banzhaf index is

 Bz_i(W) = Φ_i^+(W, p*).

Thus, by (22), we have

 Bz_i(W) = Φ_i(W, p*) = Φ_i^+(W, p*) = Φ_i^-(W, p*).   (23)

This provides three alternative interpretations of the Banzhaf index as an expectation of being decisive. Also note the relationship with the Rae index that emerges by just rewriting (21):

 Rae_i(W) = 0.5 + 0.5 Bz_i(W).   (24)

This relationship was anticipated by Penrose [68], who, constraining attention to weighted majority rules and assuming that all vote configurations are equally probable, writes: 'the power of the individual vote can be measured by the amount by which his chance of being on the winning side exceeds one half. The power, thus defined, is the same as half the likelihood of a situation in which an individual vote can be decisive.' Penrose's measure of power is then 0.5 Bz_i(W). For this reason the Banzhaf index is sometimes referred to as the Penrose or Banzhaf–Penrose index.

3.5.3 Coleman indices

Coleman [15, 16] defines three different indices in terms of ratios. The 'power of a collectivity to act' measures the ease of decision-making by means of a voting rule W, and is given by

 A(W) = (number of winning configurations) / (total number of vote configurations).

Voter i's 'Coleman index to prevent' action (ColP_i) is given by

 ColP_i(W) = (number of winning configurations in which i is decisive) / (total number of winning configurations),

while voter i's 'Coleman index to initiate' action (ColI_i) is given by the ratio of the number of losing configurations in which i is decisive to
the total number of losing configurations. All three indices can be reinterpreted in probabilistic terms as

 A(W) = α(W, p*),
 ColP_i(W) = Φ_i^Acc(W, p*),
 ColI_i(W) = Φ_i^Rej(W, p*).

The difference between the Coleman indices and the Banzhaf index is clear. They all measure the likelihood of decisiveness assuming all vote configurations to be equally probable, but the conditions are different. Still, they are often confused. The origin of the confusion lies in the fact that their normalizations coincide, giving rise to the so-called 'Banzhaf–Coleman' index. In formula, we have the following relation for any voting rule W:

 Bz_i(W) / Σ_{j∈N} Bz_j(W) = ColP_i(W) / Σ_{j∈N} ColP_j(W) = ColI_i(W) / Σ_{j∈N} ColI_j(W).

This coincidence should serve only as a warning against the common practice of normalizing these indices, since it results in their losing their probabilistic interpretation. Normalization also makes the comparison of different rules problematic: it is as if percentages of cakes of different sizes were compared. Also note that in general, for arbitrary probability distributions, the normalizations of Φ_i(W, p), Φ_i^Acc(W, p), and Φ_i^Rej(W, p) do not coincide.

The Coleman indices have recently attracted more attention than was formerly paid to them³⁸. This recent upsurge in interest may be related to the intuition, or the evidence in some cases, of the different attitudes of voters towards the prospect of having the proposals they support accepted, and the prospect of having the proposals they dislike rejected³⁹. This nuance is missed by the Banzhaf index.

³⁸ In particular in the context of the European Council of Ministers (see for instance [53]).
³⁹ See for instance [56].

By (23), the unconditional decisiveness of a voter, as well as his/her conditional decisiveness given that he/she votes 'yes' or given that he/she votes 'no', all collapse into the Banzhaf index.
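Under p*, the Banzhaf index and both Coleman indices all reduce to counting the same decisive configurations over different denominators, which makes the coincidence of their normalizations transparent. A small sketch (the 4-voter weighted rule, with weights 3, 2, 1, 1 and quota 4, is an invented illustration):

```python
from fractions import Fraction
from itertools import combinations

n = 4
voters = range(n)
w, quota = [3, 2, 1, 1], 4   # hypothetical weighted rule for illustration

all_S = [frozenset(s) for r in range(n + 1) for s in combinations(voters, r)]
W = [S for S in all_S if sum(w[i] for i in S) >= quota]   # winning configurations
L = [S for S in all_S if S not in W]                      # losing configurations

def raw_bz(i):  # winning configurations in which i is decisive
    return sum(1 for S in W if i in S and (S - {i}) not in W)

A    = Fraction(len(W), 2 ** n)                               # power of the collectivity to act
Bz   = {i: Fraction(raw_bz(i), 2 ** (n - 1)) for i in voters}
ColP = {i: Fraction(raw_bz(i), len(W)) for i in voters}
# i is decisive in a losing S exactly when S ∪ {i} wins, so the same count appears:
ColI = {i: Fraction(raw_bz(i), len(L)) for i in voters}

def normalized(index):
    total = sum(index.values())
    return {i: index[i] / total for i in voters}

# the three normalizations coincide even though the indices themselves differ
assert normalized(Bz) == normalized(ColP) == normalized(ColI)
```

Since all three indices are proportional to the same raw counts, dividing by their respective totals erases exactly the denominators that gave each one its probabilistic meaning.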
The Coleman indices have sometimes been used as a remedy, in a somewhat confusing attempt to distinguish what is indistinguishable from the a priori decisiveness point of view. Indeed,

 ColP_i(W) = Φ_i^Acc(W, p*) ≠ Φ_i^Rej(W, p*) = ColI_i(W).

This is confusing for two reasons. First, if the prospects mentioned above are to be evaluated, the right condition seems to be 'given that voter i supports (rejects) the proposal' (a condition of which a voter has more knowledge than whether the proposal will be accepted or not). Second, in the Coleman indices the condition 'the proposal is accepted (rejected)' varies in probability with the rule. This makes the comparison of these indices for different rules problematic (we come back to this in 3.6.2).

3.5.4 König and Bräuninger's inclusiveness index

More recently, König and Bräuninger [34] define voter i's 'inclusiveness' as the ratio of winning configurations containing i:

 KB_i(W) := (number of winning configurations containing i) / (total number of winning configurations),

which can be rewritten as

 KB_i(W) = Ω_i^Acc(W, p*).

The last comments about the Coleman indices apply also to König and Bräuninger's inclusiveness. An additional weakness of König and Bräuninger's index is that information is lost by disregarding the natural pair of Ω_i^Acc(W, p*), which is Ω_i^Rej(W, p*).

3.5.5 Summary and remarks

The following table summarizes the relationships between these 'power indices' and the current probabilistic model. Assuming the probability distribution p = p*, Table 3.1 becomes Table 3.2.

Table 3.2.
Relationships between 'power indices' and the current probabilistic model

Condition:     none        i votes 'yes'   i votes 'no'   acceptance    rejection
Success        Rae_i(W)    Ω_i^+           Ω_i^-          KB_i(W)       Ω_i^Rej
Decisiveness   Bz_i(W)     Bz_i(W)         Bz_i(W)        ColP_i(W)     ColI_i(W)

These 'power indices' can thus be jointly justified as a priori assessments of different aspects of the voting rule itself on the same normative grounds, based on the probability distribution that assigns the same probability to all vote configurations.

Table 3.2 still raises some further questions. First, there is the question of the interest for applications of the indices reviewed. In this respect the main issue is that of which notion should be given pre-eminence in a take-it-or-leave-it committee: success or decisiveness? In the next section we give arguments in support of success as the relevant notion in a pure take-it-or-leave-it environment. We also deal with the overlooked cells in Table 3.2, that is, Ω_i^+(W, p*) and Ω_i^-(W, p*).

There is also the question of the power indices that do not appear in Table 3.2 and their possible relation with the two-ingredient model of a take-it-or-leave-it committee. Some power indices hardly fit or do not enter the picture at all. Most of them seek to measure 'power' understood as decisiveness, which, as mentioned above, we believe to be secondary in pure take-it-or-leave-it situations. But their interpretation in this scenario is problematic (see [45]).

As briefly commented at the end of Section 2.1.6, in view of the proliferation of power indices, some authors propose the use of paradoxes/postulates to select among them. In fact, most of these postulates embody confusing desiderata in which the ideas of success and decisiveness are conflated. In [47] the ex ante success and decisiveness for arbitrary probability distributions p are tested against some of the best-known voting power postulates.
It is shown that in all cases in which a 'paradox' may occur it can be explained in clear and simple terms, so that the paradoxes dissipate as such. Surprisingly enough, success, unavoidably intermingled with decisiveness in any pre-conceptual notion of voting power, behaves even better with respect to some postulates in principle intended for measures of decisiveness.

3.6 Success versus decisiveness

3.6.1 Success is the issue in a take-it-or-leave-it scenario

As commented in Section 2.2, considerable vagueness in the specification of the voting situation considered underlies most of the literature on 'voting power'. On such vague grounds the notion of decisiveness has de facto been widely accepted as the right basis for the formalization of a measure of 'voting power'. Shapley and Shubik's interpretation of their index as the probability of being 'pivotal' in the making of a decision contributed to this choice. Banzhaf's and Coleman's indices, in spite of their authors' criticism of the Shapley–Shubik index, are also evaluations of decisiveness. Even Penrose concentrates on decisiveness as the part of success that can be credited to the voter.

In spite of this dominant view, some authors have raised doubts as to the relevance of this interpretation of 'power' as decisiveness, suggesting as more relevant the notion of satisfaction or success. That is, focusing on the likelihood of obtaining the result that one votes for irrespective of whether one's vote is crucial for it or not. Rae was the first to take an interest in a measure of success for symmetric voting rules. A few other authors have since also paid attention to the notion of success⁴⁰. Nevertheless, in general, the notion of success has usually been either overlooked or considered as just a sort of secondary ingredient of decisiveness. The difference between the two notions should be obvious, unless firmly entrenched mental habits prevent one from perceiving it.
The confusion is partly due to relationship (24) ((21) in our notation), anticipated in [68] and proved in [20]. This relationship may give the impression that success and decisiveness are two faces of the same coin⁴¹, but it can be shown⁴² that

 Ω_i(W, p) = 0.5 + 0.5 Φ_i(W, p)

holds for all W if and only if p = p*, i.e. under the assumption that all vote configurations are equally probable. Thus, in general,

 Ω_i(W, p) ≠ 0.5 + 0.5 Φ_i(W, p),

⁴⁰ See, for instance, [14, 8, 83], and more recently [34, 10, 38].
⁴¹ For instance, Hosli and Machover [31] claim that 'these two concepts of voting power, far from being opposed to each other, are virtually identical, and differ only in using a different scale of measurement'.
⁴² See [38].

so success and decisiveness are not only conceptually different but also analytically independent, as there is no general way to derive one concept from the other. The following example shows how, even for rather symmetric behaviour, as described by anonymous and independent probability distributions, two voting rules can be ranked differently from these two points of view.

Example 3.1: Consider a three-member committee in which each member votes 'yes' in an independent way with probability 3/4, so that p(S) = (3/4)^s (1/4)^(n−s). Compare the simple majority rule (W^SM) and the unanimity rule (W^N) for this distribution of probability. We have

 Ω_i(W^SM, p) = 52/64,  Ω_i(W^N, p) = 43/64,
 Φ_i(W^SM, p) = 24/64,  Φ_i(W^N, p) = 36/64.

Thus Ω_i(W^SM, p) > Ω_i(W^N, p), while Φ_i(W^SM, p) < Φ_i(W^N, p). Thus voter i will prefer the simple majority if he/she prioritizes maximizing his/her likelihood of success, while he/she will prefer unanimity if he/she prioritizes decisiveness.

With the conceptual and analytical difference between these two notions settled, let us come back to the basic issue: Which is more important, success or decisiveness?
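Example 3.1 can be checked by direct enumeration. A minimal sketch (voters indexed 0 to 2 rather than 1 to 3):

```python
from fractions import Fraction
from itertools import combinations

n = 3
voters = range(n)
q = Fraction(3, 4)   # each voter votes 'yes' independently with probability 3/4

all_S = [frozenset(s) for r in range(n + 1) for s in combinations(voters, r)]
SM = [S for S in all_S if len(S) > Fraction(n, 2)]   # simple majority rule
UN = [S for S in all_S if len(S) == n]               # unanimity rule

def p(S):  # p(S) = (3/4)^s (1/4)^(n - s)
    return q ** len(S) * (1 - q) ** (n - len(S))

def omega(i, W):   # success: the outcome agrees with i's vote
    return sum(p(S) for S in all_S if (i in S) == (S in W))

def phi(i, W):     # decisiveness: flipping i's vote flips the outcome
    return sum(p(S) for S in all_S if ((S | {i}) in W) != ((S - {i}) in W))
```

Running it reproduces the four fractions of the example: success is 52/64 under simple majority against 43/64 under unanimity, while decisiveness is 24/64 against 36/64, so the two criteria rank the two rules in opposite ways.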
Once again, as argued in Section 2.2, the only way to seriously address this and other basic issues is by specifying clearly the situation that one is referring to. In this case we are concerned with a pure take-it-or-leave-it scenario. In such a context, from the voters' point of view, what really matters? Is it the likelihood of obtaining the result that one voted for? Or is it the likelihood of obtaining the result that one voted for and being crucial for it? If a voter is faced with such a choice in a pure take-it-or-leave-it situation, it seems clear that he/she will prefer to obtain with higher probability the outcome that he/she votes for over being decisive for it with a higher probability. Consider Example 3.1. If the probability distribution actually expresses the common beliefs of the members of the committee, no doubt a rational individual would prefer the decision to be made by means of the simple majority rule in spite of the fact that the probability of being decisive is lower than with the unanimity rule⁴³. Why should he/she care about the likelihood of being decisive in a situation in which there is no place for strategic considerations, as argued in Section 3.1? Only the possibility of strategic use of a decisive position could make this relevant, but in a pure take-it-or-leave-it environment there is no such possibility.

In short, in a pure take-it-or-leave-it situation success is certainly the relevant issue, while decisiveness is immaterial. This is a departure from the dominant underlying assumption in the traditional voting power literature. As we will see in the next two sections, some relatively popular recommendations about 'the best voting rule' in some contexts appear differently in this light.

3.6.2 Conditional success

With this basic issue settled, what can be said about the conditional variants of success in Tables 3.1 and 3.2?
Conditional evaluations of the likelihood of success can also be informative, as well as relevant for the comparison of voting rules. In each vote each voter knows his/her particular vote, but he/she is in general uncertain about whether the proposal will be accepted or rejected. Therefore the information provided by Ω_i^+ and Ω_i^- seems more interesting than that given by the other conditional variants. Moreover, the condition (i.e. acceptance or rejection) under which the conditional probability is calculated for Ω_i^Acc or Ω_i^Rej varies from one rule to another, which makes comparing rules based on Ω_i^Acc or Ω_i^Rej problematic.

In sum, the three probabilities Ω_i, Ω_i^+, and Ω_i^-, along with α, seem to be the most relevant notions in the context of take-it-or-leave-it committees. Each of these measures provides a criterion for comparing voting rules for a given p. Measure α provides a point of view detached from any particular voter. Now consider the others. If a voter is interested in maximizing the probability of obtaining the outcome that he/she votes for, then Ω_i is the criterion for voter i. But often voters are differently concerned with the prospect of obtaining the preferred result depending on the sense of their vote, i.e. acceptance or rejection. In particular, when rejection means maintaining the status quo there may be a bias in either direction, giving priority to Ω_i^+ or Ω_i^-.

The following are important simple properties for the comparison of rules. If W ⊆ W′, then for any p,

 α(W, p) ≤ α(W′, p),

and for any i and any p,

 Ω_i^+(W, p) ≤ Ω_i^+(W′, p)  and  Ω_i^-(W, p) ≥ Ω_i^-(W′, p).

⁴³ At least assuming that success in a positive sense (i.e. of a 'yes' vote) and success in a negative sense (i.e. of a 'no' vote) are equally valued. If, for instance, more importance is given to the success of a 'no' vote, then unanimity may be preferred (we return to this point later).
An example illustrates this: as W^N ⊆ W^SM, whatever the estimate of p, a voter only interested in getting a proposal accepted whenever he/she favours it will prefer the simple majority to unanimity, while a voter only interested in getting a proposal rejected whenever he/she votes against it will have the reverse preference. Note, however, that depending on p we may have either Ω_i(W, p) ≥ Ω_i(W′, p) or Ω_i(W, p) ≤ Ω_i(W′, p).

3.6.3 Summary

From Table 3.1 (or Table 3.2 if we adopt the normative a priori evaluation), we are basically left with three probabilities, Ω_i, Ω_i^+, and Ω_i^-, along with α, as the relevant parameters for the comparison of voting rules. A voter i with an estimate of the probability distribution p, who is indifferent between the positive and negative directions of success, will prefer W to W′ if his/her probability of obtaining his/her preferred outcome is greater with W than with W′. Thus, such a preference can be expressed by

 W ≽_i W′  iff  Ω_i(W, p) ≥ Ω_i(W′, p).

Similarly, depending on the priority given to either direction of success, other preferences can be formulated by replacing Ω_i by Ω_i^+ or Ω_i^-.

3.7 The choice of voting rule: egalitarianism and utilitarianism

So far, assuming a take-it-or-leave-it environment, we have considered different points of view that allow comparisons to be made between different voting rules, as well as between different seats under a given voting rule. These comparisons are based on the voting rule and the probability distribution over vote configurations. Some of these notions help provide a foundation for normative recommendations about the choice of voting rule in such committees. This is the issue addressed in this and the next sections. These recommendations are normative in nature. Thus, taking a detached point of view and disregarding the actual preferences and voting behaviour of the individuals occupying the seats on the committee, we make the same assumption as in Section 3.5.
Assumption 1. A priori all vote configurations are equally probable.

Two principles which are commonly used to make normative assessments in different contexts are those of egalitarianism and utilitarianism⁴⁴. The former holds that equal treatment should be given to equals or, in utility terms, the same utility level. The latter sets out to maximize the sum of the voters' utilities. Both criteria involve interpersonal comparison of utilities. In the first case comparability is a precondition for equalization; in the second, summing up presupposes some form of homogeneity.

In the framework considered so far, a take-it-or-leave-it situation is specified by a voting rule and a probability distribution over all possible voting behaviours or preference profiles. But in order to apply either of these principles, utility has to be introduced, and for that the utility that voters obtain in a vote has to be specified. There are four possible situations for a voter to which utilities are to be attached. Namely, one must specify

 u_i(W, S) = a_i if i∈S∈W;  b_i if i∈S∉W;  c_i if i∉S∉W;  d_i if i∉S∈W.

⁴⁴ The most prominent authors related to egalitarianism and utilitarianism are John Rawls [71] and Jeremy Bentham [11] respectively.

Obviously, if a voter is in favour of the proposal (i∈S), he/she prefers the proposal to be accepted (S∈W) rather than rejected (S∉W): that is, a_i > b_i. Similarly we should have c_i > d_i. But the comparison between either of the first two situations (success or failure for a proposal that i supports) and either of the second two (success or failure for a proposal that i rejects) is not obvious: they are necessarily associated with different proposals⁴⁵, and the importance of proposals may differ from vote to vote and from voter to voter. As an initial simplification we assume the same utility for every voter in each case (i.e. a_i, b_i, c_i, d_i are the same for all i).
This can be justified in terms of the 'veil of ignorance' [71] point of view that underlies the normative approach that we assume. The relative value of the pairs a, b and c, d still remains. If obtaining the preferred outcome, whether the proposal is supported or rejected, is the goal of every voter, success is the source of utility. But as voters may value success differently in the two directions, their utility may differ in the case of acceptance or rejection; that is, it may be that a < c, a = c, or a > c. Although a similar distinction is possible for both forms of failure, we make the simplifying assumption that both are equally valued and set their utility to 0 (i.e. b = d = 0).

Assumption 2⁴⁶. Voters have expected utility preferences, and having success means λ units of utility (0 ≤ λ ≤ 1) for any voter if the voter voted 'yes', and 1 − λ units of utility if the voter voted 'no'; not having success means zero utility; i.e. if the rule is W and the vote

⁴⁵ If we are referring, as we are here, to the same voter. Or with different voters if we refer to the same alternative.
⁴⁶ An alternative to Assumption 2 is the following. If proposals are voted against the status quo, and only modifications of the status quo matter, then one can assume b = c. Then setting the status quo to 0, i.e. b = c = 0, we have an alternative choice of utilities given by

 u_i^λ(W, S) = λ if i∈S∈W;  0 if i∈S∉W or i∉S∉W;  λ − 1 if i∉S∈W,

where 0 ≤ λ ≤ 1. Nevertheless, the conclusions are basically the same as under Assumption 2.
The parameter λ reflects the importance which is given to positive success relatively to negative success: λ = 0 means that only negative success matters, λ = 1 means that only positive success matters, λ = 1/2 means that voters are indifferent in their evaluations of positive and negative success. Indeed for λ = 1/2 we have λ=1/2 ui (W, S) 1 , if i ∈ S ∈ W or i ∈ /S∈ /W = 2 0, if i ∈ S ∈ / W or i ∈ / S ∈ W. Under Assumption 2, for a given W and a probability distribution over vote configurations p, the expected utility of the voting situation (W, p) for a voter i, denoted as in 1.4.4 by u¯ λi , is given u¯ λi (W, p) := E[uλi (W, S)] = p(S)λ + p(S)(1 − λ) S:i∈S / ∈ /W − = λ+ i (W, p) + (1 − λ)i (W, p). When positive success and negative success are equally valued (i.e. when λ = 1/2), voter i’s expected utility is simply half the unconditional success: λ=1/2 u¯ i (W, p) = 1 i (W, p). 2 In general, under Assumption 1, setting p = p∗ in (26) gives voter i’s a priori expected utility, which can be rewritten using (19) and (20) as u¯ λi (W, p∗ ) 1 −λ 2 i (W, p∗ ) 1 ∗ − α(W, p ) + . 2 2 Voting and Collective Decision-Making 3.7.1 Egalitarianism47 Egalitarianism argues for equal treatment of equals behind the veil of ignorance or, in our framework under Assumptions 1 and 2, equal a priori expected utilities. Therefore a voting rule W satisfies this principle a priori if u¯ λi (W, p∗ ) = u¯ λj (W, p∗ ), for all i, j. In view of (28) this is equivalent to requiring that i (W, p∗ ) = j (W, p∗ ), for all i, j, that is, all voters should, a priori, have the same probability of obtaining the outcome that they vote for. Any symmetric rule satisfies this principle. This includes the unanimity rule and the simple majority rule, as well as all intermediate q-majority rules (see 1.3.2). We can thus summarize the conclusions as follows: Proposition 11 Under Assumptions 1 and 2, whatever the parameter λ (0 ≤ λ ≤ 1), any symmetric rule implements the egalitarian principle a priori. 
3.7.2 Utilitarianism

The utilitarian optimum consists of maximizing aggregate utility. In our framework this would be achieved by a rule for which the aggregated a priori expected utility is maximal, i.e. by a rule W that solves the problem

 Max_{W∈VR_N} Σ_{i∈N} ū_i^λ(W, p*).   (30)

The following equivalence will help us to solve it for different values of the parameter λ (see the Appendix for the proof).

⁴⁷ The results obtained in this section and the next can be seen as particular cases of those obtained in Section 3.8 for a committee of representatives when the groups represented by the members of the committee contain just one individual each.

Proposition 12 Problem (30) is equivalent to

 Max_{W∈VR_N} Σ_{S∈W} (s − (1 − λ)n).   (31)

Therefore maximizing the aggregated expected utility means choosing W with as many winning configurations satisfying s > (1 − λ)n as possible. The choice seems simple: any configuration that satisfies the condition should be winning. In fact, if λ ≤ 1/2, this condition defines a q-majority voting rule with quota q = 1 − λ that solves the problem. Thus the more importance is given to negative success, the greater the quota should be in order to implement the utilitarian principle. In particular, if λ = 1/2 (i.e. equal importance is given to positive and negative success), then q = 1/2. Thus the simple majority is the rule that best implements the utilitarian principle⁴⁸ if λ = 1/2. The other extreme occurs for λ close to 0 (i.e. only negative success matters), when unanimity is the best rule.

Proposition 13 Under Assumptions 1 and 2, if λ ≤ 1/2, the voting rule that best implements the utilitarian principle a priori is the q-majority rule with quota q = 1 − λ.

A problem appears when λ > 1/2. In this case the condition s > (1 − λ)n may define an improper rule. The solution seems to be to keep the quota as low as possible, i.e. at 1/2. The simple majority seems again to be the best of all the proper rules.
The following counterexample shows that this is not true in general.

Example 3.2: Let N = {1, 2, 3, 4} and assume that the utility of any voter is given by (25) with λ = 3/4. Consider the simple majority W^SM and W′ = W^SM ∪ {{1, 2}}. Then we have

 Σ_{S∈W′} (s − (1 − λ)n) = 1 + Σ_{S∈W^SM} (s − (1 − λ)n) > Σ_{S∈W^SM} (s − (1 − λ)n).

Example 3.2 provides a hint as to why the simple majority rule may fail to be the utilitarian optimum: as long as it is possible to add vote configurations whose size is greater than (1 − λ)n to the set of winning ones without making the rule improper, the aggregated expected utility will increase. It also gives a hint about how far this can go: not very far indeed, as the following proposition shows (see the proof in the Appendix).

Proposition 14 With λ > 1/2, if W is the voting rule that best implements the a priori utilitarian principle, then for all S ∈ W it holds that s ≥ n/2.

This means in particular that even if λ > 1/2, for n odd the simple majority implements the a priori utilitarian principle, while for n even it only almost implements it. More precisely, for n even the simple majority is the best of all symmetric rules. Then we have the following proposition.

Proposition 15 Under Assumptions 1 and 2, if λ > 1/2, then if n is odd the simple majority is the voting rule that best implements the utilitarian principle a priori. If n is even the simple majority is the utilitarian-best of the symmetric rules.

⁴⁸ The fact that the simple majority was the best of the symmetric rules (in a sense close to the one considered here) was conjectured in [70] and proved in [84].

In short, egalitarianism and utilitarianism are compatible and easy to implement. The rule has to be symmetric in order to guarantee equal a priori expected utility for all voters. The utilitarian principle determines the choice of the quota, which varies with the importance given to positive success.
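Proposition 12 and Example 3.2 can both be verified numerically. The sketch below (voters relabelled 0 to 3) computes the aggregate a priori expected utility directly from Assumption 2 and compares it with the score Σ_{S∈W} (s − (1 − λ)n):

```python
from fractions import Fraction
from itertools import combinations

n = 4
voters = range(n)
lam = Fraction(3, 4)
p_star = Fraction(1, 2 ** n)

all_S = [frozenset(s) for r in range(n + 1) for s in combinations(voters, r)]
SM = [S for S in all_S if len(S) > Fraction(n, 2)]   # simple majority
W2 = SM + [frozenset({0, 1})]                        # Example 3.2's W'

def aggregate(W):  # sum over all voters of the a priori expected utility
    total = Fraction(0)
    for S in all_S:
        if S in W:
            total += p_star * lam * len(S)               # voters in S succeed positively
        else:
            total += p_star * (1 - lam) * (n - len(S))   # voters outside S succeed negatively
    return total

def score(W):  # objective of Proposition 12
    return sum(len(S) - (1 - lam) * n for S in W)

# aggregate utility is an increasing affine function of the score ...
for W in (SM, W2):
    assert aggregate(W) == (1 - lam) * Fraction(n, 2) + p_star * score(W)
# ... so adding {0,1}, of size 2 > (1 - lam) * n = 1, improves on the simple majority
assert aggregate(W2) > aggregate(SM)
```

Note that W2 is improper ({0,1} and its complement {2,3} cannot both win, but {0,1} wins while {2,3} loses here, which keeps it consistent), exactly the loophole that Proposition 14 shows cannot be exploited much further.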
The more highly positive success is valued, the smaller the quota, with a lower bound of 1/2.

Remark. There is the question of the effect of the choice of utilities to represent the voters' vNM preferences. It should be noted that the solutions of problems (29) and (30), which yield the egalitarian and utilitarian optima, are not altered if the utilities given by (25) are replaced by

 u_i′^λ(W, S) = α u_i^λ(W, S) + β,   (32)

for some α > 0 and some β. More precisely, this is so as long as this is done (with the same α and β) for all voters. Thus, as far as the exact requirement of egalitarianism is concerned, no conflict arises from any change of the form given by (32). The problem appears with comparisons between voters' expectations in case of inequality. In Section 3.8.2 we will be making such comparisons. Absolute comparisons (i.e. differences) are not altered by β if α = 1, and relative comparisons (i.e. quotients) are not altered by α if β = 0; otherwise both are dependent on both α and β. If we want to compare the expectations of i and j, in either absolute or relative terms, then the choice of α and β in (32) matters. A reasonable solution to this problem, so as to make such comparisons independent of α and β, is the following. Relativize absolute comparisons w.r.t. the difference between the maximal and minimal utilities. In this way, for all α and β, we have

 (ū_i′^λ(W, p*) − ū_j′^λ(W, p*)) / (u′^λ_Max − u′^λ_Min) = (ū_i^λ(W, p*) − ū_j^λ(W, p*)) / (u^λ_Max − u^λ_Min),

where, from Assumption 2 (25), u^λ_Max = Max{λ, 1 − λ} and u^λ_Min = 0. Similarly, for relative comparisons take quotients of differences with the minimal utility.
In this way, for all α and β, we have

 (ū_i′^λ(W, p*) − u′^λ_Min) / (ū_j′^λ(W, p*) − u′^λ_Min) = (ū_i^λ(W, p*) − u^λ_Min) / (ū_j^λ(W, p*) − u^λ_Min).

3.8 The choice of voting rule in a committee of representatives

Now consider a take-it-or-leave-it committee in which each member acts on behalf of a group of individuals or a constituency of a different size. Given the number of members in this committee and the sizes of the groups represented, what is the most adequate voting rule for the committee? The idea is to provide answers based on one of the two principles discussed in the previous section with respect to those represented. As we will see, such recommendations can be made on the basis of the number of members in the committee and the sizes of the groups represented. Nevertheless, a well-founded answer requires the model to be enriched beyond these objective data. Some assumption about the relationship between the preferences within the represented groups and the votes of their representatives is necessary. An assumption concerning the utilities of the people represented when they obtain their preferred outcome in a committee's decision is also needed. We assume that every representative always follows the majority opinion of his/her group on every issue. In this way the decision-making process can be neatly modelled by a composite rule, as is done in the next section. This will allow us to make a recommendation based on Assumptions 1 and 2 at the level of the people represented. Note that Assumption 1 implies assuming that the voters behave independently from one another, even within the same constituency. Is this reasonable? It may be so if constituencies are seen a priori as purely administrative entities, which may not be the case. Nevertheless, this assumption seems reasonable for a normative assessment, and will enable us to make a comparison with a traditional model also based on this assumption.
First we will describe the model more precisely and then look for the rules that enable one or the other of the principles to be implemented. Before proceeding, we must discard the naïve, egalitarian-sounding answer of a weighted majority with weights proportional to the groups' sizes. In this case intuition is not correct. It is easy to give examples of dictatorships that result from assigning weights on committees proportional to the group sizes, and examples in which the difference in weight is of no consequence (see Example 2.3 in 2.1.4).

3.8.1 An ideal two-stage decision procedure

Let $n$ be the number of representatives on the committee, denoted by $N = \{1, 2, \ldots, n\}$, and for each $i \in N$, let $m_i$ be the number of individuals in the group $M_i$ represented by $i$. We assume these $n$ groups to be disjoint. Let $M := \cup_{i\in N} M_i$, and let $m$ denote the total number of individuals represented on the committee: $m = m_1 + \cdots + m_n$. Individuals in $M$ are sometimes referred to as 'citizens'. Each representative $i$ is assumed to follow the majority opinion in $M_i$, which is equivalent to saying that on every issue the representative's position is decided by a simple majority in $M_i$. That is, by the $M_i$ voting rule

\[W_{M_i}^{SM} := \left\{ S \subseteq M_i : \#S > \frac{m_i}{2} \right\}.\]

Then if the voting rule in the committee of representatives is $W_N$, the two-stage idealization of $M$'s decision-making process is formalized by the composite $M$-rule (see 1.3.2)

\[W_M := W_N[W_{M_1}^{SM}, \ldots, W_{M_n}^{SM}],\]

in which each vote configuration of citizens $S \subseteq M$ determines whether the proposal is accepted or rejected. Finally, we make Assumptions 1 and 2 at the level of the citizens. That is, we assume the a priori distribution of probability $p_M^*$, and we assume the utilities of the individuals in $M$ to be given by (25). Now the model is complete and the question of the choice of the voting rule in the committee can be formally addressed.
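As a quick numerical illustration of the composite rule just defined, the following sketch (my own; the group sizes and the committee's simple-majority rule are made-up choices) enumerates every citizen vote configuration, runs the two stages, and recovers the acceptance probability under $p^*$:

```python
from itertools import product

def group_majority(votes):
    """Simple majority W^SM in one group: strictly more than half vote 'yes'."""
    return sum(votes) * 2 > len(votes)

def composite_decision(group_votes, committee_rule):
    """Two-stage rule W_N[W^SM_1, ..., W^SM_n]: each representative transmits
    the majority opinion of his/her group; the committee rule then decides."""
    reps = tuple(group_majority(v) for v in group_votes)
    return committee_rule(reps)

# Hypothetical committee: three groups of sizes 3, 5 and 1,
# committee deciding by simple majority of its three members.
committee_sm = lambda reps: sum(reps) * 2 > len(reps)
sizes = (3, 5, 1)

total = accepted = 0
for votes in product((0, 1), repeat=sum(sizes)):
    groups, k = [], 0
    for s in sizes:
        groups.append(votes[k:k + s]); k += s
    total += 1
    accepted += composite_decision(groups, committee_sm)

print(accepted / total)  # acceptance probability alpha(W_M, p*_M)
```

Since every group here has odd size, each representative votes 'yes' with probability exactly 1/2, and the committee's simple majority then accepts with probability exactly 1/2, in line with the observations that follow in the text.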
But before proceeding we must establish a few relationships that will be of use. Observe that the voting behaviour within each group $M_i$ and within the committee of representatives is fully determined by $p_M^*$. As individuals in $M$ vote 'yes' independently from one another with probability 1/2, the voting behaviour in each group $M_i$ is $p_{M_i}^*$⁴⁹. Then, in the committee, as each representative $i$ follows the majority opinion in $M_i$, member $i$ will vote 'yes' independently with probability $\alpha(W_{M_i}^{SM}, p_{M_i}^*)$, where, in view of (4) and (6),

\[\alpha(W_{M_i}^{SM}, p_{M_i}^*) = \begin{cases} \dfrac12 & \text{if } m_i \text{ is odd,} \\[4pt] \dfrac12 - \dfrac{1}{2^{m_i+1}}\dbinom{m_i}{m_i/2} & \text{if } m_i \text{ is even.} \end{cases}\]

Note that if $m_i$ is even and large, using Stirling's approximation (7), we have

\[\alpha(W_{M_i}^{SM}, p_{M_i}^*) = \frac12 - \frac{1}{2^{m_i+1}}\binom{m_i}{m_i/2} \approx \frac12 - \frac{1}{\sqrt{2\pi m_i}}.\]

Therefore the probability of a majority voting 'yes' in each group $M_i$ is exactly (approximately) 1/2 if $m_i$ is odd (even and large). As a consequence, the voting behaviour on the committee of representatives is $p_N^*$ exactly if all $m_i$ are odd, or approximately if those that are even are large enough. We also have the following relationship.

Proposition 16 Let $W_M = W_N[W_{M_1}^{SM}, \ldots, W_{M_n}^{SM}]$. If all $m_j$ are large enough, we have

\[\alpha(W_M, p_M^*) \approx \alpha(W_N, p_N^*). \tag{36}\]

49 In other terms, $p_M^* = p_{M_1}^* \times \cdots \times p_{M_n}^*$.

In fact (36) is an equality if all $m_j$ are odd, while it is a good approximation if those that are even are sufficiently large. The following lemmas (proofs in the Appendix) will be useful later.

Lemma 17 Let $M$ be a population of $m$ voters; then for $m$ large we have, for all $k \in M$,

\[\Phi_k(W_M^{SM}, p_M^*) \approx \sqrt{\frac{2}{\pi m}}. \tag{37}\]

Lemma 18 Let $W_M = W_N[W_{M_1}^{SM}, \ldots, W_{M_n}^{SM}]$. If all $m_j$ are large enough, for all $i \in N$ and all $k \in M_i$, we have

\[\Phi_k(W_M, p_M^*) \approx \Phi_k(W_{M_i}^{SM}, p_{M_i}^*)\;\Phi_i(W_N, p_N^*). \tag{38}\]

Again, if all $m_j$ are odd, (38) becomes an exact equality⁵⁰.
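The approximation in Lemma 17 is easy to check numerically. In a simple majority among $m$ voters under $p^*$ (with $m$ odd), voter $k$ is decisive exactly when the other $m-1$ voters split evenly, so the exact probability is $\binom{m-1}{(m-1)/2}/2^{m-1}$, to be compared with $\sqrt{2/(\pi m)}$. A small verification sketch of my own:

```python
import math

def decisiveness_exact(m):
    """P(voter k is decisive) in a simple majority of m voters, m odd:
    the remaining m-1 voters split exactly evenly."""
    return math.comb(m - 1, (m - 1) // 2) / 2 ** (m - 1)

def decisiveness_approx(m):
    """Stirling-based approximation sqrt(2 / (pi * m)) from Lemma 17."""
    return math.sqrt(2 / (math.pi * m))

for m in (101, 1001, 10001):
    exact, approx = decisiveness_exact(m), decisiveness_approx(m)
    print(m, exact, approx, approx / exact)
```

The printed ratio tends to 1 as $m$ grows, and both quantities shrink like $1/\sqrt{m}$, which is the point exploited repeatedly in this section.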
A consequence of these two lemmas is that in this model, if all groups are big enough, an individual's a priori probability of being decisive is given approximately by

\[\Phi_k(W_M, p_M^*) \approx \sqrt{\frac{2}{\pi m_i}}\;\Phi_i(W_N, p_N^*). \tag{39}\]

This quantity is small if $m_i$ is large. In other words, in this case $Bz_k(W_M) = \Phi_k(W_M, p_M^*) \approx 0$. As for an individual's success, we have that

\[\Omega_k(W_M, p_M^*) \approx 0.5, \qquad \Omega_k^+(W_M, p_M^*) \approx 0.5\,\alpha(W_N, p_N^*), \qquad \Omega_k^-(W_M, p_M^*) \approx 0.5\,(1 - \alpha(W_N, p_N^*)).\]

More precisely, using relationships (21), (19), (20) and (39), we obtain (see Appendix) the following proposition.

50 Note that (38) can be equivalently expressed as $Bz_k(W_M) \approx Bz_k(W_{M_i}^{SM})\,Bz_i(W_N)$.

Proposition 19 If $W_M = W_N[W_{M_1}^{SM}, \ldots, W_{M_n}^{SM}]$, all $m_j$ are large enough and approximation (39) is accepted, then for all $k \in M$ we have

\[\Phi_k(W_M, p_M^*) \le \xi, \tag{40}\]
\[\left|\Omega_k(W_M, p_M^*) - 0.5\right| \le 0.5\,\xi,\]
\[\left|\Omega_k^+(W_M, p_M^*) - 0.5\,\alpha(W_N, p_N^*)\right| \le 0.25\,\xi,\]
\[\left|\Omega_k^-(W_M, p_M^*) - 0.5\,(1 - \alpha(W_N, p_N^*))\right| \le 0.25\,\xi,\]

where

\[\xi := \sqrt{\frac{2}{\pi\,\operatorname{Min}_{i\in N} m_i}}. \tag{41}\]

Thus, for the ideal two-stage decision procedure modelled by the composite rule $W_N[W_{M_1}^{SM}, \ldots, W_{M_n}^{SM}]$, assuming the voting behaviour described by $p_M^*$ and all groups large enough, any individual in $M$ has a probability of approximately one-half of obtaining his/her preferred outcome. Does this mean that a represented individual whose main concern is the probability of success would in practice be indifferent to the rule in the committee? According to the model so far described the answer seems to be yes. Now let us take utilities into consideration and see where the egalitarian and utilitarian principles take us.

3.8.2 Egalitarianism in a committee of representatives

In terms of the two-stage model, the egalitarian principle is satisfied if any two individuals in $M$ have the same expected utility irrespective of what group they belong to, that is, under Assumptions 1 and 2 relative to the people in $M$, if

\[\bar u_k^{\lambda}(W_M, p_M^*) = \bar u_l^{\lambda}(W_M, p_M^*) \quad \text{for all } k, l \in M. \tag{42}\]
The individual's a priori expected utility is given by (28), which can be rewritten using (21) as

\[\bar u_k^{\lambda}(W_M, p_M^*) = \frac14 + \frac14\,\Phi_k(W_M, p_M^*) + \left(\frac12 - \alpha(W_M, p_M^*)\right)\left(\frac12 - \lambda\right). \tag{43}\]

In this expression there is just one term that is not constant for all citizens: $\frac14 \Phi_k(W_M, p_M^*)$. In view of (40) this term is very small if the groups are large, and is negligible compared to the other terms. Therefore basically the a priori expected utility is the same for all individuals if all groups are large enough; namely, we have

\[\bar u_k^{\lambda}(W_M, p_M^*) \approx \bar u_l^{\lambda}(W_M, p_M^*) \approx \frac14 + \left(\frac12 - \alpha(W_M, p_M^*)\right)\left(\frac12 - \lambda\right).\]

This level of utility varies in accordance with the rule used in the committee of representatives (through the term $\alpha(W_M, p_M^*)$), but the egalitarian principle is thus basically satisfied. By 'basically satisfied' we mean the following, using (33) and (34) for comparisons (see 3.7.2). Whatever the rule $W_N$ in the committee of representatives, if all $m_j$ are large enough, for any $k, l \in M$,

\[\frac{\bar u_k^{\lambda}(W_M, p_M^*) - \bar u_l^{\lambda}(W_M, p_M^*)}{u_{Max}^{\lambda} - u_{Min}^{\lambda}} \approx 0 \quad \text{and} \quad \frac{\bar u_k^{\lambda}(W_M, p_M^*) - u_{Min}^{\lambda}}{\bar u_l^{\lambda}(W_M, p_M^*) - u_{Min}^{\lambda}} \approx 1.\]

That is, the difference in utilities between two individuals in absolute terms is close to 0, and their ratio is close to 1. The following proposition gives a more precise idea of the extent to which this is so (see the proof in the Appendix).

Proposition 20 If we accept approximations (38) and (37), we have, for any $k, l \in M$,

\[\frac{\left|\bar u_k^{\lambda}(W_M, p_M^*) - \bar u_l^{\lambda}(W_M, p_M^*)\right|}{u_{Max}^{\lambda} - u_{Min}^{\lambda}} \le \frac12\,\xi \quad \text{and} \quad \frac{\bar u_k^{\lambda}(W_M, p_M^*) - u_{Min}^{\lambda}}{\bar u_l^{\lambda}(W_M, p_M^*) - u_{Min}^{\lambda}} < 1 + \xi,\]

where $\xi$ is given by (41). Then we have the following claim.

Claim 21 In the current model of a committee of representatives, for any voting rule $W_N$, the egalitarian principle is basically satisfied at the individual level as long as all the groups are large in size.
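The near-equality of expected utilities can be checked by exact enumeration even for groups far too small for the approximations to be good. The sketch below is my own construction: three groups of sizes 3, 5 and 7, a simple-majority committee, and $\lambda = 1/2$ utilities (1/2 on success, 0 otherwise); it compares a citizen of the smallest group against a citizen of the largest.

```python
from itertools import product
from math import pi, sqrt

sizes = (3, 5, 7)   # small, unequal groups (made up for this check)
LAM = 0.5           # lambda = 1/2: utility 1/2 on success, 0 on failure

def expected_utility(member_group):
    """Exact a priori expected utility of one citizen of group `member_group`
    under p*, by enumerating all citizen vote configurations."""
    total_u, n_cfg = 0.0, 0
    for votes in product((0, 1), repeat=sum(sizes)):
        groups, k = [], 0
        for s in sizes:
            groups.append(votes[k:k + s]); k += s
        reps = [sum(g) * 2 > len(g) for g in groups]   # group majorities
        accept = sum(reps) * 2 > len(reps)             # committee simple majority
        own = groups[member_group][0]                  # first citizen of the group
        if own and accept:
            total_u += LAM          # positive success
        elif not own and not accept:
            total_u += 1 - LAM      # negative success
        n_cfg += 1
    return total_u / n_cfg

u_small = expected_utility(0)   # citizen in the group of 3
u_large = expected_utility(2)   # citizen in the group of 7
xi = sqrt(2 / (pi * min(sizes)))
print(u_small, u_large, u_small / u_large, 1 + xi)
```

Even at these tiny sizes the ratio of expected utilities stays well below the $1+\xi$ bound of Proposition 20, while with larger groups both utilities converge on the common level described in the text.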
Thus, the egalitarian principle seems not to be binding after all for the choice of the rule in a take-it-or-leave-it committee of representatives. This is a very different conclusion from the one reached by the traditional 'voting power' approach. There, the so-called 'first square root rule' (SQRR) states that the voting rule in a committee of representatives should be chosen in such a way that the Banzhaf index of each representative is proportional to the square root of the size of the group that he/she represents⁵¹. What leads to such contradictory conclusions in analyses based on the same two-stage idealization and the same probabilistic model? The 'first square root rule' recommendation is based on the following notions: (i) 'voting power' is the relevant issue at stake; (ii) the Banzhaf index is the right measure of 'voting power'; and (iii) this being so, a 'fair' voting rule should give equal voting power to all individuals. That is, under these premises, for all $k, l \in M$, it should be

\[Bz_k(W_M) = Bz_l(W_M).\]

Restated in our notation, what is required is $\Phi_k(W_M, p_M^*) = \Phi_l(W_M, p_M^*)$ for all $k, l \in M$, i.e. all individuals should have the same a priori probability of being decisive. The 'square root rule' is then derived as follows. Assuming that all groups are large enough to accept the approximations, for $k \in M_i$ and $l \in M_j$, using (38), the last relationship can be rewritten as

\[\Phi_k(W_{M_i}^{SM}, p_{M_i}^*)\;\Phi_i(W_N, p_N^*) = \Phi_l(W_{M_j}^{SM}, p_{M_j}^*)\;\Phi_j(W_N, p_N^*),\]

or, by (37), for $m_i$ and $m_j$ large enough, as

\[\sqrt{\frac{2}{\pi m_i}}\;\Phi_i(W_N, p_N^*) = \sqrt{\frac{2}{\pi m_j}}\;\Phi_j(W_N, p_N^*),\]

which holds if and only if

\[\frac{\Phi_i(W_N, p_N^*)}{\sqrt{m_i}} = \frac{\Phi_j(W_N, p_N^*)}{\sqrt{m_j}},\]

or, equivalently,

\[\frac{Bz_i(W_N)}{\sqrt{m_i}} = \frac{Bz_j(W_N)}{\sqrt{m_j}}.\]

51 Deviations from this recommendation of some voting rules in the EU Council have given rise to active criticism by some members of the academic community, as commented in Chapter 5 (see 5.2.2).

Thus, in order to equalize the Banzhaf indices of individuals in the ideal two-stage procedure modelled by $W_M$, each representative's Banzhaf index in $W_N$ should be proportional to the square root of his/her group's size. On the other hand, under Assumptions 1 and 2, in view of (43) in Section 3.8.2, having equal expected utility is equivalent to having the same probability of being decisive, which entails that the egalitarian goal is equivalent to the SQRR goal. Then how can their recommendations differ? The crucial disagreement between the approach developed here and traditional voting power lies in point (i) underlying the SQRR. That is, as argued in Section 3.6, we see no reason to consider 'voting power' as the relevant issue at stake in a pure take-it-or-leave-it committee, nor as the source of utility⁵². Then, even though requiring equal Banzhaf indices for any two individuals in $W_M$ is equivalent to requiring equal expected utility, the discrepancy appears when this condition is not met. In this case comparisons based on expected utilities and those based on decisiveness draw different conclusions, because comparisons in relative terms between very small numbers (likelihoods of decisiveness for individuals) artificially dramatize differences between individuals. The following example illustrates this.

52 Nor do we consider sound or coherent the interpretation as 'power' of the likelihood of being decisive in such committees (see 2.2). It can be argued that, given its vagueness in this point, the traditional voting power approach does not apply to take-it-or-leave-it committees. But in Chapter 4 we deal with bargaining committees, and there the conclusions are even further from the SQRR.

Example 3.3: In Chapter 5 we apply this model to different rules in the European Council of Ministers.
The Council is interpreted in Section 5.2 as a take-it-or-leave-it committee of representatives of the citizens of the EU member states. Using 2004 population figures, for the 'Nice rule' we obtain that Luxembourgian citizens (Lu) have the highest Banzhaf index, while Latvian citizens (La) have the lowest. The figures are

\[Bz_{Lu}(W_M^{Ni}) = 0.0000101, \qquad Bz_{La}(W_M^{Ni}) = 0.00000446,\]

with the ratio between the indices

\[\frac{Bz_{Lu}(W_M^{Ni})}{Bz_{La}(W_M^{Ni})} = 2.27.\]

Thus, according to the traditional voting power approach, a Luxembourgian citizen's 'voting power' is more than double that of a Latvian citizen. Nevertheless, in 2004 the smallest of the 25 member states was Malta, with a population of 380 000 individuals, which yields $\xi = 0.0013$ in (41). Thus, whatever the rule in the Council and the value of $\lambda$, for any pair of EU citizens $k, l$, we have

\[\frac{\left|\bar u_k^{\lambda}(W_M, p_M^*) - \bar u_l^{\lambda}(W_M, p_M^*)\right|}{u_{Max}^{\lambda} - u_{Min}^{\lambda}} \le 0.00065 \quad \text{and} \quad \frac{\bar u_k^{\lambda}(W_M, p_M^*) - u_{Min}^{\lambda}}{\bar u_l^{\lambda}(W_M, p_M^*) - u_{Min}^{\lambda}} < 1.0013.\]

In order to illustrate the point more clearly, let us choose the following utility for the Nice rule for any citizen $k$:

\[u_k(W_M^{Ni}, S) = \begin{cases} 1 & \text{if } k \in S \in W_M^{Ni} \text{ or } k \notin S \notin W_M^{Ni}, \\ -1 & \text{if } k \in S \notin W_M^{Ni} \text{ or } k \notin S \in W_M^{Ni}, \end{cases}\]

which leads to

\[\bar u_k(W_M^{Ni}, p_M^*) = Bz_k(W_M^{Ni}).\]

Note that this corresponds to the affine transformation $u_k(W, S) = 4\,u_k^{\lambda=1/2}(W, S) - 1$, with $u_{Max} = 1$ and $u_{Min} = -1$. The comparisons of utilities according to (33) and (34) give

\[\frac{\bar u_{Lu}(W_M^{Ni}, p_M^*) - \bar u_{La}(W_M^{Ni}, p_M^*)}{u_{Max} - u_{Min}} = 0.000003\]

and

\[\frac{\bar u_{Lu}(W_M^{Ni}, p_M^*) - u_{Min}}{\bar u_{La}(W_M^{Ni}, p_M^*) - u_{Min}} = \frac{Bz_{Lu}(W_M^{Ni}) + 1}{Bz_{La}(W_M^{Ni}) + 1} = 1.000006.\]

Thus, a ratio of 2.27 : 1 between Banzhaf indices becomes very close to 1 : 1 between expected utilities. This example illustrates our point clearly. Traditional voting power comparisons are based on the Banzhaf index, which magnifies differences that are negligible in terms of expected utility.
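The arithmetic of the example is easy to reproduce from the two Banzhaf figures quoted above (with the rounded indices the ratio comes to about 2.26; the 2.27 in the text presumably comes from unrounded values):

```python
bz_lu = 0.0000101   # Banzhaf index, Luxembourgian citizen ('Nice rule', from the text)
bz_la = 0.00000446  # Banzhaf index, Latvian citizen (from the text)

ratio = bz_lu / bz_la            # traditional 'voting power' comparison

# With the +1/-1 success/failure utilities chosen above, expected utility
# equals the Banzhaf index, u_Max = 1 and u_Min = -1:
u_max, u_min = 1.0, -1.0
abs_cmp = (bz_lu - bz_la) / (u_max - u_min)   # normalized absolute comparison
rel_cmp = (bz_lu - u_min) / (bz_la - u_min)   # normalized relative comparison

print(ratio, abs_cmp, rel_cmp)
```

The same pair of numbers that looks like a 2.27 : 1 power gap collapses to a difference of a few millionths, and a ratio of essentially 1, once expected utilities are compared.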
In fact, the above example is not extreme at all. One can even have

\[\frac{\bar u_k^{\lambda}(W_M, p_M^*)}{\bar u_l^{\lambda}(W_M, p_M^*)} \approx 1 \quad \text{while} \quad \frac{Bz_k(W_M)}{Bz_l(W_M)} = \infty,\]

which happens if citizen $l$'s representative (and not $k$'s) has a null seat⁵³. Apart from magnifying the differences, a weak point of the 'square root rule' is that it does not prescribe a voting rule, but only conditions that should be met by the representatives' a priori decisiveness. Moreover, there is no guarantee that such a rule exists. Especially if the number of groups is small, it may well happen that no rule even comes close to satisfying this condition (the number of possible rules is finite, and small when the number of seats is small).

53 This is not an artificial academic example: in the European Council with six members Luxembourg had a null seat.

3.8.3 Utilitarianism in a committee of representatives

In terms of the composite model of the idealized two-stage decision procedure in which $W_M = W_N[W_{M_1}^{SM}, \ldots, W_{M_n}^{SM}]$, under Assumptions 1 and 2, the objective of the a priori utilitarian principle is to choose $W_N$ so as to maximize the aggregated expected utility in $M$ in the voting situation $(W_M, p_M^*)$. This means maximizing

\[\sum_{i\in N}\sum_{k\in M_i} \bar u_k^{\lambda}(W_M, p_M^*).\]

As the aggregated expected utility and the expected aggregated utility coincide⁵⁴, we have

\[\sum_{i\in N}\sum_{k\in M_i} \bar u_k^{\lambda}(W_M, p_M^*) = E\left[\sum_{i\in N}\sum_{k\in M_i} u_k^{\lambda}(W_M, S)\right].\]

Therefore the utilitarian goal is to choose $W_N$ so as to maximize the latter expectation. In other words, to make for each vote configuration in the committee the decision for which the expectation is the highest. If the vote configuration in the committee is $C \subseteq N$, given that many different vote configurations in $M$ yield such a vote configuration in the committee, the best decision is to accept the proposal if the expected aggregated utility given that the vote configuration is $C$ is greater in case of acceptance than in case of rejection.
That is, if

\[E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, C \;\&\; \text{accept}\right] > E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, C \;\&\; \text{reject}\right]. \tag{44}\]

54 For any two random variables $X$ and $Y$, we have $E[X] + E[Y] = E[X + Y]$.

The following two lemmas will permit us to approximate the expectations in (44) for large groups. For this we need to know the aggregated expected utility in each group in either case (acceptance or rejection), for each vote configuration in the committee. Note that for a vote configuration $C$ in the committee, $i \in C$ (i.e. $M_i$'s representative votes 'yes') when a majority in group $M_i$ votes 'yes', while $i \in N \setminus C$ (i.e. $M_i$'s representative votes 'no') when no majority in group $M_i$ votes 'yes'. As an immediate consequence of the possibility of permuting aggregation and expectation, we have the following lemma.

Lemma 22 Let $i \in N$. Under Assumption 2, the aggregated expected utility in group $M_i$, given that the majority in $M_i$ votes 'yes' and the proposal is accepted (rejected), is given, respectively, by

\[E\left[\sum_{k\in M_i} u_k^{\lambda} \,\middle|\, \#S_i > \frac{m_i}{2} \;\&\; \text{accept}\right] = \lambda\, E\left[\#S_i \,\middle|\, \#S_i > \frac{m_i}{2}\right],\]

\[E\left[\sum_{k\in M_i} u_k^{\lambda} \,\middle|\, \#S_i > \frac{m_i}{2} \;\&\; \text{reject}\right] = (1-\lambda)\, E\left[\#(M_i\setminus S_i) \,\middle|\, \#S_i > \frac{m_i}{2}\right];\]

while the aggregated expected utility in group $M_i$, given that the majority in $M_i$ does not vote 'yes' and the proposal is accepted (rejected), is given, respectively, by

\[E\left[\sum_{k\in M_i} u_k^{\lambda} \,\middle|\, \#S_i \le \frac{m_i}{2} \;\&\; \text{accept}\right] = \lambda\, E\left[\#S_i \,\middle|\, \#S_i \le \frac{m_i}{2}\right],\]

\[E\left[\sum_{k\in M_i} u_k^{\lambda} \,\middle|\, \#S_i \le \frac{m_i}{2} \;\&\; \text{reject}\right] = (1-\lambda)\, E\left[\#(M_i\setminus S_i) \,\middle|\, \#S_i \le \frac{m_i}{2}\right].\]

The next lemma gives approximations of the expected numbers of voters voting 'yes' and voting 'no' in a large group under the different conditions.

Lemma 23 Let $i \in N$. Under Assumption 1, if $m_i$ is large enough, the expected numbers of voters voting 'yes' and voting 'no' in group $M_i$, given that the majority in $M_i$ votes 'yes', can be approximated, respectively, by

\[E\left[\#S_i \,\middle|\, \#S_i > \frac{m_i}{2}\right] \approx \frac{m_i}{2} + \sqrt{\frac{m_i}{2\pi}}, \qquad E\left[\#(M_i\setminus S_i) \,\middle|\, \#S_i > \frac{m_i}{2}\right] \approx \frac{m_i}{2} - \sqrt{\frac{m_i}{2\pi}};\]

while the expected numbers of voters voting 'yes' and voting 'no' in group $M_i$, given that the majority in $M_i$ does not vote 'yes', can be approximated, respectively, by

\[E\left[\#S_i \,\middle|\, \#S_i \le \frac{m_i}{2}\right] \approx \frac{m_i}{2} - \sqrt{\frac{m_i}{2\pi}}, \qquad E\left[\#(M_i\setminus S_i) \,\middle|\, \#S_i \le \frac{m_i}{2}\right] \approx \frac{m_i}{2} + \sqrt{\frac{m_i}{2\pi}}.\]

Now we are ready to solve the maximization problem. We consider the simplest case first, in which the same importance is given to positive and negative success⁵⁵.

Case $\lambda = 1/2$: In view of the two preceding lemmas, when a majority in group $M_i$ votes 'yes' (or, equivalently in the current model, when $M_i$'s representative votes 'yes'), the aggregated expected utility in this group if the decision in the committee is 'yes' is (the approximation is good for $m_i$ large enough)

\[E\left[\sum_{k\in M_i} u_k^{1/2} \,\middle|\, \#S_i > \frac{m_i}{2} \;\&\; \text{accept}\right] \approx \frac12\left(\frac{m_i}{2} + \sqrt{\frac{m_i}{2\pi}}\right);\]

while if the decision in the committee is 'no', the aggregated expected utility in group $M_i$ is

\[E\left[\sum_{k\in M_i} u_k^{1/2} \,\middle|\, \#S_i > \frac{m_i}{2} \;\&\; \text{reject}\right] \approx \frac12\left(\frac{m_i}{2} - \sqrt{\frac{m_i}{2\pi}}\right).\]

Similar calculations can be made for the case in which $M_i$'s representative votes 'no'. According to the two-stage model, a vote configuration in the committee $C \subseteq N$ occurs if for all $i \in C$ the majority in $M_i$ votes 'yes', while for all $j \in N \setminus C$ the majority in $M_j$ does not vote 'yes'.

55 The reader may prefer to skip this case and go directly to the general case, which is discussed immediately afterwards. We deal first with this particular case because it is the most common assumption in the literature, and some readers may be interested only in this case. This will also allow us to compare the conclusion with the so-called 'second square root rule'.
Thus, aggregating across all groups, we have that for a given vote configuration in the committee $C \subseteq N$, the aggregated expected utility in $M$ if the committee accepts the proposal, given that the vote configuration in the committee is $C$, is (with close approximation for large enough $m_i$'s)

\[E\left[\sum_{k\in M} u_k^{1/2} \,\middle|\, C \;\&\; \text{accept}\right] \approx \frac12\sum_{i\in C}\left(\frac{m_i}{2} + \sqrt{\frac{m_i}{2\pi}}\right) + \frac12\sum_{j\in N\setminus C}\left(\frac{m_j}{2} - \sqrt{\frac{m_j}{2\pi}}\right) = \frac12\,\frac{m}{2} + \frac{1}{2\sqrt{2\pi}}\left(\sum_{i\in C}\sqrt{m_i} - \sum_{j\in N\setminus C}\sqrt{m_j}\right);\]

while if the proposal is rejected the aggregated expected utility is

\[E\left[\sum_{k\in M} u_k^{1/2} \,\middle|\, C \;\&\; \text{reject}\right] \approx \frac12\,\frac{m}{2} + \frac{1}{2\sqrt{2\pi}}\left(\sum_{j\in N\setminus C}\sqrt{m_j} - \sum_{i\in C}\sqrt{m_i}\right).\]

Thus, from the utilitarian point of view, an optimal decision in the committee is to accept the proposal if

\[E\left[\sum_{k\in M} u_k^{1/2} \,\middle|\, C \;\&\; \text{accept}\right] > E\left[\sum_{k\in M} u_k^{1/2} \,\middle|\, C \;\&\; \text{reject}\right],\]

that is, using the above approximations, if

\[\sum_{i\in C}\sqrt{m_i} > \sum_{j\in N\setminus C}\sqrt{m_j},\]

which, as

\[\sum_{j\in N\setminus C}\sqrt{m_j} = \sum_{i\in N}\sqrt{m_i} - \sum_{i\in C}\sqrt{m_i},\]

can be rewritten as

\[\sum_{i\in C}\sqrt{m_i} > \frac12\sum_{i\in N}\sqrt{m_i}.\]

Thus we have the following result, always under Assumptions 1 and 2.

Proposition 24 For $\lambda = 1/2$, if all the groups represented are large enough, the weighted majority rule $W_N = W^{(w,q)}$ in the committee that gives each representative a weight proportional to the square root of the size of his/her group, with a relative quota of 50% (i.e. $q = \frac12$), implements the utilitarian principle with close approximation.

Observe that the quota recommended is the same as in the case of direct voting (see Section 3.7.1), which in the symmetric direct case means a simple majority. The rule prescribed by Proposition 24 is known in the literature as the 'second square root rule'. Indeed, in view of (21), (23) and (27), the maximization problem that it solves is equivalent to that of maximizing

\[\sum_{i\in N}\sum_{k\in M_i} Bz_k(W_M),\]

a problem that has been addressed in the voting power literature (see [22, 58, 59]). Once again, as in the egalitarian case, we have the same recommendation about the choice of voting rule but based on different grounds.
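The rule of Proposition 24 is straightforward to state in code. A minimal sketch of mine, with invented constituency sizes chosen so that the √-weighted and the proportionally weighted outcomes differ:

```python
from math import sqrt

def second_sqrr_accepts(sizes, yes_groups):
    """Weighted majority with w_i = sqrt(m_i) and relative quota 1/2:
    accept iff the yes-weights strictly exceed half the total weight."""
    total = sum(sqrt(m) for m in sizes)
    yes = sum(sqrt(sizes[i]) for i in yes_groups)
    return yes > total / 2

sizes = (100, 30, 30, 30)   # hypothetical constituency sizes
print(second_sqrr_accepts(sizes, {0}))        # only the large group says yes
print(second_sqrr_accepts(sizes, {1, 2, 3}))  # the three small groups say yes
```

Under weights proportional to sizes, the three small groups (90 citizens in total) would lose to the large one (100 citizens); with √-weights they win, since $3\sqrt{30} \approx 16.4 > \sqrt{100} = 10$, which is exactly the sub-proportional weighting the 'second square root rule' prescribes.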
Here this prescription is based on a simple utilitarian principle applied to a precise model in a specific context: that of a take-it-or-leave-it committee.

General case, $\lambda \in [0, 1]$: Again using Lemmas 22 and 23, when a majority in $M_i$ votes 'yes', the aggregated expected utility in group $M_i$ if the decision in the committee is 'yes' is (approximately, for $m_i$ large enough)

\[E\left[\sum_{k\in M_i} u_k^{\lambda} \,\middle|\, \#S_i > \frac{m_i}{2} \;\&\; \text{accept}\right] \approx \lambda\left(\frac{m_i}{2} + \sqrt{\frac{m_i}{2\pi}}\right);\]

while if the decision in the committee is 'no', the aggregated expected utility in group $M_i$ is

\[E\left[\sum_{k\in M_i} u_k^{\lambda} \,\middle|\, \#S_i > \frac{m_i}{2} \;\&\; \text{reject}\right] \approx (1-\lambda)\left(\frac{m_i}{2} - \sqrt{\frac{m_i}{2\pi}}\right).\]

Similar calculations can be made for the case in which 'yes' does not obtain a majority in $M_i$. Now, as in the case $\lambda = \frac12$, aggregating across all groups, we have that for a given vote configuration in the committee $C \subseteq N$, the aggregated expected utility in $M$ if the committee accepts the proposal is (with close approximation for large enough $m_i$)

\[E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, C \;\&\; \text{accept}\right] \approx \lambda\sum_{i\in C}\left(\frac{m_i}{2} + \sqrt{\frac{m_i}{2\pi}}\right) + \lambda\sum_{i\in N\setminus C}\left(\frac{m_i}{2} - \sqrt{\frac{m_i}{2\pi}}\right) = \lambda\,\frac{m}{2} + \frac{\lambda}{\sqrt{2\pi}}\left(\sum_{i\in C}\sqrt{m_i} - \sum_{i\in N\setminus C}\sqrt{m_i}\right); \tag{45}\]

while if the proposal is rejected the aggregated expected utility is

\[E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, C \;\&\; \text{reject}\right] \approx (1-\lambda)\,\frac{m}{2} + \frac{1-\lambda}{\sqrt{2\pi}}\left(\sum_{i\in N\setminus C}\sqrt{m_i} - \sum_{i\in C}\sqrt{m_i}\right). \tag{46}\]

Thus, from the utilitarian point of view, the best decision in the committee is to accept the proposal if (44) holds, that is, after substituting and simplifying, if

\[(1 - 2\lambda)\,m\,\sqrt{\frac{\pi}{2}} < \sum_{i\in C}\sqrt{m_i} - \sum_{i\in N\setminus C}\sqrt{m_i}.\]

This inequality holds if and only if

\[\sum_{i\in C}\sqrt{m_i} > \frac12\sum_{i\in N}\sqrt{m_i} + \frac12\,(1-2\lambda)\,m\,\sqrt{\frac{\pi}{2}}. \tag{47}\]

The situation is similar to that found with direct committees in Section 3.7.2. If $\lambda \le \frac12$ this condition defines a weighted majority rule with weights $w_i = \sqrt{m_i}$ and relative quota

\[q_{\lambda} = \frac12 + \frac12\,\frac{(1-2\lambda)\,m\,\sqrt{\pi/2}}{\sum_{i\in N}\sqrt{m_i}}. \tag{48}\]

As expected, when the importance given to negative success increases (i.e. $\lambda$ decreases), the quota increases. Thus we have the following result (note that it includes the result obtained for $\lambda = \frac12$ as a particular case).
Proposition 25 Under Assumptions 1 and 2, assuming that all the represented groups are large enough, if $\lambda \le \frac12$, the weighted majority rule $W_N = W^{(w,q_\lambda)}$ in the committee, for weights $w_i = \sqrt{m_i}$ and relative quota $q_\lambda$ given by (48), implements the a priori utilitarian principle with close approximation.

Now consider the case in which $\lambda > \frac12$. In this case (47) may define an improper voting rule. The idea is then to lower the quota $Q$ as much as possible so that $W^{(w,Q)}$, for weights $w_i = \sqrt{m_i}$, is a proper rule. Namely, let

\[\bar Q := \operatorname{Min}\left\{Q \in \mathbb{R} : W^{(w,Q)} \in VR_N\right\}. \tag{49}\]

Or, equivalently, take $\bar Q$ as the minimal $Q \ge 0$ such that

\[\sum_{i\in C} w_i > Q \;\Rightarrow\; \sum_{j\in N\setminus C} w_j \le Q. \tag{50}\]

Then we have the following result, which shows how $W^{(w,\bar Q)}$ is almost the utilitarian optimum. In the proof we use the notation $w(C) := \sum_{i\in C} w_i$.

Proposition 26 If $W_N$ implements the utilitarian optimum according to the approximation based on (45) and (46), then for all $C \in W_N$, $\sum_{i\in C} w_i \ge \bar Q$.

Proof. First note that, as $\lambda > \frac12$ and $\frac{w(N)}{2}$ is among the $Q$ that satisfy (50), we have $Q_\lambda \le \bar Q \le \frac{w(N)}{2}$, where $Q_\lambda = w(N)\,q_\lambda$. Then obviously

\[w(C) < \bar Q \;\Rightarrow\; w(N\setminus C) > \bar Q \ge Q_\lambda.\]

Assume that for some $C \in W_N$ we have $w(C) < \bar Q$. We can assume $C$ to be minimal winning. Consider the rule

\[W_N' := (W_N \setminus \{C\}) \cup W^{N\setminus C}.\]

That is, $W_N'$ is the rule that results from $W_N$ by eliminating $C$ from the set of winning configurations and adding all those containing $N \setminus C$. As $C$ is minimal, $N \setminus C$ intersects all $T \in W_N \setminus \{C\}$, and $W_N'$ is a proper rule. Let us now show that $W_N'$ is better than $W_N$ from the utilitarian point of view. In order to compare the aggregated expected utility of a decision made by either rule, note that the decision differs only for the configuration $C$ and for those $T$ containing $N \setminus C$. For all the latter, as $w(T) \ge w(N\setminus C) > \bar Q \ge Q_\lambda$, the decision by $W_N'$ (acceptance) is utilitarian-better than by $W_N$ (rejection). The reverse only occurs for the configuration $C$.
It then suffices to show that what is lost by rejecting for configuration $C$ is outweighed by what is gained by accepting for the equally probable configuration $N \setminus C$. Again using (45) and (46), we have

\[E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, C \;\&\; \text{accept}\right] - E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, C \;\&\; \text{reject}\right] = (2\lambda-1)\,\frac{m}{2} + \frac{1}{\sqrt{2\pi}}\left(w(C) - w(N\setminus C)\right) < (2\lambda-1)\,\frac{m}{2},\]

while

\[E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, N\setminus C \;\&\; \text{accept}\right] - E\left[\sum_{k\in M} u_k^{\lambda} \,\middle|\, N\setminus C \;\&\; \text{reject}\right] = (2\lambda-1)\,\frac{m}{2} + \frac{1}{\sqrt{2\pi}}\left(w(N\setminus C) - w(C)\right) > (2\lambda-1)\,\frac{m}{2}.\]

Therefore $W_N$ does not implement the utilitarian optimum according to the approximation based on (45) and (46).

Therefore a utilitarian-optimal rule (according to approximations (45) and (46)) should contain the winning configurations in $W^{(w,\bar Q)}$ plus, if such a thing is possible, some configurations whose weight equals the quota $\bar Q$. Then we have the following corollary.

Corollary 27 Under Assumptions 1 and 2, assuming all the represented groups are large enough, if $\lambda > \frac12$, the weighted majority rule $W_N = W^{(w,\bar Q)}$ in the committee, for weights $w_i = \sqrt{m_i}$ and quota $\bar Q$ given by (49), implements the a priori utilitarian principle with close approximation.

Remarks. (i) Barberà and Jackson [6] consider an even wider setting in which the representatives' vote in the committee in the two-stage decision process is an arbitrary known function of the preferences within their respective groups. Thus no voting rule governs the representatives' vote. Also the preference profile in each group concerning acceptance/rejection is in principle arbitrary, and only a probability distribution over the possible profiles is assumed to be known. In such a general setting, Barberà and Jackson address the utilitarian question of the decisions in the committee, as a function of the vote/preference configuration, that maximize the expected aggregated utility. The generality of a setting in which the usual notion of voting rule does not constrain their model allows them to deal with ties by tossing a coin.
But when they specify the conditions limiting the degrees of freedom (on preferences and on the decision-making process) they come very close to the conclusions obtained here.

(ii) As has been mentioned, some authors discuss the arguments in support of the a priori probability distribution $p^*$ and favour other models. The independence of voters' behaviour is perhaps the most criticized aspect. The most important rival probabilistic model is that of 'homogeneity'. Straffin [80] proposes the following probabilistic model for a set of voters. Let $t \in [0, 1]$ be chosen from the uniform distribution on $[0, 1]$, and assume that each voter votes 'yes' with probability $t$ and 'no' with probability $1 - t$. This raises the question of egalitarianism and utilitarianism for alternative probabilistic models⁵⁶.

56 See e.g. [9, 21, 55].

3.9 Exercises

1. Prove or disprove with a counterexample the following statements. (a) A voter occupying a null seat is never successful or decisive. (b) Whenever a proposal is accepted, a voter occupying a veto seat is successful and decisive. (c) Whenever a proposal is rejected, a voter occupying a veto seat is successful and decisive.

2. Assume the following three-person voting behaviour: voters 1 and 2 vote independently from each other, both with probability 1/2 in favour of the proposal, and voter 3 votes as voter 2 does. (a) Obtain the probability distribution $p$ that describes this behaviour. (b) If they make decisions by simple majority, calculate $\alpha$, $\Omega_i$, $\Omega_i^+$, $\Omega_i^-$, $\Phi_i$, $\Phi_i^+$ and $\Phi_i^-$, for $i = 1, 2, 3$, for the voting situation $(W^{SM}, p)$.

3. Let $W \subseteq W'$. (i) Prove that for any $p$ and any $i$,

\[\alpha(W, p) \le \alpha(W', p), \qquad \Omega_i^+(W, p) \le \Omega_i^+(W', p), \qquad \Omega_i^-(W, p) \ge \Omega_i^-(W', p).\]

(ii) Show that in general the inequality may hold in either sense for $\Omega_i$ and $\Phi_i$, and for the Coleman indices to prevent and to initiate action.

4. Consider the five-person voting situation $(W^{SM}, p^*)$.
Calculate $\Omega_1(W^{SM}, p^*)$ and $\Phi_1(W^{SM}, p^*)$, and compare them with the conditional probability of voter 1 being successful, and that of his/her being decisive, given that 1 and 2 voted the same way.

5. Show that symmetry is a sufficient condition for a voting rule to be a priori egalitarian (i.e. $\Omega_i(W, p^*) = \Omega_j(W, p^*)$ for all $i, j$), but that this condition is not necessary.

6. Garrett and Tsebelis [26] criticize traditional power indices because voters' preferences and any other relevant contextual information are ignored. To illustrate their point they propose the following situation. Consider a seven-voter voting rule where a proposal is passed if it has the support of at least five voters. Voters' unidimensional preferences are located on a real line so that only connected and minimal winning configurations occur, and all of them are equally probable. They claim that a 'more realistic power index' would be $\left(\frac{1}{15}, \frac{2}{15}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{2}{15}, \frac{1}{15}\right)$. (a) What is the implicit voting situation $(W^{GT}, p^{GT})$? (b) Compute $\Phi(W^{GT}, p^{GT})$ and compare this measure with what Garrett and Tsebelis propose.

7. Consider the following two weighted majorities: $W^{(w,Q)}$, with $Q = 70$ and $w = (55, 35, 10)$, and $W^{(w',Q)}$, with $Q = 70$ and $w' = (50, 25, 25)$. (a) Give the set of winning configurations for the two weighted majorities. (b) Compute the Banzhaf index for the two voting rules and show how the Banzhaf index of a voter who loses weight increases. Is this paradoxical?

8. Consider the following voting situation in a four-party parliament where decisions are made by simple majority. There is a large right-wing party with 40 seats, and three left-wing parties with 20 seats each. The three left-wing parties (1, 2 and 3) always vote together, while the right-wing party (4) is always isolated, so that the probability distribution over vote configurations is given by

\[p(S) = \begin{cases} 1/2 & \text{if } S = \{1, 2, 3\} \text{ or } S = \{4\}, \\ 0 & \text{otherwise.} \end{cases}\]
(a) Give the set of the winning configurations (modelling the decision-making in the parliament as a four-person weighted majority rule).
(b) Compute Ω_i(W, p) and Φ_i(W, p), for i = 1, 2, 3, 4.
(c) Show that Φ_1(W, p) > Φ_4(W, p) in spite of w_1 < w_4. Is this paradoxical?
9. Let the voting situation (W, p) with W = {{1, 2, 3}, {1, 2, 4}, {1, 2, 3, 4}}, and
p(S) = 9/32, if S = {1, 2} or S = {3, 4},
p(S) = 1/32, otherwise.
(a) Compute Ω_i(W, p) and Φ_i(W, p).
(b) Is the vetoer more likely to be successful than the others?
10. Prove the following statement [38]: Ω_i(W, p) = 1/2 + (1/2)Φ_i(W, p) holds for every W if and only if p = p*.
11. Prove that Φ_i(W, p), Φ_i^+(W, p) and Φ_i^-(W, p) coincide for every i and every voting rule W if and only if the vote of each voter is independent from the vote of all the remaining voters.
12. Consider the following variants of the model introduced in Section 3.7. Replace any voter's utility in Assumption 2 by
u_i(W, S) = λ, if i ∈ S ∈ W,
u_i(W, S) = α, if i ∉ S ∉ W,
u_i(W, S) = β, if i ∈ S ∉ W,
u_i(W, S) = γ, if i ∉ S ∈ W.
(a) Discuss the possible relationships between λ, α, β, and γ (assuming λ > β, and α > γ).
(b) Give voter i's expected utility (ū_i(W, p)) as a function of λ, α, β, and γ, and voter i's probabilities of success and of being in favour of the proposal.
(c) What is the rationale behind α = β = 0 and γ = λ − 1?
(d) Prove that if α = β = 0 and γ = λ − 1 then
ū_i(W, p) = λ Ω_i^+(W, p) + (1 − λ) Ω_i^-(W, p) − (1 − λ)(1 − γ_i(p)).

3.10 Appendix

Proof of Proposition 12 (Section 3.7.2). In view of (28), maximizing Σ_{i∈N} ū^λ_i(W, p*) over voting rules amounts to maximizing

Σ_{i∈N} (−1/4) + (λ − 1/2) Σ_{i∈N} Σ_{S:S∈W} 1/2^n + (1/2) Σ_{i∈N} Σ_{S:i∈S∈W} 1/2^n + (1/2) Σ_{i∈N} Σ_{S:i∉S, S∉W} 1/2^n.

As the first sum does not depend on the rule, it can be ignored for the maximization problem. The second term is

(λ − 1/2) Σ_{i∈N} Σ_{S:S∈W} 1/2^n = (λ − 1/2) n Σ_{S:S∈W} 1/2^n.

The third term is

(1/2) Σ_{i∈N} Σ_{S:i∈S∈W} 1/2^n = (1/2^n) Σ_{S:S∈W} s/2.

The fourth term is

(1/2) Σ_{i∈N} Σ_{S:i∉S, S∉W} 1/2^n = (1/2^n) Σ_{S:S∉W} (n − s)/2 = (1/2^n) [ Σ_{S⊆N} (n − s)/2 − Σ_{S:S∈W} (n − s)/2 ].

Also note that Σ_{S⊆N} (n − s)/2 does not depend on the rule. Thus, deleting this term and the multiplying factor 1/2^n, we have the equivalent maximization problem:

Max Σ_{S:S∈W} [ (λ − 1/2)n + s/2 − (n − s)/2 ] = Max Σ_{S:S∈W} (s − (1 − λ)n),

which is problem (31).

Proof of Proposition 14 (Section 3.7.2). Let W be a voting rule such that for some S ∈ W it holds that s < n/2. If so, there must exist a minimal winning configuration T ∈ W such that t < n/2. Let W' be the rule

W' := (W \ {T}) ∪ W^{N\T},

where W \ {T} is the rule that results from W by eliminating T from the winning configurations, and W^{N\T} is the (N \ T)-unanimity rule (see Section 1.3.2). As N \ T intersects all S ∈ W \ {T}, W' is a proper voting rule. Then we have

Σ_{S∈W'} (s − (1 − λ)n) ≥ Σ_{S∈W} (s − (1 − λ)n) − (t − (1 − λ)n) + ((n − t) − (1 − λ)n)
= Σ_{S∈W} (s − (1 − λ)n) − 2t + n > Σ_{S∈W} (s − (1 − λ)n).

Therefore W does not solve (31). In other words, W does not implement the utilitarian principle.

Proof of Lemma 17 (Section 3.8.1). Φ_k(W^SM_M, p*_M) is the a priori probability of k being decisive in W^SM_M. In other words (assuming m odd), it gives the probability that (m − 1)/2 voters vote 'yes' and (m − 1)/2 voters vote 'no' in M \ k. Using Stirling's approximation (relationship (8) in 1.2.2), for m large

C^{(m−1)/2}_{m−1} ≈ 2^{m−1} sqrt(2/(π(m − 1))).

Thus, this probability approaches sqrt(2/(π(m − 1))) as m increases. For m large enough we can replace m − 1 by m, and we have (37). A similarly good approximation is obtained if m is even.

Proof of Lemma 18 (Section 3.8.1). An individual k ∈ M_i is decisive in the ideal two-stage decision-making if k is decisive in the decision made in M_i, and M_i's representative i is decisive in the committee at the second stage. Assuming the behaviour described by p*_M, the two events are independent.
The probability of the first is given by k (WM Mi i As representative i follows the majority opinion in Mi , i’s vote is independent of the vote of the other members of the committee. If mi is odd the probability of i voting ‘yes’ is exactly 1/2, and very close to it if mi is even but large. Thus, the probability of the latter is (approximately if mi is even) i (WN , p∗N ), and we have (38). Voting and Collective Decision-Making Proof of Proposition 19 (Section 3.8.1). Let k ∈ Mj . By (39), we have 2 i (WN , p∗N ) πmi 2 2 ≤ = ξ. ≤ πmi π Mini∈N mi k (WM , p∗M ) By (21) and the same approximation, we have k (WM , p∗M ) = 0.5 + 0.5 k (WM , p∗M ) ≤ 0.5 + 0.5ξ . Similarly, (19) and (20) lead to the other results. Proof of Proposition 20 (Section 3.8.2). Assume u¯ λk (WM , p∗M ) ≥ u¯ λl (WM , p∗M ). Then by (43), (21) and (40), we have 1 k (WM , p∗M ) − l (WM , p∗M ) 4 1 1 ≤ k (WM , p∗M ) ≤ ξ . 4 4 u¯ λk (WM , p∗M ) − u¯ λl (WM , p∗M ) = By (25), uλMax = Max{λ, 1 − λ} ≥ 12 , uλMin = 0, and the first inequality follows. Again using (43) we have u¯ λk (WM , p∗M ) u¯ λl (WM , p∗M ) = ≤ ≤ 1 1 1 1 ∗ ∗ 4 + 4 k (WM , pM ) + ( 2 − α(WM , pM ))( 2 1 1 1 1 ∗ ∗ 4 + 4 l (WM , pM ) + ( 2 − α(WM , pM ))( 2 1 1 1 1 ∗ ∗ 4 + 4 k (WM , pM ) + ( 2 − α(WM , pM ))( 2 1 1 1 ∗ 4 + ( 2 − α(WM , pM ))( 2 − λ) 0.25 k (WM , p∗M ) 1+ 1 . 1 1 ∗ 4 + ( 2 − α(WM , pM ))( 2 − λ) − λ) − λ) − λ) In order to find the upper bound we need to know the sign of 1 − α(WM , p∗M ) 2 1 −λ . 2 ‘Take-it-or-leave-it’ committees By (18), we have α(WM , p∗M ) ≤ 12 , so that if λ ≤ 12 the sign of this term is positive, and a lower bound of the denominator is 14 . Then we have u¯ λk (WM , p∗M ) u¯ λl (WM , p∗M ) 0.25 k (WM , p∗M ) 1 4 ≤ 1 + ξ. If λ > 12 , the denominator has to be expanded in order to find a lower bound: 1 1 1 λ− − − α(WM , p∗M ) 4 2 2 1 1 1 − λ − α(WM , p∗M ) + λα(WM , p∗M ) 2 2 2 1 1 = (1 − λ) − α(WM , p∗M )(1 − λ) + α(WM , p∗M ) 2 2 1 1 1 − α(WM , p∗M ) (1 − λ) + α(WM , p∗M ) ≥ α(WM , p∗M ). 
= 2 2 2 Then, if k ∈ Mj , using (39), we have u¯ λk (WM , p∗M ) 0.25 k (WM , p∗M ) ≤ 1 + 1+ 0.5α(WM , p∗M ) u¯ λl (WM , p∗M ) ≤1+ξ j (WN ,p∗N ) α(WN ,p∗N ) then as 2 j (WN , p∗N ) π mj 2α(WM , p∗M ) j (WN , p∗N ) . 2α(WM , p∗M ) As j (WN , p∗N ) = j (WN , p∗N ) = α(WN , p∗N ), α(WN , p∗N ) S:i∈S∈W S\i∈ /W 1 , 2n−1 and α(WM , p∗M ) 1 2n , we conclude that ≤ 2, and the result follows. Note that we could obtain an even lower bound for the difference in utilities in absolute terms: 1 λ 1 2 2 ∗ λ ∗ u¯ (WM , p ) − u¯ (WM , p ) ≤ − . M M k l 4 π Minj∈N mj 4 π Maxi∈N mi Proof of Lemma 23 (Section 3.8.3). Assume mi is odd, i.e. mi = 2r + 1 for an integer r. Then the expected number of people voting ‘yes’ when Voting and Collective Decision-Making a majority votes ‘yes’ is mi " = E #Si | #Si > 2 ! 1 S:r<s≤mi 2mi 1 2 2mi −1 mi = m −1 2 i mi = m −1 2 i 1 2mi −1 r+1 r+2 mi + (r+2)C + · · · + mC (r+1)Cm mi mi i (mi − 1)! (mi − 1)! + ··· + (mi − r − 1)!r! 0!(mi − 1)! mi −1 r+1 r + C + · · · + C Cm (51) mi −1 . mi −1 i −1 Now, by (5) in Section 1.2.1, we have m −1 r+1 r + Cm + · · · + Cmii −1 = 2mi −2 + Cm i −1 i −1 Thus, as r = we have mi −1 2 , substituting in (51) and using (8) (see Section 1.2.2) 1 mi2−1 2 + Cmi −1 2 mi 2 1 mi −1 mi −2 m −1 2 + 2 2 π(mi − 1) 2 i mi 1 + mi = 2 2π(mi − 1) mi mi + . 2 2π mi mi " = m −1 E #Si | #Si > 2 2 i ! 1 r C . 2 mi −1 mi −2 In the last step we have replaced mi − 1 by mi within the square root in order to simplify the expression. Similarly, it can be checked that when mi is sufficiently large the approximation is as good for mi even. The other approximations follow similar steps. Bargaining committees This chapter addresses voting situations in which a committee bargains in search of agreement over a set of feasible alternatives ‘in the shadow of a voting rule’57 . More specifically we consider a ‘bargaining’ committee that makes decisions in an environment such as the one described in Section 2.2. 
In particular we are interested in the role and influence of the voting rule on the outcome of negotiations, in order to assess the adequacy of a voting rule in different contexts. In Section 4.1 we describe the environment of what we call a 'bargaining committee', and in Section 4.2 a model for such a committee is presented. The situation is modelled by the two basic ingredients that specify it: the voting rule that prescribes what coalitions can enforce an agreement, and the voters' preference profile. The situation summarily described by this two-ingredient model is then analysed using two approaches. The question of what agreements are likely to arise in such a situation is addressed first in Section 4.3 from a cooperative-axiomatic game-theoretic point of view, as an extension of Nash's bargaining theory. That is, by imposing reasonable conditions for an agreement among rational individuals, the class of admissible agreements is drastically narrowed and characterized. The same question about reasonable agreements is then approached from a non-cooperative game-theoretic point of view in Section 4.4. That is, the decision-making process in a bargaining committee is modelled as a non-cooperative game. This is done for a variety of 'protocols', for which the stationary subgame perfect equilibria are investigated. The result is consistent with the results obtained from the axiomatic approach; that is, the same family of agreements is obtained as a limit case. In this way cooperative and non-cooperative game-theoretic support is provided for a new and richer interpretation of some power indices as measures of 'bargaining power' in a precise game-theoretic sense.

57 Most of the material in this chapter is drawn from [48–51].

The question of the choice of voting rule in a bargaining committee is addressed in Section 4.5, with a compromise between egalitarianism and utilitarianism as the criterion applied.
In Section 4.6, the same question is addressed for a bargaining committee of representatives of groups of different sizes, based on the same egalitarian/utilitarian compromise as the criterion of fairness. We propose as 'optimal' a 'neutral' voting rule in the sense that any player is indifferent between bargaining personally and leaving bargaining in the hands of a representative (at least under certain symmetry conditions relative to preferences within each group). The normative recommendation that this approach yields is different from those obtained for a take-it-or-leave-it committee in Chapter 3.

4.1 The bargaining scenario

Recall the scenario described in Section 2.2 as a bargaining committee. We consider a committee that makes decisions under a given voting rule under the following conditions: the committee (i) deals with different issues over time; (ii) bargains about each issue by seeking consensus on an agreement, in search of which it is entitled to adjust the proposal; (iii) this negotiation is carried out under the condition that any winning coalition (according to the voting rule) has the capacity to enforce agreements; and (iv) for every issue a different configuration of preferences emerges in the committee over the set of feasible agreements concerning the issue at stake. A situation like this has very little in common with the one described as a take-it-or-leave-it committee, which was dealt with in Chapter 3, other than the fact that a voting rule plays a role in both cases. Apart from that, the voting situation we consider now is completely different. In fact, properly speaking this is a bargaining situation, which means a game situation, and calls for a game-theoretic analysis. It is worth remarking that the question of the 'power' or 'voting power' of the players involved in such a situation is premature. The natural main issue that should be addressed first is what the reasonable outcome of negotiations is in such conditions.
The first main question is: What general agreements are likely to arise? Only after an answer to this basic question is obtained can one reasonably try to evaluate the relative advantage that the voting rule may give to each player.

4.2 A model of a bargaining committee: voting rule and voters' preferences

The situation described above is a genuine game situation, a formal model of which is obtained by incorporating the following elements. First, the set of members of the committee, or players for short, N = {1, 2, . . . , n}, and the N-voting rule W under which bargaining takes place. Second, the preference profile of the members of the committee over the feasible agreements for the particular issue at stake. We assume that the players have expected utility preferences according to the von Neumann and Morgenstern model reviewed in Section 1.4.4. We assume à la Nash [60] that lotteries over feasible agreements are also feasible (or that an agreement equivalent for all players to any such lottery is always feasible). Thus we can summarize the configuration of voters' preferences over the set of feasible agreements in utility terms by the set of associated utility payoffs, exactly as in a classical n-person bargaining problem (see Section 2.1.1)58. Thus the second ingredient of the model is the set of feasible utility vectors D ⊆ R^N, together with the particular vector d ∈ D associated with the disagreement or status quo that would be the payoff vector if no agreement were reached. Thus the pair (D, d) is a summary of the situation concerning the players' decisions. The problem they face is to agree on a point in D. Under these assumptions, the situation can be summarized by a pair (B, W), where B = (D, d) represents the preference profile in the committee in utility terms, and W is the N-voting rule to enforce agreements. We make the following assumptions consistent with this interpretation.
We assume that D is a closed, convex and comprehensive set containing d, such that there exists some x ∈ D s.t. x > d. We denote the boundary of D by ∂D. We assume also that D_d := {x ∈ D : x ≥ d} is bounded and non-level (i.e. ∀x, y ∈ ∂D ∩ D_d, x ≥ y ⇒ x = y). Note that formally any such B is a classical n-person bargaining problem. The set of all such bargaining problems is denoted by B. Thus, we are concerned with pairs (B, W) ∈ B × VR_N, each of which can be referred to as a bargaining problem B under rule W, or for short just a bargaining committee (B, W).

58 Readers not familiar with the material presented in Section 2.1 should read it before proceeding with this section.

It is worth remarking that this model includes classical bargaining problems and simple superadditive games as particular cases. To show this, let us see first how this class of problems can be associated with a subclass of non-transferable utility (NTU) games (Section 1.5.4). If no player can be forced to accept a payoff below status quo level, we can associate an NTU game (N, V_(B,W)) with each bargaining committee (B, W) by associating with each coalition S the set of all utility vectors feasible for S if such a coalition forms. For each S ⊆ N, let pr_S : R^N → R^S denote the natural S-projection, defined by pr_S(x) := (x_i)_{i∈S}, for all x = (x_i)_{i∈N} ∈ R^N, and denote x_S := pr_S(x) for any x ∈ R^N. Then, if S ∈ W, the set of utility vectors feasible for S is the set of points in R^S that are the S-projection of those points in D that give to players in N \ S at least the disagreement payoff. More precisely, for any S, the subset V_(B,W)(S) of R^S is given by

V_(B,W)(S) := pr_S({x ∈ D : x_{N\S} ≥ d_{N\S}}), if S ∈ W,
V_(B,W)(S) := pr_S(ch(d)), if S ∉ W,

where B = (D, d), and ch(d) denotes the comprehensive hull of d, that is, ch(d) = {x ∈ R^N : x ≤ d}.
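In the TU-like case (B = Δ, d = 0) this definition becomes very concrete: a winning coalition's members can project down to any division of the unit surplus, while a losing coalition is confined to payoffs at or below the status quo. A minimal membership test in Python (the encoding of coalitions and rules is my own, purely for illustration):

```python
def in_V(x_S, S, W, d=0.0):
    """Is the payoff vector x_S (one entry per member of coalition S)
    feasible in V_(B,W)(S), for the TU-like problem B = Delta, d = 0?"""
    S = frozenset(S)
    if S in W:
        # pr_S of {x in D : x_{N\S} >= 0}: members can share at most 1
        return sum(x_S) <= 1
    # a losing coalition only gets pr_S(ch(d)): every coordinate <= d_i
    return all(xi <= d for xi in x_S)

W = {frozenset({1, 2}), frozenset({1, 2, 3})}  # winning configurations
print(in_V([0.6, 0.4], {1, 2}, W))   # True: winning, shares sum to 1
print(in_V([0.2, 0.1], {1, 3}, W))   # False: losing, positive payoffs
```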
Classical n-person bargaining problems (see Sections 1.5.4 and 2.1.1) correspond to the case in which the voting rule is the unanimity rule, W = {N}, with N as the only winning coalition, while simple superadditive TU games correspond to the case in which the configuration of preferences is TU-like in the following sense.

Definition 28 In a bargaining committee (B, W) the configuration of preferences is TU-like if B = Δ := (Δ, 0), where Δ := {x ∈ R^N : Σ_{i∈N} x_i ≤ 1}.

Observe that when B = Δ the associated NTU game V_(Δ,W) is equivalent to the simple TU game v_W associated with the rule, given by (9) in Section 2.1.3. This two-ingredient model of a bargaining committee allows in particular for a neat distinction between voting rules and their associated simple TU games: two notions which are conceptually different (the specification of a voting rule does not involve its users' preferences, as pointed out in Section 2.1.3) but formally incorporate the same amount of information, and are often confused due to the habit of representing voting rules by simple games. Thus the subfamily of NTU games {(N, V_(B,W)) : (B, W) ∈ B × VR_N} associated with what we have called bargaining committees properly contains all classical bargaining problems and all simple superadditive games.

As briefly discussed in Section 1.5.2, there are two game-theoretic approaches for modelling and analysing game situations: the cooperative and the non-cooperative approaches. We first adopt a cooperative approach.

4.3 Cooperative game-theoretic approach

Nash's original two-person bargaining model (seen in 2.1.1) can be seen as consisting of two ingredients: a set of (two) players with von Neumann–Morgenstern preferences over a set of feasible agreements, and a voting procedure (unanimity) for settling agreements. As the only non-dictatorial two-person voting rule is unanimity, the second element is not explicit but tacit in Nash's model.
In other words, Nash's model is a particular case of the model just introduced, or, more properly speaking, the kind of situation we are interested in has been modelled by a natural generalization of Nash's model (and its traditional extension to n players), by considering n players and an arbitrary voting rule instead of unanimity. The basic question that such a situation raises is what (payoff vectors associated with) agreements can reasonably arise from the interaction of rational players in search of consensus in the situation specified by the model. The importance of the issue is clear in many contexts. In contrast with the case of a 'take-it-or-leave-it' committee, only entitled to accept or reject proposals submitted to it, without the capacity to modify them, it is often the case in a committee that uses a voting rule to make decisions that the final vote is merely the formal settlement of a bargaining process in which the issue to be voted upon has been adjusted to gain the acceptance of all members. In this case what general agreements are likely to arise? Or, in terms of our present model, is it possible to select a feasible agreement in D for each bargaining committee (B, W) that can arguably be considered a reasonable expectation for rational players confronted with the situation? Or, still in classical terms, what is the value for any player of the prospect of engaging in a situation such as this? Intuition suggests that the voting rule under which negotiations take place may influence such expectations.

4.3.1 Rationality conditions

In order to find an answer we also follow Nash's approach. That is, by assuming conditions that can be considered desirable from the point of view of rational players who share the information encapsulated in the model (preference profile B and voting rule W), we narrow down the set of admissible agreements.
The two-ingredient setting allows for the easy adaptation of the conditions used by Nash [60] and Shapley [76] in their respective setups with a similar objective. Proceeding as in these two seminal papers, we impose some conditions on a map Φ : B × VR_N → R^N, for the vector Φ(B, W) ∈ R^N to be considered as a rational agreement, or as a reasonable expectation of utility levels of a general agreement in a bargaining committee (B, W). As prerequisites we build the requirements of being feasible and no worse than the status quo for any player into the very notion of a solution. Namely, if B = (D, d), we require: Φ(B, W) ∈ D (feasibility), and Φ(B, W) ≥ d (individual rationality). Therefore we are implicitly assuming that no player can be forced to accept an agreement that is worse for him/her than the status quo59. In addition to this we impose the following conditions, all of them natural adaptations of Nash's and Shapley's characterizing properties (see Sections 2.1.1 and 2.1.2):

1. Efficiency (Eff). For all (B, W) ∈ B × VR_N, there is no x ∈ D s.t. x > Φ(B, W). (Rational players will not agree on something when a better option is feasible.)

For any permutation π : N → N, let πB := (π(D), π(d)) denote the bargaining problem that results from B by the π-permutation of its coordinates, so that for any x ∈ R^N, π(x) denotes the vector in R^N s.t. π(x)_{π(i)} = x_i.

2. Anonymity (An). For all (B, W) ∈ B × VR_N, any permutation π : N → N, and any i ∈ N, Φ_{π(i)}(π(B, W)) = Φ_i(B, W), where π(B, W) := (πB, πW). (Expectations are not influenced by the players' labels but only by the structure of the problem.)

59 A richer model would include two reference points: the status quo, as the initial starting point, and a vector of 'minimal rights' or minimal admissible payoffs. These two points coincide in a classical bargaining situation, but this is not necessarily so when unanimity is not required.

3. Independence of irrelevant alternatives (IIA). Let B, B' ∈ B, with B = (D, d) and B' = (D', d'), be such that d' = d, D' ⊆ D and Φ(B, W) ∈ D'. Then Φ(B', W) = Φ(B, W), for any W ∈ VR_N. (An agreement that is considered satisfactory under a voting rule should also be considered satisfactory if under the same voting rule this agreement remains feasible in a smaller feasible set.)

4. Invariance w.r.t. positive affine transformations (IAT). For all (B, W) ∈ B × VR_N, and all α ∈ R^N_{++} and β ∈ R^N, Φ(α ∗ B + β, W) = α ∗ Φ(B, W) + β, where α ∗ B + β := (α ∗ D + β, α ∗ d + β), denoting α ∗ x := (α_1 x_1, . . . , α_n x_n), and α ∗ D + β := {α ∗ x + β : x ∈ D}. (As seen in Section 1.4.4, the utility representation of von Neumann–Morgenstern preferences is determined up to the choice of a zero and a unit of scale. Thus if the utility of each player is changed in this way the payoffs of a satisfactory agreement should change accordingly.)

5. Null player (NP). For all (B, W) ∈ B × VR_N, if i ∈ N is a null player (i.e. a player occupying a null seat) in W, then Φ_i(B, W) = d_i. (Null players' expectations are set to the status quo level, given their null capacity to influence the outcome under the voting rule according to which final agreements are enforced.)

Note that Eff, IIA and IAT are adaptations of Nash's axioms that state basically a relationship between the agreement-solution and the bargaining element B, while An (adapted from Nash's and Shapley's anonymity) and NP (from Shapley's system) concern the relationship with both elements, B and W. It may be worth remarking that An entails a consistent relabelling of voters in B and seats in W. As we will see, these conditions are not enough to single out an agreement, so we also consider the two conditions below, which impose alternative constraints on the solution for TU configurations of preferences (i.e. when B = Δ).
The first condition (Transfer) postulates that the effect of eliminating a minimal winning configuration from the set of winning configurations is the same whatever the voting rule. It is the adaptation to the present two-ingredient model of a condition equivalent to that of 'transfer' (see 2.1.6), introduced by Dubey [18] in order to characterize the Shapley–Shubik index60. In [39] we replace it by a weaker condition (in the presence of anonymity) to characterize the Shapley–Shubik and Banzhaf indices. This is the second condition (Symmetric gain–loss), which requires that the effect of eliminating a minimal winning configuration from the list that specifies the voting rule is equal on any two voters belonging (not belonging) to it.

6. Transfer (T). For any two rules W, W' ∈ VR_N, and all S ∈ M(W) ∩ M(W') (S ≠ N):

Φ(Δ, W) − Φ(Δ, W \ {S}) = Φ(Δ, W') − Φ(Δ, W' \ {S}).   (52)

6*. Symmetric gain–loss (SymGL). For any voting rule W ∈ VR_N, and all S ∈ M(W) (S ≠ N), Φ_i(Δ, W) − Φ_i(Δ, W \ {S}) = Φ_j(Δ, W) − Φ_j(Δ, W \ {S}), for any two voters i, j ∈ S, and any two voters i, j ∈ N \ S.

4.3.2 Axiomatic characterizations

Denote by Nash(B) the Nash bargaining solution of an n-person bargaining problem B = (D, d) (as in 2.1.1), that is

Nash(B) = arg max_{x∈D_d} Π_{i∈N} (x_i − d_i).

And denote by Nash^w(B) the w-weighted asymmetric Nash bargaining solution [32] of the same problem for a vector of non-negative weights w = (w_i)_{i∈N}, that is

Nash^w(B) = arg max_{x∈D_d} Π_{i∈N} (x_i − d_i)^{w_i}.

60 Formulation (52) of this condition is equivalent (see [39]) to the more traditional form for simple games (see (10) in 2.1.6), which once rewritten in terms of the current model becomes Φ(Δ, W) + Φ(Δ, W') = Φ(Δ, W ∪ W') + Φ(Δ, W ∩ W').

Basically, asymmetric Nash bargaining solutions emerge by dropping the requirement of symmetry or anonymity in the Nash system, hence their name. Obviously, the bigger the weight w_i the better for player i.
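In particular, on the TU-like problem the w-weighted solution just returns the normalized weights; for two players this is easy to verify numerically by maximizing x^{w_1}(1 − x)^{w_2} over the frontier x_1 + x_2 = 1 with d = 0, which should return x = w_1/(w_1 + w_2). A crude grid-search sketch, with all implementation details my own:

```python
def asym_nash_2p(w1, w2, steps=100_000):
    """Maximize x**w1 * (1 - x)**w2 over the efficient frontier x1 + x2 = 1."""
    best_x, best_val = 0.0, -1.0
    for k in range(1, steps):
        x = k / steps
        val = x**w1 * (1 - x)**w2
        if val > best_val:
            best_x, best_val = x, val
    return best_x

# with weights (3, 1) the maximizer is w1/(w1 + w2) = 0.75
print(round(asym_nash_2p(3, 1), 3))  # 0.75
```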
The lack of symmetry may be due to an asymmetric environment (which is not included in the model) that favours different players differently. Binmore ([12], p. 78) uses the term 'bargaining power' to refer to the players' weights and interprets the asymmetric Nash solutions as reflecting the different bargaining powers of the players 'determined by the strategic advantages conferred on players by the circumstances under which they bargain'. Note that if we accept this interpretation then this notion of bargaining power is purely relative, in the sense that a w-weighted asymmetric Nash bargaining solution, Nash^w(B), does not vary if all the weights are multiplied by the same positive constant. In particular, when the bargaining problem is Δ = (Δ, 0), it is easy to check that

Nash^w(Δ) = w̄,   (53)

where w̄ is w's normalization, that is, w̄ = w / Σ_{i∈N} w_i.

The following result shows how conditions 1–5 considered in the previous section drastically restrict the possible answers to the question raised.

Theorem 29 (Laruelle and Valenciano [48]61) A value Φ : B × VR_N → R^N satisfies efficiency (Eff), anonymity (An), independence of irrelevant alternatives (IIA), invariance w.r.t. affine transformations (IAT) and null player (NP), if and only if

Φ(B, W) = Nash^{φ(W)}(B),   (54)

for some map φ : VR_N → R^N that satisfies anonymity and null player.

The interpretation is clear. If these conditions are accepted as desirable requirements for an agreement to be considered acceptable, they fail to characterize a single agreement for each problem, but restrict

61 In fact the result proved in [48] is slightly different because there we do not assume non-levelness of D_d. This forces us to assume there a stronger version of NP, in which only null players have null expectations (though in exchange Eff is not needed). Thus this is an alternative version of the result proved there that can be proved assuming non-levelness of D_d as we do here.
drastically the structure of the solution. Namely, these conditions yield (54), a remarkable formula in which the impact of the voting rule is, so to say, 'separated' as exclusively affecting the 'bargaining power' (in the precise game-theoretic sense explained above) of each member of the committee. More precisely, such bargaining power is an anonymous function of the voting rule that gives power zero to the members occupying null seats, whatever the preference profile. Note that in view of (54) and (53), we have in particular for a TU-like preference profile

Φ(Δ, W) = Nash^{φ(W)}(Δ) = φ̄(W),

where φ̄(W) = φ(W) / Σ_{i∈N} φ_i(W). That is, if the weights φ(W) are normalized so as to add up to one, they coincide with Φ(Δ, W). Therefore, as Nash^{φ(W)}(B) = Nash^{φ̄(W)}(B), formula (54) can be rewritten

Φ(B, W) = Nash^{Φ(Δ,W)}(B).   (55)

Remark. Therefore in our setting the old striking duality of the Shapley–Shubik index, mentioned in Section 2.1.3, which can be interpreted either as a piece of 'cake' or (on less clear grounds) as a measure of 'voting power', is clarified. This happens on clear grounds for any Φ that satisfies the above conditions. Namely, for any Φ that satisfies the above conditions, when the configuration of preferences is TU-like it holds that for any voting rule W, the vector Φ(Δ, W), which is a vector of expected utilities (pieces of a 'cake'), also gives the bargaining powers in the precise game-theoretic sense.

Nevertheless these conditions do not provide a crisp answer to the question of reasonable agreement. But in view of the above discussion, any map Φ(Δ, ·) : VR_N → R^N that satisfies efficiency, anonymity and null player would fit into formula (55) and yield a solution Φ(B, W) that satisfies the four conditions. In other words: assuming Eff, An, IIA, IAT and NP, the solution, given by (55), will be unique as soon as Φ(Δ, ·) is specified.
The conditions on Φ(Δ, ·) (efficiency, anonymity and null player) bring to mind the Shapley value or, more specifically in the context of simple games, the Shapley–Shubik index. But there are other alternatives; for instance, the normalization of any semivalue meets these conditions, as do some other power indices, such as the Holler–Packel index (see [30]). Denote by Sh(W) the Shapley–Shubik index of a voting rule W, i.e. the Shapley value of the associated simple game v_W. We have the following result (see [48]).

Proposition 30 Let Φ : B × VR_N → R^N be a value that satisfies Eff, An, NP and T; then for any voting rule W ∈ VR_N, Φ(Δ, W) = Sh(W).

Then as an easy corollary of Theorem 29 and Proposition 30 we have the following theorem.

Theorem 31 (Laruelle and Valenciano [48]) There exists a unique value Φ : B × VR_N → R^N that satisfies efficiency (Eff), anonymity (An), independence of irrelevant alternatives (IIA), invariance w.r.t. affine transformations (IAT), null player (NP) and transfer (T), and it is given by

Φ(B, W) = Nash^{Sh(W)}(B).   (56)

Note that (56) yields for W = {N} (or any symmetric voting rule): Φ(B, W) = Nash(B), while when B = Δ, for any rule W, it yields Φ(Δ, W) = Sh(W). Therefore when the solution (56) is restricted to bargaining problems it yields the Nash bargaining solution, and when restricted to TU-like committees it yields the Shapley–Shubik index. Moreover, NP and T become empty requirements when W is fixed as the unanimity rule W = {N} (or any symmetric voting rule). Thus the characterizing axioms in Theorems 29 and 31 become Nash's axiomatic system when restricted to Φ(·, W) : B → R^N for any fixed symmetric rule. On the other hand, as conditions IIA and IAT become empty requirements when fixing B = Δ, Proposition 30 can also be rephrased like this: the characterizing axioms in Theorem 31 when restricted to Φ(Δ, ·) : VR_N → R^N become Shapley–Dubey's characterizing system of the Shapley–Shubik index in W.
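The Shapley–Shubik index that enters this characterization can be computed straight from its pivotal-voter definition by enumerating voter orderings. A brute-force Python sketch (fine for small committees only; the function and variable names are mine):

```python
from itertools import permutations

def shapley_shubik(weights, quota):
    """Fraction of voter orderings in which each voter is pivotal."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        running = 0
        for voter in order:
            running += weights[voter]
            if running >= quota:      # this voter tips the coalition
                pivots[voter] += 1
                break
    total = sum(pivots)  # one pivot per ordering, so total = n!
    return [p / total for p in pivots]

print(shapley_shubik([50, 25, 25], 70))  # ≈ [0.667, 0.167, 0.167], i.e. (2/3, 1/6, 1/6)
```

For a committee using this weighted rule, these would then be the bargaining weights entering the w-weighted Nash solution that the theorem singles out.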
In other words, Theorem 31 integrates Nash's and Shapley–Dubey's [20] characterizations into one, but goes beyond these previous characterizations, yielding a surprising solution, given by (56), to the more complex problem under consideration.

Remark. In Proposition 30, and in Theorem 31, transfer (T) can be replaced by the weaker (in the presence of anonymity) condition of SymGL. Thus we have still another characterization of (56), given by the following theorem.

Theorem 32 There exists a unique value Φ : B × VR_N → R^N that satisfies Eff, An, IIA, IAT, NP and SymGL, and it is given by (56).

4.3.3 Discussion

As briefly reviewed in Section 2.1.3, the Shapley–Shubik index results from applying the Shapley value to the simple game (a particular type of TU-game) associated with a voting rule. Recall that the Shapley value (see 2.1.2) is meant to be a 'value' in the sense of Nash's bargaining solution. That is, a rational expectation of utility for a rational player engaging in a sort of bargaining situation described by a TU-game. Thus the Shapley–Shubik index presupposes a sort of bargaining situation described by the TU-game associated with the voting rule. But why the one described by this game? In the light of the richer model we have introduced here, we see that this amounts to assuming a very particular preference profile in the committee: a TU-like preference profile. The simple game associated with a voting rule is often presented in the literature as merely an alternative way of presenting the same information: the voting rule itself. But, as has been pointed out by various authors, this representation has certain conceptual implications. In terms of the model presented here a TU preference profile is only a particular case. In other words, from the point of view provided by Theorem 29, the Shapley–Shubik index is just one of the candidates to fit formulae (54) and (55).
Even the duality of the Shapley–Shubik index alluded to in Section 2.1.3 (piece of a 'cake' and measure of 'power') is shared by all the reasonable candidates to fit formula (54). Now there is the question of the compellingness of the characterizing conditions, and consequently that of the results obtained: Theorems 29 and 31. The conditions in Theorem 29 are the result of integrating the (in our view) most compelling ones in the classical characterizations of the Nash bargaining solution and the Shapley value. But these conditions are not enough to single out an agreement for each bargaining committee. We see no drawback here though, nor do we consider this lack of uniqueness surprising. After all, the model only incorporates the basic elements of the situation. In such situations, even assuming a given profile of preferences and a given voting rule, there are other details that would surely influence the outcome of negotiations. Most importantly, the particular 'protocol' or set of more or less clear rules according to which negotiations proceed in the committee is crucial. This will be seen more clearly in the next section, in which we adopt a non-cooperative approach and consider a variety of such protocols. As for Theorem 31, it gives 'axiomatic' support to (56), and consequently to the Shapley–Shubik index as a measure of bargaining power in a wider setting than the classical setup of simple games. But there is still the question of the compellingness of the 'transfer' condition, and the same doubts about this condition raised in the traditional setting of simple games in Section 2.1.6 remain in the current setup62. The discussion in the preceding paragraph sheds some light on this problem. It seems clear that there is not sufficient information within the current model to expect a unique compelling answer based on its two elements.
In the next section we describe a very simple protocol that would yield the solution given by (56), thus providing non-cooperative foundations for it.

62 In [50] we consider a wider model admitting random voting rules. In this wider setting transfer can be derived from two relatively compelling conditions. One requires basically that when the preference profile is TU-like the expected payoff vector when a coin is to be tossed to choose between two voting rules is the same as the average between the expectations in either case. The other requires that, for any given preference profile, the expectations for two random voting rules such that both give the same probability of being winning to each coalition are the same. In other words, the expected payoffs depend only on the probabilities of each coalition being winning. But the latter condition is not beyond controversy, as it means ignoring part of the information explicit in a random voting rule.

4.4 A non-cooperative model of a bargaining committee

We now explore the non-cooperative foundations of formulae (54) and (56). As Binmore [13] puts it: 'Cooperative game theory sometimes provides simple characterizations of what agreement rational players will reach, but we need non-cooperative game theory to understand why.' In our case non-cooperative modelling requires further specification beyond the only two elements, B and W. Some assumptions are necessary about the way in which bargaining takes place in the committee. How are the proposals for agreement submitted and by whom? If consensus is sought, how are partial disagreements dealt with? How is enforcing power used by winning coalitions? The mere formulation of these questions evidences the complexity of the situation we want to model. The answers are not obvious, and they surely differ in different real-world contexts.
A positive approach would require us to have, if possible, the particular details that answer these questions for the particular committee we are dealing with. But we are not interested in a prediction for a particular committee. We are interested rather in a term of reference model in which the necessary details are at once simple and sufficiently specified, and in which the only source of ‘bias’ or asymmetry lies in the ingredients that specify the model so far: B and W. The two elements are usually asymmetric, as the proximity between players’ preferences may differ, and often the voting rule is not symmetric. But if our ultimate goal is to establish a recommendation for the choice of a voting rule, it seems that such a recommendation should not depend on the preference configuration in the committee. This preference profile is different for each issue, while the voting rule is usually the same, at least for a specified variety of issues. Therefore it seems that our model of a bargaining protocol should not depend on the preference profile. On the other hand, a model consistent with the results obtained axiomatically, in which the bargaining power is a function of the voting rule, calls for bargaining protocols dependent on the voting rule. Thus we assume that in order to make a proposal the proposer needs the support of a winning coalition. In this way the voting rule may be determinant for the chances of each player playing the role of proposer, and if the voting rule is not symmetric players may not have the same chances of playing that role. The basic idea for the bargaining protocols that we consider is this: A player, with the support of a winning coalition to play the role of proposer, makes a proposal for agreement. If it is accepted by all players, the game ends. If any player rejects it then with some probability negotiation ends in failure (i.e. the status quo prevails), otherwise a new proposer and a winning coalition supporting him/her are chosen. 
Thus the negotiating process ends either when consensus is reached or, if failure occurs, in the status quo. This still leaves many possibilities open: How is the supporting coalition formed? How is the proposer chosen by such a coalition? In particular this model accounts for the non-uniqueness of the answer provided by (54). Different specifications concerning these points yield different outcomes. As we will see, (56) appears as a special case with a sort of 'focal' appeal given the simplicity of the particular protocol associated with it, which confers on it some normative value as a term of reference. In order to see the effect of the likelihood of being the proposer and the effect of the way in which disagreement is dealt with, we consider first a strictly probabilistic bargaining protocol in which no voting rule enters the model.

4.4.1 Probabilistic protocols

For each p = (p1, . . . , pn) ∈ R^N_+ s.t. Σ_{i∈N} pi = 1, and each r ∈ R (0 < r < 1), assume the following strictly probabilistic protocol for a committee with a given preference profile B = (D, d):

(p, r)-Protocol: A proposer i ∈ N is chosen with probability pi and makes a feasible proposal x ∈ Dd. (i) If all the players accept it the game ends with payoffs x. (ii) If any player does not accept it: with probability r the process recommences; with probability 1 − r the game ends in failure or 'breakdown' with payoffs d.

In this model p and r are exogenous, that is, they are the parameters that specify the model. The different likelihoods of being a proposer should originate from some asymmetry in the environment outside the model. The interpretation of r is clear: it represents the patience of the committee in seeking consensus. The bigger r is, the smaller the risk of breakdown, and the greater the chances of continuing to bargain in search of consensus after a disagreement. We have the following result for this family of protocols: one for each probability distribution p and each r.
Theorem 33 (Laruelle and Valenciano [51]) Let B = (D, d) be the preference profile of an N-person committee satisfying the conditions specified in Section 4.2. Under a (p, r)-protocol: (i) there exists a stationary subgame perfect equilibrium (SSPE); (ii) as r → 1 any SSPE payoff vector converges to the w-weighted Nash bargaining solution of B with weights given by wi = pi.

We give here an outline of the proof in [51] that provides some interesting insights, in particular regarding the nature of the stationary subgame perfect equilibria (see 1.5.3) for each r. A stationary strategy profile should specify for each player i the proposal that he/she will make whenever he/she is chosen to be the proposer, and what proposals he/she will accept from others. A proposal by i can be specified by a vector π^i = (yi, (x^i_j)_{j∈N\i}) ∈ Dd, where yi is the payoff i will propose for him/herself, and x^i_j the payoff i will propose for j ≠ i. Acceptance and refusal by i of a proposal by another player should depend only on the utility he/she receives. This can be specified by the minimal level of utility for which he/she will accept it. In SSP equilibrium every player should be offered at least what he/she expects if he/she refuses. We can assume d = 0 without loss of generality, and consistently in what follows we write D0 instead of Dd. Then it should be that for all i and all j ≠ i,

x^i_j ≥ (1 − r)0 + r pj yj + r Σ_{k∈N\j} pk x^k_j.

As the proposer will seek the biggest payoff compatible with this condition, from the non-levelness of D0 we can assume equality. As the right-hand side of the equation does not depend on i, we can drop the superindex in x^i_j and x^k_j, and rewrite the above condition as an equation:

xj = r pj yj + r Σ_{k∈N\j} pk xj = r pj yj + r(1 − pj)xj,

that can be rewritten for all j as

r pj yj = (1 − r + r pj)xj. (57)

Note that if pj = 0 then xj = 0, while if pj ≠ 0 then (57) can be rewritten

yj = θj(r)xj (where θj(r) := (1 − r + r pj)/(r pj) > 1). (58)

Observe that in this case (i.e. if pj ≠ 0) yj > xj. That is, being the proposer is desirable. In fact the proposer would make the best of this advantage by maximizing his/her payoff under the constraint of feasibility, that is, for all j

yj = max{y ∈ R : (x−j, y) ∈ D}, (59)

where (x−j, y) denotes the point whose j-coordinate is y and all other coordinates are equal to those of x (this maximum exists from the compactness of D0). As players with probability 0 of being the proposer will receive 0 according to (57), one can constrain attention to those players with a positive probability of being the proposer. To simplify the notation, instead of dealing with this subset as a proper subset {i1, . . . , in'} ⊆ N, one can take it to be N itself. We then have a system of 2n equations ((58) and (59)) with 2n unknowns ((x1, . . . , xn) and (y1, . . . , yn)) specifying a stationary strategy profile: each j whenever chosen as proposer will propose π^j = (yj, x−j). That is, he/she will propose yj for him/herself and xi for each i ≠ j, and accept only proposals that give him/her at least xj. The problem is to prove that a solution for this system exists. This can be proved by a fixed-point argument63. Then it only remains to show that the limit of SSPE ex ante payoffs (i.e. the expected payoffs before the proposer is chosen) as r → 1 is Nash^p(B). We omit the details, which can be seen in [51].

63 If B is the normalized TU bargaining problem (with d = 0), equation (59) becomes yj = 1 − Σ_{k∈N\j} xk, so that a linear system results that, as can be easily proved, yields as its unique solution: xj = r pj, and yj = 1 − r + r pj.

Interpretation of the SSPE. Let us examine the equations (58) and (59) solved by an SSPE. That is, for all j ∈ N,

yj = θj(r)xj (with θj(r) := (1 − r + r pj)/(r pj) > 1),
yj = max{y ∈ R : (x−j, y) ∈ D}.

(i) According to the first equation, the relative advantage of the proposer diminishes as r increases. Namely, θj(r) → 1 as r → 1, where θj(r) = yj/xj is the ratio between player j's expected payoff when he/she is the proposer and when the proposer is someone else.

(ii) Nevertheless, as each player j is the proposer with probability pj, ex ante (i.e. before the proposer is chosen) the expected SSPE payoffs for each r are given by

Σ_{j∈N} pj π^j = Σ_{j∈N} pj (yj, x−j). (60)

Thus the probability of being the proposer has a determinant impact on the expected SSPE payoffs, which according to Theorem 33 converge to Nash^p(B) as r → 1 (see Figure 4.1).

(iii) Note that the 'agreement' given by (60) is not 'efficient' in general, as it is the p-weighted average (i.e. a convex combination) of n points in ∂D: π^1, π^2, . . . , π^n, namely the continuation SSPE payoffs after the choice of a proposer corresponding to the n different possible proposers. Thus, in general the SSPE ex ante payoffs are not 'efficient', as they are in the interior of D, though the bigger r is the closer they are to ∂D (see Figure 4.1).

(iv) By contrast, as for every proposer the continuation SSPE payoffs after the choice of a proposer are in ∂D, if B is the normalized TU bargaining problem (with d = 0) the SSPE ex ante payoffs are 'efficient' (i.e. they are in ∂D) and the same for every r, given by Nash^p(B) = p.

Figure 4.1. Continuation payoffs after the choice of proposer in a three-person problem: π^1 = (y1, x2, x3), π^2 = (x1, y2, x3), π^3 = (x1, x2, y3).

4.4.2 Bargaining protocols under a voting rule

What we have called a (p, r)-protocol is entirely specified in probabilistic terms. On the other hand, a comparison of the results given by Theorem 33 with formula (54) suggests a way of bridging these results obtained from different approaches.
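Footnote 63's closed form for the TU case is easy to check numerically. The sketch below (an illustration under the footnote's assumptions: normalized TU problem with d = 0; the particular p and r are arbitrary choices) verifies that xj = r·pj and yj = 1 − r + r·pj solve equations (57) and (59), and that the ex ante expected payoff (60) of each player equals pj exactly, for every r, consistent with Theorem 33 and remark (iv).

```python
# Sketch: verify footnote 63's SSPE closed form for the normalized
# TU bargaining problem (feasible frontier: payoffs sum to 1, d = 0).
# Assumed illustrative parameters: p and r below.

p = [0.5, 0.3, 0.2]   # probabilities of being the proposer
r = 0.9               # probability of recommencing after a rejection

x = [r * pj for pj in p]             # payoff j accepts from other proposers
y = [1 - r + r * pj for pj in p]     # payoff j proposes for him/herself

for j, pj in enumerate(p):
    # Equation (57): r*pj*yj = (1 - r + r*pj)*xj
    assert abs(r * pj * y[j] - (1 - r + r * pj) * x[j]) < 1e-12
    # Equation (59) in the TU case: yj = 1 - sum of xk over k != j
    assert abs(y[j] - (1 - sum(x) + x[j])) < 1e-12

# Ex ante expected payoffs (60): player i gets y[i] as proposer
# (probability p[i]) and x[i] otherwise -- this equals p[i] for every r.
ex_ante = [pi * y[i] + (1 - pi) * x[i] for i, pi in enumerate(p)]
print([round(v, 12) for v in ex_ante])   # equals p itself
```

Algebraically, pi·(1 − r + r·pi) + (1 − pi)·r·pi = pi, so the dependence on r cancels: the proposer's advantage (yj > xj) and the risk of never proposing balance out ex ante, as remark (iv) states.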
The basic idea is, as anticipated above, to link the probability of being the proposer, which is the source of bargaining power in a (p, r)-protocol, with the voting rule, which is the only element of the bargaining environment included in the model of a bargaining committee. But there are many ways of selecting a proposer based on the voting rule. That is, there are infinite ways of mapping voting rules into probability distributions over players. The question is whether there are any especially simple and reasonable proposer selection protocols based on the voting rule within the plethora of possibilities consistent with formulae (54) and (56). A general principle that seems reasonable is the following: in order to play the role of proposer, a player needs the support of a winning coalition that he/she belongs to. In order to consider in full generality ways of going from voting rules to probabilities respecting this principle we can abstract away protocol details. We consider maps P : VR^N → P_{N×2^N}, where P_{N×2^N} denotes the set of probability distributions over N × 2^N, and use the notation pW to denote P(W). That is, under rule W, pW(i, S) = Prob(i is the proposer with the support of S). If we want pW : N × 2^N → [0, 1] to respect the principle stated, as well as the null player principle, the following should be required:

(pW(i, S) ≠ 0) ⇒ (i ∈ S ∈ W and S \ i ∉ W). (61)

That is, the proposer has to be decisive in the winning coalition that supports him/her. In order to preserve the principle of anonymity the following must be required for any permutation π:

pW(i, S) = pπW(πi, πS). (62)

Then any p : VR^N → P_{N×2^N} satisfying (61) and (62) 'abstracts' a proposer selection protocol determined by the voting rule in a bargaining committee, which gives the probabilities of being the proposer by

p^W_i := Σ_{S⊆N} pW(i, S).

Any such protocol combined with the (p^W, r)-protocol will yield a particular case of (54) in the limit (in the sense of Theorem 33).
But, as has been stated, any map satisfying these conditions just 'abstracts' a proposer selection protocol, and we are interested in the explicit protocols, not in their abstract summary by a vector of probabilities p. Still, a great variety of protocols are compatible with the above conditions. We consider a general and relatively simple way of selecting a player i to play the role of proposer and a winning coalition S containing him/her such that i ∈ S ∈ W and S \ i ∉ W. It seems natural to form a coalition in support of a proposer prior to the choice of the proposer. This entails a coalition formation process that can be encapsulated in a black-box-like probability distribution. Let p denote a probability distribution over coalitions described by a map p : 2^N → [0, 1] that, in order to be consistent with the anonymity assumption, assigns the same probability to all coalitions of the same size. In other words, p(S) depends only on s. Thus we can write ps instead of p(S).

(S-i)-Protocols (Choose first S, then i). Assume a given probability distribution over coalitions p (satisfying the above conditions), and the following protocol: Choose a coalition S according to p. Choose a player i in S at random. If S ∈ W and S \ i ∉ W, player i is the proposer; otherwise recommence until a proposer is chosen.

The probability of player i being the proposer after the first two steps is given by

Σ_{S : i∈S∈W, S\i∉W} (1/s) ps = Σ_{S : i∈S} (1/s) ps (vW(S) − vW(S \ i)), (63)

but in general no player is chosen as proposer after a single round. Nevertheless the actual probabilities of being proposer after applying an (S-i)-protocol are proportional to the probabilities given by (63), which, as can easily be checked, yields the family of normalized semivalues. In particular the two best known semivalues result for the following probabilities.

Shapley–Shubik index. If ps = 1/((n + 1) C(n, s)), where C(n, s) denotes the binomial coefficient, then the probability of player i being the proposer under an (S-i)-protocol is given by the Shapley–Shubik index of the voting rule, that is, p^W_i = Shi(W). Thus in terms of (S-i)-protocols the Shapley–Shubik index emerges for the familiar probabilistic model of choosing a size at random, and then a coalition of that size at random.

Normalized Banzhaf index. If ps = k s/2^n, where k is a constant resulting from normalization, then the probability of player i being the proposer under an (S-i)-protocol is given by the normalized Banzhaf index of the voting rule. Note that in this protocol the probability of a coalition is weighted by its size64.

64 Note that if ps = 1/2^n for all S, i.e. if all coalitions are equally probable, then the probability of a player being chosen as the proposer is not the normalized Banzhaf index of that player.

A more general way of selecting a proposer is the following.

(i, S)-Protocols (Choose i and S simultaneously). As already pointed out, any W → pW such that pW(i, S) satisfies (61) and (62) abstracts a protocol that, combined with the resulting (p, r)-protocol, yields a particular case of (54) in the limit. This also includes as particular cases some power indices less familiar than Shapley–Shubik's and Banzhaf's that lie outside the family of normalized semivalues, such as Deegan–Packel's [17] and Holler–Packel's [30] indices.

But again the Shapley–Shubik index emerges associated with a very simple selection procedure:

Shapley–Shubik's Protocol (formulation 1). (i) Choose an order in N at random, and let the players join a coalition in this order until a winning coalition S is formed. (ii) Then the last player entering S is the proposer.

Under this protocol: Prob(i is the proposer) = Shi(W). The simplicity of this procedure within the family of protocols described above is worth remarking on. First, the formation of the
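The claim that the size distribution ps = 1/((n + 1) C(n, s)) turns (63) into the Shapley–Shubik index can be checked by brute force on a small rule. The sketch below (the weighted majority rule is an arbitrary illustration, not one from the text) computes the index from its pivotal-player definition and, separately, normalizes the one-round proposer probabilities of the (S-i)-protocol:

```python
from itertools import combinations, permutations
from math import comb

# Illustrative (hypothetical) weighted majority rule: weights and quota.
weights, quota = [2, 1, 1], 3
n = len(weights)
players = range(n)
wins = lambda S: sum(weights[i] for i in S) >= quota   # v_W(S)

# Shapley-Shubik index via its pivotal-player definition over orders.
pivots = [0] * n
for order in permutations(players):
    S = set()
    for i in order:
        S.add(i)
        if wins(S):
            pivots[i] += 1
            break
shapley = [c / sum(pivots) for c in pivots]

# Formula (63) with p_s = 1/((n+1)*C(n,s)): probability that i becomes
# the proposer in one round of the (S-i)-protocol, then normalized.
q = [0.0] * n
for s in range(1, n + 1):
    p_s = 1 / ((n + 1) * comb(n, s))
    for S in combinations(players, s):
        for i in S:
            if wins(S) and not wins(set(S) - {i}):     # i decisive in S
                q[i] += p_s / s
q_norm = [v / sum(q) for v in q]

assert all(abs(a - b) < 1e-12 for a, b in zip(q_norm, shapley))
print(q_norm)   # the Shapley-Shubik index of the rule
```

For this size distribution each one-round probability works out to Shi(W)/(n + 1), so the restart rounds only rescale by the constant n + 1, which normalization removes.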
coalition appears in this case as a sequential process, which seems at once natural and the simplest way. Alternatively and equivalently it can be described as choosing one player at random to join the coalition at each step. One may wonder why the 'swinger' is chosen as the proposer rather than any other player who is decisive in S. But it does not make any difference if the second step is replaced by this: Choose one of the players decisive in S at random. It can easily be seen that the procedure is equivalent. Thus the protocol can be specified alternatively as follows:

Shapley–Shubik's Protocol (formulation 2). (i) Starting from the empty coalition, choose one player at random each time from the remaining players until a winning coalition S is formed. (ii) Then choose one of the players decisive in S at random.

Thus, under this protocol each player has a probability of being the proposer equal to his/her Shapley–Shubik index for the current voting rule.

Summing up, we can combine any of the above proposer selection protocols with the probabilistic (p, r)-protocol considered in the previous section. In view of Theorem 29 and the above discussion we have the following results, which are the non-cooperative counterpart of (54) and (56).

Theorem 34 Let (B, W) be an N-person bargaining committee with preference profile B = (D, d) satisfying the conditions specified in Section 4.2. Under any (S-i) or (i, S)-protocol for selecting the proposer combined with the resulting (p, r)-protocol: (i) for all r (0 < r < 1), there exists a stationary subgame perfect equilibrium (SSPE); (ii) as r → 1, any SSPE payoff vector converges to the weighted Nash bargaining solution of B with weights given by the probabilities of being the proposer determined by W and the proposer selection protocol; (iii) under an (S-i)-protocol, these weights are given by a normalized semivalue of the voting rule (i.e. of the simple TU-game vW).
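The equivalence of the two formulations can be verified by exact enumeration on a small rule: the sequential joining process in either formulation amounts to drawing a uniformly random order of N, so one can average over all orders. A sketch (the three-player weighted rule is an arbitrary illustration, not from the text):

```python
from itertools import permutations
from math import factorial

# Illustrative (hypothetical) rule: weighted majority, weights and quota below.
weights, quota = [2, 1, 1], 3
n = len(weights)
wins = lambda S: sum(weights[i] for i in S) >= quota

def first_winning_prefix(order):
    """Players join in the given order; return the first winning coalition."""
    S = set()
    for i in order:
        S.add(i)
        if wins(S):
            return S

prob1 = [0.0] * n   # formulation 1: the swinger proposes
prob2 = [0.0] * n   # formulation 2: a random decisive member of S proposes
for order in permutations(range(n)):
    S = first_winning_prefix(order)
    # Formulation 1: the last player who entered S is the proposer.
    last = next(i for i in reversed(order) if i in S)
    prob1[last] += 1 / factorial(n)
    # Formulation 2: choose uniformly among the players decisive in S.
    decisive = [i for i in S if not wins(S - {i})]
    for i in decisive:
        prob2[i] += 1 / (len(decisive) * factorial(n))

assert all(abs(a - b) < 1e-12 for a, b in zip(prob1, prob2))
```

Both vectors coincide with the Shapley–Shubik index of the rule, since the swinger is by construction one of the decisive players of the first winning prefix.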
Theorem 35 Under the Shapley–Shubik protocol combined with the resulting (p, r)-protocol: (i) parts (i) and (ii) of Theorem 34 hold, and the weights in the limit are given by the Shapley–Shubik index of the voting rule W; (ii) if B is the normalized TU bargaining problem, the SSPE payoffs are also given by the Shapley–Shubik index of the voting rule W, for all r (0 < r < 1).

Remarks. In the light of this bargaining model the voters' conventional 'voting power' or decisiveness becomes 'bargaining power' in a specific game-theoretic sense. Thus, the old conceptual ambiguity commented on in Section 2.2 concerning the game-theoretic notion of 'value' when applied to simple games representing voting rules, and its alternative interpretation as 'decisiveness', or likelihood of playing a crucial role in a decision, is clarified.

(i) In a bargaining committee, according to this model, the source of (bargaining) power is the likelihood of being the proposer, related to the likelihood of being decisive via the protocol.

(ii) By part (ii) of Theorem 35, in the case of a committee with a TU preference profile the ex ante SSPE payoffs are given by the Shapley–Shubik index of the voting rule W, whatever r (0 < r < 1). Thus the limit result for r → 1 is trivial in this case. But observe that the non-cooperative 'implementation' of the Shapley–Shubik index of the voting rule W (or, equivalently, of the Shapley value of the associated simple game vW) is different from previous ones. In this model Sh(W) represents an expectation in a precise sense, in which no player (unless the rule is a dictatorship) has a chance of getting the whole cake, although the proposer would benefit (decreasingly as r gets bigger) from this role. Observe also that when r → 0, in the limit the proposer will have the whole cake, though the ex ante expectations are the same.
In other words, for r → 0, in the limit we have a reinterpretation of the original Shapley model applied to the simple game associated with the voting rule.

(iii) Theorem 35-(ii) is the non-cooperative counterpart of the fact pointed out in Section 4.3.2 that Nash^{Sh(W)}(B) = Sh(W) when B is the normalized TU bargaining problem. In cooperative terms it was emphasized there that the relevant point is not this particular case, but the fact that Sh(W) appears in (56) setting the bargaining weights for all B, thus with a new meaning: the bargaining power that the voting rule confers on the players. Now in this non-cooperative model this interpretation is corroborated and clarified: this is so (in the limit for r → 1) for a specific and particularly simple protocol.

4.4.3 Discussion

Theorems 34 and 35 provide a non-cooperative interpretation of formulae (54) and (56), originally obtained from a cooperative-axiomatic approach. Nevertheless the non-cooperative model admits many variations that may be worth investigating. Here are some of the possible lines of further research. As briefly commented in Section 4.3.1, in our model of a bargaining committee the status quo is a reference point that at the same time sets a level of utility below which no player can be forced to accept. Even if such a limit exists, it is sometimes below the status quo, so that players can be made worse off within certain limits if forced into this situation by a winning coalition. Thus a richer model would include two different points: the initial starting point or status quo and a vector of minimal admissible payoffs. Another way of enriching the model is by admitting partial agreements. That is, in our models the outcome is either general consensus or breakdown: why not admit the possibility of partial consensus even if general consensus is sought?65 Finally, in this model all symmetric rules appear as equivalent and yield the Nash bargaining solution.
But intuition suggests that the difficulty of reaching agreements is not the same under a unanimity rule and under a simple majority. A model accounting for this seems desirable.

4.5 Egalitarianism and utilitarianism in a bargaining committee

In Section 3.7 we addressed the question of the voting rule that best implements the egalitarian and utilitarian principles in a take-it-or-leave-it committee. To this end, utilities were introduced in the model in a very simple way assuming a strong degree of symmetry. Then it was seen that any symmetric rule implements the egalitarian principle, and of those rules it is the simple majority that best implements the utilitarian principle under certain conditions. Unlike the case of a take-it-or-leave-it committee, in a bargaining committee the preference profile, given in utility terms, is one of the ingredients of the model. Moreover, this element, B = (D, d), separated from the voting rule, is precisely the only ingredient in a classical bargaining problem and, as is well known, in such an environment utilitarianism and egalitarianism conflict.

65 The closest model to the (p, r)-protocols is that of Rubinstein [75], of which it may be considered an extension. In [28] a bargaining model is provided in the NTU framework in which consensus is sought and there is a risk of breakdown. Other interesting non-cooperative models are [3, 7, 57].

Given an n-person bargaining problem B = (D, d), the egalitarian optimum is at the point in D for which the gains of utility with respect to the status quo d are equal for all players and those gains are maximal: that is, the point

d + μ̄1, where μ̄ := max{μ : d + μ1 ∈ D} (64)

and 1 = (1, . . . , 1) ∈ R^N. By contrast, the utilitarian optimum would be reached at the feasible point for which the sum of the gains is maximal, that is, at

arg max_{x∈Dd} Σ_{i∈N} (xi − di). (65)

In general these two points are different, and both depend on D and d.
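For a concrete two-person illustration (the feasible set below is an assumption chosen for the example, not taken from the text): let d = (0, 0) and Dd = {x ≥ 0 : x1 + 2x2 ≤ 2}. The egalitarian optimum (64) is (2/3, 2/3), the utilitarian optimum (65) is the corner (2, 0), and the Nash bargaining solution, maximizing the product x1·x2, is (1, 1/2): three different points. A sketch checking this numerically:

```python
# Sketch: egalitarian (64), utilitarian (65) and Nash points for the
# illustrative 2-person problem D_d = {x >= 0 : x1 + 2*x2 <= 2}, d = 0.

STEPS = 200_000

# Egalitarian optimum: largest mu with (mu, mu) feasible -> mu + 2*mu = 2.
mu = 2 / 3
assert abs(mu + 2 * mu - 2) < 1e-12           # (2/3, 2/3) is on the frontier

best_util, best_pt = -1.0, None               # utilitarian: maximize x1 + x2
best_nash, nash_pt = -1.0, None               # Nash: maximize x1 * x2
for k in range(STEPS + 1):
    x2 = k / STEPS                            # x2 ranges over [0, 1]
    x1 = 2 - 2 * x2                           # frontier of the feasible set
    if x1 + x2 > best_util:
        best_util, best_pt = x1 + x2, (x1, x2)
    if x1 * x2 > best_nash:
        best_nash, nash_pt = x1 * x2, (x1, x2)

assert best_pt == (2.0, 0.0)                  # utilitarian corner (2, 0)
assert abs(nash_pt[0] - 1) < 1e-3 and abs(nash_pt[1] - 0.5) < 1e-3
```

The three optima disagree because the two players' utilities are scaled differently along the frontier; this is exactly the tension the λ-weights in the next paragraph are designed to reconcile.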
Therefore the idea of looking for the voting rule that best implements either principle independently of the preference profile does not make sense, nor does it make sense to look for such rules for each preference profile. In fact, as pointed out by Shapley [77], the Nash bargaining solution can be seen as a compromise between these two principles in the following sense. A compromise between the different points given by (64) and (65) can be this: find a system of weights λ = (λi)i∈N ∈ R^N_+ such that the following two problems have the same solution. First, find the point d + μ̄λ such that μ̄ = max{μ : d + μλ ∈ D}, and, second, find the point arg max_{x∈Dd} Σ_{i∈N} λi(xi − di). As Shapley [77] points out, such a system of weights (λi)i∈N for which the solution to both problems is the same does exist, and it turns out that for these weights the common solution is given by the Nash bargaining solution, that is, by Nash(B). Therefore if we accept this compromise between the egalitarian and utilitarian principles as a 'fair' deal, and we accept (54) as the normative term of reference (supported by Theorems 29 and 34) of a rational agreement in a bargaining committee (B, W), then any symmetric voting rule implements such a compromise, because in this case all components of ϕ(W) in (54) are equal, so that it yields Nash(B).

4.6 The neutral voting rule in a committee of representatives

Now we turn our attention to the normative issue of the choice of voting rule in a committee of representatives in which each member acts on behalf of a group of a different size. In Section 3.8 we addressed this issue for the case of a take-it-or-leave-it committee. We now address the case in which the committee of representatives acts as a bargaining committee. As we will see, the conclusions are different. Assume that each member i of a bargaining committee of n members, labelled by N, represents a group Mi of size mi.
Let us assume that these groups are disjoint, so that, if M = ∪_{i∈N} Mi, the cardinality of M is m = Σ_{i∈N} mi. Denote by M the partition M = {M1, M2, . . . , Mn}. It seems intuitively clear that if the groups are of different sizes a symmetric voting rule is not adequate for such a committee, at least if a principle of equal representation (whatever this might mean in this context) is to be implemented. This raises the issue of the choice of the 'most adequate' voting rule under these conditions. The main difficulty in providing an answer is to specify precisely what is meant by 'adequate', 'right', or 'fair'. To begin with, 'adequate', 'right', or 'fair' in what sense and from which or whose point of view? It seems clear that it should be so from the point of view of the people represented. The basic idea, which we further specify and justify presently, is this: A voting rule is 'fair' if any individual of any group is indifferent between bargaining directly with the other people in M (assuming such 'mass bargaining' were possible and yielded the M-person Nash bargaining solution), and leaving bargaining in the hands of a representative picked arbitrarily from the group. Even if this sounds Utopian (as indeed it is in general), we will show that it is implementable in a precise sense if a certain level of symmetry (not uniformity!) of preferences within each group is assumed. In general, a bargaining committee of representatives will negotiate about different issues over time under the same voting rule. In each case, depending on the particular issue, a different configuration of preferences will emerge in the population represented by the members of the committee. Thus it does not make sense to make the choice of the voting rule dependent on the preference profile, nor does it make sense to assume unanimous preferences within every constituency.
On the other hand, if there is no relationship at all between the preferences of the individuals within each group it is not clear on what normative grounds the choice of a voting rule for the committee of representatives should be founded66. In order to find an answer we assume that the configuration of preferences in the population represented is symmetric within each group in the following sense. Assume that B = (D, d) (d ∈ D ⊆ R^M) is the m-person bargaining problem representing the configuration of vNM preferences of the m individuals in M about the issue at stake in the committee. We say that a permutation π : M → M respects M if for all i ∈ N, π(Mi) = Mi. We say that B is M-symmetric if for any permutation π : M → M that respects M, it holds that πd = d, and for all x ∈ D, πx ∈ D. In other words, B is M-symmetric if for any group Mi the disagreement payoff is the same for all its members (dk = dl, for all k, l ∈ Mi), and, with the payoffs of the other players in M\Mi fixed in any way, the set of feasible payoffs for the players in that group Mi is symmetric. Notice that this does not mean at all that all players within each group have the same preferences. In fact it includes all symmetric situations ranging from unanimous preferences to the 'zero-sum' case of strict competition within each group. But note that if the payoffs of all the players in M\Mi are fixed, the outcome of bargaining within Mi (under unanimity and assuming anonymity) would yield the same utility level for all players in Mi. Thus M-symmetry in B entails the following consequences. Let M, N and the partition M be as above, and let B = (D, d) be M-symmetric. Assuming as a term of reference that the players in M negotiate directly under unanimity, according to Nash's bargaining model the outcome would be Nash(B). Or, in other terms, Nash(B) can be considered as a normative term of reference representing an egalitarian-utilitarian compromise as discussed in Section 4.5.
As B is M-symmetric, it must be that

Nashk(B) = Nashl(B)   (∀i ∈ N, ∀k, l ∈ Mi).

66 An extreme example can illustrate this: Assume all individuals in group Mi have identical preferences, and all in group Mj but individual k ∈ Mj also have the same preferences (but different from those in Mi), while k has preferences identical to those in Mi. In a case like this individual k would prefer representative i to be more powerful than his/her own representative j.

Namely, in each group all players would receive the same payoff according to Nash's bargaining solution. Therefore the optimal solution of the maximization problem

arg max_{x∈Dd} Π_{l∈M} (xl − dl)

that yields Nash(B) coincides with the optimal solution of the same maximization problem when the set of feasible payoff vectors is constrained to yield the same payoff for any two players in the same group. Formally, denote by B^N the N-bargaining problem B^N = (D^N, d^N), where

D^N := {(x1, . . . , xn) ∈ R^N : (x1, . . . , x1, . . . , xn, . . . , xn) ∈ D},

with each xi repeated mi times, and by d^N the vector in R^N whose i-component is, for each i ∈ N, equal to dk (the same for all k ∈ Mi). Namely, B^N is the bargaining problem that would result by taking one individual from each constituency as a representative for bargaining on its behalf, under the commitment of later bargaining symmetrically within that constituency after the level of utility of the other constituencies has been settled. We have that, for all i ∈ N and all k ∈ Mi,

Nashk(B) = arg_k max_{x∈Dd} Π_{l∈M} (xl − dl) = arg_i max_{x∈D^N_{d^N}} Π_{j∈N} (xj − dj)^{mj} = Nash^m_i(B^N),

where m = (m1, . . . , mn).
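The reduction from B to B^N can be checked on a small M-symmetric example (the TU feasible set and group sizes below are assumptions for the illustration): with M = {M1, M2}, m = (2, 1), D = {x ∈ R³ : x1 + x2 + x3 ≤ 1} and d = 0, the full three-person Nash solution gives every individual 1/3, and the m-weighted Nash solution of the reduced problem D^N = {(x1, x2) : 2x1 + x2 ≤ 1} gives the representative of the two-person group that same per-member level 1/3.

```python
# Sketch: Nash_k(B) = Nash^m_i(B^N) for an assumed M-symmetric TU example.
# Full problem: 3 individuals, D = {x : x1+x2+x3 <= 1}, d = 0; groups (2, 1).

STEPS = 300_000
m1, m2 = 2, 1

# Full symmetric Nash solution: maximize x1*x2*x3 on the frontier -> all 1/3.
full_share = 1 / 3

# Reduced problem D^N = {(x1, x2): m1*x1 + m2*x2 <= 1}; maximize the
# m-weighted Nash product x1^m1 * x2^m2 on the frontier x2 = 1 - m1*x1.
best_val, best_x1 = -1.0, None
for k in range(1, STEPS):
    x1 = k / STEPS * (1 / m1)            # x1 ranges over (0, 1/m1)
    x2 = 1 - m1 * x1
    val = (x1 ** m1) * (x2 ** m2)
    if val > best_val:
        best_val, best_x1 = val, x1

# The representative of the size-2 group secures the common per-member
# utility level of the full three-person solution.
assert abs(best_x1 - full_share) < 1e-3
print(best_x1)
```

Raising the representative's term to the power mj in the Nash product is exactly what "bargaining power proportional to group size" means here: the group of two counts twice in the weighted product, so its representative ends up at the per-member level of the direct m-person bargain.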
That is to say, for the configuration of preferences or M-bargaining problem B, a player k in M would obtain the same utility level by direct (m-player unanimous) bargaining that a representative would obtain by bargaining on behalf of him/her (and of all the players in the same group) under the configuration of preferences B^N if each representative were endowed with a bargaining power proportional to the size of the group. The problem then is how to 'implement' such a weighted Nash bargaining solution. In other words, and more precisely, how to implement a bargaining environment that confers the right bargaining power on each representative. In view of Theorem 29, if a 'power index' (i.e. a map φ : VR^N → R^N that is efficient, anonymous and ignores null players) is considered to be the right assessment of bargaining power in a committee, and for some N-voting rule W it holds that

φ_i(W)/φ_j(W) = m_i/m_j   (∀i, j ∈ N),   (66)

then this rule would exactly implement such an environment. In particular, if such an index is the Shapley–Shubik index (Theorem 31), an optimal voting rule would be one for which

Sh_i(W)/Sh_j(W) = m_i/m_j   (∀i, j ∈ N).   (67)

In view of the underlying interpretation of 'fairness' as a compromise between egalitarianism and utilitarianism, it could also be adequate to call it 'neutrality', and to call a voting rule satisfying it 'neutral'. Then we conclude that if: (i) the term 'bargaining power' is interpreted in the precise game-theoretic sense formerly specified (i.e. bargaining weights); (ii) (54) is accepted as a reasonable expectation in a bargaining committee, supported by Theorems 29 and 34; and (iii) 'fairness' or 'neutrality' is understood as the egalitarian–utilitarian compromise given by the Nash bargaining solution; then the above discussion and the formulae (66) and (67) that it yields can be summarized in the following theorem.
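As an illustration of a condition like (67) (hypothetical rule and numbers, not from the text), the Shapley–Shubik index of a small weighted rule can be computed by counting pivotal positions over all player orderings:

```python
from itertools import permutations
from fractions import Fraction

def shapley_shubik(n, is_winning):
    """Share of the n! player orderings in which each player is pivotal,
    i.e. turns the set of its predecessors into a winning configuration."""
    counts = [0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        coalition = set()
        for player in order:
            coalition.add(player)
            if is_winning(coalition):
                counts[player] += 1
                break
    return [Fraction(c, len(orders)) for c in counts]

# Hypothetical 4-seat weighted rule: weights (2, 1, 1, 1), quota 3.
w, q = [2, 1, 1, 1], 3
sh = shapley_shubik(4, lambda S: sum(w[i] for i in S) >= q)
print(sh)  # [Fraction(1, 2), Fraction(1, 6), Fraction(1, 6), Fraction(1, 6)]

# This rule would be 'neutral' for group sizes proportional to (3, 1, 1, 1):
m = (3, 1, 1, 1)
assert all(sh[i] * m[j] == sh[j] * m[i] for i in range(4) for j in range(4))
```

Note that seat weights and the index need not be proportional (here a 2:1 weight ratio yields a 3:1 power ratio), which is why a condition such as (67) is stated on the index rather than on the weights.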
Theorem 36 A fair or neutral voting rule in a bargaining committee of representatives is one that gives each member a bargaining power proportional to the size of the group that he/she represents.

From the point of view of application there are still some issues to be resolved. There is the question of the 'right' power index (i.e. the right φ(W) in formula (54)) for assessing the bargaining power that the voting rule confers on each member of the committee. As discussed in Sections 4.3 and 4.4, this issue cannot be settled unless additional assumptions about the bargaining protocol in the committee are made. Nevertheless, as we have seen in Section 4.4.2, the Shapley–Shubik index emerges associated with a particularly simple bargaining protocol and consequently, in the absence of further information, it can be taken as a term of reference for a normative assessment. In any case, it is worth stressing the clear message of Theorem 36, somewhat consistent with intuition and different from previous recommendations.67

4.7 Exercises

1. Let Φ : B × VR^N → R^N be a solution satisfying the conditions of feasibility and individual rationality, that is, such that Φ(B, W) ∈ D_d. Prove that if Φ also satisfies Independence of Irrelevant Alternatives, then for any two problems B = (D, d) and B′ = (D′, d′) such that d = d′ and D_d = D′_d′, and any rule W, it holds that Φ(B, W) = Φ(B′, W).

2. Consider a two-person bargaining committee (B, W), where B = (D, d), d = (0, 0), and D is the comprehensive hull of D_d, given by D_d = {(x_1, x_2) ∈ R²₊ : x_1² + x_2² ≤ 1}. Discuss the possible solutions under the conditions of Theorem 29 for the possible voting rules (dictatorship or unanimity).

3. Consider a three-person bargaining committee (B, W), where B = (D, d), d = (0, 0, 0) and D = {(x_1, x_2, x_3) ∈ R³ : x_1 + x_2 + x_3 ≤ 3}. Discuss the possible solutions under the conditions of Theorem 29 in the following cases. (a) Player 1 is a dictator.
(b) They decide by simple majority. (c) There is an oligarchy of players 1 and 2.

4. Let W = {{1, 2}, {1, 3}, {1, 2, 3}}. Calculate the solution for a bargaining committee (B, W) according to Theorem 31, in the following cases: (a) B is as in the preceding exercise. (b) B = (D, d), d = (0, 0, 0), and D is the comprehensive hull of D_d, given by D_d = {(x_1, x_2, x_3) ∈ R³₊ : x_1² + x_2² + x_3² ≤ 9}. (c) Compare the proportion between the solution payoffs of the vetoer (player 1) and that of any of the other two in either case.

5. Let B = (D, d) be as in Exercise 3. Determine the stationary subgame perfect equilibrium strategy profile for a (p, r)-protocol in which each player has probability 1/3 of being the proposer: that is, p = (1/3, 1/3, 1/3). (a) If r = 1/2. (b) If r = 3/4.

6. Let (B, W) be a three-person bargaining committee, such that B = (D, d) is as in Exercise 3 and W = {{1, 2}, {1, 3}, {1, 2, 3}}. Determine the stationary subgame perfect equilibrium strategy profile for the Shapley–Shubik protocol combined with the resulting (p, r)-protocol for r = 3/4.

7. Let W be the four-person voting rule whose minimal winning vote configurations are M(W) = {{1, 2}, {1, 3}, {2, 3, 4}}. If W is the voting rule in a bargaining committee of representatives, and the total number of people represented is 12,000, what should the number of people represented by each member be for W to be the 'neutral rule' (according to Theorem 36 and assuming the Shapley–Shubik index is the adequate measure of bargaining power)?

67 In [10] a curious antecedent of this recommendation is given: 'In Franklin v. Kraus (1973), the New York Court of Appeals again approved a weighted voting system that made the Banzhaf power index of each representative proportional to his district's size.'

Application to the European Union

In this chapter we apply the models developed in the preceding chapters to the European Council of Ministers.
We submit the different voting rules that are or have been used in the Council, as well as some others that have been proposed, to cross-examination from the different points of view provided by the two basic models discussed in Chapters 3 and 4. The different voting rules used in the Council are described in Section 5.1. In Section 5.2 we apply the model of a take-it-or-leave-it committee. In Section 5.2.1 we examine some relevant probabilities based on the a priori model of voting behaviour for the different rules. Then, in Section 5.2.2, we incorporate utilities into the model and assess the different rules in the Council from the egalitarian and utilitarian points of view, as a committee of states and as a committee of representatives. Finally, in Section 5.3 we apply the bargaining committee model. The reader will not find any categorical normative recommendations along the lines of 'this is the best rule for the Council' or 'this is the rule that should be used by the Council'. None of the models applied captures all the complexity of the real situation. But they all help to understand it, as this application gives some insights into how the rules may affect the working of the Council. On the other hand, applying concepts and theoretical constructions also helps gain a deeper understanding of them.

5.1 Voting rules in the European Council

The rules that we consider here are rules that have been used in the Council at some time, more precisely between 1958 and 2005, the number of members in the Council increasing with the enlargements over this period.
In 2005, the 25 members in decreasing order of population size were (with their abbreviated forms): Germany (Ge), United Kingdom (UK), France (Fr), Italy (It), Spain (Sp), Poland (Pl), Netherlands (Ne), Greece (Gr), Czech Republic (CR), Belgium (Be), Hungary (Hu), Portugal (Pr), Sweden (Sw), Austria (Au), Slovakia (Sk), Denmark (De), Finland (Fi), Ireland (Ir), Lithuania (Li), Latvia (La), Slovenia (Sn), Estonia (Es), Cyprus (Cy), Luxembourg (Lu) and Malta (Ma). We denote the set of Council members by N_n, where the subscript n refers to the number of states. During the period considered we have, always in decreasing population order:

From 1958 to 1972: N_6 = {Ge, Fr, It, Ne, Be, Lu};
from 1973 to 1980: N_9 = {Ge, UK, Fr, It, Ne, Be, De, Ir, Lu};
from 1981 to 1985: N_10 = {Ge, UK, Fr, It, Ne, Gr, Be, De, Ir, Lu};
from 1986 to 1994: N_12 = {Ge, UK, Fr, It, Sp, Ne, Gr, Be, Pr, De, Ir, Lu};
from 1995 to 2003: N_15 = {Ge, UK, Fr, It, Sp, Ne, Gr, Be, Pr, Sw, Au, De, Fi, Ir, Lu};
and in 2004 and 2005: N_25 = {Ge, UK, Fr, It, Sp, Pl, Ne, Gr, CR, Be, Hu, Pr, Sw, Au, Sk, De, Fi, Ir, Li, La, Sn, Es, Cy, Lu, Ma}.

Three main rules are used in the Council: simple majority, unanimity, and the so-called qualified majority. The simple majority is the default voting rule unless the Treaty provides otherwise, which it usually does. In practice the Council decides by simple majority only in a limited number of mainly procedural matters. Unanimity is used for quasi-constitutional or politically sensitive matters, and a qualified majority is used for all other cases. Successive modifications of the Treaty have resulted in an extension of the use of qualified majority voting in comparison to unanimity voting. The simple majority (W_n^SM) and unanimity (W_n^U) are symmetric rules that were introduced in 1.3.2 for n seats.
Note that we always have W_n^U ⊂ W_n^SM, that is, it is always in principle more difficult to pass proposals under unanimity than under a simple majority. The qualified majority (W_n^QM) is a weighted majority used up to the enlargement to the N_25 Council. For n = 6, 9, 10, 12, and 15 we have

W_n^QM = {S ⊆ N_n : Σ_{i∈S} w_i(N_n) ≥ Q(N_n)},

with weights and quotas given by

w(N_6) = (4, 4, 4, 2, 2, 1), Q(N_6) = 12,
w(N_9) = (10, 10, 10, 10, 5, 5, 3, 3, 2), Q(N_9) = 41,
w(N_10) = (10, 10, 10, 10, 5, 5, 5, 3, 3, 2), Q(N_10) = 45,
w(N_12) = (10, 10, 10, 10, 8, 5, 5, 5, 5, 3, 3, 2), Q(N_12) = 54,
w(N_15) = (10, 10, 10, 10, 8, 5, 5, 5, 5, 4, 4, 3, 3, 3, 2), Q(N_15) = 62.

Galloway ([25], p. 63) justifies the choice of the weights and quotas as follows: 'The system was constructed so as to ensure a certain relationship between member states based on a system of "groups" or "clusters" of large, medium and small member states, with states in each cluster having an identical number of votes. (…) • Apart from an adjustment in voting weights to accommodate new categories of member states at the first enlargement in 1973, the system has undergone straightforward extrapolation at each successive enlargement. • The system of "clusters" was maintained. With each successive enlargement, new member states were categorized in accordance with the same principle, although additional categories had to be inserted into the system as required on the basis of member states' size (e.g. Denmark and Ireland were allocated three votes each, Spain eight votes and Austria and Sweden four votes each).'

Remarks. (i) Luxembourg had a null seat in W_6^QM. Thus Luxembourg's probability of being decisive was zero in the N_6 Council (whatever the voting behaviour of the others). (ii) Luxembourg's seat, like Denmark's and Ireland's seats, are symmetric in W_10^QM, in spite of their having different numbers of votes: 2 for Luxembourg, and 3 each for Ireland and Denmark.
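Remark (i) can be verified mechanically: a seat is null when it is never decisive. A brute-force sketch (illustrative code, using the N_6 weights and quota above):

```python
from itertools import combinations

# 1958-72 Council: weights in decreasing population order, quota 12.
members = ["Ge", "Fr", "It", "Ne", "Be", "Lu"]
w = dict(zip(members, [4, 4, 4, 2, 2, 1]))
Q = 12

def winning(S):
    return sum(w[i] for i in S) >= Q

def is_null(player):
    """A player is null if adding it to a losing vote configuration
    never turns that configuration into a winning one."""
    others = [i for i in members if i != player]
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            if not winning(S) and winning(set(S) | {player}):
                return False
    return True

print(is_null("Lu"))  # True: Luxembourg could never be decisive
print(is_null("Be"))  # False
```

The reason is arithmetic: no subset of the other weights {4, 4, 4, 2, 2} sums to exactly 11, so Luxembourg's single vote can never bridge the gap to the quota of 12.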
These three states thus had the same probabilities of being decisive or successful in the N_10 Council for any anonymous probability distribution. (iii) We always have W_n^U ⊂ W_n^QM. As the simple majority obviously contains a larger number of winning configurations than the qualified majority, there may also exist some inclusion between these rules, but this is not always true. We only have W_n^QM ⊂ W_n^SM for n = 9, 12, and 15. The inclusion does not hold for the N_6 nor the N_10 Councils.68

The Treaty of Nice has substantially modified the qualified majority rule for the N_25 Council in two ways. First, the system of weights was redesigned. Second, an additional clause was added: to be adopted, a proposal needs the support of a majority of member states. Thus the qualified majority is no longer a weighted majority, but a double (weighted) majority. We refer to this rule as 'the Nice rule' (W_25^Ni):

W_25^Ni = {S ⊆ N_25 : Σ_{i∈S} w_i(N_25) ≥ 232 and s ≥ 13},

where w(N_25) = (29, 29, 29, 29, 27, 27, 13, 12, 12, 12, 12, 12, 10, 10, 7, 7, 7, 7, 7, 4, 4, 4, 4, 4, 3).

68 Consider for example S = {Ge, It, Fr}. We have S ∈ W_6^QM but S ∉ W_6^SM.

The choice of this rule was the result of difficult and painful bargaining among member states in December 2000. It soon came to be considered as not very satisfactory. In 2001, a Convention was launched to reform the Nice Treaty and to consider the possibility of writing a Constitution. The Convention finished its work in July 2003, and came up with a substitute to the Nice rule.
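A small sketch of the Nice rule as defined above (illustrative; member indices 0 to 24 in decreasing population order) shows that the s ≥ 13 clause can bind even when the weight quota is met:

```python
# Nice rule (2004-05 Council): weighted quota 232 plus at least 13 states.
weights = [29, 29, 29, 29, 27, 27, 13, 12, 12, 12, 12, 12,
           10, 10, 7, 7, 7, 7, 7, 4, 4, 4, 4, 4, 3]  # decreasing population order

def nice_winning(S):
    """S is a set of member indices (0 = Germany ... 24 = Malta)."""
    return sum(weights[i] for i in S) >= 232 and len(S) >= 13

twelve_largest = set(range(12))
print(sum(weights[i] for i in twelve_largest))  # 243: clears the weight quota...
print(nice_winning(twelve_largest))             # False: ...but fails s >= 13

thirteen_largest = set(range(13))
print(nice_winning(thirteen_largest))           # True
```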
This alternative rule, which we refer to as 'the Convention rule' (W_n^Cv), is described as follows (article 24): 'When the European Council of the Council of Ministers takes decisions by qualified majority, such a majority shall consist of the majority of Member States, representing at least three fifths of the population of the Union.' For the N_25 Council, we have:

W_25^Cv = {S ⊆ N_25 : Σ_{i∈S} m_i ≥ 0.6m and s ≥ 13},

where m_i denotes state i's population and m the total population in the EU. In 2005, these figures were:69 82,500,800 (Ge), 60,561,200 (UK), 60,034,500 (Fr), 58,462,400 (It), 43,038,000 (Sp), 38,173,800 (Pl), 16,305,500 (Ne), 11,075,700 (Gr), 10,529,300 (CR), 10,445,900 (Be), 10,220,600 (Hu), 10,097,500 (Pr), 9,011,400 (Sw), 8,206,500 (Au), 5,411,400 (Sk), 5,384,800 (De), 5,236,600 (Fi), 4,109,200 (Ir), 3,425,300 (Li), 2,306,400 (La), 1,997,600 (Sn), 1,347,000 (Es), 749,200 (Cy), 455,000 (Lu), 402,700 (Ma).

An Intergovernmental Conference then took place between October 2003 and June 2004 and concluded its work with the signing of the Constitution in October 2004. In article I-25 of the Constitutional Treaty a winning configuration is defined as containing 'at least 55% of the members of the Council, comprising at least fifteen of them and representing Member States comprising at least 65% of the population of the Union. A blocking minority must include at least four Council members, failing which the qualified majority shall be deemed attained.' We will refer to this rule as 'the Constitution rule' (W_n^Cs). For the N_25 Council, we have70

W_25^Cs = {S ⊆ N_25 : (Σ_{i∈S} m_i ≥ 0.65m and s ≥ 15) or (s ≥ 22)}.

It was decided during the Intergovernmental Conference that the weights chosen in Nice would be used until 2009. By then, if the Constitution is ratified, that rule will be replaced by the Constitution rule.

69 EUROSTAT (the statistical office of the European Commission), 2005.
But the fate of the Constitution is now unclear after the 'no' votes by the Dutch and French to the Constitutional Treaty.

Remarks. (i) The main innovation of the Convention rule (and afterwards of the Constitution rule) is that they depend only on population figures and percentages of member states. They can thus be extended mechanically at each new enlargement. (ii) For the N_25 Council, neither the Convention rule nor the Constitution rule has been used. Still, as they competed to become the rule for the Council, they were widely discussed in the media. As such, it seems that they deserve a formal study.71 They are also useful as terms of comparison for the evaluation of the Nice rule.72 (iii) In the Constitution rule, any member may request verification that the Member States constituting the qualified majority represent at least 62% of the total population of the Union. This clause, sometimes called the population safety net, will not be considered here. (iv) By construction, the sets of winning configurations of the Nice rule, the Constitution rule and the Convention rule are all included in the list of winning configurations of the simple majority. We have

W_25^U ⊂ W_25^EU ⊂ W_25^SM   for any W_25^EU ∈ {W_25^Ni, W_25^Cs, W_25^Cv}.

No inclusion relation holds between the Constitution, the Convention and the Nice rules (see Exercise 2).

70 As 55% of 25 is 13.75, the clause concerning 55% of member states is inoperative, as at least 15 member states are needed.

71 Moreover this is not the first time that a system of double majority was proposed: prior to the Treaty of Nice the Commission proposed a system close to the Convention rule (with relative quotas of both population and members being set to 50%). See [41].

72 For other studies of the N_25 Council, see for instance [23] or [36].

The models developed in Chapters 3 and 4 can be used to compare the rules presented above (we refer to any of these rules as W_n^EU).
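Both population-based rules can be written directly from the 2005 figures. The following sketch (illustrative) also exhibits one coalition witnessing the 'no inclusion' remark: the thirteen most populated states form a winning configuration under the Convention rule but not under the Constitution rule.

```python
# 2005 EUROSTAT populations in decreasing order (Ge ... Ma), from the text.
pop = [82_500_800, 60_561_200, 60_034_500, 58_462_400, 43_038_000, 38_173_800,
       16_305_500, 11_075_700, 10_529_300, 10_445_900, 10_220_600, 10_097_500,
       9_011_400, 8_206_500, 5_411_400, 5_384_800, 5_236_600, 4_109_200,
       3_425_300, 2_306_400, 1_997_600, 1_347_000, 749_200, 455_000, 402_700]
m = sum(pop)  # total EU population

def convention(S):
    # majority of states representing at least three fifths of the population
    return sum(pop[i] for i in S) >= 0.6 * m and len(S) >= 13

def constitution(S):
    # 65% of population and at least 15 states, unless fewer than 4 oppose
    return (sum(pop[i] for i in S) >= 0.65 * m and len(S) >= 15) or len(S) >= 22

top13 = set(range(13))
print(convention(top13), constitution(top13))
# True False: a configuration in W^Cv but not in W^Cs, so neither rule
# contains the other.
```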
Special emphasis is given to the comparison of the Nice rule, the Convention rule and the Constitution rule, as they were competitors for use as the new qualified majority.

5.2 The Council as a take-it-or-leave-it committee

In practice, the Council usually makes a formal vote when it has practically reached a unanimous agreement.73 It seems clear that the effective work of the Council is closer to what has been described as the environment of a bargaining committee than to that of a take-it-or-leave-it committee. Thus the bargaining committee model seems more adequate for the Council. Still, with the successive enlargements informal bargaining may become more and more difficult. With larger numbers of member states, the need for an effective vote may perhaps arise more frequently. In any case, the take-it-or-leave-it committee model, or more precisely some of the relevant probabilistic measures based on this model, provide an interesting assessment of various issues related to the voting rules used at different times in the Council. In the take-it-or-leave-it model, some relations hold independently of voting behaviour. This happens when there is an inclusion between the sets of winning configurations of two rules. In this case there are some conclusions that do not depend on the voting behaviour. To complete the picture and get numerical values, a probability distribution has to be chosen. In Chapter 3 we have advocated in favour of p∗ for a normative a priori evaluation of voting rules.74 But one should always keep in mind that assessments and recommendations based on p∗ have no predictive power.75 With the voting rule, and the a priori distribution of probability p∗, the take-it-or-leave-it model is complete. We first apply the criteria based on probabilities (ease of passing proposals, probability of success and its conditional variants), then the criteria based on utilities (egalitarianism and utilitarianism).

73 The Council publishes a monthly document listing legislative and non-legislative acts of the Council including the results of votes since 1999. These results can be found at www.consilium.europa.eu

74 The specific context of the European decision-making process may justify other normative assumptions. For instance, any proposal that reaches the Council can be expected to have the support of more than half the member states (otherwise it would have been blocked in the Commission). A vote configuration with a number of 'yes'-voters smaller than half the members can be assumed to have a zero probability. Then, setting an equal probability on all other vote configurations, we have the probability distribution

p̃_n(S) := 1/r if s > n/2, and 0 otherwise,

where r represents the number of vote configurations where s > n/2. (See [37].)

5.2.1 Criteria based on probabilities

Using the inclusions between the sets of winning configurations for the EU rules (that is, (68), (69), (70) and (71)), we obtain the following relations that do not depend on the probability distribution p_n. For n = 9, 12, 15, and for any i and any p_n:

α(W_n^U, p_n) ≤ α(W_n^QM, p_n) ≤ α(W_n^SM, p_n),
Ω_i^+(W_n^U, p_n) ≤ Ω_i^+(W_n^QM, p_n) ≤ Ω_i^+(W_n^SM, p_n),
Ω_i^-(W_n^U, p_n) ≥ Ω_i^-(W_n^QM, p_n) ≥ Ω_i^-(W_n^SM, p_n).

For n = 6, 10, and for any i and any p_n:

α(W_n^U, p_n) ≤ min{α(W_n^QM, p_n), α(W_n^SM, p_n)},
Ω_i^+(W_n^U, p_n) ≤ min{Ω_i^+(W_n^QM, p_n), Ω_i^+(W_n^SM, p_n)},
Ω_i^-(W_n^U, p_n) ≥ max{Ω_i^-(W_n^QM, p_n), Ω_i^-(W_n^SM, p_n)}.
For the N_25 Council, and any W_25^EU ∈ {W_25^Ni, W_25^Cs, W_25^Cv}, we have, for any i and any p_25:

α(W_25^U, p_25) ≤ α(W_25^EU, p_25) ≤ α(W_25^SM, p_25),
Ω_i^+(W_25^U, p_25) ≤ Ω_i^+(W_25^EU, p_25) ≤ Ω_i^+(W_25^SM, p_25),
Ω_i^-(W_25^U, p_25) ≥ Ω_i^-(W_25^EU, p_25) ≥ Ω_i^-(W_25^SM, p_25).

Also note that unanimity gives a veto right to any state:

Ω_i^-(W_n^U, p_n) = 1.

This is certainly why unanimity is used for quasi-constitutional or politically sensitive matters: any state can be sure that it will not be forced to accept a decision that it does not favour. These results confirm the intuition, but do not permit us to compare the Nice rule, the Constitution rule and the Convention rule. The normative distribution of probability p∗_n permits us to complete the ranking.

A priori ease of passing proposals

For any voting rule W^EU considered in the Council, the results of computing the a priori ease of passing proposals, given by

α(W_n^EU, p∗) = Σ_{S∈W_n^EU} p∗(S),

are given in Table 5.1.

[Table 5.1. A priori ease of passing proposals: α(W_n^EU, p∗_n); numerical entries omitted.]

It is worth making some comments on these figures. Not surprisingly, for any number of member states, the a priori ease of passing proposals is always largest under the simple majority, and smallest under unanimity. The a priori ease of passing proposals is around 50% under the simple majority.76 At the other extreme, the ease of passing proposals under unanimity is very small, and is divided by 2 each time a new member is added to the Council: it is only 1.6% for the N_6 Council, and is negligible for the N_25 Council.77 Under a qualified majority, the a priori ease of passing proposals also decreases at each enlargement of the Council: it is around 21.9% in the N_6 Council and around 7.8% in the N_15 Council. The Nice rule accentuates this trend still further, while the other rules would reverse it. Under the Constitution rule the ease of passing proposals is around 10% (very close to what was obtained for the N_12 Council), while it would be 22.5% under the Convention rule (higher than in the initial N_6 Council). The ranking of these three rules from this point of view is thus

α(W_25^Ni, p∗_25) < α(W_25^Cs, p∗_25) < α(W_25^Cv, p∗_25).

75 For instance, the claim that the Council (under the N_27 Nice rule) 'is likely to become immobilized by the extreme difficulty of getting acts approved' ([23], p. 19) is based on the computation of the a priori ease of passing proposals: α(W, p∗). This risk is rightly dismissed by practitioners, arguing that votes are taken once member states have basically agreed on the proposal (confirming the appropriateness of the bargaining model). Nevertheless, such figures as α(W, p∗) and others are expressive in comparisons if used with care.

76 Recall that (see (35), in 3.8.1) the a priori ease of passing a proposal is exactly 50% if n is odd, and smaller if n is even (although it tends to 50% when n is large).

77 In Table 5.1 read 3E−8 as 3 × 10⁻⁸ = 0.00000003.

A priori integration and sovereignty indices

Ex ante success decomposes into two parts: positive success and negative success (see (17) in 3.4). A priori, i.e. for p = p∗, we have

Ω_i(W, p∗) = ½ Ω_i^+(W, p∗) + ½ Ω_i^-(W, p∗)

for any i ∈ N_n. In the European context, a priori success conditional on a positive vote, that is Ω_i^+(W_n^EU, p∗), can be interpreted as an 'a priori integration index'. Indeed, the more 'pro-integration' a state, the more sensitive it should in principle be to this form of success than to the other (i.e. Ω_i^-(W_n^EU, p∗)). The values of Ω_i^+(W_n^EU, p∗) are given in Table 5.2. The same can be said here as for the a priori ease of passing proposals.
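Under p∗ all 2^n vote configurations are equally likely, so α(W, p∗) = |W|/2^n can be computed by enumeration. A small sketch (illustrative) reproducing two of the N_6 figures quoted above, 1.6% under unanimity and 21.9% under the qualified majority:

```python
from itertools import combinations

def alpha(n, is_winning):
    """A priori ease of passing proposals: the share of the 2^n vote
    configurations that are winning, i.e. alpha(W, p*) = |W| / 2^n."""
    members = range(n)
    count = sum(
        1
        for r in range(n + 1)
        for S in combinations(members, r)
        if is_winning(set(S))
    )
    return count / 2 ** n

w6 = [4, 4, 4, 2, 2, 1]  # 1958-72 weights, quota 12

print(alpha(6, lambda S: len(S) == 6))                  # 0.015625, i.e. ~1.6% (unanimity)
print(alpha(6, lambda S: sum(w6[i] for i in S) >= 12))  # 0.21875, i.e. ~21.9% (qualified majority)
```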
For any number of states, for any state, the largest integration index is obtained under the simple majority (around 50%), and the smallest under unanimity. The integration indices steadily decrease over time under unanimity and the qualified majority (with the exception of Luxembourg in the enlargement from the N_9 Council to the N_10 Council78). The decrease is significant in magnitude. Under a qualified majority the probability for large states falls from 37.5% in N_6 to 13.4% in N_15, while Luxembourg's probability falls from 21.9% to 8.9%. The Nice rule reinforces this decrease still further, while the Constitution rule and the Convention rule would reverse the trend.

[Table 5.2. A priori integration indices: Ω_i^+(W_n^EU, p∗_n); entries omitted.]
The indices under the Convention rule would be larger than those obtained under the qualified majority for the N_9 Council. For any state i, we have

Ω_i^+(W_25^Ni, p∗_25) < Ω_i^+(W_25^Cs, p∗_25) < Ω_i^+(W_25^Cv, p∗_25).

Interestingly enough, the ranking given by the integration index is always identical for all states whatever their size. The Convention rule can be said to be the most advanced from the integrationist point of view, the Nice rule the least, with the Constitution rule as a compromise between them. Note that the integration index ranks rules in the same way as the a priori ease of passing proposals does.

Inversely, the more jealous a state is of its national sovereignty, the more concerned it is to avoid having a decision that it does not favour imposed upon it. The probability of a proposal being accepted given that a state i votes 'no' is given by

(1/(1 − γ_i(p))) Σ_{S∈W : i∉S} p(S).

As we have

(1/(1 − γ_i(p))) Σ_{S∈W : i∉S} p(S) = 1 − Ω_i^-(W, p),

the a priori success conditional on a negative vote, that is Ω_i^-(W_n^EU, p∗), can be interpreted as an 'a priori sovereignty index'. The more jealous of its national sovereignty a state is, the more it would prefer this form of success to the other. The values of Ω_i^-(W_n^EU, p∗) are given in Table 5.3. The comments to be made here are the opposite of those made for the integration index or the ease of passing proposals. For any number of states, for any state, the largest sovereignty index is obtained under unanimity, which gives a veto right to all states. The sovereignty index is smallest under the simple majority, with one exception: Luxembourg has a larger sovereignty index under the simple majority than under the qualified majority in the N_6 Council.

78 In the N_10 Council Luxembourg's, Ireland's and Denmark's seats are symmetric.
Under the qualified majority, sovereignty indices are quite large (more than 80%79) and they increase over time up to the last enlargement, with the exception of Denmark and Ireland from the N_9 Council to the N_10 Council. Sovereignty indices continue to increase under the Nice rule: large member states' sovereignty indices are around 99% (with the smallest state, Malta, having around 97%). These indices are very close to 1. In this sense, it can be said that, a priori, states almost have a veto right under the Nice rule. Again the Constitution rule and the Convention rule reverse the trend. Under the Constitution rule, the range of probabilities is between 79% (Malta) and 93% (Germany). Under the Convention rule, sovereignty indices would be smaller than under the qualified majority in the N_6 Council. For any state i, we have

Ω_i^-(W_25^Ni, p∗_25) > Ω_i^-(W_25^Cs, p∗_25) > Ω_i^-(W_25^Cv, p∗_25).

Thus, the ranking given by the sovereignty index is the same for all states (with the exception of the ranking between the simple and qualified majorities in the N_6 Council). This ranking is opposite to the one given by the a priori ease of passing proposals or the integration index.

[Table 5.3. A priori sovereignty indices: Ω_i^-(W_n^EU, p∗_n); entries omitted.]

79 These indices have to be compared with integration indices smaller than 40%.

A priori success

Any state's unconditional a priori success is given by the average of its integration index and its sovereignty index. The numerical results are given in Table 5.4. Any state's a priori success is around 50% under unanimity. It is larger under the simple majority, although it also tends to 50% when n is large. When the number of states is smaller than 25 we have

Ω_i(W_n^U, p∗_n) < Ω_i(W_n^QM, p∗_n) < Ω_i(W_n^SM, p∗_n)

for any state i, except Luxembourg in the N_6 Council, where we have

Ω_Lu(W_6^SM, p∗_6) > Ω_Lu(W_6^U, p∗_6) > Ω_Lu(W_6^QM, p∗_6).

Under a qualified majority the a priori success decreases over time.80 This trend is confirmed under the Nice rule, but reversed under the Convention rule. Under the Constitution rule there is no trend: the likelihood of success of medium states would decrease, while that of large and small states would increase. For the N_25 Council, for any state i, we have

Ω_i(W_25^U, p∗_25) < Ω_i(W_25^Ni, p∗_25) < Ω_i(W_25^Cs, p∗_25) < Ω_i(W_25^Cv, p∗_25).

But if we compare the Convention rule and the simple majority, the ranking differs according to the size of the states:

Ω_i(W_25^Cv, p∗_25) > Ω_i(W_25^SM, p∗_25) > Ω_i(W_25^Cs, p∗_25)

for large states (if i = Ge, UK, Fr, It) while the reverse holds for the remaining states.

[Table 5.4. A priori success: Ω_i(W_n^EU, p∗_n); entries omitted.]
In other words, if states rank rules with a priori success as their criterion, large states should prefer the Convention rule to the simple majority, while the other states would prefer the simple majority. Note that for Germany the probability of success is very similar under these two rules. Thus if states rank rules with a priori success as their criterion, the ranking is generally the same for all states, with a few exceptions (in the N6 and N25 Councils).

Probabilistic criteria: summary conclusions

In the public debate, the choice of the rule in the Council is often presented as a zero-sum game between states. In particular, it is often claimed that large states are less and less well-represented, or that they have lost power in favour of medium or small states. Similarly, the discussions prior to the Nice summit were in terms of balance of representation between large, medium and small states.

Application to the European Union

The analysis above offers a different point of view for comparing rules: the a priori probabilities of success (conditional in one sense or another, or unconditional) of each state are evaluated for different rules. These are absolute values81 that can be compared over time and between different rules. The analysis leads to the following conclusions (recalling the caveat for any descriptive interpretation). The variation over time is similar for all states: a decreasing trend for the integration index or a priori success, and an increasing trend for the sovereignty index. The Convention rule would reverse this trend for the integration index, a priori success and the sovereignty index. The Constitution rule would increase the integration index and decrease the sovereignty index.

80. An exception is Luxembourg, whose a priori success increases with the first two enlargements. Recall, however, that in the N6 Council Luxembourg had a null seat. In the N10 Council, Luxembourg, Ireland and Denmark had symmetric seats.
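Footnote 80's claims about seats (a null seat for Luxembourg in the N6 Council, symmetric seats for Luxembourg, Ireland and Denmark in the N10 Council) can be checked mechanically. A sketch, with our helper names; the weights and quotas are the historical ones (N6 as above, N10 with votes 10/10/10/10/5/5/5/3/3/2 and quota 45 of 63):

```python
from itertools import combinations

def wins(weights, quota, coalition):
    return sum(weights[j] for j in coalition) >= quota

def is_null(weights, quota, i):
    """Voter i never turns a losing coalition into a winning one."""
    others = [j for j in range(len(weights)) if j != i]
    return all(wins(weights, quota, S + (i,)) == wins(weights, quota, S)
               for r in range(len(others) + 1)
               for S in combinations(others, r))

def symmetric_seats(weights, quota, i, j):
    """Exchanging voters i and j never changes whether a coalition wins."""
    rest = [k for k in range(len(weights)) if k not in (i, j)]
    return all(wins(weights, quota, S + (i,)) == wins(weights, quota, S + (j,))
               for r in range(len(rest) + 1)
               for S in combinations(rest, r))

w6, q6 = [4, 4, 4, 2, 2, 1], 12                    # N6: Luxembourg last
w10, q10 = [10, 10, 10, 10, 5, 5, 5, 3, 3, 2], 45  # N10: Dk, Ir, Lu last
```

Here Denmark and Ireland (weight 3) turn out to be interchangeable with Luxembourg (weight 2), because no coalition of the remaining states has total weight exactly 42, the only total at which the two weights would make a difference.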
Concerning the choice between the three new rules for the N25 Council, all states have the largest integration index and the largest a priori success under the Convention rule, and all states have the largest sovereignty index under the Nice rule. For both indices, the Constitution rule gives intermediate values to all states. It is indeed remarkable that the ranking between rules according to these indices does not depend on the size of states: the ranking between rules is identical for all states (with only two exceptions). The difference in ranking between rules depends on the criterion chosen (ease of passing proposals, or integration index versus sovereignty index). From these points of view, the Constitution rule can be seen as a compromise between the Nice rule, which is more 'sovereignist' and less confident about integration, and the Convention rule, which is more resolutely integrationist and less jealous of sovereignty. Of course, inter-state comparisons are also relevant. The relative position of one state compared to another state also matters. In particular, the issue remains of assessing the differences between states and their justification on grounds of differences in population size. Egalitarianism and utilitarianism provide standpoints that allow for comparisons from the point of view of both states and citizens.

81. Most power indices are usually expressed in relative terms, which makes comparisons between different rules unsound (see the comment on the normalization of the Banzhaf–Coleman index, Section 3.5.3).

5.2.2 Criteria based on utilities

There is a well-known duality in the way in which the Council is seen. It is sometimes seen as a committee of states, hence the symmetric simple majority or unanimity rules for certain matters, and other times as a committee of representatives of their respective populations, hence the different asymmetric qualified majority rules.
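The utility-based quantities used in what follows can be made concrete by enumeration. A sketch, under the assumption (consistent with the N6 aggregates 2.44 − 0.94λ, 2.67 − 1.69λ and 3 − 2.91λ reported later in this section) that Ω+_i is the unconditional probability of voting 'yes' on a proposal that passes, and Ω−_i that of voting 'no' on one that fails; the function names are ours:

```python
from itertools import product
from fractions import Fraction

def pos_neg_success(weights, quota, i):
    """(Omega+_i, Omega-_i) under p*: P(i votes yes and the proposal
    passes) and P(i votes no and the proposal fails)."""
    n = len(weights)
    pos = neg = 0
    for votes in product((0, 1), repeat=n):
        passes = sum(w for w, v in zip(weights, votes) if v) >= quota
        if votes[i] and passes:
            pos += 1
        if not votes[i] and not passes:
            neg += 1
    return Fraction(pos, 2 ** n), Fraction(neg, 2 ** n)

def aggregate_utility(weights, quota, lam):
    """Sum over voters of  lam * Omega+_i + (1 - lam) * Omega-_i."""
    return sum(lam * p + (1 - lam) * q
               for i in range(len(weights))
               for p, q in [pos_neg_success(weights, quota, i)])

w6 = [4, 4, 4, 2, 2, 1]
qm0 = aggregate_utility(w6, 12, Fraction(0))  # N6 QM at lam = 0: 171/64, about 2.67
qm1 = aggregate_utility(w6, 12, Fraction(1))  # N6 QM at lam = 1: 63/64, about 0.98
```

With state-level simple majority modelled as one vote per state and quota 4, and unanimity as quota 6, the same function reproduces the other two N6 aggregates at the endpoints λ = 0 and λ = 1.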
We thus apply both the models presented in Sections 3.7 and 3.8. In the model in Section 3.7 each member of the committee acts on his/her own behalf and has his/her own utility function. According to this model, for any rule W_n^EU in the Council, state i's (i ∈ N_n) a priori expected utility can be defined as (see (26) in 3.7):

ū^λ_i(W_n^EU, p*_n) = λ Ω+_i(W_n^EU, p*_n) + (1 − λ) Ω−_i(W_n^EU, p*_n).

In the European context, λ can be interpreted as the importance given to the 'integration index' relative to the 'sovereignty index': λ > 0.5 means a more pro-integration view, while λ < 0.5 means a more pro-sovereignty view. In fact, λ can be regarded as representing the 'bias' of a state between these two extremes. If λ = 0.5 both views are equally important and what matters is obtaining the preferred outcome. The formula above presupposes the same bias λ for all states.

In the model in Section 3.8 each member of the committee acts on behalf of a group of a different size, in which each individual has his/her own utility function. Let M(t) be the set of European citizens in year t, distributed across n states, M(t) = M_1(t) ∪ ... ∪ M_n(t). As in Section 3.8, we assume that each minister in the EU Council follows the majority will in his/her state. Or, equivalently, decisions within state i are made by a simple majority, which we denote82 by W^SMt_{m_i}. The composite rule

W^EUt_m = W^EU_n[W^SMt_{m_1}, ..., W^SMt_{m_n}]

models the EU Council's decision-making as a two-stage EU-citizens' decision. Then EU citizen k's (k ∈ M) a priori expected utility (assuming the same bias λ for all citizens) is given by

ū^λ_k(W^EUt_m, p*_m) = λ Ω+_k(W^EUt_m, p*_m) + (1 − λ) Ω−_k(W^EUt_m, p*_m).

82. According to the convention previously adopted the notation should be W^SM_{m_i(t)}, but no confusion can arise with this simpler notation if we recall that the number of citizens in a state varies with t. Similarly we write p*_m instead of p*_{m(t)}, and W^EUt_m instead of W^EU_{m(t)}.
Egalitarian principle

The egalitarian principle requires that a priori all voters have the same expected utility. Symmetric rules implement the principle. Thus, the simple majority and unanimity satisfy the egalitarian principle at state level. The qualified majority does not: only states with symmetric seats have the same expected utility, that is, states with the same number of votes, or Luxembourg, Denmark and Ireland in the N10 Council. Of course, the purpose of the qualified majority is to take into account the differences in terms of populations, and thus the egalitarian principle should be checked at citizen level. For any pair of citizens k and l, we obtain that the egalitarian principle is basically satisfied at citizen level whatever the rule (see (42) in 3.8.1):

ū^λ_k(W^EUt_m, p*_m) ≈ ū^λ_l(W^EUt_m, p*_m).

More precisely, applying Proposition 20 in 3.8.2 (adopting the same approximations), the maximal ratio between two citizens, whatever the year considered and whatever the importance given to the integration index relative to the sovereignty index, is

ū^λ_k(W^EUt_m, p*_m) / ū^λ_l(W^EUt_m, p*_m) < 1 + ξ(t),

where ξ(t), given by (41), depends on the smallest population figure in year t. For the years considered, ξ(t) is always smaller than 0.0015, which means that the citizen with the largest expected utility never has more than 0.15% more than the expected utility of any other citizen. In Table 5.5 an upper bound δ(n) is given for each n (i.e. each period with a given number of member states), namely

δ(n) = Max_t [1 + ξ(t)],

where the maximum is taken over those years t in which the Council had n members.

Table 5.5. Maximum ratio between citizens' expected utilities (upper bound), δ(n), for each Council size n (entries flattened in extraction; for n = 25, δ(25) = 1.000126).
Comparison with the square root rule

We can compute a similar ratio for the Banzhaf index, that is,

Max_t [ Max_k Bz_k(W^EUt_m) / Min_l Bz_l(W^EUt_m) ],

where the maximum is taken over any year t such that the Council had n members. The results are given in Table 5.6. This ratio is certainly not close to 1 (it can even be infinite in the N6 Council!), and is always much larger than 2 (the sole exception being the qualified majority in the N9 Council). If citizens' representation were measured by the Banzhaf index, this would mean large inequalities between citizens from different states. During the debates that followed the Treaty of Nice some scientists claimed that: 'The basic democratic principle that the vote of any citizen of a Member State ought to be worth as much as for any other Member State is strongly violated both in the voting system of the Treaty of Nice and in the rules given in the draft Constitution.'83 The rationale for their claim is basically that, for instance, this ratio is 2.27 under the Nice rule and 4.35 under the Constitution rule in the N25 Council, and these values were considered much too large. By focusing on the Banzhaf indices, as discussed in Example 3.3 in Section 3.8.2, their approach magnifies the differences between citizens from different countries. As was argued there, using the probability of being decisive as a measure of representation (i.e. of the 'worth' of the vote of any citizen of a Member State) in a take-it-or-leave-it environment is misleading.

83. Open letter addressed to the governments of the EU member states. The letter (which, as the reader may guess, we refused to sign), as well as the list of scientists who signed it, can be found, for instance, at www.esi2.us.es/˜mbilbao/pdffiles.letter.pdf.

Table 5.6. Maximum ratio between citizens' Banzhaf indices, for each rule and each Council size (entries flattened in extraction; the recoverable values include an infinite ratio for the qualified majority in the N6 Council, a ratio of 1.78 for the qualified majority in the N9 Council, entries between 13 and 14.4 for the symmetric rules, and, for the N25 Council, 2.27 under the Nice rule, 4.35 under the Constitution rule and 2.88 under the Convention rule).
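The remark that the ratio can be infinite in the N6 Council is easy to verify: Luxembourg's Banzhaf index there is zero. A sketch by direct enumeration (our function name, not the book's):

```python
from itertools import combinations
from fractions import Fraction

def banzhaf(weights, quota):
    """Non-normalized Banzhaf index: for each voter, the fraction of the
    2^(n-1) coalitions of the others that the voter swings to winning."""
    n = len(weights)
    index = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        swings = 0
        for r in range(n):
            for S in combinations(others, r):
                w = sum(weights[j] for j in S)
                if w < quota <= w + weights[i]:
                    swings += 1
        index.append(Fraction(swings, 2 ** (n - 1)))
    return index

bz6 = banzhaf([4, 4, 4, 2, 2, 1], 12)  # N6 qualified majority
# bz6[-1] (Luxembourg) is 0, so the max/min ratio across states is infinite.
```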
These values have to be compared with the corresponding number in Table 5.5, which is δ(25) = 1.000126. The reason why there is such a big difference between these figures is clear: in the a priori probabilistic model, a citizen's expected utility depends only slightly on his/her decisiveness.

Utilitarian principle

The voting rules that implement the utilitarian principle in different conditions were discussed in Sections 3.7.2 and 3.8.3. Here we consider the rules that have actually been used in the Council. To compare rules from a utilitarian point of view, the criterion is the sum84 of expected utilities. That is, W is better than W′ (W ≻ W′) if

Σ_i ū^λ_i(W, p*) > Σ_i ū^λ_i(W′, p*).

This can be applied at state level or at citizen level. If the Council is interpreted as a committee of 'equals', then, according to Propositions 13 and 15 in Section 3.7.2, W^SM is the best rule if λ ≥ 0.5, and W^U is the best rule if λ = 0.

84. Here we only compare rules with identical numbers of voters. If this were not the case, average expected utility would be a better criterion.

For 0 < λ < 0.5, the utilitarian-best rule (i.e. the (1 − λ)-majority rule) is not used in the Council. We compare the expected utilities for the three rules actually used, W_n^SM, W_n^QM and W_n^U, and also W_25^Ni, W_25^Cs and W_25^Cv for the N25 Council. Let us detail the calculations for the N6 Council, at state level. Plugging the values of the a priori success and the a priori ease of passing proposals (Tables 5.1 and 5.4) into (28) in Section 3.7, we obtain the aggregated expected utility for the simple majority, the qualified majority and unanimity:

Σ_i ū^λ_i(W_6^SM, p*_6) = 2.44 − 0.94λ,
Σ_i ū^λ_i(W_6^QM, p*_6) = 2.67 − 1.69λ,
Σ_i ū^λ_i(W_6^U, p*_6) = 3 − 2.91λ.

The largest sum is obtained with the simple majority for λ = 1, and with the unanimity rule for λ = 0. For intermediate values, as λ increases from 0 to 1 (i.e. the relative weight of the integration index w.r.t.
the sovereignty index increases), the performance of the three rules (W_6^U, W_6^QM and W_6^SM) is reversed. A comparison of these sums leads to

W_6^U ≻ W_6^QM ≻ W_6^SM  if λ < 0.27,
W_6^U ∼ W_6^QM ≻ W_6^SM  if λ = 0.27,
W_6^QM ≻ W_6^U ≻ W_6^SM  for 0.27 < λ < 0.28,
W_6^QM ≻ W_6^U ∼ W_6^SM  if λ = 0.28,
W_6^QM ≻ W_6^SM ≻ W_6^U  for 0.28 < λ < 0.31,
W_6^QM ∼ W_6^SM ≻ W_6^U  if λ = 0.31,
W_6^SM ≻ W_6^QM ≻ W_6^U  for λ > 0.31.

We can thus conclude that for the N6 Council, if the choice is between simple majority, unanimity and qualified majority, the utilitarian principle is best satisfied by unanimity if λ < 0.27, by a qualified majority only for λ within the narrow interval 0.27 < λ < 0.31, and by a simple majority if λ > 0.31.

We can proceed similarly for the N9, N10, N12 and N15 Councils. The results can be summarized as follows. If the choice is between a simple majority, unanimity and a qualified majority, the utilitarian principle is best satisfied by

W_n^U if λ < a_n,
W_n^QM for a_n < λ < b_n,
W_n^SM if λ > b_n,

with a_9 = 0.27, a_10 = 0.28, a_12 = 0.27, a_15 = 0.28 and b_9 = 0.40, b_10 = 0.37, b_12 = 0.38, b_15 = 0.42. Therefore the qualitative results are similar in all cases: the simple majority is the best rule for a large range of values of λ (above b_n, which is around 0.40). Unanimity is the best rule for a range of values of λ (below a_n, which is close to 0.30). And the qualified majority is better than unanimity and the simple majority only for a small range of values of λ (a_n and b_n are quite close). For the N25 Council, if the choice is between simple majority, the Convention rule, the Constitution rule, the Nice rule and unanimity, the utilitarian principle is best satisfied by

W_25^U if λ < 0.32,
W_25^Ni for 0.32 < λ < 0.37,
W_25^Cs for 0.37 < λ < 0.42,
W_25^Cv for 0.42 < λ < 0.44,
W_25^SM if λ > 0.44.
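The N6 thresholds 0.27, 0.28 and 0.31 follow directly from intersecting the three affine aggregates 2.44 − 0.94λ, 2.67 − 1.69λ and 3 − 2.91λ; a quick check:

```python
from fractions import Fraction

# Aggregate expected utilities a - b*lambda for the N6 Council (from the text)
lines = {
    "SM": (Fraction("2.44"), Fraction("0.94")),
    "QM": (Fraction("2.67"), Fraction("1.69")),
    "U":  (Fraction("3"),    Fraction("2.91")),
}

def crossover(r1, r2):
    """Value of lambda at which two rules give the same aggregate utility."""
    (a1, b1), (a2, b2) = lines[r1], lines[r2]
    return (a1 - a2) / (b1 - b2)

u_qm = crossover("U", "QM")    # unanimity vs qualified majority
u_sm = crossover("U", "SM")    # unanimity vs simple majority
qm_sm = crossover("QM", "SM")  # qualified vs simple majority
```

Below the first crossover unanimity is best; above the last, the simple majority is best; the qualified majority wins only in the narrow band in between.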
Note that the Constitution rule is once again in an intermediate position (in this case, the range of λ for which it is the best) between the Nice and Convention rules, as it was with the criteria based on probabilistic indices. Of the three rules, the Constitution rule was the one finally chosen. This can be interpreted as a choice based on the utilitarian principle (from the point of view of the Council as a committee of 'equal states') in the following terms: in the light of the current model, the European Union's choice can be rationalized from the utilitarian point of view by assuming that the common view is a slightly biased pro-sovereignty view, with 0.37 < λ < 0.42.

We can proceed in a similar way at citizen level using the model introduced in Section 3.8.3, in which the EU Council works as a two-stage EU-citizens' decision-maker. Now in (72) W^EUt_m can be either W^SMt_m, W^QMt_m or W^Ut_m, depending on whether the rule in the Council is the simple majority, a qualified majority or unanimity. We obtain that the utilitarian principle is best satisfied at citizen level under

W^Ut_m if λ < a^t_m,
W^QMt_m for a^t_m < λ < b^t_m,
W^SMt_m if λ > b^t_m,

with a^t_m ∈ [0.499960, 0.499973] and b^t_m ∈ [0.499987, 0.499992]. Thus for any year considered the interval within which a qualified majority is the best of the three rules is very narrow, as a^t_m ≈ b^t_m ≈ 0.5. Thus at citizen level, of the rules that have been used in the Council, the one that best satisfies the utilitarian principle is

W^Ut_m if λ < 0.5,
W^SMt_m if λ ≥ 0.5.

In conclusion, the aggregated expected utility is almost never highest under a qualified majority (be it under the Nice rule, the Convention rule, or the Constitution rule for the N25 Council). In fact, according to the a priori model, as the relative weight given to positive success w.r.t. negative success increases, at about λ = 0.5 there is an abrupt shift, and the simple majority becomes better than unanimity.
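The two-stage citizens' rule can be made concrete on a toy example of our own (not the book's data): three states of 5, 3 and 1 citizens, simple majority within each state and among the delegates. With realistic multi-million populations each citizen's success is close to 1/2; with tiny populations the deviations are clearly visible:

```python
from itertools import product
from fractions import Fraction

POPS = [5, 3, 1]  # toy populations; delegates vote their state's majority

def two_stage_success(pops, state, k):
    """P(final outcome == vote of citizen k of the given state) under
    independent fifty-fifty votes, council = simple majority of delegates."""
    m = sum(pops)
    favourable = 0
    for votes in product((0, 1), repeat=m):
        # split the flat profile by state and take each state's majority
        blocks, pos = [], 0
        for p in pops:
            blocks.append(votes[pos:pos + p])
            pos += p
        delegates = [sum(b) * 2 > len(b) for b in blocks]
        outcome = sum(delegates) * 2 > len(delegates)
        if outcome == bool(blocks[state][k]):
            favourable += 1
    return Fraction(favourable, 2 ** m)

s_big, s_mid, s_small = (two_stage_success(POPS, j, 0) for j in range(3))
```

Here the lone citizen of the one-member state succeeds with probability 3/4 and a citizen of the five-member state with probability 19/32; as all populations grow, every citizen's success approaches 1/2, which is the content of the near-egalitarian result discussed earlier.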
Thus, from this point of view, the qualified majority is a compromise between these two extremes that can be justified in optimality terms only for a value of λ close to 0.5, but which makes sense as an intermediate rule between the two extreme rules between which optimality switches depending on the bias.

Criteria based on utilities: summary conclusions

Unanimity and simple majority rules obviously implement the egalitarian principle at state level. At citizen level, expected utility based on the a priori model is basically the same for all citizens, irrespective of nationality. Therefore the egalitarian principle is basically satisfied at citizen level for the different rules. From the utilitarian point of view, a qualified majority (i.e. the Nice, Constitution or Convention rule) is the best rule at state level only for a narrow range within a slightly biased pro-sovereignty view (common to all states), and practically never at citizen level. Nevertheless, all three qualified majority rules make sense as intermediate between the two extreme rules between which optimality switches depending on the bias.

5.3 The Council as a bargaining committee

As has already been said, the Council works more like what has been described as a bargaining committee. The minutes of the Council suggest that the formal vote often takes place once the Council has found a unanimous agreement. In fact, the confirmation by David Galloway85 that this is very often the case in the decisions made by a qualified majority, along with the fact that the redistribution of weights in the Council is obviously the most problematic issue at each enlargement, are at the origin of the model presented in Chapter 4. He also pointed out86 that 'weights do matter because negotiators know that they can be outvoted'.
In reference to the way in which negotiations in the EU's Council usually proceed, Galloway also pointed out the capacity of experienced negotiators to 'guess', at a certain stage of the bargaining process, after some negotiating rounds, 'where more or less the final agreement will lie'. To apply the model developed in Chapter 4, we assume that this 'final agreement' in the bargaining committee (B, W_n^EU) is

Nash^{Sh(W_n^EU)}(B),

that is, we measure the bargaining power (in the precise game-theoretic sense of giving the weights of the asymmetric Nash bargaining solution) by the Shapley–Shubik index. As discussed in Sections 4.3 and 4.4, there are no conclusive arguments in support of this index in this context, given that everything depends on the bargaining protocol. Nevertheless, we take this index as a term of reference in view of the very simple underlying protocol that supports it. As explained in Section 4.5, the Nash bargaining solution can be seen as a compromise between the (generally incompatible) egalitarian and utilitarian goals. At state level, according to this model the implementation of the Nash solution is guaranteed by any symmetric rule, where all states have the same bargaining power. In particular, the simple majority and unanimity satisfy the condition

Sh_i(W_n^SM) = Sh_i(W_n^U) = 1/n (for any i ∈ N).

Under a qualified majority, states with symmetric seats have the same bargaining power, but in general different states have different bargaining powers, as shown in Table 5.7. Thus it can be said that under the simple majority and under unanimity the final agreement that should be reached can be expected to be a compromise between egalitarianism and utilitarianism at state level.

85. An experienced EU practitioner, working for the Council for 20 years. See [25] for an account of the 2000 Nice summit from the point of view of a well-informed insider.

86. In the course of a face-to-face interview on 23.06.02.
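The Shapley–Shubik values referred to here can be computed by counting pivotal positions over all voter orderings; a sketch with our function name, checked against the N6 qualified majority:

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def shapley_shubik(weights, quota):
    """Share of the n! voter orderings in which each voter is pivotal,
    i.e. its weight carries the running total past the quota."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for j in order:
            total += weights[j]
            if total >= quota:
                pivots[j] += 1
                break
    return [Fraction(p, factorial(n)) for p in pivots]

# Any symmetric rule gives 1/n to every voter, e.g. simple majority of 5:
assert shapley_shubik([1] * 5, 3) == [Fraction(1, 5)] * 5

sh6 = shapley_shubik([4, 4, 4, 2, 2, 1], 12)  # N6 qualified majority
# Large states get 7/30 (about 0.233), Ne and Be 3/20 = 0.150, Luxembourg 0.
```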
Of course the objective of the qualified majority is to take into account the differences in terms of populations between the different states. For an assessment from this point of view, we use the model of a bargaining committee of representatives discussed in Section 4.6. With the notation introduced in Section 5.2.2, let M(t) be the set of European citizens in year t, distributed across n states, M(t) = M_1(t) ∪ ... ∪ M_n(t). Thus each minister i represents a group of size m_i(t) in the EU Council. Then the 'neutral' rule in the Council, according to the model in Section 4.6, would be such that any state's bargaining power is proportional to the size of the group that he/she represents. That is, such that

Sh_i(W_n^EU) / m_i(t) = Sh_j(W_n^EU) / m_j(t)

for any two countries i, j. If the rule satisfies this property, in the conditions and sense explained in Section 4.6, any citizen is indifferent between bargaining directly within M(t) and leaving bargaining in the hands of his/her minister. The simple majority and unanimity would be neutral if the population were the same in all states. Table 5.8 gives states' bargaining

Table 5.7. Shapley–Shubik index of each state for the qualified majority in the N6 to N25 Councils (entries flattened in extraction; in the N6 Council they are 0.233 for each of Germany, France and Italy, 0.150 for the Netherlands and Belgium, and 0 for Luxembourg).
powers divided by population for some years (1958, 1973, 1981, 1986, 1995 and 2004) for these two symmetric rules.

Table 5.8. Shapley–Shubik index/population ratio for symmetric rules, [Sh_i(W)/m_i(t)] × 10^9, W = W_n^SM or W_n^U (entries flattened in extraction; among the recoverable values, the 1958 ratio is about 3.1 for Germany and 540 for Luxembourg, and the 2004 ratio about 100 for Malta).

Note that although the population varies between enlargements and the years chosen for the calculations are those of the enlargements, qualitatively the results do not change between the mentioned dates (with the exception of German reunification, see below). As expected, it can be seen that these ratios are far from equal, and are much larger for small states than for large states. A citizen of a small state is favoured

Table 5.9.
Shapley–Shubik index/population ratio for the qualified majority, [Sh_i(W_n^QM)/m_i(t)] × 10^9, for the years 1958, 1973, 1981, 1986, 1995 and 2004 (entries flattened in extraction; in the 1958 column the ratios are 4.3 for Germany, 5.2 for France, 4.7 for Italy, 14 for the Netherlands, 17 for Belgium and 0 for Luxembourg).

by representation, while the opposite holds for citizens from large states. Table 5.9 gives the same ratios for the qualified majority for the same years. Table 5.9 requires the following comments. As expected, the differences under symmetric rules are much greater than under the qualified majority. Nevertheless, they are still considerable for the qualified majority. Large states still have a smaller ratio than small states. Thus, small states are relatively favoured. The premise that the larger the state is, the smaller the ratio will be holds with the following exceptions: Luxembourg in the N6 Council, Denmark in the N10 Council, Sweden in the N12 Council, and Germany in the N25 Council under the Constitution rule and the Convention rule. These ratios call into question the system of 'clusters' (giving the same weights and thus the same bargaining power to different states).
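The 1958 column of Table 5.9 can be approximated from the N6 Shapley–Shubik values and rough population figures; the populations below are our approximations (in millions), not the book's exact data:

```python
from fractions import Fraction

# Shapley-Shubik indices for the N6 qualified majority (Table 5.7)
sh6 = [Fraction(7, 30)] * 3 + [Fraction(3, 20)] * 2 + [Fraction(0)]

# Approximate 1958 populations (millions): Ge, Fr, It, Ne, Be, Lu
pops = [54.3, 44.6, 49.5, 11.2, 9.1, 0.31]

# Index/population ratio, scaled by 10^9 as in Table 5.9
ratios = [float(s) / (p * 1e6) * 1e9 for s, p in zip(sh6, pops)]
# e.g. Germany about 4.3, the Netherlands about 13, Luxembourg exactly 0
```

Even under these rough figures the qualitative pattern of the table is reproduced: the smaller the state, the larger the ratio, except for null-seat Luxembourg.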
The population figures of France, Italy and the United Kingdom are similar enough for them to be allocated the same number of votes, at least since 1981, but Germany's population would justify greater bargaining power, especially since reunification (between 1989 and 1990, the year of German reunification, the ratio falls from 2.2 × 10⁻⁹ to 1.7 × 10⁻⁹). Also the Netherlands' population would justify placing it in a different cluster from Belgium (especially since 1981). The same can be said for Ireland and Denmark. It is not meaningful to compare the variation over time of the ratio for one state (because we compare relative measures that mainly decrease when members are added, as the measures are divided by populations that increase). More interesting is the variation over time of the dispersion of the ratios. This dispersion cannot be said to decrease over time. It is also meaningful to compare the Nice rule, the Constitution rule and the Convention rule. The ratios for large states are larger with the Convention rule than with the other rules, while the ratios for small states (with the exception of Malta) are smaller with the Convention rule. This means that the dispersion of the ratio is smaller with the Convention rule than with the other rules, which in turn means that this is the best rule from this point of view. In the comparison of the Constitution rule and the Nice rule it must be noted that the differences in ratio between the large and medium states are smaller (in both relative and absolute terms) with the Constitution rule, thus making it a better candidate. But the differences between larger and smaller states are not clearly smaller with the Constitution rule. In short, the bargaining committee model supports the conclusion that medium and small states are over-represented compared to large states.
Of the different rules that were proposed for the N25 Council, the Convention rule is certainly the best, in the sense that the dispersion in the ratio is the smallest. Also from this point of view, the Constitution rule can be seen to some extent as intermediate between the Nice and the Convention rules.

To conclude, it may be asked whether the over-representation of medium and small states is interpretable as sheer generosity on the part of the larger member states. Possibly not. The point of view for this assessment of such deviations is provided by a model in which the only source of asymmetry is the difference in population figures; in other words, a model in which population figures are the only source of bargaining power. The (remarkably systematic: the bigger the country, the further below its 'due' proportion) deviation from these proportions may be related to the fact that this is not the only source of effective bargaining power. Larger states may also have other means of increasing their effective bargaining power.

5.4 Exercises

1. Show that Luxembourg had a null seat in the N6 Council, while Luxembourg, Ireland and Denmark had symmetric seats in the N10 Council.

2. Show that there is no inclusion between the sets of winning configurations of the Nice rule, the Constitution rule and the Convention rule.

3. Express the Nice rule, the Convention rule and the Constitution rule in terms of unions and intersections of weighted majority rules.

4. Show that if n is odd, we have

α(W_n, p*_n) = (1/2) α(W_n, p̃_n),
Ω+_i(W_n, p*_n) = (1/2) Ω+_i(W_n, p̃_n),

where p̃_n(S) := 1/x if s > n/2 and 0 otherwise, and p*_n(S) = 1/2^n.

5. Show that any citizen's a priori probability of success is basically 50% for all rules considered in the Council, irrespective of his/her nationality.
Similarly, any citizen's a priori integration index or sovereignty index does not depend on nationality, but on the a priori ease of passing proposals. That is,

Ω_k(W^EU_m, p*_m) ≈ 1/2,
Ω+_k(W^EU_m, p*_m) ≈ α(W^EU_n, p*_n),
Ω−_k(W^EU_m, p*_m) ≈ 1 − α(W^EU_n, p*_n).

6. Show that

Σ_{i∈N} ū^λ_i(W_n^SM, p*_n) = n[1/4 + (1/2^{n+1}) C_{n−1}^{(n−1)/2}] if n is odd,
Σ_{i∈N} ū^λ_i(W_n^SM, p*_n) = n[1/4 + ((1 − λ)/2^{n+1}) C_n^{n/2}] if n is even,

Σ_{i∈N} ū^λ_i(W_n^U, p*_n) = n[1/2 − λ(1/2 − 1/2^n)].

To conclude, we briefly summarize the main conclusions and claims of the book.

1. The first requisite for a sound normative theory for the assessment and choice of (dichotomous) voting rules is a precise specification of the type of committee, council or body that makes the collective decisions under consideration. It is not possible to provide a well-founded analysis or recommendation about a vaguely specified environment, as has been the case with the traditional voting power approach.

2. In this respect we have dealt separately with two extreme clear-cut types of committee that make decisions under a yes/no voting rule as terms of reference: take-it-or-leave-it committees and bargaining committees, of which we have provided different models whose only shared ingredient is a (dichotomous) voting rule. Nevertheless, this does not exhaust all the possible environments: other models are no doubt possible.

3. In neither type of committee is the question of 'power' or 'voting power' the first or primary issue that arises naturally, nor can this issue be immediately addressed in a meaningful way. Each type of committee requires a different model and a different analysis, but in both cases the model proposed assumes individuals' behaviour to be consistent with the expected utility maximization model. The introduction of utilities allows (insofar as is possible) for a coherent and unified approach to each type of committee.
In particular, the normative question of the choice of voting rule for a committee of representatives of either type can be addressed by applying the egalitarian and utilitarian principles. 4. In pure take-it-or-leave-it committees behaviour follows immediately from preferences (if indifferences are discarded), so the situation is not a game situation. This means that in such a context the very notion of ‘voting power’ is more than dubious. In particular the notion of power as the likelihood of being decisive is purely formal and devoid of any clear power-content. The notion of success or satisfaction seems in such contexts to be a sounder basis for further analysis. On this basis it is possible to introduce utilities into the model and apply the egalitarian and the utilitarian principles in order to make normative recommendations. An a priori probabilistic model of voters’ behaviour or preferences leads to some recommendations that include the first and second ‘square root rules’ as particular cases. Nevertheless, an explicit specification of the context and a formulation of the analysis in utility terms disclose the distorting and misleading effects of presenting the first as ‘equalizing voting power’. This is especially so in assessing the ‘distance’ of a voting rule from this ‘optimum’, i.e. assessing inequalities, which are magnified by the traditional approach. Apart from these differences, some other conclusions are worth remarking. First, starting with a precise specification of the environment sets limits on the scope and validity of these recommendations: they only make sense in take-it-or-leave-it voting situations, by contrast with the seemingly general-purpose recommendations of the traditional voting power approach. The limited scope of application of these recommendations may seem disappointing, given the rarity of pure take-it-or-leave-it voting situations, but it is the price that must be paid for clarifying the analysis. 
Second, the model discussed here is based on explicit assumptions about voters’ behaviour and utilities justified for normative purposes. As a consequence, the limitations of the model can be seen clearly. As is well known, the a priori probabilistic model of behaviour (Assumption 1) is often criticized, even assuming a normative point of view. Although other models can be considered, this one seems to us to be reasonable and the simplest. Moreover, this choice has permitted us to ‘embed’ the traditional model within our more general model, thus showing its inconsistencies and limitations.

5. The type of situation described as a bargaining committee is much more complex than a pure take-it-or-leave-it committee. Unlike take-it-or-leave-it committees, bargaining committees represent genuine game situations, and require a game-theoretic approach. We have modelled these situations as an extension of the classical Nash bargaining model. Our model consists of a profile of expected utility preferences over the set of feasible agreements (or, in practice, the set of feasible payoff vectors associated with it à la Nash), and the voting rule that prescribes what groups of players are able to enforce agreements. The first question that naturally arises then is what the ‘value’ or reasonable expectation of a player is in such an environment. The answer that we provide is also an extension of Nash bargaining theory, based on rationality requirements about a reasonable agreement in such a context. Theorem 29 provides a foundation for interpreting, in principle, most traditional power indices as candidates for measuring the ‘bargaining power’ that the voting rule gives to each player in a bargaining committee.
The lack of compelling conditions to go further is interpretable as the degrees of freedom enclosed in our rather summary model, in other words the indeterminacy of a situation in which details not incorporated into the model are important. Non-cooperative analysis shows the importance of the bargaining protocol and its impact on the players’ bargaining power. Of the power indices which are candidates to express the players’ bargaining power, the Shapley–Shubik index appears associated with a very simple protocol. Finally, the question of the choice of rule in a bargaining committee of representatives is addressed and yields a new and unexpected recommendation based on the Nash bargaining solution interpreted as a compromise between egalitarianism and utilitarianism.

6. In the light of the approach presented here, power indices ‘recover’ their game-theoretic character. The probabilistic approach makes sense for take-it-or-leave-it environments, but in such contexts power as decisiveness does not make sense. It is in bargaining environments that power is relevant and decisiveness may be the source of power. It is also in this context that the (cooperative and non-cooperative) game-theoretic approach makes sense.

7. Thus, it would be wrong to interpret the above summary as the result of taking the I/P-power dichotomy emphasized in [22] to its final consequences. In fact, a marginal outcome of the analysis is that it shows the lack of consistency of a distinction made at the abstract level rather than at the level of the situation considered. On the one hand, the notion of ‘I-power’ is revealed as a misunderstanding in a take-it-or-leave-it context (where its underlying probabilistic model makes sense), given the lack of sense of the notion of power as decisiveness in that context.
On the other hand, if the resulting bargaining power in bargaining committees is interpreted as ‘P-power’, it turns out in general not to be an expected share in a fixed prize, but rather genuine bargaining power in a well-established game-theoretic sense, and it is related to decisiveness. In fact, the model presented here accounts also for the particular preference profiles (i.e. TU-like) for which the Shapley–Shubik (and other power indices) may as well be interpreted as an expected payoff. 8. Finally, we want to stress the tentative and humble ‘if . . . then’ character of all the results and ‘recommendations’ presented in the book. We have been taught humility by ten years of joint research in which we believed again and again that we had ‘at last’ seen things clearly, only to later perceive further obscurities. [1] Allais, M., 1953, Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école Américaine, Econometrica 21, 503–46. [2] Arrow, K., 1963, Social Choice and Individual Values, 2nd ed., New York, Wiley. [3] Banks, J. S., and J. Duggan, 2000, A Bargaining Model of Collective Choice, American Political Science Review 94, 73–88. [4] Banzhaf, J. F., 1965, Weighted Voting doesn’t Work: A Mathematical Analysis, Rutgers Law Review 19, 317–43. [5] Banzhaf, J. F., 1966, Multi-Member Electoral Districts: Do They Violate the One Man, One Vote Principle? Yale Law Journal 75, 1309–38. [6] Barberà, S., and M. Jackson, 2006, On the Weights of Nations: Assigning Voting Power to Heterogeneous Voters, Journal of Political Economy, 114, 317–39. [7] Baron, D. P., and J. A. Ferejohn, 1989, Bargaining in Legislatures, American Political Science Review 83, 1181–1206. [8] Barry, B., 1980, Is it Better to Be Powerful or Lucky?, Part I and Part II, Political Studies 28, 183–94, 338–52. [9] Beisbart, C., L. Bovens, and S. 
Hartmann, 2005, A Utilitarian Assessment of Alternative Decision Rules in the Council of Ministers, European Union Politics 6, 395–418. [10] Benoît, J-P., and L. A. Kornhauser, 2002, Game-Theoretic Analysis of Legal Rules and Institutions. In: Handbook of Game Theory with Economic Applications, Vol. 3, ed. Aumann R. J., and S. Hart, Amsterdam, Elsevier–North-Holland, pp. 2229–69. [11] Bentham, J., 1789, An Introduction to the Principles of Morals and Legislation, London, T. Payne. [12] Binmore, K., 1998, Game Theory and the Social Contract II, Just Playing. Cambridge, MA, MIT Press. [13] Binmore, K., 2007, Playing for Real. A Text on Game Theory, New York, Oxford University Press. [14] Brams, S. J., and M. Lake, 1978, Power and Satisfaction in a Representative Democracy. In: Game Theory and Political Science, ed. P. Ordeshook, New York University Press, pp. 529–62. [15] Coleman, J. S., 1971, Control of Collectivities and the Power of a Collectivity to Act. In: Social Choice, ed. B. Lieberman, London, Gordon and Breach, pp. 269–300. [16] Coleman, J. S., 1986, Individual Interests and Collective Action: Selected Essays, Cambridge University Press. [17] Deegan, J., and E. W. Packel, 1978, A New Index of Power for Simple n-Person Games, International Journal of Game Theory 7, 113–23. [18] Dubey, P., 1975, On the Uniqueness of the Shapley Value, International Journal of Game Theory 4, 131–9. [19] Dubey, P., A. Neyman and R. J. Weber, 1981, Value Theory without Efficiency, Mathematics of Operations Research 6, 122–8. [20] Dubey, P., and L. S. Shapley, 1979, Mathematical Properties of the Banzhaf Power Index, Mathematics of Operations Research 4, 99–131. [21] Feix, M., D. Lepelley, V. Merlin, and J-L. Rouet, 2004, The Probability of Conflicts in a U.S. Presidential Type Election, Economic Theory 23, 227–57. [22] Felsenthal, D. S., and M. Machover, 1998, The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes, London, Edward Elgar. [23] Felsenthal, D. 
S., and M. Machover, 2004, Analysis of QM rules in the draft constitution for Europe proposed by the European Convention, 2003, Social Choice and Welfare 23, 1–20. [24] Felsenthal, D. S., M. Machover, and W. S. Zwicker, 1998, The Bicameral Postulates and Indices of a Priori Voting Power, Theory and Decision 44, 83–116. [25] Galloway, D., 2001, The Treaty of Nice and Beyond: Realities and Illusions of Power in the EU, Sheffield, Academic. [26] Garrett, G., and G. Tsebelis, 1999, Why Resist the Temptation to Apply Power Indices to the European Union?, Journal of Theoretical Politics 11, 291–308. [27] Gibbard, A., 1973, Manipulation of Voting Schemes: A General Result, Econometrica 41, 587–601. [28] Hart, S., and A. Mas-Colell, 1996, Bargaining and Value, Econometrica 64, 357–80. [29] Herstein, I. N., and J. Milnor, 1953, An Axiomatic Approach to Measurable Utility, Econometrica 21, 291–7. [30] Holler, M. J., and E. W. Packel, 1983, Power, Luck and the Right Index, Journal of Economics 43, 21–9. [31] Hosli, M. O., and M. Machover, 2004, The Nice Treaty and Voting Rules in the Council: A Reply to Moberg (2002), Journal of Common Market Studies 42, 497–521. [32] Kalai, E., 1977, Nonsymmetric Nash Solutions and Replications of 2-person Bargaining, International Journal of Game Theory 6, 129–33. [33] Kalai, E., and M. Smorodinsky, 1975, Other Solutions to Nash’s Bargaining Problem, Econometrica 43, 513–8. [34] König, T., and T. Bräuninger, 1998, The Inclusiveness of European Decision Rules, Journal of Theoretical Politics 10, 125–42. [35] Kuhn, H. W., and S. Nasar, eds., 2002, The Essential John Nash, Princeton University Press. [36] Lane, J. E., and R. Maeland, 2002, A Note on Nice, Journal of Theoretical Politics 14, 123–8. [37] Laruelle, A., R. Martínez, and F. Valenciano, 2004, On the Difficulty of Making Decisions within the EU-25, International Journal of Organization Theory and Behaviour 7, 571–84. [38] Laruelle, A., R. Martínez, and F.
Valenciano, 2006, Success versus Decisiveness: Conceptual Discussion and Case Study, Journal of Theoretical Politics 18, 185–205. [39] Laruelle, A., and F. Valenciano, 2001, Shapley–Shubik and Banzhaf Indices Revisited, Mathematics of Operations Research 26, 89–104. [40] Laruelle, A., and F. Valenciano, 2002, Power Indices and the Veil of Ignorance, International Journal of Game Theory 31, 331–9. [41] Laruelle, A., and F. Valenciano, 2002, Inequality among EU Citizens in the EU’s Council Decision Procedure, European Journal of Political Economy 18, 475–98. [42] Laruelle, A., and F. Valenciano, 2003, Semivalues and Voting Power, International Game Theory Review 5, 41–61. [43] Laruelle, A., and F. Valenciano, 2004, Inequality in Voting Power, Social Choice and Welfare 22, 413–32. [44] Laruelle, A., and F. Valenciano, 2004, On the Meaning of the Owen–Banzhaf Coalitional Value in Voting Situations, Theory and Decision 56, 113–23. Reprinted in: Essays on Cooperative Games. In Honor of Guillermo Owen, ed. G. Gambarelli, Theory and Decision Library C, Vol. 36, Dordrecht, Kluwer. [45] Laruelle, A., and F. Valenciano, 2005, Assessing Success and Decisiveness in Voting Situations, Social Choice and Welfare 24, 171–97. [46] Laruelle, A., and F. Valenciano, 2005, Potential, and Power of a Collectivity to Act, Theory and Decision 58, 187–94. [47] Laruelle, A., and F. Valenciano, 2005, A Critical Reappraisal of Some Voting Power Paradoxes, Public Choice 125, 17–41. [48] Laruelle, A., and F. Valenciano, 2007, Bargaining in Committees as an Extension of Nash’s Bargaining Theory, Journal of Economic Theory 132, 291–305. [49] Laruelle, A., and F. Valenciano, 2008, Bargaining in Committees of Representatives: the ‘Neutral’ Voting Rule, Journal of Theoretical Politics 20, 93–106. [50] Laruelle, A., and F. Valenciano, 2008, Cooperative Bargaining Foundations of the Shapley–Shubik Index, Games and Economic Behavior (forthcoming). [51] Laruelle, A., and F.
Valenciano, 2008, Non-Cooperative Foundations of Bargaining Power in Committees, Games and Economic Behavior 63, 341–53. [52] Laruelle, A., and M. Widgrén, 1998, Is the Allocation of Voting Power among the EU States Fair? Public Choice 94, 317–39. [53] Leech, D., 2002, Designing the Voting System for the EU Council of Ministers, Public Choice 113, 437–64. [54] Lehrer, E., 1988, An Axiomatization of the Banzhaf Value, International Journal of Game Theory 17, 89–99. [55] Maaser, N., and S. Napel, 2007, Equal Representation in Two-Tier Voting Systems, Social Choice and Welfare 28, 401–20. [56] Moberg, A., 2002, The Nice Treaty and Voting Rules in the Council, Journal of Common Market Studies 40, 259–82. [57] Montero, M., 2006, Noncooperative Bargaining Foundations of the Nucleolus in Majority Games, Games and Economic Behavior 54, 380–97. [58] Morriss, P., 1987, Power: A philosophical Analysis, Manchester University Press. [59] Morriss, P., 2002, Power: A philosophical Analysis, 2nd edition, Manchester University Press. [60] Nash, J. F., 1950, The Bargaining Problem, Econometrica 18, 155–62. [61] Nash, J. F., 1951, Non-Cooperative Games, Annals of Mathematics 54, 286–95. [62] Nash, J. F., 1953, Two-Person Cooperative Games, Econometrica 21, 128–40. [63] Nash, J. F., 1996, Essays on Game Theory, London, Edward Elgar. [64] Niemi, R. G., and H. F. Weisberg, eds., 1972, Probability Models of Collective Decision Making, Columbus, OH, Merrill. [65] Osborne, M. J., 2003, An Introduction to Game Theory, Oxford University Press. [66] Osborne, M. J., and A. Rubinstein, 1994, A Course in Game Theory, Cambridge, MA, MIT Press. [67] Owen, G., 1975, Multilinear Extensions and the Banzhaf Value, Naval Research Logistics Quarterly 22, 741–50. [68] Penrose, L. S., 1946, The Elementary Statistics of Majority Voting, Journal of the Royal Statistical Society 109, 53–7. [69] Penrose, L. S., 1952, On the Objective Study of Crowd Behaviour, London, Lewis. 
[70] Rae, D., 1969, Decision Rules and Individual Values in Constitutional Choice, American Political Science Review 63, 40–56. [71] Rawls, A., 1972, A Theory of Justice, Oxford University Press. [72] Roth, A. E., 1977, Individual Rationality and Nash’s Solution to the Bargaining Problem, Mathematics of Operations Research 2, 64–5. [73] Roth, A. E., 1977, Utility Functions for Simple Games, Journal of Economic Theory 16, 481–9. [74] Roth, A. E., ed., 1988, The Shapley Value. Essays in Honor of Lloyd S. Shapley, Cambridge University Press. [75] Rubinstein, A., 1982, Perfect Equilibrium in a Bargaining Model, Econometrica 50, 97–109. [76] Shapley, L. S., 1953, A Value for N-person Games, Annals of Mathematical Studies 28, 307–17. [77] Shapley, L. S., 1969, Utility Comparison and the Theory of Games. In: La Décision: Agrégation et Dynamique des Ordres de Préférence, Paris, CNRS, 251–63. [78] Shapley, L. S., and M. Shubik, 1954, A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48, 787–92. [79] Straffin, P. D., 1977, Homogeneity, Independence and Power Indices, Public Choice 30, 107–18. [80] Straffin, P. D., 1977, Majority Rule and General Decision Rules, Theory and Decision 8, 351–60. [81] Straffin, P. D., 1982, Power Indices in Politics. In: Political and Related Models, ed. S. J. Brams, W. F. Lucas, and P. D. Straffin, New York, Springer, pp. 256–321. [82] Straffin, P. D., 1988, The Shapley–Shubik and Banzhaf Power Indices as Probabilities. In: The Shapley Value. Essays in Honor of Lloyd S. Shapley, ed. A. E. Roth, 1988, Cambridge University Press, pp. 71–81. [83] Straffin, P. D., M. D. Davis, and S. J. Brams, 1982, Power and Satisfaction in an Ideologically Divided Voting Body. In: Power, Voting, and Voting Power, ed. M. Holler, Würzburg–Wien, Physica Verlag, pp. 239–253. [84] Taylor, M., 1969, Proof of a Theorem on Majority Rule, Behavioral Science 14, 228–31. [85] Taylor, A. D., and W. S. 
Zwicker, 1992, A Characterization of Weighted Voting, Proceedings of the American Mathematical Society 115, [86] Taylor A. D., and W. S. Zwicker, 1999, Simple Games: Desirability Relations, Trading, Pseudoweightings, Princeton University Press. [87] von Neumann, J., and O. Morgenstern, 1944, Theory of Games and Economic Behavior, Princeton University Press. [88] Weber, R. J., 1979, Subjectivity in the Valuation of Games. In: Game Theory and Related Topics, ed. O. Moeschlin, and D. Pallaschke, Amsterdam, North-Holland, pp. 129–36. [89] Weber, R. J., 1988, Probabilistic Values for Games. In: The Shapley Value. Essays in Honor of Lloyd S. Shapley, ed. A. E. Roth, 1988, Cambridge University Press, pp. 101–19. a priori integration index, 145, 146, 156 a priori sovereignty index, 148, 150, 156 a priori voting behaviour, 60, 67, 71, 142 additivity, 35 Allais, M., 16, 28 anonymity, 33, 35, 110 anonymous voting behaviour, 56, 68, 139 Arrow, K., xiii Banks, J. S., 128 Banzhaf index, 63, 67, 80, 83, 92, 125, 158, 159 Banzhaf, J. F., 30, 39–41, 62 Barberà, S., 96 bargaining committee, 46, 106, 107, 163 bargaining power, 113, 114, 163 bargaining problem, 31, 107 bargaining protocol, 118, 123 Baron, D. P., 128 Barry, B., 54, 55, 58, 67 battle of the sexes, 20, 22 Beisbart, C., 96 Benoît, J-P., 67, 134 Bentham, J., 71 Binmore, K., 18, 31, 113, 117 Bovens, L., 96 Bräuninger, T., 65, 67 Brams, S. J., 54, 67 Coleman indices, 63, 67 Coleman, J. S., 41, 63 combination, 2 composition of voting rules, 9, 78, 156 comprehensive, 25, 107 conditional probabilities, 58 Davis, M. 
D., 54, 67 decision procedure, 4 decisiveness, 54, 57, 62, 63, 67, 80, 84, 159 Deegan, J., 125 dictator seat, 7, 54 domination, 7 double weighted majority rule, 8 Dubey, P., 39, 41–43, 47, 62, 63, 67, 112, 116 Duggan, J., 128 ease of passing proposals, 57, 69, 79, 144, 145 efficiency, 32, 35, 110 egalitarianism, 71, 74, 81, 128, 157 EU Constitution rule, 140 EU Convention rule, 140 EU Nice rule, 85, 139 ex ante, 55 ex post, 54 expected utility function, 12, 74, 81, 84, 87, 159 feasibility, 110 Feix, M., 96 Felsenthal, D. S., xii, 44, 46, 47, 92, 141, 143 Ferejohn, J. A., 128 first square root rule, 83, 158 Galloway, D., x, 138, 163 game in extensive form, 22 game in strategic form, 21 Garrett, G., 97 Gibbard, A., xiii Hart, S., 128 Hartman, S., 96 Index Herstein, I. N., 18 Holler, M. J., 115, 125 homogeneity, 96 Hosli, M. O., 67 improper voting rule, 6, 76, 94 independence of irrelevant alternatives, 32, 111 independent voting behaviour, 56, 68 indifference, 11, 53 individual rationality, 110 intersection of voting rules, 9 invariance w.r.t. positive affine transformations, 32, 111 183 Nasar, S., 31 Nash bargaining solution, 33, 164 Nash equilibrium, 19 Nash, J. F., 19, 20, 31, 33, 34, 44, 47, 107, 110 negative success, 58, 73, 89, 145 Neyman, A., 43 Niemi, R. G., 48 normalization, 64 NTU games, 25, 108 null player, 35, 111 null seat, 6, 139 Osborne, M. J., 18 Owen, G., 63 Jackson, M., 96 König, T., 65, 67 Kalai, E., 47, 112 Kornhauser, L. A., 67, 134 Kuhn, H. W., 31 Lake, M., 54, 67 Lane, J. E., 141 Leech, D., 64 Lehrer, E., 47 Lepelley, D., 96 losing vote configuration, 5 lottery, 12, 107 luck, 55 M-symmetric, 131 Maaser, N., 96 Machover, M., xii, 44, 46, 47, 67, 92, 141, 143 Maeland, R., 141 Martínez, R., 67, 142 Mas–Colell, A., 128 Merlin, V., 96 Milnor, J., 18 minimal winning vote configuration, 6 Moberg, A., 64 monotonic game, 25 Montero, M., 128 Morgenstern, O., 24, 25, 30, 31, 35 Morriss, P., 45, 55, 92 N-voting rule, 5, 107 Napel, S., 96 Packel, E. 
J., 115, 125 paradox, 44, 66 Penrose index, 63 Penrose, L. S., 41, 54, 63, 67 permutation, 2 players, 10, 107 positive success, 58, 73, 89, 145 postulate, 44, 66 preferences, 11, 55, 107 prisoner’s dilemma, 19, 21 probabilistic protocol, 119 q-majority rules, 8, 75 qualified majority, 138 Rae index, 62 Rae, D., 41, 48, 54, 62, 67, 75 Rawls, A., 71, 72 Roth, A. E., 35, 47 Rouet, J-L., 96 Rubinstein, A., 18, 128 second square root rule, 92 Selten, R., 10 semivalue, 43, 114, 125 Shapley, L. S., 30, 34–38, 41–43, 47, 62, 63, 67, 110, 116, 129 Shapley–Shubik index, 37, 114, 125, 163, 165 Shapley value, 36, 114 Shubik, M., 30, 37, 38, 67 simple game, 25 simple majority rule, 7, 68, 75, 138 184 Smorodinski, M., 47 stationary strategy, 24, 120 stationary subgame perfect equilibrium, 24, 120 status quo, 31, 110, 118 Stirling’s formula, 4, 79 Straffin, P. D., 48, 54, 61, 67, 96 strategy, 21 subgame perfect equilibrium, 23 success, 54, 57, 62, 65, 67, 69, 73, 74, 80, 149, 152 superadditive game, 25 support of the lottery, 12 symmetric bargaining problem, 32 symmetric gain-loss, 112, 116 symmetric seats, 7, 139 symmetric voting rule, 7, 74, 115 T-oligarchy, 7 T-unanimity rule, 7 take-it-or-leave-it committee, 46, 52, 68, 142 Taylor, A. D., 8 Taylor, M., 75 transfer, 42, 112, 116 Tsebelis, G., 97 Index TU games, 24 TU-like preferences, 108 two-stage indirect voting procedure, 9, 78, 81, 87 unanimity rule, 7, 68, 75, 109, 138 union of voting rules, 9 utilitarianism, 71, 74, 87, 128, 159 utility function, 11, 71, 156 veil of ignorance, 61, 72 veto seat, 6, 144 vNM preferences, 13, 76, 107, 109 von Neumann, J., 24, 25, 30, 31, 35 vote configuration, 5 voting behaviours, 56, 143 voting situation, 57 Weber, R. J., 43, 48 weighted majority rule, 8, 91, 94, 138 Weisberg, H. F., 48 Widgrén, M., 145 winning vote configuration, 5 Zwicker, W. S., 8, 46
Sum bottom n values with criteria

In this example, the goal is to sum the smallest n values in a set of data after applying specific criteria. In the worksheet shown, we want to sum the three smallest values, so n is equal to 3. At a high level, this problem breaks down into three separate steps:

1. Apply criteria to select specific values
2. Extract the 3 smallest values
3. Sum the 3 extracted values

This problem can be solved with a formula based on the FILTER function, the SMALL function, and the SUM function. For convenience, the range B5:C16 is an Excel Table named "data". This allows the formula to use structured references. Note: FILTER is a newer function not available in "Legacy Excel". See below for an alternative formula that works in older versions of Excel.

Example formula

The final formula in cell F5 is:

=SUM(SMALL(FILTER(data[Values],data[Group]="A"),{1,2,3}))

To explain how this formula works, we'll walk through the steps listed above. This means we will be working through the formula from the inside out. This is typical of Excel formulas where one function is nested inside another.

Apply criteria

The first step in the problem is to apply criteria to select values by group. This can be done with the FILTER function. To select values in group "A", we can use FILTER like this:

=FILTER(data[Values],data[Group]="A")

In this formula, array is provided as the Values column in the table, and the include argument provides logic to select only values in group "A". The result is a subset of the values where the group is "A", which is returned as an array like this:

{10;65;25;45;20;15}

If you are new to the FILTER function, see this video: Basic FILTER function example

Extract 3 smallest values

The next step in the problem is to extract the three smallest values from the array returned by FILTER. For this, we use the SMALL function. The SMALL function is designed to return the nth smallest value in a range. For example:

=SMALL(range,1) // 1st smallest
=SMALL(range,2) // 2nd smallest
=SMALL(range,3) // 3rd smallest

Normally, SMALL returns just one value.
However, if you supply an array constant (e.g. a constant in the form {1,2,3}) to SMALL as the second argument, k, SMALL will return an array of results instead of a single result. For example:

=SMALL(range,{1,2,3}) // smallest 3 values

Picking up where we left off, FILTER is used to collect values in group "A". The results returned by FILTER are delivered directly to the SMALL function like this:

SMALL({10;65;25;45;20;15},{1,2,3}) // returns {10,15,20}

The SMALL function then returns the 3 smallest values in an array: 10, 15, 20.

Sum smallest values

The final step in the problem is to sum the values extracted by the SMALL function. This is done with the SUM function:

=SUM({10,15,20}) // returns 45

SUM returns 45 as a final result, which is the sum of the smallest 3 values in group A.

When n becomes large

As n becomes a larger number, it becomes tedious to enter longer array constants like {1,2,3,4,5,6,7,8,9,10}, etc. In this case, you can use the SEQUENCE function to generate an array of sequential numbers automatically like this:

=SUM(SMALL(FILTER(values,criteria),SEQUENCE(n)))

Just replace n with the number of smallest values you want to extract:

=SUM(SMALL(FILTER(values,criteria),SEQUENCE(3))) // smallest 3
=SUM(SMALL(FILTER(values,criteria),SEQUENCE(5))) // smallest 5

Legacy Excel

In older versions of Excel that don't provide the FILTER function, you can use the IF function to "filter" data like this:

=SUM(SMALL(IF(data[Group]="A",data[Values]),{1,2,3}))

The behavior of this formula is much the same as the original formula above. The main difference is that the IF function returns values from group A when the group is "A", but it returns FALSE when the group does not match the supplied criteria. So the SMALL function receives an array from IF in which the group A values are mixed with FALSE values. Unlike the FILTER function, which returned just the six values associated with group A, IF returns an array that includes twelve results, one for each value in the original data.
However, because SMALL is programmed to automatically ignore the logical values TRUE and FALSE, the result from SMALL, {10,15,20}, is the same as before and the final result is correct: =SUM({10,15,20}) // returns 45 Note: this is an array formula and must be entered with Control + Shift + Enter in Legacy Excel.
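The filter → n-smallest → sum pipeline described above can also be sketched outside of Excel. Below is a minimal Python version; the group "A" values are the ones shown in the article ({10;65;25;45;20;15}), while the group "B" rows are invented to fill out the twelve-row table:

```python
import heapq

# Rows of a small table: (group, value). The group "A" values come from
# the article; the group "B" rows are made up for illustration.
data = [
    ("A", 10), ("B", 30), ("A", 65), ("B", 40),
    ("A", 25), ("B", 55), ("A", 45), ("B", 70),
    ("A", 20), ("B", 35), ("A", 15), ("B", 60),
]

def sum_bottom_n(rows, group, n):
    """Filter rows by group, take the n smallest values, and sum them."""
    values = [v for g, v in rows if g == group]  # like FILTER
    smallest = heapq.nsmallest(n, values)        # like SMALL with {1,...,n}
    return sum(smallest)                         # like SUM

print(sum_bottom_n(data, "A", 3))  # prints 45 (10 + 15 + 20)
```

As with the SEQUENCE variant, changing n here is a single argument rather than a longer array constant.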
If you bought a stock last year for a price of $145, and it has gone down 4% since then, how m...

Solved on Nov 07, 2023

Find the current value of a stock bought for $145 that has decreased by 4% since purchase.

STEP 1

Assumptions
1. The initial price of the stock was $145
2. The stock has decreased in value by 4%
3. We want to find the current value of the stock to the nearest cent

STEP 2

First, we need to find the amount of decrease in the stock value. We can do this by multiplying the initial price of the stock by the decrease rate.

$\text{Decrease in value} = \text{Initial price} \times \text{Decrease rate}$

STEP 3

Now, plug in the given values for the initial price and decrease rate to calculate the decrease in value.

$\text{Decrease in value} = \$145 \times 4\%$

STEP 4

Convert the percentage to a decimal value.

$4\% = 0.04$

$\text{Decrease in value} = \$145 \times 0.04$

STEP 5

Calculate the decrease in value.

$\text{Decrease in value} = \$145 \times 0.04 = \$5.80$

STEP 6

Now that we have the decrease in value, we can find the current value of the stock. This is done by subtracting the decrease in value from the initial price.

$\text{Current value} = \text{Initial price} - \text{Decrease in value}$

STEP 7

Plug in the values for the initial price and the decrease in value to calculate the current value.

$\text{Current value} = \$145 - \$5.80$

STEP 8

Calculate the current value of the stock.

$\text{Current value} = \$145 - \$5.80 = \$139.20$

The stock is now worth $139.20.
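The same arithmetic can be checked with a few lines of Python, following the steps above with plain floats:

```python
initial_price = 145.00
decrease_rate = 0.04  # 4% written as a decimal

# STEPS 2-5: decrease in value = initial price x decrease rate
decrease_in_value = initial_price * decrease_rate

# STEPS 6-8: current value = initial price - decrease in value
current_value = initial_price - decrease_in_value

print(f"Decrease in value: ${decrease_in_value:.2f}")  # $5.80
print(f"Current value: ${current_value:.2f}")          # $139.20
```

For serious currency work, `decimal.Decimal` avoids binary floating-point rounding; for a single multiplication and subtraction like this, formatting to the nearest cent is enough.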
A matrix of size #patterns by #variables containing the weights that will be used to calculate the weighted sum scores. Equal weights are given to all variables. When mechanism is MAR, variables that will be amputed will be weighted with 0. If it is MNAR, variables that will be observed will be weighted with 0. If mechanism is MCAR, the weights matrix will not be used. A default MAR matrix will be returned.
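As a rough sketch of the behaviour described above (this mimics the documentation, not mice's actual implementation), a default weights matrix can be derived from a patterns matrix; as in ampute's patterns argument, a 0 marks a variable that will be amputed and a 1 a variable that stays observed:

```python
def default_weights(patterns, mechanism):
    """Build a default weights matrix from a patterns matrix.

    patterns: list of rows; 0 marks a variable that will be amputed,
    1 marks a variable that stays observed (as in mice::ampute).
    Sketch of the documented behaviour, not mice's actual code.
    """
    if mechanism == "MCAR":
        # Weights are not used under MCAR; return a default MAR matrix.
        mechanism = "MAR"
    if mechanism == "MAR":
        # Missingness depends only on observed variables:
        # zero weight on the variables that will be amputed.
        return [[1 if cell == 1 else 0 for cell in row] for row in patterns]
    if mechanism == "MNAR":
        # Missingness depends on the unobserved values:
        # zero weight on the variables that stay observed.
        return [[1 if cell == 0 else 0 for cell in row] for row in patterns]
    raise ValueError("mechanism must be MCAR, MAR or MNAR")

patterns = [[0, 1, 1],   # ampute variable 1, keep 2 and 3
            [1, 0, 1]]   # ampute variable 2, keep 1 and 3
print(default_weights(patterns, "MAR"))   # [[0, 1, 1], [1, 0, 1]]
print(default_weights(patterns, "MNAR"))  # [[1, 0, 0], [0, 1, 0]]
```

The nonzero entries are all equal (weight 1), matching the "equal weights are given to all variables" default; in ampute itself, the weights can then be customized row by row.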
Where can I find help with understanding and implementing algorithms for machine learning in data structures for speech synthesis? The tools include: Vector NN_C, Recurrent Neural Networks, Spatial Attention Networks. There is also the tool TensorFlow, which is recommended for N-level solutions. The topic is titled Over-fitting and Backprojecting. # Introduction Ever since I started studying the neural toolset for machine learning, I began to ask myself the questions I could answer. Things like “What does it mean for neural programming? What is the most efficient, relevant, and accurate way to compute neural programs? Is the output of a graph only useful for its own internal usage? What if instead of dealing with it we want to focus on optimizing the computation of information about processes? How does this come about?” We should all eventually end up with a machine learning framework whose task is to generate a model and an efficient way to measure and interpret data. To top it all, I have made it applicable in almost every domain where current state-of-the-art models exist. Natives, also called variadistics, are knowledge bases for their inputs and outputs; they represent some common types of knowledge bases, which pose serious challenges for computing and modeling problems. However, one of the most promising techniques that Natives have come up with is a self-assessment approach that you can take. For example, researchers, authors, and even research institutions might question what’s out there: “What does this mean for me in terms of building up a large machine learning network?
” They might think about all the different problems that arise when implementing this kind of framework, looking for ways to reduce dependencies between the data structure and this model. What exactly is this method for? According to what I have seen, it’s an N-RNN/DNN approach that can be beneficial for understanding how to perform neural models with output in different ways.

## Representing Real

Where can I find help with understanding and implementing algorithms for machine learning in data structures for speech synthesis?

Introduction Of course there are many ways for a compiler to work with many kinds of objects in a data structure, although there is nothing specially exciting in implementing any of those. Sometimes you may have simple objects (my example is in the examples supplied by @scottby) and an object of another type (perhaps in other cases you have multiple, like the example in the examples mentioned above) and then you want to know how to solve the computation. More generally, there are essentially two approaches to building your own memory. The one is, when your data is structured as a product of several functions, the way you think about it becomes something like this (don’t force yourself to think too much about the result until you have had a chance to look it up). Your compiler is just like this: sometimes you might think, when you wrote your methods to represent these functions as a product of few things. Therefore you might think, many times: it is really useless to process them all together. If you combine all different types of objects, and sometimes all of them will be simple objects, such as functions with many parameters, then your compiler comes in handy. But there are a great many ways that you can implement a compiler to try and test your own functions in your data structure.
In the case where you use an object of several types in your example, when you are at the point of building a correct function from multiple functions, it gives an error, ‘unexpected token’. This error should not come until you have an object of one and done construct a new one of many. At the same time, your compiler would evaluate and ensure the solution of the error before any comparisons that make any difference are made. If you think about it, then your next example is as you have the compiler as you are writing your code, and knowing how to deal with both and just being able to test it when you should.

Where can I find help with understanding and implementing algorithms for machine learning in data structures for speech synthesis?

There come a lot of questions, so I will gather around my own. I have done many things before. I’ve read some great papers on this but none of them actually gives a method for dealing with learning the underlying data structure for speech synthesis using machine graphs. It would be really good to find an algorithm that applies to neural networks in data structures. This is amazing. I don’t know much about speech synthesis but I’m still very excited for my current idea. [Here’s something different for me] Also you say it doesn’t use graph primitives but is called a graph graph; I am trying to develop an idea why. But what are the features of a graph, or graph primitives? I cannot find any detail on this. What do you mean by graph primitives? And if you look at it at most 100%, you’d see a few nodes. And from the top you can see here is a bit of a hard-coded graph. [A couple of lines removed] Thanks, but this is just a starting point I guess. So I have read many posts.
Unfortunately that was not enough for me, so I went the route of writing up a paper and a web site you can download, which gives the base case for the research question; I will put the details up near the end. Based on the paper you mentioned, the approach as written will not work. I am not saying that you can't solve this problem, but it's just going to take some time! I'll be looking at it a bit more regularly. So let's consider the research question of speech synthesis in machine learning. I have read a paper by Jean Descelles on this. He says that in speech synthesis we have not understood any
When quoting this document, please refer to the following DOI: 10.4230/LIPIcs.ICALP.2016.12 URN: urn:nbn:de:0030-drops-62956 URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2016/6295/ Ito, Tsuyoshi ; Jeffery, Stacey Approximate Span Programs Span programs are a model of computation that have been used to design quantum algorithms, mainly in the query model. It is known that for any decision problem, there exists a span program that leads to an algorithm with optimal quantum query complexity, however finding such an algorithm is generally challenging. In this work, we consider new ways of designing quantum algorithms using span programs. We show how any span program that decides a problem f can also be used to decide "property testing" versions of the function f, or more generally, approximate a quantity called the span program witness size, which is some property of the input related to f. For example, using our techniques, the span program for OR, which can be used to design an optimal algorithm for the OR function, can also be used to design optimal algorithms for: threshold functions, in which we want to decide if the Hamming weight of a string is above a threshold, or far below, given the promise that one of these is true; and approximate counting, in which we want to estimate the Hamming weight of the input up to some desired accuracy. We achieve these results by relaxing the requirement that 1-inputs hit some target exactly in the span program, which could potentially make design of span programs significantly easier. In addition, we give an exposition of span program structure, which increases the general understanding of this important model. One implication of this is alternative algorithms for estimating the witness size when the phase gap of a certain unitary can be lower bounded. We show how to lower bound this phase gap in certain cases. 
As an application, we give the first upper bounds in the adjacency query model on the quantum time complexity of estimating the effective resistance between s and t, R_{s,t}(G). For this problem we obtain ~O((1/epsilon^{3/2})*n*sqrt(R_{s,t}(G))), using O(log(n)) space. In addition, when mu is a lower bound on lambda_2(G), by our phase gap lower bound, we can obtain an upper bound of ~O((1/epsilon)*n*sqrt(R_{s,t}(G)/mu)) for estimating effective resistance, also using O(log(n)) space.

BibTeX - Entry

author = {Tsuyoshi Ito and Stacey Jeffery},
title = {{Approximate Span Programs}},
booktitle = {43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
pages = {12:1--12:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-013-2},
ISSN = {1868-8969},
year = {2016},
volume = {55},
editor = {Ioannis Chatzigiannakis and Michael Mitzenmacher and Yuval Rabani and Davide Sangiorgi},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2016/6295},
URN = {urn:nbn:de:0030-drops-62956},
doi = {10.4230/LIPIcs.ICALP.2016.12},
annote = {Keywords: Quantum algorithms, span programs, quantum query complexity, effective resistance}

Keywords: Quantum algorithms, span programs, quantum query complexity, effective resistance
Collection: 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)
Issue Date: 2016
Date of publication: 23.08.2016
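As context for the quantity the abstract discusses, effective resistance R_{s,t}(G) can be computed classically for small graphs by grounding node t and solving the reduced Laplacian system. This is my own illustrative sketch (unit resistances, exact arithmetic via `fractions`), not code from the paper:

```python
from fractions import Fraction

def effective_resistance(n, edges, s, t):
    # Build the graph Laplacian, treating every edge as a unit resistor.
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # Ground node t: delete its row and column, then solve L' x = e_s
    # (one unit of current injected at s). Then R_{s,t} = x[s].
    keep = [i for i in range(n) if i != t]
    A = [[L[i][j] for j in keep] for i in keep]
    b = [Fraction(int(i == s)) for i in keep]
    m = len(A)
    # Gaussian elimination; the reduced Laplacian of a connected graph is
    # positive definite, so the pivots are never zero.
    for col in range(m):
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [Fraction(0)] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[keep.index(s)]

# Two unit resistors in series: R = 2.
print(effective_resistance(3, [(0, 1), (1, 2)], 0, 2))       # 2
# A triangle: paths of resistance 1 and 2 in parallel, so R = 2/3.
print(effective_resistance(3, [(0, 1), (1, 2), (0, 2)], 0, 2))  # 2/3
```

The point of the paper is that a quantum algorithm can *estimate* this quantity with far fewer adjacency queries than such an exact classical solve requires.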
Attic Volume Calculator

About Attic Volume Calculator (Formula)

The attic volume calculator is a tool used to estimate the volume of an attic space, providing valuable information for insulation, ventilation, or storage purposes. It helps homeowners, contractors, or architects understand the capacity of the attic and plan accordingly. The formula for calculating the attic volume depends on the shape of the attic:

1. Rectangular Attic: For a rectangular attic, the formula to calculate the volume is:

Volume = Length × Width × Height

Where:
Volume is the attic volume
Length is the length of the attic
Width is the width of the attic
Height is the height or clearance of the attic space

2. Irregular Attic: If the attic has an irregular shape, it can be divided into multiple geometric shapes (e.g., rectangles, triangles) and then the volumes of these shapes can be calculated separately. The total attic volume is the sum of the volumes of the individual shapes.

It's important to note that the measurements used in the attic volume calculator should be in the same unit of measurement (e.g., feet, meters). Accurate measurements are crucial to obtain a reliable estimation of the attic volume.

The attic volume calculator is helpful in determining the available space in the attic for insulation materials, ventilation systems, or storage purposes. It allows homeowners or professionals to make informed decisions regarding the attic's utilization, ensuring optimal functionality and efficiency.
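The two formulas above can be sketched in a few lines of Python. The function names and example dimensions are my own, invented for illustration; any consistent unit works:

```python
def rectangular_attic_volume(length, width, height):
    """Volume = Length x Width x Height."""
    return length * width * height

def irregular_attic_volume(sections):
    """An irregular attic split into simple sections: the total volume is the
    sum of the section volumes, each given as a (length, width, height) tuple."""
    return sum(l * w * h for l, w, h in sections)

# Example: a 20 ft x 30 ft attic with 6 ft of clearance.
print(rectangular_attic_volume(20, 30, 6))                  # 3600 cubic feet
# The same attic plus a 10 ft x 12 ft x 4 ft dormer section.
print(irregular_attic_volume([(20, 30, 6), (10, 12, 4)]))   # 4080 cubic feet
```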
TGD diary

Comparing Gisin's intuitionistic mathematics with adelic physics

Nikolina Benedikovic had an interesting post commenting on the work of physicist Nicolas Gisin. Gisin makes several strange-looking statements.

1. Gisin states that physicists apply classical physics, which is deterministic. This is of course true. They however apply also quantum physics, which involves non-deterministic state function reduction in conflict with the determinism of the Schrödinger equation but is necessary for the interpretation of experiments. Statistical determinism is assumed but requires the notion of an ensemble.

2. Gisin claims that numbers with an infinite number of decimals involve an infinite amount of information. This is not the case in general. For instance, if the decimals obey some formula the information is finite. Also rationals have an infinite number of decimals, but the sequence of decimals is periodic so that the information content can be said to be finite.

3. Gisin claims that the world is finite. Presumably he means that the world is discrete and consists of a finite number of points. This picture leads to insurmountable difficulties in practice. I believe that it is not the world which is discrete but the cognitive representations about it. They are always discrete and in computer science based on the use of finite rational numbers reducing to pairs of integers. What is wrong with present-day physics is that cognition together with consciousness is kept out from consideration. Finite measurement resolution is described in an ad hoc manner.

4. Intuitionistic mathematics is the proposal of Gisin, in which everything is finite. A finite number of decimals brings in indeterminism, and Gisin argues that this number increases with time - a kind of evolution.

It is interesting to compare this with the TGD view.

1. Finite measurement resolution instead of a finite world looks to me a more realistic option.
Classical realities - say space-time surfaces in TGD - are continua, but our observations are always discrete because of finite observational and cognitive resolution.

2. Gisin's approach excludes algebraic numbers. In TGD also algebraic numbers are allowed - essentially as roots of polynomials - and they are represented geometrically, say sqrt(2) as the length of the diagonal of a unit square. Geometric representations complete the linear representations of numbers based on sequences of digits. This corresponds to the reductionistic-holistic dichotomy or left brain-right brain dichotomy.

3. Cognition has as correlates p-adic numbers and their extensions induced by those of rationals: one can speak of p-adic variants of space-time surfaces. This leads to what I call adelic physics (see also this). Cognitive representations correspond to points of the space-time surface common to both real and p-adic space-time surfaces, with preferred imbedding space coordinates (by symmetries) having values in an extension of rationals and making sense in all number fields involved. They are essentially unique for a given extension, and the representation is in the generic case discrete and even finite. These unique discretizations in the intersection of reality and various p-adicities are the TGD counterpart for intuitionism.

Remark: Interestingly, the extension of rationals by powers of the Neper number e or its roots induces a finite-dimensional extension of p-adic numbers, so that also roots of e could be allowed in cognitive representations with finite resolution, just like roots of unity. They would be very exceptional transcendentals.

4. In the TGD framework cognitive resolution is characterized by the number N(p) of pinary digits and the integer n. The mathematics of cognition is discrete and analogous to intuitionistic mathematics. n measures algebraic complexity and is a kind of IQ.
n actually corresponds to the effective Planck constant h_eff = n*h_0 and measures the quantum coherence scale in the TGD framework, so that a direct connection with quantum physics, allowing dark matter in the TGD sense, emerges.

5. Gisin compares the increase of decimals to a process analogous to evolution. In TGD evolution would reduce to an unavoidable increase of n and N(p).

To sum up, the indeterminism about which Gisin talks would thus correspond to finite measurement and cognitive resolution in the TGD framework. This indeterminism is in a certain sense a correlate also for quantum non-determinism. For instance, the geometric time order of "small" state function reductions (weak measurements) in events in zero energy ontology can vary, and this variation corresponds classically to the lack of well-ordering for p-adic numbers. Indeed, as one types text one often finds that the experienced order of digits as a sequence of small state function reductions is different from that of the outcome representing the corresponding sequence of moments of geometric time: you experience typing "outcome" but the result is "outocme"! A neuroscientist would of course invent other explanations.

See this and this.

For a summary of earlier postings see Latest progress in TGD.
What is 1/3 as a decimal?

To convert 1/3 into 0.333, a student must understand why and how. Decimals and fractions represent parts of numbers, giving us the ability to represent smaller numbers than the whole. In some cases fractions make more sense, e.g., cooking or baking, and in other situations decimals make more sense, as in leaving a tip or purchasing an item on sale. If we need to convert a fraction quickly, let's find out how and when we should.

1/3 is 1 divided by 3

Teaching students how to convert fractions uses long division. The great thing about fractions is that the equation is already set for us! Fractions have two parts: numerators on the top and denominators on the bottom, with a division symbol between - or 1 divided by 3. We must divide 1 by 3 to find out how many whole parts it will have, plus represent the remainder in decimal form. This is our equation:

Numerator: 1
• Numerators are the number of parts to the equation, shown above the vinculum or fraction bar. With a value of 1, you will have less complexity to the equation; however, it may not make converting any easier. The bad news is that it's an odd number, which makes it harder to convert in your head. Ultimately, having a small value may not make your fraction easier to convert. So how does our denominator stack up?

Denominator: 3
• Denominators are the total numerical value for the fraction and are located below the fraction line or vinculum. Smaller values like 3 can sometimes make mental math easier. But the bad news is that odd numbers are tougher to simplify. Unfortunately, an odd denominator is difficult to simplify unless it's divisible by 3, 5 or 7. Above all, smaller values could make the conversion a bit simpler.

Next, let's go over how to convert 1/3 to 0.333.

How to convert 1/3 to 0.333

Step 1: Set your long division bracket: denominator / numerator

$$ \require{enclose} 3 \enclose{longdiv}{ 1 } $$

Use long division to solve step one.
Yep, same left-to-right method of division we learned in school. This gives us our first clue.

Step 2: Extend your division problem

$$ \require{enclose} 00. \\ 3 \enclose{longdiv}{ 1.0 } $$

Uh oh. 3 cannot be divided into 1. So that means we must add a decimal point and extend our equation with a zero. Now 3 will be able to divide into 10.

Step 3: Solve for how many whole groups of 3 you can divide into 10

$$ \require{enclose} 00.3 \\ 3 \enclose{longdiv}{ 1.0 } $$

3 divides into 10 three whole times, so the first digit of our solution is 3. Multiply 3 × 3 = 9; this is the number we subtract in the next step.

Step 4: Subtract the remainder

$$ \require{enclose} 00.3 \\ 3 \enclose{longdiv}{ 1.0 } \\ \underline{ 9 \phantom{00} } \\ 1 \phantom{0} $$

If your remainder is zero, that's it! If you have a remainder of 3 or more, go back - your solution will need a bit of adjustment. If you have a number less than 3, continue!

Step 5: Repeat step 4 until you have no remainder or reach a decimal place you feel comfortable stopping at. Then round to the nearest digit. For 1/3 the remainder is always 1, so the digit 3 repeats forever: 1/3 = 0.3333…, which we round to 0.333. In some cases, you'll never reach a remainder of zero. Looking at you, pi! And that's okay. Find a place to stop and round to the nearest value.

Why should you convert between fractions, decimals, and percentages?

Converting fractions into decimals is used in everyday life, though we don't always notice. Remember, they represent numbers and comparisons of whole numbers to show us parts of integers. This is also true for percentages. Though we sometimes overlook the importance of when and how they are used and think they are reserved for passing a math quiz, they all represent how numbers show us value in the real world. Without them, we're stuck rounding and guessing. Here are real life examples:

When you should convert 1/3 into a decimal

Dollars & Cents - It would be silly to use 1/3 of a dollar, but it makes sense to have $0.33. USD is exclusively decimal format and not fractions.
(Yes, yes, there was a 'half dollar' but the value is still $0.50)

When to convert 0.333 to 1/3 as a fraction

Time - spoken time is used in many forms. But we don't say it's '2.5 o'clock'. We'd say it's 'half past two'.

Practice Decimal Conversion with your Classroom
• If 1/3 = 0.333, what would it be as a percentage?
• What is 1 + 1/3 in decimal form?
• What is 1 - 1/3 in decimal form?
• If we switched the numerator and denominator, what would be our new fraction?
• What is 0.333 + 1/2?

Convert more fractions to decimals

From 1 Numerator:
What is 1/4 as a decimal? What is 1/5 as a decimal? What is 1/6 as a decimal? What is 1/7 as a decimal? What is 1/8 as a decimal? What is 1/9 as a decimal? What is 1/10 as a decimal? What is 1/11 as a decimal? What is 1/12 as a decimal? What is 1/13 as a decimal? What is 1/14 as a decimal? What is 1/15 as a decimal? What is 1/16 as a decimal? What is 1/17 as a decimal? What is 1/18 as a decimal? What is 1/19 as a decimal? What is 1/20 as a decimal? What is 1/21 as a decimal? What is 1/22 as a decimal? What is 1/23 as a decimal?

From 3 Denominator:
What is 2/3 as a decimal? What is 3/3 as a decimal? What is 4/3 as a decimal? What is 5/3 as a decimal? What is 6/3 as a decimal? What is 7/3 as a decimal? What is 8/3 as a decimal? What is 9/3 as a decimal? What is 10/3 as a decimal? What is 11/3 as a decimal? What is 12/3 as a decimal? What is 13/3 as a decimal? What is 14/3 as a decimal? What is 15/3 as a decimal? What is 16/3 as a decimal? What is 17/3 as a decimal? What is 18/3 as a decimal? What is 19/3 as a decimal? What is 20/3 as a decimal? What is 21/3 as a decimal?
Convert similar fractions to percentages

From 1 Numerator:
1/4 as a percentage, 1/5 as a percentage, 1/6 as a percentage, 1/7 as a percentage, 1/8 as a percentage, 1/9 as a percentage, 1/10 as a percentage, 1/11 as a percentage, 1/12 as a percentage, 1/13 as a percentage

From 3 Denominator:
2/3 as a percentage, 3/3 as a percentage, 4/3 as a percentage, 5/3 as a percentage, 6/3 as a percentage, 7/3 as a percentage, 8/3 as a percentage, 9/3 as a percentage, 10/3 as a percentage, 11/3 as a percentage
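The long-division steps above can be sketched directly in code: multiply the remainder by 10 and take whole groups of the denominator, one decimal digit at a time. The function name is my own; note that this version truncates at the requested number of digits rather than rounding the last one:

```python
def fraction_to_decimal(numerator, denominator, digits=3):
    # Steps 1-2: how many whole parts, and what remainder is left over?
    whole, remainder = divmod(numerator, denominator)
    result = str(whole) + "."
    for _ in range(digits):
        remainder *= 10                                    # extend with a zero
        digit, remainder = divmod(remainder, denominator)  # Steps 3-4
        result += str(digit)
    return result

print(fraction_to_decimal(1, 3))   # 0.333 (the remainder 1 repeats forever)
print(fraction_to_decimal(1, 4))   # 0.250 (remainder reaches zero)
```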
Counting on Domains: The Influence of Mathematics in Different Fields

By AIContentfy team · 9 minute read

Numbers and equations might seem confined to the realm of math classrooms, but their influence stretches far beyond. From business strategies and computer programming to medicine and architecture, mathematics weaves its numerical magic into countless domains. Behind the scenes, mathematical concepts provide the building blocks for a wide range of fields, shaping the world in profound but often unnoticed ways. Join us on a journey as we explore the unexpected and captivating ways mathematics shapes various disciplines, exposing the hidden power of numbers in our everyday lives.

Overview of Mathematics

Mathematics encompasses various domains that play a significant role in different fields. The study of algebra allows for problem-solving and calculations in disciplines such as computer science and finance. Geometry provides the foundation for architectural and engineering designs, aiding in construction and spatial layout. Probability and statistics aid in data analysis and research, contributing to advancements in fields like medical research. The diverse applications of mathematics extend beyond traditional boundaries. For instance, artificial intelligence benefits from mathematical algorithms and decision-making processes. Economics leverages mathematical models for market analysis and resource optimization. Physics relies on mathematical techniques for quantum mechanics and astrophysical calculations. The versatility of mathematics makes it a powerful tool across numerous domains, revolutionizing industries and expanding our understanding of the world.

Domains of Mathematics and their Significance

Algebra, a fundamental domain of mathematics, holds significant influence in various fields.
In computer science, algebraic concepts are employed to develop encryption algorithms, ensuring data security. Financial analysts utilize algebraic equations to determine future market trends and make informed investment decisions. The ability to manipulate algebraic expressions aids in solving complex engineering problems, enabling the construction of efficient structures. Moreover, algebra plays a crucial role in optimizing supply chain networks, enhancing operational efficiency for businesses. Whether in data analysis, finance, engineering, or logistics, a sound understanding of algebra empowers professionals to solve real-world problems and make informed decisions across diverse fields.

--Application in Computer Science--

• Algebra, a fundamental domain of mathematics, plays a significant role in computer science.
• Algebraic concepts are applied in encryption algorithms, ensuring secure communication.
• Boolean algebra underlies the design and optimization of digital circuits, enabling efficient computing.
• Linear algebra is utilized in computer graphics for manipulating and rendering images.
• Graph theory and algorithms are essential in network optimization and routing protocols.
• Mathematical optimization techniques, such as linear programming, solve resource allocation and scheduling problems in computer systems.
• Probability theory forms the basis of machine learning algorithms, aiding pattern recognition and data analysis.
• Discrete mathematics provides the foundation for algorithm design and analysis, influencing computational complexity and efficiency.

Computer science relies on various branches of mathematics to solve complex problems, design efficient algorithms, and enable cutting-edge technologies.

Utilization in Finance

In the domain of mathematics, finance stands as a striking example of its practical utilization.
Mathematics plays a pivotal role in various financial applications, providing insights and tools for efficient decision-making. Here are some ways mathematics is utilized in finance:

• Risk analysis: Mathematical models such as the Black-Scholes formula enable the estimation of financial risk and aid in portfolio management.
• Option pricing: Mathematical pricing models determine the value of options, enabling investors to make informed choices.
• Data analysis: Statistical techniques and mathematical models help in analyzing market trends, predicting stock prices, and identifying investment opportunities.
• Optimization: Mathematical optimization techniques are leveraged to maximize returns or minimize risks within constraints.

By applying mathematical principles, finance professionals gain valuable insights and can make informed decisions in the complex and ever-changing world of financial markets.

Geometry, as one of the domains of mathematics, offers practical applications in various fields. In architecture and design, geometric principles are fundamental for creating aesthetically pleasing structures and optimizing space utilization. For instance, architects use geometric concepts to design efficient floor plans and ensure structural stability. In construction and engineering, geometry plays a crucial role in measuring angles, distances, and volumes, enabling accurate calculations for constructing buildings, bridges, and other structures. Additionally, geometric principles are applied in computer graphics and animation, enabling realistic visual simulations in movies and video games. The practicality of geometry extends beyond theory, providing tangible solutions in real-world scenarios.

Role in Architecture and Design

Mathematics plays a significant role in architecture and design, providing the foundation for structural analysis and geometric principles.
Architects use mathematical principles to create precise measurements and proportions, ensuring the stability and functionality of buildings. For example, mathematical concepts like symmetry and tessellation are used to achieve aesthetically appealing designs. Additionally, mathematical techniques such as computer-aided design (CAD) software enable architects to visualize and simulate architectural plans before construction. By applying mathematical principles, architects can optimize designs, create efficient structures, and bring innovative ideas to life in the field of architecture and design.

Influence in Construction and Engineering

The domain of mathematics plays a significant role in the fields of construction and engineering. By utilizing principles of geometry, architects and engineers can design structures with precise measurements and ensure structural integrity. Mathematical calculations enable accurate analysis of loads, stresses, and material strength, aiding in the construction of safe and efficient buildings, bridges, and tunnels. Additionally, mathematical modeling assists in simulating and predicting the behavior of complex systems, such as fluid flows or structural vibrations, thus enabling engineers to optimize designs and minimize risks. Mathematics empowers construction professionals to translate abstract concepts into practical applications, resulting in well-designed and durable structures.

Probability and Statistics

The domain of mathematics known as probability and statistics is highly influential in various fields. It enables researchers and analysts to make informed decisions based on data analysis and patterns. In finance, probability models aid in risk assessment and portfolio management. In medical research, statistical analysis helps determine the effectiveness of treatments and identify significant trends. Moreover, data scientists utilize statistical techniques to extract insights from large datasets for business intelligence and decision-making.
From predicting customer behavior to optimizing marketing campaigns, probability and statistics provide valuable tools for evidence-based decision-making in numerous domains. Application in Data Analysis and Research Domain of mathematics: Data Analysis and Research In the field of data analysis and research, mathematics serves as a powerful tool for extracting valuable insights from complex datasets. Mathematical techniques enable researchers to identify patterns, make predictions, and draw conclusions. For instance, statistical analysis allows for the examination of relationships between variables and the identification of significant trends. Moreover, mathematical models aid in the analysis of large-scale data, facilitating the identification of correlations and the development of predictive algorithms. In fields such as healthcare, finance, and environmental science, mathematical data analysis plays a crucial role in informing decision-making processes and driving advancements. By harnessing mathematical principles, researchers can gain deeper understanding and derive actionable insights from their data. Importance in Medical Research Mathematics plays a significant role in medical research by providing crucial insights and tools for data analysis and modeling. Some important aspects include: • Statistical analysis: Mathematicians use statistical methods to examine large datasets and identify patterns or correlations that can inform medical research. • Epidemiology modeling: Mathematical models help researchers understand the spread of diseases, estimate the impact of interventions, and plan healthcare resources. • Pharmacokinetics: Mathematical modeling helps determine optimal drug dosages and understand drug absorption, distribution, metabolism, and excretion. • Image analysis: Algorithms based on mathematical principles enable the analysis and interpretation of medical images, assisting in diagnosis and treatment planning. 
By leveraging these mathematical techniques, medical researchers can extract meaningful information from complex data, improve understanding of diseases, and enhance medical interventions and patient care.

Cross-Disciplinary Applications of Mathematics

Artificial Intelligence

### Domains of Mathematics: Artificial Intelligence

Artificial Intelligence (AI) heavily relies on various domains of mathematics to develop advanced algorithms and models. One of the key areas is --linear algebra--, which plays a fundamental role in AI applications such as image processing and machine learning. Additionally, --probability theory-- enables AI systems to handle uncertainty and make predictions with confidence intervals. Techniques from --calculus-- aid in optimizing AI algorithms and training deep neural networks. Moreover, --graph theory-- helps represent and analyze complex relationships in AI-driven networks.

--Machine Learning Algorithms--

Machine learning algorithms, a significant field within the domains of mathematics, underpin the development of intelligent systems capable of learning and making predictions without explicit programming. Their use spans various industries, driving advancements in areas such as healthcare, finance, and technology. Some insights on machine learning algorithms include:

• Classification algorithms: These algorithms categorize data into different classes based on patterns and characteristics.
• Regression algorithms: They predict numerical values based on historical data patterns.
• Clustering algorithms: These algorithms group data points based on similarities.
• Recommendation algorithms: They suggest personalized options based on user preferences.

For example, in the healthcare sector, machine learning algorithms can assist in diagnosing diseases based on patient symptoms and medical records, enabling early intervention and personalized treatment plans.
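To make the classification idea concrete, here is a toy nearest-neighbour classifier in plain Python. All names and data points below are invented for illustration; a real system would use a proper ML library and far more data:

```python
import math

def nearest_neighbor(train, point):
    """train: list of (features, label) pairs; point: feature tuple.
    Label the new point with the class of its closest training example."""
    features, label = min(train, key=lambda fl: math.dist(fl[0], point))
    return label

# Made-up two-feature training data with two classes.
train = [((1.0, 1.0), "healthy"), ((1.2, 0.9), "healthy"),
         ((5.0, 4.8), "at-risk"), ((4.7, 5.1), "at-risk")]

print(nearest_neighbor(train, (1.1, 1.0)))  # healthy
print(nearest_neighbor(train, (4.9, 5.0)))  # at-risk
```

This is the simplest instance of the "classify by similarity" pattern; production classifiers (decision trees, neural networks, etc.) learn more elaborate decision boundaries from the same kind of labelled data.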
Algorithmic Decision Making --Algorithmic Decision Making in Domains of Mathematics-- Algorithmic decision making, a core concept in many domains of mathematics, empowers organizations and individuals to automate complex choices using mathematical models and algorithms. This approach offers several advantages: 1. --Efficiency:-- Algorithmic decision making streamlines processes by automating repetitive tasks, reducing human error, and optimizing resource allocation. 2. --Precision:-- Mathematical algorithms enable precise analysis and prediction, enhancing decision-making accuracy. 3. --Scalability:-- Algorithms can handle vast amounts of data and make rapid decisions at scale, making them valuable in fields like finance, healthcare, and logistics. 4. --Risk Assessment:-- Mathematical models in algorithmic decision making assist in risk assessment and mitigation, ensuring informed and strategic choices. For instance, in the financial sector, algorithmic trading software relies on mathematical algorithms to analyze market trends and execute trades swiftly. Similarly, in healthcare, algorithms aid in diagnosis and treatment planning by analyzing patient symptoms, medical records, and research data. Algorithmic decision making continues to revolutionize various domains, offering powerful tools for more efficient and informed choices. Economics heavily relies on various domains of mathematics to analyze and solve complex problems. Mathematical tools like calculus, optimization techniques, and statistical models play a significant role in economic research and analysis. For instance, market analysis and forecasting utilize statistical methods to evaluate trends and make predictions. In addition, mathematical optimization helps in resource allocation and maximizing efficiency within economic systems. Furthermore, game theory and decision theory aid in understanding strategic interactions and making informed choices in economic decision-making. 
The integration of mathematics in economics provides valuable insights and helps in formulating effective policies and strategies.

### Market Analysis and Forecasting

Market analysis and forecasting heavily rely on the domains of mathematics. By utilizing statistical models and probability theory, analysts can make informed predictions about market trends and fluctuations. These mathematical techniques allow for the identification of patterns and correlations in historical data, aiding in the estimation of future market behavior. For example, time series analysis helps forecast stock prices or predict consumer demand. Additionally, mathematical optimization models can optimize resource allocation in the face of limited budgets and constraints. By harnessing mathematical tools, market analysts gain a quantitative edge in decision-making, enabling them to adapt to dynamic market conditions and identify profitable opportunities.

### Optimization of Resource Allocation

Mathematics plays a crucial role in optimizing resource allocation across various domains. By employing mathematical models and techniques, organizations can efficiently allocate limited resources to maximize their utilization and achieve desired outcomes. For instance, in the field of economics, mathematical optimization models help businesses determine the optimal distribution of resources to maximize profitability. Similarly, supply chain management utilizes mathematical optimization to minimize costs and streamline logistics. By leveraging mathematical concepts such as linear programming and network optimization, industries can make informed decisions regarding resource allocation that result in increased efficiency and improved overall performance.

### Physics

In the field of physics, mathematics serves as a powerful tool enabling scientists to describe and understand complex phenomena.
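The resource-allocation use of linear programming mentioned above can be made concrete with a toy model. All numbers below (products, profits, resource limits) are invented for illustration; a real problem would use a dedicated LP solver, but for a tiny integer instance a brute-force search is enough to show the idea:

```python
# Toy resource-allocation problem (all numbers are hypothetical):
# product A earns 5 per unit, product B earns 4 per unit.
# Labor:    A needs 1 h/unit, B needs 2 h/unit, 10 h available.
# Material: A needs 3 /unit,  B needs 1 /unit,  12 available.
# Maximize profit over integer production plans.

def best_allocation(labor=10, material=12):
    best = (0, 0, 0)  # (profit, units_of_a, units_of_b)
    for a in range(labor + 1):
        for b in range(labor + 1):
            if a + 2 * b <= labor and 3 * a + b <= material:
                profit = 5 * a + 4 * b
                if profit > best[0]:
                    best = (profit, a, b)
    return best

print(best_allocation())  # the optimal plan for this toy instance
```

For larger or continuous problems, the same model would be handed to a simplex or interior-point solver rather than enumerated.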
Mathematical models play a significant role in various areas of physics, such as quantum mechanics and astrophysics. For instance, mathematical equations help determine the trajectories of celestial bodies or predict particle behavior at the quantum level. Additionally, mathematical concepts like calculus and differential equations provide the framework for solving physical problems. These mathematical techniques allow physicists to make accurate predictions and formulate theories that guide experimental research. By leveraging mathematics, physicists gain valuable insights into the fundamental workings of the universe, leading to advancements in technology and our understanding of the natural world.

### Mathematical Models in Quantum Mechanics

Mathematical models are extensively used in understanding and describing the intricacies of quantum mechanics. These models provide a framework to predict and analyze the behavior of atomic and subatomic particles. By utilizing mathematical concepts such as linear algebra and differential equations, scientists can calculate probabilities, describe wave functions, and make predictions about particle interactions. For example, the Schrödinger equation is a fundamental mathematical tool used to determine the energy levels and wave functions of particles. Such mathematical models enable researchers to make accurate predictions and guide experimental designs in the complex realm of quantum mechanics.

### Calculating Trajectories in Astrophysics

Calculating trajectories in astrophysics heavily relies on the domain of mathematics. By employing mathematical models and equations, scientists can accurately predict the path of celestial objects like comets, asteroids, and spacecraft. Differential equations play a crucial role in these calculations, as they describe the motion and gravitational interactions involved.
For instance, when determining the trajectory of a spacecraft aiming to reach Mars, mathematicians and astrophysicists utilize calculus and numerical methods to account for the complex gravitational forces at play. These calculations facilitate mission planning, ensuring successful interplanetary travel and exploration. The precise mathematical calculations also aid in collision avoidance and understanding celestial dynamics, uncovering valuable insights into the vast expanse of outer space.

### Challenges and Future Directions

Integration of Mathematics in New Fields

• Mathematics continues to find its place in new and diverse fields across industries, enabling insights and advancements in ways never thought possible.
• One such domain of mathematics, for example, is graph theory, which has found applications in social network analysis, transportation systems, and even genetics research.
• Another example is mathematical optimization techniques, which have been applied to improve supply chain management, resource allocation, and even energy grid optimization.
• The integration of mathematical principles and theories into these new fields has allowed for more efficient processes, better decision-making, and innovative problem-solving.
• By embracing the potential of mathematics, industries can unlock new opportunities and gain a competitive edge.

Advancements in Mathematical Techniques

Mathematicians continually develop new techniques that enhance the efficacy of mathematical applications across various domains. These advancements enable more accurate predictions, improved problem-solving capabilities, and increased efficiency in different fields. For instance, in data analysis, the development of advanced algorithms such as machine learning and data mining techniques has revolutionized the way we extract insights from large datasets.
In optimization problems, the advent of convex optimization algorithms has greatly improved the ability to find optimal solutions quickly. Furthermore, advancements in numerical methods have made complex simulations in fields like physics and engineering more feasible and accurate. These practical advancements in mathematical techniques empower professionals in diverse domains to solve complex problems more effectively.

### Improving Math Education for Cross-Disciplinary Applications

To enhance math education for cross-disciplinary applications, fostering a multifaceted approach is crucial. Firstly, educators should prioritize teaching math in context, demonstrating real-world problem-solving scenarios across different domains. This approach allows students to grasp the practical applications of mathematics. Additionally, integrating interactive technology and simulations can make abstract concepts more tangible and engaging. Encouraging collaboration and interdisciplinary projects also promotes the synthesis of mathematical concepts with other fields. For instance, pairing mathematics with biology can lead to breakthroughs in bioinformatics or epidemiology. By emphasizing the relevance and interconnectedness of mathematics across multiple domains, students can develop a well-rounded understanding and be better prepared for future interdisciplinary challenges.

### Over to you

Mathematics plays a crucial role in various fields, extending its influence beyond traditional academic settings. This article highlights the importance of mathematical concepts in different industries and domains. It explores how mathematics contributes to the advancement of fields such as physics, economics, computer science, and biology. From modeling complex systems to solving optimization problems, mathematics provides a universal language and a framework to tackle real-world challenges.
Wheel-mounted MEMS IMU - Inside GNSS - Global Navigation Satellite Systems Engineering, Policy, and Design

Microelectromechanical systems (MEMS) play an essential role in automotive electronic control systems, providing measurements for tire pressure monitoring, vehicle stability control, adaptive suspension, rollover protection systems, and navigation systems. While MEMS gyros and accelerometers are suitable for vehicular applications in terms of size and cost, their noise properties (large bias and significant 1/f noise) create problems, especially in low dynamic conditions or when measurements are integrated from angular rates to angles or from acceleration to velocity and position. GNSS receivers can complement these measurements, but their availability and accuracy drop in urban canyons and underground.

Mitigation of MEMS gyro noise is an actively studied topic, and solutions vary from improving the associated electronics to using external updates and advanced statistical signal processing methods for filtering the gyro noise while retaining the signal. However, in the absence of external updates it is impossible to transform a low-cost MEMS gyro into a high-performance low-noise gyro with signal processing methods alone. This is due to the fact that the 1/f noise heavily overlaps with the useful signal frequency bands, especially when the vehicle dynamics are low. To isolate the problematic noise components from the signal, methods such as indexing and carouseling have been studied.

Significant improvements in pedestrian dead reckoning have been obtained using a foot-mounted rotating inertial measurement unit (IMU), but this involved complicated hardware. In contrast, a dedicated rotating system is not necessary if the IMU is mounted at the wheel of a land vehicle. This setup is advantageous, as distance traveled can also be deduced from wheel orientation via the known radius.
A somewhat similar approach is taken in foot-mounted INS, wherein a zero-velocity update can be applied as a measurement whenever the foot is on the ground. On a wheel, this kind of velocity update can be done continuously.

We have designed a wheel-mountable sensor system that contains MEMS sensors, a battery, a Bluetooth module and electronics to run computations and navigation algorithms onboard. It operates in several programmable modes:

• Computes navigation parameters in real time and sends them via Bluetooth to an onboard computer (which can be any other integrated system, a data logger or a tablet);
• Sends real-time raw data to an onboard computer;
• Records high-rate raw sensor data (up to 2 kHz) to an embedded micro-SD card.

Our onboard computer is a MEMS-array IMU with 48 gyro and accelerometer channels (PI-48), with a Bluetooth receiving and sync controller, data storage and a WiFi interface. We can connect units from all wheels to the onboard computer and have all their data in sync with the in-cabin PI-48 inertial data. All of this data can be used for navigation, wheel dynamics measurements or road quality monitoring applications. The output of the system can be useful in several ways:

• for positioning;
• for wheel dynamic measurements: high-rate (2 kHz) 6DOF data;
• for road condition monitoring: direct measurements unaffected by suspension.

An inertial measurement unit attached to a wheel is evidently in a harsh environment. Compared to a cabin-fixed IMU, conditions like vibration, dirt, moisture, snow and varying temperature due to braking all need to be taken into account. In addition, requirements for sensor input range are different. But such environmental factors are not that severe when compared to space applications, for instance. Thus, designing and building an IP-protected, properly temperature-compensated unit (-40°C to +80°C) is not impossible.
For guiding the system design, our primary target markets and applications for this technology have been:

• autonomous vehicles;
• construction and mining machine navigation and safety;
• port logistics and warehouse automation vehicles;
• traction control enhancement and tire wear analysis.

We have ongoing efforts to design and test an energy-harvester prototype capable of extracting enough energy to power the sensor even at slow vehicle speeds. This will make our wheel sensor an "install and forget" solution that goes to sleep when the vehicle is not moving (motion detection) and transmits pre-programmed data messages to an onboard system when motion occurs. Our target design goal is a small unit behind the manufacturer's logo on a wheel rim, self-generating enough power to operate and sending valuable information to in-car safety/navigation systems.

Dynamics of Wheel-Mounted IMU

We begin by defining the wheel-fixed coordinate frame (B) and the vehicle chassis frame (V), sharing a common z-axis as shown in Figure 1. The direction cosine matrix for the coordinate transformation can then be expressed accordingly. [Figure: Heading rate from the in-cabin IMU (gyro X) vs. data from two gyros in the unit on the wheel.] Assuming the wheel rotates at a constant speed, velocity updates and centripetal acceleration follow from the required coordinate transformation, with the locally level L frame assumed to coincide with the vehicle frame V. If heading rate estimates are averaged over a full wheel revolution, the gyro bias does not affect the lateral acceleration estimate. The remaining essential errors are due to gyro white noise and inaccuracy in the wheel radius R.

Hardware and Data Format

The sensor board is rigidly attached to the enclosure, limiting negative resonant effects and providing maximum stability for the sensitivity axes. The sensor board is also thermally detached from the enclosure to minimize temperature gradients.
After assembly, the sensor is calibrated (axes, biases and scale factors) and temperature compensated from -40°C to +80°C. The unit uses an ARM Cortex-M4 as the main processor. The radio channel is built on Bluetooth technology using a Murata chip and the BLE (Bluetooth Low Energy) protocol. The built-in Li-Po battery has a capacity of 240 mAh and is enough for powering the PI-WINS for 10-20 hours, depending on the load of BLE transmission (mode of operation). There is also a built-in 16 GB eMMC flash memory card for logging high-rate sensor data (up to 2 kHz) for specific wheel dynamic measurements. The logged high-rate data is accessible via a mini-USB interface that is IP-68 protected. The current model of the wheel sensor is shown in Figure 3, with an example of its installation on a car rim in Figure 4.

In testing we also used another IMU that we design and manufacture, with an array of 48 gyros and accelerometers. The proper orientation of the sensors in the array not only lowers the noise and improves the bias stability, it also reduces the overall temperature coefficient and hysteresis, which, in turn, leads to better unit stability after temperature calibration. The unit is IP-67 protected and has several interfaces: RS232, RS485 and CAN2.0. This IMU is shown in Figure 5.

Data Format

The wheel sensor, as any other inertial sensor, provides a relative navigation solution: incremental heading (with respect to some initial value) and incremental distance traveled, measured in wheel rotation counts. This is sufficient information to apply a classic Dead Reckoning (DR) algorithm and compute a 2D navigation solution. The initial heading, wheel diameter and altitude can be estimated with GNSS integration or modern map-matching methods [27].
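The classic 2D dead-reckoning update is simple: each delta-heading/delta-distance pair advances the pose. A minimal sketch of that update follows; the increment sequence below is made up for illustration, not real sensor output:

```python
import math

def dead_reckon(x, y, heading_deg, increments):
    """Propagate a 2D pose from (delta_heading_deg, delta_distance_m)
    pairs, as a wheel sensor might report them at 10 Hz."""
    for d_heading, d_dist in increments:
        heading_deg += d_heading
        h = math.radians(heading_deg)
        x += d_dist * math.cos(h)
        y += d_dist * math.sin(h)
    return x, y, heading_deg

# Drive 10 m east, then turn 90 deg left and drive 5 m north.
path = [(0.0, 1.0)] * 10 + [(90.0, 0.0)] + [(0.0, 1.0)] * 5
print(dead_reckon(0.0, 0.0, 0.0, path))  # ends near (10, 5), heading 90
```

In practice the initial heading would come from GNSS or map matching as described above, and the distance increments from the wheel rotation count times the known circumference.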
Currently, the wheel sensor operates in two modes:

• Low power mode (real time): all the calculations are made in the sensor and the real-time delta-heading and delta-distance are sent via Bluetooth to a dedicated receiver;
• EKF mode (time lag): raw sensor data is transmitted to an onboard computer where more computationally heavy EKF algorithms are run and a better navigation solution is computed.

For the low power mode, the current accuracy specifications for the unit are:

• Output data packet rate: 10 Hz;
• Interface: USB (with BT dongle);
• Wheel rotation rate: up to 10 full revolutions per second (600 rpm);
• Wheel heading angle rate: <1000 deg/sec;
• Wheel heading angle error: 1 deg per 1 hour of driving time.

Test Results

The test vehicle with the wheel sensor and the BT dongle on the wheel is shown in Figure 6. The most important data from the wheel sensor are the unbiased heading and the precise wheel rotation count. Let us first demonstrate how well it estimates the vehicle heading compared to two other solutions: one incorporating a high-precision GNSS receiver and a tactical-grade IMU, the second a multi-band RTK GNSS receiver. Figure 7 shows the result of a nearly 15 min drive with speeds of up to 25 km/h. After initial heading initialization, the max error in heading estimation is around 2.3 deg (RMS). Figure 8 shows the estimation of vehicle speed by the three devices. Here, we used wheel sensor measurements of wheel rotation and the known wheel diameter. The error in speed estimation is 0.23 m/s.

Figure 9 shows the results for this test: a 15 min drive, 3.1 km total distance traveled, 20 km/h max speed. For the wheel sensor computation the final 2D position error is 23 meters. In comparison, the high-accuracy MEMS-array IMU with odometer input has a larger error (112 m). It should be noted that for the latter result the initial bias was removed in the initialization. This also removes the vertical component of Earth rate.
For the wheel sensor, the Earth rate remains in the solution (this is not corrected in the results). We expect the difference to be even larger with a filter that is tuned to handle errors distinctive to a wheel-mounted INS (modulation of Earth rate and g-sensitivity). Here, the progress in low-cost precise GNSS receivers should be mentioned, as it is very relevant to inertial system integration. We have tested the new GNSS RTK module with the wheel sensor and the results look very promising. For example, the real-time solution of this low-cost receiver was used to estimate the lever arm with a standard deviation of only 1.4 cm. Such advances in new low-cost receivers will really change the opportunities of dead reckoning systems in general.

In another test, the total driving time is 15 minutes and the max speed is 30 km/h. Figure 10 shows the results for this test; the initial heading and position are taken from the DGPS solution, the rest is a fully inertial 2D navigation solution computed from the sensor's wheel data. Here, we show two modes of wheel sensor operation: one is a real-time computation by the wheel sensor itself (low power mode, the wheel sensor transmits delta-heading and delta-distance traveled via Bluetooth at 10 Hz) and the other is a more computationally heavy algorithm (EKF) that is run on a computer using raw 2 kHz data logged by the sensor on its internal storage. The two solutions are really close in this particular case, but in longer tests the EKF solution outperforms the simplified real-time solution. It is possible to embed the EKF algorithm in the ARM processor of the sensor; it would result in a more accurate solution but would also lead to a shorter battery operation time and an inevitable solution time lag. When the car enters a parking garage ("parking garage entry" mark on Figure 10) the reference GNSS/INS solution actually drifts more than the sensor's solution. The maximum 2D position error of the sensor is below 10 meters.
This test shows the potential of the system, and we are running an extensive test campaign with other types of additional sensors such as lidars, stereo cameras and precise point positioning receivers. This campaign will help to reveal the pros and cons of wheel-mounted systems with different kinds of sensor setups. Many other applications and tests can run with wheel sensors. For example, having several sensors on front and rear wheels can be very useful in detecting wheel slips. Data from four sensors installed on all vehicle wheels can be used to estimate the radius and center of curvature of the path the car drives along. The sensor's raw 2 kHz inertial data is perfect information for analyzing wheel dynamics and road conditions. Here we have just scratched the surface and shown rather navigation-related results.

Availability, reliability and integrity of vehicular navigation technology become more and more critical as autonomous transport systems enter the market in high volume. To enable continuous operation, cameras and lidars equipped with modern machine learning algorithms are being coupled with traditional GNSS and inertial navigation systems. When considering system tolerance to interference (intentional or unintentional), inertial sensor-based solutions are in a class of their own. Thus, improving the performance of inertial systems while keeping the costs at a reasonable level is worth studying. We have shown that an inertial measurement unit mounted to the wheel of a vehicle can be used as a high-rate (2 kHz) source of bias-free data for:

• vehicle navigation;
• instantaneous wheel dynamics estimation (angles, rates, accelerations) for vehicle stability control;
• road quality measurement systems.

This method potentially opens new approaches for car stability systems and autonomous driving.
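The path-curvature idea mentioned above can be sketched with standard differential-drive kinematics: per-wheel speeds on the same axle differ in a turn, and the difference fixes the turning radius. This is a generic textbook relation, not the authors' algorithm, and the track width and speeds below are invented:

```python
def turn_radius(v_left, v_right, track_width):
    """Radius (m) of the path of the axle midpoint, from left/right wheel
    speeds (m/s) and the distance between the wheels (m).
    Returns infinity when the vehicle is driving straight."""
    if v_left == v_right:
        return float("inf")
    return (track_width / 2.0) * (v_left + v_right) / (v_right - v_left)

# Hypothetical numbers: inner wheel 9.5 m/s, outer 10.5 m/s, 1.6 m track.
print(turn_radius(9.5, 10.5, 1.6))  # ~16 m turn radius (left turn)
```

With wheel sensors on all four wheels, the same relation applied per axle would also expose wheel slip as an inconsistency between the two radius estimates.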
The Wheel Measurement Systems (PI-WMS) and PI-48 IMUs are produced by Pacific Inertial Systems of Canada and JC Inertial Oy of Finland. They carry InvenSense ICM-20602 inertial motion tracking
What is SOCR?

The goals of the Statistics Online Computational Resource (SOCR) are to design, validate and freely disseminate knowledge. Our Resource specifically provides portable online aids for probability and statistics education, technology-based instruction and statistical computing. SOCR tools and resources include a repository of interactive applets, computational and graphing tools, and instructional and course materials.

There are three major types of SOCR users: educators, students and tool developers. Course instructors and teachers will find the SOCR class notes and interactive tools useful for student motivation, concept demonstrations and for enhancing their technology-based pedagogical approaches to any study of variation and uncertainty. Students and trainees may find the SOCR class notes, analyses, computational and graphing tools extremely useful in their learning/practicing pursuits. Model developers, software programmers and other engineering, biomedical and applied researchers may find the light-weight plug-in oriented SOCR computational libraries and infrastructure useful in their algorithm designs and research efforts.

SOCR Components

The main components of the SOCR Resource include
ECCC - Reports tagged with proximity testing

Eli Ben-Sasson, Swastik Kopparty, Shubhangi Saraf

Algebraic proof systems reduce computational problems to problems about estimating the distance of a sequence of functions $u=(u_1,\ldots, u_k)$, given as oracles, from a linear error correcting code $V$. The soundness of such systems relies on methods that act ``locally'' on $u$ and map it to a single function $u^*$ ...
pdmClass.cv: Leave One Out Crossvalidation in pdmclass: Classification of Microarray Samples using Penalized Discriminant Methods

This function performs a leave one out crossvalidation to estimate the accuracy of a classifier built using pdmClass.

Y A vector of factors giving the class assignments for the samples to be used in the crossvalidation.
X A matrix with samples in rows and observations in columns. Note that this is different than the usual paradigm for microarray data.
method One of "pls", "pcr", "ridge", corresponding to partial least squares, principal components regression and ridge regression.

This function performs a leave one out crossvalidation, which can be used to estimate the accuracy of a classifier. Each sample is removed in turn and a classifier is built using the remaining samples. The class of the removed sample is then predicted using the classifier. This is repeated for each sample, resulting in a vector of predicted class assignments for each sample in the original training set. Although far from perfect, this method can be used to estimate the accuracy of a given classifier without splitting data into a training and testing set.

A vector of factors giving the predicted class assignments for each of the samples in the training set. A confusion matrix can be constructed using confusion.

"Flexible Discriminant Analysis by Optimal Scoring" by Hastie, Tibshirani and Buja, 1994, JASA, 1255-1270. "Penalized Discriminant Analysis" by Hastie, Buja and Tibshirani, Annals of Statistics, 1995 (in press).
library(fibroEset)
data(fibroEset)
y <- as.factor(pData(fibroEset)[,2])
x <- t(exprs(fibroEset))
tmp <- pdmClass.cv(y, x)
confusion(tmp, y)

Loading required package: Biobase
Loading required package: BiocGenerics
Loading required package: parallel

Attaching package: 'BiocGenerics'

The following objects are masked from 'package:parallel':

    clusterApply, clusterApplyLB, clusterCall, clusterEvalQ,
    clusterExport, clusterMap, parApply, parCapply, parLapply,
    parLapplyLB, parRapply, parSapply, parSapplyLB

The following objects are masked from 'package:stats':

    IQR, mad, sd, var, xtabs

The following objects are masked from 'package:base':

    Filter, Find, Map, Position, Reduce, anyDuplicated, append,
    as.data.frame, cbind, colMeans, colSums, colnames, do.call,
    duplicated, eval, evalq, get, grep, grepl, intersect, is.unsorted,
    lapply, lengths, mapply, match, mget, order, paste, pmax, pmax.int,
    pmin, pmin.int, rank, rbind, rowMeans, rowSums, rownames, sapply,
    setdiff, sort, table, tapply, union, unique, unsplit, which,
    which.max, which.min

Welcome to Bioconductor

    Vignettes contain introductory material; view with
    'browseVignettes()'. To cite Bioconductor, see
    'citation("Biobase")', and for packages 'citation("pkgname")'.

Loading required package: fibroEset
Loading required package: mda
Loading required package: class
Loaded mda 0.4-10

Warning message:
*** Deprecation warning ***
The pdmclass package is deprecated and will be removed from Bioconductor 3.6.

          true
predicted  b  g  h
        b 11  0  5
        g  0 12  0
        h  0
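The leave-one-out procedure itself is package-independent: hold out each sample, fit on the rest, predict the held-out sample. A minimal pure-Python sketch, using a nearest-centroid classifier in place of the penalized discriminant methods, and invented toy data rather than the fibroEset samples:

```python
def nearest_centroid_predict(train_x, train_y, sample):
    # Mean of each class's training vectors, then pick the closest mean.
    best_label, best_dist = None, float("inf")
    for c in sorted(set(train_y)):
        rows = [x for x, y in zip(train_x, train_y) if y == c]
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        dist = sum((a - b) ** 2 for a, b in zip(sample, centroid))
        if dist < best_dist:
            best_label, best_dist = c, dist
    return best_label

def loo_cv(x, y):
    # Leave each sample out, train on the rest, predict the held-out one.
    preds = []
    for i in range(len(x)):
        tx, ty = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        preds.append(nearest_centroid_predict(tx, ty, x[i]))
    return preds

x = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
y = ["a", "a", "a", "b", "b", "b"]
print(loo_cv(x, y))  # well-separated classes -> all predictions correct
```

Comparing the returned predictions against the true labels gives exactly the confusion-matrix summary that `confusion(tmp, y)` produces above.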
A Definitive Guide on Types of Error in Statistics - StatAnalytica

Statistics / By Quinton Rose / 21st October 2023

Most students are not aware of the types of error in statistics. This guide will help you to know everything about the types of error in statistics. Let's explore the guide:

Because 'statistics' is a mathematical term, people tend to assume it is a difficult subject, but it is actually one of the most interesting and straightforward branches of mathematics. If you are looking for mathematics experts who can complete your math homework, then don't worry. We are here to help you; our experts can provide you with the best maths homework help.

The word 'statistics' refers to the quantified figures that we use to represent and summarize the data of an experiment or of real-time studies. In this article, we will discuss the following topics in detail:

• What is the error in statistics?
• What is the standard error in statistics?
• How many types of errors are there in statistics?
• What are the sampling error and non-sampling error in statistics?
• What is the margin of error formula in statistics?

Before discussing the topics mentioned above, we want to say that if you are looking for a statistics homework helper, then don't worry: you can get the best statistics homework help from our experts. So, what are you waiting for? Get help now!

What Is The Error In Statistics?

Statistics is a methodology for gathering, analyzing, reviewing, and drawing conclusions from particular information. A statistical error is the difference between the obtained value of the collected data and the actual value. The higher the error value, the less representative the data is of the community. In simple words, a statistical error is the difference between a measured value and the actual value of the collected data.
If the error value is large, the data will be considered less reliable. Therefore, one has to keep in mind that the data must have a minimal error value so that it can be considered more reliable.

Types Of Error In Statistics

There are two main types of error in statistics: Type I and Type II. In a statistical test, a Type I error is the elimination of a true null hypothesis, whereas a Type II error is the non-elimination of a false null hypothesis. Many statistical methods revolve around reducing one or both kinds of error, although the complete elimination of either of them is impossible. But by choosing a low threshold value and changing the alpha level, the features of the hypothesis test can be optimized. Information on Type I and Type II errors is used in biometrics, medical science, and computer science. We also cover the Type III error in this post.

Type I Error

The first type of error is eliminating a valid null hypothesis based on the outcome of a test procedure. This type of error is sometimes also called an error of the first kind. A null hypothesis is set before the beginning of an analysis. The null hypothesis states that no 'cause and effect' relationship is present between the items being tested; this situation is denoted as 'n=0'. If you are conducting the test and the outcome seems to signify that the applied stimuli may cause a response, then the null hypothesis will be rejected.

Examples Of Type I Error

Let's take the example of the trial of an accused criminal. The null hypothesis is that the person is innocent, while the alternative is that the person is guilty. In this case, a Type I error means that the individual is found guilty and sent to jail, despite being innocent.
In another example, from medical testing, a Type I error would make a treatment appear to reduce the severity of a disease when in fact it does not. Whenever a new drug is tested, the null hypothesis is that the drug does not affect the progression of the disease. Suppose a lab is researching a new cancer drug. The null hypothesis is that the drug does not affect the growth of cancer cells. After treating cancer cells with the drug, the cells stop growing, and the null hypothesis is rejected. If the drug really is what stopped the growth, then rejecting the null hypothesis is correct. However, if something else in the experiment stopped the growth rather than the drug, the rejection of the null hypothesis is incorrect — this is a Type I error.
Type II Error
A Type II error is the failure to reject a wrong null hypothesis; the term comes up in hypothesis testing. In statistical data analysis, a Type I error is the rejection of a true null hypothesis. A Type II error, by contrast, happens when someone fails to reject a null hypothesis that is actually false. In simple words, a Type II error produces a false negative: a real effect is missed, and two observations are treated as identical even though they are different. A Type II error does not reject the null hypothesis even though the alternative hypothesis is true — a false claim is accepted as true. A Type II error is also known as a "beta error".
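A quick way to see both error rates in action is a small Monte Carlo simulation. This is a sketch in plain Python, not from the article: the test (a two-sided z-test), sample size, and effect size are illustrative choices of mine.

```python
import random
import statistics

random.seed(42)

def one_sample_z_reject(sample, mu0, sigma, z_crit=1.96):
    """Two-sided z-test: reject H0 (true mean == mu0) at alpha ~ 0.05?"""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

def rejection_rate(true_mean, mu0, sigma=1.0, n=30, trials=20_000):
    """Fraction of simulated samples in which H0 is rejected."""
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, sigma) for _ in range(n)]
        if one_sample_z_reject(sample, mu0, sigma):
            rejections += 1
    return rejections / trials

# H0 is true (true_mean == mu0): the rejection rate estimates
# alpha, the Type I error rate (should be close to 0.05).
type1 = rejection_rate(true_mean=0.0, mu0=0.0)

# H0 is false (true_mean != mu0): the NON-rejection rate estimates
# beta, the Type II error rate.
type2 = 1 - rejection_rate(true_mean=0.5, mu0=0.0)

print(f"Estimated Type I rate:  {type1:.3f}")  # near 0.05
print(f"Estimated Type II rate: {type2:.3f}")
```

Raising alpha (a smaller `z_crit`) makes `type1` go up and `type2` go down, which is exactly the trade-off quiz questions 7 and 8 below are testing.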
Type II Error Example
Suppose a biotech company wants to compare the effectiveness of two medicines used for treating diabetic patients. The null hypothesis (H[0]) is that the two treatments are equally effective; this is the claim the company hopes to reject using a one-tailed test. The alternative hypothesis (H[a]) is that the two medications are not equally effective; it is the claim supported by rejecting the null hypothesis.
The company decides to run a trial on 4,000 diabetic patients to evaluate the treatments. It expects the two medicines to help a similar number of patients if they are equally effective. It chooses a significance level of 0.05, meaning it is willing to accept a 5% chance of rejecting the null hypothesis when it is actually true — that is, a 5% chance of committing a Type I error.
Suppose beta is calculated to be 0.035, or 3.5%. Then the probability of committing a Type II error is 3.5%. If the two treatments are not identical, the null hypothesis should be rejected. If the company nevertheless fails to reject the null hypothesis when the medicines are not equally effective, a Type II error has occurred.
Type III Error
Type III errors, also known as conceptual errors, occur when researchers test the wrong hypothesis or design an experiment that does not actually address the research question. In simpler terms, a Type III error arises when a researcher investigates a relationship between two variables that are not causally related and concludes from the analysis that they are. This error can lead to misguided policies and inaccurate conclusions.
Example of Type III Error
For example, suppose a researcher is interested in examining the relationship between ice cream consumption and crime rates.
After collecting and analyzing data, the researcher finds a significant correlation between the two variables. However, treating this as evidence of a causal relationship is a Type III error, because ice cream consumption does not cause crime. The researcher has failed to consider other important variables, such as temperature or time of day, that influence both ice cream consumption and crime rates. Therefore, the conclusion that ice cream consumption drives crime rates is incorrect, and any policies or interventions based on it would be misguided.
Test Your Knowledge About Types Of Error In Statistics
1. A Type I error occurs if
• A null hypothesis is not rejected but should be rejected.
• The null hypothesis is rejected, but actually it should not be rejected.
• A test statistic is wrong.
• None of these.
Correct Answer: The null hypothesis is rejected, but actually it should not be rejected.
2. A Type II error occurs if
• A null hypothesis is not rejected but should be rejected.
• The null hypothesis is rejected, but actually it should not be rejected.
• A test statistic is wrong.
• None of these.
Correct Answer: A null hypothesis is not rejected but should be rejected.
3. Deciding on the significance level 'α' helps in determining
• the probability of a Type II error.
• the probability of a Type I error.
• the power.
• None of these.
Correct Answer: the probability of a Type I error.
4. Suppose a water bottle has a label stating the volume is 12 oz. A consumer group suspects the bottle is under-filled and decides to perform a test. In this case, a Type I error would mean
• The group does not conclude that the bottle has less than 12 oz, and the mean is indeed less than 12 oz.
• The group has proof that the label is wrong.
• The group concludes that the bottle has less than 12 oz, but the mean is actually 12 oz.
• None of these.
Correct Answer: The group concludes that the bottle has less than 12 oz.
The mean is actually 12 oz.
5. A Type I error happens when the null hypothesis is
• correct.
• incorrect.
• either correct or incorrect.
• None of these.
Correct Answer: correct.
6. A Type II error happens when the null hypothesis is
• correct.
• incorrect.
• either correct or incorrect.
• None of these.
Correct Answer: incorrect.
7. If the significance level 'α' is raised, the probability of a Type I error will
• decrease.
• remain the same.
• increase.
• None of these.
Correct Answer: increase.
8. If the significance level 'α' is raised, the probability of a Type II error will
• decrease.
• remain the same.
• increase.
• None of these.
Correct Answer: decrease.
9. If the significance level 'α' is raised, the power of the test will
• decrease.
• remain the same.
• increase.
• None of these.
Correct Answer: increase.
10. The power of a test can be increased by
• selecting a smaller value for α.
• using a normal approximation.
• using a larger sample size.
• None of these.
Correct Answer: using a larger sample size.
What Is The Standard Error In Statistics?
"Standard error" refers to the standard deviation of a sample statistic, such as the mean or median. For instance, the standard error of the mean is the standard deviation of the sampling distribution of the mean computed from samples of a population. The smaller the standard error, the more representative the sample is of the population. The relationship between standard deviation and standard error is that, for given data, the standard error equals the standard deviation (SD) divided by the square root of the sample size:
Standard error = standard deviation / √(sample size)
The standard error shrinks as the sample size grows, because the sample statistic tends toward the true population value.
Standard error ∝ 1/√(sample size)
The standard error is part of descriptive statistics. It shows the standard deviation of an average value within a data set, and it serves as a measure of random variation. The smaller the spread, the higher the precision of the estimate from the dataset.
The Data Affects Two Kinds Of Error
This section explains two kinds of error that affect the data:
1. Sampling error
Sampling error happens purely as a consequence of using a sample from a population rather than conducting a complete enumeration of the population. It is the difference between an estimate of a population value and the "true" value that would result if a census were taken. Sampling error does not occur in a census, since a census is based on the whole population. Sampling error arises when:
• The method of sampling is not random.
• The sample is too small to represent the population accurately.
• The proportions of various characteristics in the sample are not the same as the proportions of those characteristics in the whole population.
Sampling error can be measured and controlled in random samples, especially where every unit has a known, non-zero chance of selection. In general, increasing the sample size decreases the sampling error.
2. Non-sampling error
This error is caused by factors unrelated to sample selection. It refers to the presence of any factor, random or systematic, that pulls the result away from the true population value. Non-sampling error can happen at any stage of a census or sample survey, and it is not easily quantified or identified. Non-sampling errors include:
• Non-response error: the failure to obtain a response from some units because of absence, refusal, non-contact, or some other reason.
Non-response can be partial (a chosen unit does not answer some questions) or complete (no information is obtained at all from a chosen unit).
• Processing error: errors introduced while processing the data — collection, coding, data entry, editing, and output.
• Coverage error: this happens when a unit is wrongly included in or excluded from the sample, or is duplicated in it (for example, when an interviewer is unable to interview a selected household).
• Interviewer error: this happens when the interviewer records information incorrectly, is not objective or neutral, or assumes a response based on appearance or other characteristics.
What Is The Margin Of Error In Statistics?
The margin of error in statistics is the range of values above and below a sample estimate in a confidence interval. The range is a way of expressing how uncertain a particular estimate is.
For example, a survey may report a 97% confidence interval of 3.88 to 4.89. This means that if the survey were repeated with the same methodology, the interval produced would contain the true population value 97% of the time.
The Formula For Calculating The Margin Of Error Percentage
A margin of error tells you how far the estimate may be from the true population value. For instance, a 94% confidence interval with a 3% margin of error means that your estimate will be within 3 percentage points of the true population value 94% of the time.
The Margin Of Error Can Be Measured In Two Ways:
1. Margin of error = Critical value × Standard deviation of the statistic
2. Margin of error = Critical value × Standard error of the statistic
Steps To Calculate The Margin Of Error
Step 1: Find the critical value. The critical value is either a z-score or a t-score.
In general, use a t-score for small samples (under 30) or when you do not know the population standard deviation; otherwise, use a z-score.
Step 2: Find the standard deviation or standard error. These measure the same kind of spread; you simply need the population parameter values to compute the standard deviation, and otherwise you use the standard error.
Step 3: Multiply the critical value by the standard deviation or standard error.
Sample problem: 100 students were polled and had a mean GPA of 2.5 with a standard deviation of 0.5. Calculate the margin of error for a 90% confidence level.
1. The critical value for a 90% confidence level is 1.645 (see a z-score table).
2. The standard deviation is 0.5. Since this is a sample, we need the standard error of the mean: standard deviation / √(sample size) = 0.5/√100 = 0.05.
3. 1.645 × 0.05 = 0.08225. The margin of error is about 0.082.
The margin of error formula for a proportion:
Margin of error = z × √( p̂(1 − p̂) / n )
where p̂ = sample proportion, n = sample size, and z = z-score.
Steps to calculate the margin of error for a proportion
Step 1: Find p̂. This is the fraction of respondents who answered positively — that is, who agreed with the given statement of the survey.
Step 2: Find the z-score that corresponds to the confidence level.
Step 3: Put all the values into the formula.
Step 4: Convert the result of Step 3 into a percentage.
Sample problem: 1,000 people were polled, and 380 think that climate change is not caused by human pollution. Calculate the margin of error for a 90% confidence level.
1. The proportion who responded positively: p̂ = 380/1000 = 0.38.
2. A 90% confidence level has a critical value (z-score) of 1.645.
3. Apply the formula: 1.645 × √(0.38 × 0.62 / 1000) ≈ 0.0252.
4. Converted to a percentage: 0.0252 = 2.52%.
The margin of error is 2.52%.
This is all about types of error in statistics. Using the details above, you can understand the types of error in statistics.
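Both worked examples above can be reproduced with a few lines of Python. This is a sketch; the helper function names are mine, and the numbers follow the two sample problems in the text.

```python
import math

def margin_of_error_mean(sd, n, z=1.645):
    """Margin of error for a sample mean: z * (sd / sqrt(n))."""
    return z * sd / math.sqrt(n)

def margin_of_error_proportion(p_hat, n, z=1.645):
    """Margin of error for a sample proportion: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# GPA example: n = 100, sd = 0.5, 90% confidence (z = 1.645)
me_mean = margin_of_error_mean(sd=0.5, n=100)
print(round(me_mean, 3))  # 0.082

# Climate poll example: 380 of 1,000 answered positively
me_prop = margin_of_error_proportion(p_hat=380 / 1000, n=1000)
print(f"{me_prop:.2%}")  # 2.52%
```

Swapping in a different `z` (for example, 1.96 for 95% confidence) widens the margin, which is the usual trade-off between confidence level and interval width.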
If you still have questions about errors in statistics, you can get in touch with our professional experts, who can resolve all of your queries.
Frequently Asked Questions
Q1. What is the difference between Type 1 and Type 2 errors?
In statistical hypothesis testing, a Type I error is rejecting a null hypothesis that is actually true. A Type II error occurs when a null hypothesis is accepted even though it is not true.
Q2. What is an example of random error?
A random error can arise from the measuring instrument or from variations in the experimental environment. For example, a spring balance can produce some variation in readings because of loading and unloading conditions, fluctuations in temperature, etc.
Q3. What type of error is human error?
Random errors are considered natural errors, and systematic errors occur because of problems or imprecision with instruments. Human error, by contrast, is a mistake made by a person — something a human simply got wrong.
Q4. What are Type 3 errors in statistics?
A Type III error occurs in statistics when you correctly conclude that two groups are statistically different but are wrong about the direction of the difference.
36.43 decameters per square second to centimeters per square second
36.43 decameters per square second = 36,430 centimeters per square second
Acceleration Converter - Decameters per square second to centimeters per square second
This conversion of 36.43 decameters per square second to centimeters per square second has been calculated by multiplying 36.43 decameters per square second by 1,000, and the result is 36,430 centimeters per square second.
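The conversion is a single multiplication, which a small Python helper makes explicit (the function name is mine, not from the converter site):

```python
def dam_s2_to_cm_s2(value):
    """Convert decameters per square second to centimeters per square second.

    1 dam = 10 m = 1,000 cm, so the conversion factor is 1,000.
    """
    return value * 1000

print(round(dam_s2_to_cm_s2(36.43), 2))  # 36430.0
```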
SQL Musings
The other day, I had a request from a BA. The gist of it was that we had a column in the database defined as a decimal(5,4) and yet the data as defined by the regulatory body only provided three places after the decimal point. He wanted to know if any data had somehow crept into the database that was utilizing the (seemingly superfluous) fourth place.
On its face, this seems like a difficult problem. Admittedly, when I first heard it, I couldn't come up with something off the top of my head. But then I remembered that I have a plaque on my wall that says "B.S. Math", so I decided to leverage that. Enter the modulo operator.
Before we can talk about the modulo operator, we're going to have to flash back to when you learned long division. I know that it was traumatic for you, but trust me, this will bear fruit. If you recall, when you divide one number by another and that division doesn't result in "even division" (that is, when the divisor goes into the dividend an integer number of times), you have the option of either expanding it into a decimal or just saying "screw it... here's what's left". By way of a worked example, if I divide 5 by 2, I can either say that it's 2.5, or 2 with a remainder of 1. In simplest terms, the modulo operator takes as arguments a divisor and a dividend and returns the remainder.
So how does that apply to the stated problem? That is, if I have an arbitrary number that's of the form 0.1234, how do I tell if there is a digit in the ten thousandths place? Here's what I did. If I multiply all such numbers by 10,000 and then evaluate that number modulo (mod for short) 10, that will return the ones place to me. If the result of that operation is not 0, then I have a non-zero digit in the ten thousandths place. For you code junkies out there:

select value from table where convert(int, 10000*value) % 10 > 0

The mod operator is useful in a lot of other situations.
For instance, let's say that you need to break up a group into n smaller groups. Mod is your friend here. The following code will add a column to the result set called "g" that gives you the group number. In the example, I chose 2 as my modulus, so "g" will take values 0 and 1.

select row_number() over (order by Id) % 2 as [g], * from table

Have you ever needed to make a report where alternating rows had different formatting (and your reporting tool of choice doesn't support this natively)? The above code makes this almost trivial. So there you have it. The mod function can be a useful tool in your toolbox. Have fun!
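The digit-check trick translates outside SQL too. Here is a sketch in Python; note that with binary floats you want to round before taking the modulus, whereas SQL's exact decimal(5,4) type lets you convert straight to int.

```python
values = [0.1230, 0.1234, 0.9990, 0.5008]

def has_fourth_decimal_digit(value):
    """True if the ten-thousandths place is non-zero.

    Multiply by 10,000 to shift that digit into the ones place,
    then take the result modulo 10. round() guards against
    floating-point representation error (e.g. 1233.9999...).
    """
    return round(value * 10000) % 10 > 0

flagged = [v for v in values if has_fourth_decimal_digit(v)]
print(flagged)  # [0.1234, 0.5008]
```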
Quiz – Modern Mashers (and one ahead of his time) (stumped!)
This quiz involves eight players, all but one of whom were active in the past 30 years. Yet they are the only players in majors history to retire with a certain career accomplishment. What is it?
Seems I've managed to stump the HHS panel. The quiz answer is that only these players recorded a 3000 PA career having 45% of hits go for extra bases, and with doubles comprising 45% of extra-base hits. More after the jump.
Hint: there are two parts to the answer, both involving the same number (which is also the number of batters faced by the last Cubs pitcher to record an extra-inning shutout away from Wrigley)
Having 45% of hits go for extra bases is the hard part, with only 18 such careers in majors history, and more than half of those by players with fewer than 7000 career PA. To also have doubles as 45% of extra-base hits requires a fairly even split between home runs and doubles. Too many home runs and doubles will drop below 45% of XBH; too many doubles likely means a player won't have 45% of his hits go for extra bases. But there is one player who is turning that home run-doubles balance on its ear. Chris Young qualifies for this group right now, despite having 54% more doubles than home runs. There are 720 retired players with 300 extra-base hits and 50% more doubles than home runs, and none of them has 45% extra-base hits (Scott Rolen is closest with 42%). Even just looking at careers through age 32 (Young's age last season), nobody in the group has previously been at 45% extra-base hits (closest was Brad Wilkerson at 43.5%; Rolen was 43.1%).
24 Comments
7 years ago
I think 45 is the number and, I've got nothing else… ;o)
7 years ago
Reply to Scary Tuna
45 is indeed the relevant number. Another clue: Jose Bautista and Chris Young are the only active players who currently qualify to join this group.
7 years ago
wow.. a 7-0 victory in 11 innings.
In the first game of a twi-night doubleheader, no less.
7 years ago
Hitters with somewhat similar profiles, for the most part. For Luke Scott to be 2nd on the list in front of Thome and Schmidt, who played twice as many games – it feels like it has to be some kind of weird rate stat or some kind of obscure percentage. And then you have Greenberg, whose career triple slash was probably something like .300/.400/.600, whereas Young is closer to .200/.300/.400 ha
7 years ago
Reply to David P
I don't think it's any sort of ranking. I think it's just the order of when their careers were.
7 years ago
Fine – even if Scott is not actually "Number 2", he's still in the top 12. To have Luke Scott and Chris Young in the same top 12 as Greenberg and Ortiz and such is a conundrum. I've got one of the criteria, but not both. XBH / Hits is greater than 45% for all of these guys, but also for a few other guys (McGwire, I'm guessing, Adam Dunn, too, I'm sure). I don't have paid access to the Play Index so I can't see everything.
7 years ago The answer involves the type of extra-base hit. 7 years ago Reply to Doug Maybe I’m missing something here, so apologies if I’m incorrect. One reason I didn’t submit the 45% doubles as an answer is Albert Belle*. He went for extra bases in 45.8% of his total hits, but then 49% of his XBHs were doubles. Additionally, Thome only had 41% doubles go for XBH. Am I doing something wrong? * I don’t have full access to the P-I, so I’m not 100% sure that Albert Belle is among the 18 with 45% of hits going for XBH. 7 years ago Doug: It looks like Thome and Schmidt had a double/XBH ratio of less than 45%and Belle was above 45%. 7 years ago Reply to Richard Chester Sorry I messed it up, guys. 7 years ago Reply to Doug Not mad at all. This stuff is awesome. 7 years ago Luke Scott retired with 46% of his hits going for extra bases, 53.8% of which were doubles.
{"url":"http://www.highheatstats.com/2017/02/quiz-modern-mashers-and-one-ahead-of-his-time/","timestamp":"2024-11-06T23:15:24Z","content_type":"text/html","content_length":"171273","record_id":"<urn:uuid:a5a1f4af-44b3-4bf0-ab55-61c83fcfc85e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00025.warc.gz"}
PH3256 – Physics for Information Science Syllabus Regulation 2021 Anna University
Objectives:
• To make the students understand the importance of studying the electrical properties of materials.
• To enable the students to gain knowledge in semiconductor physics.
• To instill knowledge on the magnetic properties of materials.
• To establish a sound grasp of knowledge on different optical properties of materials, optical displays and applications.
• To inculcate an idea of the significance of nanostructures, quantum confinement, ensuing nano device applications and quantum computing.
PH3256 – Physics for Information Science Syllabus
Unit I: Electrical Properties Of Materials
Classical free electron theory – Expression for electrical conductivity – Thermal conductivity, expression – Wiedemann-Franz law – Success and failures – electrons in metals – Particle in a three dimensional box – degenerate states – Fermi-Dirac statistics – Density of energy states – Electron in periodic potential – Energy bands in solids – tight binding approximation – Electron effective mass – concept of hole.
Unit II: Semiconductor Physics
Intrinsic Semiconductors – Energy band diagram – direct and indirect band gap semiconductors – Carrier concentration in intrinsic semiconductors – extrinsic semiconductors – Carrier concentration in N-type & P-type semiconductors – Variation of carrier concentration with temperature – variation of Fermi level with temperature and impurity concentration – Carrier transport in Semiconductor: random motion, drift, mobility and diffusion – Hall effect and devices – Ohmic contacts – Schottky diode.
Unit III: Magnetic Properties Of Materials
Magnetic dipole moment – atomic magnetic moments – magnetic permeability and susceptibility – Magnetic material classification: diamagnetism – paramagnetism – ferromagnetism – antiferromagnetism – ferrimagnetism – Ferromagnetism: origin and exchange interaction – saturation magnetization and Curie temperature – Domain Theory – M versus H behaviour – Hard and soft magnetic materials – examples and uses – Magnetic principle in computer data storage – Magnetic hard disc (GMR sensor).
Unit IV: Optical Properties Of Materials
Classification of optical materials – carrier generation and recombination processes – Absorption, emission and scattering of light in metals, insulators and semiconductors (concepts only) – photo current in a P-N diode – solar cell – LED – Organic LED – Laser diodes – Optical data storage techniques.
Unit V: Nanodevices And Quantum Computing
Introduction – quantum confinement – quantum structures: quantum wells, wires and dots – band gap of nanomaterials. Tunneling – Single electron phenomena: Coulomb blockade – resonant tunneling diode – single electron transistor – quantum cellular automata – Quantum system for information processing – quantum states – classical bits – quantum bits or qubits – CNOT gate – multiple qubits – Bloch sphere – quantum gates – advantage of quantum computing over classical computing.
Text Books:
1. Jasprit Singh, "Semiconductor Devices: Basic Principles", Wiley (Indian Edition), 2007.
2. S.O. Kasap, Principles of Electronic Materials and Devices, McGraw-Hill Education (Indian Edition), 2020.
3. Parag K. Lala, Quantum Computing: A Beginner's Introduction, McGraw-Hill Education (Indian Edition), 2020.
Reference Books:
1. Charles Kittel, Introduction to Solid State Physics, Wiley India Edition, 2019.
2. Y.B. Band and Y. Avishai, Quantum Mechanics with Applications to Nanotechnology and Information Science, Academic Press, 2013.
3. V.V. Mitin, V.A. Kochelap, and M.A. Stroscio, Introduction to Nanoelectronics, Cambridge University Press, 2008.
4. G.W. Hanson, Fundamentals of Nanoelectronics, Pearson Education (Indian Edition), 2009.
5. B. Rogers, J. Adams and S. Pennathur, Nanotechnology: Understanding Small Systems, CRC Press, 2014.