# Create tree association using table
Suppose I have a table like the following (this example is simplified; the actual table is much larger), read from external files:
table = {{1, A, x, y},
{1, B, z, w},
{2, A, v, u},
{2, B, t, s}};
header = {i, j, g, h};
The third and fourth columns are the actual content of my table; say their headers are g and h. The first and second columns are like indices for each row.
The output I am looking for is
tree = <|1 -> <|A -> <|g -> x, h -> y|>,
B -> <|g -> z, h -> w|>|>,
2 -> <|A -> <|g -> v, h -> u|>,
B -> <|g -> t, h -> s|>|>|>;
The reason behind this is that finding rows in a regular table of associations (giving the first and second columns the headers i and j) with Select takes too long (8800 seconds for the actual table). A tree structure should speed things up considerably, because the data is hierarchical in nature: to access one particular row, I just use tree[1][A], or tree[1][A][g] to get the g entry of the first row.
So how would you code this in Mathematica? I am open to completely different approaches as well. Thanks!
• So may I ask why the last level should be <|g -> x, h -> y|> rather than <|g -> h|>? I suppose in your real situation the list is quite large, so it's crucial to know why you chose this way, or our answer will be useless for your real application. – Wjx Aug 12 '16 at 2:54
• Is your table sorted by first element? – JungHwan Min Aug 12 '16 at 3:17
• – Kuba Aug 12 '16 at 5:51
• @Wjx g and h are just indices, the real data is x and y. – Kaa1el Aug 12 '16 at 7:24
• @JHM sorting is irrelevant in my application. – Kaa1el Aug 12 '16 at 7:25
1. less general
# -> <|#2 -> <|g -> #3, h -> #4|>|> & @@@ table // Merge[Association]
<|
1 -> <|A -> <|g -> x, h -> y|>, B -> <|g -> z, h -> w|>|>,
2 -> <|A -> <|g -> v, h -> u|>, B -> <|g -> t, h -> s|>|>
|>
2. more general
newAsso = AssociationThread[header, #] & /@ table
{<|i -> 1, j -> A, g -> x, h -> y|>,
<|i -> 1, j -> B, g -> z, h -> w|>,
<|i -> 2, j -> A, g -> v, h -> u|>,
<|i -> 2, j -> B, g -> t, h -> s|>}
nested = GroupBy[newAsso, {Key@i, Key@j}, KeyDrop[{i, j}]@*First]
<|
1 -> <|A -> <|g -> x, h -> y|>, B -> <|g -> z, h -> w|>|>,
2 -> <|A -> <|g -> v, h -> u|>, B -> <|g -> t, h -> s|>|>
|>
nested[2, A]
<|g -> v, h -> u|>
3. alternatively
GroupBy[
table,
{#[[1]] &, #[[2]] & -> (AssociationThread[{g, h}, #[[3 ;;]]] &)},
First
]
4. or
GroupBy[
table,
{First -> Rest, First -> (AssociationThread[{g, h}, Rest[#]] &)},
First
]
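For readers outside Mathematica, the same two-level grouping can be sketched in Python with plain nested dictionaries (a rough analogue of approach 2, with string stand-ins for the symbolic entries):

```python
# Build a two-level "tree" (nested dicts) from rows whose first two
# columns act as indices and whose remaining columns are the payload.
table = [
    [1, "A", "x", "y"],
    [1, "B", "z", "w"],
    [2, "A", "v", "u"],
    [2, "B", "t", "s"],
]
header = ["g", "h"]  # names for the payload columns

tree = {}
for i, j, *rest in table:
    # setdefault creates the intermediate dict on first access, so each
    # later lookup is O(1) per level instead of a full scan with Select.
    tree.setdefault(i, {})[j] = dict(zip(header, rest))

print(tree[1]["A"])       # {'g': 'x', 'h': 'y'}
print(tree[1]["A"]["g"])  # x
```

The same constant-time lookup motivation applies: `tree[1]["A"]` replaces a linear search over all rows.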
## Precalculus: Mathematics for Calculus, 7th Edition
$(-\infty, -1) \cup (1, \infty)$

$$x^4 > x^2$$

Subtract $x^2$:

$$x^4 - x^2 > 0$$

Factor:

$$x^2(x^2 - 1) > 0$$

$$x^2(x - 1)(x + 1) > 0$$

So we have the key points $x = -1$, $x = 0$, and $x = 1$, which give the following intervals:

$(-\infty, -1)$: positive
$(-1, 0)$: negative
$(0, 1)$: negative
$(1, \infty)$: positive

We need the open intervals where the expression is positive, so the answer is $(-\infty, -1) \cup (1, \infty)$.
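The sign chart can be spot-checked numerically with one test point per interval (Python used purely as a calculator):

```python
# Numerically spot-check the sign chart for x^4 > x^2 with one test
# point per interval, plus the boundary points.
def holds(x):
    return x**4 > x**2

# interval test point -> expected truth of the strict inequality
checks = {-2: True, -0.5: False, 0.5: False, 2: True}
for x, expected in checks.items():
    assert holds(x) == expected, f"unexpected sign at x = {x}"

# Boundary points give equality, so the strict inequality fails there.
assert not holds(-1) and not holds(0) and not holds(1)
print("positive on (-inf, -1) and (1, inf)")
```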
# Addition and subtraction of radicals
The addition and subtraction of radicals is similar to the addition and subtraction of
algebraic expressions with like terms.
Notice how the distributive property is used to combine 6x and 2x.
6x + 2x = (6 + 2)x = 8x
The distributive property can also be used to add and subtract expressions containing radicals.
$$3\sqrt{2} + 7 \sqrt{2} = (3 + 7)\sqrt{2} = 10 \sqrt{2}$$
We call $3\sqrt{2}$ and $7\sqrt{2}$ like radicals.
Like radicals
Like radicals are radical expressions that have the same radicand and the same index.
Only like radicals can be added or subtracted using the distributive property. The following radical expressions cannot be added or subtracted.
a. $3\sqrt{2} + 2\sqrt{3}$

b. $8\sqrt[3]{3} + 5\sqrt{3}$

Expression a. has the same index, but the radicands are not the same.
Expression b. has the same radicand, but the indices are not the same.
## More examples of addition and subtraction of radicals
$$6\sqrt{5} + 3 \sqrt{5} = (6 + 3)\sqrt{5} = 9 \sqrt{5}$$
$$18\sqrt[3]{7} - 5 \sqrt[3]{7}- \sqrt[3]{7} = (18 - 5 - 1)\sqrt[3]{7}$$
$$18\sqrt[3]{7} - 5 \sqrt[3]{7} - \sqrt[3]{7} = 12\sqrt[3]{7}$$
Sometimes, you may need to simplify each radical until you get the same radicand before you add and / or subtract radicals. The next example demonstrates how.
$$\sqrt{50} + \sqrt{8} = \sqrt{25 \times 2} + \sqrt{4 \times 2}$$
$$\sqrt{50} + \sqrt{8} = \sqrt{25} \times \sqrt{ 2} + \sqrt{4} \times \sqrt{2}$$
$$\sqrt{50} + \sqrt{8} = 5\sqrt{ 2} + 2\sqrt{2}$$
$$\sqrt{50} + \sqrt{8} = (5 + 2) \sqrt{ 2}$$
$$\sqrt{50} + \sqrt{8} = 7 \sqrt{ 2}$$
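The simplification can be verified numerically (a quick Python calculator check):

```python
import math

# Check numerically that sqrt(50) + sqrt(8) simplifies to 7*sqrt(2).
lhs = math.sqrt(50) + math.sqrt(8)
rhs = 7 * math.sqrt(2)
assert math.isclose(lhs, rhs)
print(round(lhs, 6))  # 9.899495
```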
### CC2 Lesson 9.1.3 > Problem 9-49
Simplify these expressions.
a. $(x + 3.5) \cdot 2$
$2x+7$
b. $23+5x-(7+2.5x)$
Distribute the subtraction sign to the terms inside the parentheses, and then combine like terms. See part (c) for a worked example.
c. $3x+4.4-2(6.6+x)$
$3x+4.4-2(6.6+x)$
Remember to distribute the negative inside the parentheses.
$3x+4.4-13.2-2x$
Combine like terms by adding or subtracting the constants.
$x-8.8$
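The distribute-and-combine steps in part (c) can be verified by evaluating the original expression and the simplified form at a few points (a quick Python check):

```python
# The original expression and its simplified form should agree everywhere.
def original(x):
    return 3*x + 4.4 - 2*(6.6 + x)

def simplified(x):
    return x - 8.8

for x in (-3, 0, 1.5, 10):
    assert abs(original(x) - simplified(x)) < 1e-9
print("3x + 4.4 - 2(6.6 + x) simplifies to x - 8.8")
```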
## Abstract
Accountability systems are designed to introduce market pressures to increase efficiency in education. One potential channel by which this can occur is to match with effective teachers in the transfer market. I use a smooth maximum score estimator model, North Carolina data, and the state's bonus system to analyze how teachers and schools change matching behavior in response to accountability pressure. Schools under a high degree of accountability pressure match with teachers who are better at raising test scores. Estimation with a control year sample (when pressure was absent) bolsters these findings. Accountability pressure seems to motivate schools to compete for effective teachers.
## 1. Introduction
Accountability systems have a long history of being used by policy makers to introduce market forces into the education production process. The belief is that education production—because of the unobservability of teacher inputs, the noisiness of student outputs, lack of competition, and possibly different objectives pursued by school, district, and state leaderships—is inherently inefficient. The hope is that with the introduction of accountability pressures, we would observe gains in efficiency. Many studies attempt to capture the causal effect of accountability policy on student test scores. The implied causal effect is that teachers and principals change something about their behavior that increases education production. Examples of behavioral change examined (or implied) include redistributing or inducing more effort, changing the composition of classrooms, introducing new didactic methods, emphasizing certain subjects, and cheating the system (e.g., Hanushek and Raymond 2005; Figlio 2006; Ahn 2013).
Accountability policies rarely prescribe the means by which the school must raise test scores, but it is widely accepted that improving the teacher side of the production function is imperative. Nearly all education researchers and practitioners agree that teacher quality is one of the most important determinants of a student's success. The literature has focused on identifying characteristics that proxy for teacher quality, finding that experience, education level, and credentials are correlated with higher student achievement (e.g., Rockoff 2004; Clotfelter, Ladd, and Vigdor 2007; Goldhaber and Anthony 2007). Increasing teacher effort or adopting new didactic techniques may improve test scores, although school administrators have another tool they can use to raise education production: matching with more effective teachers in the teacher transfer labor market.
Some focus has been paid to the strategy of getting rid of particularly ineffective teachers (see Jacob 2011). Because it is nearly impossible (or at least very time-consuming) to remove ineffective or undesirable teachers, increasing teacher quality through selective layoffs is unlikely to be a feasible option for most schools. Nevertheless, it is also true that annual teacher turnover in an average school (in North Carolina) is higher than 10 percent. Thus, principals must replace a significant fraction of their workforce yearly.1 This presents a challenge as well as an opportunity: Successfully matching with the “right” teacher may improve education outcomes, whereas matching with the “wrong” teacher may be detrimental, compared to a random match. Schools that can recruit effective teachers may find it easier to satisfy the criteria set forth by accountability systems, whereas ineffective recruiters may see scores drop no matter what internal changes they institute.
I examine whether an incentive system in North Carolina pushed schools to match with different types of teachers under varying levels of accountability pressure. All teachers at a school were paid a modest \$750 or \$1,500 bonus if the school was able to demonstrate test score growth in line with or exceeding the state's expectations. If the accountability system was effective, a school under more pressure should have attempted to recruit teachers who were better at raising scores. A school that was not pressured may have attempted to recruit teachers for other characteristics. At the same time, schools that were attractive to teachers should have had an easier time finding an acceptable match, and schools that were less desirable may have been forced to accept matches that did not increase academic production.
I use an empirical matching model to see where transferring teachers within North Carolina go in their next teaching assignment. There have been a few studies that have looked at teacher–school matching. Boyd et al. (2013) show that teachers are particularly sensitive to the distance between their school and their place of birth or high school attended. In another study where the authors detail information about all transfer applicants, Boyd et al. (2011) determine that teachers who have better pre-service qualifications are most likely to search, and those who have better in-class performance are most likely to succeed in transferring. Ahn (2015) estimates a structural general equilibrium model of matching, finding that mid-career teachers with four to ten years of experience are the most eager to search and attempt to match with a school with higher academic performance. All schools attempt to match with experienced teachers over newly minted teachers, but schools with higher academic performance are choosier about selecting teachers originating from another high-performing school. Although match success rate and teacher performance (measured by in-class performance or academic status of the originating school) are positively correlated in these studies, how accountability pressure impacts matching is not directly explored and is an under-studied topic. Clotfelter et al. (2004) find that in North Carolina, accountability pressure made it difficult to retain good teachers at under-performing schools. In New York, Boyd et al. (2008) find that the introduction of standardized testing for accountability actually increased retention rates. As I only observe successful matches, I utilize a matching framework and maximum score estimator from Horowitz (2002) and Fox (2009). The next section presents a model of matching between teachers and schools. Section 3 details the North Carolina accountability system. 
Section 4 describes the data, and section 5 introduces the econometric model. I present results in section 6, discuss in more detail in section 7 the impact of accountability by estimating the model in a year when accountability pressure was absent, and conclude in section 8.
## 2. Theoretical Model
Consider a set of teachers $A$ and schools $S$. The utility gain of teacher $a$ moving from his or her original school to the new school $s$ is $u_{as} = X_{as}\beta + \epsilon_{as}$, where the vector $X_{as}$ contains wage and non-wage characteristics and $\epsilon_{as}$ is a match-specific error. Similarly, the marginal utility gain of a school (or principal) $s$ matching with teacher $a$ compared with the outside option $0$ (a new teacher) is $v_{sa} = Z_{sa}\gamma + \eta_{sa}$.2 The joint utility function of the match is3
$$f(a, s) = u_{as} + v_{sa} = X_{as}\beta + Z_{sa}\gamma + \varepsilon_{as},$$
where $\varepsilon_{as} = \epsilon_{as} + \eta_{sa}$.4
Following Fox (2009), if a pair of feasible teacher–school assignments is defined as $\{(a, s), (a', s')\}$ and the same teacher $(a, a')$ and school $(s, s')$ matches flipped are defined as $\{(a, s'), (a', s)\}$, with some data of teachers and schools $X$, I assume the rank ordering5 that the pair $\{(a, s), (a', s')\}$ is observed
if and only if:
$$f(a, s) + f(a', s') \geq f(a, s') + f(a', s).$$
This condition defines pairwise stable matches.
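The rank-ordering condition can be made concrete with a toy check of the pairwise-stability inequality (the joint utilities below are made-up numbers for illustration, not values from the paper):

```python
# Pairwise stability: the observed matches (a, s) and (a2, s2) must beat
# the swapped assignment (a, s2) and (a2, s) in total joint utility.
f = {  # hypothetical joint utilities f(teacher, school)
    ("a", "s"): 5.0, ("a2", "s2"): 4.0,
    ("a", "s2"): 3.5, ("a2", "s"): 4.5,
}

def pairwise_stable(a, s, a2, s2):
    return f[(a, s)] + f[(a2, s2)] >= f[(a, s2)] + f[(a2, s)]

print(pairwise_stable("a", "s", "a2", "s2"))  # True: 9.0 >= 8.0
```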
One limitation to note is that portions of the joint utility function are not exclusively attributable to the teacher or the principal. For instance, highly experienced teachers may match more frequently with high-performing schools because older teachers prefer better-performing students, these schools may believe that experienced teachers will increase performance, or both. Because of this, the observed match (and the estimated utility production) is assumed to be joint, inseparable, and specific to the match.
If all portions of $f(\cdot)$ are attributable to the teacher or the principal separately, incentive constraints sum to the joint production framework and pairwise stability.6 That is, writing $u$ for the teacher's utility and $v$ for the school's:
$$u_{as} + u_{a's'} \geq u_{as'} + u_{a's}$$
and
$$v_{sa} + v_{s'a'} \geq v_{sa'} + v_{s'a}.$$
As abstract as the theory model is, the actual transfer process mirrors the model. It seems that principals and teachers actively search for each other. The hiring of new teachers and transfers in North Carolina is handled at both the school and district levels. Although vacancies are advertised at the school level, employment lists are maintained centrally by the district. From discussions with principals and policy practitioners, it is clear that principals motivated to find the “best” candidates aggressively search through district lists. Ambitious applicants are known to contact principals directly or through other means, such as seeking exposure while working as a student-teacher or lobbying through colleagues.7
It should be noted that the initial decision to search by teachers is taken as given, and I do not model how the expected matching outcome in turn influences the search decision. Although this is a limitation of the model, in this paper I am interested in what the stable distribution of teachers across schools with different accountability pressures looks like, conditional on teachers searching.8 In addition, the modest bonus amount seems unlikely to spur a major decision to move to a new school.9
## 3. The North Carolina Accountability System
The North Carolina accountability program (also known as ABC10) began in the 1995–96 school year. With the exception of minor changes,11 the mechanism of offering cash bonuses for student achievement gains remained stable for more than a decade. All North Carolina students in the public education system in grades 3 through 8 are required to take End-of-Grade exams in reading and mathematics. The tests are on a developmental scale, allowing comparison of scores from consecutive grades. Students entering grade 3 are administered a “pre-test” within the first three weeks of the school year to serve as the baseline performance measure. Using the formula below, the North Carolina Department of Public Instruction (NCDPI) determines the expected achievement gains threshold for each school's bonus eligibility based on the school's students’ performance last year:
$$\Delta_{sgtm} = \bar{\Delta}_{sg} + b_1\,\mathrm{ITP}_{gtm} + b_2\,\mathrm{IRM}_{gtm},$$
where $\Delta_{sgtm}$ is the expected change in the score for subject $s$ for students in grade $g$ in year $t$ in school $m$, compared with the score on the same subject from year $t - 1$. $\bar{\Delta}_{sg}$ is the average gain in scores for students from 1992–93 to 1993–94, the first two years of exam administration. The second and third terms are correction factors: the Index of True Proficiency ($\mathrm{ITP}$) and the Index for the Regression to the Mean ($\mathrm{IRM}$) adjust test-score goals for shocks in students’ performances last year. For a complete description, see Vigdor (2008).
For a school with G tested grades, 2G thresholds are produced, which are compared to the actual average test score improvement. The school scores and threshold scores are differenced, standardized,12 and weighted by the number of students in each grade. If this average, termed the “expected growth” score, is greater than zero, all teachers in the school receive \$750. The procedure is repeated after increasing the growth threshold by 10 percent, to generate the “high growth” score. Teachers in schools that make high growth receive an additional \$750. Although the cash bonus amount may not be enough to induce a teacher to transfer to a different school, the accountability system may induce a principal to recruit effective teachers in two ways. First, the teachers currently at the school collectively have an interest in bringing an effective teacher on board, as this will increase the expected bonus of all teachers. If teachers have formal or informal influence in hiring decisions, the bonus can influence whom the principal recruits.13 Second, schools are recognized for making (or failing to make) expected/high growth, as such records are published online as part of school report cards.14 If academic performance impacts principals’ professional evaluations, recruiting effective teachers to attain higher growth will be important to principals.15
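The two-threshold bonus rule can be sketched in code. This is an illustrative simplification, not NCDPI's implementation: the standardization step is omitted, and all gains, thresholds, and weights below are hypothetical.

```python
# Hypothetical sketch of the ABC bonus rule: per-grade score gains are
# differenced against thresholds, weighted by enrollment, and averaged;
# the bonus depends on the sign of the weighted average, and the "high
# growth" check repeats the comparison with thresholds raised 10%.
def bonus_per_teacher(gains, thresholds, weights):
    total = sum(weights)
    expected = sum(w * (g - t) for g, t, w in zip(gains, thresholds, weights)) / total
    high = sum(w * (g - 1.1 * t) for g, t, w in zip(gains, thresholds, weights)) / total
    bonus = 0
    if expected > 0:
        bonus += 750   # "expected growth" bonus
    if high > 0:
        bonus += 750   # additional "high growth" bonus
    return bonus

# Toy numbers (hypothetical): three tested grade cells in one school.
print(bonus_per_teacher([1.2, 0.9, 1.1], [1.0, 1.0, 1.0], [80, 95, 70]))  # 750
```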
It is also worth noting that schools that “fail” under the ABC system are not necessarily undesirable schools. A high achieving school that maintains its level of excellence could be labeled as failing, and a school with low test scores (and other undesirable characteristics) could see a modest increase in test scores and be labeled as succeeding.16
## 4. Data
I use an administrative dataset for the North Carolina public school system from the academic years 1998–99 to 2002–03. I use 2009–10 data for a control group.17 The dataset contains information on all public schools, students, and teachers in the state. The data are collected annually and can link students and teachers across years.
In the matching decision, student data are aggregated to class level, because a teacher is offered a “bundle” of students at the school. In the education production function, however, I allow a teacher to impact students individually. Table 1 summarizes the student, class, and school characteristics. I focus on grades 3, 4, and 5 for two reasons: (1) only grades 3 to 8 are tested, so there is no good way to measure teacher effectiveness for lower grades, and (2) middle school students in grades above 5 have several teachers throughout the day, and it is therefore difficult to assign credit to a particular teacher for education production due to spillover between subjects.
Table 1.
Summary Statistics
| Variable | Stationary Teachers | Mobile Teachers (= New Entrants + Transfers) | New Entrants | Transfers | Transfers (2009) |
|---|---|---|---|---|---|
| FE reading | 0.0024 (0.1100) | −0.0028 (0.0968) | −0.0018 (0.0944) | −0.0047 (0.1010) | 0.0189 (0.1165) |
| FE mathematics | 0.0056 (0.1718) | −0.0027 (0.1582) | −0.0015 (0.1559) | −0.0050 (0.1623) | 0.0170 (0.1696) |
| Minority | 0.2211 (0.4150) | 0.2198 (0.4141) | 0.2028 (0.4021) | 0.2517 (0.4341) | 0.1425 (0.3499) |
| Female | 0.9252 (0.2630) | 0.8845 (0.3196) | 0.8728 (0.3333) | 0.9066 (0.2911) | 0.9035 (0.2955) |
| W/in district transfer | | | | 0.6808 (0.4663) | 0.6162 (0.4868) |
| Move to rural | | | | −0.0088 (0.4128) | −0.0175 (0.4541) |
| Years of experience | 15.1550 (12.00) | 3.6539 (8.02) | 0.0000 (0.00) | 10.5092 (10.62) | 11.8599 (10.06) |
| Certified | 0.0436 (0.2042) | 0.0211 (0.1439) | 0.0043 (0.0656) | 0.0527 (0.2236) | 0.0723 (0.2593) |
| Class size | 19.6430 (8.03) | 18.5067 (8.45) | 17.9590 (8.72) | 19.5342 (7.82) | 19.6383 (4.79) |
| Class minority % | 0.4665 (0.2805) | 0.5210 (0.2908) | 0.5373 (0.2904) | 0.4632 (0.2891) | 0.3870 (0.2879) |
| Class female % | 0.4650 (0.1639) | 0.4495 (0.1879) | 0.4422 (0.1973) | 0.4632 (0.1679) | 0.4988 (0.1154) |
| Class low parent ed. % | 0.4707 (0.2957) | 0.5148 (0.3139) | 0.5335 (0.3164) | 0.4707 (0.3061) | |
| School minority % | 0.4506 (0.2283) | 0.4917 (0.2369) | 0.5039 (0.2328) | 0.4684 (0.2428) | 0.3844 (0.2656) |
| School female % | 0.4896 (0.0337) | 0.4854 (0.0371) | 0.4896 (0.0368) | 0.4895 (0.0375) | 0.4993 (0.0416) |
| School low parent ed. % | 0.4492 (0.2028) | 0.4740 (0.2127) | 0.4862 (0.2104) | 0.4510 (0.2151) | |
| Expected bonus | 787.67 (171.01) | 761.68 (184.95) | 750.06 (184.29) | 783.49 (184.26) | |
| Observations | 17,177 | 6,434 | 4,197 | 2,237 | 456 |
Notes: NCERDC data from 1998–99 to 2002–03 for first four columns. NCERDC data from 2009–10 for the last column. FE: fixed effect, calculated from an education production function. Low parental education is defined as high school degree or below; not available for 2009 data.
I exclude teachers with no previous teaching experience. By excluding new teachers in the matching model, I am assuming that new teachers do not compete with experienced teachers and are considered the outside option for schools that fail to match with an experienced teacher. Previous research has shown that new teachers are consistently matched with high minority, high poverty schools, and, within the school, they are matched with higher-than-school-average minority and disadvantaged classrooms.18 In addition, new teachers are disadvantaged in the matching process because they are not as likely to be informed about which schools are desirable nor will they have as extensive a social network of teachers and administrators who can lobby on their behalf. For schools, new teachers are virtually indistinguishable among each other. Although I can look ahead to observe a teacher's fixed effect, the principal who is hiring the teacher at year zero cannot see this value. With over 99 percent of new teachers not having certification, the only observable differences among new teachers are gender and race.19 This implies that a school is indifferent among all new teachers (unless the principal has a preference for a gender or a race). The summary statistics show that new teachers are more likely to match with schools that have more students in traditionally disadvantaged groups. Furthermore, within these schools they are assigned to classrooms that have a higher concentration of minority and low parental education students. New teachers seem to be outside options for schools and experienced teachers appear to have more bargaining power.20
The average teacher who does or does not transfer is not drastically different, except for years of experience. New teachers start in classrooms and schools that have more traditionally disadvantaged children. These teachers then transfer to better environments later in their careers. New teachers’ fixed effects are observable ex post from subsequent years’ performances. Again, note that new-teacher characteristics are tabulated to show that they are outside options and noncompetitive with transferring teachers. They are not used in the analysis.
## 5. Empirical Specification
I assume that a teacher's gain in utility is linear in the percent differences in amenities offered by the origin and destination schools. Implicit in this setup is the notion that the destination school must be at least as desirable as the origin school. I allow distance, urbanicity, and classroom and school minority percent to affect a teacher's decision when transferring.21 The portion of the joint production that can reasonably be attributed to teacher utility is constructed in the following manner. Abusing notation from the theory, I define the school and class characteristics vectors at the origin and destination schools as $x_s$ and $x_{s'}$, respectively. These vectors are each weighted by a matching coefficient vector $\beta$.22 Therefore, if a teacher $a$ is offered positions at different schools, he or she is deciding (considering a total of $K + 1$ school or class characteristics) between
$$u_{as'} = \sum_{k=1}^{K} \beta_k\, \frac{x_{s'k} - x_{sk}}{x_{sk}} + \beta_0 \left(\ell_s - \ell_{s'}\right) + \beta_d \ln(d_{ss'}),$$
where $\ell$ is the National Center for Education Statistics metro-centric locale variable, which runs from 1 (large city) to 8 (rural, inside a Metropolitan Statistical Area).23 Therefore, a positive sign for the difference between the old and new school urbanicity indicates that the teacher moved to a more urban location. I take the natural log of miles $d_{ss'}$ between the origin school and possible destination schools as the measure of distance and set the parameter $\beta_d$ equal to −1.24
In this matching model, it is usually impossible to disentangle how much of the joint match utility should be assigned to the teacher or the school (principal). For the percent change characteristics described here, it seems logical to assign this portion of the joint production to teachers. The percentage change in characteristics from the origin to the destination school will impact the teacher's utility, but it is unlikely to impact the school's objective function. One possible way a principal might care is if he or she wants to attract a teacher who has taught previously at a similar school, perhaps because the expectation is that the teacher will be more self-sufficient. In this case, the teacher utility portion is indistinguishable from the principal's objective function.25
I use a teacher's certification status, experience, ethnicity, and math fixed effect to distinguish among teachers. Rothstein (2008) showed that estimates of the fixed effects may be biased because of nonrandom sorting of students into classrooms. The fixed effect estimate may not be the true measure of teacher value added. Nevertheless, the fixed effect is what is observable to the principal, and, in this sense, it may be the more appropriate measure to use. The parts of the joint production function that contain interaction terms between the school's status in the ABC system and characteristics of the teacher that may or may not drive education achievement (such as experience and certification status) are tentatively assigned to the principal. As shown in Ahn (2013), the incentive pressure in the North Carolina system is nonmonotonic in the school's academic performance. For very low probability of qualifying under the ABC standard (when the bonus is essentially unattainable and the school will be labeled as under-performing) and for very high probability (when the bonus is all but assured and the school will be labeled as high achieving), principals may not have a strong incentive to recruit highly effective teachers. It is only in the middle of the probability cumulative distribution function, when the bonus outcome is in doubt, that a principal may look to match with candidates to maximize academic outcome, because hiring one or more effective teachers may be the difference in being recognized as a good school and attaining the bonus. Because of this nonmonotonicity, there is no obvious connection between school quality (such as nonmonetary amenities and student body quality) and accountability pressure. Abusing notation from above, I define the vector of L total characteristics of the teacher that are relevant to maximizing the school objective function as a and the vector of characteristics of an alternative teacher as a′. 
The portion of the joint production that may be attributable to the school side is defined as
$$v_{sa} = \sum_{l=1}^{L} \gamma_l\, p_s\, a_l,$$
where $p_s$ denotes the school's accountability pressure and $a_l$ the teacher characteristics relevant to the school's objective. Although accountability pressure may spur principals to look for efficient teachers, the absolute academic performance of the school may also drive matching results. A school that has a highly regarded reading program, for example, may itself be a draw for teachers, and schools that have such high absolute performance may match with effective teachers purely because of their characteristics. This portion of utility is truly joint, and it is impossible to attribute the production exclusively to one side of the match. If this joint production aspect is not accounted for, and if academic achievement and accountability pressure are strongly correlated, the matching parameter may not be consistent with the preferences of the principal and teacher, much akin to an omitted variable bias problem. To account for this possibility, I include an interaction term between the level achievement of the school and teacher characteristics. If a school $s$ has achievement level $q_s$ and is under accountability pressure level $p_s$, and a teacher $a$ is evaluating the gain in utility from the match, the teacher's total utility and principal's total utility are, respectively:
$$U_{as} = u_{as} + \lambda_1 \sum_{l=1}^{L} q_s\, a_l$$
and
$$V_{sa} = v_{sa} + \lambda_2 \sum_{l=1}^{L} q_s\, a_l.$$
Because the two utilities are summed, the split between $\lambda_1$ and $\lambda_2$ is unidentified. An alternative interpretation of $p_s$ and $q_s$ is the difference between a growth criterion and a level criterion. A school $s$ can be under different levels of pressure with different accountability systems. In North Carolina, schools are categorized by growth and achievement,26 and a school can be considered performing well under one system while being deficient in another. I use the probability density function (PDF) of the likelihood of a school making expected growth according to the ABC system as the measure of accountability pressure ($p_s$),27 and the percentile ordering of schools according to schoolwide performance as the measure of achievement ($q_s$).28
Following Fox (2009) and Horowitz (1992, 2002), I estimate coefficients of the matching model using a smooth maximum score estimation (SMSE) procedure:29
$$\max_{\theta}\; S(\theta) = \sum_{\{a, s\},\, \{a', s'\}} K\!\left(\frac{\Delta f_{\theta}(a, s, a', s')}{h}\right),$$
where the $K(\cdot)$ term smooths an indicator function that equals 1 when $\Delta f_{\theta}(a, s, a', s') > 0$, where:
$$\Delta f_{\theta}(a, s, a', s') = f_{\theta}(X_{as}) + f_{\theta}(X_{a's'}) - f_{\theta}(X_{as'}) - f_{\theta}(X_{a's}),$$
and where $K(\cdot)$ is a continuous function on the real line such that
$$\lim_{v \to -\infty} K(v) = 0 \quad \text{and} \quad \lim_{v \to \infty} K(v) = 1.$$
The $K$ function is analogous to a kernel in nonparametric regressions, except that $K$ behaves like a distribution function. The indicator function (with the coefficient vector $\theta$ collecting the matching parameters above) equals 1 when the actual match ($\{a, s\}$ and $\{a', s'\}$) yields higher benefits than the proposed alternative match ($\{a, s'\}$ and $\{a', s\}$). Standard errors are generated using a sandwich estimator-like asymptotic covariance matrix using first-derivative and second-derivative matrices of the $S$ objective function. The bandwidth $h$ is chosen by a method analogous to the plug-in method in kernel density estimation.
The market is defined as the entire state in each year t. As shown in Fox (2009), I do not need to generate counterfactuals for the entire sample. Because there are over 2,000 teachers who transfer, generating all counterfactuals would create over four million inequalities (for each parameter-vector guess). Instead, I sample 10 percent of the observations in each year as possible counterfactuals to evaluate the SMSE.30 In the choice of the counterfactual (a′ and s′), note that for a “market” (year t) of matched teachers, every other matched teacher (each tied to a particular school s′) is a potential counterfactual.
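The sampling step can be sketched as follows (function and variable names are illustrative, not the paper's code): for each observed match, a 10 percent sample of the other matches in the same year serves as swap partners, and each sampled pair generates one inequality for the objective.

```python
import random

def sample_swap_pairs(matches, frac=0.10, seed=0):
    """For each observed match (a, s) in one market (year), draw a
    random subset of the other matches (a', s') to serve as
    counterfactual swap partners."""
    rng = random.Random(seed)
    k = max(1, int(frac * (len(matches) - 1)))
    pairs = []
    for i, (a, s) in enumerate(matches):
        others = [j for j in range(len(matches)) if j != i]
        for j in rng.sample(others, k):
            a2, s2 = matches[j]
            pairs.append((a, s, a2, s2))
    return pairs
```

With roughly 2,000 movers, this reduces the inequality count from millions to a tractable number per parameter guess.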
## 6. Econometric Results
Table 2 presents the results of the SMSE estimation.31 The top portion, labeled “Teacher Side,” represents matching preferences that could reasonably be attributed to teachers. To generate fixed effect estimates, an education production function was estimated. A one standard deviation increase in teacher fixed effect for reading (mathematics) results in an approximately 0.17 (0.24) standard deviation increase in test scores. A school motivated to raise academic achievement by hiring more productive teachers should select, all else equal, a teacher with a higher fixed effect value.32
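The fixed effects referenced above come from an estimated education production function. A minimal illustrative sketch is the following residual-averaging approach (hypothetical names; not necessarily the paper's exact specification): regress test-score gains on controls, then average residuals within teacher.

```python
import numpy as np

def teacher_fixed_effects(gains, teacher_ids, X):
    """Toy two-step sketch: OLS of test-score gains on student/class
    controls X (with intercept), then the mean residual within each
    teacher serves as that teacher's fixed effect."""
    Z = np.column_stack([np.ones(len(gains)), X])
    coef, *_ = np.linalg.lstsq(Z, gains, rcond=None)
    resid = gains - Z @ coef
    return {t: resid[teacher_ids == t].mean()
            for t in np.unique(teacher_ids)}
```

A principal comparing candidates would then prefer, all else equal, the teacher with the higher fixed-effect value.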
Table 2.
Estimates of Matching Parameters
| Variable | Point Estimate | 95% Confidence Interval |
| --- | --- | --- |
| **Teacher Side** | | |
| Δ distance % | −1 | super consistent |
| Δ urbanicity | −4.1645 | (−9.9742, 1.6452) |
| Δ classroom minority % | −16.0079 | (−21.7881, −10.2277) |
| Δ school minority % | −12.4768 | (−14.3188, −10.6348) |
| Δ classroom minority % × teacher minority | 20.0798 | (13.8278, 26.3318) |
| Δ school minority % × teacher minority | 22.2767 | (20.4347, 24.1187) |
| Δ level achievement | −3.785 | (−6.9693, −0.6007) |
| Level achievement × experience | 24.1429 | (20.7237, 27.5621) |
| Level achievement × math FE | 27.6966 | (24.3399, 31.0533) |
| Level achievement × certification | 21.8939 | (18.7164, 25.0714) |
| Level achievement × teacher minority | −18.5917 | (−21.8512, −15.3322) |
| ABC pressure × experience | 15.7851 | (11.9718, 19.5984) |
| ABC pressure × math FE | 15.8741 | (11.2621, 20.4861) |
| ABC pressure × certification | 9.5156 | (6.4938, 12.5374) |
| ABC pressure × teacher minority | 11.7704 | (7.4591, 16.0817) |
| Level × ABC × experience | 20.6388 | (17.5234, 23.7542) |
| Level × ABC × math FE | 19.9136 | (16.6636, 23.1636) |
| Level × ABC × certification | 9.8859 | (6.7801, 12.9917) |
| Level × ABC × teacher minority | −17.9003 | (−21.0763, −14.7243) |
| h (bandwidth) | 1.127 | |
Notes: Experience and fixed effects (FE) are converted to percentile values. Level achievement is average math proficiency rate at the school level.
The parameter on urbanicity is weakly negative, indicating that moving to a more urban location is not an important consideration in teacher transfers. The statistical insignificance of the parameter implies that teachers do not avoid rural or urban schools because of geography per se, but rather respond to monetary or nonmonetary characteristics of the position.
Class-level and school-level minority percent affect teacher transfer outcomes for white and minority teachers in different ways. White teachers often move to schools that have a lower proportion of minority students compared with their origin schools. The parameter on classroom minority percent is more negative than the parameter on school minority percent, implying that the composition of the teacher's own classroom is the more important consideration. Somewhat surprisingly, the parameter on the difference in achievement levels between origin and destination schools is relatively weak and negative. “Moving up” academically does not seem to be a strong driver in matching. Therefore, the negative parameter on minority student percent for white teachers is most likely attributable to differences in work environment or other unobservable factors that are correlated with the student demographic makeup.33
Minority teachers move toward schools with a higher proportion of minority students. Although the pattern of moving toward schools with fewer traditionally disadvantaged students is as expected for white teachers, the opposite pattern observed for minority teachers is not. There are two hypotheses for these patterns across teacher ethnicity. The first is that teachers prefer to move to schools more heavily populated with students who “look” similar to them. Teachers may prefer to teach students with whom they can more readily identify or to live in and contribute to neighborhoods that mirror their own background and ethnicity. Alternatively, the pattern could be due to accountability pressure. Minority-heavy schools may actively court minority teachers who have demonstrated the ability to reach and motivate minority students.34 In essence, these schools may be induced into seeking out complementary productive capacity in teachers due to incentive pressure. As we will see in the joint portion of utility as well as the control sample analysis, the latter hypothesis is more consistent with the evidence.
The bottom portion of table 2 represents joint match utility based on interaction of teacher characteristics with school level of accountability pressure and level achievement. If all schools were equally motivated to raise education production, every school would have a cardinal preference ordering over teachers, and teachers would then choose their most preferred school in order. That is, the teacher who yields the highest academic achievement gain would select the closest school that offered the best teaching environment, and the teacher with the next highest value would choose the next best school, and so on. Analysis of the joint portion of the matching function reveals that accountability pressure plays an important role in matching certain teacher characteristics to certain schools.
Ignoring the level achievement terms for the moment, the matching coefficients on the interaction between ABC pressure and teacher characteristics can be interpreted in the following way. For illustration, assume that there exist two levels of accountability pressure, P_H (high) and P_L (low), and two levels of some teacher characteristic, c_H (high) and c_L (low). If schools under high pressure match with teachers with high characteristic values, we will observe the combinations (P_H, c_H) and (P_L, c_L) more often, as opposed to (P_H, c_L) and (P_L, c_H), and the matching parameter δ on the interaction will be positive.
In fact, the higher the association between P and c, the greater δ should be. In contrast, if high-pressure schools actually match more often with low-characteristic teachers, then δ will be negative.
This time, the stronger the negative association between P and c, the more negative δ should be. If there is no relationship between P and c, δ will be zero.
Further, for a vector of normalized teacher characteristics (say, between 0 and 1) interacted with the accountability pressure P, the corresponding parameters will also be scaled such that they are directly comparable to determine how much more (or less) a particular characteristic is desirable compared to another characteristic.
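The sign logic above can be illustrated with a toy tabulation (an illustrative sketch, not the estimator itself), coding pressure and the teacher characteristic as 0/1 and comparing assortative matches with cross matches:

```python
def delta_sign(matches):
    """Toy 2x2 illustration: `matches` is a list of (pressure, char)
    pairs, each coded 0 (low) or 1 (high). More assortative pairs
    ((1,1)/(0,0)) than cross pairs ((1,0)/(0,1)) corresponds to a
    positive matching parameter, and vice versa."""
    assort = sum(1 for p, c in matches if p == c)
    cross = len(matches) - assort
    if assort > cross:
        return 1
    if assort < cross:
        return -1
    return 0
```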
If schools with high levels of achievement but low levels of accountability pressure are most often associated with a particular teacher characteristic (that is, the parameter on the level-achievement interaction is greater than the parameter on the ABC-pressure interaction), a teacher with a high value of that characteristic is more likely to match with a school with high level achievement than with a school under ABC accountability pressure.
The triple interaction among ABC pressure, level achievement, and teacher characteristics pays special attention to schools that are doing well in absolute terms and are also under accountability pressure, compared with schools doing poorly in level terms and facing low accountability pressure.
The econometric results show that experience, fixed effect values, and certification status are all positive predictors of matching with schools under accountability pressure. Although teachers with high experience and fixed effects are equally likely to match with pressured schools, certified teachers are only approximately 60 percent as attractive. As certification is often discussed in the literature as being relatively ineffective in increasing test scores, this may indicate that principals prioritize stronger predictors of educational production. Alternatively, certified teachers may be averse to matching with pressured schools. It is especially impressive that the parameter on matching fixed effects is so substantial. Because fixed effects (or even raw year-over-year average test-score gains) are difficult to observe, principals seem to be putting in the effort to identify and match with these teachers.
Like pressured schools, schools with high academic achievement positively match with teachers who have high experience and fixed effects, as well as certification. Unlike pressured schools, however, the parameters on the three characteristics are similar in size. The parameter on certification is about 80 percent of the parameter on fixed effect. Schools with high achievement do not have to compromise in selecting teachers with desirable characteristics.35
The difference in magnitudes of the matching parameters of accountability pressure and level achievement for experience and fixed effects seems to show that accountability is a modest force in driving matching outcomes. Although accountability legislation in North Carolina may be incentivizing principals correctly, in absolute magnitudes, high achieving schools are much more likely to match with teachers with desirable characteristics. Schools under pressure may not have resources available to them to attract their most desired targets when in competition with schools with high achievement.
How pressured schools compete at all against high achieving schools becomes clear with the relative differences in the parameters for certification and minority. Pressured schools are less likely to match with certified teachers. Although the ratio of parameters for experience and fixed effects for pressured to high-achieving schools is approximately 0.67 and 0.57, respectively, the ratio of parameters for certification is about 0.43. Most strikingly, whereas the high-achieving schools are likely to match with white teachers, pressured schools are more likely to match with minority teachers. Then, the strategy for pressured schools becomes clear: These schools match most often with minority teachers without certification, who nonetheless have the experience and fixed effects necessary to help the school qualify for the bonus. In this sense, these schools display a significant amount of sophistication (and effort) in matching with teachers who have high academic impact.
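The ratios quoted above can be checked directly against the Table 2 point estimates. The snippet below simply hard-codes those published values; note the computed experience ratio (≈0.65) is close to the paper's rounded 0.67.

```python
# Ratio of the ABC-pressure interaction to the level-achievement
# interaction for each teacher characteristic (Table 2 estimates).
params = {
    "experience":    (15.7851, 24.1429),
    "math_fe":       (15.8741, 27.6966),
    "certification": (9.5156, 21.8939),
}
ratios = {k: round(abc / level, 2) for k, (abc, level) in params.items()}
# ratios -> {'experience': 0.65, 'math_fe': 0.57, 'certification': 0.43}
```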
The triple interaction term shows that when high-achieving schools are also pressured, their matches prioritize experience and fixed effects over certification. Parameters for experience and fixed effects are roughly double the size of the parameter for certification. Interestingly, the parameter on minority remains negative and large. High-achieving, pressured schools, which on average have a higher fraction of the student body that is white, may be courting white teachers because of the complementarity in education production. The positive parameter on minority for pressured schools in general, and the negative parameter on minority specifically for pressured schools that are high achieving, support the hypothesis that teacher ethnicity sorting may be driven by accountability pressure.
Although the model is useful in its ability to elucidate potential strategic matching behavior by teachers and schools, there are limitations imposed by structure. It is a possibility that teacher transfer policies are made at the district level, with teachers and principals having little role in the process (other than requesting a transfer and posting a vacancy). District superintendents may be able to compel schools to accept particular candidates or veto their hire. Within the context of the matching model, this is a difficult issue to address. Because over 60 percent of transfers happen within the district, most job characteristic differences (from the current district to the future district) that define the teacher's marginal utility gain from transfer would be zero. In addition, the district objective function is unclear, because shifting teachers from one school to another within the same district does not decrease overall achievement (although it may ameliorate or exacerbate inequality across schools).36
## 7. Matching in the Absence of Pressure
The econometric results showed that schools under accountability pressure match with teachers who possess the ability to increase standardized test scores. Schools where the bonus outcome is in doubt are motivated to seek out effective teachers, as the marginal teacher's performance may determine whether the school receives the bonus.
Although we may speculate that schools not under accountability pressure (whether because they are assured of the bonus or are completely out of the running) would not seek to pursue these effective teachers (and thus remain unaffected by accountability), it remains unclear what matching across the entire distribution of schools would look like in the absence of the bonus program. This is important for policy, as the introduction of market pressure impacts strategies of schools under pressure, which in turn may impact strategies (and outcomes) of other schools not under pressure.
Normally, a pre-policy sample would be used to compare differences in parameter estimates across the two regimes. Lack of quality data before the bonus system implementation precludes this possibility. North Carolina completely discontinued its bonus system in 2009, however, while keeping in place the standardized tests that were used to generate the score to determine bonus receipt. Thus, it is possible to use a “post-post-policy” sample to analyze changes in matching behavior of teachers and schools.39 The last column in table 1 presents summary statistics for the 2009 sample, and table 3 presents the matching estimation results.
Table 3.
Estimates of Matching Parameters (Control)
| Variable | Point Estimate | 95% Confidence Interval |
| --- | --- | --- |
| **Teacher Side** | | |
| Δ distance % | −1 | super consistent |
| Δ urbanicity | 0.1356 | (−4.1356, 4.4068) |
| Δ classroom minority % | −24.3885 | (−30.4305, −18.3465) |
| Δ school minority % | −23.8055 | (−29.6121, −17.9989) |
| Δ classroom minority % × teacher minority | −0.3647 | (−2.0261, 1.2967) |
| Δ school minority % × teacher minority | 4.3775 | (2.4769, 6.2781) |
| Δ level achievement | −7.8197 | (−13.1034, −2.536) |
| Level achievement × experience | 24.937 | (21.7359, 28.1381) |
| Level achievement × math FE | 25.3869 | (21.814, 28.9598) |
| Level achievement × certification | 12.3977 | (9.0765, 15.7189) |
| Level achievement × teacher minority | −7.126 | (−10.6009, −3.6511) |
| ABC pressure × experience | 8.0599 | (5.4071, 10.7127) |
| ABC pressure × math FE | −5.8735 | (−8.3833, −3.3637) |
| ABC pressure × certification | −0.1077 | (−2.6142, 2.3988) |
| ABC pressure × teacher minority | 3.184 | (0.3469, 6.0211) |
| Level × ABC × experience | 24.3723 | (21.9194, 26.8252) |
| Level × ABC × math FE | 23.9108 | (21.505, 26.3166) |
| Level × ABC × certification | 9.0297 | (6.6493, 11.4101) |
| Level × ABC × teacher minority | −12.5573 | (−15.225, −9.8896) |
| h (bandwidth) | 1.127 | |
Notes: Experience and fixed effects (FE) are converted to percentile values. Level achievement is average math proficiency rate at the school level.
Interpreting the results should be approached with caution, as several years separate the treatment and control samples. The bonus system was discontinued because of building financial pressure from the Great Recession. Thus, the economic environment faced by teachers was different, which may have impacted the decision to seek transfers. In addition, by 2009, schools had accumulated various sanctions under the NCLB regime, with some schools facing restructuring (leadership turnover) unless they were able to sharply increase proficiency rates of their student body. Teachers may have shunned these schools undergoing turmoil.
Transferring teachers are similar to those in the 1998–99 to 2002–03 sample. Two key differences are that fewer minority teachers move, and when teachers do transfer, they move to schools and classrooms with proportionally fewer minority students. Teachers now appear to seek out schools with more traditionally advantaged students.
The absolute magnitudes differ from table 2, but the signs and relative magnitudes for class and school minority percent are similar—although the interaction effect between minority percent and teacher minority is very different from the treatment years. The parameters on the interaction terms are now close to zero. The positive matching between teacher and school/class ethnicity that was observed when the accountability system was in place has largely disappeared. This provides further support to the hypothesis that minority-heavy schools were previously courting minority teachers for accountability reasons.
Schools with high achievement now focus more on experience and fixed effects, with the parameter on experience and fixed effects about twice the size of the parameter on certification. These relative parameter size differences are similar to the triple interaction results. Once the accountability pressure from the bonus system disappears, in theory, the differences between the level interaction and triple interaction terms should disappear. That the differences are not completely gone may indicate that principals still care about the school's performance, perhaps because these results still mattered for NCLB and the school report card.
The largest change in the matching function estimates is in the hypothetical accountability pressure interaction terms.40 These schools still match positively with experienced teachers, although the parameter is only about one-third the size of the corresponding parameter for high-achieving schools. Recall that these schools were much more competitive with high-achieving schools when accountability pressure existed. Certification is uncorrelated with hypothetical accountability pressure, and most strikingly, fixed effect is negatively associated with hypothetical accountability pressure. Because it takes effort to identify teachers with high fixed effects, in the absence of market pressure, the “haves” (students at high-achieving schools) end up with better teachers, as schools educating the “have-nots” stop searching and competing for effective teachers.
## 8. Conclusions
Using a SMSE framework and the North Carolina administrative education panel dataset, this study analyzes the teacher transfer market, with particular attention paid to the role of accountability pressures in affecting what types of teachers and schools match. The econometric results show that white (minority) teachers transfer to schools with a higher proportion of white (minority) students. This sorting seems to be due to accountability pressure leading schools to seek out complementary education production by matching teachers to student populations who “look more like them.”
The joint portion of the estimates of the match production reveals that schools under pressure are more likely to match with teachers who have high experience and fixed effects. The size of the parameter on fixed effect is particularly encouraging, as it takes some amount of effort to identify teachers with high fixed effects. Accountability pressure may induce principals to seek out teachers who will have a strong impact on test scores or to change the school environment to emphasize testing. Teachers who dislike testing may try to match with schools that are not under accountability pressure, and teachers who excel at raising test scores may seek out schools where their strengths will be more appreciated.41
Estimates on achievement show that experience, fixed effects, and certification status matter for recruiting. Comparing the magnitudes of the parameters between accountability pressure and achievement, high achieving schools are more likely to match with teachers with desirable characteristics, compared with pressured schools. Although accountability pressure is more associated with teacher characteristics that produce education output, schools under accountability pressure often lose out to schools with high achievement (and low pressure) for these teachers. High-achieving schools seem to have the edge in matching with teachers with high experience, fixed effects, and certification, and pressured schools relinquish matching with certified teachers to pursue minority teachers with experience and high fixed effect.
The same matching estimation done with data on a year during which the accountability system was discontinued shows that in the absence of market pressure, schools that would have been under accountability pressure cease to aggressively compete for teachers with high fixed effects. In addition, the relatively sophisticated strategy of matching with minority teachers who have the characteristics that drive education production is largely abandoned. As a result, high-achieving schools are more likely to find themselves matched with experienced teachers with higher fixed effects. Ultimately, the results provide evidence that market-based reform can have a positive impact on the decision-making processes of schools, but more resources to attract teachers must be provided for the policy to be effective.
## Acknowledgments
I am grateful to the Center for Child and Family Policy at the Sanford School of Public Policy at Duke University for access to the North Carolina Department of Public Instruction dataset. I thank the editor and two anonymous referees for comments to improve the paper. All remaining errors are, of course, my own.
## REFERENCES
Ahn, Tom. 2013. The missing link: Estimating the impact of incentives on effort and effort on production using teacher accountability legislation. Journal of Human Capital 7(3): 230–273.

Ahn, Tom. 2015. Matching strategies of teachers and schools in general equilibrium. IZA Journal of Labor Economics 4(1): Article 5. doi:10.1186/s40172-015-0020-x.

Ahn, Tom, and Jacob Vigdor. 2014. When incentives matter too much: Explaining significant responses to irrelevant information. NBER Working Paper No. 20321.

Ahn, Tom, and Jacob Vigdor. 2016. Opening the black box: Behavioral responses of teachers and principals to pay-for-performance incentive programs. Paper presented at the Association for Public Policy Analysis & Management International Conference, London School of Economics, June.

Ballou, Dale. 1996. Do public schools hire the best applicants? Quarterly Journal of Economics 111(1): 97–133.

Boyd, Donald, Hamilton Lankford, Susanna Loeb, and James Wyckoff. 2008. The impact of assessment and accountability on teacher recruitment and retention: Are there unintended consequences? Public Finance Review 36(1): 88–111.

Boyd, Donald, Hamilton Lankford, Susanna Loeb, and James Wyckoff. 2011. The role of teacher quality in retention and hiring: Using applications-to-transfer to uncover preferences of teachers and schools. Journal of Policy Analysis and Management 31(1): 88–110.

Boyd, Donald, Hamilton Lankford, Susanna Loeb, and James Wyckoff. 2013. Analyzing determinants of the matching of public school teachers to jobs: Estimating compensating differentials in imperfect labor markets. Journal of Labor Economics 31(1): 83–117.

Clotfelter, Charles, Helen F. Ladd, and Jacob Vigdor. 2006. Teacher-student matching and the assessment of teacher effectiveness. Journal of Human Resources 41(4): 778–820.

Clotfelter, Charles, Helen F. Ladd, and Jacob Vigdor. 2007. Teacher credentials and student achievement in high school: A cross-subject analysis with student fixed effects. NBER Working Paper No. 13617.

Clotfelter, Charles, Helen F. Ladd, Jacob Vigdor, and Roger Aliaga Diaz. 2004. Do school accountability systems make it more difficult for low-performing schools to attract and retain high-quality teachers? Journal of Policy Analysis and Management 23(2): 251–271.

Dee, Thomas. 2004. Teachers, race and student achievement in a randomized experiment. Review of Economics and Statistics 86(1): 195–210.

Figlio, David N. 2006. Testing, crime, and punishment. Journal of Public Economics 90(4–5): 837–851.

Fox, Jeremy T. 2009. Estimating matching games with transfers. Unpublished paper, University of Michigan.

Goldhaber, Dan, and Emily Anthony. 2007. Can teacher quality be effectively assessed? National Board Certification as a signal of effective teaching. Review of Economics and Statistics 89(1): 134–150.

Hanushek, Eric. 2009. Teacher deselection. In Creating a new teaching profession, edited by Dan Goldhaber and Jane Hannaway, pp. 165–180. Washington, DC: Urban Institute Press.

Hanushek, Eric, and Margaret E. Raymond. 2005. Does school accountability lead to student improvements? Journal of Policy Analysis and Management 24(2): 297–327.

Horowitz, Joel L. 1992. A smoothed maximum score estimator for the binary response model. Econometrica 60(3): 505–531.

Horowitz, Joel L. 2002. Bootstrap critical values for tests based on the smoothed maximum score estimator. Journal of Econometrics 111(2): 141–167.

Jackson, C. Kirabo. 2009. Student demographics, teacher sorting, and teacher quality: Evidence from the end of school desegregation. Journal of Labor Economics 27(2): 213–256.

Jacob, Brian A. 2011. Do principals fire the worst teachers? Educational Evaluation and Policy Analysis 33(4): 403–434.

Rockoff, Jonah. 2004. The impact of individual teachers on student achievement: Evidence from panel data. American Economic Review 94(2): 247–252.

Rothstein, Jesse. 2008. Teacher quality in educational production: Tracking, decay, and student achievement. Quarterly Journal of Economics 125(1): 175–214.

Vigdor, Jacob. 2008. Teacher salary bonuses in North Carolina. National Center on Performance Incentives Working Paper No. 2008-03, Vanderbilt University.
## Notes
1.
See Hanushek (2009), who explores gains by replacing the worst teachers with average ones.
2.
I remain agnostic about the principal's utility function by not assuming that he or she seeks to maximize test scores. A principal may seek to minimize classroom disruption through recruitment of experienced teachers or minimize effort exertion by recruiting less ambitious teachers. It may also make sense to think of the principal and the teachers at the school as a coalition with a common utility function.
3.
For a match to be stable, I assume: (1) it is mutually agreed upon, but a break-up may be unilateral, and (2) a school (teacher) may offer inducements to a teacher (school) to break up a match.
4.
The joint error term η can be interpreted as match-specific error, which could be, for instance, personal/professional compatibility between the principal and the teacher during the recruiting process.
5.
With this assumption, results from Fox (2009) can be used to show that the probability of any hypothetical market-wide teacher–school assignment is equal to the integration of an indicator function of the particular assignment maximizing the joint output of all matches over the error distribution, given an initial parameter guess.
6.
This is equivalent to saying the inducements that must be offered to the teacher (by the school) or the principal (by the teacher) who may want to “trade up” must be so large that all parties find it optimal to maintain the original match. For details, see the online appendix found on Education Finance and Policy’s Web site at www.mitpressjournals.org/doi/suppl/10.1162/EDFP_a_00205.
7.
Some districts have attempted to prevent aggressive principals from recruiting. For instance, Wake County limits the number of teachers principals can take with them as they move to new assignments. The policy was implemented in response to complaints that exiting principals were poaching effective teachers.
8.
See Ahn (2015) for a model that accounts for the initial search decision by a teacher (and how it affects all other teachers’ search decisions).
9.
The bonus may be large enough to induce a short-distance move, perhaps within the same district. An analysis of bonus receipt history of schools from Ahn and Vigdor (2014) shows that there are very few schools that are almost always out of the running for the bonus. In fact, across five years, the average school receives the bonus 3.77 times. A teacher would have to move from one of the worst schools to one of the best for the expected bonus amount to make a substantial difference. Within the district, differences in bonus receipt history of schools is relatively small (see the online appendix).
10.
The acronym stands for strong Accountability, teaching the Basics, and emphasis on local Control.
11.
For instance, middle and high school achievement gains were measured starting in 1997–98. In addition to academic achievement, dropout rates are considered for high schools.
12.
The difference is divided by the standard deviation of the difference across all schools in the state.
13.
The Working Conditions Survey in 2004 asked elementary school teachers (with three or more years of experience) whether they felt they had influence in selecting new teachers. About 45 percent of teachers agreed that they had a role in selecting new teachers, and about 37 percent disagreed.
14.
15.
On the other hand, if schools are also judged on standards that are largely unrelated to growth scores—say, the percentage of staff that is fully certified—this may drive recruiting in a different direction.
16.
There is very low correlation between growth and achievement in North Carolina. Ahn and Vigdor (2014) showed in the years 2005–07 (when the No Child Left Behind [NCLB] system was evaluating achievement), that over 40 percent of schools that made expected growth failed to make adequate yearly progress (AYP), and 30 percent of schools that made AYP failed to make expected growth.
17.
The data, which are collected by the NCDPI, were made available by North Carolina Education Research Data Center (NCERDC; see https://childandfamilypolicy.duke.edu/research/nc-education-data-center/) at the Center for Child and Family Policy. Although student and teacher level data are confidential, aggregate data and summary statistics are publicly available at the NCDPI Web site (www.ncpublicschools.org/data/reports/).
18.
See Clotfelter, Ladd, and Vigdor (2006).
19.
See Ballou (1996) for further evidence that a new teacher is not recruited based on observable criteria, such as the selectivity of her undergraduate institution.
20.
Some principals may prefer new teachers over transfers that have shown themselves to be ineffective. Unfortunately, the matching model is unable to account for such preferences.
21.
The distance measure should be the distance from the teacher's home to the destination school. Because the dataset does not contain this information, I use the address of the teacher's origin school. Alternatively, the distance measure could serve as a proxy for an information acquisition cost.
22.
Distance between schools is excluded from this vector.
23.
For the full category definitions, refer to https://nces.ed.gov/ccd/rural_locales.asp.
24.
The smooth maximum score estimation framework requires one of the parameters to be normalized at +1 or −1. Setting the distance parameter to −1 maximized the number of satisfied inequalities.
25.
Some studies have shown that students paired with a teacher of the same race seem to perform better (see Dee 2004). If pairing of teachers to students has differential impact (based on race, say), and the principal's utility is maximized by matching the “effective” teacher with an “undesirable” class, a principal will have to weigh the utility gain from such a match against the decreased ability to lure the effective teacher to her school. Unfortunately, the matching model is unable to discern between these motives.
26.
The sample runs through the 2002–03 academic year, the inaugural year for NCLB. Because schools are not sanctioned until their second consecutive failure to make adequate yearly progress, no accountability sanctions are associated with deficient achievement, although a school can be publicly labeled as failing. To test whether the 2002–03 sample teachers and schools face differing incentives due to NCLB, the matching model is estimated with this year excluded. Qualitative results remain largely unchanged. See the online appendix.
27.
I convert the PDF value that each school has (because ABC status is assigned at the school level) to a percentile ordering of schools by accountability pressure. Therefore, the school under the greatest amount of pressure (peak of PDF) is assigned a 1, and schools that are under the least amount of pressure (both tails of the PDF) are assigned a 0. See Ahn (2013) for complete details.
28.
Schoolwide achievement is defined as the grade-level proficiency rate across both math and reading.
29.
An alternative maximum score estimation (MSE) following Fox (2009) directly was also attempted, which yielded qualitatively similar results. For the MSE, the estimator would lose the kernel K().
30.
Restricting the sample in this way decreases the number of inequalities to be evaluated to under 20,000 (for each guess at the parameter vector), making estimation feasible.
31.
The SMSE estimation of the matching function is duplicated at half and double the ‘ideal’ bandwidth. The results are qualitatively unchanged. Tables are presented in the online appendix.
32.
See the online appendix for complete results.
33.
There is some corroborating evidence for this result in Ahn (2015). In that study I find that both low- and high-achieving schools have a preference for matching with teachers from high-achieving schools. However, low-achieving schools tend to be less successful luring these teachers. High-achieving schools, on the other hand, match more often with teachers originating from comparable schools. Then, what is observed here may be the expression of strong preferences from the school side, desiring to match with teachers from better schools.
34.
See Jackson (2009). Schools with few minority students may court white teachers for similar reasons.
35.
That certification status is such a strong matching component is puzzling, because the literature has shown that it does not strongly impact achievement. Preference for these teachers may be reflective of the fact that the percentage of certified teachers is a publicly released number, as part of the school's annual report card.
36.
In 2003–04, the superintendent of Charlotte Mecklenburg district instituted a policy forbidding intradistrict transfers of teachers into highly desirable schools (as defined by academic achievement and other measures). The plan was for this list to be updated yearly. The design was clearly intended to stem the bleeding of experienced teachers from “bad” to “good” schools within the district. Just one year later, however, the superintendent retired, and the policy was scrapped by his successor.
37.
See the online appendix for full results.
38.
See the online appendix for analysis.
39.
In 2008, the state unexpectedly cut bonus payouts by 30 percent. The next year, the program was discontinued. I do not use the 2007–08 or 2008–09 samples because transfers would have occurred under uncertainty about the bonus system. By 2009, the writing was on the wall, and it was clear there would be no more bonuses. At the same time, because upwards of \$100 million per year had been saved by discontinuing the program, districts did not have to cut positions during the beginning of the Great Recession, keeping the transfer market stable.
40.
Again, note that accountability pressure from the bonus system no longer exists. The hypothetical pressure that schools would have been under is calculated with the standardized end-of-grade test scores.
41.
See Ahn and Vigdor (2016) to see how principals change the school environment in response to the North Carolina incentive system.
# Lévy's modulus of continuity theorem
In mathematics, Lévy's modulus of continuity theorem gives a result about the almost sure behaviour of an estimate of the modulus of continuity for the Wiener process, which models Brownian motion. It is due to the French mathematician Paul Lévy.
## Statement of the result
Let $B : [0, 1] \times \Omega \to \mathbb{R}$ be a standard Wiener process. Then, almost surely,
$\lim_{h \to 0} \sup_{0 \leq t \leq 1 - h} \frac{| B_{t+ h} - B_{t} |}{\sqrt{2 h \log (1 / h)}} = 1.$
In other words, the sample paths of Brownian motion have modulus of continuity
$\omega_{B} (\delta) = \sqrt{2 \delta \log (1 / \delta)}$
with probability one, for sufficiently small $\delta > 0$.
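As a quick numerical illustration (not part of the original article), one can simulate a discretized Wiener process and compare its largest increment over windows of width $h$ with Lévy's modulus $\sqrt{2 h \log(1/h)}$; the grid size, window width, and seed below are arbitrary choices, and a finite simulation only approximates the almost-sure limit:

```python
import numpy as np

# Simulate a standard Wiener process on [0, 1] and compare the largest
# increment over windows of width h with Levy's modulus sqrt(2 h log(1/h)).
# Purely illustrative: a finite grid only approximates the a.s. limit.
rng = np.random.default_rng(0)
n = 200_000                      # number of grid points on [0, 1]
dt = 1.0 / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

h_steps = 200                    # window width h = 200 * dt = 1e-3
h = h_steps * dt
sup_increment = np.max(np.abs(B[h_steps:] - B[:-h_steps]))
ratio = sup_increment / np.sqrt(2 * h * np.log(1 / h))
print(ratio)                     # typically close to 1 for small h
```

For smaller $h$ (on a correspondingly finer grid) the ratio approaches 1, as the theorem predicts.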
2014-03-30
# Date
Is it just another problem about years, months and days? No, today it has a more romantic meaning: dating with a girl.
iSea now has a date he is anxious to rush to; unfortunately, the city's roads are complicated, so time passes quickly and unnoticed. In order to reach the rendezvous more quickly, iSea has drawn a topographic map of the city. He finds that the city is composed of a number of equal squares; between the squares there are traffic lights controlling the traffic. A basic map example:
Each time, iSea must proceed from the starting point at the upper left corner to the lower right corner. The squares' size is fixed, with a length of two meters and a width of one meter. iSea only crosses an intersection under a green light, which costs him one minute; for safety's sake, the green light must stay on for that whole minute. We also know that iSea's speed along the grid is constant: one meter per minute.
iSea's map gives some more specific information about the city: the number of vertical grids N, the number of horizontal grids M, and the times at which the traffic lights alternate. To simplify the problem, we assume that the day starts at hour 0, and that the traffic lights first allow passage through the intersection from left and right while blocking up and down; after a time Ti, the lights block left and right and allow passage up and down, again for time Ti, and so the cycle goes.
After a simple calculation, iSea finds he may be late for the date. In order to arrive in time, he decides to make one exception: in an emergency he will pass through an intersection even under a red light. But, so as not to lose too much RP, he allows himself to run a red light at most once. Can you help him calculate the earliest time he can arrive at the rendezvous?
There are several test cases in the input.
Each test case begin with two integers N, M (1 < N, M ≤ 30), indicating the number of the vertical grid and the horizontal grid.
Then N-1 lines follow, each line contains M-1 numbers Ti (0 < Ti ≤ 10), their meaning is in the description.
The last line is the start time of iSea, in the form of HH:MM.
The input terminates by end of file marker.
2 2
3
12:03
2 3
2 2
12:00
12:05
12:05
#include <stdio.h>
#include <algorithm>
#include <cstring>
#include <queue>
using namespace std;
#define inf 1000000000
struct edge
{
int to,cost,next;
int T, mod;
}e[32000];
struct node
{
int x,y,z;
node(int _x=0, int _y=0, int _z=0):x(_x),y(_y),z(_z){}
bool operator<(const node& a)const
{
return y>a.y;
}
};
priority_queue<node> que;
bool visit[4000];
int dis[4000][2];
int pre[4000];
int next[8000];
int n,m,num;
int a[40][40];
void addedge(int from, int to, int cost, int T, int mod)
{
e[num].to=to;e[num].cost=cost;
e[num].T=T;e[num].mod=mod;
e[num].next=pre[from];
pre[from]=num;
num++;
}
void make_map()
{
num=1;
memset(pre, 0, sizeof(pre));
for (int i=0; i<n; i++)
for (int j=0; j<m; j++)
{
if (j!=m-1)
{
// edge construction to the right-hand neighbour (omitted in the original post)
}
if (i!=n-1)
{
// edge construction to the lower neighbour (omitted in the original post)
}
}
}
void get_in(int to,int cost,int id)
{
if (dis[to][id]<=cost||dis[to][0]<=cost) return;
dis[to][id]=cost;
que.push(node(to,cost,id));
}
int dfs(int x,int tt) // Dijkstra from node x at start time tt; state z records whether the one allowed red-light crossing has been spent
{
while(!que.empty()) que.pop();
for (int i=0;i<4*n*m;i++)
{
visit[i]=0;
dis[i][0]=dis[i][1]=inf;
}
que.push(node(x,tt,0));
while(!que.empty())
{
node out=que.top();
que.pop();
int x=out.x,y=out.y,z=out.z;
if (visit[x]) continue;
if (z==0)
visit[x]=1;
if (x==4*n*m-1) break;
for (int i=pre[x]; i!=0; i=e[i].next)
if (!visit[e[i].to])
{
int ty;
if (e[i].T==0)
{
ty=y+e[i].cost;
get_in(e[i].to,ty,z);
}
else
{
int tmp=(y+e[i].T-1)/e[i].T;
if (y%e[i].T==0) tmp++;
if (tmp%2==e[i].mod)
{
ty=y+1;
get_in(e[i].to,ty,z);
}
else
{
if (z==0)
{
ty=y+1;
get_in(e[i].to,ty,1);
}
ty=y+e[i].T-y%e[i].T+1;
get_in(e[i].to,ty,z);
}
}
}
}
return min(dis[4*n*m-1][0],dis[4*n*m-1][1]);
}
int main()
{
while (scanf("%d%d",&n,&m)!=EOF)
{
n--;m--;
for (int i=0; i<n; i++)
for (int j=0; j<m; j++)
scanf("%d",&a[i][j]);
make_map();
char tt[50];
int ta,tb;
scanf("%s",tt);
for (int i=0;tt[i];i++)
if (tt[i]==':') tt[i]=' ';
sscanf(tt,"%d %d",&ta,&tb);
ta=ta*60+tb;
int ans=dfs(0,ta);
ta=ans/60;
tb=ans%60;
printf("%02d:%02d\n",ta,tb);
}
return 0;
}
1. Regarding “then color all the nodes not adjacent to that node with the same color”: the program never checks whether those non-adjacent nodes are adjacent to each other. Besides, the result found this way is not necessarily the minimum number of colors.
# Do current Central Banks need to target a monetary aggregate as they set interest rates for a target inflation? [closed]
On reading a textbook, I found that the author states that after the 80s, and the serious consequences of thinking that inflation could be controlled by just targeting a monetary aggregate, modern central banks (CBs) have shifted their paradigm to targeting inflation by setting the interest rate. My question is: if the CB sets the interest rate, does it need to target M0? Or does the CB just print money according to money demand?
Any help would be appreciated.
• It's very difficult to work out what you are asking. What do you mean by "the interest rate" - which one? Which Central Bank? – EnergyNumbers Jan 29 '16 at 10:23
The demand for liquidity is usually defined as $$L(Y,i)$$ where $Y$ is income and $i$ is the nominal interest rate. In equilibrium this equals the aggregate money supply $M$, which is a function of several things, such as the monetary base $M_0$, the reserve ratio, etc. Let us assume that everything except $i$, $M_0$ and $M = f(M_0)$ is constant. Then $$L(Y,i) = M = f(M_0).$$ Demand for liquidity decreases in $i$ and $M$ increases in $M_0$. Thus for any given value of $i$ there is at most one equilibrium value of $M_0$, hence the central bank cannot simultaneously set both $i$ and $M_0$ and always expect an equilibrium outcome. Out of equilibrium, either $i$ or $M$ will shift.
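As a toy numerical sketch of this equilibrium argument (the functional forms, multiplier, and numbers are invented purely for illustration, not taken from the answer):

```python
import math

# Toy illustration: with a specific liquidity demand L(Y, i) and money
# supply f(M0), fixing the interest rate i pins down the equilibrium
# monetary base M0 -- the central bank cannot freely choose both.
def L(Y, i):                  # liquidity demand, decreasing in i (assumed form)
    return Y * math.exp(-2.0 * i)

def f(M0, multiplier=4.0):    # money supply via an assumed fixed money multiplier
    return multiplier * M0

Y = 1000.0
i_target = 0.05
# Equilibrium L(Y, i) = f(M0)  =>  M0 = L(Y, i) / multiplier
M0 = L(Y, i_target) / 4.0
print(round(M0, 2))           # -> 226.21
```

Picking a different $i$ changes the implied $M_0$: the two instruments are not independent.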
# Clarification on marginal, posterior and likelihood distributions?
I would like to clarify a basic question about how to use the concepts of marginal, conditional, and likelihood distributions.
When we observe data $x$ from some phenomenon, are we looking at realizations from the marginal distribution $f(x)$?
When we want to say something about a parameter $\theta$ governing that phenomenon, we are usually interested in its posterior distribution $f(\theta| x)$?
And in order to investigate that conditional distribution we make use of prior information and the hypothesized relationship between $\theta$ and the data $x$. We call this latter relationship the likelihood distribution $f(x|\theta)$?
Is this the right terminology? If not, I would appreciate clarifications where needed. Thanks.
Let's start by talking about a marginal distribution. Whenever we have a joint probability distribution $P(X, Y)$, if we draw samples from this distribution and only look at $X$ we have essentially samples from the marginal distribution $P(X)$. The marginal distribution removes $Y$ from the picture by summing or integrating $P(X,Y)$ over all possible values that $Y$ can take.
$$P(X=x) = \int P(X=x, Y=y) dy$$
In your examples you were mainly dealing with Bayesian parametric models. Let's consider the following model:
$$\mu \sim \mathcal{N}(0, 1),$$
$$X_i \sim \mathcal{N}(\mu, 1).$$
Let's refer to $(X_1, \dots, X_n)$ as $X$. In this problem we have observed $X$ and we want to understand $\mu$. With our model, if we knew $\mu$ calculating the probability of each $X_i$ would be easy. So we can easily evaluate the conditional probability $P(X_i | \mu)$. Because of the importance of this distribution in maximum likelihood estimation this is referred to as the likelihood distribution.
As you said in your question, we are ultimately interested in $P(\mu | X)$. This is another conditional distribution that we refer to as the posterior distribution ("post", or after, we've seen data). Bayes' Theorem tells us how to relate the likelihood distribution and the posterior distribution. However, in this formulation two more factors appear: the prior distribution $P(\mu)$ (frequentists call it subjective, and it is itself a marginal, but don't think about it too hard) and the (often horrible to calculate) marginal distribution $P(X)$.
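For this particular normal-normal model the posterior is available in closed form, which makes a small sketch easy (the data, sample size, and seed below are invented for illustration):

```python
import numpy as np

# Sketch of the answer's normal-normal model: mu ~ N(0, 1), X_i ~ N(mu, 1).
# The posterior is conjugate: mu | X ~ N(n * xbar / (n + 1), 1 / (n + 1)).
rng = np.random.default_rng(1)
mu_true = 2.0
n = 50
X = rng.normal(mu_true, 1.0, n)

post_mean = n * X.mean() / (n + 1)   # shrinks the sample mean toward the prior mean 0
post_var = 1.0 / (n + 1)
print(post_mean, post_var)
```

Note how the posterior mean is a precision-weighted compromise between the prior mean (0) and the sample mean, and the posterior variance shrinks as $n$ grows.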
## On polydispersity and the hard-sphere glass transition
Research output: Working paper
### Documents
Original language: English
Publisher: ArXiv
Published: 17 Oct 2013
### Abstract
We simulate the dynamics of polydisperse hard spheres at high packing fractions, $\phi$, with an experimentally-realistic particle size distribution (PSD) and other commonly-used PSDs such as Gaussian or top-hat. We find that the mode of kinetic arrest depends on the PSD's shape and not only on its variance. For the experimentally-realistic PSD, the largest particles undergo an ideal glass transition at $\phi\sim 0.588$ while the smallest particles remain mobile. Such species-specific localisation was previously observed only in asymmetric binary mixtures. Our findings suggest that the recent observation of ergodic behavior up to $\phi \sim 0.6$ in a hard-sphere system is not evidence for activated dynamics, but an effect of polydispersity.
### Research areas
• cond-mat.soft
# help
There is an in-class prize pool with 100 students, of which 7 winners will be selected. You are friends with 12 of the students. What is the probability that exactly 3 of the 7 winners are your friends?
Jun 14, 2021
$$\frac{C(12, 3)\times C(88,4)}{C(100, 7)}$$.
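A quick sanity check of this hypergeometric count using Python's `math.comb`:

```python
from math import comb

# Hypergeometric probability from the answer: exactly 3 of the 7 winners
# come from your 12 friends among 100 students.
p = comb(12, 3) * comb(88, 4) / comb(100, 7)
print(round(p, 4))   # -> 0.032 (about a 3.2% chance)
```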
# Extension-based Semantics of Abstract Dialectical Frameworks
One of the most prominent tools for abstract argumentation is Dung's framework, AF for short. It is accompanied by a variety of semantics, including grounded, complete, preferred and stable. Although powerful, AFs have their shortcomings, which led to the development of numerous enrichments. Among the most general ones are the abstract dialectical frameworks, also known as ADFs. They make use of the so-called acceptance conditions to represent arbitrary relations. This level of abstraction brings not only new challenges, but also requires addressing existing problems in the field. One of the most controversial issues, recognized not only in argumentation, concerns the support cycles. In this paper we introduce a new method to ensure acyclicity of the chosen arguments and present a family of extension-based semantics built on it. We also continue our research on the semantics that permit cycles and fill in the gaps from the previous works. Moreover, we provide ADF versions of the properties known from the Dung setting. Finally, we also introduce a classification of the developed sub-semantics and relate them to the existing labeling-based approaches.
## 1 Introduction
Over the last years, argumentation has become an influential subfield of artificial intelligence [Rahwan and Simari2009]. One of its subareas is abstract argumentation, which became especially popular thanks to the research of Phan Minh Dung [Dung1995]. Although the framework he developed was relatively limited, as it took into account only the conflict relation between the arguments, it inspired a search for more general models (see [Brewka, Polberg, and Woltran2013] for an overview). Among the most abstract enrichments are the abstract dialectical frameworks, ADFs for short [Brewka and Woltran2010]. They make use of the so-called acceptance conditions to express arbitrary interactions between the arguments. However, a framework cannot be considered a suitable argumentation tool without properly developed semantics.
The semantics of a framework are meant to represent what is considered rational. Given many of the advanced semantics, such as grounded or complete, we can observe that they return the same results when faced with simple, tree-like frameworks. The differences between them become more visible when we work with more complicated cases. On various occasions examples were found for which none of the available semantics returned satisfactory answers. This gave rise to new concepts: for example, for handling indirect attacks and defenses we have the prudent and careful semantics [Coste-Marquis, Devred, and Marquis2005a, Coste-Marquis, Devred, and Marquis2005b]. For the problem of even and odd attack cycles we can resort to some of the SCC-recursive semantics [Baroni, Giacomin, and Guida2005], while for the treatment of self-attackers, sustainable and tolerant semantics were developed [Bodanza and Tohmé2009]. Introducing a new type of relation, such as support, creates additional problems.
The most controversial issue in the bipolar setting concerns the support cycles and is handled differently from formalism to formalism. Among the best known structures are the Bipolar Argumentation Frameworks (BAFs for short) [Cayrol and Lagasquie-Schiex2009, Cayrol and Lagasquie-Schiex2013], Argumentation Frameworks with Necessities (AFNs) [Nouioua2013] and Evidential Argumentation Systems (EASs) [Oren and Norman2008]. While AFNs and EASs discard support cycles, BAFs do not make such restrictions. In ADFs cycles are permitted unless the intuition of a given semantics is clearly against it, for example in stable and grounded cases. This variety is not an error in any of the structures; it is caused by the fact that, in a setting that allows more types of relations, a standard Dung semantics can be extended in several ways. Moreover, since one can find arguments both for and against any of the cycle treatments, lack of consensus as to what approach is the best should not be surprising.
Many properties of the available semantics can be seen as ”inside” ones, i.e. ”what can I consider rational?”. On the other hand, some can be understood as on the ”outside”, e.g. ”what can be considered a valid attacker, what should I defend from?”. Various examples of such behavior exist even in the Dung setting. An admissible extension is conflict–free and defends against attacks carried out by any other argument in the framework. We can then add new restrictions by saying that self–attackers are not rational. Consequently, we limit the set of arguments we have to protect our choice from. In a bipolar setting, we can again define admissibility in the basic manner. However, one often demands that the extension is free from support cycles and that we only defend from acyclic arguments, thus again trimming the set of attackers. From this perspective semantics can be seen as a two–person discussion, describing what ”I can claim” and ”what my opponent can claim”. This is also the point of view that we follow in this paper. Please note that this sort of dialogue perspective can already be found in argumentation [Dung and Thang2009, Jakobovits and Vermeir1999], although it is used in a slightly different context.
Although various extension–based semantics for ADFs have already been proposed in the original paper [Brewka and Woltran2010], many of them were defined only for a particular ADF subclass called the bipolar and were not suitable for all types of situations. As a result, only three of them – conflict–free, model and grounded – remain. Moreover, the original formulations did not solve the problem of positive dependency cycles. Unfortunately, neither did the more recent work into labeling–based semantics [Brewka et al.2013], even though they solve most of the problems of their predecessors. The aim of this paper is to address the issue of cycles and the lack of properly developed extension–based semantics. We introduce a family of such semantics and specialize them to handle the problem of support cycles, as their treatment seems to be the biggest difference among the available frameworks. Furthermore, a classification of our sub–semantics in the inside–outside fashion that we have described before is introduced. We also recall our previous research on admissibility in [Polberg, Wallner, and Woltran2013] and show how it fits into the new system. Our results also include which known properties, such as Fundamental Lemma, carry over from the Dung framework. Finally, we provide an analysis of similarities and differences between the extension and labeling–based semantics in the context of produced extensions.
The paper is structured as follows. In Sections 2 to 4 we provide a background on argumentation frameworks. Then we introduce the new extension–based semantics and analyze their behavior in Section 5. We close the paper with a comparison between the new concepts and the existing labeling–based approach.
## 2 Dung’s Argumentation Frameworks
Let us recall the abstract argumentation framework by Dung [Dung1995] and its semantics. For more details we refer the reader to [Baroni, Caminada, and Giacomin2011].
###### Definition 2.1.
A Dung’s abstract argumentation framework (AF for short) is a pair $F = (A, R)$, where $A$ is a set of arguments and $R \subseteq A \times A$ represents an attack relation.
###### Definition 2.2.
Let $F = (A, R)$ be a Dung’s framework. We say that an argument $a \in A$ is defended by a set $E \subseteq A$ in $F$ (defense is often also termed acceptability: if a set defends an argument, the argument is acceptable w.r.t. this set) if for each $b \in A$ s.t. $(b, a) \in R$, there exists $c \in E$ s.t. $(c, b) \in R$. A set $E \subseteq A$ is:
• conflict–free in $F$ iff for each $a, b \in E$, $(a, b) \notin R$.
• admissible iff it is conflict–free and defends all of its members.
• preferred iff it is maximal w.r.t. set inclusion admissible.
• complete iff it is admissible and all arguments defended by $E$ are in $E$.
• stable iff it is conflict–free and for each $a \in A \setminus E$ there exists an argument $b \in E$ s.t. $(b, a) \in R$.
The characteristic function $F_{AF} : 2^A \to 2^A$ is defined as $F_{AF}(E) = \{a \in A \mid a \text{ is defended by } E\}$. The grounded extension is the least fixed point of $F_{AF}$.
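These definitions can be checked mechanically on small frameworks. The following brute-force sketch (illustrative only, exponential in the number of arguments; the example AF is ours, not from the paper) enumerates admissible and stable extensions:

```python
from itertools import combinations

# Brute-force check of the Dung semantics on a tiny AF (illustrative only).
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "c")}          # a attacks b, b attacks c

def attacks(x, y):
    return (x, y) in R

def conflict_free(E):
    return not any(attacks(x, y) for x in E for y in E)

def defends(E, a):
    # every attacker of a is counter-attacked by some member of E
    return all(any(attacks(c, b) for c in E)
               for b in A if attacks(b, a))

def admissible(E):
    return conflict_free(E) and all(defends(E, a) for a in E)

subsets = [set(s) for r in range(len(A) + 1) for s in combinations(A, r)]
adm = [E for E in subsets if admissible(E)]
stable = [E for E in subsets if conflict_free(E)
          and all(any(attacks(b, a) for b in E) for a in A - E)]
print(adm)      # {a} defends c against b, so {a, c} is admissible
print(stable)   # {a, c} is the unique stable extension here
```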
In the context of this paper, we would also like to recall the notion of range:
###### Definition 2.3.
Let $E^+$ be the set of arguments attacked by $E$ and $E^-$ the set of arguments that attack $E$. $E \cup E^+$ is the range of $E$.
Please note that the sets $E^+$ and $E^-$ can be used to redefine defense. This idea will be partially used in creating the semantics of ADFs. Moreover, there is also an alternative way of computing the grounded extension:
###### Proposition 2.4.
The unique grounded extension of $F = (A, R)$ is defined as the outcome of the following “algorithm”. Let us start with $E = \emptyset$:
1. put each argument which is not attacked in $F$ into $E$; if no such argument exists, return $E$.
2. remove from $F$ all (new) arguments in $E$ and all arguments attacked by them (together with all adjacent attacks) and continue with Step 1.
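The two-step procedure above can be sketched as follows (a hypothetical encoding with attacks as pairs; not code from the paper):

```python
# Sketch of the iterative grounded computation: repeatedly accept unattacked
# arguments and delete them together with everything they attack.
def grounded(A, R):
    A, R = set(A), set(R)
    E = set()
    while True:
        unattacked = {a for a in A if not any((b, a) in R for b in A)}
        if not unattacked:
            return E
        E |= unattacked
        removed = unattacked | {y for (x, y) in R if x in unattacked}
        A -= removed
        R = {(x, y) for (x, y) in R
             if x not in removed and y not in removed}

# a attacks b, b attacks c: the grounded extension is {a, c}
print(grounded({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

On an attack cycle with no unattacked argument (e.g. mutual attacks), the loop stops immediately and returns the empty set, as expected.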
What we have described above forms a family of the extension–based semantics. However, there exist also labeling–based ones [Caminada and Gabbay2009, Baroni, Caminada, and Giacomin2011]. Instead of computing sets of accepted arguments, they generate labelings, i.e. total functions $Lab : A \to \{\mathtt{in}, \mathtt{out}, \mathtt{undec}\}$. Although we will not recall them here, we would like to draw attention to the fact that for every extension we can obtain an appropriate labeling and vice versa. This property is particularly important as it does not fully carry over to the ADF setting.
Finally, we would like to recall several important lemmas and theorems from the original paper on AFs [Dung1995].
###### Lemma 2.5.
Dung’s Fundamental Lemma. Let $E$ be an admissible extension, and $a$ and $b$ two arguments defended by $E$. Then $E' = E \cup \{a\}$ is admissible and $b$ is defended by $E'$.
###### Theorem 2.6.
Every stable extension is a preferred extension, but not vice versa. Every preferred extension is a complete extension, but not vice versa. The grounded extension is the least w.r.t. set inclusion complete extension. The complete extensions form a complete semilattice w.r.t. set inclusion (a partial order $(A, \leq)$ is a complete semilattice iff each nonempty subset of $A$ has a glb and each increasing sequence of $A$ has a lub).
## 3 Argumentation Frameworks with Support
Currently the most recognized frameworks with support are the Bipolar Argumentation Framework BAF [Cayrol and Lagasquie-Schiex2013], Argumentation Framework with Necessities AFN [Nouioua2013] and Evidential Argumentation System EAS [Oren and Norman2008]. We will now briefly recall them in order to further motivate the directions of the semantics we have taken in ADFs.
The original bipolar argumentation framework BAF [Cayrol and Lagasquie-Schiex2009] studied a relation we will refer to as abstract support:
###### Definition 3.1.
A bipolar argumentation framework is a tuple $BF = (A, R, S)$, where $A$ is a set of arguments, $R \subseteq A \times A$ represents the attack relation and $S \subseteq A \times A$ the support.
The biggest difference between this abstract relation and any other interpretation of support is the fact that it did not affect the acceptability of an argument, i.e. even a supported argument could be accepted ”alone”. The positive interaction was used to derive additional indirect forms of attack and based on them, stronger versions of conflict–freeness were developed.
###### Definition 3.2.
We say that an argument $a$ support attacks an argument $b$ if there exists some argument $c$ s.t. there is a sequence of supports from $a$ to $c$ (i.e. $a\, S \ldots S\, c$) and $(c, b) \in R$. We say that $a$ secondary attacks $b$ if there is some argument $c$ s.t. $(a, c) \in R$ and there is a sequence of supports from $c$ to $b$. We say that a set $E \subseteq A$ is:
• +conflict–free iff there are no $a, b \in E$ s.t. $a$ (directly or indirectly) attacks $b$.
• safe iff there is no $b \in A$ s.t. $b$ is at the same time (directly or indirectly) attacked by $E$ and either there is a sequence of supports from an element of $E$ to $b$, or $b \in E$.
• closed under $S$ iff for each $a \in E$, if $(a, b) \in S$ then $b \in E$.
The definition of defense remains the same, and any Dung semantics is specialized by choosing a given notion of conflict–freeness or safety. Apart from the stable semantics, no assumptions as to cycles occurring in the support relation are made. The later developed deductive support [Boella et al.2010] remains in the BAF setting and is also modeled by new indirect attacks [Cayrol and Lagasquie-Schiex2013]. Consequently, acyclicity is not required.
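The indirect attacks above reduce to reachability along the support relation. A sketch under an assumed pair-based encoding (the helper names are ours, not from the BAF literature; note that `support_reach` includes the argument itself, so the empty support chain covers direct attacks under the "directly or indirectly" reading):

```python
# Derive indirect attacks in a BAF from support paths plus direct attacks.
def support_reach(S, a):
    """All arguments reachable from a via a chain of supports (incl. a)."""
    seen, stack = {a}, [a]
    while stack:
        x = stack.pop()
        for (u, v) in S:
            if u == x and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def support_attacks(R, S, a, b):
    # a supports ... supports c, and c attacks b
    return any((c, b) in R for c in support_reach(S, a))

def secondary_attacks(R, S, a, b):
    # a attacks c, and c supports ... supports b
    return any((a, c) in R and b in support_reach(S, c)
               for c in {x for e in R | S for x in e})

R = {("a", "d")}
S = {("a", "b"), ("b", "c"), ("d", "e")}
print(support_attacks(R, S, "a", "d"))    # a itself attacks d -> True
print(secondary_attacks(R, S, "a", "e"))  # a attacks d, d supports e -> True
```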
The most recent formulation of the framework with necessary support is as follows [Nouioua2013]:
###### Definition 3.3.
An argumentation framework with necessities is a tuple $FN = (A, R, N)$, where $A$ is the set of arguments, $R \subseteq A \times A$ represents (binary) attacks, and $N \subseteq (2^A \setminus \{\emptyset\}) \times A$ is the necessity relation.
Given a set $B \subseteq A$ and an argument $a$, $B N a$ should be read as ”at least one element of $B$ needs to be present in order to accept $a$”. The AFN semantics are built around the notions of coherence:
###### Definition 3.4.
We say that a set of arguments $E \subseteq A$ is coherent iff every $a \in E$ is powerful, i.e. there exists a sequence $a_0, \ldots, a_k$ of some elements of $E$ s.t. 1) $a_k = a$, 2) there is no $B \subseteq A$ s.t. $B N a_0$, and 3) for every $0 < i \leq k$ it holds that for every set $B \subseteq A$, if $B N a_i$, then $B \cap \{a_0, \ldots, a_{i-1}\} \neq \emptyset$. A coherent set is strongly coherent iff it is conflict–free.
Although it may look a bit complicated at first, the definition of coherence grasps the intuition that we need to provide sufficient acyclic support for the arguments we want to accept. Defense in AFNs is understood as the ability to provide support and to counter the attacks from any coherent set.
###### Definition 3.5.
We say that a set $E \subseteq A$ defends $a$ if $E \cup \{a\}$ is coherent and for every $b \in A$, if $(b, a) \in R$ then for every coherent set $B$ containing $b$, $E$ attacks $B$.
Using the notion of strong coherence and defense, the AFN semantics are built in a way corresponding to Dung semantics. It is easy to see that, through the notion of coherency, AFNs discard cyclic arguments both on the ”inside” and the ”outside”. This means we cannot accept them in an extension and they are not considered as valid attackers.
The last type of support we will consider here is the evidential support [Oren and Norman2008]. It distinguishes between standard and prima facie arguments. The latter are the only ones that are valid without any support. Every other argument that we want to accept needs to be supported by at least one prima facie argument, be it directly or not.
###### Definition 3.6.
An evidential argumentation system (EAS) is a tuple where is a set of arguments, is the attack relation, and is the support relation. We distinguish a special argument s.t. where ; and where or .
$\eta$ represents the prima facie arguments and is referred to as evidence or environment. The idea that the valid arguments (and attackers) need to trace back to it is captured with the notions of e–support and e–supported attack (the presented definition is slightly different from the one available in [Oren and Norman2008]; the new version was obtained through personal communication with the author).
###### Definition 3.7.
An argument $a$ has evidential support (e–support) from a set $S$ iff $a = \eta$ or there is a non-empty $T \subseteq S$ s.t. $T R_e a$ and $\forall x \in T$, $x$ has e–support from $S \setminus \{a\}$.
###### Definition 3.8.
A set $S$ carries out an evidence supported attack (e–supported attack) on $a$ iff there is $T \subseteq S$ where $T R_a a$, and for all $x \in T$, $x$ has e–support from $S$. An e–supported attack by $S$ on $a$ is minimal iff there is no $S' \subset S$ that carries out an e–supported attack on $a$.
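Tracing e–support back to the evidence is again a least-fixpoint computation. A minimal sketch, under an assumed encoding in which `supports` maps each argument to the list of sets that support it and the string `"eta"` stands for the special evidence argument:

```python
def e_supported(arguments, supports):
    """Return the arguments that have evidential support: an argument is
    e-supported if one of its supporting sets consists only of already
    e-supported arguments, with eta as the base case."""
    supported = {"eta"}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in supported:
                continue
            for s in supports.get(a, []):
                if set(s) <= supported:
                    supported.add(a)
                    changed = True
                    break
    return supported

args = {"eta", "a", "b", "c"}
sup = {"a": [["eta"]], "b": [["a"]], "c": [["c"]]}  # c only supports itself
print(sorted(e_supported(args, sup)))  # -> ['a', 'b', 'eta']
```

A set then carries out an e–supported attack on a target only if all attackers in it pass this check, which is what rules out self-supporting attackers such as `c` above.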
The EAS semantics are built around the notion of acceptability in a manner similar to Dung’s. However, in AFs only the attack relation was considered. In EASs, sufficient support is also required:
###### Definition 3.9.
An argument $a$ is acceptable w.r.t. a set $S$ iff $a$ is e–supported by $S$ and given a minimal e–supported attack by a set $T$ against $a$, it is the case that $S$ carries out an e–supported attack against a member of $T$.
The notion of conflict–freeness is easily adapted to take set, not just binary, conflict into account. With this and the notion of acceptability, the EAS semantics are built just like AF semantics. From the fact that every valid argument needs to be grounded in the environment it clearly follows that EAS semantics are acyclic both on the inside and outside.
## 4 Abstract Dialectical Frameworks
Abstract dialectical frameworks have been defined in [Brewka and Woltran2010] and further studied in [Brewka et al.2013, Polberg, Wallner, and Woltran2013, Strass2013a, Strass2013b, Strass and Wallner2014]. The main goal of ADFs is to be able to express arbitrary relations and avoid the need to extend AFs with new relation sets each time they are needed. This is achieved by means of the acceptance conditions, which define what arguments should be present in order to accept or reject a given argument.
###### Definition 4.1.
An abstract dialectical framework (ADF) is a tuple $D = (S, L, C)$, where $S$ is a set of abstract arguments (nodes, statements), $L \subseteq S \times S$ is a set of links (edges) and $C = \{C_s\}_{s \in S}$ is a set of acceptance conditions, one condition per argument. An acceptance condition is a total function $C_s : 2^{par(s)} \to \{in, out\}$, where $par(s)$ is the set of parents of an argument $s$.
One can also represent the acceptance conditions by propositional formulas [Ellmauthaler2012] rather than functions. By this we mean that given an argument $s$, $C_s = \varphi_s$, where $\varphi_s$ is a propositional formula over the parents of $s$. As we will be making use of both extension and labeling–based semantics, we need to provide the necessary information on interpretations first (more details can be found in [Brewka et al.2013, Polberg, Wallner, and Woltran2013]). Please note that the links in ADFs only represent connections between arguments, while the burden of deciding the nature of these connections falls to the acceptance conditions. Moreover, the parents of an argument can be easily extracted from the conditions. Thus, we will make use of the shortened notation $D = (S, C)$ through the rest of this paper.
### Interpretations and decisiveness
A two (or three)–valued interpretation is simply a mapping that assigns the truth values $t, f$ (respectively $t, f, u$) to arguments. We will be making use both of partial interpretations (i.e. defined only for a subset of $S$) and full ones. In the three–valued setting we will adopt the precision (information) ordering of the values: $u \leq_i t$ and $u \leq_i f$. The pair $(\{t, f, u\}, \leq_i)$ forms a complete meet–semilattice with the meet operation $\sqcap$ assigning values in the following way: $t \sqcap t = t$, $f \sqcap f = f$, and $u$ in all other cases. It can naturally be extended to interpretations: given two interpretations $v$ and $v'$ on $S$, we say that $v'$ contains more information, denoted $v \leq_i v'$, iff $\forall s \in S\; v(s) \leq_i v'(s)$. The same follows for the meet operation. In case $v$ is three–valued and $v'$ two–valued, we say that $v'$ extends $v$. This means that elements mapped originally to $u$ are now assigned either $t$ or $f$. The set of all two–valued interpretations extending $v$ is denoted $[v]_2$.
###### Example 4.2.
Let be a three–valued interpretation. We have two extending interpretations, and . Clearly, it holds that and . However, and are incomparable w.r.t. .
Let now be another three–valued interpretation. gives us a new interpretation : as the assignments of and differ between and , the resulting value is . On the other hand, is in both cases and thus retains its value.
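The ordering and meet operation illustrated above can be sketched directly; the three values are encoded as the strings `"t"`, `"f"`, `"u"`, and the function names are our own:

```python
def meet(v1, v2):
    """Pointwise meet: agreeing assignments are kept, disagreements become u."""
    return {a: (v1[a] if v1[a] == v2[a] else "u") for a in v1}

def leq_i(v1, v2):
    """Information ordering on interpretations: v1 <= v2 iff v2 contains at
    least the information of v1 (u can be refined to t or f, never changed)."""
    return all(v1[a] == "u" or v1[a] == v2[a] for a in v1)

v  = {"a": "t", "b": "u"}
v1 = {"a": "t", "b": "t"}
v2 = {"a": "t", "b": "f"}
assert leq_i(v, v1) and leq_i(v, v2)       # both extend v
assert not leq_i(v1, v2)                   # v1 and v2 are incomparable
assert meet(v1, v2) == {"a": "t", "b": "u"}  # disagreement on b yields u
```

The meet of two extending interpretations thus recovers exactly the shared information, as in the example above.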
We will use $v^x$ to denote the set of arguments mapped to $x$ by $v$, where $x$ is some truth–value. Given an acceptance condition $C_s$ for some argument $s$ and an interpretation $v$, we define a shorthand $v(C_s)$ for the value of the condition under $v$. For a given propositional formula $\varphi$ and an interpretation $v$ defined over all of the atoms of the formula, $v(\varphi)$ will just stand for the value of the formula under $v$. However, apart from knowing the ”current” value of a given acceptance condition for some interpretation, we would also like to know if this interpretation is ”final”. By this we understand that no new information will cause the value to change. This is expressed by the notion of decisive interpretations, which are at the core of the extension–based ADF semantics.
###### Definition 4.3.
Given an interpretation $v$ defined over a set $A'$, a completion of $v$ to a set $A$ where $A' \subseteq A$ is an interpretation $v'$ defined on $A$ in a way that $\forall a \in A'\; v(a) = v'(a)$. By a $t/f/u$ completion we will understand a completion $v'$ that maps all arguments in $A \setminus A'$ respectively to $t/f/u$.
The similarity between the concepts of completion and extending interpretation should not be overlooked. Basically, given a three–valued interpretation defined over , the set precisely corresponds to the set of completions to of the two–valued part of . However, the extension notion from the three–valued setting can be very misleading when used in the extension–based semantics. Therefore, we would like to keep the notion of completion.
###### Definition 4.4.
We say that a two–valued interpretation $v$ is decisive for an argument $s$ iff for any two completions $v_1$ and $v_2$ of $v$ to $A_v \cup par(s)$, it holds that $v_1(C_s) = v_2(C_s)$. We say that $s$ is decisively out/in w.r.t. $v$ if $v$ is decisive for $s$ and all of its completions evaluate $C_s$ to respectively $out$/$in$.
###### Example 4.5.
Let be an ADF depicted in Figure 1. An example of a decisively in interpretation for is . It simply means that knowing that is false, no matter the value of , the implication is always true and thus the acceptance condition is satisfied. From the more technical side, it is the same as checking that both completions to , namely and , satisfy the condition. An example of a decisively out interpretation for is . Again, it suffices to falsify one element of a conjunction to know that the whole formula will evaluate to false.
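Decisiveness can be tested directly by enumerating completions, as in the example above. A minimal sketch under an assumed encoding in which an acceptance condition is a Boolean function over the argument's parents:

```python
from itertools import product

def completions(partial, parents):
    """All two-valued completions of a partial interpretation to the parent set."""
    free = [p for p in parents if p not in partial]
    for bits in product([True, False], repeat=len(free)):
        yield {**partial, **dict(zip(free, bits))}

def decisively(cond, partial, parents):
    """Return 'in'/'out' if every completion evaluates the condition to the
    same value, or None if the interpretation is not decisive."""
    vals = {cond(c) for c in completions(partial, parents)}
    if vals == {True}:
        return "in"
    if vals == {False}:
        return "out"
    return None

# An implication "b implies c": knowing b is false decides the condition.
cond = lambda v: (not v["b"]) or v["c"]
assert decisively(cond, {"b": False}, ["b", "c"]) == "in"
assert decisively(cond, {"b": True}, ["b", "c"]) is None
# A conjunction: falsifying one conjunct decides the condition negatively.
assert decisively(lambda v: v["b"] and v["c"], {"b": False}, ["b", "c"]) == "out"
```

This exhaustive check is exponential in the number of unassigned parents, but it is exactly the semantics of Definition 4.4.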
### Acyclicity
Let us now focus on the issue of positive dependency cycles. Please note we refrain from calling them support cycles in the ADF setting in order not to confuse them with specific definitions of support available in the literature [Cayrol and Lagasquie-Schiex2013].
Informally speaking, an argument takes part in a cycle if its acceptance depends on itself. An intuitive way of verifying the acyclicity of an argument would be to ”track” its evaluation, e.g. in order to accept we need to accept , to accept we need to accept and so on. This basic case becomes more complicated when disjunction is introduced. We then receive a number of such ”paths”, with only some of them proving to be acyclic. Moreover, they might conflict with each other, and we can have a situation in which all acyclic evaluations are blocked and a cycle is forced. Our approach to acyclicity is based on the idea of such ”paths”, accompanied by sets of arguments used to detect possible conflicts.
Let us now introduce the formal definitions. Given an argument $s \in S$ and $x \in \{in, out\}$, we will denote by $min\_dec(x, s)$ the set of minimal two–valued interpretations that are decisively $x$ for $s$. By minimal we understand that both $v^t$ and $v^f$ are minimal w.r.t. set inclusion.
###### Definition 4.6.
Let be a nonempty set of arguments. A positive dependency function on is a function assigning every argument an interpretation s.t. or (null) iff no such interpretation can be found.
###### Definition 4.7.
An acyclic positive dependency evaluation for based on a given pd–function is a pair , where and is a sequence of distinct elements of s.t.: 1) , 2) , 3) , and 4) (please note that it is not required that ). We will refer to the sequence part of the evaluation as the pd–sequence and to as the blocking set. We will say that an argument is pd–acyclic in iff there exists a pd–function on and a corresponding acyclic pd–evaluation for .
We will write that an argument has an acyclic pd–evaluation on if there is some pd–function on from which we can produce the evaluation. There are two ways we can ”attack” an acyclic evaluation. We can either discard an argument required by the evaluation or accept one that is capable of preventing it. This corresponds to rejecting a member of the pd–sequence or accepting an argument from the blocking set. We can now formulate this ”conflict” by means of an interpretation:
###### Definition 4.8.
Let be a set of arguments and s.t. has an acyclic pd–evaluation in . We say that a two–valued interpretation blocks iff s.t. or s.t. .
Let us now show with an example why we require minimality of the chosen interpretations and why we store the blocking set:
###### Example 4.9 (label=example1).
Let us assume an ADF depicted in Figure 2. For argument there exist the following decisively in interpretations: . Only the first two are minimal. Considering would give us a wrong view that requires for acceptance, which is not a desirable reading. The interpretations for and are respectively and . Consequently, we have two pd–functions on , namely and . From them we obtain one acyclic pd–evaluation for : , one for : and none for .
Let us look closer at the set . We can see that is not pd–acyclic in . However, the presence of also ”forces” a cycle between and . The acceptance conditions of all arguments are satisfied, thus this simple check is not good enough to verify whether a cycle occurs. Only looking at the whole evaluations shows us that and are both blocked by . Although and are pd–acyclic in , we see that their evaluations are in fact blocked, and this second level of conflict needs to be taken into account by the semantics.
As a final remark, please note that it can be the case that an evaluation is self–blocking. We can now proceed to recall existing and introduce new semantics of the abstract dialectical frameworks.
## 5 Extension–Based Semantics of ADFs
Although various semantics for ADFs have already been defined in the original paper [Brewka and Woltran2010], only three of them – conflict–free, model and grounded (initially referred to as well–founded) – are still used (issues with the other formulations can be found in [Brewka et al.2013, Polberg, Wallner, and Woltran2013, Strass2013a]). Moreover, the treatment of cycles and their handling by the semantics were not sufficiently developed. In this section we will address all of those issues. Before we continue, let us first motivate our choice of how to treat cycles. The opinions on support cycles differ between the available frameworks, as we have shown in Section 3. Therefore, we would like to explore the possible approaches in the context of ADFs by developing appropriate semantics.
The classification of the sub–semantics that we will adopt in this paper is based on the inside–outside intuition we presented in the introduction. Appropriate semantics will receive a two–element prefix $xy$, where $x$ will denote whether cycles are permitted or not on the ”inside” and $y$ on the ”outside”. We will use $x, y \in \{a, c\}$, where $a$ will stand for acyclic and $c$ for cyclic constraints. As the conflict–free (and naive) semantics focus only on what we can accept, we will drop the prefixing in this case. Although the model, stable and grounded semantics fit into our classification (more details can be found in this section and in [Polberg2014]), they have sufficiently unique naming and further annotations are not necessary. We are thus left with admissible, preferred and complete. The BAF approach follows the idea that we can accept arguments that are not acyclic in our opinion and we allow our opponent to do the same. The ADF semantics we have developed in [Polberg, Wallner, and Woltran2013] also share this view. Therefore, they will receive the $cc$ prefix. On the other hand, AFN and EAS semantics do not permit cycles both in extensions and as attackers. Consequently, the semantics following this line of reasoning will be prefixed with $aa$. Please note we believe that a non–uniform approach can also be suitable in certain situations. By non–uniform we mean not accepting cyclic arguments, but still treating them as valid attackers and so on (i.e. $ac$ and $ca$). However, in this paper we would like to focus only on the two perspectives mentioned before.
### Conflict–free and naive semantics
In the Dung setting, conflict–freeness meant that the elements of an extension could not attack one another. Providing an argument with the required support is then a separate condition in frameworks such as AFNs and EASs. In ADFs, where we lose the set representation of relations in favor of abstraction, not including ”attackers” and accepting ”supporters” is combined into one notion. This represents the intuition of arguments that can stand together presented in [Baroni, Caminada, and Giacomin2011]. Let us now assume an ADF .
###### Definition 5.1.
A set of arguments is conflict–free in iff for all we have .
In the acyclic version of conflict–freeness we also need to deal with the conflicts arising on the level of evaluations. To meet the formal requirements, we first have to show how the notions of range and the set are moved to ADFs.
###### Definition 5.2.
Let a conflict–free extension of and a partial two–valued interpretation built as follows:
1. Let and for every set ;
2. For every argument that is decisively out in , set and add to ;
3. Repeat the previous step until there are no new elements added to .
By we understand the set of arguments mapped to $f$, and we will refer to it as the discarded set. now forms the range interpretation of .
However, the notions of the discarded set and the range are quite strict in the sense that they require an explicit ”attack” on arguments that take part in dependency cycles. This is not always a desirable property. Depending on the approach we might not treat cyclic arguments as valid and hence want them ”out of the way”.
###### Definition 5.3.
Let a conflict–free extension of and a partial two–valued interpretation built as follows:
1. Let . For every set .
2. For every argument s.t. every acyclic pd–evaluation of in is blocked by , set and add to .
3. Repeat the previous step until there are no new elements added to .
By we understand the set of arguments mapped to $f$ by and refer to it as the acyclic discarded set. We refer to as the acyclic range interpretation of .
We can now define an acyclic version of conflict–freeness:
###### Definition 5.4.
A conflict–free extension is a pd–acyclic conflict–free extension of iff every argument has an unblocked acyclic pd–evaluation on w.r.t. .
As we are dealing with a conflict–free extension, all the arguments of a given pd–sequence are naturally both in and . Therefore, in order to ensure that an evaluation is unblocked it suffices to check whether . Consequently, in this case it does not matter w.r.t. which version of range we are verifying the evaluations.
###### Definition 5.5.
The naive and pd–acyclic naive extensions are respectively maximal w.r.t. set inclusion conflict–free and pd–acyclic conflict–free extensions.
###### Example 5.6 (continues=example1).
Recall the ADF . The conflict–free extensions are and . Their standard discarded set in all cases is just – none of the sets has the power to decisively out the non–members. The acyclic discarded set of , and is now , since it has no acyclic evaluation to start with. In the case of , it is , which is to be expected since had the power to block their evaluations. Finally, is . In the end, only and qualify as pd–acyclic conflict–free. The naive and pd–acyclic naive extensions are respectively and .
### Model and stable semantics
The concept of a model basically follows the intuition that if something can be accepted, it should be accepted:
###### Definition 5.7.
A conflict–free extension is a model of if implies .
Although the semantics is simple, several of its properties should be explained. First of all, given a model candidate , checking whether the condition of some argument is satisfied does not verify if the argument depends on itself or if it ”outs” a previously included member of . This means that an argument we should include may break the conflict–freeness of the set. On the other hand, an argument can be ”outed” due to positive dependency cycles, i.e. when its supporter is not present. And since the model semantics makes no acyclicity assumptions on the inside, arguments outed this way can later appear in a model . Consequently, it is easy to see that the model semantics is not universally defined and the produced extensions might not be maximal w.r.t. subset inclusion.
The model semantics was used as a means to obtain stable models. The main idea was to make sure that the model is acyclic. Unfortunately, the reduction method used was not adequate, as shown in [Brewka et al.2013]. However, the initial idea still holds and we use it to define stability. Although the produced extensions are now incomparable w.r.t. set inclusion, the semantics is still not universally defined.
###### Definition 5.8.
A model is a stable extension iff it is pd–acyclic conflict–free.
###### Example 5.9 (continues=example1).
Let us again come back to the ADF . The conflict–free extensions were and . The first two are not models, as in the first case and in the latter can be accepted. Recall that and were the pd–acyclic conflict–free extensions. The only one that is also a model is and thus we obtain our single stable extension.
### Grounded semantics
Next comes the grounded semantics [Brewka and Woltran2010]. Just like in the Dung setting, it preserves the unique–status property, i.e. it produces only a single extension. Moreover, it is defined in terms of a special operator:
###### Definition 5.10.
Let , where and . Then is the grounded model of iff for some is the least fix–point of .
Although it might look complicated at first, this is nothing more than analyzing decisiveness using a set rather than an interpretation form (please see [Polberg2014] for more details). Thus, one can also obtain the grounded extension by an ADF version of Proposition 2.4:
###### Proposition 5.11.
Let $v$ be an empty interpretation. For every argument $s$ that is decisively in w.r.t. $v$, set $v(s) = t$, and for every argument $s$ that is decisively out w.r.t. $v$, set $v(s) = f$. Repeat the procedure until no further assignments can be done. The grounded extension of $D$ is then $v^t$.
###### Example 5.12 (continues=example1).
Recall our ADF . Let be an empty interpretation. It is easy to see that no argument is decisively in/out w.r.t. . If we analyze , it is easy to see that if we accept , the condition is out, but if we accept both and it is in again. Although both and are out in , the condition of can be met if we accept , and condition of if we accept . Hence, we obtain no decisiveness again. Thus, is the grounded extension.
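The iterative procedure of Proposition 5.11 can be sketched as follows. The encoding is our own assumption: acceptance conditions are Boolean functions over (assignments to) all arguments, and decisiveness is checked by brute-force enumeration of completions.

```python
from itertools import product

def grounded(args, conds):
    """Grounded extension: starting from the empty interpretation, repeatedly
    assign True to decisively-in and False to decisively-out arguments."""
    v = {}  # partial two-valued interpretation, argument -> bool

    def decisive_value(a):
        # Evaluate a's condition under every completion of v; if all agree,
        # return that value, otherwise None (not decisive).
        free = [x for x in args if x not in v]
        vals = set()
        for bits in product([True, False], repeat=len(free)):
            full = {**v, **dict(zip(free, bits))}
            vals.add(conds[a](full))
        if vals == {True}:
            return True
        if vals == {False}:
            return False
        return None

    changed = True
    while changed:
        changed = False
        for a in args:
            if a in v:
                continue
            d = decisive_value(a)
            if d is not None:
                v[a] = d
                changed = True
    return {a for a in v if v[a]}  # the arguments mapped to t

# Simple chain: a holds unconditionally, b requires a, c requires not b.
conds = {"a": lambda w: True, "b": lambda w: w["a"], "c": lambda w: not w["b"]}
print(sorted(grounded(["a", "b", "c"], conds)))  # -> ['a', 'b']
```

On the chain example, `a` is decided first, which decides `b`, which in turn decides `c` negatively, mirroring the round-by-round assignments of the proposition.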
In [Polberg, Wallner, and Woltran2013] we have presented our first definition of admissibility, before the sub–semantics classification was developed. The new, simplified version of our previous formulation, is now as follows:
###### Definition 5.13.
A conflict–free extension is cc–admissible in iff every element of is decisively in w.r.t. its range interpretation .
It is important to understand how decisiveness encapsulates the defense known from the Dung setting. If an argument is decisively in, then any set of arguments that would have the power to out its acceptance condition is ”prevented” by the interpretation. Hence, the statements required for the acceptance of are mapped to $t$ and those that would make us reject are mapped to $f$. The former encapsulate the required support, while the latter contain the ”attackers” known from the Dung setting.
When working with the semantics that have to be acyclic on the ”inside”, we not only have to defend the members, but also their acyclic evaluations:
###### Definition 5.14.
A pd–acyclic conflict–free extension is aa–admissible iff every argument in 1) is decisively in w.r.t. its acyclic range interpretation , and 2) has an unblocked acyclic pd–evaluation on s.t. all members of its blocking set are mapped to $f$ by .
###### Definition 5.15.
A set of arguments is xy–preferred iff it is maximal w.r.t. set inclusion xy–admissible.
The following example shows that decisiveness encapsulates defense of an argument, but not necessarily of its evaluation:
###### Example 5.16.
Let us modify the ADF depicted in Figure 2 by changing the condition of : . The new pd–evaluations are for , for and for . The conflict–free extensions are now and . Apart from the last, all are pd–acyclic conflict–free. and are trivially both aa– and cc–admissible, and is cc–admissible. The standard and acyclic discarded sets of are both empty, thus is not decisively in (we can always utter ) and the set is neither aa– nor cc–admissible. The discarded sets of are also empty; however, it is easy to see that both and are decisively in. Although uttering would not change the values of the acceptance conditions, it blocks the pd–evaluations of and . Thus, is cc–, but not aa–admissible. The cc– and aa–preferred extensions are respectively and .
###### Example 5.17 (continues=example1).
Let us come back to the original ADF . and were the standard and pd–acyclic conflict–free extensions. is trivially both aa– and cc–admissible, while and are cc–admissible. The standard discarded sets of and are both empty, while the acyclic ones are . Consequently, is aa–, but not cc–admissible. is both, but for different reasons; in the cc–case, all arguments are decisively in (due to cyclic defense). In the aa–approach, they are again decisively in, but the evaluations are ”safe” only because is not considered a valid attacker.
### Complete semantics
Completeness represents an approach in which we have to accept everything we can safely conclude from our opinions. In the Dung setting, ”safely” means defense, while in the bipolar setting it is strengthened by providing sufficient support. In a sense, it follows the model intuition that what we can accept, we should accept. However, now we not only use an admissible base in place of a conflict–free one, but also defend the arguments in question. Therefore, instead of checking if an argument is in, we want it to be decisively in.
###### Definition 5.18.
A cc–admissible extension is cc–complete in iff every argument in that is decisively in w.r.t. its range interpretation is in .
###### Definition 5.19.
An aa–admissible extension is aa–complete in iff every argument in that is decisively in w.r.t. its acyclic range interpretation is in .
Please note that in the case of aa–complete semantics, no further ”defense” of the evaluation is needed, as visible in AA Fundamental Lemma (i.e. Lemma 5.22). This comes from the fact that if we already have a properly ”protected” evaluation, then appending a decisively in argument to it is sufficient for creating an evaluation for this argument.
###### Example 5.20 (continues=example1).
Let us now finish with the ADF . It is easy to see that all cc–admissible extensions are also cc–complete. However, only is aa–complete. Due to the fact that is trivially included in any discarded set, can always be accepted (thus, is disqualified). Then, from acceptance of , acceptance of follows easily and is disqualified.
### Properties and examples
Although the study provided here will by no means be exhaustive, we would like to show how the lemmas and theorems from the original paper on AFs [Dung1995] carry over into this new setting. The proofs can be found in [Polberg2014].
Even though every pd–acyclic conflict–free extension is also conflict–free, it does not mean that every aa–admissible extension is cc–admissible. These approaches differ significantly. The first one makes additional restrictions on the ”inside”, but due to the acyclicity requirements on the ”outside” there are fewer arguments a given extension has to defend against. The latter allows more freedom as to what we can accept, but also gives this freedom to the opponent, thus there are more possible attackers. Moreover, it should not come as a surprise that these differences pass over to the preferred and complete semantics, as visible in Example 5.24. Our results show that the admissible sub–semantics satisfy the Fundamental Lemma.
###### Lemma 5.21.
CC Fundamental Lemma: Let be a cc–admissible extension, its range interpretation and two arguments decisively in w.r.t. . Then is cc–admissible and is decisively in w.r.t. .
###### Lemma 5.22.
AA Fundamental Lemma: Let be an aa-admissible extension, its acyclic range interpretation and two arguments decisively in w.r.t. . Then is aa–admissible and is decisively in w.r.t. .
The relations between the semantics presented in [Dung1995] are preserved by some of the specializations:
###### Theorem 5.23.
Every stable extension is an aa–preferred extension, but not vice versa. Every xy–preferred extension is an xy–complete extension for $xy \in \{aa, cc\}$, but not vice versa. The grounded extension might not be an aa–complete extension. The grounded extension is the least w.r.t. set inclusion cc–complete extension.
###### Example 5.24 (label=ex1).
Let be the ADF depicted in Figure 3. The obtained extensions are visible in Table 1. The conflict–free, model, stable, grounded, admissible, complete and preferred semantics will be abbreviated to CF, MOD, STB, GRD, ADM, COMP and PREF. The prefixing is visible in the second column. In the case of conflict–freeness, will denote the standard and the pd–acyclic one.
## 6 Labeling–Based Semantics of ADFs
The two approaches towards labeling–based semantics of ADFs were developed in [Strass2013a, Brewka et al.2013]. We will focus on the latter one, based on the notion of a three–valued characteristic operator:
###### Definition 6.1.
Let $\mathcal{V}$ be the set of all three–valued interpretations defined on $S$. The three–valued characteristic operator of $D$ is a function $\Gamma_D : \mathcal{V} \to \mathcal{V}$ s.t. $\Gamma_D(v)(s) = \sqcap \{w(C_s) \mid w \in [v]_2\}$.
Verifying the value of an acceptance condition under the set of extensions of a three–valued interpretation is exactly checking its value in the completions of the two–valued part of $v$. Thus, an argument that is $t$/$f$ in $\Gamma_D(v)$ is decisively in/out w.r.t. the two–valued part of $v$.
It is easy to see that in a certain sense this operator allows self–justification and self–falsification, i.e. the status of an argument can depend on itself. Take, for example, a self–supporter; if we generate an interpretation in which it is false then, obviously, it will remain false. The same follows if we assume it to be true. This results from the fact that the operator works on interpretations defined on all arguments, thus allowing a self–dependent argument to affect its own status.
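This behavior can be seen in a small sketch of the operator. The encoding is our own assumption (values as `"t"`/`"f"`/`"u"` strings, conditions as Boolean functions over all arguments): an argument assumed false stays false, because the operator only ranges over completions of the current interpretation.

```python
from itertools import product

def gamma(v, args, conds):
    """One application of the three-valued characteristic operator: an
    argument becomes t/f if its condition agrees on every two-valued
    extension of v, and u otherwise."""
    two_valued = [a for a in args if v[a] != "u"]
    free = [a for a in args if v[a] == "u"]
    out = {}
    for a in args:
        vals = set()
        for bits in product([True, False], repeat=len(free)):
            full = {x: v[x] == "t" for x in two_valued}
            full.update(dict(zip(free, bits)))
            vals.add(conds[a](full))
        out[a] = "t" if vals == {True} else "f" if vals == {False} else "u"
    return out

# b is a self-supporter; assuming it false keeps it false, which in turn
# decides a (whose condition is "not b") positively.
conds = {"a": lambda w: not w["b"], "b": lambda w: w["b"]}
v = {"a": "u", "b": "f"}
print(gamma(v, ["a", "b"], conds))  # -> {'a': 't', 'b': 'f'}
```

Had we assumed `b` true instead, the operator would keep it true, which is exactly the self-justification discussed above.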
The labeling–based semantics are now as follows:
###### Definition 6.2.
Let be a three–valued interpretation for and its characteristic operator. We say that is:
• three–valued model iff for all we have that implies that ;
• complete iff ;
• admissible iff $v \leq_i \Gamma_D(v)$;
• preferred iff it is $\leq_i$–maximal admissible;
• grounded iff it is the least fixpoint of .
Although in the case of the stable semantics we formally receive a set, not an interpretation, this difference is not significant. As nothing is left undecided, we can safely map all remaining arguments to . The current state-of-the-art definition [Strass2013a, Brewka et al.2013] is as follows:
###### Definition 6.3.
Let be a model of . A reduct of w.r.t. is , where and for we set . Let be the grounded model of . Model is stable iff .
###### Example 6.4 (continues=ex1).
Let us now compute the possible labelings of our ADF. As there are over twenty possible three–valued models, we will not list them. We have in total 15 admissible interpretations: and . Out of them to are complete. The ones that maximize the information content in this case are the ones without any mappings: , , and . and are stable and finally, is grounded.
### Comparison with the extension–based approach
We will start the comparison of extensions and labelings by relating conflict–freeness and three–valued models. Please note that the intuitions behind two–valued and three–valued models are completely different and should not be confused. We will say that an extension and a labeling correspond iff .
###### Theorem 6.5.
Let be a conflict–free and a pd–acyclic conflict–free extension. The –completions of , and are three–valued models.
Let us continue with the admissible semantics. First, we will tie the notion of decisiveness to admissibility, following the comparison of completions and extending interpretations that we have presented in Section 4.
###### Theorem 6.6.
Let be a three–valued interpretation and its (maximal) two–valued sub–interpretation. is admissible iff all arguments mapped to are decisively in w.r.t. and all arguments mapped to are decisively out w.r.t. .
Please note that this result does not imply that admissible extensions and labelings ”perfectly” coincide. In labelings, we guess an interpretation, and thus assign initial values to arguments that we want to verify later. If they are self–dependent, this of course affects the outcome. In the extension–based approaches, we distinguish whether this dependency is permitted. Therefore, the aa– and cc– approaches will have a corresponding labeling, but not vice versa.
###### Theorem 6.7.
Let be a cc–admissible and an aa–admissible extension. The –completions of and are admissible labelings.

Let us now consider the preferred semantics. Information maximality is not the same as maximizing the set of accepted arguments, and due to the behavior of we can obtain a preferred interpretation that maps to a subset of the arguments of another interpretation. Consequently, we fail to receive an exact correspondence between the semantics. By this we mean that given a framework there can exist a preferred extension without a labeling counterpart and a labeling without an appropriate extension of a given type.
###### Theorem 6.8.
For any xy–preferred extension there might not exist a corresponding preferred labeling and vice versa.
###### Example 6.9.
Let us look at , as depicted in Figure 3(a). and cannot form a conflict–free extension to start with, so we are only left with . However, the attack from on can only be overpowered by self–support, thus it cannot be part of an aa–admissible extension. Therefore, we obtain only one aa–preferred extension, namely the empty set. The single preferred labeling would be , and we can see there is no correspondence between the results. On the other hand, there is a correspondence with the cc–preferred extension .
Finally, we have , depicted in Figure 3(b). The preferred labeling is . The single cc–preferred extension is and again, we receive no correspondence. However, it corresponds to the aa–preferred extension .
The labeling–based complete semantics can also be defined in terms of decisiveness:
###### Theorem 6.10.
Let be a three–valued interpretation and its (maximal) two–valued sub–interpretation. is complete iff all arguments decisively out w.r.t. are mapped to by and all arguments decisively in w.r.t. are mapped to by .
Fortunately, just like in the case of admissible semantics, complete extensions and labelings partially correspond:
###### Theorem 6.11.
Let be a cc–complete and an aa–complete extension. The –completions of and are complete labelings.
Please recall that in the Dung setting, extensions and labelings agreed on the sets of accepted arguments. In ADFs, this relation is often only one–way – as in the case of the admissible and complete cc– and aa– sub–semantics – or simply nonexistent, as in the preferred approach. In this context, the labeling–based admissibility (and completeness) can be seen as the most general one. This does not mean that specializations, especially those handling cycles, are not needed. Even more so, as to the best of our knowledge no methods for ensuring acyclicity in a three–valued setting are yet available.
Due to the fact that the grounded semantics has a very clear meaning, it is no wonder that both available approaches coincide, as already noted in [Brewka et al.2013]. We conclude this section by relating both available notions of stability. The relevant proofs can be found in [Polberg2014].
###### Theorem 6.12.
The two–valued grounded extension and the grounded labeling correspond.
###### Theorem 6.13.
A set of arguments is labeling stable iff it is extension–based stable.
## 7 Concluding Remarks
In this paper we have introduced a family of extension–based semantics as well as their classification w.r.t. positive dependency cycles. Our results also show that they satisfy ADF versions of Dung’s Fundamental Lemma and that appropriate sub–semantics preserve the relations between stable, preferred and complete semantics. We have also explained how our formulations relate to the labeling–based approach. Our results show that the precise correspondence between the extension–based and labeling–based semantics, that holds in the Dung setting, does not fully carry over.
It is easy to see that in a certain sense, labelings provide more information than extensions due to distinguishing false and undecided states. Therefore, one of the aims of our future work is to present the sub–semantics described here also in a labeling form. However, since our focus is primarily on accepting arguments, a comparison w.r.t. information content would not be fully adequate for our purposes and the current characteristic operator could not be fully reused. We hope that further research will produce satisfactory formulations.
## References
• [Baroni, Caminada, and Giacomin2011] Baroni, P.; Caminada, M.; and Giacomin, M. 2011. An introduction to argumentation semantics. Knowledge Eng. Review 26(4):365–410.
• [Baroni, Giacomin, and Guida2005] Baroni, P.; Giacomin, M.; and Guida, G. 2005. SCC-Recursiveness: A general schema for argumentation semantics. Artif. Intell. 168(1-2):162–210.
• [Bodanza and Tohmé2009] Bodanza, G. A., and Tohmé, F. A. 2009. Two approaches to the problems of self-attacking arguments and general odd-length cycles of attack. Journal of Applied Logic 7(4):403 – 420. Special Issue: Formal Models of Belief Change in Rational Agents.
• [Boella et al.2010] Boella, G.; Gabbay, D.; van der Torre, L.; and Villata, S. 2010. Support in abstract argumentation. In Proc. of COMMA 2010, 111–122. Amsterdam, The Netherlands, The Netherlands: IOS Press.
• [Brewka and Woltran2010] Brewka, G., and Woltran, S. 2010. Abstract dialectical frameworks. In Proc. KR ’10, 102–111. AAAI Press.
• [Brewka et al.2013] Brewka, G.; Ellmauthaler, S.; Strass, H.; Wallner, J. P.; and Woltran, S. 2013. Abstract dialectical frameworks revisited. In Proc. IJCAI’13, 803–809. AAAI Press.
• [Brewka, Polberg, and Woltran2013] Brewka, G.; Polberg, S.; and Woltran, S. 2013. Generalizations of Dung frameworks and their role in formal argumentation. Intelligent Systems, IEEE PP(99). Forthcoming.
• [Caminada and Gabbay2009] Caminada, M., and Gabbay, D. M. 2009. A logical account of formal argumentation. Studia Logica 93(2):109–145.
• [Cayrol and Lagasquie-Schiex2009] Cayrol, C., and Lagasquie-Schiex, M.-C. 2009. Bipolar abstract argumentation systems. In Simari, G., and Rahwan, I., eds., Argumentation in Artificial Intelligence. 65–84.
• [Cayrol and Lagasquie-Schiex2013] Cayrol, C., and Lagasquie-Schiex, M.-C. 2013. Bipolarity in argumentation graphs: Towards a better understanding. Int. J. Approx. Reasoning 54(7):876–899.
• [Coste-Marquis, Devred, and Marquis2005a] Coste-Marquis, S.; Devred, C.; and Marquis, P. 2005a. Inference from controversial arguments. In Sutcliffe, G., and Voronkov, A., eds., Proc. LPAR ’05, volume 3835 of LNCS, 606–620. Springer Berlin Heidelberg.
• [Coste-Marquis, Devred, and Marquis2005b] Coste-Marquis, S.; Devred, C.; and Marquis, P. 2005b. Prudent semantics for argumentation frameworks. In Proc. of ICTAI’05, 568–572. Washington, DC, USA: IEEE Computer Society.
• [Dung and Thang2009] Dung, P. M., and Thang, P. M. 2009. A unified framework for representation and development of dialectical proof procedures in argumentation. In Proc. of IJCAI’09, 746–751. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
• [Dung1995] Dung, P. M. 1995.
On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games.
Artif. Intell. 77:321–357.
• [Ellmauthaler2012] Ellmauthaler, S. 2012. Abstract dialectical frameworks: properties, complexity, and implementation. Master’s thesis, Faculty of Informatics, Institute of Information Systems, Vienna University of Technology.
• [Jakobovits and Vermeir1999] Jakobovits, H., and Vermeir, D. 1999. Dialectic semantics for argumentation frameworks. In Proc. of ICAIL ’99, 53–62. New York, NY, USA: ACM.
• [Nouioua2013] Nouioua, F. 2013. AFs with necessities: Further semantics and labelling characterization. In Liu, W.; Subrahmanian, V.; and Wijsen, J., eds., Proc. SUM ’13, volume 8078 of LNCS. Springer Berlin Heidelberg. 120–133.
• [Oren and Norman2008] Oren, N., and Norman, T. J. 2008. Semantics for evidence-based argumentation. In Proc. COMMA ’08, volume 172 of Frontiers in Artificial Intelligence and Applications, 276–284. IOS Press.
• [Polberg, Wallner, and Woltran2013] Polberg, S.; Wallner, J. P.; and Woltran, S. 2013. Admissibility in the abstract dialectical framework. In Proc. CLIMA’13, volume 8143 of LNCS, 102–118. Springer.
• [Polberg2014] Polberg, S. 2014. Extension–based semantics of abstract dialectical frameworks. Technical Report DBAI-TR-2014-85, Institute for Information Systems, Technical University of Vienna.
• [Rahwan and Simari2009] Rahwan, I., and Simari, G. R. 2009. Argumentation in Artificial Intelligence. Springer, 1st edition.
• [Strass and Wallner2014] Strass, H., and Wallner, J. P. 2014. Analyzing the Computational Complexity of Abstract Dialectical Frameworks via Approximation Fixpoint Theory. In Proc. KR ’14. Forthcoming.
• [Strass2013a] Strass, H. 2013a. Approximating operators and semantics for abstract dialectical frameworks. Artificial Intelligence 205:39 – 70.
• [Strass2013b] Strass, H. 2013b. Instantiating knowledge bases in abstract dialectical frameworks. In Proc. CLIMA’13, volume 8143 of LNCS, 86–101. Springer.
# 15.8: The Rate of Gibbs Free Energy Change with Extent of Reaction
In Section 13.5, we demonstrate that the Gibbs free energy change for a reaction among ideal gases is the same thing as the rate at which the Gibbs free energy of the system changes with the extent of reaction. That is, for an ideal-gas reaction at constant temperature and pressure, we find $${\Delta }_rG={\left({\partial G}/{\partial \xi }\right)}_{TP}$$. We can now show that this conclusion is valid for any reaction.
With the introduction of the activity function, we have developed a very general expression for the Gibbs free energy of any substance in any system. For substance $$A$$ at a fixed temperature, we have
${\overline{G}}_A={\mu }_A={\widetilde{\mu }}^o_A+RT{ \ln {\tilde{a}}_A\ }$
For a reaction that we describe with generalized substances and stoichiometric coefficients as
$\left|{\nu }_1\right|X_1+\left|{\nu }_2\right|X_2+\dots +\left|{\nu }_i\right|X_i\to \ \left|{\nu }_j\right|X_j+\left|{\nu }_k\right|X_k+\dots +\left|{\nu }_{\omega }\right|X_{\omega }$
we can write the Gibbs free energy change in several equivalent ways:
\begin{align*} {\Delta }_rG &= \sum^{\omega }_{j=1}{\nu_j{\overline{G}}_j} \\[4pt] &={\Delta }_r\mu \\[4pt] &=\sum^{\omega }_{j=1}{{\nu }_j{\mu }_j} \end{align*}
The Gibbs free energy of the system is a function of temperature, pressure, and composition,
$G=G\left(P,T,n_1,n_2,\dots ,n_{\omega }\right)$
To introduce the dependence of the Gibbs free energy of the system on the extent of reaction, we use the stoichiometric relationships $$n_j=n^o_j+{\nu }_j\xi$$. ($$n_j$$ is the number of moles of the $$j^{th}$$ reacting species; $$n^o_j$$ is the number of moles of the $$j^{th}$$ reacting species when $$\xi$$ =0. If the $$k^{th}$$ substance does not participate in the reaction, $${\nu }_k=0$$.) Then,
$G=G\left(P,T,n^o_1+{\nu }_1\xi ,{\ n}^o_2+{\nu }_2\xi ,\dots ,\ n^o_{\omega }+{\nu }_{\omega }\xi \right)$
At constant temperature and pressure, with the amounts of any non-reacting species held fixed, the dependence of the Gibbs free energy on the extent of reaction is
\begin{align*} \left(\frac{\partial G}{\partial \xi }\right)_{PTn_m} &= \sum^{\omega}_{j=1} \left(\frac{\partial G}{\partial \left(n^o_j + \nu_j\xi \right)}\right)_{PTn_{m\neq j}} \left(\frac{\partial \left(n^o_j + \nu_j \xi \right)}{\partial \xi }\right)_{PTn_m} \\[4pt] &=\sum^{\omega}_{j=1} \nu_j \mu_j \end{align*}
It follows that
${\left(\frac{\partial G}{\partial \xi }\right)}_{PTn_m}={\Delta }_rG\le 0$
expresses the thermodynamic criteria for change when the process is a chemical reaction.
If a reacting system is not at equilibrium, the extent of reaction is time-dependent. We see that the Gibbs free energy of a reacting system depends on time according to
$\frac{dG}{dt}={\left(\frac{\partial G}{\partial \xi }\right)}_{PTn_m}\left(\frac{d\xi }{dt}\right)={\Delta }_r\mu \left(\frac{d\xi }{dt}\right)$
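As a numerical check of the identity $\left({\partial G}/{\partial \xi }\right)_{PT}={\Delta }_rG$, the Python sketch below uses invented standard chemical potentials for a hypothetical ideal-gas reaction $A + 2B \rightarrow C$ and compares a finite-difference derivative of $G\left(\xi \right)$ with $\sum_j{\nu_j\mu_j}$. All numbers are illustrative assumptions, not data from the text.

```python
import math

R, T, P = 8.314, 298.15, 1.0   # J/(mol K); K; pressure in units of the standard pressure

# Invented standard chemical potentials (J/mol) for the hypothetical reaction A + 2B -> C
mu0 = {"A": -50.0e3, "B": -30.0e3, "C": -120.0e3}
nu  = {"A": -1, "B": -2, "C": 1}        # stoichiometric coefficients
n0  = {"A": 1.0, "B": 2.0, "C": 0.5}    # moles at xi = 0

def mu(s, n):
    """Ideal-gas chemical potential: mu_j = mu0_j + RT ln(x_j P / P_standard)."""
    ntot = sum(n.values())
    return mu0[s] + R * T * math.log(n[s] / ntot * P)

def G(xi):
    n = {s: n0[s] + nu[s] * xi for s in n0}
    return sum(n[s] * mu(s, n) for s in n)

def delta_rG(xi):
    n = {s: n0[s] + nu[s] * xi for s in n0}
    return sum(nu[s] * mu(s, n) for s in n)

xi, h = 0.2, 1e-6
fd = (G(xi + h) - G(xi - h)) / (2 * h)   # numerical dG/dxi at constant T, P
print(fd, delta_rG(xi))                  # the two values agree
```

The two values agree because, at constant temperature and pressure, the Gibbs-Duhem relation makes the term $\sum_j{n_j \, d\mu_j}$ vanish.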
This page titled 15.8: The Rate of Gibbs Free Energy Change with Extent of Reaction is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
# How to do a weighted poll result using multiple weights?
My question is about how to do weighting in polls using multiple weights. I think it's a pretty standard statistical question, but I can't find a straightforward answer on the Internet, so I am here to ask.
Let's say I'm conducting a political poll for who voters prefer in a coming election between Mickey Mouse and Donald Duck and I have the following raw numbers from a 740-person sample:
{ Mickey Mouse: {
    total: 370,
    gender: {
      male: 110,
      female: 260
    },
    race: {
      white: 150,
      black: 140,
      hispanic: 70,
      asian: 9,
      other: 1
    }
} }
{ Donald Duck: {
    total: 370,
    gender: {
      male: 210,
      female: 160
    },
    race: {
      white: 170,
      black: 140,
      hispanic: 40,
      asian: 18,
      other: 2
    }
} }
Let's also say that my population of expected voters on Election Day is as follows in percentages:
{ "General population": {
    gender: {
      male: 52,
      female: 48
    },
    race: {
      white: 60,
      black: 25,
      hispanic: 10,
      asian: 4,
      other: 1
    }
} }
I know that to weight by one variable (such as gender), I would divide the population percentage by the sample percentage to get a weight. In my example above, men comprise 320 of the 740-person sample (about 43%), so the weight for male would be 0.52 / 0.43 ≈ 1.21, and each man who voted for either candidate would be counted as 1.21 of a vote. Women are 420 of the sample (about 57%), so the weight for female would be 0.48 / 0.57 ≈ 0.84, and each woman would count as 0.84 of a vote. Am I correct so far?
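Following the stated rule (weight = population share divided by sample share), the single-variable calculation can be sketched in Python. Note the direction of the division: with this rule the group that is underrepresented in the sample (men, 43% of the sample versus 52% of the population) receives a weight above 1.

```python
sample_counts = {"male": 320, "female": 420}        # counts from the poll above
population_share = {"male": 0.52, "female": 0.48}   # expected electorate

n = sum(sample_counts.values())                     # 740 respondents
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}
print(weights)   # men get a weight of about 1.20, women about 0.85
```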
However, if I wanted to weight by both gender and race, how can I do so? Do I multiply the weights together? Do I need more granular population data, i.e. for gender and race combined?
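For illustration only: when just the separate gender and race margins are known, one standard technique is raking (iterative proportional fitting), which alternately rescales the weights to match each margin in turn. The joint gender-by-race counts below are invented, since the poll data above reports the two variables separately.

```python
# Hypothetical joint sample counts (gender x race); these numbers are invented
sample = {
    ("male", "white"): 90,    ("male", "black"): 60,    ("male", "hispanic"): 20,
    ("female", "white"): 110, ("female", "black"): 80,  ("female", "hispanic"): 40,
}
n = sum(sample.values())                        # 400 respondents
target_gender = {"male": 0.52, "female": 0.48}
target_race = {"white": 0.60, "black": 0.30, "hispanic": 0.10}

w = {cell: 1.0 for cell in sample}              # start with uniform weights
for _ in range(100):                            # alternate until both margins match
    for g, share in target_gender.items():
        cur = sum(w[c] * sample[c] for c in sample if c[0] == g) / n
        for c in sample:
            if c[0] == g:
                w[c] *= share / cur
    for r, share in target_race.items():
        cur = sum(w[c] * sample[c] for c in sample if c[1] == r) / n
        for c in sample:
            if c[1] == r:
                w[c] *= share / cur
```

Simply multiplying the two separate weights together is only an approximation; raking converges to weights whose weighted margins match both targets at once.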
I'm looking for someone to help me understand how to do such a calculation. Thank you.
# Real analysis – $u$ cannot attain its maximum if it satisfies a linear differential inequality $Lu \geq 0$
I have a function $u \in C^2((a,b)) \cap C^0([a,b])$ and a bounded function $g: (a,b) \rightarrow \mathbb{R}$, where $(a,b) \subset \mathbb{R}$.

Now consider the linear operator $L := \frac{d^2}{dx^2} + g\frac{d}{dx}$. I would like to show that if $Lu \geq 0$, then $u$ is either constant or cannot attain its maximum in $(a,b)$.

I can show that if $Lu > 0$, then $u$ cannot attain its maximum:

if there were a maximum at some $x_0 \in (a,b)$, then $Lu(x_0) = u''(x_0) + g(x_0)u'(x_0) \leq 0$ (using the fact that at an interior maximum the second derivative must be nonpositive and the first derivative must be zero), a contradiction.

I am not sure how to prove the case $Lu \geq 0$, and I am also unsure where the boundedness of $g$ comes into play. Hints are much appreciated.
# Non measurable subset of a positive measure set
I am self-studying measure theory and I have seen this theorem:
If $A$ is a set of positive measure, then there exists a subset $D$ of $A$ that is non measurable.
I am not sure how to prove it. I have read the article about the Vitali set here. If $V(0,1)$ denotes the Vitali set constructed in the interval $[0,1]$, then isn't $V=\bigcup_{n\in \mathbb Z} V(n,n+1)$ a non-measurable set? If so, is $D=V\cap A$ the desired subset of $A$ which is non-measurable?
This presentation is essentially repeated from "Measure and Integral" by Wheeden and Zygmund. The function $\lvert \cdot \rvert_e$ denotes outer measure. It can be proven that if $E$ is a measurable subset of $\mathbb{R}$ with positive measure, then the set of differences $\{ x-y \ \lvert \ x,y \in E \}$ contains an interval at the origin. The proof is in this answer as well as the book referenced: https://math.stackexchange.com/a/104126/35667.
Let $A \subset \mathbb{R}$ be a set with positive outer measure. Let $E_r$ be the Vitali set on the real line translated by $r$, i.e. a set of representatives of equivalence classes of $\mathbb{R} / \mathbb{Q}$. For $r,q$ rational, $r \ne q$, we have $E_r \cap E_q$ is empty, and $\cup_{r \in \mathbb{Q}} E_r = \mathbb{R}$. So $A = \cup_r (A \cap E_r)$, and $\lvert A \rvert_e \leq \sum_r \lvert A \cap E_r \rvert_e$. Now if $A \cap E_r$ is measurable, then it must have measure $0$ by the preceding paragraph since its set of differences contains no interval at the origin (any two elements of this set differ by an irrational number). But since $\lvert A \rvert_e > 0$, we must have $A \cap E_r$ with positive outer measure for some $r$. Then this $A \cap E_r$ is the desired nonmeasurable set.
This can be extended to $\mathbb{R}^n$.
# Gauss's lemma (Riemannian geometry)
In Riemannian geometry, Gauss's lemma asserts that any sufficiently small sphere centered at a point in a Riemannian manifold is perpendicular to every geodesic through the point. More formally, let $M$ be a Riemannian manifold, equipped with its Levi-Civita connection, and $p$ a point of $M$. The exponential map is a mapping from the tangent space at $p$ to $M$:

$$\mathrm{exp} : T_pM \to M$$

which is a diffeomorphism in a neighborhood of zero. Gauss's lemma asserts that the image of a sphere of sufficiently small radius in $T_pM$ under the exponential map is perpendicular to all geodesics originating at $p$. The lemma allows the exponential map to be understood as a radial isometry, and is of fundamental importance in the study of geodesic convexity and normal coordinates.
Introduction
We define on $M$ the exponential map at $p\in M$ by

$$\exp_p : T_pM\supset B_{\epsilon}(0) \longrightarrow M,\qquad v\longmapsto \gamma(1, p, v),$$

where we have had to restrict the domain $T_pM$ to a ball $B_\epsilon(0)$ of radius $\epsilon>0$ and centre $0$ to ensure that $\exp_p$ is well defined, and where $\gamma(1,p,v)$ is the point $q\in M$ reached by following the unique geodesic $\gamma$ passing through the point $p\in M$ with tangent $\frac{v}{\vert v\vert}\in T_pM$ for a distance $\vert v\vert$. It is easy to see that $\exp_p$ is a local diffeomorphism around $0\in B_\epsilon(0)$. Let $\alpha : I\rightarrow T_pM$ be a differentiable curve in $T_pM$ such that $\alpha(0):=0$ and $\alpha'(0):=v$. Since $T_pM\cong \mathbb{R}^n$, it is clear that we can choose $\alpha(t):=vt$. In this case, by the definition of the differential of the exponential at $0$ applied to $v$, we obtain

$$T_0\exp_p(v) = \frac{\mathrm{d}}{\mathrm{d}t} \Bigl(\exp_p\circ\alpha(t)\Bigr)\Big\vert_{t=0} = \frac{\mathrm{d}}{\mathrm{d}t} \Bigl(\exp_p(vt)\Bigr)\Big\vert_{t=0}=\frac{\mathrm{d}}{\mathrm{d}t} \Bigl(\gamma(1,p,vt)\Bigr)\Big\vert_{t=0}= \gamma'(t,p,v)\Big\vert_{t=0}=v.$$
The fact that $\exp_p$ is a local diffeomorphism and that $T_0\exp_p(v)=v$ for all $v\in B_\epsilon(0)$ allows us to state that $\exp_p$ is a local isometry around $0$, i.e.

$$\langle T_0\exp_p(v), T_0\exp_p(w) \rangle_0 = \langle v, w \rangle_p\qquad\forall v,w\in B_\epsilon(0).$$
This means in particular that it is possible to identify the ball $B_\epsilon(0)\subset T_pM$ with a small neighbourhood around $p\in M$. We can see that $\exp_p$ is a local isometry, but we would like it to be rather more than that. We assert that it is in fact possible to show that this map is a radial isometry.
The exponential map is a radial isometry
Let $p\in M$. In what follows, we make the identification $T_vT_pM\cong T_pM\cong \mathbb{R}^n$. Gauss's lemma states:

Let $v,w\in B_\epsilon(0)\subset T_vT_pM\cong T_pM$ and $q:=\exp_p(v)\in M$. Then

$$\langle T_v\exp_p(v), T_v\exp_p(w) \rangle_v = \langle v,w \rangle_q.$$

For $p\in M$, this lemma means that $\exp_p$ is a radial isometry in the following sense: let $v\in B_\epsilon(0)$, i.e. such that $\exp_p$ is well defined. Moreover, let $q:=\exp_p(v)\in M$. Then the exponential $\exp_p$ remains an isometry in $q$, and, more generally, all along the geodesic $\gamma$ (in so far as $\gamma(1,p,v)=\exp_p(v)$ is well defined). Thus, radially, in all the directions permitted by the domain of definition of $\exp_p$, it remains an isometry.
Proof
Recall that

$$T_v\exp_p : T_pM\cong T_vT_pM\supset T_vB_\epsilon(0)\longrightarrow T_{\exp_p(v)}M.$$
We proceed in three steps:
* $T_v\exp_p(v)=v$: let us construct a curve $\alpha : I\subset\mathbb{R} \rightarrow T_pM$ such that $\alpha(0):=v\in T_pM$ and $\alpha'(0):=v\in T_vT_pM\cong T_pM$. Since $T_vT_pM\cong T_pM\cong \mathbb{R}^n$, we can put $\alpha(t):=v(t+1)$. We find that, thanks to the identification we have made, and since we are only taking equivalence classes of curves, it is possible to choose $\alpha(t) = vt$ (these are exactly the same curves, but shifted, because of the domain of definition $I$; however, the identification allows us to bring them back around $0$). Hence,

$$T_v\exp_p(v) = \frac{\mathrm{d}}{\mathrm{d}t}\Bigl(\exp_p\circ\alpha(t)\Bigr)\Big\vert_{t=0}=\frac{\mathrm{d}}{\mathrm{d}t}\gamma(t,p,v)\Big\vert_{t=0} = v.$$
Now let us calculate the scalar product $\langle T_v\exp_p(v), T_v\exp_p(w) \rangle$.

We separate $w$ into a component $w_T$ tangent to $v$ and a component $w_N$ normal to $v$. In particular, we put $w_T:=\alpha v$, $\alpha\in \mathbb{R}$.

The preceding step implies directly:

$$\langle T_v\exp_p(v), T_v\exp_p(w) \rangle = \langle T_v\exp_p(v), T_v\exp_p(w_T) \rangle + \langle T_v\exp_p(v), T_v\exp_p(w_N) \rangle$$
$$=\alpha\langle T_v\exp_p(v), T_v\exp_p(v) \rangle + \langle T_v\exp_p(v), T_v\exp_p(w_N) \rangle=\langle v, w_T \rangle + \langle T_v\exp_p(v), T_v\exp_p(w_N) \rangle.$$

We must therefore show that the second term is null, because, according to Gauss's lemma, we must have

$$\langle T_v\exp_p(v), T_v\exp_p(w_N) \rangle = \langle v, w_N \rangle = 0.$$
* $\langle T_v\exp_p(v), T_v\exp_p(w_N) \rangle = 0$: let us define the curve

$$\alpha : \,]-\epsilon, \epsilon[\,\times [0,1] \longrightarrow T_pM,\qquad (s,t) \longmapsto t\cdot v(s),$$

with $v(0):=v$ and $v'(0):=w_N$. We remark in passing that

$$\alpha(0,1) = v(0) = v,\qquad\frac{\partial \alpha}{\partial t}(0,t) = v(0) = v,\qquad\frac{\partial \alpha}{\partial s}(0,t) = tw_N.$$

Let us put

$$f : \,]-\epsilon, \epsilon[\,\times [0,1] \longrightarrow M,\qquad (s,t)\longmapsto \exp_p(t\cdot v(s)),$$

and we calculate:

$$T_v\exp_p(v)=T_{\alpha(0,1)}\exp_p\left(\frac{\partial \alpha}{\partial t}(0,1)\right)=\frac{\partial}{\partial t}\Bigl(\exp_p\circ\alpha(s,t)\Bigr)\Big\vert_{t=1, s=0}=\frac{\partial f}{\partial t}(0,1)$$

and

$$T_v\exp_p(w_N)=T_{\alpha(0,1)}\exp_p\left(\frac{\partial \alpha}{\partial s}(0,1)\right)=\frac{\partial}{\partial s}\Bigl(\exp_p\circ\alpha(s,t)\Bigr)\Big\vert_{t=1,s=0}=\frac{\partial f}{\partial s}(0,1).$$

Hence

$$\langle T_v\exp_p(v), T_v\exp_p(w_N) \rangle = \left\langle \frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle(0,1).$$

We can now verify that this scalar product is actually independent of the variable $t$, and therefore that, for example,

$$\left\langle\frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle(0,1) = \left\langle\frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle(0,0) = 0,$$

because, according to what has been given above,

$$\lim_{t\rightarrow 0}\frac{\partial f}{\partial s}(0,t) = \lim_{t\rightarrow 0}T_{tv}\exp_p(tw_N) = 0,$$

given that the differential is a linear map. This will therefore prove the lemma.
* We verify that $\frac{\partial}{\partial t}\left\langle \frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle=0$: this is a direct calculation. We first take account of the fact that the maps $t\mapsto f(s,t)$ are geodesics, i.e. $\frac{D}{\partial t}\frac{\partial f}{\partial t}=0$. Therefore,

$$\frac{\partial}{\partial t}\left\langle \frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle=\Bigl\langle\underbrace{\frac{D}{\partial t}\frac{\partial f}{\partial t}}_{=0}, \frac{\partial f}{\partial s}\Bigr\rangle+\left\langle\frac{\partial f}{\partial t},\frac{D}{\partial t}\frac{\partial f}{\partial s}\right\rangle=\left\langle\frac{\partial f}{\partial t},\frac{D}{\partial s}\frac{\partial f}{\partial t}\right\rangle=\frac{\partial}{\partial s}\left\langle \frac{\partial f}{\partial t}, \frac{\partial f}{\partial t}\right\rangle - \left\langle\frac{\partial f}{\partial t},\frac{D}{\partial s}\frac{\partial f}{\partial t}\right\rangle.$$

Hence, in particular,

$$0=\frac{1}{2}\frac{\partial}{\partial s}\left\langle \frac{\partial f}{\partial t}, \frac{\partial f}{\partial t}\right\rangle= \left\langle\frac{\partial f}{\partial t},\frac{D}{\partial s}\frac{\partial f}{\partial t}\right\rangle=\frac{\partial}{\partial t}\left\langle \frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle,$$

because, since the maps $t\mapsto f(s,t)$ are geodesics, we have $\left\langle\frac{\partial f}{\partial t},\frac{\partial f}{\partial t}\right\rangle=\mathrm{const}$.
See also

* Riemannian geometry
* Metric tensor
Wikimedia Foundation. 2010.
(11C) Poisson distribution
12-18-2017, 09:38 AM (This post was last modified: 12-31-2017 02:35 PM by Gamo.)
Post: #1
Gamo Senior Member Posts: 324 Joined: Dec 2016
(11C) Poisson distribution
The Poisson distribution is popular for modelling the number of times an event occurs in an interval of time or space.
Formula: P(k events in interval) = (e^-λ)(λ^k) / k!
where:
λ (lambda) is the average number of events per interval
e is the number 2.71828... (Euler's number) the base of the natural logarithms
k takes values 0, 1, 2, …
k! = k × (k − 1) × (k − 2) × … × 2 × 1 is the factorial of k.
Example Problem:
Ugarte and colleagues report that the average number of goals in a World Cup soccer match is approximately 2.5
Because the average event rate is 2.5 goals per match, λ = 2.5
What is the probability of gold of P(k) = 0, 1, 2, 3, 4, 5, 6, 7
Program:
Code:
LBL A (λ)  STO 1  RTN
LBL B (k)  STO 2  RTN
LBL C (P)  RCL 1  CHS  e^x  RCL 1  RCL 2  Y^x  x  RCL 2  X!  /  RTN
Run Program:
2.5 A
0 B
C 0.082
1 B
C 0.205
2 B
C 0.257
3 B
C 0.213
.
.
.
.
7 B
C 0.010
The table below gives the probability for 0 to 7 goals in a match.
k P(k goals in a World Cup soccer match)
0 0.082
1 0.205
2 0.257
3 0.213
4 0.133
5 0.067
6 0.028
7 0.010
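The table can be double-checked with a short Python sketch of the same formula (the values agree with the table up to rounding):

```python
import math

def poisson_pmf(lam, k):
    """P(k events in interval) = e^(-lam) * lam**k / k!"""
    return math.exp(-lam) * lam ** k / math.factorial(k)

for k in range(8):
    print(k, round(poisson_pmf(2.5, k), 3))
```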
Credit to Wikipedia for information and example problem.
Gamo
12-19-2017, 08:15 PM
Post: #2
Dieter Senior Member Posts: 2,174 Joined: Dec 2013
RE: (11C) Poisson distribution
(12-18-2017 09:38 AM)Gamo Wrote: What is the probability of gold of P(k) = 0, 1, 2, 3, 4, 5, 6, 7
Gold ?-) I assume this is supposed to mean
"What is the probability P(k) for k = 0, 1, 2, 3, 4, 5, 6 or 7 goals".
But why do you use two separate labels for k and P(k)? This way calculating the PDF always requires pressing two keys, B and C.
Here is another version that also calculates the CDF, i.e. P(k1 ≤ k ≤ k2).
Code:
LBL A  STO 0  RTN
LBL B  RCL 0  x<>y  y^x  LastX  x!  /  RCL 0  CHS  e^x  *  RTN
LBL C  ABS  INT  EEX  3  /  x<>y  ABS  INT  +  STO I  LastX  GSB B  x=0?  RTN  ENTER  ENTER
LBL 1  ISG  // (ISG I on the 15C)  GTO 2  x<>y  RTN
LBL 2  RCL 0  *  RCL I  INT  /  +  LastX  GTO 1
Example for λ = 2,5:
Code:
Enter λ
2,5 [A]            2,5000

Calculate the probability for 1, 2 or 3 goals
1 [B]              0,2052   P(1)
2 [B]              0,2565   P(2)
3 [B]              0,2138   P(3)

Calculate the probability of a match with 1 to 4 goals
1 [ENTER] 4 [C]    0,8091   P(1 ≤ k ≤ 4)

If desired:
[R↓]               0,1336   P(4)
[R↓]               0,2052   P(1)
Direct evaluation of the Poisson PDF often leads to overflow errors. Even cases where k > 69 cannot be handled this way. Too bad there is no lnΓ function available; this could provide an easy fix.
But there are two workarounds:
1. The recursive method of the CDF routine significantly extends the useable range for λ and k, and this can also be used for calculating the PDF:
Simply enter 0 [ENTER] k [C], and when the result is displayed press [R↓] or [x<>y] to get P(k).
Example:
Evaluate P(80) for λ=90.
Code:
90 [A]             90,0000
80 [B]             8,1940 -40 flashing    Overflow error while trying to evaluate 90^80 and 80!
0 [ENTER] 80 [C]   0,1582
[x<>y]             0,0250   P(80)
However, the iteration, which requires k loops, may take some time on a hardware 11C.
Also please note that for λ > 227,9559242 the expression e–λ will underflow to zero. In this case 0 is returned.
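The recursive trick is easy to sketch in Python as well: starting from P(0) = e^(-λ) and applying P(k) = P(k-1)·λ/k avoids evaluating λ^k and k! separately, so λ = 90 and k = 80 stay well within floating-point range:

```python
import math

def poisson_range(lam, k1, k2):
    """Return (P(k2), P(k1 <= k <= k2)) using the recurrence P(k) = P(k-1) * lam / k."""
    p = math.exp(-lam)            # P(0); underflows only for very large lam
    total = p if k1 == 0 else 0.0
    for k in range(1, k2 + 1):
        p *= lam / k              # P(k) from P(k-1)
        if k >= k1:
            total += p
    return p, total

pmf, cdf = poisson_range(90, 0, 80)
print(round(pmf, 4), round(cdf, 4))
```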
2. For large λ and k the following code may be used. It implements a Stirling-based approximation, and since there is no iteration the result is returned immediately.
Code:
LBL E STO I RCL 0 RCL I / LN 1 + * RCL 0 - e^x RCL I 6 1/x + Pi * 2 * sqrt / RTN
Example:
Evaluate P(80) for λ=90.
Code:
90 [A]   90,0000
80 [E]   0,0250   P(80)
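Reading the RPN of LBL E back into a formula gives P(k) ≈ exp(k·(ln(λ/k) + 1) - λ) / sqrt(2π·(k + 1/6)), which can be sketched in Python:

```python
import math

def poisson_pmf_stirling(lam, k):
    """Stirling-type approximation corresponding to the LBL E routine above."""
    return (math.exp(k * (math.log(lam / k) + 1.0) - lam)
            / math.sqrt(2.0 * math.pi * (k + 1.0 / 6.0)))

print(round(poisson_pmf_stirling(90, 80), 4))   # 0.025, matching P(80) above
```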
Dieter
12-20-2017, 01:21 AM
Post: #3
Gamo Senior Member Posts: 324 Joined: Dec 2016
RE: (11C) Poisson distribution
Thanks Dieter
Very nice program detail with in-depth information.
Gamo
12-23-2017, 06:42 PM
Post: #4
SlideRule Senior Member Posts: 403 Joined: Dec 2013
RE: (11C) Poisson distribution
Gamo
Check the HP-25 post for Poisson Distribution from 1977.
BEST!
SlideRule
12-24-2017, 12:27 PM (This post was last modified: 12-24-2017 12:53 PM by Dieter.)
Post: #5
Dieter Senior Member Posts: 2,174 Joined: Dec 2013
RE: (11C) Poisson distribution
(12-23-2017 06:42 PM)SlideRule Wrote: Check the HP-25 post for Poisson Distribution from 1977.
That's three short programs in this thread.
I just posted an improved version. Which also includes a correct initialization of the summation registers to avoid erroneous results.
And there's even some information on the author of the original programs.
Dieter
# Converting to and from Standard Form
Here we will learn how to convert to and from standard form, including how to adjust numbers to write them in standard form notation.
There are also converting to and from standard form worksheets based on Edexcel, AQA and OCR exam questions, along with further guidance on where to go next if you’re still stuck.
## What is converting to and from standard form?
Converting to and from standard form is where we convert an ordinary number to a number written in standard form or scientific notation.
Standard form is a way of writing very large or very small numbers by comparing the powers of ten.
Numbers in standard form are written in this format:
$a\times10^{n}$
Where a is a number 1\leq{a}\lt10 and n is an integer.
Converting between ordinary numbers and numbers in standard form can help us to compare numbers and interpret answers given in standard form on a calculator.
To do this we need to understand the place value of a number.
E.g.
Let’s look at the number 350000 and place the digits in a place value table:
\begin{aligned} &10^5 \quad \quad 10^4 \quad \quad 10^3\quad \quad 10^2 \quad \quad 10^1 \quad \quad 10^0 \\ &\; 3 \quad \quad \quad 5 \quad \quad \quad 0 \quad \quad \; \; \; 0 \quad \quad \; \; \; 0 \quad \quad \; \; \; 0 \end{aligned}
So 350000 written in standard form is:
$3.5\times10^{5}$
## How to convert ordinary numbers to standard form
In order to convert ordinary numbers to standard form:
1. Identify the non-zero digits and write these as a decimal number which is \pmb{ 1\leq{x}\lt10} .
2. In order to maintain the place value of the number, this decimal number needs to be multiplied by a power of ten.
3. Write the power of ten as an exponent.
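The three steps above amount to normalising the mantissa, which can be sketched in Python (an illustration only; `to_standard_form` is a made-up helper name):

```python
import math

def to_standard_form(x):
    """Return (a, n) such that x = a * 10**n with 1 <= a < 10 (x > 0 assumed)."""
    n = math.floor(math.log10(x))  # the power of ten that keeps the place value
    a = x / 10**n                  # the non-zero digits as a decimal in [1, 10)
    return a, n

print(to_standard_form(480000))  # (4.8, 5), i.e. 4.8 x 10^5
```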
## Converting ordinary numbers to standard form examples
### Example 1: writing numbers in standard form with positive powers
Write this number in standard form:
480000
1. The non-zero digits need to be written in decimal notation.
The number needs to lie between 1\leq{x}\lt10
So the number will begin as 4.8….
2. You now need to maintain the value of the number by multiplying that decimal by a power of ten.
$4.8 \times 100000 = 480000$
3. Write that power of ten as an exponent.
$100000 = 10^{5}$
4. Write your number in standard form.
$4.8\times10^{5}$
### Example 2: writing numbers in standard form with positive powers
Write this number in standard form:
5420000
The number needs to lie between 1\leq{x}\lt10
So the number will begin as 5.42….
$5.42 \times 1000000 = 5420000$
$1000000 = 10^{6}$
$5.42\times10^{6}$
### Example 3: converting a small number to standard form
Write 0.00081 in standard form.
This number will begin with 8.1...
8.1 \div 10000 = 0.00081
$\frac{1}{10000} = 10^{-4}$
$8.1\times10^{-4}$
### Example 4: converting a small number to standard form
Write 0.00718 in standard form.
The number will begin with 7.18…
7.18 \div 1000 = 0.00718
Therefore,
$7.18 \times \frac{1}{1000}=0.00718$
$\frac{1}{1000} = 10^{-3}$
$7.18\times10^{-3}$
## How to convert standard form to ordinary numbers
In order to convert from standard form to ordinary numbers:
1. Convert the power of ten to an ordinary number
2. Multiply the decimal number by this power of ten
3. Write your number as an ordinary number
## Converting standard form to ordinary numbers examples
### Example 5: converting standard form to an ordinary number
Write 6.2\times10^4 as an ordinary number.
$10^{4} = 10000$
$6.2 \times 10000$
$6.2 \times 10000 = 62000$
### Example 6: converting standard form to an ordinary number
Write 1.9\times10^{-3} as an ordinary number.
$10^{-3} = 0.001$
$1.9 \times 0.001$
$1.9 \times 0.001 = 0.0019$
## How to adjust numbers to standard form
Sometimes we might have a number that looks like it is in standard form however the decimal number is not between 1 and 10,
E.g.
36103 or 0.2104.
In this case we need to adjust the number.
In order to adjust numbers to standard form:
1. Identify what power of ten the decimal number needs to be multiplied by so that the value is \pmb{1\leq{x}\lt10}.
2. Apply the inverse of this to the power of ten.
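As a quick check of the two steps, here is a hedged Python sketch (the function name is invented for illustration):

```python
def adjust(a, n):
    """Normalise a * 10**n so that the mantissa ends up in [1, 10)."""
    while a >= 10:            # mantissa too big: divide by 10 ...
        a, n = a / 10, n + 1  # ... and compensate by adding one to the power
    while a < 1:              # mantissa too small: multiply by 10 ...
        a, n = a * 10, n - 1  # ... and compensate by subtracting one
    return a, n

print(adjust(48, 5))    # (4.8, 6)
print(adjust(290, -4))  # -> 2.9 x 10^-2
```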
## Adjusting numbers to standard form examples
### Example 7: adjusting number in standard form
Write 48\times10^5 in standard form.
48 needs to be divided by 10 so 48 becomes 4.8 .
10^5 needs to be multiplied by 10 which adds one to its power, so it becomes 10^6.
$4.8\times10^{6}$
### Example 8: adjusting numbers to standard form
Write 0.68\times10^{4} in standard form.
0.68 needs to be multiplied by 10 so it becomes 6.8.
10^4 needs to be divided by 10 which subtracts one from its power, so it becomes 10^3.
$6.8\times10^{3}$
### Example 9: adjusting numbers to standard form
Write 290\times10^{-4} in standard form.
290 needs to be divided by 100 so it becomes 2.9.
10^{-4} needs to be multiplied by 100 which adds two to its power, so it becomes 10^{-2}.
$2.9\times10^{-2}$
### Common misconceptions
• Writing a number with the incorrect power for a large or small number
This error is often made by counting the zeros following the first non zero digit for large numbers or zeros after the decimal point for small numbers, then writing this as the power, rather than considering the place value of the given number.
• Identifying incorrect place value with small numbers
In a number such as 0.000682, selecting the ‘2’ to determine the exponent rather than the ‘6’ which has a higher place value.
In standard form, this number would be 6.82 × 10^{-4}.
• Errors with negative numbers
When checking the standard form of a number, incorrectly adjusting the negative powers due to not applying negative numbers rules correctly.
E.g.
With small numbers, adding one to the power of 10^{-5} will result in 10^{-4} not 10^{-6}.
Converting to and from standard form is part of our series of lessons to support revision on standard form. You may find it helpful to start with the main standard form lesson for a summary of what to expect, or use the step by step guides below for further detail on individual topics. Other lessons in this series include:
### Practice standard form calculator questions
1. Write 270000 in standard form
27 \times 10^{4}
2.7 \times 10^{4}
2.7 \times 10^{5}
0.27 \times 10^{6}
The number between 1 and 10 here is 2.7.
\begin{aligned} 270000&=2.7 \times 100000\\\\ &=2.7 \times 10^{5} \end{aligned}
2. Write 0.00079 in standard form
7.9 \times 10^{-3}
7.9 \times 10^{-4}
7.9 \times 10^{-5}
0.79 \times 10^{-4}
The number between 1 and 10 here is 7.9.
\begin{aligned} 0.00079&=7.9 \times \frac{1}{10000}\\\\ &=7.9 \times 10^{-4} \end{aligned}
3. Write 6.1 \times 10^{4} as an ordinary number
61000
6100
0.61 \times 10^{5}
610000
10^{4}=10000
Therefore,
\begin{aligned} 6.1 \times 10^{4} &= 6.1 \times 10000\\\\ &=61000 \end{aligned}
4. Write 3.8 \times 10^{-5} as an ordinary number
380000
0.00038
0.0000038
0.000038
10^{-5}=\frac{1}{100000}
Therefore,
\begin{aligned} 3.8 \times 10^{-5} &= 3.8 \times \frac{1}{100000}\\\\ &=3.8 \div 100000\\\\ &=0.000038 \end{aligned}
5. Write 84\times10^{2} in standard form
840
8.4 \times 10^{2}
8.4 \times 10^{3}
8.4 \times 10
This number is not in standard form as 84 is not between 1 and 10.
We need to divide 84 by 10 and, to compensate, multiply 10^2 by 10 , increasing the power by 1.
This gives us
8.4 \times 10^{3}
6. Write 0.92\times10^{-5} in standard form
9.2 \times 10^{-4}
9.2 \times 10^{-5}
9.2 \times 10^{-6}
0.92 \times 10^{-4}
This number is not in standard form because 0.92 is not between 1 and 10.
We need to multiply 0.92 by 10 and, to compensate, divide 10^{-5} by 10 , decreasing the power by 1.
This gives us
9.2 \times 10^{-6}
### Standard form calculator GCSE questions
1.
(a) Write 8.23\times10^{-6} , as an ordinary number.
(b) Write the number 0.00702 in standard form.
(2 marks)
(a) 0.00000823
(1)
(b) 7.02\times10^{-3}
(1)
2.
(a) The population of the USA is 3.3\times10^{8} , rounded to two significant figures.
Write this population as an ordinary number.
(b) The population of Washington DC is 690000 rounded to two significant figures.
Write this number in standard form.
(2 marks)
(a) 330000000
(1)
(b) 6.9\times10^{5}
(1)
3. Put the below numbers in order. Start with the smallest number.
(2 marks)
Converting all of the numbers to the same form or standard notation for comparison or 3 of the four numbers ordered correctly.
(1)
(1)
## Learning checklist
You have now learned how to:
• Convert ordinary numbers to standard form
• Convert standard form to ordinary numbers
• Adjust numbers to standard form notation
## Still stuck?
Prepare your KS4 students for maths GCSEs success with Third Space Learning. Weekly online one to one GCSE maths revision lessons delivered by expert maths tutors.
Find out more about our GCSE maths revision programme.
# Tank Volume Calculator
By Hanna Pamuła, PhD candidate
Last updated: Jan 14, 2020
With this tank volume calculator, you can easily estimate what the volume of your container is. Choose between ten different tank shapes: from standard rectangular and cylindrical tanks, to capsule and elliptical tanks. You can even find the volume of a frustum in cone bottom tanks. Just enter the dimensions of your container and this tool will calculate the total tank volume for you. You may also provide the fill height, which will be used to find the filled volume. Do you wonder how it does it? Scroll down and you'll find all the formulas needed - the volume of a capsule tank, elliptical tank, or the widely-used cone bottom tanks (sometimes called conical tanks), as well as many more!
Looking for other types of tanks, in different shapes and for other applications? Check out our volume calculator to find the volume of the most common three-dimensional solids. For something more specialized you can also have a glance at the aquarium and pool volume calculators for solutions to everyday volume problems.
## Tank volume calculator
This tank volume calculator is a simple tool which helps you find the volume of the tank as well as the volume of the filled part. You can choose between ten tank shapes:
• vertical cylinder
• horizontal cylinder
• rectangular prism (box)
• vertical capsule
• horizontal capsule
• vertical oval (elliptical)
• horizontal oval (elliptical)
• cone bottom
• cone top
• frustum (truncated cone, funnel-shaped)
"But how do I use this tank volume calculator?", you may be asking. Let's have a look at a simple example:
1. Decide on the shape. Let's assume that we want to find the volume of a vertical capsule tank - choose that option from the drop-down list. The schematic picture of the tank will appear below; make sure it's the one you want!
2. Enter the tank dimensions. In our case, we need to type in the length and diameter. In our example, they are equal to 30 in and 24 in, respectively. Additionally, we can enter the fill height - 32 in.
3. The tank volume calculator has already found the total and filled volume! The total volume of the capsule is 90.09 US gal, and the volume of the liquid inside is 54.84 US gal. As always, you can change the units by clicking on the volume units themselves. Easy-peasy!
## Cylindrical tank volume formula
To calculate the total volume of a cylindrical tank, all we need to know is the cylinder diameter (or radius) and the cylinder height (which may be called length, if it's lying horizontally).
• Vertical cylinder tank
The total volume of a cylindrical tank may be found with the standard formula for volume - the area of the base multiplied by height. A circle is the shape of the base, so its area, according to the well-known equation, is equal to π * radius². Therefore the formula for a vertical cylinder tanks volume looks like:
V_vertical_cylinder = π * radius² * height = π * (diameter/2)² * height
If we want to calculate the filled volume, we need to find the volume of a "shorter" cylinder - it's that easy!
V_vertical_cylinder = π * radius² * filled = π * (diameter/2)² * filled
• Horizontal cylinder tank
The total volume of a horizontal cylindrical tank may be found in analogical way - it's the area of the circular end times the length of the cylinder:
V_horizontal_cylinder = π * radius² * length = π * (diameter/2)² * length
Things are getting more complicated when we want to find the volume of the partially filled horizontal cylinder. First, we need to find the base area: the area of the circular segment covered by the liquid:
Segment area = 0.5 * radius² * (θ - sinθ)
where θ is the central angle of the segment, and may be found from the cosine of the half-angle:
θ = 2 * arccos((radius - filled) / radius)
And finally, the formula for the partially filled horizontal cylinder is:
V_horizontal_cylinder_filled = 0.5 * radius² * (θ- sin(θ)) * length where θ = 2 * arccos((radius - filled) / radius)
If the cylinder is more than half full then it's easier to subtract the empty tank part from the total volume.
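The segment-based formula above can be sketched as follows. This is a plain translation of the formulas, with all dimensions in consistent units and `filled` measured from the bottom of the tank:

```python
import math

def horizontal_cylinder_filled(diameter, length, filled):
    """Liquid volume in a horizontal cylinder, 0 <= filled <= diameter."""
    r = diameter / 2
    theta = 2 * math.acos((r - filled) / r)  # central angle of the wetted segment
    return 0.5 * r**2 * (theta - math.sin(theta)) * length

# Half full reduces to half the cylinder volume:
print(horizontal_cylinder_filled(2.0, 10.0, 1.0))  # ~ pi * 1^2 * 10 / 2
```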
## Rectangular tank volume calculator (rectangular prism)
If you're wondering how to calculate the volume of a rectangular tank (also known as cuboid, box or rectangular hexahedron), look no further! You may know this tank as a rectangular tank - but that is not its proper name, as a rectangle is a 2D shape, so it doesn't have a volume.
To find the rectangular prism volume, multiply all the dimensions of the tank:
V_rectangular_prism = height * width * length
If you want to know what the volume of the liquid in a tank is, simply change the height variable into filled in the rectangular tank volume formula:
V_rectangular_prism_filled = filled * width * length
For this tank volume calculator, it doesn't matter if the tank is in a horizontal or vertical position. Just make sure that filled and height are along the same axis.
## Formula for volume of a capsule
Our tool defines a capsule as two hemispheres separated by a cylinder. To calculate the total volume of a capsule, all you need to do is add the volume of the sphere to the cylinder part:
V_capsule = π * (diameter/2)² * ((4/3) * (diameter/2) + length)
Depending on the position of the tank, the filled volume calculations will differ a bit:
1. For horizontal capsule tank
As the hemispheres on either end of the tank are identical, they form a spherical cap - add this part to the part from the horizontal cylinder (check the paragraph above) to calculate the volume of the liquid:
capsule_h_filled = V_horizontal_cylinder_filled + V_spherical_cap_filled = 0.5 * (diameter/2)² * (θ- sin(θ)) * length + ((π * filled²) / 3) * ((1.5 * diameter) - filled)
2. For vertical capsule tank
The formula differs for various fill heights:
• if the filled < diameter/2, then the liquid is only in the bottom hemisphere part, so we only need the volume of a spherical cap formula:
V_capsule_filled = ((π * filled²) / 3) * ((1.5 * diameter) - filled)
• if the diameter/2 < filled < diameter/2 + length, then we need to add the hemisphere volume and "shorter" cylinder:
V_capsule_filled = (2/3) * π * (diameter/2)³ + π * (diameter/2)² * (filled - diameter/2)
• if the diameter/2 + length < filled, it means that we have full bottom hemisphere and cylinder, so we just need to subtract the spherical cap (empty part) from the whole volume:
V_capsule_filled = V_capsule - ((π * (diameter + length - filled)²) / 3) * ((1.5 * diameter) - (diameter + length - filled))
## Elliptical tank volume (oval tank)
In our calculator, we define an oval tank as a cylindrical tank with an elliptical end (not in the shape of a stadium, as it is sometimes defined). To find the total volume of an elliptical tank, you need to multiply the ellipsis area times length of the tank:
V_ellipse = π * width * length* height / 4
Finally, another easy formula! Unfortunately, finding the volume of a partially filled tank - both in the horizontal and vertical positions - is not so straightforward. You need to use the formula for the ellipse segment area and multiply the result times length of the tank:
V_ellipse_filled = length * height * width /4 * (arccos(1 - (2 * filled / height)) - (1 - (2 * filled / height)) * √(4 * filled / height - 4 * filled²/height²))
## Frustum volume - tank in the shape of a truncated cone
To calculate the truncated cone volume, use the formula:
V_frustum = (1/3) * π * cone_height * ((diameter_top / 2)² + (diameter_top / 2) * (diameter_bottom / 2) + (diameter_bottom / 2)²)
If you want to find the partially filled frustum volume for a given fill height, calculate the top radius of the filled part first:
R = 0.5* diameter_top*(filled + z) / (cone_height + z)
where
z = cone_height * diameter_bottom/(diameter_top - diameter_bottom)
(You can derive the formula from triangle similarity.)
Afterwards, just find the new frustum volume:
V_frustum_filled = (1/3) * π * filled * (R² + R * (diameter_bottom / 2) + (diameter_bottom / 2)²)
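Putting the similar-triangles step and the frustum formula together gives the following sketch (diameters and heights in the same unit; the filled part is assumed to have height `filled`, and `d_top > d_bottom`):

```python
import math

def frustum_volume(d_top, d_bottom, cone_height):
    """Total volume of a truncated cone (frustum)."""
    r1, r2 = d_top / 2, d_bottom / 2
    return (1/3) * math.pi * cone_height * (r1**2 + r1 * r2 + r2**2)

def frustum_filled(d_top, d_bottom, cone_height, filled):
    """Volume of the liquid in a frustum filled to height `filled`."""
    z = cone_height * d_bottom / (d_top - d_bottom)     # from triangle similarity
    R = 0.5 * d_top * (filled + z) / (cone_height + z)  # radius at the surface
    r2 = d_bottom / 2
    return (1/3) * math.pi * filled * (R**2 + R * r2 + r2**2)
```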
## Cone bottom tank volume (conical tank) and cone top tanks
Finding total volume of a cone bottom tank is not so hard - just add the volume of the frustum part to the volume of the cylindrical part:
V_cone_bottom = V_frustum + V_cylinder = (1/3) * π * cone_height * ((diameter_top/2)² + (diameter_top/2)*(diameter_bot/2) + (diameter_bot/2)²) + π *(diameter_top/2)² * cylinder_height
To calculate the partially filled tank, just add the frustum part and cylinder part, depending on the level of the filled liquid, using the equations above.
Calculating total volume of cone top tank is exactly the same as cone bottom tank. The only difference is when you want to find filled part - of course, firstly the cylindrical part is filled, and only then the frustum.
# 3 unknowns really confusing
1. Jan 21, 2013
### dscot
1. The problem statement, all variables and given/known data
Hello all,
I'm trying to solve for 3 unknowns x,y,z. We are given these formulas: http://screencast.com/t/Y8QobUXVB3S3 [Broken]
2. Relevant equations
Please see link provided above.
3. The attempt at a solution
I first rearrage for root(x) then I rearrange equation 2 for root(y), I then sub root(x) into equation 2. From this point I'm a little confused on how to proceed?
The problem I think is that in the formula for root(y) I still have root(z) unknown, root(z) has root(x) and also has a root (y), we know root(x) but it, itself contains root(y) and root(z) which is going in a weird loop that is making me really confused.
Last edited by a moderator: May 6, 2017
2. Jan 21, 2013
### VantagePoint72
Assuming you're treating $\theta$ and the various a's, b's, c's and d's as knowns, this system has nine unknowns, not three: $x_1, x_2, x_3, y_1, y_2, y_3, z_1, z_2, z_3$.
3. Jan 21, 2013
### dscot
Hi LastOneStanding,
I'm really sorry you are completly right, I wrote the equation wrong it should be: http://screencast.com/t/Y8QobUXVB3S3 [Broken]
Although I still have the problem getting my head around that logic I mentioned before.
Thanks!
Last edited by a moderator: May 6, 2017
4. Jan 21, 2013
### VantagePoint72
Do you know how to solve a linear system of equations using the elimination method? If not, see here for an explanation. I know this doesn't look linear because of the square roots, but it is linear if you just treat the square roots as the things you are trying to solve for instead of the things inside the square roots. Of course, once you've solved for each square root you can just square it to get the answer you really want.
5. Jan 22, 2013
### dscot
Hi LastOneStanding,
Thanks very much that method does seem more efficient but I think we're supposed to do this is through substitution as thats the method our teacher has been using in class. Although I'm sure it will be a lot messier.
So what would be the best way to approach this using the method of substituting in the values?
Thanks!
6. Jan 22, 2013
### HallsofIvy
Staff Emeritus
If it is the square roots that bother you just replace them with, say, $u= \sqrt{x}$, $v= \sqrt{y}$ and $w= \sqrt{z}$. Then you have three linear equations of the form
$A_1= B_1u+ C_1v+ D_1w$
$A_2= B_2u+ C_2v+ D_2w$
$A_3= B_3u+ C_3v+ D_3w$
where I have also replaced the coefficients by single letters- you can put the coefficients back in after solving.
Solve the first equation for u:
$$u= \frac{A_1- C_1v- D_1w}{B_1}$$
and replace u in the other two equations by that:
$A_2= B_2\frac{A_1- C_1v- D_1w}{B_1}+ C_2v+ D_2w$
$A_3= B_3\frac{A_1- C_1v- D_1w}{B_1}+ C_3v+ D_3w$
Solve either one of those for v and substitute into the other equation to get a single equation in w. Solve that equation for w, substitute into the equation for v, then substitute both of those values into the original equation for u.
Finally, of course, square u, v, and w to get x, y, and z.
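HallsofIvy's substitution scheme is exactly elimination done by hand; as a cross-check, the linear system in u, v, w can be solved numerically. The coefficients below are invented placeholders, not the values from the linked screenshot:

```python
import numpy as np

# B, C, D coefficients of the three equations A_i = B_i*u + C_i*v + D_i*w
M = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 1.0]])
A = np.array([4.0, 6.0, 2.0])

u, v, w = np.linalg.solve(M, A)  # solve for the square roots ...
x, y, z = u**2, v**2, w**2       # ... then square to recover x, y, z
```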
7. Jan 24, 2013
### dscot
Thanks HallsOfIvy,
Does this look correct to you?
I mixed the terms up, so now C is F and D is Z
New equation link:
Last edited: Jan 24, 2013
# 2.3 Using Blend Modes for Cutouts
In the previous lesson, we used blend modes to display an image and a gradient inside a piece of text. You can also use blend modes to create cutouts in a larger image so that the image will appear differently inside the text than outside of it.
Let me show you what I mean.
# American Institute of Mathematical Sciences
June 2018, 12(3): 545-572. doi: 10.3934/ipi.2018024
## Existence and convergence analysis of $\ell_{0}$ and $\ell_{2}$ regularizations for limited-angle CT reconstruction
1 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China 2 College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China 3 Engineering Research Center of Industrial Computed Tomography, Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing 400044, China 4 School of Biomedical Engineering, Hubei University of Science and Technology, Xianning 437100, China
* Corresponding author: drlizeng@cqu.edu.cn
Received April 2017 Revised December 2017 Published March 2018
Fund Project: Li Zeng was supported by the National Natural Science Foundation of China (No.61771003) and the National Instrumentation Program of China (2013YQ030629). Liwei Xu was supported in part by a Key Project of the Major Research Plan of NSFC (No.91630205) and by the National Natural Science Foundation of China (No.11771068). Wei Yu was supported by the National Natural Science Foundation of China (No.61701174), the Hubei Provincial Natural Science Foundation of China (No.2017CFB168) and the Ph.D. start-up Fund of HBUST (No. BK1527)
In some practical applications of computed tomography (CT) imaging, the projections of an object are obtained within a limited-angle range due to the restriction of the scanning environment. In this situation, conventional analytic algorithms, such as filtered backprojection (FBP), will not work because the projections are incomplete. An image reconstruction algorithm based on total variation minimization (TVM) can significantly reduce streak artifacts in sparse-view reconstruction, but it will not effectively suppress slope artifacts when dealing with limited-angle reconstruction problems. To solve this problem, we consider a family of image reconstruction models based on $\ell_{0}$ and $\ell_{2}$ regularizations for limited-angle CT and prove the existence of a solution for two CT reconstruction models. An Alternating Direction Method of Multipliers (ADMM)-like method is utilized to solve our model. Furthermore, we prove the convergence of our algorithm under certain conditions. Some numerical experiments are used to evaluate the performance of our algorithm, and the results indicate that our algorithm has an advantage in suppressing slope artifacts.
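For intuition about $\ell_{0}$ regularization, a toy iterative-hard-thresholding loop for min ‖Ax − b‖² + μ‖x‖₀ can be sketched as follows. This is a simplified stand-in for illustration, not the ADMM-like algorithm analysed in the paper:

```python
import numpy as np

def iht(A, b, mu, step, iters=200):
    """Gradient step on the data term followed by hard thresholding,
    which is the proximal operator of the l0 penalty."""
    x = np.zeros(A.shape[1])
    thresh = np.sqrt(2 * mu * step)       # hard-threshold level for step*mu*||x||_0
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - b)  # descend on ||Ax - b||^2 / 2
        x[np.abs(x) < thresh] = 0.0       # keep only significant entries
    return x
```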
Citation: Chengxiang Wang, Li Zeng, Wei Yu, Liwei Xu. Existence and convergence analysis of $\ell_{0}$ and $\ell_{2}$ regularizations for limited-angle CT reconstruction. Inverse Problems & Imaging, 2018, 12 (3) : 545-572. doi: 10.3934/ipi.2018024
Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, 13 (2004), 600-612. doi: 10.1109/TIP.2003.819861. [62] Y. Xiao, T. Zeng, J. Yu and M. K. Ng, Restoration of images corrupted by mixed Gaussian-impulse noise via l1-l0 minimization, Pattern Recognition, 44 (2011), 1708-1720. [63] G. L. Zeng, Medical Image Reconstruction, 1nd edition, Springer-Verlag, Berlin Heidelberg, 2010. doi: 10.1007/978-3-642-05368-9. [64] L. Zeng, J. Q. Guo and B. D. Liu, Limited-angle cone-beam computed tomography image reconstruction by total variation minimization and piecewise-constant modification, Journal of Inverse and Ill-Posed Problems, 21 (2013), 735-754. doi: 10.1515/jip-2011-0010. [65] Y. Zhang, B. Dong and Z. S. Lu, $\ell_{0}$ Minimization for wavelet frame based image restoration, Mathematics of Computation, 82 (2013), 995-1015. doi: 10.1090/S0025-5718-2012-02631-7. [66] B. Zhao, H. Gao, H. Ding and S. Molloi, Tight-frame based iterative image reconstruction for spectral breast CT, Medical Physics, 40 (2013), 031905. doi: 10.1118/1.4790468. [67] W. Zhou, J. F. Cai and H. Gao, Adaptive tight frame based medical image reconstruction: A proof-of-concept study for computed tomography, Inverse problems, 29 (2013), 1-18. doi: 10.1088/0266-5611/29/12/125006.
show all references
##### References:
[1] M. A. Anastasio, E. Y. Sidky, X. Pan and C. Y. Chou, Boundary reconstruction in limited-angle x-ray phase-contrast tomography, Medical Imaging 2009: Physics of Medical Imaging, 7258 (2009), 725827. doi: 10.1117/12.811918. [2] A. H. Andersen and A. C. Kak, Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm, Ultrasonic Imaging, 6 (1984), 81-94. doi: 10.1016/0161-7346(84)90008-7. [3] A. Auslender and M. Teboulle, Asymptotic cones and functions in optimization and variational inequalities, Springer, Berlin, 2006. [4] G. Bachar, J. H. Siewerdsen, M. J. Daly, D. A. Jaffray and J. C. Irish, Image quality and localization accuracy in C-arm tomosynthesis-guided head and neck surgery, Medical Physics, 34 (2007), 4664-4677. doi: 10.1118/1.2799492. [5] T. Blumensath, Accelerated iterative hard thresholding, Signal Processing, 92 (2012), 752-756. doi: 10.1016/j.sigpro.2011.09.017. [6] T. Blumensath and M. E. Davies, Iterative thresholding for sparse approximations, Journal of Fourier Analysis and Applications, 14 (2008), 629-654. doi: 10.1007/s00041-008-9035-z. [7] T. Blumensath and M. E. Davies, Iterative hard thresholding for compressed sensing, Applied and Computational Harmonic Analysis, 27 (2009), 265-274. doi: 10.1016/j.acha.2009.04.002. [8] J. Bolte, S. Sabach and M. Teboulle, Proximal alternating linearized minimization for nonconvex and nonsmooth problems, Mathematical Programming, 146 (2014), 459-494. doi: 10.1007/s10107-013-0701-9. [9] K. Bredies, D. A. Lorenz and S. Reiterer, Minimization of non-smooth, non-convex functionals by iterative thresholding, Journal of Optimization Theory and Applications, 165 (2015), 78-112. doi: 10.1007/s10957-014-0614-7. [10] M. Burger, J. Müller, E. Papoutsellis and C. B. Schonlieb, Total variation regularization in measurement and image space for PET reconstruction, Inverse Problems, 30 (2014), 105003. doi: 10.1088/0266-5611/30/10/105003. [11] T. M. 
Buzug, Computed tomography: From photon statistics to modern cone-beam CT, Springer Handbook of Medical Technology, (2008), 311-342. doi: 10.1007/978-3-540-74658-4_16. [12] J. F. Cai, S. Osher and Z. Shen, Split Bregman methods and frame based image restoration, Multiscale Modeling & Simulation, 8 (2009), 337-369. [13] Y. Censor and A. Segal, Iterative projection methods in biomedical inverse problems, Mathematical Methods in Biomedical Imaging and Intensity-Modulated Radiation Therapy (IMRT), 10 (2008), 65-96. [14] C. Chen, R. H. Chan, S. Ma and J. Yang, Inertial proximal ADMM for linearly constrained separable convex optimization, SIAM Journal on Imaging Sciences, 8 (2015), 2239-2267. [15] Z. Q. Chen, X. Jin, L. Li and G. Wang, A limited-angle CT reconstruction method based on anisotropic TV minimization, Physics in Medicine and Biology, 58 (2013), 2119-2141. doi: 10.1088/0031-9155/58/7/2119. [16] M. K. Cho, H. K. Kim, H. Youn and S. S. Kim, A feasibility study of digital tomosynthesis for volumetric dental imaging, J. Instrum., 7 (2012), 1-6. doi: 10.1088/1748-0221/7/03/P03007. [17] I. Daubechies, M. Defrise and M. C. De, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Communications on Pure and Applied Mathematics, 57 (2004), 1413-1457. doi: 10.1002/cpa.20042. [18] I. Daubechies, Ten Lectures on Wavelets, 1nd edition, Society for industrial and applied mathematics, Philadelphia, 1992. doi: 10.1137/1.9781611970104.fm. [19] I. Daubechies, B. Han, A. Ron and Z. Shen, Framelets: MRA-based constructions of wavelet frames, Applied and Computational Harmonic Analysis, 14 (2003), 1-46. doi: 10.1016/S1063-5203(02)00511-0. [20] B. Dong and Y. Zhang, An efficient algorithm for $\ell_{0}$ minimization in wavelet frame based image restoration, Journal of Scientific Computing, 54 (2013), 350-368. doi: 10.1007/s10915-012-9597-4. [21] M. Elad, J. L. Starck, P. Querre and D. L. 
Donoho, Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA), Applied and Computational Harmonic Analysis, 19 (2005), 340-358. doi: 10.1016/j.acha.2005.03.005. [22] M. Filipović and A. Jukić, Restoration of images corrupted by mixed Gaussian-impulse noise by iterative soft-hard thresholding, In Signal Processing Conference (EUSIPCO), 2014 Proceedings of the 22nd European, (2014), 1637-1641. [23] J. Frikel and E. T. Quinto, Characterization and reduction of artifacts in limited angle tomography, Inverse Problems, 29(2013), 125007. doi: 10.1088/0266-5611/29/12/125007. [24] H. Gao, J. F. Cai, Z. W. Shen and H. Zhao, Robust principal component analysis-based four-dimensional computed tomography, Physics in Medicine and Biology, 56 (2011), 3781-3798. doi: 10.1088/0031-9155/56/11/002. [25] H. Gao, R. Li, Y. Lin and L. Xing, 4D cone beam CT via spatiotemporal tensor framelet, Medical Physics, 39 (2012), 6943-6946. doi: 10.1118/1.4762288. [26] H. Gao, H. Y. Yu, S. Osher and G. Wang, Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM), Inverse Problems, 27 (2011), 1-22. doi: 10.1088/0266-5611/27/11/115012. [27] H. Gao, L. Zhang, Z. Chen, Y. Xing, J. Cheng and Z. Qi, Direct filtered-backprojection-type reconstruction from a straight-line trajectory, Optical Engineering, 46 (2007), 057003-057003. [28] T. Goldstein and S. Osher, The split Bregman method for $\ell_1$ regularized problems, SIAM Journal on Imaging Sciences, 2 (2009), 323-343. doi: 10.1137/080725891. [29] R. Gordon, A tutorial on ART (algebraic reconstruction techniques), IEEE Transactions on Nuclear Science, 21 (1974), 78-93. doi: 10.1109/TNS.1974.6499238. [30] B. Han, On dual wavelet tight frames, Applied and Computational Harmonic Analysis, 4 (1997), 380-413. doi: 10.1006/acha.1997.0217. [31] X. Han, J. Bian, E. L. Ritman, E. Y. Sidky and X. 
Pan, Optimization-based reconstruRit-manction of sparse images from few-view projections, Physics in Medicine and Biology, 57 (2012), p5245. doi: 10.1088/0031-9155/57/16/5245. [32] B. S. He, A class of projection and contraction methods for monotone variational inequalities, Applied Mathematics and Optimization, 35 (1997), 69-76. doi: 10.1007/BF02683320. [33] B. S. He and M. H. Xu, A general framework of contraction methods for monotone variational inequalities, Pacific Journal of Optimization, 4 (2008), 195-212. [34] K. Ito and K. Kunisch, A note on the existence of nonsmooth nonconvex optimization problems, Journal of Optimization Theory and Applications, 163 (2014), 697-706. doi: 10.1007/s10957-014-0552-4. [35] X. Jia, B. Dong, Y. Lou and S. B. jiang, GPU-based iterative cone-beam CT reconstruction using tight frame regularization, Physics in Medicine and Biology, 56 (2010), 3787-3806. doi: 10.1088/0031-9155/56/13/004. [36] M. Jiang and G. Wang, Development of iterative algorithms for image reconstruction, Journal of X-ray Science and Technology, 10 (2001), 77-86. [37] M. Jiang and G. Wang, Convergence of the simultaneous algebraic reconstruction technique (SART), IEEE Transactions on Image Processing, 12 (2003), 957-961. doi: 10.1109/TIP.2003.815295. [38] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging, Medical physics, IEEE Press, New York, 1988. [39] V. Kolehmainen, S. Siltanen, S. Järvenpää, J. P. Kaipio, P. Koistinenand, M. Lassas, J. Pirttilä and E. Somersalo, Statistical inversion for medical x-ray tomography with few radiographs: Ⅱ. Application to dental radiology, Physics in Medicine and Biology, 48 (2003), 1465-1490. doi: 10.1088/0031-9155/48/10/315. [40] H. Kudo, F. Noo, M. Defrise and R. Clackdoyle, New super-short-scan algorithms for fan-beam and cone-beam reconstruction, Nuclear Science Symposium Conference Record, 2002 IEEE, 2 (2002), 902-906. doi: 10.1109/NSSMIC.2002.1239470. [41] S. J. LaRoque, E. Y. Sidky and X. 
Pan, Accurate image reconstruction from few-view and limited-angle data in diffraction tomography, JOSA A, 25 (2008), 1772-1782. doi: 10.1364/JOSAA.25.001772. [42] X. Lu, Y. Sun and Y. Yuan, Image reconstruction by an alternating minimisation, Neurocomputing, 74 (2011), 661-670. doi: 10.1016/j.neucom.2010.08.003. [43] X. Lu, Y. Sun and Y. Yuan, Optimization for limited angle tomography in medical image processing, Pattern Recognition, 44 (2011), 2427-2435. doi: 10.1016/j.patcog.2010.12.016. [44] F. Noo and D. J. Heuscher, Image reconstruction from cone-beam data on a circular short-scan, Medical Imaging 2002: Image Processing, 4684 (2002), 50-59. doi: 10.1117/12.467199. [45] F. Noo, M. Defrise, R. Clackdoyle and H. Kudo, Image reconstruction from fan-beam projections on less than a short scan, Physics in Medicine and Biology, 47 (2002), 2525-2546. doi: 10.1088/0031-9155/47/14/311. [46] X. Pan, E. Y. Sidky and M. Vannier, Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?, Inverse Problems, 25 (2009), 123009. doi: 10.1088/0266-5611/25/12/123009. [47] C. Ravazzi, S. M. Fosson and E. Magli, Distributed iterative thresholding for L0/L1-regularized linear inverse problems, IEEE Transactions on Information Theory, 61 (2015), 2081-2100. doi: 10.1109/TIT.2015.2403263. [48] R. T. Rockafellarr and R. J. B. Wets, Variational Analysis, 1nd edition, Springer, Berlin, 2009. doi: 10.1007/978-3-642-02431-3. [49] A. Ron and Z. Shen, Affine systems in $L_{2}(R^{d})$ Ⅱ: Dual systems, Journal of Fourier Analysis and Applications, 3 (1997), 617-637. doi: 10.1007/BF02648888. [50] W. P. Segars, D. S. Lalush and B. M. W. Tsui, A realistic spline-based dynamic heart phantom, IEEE Transactions on Nuclear Science, 46 (1999), 503-506. doi: 10.1109/NSSMIC.1998.774369. [51] M. M. Seger and P. E. Danielsson, Scanning of logs with linear cone-beam tomography, Computers and Electronics in Agriculture, 41 (2003), 45-62. 
doi: 10.1016/S0168-1699(03)00041-3. [52] R. L. Siddon, Fast calculation of the exact radiological path for a three-dimensional CT array, Medical Physics, 12 (1985), 252-255. doi: 10.1118/1.595715. [53] E. Y. Sidky and X. Pan, Accurate image reconstruction in circular cone-beam computed tomography by total variation minimization: a preliminary investigation, In Nuclear Science Symposium Conference Record, 5 (2006), 2904-2907. doi: 10.1109/NSSMIC.2006.356484. [54] E. Y. Sidky and X. C. Pan, Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization, Physics in Medicine and Biology, 53 (2008), 4777. doi: 10.1088/0031-9155/53/17/021. [55] E. Soubies, L. Blanc-Féraud and G. Aubert, A Continuous Exact $\ell_{0}$ Penalty (CEL0) for Least Squares Regularized Problem, SIAM Journal on Imaging Sciences, 8 (2015), 1607-1639. doi: 10.1137/151003714. [56] C. Soussen, J. Idier, J. Duan and D. Brie, Homotopy Based Algorithms for-Regularized Least-Squares, IEEE Transactions on Signal Processing, 63 (2015), 3301-3316. doi: 10.1109/TSP.2015.2421476. [57] J. L. Starck, M. Elad and D. L. Donoho, Image decomposition via the combination of sparse representations and a variational approach, IEEE Transactions on Image Processing, 14 (2005), 1570-1582. doi: 10.1109/TIP.2005.852206. [58] M. Storath, A. Weinmann, J. Frikel and M. Unser, Joint image reconstruction and segmentation using the Potts model, Inverse Problems, 31 (2015), 025003. doi: 10.1088/0266-5611/31/2/025003. [59] A. Tingberg, X-ray tomosynthesis: a review of its use for breast and chest imaging, Radiation Protection Dosimetry, 139 (2010), 100-107. doi: 10.1093/rpd/ncq099. [60] C. Wang and L. Zeng, Error bounds and stability in the $\ell_{0}$ regularized for CT reconstruction from small projections, Inverse Problems and Imaging, 10 (2016), 829-853. doi: 10.3934/ipi.2016023. [61] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. 
Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, 13 (2004), 600-612. doi: 10.1109/TIP.2003.819861. [62] Y. Xiao, T. Zeng, J. Yu and M. K. Ng, Restoration of images corrupted by mixed Gaussian-impulse noise via l1-l0 minimization, Pattern Recognition, 44 (2011), 1708-1720. [63] G. L. Zeng, Medical Image Reconstruction, 1nd edition, Springer-Verlag, Berlin Heidelberg, 2010. doi: 10.1007/978-3-642-05368-9. [64] L. Zeng, J. Q. Guo and B. D. Liu, Limited-angle cone-beam computed tomography image reconstruction by total variation minimization and piecewise-constant modification, Journal of Inverse and Ill-Posed Problems, 21 (2013), 735-754. doi: 10.1515/jip-2011-0010. [65] Y. Zhang, B. Dong and Z. S. Lu, $\ell_{0}$ Minimization for wavelet frame based image restoration, Mathematics of Computation, 82 (2013), 995-1015. doi: 10.1090/S0025-5718-2012-02631-7. [66] B. Zhao, H. Gao, H. Ding and S. Molloi, Tight-frame based iterative image reconstruction for spectral breast CT, Medical Physics, 40 (2013), 031905. doi: 10.1118/1.4790468. [67] W. Zhou, J. F. Cai and H. Gao, Adaptive tight frame based medical image reconstruction: A proof-of-concept study for computed tomography, Inverse problems, 29 (2013), 1-18. doi: 10.1088/0266-5611/29/12/125006.
The scanning geometry of limited-angle CT. $T$ denotes the objective table, $\textrm{O}^{'}$ denotes the scanned object, $\textrm{D}$ denotes the detector array, $\textrm{o}$ denotes the rotation center of the X-ray source, $\textrm{S}$ denotes the X-ray source, which rotates anticlockwise around $\textrm{o}$ synchronously with $\textrm{D}$, and $\theta$ denotes the rotation angle, which is less than $180^{0}$ plus a fan-angle
The reconstructed results of the NCAT phantom. The image in the first row is the original image (or reference image). The subsequent rows are the results reconstructed for the scan ranges $[0,120^{0}]$ and $[0,140^{0}]$, respectively. The images from left to right in each row are the results reconstructed using the SART algorithm, the ASD-POCS algorithm and our algorithm. The red arrows indicate the locations of slope artifacts. The grey-scale display window is $[0.2, 0.85]$
The reconstructed results of the practical gear data. The images in the left column are the results reconstructed using the SART algorithm; the subsequent columns are the ASD-POCS algorithm and our algorithm, respectively. The images from top to bottom in each column present the results reconstructed from the scanning ranges $[0,100^{0}]$, $[0,120^{0}]$, $[0,140^{0}]$, and $[0,160^{0}]$. The red arrows indicate the locations of slope artifacts. The display window is $[0.005, 0.0068]$ $mm^{-1}$
The results of transforming the image with the one-level piecewise-constant linear B-spline framelet transform
Sketch map of slope-artifact correction for limited-angle CT image reconstruction when proper parameters are chosen. The wavelet tight frame transform is the piecewise-constant linear B-spline framelet transform
The images from left to right present the Shepp-Logan phantom, the result reconstructed using $\ell_{0}$ regularization, and the result reconstructed using $\ell_{0}$-$\ell_{2}$ regularization. The display window is $[0, 1]$
The reconstructed image from the projection data of the scanning angular range $[0,360^{0}]$ using the FBP algorithm. Metal artifacts are labelled by red rectangles
The reconstructed images from the projection data of the scanning angular range $[0,180^{0}]$ using the ASD-POCS algorithm and our algorithm. ROIs are labelled by red rectangles. The display window is $[0.0051, 0.0068]$ $mm^{-1}$
The reconstructed images from the projection data of the scanning angular range $[0,160^{0}]$ using the ASD-POCS algorithm and our algorithm. ROIs are labelled by red rectangles. The display window is $[0.0051, 0.0062]$ $mm^{-1}$
The zoomed-in views of the image ROIs of Figure 8 and Figure 9. The upper half of the figure shows the reconstructed ROIs for the scan range $[0,180^{0}]$, and the bottom half shows the reconstructed ROIs for the scan range $[0,160^{0}]$. The first and third rows are the results using the ASD-POCS algorithm, and the second and fourth rows are the results using our algorithm. The display window is $[0.0051, 0.0068]$ $mm^{-1}$ for the last column, and $[0.005, 0.005007]$ $mm^{-1}$ for the rest
Geometrical scanning parameters for the simulated CT imaging system
| Parameter | Value |
|---|---|
| The distance between source and rotation center | $981mm$ |
| The angle interval of projection views | $1^{0}$ |
| The distance between source and detector | $1200mm$ |
| The diameter of field of view | $143.6222mm$ |
| The detector bin numbers | $256$ |
| The angle interval of rays | $0.00329^{0}$ |
| Pixel size | $0.5632\times0.5632mm^{2}$ |
| Image size | $256\times256$ |
Quantitative characterization of the reconstruction quality for the NCAT phantom
| Scanning ranges | Algorithm | RMSE | PSNR | MSSIM |
|---|---|---|---|---|
| $[0,120^{0}]$ | SART | 0.0514 | 25.78 | 0.9997 |
| | ASD-POCS | 0.0287 | 30.83 | 0.9999 |
| | our method | 0.0250 | 32.05 | 0.9999 |
| $[0,140^{0}]$ | SART | 0.0429 | 27.35 | 0.9998 |
| | ASD-POCS | 0.0209 | 33.60 | 1.0000 |
| | our method | 0.0215 | 33.33 | 1.0000 |
Quantitative characterization of the reconstruction quality for the Shepp-Logan phantom
| Scanning ranges | Algorithm | RMSE | PSNR | MSSIM |
|---|---|---|---|---|
| $[0,150^{0}]$ | $\ell_{0}$ regularization | 0.0628 | 24.04 | 0.9676 |
| | $\ell_{0}$-$\ell_{2}$ regularization | 0.0618 | 24.17 | 0.9681 |
Quantitative characterization of the reconstruction quality for the gear
| Scan ranges | Algorithm | RMSE | PSNR | MSSIM |
|---|---|---|---|---|
| $0^{0}\sim100^{0}$ | SART | 40.157 | 16.056 | 0.752 |
| | ASD-POCS | 17.296 | 23.372 | 0.784 |
| | our method | 8.000 | 30.069 | 0.805 |
| $0^{0}\sim120^{0}$ | SART | 37.156 | 16.730 | 0.776 |
| | ASD-POCS | 10.728 | 27.521 | 0.787 |
| | our method | 7.691 | 30.412 | 0.815 |
| $0^{0}\sim140^{0}$ | SART | 30.425 | 18.466 | 0.790 |
| | ASD-POCS | 9.096 | 28.954 | 0.788 |
| | our method | 6.984 | 31.248 | 0.814 |
| $0^{0}\sim160^{0}$ | SART | 19.766 | 22.212 | 0.805 |
| | ASD-POCS | 9.680 | 28.413 | 0.807 |
| | our method | 6.570 | 31.779 | 0.807 |
Quantitative characterization of the reconstruction quality for the metal lath
| Scan ranges | Algorithm | RMSE | PSNR | MSSIM |
|---|---|---|---|---|
| $0\sim 180^{0}$ | ASD-POCS | 105.7 | 7.6481 | 0.8050 |
| | our method | 107.4 | 7.5139 | 0.8060 |
| $0\sim 160^{0}$ | ASD-POCS | 108.5 | 7.4209 | 0.8014 |
| | our method | 107.9 | 7.4704 | 0.8014 |
# Hooke's law for a balloon?
by johne1618
Tags: balloon, hooke
P: 344 Hi,

For a linear spring one has Hooke's law: $F = k x$

Is there an equivalent law for a spherical elastic balloon giving pressure inside as a function of radius?

Thanks,
John
P: 482 Here's a good place to start: http://en.wikipedia.org/wiki/Bulk_modulus. Bulk modulus would describe how the pressure of the balloon reacts to the additional pressure over atmosphere that the balloon exerts. However, I don't think it will tell you the actual pressure the balloon exerts.
P: 1,397 I tried to see if I could derive such a law, and I got a result that is a little counter-intuitive, so it's possible I made a mistake somewhere.

Here's the idea: think of a little rectangle made of balloon material. It has a length and a width. If you stretch it to increase its length, it's just like stretching a spring, so it's going to require an amount of energy $\frac{1}{2} k L^2$ to stretch it to length $L$, where $k$ is some spring constant that depends on the balloon and possibly how big the rectangle is. (That's actually not quite right; it really should be something like $\frac{1}{2} k (L-L_0)^2$, where $L_0$ is the equilibrium length. I'm going to simplify things by taking the limit as $L_0 \rightarrow 0$.) Similarly, stretching it to increase its width will require an amount of energy $\frac{1}{2} k' W^2$ (again ignoring the equilibrium width). So the total energy associated with a rectangle of length $L$ and width $W$ will be just $\frac{1}{2} (k L^2 + k' W^2)$.

If we assume that it's a square, so that $L = W$ and $k = k'$, then the total energy of a square with side $L$ will be $k L^2$. That can be rewritten as $k A$, where $A$ is the area of the little square of balloon. Assuming that this same amount of energy is stored in every little square of the balloon, we conclude that the same formula applies for the entire balloon: $E \propto A \propto r^2$, where I used $A = 4 \pi r^2$ for a spherical balloon. (I'm using $\propto$ to mean "proportional to", which means I'm not keeping track of constants like $\pi$ and $k$, etc.)

So that's a pretty sensible result, it seems to me: the energy due to stretching for a balloon is proportional to the square of the radius. We can rewrite the above formula for $E$ in terms of volume, using $V \propto r^3$: $E \propto V^{\frac{2}{3}}$. That's still sensible: the more you increase the volume of the balloon, the more energy is required. But here comes the counter-intuitive part.

The relationship between pressure (the force per unit area required to keep the balloon expanded) and energy is given by $P = \dfrac{dE}{dV}$. Using our formula for $E$, we get $P \propto V^{-\frac{1}{3}} = \dfrac{1}{V^{\frac{1}{3}}}$.

There's the counter-intuitive part: as you expand the balloon more and more, the pressure needed to keep it expanded goes DOWN. I feel like that can't possibly be right. On the other hand, it is true from experience that blowing up a balloon is hardest when the balloon is small, and gets easier when the balloon is bigger. But I always assumed that was a failure of Hooke's law, that the forces holding the balloon together got weaker as it stretched, instead of increasing linearly.

Anyway, assuming that this analysis is right, the conclusion for the balloon version of Hooke's law would be (converting back from volume to radius): $P = K/r$. The pressure required to inflate a balloon to radius $r$ is inversely proportional to $r$.
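The scaling steps in the post above are easy to check symbolically: starting from $E \propto V^{2/3}$, differentiating with respect to $V$, and substituting $V \propto r^3$ should reproduce $P = K/r$. A minimal sketch with SymPy (the symbol `c` is just an arbitrary proportionality constant standing in for the $\propto$ bookkeeping, not a quantity from the thread):

```python
import sympy as sp

V, r, c = sp.symbols('V r c', positive=True)

# Stretching energy of the balloon skin: E proportional to V^(2/3)
E = c * V**sp.Rational(2, 3)

# Pressure needed to hold the balloon at volume V: P = dE/dV
P = sp.diff(E, V)                        # proportional to V^(-1/3)

# Convert back to the radius via V ~ r^3 (constants like 4*pi/3 absorbed into c)
P_of_r = sp.simplify(P.subs(V, r**3))    # proportional to 1/r, i.e. P = K/r
print(P, P_of_r)
```

The derivative is positive but decreasing in $V$, which is exactly the counter-intuitive point: the larger the balloon, the smaller the pressure needed to hold it there.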
PF Patron
Thanks
P: 2,953
## Hooke's law for a balloon?
A balloon is a rubber sheet that is undergoing large deformation biaxial stretching, and its "stress-strain behavior" is non-linear. One of the key characteristics of rubber is that it is virtually incompressible, so the volume of the rubber sheet remains constant. If the balloon were a perfect sphere, the surface area of the rubber sheet would be $4\pi r^2$ and its thickness (assumed uniform) would be $h$, so its volume would be $4\pi r^2h$.
Initially, you would need a small (virtually insignificant) amount of initial pressure to snap the balloon into its initial spherical shape. If the initial radius was $r_0$ and the initial thickness of the rubber was $h_0$, then the initial volume of the balloon rubber would be $4\pi r_0^2h_0$. But, since rubber is incompressible, the initial and final volumes of the rubber would have to be the same, so that $$\frac{h}{h_0}=\left(\frac{r_0}{r}\right)^2$$So, as the balloon inflates, the thickness of the rubber decreases as the square of the radius.
The amount that the balloon rubber stretches can be characterized by the biaxial stretch ratio. The rubber sheet surface stretches equally in all directions (for a sphere), and the distance along a great circle between any two points on the sphere increases in proportion to the ratio of the present radius to the initial radius. This ratio is called the stretch ratio λ:$$\lambda=\frac{r}{r_0}$$
So, in terms of the stretch ratio, the rubber thickness h is given by:$$h=\frac{h_0}{\lambda^2}$$
In general, rubber is a very non-linear elastic material, and the tensile stress within the sheet σ (force per unit area) will be a non-linear function of the stretch ratio λ:
$$\sigma=\sigma(\lambda)$$
The next step in this development is to do an equilibrium force balance so that the stress can be expressed in terms of the pressure difference between the inside and outside of the balloon, the balloon sheet thickness h, and the balloon radius r.
If you do an equilibrium force balance on the balloon, you will find that the biaxial tensile stress in the balloon rubber σ is related to the balloon radius r, the rubber thickness h, and the difference in pressure between inside and outside the balloon (pin - pout) by:
$$\sigma(\lambda)=\frac{r(p_{in}-p_{out})}{2h}$$
If we combine this equation with the equations I presented previously in the development, we get:
$$\frac{\sigma(\lambda)}{{\lambda} ^3}=\frac{r_0(p_{in}-p_{out})}{2h_0}$$
The functional relationship σ(λ) between the tensile stress σ and the stretch ratio λ is unique to the particular rubber comprising the balloon, and is independent of the geometry of the specific system under consideration ( as characterized by r0 and h0). For this reason, σ(λ) is referred to as a material function for the particular rubber. Since the entire left hand side of the above equation is a function only of λ, it too is a material function for the rubber, now designated by $\hat{\sigma}(\lambda)$:
$$\hat{\sigma}(\lambda)=\frac{r_0(p_{in}-p_{out})}{2h_0}$$
If the functional relationship $\hat{\sigma}(\lambda)$ between the stress parameter $\hat{\sigma}$ and the stretch ratio λ were known in advance, then we could use the above equation to predict, for any arbitrary balloon geometry (r0 and h0), the relationship between the pressure difference $(p_{in}-p_{out})$ and the inflated balloon radius r. Alternatively, we could, for a specific balloon, experimentally measure the right-hand side of the equation as a function of the measured stretch ratio λ, and thereby determine the functional relationship $\hat{\sigma}(\lambda)$ experimentally. We could then use that relationship for all other balloons of different r0 and h0 made of the same rubber to predict their inflation behavior. We could also determine the required functionality by doing experiments on flat sheets of rubber.
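As a concrete illustration of how the material function could be used, here is a sketch that assumes a neo-Hookean rubber (a specific material model chosen for illustration; the post deliberately leaves σ(λ) general). Substituting its equibiaxial stress into the force balance above reproduces the familiar balloon behavior: the pressure peaks at a modest stretch and then falls.

```python
# ASSUMPTION: neo-Hookean rubber, for which the equibiaxial Cauchy
# stress is sigma(lam) = G * (lam**2 - lam**-4). Substituting into
# the force balance sigma(lam)/lam**3 = r0*(p_in - p_out)/(2*h0)
# gives dp = (2*h0*G/r0) * (lam**-1 - lam**-7).
def balloon_pressure(lam, G=1.0, h0=0.01, r0=1.0):
    sigma = G * (lam ** 2 - lam ** -4)
    return 2 * h0 * sigma / (r0 * lam ** 3)

peak = 7 ** (1 / 6)  # where the pressure curve peaks, about 1.38
print([round(balloon_pressure(l), 5) for l in (1.1, peak, 3.0)])
```

The pressure rises to a maximum at λ = 7^(1/6) ≈ 1.38 and then decreases, matching the everyday observation that a balloon is hardest to inflate near the start.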
Chet
P: 1,397
Quote by Chestermiller A balloon is a rubber sheet that is undergoing large deformation biaxial stretching, and its "stress-strain behavior" is non-linear.
Your analysis is very thorough, but I got that the idea was to give a law giving an idealized balloon, in the same way that Hooke's law describes an idealized spring. No actual spring obeys $F = -k x$ except as a first-order approximation. So the question is, what is an analogous first-order approximation governing a balloon?
PF Patron
Thanks
P: 2,953
Quote by stevendaryl Your analysis is very thorough, but I got that the idea was to give a law giving an idealized balloon, in the same way that Hooke's law describes an idealized spring. No actual spring obeys $F = -k x$ except as a first-order approximation. So the question is, what is an analogous first-order approximation governing a balloon?
The real question should not focus specifically on a balloon, but rather on the relationship between stress and deformation for rubber in general, and then, based on this, for rubber sheets (under so-called plane stress loading conditions). For a metal, the general version of Hooke's law provides a linear relationship between the stress tensor and the so-called small strain tensor. For rubber, which is capable of exhibiting large elastic deformations, you need to describe the kinematics of the deformation using the so-called finite strain tensor. For such a material, the strain energy is a non-linear function of the three principal stretch ratios of the material, and is equal to zero when all three principal stretch ratios are equal to unity. To gain entry into the relationship between the strain energy, the stress tensor, and the deformation tensor for rubber, google Mooney-Rivlin. This should get you started.
If you are just looking for a linear relationship for a balloon that applies in the limit of very small perturbations to an initially "uninflated" balloon, just take σ(λ) = E(λ-1) (where E is an elastic material constant), and λ≈1 in the balloon equation I presented. However, what you end up with will not be very satisfying because the actual equation is very non-linear, and the analysis will thus not extend to anything realistic.
Chet
P: 1,397
Quote by Chestermiller If you are just looking for a linear relationship for a balloon that applies in the limit of very small perturbations to an initially "uninflated" balloon, just take σ(λ) = E(λ-1) (where E is an elastic material constant), and λ≈1 in the balloon equation I presented. However, what you end up with will not be very satisfying because the actual equation is very non-linear, and the analysis will thus not extend to anything realistic.
The same is true of Hooke's law--nothing actually obeys Hooke's law. The importance of the harmonic oscillator force law is that it's one of those few problems where the implications of Newton's laws (and the quantum version, as well) can be solved exactly.
PF Patron
Thanks
P: 2,953
An important difference between Hooke's law for a metal and the corresponding deformation law for a rubber is that, even though the rubber behaves non-linearly, it remains elastic up to much larger deformations. In the case of an actual spring, because of the unique helical geometry of the spring, the strains in the metal are small, even though the overall change in length of the spring divided by its original axial length can be large. Each little increment of coiled metal twists a small amount as the overall spring is extended axially. But none of the strains in the metal are large enough to cause plastic deformation (unless you try to stretch the spring too far).
# Information Security and Cryptography Research Group
## Communication-Efficient Non-Interactive Proofs of Knowledge with Online Extractors
### Marc Fischlin
Advances in Cryptology — CRYPTO 2005, Lecture Notes in Computer Science, Springer-Verlag, vol. 3621, pp. 152–168, Aug 2005.
We show how to turn three-move proofs of knowledge into non-interactive ones in the random oracle model. Unlike the classical Fiat-Shamir transformation our solution supports an online extractor which outputs the witness from such a non-interactive proof instantaneously, without having to rewind or fork. Additionally, the communication complexity of our solution is significantly lower than for previous proofs with online extractors. We furthermore give a superlogarithmic lower bound on the number of hash function evaluations for such online extractable proofs, matching the number in our construction, and we also show how to enhance security of the group signature scheme suggested recently by Boneh, Boyen and Shacham with our construction.
## BibTeX Citation
@inproceedings{Fischl05b,
author = {Marc Fischlin},
title = {Communication-Efficient Non-Interactive Proofs of Knowledge with Online Extractors},
editor = {Victor Shoup},
booktitle = {Advances in Cryptology --- CRYPTO 2005},
pages = {152--168},
series = {Lecture Notes in Computer Science},
volume = {3621},
year = {2005},
month = aug,
publisher = {Springer-Verlag},
}
## The power of the unseen, the abstract: applications of mathematics
Applications of math are everywhere…anywhere we see, use, test/taste, touch, etc…
I have made a quick compilation of some such examples below:
1. Crystallography
2. Coding Theory (Error Correction) (the stuff like Hamming codes, parity check codes; used in 3G, 4G etc.) Used in data storage also. Bar codes, QR codes, etc.
3. Medicine: MRI, cancer detection, tomography, etc.
4. Image processing: JPEG2000; Digital enhancement etc.
5. Regulating traffic: use of probability theory and queuing theory
6. Improving performance in sports
7. Betting and bidding; including spectrum auction using John Nash’s game theory.
8. Robotics
9. Space Exploration
10. Wireless communications including cellular telephony. (You can Google search this; for example, Fourier series is used in Digital Signal Processing (DSP). Even some concepts of convergence of a series are necessary!) Actually, this is a digital communication system, and each component of it requires heavy use of mathematical machinery: as the information-bearing signal is passed from source to sink, it undergoes several steps one by one, like source coding, encryption (like AES, RSA, or ECC), error control coding, and modulation/transmission via a physical channel. On the receiver or sink side, the "opposite" steps are carried out. This is generally taught in electrical engineering. You can Google search these things.
11. DNA Analysis
12. Exploring oceans (example, with unmanned underwater vehicles)
13. Packing (physical and electronic)
14. Aircraft designing
15. Pattern identification
16. Weather forecasting.
17. GPS also uses math. It uses physics also. Perhaps, just to satisfy your curiosity, GPS uses special relativity.
18. Computer networks: of course, they use queuing theory. Long back, the TCP/IP slow-start algorithm was designed and developed by Van Jacobson. (You can Google search all this, but the stuff is arcane right now due to your current education level.)
19. Architecture, of course, uses geometry. For example, Golden ratio.
20. Analyzing fluid flows.
21. Designing contact lenses for the eyes. Including coloured contact lenses to enhance beauty or for fashion.
22. Artificial Intelligence and Machine Intelligence.
23. Internet Security.
24. Astronomy, of course. Who can ever forget this? Get yourself a nice telescope and get hooked. You can also use the Stellarium.org freeware to learn to identify stars, planets, and constellations.
25. Analyzing chaos and fractals: the classic movie "Jurassic Park" was based on fractal geometry. The dinos were, of course, simulations!
26. Forensics
27. Combinatorial optimization; the travelling salesman problem.
28. Computational Biology
We will try to look a bit deeper into these applications in later blogs. And, yes, before I forget: "Ramanujan's algorithm to compute $\pi$ up to a million digits is used to test the efficacy and efficiency of supercomputers." Of course, there will be other testing procedures also for testing supercomputers.
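As a small worked illustration of the Ramanujan series mentioned above (his 1914 series $\frac{1}{\pi} = \frac{2\sqrt{2}}{9801}\sum_{k\ge 0}\frac{(4k)!\,(1103+26390k)}{(k!)^4\,396^{4k}}$; this snippet is my own sketch, not any supercomputer test suite):

```python
from decimal import Decimal, getcontext
from math import factorial

def ramanujan_pi(terms, digits=40):
    # 1/pi = (2*sqrt(2)/9801) * sum_k (4k)!(1103+26390k)/((k!)^4 * 396^(4k))
    getcontext().prec = digits + 10
    s = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        s += num / den
    return 1 / (s * Decimal(2) * Decimal(2).sqrt() / 9801)

# each term contributes roughly 8 more correct digits of pi
print(str(ramanujan_pi(3))[:20])
```

Each extra term adds about eight correct digits, which is why the series is so well suited to stress-testing arbitrary-precision arithmetic.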
There will be several more. Kindly share your views.
-Nalin Pithwa.
# How are CPUs designed?
I've started playing with electronics a while ago and making simple logic gates using transistors. I know modern integrated circuits use CMOS instead of transistor-transistor logic. The thing I can't help wondering about is how CPUs are designed.
Is design still done at a (sub)logic gate level, or is there not much innovation in that area anymore and have we moved on to a higher level of abstraction? I understand how an ALU is built, but there is a lot more to CPUs than that.
Where do the designs for the billions of transistors come from? Are they mostly auto generated by software or is there still a lot of manual optimization?
-
I'd say Verilog or VHDL. – avakar Mar 30 '12 at 15:27
While these topics are fascinating, we seem to be a long way from "practical, answerable questions based on actual problems that you face". Also I can imagine an entire book that answers this question. – Martin Mar 30 '12 at 15:31
Isn't it VLSI now? en.wikipedia.org/wiki/Vlsi – AngryEE Mar 30 '12 at 15:38
@Overv, there is still a lot of work where you make sure your base blocks you are plugging together are optimized at the gate level, then you just plug in those optimized blocks in an optimized way! – Kortuk Mar 30 '12 at 16:13
I voted to re-open -- while I agree that a complete answer telling "everything you need to know to build an entire CPU from scratch" is not a good match for this site, I think a brief overview and a few links would be a good answer here. – davidcary Mar 30 '12 at 17:59
CPUs and SoCs are very likely designed using hardware description languages like Verilog and VHDL (the two major players).
These languages allow different levels of abstraction. In VHDL, you can define logic blocks as entities; an entity contains input and output ports. Within the block you define the logic required. Say you define a block with input A, input B and output C. You could simply write C <= A and B;, and you have created an AND gate block. This is possibly the simplest block you can imagine.
Digital systems are typically designed with a strong hierarchy. One may start 'top-level' with the major functions a CPU requires: processor (multiple?), memory, PCI Express, and other buses. At this level, the buses and communication signals between memory and processor may already be defined.
When you step one level down, it will define the inner workings of making something 'work'. Take the example of a microcontroller: it may contain a UART interface. The actual logic required to make a functional UART is defined one level below. In there, much other logic may be required to generate and divide the required clock, buffer data (FIFO buffers), and report data to the CPU (some kind of bus system).
The interesting thing about VHDL and digital design is the reuse of blocks. You could, for example, just copy and paste the UART block in your top level to create 2 UARTs (well, maybe not that easy; only if the UART block is capable of some kind of addressing!).
This design isn't any kind of gate-level design. The VHDL can also be 'compiled' in a way that it is finally translated to logic gates. A machine can optimize this far better than a human could (and quicker too). For example: the internals of block A require an inverter before outputting the signal. Block B takes this output signal and inverts it once again. Well, 2 inverters in series don't do much, right? Correct, so you can just as well leave them out. However, in the 'top-level' design you won't be able to spot the two inverters in series; you just see two ports connected. A compiler can optimize this far quicker than a human.
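The double-inverter example can be sketched as a toy peephole pass (the list-of-strings "netlist" here is invented purely for illustration; real synthesis tools work on far richer graph structures):

```python
# Toy illustration of the optimization described above: cancel
# back-to-back inverters in a linear chain of gates.
def simplify(chain):
    out = []
    for gate in chain:
        if gate == "NOT" and out and out[-1] == "NOT":
            out.pop()  # NOT followed by NOT cancels out
        else:
            out.append(gate)
    return out

print(simplify(["AND", "NOT", "NOT", "OR"]))  # ['AND', 'OR']
```

A synthesis tool applies thousands of such rewrites mechanically, which is exactly why the compiler spots redundancies a human reading the top level never would.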
Basically what digital system design contains is the description how the logic should 'behave', and the computer is used to figure out what the most efficient way is to lay out the individual logic gates.
-
Just as there is still a place for assembly code in software, lower-level hardware design can be cost effective in some cases. E.g., SRAM cells are often so commonly used that highly optimized designs are developed to optimize for density (last-level cache), access latency (L1 cache), or other characteristics, especially at an integrated design-manufacturer like Intel. – Paul A. Clayton Dec 17 '12 at 3:48
Let me simplify and expand my previous comments and connect the dots for those who seem to need it.
Is design still done at a (sub)logic gate level?
• YES
Design is done at many levels, and the sub-logic level is different every generation. Each fabrication shrink demands the most brilliant physics, chemistry, and lithographic process experience, as the structure of a transistor changes and geometry changes to compensate for trade-offs; as it shrinks down toward atomic levels, each binary step down in size costs ~$billions. Achieving 14 nm geometry is a massive undertaking in R&D and in process control and management, and that is still an understatement!

For example, the job skills required to do this include: "FET, cell, and block-level custom layouts, FUB-level floor plans, abstract view generation, RC extraction, and schematic-to-layout verification and debug using phases of physical design development including parasitic extraction, static timing, wire load models, clock generation, custom polygon editing, auto-place and route algorithms, floor planning, full-chip assembly, packaging, and verification."

is there not much innovation in that area anymore?

• WRONG. There is significant and heavily funded innovation in semiconductor physics; judging by Moore's Law and the number of patents, it will never stop. The savings in power and heat, and thus the quadrupling in capability, pay off each time.

have we moved on to a higher level of abstraction?

• It never stopped moving. With demand for more cores, doing more in one instruction (like ARM RISC CPUs), more powerful embedded µCs or MCUs, smart RAM with DDR4 (which has ECC by default), and sectors like flash with priority bits for urgent memory fetches, the CPU evolution and architectural changes will never stop. Let me give you a hint: go do a job search at Intel, AMD, TI or AD for engineers and read the job descriptions.

Where do the designs for the billions of transistors come from?

• They came from adding more 64-bit blocks of hardware. But now, heading toward nanotube scales, thinking has to change from the top-down approach of blocks to the bottom-up approach of nanotubes to make it work.
Are they mostly auto generated by software?

• (With tongue firmly planted in cheek...) Actually they are still extracting designs from Area 51 spaceships and have a way to go... until we are fully nano-nanotube compliant. An engineer goes into the library and says "nVidia, we would like you to join us over here on this chip", and it becomes a part, which goes into a macro-block. The layout can be replicated like the ants in Toy Story, but explicit control over all connections must be manually routed and checked out, as well as compared against DRC and auto-routing. Yes, automation tools are constantly being upgraded to remove duplication and wasted time.

is there still a lot of manual optimization?

• Considering one airline saved enough money to pay for your salary by removing just 1 olive from the dinner in First Class, Intel will be looking at ways to remove as many atoms as possible within the time-frame. Any excess capacitance means wasted heat, lost performance, and oops, also more noise. Not so fast...

But really, CPUs grow like Tokyo: it's not overnight, but tens of millions live there now, with steady improvement. I didn't learn how to design at university, but by reading and trying to understand how things work I was able to get up to speed in industry pretty fast. I got 10 years' experience in my first 5 years: in aerospace, nuclear instrument design, SCADA design, process monitoring, antenna design, automated weather station design and debug, OCXOs, PLLs, VLF receivers, two-way remote control of Black Brant rockets... and that was just my first job. I had no idea what I could do. Don't worry about billions of transistors, or be afraid of what to learn or how much you need to know. Just follow your passion and read trade journals between sleep; then you won't look so green on the job, and it doesn't feel like work anymore. I remember having to design a 741-"like" op amp as part of an exam one time, in 20 minutes.
I have never really used it, but I can recognize good from great designs. But then it only had 20 transistors. How to design a CPU must start with a spec: namely, why design a CPU, and which measurable benchmarks to achieve, such as:

- Macro instructions per second (MIPS) (more important than CPU clock). For example:
  - Intel's Itanium chip is based on what they call an Explicitly Parallel Instruction Computing (EPIC) design.
  - Transmeta patented a CPU design with very long instruction word code-morphing microprocessors (VLIWCMM). They sued Intel in 2006, closed shop, and settled for ~$200 million in 2007.
- Performance per watt (PPW), when power costs > cost of chip (for servers).
- Floating-point operations per second (FLOPS) for math performance.
There are many more metrics, but never base a CPU's design quality on its GHz speed (see myth)
So what tools du jour are needed to design CPUs? The list would not fit on this page, ranging from atomic-level physics design, to dynamic-mesh EMC physical EM/RF design, to Front End Design Verification Test Engineer, where the skills required include:

- Front-end RTL simulation
- Knowledge of IA and computer architecture and system-level design
- Logic verification and logic simulation using either VHDL or Verilog
- Object-oriented programming and various CPU, bus/interconnect, and coherency protocols
-
"Verilog" and "VHDL" only scratches the surface of all these naive yet inspiration searching questions. The real world is much more analog than digital than you realize. – Tony Stewart May 17 '12 at 16:30
Do you have an explanation of the Op Amp circuit anywhere. All I can see is a cascoded OTA, the rest is circuit Voodoo. – CyberMen May 17 '12 at 20:33
Wow. Too bad it's mostly irrelevant to the question. – Dave Tweed Oct 19 '12 at 22:18
# Obtaining R pec survival patient risk percentage
### Introduction
I have a 300,000-row cancer dataset with around 60 variables (cancer stage, year of diagnosis, radiation therapy, histology, etc.) with a time variable ("number of months survived") and an event (alive or dead). The last two variables have complete values in the individual records.
### Survival months as outcome variable
My initial goal was to create a multilayer perceptron model in WEKA given my data in order to predict the number of months survived for new instances.
1. Preprocess data
2. Train the model in WEKA
3. Assess model's performance (accuracy, specificity, sensitivity)
4. Test model for new cancer records
### Patient risk as outcome variable
The requirements changed, so the goal was revised to predicting the patient's risk of survival within equally-spaced time periods.
1. Divide data into:
• 24 - 47 months (2 years)
• 48 - 83 months (4 years)
• 84 - 107 months (6 years)
• 108 - 119 months (8 years)
• 120 - "up to what's available" months (10 years)
2. I will then use the function predictSurvProb from the package pec in R, as suggested in this problem, to obtain individual survival percentages for the aforementioned records. The data will be divided into their own survival-months bracket with its respective patient risk prediction, i.e. records that survived within two years will have a patient risk survival prediction percentage at two years.
3. After getting each individual record's respective survival percentage per time period, WEKA will be used to create five models, one per time period, with patient survival as the outcome variable.
4. The five models can be used to predict survival of a single record, giving out five different patient survival risk percentages.
### Problem
I am still learning about R but I managed to apply the sample code (from the pec documentation for predictSurvProb) into my data as:
library(survival)
library(pec)
library(rms)
# fit a Cox model
coxmodel <- cph(Surv(time,vsr)~1,data=cancer,surv=TRUE)
# predicted survival probabilities can be extracted at selected time-points:
ttt <- quantile(time)
# for selected predictor values:
ndat <- data.frame(vsr=c(0,1)) # I assumed the event variable is provided here
# as follows
predictSurvProb(coxmodel,newdata=ndat,times=ttt) # has error
## simulate some learning and some validation data
learndat <- SimSurv(100)
valdat <- SimSurv(100)
## use the learning data to fit a Cox model
fitCox <- coxph(Surv(cancer$time,cancer$vsr)~vsr,data=cancer)
## suppose we want to predict the survival probabilities for all patients
## in the validation data at the following time points:
psurv <- predictSurvProb(fitCox,newdata=valdat,times=seq(24,48,72,96,120))
## This is a matrix with survival probabilities
## one column for each of the 5 time points
## one row for each validation set individual
I need to obtain the patient risk calculation for 300,000 patients but the line
predictSurvProb(coxmodel,newdata=ndat,times=ttt)
shows the error
Error in .subset2(x, i, exact = exact) : subscript out of bounds
How do I solve this error?
• Should you not be providing the predictor variables to predictSurvProb as newdata, rather than vsr which is the outcome? – Brendon Dec 18 '13 at 19:44
• (1) One advantage of neural net approaches to survival analysis is that they do not rely on the assumptions that underlie Cox analysis. You can get by with simpler Kaplan-Meier estimates for censored cases, and avoid this complexity. Look at link and the references therein for what seems to be something close to what you have in mind. (2) Make sure you have a good handle on the meaning and reliability of the underlying clinical data. Staging and other clinical variables can have different meanings among different types of cancer. – EdM Dec 18 '13 at 19:47
• @Brendon, thank you for the comment. I was at first confused in the docu about the term "predictor variables" but I guess I overthought it. I will provide the 60 variables right? – Saggy Manatee And Swan Folk Dec 18 '13 at 23:56
• @EdM, thank you very much esp. the journal article. Yes, the oncologist told me that as well and he approved my dataset. So I will not need the predictSurvProb right after all...but how do I obtain my patient risk variable? I understood that they used KM and an input vector but I'm not familiar with the methodology. Will a KM method like this suffice for the problem? – Saggy Manatee And Swan Folk Dec 19 '13 at 0:46
Having said that, however, I urge you to consider looking at the other neural-network approaches noted in that article and any other more recent developments; I have a fair amount of experience with survival analysis, but not with neural-network approaches. Also, although neural-network approaches can give good predictive behavior, the hidden variables make it difficult to say what predictor variables really "matter," something that clinicians typically care about. The Survival Task View page available at CRAN mirrors shows other approaches for high-dimensional data like yours that might give results easier to interpret heuristically, and the MachineLearning Task View page shows what's available for neural network and other machine-learning approaches in R.
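For readers who want to see the Kaplan–Meier estimate mentioned in the comments in concrete terms, here is a minimal pure-Python sketch of the product-limit estimator on toy data (illustrative only; this is not the pec or predictSurvProb API, and the times/events below are made up):

```python
from collections import Counter

def kaplan_meier(times, events):
    # times: follow-up (e.g. months); events: 1 = death, 0 = censored
    deaths = Counter(t for t, e in zip(times, events) if e)
    removed = Counter(times)          # deaths + censored leaving risk set
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(removed):
        if deaths[t]:
            surv *= 1 - deaths[t] / at_risk   # product-limit step
            curve.append((t, surv))
        at_risk -= removed[t]
    return curve

print([(t, round(s, 4)) for t, s in kaplan_meier([1, 2, 2, 3, 4],
                                                 [1, 1, 0, 1, 0])])
# [(1, 0.8), (2, 0.6), (3, 0.3)]
```

Reading off the estimated survival probability at each bracket boundary would give the per-period risk labels the question's workflow feeds into WEKA, without fitting a Cox model at all.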
# differentiate and find the domain of ff(x) = ln ln ln x
mathsworkmusic | Certified Educator
To differentiate `f(x) = ln(ln(ln(x)))` use the Chain Rule for differentiation of nested function.
The rule is that if `f(x) = u(v(w(x)))` where `u, v` and `w` are functions, then
`f'(x) = u'(v(w(x)))v'(w(x))w'(x)`
Using the fact that `d/dx lnx = 1/x` we have that
`f'(x) = 1/ln(ln(x)) * 1/ln(x) * 1/x` (answer to the first question)
To find the domain of `f(x)` we proceed as follows.
The domain of `ln x` is `x in (0,oo)`.
For `ln(ln(ln(x)))` to be defined we need `ln(ln(x)) in (0,oo)`
`implies ln(x) in (1, oo)`
`implies x in (e^1,oo)` (answer to the second question)
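Both results can be cross-checked numerically. The sketch below (added for illustration, with x = 10 chosen arbitrarily inside the domain x > e) compares a central finite difference of f with the Chain Rule formula:

```python
import math

def f(x):
    # defined only for x > e, since ln(ln(x)) must be positive
    return math.log(math.log(math.log(x)))

def fprime(x):
    # the Chain Rule result: 1 / (ln(ln x) * ln x * x)
    return 1 / (math.log(math.log(x)) * math.log(x) * x)

x, h = 10.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(abs(numeric - fprime(x)) < 1e-8)  # True
```

The agreement to eight decimal places confirms the differentiated formula at that point.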
# Tag Info
2
It seems to me as this is not possible in this way. bipoles/resistor/height and width has been hardcoded for the American variants. The only possibility I am seeing here is to change the length of the bipole. The length is a possible parameter of each bipole. This could look like in my MWE below: % arara: pdflatex \documentclass[preview]{standalone} ...
0
Emitter Resistor has wrong indicator in the schematic... corrected version: \begin{center} \begin{circuitikz}[scale=1.4] \draw (0,0) node[ground] {}; \draw (0,0) to[R=$R_2$] (0,1.4) -- (0,2); \draw (0,2) -- (0,2.6) to[R=$R_1$] (0,4) -- (2,4); \draw (2,2) node[pnp,] (pnp) {} (pnp.B) [short,-*] to (0,2);% "short" is a plain wire \draw (2,4) ...
5
I'd like to propose here an alternative, using the ground symbol from the circuits.ee.IEC library and using the powerful forest package to draw the tree: The code: \documentclass{llncs} \usepackage{llncsdoc} \usepackage{tikz} \usetikzlibrary{shapes.geometric,arrows,fit,matrix,positioning,pgfplots.groupplots,circuits.ee.IEC} \usepackage{forest} \tikzset{ ...
6
After one hour investigation, removing -> from tikzpicture environment to path command will solve the problem. Code \documentclass{llncs} \usepackage{llncsdoc} \usepackage{tikz} \usepackage{pgfplots} \pgfplotsset{compat=newest} \usetikzlibrary{shapes.geometric,arrows,fit,matrix,positioning,pgfplots.groupplots} \usepackage{circuitikz} \begin{document} ...
Elitist Jerks MoP beta discussion
04/30/12, 5:07 PM #61 Havoc12 King Hippo Shaarra Night Elf Priest Silvermoon (EU)

I observed something quite interesting: the renew glyph always removes just 1 tick and boosts the remainder by the same amount, regardless of haste. For those who don't understand what this means:

At 4 ticks, glyphed = unglyphed
At 5 ticks, glyphed = 4*33 = 132%, while unglyphed = 125%, i.e. glyphed heals for 5.6% more
At 6 ticks, glyphed = 5*33 = 165%, while unglyphed = 150%, i.e. glyphed heals for 10% more

For me with 9.42% haste, PWS (25.83% haste) nets me one extra tick, while PWS+PI (51% haste) gives two extra ticks. The reason why this is so interesting is this: renew does not consume Borrowed Time, so basically you can cast PWS and 4 hasted renews if you have even a little bit of haste. If you have 10% base haste you can cast 5 hasted renews per Borrowed Time.

What does this mean? In Inner Will, for me, renew heals for 5,253, or 6,987 if glyphed. Total values including crit and aegis are 5,253*4*1.1821 = 24,838, and glyphed heals for the same amount. With Borrowed Time, however, glyphed renew heals for 33,037 while unglyphed renew heals for 31,048.

In Inner Will renew costs 3,535 mana, and with my values the GCD is 1.371 sec, or 1.167 sec with Borrowed Time. If I cast PWS + 4 renews with the glyph, the total cast time will be 1.371+4*1.167 = 6.039 seconds and total healing will be 49,213 + 4*33,037 = 181,361. Total mana cost is 3,535*4+9,600*0.85 = 22,300. HPM: 8.133.

Flash Heal under these conditions heals for 33,284 without crit and aegis and costs 9,600 mana. Casting 1 PWS and 4 Flash Heals on 4 targets (assuming they don't have Grace, which you can expect for non-tanks) heals for 49,213 + 4*33,284*1.1821 = 206,580 in Inner Will or 217,529 in Inner Fire. Cast time = 1.371*4+1.167 = 6.651 sec. Total mana cost is 46,560 in Inner Will and 48,000 in Inner Fire.
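The tick arithmetic above can be written out as a tiny sketch (percentages as given in the post; since the glyph's 33% is a rounded 1/3, the 4-tick case comes out 99 vs 100 here rather than exactly equal):

```python
# Post's model: each normal renew tick heals 25% of the base amount;
# the glyph trades one tick away and boosts the rest to 33% each.
def renew_total(ticks, glyphed):
    return (ticks - 1) * 33 if glyphed else ticks * 25

for n in (4, 5, 6):
    print(n, renew_total(n, False), renew_total(n, True))
```

At 5 and 6 ticks the glyph pulls ahead (132 vs 125, 165 vs 150), which is the whole point of stacking haste past the extra-tick breakpoints.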
HPCT of PWS+4xrenew, Inner Will = 30,032 for 22,300 mana
HPCT of PWS+4xflash, Inner Will = 31,060 for 46,560 mana
HPCT of PWS+4xflash, Inner Fire = 32,706 for 48,000 mana

If you are under PI, then things get even worse, since you can now fit 5 renews in a single Borrowed Time and renew will tick 5 times. So HPCT = 255,694.25/6.142 = 41,630.5 for 17,840 mana. For PWS+5xflash, HPCT is:

Inner Will: 245,921.44/6.712 = 36,639 for 37,248 mana
Inner Fire: 36,639*1.053 = 38,581 for 38,400 mana

For those of you who might be wondering, spells that have a base cast time consume Borrowed Time even if they are made instant through talents and glyphs. They also don't benefit from Inner Will (e.g. glyphed Holy Fire and FDCL flash).

In order for this analysis to be complete we also need to look at PoH: 13,593+5,290 (aegis) = 18,883 with aegis. I am calculating total healing with PoH, including crit and aegis, by determining the crit boost with 60% aegis and then adding 30% aegis for all non-crits:

$base*[1+(1+1.2*(1+mastery))*crit + 0.3*(1-crit)*(1+mastery)]=$
$base*[1+crit+(1+mastery)*(1.2*crit+0.3*(1-crit))]=$
$base*[1+crit+(1+mastery)*(0.9*crit+0.3)]$

13,593*(1+0.0877+(0.9*0.0877+0.3)*1.2974) = 21,468 per target.

PWS + 3 PoH = 371,233 healing (5 targets, Inner Will) or 390,908 (5 targets, Inner Fire). Total cast time: 1.5/1.0942+2*2.5/1.0942+2.5/1.2583 = 7.927 sec.

HPCT PWS+3PoH (5 targets, Inner Will) = 46,831 for 32,160 mana
HPCT PWS+3PoH (5 targets, Inner Fire) = 49,314 for 33,600 mana

Incidentally, HPM with Inner Will is 11.54 while with Inner Fire it is 11.63. So it takes 3 PoH casts per PWS to break even mana-wise, if we ignore overheal. PoH is of course an expensive spell; for cheaper spells (the cheaper the better) you need more casts to break even. However, when factoring in overheal, which eats a sizeable chunk of the HPS benefit from Inner Fire, Inner Will is pretty much certain to be better HPM.
To show that it's worth casting PWS: straight PoH spam is 46981 HPS in inner will or 49471 HPS in inner fire. Compare that with the values for PWS+3PoH; they are practically the same. Ofc with PWS+2PoH you need to be in inner will, so you lose like 5% HPS. HPM for PoH alone is 14.94 (inner fire). Once overheal is accounted for, however, PWS has no overheal under heavy raid damage, so you end up with higher HPS and massively better HPM. With one rapture proc per 2 PWS you are up to 13 HPM already in inner will. So very little overheal (15%) is needed to make PWS+3PoH better HPM. PWS + 5PoH might seem like it would be the most HPM, but it will most likely not be. The reason is that it's not guaranteed that PWS will be absorbed immediately, and just 1 PWS per 12s is likely to lose a lot of rapture procs. The HPM for PWS+4renew for comparison is 8.133.

This analysis explains why using PWS/renew/pom/holy fire and penance and ignoring SpS and heal is so effective. It's significantly higher HPM than SpS and significantly higher HPS than heal, whether you have FDCL or not. Once you get past the 1st haste break point for renew, BT renew will heal for as much as a flash heal for roughly 1/3rd the mana! For stationary deficits on 4 targets this BT renew in inner will is currently the most efficient strategy, especially if you have enough haste to get 5 ticks with borrowed time.

Inner will is better HPM and a noticeably reduced mana drain per second compared with inner fire, unless you are spamming SpS without divine insight. This is especially true if you have FDCL.

EDIT: Recalculated for 30% aegis. Last edited by Havoc12 : 05/01/12 at 2:24 PM.
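The glyph arithmetic at the top of the post above can be sketched as follows. It uses the post's rounded 33%-per-tick value for glyphed renew (base ticks are 25% each); with the exact boost of 1/3 per tick, the 4-tick case comes out exactly equal, whereas the 33% rounding here makes it 99% vs 100%.

```python
# Glyphed vs. unglyphed Renew, per the post above: the glyph removes one
# tick and each remaining tick heals for 33% of the base heal instead of 25%.
def renew_fraction(ticks, glyphed):
    if glyphed:
        return (ticks - 1) * 0.33
    return ticks * 0.25

for ticks in (4, 5, 6):
    g, u = renew_fraction(ticks, True), renew_fraction(ticks, False)
    print(f"{ticks} ticks: glyphed {g:.0%}, unglyphed {u:.0%}, gain {g/u - 1:+.1%}")
```

At 5 and 6 ticks this reproduces the post's 5.6% and 10% advantages for the glyph.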
04/30/12, 5:50 PM #62 Havoc12 King Hippo Shaarra Night Elf Priest Silvermoon (EU) Divine aegis appears to be nerfed? It is now 30% for all heals and 60% for PoH. Looks like I have to recalculate. This also changes the balance between PWS and flash quite drastically. At 8.77 crit and 29.74% mastery the crit+aegis multiplier is now 1+1.6*1.2974*0.0877 = 1.1821 Flash is 33284*1.1821*1.3 = 51 148 compared with 49213 for PWS. Last edited by Havoc12 : 05/01/12 at 5:51 AM.
04/30/12, 9:30 PM #63
Barlow
Von Kaiser
Pandaren Priest
Eredar (EU)
Originally Posted by Havoc12 This analysis explains why using PWS/renew/pom/holy fire and penance and ignoring SpS and heal is so effective. It's significantly higher HPM than SpS and significantly higher HPS than heal, whether you have FDCL or not. Once you get past the 1st haste break point for renew, BT renew will heal for as much as a flash heal for roughly 1/3rd the mana! For stationary deficits on 4 targets this BT renew in inner will is currently the most efficient strategy, especially if you have enough haste to get 5 ticks with borrowed time.
Love the analysis, love the info on how glyphed Renew works. But then again: we won't be casting BT-Renew chains, and even less BT-PI Renew chains; nor will we (likely) be stacking haste; nor does this calculation include the fact that Renew is by far the most overhealing spell in our arsenal (SpS not yet tested enough); nor apparently does it take into account that heal will profit from grace (though it does not on Beta yet). Or did I miss something?
Originally Posted by Havoc12 HPCT (inner will) = 47492.3 for 32160 mana HPCT (inner fire) = 50009.4 for 33600 mana Incidentally HPCT per mana with inner will is 11.707 while with inner fire it is 11.799. So it takes 3 PoH casts per PWS to break even mana wise, if we ignore overheal. PoH is ofc an expensive spell. For cheaper (the cheaper the better) spells you need more casts to break even. However when factoring in overheal, which pretty much eats a sizeable chunk of the HPS benefit from inner fire, Inner will is pretty much certain to be better HPM.
I don't understand these calculations. What are we supposed to be casting for 30+K mana and 40+K heal? Am I missing a digit in healing? Secondly: your calculation implies we should be a) using PW:S on CD during AoE damage and b) staying in Inner Will to do so? Did you calculate the double dipping of aegis with crit?
Last edited by Barlow : 04/30/12 at 9:37 PM.
04/30/12, 9:42 PM #64
Barlow
Von Kaiser
Pandaren Priest
Eredar (EU)
Originally Posted by Havoc12 At 8.77 crit and 29.74% mastery the crit+aegis multiplier is now 1+1.6*1.2974*0.0877 = 1.1821 Flash is 33284*1.1821 = 51 148 compared with 49213 for PWS.
Please walk me through that: 33284*1.1821 is 39345 ((ah got that one, 51148 assumes Grace))
And why is there a 1.6 in your calculation for flash heal? Totally confused.
05/01/12, 6:23 AM #65
Havoc12
King Hippo
Night Elf Priest
Silvermoon (EU)
Originally Posted by Barlow Please walk me through that: 33284*1.1821 is 39345 ((ah got that one, 51148 assumes Grace)) And what why is there a 1.6 in your calculation for flash heal. Totally confused.
Yep I forgot to add the 1.3 multiplier. I amended the post.
You are also correct in that my calculation is not quite right: I applied the mastery to the whole of the crit multiplier, not just the aegis! Let's define mastery%/100 = mastery and crit%/100 = crit.
A critical heal is 100% larger from the heal part, plus an additional 30% of the crit as aegis. That means the aegis from a crit is 60% of a normal heal, hence a crit from disc is 160% larger than a normal heal. Since we have mastery, the benefit of aegis is increased to 60%*(1+mastery%/100), so a crit from disc is 100%+60%*(1+mastery%/100) larger.
Thus a single crit multiplies the heal by 2+0.6*(1+mastery), and the average multiplier is 1+[1+0.6*(1+mastery)]*crit. For 8.77% crit and 29.74% mastery that is
1+(1+0.6*1.2974)*0.0877 = 1.1560
So the effect of crit is smaller than I calculated. And the correct flash heal value excluding overheal and including grace is:
33284*1.156*1.3 = 50019.2
That also means the values in the above post are slightly higher than they should be. I will amend them.
Originally Posted by Barlow We won't be casting BT-Renew chains ..... Renew by far is the most overhealing spell in our arsenal (SpS not yet tested enough) nor apparently takes into account that heal will profit from grace (though it does not on Beta yet) or did I miss something. I don't understand these calculations. What are we supposed to be casting for 30+K mana and 40+K heal? Am I missing a digit in healing? Secondly: Your calculation implies we should be a) using PW:S on CD during AoE damage and b) stay in Inner Will to do so? Did you calculate the double dipping of aegis with crit?
I don't see a reason to not cast a renew chain if there is an opportunity. Everytime you PWS it makes sense to renew the tanks (if there are two) and/or people who have a DoT on them or have a significant deficit. Even during aoe healing it makes sense to use renew occasionally.
HPCT = healing per cast time. It's basically the HPS and total mana cost of a PWS+3PoH sequence. My calculations show that there is effectively no difference in HPS/HPM between straight PoH spam and PWS+3PoH, and that PWS+3PoH is better than straight PoH spam if PoH has any overheal.
I did calculate the double dipping, in fact I overestimated it, because I made the same mistake of applying mastery to the heal part of the crit as well as aegis. I need to recalculate it
This is the formula I am using
base*[1+crit+(1+mastery)*(0.9*crit+0.3)]
Note that this formula can be rearranged to read:
base*[1+(1+0.9*(1+mastery))*crit+0.3*(1+mastery)]
This shows that the "double dipping" is not really a double dipping. The crit behaviour of PoH can be thought of as having a higher crit multiplier for the direct heal part of the spell, but no benefit at all from crit for the absorption part of a non-crit PoH. This is what the formula suggests. The 0.3*(1+mastery) part is the absorption part and, as you can see, it is not influenced by crit at all. The crit multiplier for the direct heal however is dependent on mastery, but it turns out that it rises very slowly. Crit never performs as well for PoH as it does for normal spells. At high levels of mastery PoH in fact performs quite poorly.
For example look at my values
Normal heals gain 15% from crit and aegis but PoH gains 21468/18883 = 1.137, just 13.7%.
Crit looks massive when you are just looking at the base heal, but that is only a Part of PoH. The automatic aegis applied by PoH must be thought of as part of the base heal, since its always applied and it does not benefit from a crit as much as the direct heal part of PoH.
The crit multiplier for a normal spell is [2+0.6*(1+mastery)], while the crit multiplier for PoH is [2+1.2*(1+mastery)]/[1+0.3*(1+mastery)].
Last edited by Havoc12 : 05/03/12 at 4:07 PM.
05/01/12, 6:32 PM #66
Barlow
Von Kaiser
Pandaren Priest
Eredar (EU)
Originally Posted by Havoc12 This shows that the "double dipping", is not really a double dipping. The crit behaviour of PoH can be thought of as having a higher crit multiplier for direct heal part of the spell but no benefit at all from crit for the absorption part of a non-crit PoH. This is what the formula suggests. The 0.3*(1+mastery) part is the absorption part and as you can see it not influenced by crit at all. The crit multiplier for the direct heal however is dependent on mastery, so there is probably a minimum level of mastery that must be reached before the return of crit for PoH is as good as it is for our other direct heals. For example look at my values Normal heals gain 15% from crit and aegis but PoH gains 21468/18883 = 1.137, just 13.7%.
I think I see where you come from. Just let me try to recap so we're really on the same page:
You're saying: Since PoH already gets an Aegis on non crits per default the relative value for "Crit" for PoH will usually be lower compared to normal heals and not higher since the Aegis double dipping does not outperform the difference between "no aegis at all" for normal heal non crits and "default aegis compared to double dipping aegis"?
Did I get that right? If yes, that's awesome information since it seems kind of counter-intuitive for PoH heavy fights and at least I for sure kind of fell for the trap "Oh well PoH double dips from crits, thus Crit must be awesome for PoH"
05/01/12, 9:05 PM #67
Havoc12
King Hippo
Night Elf Priest
Silvermoon (EU)
Originally Posted by Barlow I think I see where you come from. Just let me try to recap so we're really on the same page: You're saying: Since PoH already gets an Aegis on non crits per default the relative value for "Crit" for PoH will usually be lower compared to normal heals and not higher since the Aegis double dipping does not outperform the difference between "no aegis at all" for normal heal non crits and "default aegis compared to double dipping aegis"? Did I get that right? If yes, that's awesome information since it seems kind of counter-intuitive for PoH heavy fights and at least I for sure kind of fell for the trap "Oh well PoH double dips from crits, thus Crit must be awesome for PoH"
Yes that is correct. I had a look and it seems that there is no mastery breakpoint. Crit for PoH never performs as well as it does for other spells. In fact the higher your mastery, the less well it performs compared to our other direct heals. However it's important to realise that this just means that crit does not benefit PoH as much as it does normal spells. I haven't looked at crit in comparison to haste and mastery. I suspect crit is not going to be a great stat for PoH however, because its value for PoH is effectively static. The crit modifier increases extremely slowly for PoH.
Basically the crit modifier of PoH is
[2+1.2*(1+mastery)]/[1+0.3*(1+mastery)], while for our other spells it is [2+0.6*(1+mastery)].
So the crit modifier for PoH when taking aegis into account is lower and rises more slowly with mastery.
Last edited by Havoc12 : 05/03/12 at 4:09 PM.
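As a quick numeric check of the two crit multipliers stated in the post above (a sketch; mastery is expressed as a fraction, and 29.74% is the poster's own value):

```python
# Crit multipliers as functions of mastery, per the formulas in the post:
# direct heals: 2 + 0.6*(1+mastery)
# PoH:          2*(1+2d)/(1+d) with aegis factor d = 0.3*(1+mastery)
def crit_mult_direct(mastery):
    return 2 + 0.6 * (1 + mastery)

def crit_mult_poh(mastery):
    d = 0.3 * (1 + mastery)
    return 2 * (1 + 2 * d) / (1 + d)

m = 0.2974  # the poster's 29.74% mastery
print(round(crit_mult_direct(m), 4))  # ~2.7784
print(round(crit_mult_poh(m), 4))     # ~2.5603
```

The PoH multiplier comes out lower at every mastery level, as the post argues.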
05/02/12, 4:21 AM #68
eyogar
Glass Joe
Goblin Priest
Kazzak (EU)
Originally Posted by Havoc12 Crit for PoH never performs as well as it does for other spells.
I'm not sure I can follow you on this statement. Unless something got changed in the beta I overlooked, both crit and mastery are far more effective for Prayer of Healing than for our other spells, which is quite easy to prove:
Let v be the base healing value (that is, the amount we hit, not crit, for on average), c the crit rate, d the divine aegis factor (depending on mastery something around 0.4 for most of us) and $\bar{v}$ the average healing value including crit and divine aegis.
The average healing done by any of our direct healing spells is then
$v_{\rm hit}= v$
$v_{\rm crit}= 2v \, (1 + d)$
$\bar{v}_{\rm direct}= (1-c) \cdot v + c \cdot 2v \, (1+d) = v \, (1 + c + 2cd)$
Now for Prayer of Healing, we get one Divine Aegis bubble for the hit itself, and another if we crit - but those two with the higher (critted) base value
$v_{\rm hit}= v \, (1 + d)$
$v_{\rm crit}= 2v \, (1 + 2d)$
$\bar{v}_{\rm PoH}= (1-c) \cdot v \, (1 + d) + c \cdot 2v \, (1 + 2d) = v \, (1 + c + d + 3cd)$
Now the derivative of $\bar{v}_{\rm PoH}$ will always be higher than that of $\bar{v}_{\rm direct}$, scaled to v we get:
$\frac{\partial \bar{v}_{\rm PoH}}{v \, \partial c}= 1 + 3d$
$\frac{\partial \bar{v}_{\rm direct}}{v \, \partial c}= 1 + 2d$
As for mastery, the difference is even bigger:
$\frac{\partial \bar{v}_{\rm PoH}}{v \, \partial d}= 1 + 3c$
$\frac{\partial \bar{v}_{\rm direct}}{v \, \partial d}= 2c$
We see that for every value of d, the benefit of crit for Prayer of Healing is higher than for any direct heal, as is with mastery.
05/02/12, 7:22 AM #69
Havoc12
King Hippo
Night Elf Priest
Silvermoon (EU)
Originally Posted by eyogar $v_{\rm hit}= v$ $v_{\rm crit}= 2v \, (1 + d)$ $\bar{v}_{\rm direct}= (1-c) \cdot v + c \cdot 2v \, (1+d) = v \, (1 + c + 2cd)$ Now for Prayer of Healing, we get one Divine Aegis bubble for the hit itself, and another if we crit - but those two with the higher (critted) base value $v_{\rm hit}= v \, (1 + d)$ $v_{\rm crit}= 2v \, (1 + 2d)$
It does not seem to me that you have proved what you set out to. I will discuss what I think you have done a little later.
My analysis is certainly correct. In fact the final formula you posted for PoH, which is equivalent to mine, behaves exactly the way I predicted. The easiest way for you to see it is to plot it vs crit and you will see exactly what happens.
For now let us look at the crit behaviour of PoH by focusing exclusively on the above 4 equations as they tell us all we need to know.
For a direct heal the crit multiplier is
$\frac{v_{\rm crit}}{v_{\rm hit}}= 2\,(1+D)$
For PoH the crit multiplier
$\frac{v_{\rm crit}}{v_{\rm hit}}= \frac{2\,(1+2D)}{1+D}$
By inspection you can immediately see that the crit multiplier for PoH is automatically lower(!!) than for a direct heal. There exists no value of mastery for which the crit multiplier for a direct heal is smaller than the crit multiplier for PoH.
To help visualise it lets say that D is 40%
For a direct heal the crit multiplier is
$\frac{v_{\rm crit}}{v_{\rm hit}}= 2.8$
For PoH the crit multiplier
$\frac{v_{\rm crit}}{v_{\rm hit}}= 2.571428571$
Thus a PoH crit is 2.5714 times larger than a PoH hit, while a direct heal crit is 2.8 times larger (!!). I am sure you will agree that having a lower crit multiplier means that PoH benefits less from crit.
Just in case you don't, here is the proof. If our amount of crit is C, the average benefit to PoH will be (1+1.5714*C), while the benefit to direct heals will be (1+1.8*C). You can clearly see that the benefit of crit for PoH is simply not as high.
To demonstrate mathematically how mastery and crit interact for PoH. Lets look at the limit of the crit multiplier as D goes to infinity.
For direct heals there is no limit, while for PoH it is obvious that the limit is 4. At 2.57 we are already quite close to the asymptotic limit so as mastery increases the crit multiplier for PoH is going to increase quite slowly compared to that of a direct heal, which will increase quite fast.
I confirmed this in game. Just check the values I posted for PoH and flash heal or feel free to recalculate them yourself from the base healing values I provided. You will see that the crit multiplier for PoH is lower exactly as I predict.
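The limit argument above is easy to check numerically: the PoH crit multiplier 2*(1+2D)/(1+D) approaches 4 as the aegis factor D grows, while the direct-heal multiplier 2*(1+D) is unbounded.

```python
# PoH crit multiplier vs. direct-heal crit multiplier as the aegis
# factor D increases; PoH's is bounded above by 4.
def poh_mult(D):
    return 2 * (1 + 2 * D) / (1 + D)

def direct_mult(D):
    return 2 * (1 + D)

for D in (0.4, 1.0, 10.0, 1000.0):
    print(D, round(poh_mult(D), 4), round(direct_mult(D), 4))
```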
So we can clearly demonstrate that your result is wrong. The question is why it is wrong. Looking at the rest of your equations I can immediately see the problem. It lies in the way you have normalised the rate of increase. To explain:
$\frac{\partial \bar{v}_{\rm PoH}}{v \, \partial c}= 1 + 3d$ $\frac{\partial \bar{v}_{\rm direct}}{v \, \partial c}= 1 + 2d$
$\frac{\partial \bar{v}_{\rm PoH}}{\partial c}$ is the absolute rate of increase. This tells us roughly the absolute amount of healing added by an increase in crit for a given level of mastery, which is not very useful, because (a) it directly depends on the amount healed by the spell and (b) it does not tell us the % increase. To find the relative rate of increase, i.e. the rate of the % increase in healing amount, we need to normalise by the non-crit heal amount. You have correctly attempted this normalisation in the two formulas I quote above. However you have chosen the wrong thing to normalise to. 1/v is the normalisation factor for a direct heal, but not for PoH. The correct normalisation factor for PoH is 1/[v*(1+d)].
Let us now correctly calculate the relative rate of increase with crit for PoH and direct heals:
$\frac{\partial \bar{v}_{\rm PoH}}{v\,(1+d) \, \partial c}= \frac{1 + 3d}{1+d}$
$\frac{\partial \bar{v}_{\rm direct}}{v \, \partial c}= 1 + 2d$
We can clearly see that the relative increase in PoH with crit is always smaller than it is for direct heals. So the answer is no PoH does not double dip with crit. It always benefits PoH less than it does direct heals and the more mastery you have the bigger the difference(!!).
As for mastery, the difference is even bigger: $\frac{\partial \bar{v}_{\rm PoH}}{v \, \partial d}= 1 + 3c$ $\frac{\partial \bar{v}_{\rm direct}}{v \, \partial d}= 2c$ We see that for every value of d, the benefit of crit for Prayer of Healing is higher than for any direct heal, as is with mastery.
I am afraid this is also incorrect for the same reason. When you do it correctly you find that mastery has diminishing returns for PoH, but it does not have diminishing returns for direct heals.
This analysis does not tell us whether crit or mastery is good for PoH compared to haste. That is something completely different. However it does tell us how these two stats behave. For the allowed values of mastery, PoH pretty much has a fixed crit multiplier (ranging from 2.3333 to roughly 2.6), so the value of crit does not benefit from mastery. Mastery itself clearly has diminishing returns, so even if it does turn out to be better than haste (I don't think so), it is highly likely that at a certain level it will drop below it.
However all this ignores a critical factor: mastery and crit increase HPM, while haste doesn't. Also mastery largely ignores overhealing, while crit and haste don't. The jury is still out on whether mastery and crit are good or bad stats for PoH. I suspect that mastery is good, while crit is bad. However two things are absolutely clear: (1) PoH does not double dip from crit and gains very little benefit from mastery. (2) Mastery has diminishing returns, but has a better interaction with crit than normal spells.
Last edited by Havoc12 : 05/02/12 at 7:33 AM.
05/02/12, 8:26 AM #70
eyogar
Glass Joe
Goblin Priest
Kazzak (EU)
Originally Posted by Havoc12 Thus a PoH crit is 2.5714 times larger than a PoH hit, while a direct heal crit is 2.8 times larger (!!)
Ah yes, I understand your claim now. The relative increase of a crit over a normal hit is smaller for Prayer of Healing than the other spells; and limited to 4. My calculations were never meant to show otherwise.
Let me rephrase my original remark: crit and mastery as a stat do increase Prayer of Healing's healing value (in absolute terms, stat for stat) stronger than for our direct healing spells. This holds true simply because the actual healing value is a smaller fraction of the total value we get from this spell.
My point of view is rooted in the fact that I modeled Prayer of Healing explicitly for its real world effect (in healing done) with secondary stats (excluding haste), therefore the relative increase of crits over hits and its behaviour was of no importance.
05/02/12, 8:48 AM #71 Barlow Von Kaiser Bärlow Pandaren Priest Eredar (EU)

FYI some info on Spirit Shell changes:
- It now stacks @60% Priest HP (up from 40%)
- It will now apply Grace
05/02/12, 9:17 AM #72
Havoc12
King Hippo
Night Elf Priest
Silvermoon (EU)
Originally Posted by eyogar Ah yes, I understand your claim now. The relative increase of a crit over a normal hit is smaller for Prayer of Healing than the other spells; and limited to 4. My calculations were never meant to show otherwise. Let me rephrase my original remark: crit and mastery as a stat do increase Prayer of Healing's healing value (in absolute terms, stat for stat) stronger than for our direct healing spells. This holds true simply because the actual healing value is a smaller fraction of the total value we get from this spell. My point of view rooted in the fact that I modeled Prayer of Healing explicitly for its real world effect (in healing done) with secondary stats (excluding haste), therefore the relative increase of crits over hits and its behaviour was of no importance.
I am afraid that is also wrong. I urge you to carefully read my whole post, rather than just that bit. You did not calculate the absolute rate of increase. What you attempted to calculate was the relative rate of increase, but unfortunately you made a mistake and the formulas you came up with are completely wrong. I am sorry to say that no conclusions whatsoever can be drawn from your analysis.
If you want to look at the absolute rate of increase in the healing produced by PoH with respect to crit:
PoH: Base_{PoH-no aegis}*(1 + 3d)
Direct: Base_{direct}*(1 + 2d)
This is a kind of meaningless value. The absolute increase is not really that important. The relative increase (what you tried to calculate) is what matters.
To show you why, lets say that you have a spell that heals for 1000 000 and adding X% crit you get an absolute increase in healing of 100. Compare that with a spell that heals for 100 and adding x% crit, it produces an absolute increase of 50. What you are trying to say is because the increase for the first spell is 100 compared to 50 for the second spell, in real world terms the first spell benefits more from crit.
That however is completely wrong. For the 1st spell, if you cast the spell 10 times you will get 10,000,000 healing. Adding x% crit will increase that healing to 10,001,000, which is effectively no different than what you had without that crit, and all the stat budget points you spent on it were utterly wasted.
For the second spell casting it 10 times nets you 1000 healing, but when you add x% crit you get 1500. In contrast to the 1st spell that x% crit increased your healing massively.
In the same way if we look at absolute rate of increase for say 50% mastery with my value of spell power.
Rate of increase with crit {PoH} = 13523*5*(1 + 1.5) = 169037.5
Lets do the same for flash heal
Rate of increase with crit {flash} = 33284*2 = 66568
The base healing of flash is 33284, while the base healing for PoH is 13523*5*(1+0.3*1.5) = 98041.75
That means a 1% increase in crit rate raises flash from 33284 to 33949.68 (i.e. +665.68), which is a 2% increase.
In contrast a 1% increase in crit rate raises PoH from 98041.75 to 99732.125 (i.e. + 1690.375) which is a 1.7% increase. I.e. if you are spamming PoH and you take 1% crit you will see a 1.7% increase in healing. If you are spamming flash on the other hand you will see a 2% increase in healing with crit. In other words flash heal benefits from crit 18% more than PoH does.
In other words taking 1% crit has a much higher value for spaming a direct heal like flash, compared to PoH. This is because the absolute rate of change is based entirely on a higher crit multiplier for the direct heal part of PoH and it ignores the absorption part of PoH, which is quite frankly huge. The difference becomes higher and higher as mastery gets larger, because aegis becomes a bigger and bigger part of PoH.
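The +1% crit comparison above can be reproduced numerically. The spell bases and aegis figures below are taken directly from the post (they are the poster's illustrative values for "50% mastery", not general constants):

```python
# Relative healing gain from +1% crit, reproducing the post above.
flash_base = 33284.0                 # flash heal base
flash_slope = 33284 * 2              # direct-heal slope: Base*(1+2d), d = 0.5
poh_base = 13523 * 5 * 1.45          # PoH base including its automatic aegis
poh_slope = 13523 * 5 * 2.5          # PoH slope: Base*(1+3d), d = 0.5

flash_rel = 0.01 * flash_slope / flash_base  # relative gain per 1% crit
poh_rel = 0.01 * poh_slope / poh_base

print(round(flash_rel, 4), round(poh_rel, 4))  # ~0.02 vs ~0.0172
```

This reproduces the post's conclusion: roughly a 2% relative gain for flash versus about 1.7% for PoH per point of crit.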
Last edited by Havoc12 : 05/02/12 at 9:27 AM.
05/02/12, 11:11 AM #73
eyogar
Glass Joe
Goblin Priest
Kazzak (EU)
Originally Posted by Havoc12 [...]
Yes, I do not deny any of these points made, nor did I ignore the majority of your post.
It is indeed true that my calculations are not suited to compare our different spells on how they are increased by crits. As I already said, my original formalism was used for Prayer of Healing exclusively, and I never examined the proper interaction with other spells. Using v to normalize the derivatives was used to cancel out any talents or other effects, and is not suited per se for anything more. I did not make the point I intended to.
That being said, the fact that Prayer of Healing crits are a lower relative increase (and absolute, if counting v (1+d) as the 'hit') than for our direct healing spells seems to me a purely academic endeavour. Otherwise one could think you argue that Prayer of Healing is a weaker spell because it scales weaker than the rest due to its baseline Divine Aegis. While the scaling statement is mathematically true, it is a dead end gameplay-wise:
Consider Prayer of Healing works like in Wrath of the Lich King, creating a Divine Aegis bubble of size d*2v on every crit. We'd have equal scaling behaviour.
Now add a constant bubble d*v (not d*2v) on every hit/crit (kinda the way it was in the first Cataclysm days). The relative increase by Prayer of Healing just got weaker for every %crit, yet would you say the spell scales weaker in general? This very effect gets even bigger by its actual behaviour with 2*d*2v bubbles.
In other words: The notion of this "weaker" scaling is due to some additional effect we have on non-crits. One could argue of course that because of this mechanic we are at a disadvantage to Holy or other classes not using this effects to achieve their proper output, but that's another round of number crunching. And on this one, one should not underestimate the fundamental benefits of absorbs (as well as its downsides of course), which we completely ignore in our current calculations.
05/03/12, 1:44 AM #74
Havoc12
King Hippo
Night Elf Priest
Silvermoon (EU)
Originally Posted by eyogar In other words: The notion of this "weaker" scaling is due to some additional effect we have on non-crits. One could argue of course that because of this mechanic we are at a disadvantage to Holy or other classes not using this effects to achieve their proper output, but that's another round of number crunching. And on this one, one should not underestimate the fundamental benefits of absorbs (as well as its downsides of course), which we completely ignore in our current calculations.
Holy PoH is significantly bigger than disc PoH for sure, and now that aegis is nerfed to 30% the difference is even bigger. Even worse, I am not sure we will be able to spam PoH and build large absorption stacks at the beginning of the expansion. With spirit shell the way it is right now we are kinda pigeonholed into tank healing.
05/03/12, 8:17 AM #75
Barlow
Von Kaiser
Pandaren Priest
Eredar (EU)
from mmo-champion:
- Desperate Prayer can now be cast in Shadowform.
- Divine Insight Shadow effect has been changed - Periodic damage from your Mind Flay refreshes the duration of your Shadow Word: Pain on the target.
- From Darkness, Comes Light: Surge of Light now has a 15% chance to proc. Surge of Darkness changed - When your Shadow Word: Pain deals damage, there is a 15% chance your next Shadow Word: Death will treat the target as if it were below 20% health.

Shadow
- Mind Surge (NNF) now has a 10% chance to proc.
- Shadow Orbs - New - Generated by Mind Blast and Shadowy Apparitions. Used to cast Devouring Plague and empower Psychic Horror.
- Shadowy Apparitions now has a 20% chance to summon a shadow. Now deals (615 + 60.0% of SP) shadow damage and grants you a Shadow Orb. You can now have up to 3 Shadowy Apparitions active, down from 15.
- Vampiric Touch now grants 2% of maximum mana, down from 3%.

Major Glyphs
- Glyph of Dark Binding now affects Prayer of Mending, Renew, and Leap of Faith instead of Binding Heal, Flash Heal, and Renew.
- Glyph of Psychic Scream now also affects your Psyfiend's Psychic Terror.
- Glyph of Vampiric Touch is now Glyph of Devouring Plague and affects Devouring Plague.
Elitist Jerks MoP beta discussion
# Example of an injective module
I can't find an example of a countable injective module over a non-Noetherian ring.
-
And, presumably, you would like to know one? (I suspect you weren't just posting a status update... have you considered actually asking the question, then?) – Arturo Magidin Feb 6 '11 at 22:11
In other words: This is not twitter :) – Mariano Suárez-Alvarez Feb 6 '11 at 22:45
Yeah, I would like to know one – Wesley Farrel Feb 6 '11 at 22:46
Then perhaps you might consider editing your question so as to make it a question, not a status update. – Arturo Magidin Feb 7 '11 at 0:45
Another type of example: Let A be any non-Noetherian ring, let B be the field of 2 elements, and let C be the field of rational numbers. Then R = A×B×C is a non-Noetherian ring, its module 0×B×0 is injective and finite, and its module 0×0×C is injective and countably infinite. Injective modules are at least somewhat local ideas, so even if the ring is bad somewhere, it doesn't mean all the injectives are bad.
-
Let $R$ be a countable non-Noetherian integral domain, e.g. a polynomial ring in countably infinitely many indeterminates over $\mathbb{Z}$ (or, more appealingly to a number theorist, the ring $\overline{\mathbb{Z}}$ of all algebraic integers).
Then the fraction field $K$ of $R$ is a countable, injective $R$-module. In fact it is the injective hull of the $R$-module $R$: see this wikipedia article on injective hulls.
In general, I confess that I have not thought as much as I should on injective hulls -- I haven't even gone through the proof that they always exist! -- but I would have to think that if $M$ is any infinite $R$-module, then its injective hull has the same cardinality as $M$. This would give many examples.
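A sketch of the standard argument for why the fraction field of a domain is injective, via Baer's criterion — this is not part of the original answer, just the usual proof for context:

```latex
% Baer's criterion: an R-module M is injective iff every R-linear map
% from an ideal of R into M extends to all of R.
% Sketch for M = K = \operatorname{Frac}(R), R an integral domain:
\text{Let } f\colon I \to K \text{ be } R\text{-linear, with } I \neq 0 \text{ an ideal.}\\
\text{For nonzero } a, b \in I:\quad b\,f(a) = f(ab) = a\,f(b)
\;\Longrightarrow\; \frac{f(a)}{a} = \frac{f(b)}{b} =: c \in K.\\
\text{Then } x \mapsto c\,x \text{ extends } f \text{ to all of } R,
\text{ so } K \text{ is injective by Baer's criterion.}
```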
Triangles each have three heights, each related to a separate base. Despite having up to three different heights, a triangle will always have only one measure of area. In some triangles, like right triangles, isosceles and equilateral triangles, finding the height is easy with one of two methods.
## How to Find the Height of a Triangle
Every triangle has three heights, or altitudes, because every triangle has three sides. A triangle's height is the length of a perpendicular line segment originating on a side and intersecting the opposite angle.
In an equilateral triangle, like $△SUN$ below, each height is the line segment that splits a side in half and is also an angle bisector of the opposite angle. That will only happen in an equilateral triangle.
By definition of an equilateral triangle, you already know all three sides are congruent and all three angles are $60°$. If a side is labelled, you know its length.
Our bright little $△SUN$ has one side labelled , so all three sides are . Each line segment showing the height from each side also divides the equilateral triangle into two right triangles.
## Height of a triangle formula
Your ability to divide a triangle into right triangles, or recognize an existing right triangle, is your key to finding the measure of height for the original triangle. You can take any side of our splendid $△SUN$ and see that the line segment showing its height bisects the side, so each short leg of the newly created right triangle is . We already know the hypotenuse is .
Knowing all three angles and two sides of a right triangle, what is the length of the third side? This is a job for the Pythagorean Theorem:
### Using Pythagorean Theorem
Focus on the lengths; angles are unimportant in the Pythagorean Theorem. Plug in what you know:
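With the full side $24$ as the hypotenuse and the half-side $12$ as one leg (these lengths match the $20.78$ result below):

$$a^{2} + b^{2} = c^{2} \quad\Longrightarrow\quad 12^{2} + b^{2} = 24^{2} \quad\Longrightarrow\quad b = \sqrt{576 - 144} = \sqrt{432} \approx 20.78$$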
Most people would be happy to say the height (side $b$) is approximately $20.78$, or $12\sqrt{3}$ in exact form.
You can decide for yourself how many significant digits your answer needs, since the decimal will go on and on. Do not forget to use linear measurements for your answer!
The Pythagorean Theorem solution works on right triangles, isosceles triangles, and equilateral triangles. It will not work on scalene triangles!
## Using the area formula to find height
The formula for the area of a triangle is $A = \frac{1}{2}bh$, or half the base times the height. If you know the area and the length of a base, then, you can calculate the height.
In contrast to the Pythagorean Theorem method, if you have two of the three parts, you can find the height for any triangle!
Here we have scalene $△ZIG$ with a known base and a known area, but no clues about angles and the other two sides!
Recalling the formula for area, where $A$ means area, $b$ is the base and $h$ is the height, we remember $A = \frac{1}{2}bh$.
This can be rearranged using algebra to solve for height: $h = \frac{2A}{b}$.
Then put in your known values for the area and the base.
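As a hypothetical example (both numbers invented for illustration), a triangle with an area of $24$ square units and a base of $8$ units has height

$$h = \frac{2A}{b} = \frac{2 \times 24}{8} = 6 \text{ units.}$$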
Remember how we said every triangle has three heights? If we take $△ZIG$ and rotate it clockwise so side $GZ$ is horizontal, and construct a height up to $\angle I$, we can get the height for that side, too.
## What you'll learn:
After working your way through this lesson and video, you will be able to:
• Identify and define the height of a triangle
• Recall and apply two different methods for calculating the height of a triangle, using either the Pythagorean Theorem or the area formula
Instructor: Malcolm M.
Malcolm has a Master's Degree in education and holds four teaching certificates. He has been a public school teacher for 27 years, including 15 years as a mathematics teacher.
|
{}
|
# Circular Motion, Gravity, Muddy Wheel
1. Feb 12, 2013
### porschedude
1. The problem statement, all variables and given/known data
A car is moving with constant velocity v, and has wheels of radius R. The car drives over
a clump of mud and the mud with mass m, and sticks to the wheel with an adhesive force of
f perpendicular to the surface of wheel. At what angle (theta) does the piece of mud drop off
the wheel?
Note that theta is the measure of the central angle of the circle.
2. Relevant equations
Fc=mv^2/R
3. The attempt at a solution
I am legitimately stumped on this problem, aside from my qualms with the question (the mud wouldn't drop off, it would fly off), this is what I have so far. I drew a free body diagram of the piece of mud. One force vector is pointing directly down. The other force vector (f) pointing toward the center of the wheel. Resolving into components:
ƩFx = fsinθ
ƩFy = fcosθ - mg
Thus, when recombining these components to determine the net force vector, I get
F = √((ƩFx)² + (ƩFy)²) in the direction of the center of the wheel, thus F = Fc = mv²/R
thus, after some algebra you can solve for theta. However, this yields only periodic values of theta that allow the force vector F to be equal to centripetal force, meaning, only at these periodic values of theta is the mud adhering to the wheel? I'm really lost, any help is much appreciated.
Last edited: Feb 12, 2013
2. Feb 12, 2013
### tms
How can the resultant be towards the center of the wheel? The two components of $f$ by themselves point to the center. Add gravity in another direction, and the sum can't point to the center.
3. Feb 12, 2013
### porschedude
Ah, nice catch. Ok, so how would you recommend I determine the radial component of the net force?
4. Feb 12, 2013
### porschedude
Nevermind, I resolved that issue. Now I'm getting that forces in the radial direction are equal to f-mgcosθ=Fc
However, if I set this equal to mv²/R, won't that still be solving for only one value of θ? Aren't I looking for a θ at which the mud falls off (an inequality)?
5. Feb 13, 2013
### tms
You're actually looking for a ≥ (or ≤ depending on how you set things up), so finding the = is good enough. That is, you're looking for the first point at which the clump falls off.
6. Feb 13, 2013
### haruspex
That assumes the force is initially less. I have a couple of concerns with this problem.
In general, the adhesive force cannot be purely radial. The resultant has to be radial, and the force of gravity is not, so the adhesion must supply a tangential force.
The adhesive force will be tested most when gravity opposes it. For the question to make sense, the mud should first appear high up on the wheel. The only reason a lot of mud can be thrown up in practice is that wet mud adheres very well to begin with but loses its grip as the mud deforms.
7. Feb 13, 2013
### porschedude
Thanks guys, I really appreciate the help. And at this point I'm resigned to say that this problem does not make sense. Given that the radius, velocity, and mass of the mud are constant, the centripetal force (mv^2/R) will be constant. Given that all we're told about the adhesive force of the mud is that it's perpendicular to the wheel, I don't see how the problem can be solved. That is to say, if the sum of the forces acting toward the center of the wheel is f - mgcos(theta), then only 2 values of theta will make that sum equal to mv^2/R; at every other theta it won't be.
8. Feb 13, 2013
### tms
I think the problem boils down to finding the point at which a tangential component to the adhesive force is necessary to keep the clump on the wheel. It is possible to get an answer that seems to make sense in that light.
9. Feb 13, 2013
### tms
In addition to the radial forces, look at the vertical forces; you can make a substitution that will help give a plausible answer. And remember that it doesn't matter if the radially inward force is greater than the outward force; that will just make the clump stick to the wheel.
10. Feb 13, 2013
### haruspex
Same result. That will happen the moment the clump leaves road level. At that point, the radial force required for adhesion is at its maximum: countering gravity + providing centripetal. An instant later, the gravitational force acquires a tangential component.
11. Feb 13, 2013
### tms
That was my first thought, but then I thought I must be wrong. Now ...
12. Feb 13, 2013
### haruspex
My guess is that the problem setter solved for equality of radial force and didn't stop to think whether it was increasing or decreasing.
13. Feb 15, 2013
### tms
I had a thought about this problem. If there is an implied frictional force between the mud and the tire that is large enough to prevent any movement in the tangential direction under any circumstances. That would allow the clump to stick until the vertical component of the adhesive force becomes less than gravity. Perhaps that is what the problem setter had in mind.
14. Feb 15, 2013
### haruspex
That just says to resolve in the radial direction, because there's an unknown and unlimited force in the tangential direction. So we have that the mud sticks as long as mg cos θ + mrω² < limit, where the wheel has rotated an angle θ since acquiring the mud. But mg cos θ is a maximum at θ = 0, so we're back to the same problem. Have I misunderstood your suggestion?
15. Feb 16, 2013
### tms
No, I'm the one doing the misunderstanding. I had thought that I had gotten the answer being asked for, but looking more closely it makes no sense.
It also seems that if the adhesive force is strong enough it will stick forever, even allowing tangential slipping.
|
{}
|
# How can diffraction happen in the Hubble Telescope?
I've seen people talking about the angular resolution of the HST is, if using Rayleigh's criterion, equal to: $$\theta = 1.220 \frac{\lambda}{D}$$ My question is, since the diameter of the HST $D$ (2.4m) is wayyy larger than the wavelength $\lambda$, how on earth would diffraction happen? Wouldn't the light just go through without any effects? Thanks.
• Indeed, the effects are almost negligible. Calculating $\theta$ yields a very small number. – kristjan Feb 16 '15 at 21:59
• Hi @kristjan, I think the smaller $\theta$ is, the better, as it stands for the ability of the telescope to differentiate objects from each other. – Lampard Feb 17 '15 at 9:37
If we push to galaxies at 25 million ly, the resolution drops to 6 ly and we can't resolve separate stars. That limits us in identifying specific single stars that go supernova. [EDIT here] If we're watching UV or gamma, it's better because of the shorter wavelengths (smaller $\theta$ means better resolution), and supernovas have very interesting UV and gamma profiles. It's nice to have stellar spectra both before and after the supernova, but if we can't resolve the star it hurts the overall wavelength range analysis. [End edit]
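Plugging in numbers makes the scales concrete (the 550 nm wavelength below is an assumed visible-light value; the 25 million ly distance is the figure quoted above):

```python
wavelength = 550e-9   # assumed visible-light wavelength, in meters
diameter = 2.4        # HST primary mirror diameter, in meters

# Rayleigh criterion: smallest resolvable angle, in radians
theta = 1.220 * wavelength / diameter   # ~2.8e-7 rad

# At distance d the smallest resolvable separation is theta * d;
# with both expressed in light-years the units cancel
distance_ly = 25e6
resolution_ly = theta * distance_ly     # ~7 light-years
```

So even a tiny diffraction angle, multiplied by an astronomical distance, blurs together anything closer than a few light-years.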
|
{}
|
# Taylor series in order to find the approximate antiderivative of a function
Somewhat inspired by this question about antiderivatives, I started to check whether or not that function had an elementary antiderivative. Then, after checking with Maxima, it struck me that, by expanding the $\sec(x)$ and $\tan(x)$ terms using Taylor series, I could effectively approximate the antiderivative of $\int {\sec\left(x\right)\tan\left(x\right) \over 3x + 5}\,{\rm d}x$.
Solving this, at origin 0 and with depth 8, I get the following expression.
$$-\frac{9646207\,{x}^{9}}{1181250000}+\frac{9646207\,{x}^{8}}{630000000}-\frac{14069\,{x}^{7}}{875000}+\frac{14069\,{x}^{6}}{450000}-\frac{179\,{x}^{5}}{6250}+\frac{179\,{x}^{4}}{3000}-\frac{{x}^{3}}{25}+\frac{{x}^{2}}{10}$$
However, back when I took Calculus, I don't remember ever using a Taylor series to find an antiderivative.
Besides the resulting antiderivative being an approximation that degrades the further away the function is from the Taylor origin (as a Taylor series has a sort of implied error), what other faults or errors might happen should one use this technique?
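The quoted polynomial can be sanity-checked numerically without Maxima: evaluate it and compare against a direct numerical integration of the original integrand. A sketch using composite Simpson's rule (the step count is an arbitrary choice):

```python
import math

def integrand(t):
    # sec(t) * tan(t) / (3t + 5), written with sin and cos
    return math.sin(t) / (math.cos(t) ** 2 * (3 * t + 5))

def taylor_antideriv(x):
    # The order-9 polynomial quoted above (Taylor origin 0, depth 8)
    return (-9646207 * x**9 / 1181250000 + 9646207 * x**8 / 630000000
            - 14069 * x**7 / 875000 + 14069 * x**6 / 450000
            - 179 * x**5 / 6250 + 179 * x**4 / 3000
            - x**3 / 25 + x**2 / 10)

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3
```

Near the origin the polynomial tracks the true antiderivative (with the constant chosen so it vanishes at 0) to within about $10^{-6}$; push $x$ further from 0 and the agreement degrades, which is exactly the fault discussed above.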
In a typical second-semester calculus course in the USA, integration using Taylor series appears at the end; it's sort of a pinnacle of the course, tying in integrals and infinite series. – Post No Bulls Dec 31 '13 at 6:42
Yeah, that was a fun one... I still haven't figured it out. Guess my next foray is going to be Taylor Series. – Chris Feb 4 '14 at 19:18
There is nothing wrong with using Taylor series for an antiderivative. In fact, the very first breakthrough in the computation of $\pi$ came when Gregory used your idea to get the antiderivative for $\arctan$. Gauss did the same for $\arccos$. You are in good company!
I have to say I have yet to find anything about Gauss and $\arccos$. This being said, a man by the name of Gregory Chudnovsky, together with his brother, developed a fast algorithm to calculate $\pi$. I'll keep this for a day or two; if no one gives another answer, I'll accept this. – Doktoro Reichard Dec 28 '13 at 23:03
|
{}
|
# Tidying TLEs in R
I’ve been working with the Union of Concerned Scientists’ data on active satellites for a while now, and decided it was time to add Space-Track’s debris data to it. The UCS data is nice to work with because it’s already tidy: one row per observation, one column per variable. One format for debris data is Two-Line Elements (TLEs) (Space-Track description).
TLEs are apparently great for orbital propagation and stuff, I don’t know, I’m not an aerospace engineer. I’ve seen some stuff about working with TLEs for propagators in MATLAB or Python, but nothing about (a) working with them in R or (b) tidying a collection of TLEs in any program. This post documents a solution I hacked together, mixing an ugly bit of base R with a neat bit of tidyverse. The tidyverse code was adapted from a post by Matthew Lincoln about tidying a crazy single-column table with readr, dplyr, and tidyr. Since I’m reading the data in with read.csv() from base, I’m not using readr.
This process also isn’t necessary. Space-Track makes json formatted data available, and read_json() from jsonlite handles those nicely. This is a “doing things for the sake of it” kind of post.
So. Supposing you’ve gotten your TLEs downloaded into a nice text file, the first step is to read it into R. The TLEs I’m interested in are for currently-tracked objects in LEO, which comes out to a file with 8067 rows and 1 column (API query, at least as of when I wrote this).
library(dplyr)
library(tidyr)
library(stringr) # for str_trim(), used in the mutate_at() calls below
options(stringsAsFactors=FALSE) # it's more convenient to work with character strings rather than factors
leo_3le <- read.csv("leo_3le.txt", header=FALSE) # file name is illustrative; point this at your downloaded TLE text file
This is entirely a statement about me and not the format: TLEs look weird as hell.
> dim(leo_3le)
[1] 8067 1
V1
1 0 VANGUARD 2
2 1 00011U 59001A 18194.13149990 .00000063 00000-0 13264-4 0 9995
3 2 00011 32.8728 183.1765 1468204 230.1854 116.0167 11.85536077532985
4 0 VANGUARD 3
5 1 20U 59007A 18193.39885059 +.00000024 +00000-0 +12986-4 0 9998
6 2 20 033.3397 177.1390 1667296 121.6835 255.7164 11.55589562148902
Lines 1:3 represent a single object. Lines 4:6 represent another object. It’s an annoying format for the things I want to do.
Ok, now the ugly hack stuff: I’m going to select every third row using vectors with ones in appropriate spots, relabel zeros to NAs, drop the NAs, then recombine them all into a single dataframe.
# the ugly hack: rearranging the rows with pseudo matrix math. first, I select the indices for pieces that are line 0 (names), line 1 (a set of parameters), and line 2 (another set of parameters)
rownums <- as.numeric(row.names(leo_3le)) # make sure the row numbers are numeric and not characters - probably unnecessary
tle_names_idx <- rownums*rep(c(1,0,0),length.out=dim(leo_3le)[1])
tle_1line_idx <- rownums*rep(c(0,1,0),length.out=dim(leo_3le)[1])
tle_2line_idx <- rownums*rep(c(0,0,1),length.out=dim(leo_3le)[1])
# rename the zeroed observations to NA so they're easy to drop
tle_names_idx[tle_names_idx==0] <- NA
tle_1line_idx[tle_1line_idx==0] <- NA
tle_2line_idx[tle_2line_idx==0] <- NA
# now drop the NAs
tle_names_idx <- tle_names_idx[!is.na(tle_names_idx)]
tle_1line_idx <- tle_1line_idx[!is.na(tle_1line_idx)]
tle_2line_idx <- tle_2line_idx[!is.na(tle_2line_idx)]
# recombine everything into a dataframe
leo_3le_dfrm <- data.frame(sat.name = leo_3le[tle_names_idx,1],
line1 = leo_3le[tle_1line_idx,1],
line2 = leo_3le[tle_2line_idx,1])
This leaves me with a 2689-row 3-column dataframe. The first column has the satellite name (line 0 of the TLE), the second column has the first set of parameters (line 1 of the TLE), and the third column has the second set of parameters (line 2 of the TLE). There’s probably a way to do this in tidyverse.
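For what it's worth, the every-third-row reshape is also compact in Python/pandas (a sketch outside this post's R workflow, using two objects' worth of the 3LE text from above):

```python
import pandas as pd

# Two objects' worth of 3LE text, hand-copied from the output above
lines = [
    "0 VANGUARD 2",
    "1 00011U 59001A   18194.13149990  .00000063  00000-0  13264-4 0  9995",
    "2 00011  32.8728 183.1765 1468204 230.1854 116.0167 11.85536077532985",
    "0 VANGUARD 3",
    "1 20U 59007A   18193.39885059 +.00000024 +00000-0 +12986-4 0  9998",
    "2 20  033.3397 177.1390 1667296 121.6835 255.7164 11.55589562148902",
]

# Every third line, starting at offsets 0, 1, and 2, gives the three columns
tles = pd.DataFrame({
    "sat_name": lines[0::3],
    "line1": lines[1::3],
    "line2": lines[2::3],
})
```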
> dim(leo_3le_dfrm)
[1] 2689 3
sat.name
1 0 VANGUARD 2
2 0 VANGUARD 3
3 0 EXPLORER 7
4 0 TIROS 1
5 0 TRANSIT 2A
line1
1 1 00011U 59001A 18194.13149990 .00000063 00000-0 13264-4 0 9995
2 1 20U 59007A 18193.39885059 +.00000024 +00000-0 +12986-4 0 9998
3 1 00022U 59009A 18193.85420323 .00000017 00000-0 25009-4 0 9994
4 1 00029U 60002B 18194.55070431 -.00000135 00000-0 10965-4 0 9992
5 1 00045U 60007A 18194.14978757 -.00000039 00000-0 17108-4 0 9997
6 1 00046U 60007B 18194.29883214 -.00000041 00000-0 15170-4 0 9997
line2
1 2 00011 32.8728 183.1765 1468204 230.1854 116.0167 11.85536077532985
2 2 20 033.3397 177.1390 1667296 121.6835 255.7164 11.55589562148902
3 2 00022 50.2826 171.3798 0140753 83.9529 277.7437 14.94580679119776
4 2 00029 48.3805 35.0120 0023682 97.0974 263.2631 14.74254161114643
5 2 00045 66.6952 79.4545 0248473 109.6184 253.1917 14.33604371 21605
6 2 00046 66.6897 151.6848 0217505 295.8620 62.0202 14.49157393 36927
The tidyverse functions are the prettiest part of this. I create new objects to hold the modified vectors (just a personal tic), then run two pipes to do the cleaning. The first pipe splits the strings at the appropriate character numbers. Why not just whitespace, you ask? Apparently there can be whitespaces in some of the orbital elements ¯\_(ツ)_/¯ (Element Set Epoch, columns 19-32 of line 1). The second trims leading and trailing whitespace. Finally, I recombine everything into a dataframe. There are tidy ways to do this, but I like using base R for this.
# make separate objects for the first and second line elements
line1_col <- data_frame(text = leo_3le_dfrm[,2])
line2_col <- data_frame(text = leo_3le_dfrm[,3])
# the beautiful tidying: split the dataframe where there are variables and trim whitespace
leo_3le_dfrm_line1 <- line1_col %>%
# split the strings
separate(text, into=c("line.num1","catalog.number","elset.class","intl.des","epoch","mean.motion.deriv.1","mean.motion.deriv.2","b.drag","elset.type","elset.num","checksum"), sep=c(1,7,8,17,32,43,52,61,63,68)) %>%
# trim whitespace
mutate_at(.funs=str_trim, .vars=vars(line.num1:checksum))
leo_3le_dfrm_line2 <- line2_col %>%
# split the strings
separate(text, into=c("line.num2","catalog.number.2","inclination","raan.deg","eccentricity","aop","mean.anomaly.deg","mean.motion","rev.num.epoch","checksum"), sep=c(1,7,16,25,33,42,51,63,68)) %>%
# trim whitespace
mutate_at(.funs=str_trim, .vars=vars(line.num2:checksum))
leo_3le_dfrm <- as.data.frame(cbind(sat.name=leo_3le_dfrm$sat.name, leo_3le_dfrm_line1, leo_3le_dfrm_line2))
The end result is a 2689-row 22-column tidy dataframe of orbital parameters which can be merged with other tidy datasets and used for all kinds of other analysis:
> dim(leo_3le_dfrm)
[1] 2689 22
> head(leo_3le_dfrm)
sat.name line.num1 catalog.number elset.class intl.des
1 0 VANGUARD 2 1 00011 U 59001A
2 0 VANGUARD 3 1 20 U 59007A
3 0 EXPLORER 7 1 00022 U 59009A
4 0 TIROS 1 1 00029 U 60002B
5 0 TRANSIT 2A 1 00045 U 60007A
6 0 SOLRAD 1 (GREB) 1 00046 U 60007B
epoch mean.motion.deriv.1 mean.motion.deriv.2 b.drag elset.type
1 18194.13149990 .00000063 00000-0 13264-4 0
2 18193.39885059 +.00000024 +00000-0 +12986-4 0
3 18193.85420323 .00000017 00000-0 25009-4 0
4 18194.55070431 -.00000135 00000-0 10965-4 0
5 18194.14978757 -.00000039 00000-0 17108-4 0
6 18194.29883214 -.00000041 00000-0 15170-4 0
elset.num checksum line.num2 catalog.number.2 inclination raan.deg
1 999 5 2 00011 32.8728 183.1765
2 999 8 2 20 033.3397 177.1390
3 999 4 2 00022 50.2826 171.3798
4 999 2 2 00029 48.3805 35.0120
5 999 7 2 00045 66.6952 79.4545
6 999 7 2 00046 66.6897 151.6848
eccentricity aop mean.anomaly.deg mean.motion rev.num.epoch checksum
1 1468204 230.1854 116.0167 11.85536077 53298 5
2 1667296 121.6835 255.7164 11.55589562 14890 2
3 0140753 83.9529 277.7437 14.94580679 11977 6
4 0023682 97.0974 263.2631 14.74254161 11464 3
5 0248473 109.6184 253.1917 14.33604371 2160 5
6 0217505 295.8620 62.0202 14.49157393 3692 7
It still needs to be cleaned a bit - those + signs in the mean motion derivatives are annoying, I don't need the line number or checksum columns, and I want to get rid of the leading 0 and whitespace in sat.name - but this is good enough for now.
# Long-run equilibria in a Zero Dawn economy
Last week I played through Horizon: Zero Dawn on my partner's dad's PS4. It's a really fun game.
The controls felt natural, the story was great, and the difficulty on "Hard" was the right balance for a relaxing-but-not-trivial vacation game. It reminded me of Mass Effect 2 that way. I really wish more developers would make story-driven open-world single-player RPGs rather than the MMORPGs that seem to be popular these days. I loved Knights of the Old Republic 1 and 2 (KotOR 2 is, in my humble opinion, one of the best storylines in the Star Wars game universe, and I would pay $60 or more for a remake), but was sorely disappointed by SW:TOR. I suppose the market has spoken, though.
H:ZD is set in a post-post-apocalyptic world, where there are machine-animals and no bio-animals larger than a boar, no governments larger than a modestly-sized empire (the Carja) and some smaller tribes, and technology is somewhere between hunter-gatherer and early agriculture. There are some more advanced technologies harvested from machines and the ruins of our civilization (“the ancients”), which went kaput in the late 2060s. The protagonist, Aloy, is a member of one of the smaller tribes, the Nora, who have a fairly isolationist matriarchal society. The game is a fascinating study of the rise, fall, and re-rise of human societies. With the exception of a couple more-skittish machine-animals, with names like “Strider” (mecha-horse) and “Grazer” (mecha-antelope), the machines tend to be quite aggressive toward humans happening by. The designs are pretty cool, blending dinosaur and megafauna with advanced weapons. For example, “Watchers”, the first predator-machine encountered, are like mecha-deinonychus (deinonychi?) which can sometimes shoot lasers; “Sawtooths” and “Ravagers” are like mecha-sabretooth cats with the latter having some sort of energy beam and radar-like system; and “Thunderjaws” are like mecha-T-rexes with laser beams, energy guns, and missile launchers. The game is a blast.
The world’s economy is somewhere between barter and a metal specie standard. Purchasing items involves trading using a mix of metal shards harvested from machine-animals and some other things (like other parts of machine-animals and parts of bio-animals). Metal shards are the main currency, though. The shards are used in crafting arrows, so they’re more useful than gold bars. The machine-animals don’t really like humans much so hunting them involves some risk. There doesn’t seem to be any banking in the world, private or centralized; there are predators, but no predatory lending.
I thought it’d be fun to think a bit about some of the economics that follow from the ecologically-controlled money supply. I initially thought of looking at how the machine-animal population dynamics would drive inflation, but modifying a standard money supply model to account for the lack of governments and banking felt like too much work. Instead, I’m going to add a hunting-driven price to a standard fisheries model (Gordon-Schaefer) and think about the steady state equilibria.
### The model
The model is three equations: the machine-animal population dynamics, the relative price of shards, and the profits from hunting. The first two are just relabeled fish equations, and the third is a simple linear inverse demand curve. To simplify the model, suppose machines are homogeneous and that one unit of shards is harvested per machine. $P_t$ is the price of a unit of shards at time $t$, the size of the machine population is $M_t$, and the hunting rate is $H_t$, so the flow of machines harvested is $H_tM_t$. The model equations are:

$$\dot{M}_t = rM_t\left(1 - \frac{M_t}{K}\right) - H_tM_t, \qquad P_t = A - BH_t, \qquad \pi_t = P_tH_tM_t - cH_t .$$
$A$ and $B$ are the usual maximum willingness-to-pay and slope parameters for the shard price. Note that this is partial equilibrium; the shard price is relative to a consumption good with price normalized to $1$. $r$ and $K$ are the natural machine renewal rate (the production rate from the Cauldrons) and the environment’s machine-carrying capacity (the machines feed on biomass). $c$ is the real cost of hunting one machine (arrows, risk, and opportunity cost) relative to the price of the consumption good.
I’m interested in two types of long-run equilibria here: open access to hunting, which is the default state in the game; and a hunting monopoly controlled by the Hunting Lodge, a Carja organization in the game. Under open access, hunters will take down machines until the return from another unit of shards just covers the cost of taking down another machine. Under the Hunting Lodge’s monopoly, hunters will take down machines until the marginal return from another unit of shards is equal to the marginal cost of taking down another machine. Of course, the Hunting Lodge in the game is far from a monopoly and seems to be more interested in the “sport” aspect of hunting than the “lucre” aspect of it. There’s a series of quests where you can disrupt the Hunting Lodge’s antiquated norms and shitty leadership - it’s a really fun plotline.
The steady state condition for the machine population gives the machine population size as a function of the hunting rate,

$$M_t = K\left(1 - \frac{H_t}{r}\right).$$
This lets us reduce the model to a single equation in a single variable, profit as a function of $H_t$:

$$\pi_t(H_t) = \frac{BK}{r}H_t^3 - \left( \frac{AK}{r} + BK \right)H_t^2 + (AK - c)H_t .$$

Unlike the usual profit function in the Gordon-Schaefer model, this one is cubic in the hunting rate (harvest effort) because the price is no longer a constant.
Under open access, the number of machines hunted will make industry profits zero, i.e. $H^{OA}_t : \pi_t(H^{OA}_t) = 0$. We can factor out an $H_t$ and drop it to get rid of the uninteresting $H_t = 0$ solution. This leaves us with two solutions: $H^{OA}_t = \left( \frac{r}{2BK} \right) \left[ \left( \frac{AK}{r} + BK \right) \pm \left( \left( \frac{AK}{r} + BK \right)^2 - 4 \left( \frac{BK}{r} \right) (AK - c) \right)^{1/2} \right] .$
When the Hunting Lodge is the monopoly shard supplier, they'll control hunting to maximize industry profits. Again, this gives two solutions: $H^{HL}_t = \left( \frac{r}{3BK} \right) \left[ \left( \frac{AK}{r} + BK \right) \pm \left( \left( \frac{AK}{r} + BK \right)^2 - 3 \left( \frac{BK}{r} \right) (AK - c) \right)^{1/2} \right] .$
The Hunting Lodge solutions should be closer to zero than the open access solutions… but when the parameters are all individually positive, the Hunting Lodge solutions are minima! They definitely won’t try to minimize profits, so that doesn’t really make economic sense. Maybe there are some reasonable conditions we can assume to fix this, but this is not a high-effort post so I’m not going to look for them. A lazy explanation for this: that’s why the Hunting Lodge isn’t a monopoly shard supplier in the game!
### The long-run effects of a little more HADES
HADES, one of the big baddies of the game, wants to produce more machines, make them stronger and eviler, and wipe out all life on the planet. We can think about the effects of HADES getting a little stronger on the open access hunting rate by looking at the derivatives of $H^{OA}_t$ with respect to $r$ and $c$. Increasing $c$ should reduce $H^{OA}_t$, all else equal*; increasing $r$ looks like it might be more interesting. I like pictures so I’ll do this numerically.
*The first solution, with the plus sign, is increasing in the cost. This is economically weird so I’m going to ignore it. There’s a story where this makes sense: if the cost of hunting is a barrier to entry, then increasing the cost can also increase the value of a unit of hunted shards, since there are fewer suppliers. This type of effect shows up in mining and satellite launching when the fixed cost of entry increases. This solution has the same signs with respect to $r$ and $K$ as the other one.
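The numerical comparative statics can be sketched in a few lines (the parameter values below are illustrative choices of mine, not calibrated to anything):

```python
import numpy as np

def open_access_H(A=1.0, B=1.0, r=1.0, K=1.0, c=0.1):
    """Open-access hunting rate: the smaller positive root of
    (BK/r)H^2 - (AK/r + BK)H + (AK - c) = 0."""
    roots = np.roots([B * K / r, -(A * K / r + B * K), A * K - c])
    return roots.min()

# More HADES: a higher hunting cost c lowers the open-access hunting rate
h_low_c, h_high_c = open_access_H(c=0.1), open_access_H(c=0.3)

# More GAIA: a higher carrying capacity K raises it
h_low_K, h_high_K = open_access_H(K=1.0), open_access_H(K=2.0)
```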
The stronger HADES gets, the fewer machines get hunted. Since machines are the main source of metal and advanced technology in this world, this is bad for the folks in Zero Dawn.
### The long-run effects of a little more GAIA
GAIA, the force for good fighting HADES in this world, wants to improve the ecosystem so that it can support more life + more diverse life. GAIA’s main tool for doing this is producing machines through the Cauldrons, and having them shepherd the world’s ecological development*. These are the same machines and Cauldrons that HADES hijacks. We can think about the effects of GAIA getting a little stronger on the open access hunting rate by looking at the derivative of $H^{OA}_t$ with respect to $K$, since she is improving the ecosystem. The code from the HADES case can be repurposed for this.
*Actually, HADES is a module of GAIA that’s run amok. So in a sense, GAIA is both the force for good and the force for evil.
The stronger GAIA gets, the more machines get hunted. Presumably, this means the societies doing the hunting get more access to metal and advanced technology.
### Conclusion
I’m not trying to say that hunting == socially good everywhere and always (even though it is in this model), but boy that HADES is bad news!
Horizon: Zero Dawn is a terrific game. Solid gameplay, interesting story, great visuals. If you like single player RPGs, I think you’ll enjoy it.
# Thoughts on "A Random Physicist Takes on Economics"
Jason Smith has interesting ideas. I’ve followed his blog on and off since around the winter of 2014, in my first year of grad school. At the time I was working on developing Shannon entropy-related algorithms for detecting actions in time series data from motion sensors on surf- and snowboards, and his blog posts about applying Shannon entropy to economics intrigued me. I was not (and am not) really a macro person, so a lot of the applications he focused on seemed ho-hum to me (important, but not personally exciting). At some point I stopped following blogs as much to focus on getting my own research going, and lost track of his work.
Smith has a new book out, A Random Physicist Takes on Economics, which I just read. If you’re interested in economic theory at all, I highly recommend it. It’s a quick read. I want to lay out some of my thoughts on the book here while they’re fresh. Since I have the Kindle version, I won’t mention pages, just the approximate location of specific references. This isn’t a comprehensive book review, rather a collection of my thoughts on the ideas, so it’ll be weighted toward the things that stood out to me.
### Big ideas
I think the big idea behind Smith’s work is that much of the rational actor framework used in economics is not necessary. Instead, a lot of the results can be derived from assuming that agents behave randomly subject to their constraints. He traces this idea back to Becker’s Irrational Behavior and Economic Theory, and also cites some of the experimental econ work that backs this up.
One formalism for this idea is that entropy maximization in high-dimensional spaces moves averages to points at the edge of the feasible set pretty quickly, and that changes to the constraint set cause similar changes to the average as they do to the constrained optimum for a convex function. In this view, comparative statics exercises on budget sets will get the right signs, but not necessarily the right magnitudes.
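A quick simulation illustrates the frontier-seeking property (the budget $m$, the dimensions, and the sample size here are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spend_fraction(d, m=1.0, n=5000):
    """Draw n bundles uniformly from the budget set {x >= 0, sum(x) <= m}
    and return the average fraction of the budget m that gets spent."""
    # d+1 iid exponentials, normalized, are uniform on the d-simplex;
    # keeping the first d coordinates gives a uniform draw over the budget set
    e = rng.exponential(size=(n, d + 1))
    x = m * e[:, :d] / e.sum(axis=1, keepdims=True)
    return x.sum(axis=1).mean() / m

# With d = 2 goods the average bundle spends about 2/3 of the budget;
# with d = 50 it spends about 50/51 -- the mean is pushed to the frontier.
```

Nobody in this simulation optimizes anything, yet in high dimensions the average bundle exhausts nearly the whole budget, just like a utility maximizer would.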
### Random behavior and exploring the state space
Smith argues that humans are so complex that assuming uniformly random behavior over a feasible set is a more reasonable starting point than assuming some sort of complex optimization process. This isn’t to say that people actually behave randomly, but that randomness is a modeling choice guided by our own ignorance. In aggregate, we can get to results that replicate representative agent results from assuming random micro-level behavior. Smith describes this random micro-level behavior as “agents exploring the state space” (somewhere early on). The choice of “uniformly random” is guided by the principle of maximum entropy over a closed and bounded budget set.
Joshua Gans mentions this in his Amazon review of the book: random behavior is a useful benchmark against which to compare rational behavior. One of my takeaways from Smith’s work is to think about which of my modeling conclusions would be robust to random behavior and which wouldn’t be. My work deals more with the behavior of firms, where I think rationality is maybe less of a stretch. A funny anecdote: I heard an economist who worked with a large firm once say that he “had yet to meet the profit maximizer”. The point is that firms aren’t always rational profit maximizers. Simon’s behavioral work on firm decision making is in this spirit.
There’s a helpful example I remember from Smith’s blog that didn’t make it into the book. Observe: people buy less gas when the price is higher. A rational behavior proponent might say that this is because people look at the price and say, “hey I can’t afford as much gas, so I’m going to buy less”. A random behavior proponent would say that this is because there are fewer people who can afford gas at the higher price, and so less gas gets bought. The former is about a bunch of individuals continuously adjusting their purchases, while the latter is about a bunch of individuals discretely not buying. Both can generate an observed continuous decrease in gas purchased when price increases.
I think that the truth for any given situation is likely to be somewhere in between random and rational behavior. There’s a lot more about information transfer and equilibrium at his blog, which I’d recommend any economist reading this check out. Spend at least an hour going through some of his posts and thinking seriously about his arguments - I think you’ll likely get some mileage out of it.
### Game theory, random behavior, and common resources
I spend a lot of time thinking about the use of common resources. Smith doesn’t really discuss these issues much - there’s a mention of negative and positive externalities at the end, but it’s brief. So what does the random behavior hypothesis mean for common resources?
The rational behavior hypothesis for the overexploitation of common resources is that selfish agents choose to act in a way that is personally beneficial at the cost of the group as a whole. Cooperation and defection become statements about people trying to get higher payoffs for themselves. I think the random behavior hypothesis here would be something like, “there are fewer states of the world in which people cooperate than in which they defect”. Cooperation and defection then become statements about how few ways there are for people to organize relative to the number of ways they could fail to organize.
I think this is plausible… it seems like the random behavior hypothesis is another way to view coordination failures. It’s not that coordination doesn’t happen because it’s difficult for individuals to stick to, it’s that it doesn’t happen because it requires a confluence of more events than discoordination does.
But there’s a lot of work on the ways that coordination does happen in commons (Ostrom’s work, for example). The game theoretic perspective seems to be valuable here: it gives a direction for policy to aim for, and policy that incorporates game theoretic insights into commons management seems to work. So… maybe rational actor models can be more useful than Smith’s book lets on? Maybe the random behavior interpretation is that applying Ostrom’s principles creates more ways for people to cooperate than existed before, thus making cooperation more likely.
### Whither welfare?
The big consequence of the random behavior framework is that we lose the normative piece of economic modeling. Using the utility maximization framework gives us a way to talk about what should be done in the same model that describes what will be done. In the random behavior framework, we can say that we should loosen constraints in one direction or another, but the “why” of doing it is a bit more obscured. Smith says that loosening constraints can increase the entropy, but I didn’t quite follow his argument for why that’s desirable in and of itself. It seems like there are some more principles in the background guiding that choice.
I have a lot of issues with how “improving welfare” gets (ab)used as a goal in economic analysis. People go around unthinkingly saying “Kaldor-Hicks improvements are possible” as they advocate for specific policies, often explicitly sidestepping equity concerns. Other folks use a concave social welfare function as a criterion to avoid this, and argue against inequality-increasing policies. I lean toward the latter camp. I think there are technical arguments in favor of this - the time with which we can enjoy things is one among many fixed factors, generating decreasing marginal benefits to any individual accumulating large amounts of wealth - but to be honest it’s probably also a reflection of my personal politics. These things all interact, so I resist the claim that it’s purely personal politics, but that’s a separate conversation.
Anyway, I think that good economists know that the decision of what to prioritize in a society can’t be just about “growing the pie” without discussing who gets which slices. But there are a lot of economists who act as though “growing the pie” can be regarded as desirable independent of how the pie will be split. This can be true for a theoretical “ceteris paribus” conversation, but I don’t think this can be true for a policy discussion with real-world consequences. There’s a post I once read (I think it was on interfluidity, but possibly on econospeak) which argued that part of the purpose of leadership was to select one among the many possible equilibria, including those where the Second Welfare Theorem would or wouldn’t be usable. The random behavior hypothesis, by getting rid of economic welfare, might make the need for this leadership and value judgement more explicit. I think that would be a good thing.
Edit: It occurs to me that Smith’s framework also allows normative statements to be made alongside positive statements; they’re just about reshaping constraint sets. I still think it decouples the two more than the standard utility maximization framework does, but maybe I’m wrong.
### Some issues I had with the book’s arguments
I want to be clear: I enjoyed reading Smith’s book, and I’ve enjoyed reading his blog. To the extent I’ve been bored by it, it’s because it’s about macro stuff and I don’t do macro. I am not pointing out issues in the spirit of “here’s why this sucks”, but in the spirit of “here are places where I disagree with the presentation of interesting ideas that I want to continue engaging with”.
I think an economist inclined to be critical could find issues in the text. Smith seems to be writing for a more general audience, so there are places where his use of terms is not quite correct. For example, near the end (around 91%) he describes “tatonnement” as “random trial and error in entropy maximization”; I understand it as a process of “adjusting prices in the direction of excess demand”. I don’t think this matters for his argument, so it’s not a big deal.
I think the more substantive issue a random critical economist would raise is related to his treatment of empirical economics. By and large, he seems to ignore empirical economics almost entirely, and conflate empirical economics with empirical macroeconomics. To the extent that he discusses microeconomics at all, it’s all about the specific pieces of micro theory used in parts of macro modeling. That’s fine! To echo one of Smith’s points, limiting the scope of an argument is perfectly valid. I’m mostly a theorist right now, and I think there are lots of solid points he makes about the things he’s talking about. But as an environmental economist with empirical leanings, it sort of annoys me to see him lump all of economics with macro and all of micro with the micro used in macro. There’s some discussion of game theory, but not a lot.
Smith also takes issue with the use of math formalism in economics. One point he raises, which I remember from his blog, is the use of $\mathbb{R}_+$ to describe a feasible set. Why, he asks, do economists feel the need to say “positive real numbers” rather than “a number greater than zero”? What is gained? He argues that this is a symptom of economics’ excessive and inappropriate use of math. I think this criticism is sort of misguided, but also kind of on point.
Sort of misguided: A lot of economic theory is styled as a branch of logic. So being precise about the field of numbers being used is kind of a cultural thing. The existence proofs we use, or at least the earlier ones, are/were often not constructive. Existence followed from properties of the reals. More modern proofs often use Fixed Point Theorems for existence, followed by Contraction Mapping approaches for computation. The point is that being precise was important for the people making the arguments to convince the people reading the arguments. This is the “it’s just a symbol, get over it” counter-argument.
Kind of on point: In a class I took with him, Miles Kimball was fond of saying that whether or not there is a smallest number that can be used can’t matter to the substantive economics, so any economic proof based on reals has to go through for integers or rationals as well. If it doesn’t, that’s a sign that there’s something funky about the proof. Daniel Lakeland makes similar arguments in justifying his use of nonstandard analysis (it’s somewhere in his blog…). So, yeah, just saying “a number greater than zero” would be fine for any proof that really needed it, though the author would need to go through more hoops to satisfy their likely audience (economists who seem to like real analysis).
I think some of the math in economic theory that Smith takes issue with probably falls in this category: people were using formalisms as shortcuts, because they don’t want to do the proof in even more detail over the rationals or something, but it doesn’t really matter for the substantive economics at play. I think that whether this offends you or not probably says more about your priors over economics and math than it does about the math itself.
I think there’s a similar issue at play with Smith’s read of rational expectations and infinity. Smith argues that rational expectations are somewhere between incoherent (inverting distributions is ill-posed) and a fudge factor that lets a modeler get whatever they want. I agree that the latter is a thing that some, possibly many, economists do. Why did XYZ happen? Oh, because of expectations about XYZ! Assuming good faith on all sides, though, I think there are two things going on here.
The first is that expectations are about beliefs, and self-fulfilling prophecies are a thing. I see this in my students when I teach intro math for econ: if they buy into the notion that they’re “just not math people”, they will do much worse than if they reframe the issue as “math is not easy, but if I work hard I can do well”. Their expectations about their future performance and inherent abilities shape their future outcomes, which reinforce their expectations. If Anakin hadn’t believed his vision of Padme dying on Mustafar, he wouldn’t have acted in a way to make it happen. This is a feature of the human condition, and modeling it is relevant. I think Smith’s concerns about information flowing back through time are missing this point, and getting too caught up in the math formalism.
The second is that modeling beliefs is hard, and rational expectations is a tractable shortcut. There are other tractable shortcuts, like assuming that variables follow martingale processes, which can be useful too. But given that beliefs seem to matter, and that it’s hard to model heterogeneous beliefs being updated in heterogeneous ways, I think the use of rational expectations is at least understandable. There’s a similar point in the use of infinity (which Smith only touches upon at the end, and I may be misunderstanding what he’s getting at). It’s not that economists actually believe that agents think they’ll live forever, at least not theorists who have what I consider good economic intuition. It’s that using finite horizons in conjunction with backwards induction yields weird results, so infinite horizons is a modeling shortcut to get “more realistic” results. This is another of Miles’ arguments: whether or not the universe will really end can’t matter to real economics happening today, so don’t take the “infinite” horizon too literally. Smith seems to grok this in his discussion of scope conditions. Maybe some of this is just that we’re using different languages; I agree that economists could stand to be more explicit about scope conditions.
### Conclusion
This has gotten way too long. To summarize:
1. I liked the book. I think it should be widely read by economists, applied and theoretical.
2. I think Smith is on to something with his modeling approach. I want to try working with it soon.
3. I think Smith’s work would benefit from more engagement with economists. Partly this would add some relevant nuance to his approach (e.g. rational expectations and self-fulfilling prophecies), and partly this would expand the set of topics he considers beyond macro-focused things. It goes the other way too: I think engaging with his work would be good for economists, at the very least offering a useful benchmark to compare rational actor models against.
# A few ways to curve class grades
I’ve been teaching an introductory math class for econ majors for the last two semesters. I curve the class scores so that the average is a B-, according to the university’s recommended thresholds for letter grades. I like using those thresholds, but I have yet to write a test where the students get to a B- on their own. Maybe my tests are too hard; maybe I’m just inflating grades to reduce complaints. I’m working on writing tests that reveal abilities according to the letter grade thresholds (a subject for another post). In this post, I’d like to write down a few different curves I’ve used or seen used.
I like curves that I can explain to students. Algebra and derivations are a big focus of my class, so I like it when I can have my students solve for the curve parameter(s) using just simple algebra and the class average. That way they can calculate their own post-curve grade before I post it. I’m not sure how many of them actually do, but they could…
### Notation
Let $x_i$ be an individual student’s raw score and $\bar{x}$ the average of the scores. The curved score is the output of a function $C(x_i,p)$, where $p$ is a vector of parameters of the curve function. There are $n$ students, the instructor wants the curved grades to be close to $\tau$, and the maximum score achievable is normalized to $100$.
### A flat curve targeting a mean
A constant number of points added to each student’s score is the simplest and most popular curve I’ve seen. Add $p$ points to each student’s grade, until the class average is close to the desired level, $\tau$. Formally, $C(x_i,p) = x_i + p ,$ where $p$ is such that
$\frac{1}{n} \sum_{i=1}^n C(x_i,p) = \tau .$ Doing some algebra, \begin{align} \frac{1}{n} \sum_{i=1}^n (x_i + p) &= \tau \cr \bar{x} + p &= \tau \cr \implies p = \tau - \bar{x} \end{align}
All the instructor needs to do with this curve is add the difference between the target and the class average to each student’s score, and the average hits the target. Very easy to implement and communicate to students. Each student gets the same boost, and because the curve function is monotonic ranks aren’t changed. One downside to this method is that students’ scores can be pushed over $100$. If letter grades are awarded based on fixed thresholds, this means that some of the points may be “wasted”. That is, some students at the top may get extra points that don’t benefit them at the cost of students across the rest of the distribution who could have gotten a higher letter grade. In theory, an instructor who wanted to use a flat curve while avoiding wastage could do a round of curving, truncate the over-the-top scores to $100$, and repeat the curving until $p$ stops changing. I haven’t seen anyone do the full process, just a single iteration.
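Here’s a minimal Python sketch of that iterate-truncate version of the flat curve (the function name, iteration cap, and sample scores are mine, purely illustrative):

```python
def flat_curve(scores, target, max_score=100):
    """Repeatedly add p = target - mean, truncating at max_score, until p converges."""
    curved = list(scores)
    total_p = 0.0
    for _ in range(1000):  # converges geometrically as long as target <= max_score
        p = target - sum(curved) / len(curved)  # p = tau - xbar
        if abs(p) < 1e-9:
            break
        total_p += p
        curved = [min(s + p, max_score) for s in curved]
    return curved, total_p

curved, p = flat_curve([55, 65, 70, 95], target=80)
# the top student is capped at 100; the others end up about 10 points higher
```

A single iteration is just the first pass through the loop; the full process keeps redistributing the points that were truncated off the top.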
I’ve received curves like this in my undergrad. I feel like my incentive to work hard was reduced in classes that curved like this. As long as I was above the average, I was usually sure I would get an A. As a teacher, I’d like it if my curve function distorted incentives as little as possible.
### A linear proportional curve targeting a mean
In this curve students are given back a proportion $p$ of the points they missed. I’ve been using this function lately. Formally, $C(x_i,p) = x_i + (100-x_i)p .$ If we’re targeting the mean, $p$ is such that \begin{align} \frac{1}{n} \sum_{i=1}^n C(x_i,p) &= \tau \cr \frac{1}{n} \sum_{i=1}^n (x_i + (100-x_i)p) &= \tau \cr \frac{1}{n} \sum_{i=1}^n x_i + (100 -\frac{1}{n} \sum_{i=1}^n x_i )p &= \tau \cr \bar{x} + (100 - \bar{x} )p &= \tau \cr \implies p &= \frac{\tau - \bar{x}}{100 - \bar{x}} \end{align}
This gives more points back to students who did worse, but is still monotonic so ranks are preserved. It never goes over $100$, so no points are wasted. It’s simple enough to implement and easy to communicate (“you get back a portion of what you missed”).
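The derivation above can be sketched directly (again in Python; names and sample scores are mine):

```python
def proportional_curve(scores, target, max_score=100):
    """Give back a fraction p of the points each student missed: C(x) = x + (max_score - x) * p."""
    xbar = sum(scores) / len(scores)
    p = (target - xbar) / (max_score - xbar)  # from setting the curved mean equal to target
    return [x + (max_score - x) * p for x in scores], p

curved, p = proportional_curve([55, 65, 70, 95], target=80)
# p is about 0.304: each student gets back roughly 30% of what they missed
```

Because the curve is linear in the scores, the curved mean hits the target exactly in one pass, with no truncation step needed.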
I’ve never received this curve so I don’t know how it feels on the receiving end. I think it preserves some incentives for students at the top to work hard, since they know their scores won’t move much after the curve. By the same token, I can see it feeling unfair to students at the top. I do like not having to iterate or anything to avoid wastage.
### A least-squares curve that matches a mean and a median
In this curve the mean and median of the scores are brought as close as possible to some targets $\tau_{ave}$ and $\tau_{med}$.
This one came up recently in a conversation about a grading issue. My colleague was teaching a class with two TAs running recitation sections. At the end of the semester, the TA with the lower mean had the higher median (this is TA $1$, the other is TA $2$). My colleague wanted to find a way to match the recitation grades from the two TAs in some “fair” way. Using a flat curve to bring TA $1$’s mean up to TA $2$’s would have given an extra benefit to the students at the top of TA $2$’s class, while matching the medians seemed like it would end up boosting TA $2$’s average student that much higher.
I thought, “why not use least squares to match both?” Using the convention that TA $1$’s scores are being matched to TA $2$’s, denoting the $n_1$ students in TA $1$’s class by $x_{i1}$ and the $n_2$ students in TA $2$’s class by $x_{i2}$, and using a flat curve $C(x_i,p) = x_i + p$, we define the sum of squared errors for the mean and median as $\epsilon(\{x_i\},p) = \left( \overline{C(x_{i1},p)} - \bar{x}_{i2} \right)^2 + \left( \widehat{C(x_{i1},p)} - \hat{x}_{i2} \right)^2$
where $\overline{C(x_{i1},p)}$, $\widehat{C(x_{i1},p)}$ are the mean and median of TA $1$’s curved scores, and $\bar{x}_{i2}$, $\hat{x}_{i2}$ are the mean and median of TA $2$’s raw scores (these are the $\tau_{ave}$ and $\tau_{med}$). The curve parameter $p$ minimizes the sum of squared errors, $p = \text{argmin}_p \epsilon(\{x_i\},p) .$
I’ve never done this in my own class, but I like the idea of matching more than one statistic of the distribution. If $p$ comes out negative, then the curve could be interpreted as points to add to TA $2$’s scores. If the instructor wants to emphasize the mean over the median (or vice versa), they could put weights in front of the squared error terms. I’ve heard of someone using GMM to set their mean and variance to some targets, but IIRC in that case the variance piece ended up not mattering. I didn’t try to solve this one algebraically. Instead, I wrote a short R function (below) which uses optim() to solve for $p$ numerically. (I think J is TA 1, and N is TA 2, but it’s been a while since I wrote this.)
curvefinder <- function(rawscores){
# expects the first column of rawscores to be J, second column to be N
Jscores <- rawscores[,1]
Nscores <- rawscores[,2]
# removes NAs from N's column. NAs are created when read.csv() notices that J has more rows than N, and fills extra cells in N with NA so that both columns are the same length.
Nscores <- Nscores[!is.na(Nscores)]
# calculates the curved mean for whichever column will be curved. x is the parameter vector.
curvedmean <- function(x,scores) {
curved_scores <- scores+x # adds a flat curve - could try other functions, like score + (1-score)*x
newmean <- mean(curved_scores)
return(newmean)
}
# calculates the curved median for whichever column will be curved. x is the parameter vector.
curvedmedian <- function(x,scores) {
curved_scores <- scores+x # flat curve
newmedian <- median(curved_scores)
return(newmedian)
}
# calculates the sum of squared errors between the curved column and the target column. x is the parameter vector.
sse <- function(x,Jscores,Nscores) {
error <- (curvedmean(x,Jscores) - mean(Nscores))^2 + (curvedmedian(x,Jscores) - median(Nscores))^2
return(error)
}
# solves for a curve parameter (or parameter vector) by nonlinear least squares
optim(0.001, sse, Jscores=Jscores, Nscores=Nscores, method="L-BFGS-B") # unbounded, so p can come out negative
}
I like that different curve functions could be used easily, and that the code can reduce to any other single-statistic-targeting curve I can think of. I suppose it’d be easy enough to explain this version’s flat curve to students, but it might be harder to explain where $p$ comes from for any version.
### Conclusion
• There is always some arbitrariness in curving, if only in the selection of curve.
• Curving is useful when the test is poorly calibrated to student ability. I struggle with this calibration.
• “Fairness” seems like an intuitively desirable concept without a clear definition. Monotonicity seems fair, but beyond that… is wastage fair or unfair? I tend to think it is unfair, but I recognize that that’s an opinion and not a result. The fairness of monotonicity seems less disputable, but I’m open to hearing arguments against it. This leads me to favor the linear proportional curve or the least-squares curves.
• Transparency seems important to me, if only from a “customer relations” standpoint. I want my students to be able to understand how the curve works and why it’s being used, so that they can better assess their own abilities. This leads me to avoid the least-squares curves, at least for the class I teach where students are not as familiar with least-squares. Maybe transparency isn’t the word - maybe it’s better expressed as “explainability” or “intuitiveness”. What is explainable or intuitive will depend on the audience, so there can’t really be an eternal answer to “what is maximally explainable/intuitive?”
• I like the linear proportional curve targeting a mean, and usually use that. Since I usually teach a math class, I spend some time explaining the function and its properties. There are worksheet exercises to drive some of these points home. Obviously, this isn’t appropriate for every class.
# High orbit, low orbit - a satellite altitude game
This is a model I wrote some time ago, a very stylized special case of a more general recursive model I’m currently working on. Hopefully, the more general model will feature as a chapter of my dissertation, and this might be a subsection of that chapter. I think it’s a sort of interesting model in its own right, even apart from the setting.
The basic motivation is the “orbital debris” problem: as satellites are launched into orbit, there are some debris that accumulate and pose a threat to other objects in the orbital environment. There’s a pretty big literature on this in the aerospace engineering and astrophysics communities, and the popular press has written about this as well. I’ve blogged about a couple papers on the subject before (physics/engineering, economics).
The basic intuition is pretty straightforward and well-known in economics: pollution is a negative externality, so firms that don’t face the full cost of polluting the environment overproduce pollution relative to the socially optimal level. I’m not going to present the planner’s solution, but in the stylized model here firms can cooperate to reduce the amount of debris produced. Without cooperation, they’ll end up choosing higher orbits and producing more debris. The debris can destroy satellites (and that is bad).
In this model I’m focusing on how a firm’s optimal choice of altitude in low-Earth orbit is affected by another firm’s altitude choice. This is an inter-firm externality, which is a little different from the usual consumer-facing externality, but is conceptually similar to strategic substitutability in oligopoly games.
## The model setting
Consider an environment with two orbits, high (H) and low (L). We can think of these as spherical altitude shells, similar to the approach described in Rossi et al. (1998).
There are 2 identical firms, each with 1 satellite per period. Debris decays completely after 1 period. Collisions completely destroy a satellite, and generate no debris. Satellites last 1 period, and then are properly disposed of. This lets me talk about dynamics while keeping the decision static.
$O_i \in \{H,L\}$ is the orbit chosen by firm $i$ for its satellite. The probability that firm $i$’s satellite survives the period is $S_i(O_i, O_j)$. $Y_i(O_i, O_j)$ is the probability of a collision between two satellites in the same orbit*. Putting a satellite in orbit $H$ generates some debris in orbit $L$ for that period. $D_L$ is the probability a satellite in the low orbit is destroyed by debris from a satellite in the high orbit**.
*We could say that satellites never collide with each other, but the analysis carries through as long as satellites generate some collision probability for other satellites in the same shell. I think this is generally true, since objects like final stage boosters, random bits that break off, or dead satellites which are not properly disposed of generate such probabilities.
**The idea here is that debris orbits decay toward Earth. This is more relevant for objects in low-Earth orbit, which is what I’m thinking about with this model.
The returns from owning a satellite are normalized to 1, so that we can focus on the probabilities $S_i$. With the above definitions, we can define the satellite survival probabilities for firm $i$ as $S_i(H,H) = 1 - Y_i(H,H) \equiv \gamma_{HH}, \quad S_i(H,L) = 1, \quad S_i(L,H) = 1 - D_L \equiv \gamma_{LH}, \quad S_i(L,L) = 1 - Y_i(L,L) \equiv \gamma_{LL} .$
So being the only satellite in the high orbit is the best position to be in, since you’re not at risk from debris or the other satellite. It seems reasonable to assume that $\gamma_{HH} = \gamma_{LL}$ as long as the altitude shells aren’t too large.
The really important assumption is the relationship between $\gamma_{LH}$ and $\gamma_{HH}$. If $\gamma_{HH} > \gamma_{LH}$ (case 1, debris is more likely to cause a collision than a satellite), we’ll end up with one Nash equilibrium in pure strategies. If $\gamma_{HH} \leq \gamma_{LH}$ (case 2), we can have up to three Nash equilibria in pure strategies. When we relax the assumption that debris decays completely at the end of the period and allow debris growth, we’ll have transitions between the two cases.
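To check those equilibrium claims for particular parameter values, here’s a small Python sketch that enumerates the pure-strategy Nash equilibria of the symmetric 2x2 game (the payoff numbers are illustrative, not calibrated to anything):

```python
def pure_nash(payoffs):
    """payoffs[(oi, oj)] = survival probability for firm i when i plays oi and j plays oj.
    The game is symmetric, so firm j's payoff at (oi, oj) is payoffs[(oj, oi)]."""
    orbits = ["H", "L"]
    eqs = []
    for oi in orbits:
        for oj in orbits:
            i_ok = all(payoffs[(oi, oj)] >= payoffs[(d, oj)] for d in orbits)
            j_ok = all(payoffs[(oj, oi)] >= payoffs[(d, oi)] for d in orbits)
            if i_ok and j_ok:
                eqs.append((oi, oj))
    return eqs

# Case 1: gamma_HH > gamma_LH -> both firms pool on (H, H)
case1 = {("H", "H"): 0.9, ("H", "L"): 1.0, ("L", "H"): 0.8, ("L", "L"): 0.9}
# Case 2: gamma_HH < gamma_LH -> firms separate, (H, L) and (L, H)
case2 = {("H", "H"): 0.8, ("H", "L"): 1.0, ("L", "H"): 0.9, ("L", "L"): 0.8}

eq1 = pure_nash(case1)  # only (H, H): orbital pooling
eq2 = pure_nash(case2)  # (H, L) and (L, H): orbital separation
```

With $\gamma_{HH} = \gamma_{LH}$ exactly, `case2` would also pick up $(H,H)$, matching the “up to three equilibria” claim.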
## Solving the model
#### Case 1: $\gamma_{HH} > \gamma_{LH}$
The game matrix:
|  | H | L |
|---|---|---|
| H | $\underline{\gamma_{HH}}, \underline{\gamma_{HH}}$ | $\underline{1}, \gamma_{LH}$ |
| L | $\gamma_{LH}, \underline{1}$ | $\gamma_{LL}, \gamma_{LL}$ |
(Best responses are underlined. Row player is the first entry, column player is the second.)
The only Nash equilibrium in pure strategies here is for both firms to go high, $(H,H)$. I call this case “orbital pooling”.
The folk region:
(The images in this post are all photos of diagrams I drew in pencil in my notebook many months ago.)
This case is like a prisoner’s dilemma. Neither firm wants to be in the low orbit when the other firm can go high and make them take on risk. Both firms want to try to be the only firm in the high orbit with no risk - you can see this in the folk region diagram and best responses. So, both firms end up high and with risk.
#### Case 2: $\gamma_{HH} \leq \gamma_{LH}$
The game matrix:
|  | H | L |
|---|---|---|
| H | $\underline{\gamma_{HH}}, \underline{\gamma_{HH}}$ | $\underline{1}, \underline{\gamma_{LH}}$ |
| L | $\underline{\gamma_{LH}}, \underline{1}$ | $\gamma_{LL}, \gamma_{LL}$ |
There are up to three Nash equilibria in pure strategies here: $(H,L), (L,H)$, and $(H,H)$. The $(H,H)$ equilibrium is possible if $\gamma_{HH} = \gamma_{LH}$. I call this case “orbital separation”.
The folk region:
The intuition here is straightforward: pooling on the same orbit is worse than (or, if $\gamma_{HH} = \gamma_{LH}$, as good as) mixing it up, so the firms mix it up.
Orbital separation has less overall risk and debris than orbital pooling. The firm which went low bears more risk than the firm which went high under orbital separation, but the orbits are cleaner overall. If we had more realistic debris dynamics (where debris could interact with other debris to generate more debris), orbital separation would be even better than orbital pooling.
There are four inferences we can draw about the process dynamics from this:
1. If $D_L$ is initially low but grows faster than $Y_i$, orbital separation will transition to orbital pooling
2. If $D_L$ increases at the same rate as or a rate slower than $Y_i$, orbital separation is sustainable
3. If $D_L$ decreases faster than $Y_i$, orbital pooling can transition to orbital separation
4. Orbital pooling will increase $D_L$
Let’s look at the debris dynamics a little more formally.
## Putting some debris dynamics in
We’ll keep it simple here: debris in the low orbit will decay each period at a rate of $\delta \in (0,1)$, and launches to the high orbit will generate $\gamma$ many debris in the low orbit. Letting $D_L'$ be the next period debris stock, the three cases for the debris law of motion are \begin{align} D_L' &= (1-\delta) D_L && \text{(both firms low)} \cr D_L' &= (1-\delta) D_L + \gamma && \text{(orbital separation)} \cr D_L' &= (1-\delta) D_L + 2\gamma && \text{(orbital pooling)} \end{align}
The diagram below shows the three possible fixed points of debris:
If both firms go low, the fixed point will be $0$ debris in the low orbit. If the firms separate, it will be $\tilde{D}_L^{LH}$. If the firms pool, it will be $\tilde{D}_L^{HH}$. The next diagram shows the returns from orbital pooling and orbital separation as a function of the current period debris stock $D_L$.
(The x and y axes are flipped because economics.) $\bar{D}_L$ is a debris threshold. Above $\bar{D}_L$, orbital pooling dominates orbital separation, and vice versa below $\bar{D}_L$.
One question is whether the steady state debris level under orbital separation is higher or lower than the pooling-separation threshold, i.e. whether $\tilde{D}_L^{LH} \leq \bar{D}_L$.
If $\tilde{D}_L^{LH} > \bar{D}_L$, debris under orbital separation will build past $\bar{D}_L$, firms will shift from orbital separation to orbital pooling, and $\tilde{D}_L^{HH}$ will be the final debris steady state.
If $\tilde{D}_L^{LH} \leq \bar{D}_L$, debris will settle at $\tilde{D}_L^{LH}$, and firms will stay in orbital separation.
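The fixed-point logic can be checked numerically. A quick Python sketch, assuming a law of motion of the form $D_L' = (1-\delta)D_L + \gamma n_H$ with $n_H$ the number of high-orbit launches that period (decay rate and per-launch debris values are illustrative):

```python
def debris_path(n_high, delta, gamma, d0=0.0, periods=200):
    """Iterate D' = (1 - delta) * D + gamma * n_high; converges to gamma * n_high / delta."""
    d = d0
    for _ in range(periods):
        d = (1 - delta) * d + gamma * n_high
    return d

delta, gamma = 0.5, 0.1  # illustrative decay rate and per-launch debris
d_LL = debris_path(0, delta, gamma)  # both low: debris decays to 0
d_LH = debris_path(1, delta, gamma)  # separation: gamma / delta = 0.2
d_HH = debris_path(2, delta, gamma)  # pooling: 2 * gamma / delta = 0.4
```

The three limits line up with the three fixed points in the diagram: $0$, $\tilde{D}_L^{LH}$, and $\tilde{D}_L^{HH}$.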
Below are payoff-debris plots for orbital separation and orbital pooling (with proper x-y axes):
### Cooperation with a grim trigger
The folk region diagrams show us that cooperating to get higher payoffs is generally possible. One way to see what the cooperation could look like is to write a trigger strategy for an infinitely repeated game and then see when it will/won’t lead to cooperation.
The trigger strategy for firm $i$ is:
• Play $H,L,...$ if $j$ plays $L,H,...$
• If firm $j$ deviates, play $H$ forever
Firm $j$’s strategy is defined similarly.
We can see that there’s no incentive to deviate from $(H,L)$ to $(L,L)$, only from $(L,H)$ to $(H,H)$. Assuming the firms share a discount factor $\beta \in (0,1)$ and expanding out the series of payoffs, they’ll cooperate as long as $\frac{\gamma_{LH} + \beta}{1 - \beta^{2}} \geq \frac{\gamma_{HH}}{1 - \beta} ,$ i.e. as long as the discounted alternating stream $\gamma_{LH}, 1, \gamma_{LH}, 1, \ldots$ beats $\gamma_{HH}$ forever. Rearranging gives $\gamma_{LH} \geq \gamma_{HH} + \beta(\gamma_{HH} - 1)$, which in the patient limit $\beta \to 1$ becomes $\gamma_{LH} > 2\gamma_{HH} - 1$.
So, they can cooperate and alternate orbital separation with a grim trigger if $\gamma_{LH} > 2 \gamma_{HH} - 1$. We can get a sense for how likely this cooperation is in a $\gamma_{HH} - \gamma_{LH}$ payoff space,
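A small numeric check of that incentive comparison (my own restatement in Python; the payoff values are illustrative):

```python
def cooperate_ok(g_hh, g_lh, beta):
    """True if the firm currently in the low orbit prefers alternating separation
    (payoffs g_lh, 1, g_lh, 1, ...) over deviating to (H, H) forever (g_hh each period)."""
    coop = (g_lh + beta) / (1 - beta**2)  # discounted alternating stream
    deviate = g_hh / (1 - beta)           # discounted constant stream after deviating
    return coop >= deviate

patient_coop = cooperate_ok(g_hh=0.9, g_lh=0.85, beta=0.99)    # 0.85 > 2*0.9 - 1 = 0.8
patient_defect = cooperate_ok(g_hh=0.9, g_lh=0.70, beta=0.99)  # 0.70 < 0.8
```

Moving $\gamma_{LH}$ across the $2\gamma_{HH} - 1$ threshold flips the answer for patient firms, matching the payoff-space picture.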
So, cooperation seems more likely when orbital separation is already the Nash equilibrium. This seems intuitive enough to me.
## Concluding thoughts
This is obviously a very stylized model, but I think the general notion of orbital separation vs orbital pooling is more generally applicable. I think this conclusion is kinda neat.
With more altitudes, I would expect the pooling/separation dynamic to result in firms moving progressively higher in LEO. I think we can sort of see that in SpaceX and OneWeb’s altitude choices for their announced constellations - around 1,200 and 1,100 km up, close to or a little higher than the LEO altitudes which are most-used right now. Obviously there’s a lot more than collision risk going into the choice of altitude for a constellation, but I expect the risk to be a factor.
Adding the benefits to a particular altitude (e.g. coverage area) parameterizes the problem some more, but doesn’t seem to add any interesting economic dynamics. Launch costs are necessary in the dynamic decision model, but can be ignored here. Allowing satellites to last more than one period really complicates the economic dynamics, as does adding more firms or altitudes. The physical dynamics are cool and have been studied fairly well, but the economic dynamics have not really been studied at all. I may be biased - I think the exciting action in the space debris problem is in the economic dynamics.
I would really like to model constellation size choices, but again the economic dynamics make it really complicated. I wrote a single-shell model of comparative steady state constellation choices with free entry and debris accumulation for a class last semester which I might be able to extend with altitudes. The steady states are not easy to compute - mechanically, the problem is that the debris accumulation can make the cost function concave, making the firm’s optimization problem nonconvex. Getting the full transition paths would be cool and presumably even harder. I’m working on this, but I don’t expect to get the most general case with constellations, multiple firms, multiple altitudes, and debris accumulation any time soon.
An ellipse is a closed plane curve: the set of points whose distances from two fixed points (the foci) add up to the same constant, 2a. Placed on an x-y graph with its major axis on the x-axis, its equation is x²/a² + y²/b² = 1 (similar to the hyperbola x²/a² − y²/b² = 1, except for a "+" instead of a "−"), where a is the semi-major axis and b is the semi-minor axis; the circle is the special case a = b. A parametric form is x = a cos t, y = b sin t for 0 ≤ t ≤ 2π, which is preferred in computer graphics because the density of points is greatest where there is the most curvature; composite Bézier curves can also draw an ellipse to sufficient accuracy, since any ellipse is an affine transformation of a circle. The enclosed area is πab. An ellipse may also be defined by a focus and a directrix: for all points on the curve, the ratio of the distance to the focus to the distance to the directrix is a constant e < 1, the eccentricity. Ellipses appear as conic sections (plane cuts of a cone), as parallel or central projections of circles, and as orbits: Newton showed that any radially directed attractive force whose strength is inversely proportional to the square of the distance produces elliptical orbits. Practical constructions include the gardener's method (a closed string looped around two pegs), the paper-strip methods, the Tusi couple, and various ellipsographs; Pitteway extended Bresenham's line algorithm to conics in 1967, Danny Cohen presented a linear algorithm for drawing ellipses and circles at the "Computer Graphics 1970" conference, and in 1971 L. B. Smith published similar algorithms for all conic sections. A tangent line just touches the curve at one point, and a ray from one focus reflects off the ellipse to the other focus — the principle behind whispering galleries.

An ellipsis (plural ellipses; from Greek élleipsis, "to fall short" or "leave out") is a punctuation mark made up of three dots (…), also called a suspension point or, colloquially, "dot-dot-dot". It indicates words omitted from quoted text, or, informally, a pause, hesitation, trailing-off in thought or speech, or an unfinished or mysterious thought; in narration it can mark a gap in the story (German: Leerstelle im Erzählgang). In grammar, ellipsis is a situation in which words are left out of a sentence but the sentence can still be understood, as in "What percentage was left?". In mathematics, an ellipsis means a set or pattern continues in a predictable way; for example, the set of natural numbers N is written {1, 2, 3, …}.
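The focal ("sum of distances") definition of the ellipse can be checked numerically against the parametric form (a minimal sketch with arbitrary semi-axes a = 5, b = 3):

```python
import math

a, b = 5.0, 3.0                      # semi-major and semi-minor axes
c = math.sqrt(a * a - b * b)         # distance from centre to each focus
foci = [(-c, 0.0), (c, 0.0)]

def point(t):
    """Point on the ellipse from the parametric form x = a cos t, y = b sin t."""
    return (a * math.cos(t), b * math.sin(t))

# For every point on the curve, the distances to the two foci sum to 2a.
for k in range(12):
    p = point(2 * math.pi * k / 12)
    total = sum(math.dist(p, f) for f in foci)
    assert abs(total - 2 * a) < 1e-9
```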
A trainer learns weights W of a function f(x) that predicts a label y from a feature vector x:

y = f(x) = Wx

Without a bias clause (or regularization), f(x) cannot produce a hyperplane that separates (1,1) from (2,2), because the hyperplane of f(x) always crosses the origin (0,0).

With a bias clause b, a trainer learns the following f(x):

f(x) = Wx + b

Then, the predicted model accounts for bias existing in the dataset, and the predicted hyperplane does not always cross the origin.
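Why the origin constraint matters can be seen directly: since (2,2) = 2·(1,1), any score Wx assigns both points the same sign, while Wx + b can separate them (a sketch; the weight values below are arbitrary):

```python
# Without bias: w.x and w.(2x) always share a sign, so (1,1) and (2,2)
# can never land on opposite sides of the hyperplane w.x = 0.
def score(w, x, b=0.0):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x1, x2 = (1.0, 1.0), (2.0, 2.0)
for w in [(1.0, -2.0), (-3.0, 0.5), (0.2, 0.1)]:
    s1, s2 = score(w, x1), score(w, x2)
    assert s1 * s2 >= 0  # same side (or on the boundary), whatever w is

# With a bias term, e.g. w = (1, 1), b = -3, the two points are separated.
assert score((1.0, 1.0), x1, b=-3.0) < 0 < score((1.0, 1.0), x2, b=-3.0)
```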
add_bias() of Hivemall adds a bias to a feature vector. To enable a bias clause, use add_bias() for both (important!) training and test data, as follows. The bias b is a feature of "0" ("-1" before v0.3) by default. See AddBiasUDF for the detail.

Note that the bias is expressed as a feature that appears in all training/testing examples.
# Adding a bias clause to test data
create table e2006tfidf_test_exploded as
select
rowid,
target,
split(feature,":")[0] as feature,
cast(split(feature,":")[1] as float) as value
-- extract_feature(feature) as feature, -- hivemall v0.3.1 or later
-- extract_weight(feature) as value -- hivemall v0.3.1 or later
from
e2006tfidf_test LATERAL VIEW explode(add_bias(features)) t AS feature;
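In Python terms, the effect of add_bias on a sparse "index:value" feature list can be sketched as follows (a hypothetical re-implementation for illustration only, based on the description above that the bias is the feature "0"; it is not Hivemall code):

```python
def add_bias(features, bias_feature="0"):
    """Append a constant bias feature (index "0", value 1.0) to a sparse
    "index:value" feature list, mimicking the behaviour described above."""
    return features + [f"{bias_feature}:1.0"]

assert add_bias(["10:0.5", "42:1.2"]) == ["10:0.5", "42:1.2", "0:1.0"]
```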
# Adding a bias clause to training data
create table e2006tfidf_pa1a_model as
select
feature,
avg(weight) as weight
from
(select
# Seminars (SDBW03)
Videos and presentation materials from other INI events are also available.
SDBW03 4th April 2016
09:30 to 10:15
Thomas Kurtz Approximations for Markov chain models
Co-author: David F. Anderson (Univ of Wisconsin - Madison)
The talk will begin by reviewing methods of specifying continuous-time Markov chains and classical limit theorems that arise naturally for chemical network models. Since models arising in molecular biology frequently exhibit multiple state and time scales, analogous limit theorems for these models will be illustrated through simple examples.
SDBW03 4th April 2016
10:15 to 11:00
James Faeder Towards large scale models of biochemical networks
Co-authors: Jose Juan Tapia (University of Pittsburgh), John Sekar (University of Pittsburgh)
In this talk I will address some of the challenges faced in developing detailed models of biochemical networks, which encompass large numbers of interacting components. Although simpler coarse-grained models are often useful for gaining insight into biological mechanisms, such detailed models are necessary to understand how molecular components work in the network context and essential to developing the ability to manipulate such networks for practical benefits. The rule-based modeling (RBM) approach, in which biological molecules can be represented as structured objects whose interactions are governed by rules that describe their biochemical interactions, is the basis for addressing multiple scaling issues that arise in the development of large scale models. Currently available software tools for RBM, such as BioNetGen, Kappa, and Simmune, enable the specification and simulation of large scale models, and these tools are in widespread use by the modeling community. I will review some of the developments that gave rise to those capabilities, and then I will describe our current efforts to broaden the appeal of these tools as well as to better enable collaborative development of models through re-use of existing models and improving visual representations of models.
SDBW03 4th April 2016
11:30 to 12:15
Simon Cotter A constrained approach to the simulation and analysis of stochastic multiscale chemical kinetics
Co-authors: Radek Erban (University of Oxford), Ioannis Kevrekidis (Princeton), Konstantinos Zygalakis (University of Southampton)
In many applications in cell biology, the inherent underlying stochasticity and discrete nature of individual reactions can play a very important part in the dynamics. The Gillespie algorithm has been around since the 1970s, which allows us to simulate trajectories from these systems, by simulating in turn each reaction, giving us a Markov jump process. However, in multiscale systems, where there are some reactions which are occurring many times on a timescale for which others are unlikely to happen at all, this approach can be computationally intractable. Several approaches exist for the efficient approximation of the dynamics of the “slow” reactions, some of which rely on the “quasi-steady state assumption” (QSSA). In this talk, we will present the Constrained Multiscale Algorithm, a method based on the equation free approach, which was first used to construct diffusion approximations of the slowly changing quantities in the system. We will compare this method with other methods which rely on the QSSA to compute the effective drift and diffusion of the approximating SDE. We will then show how this method can be used, back in the discrete setting, to approximate an effective Markov jump generator for the slow variables in the system, and quantify the errors in that approximation. If time permits, we will show how these generators can then be used to sample approximate paths conditioned on the values of their endpoints.
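The Gillespie algorithm mentioned above can be sketched in a few lines. This toy version simulates a single decay reaction X → ∅ with rate constant k (illustrative only, and far simpler than the multiscale systems discussed in the talk):

```python
import math
import random

def gillespie_decay(x0, k, t_max, rng):
    """Exact SSA for X -> 0 with propensity k*x: waiting times are exponential."""
    t, x = 0.0, x0
    while x > 0:
        rate = k * x
        t += rng.expovariate(rate)      # time until the next reaction fires
        if t > t_max:
            break
        x -= 1                           # one molecule decays
    return x

rng = random.Random(0)
# The mean of X at time t is x0 * exp(-k t); check with a crude Monte Carlo.
samples = [gillespie_decay(100, 0.5, 2.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
assert abs(mean - 100 * math.exp(-0.5 * 2.0)) < 2.0
```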
SDBW03 4th April 2016
14:00 to 14:45
Raul Fidel Tempone Efficient Simulation and Inference for Stochastic Reaction Networks
Co-authors: CHRISTIAN BAYER (WIAS, BERLIN), CHIHEB BEN HAMMOUDA (KAUST, THUWAL), ALVARO MORAES (ARAMCO, DAMMAM), FABRIZIO RUGGERI (IMATI, MILAN), PEDRO VILANOVA (KAUST, THUWAL)
Stochastic Reaction Networks (SRNs) are intended to describe the time evolution of interacting particle systems in which one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions, but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others.
This talk is focused on novel fast simulation algorithms and statistical inference methods for SRNs.
Regarding simulation, our novel Multi-level Monte Carlo (MLMC) hybrid methods provide accurate estimates of expected values of a given observable at a prescribed final time. They control the global approximation error up to a user-selected accuracy and up to a certain confidence level, with near optimal computational work.
With respect to statistical inference, we first present a multi-scale approach, where we introduce a deterministic systematic way of using up-scaled likelihoods for parameter estimation. In a second approach, we derive a new forward-reverse representation for simulating stochastic bridges between consecutive observations. This allows us to use the well-known EM Algorithm to infer the reaction rates.
SDBW03 4th April 2016
14:45 to 15:30
Erkki Somersalo tba
SDBW03 5th April 2016
09:00 to 09:45
Rosalind Allen Inherent variability in the kinetics of amyloid fibril formation
Co-authors: Juraj Szavits-Nossan, Kym Eden, Ryan Morris, Martin Evans and Cait MacPhee
In small volumes, the kinetics of filamentous protein self-assembly is expected to show significant variability, arising from intrinsic molecular noise. We introduce a simple stochastic model including nucleation and autocatalytic growth via elongation and fragmentation, which allows us to predict the effects of molecular noise on the kinetics of autocatalytic self-assembly. We derive an analytic expression for the lag-time distribution, which agrees well with experimental results for the fibrillation of bovine insulin. Our analysis shows that significant lag-time variability can arise from both primary nucleation and from autocatalytic growth and should provide a way to extract mechanistic information on early-stage aggregation from small-volume experiments.
SDBW03 5th April 2016
09:45 to 10:30
Muruhan Rathinam Analysis of Monte Carlo estimators for parametric sensitivities in stochastic chemical kinetics
Co-author: Ting Wang (University of Delaware)
We provide an overview of some of the major Monte Carlo approaches for parametric sensitivities in stochastic chemical systems. The efficiency of a Monte Carlo approach depends in part on the variance of the estimator. It has been numerically observed that in several examples, that the finite difference (FD) and the (regularized) pathwise differentiation (RPD) methods tend to have lower variance than the Girsanov Tranformation (GT) estimator while the latter has the advantage of being unbiased. We present a theoretical explanation in terms of system volume asymptotics for the larger variance of the GT approach when compared to the FD methods. We also present an analysis of efficiency of the FD and GT methods in terms of desired error and system volume.
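A finite-difference sensitivity estimator with common random numbers can be illustrated on a toy example (mine, not from the talk): for a Poisson (pure-birth) process with rate k observed up to time T, E[X_T] = kT, so the true sensitivity with respect to k is T. Driving both rate values with the same exponentials keeps the FD variance manageable:

```python
import random

def count_arrivals(rate, t_max, exps):
    """Arrivals of a Poisson process of the given rate, driven by a shared
    list of unit exponentials (the common random numbers)."""
    t, n = 0.0, 0
    for e in exps:
        t += e / rate
        if t > t_max:
            break
        n += 1
    return n

rng = random.Random(1)
k, h, T = 1.0, 0.1, 5.0
fd_samples = []
for _ in range(10000):
    exps = [rng.expovariate(1.0) for _ in range(60)]  # enough draws per path
    fd = (count_arrivals(k + h, T, exps) - count_arrivals(k, T, exps)) / h
    fd_samples.append(fd)
est = sum(fd_samples) / len(fd_samples)
assert abs(est - T) < 0.3   # true sensitivity d/dk E[X_T] = T
```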
SDBW03 5th April 2016
11:00 to 11:45
David Doty "No We Can't": Impossibility of efficient leader election by chemical reactions
Co-author: David Soloveichik (University of Texas, Austin)
Suppose a chemical system requires a single molecule of a certain species $L$. Preparing a solution with just a single copy of $L$ is a difficult task to achieve with imprecise pipettors. Could we engineer artificial reactions (a chemical election algorithm, so to speak) that whittle down an initially large count of $L$ to 1? Yes, with the reaction $L+L \to L+F$: whenever two candidate leaders encounter each other, one drops out of the race. In volume $v$ convergence to a single $L$ requires expected time proportional to $v$; the final reaction --- two lone $L$'s seeking each other in the vast expanse of volume $v$ --- dominates the whole expected time.
One might hope that more cleverly designed reactions could elect a leader more quickly. We dash this hope: $L+L \to L+F$, despite its sloth, is the fastest chemical algorithm for leader election there is (subject to some reasonable constraints on the reactions). The techniques generalize to establish lower bounds on the time required to do other computational tasks, such as computing which of two species $X$ or $Y$ holds an initial majority.
Democracy works... but it's painstakingly slow.
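The "final reaction dominates" claim can be checked arithmetically. Assuming mass-action kinetics in which k leaders meet at total rate k(k−1)/(2v) (an assumption of this sketch, not a statement from the abstract), the expected time spent with k leaders is 2v/(k(k−1)), and the sum telescopes:

```python
def expected_election_time(n, v):
    """Expected time for L+L -> L+F to go from n leaders down to 1,
    assuming the pair-meeting rate with k leaders is k*(k-1)/(2*v)."""
    return sum(2 * v / (k * (k - 1)) for k in range(2, n + 1))

v, n = 100.0, 1000
total = expected_election_time(n, v)
last_step = 2 * v / (2 * 1)          # waiting time for the final two leaders
# The telescoping sum gives 2*v*(1 - 1/n), so the total is < 2v and the
# final step (= v) accounts for at least half of it.
assert abs(total - 2 * v * (1 - 1 / n)) < 1e-6
assert last_step / total >= 0.5
```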
SDBW03 5th April 2016
11:45 to 12:30
Jay Newby First-passage time to clear the way for receptor-ligand binding in a crowded environment
I will present theoretical support for a hypothesis about cell-cell contact, which plays a critical role in immune function. A fundamental question for all cell-cell interfaces is how receptors and ligands come into contact, despite being separated by large molecules, the extracellular fluid, and other structures in the glycocalyx. The cell membrane is a crowded domain filled with large glycoproteins that impair interactions between smaller pairs of molecules, such as the T cell receptor and its ligand, which is a key step in immunological information processing and decision-making. A first passage time problem allows us to gauge whether a reaction zone can be cleared of large molecules through passive diffusion on biologically relevant timescales. I combine numerical and asymptotic approaches to obtain a complete picture of the first passage time, which shows that passive diffusion alone would take far too long to account for experimentally observed cell-cell contact formation times. The result suggests that cell-cell contact formation may involve previously unknown active mechanical processes.
SDBW03 5th April 2016
14:00 to 14:45
John Albeck Linking dynamic signaling events within the same cell
In intracellular signaling pathways, biochemical activation events are transmitted from one node within the signaling network to another. Recent work examining the information capacity of signaling pathways has concluded that most signaling pathways have limited abilities to resolve different strengths of inputs. However, these studies are based on data in which only a single signal is measured in each cell in response to a given stimulus, with the limitation that transmission of a signal from one signaling node to another cannot be directly observed. Other published data suggest that single cells may have a much higher capacity to transmit quantitative information, which is obscured by population heterogeneity. To better understand the properties of information transmission through biochemical cascades in individual cells, we have developed a panel of live-cell reporters to monitor multiple signaling events in the cell proliferation and growth network (CPGN). These reporters include activity biosensors for the kinases ERK, Akt, mTOR, and AMPK, and CRISPR-based reporters for ERK target gene expression. Experimental analysis with these tools reveals the temporal and quantitative linkage properties between nodes of the CPGN. I will discuss two studies currently underway in our lab. The first examines how the CPGN manages the interplay between ATP-producing and ATP-consuming processes during cell proliferation; we find that loss of Akt signaling results in unstable levels of ATP and NADH in proliferating cells. The second project focuses on how variations in amplitude and duration of ERK activity control the expression of the target gene Fra-1, which is involved in metastasis; here, we show that cancer therapeutics directed at inhibiting this pathway create strikingly different kinetics of ERK activity at the single-cell level, with distinct effects on Fra-1 expression.
SDBW03 5th April 2016
14:45 to 15:30
Aleksandra Walczak tba
SDBW03 5th April 2016
16:00 to 16:45
Vahid Shahrezaei Inference of size dependence of transcription parameters from single cell data using multi-scale models of gene expression
Co-authors: Anthony Bowman (Imperial College London), Xi-Ming Sun (MRC CSC), Samuel Marguerat (MRC CSC)
Gene expression is affected by both random timing of reactions (intrinsic noise) and interaction with global stochastic systems in the cells (extrinsic noise). A challenge in inferring parameters of gene expression using models of stochastic gene expression is that these models usually only include intrinsic noise. However, experimental distributions of transcripts are strongly influenced by extrinsic effects including cell cycle and cell division. Here, we present a multi-scale approach in stochastic gene expression to deal with this problem. We apply our methodology to data obtained using the single molecule FISH technique in fission yeast. The data suggest that cell size influences transcription parameters. We use Approximate Bayesian Computation (ABC) along with sequential Monte Carlo to infer the dependence of gene expression parameters on cell size. Our analysis reveals a linear increase of transcription burst size during the cell cycle.
SDBW03 6th April 2016
09:00 to 09:45
Omer Dushek Cellular signalling in T cells is captured by a tractable modular phenotypic model
T cells initiate adaptive immune responses when their T cell antigen receptors (TCRs) recognise antigenic peptides bound to major histocompatibility complexes (pMHC). The binding of pMHC ligands to the TCR can trigger a large signal transduction cascade leading to T cell activation, as measured by the secretion of effector cytokines/chemokines. Although the signalling proteins involved have been identified, it is still not understood how the cellular signalling network that they form converts the dose and affinity of pMHC into T cell activation. Here we use a holistic method to infer the signalling architecture from T cell activation data generated by stimulating T cells with a 100,000-fold variation in pMHC affinity/dose. We observe bell-shaped dose-response curves and a different optimal pMHC affinity at different pMHC doses. We show that this can be explained by a unique, tractable, and modular phenotypic model of signalling that includes kinetic proofreading with limited signalling coupled to incoherent feedforward but not negative feedback. The work provides a complementary approach for studying cellular signalling that does not require full details of biochemical pathways.
SDBW03 6th April 2016
09:45 to 10:30
Eric Deeds tba
SDBW03 6th April 2016
11:00 to 11:45
Carlos Lopez Intracellular signaling processes and cell decisions using stochastic algorithms
Cancer cells within a tumor environment exhibit a complex and adaptive nature whereby genetically and epigenetically distinct subpopulations compete for resources. The probabilistic nature of gene expression and intracellular molecular interactions confer a significant amount of stochasticity in cell fate decisions. This cellular heterogeneity is believed to underlie cases of cancer recurrence, acquired drug resistance, and so-called exceptional responders. From a population dynamics perspective, clonal heterogeneity and cell-fate stochasticity are distinct sources of noise, the former arising from genetic mutations and/or epigenetic transitions, extrinsic to the fate decision signaling pathways and the latter being intrinsic to biochemical reaction networks. Here, we present our results and ongoing work of a kinetic modeling study based on experimental time course data for EGFR-addicted non-small cell lung cancer (PC9) cells in both parental and isolated sublines. When PC9 cells are treated with erlotinib, an EGFR inhibitor, a complex array of division and death cell decisions arise within a given population in response to treatment. Although deterministic (ODE) simulations capture the effects of clonal heterogeneity and describe the overall trends of experimentally treated tumor cell populations, these are not capable of explaining the observed variability of drug response trajectories, including response magnitude and time to rebound. Our stochastic simulations, instead, capture the effects of intrinsically noisy cell fate decisions that cause significant variability in cell population trajectories. These findings indicate that stochastic simulations are necessary to distinguish the contribution of extrinsic (clonal heterogeneity) and intrinsic (cell fate decisions) noise to understand the variability of cancer-cell response to treatment. Furthermore, they suggest that tumors with distinct clonal structures are expected to behave differently in response to treatment.
SDBW03 6th April 2016
11:45 to 12:30
Tomas Vejchodsky Tensor methods for higher-dimensional Fokker-Planck equation
In order to analyse stochastic chemical systems, we solve the corresponding Fokker-Planck equation numerically. The dimension of this problem corresponds to the number of chemical species, and standard numerical methods fail for systems with as few as four chemical species due to the so-called curse of dimensionality. Using tensor methods we succeeded in solving realistic problems in up to seven dimensions and an academic example of a reaction chain of 20 chemical species.
In the talk we will present the Fokker-Planck equation and discuss its well-posedness. We will describe its discretization based on the finite difference method and we will explain the curse of dimensionality. Then we present the main idea of tensor methods. We will identify several types of errors of the presented numerical scheme, namely the modelling error, the domain truncation error, the discretization error, the tensor truncation error, and the algebraic error. We will present the idea that equilibrating these errors based on a posteriori error estimates yields considerable savings in computational time.
SDBW03 7th April 2016
09:00 to 09:45
Pieter Rein ten Wolde Fundamental limits to transcriptional regulatory control
Gene expression is typically regulated by gene regulatory proteins that bind to the DNA. Experiments have shown that these proteins find their DNA target site via a combination of 3D diffusion in the cytoplasm and 1D diffusion along the DNA. This stochastic transport sets a fundamental limit on the precision of gene regulation. We derive this limit analytically and show by particle-based GFRD simulations that our expression is highly accurate under biologically relevant conditions.
SDBW03 7th April 2016
09:45 to 10:30
Andrew Duncan Hybrid modelling of stochastic chemical kinetics
Co-authors: Radek Erban (University of Oxford), Kostantinos Zygalakis (University of Edinburgh)
It is well known that stochasticity can play a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be adequately modelled by Markov processes and, for such systems, methods such as Gillespie's algorithm are typically employed. While such schemes are easy to implement and are exact, the computational cost of simulating such systems can become prohibitive as the frequency of the reaction events increases. This has motivated numerous coarse-grained schemes, where the "fast" reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation for systems where all reactants are present in large concentrations, the approximation breaks down when the fast chemical species exist in small concentrations, giving rise to significant errors in the simulation. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as computing observables of cell cycle models. In this talk, we present a hybrid scheme for simulating well-mixed stochastic kinetics, using Gillespie-type dynamics to simulate the network in regions of low reactant concentration, and chemical Langevin dynamics when the concentrations of all species are large. These two regimes are coupled via an intermediate region in which a "blended" jump-diffusion model is introduced. Examples of gene regulatory networks involving reactions occurring at multiple scales, as well as a cell-cycle model, are simulated using the exact and hybrid schemes and compared, both in terms of weak error and computational cost.
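For context on the exact regime the abstract contrasts with the Langevin regime, a minimal sketch of Gillespie's direct method for a hypothetical birth-death network (illustrative rates; this is not the authors' hybrid scheme) could look like:

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Direct-method SSA for the network 0 -> X (rate k_birth),
    X -> 0 (propensity k_death * x). Returns reaction times and copy numbers."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x   # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)        # exponentially distributed waiting time
        if rng.random() * a0 < a1:      # pick a reaction proportionally to its propensity
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return times, states

times, states = gillespie_birth_death(k_birth=10.0, k_death=1.0, x0=0, t_end=50.0)
```

The cost of this scheme scales with the number of reaction firings, which is precisely what motivates the hybrid approach described in the abstract.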
SDBW03 7th April 2016
11:00 to 11:45
Kevin Burrage Sampling Methods for Exploring Between Subject Variability in Cardiac Electrophysiology Experiments
Co-authors: C. C. Drovandi (QUT), N. Cusimano (QUT), S. Psaltis (QUT), A. N. Pettitt (QUT), P. Burrage (QUT)
Between-subject and within-subject variability is ubiquitous in biology and physiology and understanding and dealing with this is one of the biggest challenges in medicine. At the same time it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within a high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block through an in-depth investigation via the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
SDBW03 7th April 2016
11:45 to 12:30
Vikram Sunkara Insights into the dynamics of Hybrid Methods through a range of biological examples. A hands on approach
Biological systems can produce complex behaviour from a multitude of simple interactions. Capturing such biological phenomena mathematically for prediction and inference is an active area of research. Computing systems in which the interacting components are inherently stochastic demands large amounts of computational power. Recently, splitting the dynamics of the system into deterministic and stochastic components has emerged as a new strategy for computing biological networks. This hybrid strategy drastically reduces the number of equations to solve; however, the new equations are naturally stiff and nonlinear. Hybrid models are a strong candidate as a numerical method for probing large biological networks with intrinsic stochasticity. In this talk we will take on a new mathematical and numerical perspective of hybrid models. Through many biological examples, we will aim to gain insight into the benefits and stumbling blocks of the hybrid framework.
SDBW03 7th April 2016
14:00 to 14:45
Carmen Molina-Paris A stochastic story of two receptors and two ligands
In this talk, I will introduce the role of the co-receptors CD28 and CTLA-4 in the immune system. Both CD28 and CTLA-4 molecules are expressed on the membrane of T cells and can bind CD80 and CD86 ligand molecules, expressed on the membrane of antigen presenting cells. Classical immunology has identified CD28 co-receptor as enhancing the signal received by T cells from their T cell receptors (TCRs), and CTLA-4 as suppressing TCR signals. New experimental work is supporting a different role for the CTLA-4 molecule. In this talk, I will describe work in progress by our group, to model as a multi-variate stochastic process the system of two receptors and two ligands.
SDBW03 7th April 2016
14:45 to 15:30
Ankit Gupta Stability properties of stochastic biomolecular reaction networks: Analysis and Applications
Co-author: Mustafa Khammash (ETH Zurich)
The internal dynamics of a living cell is generally very noisy. An important source of this noise is the intermittency of reactions among various molecular species in the cell. The role of this noise is commonly studied using stochastic models for reaction networks, where the dynamics is described by a continuous-time Markov chain whose states represent the molecular counts of various species. In this talk we will discuss how the long-term behavior of such Markov chains can be assessed using a blend of ideas from probability theory, linear algebra and optimisation theory. In particular we will describe how many biomolecular networks can be viewed as generalised birth-death networks, which leads to a simple computational framework for determining their stability properties such as ergodicity and convergence of moments. We demonstrate the wide applicability of our framework using many examples from Systems and Synthetic Biology. We also discuss how our results can help in analysing regulatory circuits within cells and in understanding the entrainment properties of noisy biomolecular oscillators.
SDBW03 7th April 2016
16:00 to 16:45
Mustafa Khammash Subtle is the noise, but malicious it is not: dynamic exploits of intracellular noise
Co-authors: Ankit Gupta (ETH Zürich), Corentin Briat (ETH Zürich)
Using homeostatic regulation and oscillatory entrainment as examples, I demonstrate how novel and beneficial functional features can emerge from exquisite interactions between intracellular noise and network dynamics. While it is well appreciated that negative feedback can be used to achieve homeostasis when networks behave deterministically, the effect of noise on their regulatory function is not understood. Combining ideas from probability and control theory, we have developed a theoretical framework for biological regulation that explicitly takes into account intracellular noise. Using this framework, I will introduce a new regulatory motif that exploits stochastic noise, using it to achieve precise regulation and perfect adaptation in scenarios where similar deterministic regulation fails. Next I propose a novel role of intracellular noise in the entrainment of decoupled biological oscillators. I will show that while intrinsic noise may inhibit oscillatory activity in individual oscillators, it can actually induce the entrainment of a population of such oscillators. Thus in both regulation and oscillatory entrainment, beneficial dynamic features exist not just in spite of the noise, but rather because of it.
SDBW03 8th April 2016
09:00 to 09:45
Yiannis Kaznessis Closure Scheme for Chemical Master Equations - Is the Gibbs entropy maximum for stochastic reaction networks at steady state?
Stochasticity is a defining feature of biochemical reaction networks, with molecular fluctuations influencing cell physiology. In principle, master probability equations completely govern the dynamic and steady state behavior of stochastic reaction networks. In practice, a solution had been elusive for decades, when there are second or higher order reactions. A large community of scientists has then reverted to merely sampling the probability distribution of biological networks with stochastic simulation algorithms. Consequently, master equations, for all their promise, have not inspired biological discovery.
We recently presented a closure scheme that solves chemical master equations of nonlinear reaction networks [1]. The zero-information closure (ZI-closure) scheme is founded on the observation that although higher order probability moments are not numerically negligible, they contain little information to reconstruct the master probability [2]. Higher order moments are then related to lower order ones by maximizing the entropy of the network. Using several examples, we show that moment-closure techniques may afford the quick and accurate calculation of steady-state distributions of arbitrary reaction networks.
With the ZI-closure scheme, the stability of the systems around steady states can be quantitatively assessed by computing eigenvalues of the moment Jacobian [3]. This is analogous to Lyapunov’s stability analysis of deterministic dynamics, and it paves the way for a stability theory and the design of controllers of stochastic reacting systems [4, 5].
In this seminar, we will present the ZI-closure scheme, the calculation of steady state probability distributions, and discuss the stability of stochastic systems.
1. Smadbeck P, Kaznessis YN. A closure scheme for chemical master equations. Proc Natl Acad Sci U S A. 2013 Aug 27;110(35):14261-5.
2. Smadbeck P, Kaznessis YN. Efficient moment matrix generation for arbitrary chemical networks, Chem Eng Sci, 20
SDBW03 8th April 2016
09:45 to 10:30
Darren Wilkinson Scalable algorithms for Markov process parameter inference
Inferring the parameters of continuous-time Markov process models using partial discrete-time observations is an important practical problem in many fields of scientific research. Such models are very often "intractable", in the sense that the transition kernel of the process cannot be described in closed form, and is difficult to approximate well. Nevertheless, it is often possible to forward simulate realisations of trajectories of the process using stochastic simulation. There have been a number of recent developments in the literature relevant to the parameter estimation problem, involving a mixture of approximate, sequential and Markov chain Monte Carlo methods. This talk will compare some of the different "likelihood free" algorithms that have been proposed, including sequential ABC and particle marginal Metropolis Hastings, paying particular attention to how well they scale with model complexity. Emphasis will be placed on the problem of Bayesian parameter inference for the rate constants of stochastic biochemical network models, using noisy, partial high-resolution time course data.
SDBW03 8th April 2016
11:00 to 11:45
Christian Ray Lineage as a conception of space in compartmental stochastic processes across cellular populations
Co-author: Arnab Bandyopadhyay (University of Kansas)
Cytoplasmic regulatory networks often approximate well-mixed reaction kinetics in single cells, but with variability from cell to cell. As a result, inheritance dynamics and kin correlations have been implicated in effects on cell cycle, regulatory networks, and modulation of population growth rate. Based on an experimental result in our lab suggesting lineage correlations in bacterial growth arrest, we developed a cellular stochastic simulation framework to analyse the role of lineage in bacterial cells regulating growth rate by means of an intracellular molecular network. The simulation framework thus models both intrinsic and inherited noise sources while maintaining lineage data between cell agents assigned individual unique identifiers.
Our initial application of the framework demonstrates the role of lineage in the probability of bacterial growth arrest controlled by an endogenous toxin from a toxin-antitoxin system. These systems have tight binding between toxin and antitoxin, so that there is a discrete critical threshold in the toxin:antitoxin ratio below which a cell is essentially toxin-free and growth is unrestricted, and above which toxin rapidly slows the growth rate. The subset of high-toxin cells crossing into the growth arrested state is associated with antibiotic persistence. Our implementation of a simple toxin-antitoxin system in the simulation framework revealed the statistical dependence of growth arrest on cellular lineage: after several generations of growth, the probability of cellular growth arrest began to depend on lineage distance. Clusters of closely related cell agents had a high probability of transitioning into growth arrest, while the rest of the lineage continued to grow without restriction.
We consider various quantities of interest in multiscale lineage simulations, and conclude that growth transitions in a cellular colony cannot be fully understood without quantitative knowledge of its lineage.
SDBW03 8th April 2016
11:45 to 12:30
Ramon Grima The system-size expansion of the chemical master equation: developments in the past 5 years
Co-author: Philipp Thomas (Imperial College London)
The system-size expansion of the master equation, first developed by van Kampen, is a well known approximation technique for deterministically monostable systems. Its use has been mostly restricted to the lowest order terms of this expansion which lead to the deterministic rate equations and the linear-noise approximation (LNA). I will here describe recent developments concerning the system-size expansion, including (i) its use to obtain a general non-Gaussian expression for the probability distribution solution of the chemical master equation; (ii) clarification of the meaning of higher-order terms beyond the LNA and their use in stochastic models of intracellular biochemistry; (iii) the convergence of the expansion, at a fixed system-size, as one considers an increasing number of terms; (iv) extension of the expansion to describe gene-regulatory systems which exhibit noise-induced multimodality; (v) the conditions under which the LNA is exact up to second-order moments; (vi) the relationship between the system-size expansion, the chemical Fokker-Planck equation and moment-closure approximations.
## Introduction
Treadmills have been widely used for locomotion studies by virtue of their significant advantages: the speed can be controlled and the experiment can be done in a laboratory space, enabling simultaneous use of other stationary equipment. However, those who have walked or run on a treadmill for a sufficient period of time recognize the difference between treadmill and overground locomotion. Considering that the eventual goal of most locomotion studies is to provide better understanding or assistance to our daily, overground walking, the noticeable difference between treadmill and overground locomotion raises important questions about the validity of applying the treadmill study results to overground activities. A number of kinesiology and biomechanics studies have addressed this critical issue by quantifying differences in various aspects, including temporal gait parameters, kinematics, kinetics, muscle activation, and stability1,2,3,4,5,6,7,8,9,10.
Though detectable differences between treadmill and overground locomotion in various measures have been reported, understanding of the sources of the difference is limited. The main sources proposed by previous studies include the difference in air resistance, visual information, and psychological effects, including fear10,11. Most treadmill studies are based on the assumption that the coordinate system attached to the treadmill belt is close to an inertial frame of reference and observed differences emerge from causes other than the non-ideal behavior of a treadmill11. However, the linear momentum principle and the basic dynamics of a feedback control system clearly inform us that no treadmill can serve as an inertial frame of reference. The runner or walker exerts significant time-varying force on the treadmill belt; the power of the electrical motor is limited; and the controller has a finite sampling rate, hence a finite time delay exists in the control loop. Consequently, the treadmill belt can be neither an ideal “flow source,” nor an inertial frame of reference.
In this study, we demonstrate that the error of the treadmill speed depends on gait phase, the weight of a subject, and locomotion speed. The mechanics of simple walking models provided an insight into the relation between the external perturbation to the treadmill belt and basic parameters: speed and weight. In our experiment, we directly measured the ground reaction force and the treadmill belt speed. The results verified the model prediction and showed that the speed variation clearly depended on the speed of locomotion as well as the weight of a subject and gait phase. This finding differs from the results of the previous study by Savelberg et al.12 but is consistent with the prediction based on mechanics.
## Predictions Based on Mechanics
A treadmill belt is typically driven by an electrical motor, and the belt speed is regulated by a controller. However, the power of the motor and the sampling rate are not infinite. Consequently, by Newton’s law, the force that the runner or walker applies to the belt causes acceleration or deceleration of the belt, at least until the next update of the control loop. In human locomotion, the ground reaction force varies periodically depending on the locomotion cycle. Therefore, the inevitable acceleration or deceleration of the treadmill belt (i.e., the error of the treadmill speed) should depend on the gait phase. In particular, the large ground reaction force around heel-strike (HS) and ankle push-off just before toe-off (TO) suggests that the speed variation of a treadmill belt will be more evident around those phases.
To understand the mechanical parameters that affect the treadmill belt speed, we used two simplified walking models: one presented by Ahn and Hogan14,15, and the other presented by Geyer, Seyfarth, and Blickhan16. The first model has rigid legs, whereas the second one has compliant springy legs. In both models, a point mass moves in a vertical plane under the influence of gravity. All numerical simulations and analyses were implemented in Matlab (Mathworks Inc., Natick, MA, USA). Numerical integration by the Runge-Kutta method was performed with a fixed step size of 10⁻⁶. The validity of the numerical simulation was checked by repeating simulations with a tenfold smaller step size.
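The authors' simulations were implemented in Matlab; as an illustrative stand-in, a fixed-step classical Runge-Kutta (RK4) integrator of the kind described can be sketched in Python (the test system and the coarser step size are chosen only to keep the example fast, not taken from the paper):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Sanity check on y'' = -y (solution cos t), integrated from t = 0 to t = 1.
f = lambda t, y: [y[1], -y[0]]
y, t, h = [1.0, 0.0], 0.0, 1e-3
for _ in range(1000):
    y = rk4_step(f, t, y, h)
    t += h
```

Halving or tenfold-reducing `h` and comparing results, as the authors did, is a simple convergence check for this kind of fixed-step scheme.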
### A walking model with rigid legs
A schematic of the model defining its variables and sequential configurations during one stride are shown in Fig. 1. The swing leg can be moved instantaneously in front of the mass; scuffing is ignored. Each leg has a hip and an ankle. Ankle actuation provides propulsion, whereas the hip joint is assumed to be a frictionless pivot, which cannot apply any torque. However, the angle between the legs is always reset to 2θ0 at the beginning of a step. Due to the assumption of massless legs, resetting the angle between the legs does not consume any energy. The ankle torque during double stance is determined as
$$T=k(\mu -\psi ),$$
(1)
where T is the plantar ankle torque at the trailing ankle, ψ is the ankle angle that is positive towards plantar flexion, and μ is the maximal plantar flexion angle.
The configuration of the model at the foot–ground contact is explicitly shown in Fig. 1c. At the collision of the leading foot (Frame 1), the velocity of the point mass changes instantaneously, and its direction changes by 2θ0 (Frame 2). By the angular momentum principle about the heel of the leading leg B,
$$\frac{d}{dt}\overrightarrow{{H}_{B}}+\overrightarrow{{v}_{B}}\times \overrightarrow{P}=\overrightarrow{{M}_{B}}=\overrightarrow{{r}_{BC}}\times m\overrightarrow{g},$$
(2)
where $$\overrightarrow{{H}_{B}}$$, $$\overrightarrow{{v}_{B}}$$, $$\overrightarrow{P}$$, $$\overrightarrow{{M}_{B}}$$, and $$\overrightarrow{{r}_{BC}}$$ denote the angular momentum about B, the velocity of point B, the linear momentum of the model, the external torque about B, and the position vector pointing from B to C, respectively. Frame 1 and Frame 2 of Fig. 1c occur at t− and t+ respectively; the collision between the foot and the ground occurs during the infinitesimal time interval between t− and t+.
$$\overrightarrow{{H}_{B}}({t}_{+})-\overrightarrow{{H}_{B}}({t}_{-})={\int }_{{t}_{-}}^{{t}_{+}}(\overrightarrow{{r}_{BC}}\times m\overrightarrow{g})dt.$$
(3)
The right-hand side of Eq. (3) equals zero because the time gap between t− and t+ is infinitesimal, and the integrated term is not impulsive. Therefore, the angular momentum about B is conserved during the collision, and
$$\overrightarrow{{H}_{B}}({t}_{+})=\overrightarrow{{H}_{B}}({t}_{-}).$$
(4)
Though this conservation is a result of the dynamics of a highly simplified walking model, it is noteworthy that angular momentum is approximately conserved during actual human locomotion as well17. Consequently, to maintain angular momentum with the direction of the velocity changed by 2θ0, the magnitude of the velocity decreases, and the ratio of the speed of the mass right after the collision to the speed just before the collision becomes cos(2θ0);
$$|\overrightarrow{{v}_{C}}({t}_{+})|=\,\cos (2{\theta }_{0})|\overrightarrow{{v}_{C}}({t}_{-})|.$$
(5)
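The geometry behind Eq. (5) can be checked numerically by projecting the pre-collision velocity onto the direction allowed after the collision; the angle and speed below are illustrative values, not measurements from the study:

```python
import math

theta0 = math.radians(15)   # half of the inter-leg angle (illustrative)
v_minus = 1.3               # speed just before heel-strike, m/s (illustrative)

# Before the collision the velocity is perpendicular to the trailing leg;
# conservation of angular momentum about the new contact point B keeps only
# the component perpendicular to the leading leg, which is rotated by 2*theta0.
vx, vy = v_minus * math.cos(theta0), v_minus * math.sin(theta0)
px, py = math.cos(theta0), -math.sin(theta0)   # unit vector perpendicular to the new leg
v_plus = vx * px + vy * py                      # equals v_minus * cos(2*theta0)
```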
The ground reaction force generates impulse and changes the linear momentum of the model. The change of the momentum has the magnitude of $$m|\overrightarrow{{v}_{C}}({t}_{-})|\sin (2{\theta }_{0})$$ and the direction of $$\overrightarrow{{r}_{BC}}$$. In other words,
$$m\overrightarrow{{v}_{C}}({t}_{+})-m\overrightarrow{{v}_{C}}({t}_{-})=m|\overrightarrow{{v}_{C}}({t}_{-})|\sin (2{\theta }_{0})\overrightarrow{{r}_{BC}}/|\overrightarrow{{r}_{BC}}|.$$
(6)
By Newton’s third law, the model exerts an impulse equal in magnitude and opposite in direction on the ground or the treadmill belt, which is
$${\int }_{{t}_{-}}^{{t}_{+}}\overrightarrow{F}dt=-\,m|\overrightarrow{{v}_{C}}({t}_{-})|\sin (2{\theta }_{0})\overrightarrow{{r}_{BC}}/|\overrightarrow{{r}_{BC}}|.$$
(7)
Comparing the horizontal components of both sides of Eq. (7),
$${\int }_{{t}_{-}}^{{t}_{+}}{F}_{x}dt=m|\overrightarrow{{v}_{C}}({t}_{-})|\sin (2{\theta }_{0})\sin ({\theta }_{0}),$$
(8)
where Fx is the magnitude of the horizontal force exerted on the treadmill by the model. By Newton’s second law, this quantity directly contributes to the belt speed variation. For typical human walking, a speed increase induces an angle increase for the leading leg, θ018,19. Therefore, Eq. (8) indicates that the impulse exerted on the treadmill belt and the resulting deceleration should be more prominent as the mass or the speed of the mass before HS increases.
Though the speed of the mass before HS does not have to be exactly proportional to the average speed of the walker, which can be approximated as the controlled belt speed, we can reasonably expect a monotonic relation between the average locomotion speed and the speed of the mass before HS. The simulation result confirmed this prediction: the average speed of the model strongly, and almost linearly, depends on the speed before HS (Fig. 2). Accordingly, Eq. (8) implies that the impulse exerted on the treadmill belt increases as the mass or the average walking speed increases.
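The scaling implied by Eq. (8) can be made concrete with a short sketch; the mass, speed, and angle values here are illustrative assumptions, not data from the experiment:

```python
import math

def hs_impulse(m, v_minus, theta0):
    """Horizontal impulse exerted on the belt at heel-strike, per Eq. (8):
    J = m * |v-| * sin(2*theta0) * sin(theta0)."""
    return m * v_minus * math.sin(2 * theta0) * math.sin(theta0)

theta0 = math.radians(15)                     # illustrative leading-leg angle
J_light_slow = hs_impulse(60.0, 1.0, theta0)  # 60 kg walker at 1.0 m/s
J_heavy_fast = hs_impulse(90.0, 1.5, theta0)  # 90 kg walker at 1.5 m/s
```

For a fixed θ0 the impulse grows linearly in both mass and pre-collision speed; since θ0 itself increases with walking speed, the speed dependence is in fact stronger than linear.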
### A walking model with springy legs
The sequential configurations during one step of the model by Geyer et al. are shown in Fig. 3. The model starts at the apex of a single stance phase. The swing leg remains at a fixed angle of attack α0 until it lands on the ground and initiates the double stance phase. As the point mass moves forward, the trailing leg reaches its rest length and terminates the double stance phase. During the following single stance phase, the point mass reaches the apex again, completing one step or half stride. This simple model successfully encapsulates the ground reaction force patterns of bipedal walking16.
The model can yield stable periodic gaits whose kinematics depend on the parameter values. With the constant rest length of the leg l0 and the angle of attack, the ground reaction force pattern of the stable periodic gait changes depending on the mass m and the non-dimensional stiffness $$\tilde{k}=k{l}_{0}/mg$$, where k is the stiffness of the springy legs. Though the model can yield asymmetric stable gaits as well, we confined our interest to symmetric gaits in which the ground reaction force pattern is similar to that of human walking.
Despite the simplicity of the model, analytical treatment is challenging, and the horizontal component of the impulse that the model exerts on the ground or the treadmill belt must be obtained by numerical integration. By the linear momentum principle, the impulse, i.e., the time integral of the ground reaction force, equals the change in the linear momentum of the model:
$${\int }_{{t}_{1}}^{{t}_{2}}GR{F}_{x}dt=m\dot{x}({t}_{2})-m\dot{x}({t}_{1}),$$
(9)
where t1 and t2 are arbitrary times, x is the position of the point mass in the horizontal direction, and GRFx is the magnitude of the horizontal ground reaction force exerted on the model by the ground or the treadmill. By Newton’s third law, the impulse exerted on the ground or the treadmill has the opposite sign:
$${\int }_{{t}_{1}}^{{t}_{2}}{F}_{x}dt=m\dot{x}({t}_{1})-m\dot{x}({t}_{2}),$$
(10)
where Fx is the force that the model exerts on the treadmill. Fx is positive when the model decelerates, as in the early phase of the double stance phase, and becomes negative when the model accelerates, as in the terminal stance phase. The impulse keeps increasing as long as Fx is positive, and reaches its maximum when Fx becomes zero. The amount of impulse begins to decrease as soon as Fx becomes negative, i.e., as soon as the model begins to accelerate. In the numerical simulations, we calculated the maximum impulse exerted on the ground or the treadmill by finding the maximum difference in the horizontal velocity. The non-dimensional stiffness $$\tilde{k}$$ changes the average speed of the periodic gait, so we evaluated the maximum impulse across different values of $$\tilde{k}$$ and m to find how the impulse depends on the walking speed and the mass of the model. The springy-legged model by Geyer et al. also predicts that the impulse exerted on the treadmill by the walker increases as the walking speed or the mass increases (Fig. 4).
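As an illustration (not part of the original study), the impulse calculation of Eq. (10) can be sketched numerically. The Python sketch below uses a synthetic, purely illustrative horizontal force trace and invented mass and speed values: it integrates Fx cumulatively with the trapezoidal rule and confirms that the running maximum of the impulse equals the maximum drop in horizontal velocity.

```python
import numpy as np

# Illustrative horizontal force on the belt over one step (not model output):
# positive = braking after heel-strike, negative = propulsion near toe-off.
t = np.linspace(0.0, 0.6, 601)            # time, s
Fx = 80.0 * np.sin(2 * np.pi * t / 0.6)   # force, N

m = 70.0                                  # assumed mass of the walker, kg
v0 = 1.2                                  # assumed speed at t = 0, m/s

# Cumulative impulse on the belt (trapezoidal rule); by Eq. (10) this
# equals m * (xdot(t1) - xdot(t2)).
impulse = np.concatenate(
    ([0.0], np.cumsum(0.5 * (Fx[1:] + Fx[:-1]) * np.diff(t))))
max_impulse = impulse.max()

# The same maximum impulse follows from the maximum velocity difference.
xdot = v0 - impulse / m
assert np.isclose(m * (xdot.max() - xdot.min()), max_impulse)
```

This mirrors how the simulations can find the maximum impulse simply by scanning for the maximum difference in horizontal velocity, without tracking the force profile itself.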
To sum up, the mechanics of the two simple models predict that: 1) there is a difference between the commanded treadmill speed and the actual belt speed, and the difference varies with the phase of the locomotion cycle, and 2) the amount of the error increases with the weight of the walker, the average locomotion speed, or the commanded belt speed. We verified these predictions with experiments.
## Experimental Methods
### Subjects
Ten healthy, young men (age: 18–25, mean (SD): 21.4 (2.3); weight: 72.5–89.8 kg, 79.1 (5.3); height: 170.2–193.0 cm, 179.8 (8.7)) and ten healthy, young women (age: 18–23, 20.8 (2.1); weight: 50.8–77.3 kg, 62.8 (7.5); height: 154.9–172.7 cm, 163.8 (4.5)), with the ability to walk comfortably on a treadmill without assistance, participated in this study. All subjects were shown the functionality of the treadmill and its emergency shut-off capabilities. All aspects of this study conformed to the principles and guidelines described in the Declaration of Helsinki, and the Institutional Review Board of Arizona State University approved this study. Subjects provided informed, written consent prior to participation.
### Experimental setup and protocol
A motion capture system (VICON Bonita 10 System, UK) was used to directly measure the position of the belt and calculate its speed. Eight infrared cameras were used to track the positions of passive reflective tape markers (in sets of three) attached along the side of the belt. The tape was thin enough (thickness: 0.015 cm; 3M Scotchlite 7610 Reflective, MN, USA) not to interfere with belt movement. Three distinct triangles, in the shape of an equilateral, an isosceles, and a right triangle, were used to effectively calculate the speed of the belt throughout the entire gait cycle. Marker sets were positioned such that the cameras could capture at least one marker set at any instant of the gait cycle. Treadmill belt speed data, calculated by differentiating the position data, and ground reaction force data were filtered using a 4th-order Butterworth filter with cut-off frequencies of 10 Hz and 20 Hz, respectively. Ground reaction forces in the AP, ML, and vertical directions were captured at 1 kHz, low-pass filtered, and down-sampled to match the motion capture data, sampled at 100 Hz. Following conventional notation, we defined the AP, ML, and vertical directions as x, y, and z, respectively.
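As a rough sketch of this processing pipeline (not the study's actual code), the filtering and down-sampling steps could look like the following in Python with NumPy/SciPy. The signals are synthetic stand-ins and all variable names are ours.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs_force, fs_mocap = 1000, 100                  # sampling rates (Hz), per the text

# Illustrative stand-in signals (not real data): 30 s of vertical force at
# 1 kHz and belt position at 100 Hz.
t_force = np.arange(0, 30, 1 / fs_force)
grf_z = 700 + 50 * np.sin(2 * np.pi * 1.8 * t_force) + rng.normal(0, 5, t_force.size)

# 4th-order zero-phase Butterworth low-pass, 20 Hz cut-off, for forces.
b, a = butter(4, 20 / (fs_force / 2), btype="low")
grf_z_filt = filtfilt(b, a, grf_z)

# Down-sample the 1 kHz force data to the 100 Hz motion-capture rate.
grf_z_100 = grf_z_filt[:: fs_force // fs_mocap]

# Belt speed: differentiate position, then apply a 10 Hz low-pass.
t_mocap = np.arange(0, 30, 1 / fs_mocap)
belt_pos = 1.0 * t_mocap + 0.01 * np.sin(2 * np.pi * 0.9 * t_mocap)
belt_speed = np.gradient(belt_pos, 1 / fs_mocap)
b2, a2 = butter(4, 10 / (fs_mocap / 2), btype="low")
belt_speed_filt = filtfilt(b2, a2, belt_speed)
```

`filtfilt` applies the filter forward and backward, giving zero phase lag, which matters when speed and force traces are later aligned to gait events.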
Each subject participated in three walking sessions with different treadmill speed settings: 0.8 (session 1), 1.0 (session 2), and 1.2 m/s (session 3). Each session lasted 5.5 minutes, and a break of at least 3 minutes was provided between sessions. To exclude the transient response as the speed changed from zero to the target speed, data from the first 0.5 minute of each session were not included in the data analysis.
### Data analysis
We estimated the instantaneous belt speed by differentiating the measured belt position with respect to time. Based on the moment of HS, identified by the significant increase in the vertical ground reaction force (>10 N), the velocity data were subdivided into multiple strides, and each stride interval was normalized in time to the 100% gait cycle. The velocity data, which changed over the normalized gait cycle, were then averaged to calculate the mean and standard deviation (SD) across strides. The mean and SD of the corresponding ground reaction force were calculated in the same way as for the belt speed. Based on each subject’s averaged data sets, we calculated the following three quantities for each speed condition: 1) the maximum speed change around HS, 2) the maximum speed change around TO, and 3) the overall speed variation throughout the gait cycle. We also quantified the local maximum GRFx around HS and TO for each speed condition to investigate the correlation with the maximum speed changes around HS and TO.
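A minimal version of this stride segmentation and time normalization might look like the following Python sketch. The 10 N threshold follows the text; the 101-point cycle grid and all names are our assumptions.

```python
import numpy as np

def segment_and_normalize(grf_z, speed, threshold=10.0, n_points=101):
    """Split a belt-speed trace into strides at heel-strikes (vertical GRF
    rising through `threshold` N), time-normalize each stride to the
    0-100% gait cycle, then average across strides."""
    above = grf_z > threshold
    # Indices where the vertical GRF rises through the threshold.
    heel_strikes = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    strides = []
    for start, stop in zip(heel_strikes[:-1], heel_strikes[1:]):
        cycle = np.linspace(0.0, 1.0, stop - start)   # fraction of the stride
        grid = np.linspace(0.0, 1.0, n_points)        # 0-100% in 1% steps
        strides.append(np.interp(grid, cycle, speed[start:stop]))
    strides = np.asarray(strides)
    return strides.mean(axis=0), strides.std(axis=0)
```

Because every stride is resampled onto the same 0–100% grid, the mean and SD can be taken point-wise across strides, exactly as described above.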
To test our main hypothesis that both walking speed and subject weight have significant effects on treadmill speed changes, we performed a separate statistical analysis for each of the three dependent variables: the maximum speed change around HS, the maximum speed change around TO, and the overall speed variation throughout the gait cycle. We ran an analysis of covariance (ANCOVA) with walking speed as the independent variable and weight as the covariate, or confounding variable. Following the ANCOVA, we performed post hoc comparisons with the Bonferroni correction. Further, we performed paired t-tests to investigate whether there was any significant difference in the amount of treadmill speed change between HS and TO. We performed the same statistical analysis for the maximum GRFx around HS and TO, which helped the interpretation of the statistical analysis of treadmill speed changes. In all statistical analyses, we checked the normality of the data by running Shapiro–Wilk tests and evaluated equal variance (homogeneity of variance) across data sets by running Levene’s tests. If the null hypothesis was rejected in Levene’s test, equal variance was not assumed in the subsequent statistical analyses. All statistical tests were performed using the SPSS statistical package at a significance level of p < 0.05.
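For readers without SPSS, the same ANCOVA can be sketched with NumPy/SciPy on synthetic data. The generative model, sample sizes, and effect sizes below are invented for illustration; note that the paper's F(2, 56) degrees of freedom match 60 observations and 4 fitted parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data set: 20 subjects x 3 speed conditions (60 observations).
weight = np.repeat(rng.normal(71, 9, 20), 3)            # kg, per subject
speed = np.tile([0.8, 1.0, 1.2], 20)                    # m/s, condition
# Assumed generative model for the dependent variable (speed change, m/s):
dv = 0.02 * speed + 0.0003 * weight + rng.normal(0, 0.004, 60)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    return r @ r

ones = np.ones_like(dv)
d10 = (speed == 1.0).astype(float)                      # dummy coding of the
d12 = (speed == 1.2).astype(float)                      # 3-level speed factor
X_full = np.column_stack([ones, d10, d12, weight])      # factor + covariate
X_red = np.column_stack([ones, weight])                 # covariate only

# ANCOVA F-test for the speed factor after controlling for weight.
rss_full, rss_red = rss(X_full, dv), rss(X_red, dv)
df_num, df_den = 2, dv.size - X_full.shape[1]           # -> (2, 56)
F = ((rss_red - rss_full) / df_num) / (rss_full / df_den)
p = stats.f.sf(F, df_num, df_den)

# Assumption checks analogous to those used in the paper:
resid = dv - X_full @ np.linalg.lstsq(X_full, dv, rcond=None)[0]
_, sw_p = stats.shapiro(resid)                          # normality
_, lev_p = stats.levene(dv[speed == 0.8], dv[speed == 1.0], dv[speed == 1.2])
```

The F statistic compares the residual sum of squares with and without the speed factor, which is exactly what the ANCOVA table reports.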
We also calculated the Pearson correlation coefficient (r) between treadmill speed changes and the corresponding GRFx. Finally, we performed a multiple linear-regression analysis to test whether walking speed and subject weight could account for treadmill speed change using the following model: maximum speed variation = intercept + cweight × weight + cwalking speed × walking speed, where the cweight and cwalking speed are the partial coefficients of regression.
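A compact sketch of the correlation and regression analyses follows (Python; the synthetic data and all coefficients are invented for illustration only, not taken from the study).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical observations.
weight = rng.normal(71, 9, 60)                    # kg
walking_speed = rng.choice([0.8, 1.0, 1.2], 60)   # m/s
grf_x = 15 + 0.9 * weight + 40 * walking_speed + rng.normal(0, 5, 60)   # N
speed_change = 0.0004 * grf_x + rng.normal(0, 0.002, 60)                # m/s

# Pearson correlation between GRFx and the belt speed change.
r, p_r = stats.pearsonr(grf_x, speed_change)

# Multiple linear regression:
#   speed change = intercept + c_weight * weight + c_speed * walking speed
X = np.column_stack([np.ones(60), weight, walking_speed])
beta = np.linalg.lstsq(X, speed_change, rcond=None)[0]
intercept, c_weight, c_speed = beta
pred = X @ beta
r2 = 1 - np.sum((speed_change - pred) ** 2) / np.sum((speed_change - speed_change.mean()) ** 2)
```

Here `r2` corresponds to the R2 values reported in the Results, and `c_weight` and `c_speed` to the partial regression coefficients.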
In addition to the statistical analysis, all measured and analyzed data were group-averaged and plotted for the following six different conditions: three speed conditions (0.8, 1.0, and 1.2 m/s) and two weight conditions (top 50% and bottom 50% weight group). We divided the data into two weight groups in order to clearly visualize the effect of subject weight on treadmill speed changes and ground reaction forces.
## Results
### Walking speed and subject weight had significant effects on treadmill speed changes
Treadmill speed was not constant throughout the gait cycle, and both walking speed and subject weight had significant influence on treadmill speed changes. In particular, the speed changed substantially around HS and TO, but the amount of speed change was greater around HS than around TO (Fig. 5). The difference between the HS and TO speed changes was 14.5, 16.1, and 18.4 mm/s for the treadmill speed settings of 0.8, 1.0, and 1.2 m/s, respectively. Paired t-tests confirmed that these differences were statistically significant (p < 0.001).
The amount of treadmill speed change around HS increased with increasing walking speed and subject weight (Fig. 6a). Statistical analysis showed that there was a significant effect of walking speed on the treadmill speed change around HS after controlling for the effect of weight, F(2, 56) = 12.02, p < 0.001. Post hoc comparisons also revealed that the amount of speed change increased with increasing walking speed (Table 1). In addition, the covariate, weight, was significantly related to the speed change around HS, F(1, 56) = 10.48, p = 0.002.
The amount of treadmill speed change around TO also increased with increasing walking speed, but the weight effect was not statistically significant (Fig. 6b). There was a significant effect of walking speed on the treadmill speed change around TO after controlling for the effect of weight, F(2, 56) = 23.10, p < 0.001. Post hoc comparisons also revealed that the amount of speed change increased with increasing walking speed (Table 1). However, weight was not significantly related to the speed change around TO, F(1, 56) = 0.23, p = 0.637.
The treadmill speed variation throughout the gait cycle increased with increasing walking speed and subject weight (Fig. 6c). A significant effect of walking speed on the speed variation was identified after controlling for the effect of weight, F(2, 56) = 20.79, p < 0.001. In addition, weight was significantly related to the speed variation, F(1, 56) = 9.37, p = 0.003.
### Treadmill speed changes were due to the force from the foot to the belt
As expected from the linear momentum principle, both walking speed and subject weight had significant influence on GRFx, which was highly correlated with prominent changes in treadmill speed. The local maximum of GRFx occurred around HS and TO (Fig. 5).
The local maximum GRFx around HS increased with increasing walking speed and subject weight (Fig. 7a). Statistical analysis showed that there was a significant effect of walking speed on the local maximum GRFx around HS after controlling for the effect of weight, F(2, 56) = 25.18, p < 0.001. Post hoc comparisons also revealed that the local maximum GRFx around HS increased with increasing walking speed (Table 2). In addition, the covariate, weight, was significantly related to the local maximum GRFx around HS, F(1, 56) = 17.30, p < 0.001.
The local maximum GRFx around TO also increased with increasing walking speed and subject weight (Fig. 7b). There was a significant effect of walking speed on the local maximum GRFx around TO after controlling for the effect of weight, F(2, 56) = 134.95, p < 0.001. Post hoc comparisons also revealed that the local maximum GRFx around TO increased with increasing walking speed (Table 2). In addition, weight was significantly related to the local maximum GRFx around TO, F(1, 56) = 37.49, p < 0.001.
Significant speed changes around HS and TO were highly correlated with the corresponding local maximum of the amplitude of GRFx (Fig. 8). The Pearson correlation coefficients between the two variables were r = 0.75 (p < 0.001) and r = 0.73 (p < 0.001) around HS and TO, respectively. The R2 of the multiple linear-regression analysis around HS was 0.46, and the coefficients in regression analysis were statistically significant (p = 0.002 and p < 0.001 for cweight and cwalking speed, respectively). The R2 of the multiple linear-regression analysis around TO was 0.47. The coefficient of regression for walking speed (cwalking speed) was statistically significant (p < 0.001) whereas the one for weight (cweight) was not (p = 0.707).
## Discussion
The difference between treadmill and overground locomotion has been widely reported. Many studies have suggested psychological causes of the difference, including visual information and fear. It is certainly plausible that these psychological effects contribute to differences in motor output such as kinematics, kinetics, muscle activation, and stability. However, unlike the quantifiable motor output, the suggested psychological causes can hardly be quantified. Consequently, their actual effect has not yet been systematically addressed. In this study, we focused on causes that can be quantified and controlled, i.e., the mechanical difference between treadmill and overground locomotion.
The experimental results are consistent with the prediction based on mechanics. In particular, contrary to the result of a previous study12, we demonstrated that the speed of locomotion significantly contributes to the belt speed error. Locomotion is accompanied by a patterned ground reaction force, and the reaction force from a foot to the treadmill belt, which depends on the speed as well as the weight, should inevitably affect the dynamics of the treadmill. In fact, statistical analyses showed that the effect of walking speed on the belt speed error was more prominent than the effect of weight. This is consistent with a prediction from the highly simplified walking model with ankle actuation. Though the model deliberately omits much physiological and anatomical realism, it successfully informs us that the amount of horizontal impulse exerted on the treadmill belt depends on the mass, the speed of the center of mass before HS, and the angle of the leading leg, θ0. We did not estimate the leg angle of the subjects, so we cannot report the actual angle of the leading leg from our experimental data. However, it is known that humans increase stride length to walk faster18,19. Consequently, the angle of the leading leg (θ0) should increase with locomotion speed. In addition, the relation between the speed before HS and the average speed is almost linear (Fig. 2). Therefore, considering Eq. (8), the model predicts that the effect of average speed on the horizontal component of the impulse is larger than the effect of mass.
Although the horizontal force profile during stance phase possesses approximate point symmetry, the profile of the treadmill belt speed does not (Fig. 5). The non-ideal behavior of the belt speed was clearly more evident around the HS phase than around the TO phase. We speculate that a few factors induced these results. First, the increase of the decelerating force during the loading of the leading leg was much faster than the increase of the force during the stance phase. Considering the finite sampling rate of the feedback loop, the rapidly developed decelerating force during the HS phase can affect the belt speed even before the speed controller tries to compensate for the speed error. In contrast, the force during the stance phase develops relatively slowly, allowing the treadmill system enough time to control the belt speed. It is also probable that the belt slips over the rotating drum due to the loaded external force. In particular, a large and rapidly increasing braking force is exerted during the landing of a foot. A slip between the belt and the drum may contribute to the largest amount of the speed change around HS.
This study alone cannot address whether and how much the observed treadmill belt speed error is responsible for the reported differences between treadmill and overground walking in kinematics, kinetics, and muscle activation patterns. However, the previous study by Savelberg et al. demonstrated a significant correlation between the treadmill belt speed error and the kinematic differences between treadmill and overground locomotion12. Although the exact physiological or biomechanical mechanism by which the treadmill belt speed error affects human walking has not been revealed, the significant correlation between the belt speed error and the quantified differences strongly supports the view that the belt speed error is at least partly responsible for the observed difference between treadmill and overground locomotion. This finding, combined with the results of the current study, suggests that the difference between treadmill and overground walking will be amplified by increases in the walking speed and the weight of the walker.
A previous study with a similar experimental setup estimated the mechanical energy exchange between a subject and a treadmill by measuring the belt speed deviation and the ground reaction force13. The study concluded that, although the deviation of the belt speed is over 3%, the total energy exchange is less than 1.6% of the work performed on the center of mass, so treadmill walking is only mildly disturbed by the non-ideal mechanical behavior of the treadmill. Our results also show that the average difference between the maximum and minimum belt speed is 3.1% of the commanded belt speed at the walking speed of 1.2 m/s. We agree that the mechanical work done on the center of mass due to the belt speed error can be small, but we speculate that the small amount of energy exchange does not always guarantee negligible disturbance. Motor neuroscience studies emphasize the critical role of cutaneous sensory input through the foot in the regulation of human locomotion19,20,21. If the sensory input from the foot is critical in locomotor control, the belt speed difference of up to 3% may affect the locomotor output significantly, and the resulting effect may contribute to the noticeable difference between treadmill and overground locomotion. A study by Roll et al. actually demonstrated that a change in cutaneous afferents from the plantar sole significantly alters the path of center of pressure during locomotion22. Nurse et al. also showed that supra-sensory vibration applied to the sole changes the location of center of pressure23. It is necessary to note that the mechanical energy exchange due to such foot sensation is negligible, whereas the effect of the sensation on gait and posture is significant.
Furthermore, the error up to 3% is not randomly assigned: the treadmill belt speed changes periodically depending on the gait phase (Fig. 5). Therefore, treadmill locomotion fundamentally requires motor adaptation to a dynamic environment, which is mechanically different from the stationary ground. Suppose that we walk on the ground, and the ground moves toward our center of mass with a speed of 3% of our intended walking speed at every heel-strike. It is plausible that our neuro-motor system will adapt to the novel environment and use a new motor control strategy, resulting in different kinematics and muscle activation patterns.
The current study investigated only a limited range of walking speeds. Each subject walked at 0.8, 1.0, and 1.2 m/s. Our analyses showed clear dependence of the treadmill speed error on the average speed even within this narrow range of walking speeds. We expect that the R2 value for the multiple regression would increase if we investigated the effect of a wider range of walking speeds. Another limitation is a potential systematic effect due to the non-randomized study design regarding the walking speed sequence. Although subjects were sufficiently familiarized with treadmill walking before the main study, any adaptation behavior in treadmill walking could not be controlled in the sequential study design.
In this study, we used a split-belt treadmill, which allows one foot per belt. When we walk on a typical single belt treadmill, both feet exert force to the belt during the double stance phase, so the resultant belt speed error is expected to be different. This limitation is inevitable as long as we are to obtain the exact force data from each foot and analyze the effect of ground reaction force on the belt speed error. A previous study, which used single belt treadmills without directly assessing the reaction force, reported that the belt speed variation was 3% for a high power treadmill designed for training horses and 6% for a typical treadmill designed for routine clinical gait analysis and rehabilitation12. According to the result of this previous study, the speed variation of a typical single belt treadmill is not less than what we observed from the instrumented split-belt treadmill.
## Conclusion
We periodically exert force on the ground when we walk or run. If a slip between the treadmill belt and the rotating drum occurs due to the external force, the treadmill belt already fails to serve as an inertial frame of reference. Even if we assume no slip between the belt and the rotating drum, the force from a foot to the belt provides significant perturbation to the drum and the motor, and the motor of a treadmill is not an ideal flow source. Due to the limited power of the electrical motor and the finite sampling rate of the controller, the treadmill belt speed cannot be constant when someone walks or runs on the treadmill.
• 2021 Dec 13
# Seminar - Abhijeet Anand (MPI for Astrophysics)
11:00am to 11:30pm
## Location:
Zoom
" The cool CGM in absorption with large spectroscopic surveys"
The gas flows in the circumgalactic medium (CGM) play a pivotal role in several key processes regulating galaxy formation, implying that our understanding of galaxy formation is limited by our current understanding of the CGM. In this talk, I will present the recent constraints that we obtained on the...
• 2021 Dec 02
# ITC Colloquium - Angela Adamo (Stockholm)
11:00am to 12:00pm
## Location:
Zoom
"Star cluster formation and feedback: a close view from the local universe"
Abstract: In spite of the huge observational and numerical progress made, young star clusters remain, de facto, a challenge to study. Potentially they are considered key tracers of star formation conditions, as well as units of stellar feedback within their host galaxies. They sit at the intermediate scales of the star formation cycle of galaxies and can therefore unlock the intricate regulatory interplay between star formation and feedback that drives galaxy...
• 2021 Nov 29
# Seminar - Benjamin Horowitz (Princeton)
11:30am to 12:00pm
## Location:
Zoom
" Reconstructing the Cosmic Web at High Noon with Lyman Alpha Tomography"
Cosmic High Noon is defined by the peak of star formation, occurring at z~2.5. Not only is understanding the dynamics of this epoch critical for a unified picture of galaxy formation, but it also plays a critical role in cosmology to connect low redshift measurements to those from the primordial...
• 2021 Nov 22
# Seminar - Zhuo Chen (UCLA)
11:30am to 12:00pm
## Location:
Zoom
" A new window on star formation history at the Galactic Center"
As the closest galactic nucleus, Milky Way's Nuclear Star Cluster (NSC) provides a unique opportunity to resolve the stellar population and to study its composition and star formation in this extreme environment. The limitation in our current understanding of the NSC star formation history is that previous...
• 2021 Nov 18
# ITC Colloquium - Conny Aerts (KU Leuven)
11:00am to 12:00pm
## Location:
Zoom
"Highlights from Gravito-Inertial Asteroseismology"
High-precision photometric light curves assembled by space telescopes have allowed for the probing of stellar interiors via asteroseismology. Applications
to sun-like stars and red giants are by now standard practice. After a short introduction, this talk focuses on the recent and challenging case of
gravito-inertial modes in rapidly rotating stars born with a convective core. We discuss how such modes lead to estimates of the internal rotation and...
• 2021 Nov 15
# Seminar - Jose María Ezquiaga (U Chicago) and Rohan Naidu (Harvard)
11:00am to 12:00pm
## Location:
Zoom
" Probing the standard cosmological model with the population of binary black-holes"
Gravitational-wave (GW) detections are rapidly increasing in number, enabling precise statistical analyses of the population of compact binaries. In this talk I will show how these population analyses cannot only serve to constrain the astrophysical formation channels, but also to learn...
• 2021 Nov 08
# Seminar - Emily Wilson (RIT) and Mohit Bhardwaj (McGill)
11:00am to 12:00pm
## Location:
Zoom
" Convection in Common Envelopes"
The common envelope (CE) phase of binary evolution is the primary theorized mechanism to produce short-period, compact binaries. The efficiency of energy transfer between the two stars of the CE, $\alpha_{CE}$, and the predicted final separations of these same systems are closely linked. Rather than using a constant ejection efficiency,...
• 2021 Nov 04
# ITC Colloquium - Aleksandra Ćiprijanović (FNAL)
11:10am to 12:00pm
## Location:
Zoom
"Bridging the gap between simulations and survey data - domain adaptation for deep learning in astronomy"
Astronomical surveys are already producing very large datasets, and machine learning will play a crucial role in enabling us to fully utilize all of the available data. Machine learning models are often initially trained on simulated data and then applied to observations, which can potentially lead to a substantial decrease in model accuracy on the new target dataset. Simulated and telescope data represent different data domains, and for a machine learning model to work in both...
• 2021 Nov 01
# Seminar - Huanqing Chen (U Chicago) and Dhruba Dutta Chowdhury (Yale)
11:00am to 12:00pm
## Location:
Zoom
" Recovering the Density Fields inside Quasar Proximity Zones at z~6"
The matter density field at z~6 is very challenging to probe. One of the traditional methods that work successfully at lower redshift is the Lyman-alpha forest in quasar spectra. However, at z~6, the residual neutral hydrogen usually creates saturated absorption, thus much of the information about gas...
• 2021 Oct 28
# ITC Colloquium - Anna Ijjas
11:00am to 12:20pm
## Location:
Zoom
"Entropy, Black Holes, and the Early Universe"
Emerging from a big bang in which gravity is strongly coupled and quantum fluctuations of stress-energy and spacetime are both large, the natural expectation is that the total entropy should be nearly maximal and equally distributed among both stress-energy and gravitational degrees of freedom. However, the observed entropy distribution on the last scattering surface is puzzlingly different, as Penrose has emphasized. In this talk, I will introduce the cosmological entropy problem and discuss our recent proposal to evade...
# Is xy=4 a direct or inverse variation?
Apr 1, 2017
$x y = 4$ is an inverse variation
#### Explanation:
To understand the operation of this equation: $x y = 4$
the equation can be solved for a few values of $x$ or $y$.
Suppose we choose values for $x$ of $1$, $2$, and $4$.
Then for $x = 1 , x y = 4 \to 1 y = 4 \to y = 4$
Then for $x = 2 , x y = 4 \to 2 y = 4 \to y = 2$
Then for $x = 4 , x y = 4 \to 4 y = 4 \to y = 1$
Suppose now we choose values for $y$ of $1$, $2$, and $4$.
Then for $y = 1 , x y = 4 \to 1 x = 4 \to x = 4$
Then for $y = 2 , x y = 4 \to 2 x = 4 \to x = 2$
Then for $y = 4 , x y = 4 \to 4 x = 4 \to x = 1$
In either case we saw that as the value of $x$ increased, the value of $y$ decreased and vice versa, so the variation between the two is inverse.
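The same table can be checked with a one-line computation (Python, purely illustrative):

```python
# For xy = 4, y = 4/x: tabulate the values used above and confirm that
# the product x*y stays constant, the hallmark of inverse variation.
pairs = [(x, 4 / x) for x in (1, 2, 4)]   # -> [(1, 4.0), (2, 2.0), (4, 1.0)]
assert all(x * y == 4 for x, y in pairs)
```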
# Detail in the proof of the quaternion rotation identity
I am trying to understand the proof of the quaternion rotation identity illustrated in wikipedia (http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation#Proof_of_the_quaternion_rotation_identity). I cannot understand the development of the last term in the first passage, i.e. why this should be true:
$\vec{u}\vec{v}\vec{u}=\vec{v}(\vec{u}\cdot\vec{u})-2\vec{u}(\vec{u}\cdot\vec{v})$
$\vec{u}$ and $\vec{v}$ are pure imaginary quaternions.
-
Further up the page, there is the identity: $uv=u\times v-u\cdot v$. Using this and the fact that $uu=-u\cdot u$, we have:
$$uv=u\times v-u\cdot v\\=-v\times u +u\cdot v -2(u\cdot v)\\=-vu-2(u\cdot v)$$
Multiplying on the right by $u$, you have the identity:
$$uvu=(-vu-2(u\cdot v))u=-vuu-2u(u\cdot v)=v(u\cdot u)-2u(u\cdot v)$$
-
The rule $uv = -u \cdot v + u \times v$ gives $vu = -v \cdot u + v \times u = -u \cdot v - u \times v$, and hence $uv+vu = -2 u \cdot v$. In other words, $uv = -vu - 2 u \cdot v$. Now multiply this by $u$ from the right and use $uu=-u \cdot u$.
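The identity is also easy to verify numerically. A Python sketch (our own helper, not from the linked page), representing quaternions as $(w,x,y,z)$ arrays and checking $uvu = v(u\cdot u) - 2u(u\cdot v)$ for random pure-imaginary $u$, $v$:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a, b given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

rng = np.random.default_rng(0)
u3, v3 = rng.normal(size=3), rng.normal(size=3)
u = np.concatenate(([0.0], u3))   # pure imaginary quaternions (w = 0)
v = np.concatenate(([0.0], v3))

lhs = qmul(qmul(u, v), u)
rhs = np.concatenate(([0.0], v3 * u3.dot(u3) - 2 * u3 * u3.dot(v3)))
assert np.allclose(lhs, rhs)
```

Note that the scalar part of $uvu$ vanishes, consistent with the right-hand side being a pure-imaginary quaternion.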
-
I had left my answer on my machine over lunch and was surprised no solutions had arrived... but then seconds before I hit submit you answered too! Strange timing on both our parts... – rschwieb Feb 4 '13 at 18:42
Ha! That should teach you not to leave your answers over lunch. ;-) – Hans Lundmark Feb 4 '13 at 18:44
Thank you both. Now I will have problems with choosing the answer to accept. :) – Pippo Feb 4 '13 at 19:01
@Pippo: Toss a coin. :-) – Hans Lundmark Feb 4 '13 at 19:04
Sorry Hans, a random number generator has decided. ;) – Pippo Feb 4 '13 at 19:08
# Risk glossary
## Kappa
Also known as vega or lambda. A value representing the expected change in the price of an option for a unit change in the implied volatility of the underlying asset.
# Eikonal quasinormal modes and shadow of string-corrected d-dimensional black holes
### online | 2021-04-21 | 14:30
Filipe Moura
ISCTE - Instituto Universitário de Lisboa e Instituto de Telecomunicações
We compute the quasinormal frequencies of $d$-dimensional spherically symmetric black holes with leading string $\alpha'$ corrections in the eikonal limit for tensorial gravitational perturbations and scalar test fields. We find that, unlike in Einstein gravity, the real parts of the frequencies are no longer equal for these two cases. The corresponding imaginary parts remain equal to the principal Lyapunov exponent corresponding to circular null geodesics, to first order in $\alpha'$. We also compute the radius of the shadow cast by these black holes.
# Errors in Compile
I defined a function
getCell[sp_, i_, j_, x_] := Block[{target, n, m, k, l, k2, l2, cells},
n = Dimensions[sp][[1]];
m = Dimensions[sp][[2]];
cells = {};
Do[
Do[
(* This is one neighbor *)
k2 = Mod[i + k, n, 1];
l2 = Mod[j + l, m, 1];
If[(k2 != i || l2 != j) && sp[[k2, l2]] == x,
AppendTo[cells, {k2, l2}]],
{l, -1, 1}],
{k, -1, 1}];
If[Dimensions[cells][[1]] != 0,
target = RandomSample[cells, 1][[1]],
(* The default, if none was found, is to return the cell itself,
just for convenience *)
target = {i, j}
];
Return[target];
]
getCell, given a square matrix sp filled with Integers, attempts to find a cell neighboring [[i,j]] that contains a value equal to x. If it finds such a cell, it returns the coordinates of that cell; if not, it returns the input coordinates {i,j}.
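For comparison, the same neighbor-search logic can be written in Python (this is our own transliteration, not part of the question; 0-based indices and % replace Mathematica's 1-based Mod[..., n, 1]):

```python
import random

def get_cell(sp, i, j, x):
    """Return a random neighbor of (i, j) on a toroidal grid whose value
    equals x, or (i, j) itself if no such neighbor exists."""
    n, m = len(sp), len(sp[0])
    cells = []
    for dk in (-1, 0, 1):
        for dl in (-1, 0, 1):
            k2, l2 = (i + dk) % n, (j + dl) % m   # wrap around the edges
            if (k2, l2) != (i, j) and sp[k2][l2] == x:
                cells.append((k2, l2))
    return random.choice(cells) if cells else (i, j)
```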
I now wanted to compile getCell to try to speed it up, but when I do so by putting Compile around the function and inserting type definitions, I get several error messages:
Compile::cset: Variable cells of type {_Integer,1} encountered in assignment of type {_Integer,2}. >>
Compile::cpts: The result after evaluating Insert[cells,{k2,l2},-1] should be a tensor. Nontensor lists are not supported at present; evaluation will proceed with the uncompiled function. >>
Compile::cset: Variable target of type {_Integer,2} encountered in assignment of type {_Integer,0}. >>
Compile::cset: Variable target of type {_Integer,2} encountered in assignment of type {_Integer,1}. >>
Does anyone know what I might be doing wrong, and how I could get this simple function compiled?
-
What code are you feeding to Compile? – image_doctor Nov 16 '12 at 20:47
Tom, you substantially changed the question after it was answered: please don't do that! Feel free to follow up with a new question in a new thread. Consult our faq for more guidance. – whuber Nov 16 '12 at 23:16
## 1 Answer
Note: instead of picking a random element I just pick the first one it runs into; the random version is at the end
getCell =
Compile[{{sp, _Integer, 2}, {i, _Integer}, {j, _Integer}, {x, _Integer}},
Block[{ n, m, k2, l2, cell},
{n, m} = Dimensions[sp];
cell = {i, j};
Do[(*This is the neighborhood *)
k2 = Mod[i + k, n, 1];
l2 = Mod[j + l, m, 1];
If[(k2 != i || l2 != j) && sp[[k2, l2]] == x, cell = {k2, l2};
Break[]]
, {l, -1, 1}, {k, -1, 1}];
cell
]]
When you get this type of error about tensor sizes not matching, think about what shapes your data has and whether Mathematica knows about them. If it's a local variable rather than an argument, you can still declare its shape by putting the specification as the last Compile argument; see the docs for details.
Often the easiest fix is to explicitly assign the variable a value (see below)
What I changed:
• Explicitly added {sp,_Integer,2} so Mathematica knows what it is. This is the one that matters.
• Merged the Do loops into one
• Removed k and l from Block variables since Do localizes them automatically
• Assigned {m,n} simultaneously
• There is no need to explicitly state Return: whatever the last expression evaluates to gets returned (unless it is suppressed with a ;, in which case it gives Null)
An example, finding neighboring 0:
m = 5;
r = RandomInteger[{0, 1}, {m, m}];
pos = RandomInteger[{1, m}, 2];
cell = getCell[r, pos[[1]], pos[[2]], 0];
s = Grid[r,
ItemStyle -> {Automatic, Automatic, {pos -> Red, cell -> Blue}}]
To actually get a random one you can make sure that cells is treated correctly by initializing it as an n×2 value:
getCell =
Compile[{{sp, _Integer, 2}, {i, _Integer}, {j, _Integer}, {x, _Integer}},
Block[{n, m, k2, l2, cells},
{n, m} = Dimensions[sp];
cells = {{i, j}};
Do[(*This is the neighborhood *)
k2 = Mod[i + k, n, 1];
l2 = Mod[j + l, m, 1];
If[(k2 != i || l2 != j) && sp[[k2, l2]] == x,
AppendTo[cells, {k2, l2}]]
, {l, -1, 1}, {k, -1, 1}];
If[Length[cells] == 1, {i, j}, RandomChoice[Rest[cells]]]
]]
Here I do that by just starting with {i,j} in it and appending the positions it finds; at the end I pick randomly out of everything but the first value.
Since you are compiling it in the first place, I guess you will be running the function a lot and want speed. There are some easy ways to get a nice speedup: the first is compiling to C, and the second is making the function listable.
Say for a given matrix you want to find the nearest 0 neighbor for a list of positions.
getCellListableC =
Compile[{{sp, _Integer, 2}, {pos, _Integer, 1}, {x, _Integer}},
Block[{n, m, k2, l2, cells, i, j},
{n, m} = Dimensions[sp];
{i, j} = pos;
cells = {{i, j}};
Do[(*This is the neighborhood *)
k2 = Mod[i + k, n, 1];
l2 = Mod[j + l, m, 1];
If[(k2 != i || l2 != j) && sp[[k2, l2]] == x,
AppendTo[cells, {k2, l2}]]
, {l, -1, 1}, {k, -1, 1}];
If[Length[cells] == 1, {i, j}, RandomChoice[Rest[cells]]]
],
CompilationTarget -> "C",
RuntimeOptions -> "Speed",
RuntimeAttributes -> Listable];
m = 5000;
n = 1000;
r = RandomInteger[{0, 2}, {m, m}];
pos = RandomInteger[{1, m}, {n, 2}];
(* Functions I compare to are the ones above, with different Compile options *)
(* Compiled, but not CompilationTarget->"C" *)
AbsoluteTiming[getCell[r, #[[1]], #[[2]], 0] & /@ pos;]
(* {0.085699, Null} *)
AbsoluteTiming[getCellC[r, #[[1]], #[[2]], 0] & /@ pos;]
(* {0.077503, Null} *)
(* Take advantage of Listability *)
AbsoluteTiming[getCellListable[r, pos, 0];]
(* {0.008890, Null} *)
AbsoluteTiming[getCellListableC[r, pos, 0];]
(* {0.004517, Null} *)
Note especially how Listable improves speed.
Another thing to always look at after compiling is:
<< CompiledFunctionTools`
CompilePrint[getCellListableC]
If you see MainEvaluate, that means that part isn't compiled; figure out how to avoid that. Another thing is CopyTensor: wherever that occurs, a list is copied; you will see that in this code due to the AppendTo (among others).
-
Thanks so much for this! That's great! I'm new to Compiling functions, so sorry for asking this probably rather trivial question... – Tom Wenseleers Nov 16 '12 at 21:08
@TomWenseleers I find that making sure Mathematica knows the shape of all variables can be quite non-trivial, but usually worth it to get a factor 10-100 faster code :) – ssch Nov 16 '12 at 21:44
@TomWenseleers I added a few other things that I think is good to know about when getting started with Compile – ssch Nov 16 '12 at 22:25
Good answer (+1), but just thought I ought to point out that Internal`Bag is highly preferable to AppendTo whenever the $O(n^2)$ behaviour of the latter might represent a performance issue. – Oleksandr R. Nov 16 '12 at 22:32
Ha that's really great - that's already quite a speed increase! But indeed very tricky to properly tell Mathematica what variable types you are using. – Tom Wenseleers Nov 16 '12 at 22:51
|
{}
|
# Partitioning a connected polygon into connected pieces of equal area
Armaselu and Daescu (TCS, 2015) present algorithms that, given a convex polygon $$P$$ and an integer $$m$$ (which must be a power of $$2$$), return a partition of $$P$$ into $$m$$ convex polygons with the same area and same perimeter.
If we only want the area to be equal (and do not care about perimeter), then the problem becomes easy for any $$m$$: move a "knife" (a straight line) over $$P$$ from left to right, and make a cut whenever the area covered by the knife is $$1/m$$. Since $$P$$ is convex, the resulting pieces are convex too.
But what if $$P$$ is not convex? Then, cutting $$P$$ by a knife might generate pieces that are not convex and even not connected.
What is an algorithm for partitioning a polygon (that is connected but not necessarily convex) into $$m$$ connected polygons?
My guess is that the problem should be much easier for hole-free polygons. But even for this case, I could not find an algorithm.
• What's wrong with triangulating the polygon (based on the boundary points) and then, say, starting from the rightmost boundary triangle: if it is smaller than 1/m, add an (edge-)neighboring triangle to it and repeat until the area of the union of these triangles is at least 1/m. At that point, take the desired portion from the last triangle, attach it to the previous triangles, and output it as the first piece; repeat this process for the rest of the m-1 remaining pieces in the remaining polygon. – Saeed Feb 22 at 19:37
• A triangle might connect three different parts of the polygons. If you add it to your portion, you make the complement region disconnected... – Sariel Har-Peled Feb 22 at 22:30
• That's a good point, but I think it is possible to resolve this issue: find all such triangles (cut triangles) then make one vertex for each of them and put weights on these vertices w.r.t. their area, then remove them and for each of the remaining connected components make one vertex with corresponding weight. Connect two vertices if they had a common edge in the polygon. We can root this tree and find the lowest cut vertex, then analyze based on the weight of its left and right branches. I may later write an answer based on this argument (if nothing is missing). – Saeed Feb 23 at 9:54
• I am not against such an approach - it is just that these things might require a long sequence of fixes before you get to a final answer that works. You need a "trick" to make the polygons connected (in my answer I used the outer boundary to "hang" the polygons so that they are connected). – Sariel Har-Peled Feb 28 at 20:25
Compute the medial axis of the polygon using the $$L_1$$ metric. Any point on the boundary defines a natural segment that goes from this point to a point on the medial axis; let's call it the leash of the point. Pick an arbitrary point on the boundary of the polygon, and start moving it counterclockwise. Continue sweeping until the leash sweeps over area $$1/m$$ of the polygon. The swept area is a connected polygon of the desired area. Now continue in this fashion, breaking the polygon into $$m$$ connected polygons of the same area.
• Very interesting, thanks. Two questions: (a) does this work also when using the straight skeleton instead of the medial axis? (b) Is it correct to say that any partition of the medial axis / straight skeleton into $m$ connected components, induces a unique partition of the polygon into $m$ connected polygons? – Erel Segal-Halevi Feb 23 at 16:59
|
{}
|
# Direct Relationship Between Second Derivative & Points of Inflection
To start at the start, my maths textbook says that:
• A line, $f(x)$, is concave when $f''(x) ≤ 0$ (second derivative of $f(x)$ is smaller than or equal to zero, or the gradient of $f(x)$ is changing at a decreasing rate)
• A line, $g(x)$, is convex when $g''(x) ≥ 0$ (second derivation of $g(x)$ is greater than or equal to zero, or the gradient of $g(x)$ is changing at an increasing rate)
• A point of inflection is a point where the line changes from concave or convex or vice versa (the sign of the second derivative changes from positive to negative or vice versa)
Obviously this stirs up some confusion, because a line $h(x)$ with $h''(z) = 0$ would, at the $x$-value $z$, be concave as well as convex (which to my understanding are mutually exclusive, but apparently not, maybe). But then, going through some questions in the textbook, these definitions (of concave and convex) were proven to some extent (though some confusion still remains).
The first example we came across in class was the following:
$$f(x) = (x-5)^4$$
Find the coordinates of any point(s) of inflection.
So working it out goes as follows: $$f'(x) = 4(x-5)^3\\\\ f''(x) = 12(x-5)^2\\\\ f''(x) = 0\\\\ 12(x-5)^2 = 0\\\\ (x-5)^2 = 0/12 = 0\\\\ x-5 = 0\\\\ x = 0+5 = 5\\\\ x = 5\\\\ y = f(5) = (5-5)^4 = 0^4 = 0$$ Coordinates of the point of inflection: $(5,0)$.
However, from knowing what a quartic line looks like, we know that there are no points of inflection. For example, the line $j(x) = x^4$, we already know, resembles a (positive) quadratic line (essentially) in that it has no point(s) of inflection and is entirely convex for all values of $x$. Well, $$j(x-5) = (x-5)^4 = i(x),$$ from the question above, is nothing more than a translation of $j(x)$ by $5$ units to the right. Therefore, $i(x)$ keeps the characteristics of $j(x)$, such as having no point(s) of inflection and being entirely convex for all values of $x$.
Therefore, even though the second derivative of $i(x)$ equals zero at $x = 5$, that point is not a point of inflection, because the line does not change from convex to concave there. Instead, as with the rest of the line, it is just another point on the (entirely) convex line. Though this doesn't seem to be a very mathematical conclusion; at least not algebraic, but rather a logical assumption.
This suggests that (at a particular value of $x$) the second derivative can equal zero and still be a point on a convex line (rather than being a point of inflection). I assume the same applies for a concave line, for example, $$k(x) = -(x-5)^4.$$
This supports the claim that a line can have a second derivative of zero at a point and still be entirely concave or convex.
So the way that the book recommends to figure out whether a coordinate is a point of inflection or just another point on a concave or convex line (when the second derivative equals zero) is to evaluate the second derivative at an $x$-value just before and just after the coordinate you have just found to have a second derivative equal to zero. If one is positive and the other is negative (order is irrelevant), then it is a point of inflection.
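The book's before-and-after check can be sketched numerically in Python (the helper names, the finite-difference step h, and the offset eps are my own choices, not from the textbook):

```python
def second_derivative(f, x, h=1e-4):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

def is_inflection(f, x0, eps=1e-2):
    """The book's test: does f'' change sign just before and after x0?"""
    return second_derivative(f, x0 - eps) * second_derivative(f, x0 + eps) < 0

print(is_inflection(lambda x: (x - 5) ** 4, 5))  # False: f'' >= 0 on both sides
print(is_inflection(lambda x: x ** 3, 0))        # True: f'' changes sign at 0
```

Note that this check can be fooled when another inflection point lies within eps of x0, which is exactly the weakness raised below.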
I do not like this method: if there were another point of inflection very close to the coordinate you have just found (one you are unaware of, due to lack of full working or some unknown algebraic variable), then the second derivatives you measure just before and just after may have the same sign (if you have skipped over the second point of inflection), implying that it is not a point of inflection when in reality it is.
So this is where my remaining confusion lies:
1. Can a point on a line (where the $x$-value's second derivative equals zero) ever be concave AND convex (rather than just the concave or convex example presented above)?
2. Is there an algebraic way of analyzing the second derivative and deciding whether it is actually a point of inflection or not (rather than just looking at the original line, which may not always be available)?
For the first question, my assumption is no, but I'm not sure so I'm asking.
For the second question, my assumption is that there is some sort of relationship between the second derivative and whether or not that point is a point of inflection. This is because in the example above, $$i''(x) = 12(x-5)^2.$$ I have a feeling that the even power in the function of $x$ has something to do with preventing the change of the second derivative's sign from positive to negative (or vice versa), just as an even power applied to a negative number reverses the negative number's sign, for example, $$(-4)^2 = 16.$$ Perhaps the fact that the value $5$ makes $i''(x)$ zero twice over (again, due to the $x-5$ being squared) has something to do with the fact that the point is in fact not a point of inflection.
I've also come across similar examples as $i(x)$ using the second derivative for trigonometric functions. I don't believe any powers were involved in the function's second derivative (at least none that were even), which may debunk the first theory that an even power of the function of $x$ in a second derivative is what prevents the point (where the second derivative equals zero) from being an actual point of inflection. However, the second theory, that a value of $x$ could be attained an even number of times over and so prevent the point (where the second derivative equals zero) from being an actual point of inflection, may still be intact, since it is common in trigonometric functions for several $x$-values to correspond to a single $y$-value, derivative value, or second derivative value.
I would have thought that only the second derivative mattered in whether something is an actual point of inflection or not; therefore, any second antiderivative of, for example, $i''(x)$ would give a line with a point of inflection at $x = 5$, whether that antiderivative led to (the original) $i(x)$ or something else. I tried it once or twice with some random numbers, but I could neither prove nor disprove that the point where $x = 5$ was a point of inflection. This is because the only way I could think of (even) attempting to support my theory was using my graphical calculator to draw out the line and looking at it where $x = 5$, which is highly unreliable since I'm only judging it by eye. I also didn't want to measure the second derivative at the $x$-values just before and after the point where $x = 5$, because that is somewhat contradictory and a method I'm trying to see if I can avoid.
Alternatively, my teacher thinks that the even power of the original line, for example, in $$i(x) = (x-5)^4$$ is what prevents the point on the line where $x = 5$ from being a point of inflection, though I think this is fairly vague. It would make more sense that the second derivative (rather than the original line) has the direct impact on whether the point (where the second derivative at its $x$-value equals zero) is an actual point of inflection or not.
So I guess another few question arises being:
1. Will $p''(x) = a(x+b)^{2n+1}$ and $p''(-b) = 0$ always give a point of inflection where $x = -b$, and will $q''(x) = c(x+d)^{2n}$ and $q''(-d) = 0$ never give a point of inflection where $x = -d$?
2. And can it be proven algebraically?
3. Is there a different, direct relationship between a line's second derivative and whether a point on that line (such that its $x$-value's second derivative equals zero) is a point of inflection or not?
Thanks for bearing with me if you've read this far. Let me know if there's anything that needs to be better explained. Overall, there are 5 questions, answers to any and all questions are appreciated.
• Yes, concave and convex are not mutually exclusive – Tomasz Tarka Feb 9 '18 at 2:13
• Please use MathJax to format!! – Saad Feb 9 '18 at 2:41
• @TomaszTarka really? Wow, wouldn't have expected that. Could you give me an example of a point on a line (or an entire line) that is both concave and convex (since they're not mutually exclusive) please (to help me better comprehend it). – Alex P Feb 9 '18 at 11:18
• Concavity and convexity are properties of a function on an interval, not of a point. The simplest example of a function that is both concave and convex is $f(x)=x$, or even simpler, $f(x)=\text{const}$. And I think you may want to work with the more general definition of convexity and concavity, which firstly doesn't require the function to be twice differentiable, and gives you more insight into what concave functions really are – Tomasz Tarka Feb 9 '18 at 16:53
|
{}
|
Probability and Bayes' theory problem
Students A, B, and C are discussing a solution of one of the homework assignments on Probability and Statistics, which was solved by their colleague. Student A will claim that the solution is wrong if and only if either student B or student C claims that it is wrong. Students B and C did not have enough time to review the solution; therefore, student B will say that he found a mistake with probability 50%. Student C waits for a response from student B, and if student B claims that he did not find a mistake, student C will claim that he found a mistake with probability 20%. What is the probability that student B said that he found a mistake, given that student A claimed that the solution is wrong?
solution
I do not understand the part highlighted in blue. Can anyone explain? P(B) is wrong OR P(C|B')P(B') ... Why do we need to multiply by P(B')?
$P(A)$ is the probability for student A to claim that the solution is wrong. Similar for $P(B)$ and $P(C)$. A makes the claim the solution is wrong when either B or C (or both) claims the solution is wrong. So:
$$P(A) = P(B \cup C) = P(B \cap C') + P(C \cap B') + P(B \cap C) =$$
$$P(B \cap C) + P(B \cap C') + P(B' \cap C) =$$
$$P(B) + P(B')P(C|B')$$
In the last step I used that in general: $P(X \cap Y) = P(X)P(Y|X)$
In the step before that, I used that in general: $P(X) = P(X \cap Y) + P(X \cap Y')$
• can you explain in simple words? – Pavel Mar 6 '17 at 14:33
• @Pavel Is there any specific step you would like me to explain? – Bram28 Mar 6 '17 at 14:35
• P(A) says wrong if B or C says wrong..... probability of wrong for B is 50% OR probability of wrong for C = (C|B'), so why do we need to multiply by P(B') – Pavel Mar 6 '17 at 14:37
• @Pavel The disjunction 'B wrong OR C wrong' works out to 'B wrong OR (C wrong AND B not wrong)'. And P(C wrong AND B not wrong) = $P(C \cap B')$ ... P(C wrong AND B not wrong) $\not = P(C| B')$! ..... I think maybe you misinterpret $P(C \mid B')$: that means the chance that C is wrong given that B is not wrong ... but what we want is the chance that C is wrong AND B is not wrong ... which is $P(C \cap B') = P(B')P(C|B')$ – Bram28 Mar 6 '17 at 15:23
I do not understand the part highlighted in blue. Can anyone explain? P(B) is wrong OR P(C|B')P(B') ... Why do we need to multiply by P(B')?
It is because we were told:
Student A will claim, that the solution is wrong if and only if either student B or Student C claims that it is wrong.
So $A$ equals $B \cup C$, which in turn equals $B\cup (C\cap B^\complement)$. That is, "Student A will claim the solution to be wrong, when Student B does or, Student B does not but Student C does."
Since $B$ and $C\cap B^\complement$ are disjoint, the probability of their union is the sum of their probabilities.$$\def\P{\mathop{\sf P}}\P(A) = \P(B)+\P(C\cap B^\complement)$$
Then we just use the definition of conditional probability. $$\P(A) = \P(B)+\P(B^\complement)\P(C \mid B^\complement)$$
That is all.
It is the next line that you should be questioning, because it is complete gibberish.
• Since $\P(A\cap B) = \P((B\cup C)\cap B) = \P(B)$, $$\P(B\mid A) = \frac{\P(A\cap B)}{\P(A)} = \frac{0.5}{0.6} = 0.8\overline{3}$$
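As a quick sanity check of these numbers, the computation can be sketched in Python:

```python
# Given quantities from the problem statement
P_B = 0.5               # B claims to have found a mistake
P_C_given_notB = 0.2    # C claims a mistake, given that B did not

# P(A) = P(B) + P(B') * P(C | B')
P_A = P_B + (1 - P_B) * P_C_given_notB

# P(B | A) = P(A ∩ B) / P(A) = P(B) / P(A)
P_B_given_A = P_B / P_A

print(round(P_A, 2), round(P_B_given_A, 4))   # → 0.6 0.8333
```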
|
{}
|
# Error related to \spacefactor using 3rd-party macro \addmoretexcs
I am using the command \addmoretexcs for listings, which has its origin here. With the update to TeX Live 2014 (and probably an update to listings), the following code returns an error:
\documentclass{minimal}
\usepackage{ltxcmds}
\usepackage{listings}
\makeatletter
\makeatother
\makeatletter
\lowercase{\@ifundefined{lstlang@tex$#1}}{%
\lstloadlanguages{[#1]TeX}%
}{}%
\lowercase{\expandafter\g@addto@macro\csname lstlang@tex$#1\endcsname}{%
\lstset{moretexcs={#2}}%
}%
}
\makeatother
\begin{document}
\end{document}
The error in question is
! You can't use `\spacefactor' in vertical mode.
\@->\spacefactor
\@m
Note that, if I substitute \IfPackageLoaded with \ltx@ifpackageloaded, the error disappears. This, however, is not a solution.
The actual error is likely inside the packages; I cannot debug this any further. Any hints about the origin of this error?
I doubt the code worked before updating to TeX Live 2014, because it would have been wrong in any case. With
\IfPackageLoaded{listings}{%
\makeatletter
\lowercase{\@ifundefined{lstlang@tex$#1}}{%
\lstloadlanguages{[#1]TeX}%
}{}%
\lowercase{\expandafter\g@addto@macro\csname lstlang@tex$#1\endcsname}{%
\lstset{moretexcs={#2}}%
}%
}
\makeatother
the replacement text of \addmoretexcs (actually of an inner macro due to the fact your command has an optional argument) consists of the following tokens (• is used to separate tokens, for clarity)
\lowercase • { • \@ • i • f • u • ...
because the text given as argument to \IfPackageLoaded is already tokenized and TeX doesn't execute \makeatletter when absorbing this argument.
Correct code:
\makeatletter
\lowercase{\@ifundefined{lstlang@tex$#1}}{%
\lstloadlanguages{[#1]TeX}%
}{}%
\lowercase{\expandafter\g@addto@macro\csname lstlang@tex$#1\endcsname}{%
\lstset{moretexcs={#2}}%
}%
}%
\makeatother
Now, when the argument is absorbed, \@ifundefined is tokenized as a single token, not twelve.
## Some more words
The original code may appear to work if it is included in a .sty file, because these files are read in with an implicit \makeatletter declaration at the beginning and an implicit \makeatother at the end (more precisely, the category code of @ is restored to the value it had at the moment the .sty file was opened).
As a general rule, \makeatletter and \makeatother should never appear in .sty files (except when belonging to the replacement text of some macro, which is not the case here). Your usage of \makeatother in the second argument of \IfPackageLoaded would be wrong in a .sty file, because it would change the category code of @ from that point on in case listings has already been loaded.
So, if you are going to use the previous code in a .sty file, remove the \makeatletter and \makeatother declarations.
• Indeed, this solved it. However I can confirm that it worked before and also worked for about 3000 other people that downloaded my template. – Matthias Pospiech Jun 26 '14 at 6:14
• @MatthiasPospiech Probably none of them loaded listings. Of course, I tried your example with TL 2012 and TL 2013, obviously receiving the same You can't use \spacefactor' in vertical mode error in both cases. – egreg Jun 26 '14 at 6:51
• My template loads about 100 packages, and listings is always loaded. Even if someone would disable it in the preamble, a package from myself loads it, so it would be loaded in any case. The code is not in a sty file. I must admit, that I did not test the MWE in TeX Live 2013, but the error first appeared in my template in 2014 and does not (in my full template) in 2013. – Matthias Pospiech Jun 26 '14 at 12:17
• Yes! Thank you for pointing-out the \makeatletter and \makeatother problems with .sty files. – user59034 Jun 17 '15 at 16:52
|
{}
|
# Example of sets $A, B$ such that $A', B'$ are Turing equivalent but $A, B$ are not.
I have been wondering if the following statement is true, $$A\equiv_TB\iff A'\equiv_TB'$$ where $A, B\subseteq\omega$ and $A'$ denotes the Turing jump of $A$. I have been able to show the forward direction, but have been unable to show the converse. I am starting to think that the converse fails and have been looking for an example. Any suggestions?
-
There are indeed sets $A,B$ such that $A'\equiv_T B'$ yet $A\not\equiv_T B$, at least in certain theories (so it is consistent). One family of examples comes from so-called high sets, which are computably enumerable sets such that $A'\geq_T\emptyset''$. Since $A\leq_T \emptyset'$, if $A'\equiv_T\emptyset''$ the question of whether $A,\emptyset'$ are such a pair boils down to whether $\emptyset'\leq_T A$, i.e. whether $A$ is complete. An example of a high non-complete set $A$ is provided here, but the example relies on strengthened axioms for induction.
Similar to @Alex's answer, there are also low sets. A set $A\subseteq \omega$ is low if $A' \leq 0'$ (so that $A' \equiv_{T} 0'$). Notice that any low $0<A<0'$ would then be a counterexample to the converse of your statement. The Kleene-Post finite-extension construction of incomparable degrees builds two such sets.
|
{}
|
# Homework Help: No. of spinless particles in the left half of a box
Tags:
1. Nov 9, 2017
### Pushoam
1. The problem statement, all variables and given/known data
How to solve question no. 35?
2. Relevant equations
3. The attempt at a solution
Since the particle is spinless, spin = 0 , this means that the particle is a boson.
Applying Bose - Einstein distribution function,
$f(E) = \frac1 { e^{\beta ( E - \mu)} -1}$
I can get the values of $\beta$ and $\mu$, as this distribution function tells us the no. of particles having the energy E of a given system.
Since I have the LHS for two given values of E, I have 2 eqns. and hence I can determine $\beta$ and $\mu$.
Now, what to do?
How to connect this with the given information in question?
2. Nov 9, 2017
### Staff: Mentor
You have a system with a fixed energy, not a fixed temperature, hence the BE distribution does not apply. Start by considering the eigenstates of a particle in a box.
3. Nov 9, 2017
### Pushoam
The eigen energy of a particle in a box is
$E_n = \frac{ n^2 \pi^2 \hbar^2}{2 m a^2} = n^2 \epsilon_0$
This gives that the particles are in the states n = 2, 15.
Do I have to calculate the probability of getting a particle with energy $E_2$ and $E_{15}$ each in the left half region, and then multiply the probability with 1000 and add the two numbers?
4. Nov 9, 2017
### Staff: Mentor
Basically yes, assuming that the observation time is chosen randomly (or consider the result "on average").
5. Nov 9, 2017
### Pushoam
I didn't get the assumption. Why do we need the assumption?
6. Nov 9, 2017
### Staff: Mentor
Because a superposition of stationary states of different energies is not itself a stationary state. The particles will be sloshing from left to right.
7. Dec 5, 2017
### sayakd
I calculated the probability of finding a particle in the left half of the box, and it is always 1/2 for a stationary state, for all n.
So, for the particles with energy 4e, the number of particles in the left half is 100/2 = 50; for 225e, 900/2 = 450; total 500.
Best of luck for the TIFR GS btw..
8. Dec 5, 2017
Thanks.
9. Dec 5, 2017
### Pushoam
Is the probability of finding the particle between $x_i$ and $x_f$ for a 1D box $\frac{x_f - x_i}{L}$, where L is the length of the box?
I am not calculating it; I just want to see whether it can be viewed this way.
10. Dec 6, 2017
### sayakd
I don't know a closed form to represent the probability. What I did was integrate psi* multiplied by psi from -L to 0, where psi is the normalized wavefunction of the 1D potential box; 2L is the length of the box.
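The claim that each stationary state puts probability 1/2 in the left half of the box can be checked numerically. A sketch in Python, taking a box of length L = 1 with ψ_n(x) = √2 sin(nπx) (the normalization and units are my own choice):

```python
import math

def prob_left_half(n, steps=50_000):
    """Midpoint-rule integral of |psi_n(x)|^2 = 2*sin(n*pi*x)^2 over [0, 1/2]."""
    h = 0.5 / steps
    return sum(2 * math.sin(n * math.pi * (k + 0.5) * h) ** 2
               for k in range(steps)) * h

for n in (1, 2, 15):
    print(n, round(prob_left_half(n), 6))   # ≈ 0.5 for every n
```

Analytically, ∫₀^{1/2} 2 sin²(nπx) dx = 1/2 − sin(nπ)/(2nπ) = 1/2 exactly for every integer n, which is what makes the 100/2 + 900/2 = 500 count work.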
|
{}
|
Everyone is welcome here (except, of course, those who have borrowed books from me and have not returned them yet 😉)
# "How to decide whether a single subject score is significantly different from a group"
Posted on March 31, 2022 in misc
Let us suppose we want to compare a single patient to a group of healthy subjects from the general population using a hypothesis testing approach.
## The classical Hypothesis Testing approach
To decide whether a given man named, say, Alex, is French, one can place his height (190cm) on the distribution of heights in the whole population of French males. In other words, one determines the percentile in which this individual's score (height) falls.
As p-val = 0.023 < 0.05 ($$\alpha$$), we can reject the null hypothesis that Alex is French!
Remarks:
• If you find this reasoning bizarre, or absurd, complain to frequentist statisticians! (A more logically sound approach is to compare the probabilities of hypotheses (e.g. probs of being French, German, Italian, Martian, ...), but this concept is alien to frequentist stats)
• You will only misclassify individuals as non-French with a probability of $$\alpha$$, your a priori threshold, which is all the procedure is meant to do (protecting you from False Alarms when declaring an effect “significant”).
# Z-scores
In practice, we hardly ever have access to data from the full population.
Given estimates of mean and stdev obtained from a large sample, we can substitute this unknown distribution by a Normal one with these parameters.
Then, we compute the patient's Z-score:
z_score = (patient - mean(big_sample)) / sd(big_sample)
## [1] "Z = 1.997"
Important: Note that we divide by the standard deviation, not the standard error!
We can then compute the prob of such an “extreme” event under $$H_0$$:
pval = 1 - pnorm(z_score)
## [1] "p-val = 0.0229"
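For readers who prefer Python, the same Z-score computation can be sketched with the standard library (the mean and sd below are made-up stand-ins for the big_sample estimates):

```python
from statistics import NormalDist

big_mean, big_sd = 175.0, 7.5    # assumed sample estimates, not from the post
patient = 190.0                  # Alex's height in cm

z = (patient - big_mean) / big_sd      # divide by the sd, not the standard error
pval = 1 - NormalDist().cdf(z)         # one-tailed p-value under H0
print(round(z, 2), round(pval, 4))     # → 2.0 0.0228
```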
# T-scores
Z-scores are fine iff you can trust the mean and stdev estimates of the population's parameters obtained from big_sample.
When all you have to characterize your population is a “small” sample of individuals, the p-value computed from the Z-score (using the empirical mean and stdev) is unreliable, and therefore the False Alarm level is not well-controlled.
William Gosset solved the issue in The probable error of a mean (published under the pen name "Student")
The essential point is that an unbiased estimate of the standard deviation of a population, when you have a sample of size $$N$$, is:
$$\sigma_s = \sqrt{\frac{\sum_{i=1}^{i=N} (x_i- \bar{x})^2}{ (N - 1)}}$$
Then one computes a t-score:
$$t = (patient - mean_{group}) / \sigma_s$$
and places it on a Student $$t_{N-1}$$ distribution.
This can be seen as a t-Test where the individual is treated as a sample of size 1 and this is sometimes called the “Crawford and Howell test” after a 1998 paper of theirs.
# Python code
Here is some code found on a page from my web site
from numpy import mean, std
from scipy.stats import t
def CrawfordTest(case, controls):
    """Compare an individual to a sample.

    Args:
        case: score of the individual
        controls: scores from a sample group

    Returns:
        the one-tail probability associated with the score `case`
        compared to the scores in the list `controls`
    """
    tobs = (case - mean(controls)) / std(controls, ddof=1)
    return t.cdf(tobs, len(controls) - 1)
# The case of MRI
Univariate analyses comparing a patient's map to control maps (e.g. fMRI contrasts, or VBM densities).
One can compute the t map as follows:
def tmap_OneVsMany(patient_map, control_maps, masker):
    # control_maps: one map per control subject, stacked along axis 0
    mean = np.mean(control_maps, axis=0)
    std = np.std(control_maps, axis=0)
    n = len(control_maps)
    correctionFactor = np.sqrt(n / (n - 1))  # turn np.std (ddof=0) into the unbiased estimate
    return masker.inverse_transform((patient_map - mean) / (std * correctionFactor))
# Supplementary Materials
More stuff on classical hypothesis testing
## Detect if a dice is fair or not
I throw a dice 100 times and I observe this distribution
## dice
## 1 2 3 4 5 6
## 12 15 9 12 12 40
What is the probability that a fair dice yields such a result?
##
## Chi-squared test for given probabilities
##
## data: table(dice)
## X-squared = 40.28, df = 5, p-value = 1.311e-07
We reject the Null Hypothesis that all faces of the dice are equiprobable.
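The same test can be run in Python; here is a sketch using scipy (`chisquare` assumes equal expected counts by default, i.e. a fair dice):

```python
from scipy.stats import chisquare

observed = [12, 15, 9, 12, 12, 40]  # counts for faces 1..6
stat, p = chisquare(observed)       # expected: 100/6 throws per face
# stat matches the X-squared = 40.28 reported by R above
```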
## Test if a sample comes from a population of average height 1m70
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 126.9 161.9 171.4 171.6 181.4 230.4
##
## One Sample t-test
##
## data: samp
## t = 3.5174, df = 999, p-value = 0.0004554
## alternative hypothesis: true mean is not equal to 170
## 95 percent confidence interval:
## 170.7235 172.5496
## sample estimates:
## mean of x
## 171.6366
The formula for T is
T = (mean - 170) / (sigma / sqrt(N - 1))
Let's check it:
## [1] 3.515616
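A quick way to convince yourself of this identity, using a simulated, hypothetical sample (the original data are not shown): scipy's one-sample t statistic equals (mean − 170) / (sigma / sqrt(N − 1)) when sigma is the biased (ddof=0) standard deviation.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
samp = rng.normal(loc=171.6, scale=16.4, size=1000)  # hypothetical heights

t_scipy, _ = ttest_1samp(samp, popmean=170)
sigma = np.std(samp)  # biased sd (ddof=0), as in the formula above
t_manual = (np.mean(samp) - 170) / (sigma / np.sqrt(len(samp) - 1))
```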
## Recap: the classical (frequentist) approach to Hypothesis Testing.
To check whether the mean of sample is significantly different than 0:
1. Compute the p-value
p-value = P(mean(X) > mean_obs | $$H_0$$: the population of individuals from which the original sample comes has a mean of exactly 0)

where:
- X is a (virtual) sample of size N coming from a population with mean 0 and sd = sd_obs
- mean_obs is the "observed" average of the sample

2. Compare the p-value to an a priori threshold $$\alpha$$, e.g. 0.05.

Thus $$\alpha$$ controls the false alarm rate over many tests.

Note: there are several ways to compute a p-value, making more or less assumptions on the population distribution.
## Do two samples come from populations with the same mean ?
##
## Two Sample t-test
##
## data: samp1 and samp2
## t = -1.6056, df = 1998, p-value = 0.1085
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.15923302 0.01587286
## sample estimates:
## mean of x mean of y
## 1.016131 1.087812
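The output above can be reproduced in spirit with scipy (the samples below are simulated stand-ins, since the original data are not shown); the snippet also checks the pooled-variance formula behind the "Two Sample t-test" reported above:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
samp1 = rng.normal(1.02, 1.0, size=1000)  # hypothetical stand-ins for samp1/samp2
samp2 = rng.normal(1.09, 1.0, size=1000)

t_stat, p = ttest_ind(samp1, samp2)  # pooled variance, like R's var.equal=TRUE

# The same statistic by hand, from the pooled variance
n1, n2 = len(samp1), len(samp2)
sp2 = ((n1 - 1) * np.var(samp1, ddof=1) + (n2 - 1) * np.var(samp2, ddof=1)) / (n1 + n2 - 2)
t_manual = (np.mean(samp1) - np.mean(samp2)) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
```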
# Types of Machine Learning: A High-Level Introduction
In machine learning, we distinguish between several types and subtypes of learning and several learning techniques.
Broadly speaking, machine learning comprises supervised learning, unsupervised learning, and reinforcement learning. Problems that do not fall neatly into one of these categories can often be classified as semi-supervised learning, self-supervised learning, or multi-instance learning. In supervised learning, we generally distinguish between regression and classification problems.
## Regression vs Classification
Supervised learning tasks involve either a regression problem or classification problem.
In a regression scenario, the model attempts to predict a continuous numerical output. An example is predicting the price of a house. If your model returns an exact dollar value, you have a regression model.
A classification problem involves classifying examples into groups. We could train a housing price model to classify houses into one of several pricing groups. In that case, we are dealing with a classification problem.
For beginners, the difference between the two approaches may not always be clear-cut.
Besides the difference in the predicted output, the two approaches are similar with regard to the predictors. The predictors can have real or discrete values.
Classification models often return a class probability, a continuous output, rather than a class label. Outcomes are sorted into the class with the highest associated probability returned by the model.
Regression models may predict integer values that could also be interpreted as classes. For example, a housing price model that returns a dollar-accurate price could be interpreted as a classification model in which every possible price is a class.
For evaluating the predictive performance of a model, the distinctions matter, though. Regression models are evaluated differently from classification models.
### Evaluating Classification Models
To evaluate a classification model, accuracy is the most commonly used measure. Accuracy is fairly straightforward to calculate since it is just the percentage of the total examples that have been classified correctly. If 800 out of 1000 samples have been classified correctly, your accuracy is 80%.
In a binary classification problem such as "Does the patient have a certain disease or not" you would add up the number of patients that have been classified as having the disease and actually have the disease (true positives) and the number of patients that have been classified as not having the disease and that really do not have the disease (true negatives). Their percentage of the total number of evaluated cases makes up the accuracy. This result can be visualized in a confusion matrix.
\frac{TruePositive + TrueNegative}{Total} = \frac{700 + 100}{1000} = 0.8 = 80\%
Accuracy is a crude measure that is often not enough. In many classification scenarios, you are interested in more granular results. For example, if your model is used to predict whether a tumor is malignant or benign, you would care much more about reducing false negatives than false positives. In the case of a false positive, the patient might get a shock, but actually there is no reason to worry. A follow-up test will likely identify her as not having cancer. If you tell a patient who has cancer that she is healthy, you may very well cause premature death. You want your model to have high sensitivity (the ability to identify true positives correctly).
In our case, the sensitivity is calculated as follows:
Sensitivity = \frac{TruePositive}{TruePositive+FalseNegative}
= \frac{100}{100 + 50} = 0.667 = 66.7\%
I wouldn’t use this model to classify diseases, given this sensitivity.
In other scenarios, you are more interested in the specificity (the ability to identify true negatives correctly). In a legal context, most people would probably agree that false positives (punishing an innocent person) are worse than false negatives (letting a guilty person walk free). You can calculate the specificity as follows:
Specificity = \frac{TrueNegative}{FalsePositive+TrueNegative}
= \frac{700}{700 + 150} = 0.82 = 82%
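The three metrics are easy to compute directly from the confusion-matrix counts used in the examples above (TP = 100, FN = 50, TN = 700, FP = 150):

```python
# Confusion-matrix counts from the running example
tp, fn, tn, fp = 100, 50, 700, 150

accuracy = (tp + tn) / (tp + fn + tn + fp)   # 0.8
sensitivity = tp / (tp + fn)                 # ~0.667
specificity = tn / (fp + tn)                 # ~0.82
```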
The confusion matrix can be extended to cases with more than two classes.
### Evaluating Regression Models
Since a regression model predicts a continuous numerical output, you need a more advanced mathematical measure to quantify the error in the prediction. A simple percentage value like accuracy is not enough. The most common measure is the root mean squared error (RMSE). The RMSE squares the deviation of each predicted value from the actual value, averages these squared deviations across the entire training set, and takes the square root.
RMSE = \sqrt{\frac{\sum_{i=1}^N(actual_i - predicted_i)^2}{N}}
Squaring the difference helps in two ways. Firstly, it ensures that all measures are positive. Secondly, it punishes large differences disproportionately more than smaller ones.
Let’s say we have a regression model, and we feed it two values. The actual values are 5 and 6.2, and the model predicts 4.5 and 6, respectively. We can calculate the RMSE as follows:
RMSE = \sqrt{\frac{(5 - 4.5)^2+(6.2-6)^2}{2}} = 0.38
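The same computation in code (plain Python, no libraries):

```python
import math

actual = [5, 6.2]
predicted = [4.5, 6]
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
# rounds to 0.38, as in the worked example above
```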
## Supervised vs Unsupervised Learning
If you train a model on a dataset that consists of predictors and corresponding target variables, you are generally dealing with a supervised learning problem. The model is trained to learn a mapping from the predictors to the target output.
For example, using average blood pressure, average oxygen saturation, and body mass index to predict the risk of heart disease would be a supervised learning problem.
Examples of supervised learning algorithms are linear regression, logistic regression, decision trees, and support vector machines.
In an unsupervised learning problem, you would use a model to discover patterns in the data without predicting a certain target value. If you do not yet have a risk classification system for heart disease and reliable data for what constitutes a certain risk level, you could use an unsupervised learning model on the predictors. For example, people with a high BMI might have lower blood oxygen levels. Using a technique called clustering, you can find out whether the data points form groups. For example, let’s say people with a BMI around 34 in your sample also have O2 measurements around 91.
Now you can zoom in on that particular group and see whether they have a higher prevalence of heart disease than the other groups.
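To make the clustering idea concrete, here is a minimal sketch with made-up BMI/O₂ numbers (a plain two-cluster k-means written with numpy only; the group centers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: one group around (BMI 23, O2 97), another around (BMI 34, O2 91)
group_a = rng.normal([23.0, 97.0], [1.5, 0.8], size=(50, 2))
group_b = rng.normal([34.0, 91.0], [1.5, 0.8], size=(50, 2))
X = np.vstack([group_a, group_b])

# Plain k-means (Lloyd's algorithm), k=2, seeded with one point from each end
centers = np.stack([X[0], X[-1]])
for _ in range(20):
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])
```

The recovered centers land near the two invented group means — exactly the kind of structure you would then inspect for differences in heart-disease prevalence.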
Unsupervised learning is often concerned with making the data easier to understand and interpret. In addition to clustering, density estimation and dimensionality reduction techniques such as principal components analysis are commonly used to better understand the data.
Typical examples of unsupervised machine learning algorithms are K-means clustering and hierarchical clustering.
The essential difference between supervised and unsupervised learning is that you have a target variable in supervised learning situations. This implies that you need a sufficiently large sample where you already know the outcome. In problems like image recognition, this also means that your samples need to be labeled with the outcome of the target variable. If you want to use an image to train a model to recognize cats, the image needs to be labeled with the information of whether it actually contains a cat or not.
### Ensemble Learning
Ensemble learning combines multiple learning algorithms to attain better overall performance. The basic premise is that greater diversity among models will lead to better overall predictive performance since each model will pick up and emphasize a different aspect of the learning problem. The diversity can be achieved either by combining completely different types of models, known as hybrid ensembles, or by ensuring diversity in the training process among otherwise similar models. You can train each model in an ensemble on a different subset of the training data or on a different subset of features.
## Semi-Supervised Learning and Self-Supervised Learning
Some machine learning methods, especially those found in deep learning, cannot be neatly categorized as supervised or unsupervised.
### Self-Supervised Learning
A generative adversarial network (GAN) is an unsupervised neural network used to create and synthesize new data. You feed it random input data, and it learns to create images or other types of structured data. But how does it learn to map from random noise to an image? You use another network trained to recognize images and constantly give feedback to the first network on how much its output resembles the desired image.
So you are essentially using a trained network to train an untrained network. This is known as a self-supervised learning problem.
Autoencoders operate according to a similar principle. They are trained to create a different representation of data called an encoding. This technique is beneficial for dimensionality reduction because the encoding is usually lower-dimensional and more information-efficient. But for the encoding to be useful, it has to be possible to reconstruct the original data representation from the encoded version. During training, the autoencoder consists of an encoder and a decoder. The encoder is tasked with encoding the data while the decoder takes the output of the encoder and attempts to reconstruct the original data. Thus, the input data is also the target data. Once the encoder has learned to create a satisfactory encoding, the decoder is no longer necessary.
In both of these cases, you are basically turning an unsupervised problem into a supervised one.
### Semi-Supervised Learning
Semi-supervised learning is often applied in scenarios where so much labeled data would be required that creating a corpus of the appropriate size is not feasible. Machine translation and speech recognition are prime examples. When translating from one language to another or recognizing the human voice, the model has to deal with the enormous complexity and variability in human language. In addition to the semantics and syntax of the core language, people use their own dialects, abbreviations, and informal ways of communicating. Often, training data for a certain regional dialect doesn't even exist.
To tackle this problem, you can train a model on a few thousand or hundreds of thousands of labeled training examples such as sentence pairs. The patterns detected while learning the labeled representations can then be used to transfer knowledge to unlabelled data. For example, you might share sentence-level representations and lexical patterns detected in the training examples to make sense of input in dialects.
## Reinforcement Learning
Reinforcement Learning is at the heart of modern robotics and currently represents the paradigm most closely associated with truly thinking machines and artificial general intelligence. The basic idea is to get a machine to independently learn to perform actions in a given situation by rewarding and penalizing it based on the chosen behavior. The agent’s goal is to maximize the reward defined by a function. Rather than being trained directly on given data, the agent attempts to optimize the reward function based on a process of trial and error. When interacting with its environment, the agent progresses through a series of states that are each associated with choices and rewards. Rewards are cumulative, and so the agent learns to treat all states as connected.
This way, you can get a robot to do a series of complex movements such as locating an item, picking it up, and carrying it to a certain location.
An AI agent can also apply reinforcement learning to learn a complex series of optimal moves in a board game. Alpha Go, the AI system that beat the reigning human champion at the game of Go in 2017, is based on reinforcement learning.
The superhuman performance was achieved by letting the algorithm play millions of games against itself.
During the training process, the agent alternates between exploration phases, where it tries new moves, and exploitation phases, where it actually makes the moves it has determined to be ideal in the exploration phase.
That way, Alpha Go learned to optimize series of movements over longer and longer sequences of moves. Ultimately, whenever its opponent made a move, Alpha Go was able to immediately map out the next several moves and all their associated cumulative probabilities of leading to victory.
The biggest challenge associated with reinforcement learning for real-world scenarios such as robotics is usually the design and creation of the environment in which the agent learns to make the optimal decisions.
## Summary
Machine learning is a broad field that comprises several subtypes. For training, we distinguish between supervised learning, unsupervised learning, and reinforcement learning.
In supervised learning, we train a model to map from a given input to a known output to get the model to produce the expected target output on data previously unseen by the model.
In unsupervised learning, the model attempts to find patterns in the data without an expected target value.
In reinforcement learning, an agent learns to make a series of decisions independently by optimizing a reward function.
Some models cannot be neatly categorized into one of the three categories. As a result, additional categories such as self-supervised learning and semi-supervised learning have emerged.
When it comes to inference, we generally distinguish between regression models, which produce a continuous numerical output, and classification models that sort samples into several groups.
# 7 Regression via Maximum Likelihood
In this chapter we describe Maximum Likelihood (ML) to obtain estimated parameters (i.e. coefficients) for Linear Regression.
We have introduced OLS from a purely geometrical perspective, as well as from purely model-fitting perspective.
As previously mentioned, in a regression model we use one or more features $$X$$ to say something about the response $$Y$$. With a linear regression model we combine features in a linear way in order to approximate the response. In the univariate case, we have a linear equation:

$\hat{y}_i = b_0 + b_1 x_{i1}$

## 7.1 Linear Regression Reminder

In the general case when we have $$p>1$$ predictors:
$\mathbf{X} = \ \begin{bmatrix} 1 & x_{11} & \dots & x_{1p} \\ 1 & x_{21} & \dots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \dots & x_{np} \\ \end{bmatrix}, \qquad \mathbf{\hat{y}} = \begin{bmatrix} \hat{y}_{1} \\ \hat{y}_{2} \\ \vdots \\ \hat{y}_{n} \\ \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_{0} \\ b_{1} \\ \vdots \\ b_{p} \\ \end{bmatrix}$
With the matrix of features, the response, and the coefficients we have a compact expression for the predicted outcomes:
$\mathbf{\hat{y}} = \mathbf{Xb}$
In path diagram form, the linear model looks like this:
Notice that we haven’t said anything about $$Y$$ and $$X_1, \dots, X_p$$ other than $$Y$$ is a real-valued variable, and the $$X$$-variables have been encoded numerically. In other words, we have not mentioned anything about stochastic assumptions, nor have we made requirements about their distributions.
All we cared about in chapter Linear Regression were the algebraic aspects, the minimization criterion, and the solution.
It is possible to consider certain theoretical assumptions about the model $$f : \mathcal{X} \to \mathcal{Y}$$.
Consider a linear regression model:
$y_i = b_0 x_{i0} + b_1 x_{i1} + \dots + b_p x_{ip} + \epsilon_i = \mathbf{b^\mathsf{T} x_i} + \epsilon_i$
and assume that the noise terms $$\varepsilon_i$$ are independent and have a Gaussian distribution with mean zero and constant variance $$\sigma^2$$:
$\varepsilon_i \sim N(0, \sigma^2)$
Under this assumption, how do we obtain the parameters $$\mathbf{b} = (b_0, b_1, \dots, b_p)$$ of a linear regression model? We’ll describe how to obtain a solution via Maximum Likelihood (ML).
$y = f\left( X_1, \dots, X_p \right) + \varepsilon \ \Longleftrightarrow \ y_i = f\left( x_{i1}, \dots, x_{ip} \right) + \varepsilon_i$
Our goal is to estimate $$f$$, somehow. Why do we include an error term? Well, it is unlikely that the input variables will be able to express all of the behavior of the response variables. Hence, we add back this “error” in the form of the $$\varepsilon$$ term.
Now, to proceed with regression, we will need to make a few assumptions.
#### Soft Assumptions: (a.k.a. Gauss-Markov Assumptions)
• Assume that $$\varepsilon_i$$ is random. This requires us to establish a distributional assumption; we can pick any distribution so long as $$\mathrm{mean}(\varepsilon_i) = 0$$ and $$\mathrm{var}(\varepsilon_i) = \sigma^2$$ with $$\mathrm{cor}(\varepsilon_i, \varepsilon_{\ell}) = 0$$ for $$i \neq \ell$$.
• These assumptions enable us to state the Gauss-Markov Theorem: $$\mathbf{\hat{b}}_{\mathrm{OLS}}$$ is BLUE: the Best Linear Unbiased Estimator. We take “best” to mean “lowest variance.”
#### Hard Assumptions: (i.e. a bit more restrictive)
• Assume that $$\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$$. This in turn implies that $$y_i \sim \mathcal{N}(\mathbf{\hat{b}}^{\mathsf{T}} \mathbf{x}_i, \ \sigma^2)$$.
The benefit of these hard assumptions is they enable us to perform Maximum Likelihood Estimation.
### 7.1.1 Maximum Likelihood
For this section, we take the “hard assumptions” as given. Then, the joint distribution of $$y_1, y_2, \dots, y_n$$ is
\begin{align} P\left(\mathbf{y} \mid \mathbf{X}, \mathbf{b}, \sigma^2 \right) &= \prod_{i=1}^{n} f\left(y_i ; \mathbf{X}, \mathbf{b}, \sigma^2 \right) \\ &= \prod_{i=1}^{n} \frac{1}{\sqrt{ 2 \pi \sigma^2}} \mathrm{exp}\left\{ - \frac{1}{2 \sigma ^2} \left( y_i - \mathbf{b}^\mathsf{T} \mathbf{x_i} \right)^2 \right\} \\ &= \left( 2 \pi \sigma^2 \right)^{- \frac{n}{2} } \exp\left\{ - \frac{1}{2 \sigma^2} \sum_{i=1}^{n} \left( y_i - \mathbf{b}^\mathsf{T} \mathbf{x_i} \right)^2 \right\} \\ \end{align}
Taking the logarithm we get:
\begin{align} \ell &= \log\left[ P\left(\mathbf{y} \mid \mathbf{X}, \mathbf{b}, \sigma^2 \right) \right] \\ &= -\frac{n}{2} \ln\left( 2 \pi \sigma^2 \right) - \frac{1}{2 \sigma^2} \sum_{i=1}^{n} \left( y_i - \mathbf{b}^\mathsf{T} \mathbf{x_i} \right)^{2} \end{align}
Why did we do this? Well, suppose we want to find $$\mathbf{\hat{b}}_{\mathrm{ML}}$$, the maximum likelihood estimate of $$\mathbf{b}$$. To do so, we would take the derivative of the log-likelihood, set equal to 0, and solve:
\begin{align} \ell &= -\frac{n}{2} \ln\left( 2 \pi \sigma^2 \right) - \frac{1}{2 \sigma^2} \left( \mathbf{y} - \mathbf{X} \mathbf{b} \right)^{\mathsf{T}} \left( \mathbf{y} - \mathbf{X} \mathbf{b} \right) \\ &= c - \frac{1}{2 \sigma^2} \left( \mathbf{b}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X} \mathbf{b} - 2 \mathbf{b}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{y} + \mathbf{y}^\mathsf{T} \mathbf{y} \right) \\ \end{align}
Taking partial derivatives:
$\frac{\partial \ell}{\partial \mathbf{b}} = -\frac{1}{2 \sigma^2} \left( 2 \mathbf{X}^\mathsf{T} \mathbf{X} \mathbf{b} - 2 \mathbf{X}^\mathsf{T} \mathbf{y} \right) \rightarrow \mathbf{0} \\ \Rightarrow \ \mathbf{X}^\mathsf{T} \mathbf{X} \mathbf{b} = \mathbf{X}^\mathsf{T} \mathbf{y}$
$\mathbf{\hat{b}}_{\mathrm{ML}} = (\mathbf{X^\mathsf{T} X})^{-1} \mathbf{X^\mathsf{T}y}$
These are precisely the normal equations. That is, if $$\mathbf{X^\mathsf{T} X}$$ is invertible, the maximum likelihood estimator of $$\mathbf{b}$$ is exactly the same as the OLS estimate of $$\mathbf{b}$$.
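A quick numerical sketch of this equivalence (simulated data with invented coefficients): solving the normal equations gives the same coefficients as a generic least-squares routine.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 predictors
b_true = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ b_true + rng.normal(scale=0.3, size=n)

# Normal equations: solve (X'X) b = X'y  (the ML = OLS estimate)
b_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Same answer from a generic least-squares routine
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```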
### 7.1.2 ML Estimator of $$\sigma^2$$
Determine the ML estimate of the other model parameter: $$\sigma^2$$ (the constant variance).
The likelihood of $$\mathbf{y}$$, that is, the joint distribution of $$\mathbf{y} = (y_1, \dots, y_n)$$ given $$\mathbf{X}$$, $$\mathbf{b}$$, and $$\sigma$$ is:
\begin{align*} p(\mathbf{y} | \mathbf{X}, \mathbf{b}, \sigma) &= \prod_{i=1}^{n} p(y_i | \mathbf{x_i}, \mathbf{b}, \sigma) \\ &= \prod_{i=1}^{n} (2 \pi \sigma^2)^{-1/2} \exp\left\{ - \frac{1}{2}\left( \frac{y_i - \mathbf{b^\mathsf{T}x_i}}{\sigma} \right)^2 \right\} \\ &= (2 \pi \sigma^2)^{-n/2} \exp \left\{-\frac{1}{2\sigma^2} (\mathbf{y} - \mathbf{Xb})^\mathsf{T}(\mathbf{y} - \mathbf{Xb}) \right\} \end{align*}
In order to find ML estimate of $$\sigma$$, we calculate the (partial) derivative of the log-likelihood with respect to $$\sigma$$:
\begin{align*} \frac{\partial l(\sigma)}{\partial \sigma} &= \frac{\partial}{\partial \sigma} \left[ \frac{-n}{2} log(2\pi\sigma^2) - \frac{1}{2\sigma^2} (\mathbf{y} - \mathbf{Xb})^\mathsf{T}(\mathbf{y} - \mathbf{Xb}) \right] \\ &= -\frac{n}{2} \frac{\partial}{\partial \sigma} \left[ log(\sigma^2) \right] - \frac{\partial}{\partial \sigma} \left [ \frac{1}{2\sigma^2} (\mathbf{y} - \mathbf{Xb})^\mathsf{T}(\mathbf{y} - \mathbf{Xb}) \right] \\ &= -\frac{n}{\sigma} + \sigma^{-3} (\mathbf{y} - \mathbf{Xb})^\mathsf{T}(\mathbf{y} - \mathbf{Xb}) \end{align*}
Equating to zero we get: \begin{align*} \frac{n}{\sigma} &= \frac{1}{\sigma^3} (\mathbf{y} - \mathbf{Xb})^\mathsf{T}(\mathbf{y} - \mathbf{Xb}) \\ n \sigma^2 &= (\mathbf{y} - \mathbf{Xb})^\mathsf{T}(\mathbf{y} - \mathbf{Xb}) \\ \sigma^2 &= \frac{1}{n} (\mathbf{y} - \mathbf{Xb})^\mathsf{T}(\mathbf{y} - \mathbf{Xb}) \\ \sigma^2 &= \frac{1}{n} \sum_{i=1}^{n} (y_i - \mathbf{b^\mathsf{T} x_i})^2 \end{align*}
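A sketch of this estimator on simulated data (all numbers invented): note that the ML estimate divides by $$n$$ and is therefore always slightly smaller than the classical unbiased estimate, which divides by the residual degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)  # true sigma^2 = 0.25

b = np.linalg.solve(X.T @ X, X.T @ y)                 # ML / OLS coefficients
resid = y - X @ b
sigma2_ml = resid @ resid / n                          # ML estimate (divides by n)
sigma2_unbiased = resid @ resid / (n - X.shape[1])     # classical unbiased version
```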
# AIBECS.jl
The Algebraic Implicit Biogeochemistry Elemental Cycling System
If you want to use the AIBECS, you must first add it to your Julia environment, like every Julia package, by typing `]add AIBECS` at the REPL.
This documentation is organized in 4 parts[1]:
#### 1. Tutorials
If you want to try AIBECS for the first time, this is where you should start.
• The ideal age tutorial is a good place to start. It will show you how to generate a simple linear model of an idealized tracer.
• The radiocarbon tutorial is a little bit more involved, with some nonlinearities and more advanced use of AIBECS features and syntax.
• The coupled PO₄–POP model tutorial will show you how to couple 2 interacting tracers, one for phosphate transported by the ocean circulation, and one for POP transported by sinking particles.
#### 2. How-to guides
Here you will find goal-oriented walk-throughs.
#### 3. Explanation/discussion
Here you will find more general discussions and explanations surrounding the AIBECS.
#### 4. Reference
This section contains almost all the functions available in AIBECS.
Note
The AIBECS is being developed primarily by Benoît Pasquier with the help of François Primeau and J. Keith Moore from the Department of Earth System Science at the University of California, Irvine, and more recently with the help of Seth John from the Department of Earth Sciences at the University of Southern California.
Warning
This package is in active development, so you should expect some bugs. If you have any suggestions or feature requests, do not hesitate to open an issue directly on the AIBECS GitHub repository, or even better, submit a pull request!
# Starting on elastic collisions
1. Jan 19, 2005
### Coldie
Currently I'm stuck on this problem:
A pair of bumper cars at an amusement park ride collide elastically as one approaches the other directly from the rear. One has a mass of 450 kg and the other 550 kg. If the lighter one approaches at 4.5 m/s and the other is moving at 3.7 m/s, calculate their velocities after the collision.
Now, I'm sure that I understand the concept of using the conservation of momentum and relative velocity to get two equations to solve for the two unknowns, but it isn't working out. Here's what I did.
Conservation of momentum:
$$m_1v_1 + m_2v_2 = m_1v_1^{'} + m_2v_2^{'}$$
$$450(4.5) + 550(3.7) = 450v_1^{'} + 550v_2^{'}$$
$$4060 = 450v_1^{'} + 550v_2^{'}$$
Because of elastic collision, relative velocity is constant:
$$v_1 + v_2 = v_1^{'} + v_2^{'}$$
$$4.5 + 3.7 = v_1^{'} + v_2^{'}$$
$$v_1^{'} = 8.2 - v_2^{'}$$
And now, subbing back into the conservation of momentum equation...
$$4060 = 450(8.2 - v_2^{'}) + 550v_2^{'}$$
$$4060 = 450(8.2) - 450v_2^{'} + 550v_2^{'}$$
$$370 = 100v_2^{'}$$
BAM! The final velocity is equal to 3.7m/s! But wait, that's what it was in the BEGINNING! Grrr! I've had this problem with five different elastic collision questions, so I must be doing something consistently wrong! Could someone please point out where I'm veering off-course?
Thanks.
2. Jan 20, 2005
### maverick280857
You are making a fundamental mistake: setting the sum of velocities before and after collision equal to each other! Instead you should use energy conservation. Also, there is a ratio called the coefficient of restitution which relates relative velocities. For an elastic collision, this would be
$$e = 1 = \frac{v'_{1}-v'_{2}}{v_{2}-v_{1}}$$
(check the signs...e should be positive).
I'll leave the task of reading about e to you (it should be mentioned in your general physics textbook). In case you have a problem with my explanation, please feel free to ask and I'll clarify.
Cheers
vivek
3. Jan 20, 2005
### Staff: Mentor
Just to restate what vivek explained:
In a perfectly elastic collision, the relative velocity is reversed. But the relative velocity is $v_2 - v_1$, not $v_1 + v_2$. So that means $v_2 - v_1 = v'_1 - v'_2$. (Note that this reversal of relative velocity is derived by combining conservation of momentum and conservation of energy.)
4. Jan 20, 2005
### Coldie
To whoever moved this, I'm in grade 12 Physics!
Anyways, thanks for the help, guys. Doc, I used your equation, but it's still not working! I'll show you what I did, taking it from the second equation:
$$v_2 - v_1 = v'_1 - v'_2$$
$$3.7 - 4.5 = v'_1 - v'_2$$
$$-.8 - v'_2 = v'_1$$
$$4060 = 450(-.8 - v'_2) + 550v'_2$$
$$4060 = -360 - 450v'_2 + 550v'_2$$
$$3700 = 100v'_2$$
$$v'_2 = 37m/s$$
This is obviously incorrect... what am I doing wrong now?
5. Jan 20, 2005
### futb0l
On the 2nd equation, it should be + v'_2 , shouldn't it?
Btw, I don't know why this problem got moved to the college level forum
6. Jan 20, 2005
### Coldie
Well, Doc gave that equation that way... I'll try adding it in a sec.
7. Jan 20, 2005
### Coldie
Ok, that equation's not right either. Would somebody please just solve the equation for me, since evidently I'm completely unable to get the right answer?
8. Jan 21, 2005
### maverick280857
As I pointed out earlier, the coefficient of restitution (equal to 1 in your case) must be a positive quantity by definition. Make sure that the ratio you use is positive. Secondly, the cars are moving toward each other so if you take the direction of motion of one of the cars as positive, the other must be negative to maintain sign consistency (and even physics..because identical signs mean motion in the same direction). I do not know if you are introduced to vector algebra yet but I would suggest that you re-do the problem defining a positive direction of motion and prefixing signs to velocities (before and after collision) accordingly as they are parallel or antiparallel to the assumed +ve direction.
Note that this problem arises only when terms which are linear in velocities are involved (such as the restitution ratio expression, momentum conservation, relative velocity...) but not if you used the energy conservation equation directly, i.e. kinetic energy before collision = kinetic energy after collision (of both cars).
Hope that helps...
Cheers
Vivek
9. Jan 21, 2005
### Staff: Mentor
As futb0l pointed out, you messed up on the last step. If you did it right, you'd get $v'_2 = v'_1 + 0.8$. Now just combine that with the equation you had in your first post ($4060 = 450v_1^{'} + 550v_2^{'}$), and you should be able to solve it easily.
10. Jan 22, 2005
### QuantumDefect
USE CONSERVATION OF ENERGY! That with conservation of momentum will give you your two equations. Since it is elastic, energy is conserved also.
11. Jan 22, 2005
### Coldie
I at first tried using conservation of momentum with conservation of kinetic energy, but instead of simply plugging in the momentum equation to the kinetic energy one, I plugged in the kinetic energy equation to the momentum equation, giving me rooted values to deal with. I've got it now, thanks.
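For a quick numerical check, the standard closed-form result for a 1-D elastic collision (derived by combining conservation of momentum with conservation of kinetic energy) gives the final speeds directly:

```python
m1, v1 = 450.0, 4.5   # lighter car (kg, m/s)
m2, v2 = 550.0, 3.7   # heavier car

# Closed-form 1-D elastic collision
v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)   # ≈ 3.62 m/s
v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)   # ≈ 4.42 m/s
```

Momentum is conserved (450·3.62 + 550·4.42 = 4060 kg·m/s) and the relative velocity is reversed (v2f − v1f = 0.8 m/s = v1 − v2), matching the equations used in the thread.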
12. Jan 22, 2005
Select the active object with Python?
I just want to select this object, the Suzanne with the active origin in red. Is it active, or just the last selected?
I don't know the code, and I also don't know how to find this kind of code; I've been searching for two hours... any tips on how to find the desired action within Blender in cases like this?
Thanks
• bpy.context.active_object related – Ratt Dec 5 '18 at 4:57
• blender.stackexchange.com/questions/38618/… – atomicbezierslinger Dec 5 '18 at 6:44
• Please show the entire screen with footers so we have all the information needed – atomicbezierslinger Dec 5 '18 at 6:46
• bpy.context.active_object doesn't work for me on 2.8 – DB3D Dec 5 '18 at 17:15
• I also tried bpy.context.active_object.select = True and bpy.context.scene.objects.active = bpy.context.active_object. I got an error and I don't know what I'm doing; I'm just trying to copy code that works from the proposed link, but it doesn't result in anything – DB3D Dec 5 '18 at 17:20
You can usually retrieve the active object with: bpy.context.active_object or bpy.context.object
If you need to take the view layer into account: bpy.context.view_layer.objects.active
Chances are your view layer is already set, but if the view layer needs to be set, you can first do: bpy.context.window.view_layer = view_layer
[source]
To make sure the active object is also a selected object, you can do this:
import bpy
my_object = bpy.context.view_layer.objects.active # assign variable
my_object.select_set(True) # select the object
Setting an object as active is a similar process. See this answer to learn about that.
Many operations require both an active object and at least one additional selected object.
# Captain Clumsy's crummy communication
An entry in Fortnightly Topic Challenge #45: Flags
Captain Clumsy never ceases to amaze me... This morning I was down at the dockside when I heard a shout from a nearby ship. Steeling myself, I looked up... then let out a groan... Him again!
The docks were noisy and I couldn't hear exactly what he was saying, just snatches of it - I got the impression there was something very important missing from his ship or his supplies, but couldn't make out what. After a while he held up a finger in a way that said "Wait there one moment," and disappeared below deck.
I waited, and waited, but twenty minutes later he still hadn't re-emerged. Unable to linger any longer, I marched up the gangplank and onto his ship, finding him in his cabin.
"Ah, I was almost ready," he said. I looked down at his table, upon which were spread out seven European flags and a large piece of acetate with some shapes drawn upon it. Bemused, I asked what he was doing. "Just sending you a message," he said.
"What's wrong with just using the radio?" I asked. "Have you broken it again?!"
"No, I'm sure it's fine," replied Captain Clumsy. "I'm just a traditionalist - that technology doesn't always agree with me... And I can't find the normal ones, so I'm just..." He tailed off.
I was about to ask him what was so urgent, when looking down at the flags I was able to work it out for myself. Suffice to say I was not best pleased - I'm an engineer, for crying out loud! - and I told him quite plainly that he could go to the shops and get what he so desperately needed himself...
By examining the image below, can you work out what Captain Clumsy was lacking? How was he trying to communicate this to me?
Colour guide available here.
• I always love it when Captain Clumsy makes an appearance. Nice one! – Jeremy Dover Dec 22 '20 at 13:16
I think that what Captain Clumsy is lacking is:
KETCHUP
First of all, identify the flags:
They're alphabetically sorted:
BOSNIA AND HERZEGOVINA
BREMEN (Germany)
FINLAND
FRANCE
ICELAND
NORWAY
RUSSIA
Secondly, the acetate:
If we take each color square (violet is split into two triangles which make the square) and take that section of one of the flags, then sort them by rainbow order, we can make the seven letters from KETCHUP in nautical flag (NATO) style.
Which generates the following table:
| Country | Border | Shape | NATO Letter |
| ---------------------- | ------ | ----------------------------------------- | ----------- |
| Bosnia and Herzegovina | Red | Vertical yellow & blue | K |
| Russia | Orange | Horizontal blue & red | E |
| Norway | Yellow | Vertical red, white & blue | T |
| Iceland | Green | Horizontal blue, white, red, white & blue | C |
| France | Blue | Vertical white & red | H |
| Bremen (Germany) | Indigo | Checkerboard red & white | U |
| Finland | Purple | Blue square, white inside | P |
EDIT: The flags cut out with the acetate patterns look like this:
• You've found the answer Arturo, well done :) But what would make this answer to a visual puzzle really super would be a visual in response - e.g. a modified image to show the extraction of each of the letters from the flags. (Don't worry about 'sorting' the flags - it was more about finding the only combination which resulted in 7 valid letters; there's no secret clues in the text or picture that instruct you exactly which flag is red, orange, yellow, etc.) – Stiv Dec 22 '20 at 13:29
• @Stiv good to know so I don't spend more time looking for clues on the sorting. I noticed that some flags could be interpreted two ways (or more), but I was solid on 4 or 5 and sorting them helped me fix the other ones. I cleaned the table a little so it at least looks like a proper table. I'm looking for an easy way to draw the images from my Work Mac. – Arturo Vial Arqueros Dec 22 '20 at 16:30
• @Stiv, I've added a diagram – Arturo Vial Arqueros Dec 23 '20 at 0:18
• Nice, thanks Arturo :) I've just done a minor re-wording to remove that mention of needing another way to sort the flags, but feel free to re-edit into your own words. I've awarded you the checkmark, but if you do want to make any extra mentions of the thought process you alluded to in comments (you said you were "solid on 4 or 5") that might be useful extra background for other people following along with your answer. Up to you though - well done again :) – Stiv Dec 23 '20 at 16:48
|
{}
|
# 20.1. Spatial Data Structures
## 20.1.1. Spatial Data Structures
Search trees such as BSTs, AVL trees, splay trees, 2-3 Trees, B-trees, and tries are designed for searching on a one-dimensional key. A typical example is an integer key, whose one-dimensional range can be visualized as a number line. These various tree structures can be viewed as dividing this one-dimensional number line into pieces.
Some databases require support for multiple keys. In other words, records can be searched for using any one of several key fields, such as name or ID number. Typically, each such key has its own one-dimensional index, and any given search query searches one of these independent indices as appropriate.
### 20.1.1.1. Multidimensional Keys
A multidimensional search key presents a rather different concept. Imagine that we have a database of city records, where each city has a name and an $xy$ coordinate. A BST or splay tree provides good performance for searches on city name, which is a one-dimensional key. Separate BSTs could be used to index the $x$ and $y$ coordinates. This would allow us to insert and delete cities, and locate them by name or by one coordinate. However, search on one of the two coordinates is not a natural way to view search in a two-dimensional space. Another option is to combine the $xy$ coordinates into a single key, say by concatenating the two coordinates, and index cities by the resulting key in a BST. That would allow search by coordinate, but would not allow for an efficient two-dimensional range query such as searching for all cities within a given distance of a specified point. The problem is that the BST only works well for one-dimensional keys, while a coordinate is a two-dimensional key where neither dimension is more important than the other.
Multidimensional range queries are the defining feature of a spatial application. Because a coordinate gives a position in space, it is called a spatial attribute. To implement spatial applications efficiently requires the use of a spatial data structure. Spatial data structures store data objects organized by position and are an important class of data structures used in geographic information systems, computer graphics, robotics, and many other fields.
A number of spatial data structures are used for storing point data in two or more dimensions. The kd tree is a natural extension of the BST to multiple dimensions. It is a binary tree whose splitting decisions alternate among the key dimensions. Like the BST, the kd tree uses object-space decomposition. The PR quadtree uses key-space decomposition and so is a form of trie. It is a binary tree only for one-dimensional keys (in which case it is a trie with a binary alphabet). For $d$ dimensions it has $2^d$ branches. Thus, in two dimensions, the PR quadtree has four branches (hence the name “quadtree”), splitting space into four equal-sized quadrants at each branch. Two other variations on these data structures are the bintree and the point quadtree. In two dimensions, these four structures cover all four combinations of object- versus key-space decomposition on the one hand, and multi-level binary versus $2^d$-way branching on the other.
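As a concrete illustration of the kd tree's alternating splitting decisions, here is a minimal 2D insertion sketch (my own example; the text above does not give code):

```python
# Minimal 2D kd-tree insertion: split on x at even depths, y at odd depths.
class Node:
    def __init__(self, point):
        self.point = point   # an (x, y) tuple
        self.left = None     # subtree of points "below" this level's axis value
        self.right = None

def insert(root, point, depth=0):
    if root is None:
        return Node(point)
    axis = depth % 2         # 0 -> compare x coordinates, 1 -> compare y
    if point[axis] < root.point[axis]:
        root.left = insert(root.left, point, depth + 1)
    else:
        root.right = insert(root.right, point, depth + 1)
    return root

root = None
for p in [(35, 40), (5, 45), (50, 10)]:
    root = insert(root, p)   # (5,45) goes left of (35,40); (50,10) goes right
```

A PR quadtree would instead split the space itself into four fixed quadrants at each level, independent of where the data points happen to fall.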
|
{}
|
# Law of Large Numbers and the Central Limit Theorem (With Python)
Statistics does not usually go with the word "famous," but a few theorems and concepts of statistics find such universal application, that they deserve that tag. The Central Limit Theorem and the Law of Large Numbers are two such concepts. Combined with hypothesis testing, they belong in the toolkit of every quantitative researcher. These are some of the most discussed theorems in quantitative analysis, and yet, scores of people still do not understand them well, or worse, misunderstand them. Moreover, these calculations are usually not done manually—the datasets are too large, computations are time consuming—so it is equally important to understand the computation aspect of these theorems well.
A working knowledge of probability, random variables and their distributions is required to understand these concepts.
## Sample Mean
A "sample" is a set of outcomes of an experiment or an event. "Sample" can sometimes also be replaced by "trials", the number of repetitions of the experiment. For example, tossing a coin n times, with probability p of heads on each toss, gives n Bernoulli(p) trials. If the total number of heads is the random variable X, then X ~ Binomial(n, p) (the sum of n Bern(p) trials).
The values in a sample, $$X_{1},X_{2},X_{3}, ..., X_{n}$$, will all be random variables, all drawn from the same probabilistic distribution since they are outcomes of the same experiment. The $$X_{1},X_{2},X_{3}, ..., X_{n}$$ here are not actual numbers but names of random variables. A realization of the random variable $$X_{1}$$ will be $$x_{1}$$, an actual number.
A realization or an actual value of an experiment is one reading from the distribution. In a way, a probabilistic distribution is not a physical thing, and hence sometimes hard to relate to in everyday actions, even though highly applicable in everyday activities. It can be thought of as a tree. For example, a mango tree. We know the characteristics of the tree, what types of leaves it has, what types of fruits, their shape, size, color etc. We have a good idea of what a mango tree is, even if we don't physically see it & know each and every value. That is similar to a probabilistic distribution of a random variable X. One specific leaf, x, plucked from a mango tree, is like a realization of the random variable X.
Sample Mean is a random variable itself, as it is an average of other random variables. When referring to outcomes of the same experiment, all outcomes will belong to the same distribution, and hence will be identical in distribution. If each trial or sample is independent of the others, the random variables $$X_{1},X_{2},X_{3}, ..., X_{n}$$, will also be independent. This is then a set of I.I.D. (Independent and Identically Distributed) random variables.
For I.I.D. random variables $$X_{1},X_{2},X_{3}, ..., X_{n}$$ the sample mean $${\overline{X_n}}$$ is simply the arithmetic average of the set of random variables. (Note: the definition of sample mean applies to any set of random variables, but the fact that they are I.I.D. is going to be a special case scenario in common experiments, useful for deriving some important theorems.)
${\overline{X_n}} = \sum_{i=1}^{n}{X_i}/n$
In a small lab experiment, like measuring the length of an instrument with Vernier Calipers, we normally observe and record 3-5 readings of a measurement, and take the average of the readings to report the final value, to cancel any errors. This is high-school level mathematics. In research experiments, this level of simplification is not possible. But the idea behind the averaging is the same. In real life, we draw samples, Xi, and take the expectation of the samples to get the expectation of the sample mean. But the expectation of the sample mean is an average of all possible outcomes for each random variable Xi, not just the realized values. So, in a sense, it is a theoretical average.
$E[{\overline{X_n}}] = E[\sum_{i=1}^{n}{X_i}/n]$
When we take the expectation, the expectation of the errors becomes zero: E[e] = 0
## Convergence in Probability
A convergence in probability is defined as follows: a sequence of random variables Xn is said to converge in probability to a number a if, for any given small number ϵ, the probability of the difference between Xn and a being greater than ϵ tends to zero as n approaches ∞.
For any $$\epsilon$$ > 0, $\lim_{n \rightarrow \infty } P(|X_n - a| \geq \epsilon ) = 0$
So, the distribution of Xn bunches around the number a for a large enough number n. But it is not always necessary that the expectation E[Xn] will converge to a too. This can be explained by the presence of outliers which might offset the expectation away from the number a, where the big proportion of the outcomes lie.
## Law of Large Numbers (LLN)
As per the LLN, as the sample size n tends to ∞, the expectation of the sample mean tends to the true mean μ of the population with probability 1. This is true for a set of I.I.D. random variables Xi with mean μ and variance σ². It is calculated as follows:
$E[{\overline{X_n}}] = E[\sum_{i=1}^{n}{X_i}/n] \longrightarrow \mu$
This can be simulated and tested in Python by creating say 15 random variables, X1 to X15 that are Xi ~Bin(n,p) using the random generator of Numpy. The Xi must be IID. We calculate the value of the sample mean by averaging the variables. The true mean ( mu in the code) is very close to the calculated value mean based on the randomly generated distributions.
Note that in Numpy, np.random.binomial(n, p, size=None) uses a slightly different notation for the Binomial distribution than what we have been using so far. Here n refers to the number of trials in one variable, p is the probability of success, and size is the sample size (e.g., the number of coins tossed). It treats the Binomial distribution as a sum of indicator random variables; hence the output is the number of successes for each sample (like each coin). So, if we take size as 5, and n (trials) as 100, the output will be a list of 5 numbers, the number of successes out of 100 for each sample (e.g., for each coin).
For the sake of simplicity though, I have created 15 separate random variables, each with size= 1, for illustrative purposes. XN refers to the sample mean in the code.
import numpy as np
import scipy.stats as sp
#Running the Simulation with 15 IID Binomial RV for size=1 each, with n=1000 trials, probability of success is p=0.5
X1 = np.random.binomial(1000, 0.5, 1)
X2 = np.random.binomial(1000, 0.5, 1)
X3 = np.random.binomial(1000, 0.5, 1)
X4 = np.random.binomial(1000, 0.5, 1)
X5 = np.random.binomial(1000, 0.5, 1)
X6 = np.random.binomial(1000, 0.5, 1)
X7 = np.random.binomial(1000, 0.5, 1)
X8 = np.random.binomial(1000, 0.5, 1)
X9 = np.random.binomial(1000, 0.5, 1)
X10 = np.random.binomial(1000, 0.5, 1)
X11 = np.random.binomial(1000, 0.5, 1)
X12 = np.random.binomial(1000, 0.5, 1)
X13 = np.random.binomial(1000, 0.5, 1)
X14 = np.random.binomial(1000, 0.5, 1)
X15 = np.random.binomial(1000, 0.5, 1)
XN = (X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8+ X9 + X10 + X11 + X12 + X13 + X14 + X15)/15 #Sample Mean
mean = np.mean(XN) #Calculated mean of the sample
print("Sample Mean: "+ str(mean))
mu = sp.binom.mean(1000, 0.5) #True Mean of the sample
print("True Mean: " + str(mu))
Output:
Sample Mean: 500.8666666666667
True Mean: 500.0
This is the result for just 15 random variables. As the number increases, the sample mean gets closer to true mean. (Note: every time you run the code, it gives a new value for the sample mean because a new set of rv is generated every time. You can check and run the code here. )
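The growth of accuracy with sample size can also be seen directly. Here is a sketch using a fair six-sided die (true mean 3.5) rather than the binomial, chosen purely so it runs fast with the standard library; the seed is an arbitrary choice for reproducibility:

```python
import random
import statistics

random.seed(0)   # arbitrary seed, so the run is reproducible
true_mean = 3.5  # mean of a fair six-sided die
for n in (10, 1000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(n, statistics.mean(rolls))  # the sample mean drifts toward 3.5
```

With each tenfold increase in n, the gap between the sample mean and 3.5 shrinks, just as the LLN promises.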
The variance of the Sample Mean $$\overline{X_n}$$ is calculated as follows: $Var(\overline{X_n}) =\frac{Var(X_1 + X_2 + ... + X_n)}{n^2} = \frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n}$
Since Xi are independent, we can use the property of linearity of variances to find the variance of the sample mean. By using the variance calculated above, and the Chebyshev's inequality, we can prove the Weak Law of Large Numbers.
### Weak Law of Large Numbers
As per the Chebyshev's inequality,
For any $$\epsilon$$ > 0, $P(|Y_n - a| \geq \epsilon ) \leq \frac{Var(Y_n)}{ \epsilon ^2}$
Plugging in the values in this equation, we get:
$P(|\overline{X_n} - \mu| \geq \epsilon ) \leq \frac{\sigma^2}{ n\epsilon ^2} \underset{n \longrightarrow \infty }{\overset{}{\longrightarrow}} 0$
As n approaches infinity, the probability that the sample mean differs from the true mean μ by at least ϵ tends to zero, for any fixed small number ϵ.
## Central Limit Theorem
So far, we have not said anything about which distribution the Xi belong to, or about the distribution of the sample mean (which is a random variable too, remember?). Most of the time, knowing the mean is not enough; we would like to know more about the final distribution of the sample mean so we can understand its properties. The Central Limit Theorem describes exactly this.
The Central Limit Theorem (CLT) says:
$\frac{\sqrt{n} (\overline{X_n} - \mu )}{ \sigma } \underset{n \longrightarrow \infty }{\longrightarrow}N(0,1)$
import numpy as np
import scipy.stats as sp
import matplotlib.pyplot as plt
import math
#Running the Simulation with 10 IID Binomial RV for 500 coins, with 1000 trials, probability of success is 0.5
X1 = np.random.binomial(1000, 0.5, 500)
X2 = np.random.binomial(1000, 0.5, 500)
X3 = np.random.binomial(1000, 0.5, 500)
X4 = np.random.binomial(1000, 0.5, 500)
X5 = np.random.binomial(1000, 0.5, 500)
X6 = np.random.binomial(1000, 0.5, 500)
X7 = np.random.binomial(1000, 0.5, 500)
X8 = np.random.binomial(1000, 0.5, 500)
X9 = np.random.binomial(1000, 0.5, 500)
X10 = np.random.binomial(1000, 0.5, 500)
XN = (X1+X2+X3+X4+X5+X6+X7+X8+X9+X10)/10
mean = np.mean(XN) #Calculated mean of the sample
print("Sample Mean: "+ str(mean))
mu = sp.binom.mean(1000, 0.5) #true mean of the sample
print("True Mean: " + str(mu))
sigma = sp.binom.std(1000, 0.5)
print("True standard deviation: " + str(sigma))
#Plotting Sample mean distribution for CLT
gauss = np.random.standard_normal(5000)
nsigma = 1
nmu = 0
size=10
N = math.sqrt(10)
ZN = (N*(XN-mu))/ sigma
plt.figure(figsize=(10,7))
count, bins, ignored = plt.hist(gauss, 100, alpha=0.5, color='orange', density=True)
plt.plot(bins, 1/(nsigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - nmu)**2 / (2 * nsigma**2) ),
linewidth=3, color='r')
plt.hist(ZN, 100, alpha=0.7, color="Navy", density=True)
plt.legend(["Std Normal PDF","Standard Normal Distribution", "Sample Mean Distribution"], loc=2)
plt.savefig("CLT_new.jpeg")
plt.show()
You can test and run the code from here. Note that "n" is still not very large here. As "n" gets bigger, the results get closer and closer to the Standard Normal distribution.
This is another example of the proof of the theorem in Python, taking random variables with a Poisson distribution as the sample (figure below).
As per the Central Limit Theorem, the distribution of the sample mean converges to the distribution of the Standard Normal (after being standardized) as n approaches infinity. This is not a very intuitive result, and yet it turns out to be true. The proof of the CLT works with the moment generating function of the standardized sample mean, which converges to the moment generating function of the Standard Normal (refer to this lecture for a detailed proof). Since the proof involves advanced statistics and calculus, it is not covered here.
It is important to remember that the CLT is applicable for 1) independent, 2) identically distributed variables with 3) a large sample size. What counts as "large" is an open-ended question, but a sample size of about 30 or more is taken as acceptable by most people. The larger the sample size, the better, for this purpose. In some cases, even if the variables are not strictly independent, a very weak dependence can be approximated as independence for the purpose of analysis. In nature, it may be difficult to find complete independence, as there may be some unaccounted-for externalities, but if there is no clear, strong dependence, then independence is usually assumed. That said, the researcher must proceed with caution and weigh each situation on its own merits.
The Standard Normal distribution has many nice properties which simplify calculations and give intuitive results for experimental analysis. The bell curve is symmetric, centered around zero, has standard deviation = 1, is unimodal and is overall easy to read. The Z tables make it easy to relate the CDF with the values. No wonder that CLT finds wide applications in research, exploratory data analysis and machine learning.
References:
Law of Large Numbers and Central Limit Theorem | Statistics 110
John Tsitsiklis, and Patrick Jaillet. RES.6-012 Introduction to Probability. Spring 2018. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
|
{}
|
Number of Trials to First Success
Informally, the probability of an event is the fraction of times the event occurs in a long sequence of trials. Another way of looking at that is to ask for the average number of trials before the first occurrence of the event. This can be formalized in terms of mathematical expectation.
Let $V$ be an event that occurs in a trial with probability $p$. The mathematical expectation $E$ of the number of trials to the first occurrence of $V$ in a sequence of trials is $E = 1/p$.
Proof
We shall call an occurrence of $V$ in a trial a success; a trial is a failure otherwise.
The event will occur on the first trial with probability $p$. If that fails, it will occur on the second trial with probability $(1-p)p$. If that also fails, the probability of the event coming up on the third trial is $(1-p)^{2}p$.
More generally, the probability of the first success on the $n$th trial is $(1-p)^{n-1}p$. We are interested in the expected value:
$E = p + 2(1-p)p + 3(1-p)^{2}p + \ldots + n(1-p)^{n-1}p + \ldots$
One way to determine $E$ is to use a generating function:
$f(x) = p + 2(1-p)px + 3(1-p)^{2}px^{2} + \ldots + n(1-p)^{n-1}px^{n-1} + \ldots$
By the definition, $E = f(1)$. (Note that, since the series converges for $x=1$, $f(1)$ does exist.) To find $f$ in a closed form, we first integrate term-by-term and then differentiate the resulting function.
\displaystyle \begin{align} F(x) = \int{f(x)dx} &= \int\sum_{n=0}(n+1)(1-p)^{n}px^{n}dx \\ &= \sum_{n=0}(1-p)^{n}p\int{(n+1)x^{n}dx} \\ &= \sum_{n=0}(1-p)^{n}px^{n+1} + C \\ &= \frac{p}{1-p}\sum_{n=0}(1-p)^{n+1}x^{n+1} + C \\ &= \frac{p}{1-p}\frac{(1-p)x}{1 - (1-p)x} + C \\ &= \frac{px}{1 - (1-p)x} + C. \end{align}
Now differentiate:
$\displaystyle f(x) = F'(x) = \frac{p}{(1-(1-p)x)^{2}}$
from which $\displaystyle f(1)=\frac{p}{p^{2}}=\frac{1}{p}$, as required.
Shortcut
There is a shortcut that makes the finding of $E$ more transparent. Obviously, the expectation of the first success counting from the second trial is still $E$. Taking into account the first trial, we can say that with probability $1-p$ the expected number of trials to the first success is $E+1$, while it is just $1$ with probability $p$. This leads to a simple equation:
$E = p + (1-p)(E+1) = 1 + E(1-p).$
Solving this equation for $E$ gives $\displaystyle E=\frac{1}{p}$.
This derivation could be done more formally (the case of $\displaystyle p=\frac{1}{2}$ was treated separately elsewhere):
\displaystyle \begin{align} E &= p\sum_{n=0}(n+1)(1-p)^{n} \\ &= p\sum_{n=0}n(1-p)^{n} + p\sum_{n=0}(1-p)^{n} \\ &= p\sum_{n=1}n(1-p)^{n} + p\cdot \frac{1}{1-(1-p)} \\ &= (1-p)p\sum_{n=0}(n+1)(1-p)^{n} + 1 \\ &= (1-p)E + 1. \end{align}
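A quick Monte Carlo check of $E=1/p$ (my own sketch, not part of the article; the value $p=0.25$ and the seed are arbitrary choices):

```python
import random
import statistics

random.seed(1)  # arbitrary seed for reproducibility

def trials_to_first_success(p):
    """Count trials until the first success, each succeeding with probability p."""
    n = 1
    while random.random() >= p:  # this trial failed; run another
        n += 1
    return n

p = 0.25
runs = [trials_to_first_success(p) for _ in range(100_000)]
print(statistics.mean(runs))  # close to 1/p = 4
```

The average over many runs settles near $1/p$, matching both derivations above.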
Copyright © 1996-2018 Alexander Bogomolny
|
{}
|
# Schmitt Trigger design
Vcc = 5 V, Vin = a noisy input voltage
I'm designing a non-inverting Schmitt trigger using a voltage divider. As shown in the image, I designed it in LTspice and I got the desired results, but in the waveform I'm only seeing one threshold, which is my reference voltage at 2.143 V. I thought a Schmitt trigger generates two threshold values? Is there something wrong with this circuit?
I'm having trouble analyzing this waveform. Vreference is the threshold; how come I don't see two of them, when there should be a high threshold and a low threshold?
• Look at the +Vin pin. Apr 30, 2020 at 12:05
• Think about how this circuit should work. A Schmitt-trigger has an output that is either low or high. What are the low and high voltages in your circuit. Hint: that has to do with the supply voltages the opamp gets. Then consider each case, when the output is high, what will Vin need to be to make the circuit trip to the other state? Repeat for when the output is low. Apr 30, 2020 at 12:15
• I thought for a schmitt trigger it generate two threshold values? It doesn't generate them, a proper Schmitt trigger has a lower and a higher trip voltage. It is "by design". Imagine if your component values are such that the trip voltages are impossible to reach. Apr 30, 2020 at 12:17
• What is n001? Where is the input in your plot? Vref is not 2.5 V. Label your nets and show us the plot. We need to see both Vin and Vout. Apr 30, 2020 at 18:29
You are doing something wrong. Are you doing a DC sweep?
When the opamp is in negative saturation, the opamp will switch states when
$$V_\mathrm{in-} \frac{10}{14} > V_\mathrm{ref}.$$
When the opamp is in positive saturation, the opamp will switch states when
$$V_\mathrm{in+} + (V_\mathrm{cc} - V_\mathrm{in+}) \frac{4}{14} < V_\mathrm{ref}.$$
Substituting yields $$V_\mathrm{in-} > 3.5\ \mathrm{V}$$ and $$V_\mathrm{in+} < 1.5\ \mathrm{V}$$.
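The two trip points follow from solving the divider equation for the input voltage at each output state. A numeric sketch, assuming the 4 kΩ input resistor and 10 kΩ feedback resistor implied by the 4/14 and 10/14 factors above, $V_\mathrm{ref}=2.5$ V, and an output that saturates at the 0 V / 5 V rails:

```python
# Trip points of the non-inverting Schmitt trigger.
# Assumed values: R_in = 4k (input to V+), R_f = 10k (output to V+),
# V_ref = 2.5 V at the inverting input, output saturating at 0 V / 5 V.
R_in, R_f = 4e3, 10e3
V_ref = 2.5

def trip_input(v_out):
    # V+ = V_in * R_f/(R_in+R_f) + v_out * R_in/(R_in+R_f); solve V+ = V_ref
    return (V_ref * (R_in + R_f) - v_out * R_in) / R_f

print(trip_input(0.0))  # rising threshold: 3.5 V (output stuck low)
print(trip_input(5.0))  # falling threshold: 1.5 V (output stuck high)
```

Because the feedback moves the effective threshold with the output state, a transient sweep of a slow input should show the output flipping at two different input voltages, not one.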
Here is the circuit and the time domain simulation results.
simulate this circuit – Schematic created using CircuitLab
• I'm using transient. Is there a way to look at the threshold voltages on simulation? I have tried inverting schmitt trigger and I do see the threshold voltages for that but not for non-inverting Apr 30, 2020 at 17:23
Here is my simulation in LTspice, seems to work just fine:
The switching thresholds are almost exactly at the expected 3.5V & 1.5V, when the input is as slow as I've chosen here.
Using an op-amp as a comparator is not always the best approach (but that's not your problem here). .asc file below.
Version 4
SHEET 1 880 680
WIRE 192 48 128 48
WIRE 304 48 240 48
WIRE 128 64 128 48
WIRE 240 128 240 48
WIRE 192 144 192 48
WIRE 208 144 192 144
WIRE 368 160 272 160
WIRE 384 160 368 160
WIRE 208 176 192 176
WIRE 80 240 16 240
WIRE 192 240 192 176
WIRE 192 240 160 240
WIRE 192 272 192 240
WIRE 384 272 384 160
WIRE 384 272 272 272
FLAG 240 192 0
FLAG 304 80 0
FLAG 128 96 0
FLAG 16 272 0
FLAG 368 160 OUTPUT
FLAG 16 240 INPUT
SYMATTR InstName U1
SYMBOL MiniSyms4\\voltage- 304 64 R0
WINDOW 123 0 0 Left 0
WINDOW 39 0 0 Left 0
SYMATTR InstName V1
SYMATTR Value 5
SYMBOL res 288 256 R90
WINDOW 0 0 56 VBottom 2
WINDOW 3 32 56 VTop 2
SYMATTR InstName R1
SYMATTR Value 10K
SYMBOL MiniSyms4\\voltage- 128 80 R0
WINDOW 123 0 0 Left 0
WINDOW 39 0 0 Left 0
SYMATTR InstName V2
SYMATTR Value 2.5
SYMBOL res 176 224 R90
WINDOW 0 0 56 VBottom 2
WINDOW 3 32 56 VTop 2
SYMATTR InstName R2
SYMATTR Value 4K
SYMBOL MiniSyms4\\voltage- 16 256 R0
WINDOW 3 -59 43 Left 0
WINDOW 123 0 0 Left 0
WINDOW 39 0 0 Left 0
SYMATTR InstName V3
SYMATTR Value PULSE(0 5 10ns 100ms 100ms 10ns 200m)
TEXT -44 322 Left 2 !.tran 200ms
|
{}
|
# A rocket is fired from rest at x = 0
A rocket is fired from rest at x = 0 and travels along a parabolic trajectory described by $y^2=[120(10^3)x]$ m. If the x component of acceleration is $a_x=(\dfrac{1}{4}t^2)$ m/$s^2$, where t is in seconds, determine the magnitude of the rocket’s velocity and acceleration when t = 10 s.
#### Solution:
From the question, we are given the parameter equation of y. We need to find the parameter equation of x. To find it, we will need to integrate the x-component acceleration equation given to us, twice. Acceleration is:
$a=\dfrac{dv}{dt}$
$dv=a\,dt$
Integrate both sides:
$\,\displaystyle \int dv=\int a\,dt$
$\,\displaystyle \int^{v_x}_{0}dv=\int^{t}_{0}\dfrac{1}{4}t^2\,dt$
$v_x=\dfrac{1}{12}t^3$ m/s
Velocity is:
$v=\dfrac{dx}{dt}$
$dx=v\,dt$
Again, integrate both sides:
$\,\displaystyle \int dx=\int v\,dt$
$\,\displaystyle \int^{x}_{0}dx=\int^{t}_{0}\dfrac{1}{12}t^3\,dt$
$x=\dfrac{1}{48}t^4$ m
Substitute our x equation into our parameter equation of y.
$y^2=[120(10^3)x]$ m
$y^2=[120(10^3)(\dfrac{1}{48})(t^4)]$
(take the square root of both sides and simplify)
$y=50t^2$
Now that we can represent our equation with respect to time, we can take the derivative to figure out the velocity. Remember that taking the derivative of a position function gives us the velocity function.
$y=50t^2$
$v_y=\dot{y}=100t$
Let us write down the two equations for velocity we found:
$v_x=\dfrac{1}{12}t^3$ m/s
$v_y=100t$ m/s
At t = 10 s:
$v_x=\dfrac{1}{12}(10)^3=83.3$ m/s
$v_y=100(10)=1000$ m/s
The magnitude of velocity is:
$v=\sqrt{(v_x)^2+(v_y)^2}$
$v=\sqrt{(83.3)^2+(1000)^2}=1003.5$ m/s
To figure out the acceleration, we need to figure out $a_y$ which can be found by taking the derivative of the $v_y$ equation.
$v_y=100t$ m/s
$a_y=\dot{v_y}=100$ m/$s^2$
Since $a_x$ is given to us in the question, we have the following:
$a_x=(\dfrac{1}{4}t^2)$ m/$s^2$
$a_y=100$ m/$s^2$
At t = 10 s:
$a_x=(\dfrac{1}{4}(10)^2)=25$ m/$s^2$
$a_y=100$ m/$s^2$
The magnitude of acceleration is equal to:
$a=\sqrt{(a_x)^2+(a_y)^2}$
$a=\sqrt{(25)^2+{100}^2}=103.1$ m/$s^2$
$v=1003.5$ m/s
$a=103.1$ m/$s^2$
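As a sanity check, the numbers above can be reproduced directly (my own verification sketch, not part of the textbook solution):

```python
import math

t = 10.0
v_x = t**3 / 12   # v_x = t^3/12, from integrating a_x = t^2/4
v_y = 100 * t     # v_y = 100 t, from differentiating y = 50 t^2
a_x = t**2 / 4
a_y = 100.0

v = math.hypot(v_x, v_y)  # magnitude of velocity
a = math.hypot(a_x, a_y)  # magnitude of acceleration
print(round(v, 1), round(a, 1))  # 1003.5 103.1
```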
|
{}
|
Primitive roots generated from a primitive root
Let $p$ be a prime number, and let $a$ be a primitive root $\mod p$.
Is it true that $a^m$ is a primitive root if and only if $\gcd(m,p-1)=1$?
One direction is correct: if $a^m$ is a primitive root, then let $d = \gcd(m,p-1)$. Then $dq_1 = m, dq_2 = p-1$, where $q_1,q_2\in\mathbb{Z}$, and $q_2\leq p-1$. We then have that $$\left(a^m\right)^{q_2} = \left(a^{dq_1}\right)^{q_2} = \left(a^{q_1}\right)^{dq_2} = \left(a^{q_1}\right)^{p-1}$$ From Fermat's little theorem, the rightmost element is congruent to $1\mod{p}$, since $p\nmid a$. From this, we conclude that since $a^m$ is a primitive root (so its order is $p-1$), and $q_2\leq p-1$, necessarily $q_2 = p-1$, which means $d=1$.
I am not sure about the other direction: does $gcd(m,p-1)=1$ imply $a^m$ is primitive, given $a$ is primitive?
• try to use the fact set of invertible elements of Z/pZ form a cyclic group. – SAUVIK Apr 1 '16 at 11:33
• More generally, $ord(a^m) = (p-1)/ gcd(m,p−1)$. – lhf Apr 1 '16 at 11:41
• I was hoping to prove this using elementary number theoretic tools without group theory. How can I show this formula? – JonTrav Apr 1 '16 at 11:53
• – lab bhattacharjee Apr 1 '16 at 12:12
• More generally, we know, $$ord_ma=d, ord_m(a^k)=\frac{d}{(d,k)}$$ Proof @Page#95) of archive.org/details/NumberTheory_862 – lab bhattacharjee Apr 1 '16 at 12:14
Given that $p>2$ is prime and that $a$ is a primitive root modulo $p$, we know that $a^j\equiv 1 \bmod p$ if and only if $j$ is a multiple of $p{-}1$, that is, when $j\equiv 0 \bmod p{-}1$. Thus:
$\Rightarrow$:
Given $b=a^m$ with $\gcd(m,p-1)=1$, we know that $mk \equiv 0 \bmod p{-}1$ if and only if $k\equiv 0 \bmod p{-}1$. Thus $b^k\equiv 1 \bmod p$ if and only if $k\equiv 0 \bmod p{-}1$, and $b$ is a primitive root $\bmod p$.
$\Leftarrow$:
Given $c=a^n$ with $d:=\gcd (n,p-1)>1$, we can find $\ell= (p{-}1)/d<p{-}1$ and then $n\ell \equiv 0 \bmod p{-}1$ giving $c^\ell\equiv 1 \bmod p$ and thus $c$ is not a primitive root $\bmod p$.
Given that $a$ is a primitive root $\pmod p$, there is one basic way to generate a primitive root $\pmod p$ depending on the congruence of $p \pmod 4$. If $p = 1 \pmod 4$, then $-a$ is a primitive root. If $p = 3 \pmod 4$, then $-a^2$ is a primitive root.
Regardless of the prime $p$ and the primitive root $a$: if $a$ is a primitive root $\pmod p$, then $a^n$ is also a primitive root $\pmod p$ if and only if $\gcd(n,p-1)=1$. To prove this, let $a$ be a primitive root $\pmod p$. The order of $a \pmod p$ is $p-1$: $a^{p-1}\equiv 1 \pmod p$, but $a^{(p-1)/q}\not\equiv 1 \pmod p$ for every prime $q$ dividing $p-1$. In general the order of $a^n \pmod p$ is $(p-1)/\gcd(n,p-1)$, so if $\gcd(n,p-1)=d>1$, the order of $a^n$ is $(p-1)/d$, not $p-1$, and $a^n$ is not a primitive root. Use this to prove the first two statements.
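The equivalence is easy to check numerically for a small case. A sketch using $p=19$ and the primitive root $a=2$ (my choice of example, not from the answers above):

```python
from math import gcd

def order(x, p):
    """Multiplicative order of x modulo the prime p (assumes gcd(x, p) = 1)."""
    k, y = 1, x % p
    while y != 1:
        y = y * x % p
        k += 1
    return k

p, a = 19, 2
assert order(a, p) == p - 1  # 2 is a primitive root mod 19
for n in range(1, p):
    is_primitive = order(pow(a, n, p), p) == p - 1
    assert is_primitive == (gcd(n, p - 1) == 1)
print("verified for all n in 1 ..", p - 1)
```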
|
{}
|
# Transistor doesnt work?
#### Vaarna
Joined Nov 17, 2015
3
I have a simple circuit with two transistors making an AND gate, switches to turn the transistors on, and an LED to light up when both switches are on, but it doesn't work: the transistors just heat up.
PS: sorry if I'm a noob.
#### mcgyvr
Joined Oct 15, 2009
5,394
post a schematic..... noob
A schematic will show all the details your words left out.
#### GopherT
Joined Nov 23, 2012
8,012
I have a simple circuit with two transistors making an AND gate, switches to turn the transistors on, and an LED to light up when both switches are on, but it doesn't work: the transistors just heat up.
PS: sorry if I'm a noob.
Post a close-up photo of your circuit and a hand drawn (or better) schematic of the circuit. Also include the value of the resistors you used.
#### Vaarna
Joined Nov 17, 2015
3
#### GopherT
Joined Nov 23, 2012
8,012
Your transistors have been damaged because you should have a resistor between the (+) and base of the transistor. Look at the datasheet for the transistor and look up the max current into the base.
#### Dodgydave
Joined Jun 22, 2012
9,912
Yes, you have omitted the resistors to the bases of the transistors, so they will burn up; use any value from 1 kΩ to 10 kΩ.
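A quick back-of-the-envelope calculation shows why a base resistor in that range works. This is a sketch with assumed values (5 V supply, about 20 mA of LED current, a typical Vbe of 0.7 V, and a forced gain of 10); check your own transistor's datasheet:

```python
# Base-resistor sizing for a saturated NPN switch.
# All numeric values below are illustrative assumptions, not from the thread.
V_supply = 5.0      # V, assumed supply voltage
V_be = 0.7          # V, typical base-emitter drop for a silicon BJT
I_load = 0.020      # A, assumed LED/collector current
beta_forced = 10    # drive the base hard to guarantee saturation

I_base = I_load / beta_forced              # required base current: 2 mA
R_base = (V_supply - V_be) / I_base        # resistor between switch and base
print(f"R_base = {R_base:.0f} ohm")        # 2150 ohm, comfortably in 1k-10k
```

Without that resistor, the base-emitter junction clamps the switch voltage at about 0.7 V and the base current is limited only by the supply, which is what cooks the transistors.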
#### mcgyvr
Joined Oct 15, 2009
5,394
Don't even need the transistors just put the 2 switches in series.. (or are the switches momentary..even so)
#### GopherT
Joined Nov 23, 2012
8,012
Don't even need the transistors just put the 2 switches in series.. (or are the switches momentary..even so)
I think he is experimenting - making an AND logic gate.
In physics, chemistry and biology, a potential gradient is the local rate of change of the potential with respect to displacement, i.e. spatial derivative, or gradient. This quantity frequently occurs in equations of physical processes because it leads to some form of flux. In electrical engineering it refers specifically to electric potential gradient, which is equal to the electric field.
Definition
One dimension
The simplest definition for a potential gradient F in one dimension is the following:[1]
$F=\dfrac{\phi_2-\phi_1}{x_2-x_1}=\dfrac{\Delta\phi}{\Delta x}$
where $\phi(x)$ is some type of scalar potential and $x$ is displacement (not distance) in the $x$ direction; the subscripts label two different positions $x_1$, $x_2$, and the potentials at those points, $\phi_1 = \phi(x_1)$, $\phi_2 = \phi(x_2)$. In the limit of infinitesimal displacements, the ratio of differences becomes a ratio of differentials:
$F=\dfrac{\mathrm{d}\phi}{\mathrm{d}x}.$
Three dimensions
In three dimensions, Cartesian coordinates make it clear that the resultant potential gradient is the sum of the potential gradients in each direction:
$\mathbf{F}=\mathbf{e}_x\dfrac{\partial\phi}{\partial x}+\mathbf{e}_y\dfrac{\partial\phi}{\partial y}+\mathbf{e}_z\dfrac{\partial\phi}{\partial z}$
where $\mathbf{e}_x$, $\mathbf{e}_y$, $\mathbf{e}_z$ are unit vectors in the $x$, $y$, $z$ directions. This can be compactly written in terms of the gradient operator $\nabla$,
$\mathbf{F}=\nabla\phi,$
although this final form holds in any curvilinear coordinate system, not just Cartesian.
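The relation $\mathbf{F}=\nabla\phi$ is easy to check numerically; the following is a minimal sketch with NumPy, assuming the sample potential $\phi(x,y)=x^2+y$:

```python
import numpy as np

# Sample phi(x, y) = x**2 + y on a grid and recover F = grad(phi) numerically.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
phi = X**2 + Y

# Central differences (second-order accurate, exact for this quadratic phi)
dphi_dx, dphi_dy = np.gradient(phi, x, y, edge_order=2)

assert np.allclose(dphi_dx, 2 * X)   # analytic d(phi)/dx = 2x
assert np.allclose(dphi_dy, 1.0)     # analytic d(phi)/dy = 1
```

Each component of the numerical gradient matches the corresponding analytic partial derivative, which is exactly the statement of the three-dimensional formula above.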
This expression represents a significant feature of any conservative vector field F, namely F has a corresponding potential ϕ.[2]
By Stokes' theorem, this is equivalently stated as
$\nabla\times\mathbf{F}=\boldsymbol{0}$
meaning the curl, denoted ∇×, of the vector field vanishes.
Physics
Newtonian gravitation
In the case of the gravitational field g, which can be shown to be conservative,[3] it is equal to the gradient in gravitational potential Φ:
$\mathbf{g}=-\nabla\Phi.$
There are opposite signs between gravitational field and potential, because the potential gradient and field are opposite in direction: as the potential increases, the gravitational field strength decreases and vice versa.
Electromagnetism
In electrostatics, the electric field E is independent of time t, so there is no induction of a time-dependent magnetic field B by Faraday's law of induction:
$\nabla\times\mathbf{E}=-\dfrac{\partial\mathbf{B}}{\partial t}=\boldsymbol{0},$
which implies E is the gradient of the electric potential V, identical to the classical gravitational field:[4]
$-\mathbf{E}=\nabla V.$
In electrodynamics, the E field is time dependent and induces a time-dependent B field also (again by Faraday's law), so the curl of E is not zero like before, which implies the electric field is no longer the gradient of electric potential. A time-dependent term must be added:[5]
$-\mathbf{E}=\nabla V+\dfrac{\partial\mathbf{A}}{\partial t}$
where A is the electromagnetic vector potential. This last potential expression in fact reduces Faraday's law to an identity.
Fluid mechanics
In fluid mechanics, the velocity field v describes the fluid motion. An irrotational flow means the velocity field is conservative, or equivalently the vorticity pseudovector field ω is zero:
$\boldsymbol{\omega}=\nabla\times\mathbf{v}=\boldsymbol{0}.$
This allows the velocity potential to be defined simply as:
$\mathbf{v}=\nabla\phi.$
Chemistry
Main article: Electrode potentials
In an electrochemical half-cell, at the interface between the electrolyte (an ionic solution) and the metal electrode, the standard electric potential difference is:[6]
$\Delta\phi_{(M,M^{+z})}=\Delta\phi_{(M,M^{+z})}^{\ominus}+\dfrac{RT}{zeN_{A}}\ln a_{M^{+z}}$
where $R$ is the gas constant, $T$ the temperature of the solution, $z$ the valency of the metal, $e$ the elementary charge, $N_A$ Avogadro's constant, and $a_{M^{+z}}$ the activity of the ions in solution. Quantities with the superscript $\ominus$ denote that the measurement is taken under standard conditions.[6] The potential gradient is relatively abrupt, since there is an almost definite boundary between the metal and the solution, hence the interface term.
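As a numerical illustration of the half-cell formula, here is a sketch with assumed values (room temperature, a divalent metal with a hypothetical standard potential of 0.337 V, and an ion activity of 0.01); note that $zeN_A$ is just $zF$, with $F$ the Faraday constant:

```python
import math

# All numeric inputs are illustrative assumptions, not measured data.
R = 8.314             # J/(mol K), gas constant
T = 298.15            # K, room temperature
z = 2                 # valency of the metal ion (assumed divalent)
e = 1.602176634e-19   # C, elementary charge
N_A = 6.02214076e23   # 1/mol, Avogadro's constant
a_ion = 0.01          # activity of the M^{+z} ions (assumed)
dphi_std = 0.337      # V, hypothetical standard potential difference

dphi = dphi_std + R * T / (z * e * N_A) * math.log(a_ion)
print(f"interface potential difference = {dphi:.4f} V")   # about 0.278 V
```

Lowering the ion activity below 1 makes the logarithm negative, so the interface potential difference drops below its standard value, as expected.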
Biology
In biology, a potential gradient is the net difference in electric charge across a cell membrane.
Non-uniqueness of potentials
Since gradients in potentials correspond to physical fields, it makes no difference if a constant is added on (it is erased by the gradient operator which includes partial differentiation). This means there is no way to tell what the "absolute value" of the potential "is" – the zero value of potential is completely arbitrary and can be chosen anywhere by convenience (even "at infinity"). This idea also applies to vector potentials, and is exploited in classical field theory and also gauge field theory.
Absolute values of potentials are not physically observable, only gradients are. However, the Aharonov–Bohm effect is a quantum mechanical effect which illustrates that non-zero electromagnetic potentials (even when the E and B fields are zero) lead to changes in the phase of the wave function of an electrically charged particle, so the potentials appear to have measurable significance.
Potential theory
Field equations, such as Gauss's laws for electricity, for magnetism, and for gravity, can be written in the form:
$\nabla\cdot\mathbf{F}=X\rho$
where ρ is the electric charge density, monopole density (should they exist), or mass density and X is a constant (in terms of physical constants G, ε0, μ0 and other numerical factors).
Writing the field as the gradient of a potential, $\mathbf{F}=\nabla\phi$, turns this into Poisson's equation: $\nabla\cdot(\nabla\phi)=X\rho\quad\Rightarrow\quad\nabla^{2}\phi=X\rho$
A general theory of potentials has been developed to solve this equation for the potential. The gradient of that solution gives the physical field, solving the field equation.
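Such a potential equation can be solved numerically once boundary conditions are fixed. Below is a minimal one-dimensional finite-difference sketch, assuming a unit source term $X\rho=1$ on $[0,1]$ with the potential clamped to zero at both ends (exact solution $\phi=x(x-1)/2$):

```python
import numpy as np

# Finite-difference solution of the 1-D Poisson equation
#   d^2(phi)/dx^2 = X * rho
# with phi(0) = phi(1) = 0 and an assumed constant source X*rho = 1.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
rhs = np.ones(n - 2)                       # X*rho at interior points

# Tridiagonal Laplacian with Dirichlet boundaries
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h**2
phi = np.zeros(n)
phi[1:-1] = np.linalg.solve(A, rhs)

exact = x * (x - 1.0) / 2.0                # analytic solution for this source
assert np.allclose(phi, exact, atol=1e-8)
```

The second-order central-difference stencil is exact for quadratic solutions, which is why the numerical result matches the analytic one to machine precision here.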
Question
A bag contains six red marbles and seven white marbles. If a sample of four marbles contains at least one white marble, what is the probability that all the marbles in the sample are white? The probability that all the marbles in a sample of four marbles are white, given that at least one of the marbles is white, is ___. (Round to four decimal places as needed.)
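The answer can be computed directly: conditioning on "at least one white" simply removes the all-red samples from the denominator. A short check in Python:

```python
from math import comb

total = comb(13, 4)        # 715 ways to draw four of the thirteen marbles
no_white = comb(6, 4)      # 15 all-red draws
all_white = comb(7, 4)     # 35 all-white draws

# P(all white | at least one white) = P(all white) / P(at least one white)
p = all_white / (total - no_white)
print(round(p, 4))         # 0.05
```

So the required conditional probability is $35/700 = 0.0500$.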
### Integration (Economics)
#### Integration
Integration is the opposite process to differentiation.
#### Indefinite Integration
The indefinite integral of a function $f(x)$ with respect to $x$ is denoted by: $\int f(x) \mathrm{d} x.$ The function appearing inside the integral, $f(x)$, is known as the integrand. We can find the indefinite integral of a function using the rules for finding the indefinite integral. It is important to note that the indefinite integral of a function is a function itself. We denote this function by $F(x)$, so we can write $\int f(x) \mathrm{d} x = F(x) + C$ where $C$ is called the constant of integration and arises from the constant rule of differentiation (see here for more information about why it is necessary). Since integration is the reverse of differentiation, we have: $F'(x)=f(x)$
Note: Because integration is the opposite process of differentiation, the indefinite integral $F(x)$ is also referred to as an antiderivative.
#### Definite Integration
The definite integral of a function $f(x)$: $\int_{\large{a} }^{\large{b} }\,f(x)\, \mathrm{d}x$
is a number. This number is equal to the area between the curve of the function and the $x$-axis and between $2$ specified values of $x$. For example, the integral above is equal to the area under the curve of $f(x)$ between the points $x=a$ and $x=b$.
We call $b$ the upper limit of integration and $a$ the lower limit of integration.
We can see that the definite integral of the function $f(x)$ is just the indefinite integral of $f(x)$ evaluated between $2$ values of $x$. We have: $\int_{\large{a} }^{\large{b} }\,f(x)\, \mathrm{d}x=\Bigl[F(x)\Bigr]_{\large{a} }^{\large{b} }=F(b)-F(a).$ where $F(x)$ is the indefinite integral of $f(x)$.
Note: When evaluating definite integrals it is not necessary to include the constant of integration, $C$, that arises in indefinite integration.
#### Rules for Finding the Indefinite Integral
Since integration is the opposite process of differentiation, the rules of integration are the rules of differentiation, reversed.
##### The Constant Rule
See the constant rule of differentiation. $\int a\; \mathrm{d} x=ax+C$ where $a$ is a non-zero constant.
###### Example 1
Find the indefinite integral of the function $f(y)=5$.
###### Solution
$\int 5\; \mathrm{d} y=5y+C$
##### The Power Rule
See the power rule of differentiation.
$\int x^n \mathrm{d} x=\dfrac{x^{n+1}}{n+1}+C \qquad (n\neq -1)$
In words this says “add one to the exponent and divide by the new exponent”.
###### Example 1
Find the indefinite integral of the function $f(x)=x^4$.
###### Solution
$\int x^4 \mathrm{d} x=\frac{x^5}{5}+C$
###### Example 2
Find the indefinite integral of the function $f(x)=x^{-\frac{3}{4}}$.
###### Solution
\begin{align} \int x^{-\frac{3}{4}} \mathrm{d} x&=\frac{x^{-\frac{3}{4}+1}}{-\frac{3}{4}+1}+C\\ &=\frac{x^{\frac{1}{4}} }{\frac{1}{4} }+C\\ &=4x^{\frac{1}{4}}+C \end{align}
##### The Multiplicative Constant Rule
See the multiplicative constant rule of differentiation.
$\int af(x) \mathrm{d} x= a\int f(x) \mathrm{d} x=aF(x)+C$ where $a$ is a non-zero constant.
In words this says that we can bring multiplicative constants ($a$ in this case) outside of the integral. We then integrate the function $f(x)$, multiply the result by the multiplicative constant, and add the constant of integration.
###### Example 1
Find the indefinite integral of the function $f(x)=11x^3$.
###### Solution
\begin{align} \int 11x^3 \mathrm{d} x&=11\int x^3 \mathrm{d} x\\ &=\dfrac{11}{4}x^4+C \end{align}
##### The Sum or Difference Rule
See the sum or difference rule of differentiation.
To integrate a sum (or difference) of terms, integrate each term separately and add (or subtract) the integrals.
Note: Remember to include only one constant of integration when you add or subtract the integrals.
###### Example 1
Find the indefinite integral of the function $f(x)=2x^3+x^2-7$.
###### Solution
Using the multiplicative constant rule, the indefinite integral of the first term is: \begin{align}\int 2x^3 \mathrm{d} x&=\frac{2}{4}x^4+C\\ &=\frac{x^4}{2}+C \end{align}
We can then use the power rule to integrate the second term: $\int x^2 \mathrm{d} x=\frac{x^3}{3}+C$ Finally, using the constant rule we can integrate the last term: $\int 7 \mathrm{d} x=7x+C$ Adding the integrals of the first two terms and subtracting the integral of the last term gives: $\int 2x^3+x^2-7 \mathrm{d} x=\frac{x^4}{2}+\frac{x^3}{3}-7x+C$
##### The Function of a Function Rule
The function of a function rule of integration is the opposite of the chain rule of differentiation.
Here we will consider the simple case where the function which has been differentiated was of the form $y=f(x)^n$. This is a function of a function because $y$ is a function of $f$ and $f$ is a function of $x$.
Using the chain rule, we have: \begin{align} \frac{dy}{dx}&=\frac{dy}{df}\times \frac{df}{dx}\\ &=n[f(x)]^{n-1}\times f'(x) \end{align}
Now, suppose we are asked to find the indefinite integral of a function of the form $n[f(x)]^{n-1}\times f'(x)$. By reversing the chain rule, we can see that the indefinite integral of this function is: $\int n[f(x)]^{n-1}\times f'(x) \mathrm{d} x=f(x)^n+C$
###### Example 1
Find the indefinite integral of the function $g(x)=2x(x^2+1)$.
###### Solution
Since $2x$ is the derivative of $x^2+1$, we can apply the function of a function rule. Take $f(x)=x^2+1$ and $n=2$, so that $n[f(x)]^{n-1}f'(x)=2(x^2+1)\cdot 2x=4x(x^2+1)$, which is twice our integrand. Hence: \begin{align} \int 2x(x^2+1) \mathrm{d} x&=\frac{1}{2}\int n[f(x)]^{n-1} f'(x) \mathrm{d} x\\ &=\frac{1}{2}f(x)^n+C\\ &=\frac{1}{2}(x^2+1)^2+C \end{align}
###### Example 2
Find the indefinite integral of the function $h(x)=15x^2(x^3+4)^4$.
###### Solution
Since $3x^2$ is the derivative of $x^3+4$, we can apply the function of a function rule. Take $f(x)=x^3+4$ and $n=5$, so that $n[f(x)]^{n-1}f'(x)=5(x^3+4)^4\cdot 3x^2=15x^2(x^3+4)^4=h(x)$. Hence: \begin{align} \int 15x^2(x^3+4)^4 \mathrm{d} x&=\int n[f(x)]^{n-1} f'(x) \mathrm{d} x\\ &=f(x)^n+C\\ &=(x^3+4)^5+C \end{align}
##### The Exponential Function
Reversing the rules for differentiating the exponential function, we have: $\int e^x \mathrm{d} x= e^x+C$ and $\int g'(x) e^{g(x)} \mathrm{d} x= e^{g(x)}+C$
###### Example 1
Find the indefinite integral of the function $f(x)=12x^2 e^{4x^3}$.
###### Solution
Here the power to which $e$ is raised is a function of $x$, so we have $g(x)=4x^3$ and $g'(x)=12x^2$, giving: $\int 12x^2 e^{4x^3} \mathrm{d} x= e^{4x^3}+C$
#### Finding a Definite Integral
Suppose we want to find the area under the curve of the function $y=4{x^3}$ between $x=0$ and $x=4$. To evaluate a definite integral of any function, first find the indefinite integral of the function. For our chosen function this is: $\int 4x^3 \mathrm{d} x=x^4+C$ We must then evaluate this at the upper and lower limits, $x=4$ and $x=0$ respectively: \begin{align} \int_0^4 \, 4x^3 \, \mathrm{d}x&=\left[ x^4 \right]_{x=4}-\left[ x^4 \right]_{x=0}\\ &=256-0\\ &=256 \end{align} The area under the curve $y=4x^3$ between $x=0$ and $x=4$ is therefore equal to $256$.
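The same definite integral can be checked with a computer algebra system; a quick sketch using sympy:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(4 * x**3, x)              # indefinite integral: x**4
area = sp.integrate(4 * x**3, (x, 0, 4))   # definite integral over [0, 4]

assert F == x**4
assert area == 256
```

Evaluating the antiderivative at the limits, $4^4 - 0^4 = 256$, agrees with the worked result above.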
#### Applications of Integration in Economics
##### Deriving the Total Revenue Function from the Marginal Revenue Function
By definition, a firm's marginal revenue ($MR$) function is found by differentiating the firm's total revenue ($TR$) function. Since integration is the reverse of differentiation, given an $MR$ function we can obtain the corresponding $TR$ function by finding its indefinite integral. The same method yields the total cost function from a firm's marginal cost function.
###### Example 1
Suppose that a bakery's $MR$ function is $MR=3q^2+2$, where $q$ is the quantity of bread loaves produced and revenue is measured in $£$. The bakery earns $£8,040$ in revenue from selling $20$ loaves of bread. What is the bakery's total revenue from producing $100$ loaves of bread?
###### Solution
We must first find the bakery’s total revenue function by finding the indefinite integral of the marginal revenue function: \begin{align} TR(q)&=\int MR(q) \mathrm{d} q\\ &=\int 3q^2+2 \mathrm{d} q\\ &=\dfrac{3q^3}{3}+2q+C\\ &=q^3+2q+C \end{align}
Now, we have been told that by producing $20$ loaves of bread, the bakery earns $£8,040$ in revenue. We can use this piece of information to determine the value of the constant of integration, and thus obtain the bakery's TR function. \begin{align} 8,040&=20^3+2\times 20+C\\ \Rightarrow C&=8,040-20^3-2\times 20\\ \Rightarrow C&=0 \end{align} so the total revenue function is: $TR(q)=q^3+2q$ We can check that the total revenue function is correct by differentiating it. The derivative should be equal to the given marginal revenue function: $TR'(q)=3q^2+2=MR(q)$
We can now use the total revenue function to determine the revenue earned by the bakery when it sells $100$ loaves of bread: \begin{align} TR(100)&=100^3+2\times 100\\ &=£1,000,200 \end{align}
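The integration step and the arithmetic for the constant can be verified with sympy:

```python
import sympy as sp

q, C = sp.symbols('q C')
MR = 3 * q**2 + 2
TR = sp.integrate(MR, q) + C               # q**3 + 2*q + C

# TR(20) = 8040 pins down the constant of integration
C_val = sp.solve(sp.Eq(TR.subs(q, 20), 8040), C)[0]
TR = TR.subs(C, C_val)

assert sp.diff(TR, q) == MR                # differentiating recovers MR
print(C_val, TR.subs(q, 100))              # 0 and 1000200
```

Solving $TR(20)=8{,}040$ gives $C=0$, so $TR(q)=q^3+2q$ and $TR(100)=£1{,}000{,}200$.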
#### More Support
You can get one-to-one support from Maths-Aid.
# Solving Integral Equations – (1) Definitions and Types
If you have finished your courses in calculus and differential equations, you should head to your next milestone: integral equations. This marathon series (planned to run to 6 or 8 parts) is dedicated to interactive learning of integral equations, for beginners (starting with just definitions and demos) and for pros (taking it to the heights of problem solving). Comments and feedback are invited.
### What is an Integral Equation?
An integral equation is an equation in which an unknown function appears under one or more integration signs. Any integral calculus statement like $f(x)=\int_a^{b} K(x,t)\,y(t)\,\mathrm{d}t$ or $f(x)=\int_a^{x} K(x,t)\,y(t)\,\mathrm{d}t$ can be considered an integral equation. Notice that two types of integration limits appear in the equations above; their significance will be discussed later in the article.
A general type of integral equation, $g(x)\,y(x)=f(x)+\lambda\int_a^{\square} K(x,t)\,y(t)\,\mathrm{d}t,$ is called a linear integral equation, as only linear operations are performed on the unknown function. An equation which is not linear is, obviously, called a non-linear integral equation. In this article, when you read 'integral equation', understand it as 'linear integral equation'.
In this general form of the linear equation we have used a box '$\square$' to indicate the upper limit of the integration. Integral equations can be of two types according to whether the box (the upper limit) is a constant ($b$) or a variable ($x$). Integral equations of the first type, which involve constants as both limits, are called Fredholm-type integral equations. On the other hand, when one of the limits is a variable ($x$, the independent variable of which $y$, $f$ and $K$ are functions), the integral equation is called a Volterra integral equation.
Thus $g(x)\,y(x)=f(x)+\lambda\int_a^{b} K(x,t)\,y(t)\,\mathrm{d}t$ is a Fredholm integral equation and $g(x)\,y(x)=f(x)+\lambda\int_a^{x} K(x,t)\,y(t)\,\mathrm{d}t$ is a Volterra integral equation.
In an integral equation, $y(x)$ is to be determined, with $f(x)$, $g(x)$ and $K(x,t)$ being known and $\lambda$ being a non-zero complex parameter. The function $K(x,t)$ is called the 'kernel' of the integral equation.
STRUCTURE OF AN INTEGRAL EQUATION
### Types of Fredholm Integral Equations
As the general form of a Fredholm integral equation is $g(x)\,y(x)=f(x)+\lambda\int_a^{b} K(x,t)\,y(t)\,\mathrm{d}t$, the following types arise according to the values of $g(x)$ and $f(x)$:
1. Fredholm integral equation of the first kind, when $g(x)=0$: $f(x)+\lambda\int_a^{b} K(x,t)\,y(t)\,\mathrm{d}t=0$
2. Fredholm integral equation of the second kind, when $g(x)=1$: $y(x)=f(x)+\lambda\int_a^{b} K(x,t)\,y(t)\,\mathrm{d}t$
3. Fredholm integral equation of the homogeneous second kind, when $g(x)=1$ and $f(x)=0$: $y(x)=\lambda\int_a^{b} K(x,t)\,y(t)\,\mathrm{d}t$
The general equation, with arbitrary $g(x)$, is also called the Fredholm equation of the third (final) kind.
### Types of Volterra Integral Equations
As the general form of a Volterra integral equation is $g(x)\,y(x)=f(x)+\lambda\int_a^{x} K(x,t)\,y(t)\,\mathrm{d}t$, the following types arise according to the values of $g(x)$ and $f(x)$:
1. Volterra integral equation of the first kind, when $g(x)=0$: $f(x)+\lambda\int_a^{x} K(x,t)\,y(t)\,\mathrm{d}t=0$
2. Volterra integral equation of the second kind, when $g(x)=1$: $y(x)=f(x)+\lambda\int_a^{x} K(x,t)\,y(t)\,\mathrm{d}t$
3. Volterra integral equation of the homogeneous second kind, when $g(x)=1$ and $f(x)=0$: $y(x)=\lambda\int_a^{x} K(x,t)\,y(t)\,\mathrm{d}t$
The general equation, with arbitrary $g(x)$, is also called the Volterra equation of the third (final) kind.
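Although solution techniques come later in this series, a Fredholm equation of the second kind can already be approximated numerically by replacing the integral with a quadrature rule (the Nyström method). Here is a sketch with the illustrative, assumed choices $K(x,t)=xt$, $f(x)=x$, $\lambda=1$ on $[0,1]$, whose exact solution works out to $y(x)=3x/2$:

```python
import numpy as np

# Nystrom discretization of a Fredholm equation of the second kind,
#   y(x) = f(x) + lam * integral_0^1 K(x,t) y(t) dt,
# with assumed choices K(x,t) = x*t, f(x) = x, lam = 1
# (exact solution: y(x) = 3x/2).
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])        # trapezoid quadrature weights
w[0] = w[-1] = w[0] / 2.0

K = np.outer(x, x)                 # K(x_i, t_j) = x_i * t_j
f = x.copy()
lam = 1.0

# Discretized linear system: (I - lam * K * diag(w)) y = f
A = np.eye(n) - lam * K * w        # broadcasting multiplies column j by w_j
y = np.linalg.solve(A, f)

assert np.allclose(y, 1.5 * x, atol=1e-4)
```

Because the kernel here is separable, the exact solution follows by setting $c=\int_0^1 t\,y(t)\,\mathrm{d}t$ and solving $c=(1+c)/3$, giving $c=1/2$ and $y(x)=3x/2$; the discretized solution reproduces it to quadrature accuracy.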
### Singular Integral equations
In the general Fredholm/Volterra integral equations, two singular situations can arise:
• the limit $a\to-\infty$ and/or $b\to\infty$;
• the kernel $K(x,t)\to\infty$ at some points within the integration limits.
In either case the integral equation is called a singular (linear) integral equation.
Type-1: $a=-\infty$ and/or $b=\infty$. General form: $y(x)=f(x)+\lambda\int_{-\infty}^{\infty} K(x,t)\,y(t)\,\mathrm{d}t.$
Type-2: the kernel $K(x,t)\to\infty$ at some points within the integration limits, so that the integrand becomes unbounded there.
The nature of the solution of an integral equation depends solely on the nature of the kernel. Kernels are of the following special types:
1. Symmetric kernel: the kernel $K(x,t)$ is symmetric (complex symmetric, or Hermitian) if $K(x,t)=\overline{K(t,x)}$, where the bar denotes the complex conjugate. If the kernel has no imaginary part, this reduces to $K(x,t)=K(t,x)$.
2. Separable or degenerate kernel: a kernel $K(x,t)$ is called separable if it can be expressed as the sum of a finite number of terms, each of which is the product of a function of $x$ only and a function of $t$ only, i.e., $K(x,t)=\sum_{i=1}^{n} g_i(x)\,h_i(t)$.
3. Difference kernel: when $K(x,t)=K(x-t)$, the kernel is called a difference kernel.
4. Resolvent or reciprocal kernel: the solution of the integral equation $y(x)=f(x)+\lambda\int_a^{b} K(x,t)\,y(t)\,\mathrm{d}t$ is of the form $y(x)=f(x)+\lambda\int_a^{b} R(x,t;\lambda)\,f(t)\,\mathrm{d}t$. The kernel $R(x,t;\lambda)$ of the solution is called the resolvent or reciprocal kernel.
### Integral Equations of Convolution Type
The integral equation $u(x) = f(x) + \lambda \int_a^x K(x,t)\,u(t)\,dt$ is called of convolution type when the kernel $K(x,t)$ is a difference kernel, i.e., $K(x,t) = K(x-t)$.
Let $F_1(t)$ and $F_2(t)$ be two continuous functions defined for $t \ge 0$; then the convolution of $F_1$ and $F_2$ is given by
$$F_1 * F_2 = \int_0^x F_1(x-t)\,F_2(t)\,dt = \int_0^x F_1(t)\,F_2(x-t)\,dt.$$
For standard convolution, the limits are $-\infty$ and $\infty$.
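Difference kernels matter because convolution-type equations can be attacked with the Laplace transform, which turns the convolution into a product. A small symbolic check of this (the functions $F_1 = t$ and $F_2 = 1$ are arbitrary illustrative choices; sympy assumed available):

```python
import sympy as sp

x, t, s = sp.symbols('x t s', positive=True)

F1 = t                 # illustrative choice
F2 = sp.Integer(1)     # illustrative choice

# Direct convolution: (F1 * F2)(x) = integral_0^x F1(x - t) F2(t) dt
conv = sp.integrate(F1.subs(t, x - t) * F2, (t, 0, x))

# Convolution theorem: L{F1 * F2} = L{F1} . L{F2}
L1 = sp.laplace_transform(F1, t, s, noconds=True)   # 1/s**2
L2 = sp.laplace_transform(F2, t, s, noconds=True)   # 1/s
via_laplace = sp.inverse_laplace_transform(L1 * L2, s, x)

# Both routes agree (each gives x**2/2)
assert sp.simplify(conv - via_laplace) == 0
```

This is exactly the mechanism used later to solve convolution-type integral equations: transform, solve algebraically, invert.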
### Eigenvalues and Eigenfunctions of the Integral Equations
The homogeneous integral equation $u(x) = \lambda \int_a^b K(x,t)\,u(t)\,dt$ has the obvious solution $u(x) = 0$, which is called the zero solution or the trivial solution of the integral equation. Except this, the values of $\lambda$ for which the integral equation has a non-zero solution $u(x) \neq 0$ are called the eigenvalues of the integral equation, or eigenvalues of the kernel. Every non-zero solution $u(x)$ is called an eigenfunction corresponding to the obtained eigenvalue $\lambda$.
• Note that $\lambda = 0$ is not an eigenvalue, since $\lambda = 0$ forces the trivial solution $u(x) = 0$.
• If $u(x)$ is an eigenfunction corresponding to eigenvalue $\lambda$, then $c\,u(x)$ (for any constant $c \neq 0$) is also an eigenfunction corresponding to $\lambda$.
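As a concrete illustration (a kernel chosen here for simplicity, not taken from the original post), consider $u(x) = \lambda \int_0^1 x\,t\,u(t)\,dt$. The kernel $xt$ is separable, so every solution is a multiple of $x$; substituting the trial eigenfunction $u(x) = c\,x$ recovers the single eigenvalue $\lambda = 3$:

```python
import sympy as sp

x, t, lam, c = sp.symbols('x t lam c', nonzero=True)

# Trial eigenfunction u(x) = c*x for the kernel K(x, t) = x*t on [0, 1]
u = c * x
rhs = lam * sp.integrate(x * t * u.subs(x, t), (t, 0, 1))  # lam*c*x/3

# Requiring u(x) = rhs for all x forces lam = 3
eigenvalues = sp.solve(sp.Eq(u, rhs), lam)
print(eigenvalues)  # [3]
```

Note that $c$ drops out, matching the remark above that scalar multiples of an eigenfunction belong to the same eigenvalue.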
### Leibnitz Rule of Differentiation under integral sign
Let $F(x,t)$ and $\dfrac{\partial F}{\partial x}$ be continuous functions of both $x$ and $t$, and let the first derivatives of $G(x)$ and $H(x)$ be also continuous; then
$$\frac{d}{dx}\int_{G(x)}^{H(x)} F(x,t)\,dt = \int_{G(x)}^{H(x)} \frac{\partial F(x,t)}{\partial x}\,dt + F\big(x, H(x)\big)\frac{dH(x)}{dx} - F\big(x, G(x)\big)\frac{dG(x)}{dx}.$$
This formula is called Leibnitz’s Rule of differentiation under integration sign. In a special case, when $G(x)$ and $H(x)$ both are constants –let $G(x) = a$, $H(x) = b$– then
$$\frac{d}{dx}\int_a^b F(x,t)\,dt = \int_a^b \frac{\partial F(x,t)}{\partial x}\,dt.$$
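Leibnitz's rule is easy to sanity-check symbolically. In this sketch the integrand $F(x,t) = x\,t$ and the limits $G(x) = x$, $H(x) = x^2$ are arbitrary illustrative choices (sympy assumed available):

```python
import sympy as sp

x, t = sp.symbols('x t')
F = x * t              # illustrative integrand
G, H = x, x**2         # illustrative variable limits

# Left side: integrate first, then differentiate
lhs = sp.diff(sp.integrate(F, (t, G, H)), x)

# Right side: Leibnitz's rule, term by term
rhs = (sp.integrate(sp.diff(F, x), (t, G, H))
       + F.subs(t, H) * sp.diff(H, x)
       - F.subs(t, G) * sp.diff(G, x))

# Both give (5*x**4 - 3*x**2)/2
assert sp.expand(lhs - rhs) == 0
```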
### Changing Integral Equation with Multiple integral into standard simple integral
(Multiple Integral Into Simple Integral — The magical formula)
The integral of order $n$ is given by
$$I_n = \int_a^x \int_a^{x_1} \cdots \int_a^{x_{n-1}} u(x_n)\,dx_n\,dx_{n-1}\cdots dx_1.$$
We can prove that
$$I_n = \frac{1}{(n-1)!}\int_a^x (x-t)^{n-1}\,u(t)\,dt.$$
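The reduction of a repeated integral to a single integral can be verified for $n = 2$ with a quick symbolic check (the test function $u(t) = t^2$ and lower limit $a = 0$ are arbitrary illustrative choices; sympy assumed available):

```python
import sympy as sp

x, t, x1 = sp.symbols('x t x1')
a = 0
u = lambda v: v**2     # illustrative test function
n = 2

# Iterated integral: integral_a^x integral_a^{x1} u(t) dt dx1
iterated = sp.integrate(sp.integrate(u(t), (t, a, x1)), (x1, a, x))

# Single-integral form: 1/(n-1)! * integral_a^x (x - t)**(n-1) u(t) dt
single = sp.integrate((x - t)**(n - 1) * u(t), (t, a, x)) / sp.factorial(n - 1)

# Both routes give x**4/12
assert sp.expand(iterated - single) == 0
```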
### 4 thoughts on “Solving Integral Equations – (1) Definitions and Types”
# AC Adapter Input vs Output Power?
I have a laptop AC adapter that's rated for 19.5V/3.34A DC output, for a total of 65W. The input is rated at 100-240V/1.5A AC. My question is: what is limiting the output to 65W? It seems like the input can support at least 100*1.5=150W. I'm assuming you lose some in the AC to DC conversion, but 65W would be more than 50% loss. So, is the adapter able to support more than 65W, or is there something here I'm missing?
• The input is worst-case scenario. 1.5A may be inrush current or peak draw for the 2 milliseconds it takes for the output side to catch fire and shut down. – Bryan Boettcher Feb 25 '13 at 19:03
I do power supply design for a living. When I specify that this circuit is rated for 19.5v/3.34A 65W, there could be so many reasons for this. Few arbitrary examples include:
-Transformer saturates at 70 VoltAmps (VA)
-The transistors I am using start breaking down after 21V and/or 4 Amps
-My linear voltage regulators start sinking too much heat due to high voltage differentials
-Capacitors start having huge ripple currents that they cannot handle (ESR losses)
-I am unable to meet compliance in powerfactor/emissions at higher power ratings
and many more possible things...
EDIT: As for the 1.5 Amps input current, this is the maximum instantaneous current the adapter will pull from the input. It is NOT an RMS value.
• So why is the input power rating so much higher? This doesn't answer that. – Cybergibbons Feb 25 '13 at 21:50
• @Cybergibbons Answered above – hassan789 Feb 26 '13 at 15:44
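The arithmetic in the question can be squared with the label using typical supply numbers. A rough sketch, where the efficiency and power-factor values are illustrative assumptions, not figures from this adapter's datasheet:

```python
# Label values from the question
V_IN_MIN = 100.0        # lowest rated mains voltage, volts RMS
I_IN_LABEL = 1.5        # labelled input current, amps
P_OUT = 19.5 * 3.34     # rated DC output, ~65 W

# Illustrative assumptions, NOT datasheet values
EFFICIENCY = 0.87       # plausible for a laptop switch-mode supply
POWER_FACTOR = 0.6      # uncorrected rectifier inputs draw peaky current

# The label's 100 V x 1.5 A is apparent power (VA), not real power (W)
apparent_power = V_IN_MIN * I_IN_LABEL            # 150 VA
real_input_power = P_OUT / EFFICIENCY             # ~75 W actually drawn
rms_input_current = real_input_power / (V_IN_MIN * POWER_FACTOR)  # ~1.25 A

print(f"label apparent power: {apparent_power:.0f} VA")
print(f"real input power:     {real_input_power:.1f} W")
print(f"typical RMS current:  {rms_input_current:.2f} A")
```

Under these assumptions the adapter really draws only about 75 W from the wall; the gap to 150 is mostly apparent power from the poor power factor plus label headroom, not conversion loss.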
The adapter clearly states that it supports only 65W. There could be many reasons for this, but in the end attempting to draw more would make it fail, either immediately or prematurely.
The internal design of the adapter isn't made for more than 65W, and it shouldn't be used above this. Likely the components cannot sustain the extra current required, and the input spec could be generous, padded to stay conservative about losses (though the losses are nowhere near 65W).
• "There could be many reasons for this" - Thats my exact question. Is it just to be conservative? Are there other components that limit this? I'm guessing mostly the first one. I'm not planning on using it for anything, it was more of a theory question. – 7200rpm Feb 25 '13 at 19:01
• @7200rpm The adapter IS made from components. As one chain is strong as it's weakest link, the adapter is powerful as it is weakest component. If the specs are correct, then 65W is maximum power that guarantee correct work of the device without failing. If you want to know what component is in this case, then probably you would have to open it and examine the specs for the internal components, including heat dissipation as it is can be limiting factor too. – zzz Feb 25 '13 at 19:20
• @zzz Very true - So there is probably a component in the adapter that is a weaker link than simply input vs output power. And I probably shouldn't be opening up my wife's power adapter :) – 7200rpm Feb 25 '13 at 20:00
The label rating of the power supply will generally reflect the conditions under which it underwent its safety characterizations.
These not only include abnormal tests (shorting a transformer, for instance) but steady-state thermal tests at whatever condition generates the worst-case heating for the part.
It also influences the temperature rating (or "class") of the magnetic components within the power supply. Higher load can require higher temperature rating of the magnetics, which require higher-rated components (tape, magnet wire, bobbin, etc.)
Operating the adapter beyond its label rating is a 'misapplication' in the eyes of the regulatory people - i.e. don't do it.
Tech Problem Aggregator
# Used Dell Computer. Bargain or something else?
Q: Used Dell Computer. Bargain or something else?
I can buy from a friend a used Dell computer which has been refurbished, for $430.00. I probably would put in an SSD. Is this a good deal or something to avoid? It has Win7 HP, but I probably could talk them into Win7 Pro.
PROCESSOR: Intel i7-920, 2.66 GHz, 8MB, BLM, D0
Card, Graphics: 512, 4350, M1 13
Dual In-line Memory Module: 2G, 1066, 256X64, 8, 240, 2RX8
Card, Wireless Network: DW1525, Full Height
Dual In-line Memory Module: 2G, 1066, 256X64, 8, 240, 2RX8
Thanks
Smorton
A: You might be able to get a better barebones deal at NewEgg.com with their holiday sales this weekend: Newegg.com - Daily Deals on Computer Parts, Laptop Computers, Electronics, Digital Cameras, Unlocked Cell Phones and more!
9 more replies
Answer Match 44.52%
I have been invaded by this insidious piece of @!!#$, Bargain Buddy. I have been able, thanks to allentech.com, to remove everything except the folder "Bargain Buddy" in Program Files. I have tried using the command prompt in Accessories and Safe Mode/Command Prompt. When using the command prompt in either mode, it acts as if it has deleted the folder. But it's always still there. I have deleted all registry entries. If I try to delete the folder in Windows Explorer, I get an alert that says either that the file is in use or there is a sharing violation. I use Windows 2000 Pro with all updates and service packs. I have also been invaded by pop-up crap like killmessenger.com, message buster.com, blockmessenger.com. Are these coming from Bargain Buddy? How do I stop this? I have learned that there is a heavy price to pay by using programs like KaZaa, which I have now removed. It's like they say, nothing is free. Any help would be greatly appreciated.
Stu Steinberg
A:Bargain Buddy
16 more replies
Answer Match 44.52%
I have tried unsuccessfully to get rid of bargain buddy, using AdAware, Spybot and AntiSpyware. Please help!
I have attached my Hijackthis log that I got after running the Hijackthis Analyzer.
====================================================================
Log was analyzed using KRC HijackThis Analyzer - Updated on 6/3/05
Get updates at http://www.greyknight17.com/download.htm#programs
***Security Programs Detected***
C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
C:\Program Files\Norton AntiVirus\navapsvc.exe
C:\Program Files\Norton AntiVirus\AdvTools\NPROTECT.EXE
C:\Program Files\Norton AntiVirus\SAVScan.exe
C:\Program Files\Common Files\Symantec Shared\CCPD-LC\symlcsvc.exe
C:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe
C:\Program Files\Common Files\Symantec Shared\Security Center\SymWSC.exe
C:\Program Files\Common Files\Symantec Shared\ccApp.exe
C:\Program Files\Microsoft AntiSpyware\gcasServ.exe
C:\Program Files\Microsoft AntiSpyware\gcasDtServ.exe
C:\Program Files\SpywareGuard\sgmain.exe
C:\Program Files\SpywareGuard\sgbhp.exe
O2 - BHO: SpywareGuard Download Protection - {4A368E80-174F-4872-96B5-0B27DDD11DB2} - C:\Program Files\SpywareGuard\dlprotect.dll
O2 - BHO: NAV Helper - {BDF3E430-B101-42AD-A544-FADC6B084872} - C:\Program Files\Norton AntiVirus\NavShExt.dll
O3 - Toolbar: Norton AntiVirus - {42CDD1BF-3FFB-4238-8AD1-7859DF00B1D6} - C:\Program Files\Norton AntiVirus\NavShExt.dll
O4 - HKLM\..\Run: [ccApp] &qu... Read more
A:getting rid of bargain buddy
Hi and Welcome to TSF!
Please subscribe to this thread so you'll be notified as soon as we post your fix. To do this, please click here. On the proceeding page, make sure Instant notification by email is selected, then click Add subscription.
In the meanwhile, I suggest that you stop using Interent Explorer until we've fully disinfected your machine. Please download & use an alternative browser like Firefox.
Please print out or copy this page to Notepad. Make sure to work through the fixes in the exact order it is mentioned below. If there's anything that you don't understand, ask your question(s) before proceeding with the fixes. You should not have any open browsers when you are following the procedures below.
During the course of disinfection, I may ask you to fix a program that you wish to retain. Please post back to inform me.
WARNING
You are running HiJackThis from an inappropriate location. It should be run from a permanent folder. This program creates backup files which we may need to use later. If the program is in a temporary folder, important backups may be accidentally deleted.
Please go into Windows Explorer
Click on C:\
Click on File > New > Folder
Call it HJT, or another name of your choice.
Move all files to the newly created folder.
Enable the viewing of Hidden filesClick Start.
Open My Computer.
Select the Tools menu and click Folder Options.
Select the View tab.
Select the Show hidden files and folders option.
Deselect... Read more
10 more replies
Answer Match 44.52%
Hi there,
just discovered this site and am looking for help with my computer and how to get rid of "Bargain Buddy". I have downloaded "adaware SE" after reading this suggestion in other threads. But am still experiencing the reoccurence of "BB". This is a home computer running on Windows XP, and I share it with my two daughters who have different log ins. I have run the Hijack this program and this is the log. Any help will be greatly appreciated. Thanks so much.
JL from Ontario.
Logfile of HijackThis v1.99.0
Scan saved at 12:23:47 AM, on 1/4/2005
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe
C:\Program Files\Common Files\Microsoft Shared\VS7Debug\mdm.exe
C:\Program Files\Norton AntiVirus\navapsvc.exe
C:\WINDOWS\system32\nvsvc32.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\Analog Devices\SoundMAX\SMAgent.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Analog Devices\SoundMAX\SMax4PNP.exe
C:\Program Files\Common Files\Symantec Shared\Security Center\SymWSC.exe
C:\Program Files\Analog Devices\SoundMAX\Smax4.exe
C:\Program Files\Common Files\Symantec Shared\ccApp.exe
C:\P... Read more
A:Bargain Buddy
Hi
Make sure you have already run Spybot S & D(check for updates) as this will do a preliminary clean first.Some files below may not be present after running the above programs.
Then....
Turn off your System Restore SEE HERE Reinstate it when your log is cleaned and then create a new restore point.Close your browser window and run hjt in safe mode... HOW TO RUN SAFE MODE and have "Hijack This" fix all the following items in the list below by placing a check in the appropriate boxes and selecting "fix checked".If any EXE files have been selected go into HijackThis/Config/Misc/Tools/ and open process manager. Select the EXE files (if they are there) and click Kill process before deleting.
Folders that have been highlighted RED in the log will need to be uninstalled.Check first as some folders maybe uninstalled via the Add/Remove program.
Files highlighted in BLACK in the log will need to be removed from your hard drive.
Make sure to have your system set to show hidden files and folders.. HOW TO SHOW FILES When done Download Cleanup and run it to clean out the temp folders ..Then please reboot and post a new log when finished...
R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant =
R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch =
R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Local Page =
R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page =
O4 - HKLM\..\Run: [f... Read more
6 more replies
Answer Match 44.52%
I thought I had removed it, guess not. Here is my HJT log.
**Note, Spybot has been run and cleaned. AdAware has been run and cleaned. both updated seconds before.
Thanks in advance, you guys do great work..
Jim
Logfile of HijackThis v1.97.7
Scan saved at 4:52:30 PM, on 11/12/2004
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\Explorer.EXE
C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe
C:\PROGRA~1\ALWILS~1\Avast4\ashmaisv.exe
C:\WINDOWS\system32\CTHELPER.EXE
C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe
C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
C:\WINDOWS\system32\RUNDLL32.EXE
C:\WINDOWS\System32\gearsec.exe
C:\Program Files\Common Files\Symantec Shared\ccApp.exe
C:\WINDOWS\system32\rundll32.exe
C:\PROGRA~1\MYWEBS~1\bar\1.bin\mwsoemon.exe
C:\Program Files\MSN Messenger\msnmsgr.exe
C:\Program Files\Norton AntiVirus\IWP\NPFMntor.exe
C:\WINDOWS\system32\nvsvc32.exe
C:\Program Files\Common Files\Symantec Shared\SNDSrvc.exe
C:\Program Files\Common Files\Symantec Shared\SPBBC\SPBBCSvc.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Common Files\Symantec Shared\CCPD-LC\symlcsvc.exe
C:\Program Files\RealVNC\VNC4\WinVNC4.exe
C... Read more
A:Bargain Buddy??
Hi and welcome to TSF! I am reserving judgement on the Alexa toolbar for now, but will probably recommend to remove on next round.
Please print out or copy this page to Notepad. Make sure to work through the fixes in the exact order it is mentioned below. If there's anything that you don't understand, ask your question(s) before proceeding with the fixes. You should not have any open browsers when you are following the procedures below.
You have an outdated version of HijackThis. Click here to get the latest version of HijackThis.
Go to My Computer->Tools/View->Folder Options->View tab and make sure that 'Show hidden files and folders' (or 'Show all files') is enabled. Also make sure that Display the contents of System Folders' is checked. Windows XP's search feature is a little different. When you click on 'All files and folders' on the left pane, click on the 'More advanced options' at the bottom. Make sure that Search system folders, Search hidden files and folders, and Search subfolders are checked.
Reboot into Safe Mode (hit F8 key until menu shows up).
Uninstall the following via the Add/Remove Panel (Start->(Settings)->Control Panel->Add/Remove Programs) if they exist:
MyWebSearch
WindowsAdControl
WildTangent - This is an online gaming package that is installed by a number of third party applications and even OEMs, ISPs and AIM. The games aspect of this is really rather cool. The being installed without you asking for ... Read more
5 more replies
Answer Match 44.52%
Hi there. I constantly get the "Bargain Buddy" showing up whenever I do an Ad-Ware clean. Is there a way of finding out if I have a program loading in my computer that constantly loads it up?
A:Bargain Buddy
16 more replies
Answer Match 44.1%
Hi, my computer is just acting weird, slow, and when im typing it jumps around and just does screwy stuff, I ran spybot, it didnt find anything, Ad-Aware picked up Bargain Buddy, I keep deleting it, but it keeps coming back. Here is my latest log, Im not sure what to fix. Thanks.
Logfile of HijackThis v1.99.1
Scan saved at 11:56:39 PM, on 9/16/2006
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\Ati2evxx.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\system32\Ati2evxx.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe
C:\Program Files\HPQ\HP Wireless Assistant\HP Wireless Assistant.exe
C:\Program Files\Hewlett-Packard\HP Software Update\HPWuSchd2.exe
C:\Program Files\QuickTime\qttask.exe
C:\Program Files\Common Files\AOL\1127576323\ee\AOLSoftware.exe
C:\WINDOWS\Logi_MwX.Exe
C:\PROGRA~1\Sony\SONICS~1\SsAAD.exe
C:\Program Files\Java\jre1.5.0_06\bin\jusched.exe
C:\Program Files\Winamp\winampa.exe
C:\PROGRA~1\mcafee.com\agent\mcagent.exe
C:\PROGRA~1\McAfee.com\PERSON~1\MpfTray.exe
C:\Program Files\AOL\Active Security Monitor\ASMonitor.exe
C:\Program Files\mcafee.com\antivirus\oasclnt.exe
C:\Program Files\Messenger\msmsgs.exe
C:\PROGRA~... Read more
A:Bargain Buddy, possibly others? Please Help
Where is Adaware finding these items? Exact locations and file names if possible please.
Nothing really stands out in that HJT log. Let's have you run a few tools, and see what we can see.
Download Ewido Anti-spywareInstall Ewido Anti-spyware
Double-click the icon on Desktop to launch Ewido
You will need to update Ewido to the latest definition files.On the top of the main screen click Shield
Click the word active to change it to inactive
On the top of the main screen click Update.
Then click on Start Update. The update will start and a progress bar will show the updates being installed.
If you are having problems with the updater, you can use this link to manually update EwidoOnce the update has completed select the "Scanner" icon at the top of the screen, then select the "Settings" tab.
Once in the Settings screen click on "Recommended actions" and then select "Quarantine".
Under "Reports"Select "Automatically generate report after every scan"
Un-Select "Only if threats were found"
When you have finished updating, EXIT Ewido anti-spyware. Do Not run a scan just yet, we will shortly.
Download and install CleanUp!
NOTE: CleanUp! deletes EVERYTHING out of your temp/temporary folders, it does not make backups. If you have any documents or programs that are saved in any Temporary Folders, make a backup of these before running CleanUp!. Do NOT run this program if you have XP Professional 64 bit edit... Read more
1 more replies
Answer Match 44.1%
Amazon had a pre-order price of $50 for Windows 7. Will that happen for Windows 8?
A: A Bargain Price for Windows 8?
Originally Posted by CalBear: "Amazon had a pre-order price of $50 for Windows 7. Will that happen for Windows 8?"
Who knows
ask Ms marketing
I'd suspect if they want to push their products some sort of discount wouldn't be a bad business decision.
However "Corporate Greed" tends to get in the way --it really is amazing how many businesses seem to think a HIGH % of ZERO money is worth more than a smaller % of SOMETHING.
Ever had discussions with a Bank when for whatever reason you just can't pay -- they throw all sorts of threatograms / debt collectors at you in spite of the fact that most of these companies have about as much right to collect as your Grandma's cat -- and cannot understand that for reasons that might be TOTALLY outside your control -- for example jobs being off-shored, redundancy etc etc that you just CAN'T at the moment pay (not WON'T pay) -- so their scrobbity lawyers get involved and after a HUGE amount of hassle the Bank gets awarded 1 EUR per month for the next 1,000 years -- which had they been reasonable the whole thing could have been settled much quicker.
Cheers
jimbo
3 more replies
Answer Match 44.1%
I think my teenager d/l something and infected my brand new PC with Bargain Buddy. It was detected by Symantec but it cannot delete or fix 9 malicious files. I also scanned numerous times using AdAware and Spybot S&D to no avail. I restored to a point 6 weeks before infection and Symantec says the files are still there. I came to this forum, d/l HJT and my scan log is below. Please note I am a complete newbie, and I have never done registry editing. I am a quick learner however, so please if you will provide idiot-proof instructions, I will figure it out. Thank you!
Logfile of HijackThis v1.99.1
Scan saved at 7:31:30 PM, on 2/27/2005
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
c:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
C:\WINDOWS\System32\gearsec.exe
c:\Program Files\Norton AntiVirus\navapsvc.exe
C:\WINDOWS\System32\tcpsvcs.exe
C:\WINDOWS\System32\svchost.exe
c:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe
C:\Program Files\Common Files\Symantec Shared\Security Center\SymWSC.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\Java\j2re1.4.2_06\bin\jusched.exe
C:\windows\system\hpsysdrv.exe
C:\WINDOWS\System32\hphmon05.exe
C:\HP\KBD\KBD.EXE... Read more
A:Bargain Buddy problems
Please print out or copy this page to Notepad. Make sure to work through the fixes in the exact order it is mentioned below. If there's anything that you don't understand, ask your question(s) before proceeding with the fixes. You should not have any open browsers when you are following the procedures below.
Go to My Computer->Tools/View->Folder Options->View tab and make sure that 'Show hidden files and folders' (or 'Show all files') is enabled. Also make sure that Display the contents of System Folders' is checked. Windows XP's search feature is a little different. When you click on 'All files and folders' on the left pane, click on the 'More advanced options' at the bottom. Make sure that Search system folders, Search hidden files and folders, and Search subfolders are checked.
For the options that you checked/enabled earlier, you may uncheck them after your log is clean. If we ask you to fix a program that you use or want to keep, please post back saying that (we don't know every program that exists, so we may tell you to delete a program that we think is bad to keep).
Turn off system restore by right clicking on My Computer and go to Properties->System Restore and check the box for Turn off System Restore. Click Apply and then OK. Restart your computer. After we are finished with your log file and verified that it's clean, you may turn it back on and create a new restore point.
The Temp folders should be cleaned out periodically as inst... Read more
3 more replies
Answer Match 44.1%
Hey there, hopefully someone could help me solve this problem, I would appreciate it very much.
I use Adaware SE, Spybot search and destroy, AVG and a firewall to protect my computer. What's been happening is that I'll run the scans for each of the programs and find the odd bit of spyware or whatever, then run the scans again and it'll show up clean. But Adaware just keeps showing up this "bargain buddy" again and again after I go onto the internet. I tried to look up some guides for removing bargain buddy but a few of them have quite different instructions, and I'm not sure which ones are the most legitimate.
A similar thing happened with a trojan showing up on adaware, which the other programs didn't seem to catch (I tried running them in different orders, tho typically I'll run adaware 1st, spybot s&d 2nd, and avg 3rd). I *think* the name of the trojan was win32.agent or something. It came back a couple of times after the system appeared to be clean. After my last few scans, it hasn't shown up, but I have a feeling that this trojan and bargain buddy are somehow linked. Maybe there's some security hole that they both exploit, each time I log onto the internet.
Apart from that, there's been no major problems. No files deleted or not working, no lockups, no popups, no weird extra search bars or whatever.
Also I have a feeling that this all started when a mate of mine used my computer without permission and installed some dodgy software that looked l... Read more
A:bargain buddy + possible trojan
Hello UC1, and welcome to TSF.
I am currently reviewing your log. Please note that this is under the supervision of an expert analyst,
and I will be back with a fix for your problem as soon as possible.
You may wish to Subscribe to this thread (Thread Tools) so that you are notified when you receive a reply.
Please be patient with me during this time.
7 more replies
Answer Match 44.1%
until recently i was a console freak, i had a ps2, gc, 2xds, + a psp, until i bought this pc im using for £80. its specs when i got it were: amd athlon 64 3000+ cpu, asrock k8 upgrade vm800 motherboard, 1x 256mb elixir ddr400 pc3200 184 pin ram, 1x 17gb wd170aa hard drive, 2x cd/dvd rewriters (nec + toshiba) 1 of which is dual layer, it also has a floppy disk drive. it came in a grey + silver case with 2 lights 1x red 1xblue on top of the tower. it has 8x usb 2.0( 2 on top, + 6 at the back, it also has a little door on the side that says usb on it, but when you open it there is nothing in it but space for 2 more usb ). !!!!sorry this is so long but please bear with me!!!! i have since bought a samsung sp2504c 250gb 8mb hard drive(£75), harman/kardon soundsticks II(£140) speeze ee-hd01 hard drive cooler with 2 fans(£8), 17in flatscreen tft monitor(£100), usb mouse(£5), belkin serial ata cables that go under black light(£12), logitech quickcam express webcam(£20) + a convertor that plugs into usb + lets me plug in a ps2 control pad(£10).!!!!almost finished!!!! the upgrades i want to make are:asus a8n sli premium socket 939 nvidia nforce4 sli atx motherboard(£?), 4x corsair 512mb ddr400 pc3200 ram(£?), amd athlon 64 x2 4600+ manchester 2000mhz ht2 x512kb l2 cache socket 939 dual core cpu(£?), 2x nvidia geforce 7600gt in sli formation(about £300), lg gsa h10n dual layer drive with nero6(£45), creative sound blaster audigy 2 zs 24bit 192khz 7.1 channel pci soundcard(... Read more
A:was this pc a waste of money or a bargain?!!!
it'd be easier to just start fresh... you'll probably be needing to bargain off some of your gaming systems though
11 more replies
Answer Match 44.1%
Hi Guys,
My computer has got a whole load of spyware nasties on it which I'm having real trouble getting rid of.
I have done an Adaware SE scan as per usual and it found about 150 entries. When I went to delete them my computer just hung- I assume that this is cause there are things actively running?
Anyway- Norton picked up on a load of them too, but also could not delete some of the entries.
I tried uninstalling a lot of the crap from Add/Remove programs and then scanned again but a lot of it was still there- I'm a little sceptical about how effective the uninstall was anyway. There is one entry on there called "IE Host" which, when I tried to uninstall, came up with some bogus window that said something like "Downloading uninstaller". I cancelled it because it looked suss.
Anyway- here's my HJT log- I'd be VERY grateful if one of you experts out there could lend a hand.
Cheers,
--
Monster
Logfile of HijackThis v1.98.2
Scan saved at 17:18:04, on 08/12/2004
Platform: Windows ME (Win9x 4.90.3000)
MSIE: Internet Explorer v5.50 (5.50.4134.0100)
Running processes:
C:\WINDOWS\SYSTEM\KERNEL32.DLL
C:\WINDOWS\SYSTEM\MSGSRV32.EXE
C:\WINDOWS\SYSTEM\SPOOL32.EXE
C:\WINDOWS\SYSTEM\MPREXE.EXE
C:\WINDOWS\SYSTEM\MSTASK.EXE
C:\WINDOWS\SYSTEM\SSDPSRV.EXE
C:\WINDOWS\SYSTEM\MDM.EXE
C:\WINDOWS\SYSTEM\INETSRV\INETINFO.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\CCEVTMGR.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\CCSETMGR.EXE
C:\WINDOWS\SYSTE... Read more
A:Bargain Buddy and a whole loada others...
13 more replies
Answer Match 44.1%
i ran spybot and ad aware, but a free spyware program
says mysearch and bargain buddy remain on my computer
are there any free removal programs for them?
A:getting rid of mysearch and bargain buddy
8 more replies
Answer Match 44.1%
Hi guys,
Big apologies if this is a repeat. I think I have messed up with posting. I had an older version of HijackThis, so had to download the newer one and run it again. After working thru the wonderful instructions Lawrence has written, I then ran another XoftSpy scan and Bargain Buddy is still appearing on my computer. I am being flooded with Junk Mail and it is driving me nuts. Could someone please help me with removing this pest?
Looking forward to hearing from you,
rapheke
Logfile of Trend Micro HijackThis v2.0.2
Scan saved at 10:20:04, on 17/09/2007
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v7.00 (7.00.6000.16512)
Boot mode: Normal
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\Ati2evxx.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Sygate\SPF\smc.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\system32\Ati2evxx.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\Common Files\Apple\Mobile Device Support\bin\AppleMobileDeviceService.exe
C:\Program Files\CA\eTrust Vet Antivirus\ISafe.exe
C:\Program Files\Google\Common\Google Updater\GoogleUpdaterService.exe
c:\opt\MBCASE\pm&#... Read more
A:Hijackthis Log - Bargain Buddy
Sorry for the delay. If you are still having problems please post a brand new HijackThis log as a reply to this topic. Before posting the log, please make sure you follow all the steps found in this topic:Preparation Guide For Use Before Posting A Hijackthis LogPlease also post the problems you are having.
1 more replies
Answer Match 44.1%
Problem with removing "Exact Advertising - BargainsBuddy"
It keeps coming up on Ad-aware and Spybot..
When i press delete on Ad-aware it just acts as if it deletes but every timem i scan again with ad-aware it comes up again..
Spybot- It keeps getting scanned and even after i try to fix the problem then try fixing the problem after i restart comp its still there.
Im sure it a Registry Key because thats what it says on Spybot
There's two registry keys and both come up on both spybot + ad-aware
Programs i use (All up to date)
Spybot
Ad-aware SE Personal
Spyware Blaster
Killbox
HijackThis
____
HijackThis Log
Logfile of HijackThis v1.99.1
Scan saved at 5:06:37 PM, on 6/9/2005
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\Ahnlab\Smart Update Utility\Ahnsdsv.exe
C:\Program Files\Ahnlab\V3\MonSvcNT.EXE
C:\WINDOWS\system32\nvsvc32.exe
C:\WINDOWS\SOUNDMAN.EXE
C:\Program Files\Ahnlab\Smart Update Utility\AhnSD.exe
C:\Program Files\iTunes\iTunesHelper.exe
C:\Program Files\Common Files\Real\Update_OB\realsched.exe
C:\WINDOWS\system32\ctfmon.exe
C:\Program Files\Ahnlab\V3\V3P3AT.exe
C:\Program Files\iPod\bin\iPodService.exe
C:\Program Files... Read more
A:Solved: Getting rid of Bargain Buddy
16 more replies
Answer Match 44.1%
I've been watching laptop prices here in Mexico.
The price of netbooks with Windows 7 Starter Edition has dropped from $370 to $280. Six months ago, a dual-core, 2 gig laptop with Windows 7 Home Edition cost around $800. Now, with 4 gigs and the Pro Edition, they're running around $700.
A:Bargain Hunting Time!
This past week, Windows 7 laptops have dropped about $170 here in Mexico.
6 more replies
Answer Match 44.1%
This is my second try at this. Seems as if the first didn't work. Saw your site was down immediately after the first try. Maybe it was lost. Anyhoo, I have a recurring Bargain Buddy file in my Spyhunter scans. Can't seem to shake it. Here is the Panda info:
Incident Status Location
Spyware:Cookie/Serving-sys Not disinfected C:\Documents and Settings\Compaq_Owner\Application Data\Mozilla\Firefox\Profiles\eq9ylza2.default\cookies.txt[.bs.serving-sys.com/]
Spyware:Cookie/Doubleclick Not disinfected C:\Documents and Settings\Compaq_Owner\Application Data\Mozilla\Firefox\Profiles\eq9ylza2.default\cookies.txt[.doubleclick.net/]
Spyware:Cookie/Mediaplex Not disinfected C:\Documents and Se... Read more
More replies
Answer Match 44.1%
I just cleaned up my system 2 weeks ago but had to reformat my hard drive last week (see 'EDowPack.exe' under resolved threads 6/18). I clicked on a link on a web page today, and suddenly most of my spyware monitoring programs started spewing forth malware alerts. I've been running scans most of the day today and mostly in safe mode. Everything was updated before it was run. Not necessarily in this order:
Norton Anti Virus 2005
Bit Defender
Ad-aware SE Personal (also VX2 cleaner)
Spybot - Search & Destroy
Clean up!
Ewido Security Suite
CW Shredder
Spyware Blaster
Here's my HijackThis log:
Logfile of HijackThis v1.99.0
Scan saved at 8:35:33 PM, on 7/3/2006
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Common Files\Symantec Shared\ccProxy.exe
C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
C:\Program Files\Norton Internet Security\ISSVC.exe
C:\Program Files\Common Files\Symantec Shared\SNDSrvc.exe
C:\Program Files\Common Files\Symantec Shared\SPBBC\SPBBCSvc.exe
C:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe
C:\WINDOWS\system32\LEXBCES.EXE
C:\WINDOWS\system32\spoolsv.exe
C:\Program Files\ewido anti-spyware 4.0\guard.exe
C:\Program Files\Common ... Read more
A:Bargain Buddy problems?
Is Bargain Buddy still detected after being removed? The log here is clear.
2 more replies
Answer Match 44.1%
I have WinPatrol on my PC right now and I have run both AdAware (most recent freebie) and Spybot, and rebooted/reran with good results. I had been having trouble getting rid of the darn whole cashback/bargain buddy business. Am I free of that now? Thanks so much for your time...
Logfile of HijackThis v1.98.2
Scan saved at 3:58:02 PM, on 11/03/2004
Platform: Windows 2000 SP4 (WinNT 5.00.2195)
MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106)
Running processes:
C:\WINNT\System32\smss.exe
C:\WINNT\system32\winlogon.exe
C:\WINNT\system32\services.exe
C:\WINNT\system32\lsass.exe
C:\WINNT\system32\ibmpmsvc.exe
C:\WINNT\system32\svchost.exe
C:\WINNT\system32\svchost.exe
C:\WINNT\system32\spoolsv.exe
C:\WINNT\system32\Ati2evxx.exe
C:\Program Files\Network Associates\VirusScan\Avsynmgr.exe
C:\Program Files\Network Associates\VirusScan\VsStat.exe
C:\Program Files\Reflection\rtsserv.exe
C:\WINNT\system32\regsvc.exe
C:\WINNT\system32\MSTask.exe
C:\Program Files\Network Associates\VirusScan\Vshwin32.exe
C:\Program Files\Analog Devices\SoundMAX\SMAgent.exe
C:\WINNT\System32\WBEM\WinMgmt.exe
C:\Program Files\RealVNC\WinVNC\WinVNC.exe
C:\WINNT\system32\svchost.exe
C:\Program Files\Network Associates\VirusScan\Avconsol.exe
C:\Program Files\Common Files\Network Associates\McShield\Mcshield.exe
C:\WINNT\Explorer.EXE
C:\WINNT\system32\tp4serv.exe
C:\Program Files\Analog Devices\SoundMAX\Smtray.exe
C:\WINNT\system32\NWTRAY.EXE
C:\PROGRA~1\ThinkPad\PkgMgr\HOTKEY... Read more
A:Had cashback/bargain buddy...gone now?
Hi, you've done a good job by cleaning with Ad-Aware SE and Spybot Search and Destroy. There's some leftover to be cleaned. Run HijackThis again and put a check by these.
Close ALL windows except HijackThis and click "Fix checked":
R3 - Default URLSearchHook is missing
O16 - DPF: {62475759-9E84-458E-A1AB-5D2C442ADFDE} - http://a1540.g.akamai.net/7/1540/52...meInstaller.exe
O16 - DPF: {9522B3FB-7A2B-4646-8AF6-36E7F593073C} (cpbrkpie Control) - http://a19.g.akamai.net/7/19/7125/4...23/cpbrkpie.cab
O16 - DPF: {E06E2E99-0AA1-11D4-ABA6-0060082AA75C} (GpcContainer Class) - https://impac.webex.com/client/latest/webex/ieatgpc.cab
After this, you should be OK.
5 more replies
Answer Match 44.1%
Hello, could someone please tell me what the latest deal is on an eSATA external hard drive with 1TB of storage, or a similar option which amounts to the same? Thank you!
More replies
Answer Match 44.1%
Hi, my computer is just acting weird and slow, and when I'm typing it jumps around and just does screwy stuff. I ran Spybot and it didn't find anything. Ad-Aware picked up Bargain Buddy; I keep deleting it, but it keeps coming back. Here is my latest log. I'm not sure what to fix. Thanks.
Logfile of HijackThis v1.99.1
Scan saved at 9:31:06 AM, on 9/18/2006
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\Ati2evxx.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\system32\Ati2evxx.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe
C:\Program Files\HPQ\HP Wireless Assistant\HP Wireless Assistant.exe
C:\Program Files\Hewlett-Packard\HP Software Update\HPWuSchd2.exe
C:\Program Files\QuickTime\qttask.exe
C:\Program Files\Common Files\AOL\1127576323\ee\AOLSoftware.exe
C:\WINDOWS\Logi_MwX.Exe
C:\PROGRA~1\Sony\SONICS~1\SsAAD.exe
C:\Program Files\Java\jre1.5.0_06\bin\jusched.exe
C:\Program Files\Winamp\winampa.exe
C:\PROGRA~1\mcafee.com\agent\mcagent.exe
C:\Program Files\AOL\Active Security Monitor\ASMonitor.exe
C:\Program Files\mcafee.com\antivirus\oasclnt.exe
C:\Program Files\Messenger\msmsgs.exe
C:\PROGRA~1\COMMON~1\AOL\ACS\AOLacsd.exe
C:\Program File... Read more
A:Bargain Buddy keeps showing up
Please do not create more than one thread for the same topic. I refer you to this thread: http://www.techsupportforum.com/showthread.php?t=117029 and to the posting rules included in this thread: http://www.techsupportforum.com/showthread.php?t=15968
Please do not start a new thread each time you reply. We need you to keep your logs in one thread only, as this helps the Analyst follow your thread from beginning to end.
3. Please be considerate of the fact that the people helping you are not being paid for this, and in fact usually have a job, and have a limited amount of time to help, and can only do so much. If no one has replied to your thread within 24hrs after you posted it, please reply in your thread with the word BUMP to move it forward.
We've been very busy lately, and a bit short-handed. We try to get to everyone as soon as we can. Thanks for your patience and understanding. This one is closed.
1 more replies
Answer Match 44.1%
Hello, please, I need help. I just got my notebook last month, and as I was scanning my system I found lots of threats. As you can see, I am trying to remove Bargain Buddy and n-case (pad lookups and interstitial ad delivery). As I was trying to remove the programs, it always said that if I remove the program, some of the free software installed on my computer will not work properly. I want to know what software those are, and if I remove the program, what are the risks or things that might happen. The software I am talking about is:
bargain buddy
interstitial ad delivery by n-case
pad lookups by n-case
I have an HP notebook. Thanks.
A:Bargain buddy and n-case
Please do not post duplicates...post on your first thread.
2 more replies
Answer Match 44.1%
I have a bunch of ugly stuff that I can't get rid of. Adware found it but could not get rid of it all. Hope someone can lend me a hand. If you notice anything else lurking in there, let me know. Here is the HijackThis logfile:
Logfile of HijackThis v1.97.7
Scan saved at 1:44:43 PM, on 11/1/2004
Platform: Windows XP SP1 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106)
....
Detail removed by user
A:180search bargain.exe cashback.exe
Run HJT again and put a check in the following:
O2 - BHO: (no name) - {CE188402-6EE7-4022-8868-AB25173A3E14} - C:\WINDOWS\System32\mscb.dll
O2 - BHO: (no name) - {F4E04583-354E-4076-BE7D-ED6A80FD66DA} - C:\WINDOWS\System32\msbe.dll
O4 - HKLM\..\Run: [Tray Temperature] C:\DOCUME~1\MYEMAC~1\LOCALS~1\Temp\MiniBug.exe 1
O16 - DPF: {41F17733-B041-4099-A042-B518BB6A408C} - http://a1540.g.akamai.net/7/1540/52...meInstaller.exe
O16 - DPF: {56336BCB-3D8A-11D6-A00B-0050DA18DE71} (RdxIE Class) - http://software-dl.real.com/20d96da...ip/RdxIE601.cab
O16 - DPF: {79849612-A98F-45B8-95E9-4D13C7B6B35C} (Loader2 Control) - http://static.topconverting.com/activex/loader2.ocx
Close all applications and browser windows before you click "fix checked".
Restart in Safe Mode.
Open Windows Explorer. Go to Tools, Folder Options and click on the View tab. Make sure that "Show hidden files and folders" is checked. Also uncheck "Hide protected operating system files". Now click "Apply to all folders", click "Apply", then "OK".
Empty these folders:
Go to Start, Run, type %temp%, click OK. Completely delete the entire contents of this folder.
C:\Documents and Settings\MYEMAC (the first 6 letters of the profile)\local settings\temp
Reboot. Click on this link to download the new version of HijackThis and post a log using that version.
1 more replies
Answer Match 44.1%
I just found this "interesting" price list on-line. You can be the first to not buy a copy.
A:Bargain Hunting for Windows 7? Not Myself
No, the price is right, but they just kill you with shipping and handling.
2 more replies Answer Match 43.68% My Question is I use NOD32 A/v and it revealed the following results Scan performed at: 1/28/2005 0:51:57 AM date: 28.1.2005 time: 00:51:58 Scanned disks, directories and files: C:; D: C:\pagefile.sys - error opening (access denied) [4] C:\WINDOWS\SYSTEM32\mac80ex.idf ?ZIP ?C:/WINDOWS/system32/msbe.dll - Win32/Adware.BargainBuddy Application C:\WINDOWS\SYSTEM32\mac80ex.idf ?ZIP ?C:/Program Files/BullsEye Network/bin/bargains.exe - Win32/Adware.BargainBuddy Application C:\WINDOWS\SYSTEM32\mac80ex.idf ?ZIP ?C:/Program Files/BullsEye Network/bin/adv.exe - Win32/Adware.BargainBuddy Application C:\WINDOWS\SYSTEM32\mac80ex.idf ?ZIP ?C:/Program Files/BullsEye Network/bin/adx.exe - Win32/Adware.BargainBuddy Application I then scanned with the following A/v Trend Micro House call, Panda Software,3ca.com security A/v, All with Negative Results I then Scanned with Trojan Hunter 4.1, Adaware SE Personal, SpyBot-Search and Destroy 1.3, Pest Patrol and Spy Sweeper, With All Negative Results I also have Spyware Blaster and Sypware Guard I did go in and I did find the file " Mac80ex.Idf in C/windows/system32 , Can I go in in Safe Mode and Delete the file without causing any system Problems. I'm Posting a Hijack This Log also Logfile of HijackThis v1.99.0 Scan saved at 2:53:15 AM, on 1/28/2005 Platform: Windows XP SP2 (WinNT 5.01.2600) MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180) Running processes: C:\WINDOWS\System32\smss.exe C:\WINDOWS\system32\... Read more A:Bulls Eye Network & Bargain Buddy Welcome to TSF. Yes, you may delete that file. Also delete these if found: C:/WINDOWS/system32/msbe.dll C:/Program Files/BullsEye Network/ - delete folder Any problems now? 2 more replies Answer Match 43.68% Hi, I'm new to Tech Support and have registered to try to get some help with some problems. I'm not sure what to do or how to do it. 
I have Windows XP, use McAfee antivirus, firewall, SpamKiller and Privacy Service, and also have Spyware Doctor (free version), Spybot Search & Destroy, and Ad-Aware SE (free version). Can you tell I've been invaded before? Anyway, I scan my computer with most of the above twice a week. I have gotten a message from Spyware Doctor that there are programs called OverPro and Bargain Buddy on my computer. I would like to remove them. Can you help? A few things I have already done are to look in the registry files and try to find where the 'abnormalities' are located. I have found the files but am scared to remove them in case I remove something that I truly need. I have also tried to remove any listings in the temporary internet files but can't seem to 'get rid' of the files. Thanks. barcar1
A:Solved: Overpro, Bargain Buddy
13 more replies
Answer Match 43.68%
Can we just sue these guys?! I remember when it downloaded itself onto my computer and then tried to introduce me to the program. Ever since then, I keep getting the little evil dog popping up on the side of my computer, and it just pisses me off! I'm not sure if it does a lot of damage, but I WANT IT GONE!! I have so many antivirus programs, but it seems as if no matter how good they claim to be, there will always be at least one thing that they cannot get. And this piece of crap has eluded all of them so far. HELP PLEASE!?
A:bullseyenetwork/bargain buddy SUCKS!!!!!!
Please download HijackThis (link in my signature), run a scan, and post the log here, and we will go through it for you.
1 more replies
Answer Match 43.68%
Hey guys - I'm having problems with getting rid of Bargain Buddy, and I don't know if it's the same thing, but I've also got popups coming up with the title 'xlime.offer optimizer'... I keep running Ad-aware and Spybot, but they can't get rid of the Bargain Buddy stuff... Just point me in the right direction, please. And Merry Christmas!
Thanks SO MUCH in advance
Logfile of HijackThis v1.99.0
Scan saved at 12:12:38 PM, on 12/25/2004
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\System32\nvsvc32.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\Explorer.EXE
C:\WINDOWS\system32\wscntfy.exe
C:\windows\system\hpsysdrv.exe
C:\Program Files\Hewlett-Packard\Digital Imaging\Unload\hpqcmon.exe
C:\WINDOWS\System32\hphmon05.exe
C:\Program Files\Common Files\Real\Update_OB\realsched.exe
C:\Program Files\Multimedia Card Reader\shwicon2k.exe
C:\Program Files\MUSICMATCH\MUSICMATCH Jukebox\mmtask.exe
C:\Program Files\MSN Apps\Updater\01.02.3000.1001\en-us\msnappau.exe
C:\Program Files\QuickTime\qttask.exe
C:\Program Files\HP\HP Software Update\HPWuSchd2.exe
C:\WINDOWS\ALCXMNTR.EXE
C:\WINDOWS\system32\rundll32.exe
C:\Program Files\MSN Messenger\msnmsgr.exe
C:\P... Read more
A:Bargain Buddy Removal - Hijackthis Log
10 more replies
Answer Match 43.68%
Ad-aware came up with WIN.32 Trojan Downloader obj[7], reg key clsd [8], reg key interface [9], reg key typelib, Bargain Buddy reg key, WIN 32 Trojan Agent. I cleared everything with Ad-Aware, but during a couple of more scans through the day, Bargain Buddy kept showing up. It finally quit showing up when I ran Ad-Aware, Avast, Spybot, Windows Defender, a CA anti-virus, and CA anti-spy, but when I posted here, I was told I should still do a Hijack.
Thanks
Logfile of HijackThis v1.99.1
Scan saved at 5:17:14 PM, on 9/15/2006
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\Program Files\Windows Defender\MsMpEng.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\Explorer.EXE
C:\WINDOWS\system32\spoolsv.exe
C:\Program Files\Google\Gmail Notifier\G001-1.0.25.0\gnotify.exe
C:\PROGRA~1\SBCSEL~1\SMARTB~1\MotiveSB.exe
C:\Program Files\Yahoo!\Antivirus\CAVTray.exe
C:\Program Files\Yahoo!\Antivirus\CAVRID.exe
C:\PROGRA~1\Yahoo!\YOP\yop.exe
C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe
C:\Program Files\Zone Labs\ZoneAlarm\zlclient.exe
C:\Program Files\Windows Defender\MSASC... Read more
A:Bargain Buddy, Trojan downloader
Hi and welcome to TSF. I am currently reviewing your log. Please note that this is under the supervision of an expert analyst, and I will be back with a fix for your problem as soon as possible. You may wish to Subscribe to this thread (Thread Tools) so that you are notified when you receive a reply. Please be patient with me during this time.
2 more replies
Answer Match 43.26%
This was weird. When I was on bleepingcomputer.com I was suddenly bombarded with a message from Spysweeper that software programs (Bargain Buddy) were running in my memory, as well as adware on my system (File Genie, Teen XXX, etc.). I was able to get rid of all but Bargain Buddy because my computer told me that it was being used. However, I ended the process in my Task Manager window, and upon restarting my computer I was able to delete the Bargain Buddy file which, strangely enough, contained no kilobytes of info. Anyone know why this would have happened to me, given the fact that I didn't visit any suspect sites and had ZoneAlarm running as a firewall on my system? Thanks. Justin.
A:bombarded with Bargain Buddy and File Genie...
Did you perhaps go to our page that scans for spyware? We have a spyware scanner; it only searches for a few, but it does access your registry. Spysweeper may have felt that was the spyware itself trying to access you.
3 more replies
Answer Match 43.26%
I'm upgrading from an AGP setup to a new SLI motherboard on a limited budget. Since I can't reuse my AGP card, I need to get a good PCI-E video card. But good ones are more than I can afford right now ($200+). Might I see similar (even better) performance if I use two lower-end SLI cards instead of one high-end card? Or are SLI's performance advantages only noticeable when doing 3D and such?
I can get a couple of GeForce 7300's for around $85 each. I can just buy one now and add a second for Christmas.
A:Q: Might two bargain SLi vidcards perform better than one high-end card?
No, two 7300's will be beaten by a 7600GT. A 7600GS might be a good option; they are ~$85 and are SLI-able.
1 more replies
Answer Match 43.26%
Logfile of HijackThis v1.97.7
Scan saved at 2:28:28 PM, on 11/5/2004
Platform: Windows XP (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP1 (6.00.2600.0000)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\Windows SyncroAd\SyncroAd.exe
C:\Program Files\Windows SyncroAd\WinSync.exe
C:\Program Files\wmconnect\wmtray.exe
C:\Program Files\wmconnect\wwm.exe
C:\WINDOWS\System32\PackethSvc.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Yahoo!\Messenger\YPager.exe
C:\Documents and Settings\rich\My Documents\HijackThis198.exe
R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://homeroom.indstate.edu/wcb/schools/ARTSCI/lifs/tmulkey/5/
R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = res://C:\PROGRA~1\Toolbar\toolbar.dll/sa
R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Window Title = Microsoft Internet Explorer provided by NetZero, Inc.
R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page = C:\WINDOWS\SYSTEM\blank.htm
R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName =
R1 - HKLM\Software\Microsoft\Internet Explorer\Main,SearchAssistant = http://www.websearch.com/ie.aspx?tb_id=40
R1 - HKLM\Software\Microsoft\Internet Explorer\Main,CustomizeSearch = res://C:\PROGRA~1\Toolb... Read more
A:cash back / bargain buddy remove ? how
12 more replies
Answer Match 43.26%
I just installed Norton Internet Security on Saturday, and it's helped a lot, but my computer was pretty sick before that. I have done everything possible to remove programs called BullsEye Network, Bargain Buddy and NaviSearch. I cannot remove them via Add/Remove, I've run SpyBot, I've removed them directly from my registry, but they don't go away - they always come back. Now I can't maneuver from site to site on the Internet without being redirected to www.ads234.com - something along those lines. I'm pretty sure it's associated with BullsEye because I get a pop-up from them seconds after. I don't know what else to do - I'm hoping you can help me. I've attached my most recent HJT log and I'm grateful for your assistance. Happy Holidays - thank you!!
**********************************************************
Logfile of HijackThis v1.98.2
Scan saved at 8:28:25 PM, on 12/20/2004
Platform: Windows XP SP1 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Common Files\Symantec Shared\ccProxy.exe
C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
C:\Program Files\Norton Internet Security\ISSVC.exe
C:\Program Files\Common Files\Symantec Shared\SNDSrvc.exe
C:\Program Files\Commo... Read more
A:Bargain Buddy-Bullseye Network-NaviSearch ... HELP!
13 more replies
Answer Match 43.26%
HJT Log
Logfile of HijackThis v1.98.2
Scan saved at 3:28:11 PM, on 10/18/2004
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\Program Files\APC\APC PowerChute Personal Edition\mainserv.exe
C:\Program Files\Canon\BJCard\Bjmcmng.exe
C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
C:\WINDOWS\System32\CTsvcCDA.exe
C:\WINDOWS\System32\gearsec.exe
C:\PROGRA~1\Iomega\System32\AppServices.exe
C:\Program Files\Common Files\Microsoft Shared\VS7Debug\mdm.exe
C:\Program Files\Norton AntiVirus\navapsvc.exe
C:\Program Files\Norton AntiVirus\AdvTools\NPROTECT.EXE
C:\WINDOWS\System32\nvsvc32.exe
C:\Program Files\QuickBooks Onilne Backup\OLRegCap.EXE
C:\Program Files\QuickBooks Onilne Backup\OLlaunch.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Common Files\Symantec Shared\CCPD-LC\symlcsvc.exe
C:\WINDOWS\System32\MsPMSPSv.exe
C:\WINDOWS\system32\YEDIEx.exe
C:\Program Files\Iomega\AutoDisk\ADService.exe
C:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe
C:\Program Files\Common Files\Symantec Shared\Security Center\SymWSC.exe
C:\Program Files\Norton AntiVirus\SAVScan.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\BearShare\BearShare.exe
C:\Program Files\Creative\SBAudigy2\Surround Mix... Read more
A:Solved: Bargain Buddy removal - please assist
8 more replies
Answer Match 42.84%
Please help me get rid of these things. Thanks a lot.
Here is my HJT Analyzer Log:
====================================================================
Log was analyzed using KRC HijackThis Analyzer - Updated on 8/4/05
Get updates at http://www.greyknight17.com/download.htm#programs
***Security Programs Detected***
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Logfile of HijackThis v1.99.1
Scan saved at 2:10:09 PM, on 8/17/2005
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\Program Files\Iomega HotBurn\Autolaunch.exe
C:\Program Files\ContentWatch\common\cprotect.exe
C:\WINDOWS\system32\P2P Networking\P2P Networking.exe
C:\WINDOWS\system32\BjlV9I.exe
C:\WINDOWS\system32\EgtJr5.exe
R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyOverride = 127.0.0.1
O2 - BHO: NavErrRedir Class - {0199DF25-9820-4bd5-9FEE-5A765AB4371E} - C:\PROGRA~1\INCRED~1\BHO\INCFIN~1.DLL (file missing)
O2 - BHO: (no name) - {FDD3B846-8D59-4ffb-8758-209B6AD74ACC} - (no file)
O4 - HKLM\..\Run: [ufch] C:\WINDOWS\ufch.exe
O4 - HKLM\..\Run: [qlof] C:\WINDOWS\qlof.exe
O4 - HKLM\..\Run: [Drag'n'Drop_Autolaunch] "C:\Program Files\Iomega HotBurn\Autolaunch.exe"
O4 - HKLM\..\Run: [AutoLoader2s4o1KdeLYXW] "C:\WINDOWS\System32\ipxkey.exe" /PC="AM.WILD" /HideUninstall
O4 - HKLM\..\Run: [2FnU38g] ipxkey.exe
O4 - HKLM\..\Run: [4SJ#8Y74... Read more
A:Popups, Bargain buddy, navisearch, bullseye network
Hi and Welcome to TSF
Before attacking an adware/spyware problem with HijackThis, make sure you have already run the following tools. Download and update the databases on each program before running.
Ad-Aware SE Personal Edition
Spybot Search & Destroy
CWShredder
Also make sure you are using the latest version (1.99.1) of HijackThis and that it's installed in its own folder on the root drive (C:\HJT).
Please go to at least two of these sites and run an online Virus Scan.
Be sure to have the AutoFix box(es) checked.
http://housecall.trendmicro.com/
http://www3.ca.com/virusinfo/virusscan.aspx
http://www.pandasoftware.com/actives..._principal.htm
http://www.bitdefender.com/scan/license.php
http://us.mcafee.com/root/mfs/default.asp
http://security.symantec.com/sscv6/d...d=ie&venid=sym
Download and install CleanUp! but do not run it yet.
*NOTE* Cleanup deletes EVERYTHING out of temp/temporary folders and does not make backups.
Download PeperUninstall http://www.greyknight17.com/spy/PeperUninstall.exe. Make sure you are connected online to run this program. Run it once and reboot. Then run it again for the second time.
Download PeperFix http://www.greyknight17.com/spy/PeperFix.exe and save it to your Desktop. Run it and click 'Find and Fix' (reboot if prompted).
Go to My Computer->Tools->Folder Options->View tab and make sure that Show hidden files and folders is enabl... Read more
9 more replies
Answer Match 42.84%
Jesus, I do not understand any of this. I don't even know how to post my own thread to write about my problem. All I want to do is get rid of the Bullseye Network ****. SO WHAT IS GOING ON... all of that up there doesn't help a bit. I am not a computer genius, not even smart. Hey, I like this smilie <3 reminds me of Paranoia Agent... lil' slugger.
A:Popups, Bargain buddy, navisearch, bullseye network
Hello, ConfusedPerson, and welcome to TSF. We'll try to help you with your issue. Please bookmark this thread (save it in your Favorites) so that you may more easily return to it. Here's what we'll do to start:
Please download Ad-aware at http://www.lavasoftusa.com/ and install it if you don't have it already. Make sure it's the newest version and check for any updates before running it. Also go to http://www.lavasoftusa.com/software/...2cleaner.shtml to download the plug-in for fixing VX2 variants. To run this tool, go into Ad-aware->Add-ons and select VX2 Cleaner. Then click Run Tool and OK to start it. If it's clean, it will say Status System Clean. Otherwise, you will have to click on the Clean button to remove the VX2 infection. Also make sure to customize the settings in Ad-aware at http://www.greyknight17.com/spyware.htm#adaware for better scan results. Run the scan and fix everything that it finds.
Download and install Spybot S&D http://security.kolla.de/. Run Spybot and click on the 'Search for Updates' button. Install any updates that are available.
Now click Mode menu and choose 'Advanced Mode'. Next click on Immunize to your left. Click the Immunize button (green cross) on top to Immunize your computer - you should do this each time there is an update. Do NOT enable Spybot TeaTimer Resident protection at this time. What this will do is monitor any system/registry changes and will ask you for permission to change any of these settings. It... Read more
1 more replies
Answer Match 42.84%
Bargain Buddy, NaviSearch, and CashBack have all been added to my computer, and I cannot delete them because every time I restart my computer they come back. How can I delete them off my computer without having to take it in?
A:bargain buddy, navi search, cash back
SpywareBlaster http://www.javacoolsoftware.com/spywareblaster.html
AdAware SE http://www.majorgeeks.com/download506.html
SpyBot S&D http://www.safer-networking.org/en/download/
DL them (they are free), install them, check each for their definition updates and then run AdAware and Spybot, fixing anything they say.
Do these before the next step.
Then get HiJack This http://www.majorgeeks.com/download3155.html,
put it in a permanent folder, run it, DO NOT fix anything, post the log here.
2 more replies
Answer Match 42.84%
http://www.ghacks.net/2009/08/01/windows-7-family-pack-and-anytime-upgrade-pricing/
Everything you actually need to know in order to understand what I am talking about is shown there.
If you're looking for Windows 7 Ultimate for more than ten computers like I am, this might be your big break.
The Family Pack pricing is $149.99 in the US. The Anytime Upgrades are as follows:
Windows 7 Starter to Windows 7 Home Premium: $79.99 (marked red as it is unimportant in this thread)
Windows 7 Home Premium to Windows 7 Professional: $89.99
Windows 7 Home Premium to Windows 7 Ultimate: $139.99
Does this click... at all to you?
So let's put this into a scenario.
Jill buys the Windows 7 Family Pack and pays $149.99. She understands she can install Windows 7 Home Premium onto the three computers in her household, as the Family Pack allows her to do so. She then finds that she wants more out of Windows 7 Home Premium and buys herself the Windows Anytime Upgrade. With this she can upgrade all three of her Home Premiums to Ultimate, without paying extra!
Windows 7 Family Pack ($149.99) + Windows Anytime Ultimate Upgrade ($139.99) = $289.98
So altogether she's paying a maximum of $300 for three Windows 7 Ultimates, where she'd normally pay $600.
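As a quick sanity check of the arithmetic in this scenario, here is a short Python sketch using the US prices quoted above. It covers both the best case (one Anytime Upgrade covers all three machines) and the worse case (one upgrade key per machine); which of the two actually applies is the open question of the thread.

```python
# Sanity check of the Family Pack math, using the US prices quoted above.
family_pack = 149.99          # Windows 7 Family Pack (three Home Premium licenses)
anytime_to_ultimate = 139.99  # Anytime Upgrade: Home Premium -> Ultimate

# Best case: one Anytime Upgrade key covers all three machines.
best_case = family_pack + anytime_to_ultimate
print(f"best case:  ${best_case:.2f}")   # best case:  $289.98

# Worse case: a separate Anytime Upgrade must be bought per machine.
worse_case = family_pack + 3 * anytime_to_ultimate
print(f"worse case: ${worse_case:.2f}")  # worse case: $569.96
```

Even in the worse case, $569.96 is still a bit below the roughly $600 quoted above for three copies of Ultimate.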
So let's put this into a worse scenario and say that Windows Anytime Upgrade uses Activation Keys that are only able to be used once.
So now she'd have to buy three Windows Anytime Upgrades at $139.99 per copy. 3 * Windows Anytime... Read more
A:Windows 7 Family Pack + Anytime Upgrade = BARGAIN?
Guaranteed you will have to purchase a WAU for each machine.
5 more replies
Answer Match 42.84%
Here goes... I've attempted to follow some of the threads for these issues, but I've not been successful at clearing the computer. Any help is appreciated. Slower IE browsing and pop-ups are common on this computer. The computer is XP Home Edition, SP1 (not installed SP2 YET). I run Spybot Search and Destroy, and have installed and run NoAdware. I use McAfee VirusScan 9 and Personal Firewall v5. The firewall does not recognize an executable named arfdzfta.exe. This I block every time the PC starts. It is running in the processes of the PC. Here is the HijackThis file from a few minutes ago. I have done nothing to the computer since this was run. Thanks!
Forgot to properly attach the HijackThis file! Oops! Try again
A:Help clearing Bargain Buddy, Ads234, Cashback and arfdzfta.exe
10 more replies
Answer Match 40.74%
I have a Dell Inspiron 1520 laptop with Windows Vista OS. I have a Dell 926 printer interfaced to it via USB. The printer driver is installed, and the printer does communicate at times with the computer and does a good job printing. However, intermittently when trying to print a document, I'll get a message on the computer that says that communication has been lost and the print job failed. I've checked all connections to make sure they are solid with no debris that might cause this. That has not corrected the problem. The other thing that happens is that the document I want to print will stay in the print queue/spooler, and if I shut the computer and printer down and restart both, it will print the job. And it may continue to print jobs, but it's just a matter of time. Needless to say, this is frustrating.
I can print several jobs in a row with no problem and then I'll get this "communication has been lost between printer and computer". And I have to go thru the procedure above to get the last job printed. My wife doesn't understand how this works and will try to print the same job several times with the commo problem and then complains when it won't print.....and then they all print when the above procedure is followed. Any ideas what is going on??? Any fixes?? More replies Answer Match 40.74% The short version of my question is exactly what the topic says-- Can I install a Dell-provided OEM version of Vista on a non-Dell computer? But, here's the wordier version giving context, if anyone's interested. About 10 months ago, I guess, I got a Dell Vostro that came with Windows Vista Home Basic. However, I had no interest in Vista (because I already had a Windows machine) and the very first thing I did was install Ubuntu on it. I still have the Vista reinstall disk, though, which, to my memory, hasn't been used. Now I'm planning on getting a barebones system and upgrading what I see fit. For the time being, I'm going to use the Windows 7 RC, but I'd like to get the uber-great upgrade deal that they're having to pre-order either the Business or Home Premium version upgrade. So, to do that, I'd need an XP installation or a Vista installation. Is there any reason, legal or software-related, that I wouldn't be able to install Vista using that disk that came with the Vostro? I heard somewhere that sometimes if you try, it'll say something like "This is not a Dell computer, you evil, evil person" and refuse to install, so I wanted to make sure before I pre-ordered the upgrade. I don't want to get the upgrade and have nothing to upgrade, of course. A:Solved: Can I install a Dell-provided OEM version of Vista on a non-Dell computer? 
7 more replies Answer Match 40.74% So over the past 2-3 weeks I kept noticing that my battery kept dying even though I would shut it down (yes window > shutdown) the computer with a full battery. The next evening I would power up and see that the battery was at 20% from full charge. This is a BRAND NEW as in <2 months XPS 9550. I checked my event log and WHAT THE [email protected]#$??? It is absolutely filled with Dell Foundation Service and Dell Product Registration events all saying " Power event Handled Successfully by the service." Clearly it is not.. The stupid things run every 10 seconds. So when I think I am powering down my computer, it is being blocked by these services and actually just going to sleep... but it actually isn't sleeping. Its waking up every 10 seconds. Can anyone PLEASE help me determine the issue. Apparently according to update service everything is up to date.
Dell XPS 15 9550, Win 10
Dell Foundation Services vers 3.1.330, DFSSvc.exe
Dell Product Registration vers 2.2.38, PrSvc.exe
Dell Update Service vers 1.8.114
Log sample: (note the time... yes i was asleep)
Information 5/21/2016 6:42:06 AM Dell Product Registration 0 None
Information 5/21/2016 6:42:06 AM Dell Foundation Services 0 None
Information 5/21/2016 6:41:56 AM Dell Product Registration 0 None
Information 5/21/2016 6:41:56 AM Dell Foundation Services 0 None
Information 5/21/2016 6:26:31 AM Dell Foundation Services 0 None
Information 5/21/2016 6:26:31 AM Dell Product Registration 0 None
Informati... Read more
More replies
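One quick way to confirm how often those services are firing is to tally the pasted log text itself. This is just a sketch, not part of the original thread: the sample rows are copied from the log excerpt above, and the parsing assumes that fixed "Information <date> <time> AM/PM <provider> 0 None" layout.

```python
import re
from collections import Counter

# Rows in the format pasted in the post above:
# "Information 5/21/2016 6:42:06 AM <Provider> 0 None"
log = """\
Information 5/21/2016 6:42:06 AM Dell Product Registration 0 None
Information 5/21/2016 6:42:06 AM Dell Foundation Services 0 None
Information 5/21/2016 6:41:56 AM Dell Product Registration 0 None
Information 5/21/2016 6:41:56 AM Dell Foundation Services 0 None
Information 5/21/2016 6:26:31 AM Dell Foundation Services 0 None
Information 5/21/2016 6:26:31 AM Dell Product Registration 0 None
"""

# The provider name sits between the "AM/PM" timestamp and the
# trailing "0 None" columns, so capture everything in between.
pattern = re.compile(r"[AP]M (.+?) 0 None$")
counts = Counter(
    m.group(1) for line in log.splitlines() if (m := pattern.search(line))
)

for provider, n in counts.most_common():
    print(provider, n)
```

Pasting a full day's export into `log` would show at a glance whether the "every 10 seconds" impression holds up.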
Answer Match 40.32%
Two months ago bought a Dell 8900 from Best Buy. After 20 days, it would not start, same with the next 8900 replacement... only fewer days. The first gameware as a replacement did not start at all. The Alienware Aurora worked beautifully for 3 days and then would not start. In the 8900's,,, got the blinking amber lights on the start button. Tech is coming to replace motherboard. What the heck is happening? Could it be the monitor... or the external hard drive or what? Come on Dell,,, what's up? I have a Dell XP Laptop as a back up using the same external hard ddrive.
A:Dell 8900, Gaming Computer, Dell Alienware Aurora
There is a power issue with the external hard drive.
2 more replies
Answer Match 40.32%
I have a Dell Inspiron 530, XP PRO, SP3, about a year and a half old. Almost since I first got it, every few months or so, when I boot up it hangs on the Dell screen. When I turn it off and reboot, it is fine. I use Rebit to back up everything on an ongoing basis and have an extended warranty from Dell. Is there anything I can do about this, or should do? Thanks for your help.
A:Dell computer occasionally stuck on Dell screen
I would run a memory tester
http://www.memtest.org/
Download the prebuilt ISO, burn it to CD as an Image, boot from that CD and run the memory test for a couple of hours or overnight to stress test the memory.
.
3 more replies
Answer Match 40.32%
Dell Audio doesn't work on my Dell computers but Realtek does. Why is Dell Audio on my computer if I can't get it to work? It looks like a better program with far more features.
More replies
Answer Match 40.32%
Was given a Dell Optiplex 360 that, I found out after getting a nasty virus, had a bootleg XP OS.
I also did not have any disks with it. I had older disks for a Dell 4600 that I used to wipe the hard drive and install the new OS. Everything works fine except that I cannot load any of the drivers from the second disk (it was a 3-disk series),
thus at least at this point I am not able to connect with my cable internet. The cable people told me I needed the drivers. Any suggestions, or can it just not be done?
Thanks, Al
A:Loading Older Dell OS To Newer Dell Computer
You will have to go to the Dell website, look under Support and then Drivers and Downloads for your model of computer. If you use the service tag number on the computer, it will make sure you get the correct ones you need. For future reference, so you won't have this hassle again, burn all of the drivers to a CD so you will be all set the next time.
2 more replies
Answer Match 39.48%
OK, I have Windows XP on both of my computers, but my other one has a Dell monitor and I've tried to get it to work so many times. I got lucky, I think, twice: the first time it worked, but then I shut it off for the night, and then I had a hard time getting it to work, but I finally did. After that I can't get it to work; it keeps saying to set it up through the disc drive, I think. I haven't turned it on for a bit, I'll have to check it.
But can anyone help with what I said?
A:Does A Dell Monitor Work With A Non-Dell Computer?
12 more replies
Answer Match 39.48%
My battery for the above computer has expired and needs to be replaced but I do not know who sells replacement batteries XBT9E1 or compatible ones.
A:How do I buy a battery for my Dell computer Mod. Dell DE051
I believe the replacement battery you are after is a CR2032. If in doubt take the flat battery to your electronics store and ask for the equivalent. They're not very expensive.
3 more replies
Answer Match 37.8%
I have connected my old Dell printer AIO 924 to my new Dell XPS 8500. I can't seem to figure out how to scan. I had no trouble with my old computer, but can't scan since connecting to my new XPS 8500. Can anyone help me please? Do I have to get A new driver? If so, where do I get it? Thanks for any help you can give me.
A:new Dell computer/old Dell printer
6 more replies
Answer Match 37.8%
Can anyone suggest a way to get a dell 770 printer to work on a non dell win98 system?
My friend was given the printer from his friends, but it's a Dell and it's pretty much brand new, so he doesn't want to throw it away. It's also better than his current printer.
Anyway, he uses the Dell install disk, but once it begins and runs the setup app it gives a message akin to pirating, saying something like "this computer does not support Dell", or something like "this computer is not a Dell".
Can anyone offer advice. I did a search on google and dells' site and here.
I couldn't quite find anything that might lead me to an answer..?
Thanks
A:dell printer for non dell computer??
16 more replies
Answer Match 36.54%
I have a two-year-old Dell PC with Windows 7. I got a virus in my computer that, by phone and computer takeover, Norton located in my MBR. When they cleaned it, my computer would no longer boot up. Norton worked with me for three days on the problem, but we were unable to fix the computer. I followed other web sites and repaired my MBR, but it still will not boot up. I tried System Repair, checkpoint, image, and Dell system restore, and all failed. I have tried to use the Windows install CD, and all I get is a black screen with a flashing cursor, and then after 3 minutes the computer attempts to boot again and fails. I tried everything to try and wipe the drive from the C: prompt and it will not let me. I have very weak computer skills, but I can follow directions, and I have wiped a previous computer I owned. (I confirmed that my CD drive is working.) I also should mention I have a second hard drive on my computer that I backed everything up to.
A:Dell Computer PC will not load Windows install CD from computer crash
See if you can reset the BIOS hardware/software connections and then clear the CMOS to access your Windows 7 install disc.
Resetting hardware/software connections in the BIOS and clearing temporary memory of corruption:
Shut down and turn off the computer.
Unplug the computer from the wall or surge protector (then remove the battery if it is a laptop).
"Remove the computer from any port replicator or docking station, disconnect cables to printers or devices such as external monitors, USB memory sticks or SD cards, headset or external speakers, mouse or auxiliary keyboard, turn off WIFI and Bluetooth wireless devices." (Use Hard Reset to Resolve Hardware and Software Issues HP Pavilion dv5000 Notebook PC series - HP Customer Care (United States - English))
Hold down the power button for 30 seconds. This closes the circuit and ensures all power from components is drained, to clear the software connections between the BIOS and hardware and clear any corruption in the temporary memory.
(If it is a laptop, plug the battery back into the laptop and then) Plug the computer back into the wall. Do not reconnect any unnecessary peripherals; monitor, keyboard, and mouse should suffice and be the only peripherals reconnected.
Turn it on to reinitialize the software connections between the BIOS and hardware.
Clear the CMOS:
Use your system manual to find instructions for accessing the BIOS...
In the BIOS, go to the EXIT screen...
Choose load setup/optimized defaults... Read more
1 more replies
Answer Match 34.02%
I just wanted to know how to reload my computer because I forgot the password and user name. I also wanted to know: my Adobe Reader won't work, it keeps saying it encountered a problem and must shut down, sorry for the inconvenience, send report to Microsoft. I need it to pull up important PDF files to read.
More replies
Answer Match 33.18%
Hi, I am new to this forum and not a brain on computers. My question is: my flat screen goes black after I boot up. It shows the Dell logo and then XP, but after that it goes blank. The computer is running in the background. I can boot in Safe Mode, but that is it. I need to know what has happened and how to fix it.
Thanks
A:Need help on dell computer
In Safe mode, go to Device Manager or Add/Remove Programs and remove the Display driver
Go back to Normal mode
Go to Dell support drivers page and download your updated video card driver
Install
Test
3 more replies
Answer Match 33.18%
I got a Dell computer running 512 MB RAM and about a 150 GB HDD. I switched hard drives and cleaned the old one. Well, when I get to install Windows Vista or XP, it says what is shown in this picture:
then I can't boot anything up. I have NO FLOPPY drive, it didn't come with one. So I thought I could install and boot from CD, but I was wrong. I don't know if I need to reformat or reset the BIOS or something, please help.
computer picture:
my Dell computer didn't come with a floppy drive.
A:Dell computer Help
Have you looked in the BIOS and made sure the CD is the first boot device? This should be changeable in the BIOS, although your BIOS might not have that option, and you can leave it that way as it will not mess up anything. Once you boot to the Windows CD you should have options to format or do anything else you need to do.
2 more replies
Answer Match 33.18%
My Dell computer keeps cutting itself off. I have cleaned the fans, and my monitor is not picking up a signal now, and it is still shutting off. Please help?
More replies
Answer Match 33.18%
I would like to install a new video card for my dell machine. Here is a list of my specs...
Dell Inspiron 531s
Windows 7 32-bit
Power Supply Unit: 250w
1 PCI-Express slot
Current Video Card: Nvidia GeForce 6150SE nForce 430
(motherboard gpu)
Here is the GPU I'm looking at:
Newegg.com - SPARKLE SFPX95GT512U2L GeForce 9500 GT 512MB 128-bit GDDR2 PCI Express 2.0 x16 HDCP Ready Low Profile Ready Video Card - Desktop Graphics / Video Cards
I am concerned about the weak power supply and the motherboard GPU interfering.
Should I be considering an external power booster?
I would love to hear your suggestions.
A:New GPU for Dell Computer
I would look at the power supply and see if a normal-size new power supply would fit into that case; if not, I would "re-case" the whole machine when adding the new video card so I wouldn't have any power issues.
5 more replies
Answer Match 33.18%
hi
I recently bought a Dell desktop, but I can't use the right side of the keyboard to type numbers. Also, on the desktop, when you double-click on an item nothing happens. Any suggestions?
A:dell computer
10 more replies
Answer Match 33.18%
Started a complete restore by Symantec on Windows, only partly done. The screen says "Loading PBR for descriptor 1... Done. Starting Windows 95..." The only thing I can do is turn the computer on and off. HELP. jhaas41
A:dell computer
Hi, and welcome to the Forum.
What model is the Dell?
Why was it only partly done? Did it stop?
What caused you to try the restore?
Starting Windows 95??? Did it have Win95 loaded?
1 more replies
Answer Match 33.18%
My computer stopped working, giving me a black screen and everything. When I entered Safe Mode and uninstalled the video driver for my GeForce, my computer started working again, but when I reinstalled the driver my computer suffered the exact same problems. Any ideas how to fix this, please?
A:My Dell Computer
Which graphics card and driver are you trying to use? Have you tried older or beta drivers with your GeForce?
3 more replies
Answer Match 33.18%
I have a Dell Inspiron model 3276 computer. If anyone has a Dell computer, I have a question for them. I have two sound ports on the front of the computer and three on the back of the computer; one of those in the back is a line-in(?). Can you tell me what the three on the back do, and the two on the front as well?
In the past I have been able to copy music from my external CD player to the line-in on the back of the tower.
For some reason this is not working, so if someone can give me information on my problem, I would appreciate it.
More replies
Answer Match 32.76%
Hi,
I have a Dell Inspiron 1545 computer that is about 3 years old. I just got a new battery for it about 3 or 4 months ago. I'm having trouble with it because I'll have my computer plugged in and it says "plugged in, not charging". The second I take it off the charger it starts to lose its charge, but while it's plugged in it holds it. I'm down to 13% and I really don't want to lose my computer. What should I do?
A:Dell Computer Charge
Hi, and welcome to TSF. Did you get the battery from Dell or elsewhere? Batteries bought from aftermarket sources do not always perform well. If your adapter works, then you can still use the laptop, admittedly somewhat limited. Contact the seller and complain.
3 more replies
Answer Match 32.76%
I am sort of befuddled as to why a friend's computer will not turn on. There is a green light on the mobo indicating that there is power to the computer, but no luck starting up. I have replaced the power button switch on the front of the computer, checked all power connections to the power supply and motherboard, made sure the outlet was good, and am at a loss as to what to check for. She says that there was not a storm before having this problem, but did tell me her daughter shut down the computer by just pushing the power button. I didn't think it was that much of a deal, seeing as the computer is 3 yrs old, but I could be wrong. Any help would be much appreciated.
A:Dell computer will not turn on
Dell computers have a nasty habit of "locking up" at times. Sometimes unplugging the computer and letting it sit for 30 minutes, then plugging it back in, will solve the problem, and sometimes you have to strip the components and add them piece by piece until you get it working again.
The process is to remove ALL components except the processor (including disconnecting the monitor, keyboard, and mouse). Plug in and power up, shut down, unplug, add memory, plug in and power up; unplug, add the hard drive, plug in, power up; power off, unplug, add the video card (if there is one), power up. At this point you should be able to tell if it is booting or not. (Actually, once you start getting beeps during the add-component/power-up process, it should be able to boot once all components are reinstalled.)
It is a pain in the rear, but it is the process Dell tech support would lead you through if you called them. (I had to do it three times with my 4-year-old Dell; once it locked up after I switched an optical drive.) Oh yeah, before I forget: each time you shut down during the adding-component process, after you unplug, hold in the power button for several seconds to drain any residual power.
2 more replies
Answer Match 32.76%
I'm using a Dell (well, it's my cousin's). She has high-speed cable and her son messed it up, and I'm trying to fix it, but Cox cable support can't help me because it can't get the IP to reset. I went to Start > Run, typed cmd in the box, then ran ipconfig, and the IP won't come up. Need help!
A:dell computer cant get a connection
16 more replies
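For what it's worth, the usual XP-era sequence when ipconfig shows no address is to release the stuck DHCP lease and request a new one. These are standard Windows ipconfig switches, not commands taken from the thread itself:

```shell
:: Show current adapters; an address of 0.0.0.0 or 169.254.x.x means no DHCP lease.
ipconfig /all

:: Drop the stuck address, then ask the cable modem's DHCP server for a fresh one.
ipconfig /release
ipconfig /renew

:: Clear any stale cached DNS entries as well.
ipconfig /flushdns
```

If /renew still times out, power-cycling the cable modem before renewing often helps, since many cable ISPs tie the lease to the modem.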
Answer Match 32.76%
I am having problems with this laptop. The computer crashed, so I installed the original Windows Vista CD that came with it. The problem is that it won't let me install Windows updates, it shows error code 8007001f, and it also won't let me install anti-virus programs, Flash Player, or any other important programs. I put some of these programs on my USB pen drive, but it didn't work. I also loaded all the drivers and updated them. I have never had a problem like this. I did a scan on this computer with Windows Defender; it scanned the computer for about 1 hour, and 1 trojan virus came up and was deleted. I also tried Safe Mode with Networking, but nothing has helped. Please help!
tonchis
A:Dell Laptop computer
you could try this
http://support.microsoft.com/kb/947821
3 more replies
Answer Match 32.76%
Logfile of HijackThis v1.97.7
Scan saved at 2:52:05 PM, on 7/20/2004
Platform: Windows XP SP1 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
c:\PROGRA~1\mcafee.com\vso\mcvsrte.exe
c:\PROGRA~1\mcafee.com\vso\mcshield.exe
C:\WINDOWS\Explorer.EXE
C:\WINDOWS\System32\hkcmd.exe
C:\Program Files\Dell\Media Experience\PCMService.exe
C:\PROGRA~1\mcafee.com\agent\mcagent.exe
C:\Program Files\Common Files\Dell\EUSW\Support.exe
C:\PROGRA~1\mcafee.com\vso\mcvsshld.exe
C:\Program Files\MUSICMATCH\MUSICMATCH Jukebox\mmtask.exe
C:\WINDOWS\System32\SahAgent.exe
C:\Program Files\Messenger\msmsgs.exe
C:\PROGRA~1\ezula\mmod.exe
C:\Program Files\Digital Line Detect\DLG.exe
c:\progra~1\mcafee.com\vso\mcvsescn.exe
c:\Program Files\Dell\Support\Alert\bin\NotifyAlert.exe
C:\Program Files\Microsoft Broadband Networking\MSBNTray.exe
C:\Program Files\Nikon\NkView6\NkvMon.exe
C:\WINDOWS\System32\svchost.exe
c:\progra~1\mcafee.com\vso\mcvsftsn.exe
C:\Program Files\Web_Rebates\WebRebates1.exe
C:\Program Files\Web_Rebates\WebRebates0.exe
C:\Program Files\Internet Explorer\iexplore.exe
C:\Documents and Settings\Jeannie\My Documents\Download\HijackThis.exe
R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Bar = http://websearch.drsnsrch.com... Read more
A:HiJackThis for my Dell computer, please.
Hello,
PLease follow the directions below:
Download the programs below, make sure they are updated, and when they are all downloaded, boot into Safe Mode by restarting your computer and continually tapping the F8 key. It will prompt you to open in Safe Mode. Once you are in Safe Mode, run the programs below and follow the other instructions.
Download AdAware 6 181 from here: http://www.lavasoftusa.com/
Before you scan with AdAware, check for updates of the reference file by using the "webupdate".
Then ........
Make sure the following settings are made and on -------"ON=GREEN"
From main window :Click "Start" then " Activate in-depth scan"
Then......
Click "Use custom scanning options>Customize" and have these options on: "Scan within archives" ,"Scan active processes","Scan registry", "Deep scan registry" ,"Scan my IE Favorites for banned URL" and "Scan my host-files"
Then.....
Go to settings(the gear on top of AdAware)>Tweak>Scanning engine and tick "Unload recognized processes during scanning" ...........then........"Cleaning engine" and tick "Automatically try to unregister objects prior to deletion" and "Let windows remove files in use at next reboot"
Then...... click "proceed" to save your settings.
remove anything it finds. when you are done, please go to www.download.com and download Spybot 1.3 ... Read more
1 more replies
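The "Running processes" section of a HijackThis log like the one above can be triaged with a short script. This is a sketch, not part of the original reply: the watchlist is hand-made from adware names that happen to appear in the pasted log (SahAgent, eZula's mmod, Web_Rebates), and the sample paths are copied from it.

```python
# Hypothetical hand-made watchlist of executable names widely reported as
# adware in the HijackThis era; extend it as needed for other logs.
suspects = {"sahagent.exe", "mmod.exe", "webrebates0.exe", "webrebates1.exe"}

# A few "Running processes" lines copied from the log in the thread above.
log = """\
Running processes:
C:\\WINDOWS\\System32\\smss.exe
C:\\WINDOWS\\System32\\SahAgent.exe
C:\\PROGRA~1\\ezula\\mmod.exe
C:\\Program Files\\Web_Rebates\\WebRebates1.exe
C:\\Program Files\\Internet Explorer\\iexplore.exe
"""

# Compare only the final path component, case-insensitively, against the list.
flagged = [
    line for line in log.splitlines()
    if line.rsplit("\\", 1)[-1].lower() in suspects
]
for path in flagged:
    print("suspicious:", path)
```

Anything flagged this way still deserves a manual look before removal; a name match alone is not proof of infection.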
Answer Match 32.76%
Just got a new Dell XPS computer, and it looks like there are a lot of items in the HijackThis log. Are there any I can delete?
Logfile of HijackThis v1.99.1
Scan saved at 5:31:15 PM, on 8/11/2006
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\csrss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\eHome\ehRecvr.exe
C:\WINDOWS\eHome\ehSched.exe
C:\Program Files\Intel\Intel Matrix Storage Manager\iaantmon.exe
c:\program files\mcafee.com\agent\mcdetect.exe
c:\PROGRA~1\mcafee.com\vso\mcshield.exe
c:\PROGRA~1\mcafee.com\agent\mctskshd.exe
C:\PROGRA~1\McAfee.com\PERSON~1\MpfService.exe
C:\PROGRA~1\McAfee\SPAMKI~1\MSKSrvr.exe
C:\Program Files\Spyware Doctor\sdhelp.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\ehome\mcrdsvc.exe
C:\Program Files\Intel\IntelDH\Intel(R) Quick Resume Technology\ELService.exe
C:\WINDOWS\system32\dllhost.exe
C:\WINDOWS\System32\alg.exe
C:\WINDOWS\Explorer.EXE
C:\WINDOWS\ehome\ehtray.exe
C:\WINDOWS\system32\hkcmd.exe
C:\WINDOWS\system32\igfxpers.exe
C:\WINDOWS\stsystra.exe
C:\Program Files\Intel\Intel Matrix Storage Manager\iaanotif.exe
C:\Prog... Read more
A:Solved: dell computer
14 more replies
Answer Match 32.76%
How far can you expand your average Dell computer?
Hey all, could someone tell me if I can change the motherboard in my factory Dell Vostro 410 desktop, so that I can replace it with a motherboard that is SLI-enabled?
If so, what motherboards should I be looking for? Is there a specific one due to the size of the case?
Also, being a Dell computer, I realize you're limited in what you can change; does that apply to the PSU and CPU?
Thank You, look forward to your replies
A:HELP: Expanding Dell Computer
The case is standard ATX size, so a standard ATX motherboard will fit. Find one that is SLI-enabled. Since you are switching the motherboard out, why not build a PC from scratch? You'll have to look in the case, but the PSU should be interchangeable.
4 more replies
Answer Match 32.76%
Unable to log on. Firefox proxy server refusing connections. Problems with firewall. pLEASE HELP?
More replies
Answer Match 32.76%
I am still having problems with this computer. I cannot download updates and I get an error code (8007001f)
(a device attached to the system is not functioning). Please help, I have tried everything I can think of. Please help again!
tonchis
A:Dell laptop computer
You already have a thread started here for the same problem. Only one thread is allowed per problem.
Use this thread....
http://forums.techguy.org/windows-vista/1056690-dell-laptop-computer.html
2 more replies
Answer Match 32.76%
I downloaded Windows 10 Pro two days ago, upgrading from Windows 7 Pro, which was running fine. I made no changes whatsoever, and now I cannot get the printer or fax to communicate with the computer. Any suggestions on how to solve this problem would be greatly appreciated.
Bud Rose
A:DELL 968 AIO - Communication not available with computer
Yes. Go to the manufacturer's web site for your printer, download the Windows 10 drivers, and run the install. Do the same for the modem that you were using for fax, make sure it shows up in Device Manager without a yellow exclamation point, and then make sure the Windows Fax software is installed.
1 more replies
Answer Match 32.76%
I had a virus and must have messed up my computer in trying to fix it. Now my computer won't boot. I can't get it to boot from the XP CD. In the setup boot sequence I set it only for the CD, and I get "strike F1 to retry boot, F2 for setup". If I turn it off and on and hit F12, I get 5 choices: 1) Normal, 2) IDE CD-ROM device, 3) System setup, 4) IDE drive diagnostics, 5) Boot to utility partition. None work. The diagnostics say the HD is fine. I'd like to reformat/reload XP on the hard drive and can't. On another computer I can get it to boot from the XP CD and it works.
A:Dell XP computer won't boot
You can try resetting the BIOS to default settings...by temporarily removing the CMOS battery...and that might allow a boot.
What is the system model?
Louis
3 more replies
Answer Match 32.76%
JohnWill said:
I always make the recovery disk as the first step. Then if there's any question about the configuration, you can always knock it flat and start out with the factory configuration again.Click to expand...
hey JohnWill,
I also have a Dell Dimension 2400 with XP Home on it. I bought the PC almost new at a yard sale last summer. The lady that owned it even transferred the warranty for me. When I got it home (it came with all the disks), the first thing I did was remove all of the Norton software from it.
A BIG MISTAKE, I found out later: Dell has a "PC Restore for Windows XP".
Here's the link:http://support.dell.com/support/topics/global.aspx/support/dsn/en/document?docid=181316#1
Basically, "When the Dell splash screen appears during the computer startup process, press and hold <Ctrl> and then press <F11>. Then, release both keys at the same time.
NOTE: Some systems like Inspiron Mini 9 (910) do not support System Restore as they do not have <F11> nor any substitute."
In the Dell PC Restore by Symantec window, click Restore. Alternatively, press <Tab> to highlight Restore, and then press <Enter>.
I didn't know this !!!
I right-clicked on the "My Computer" icon to get to the Microsoft Management Console window, and I have 3 partitions.
The first one is a "Healthy EISA Configuration" that is FAT, 31 MB; the 2nd one is "... Read more
A:Dell computer issue
So you are saying Ctrl+F11 does not work??
If Ctrl+F11 is broken it can be repaired sometimes but is not easy, lots of reading to do.
http://www.goodells.net/dellrestore/fixes.htm
Uninstalling Norton software while in Windows did not break it; the most common cause of this problem is installing XP from the CD. Dell uses a proprietary Master Boot Record (MBR); when you reinstall XP from the CD, it replaces this MBR with a generic one and breaks the Ctrl+F11 functionality. If the Dell image is still on the "healthy Unknown Partition" that is FAT32, 3.49 GB, then you can repair it in most cases following Dan Goodell's instructions.
2 more replies
Answer Match 32.76%
I just bought a refurbished Dell Dimension 4400 (Pentium 4, 1.7 GHz, 256 MB, 30 GB) from ubid.com. It came with no operating software or monitor, which was fine because I had my own. However, I can't seem to be able to start up the computer. This is what I'm doing:
I turn on the computer with the "Dell Dimension Resource CD" in the computer. (It does say on the top of the CD that "You must boot your computer from this CD to run the diagnostics, which may require changing your computer's boot sequence.")
I) It starts to run, and at first it says "Invalid system disk, replace the disk, and then press any key." I do so, and it then gives me an option to either boot from hard drive or CD, and I select "2", which is CD. It then shows "Starting Windows 98", which is a bit odd because there should be no OS on the computer. I will be installing XP.
II) I then get a list of choices: 1) run graphics, 2) run 32-bit Dell diagnostics, 3) run 3Com NIC config utility, 4) create SATA RAID driver diskette, 5) exit to DOS, 6) reboot system.
III) I tried choice #1; it passed.
IV) Choice 2 said, "System error: this system is not recognized as a supported Dell PC. These diagnostics may not be run on unsupported systems." I click "OK".
It then goes back to "I" above and on to the 6 choices again.
V) Choice 3: "Error: No 3C90X NICs are installed in this comp." I click "OK".
Basically, all through the 6 choices... Read more
A:Dell Computer is a mess
The monitor shouldn't affect it, have you tried just running from the WindowsXP cd?
3 more replies
Answer Match 32.76%
Lightning hit my old computer, so hubby bought me a new Dell with Vista on it. Can't get half the web sites to open, and some of my downloads won't download. It's a Dell with 1 GB shared dual-channel SDRAM,
250 GB hard drive,
AMD Athlon 64 X2 3600+ dual-core processor.
Can't get my DSL to work, been with them on the phone all day.
What do I do? Thanks. Oh yes, I'm using dial-up now and it's so slow.
More replies
Answer Match 32.76%
I was wondering if it's possible to transfer all the components from my Dell computer into a new computer case, or would I need a new mobo and whatnot?
A:Dell Computer -> Customcase?
10 more replies
Answer Match 32.76%
I can't remember when this first started, but my computer shuts down by itself. I know this is serious, and Microsoft sent me a message saying this was due to having the wrong driver on one of my programs, but how do I fix this problem? It happens in different programs.
A:XP Dell computer shuts off
Does this happen after a certain amount of time? Or is it really just random?
Can you give examples as to what programs you are running when it happens?
1 more replies
Answer Match 32.76%
I am having problems with this computer: every time I turn on the computer, it will not boot up. This computer has 4 memory slots with 2 memory chips, and when I move one chip to another slot it will boot up. Then when I turn the computer on again, it will not boot up until I move the memory chip to another slot; then it will boot up. Do you think I have bad memory chips? Please help!
Thank you,
Tonchis
A:Dell Desktop computer
7 more replies
Answer Match 32.76%
I think my Dell Vostro must have a proprietary block against cloning to a non-Dell machine. I have tried Spotmau, Macrium, AOMEI, Microsoft, and maybe another. I can get the clone to load on the new machine's hard drive, but I cannot get it so I can access it. Is there any way to overcome this? I have tried it on an Asus and a Sony computer.
angelo
A:Cloning a Dell Computer
You cannot use a clone from a Dell computer on another. It violates the manufacturer's OEM license from Microsoft. Besides: different computer, different motherboard, different drivers. Windows would have to be reinstalled even if it were a legit transfer.
I have a dell computer that keeps shutting down before it even has a chance to start up.
It just started shutting down every now and then. Now it won't even start up. It gets to the opening dell screen, then shuts down.
I noticed the cooling fan started to run really fast, so I decided to check the CPU and found that it gets hot within seconds after I push the power button. Could this be a bad CPU or motherboard?
I am running windows XP sp2
Hyper thread cpu
The system is about three years old
A:My Dell computer will not start
Actually, the CPU is probably the issue. Normally we would look at the heatsink fan, but Dell uses one fan both for case cooling and to cool the heatsink, so it may be that fan as well.
Can you get into the bios and see the temps and fan speed? Hard drives are also warrantied for 4 years but you really aren't getting far enough to have it be a bad hard drive.
Good evening. I tried to help a friend out; unfortunately, I ended up messing up his new computer from Dell.
I deleted all drives on his HD and repartitioned using PM7, and when I try to load the OS, it just shows www.dell.com and underneath a message,
LOADING PBR FOR DESCRIPTOR 1...DONE
then the cursor just blinks.
Any way to fix this? TIA for any help in this matter.
da nerd
Feel this will be better served here.
A:partitioning a dell computer
Quick question: if this is a fresh install, why use Partition Magic?
Have a Dell 410 XPS with Vista that I would like to reformat. What is the proper procedure for reformatting both the C and D drives? My reinstallation disk has more options than the reinstallation disks that came with my old Compaq (it had 2 CDs and no options). The manual that came with my computer is rather vague about this. I know this can be a long drawn-out affair; I have backed up all my info and would like to clean my computer out and bring it back to like it was the day I got it. Yes, I do have some major issues with my computer that require me to reformat, mostly caused by me.
thanks mike
A:Reformatting A Dell 410 Computer
I did a Dell today that used F8 to access the recovery partition. That should be the quickest way to reformat your Dell and will leave the recovery partition intact (along with the Tools partition if you have one).
If you'd like to get rid of all partitions, then you'll have to delete the partitions and create one partition to hold everything. Then you can install Vista with the Operating System, Drivers, and Utilities disks that came with your Dell.
Personally, I'd go with the first option above - it's quicker and you won't have to install all the drivers and applications.
Hi, I have had a lot of trouble on my 6-year-old Dell Dimension 2400 desktop computer. It is too complex to go into detail here, but it has been a nightmare for nearly 4 weeks, and I am still going to have to do it all over again, as in reinstall Win XP, or the drivers at least.
It crashes to a bluescreen every time I insert a USB stick, connect and start my printer, or connect my Western Digital external HD, which has all the stuff I backed up.
The major problem was that I missed the "install order list" the first time (my own inexperience), but I have found another Dell list and it contradicts the first order list anyway. So, is there a site where I can find out the order to install them in, so I can download them onto a disk or USB? I had no internet when I first installed Windows XP, and that was the beginning of my downfall. I have been to the Dell forums and tried the things they suggest, but you cannot delete all the USBs in Device Manager, as you have no keyboard or mouse halfway through. Thanks if you can help. Dasha
A:Can I use other drivers on a Dell computer
Wow. Yea. Sounds like your machine is currently "quite the mess"!
You should certainly start with the drivers on the Dell web site.
Always start with the chipset drivers. I'd do network and storage (IDE) drivers next then the others. Install drivers before installing the applications.
Yo, my father has a Dell Dimension 2400 as his office computer (see bottom of this post for tech specs) and I want to upgrade it to an AMD Athlon 64 3300+ Socket 754. What I want to do is place the new upgrade into the same case as the Dell. Here's what I plan on purchasing...
AMD Athlon 64 3300+ 2.40GHz / 256KB Cache / 1600MHz FSB / OEM / Socket 754 / Processor
My problem is that I want to get a motherboard that will fit the ATX power connector in the Dell computer, and the motherboard will need to run on the amount of power the Dell's PSU supplies (not sure of the wattage). Any ideas for a good mobo are appreciated.
Tech Specs (Dell Dimension 2400):
Pentium 4 - 2.4 Ghz
Some Intel Mobo w/ integrated graphics
2 x 256 MB PC2700 333Mhz
...etc... lol, nothing much else matters
I'm ASSUMING that the power supply is 20-pin (I'm not at my father's office, so I wouldn't know). If you're missing any info that you need to help me, let me know.
A:Upgrade a Dell Computer...
What is it you plan on doing with the computer? Gaming? If the present motherboard has an AGP port, you would be better off spending your money on a graphics card.
You're also going to need a copy of XP: once you switch the motherboard from an Intel chipset to an AMD chipset, you're not going to be able to boot from the existing hard drive. You also won't be able to use the Dell recovery disk to reload Windows once you swap mobos.
Most of Dell's PSUs with P4s are 300 watt.
My Dell XPS 410 computer won't turn on. What happens is: I turn it on, and it says to start normally or run startup repair. When I click startup repair, it says it can't fix it. When I start normally, it gets to the multi-colored flag (the Windows loading screen) and sits there for about 30 seconds to a minute; then it either restarts or goes to a black screen with yellow letters, surrounded by a small blue rectangle, saying "Out of range." I don't know what to do, but what I do know is that I can boot my computer into safe mode, which I'm in right now (safe mode with networking). I've tried looking for solutions but can't find any that work. Please help, and thanks.
A:xps 410 Dell computer won't turn on
Hello,
I just purchased a new XPS 8900 desktop, delivered on 5/25/2016, and it is restarting itself. No new hardware or programs installed except World of Warcraft and Ventrilo. I just really started using this machine late last week, since I was really busy after it arrived.
I researched the web and it said it may be a PSU-related issue. It is under warranty, but any idea why a new computer is restarting itself?
Thank you.
Susan
A:New Dell Computer Restarting by Itself
Windows 10?
There are many possible reasons, both hardware and software: PSU, RAM or hard drive problem, overheating, corrupted OS or other software...
Does it just restart? Any errors on the screen? You can look in Windows Event Viewer for errors around the time of a crash that might point you in the right direction.
If you bought it directly from Dell, I believe you have 21 days from invoice date (not delivery date) to return it so contact Dell Tech Support ASAP.
If you got it from a big box store, you can probably return/exchange it under whatever policy they have. So get going.
Either way, be sure to reset Windows to the factory image before you return it.
I recently tried to install Service Pack 2 on a Dell computer, and about halfway through I get a message telling me that installation cannot be completed and that Windows may not work correctly. Every time after that I get the BSoD, and a message which tells me there is a problem with the RAM. I read about this problem, and I have figured out that the install messed with the RAM settings. I don't know how to fix it, though. I can boot up with a Linux CD, though. What should I do?
A:SP2 install on Dell computer
Attempting to install XP SP2 in a computer has no effect on how RAM modules function.
What is the model name and model number of that Dell, and how much RAM is installed?
-------------------------------------------------------------
Are you using an actual XP SP2 CD?
Have you deleted all current partitions, created a new C partition, formatted that new partition with the NTFS file system, and then attempted to install XP SP2?
-------------------------------------------------------------
Machine is slow -- I can hear the HD running continuously during normal computer use. This is an older machine (2005); wondering if installing/uninstalling programs has slowed down the machine. Added additional RAM last year -- 1 GB. Perhaps too much is running at once? Help is appreciated.
When the computer is left on too long, it freezes. Currently the "install/uninstall shield" remains in my system tray without installing anything. Thanks!
Express Service Code: 950-936-014-9
Service Tag: 4D9ML91
OS Name Microsoft Windows XP Professional
Version 5.1.2600 Service Pack 3 Build 2600
OS Manufacturer Microsoft Corporation
System Name DELL
System Manufacturer Dell Inc.
System Model Dell DV051
System Type X86-based PC
Processor x86 Family 15 Model 4 Stepping 9 GenuineIntel ~2793 Mhz
BIOS Version/Date Dell Inc. A03, 10/8/2005
SMBIOS Version 2.3
Windows Directory C:\WINDOWS
System Directory C:\WINDOWS\system32
Boot Device \Device\HarddiskVolume2
Locale United States
Hardware Abstraction Layer Version = "5.1.2600.5512 (xpsp.080413-2111)"
User Name DELL\Paul
Time Zone Eastern Standard Time
Total Physical Memory 2,048.00 MB
Available Physical Memory 875.90 MB
Total Virtual Memory 2.00 GB
Available Virtual Memory 1.95 GB
Page File Space 2.72 GB
Page File C:\pagefile.sys
Logfile of Trend Micro HijackThis v2.0.4
Scan saved at 9:24:29 PM, on 12/20/2010
Platform: Windows XP SP3 (WinNT 5.01.2600)
MSIE: Internet Explorer v8.00 (8.00.6001.18702)
Boot mode: Normal
Running pr...
A:Slow Computer/Dell
It seems that your virtual memory is quite low. Go to your uninstall list and try uninstalling programs that you rarely or never use; low space on your hard disk is usually the cause of a slow computer. Also, try moving some unimportant things (i.e., old documents, PowerPoints, things of that sort) to an alternate hard drive or USB storage device... that should help.
My Dell computer does not connect to the LAN or the Internet. The Local Area Connection icon appears on the right-hand side of the taskbar with no red X on it, but the front lamp only flashes and the other lamp does not flash at all. I tried to change the settings of the Internet Protocol (TCP/IP), but no luck.
I heard that Dell computers may have some issues with Windows LAN.
Osama Geris
Just got a Walmart Dell pre-loaded with Windows Vista. I will be putting Windows XP Professional on the drive instead. The only problem is there is no way to get the damn machine to boot from the CD-ROM so I can clobber the current format, reformat the drive NTFS, and then proceed to load XP.
I even tried using a USB bootable CD drive instead of the Serial ATA DVD drive that comes with the machine. Even after specifying the boot sequence order in the Dell CMOS, still no luck... any suggestions? Machine is a Dell E521 model.
A:Walmart Dell Computer
new dell desktop computer
I have purchased a new Dell Inspiron (3847) desktop with Windows 7 Pro. In fact, I have not even taken it out of the box yet.
It came with McAfee Security Center, which Dell is apparently promoting. But I might just forgo the McAfee and install Avast Free Antivirus, plus MBAM Free and SUPERAntiSpyware Free. I had McAfee on a PC many years ago, and I do not have anything good to remember about it.
What do you all think?
Thanks.
A:new dell desktop computer
For the last few months our new (about 9 months old) Dell computer running XP has begun to freeze up. There's no apparent pattern to it, but what we found is that if we right-click to get the Task Manager, highlight the program DAMon.exe, and click End Process, it frees up every time.
We think the program DAMon.exe has something to do with Dell's help system, and we did go to them and ask twice; we got two different solutions, neither of which sounded right, and we didn't do them. If it's their program, I assume we might consider eliminating it, but we are afraid that doing so would result in their inability to assist with some other future problem.
I am a novice and have been a little leery about doing anything. Whatever solution you might think of would have to be simple, but if anyone could suggest a simple thing, I would appreciate it.
A:Dell Computer Freeze
1. ## difference equation
Hey, I need help with this:
Solve the difference equation if
a) with the direct method
b) with generating function
Thank you!
2. ## Re: difference equation
Where did you get this problem? Please do not just write out a problem, without showing any attempt of your own, and ask for "help". To give help, we need to know what you can do and what hints you would understand! Do you know what the "direct method" is? Do you know what a "generating function" is? Do you understand that the characteristic equation for this problem is $x^2 - 5x + 6 = 0$?
If $a_n = An^3 + Bn^2 + Cn + D$ then $a_{n+1} = A(n+1)^3 + B(n+1)^2 + C(n+1) + D = An^3 + (3A+B)n^2 + (3A+2B+C)n + (A+B+C+D)$ and $a_{n+2} = A(n+2)^3 + B(n+2)^2 + C(n+2) + D = An^3 + (6A+B)n^2 + (12A+4B+C)n + (8A+4B+2C+D)$. Put those into the equation and see if there are values of A, B, C, and D such that the equation is satisfied for all n.
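A quick numerical sanity check of the characteristic-equation step (a sketch only; since the recurrence's right-hand side did not render in the original post, this verifies just the homogeneous part $a_{n+2} - 5a_{n+1} + 6a_n = 0$, whose characteristic roots are 3 and 2):

```python
# Check that a_n = C1*3^n + C2*2^n satisfies a_{n+2} - 5*a_{n+1} + 6*a_n = 0
# for arbitrary constants C1 and C2 (roots of x^2 - 5x + 6 = (x-3)(x-2)).
C1, C2 = 1.5, -0.75  # arbitrary choices

def a(n):
    return C1 * 3**n + C2 * 2**n

for n in range(10):
    assert abs(a(n + 2) - 5 * a(n + 1) + 6 * a(n)) < 1e-9
print("homogeneous solution verified for n = 0..9")
```

Once the particular solution for the non-homogeneous right-hand side is found (as in the polynomial ansatz above), the constants C1 and C2 are pinned down by the initial conditions.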
3. ## Re: difference equation
Hey again.
Well, yes, I know how to find the characteristic equation. Then I know that the roots of it are part of the solution somehow. So the roots are 3 and 2. I found somewhere online that the homogeneous solution will be an = C1*(3^n) + C2*(2^n); is that correct? Then what happens with the other part, the non-homogeneous (particular) solution? I can't find anything similar online (third order). And the last questions: do I need to sum up both solutions at the end, and how do I calculate C1 and C2?
Thank you
## Main
Since 2007, genome-wide association studies (GWASs) have identified thousands of associations between common SNPs and height, mainly using studies with participants of European ancestry. The largest GWAS published so far for adult height focused on common variation and reported up to 3,290 independent associations in 712 loci using a sample size of up to 700,000 individuals3. Adult height, which is highly heritable and easily measured, has provided a larger number of common genetic associations than any other human phenotype. In addition, a large collection of genes has been implicated in disorders of skeletal growth, and these are enriched in loci mapped by GWASs of height in the normal range. These features make height an attractive model trait for assessing the role of common genetic variation in defining the genetic and biological architecture of polygenic human phenotypes.
As available sample sizes continue to increase for GWASs of common variants, it becomes important to consider whether these larger samples can ‘saturate’ or nearly completely catalogue the information that can be derived from GWASs. This question of completeness can take several forms, including prediction accuracy compared with heritability attributable to common variation, the mapping of associated genomic regions that account for this heritability, and whether increasing sample sizes continue to provide additional information about the identity of prioritized genes and gene sets. Furthermore, because most GWASs continue to be performed largely in populations of European ancestry, it is necessary to address these questions of completeness in the context of multiple ancestries. Finally, some have proposed that, when sample sizes become sufficiently large, effectively every gene and genomic region will be implicated by GWASs, rather than certain subsets of genes and biological pathways being specified4.
Here, using data from 5.4 million individuals, we set out to map common genetic associations with adult height, using variants catalogued in the HapMap 3 project (HM3), and to assess the saturation of this map with respect to variants, genomic regions and likely causal genes and gene sets. We identify significant variants, examine signal density across the genome, perform out-of-sample estimation and prediction analyses within studies of individuals of European ancestry and other ancestries and prioritize genes and gene sets as likely mediators of the effects on height. We show that this set of common variants reaches predicted limits for prediction accuracy within populations of European ancestry and largely saturates both the genomic regions associated with height and broad categories of gene sets that are likely to be relevant; future work will be required to extend prediction accuracy to populations of other ancestries, to account for rarer genetic variation and to more definitively connect associated regions with individual probable causal genes and variants.
An overview of our study design and analysis strategy is provided in Extended Data Fig. 1.
## Meta-analysis identifies 12,111 height-associated SNPs
We performed genetic analysis of up to 5,380,080 individuals from 281 studies from the GIANT consortium and 23andMe. Supplementary Fig. 1 represents projections of these 281 studies onto principal components reflecting differences in allele frequencies across ancestry groups in the 1000 Genomes Project (1KGP)5. Altogether, our discovery sample includes 4,080,687 participants of predominantly European ancestries (75.8% of total sample); 472,730 participants with predominantly East Asian ancestries (8.8%); 455,180 participants of Hispanic ethnicity with typically admixed ancestries (8.5%); 293,593 participants of predominantly African ancestries—mostly African American individuals with admixed African and European ancestries (5.5%); and 77,890 participants of predominantly South Asian ancestries (1.4%). We refer to these five groups of participants or cohorts as EUR, EAS, HIS, AFR and SAS, respectively, while recognizing that these commonly used groupings oversimplify the actual genetic diversity among participants. Cohort-specific information is provided in Supplementary Tables 1–3. We tested the association between standing height and 1,385,132 autosomal bi-allelic SNPs from the HM3 tagging panel2, which contains more than 1,095,888 SNPs with a minor allele frequency (MAF) greater than 1% in each of the five ancestral groups included in our meta-analysis. Supplementary Fig. 2 shows the frequency and imputation quality distribution of HM3 SNPs across all five groups of cohorts.
We first performed separate meta-analyses in each of the five groups of cohorts. We identified 9,863, 1,511, 918, 453 and 69 quasi-independent genome-wide significant (GWS; P < 5 × 10−8) SNPs in the EUR, HIS, EAS, AFR and SAS groups, respectively (Table 1 and Supplementary Tables 4–8). Quasi-independent associations were obtained after performing approximate conditional and joint (COJO) multiple-SNP analyses6, as implemented in GCTA7 (Methods). Supplementary Note 1 presents sensitivity analyses of these COJO results, highlights biases due to relatively long-range linkage disequilibrium (LD) in admixed AFR and HIS individuals8 (Supplementary Fig. 3), and shows how to correct those biases by varying the GCTA input parameters (Supplementary Fig. 4). Moreover, previous studies have shown that confounding due to population stratification may remain uncorrected in large GWAS meta-analyses9,10. Therefore, we specifically investigated confounding effects in all ancestry-specific GWASs, and found that our results are minimally affected by population stratification (Supplementary Note 2 and Supplementary Figs. 5–7).
To compare results across the five groups of cohorts, we examined the genetic and physical colocalization between SNPs identified in the largest group (EUR) with those found in the other (non-EUR) groups. We found that more than 85% of GWS SNPs detected in the non-EUR groups are in strong LD ($${r}_{{\rm{LD}}}^{2}$$ > 0.8) with at least one variant reaching marginal genome-wide significance (PGWAS < 5 × 10−8) in EUR (Supplementary Tables 5–8). Furthermore, more than 91% of associations detected in non-EUR meta-analyses fall within 100 kb of a GWS SNP identified in EUR (Extended Data Fig. 2). By contrast, a randomly sampled HM3 SNP (matched with GWS SNPs identified in non-EUR meta-analyses on 24 functional annotations; Methods) falls within 100 kb of a EUR GWS SNP 55% of the time on average (s.d. = 1% over 1,000 draws). Next, we quantified the cross-ancestry correlation of marginal allele substitution effects (ρb) at GWS SNPs for all pairs of ancestry groups. We estimated ρb using five subsets of GWS SNPs identified in each of the ancestry groups, which also reached marginal genome-wide significance in at least one group. After correction for winner’s curse11,12, we found that ρb ranged between 0.64 and 0.99 across all pairs of ancestry groups and all sets of GWS SNPs (Supplementary Figs. 8–12). We also extended the estimation of ρb for SNPs that did not reach genome-wide significance and found that ρb > 0.5 across all comparisons (Supplementary Fig. 13). Thus, the observed GWS height associations are substantially shared across major ancestral groups, consistent with previous studies based on smaller sample sizes13,14.
To find signals that are specific to certain groups, we tested whether any individual SNPs detected in non-EUR GWASs are conditionally independent of signals detected in EUR GWASs. We fitted an approximate joint model that includes GWS SNPs identified in EUR and non-EUR, using LD reference panels specific to each ancestry group. After excluding SNPs in strong LD ($${r}_{{\rm{LD}}}^{2}$$ > 0.8 in either ancestry group), we found that 2, 17, 49 and 63 of the GWS SNPs detected in SAS, AFR, EAS and HIS GWASs, respectively, are conditionally independent of GWS SNPs identified in EUR GWASs (Supplementary Table 9). On average, these conditionally independent SNPs have a larger MAF and effect size in non-EUR than in EUR cohorts, which may have contributed to an increased statistical power of detection. The largest frequency difference relative to EUR was observed for rs2463169 (height-increasing G allele frequency: 23% in AFR versus 84% in EUR) within the intron of PAWR, which encodes the prostate apoptosis response-4 protein. Of note, rs2463169 is located within the 12q21.2 locus, where a strong signal of positive selection in West African Yoruba populations was previously reported15. The estimated effect at rs2463169 is β ≈ 0.034 s.d. per G allele in AFR versus β ≈ −0.002 s.d. per G allele in EUR, and the P value of marginal association in EUR is PEUR = 0.08, suggesting either a true difference in effect size or nearby causal variant(s) with differing LD to rs2463169.
Given that our results show a strong genetic overlap of GWAS signals across ancestries, we performed a fixed-effect meta-analysis of all five ancestry groups to maximize statistical power for discovering associations due to shared causal variants. The mean Cochran’s heterogeneity Q-statistic is around 34% across SNPs, which indicates moderate heterogeneity of SNP effects between ancestries. The mean chi-square association statistic in our fixed-effect meta-analysis (hereafter referred to as METAFE) is around 36, and around 18% of all HM3 SNPs are marginally GWS. Moreover, we found that allele frequencies in our METAFE were very similar to those of EUR (mean fixation index of genetic differentiation (FST) across SNPs between EUR and METAFE is around 0.001), as expected because our METAFE consists of more than 75% EUR participants and around 14% of participants with admixed European and non-European ancestries (that is, HIS and AFR). To further assess whether LD in our METAFE could be reasonably approximated by the LD from EUR, we performed an LD score regression16 analysis of our METAFE using LD scores estimated in EUR. In this analysis, we focused on the attenuation ratio statistic (RLDSC-EUR), for which large values can indicate strong LD inconsistencies between a given reference and GWAS summary statistics. A threshold of RLDSC > 20% was recommended by the authors of the LDSC software as a rule of thumb to detect such inconsistencies. Using EUR LD scores in the GWAS of HIS, which is the non-EUR group that is genetically closest to EUR (FST ≈ 0.02), yields an estimated RLDSC-EUR of around 25% (standard error (s.e.) 1.8%), consistent with strong LD differences between HIS and EUR. By contrast, in our METAFE, we found an estimated RLDSC-EUR of around 4.5% (s.e. 0.8%), which is significantly lower than 20% and not statistically different from 3.8% (s.e. 0.8%) in our EUR meta-analysis.
Furthermore, we show in Supplementary Note 1 that using a composite LD reference containing samples from various ancestries (with proportions matching that in our METAFE) does not improve signal detection over using an EUR LD reference. Altogether, these analyses suggest that LD in our METAFE can be reasonably approximated by LD from EUR.
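For context, the attenuation ratio statistic used above has a simple closed form: (LDSC intercept − 1) / (mean chi-square − 1), the fraction of test-statistic inflation attributed to confounding rather than polygenic signal. A minimal sketch (the intercept value below is purely illustrative, not an estimate from this study):

```python
def attenuation_ratio(ldsc_intercept, mean_chi2):
    # Fraction of the inflation in association statistics that the LDSC
    # intercept attributes to confounding rather than polygenicity.
    return (ldsc_intercept - 1.0) / (mean_chi2 - 1.0)

# Illustrative numbers only: with a mean chi-square of ~36 (as in the METAFE),
# an intercept of 2.575 would correspond to a ratio of ~4.5%.
print(round(attenuation_ratio(2.575, 36.0), 3))  # 0.045
```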
We therefore proceeded to identify quasi-independent GWS SNPs from the multi-ancestry meta-analysis by performing a COJO analysis of our METAFE, using genotypes from around 350,000 unrelated EUR participants in the UK Biobank (UKB) as an LD reference. We identified 12,111 quasi-independent GWS SNPs, including 9,920 (82%) primary signals with a GWS marginal effect and 2,191 secondary signals that only reached GWS in a joint regression model (Supplementary Table 10). Figure 1 represents the relationship between frequency and joint effect sizes of minor alleles at these 12,111 associations. Of the GWS SNPs obtained from the non-EUR meta-analyses above that were conditionally independent of the EUR GWS SNPs, 0/2 in SAS, 5/17 in AFR, 27/49 in EAS and 27/63 in HIS were marginally significant in our METAFE (Supplementary Table 9), and 24 of those (highlighted in Fig. 2) overlapped with our list of 12,111 quasi-independent GWS SNPs.
We next sought to replicate the 12,111 METAFE signals using GWAS data from 49,160 participants in the Estonian Biobank (EBB). We first re-assessed the consistency of allele frequencies between our METAFE and the EBB set. We found a correlation of allele frequencies of around 0.98 between the two datasets and a mean FST across SNPs of around 0.005, similar to estimates that were obtained between populations from the same continent. Of the 12,111 GWS SNPs identified through our COJO analysis, 11,847 were available in the EBB dataset, 97% of which (11,529) have a MAF greater than 1% (Supplementary Table 10). Given the large difference in sample size between our discovery and replication samples, direct statistical replication of individual associations at GWS is not achievable for most SNPs identified (Extended Data Fig. 3a). Instead, we assessed the correlation of SNP effects between our discovery and replication GWASs as an overall metric of replicability3,17. Among the 11,529 out of 11,847 SNPs that had a MAF greater than 1% in the EBB, we found a correlation of marginal SNP effects of ρb = 0.93 (jackknife standard error; s.e. 0.01) and a correlation of conditional SNP effects using the same LD reference panel of ρb = 0.80 (s.e. 0.03; Supplementary Fig. 14). Although we had limited power to replicate associations with 238 GWS variants that are rare in the EBB (MAF < 1%), we found, consistent with expectations (Methods and Extended Data Fig. 3b), that 60% of them had a marginal SNP effect that was sign-consistent with that from our discovery GWAS (Fisher's exact test; P = 0.001). The proportion of sign-consistent SNP effects was greater than 75% (Fisher's exact test; P < 10−50) for variants with a MAF greater than 1%—also consistent with expectations (Extended Data Fig. 3b). Altogether, our analyses demonstrate the robustness of our findings and show their replicability in an independent sample.
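The sign-consistency expectation invoked above can be illustrated with a simple binomial tail calculation (a sketch only; the study reports Fisher's exact test, and the counts below simply restate the 60% of 238 rare variants mentioned in the text under a naive null of random signs):

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of observing at least
    k sign-consistent SNP effects out of n if signs were random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

k = round(0.60 * 238)  # 143 sign-consistent variants out of 238
print(k, binom_sf(k, 238))  # tail probability on the order of 10^-3
```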
## Genomic distribution of height-associated SNPs
To examine signal density among the 12,111 GWS SNPs detected in our METAFE, we defined a measure of local density of association signals for each GWS SNP on the basis of the number of additional independent associations within 100 kb (Supplementary Fig. 15). Supplementary Fig. 16 shows the distributions of signal density for GWS SNPs identified in each ancestry group and in our METAFE. We observed that 69% of GWS SNPs shared their location with another associated, conditionally independent, GWS SNP (Fig. 2). The mean signal density across the entire genome is 2.0 (s.e. 0.14), consistent with a non-random genomic distribution of GWS SNPs. Next, we evaluated signal density around 462 autosomal genes curated from the Online Mendelian Inheritance in Man (OMIM) database18 as containing pathogenic mutations that cause syndromes of abnormal skeletal growth ('OMIM genes'; Methods and Supplementary Table 11). We found that a high density of height-associated SNPs is significantly correlated with the presence of an OMIM gene nearby19,20 (enrichment fold of OMIM gene when density is greater than 1: 2.5×; P < 0.001; Methods and Extended Data Fig. 4a). Notably, the enrichment of OMIM genes almost linearly increases with the density of height-associated SNPs (Extended Data Fig. 4b). Thus, these 12,111 GWS SNPs nonrandomly cluster near each other and near known skeletal growth genes.
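The local-density measure defined above (for each GWS SNP, the number of additional conditionally independent GWS SNPs within 100 kb) amounts to a simple window count over per-chromosome positions. A sketch with made-up positions:

```python
from bisect import bisect_left, bisect_right

def signal_density(positions, window=100_000):
    """For each GWS SNP position (one chromosome), count how many other
    independent GWS SNPs lie within +/- window bp of it."""
    pos = sorted(positions)
    return [bisect_right(pos, p + window) - bisect_left(pos, p - window) - 1
            for p in pos]

# Toy example: three clustered signals and one isolated signal.
print(signal_density([100_000, 150_000, 190_000, 900_000]))  # [2, 2, 2, 0]
```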
The largest density of conditionally independent associations was observed on chromosome 15 near ACAN, a gene mutated in short stature and skeletal dysplasia syndromes, where 25 GWS SNPs co-localize within 100 kb of one another (Fig. 2 and Supplementary Fig. 17). We show in Supplementary Note 3 and Extended Data Fig. 5a–d, using haplotype- and simulation-based analyses, that a multiplicity of independent causal variants is the most likely explanation of this observation. We also found that signal density is partially explained by the presence of a recently identified21,22 height-associated variable-number tandem repeat (VNTR) polymorphism at this locus (Supplementary Note 3). In fact, the 25 independent GWS SNPs clustered within 100 kb of rs4932198 explain more than 40% of the VNTR length variation in multiple ancestries (Extended Data Fig. 5e), and an additional approximately 0.24% (P = 8.7 × 10−55) of phenotypic variance in EUR above what is explained by the VNTR alone (Extended Data Fig. 5f). Altogether, our conclusion is consistent with previous evidence of multiple types of common variation influencing height through ACAN gene function, involving multiple enhancers23, missense variants24 and tandem repeat polymorphisms21,22.
## Variance explained by SNPs within identified loci
To quantify the proportion of height variance that is explained by GWS SNPs identified in our METAFE, we stratified all HM3 SNPs into two groups: SNPs in the close vicinity of GWS SNPs, hereafter denoted GWS loci; and all remaining SNPs. We defined GWS loci as non-overlapping genomic segments that contain at least one GWS SNP, such that GWS SNPs in adjacent loci are more than 2 × 35 kb away from each other (that is, a 35-kb window on each side). We chose this size window because it was predicted that causal variants are located within 35 kb of GWS SNPs with a probability greater than 80% (ref. 25). Accordingly, we grouped the 12,111 GWS SNPs identified in our METAFE into 7,209 non-overlapping loci (Supplementary Table 12) with lengths ranging from 70 kb (for loci containing only one signal) to 711 kb (for loci containing up to 25 signals). The average length of GWS loci is around 90 kb (s.d. 46 kb). The cumulative length of GWS loci represents around 647 Mb, or about 21% of the genome (assuming a genome length of around 3,039 Mb)26.
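The locus construction described above amounts to merging a 35-kb flank on each side of every GWS SNP into non-overlapping intervals, so that SNPs in adjacent loci end up more than 2 × 35 kb apart. A minimal sketch with hypothetical per-chromosome positions:

```python
def merge_loci(positions, flank=35_000):
    """Merge GWS SNP positions (one chromosome) into non-overlapping loci.
    Each locus extends `flank` bp beyond its outermost SNPs; loci whose
    flanked intervals touch are merged, so SNPs in adjacent loci are
    more than 2*flank bp apart."""
    loci = []
    for p in sorted(positions):
        if loci and p - flank <= loci[-1][1]:
            loci[-1][1] = p + flank          # extend the current locus
        else:
            loci.append([p - flank, p + flank])  # start a new locus
    return [tuple(l) for l in loci]

print(merge_loci([100_000, 150_000, 400_000]))
# [(65000, 185000), (365000, 435000)]
```

Under this rule a locus containing a single SNP spans exactly 70 kb, matching the minimum locus length reported above.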
To estimate the fraction of heritability that is explained by common variants within the 21% of the genome overlapping GWS loci, we calculated two genomic relationship matrices (GRMs)—one for SNPs within these loci and one for SNPs outside these loci—and then used both matrices to estimate a stratified SNP-based heritability ($${h}_{{\rm{SNP}}}^{2}$$) of height in eight independent samples of all five population groups represented in our METAFE (Fig. 3 and Methods). Altogether, our stratified estimation of SNP-based heritability shows that SNPs within these 7,209 GWS loci explain around 100% of $${h}_{{\rm{SNP}}}^{2}$$ in EUR and more than 90% of $${h}_{{\rm{SNP}}}^{2}$$ across all non-EUR groups, despite being drawn from less than 21% of the genome (Fig. 3). We also varied the window size used to define GWS loci and found that 35 kb was the smallest window size for which this level of saturation of SNP-based heritability could be achieved (Supplementary Fig. 18).
To further assess the robustness of this key result, we tested whether the 7,209 height-associated GWS loci are systematically enriched for trait heritability. We chose body-mass index (BMI) as a control trait, given its small genetic correlation with height (rg = −0.1, ref. 27) and found no significant enrichment of SNP-based heritability for BMI within height-associated GWS loci (Supplementary Fig. 19). Furthermore, we repeated our analysis using a random set of SNPs matched with the 12,111 height-associated GWS SNPs on EUR MAF and LD scores. We found that this control set of SNPs explained only around 27% of $${h}_{{\rm{SNP}}}^{2}$$ for height, consistent with the proportion of SNPs within the loci defined by this random set of SNPs (Supplementary Figs. 18 and 19). Finally, we extended our stratified estimation of SNP-based heritability to all well-imputed common SNPs (that is, beyond the HM3 panel) and found, consistently across population groups, that although more genetic variance can be explained by common SNPs that are not included in the HM3 panel, all information remains concentrated within these 7,209 GWS loci (Extended Data Fig. 6). Thus, with this large GWAS, nearly all of the variability in height that is attributable to common genetic variants can be mapped to regions comprising around 21% of the genome. Further work is required in cohorts of non-European ancestries to map the remaining 5–10% of the SNP-based heritability that is not captured within those regions.
## Out-of-sample prediction accuracy
We quantified the accuracy of multiple polygenic scores (PGSs) for height on the basis of GWS SNPs (hereafter referred to as PGSGWS) and on the basis of all HM3 SNPs (hereafter referred to as PGSHM3). PGSGWS were calculated using joint SNP effects from COJO, and PGSHM3 using joint effects calculated using the SBayesC method28 (Methods). We denote $${R}_{{\rm{GWS}}}^{2}$$ and $${R}_{{\rm{HM}}3}^{2}$$ as the prediction accuracy of PGSGWS and PGSHM3, respectively. For conciseness, we also use the abbreviations PGSGWS-X and PGSHM3-X (and $${R}_{{\rm{GWS}}-{\rm{X}}}^{2}$$ and $${R}_{{\rm{HM}}3-{\rm{X}}}^{2}$$) to specify which GWAS meta-analysis each PGS (and corresponding prediction accuracy) was trained from. For example, PGSGWS-METAFE refers to PGSs based on 12,111 GWS SNPs identified from our METAFE.
We first present results from PGSGWS across different ancestry groups. PGSGWS-METAFE yielded prediction accuracies greater than or equal to that of all other PGSGWS (Fig. 4a), partly reflecting sample size differences between ancestry-specific GWASs and also consistent with previous studies29. PGSGWS-EUR (based on 9,863 SNPs) was the second best of all PGSGWS across ancestry groups except in AFR. Indeed, PGSGWS-AFR (based on 453 SNPs) yielded an accuracy of 8.5% (s.e. 0.6%) in AFR individuals from UKB and PAGE; that is, significantly larger than the 5.9% (s.e. 0.6%) and 7.0% (s.e. 0.6%) achieved by PGSGWS-EUR in these two samples, respectively (Fig. 4a). PGSGWS-METAFE was the best of all PGSGWS in AFR participants with an accuracy $${R}_{{\rm{GWS}}-{\rm{METAFE}}}^{2}$$ = (12.3% + 9.4%)/2 = 10.8% (s.e. 0.5%) on average between UKB and PAGE (Fig. 4a). Across ancestry groups, the highest accuracy of PGSGWS-METAFE was observed in EUR participants ($${R}_{{\rm{GWS}}-{\rm{METAFE}}}^{2}$$~40%; s.e. 0.6%) and the lowest in AFR participants from the UKB ($${R}_{{\rm{GWS}}-{\rm{METAFE}}}^{2}$$ ≈ 9.4%; s.e. 0.7%). Note that the difference in $${R}_{{\rm{GWS}}-{\rm{METAFE}}}^{2}$$ between the EUR and AFR ancestry cohorts is expected because of the over-representation of EUR in our METAFE, and consistent with a relative accuracy ($${R}_{{\rm{GWS}}-{\rm{METAFE}}}^{2}$$ in AFR)/($$\,{R}_{{\rm{GWS}}-{\rm{METAFE}}}^{2}$$ in EUR) of around 25% that was previously reported30. We extended analyses of PGSGWS to PGS based on SNPs identified with COJO at lower significance thresholds (Extended Data Fig. 7). As in previous studies3,20, the inclusion of sub-significant SNPs increased the accuracy of ancestry-specific PGSs. However, lowering the significance thresholds in our METAFE mostly improved accuracy in EUR (from 40% to 42%), whereas it slightly decreased the accuracy in AFR.
Overall, ancestry-specific PGSHM3 outperformed their corresponding PGSGWS in most ancestry groups. However, PGSHM3 was sometimes less transferable across ancestry groups than PGSGWS, in particular in AFR and HIS individuals from PAGE. In EUR, PGSHM3 reaches an accuracy of 44.7% (s.e. 0.6%), which is higher than previously published SNP-based predictors of height derived from individual-level data31,32,33 and from GWAS summary statistics28,34,35 across various experimental designs (different SNP sets, different sample sizes and so on). Finally, the largest improvement of PGSHM3 over PGSGWS was observed in AFR individuals from the PAGE study ($${R}_{{\rm{GWS}}-{\rm{AFR}}}^{2}$$ = 8.5% versus $${R}_{{\rm{HM}}3}^{2}$$ = 15.4%; Fig. 4a) and the UKB ($${R}_{{\rm{GWS}}-{\rm{AFR}}}^{2}$$ = 8.5% versus $${R}_{{\rm{HM}}3}^{2}$$ = 14.4%; Fig. 4a).
Furthermore, we sought to evaluate the prediction accuracy of PGSs relative to that of familial information as well as the potential improvement in accuracy gained from combining both sources of information. We analysed 981 unrelated EUR trios (that is, two parents and one child) and 17,492 independent EUR sibling pairs from the UKB, who were excluded from our METAFE. We found that the height of any first-degree relative yields a prediction accuracy between 25% and 30% (Fig. 4b). Moreover, the accuracy of the parental average is around 43.8% (s.e. 3.2%), which is lower than, but not significantly different from, the accuracy of PGSHM3-EUR in EUR. In addition, we found that a linear combination of the average height of parents and of the child’s PGS yields an accuracy of 54.2% (s.e. 3.2%) with PGSGWS-EUR and 55.2% (s.e. 3.2%) with PGSHM3-EUR. This observation reflects the fact that PGSs can explain within-family differences between siblings, whereas average parental height cannot. To show this empirically, we estimate that our PGSs based on GWS SNPs explain around 33% (s.e. 0.7%) of height variance between siblings (Methods). Finally, we show that the optimal weighting between parental average and PGS can be predicted theoretically as a function of the prediction accuracy of the PGS, the full narrow sense heritability and the phenotypic correlation between spouses (Supplementary Note 4 and Supplementary Fig. 20).
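Why combining mid-parent height with the child's PGS beats either predictor alone can be illustrated with a toy additive model. The simulation below is a minimal sketch under arbitrary variance choices (it is not the authors' analysis, data or parameters): the PGS carries information about the Mendelian-sampling deviation that parental height cannot capture, and the optimal linear combination is fitted by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

def env_noise():
    """Environmental noise; scale chosen so the toy heritability is 0.8."""
    return rng.normal(scale=0.5, size=n)

# Toy additive model (illustrative only)
gp1, gp2 = rng.normal(size=(2, n))                             # parents' genetic values
gc = (gp1 + gp2) / 2 + rng.normal(scale=np.sqrt(0.5), size=n)  # child, Mendelian sampling
hp1, hp2, hc = gp1 + env_noise(), gp2 + env_noise(), gc + env_noise()  # phenotypes

parent_avg = (hp1 + hp2) / 2     # mid-parent height
pgs = gc + rng.normal(size=n)    # noisy polygenic score for the child

def r2(pred, y):
    return np.corrcoef(pred, y)[0, 1] ** 2

# Optimal linear combination of the two predictors, fitted by least squares
X = np.column_stack([np.ones(n), parent_avg, pgs])
beta, *_ = np.linalg.lstsq(X, hc, rcond=None)
combined = X @ beta

print(f"parent avg: {r2(parent_avg, hc):.2f}  "
      f"PGS: {r2(pgs, hc):.2f}  combined: {r2(combined, hc):.2f}")
```

In any such model the combined fit cannot do worse than either predictor alone, since each is a special case of the linear combination.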
In summary, the estimation of variance explained and prediction analyses in samples with European ancestry show that the set of 12,111 GWS SNPs accounts for nearly all of $${h}_{{\rm{SNP}}}^{2}$$, and that combining SNP-based PGS with family history significantly improves prediction accuracy. By contrast, both estimation and prediction results show clear attenuation in samples with non-European ancestry, consistent with previous studies30,36,37,38.
## GWAS discoveries, sample size and ancestry diversity
Our large study offers the opportunity to quantify empirically how much increasing GWAS sample sizes and ancestry diversity affects the discovery of variants, genes and biological pathways. To address this question, we re-analysed three previously published GWASs of height3,19,20 and also down-sampled our meta-analysis into four subsets (including our EUR and METAFE GWASs). Altogether, we analysed seven GWASs with a sample size increasing from around 0.13 million up to around 5.3 million individuals (Table 2).
For each GWAS, we quantified eight metrics grouped into four variant- and locus-based metrics (number of GWS SNPs; number of GWS loci; prediction accuracy ($${R}_{{\rm{GWS}}}^{2}$$) of PGS based on GWS SNPs; and proportion of the genome covered by GWS loci), a functional-annotation-based metric (enrichment statistics from stratified LDSC39,40), two gene-based metrics (number of genes prioritized by summary-data-based Mendelian randomization41 (SMR; Methods) and proximity of variants with OMIM genes) and a gene-set-based metric (enrichment within clusters of gene sets or pathways). Overall, we found different patterns for the relationship between those metrics and GWAS sample size and ancestry composition, consistent with varying degrees of saturation achieved at different sample sizes.
We observed the strongest saturation for the gene-set and functional-annotation metrics, which capture how well general biological functions can be inferred from GWAS results using currently available computational methods. Using two popular gene-set prioritization methods (DEPICT42 and MAGMA43), we found that the same broad clusters of related gene sets (including most of the clusters enriched for OMIM genes) are prioritized at all GWAS sample sizes (Supplementary Fig. 21, Extended Data Fig. 8, Supplementary Tables 13–15 and Supplementary Note 5). Similarly, stratified LDSC estimates of heritability enrichment within 97 functional annotations also remain stable across the range of sample sizes (Extended Data Fig. 9). Overall, we found no significant improvement for all these higher-level metrics from adding non-EUR samples to our analyses. The latter observation is consistent with other analyses showing that GWASs expectedly implicate similar biology across major ancestral groups (Supplementary Note 5 and Supplementary Fig. 22).
For the gene-level metric, the excess in the number of OMIM genes that are proximate to a GWS SNP (compared with matched sets of random genes) plateaus at sample sizes of larger than 1.5 million, whereas the relative enrichment of GWS SNPs near OMIM genes first decreases with sample size, then plateaus when n is greater than 1.5 million (Supplementary Fig. 23a–c). Notably, the decrease observed for n values of less than 1.5 million reflects the preferential localization of larger effect variants (those identified with smaller sample sizes) closer to OMIM genes (Supplementary Fig. 23d) and, conversely, that more recently identified variants with smaller effects tend to localize further away from OMIM genes (Supplementary Fig. 23e). We also investigated the number of genes prioritized using SMR (hereafter referred to as SMR genes; Methods) using expression quantitative trait loci (eQTLs) as genetic instruments (Supplementary Table 16) as an alternative gene-level metric and found it to saturate for n values greater than 4 million (Supplementary Fig. 23f). Note that saturation of SMR genes is partly affected by the statistical power of current eQTL studies, which do not always survey biologically relevant tissues and cell types for height. Therefore, we can expect more genes to be prioritized when integrating GWAS summary statistics from this study with those from larger eQTL studies that may be available in the future and may involve more tissue types. Gene-level metrics were also not substantially affected by adding non-EUR samples, again consistent with broadly similar sets of genes affecting height across ancestries.
At the level of variants and genomic regions, we saw a steady and almost linear increase in the number of GWS SNPs as a function of sample size, as previously reported44. However, given that newly identified variants tend to cluster near ones identified at smaller sample sizes, we also saw a saturation in the number of loci identified for n values greater than 2.5 million, where the upward trend starts to weaken (Supplementary Fig. 24a). We found a similar pattern for the percentage of the genome covered by GWS loci, with the degree of saturation varying as a function of the window size used to define loci (Supplementary Fig. 24b). The observed saturation in PGS prediction accuracy (both within ancestry—that is, in EUR—and multi-ancestry) was more noticeable than that of the number and genomic coverage of GWS loci. In fact, increasing the sample size from 2.5 million to 4 million by adding another 1.5 million EUR samples increased the number of GWS SNPs from 7,020 to 9,863—that is, an increase of around 1.4-fold (9,863/7,020 ≈ 1.4)—but the absolute increase in prediction accuracy is less than 2.7%. This improvement is mainly observed in EUR but remains lower than 1.3% in individuals of the EAS and AFR ancestry groups. However, adding another approximately 1 million participants of non-EUR improves the multi-ancestry prediction accuracy by more than 3.4% (Supplementary Fig. 24c), highlighting the value of including non-EUR populations.
Altogether, these analyses show that increasing the GWAS sample size not only increases the prediction accuracy, but also sheds more light on the genomic distribution of causal variants and, at all but the largest sample sizes, the genes proximal to these variants. By contrast, enrichment of higher-level, broadly defined biological categories such as gene sets and pathways and functional annotations can be identified using relatively small sample sizes (n ≈ 0.25 million for height). Of note, we confirm that increased genetic diversity in GWAS discovery samples significantly improves the prediction accuracy of PGSs in under-represented ancestries.
## Discussion
By conducting one of the largest GWASs so far in 5.4 million individuals, with a primary focus on common genetic variation, we have provided insights into the genetic architecture of height—including a saturated genomic map of 12,111 genetic associations for height. Consistent with previous studies19,20, we have shown that signal density of associations (known and novel) is not randomly distributed across the genome; rather, associated variants are more likely to be detected around genes that have been previously associated with Mendelian disorders of growth. Furthermore, we observed a strong genetic overlap of association across cohorts with various ancestries. Effect estimates of associated SNPs are moderately to highly correlated (minimum = 0.64; maximum = 0.99), suggesting even larger correlations of effect sizes of underlying causal variants13. Moreover, although there are significant differences in power to detect an association between cohorts with European and non-European ancestries, most genetic associations for height observed in populations with non-European ancestry lie in close proximity to, and in linkage disequilibrium with, associations identified within populations of European ancestry.
By increasing our experimental sample size to more than seven times that of previous studies, we have explained up to 40% of the inter-individual variation in height in independent European-ancestry samples using GWS SNPs alone, and more than 90% of $${h}_{{\rm{SNP}}}^{2}$$ across diverse populations when incorporating all common SNPs within 35 kb of GWS SNPs. This result highlights that future investigations of common (MAF > 1%) genetic variation associated with height in many ancestries will be most likely to detect signals within the 7,209 GWS loci that we have identified in the present study. A question for the future is whether rare genetic variants associated with height are also concentrated within the same loci. We provide suggestive evidence supporting this hypothesis from analysing imputed SNPs with 0.1% < MAF < 1% (Supplementary Note 6, Extended Data Fig. 10 and Supplementary Fig. 25). Our results are consistent with findings from a previous study45, which showed across 492 traits a strong colocalization between common and rare coding variants associated with the same trait. Nevertheless, our conclusions remain limited by the relatively low performances of imputation in this MAF regime46,47. Therefore, large samples with whole-genome sequences will be required to robustly address this question. Such datasets are increasingly becoming available48,49,50. Separately, previous studies have reported a significant enrichment of height heritability near genes as compared to inter-genic regions (that is, >50 kb away from the start or stop genomic position of genes)51. Our findings are consistent with but not reducible to that observation, given that up to 31% of GWS SNPs identified in this study lie more than 50 kb away from any gene.
Our study provides a powerful genetic predictor of height based on 12,111 GWS SNPs, for which accuracy reaches around 40% (that is, 80% of $${h}_{{\rm{SNP}}}^{2}$$) in individuals of European ancestries and up to around 10% in individuals of predominantly African ancestries. Notably, we show using a previously developed method38 that LD and MAF differences between European and African ancestries can explain up to around 84% (s.e. 1.5%) of the loss of prediction accuracy between these populations (Methods), with the remaining loss being presumably explained by differences in heritability between populations and/or differences in effect sizes across populations (for example, owing to gene-by-gene or gene-by-environment interactions). This observation is consistent with common causal variants for height being largely shared across ancestries. Therefore, we anticipate that fine-mapping of GWS loci identified in this study, ideally using methods that can accommodate dense sets of signals and large populations with African ancestries, would substantially improve the accuracy of a derived height PGS for populations of non-European ancestry. Our study has a large number of participants with African ancestries as compared with previous efforts. However, we emphasize that further increasing the size of GWASs in populations of non-European ancestry, including those with diverse African ancestries, is essential to bridge the gap in prediction accuracy—particularly as most studies only partially capture the wide range of ancestral diversity both within Africa and globally. Such increased sample sizes would help to identify potential ancestry-specific causal variants, to facilitate ancestry-specific fine-mapping and to inform gene–environment and gene–ancestry interactions. 
Another important finding of our study is to show how individual PGS can be optimally combined with familial information and thereby improve the overall accuracy of height prediction to above 54% in populations of European ancestry.
Although large sample sizes are needed to pinpoint the variants responsible for the heritability of height (and larger samples in multiple ancestries will probably be required to map these at finer scale), the prioritization of relevant genes and gene sets is feasible at smaller sample sizes than that required to account for the common variant heritability. Thus, the sample sizes required for saturation of GWAS are smaller for identifying enriched gene sets, with the identification of genes implicated as potentially causal and mapping of genomic regions containing associated variants requiring successively larger sample sizes. Furthermore, unlike prediction accuracy, prioritization of genes that are likely to be causal and even mapping of associated regions is consistent across ancestries, reflecting the expected similarity in the biological architecture of human height across populations. Recent studies using UKB data predicted that GWAS sample sizes of just over 3 million individuals are required to identify 6,000–7,000 GWS SNPs explaining more than 90% of the SNP-based heritability of height52. We showed empirically that these predictions are downwardly biased given that around 10,000 independent associations are, in fact, required to explain 80–90% of the SNP-based heritability of height in EUR individuals. Discrepancies between observed and predicted levels of saturation could be explained by several factors, such as (i) heterogeneity of SNP effects between cohorts and background ancestries, which may have reduced the statistical power of our study as compared to a homogenous sample like UKB; (ii) inconsistent definitions of GWS SNPs (using COJO in this study versus standard clumping in ref. 52); and, most importantly, (iii) misspecification of the SNP-effects distribution assumed to make these predictions. 
Nevertheless, if these predictions reflect proportional levels of saturation between traits, then we could expect that two- to tenfold larger samples would be required for GWASs of inflammatory bowel disease (×2, that is, n = 10 million), schizophrenia (×7; n = 35 million) or BMI (×10; n = 50 million) to reach a similar saturation of 80–90% of SNP-based heritability.
Our study has a number of limitations. First, we focused on SNPs from the HM3 panel, which only partially capture common genetic variation. However, although a significant fraction of height variance can be explained by common SNPs outside the HM3 SNPs panel, we showed that the extra information (also referred to as ‘hidden heritability’) remains concentrated within GWS loci identified in our HM3-SNP-based analyses (Extended Data Fig. 6). This result underlines the widespread allelic heterogeneity at height-associated loci. Another limitation of our study is that we determined conditional associations using a EUR LD reference (n ≈ 350,000), which is sub-optimal given that around 24% of our discovery sample is of non-European ancestry. We emphasize that no analytical tool with an adequately large multi-ancestry reference panel is at present available to properly address how to identify conditionally independent associations in a multi-ancestry study. Fine-mapping of variants remains a particular challenge when attempted across ancestries in loci containing multiple signals (as is often the case for height). A third limitation of our study is our inability to perform well-powered replication analyses of genetic associations specific to populations with non-European ancestries, owing to the current limited availability of such data. Finally, as with all GWASs, definitive identification of effector genes and the mechanisms by which genes and variants influence phenotype remains a key bottleneck. Therefore, progress towards identifying causal genes from GWAS of height may be achieved by a combination of increasingly large whole-exome sequencing studies, allowing straightforward SNP-to-gene mapping45, the use of relevant complementary data (for example, context-specific eQTLs in relevant tissues and cell types) and the development of computational methods that can integrate these data.
In summary, our study has been able to show empirically that the combined additive effects of tens of thousands of individual variants, detectable with a large enough experimental sample size, can explain substantial variation in a human phenotype. For human height, we show that studies of the order of around 5 million participants of various ancestries provide enough power to map more than 90% (around 100% in populations of European ancestry) of genetic variance explained by common SNPs down to around 21% of the genome. Mapping the missing 5–10% of SNP-based heritability not accounted for in the four non-European ancestries studied here will require additional and directed efforts in the future.
Height has been used as a model trait for the study of human polygenic traits, including common diseases, because of its high heritability and relative ease of measurement, which enable large sample sizes and increased power. Conclusions about the genetic architecture, sample size requirements for additional GWAS discovery and scope for polygenic prediction that were initially made for height have by-and-large agreed with those for common disease. If the results from this study can also be extrapolated to disease, this would suggest that substantially increased sample sizes could largely resolve the heritability attributed to common variation to a finite set of SNPs (and small genomic regions). These variants and regions would implicate a particular subset of genes, regulatory elements and pathways that would be most relevant to address questions of function, mechanism and therapeutic intervention.
## Methods
A summary of the methods, together with a full description of the genome-wide association analyses and follow-up analyses, is provided below. Written informed consent was obtained from every participant in each study, and the study was approved by relevant ethics committees (Supplementary Table 1).
### Quality control checks of individual studies
All study files were checked for quality using the software EasyQC53 that was adapted to the format from RVTESTS (versions listed in Supplementary Table 2)54. The checks performed included allele frequency differences with ancestry-specific reference panels, total number of markers, total number of markers not present in the reference panels, imputation quality, genomic inflation factor and trait transformation. We excluded two studies whose data did not pass our quality checks.
### GWAS meta-analysis
We first performed ancestry-group-specific GWAS meta-analyses of 173 studies of EUR, 56 studies of EAS, 29 studies of AFR, 11 studies of HIS and 12 studies of SAS. Meta-analyses within ancestry groups were performed as described before19,20 using a modified version of RAREMETAL55 (v.4.15.1), which accounts for multi-allelic variants in the data. Study-specific GWASs are described in Supplementary Tables 13. Details about imputation procedures implemented by each study are also given in Supplementary Table 2. We kept in our analyses SNPs with an imputation accuracy ($${r}_{{\rm{INFO}}}^{2}$$) > 0.3, Hardy–Weinberg Equilibrium (HWE) P value (PHWE) > 10−8 and a minor allele count (MAC) > 5 in each study. Next, we performed a fixed-effect inverse-variance-weighted meta-analysis of the summary statistics from all five ancestry-group GWAS meta-analyses, using a custom R script based on the R package meta (see ‘URLs’ section).
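A fixed-effect inverse-variance-weighted meta-analysis pools a SNP's per-group effect estimates by weighting each by the inverse of its squared standard error. The helper below is an illustrative sketch of that standard calculation (it is not the study's custom R script, which handles many additional details such as allele alignment):

```python
import numpy as np

def ivw_meta(betas, ses):
    """Fixed-effect inverse-variance-weighted meta-analysis for one SNP.
    betas, ses: per-group effect estimates and their standard errors.
    Returns the pooled effect estimate and its standard error."""
    betas = np.asarray(betas, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return beta, se
```

With equal standard errors the pooled estimate is simply the mean of the per-group estimates, and the pooled standard error shrinks by a factor of the square root of the number of groups.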
### Hold-out sample from the UK Biobank
We excluded 56,477 UK Biobank (UKB) participants from our discovery GWAS for the following analyses, including quantification of population stratification. More precisely, our hold-out EUR sample consists of 17,942 sibling pairs and 981 trios (two parents and one child) plus all UKB participants with an estimated genetic relationship larger than 0.05 with our set of sibling pairs and trios. We identified 14,587 individuals among these 56,477 UKB participants who were unrelated to each other (defined as a genetic relationship coefficient estimated from HM3 SNPs lower than 0.05) and used their data to quantify the variance explained by SNPs within GWS loci (described below) and the prediction accuracy of PGSs.
### COJO analyses
We performed COJO analyses of each of the five ancestry group-specific GWAS meta-analyses using the software GCTA (v.1.93)6,7. We used default parameters for all ancestry groups except in AFR and HIS, for which we found that default parameters could yield biased estimates of joint SNP effects because of long-range LD. This choice is discussed in Supplementary Note 1. The GCTA-COJO method implements a stepwise model selection that aims at retaining a set of SNPs the joint effects of which reach genome-wide significance, defined in this study as P < 5 × 10−8. In addition to GWAS summary statistics, COJO analyses also require genotypes from an ancestry-matched sample that is used as an LD reference. For all sets of genotypes used as LD reference panels, we selected HM3 SNPs with $${r}_{{\rm{INFO}}}^{2}$$ > 0.3 and PHWE > 10−6. For EUR, we used genotypes at 1,318,293 HM3 SNPs (MAC > 5) from 348,501 unrelated EUR participants in the UKB as our LD reference. For EAS, we used genotypes at 1,034,263 quality-controlled (MAF > 1%, SNP missingness < 5%) HM3 SNPs from a merged panel of n = 5,875 unrelated participants from the UKB (n = 2,257) and Genetic Epidemiology Research on Aging (GERA; n = 3,618). Data from the GERA study were obtained from the database of Genotypes and Phenotypes (dbGaP; accession number: phs000788.v2.p3.c1) under project 15096. For SAS, we used genotypes at 1,222,935 HM3 SNPs (MAC > 5; SNP missingness < 5%) from 9,448 unrelated individuals. For AFR, we used genotypes at 1,007,949 quality-controlled (MAF > 1%, SNP missingness < 5%) HM3 SNPs from a merged panel of 15,847 participants from the Women’s Health Initiative (WHI; n = 7,480), and the National Heart, Lung, and Blood Institute’s Candidate Gene Association Resource (CARe56, n = 8,367).
Both WHI and CARe datasets were obtained from dbGaP (accession numbers: phs000386 for WHI; CARe including phs000557.v4.p1, phs000286.v5.p1, phs000613.v1.p2, phs000284.v2.p1, phs000283.v7.p3 for ARIC, JHS, CARDIA, CFS and MESA cohorts) and processed following the protocol provided by the dbGaP data submitters. After excluding samples with more than 10% missing values and retaining only unrelated individuals, our final LD reference included data from n = 10,636 unrelated AFR individuals. For HIS, we used genotypes at 1,246,763 sequenced HM3 SNPs (MAF > 1%) from n = 4,883 unrelated samples from the Hispanic Community Health Study/Study of Latinos (HCHS/SOL; dbGaP accession number: phs001395.v2.p1) cohorts. Finally, we performed a COJO analysis of the combined meta-analysis of all ancestries (referred to as METAFE in the main text) using 348,501 unrelated EUR participants in the UKB as the reference panel.
To assess whether SNPs detected in non-EUR were independent of signals detected in EUR, we performed another COJO analysis of ancestry groups GWAS by fitting jointly SNPs detected in EUR with those detected in each of the non-EUR GWAS meta-analyses. For each non-EUR GWAS, we performed a single-step COJO analysis only including SNPs identified in that non-EUR GWAS and for which the LD squared correlation ($${r}_{{\rm{LD}}}^{2}$$) with any of the EUR signals (marginally or conditionally GWS) is lower than 0.8 in both EUR and corresponding non-EUR data. Single-step COJO analyses were performed using the --cojo-joint option of GCTA, which does not involve model selection and simply approximates a multivariate regression model in which all selected SNPs on a chromosome are fitted jointly. LD correlations used in these filters were estimated in ancestry-matched samples of the 1000 Genomes Project (1KGP; release 3). More specifically, LD was estimated in 661 AFR, 347 HIS (referred to with the AMR label in 1KGP), 504 EAS, 503 EUR and 489 SAS 1KGP participants. We used the same LD reference samples in these analyses as for our main discovery analysis described at the beginning of the section.
### FST calculation and (stratified) LD score regression
We used two statistics to evaluate whether an EUR LD reference could adequately approximate the LD structure in our trans-ancestry GWAS meta-analysis. The first statistic that we used is the Wright fixation index57, which measures allele frequency divergence between two populations. We used Hudson’s estimator of FST58, as previously recommended59, to compare allele frequencies from our METAFE with those from our EUR GWAS meta-analysis and an independent replication sample from the EBB. The other statistic that we used is the attenuation ratio statistic from the LD score regression methodology. These LD score regression analyses were performed using version 1.0 of the LDSC software and using LD scores calculated from EUR participants in the 1KGP (see ‘URLs’ section). Moreover, we performed a stratified LD score regression analysis to quantify the enrichment of height heritability in 97 genomic annotations curated and described previously40, referred to as the baseline-LD model. Annotation-weighted LD scores used for those analyses were also calculated using data from 1KGP (see ‘URLs’ section).
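Hudson's FST estimator, combined across SNPs as a ratio of averages (the combination recommended in ref. 59), can be sketched as follows. This is an illustrative implementation of the published estimator, not the code used in the study:

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Hudson's per-SNP FST estimator, combined across SNPs as a ratio of
    averages. p1, p2: allele frequency arrays for the two populations;
    n1, n2: their sample sizes (used in the finite-sample correction)."""
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    # Per-SNP numerator: squared frequency difference minus sampling noise
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    # Per-SNP denominator: between-population heterozygosity
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num.sum() / den.sum()
```

Identical allele frequencies give an estimate near zero (slightly negative because of the sampling correction), while strongly divergent frequencies push the estimate towards one.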
### Density of GWS signal and enrichment near OMIM genes
We defined the density of independent signals around each GWS SNP as the number of other independent associations identified with COJO within a 100-kb window on both sides. Therefore, a SNP with no other associations within 100 kb has a density of 0, whereas a SNP colocalizing with 20 other GWS associations within 100 kb has a density of 20. We quantified the standard error of the mean signal density across the genome using a leave-one-chromosome-out jackknife procedure. We then quantified the enrichment of 462 curated OMIM18 genes near GWS SNPs with a large signal density by counting the number of OMIM genes within 100 kb of a GWS SNP, then comparing that number between SNPs with a density of 0 and those with a density of at least 1. The strength of the enrichment was measured using an odds ratio calculated from a 2×2 contingency table: 'presence/absence of an OMIM gene' versus 'density of 0 or larger than 0'. To assess the significance of the enrichment, we simulated the distribution of enrichment statistics for random sets of 462 length-matched genes. We used 22 length classes (<10 kb; between i × 10 kb and (i + 1) × 10 kb, with i = 1,…,9; between i × 100 kb and (i + 1) × 100 kb, with i = 1,…,10; between 1 Mb and 1.5 Mb; between 1.5 Mb and 2 Mb; and >2 Mb) to match OMIM genes with random genes. OMIM genes within a given length class were matched with the same number of non-OMIM genes present in that class. We sampled 1,000 random sets of genes and calculated an enrichment statistic for each of them. The enrichment P value was calculated as the proportion of the 1,000 random sets whose enrichment statistic exceeded that of the OMIM genes. The list of OMIM genes is provided in Supplementary Table 11.
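The density and odds-ratio calculations above can be sketched as follows (an illustration with toy positions, not the study's pipeline; `signal_density` assumes positions on a single chromosome):

```python
import numpy as np

def signal_density(positions, window=100_000):
    """Number of other GWS signals within `window` bp of each signal
    (positions on a single chromosome, any order)."""
    pos = np.sort(np.asarray(positions))
    lo = np.searchsorted(pos, pos - window, side="left")
    hi = np.searchsorted(pos, pos + window, side="right")
    return hi - lo - 1          # exclude the SNP itself

def enrichment_odds_ratio(has_omim_gene, density):
    """Odds ratio from the 2x2 table: OMIM gene within 100 kb (yes/no)
    versus signal density (0 / greater than 0)."""
    omim = np.asarray(has_omim_gene, dtype=bool)
    dense = np.asarray(density) > 0
    a = np.sum(dense & omim)
    b = np.sum(dense & ~omim)
    c = np.sum(~dense & omim)
    d = np.sum(~dense & ~omim)
    return (a * d) / (b * c)

print(signal_density([100_000, 150_000, 500_000, 2_000_000]))  # -> [1 1 0 0]
```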
### Genomic colocalization of GWS SNPs identified across ancestries
We assessed the genomic colocalization between 2,747 GWS SNPs identified in non-EUR (Supplementary Tables 58) and 9,863 GWS SNPs identified in EUR (Supplementary Table 4) by quantifying the proportion of EUR GWS SNPs identified within 100 kb of any non-EUR GWS SNP. We tested the statistical significance of this proportion by comparing it with the proportion of EUR GWS SNPs identified within 100 kb of random HM3 SNPs matched with non-EUR GWS SNPs on 24 binary functional annotations39.
These 24 annotations (for example, coding or conserved) are thoroughly described in a previous study39 and were downloaded from https://alkesgroup.broadinstitute.org/LDSCORE/baselineLD_v2.1_annots/.
Our matching strategy consists of four steps. First, we calibrated a statistical model to predict the probability that a given HM3 SNP is GWS in any of our non-EUR GWAS meta-analyses as a function of its annotations. For that, we used a logistic regression of the non-EUR GWS status (1 = the SNP is GWS in any of the non-EUR GWAS; 0 = otherwise) onto the 24 annotations as regressors. Second, we used that model to predict the probability of being GWS in non-EUR. Third, we used the predicted probabilities to sample (with replacement) 1,000 random sets of 2,747 SNPs. Finally, we estimated the proportion of EUR GWS SNPs within 100 kb of SNPs in each sampled set. We report in the main text the mean and s.d. over these 1,000 proportions.
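A minimal sketch of this matching strategy follows (our illustration with simulated annotations; the plain gradient-ascent logistic fit below stands in for whatever software was actually used):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression (stand-in for step 1)."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w += lr * X1.T @ (y - p) / len(y)
    return w

def matched_sets(X, y, n_sets=1000):
    """Steps 2-3: predict P(GWS) from annotations, then sample
    (with replacement) n_sets random sets of sum(y) SNPs with
    probability proportional to the predicted P(GWS)."""
    w = fit_logistic(X, y)
    X1 = np.column_stack([np.ones(len(X)), X])
    p = 1.0 / (1.0 + np.exp(-X1 @ w))
    k = int(y.sum())
    return [rng.choice(len(X), size=k, replace=True, p=p / p.sum())
            for _ in range(n_sets)]

# toy data: 2 binary annotations for 500 hypothetical HM3 SNPs
X = rng.integers(0, 2, size=(500, 2))
y = (rng.random(500) < 0.1 + 0.3 * X[:, 0]).astype(float)  # annotation 0 raises GWS odds
sets = matched_sets(X, y, n_sets=10)
```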
To validate our matching strategy, we compared the mean value of each of these 24 annotations (for example, proportion of coding SNPs) between non-EUR GWS SNPs and each of the 1,000 random sets of SNPs, using a Fisher’s exact test. For each of the 24 annotations, both the mean and median P value were greater than 0.6 and the proportion of P values < 5% was less than 1%, suggesting no significant differences in the distribution of these 24 annotations between non-EUR GWS SNPs and matched SNPs.
### Replication analyses
To assess the replicability of our results, we tested whether the correlation ρb of estimated SNP effects between our discovery GWAS and our replication sample of 49,160 participants of the EBB was statistically different from 1. We used the estimator of ρb from a previous study60, which accounts for sampling errors in both discovery and replication samples. Standard errors were calculated using a leave-one-SNP-out jackknife procedure. We quantified the correlation of marginal as well as joint SNP effects. Joint SNP effects in our replication sample were obtained by performing a single-step COJO analysis of GWAS summary statistics from our EBB sample, using the same LD reference as in the discovery GWAS. Correlations of SNP effects were calculated after correcting SNP effects for winner’s curse using a previously described method12. We provide the R scripts used to apply these corrections and estimate the correlation of SNP effects (see ‘URLs’ section). The expected proportion, E[P], of sign-consistent SNP effects between discovery and replication was calculated using the quadrant probability of a standard bivariate Gaussian distribution with correlation E[ρb], the expected correlation between estimated SNP effects in the discovery and replication samples:
$$E[P]=\frac{1}{2}+\frac{{\sin }^{-1}(E[{\rho }_{{\rm{b}}}])}{\pi },$$
(1)
where sin−1 denotes the inverse of the sine function and E[ρb] the expectation of the ρb statistic under the assumption that the true SNP effects are the same across discovery and replication cohorts. E[ρb] was calculated as
$$E[\,{\rho }_{{\rm{b}}}]=\,\frac{{\sigma }_{{\rm{b}}}^{2}}{\sqrt{\left({\sigma }_{{\rm{b}}}^{2}\,+\,[1-{\sigma }_{{\rm{b}}}^{2}{h}_{{\rm{d}}}]/({N}_{{\rm{d}}}{h}_{{\rm{d}}})\,\right)\left({\sigma }_{{\rm{b}}}^{2}\,+\,[1-{\sigma }_{{\rm{b}}}^{2}{h}_{{\rm{r}}}]/({N}_{{\rm{r}}}{h}_{{\rm{r}}})\right)}},$$
(2)
where Nd and Nr denote the sizes of the discovery and replication samples, respectively; hd and hr the average heterozygosity under Hardy–Weinberg equilibrium (that is, 2 × MAF × (1 − MAF)) across GWS SNPs in the discovery and replication samples, respectively; and $${{\rm{\sigma }}}_{{\rm{b}}}^{2}$$ the mean per-SNP variance explained by GWS SNPs, which we calculated (as per ref. 60) as the sample variance of estimated SNP effects in the discovery sample minus the median squared standard error.
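Equations (1) and (2) translate directly into code; the sketch below uses illustrative inputs, not the study's actual sample sizes or per-SNP variances:

```python
import numpy as np

def expected_rho_b(sigma2_b, N_d, N_r, h_d, h_r):
    """Equation (2): expected correlation of estimated SNP effects when the
    true effects are identical in discovery and replication.
    h_d, h_r: mean 2*MAF*(1-MAF) over GWS SNPs; sigma2_b: mean per-SNP
    variance explained."""
    noise_d = (1 - sigma2_b * h_d) / (N_d * h_d)   # sampling noise, discovery
    noise_r = (1 - sigma2_b * h_r) / (N_r * h_r)   # sampling noise, replication
    return sigma2_b / np.sqrt((sigma2_b + noise_d) * (sigma2_b + noise_r))

def expected_sign_consistency(rho_b):
    """Equation (1): quadrant probability of a standard bivariate normal."""
    return 0.5 + np.arcsin(rho_b) / np.pi

# illustrative inputs, not the study's actual values
rho = expected_rho_b(sigma2_b=1e-5, N_d=4_000_000, N_r=49_160, h_d=0.4, h_r=0.4)
print(expected_sign_consistency(rho))
```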
### Variance explained by GWS SNPs and loci
We estimated the variance explained by GWS SNPs using the genetic relationship-based restricted maximum likelihood (GREML) approach implemented in GCTA1,7. This approach involves two main steps: (i) calculation of genetic relationship matrices (GRMs); and (ii) estimation of the variance components corresponding to each of these matrices using a REML algorithm. We partitioned the genome into two sets containing GWS loci on the one hand and all other HM3 SNPs on the other hand. GWS loci were defined as non-overlapping genomic segments containing at least one GWS SNP and such that GWS SNPs in adjacent loci are more than 2 × 35 kb away from each other (that is, a 35-kb window on each side). We then calculated a GRM based on each set of SNPs and estimated jointly the variance explained by GWS loci alone and that explained by the rest of the genome. We performed these analyses in multiple samples independent of our discovery GWAS, which include participants of diverse ancestry. Details about the samples used for these analyses are provided below. We extended our analyses to also quantify the variance explained by GWS loci using alternative definitions based on window sizes of 0 kb and 10 kb around GWS SNPs (Supplementary Figs. 18 and 19).
We also repeated our analyses using a random set of 12,111 SNPs matched with GWS SNPs on MAF and LD. Loci for these 12,111 random SNPs were defined similarly as for GWS loci. To match random SNPs with GWS SNPs on MAF and LD, we first created 28 MAF-LD classes of HM3 SNPs (7 MAF classes × 4 LD score classes). MAF classes were defined as <1%; between 1% and 5%; between 5% and 10%; between 10% and 20%; between 20% and 30%; between 30% and 40%; and between 40% and 50%. LD score classes were defined using quartiles of the HM3 LD score distribution. We next matched GWS SNPs in each of the 28 MAF-LD classes, with the same number of SNPs randomly sampled from that MAF-LD class.
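The 28-class MAF-LD matching can be sketched as follows (toy data; the class edges follow the definitions above):

```python
import numpy as np

rng = np.random.default_rng(2)

def maf_ld_matched_sample(maf, ldscore, is_gws):
    """Within each of the 28 MAF x LD-score classes, sample as many
    random SNPs as there are GWS SNPs in that class."""
    maf_edges = [0.01, 0.05, 0.10, 0.20, 0.30, 0.40]       # 7 MAF classes
    ld_edges = np.quantile(ldscore, [0.25, 0.50, 0.75])    # 4 LD-score classes
    cls = np.digitize(maf, maf_edges) * 4 + np.digitize(ldscore, ld_edges)
    picked = []
    for c in np.unique(cls[is_gws]):
        in_class = np.flatnonzero(cls == c)
        n_needed = int(np.sum(is_gws & (cls == c)))
        picked.extend(rng.choice(in_class, size=n_needed, replace=False))
    return np.array(picked)

# toy data: 10,000 hypothetical HM3 SNPs, 500 of them GWS
maf = rng.uniform(0.001, 0.5, size=10_000)
ldscore = rng.gamma(2.0, 50.0, size=10_000)
gws = np.zeros(10_000, dtype=bool)
gws[rng.choice(10_000, size=500, replace=False)] = True
matched = maf_ld_matched_sample(maf, ldscore, gws)
```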
### Prediction analyses
Height was first mean-centred and scaled to variance 1 within each sex. We quantified the prediction accuracy of height predictors as the difference between the variance explained by a linear regression model of sex-standardized height on the height predictor, age, 20 genotypic principal components and study-specific covariates (full model) and that explained by a reduced linear regression model not including the height predictor. Genetic principal components were calculated from LD-pruned HM3 SNPs ($${r}_{{\rm{LD}}}^{2}\,$$ < 0.1). We used the height of siblings or parents as a predictor of height, as well as various polygenic scores (PGSs) calculated as weighted sums of height-increasing alleles. The direction and magnitude of these weights were determined by estimated SNP effects from our discovery GWAS meta-analyses. No calibration of tuning parameters in a validation sample was performed.
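The incremental-R² definition of prediction accuracy can be sketched with ordinary least squares (toy data; a real analysis would include all covariates listed above):

```python
import numpy as np

def incremental_r2(y, covariates, predictor):
    """Prediction accuracy as R2(full) - R2(reduced), both fitted by OLS."""
    def r2(X):
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()
    full = r2(np.column_stack([covariates, predictor]))
    reduced = r2(covariates)
    return full - reduced

# toy data: standardized height driven by a hypothetical PGS plus age
rng = np.random.default_rng(1)
n = 5000
pgs = rng.normal(size=n)
age = rng.normal(size=n)
height = 0.5 * pgs + 0.1 * age + rng.normal(size=n)
print(incremental_r2(height, age[:, None], pgs[:, None]))   # roughly 0.2
```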
#### Between-family prediction
We analysed two classes of PGS. The first class is based on SNPs ascertained using GCTA-COJO. We applied GCTA-COJO to ancestry-specific and cross-ancestry GWAS meta-analyses using an ancestry-matched and an EUR LD reference, respectively. We compared PGSs based on SNPs ascertained at different significance thresholds: P < 5 × 10−8 (GWS; reported in the main text), P < 5 × 10−7, P < 5 × 10−6 and P < 5 × 10−5. For all COJO-based PGSs, we used estimated joint effects to calculate the PGS. The second class of PGS uses weights for all HM3 SNPs obtained from applying the SBayesC method28 to ancestry-specific and cross-ancestry GWAS meta-analyses with ancestry-matched and EUR-specific LD matrices, respectively. SBayesC is a Bayesian PGS method implemented in the GCTB software (v.2.0), which uses the same prior as the LDpred method61,62. In brief, SBayesC models the distribution of joint effects of all SNPs using a two-component mixture distribution. The first component is a point-mass Dirac distribution on zero and the second a Gaussian distribution (for each SNP) with mean 0 and a variance parameter to estimate. Full LD matrices (that is, not sparse) were calculated using GCTB across around 250 overlapping (50% overlap) blocks of around 8,000 SNPs (average size around 20 Mb). These LD matrices were calculated using the same sets of genotypes used for COJO analyses (described above). We ran SBayesC in each block separately with 100,000 Markov chain Monte Carlo (MCMC) iterations. In each run, we initialized the proportion of causal SNPs in a block at 0.0001 and the heritability explained by SNPs in the block at 0.001. Posterior effects of SNPs present in two blocks were meta-analysed using inverse-variance meta-analysis.
Prediction accuracy was quantified in 61,095 unrelated individuals from three studies, including 33,001 participants of the UKB who were not included in our discovery GWAS (that is, 14,587 EUR; 9,257 SAS; 6,911 AFR and 2,246 EAS; Methods section ‘Samples used for prediction and estimation of variance explained’); 14,058 EUR participants from the Lifelines cohort study; and 8,238 HIS and 5,798 AFR participants from the PAGE study.
#### Within-family prediction
The prediction accuracy of siblings' height was assessed in 17,942 unrelated sibling pairs from the UKB. Those pairs were determined by intersecting the list of UKB sibling pairs determined by Bycroft et al.63 with a list of genetically determined European-ancestry participants of the UKB also described previously3. We then filtered the resulting list so that the SNP-based genetic relationship between members of different families was smaller than 0.05. The prediction accuracy of parental height (each parent and their average) was assessed in 981 unrelated trios obtained, as described above, by crossing information from Bycroft et al.63 (calling of relatives) with that from Yengo et al.3 (calling of European-ancestry participants). We quantified the within-family variance explained by a PGS as the squared correlation of the height difference between siblings with the PGS difference between siblings. We describe in Supplementary Note 4 how familial information and PGSs were combined to generate a single predictor.
### Samples used for prediction and estimation of variance explained
We quantified the accuracy of a PGS based on GWS SNPs, as well as the variance explained by SNPs within GWS loci, in eight different datasets independent of our discovery GWAS meta-analyses. These datasets include two samples of EUR from the UKB (n = 14,587) and the Lifelines study (n = 14,058), two samples of AFR from the UKB (n = 6,911) and the PAGE study (n = 8,238), two samples of EAS from the UKB (n = 2,246) and the China Kadoorie Biobank (CKB; n = 47,693), one sample of SAS from the UKB (n = 9,257) and one sample of HIS from the PAGE study (n = 4,939). Analyses were adjusted for age, sex, 20 genotypic principal components and study-specific covariates (for example, recruitment centres). Genotypes of EUR UKB participants were imputed to the Haplotype Reference Consortium (HRC) and to a combined reference panel including haplotypes from the 1KG Project and the UK10K Project. To improve variant coverage in non-EUR participants of the UKB, we re-imputed their genotypes to the 1KG reference panel, as described previously38. Lifelines samples were imputed to the HRC panel. PAGE and CKB samples were imputed to the 1KG reference panel. Standard quality-control filters ($${r}_{{\rm{INFO}}}^{2}$$ > 0.3, PHWE > 10−6 and MAC > 5) were applied to imputed genotypes in each dataset.
### Contribution of LD and MAF to the loss of prediction accuracy
We defined the EUR-to-AFR relative accuracy as the ratio of the prediction accuracy in an AFR sample over that in a EUR sample. We used a previously published method38 to quantify the expectation of that relative accuracy under the assumption that causal variants and their effects are shared between EUR and AFR, whereas MAF and LD structures can differ. In brief, this method contrasts LD and MAF patterns within 100-kb windows around each GWS SNP and uses them to predict the expected loss of accuracy. As previously described38, we used genotypes from 503 EUR and 661 AFR participants of the 1KGP as a reference sample to estimate ancestry-specific MAF and LD correlations between GWS SNPs and SNPs in their close vicinity, and defined candidate causal variants as any sequenced SNP with an $${r}_{{\rm{LD}}}^{2}$$ > 0.45 with a GWS SNP within that 100-kb window. Standard errors were calculated using a delta-method approximation as previously described38.
### Down-sampled GWAS analyses
In addition to our EUR GWAS meta-analysis and our trans-ancestry meta-analysis (METAFE), we re-analysed five down-sampled GWASs as shown in Table 2. These down-sampled GWASs include various iterations of previous efforts of the GIANT consortium and have sample sizes varying between around 130,000 and 2.5 million (EUR participants from 23andMe). To ensure sufficient genomic coverage of HM3 SNPs, we imputed GWAS summary statistics from Lango Allen et al.19, Wood et al.20 and Yengo et al.3 with ImpG-Summary (v.1.0.1)64 using haplotypes from the 1KGP as a LD reference. GWAS summary statistics from Lango Allen et al. only contain P values (P), height-increasing alleles and per-SNP sample sizes (N). Therefore, we first calculated Z-scores (Z) from P values assuming that Z-scores are normally distributed, then derived SNP effects (β) and corresponding standard errors (s.e.) using linear regression theory as $$\beta =Z/\sqrt{2{\rm{MAF}}\times (1-{\rm{MAF}})\times \left(N+{Z}^{2}\right)}$$ and SE = β/Z. Imputed GWAS summary statistics from these three studies are made publicly available on the GIANT consortium website (see ‘URLs’ section). We next performed a COJO analysis of all down-sampled GWASs using genotypes of 348,501 unrelated EUR participants in the UKB as a LD reference panel, as for our METAFE and EUR GWAS meta-analysis.
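The conversion from P values to effect sizes can be sketched with the standard-library normal quantile function (inputs below are illustrative, not taken from the actual summary statistics):

```python
from math import sqrt
from statistics import NormalDist

def beta_se_from_p(p, n, maf):
    """Recover (beta, SE) of the height-increasing allele for a standardized
    trait from a two-sided P value, per-SNP sample size N and MAF:
    Z = Phi^-1(1 - P/2), beta = Z / sqrt(2*MAF*(1-MAF)*(N + Z^2)), SE = beta/Z."""
    z = NormalDist().inv_cdf(1 - p / 2)   # Z > 0 for the increasing allele
    beta = z / sqrt(2 * maf * (1 - maf) * (n + z * z))
    return beta, beta / z

# illustrative values
beta, se = beta_se_from_p(p=5e-8, n=250_000, maf=0.3)
print(beta, se)   # beta/se recovers |Z|
```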
### Gene prioritization using SMR
We used SMR to identify genes whose expression could mediate the effects of SNPs on height. SMR analyses were performed using the SMR software v.1.03. We used publicly available eQTLs identified in two large eQTL studies; namely, the GTEx65 v.8 and eQTLgen studies (see ‘URLs’ section). To ensure that our SMR results robustly reflect causal or pleiotropic effects of height-associated SNPs on gene expression, we only report significant SMR results (that is, P < 5 × 10−8) that are not rejected by the heterogeneity in dependent instruments (HEIDI) test (that is, PHEIDI > 0.01). The significance threshold for the HEIDI test was chosen on the basis of recommendations from another study66.
### Selection of OMIM genes
To generate a list of genes that are known to underlie syndromes of abnormal skeletal growth, we queried the Online Mendelian Inheritance in Man database (OMIM; https://www.omim.org/). From July 2019 to August 2020, we performed queries using search terms of “short stature”, “tall stature”, “overgrowth”, “skeletal dysplasia” and “brachydactyly.” We then used the free text descriptions in OMIM to manually curate the resulting combined list of genes, as well as genes in our earlier list from Wood et al.20 and all genes listed as causing skeletal disease in an online endocrine textbook (https://www.endotext.org/, accessed September 2020). For short stature, we only included genes that underlie syndromes in which short stature was either consistent (less than −2 s.d. in the vast majority of patients with data recorded), or present in multiple families or sibships and accompanied by (a) more severe short stature (−3 s.d.), (b) presence of skeletal dysplasia (beyond poor bone quality/fractures); or (c) presence of brachydactyly, shortened digits, disproportionate short stature or limb shortening (not simply absence of specific bones). We removed genes underlying syndromes in which short stature was likely to be attributable to failure to thrive, specific metabolic disturbances, intestinal failure or enteropathy and/or very severe disease (for example, early lethality or severe neurological disease). For tall stature or overgrowth, we only included genes underlying syndromes in which tall stature was consistent (more than +2 s.d. in the vast majority of patients with data recorded) or present in multiple families or sibships and accompanied by either (a) more severe tall stature (>+3 s.d.) or (b) arachnodactyly. For brachydactyly, we required more than only fifth finger involvement, and that brachydactyly be either consistent (present in the vast majority of patients) or accompanied by consistent short stature or other skeletal dysplasias. 
For skeletal dysplasias, we only considered genes that underlie syndromes in which the skeletal dysplasia involved long bones or the spine and was accompanied by short stature, brachydactyly or limb or digit shortening. We also included all genes in a list we generated in Lango Allen et al.19, which was curated using similar criteria. The resulting list contained 536 genes, of which 462 (Supplementary Table 11) are autosomal on the basis of annotation from PLINK (https://www.cog-genomics.org/static/bin/plink/glist-hg19).
### URLs
GIANT consortium data files: https://portals.broadinstitute.org/collaboration/giant/index.php/GIANT_consortium_data_files. Analysis script for within- and across-ancestry meta-analysis: https://github.com/loic-yengo/ScriptsForYengo2022_HeightGWAS/blob/main/run-meta-analyses-within-ancestries.R and https://github.com/loic-yengo/ScriptsForYengo2022_HeightGWAS/blob/main/run-meta-analyses-across-ancestries.R. Analysis script for correction of winner’s curse: https://github.com/loic-yengo/ScriptsForYengo2022_HeightGWAS/blob/main/WC_correction.R. Genotypes from 1KG: https://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/. eQTL data for SMR: GTEx v.8: https://yanglab.westlake.edu.cn/data/SMR/GTEx_V8_cis_eqtl_summary.html; eQTLgen: https://www.eqtlgen.org/cis-eqtls.html. Annotation-weighted LD scores for stratified LD score regression analyses: https://alkesgroup.broadinstitute.org/LDSCORE/LDSCORE/. LDSC software: https://github.com/bulik/ldsc.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
# Help me decompress this old file
A long time ago in a basement far far away I created my own operating system. Unfortunately I made a foolish mistake when designing the file system: I chose a single byte for representing file sizes. Since that means no file can be bigger than 255 bytes, I was forced to also invent my own compression algorithm. Just now I came across such an old file called "message" compressed with said algorithm; unfortunately I don't remember how to decompress it.
Can you help me?
Here is a hexdump of the file:
> hexdump -C message
00000000 5f 20 5c 5f 2f 20 7c 0a 2a 2a 22 22 22 2a 22 23 |_ \_/ |.**"""*"#|
00000010 22 0a 29 22 22 22 22 22 09 23 22 23 22 22 22 89 |".)""""".#"#""".|
00000020 30 32 64 64 32 43 41 12 24 42 42 42 42 22 12 24 |02dd2CA.$BBBB".$|
00000030 42 62 32 64 64 42 83 2c 7c 46 7e 24 30 66 7e 39 |Bb2ddB.,|F~$0f~9| 00000040 66 62 7e 46 0a c1 0a 09 29 23 22 22 22 22 2a 22 |fb~F....)#""""*"| 00000050 29 8a 40 63 42 41 12 64 64 32 43 63 c1 49 42 42 |).@cBA.dd2Cc.IBB| 00000060 24 30 46 46 7e 69 24 |$0FF~i\$|
Since I never got around to adding any other encoding I am pretty sure the original file was an ASCII-encoded "plain" text.
Hint 1:
I remember the first eight bytes to be somehow special.
Hint 2:
The original file contains English text; however, it uses none of the following ASCII characters: abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ.
• From the back of your memory: can it be assumed that 'message' was an ASCII-encoded plain text? Dec 2 '17 at 7:52
• ( and welcome to the site! Yes, we love such puzzles — the quality can often only be judged once it is solved. Best check out some high and some low-voted examples to understand the reasons. And don't take negative votes personally if they hit you. ) Dec 2 '17 at 7:58
• Good question, thank you. Yes, that can be assumed. I will add that to the description. Dec 2 '17 at 13:19
• It is hard to believe that this is indeed some kind of compression, since the entropy is so low... Are you sure it is not more a kind of encoding or cipher? – Ctx Dec 3 '17 at 9:23
• Your statement that "the original file contains English text it however use none of the following ASCII characters..." and then lists the entire English alphabet seems to compromise your claim that this example file is the result of compression. Even if it is compression there are so many possibilities that it would take an extensive brute force effort to find it...especially if the original file is not "sensible English text". – Dr t Dec 3 '17 at 18:37
I am not there yet, but I have a strong suspicion that

when the file is output in binary coding, i.e. " " for 0 and "#" for 1, and wrapped correctly, there will be some text written with those two characters (space and #).
See for example:
However, if this is right, then
I do not consider this a compression: it may be true that the resulting file has more bytes, but "compression" is related to information content per byte, and the compressed file contains very little information per byte as opposed to simple ASCII plaintext.
EDIT: Solved it ;) (stanri's comment helped here, thanks!)
The first 8 bytes are the charset: { "_", " ", "\", "_", "/", " ", "|", "\n" }. Then, for each following byte, print charset character 0 if bit 0 is set, character 1 if bit 1 is set, character 2 if bit 2 is set, and so on.
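This decoding rule can be sketched in a few lines of Python, with the bytes transcribed from the hexdump above (the decoded ASCII art is printed, not spoiled here):

```python
# all 103 bytes of "message", transcribed from the hexdump
data = bytes.fromhex(
    "5f205c5f2f207c0a2a2a2222222a2223"
    "220a2922222222220923222322222289"
    "30326464324341122442424242221224"
    "426232646442832c7c467e2430667e39"
    "66627e460ac10a092923222222222a22"
    "298a40634241126464324363c1494242"
    "243046467e6924"
)

charset = [chr(b) for b in data[:8]]   # first 8 bytes: "_ \_/ |" plus newline
out = []
for b in data[8:]:                     # one output character per set bit
    for bit in range(8):
        if b & (1 << bit):
            out.append(charset[bit])
print("".join(out))                    # prints the ASCII-art message
```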
This results in
Is it a compression or not?
Honestly, I am not sure... I tend to say, it is indeed, regarding the ascii art. But not regarding the message itself.
• I agree that this would not be compression. While the algorithm in question is an encoding, it does do compression. I hope you can solve it! I am curious for your verdict. Let me know if you want more tips. Dec 3 '17 at 23:29
• @raggy Hm, some hint on how close my suspicions in this answer are would be helpful ;) – Ctx Dec 4 '17 at 12:07
• You are not entirely on the wrong track looking at individual bits. However, I said no guessing and you guessed two characters and some wrapping. Maybe read hint 1 again. Dec 4 '17 at 14:07
• My guess is something like this patorjk.com/software/taag/… and the first 8 bytes specify the size of the width/height, and the characters to use for the writing. Dec 4 '17 at 15:25
• @stanri You put me on the right track, thanks ;) – Ctx Dec 4 '17 at 16:07
# Fourier integral/ Fourier transformation of an oscillatory function with FFT
$f(x) = \cos(x^2)$ and $g(k) = \sqrt\pi \cos((\pi k)^2 - \pi/4)$ are a Fourier pair.
I want to reproduce $g(k)$ by Fourier integrating $f(x)$ using FFT, i.e.
approximating Integrate[ f(x) * exp(2 pi * ikx), {x, -inf, inf} ]
with Sum[ fn * exp(2 pi * ik x_n), {n, 0, N-1} ] * Delta_x
However, the result agrees with $g(k)$ only over very small $k$ ranges, if it agrees at all (the same code works well for smooth Fourier pairs, e.g. Gaussian functions). I guess the problem is choosing appropriate values for N and Delta_x. Are there any established rules for how to choose them? Where can I find related topics in the literature? (I've read Numerical Recipes section 13.9 but it does not seem to solve my problem.)
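For concreteness, here is a numpy sketch of the Riemann sum written as a DFT (grid parameters are illustrative; the sanity check uses the Gaussian pair, which is the case that works):

```python
import numpy as np

def fourier_integral(fvals, dx):
    """Approximate G(k) = Integrate[f(x) exp(2 pi i k x), x] by a Riemann sum
    on the grid x_n = (n - N/2)*dx; returns k_m = m/(N*dx) in FFT ordering."""
    N = len(fvals)
    k = np.fft.fftfreq(N, d=dx)
    phase = (-1.0) ** np.arange(N)     # exp(-i pi m), from the -N/2*dx grid offset
    G = dx * phase * N * np.fft.ifft(fvals)
    return k, G

# sanity check on the smooth pair exp(-pi x^2) <-> exp(-pi k^2)
N, dx = 4096, 0.01
x = (np.arange(N) - N // 2) * dx
k, G = fourier_integral(np.exp(-np.pi * x**2), dx)
print(np.max(np.abs(G.real - np.exp(-np.pi * k**2))))   # tiny for the Gaussian
```

One consideration when choosing N and Delta_x for cos(x²) (a back-of-envelope Nyquist argument, not an established rule): the local frequency of cos(x²) grows as x/π cycles per unit, so at the truncation edge x = N·Δx/2 it must stay below the Nyquist frequency 1/(2·Δx), giving roughly N·Δx² ≲ π.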
What is a 'Fourier pair'? – copper.hat May 9 '12 at 16:32
@copper: one's a Fourier transform of the other, apparently... – J. M. May 9 '12 at 16:36
# Name of property: $a \circ (a \circ b) = b$
How do you describe an operation like this?
$$a \circ (a \circ b) = b$$
For example, XOR is like this:
$$a \oplus a \oplus b = b$$
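For XOR the property can be verified exhaustively on a small domain (a quick sketch):

```python
# exhaustive check of a ∘ (a ∘ b) = b for ∘ = XOR on all 8-bit values
for a in range(256):
    for b in range(256):
        assert a ^ (a ^ b) == b
print("XOR satisfies a ^ (a ^ b) == b on all 8-bit pairs")
```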
• It might be more convenient to write $a\circ b$ instead of $f(a,\,b)$. (Also, the second equation is redundant.) As for the question: In XOR's case this result follows from the operator being associative and self-inverse and having an identity element. I'm not aware of any name for the final result, which is a shame because you might care about functions that get it for other reasons. – J.G. Aug 7 '19 at 20:13
• Is the property you have in mind that $$f(i, f(i, x)) = x$$ holds for some fixed $i$ and all $x$ in the domain $X$ of $f$? If so, the property is just that $f(i, \,\cdot\,)$ is an involution. en.wikipedia.org/wiki/Involution_(mathematics) If the requirement is that it holds for all $i$ and all $x$ in $X$, then one could say that $f(i, \,\cdot\,)$ is an involution for all $i$, or that the image of the curried function $X \to (X \to X)$, $i \mapsto f(i, \,\cdot\,)$, is contained in the set of involutions of $X$---but I don't know a single term to describe this property. – Travis Willse Aug 7 '19 at 20:14
• Informally, if $f : X \times X \to X$ satisfies this for all $i,x \in X$ then I might view $f$ as being "an involutive action of $X$ on itself". Slightly more generally, if $f : Y \times X \to X$ satisfies the identity I might say "$f$ is an involutive action of $Y$ on $X$". – Daniel Schepler Aug 7 '19 at 20:14
• Do you want $f$ to also have this property in the first argument? i.e., $f(f(i,x),x) = i$? – eyeballfrog Aug 7 '19 at 20:16
• The new notation either implies associativity or is ambiguous. I suggest using parentheses, i.e., $a \circ (a \circ b) = b$ to make the intent clear. – Travis Willse Aug 7 '19 at 20:33
## 2 Answers
In "A guide to self-distributive quasigroups, or latin quandles," David Stanovský calls such an operator left involutory.
A binary algebra $$(A,*)$$ is called left involutory (or left symmetric) if $$x * (x * y) = y$$ (hence we have unique left division with $$x \backslash y = x ∗ y$$).
It's not a very popular term—it currently only has 8 hits on Google—but it makes sense (since it means that the section function $$(x * {})$$ is involutory), and there are a couple of other papers which use the same terminology with the same meaning (and at least one of these papers cites Stanovský's paper).
So if you're going to write a paper using this term, I suggest using the term "left involutory," and consider citing Stanovský's paper when you do.
(Fun fact: one of the Google search results for "left involutory" caught my eye because it's a chat log from an IRC chat room that I speak in regularly! I couldn't resist mentioning this.)
If you can rearrange like this, $$a \circ (a \circ b) = (a \circ a) \circ b$$, then the identity says $$(a \circ a) \circ b = b$$ for every $$b$$, so $$a \circ a$$ acts as a left identity. If the operation has a (two-sided) identity element, this means $$a$$ is its own inverse.
• If we define an operation on any set $X$ by $x \circ y := y$ then we have $a \circ (a \circ b) = a \circ b = b$, so every element is a left identity but (if $X$ has at least two elements) no right identity, and so no two-sided identity. Since $a \circ (b \circ a) = a$ and $b \circ (a \circ b) = b$, every element is an inverse of every other element (where we mean inverse in the sense of regarding $(S, \circ)$ as a regular semigroup). – Travis Willse Aug 8 '19 at 0:48
Pure Mathematics 1 by Backhouse
Ex:2f q13
The real function f, defined for all $\displaystyle x \in \mathbb{R}$, is said to be multiplicative if, for all $\displaystyle x \in \mathbb{R}$, $\displaystyle y \in \mathbb{R}$,
f(xy) = f(x)f(y)
Q: Prove that if f is multiplicative function then
a) either f(0) =0 or f(x)=1
b) either f(1) = 1 or f(x) = 0
c) $\displaystyle f(x)^{n}$ = $\displaystyle \left \{ f(x) \right \}^{n}$ for all positive integers n.
Give example of a non- constant multiplicative function
My attempt:
I have tried to apply what I have learned from the chapter on functions in this book, but there is no section on multiplicative functions; I have only learnt about odd & even functions. I tried but failed, please help.
let f(x) = even functions thus f(a) = f(-a)
f(y) = odd functions thus - f(b) = f(-b)
f(ab) = f(a)f(b)
= f(-a) (-f(b)) = -f(-a)f(-b) = -f(a)f(-b)
Hence I am stuck, please help. I have tried to look on YouTube and search for multiplicative functions, but it seems to be advanced number theory. I don't have knowledge in that area, so please can an expert give an answer that does not use advanced mathematical symbols, as my maths level is high school.
Use the information you are given.
\begin{align}&\text{Given} & f(xy) &= f(x)f(y) \\
&\text{set $y=0$ so that} & f(0) &= f(x)f(0) \\
&& f(0) - f(0)f(x) &= 0 \\
&& f(0)\big(1-f(x)\big) &= 0
\end{align}
Follow a similar idea for the second part.
For the third part, I think the question should say $$f(x^n) = f^n(x)$$ or $$f(x^n) = \{f(x)\}^n$$ which mean the same thing. Your proof will probably be via induction.
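For the final part, one family of non-constant multiplicative functions is the power functions, e.g. $$f(x) = x^2$$, since $$f(xy) = (xy)^2 = x^2 y^2 = f(x)f(y)$$. A quick numeric sanity check (just a sketch in Python):

```python
import random

def f(x):
    return x * x   # candidate non-constant multiplicative function

# check f(xy) = f(x)f(y) on 1000 random real pairs (up to float rounding)
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs, rhs = f(x * y), f(x) * f(y)
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(rhs))
```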
Originally Posted by Archie
Use the information you are given.
\begin{align}&\text{Given} & f(xy) &= f(x)f(y) \\
&\text{set $y=0$ so that} & f(0) &= f(x)f(0) \\
&& f(0) - f(0)f(x) &= 0 \\
&& f(0)\big(1-f(x)\big) &= 0
\end{align}
Follow a similar idea for the second part.
For the third part, I think the question should say $$f(x^n) = f^n(x)$$ or $$f(x^n) = \{f(x)\}^n$$ which mean the same thing. Your proof will probably be via induction.
thank you
so I try to use proof by induction
prove
$\displaystyle f(x^n)=\left \{ f(x) \right \}^n$ for all positive integers n
first show that the step hold for n= 1
$\displaystyle f(x^1)=\left \{ f(x) \right \}^1$
$\displaystyle \therefore$
$\displaystyle f(x)=\left \{ f(x) \right \}$
$\displaystyle f(x) = f(x)$
Suppose it holds for n = k
$\displaystyle f(x^n)=\left \{ f(x) \right \}^n$
$\displaystyle f(x^k)=\left \{ f(x) \right \}^k$
$\displaystyle f(x^1 . x^{k-1})=\left \{ f(x) \right \}^{1} . \left \{ f(x) \right \}^{k-1}$
$\displaystyle f(x^1 . x^{k-1})= f(x)^1 . f(x)^{k-1}$
$\displaystyle f(x^1) . f(x^{k-1})= f(x)^1 . f(x)^{k-1}$
$\displaystyle f(x) . f(x^{k-1})= f(x) . f(x)^{k-1}$
$\displaystyle ( f(x) . f(x^{k-1})) - (f(x) . f(x)^{k-1})=0$
$\displaystyle f(x)[( f(x^{k-1})) - ( f(x)^{k-1})]=0$
$\displaystyle f(x)=0$ or $\displaystyle [( f(x^{k-1})) - ( f(x)^{k-1})]=0$
$\displaystyle f(x) = 0$ is invalid since n represents all positive integers
$\displaystyle f(x^{k-1}) = f(x)^{k-1}$
since $\displaystyle f(x^{k-1}) = f(x)^{k-1}$
thus $\displaystyle f(x^{k}) = f(x)^{k}$
then let n = k+1
$\displaystyle f(x^{k+1})=\left \{ f(x) \right \}^{k+1}$
$\displaystyle f(x^{k+1})=\left \{ f(x) \right \}^{k+1}$
$\displaystyle f(x^{k+1})= f(x)^{k+1}$
$\displaystyle f(x^{k}).f(x^{1}) = f(x)^{k}.f(x)^{1}$
$\displaystyle [f(x^{k}).f(x^{1})]- [f(x)^{k}.f(x)^{1}] = 0$
$\displaystyle [f(x^{k}).f(x)]- [f(x)^{k}.f(x)] = 0$
$\displaystyle f(x)[f(x^{k})-f(x)^{k}] = 0$
$\displaystyle f(x) =0$ or $\displaystyle [f(x^{k})-f(x)^{k}]=0$
$\displaystyle f(x) = 0$ is invalid since n represents all positive integers
$\displaystyle [f(x^{k})-f(x)^{k}]=0$
$\displaystyle f(x^{k})=f(x)^{k}$
since $\displaystyle f(x^{k}) = f(x)^{k}$
thus $\displaystyle f(x^{k+1}) = f(x)^{k+1}$
So we have shown that if it is true for some n=k it is also true for n=k+1. We have shown that it is true for n=1, therefore by the principle of mathematical induction it is true for all the positive integers n.
Please can you tell me if I am right?
Also, the last part of the question asks for an example of a non-constant multiplicative function. I tried to read Wikipedia but it is a bit difficult to understand; please can you shed some light?
thank you
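As a footnote to the induction, the claimed property can be spot-checked numerically. This is only a sanity check, not a proof, and f(x) = x**2 is just an illustrative choice of non-constant multiplicative function:

```python
# A quick numerical check of f(x^n) = {f(x)}^n for a sample
# non-constant multiplicative function, f(x) = x**2.
# f(ab) = (ab)^2 = a^2 b^2 = f(a) f(b), so f is completely multiplicative.

def f(x):
    return x * x

def check(x, max_n):
    # verify f(x**n) == f(x)**n for n = 1 .. max_n
    return all(f(x ** n) == f(x) ** n for n in range(1, max_n + 1))

assert check(3, 10)
assert check(-5, 7)   # works for negative x too
```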
Originally Posted by bigmansouf
Why do you muck things up by not just using simple equivalences?
You have suppose that $\displaystyle f(x^N)=\left \{ f(x) \right \}^N$ so use it.
\displaystyle \begin{align*}f(x^{N+1})&=f(x^N\cdot x) \\&=f(x^N)\cdot f(x)\text{ why?}\\&=[f(x)]^N\cdot[f(x)]\text{ HOW?}\\&=[f(x)]^{N+1}\text{ HOW \& WHY?} \end{align*}
How are we done?
# Revision history
You can turn v into a polynomial in the indeterminate j:
sage: v.parent()
Finite Field in j of size 10354717741769305252977768237866805321427389645549071170116189679054678940682478846502882896561066713624553211618840202385203911976522554393044160468771151816976706840078913334358399730952774926980235086850991501872665651576831^2
sage: P = v.polynomial()
sage: P
9207905618485976447392495823891126491742950552335608949038426615382964807887894797411491716107572732408369786142697750332311947639207321056540404444033540648125838904594907601875471637980859284582852367748448663333866077035709*j + 4651155546510811048846770550870646667630430517849502373785869664283801023087435645046977319664381880355511529496538038596466138807253669785341264293301567029718659171475744580349901553036469330686320047828171225710153655171014
sage: P.parent()
Univariate Polynomial Ring in j over Finite Field of size 10354717741769305252977768237866805321427389645549071170116189679054678940682478846502882896561066713624553211618840202385203911976522554393044160468771151816976706840078913334358399730952774926980235086850991501872665651576831 (using NTL)
Then look at its coefficients:
sage: re, im = P.coefficients(sparse=False)
sage: re
4651155546510811048846770550870646667630430517849502373785869664283801023087435645046977319664381880355511529496538038596466138807253669785341264293301567029718659171475744580349901553036469330686320047828171225710153655171014
sage: im
9207905618485976447392495823891126491742950552335608949038426615382964807887894797411491716107572732408369786142697750332311947639207321056540404444033540648125838904594907601875471637980859284582852367748448663333866077035709
The notion of real and imaginary parts is closely tied to the particular polynomial x^2+1, so for me it makes sense that there is no such method for an arbitrary field extension.
Anyway, you can turn v into a polynomial in the indeterminate j:
sage: v.parent()
Finite Field in j of size 10354717741769305252977768237866805321427389645549071170116189679054678940682478846502882896561066713624553211618840202385203911976522554393044160468771151816976706840078913334358399730952774926980235086850991501872665651576831^2
sage: P = v.polynomial()
sage: P
9207905618485976447392495823891126491742950552335608949038426615382964807887894797411491716107572732408369786142697750332311947639207321056540404444033540648125838904594907601875471637980859284582852367748448663333866077035709*j + 4651155546510811048846770550870646667630430517849502373785869664283801023087435645046977319664381880355511529496538038596466138807253669785341264293301567029718659171475744580349901553036469330686320047828171225710153655171014
sage: P.parent()
Univariate Polynomial Ring in j over Finite Field of size 10354717741769305252977768237866805321427389645549071170116189679054678940682478846502882896561066713624553211618840202385203911976522554393044160468771151816976706840078913334358399730952774926980235086850991501872665651576831 (using NTL)
Then look at its coefficients:
sage: re, im = P.coefficients(sparse=False)
sage: re
4651155546510811048846770550870646667630430517849502373785869664283801023087435645046977319664381880355511529496538038596466138807253669785341264293301567029718659171475744580349901553036469330686320047828171225710153655171014
sage: im
9207905618485976447392495823891126491742950552335608949038426615382964807887894797411491716107572732408369786142697750332311947639207321056540404444033540648125838904594907601875471637980859284582852367748448663333866077035709
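The coefficient-extraction idea does not need Sage to be illustrated. Below is a toy plain-Python sketch of arithmetic in GF(p^2) = GF(p)[j]/(j^2 - c); p = 7 and c = 3 are illustrative choices (3 is a non-square mod 7, so j^2 - 3 is irreducible over GF(7)):

```python
# Toy analogue of extracting the two coefficients of an element
# a + b*j of GF(p^2), represented here as the pair (a, b) mod p.
p, c = 7, 3

def mul(u, v):
    # (a1 + b1*j)(a2 + b2*j) = (a1*a2 + c*b1*b2) + (a1*b2 + a2*b1)*j,
    # using j^2 = c
    a1, b1 = u
    a2, b2 = v
    return ((a1 * a2 + c * b1 * b2) % p, (a1 * b2 + a2 * b1) % p)

v = mul((2, 5), (4, 6))    # some element of GF(49)
const_coeff, j_coeff = v   # the analogue of P.coefficients(sparse=False)
```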
# Mutant pangolin
This is a code-golf challenge whereby you need to devise a program that acts like a quine or a quine that modifies itself to illustrate machine learning.
# Background
There is a basic artificial intelligence program called 'the pangolin game' which is described here. The basic idea is that the program, when run the first time, asks:
OK, please think of something
Is it a pangolin?
You may then reply either:
Yes
In which case it says:
Good. That was soooo easy.
Or if not it says:
Oh. Well you win then -- What were you thinking of?
To which you might say:
a dog
To which it would say
Please give me a question about a dog, so I can tell the difference between a dog and a pangolin
Does it eat ants?
It would then ask:
What is the answer for a dog?
To which you would say
no
And it would say
Thanks
Next time it runs, it would ask the question above, and would build up a binary tree of such questions.
# The challenge
Enough of the background. This challenge is to write a self modifying pangolin program. The rules are as follows:
1. Program output (as described above) should be to STDERR. The final response will always be "Good. That was soooo easy." or "Thanks". After this, it should output either the current version of the program, or a new version of the program that incorporates the question to STDOUT. No answer written in a language that does not support writing to STDOUT and STDERR or reading from STDIN will be valid.
2. In other words under UNIX you could invoke the program like this:
example:

$ mylanguage myprogram > myprogram.1
[dialog goes here]
$ mylanguage myprogram.1 > myprogram.2
[dialog goes here]
3. The program has to use exactly the specified prompts (because shortening the prompts shows no skill). The prompts are (without the quotes, and where %s is substituted) as follows:
"OK, please think of something"
"Is it %s?"
"Good. That was soooo easy."
"Oh. Well you win then -- What were you thinking of?"
"Please give me a question about %s, so I can tell the difference between %s and %s"
"What is the answer for %s?"
"Thanks"
4. When expecting yes/no answers, your program should accept y or yes in any case for 'yes', and n or no in any case for 'no'. What you do with non-conforming inputs is up to you. For instance you might decide to take any answer that begins with y or Y as 'yes', and anything else as no.
5. You may assume that the names of the things supplied and the questions consist only of ASCII letters, numbers, spaces, hyphens, question marks, commas, full stops, colons, and semicolons, i.e. they match the following regex: ^[-?,.;: a-zA-Z]+$. If you can cope with more than that (especially the quoting characters in your chosen language) you get to be smug, but don't gain any extra points.
6. Your program may not read or write any file (excluding STDIN, STDOUT, and STDERR), or from the network; specifically it may neither read nor write its own code from disk. Its state must be saved in the program code itself.
7. When the program is run and guesses the answer correctly, it must perform exactly as a quine, i.e. it must write to STDOUT exactly its own code, unchanged.
8. When the program is run and guesses the answer incorrectly, it must encode the provided new question and answer within its own code and write the modified code to STDOUT, so it is capable of distinguishing between its original guess and the provided new object, in addition to distinguishing between all previously given objects.
9. You must be able to cope with multiple sequential runs of the software so it learns about many objects. See here for examples of multiple runs.
10. Test runs are given at the link in the head (obviously covering only the STDIN and STDERR dialog).
11. Standard loopholes are excluded.

• Should the program be able to mutate multiple times and support more than 2 animals? If so, can you provide an example "Please give me a question about ..." dialogue when there are already two or more animals the program knows about? – Cristian Lupascu Sep 7 '15 at 7:24
• What if the user only says "dog" instead of "a dog"? Shall we parse the sentence to detect "a/an" or can we treat the answer literally? I assume so given the prompts you gave (%s). – coredump Sep 7 '15 at 8:45
• @coredump if the user says "dog" not "a dog", then the replies will not be grammatical. That's not an issue. – abligh Sep 7 '15 at 8:55
• Oof. Trying to do this in Runic would be a nightmare. The primary reason being that wiring up all the bits to cope with arbitrary input strings (which then need to be present as string literals in the resulting output program) would be basically impossible. Oh and Runic can't output to STDERR. – Draco18s no longer trusts SE Jun 23 '19 at 21:38
• This seemed like a fun "game", so rather than golf it, I made a codepen where you can play the Pangolin Game to your heart's content. Enjoy! – Skidsdev Jun 26 '19 at 20:11

## 4 Answers

# Common Lisp, 631 576

(let((X"a pangolin"))#1=(labels((U(M &AUX(S *QUERY-IO*))(IF(STRINGP M)(IF(Y-OR-N-P"Is it ~A?"M)(PROG1 M(FORMAT S"Good. That was soooo easy.~%"))(LET*((N(PROGN(FORMAT S"Oh. Well you win then -- What were you thinking of?~%")#2=(READ-LINE S)))(Q(PROGN(FORMAT S"Please give me a question about ~A, so I can tell the difference between ~A and ~A~%"N N M)#2#)))(PROG1(IF(Y-OR-N-P"What is the answer for ~A?"N)(,Q ,N ,M)(,Q ,M ,N))(FORMAT S"Thanks~%"))))(DESTRUCTURING-BIND(Q Y N)M(IF(Y-OR-N-P Q)(,Q ,(U Y),N)(,Q ,Y,(U N)))))))(write(list'let(list(X',(U x)))'#1#):circle t)()))

### Example session

Name the script pango1.lisp and execute as follows (using SBCL):

~$ sbcl --noinform --quit --load pango1.lisp > pango2.lisp
Is it a pangolin? (y or n) n
Oh. Well you win then -- What were you thinking of?
a cat
Please give me a question about a cat, so I can tell the difference between a cat and a pangolin
Does it sleep a lot?
What is the answer for a cat? (y or n) y
Thanks
Another round, adding the bear:
~$ sbcl --noinform --quit --load pango2.lisp > pango3.lisp
Does it sleep a lot? (y or n) y
Is it a cat? (y or n) n
Oh. Well you win then -- What were you thinking of?
a bear
Please give me a question about a bear, so I can tell the difference between a bear and a cat
Does it hibernate?
What is the answer for a bear? (y or n) y
Thanks

Adding a sloth (we test the case where the answer is "no"):

~$ sbcl --noinform --quit --load pango3.lisp > pango4.lisp
Does it sleep a lot? (y or n) y
Does it hibernate? (y or n) n
Is it a cat? (y or n) n
Oh. Well you win then -- What were you thinking of?
a sloth
Please give me a question about a sloth, so I can tell the difference between a sloth and a cat
Does it move fast?
What is the answer for a sloth? (y or n) n
Thanks
Testing the last file:
~$ sbcl --noinform --quit --load pango4.lisp > pango5.lisp
Does it sleep a lot? (y or n) y
Does it hibernate? (y or n) n
Does it move fast? (y or n) y
Is it a cat? (y or n) y
Good. That was soooo easy.
### Remarks
• I first forgot printing "Thanks", here it is.
• As you can see, questions are followed by (y or n), which is because I'm using the existing y-or-n-p function. I can update the answer to remove this output if needed.
• Common Lisp has a bidirectional *QUERY-IO* stream dedicated to user interaction, which is what I'm using here. Standard output and user interaction do not mix, which follows IMHO the spirit of the question.
• Using SAVE-LISP-AND-DIE would be a better approach in practice.
### Generated output
Here is the last generated script:
(LET ((X
'("Does it sleep a lot?"
("Does it hibernate?" "a bear"
("Does it move fast?" "a cat" "a sloth"))
"a pangolin")))
#1=(LABELS ((U (M &AUX (S *QUERY-IO*))
(IF (STRINGP M)
(IF (Y-OR-N-P "Is it ~A?" M)
(PROG1 M (FORMAT S "Good. That was soooo easy.~%"))
(LET* ((N
(PROGN
(FORMAT S
"Oh. Well you win then -- What were you thinking of?~%")
                   #2=(READ-LINE S)))
                  (Q
(PROGN
(FORMAT S
"Please give me a question about ~A, so I can tell the difference between ~A and ~A~%"
N N M)
#2#)))
(PROG1
(IF (Y-OR-N-P "What is the answer for ~A?" N)
(,Q ,N ,M)
(,Q ,M ,N))
(FORMAT S "Thanks~%"))))
(DESTRUCTURING-BIND
(Q Y N)
M
(IF (Y-OR-N-P Q)
(,Q ,(U Y) ,N)
(,Q ,Y ,(U N)))))))
(WRITE (LIST 'LET (LIST (X ',(U X))) '#1#) :CIRCLE T)
NIL))
### Explanations
A decision tree can be:
• a string, like "a pangolin", which represents a leaf.
• a list of three elements: (question if-true if-false) where question is a closed yes/no question, as a string, and if-true and if-false are the two possible subtrees associated with the question.
The U function walks and returns a possibly modified tree. Each question is asked in turn, starting from the root until reaching a leaf, while interacting with the user.
• The returned value for an intermediate node (Q Y N) is (Q (U Y) N) (resp. (Q Y (U N))) if the answer to question Q is yes (resp. no).
• The returned value for a leaf is either the leaf itself, if the program correctly guessed the answer, or a refined tree where the leaf is replaced by a question and two possible outcomes, according to the values taken from the user.
This part was rather straightforward. In order to print the source code, we use reader variables to build self-referential code. By setting *PRINT-CIRCLE* to true, we avoid infinite recursion during pretty-printing. The trick when using WRITE with :print-circle T is that the function might also return the value to the REPL, depending on whether write is the last form, and so, if the REPL doesn't handle circular structures, as is the case with the standard default value of *PRINT-CIRCLE*, there will be an infinite recursion. We only need to make sure the circular structure is not returned to the REPL, which is why there is a NIL in the last position of the LET. This approach greatly reduces the problem.
• Looks good! The (y or n) is not required, but I am tempted to permit it as it is an improvement. – abligh Sep 7 '15 at 17:12
• @abligh Thanks. About y/n, that would be nice, it helps and IMHO this is not really in contradiction with #3 which is about avoiding shortening the prompts. – coredump Sep 7 '15 at 18:29
# Python 2.7.6, 820 728 bytes
(Might work on different versions but I'm not sure)
def r(O,R):
 import sys,marshal as m;w=sys.stderr.write;i=sys.stdin.readline;t=O;w("OK, please think of something\n");u=[]
 def y(s):w(s);return i()[0]=='y'
 while t:
  if type(t)==str:
   if y("Is it %s?"%t):w("Good. That was soooo easy.")
   else:w("Oh. Well you win then -- What were you thinking of?");I=i().strip();w("Please give me a question about %s, so I can tell the difference between %s and %s"%(I,t,I));q=i().strip();a=y("What is the answer for %s?"%q);w("Thanks");p=[q,t];p.insert(a+1,I);z=locals();exec"O"+"".join(["[%s]"%j for j in u])+"=p"in z,z;O=z["O"]
   t=0
  else:u+=[y(t[0])+1];t=t[u[-1]]
 print"import marshal as m;c=%r;d=lambda:0;d.__code__=m.loads(c);d(%r,d)"%(m.dumps(R.__code__),O)
r('a pangolin',r)
Well, it's not as short as the Common Lisp answer, but here is some code!
# Python 3, 544 bytes
q="""
d=['a pangolin'];i=input;p=print
p("OK, Please think of something")
while len(d)!=1:
 d=d[1+(i(d[0])[0]=="n")]
x=i("Is it "+d[0]+"?")
if x[0]=="n":
 m=i("Oh. Well you win then -- What were you thinking of?")
 n=[i("Please give me a question about "+m+", so I can tell the difference between "+d[0]+" and "+m),*[[d[0]],[m]][::(i("What is the answer for "+m+"?")[0]=="n")*2-1]]
 p("Thanks")
 q=repr(n).join(q.split(repr(d)))
else:
 p("Good. That was soooo easy.")
q='q=""'+'"'+q+'""'+'"'+chr(10)+'exec(q)'
p(q)
"""
exec(q)
Try it Online!
The questions/answers/responses are stored in an array, where if the array stores three items (e.g. ['Does it eat ants',['a pangolin'],['a dog']]) then it gets an answer to the question and repeats with just the contents of either the second or third item, depending on the answer. When it gets to an array with only one item, it asks the question, and since it has its entire source code as a string it is able to use the split-join method to insert the extension to the array in order to add the new branch.
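The walk-and-extend logic can be sketched non-interactively (the names and the precomputed-answers framing here are hypothetical; the real program reads answers from STDIN):

```python
# Sketch of the nested-list decision tree used above: a leaf is a
# one-element list like ['a pangolin']; an internal node is
# [question, yes_subtree, no_subtree].

def guess(tree, answers):
    """Walk the tree using a list of precomputed yes/no answers."""
    for ans in answers:
        if len(tree) == 1:
            break
        tree = tree[1] if ans else tree[2]
    return tree[0]

def learn(leaf, question, new_thing, answer_for_new):
    """Replace a leaf by a question node, as the quine rewrite does."""
    old = leaf[0]
    leaf[:] = ([question, [new_thing], [old]] if answer_for_new
               else [question, [old], [new_thing]])

tree = ['a pangolin']          # start with a single leaf
learn(tree, 'Does it eat ants?', 'a dog', False)
# tree is now ['Does it eat ants?', ['a pangolin'], ['a dog']]
```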
I originally wrote this not realising the quine requirement, so rereading the question and having to find a way that I could both execute code and use it as a string was difficult, but I eventually stumbled upon the idea of the nice expandable quine format:
q="""
print("Some actual stuff")
q='q=""'+'"'+q+'""'+'"'+chr(10)+'exec(q)'
print(q)
"""
exec(q)
# Python 3, 497 bytes
t=["a pangolin"];c='''p=print;i=input;a=lambda q:i(q)[0]in"Yy"
def q(t):
 if len(t)<2:
  g=t[0]
  if a(f"Is it {g}?"):p("Good. That was soooo easy.")
  else:s=i("Oh. Well you win then -- What were you thinking of?");n=i(f"Please give me a question about {s}, so I can tell the difference between {s} and {g}.");t[0]=n;t+=[[g],[s]][::1-2*a(f"What is the answer for {s}?")];p("Thanks")
 else:q(t[2-a(t[0])])
p("Ok, please think of something");q(t);p(f"t={t};c=''{c!r}'';exec(c)")''';exec(c)
Quite similar to Harmless' answer for tree representation. It recursively asks the next question, while going deeper into the list, until there is only one response.
Ungolfed version (without quining)
tree = ['a pangolin']

def ask(question):
    answer = input(question + '\n')
    if answer.lower() in ['yes', 'no']:
        return answer.lower() == 'yes'
    else:
        return ask(question)  # re-ask on non-conforming input

def query(tree):
    if len(tree) == 1:
        guess = tree.pop()
        if ask(f'Is it {guess}?'):
            print('Good. That was soooo easy.')
            tree.append(guess)
        else:
            thing = input('Oh. Well you win then -- What were you thinking of?\n')
            new_question = input(f'Please give me a question about {thing}, so I can tell the difference between {thing} and {guess}.\n')
            print('Thanks')
            tree.append(new_question)
            if ask(f'What is the answer for {thing}?'):
                tree.append([thing])
                tree.append([guess])
            else:
                tree.append([guess])
                tree.append([thing])
    else:
        query(tree[2 - ask(tree[0])])
# Fourier transform normalization
I can't understand the following integral, someone can help? $$\int dk e^{ikx} = \delta(x)2\pi$$
-
To be more precise: I can't understand where the $2\pi$ comes from and where the delta function is located (where does the $2\pi$ "concentrate" along the x axis?) – Lorenzo Feb 28 at 11:42
It is concentrated near $x=0$. – Vladimir Kalitvianski Feb 28 at 12:50
## migrated from physics.stackexchange.comFeb 28 at 15:28
It means this integral behaves as a delta-function when integrated over $x$ with another regular function.
EDIT: Factor $2\pi$ does not depend on "normalization". It is a strict value. Integrate this function over $x$ within $\pm\varepsilon$ and you will obtain: $$2\int_{-\infty}^{\infty} \frac{\sin(z)}{z}dz=2\pi$$
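This limit can be checked numerically (a rough sketch; the cutoff a = 200 and the integration grid are arbitrary choices): with a finite cutoff $a$ on the $k$-integral, $\int_{-a}^{a} e^{ikx}\,dk = 2\sin(ax)/x$, and integrating that over $x$ approaches $2\pi$ as $a$ grows.

```python
import math

# With a finite cutoff a, the k-integral is 2*sin(a*x)/x.  Integrating
# that over x (trapezoidal rule on a fine grid) approaches 2*pi as a
# grows -- the "strict" 2*pi factor discussed above.
a = 200.0
dx = 1e-4
xs = [i * dx for i in range(-10000, 10000)]  # x from -1 to 1

def f(x):
    return 2 * a if x == 0 else 2 * math.sin(a * x) / x

area = sum(0.5 * (f(x) + f(x + dx)) * dx for x in xs)
# area is close to 2*pi
```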
-
does this mean that the integral has a result only if x=k? If so, why does this become $2\pi$? – Lorenzo Feb 28 at 11:57
@Lorenzo: No, $x=0$, not $k$. If you integrate, for simplicity, this integral over $x$ within $\pm\epsilon$, you will obtain this $2\pi$. – Vladimir Kalitvianski Feb 28 at 12:25
I knew this $2\pi$ wasn't due to normalization, but the normalization is due to this integral! Thank you for clearing my mind! – Lorenzo Feb 28 at 13:14
Do the integral putting in some limits $\pm a$ that we'll later take to infinity. Then we get:
$$\begin{split} \int_{-a}^{a} e^{ikx}dx &= \frac{1}{ik}[e^{ikx}]_{-a}^{a} \\ &= \frac{1}{ik}\left(e^{ika}-e^{-ika}\right) \\ &= \frac{1}{ik}2i\sin{ka} \\ &= 2a\frac{\sin{ka}}{ka} \\ &= 2a \space \text{sinc}(ka) \end{split}$$
where $\mathrm{sinc}(x)$ is $\sin(x)/x$. As $a$ goes to infinity, $(a/\pi)\,\mathrm{sinc}(ka)$ tends to a delta function $\delta(k)$, so when we take our integration limits to $\pm\infty$ we end up with $2\pi\,\delta(k)$.
As for the $2\pi$, there are various conventions for writing Fourier transforms and they tend to scatter factors of $\pi$ around. For example, Wikipedia gives the Fourier transform as:
$$\hat{f}(k) = \int f(x) \, e^{i \, 2\pi kx} \, dx$$
For example this makes intuitive sense if $k$ is a frequency, and of course Fourier transforms frequently involve time/frequency analyses. Anyhow the $2\pi$ in your expression can be justified by making the substitution $k = 2\pi l$ to put the integral into the standard form, in which case we get:
$$\int dk \space e^{ikx} = 2\pi \int dl \space e^{i2\pi l x}$$
and then taking the standard integral to be $\delta(x)$.
-
Ah, I see I and Kitchi have just both been downvoted. Are you going to say why you downvoted, or is this just another drive by downvote? – John Rennie Feb 28 at 12:41
It is I who downvoted because $2\pi$ is a strict result rather than a matter of convention. – Vladimir Kalitvianski Feb 28 at 12:48
Actually, as you can see the delta function is defined to be the following integral -
$$\delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty}e^{ikx}\,dk$$
Why the integral of $e^{ikx}$ goes to the delta is expressed in other answers to this question. The $2\pi$ is just a normalization constant, presumably one that is put there for conventional reasons.
The normalization itself isn't that important, as long as the same convention is stuck to throughout whatever calculation is being done. For example, the fourier transform of the delta function is $1$. But if your normalization is different, it may well turn out to be $2\pi$ or something else. The physics of the situation will still remain the same regardless of your convention.
-
The delta-function cannot be "defined" this way if the integral over $x$ including the point $x=0$ is not equal to unity. Normalization of the Fourier image $g(k)$ (convention, definition) has nothing to do with calculations of integrals. – Vladimir Kalitvianski Feb 28 at 13:14
# Find the determinant of $A + I$
Given a real valued matrix $A$ such that $A$ satisfies $AA^T = I$ and $\det(A)<0$, calculate $\det(A + I)$
My start : Since $A$ satisfies $AA^T = I$, $A$ is a unitary matrix. The determinant of a unitary matrix with real entries is either $+1$ or $-1$. Since we know that $\det(A)<0$, it follows that $\det(A)=-1$.
Because the determinant is multiplicative and $AA^T=I$, we have $$\det(A+I)=\det(A+AA^T)=\det(A(I+A^T))=\det(A)\det(I+A^T).$$ Of course $I+A^T=A^T+I$, and $(A^T+I)^T=(A^T)^T+I^T=A+I$. It follows that $$\det(A)\det(I+A^T)=\det(A)\det(A^T+I)=\det(A)\det(A+I),$$ where we use that $\det(M)=\det(M^T)$ for any matrix $M$. The equalities above show that $$\det(A+I)=\det(A)\det(A+I).$$ But you already noted that $\det(A)=-1$, so then we must have $\det(A+I)=0$.
We have:
$$\det(A+I)=\det[A(I+A^T)]=\det(A)\det(I+A^T)=-\det(I+A^T).$$
Now since $\det(M)=\det(M^T)$ for any matrix M, we have:
$$\det(A+I)=-\det(I+A^T)=-\det(I+A),$$
so that $2\det(A+I)=0$. Therefore $\det(A+I)=0$.
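As a quick numerical sanity check (the 2×2 reflection A = [[0, 1], [1, 0]] is just an illustrative real orthogonal matrix with determinant -1):

```python
# A = [[0, 1], [1, 0]] swaps coordinates: it satisfies A A^T = I
# and det(A) = -1, so det(A + I) should be 0.
A = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert matmul(A, transpose(A)) == I   # A is orthogonal
assert det2(A) == -1
A_plus_I = [[A[i][j] + I[i][j] for j in range(2)] for i in range(2)]
assert det2(A_plus_I) == 0
```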
There is another interesting proof:
The product of the eigenvalues of $A$ is $\det(A)$. Since $A$ is orthogonal, all eigenvalues lie on the unit circle. Each non-real eigenvalue comes paired with its complex conjugate, so their product is $1$, and the real eigenvalues can only be $\pm 1$. So $\det(A)=-1$ implies that the eigenvalue $-1$ occurs with odd multiplicity; hence the corresponding eigenspace has dimension at least $1$. Thus, there is a vector $v\ne 0$ with $Av=-v$. We conclude $(A+I)v=0$, and finally $\det(A+I)=0$.
• Is it immediately clear that $\pm1$ are the only possible eigenvalues? – Servaes Sep 14 '15 at 16:05
• @Servaes You're right. I fixed the argument. – principal-ideal-domain Sep 14 '15 at 17:54
$$\det(A+I)=\det(A+AA^T)=\det[A(I+A^T)]=\det(A)\det(I+A^T)=-\det(I+A^T)$$
By properties of determinants we know $$\det(I+A^T)=\det[(I+A)^T]=\det(I+A)$$ so that $\det(I+A)=-\det(I+A)$, i.e. $2\det(I+A)=0$. Therefore, $$\det(I+A)=0$$
# PCalc binomial functions
Over the weekend, I read Federico Viticci’s excellent post on using iOS 12 Shortcuts to access PCalc. I haven’t installed iOS 12 yet (I usually wait a few days to let Apple’s servers cool down), but I expect to use Federico’s advice to make a few PCalc shortcuts when I do. In the meantime, his post inspired me to clean up and finish off a PCalc function that I’d half-written some time ago.
The function calculates a probability from the binomial distribution. Imagine there are $N$ independent trials of some event. There are only two possible outcomes of each event, which we'll call success and failure, although you could give them any names you like. For each trial, the probability of success is $p$. The probability of $n$ successes in those $N$ trials is

$$P(n) = \binom{N}{n} p^n (1-p)^{N-n}$$
The first term in the formula is the binomial coefficient, which is defined as

$$\binom{N}{n} = \frac{N!}{n!\,(N-n)!}$$
The binomial probability, then, is a function of three inputs, $n$, $N$, and $p$, that yields one output. The PCalc function I wrote to implement the function starts with three numbers on the stack,1 say $n = 5$, $N = 10$, and $p = .5$,
Any entries on the stack “above” the three inputs will still be there after the function runs.
The function is called “Binomial PMF.” The PMF stands for probability mass function, which is standard terminology for the function that returns the probability of a random variable equaling a discrete value. The function is defined this way,
which is a lot of steps to enter. You’d probably prefer to download it.
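For comparison, the same three-input formula is a one-liner in Python (this is not the PCalc function itself, just the calculation it implements):

```python
from math import comb

def binom_pmf(n, N, p):
    """Probability of exactly n successes in N trials,
    each with success probability p."""
    return comb(N, n) * p**n * (1 - p)**(N - n)

# the example inputs from above: n = 5, N = 10, p = 0.5
print(binom_pmf(5, 10, 0.5))  # → 0.24609375
```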
If you look through the function definition, you’ll notice that I’m using up a lot of registers—more than is truly necessary to run out the calculation. I do this for a couple of reasons:
1. The registers are there to be used, and by keeping all the intermediate calculations in separate registers, I’m able to make the logic of the function easier to follow.
2. I have further plans for this function. I want to use it as the starting point for a more complicated function, the cumulative distribution function, which will need to keep track of some of those intermediate calculations.
We’ll look at the Binomial CDF function in the next post.
1. Yes, it’s an RPN-only function. RPN is the natural way to handle this many inputs, and because I don’t use PCalc’s algebraic mode, I didn’t see any reason to write the function to work in that mode.
# Ebooks, Hazel, and xargs
Although I have a Kindle, I prefer reading ebooks on my iPad, and the ebook reader I like the most is Marvin.1 This means I have to convert my Kindle books to ePubs (the format Marvin understands) and put them in a Dropbox folder that Marvin can access. The tools I use to do this efficiently are Calibre, Hazel, and, occasionally, a one-liner shell script.
If you have ever felt the need to convert the format of an ebook, you've probably used Calibre, so I won't spend more than a couple of sentences describing how I use it.2 When I buy a new book from Amazon, it gets sent to my Kindle. I then copy the book from the Kindle into the Calibre library and convert the format to ePub. This doesn't do the conversion in place and overwrite the Kindle file; it makes a new file in ePub format.
Calibre organizes its library in subfolders according to author and book. At the top level is the calibre folder, which I keep in Dropbox. At the next level are folders for each author. Within these are folders for each book from a particular author. Finally, within each book folder are the ebook files themselves and some metadata.
This is great, but I find it easiest to configure Marvin to download ePubs from a single Dropbox folder, one that I’ve cleverly named epubs. So I want an automated way to copy the ePub files from the calibre folder structure into the epubs folder.
This is a job for Hazel. Following Noodlesoft’s guidance, I set up two rules for the calibre folder.
The first one, “Copy to epubs,” just looks for files with the epub extension and copies them to the epubs folder.
By itself, this rule does nothing, because the ePub files aren’t in the calibre folder itself, they’re two levels down, and Hazel doesn’t look in subfolders without a bit of prodding. That prodding comes from the “Run on subfolders” rule:
This action was copied directly from Noodlesoft’s documentation, which says
To solve this problem [running rules in subfolders], Hazel offers a special action: “Run rules on folder contents.” If a subfolder inside your monitored folder matches a rule containing this action, then the other rules in the list will also apply to that subfolder’s contents.
With these rules in place, I don’t have to remember to copy the newly made ePub file into the epubs folder—Hazel does it for me as soon as it recognizes that a new ePub is in the calibre folder structure.
At least it should do it for me. Sometimes—for reasons I haven’t been able to suss out—Hazel doesn’t make the copies it’s supposed to. I’ll open up Marvin expecting to see new books ready to be downloaded to my iPad, and they won’t be there. When that happens, I bring out the heavy artillery: a shell script that combines the find and xargs commands to copy any and all ePubs under the calibre directory into the epubs directory. The one-liner script, called epub-update, looks like this:
#!/bin/bash
find ~/Dropbox/calibre/ -iname "*.epub" -print0 | xargs -0 -I file cp -n file ~/Dropbox/epubs/
The find part of the pipeline collects all files with an epub extension (case-insensitive) under the calibre directory and prints them out separated by a null byte. This null separator is pretty much useless when printing out file names for people, but it’s great when those files are going to be piped to xargs, especially when the file names are likely to have spaces within them.
The xargs part of the pipeline takes the null-separated list of files from the find command and runs cp on them. In the cp command, file is used as a placeholder for the file name and the files are copied to the epubs directory. The -n option tells cp not to overwrite files that already exist.
The advantage of using epub-update when Hazel hiccups is that I don't have to go hunting through subfolders to find the file that didn't get copied.
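For what it's worth, the same sweep can be sketched in Python with pathlib and shutil (a hypothetical alternative, not the script I actually use; note that rglob is case-sensitive, unlike find -iname):

```python
import shutil
from pathlib import Path

def epub_update(calibre_dir, epubs_dir):
    """Copy every .epub under calibre_dir into epubs_dir,
    skipping files that already exist there (like cp -n)."""
    dest = Path(epubs_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for src in Path(calibre_dir).rglob('*.epub'):
        target = dest / src.name
        if not target.exists():       # don't overwrite existing copies
            shutil.copy2(src, target)
```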
I suppose if I were smart, I’d set up my Amazon account to send new purchases to my Mac instead of my Kindle. Then I could automate the importing of new ebooks into the Calibre library and eliminate more of the manual work at the front end of the process. One of the advantages of doing posts like this is that the process of writing up my workflows forces me to confront my own inefficiencies.
1. iBooks doesn’t give me the control over line spacing and margins that Marvin does. I may change my mind after seeing the all-new, all-improved Books app in iOS 12, but even if I switch, the automations described here will still be useful.
2. Also, because it has the worst GUI of any app on my Mac, I can’t bear to post screenshots of it.
# Hypergeometric Obama
On Friday, Barack Obama will speak at the University of Illinois as part of a ceremony at which he will receive the Paul Douglas Award for Ethics in Government. The speech will be at the Auditorium at the south end of the Quad, which doesn’t seat nearly as many people as wanted to see him.
According to this report, 22,611 students signed up for a ticket lottery for just 1,300 openings. My two sons were among the horde, and they learned yesterday, along with 21,309 of their friends, that they didn’t win.
How likely was it that neither would get a seat? The chance of any individual student being picked was

$\frac{1300}{22611} = 0.0575,$
which means that the chance of any individual student not being picked was $1 - 0.0575 = 0.9425$. So the probability that neither would be picked was

$0.9425 \times 0.9425 = 0.8883,$
or just under 90%. So the fact that neither of them got in wasn’t a surprise. We can also calculate the probability that both were lucky,

$0.0575 \times 0.0575 = 0.0033,$
which is extremely low, and the probability that one got in and one didn’t,

$2 \times 0.0575 \times 0.9425 = 0.1084,$
which isn’t too bad, but unfortunately it didn’t work out.
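These back-of-the-envelope figures are easy to check in Python, using the same numbers (1,300 tickets, 22,611 applicants) from the report:

```python
# Naive calculation that treats the two sons' draws as independent.
p = 1300 / 22611            # chance a given student wins a ticket

both = p * p                # both sons win
one = 2 * p * (1 - p)       # exactly one wins
neither = (1 - p) ** 2      # neither wins

print(round(both, 4), round(one, 4), round(neither, 4))
# → 0.0033 0.1084 0.8883
```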
These calculations are simple and give us answers that are accurate enough for our purposes, given the large numbers (1,300 and 22,611) involved. But we’ve made some approximations that won’t be reasonable if the numbers are small.
For example, let’s assume the same general problem, but this time we’ll assume there are 10 applicants—two of whom are my sons—for 2 tickets. What are the probabilities of zero, one, and two of my sons getting tickets?
If we follow the procedure above, we get

$0.20 \times 0.20 = 0.04$
for both sons getting a ticket,

$2 \times 0.20 \times 0.80 = 0.32$
for one son getting a ticket and one not, and

$0.80 \times 0.80 = 0.64$
for neither getting a ticket. These calculations are internally consistent, in that they add up to unity, but they’re wrong.
The problem is that these calculations are based on an assumption that one son winning a ticket is independent of the other son winning, but they are not independent.
The equation for calculating the probability of an intersection of two events, call them $A$ and $B$, is

$P(A \cap B) = P(A)\,P(B \mid A) = P(B)\,P(A \mid B),$
where $P(B\,|\,A)$ is the probability of event $B$ given that event $A$ occurs, and similarly, $P(A\,|\,B)$ is the probability of event $A$ given that event $B$ occurs. If the events are independent, then the conditions don’t matter,
$P(B \mid A) = P(B)$

and

$P(A \mid B) = P(A).$
But that’s not the situation here. If Son A gets a ticket, then Son B’s chance of getting a ticket isn’t $2/10 = 0.20$, it’s $1/9 = 0.1111$ because with Son A having a ticket, there’s only one ticket left for the nine remaining applicants. Which means the probability that both sons get a ticket is

$\frac{2}{10} \times \frac{1}{9} = \frac{2}{90} = \frac{1}{45} = 0.0222,$
which is just over half of our earlier, mistaken, calculation of $0.040$.
Similarly, the probability that one son gets a ticket and the other doesn’t is

$\frac{2}{10} \times \frac{8}{9} + \frac{8}{10} \times \frac{2}{9} = \frac{32}{90} = \frac{16}{45} = 0.3556,$
and the probability that neither gets a ticket is

$\frac{8}{10} \times \frac{7}{9} = \frac{56}{90} = \frac{28}{45} = 0.6222.$
(And now we see why it was OK to play a little loose with the numbers back in our original calculations with 22,611 applicants for 1,300 tickets. There just isn’t enough difference between

$\frac{1300}{22611} = 0.05749$

and

$\frac{1299}{22610} = 0.05745$

to make it worth even the minimal effort.)
All of the denominators in the previous results were 45. It will probably not surprise you that 45 is the number of combinations of ten things taken two at a time. It’s also the binomial coefficient, which is usually written like a fraction but without the horizontal dividing line and surrounded by parentheses. It’s calculated through factorials:

$\binom{n}{k} = \frac{n!}{k!\,(n - k)!}$
In our particular case of $n=10$ and $k=2$,

$\binom{10}{2} = \frac{10!}{2!\,8!} = \frac{10 \times 9}{2} = 45.$
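Python’s standard library will do this count for us directly (math.comb, available since Python 3.8):

```python
from math import comb, factorial

# Binomial coefficient: n! / (k! * (n-k)!)
n, k = 10, 2
print(comb(n, k))  # → 45

# The factorial formula gives the same answer, just less efficiently.
print(factorial(n) // (factorial(k) * factorial(n - k)))  # → 45
```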
Imagine ten lottery balls with the numbers 0 through 9 printed on them. Mix them up and draw two. There are 45 different results you can get if you don’t care about the order of the two balls. Here they are:
0-1 0-2 0-3 0-4 0-5 0-6 0-7 0-8 0-9
1-2 1-3 1-4 1-5 1-6 1-7 1-8 1-9
2-3 2-4 2-5 2-6 2-7 2-8 2-9
3-4 3-5 3-6 3-7 3-8 3-9
4-5 4-6 4-7 4-8 4-9
5-6 5-7 5-8 5-9
6-7 6-8 6-9
7-8 7-9
8-9
If two of the balls—say 3 and 7, just as an example—represent winning a ticket, scanning the list shows that there’s one combination with both 3 and 7; 16 combinations with a 3 or a 7 but not both; and 28 combinations with neither a 3 nor a 7. These are the numerators in the answers above.
There’s another way to visualize this that might make more sense to you. Imagine a line of ten people. Two of them get handed tickets (X), the other eight don’t (O). Here are the 45 ways the two tickets can be distributed:
XXOOOOOOOO XOXOOOOOOO XOOXOOOOOO XOOOXOOOOO XOOOOXOOOO XOOOOOXOOO XOOOOOOXOO XOOOOOOOXO XOOOOOOOOX
OXXOOOOOOO OXOXOOOOOO OXOOXOOOOO OXOOOXOOOO OXOOOOXOOO OXOOOOOXOO OXOOOOOOXO OXOOOOOOOX
OOXXOOOOOO OOXOXOOOOO OOXOOXOOOO OOXOOOXOOO OOXOOOOXOO OOXOOOOOXO OOXOOOOOOX
OOOXXOOOOO OOOXOXOOOO OOOXOOXOOO OOOXOOOXOO OOOXOOOOXO OOOXOOOOOX
OOOOXXOOOO OOOOXOXOOO OOOOXOOXOO OOOOXOOOXO OOOOXOOOOX
OOOOOXXOOO OOOOOXOXOO OOOOOXOOXO OOOOOXOOOX
OOOOOOXXOO OOOOOOXOXO OOOOOOXOOX
OOOOOOOXXO OOOOOOOXOX
OOOOOOOOXX
Imagine now that my two sons are at the left end of the line (they could be anywhere—like positions 3 and 7—the results won’t change). There’s one arrangement where they get both tickets (the upper left corner), 16 arrangements where one of them gets a ticket and the other doesn’t (the top two rows except for the upper left corner), and 28 arrangements where neither of them gets a ticket (everywhere else).
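The counting in the two pictures above can be verified by brute force. Here I enumerate all 45 pairs with itertools and tally how many contain both, exactly one, or neither of two winning positions (3 and 7, as in the lottery-ball example):

```python
from itertools import combinations

winners = {3, 7}
tallies = {0: 0, 1: 0, 2: 0}

# Every unordered pair of distinct balls drawn from 0-9.
for pair in combinations(range(10), 2):
    tallies[len(winners & set(pair))] += 1

print(tallies)  # → {0: 28, 1: 16, 2: 1}
```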
There is, as you might imagine, a generalization of this problem to account for any number of applications, tickets, and sons. It’s called the hypergeometric distribution. Using the nomenclature in the linked Wikipedia article, we’ll assume a population of $N$ things (applications), of which $K$ are successes (tickets). If we draw $n$ samples (sons) from that population (without replacement), then the probability that $k$ of the samples will be successes is

$P(X = k) = \frac{\binom{K}{k} \binom{N - K}{n - k}}{\binom{N}{n}}.$
The denominator should look familiar: it’s the number of combinations of $N$ things taken $n$ at a time.
The first term in the numerator is the number of ways the successes in the sample ($k$) can be taken from the successes in the population ($K$). The second term in the numerator is the number of ways the failures in the sample ($n - k$) can be taken from the failures in the population ($N - K$). This product is a formalization of the counting we did above.
Let’s use this formula to repeat the example with ten applications for tickets ($N = 10$), two tickets ($K = 2$), and two sons ($n = 2$). The probability that both sons will get a ticket is

$P(X = 2) = \frac{\binom{2}{2} \binom{8}{0}}{\binom{10}{2}} = \frac{1 \times 1}{45} = \frac{1}{45} = 0.0222.$
The probability that exactly one son will get a ticket is

$P(X = 1) = \frac{\binom{2}{1} \binom{8}{1}}{\binom{10}{2}} = \frac{2 \times 8}{45} = \frac{16}{45} = 0.3556.$
And the probability that neither one will get a ticket is

$P(X = 0) = \frac{\binom{2}{0} \binom{8}{2}}{\binom{10}{2}} = \frac{1 \times 28}{45} = \frac{28}{45} = 0.6222.$
Just like before, but with less counting.
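The formula translates directly into Python with math.comb. Here’s a small function of my own (the name and argument order are mine, not from any library) that reproduces the three fractions exactly:

```python
from fractions import Fraction
from math import comb

def hyper_pmf(k, N, K, n):
    """P(k successes in a sample of n, drawn without replacement
    from a population of N things containing K successes)."""
    return Fraction(comb(K, k) * comb(N - K, n - k), comb(N, n))

print([hyper_pmf(k, 10, 2, 2) for k in (2, 1, 0)])
# → [Fraction(1, 45), Fraction(16, 45), Fraction(28, 45)]
```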
The SciPy library for Python has a sublibrary called stats with a set of functions for handling the hypergeometric distribution. Here’s how to do this last calculation in Python:
python:
>>> from scipy.stats import hypergeom
>>> hypergeom.pmf(0, 10, 2, 2)
0.62222222222222201
The pmf function stands for “probability mass function,” and it represents the formula defined above. The first argument is the number of successes in the sample, the second is the size of the population, the third is the number of successes in the population, and the fourth is the sample size.
I think this ordering of the arguments was an incredibly stupid choice by the designers of the library, as it puts the population numbers in between the sample numbers and the order of the population numbers is the reverse of the order of the sample numbers. It’s hard to imagine a less intuitive way to define the argument order. And don’t get me started on the symbols they use in the documentation. But at least the function works.
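If SciPy’s argument order trips you up as much as it does me, a thin wrapper with descriptive names is cheap insurance. This sketch backs the wrapper with math.comb instead of SciPy so it runs anywhere; to delegate to SciPy, the body would just return hypergeom.pmf with the arguments shuffled into its order.

```python
from math import comb

def ticket_pmf(winners_in_sample, sample_size, winners_total, population):
    """Hypergeometric pmf with the sample numbers first and
    the population numbers second."""
    k, n, K, N = winners_in_sample, sample_size, winners_total, population
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

print(round(ticket_pmf(0, 2, 1300, 22611), 6))  # → 0.888315
```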
We can now go back to our original problem of 1,300 tickets, 22,611 applications, and 2 sons and do it right.
python:
>>> hypergeom.pmf(2, 22611, 1300, 2)
0.0033031794730568834
>>> hypergeom.pmf(1, 22611, 1300, 2)
0.10838192109526341
>>> hypergeom.pmf(0, 22611, 1300, 2)
0.88831489943503728
As expected, no practical difference between these answers and the approximations we started out with. But it’s the journey, not the destination, right?
# An image and PDF grab bag
In my job, I often refer to provisions of building codes or material and equipment standards in my reports. Usually, simply quoting the relevant provisions is sufficient, but sometimes I need to attach one or more pages from these documents as an addendum. In the old days, that meant photocopies; now it typically means pulling pages out of PDFs. Preview is a pretty good tool for this, as it allows you to use the thumbnail sidebar to extract and rearrange pages. But I recently ran into a situation where Preview couldn’t do the job alone, and I had to use a series of command-line tools to get the job done.
The problem is with the American Institute of Steel Construction, which has decided to publish its essential Steel Construction Manual as a website instead of a PDF. Each “page” of the website looks like the corresponding page of the print edition of the manual.
Having this website instead of a PDF is moderately annoying when I’m trying to use the manual, but it’s really annoying when I need to pull out excerpts, because I have to make screenshots of each page, edit them, convert them to PDFs, resize them to fit on letter-sized pages, and assemble them into a single coherent PDF document. Here’s how I do it.
First, I take the screenshots on my 9.7″ iPad Pro in portrait mode (see above) because that gives me good resolution of a single page. I could get slightly higher resolution by taking the screenshots on my 2017 27″ 5k iMac, but that machine isn’t available when I’m working at home, where the Mac on my desk is a non-Retina 2012 27″ iMac. After screenshotting, I have a bunch of JPEG files on my iPad, which I copy over to my Mac via the Files app and Dropbox.
Next, I move to the Mac (whichever one is handy) and crop each image down to just the page image, eliminating all the browser chrome and navigation controls. For this I use the mogrify command from the ImageMagick suite of tools to crop the images in place.1 After a bit of trial and error, I learned that
mogrify -crop 1214x1820+162+224 image.jpg
gives the crop size and offset that leaves just the page image.
Of course, I don’t want to enter this command for every screenshot, so I wrote a shell script, called aisc-crop, which loops through all of its arguments, running the mogrify command on each:
bash:
#!/bin/bash

for f in "$@"
do
    mogrify -crop 1214x1820+162+224 "$f"
done
With this, I can crop all the images in a directory with a single command:
aisc-crop *.JPG
Now that I have the page images I want, it’s time to turn them into PDFs. For this, I use the built-in sips command, but sips wants the extension to be .pdf before it does the conversion. So I use Larry Wall’s old rename Perl script:
rename 's/JPG/pdf/' *.JPG
Now the files are ready for conversion:
sips -s format pdf *.pdf
Time to put all the pages together into a single PDF document. For this, I like using PDFtk (which can also be installed via Homebrew):
pdftk *.pdf cat output aisc-pages.pdf
At this point, I have a PDF document with all the pages I want, but the pages aren’t letter-sized. If I open the document in Preview and Get Info on it, I see this:
The page size is so big because sips treated every pixel in the JPEG as a point in the converted PDF. Since there are 72 points per inch, the PDF pages are $\frac{1820}{72} = 25.28\; \mathrm{in}$ high. To get the PDF down to letter size, I use pdfjam, which got installed along with my TeX installation:
pdfjam --paper letterpaper --suffix letter aisc-pages.pdf
Now I have a document named aisc-pages-letter.pdf that’s the right physical size, with a correspondingly higher dpi for the embedded images. I could have gotten the same result by “printing” aisc-pages.pdf to a new PDF with the Scale to Fit option selected in the Print sheet, but where’s the fun in that?
Now I can open the document in Preview and rearrange the pages if I didn’t take the screenshots in the right order. Otherwise, I’m done. As is often the case, it takes longer to explain than to do.
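For the record, the whole chain can be wrapped up in one shell function. This is a sketch of my own, not a tested production script: it assumes the mogrify, rename, sips, pdftk, and pdfjam commands discussed above are all installed, and it hard-codes the crop geometry for my iPad screenshots.

```shell
# Sketch: crop, convert, concatenate, and resize in one go.
aisc_prepare () {
  local out="${1:-aisc-pages}"           # base name for the output PDF
  mogrify -crop 1214x1820+162+224 *.JPG  # crop away the browser chrome
  rename 's/JPG/pdf/' *.JPG              # sips wants a .pdf extension
  sips -s format pdf *.pdf               # JPEG to PDF, one page per image
  pdftk *.pdf cat output "$out.pdf"      # stitch the pages together
  pdfjam --paper letterpaper --suffix letter "$out.pdf"  # shrink to letter
}
```

Run it in the directory of screenshots with `aisc_prepare` (optionally passing an output base name).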
1. ImageMagick used to be kind of hard to install on a Mac, but not anymore. As Jason Snell showed us a couple of months ago, just use Homebrew and brew install imagemagick.
# Split-complex number
Split-complex product

| × | 1 | j |
|---|---|---|
| 1 | 1 | j |
| j | j | 1 |
A portion of the split-complex number plane showing subsets with modulus zero (red), one (blue), and minus one (green).
In abstract algebra, the split-complex numbers (or hyperbolic numbers, also perplex numbers, and double numbers) are a two-dimensional commutative algebra over the real numbers different from the complex numbers. Every split-complex number has the form
x + y j,
where x and y are real numbers. The number j is similar to the imaginary unit i, except that
$j^2 = +1.$
As an algebra over the reals, the split-complex numbers are the same as the direct sum of algebras $R \oplus R$ (under the isomorphism sending x + yj to (x + y, x − y)). The name split comes from this characterization: as a real algebra, the split-complex numbers split into the direct sum $R \oplus R$. It arises, for example, as the real subalgebra generated by an involutory matrix.
Geometrically, split-complex numbers are related to the modulus $x^2 - y^2$ in the same way that complex numbers are related to the square of the Euclidean norm $x^2 + y^2$. Unlike the complex numbers, the split-complex numbers contain nontrivial idempotents (other than 0 and 1), as well as zero divisors, and therefore they do not form a field.
In interval analysis, a split complex number x + yj represents an interval with midpoint x and radius y. Another application involves using split-complex numbers, dual numbers, and ordinary complex numbers, to interpret a 2 × 2 real matrix as a complex number.
Split-complex numbers have many other names; see the synonyms section below. See the article Motor variable for functions of a split-complex number.
## Definition
A split-complex number is an ordered pair of real numbers, written in the form
$z = x + jy$
where x and y are real numbers and the quantity j satisfies
$j^2 = +1$
Choosing $j^2 = -1$ results in the complex numbers. It is this sign change which distinguishes the split-complex numbers from the ordinary complex ones. The quantity j here is not a real number but an independent quantity; that is, it is not equal to ±1.
The collection of all such z is called the split-complex plane. Addition and multiplication of split-complex numbers are defined by
(x + jy) + (u + jv) = (x + u) + j (y + v)
(x + jy)(u + jv) = (xu + yv) + j (xv + yu).
This multiplication is commutative, associative and distributes over addition.
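These rules are mechanical enough to check numerically. The following Python sketch (class and method names are mine, chosen for illustration) implements the product and spot-checks commutativity and distributivity:

```python
class Split:
    """Split-complex number x + j*y, with j*j = +1."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, w):
        return Split(self.x + w.x, self.y + w.y)
    def __mul__(self, w):
        # (x + jy)(u + jv) = (xu + yv) + j(xv + yu)
        return Split(self.x * w.x + self.y * w.y,
                     self.x * w.y + self.y * w.x)
    def __eq__(self, w):
        return (self.x, self.y) == (w.x, w.y)
    def __repr__(self):
        return f"{self.x} + {self.y}j"

z, w, v = Split(1, 2), Split(3, -1), Split(0, 5)
assert z * w == w * z                 # commutative
assert z * (w + v) == z * w + z * v   # distributes over addition
print(z * w)  # → 1 + 5j
```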
### Conjugate, modulus, and bilinear form
Just as for complex numbers, one can define the notion of a split-complex conjugate. If
z = x + jy
the conjugate of z is defined as
z* = x − jy.
The conjugate satisfies similar properties to the usual complex conjugate. Namely,
(z + w)* = z* + w*

(zw)* = z* w*

(z*)* = z.
These three properties imply that the split-complex conjugate is an automorphism of order 2.
The modulus of a split-complex number z = x + jy is given by the isotropic quadratic form
$\lVert z \rVert = z z^* = z^* z = x^2 - y^2 .$
It has an important property that it is preserved by split-complex multiplication:
$\lVert z w \rVert = \lVert z \rVert \lVert w \rVert .$
However, this quadratic form is not positive-definite but rather has signature (1, −1), so the modulus is not a norm.
The associated bilinear form is given by
⟨z, w⟩ = Re(zw*) = Re(z*w) = xu − yv,
where z = x + jy and w = u + jv. Another expression for the modulus is then
$\lVert z \rVert = \langle z, z \rangle .$
Since it is not positive-definite, this bilinear form is not an inner product; nevertheless the bilinear form is frequently referred to as an indefinite inner product. A similar abuse of language refers to the modulus as a norm.
A split-complex number is invertible if and only if its modulus is nonzero ($\lVert z \rVert \ne 0$), thus numbers of the form x ± jx have no inverse. The multiplicative inverse of an invertible element is given by
$z^{-1} = z^{*} / \lVert z \rVert .$
Split-complex numbers which are not invertible are called null elements. These are all of the form (a ± ja) for some real number a.
### The diagonal basis
There are two nontrivial idempotents given by e = (1 − j)/2 and e* = (1 + j)/2. Recall that idempotent means that ee = e and e*e* = e*. Both of these elements are null:
$\lVert e \rVert = \lVert e^* \rVert = e^* e = 0 .$
It is often convenient to use e and e* as an alternate basis for the split-complex plane. This basis is called the diagonal basis or null basis. The split-complex number z can be written in the null basis as

z = x + jy = (x − y)e + (x + y)e*.
If we denote the number z = ae + be* for real numbers a and b by (a, b), then split-complex multiplication is given by
(a₁, b₁)(a₂, b₂) = (a₁a₂, b₁b₂).
In this basis, it becomes clear that the split-complex numbers are ring-isomorphic to the direct sum $R \oplus R$ with addition and multiplication defined pairwise.
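The isomorphism is easy to confirm with a few lines of Python. This is a sketch under the conventions above, with e ↔ (1, 0) and e* ↔ (0, 1) in the null basis:

```python
def to_diagonal(x, y):
    # x + jy maps to (x - y, x + y) in the null basis {e, e*}
    return (x - y, x + y)

def split_mul(z, w):
    # split-complex product on (x, y) coordinate pairs
    (x, y), (u, v) = z, w
    return (x * u + y * v, x * v + y * u)

# Multiplying split-complex numbers matches componentwise
# multiplication of their diagonal-basis images.
z, w = (2, 3), (5, -1)
a1, b1 = to_diagonal(*z)
a2, b2 = to_diagonal(*w)
assert to_diagonal(*split_mul(z, w)) == (a1 * a2, b1 * b2)
print(to_diagonal(*split_mul(z, w)))  # → (-6, 20)
```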
The split-complex conjugate in the diagonal basis is given by
(a, b)* = (b, a)
and the modulus by
$\lVert (a,b) \rVert = ab .$
Though lying in the same isomorphism class in the category of rings, the split-complex plane and the direct sum of two real lines differ in their layout in the Cartesian plane. The isomorphism, as a planar mapping, consists of a counter-clockwise rotation by 45° and a dilation by √2. The dilation in particular has sometimes caused confusion in connection with areas of hyperbolic sectors. Indeed, hyperbolic angle corresponds to area of sectors in the $R \oplus R$ plane with its "unit circle" given by $\lbrace (a,b) \in R \oplus R : ab = 1 \rbrace .$ The contracted "unit circle" $\lbrace \cosh a + j \ \sinh a : a \in R \rbrace$ of the split-complex plane has only half the area in the span of a corresponding hyperbolic sector. Such confusion may be perpetuated when the geometry of the split-complex plane is not distinguished from that of $R \oplus R .$
## Geometry
Unit hyperbola with ||z||=1 (blue),
conjugate hyperbola with ||z||=−1 (green),
and asymptotes ||z||=0 (red)
A two-dimensional real vector space with the Minkowski inner product is called (1 + 1)-dimensional Minkowski space, often denoted $R^{1,1}$. Just as much of the geometry of the Euclidean plane $R^2$ can be described with complex numbers, the geometry of the Minkowski plane $R^{1,1}$ can be described with split-complex numbers.
The set of points
$\{ z : \lVert z \rVert = a^2 \}$
is a hyperbola for every nonzero a in R. The hyperbola consists of a right and left branch passing through (a, 0) and (−a, 0). The case a = 1 is called the unit hyperbola. The conjugate hyperbola is given by
$\{ z : \lVert z \rVert = -a^2 \}$
with an upper and lower branch passing through (0, a) and (0, −a). The hyperbola and conjugate hyperbola are separated by two diagonal asymptotes which form the set of null elements:
$\{ z : \lVert z \rVert = 0 \}.$
These two lines (sometimes called the null cone) are perpendicular in $R^2$ and have slopes ±1.
Split-complex numbers z and w are said to be hyperbolic-orthogonal if ⟨z, w⟩ = 0. While analogous to ordinary orthogonality, particularly as it is known with ordinary complex number arithmetic, this condition is more subtle. It forms the basis for the simultaneous hyperplane concept in spacetime.
The analogue of Euler's formula for the split-complex numbers is
$\exp(j\theta) = \cosh(\theta) + j\sinh(\theta).\,$
This can be derived from a power series expansion using the fact that the series for cosh has only even powers while that for sinh has odd powers. For all real values of the hyperbolic angle θ the split-complex number λ = exp(jθ) has modulus 1 and lies on the right branch of the unit hyperbola. Numbers such as λ have been called hyperbolic versors.
Since λ has modulus 1, multiplying any split-complex number z by λ preserves the modulus of z and represents a hyperbolic rotation (also called a Lorentz boost or a squeeze mapping). Multiplying by λ preserves the geometric structure, taking hyperbolas to themselves and the null cone to itself.
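Numerically, the modulus-preserving property of multiplication by exp(jθ) looks like this (a quick sketch using the standard library's cosh and sinh; the helper names are mine):

```python
from math import cosh, sinh

def split_mul(z, w):
    # split-complex product on (x, y) coordinate pairs
    (x, y), (u, v) = z, w
    return (x * u + y * v, x * v + y * u)

def modulus(z):
    x, y = z
    return x * x - y * y

theta = 0.7
lam = (cosh(theta), sinh(theta))        # exp(j*theta), a hyperbolic versor
assert abs(modulus(lam) - 1.0) < 1e-12  # cosh^2 - sinh^2 = 1

z = (3.0, 1.0)
boosted = split_mul(lam, z)             # a hyperbolic rotation of z
assert abs(modulus(boosted) - modulus(z)) < 1e-9  # modulus preserved
```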
The set of all transformations of the split-complex plane which preserve the modulus (or equivalently, the inner product) forms a group called the generalized orthogonal group O(1, 1). This group consists of the hyperbolic rotations, which form a subgroup denoted SO+(1, 1), combined with four discrete reflections given by
$z\mapsto\pm z$ and $z\mapsto\pm z^{*}.$
The exponential map
$\exp\colon(\mathbb R, +) \to \mathrm{SO}^{+}(1,1)$
sending θ to rotation by exp(jθ) is a group isomorphism since the usual exponential formula applies:
$e^{j(\theta+\phi)} = e^{j\theta}e^{j\phi}.\,$
If a split-complex number z does not lie on one of the diagonals, then z has a polar decomposition.
## Algebraic properties
In abstract algebra terms, the split-complex numbers can be described as the quotient of the polynomial ring R[x] by the ideal generated by the polynomial x2 − 1,
R[x]/(x2 − 1).
The image of x in the quotient is the "imaginary" unit j. With this description, it is clear that the split-complex numbers form a commutative ring with characteristic 0. Moreover if we define scalar multiplication in the obvious manner, the split-complex numbers actually form a commutative and associative algebra over the reals of dimension two. The algebra is not a division algebra or field since the null elements are not invertible. In fact, all of the nonzero null elements are zero divisors. Since addition and multiplication are continuous operations with respect to the usual topology of the plane, the split-complex numbers form a topological ring.
The algebra of split-complex numbers forms a composition algebra since
$\lVert zw \rVert = \lVert z \rVert \lVert w \rVert$ for any numbers z and w.
The class of composition algebras extends the normed algebras class which also has this composition property.
From the definition it is apparent that the ring of split-complex numbers is isomorphic to the group ring R[C2] of the cyclic group C2 over the real numbers R.
The split-complex numbers are a particular case of a Clifford algebra. Namely, they form a Clifford algebra over a one-dimensional vector space with a positive-definite quadratic form. Contrast this with the complex numbers which form a Clifford algebra over a one-dimensional vector space with a negative-definite quadratic form. (NB: some authors switch the signs in the definition of a Clifford algebra which will interchange the meaning of positive-definite and negative-definite). In mathematics, the split-complex numbers are members of the Clifford algebra $C\ell_{1,0}(\mathbb{R}) = C\ell^0_{1,1}(\mathbb{R})$ (the superscript 0 indicating the even subalgebra). This is an extension of the real numbers defined analogously to the complex numbers $\mathbb{C} = C\ell_{0,1}(\mathbb{R}) = C\ell^0_{2,0}(\mathbb{R})$.
## Matrix representations
One can easily represent split-complex numbers by matrices. The split-complex number
z = x + jy
can be represented by the matrix
$z \mapsto \begin{pmatrix}x & y \\ y & x\end{pmatrix}.$
Addition and multiplication of split-complex numbers are then given by matrix addition and multiplication. The modulus of z is given by the determinant of the corresponding matrix. In this representation, split-complex conjugation corresponds to multiplying on both sides by the matrix
$C = \begin{pmatrix}1 & 0 \\ 0 & -1\end{pmatrix} .$
For any real number a, a hyperbolic rotation by a hyperbolic angle a corresponds to multiplication by the matrix
$\begin{pmatrix}\cosh a & \sinh a \\ \sinh a & \cosh a\end{pmatrix}.$
This commutative diagram relates the action of the hyperbolic versor on D to squeeze mapping σ applied to R2
The diagonal basis for the split-complex number plane can be invoked by using an ordered pair (x, y) for $z = x + jy$ and making the mapping
$(u,v) = (x,y) \begin{pmatrix}1 & 1 \\1 & -1\end{pmatrix} = (x,y) S .$
Now the quadratic form is $u v = (x+y)(x-y) = x^2 - y^2 .$ Furthermore,
$(\cosh a, \sinh a)\begin{pmatrix}1 & 1\\1 & -1\end{pmatrix} = (e^a, e^{-a})$
so the two parametrized hyperbolas are brought into correspondence with S. The action of hyperbolic versor $e^{bj} \!$ then corresponds under this linear transformation to a squeeze mapping
$\sigma:(u,v) \mapsto (r u, v/r) ,\quad r = e^b .$
Note that in the context of 2 × 2 real matrices there are in fact many different representations of split-complex numbers. The above diagonal representation is the Jordan canonical form of the matrix representation of the split-complex numbers. Consider a split-complex number z = (x, y) given by the following matrix representation:
$Z = \begin{pmatrix}x & y \\ y & x\end{pmatrix} .$
Its Jordan canonical form is given by:
$J_{z} = \begin{pmatrix}x+y & 0 \\ 0 & x-y\end{pmatrix} ,$
where $Z = S J_{z} S^{-1} \ ,$ and
$S = \begin{pmatrix}1 & -1 \\ 1 & 1\end{pmatrix} .$
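The matrix correspondence above is easy to test without any matrix library: represent the matrix ((x, y), (y, x)) as nested tuples, write out the 2 × 2 product by hand, and compare it with the split-complex product (a sketch, with helper names of my own):

```python
def to_matrix(x, y):
    # x + jy corresponds to the symmetric matrix [[x, y], [y, x]]
    return ((x, y), (y, x))

def mat_mul(A, B):
    # plain 2x2 matrix multiplication
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def split_mul(z, w):
    (x, y), (u, v) = z, w
    return (x * u + y * v, x * v + y * u)

z, w = (2, 3), (5, -1)
# The matrix of the product equals the product of the matrices.
assert to_matrix(*split_mul(z, w)) == mat_mul(to_matrix(*z), to_matrix(*w))

# The modulus x^2 - y^2 is the determinant of the representing matrix.
A = to_matrix(2, 3)
assert A[0][0] * A[1][1] - A[0][1] * A[1][0] == 2 * 2 - 3 * 3
```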
## History
The use of split-complex numbers dates back to 1848 when James Cockle revealed his Tessarines. William Kingdon Clifford used split-complex numbers to represent sums of spins. Clifford introduced the use of split-complex numbers as coefficients in a quaternion algebra now called split-biquaternions. He called its elements "motors", a term in parallel with the "rotor" action of an ordinary complex number taken from the circle group. Extending the analogy, functions of a motor variable contrast to functions of an ordinary complex variable.
Since the early twentieth century, the split-complex multiplication has commonly been seen as a Lorentz boost of a spacetime plane. In that model, the number z = x + yj represents an event in a spatio-temporal plane, where x is measured in nanoseconds and y in Mermin’s feet. The future corresponds to the quadrant of events {z : |y| < x }, which has the split-complex polar decomposition $z = \rho e^{a j} \!$. The model says that z can be reached from the origin by entering a frame of reference of rapidity a and waiting ρ nanoseconds. The split-complex equation
$e^{aj} \ e^{bj} = e^{(a+b)j}$
expressing products on the unit hyperbola illustrates the additivity of rapidities for collinear velocities. Simultaneity of events depends on rapidity a;
$\lbrace z = \sigma j e^{aj} : \sigma \in R \rbrace$
is the line of events simultaneous with the origin in the frame of reference with rapidity a. Two events z and w are hyperbolic-orthogonal when zw* + z*w = 0. Canonical events exp(aj) and j exp(aj) are hyperbolic orthogonal and lie on the axes of a frame of reference in which the events simultaneous with the origin are proportional to j exp(aj).
In 1935 J.C. Vignaux and A. Durañona y Vedia developed the split-complex geometric algebra and function theory in four articles in Contribución a las Ciencias Físicas y Matemáticas, National University of La Plata, República Argentina (in Spanish). These expository and pedagogical essays presented the subject for broad appreciation.
In 1941 E.F. Allen used the split-complex geometric arithmetic to establish the nine-point hyperbola of a triangle inscribed in zz* = 1.
In 1956 Mieczyslaw Warmus published "Calculus of Approximations" in Bulletin de l’Académie Polonaise des Sciences (see link in References). He developed two algebraic systems, each of which he called "approximate numbers", the second of which forms a real algebra. D. H. Lehmer reviewed the article in Mathematical Reviews and observed that this second system was isomorphic to the "hyperbolic complex" numbers, the subject of this article.
In 1961 Warmus continued his exposition, referring to the components of an approximate number as midpoint and radius of the interval denoted.
## Synonyms
Different authors have used a great variety of names for the split-complex numbers. Some of these include:
• (real) tessarines, James Cockle (1848)
• (algebraic) motors, W.K. Clifford (1882)
• hyperbolic complex numbers, J.C. Vignaux (1935)
• bireal numbers, U. Bencivenga (1946)
• approximate numbers, Warmus (1956), for use in interval analysis
• countercomplex or hyperbolic numbers from Musean hypernumbers
• double numbers, I.M. Yaglom (1968), Kantor and Solodovnikov (1989), Hazewinkel (1990), Rooney (2014)
• anormal-complex numbers, W. Benz (1973)
• perplex numbers, P. Fjelstad (1986) and Poodiack & LeClair (2009)
• Lorentz numbers, F.R. Harvey (1990)
• hyperbolic numbers, G. Sobczyk (1995)
• semi-complex numbers, F. Antonuccio (1994)
• split-complex numbers, B. Rosenfeld (1997)
• spacetime numbers, N. Borota (2000)
• Study numbers, P. Lounesto (2001)
• twocomplex numbers, S. Olariu (2002)
Split-complex numbers and their higher-dimensional relatives (split-quaternions / coquaternions and split-octonions) were at times referred to as "Musean numbers", since they are a subset of the hypernumber program developed by Charles Musès.
## See also
Higher-order derivatives of split-complex numbers, obtained through a modified Cayley–Dickson construction:
In Lie theory, a more abstract generalization occurs:
Enveloping algebras and number programs:
## References and external links
• Francesco Antonuccio (1994) Semi-complex analysis and mathematical physics
• Bencivenga, Uldrico (1946) "Sulla rappresentazione geometrica delle algebre doppie dotate di modulo", Atti della Reale Accademia delle Scienze e Belle-Lettere di Napoli, Ser (3) v.2 No7. MR 0021123.
• Benz, W. (1973) Vorlesungen über Geometrie der Algebren, Springer
• N. A. Borota, E. Flores, and T. J. Osler (2000) "Spacetime numbers the easy way", Mathematics and Computer Education 34: 159-168.
• N. A. Borota and T. J. Osler (2002) "Functions of a spacetime variable", Mathematics and Computer Education 36: 231-239.
• K. Carmody, (1988) "Circular and hyperbolic quaternions, octonions, and sedenions", Appl. Math. Comput. 28:47–72.
• K. Carmody, (1997) "Circular and hyperbolic quaternions, octonions, and sedenions – further results", Appl. Math. Comput. 84:27–48.
• F. Catoni, D. Boccaletti, R. Cannata, V. Catoni, E. Nichelatti, P. Zampetti. (2008) The Mathematics of Minkowski Space-Time, Birkhäuser Verlag, Basel. Chapter 4: Trigonometry in the Minkowski plane. ISBN 978-3-7643-8613-9.
• Francesco Catoni; Dino Boccaletti; Roberto Cannata; Vincenzo Catoni, Paolo Zampetti (2011). "Chapter 2: Hyperbolic Numbers". Geometry of Minkowski Space-Time. Springer Science & Business Media. ISBN 978-3-642-17977-8.
• James Cockle (1849) On a New Imaginary in Algebra 34:37–47, London-Edinburgh-Dublin Philosophical Magazine (3) 33:435–9, link from Biodiversity Heritage Library.
• William Kingdon Clifford, Mathematical Works (1882), edited by A. W. Tucker, pp. 392, "Further Notes on Biquaternions"
• De Boer, R. (1987) "An also known as list for perplex numbers", American Journal of Physics 55(4):296.
• Fjelstadt, P. (1986) "Extending Special Relativity with Perplex Numbers", American Journal of Physics 54:416.
• Anthony A. Harkin & Joseph B. Harkin (2004) Geometry of Generalized Complex Numbers, Mathematics Magazine 77(2):118–29.
• F. Reese Harvey. Spinors and calibrations. Academic Press, San Diego. 1990. ISBN 0-12-329650-1. Contains a description of normed algebras in indefinite signature, including the Lorentz numbers.
• Hazewinkel, M. (1994) "Double and dual numbers", Encyclopaedia of Mathematics, Soviet/AMS/Kluwer, Dordrecht.
• Louis Kauffman (1985) "Transformations in Special Relativity", International Journal of Theoretical Physics 24:223–36.
• C. Musès, "Applied hypernumbers: Computational concepts", Appl. Math. Comput. 3 (1977) 211–226.
• C. Musès, "Hypernumbers II—Further concepts and computational applications", Appl. Math. Comput. 4 (1978) 45–66.
• Olariu, Silviu (2002) Complex Numbers in N Dimensions, Chapter 1: Hyperbolic Complex Numbers in Two Dimensions, pages 1–16, North-Holland Mathematics Studies #190, Elsevier ISBN 0-444-51123-7.
• Poodiack, Robert D. & Kevin J. LeClair (2009) "Fundamental theorems of algebra for the perplexes", The College Mathematics Journal 40(5):322–35.
• Rosenfeld, B. (1997) Geometry of Lie Groups Kluwer Academic Pub.
• Sobczyk, G.(1995) Hyperbolic Number Plane, also published in College Mathematics Journal 26:268–80.
• Vignaux, J.(1935) "Sobre el numero complejo hiperbolico y su relacion con la geometria de Borel", Contribucion al Estudio de las Ciencias Fisicas y Matematicas, Universidad Nacional de la Plata, Republica Argentina.
• M. Warmus (1956) "Calculus of Approximations", Bulletin de l'Academie Polonaise de Sciences, Vol. 4, No. 5, pp. 253–257, MR 0081372
• Isaak Yaglom (1968) Complex Numbers in Geometry, translated by E. Primrose from 1963 Russian original, Academic Press, pp. 18–20.
• J. Rooney (2014). "Generalised Complex Numbers in Mechanics". In Marco Ceccarelli and Victor A. Glazunov. Advances on Theory and Practice of Robots and Manipulators: Proceedings of Romansy 2014 XX CISM-IFToMM Symposium on Theory and Practice of Robots and Manipulators. Springer. doi:10.1007/978-3-319-07058-2_7. ISBN 978-3-319-07058-2.
• I.L. Kantor; A.S. Solodovnikov (1983) [1973]. Hypercomplex Numbers: An Elementary Introduction to Algebras. Springer New York. ISBN 3-540-96980-2.
|
{}
|
# OpenGL Simple 2D programming
## Recommended Posts
Hi, this is my first post here, so I apologise if it's not perfect. I'm trying to write a small 2D game in OpenGL, but I'm having trouble with how glTranslatef works. Since whenever you create a new polygon you use glTranslatef, the new position becomes (0,0,0). That's fine if you want to create a polygon, tell it to act in a certain manner and then leave it, but I need to be able to act on that polygon, check for basic collisions with other polygons, and move it around like a player. I've tried creating a polygon class which stores an origin as a vector (x,y,z), but I can't understand how to make it function correctly. Sorry if this is an unusual problem; my previous experience is with code that lets you set a certain x,y position based on how many pixels you want a player to move. Any help or links would be greatly appreciated.
##### Share on other sites
If I understand your problem correctly, you need the polygon's new coordinates after the transformation, but the OpenGL transformation functions just move the polygon to a new position but don't let you know what it is.
If that's your problem, you will either need to use unproject (which I know nothing about), or do all transformations in software, which is what I did because it's just as fast. (Quake does its transformations in software mode and works pretty fast on my Pentium 1, so why not?)
##### Share on other sites
Hi and welcome to the forums, Exomoto!
The first question is, are you using ortho mode? If not, start looking at it now, it'll help you a lot as you are working on a 2D game. Google 'glOrtho' with 'spec' and you should get all you need.
Quote (original post by Exomoto): Sorry if this is an unusual problem, but my previous experience is with code that lets you set a certain x,y position, based on how many pixels you want a player to move.
You can do something like this every frame:
glOrtho (0, 640, 0, 480, -1, 1); // change 640 / 480 as you wish
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
draw_environment();
glTranslatef (player.x, player.y, 0.0);
draw_player();
You can test collisions with player.x and player.y.
##### Share on other sites
That would work, unless you want a polygon-based collision detection and you want your polygons to rotate.
##### Share on other sites
Quote (original post by mrbig): That would work, unless you want a polygon-based collision detection and you want your polygons to rotate.
... which can also be done quite easily using simple trigonometry and keeping track of x, y and the angle of rotation [smile]
##### Share on other sites
Thanks for all the help. Most of the polygons in my game should be either octagons or rectangles, so I was just planning on using a pre-defined radius for the octagons and the x,y,height,width values for the rectangles for the collision detection. The octagons will rotate but the detection doesn't have to be perfect, just a circle area around them will suffice.
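For reference, the two tests described in this thread (a circle around each octagon, axis-aligned boxes for the rectangles) can be sketched as follows. This is an added illustration written in Python for brevity; the function and field names are made up, not from the thread:

```python
def circles_collide(x1, y1, r1, x2, y2, r2):
    """True if two circles overlap or touch: center distance <= sum of radii."""
    dx, dy = x2 - x1, y2 - y1
    # Compare squared distances so no square root is needed per test.
    return dx * dx + dy * dy <= (r1 + r2) ** 2

def rects_collide(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned rectangle overlap test; (x, y) is the lower-left corner."""
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

Because the game keeps player.x and player.y itself (as suggested above) and only hands them to glTranslatef for drawing, these checks run entirely on the game's own coordinates.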
# Find Area Enclosed by Curve
I want to find the area enclosed by the plane curve $x^{2/3}+y^{2/3}=1$. My attempt was to set $x=\cos^3t, \ y=\sin^3t$ so:$$x^{2/3}+y^{2/3}=\cos^2t+\sin^2t=1$$
Then the area is $$2A=\oint_C x\,dy-y\,dx=\int_0^{2\pi}\left(3\cos^4 t\,\sin^2 t+3\sin^4 t\,\cos^2 t\right)dt=3\int_0^{2\pi}\cos^2 t\,\sin^2 t\,dt=\frac{3\pi}{4}\implies A=\frac{3\pi}{8}$$
However, when I did a level curve plot I got the following figure:
so does "area enclosed by figure" even make sense? For the graph above, my calculator gives me $A=\frac{3\pi}{32}$.
• There are three more, one in each of the quadrants. But you might as well find the first quadrant area and multiply by $4$. – André Nicolas Jul 23 '16 at 16:01
• N.B. the curve is called an astroid. – J. M. isn't a mathematician Jul 24 '16 at 5:12
You only graphed the first-quadrant part of the curve. There are $3$ other parts, obtained by reflections in the axes. To see this, note that $(x,y)$ is on the curve if and only if $(-x,y)$ is on the curve if and only if $\dots$.
The area of the region they enclose is $\frac{3\pi}{8}$. Your calculation is correct. If you only want the first quadrant area (but that is not what is asked for) divide by $4$, or integrate from $0$ to $\pi/2$ instead of $0$ to $2\pi$.
• I see, thank you. I just noticed that wolfram alpha plot I was using was an "implicit plot", so I guess it didn't plot the entire contour. – user124910 Jul 23 '16 at 16:18
• @user124910: You are welcome. Plotting programs can mislead in various ways. – André Nicolas Jul 23 '16 at 16:22
In simple cartesian coordinates, through the substitution $x=z^{3/2}$ and Euler's Beta function, $$A = \int_{0}^{1}(1-x^{2/3})^{3/2}\,dx = \frac{3}{2}\int_{0}^{1}z^{1/2}(1-z)^{3/2}\,dz=\frac{3}{2}\,B\left(\frac{3}{2},\frac{5}{2}\right)\tag{1}$$ hence: $$A = \frac{3\,\Gamma\left(\frac{3}{2}\right)\Gamma\left(\frac{5}{2}\right)}{2\,\Gamma(4)}=\color{red}{\frac{3\pi}{32}}\tag{2}$$ as wanted.
Notice that we have symmetry about the $y$ and $x$ axes. Hence we can use the formula $$y=(1-x^{2/3})^{3/2}$$ which gives the top half, integrate from $x=0$ to $1$, and then multiply by $4$, exploiting the symmetry. Hence we have $$A=4\int_0^1(1-x^{2/3})^{3/2}dx$$ Letting $x=\cos^3(u)$, $dx=-3\cos^2(u)\sin(u)du$, we have \begin{align} A &=4\cdot 3\int_0^{\pi/2}\sin^4(u)\cos^2(u)du \\ &= 4\cdot\frac{3\pi}{32} \end{align} The first-quadrant piece is $3\pi/32$, the value your calculator gave, and accounting for all four pieces yields $A=3\pi/8$.
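A quick numerical check (an addition here, not part of the original answers) evaluates the Green's-theorem integral for the full curve with the parametrization $x=\cos^3 t$, $y=\sin^3 t$:

```python
import numpy as np

# Parametrize the full astroid: x = cos^3 t, y = sin^3 t, 0 <= t <= 2*pi.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
x, y = np.cos(t) ** 3, np.sin(t) ** 3
dxdt = -3.0 * np.cos(t) ** 2 * np.sin(t)
dydt = 3.0 * np.sin(t) ** 2 * np.cos(t)

# Green's theorem: A = (1/2) * closed integral of (x dy - y dx),
# evaluated here with the composite trapezoidal rule.
f = x * dydt - y * dxdt
area = 0.5 * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

print(area)  # ~ 1.1781 = 3*pi/8; one quadrant is a quarter of that, 3*pi/32
```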
# Understanding the Research Collaboration During COVID-19 Pandemic
Date:
The research project was presented at the INFORMS Annual Meeting 2020 in a session titled "Data and Network Science".
Authors: Sulyun Lee, Kang Zhao, and Ning Li
Abstract: After the first outbreak of coronavirus (COVID-19) in Wuhan, China, in December 2019, the virus spread globally across 188 countries. The pandemic has attracted many researchers' attention because of the novelty of the virus. In this study, we focus on researchers' collaborations on academic papers related to COVID-19 published in various journals across countries. By constructing the collaboration networks of the authors of those papers, we identify newly emerging collaboration patterns among them. More specifically, we focus on the diversity of the collaborating authors in terms of their research fields, cultures, and academic prestige, to find the factors that contribute to the papers' academic and public impacts.
# Digital Waveguides: Discrete Wave Equation
## Wave Equation for Ideal Strings
The ideal string results in an oscillation without losses. The differential wave equation for this process is defined as follows. The velocity $c$ determines the propagation speed of the wave and thus the frequency of the oscillation.
\begin{equation*} \frac{\partial^2 y}{\partial t^2} = c^2 \frac{\partial^2 y}{\partial x^2} \end{equation*}
A solution of the differential equation without losses was given by d'Alembert (1746). The oscillation is composed of two waves: one right-traveling and one left-traveling component.
\begin{equation*} y(x,t) = y^+ (x-ct) + y^- (x+ct) \end{equation*}
• $y^+$ = right-traveling wave
• $y^-$ = left-traveling wave
## Tuning the String
The velocity $c$ depends on the tension $K$ and the mass density $\epsilon$ of the string:
\begin{equation*} c = \sqrt{\frac{K}{\epsilon}} = \sqrt{\frac{K}{\rho S}} \end{equation*}
With tension $K$, cross-sectional area $S$ and density $\rho$ in ${\frac{g}{cm^3}}$.
Frequency $f$ of the vibrating string depends on the velocity and the string length:
\begin{equation*} f = \frac{c}{2 L} \end{equation*}
## Make it Discrete
For an implementation in digital systems, both time and space have to be discretized. This is the discrete version of the above introduced solution:
\begin{equation*} y(m,n) = y^+ (m,n) + y^- (m,n) \end{equation*}
For the time, this discretization is bound to the sampling frequency $f_s$. Spatial sample distance $X$ depends on sampling-rate $f_s = \frac{1}{T}$ and velocity $c$.
• $t = nT$
• $x = mX$
• $X = cT$
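To make the discretization concrete, here is a minimal lossless waveguide sketch (an added illustration, not part of the original text): two rails of $M$ samples each, with inverting reflections modeling rigid terminations. The state repeats every $2M$ samples, so the fundamental is $f = f_s/(2M)$, which agrees with $f = c/(2L)$ once $L = MX$ and $X = cT$ are substituted.

```python
import numpy as np

def waveguide_step(right, left):
    """One sample of a lossless digital waveguide with rigid ends.

    right[m] holds the right-traveling rail y+(m, n), left[m] the
    left-traveling rail y-(m, n); both ends reflect with gain -1.
    """
    new_right = np.empty_like(right)
    new_left = np.empty_like(left)
    new_right[1:] = right[:-1]   # propagate one spatial step to the right
    new_left[:-1] = left[1:]     # propagate one spatial step to the left
    new_right[0] = -left[0]      # inverting reflection at the left end
    new_left[-1] = -right[-1]    # inverting reflection at the right end
    return new_right, new_left

M = 16                           # delay-line length in samples
rng = np.random.default_rng(0)
right = rng.standard_normal(M)
left = rng.standard_normal(M)
r0, l0 = right.copy(), left.copy()

for _ in range(2 * M):           # one fundamental period: 2*M samples
    right, left = waveguide_step(right, left)

# The displacement y(m, n) = y+(m, n) + y-(m, n) is back at its start.
```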
# Contents
## Idea
In quantum physics, the quantum adiabatic theorem for a parameterized quantum system says that under a sufficiently slow motion of the external parameters along a path $\gamma \,\colon\, [0,1] \to P$ in parameter space $P$, eigenstates for mutually commuting quantum observables of the quantum system at $\gamma(0)$ will approximately evolve to eigenstates at $\gamma(1)$, even if the corresponding eigenvalues change significantly.
For adiabatic transport along a loop, $\gamma(0) = \gamma(1)$, this implies that the eigenspaces of the system are acted on by unitary operators, which (at least for multiplicity/eigen-dimension 1) are known as Berry phases.
The adiabatic parameter-action on ground states of (topologically ordered) quantum materials is one model for quantum computation, see at adiabatic quantum computation. If these adiabatic quantum gates depend only on the isotopy class of the parameter path, then one speaks of topological quantum computation (made explicit in Arovas, Schrieffer, Wilczek & Zee 1985, p. 1, Freedman, Kitaev, Larsen & Wang 2003, pp. 6 and Nayak, Simon, Stern & Freedman 2008, §II.A.2 (p. 6), see also at braiding of anyonic defects).
The following graphic is meant to illustrate this general idea for the case of adiabatic transformation along the braiding of nodal points in the Brillouin torus of semi-metal quantum materials (see the discussion at braid group statistics of anyonic band nodes):
(graphics from SS22)
## References
### General
The original formulation:
Making explicit the unitary action by non-abelian Berry phases:
### In condensed matter theory
Discussion in solid state physics and in view of gapped topological phases of matter (quantum Hall effect, topological insulators):
Review:
# Chapter 13 VEC and VAR Models
rm(list=ls()) #Removes all items in Environment!
library(tseries) # for adf.test()
library(dynlm) #for function dynlm()
library(vars) # for function VAR()
library(nlWaldTest) # for the nlWaldtest() function
library(lmtest) #for coeftest() and bptest().
library(broom) #for glance() and tidy()
library(PoEdata) #for PoE4 datasets
library(car) #for hccm() robust standard errors
library(sandwich)
library(knitr) #for kable()
library(forecast)
New package: vars (Pfaff 2013).
When there is no good reason to assume a one-way causal relationship between two time series variables, we may think of their relationship as one of mutual interaction. The concept of "vector," as in vector error correction, refers to the number of series in such a model.
## 13.1 VAR and VEC Models
Equations \ref{eq:var1defA13} and \ref{eq:var1defB13} show a generic vector autoregression model of order 1, VAR(1), which can be estimated if the series are both I(0). If they are I(1), the same equations need to be estimated in first differences.
$$$y_{t}=\beta_{10}+\beta_{11}y_{t-1}+\beta_{12}x_{t-1}+\nu_{t}^y \label{eq:var1defA13}$$$ $$$x_{t}=\beta_{20}+\beta_{21}y_{t-1}+\beta_{22}x_{t-1}+\nu_{t}^x \label{eq:var1defB13}$$$
If the two variables in Equations \ref{eq:var1defA13} and \ref{eq:var1defB13} and are cointegrated, their cointegration relationship should be taken into account in the model, since it is valuable information; such a model is called vector error correction. The cointegration relationship is, remember, as shown in Equation \ref{eq:gencointrelation13}, where the error term has been proven to be stationary.
$$$y_{t}=\beta_{0}+\beta_{1}x_{t}+e_{t} \label{eq:gencointrelation13}$$$
## 13.2 Estimating a VEC Model
The simplest method is a two-step procedure. First, estimate the cointegrating relationship given in Equation \ref{eq:gencointrelation13} and create the lagged residual series $$\hat{e} _{t-1}=y_{t-1}-b_{0}-b_{1}x_{t-1}$$. Second, estimate Equations \ref{eq:ystep2VEC13} and \ref{eq:xstep2VEC13} by OLS.
$$$\Delta y_{t}=\alpha_{10}+\alpha_{11}\hat{e}_{t-1}+\nu_{t}^y \label{eq:ystep2VEC13}$$$ $$$\Delta x_{t}=\alpha_{20}+\alpha_{21}\hat{e}_{t-1}+\nu_{t}^x \label{eq:xstep2VEC13}$$$
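The two-step logic can also be sketched on synthetic data (a Python illustration using only numpy, added here; the simulated series and coefficient values are made up for the sketch and are not from the text):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic cointegrated pair: x is a random walk, y = 1 + 0.5*x + u,
# where u is a stationary AR(1) disturbance.
x = np.cumsum(rng.standard_normal(n))
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.3 * u[t - 1] + rng.standard_normal()
y = 1.0 + 0.5 * x + u

# Step 1: OLS of the cointegrating regression y_t = b0 + b1*x_t + e_t.
X1 = np.column_stack([np.ones(n), x])
b0, b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
ehat = y - b0 - b1 * x

# Step 2: OLS of dy_t on the lagged residual ehat_{t-1}.
dy = np.diff(y)
X2 = np.column_stack([np.ones(n - 1), ehat[:-1]])
a10, a11 = np.linalg.lstsq(X2, dy, rcond=None)[0]

print(b1, a11)  # b1 near 0.5; a11 negative: y adjusts back to equilibrium
```

A negative error correction coefficient is what cointegration implies: when $y$ is above its cointegrating level, it tends to fall back toward it.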
The following example uses the dataset $$gdp$$, which includes GDP series for Australia and the USA for the period 1970:1 to 2000:4. First, we determine the order of integration of the two series.
data("gdp", package="PoEdata")
gdp <- ts(gdp, start=c(1970,1), end=c(2000,4), frequency=4)
ts.plot(gdp[,"usa"],gdp[,"aus"], type="l",
lty=c(1,2), col=c(1,2))
legend("topleft", border=NULL, legend=c("USA","AUS"),
lty=c(1,2), col=c(1,2))
Figure 13.1 represents the two series in levels, revealing a common trend and, therefore, suggesting that the series are nonstationary.
adf.test(gdp[,"usa"])
##
## Augmented Dickey-Fuller Test
##
## data: gdp[, "usa"]
## Dickey-Fuller = -0.9083, Lag order = 4, p-value = 0.949
## alternative hypothesis: stationary
adf.test(gdp[,"aus"])
##
## Augmented Dickey-Fuller Test
##
## data: gdp[, "aus"]
## Dickey-Fuller = -0.6124, Lag order = 4, p-value = 0.975
## alternative hypothesis: stationary
adf.test(diff(gdp[,"usa"]))
##
## Augmented Dickey-Fuller Test
##
## data: diff(gdp[, "usa"])
## Dickey-Fuller = -4.293, Lag order = 4, p-value = 0.01
## alternative hypothesis: stationary
adf.test(diff(gdp[,"aus"]))
##
## Augmented Dickey-Fuller Test
##
## data: diff(gdp[, "aus"])
## Dickey-Fuller = -4.417, Lag order = 4, p-value = 0.01
## alternative hypothesis: stationary
The stationarity tests indicate that both series are I(1). Let us now test them for cointegration, using Equations \ref{eq:au13} and \ref{eq:eau13}.
$$$aus_{t}=\beta_{1}usa_{t}+e_{t} \label{eq:au13}$$$ $$$\hat{e}_{t}=aus_{t}-\beta_{1}usa_{t} \label{eq:eau13}$$$
cint1.dyn <- dynlm(aus~usa-1, data=gdp)
kable(tidy(cint1.dyn), digits=3,
caption="The results of the cointegration equation 'cint1.dyn'")
Table 13.1: The results of the cointegration equation ‘cint1.dyn’
term estimate std.error statistic p.value
usa 0.985 0.002 594.787 0
ehat <- resid(cint1.dyn)
cint2.dyn <- dynlm(d(ehat)~L(ehat)-1)
summary(cint2.dyn)
##
## Time series regression with "ts" data:
## Start = 1970(2), End = 2000(4)
##
## Call:
## dynlm(formula = d(ehat) ~ L(ehat) - 1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.4849 -0.3370 -0.0038 0.4656 1.3507
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## L(ehat) -0.1279 0.0443 -2.89 0.0046 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.598 on 122 degrees of freedom
## Multiple R-squared: 0.064, Adjusted R-squared: 0.0564
## F-statistic: 8.35 on 1 and 122 DF, p-value: 0.00457
Our test rejects the null of no cointegration, meaning that the series are cointegrated. With cointegrated series we can construct a VEC model to better understand the causal relationship between the two variables.
vecaus<- dynlm(d(aus)~L(ehat), data=gdp)
vecusa <- dynlm(d(usa)~L(ehat), data=gdp)
tidy(vecaus)
term estimate std.error statistic p.value
(Intercept) 0.491706 0.057909 8.49094 0.000000
L(ehat) -0.098703 0.047516 -2.07727 0.039893
tidy(vecusa)
term estimate std.error statistic p.value
(Intercept) 0.509884 0.046677 10.923715 0.000000
L(ehat) 0.030250 0.038299 0.789837 0.431168
The coefficient on the error correction term ($$\hat{e}_{t-1}$$) is significant for Australia, suggesting that changes in the US economy do affect the Australian economy; the error correction coefficient in the US equation is not statistically significant, suggesting that changes in Australia do not influence the American economy. To interpret the sign of the error correction coefficient, one should remember that $$\hat{e}_{t-1}$$ measures the deviation of the Australian economy from its cointegrating level of $$0.985$$ of the US economy (see Equations \ref{eq:au13} and \ref{eq:eau13} and the value of $$\beta_{1}$$ in Table 13.1).
## 13.3 Estimating a VAR Model
The VAR model can be used when the variables under study are I(1) but not cointegrated. The model is the one in Equations \ref{eq:var1defA13} and \ref{eq:var1defB13}, but in differences, as specified in Equations \ref{eq:VARa13} and \ref{eq:VARb13}.
$$$\Delta y_{t}=\beta_{11}\Delta y_{t-1}+\beta_{12}\Delta x_{t-1}+\nu_{t}^{\Delta y} \label{eq:VARa13}$$$ $$$\Delta x_{t}=\beta_{21}\Delta y_{t-1}+\beta_{22}\Delta x_{t-1}+\nu_{t}^{\Delta x} \label{eq:VARb13}$$$
Let us look at the income-consumption relationship based on the $$fred$$ dataset, where consumption and income are already in logs and the period is 1960:1 to 2009:4. Figure 13.2 shows that the two series both have a trend.
data("fred", package="PoEdata")
fred <- ts(fred, start=c(1960,1),end=c(2009,4),frequency=4)
ts.plot(fred[,"c"],fred[,"y"], type="l",
lty=c(1,2), col=c(1,2))
legend("topleft", border=NULL, legend=c("c","y"),
lty=c(1,2), col=c(1,2))
Are the two series cointegrated?
Acf(fred[,"c"])
Acf(fred[,"y"])
adf.test(fred[,"c"])
##
## Augmented Dickey-Fuller Test
##
## data: fred[, "c"]
## Dickey-Fuller = -2.62, Lag order = 5, p-value = 0.316
## alternative hypothesis: stationary
adf.test(fred[,"y"])
##
## Augmented Dickey-Fuller Test
##
## data: fred[, "y"]
## Dickey-Fuller = -2.291, Lag order = 5, p-value = 0.454
## alternative hypothesis: stationary
adf.test(diff(fred[,"c"]))
##
## Augmented Dickey-Fuller Test
##
## data: diff(fred[, "c"])
## Dickey-Fuller = -4.713, Lag order = 5, p-value = 0.01
## alternative hypothesis: stationary
adf.test(diff(fred[,"y"]))
##
## Augmented Dickey-Fuller Test
##
## data: diff(fred[, "y"])
## Dickey-Fuller = -5.775, Lag order = 5, p-value = 0.01
## alternative hypothesis: stationary
cointcy <- dynlm(c~y, data=fred)
ehat <- resid(cointcy)
adf.test(ehat)
##
## Augmented Dickey-Fuller Test
##
## data: ehat
## Dickey-Fuller = -2.562, Lag order = 5, p-value = 0.341
## alternative hypothesis: stationary
Figure 13.3 shows a long serial correlation sequence; therefore, I will let $$R$$ calculate the lag order in the ADF test. As the results of the above ADF and cointegration tests show, the series are both I(1), but they fail the cointegration test (the series are not cointegrated). (Please remember that the adf.test function uses a constant and trend in the test equation; therefore, the critical values are not the same as in the textbook. However, the results of the tests should be the same most of the time.)
library(vars)
Dc <- diff(fred[,"c"])
Dy <- diff(fred[,"y"])
varmat <- as.matrix(cbind(Dc,Dy))
varfit <- VAR(varmat) # VAR() from package vars
summary(varfit)
##
## VAR Estimation Results:
## =========================
## Endogenous variables: Dc, Dy
## Deterministic variables: const
## Sample size: 198
## Log Likelihood: 1400.444
## Roots of the characteristic polynomial:
## 0.344 0.343
## Call:
## VAR(y = varmat)
##
##
## Estimation results for equation Dc:
## ===================================
## Dc = Dc.l1 + Dy.l1 + const
##
## Estimate Std. Error t value Pr(>|t|)
## Dc.l1 0.215607 0.074749 2.88 0.0044 **
## Dy.l1 0.149380 0.057734 2.59 0.0104 *
## const 0.005278 0.000757 6.97 4.8e-11 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##
## Residual standard error: 0.00658 on 195 degrees of freedom
## Multiple R-Squared: 0.12, Adjusted R-squared: 0.111
## F-statistic: 13.4 on 2 and 195 DF, p-value: 3.66e-06
##
##
## Estimation results for equation Dy:
## ===================================
## Dy = Dc.l1 + Dy.l1 + const
##
## Estimate Std. Error t value Pr(>|t|)
## Dc.l1 0.475428 0.097326 4.88 2.2e-06 ***
## Dy.l1 -0.217168 0.075173 -2.89 0.0043 **
## const 0.006037 0.000986 6.12 5.0e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##
## Residual standard error: 0.00856 on 195 degrees of freedom
## Multiple R-Squared: 0.112, Adjusted R-squared: 0.103
## F-statistic: 12.3 on 2 and 195 DF, p-value: 9.53e-06
##
##
##
## Covariance matrix of residuals:
## Dc Dy
## Dc 0.0000432 0.0000251
## Dy 0.0000251 0.0000733
##
## Correlation matrix of residuals:
## Dc Dy
## Dc 1.000 0.446
## Dy 0.446 1.000
Function VAR(), which is part of the package vars (Pfaff 2013), accepts the following main arguments: y= a matrix containing the endogenous variables in the VAR model, p= the desired lag order (default is 1), and exogen= a matrix of exogenous variables. (VAR is a more powerful instrument than I imply here; please type ?VAR() for more information.) The results of a VAR model are more useful in analysing the time response to shocks in the variables, which is the topic of the next section.
## 13.4 Impulse Responses and Variance Decompositions
Impulse responses are best represented in graphs showing the responses of a VAR endogenous variable in time.
impresp <- irf(varfit)
plot(impresp)
The interpretation of Figure 13.4 is straightforward: an impulse (shock) to $$Dc$$ at time zero has large effects in the next period, but the effects become smaller and smaller as time passes. The dotted lines show the 95 percent interval estimates of these effects. The VAR function prints the values corresponding to the impulse response graphs.
plot(fevd(varfit)) # fevd() is in package vars
Forecast variance decomposition estimates the contribution of a shock in each variable to the response in both variables. Figure 13.5 shows that almost 100 percent of the variance in $$Dc$$ is caused by $$Dc$$ itself, while only about 80 percent of the variance in $$Dy$$ is caused by $$Dy$$ and the rest is caused by $$Dc$$. The $$R$$ function fevd() in package vars performs forecast variance decomposition.
### References
Pfaff, Bernhard. 2013. Vars: VAR Modelling. https://CRAN.R-project.org/package=vars.
# AAS Lab Report

Atomic absorption spectrometry (AAS) is an easy, high-throughput, and inexpensive technique used primarily to analyze elements in solution. It is widely used for trace analysis of elements in matrices ranging from foods, waste waters, and geological samples to pharmaceutical and biological materials; as such, AAS is applied in food and beverage, water, clinical, and pharmaceutical analysis, and in mining operations, for example to determine the percentage of precious metal in rocks.

The process of atomic absorption spectroscopy involves two steps:

1. Atomization of the sample. The sample, either a liquid or a solid, is atomized in either a flame or a graphite furnace.
2. Absorption of radiation from a light source by the free atoms.

Procedure: the AAS Hitachi Z-2000 instrument supplied by Chemopharm Company is turned on and the instrument software is selected. The flame and the hollow cathode lamp, which had been set up by the lab assistant, are turned on. A 5 g sample was prepared in 50 mL and then diluted (5 mL in 50 mL). The report gives the Fe and Mn content of leaf tissue obtained by two methods, a standard calibration curve and the method of standard additions, and discusses which method is more desirable and why.

Notes: trace metals in atmospheric deposition cannot be determined from a simple consideration of global mass balance; rather, accurate data on net air or sea fluxes for specific regions are needed. Excessive levels of nitrate in drinking water have caused serious illness and sometimes death.
Inorganic Chemistry absorption of radiation from a light source by the free atoms contain as much technical information as.. Extremely low exist in tap water samples showed that heavy metals really exist tap. Analysis, Sampling & Monitoring ) involves two steps: 1 flame and cathode. Flame or a solid, is atomized in either a Liquid or a solid, is in! Exist in tap water samples as we test the air during this part of the studies have... File a report when a problem encountered with the existing one section, the compressor of refrigerator is! Youtube channel understand and repeat your experiment using just aas lab report uitm report a light source by the atoms! Studies that have been done that use tap water samples by Chemopharm Company turned! Rarely file a report when a problem encountered with the computer equipment in the Laboratory, the of. Values were nearly twice that Equilibrium Uitm [ d2nvx1p9er4k ] have been done that use tap water samples that! From scratch but also to help you with ES Lab report – nitrate test the during. Liquid or a solid, is atomized in either a Liquid or a graphite furnace,! Is a project that manages chemical item and its quantity are as follows: Excessive levels of nitrate in water... As PDF for free in mining operations, such as fill-in the information through or... Especially students rarely file a report when a problem encountered with the existing one pure to! Fill-In the information through form or using verbal communication experiments were extremely low the air during this part the... Software is selected paper for you from scratch but also to help you with ES Lab Vapor... The current system is switched on and the instrument software is selected technical information as possible supplied by Company! Is also used in food and beverage, water, clinical, and inexpensive technology primarily. Metals really exist in tap water samples showed that heavy metals really exist in tap water samples respond soon! 
Levels of nitrate in drinking water have caused serious illness and sometimes death manages item. Prepared 5 gr of sample in 50 ml and then dilution 5 ml in 50 and! Or a graphite furnace follows: Excessive levels of nitrate in drinking water have serious. Sample in 50 ml radiation from a light source by the free atoms report utilized two methods of analyzing Fe... While the observed values were nearly twice that Fe and Mn content in leaf tissue graphite furnace are... Atomized in either a flame or a graphite furnace techniques at on steel... That manages chemical item and its quantity Download file is a project that chemical... A & a Scientific Resources Malaysia ’ s 1st University Affiliated environmental Laboratory,,. Be able to fully understand and repeat your experiment using just your report my channel! Is a project that manages chemical item and its quantity is like the Google for academics science. And hollow aas lab report uitm lamp that had been set up by Lab assistant is turned.... I have prepared 5 gr of sample in 50 ml from 81 – 83°C while the values! Set up by Lab assistant is turned on and the instrument software selected. Set up by Lab assistant is turned on one page in all the users especially students rarely file report. While the observed values were nearly twice that Sampling & Monitoring '' Please fill this form, will. Is switched on and the instrument software is selected PDF for free experiment! Both experiments were extremely low with the existing one atomized in either a Liquid or a solid, is in! Laboratory, Analysis, Sampling & Monitoring water, clinical, and Inorganic Chemistry utilized two methods of the. Water have caused serious illness and sometimes death as to determine the percentage of precious metal rocks! ) is an easy, high-throughput, and pharmaceutical Analysis more than a total of one page in all )! 
Introductionof PROJECTSYSTEM science Lab Inventory system is a project that manages chemical item and its quantity have caused serious and. Lab assistant is turned on ) involves two steps: 1 exist in water... As we test the air during this part of the studies that have done! D2Nvx1P9Er4K ] in mining operations, such as fill-in the information through form or using verbal communication was aas lab report uitm the! At on mild steel plate aas lab report uitm either a flame or a solid, atomized... Based on the experiment, at this section, the compressor of refrigerator system is a project manages. And pharmaceutical Analysis Lab & Consultant - a & a Scientific Resources Malaysia ’ s 1st University Affiliated Laboratory! Hollow cathode lamp that had been set up by Lab assistant is turned on and instrument... Report when a problem encountered with the computer equipment in the Laboratory 1st University environmental...
# Example laplace equation spherical coordinates
## artial Home Lehigh University
µ 2 R P V RP µ San Francisco State University. Laplace's equation in cylindrical coordinates and Bessel's equation (I): 1 solution by separation of variables. Laplace's equation is a key equation in, the usual Cartesian coordinate system. For example, wave equation in the polar coordinate system. Recall that Laplace's equation in R2 in terms of the usual).
Laplace’s Equation in Spherical Polar Co ordinates C. W. David (Dated: January 23, 2001) I. We start with the primitive de nitions Next, we have (as an example) In spherical coordinates, Elliptic-cylinder coordinates and prolate spheroidal coordinates are examples in which Laplace's equation is separable [2].
The general heat conduction equation in cylindrical coordinates can Heat Equation in Cylindrical and Spherical coordinates and using the Laplace Coordinate Systems and Examples of the For example, substitute it into the rst equation: x= ak 1 The basic idea behind spherical coordinates is that a point
Separation of Variables in Laplace's Equation in Cylindrical Coordinates. Your text's discussions of solving Laplace's Equation by separation of variables. Examples: solid state device sim (Laplace's equation). Combinations of these: the equation in spherical coordinates is $\frac{\partial u}{\partial t} = D\left(\frac{\partial^{2} u}{\partial r^{2}} + \frac{2}{r}\frac{\partial u}{\partial r}\right)$.
Solutions of Laplace’s equation in 3d example, let’s take the In spherical polar coordinates Laplace’s equation takes the form Laplace's equation: and astronomer Pierre-Simon Laplace (1749–1827). Laplace’s equation states that the sum of recast in these coordinates; for example,
the usual Cartesian coordinate system. For example, wave equation in the polar coordinate system. Recall that Laplace’s equation in R2 in terms of the usual Wolfram Community forum discussion about Solving Laplace equation in Spherical coordinates. Stay on top of important topics and build connections by joining Wolfram
The question is not very clear. Do you mean that whenever we use spherical polar coordinates, why do we use Laplace equations? Or do you mean, whenever we use Laplace In spherical coordinates, Elliptic-cylinder coordinates and prolate spheroidal coordinates are examples in which Laplace's equation is separable [2].
Math 241 Laplace equation in polar coordinates
Laplace's Equation in Spherical Polar Coordinates. 9/12/1999 · Using point Gauss–Seidel relaxation as an example, Laplace's equation in cylindrical coordinates with is an angular one (in spherical coordinates, Math 105b: Laplace's equation in spherical coordinates 3. Our book only considers the special case when u is independent of (the case of "azimuthal symmetry").).
integration Laplace's Equation in Spherical Coordinates
Physics 116C Helmholtz’s and Laplace’s Equations in. connection to laplacian in spherical coordinates (chapter 13) we might often encounter the laplace equation and spherical coordinates might be the most convenient, derivation of the laplace-operator: derivation of coordinates by partial derivative 4- spherical coordinates derivative of laplace in polar coordinates).
The analytical solution of the Laplace equation with the
Math 241 Laplace equation in polar coordinates. Laplace's equation • Separation of variables – two examples • Laplace's equation in polar coordinates – derivation of the explicit form. Separation of variables in Laplace's equation in cylindrical coordinates: your text's discussions of solving Laplace's equation by separation of variables in).
Laplace’s Equation in Spherical Polar Co ordinates
4 Spherical and circular coordinates marie.ph.surrey.ac.uk. Consider Laplace's equation in polar coordinates; Laplace's equation in polar coordinates, an example? By the properties of Laplace's equation again. 1 Chapter 3. Boundary-value problems in electrostatics: spherical and cylindrical geometries. 3.1 Laplace equation in spherical coordinates (the spherical coordinate). Derivation of the Laplace-Operator; Derivation of Laplace's equation, University of Manitoba. Figure 4: schematic cross-section through the sphere studied in our example of Laplace's equation in 3d, with spherical polar coordinates; the central sphere of... Examples: solid state device sim (Laplace's equation). Combinations of these: the equation in spherical coordinates is $\frac{\partial u}{\partial t} = D\left(\frac{\partial^{2} u}{\partial r^{2}} + \frac{2}{r}\frac{\partial u}{\partial r}\right)$. The wave equation on a disk: changing to polar coordinates. Example: The Laplacian in Polar Coordinates, Ryan C. Daileda, Trinity University, Partial Differential Equations. Consider Laplace's equation in polar coordinates; Laplace's equation in polar coordinates, an example? By the properties of Laplace's equation again.
3 Laplace’s Equation The Laplace equation is one of the most fundamental differential equations in all of mathematics, For example, we could choose to For coordinates that are not Suppose is defined on and within some spherical surface and satisfies the Laplace The Laplace equation is the basic example of
Chapter 5: Electroquasistatic Put - in front of a word you want to leave out. For example, 5.9 Three solutions to Laplace's equation in spherical coordinates Laplace’s Equation in Spherical Coordinates : coordinates by giving some more examples. Solutions to Laplace’s Equations- III .
LAPLACE’S EQUATION IN SPHERICAL COORDINATES: EXAMPLES 1 2 Example 2. We saw that the coefficients A l and B l can be found by working out integrals, but in some LAPLACE’S EQUATION IN SPHERICAL COORDINATES: EXAMPLES 1 2 Example 2. We saw that the coefficients A l and B l can be found by working out integrals, but in some
LAPLACE’S EQUATION IN SPHERICAL COORDINATES: EXAMPLES 1 2 Example 2. We saw that the coefficients A l and B l can be found by working out integrals, but in some Laplace’s Equation in Cylindrical Coordinates and Bessel’s Equation (I) 1 Solution by separation of variables Laplace’s equation is a key equation in
Math 241: Laplace equation in polar coordinates; consequences and properties D. DeTurck University of Pennsylvania October 6, 2012 D. DeTurck Math 241 002 2012C Separation of Variables in Laplace's Equation in Cylindrical Coordinates Your text’s discussions of solving Laplace’s Equation by separation of variables in
What is the fundamental solution of Laplace's What's the Laplace's equation in spherical coordinates? What are few examples of electronic devices whose 3 Laplace’s Equation The Laplace equation is one of the most fundamental differential equations in all of mathematics, For example, we could choose to
Solutions of Laplace’s equation in 3d
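Since every fragment in this listing circles around the same formula, it may help to state it once explicitly. In spherical polar coordinates $(r,\theta,\varphi)$ (physics convention, with $\theta$ the polar angle), Laplace's equation $\nabla^{2}u=0$ reads:

```latex
\nabla^{2}u
  = \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial u}{\partial r}\right)
  + \frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial u}{\partial\theta}\right)
  + \frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}u}{\partial\varphi^{2}}
  = 0.
```

For azimuthally symmetric problems ($u$ independent of $\varphi$), separation of variables gives $u(r,\theta)=\sum_{l}\left(A_{l}r^{l}+B_{l}r^{-(l+1)}\right)P_{l}(\cos\theta)$, which is where the coefficients $A_l$ and $B_l$ mentioned in the examples come from.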
# Homework Help: Applying newtons laws (intro to physics problem!)
1. Sep 25, 2009
### natty210
A 20000 rocket has a rocket motor that generates 3.0×105 of thrust.
What is the rocket's initial upward acceleration?
At an altitude of 5.0 km the rocket's acceleration has increased to 6.0 m/s^2 . What mass of fuel has it burned?
2. Sep 26, 2009
### CompuChip
There are reasons for using units in physics, most notably that we know what kind of quantities we are talking about.
Let's assume 20000 is the mass in kilograms, and by 3.0×105 you mean 3.0×10^5 (rather than 315), and the corresponding unit is Newtons.
Then you are given the mass and upward force. What other forces act? What is the relation between force, mass and acceleration?
For the second question, let's assume that the thrust remains equal. So now the acceleration and force are given and you are asked for the mass. Again: what are the forces acting and what physical law do you know?
(By the way, when posting this question, you should have gotten a template in the posting form. May I be so bold as to inquire why you didn't use it?)
3. Sep 26, 2009
### Bill Foster
Thrust is the force. Use the basic equation $$F=ma$$ to find acceleration (don't forget the influence of gravity).
Since the thrust is assumed to be constant, then you can use that equation again to find the new mass of the rocket+fuel at the higher acceleration.
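To see where those hints lead, here is a quick sketch of the arithmetic (not from the thread; it assumes the units CompuChip suggested — mass 20000 kg, thrust 3.0×10^5 N — and a constant g = 9.8 m/s²):

```python
# Part 1: Newton's second law with weight included.
F = 3.0e5      # thrust, N (assumed units)
m0 = 20000.0   # initial mass, kg (assumed units)
g = 9.8        # gravitational acceleration, m/s^2

# Net upward force is thrust minus weight, so a = F/m - g.
a0 = F / m0 - g
print(a0)  # initial upward acceleration: 5.2 m/s^2

# Part 2: same thrust, but now a = 6.0 m/s^2. Solve F - m*g = m*a for m.
a1 = 6.0
m1 = F / (a1 + g)          # remaining mass of rocket + fuel, kg
fuel_burned = m0 - m1
print(m1, fuel_burned)     # fuel burned is roughly 1.0e3 kg
```

This treats g as constant; at 5 km altitude g is smaller by well under 0.2%, which doesn't change the answer at this precision.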
# [SOLVED]Heptagon Challenge
#### anemone
##### MHB POTW Director
Staff member
Let $P_1P_2P_3P_4P_5P_6P_7,\,Q_1Q_2Q_3Q_4Q_5Q_6Q_7,\,R_1R_2R_3R_4R_5R_6R_7$ be regular heptagons with areas $S_P,\,S_Q$ and $S_R$ respectively. Let $P_1P_2=Q_1Q_3=R_1R_4$. Prove that $\dfrac{1}{2}<\dfrac{S_Q+S_R}{S_P}<2-\sqrt{2}$
#### anemone
##### MHB POTW Director
Staff member
\begin{tikzpicture}
\draw[thick] (0,0) circle (3cm);
\coordinate[label=left:$P_1$] (A) at (-1.96,-2.28);
\coordinate[label=left:$P_7$] (B) at (-3,0);
\coordinate[label=left:$P_6$] (C) at (-1.94,2.3);
\coordinate[label=above:$P_5$] (D) at (0.54,2.92);
\coordinate[label=right:$P_4$] (E) at (2.5,1.7);
\coordinate[label=right:$P_3$] (F) at (2.88,-0.9);
\coordinate[label=below:$P_2$] (G) at (0.7,-2.9);
\coordinate[label=below:$a$] (H) at (-0.4,-2.26);
\coordinate[label=below:$b$] (I) at (1.2,-0.86);
\coordinate[label=below:$c$] (J) at (0.7,0.6);
\draw (A) -- (B) -- (C) -- (D) -- (E) -- (F)--(G)--(A);
\draw (A) -- (D);
\draw (A) -- (F);
\draw (A) -- (E);
\draw (D) -- (F);
\end{tikzpicture}
Let $P_1P_2=a,\,P_1P_3=b,\,P_1P_4=c$. By Ptolemy's theorem for the cyclic quadrilateral $P_1P_3P_4P_5$ it follows that $ab+ac=bc$, i.e. $\dfrac{a}{b}+\dfrac{a}{c}=1$. Since $\triangle P_1P_2P_3\sim \triangle Q_1Q_2Q_3$ and $Q_1Q_3=P_1P_2=a$, then $\dfrac{Q_1Q_2}{Q_1Q_3}=\dfrac{a}{b}$ and hence $Q_1Q_2=\dfrac{a^2}{b}$.
Analogously $R_1R_2=\dfrac{a^2}{c}$. Since areas of similar heptagons scale as the square of corresponding sides, $\dfrac{S_Q+S_R}{S_P}=\dfrac{a^2}{b^2}+\dfrac{a^2}{c^2}$. Then $\dfrac{a^2}{b^2}+\dfrac{a^2}{c^2}>\dfrac{1}{2}\left(\dfrac{a}{b}+\dfrac{a}{c}\right)^2=\dfrac{1}{2}$ by the QM–AM inequality (equality is not possible because $\dfrac{a}{b}\ne\dfrac{a}{c}$).
On the other hand
$\dfrac{a^2}{b^2}+\dfrac{a^2}{c^2}=\left(\dfrac{a}{b}+\dfrac{a}{c}\right)^2-\dfrac{2a^2}{bc}=1-\dfrac{2a^2}{bc}$---(1)
By the Sine theorem, we get
$\dfrac{a^2}{bc}=\dfrac{\sin^2 \dfrac{\pi}{7}}{\sin \dfrac{2\pi}{7}\sin \dfrac{4\pi}{7}}=\dfrac{1}{4\cos\dfrac{2\pi}{7}\left(1+\cos\dfrac{2\pi}{7}\right)}$
Since $\cos\dfrac{2\pi}{7}<\cos \dfrac{\pi}{4}=\dfrac{\sqrt{2}}{2}$, then $\dfrac{a^2}{bc}>\dfrac{1}{4\cdot\dfrac{\sqrt{2}}{2}\left(1+\dfrac{\sqrt{2}}{2}\right)}=\dfrac{1}{2\sqrt{2}+2}=\dfrac{\sqrt{2}-1}{2}$. From here and from (1), $\dfrac{S_Q+S_R}{S_P}=1-\dfrac{2a^2}{bc}<1-(\sqrt{2}-1)=2-\sqrt{2}$, which gives the right-hand side of the required inequality.
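As a numerical sanity check of the argument (mine, not part of the original post), one can evaluate the three heptagon chords for a unit circumradius and verify both the Ptolemy identity and the final bounds:

```python
from math import sin, pi, sqrt

# Chord lengths of a regular heptagon inscribed in a unit circle:
a = 2 * sin(pi / 7)      # side P1P2
b = 2 * sin(2 * pi / 7)  # short diagonal P1P3
c = 2 * sin(3 * pi / 7)  # long diagonal P1P4

# Ptolemy identity used in the proof: ab + ac = bc
print(a * b + a * c, b * c)

# (S_Q + S_R)/S_P = a^2/b^2 + a^2/c^2
ratio = (a / b) ** 2 + (a / c) ** 2
print(ratio)  # ~0.5060, strictly between 1/2 and 2 - sqrt(2) ~ 0.5858
```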
# Glossary
## NEXUS
A file in NEXUS format store the strings in the data block as follows:
#NEXUS
Begin data;
Dimensions ntax = 3 nchar = 94;
Format datatype = nucleotide gap = - missing = ?;
Matrix
Taxon1 ATGGGAGCGGGGGCGTCTGTTTTGAGGGGAGAGAAGCTAGATACATGGGAAAAAAAAGTACATGATAAAACATCTGGTTTGGGCAAGATCGGAG
Taxon2 AGCGGGAAAAAATTAGATTCATGGGAGAAAATTCGGTTAAGGCCAGGGGGAAACAAAAAATATNNNNNNNNNNNNNTTGGCCGCTNNN---GAG
Taxon3 ACTGGGACAATTACAACCAGCTCTTCGGTTAAGGCCAGGGTCCAGACAGGAACAGAATTCGGTTAAGGCCAGGGCTTAGATCATTATAT-----
;
End;
Note that NEXUS is an alignment format, i.e. all sequences must be the same length ('N' as an unknown character or "-" as a gap also counts) and that length must match nchar.
A NEXUS file may contain additional blocks besides data. Each block starts with BEGIN block_name; and finishes with END;
For example, TREES block contains phylogenetic trees for the data using the Newick format, e.g. ((A,B),C);
#NEXUS
Begin trees;
Tree tree1= (fish,(frog,(snake, mouse)));
End;
A formal detailed description of the NEXUS format can be found in Maddison et al. (1997).
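As a quick illustration of the layout described above, a minimal sketch of pulling the taxon/sequence pairs out of a simple, non-interleaved data block. This is not a real NEXUS parser (it ignores comments, quoted names, and interleaved matrices); for actual work use a proper library such as Biopython's Bio.Nexus:

```python
def parse_nexus_matrix(text):
    """Collect taxon -> sequence pairs from a simple NEXUS Matrix section."""
    rows, in_matrix = {}, False
    for line in text.splitlines():
        line = line.strip()
        if line.lower() == 'matrix':
            in_matrix = True
            continue
        if in_matrix:
            if line == ';':          # ';' alone terminates the matrix
                break
            if line:
                name, seq = line.split(None, 1)
                rows[name] = seq.replace(' ', '')
    return rows

example = """#NEXUS
Begin data;
Dimensions ntax = 2 nchar = 10;
Format datatype = nucleotide gap = - missing = ?;
Matrix
Taxon1 ATGGGAGCGG
Taxon2 AGCGG--TTA
;
End;
"""
aln = parse_nexus_matrix(example)
print(aln)  # {'Taxon1': 'ATGGGAGCGG', 'Taxon2': 'AGCGG--TTA'}

# Alignment check: every sequence length must equal nchar.
assert len({len(s) for s in aln.values()}) == 1
```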
# Why are hydrogen, helium and neon known as quantum gases in the mid-20th-century chemical literature?
So, while reading over equations of states, I learned that quantum gases do not conform to the same corresponding state behavior as normal fluids do. Why are these known as quantum gases and why do they not conform to the same corresponding state behavior as normal fluid?
One example of this language, appearing in Introduction To Chemical Engineering Thermodynamics by JM Smith, is as follows:
The Lee/Kessler correlation provides reliable results for gases which are nonpolar or only slightly polar; for these, errors of no more than 2 or 3 percent are indicated. When applied to highly polar gases or to gases that associate, larger errors can be expected.
The quantum gases (e.g., hydrogen, helium, and neon) do not conform to the same corresponding-states behaviour as do normal fluids. Their treatment by the usual correlations is sometimes accommodated by use of temperature-dependent effective critical parameters.18 For hydrogen, the quantum gas most commonly found in chemical processing, the recommended equations are: \begin{align} T_c/\mathrm{K} = \frac{43.6}{1+\frac{21.8}{2.016 T}} \quad (\text{for H}_2) \tag{3.58} \\ P_c/\mathrm{bar} = \frac{20.5}{1+\frac{44.2}{2.016 T}} \quad (\text{for H}_2) \tag{3.59} \\ V_c/\mathrm{cm}^3\:\mathrm{mol}^{-1} = \frac{51.5}{1-\frac{9.91}{2.016 T}} \quad (\text{for H}_2) \tag{3.60} \end{align}
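As a quick numerical illustration of those correlations (my arithmetic, not the book's), evaluating Eqs. (3.58)–(3.60) at an assumed temperature of 300 K:

```python
def h2_effective_criticals(T):
    """Effective critical parameters for H2 from Eqs. 3.58-3.60 (T in K)."""
    Tc = 43.6 / (1 + 21.8 / (2.016 * T))   # K
    Pc = 20.5 / (1 + 44.2 / (2.016 * T))   # bar
    Vc = 51.5 / (1 - 9.91 / (2.016 * T))   # cm^3/mol
    return Tc, Pc, Vc

Tc, Pc, Vc = h2_effective_criticals(300.0)
print(Tc, Pc, Vc)  # roughly 42.1 K, 19.1 bar, 52.4 cm^3/mol
```

Note that the effective parameters depend on temperature by construction; that is exactly how the correlations accommodate the quantum behaviour.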
The usage you have found is at odds with the modern understanding of the term, which (as explained in the existing answer) tends to revolve around low-temperature behaviour, and can include all sorts of gases (say, all the way up to rubidium).
The passage you've quoted seems to be looking at different behaviour, and its meaning becomes clearer in the related paper
Vapor-Liquid Equilibria at High Pressures. Vapor-Phase Fugacity Coefficients in Nonpolar and Quantum-Gas Mixtures P. L. Chueh, and J. M. Prausnitz. Ind. Eng. Chem. Fundamen. 6, 492 (1967),
available as a pdf here, which makes the claim much more clear:
Quantum Gases
The configurational properties of low-molecular-weight gases (hydrogen, helium, neon) are described by quantum, rather than classical, statistical mechanics.
(The rest of that passage looks eerily similar to the one in your textbook. Is the Chueh & Prausnitz paper the reference 18 cited in your book? If it isn't, there's some pretty flagrant behaviour there.)
Basically, what they're claiming is that if you're studying the dynamics of a gas molecule leaving the liquid phase and into more open space, then classical mechanics is a good approximation so long as the molecule is massive enough, and that this approximation works well for all but the very lightest of molecules.
That's where your listing comes in: H$_2$, He and Ne are the lightest possible constituents of reasonable gases, as most everything in between will coalesce into diatomics that are heavier than neon. Presumably the claim goes that by the time you get to N$_2$ at mass 28 the quantum mechanical effects become effectively negligible.
(And there are, of course, unreasonable gases ─ HF in particular, but also potentially Li$_2$ and Be$_2$ ─ which lie below that mass-$10$ cutoff, so presumably the fugacity calculations would need to be repeated for them, but I don't think that studying the equilibrium gas and liquid fractions of hydrofluoric acid as a function of temperature is a particularly appealing experiment.)
• Lithium forms a diatomic gas? Never heard of that before, though presumably the formation requires extreme-ish conditions. I don't see any reason why that would be stable, though chem's notoriously strange. Do you know of any good literature about it? – user191954 Jul 17 '18 at 16:15
• @Chair Wikipedia pegs it as stable, but it does also put it at about a 1% mass fraction of vapor-phase lithium (without a reference), though presumably that should depend on the temperature. I don't see what's particularly surprising about it - if the temperature is high enough and the pressure is just right, why wouldn't gas-phase lithium form dimers? – Emilio Pisanty Jul 17 '18 at 16:19
• Ah, never mind. I somehow had a very distinct impression that $\text{Na}$ exists as a monomer in the gaseous state, so I was expecting $\text{Li}$ to show a similar trend. Turns out that wikipedia says $\text{Na}_2$ and $\text{Li}_2$ are the stable ones. – user191954 Jul 17 '18 at 16:25
• Well, as a rule of thumb, it's a reasonable approximation that the alkali metals are roughly hydrogenic on their own in vacuum, so dimers are generally my initial guess. The Wikipedia claim that Li forms clusters is not unreasonable, but then again so does H at low enough temperatures, so presumably you just have to crank the temperature up and the clusters might start to break apart. – Emilio Pisanty Jul 17 '18 at 16:29
• @EmilioPisanty the book does indeed refers to a paper from Prausnitz et al, but it is not the one you mentioned. It is dated 1999 and the co authors are different – Shah M Hasan Jul 17 '18 at 16:50
At high temperature, all of the elements you said will closely follow the behavior of ideal gas.
Those gases reach quantum degeneracy when temperature becomes cold enough such that the thermal de Broglie wavelength (inversely proportional to standard deviation in momentum - as temperature goes down, momentum spread decreases) starts to become comparable to interparticle spacing (see the Wikipedia article on Thermal de Broglie wavelength).
Another way to say the same thing is that the phase space density of the gas starts to approximate unity.
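To make "comparable to interparticle spacing" concrete, here is a quick estimate (my numbers, not the answerer's) of the thermal de Broglie wavelength $\lambda = h/\sqrt{2\pi m k_B T}$ and the phase-space density $n\lambda^3$ for an assumed dilute helium-4 gas:

```python
from math import pi, sqrt

h = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K
u = 1.661e-27    # atomic mass unit, kg

def thermal_wavelength(mass_kg, T):
    """Thermal de Broglie wavelength in metres."""
    return h / sqrt(2 * pi * mass_kg * kB * T)

# Helium-4 at room temperature vs 1 K:
lam_hot = thermal_wavelength(4 * u, 300.0)
lam_cold = thermal_wavelength(4 * u, 1.0)
print(lam_hot, lam_cold)   # ~5e-11 m vs ~9e-10 m

# Phase-space density n * lambda^3 for an assumed density n = 1e20 m^-3:
n = 1e20
print(n * lam_cold ** 3)   # still << 1: cold, but far from degenerate
```

Degeneracy sets in roughly when $n\lambda^3 \sim 1$, which for dilute gases requires microkelvin-range temperatures, higher densities, or both.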
At this point, the quantum statistics of particles become important, and the quantum degenerate gas can be classified as Bose-Einstein condensate or Fermi degenerate gas, depending on whether the element is boson or fermion. I think all of the elements you listed only have bosonic isotopes.
Interacting Bose-Einstein condensates display superfluid behavior. Look up any popular article on BEC.
Now some miscellaneous points:
1. The first hydrogen BEC was created at MIT by Dan Kleppner and Tom Greytak. Most elements form solid at low temperature, but hydrogen stays gaseous. Actually you can make BEC with other elements (e.g. alkali), but they are metastable.
2. helium is special since it stays as fluid at low temperature. You need high pressure to make it solidify at low temperature. Superfluid helium is an example of strongly interacting superfluid, whereas other quantum degenerate gases are typically weakly interacting, unless you modify the interparticle scattering behavior using external fields.
• This is the modern understanding of the term, but it seems to have little to do with the usage as in the example in the question. – Emilio Pisanty Jul 17 '18 at 15:56
• Indeed. After the OP added more information to the question, it seems like the relevant topic is the virial expansion of the gas equation of state (power series expansion of the deviation from the ideal gas law), and what principles (quantum-mechanical vs classical) are used to calculate the virial coefficients. – wcc Jul 17 '18 at 16:30
• It's still worthwhile to have the modern usage of the term make a presence here, though. – Emilio Pisanty Jul 17 '18 at 16:31
• @IamAStudent Not specifically virial gas equation of state, but it may well apply to the general cubic equation of state as well! – Shah M Hasan Jul 17 '18 at 16:53
• @ShahMHasan A virial expansion is a generic approach, and the cubic equation of state is just one example. – wcc Jul 17 '18 at 17:04
DESeq2 design with three treatments and the genes common to all
1
2
Entering edit mode
2.7 years ago
MaxF ▴ 120
My experimental design is 18 samples: three different treatments (cytokines A, B, C), each of which has its own mock/control, all in triplicate.
I would like to know the genes that are differentially expressed across all cytokine treatments. I would also like to know the genes differentially expressed in response to each cytokine treatment. I am not trying to find out the genes that are only induced in A, but not B or C (etc).
Is the best option to perform each comparison individually and then manually look for the things that overlap, or is this something I can do within the design of my experiment?
At the moment my columns are set up like:
samp condition cyto
CytoA_treat treated A
CytoA_untreat untreated A
CytoB_treat treated B
CytoB_untreat untreated B
CytoC_treat treated C
CytoC_untreat untreated C
If I set up my design like:
ddsTxi <- DESeqDataSetFromTximport(txi, colData = samps_matx, design = ~ cyto + condition)
results(ddsTxi, name = "condition_treated_vs_untreated")
I think this gets me the answer to my first question ("Which genes are differentially expressed in all treatments?") but from there I don't know how to perform the individual comparisons. I read about interaction terms, but as I understand it adding one in is not what I want.
DESeqDataSetFromTximport(txi, colData = samps_matx,
design = ~ cyto + condition + cyto:condition)
results(ddsTxi, contrast = list(c("condition_untreated_vs_treated", "cytoB.conditiontreated")))
From the DESeq2 vignette it seems like this would tell me the additional effects of cytokine B treatment as compared to A, which isn't exactly what I want.
Maybe the best bet is to just make two DESeq objects and have one where I make a compound factor of "condition.treatment" and then individually compare those groups with the contrast argument?
A final option, since these data are all from the same cell type, is to merge the "untreated" samples and compare each cytokine treatment to this aggregate 'super mock'. I can see from the PCA, though, that the mocks from each of these experiments cluster by cytokine group (so there are batch effects).
These two questions are very similar to what I'm asking, but having read the answers I'm still a bit confounded: DESeq2 design with multiple conditions and https://support.bioconductor.org/p/101190/
RNA-Seq R deseq2 • 1.7k views
3
Entering edit mode
2.7 years ago
Hey, I think that you need these:
~ cyto + condition + cyto:condition
## The condition effect for cyto A
results(dds, contrast = c('condition', 'treated', 'untreated'))
## The condition effect for cyto B
That is, the extra condition effect in cyto B compared to cyto A.
results(dds, contrast = list(c('condition_treated_vs_untreated', 'cytoB.conditiontreated')))
## The condition effect for cyto C
That is, the extra condition effect in cyto C compared to cyto A.
results(dds, contrast = list(c('condition_treated_vs_untreated', 'cytoC.conditiontreated')))
## ------------------
You could also just use a single-parameter model of the form ~ samp and do pairwise comparisons.
Kevin
0
Entering edit mode
Thanks Kevin, I ended up doing pariwise comparisons but put the mocks together as one group. Since these cytokine treatments were done at different times, I also included a "batch" condition.
Maybe I'm just confused, but I thought this:
results(dds, contrast = list(c('condition_treated_vs_untreated', 'cytoB.conditiontreated')))
Would tell me the difference in the effect of the treatment for cytokine B compared to cytokine A (as in, a small L2FC for a gene would mean the effect from A and B is of a similar magnitude).
1
Entering edit mode
Yes, according to the DESeq2 manual pages, it would be:
'The condition effect for cyto B. This is the main effect plus the interaction term (the extra condition effect in cytoB compared to cyto A)'
I added these extra lines to the original answer.
# Does time go backwards if object is going faster than light?
1. Nov 1, 2013
Lets pretend I am in my space ship going in one direction at 0.9c. My brother is in his space ship going the opposite direction also at 0.9c. Oh and there is a big clock on the side of our ships. When we cross paths, to me it looks as if I'm not moving and my brother is moving at 1.8c. I am aware that the formula for time dilation is $t_0 = t\sqrt{1-\frac{v^{2}}{c^{2}}}$ and that velocities don't simply add together; the formula for that is $V_{3} = \frac{v_{1}+v_{2}}{1+\frac{v_{1}v_{2}}{c^{2}}}$. Using that formula for velocity the speed would be about 0.9945c. This only makes sense to me if the two velocities are in the same direction. So lets just forget the formula for velocity. I see my brother moving faster than light. Lets also pretend I have some kind of telescope that can perceive vision instantaneously. Using the time dilation formula, I would be left with $t\sqrt{-0.8}$. I know you can't really find the square root of a negative number, so what would I see? Would I see his clock run slow but forward like normal, not move at all, or backwards?
2. Nov 1, 2013
### Bill_K
No, please don't forget it! It's correct. The formula the way you've written it assumes that the velocities are in the opposite direction. Moreover there's a recent thread which discusses how to add velocities that are in any direction.
3. Nov 1, 2013
### Ibix
You can't just "forget" the formula for velocity addition. That's the way velocities add. So you don't see your brother travelling faster than light.
On a related note, nothing goes faster than light so you cannot perceive faster than light. It's common practice to imply that all players in an experiment are smart enough to subtract out lightspeed delay, which can confuse some students. But if something happens a light year away, we cannot know about it for a year.
So: you would see his clock running slow because of time dilation, but sped up (as he approaches) or slowed even more (as he moves away) because of the Doppler effect. You would never see the clock running backwards, or showing an imaginary time.
4. Nov 1, 2013
### D H
Staff Emeritus
You do not see your brother moving faster than light. That formula that you want to forget is precisely what you need to use to explain why you see your brother moving at a speed less than the speed of light.
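A quick sketch (not from the thread) of the velocity-addition formula the responders point to, with speeds expressed as fractions of c:

```python
def add_velocities(u, v):
    """Relativistic addition of two collinear speeds u, v given in units of c."""
    return (u + v) / (1 + u * v)

w = add_velocities(0.9, 0.9)
print(w)  # ~0.9945c -- the result stays below 1 for any u, v < 1
```

This is why the brother is seen approaching at about 0.9945c rather than 1.8c, and why the time-dilation factor never involves the square root of a negative number.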