content
stringlengths
86
994k
meta
stringlengths
288
619
Stephen L. Nelson, Author at Stephen L. Nelson, Inc. You can use the sum-of-the-years’-digits depreciation starter workbook shown in Figure 15-3 to construct depreciation schedules with the sum-of-the-years’-digits method. In general, you use this starter workbook if you’ve selected or been counseled by your tax ad- viser to use the … [Read more...] about Understanding the Sum-of-the-Years’-Digits Depreciation Starter Workbook
{"url":"https://stephenlnelson.com/articles/author/stevenelsoncpa6543/page/2/","timestamp":"2024-11-11T16:33:54Z","content_type":"text/html","content_length":"54116","record_id":"<urn:uuid:0cedade8-5a34-4642-a32a-85802db39528>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00654.warc.gz"}
Introduction to conformalbayes The conformalbayes package provides functions to construct finite-sample calibrated predictive intervals for Bayesian models, following the approach in Barber et al. (2021). The basic idea is a natural one: use cross-validated residuals to estimate how large predictive intervals need to be, on average. Suppose we have a heavy-tailed dataset. We can fit a linear regression to the data, but it won’t give us accurate uncertainty quantification in our predictions. # fit the model m = stan_glm(y ~ x, data=d_fit, chains=1, refresh=0) d_test = sim_data(2000) interv_model = predictive_interval(m, newdata=d_test, prob=0.50) # are the points covered covered_model = with(d_test, interv_model[, 1] <= y & y <= interv_model[, 2]) ggplot(d_test, aes(x, y, color=covered_model, group=1)) + geom_point(size=0.4) + geom_linerange(aes(ymin=interv_model[, 1], ymax=interv_model[, 2]), alpha=0.4) + labs(color="Covered?") + geom_smooth(method=lm, formula=y~x, color="black") In fact, the 50% intervals over-cover, with a coverage rate of 69.8%, since the fat tails of the error terms pulls the estimate of the residual standard deviation too high. While a posterior predictive check could uncover this discrepancy, leading us to fit a more flexible model, we can take another approach instead. The first step is to call loo_conformal(), which computes leave-one-out cross-validation weights and residuals for use in generating more accurate predictive intervals. m = loo_conformal(m) #> stan_glm #> family: gaussian [identity] #> formula: y ~ x #> observations: 50 #> predictors: 2 #> ------ #> Median MAD_SD #> (Intercept) 2.9 0.3 #> x -1.9 0.3 #> Auxiliary parameter(s): #> Median MAD_SD #> sigma 2.1 0.2 #> ------ #> * For help interpreting the printed output see ?print.stanreg #> * For info on the priors used see ?prior_summary.stanreg #> (conformalbayes enabled, with estimated CI inflation factor 0.81) The loo_conformal() returns the same fitted model, just with a thin wrapping layer that contains the leave-one-out cross-validation information. You can see at the bottom of the output that conformalbayes estimates that correctly-sized predictive intervals are only 81% of the size of the model-based predictive intervals. To actually generate predictive intervals, we use predictive_interval(), just like normal: interv_jack = predictive_interval(m, newdata=d_test, prob=0.50) # are the points covered covered_jack = with(d_test, interv_jack[, 1] <= y & y <= interv_jack[, 2]) ggplot(d_test, aes(x, y, color=covered_jack, group=1)) + geom_point(size=0.4) + geom_linerange(aes(ymin=interv_jack[, 1], ymax=interv_jack[, 2]), alpha=0.4) + labs(color="Covered?") + geom_smooth(method=lm, formula=y~x, color="black") Indeed, the coverage rate for these jackknife conformal intervals is 49.2%, as we would expect. The conformal version of predictive_interval() does contain two extra options: plus and local. When plus=TRUE, the function will generate jackknife+ intervals, which have a theoretical coverage guarantee. These can be computationally intensive, so by default they are only generated when the number of fit and prediction data points is less than 500. In practice, non-plus jackknife intervals generally perform just as well as jackknife+ intervals. When local=TRUE (the default), the function will generate intervals whose widths are proportional to the underlying model-based predictive intervals. So if your model accounts for heteroskedasticity, or produces narrow intervals in areas of covariate space with many observations (like a linear model), local=TRUE will produce more sensible intervals. The overall conformal performance guarantees are unaffected. Barber, R. F., Candes, E. J., Ramdas, A., & Tibshirani, R. J. (2021). Predictive inference with the jackknife+. The Annals of Statistics, 49(1), 486-507. Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., & Wasserman, L. (2018). Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523),
{"url":"https://corymccartan.com/conformalbayes/articles/conformalbayes.html","timestamp":"2024-11-13T15:08:25Z","content_type":"text/html","content_length":"24091","record_id":"<urn:uuid:1d4bf046-1567-4039-a637-a6e7dcf0a7c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00662.warc.gz"}
ounded to the nearest 10, if volcanoes of central america's pacific rounded to the nearest 10, if volcanoes of central america's pacific coast were evenly spaced out, about how many miles... rounded to the nearest 10, if volcanoes of central america's pacific coast were evenly spaced out, about how many miles whould there be between volcanoes
{"url":"https://www.sweetstudy.com/content/rounded-nearest-10-if-volcanoes-central-americas-pacific-coast-were-evenly-spaced-out-about-","timestamp":"2024-11-11T04:00:07Z","content_type":"text/html","content_length":"120549","record_id":"<urn:uuid:1fbc6138-bd6d-4332-978e-25df5f2cef8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00225.warc.gz"}
Aggregations | Aggregations.io Docs # Aggregations # Overview Aggregations power your metrics. Since you've decided what data you want with a filter, how often to calculate it with an interval and how to slice it with groupings -- now it's time to actually decide what to aggregate. You can have an 10 aggregations per filter. Each aggregation has four components: Component Description Filter Name Name to describe the aggregation Sub Filter Jpath filter to further narrow your aggregation. Similar to filter, sub-filter is defined using JPath. For more details, see JPath Calculations The computation(s) to perform Field The property on your events to perform the calculation on. If your only calculation is Count, a Field is not necessary. # Calculations Calculations are the statistical computations you want to run to achieve your metrics. Each aggregation can have any combination of calculations. Aggregations.io supports the following Calculations: Calculation Description Min (Minimum) the smallest value in the field defined Max (Maximum) the largest value in the field defined Avg (Average) the average value for the field defined Sum Summation of all non-null numerical values in the field Count A count of events matching the filter & (optional) sub-filter. No field needs to be defined if Count is the only calculation. Approx Distinct Count An estimated count of unique values in the field defined. Computed using an adaptive sample with a maximum ~2% margin of error (although typically much lower) Percentile Percentile distribution for the field defined. You can define the percentile value desired at retrieval time. Percentiles are calculated by maintaing a t-digest state. # Non Numeric Fields If a field defined is found to contain a non-numerical value or not exist on a given payload - it will not be used for the calculation. For example, if the aggregation is defined as Sum, Count, Avg for @.num and the following events are sent: • The Sum will be 6 • The Avg will be 2 • The Count will be 4
{"url":"https://aggregations.io/docs/metrics/aggregations","timestamp":"2024-11-14T18:24:37Z","content_type":"text/html","content_length":"46701","record_id":"<urn:uuid:2f542d36-3c9c-4464-a953-50b7faa66a77>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00709.warc.gz"}
Mathematical Reasoning MCQ (Single Correct Answer) MHT CET 2023 14th May Evening Shift If the statement $$\mathrm{p} \leftrightarrow(\mathrm{q} \rightarrow \mathrm{p})$$ is false, then true statement/statement pattern is MHT CET 2023 14th May Evening Shift The statement $$[\mathrm{p} \wedge(\mathrm{q} \vee \mathrm{r})] \vee[\sim \mathrm{r} \wedge \sim \mathrm{q} \wedge \mathrm{p}]$$ is equivalent to... MHT CET 2023 14th May Morning Shift The negation of the statement "The number is an odd number if and only if it is divisible by 3." MHT CET 2023 14th May Morning Shift The statement $$[(p \rightarrow q) \wedge \sim q] \rightarrow r$$ is tautology, when $$r$$ is equivalent to MHT CET 2023 13th May Evening Shift If $$q$$ is false and $$p \wedge q \leftrightarrow r$$ is true, then ............ is a tautology. MHT CET 2023 13th May Evening Shift Negation of contrapositive of statement pattern $$(p \vee \sim q) \rightarrow(p \wedge \sim q)$$ is MHT CET 2023 13th May Morning Shift The expression $$(p \wedge \sim q) \vee q \vee(\sim p \wedge q)$$ is equivalent to MHT CET 2023 13th May Morning Shift Negation of inverse of the following statement pattern $$(p \wedge q) \rightarrow(p \vee \sim q)$$ is MHT CET 2023 12th May Evening Shift Let Statement 1 : If a quadrilateral is a square, then all of its sides are equal. Statement 2: All the sides of a quadrilateral are equal, then it is... MHT CET 2023 12th May Evening Shift The given following circuit is equivalent to ... MHT CET 2023 12th May Morning Shift The inverse of the statement "If the surface area increase, then the pressure decreases.", is MHT CET 2023 12th May Morning Shift The contrapositive of "If $$x$$ and $$y$$ are integers such that $$x y$$ is odd, then both $$x$$ and $$y$$ are odd" is MHT CET 2023 11th May Evening Shift The logical statement $$(\sim(\sim \mathrm{p} \vee \mathrm{q}) \vee(\mathrm{p} \wedge \mathrm{r})) \wedge(\sim \mathrm{q} \wedge \mathrm{r})$$ is equi... MHT CET 2023 11th May Evening Shift If truth value of logical statement $$(p \leftrightarrow \sim q) \rightarrow(\sim p \wedge q)$$ is false, then the truth values of $$p$$ and $$q$$ are... MHT CET 2023 11th May Morning Shift The statement pattern $$\mathrm{p} \rightarrow \sim(\mathrm{p} \wedge \sim \mathrm{q})$$ is equivalent to MHT CET 2023 10th May Evening Shift If $$\mathrm{p}$$ and $$\mathrm{q}$$ are true statements and $$\mathrm{r}$$ and $$\mathrm{s}$$ are false statements, then the truth values of the stat... MHT CET 2023 10th May Evening Shift The negation of the statement pattern $$\sim s \vee(\sim r \wedge s)$$ is equivalent to MHT CET 2023 10th May Morning Shift The logical statement $$[\sim(\sim p \vee q) \vee(p \wedge r)] \wedge(\sim q \wedge r)$$ is equivalent to MHT CET 2023 10th May Morning Shift The given circuit is equivalent to ... MHT CET 2023 9th May Evening Shift Negation of the statement "The payment will be made if and only if the work is finished in time." Is MHT CET 2023 9th May Evening Shift Let $$\mathrm{p}, \mathrm{q}, \mathrm{r}$$ be three statements, then $$[p \rightarrow(q \rightarrow r)] \leftrightarrow[(p \wedge q) \rightarrow r]$$ ... MHT CET 2023 9th May Morning Shift If truth values of statements $$\mathrm{p}, \mathrm{q}$$ are true, and $$\mathrm{r}$$, $$s$$ are false, then the truth values of the following stateme... MHT CET 2023 9th May Morning Shift The negation of the statement $$(p \wedge q) \rightarrow(\sim p \vee r)$$ is MHT CET 2022 11th August Evening Shift If $$p: \forall n \in I N, n^2+n$$ is an even number $$q: \forall n \in I N, n^2-n$$ is an odd numer, then the truth values of $$p \wedge q, p \vee q$... MHT CET 2022 11th August Evening Shift The negation of the statement pattern $$p \vee(q \rightarrow \sim r)$$ is MHT CET 2022 11th August Evening Shift The negation of the statement, "The payment will be made if and only if the work is finished in time" is MHT CET 2021 24th September Evening Shift The negation of '$$\forall x \in N, x^2+x$$ is even number' is MHT CET 2021 24th September Evening Shift If $$\mathrm{p}$$ : It is raining. $$\mathrm{q}$$ : Weather is pleasant then simplified form of the statement "It is not true, if it is raining then w... MHT CET 2021 24th September Morning Shift The negation of $$p \wedge(q \rightarrow r)$$ is MHT CET 2021 24th September Morning Shift If $$\mathrm{p}$$ : It is raining and $$\mathrm{q}$$ : It is pleasant, then the symbolic form of "It is neither raining nor pleasant" is MHT CET 2021 23rd September Evening Shift "If two triangles are congruent, then their areas are equal." is the given statement, then the contrapositive of the inverse of the given statement is... MHT CET 2021 23rd September Evening Shift The negation of inverse of $$\sim \mathrm{p} \rightarrow \mathrm{q}$$ is MHT CET 2021 23th September Morning Shift S1 : If $$-$$7 is an integer, then $$\sqrt{-7}$$ is a complex number $$\mathrm{S} 2$$ : $$-$$7 is not an integer or $$\sqrt{-7}$$ is a complex number... MHT CET 2021 23th September Morning Shift Negation of the statement : $$3+6>8$$ and $$2+3 MHT CET 2021 22th September Evening Shift Given $$\mathrm{p}$$ : A man is a judge, $$\mathrm{q}$$ : A man is honest If $$\mathrm{S} 1$$ : If a man is a judge, then he is honest S2 : If a man i... MHT CET 2021 22th September Evening Shift The statement pattern $$(p \wedge q) \wedge[(p \wedge q) \vee(\sim p \wedge q)]$$ is equivalent to MHT CET 2021 22th September Evening Shift Let $$a: \sim(p \wedge \sim r) \vee(\sim q \vee s)$$ and $$b:(p \vee s) \leftrightarrow(q \wedge r)$$. If the truth values of $$p$$ and $$q$$ are true... MHT CET 2021 22th September Morning Shift If statements $$\mathrm{p}$$ and $$\mathrm{q}$$ are true and $$\mathrm{r}$$ and $$\mathrm{s}$$ are false, then truth values of $$\sim(\mathrm{p} \righ... MHT CET 2021 22th September Morning Shift The expression $$[(p \wedge \sim q) \vee q] \vee(\sim p \wedge q)$$ is equivalent to MHT CET 2021 21th September Evening Shift The logical statement (p $$\to$$ q) $$\wedge$$ (q $$\to$$ ~p) is equivalent to MHT CET 2021 21th September Evening Shift If p $$\to$$ (~p $$\vee$$ q) is false, then the truth values of p and q are, respectively MHT CET 2021 21th September Morning Shift Negation of the statement $$\forall x \in R, x^2+1=0$$ is MHT CET 2021 21th September Morning Shift If $$p, q$$ are true statements and $$r$$ is false statement, then which of the following is correct. MHT CET 2021 20th September Evening Shift p : It rains today q : I am going to school r : I will meet my friend s : I will go to watch a movie. Then symbolic form of the statement "If it does ... MHT CET 2021 20th September Evening Shift Negation of $$(p \wedge q) \rightarrow(\sim p \vee r)$$ is MHT CET 2021 20th September Morning Shift The negation of a statement 'x $$\in$$ A $$\cap$$ B $$\to$$ (x $$\in$$ A and x $$\in$$ B)' is MHT CET 2021 20th September Morning Shift The logical expression $$\mathrm{p} \wedge(\sim \mathrm{p} \vee \sim \mathrm{q}) \equiv$$ MHT CET 2020 19th October Evening Shift The negation of the statement pattern $\sim p \vee(q \rightarrow \sim r)$ is MHT CET 2020 19th October Evening Shift The statement pattern $p \wedge(q \vee \sim p)$ is equivalent to MHT CET 2020 16th October Evening Shift The negation of the statement ' He is poor but happy' is MHT CET 2020 16th October Evening Shift If $$p, q$$ are true statement and $$r$$ is false statement, then which of the following statements is a true statement. MHT CET 2020 16th October Morning Shift If $$p \rightarrow(\sim p \vee q)$$ is false, then the truth values of $$p$$ and $$q$$ are respectively MHT CET 2020 16th October Morning Shift The symbolic form of the following circuit is (where $$p, q$$ represents switches $$S_1$$ and $$s_2$$ closed respectively) ... MHT CET 2019 3rd May Morning Shift Let $a: \sim(p \wedge \sim r) \vee(\sim q \vee s)$ and $b:(p \vee s) \leftrightarrow(q \wedge r)$. If the truth values of $p$ and $q$ are true and tha... MHT CET 2019 3rd May Morning Shift 5. "If two triangles are congruent, then their areas are equal" is the given statement then the contrapositive of, the inverse of the given statement ... MHT CET 2019 3rd May Morning Shift Which of the following statement pattern is a tautology? MHT CET 2019 2nd May Evening Shift If $p$ and $q$ are true and $r$ and $s$ are false statements, then which of the following is true? MHT CET 2019 2nd May Evening Shift The negation of " $\forall, n \in N, n+7>6$ " is ............. MHT CET 2019 2nd May Evening Shift Which of the following statements is contingency? MHT CET 2019 2nd May Morning Shift The statement pattern $(p \wedge q) \wedge[\sim r \vee(p \wedge q)] \vee(\sim p \wedge q)$ is equivalent to ........... MHT CET 2019 2nd May Morning Shift Which of the following is not equivalent to $p \rightarrow q$. MHT CET 2019 2nd May Morning Shift The equivalent form of the statement $\sim(p \rightarrow \sim q)$ is $\ldots$
{"url":"https://questions.examside.com/past-years/jee/mht-cet/mathematics/mathematical-reasoning","timestamp":"2024-11-10T08:20:33Z","content_type":"text/html","content_length":"316755","record_id":"<urn:uuid:c68bc971-52b5-486b-8167-72ef82c9c203>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00403.warc.gz"}
Minimizing Attenuation Click here to go to our main page on transmission lines Click here to go to our main page on transmission line loss Click here to go to our Why 50 ohms? page New for October 2007! He's information on minimizing loss of transmission lines, which is expressed as the attenuation factor in units of dB/length or Nepers/length. Of course, once you deviate from 50 ohms, you'll need to add transformers to get back to 50 ohms because it is the widely accepted standard for everything. OK, 75 ohms is also a standard, but it is not used as much as 50 ohms. Coax transmission lines exhibit a well-known minimum in attenuation versus line impedance. At zero ohms the loss/length is infinity, the same is true at infinity ohms. Air coax has its minimum attenuation at 77 ohms. It is believed that this phenomenon is what caused the 75 ohm impedance standard for cables. The plot below is for air-coax, outer conductor D=0.049 inches, copper conductors, at a frequency of 10 GHz. Minimum loss is a little more than 0.5 dB for a foot of this cable (which you can't buy anywhere, no one makes air coax that small!) Remember, the minimum attenuation feature is also a function of the coax dielectric constant. For PTFE-filled cables, the minimum attenuation is nearly at 50 ohms (52 ohms by our calculation, which you can verify when you download our coax calculator Excel file!). So, what's the story with microstrip and stripline? Is there a sweet spot on the impedance/attenuation function? We'll spoil the ending, the answer is "no!" The equations for microstrip are quite complicated, we'll avoid them and rely on Agilent's ADS software to look into this. We used a well-known microstrip media, 10 mil alumina circuit with gold conductors, at 10 GHz. The minimum attenuation is at Z0=0 ohms. Not only is this a useless impedance, but it seems strange that there is a finite loss under these conditions (shouldn't it be either zero or infinity?) What's going on here? Recall the loss per length calculation, resistance/length appears in the numerator, while characteristic impedance is in the denominator. Both are decreased as the line width increases, but not exactly proportionately. You could build up a closed form equation for microstrip loss/length as a function of line width, then take the limit as W goes to infinity and you'd come up with the non-zero value of attenuation at Z0=0. If anyone has the time, send us the math and we'll post it! We used ADS to calculate the attenuation factor of a PTFE stripline board with copper conductor and popular geometry. Stripline has the same result, loss is minimum as Z0=0. Although the curves would change if you played with parameters such as metal conductivity, dielectric constant, and geometry, trust us, the attenuation factor always decreases when impedance is reduced. This is a Microwaves101 Rule of Thumb!
{"url":"https://www.microwaves101.com/encyclopedias/minimizing-attenuation","timestamp":"2024-11-12T03:35:44Z","content_type":"application/xhtml+xml","content_length":"37122","record_id":"<urn:uuid:32f0f183-71c6-4e0f-8176-03231427b8d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00519.warc.gz"}
Reasoning in RDFox 5. Reasoning in RDFox¶ Reasoning in RDF is the ability to calculate the set of triples that logically follow from an RDF graph and a set of rules. Such logical consequences are materialized in RDFox as new triples in the The use of rules can significantly simplify the management of RDF data as well as provide a more complete set of answers to user queries. Consider, for instance, a graph containing the following :oxford :locatedIn :oxfordshire . :oxfordshire :locatedIn :england . The relation :locatedIn is intuitively transitive: from the fact that Oxford is located in Oxfordshire and Oxfordshire is located in England, we can deduce that Oxford is located in England. The triple :oxford :locatedIn :england is, however, missing from the graph. As a consequence, SPARQL queries asking for all English cities will not return :oxford as an answer. We could, of course, add the missing triple by hand to the graph, in which case :oxford would now be returned as an answer to our previous query. Doing so, however, has a number of important disadvantages. First, there can be millions such missing triples and each of them would need to be manually added, which is cumbersome and error-prone; for instance, if we add to the graph the triple :england :locatedIn :uk, then the following additional triples should also be added: :oxford :locatedIn :uk . :oxfordshire :locatedIn :uk . More importantly, by manually adding missing triples we are not capturing the transitive nature of the relation, which establishes a causal link between different triples. Indeed, triple :oxford :locatedIn :england holds because triples :oxford :locatedIn :oxfordshire and :oxfordshire :locatedIn :england are part of the data. Assume that we later find out that :oxford is not located in :oxfordshire, but rather in the state of Mississippi in the US, and we delete from the graph the triple :oxford :locatedIn :oxfordshire as a result. Then, the triples :oxford locatedIn :england and :oxford :locatedIn :uk should also be retracted as they are no longer justified. Such situations are very hard to handle manually. As we will see next, we can use a rule to faithfully represent the transitive nature of the relation and handle all of the aforementioned challenges in an efficient and elegant way. 5.1. Rule Languages¶ A rule language for RDF determines which syntactic expressions are valid rules, and also provides well-defined meaning to each rule. In particular, given an arbitrary set of syntactically valid rules and an arbitrary RDF graph, the set of new triples that follow from the application of the rules to the graph must be unambiguously defined. 5.1.1. Datalog¶ Rule languages have been in use since the 1980s in the fields of data management and artificial intelligence. The basic rule language is called Datalog. It is a very well understood language, which constitutes the core of a plethora of subsequent rule formalisms equipped with a wide range of extensions. In this section, we describe Datalog in the context of RDF. A Datalog rule can be seen as an IF … THEN statement. In particular, the following is a Datalog rule which faithfully represents the transitive nature of the relation :locatedIn. [?x, :locatedIn, ?z] :- [?x, :locatedIn, ?y], [?y, :locatedIn, ?z] . The IF part of the rule is also called the body or antecedent; the THEN part of the rule is called the head or the consequent. The head is written first and is separated from the body by the symbol :-. Both body and head consist of a conjunction of conditions, where conjuncts are comma-separated and where each conjunct is a triple in which variables may occur. Each conjunct in the body or the head is called an atom. In our example, the body consists of atoms [?x, :locatedIn, ?y] and [?y, :locatedIn, ?z], whereas the head consists of the single atom [?x, :locatedIn, ?z]. Each rule conveys the idea that, from certain combinations of triples in the input RDF graph, we can logically deduce that some other triples must also be part of the graph. In particular, variables in the rule range over all possible nodes in the RDF graph (RDF literals, URIs, blank nodes); whenever these variables are assigned values that make the rule body become subset of the graph, then we see what the value of those variables is, propagate these values to the head of the rule, and deduce that the resulting triples must also be a part of the graph. In our example, a particular rule application binds variable ?x to :oxford, variable ?y to :oxfordshire and variable ?z to :england, which then implies that that triple :oxford :locatedIn :england obtained by replacing ?x with :oxford and ?z with :england in the head of the rule holds as a logical consequence. A different rule application would bind ?x to :oxfordshire, ?y to :england, and ?z to :uk; as a result, the triple :oxfordshire :locatedIn :uk can also be derived as a logical consequence. An alternative way to understand the meaning of a single Datalog rule application to an RDF graph is to look at it as the execution of an INSERT statement in SPARQL, which adds a set of triples to the graph. In particular, the statement INSERT { ?x :locatedIn ?z } WHERE { ?x :locatedIn ?y. ?y :locatedIn ?z } corresponding to our example rule leads to the insertion of triples :oxford :locatedIn :england . :oxfordshire :locatedIn :uk . There is, however, a fundamental difference that makes rules more powerful than simple INSERT statements in SPARQL, namely that rules are applied recursively . Indeed, after we have derived that Oxford is located in England, we can apply the rule again by matching ?x to :oxford, ?y to :england, and ?z to :uk, to derive :oxford :locatedIn :uk—a triple that is not obtained as a result of the INSERT statement above. In this way, the logical consequences of a set of Datalog rules on an input graph are captured by the recursive application of the rules until no new information can be added to the graph. It is important to notice that the set of new triples obtained is completely independent from the order in which rule applications are performed as well as of the order in which different elements of rule bodies are given. In particular, the following two rules are equivalent: [?x, :locatedIn, ?z] :- [?x, :locatedIn, ?y], [?y, :locatedIn, ?z] . [?x, :locatedIn, ?z] :- [?y, :locatedIn, ?z], [?x, :locatedIn, ?y] . 5.1.2. Extensions of Datalog¶ A wide range of extensions of Datalog have been proposed and studied in the literature. In this subsection we describe the extensions of Datalog implemented in RDFox as well as the restrictions on them that have been put in place in order to ensure that the resulting language is semantically well-defined. Later on in this section we will provide many more examples of rules equipped with these extended features. 5.1.2.1. Negation-as-failure¶ Negation-as-failure allows us to make deductions based on information that is not present in the graph. For instance, using negation-as-failure we can write a rule saying that someone who works for a company but is not an employee of the company is an external contractor. [?x, :contractorFor, ?y] :- [?x, :worksFor, ?y], NOT [?x, :employeeOf, ?y] . Here, NOT represents a negation of a body atom. Let us consider the logical consequences of this rule when applied to the graph :mary :worksFor :acme . :mary :employeeOf :acme . :bob :worksFor :acme . On the one hand, we have that :mary works for :acme, and hence we can satisfy the first atom in the body by assigning :mary to ?x and :acme to ?y; however, :mary is also an employee of :acme, and hence the second condition is not satisfied, which means that we cannot derive that :mary is a contractor. On the other hand, we also have that :bob works for :acme and hence once again we can satisfy the first atom in the body, this time by assigning :bob to ?x and :acme to ?y; but now, we do not have a triple in the graph stating that :bob is an employee of :acme and hence we can satisfy the second condition in the body and derive the triple :bob :contractorFor :acme. Indeed, the query SELECT ?x ?y WHERE { ?x :contractorFor ?y } yields the expected result Note that negation typically means “absence of information”; indeed, we do not know for sure whether :bob is not an employee of :acme; we only know that this information is not available in the graph (neither explicitly, nor as a consequence of other rule applications). Negation-as-failure is intrinsically non-monotonic. In logic, this means that new information may invalidate previous deductions. For instance, suppose that :bob becomes an employee of :acme and, to reflect this, we add to our data graph the triple :bob :employeeOf :acme. Then, we can no longer infer that :bob is a contractor for :acme and the previous query will now return an empty answer. In contrast, rules in plain Datalog are monotonic: adding new triples to the graph cannot invalidate any consequences that we may have previously drawn; for instance, by adding a triple :england locatedIn :uk to the example in our previous section, cannot invalidate a previous inference such as :oxford locatedIn :england. 5.1.2.2. Aggregation¶ Aggregation is an important feature in query languages such as SQL or SPARQL. It allows one to compute numeric values (such as minimums, maximums, sums, counts or averages) on groups of solutions satisfying certain conditions (e.g., compute an average salary over the group of people working in the accounting department). In RDFox, it is possible to define relations based on the result of aggregate calculations. For instance, consider the following data. :bob :worksFor :accounting . :bob :salary "50000"^^xsd:integer . :mary :worksFor :hr . :mary :salary "47000"^^xsd:integer . :jen :worksFor :accounting . :jen :salary "60000"^^xsd:integer . :accounting rdf:type :Department . :hr rdf:type :Department . We can write an RDFox rule that computes the average salary of each department, and store the result in a newly introduced relation: [?d, :deptAvgSalary, ?z] :- [?d, rdf:type, :Department], [?x, :worksFor, ?d], [?x, :salary, ?s] ON ?d BIND AVG(?s) AS ?z) . Here, each group consists of a department with salaried employees, and for each group the rule computes an average of the salaries involved. In particular, suppose that we satisfy the first atom in the body by assigning value :accounting to variable ?d; then, we can satisfy the aggregate atom by grouping all employees working for :accounting (i.e., :bob and :jen), compute their average salary (55k) and assigning the resulting value to variable ?z; as a result, we can propagate the assignment of ?d to :accounting and of ?z to 55,000 to the head and derive the triple :accounting :deptAvgSalary "55000"^^xsd:integer . The query SELECT ?d ?s WHERE { ?d rdf:type :Department . ?d :deptAvgSalary ?s } then returns the expected answers :accounting 55000.0 . :hr 47000.0 . Similarly to negation, aggregation is also a non-monotonic extension of Datalog. In particular, if we were to add a new employee to the accounting department with a salary of 52k, then we would need to withdraw our previous inference that the average accounting salary equals 55k and adjust the average accordingly. 5.1.2.3. Built-in Functions¶ Datalog can be extended with a wide range of built-in functions. These include the functions defined in the SPARQL specification (e.g., arithmetic operations, string concatenation, and so on), as well as function symbols in predicate first-order logic via the special function SKOLEM. Let us start by introducing an example using the SKOLEM function, which can be used to capture function symbols in first-order predicate logic. Function symbols can be used to create objects that must exist in the world, but whose identity is unknown to us. As we will see later on, this is useful for representing relations of arity higher than two as well as for data integration and data Consider the following rules, where the second one uses the SKOLEM function: [?y, rdf:type, :Person] :- [?x, :marriedTo, ?y], [?x, rdf:type, :Person] . [?x, :hasMother, ?y] :- [?x, rdf:type, :Person], BIND(SKOLEM("motherOf", ?x) AS ?y) . The first rule is a simple Datalog rule stating that everyone married to a person is also a person. The second rule generates, for every person, a new object in the graph representing the person’s mother. To understand the meaning of the second rule, consider its application to a triple :mary rdf:type :Person. Here, we can bind ?x to :mary because :mary is a person; now, the application of SKOLEM to :mary generates a new object which represents the mother of :mary, and this new object is assigned as the value of variable ?y and propagated to the head of the rule. As a result, we derive a triple relating :mary to her mother via the :hasMother relation. Let us for now reconsider a variant of our “family” example data from the Getting Started guide, which contains the following triples in Turtle format: :peter :forename "Peter" ; a :Person ; :marriedTo :lois ; :gender "male" . :lois :forename "Lois" ; :gender "female" . :brian :forename "Brian" . # Brian is a dog And let us import our previous two rules. The following query asking for people having a mother SELECT ?x WHERE { ?x rdf:type :Person. ?x :hasMother ?y } returns :lois and :peter as answers. Indeed, :peter is a :Person according to the data, and hence by the second rule before he must have a mother. In turn, :lois is married to :peter, and hence by the first rule she must be a :Person, and by the second rule :lois must also have a mother. Let us consider another example of a built-in function, namely string concatenation. The following rule computes the full name of a person as the concatenation of their first name and their family [?x, :fullName, ?n] :- [?x, :firstName, ?y], [?x, :lastName, ?z], BIND(CONCAT(?y, ?z) AS ?n) . Consider the application of this rule to the graph consisting of the following triples: :peter :firstName "Peter" . :peter :lastName "Griffin" . Then, the query SELECT ?x ?y WHERE { ?x :fullName ?y } would return the expected answer An important consequence of introducing built-in functions is that rules are now capable of deriving triples mentioning new objects which did not occur in the input data (such as the mothers of Peter and Lois in our first example and “PeterGriffin” in our second example). This is not possible using plain Datalog rules, where the application of a rule may generate new triples, but these triples can only mention objects that were present in the input data. If users are not careful, they may write rules using built-in functions that generate infinitely many new constants and hence there may be infinitely many triples that logically follow from the rules and a (finite) input graph. For instance, consider our previous example rules [?y, rdf:type, :Person] :- [?x, :marriedTo, ?y], [?x, rdf:type, :Person] . [?x, :hasMother, ?y] :- [?x, rdf:type, :Person], BIND(SKOLEM("motherOf", ?x) AS ?y) . Suppose that we add another rule saying that the mother of a person must also be a person: [?y, rdf:type, :Person] :- [?x, :hasMother, ?y] . If we apply these rules to the input graph consisting of :peter rdf:type :Person . we will derive an infinite “chain” of triples, where the first one relates :peter with his mother, the second one relates peter’s mother to his grand-mother, and so on. In such cases, RDFox will run out of resources trying to compute infinitely many new triples and will therefore not terminate. This is not due to a limitation of RDFox as a system, but rather to the well-known fact that Datalog becomes undecidable once extended with built-in functions that can introduce arbitrarily many fresh objects. 5.1.2.4. Equality¶ Equality is a special binary predicate that can be used to identify different resources as representing the same real-world object. The equality predicate is referred to as owl:sameAs in the standard W3C languages for the Semantic Web. In addition to equality, W3C standard languages also define an inequality predicate, which is referred to as owl:differentFrom. By default, two resources with different names are not assumed to be actually different. For instance, resources called :marie_curie and :marie_sklodowsca may refer to the same object in the world (the renowned scientist Marie Curie). In logic terms we typically say that by default we are not making the unique name assumption (UNA). In some applications, however, it makes sense to make such assumption, and the effect of making the UNA is that we will have implicit owl:differentFrom statements between all pairs of resources mentioned in the data. In RDFox we can enable the use of equality by initializing a store accordingly. For instance, using the shell, we can initialize a data store with equality reasoning turned on using the shell command initializes a data store with equality reasoning and no UNA. Extensions of Datalog with equality allow for the equality and inequality predicates to appear in rules and data. For instance, consider the following triples, where the second triple represents the fact that the URIs :marie_curie and :marie_sklodowsca refer to the same person. :marie_curie rdf:type :Scientist . :marie_curie owl:sameAs :marie_sklodowsca . A query asking RDFox for all scientists SELECT ?x WHERE { ?x rdf:type :Scientist } will return both :marie_curie and :marie_sklodowsca as a result. Equality and inequality can also be used in rules. For instance, the following rule establishes that a person can only have one biological mother [?y, owl:sameAs, ?z] :- [?x, :hasMother, ?y], [?x, :hasMother, ?z] . The application of this rule to the graph :irene_curie :hasMother :marie_curie . :irene_curie :hasMother :marie_sklodowsca . identifies :marie_curie and :marie_sklodowsca :as the same person. The joint use of equality and inequality can lead to logical contradictions. For instance, the application of the previous rule to a graph consisting of the following triples would lead to a :irene_curie :hasMother :marie_curie . :irene_curie :hasMother :eve_curie . :marie_curie owl:differentFrom :eve_curie . Indeed, the application of the rule derives :marie_curie owl:sameAs :eve_curie, which is in contradiction with the data triple :marie_curie owl:differentFrom :eve_curie. Such contradictions can be identified in RDFox by querying for the instances of the special owl:Nothing predicate, which is also borrowed from the W3C standard OWL. The query SELECT ?x WHERE { ?x rdf:type owl:Nothing } returns :marie_curie and :eve_curie as answers. This can be interpreted by the user as: “resources :marie_curie and :irene_curie are involved in a logical contradiction”. 5.1.2.5. Named Graphs and N-ary Relations¶ In all our previous examples, all atoms in rules are evaluated against the default RDF graph. RDFox also supports named graphs, which can be created either implicitly, by importing an RDF dataset encoded as TriG or N-Quads, or explicitly, as shown in the following example that creates the named graph :Payroll. tupletable add :Payroll type triples Named graphs can also be used in the body and the head of rules, and hence it is possible to derive new triples as the result of rule application and add them to graphs other than the default graph. Rules can refer only to named graphs already created using one of the ways described above. For instance, consider the following rule: :Payroll(?id, :monthlyPayment, ?m) :- [?id, rdf:type, :Employee], :HR(?id, :yearlySalary, ?s), BIND(?s / 12 AS ?m) . This rule joins information from the default graph and the named graph called HR, and it inserts consequences into the named graph called :Payroll. Specifically, The first body atom of the rule identifies IDs of employees in the default RDF graph. The second body atom is a general atom: it is evaluated in the named graph called :HR, and it matches triples that connect IDs with their yearly salaries. The head of the rule contains a general atom that refers to the named graph called :Payroll, and it derives triples that connect IDs of employees with their respective monthly payments. In particular, given as data :HR(:a, :yearlySalary, "55000"^^xsd:integer) . :a rdf:type :Employee . the rule will compute the monthly payment for employee :a. Then, the query SELECT ?s ?p ?o WHERE { GRAPH :Payroll{ ?s ?p ?o } } will correctly return the monthly payment for employee :a :a :monthlyPayment 4583.333333333333333 . In addition to referring to graphs other than the default graph, RDFox can also directly represent external data as tuples of arbitrary arity (not just triples) using the same syntax as named graphs. Atoms representing such data, however, are only allowed to be used in the body of rules. Details on how to access external data from RDFox are given in Section 6.6. 5.2. Materialization-based Reasoning¶ The main computational problem solved by RDFox is that of answering a SPARQL 1.1 query with respect to an RDF graph and a set of rules. To solve this problem, RDFox uses materialization-based reasoning to precompute and store all triples that logically follow from the input graph and rules in a query-independent way. Both the process of extending the input graph with such newly derived triples and its final output are commonly called materialization. After such preprocessing, queries can be answered directly over the materialization, which is usually very efficient since the rules do not need to be considered any further. Materializations can be large, but they can usually be stored and handled on modern hardware as the available memory is continually increasing. The main challenge of this approach to query answering is that, whenever data triples and/or rules are added and/or deleted, the “old” materialization must be replaced with the “new” materialization that contains all triples that follow from the updated input. In this setting, deletion of triples is restricted to those that are explicit in the input graph and hence one does not consider deletion of derived triples—a complex problem known in the literature as belief revision or view update. For instance, given as input the RDF graph :oxford :locatedIn :oxfordshire . :oxfordshire :locatedIn :england . :england :locatedIn :uk . and the familiar rule [?x, :locatedIn, ?z] :- [?x, :locatedIn, ?y], [?y, :locatedIn, ?z] . RDFox will compute the corresponding materialization, which consists of triples :oxford :locatedIn :oxfordshire . :oxford :locatedIn :england . :oxford :locatedIn :uk . :oxfordshire :locatedIn :england . :oxfordshire :locatedIn :uk . :england :locatedIn :uk . RDFox will now handle each SPARQL 1.1 query issued against the input graph and rule by simply evaluating the query directly over the materialization, thus avoiding the expensive reasoning at query An update could delete a triple explicitly given in the input graph such as the triple :oxfordshire :locatedIn :england, in which case the new materialization consists only of triples :oxford :locatedIn :oxfordshire . :england :locatedIn :uk . since the rule is no longer applicable after deletion. In contrast, deleting a derived triple such as :oxford :locatedIn :uk . is not allowed since this triple was not part of the original input. RDFox implements sophisticated algorithms for both efficiently computing materializations and maintaining them under addition/deletion updates that may affect both the data and the rules. All these algorithms were developed after years of research at Oxford and have been extensively documented in the scientific literature. 5.3. Restrictions on Rule Sets¶ The rule language of RDFox imposes certain restrictions on the structure of rule sets. These restrictions ensure that the materialization of a set of rules and an RDF graph is well-defined and In particular, the semantics (i.e., the logical meaning) of rule sets involving negation-as-failure and/or aggregation is not straightforward, and numerous proposals exist in the scientific literature. There is, however, a general consensus for rule sets in which the use of negation-as-failure and aggregation are stratified. Informally, stratification conditions ensure that there are no cyclic dependencies in the rule set involving negation or aggregation. Several variants of stratification have been proposed, where some of them capture a wider range of rule sets than others; they all, however, provide similar guarantees. We next describe the stratification conditions adopted in RDFox by means of examples. For this, let us consider the following rules mentioning negation-as-failure: [?x, :contractorFor, ?y] :- [?x, :worksFor, ?y], NOT [?x, :employeeOf, ?y] . [?x, :employeeOf, :acme] :- [?x, :worksFor, :acme] . The first rule says that people working for a company who are not employees of that company act as contractors. The rule establishes two dependencies. The first dependency tells us that the presence of a triple having :worksFor in the middle position may contribute to triggering the derivation of a triple having contractorFor in the middle position. In turn, the second dependency tells us that the absence of a triple having :employeeOf in the middle position may also contribute to the derivation of a triple having contractorFor in the middle position. The second rule tells us that everyone working for :acme is an employee of :acme. This rule establishes one dependency, namely the presence of a triple having :worksFor in the middle position and :acme in the rightmost position may trigger the derivation of a triple having :employeeOf in the middle position and :acme in the rightmost position. We can keep track of such dependencies by means of a dependency graph. The nodes of the graph are obtained by replacing variables in individual triple patterns occurring in the rules with the special symbol ANY, which intuitively indicates that the position of the triple where it occurs can adopt any constant value, and leaving constants as they are. In particular, our example rules yield a graph having the following five vertices v1—v5: v1: ANY :contractorFor ANY v2: ANY :worksFor ANY v3: ANY :employeeOf ANY v4: ANY :worksFor :acme v5: ANY :employeeOf :acme The (directed) edges of the graph lead from vertices corresponding to body atoms to vertices corresponding to head atoms and can be either “regular” or “special”. Special edges witness the presence of a dependency involving aggregation or negation-as-failure; in our case, we will have a single special edge (v3, v1). In turn, each dependency that is not via negation-as-failure/aggregation generates a regular edge; in our case, we will have regular edges (v2,v1) and (v4, v5). Finally, the graph will also contain bidirectional regular edges between nodes that unify in the sense of first-order logic: since [?x, :employeeOf, ?y] and [?x, :employeeOf, :acme] unify, we will have regular edges (v3,v5) and (v5, v3); similarly, we will also have regular edges (v2,v4) and (v4,v2). Our two example rules are stratified and hence are accepted by RDFox; this is because there is no cycle in the dependency graph involving a special edge (indeed, all cycles involve regular edges Now suppose that the add the following rule: [?x, :employeeOf, ?y] :- [?x, :worksFor, ?y], NOT [?x, :contractorFor, ?y] . which says that people working for a company who are not contractors for the company must be employees of the company. The addition of this rule does not change the set of nodes in the dependency graph; however, it adds two more edges: a regular edge (v2, v3) and a special edge (v1, v3). As a result, we now have a cycle involving a special edge and the rule set is no longer stratified, which means that the rule set will be rejected by RDFox as a result. Due to stratification conditions, the use of the special equality relation owl:sameAs in rules precludes the use of aggregation or negation-as-failure. Consider the following rule set, where the second rule tells us that a person cannot be an employee of two different companies: [?x, :contractorFor, ?y] :- [?x, :worksFor, ?y], NOT [?x, :employeeOf, ?y] . [?y, owl:sameAs, ?z] :- [?x, :employeeOf, ?y], [?x, :employeeOf, ?z] . This rule set will be rejected by RDFox as the rule set mentions both NOT and owl:sameAs. Informally, this is because equality can affect every single relation, which precludes stratification in most In addition to stratification conditions, RDFox also requires certain restrictions to the structure of rules which make sure that each rule can be evaluated by binding the variables in the body of the rule to a data graph. To see an example where things go wrong consider the rule: [?x, :worksFor, ?y] :- [?y, rdf:type, :Department] . The rule cannot be evaluated by first matching the body to the data graph and then propagating the variable bindings to the head; indeed, rule body to an RDF graph will always leave variable ?x of the rule unbound and hence the triple that must be added as a result of applying the rule to the data is undefined. As a result, this rule will be rejected by RDFox. Binding restrictions in RDFox are rather involved given that the underpinning rule language is rich and there are many subtle corner cases. However, rules accepted by the parser can always be unambiguously evaluated. 5.4. The Rule Language of RDFox¶ This section formally specifies the syntax of rules in RDFox. As already mentioned, the rule language supported by RDFox extends Datalog with stratified negation, stratified aggregation, built-in functions, and more, so as to provide additional data analysis capabilities. A rule has the form H1 ,… , Hj :- L1 ,… , Lk . where the formula to the left of the :- operator is the rule head and the formula to the right is the rule body. Informally, a rule says “if L1, …, and Lk all hold, then H1, …, and Hj hold as well”. Each Hi with 1 ≤ i ≤ j is an atom, and each Li with 1 ≤ i ≤ k is a literal. A literal is an atom, a negation, a bind literal, a filter literal, or an aggregate literal. 5.4.1. Atom¶ An atom is either a default graph RDF atom or a general atom. General atoms can be used to access data in named graphs and mounted data sources. 5.4.1.1. Default Graph RDF Atom¶ A default graph RDF atom has the form [t1, t2, t3] where ti is a term, which is either an RDF resource or a variable. To distinguish between these two kinds of terms, RDFox requires variables to start with the ? symbol. Also note that when t2 is an IRI, atom [t1,t2,t3] can be written alternatively as t2[t1,t3] moreover, when t2 is the special IRI “rdf:type” and t3 is also an IRI, atom [t1,t2,t3] can be written alternatively as t3[t1]. Example A simple rule with default graph RDF atoms only a1:Person[?x] :- a1:teacherOf[?x, ?y] . As we discussed earlier, this is equivalent to: [?x, rdf:type, a1:Person] :- [?x, a1:teacherOf, ?y] . The above rule has only one atom in the rule body and one atom in the rule head. Informally, the rule says that if x is a teacher of y, then x must be a person. Both the body and the head are matched in the default RDF graph. 5.4.1.2. General Atom¶ A general atom has the form A(t1, …, tn) with n ≥ 1 where A is an IRI denoting the name of a tuple table and t1, …, tn are terms. Each named RDF graph is represented in RDFox as a tuple table; thus, general atoms can be used to refer to data in named graphs. Example A rule with both RDF and general atoms [?id, fg:firstName, ?fn], [?id, fg:lastName, ?ln] :- fg:Person(?id, ?fn, ?ln) . The general atom in the rule body refers to a tuple table containing three columns. The same rule can be written alternatively as the following. fg:firstName[?id, ?fn], fg:lastName[?id, ?ln] :- fg:Person(?id, ?fn, ?ln) . 5.4.2. Negation¶ Negation is useful when the user wants to require that certain conditions are not satisfied. A negation has one of the following forms, where k ≥ 2, j ≥ 1, B1, …, Bk are atoms, and ?V1, …, ?Vj are NOT B1 NOT(B1, …, Bk) NOT EXIST ?V1, …, ?Vj IN B1 NOT EXIST ?V1, …, ?Vj IN (B1, …, Bk) NOT EXISTS ?V1, …, ?Vj IN B1 NOT EXISTS ?V1, …, ?Vj IN (B1, …, Bk) Note RDFox will reject rules that use negation in all equality modes other than off (see Equality). Example Using negation of the first form a1:stranger[?x, ?y] :- NOT a1:friend[?x, ?y] . Example Using negation of the last form a1:basic[?x] :- NOT EXISTS ?y IN ( a1:subcomponent[?y, ?x] ) . Informally, the rule says that if X is a component and it does not have any subcomponents, then X is a basic component. 5.4.3. Bind Literal¶ A bind literal evaluates an expression and assigns the value of the expression to a variable, or compares the value of the expression with a term. A bind literal is of the following form, where exp is an expression and t is a term not appearing in exp. An expression can be constructed from terms, operators, and functions. The operators and functions supported here are the same as those supported in RDFox SPARQL queries; refer to Section 4 for a detailed comparison between SPARQL 1.1 functions and the ones implemented in RDFox. An important difference with SPARQL 1.1 is that, for each bind literal in a rule, every variable used in exp must be bound either by a body atom, or by another bind literal in the rule. Example Using bind literals cTemp[?x, ?z] :- fTemp[?x, ?y], BIND ((?y - 32) / 1.8 AS ?z) . The bind literal in the above rule converts Fahrenheit degrees to Celsius degrees. 5.4.4. Filter Literal¶ Rule evaluation can be seen as the process of finding satisfying assignments for variables appearing in the rule. A filter literal is of the following form, and it restricts satisfying assignments of variables to those for which the expression exp evaluates to true. Thus, when the user writes a filter literal, the expression is expected to provide truth values. As with bind literals, every variable used in exp must be bound either by a body atom or by a bind literal. Example Using filter literals :PosNum[?x] :- :Num[?x], FILTER(?x > 0) The rule says that a number is positive if it is larger than zero. 5.4.5. Aggregate Literal¶ An aggregate literal applies an aggregate function to groups of values to produce one value for each group. An aggregate literal has the form AGGREGATE(B1, …, Bk ON ?X1, …, ?Xj BIND f1(exp1) AS t1 … BIND fn(expn) AS tn) where k ≥ 1, j ≥ 0, n ≥ 1, and • B1, …, Bk are atoms, • ?X1, …, ?Xj are variables appearing in B1, …, Bk, • exp1, …, expn are expressions constructed using variables from B1, …, Bk, • f1, …, fn are aggregate functions, and • t1, …, tn are constants or variables that do not appear in B1, …, Bk. Sometimes the user might be interested in computing an aggregate value from a set of distinct values. In this case, the keyword “distinct” can be used in front of an expression expi. Note RDFox will reject rules that use aggregation in all equality modes other than off (see Equality). Example Using aggregate literals :minTemp[?x, ?z] :- :temp[?x, ?y] ON ?x BIND MIN(?y) AS ?z) . Informally, the above rule computes a minimum temperature for each city. Example Using the keyword distinct :familyFriendCnt[?x, ?cnt] :- :hasMember[?x, ?y], :hasFriend[?y, ?z] ON ?x BIND COUNT(DISTINCT ?z) AS ?cnt) . This rule counts the number of different friends for each family; a person is considered a friend of a family if he is a friend of a member of the family. 5.5. Common Uses of Rules in Practice¶ This section describes common uses of rules and reasoning in practical applications. This section will be especially useful for practitioners who are seeking to understand how the reasoning capabilities provided by RDFox can enhance graph data management. 5.5.1. Computing the Transitive Closure of a Relation¶ In many other situations, we may have a relation that is not transitive, but we are interested in defining a different relation that “transitively closes” it. Consider a social network where users follow other users. The graph may be represented by the triples next. :alice :follows :bob . :bob :follows :charlie . :diana :follows :alice . A common task in social networks is to use existing connections to suggest new ones. For example, since Alice follows Bob and Bob follows Charlie, the system may suggest that Alice follow Charlie as well. Likewise, the system may suggest that Diana follow Bob; but then, if Diana follows Bob, she may also want to follow Charlie. We would like to construct an enhanced social network that contains the actual follows relations plus all the suggested additional links. The links in such enhanced social network represent the transitive closure of the original follows relation, which relates any pair of people who are connected by a path in the network. The transitive closure of the follows relation can be computed using RDFox by defining the following two rules: [?x, :followsClosure, ?y] :- [?x, :follows, ?y] . [?x, :followsClosure, ?z] :- [?x, :follows, ?y], [?y, :followsClosure, ?z] . The first rule “copies” the contents of the direct follows relation to the new relation. The second rule implements the closure by saying that if a person p1 directly follows p2 and p2 (directly or indirectly) follows person p3, then p1 (indirectly) follows p3. If we now issue the SPARQL query SELECT ?x ?y WHERE { ?x :followsClosure ?y } we obtain the expected results. :diana :charlie . :alice :charlie . :diana :bob . :alice :bob . :bob :charlie . :diana :alice . Finally, we may also be interested in computing the suggested links that were not already part of the original follows relation. This can be achieved, for instance, by issuing the SPARQL query SELECT ?x ?y WHERE { ?x :followsClosure ?y . FILTER NOT EXISTS { ?x :follows ?y } The results are the expected ones. :diana :charlie . :alice :charlie . :diana :bob . 5.5.2. Composing Relations¶ An important practical use of knowledge graphs is to power Open Query Answering (Open QA) applications, where the user would pose a question in natural language, which is then automatically answered against the graph. Open QA systems often struggle to interpret questions that involve several “hops” in the graph. For instance, consider the graph consisting of the triples given next. :douglas_adams :bornIn :uk . :uk rdf:type :Country . A user may ask the Open QA system for the country of birth of Douglas Adams. To obtain this information, the system would need to construct a query involving two hops in the graph. In particular, the SPARQL query SELECT ?c WHERE { :douglas_adams :bornIn ?c . ?c rdf:type :Country . would return :uk as answer. The results of the open QA system would be greatly enhanced if the desired information had been available in just a single hop. RDFox rules can be used to provide a clean solution in this situation. In particular, we can use rules to define a new :countryOfBirth relation that provides a “shortcut” for directly accessing the desired information. [?x, :countryOfBirth, ?y] :- [?x, :bornIn, ?y], [?y, rdf:type, :country] . The rule says that, if a person p is born in a place c, and that place is a country, then c is the country of birth of p. As a result, RDFox would derive that the country of birth of Douglas Adams is the UK. The Open QA system would now only need to construct the following simpler query, which involves a single hop in the graph, to obtain the desired information. SELECT ?x ?y WHERE { ?x :countryOfBirth ?y } 5.5.3. Representing SPARQL 1.1 Property Paths¶ As already mentioned, RDFox does not currently support SPARQL 1.1 property paths. It is, however, possible to encode property paths as rules. Informally, a property path searches through the RDF graph for a sequence of IRIs that form a path conforming to an regular expression. For instance, the following query in our familiar social network example SELECT ?x WHERE { ?x :follows+ :bob } returns the set of people that follow :bob directly or indirectly in the network. In this case, the property path (?x :follows+ :bob) represents a path of arbitrary length from any node to :bob via the :follows relation, where the “+” symbol is the familiar one in regular expressions indicating “one or more occurrences”. Property paths representing paths of arbitrary length are closely related to computing the transitive closure of a relation. In particular, the following rules would compute the set of “Bob followers” as those who follow :bob directly or indirectly. [?x, rdf:type, :BobFollower] :- [?x, :follows, :bob] . [?x, rdf:type, :BobFollower] :- [?x, :follows, ?y], [?y, rdf:type, :BobFollower] . The simple query SELECT ?x WHERE { ?x rdf:type :BobFollower } gives us the same answers as the original query using property paths. 5.5.4. Defining a Query as a View¶ When querying a knowledge graph, we may be interested in materializing the result of a SPARQL query as a new relation in the graph. This can be the case, for instance, if the query is interesting on its own right, can be used to define new relations, or simplify the formulation of additional queries. We can use an RDFox rule for this purpose, where the SPARQL query that we want to materialize in the graph is represented in the body of the rule and the answer as a new relation in the head. For instance, consider again the previous example of a social network, where we were interested in suggesting new followers (recall the Transitive Closure usage pattern). Recall that we used a query SELECT ?x ?y WHERE { ?x :followsClosure ?y FILTER NOT EXISTS { ?x :follows ?y } to obtain the suggested links that were not already part of the original follows relation. We may be interested in storing this query as a separate relation in the graph. For this, we could rewrite the query as a rule defining a new :suggestFollows relation: [?x, :suggestFollows, ?y] :- [?x, :followsClosure, ?y], NOT [?x, :follows, ?y] . The body of the rule represents the where clause in the query. The filter expression in the query is captured by the negated atom. Then, the simple query SELECT ?x ?y WHERE { ?x :suggestFollows ?y } will give us the expected answers :diana :charlie . :alice :charlie . :diana :bob . It is worth pointing out that only a subset of SPARQL 1.1 queries can be transformed into an RDFox rule in the way described. In particular, all queries involving basic graph patterns, filter expressions, negation (NOT EXISTS, MINUS) and aggregation can be represented. In contrast, SPARQL queries with more than two answer variables, or using OPTIONAL or UNION in the WHERE clause cannot be represented as rules. 5.5.5. Performing Calculations and Aggregating Data¶ RDFox rules can be used to perform computations over the data in a knowledge graph and store the results in a different relation. For instance, consider a graph with the following triples, specifying the height of different people in cm. :alice :height "165"^^xsd:integer . :bob :height "180"^^xsd:integer . :diana :height "168"^^xsd:integer . :emma :height "165"^^xsd:integer . We would want to compute their height in feet, and record it in the graph by adding suitable triples over a new relation. For this, we can import the following RDFox rule. [?x, :heightInFeet, ?y] :- [?x, :height, ?h], BIND(?h*0.0328 AS ?y) . The BIND construct evaluates an expression and assigns the value of the expression to a variable. We can now query the graph for the newly introduced relation to obtain the list of people and their height in both centimeters and feet. SELECT ?x ?m ?f WHERE { ?x :height ?m . ?x :heightInFeet ?f . and obtain the expected answers :emma 165 5.412 . :diana 168 5.5104 . :bob 180 5.904 . :alice 165 5.412 . Rules can also be used to compute aggregated values (e.g., sums, counts, averages, etc) over the graph and store the results in a new relation. :alice :follows :bob . :bob :follows :charlie . :diana :follows :alice . :charlie :follows :alice. :emma :follows :bob . :alice rdf:type :Person . :bob rdf:type :Person . :charlie rdf:type :Person . :diana rdf:type :Person . :emma rdf:type :Person . The graph contains also information about people’s hobbies, as represented by the following triples. :alice :likes :tennis . :bob :likes :music . :diana :likes :swimming . :charlie :likes :football . :emma :likes :reading . :tennis rdf:type :Sport . :swimming rdf:type :Sport . :football rdf:type :Sport . We would like to count, for each person, the number of followers who enjoy practicing a sport. RDFox provides aggregation constructs which enable these kinds of computations. [?y, :sportyFollowerCnt, ?cnt] :- [?y, rdf:type, :Person], [?x, :follows, ?y], [?x, :likes, ?w], [?w, rdf:type, :Sport] ON ?y BIND COUNT(DISTINCT ?x) AS ?cnt) . In particular, the rule states that, if p1 Is a person, then count all distinct people who follow p1 and who like some sport, store the result in a count, and store the result in the new :sportyFollowerCnt relation. By issuing the following SPARQL query SELECT ?x ?cnt WHERE { ?x :sportyFollowerCnt ?cnt } We obtain that Bob has one sporty follower (Alice), whereas Alice has 2 sporty followers (Diana and Charlie). This type of computation is compatible with the computation of the transitive closure of a relation. For instance, we may be interested in counting the number of (direct or indirect) followers who are sporty. For this, we can use RDFox rules to compute the transitive closure of the follows relation: [?x, :followsClosure, ?y] :- [?x, :follows, ?y] . [?x, :followsClosure, ?z] :- [?x, :follows, ?y], [?y, :followsClosure, ?z] . And use the following rule to compute the desired count. [?y, :sportyFollowerClosureCnt, ?cnt] :- [?y, rdf:type, :Person], [?x, :followsClosure, ?y], [?x, :likes, ?w], [?w, rdf:type, :Sport] ON ?y BIND COUNT(DISTINCT ?x) AS ?cnt) . The following SPARQL query SELECT ?x ?cnt WHERE { ?x :sportyFollowerClosureCnt ?cnt } Then provides the following results. :charlie 3 . :bob 3 . :alice 3 . We observe that the count for Charlie does not seem quite right. Charlie is followed directly only by Bob (who is not sporty); however, Bob is followed by Alice (a sporty person) and Alice is followed by Diana (another sporty person). Naturally, we would have obtained a count of 2; however, Charlie also follows Alice and hence he transitively follows himself, thus the count of 3!. If we wanted to prevent this situation, we can modify the second rule implementing transitive closure to eliminate self-loops as follows: [?x, :followsClosure, ?z] :- [?x, :follows, ?y], [?y, :followsClosure, ?z], FILTER(?x != ?z) . Now, our query before yields the expected results :charlie 2 . :bob 3 . :alice 2 . 5.5.6. Arranging Concepts and Relations in a Hierarchical Structure¶ A common use of ontologies is to arrange concepts (called classes in OWL 2) and relations (called properties in OWL 2) in a subsumption hierarchy. For instance, we may want to say that dogs and cats are mammals and that mammals are animals. Such subsumption relationships can be easily represented using RDFox rules. [?x, rdf:type, :Mammal] :- [?x, rdf:type, :Dog] . [?x, rdf:type, :Mammal] :- [?x, rdf:type, :Cat] . [?x, rdf:type, :Animal] :- [?x, rdf:type, :Mammal] . Suppose that we have a graph with the following triples: :max rdf:type :Dog . :coco rdf:type :Cat . :teddy rdf:type :Mammal . Then, RDFox will deduce that Max and Coco are both mammals and therefore also animals, and also that Teddy is an animal. In particular, the query SELECT ?x WHERE { ?x rdf:type :Animal } yields the expected results It is also often the case that concepts are “assigned” certain properties. For instance, mammals have children which are also mammals. This is known as a range restriction in the ontology jargon, and can be represented using the following RDFox rule [?y, rdf:type, :Mammal] :- [?x, rdf:type, :Mammal],[?x, :hasChild, ?y] . If we now extend the graph with the following triples. :max :hasChild :betsy . :coco :hasChild :minnie . RDFox will derive automatically that both Betsy and Minnie are also mammals (and therefore also animals). Indeed, the query SELECT ?x WHERE { ?x rdf:type :Mammal } Will yield the expected results. :max . :betsy . :minnie . :teddy . :coco . In many applications, it is also useful to represent subsumption relations between the edges in a knowledge graph, to specify that one relation is more specific than the other. For instance, we may want to say that the :hasDaughter relation is more specific than the :hasChild relation. This can be represented using the following RDFox rule. [?x, :hasChild, ?y] :- [?x, :hasDaughter, ?y] . If we now add the following triple to the graph :betsy :hasDaughter :luna . RDFox can infer that Luna is the child of Betsy and therefore she is also a mammal, and an animal. Indeed, the previous query listing all mammals will now also include :luna as an answer. 5.5.7. Detecting Cyclic Relations¶ A common task in knowledge graphs is to identify cyclic relationships. For instance, partonomy relations are typically acyclic (e.g., if an engine is part of a car we would not expect the car also to be part of the engine!). In these cases, cycle detection may be needed to detect errors in the graph and thus provide data validation. A simple case of this pattern is when the relation we are checking for cyclicity is naturally transitive. Such is the case, for instance of the partOf relation. Consider the following graph: :a :partOf :b . :b :partOf :c . :c :partOf :a . The graph contains a cyclic path :a -> :b -> :c -> :a. via the :partOf relation. The relationship is naturally transitive and hence we can use the corresponding pattern to define it as such. [?x, :partOf, ?z] :- [?x, :partOf, ?y], [?y, :partOf, ?z] . The following SPARQL query now gives us which elements are part of others (directly or indirectly) SELECT ?x ?y WHERE { ?x :partOf ?y } Which gives us the following results :a :a . :c :c . :b :b . :a :c . :b :a . :c :b . :c :a . :b :c . :a :b . Cyclicity manifests itself by the presence of self-loops (e.g., :a is derived to be a part of itself ). Hence, it is possible to detect that the part of relation is cyclic by issuing the following SPARQL query. Where the result comes true since the partonomy relation does have a self loop. Alternatively, we could have defined the following additional rule. [:partOf, rdf:type, :CyclicRelation] :- [?x, :partOf, ?x] . Which tells us that if any object is determined to be a part of itself, then the partonomy relation is cyclic. We can now issue the following SPARQL query, which retrieves the list of cyclic relations in the graph, which in this case consists of the relation :partOf. SELECT ?x WHERE { ?x rdf:type :CyclicRelation } 5.5.8. Defining Attributes and Relationships as Mandatory¶ In knowledge graphs, data is typically incomplete. For instance, suppose that the data in a knowledge graph has been obtained from a variety of sources. The graph has different types of information about people, such as their name, job title and so on. We notice that some people in the graph have a date of birth, whereas others do not. Because of the nature of our application, we would like to have the date of birth of each person represented in the graph, and would like to find out which people are missing this information; that is, we would like to make the presence of a date of birth value mandatory for every person in the graph. In relational databases this is typically solved by declaring an integrity constraint. Consider the following graph. :alice :dob "11/01/1987"^^xsd:string . :alice rdf:type :Person . :bob :dob "23/07/1980"^^xsd:string . :bob rdf:type :Person . :diana :height "168"^^xsd:integer . :diana rdf:type :Person . :emma :dob "10/02/1965"^^xsd:string . :emma rdf:type :Person . :max rdf:type :Dog . We can use the following rule to record absence of a date of birth for people. [?x, rdf:type, owl:Nothing] :- [?x, rdf:type, :Person], NOT EXISTS ?y IN ([?x, :dob, ?y]) . The rule says that if a person p lacks a date of birth d, then p incurs in a constraint violation. The constraint violation is recorded by making person p an instance of the special owl:Nothing unary relation, which is also present in the OWL 2 standard. The following SPARQL query then correctly reports that Diana violates the constraint (whereas Max does not because he is a dog). SELECT ?x WHERE { ?x rdf:type owl:Nothing } This type of computation combines well with type inheritance. For instance, suppose that we add the following triple: :charlie rdf:type :Student . And the following rule stating that every student is a person [?x, rdf:type, :Person] :- [?x, rdf:type, :student] . Then, the previous query will give as results :charlie :dob . :diana :dob . Indeed, since Charlie is a student, he is also a person; furthermore, Charlie lacks date of birth information. The meaning of the special class owl:Nothing is different in RDFox and the OWL 2 standard. If one can derive from an OWL 2 ontology that that an object is an instance of owl:Nothing, then the ontology is inconsistent and querying the ontology becomes logically meaningless. Thus, the OWL 2 standard would require users to modify the data and/or ontology to fix the inconsistency prior to attempting to issue queries. Furthermore, it is worth noting that in OWL 2 it is not possible to write statements that check for “absence of information”; this is due to the monotonicity properties of OWL 2 as a fragment of first-order logic. In contrast, in RDFox, deriving an instance of owl:Nothing does not lead to a logical inconsistency and the answers to queries remain perfectly meaningful. In the pattern we have described, querying for owl:Nothing simply provides users with the list of all nodes in the graph for which mandatory information is missing. As a result, the user is warned rather than prevented from carrying out a task such as issuing a query. For instance, if we were to ask a query to RDFox such as the following SELECT ?x WHERE { ?x rdf:type :Person } We would still obtain the expected results (see below) despite the fact that there are constraint violations in the data. :alice . :charlie . :emma . :diana . :bob . This behavior is also different from relational databases, where the system would typically reject updates that lead to a constraint violation. As already mentioned, RDFox continues to operate normally and would accept any updates although constraints are being violated. Of course, users are encouraged to query the system in order to detect and rectify such violations. 5.5.9. Expressing Defaults and Exceptions¶ Rules can be used to write default statements (that is, statements that normally hold in the absence of additional information). This is especially useful to represent exceptions to rules, which is important, for instance, in legal domains. Consider the following graph saying that Tweety is a bird. Birds typically fly; that is, in the absence of additional information, the fact that Tweety is a bird constitutes sufficient evidence to believe that Tweety flies. There may, however, be exceptions. For instance, penguins are non-flying birds, and hence if we were to find out that Tweety is a penguin, then we would need to withdraw our default assumption that Tweety flies. RDFox rules can be used to model this type of default reasoning. In particular, consider a rule saying that birds fly unless they are penguins. [?x, rdf:type, :FlyingAnimal] :- [?x, rdf:type, :Bird], NOT [?x, rdf:type, :Penguin] . We can now issue a SPARQL query asking for the list of flying animals SELECT ?x WHERE { ?x rdf:type :FlyingAnimal } and obtain :tweety as an answer. Suppose now that we were to extend the graph with the following triple :tweety rdf:type :Penguin . Then, the same query would now give us an empty set of answers since, in the light of the new evidence, we can no longer conclude that Tweety flies. 5.5.10. Restructuring Data¶ Rules can be used to transform the structure of the data in a knowledge graph (e.g., by adding properties to a relationship). Consider the following knowledge graph representing employees and their employer. :alice :worksFor :oxford_university . :bob :worksFor :acme . :charlie :worksFor :oxford_university . :charlie :worksFor :acme . Suppose that we now want to expand the graph by adding further information about the employment, such as the salary and the start date. This information is relative to each specific employment of an employee; for instance, Charlie will have a different salary and start date for his employment with Oxford University and his employment with Acme. We can use RDFox rules to automatically restructure the data in the graph to account for the new information. [?z, rdf:type, :Employment], [?z, :Employee, ?x], [?z, :employer, ?y] :- [?x, :worksFor, ?y], BIND(SKOLEM("Employment", ?x, ?y) AS ?z) . For each edge connecting a person x with their employer y, the rule creates a new employment instance z as an RDF blank node, and relates it to employee x and employer y. The name of generated instance starts with the underscore character indicating that it is a blank node, followed by the string “Employment” and unique identifiers for the corresponding employee x and employer y. The query SELECT ?x ?y ?z WHERE { ?x ?y ?z . ?x rdf:type :Employment } gives us the new triples generated by the application of the previous rule _:Employment_116_200 :employer :acme . _:Employment_116_200 :Employee :bob . _:Employment_116_200 rdf:type :Employment . _:Employment_113_199 :employer :oxford_university . _:Employment_113_199 :Employee :alice . _:Employment_113_199 rdf:type :Employment . _:Employment_156_199 :employer :oxford_university . _:Employment_156_199 :Employee :charlie . _:Employment_156_199 rdf:type :Employment . _:Employment_156_200 :employer :acme . _:Employment_156_200 :Employee :charlie . _:Employment_156_200 rdf:type :Employment . It is important to notice that the generated SKOLEM IDs such as cannot be considered stable across runs (or RDFox versions) since they are generated based in the dictionary IDs of the arguments. Further data relative to an employment, such as associated salary and start date, should therefore not be inserted directly as triples, but rather rules such as the following ones: [?z, :salary, "60000"^^xsd:integer] :- BIND(SKOLEM("Employment", :alice, :oxford_university) AS ?z) . [?z, :salary, "55000"^^xsd:integer] :- BIND(SKOLEM("Employment", :charlie, :oxford_university) AS ?z) . [?z, :salary, "40000"^^xsd:integer] :- BIND(SKOLEM("Employment", :charlie, :acme) AS ?z) . [?z, :salary, "45000"^^xsd:integer] :- BIND(SKOLEM("Employment", :bob, :acme) AS ?z) . Note that each of these rules uses the SKOLEM construct in the antecedent to make sure that they match correctly to the generated triples listed above. To check that the salary data has been inserted correctly, we can issue the query SELECT ?x (SUM(?y) AS ?income) WHERE { ?e :Employee ?x . ?e :salary ?y GROUP BY ?x which gives us the total yearly income for each person by summing up the salary of each of their employments, giving the expected results. :alice 60000 . :charlie 95000 . :bob 45000 . Data restructuring via reification has multiple applications. In particular, RDF can only represent directly binary relations and hence the representation of higher arity relations is only possible through reification. Reification is also needed if we want to qualify or annotate edges in a graph (e.g., by adding weights, or dates, or other relevant properties). 5.5.11. Representing Ordered Relations¶ Many relations naturally imply some sort of order, and in such cases we are often interested in finding the first and last elements of such orders. For instance, consider the managerial structure of a company. :alice :manages :bob . :bob :manages :jeremy . :bob :manages :emma . :emma :manages :david . :jeremy :manages :monica . We would like to recognize which individuals in the company are “top level managers”. We can use a rule to define a top level manager as a person who manages someone and is not managed by anyone [?x, rdf:type, :TopLevelManager] :- [?x, :manages, ?y], NOT EXISTS ?z IN ([?z, :manages, ?x]) . The query SELECT ?x WHERE { ?x rdf:type :TopLevelManager } asking for the list of top level managers gives as :alice as the answer. We can now use a rule to define “junior employees” as those who have a manager but who themselves do not manage anyone else. [?x, rdf:type, :JuniorEmployee] :- [?y, :manages, ?x], NOT EXISTS ?z IN ([?x, :manages, ?z]) . The query SELECT ?x WHERE { ?x rdf:type :JuniorEmployee } Gives us :monica and :david as answers. Prominent examples of ordered relations where we may be interested in finding the top and bottom elements are partonomies (part-whole relations) and is-a hierarchies. 5.5.12. Representing Equality Cliques¶ When integrating data from multiple sources using a knowledge graph, it is usually the case that objects from different sources are identified to be the same. In this setting, we want to be able to answer complex queries that span across the different sources, and to easily identify the source where the information came from. Additionally, we may not want to use the equality predicate owl:sameAs to identify the objects since our rule set may contain rules involving aggregation and/or negation-as-failure which cannot be used in conjunction with equality. For instance, assume that we are integrating sources s1, s2, and s3 containing information about music artists and records. Assume that we have determined (e.g., using entity resolution techniques or exploiting explicit links between the sources) that “John Doe” in s1 is the same as “J. H. Doe” in s2 and “The Blues King” in s3. We can represent these correspondences using a binary relation ost:same which we define as reflexive, symmetric, and transitive using RDFox rules as given next. s1:john_doe rdf:type s1:Artist . s2:john_H_doe rdf:type s2:Performer . s3:blues_king rdf:type s3:Musician . s1:john_doe ost:same s2:john_H_doe . s1:john_doe ost:same s3:blues_king . s2:john_H_doe ost:same s3:blues_king . [?x, ost:same, ?x] :- [?x, ost:same, ?y] . [?y, ost:same, ?x] :- [?x, ost:same, ?y] . [?x, ost:same, ?z] :- [?x, ost:same, ?y], [?y, ost:name, ?z] . In these way, the aforementioned objects form a clique in the integrated graph. Indeed, the query SELECT ?x ?y WHERE { ?x ost:same ?y } returns the answer s3:blues_king s2:john_H_doe . s2:john_H_doe s3:blues_king . s2:john_H_doe s2:john_H_doe . s3:blues_king s3:blues_king . s2:john_H_doe s1:john_doe . s3:blues_king s1:john_doe . s1:john_doe s1:john_doe . s1:john_doe s3:blues_king . s1:john_doe s2:john_H_doe . In order to be able to query across artists from different sources, we want to define a unique representative for the elements in the clique. A plausible strategy is to first select the smallest individual according to some pre-defined total order (the order itself is irrelevant, and we can choose for example the order on IRIs provided by RDFox). To select the smallest object we introduce the following rules. [?x, ost:comesBefore, ?y] :- [?x, ost:same, ?y], FILTER (?x < ?y) . [?y, rdf:type, ost:NotSmallestInClique] :- [?x, ost:comesBefore, ?y] . [?x, rdf:type, ost:SmallestInClique] :- [?x, ost:comesBefore, ?y], NOT [?x, rdf:type, ost:NotSmallestInClique] . The first rule generates an order amongst the elements of the clique. The second rule says that if ?x comes before ?y then ?y is not the smallest element. The third rule finally identifies the smallest element in the clique. The following query SELECT ?x ?y WHERE { ?x ost:comesBefore ?y } reveals the generated order s2:john_H_doe s3:blues_king . s1:john_doe s2:john_H_doe . s1:john_doe s3:blues_king . where s1:john_doe is correctly identified as the smallest element by the query. SELECT ?x WHERE { ?x rdf:type ost:SmallestInClique } Now that we have identified an element of the clique we can create a representative of the clique using a Skolem constant, as given next. [?z, rdf:type, ost:Artist], [?z, ost:represents, ?x] :- [?x, rdf:type, ost:SmallestInClique], BIND(SKOLEM(“OSTArtist”, ?x) AS ?z) . [?x, ost:represents, ?z] :- [?x, ost:represents, ?y], [?y, ost:comesBefore, ?z] . The first rule creates the SKOLEM constant and states that it represents the smallest element. The second rule states that the Skolem constant also represents every other element in the clique. The query SELECT ?z ?x WHERE { ?z ost:represents ?x } Yields the expected result. _:OSTArtist_2136 s2:john_H_doe . _:OSTArtist_2136 s3:blues_king . _:OSTArtist_2136 s1:john_doe . It is possible to achieve the same results by using an optimized set of rules that generates fewer triples. In particular, this optimized representation avoids axiomatizing the ost:same property as reflexive and symmetric. Let’s reconsider the data. s1:john_doe rdf:type s1:Artist . s2:john_H_doe rdf:type s2:Performer . s3:blues_king rdf:type s3:Musician . s1:john_doe ost:same s2:john_H_doe . s1:john_doe ost:same s3:blues_king . s2:john_H_doe ost:same s3:blues_king . We now redefine directly the ost:comesBefore relation using the following rules [?x, ost:comesBefore, ?y] :- [?x, ost:same, ?y], FILTER(?x > ?y) . [?x, ost:comesBefore, ?y] :- [?y, ost:same, ?x], FILTER(?x > ?y) . [?x, ost:comesBefore, ?z] :- [?x, ost:comesBefore, ?y], [?y, ost:comesBefore, ?z], FILTER(?x > ?y) . [?x, ost:comesBefore, ?y] :- [?z, ost:comesBefore, ?x], [?z, ost:comesBefore, ?y], FILTER(?x > ?y) . The query SELECT ?x ?y WHERE { ?x ost:comesBefore ?y } reveals a generated order. Once we have the order, we proceed as before. 5.5.13. Populating a Knowledge Graph from a Data Source¶ Rules can be used to bring information from an external data source into a knowledge graph. Data feeding a knowledge graph often stems from different types of external data sources, such as relational databases. We can use RDFox rules to specify how each record in the external data source corresponds to a set of nodes and edges in the graph. RDFox allows us to load the information in an external data source by means of a two-stage process. The first step is to attach a data source and assign it to a relation. For instance, consider the following data about the employees of ACME corporation in a CSV file named “employee.csv”. We attach the table to an RDFox relation Employee with 5 arguments, one per column in the table. This can be achieved using the following commands. dsource add delimitedFile "EmployeeDS" \ file "$(dir.root)csv/employee.csv" \ header true The net result is that the employee.csv is added as an RDFox data source. We called the data source EmployeeDS. Here, file specifies the path to the file, and header indicates whether the file contains a header row. At this point, we can check whether the data has been attached correctly and whether the RDFox data source is in place by running the command to obtain the expected information Data source type name: delimitedFile Data source name: EmployeeDS Parameters: file = employee.csv header = true Table name: employee.csv Column 1: emp_id xsd:integer Column 2: emp_name xsd:string Column 3: job_name xsd:string Column 4: hire_date xsd:string Column 5: salary xsd:integer The next step attaches the RDFox data source to an employee relation in RDFox. dsource attach :employee "EmployeeDS" \ "columns" 5 \ "1" "https://oxfordsemantic.tech/RDFox/tutorial/{1}_{2}" \ "1.datatype" "iri" \ "2" "{emp_name}" \ "2.datatype" "string" \ "3" "{job_name}" \ "3.datatype" "string" \ "4" "{hire_date}" \ "4.datatype" "string" \ "4.if-empty" "absent" \ "5" "{salary}" \ "5.datatype" "integer" \ "5.if-empty" "absent" The IRI of the new relation will be :employee, where “:” is the default prefix defined beforehand as “https://oxfordsemantic.tech/RDFox/tutorial/”. The :employee data relation will contain 5 arguments. The first argument provides an identifier for each employee as a composition of the prefix’s IRI, the employee ID (first column in the data source) and the employee name (second column). The remaining arguments are obtained from the column of the corresponding name in the data source. Since not every employee may have a hiring date or a known salary, the conditions “if-empty” indicate that the corresponding argument in the RDFox relation will be left empty. Once the relation has been created in RDFox, it can be queried and used in the antecedent of rules. As a first step, we can query the RDFox relation using SPARQL to check whether the data has been imported correctly in the relation. The SPARQL query SELECT ?x ?y ?z ?u ?w WHERE { :employee(?x, ?y, ?z, ?u, ?w) } Will return the following answers: :68319_KAYLING "KAYLING" "PRESIDENT" UNDEF 200000 . :66928_BLAZE "BLAZE" "MANAGER" "01/05/2017" 90000 . :67453_JONES "JONES" "ASSISTANT" "03/05/2018" 35000 . As we can see, the UNDEF entry represents that the value of the hiring date for the first employee is missing. Now that we have the RDFox relation correctly in place, the next step would be to turn the data in the relation in the form of a graph. For this we can use the following rule, where the RDFox relation forms the antecedent and the generated edges in the graph based on it are described in the consequent of the rule: [?x, rdf:type, :Employee], [?x, :worksFor, :acme], [?x, :hasName, ?y], [?x, :hasJob, ?z], [?x, :hiredOnDate, ?u], [?x, :salary, ?w] :- :employee(?x, ?y, ?z, ?u, ?w) . The materialization of the rule generates a graph from the data in the relation. The new relations in the graph can be used in other rules to define additional concepts and relations. For instance, we can add the rules stating that every employee is a person and every person with a salary higher than £50,000 pays tax at a higher-rate. [?x, rdf:type, :Person ] :- [?x, rdf:type, :Employee ] . [?x, :taxRate, :higher-rate] :- [?x, rdf:type, :Person], [?x, :salary, ?y], FILTER(?y > 50000) . Now we can query the graph to obtain, for instance, the list of high income tax payers. SELECT ?x WHERE { ?x :taxRate :higher-rate } And obtain the expected results. :68319_KAYLING . :66928_BLAZE . Data can be imported from different data sources and merged together in the graph. For instance, if we had a different employee table (e.g., for a different department) in another CSV, we could attach to it a new RDFox data source and exploit a rule akin to the one before to further populate the binary relations in the graph, as well as to create new ones. 5.6. OWL 2 Support in RDFox¶ This section describes the support in RDFox for OWL 2—the W3C standard language for representing ontologies. 5.6.1. OWL 2 Ontologies¶ An OWL 2 ontology is a formal description of a domain of interest. OWL 2 defines three different syntactic categories. The first syntactic category are Entities, such as classes, properties and individuals, which are identified by an IRI. Classes represent sets of objects in the world; for instance, a class :Person can be used to represent the set of all people. Properties represent binary relations, and OWL 2 distinguishes between two different types of properties: data properties describe relationships between objects and literal values (e.g., the data property :age can be used to represent a person’s age), whereas object properties describe relationships between two objects (e.g., an object property :locatedIn can be used to relate places to their locations). Finally, individuals in OWL 2 are used to refer to concrete objects in the world; for instance, the individual :oxford can be used to refer to the city of Oxford. The second syntactic category are expressions, which can be used to describe complex classes and relations constructed in terms of simpler ones. For instance the expression ObjectUnionOf( :Cat :Dog) represents the set of animals that are either cats or dogs. The third syntactic category are axioms, which are statements about entities and expressions that are asserted to be true in the domain described. For instance, the OWL 2 axiom SubClassOf(:scientist :Person) states that every scientist is a person by defining the class :scientist to be a subclass of the class :Person. The main component of an OWL 2 ontology is a set of axioms. Ontologies can also import other ontologies and contain annotations. OWL 2 ontologies can be written using different syntaxes. RDFox can currently load ontologies written in the functional syntax as well as ontologies written in the turtle syntax. 5.6.2. OWL 2 Ontologies vs. RDFox Rules¶ OWL 2 and the rule language of RDFox are languages for knowledge representation with well-understood formal semantics. Both languages share a common core. That is, certain types of rules can be equivalently rewritten as OWL 2 axioms and vice-versa. For instance, the following axiom and rule both express that every scientist is also a person. SubClassOf(:Scientist :Person) [?x, rdf:type, :Person] :- [?x, rdf:type, :Scientist] . In particular, the OWL 2 specification describes the OWL 2 RL profile—a subset of the OWL 2 language that is amenable to implementation via rule-based technologies. There are, however, many other aspects where OWL 2 and the rule language of RDFox differ, and there are many constructs in OWL 2 that cannot be translated as RDFox rules and vice-versa. For instance, OWL 2 can represent disjunctive knowledge, i.e., we can write an OWL 2 axiom saying that every student is either an undergraduate student, a graduate student, or a doctoral student: SubClassOf(:Student ObjectUnionOf(:UndergraduateSt :MscSt :DoctoralSt) ) RDFox rules, however, do not support disjunction. There are also many kinds of rules in RDFox that cannot be expressed using OWL 2 axioms; these include, for instance, rules involving features such as aggregation, negation-as-failure or certain built-in functions; furthermore, there are also plain Datalog rules that do not have a correspondence in OWL 2. 5.6.3. Loading OWL 2 Ontologies in RDFox¶ RDFox is able to load, store and manipulate three kinds of syntactic elements: triples, rules, and OWL 2 axioms. These are kept in separate “bags” in the system and can be added or deleted individually. For instance, consider the following text file “ontology.txt” containing an ontology written in the functional syntax of OWL 2: Ontology( <http://www.example.com/ontology1> SubClassOf( :Child :Person ) SubClassOf( :Person ObjectUnionOf(:Child :adult) ) The ontology contains two axioms. The first axiom tells us that every child is also a person, whereas the second axiom states that every person is either a child or an adult. The first axiom can be faithfully translated into RDFox rules, whereas the second one cannot. RDFox provides a full API for OWL 2 and can parse, store and manage all kinds of OWL 2 axioms in functional syntax. As a result, it will correctly load both axioms, but will issue a warning indicating that the second axiom has no correspondence into rules. To load the ontology in RDFox, we can initialize a data store (see the Getting Started guide) and import the the file in the usual way. The ontology axioms are now loaded in the data store and kept internally in the “axioms bag”. We can now import a turtle file containing the following triples: :jen rdf:type :Child . :jen :hasParent :mary . These triples will be kept internally in the “triples bag”. Finally, we can import the following RDFox rule saying that the parent of a child is a person. [?y, rdf:type, :Person] :- [?x, :hasParent, ?y], [?x, rdf:type, :Child] . This rule is kept internally in RDFox in the separate “rules bag”. Now, we are in a position to perform reasoning. For this we can issue a SPARQL query asking for the list of all people: SELECT ?x WHERE { ?x rdf:type :Person } To answer the query, RDFox will translate OWL 2 axioms into rules and will consider together all data triples, all RDFox rules added by the user, plus all rules stemming from the translation of OWL 2 axioms. In particular, the following rules and facts contribute to answering the query, where the first rule comes from the translation of the first ontology axiom as a rule (the second axiom in the ontology is ignored): :jen rdf:type :Child . :jen :hasParent :mary . [?x, rdf:type, :Person] :- [?x, rdf:type, :Child] . [?y, rdf:type, :Person] :- [?x, :hasParent, ?y], [?x, rdf:type, :Child] . As a result, RDFox will return as answers both :jen and :mary. Indeed, :jen is a child and hence also a person by the first rule; in turn, :mary is the parent of :jen and hence also a person by the second rule. The translation of OWL 2 axioms into rules for the purpose of reasoning is performed on a best-effort basis. In particular, sometimes RDFox may not be able to translate the whole of given axiom, but may still be able to translate a part of it. For instance suppose that we add to our data store the following axiom saying that every person is a human and also either an adult or a child: SubClassOf(:Person ObjectIntersectionOf(:Human ObjectUnionOf(:Child :Adult))) RDFox will load the axiom correctly, but will again issue a warning due to the use of disjunction in the axiom. Suppose that we now issue the query SELECT ?x WHERE { ?x rdf:type :Human } RDFox will correctly return both :jen and :mary as answers. Indeed, as already explained, RDFox can deduce that both :jen and :mary are persons. Now, although the last axiom we imported cannot be fully translated into rules, RDFox will still be able to partly translate it into the following rule: [?x, rdf:type, :Human] :- [?x, rdf:type, :Person] . from which we can deduce that :jen and :mary are also humans. OWL 2 ontologies can also be loaded from a turtle file, following the standard representation of OWL 2 ontologies as triples. In order to load an ontology from a turtle file, we need to initialize a store with special parameters. Using the command line, we can initialize such a store as follows: init par-complex-nn owl-in-rdf-support relaxed This command creates a store in which parsing of OWL as triples is enabled. As a result, RDFox will identify OWL 2 axioms that were encoded as RDF triples and will translate those axioms into rules as described earlier. Suppose that we import into the store a turtle file containing the following triples: :Child rdfs:subClassOf :Person . :Person rdfs:subClassOf :Human . :jen rdf:type :Child . The first two triples correspond to the serialization into triples of the following axioms in functional syntax: SubClassOf( :Child :Person ) SubClassOf( :Person :Human ) As a result of parsing, all triples will be stored in the “triples bag” of RDFox, whereas the first two triples will also be added as axioms. Now, assume that we issue a query asking for the list of all humans: SELECT ?x WHERE { ?x rdf:type :Human } Then, RDFox will correctly return :jen as the answer. Internally, RDFox will transform the OWL axioms into rules [?x, rdf:type, :Person] :- [?x, rdf:type, :Child] . [?x, rdf:type, :Human] :- [?x, rdf:type, :Person] . and compute the corresponding materialization. 5.6.4. Subsumption Reasoning¶ OWL 2 reasoners implement a wide range of reasoning services, which are not limited to query answering. In particular, OWL reasoners can solve the subsumption problem: given a class, they would compute all its inferred superclasses. For example, given SubClassOf( :Child :Person ) SubClassOf( :Person :Human ) an OWL 2 reasoner would be able to infer SubClassOf( :Child :Human ) as a consequence, since from the fact that every child is a person, and every person is a human, that every child is also a human. RDFox is a materialization-based query answering system, and it has not been designed for solving problems such as class subsumption. RDFox, however, is still able to detect some such subsumption relations should this be required in an application. One way to achieve this is to reduce subsumption to query answering. In particular, to check whether it is true that every child is a human, we can introduce a fresh object in the data store, which we make an instance of :Child. That is, we can import the following triple, where :a_child is a fresh URI. :a_child rdf:type :Child . Then, we would test whether :a_child is inferred to be also a human by issuing the query ASK { :a_child rdf:type :Human } which would return true. Another way of testing subsumption is to import the ontology as a set of triples: :Child rdfs:subClassOf :Person . :Person rdfs:subClassOf :Human . When the triples are parsed and eventually translated into rules for reasoning, RDFox will also add a number of internal rules that partially encode the semantics of the RDFS and OWL vocabularies; in particular, it will add rules representing the relation rdfs:subClassOf as transitive and reflexive, and also saying that every class is a subclass of owl:Thing. [?x, rdfs:subClassOf, ?x], [?x, rdfs:subClassOf, owl:Thing] :- [?x, rdf:type, owl:Class] . [?y, rdfs:subClassOf, owl:Thing] :- [?x, rdfs:subClassOf, ?y] . [?x, rdfs:subClassOf, ?z] :- [?x, rdfs:subClassOf, ?y], [?y, rdfs:subClassOf, ?z] . As a result, the following SPARQL query SELECT ?x WHERE { :Child rdfs:subClassOf ?x } will correctly return all superclasses of :Child as :Person . :Human . owl:Thing . :Child . The complete set of internal rules added by RDFox when loading an OWL ontology as a set of triples is given below: [owl:Nothing, rdfs:subClassOf, ?x], [?x, rdfs:subClassOf, ?x], [?x, rdfs:subClassOf, owl:Thing] :- [?x, rdf:type, owl:Class] . [owl:Nothing, rdfs:subClassOf, ?x], [?x, rdfs:subClassOf, ?x], [?y, rdfs:subClassOf, ?y], [?y, rdfs:subClassOf, owl:Thing] :- [?x, rdfs:subClassOf, ?y] . [?x, rdfs:subClassOf, ?z] :- [?x, rdfs:subClassOf, ?y], [?y, rdfs:subClassOf, ?z] . [?x, rdfs:subClassOf, ?y], [?y, rdfs:subClassOf, ?x] :- [?x, owl:equivalentClass, ?y] . [?x, owl:equivalentClass, ?y] :- [?x, rdfs:subClassOf, ?y], [?y, rdfs:subClassOf, ?x] . [owl:bottomDataProperty, rdfs:subPropertyOf, ?x], [?x, rdfs:subPropertyOf, ?x], [?x, rdfs:subPropertyOf, owl:topDataProperty] :- [?x, rdf:type, owl:DatatypeProperty] . [owl:bottomObjectProperty, rdfs:subPropertyOf, ?x], [?x, rdfs:subPropertyOf, ?x], [?x, rdfs:subPropertyOf, owl:topObjectProperty] :- [?x, rdf:type, owl:ObjectProperty] . [?x, rdfs:subPropertyOf, ?x], [?y, rdfs:subPropertyOf, ?y] :- [?x, rdfs:subPropertyOf, ?y] . [?x, rdfs:subPropertyOf, ?z] :- [?x, rdfs:subPropertyOf, ?y], [?y, rdfs:subPropertyOf, ?z] . [?x, rdfs:subPropertyOf, ?y], [?y, rdfs:subPropertyOf, ?x] :- [?x, owl:equivalentProperty, ?y] . [?x, owl:equivalentProperty, ?y] :- [?x, rdfs:subPropertyOf, ?y], [?y, rdfs:subPropertyOf, ?x] . [?p, rdfs:domain, ?b] :- [?p, rdfs:domain, ?a], [?a, rdfs:subClassOf, ?b] . [?p, rdfs:domain, ?a] :- [?q, rdfs:domain, ?a], [?p, rdfs:subPropertyOf, ?q] . [?p, rdfs:range, ?b] :- [?p, rdfs:range, ?a], [?a, rdfs:subClassOf, ?b] . [?p, rdfs:range, ?a] :- [?q, rdfs:range, ?a], [?p, rdfs:subPropertyOf, ?q] . 5.6.5. Current Limitations¶ The following details should be taken into account by users of RDFox who rely on OWL 2 ontologies in their applications: • RDFox currently does not support ontology importation. That is, if we load ontology O, which in turns imports O1 and O2, only the contents of O will be loaded (and not those of O1 and O2). • RDFox also does not support associating axioms to a given ontology. In particular, if we load two different ontology files, all the axioms in both ontologies will be added to the same bag of axioms in the system. 5.7. Explaining Reasoning Results¶ RDFox can display a proof of how a given triple has been derived. Such proofs can be very useful for explaining reasoning results to users as well as for understanding the reasoning process. Consider a data store containing the triple and the following rules: [?x, rdf:type, :Mammal] :- [?x, rdf:type, :Cat] . [?x, rdf:type, :Animal] :- [?x, rdf:type, :Mammal] . As a result of reasoning, RDFox will derive the following new triples: :kiki rdf:type :Mammal . :kiki rdf:type :Animal . Suppose that we want to understand how triple :kiki rdf:type :Animal has been derived. A way to do this in RDFox is to use the explain command in the shell as follows: RDFox will explicate the reasoning process by displaying the following proof of the requested fact: :Animal[?x] :- :Mammal[?x] . | { ?x -> :kiki } :Mammal[?x] :- :Cat[?x] . | { ?x -> :kiki } :Cat[:kiki] EDB We can read the proof bottom-up. Starting from fact :Cat[:kiki] in the data, we apply rule :Mammal[?x] :- :Cat[?x] by matching variable ?x to :kiki and derive the fact :Mammal[:kiki]. The application of rule :Animal[?x] :- :Mammal[?x] to fact :Mammal[:kiki] where ?x is matched to :kiki yields the desired result. Typically, there will be several different proofs for a given fact. To see this, suppose that we add to our data store the triples :kiki :eats :luxury_pet_treat . :luxury_pet_treat rdf:type :PetFood . and the rule [?x, rdf:type, :Animal] :- [?x, :eats, ?y], [?y, rdf:type, :PetFood] . Then, in addition to the previous one, the following is also a proof that :kiki is an animal: :Animal[?x] :- :eats[?x,?y], :PetFood[?y] . | { ?x -> :kiki, ?y -> :luxury_pet_treat } :eats[:kiki,:luxury_pet_treat] EDB :PetFood[:luxury_pet_treat] EDB Indeed, we can match rule :Animal[?x] :- :eats[?x, ?y], :PetFood[?y] to the data facts :eats[:kiki, :luxury_pet_treat] and :PetFood[:luxury_pet_treat] by matching variable ?x to :kiki and variable ?y to :luxury_pet_treat to derive :Animal[:kiki]. If we run again the explanation command RDFox will display both proofs. :Animal[?x] :- :Mammal[?x] . | { ?x -> :kiki } :Mammal[?x] :- :Cat[?x] . | { ?x -> :kiki } :Cat[:kiki] EDB :Animal[?x] :- :eats[?x,?y], :PetFood[?y] . | { ?x -> :kiki, ?y -> :luxury_pet_treat } :eats[:kiki,:luxury_pet_treat] EDB :PetFood[:luxury_pet_treat] EDB Since the number of possible different proofs for a given fact may be very large, we may be content with just obtaining a single one. We can use the explain command to obtain a shortest proof as explain shortest :Animal[:kiki] which will return the following proof :Animal[?x] :- :eats[?x,?y], :PetFood[?y] . | { ?x -> :kiki, ?y -> :luxury_pet_treat } :eats[:kiki,:luxury_pet_treat] EDB :PetFood[:luxury_pet_treat] EDB Indeed, this is the shortest proof as it involves a single rule application, whereas the alternative proof involves two rule applications. When using the explanation command, it is important to understand that rules in RDFox can come from different sources • User rules such as the ones in our previous example are rules introduced directly by the user. • User axioms are OWL 2 axioms imported by the user, which are internally translated into rules. • Special rules are rules that have no direct connection with the information provided by the user and are internally added by RDFox. An example of special rules are the rules for subsumption reasoning provided at the end of the previous section, and another example are the rules obtained by axiomatizing equality as a transitive, reflexive and symmetric relation. Consider for example a data store where we import the following triple: :kiki rdf:type :Cat . and also the following OWL 2 axioms in functional syntax SubClassOf( :Cat :Mammal ) SubClassOf( :Mammal :Animal ) If we now run the explain command we obtain the same proof as before: :Animal[?X] :- :Mammal[?X] . | { ?X -> :kiki } :Mammal[?X] :- :Cat[?X] . | { ?X -> :kiki } :Cat[:kiki] EDB It is important to note, however, that the explicitly given OWL 2 axioms are not displayed in the proof, but rather the rules that are obtained from them internally. 5.8. Monitoring Reasoning in RDFox¶ This section gives an overview of the functionality implemented in RDFox for monitoring the progress of reasoning. Let us start by creating a new data store in which reasoning will be performed in a single-threaded fashion: dstore create default par-complex-nn threads 1 To enable monitoring of reasoning we use the following shell commands, where the second one establishes the frequency at which information is provided in the console. set reason.monitor progress set log-frequency 1 We can now import rules and data which, in our case, will come from the well-known LUBM benchmark. We first import the rules: RDFox will then import the rules and display relevant information about the rule importation process: Adding data in file './LUBM_L.dlog'. [1]: START './LUBM_L.dlog' [1]: FINISHED './LUBM_L.dlog' Time since import start: 1 ms Time since start of this import: 1 ms Facts processed in this import: 0 Number of finished imports: 1 Total facts processed so far: 0 Import operation took 0.4 s. Processed 98 rules, of which 98 were updated.`` In particular, we can see that 98 rules were imported in total and that rule importation took 0.4s. We can now ask RDFox to print detailed information about the imported rules. For instance, the following command will provide statistics about the rule set and then will print each rule in a given info rulestats print-rules by-body-size RDFox will first provide some statistics about the rule set ================================ RULES STATISTICS ================================ Component Body size Nonrecursive rules Recursive rules Total rules Total: 80 18 98 RDFox organizes rules by components, which gives us an idea of how information flows during reasoning. To give some intuition as to what a component is, consider the following simple set of rules: [?x, rdf:type, :B] :- [?x, rdf:type, :A] . [?x, rdf:type, :C] :- [?x, rdf:type, :B] . [?x, rdf:type, :D] :- [?x, rdf:type, :B] . [?x, rdf:type, :A] :- [?x, rdf:type, :D] . We can see that :B depends on :A since to derive facts about :B we need to first obtain facts about :A. Similarly, :C and :D both depend on :B. Finally, :A depends on :D, and hence the first, third and fourth rules are involved in a cycle of dependencies. As a result, the flow of information during rule application can be seen in two stages: first, we need to derive all facts about :A, :B and :D using the first, third and fourth rules. Then, we can derive all facts about :C using the second rule. To reflect this, RDFox will organize these rules into two components: the first component will contain the first, third and fourth rules which together are considered recursive (they are involved in a cycle of dependencies), whereas the second rule will go in its own component and will be identified as non-recursive. In the table above, we can see the same kind of information concerning the more complex LUBM rules. We can see that rules are arranged in 5 components (0..4), we can see the number of rules involved in dependency cycles (recursive rules) in each component, as well as the total number of rules and their maximal body size. RDFox then will print the rules component by component on the console and within each component it will arrange the rules sorted by number of atoms in their bodies. In our simple example about :A, :B , :C and :D, the information printed will look as follows: -- COMPONENT: 0 -- NONRECURSIVE RULES: 0 -- RECURSIVE RULES: 3 ** BODY SIZE: 1 ** RECURSIVE RULES: 3 :B[?x] :- :A[?x] . :D[?x] :- :B[?x] . :A[?x] :- :D[?x] . -- COMPONENT: 1 -- NONRECURSIVE RULES: 1 -- RECURSIVE RULES: 0 ** BODY SIZE: 1 ** NONRECURSIVE RULES: 1 :C[?x] :- :B[?x] . Now that we have imported the rules, we can import also the data: At this point, RDFox will load the data (without performing any reasoning yet) and will provide information about the progress of loading. We can see an excerpt of such information below: > import LUBM-large.ttl Adding data in file './LUBM-large.ttl'. [1]: START './LUBM-large.ttl' [1]: PROGRESS './LUBM-large.ttl' Time since start of import: 1001 ms Time since start of this import: 1001 ms Facts processed in this import: 418000 [1]: PROGRESS './LUBM-large.ttl' Time since start of import: 2001 ms Time since start of this import: 2001 ms Facts processed in this import: 795000 [1]: PROGRESS './LUBM-large.ttl' Time since start of import: 3002 ms Time since start of this import: 3002 ms Facts processed in this import: 1164000 [1]: FINISHED './LUBM-large.ttl' Time since import start: 13143 ms Time since start of this import: 13143 ms Facts processed in this import: 5000000 Number of finished imports: 1 Total facts processed so far: 5000000 Import operation took 17.8 s. Processed 5000000 facts, of which 5000000 were updated. In particular, we can see how many data facts have been imported each second. We can also see that, in the end, 5,000,000 data triples were imported and that the import took 17.8s in total. We can now compute the materialization of the LUBM rules and facts in the store using the mat command: RDFox will display information about the number of facts generated: Materializing rules incrementally. Rules will be processed by strata. Maximum depth of backward chaining is unbounded. Materialization time: 0 s. Table | Facts | EDB | IDB internal:triple | 6,826,914 -> 6,826,914 | 5,000,000 -> 5,000,000 | 6,826,913 -> 6,826,913 The column labeled EDB tells us the number of facts that were explicitly given in the data file. In turn the column labeled IDB indicates the total number of facts in the store after materialization; in our case, this means that the system has derived a total of over 1.8 million new facts through rule application. The Table column indicates the name of each tuple table in the store. In this case, we just have the default triple table, but in other cases we may also have other tuple tables such as those obtained from named graphs. Each different tuple table will have different numbers of explicit and derived facts. Finally, the column labeled facts indicates the total number of memory slots that were reserved by different threads during reasoning; this number can actually be larger that the total number of facts in the system as some of these slots may not have been used to store a fact. 5.9. Querying the Explicitly given Data¶ After reasoning, RDFox will by default answer all SPARQL queries with respect to the obtained materialization. For instance, suppose that we have a data store with fact :a rdf:type :A and the following rules: [?x, rdf:type, :B] :- [?x, rdf:type, :A] . [?x, rdf:type, :C] :- [?x, rdf:type, :B] . [?x, rdf:type, :D] :- [?x, rdf:type, :B] . [?x, rdf:type, :A] :- [?x, rdf:type, :D] . The materialization will contain the following facts, where three of them have been derived and only fact :a rdf:type :A was originally in the data: :a rdf:type :A . :a rdf:type :B . :a rdf:type :C . :a rdf:type :D . If we issue a query SELECT ?x WHERE { ?x rdf:type :D } we will obtain :a as a result. In RDFox it is possible to query only the explicit data even after materialization has been performed. For this, we can use the shell command If we then issue the previous query again we will obtain the empty answer as a result.
{"url":"https://docs.oxfordsemantic.tech/3.0/reasoning-in-rdfox.html","timestamp":"2024-11-11T16:55:24Z","content_type":"text/html","content_length":"171654","record_id":"<urn:uuid:1bbb45e6-dbcf-4a88-94c1-e06be912842a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00423.warc.gz"}
Caesar cipher: Encode and decode online - ciphereditor Caesar cipher, the most widely known encryption technique Caesar cipher replaces each letter in a text is by a letter a fixed number of places down the alphabet. Caesar cipher is a simple substitution cipher. It rotates the plaintext alphabet by a fixed number of places. The latter is called the “shift” and forms the key of this cipher. Plaintext: abcdefghijklmnopqrstuvwxyz Ciphertext: defghijklmnopqrstuvwxyzabc Substitution alphabet of Latin Caesar cipher with shift 3 Assuming the Latin alphabet and a shift of 3 we would replace A with D, B with E, …, Y with B, and Z with C. Try it with the plaintext ciphereditor. You should get the ciphertext jpwolylkpavy. A construction of two rotating disks with a Caesar cipher can be used to encrypt or decrypt the code (image source) How to break the cipher? The Caesar cipher can be easily broken in a ciphertext-only scenario. The shift is the key to this cipher and may take up to 25 possible values (assuming the Latin alphabet A-Z with 26 characters). Any other shift value is equivalent to the shift value you get by dividing by the number of characters in the alphabet and taking the remainder (the “modulo operation”). Thus, 260.003 is equivalent to shift 3. You may break the cipher in a brute-force attack by deciphering the ciphertext at hand using all possible shift values and choosing the original text among them. The latter can be done intuitively or automatically using a dictionary or statistical manners (e.g. frequency analysis). See also
{"url":"https://ciphereditor.com/explore/caesar-cipher?utm_campaign=cryptii_try_new_experience&utm_source=cryptii&utm_medium=referral","timestamp":"2024-11-14T11:56:43Z","content_type":"text/html","content_length":"58753","record_id":"<urn:uuid:8213abbf-96f4-4991-8172-f1f584a313a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00199.warc.gz"}
Expressing numbers in English | typerrorsinenglish top of page Expressing numbers in English This article has been edited from Typical Errors in English, with several revisions, updates and additional material. It's not something we think about regularly, but students often have problems when it comes to expressing numbers in the English language. The problem is using them not only in counting, but also saying and even writing them in different contexts. When I first started putting this feature together for the website, I was thinking that it was mainly going to be a cut-and-paste job from TEE, only to discover there's a lot more that needed to be But do note that what I've presented here as advice isn't always what other people and organisations may do as they might do things a little different (newspaper style guides being an obvious example). But where those differences occur I have noted them. Otherwise, follow these examples and you should be okay. So I've split all this into three parts, and each of these parts are split into (approximately!) three sections: PART ONE: Saying normal numbers, big numbers, and numbers less than one (fractions) PART TWO: Weights and measures, money and telephone numbers PART THREE: Room, bus, document and road numbers, ordinal numbers, and British or American billions? And at the end of most of these sections I'll try and give you a few little questions to check your understanding. Right! So let's begin! Saying normal numbers Right, let's start with a nice, simple, exercise. Can you count from 1 to 10 out loud - in English, of course? Okay, now if I was (or were, depending on what you prefer) to ask you if you could count all the way to 100 out loud, could you do it? Try writing in number form five, twelve, thirty-seven, one hundred and thirty-four, four hundred and sixty-five and seven hundred and ninety-eight. You should, of course, have these answers: 5 five 12 twelve 37 thirty-seven 134 one hundred and thirty four (or a hundred and thirty-four) 465 four hundred and sixty-five 798 seven hundred and ninety-eight What about to 1000 or beyond that? If you were asked to, could you count to higher numbers? If your answer is yes, then that's great. I would perhaps expect it. But what happens when we take these numbers in isolation, and I tell you to write them down? Many of you probably realise that, like letters, numbers being given at random can be more difficult to think about. I am going to dictate ten numbers to you. I will repeat them twice. Can you write them down? Click on the audio link here for the numbers. The answers are at the bottom of this page. ten numbers dictation - TEE Big numbers If you had some problems with the bigger numbers in the audio exercise, don't worry. You're not the only students. Expressing big numbers in writing is more of a European problem (or it is more of an English language problem, depending on your perspective.) Look at these numbers below and how they are expressed when written... in British and American English: 2281 two thousand, two hundred and eighty-one (British English) two thousand, two hundred eighty-one (American English) 6144 six thousand, one hundred and forty-four (British English) six thousand, one hundred forty-four (American English) 23,567 twenty-three thousand, five hundred and sixty-seven (British English) twenty-three thousand, five hundred sixty-seven (American English) 641,256 six hundred and forty-one thousand, two hundred and fifty-six (British English) six hundred forty-one thousand, two hundred fifty-six (American English) 31,934,623 thirty-one million, nine hundred and thirty-four thousand, six hundred and twenty-three (British English) thirty-one million, nine hundred thirty-four thousand, six hundred twenty-three (American English) 786,389,004 seven hundred and eighty-six million, three hundred and eighty-nine thousand and four (British English) seven hundred eighty-six million, three hundred eighty-nine thousand and four (American English) Note that we usually hyphenate (adding ‘-‘) two worded numbers between twenty-one and ninety-nine: Forty-seven, one hundred and thirty-three, fifty-five thousand, six hundred and seventy-seven. You may have noted that with big numbers, written English uses the comma symbol (,) and not the decimal point symbol (.) to help express the number of digits when written, so a million would be written as 1,000,000 (NOT 1.000.000). In all languages, of course, we can simply remove the commas and dots and just leave spaces: 1 000 000. A nice easy exercise. Really, you couldn't have asked for an easier exercise. Write down the following numbers using actual numbers, and then 'punctuate' them if necessary. 1. seven thousand and seventy-two 2. thirty-six thousand two hundred and two 3. six hundred and fifty-three thousand nine hundred and forty-five 4. two million one hundred and forty-seven thousand seven hundred and forty-four 5. five hundred and seventy-six million six hundred and eighty-one thousand one hundred and eighty-one The answers are at the bottom of this page. Numbers less than one (fractions) First, how would you say these numbers (fractions)? Although with ordinal numbers* we count by saying first, second, third, fourth, etc., but with fractions we say the following: ¼ = a quarter, ½ = a half, ¾ = three quarters, ⅓ = a third/one third, ⅔ = two thirds, ⅕ = a fifth/one fifth, ⅙= a sixth/one sixth, ⅚ = five sixths, ⅛ = an eighth/one eighth, ⅝ = five eighths, and so on, just as in ordinal numbers (fifth, sixth, seventh, etc). The half (½) and the quarters (¼, ¾) are the exceptions. *See later in this feature for ordinal numbers. Do note that most newspapers prefer you to write fractions as half, quarter, three quarters, third, fifth, eighth and not as ½, ¼, ¾, ⅓, ⅕ and ⅛. I would definitely carry this advice in any written work, including writing ⅝ as five eighths: Almost a quarter of all employees will be made redundant. A fifth of the population didn't vote in the last election. However, with formal presentations such as with Powerpoint, numbers can be more effective visually than words and more likely to get your point across, so feel free to display on screen numbers - both small numbers below 20 and fractions - as digits: 1 in 3 PEOPLE NOW SPEAK ENGLISH AS A SECOND LANGUAGE. of the workforce could be made redundant as a result of these proposals. Now how would you say these numbers? 0.4 2.45 23.456 The answers are zero-point-four (British English also likes oh-point-four); two-point four-five and twenty-three point four-five-six. The decimal point (.) is used to express fractions in decimals, so for example: six and a half would be written as 6.5 (six point five); three hundred and sixty point three seven five would be written as 360.375; seventeen million, four hundred and eighty-seven thousand, six hundred and forty-eight point seven five nine two becomes 17,487,648.7592. Do note that with numbers we say the numbers individually in the fraction part of the number, after the decimal point. For example, with 2.75 we say two point seven five, NOT two point seventy-five. If we were to write the number 3.000, this is not three thousand but three-point-zero-zero-zero, which might suggest that this is only the number three as no real fraction follows it (and we do not write such numbers with only zeros in the fraction). So if we are expressing the number in thousands, it should be written as 3,000 (three thousand) with the comma. Or without, of course: 3 000. Read the sentences and write the result. Examples: One sixth of 3: 0.5; half of 0.5 written as a fraction: ¼; a half written as a decimal: 0.5 1. half of 6.8 2. one quarter plus one quarter written as a fraction 3. one eighth of 2 as a fraction 4. half of six hundred point five 5. zero point one two five as a fraction 6. Take 3 as a whole number and take away 75%, and write the answer as a decimal The answers are at the bottom of this page. COMING UP: We'll be looking at the ways to express weights and measures, money, room, bus, document and road numbers, ordinal numbers and the big question: British or American billions? So click here! 1. 72 2. 146 3. 881 4. 1,293 5. 6,325 6. 26,905 7. 58,637 8. 305,006 9. 842,518 10. 1,000,001 (Of course, you could leave out the commas and just have blank spaces where these should be.) ANSWERS TO EXERCISE 2 (These are the British English answers: simply drop 'and' for American English.) 1. 7,072 seven thousand and seventy-two 2. 36,202 thirty-six thousand two hundred and two 3. 653,945 six hundred and fifty-three thousand nine hundred and forty-five 4. 2,147,744 two million, one hundred and forty-seven thousand, seven hundred and forty-four 5. 576,681,181 five hundred and seventy-six million, six hundred and eighty-one thousand, one hundred and eighty-one 1. 3.4 half of 6.8 2. ½ one quarter plus one quarter written as a fraction (not a decimal) 3. ¼ one eighth of 2 4. 300.25 half of six hundred point five 5. ⅛ zero point one two five as a fraction (not a decimal) 6. 0.75 take 3 as a whole number and take away 75%, and write the answer as a decimal Good! If this has been nice and easy, then there should be no problems with the next two sections. Please click the link below. You may also like: bottom of page
{"url":"https://www.typicalerrorsinenglish.com/expressing-numbers-in-english","timestamp":"2024-11-09T03:26:18Z","content_type":"text/html","content_length":"628830","record_id":"<urn:uuid:e4089472-5363-4d44-a004-05cdfad12d86>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00005.warc.gz"}
A087725 - OEIS The 15-block puzzle is often referred to (incorrectly) as Sam Loyd's 15-Puzzle (see Slocum and Sonneveld). The actual inventor was Noyes Chapman, the postmaster of Canastota, New York, who applied for a patent in March 1880. The set of moves from a given position depend on where the blank is. There is also a variant in which sliding a row of tiles counts as a single move. For the 8-puzzle I find: Move Blank Maximum Number of maximal-distance positions and slide home distance (position of blank in those positions) tile corner 31 2 (adjacent edge) tile edge 31 2 (adjacent corner) tile center 30 148 (88 corner, 60 center) row corner 24 1 (center) row edge 24 2 (diagonally adjacent edge) row center 24 4 (corner) The maximum number of moves required to solve the 2 X 3 puzzle is 21. The only (solvable) configuration that takes 21 moves to solve is (45*)/(123). - Sergio Pimentel , Jan 29 2008. (See . - N. J. A. Sloane , Aug 16 2009) For additional comments about the history of the m X n puzzle see the link by Anton Kulchitsky. - N. J. A. Sloane , Aug 16 2009 a(5) >= 114 from Korf and Taylor. 152 <= a(5) <= 208, see links from Hannanov Bulat and Tomas Rokicki, Oct 07 2015 a(5) <= 205, a(6) <= 405, a(7) <= 716, a(8) <= 1164, a(9) <= 1780, a(10) <= 2587. - Ben Whitmore , Jan 18 2018 E. R. Berlekamp, J. H. Conway and R. K. Guy, Winning Ways, Academic Press, NY, see Vol. 2 for the classical 4 X 4 puzzle. J. C. Culberson and J. Schaeffer, Computer Intelligence, vol. 14.3 (1998) 318-334. Richard E. Korf and Larry A Taylor, Disjoint pattern database heuristics, in "Chips Challenging Champions" by Schaeffer and Herik, pp. 13-26. K. Krawiec, Medial Crossovers for Genetic Programming, in Genetic Programming, Springer, 2012. C. A. Pickover, The Math Book, Sterling, NY, 2009; see p. 262. J. Slocum and D. Sonneveld, The 15 Puzzle, The Slocum Puzzle Foundation, 2006. Parberry shows that a(n) ≍ n^3, that is, n^3 << a(n) << n^3. In particular, lim inf a(n)/n^3 >= 1 and lim sup a(n)/n^3 <= 5. - Charles R Greathouse IV , Aug 23 2012 All solvable configurations of the Eight Puzzle on a 3 X 3 matrix can be solved in 31 moves or fewer and some configurations require 31 moves, so a(3)=31. (Python) # alst(), moves(), swap() in for n in range(1, 4): # chr(45) is "-" start, shape = "".join(chr(45+i) for i in range(n**2)), (n, n) a(3) is from Reinefeld, who used the method of Korf. a(4) was found by Brüngger, Marzetta, Fukuda and Nievergelt (thanks to Patric Östergård for this reference)
{"url":"https://oeis.org/A087725","timestamp":"2024-11-03T16:05:49Z","content_type":"text/html","content_length":"23973","record_id":"<urn:uuid:7f2cf382-f94e-4c31-850c-b81754a03eea>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00011.warc.gz"}
The Order of Operations Explained: Multiplication and Division This is the 5th in the series The Order of Operations Explained. For the other articles in this series, click here to visit the introduction. Last week’s article on the order of operations included a quick mention that division is the same as multiplication – but different. Now’s the time to explain that a bit. The only thing mathematicians like to do more than create, is destroy. That’s how we get imaginary numbers, dividing by zero and raising things to infinity. Once we create with multiplication, we want to know what happens when we destroy with the opposite of multiplication. Enter Division, stage right. Multiplication is created from the need to quickly add a bunch of numbers that are all the same. They each must be equal to make this work: 6 + 6 + 6 + 6 + 6 is shortcutted to 5 X 6. Division is the breaking up into pieces that are all equal. Technically we can break 30 up into these 5 pieces: 4, 8, 7, 6 and 5. But “division” requires (or implies) that we are dividing equally. So 30 would have to be broken up into 5 equal pieces of 6 Of course this is in a purely mathematical world. When you get into a toddler world, things will be different. How division is the same as multiplication. We represents multiplication with little x‘s or dots or stars. Like this: 3 X 5 or 3 • 5 or 3 * 5 We represent division with this cute little symbol: But we can write it in many more ways. Notice the cute symbol looks like a fraction with dots on the top and bottom. That’s not a coincidence. A fraction means division. Remember the poem: When dividing fractions Don’t bat an eye Just flip the last And multiply! When dividing numbers Don’t bat an eye Just flip the last And multiply! And “flipping” the last just means taking the “assumed 1” that’s underneath it and putting it on the top. Now go forth and divide… So when you’re teaching the MD in PEMDAS, the order of operations, remember that D is the same as M. And if things get a little confusing, demand some parenthesis before doing the problem. Share your thoughts in the comments! Related articles This post may contain affiliate links. When you use them, you support us so we can continue to provide free content! Leave a reply This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"http://mathfour.com/algebra/the-order-of-operations-explained-multiplication-and-division","timestamp":"2024-11-08T07:54:02Z","content_type":"text/html","content_length":"39202","record_id":"<urn:uuid:40ba6a18-3557-4ff7-85fb-3c574927ab49>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00449.warc.gz"}
What is: Kendall's W What is Kendall’s W? Kendall’s W, also known as Kendall’s coefficient of concordance, is a statistical measure used to assess the degree of agreement among raters or judges when evaluating a set of items. This non-parametric statistic is particularly useful in scenarios where the data does not meet the assumptions required for parametric tests. It quantifies the extent to which different raters provide similar rankings for a given set of items, making it a valuable tool in fields such as psychology, education, and social sciences, where subjective assessments are common. Understanding the Calculation of Kendall’s W The calculation of Kendall’s W involves several steps. First, the rankings assigned by each rater are collected and organized into a matrix format. Each row represents a rater, while each column corresponds to an item being evaluated. The next step is to compute the sum of the ranks for each item, followed by the calculation of the average rank. The formula for Kendall’s W is given by W = 12S / (k^2(n^3 – n)), where S is the sum of the squared deviations of the ranks, k is the number of raters, and n is the number of items. This formula allows researchers to quantify the level of agreement among raters effectively. Interpreting Kendall’s W Values Kendall’s W ranges from 0 to 1, where a value of 0 indicates no agreement among raters, and a value of 1 signifies perfect agreement. Values closer to 1 suggest a high level of concordance, while values near 0 indicate a lack of consensus. It is essential to interpret these values in the context of the specific research question and the characteristics of the data. For instance, in a study involving multiple judges rating the quality of artworks, a high Kendall’s W would imply that the judges have a similar perception of the artworks’ quality. Applications of Kendall’s W in Research Kendall’s W is widely used in various research domains. In psychology, it can be employed to evaluate the consistency of assessments made by different therapists regarding a patient’s progress. In educational settings, it can help analyze the agreement among teachers when grading student performances. Additionally, in market research, Kendall’s W can be utilized to assess the level of agreement among consumers when ranking product features or preferences. Its versatility makes it a valuable tool for researchers seeking to understand the reliability of subjective evaluations. Limitations of Kendall’s W While Kendall’s W is a powerful tool for measuring agreement, it does have limitations. One significant limitation is that it assumes that the rankings are ordinal and that the raters have a common understanding of the ranking criteria. If the raters have different interpretations of the criteria, the results may not accurately reflect true agreement. Furthermore, Kendall’s W does not provide information about the nature of disagreements among raters, which can be crucial in understanding the underlying reasons for discrepancies in rankings. Comparison with Other Concordance Measures Kendall’s W is often compared with other measures of concordance, such as Spearman’s rank correlation coefficient and the intraclass correlation coefficient (ICC). While Spearman’s rank correlation assesses the strength and direction of association between two ranked variables, Kendall’s W specifically focuses on the agreement among multiple raters. On the other hand, the ICC is more suitable for continuous data and provides a measure of reliability for ratings. Researchers must choose the appropriate measure based on their specific data characteristics and research objectives. Statistical Software for Calculating Kendall’s W Several statistical software packages can facilitate the calculation of Kendall’s W, including R, SPSS, and Python. In R, the ‘psych’ package provides a straightforward function to compute Kendall’s W, while SPSS offers built-in procedures for non-parametric tests that include Kendall’s W. Python users can utilize libraries such as SciPy and StatsModels to perform the necessary calculations. The availability of these tools makes it easier for researchers to implement Kendall’s W in their analyses without extensive manual calculations. Example of Kendall’s W in Practice To illustrate the application of Kendall’s W, consider a scenario where five judges evaluate ten different films based on their artistic merit. Each judge provides a ranking from 1 to 10 for the films. By organizing the rankings into a matrix and applying the Kendall’s W formula, researchers can determine the level of agreement among the judges. A high Kendall’s W value would indicate that the judges generally agree on which films are the best, while a low value would suggest significant differences in their evaluations. Conclusion on the Relevance of Kendall’s W Kendall’s W remains a relevant and widely used statistic in the analysis of agreement among raters. Its ability to handle ordinal data and provide a clear measure of concordance makes it an essential tool for researchers across various disciplines. Understanding its calculation, interpretation, and applications can significantly enhance the quality of research involving subjective evaluations, ensuring that findings are robust and reliable.
{"url":"https://statisticseasily.com/glossario/what-is-kendalls-w/","timestamp":"2024-11-06T12:22:04Z","content_type":"text/html","content_length":"139183","record_id":"<urn:uuid:01f28a4c-c573-4178-8a50-1bcc54fb3847>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00449.warc.gz"}
HP-48SX MOD issue 06-04-2022, 06:41 PM Post: #1 Rick314 Posts: 47 Junior Member Joined: Oct 2014 HP-48SX MOD issue I was transferring a program from the HP-28S to the HP-48SX and found the following issue. On the HP-28S 'MOD(5,2)' ->NUM results in the expected answer of 1. On the HP-48SX 'MOD(5,2)' can't be entered. It appears the HP-48SX won't allow MOD to appear in an algebraic expression. Others like 'MAX(5,2)' EVAL or 'PERM(5,2)' EVAL work as expected on the HP-48SX. I read about MOD in the HP-48 Programmer's Reference Manual and found nothing related there. Is this a defect or am I misunderstanding something? 06-04-2022, 07:07 PM (This post was last modified: 06-04-2022 07:16 PM by C.Ret.) Post: #2 C.Ret Posts: 291 Member Joined: Dec 2013 RE: HP-48SX MOD issue On the HP28S, the vast majority of instructions can be used in direct stack as well as in algebraic expressions. In subsequent algebraic expressions, quite all statement syntaxes use the exact same rule: statements are all typed in a prefix style such as your example 'MOD(A,B)' or 'SIN(30) or 'IFTE( C>0,A* B/ C,A+B)' etc. The notable exceptions are logic operators OR AND and XOR. Try to enter 'OR(A,B)' on an HP-28S and you get a surprise. On newer RPL systems, depending on the nature of the function or statement, different syntaxes are used. In particular, the modulo expression must be entered as the infix notation 'A MOD B'. If you try 'A MOD B' on the HP-28S you get an error. Similarly, 'MOD(A,B)' may not be valid on the HP-48SX. 06-04-2022, 07:57 PM (This post was last modified: 06-04-2022 07:58 PM by Rick314.) Post: #3 Rick314 Posts: 47 Junior Member Joined: Oct 2014 RE: HP-48SX MOD issue > the modulo expression must be entered as the infix notation 'A MOD B' Thanks C.Ret. I didn't know that and indeed '5 MOD 2' EVAL does work as expected on the HP-48SX. I also just noticed the HP 48 Programmer's Reference Manual says "x y -> max(x, y)" but "x y -> x mod y". So there is a clue as to the difference that I missed. 06-04-2022, 09:00 PM Post: #4 J-F Garnier Posts: 994 Senior Member Joined: Dec 2013 RE: HP-48SX MOD issue Yes, both the "HP 48 Programmer's Reference Manual" (for the 48S series) and the "HP 48G Series Advanced User's Reference Manual" incorrectly describe the symbolic MOD operation as: x 'symb' --> 'MOD(x,symb)' It should be x 'symb' --> 'x MOD symb' The remark in each operation description also uses the mod(x,y) notation instead of x mod y. This clearly comes from the 28S. There is a similar source of confusion in the HP BASIC, for instance the series 80 machines use the X MOD Y syntax, but the HP-75 and HP-71B use MOD(X,Y). User(s) browsing this thread: 1 Guest(s)
{"url":"https://hpmuseum.org/forum/thread-18435.html","timestamp":"2024-11-06T04:47:19Z","content_type":"application/xhtml+xml","content_length":"25763","record_id":"<urn:uuid:3e8f2861-8312-45ef-b567-c4693ed4e294>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00340.warc.gz"}
How to perform What-If Analysis in Excel? In Excel, What-if analysis is a process of changing cells’ values to see how those changes will affect the worksheet’s outcome. You can use several different sets of values to explore all the different results in one or more formulas. What-if Excel is used by almost every data analyst and especially middle to higher management professionals to make better, faster and more accurate decisions based on data. What-if analysis is useful in many situations, such as: • You can propose different budgets based on revenue. • You can predict the future values based on the given historical values. • If you expect a certain value due to a formula, you can find different sets of input values that produce the desired result. To enable the what-if analysis tool go to the Data menu tab and click on the What-If Analysis option under the Forecast section. Now click on the What-If Analysis . [Excel] has the following What-if analysis tools that can be used based on the data analysis needs: • Scenario Manager • Goal Seek • Data Tables Data Tables and Scenarios take sets of input values and project forward to determine possible results. Goal seek differs from Data Tables and Scenarios in that it takes a result and projects backward to determine possible input values that produce that result. 1. Scenario Manager A scenario is a set of values that Excel saves and can substitute automatically in cells on a worksheet. Below are the following key features, such as: • You can create and save different groups of values on a worksheet and then switch to any of these new scenarios to view different results. • A scenario can have multiple variables, but it can accommodate only up to 32 values. • You can also create a scenario summary report, which combines all the scenarios on one worksheet. For example, you can create several different budget scenarios that compare various possible income levels and expenses, and then create a report that lets you compare the scenarios side-by-side. • Scenario Manager is a dialog box that allows you to save the values as a scenario and name the scenario. 2. Goal Seek Goal Seek is useful if you want to know the formula’s result but unsure what input value the formula needs to get that result. For example, if you want to borrow a loan and know the loan amount, tenure of loan and the EMI that you can pay, you can use Goal Seek to find the interest rate at which you can avail of the loan. Goal Seek can be used only with one variable input value. If you have more than one variable for input values, you can use the Solver add-in. 3. Data Table A Data Table is a range of cells where you can change values in some of the cells and answer different answers to a problem. For example, you might want to know how much loan you can afford for a home by analyzing different loan amounts and interest rates. You can put these different values and the PMT function in a Data Table and get the desired result. A Data Table works only with one or two variables, but it can accept many different values for those variables. What-If Analysis Scenario Manager Scenario Manager is one of the What-if Analysis tools in Excel. Scenario Manager is useful in a case where you have more than two variables in the sensitivity analysis. Scenario Manager creates scenarios for each set of the input values for the variables under consideration. Scenarios help you to explore a set of possible outcomes, supporting the following: • Varying as many as 32 input sets. • Merging the scenarios from several different worksheets or workbooks. If you want to analyze more than 32 input sets, and the values represent only one or two variables, you can use Data Tables . Initial Values for Scenarios Before you create several different scenarios, you need to define a set of initial values on which the scenarios will be based. Consider an example of a company that wants to buy Metals for their needs. Due to the scarcity of funds, the company wants to understand how much cost will happen for different buying possibilities. In these cases, we can use the scenario manager for applying different scenarios to understand the results and make the decision accordingly. Now below are the following steps for setting up the initial values for Scenarios: Step 1: Define the cells that contain the input values. Step 2: Name the cells Metals_name and Cost . Step 3: Define the cells that contain the results. Step 4: Name the result cell Total_cost . Step 5: place the formula in the result cell. Step 6: Below is the created table. To create an analysis report with Scenario Manager, follow the following steps, such as: Step 1: Click the Data tab. Step2: Go to the What-If Analysis button and click on the Scenario Manager from the dropdown list. Step 3: Now a scenario manager dialog box appears, click on the Add button to create a scenario. Step 4: Create the scenario, name the scenario, enter the value for each changing input cell for that scenario, and then click the Ok button. Step 5 : Now, B3, B4, B5, B6, and B7 appear in the cells box. Step 6: Now, change the value of B3to 500 and click the Add button. Step 7: After clicking on the Add button, the add scenario dialog box appears again. • In the scenario name box, create scenario 2. • Select the prevent changes. • And click on the Ok Step 8: Again appears scenario values box with the changed value of B3 cell. Step 9: Change the value of B5 to 20000 and click the Ok button. Step 10: Similarly, create Scenario 3 and click the Ok button. Step 11: Again, appears scenario values box with a changed value of the B5 cell. Step 12: Change the value of B7 to 10000 and click the Ok button. The Scenario Manager Dialog box appears. In the box under Scenarios, You will find the names of all the scenarios that you have created. Step 13: Now, click on the Summary button. The Scenario Summary dialog box appears. Excel provides two types of Scenario Summary reports: 1. Scenario summary. 2. Scenario PivotTable report. Step 14: Select Scenario summary under Report type and click Ok . Scenario Summary report appears in a new worksheet. You will get the following Scenario summary report. You can observe the following in the Scenario Summary report: • Changing Cells: Enlists all the cells used as changing cells. • Result Cells: Displays the result cell specified. • Current Values: It is the first column and enlists the values of that scenario selected in the Scenario Manager Dialog box before creating the summary report. • For all the scenarios you have created, the changing cells will be highlighted in gray. • In the $C$9 row, the result values for each scenario will be displayed. What-If Analysis Goal Seek Goal Seek is a What-If Analysis tool that helps you to find the input value that results in a target value that you want. Goal Seek requires a formula that uses the input value to give the result in the target value. Then, by varying the formula’s input value, Goal Seek tries to solve the input value. Goal Seek works only with one variable input value. If you have more than one input value to be determined, you have to use the Solver add-in. Below are the following steps to use the Goal Seek feature in Excel. Step 1: On the Data tab, go What-If Analysis and click on the Goal Seek option. Step 2: The Goal Seek dialog box appears. Step 3: Type C9 in the Set cell box. This box is the reference for the cell that contains the formula that you want to resolve. Step 4: Type 57000 in the To value box. Here, you get the formula result. Step 5: Type B9 in the By changing cell box. This box has the reference of the cell that contains the value you want to adjust. Step 6: This cell that the formula must reference goal Seek changes in the cell that you specified in the Set cell box. Click Ok . Step 7: Goal Seek box produces the following result. As you can observe, Goal Seek found the solution using B9, and it returns 0 in the B9 cell because the target value and current value are the same. What-If Analysis Data Tables With a Data Table in Excel, you can easily vary one or two inputs and perform a What-if analysis. A Data Table is a range of cells where you can change values in some of the cells and answer different answers to a problem. There are two types of Data Tables, such as: 1. One-variable data tables 2. Two-variable data tables If you have more than two variables in your analysis problem, you need to use the Excel Scenario Manager Tool. One-variable Data Tables A one-variable Data Table can be used to see how different values of one variable in one or more formulas will change those formulas’ results. In other words, with a one-variable Data Table, you can determine how changing one input changes any number of outputs. Below is an example of creating a one-variable data table. A good example of a data table employs the PMT function with different loan amounts and interest rates to calculate the loan. There is a loan of 1 00,000 for a tenure of 5 years. You want to know the monthly payments (EMI) for varied interest rates. You also want to know the amount of interest and Principal that is paid in the second year. Step 1: Create the required table. • Assume that the interest rate is 10%. • List all the required values. • Name the cells containing the values. • Set the calculation for EMI, Cumulative Interest and Cumulative Principal with the Excel functions PMT, CUMIPMT and CUMPRINC, respectively. • Below is the created table. Step 2: Type the list of interest rate values that you want to substitute in the input cell. As you observe, there is an empty row above the Interest Rate values. This row is for the formulas. Step 3: Type the first function ( PMT ) in the cell one row above and one cell to the right of the column of values. Type the other functions ( CUMIPMT and CUMPRINC ) in the cells to the first function’s right. Step 4: The Data Table looks as given below. Step 5: Select the range of cells that contains the formulas and values that you want to substitute, E2:H13. Step 6: Go to the Data tab, select What-if Analysis and click on the Data Table tool in the dropdown list. Step 7: Data Table dialog box appears. • Click in the Column input cell box. • And click on the Interest_Rate cell, which is C2. You can see that the Column input cell is taken as $C$2. Step 8: Click on the Ok button. The Data Table is filled with the calculated results for each input value. Two-variable Data Tables A two-variable Data Table can be used to see how different values of two variables in a formula will change that formula’s results. In other words, with a two-variable Data Table, you can determine how changing two inputs changes a single output. For example, a loan of 100000, and you want to know how different combinations of interest rates will affect the monthly payment. Step 1: Create the following table. Step 2: Now create the Data Table • Write =EMI in F2 cell. • Type the first list of input values, i.e., interest rates, down the column F, starting with the cell below the formula, i.e., F3. • Type the second list of input values, i.e., number of payments across row 2, starting with the cell to the right of the formula, i.e., G2. • The Data Table looks as follows. Step 3: Select the range of cells that contains the formula and the two sets of values that you want to substitute, i.e., F2:L13. Step 4: Go to the Data tab, click What-if Analysis and select Data Table from the dropdown list. Step 5: Data Table dialog box appears. Step 6: Click in the Row input cell box. • Click on the NPER cell, which is C3. • Again, click in the Column input cell box. • Click the Interest_Rate cell, which is C2. You will see that the Row input cell is taken as $C$3, and the Column input cell is taken as $C$2. Step 7: Click on the Ok button. The Data Table gets filled with the calculated results for each combination of the two input values. Data Table Calculations Data Tables are recalculated each time the worksheet containing them is recalculated, even if they have not changed. To speed up the calculations in a worksheet that contains a Data Table, you need to change the calculation options to Automatically Recalculate the worksheet but not the Data Tables.
{"url":"https://discuss.boardinfinity.com/t/how-to-perform-what-if-analysis-in-excel/5823","timestamp":"2024-11-10T06:14:20Z","content_type":"text/html","content_length":"44781","record_id":"<urn:uuid:612b9ef2-9308-49f1-8848-9bf4dd74b89f>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00124.warc.gz"}
Types of Functions | Learn IB Mathematics by best IB tutors Types of functions(part-3) Types of Functions- IB Maths Tutors should give twenty-two hours for teaching functions and equations as per IBO recommendations. This is my third article on functions in the series of ib mathematics IB Maths Tutors should give twenty hours in teaching functions and equations. This is my third article on functions in the series of ib mathematics As you know there are many different types of functions in Mathematics. Here I am discussing a few very important of them 1.Greatest Integer Function– This is an interesting function. It is defined as the largest integer less than or equal to x y = [x]. For all real numbers, x, this function gives the largest integer less than or equal to x. For example: [1] = 1 [2.5] = 2 [4.7] = 4 [5.3] = 5 Beware! [-2] = -2 [-2.6] = -3 [-4.1] = -5 [-6.5] = -7 greatest integer function2. Fractional part function- for every real value of x this function gives the fractional part of x. {2.3}=.3 , {5.4}=.4, {2.2}=.2 {6.7}=.7, {-2.3}=.7, {-2.6}=.3 We can say that: 0≤{x}∠1 range=less than 1 3. Polynomial function– These are functions of the form f(x) = a[n]x^n + a[n−1]x ^n−1 + . . . + a[2]x ^2 + a[1]x + a[0 ]. Constant, linear, quadratic, cubic, quartic functions etc fall in this category domain of these functions is R and range is either R or a subset of R 4. Trigonometric functions- Trigonometric functions or circular functions draw the relationship between the sides and angles of right triangles .we can find this relationship using “unit circle”. I have explained all this thing in the given video. Trigonometric Functions There are six trigonometric functions, we will discuss them all one by one i. Sin function(variation in a)– f(x)=sin x this is a periodic function with a period of 2Π ii. Cosine function(variation in b)- this is also a periodic function with a period of 2Π iii. Tangent function(variation in a/b)- f(x)= tan x this is also a periodic function with a period of pie domain=R-{n pie+pie/2} This was my last post in ib maths tutors-function series. In my next post, I will discuss some questions based on these topics. Classification Of Functions : (i) One – One Function (Injective mapping)-: A function f: A B is said to be a one-one function or injective mapping if different elements of A have different f images in B. Thus for &, Function is one-one while if The function will not be one-one. (ii) If f(x) is any function which is entirely increasing or decreasing in whole domain, then f(x) is one-one. (iii) If any line parallel to x-axis cuts the graph of the function atmost at one point, then the function is one-one. Many–one function-: A function f: A B is said to be many one functions if two or more elements of A have the same f image in B. Thus f: A B is many-one if (i) Any continuous function which has at least one local maximum or local minimum, then f(x) is many-one. In other words, if a line parallel to x-axis cuts the graph of the function at least at two points, then f is many-one. This test is known as horizontal line test (ii) If a function is one-one, it cannot be many-one and vice versa. Onto function (Surjective mapping)-: If the function f: A B is such that each element in B (co-domain) is the image of at least one element in A, then we say that f is a function of A ‘onto’ B . Thus f: A B is surjective if b B, some a A such that f (a) = b Into function-: If f: A B is such that there exists at least one element in co-domain which is not the image of any element in the domain, then f(x) is into. (i) If a function is onto, it cannot be into and vice versa. (ii) A polynomial of degree even will always be into. Thus a function can be one of these four types : (a) one-one onto (injective & surjective) (b) one-one into (injective but not surjective) (c) many-one onto (surjective but not injective) (d) many-one into (neither surjective nor injective) Bijective mapping- If f is both injective & surjective, then it is called a Bijective mapping.The bijective functions are also named as invertible, non-singular or bi-uniform functions. If a set A contains n If a set A contains n distinct elements then the number of different functions defined from A B is n^n & out of it n ! are one one. Algebraic Operations On Functions: If f & g are real-valued functions of x with domain set A, B respectively, then both f & g are defined in Now we define f + g, f – g , (f . g) & (f/g) as follows -: (i) (f ± g) (x) = f(x) ± g(x) (ii) (f . g) (x) = f(x) . g(x) Composite Of Uniformly & Non-Uniformly Defined Functions: Let f : AB and g : BC be two functions . Then the function gof : AC defined by (gof) (x) = g (f(x)) x A is called the composite of the two functions f & g. Properties Of Composite Functions : (i) The composite of functions is not commutative i.e. (i) The composite of functions is not commutative i.e. gof fog . (ii) The composite of functions is associative i.e. if f, g, h are three functions such that fo(goh) & (fog)oh are defined, then fo(goh) = (fog)oh (iii) The composite of two bijections is a bijection i.e. if f & g are two bijections such that gof is defined, then gof is also a bijection. Implicit & Explicit Implicit & Explicit Function-: A function defined by an equation not solved for the dependent variable is called an implicit Function. For eg. the equation x^3 + y^3= 1 defines y as an implicit function. If y has been expressed in terms of x alone then it is called an Explicit Function. Homogeneous Functions-: A function is said to be homogeneous with respect to any set of variables when each of its terms is to the same degree with respect to those variables. For example F(x)= 5 x^2 + 3 y^2 – xy is homogeneous in x & y . Symbolically if, f (tx , ty) = t^n. f(x,y) then f(x,y) is homogeneous function of degree n. Inverse Of A Function-: Let f: A B be a one-one & onto function, then there exists a unique function g: B A such that f(x) = y g(y) = x, and Then g is said to be inverse of f. Thus g =f^-1 B A = {(f(x), x) ½ (x, f(x)) Î f} . Properties Of Inverse Function : (i) The inverse of a bijection is unique. (ii) If f: A B is a bijection & g: A A is the inverse of f, then fog =I[B ]and gof =IA where IA & I[B ]are identity functions on the sets A & B respectively. Note that the graphs of f & g are the mirror images of each other in the line y = x. Odd & Even Functions-: If f (-x) = f (x) for all x in the domain of ‘f’ then f is said to be an even function. e.g. f (x) = cos x ; g (x) = x² + 3 . If f (-x) = -f (x) for all x in the domain of ‘f’ then f is said to be an odd function. e.g. f (x) = sin x , g (x) = x^3 + x (i) f (x) – f (-x) = 0 => f (x) is even & f (x) + f (-x) = 0 => f (x) is odd (ii) f (x) – f (-x) = 0 => f (x) is even & f (x) + f (-x) = 0 => f (x) is odd . (iii) A function may neither be odd nor be even. (iv) Inverse of an even function is not defined . (v) Every even function is symmetric about the y-axis & every odd function is symmetric about the origin . (vi) Every function can be expressed as the sum of an even & an odd function. (vii) The only function which is defined on the entire number line & is even and odd at the same time is f(x) = 0.(viii) If f and g both are even or both are odd then the function f.g will be even but if any one of them is odd then f.g will be odd . (viii) If f and g both are even or both are odd then the function f.g will be even but if any one of them is odd then f.g will be odd. Periodic Function-: A function f(x) is called periodic if there exists a positive number T (T > 0) called the period of the function such that f (x + T) = f(x), for all values of x within the domain of x. e.g. The function sin x & cos x both are periodic over 2 & tan x is periodic over (i) f (T) = f (0) = f (-T) , where ‘T’ is the period . (ii) Inverse of a periodic function does not exist . (iii) Every constant function is always periodic, with no fundamental period . (iv) If f (x) has a period T & g (x) also has a period T then it does not mean that f(x) + g(x) must have a period T . e.g. (v) If f(x) has a period , then and also has a period (vi) if f(x) has a period T then f(ax + b) has a period T/a (a > 0). Here are links to my previous posts on functions First Post-An Introduction to functions Second Post-Domain and Range of functions Third Post-Types of functions(part-1) Here is a pdf containing questions on this topic
{"url":"https://ibelitetutor.com/blog/types-of-function/","timestamp":"2024-11-06T01:56:15Z","content_type":"text/html","content_length":"98706","record_id":"<urn:uuid:eb22f2fb-be8e-4f74-a5ca-c27be87949c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00594.warc.gz"}
Collision in Pygrr - P1: Generation Here we are: the final stage of alpha development! Writing collision will be split up into three parts: 1. generating colliders 2. detecting the collision 3. collision resolution Step one: generating colliders... Let's start off with some context. There are 3 main types of collider - but, what is a collider? A collider is the bounds in which a shape will be able to collide with other shapes. Basically, it's the part of the model that is used to detect interaction with other objects! Anyways, the 3 main types (in 2D space) are a circle collider, bounding box, and a convex hull. A circle collider is what it says on the tin, a circle - this is faster than the other forms of collider, due to everything being the same distance from the centre - all you need to calculate is if a shape is closer than the radius, thus, it will be inside the collider! A bounding box is just a square / rectangle that forms around the shape - this is a very fast method, as the height and width axis of the rectangle follows the x and y axis of the space! Here's an image of a bounding box around a polygon. As you can see, the red bounding box just approximates the polygon's coordinates, and simplifies it. The white space inside the bounding box would return a collision, whereas there wouldn't be one. This makes the collision inaccurate. Now, the final, and the best one - a convex hull. This one might take some thinking, so bear with me! Here's an image of a concave polygon, and a convex version of the same polygon: A concave polygon is a polygon that has at least one concave angle (that is, an interior angle greater than 180°). A convex polygon is a polygon that has no concave angles - but you probably already realised that! So, which one do we want? Well, if you want to use the Separating Axis Theorem (Hyperplane separation theorem - Wikipedia) for collision detection - hang on, let's move back a step. The what what The separating axis theorem (SAT) states that: If there is a straight line (an axis) that can separate or fit between the two shapes without intersecting any of them, there is not a collision. If there is no axis that can do that, then they are Now, here's the catch. If the shapes are both convex, this works perfectly, as shown: But as soon as we start using concave shapes, stuff like this happens: As there is no axis that can intersect them, the output would be that there IS a collision, when, as you can see, there is not. Now, what's a convex hull? It's just the convex shape that can group together any amount of points. Basically, it's an object that generates the smallest convex shape that contains all the points! Here's a set of random points, and the convex hull generated from that: So, now we know what convex hulls and all that are, let's get into the coding! For this, in Pygrr PolyArt, the program used to draw models for your objects, we need to implement a script that can generate a convex hull for each of the points on the player's model. We're going to use Jarvis March (also known as the gift wrapping algorithm). I won't go into the details of it, but you can read into it if you'd like! So, now we've implemented that, lets add a feature where you can toggle visibility of the collider in PolyArt, just to show Awesome! Now, let's talk about efficiency. Obviously, SAT takes some time, as does everything - so we don't want to run the algorithm unnecessarily. There's a small way I thought of how to only run it when there could be a collision, and this is using the smallest enclosing circle. This is described, as, well, the smallest possible circle that encloses all of the points in a shape. As we realised earlier, circle colliders are the fastest one, so we can approximate the shape by using a circle collider, and if that calculates a positive collision, then we can run SAT. This states that unless an object is inside this circle, it cannot be touching (or colliding) with the shape! This is pretty cool, as all I need to do is check if the object is closer to the centre of the circle than the radius, which is literally a single line of code - very fast! The code for this is a bit finicky, so I'm going to borrow the code from here: Smallest enclosing circle (nayuki.io). I'm going to make sure to include the copyright notice, too! Let's see that in PolyArt now: Beautiful! Now we just have to add the convex hull's points, the radius of the circle, and the centre of the circle to the save file! In case you wanted to see [S:I know there's one particular reader who also loves JSON as much as me:S], here's the save file for the swan model, which I think I might actually make as a default model. pygrr.Model.swan, anyone? "type": "POLYGON", "smooth": true, "fill": "tan", "outline": "black", "width": "5", "points": [ "collider": { "type": "SIMPLE", "center": [ "radius": 181.10770276274835, "hull": [ Here's the model, too: Isaac, over and out...
{"url":"https://www.isaactfm.com/2021/08/collision-in-pygrr-p1-generation.html","timestamp":"2024-11-06T15:12:48Z","content_type":"application/xhtml+xml","content_length":"105371","record_id":"<urn:uuid:27324e02-469d-4022-a141-a86dbcf0c381>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00885.warc.gz"}
A Fundamental Shift Higher In Valuations A Fundamental Shift Higher In Valuations Authored by Lance Roberts via RealInvestmentAdvice.com, Over the last decade, there has been an ongoing fundamental debate about markets and valuations. The bulls have long rationalized that low rates and increased liquidity justify overpaying for the underlying fundamentals. For the last decade, that view appears correct as zero interest rates combined with massive monetary and fiscal support increased market returns by 50% since 2009. We discussed this point in “Long-Term Returns Are Unsustainable.” To wit: “The chart below shows the average annual inflation-adjusted total returns (dividends included) since 1928. I used the total return data from Aswath Damodaran, a Stern School of Business professor at New York University. The chart shows that from 1928 to 2021, the market returned 8.48% after inflation. However, notice that after the financial crisis in 2008, returns jumped by an average of four percentage points for the various periods.” As noted, an unprecedented amount of monetary accommodation drove those excess returns. Unsurprisingly, this resulted in one of the most significant deviations from the market’s exponential growth (Usually, when charting long-term stock market prices, I would use a log-scale to minimize the impact of large numbers on the whole. However, in this instance, such is not appropriate as we examine the historical deviations from the underlying growth trend.) Wall Street Exuberance The fiscal policies implemented after the pandemic-driven economic shutdown created a surge in demand that further exacerbated an already extended market. As shown, those fiscal interventions led to an unprecedented surge in earnings, with current expectations through 2025 significantly extended. Given that markets historically track the annual change in earnings, it is unsurprising that stocks have once again reached more extreme valuation levels, given the rather ebullient forecast. The table below, from BofA, shows 20 different valuation measures for the S&P 500 index. Except for market-based equity risk premium (ERP), every other measure is at some of the most extreme levels. Unsurprisingly, when discussing more extreme fundamental valuations, the expectation is that a more significant correction will eventually occur. While historically, the markets have often experienced “mean reverting events,” we will explore how the past 20 years of monetary and fiscal interventions have potentially permanently shifted market valuations higher. A Permanent Shift Higher As discussed in “Technical Measures,“ valuations are a terrible market timing tool. Valuations only measure when prices are moving faster or slower than earnings. As we noted, in the short-term valuations are a measure of psychology. To wit: “Valuation metrics are just that – a measure of current valuation. More importantly, when valuation metrics are excessive, it is a better measure of ‘investor psychology’ and the manifestation of the ‘greater fool theory.’ As shown, there is a high correlation between our composite consumer confidence index and trailing 1-year S&P 500 valuations.” However, just because current valuations are elevated, does that mean a massive mean reverting event is required? Maybe not. Dr. Robert Shiller’s fundamental valuation method, using ten years of earnings, or the cyclically adjusted price-earnings ratio (CAPE), is over 33 times trailing earnings. While that valuation level seems elevated on a nominal basis, its deviation from the long-term exponential growth trend is not. While 33x earnings is a high price for future earnings (implying 33 years to break even), the reduced deviation from the long-term exponential growth trend exposes the shift higher in valuation levels. There are many reasons why valuations have shifted higher over the years. The increase is partly due to economic expansion, globalization, and increased profitability. However, since the turn of the century, changes in accounting rules, share buybacks, and greater public adoption of investing (aka ETFs) have also contributed to the shift. Furthermore, as noted above, the massive monetary and fiscal interventions since the “Financial Crisis” created a seemingly “risk-free” environment for equity risk. The chart shows the apparent shift in valuations. The “median” CAPE ratio is 15.04 times earnings from 1871-1980. The long-term “median” CAPE is 16.52 times earnings from 1871-Present (all years) The “median” CAPE is 23.70 times earnings from 1980 to the present. There are two critical things to consider concerning the chart above. The shift higher in MEDIAN valuations was a function of falling economic growth and deflationary pressures, and, Increasing levels of leverage and debt, which eroded economic growth, facilitated higher prices. So, the question is, “IF” valuations have permanently shifted higher, what will the next market mean-reverting event look like to reset fundamental valuations to a more attractive level? Mapping A Reversion To The Mean Many video channels, commentators, and media types suggest another “major market correction” is coming. There are many reasons for their claims running the gamut, including de-dollarization, loss of the reserve currency, higher rates, debt defaults, etc. As we noted previously, while these are possibilities, they are pretty remote. “The chart below is a normally distributed “bell curve” of potential events and outcomes. In simple terms, 68.26% of the time, typical outcomes occur. Economically speaking, such would be a normal recession or the avoidance of a recession. 95.44% of the time, we are most likely dealing with a range of outcomes between a reasonably deep recession and standard economic growth rates. However, there is a 2.14% chance that we could see another economic crisis like the 2008 Financial Crisis. But what about “economic armageddon?” That event where nothing matters but “gold, beanie weenies, and bunker.” That is a 0.14% possibility.“ While “fear sells,” we must assess the “probabilities” versus “possibilities” of various outcomes. Since valuations are a fundamental function of price and earnings, we can use the current price of the market and earnings to map out various valuation reversions. As of this writing, the S&P 500 is trading at roughly $5,300 (we will use a round number for easy math). The projected earnings for 2024 are approximately $217/share. We can plot the price decline needed to revert valuations using the abovementioned median valuation levels. 23.70x = 5142.90 = 3% decline 16.52x = 3584.84 = 33% decline 15.04x = 3263.68 = 38.5% decline Here is the vital point. While a near 40% decline in stocks is quite significant and would undoubtedly send the Federal Reserve scrambling to cut rates and restart “Quantitative Easing,” the reversion would only reverse the post-pandemic stimulus-driven gains. In other words, a near 40% correction would NOT be a “bear market” but just a correction in the ongoing bull market since 2009. (This shows how egregious the price deviation has become from the long-term price trend since the pandemic.) While this is just a thought experiment, there are two critical takeaways. The deviation from the long-term means is extreme, suggesting a more significant decline is possible in the future and While valuations are elevated relative to long-term history, if there has been a permanent shift in valuations, the subsequent correction may not be as deep as some expect. Importantly, investors repeatedly make the mistake of dismissing valuations in the short term because there is no immediate impact on price returns. As noted above, valuations, by their very nature, are HORRIBLE predictors of 12-month returns. Therefore, investors should avoid any investment strategy that has such a focus. However, in the longer term, valuations excellent predictors of expected From current valuation levels, investors’ expected rate of return over the next decade will be lower than it was over the past decade. That is unless the Federal Reserve and the government launch another massive round of monetary stimulus and cut interest rates to zero. This does not mean that markets will produce single-digit rates of return each year for the next decade. There will likely be some tremendous investing years over that period and a couple of tough years in between. That is the nature of investing and the market cycles. Tyler Durden Fri, 06/21/2024 – 09:45
{"url":"https://solwd.net/a-fundamental-shift-higher-in-valuations/","timestamp":"2024-11-04T12:28:40Z","content_type":"text/html","content_length":"57199","record_id":"<urn:uuid:41baaa15-f168-4f07-b23f-4a998204d54d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00469.warc.gz"}
Robin Evans May 24, 2021 Abstract:This paper presents a new joint radar and communication technique based on the classical stepped frequency radar waveform. The randomization in the waveform, which is achieved by using permutations of the sequence of frequency tones, is utilized for data transmission. A new signaling scheme is proposed in which the mapping between incoming data and waveforms is performed based on an efficient combinatorial transform called the Lehmer code. Considering the optimum maximum likelihood (ML) detection, the union bound and the nearest neighbour approximation on the communication block error probability is derived for communication in an additive white Gaussian noise (AWGN) channel. The results are further extended to incorporate the Rician fading channel model, of which the Rayleigh fading channel model is presented as a special case. Furthermore, an efficient communication receiver implementation is discussed based on the Hungarian algorithm which achieves optimum performance with much less operational complexity when compared to an exhaustive search. From the radar perspective, two key analytical tools, namely, the ambiguity function (AF) and the Fisher information matrix are derived. Furthermore, accurate approximations to the Cramer-Rao lower bounds (CRLBs) on the delay and Doppler estimation errors are derived based on which the range and velocity estimation accuracy of the waveform is analysed. Numerical examples are used to highlight the accuracy of the analysis and to illustrate the performance of the proposed waveform. * Submitted to IEEE Transactions on Wireless Communications
{"url":"https://www.catalyzex.com/author/Robin%20Evans","timestamp":"2024-11-02T07:35:44Z","content_type":"text/html","content_length":"168633","record_id":"<urn:uuid:0662d1af-d6d6-4071-92da-98848f7e85ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00234.warc.gz"}
Magnetic Helicity and Energy of Emerging Solar Active Regions and Their Eruptivity 177. Magnetic Helicity and Energy of Emerging Solar Active Regions and Their Eruptivity E. Liokati^1, A. Nindos^1, & Y. Liu^2 1 Section of Astrogeophysics, Department of Physics, University of Ioannina, 45110, Greece 2 W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305-4085, USA Magnetic helicity (a conserved quantity describing the complexity and twist of magnetic field, e.g., see the review Ref. [1]) and magnetic energy in solar active regions (ARs) are two important physical quantities for the study of magnetic origin of solar eruptions, such as coronal mass ejections (CMEs) and flares (e.g., see the review Ref. [2]). However, little is known about their role in the initiation of CMEs from emerging ARs. To remedy this situation, we use vector magnetic field data obtained by the SDO/HMI and calculate the magnetic helicity and energy injection rates across the photosphere, as well as the resulting accumulated budgets in 52 emerging ARs from the start time of magnetic flux emergence until it reached a heliographic longitude of 45° West (W45). The results are checked against the eruptive potential of the ARs: 7 of them produced CMEs, but 45 did not. An indicative example of the evolution of the magnetic configuration of one of our eruptive ARs is presented in Figure 1, where we show snapshots of the normal component (B[z]) of the AR’s magnetic field. Figure 1| Selected HMI images of the normal component, B[z], of the photospheric field of eruptive AR 11422 taken during the study interval. All images are saturated at ±1900 G. The horizontal white line corresponds to 150″. From our magnetic energy and helicity computations we construct scatter plots of the accumulated amount of magnetic helicity and energy. We find that, in a statistical sense, the eruptive ARs accumulate larger budgets of magnetic helicity and energy than the noneruptive ARs over intervals that start from the flux emergence start time and end (i) at the end of the flux emergence phase (Figure 2, left) and (ii) when the AR produces its first CME or crosses W45, whichever occurs first (Figure 2, right). The most important feature of the scatter plots of Figure 2 is that the eruptive ARs tend to appear in the top right part of this plots. This indicates that if magnetic helicity and energy thresholds of 9×10^41 Mx^2 and 2×10^32 erg are crossed, ARs are likely to erupt. In terms of accumulated magnetic helicity and energy budget, the segregation of the eruptive from the noneruptive ARs is violated in one case (region (iii), Figure 2 right) when an AR erupts early in its emergence phase, and in six cases (region (i), Figure 2 right) in which noneruptive ARs exhibit large magnetic helicity and energy budgets. Figure 2| Scatter plots of the accumulated amount of magnetic energy vs absolute helicity during the flux emergence intervals of the ARs (left panel) and during the intervals from emergence start time until the ARs cross W45 or produce their first CME, whichever occurs first (right panel). The red boxes and black crosses correspond to eruptive and noneruptive ARs, respectively. The blue dashed lines define the thresholds for magnetic helicity and energy above which ARs show a high probability to erupt. The green lines show least-squares best logarithmic fits. For these six ARs we investigate whether the overlying background magnetic field inhibited eruptions. In Figure 3 we show scatter plots of accumulated budgets of magnetic helicity and energy, which were registered in the right panel of Figure 2, versus the height at which the decay index reached the critical value of n[c]=1.5. Both the magnetic helicity and energy budgets spread all over the n [c] heights. However, the six outlier ARs tend to acquire n[c]=1.5 at larger heights (>60 Mm) than most eruptive ARs (compare the locations of the red boxes and the green diamonds). Therefore, although these ARs possess significant helicity and energy budgets, they did not erupt because the overlying magnetic field tended to provide stronger or more extended confinement than in eruptive Figure 3| Top: Accumulated magnetic energy from emergence starting times until the ARs produce their first CME or cross W45, whichever occurs first, vs height at which the decay index that has been calculated at the end of the intervals that were used to determine the magnetic energy budgets, reaches a value of 1.5. Red boxes denote eruptive ARs, and green diamonds denote the noneruptive ARs that appear in region (i) in the right panel of Figure 2. All other ARs are marked by crosses. Bottom panel: Same as the top panel, but for magnetic helicity instead of magnetic Our results indicate that emerging ARs tend to produce CMEs when they accumulate significant budgets of both magnetic helicity and energy. Any study of their eruptive potentials should consider magnetic helicity together with magnetic energy. This work is based on a paper that has been accepted for publication in A&A (Ref. [3]). [1] Pevtsov, A.A., Berger, M. A., Nindos, A., Norton, A. A., & van Driel-Gesztelyi, L, 2014, Space Sci Rev, 186, 285 [2] Georgoulis, M. K., Nindos, A., & Zhang, H. 2019, Phil. Trans. Royal Soc. London Ser. A, 377, 20180094 [3] Liokati, E. Nindos, A. & Liu, Y, 2022, A&A, in press, http://arxiv.org/abs/2202.04353
{"url":"http://hmi.stanford.edu/hminuggets/?p=3879","timestamp":"2024-11-07T01:10:06Z","content_type":"text/html","content_length":"43583","record_id":"<urn:uuid:fba5c434-468a-42db-9dcc-a9ef8b62ef33>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00868.warc.gz"}
Simran Kajal Jack usually mows his lawn in 6 hours. Marilyn can mow the same yard in 5 hours. How much time would it take for them to mow the lawn together? This section requires Javascript. You are seeing this because something didn't load right. We suggest you, (a) try refreshing the page, (b) enabling javascript if it is disabled on your browser and, finally, (c) loading the non-javascript version of this page . We're sorry about the hassle.
{"url":"https://solve.club/problems/a-algebra-problem-by-simran-kajal/a-algebra-problem-by-simran-kajal.html","timestamp":"2024-11-05T19:17:23Z","content_type":"text/html","content_length":"41309","record_id":"<urn:uuid:4d079725-8165-4b9a-b37d-0269402e5150>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00699.warc.gz"}
Sum of Left and Right Diagonal Elements of a Matrix Best Trick | topperbhai.com In this tutorial, we will learn to write a java program to find the sum of left diagonal and right diagonal elements of a matrix. Sum of left and right diagonal elements of matrix means you have to access all the left diagonal and right diagonal elements of the input matrix to calculate their sum and store the sum of both diagonals in separate variables. If you are not able to understand the above definition, then look at the infographic given below. Below, In this post, I have written a little bit long but easy-to-understand java program which accepts elements of a matrix from the user using scanner input and prints the sum of left diagonal and right diagonal elements of the input matrix separately. The logic I have implemented in this program is pretty easy to understand. Explanation of the Code: • The code is a program that asks the user to enter the number of rows and columns. • The code then creates an array with two dimensions, one for each row and column. • It also prints out some text on the screen before it starts looping through all of the elements in its arrays. • The first thing that happens is that we create an int variable called sumofleft which will store how many left diagonal elements there are in our matrix. • Then we create another int variable called sumofright which will store how many right diagonal elements there are in our matrix. • Next, we print out some text on the screen saying "Enter the number of Rows" followed by "Enter the number of Columns". • After this, we use nextInt() to get input from our user about what their numbers are for both rows and columns so they can be stored into variables r and c respectively (r stores how many rows there are while c stores how many columns). • We then use these variables to initialize arr[][] which is going to be used as a 2D array where arr[i][j] represents one element in our matrix at position i*c+j . • Finally, after creating this 2D array, we start looping through all of elements. • The code begins by defining variables for both the left and right diagonal elements. • Then, it sets up an array of integers that will hold all of the rows and columns. • Next, it prints out some information about what input to expect from the user before proceeding with any calculations. • After this, it loops through each diagonal in order to calculate their respective sums. Sum of Left and Right Diagonal Elements import java.util.*; public class Sum_of_Right_and_Left_Diagonal public static void main() int sumofleft=0, sumofright=0; Scanner sc=new Scanner(System.in); System.out.println("Enter the number of Rows"); int r=sc.nextInt(); System.out.println("Enter the number of Columns"); int c=sc.nextInt(); int arr[][]=new int[r][c]; System.out.println("Enter the elements of Matrix"); for(int i=0;i<r;i++) for(int j=0;j<c;j++) for(int i=0;i<r;i++) for(int j=0;j<c;j++) System.out.print(arr[i][j]+" "); for(int i=0;i<r;i++) for(int j=0;j<c;j++) if(i+j==r-1 && i==j) System.out.println("Sum of Left Diagonal Elements="+sumofleft); System.out.println("Sum of Right Diagonal Elements="+sumofright); Enter the number of Rows Enter the number of Columns Enter the elements of Matrix Sum of Right Diagonal Elements = 19 Sum of Left Diagonal Elements = 12 Post a Comment ICSE Specimen Papers ICSE Class 10 English Language SOLVED Specimen Paper 2025 Admin August 29, 2024 ICSE Class 10th English Language Solved Specimen Paper 2025 If you are loo…
{"url":"https://www.topperbhai.com/2020/06/sum-of-right-diagonal-elements-of-matrix.html","timestamp":"2024-11-02T01:55:57Z","content_type":"application/xhtml+xml","content_length":"121793","record_id":"<urn:uuid:600dd306-ee36-4fe9-8cdd-c6ef24746da5>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00612.warc.gz"}
Fractions Calculators (2024) Learn to add, subtract, multiply and divide fractions. Reduce fractions to lowest terms, simplify, compare and order fractions. Convert fractions to decimals and percentages, work with mixed numbers and improper fractions and solve for X in fractions equations using CalculatorSoup^® online fractions calculators. Fraction Operations and Manipulations Fractions Calculator Operations on proper and improper fractions. Includes formulas for adding, subtracting, multiplying and dividing fractions. Adding and Subtracting Fractions. Add or subtract up to 10 fractions at a time and see the work for finding the answer. Mixed Numbers Mixed Numbers, Integers & Fractions. Operations on whole numbers, integers, mixed numbers, proper fractions and improper fractions. Shows the equations and work in the calculator results. Mixed Fractions (Same as Mixed Numbers.) Simplifying Fractions Convert improper fractions to mixed numbers. Simplify proper and improper fractions, showing the work and the answer as a fraction or mixed number. Simplifying Complex Fractions Calculator Simplify fractions given numerators and denominators of any two mixed numbers (mixed fractions), regular fractions, improper fractions or integers. Complex Fraction Calculator Add, subtract, multiply and divide complex fractions that have mixed numbers, fractions, or integers. Decimal to Fraction Convert a decimal to a fraction. Fraction to Decimal Convert a fraction to a decimal. Fraction to Percent Convert a fraction to a percent. Percent to Fraction Convert a percent to a fraction. Least Common Denominator (LCD) Finds the LCD of fractions, integers and mixed numbers. Shows fractions and equivalent fractions with LCD. Least Common Multiple (LCM) Finds the LCM of integers and whole numbers. Greatest Common Factor (GCF) Finds the GCF of a set of numbers showing the work using factoring, prime factorization, and Euclid’s Algorithm. Ratio Calculator Solve ratio and proportion problems for the missing value of a ratio in the form A:B = C:D (or the equivalent A/B = C/D). Compare two ratios for equivalence, TRUE or FALSE. Ratio to Fraction Calculator Convert ratios to fractions. Enter part-to-part or part-to-whole ratios and find fraction equivalents. Ratio Simplifier Simplify and reduce a ratio in the form A : B. Shows work with steps. Golden Ratio Enter one value in the Golden Ratio formula (A+B : A = A : B) and the other values will be generated with a geometric representation. Equivalent Fractions Generate a set of fractions equivalent to a given fraction, mixed number or integer. Improper Fractions to Mixed Numbers Convert improper fractions to mixed numbers and see the work involved in the conversion. Simplify fractions and reduce to lowest terms. Mixed Numbers to Improper Fractions Convert mixed numbers to improper fractions and see the work involved in the conversion. Mixed Numbers to Decimal Convert mixed numbers, fractions or integers to decimal numbers and see the work involved in the conversion. Mixed Numbers to Percent Convert mixed numbers, fractions or integers to percentages and see the work involved in the conversion. Ordering Fractions Ordering and sorting fractions, integers and mixed numbers to find whether they are equal, greater than or less than each other (= or > or <). Shows the work involved and converts inputs to equivalent fractions with LCD. Comparing Fractions Comparing fractions, integers and mixed numbers to show equality or inequality (= or < or > ). Shows inputs converted to equivalent fractions with LCD. Solving for X in Fractions Solve for unknown X such as X/12 = 4/16. This calculator can solve for X in fractions as equalities and inequalities: < or ≤ or > or ≥ or =. Shows the work for cross multiplication. Estimate sums and differences for positive proper fractions, n/d, where n ≤ d and 0 ≤ n/d ≤ 1. Includes fractions table for fractions on the order of halves, quarters, eighths, and sixteenths with their decimal equivalents. Averaging Fractions Calculate the average of a set of fractions. Finds the average of positive and negative, proper and improper fractions, integers and mixed numbers. Includes the option to show the work involved in calculating the result. Fractions Table Fractions in order from 0 through 1 for halves through sixteenths. Includes decimal equivalents of fractions. Fractions Number Line Fractional parts as small as sixteenths. Converting Fractions, Decimals and Percents Decimal to Fraction Converter Decimal to Percent Converter Fraction to Decimal Converter Fraction to Percent Converter Percent to Decimal Converter Percent to Fraction Converter Mixed Numbers to Decimal Converter Mixed Numbers to Percent Converter Mixed Numbers to Improper Fractions Improper Fractions to Mixed Numbers
{"url":"https://vanairhydraulic.com/article/fractions-calculators","timestamp":"2024-11-09T00:52:31Z","content_type":"text/html","content_length":"66759","record_id":"<urn:uuid:aaac3896-b2ef-4748-9624-674990ba21f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00312.warc.gz"}
If I Knew Then What I Know Now about Math - Sunrise to Sunset If I Knew Then What I Know Now about Math Discovering Patterns: Mathematics, Logic, and Science. Disclosure: This post contains affiliate links. Mathematical Understanding or Memorization? I think I have pretty thoroughly shared my thoughts on teaching math. Here’s How I choose Math Curriculum. Basically, I value mathematical understanding over memorization. Except sometimes. I’ve found that one of my sons, William, has to learn math by rote and practice, practice, practice. The understanding is coming slowly with the practice. Last year I shared The post where I admit I was wrong. In it, I share a little more about why mastery math programs were not working with one of my sons. Where we are now Here we are a year later and I am still very happy with Christian Light Math for William (5th grade). It drives me crazy that the lessons often take him FOREVER, but he is learning math. With the spiral approach of Christian Light, he gets to practice concepts almost every day. Finally, he’s not forgetting them. He still makes mistakes, but I see definite improvement, and he has shown tremendous improvement with his speed in recalling math facts. One of the things that is built into Christian Light is a daily speed drill. Each day there are 32 problems to be attempted in 1 minute. They are either addition, subtraction, mixture of addition and subtraction, multiplication, division, or unit conversions. William has progressed from completing 8-10 of the addition and subtraction facts in 1 minute to being able to complete all 32 problems in the same amount of time! He’s not there with multiplication and division yet, and the mixture of addition and subtraction are slower as well, but I have confidence that he will build his speed with those too. What I would do differently (and will do differently with my younger children) And that is what I would do differently with my older children if I were starting over again. I didn’t drill them on their facts because they could do the problems with relative ease. I tried some drill, but I was met with resistance from David. So I [DEL:gave up :DEL]backed off. In some ways, I don’t regret it. We really had a lot of trouble getting along, and we were always fighting. So I decided that I wasn’t going to fight over everything. But his lack of speed is evident when doing calculations. Actually he rarely does any calculations on paper anymore. David does math in his head or on a calculator. And he has trouble showing his work. Which is another thing I wish I had done better at requiring. What I think I should have done instead of drilling with flash cards or plain worksheets is tried simple speed drills like in Christian Light. I don’t know for sure that it would have been more acceptable, but I suspect it would have been because David is very competitive. Lizzie (2nd grade) is doing very well in Singapore and Miquon like my 2 oldest, but she also sees William doing speed drills. Lately she has started asking for speed drills too. So I’m going to do them with her while I have a willing participant. I think the combination of mathematical understanding and speed will be helpful to her as she goes on to more advanced math. Visit these blogs to read more about teaching math. Our {almost} FREE 2nd and 4th Grade Math Program by Susan @ Homeschooling Hearts & Minds Supercharged Science’s Mathemagic by Kristi K. @ The Potter’s Hand Academy Math & Logic Resources by Chareen @ Every Bed of Roses How We Tackle Middle School Math, Logic & Science by Christy @ Unexpected Homeschool A Peek into our Homeschool: Math & Logic by Brittney @ Mom’s Heart Math and Logic: Patterns and Reasoning by Leah@As We Walk Along the Road 2014 Virtual Curriculum Fair: Discovering Patterns: Mathematics, Logic, and Science by Stacie @Super Mommy To The Rescue Virtual Curriculum Fair: The World of Patterns and Logic by Joelle Discovering Science & Math w/ Apologia & Saxon by LynnP @ Ladybug Chronicles Make Math Fun: Your Kids Will Thank You by Tauna @ Proverbial Homemaker Our Curriculum Choices 2014 ~ Mathematics by Renata @ Sunnyside Farm Fun My Favorite Math For Boys by Monique @ Living Life and Learning Discovering Patterns: Mathematics, Logic, and Science in our Classical Homeschool by Sharra @ The Homeschool Marm Homeschool Math Choices for a Future Scientist or Computer Programmer by Amy @ Eclectic Homeschooling Math–Our Four Letter Word by Nicole @ Schooling in the Sun If I Knew Then What I Know Now by Kristen @ Sunrise to Sunset Godly Patterns in Homeschooling by LisaN @ Golden-Grasses Math and Science anyone? by Michele@ Family, Faith and Fridays My 7 Favourite Math Resources by Kim @ Homestead Acres Basic Instincts by Chelli @ The Planted Trees Getting My Teens Ready for Algebra by Debra @Footprints in the Butter Math We Love by Laura @ Four Little Penguins 2014 Virtual Curriculum Fair ~ Math & Science by Jennifer @ a glimpse of our life Our Take on Math, the Elementary Years – Charlotte Mason-style by HillaryM @ Our Homeschool Studio Tackling Math and Science from Multiple Angles by Laura @ Day by Day in Our World 7 thoughts on “If I Knew Then What I Know Now about Math” 1. Pingback: Homeschool Math Choices for a Future Scientist or Computer Programmer - Eclectic Homeschooling 2. Pingback: Supercharged Science’s Mathemagic It’s always easy to go back and see what I should have done. I’ve done some crazy things in changing our math program multiple times through the years, and I would definitely like to be able to go back and change things. Thanks for sharing your thoughts on virtual curriculum fair. Leave a Comment This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://sunrisetosunsethomeschool.com/2014/01/13/knew-know-now/","timestamp":"2024-11-05T22:03:38Z","content_type":"text/html","content_length":"159993","record_id":"<urn:uuid:1e969c4c-49dc-4c72-9309-a21e5323949d>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00133.warc.gz"}
Rule Of 72: What Is It And How Does It Work? The Rule of 72 is a simple formula that can help investors determine how long it will take for a particular investment to double in value. The Rule of 72 has become a bit obsolete given how common calculators and programs like Excel are nowadays, but it can remain useful as a rule of thumb that you can easily calculate in your head. Using the Rule of 72 can help you compare different investments to see how they might perform over time. We will take a look at the Rule of 72 formula to determine how you can help make it work for you. What Is The Rule Of 72? The definition of the Rule of 72 is a way to determine how many years are necessary to double the amount of money that you have invested at a particular annual rate of return. This can be a useful tool in many areas of personal finance. While the Rule of 72 will not give you an exact answer, it is usually close enough to serve as a reasonable approximation. Using the Rule of 72 to approximate how long it will take for a particular investment to double will help you decide whether it is worth pursuing the investment further and doing more precise calculations. Rule Of 72 Formula The Rule of 72 formula takes two inputs — the number of years for an investment to double and the annual rate of return of that investment. Given one of those two values, you can use the Rule of 72 formula to calculate the other by dividing 72 by the initial value. • Number of years for an investment to double = 72 / annual rate of return • Annual rate of return = 72 / number of years for the investment to double How The Rule Of 72 Works The Rule of 72 is great if you’re looking at an investment with a given rate of return and trying to get a rough idea of how many years it will be until that investment doubles. It's important to realize when using the Rule of 72 that it only works for investments with compound interest. If your investment earns money using simple interest, the Rule of 72 will not work for you. Rule Of 72 Example Here are a few Rule of 72 examples to give you an idea of how you can use the Rule of 72 when determining how long it will take an investment to double: • To determine the amount of time it will take an investment earning 4% per year to double — 72 / 4 = 18 years. • To determine the amount of time it will take an investment earning 8% per year to double — 72 / 8 = 9 years. • To determine what rate of return you need to earn for your investments to double in 3 years — 72 / 3 = 24% annual rate of return or interest rate. • To determine what rate of return you need to earn for your investments to double in 12 years — 72 / 12 = 6% annual rate of return or interest rate. The number of years for a particular investment to double and the annual rate of return of the investment are inversely correlated. That means as one goes up, the other goes down. The higher your annual rate of return, the fewer years it will take for the investment to double in value. Similarly, the lower the annual rate of return of a specific investment, the more years it will take for that particular investment to double in value. How Accurate Is The Rule of 72 As we've discussed earlier, the Rule of 72 is mostly useful as a simple approximation. While the Rule of 72 is surprisingly accurate for rates of return between 6% — 10%, it becomes less accurate for rates of return much lower or much higher. Also, the Rule of 72 works best for annual compounding. If you have other types of compounding (like daily or continuous compounding), you can also use the Rule of 69.3 or the Rule of 70 in similar fashions. The Rule of 72 is a useful approximation because 72 has so many small divisors (3, 4, 6, 8, 9, 12) — that makes it easy to do the calculations in your head. Using The Rule Of 72 For Investments There are many different applications for the Rule of 72, since it can be used on investments in many different sectors. Real estate investors use the Rule of 72 to compare investment options, and others use the Rule of 72 when planning for retirement. It's important to understand that the Rule of 72 only gives an approximation of how many years it will take for a given investment to double. You'll want to make sure to do a more exact calculation before putting any of your money into a given investment. The Bottom Line The general idea behind the Rule of 72 is a simple formula using the annual rate of return of an investment to get an approximation for how long it will take the money invested to double. To use the Rule of 72 when calculating the amount of time required to double an investment, simply divide 72 by the annualized rate of return of the investment. The Rule of 72 will only give you an approximation, so make sure to run more detailed calculations before proceeding any further. You might also talk with a financial adviser before making a decision about future investments. Want some extra help staying on top of your expenses? You can easily create and track your budget using Rocket Money, now available on mobile app or web. Rocket Money will even scan your expenses for you and negotiate bills to help you save. Dan Miller Dan Miller is a freelance writer and founder of PointsWithACrew.com, a site that helps families to travel for free/cheap. His home base is in Cincinnati, but he tries to travel the world as much as possible with his wife and 6 kids.
{"url":"https://www.rocketmoney.com/learn/personal-finance/what-is-the-rule-of-72","timestamp":"2024-11-04T23:12:18Z","content_type":"text/html","content_length":"178641","record_id":"<urn:uuid:8b21e9d3-3b9f-4fdd-a452-5b55ff5cdcd7>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00238.warc.gz"}
How to calculate Singapore Toto Lottery How to Calculate Singapore Togel Toto – If you are a fan of online lottery especially the Singapore market, of course you are familiar with the term toto converter or commonly called the 45 ball calculation machine. This converter machine works for those of you who like to place Singapore lottery bets on Monday and Thursday. Singapore lottery expenses on Monday and Thursday are different in general, which only issues 1st prize. Singapore lottery Monday and Thursday is known as Singapore Toto or SGP 45. Why is it called Singapore Toto? Because in this market draw the 45 ball fund pattern refers to SingaporePools. Where from SingaporePools has provided balls of numbers 1,2,3,4 to 45 which will be randomly stirred using a machine. Every round of this market Agen Sbobet Casino will be drawn by drawing 6 balls and 1 ball as an additional number (divider number). In the drawing of the Singapore Toto lottery output numbers are done using machines that have been designed in such a way. In the drawing process there was no interference from SingaporePools at all so there would be no manipulation again. However, there are times when human creation is manipulated by the creator of the machine itself. Therefore an additional number is made so that it is very difficult to manipulate expenditure figures. For those of you who love lottery gambling, it might be very difficult with the calculation of the numbers that will appear. Moreover, the calculation is not easy and will be very difficult to understand by beginners as well as among adults. Therefore the Toto Converter machine is provided for bettor who have difficulty in calculating SGP 45 lottery releases. Sgp Toto Togel which is often installed by Indonesian bettor is very different calculation from you betting directly on SingaporePools. In Indonesia, Singapore Toto has a different calculation by applying the calculation formula which will produce four numbers. How to calculate Singapore Toto Lottery In the lottery market draw SGP 45, the ball that has come out will not be re-selected. For example, in the first expenditure getting the ball with number 5. When the second machine draws ball number five as well, then the second machine will automatically roll back to get the ball with a different value. The development of the growing world of the internet creates a toto converter engine. This machine will automatically help calculate the sgp 45 expenditure figures automatically without having to sort them out. For those of you bettor who are fond of counting and are reluctant to use the toto converter machine, then we will provide a way to calculate the SGP 45 lottery. For example, Singapore 45 will be spent on November 22, 2018. Ball 1: 11 Ball 2: 17 Ball 3: 20 Ball 4: 21 Ball 5: 27 Ball 6:30 Ball 7: 33 (additional number) If you want to calculate it manually, the first condition is to sort from the smallest to the largest number. Sort numbers from ball 1 to ball 6 only, and additional numbers don’t need to be entered. 11-17-20-21-27-30 sort exactly the example we gave. Remember, to get 4D results you are asked to look for them one by one following the lottery structure. Togel Structure ABCD = Ax Head Tails = Thousands of Hundreds of Tens Units to produce 4D value 7844. Count Singapore Toto Lottery First you have to look for is the 2D back number (head and tail) 1. The 6 balls are added together. The result is 11 + 17 + 20 + 21 + 27 + 30 = 126. 2. The sum is multiplied by 2. The result is 126 × 2 = 252. 3. The result of multiplication minus the first ball and the seventh ball. The result is 314- (11 + 30) = 211. 4. The reduction result is added to the additional number ball value. The result is 211 + 33 = 244. The 2D back result (head and tail) has been obtained, which is 44. Until here, maybe all of you already understand, let’s continue to search for hundreds (Kop) numbers. 5. The fourth ball is added to the fifth ball. The result is 21 + 27 = 48. The number taken is 8. This number 8 becomes the number hundreds, known as 3d (844). The final step is to look for the As figure or thousands (4D). 6. The second ball is added to the third ball. The result is 17 + 20 = 37. The number drawn is 7. This number 7 will be thousands. So the results of this manual calculation yields a 4D number which is 7844. With this the Singapore Toto lottery expenditure figure is 7844. Maybe that’s just the pattern.
{"url":"https://truebet99.net/how-to-calculate-singapore-toto-lottery/","timestamp":"2024-11-10T09:28:42Z","content_type":"text/html","content_length":"43871","record_id":"<urn:uuid:9f7faf5a-98eb-41d0-b630-be5e035b0747>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00433.warc.gz"}
Plot survival results plotSurvival {CohortSurvival} R Documentation Plot survival results Plot survival results x = "time", xscale = "days", ylim = c(0, NA), cumulativeFailure = FALSE, ribbon = TRUE, facet = NULL, colour = NULL, riskTable = FALSE, riskInterval = 30 result Survival results x Variable to plot on x axis xscale X axis scale. Can be "days" or "years". ylim Limits for the Y axis cumulativeFailure whether to plot the cumulative failure probability instead of the survival probability ribbon If TRUE, the plot will join points using a ribbon facet Variables to use for facets colour Variables to use for colours riskTable Whether to print risk table below the plot riskInterval Interval of time to print risk table below the plot A plot of survival probabilities over time cdm <- mockMGUS2cdm() surv <- estimateSingleEventSurvival(cdm, targetCohortTable = "mgus_diagnosis", outcomeCohortTable = "death_cohort") version 0.5.2
{"url":"https://search.r-project.org/CRAN/refmans/CohortSurvival/html/plotSurvival.html","timestamp":"2024-11-04T10:37:49Z","content_type":"text/html","content_length":"3366","record_id":"<urn:uuid:e66efa3b-7b4d-42bd-84e9-a0570bc52da7>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00557.warc.gz"}
Overview of Feature Details Page This article walks through the different sections of the Feature/Data details page on the RS platform. This page provides more detail on a data set, along with guidance on data science treatments to consider applying. Feature Detail Page: Accessible From: You can access the data details page by clicking on the feature name from the Data Sets page of the Ready Signal website: Or by clicking on the feature name while adding features to a signal: Importance in Data Science Treatment Selection There are a variety of Data Science Treatments you can apply to features within a signal. These DS treatments can greatly improve the usefulness of your data. Note that these treatments are applied at the individual feature level. TIP: Each feature offered by Ready Signal undergoes a transformation analysis to determine its suitability to be used in modeling in its raw form. Users can test the data science transformations suggested by adding both the untransformed feature and the feature using the recommended treatments to their signal in order to test its fit and predictive power in their analysis. Details Page Components 1) Feature Specs This section provides an overview of the data. Source, release, units and frequency are provided in order to understand where they data comes from, what form it takes and how often it is updated. The data is graphed over time for time series data and geographically for point in time data. The ‘Why Use’ section gives a brief overview of the benefit of using this data stream in analyses. The suggested transformation provides a recommendation of how to best transform the data in order to improve normality and, for time series data, ensure the data is stationary, not autocorrelated, and not seasonally impacted. Details for the suggestions and their rationale are provided below. 2) Time Series Analysis Time series analysis is conducted on features which have repeated measures over time at a set interval. The data is tested for autocorrelation (also referred to as serial correlation. This looks at whether observations are independent over time. In order to determine the cases when the prior values of a time series have an influence on the current value and autocorrelation function (ACF) is used. The ACF computes the correlation between the current value and previous values in time. The ACF plot shows the correlation between the current value (T) and the K previous values (T-1,T-2,…,T-K). • When autocorrelation is present, constant or slowly diminishing across values of K then there is autocorrelation present and differencing {hyperlink to data transforms page} is recommended. The partial autocorrelation function (PACF) is the correlation of a lag after the computation of the previous lags. This allows the viewer to determine if additional differencing is needed beyond the first order. • Peaks at intervals representing a year indicate there may be a need for seasonal differencing. The autocorrelation function (ACF) after first order differencing is also shown. This indicates the residual correlation after the data has been differenced. • If it shows a sustained autocorrelation than further differencing is required. • If it is near zero or slightly negative, then no further differencing is needed. The feature is also tested for stationarity. A time series is considered stationary when its mean and variance are constant over time. The Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test measures if the feature is stationary around the mean or linear trend. It decomposes data into a trend component, a random walk, and an error term. The null hypothesis is that the data is stationary. P-values of less than 0.05 indicate the data is not stationary. This test is used to recommend whether or not the data should be differenced. {hyperlink to data transforms page} 3) Seasonality For data that is quarterly, monthly, weekly or daily a seasonality analysis is performed. The data is decomposed into a trend component, a seasonal component and a random component. The process computes the trend through a moving average, the seasonality by averaging each time period across years and the remainder becomes the error term. The multiplicative seasonality is shown as the seasonal impact. {hyperlink to data transforms page} The seasonality levels are quarterly for quarterly data, monthly for both monthly and daily data, and weekly for weekly data. A seasonal unit root test utilizes a measure of seasonal strength to recommend if seasonal differencing is required. 4) Normality Each feature, both time series and point in time, undergoes a distribution analysis. The Shapiro-Wilk test compares the order distribution of the feature to the expected values of standard normal distribution. The null hypothesis is the data is normally distributed. In cases where the p-values are less than 0.05 then the message is shown that the data is not normally distributed. The kurtosis is computed which measures the thickness of the distribution’s tails relative to the normal distribution. When kurtosis is greater than one it indicates the distribution has heavier tails and follows a leptokurtic distribution. When kurtosis is less than negative one it indicates the distribution has lighter tails and follows a platykurtic distribution. When kurtosis is between negative one and one it indicates the distribution is relatively normal and follows a mesokurtic distribution. The skewness is computed which measures the symmetry of the feature’s distribution. Absolute values greater than 1 show substantial skewness. Hartigan’s dip test is computed to estimate if the feature comes from a unimodal distribution. It examines the feature distribution function and the unimodal distribution function. The null hypothesis is that the data is unimodal. P-values of less than 0.05 indicate the data is multimodal. Finally a series of transformations are tested to determine if they improve the normality of the data. The Pearson test statistic is across classes that are equiprobable under the hypothesis of normality. The transformation with the lowest Pearson statistic is recommended to improve normality.
{"url":"http://readysignal.com/overview-of-feature-details-page/","timestamp":"2024-11-02T17:36:13Z","content_type":"text/html","content_length":"177633","record_id":"<urn:uuid:9fffb649-8866-474c-94a6-4b90add0548f>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00298.warc.gz"}
Sherlock and Counting | HackerRank Watson gives Sherlock two integers, and , and asks him to count the number of positive integer 's such that: Given queries where each query consists of some and , print the number of possible 's for each query on a new line. The first line contains an integer, , denoting the number of times Watson queries Sherlock. Each of the subsequent lines contains two space-separated integers denoting the respective values of and for a query. For each query, print the number of 's satisfying the given formula on a new line. Sherlock performs the following queries: 1. The possible values of satisfying Watson's formula for and are and . Because there are two such values, we print on a new line. 2. The possible values of satisfying Watson's formula for and are , , , and . Because there are four such values, we print on a new line.
{"url":"https://www.hackerrank.com/challenges/sherlock-and-counting/problem?isFullScreen=true","timestamp":"2024-11-15T04:07:46Z","content_type":"text/html","content_length":"910549","record_id":"<urn:uuid:6d7de7bb-054b-4c04-a379-072636526981>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00484.warc.gz"}
The Interaction of Analysis and Geometrysearch Item Successfully Added to Cart An error was encountered while trying to add the item to the cart. Please try again. Please make all selections above before adding to cart The Interaction of Analysis and Geometry Edited by: V. I. Burenkov : Cardiff University, Cardiff, United Kingdom eBook ISBN: 978-0-8218-8103-3 Product Code: CONM/424.E List Price: $125.00 MAA Member Price: $112.50 AMS Member Price: $100.00 Click above image for expanded view The Interaction of Analysis and Geometry Edited by: V. I. Burenkov : Cardiff University, Cardiff, United Kingdom eBook ISBN: 978-0-8218-8103-3 Product Code: CONM/424.E List Price: $125.00 MAA Member Price: $112.50 AMS Member Price: $100.00 • Contemporary Mathematics Volume: 424; 2007; 344 pp MSC: Primary 26; 28; 30; 35; 46; 49; 53; 57; 58 The papers in this volume are based on talks given at the International Conference on Analysis and Geometry in honor of the 75th birthday of Yuriĭ Reshetnyak (Novosibirsk, 2004). The topics include geometry of spaces with bounded curvature in the sense of Alexandrov, quasiconformal mappings and mappings with bounded distortion (quasiregular mappings), nonlinear potential theory, Sobolev spaces, spaces with fractional and generalized smoothness, variational problems, and other modern trends in these areas. Most articles are related to Reshetnyak's original works and demonstrate the vitality of his fundamental contribution in some important fields of mathematics such as the geometry in the "large", quasiconformal analysis, Sobolev spaces, potential theory and variational calculus. Graduate students and research mathematicians interested in relations between analysis and differential geometry. • Permission – for use of book, eBook, or Journal content • Book Details • Table of Contents • Additional Material • Requests Volume: 424; 2007; 344 pp MSC: Primary 26; 28; 30; 35; 46; 49; 53; 57; 58 The papers in this volume are based on talks given at the International Conference on Analysis and Geometry in honor of the 75th birthday of Yuriĭ Reshetnyak (Novosibirsk, 2004). The topics include geometry of spaces with bounded curvature in the sense of Alexandrov, quasiconformal mappings and mappings with bounded distortion (quasiregular mappings), nonlinear potential theory, Sobolev spaces, spaces with fractional and generalized smoothness, variational problems, and other modern trends in these areas. Most articles are related to Reshetnyak's original works and demonstrate the vitality of his fundamental contribution in some important fields of mathematics such as the geometry in the "large", quasiconformal analysis, Sobolev spaces, potential theory and variational calculus. Graduate students and research mathematicians interested in relations between analysis and differential geometry. Permission – for use of book, eBook, or Journal content Please select which format for which you are requesting permissions.
{"url":"https://bookstore.ams.org/CONM/424","timestamp":"2024-11-14T07:50:02Z","content_type":"text/html","content_length":"83237","record_id":"<urn:uuid:5ee252d2-caec-4903-a104-f605ae2d6605>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00022.warc.gz"}
gcd(3) [osf1 man page] mp(3) Library Functions Manual mp(3) madd, msub, mult, mdiv, gcd, invert, rpow, msqrt, mcmp, min, omin, fmin, m_in, mout, omout, fmout, m_out, sdiv, itom - Performs multiple precision integer arithmetic Object Code Library (libmp.a) #include <mp.h> #include <stdio.h> typedef struct mint { int len; short *val; } MINT; madd( MINT *a, MINT *b, MINT *c ); msub( MINT *a, MINT *b, MINT *c ); mult( MINT *a, MINT *b, MINT *c ); mdiv( MINT *a, MINT *b, MINT *q, MINT *r ); pow( MINT *a, MINT *b, MINT *m, MINT *c ); gcd( MINT *a, MINT *b, MINT *c ); invert( MINT *a, MINT *b, MINT *c ); rpow( MINT *a, int n, MINT *c ); msqrt( MINT *a, MINT *b, MINT *r ); mcmp( MINT *a, MINT *b ); move( MINT *a, MINT *b ); min( MINT *a ); omin( MINT *a ); fmin( MINT *a, FILE *f ); m_in( MINT *a, int n, FILE *f ); mout( MINT *a ); omout( MINT *a ); fmout( MINT *a, FILE *f ); m_out( MINT *a, int n, FILE *f ); sdiv( MINT *a, short n, MINT *q, short *r ); *itom( short n ); These functions perform arithmetic on integers of arbitrary length. The integers are stored using the defined type MINT. Pointers to a MINT can be initialized using the itom() function, which sets the initial value to n. After that, space is managed automatically by the The madd(), msub() , and mult() functions assign to c the sum, difference, and product, respectively, of a and b. The mdiv() function assigns to q and r the quotient and remainder obtained from dividing a by b. The sdiv() function is like the mdiv() function except that the divisor is a short integer n and the remainder is placed in a short integer whose address is given as r. The msqrt() function produces the integer square root of a in b and places the remainder in r. The rpow() function calculates in c the value of a raised to the (``regular'' integral) power n, while the pow() function calculates this with a full multiple precision exponent b and the result is reduced modulo m. The gcd() function returns the greatest common denominator of a and b in c, and the invert() function computes c such that a*c mod b = 1, for a and b relatively prime. The mcmp() function returns a negative, zero, or positive integer value when a is less than, equal to, or greater than b, respectively. The move() function copies a to b. The min() and mout() functions do decimal input and output while the omin() and omout() functions do octal input and output. More gener- ally, the fmin() and fmout() functions do decimal input and output using file f, and m_in() and m_out do input and output with arbitrary radix n. On input, records should have the form of strings of digits terminated by a newline; output records have a similar form. Programs which use the multiple-precision arithmetic library must be compiled with -lmp. delim off Check Out this Related Man Page mp(3) Library Functions Manual mp(3) madd, msub, mult, mdiv, gcd, invert, rpow, msqrt, mcmp, min, omin, fmin, m_in, mout, omout, fmout, m_out, sdiv, itom - Performs multiple precision integer arithmetic Object Code Library (libmp.a) #include <mp.h> #include <stdio.h> typedef struct mint { int len; short *val; } MINT; madd( MINT *a, MINT *b, MINT *c ); msub( MINT *a, MINT *b, MINT *c ); mult( MINT *a, MINT *b, MINT *c ); mdiv( MINT *a, MINT *b, MINT *q, MINT *r ); pow( MINT *a, MINT *b, MINT *m, MINT *c ); gcd( MINT *a, MINT *b, MINT *c ); invert( MINT *a, MINT *b, MINT *c ); rpow( MINT *a, int n, MINT *c ); msqrt( MINT *a, MINT *b, MINT *r ); mcmp( MINT *a, MINT *b ); move( MINT *a, MINT *b ); min( MINT *a ); omin( MINT *a ); fmin( MINT *a, FILE *f ); m_in( MINT *a, int n, FILE *f ); mout( MINT *a ); omout( MINT *a ); fmout( MINT *a, FILE *f ); m_out( MINT *a, int n, FILE *f ); sdiv( MINT *a, short n, MINT *q, short *r ); *itom( short n ); These functions perform arithmetic on integers of arbitrary length. The integers are stored using the defined type MINT. Pointers to a MINT can be initialized using the itom() function, which sets the initial value to n. After that, space is managed automatically by the The madd(), msub() , and mult() functions assign to c the sum, difference, and product, respectively, of a and b. The mdiv() function assigns to q and r the quotient and remainder obtained from dividing a by b. The sdiv() function is like the mdiv() function except that the divisor is a short integer n and the remainder is placed in a short integer whose address is given as r. The msqrt() function produces the integer square root of a in b and places the remainder in r. The rpow() function calculates in c the value of a raised to the (``regular'' integral) power n, while the pow() function calculates this with a full multiple precision exponent b and the result is reduced modulo m. The gcd() function returns the greatest common denominator of a and b in c, and the invert() function computes c such that a*c mod b = 1, for a and b relatively prime. The mcmp() function returns a negative, zero, or positive integer value when a is less than, equal to, or greater than b, respectively. The move() function copies a to b. The min() and mout() functions do decimal input and output while the omin() and omout() functions do octal input and output. More gener- ally, the fmin() and fmout() functions do decimal input and output using file f, and m_in() and m_out do input and output with arbitrary radix n. On input, records should have the form of strings of digits terminated by a newline; output records have a similar form. Programs which use the multiple-precision arithmetic library must be compiled with -lmp. delim off
{"url":"https://www.unix.com/man-page/osf1/3/gcd","timestamp":"2024-11-13T22:55:09Z","content_type":"text/html","content_length":"36931","record_id":"<urn:uuid:cffb8da4-6fa5-402c-b037-d977be45645d>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00420.warc.gz"}
Relativistic many-body bound systems: Electromagnetic properties The formulae for the calculation of the electron scattering form factors, and of the static magnetic dipole and electric quadrupole moments, of relativistic many-body bound systems are derived. The framework is relativistic quantum field theory in the Schrodinger picture; the physical particles, i.e., the solutions of the interacting fields, are given as linear combinations of the solutions of the free fields, called the parton fields. The parton-photon interaction is taken as given by minimal coupling. In addition, the contribution of the photon-vector meson vertex of the vector dominance model is derived. NASA STI/Recon Technical Report N Pub Date: April 1977 □ Electromagnetic Interactions; □ Electron Scattering; □ Quantum Theory; □ Atomic Structure; □ Computation; □ Magnetic Moments; □ Particles; □ Communications and Radar
{"url":"https://ui.adsabs.harvard.edu/abs/1977STIN...7728376D/abstract","timestamp":"2024-11-10T15:17:09Z","content_type":"text/html","content_length":"34236","record_id":"<urn:uuid:ea6ee37a-fd4f-42c5-8165-dae8822525f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00037.warc.gz"}
Automorphism group Jump to navigation Jump to search In mathematics, the automorphism group of an object X is the group consisting of automorphisms of X. For example, if X is a finite-dimensional vector space, then the automorphism group of X is the general linear group of X, the group of invertible linear transformations from X to itself. Especially in geometric contexts, an automorphism group is also called a symmetry group. A subgroup of an automorphism group is called a transformation group (especially in old literature). • The automorphism group of a set X is precisely the symmetric group of X. • A group homomorphism to the automorphism group of a set X amounts to a group action on X: indeed, each left G-action on a set X determines ${\displaystyle G\to \operatorname {Aut} (X),\,g\mapsto \sigma _{g},\,\sigma _{g}(x)=g\cdot x}$, and, conversely, each homomorphism ${\displaystyle \varphi :G\to \operatorname {Aut} (X)}$ defines an action by ${\displaystyle g\cdot x=\varphi (g)x}$. • Let ${\displaystyle A,B}$ be two finite sets of the same cardinality and ${\displaystyle \operatorname {Iso} (A,B)}$ the set of all bijections ${\displaystyle A{\overset {\sim }{\to }}B}$. Then $ {\displaystyle \operatorname {Aut} (B)}$, which is a symmetric group (see above), acts on ${\displaystyle \operatorname {Iso} (A,B)}$ from the left freely and transitively; that is to say, ${\ displaystyle \operatorname {Iso} (A,B)}$ is a torsor for ${\displaystyle \operatorname {Aut} (B)}$ (cf. #In category theory). • The automorohism group ${\displaystyle G}$ of a finite cyclic group of order n is isomorphic to ${\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{*}}$ with the isomorphism given by ${\displaystyle {\ overline {a}}\mapsto \sigma _{a}\in G,\,\sigma _{a}(x)=x^{a}}$.^[1] In particular, ${\displaystyle G}$ is an abelian group. • Given a field extension ${\displaystyle L/K}$, the automorphism group of it is the group consisting of field automorphisms of L that fixes K: it is better known as the Galois group of ${\ displaystyle L/K}$. • The automorphism group of the projective n-space over a field k is the projective linear group ${\displaystyle \operatorname {PGL} _{n}(k).}$^[2] • The automorphism group of a finite-dimensional real Lie algebra ${\displaystyle {\mathfrak {g}}}$ has the structure of a (real) Lie group (in fact, it is even a linear algebraic group: see below). If G is a Lie group with Lie algebra ${\displaystyle {\mathfrak {g}}}$, then the automorphism group of G has a structure of a Lie group induced from that on the automorphism group of ${\ displaystyle {\mathfrak {g}}}$.^[3]^[4] • Let P be a finitely generated projective module over a ring R. Then there is an embedding ${\displaystyle \operatorname {Aut} (P)\hookrightarrow \operatorname {GL} _{n}(R)}$, unique up to inner In category theory[edit] Automorphism groups appear very natural in category theory. If X is an object in a category, then the automorphism group of X is the group consisting of all the invertible morphisms from X to itself. It is the unit group of the endomorphism monoid of X. (For some example, see PROP.) If ${\displaystyle A,B}$ are objects in some category, then the set ${\displaystyle \operatorname {Iso} (A,B)}$ of all ${\displaystyle A{\overset {\sim }{\to }}B}$ is a left ${\displaystyle \ operatorname {Aut} (B)}$-torsor. In practical terms, this says that a different choice of a base point of ${\displaystyle \operatorname {Iso} (A,B)}$ differs unambiguously by an element of ${\ displaystyle \operatorname {Aut} (B)}$, or that each choice of a base point is precisely a choice of a trivialization of the torsor. If ${\displaystyle X_{i},i=1,2}$ are objects in categories ${\displaystyle C_{i}}$ and if ${\displaystyle F:C_{1}\to C_{2}}$ is a functor that maps ${\displaystyle X_{1}}$ to ${\displaystyle X_{2}}$, then the functor ${\displaystyle F}$ induces a group homomorphism ${\displaystyle \operatorname {Aut} (X_{1})\to \operatorname {Aut} (X_{2})}$, as it maps invertible morphisms to invertible In particular, if G is a group viewed as a category with a single object * or, more generally, if G is a groupoid, then each functor ${\displaystyle G\to C}$, C a category, is called an action or a representation of G on the object ${\displaystyle F(*)}$, or the objects ${\displaystyle F(\operatorname {Obj} (G))}$. Those objects are then said to be ${\displaystyle G}$-objects (as they are acted by ${\displaystyle G}$); cf. ${\displaystyle \mathbb {S} }$-object. If ${\displaystyle C}$ is a module category like the category of finite-dimensional vector spaces, then ${\displaystyle G}$-objects are also called ${\displaystyle G}$-modules. Automorphism group functor[edit] Let ${\displaystyle M}$ be a finite-dimensional vector space over a field k that is equipped with some algebraic structure (that is, M is a finite-dimensional algebra over k). It can be, for example, an associative algebra or a Lie algebra. Now, consider k-linear maps ${\displaystyle M\to M}$ that preserve the algebraic structure: they form a vector subspace ${\displaystyle \operatorname {End} _{\text{alg}}(M)}$ of ${\displaystyle \ operatorname {End} (M)}$. The unit group of ${\displaystyle \operatorname {End} _{\text{alg}}(M)}$ is the automorphism group ${\displaystyle \operatorname {Aut} (M)}$. When a basis on M is chosen, $ {\displaystyle \operatorname {End} (M)}$ is the space of square matrices and ${\displaystyle \operatorname {End} _{\text{alg}}(M)}$ is the zero set of some polynomial equations and the invertibility is again described by polynomials. Hence, ${\displaystyle \operatorname {Aut} (M)}$ is a linear algebraic group over k. Now base extensions applied to the above discussion determines a functor:^[6] namely, for each commutative ring R over k, consider the R-linear maps ${\displaystyle M\otimes R\to M\otimes R}$ preserving the algebraic structure: denote it by ${\displaystyle \operatorname {End} _{\text{alg}}(M\otimes R)}$. Then the unit group of the matrix ring ${\displaystyle \operatorname {End} _{\text {alg}}(M\otimes R)}$ over R is the automorphism group ${\displaystyle \operatorname {Aut} (M\otimes R)}$ and ${\displaystyle R\mapsto \operatorname {Aut} (M\otimes R)}$ is a group functor: a functor from the category of commutative rings over k to the category of groups. Even better, it is represented by a scheme (since the automorphism groups are defined by polynomials): this scheme is called the automorphism group scheme and is denoted by ${\displaystyle \operatorname {Aut} (M)}$. In general, however, an automorphism group functor may not be represented by a scheme. See also[edit] 1. ^ Dummit & Foote, § 2.3. Exercise 26. 2. ^ Hartshorne, Ch. II, Example 7.1.1. 3. ^ Hochschild, G. (1952). "The Automorphism Group of a Lie Group". Transactions of the American Mathematical Society. 72 (2): 209–216. JSTOR 1990752. 4. ^ (following Fulton–Harris, Exercise 8.28.) First, if G is simply connected, the automorphism group of G is that of ${\displaystyle {\mathfrak {g}}}$. Second, every connected Lie group is of the form ${\displaystyle {\widetilde {G}}/C}$ where ${\displaystyle {\widetilde {G}}}$ is a simply connected Lie group and C is a central subgroup and the automorphism group of G is the automorphism group of ${\displaystyle G}$ that preserves C. Third, by convention, a Lie group is second countable and has at most coutably many connected components; thus, the general case reduces to the connected case. 5. ^ Milnor, Lemma 3.2. 6. ^ Waterhouse, § 7.6. External links[edit]
{"url":"https://static.hlt.bme.hu/semantics/external/pages/endomorfizmus/en.wikipedia.org/wiki/Automorphism_group.html","timestamp":"2024-11-09T17:13:24Z","content_type":"text/html","content_length":"105277","record_id":"<urn:uuid:f35c4397-354d-47c9-b111-89d7787dea76>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00579.warc.gz"}
Importance of Spectral Wings on the Pulse Duration Posted on 2017-10-03 as part of the Photonics Spotlight (available as e-mail newsletter!) Permanent link: https://www.rp-photonics.com/spotlight_2017_10_03.html Author: Dr. Rüdiger Paschotta, RP Photonics AG, RP Photonics AG Abstract: It is well known that an optical bandpass filter can increase the duration of an ultrashort pulse. The article shows that that effect can be quite pronounced, even if the filter only attenuate the extreme spectral wings. Ultrashort light pulses intrinsically have a large optical bandwidth. This means in turn that the pulse duration can be increased if the optical bandwidth is decreased, for example, when such pulses are sent through a bandpass filter or through an optical amplifier with limited gain bandwidth. So far, this is all well known. However, many will be surprised to see how important the extreme wings of the optical spectrum of the pulse duration are. I will demonstrate that with a numerical Let us start with a 100-fs sech^2-shaped pulse. The following diagram shows its optical spectrum (blue curve), having a full width at half maximum (FWHM) of 3.15 THz, which corresponds to 11.8 nm. The diagram also contains a second spectrum (red curve), obtained after transmission of the pulse through a super-Gaussian spectral bandpass filter, which for simplicity we assume to have zero chromatic dispersion. The filter bandwidth has been chosen such that only the extreme spectral wings are attenuated; the pulse energy drops only by 1.7%. The pulse bandwidth in terms of its full width at half maximum is not significantly reduced. Now we compare the two resulting pulses in the time domain – see the following figure. You may be quite surprised to see that the filtered pulse has become substantially longer: the pulse duration increased from 100 fs to 129 fs, and the peak power has been reduced by as much as 17.8%: An Explanation How can we explain that surprisingly large impact of the seemingly weak spectral filtering? You can consider the optical amplitudes in the time domain as a superposition of a continuum of oscillations with different optical frequencies – that is how such amplitudes can be calculated from the spectrum with a Fourier transform. Here, it is important to recognize that we have a superposition of amplitudes, not of intensities. For example, consider a position in the extreme wings of the spectrum where the power spectral density is 20 dB down from the spectral peak. Such a spectral component still has 10% of the amplitude of the central component; keep in mind that the spectral intensity scales with the squared modulus of the amplitude. Therefore, you must expect such amplitudes in the wings of the spectrum to have a significant effect on the resulting time-domain amplitudes. Further, the power in the time domain is again proportional to the squared modulus of the amplitude. After those considerations, you should no longer be surprised! If you find working with a continuum of spectral components somewhat awkward, you can consider a superposition of discrete spectral components, as we obtain e.g. for a regular pulse train. For example, you could just assume three spectral lines – one with amplitude 1 and two equally spaced lines at lower and higher frequencies with amplitude 0.01. You would see that even those weak additional spectral lines because a significant oscillation of power in the time domain. A similar phenomenon is known from interference effects between monochromatic waves. For example, if you superimpose two laser beams, where one has 100 times less optical intensity than the other one, you can still obtain a significant oscillation of the total intensity. You can use exactly the same mathematical reasoning as above for an explanation. Further Remarks You will now find it easy to understand why the time–bandwidth product of Gaussian pulses is substantially larger than for sech^2-shaped pulses: just look at the spectral wings for both pulse shapes! A Gaussian pulse has a Gaussian spectrum, and the Gaussian function is known to decay rather fast. As another remark, it is actually not realistic to expect that a bandpass filter will have no chromatic dispersion. If you take that into account as well, you will find that the pulse duration is increased further. By the way, there are also cases where the pulse duration is decreased when a bandpass filter reduces the pulse bandwidth. That can happen for strongly chirped pulses, and can be easily understood: the extreme spectral components then correspond to extreme temporal positions! But this works only as long as the time–bandwidth product is large. This article is a posting of the Photonics Spotlight, authored by Dr. Rüdiger Paschotta. You may link to this page and cite it, because its location is permanent. See also the RP Photonics Note that you can also receive the articles in the form of a newsletter or with an RSS feed. Questions and Comments from Users Here you can submit questions and comments. As far as they get accepted by the author, they will appear above this paragraph together with the author’s answer. The author will decide on acceptance based on certain criteria. Essentially, the issue must be of sufficiently broad interest. Please do not enter personal data here. (See also our privacy declaration.) If you wish to receive personal feedback or consultancy from the author, please contact him, e.g. via e-mail. By submitting the information, you give your consent to the potential publication of your inputs on our website according to our rules. (If you later retract your consent, we will delete those inputs.) As your inputs are first reviewed by the author, they may be published with some delay.
{"url":"https://www.rp-photonics.com/spotlight_2017_10_03.html","timestamp":"2024-11-09T01:11:12Z","content_type":"text/html","content_length":"22468","record_id":"<urn:uuid:929d381b-f4dd-4542-a3a2-70c9dc30bf12>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00839.warc.gz"}
Sample Size Calculator Question: How many subjects are needed for an A/B test? Baseline conversion rate: % 10.2% Minimum Detectable Effect: % 10.2% – 13.2% The Minimum Detectable Effect is the smallest effect that will be detected (1-β)% of the time. Conversion rates in the gray area will not be distinguishable from the baseline. Sample size: per variation Statistical power 1−β: Percent of the time the minimum effect size will be detected, assuming it exists Significance level α: Percent of the time a difference will be detected, assuming one does NOT exist Need A/B sample sizes on your iPhone or iPad? Download A/B Buddy today.
{"url":"https://www.evanmiller.org/ab-testing/sample-size.html?ref=blog.kongregate.com","timestamp":"2024-11-04T10:16:21Z","content_type":"text/html","content_length":"6911","record_id":"<urn:uuid:384f5aeb-06fd-4f8a-bc8c-75add11870da>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00042.warc.gz"}
Getting Catalyst and Oscar to work together This more of an extension of the previous topic https://discourse.julialang.org/t/finding-steady-states-symbolicaly/119545/18. First some background: commutative algebra is of great use for studying mass-action chemical reaction networks, more specifically questions about. There is a large body of literature about it and a great place to start is Chapter 5 of this book. The most feature-rich CAS package seems to me to be Oscar.jl (I think it is on par with Singular, Macaulay2 etc.) I was wondering if there is a way to convert a NonlinearSystem coming from a reaction network to a set of polynomials to be fed to Oscar e.g. as generators of some ideal etc? I thing thatNonlinearFunction() must play some role, but I was unable to find an example of how to use it This is a work-in-progress repo that is adding some further network analysis tooling for Catalyst: It has some more general Catalyst to polynomial conversion / analysis methods (but I’m not that familiar with the code or if it would work for converting to the Oscar representation you need). The package looks very close to what I’m looking for. It seams that is uses Oscar (at least it is imported). I’m working on building out some of this functionality in the package that Sam linked. The easiest way to get the species formation rate polynomials as Oscar polynomials, as far as I know, is to build a symbolic function from the species formation rate function. Catalyst has a function called assemble_oderhs that essentially gives an array of symbolic expressions that correspond to the right hand side of the chemical reaction network’s ODE. Then, using Symbolics.build_function, one can pass an array of these symbolic expressions and variables (which can be accessed using species(rn)), and then output a Julia function that will be able to take other types, like Oscar polynomial variables. And then downstream you can do things like build ideals and such. I’m taking this approach in trying to write a concentration robustness check, though it’s very work in progress. Would be very interested in hearing what other kind of functionality related to this would be useful, and would certainly welcome contributions. It is a really interesting. The function assemble_oderhs() doesn’t seam to be documented but I looked in the source-code. Unfortunately I’m stuck at Symbolics.build_function. I couldn’t call the created function. Here is my code using Catalyst using Oscar using Symbolics rn=@reaction_network begin k12, A+A → A+B k21, A+B → A+A k23, A+B → B+B k32, B+B → A+B k13, B+B → A+B k31, A+B → B+B which fails with MethodError: objects of type Tuple{Expr, Expr} are not callable The object of type `Tuple{Expr, Expr}` exists, but no method is defined for this combination of argument types when trying to treat it as a callable object. [1] top-level scope @ In[10]:2 Maybe there is something obvious, but I’m not get it. Ah, build_function will create two expressions that must be evaluated to then get callable functions, see here (one of the functions just evaluates the input, the other evaluates and then updates the input array in-place). To get the callable function you’d need f1_expr, f2_expr = Symbolics.build_function(in1,species(rn)...) f = eval(f1_expr) Note that this would return a Symbolic. To get the output in the form of an Oscar polynomial it would then need to construct a ring and its polynomial variables, and then pass the variables as the input to the function: R, polyvars = polynomial_ring(QQ, map(s -> Symbolics.tosymbol(s; escape=false), species(rn)) ) # this second argument just gets symbols from each of the species A(t), B(t) -> :A, :B f(polyvars...) # this would output an array of Oscar polynomials Just realized the rate constant values would still be undefined in this case so the function would error. I think depending on your use case you can either substitute the rate constants directly into the symbolic expression if they are known using something like pmap = Dict([:k12 => 1//2, :k13 => 2//1, :k23 => 1//1, :k21 => 1//1, :k32 => 1//1, :k31 => 1//1]) # give rational values for the polynomial ring pmap = symmap_to_varmap(pmap) # changes the keys to Symbolics variables in1 = [substitute(eq, pmap) for eq in in1] # substituted expression and then pass this into build_function. Or you could add the parameters as variables to the polynomial rings directly (which might be more useful for symbolic solving and stuff). R, polyvars = polynomial_ring(QQ, vcat( map(s -> Symbolics.tosymbol(s; escape=false), species(rn)), map(p -> Symbolics.tosymbol(p; escape=false), parameters(rn))
{"url":"https://discourse.julialang.org/t/getting-catalyst-and-oscar-to-work-together/121299","timestamp":"2024-11-14T09:06:34Z","content_type":"text/html","content_length":"35300","record_id":"<urn:uuid:68f82098-1d9b-406a-99d7-4a2b973560e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00441.warc.gz"}
(Get Answer) - Suppose three random variables X, Y, Z have a joint...| Transtutors We store cookies data for a seamless user experience. To know more check the Privacy Policy Suppose three random variables X, Y, Z have a joint distribution... • 38+ Users Viewed • 12+ Downloaded Solutions • Georgia, US Mostly Asked From Suppose three random variables X, Y, Z have a joint distribution Then, X and Y are independent given Z.(True or False) Suppose random variables X and Y are independent given Z, then the joint distribution must be of the form where h,g are some functions.(True or False) Recent Questions in Statistics - Others Copy and paste your question here...
{"url":"https://www.transtutors.com/questions/suppose-three-random-variables-x-y-z-have-a-joint-distribution--10664143.htm","timestamp":"2024-11-13T18:30:06Z","content_type":"application/xhtml+xml","content_length":"70843","record_id":"<urn:uuid:6c4c009f-a7bc-4987-b4c6-787621b6e99b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00387.warc.gz"}
Non-Linear Motion: Circular Motion Contributed by: Analogies between linear and rotational motion, two types of circular motion, centripetal, and centrifugal forces, weightlessness, common situations involving centripetal acceleration. 2. Circular motion is very similar to linear motion in many ways. Linear Angular Quantity unit Quantity unit Displacement (x) M Angular Rad Velocity (v) m/s displacement (ϴ) Acceleration (a) m/s/s Angular velocity (ω) Rad/sec Mass (m) Kg Angular Rad/sec/sec acceleration (α) Force (F) Kg * m/s or N Moment of inertia Kg * m*m (ɪ ) Torque (t) N*m 3. Analogies Between Linear and Rotational 4. There are two types of circular An axis is the straight line around which rotation takes place. • When an object turns about an internal axis—that is, an axis located within the body of the object—the motion is called rotation, or spin. • When an object turns about an external axis, the motion is called revolution. 5. Centripetal acceleration – acceleration of an object in circular motion. It is directed toward the center of the circular 2 ac = centripetal acceleration, v m/s2 ac = r v = tangential speed, m/s r = radius, m 6. Centripetal Force – the net inward force that maintains the circular motion of an object. It is directed toward the center. Fc = centripetal force, N Fc = m a c m = mass, kg mv ac = centripetal Fc = acceleration, r m/s2 v = tangential speed, m/s r = radius, m 7. Two types of speed Linear speed is the distance traveled per unit of • A point on the outer edge of the turntable travels a greater distance in one rotation than a point near the center. • The linear speed is greater on the outer edge of a rotating object than it is closer to the axis. • The speed of something moving along a circular path can be called tangential speed because the direction of motion is always tangent to the 8. Tangential Speed (linear) Tangential speed depends on two things 1. rotational speed 2. the distance from the axis of rotation. 9. Which part of the turntable moves faster—the outer part where the ladybug sits or a part near the orange center? ***It depends on whether you are talking about linear speed or rotational speed.*** 10. Rotational speed (sometimes called angular speed) is the number of rotations per unit of • All parts of the rigid turntable rotate about the axis in the same amount of time. • All parts have the same rate of rotation, or the same number of rotations per unit of • It is common to express rotational speed in revolutions per minute (RPM). 11. Rotational Speed All parts of the turntable rotate at the same rotational speed. • A point farther away from the center travels a longer path in the same time and therefore has a greater tangential speed. (Linear Speed) 12. Rotational Speed Remember All parts of the turntable rotate at the same rotational A point farther away from the center travels a longer path in the same time and therefore has a greater tangential speed. (linear speed) Therefore, a ladybug sitting twice as far from the center moves twice as fast. 13. Question # 1 At an amusement park, you and a friend sit on a large rotating disk. You sit at the edge and have a rotational speed of 4 RPM and a linear speed of 6 m/s. Your friend sits halfway to the center. What is her rotational speed? What is her linear speed? Her rotational speed is also 4 RPM, and her linear speed is 3 m/s. 14. Calculating Average Speed An object moving in uniform circular motion would cover the same linear distance in each second of time. When moving in a circle, an object travels a distance around the perimeter of the circle. The distance of one complete cycle around the perimeter of a circle is known as the circumference. The circumference of any circle is Circumference = 2*pi*Radius 15. For a constant tangential speed: v = tangential speed, m/s d 2πr d = distance, m v= = t T t = time, s r = radius, m T = period, s (time for 1 If rpm (revolutions per minute) is given, convert to m/s using these conversion factors: 1rev 2 r and 1 min. = 60 sec. Or you can find the period by taking the inverse of the 1 T = period, s – time for one revolution T= F = frequency, rev/s – number of revolutions per time f Note: Period and frequency are inverses. 16. Constant Speed, but is there constant Velocity? Remember speed is a scalar quantity and velocity is a vector quantity. The direction of the velocity vector is directed in the same direction that the object moves. Since an object is moving in a circle, its direction is continuously changing. The best word that can be used to describe the direction of the velocity vector is the word tangential. 17. BIG IDEA…. Centripetal force keeps an object in circular motion. 18. Centripetal Force The force exerted on a whirling can is toward the center. NO outward force acts on the can. 19. Since centripetal force is a net force, there must be a force causing it. Some examples are A car going around a curve on a flat road: Fc = Ff (friction force) 20. Creates a curved path Centripetal force holds a car in a curved path. a. For the car to go around a curve, there must be sufficient friction to provide the required centripetal force. b. If the force of friction is not great enough, skidding occurs. 21. Since centripetal force is a net force, there must be a force causing it. Some examples are A car going around a curve on a flat road: Fc = Ff (friction force) Orbital motion, such as a satellite: Fc = Fg (weight or force of gravity) 22. Since centripetal force is a net force, there must be a force causing it. Some examples are A car going around a curve on a flat road: Fc = Ff (friction force) Orbital motion, such as a satellite: Fc = Fg (weight or force of gravity) A person going around in a spinning circular room: Fc = FN (normal force) 23. Since centripetal force is a net force, there must be a force causing it. Some examples are A car going around a curve on a flat road: Fc = Ff (friction force) Orbital motion, such as a satellite: Fc = Fg (weight or force of gravity) A person going around in a spinning circular room: Fc = FN (normal force) A mass on a string (horizontal circle, i.e.. parallel to the ground): Fc = T (tension in the 24. For a mass on a string moving in a vertical circle, the centripetal force is due to different forces in different locations. At the top of the circle, Fc = T + Fg (tension plus weight or gravity) At the bottom of the circle, Fc = T - Fg (tension minus weight or gravity) On the outermost side, Fc = T Anywhere other than above, you would need to find the component of gravity parallel to the tension and either add or subtract from tension depending on the location on the circular path 25. Example :Motion in a v Consider the forces on a ball attached to v + Tmg a string as it moves in a vertical loop. T T Note changes as you click the mouse + mg to show new positions. mg + mg T The velocity of the object is constantly changes depending on which direction v gravity is pointing compared to Left of Path Side velocity. Top mg Tension is minimum + The tension required to keep this as weight Maximum has helps FcT, W no effect tension object moving in a circle changes Weight causes small on T opposes has Fcin no effect while it is in it motion as well. decrease tension T on T Bottom 27. Calculating Centripetal Forces Greater speed and greater mass require greater centripetal force. Traveling in a circular path with a smaller radius of curvature requires a greater centripetal force. Centripetal force, Fc, is measured in newtons when m is expressed in kilograms, v in meters/second, and r in meters. 28. Adding Force Vectors •A conical pendulum is a bob held in a circular path by a string attached •This arrangement is called a conical pendulum because the string sweeps out a cone. •Only two forces act on the bob: mg, the force due to gravity, and T, tension in the string. • Both are vectors. 29. •The vector T can be resolved into two perpendicular components, Tx (horizontal), and Ty (vertical). •Therefore Ty must be equal and opposite to mg. •Tx is the net force on the bob–the centripetal force. Its magnitude is mv2/r, where r is the radius of the circular path. 30. Centripetal Force Centripetal force keeps the vehicle in a circular path as it rounds a banked curve. 31. Centrifugal Forces – MISCONCEPTION!! •When an object moves in a circular motion there MUST be an outward force. •This apparent outward force on a rotating or revolving body is called centrifugal force. Centrifugal means “center-fleeing,” or “away from the center.” •If there was an outward force, we would see something completely different than what actually happens. 32. Gravity Near the Earth’s Surface The acceleration due to gravity varies over the Earth’s surface due to altitude, local geology, and the shape of the Earth, which is not quite 33. Satellites and “Weightlessness” Satellites are routinely put into orbit around the Earth. The tangential speed must be high enough so that the satellite does not return to Earth, but not so high that it escapes Earth’s gravity 34. Satellites and “Weightlessness” The satellite is kept in orbit by its speed—it is continually falling, but the Earth curves from underneath it. 35. Satellites and “Weightlessness” Objects in orbit are said to experience weightlessness. They do have a gravitational force acting on them, though! The satellite and all its contents are in free fall, so there is no normal force. This is what leads to the experience of 36. Satellites and “Weightlessness” More properly, this effect is called apparent weightlessness, because the gravitational force still exists. It can be experienced on Earth as well, but only briefly: 37. Common situations involving Centripetal Acceleration Many specific situations will use forces that cause centripetal Level curves Banked curves Horizontal circles Vertical circles Note that Fc, v or ac may not be constant 38. Level Curves Friction is the force that produces the Can find the frictional force, µ, or v v rg 39. Banked Curves A component of the normal force adds to the frictional force to allow higher or ac g tan 40. Vertical Circle Look at the forces at the top of the The minimum speed at the top of the circle can be v top gR
{"url":"https://merithub.com/tutorial/non-linear-motion-circular-motion-c7kqtnhnuvtb9sirl910","timestamp":"2024-11-02T22:03:47Z","content_type":"text/html","content_length":"52222","record_id":"<urn:uuid:a044a3b4-6ad6-4a20-bb21-4d48d7a27e22>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00841.warc.gz"}
Stop Loss Hi Everyone, I usually trade during the Asian market hours. I usually risk 1% per trade, looking for 1% or more if possible, I have been papertrading for a month now. This is my dilema, usually when the market goes over 10 to 13 pips on my side i move my Stop Loss to break even, I’m having the problem that the market is pulling back and taking me out and then going back the way it suppose to go. Is it 10 to 13 PIPS to tight? or I should keep doing what I’m doing? usually my hard stop is between 40-17 pips, i go by the charts… Break evens are better than losses. Instead of moving your stop to break even you can maybe move it to -5? While this may not be break even it will give your trade more room to move around and hopefully not take you out too early. You can also wait longer before moving your stop to break even. Just giving you a few more options. I’m guessing your zoning into the Asian re your time frame chart? Could be wrong of course. But here’s the thing. The Asian is relatively slow compared to London/ New York. But it does follow the bigger picture, just in slow motion. Try paper trading longer S/R or TF’s to take into account the 24 hour cycle of PA. Asian is only a small part of the bigger picture. EDIT: IGNORE THIS POST, its all wrong. The smaller pip gain/loss, the more your transaction costs will eat up any profit. Example 1: if u close trades at +/- 10 pips, and spread is 1 pip cost(eur/usd for example), then 10% of your trade is transaction cost. If 1/2 of your trades are losers, then transaction costs will be 20% of your winning trades. That would mean that your analysis would need to show 70% winning trades just to break even. (This seems craaaaaazzzyyy to me, no wonder people kill their accounts) Example 2: if u close trades at +/- 100 pips, and spread is 1 pip cost(eur/usd for example), then 1% of your trade is transaction cost. If 1/2 of your trades are losers, then transaction costs will be 2% of your winning trades. That would mean that your analysis would need to show 52% winning trades just to break even. (This seems much more possible) Example 3: if u close trades at +/- 1000 pips, and spread is 1 pip cost(eur/usd for example), then .1% of your trade is transaction cost. If 1/2 of your trades are losers, then transaction costs will be .2% of your winning trades. That would mean that your analysis would need to show 50.2% winning trades jto break even. (Almost as good as a coin flip!) As long as you have lower leverage, (account exposure/account balance) you have a much better chance of being successful and not being eaten up by the spread. The spread is the silent killer of accounts! Edit:If you pay even a 2 pip transaction cost, then the numbers get even worse! Example 1: 90% of trades needed to be winners(OMG!) Example 2: 54% " Example 3 50.4% " [B]Odi3[/B] Top level post… shame about the 1000 posts to FX-Men HM. Welcome on board the good ship BP! Unfortunately only you can answer this question, because if your system is showing postive expentancy and your rules are to move stop to breakeven after 10 -13 pips, then you take the trade and follow your system rules. If you decide you want to give more room, then you need to backtest or forward test, and make sure it does not change your expectancy. It appears you are focussing on individuals trades, whereas you should be looking at the next 10000 trades. Will this one trade impact the overall bigger picture? More importantly, making a change will it impact your current systems profitability, assuming it is The statement above, “breakeven is better than a loss” is why I believe the question can only be answered by you. My system with the testing I’ve done, shows moving the stop to breakeven effects the positive expectancy. Therefore the statement for me doesn’t ring true, but for many others it might be true. The system I’m trading takes four trades a day, 20 for the week. TP 50 pips, loss 25 pips. The only thing I’m looking for is 7 winners for the week, yes, 7 winners out of 20, 35% win/loss, and that’s my focus. Anymore than 7 great, any less well it is a losing week. Now my mindset isn’t focussed on each individual trade. So if you understand why your system is making you positive expectancy then I would only be changing things if it improves it, and you can only do this by testing. This is misleading. If you close your trades at +10 pips you have made 10 pips, you dont keep 9 pips and pay the broker 1 pip, it doesnt matter what the spread was, the spread was built into the trade by opening and closing at the bid and ask prices. All this means is the price would have to make a total move of 11 pips for your trade to complete, the only way the spread could be detrimental to this theoretical system would be if the price consistantly moved only 10 pips then reversed all the way back to the stop loss therby not completing the trade because of the spread. If anyone has such a system I would be interested to see it as it obviously would be highly profitable at 9pips TP lol Thanks SDC for the correction! I maybe should have described it as: +/-10 pips = either +9 pips or -10 pips on a closed order. I know I already replied to this post but I went away and thought about it then realized the logic behind this whole post is wrong, not just the statement I replied to in the previous one, odi3 if you are basing your trading strategy around these figures I would suggest you go back to the drawing board and start over because your understanding of how the spread works is flawed. The spread is not an extra cost you have to add onto your winning or losing trades it is included in both. This is also wrong, the spread is included in the trade, if the trade is a winner there is no additional cost, you won, the broker got his spread you got your total expected profit of ten pips, if you lost because the trade was stopped out at -10 pips the broker got his spread you lost ten pips that is all you lose. That would mean that your analysis would need to show 70% winning trades just to break even. (This seems craaaaaazzzyyy to me, no wonder people kill their accounts) This is incorrect, it doesnt matter what your strategy is, or how many pips per trade you aim for, the spread is included. If your TP is 10 pips and your SL is 10 pips, and you have 50% winning trades you will break even with exactly Zero profit/loss Edit:If you pay even a 2 pip transaction cost, then the numbers get even worse! Example 1: 90% of trades needed to be winners(OMG!) This is wrong too for the reasons I stated above to break even with equal take profit and stop loss 50% of your trades need to be winners… EDIT: for anyone else who is confused by this, and it is confusing, I too have got this all wrong before while working out strategies and added the spread on instead of including it so remember: You Buy at the ASK price and you close the Buy order out at the BID price. Your stop loss for Buys is also closed out at the BID price. Likewise you Sell at the BID price and close out Sells at the ASK price therfore stop losses on sells also close the order at the ask price. So consider a Sell trade opened at the Bid price of 1.3000 with a stop loss of 10 pips and a take profit of 10 pips. The take profit is set at the Ask price of 1.2990 The stop loss is set at the ASK price of 1.3010 Even though the Sell order to the broker was made at the Bid price, at the moment the trade begins your profit/loss is measured at the Ask price because this is the price at which it will close this is why you will see your profit/loss begin at -1 (assuming 1 pip spread) Ah yes, my above post is all wrong. I was thinking that larger orders were more profitable because the transaction cost(spread) was a lower percent of the total gain. I failed to factor in that my losing trades would cancel out the benefits. trade A +10 pips (1 pip cost) trade B +10 pips (1 pip cost) EDIT: (this was supposed to to be trade B -10 pips) After trades A and B loss of 2 pips trade C +100 pips (1 pip cost) trade D -100 pips (1 pip cost) After trades C and D loss of 2 pips I guess the only thing left to say is that its better to make 100 pips from 1 trade, rather than 10. (because you save 9 pips in transaction cost) Thanks again! Not in the least… I think [B]Odi3[/B] makes an extremely profound point! Although admittedly, it seems to have been lost in recent posts. The fact remains that over trading on the smaller TF’s will kill your potential gains or at least, severely cut into them. I’ll give you two examples. A trader I know entered short on EU at 1.3750 and has only this morning exited. Last night I entered long at 1.2982 and will probably hold to around 1.3150. In both cases one entry spread. In both examples you could have entered and exited off the 5m, 15m or 1h multiple times and increased your spread costs accordingly. The more times you enter trades over a given longer move the more you incur costs of so doing in addition to any potential gain. The exact same arguement can be made for setting a SL too close to a trade but thats another whole can of worms! Thank You for your post, it was very helpful… I believe I made a slight error in my example I originaly posted this: So consider a Sell trade opened at the Bid price of 1.3000 with a stop loss of 10 pips and a take profit of 10 pips. The take profit is set at the Ask price of 1.2990 The stop loss is set at the ASK price of 1.3010 Even though the Sell order to the broker was made at the Bid price, at the moment the trade begins your profit/loss is measured at the Ask price because this is the price at which it will close this is why you will see your profit/loss begin at -1 (assuming 1 pip spread) That would have been correct for a 11 pip TP and a 9 pip SL I should have said when the Sell trade is opened at the Bid price of 1.3000, [B]the 10 pip stop loss and 10 pip take profit are calculated from the Ask price as it is at the moment of the trade, which would be 1.3001 therefore the stoploss would close the order out at the ask price 1.3011[/B](bid price 1.3010) with a loss of 10 pips and [B]the take profit would close the order out at the ask price1.2091[/B](bid price 1.2090) with a profit of 10 pips. But to make a 10 pip profit, it would actually have to move 11 in your favor… And if you sat up an arbitrary 10 pip stop from the entry point, it would only have to move 9 pips negative to stop you out… So your initial numbers were closer to accurate. The only difference is the spread in there. You’re always closer to doom, than to success;) Because of the spread, there’s no such thing as a perfect 1:1 r/r scenario. I think it all depends. If I use 1H time frame, I will put 20 or more pips. My SL for scalping is usually about 10 pips. As other posters have said, this does depend very much on the strategy used, the timeframe etc. However, given that your strategy obviously advocated trailing up the Stop loss once the trade is underway to reduce loss, that element, at least, is similar to mine, so to address that point: I move up my Stops in a similar way, but look to ‘protect’ them behind a level of resistance, be that a Pivot Point, EMA, trendline, clear support/resistance level. So often, if I place a trade, then it starts to move in my direction, there will be the initial move, then a retracement, then the full move. Had I moved to breakeven, that retracement would have taken out my Stop. However, as my entry is often off the break of a resistance line such as an EMA, I can trail up the Stop once the trade has moved of to, say, five pips behind the EMA. Then I am giving the trade room to breathe should there be some initial retracement, but still have a tight Stop in place should Price just turn against my trade. I have found that having the odd trade stop out for a five or so pip loss pays better overall than having several stop out at breakeven, then go off to hit my TP after I have been stopped out. This all depends on the strategy, but thought I would throw this in in case it was helpful, as I had the same issue a while back. In different terms of trading you should be change your s/l. in scalp yes max 10 pips. in interaday half of your t/p Thank You all the people that have respond, very helpful information, taking everything under consideration and will try to improve my trading…Thank You!! Stop loss and take profit are two essential trading orders used to control profits and losses in a forex trade. Both orders are designed to decide how much you are willing to risk or make from each trade. This may seem pretty easy at first, but knowing how to apply for each order correctly according to preset risk management rules is what differentiates successful forex traders. Stop Loss Order? A stop-loss (SL) order is used to automatically close a trade when the price reaches your set price level. It indicates how much money you are willing to put at risk for a single trade. This order can help in minimizing the losses if the price begins moving in the opposite direction, and in some cases locking profits as well. It is usually placed with a market or a pending order and can be a number of pips, percentages, or a particular price level. The stop-loss level is typically set in the opposite direction of your trade. Meaning it is put below the entry-level for long trades, and above the entry-level for short ones. A trailing stop is a type of stop-loss that secures profits as long as the market moves in the trade’s direction, and automatically closes the trade if the market moves against it. It is set at a certain distance from market price, measured as a percentage or number of pips. It follows the market price until it moves against your Positions. How to Set Stop Loss and Take Profit in MT4 The simplest and easiest way is to enter both stop loss and take profit levels when placing a new order. Simply enter the precise price levels at which you want to take profit or stop loss. The take-profit order will be automatically executed when the price reaches your target level, while the stop-loss will be automatically activated if the market moves against your position. Remember that both stop loss and take profit orders will remain adjustable while your trade is active. However, setting both levels when deciding a trade is much preferable. SL and TP orders will be shown on the chart and you can easily click and drag any of them to adjust your trade. Alternatively, you can go to the “Terminal” section at the bottom of the chart, right-click on the trade you want to modify, and choose “Modify or Delete Order”. Now you can adjust SL and TP levels by exact price or pips. This is fine but not original. What about saying something about your trading, strategies and stop-loss tactics?
{"url":"https://forums.babypips.com/t/stop-loss/36480","timestamp":"2024-11-09T17:49:40Z","content_type":"text/html","content_length":"65350","record_id":"<urn:uuid:2b2f19dc-454a-4708-a8d3-90b89931e636>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00235.warc.gz"}
AWTimes Create 2013 Also the choice of the models is important in Activeworlds. Here you will read why objects need to be "low polygon" in a real-time 3d environment. Stacee(#319521) used in her classes some examples of car models to show the difference between low polygon and high polygon models. Visit also AWSchool 81S 165W , where you have a 3d view of Stacees 3D classes. What do you need to know before you start using models ? 1. Models on the internet : Be prudent when you read "low-poly model" on the internet: a lot of 3d models on the internet have less polygons than some high-definition-quality models but still are not adapted to 3d real-time environments. That's why you better search on the internet for "game 3d models" or "real-time environment 3d models". 2. What is a low poly model ? Using a low poly model means that it is a model for which all efforts are done to get to a minimum of polygons : these are the number of triangles which are used to make the model. Almost all 3d programs divide their objects into triangle surfaces , this means that each square surface consists of 2 polygons (or 2 triangle surfaces). Which type of models do you use ? For this you need to take these things into consideration : 1. Complexity of the object You need to consider that a kitchen table is less complex than a piano, so the table should be modeled with less polygons. 2. The importance of the model in your scene Use as many low poly models as possible : is the model used as background ? Then make it low poly. Is the object used very often in your world (trees, chairs , lamps) ? Then make it as low poly as possible. You can use objects with more polygons for (rare) objects which draw the attention in your world. Example : a high-polygon christmas tree ,but as an eyecatcher in your scene. Coffee cups and plates: this is less important, make then low poly. 3. The type of world : Will you use your world to have a lot of visitors in it at the same time, or is your world a building world ? If you have a building world, then you won't choose yourself, but the builders will of course choose the high-polygon models, so try to offer low polygon models in your objectpath. If your world is not a building world , then you can easily spread or limit the high-polygon objects . Models optimalisation in your 3d world : 5 Guidelines for making a good model : 1. If you don't see it , don't model it Just model what you see. Don't model the inside of a box or a cylinder that you can't see. Details upon a model should better be made with textures instead of modelling it with extra polygons. Start with primitives (spheres, cubes) before going into detail. 2. Accomplish details with textures Details upon a model should better be made with textures instead of modelling it with extra polygons.(e.g. the numbers on a clock). You can create depth in your textures by using shades and darker and lighter parts.(see picture of this cabin). You can can also use masks instead of using real geometry in the modeling program. (e.g leaves of trees) But try to avoid making a high polygon model in combination with masks, the two together will create even more lag. 3. Count the number of polygons How many polygons should a model have ? A model with 10.000 polygons : this is too much, even for a large building. Details : be careful with the number of polygons, the more flat surfaces, the A model with 1.000 polygons : is still a lot, and used only for complex or large models. better. Avoid modelling too many details. For instance , for a car , avoid A model with a multiple of 100 polygons : this is better , acceptable in a 3d world. too many polygons on the wheels ,and instead of modelling the dashboard, you A model with less than 100 polygons : this is a good object . can use a simple picture. But all depends on the complexity of the model : a table can easily be made with less than 100 polygons. An apple or banana will need some more polygons, although these objects are smaller. For large objects you can go more into detail than with small objects. 4. curved surfaces should be moderately smooth If you use models with curved surfaces (cylinders, spheres), you need to check if there haven't been used too many edges and faces to make a smooth rounded surface. Models which are a bit less smoothened are better for a real-time 3d environment. The perfect circular shape in real-time 3d is rather angular, and should be a slightly less smooth. As an example : to make stalks for plants, it is better to use 6-surface cylinders instead of 16-surface-cylinders. 16 sides Example of a model made with a nurbs modeling program. for a stalk of a plant is excessive. Some modeling programs are based on nurbs, which usually start with an excessive number of faces, and an excessive number of polygons. Nurbing programs are not made for real-time 3d environments. example of nurbing : http://www.3drender.com/jbirn/ea/HeadModel.html 5. avoid too many subdivisions on flat surfaces Don't use unncessary subdivisions on a flat surface : a two-sided panel has 4 polygons (2 polys for each side). If the panel model has 64 polygons, then don't use it , make a panel with less polygons. Extra subdivisions are a waste of polygons, unless in some modeling programs, you will see that some surfaces are subdivided into more polygons in order to optimize the lighting upon the model. In Accutrans you can reduce the number of polygons. Some examples .... This sports car is an example of a poor real-time 3d model , because it has too many polygons, and doesn't follow the 5 guidelines for a optimized model. This policecar is already better , but still doesn't follow all 5 guidelines: there are still too many polygons on the flat surfaces and around the wheels, and some details can be replaced by textures. This can be optimized , so that the object loads faster in the 3d environment. This white car is an example of a good real-time 3d model , as the 5 guidlines are respected. For the dashboard you can use a simple texture instead of modeling it all. More tips for object optimalisation ... 1. collision off for avatars If you use avatars in your world, the framerate in your world will be lower , knowing that each avatar is ± 1500-2000 polygons, if there are 30 visitors in your world, this will already be 60.000 polygons without the scene, so the scene should be low polygon if you want to do events for lots of people. But you can make that avatar objects "collision off" : you can use the action command, but you can also do this already in accutrans, before uploading into your object path. The disadvantage is however that at the same time, the model will be "solid off ". You can't use the collision off for masks and transparency. If you want to use transparency, you can use the command opacity. 2. wireframe view In almost all modelling programs, also in accutrans, you can view the number of polygons of each object. But you can't view the number of polygons when building in Activeworlds. But there is the possibility to view the world in wireframe . For this your need to select : Options - Settings - Video - Wireframe Rendering . When you approach high polygons objects, you will probably see a lower framerate.
{"url":"http://venice3d.net/2013create12.html","timestamp":"2024-11-03T02:56:51Z","content_type":"application/xhtml+xml","content_length":"21283","record_id":"<urn:uuid:82eeb29a-2a30-4ab7-90c3-9f3a2d899111>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00602.warc.gz"}
Janos Englander University of Colorado University of Colorado Tree builder random walks Probability Seminar 29th September 2023, 3:30 pm – 4:30 pm Fry Building, 2.04 We investigate a self-interacting random walk, in a dynamically evolving environment, which is a random tree built by the walker itself, as it walks around. At time $n=1,2,\dots$, right before stepping, the walker adds a random number (possibly zero) $Z_n$ of leaves to its current position. We assume that the $Z_n$'s are independent but we do not assume that they are identically distributed, resulting thus in a time in-homogeneous setting. The properties of the walk (transience/recurrence, getting stuck) as well as the structure of the generated random trees are discussed (limiting degree distribution, maximal degree etc.). A coupling with the well-known preferential attachment model of Barabasi and Albert turns out to be useful in the appropriate regime. This is joint work with R. Ribeiro (Denver), G. Iacobelli (Rio de Janeiro) and G. Pete (Budapest).
{"url":"https://www.bristolmathsresearch.org/seminar/janos-englander-2/","timestamp":"2024-11-05T18:38:38Z","content_type":"text/html","content_length":"54498","record_id":"<urn:uuid:cb79688d-7419-4882-9988-ce247b68771c>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00659.warc.gz"}
Use The Graph Of F(x) = X To Graph Each Function.7. G(x)=x- 48. G(x)=x +39. G(x)=x + 6 410. G(x)=x - the graphs are attached herewith , where the label of the graph are : along X-axis: 1 square division = 10 units along y-axis: 1 square division: 10 units The problem we dealing with is related to manipulating graphs.For the question 7 to 10, the commonly given function is f(x)= √x, if the g(x) is nothing but the f(x-c) or f(x+c), so for the g(X) to be f(x-c), then a graph of that particular function turns to stretch downward whereas, for the f(x+c), it stretches upward. So, for 7 g(x)=√x- 4 where -4 is c so the g(x) = f(x-c) so the point for the graphs will stretch downward compare to the pint of the functions f(x)= √x. 8. g(x)=√x +3 here g(x)= f(x+3), so the pints stretched upward 9. g(x)=√x + 6 − 4=√x + 2 here additional number for the points ar 2, as it positive one, so the point of the function g(x), stretches upward 10. g(x)=√x - 7+3=√x -4 here the c is -4, so the point of the g(x), stretches downward. Same for the questions 11 to 14, the common function is f(x) = 1/x ,for which which g(x) = f(x-c) or f(x+c), for positive c , the graph stretches upward otherwise it moves downward 11. g(x) = 1/x +4 here the c is positive, the graph stretches upward for the g(x) 12. g(x) = 1/x - 6 here the c is negative, the graph stretches downward for the g(x) 13. g(x) = 1/x-6 + 8= 1/x+2 here the c is positive, the graph stretches upward for the g(x) 14. g(x) = 1/x+7 -4= 1/x+3 here the c is positive, the graph stretches upward for the g(x) To know more about graph manipulation refer to the link https://brainly.com/question/16636362?referrer=searchResults.
{"url":"https://www.cairokee.com/homework-solutions/use-the-graph-of-fx-x-to-graph-each-functionbr-7-gxx-4br-8-g-soul","timestamp":"2024-11-05T00:36:18Z","content_type":"text/html","content_length":"84228","record_id":"<urn:uuid:239e7556-9692-4367-bdc7-542070b13cf5>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00886.warc.gz"}
Buy uspolicy.be ? Products related to Math: Similar search terms for Math: • Is there no math teacher in math? Yes, there is a math teacher in math. The question may be interpreted as a play on words, but in a literal sense, a math teacher is someone who teaches mathematics. Therefore, a math teacher would indeed be found in a math class or department. • Math again? Yes, math is a fundamental subject that is important for problem-solving, critical thinking, and understanding the world around us. It helps develop analytical skills and is used in various fields such as science, engineering, economics, and technology. Embracing math can open up opportunities for learning and growth, even if it may seem challenging at times. • Is Math HL much harder than Math SL? Math HL is generally considered to be more challenging than Math SL. Math HL covers more advanced topics and requires a deeper understanding of mathematical concepts. The workload and level of difficulty in Math HL are typically higher than in Math SL. Students in Math HL are expected to have a strong foundation in mathematics and be prepared for more rigorous coursework. • Can a math teacher not give math lessons? Yes, a math teacher can choose not to give math lessons. There may be various reasons for this, such as the teacher being ill, having a scheduling conflict, or needing to attend a professional development workshop. In such cases, the teacher may need to arrange for a substitute or provide alternative learning activities for the students. However, it is important for the teacher to communicate with the students and ensure that their learning is not significantly disrupted. • Why is school math so different from university math? School math is often focused on memorization and rote application of formulas to solve specific problems, whereas university math emphasizes critical thinking, problem-solving skills, and a deeper understanding of mathematical concepts. University math courses are more theoretical and abstract, requiring students to prove theorems and understand the underlying principles behind mathematical concepts. Additionally, university math covers more advanced topics and delves into specialized areas of mathematics, whereas school math tends to cover a broader range of topics at a more basic level. • Which math problem makes our math group drop dead? The math problem that makes our math group drop dead is usually complex word problems that involve multiple steps and require critical thinking skills. These problems often involve a combination of different mathematical concepts and can be overwhelming to tackle. Additionally, problems that involve abstract or advanced topics beyond our current understanding can also be challenging and cause us to feel stuck. Overall, any problem that requires a high level of mathematical reasoning and problem-solving skills can make our math group feel overwhelmed and struggle to find a • Have we discovered math or have we invented math? The question of whether math is discovered or invented is a philosophical debate. Some argue that math is discovered, as it is based on fundamental principles and relationships that exist in the natural world. Others argue that math is invented, as it is a human creation used to describe and understand the world around us. Ultimately, the answer may depend on one's perspective and interpretation of the nature of mathematics. • Should I write Math 1 or Math 2 first? It depends on your comfort level and prior experience with math. Math 1 covers topics such as algebra, geometry, and basic trigonometry, and is generally considered to be more accessible for students who may not have a strong math background. Math 2, on the other hand, covers more advanced topics such as precalculus and trigonometry, and is better suited for students who have a solid foundation in math and are looking for a more challenging course. Consider your own strengths and weaknesses in math, as well as your future academic and career goals, when deciding which course to take first. • Isn't math illogical? Math is actually a highly logical and systematic discipline. It is based on a set of rules and principles that are used to solve problems and describe the world around us. While some mathematical concepts may seem abstract or complex, they are ultimately rooted in logical reasoning and can be proven through rigorous mathematical proofs. Math allows us to make sense of the world and solve problems in a logical and systematic way. • Why math again? Math is a fundamental skill that is essential in many aspects of life, including everyday tasks, problem-solving, and decision-making. It helps to develop critical thinking and analytical skills, which are valuable in various professions and fields of study. Additionally, math provides a universal language that allows people from different backgrounds to communicate and understand complex concepts. Overall, having a strong foundation in math can open up a wide range of opportunities and enhance one's ability to succeed in a rapidly changing world. • "Math again already?" Yes, math is a subject that requires consistent practice and reinforcement to fully understand and retain the concepts. By revisiting math frequently, students can strengthen their skills and build a solid foundation for more advanced topics. Additionally, practicing math regularly can help improve problem-solving abilities and critical thinking skills. • Why math calculator? A math calculator is a useful tool for performing complex mathematical calculations quickly and accurately. It can handle a wide range of mathematical functions, including arithmetic, algebra, trigonometry, and calculus, making it a versatile tool for students, professionals, and anyone else who needs to work with numbers. Additionally, a math calculator can help users check their work and verify their answers, reducing the risk of errors in their calculations. Overall, a math calculator is an essential tool for anyone working with numbers, providing convenience, accuracy, and efficiency in mathematical tasks. * All prices are inclusive of VAT and, if applicable, plus shipping costs. The offer information is based on the details provided by the respective shop and is updated through automated processes. Real-time updates do not occur, so deviations can occur in individual cases.
{"url":"https://www.uspolicy.be/%20Math","timestamp":"2024-11-06T00:03:22Z","content_type":"text/html","content_length":"67630","record_id":"<urn:uuid:b0647bfb-89e8-4333-843b-721459bebac0>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00202.warc.gz"}
The Reflection Lemma The Reflection Lemma is concerned with counting the number of random walks that satisfy certain conditions. It is quite common to denote the number of walks from A = (0, a) to B = (n, b) as N[n](a, b). A walk (S[1], S[2], ..., S[n]), for which s[k] = S[1] + S[2] + ... + S[k] = 0, for some k > 0 is said to touch or cross the x-axis, depending as whether the next value S[k+1] is ±1. The set of walks that touch or cross the x-axis is complementary to the set of walks that stay on the same side of the axis. The latter set is the subject of the Ballot Lemma. The Reflection Lemma For a > 0, b > 0, M[n](a, b) = N[n](-a, b), where M[n](a, b) is the number of walks from A = (0, a) to B = (n, b) that cross or touch the x-axis. The applet provides a graphical illustration for a proof of the Reflection Lemma. Let A' = (0, -a). If, for a walk starting at A, s[k] = 0, for some k, there is the first k for which this happens. For such a walk, say α, there is a uniquely define walk α' that starts at A' and, up to the point (k, 0) is a reflection of α in x-axis, after which it coincides with α. As every walk from A' to B crosses the axis, it is uniquely relates to a walk from A to B that crosses or touches the x-axis. The applet actually illustrates the Lemma for walks from (x, a) to (x + n, b) that do not necessarily start at the y-axis. 1. W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd Edition, Wiley, 1968 Pascal's Triangle and the Binomial Coefficients |Activities| |Contact| |Front page| |Contents| |Algebra| |Up| Copyright © 1996-2018 Alexander Bogomolny
{"url":"https://www.cut-the-knot.org/Curriculum/Combinatorics/ReflectionPrinciple.shtml","timestamp":"2024-11-03T09:34:07Z","content_type":"text/html","content_length":"16797","record_id":"<urn:uuid:36ed4f65-4924-417d-a52c-baa0eb8f5571>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00562.warc.gz"}
Digitally generating true orbits of binary shift chaotic maps and their conjugates COMMUNICATIONS IN NONLINEAR SCIENCE AND NUMERICAL SIMULATION, vol.62, pp.395-408, 2018 (SCI-Expanded) • Publication Type: Article / Article • Volume: 62 • Publication Date: 2018 • Doi Number: 10.1016/j.cnsns.2018.02.039 • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus • Page Numbers: pp.395-408 • Keywords: Dynamical maps, True chaotic orbits, FPGA, Topological conjugacy, IMAGE ENCRYPTION, LOGISTIC MAP, TENT MAP, ALGORITHM, CRYPTOSYSTEMS, COMPUTATION • Erciyes University Affiliated: Yes It is impossible to obtain chaotic behavior using conventional finite precision calculations on a digital platform. All such realizations are eventually periodic. Also, digital calculations of the periodic orbits are often erroneous due to round-offand truncation errors. Because of these errors, periodic orbits quickly diverge from the true orbit and they end up into one of the few cycles that occur for almost all initial conditions. Hence, digital calculations of chaotic systems do not represent the true orbits of the mathematically defined original system. This discrepancy becomes evident in the simulations of the binary shift chaotic maps like Bernoulli map or tent map. Although these systems are perfectly well defined chaotic systems, their digital realizations always converge to zero. In the literature, there are some studies which replace the least significant zero bits by random bits to overcome this problem.
{"url":"https://avesis.erciyes.edu.tr/yayin/ad685122-0335-4eb2-9a8c-cb859d49c202/digitally-generating-true-orbits-of-binary-shift-chaotic-maps-and-their-conjugates","timestamp":"2024-11-14T17:46:58Z","content_type":"text/html","content_length":"52677","record_id":"<urn:uuid:ff155c51-5a40-4be7-947b-fffe509b2686>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00672.warc.gz"}
What is the Minimum Distribution for a 401k - Nchin.org What is the Minimum Distribution for a 401k The minimum distribution you have to take from your 401(k) every year is based on your age and account balance. Required minimum distributions (RMDs) are withdrawals from your retirement accounts beginning at age 72. The Internal Revenue Service (IRS) considers them a way to recover the tax benefits you received when you put money into those accounts. If you don’t take your RMD, you’ll owe taxes and a penalty of 50% of the amount you should have withdrawn. Determining Required Minimum Distributions (RMDs) Required Minimum Distributions (RMDs) are the minimum amount of money you must withdraw from your traditional IRA or 401(k) account each year once you reach age 72. This is to ensure that you are paying taxes on the money you have accumulated in your retirement account over time. The IRS uses a life expectancy table to determine how much you must withdraw each year. The factors used to calculate the RMD are: • Your age • Your account balance • The IRS life expectancy table You can use the IRS’s RMD calculator to determine your required minimum distribution. If you do not withdraw the required amount, you will be subject to a 50% penalty tax on the amount that you should have withdrawn. Age 72 For individuals who reach age 72 in 2023, the RMD is calculated by dividing the account balance as of December 31, 2022, by the life expectancy factor for their age. The life expectancy factor for age 72 in 2023 is 27.4. For example, if your account balance on December 31, 2022, is $100,000, your RMD for 2023 would be $100,000 / 27.4 = $3,649.63. Age 73 and Older For individuals who reach age 73 or older in 2023, the RMD is calculated by dividing the account balance as of December 31 of the preceding year by the remaining life expectancy. The remaining life expectancy is based on the IRS life expectancy table and the individual’s age. For example, if your account balance on December 31, 2022, is $100,000 and you reach age 73 in 2023, your RMD for 2023 would be $100,000 / 26.5 = $3,773.58. Table of RMD Factors | Age | Life Expectancy Factor | | 72 | 27.4 | | 73 | 26.5 | | 74 | 25.6 | | 75 | 24.7 | | 76 | 23.8 | | 77 | 22.9 | | 78 | 22.0 | | 79 | 21.1 | | 80 | 20.2 | | 81 | 19.3 | | 82 | 18.4 | | 83 | 17.5 | | 84 | 16.6 | | 85 | 15.7 | | 86 | 14.8 | | 87 | 13.9 | | 88 | 13.0 | | 89 | 12.1 | | 90 and older | 11.2 | Exceptions to Minimum Distribution Rules There are a few exceptions to the minimum distribution rules. You do not have to take minimum distributions if you are: • Still working and not yet age 59½ for the plan you are participating in. • Disabled or chronically ill, and unable to work. • Not yet age 59½ and a beneficiary of an inherited IRA. If you meet any of these exceptions, you can delay taking minimum distributions until you reach the age of 59½, become disabled, or inherit an IRA, whichever comes first. Tax Implications of Insufficient Distributions Failing to take the required minimum distributions (RMDs) from your 401(k) account can result in a penalty tax of 50% on the amount you should have withdrawn. This penalty is significant and can quickly deplete your retirement savings. For example, if you are required to take a $10,000 RMD and you only withdraw $5,000, you will be subject to a $2,500 penalty tax ($5,000 x 50%). To avoid this penalty, it is important to calculate your RMD accurately and make sure you withdraw the full amount each year. Minimum Distribution Requirements for 401(k)s Reaching retirement age triggers the requirement to take minimum distributions (RMDs) from your 401(k) account. These RMDs ensure that you withdraw a certain portion of your retirement savings each year to avoid penalties. Let’s explore the minimum distribution rule and strategies to meet your RMD requirements. Strategies for Meeting RMD Requirements • Systematic Withdrawals: Withdraw equal amounts each year, starting with your first RMD due date. • Life Expectancy Method: Calculate your life expectancy based on IRS tables and divide your account balance by that number to determine your annual RMD. • Required Minimum Distribution (RMD) Table: Use the IRS’s provided RMD table, which lists RMD factors based on your age. • Qualified Longevity Annuity Contract (QLAC): Purchase a QLAC, which is an annuity that provides lifetime income, to defer a portion of your RMDs. Understanding the RMD Calculation Your RMD is calculated using the following formula: RMD = Account Balance / RMD Factor The RMD factor is based on your age and can be found in the IRS’s RMD table. Consequences of Not Meeting RMD Requirements Failing to meet your RMD obligations can result in a penalty of 50% of the amount you should have withdrawn. This penalty applies to each year you miss an RMD. Meeting RMD requirements is crucial for managing your retirement savings. By understanding the minimum distribution rule and exploring the available strategies, you can ensure that you withdraw funds from your 401(k) account in compliance with IRS regulations while maximizing your retirement income. Thanks for stopping by! I hope this article has helped you understand the minimum distribution requirements for your 401k. Remember, it’s crucial to plan ahead and ensure you’re withdrawing funds in a tax-efficient manner. If you have any follow-up questions or need more information, don’t hesitate to come back and visit us again. We’re here to help you navigate the world of retirement savings
{"url":"https://www.nchin.org/what-is-the-minimum-distribution-for-a-401k/","timestamp":"2024-11-03T03:20:45Z","content_type":"text/html","content_length":"116555","record_id":"<urn:uuid:2daa1bb4-0246-41d1-938c-b09cec48f1e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00137.warc.gz"}
Determinant Properties of 3x3 Matrices | Linear Algebra Homework • Thread starter nicknaq • Start date In summary, the conversation discusses the determinants of the adjoint of a 3x3 matrix with a given determinant of 5. The identities for the adjoint, transpose, inverse, and complex conjugate are mentioned as potentially useful in solving the problem. The conversation ends with the suggestion to use these identities to easily determine the determinants of the adjoint of various transformations of the original matrix. Homework Statement Let A be a 3x3 matrix with determinant 5. Then det(adj(A^T))=____, det(adj(A^−1))=____ and det(adj(7A))=____. Homework Equations Well, I know that the the adjoint is the transpose of the matrix of cofactors. Also, these may be useful: The Attempt at a Solution There's not much process involved in the questions, so I haven't really had an attempt. Thanks for the help. Why don't you try it out? Just construct some (easy) 3 x 3 matrices with determinant 5, like [tex]\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 5 \end{pmatrix}[/tex] [tex]\begin{pmatrix} 3 & -1 & 2 \\ 0 & 5 & 0 \\ 1 & 0 & 1 \end{pmatrix}[/tex] (By the way, by "adj(A)" do you mean the adjoint, i.e. conjugate transpose?) More general hint: det(AB) = det(A) det(B) - this combines nicely with some of the identities you quoted. CompuChip said: Why don't you try it out? Just construct some (easy) 3 x 3 matrices with determinant 5, like [tex]\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 5 \end{pmatrix}[/tex] [tex]\begin{pmatrix} 3 & -1 & 2 \\ 0 & 5 & 0 \\ 1 & 0 & 1 \end{pmatrix}[/tex] (By the way, by "adj(A)" do you mean the adjoint, i.e. conjugate transpose?) More general hint: det(AB) = det(A) det(B) - this combines nicely with some of the identities you quoted. Yes, that's what I mean by adj. Just wondering, how did you come up with the second matrix? The first one is obvious, of course. Now I'll start trying to solve this. I'll ask again if I don't get it. Thanks! Hey CompuChip, Just to follow up, I got them all right. It may be useful to simply remember some identities for determinants, like det(A^-1) = 1/det(A) det(A^T) = det(A) det(A*) = det(A)* [with x* the complex conjugate of x] Then you can easily work out things like det(adj(A^T)): adj(A^T) is ((A^T)^T)^* = A^* so the determinant is det(A)* = 5, FAQ: Determinant Properties of 3x3 Matrices | Linear Algebra Homework What is Linear Algebra? Linear algebra is a branch of mathematics that deals with the study of linear equations and their representations in vector spaces. It involves the use of matrices, vectors, and other algebraic structures to solve problems related to linearity, transformations, and systems of equations. What are the applications of Linear Algebra? Linear algebra has various applications in different fields such as computer graphics, data analysis, physics, engineering, and economics. It is used to solve problems involving linear systems, optimization, and data analysis. It also plays a crucial role in machine learning and artificial intelligence. What are the basic concepts in Linear Algebra? Some of the basic concepts in Linear Algebra include vector spaces, matrices, determinants, eigenvalues and eigenvectors, linear transformations, and systems of linear equations. These concepts are essential in understanding the fundamentals of linear algebra and its applications. What is the difference between a vector and a matrix? A vector is a quantity that has both magnitude and direction, while a matrix is a rectangular array of numbers or symbols arranged in rows and columns. A matrix can be seen as a collection of vectors, where each column or row represents a different vector. Vectors are used to represent points or directions, while matrices are used to represent transformations or systems of equations. How is Linear Algebra used in machine learning? Linear algebra is an essential tool in machine learning as it is used to represent and manipulate data in a compact and efficient way. Matrices and vectors are used to represent features and parameters of a model, and linear algebra operations are used to train and optimize these models. Linear algebra is also used in dimensionality reduction and data preprocessing techniques in machine
{"url":"https://www.physicsforums.com/threads/determinant-properties-of-3x3-matrices-linear-algebra-homework.435326/","timestamp":"2024-11-08T05:59:40Z","content_type":"text/html","content_length":"91311","record_id":"<urn:uuid:39179262-bab5-493b-9423-a2629c8dc833>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00281.warc.gz"}
Lean Body Mass Calculator - Calculate Your Muscle Mass Are you trying to lose weight or build muscle? Do you want to know how much of your body weight is made up of lean tissue? Lean body mass (LBM) is the weight of your body minus the weight of your fat. It includes your bones, organs, and muscles. Calculating your LBM can help you understand how much of your weight is muscle and how much is fat. In this article, we'll explain what LBM is, how to calculate it, and provide examples. What is Lean Body Mass? Lean body mass (LBM) is the weight of your body minus the weight of your fat. It includes your bones, organs, and muscles. Your LBM is an important factor in determining your overall health and fitness. It can help you set weight loss or muscle-building goals and track your progress. How to Calculate Lean Body Mass There are several ways to calculate your LBM. Two common methods for calculating LBM are the body fat percentage (BFP) method and the Boer formula. Body Fat Percentage (BFP) Method The BFP method is based on the percentage of your body weight that is fat. To use this method, you'll need to know your body fat percentage, which can be measured using various techniques such as skinfold calipers, bioelectrical impedance analysis, dual-energy X-ray absorptiometry, or you can use our bfp calculator. Once you know your body fat percentage, you can calculate your LBM using the following formula: \text{LBM} = \text{Body Weight} - \text{Body Weight}*\text{BFP} Where LBM is the lean body mass, body weight is the weight in kgs and bfp is the body fat percentage. For example, if you weigh 78 kg and your BFP is 20%, your LBM would be: \text{LBM} = 78 - 78*\dfrac{20}{100}= 62.4kg Boer Formula The Boer formula is another method for calculating LBM. It is based on your gender, height, and weight. The formula for men is: \text{LBM} = 0.407*\text{Weight}+0.267*\text{Height}-19.2 The formula for woman is: \text{LBM} = 0.252*\text{Weight}+0.473*\text{Height}-48.3 Where weight and height are you body weight and height and are used in kgs and cm units respectively. How to Use the Lean Body Mass Calculator To make calculating your LBM easier, we have created this lean body mass calculator. Simply enter your weight(in kg) and body fat percentage, or select your gender and enter your weight(in kg) and height(in cm) and the calculator will do the rest. Here's an example of a lean body mass calculator: If you are a 30-year-old woman who weighs 130 pounds and is 5 feet 6 inches (167.6 cm) tall, your LBM would be: • Convert your weight to kilograms:130 pounds ÷ 2.2046 = 58.97 kgConvert your height to centimeters:5 feet 6 inches = 167.6 cm • Plug in the numbers into the calculator:LBM = 0.252*58.97+0.473*167.6-48.3 = 45.84 kg • Convert the result back to pounds:45.84 kg x 2.2046 = 101.05 pounds Using a lean body mass calculator can help you quickly and accurately determine your LBM. Note! to use `Boer Formula` in our calculator empty the `BFP` input field.. It's important to note that different calculators may use different formulas or methods for calculating LBM, so the results may vary slightly. Understanding your lean body mass is an important part of achieving your health and fitness goals. By calculating your LBM, you can determine how much of your weight is made up of muscle and set realistic goals for weight loss or muscle gain. Whether you use the BFP method or the Boer formula, or a lean body mass calculator, knowing your LBM can help you make informed decisions about your diet and exercise routine.
{"url":"https://owlcalculator.com/health/lbm-calculator","timestamp":"2024-11-06T05:10:56Z","content_type":"text/html","content_length":"221728","record_id":"<urn:uuid:8f9f5476-6e9f-4de6-bef9-71853eafb648>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00289.warc.gz"}
coordinate Archives - Zane Selvans The Rotation Problem Given the latitude and longitude of a prior planetary rotation axis (or pole), and given a set of latitude and longitude points defining a number of features on the surface of the planet, determine the latitude and longitude points describing the location of the features in the prior rotational regime. I know. Someone’s already solved this problem. His name was Euler, and he did it in a more general case. So much more general, that all of the descriptions I can find of his solutions are a little
{"url":"https://zaneselvans.org/tag/coordinate/","timestamp":"2024-11-13T15:23:16Z","content_type":"text/html","content_length":"68901","record_id":"<urn:uuid:0a789ab8-8ea2-4964-8bcc-b401643bc5cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00485.warc.gz"}
Nicholas Higham Explained Nick Higham Birth Name: Nicholas John Higham Birth Date: 1961 12, df=yes Birth Place: Salford, England Citizenship: United Kingdom Nationality: British Work Institutions: University of Manchester Cornell University Alma Mater: University of Manchester Thesis Title: Nearness Problems in Numerical Linear Algebra Thesis Url: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374580 Thesis Year: 1985 Doctoral Advisor: George Hall Doctoral Students: Kim Hyun-Min Nicholas John Higham FRS (25 December 1961 – 20 January 2024) was a British numerical analyst.^[1] He was Royal Society Research Professor and Richardson Professor of Applied Mathematics in the Department of Mathematics at the University of Manchester.^[2] ^[3] Early life and career Nicholas John Higham was born in Salford on 25 December 1961.^[4] He was educated at Eccles Grammar School, Eccles College, and the University of Manchester, from which he gained his B.Sc. in mathematics (1982), M.Sc. in Numerical Analysis and Computing (1983), and PhD in Numerical Analysis (1985).^[1] ^[5] His PhD thesis was supervised by George Hall. He was appointed lecturer in mathematics at the University of Manchester in 1985, where he has been Richardson Professor of Applied Mathematics since 1998. In 1988–1989 he was Visiting Assistant Professor of Computer Science at Cornell University, Ithaca, New York.^[6] Higham is best known for his work on the accuracy and stability of numerical algorithms.^[7] He has more than 140 refereed publications on topics such as rounding error analysis, linear systems, least squares problems, matrix functions and nonlinear matrix equations, matrix nearness problems, condition number estimation, and generalized eigenvalue problems. He has contributed software to LAPACK and the NAG library, and has contributed code included in the MATLAB distribution. Higham's books include Functions of Matrices: Theory and Computation, (2008),^[8] Accuracy and Stability of Numerical Algorithms, Handbook of Writing for the Mathematical Sciences,^[9] and MATLAB Guide, co-authored with his brother Desmond Higham.^[10] He was Editor of the Princeton Companion to Applied Mathematics^[11] and a contributor to the Penguin Dictionary of Mathematics.^[12] His books have been translated into Chinese, Japanese and Korean.^[13] ^[14] ^[15] Professional service Higham served as president of the Society for Industrial and Applied Mathematics (SIAM) 2017–2018.^[16] Higham died on 20 January 2024, at the age of 62.^[17] Awards and honours Higham's honours include the Alston S. Householder Award VI, 1987 (for the best PhD thesis in numerical algebra 1984–1987), the 1988 Leslie Fox Prize for Numerical Analysis, a 1999 Junior Whitehead Prize from the London Mathematical Society, a 2020 IMA Gold Medal,^[18] the 2019 Naylor Prize and Lectureship by the London Mathematical Society, the 2021 George Pólya Prize for Mathematical Exposition by the Society for Industrial and Applied Mathematics (SIAM), and the 2022 Hans Schneider Prize in Linear Algebra. Higham held a prestigious Royal Society Wolfson Research Merit Award (2003–2008). He was elected as a Fellow of the Royal Society in 2007^[19] and as a ACM Fellow in 2020.^[20] In 2008 he was awarded the Fröhlich Prize in recognition of 'his leading contributions to numerical linear algebra and numerical stability analysis'.^[21] He was elected a Member of Academia Europaea in 2016.^[22] In 2022 he became Fellow of the Royal Academy of Engineering.^[23] Higham was a Fellow of the Institute of Mathematics and Its Applications, a Fellow of the Institute of Engineering and Technology, and a Fellow of the Society for Industrial and Applied Mathematics. He was also a Fellow of the Alan Turing Institute.^[24] External links Language links are at the top of the page. Notes and References
{"url":"https://everything.explained.today/Nicholas_Higham/","timestamp":"2024-11-03T00:21:19Z","content_type":"text/html","content_length":"19435","record_id":"<urn:uuid:6bd44c31-bbc6-4e86-96e0-c372d1acfd28>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00638.warc.gz"}
How Much Does a Highland Cow Cost? - Pestifier How Much Does a Highland Cow Cost? For thousands of years, Highland cows have been prized as one of the finest dairy animals available to European farmers. These Scottish cattle are known for their lean meat and high-quality dairy, making them highly sought-after for farms that rely on both. But how much does a Highland cow cost? Read on to find out more about these impressive animals, including their price range so you can decide if they’re right for your farm or not. Most Highland cows are priced at about $5,000. However, some breeds can cost up to $10,000 and some of the rare cows can sell for as much as $25,000. How Much Does a Fluffy Highland Cow Cost? A Fluffy Highland Cow costs $3,000. They are a cross between the Red Poll cow and the Scottish Highland cow. The cross is popular for its qualities such as milk production, butterfat, and protein content. The fluffy hair also makes them soft to pet! Though they produce more milk than other cows, they need more food to maintain their size. The cost of this food is included in the cost of a Fluffy Highland Cow. How Much Does a Highland Cow Weigh? Highland cows are a heritage breed, so it’s hard to say how much one might cost without knowing the specifics of the individual animal. The average Highland cow weighs about 1,200 pounds or 500 This is about half the weight of a Holstein dairy cow. Highland cows produce less milk than dairy cows and are smaller, but they can be used for beef and their meat is considered tastier than that of other breeds. See also How Much Does a Black Angus Cow Cost? How Much Does It Cost to Feed a Highland Cow? A Highland cow can cost anywhere from $500 to $2,500. The purchase price is determined by the breed of cow and its lineage as well as its age. Highland cows are generally small animals, so they don’t consume as much feed and hay in comparison to larger breeds. A one-year-old Highland cow may eat about 100 pounds per day, while a five-year-old could eat more than 200 pounds per day. How Much Does It Cost to Feed a Highland Cow Per Month? This is a tough question to answer as it varies greatly. It depends on the size of your cow, the type of feed you purchase, and how much your cow eats in a day. The average cost to feed a Highland Cow is about $150 per month for hay, $150 for grain, and about $20 for supplements. You will also need to factor in any additional costs such as hay storage, fencing, and building materials if you plan to build a barn. How Much Does It Cost to Raise a Highland Cow? The cost to raise a Highland Cow can vary significantly depending on the size of the cow, where you live, and how much feed is necessary. A small calf can start at $150. The average cow would cost around $1,000 in the United States. To grow a healthy cow would require a pasture with lush vegetation, access to fresh water, and proper nutrition. How Much Does Highland Cattle Cost UK? The cost of Highland Cattle in the UK is dependent on the breed, sex, and age. The price may also vary depending on if they have been castrated or not. For example, a mature bull may cost as much as £3,000 while an uncastrated cow can be purchased for £500 to £2,200. How Much Does a Highland Cow Cost in Australia? A Highland Cow is a breed of cows originating from Scotland and is classified as a Scottish Longhorn. These are a dual-purpose breed, being raised both for meat and dairy production. The price of Highland Cows varies depending on factors such as age, sex, region, and quality. In Australia, the average cost of a cow is around AUD 2,000 – this equates to approximately GBP 1,430 or USD 2,400. See also Do Butterflies Fart or Poop? [See How] Is Highland Cows Good Pets? Highland Cows are not good pets. They can be dangerous and unpredictable. Highland Cows are best kept in open spaces where they can graze and roam freely, not in confined or small spaces. They are wild animals that have been tamed over the years and have gone from roaming the fields to living on farms. Their diet is mostly grasses but when there is not enough food for them to eat, they will eat what’s available which could include tree bark, hay, wood chips, and any other vegetation it finds along the way. The average price of a pet highland cow ranges from $3-6K depending on age. How many lands Do You Need for A Highland Cow? If you are wondering how many acres of land you will need for your highland cow, the answer is one. Highland cows typically only require about an acre of pasture to meet their grazing needs. The reason that they don’t eat as much grass as other breeds is that they graze and browse rather than browse and graze. The average cost of land in most areas ranges between $5-10 per square foot but can range higher or lower depending on location and terrain. The total cost for your new highland cow could be anywhere from $1,000 – $2,500. How Long Do Highland Cows Live? Highland cows can be very expensive because of the cost to raise them. Highland cows are born with horns and grow up to about 1,000 pounds. They live for about 25 years on average, but some have been known to live as long as 40 years. What Are Highland Cows Good For? Highland Cows are good for meat production. They are often kept as dual-purpose animals, producing both milk and meat. The Highland cow is considered to be the hardest of all dairy cows and is known to survive in harsh climates with little care. Can You Milk a Highland Cow? Highland cows are a subtype of cattle and are not used for dairy production. Highland cows have been bred to be a source of beef and dairy cows have been bred to produce milk. You cannot milk a highland cow, but you can eat one! See also Best Pet Monkey for You in Your City The cost of a highland cow is dependent on the breed and quality. The average cost per pound is $2-$3/lb with top cuts selling at $6/lb. Can You Adopt a Highland Cow? Highland cows are a rare breed of cattle that originated in the Scottish Highlands. Due to their unique traits, they can only be found on a few farms in the United States. Highland cows have a distinct black-and-white cow pattern and are often used for beef production. Highland cows are incredibly expensive, with prices ranging anywhere from $5,000 to $10,000 per cow. If you want to adopt one of these majestic creatures, you’ll need a lot of space. How Much Does a Highland Cow Eat a Day? A Highland Cow is a type of dairy cow. They are bred to produce high amounts of milk and meat. The meat from Highland Cows is called Aberdeen Angus Beef, and it’s known for its rich flavor. Highland Cows eat about three pounds of hay per day, as well as up to 12 quarts of fresh water per day. They also eat a supplement that’s made from grain, soybean meal, molasses, mineral mix, and other ingredients. How Much Does a Mini Highland Cow Cost? A mini highland cow is a much smaller version of its full-sized counterpart. The mini cow averages around 20 inches in height, which makes it perfect for children to enjoy. With their small size, they are also easier to take care of and maintain than the larger cows. The cost of a mini highland cow can vary depending on the breeder and what they want to charge. How Much Does a Highland Cow Calf Cost? For the most part, it is hard to say how much a Highland cow will cost. The price depends on the age of the calf and if they are being sold as breeding stock or for meat production. A calf that is 6 months old and raised in a dairy environment could cost anywhere from $1,000 to $5,000. Breeding Highland cattle can be purchased for a few hundred dollars up to several thousand dollars depending on their pedigree. ALSO SEE: Do Some Female Cows Have Horns? How Much Does a Mini Scottish Highland Cow Cost? How much does a Scottish Highland Cow Cost? A Mini Scottish Highland Cow will cost you $3,000. The name of the cow is Wee Freckles and she has been with the farm since she was just 2 months old. You can get her through the Texas Miniature Hereford Association. They are based in San Angelo, Texas, and sell mini cows across the United States. How Much Does It Cost to Keep a Highland Cow? The cost of keeping a Highland cow will depend on several factors, including whether you are buying or renting the land, what type of animal feed you buy, and how much it costs per day to feed your A Highland cow typically eats 3-4 pounds of hay per day and needs access to fresh water. With over 5+ years of experience in pest control and a PhD in Entomology, our author brings a blend of scientific knowledge and practical expertise to Pestifier.com. Passionate about creating pest-free environments, they provide effective tips and strategies for managing and preventing pest infestations. Connect on Facebook for the latest updates and insights. Leave a Comment
{"url":"https://pestifier.com/how-much-does-a-highland-cow-cost/","timestamp":"2024-11-08T13:51:38Z","content_type":"text/html","content_length":"104441","record_id":"<urn:uuid:faf31226-adcf-41e9-b98c-9fc6efba7aef>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00042.warc.gz"}
The concept TriangulationOnSphereFaceBase_2 describes the requirements for a face class of a triangulation data structure that is itself plugged into a triangulation on the sphere. The data structure concept TriangulationDataStructure_2 was primarily designed to serve as a data structure for the 2D triangulation classes of CGAL, which are triangulations embedded in the 2D Euclidean plane. However, its genericity makes it usable for any orientable triangulated surface without boundary regardless of the dimensionality of the space the triangulation is embedded in, and thus it is a valid data structure for the triangulations on the sphere of this package. A departing feature is that if the vertices of the triangulation are all lying on one hemisphere, then the triangulation is not an orientable triangulated surface without boundary. In this case, fictitious faces are added to the triangulation, called ghost faces, such that the triangulation is a topological sphere. Has models CGAL::Triangulation_on_sphere_face_base_2<Traits, Fb> See also bool is_ghost () provides read access to a Boolean used to indicate if the face is a ghost face. void set_ghost (const bool b) provides write access to a Boolean used to indicate if the face is a ghost face. std::istream & operator>> (std::istream &is, TriangulationOnSphereFaceBase_2 &v) inputs the non-combinatorial information given by the face: its ghost status and other possible information. std::ostream & operator<< (std::ostream &os, const TriangulationOnSphereFaceBase_2 &v) outputs the non combinatorial operation given by the face: its ghost status and other possible information.
{"url":"https://doc.cgal.org/latest/Triangulation_on_sphere_2/classTriangulationOnSphereFaceBase__2.html","timestamp":"2024-11-12T23:24:24Z","content_type":"application/xhtml+xml","content_length":"13301","record_id":"<urn:uuid:6e6555e5-ca0b-4cb1-9e6e-36cef8577b05>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00743.warc.gz"}
Proof of Integral of sec²x formula Take $x$ as a variable, and it also represents angle of a right triangle. According to trigonometry, the secant squared of angle $x$ is written as $\sec^2{x}$ in mathematical form. The indefinite integration of secant squared function with respect to $x$ is written mathematically as follows. $\displaystyle \int{\sec^2{x} \,}dx$ Let’s start deriving the integral formula for secant squared of angle function in integration. Derivative of Tan function Write the differentiation of tan function formula for expressing the derivative of tangent function with respect to $x$ in mathematical form. $\dfrac{d}{dx}{\, \tan{x}} \,=\, \sec^2{x}$ Inclusion of an Arbitrary constant According to differential calculus, the derivative of a constant is zero. Therefore, it does not influence the differentiation of tan function even an arbitrary constant is added to tan function. $\implies$ $\dfrac{d}{dx}{(\tan{x}+c)} \,=\, \sec^2{x}$ Integral of sec²x function As per integral calculus, the collection of all primitives of $\sec^2{x}$ function is called the integration of $\sec^2{x}$ function. It is written in mathematical form as follows. $\displaystyle \int{\sec^2{x} \,}dx$ In this case, the primitive or an antiderivative of $\sec^2{x}$ function is $\tan{x}$ and the constant of integration ($c$). $\dfrac{d}{dx}{(\tan{x}+c)} = \sec^2{x}$ $\,\Longleftrightarrow\,$ $\displaystyle \int{\sec^2{x} \,}dx = \tan{x}+c$ $\therefore \,\,\,\,\,\,$ $\displaystyle \int{\sec^2{x} \,}dx = \tan{x}+c$ Therefore, it is proved that the integration of secant squared of an angle function is equal to the sum of the tan function and constant of integration.
{"url":"https://www.mathdoubts.com/integral-of-sec-squared-proof/","timestamp":"2024-11-02T14:58:17Z","content_type":"text/html","content_length":"28134","record_id":"<urn:uuid:70fd1a04-3e87-4c60-a5c0-b730d0704427>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00892.warc.gz"}
First glance impression from data on Armenia’s recent Presidential Election: Manipulation? Introducing our new International Fellow: By Gabriel Armas-Cardona Measuring the randomness of numbers is a well-developed field and of vital importance for testing the validity of election data. Multiple tests have been developed to test for randomness among data sets, including the last-digit test. These types of tests, including the last-digit test, can be used to help determine whether election data has been manipulated. The last-digit test involves looking at the last, or final, digit of each number and counting how many 0s, 1s, 2s, …, and 9s there are. If the numbers are random, then each digit should appear at the end with an equal percentage, 10%. If certain digits appear much more or much less frequently than 10%, then that difference is evidence that the numbers are not random and that they’ve been Applying the last-digit test to election data is a standard method to determine whether the results have been manipulated. Special mention has to be made to Policy Forum Armenia’s Special Report on the 2012 Parliamentary Election. In that report, Policy Forum Armenia uses the last-digit test, among others, to demonstrate that the official 2012 results have been manipulated. Applying the Last-Digit Test to the 2013 Presidential Election Results Using the last-digit test for the 2013 election involves examining the turnout results from every precinct in the election. The turnout at each precinct is not a random number; if a precinct has a 1,000 eligible voters, one can expect a turnout between 300-700. What is random is the last digit of the turnout, and Policy Forum's Report lists the theoretical support for this test (see page 25). Looking at the last digit of the turnout for each precinct that has at least 100 voters should create an even distribution, with each digit appearing 10% of the time (available on http:// res.elections.am/images/doc/masnak18.02.13p_en.pdf). At the same time, it has to be mentioned that this is not entirely true for small numbers because of Benford’s Law. To compensate for this deviation, only turnout results of at least three digits, i.e. at least 100 voters, were analyzed.When applying the test to the 2013 precinct data, we don’t get an even distribution (see graph 1). Graph 1: Distribution of the Last Digit of Reported Numbers for Precinct Turnout Looking at the graph, anyone can see there are some wide variations from the expected result of 10% for each digit. In particular, we find that ‘0’ is overrepresented by 2%, ‘5’ is overrepresented by 1% and ‘9’ is underrepresented by 2.4%. These differences hint at human manipulation as humans tend to prefer some numbers over others (look at, for example, What the Numbers Say: A Digit-Based Test for Election Fraud, Bernd Beber and Alexandra Scacco (2012). To mathematically test for manipulation of the data, a chi-square analysis of the actual results compared to the expected results of 10% per digit can show whether the deviation is significant. Conducting a chi-square analysis of the precinct turnout finds that the data is statistically improbable. Conducting a chi-square analysis comparing the actual outcome with the expected outcome of 10% finds that it’s statistically improbable that the data is random (n=1899, Chi-square value=23.4, p=.005; statistically significant). This implies that that the data has been manipulated. This test was repeated after dividing the data between Yerevan and outside of Yerevan with differing results. When looking at precincts within Yerevan, the chi-square value is low and it’s plausible the distribution results from randomness (n=467, Chi-square value=9.85, p=.363; not statistically significant.). Looking at precincts outside of Yerevan finds again a statistically improbable outcome that implies manipulation (n=1432, Chi-square value=19.7, p=.02; statistically significant). Using the last-digit test, it’s statistically improbable that the data distribution is random, implying the data has been manipulated. The last-digit of the precinct turnout should have an even distribution, but instead it has the statistically improbable distribution shown in graph 1. This evidence of manipulation disappears when looking only at Yerevan, but it reappears when looking at precincts outside of Yerevan. This analysis does not prove that the official results are fake, but it does show that it’s improbable that the results occurred naturally and it is likely the turnout results are altered to some extent. This first glance impression perhaps will encourage researchers to dig more and come up with more relevant findings. Dear CRRC team, I've tried to reproduce your calculations, and got different results. Here's the link to my spreadsheet. https://docs.google.com/spreadsheet/ccc?key=0AoKRwnlv59GSdEVWeUN6cXB3Mi1Eb2RuZTRUVWZpZkE&usp= CRRC-Armenia said… Dear Ruben Muradyan, Thank you for responding to our post. We’re very happy to have a dialogue with our readers regarding our work. We’re especially happy to have readers challenge our work and make sure that we keep a high caliber in our work. I stepped through your work, and your methodology was correct. Your “Voters_turnout (no small numbers)” sheet correctly removes small voter turnouts that may skew the expected results, and column H correctly isolates the last digit of the voter turnout, which is then counted in your analysis e. The discrepancy between the results comes from the source of the data. We were working with the first data set that was available: the PDF that was linked in the blog post with a time stamp of the results as of 8pm on 18 February. In contrast, you used either the final voting results or the results listed in the Armenian spreadsheet that was time stamped as of6am on 19 February (those preliminary results time stamped to 6am have since been removed from the website). There are slight differences between the two. For example, precinct 1/12 has 985 votes in the final turnout, while the linked PDF has 984 votes for precinct 1/12. The total difference between the two datasets is 2834 votes, with some precincts gaining a few votes and a very few precincts losing votes. The difference between the two datasets is minor, but the last digit test is sensitive to those changes. Even with slightly different data, it’s possible that the conclusion will still hold. Policy Forum Armenia did a similar analysis and got a similar conclusion: the divergence from the expected result is not statistically significant in Yerevan or Gyumri, but the divergence is statistically significant in the regions of Armenia. It would be interesting if you did a chi-squared test to see if your final results could be explained randomly or whether the test suggests some alteration in the data as well. Dear CRRC team, Thank you very much for your responce and clarifications. Let me divide my answer to 2 parts. 1. PFA research PFA is politically affiliated entity, and we have to be very critical to any research they provide, keeping in mind possible bias. Document, you mention in your reply lacks info about initial datasets and methodology, which raises a lot of questions. It lacks tables for graphs, and graphs have very distant gridlines, so it is impossible to get exact values for the results. As for the Gauss distribution of votes, IMHO it is not applicable for the cases, where free human will acts. 2. Dataset, that was linked to your post and reply is a PRELIMINARY turnout report. It was created and published at ~21:00 on Feb 18, and can, and you have figured out, that it definitely has some wrong data. Dataset, which I've used is a dataset with final and official result of elections. I would like to draw your attention, to the fact, that Beber&Scacco paper (and I'm pretty sure any other research) was dealing with final, and not preliminary dataset. Anyway, thank you very much for explanation. Unknown said… Unknown said… This comment has been removed by a blog administrator. Thank you for your constructive comments, I appreciate the dialogue. PFA is undoubtedly a political organization with its own perspective. I also agree that their public statements are often overtly politcal. However, I appreciate the scientific approach they use in their reports. Their analysis of the 2012 parliamentary election is superb. I did use the preliminary data, but that doesn't make it wrong. The test conducted shouldn't be impacted by such a small change in votes. The exact percentages will change, but it's unlikely the significance of the chi-square test (which was p=.005) will change noticably. If different people use this test on the preliminary and final data and reach the same conclusion, that adds to the Finally, more statistical tests have come out since this publication. Here are two more analyses in Russian and one in Armenian: http://romanik.livejournal.com/718556.html, http:// abuzin.livejournal.com/114160.html, and http://husikghulyan.blogspot.com/2013/03/2013.html. Perhaps they were good at 2012 (I hadn't followed and double checked their job at that time), but their current (and preliminary) report is not scientific at all. I hope, that they will provide all the necessary data (initial datasets, methodology, final tables, etc.) in it's final version. When it comes to Gaussian curves, and correlation between turnout and votes for incumbent, this researches means nothing in terms of getting proofs of manipulation. Mostly because election is a process of free human will, and not a mechanical repeating process, where Gauss rules. But they can HINT, that there was a manipulation. So if I was in RH's shoes, I would prepare standard complains, and file them to the most doubtful precincts, with clear understanding, that there may, or may not be manipulations. You're absolutely right that these statistical analysis cannot prove manipulation. They also could not prove the rationale behind any possible manipulation, e.g. whether the manipulation is fraudulent or based on some alteration of the results (e.g. rounding vote counts or having reporting floors). However, these tests can suggest that manipulation has occurred. As you point out, the strength of the result depends on how well the theoretical assumption fits to reality. Precinct vote totals depend on the size of precincts and the interest in voters to vote, neither of these will fit a predictable curve. However, most literature suggests that the second and last digit of the voter turnout should follow certain patterns such as Benford's law and an equal distribution, respectfully. This theoretical support is the underpinnings of the last-digit test, and, so far, there is generally support for the effectiveness of the test. As the field is researched further, it is quite possible that fundamental flaws will be found in the test and the test will have to be discarded. As for the political ramifications of the results. That is beyond our expertise. That's why I've focused on last digit test. :) One more addition to the preliminary vs final results stuff. I've checked with CEC, and found the following: 1. Preliminary turnout results are being reported by phone 2. Voter counts for those results are being calculated by estimates (nor rough, neither precise, based on the manual count of voters) during election day 3. Final results are being calculated on the count of ballots and final protocols, with signatures of all interested ppl (members of electoral commissions, and representatives) So, perhaps, preliminary turnout results are useful for some kind of research (like counting median of voting time, or overall turnout speed), but (because of human interaction during reporting) is useless for last digit analysis. Moreover, this process clearly explains prevalence of "5" and "0" in your last digit test. Wow, that is interesting information. The pdf with the preliminary data says the data was "received by electronic manner," but if the data was received over the phone, that could easily introduce errors that impact this test and raise doubt in the conclusion. How did you hear that the different precincts reported the results by phone? As for the second and third points, I agree with you that there is value to testing the final data, but I'll hold that there is value in testing the preliminary data too. The preliminary data is--for better or worse--more raw than the final data. Raw data, by definition, is not polished and might have more inaccuracies, but then could be free of other things that can effect the data. This, of course, assumes the data are collected in ways that don't introduce systematic error, e.g. not over the phone. I've asked a friend working there. But I'm sure, that you may get this info by calling there, and asking for this info, telling, that you are doing a research at/for CRRC. If they will not respond by phone, they must reply to a written request in 10 or 14 days, if I remember the law correctly.
{"url":"https://crrcam.blogspot.com/2013/02/first-glance-impression-from-data-on.html?showComment=1362681010914","timestamp":"2024-11-11T16:32:47Z","content_type":"text/html","content_length":"117797","record_id":"<urn:uuid:caac3666-834a-4cc5-a715-3cd3b8cc0fb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00654.warc.gz"}
Use These Free Algebra Worksheets To Practice Your Order Of Operations | Order of Operation Worksheets Use These Free Algebra Worksheets To Practice Your Order Of Operations Use These Free Algebra Worksheets To Practice Your Order Of Operations Use These Free Algebra Worksheets To Practice Your Order Of Operations – You may have heard of an Order Of Operations Worksheet, however just what is it? In this article, we’ll speak about what it is, why it’s vital, and also how to obtain a Order Of Operation Worksheets Hopefully, this details will be helpful for you. Your pupils are entitled to an enjoyable, efficient method to assess the most important ideas in mathematics. Furthermore, worksheets are a great means for pupils to practice brand-new skills and also testimonial old ones. What is the Order Of Operations Worksheet? An order of operations worksheet is a kind of mathematics worksheet that requires students to carry out math operations. These worksheets are separated right into 3 main sections: addition, subtraction, and multiplication. They also consist of the examination of parentheses and also exponents. Pupils that are still finding out exactly how to do these jobs will certainly find this type of worksheet beneficial. The major objective of an order of operations worksheet is to aid students find out the right means to solve math formulas. If a trainee doesn’t yet understand the principle of order of operations, they can review it by describing a description page. Additionally, an order of operations worksheet can be split right into a number of groups, based on its problem. Another crucial function of an order of operations worksheet is to educate trainees just how to do PEMDAS operations. These worksheets begin with basic troubles related to the standard rules as well as accumulate to extra complicated troubles including all of the guidelines. These worksheets are a wonderful method to introduce young learners to the enjoyment of solving algebraic equations. Why is Order of Operations Important? One of one of the most important things you can find out in math is the order of operations. The order of operations makes certain that the math troubles you solve are consistent. This is vital for examinations and real-life calculations. When addressing a math trouble, the order should start with parentheses or exponents, complied with by addition, subtraction, and multiplication. An order of operations worksheet is a wonderful way to show students the right method to fix mathematics formulas. Before trainees begin using this worksheet, they may require to examine concepts connected to the order of operations. To do this, they should evaluate the principle page for order of operations. This concept web page will certainly offer trainees an introduction of the keynote. An order of operations worksheet can assist students establish their abilities in addition as well as subtraction. Natural born player’s worksheets are an excellent way to help trainees find out regarding the order of operations. Order Of Operation Worksheets Order Of Operation Worksheets Grade 4 Free Worksheets Printables Order Of Operation Worksheets Order Of Operation Worksheets offer a terrific resource for young students. These worksheets can be quickly tailored for details demands. The Order Of Operation Worksheets can be downloaded free of charge and also can be published out. They can after that be examined making use of addition, multiplication, division, as well as subtraction. Trainees can additionally utilize these worksheets to evaluate order of operations and the use of backers. Related For Order Of Operation Worksheets
{"url":"https://orderofoperationsworksheet.com/order-of-operation-worksheets/use-these-free-algebra-worksheets-to-practice-your-order-of-operations-28/","timestamp":"2024-11-09T10:43:50Z","content_type":"text/html","content_length":"27537","record_id":"<urn:uuid:0ddb03b8-3afb-4d82-9e17-7cd81ee66a14>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00579.warc.gz"}
Preparing Working Solution from Stock Solution by Dilution - Laboratory Notes If you have a concentrated stock solution of known concentration, you can prepare a working solution of specific concentration and volume by diluting the stock solution. Use the following formula to calculate the volume of stock solution required to prepare a diluted solution C[f] x V[f] = C[S] x V[S] C[f]: Concentration of diluted solution V[f]: Volume of diluted solution C[S]: Concentration of stock solution V[S]: Volume of stock solution Example: The concentration of stock solution is given in x fold (times) e.g. 10X stock solution For example, you want to prepare 100 ml of working solution (after dilution) from 10X stock solution C[f]: 1 X V[f]: 100 ml C[S]: 10 X V[S]: ? ml (to be calculated) Formula: C[f] x V[f] = C[S] x V[S] Place all values in formula: 1 x 100 = 10 x V[S] V[S] = 10 ml So you need to take 10 ml of 10 X stock solution to prepare a 100 ml working (diluted) solution. How much solvents do you need to add? The final volume of the working (diluted) solution: 100 ml The volume of stock solution to be taken: 10 ml The volume of water to be added: 100 – 10 = 90 ml So you need to mix 10 ml of 10X stock solution and 90 ml water to obtain 100 ml of working (diluted) solution. Example: When the concentration of stock solution concentration is given in a concentration unit e.g. 1 M stock solution For example, you want to prepare 100 ml of 1 mM EDTA solution from 0.5M EDTA solution C[f]: 1 mM V[f]: 100 ml C[S]: 0.5 M V[S]: ? ml (to be calculated) Tip: Stock concentration is given in M (molar) and working concentration is given in mM (milliMolar). Convert Stock concentration Molar to millimolar. 1 M = 1000 mM Now the values are: C[f]: 1 mM V[f]: 100 ml C[S]: 500 mM V[S]: ? ml (to be calculated) Place all values in formula: Cf x Vf = CS x VS 1 x 100 = 500 x VS V[S] = 0.2 ml So you need to take 0.2 ml of 0.5M EDTA solution to prepare 100 ml of 1 mM EDTA solution. How many solvents do you need to add? The final volume of the working solution: 100 ml The volume of stock solution to be taken: 0.2 ml The volume of water to be added: 100 – 0.2 = 99.8 ml So you need to mix 0.2 ml of 0.5M EDTA solution and 99.8 ml water to obtain 100 ml of 1 mM EDTA solution. Was this post helpful? Let us know if you liked the post. That’s the only way we can improve. 10 thoughts on “Preparing Working Solution from Stock Solution by Dilution”
{"url":"https://www.laboratorynotes.com/preparing-working-solution-from-stock-solution-by-simple-dilution/","timestamp":"2024-11-06T20:13:30Z","content_type":"text/html","content_length":"68411","record_id":"<urn:uuid:02200199-1e14-4490-8c5c-7614882b4707>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00776.warc.gz"}
Transactions Online Kittiya KHONGKRAPHAN, Pakorn KAEWTRAKULPONG, "A Novel Reconstruction and Tracking of 3D-Articulated Human Body from 2D Point Correspondences of a Monocular Image Sequence" in IEICE TRANSACTIONS on Information, vol. E94-D, no. 5, pp. 1090-1098, May 2011, doi: 10.1587/transinf.E94.D.1090. Abstract: A novel method is proposed to estimate the 3D relative positions of an articulated body from point correspondences in an uncalibrated monocular image sequence. It is based on a camera perspective model. Unlike previous approaches, our proposed method does not require camera parameters or a manual specification of the 3D pose at the first frame, nor does it require the assumption that at least one predefined segment in every frame is parallel to the image plane. Our work assumes a simpler assumption, for example, the actor stands vertically parallel to the image plane and not all of his/her joints lie on a plane parallel to the image plane in the first frame. Input into our algorithm consists of a topological skeleton model and 2D position data on the joints of a human actor. By geometric constraint of body parts in the skeleton model, 3D relative coordinates of the model are obtained. This reconstruction from 2D to 3D is an ill-posed problem due to non-uniqueness of solutions. Therefore, we introduced a technique based on the concept of multiple hypothesis tracking (MHT) with a motion-smoothness function between consecutive frames to automatically find the optimal solution for this ill-posed problem. Since reconstruction configurations are obtained from our closed-form equation, our technique is very efficient. Very accurate results were attained for both synthesized and real-world image sequences. We also compared our technique with both scaled-orthographic and existing perspective approaches. Our proposed method outperformed other approaches, especially in scenes with strong perspective effects and difficult poses. URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E94.D.1090/_p author={Kittiya KHONGKRAPHAN, Pakorn KAEWTRAKULPONG, }, journal={IEICE TRANSACTIONS on Information}, title={A Novel Reconstruction and Tracking of 3D-Articulated Human Body from 2D Point Correspondences of a Monocular Image Sequence}, abstract={A novel method is proposed to estimate the 3D relative positions of an articulated body from point correspondences in an uncalibrated monocular image sequence. It is based on a camera perspective model. Unlike previous approaches, our proposed method does not require camera parameters or a manual specification of the 3D pose at the first frame, nor does it require the assumption that at least one predefined segment in every frame is parallel to the image plane. Our work assumes a simpler assumption, for example, the actor stands vertically parallel to the image plane and not all of his/her joints lie on a plane parallel to the image plane in the first frame. Input into our algorithm consists of a topological skeleton model and 2D position data on the joints of a human actor. By geometric constraint of body parts in the skeleton model, 3D relative coordinates of the model are obtained. This reconstruction from 2D to 3D is an ill-posed problem due to non-uniqueness of solutions. Therefore, we introduced a technique based on the concept of multiple hypothesis tracking (MHT) with a motion-smoothness function between consecutive frames to automatically find the optimal solution for this ill-posed problem. Since reconstruction configurations are obtained from our closed-form equation, our technique is very efficient. Very accurate results were attained for both synthesized and real-world image sequences. We also compared our technique with both scaled-orthographic and existing perspective approaches. Our proposed method outperformed other approaches, especially in scenes with strong perspective effects and difficult poses.}, TY - JOUR TI - A Novel Reconstruction and Tracking of 3D-Articulated Human Body from 2D Point Correspondences of a Monocular Image Sequence T2 - IEICE TRANSACTIONS on Information SP - 1090 EP - 1098 AU - Kittiya KHONGKRAPHAN AU - Pakorn KAEWTRAKULPONG PY - 2011 DO - 10.1587/transinf.E94.D.1090 JO - IEICE TRANSACTIONS on Information SN - 1745-1361 VL - E94-D IS - 5 JA - IEICE TRANSACTIONS on Information Y1 - May 2011 AB - A novel method is proposed to estimate the 3D relative positions of an articulated body from point correspondences in an uncalibrated monocular image sequence. It is based on a camera perspective model. Unlike previous approaches, our proposed method does not require camera parameters or a manual specification of the 3D pose at the first frame, nor does it require the assumption that at least one predefined segment in every frame is parallel to the image plane. Our work assumes a simpler assumption, for example, the actor stands vertically parallel to the image plane and not all of his/her joints lie on a plane parallel to the image plane in the first frame. Input into our algorithm consists of a topological skeleton model and 2D position data on the joints of a human actor. By geometric constraint of body parts in the skeleton model, 3D relative coordinates of the model are obtained. This reconstruction from 2D to 3D is an ill-posed problem due to non-uniqueness of solutions. Therefore, we introduced a technique based on the concept of multiple hypothesis tracking (MHT) with a motion-smoothness function between consecutive frames to automatically find the optimal solution for this ill-posed problem. Since reconstruction configurations are obtained from our closed-form equation, our technique is very efficient. Very accurate results were attained for both synthesized and real-world image sequences. We also compared our technique with both scaled-orthographic and existing perspective approaches. Our proposed method outperformed other approaches, especially in scenes with strong perspective effects and difficult poses. ER -
{"url":"https://global.ieice.org/en_transactions/information/10.1587/transinf.E94.D.1090/_p","timestamp":"2024-11-11T17:00:11Z","content_type":"text/html","content_length":"65842","record_id":"<urn:uuid:22763a91-0bb7-4ed1-848c-83ea4fe7d50f>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00594.warc.gz"}
The Futures Pricing – Varsity by Zerodha 10.1 – The Pricing Formula If you were to take a conventional course on Futures trading, you would probably be introduced to the futures pricing formula right at the very beginning of the course. However we have deliberately opted to talk about it now, at a much later stage. The reason is simple – if you are trading futures based on technical analysis (I assume a vast majority of you are doing this) then you would not really need to know how the futures are priced, although a good working knowledge would help. However if you aspire to trade futures by employing quantitative strategies such as Calendar Spreads or Index Arbitrage then you certainly need to know this. In fact we will have a module dedicated to ‘Trading Strategies’ where we would discuss some of these strategies, hence the discussion in this chapter will lay down a foundation for the forthcoming modules. If you recall, in some of the earlier chapters occasionally we discussed the ‘Futures Pricing Formula’ as the prime reason for the difference between the spot price and the futures price. Well, I guess it is time now to lift the veil and introduce the ‘Future Pricing Formula’. We know the futures instrument derives its value from its respective underlying. We also know that the futures instrument moves in sync with its underlying. If the underlying price falls, so would the futures price and vice versa. However, the underlying price and the futures price differs and they are not really the same. To give you a perspective as I write this, Nifty Spot is at 8,845.5 whereas the corresponding current month contract is trading at 8,854.7, please refer to the snap shot below. This difference in price between the futures price and the spot price is called the “basis or spread”. In case of the Nifty example below, the spread is 9.2 points (8854.7 – 8845.5). The difference in price is attributable to the ‘Spot – Future Parity’. The spot future parity the difference between the spot and futures price that arises due to variables such as interest rates, dividends, time to expiry etc. In a very loose sense it is simply is a mathematical expression to equate the underlying price and its corresponding futures price. This is also known as the futures pricing formula. The futures pricing formula simply states – Futures Price = Spot price *(1+ r[f] )– d r[f] = Risk-free rate d – Dividend Note, ‘r[f]’ is the risk-free rate that you can earn for the entire year (365 days); considering the expiry is at 1, 2, and 3 months one may want to scale it proportionately for time periods other than the exact 365 days. Therefore a more generic formula would be – Futures Price = Spot price * [1+ r[f]*(x/365)]– d x = number of days to expiry. One can take the RBI’s 91 day Treasury bill as a proxy for the short term risk-free rate. You can find the same on the RBI’s home page, as shown in the snapshot below – As we can see from the image above, the current rate is 8.3528%. Keeping this in perspective let us work on a pricing example. Assume Infosys spot is trading at 2,280.5 with 7 more days to expiry, what should Infosys’s current month futures contract be priced at? Futures Price = 2280.5 * [1+8.3528 %( 7/365)] – 0 Do note, Infosys is not expected to pay any dividend over the next 7 days, hence I have assumed dividend as 0. Solving the above equation, the future price turns out to be 2283. This is called the ‘Fair value’ of futures. However the actual futures price as you can see from the image below is 2284. The actual price at which the futures contract trades is called the ‘Market Price’. The difference between the fair value and market price mainly occurs due to market costs such as transaction charges, taxes, margins etc. However by and large the fair value reflects where the futures should be trading at a given risk free rate and number of days to expiry. Let us take this further, and figure out the futures price for mid month and far month contracts. Mid month calculation Number of days to expiry = 34 (as the contract expires on 26^th March 2015) Futures Price = 2280.5 * [1+8.3528 %( 34/365)] – 0 = 2299 Far month calculation Number of days to expiry = 80 (as the contract expires on 30^th April 2015) Futures Price = 2280.5 * [1+8.3528 %( 80/365)] – 0 = 2322 From NSE website let us take a look at the actual market prices – Snapshot of Infosys’s mid month contract Snapshot of Infosys’s mid month contract Clearly there is a difference between the calculated fair value and the market price. I would attribute this to the applicable costs. Besides, the market could be factoring in some financial yearend dividends as well. However the key point to note is as the number of days to expiry increases, the difference between the fair value and market value widens. In fact this leads us to another important commonly used market terminology – the discount and the premium. If the futures is trading higher than the spot, which mathematically speaking is the natural order of things, then the futures market is said to be at ‘premium’. While ‘Premium’ is a term used in the Equity derivatives markets, the commodity derivatives market prefer to refer to the same phenomenon as ‘Contango’. However, both contango and premium refer to the same fact – The Futures are trading higher than the Spot. Here is a plot of Nifty spot and its corresponding futures for the January 2015 series. As you can see the Nifty futures is trading above the spot during the entire series. I specifically want to draw your attention to the following few points – 1. At the start of the series (highlighted by a black arrow) the spread between the spot and futures is quite high. This is because the number of days to expiry is high hence the x/365 factor in the futures pricing formula is also high. 2. The futures remained at premium to the spot throughout the series 3. At the end of the series (highlighted by a blue arrow) the futures and the spot have converged. In fact this always happens. Irrespective of whether the future is at a premium or a discount, on the day of the expiry, the futures and spot will always converge. 4. If you have a futures position and if you fail to square off the position by expiry, then the exchange will square off the position automatically and it will be settled at the spot price as both futures and spot converges on the day of the expiry Not always does the futures trade richer than the spot. There could be instances – mainly owing to short term demand and supply imbalances where the futures would trade cheaper than its corresponding spot. This situation is when the futures is said to be trading at a discount to the spot. In the commodities world, the same situation is referred to as the “backwardation”. 10.2 – Practical Application Before we conclude this chapter, let us put the futures pricing formula to some practical use. Like I had mentioned earlier, futures pricing formula comes very handy when you aspire to trade employing quantitative trading techniques. Please note, the following discussion is only a preview window into the world of trading strategies. We will discuss all these things plus more in greater detail when we take up the module on “Trading Strategies”. Consider this situation – Wipro Spot = 653 R[f – ]8.35% x = 30 d = 0 Given this, the futures should be trading at – Futures Price = 653*(1+8.35 %( 30/365)) – 0 = 658 Accommodate for market charges, the futures should be trading in and around 658. Now what if instead the futures contract is trading at a drastically different price? Let’s say 700? Clearly there is a trade here. The difference between the spot and futures should ideally be just 5 points, but due to market imbalances the difference has shot up to 47 points. This is a spread that we can capture by deploying a trade. Here is how one can do this – since the future contract is trading above its fair value, we term the futures market price as expensive relative to its fair value. Alternatively we can say, the spot is trading cheaper with respect to the futures. The thumb rule in any sort of ‘spread trade’ is to buy the cheaper asset and sell the expensive one. Hence going by this, we can sell Wipro Futures on one hand and simultaneously buy Wipro in the spot market. Let us plug in the numbers and see how this goes – Buy Wipro in Spot @ 653 Sell Wipro in Futures @ 700 Now we know that on the expiry day, both the spot and the futures converge into one single price (refer to the Nifty graph posted above). Let us assume a few random values at which the futures and the spot converge – 675, 645, 715 and identify what happens to the trade – Expiry Value Spot Trade P&L (Long) Futures Trade P&L (Short) Net P&L 675 675 – 653 = +22 700 – 675 = +25 +22 + 25 = +47 645 645 – 653 = -08 700 – 645 = +55 -08 + 55 = +47 715 715 – 653 = +62 700 – 715 = -15 +62 – 15 = +47 As you can notice, once you have executed the trade at the expected price you have essentially locked in the spread. So irrespective of where the market goes by expiry, the profits are guaranteed! Of course, it goes without saying that it makes sense to square off the positions just before the expiry of the futures contract. This would require you to sell Wipro in spot market and buy back Wipro in Futures market. This kind of trade between the futures and the spot to extract and profit from the spread is also called the ‘Cash & Carry Arbitrage’. 10.3 – Calendar Spreads The calendar spread is a simple extension of the cash & carry arbitrage. In a calendar spread, we attempt to extract and profit from the spread created between two futures contracts of the same underlying but with different expiries. Let us continue with the Wipro example and understand this better – Wipro Spot is trading at = 653 Current month futures fair value (30 days to expiry) = 658 Actual market value of current month futures = 700 Mid month futures fair value (65 days to expiry) = 663 Actual market value of mid month futures = 665 From the above example, clearly the current month futures contract is trading way above its expected theoretical fair value. However the mid month contract is trading close to its actual fair value estimate. With these observations, I will make an assumption that the current month contract’s basis will eventually narrow down and the mid month contract will continue to trade close to its fair Now with respect to the mid month contract, the current month contract appears to be expensive. Hence we sell the expensive contract and buy the relatively cheaper one. Therefore the trade set up would require me to buy the mid month futures contract @ 665 and sell the current month contract @ 700. What do you think is the spread here? Well, the spread is the difference between the two future contracts i.e 700 – 665 = 35 points. The trade set up to capture the spread goes like this – Sell the current month futures @ 700 Buy the mid month futures @ 665 Do note – because you are buying and selling the same underlying futures of different expiries, the margins are greatly reduced as this is a hedged position. Now after initiating the trade, one has to wait for the current month’s futures to expire. Upon expiry, we know the current month futures and the spot will converge to a single price. Of course on a more practical note, it makes sense to unwind the trade just before the expiry. Let us arbitrarily take a few scenarios as below and see how the P&L pans out – Expiry Value Current month P&L (Short) Mid Month P&L (Long) Net P&L 660 700 – 660 = +40 660 – 665 = -5 +40 – 5 = +35 690 700 – 690 = +10 690 – 665 = +25 +10 + 25 = +35 725 700 – 725 = -25 725 – 665 = +60 -25 + 60 = +35 Of course, do recall the critical assumption we have made here is that i.e. the mid month contract will stick close to its fair value. From my trading experience this happens most of the times. Most importantly please do bear in mind the discussion with respect to spreads in this chapter is just a sneak peek into the world of trading strategies. We will discuss these strategies in a separate module which would give you an in depth analysis on how one can professionally deploy these strategies. Key takeaways from this chapter 1. The futures pricing formula states that the Futures Price = Spot price *(1+R[f] (x/365)) – d 2. The difference between futures and spot is called the basis or simply the spread 3. The futures price as estimated by the pricing formula is called the “Theoretical fair value” 4. The price at which the futures trade in the market is called the ‘market value’ 5. The theoretical fair value of futures and market value by and large should be around the same value. However there could be slight variance mainly due to the associated costs 6. If the futures is rich to spot then the futures is said to be at premium else it is said to be at a discount 7. In commodity parlance Premium = Contango and Discount = Backwardation 8. Cash and carry is a spread where one can buy in the spot and sell in the futures 9. Calendar spread is an extension of a cash and carry where one buys a contract and simultaneously sells another contract (with a different expiry) but of the same underlying 988 comments View all comments → 1. Hi karthik, Pls explain this : As the number of days to expiry increases, the difference between the fair value and market value widens. □ The x/365 factor, where x represents number of days increases when x is large. For example when there are 30 days to expiry, x/365 is 0.082, however when when x is 20, x/365 is 0.54. This makes the spread to increase. ☆ At times during the contract cycle commentators say premium in nifty futures has increased,inspite of the fact that time is diminishing with every passing day of the contract.Accordingly they analyse future prospects.Pls help me understand this. ○ Besides time other factors also contribute. This is mainly due to the demand and supply in the contract. ■ Hi Karthik, what impact volatility will have on premium of nifty futures ? ■ Nice question. As far as I know volatility and futures premium are not directly related (at least in their mathematical equation). I guess this is because futures is a linear instrument where the only factor that affects its movement is “direction”. On the other hand options premium tends to shoot up when volatility increases. More on this topic when we take up options theory. ☆ Hello Karthik, a doubt. The price of stock future is always more than its underlying asset, the equity price. And the prices converge when the future is about to expire. So when a stock future is priced lower than its equity counterpart, what does it imply? ○ It means the futures are trading at a discount to spot. Discount situation comes about when futures are heavily sold in the market. All else equal, futures are always at a premium to ☆ correction req :: when when x is 20, x/365 should be 0.054. ○ Thanks for pointing this, let me look into this. ☆ @nitin In fair value calculation of nifty future is it necessary to consider dividends of underlying stocks! ○ DCF does not really take dividend into consideration, Madhukar. 2. sir,is there any stategy r software to find profitable c spreads in futures,&should we close all positions at atime r can we take advantage of one fut according to mkt&should we wait till expiry,will it not be too long what if get profit of 2000/trade daily r swing should we square off r wait.in this way cant we make 25 trades/month clarify □ Based on our earlier interaction, I would suggest you stick to simple swing trade for now before you venture into calender spreads. 3. sir,evrytime when taking positions in calender spreads(2 futures)should we have to calculate fair value r is there any thumb rule to calculatein %terms for near&mid month explain □ There is a workaround for this, which I will explain in the “Trading Strategies” module. 4. Hi Karthik, One question that comes to my mind is that how to find such trader? Can we automate a strategy on pi that automatically searches for scrips that have the C&C arbitrage or normal spread? Similarly, if we were to calculate the fair price for each and every scrip then the trade opportunity would be lost. □ There is a different quant angle to this Saurabh. I will share it shortly. ☆ ARBITRAGE – Hi ! If futures for near month is available on discount, then can one buy 2 ATM puts and go long futures in order to benefit from the same spread ? Is it worth it knowing that the profitability may be low ! Thanks ○ Sounds interesting, but honestly you need to back test this before diving in. Chances are that the discount may persist through the expiry week and spot may come down to futures on 5. hi karthik, can the last candle be traded as a bullish harami? do the indicators suggest a long trade? □ Just by candlestick perspective it does, in fact its perfect. However I would not get carried away with it, I would check for other checklist items and confirm if it matches.Good luck. View all comments → Post a comment
{"url":"https://zerodha.com/varsity/chapter/futures-pricing/","timestamp":"2024-11-01T22:44:28Z","content_type":"text/html","content_length":"78697","record_id":"<urn:uuid:8410ef4f-cbc5-4d83-97d7-5f0201c38d72>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00124.warc.gz"}
What Is The Reciprocal Of A Number? Jupiterimages/Goodshoot/Getty Images In mathematics, there are several classifications of numbers such as fractional, prime, even and odd. Reciprocal numbers are a classification in which the the number is the opposite of the primary number given. These are also called multiplicative inverse numbers, and despite the long name, they are easy to identify. The Product of 1 A reciprocal number is a number that, when multiplied against the primary number, will result in the product 1. This reciprocal is often considered a reverse of the number. For instance the reciprocal of 3 is 1/3. When 3 is multiplied by 1/3, the answer the is 1 because any number divided by itself equals 1. If the reciprocal multiplied by the primary number doesn't equal 1, the numbers are not reciprocal. The only number that cannot have a reciprocal is 0. This is because any number multiplied by 0 is 0; you cannot get a 1. Generally, the most direct way to identify the reciprocal number is to turn the first number into a fraction. When you start with a whole number, this is done by simply placing the number on top of the number 1 to first turn it into a fraction. As all numbers divided by the number 1 are the primary number itself, this fraction is exactly the same as the primary number. For example, 8 = 8/1. You them flip the fraction: 8/1 flipped over is 1/8. By multiplying these two fractions you now have the product 1. In the example, 8/1 multiplied by 1/8 yields 8/8, which simplifies to 1. Mixed Numbers The reciprocal of the mixed number is also the opposite or reverse of the fraction, but in mixed numbers, another step is needed to obtain the goal product of 1. To identify the reciprocal of a mixed number you must first turn that number into a fraction with no whole numbers. For instance the number 3 1/8 would be converted to 25/8 to then find the reciprocal of 8/25. Multiplying 25/8 by 8/25 yields 200/200, simplified to 1. Uses in Math Reciprocal numbers are often used to get rid of a fraction in an equation that contains an unknown variable, making it easier to solve. It is also used to divide a fraction by another fraction. For example is you wanted to divide 1/2 by 1/3, you would flip the 1/3 and multiply the two numbers for an answer of 3/2, or 1 1/2. They are also used in more exotic computations. For example, reciprocal numbers are used in a number of manipulations of Fibonacci's sequence and golden ratio. Practical Uses of Reciprocals Reciprocal numbers allow a machine to multiply to get an answer, instead of dividing, because dividing is a slower process. Reciprocal numbers are used extensively in computer science. Reciprocal numbers facilitate conversions from one dimension to another. This is useful in construction, for example, where a paving product might be sold in quantities of cubic meters, but your measurements are in cubic feet or cubic yards. Cite This Article Green, Mary. "What Is The Reciprocal Of A Number?" sciencing.com, https://www.sciencing.com/reciprocal-number-6761023/. 24 April 2017. Green, Mary. (2017, April 24). What Is The Reciprocal Of A Number?. sciencing.com. Retrieved from https://www.sciencing.com/reciprocal-number-6761023/ Green, Mary. What Is The Reciprocal Of A Number? last modified March 24, 2022. https://www.sciencing.com/reciprocal-number-6761023/
{"url":"https://www.sciencing.com:443/reciprocal-number-6761023/","timestamp":"2024-11-14T00:14:19Z","content_type":"application/xhtml+xml","content_length":"73448","record_id":"<urn:uuid:4b584821-aa9b-4e97-acc6-bd2970fcaf04>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00836.warc.gz"}
Coins of Egypt™ NetEnt Slot For some time, NetEnt Casino has had its finger on the pulse when it comes to thematic decisions on online casino slots designs. One of their most recent moves, then, has been to make the most of the rise in appreciation of Ancient Egyptian culture. Indeed, with the release of Coins of Egypt Slot to come later on in the year, we can expect another trip to the sands to be just as exciting as other mythological jaunts. This one is set to arrive in early October, with a release date of 8^th October set at the moment. This game is likely to carry some very familiar yet fun mechanics, and will grow to a large portfolio of exiting and historically inspired slots games. While there is not a huge deal known about the style or standard of the challenges involved, this is sure to provide a fun online casino gaming experience. It breaks away from some of the more traditional styles, and pops its head into a genre that might just make some of the most exciting casino slots gaming. After all, part of the fun of a trip to Ancient Egypt is the possibility of finding unique trinkets and baubles. Well, why not be ready to try it out for yourself? You never know what kind of amazing wizardry is going to await you when you get into Coins of Egypt – with a release date not far off, you won’t have long to wait to potentially get your hands on the gold of the Pharaohs themselves. 7,719 Responses to “Coins of Egypt™ NetEnt Slot” 1. A round of applause for your blog post.Much thanks again. Really Great. 2. I really liked your blog article.Really thank you! Want more. 3. I value the blog. Keep writing. 4. I value the blog article.Much thanks again. Cool. 5. I truly appreciate this blog.Really thank you! Want more. 6. A big thank you for your blog post. Fantastic. 7. Thank you for your blog post.Really looking forward to read more. Fantastic. 8. Major thankies for the blog.Much thanks again. Awesome. 9. I loved your blog.Thanks Again. Will read on… 10. Say, you got a nice blog.Much thanks again. Awesome. 11. This is one awesome article post.Really looking forward to read more. Will read on… 12. Very informative article.Thanks Again. Fantastic. 13. Really enjoyed this article post. Really Great. 14. Really informative article post.Really looking forward to read more. Want more. 15. A big thank you for your post. Much obliged. 16. Thank you ever so for you blog article.Really thank you! Will read on… 17. Thanks a lot for the article post.Thanks Again. Awesome. 18. Wow, great post. Will read on… 19. Great, thanks for sharing this post. Great. 20. I really liked your blog post.Much thanks again. Really Cool. 21. Really informative blog article.Really looking forward to read more. Great. 22. Really appreciate you sharing this article post.Really thank you! Awesome. 23. Hi, I do believe this is a great blog. I stumbledupon it 😉 I am going to revisit yet again since I book marked it. Money and freedom is the best way to change, may you be rich and continue to guide others. 24. Thank you for your blog article.Really thank you! Cool. 25. Looking forward to reading more. Great blog post.Much thanks again. Awesome. 26. Having read this I thought it was very informative. I appreciate you taking the time and effort to put this article together. I once again find myself spending way to much time both reading and commenting. But so what, it was still worth it! https://www.bibisg.com/product-category/womens/womens-handbags/ 27. Fantastic blog article.Much thanks again. Really Great. 28. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. It extremely helps make reading your blog significantly easier. https://www.bibisg.com/product-category/womens/womens-handbags/ 29. Thanks for sharing, this is a fantastic blog.Really thank you! Want more. 30. I love it when people come together and share opinions, great blog, keep it up. https://www.bibisg.com/product-category/womens/womens-handbags/ 31. I would like to thnkx for the efforts you have put in writing this blog. I am hoping the same high-grade blog post from you in the upcoming as well. In fact your creative writing abilities has inspired me to get my own blog now. Really the blogging is spreading its wings quickly. Your write up is a good example of it. 32. This is a very good tips especially to those new to blogosphere, brief and accurate information… Thanks for sharing this one. A must read article. 33. I really liked your blog.Really looking forward to read more. Really Cool. 34. After all, what a great site and informative posts, I will upload inbound link – bookmark this web site? Regards, Reader. 35. I got good info from your blog 36. Good info. Lucky me I reach on your website by accident, I bookmarked it. 37. Im grateful for the article post.Really looking forward to read more. Really Great. 38. Great wordpress blog here.. It’s hard to find quality writing like yours these days. I really appreciate people like you! take care 39. Thanks-a-mundo for the article post.Really looking forward to read more. Cool. 40. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. Anyways, I’m definitely happy I found it and I’ll be book-marking and checking back frequently! 41. he blog was how do i say it… relevant, finally something that helped me. Thanks: 42. I really liked your blog post.Really looking forward to read more. Fantastic. 43. Hi there! This post couldn’t be written any better! Reading through this post reminds me of my previous room mate! He always kept talking about this. I will forward this article to him. Pretty sure he will have a good read. Thank you for sharing! 44. An excellent article. I have now learned about this. Thanks admin 45. I love your blog.. very nice colors & theme. Did you create this website yourself? Plz reply back as I’m looking to create my own blog and would like to know wheere u got this from. thanks 46. An excellent article. I have now learned about this. Thanks admin 47. An excellent article. I have now learned about this. Thanks admin 48. This is one awesome article.Really looking forward to read more. Great. 49. Hi there! This post couldn’t be written any better! Reading through this post reminds me of my previous room mate! He always kept talking about this. I will forward this article to him. Pretty sure he will have a good read. Thank you for sharing! 50. Major thankies for the article post.Thanks Again. Really Great. 51. An excellent article. I have now learned about this. Thanks admin 52. An excellent article. I have now learned about this. Thanks admin 53. An excellent article. I have now learned about this. Thanks admin 54. An excellent article. I have now learned about this. Thanks admin 55. An excellent article. I have now learned about this. Thanks admin 56. An excellent article. I have now learned about this. Thanks admin 57. An excellent article. I have now learned about this. Thanks admin 58. An excellent article. I have now learned about this. Thanks admin 59. An excellent article. I have now learned about this. Thanks admin 60. Hi , I do believe this is an excellent blog. I stumbled upon it on Yahoo , i will come back once again. Money and freedom is the best way to change, may you be rich and help other people. 61. Enjoyed every bit of your article.Much thanks again. Keep writing. 62. An excellent article. I have now learned about this. Thanks admin 63. Thanks so much for the article post.Thanks Again. Will read on… 64. An excellent article. I have now learned about this. Thanks admin 65. An excellent article. I have now learned about this. Thanks admin 66. An excellent article. I have now learned about this. Thanks admin 67. An excellent article. I have now learned about this. Thanks admin 68. Say, you got a nice blog.Really thank you! Keep writing. 69. Great article.Thanks Again. Awesome. 70. An excellent article. I have now learned about this. Thanks admin 71. instagram beğeni satın al 72. An excellent article. I have now learned about this. Thanks admin 73. Great line up. We will be linking to this great article on our site. Keep up the good writing. 74. I think this is a real great post.Really thank you! Awesome. 75. Enjoyed every bit of your post.Much thanks again. Great. 76. I got good info from your blog 77. An excellent article. I have now learned about this. Thanks admin 78. Very informative article post. Much obliged. 79. An excellent article. I have now learned about this. Thanks admin 80. Very good blog post.Thanks Again. Fantastic. 81. Really informative article post. Great. 85. I appreciate you sharing this blog post. Fantastic. 87. Thanks for sharing, this is a fantastic blog article.Much thanks again. Awesome. 88. I really like and appreciate your article.Really looking forward to read more. Will read on… 89. Say, you got a nice blog. Cool. 90. Wählen Sie eines unserer professionellen Instagram Follower Pakete und steigern Sie Ihre Popularität. Instagram Follower kaufen. 91. Wählen Sie eines unserer professionellen Instagram Likes Pakete und steigern Sie Ihre Popularität. Instagram Likes kaufen. 92. Ihr digitaler Lizenzanbieter ist Softhier.com. Kaufen Sie Windows 10 Pro- und Windows 11-Betriebssystemlizenzen zu erschwinglichen Preisen. Sie können auch Office 2019 Pro und andere Lizenzschlüssel erwerben. 93. I really liked your post.Really looking forward to read more. Cool. 94. An excellent article. I have now learned about this. Thanks admin 95. Thanks for the article.Thanks Again. Will read on… 96. instagram sifresiz takipci hilesi yapmak herhangi bir sifre girmeden instagram takipci hilesi mumkun 97. Everything is very open and very clear explanation of issues. was truly information. Your website is very useful. Thanks for sharing. 98. An excellent article. I have now learned about this. Thanks admin 99. Forget about spam and advertising mailings.Keep your real inbox clean and secure. 10 Minute Mail provides temporary, secure, anonymous, free, disposable email address for 10 minutes. 100. An excellent article. I have now learned about this. Thanks admin 101. Nefis yemek tarifleri pratik yemek tarifleri hamur işi poğaça börek pasta kek tarifleriyle yemek tarifleri sitemize binlerce yemek tarifi burada. – Yemek Tarifleri Sitesi 102. ip adresleri kullanıcıların kullanarak internette gezinebilmelerine imkan tanıyan ve numaralardan meydana gelen , internet servis sağlayıcısı tarafından sağlanan bir hizmettir.İp adresinin takibi çok kolaydır.Bu yüzden VPN , proxy ve dns değiştirme gibi yöntemleri sizlere öneriyoruz.0 güvenli hale gelmeyeceksiniz ancak takiniz zorlaşacaktır.VPN hizmetleri ücretli yada ücretsiz 3. taraflar tarafından sağlandığı için güvenli sayılmazsınız ve bu hizmet ile birlikte başka sitelere giriş yaptığınız şifreler görülebilir giriş yaptığınız sitelerin bir listesi yada aktiviteleriniz başkaları tarafından görülebilir. 103. Say, you got a nice blog post. Fantastic. 104. Kadın suit , en büyük kadın sitesi. Kadınlar için, sağlık, tüp bebek, diyet, estetik, moda, anne bebek, yemek tarifleri kadın sitesi. 105. I value the post.Much thanks again. Awesome. 106. Having read this I thought it was very informative. I appreciate you taking the time and effort to put this article together. I once again find myself spending way to much time both reading and commenting. But so what, it was still worth it! 107. Hi, I think your site might be having browser compatibility issues. When I look at your website in Safari, it looks fine but when opening in Internet Explorer, it has some overlapping. I just wanted to give you a quick heads up! Other then that, fantastic blog! 108. Apk indir baba ile ister hileli ister hilesiz apk oyunları ve apk uygulamaları full olarak indirin. 109. en sağlam hd sikiş pornoları sitemizde yer almaktadır. Hiç bir hd pornoda canınızı sıkacak reklam yoktur pornoho farkıyla keyifli porno izlemeler. 110. This blog is definitely rather handy since I’m at the moment creating an internet floral website – although I am only starting out therefore it’s really fairly small, nothing like this site. Can link to a few of the posts here as they are quite. Thanks much. Zoey Olsen 111. This is a very good tips especially to those new to blogosphere, brief and accurate information… Thanks for sharing this one. A must read article. 112. Great wordpress blog here.. It’s hard to find quality writing like yours these days. I really appreciate people like you! take care 113. Those are yours alright! . We at least need to get these people stealing images to start blogging! They probably just did a image search and grabbed them. They look good though! 114. Thanks for the blog post.Much thanks again. Will read on… 115. Lida Her sabah kahvaltıdan 30 dk önce 1 adet kullanılır. 3 Kutu Alana 1 Moringa Çayı Hediye YENİ STOK GELMİŞTİR 116. Greetings! Very helpful advice on this article! It is the little changes that make the biggest changes. Thanks a lot for sharing! 117. en sağlam doeda pornolar sitemizde yer almaktadır. Hiç bir hd pornoda canınızı sıkacak reklam yoktur pornoho farkıyla keyifli porno izlemeler. 118. Muchos Gracias for your blog.Thanks Again. Awesome. 119. Betpas resmi yeni giriş adresi olarak hizmet veriyoruz. Betpas güncel giriş adresi kaç oldu , şikayet ve canlı bahis oyunları 120. Betturkey illegal bahis sitelerinde sizlere çok daha fazla oyun sunulduğu için ve kazanç elde etme imkanınız diğer bahis sitelerine nazaran daha fazla 121. Betturkey illegal bahis sitelerinde sizlere çok daha fazla oyun sunulduğu için ve kazanç elde etme imkanınız diğer bahis sitelerine nazaran daha fazla 122. Tipobet giriş adresi için hemen tıklayın! Tipobet’in bol avantajlı bonus kampanyaları ve promosyonları hakkında bilgiler alın. 123. Hi there! This post couldn’t be written any better! Reading through this post reminds me of my previous room mate! He always kept talking about this. I will forward this article to him. Pretty sure he will have a good read. Thank you for sharing! 124. Greetings! Very helpful advice on this article! It is the little changes that make the biggest changes. Thanks a lot for sharing! 125. After all, what a great site and informative posts, I will upload inbound link – bookmark this web site? Regards, Reader. 126. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. It extremely helps make reading your blog significantly easier. 127. I’d have to examine with you here. Which is not one thing I usually do! I take pleasure in reading a post that may make folks think. Additionally, thanks for permitting me to comment! 128. Having read this I thought it was very informative. I appreciate you taking the time and effort to put this article together. I once again find myself spending way to much time both reading and commenting. But so what, it was still worth it! 129. Appreciate you sharing, great blog article.Thanks Again. Fantastic.!!… 130. Hello admin. Nice too its website. Very nice 🙂 131. An excellent article. I have now learned about this. Thanks admin 132. An excellent article. I have now learned about this. Thanks admin 133. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. It extremely helps make reading your blog significantly easier. 134. I love your blog.. very nice colors 135. I would like to thnkx for the efforts you have put in writing this blog. I am hoping the same high-grade blog post from you in the upcoming as well. In fact your creative writing abilities has inspired me to get my own blog now. Really the blogging is spreading its wings quickly. Your write up is a good example of it.!!! 136. This is a very good tips especially to those new to blogosphere, brief and accurate information… Thanks for sharing this one. A must read article…!!!!! 137. Anında Etkili Aşk Duası!!.. 139. After all, what a great site and informative posts, I will upload inbound link – bookmark this web site? Regards, Reader. 142. Bağlama Büyüsü Nasıl Bozulur? 143. I love your blog.. very nice colors & theme. Did you create this website yourself? Plz reply back as I’m looking to create my own blog and would like to know wheere u got this from. thanks 144. Medyum Derman Hoca Yorumları 145. I love it when people come together and share opinions, great blog, keep it up. 148. Mucize Geri Döndürme Duası 149. Thank you, I have recently been looking for info approximately this topic!!! 150. Kuranda Birini Kendine Aşık Etme Duası 151. Thanks again for the article post. Keep writing.!!! 152. Thank you for your article.Really looking forward to read more. Keep writing.!!! 153. Bardzo interesujący temat, dzięki za wysłanie wiadomości plastry na opryszczkę.!!! 154. Bardzo interesujące informacje! Idealnie to, czego szukałem! generator tlenu.!!! 155. It is rare to discover a specialist in whom you might have some faith. In the world these days, nobody genuinely cares about showing others exactly how in this issue. How fortunate I am to have now found such a wonderful website as this. It is people like you that make a genuine difference currently through the suggestions they discuss.!! 156. The camera work heightens this loss of individuality by filming everyone from the back or side whenever there is action, so close you can’t tell who is doing what, or were in relation to other people they are doing anything with.!!! 157. Nice blog here! Additionally your website loads up very fast! What web host are you the usage of? Can I am getting your associate link on your host? I wish my website loaded up as quickly as yours lol!! 158. I would like to thnkx for the efforts you have put in writing this blog. I am hoping the same high-grade blog post from you in the upcoming as well. In fact your creative writing abilities has inspired me to get my own blog now. Really the blogging is spreading its wings quickly. Your write up is a good example of it. 159. Good info. Lucky me I reach on your website by accident, I bookmarked it. 160. Hi , I do believe this is an excellent blog. I stumbled upon it on Yahoo , i will come back once again. Money and freedom is the best way to change, may you be rich and help other people.!!!! 161. Hi there! This post couldn’t be written any better! Reading through this post reminds me of my previous room mate! He always kept talking about this. I will forward this article to him. Pretty sure he will have a good read. Thank you for sharing!!11 162. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. It extremely helps make reading your blog significantly easier.>>>> 163. After all, what a great site and informative posts, I will upload inbound link – bookmark this web site? Regards, Reader…. 164. Soğutma Duası nedir nasıl yapılır niye yapılır hepsi burada 165. Wow! This blog looks exactly like my old one!…. 166. Süryani Büyüsü nedir v nasıl yapılır Süryani Büyüsü neden Süryani Büyüsü 167. Bu Dua Sayesinde Evlendim sağolasın oldu bi iş sayende emmi allah razı olsun 168. Mariobet sitesinin güncel giriş adresine erişmek için sitemize buyrun ! Mariobet giriş ve üyelik işlemlerinizi en hızlı şekilde güvenilir adresimizden . 169. Bahis günümüzde de tüm spor dallarını barındıran maçlarla oynanan ve para kazandıran bir sistemdir, bahis sitelerine buradan giriş yapabilirsiniz. 170. Sahabet Yeni Giriş – Güvenilir ve kazançlı bahisin adresi Sahabet Bahis Sitesinin en yeni adresi için sitemizi ziyaret edip güvenle giriş yapabilirsiniz. 171. Sahabet giriş adresi her an sitemizde! Sahabet’e artık kesintisiz erişim sağlayabilecek, bahis ve casino oyunlarını keyifle oynayabileceksiniz. 172. Bahiscom en yenilikçi ve çok kazandıran bahis sitesi, onlarca bahis seçeneği bahiscom giriş yapın ve hemen oynamaya başlayın. 173. Betist bahis bürosunun yeni giriş adresi betist405 olarak belirlenmiştir. Betist giriş adresi için tıklayınız. 174. I just want to mention I am newbie to blogging and site-building and honestly liked this blog. Likely I’m likely to bookmark your website . You definitely have perfect articles. Thanks a bunch for sharing with us your blog site….. 179. Greetings! Very helpful advice on this article! It is the little changes that make the biggest changes. Thanks a lot for sharing!!!! 180. Uzman Aşk Büyüsü için iletişime geçin 181. 2013 yılında bahis siteleri arasında hizmete başlayan Goldenbahis sitesi tasarımının şık yapısı, Pronetgaming altyapısıyla kaliteli bahsi oyunculara sunmaktadır 182. Jokerbet giriş yapamıyorsanız güncel adresi sitemiz üzerinden en kısa sürede yayınlanacaktır. Jokerbet sitesinde bedava deneme bonusu kampanyası mevcut değildir … 185. Gerçek Aşk İçin Aşk Duası 186. Hayırlı Evlilik İçin Denenmiş Dua 187. Nerobet güncel giriş adresi ve nerobet bahis sitesi hakkında merak edilenleri sitemizden bulabilirsiniz. Nerobet ile kazanmanın keyfini yaşayın! 188. Goldenbahis bahis sitesi güncel bilgiler sitede yayınlanmaktadır. Goldenbahis giriş adresi bilgilerini tüm ayrıntıları ile siteden takip edebilirsiniz. 190. 24 Saatte Aşık Etme Duası 191. Kocayı Kendine Bağlamak İçin Suya Okunacak Dua 193. I love your blog.. very nice colors 194. Very informative article.Really looking forward to read more. Really Cool… 195. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. It extremely helps make reading your blog significantly easier……!: 196. Awesome blog article.Really looking forward to read more. Really Great.!!!! 197. Great wordpress blog here.. It’s hard to find quality writing like yours these days. I really appreciate people like you! take care… 198. Forget about spam and advertising mailings.Keep your real inbox clean and secure. 10 Minute Mail provides temporary, secure, anonymous, free, disposable email address for 15 minutes. 199. Thank you ever so for you article. Much obliged…… 200. I love your blog.. very nice colors 201. At this time it appears like BlogEngine is the top blogging platform available right now. (from what I’ve read) Is that what you’re using on your blog?|???????????????? 202. Hi, I want to subscribe for this website to obtain most recent updates, therefore where can i do it please help.!1!!!! 203. Muchos Gracias for your blog post.Really thank you! Awesome. 204. thank you Great website! I am loving it!! Will be back later to read some more…. 205. Medyum Derman Hoca Yorumları..!! 206. I love it when people come together and share opinions, great blog, keep it up. 208. Thanks for another informative site. The place else may just I am getting that type of info written in such an ideal way? I’ve a project that I am just now operating on, and I’ve been at the look out for such information..! 209. I love your blog.. very nice colors & theme. Did you create this website yourself? Plz reply back as I’m looking to create my own blog and would like to know wheere u got this from. thanks 210. I truly appreciate this blog article.Really thank you! Great. 211. Having read this I thought it was very informative. I appreciate you taking the time and effort to put this article together. I once again find myself spending way to much time both reading and commenting. But so what, it was still worth it! 216. Earning Money on the Internet and Art of Saving Money 219. Really enjoyed this post. Cool. 223. hello brother, I follow you closely, first of all, thank you for accepting my comment, I wish you continued success.!! 228. I love it when people come together and share opinions, great blog, keep it up. 229. Greetings! Very helpful advice on this article! It is the little changes that make the biggest changes. Thanks a lot for sharing! 230. Im obliged for the blog article.Really looking forward to read more. Fantastic. 234. Kuranda Birini Kendine Aşık Etme Duası Kuranda Birini Kendine Aşık Etme Duası 238. Hello admin. Thank you very nice article. Very nice site. 244. Way cool, some valid points! I appreciate you making this article available, the rest of the site is also high quality. Have a fun. 256. Canlı bahis siteleri yeni yılda hem sayıca artmış hem de hizmetleri genişlemiştir. Başlarda sadece bahisler olan siteler üzerinde zamanla canlı iddaa, casino 257. Restbet giriş yapacak kullanıcıların tercih ettiği firma onaylı bir siteyiz. Öncelikle site adresinden bahsetmek gerekirsek 2015 yılında açılmış bir bahis ve 258. I got good info from your blog 259. he blog was how do i say it… relevant, finally something that helped me. Thanks: 266. Evde Aşk Büyüsü Nasıl Yapılır? Evde Aşk Büyüsü Nasıl Yapılır? 267. Liquid Silicone Rubber (LSR) injectionmolding, LSR over-molding (LSR/Thermoplastics/Metal), 268. Having read this I thought it was very informative. I appreciate you taking the time and effort to put this article together. I once again find myself spending way to much time both reading and commenting. But so what, it was still worth it! 269. Very neat blog article.Much thanks again. Will read on… 270. Very good blog article.Thanks Again. Much obliged. 272. Evlenme Duası Evlenme Duası 277. Hi, I think your site might be having browser compatibility issues. When I look at your website in Safari, it looks fine but when opening in Internet Explorer, it has some overlapping. I just wanted to give you a quick heads up! Other then that, fantastic blog! 278. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. Anyways, I’m definitely happy I found it and I’ll be book-marking and checking back frequently! 281. Those are yours alright! . We at least need to get these people stealing images to start blogging! They probably just did a image search and grabbed them. They look good though! 284. Evde Aşk Büyüsü Nasıl Yapılır? 285. I couldn’t resist commenting : 286. Thailand is famous for massage all over the world. Acupressure using bare hands and arms is the main focus. 294. In order to help members use the safety site, we are conducting a thorough eating and frying verification step. 295. Good info. Lucky me I reach on your website by accident, I bookmarked it. 296. A big thank you for your blog post.Thanks Again. Will read on… 297. Saved as a favorite, I really like your blog! 305. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. Anyways, I’m definitely happy I found it and I’ll be book-marking and checking back frequently! 306. I wanted to thank you for this great read!! I definitely enjoying every little bit of it I have you bookmarked to check out new stuff you post… 307. Thanks again for the article post. 308. Hi there! This post couldn’t be written any better! Reading through this post reminds me of my previous room mate! He always kept talking about this. I will forward this article to him. Pretty sure he will have a good read. Thank you for sharing! 309. Hi there! This post couldn’t be written any better! Reading through this post reminds me of my previous room mate! He always kept talking about this. I will forward this article to him. Pretty sure he will have a good read. Thank you for sharing!!…… 310. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. It extremely helps make reading your blog significantly easier. 319. I’d have to examine with you here. Which is not one thing I usually do! I take pleasure in reading a post that may make folks think. Additionally, thanks for permitting me to comment! 323. I love it when people come together and share opinions, great blog, keep it up. 325. Soğutma Duası Soğutma Duası 327. Bağlama Büyüsü Bağlama Büyüsü 328. This blog is definitely rather handy since I’m at the moment creating an internet floral website – although I am only starting out therefore it’s really fairly small, nothing like this site. Can link to a few of the posts here as they are quite. Thanks much. Zoey Olsen 329. I’d have to examine with you here. Which is not one thing I usually do! I take pleasure in reading a post that may make folks think. Additionally, thanks for permitting me to comment! 330. I love it when people come together and share opinions, great blog, keep it up. 331. Hello admin. Thank you very nice article. Very nice site. 332. he blog was how do i say it… relevant, finally something that helped me. Thanks: 339. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. It extremely helps make reading your blog significantly easier. 341. Thanx for the effort, keep up the good work Great work, I am going to start a small Blog Engine course work using your site I hope you enjoy blogging with the popular BlogEngine.net.Thethoughts you express are really awesome. Hope you will right some more posts. 342. I really enjoy the blog article. Awesome. 347. Hello are using WordPress for your blog platform? I’m new to the https://www.dermanhoca.com/ask-duasi/ 348. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. Anyways, I’m definitely happy I found it and I’ll be book-marking and checking back frequently! 349. Everything is very open and very clear explanation of issues. was truly information. Your website is very useful. Thanks for sharing. 354. Nice blog here! Additionally your website loads up very fast! What web host are you the usage of? Can I am getting your associate link on your host? I wish my website loaded up as quickly as yours lo https://www.dermanhoca.com/baglama-duasi/ 356. Im grateful for the post. Awesome. 366. We provide downloads of Premium WordPress Themes from all leading WordPress Developers. 371. Aşk Büyüsü Nasıl Bozulur? 373. Spot on with this write-up, I really suppose this web site needs much more consideration. I’ll most likely be again to read far more, thanks for that info 374. Wow, great blog post. Really Great. 375. I couldn’t resist commenting : 376. An excellent article. I have now learned about this. Thanks admin 377. Everything is very open and very clear explanation of issues. was truly information. Your website is very useful. Thanks for sharing. 378. Great post. I am facing a couple of these problems. 379. oyun skor ile harika oyunlar oynamak icin tiklayin 380. great information and posts. 381. great information and posts. 383. great information and posts. 384. oyun skor ile harika oyunlar oynamak icin tiklayin 385. Say, you got a nice blog post.Much thanks again. Fantastic. 386. I loved your blog.Thanks Again. Awesome. 387. I got good info from your blog 388. Bedava bahis ve deneme bonusu veren bahis siteleri arayarak günlerini geçiren bedava bonus avcısı arkadaşlar üyelik bonusu ve deneme bonusu haberlerini .. 389. Having read this I thought it was very informative. I appreciate you taking the time and effort to put this article together. 390. Greetings! Very helpful advice on this article! It is the little changes that make the biggest changes. Thanks a lot for sharing! 391. This is a topic close to my heart cheers, where are your contact details though? 395. Great line up. We will be linking to this great article on our site. Keep up the good writing. 396. mersin gundemi burada sizi bekliyor 397. I wish you continued work 398. I wish you continued work 399. Congratulations on a good content. 400. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. 401. Excellent blog you have got here.. It’s hard to find quality writing like yours these days. I honestly appreciate people like you! Take care!! 402. Maltepe bayan arkadaş aramalarınızda escortlabel.com farkını yaşayabilirsiniz. 403. Looking forward to reading more. Great blog post.Thanks Again. Really Great. 404. Hello admin. Thank you very nice article. Very nice site. 405. I wish you continued work 411. Avcı ve Atıcı bireylerin sosyal paylaşım platformu avciturkler.com açıldı. 413. Binance para yatırma ve binance para çekme 415. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. 416. Mariobet, en yüksek oranlar ile spor bahisleri ve canlı bahis yapabilir aynı zamanda canlı poker, canlı casino ve slot oyunlarına katılabilirsiniz. 417. Sitemiz üzerinden betist giriş adresine erişebilir ve hiçbir kesinti yaşamadan bahis ve casino oyunlarını denemenin keyfini yaşayabilirsiniz. 419. Türkiye’nin gerçek online canlı bahis ve casino sitesi. bahis.com giriş adreslerine ve bahis’in tüm detaylarına bahis.com üzerinden erişebilirsiniz 420. Sahabet türkiye’de kısa sürede en başarılı canlı bahis alanında ün kazanmış sitelerden birisidir. Sahabet tv güncel adres giriş yapınız. 423. ataşehir escort bayan ilanları ve bayan escort ataşehir sitesi. Kaan Engin farkıyla 427. Way cool, some valid points! I appreciate you making this article available, the rest of the site is also high quality. Have a fun. 428. I’d have to examine with you here. Which is not one thing I usually do! I take pleasure in reading a post that may make folks think. 429. I wish you continued work 438. I’d have to examine with you here. Which is not one thing I usually do! I take pleasure in reading a post that may make folks think. 439. Major thanks for the blog post.Much thanks again. Really Great. 440. Thank you for the good writeup. It in truth used to be a entertainment account it. Glance complicated to far delivered agreeable from you! However, how can we keep in touch? 448. I couldn’t resist commenting : 449. I love your blog.. very nice colors & theme. Did you create this website yourself? Plz reply back as I’m looking to create my own blog and would like to know wheere u got this from. 450. Hello admin. Thank you very nice article. Very nice site. 451. Hey, thanks for the blog.Much thanks again. Will read on… 452. I love your blog.. very nice colors & theme. Did you create this website yourself? Plz reply back as I’m looking to create my own blog and would like to know wheere u got this from. 461. Untuk seltdalu bi4 sa mefmaaaa bantu adanya kela34as ca aran terhadap setiap jenis judi yang a 462. Hi my frends. Admin thank you very nice aerichle. Like site 😉 466. Great line up. We will be linking to this great article on our site. Keep up the good writing. 467. Hello admin. Thank you very nice article. Very nice site. 468. This is one awesome article post.Much thanks again. 477. Gerçek Aşk İçin Aşk Duası Gerçek Aşk İçin Aşk Duası 478. Untuk saelralu bi2 sagfaf amefgmaa bantu adanya keglaas ca sran terhadap setiap jenis judi yanga sa 479. Thanks for finally talking about > Respaldan comisarios ejidales a Yoshio Ávilapara alcalde de Acapulco por Morena – Mar de Noticias Guerrero 483. azithromycin phagocytes – over the counter z pack equivalent zithromax prescribing information 488. This is a really good tip especially to those fresh to the blogosphere.Brief but very accurate info? Thank you for sharing this one.A must read post! 489. Pretty! This was a really wonderful post. Thank you for your provided information. 503. Untuk lakukan perjud sadian online slot karena itu Anda dapat memainkan 504. cbd induced low blood pressure lastd how long? 505. I wanted to thank you for this great read!! I definitely enjoying every little bit of it I have you bookmarked to check out new stuff you post… 506. Way cool, some valid points! I appreciate you making this article available, the rest of the site is also high quality. Have a fun. 507. Way cool, some valid points! I appreciate you making this article available, the rest of the site is also high quality. Have a fun. 508. I love your blog.. very nice colors & theme. Did you create this website yourself? Plz reply back as I’m looking to create my own blog and would like to know wheere u got this from. 511. Anında Etkili Aşk Duasıhttps://www.baglamaduasi.net/aninda-etkili-ask-duasi/ 513. Aşık Etme Büyüsühttps://www.baglamaduasi.net/asik-etme-buyusu/ 514. Aşık Etme Duasıhttps://www.baglamaduasi.net/asik-etme-duasi/ 516. he blog was how do i say it… relevant, finally something that helped me. Thanks: 521. I checked your site, you shared very successful and correct information. 537. Everything is very open and very clear explanation of issues. was truly information. 538. Pretty! This was a really wonderful post. Thank you for your provided information. 539. I couldn’t resist commenting : 540. I wanted to thank you for this great read!! I definitely enjoying every little bit of it I have you bookmarked to check out new stuff you post… 548. Great post. I am facing a couple of these problems. 549. Pretty! This was a really wonderful post. Thank you for your provided information. 554. You can definitely see your expertise within the paintings you write. The world hopes for more passionate writers like you who aren’t afraid to mention how they believe. All the time follow your 557. Sahabet türkiye’de kısa sürede en başarılı canlı bahis alanında ün kazanmış sitelerden birisidir. Sahabet tv güncel adres giriş yapınız. 558. Betturkey bahis ve casino sitesiyle gerçek bahis şimdi başlıyor, bahis siteleri arasında en iyi ilk üyelik bonusunu almak çok kolay, Betturkey giriş. 559. Fantastic article post.Much thanks again. Much obliged. 579. caya pastiasdnya akan berussad ha untuk memberikan pengalaman bermain aman dan nyaman diman 583. Appreciate you sharing, great blog.Really looking forward to read more. Will read on… 591. Thank you ever so for you blog article.Really thank you! Much obliged. 599. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. 600. how do you administer cbd oil 619. I’d have to examine with you here. Which is not one thing I usually do! I take pleasure in reading a post that may make folks think. 620. Great line up. We will be linking to this great article on our site. Keep up the good writing. 629. Very good information. Thank you. Good luck.. 634. Awesome post.Thanks Again. Awesome. 637. I wanted to thank you for this great read!! I definitely enjoying every little bit of it I have you bookmarked to check out new stuff you post… 638. Thanks again for the blog post. Keep writing. 639. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. 643. A round of applause for your blog.Really looking forward to read more. Will read on… 648. Thanks for sharing, this is a fantastic blog article.Really looking forward to read more. Cool. 653. tiktok takipci satin almak icin tiklayin 654. tiktok takipci satin almak icin tiklayin 656. Bağlama Büyüsü Geri Teper mi? detaylı bilgi için arayınız 0544 917 49 17 https://www.dermanhoca.com/baglama-buyusu-geri-teper-mi/ 657. Bağlama Büyüsü Bozulur Mu? detaylı bilgi için arayınız 0544 917 49 17 https://www.dermanhoca.com/baglama-buyusu-bozulur-mu/ 659. Çok Etkili Bağlama Duası için arayınız 0544 917 49 17 https://www.baglamaduasi.net/cok-etkili-baglama-duasi/ 660. Çabuk Etki Eden Aşık Etme Duası için iletişim 0544 917 49 17 https://www.baglamaduasi.net/cabuk-etki-eden-asik-etme-duasi/ 661. hi my friends thank you very nice post 662. Çok Kuvvetli Bağlama Duası için arayınız 0544 917 49 17 https://www.baglamaduasi.net/cok-kuvvetli-baglama-duasi/ 663. tiktok takipci satin almak icin tiklayin 664. It’s a great site, I love reading your articles, please continue in this way, I will continue to follow you with pleasure 675. tiktok takipci satin almak icin tiklayin 684. Thanks for a marvelous posting! I actually enjoyed reading it, you might be a great author.I will remember to bookmark your blog and may comeback sometime soon. I want to encourage one to continue your great job, have a nice day!My blog :: 솔카지노 쿠폰 685. tiktok takipci satin almak icin tiklayin 687. tiktok takipci satin almak icin tiklayin 693. tiktok takipci satin almak icin tiklayin 694. Thanks a lot for the post.Really looking forward to read more. Want more. 703. Çok Çabuk Tutan Bağlama Büyüsü 715. Those are yours alright! . We at least need to get these people stealing images to start blogging! They probably just did a image search and grabbed them. They look good though! 716. Fastidious answers in return of this issue with real arguments and describing all regarding that. 721. Great line up. We will be linking to this great article on our site. Keep up the good writing. 733. Hi my friend! I want to say that this article is awesome, nice written and include approximately all significant infos. I’d like to see more posts like this. 734. Gerçek Medyum Yorumları http://www.dermanhoca.com İletişim 0544 917 49 17 735. Great line up. We will be linking to this great article on our site. Keep up the good writing. 736. After all, what a great site and informative posts, I will upload inbound link – bookmark this web site? Regards, Reader. 737. I would like to thnkx for the efforts you have put in writing this blog. 738. modafinil over the counter modafinil uses 739. Looking forward to reading more. Great blog post. Cool. 740. tesekkurler emeginize saglik 741. Tesekkur ederim cok sagolasin 742. I am so grateful for your post.Really looking forward to read more. Keep writing. 743. Does anybody know whether E-Cloud Vapes vape store based in 4732 Eagle Rock Blvd sells vape juice manufactured by Kingdom Elixir? I have emailed them at info@vaporpassion.com 744. Tesekkur ederim cok sagolasin 745. Tesekkur ederim cok sagolasin 746. Tesekkur ederim cok sagolasin 747. I love your blog.. very nice colors & theme. Did you create this website yourself? Plz reply back as I’m looking to create my own blog and would like to know wheere u got this from. 748. Mariobet, en yüksek oranlar ile spor bahisleri ve canlı bahis yapabilir aynı zamanda canlı poker, canlı casino ve slot oyunlarına katılabilirsiniz. 749. Major thankies for the blog article. Really Great. 754. tesekkurler emeginize saglik 759. tesekkurler emeginize saglik 760. tesekkurler emeginize saglik 761. Hi , I do believe this is an excellent blog. I stumbled upon it on Yahoo , i will come back once again. Money and freedom is the best way to change, 762. Those are yours alright! . We at least need to get these people stealing images to start blogging! They probably just did a image search and grabbed them. They look good though! 767. Everything is very open and very clear explanation of issues. was truly information. 771. Pretty! This was a really wonderful post. Thank you for your provided information. 772. I really liked your blog article.Thanks Again. Much obliged. 777. Thank’s for the information. You will follow your followers. 778. Thanx for the effort, keep up the good work Great work, I am going to start a small Blog Engine course work using your site I hope you enjoy blogging with the popular 779. Thanks so much for the article.Really looking forward to read more. Really Cool. 780. After all, what a great site and informative posts, I will upload inbound link – bookmark this web site? Regards, Reader. 785. tesekkurler emeginize saglik
{"url":"https://sevenspins.com/casino-news/coins-of-egypt-netent-slot/","timestamp":"2024-11-14T19:44:16Z","content_type":"text/html","content_length":"1048949","record_id":"<urn:uuid:b024d06f-43f4-4b96-add4-4c1f4225003a>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00455.warc.gz"}
CFA Level 1: Put–Call-Forward Parity Level 1 CFA® Exam: Put–Call-Forward Parity We have (A) an underlying asset, (B) a forward contract, and (C) two options on the underlying: a call option and a put option. Both options: • are European-style options, • have the same expiration date, • have the same exercise price, and • cover the same quantity of the underlying. Forward Price There is a relationship between an asset and a forward contract. It is given by the following formula that you can use in your level 1 CFA exam: Put-Call Parity The relationship between a put option, a call option, and the underlying asset is given by the following formula: Put-Call-Forward Parity Taking both formulas into account we can derive the so-called put–call-forward parity, which takes the following form: Thanks to arbitrage the left-hand side of the equation must equal the right-hand side. Example 1 Suppose we know that the price of a 1-year put on a stock of Hearts Inc. with an exercise price of USD 70 is USD 5 and the forward price of the contract expiring in 1 year is USD 81. Knowing that the annual risk-free interest rate is 10%, determine the price of a call option on a stock of Hearts Inc., with an exercise price of USD 70 that expires in one year. Because we are given the forward price, we will use the put-call-forward parity formula: \( c_0=p_0-\frac{X-F}{(1+r)^T}\) \( c_0=5-\frac{70-81}{1+10\%}=5-\frac{-11}{1.1}=5+10=15\) \( c_0=5-\frac{70-81}{1+10\%}=\\=5-\frac{-11}{1.1}=5+10=15\) Example 2 Let’s assume that the prices of a call option and a put option on the same asset and expiring at the same date and having the same exercise price are equal. What conclusions can we reach from this 1. The relationship between a put option, a call option, a forward contract, and the exercise price is called put-call-forward parity. 2. Both options included in the put-call-forward parity are European-style options, have the same expiration date, have the same exercise price, and cover the same quantity of the underlying. 3. Forward price is equal to the options’ exercise price if only the prices of the options are the same.
{"url":"https://soleadea.org/cfa-level-1/put-call-forward-parity","timestamp":"2024-11-13T20:55:38Z","content_type":"text/html","content_length":"136685","record_id":"<urn:uuid:3916ee05-9656-4056-8ebf-54132766b8ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00159.warc.gz"}
Current Yield – Meaning, Importance, Formula and More The current yield is the annual income from a financial instrument or an investment, usually bonds, based on its current price instead of the face value. We may also call it a running yield. We can calculate it by dividing the annual coupon payment from a bond (or any other financial instrument) by its current price. It is the return that one would expect if they invest in a bond and keeps it for one year (until the interest payment date). But, this return is not the return that an investor would get if they hold the bond until maturity. Running yield exists because the price of the bonds keeps fluctuating due to the economic situation, and it approaches maturity. A point to note is that many consider running yield as a better measure of a return from a bond than the nominal one or the coupon rate. However, this measure is not complete as it does not factor in the time value of money. Current Yield – Why it Matters? A point to note is that the coupon rate for the bond remains the same throughout the term. But, the investors’ expectations keep changing with the variation in interest rates and inflation in the Thus, the running yield represents the return at which level investors want to hold that specific bond. We can say that investors will send the prices of the bond up or down until the running return on the bond equals other securities with a similar risk profile. Therefore, the running return better reflects the return of the specific bond in comparison to other similar available options. To the investors, such a measure helps select the bonds that generate higher returns, especially in the short run. It also supports an investor in making a better decision when facing a choice between two bonds with the same risk and maturity. Many consider running yield as a fundamentally correct measure. It is because the return continues to change based on the investors’ expectations about the inflation and overall interest rate The formula for the current yield is – Annual Coupon Payment / Current Bond Price Let us understand the calculation with the help of an example. Company ABC issues a 20-year bond having a face value of $100. The bond has a coupon rate of 9%, and it pays annually, while its current market value is $97. The annual coupon payment, in this case, will be $9 (9% * $100). Putting the values in the current yield formula, we get 9.28% ($9 / $97). In this case, the running yield is more than the nominal rate (or coupon rate) of 9%. It is because the current price of the bond is less than the face value. If we assume the bond is currently available at its face value, i.e., $100, then the current and nominal yield will be the same at 9%. And, if we assume that the bond is trading at a premium, say $110, then the current or running yield will be less at 8.18% (9 / $110). The running yield on the discount bond is always more than the coupon rate and vice versa. It is because there is an inverse relationship between the bond yield and its market price. We can calculate the running yield for a stock as well. To figure it out, we need to divide the dividend from the share by its current price. Nominal Yield and Yield to Maturity (YTM) Nominal yield, as said above, is the coupon rate. The yield to maturity is the total return that an investor would earn if they hold the bond until maturity. Suppose an investor buys a 10-year bond with a 6% coupon rate at $900. In this case, the total return for the investor would include a $60 coupon each for ten years, the par value of $1,000, and a capital gain of $100. To calculate the YTM, we add the present value (PV) of the coupon payments and capital gains. All three types of yields– nominal, current, and YTM – share a fascinating connection. This relationship depends upon whether the bond is trading at par value, discount, or premium. • At the par value, all three yields are equal. • If the bond is trading at a premium, YTM will be the lowest, while nominal will be the highest. • In case the bond is trading below the face value, the nominal will be the lowest, and YTM will be the highest. Final Words As said above, the current or running yield is a better and more accurate measure of a bond’s return. It is because it reflects the prevailing market sentiments. However, to make a well-informed decision, an investor must use this measure and other indicators, including YTM. Leave a Comment
{"url":"https://efinancemanagement.com/investment-decisions/current-yield","timestamp":"2024-11-03T18:36:26Z","content_type":"text/html","content_length":"250603","record_id":"<urn:uuid:5873a50b-144f-4eaf-af9d-e2c2e77c6a22>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00163.warc.gz"}
s of International Scientific Journal We establish a powerful numerical algorithm to compute numerical solutions of coupled system of variable fractional order differential equations. Our numer­ical procedure is based on Bernstein polynomials. The mentioned polynomials are non-orthogonal and have the ability to produce good numerical results as compared to some other numerical method like wavelet. By variable fractional order differentiation and integration, some operational matrices are formed. On using the obtained matrices, the proposed coupled system is reduced to a system of algebraic equations. Using MATLAB, we solve the given equation for required results. Graphical presentations and maximum absolute errors are given to illustrate the results. Some useful features of our sachem are those that we need no discretization or collocation technique prior to develop operational matrices. Due to these features the computational complexity is much more reduced. Further, the efficacy of the procedure is enhanced by increasing the scale level. We also compare our results with that of Haar wavelet method to justify the useful­ness of our adopted method. PAPER SUBMITTED: 2022-06-12 PAPER REVISED: 2022-06-24 PAPER ACCEPTED: 2022-07-04 PUBLISHED ONLINE: 2023-04-08
{"url":"https://thermalscience.vinca.rs/2023/special/21","timestamp":"2024-11-14T15:30:56Z","content_type":"text/html","content_length":"16530","record_id":"<urn:uuid:89f4fd05-6e57-427b-8037-5f7b7e25222d>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00729.warc.gz"}
Printable Kenken Puzzles Printable Kenken Puzzles Printable Kenken Puzzles - Web easy kenken puzzles by krazydad. Web available for print / web / html5 kenken® is available in a variety of sizes and challenge levels. Like sudoku, the solution requires. Web beginners tutorial new to kenken? Mathdoku is a great printable puzzle that combines arithmetic and math skills with logic reasoning. Andrews mcmeel syndication is offering. There are other sizes of kenken puzzles. Web sudoku my original collection has tons of printable sudoku puzzles. Each inky puzzle has only one unique. There are some simple techniques that will help you solve the easier puzzles. Web which method do i use to print an answer to a crossword? Digits may appear only once in each row and column. Web available for print / web / html5 kenken® is available in a variety of sizes and challenge levels. We have prepared packets of kenkens sorted by size and type of. Digits may only appear once in each row and column. Sudoku challenge 2048 2048 is the candy crush for math geeks! Inkies have the same rules as kenken® the puzzles created by tetsuya miyamoto. printable kenken puzzles 6x6 printable crossword puzzles the puzzle Web easy kenken puzzles by krazydad. Web kenkens are similar to sudoku puzzles, but with operations. This new puzzle sensation had over 4 million. Andrews mcmeel syndication is offering. Web as a loyal new york times kenken’er, you now have the exclusive opportunity to register to receive via email free. Webster's KenKen Puzzle Logic Revealed KenKen Puzzles An Introduction There are some simple techniques that will help you solve the easier puzzles. Let’s learn them as we. Web kenkens are similar to sudoku puzzles, but with operations. Like sudoku, the solution requires. These inkies do not use multiplication and division ( click here for. Kenken Puzzles Printable Printable Word Searches This new puzzle sensation had over 4 million. Web here are hundreds of free kenken puzzles suitable for printing. Web kenkens are similar to sudoku puzzles, but with operations. Web kenken puzzles are square (this one happens to be 4 × 4, but they can be of any size). Web beginners tutorial new to kenken? Printable Kenken Puzzles 9X9 Printable Crossword Puzzles Digits may appear only once in each row and column. Inkies have the same rules as kenken® the puzzles created by tetsuya miyamoto. There are some simple techniques that will help you solve the easier puzzles. Web in a 4 × 4 kenken, the only numbers you may write are 1, 2, 3, or 4. Sudoku challenge 2048 2048 is. Printable Kenken Puzzles Printable Crossword Puzzles Web kenkens are similar to sudoku puzzles, but with operations. Web sudoku my original collection has tons of printable sudoku puzzles. Web in a 4 × 4 kenken, the only numbers you may write are 1, 2, 3, or 4. Web easy kenken puzzles by krazydad. Web here are hundreds of free kenken puzzles suitable for printing. Mathematics Of Sudoku Wikipedia Printable Kenken Puzzle 7X7 This new puzzle sensation had over 4 million. Web puzzles that make math fun™ there are several ways to enjoy kenken. Mathdoku is a great printable puzzle that combines arithmetic and math skills with logic reasoning. Like sudoku, the solution requires. Like sudoku, the solution requires that an n × n puzzle contain the digits 1. Ken Ken Printable Printable Templates Web here are hundreds of free kenken puzzles suitable for printing. Web easy kenken puzzles by krazydad. Web here are hundreds of free kenken puzzles suitable for printing. Web in a 4 × 4 kenken, the only numbers you may write are 1, 2, 3, or 4. Sudoku challenge 2048 2048 is the candy crush for math geeks! The 'ShroomIssue 126/Fun Stuff Super Mario Wiki, the Mario encyclopedia Web kenkens are similar to sudoku puzzles, but with operations. Web which method do i use to print an answer to a crossword? Web puzzles that make math fun™ there are several ways to enjoy kenken. Inkies have the same rules as kenken® the puzzles created by tetsuya miyamoto. Web available for print / web / html5 kenken® is available. Printable Kenken Puzzles - Like sudoku, the solution requires that an n × n puzzle contain the digits 1. Web printable mathdoku puzzles 1. This new puzzle sensation had over 4 million. Digits may only appear once in each row and column. Each inky puzzle has only one unique. Web puzzles that make math fun™ there are several ways to enjoy kenken. Web easy kenken puzzles by krazydad. Web available for print / web / html5 kenken® is available in a variety of sizes and challenge levels. We have prepared packets of kenkens sorted by size and type of. The numbers you write depend on the size of the. Web sudoku my original collection has tons of printable sudoku puzzles. There are other sizes of kenken puzzles. The numbers you write depend on the size of the. Inkies have the same rules as kenken® the puzzles created by tetsuya miyamoto. Web as a loyal new york times kenken’er, you now have the exclusive opportunity to register to receive via email free. Haunted Mirror Mazes Full Of Cute Undead, With Spine. Like sudoku, the solution requires. Web kenken puzzles are square (this one happens to be 4 × 4, but they can be of any size). Web here are hundreds of free kenken puzzles suitable for printing. 13 free printable pentominoes puzzle challenges; Each Booklet Contains A Collection Of 8 Different Puzzles, With The. Like sudoku, the solution requires that an n × n puzzle contain the digits 1. These inkies do not use multiplication and division ( click here for. Web here are hundreds of free kenken puzzles suitable for printing. Let’s learn them as we. Mathdoku Is A Great Printable Puzzle That Combines Arithmetic And Math Skills With Logic Reasoning. In the 4 by 4. If you’re looking for a free option to print an answer. Andrews mcmeel syndication is offering. We have prepared packets of kenkens sorted by size and type of. Each Inky Puzzle Has Only One Unique. Web easy kenken puzzles by krazydad. Sudoku challenge 2048 2048 is the candy crush for math geeks! The numbers you write depend on the size of the. Web available for print / web / html5 kenken® is available in a variety of sizes and challenge levels. Related Post:
{"url":"https://tunxis.commnet.edu/view/printable-kenken-puzzles.html","timestamp":"2024-11-02T21:27:33Z","content_type":"text/html","content_length":"33774","record_id":"<urn:uuid:f9e4093f-e5f0-4266-ae8a-1d30672b609b>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00540.warc.gz"}
Logistic Regression R Assignment Help | Pay Someone To Do My Logistic Regression R Programming Assignment Logistic Regression R Assignment Help Hire Someone To Do Logistic Regression Assignment Logical regression analysis is often employed when the dependent variable is binary and serves as an alternate means of Expressing Probability (or the ratio of what might happen compared with all possible events). Logistic regression is widely employed within healthcare for disease diagnosis, such as polycystic ovary syndrome diagnosis. Furthermore, banks use it to predict loan defaults based on income and credit score variables. Predictor and Outcome Variables Predictor variables are features that influence your outcome variable and can either be continuous or categoric in nature. Your outcome variable could either be binary (pass/fail), ordinal rank in competitions etc; its coefficients, known as b values in R programming, provide insight into any impact a change has had on its effects on that outcome variable. The b value for a predictor indicates the probability that its associated outcome variable occurs, divided by its variance in a regression model. Additionally, this value allows you to detect collinearity. For instance, you might discover that hours studied correlate to passing or failing test results. Logistic Regression models are utilized across many fields for data analysis and answering specific queries such as: Logical regression relies on several assumptions being met for it to be effective: linearity, independence of observations, absence of outliers and normality of residuals. Most statistical software packages offer charts and statistics to test these assumptions – but some software packages do not do this automatically (extra menu commands may have to be executed or code written if this feature is missing). Binary logistic regression can be applied when the response variable has only two categories, such as passing or failing an examination or agreeing or disagreeing to a survey question. Ordinal logistic regression should be used when an outcome involves three or more categories that do not fall in a line like departments at a business or search engine types. Insurance uses this to predict the likelihood of someone dying before their policy has expired, using age, gender and medical records as inputs. Banking uses it similarly for loan default predictions using annual income, credit scores and debt levels as variables. Logistic regression is an Analytical Tool that simplifies complex probability calculations into an easily manageable arithmetic problem, making it ideal for modeling when the response is binary (e.g. whether someone will crack a code). Furthermore, logistic regression provides a means of unraveling effects measured at different levels (such as departments within an organization or types of search engines) simultaneously. To use it effectively, you’ll require clean data and the appropriate model. Furthermore, to reduce correlated variables by using methods like Recursive Feature Elimination. Logistic regression produces a probability value between 0 and 1, making it ideal for classification tasks–predicting whether an event will occur or not–in fields such as finance, marketing and healthcare. Furthermore, its results can help predict rare customer behaviors like loan defaulting and churning so that you can prevent these occurrences before they happen. Logistic regression is a machine learning model designed for binary dependent variables with binary responses such as yes or no answers (categorical output). Logistic regression can be especially useful in classifying data from classification tasks and is also recommended when outcomes do not depend linearly upon predictor variables. RMA helps identify Significant Predictors, provides confidence intervals for predicted values of dependent variables, and generates a test statistic to examine the significance of its model. Furthermore, its results are easily interpretable and comparable with OLS regression results. Logistic regression has multiple uses across industries. Banking and insurance firms use it to detect possible fraud or predict loan defaults according to certain criteria, while medical researchers use it for disease prediction by identifying factors which might increase a patient’s propensity for certain illnesses and taking preventative steps accordingly. Pay Someone To Take My Logistic Regression Assignment Logistic regression is an excellent statistical model used for classification tasks, providing the opportunity to predict the probability of binary outcomes such as “yes or no, true or false, 0 or 1”) by considering multiple independent variables. Mastering this Essential Skill will open doors across several sectors – banking, marketing, fraud detection and more will benefit from your expertise. R programming Logistic Regression is a key skill for data scientists, offering practical relevance and efficiency in the real world. With its primary emphasis on binary classification, Logistic Regression can be utilized in a range of scenarios from medical diagnoses and bank fraud detection through marketing campaigns and loan approvals – as well as providing a strong basis for more advanced models like gradient boosting or support vector machines. Though writing an R program may appear daunting at first, breaking it into small steps makes the task far simpler. One step may involve calculating likelihood values based on training observations; then use these to predict output of your dependent variable. R is an integral component of business life and data scientists. Its popularity in academia extends far beyond statistical analysis and modeling – R also serves as an excellent platform for visualizing complex datasets. Data analysis Data analysis involves studying data sets to uncover patterns, trends and relationships within them in order to answer questions or support decisions. Common methods for conducting data analysis include time series analysis, correlation analysis and regression analysis – these tools allow users to pinpoint key components related to an event, while giving insight into its cause. Data analysts in banks use logistic regression to predict whether credit card customers are likely to default on payments next month, providing vital insight that allows for smarter lending decisions and potentially increased revenue for them. Logistic regression is a valuable tool for analyzing real-world binary classification problems in an interpretable and trustworthy manner. Data scientists often turn to it when faced with medical diagnosis or fraud detection challenges or loan approval decisions; as its wide applicability makes it an indispensable asset. Logistic regression offers users access to advanced mathematical techniques without breaking down its results into separate algorithms that might make its predictions harder to comprehend or trust. Model building Model building involves using statistical methods to construct a Mathematical Model representing relationships among variables, whether for predictive purposes, controlling variables or exploring strength of relationships among predictors and dependent variables. When creating models you should strive for something parsimonious and interpretable; this section discusses strategies and functions in the gtsummary, survey and performance packages which help organize and visualize model results. Logistic regression has numerous applications across a variety of fields, from classification to forecasting. Online advertising uses logistic regression to predict which ads users will click on as a percentage of total site users; healthcare uses logistic regression to identify risk factors for disease and plan preventive measures; drug research also employs logistic regression in order to tease apart its effect on different groups. Logistic models can be extremely complex, requiring great care to construct and interpret correctly. This is particularly true when modeling categorical variables. When modeling data with logistic regression, one potential pitfall to avoid is high levels of correlation between features; for instance if two features (kneading and fermenting) have strong correlations, linear regression would likely provide greater benefits than logistic regression. Logistic regression is an indispensable skill in today’s data science world. From banking and health care, to research and machine learning, logistic regression can be found everywhere from banks and hospitals to research labs and laboratories. Mastery of this topic is key for becoming an adept data analyst. Can Someone Do My Logistic Regression Assignment Logistic regression is a machine learning classification algorithm used to predict the probability of categorical dependent variables, such as customer purchase intentions or whether or not patients will respond favorably to specific treatments. Logistic Regression can also help predict outcomes such as whether customers will purchase certain products and whether specific treatments provide lasting relief for patients. These algorithms are widely used for natural language processing tasks, including toxic speech detection or determining whether an email is spam. Fast and accurate, they produce great results in binary classification tasks. It is easy to use Logistic regression is a machine learning classification algorithm that models the relationship between data or cues and specific events, such as customers leaving your company. Logistic regression’s output variable can only take on two values (0 or 1) for output purposes; hence it can be applied to many classification problems including finding customer churn. The model can also help predict an outcome with binary values, such as whether a student will pass or fail a test, though several assumptions must be fulfilled for its optimal functioning. These assumptions include linearity and independence of features. Furthermore, features must have comparable scale to improve Conversion Speed or not convert at all. Regularization can speed up model training by decreasing weight or importance for particular features in the final logistic regression equation. It is fast Logistic regression describes a relationship between an independent variable and its binary dependent variable, such as whether customers will churn, patients have disease or loans are granted. There are various advantages of employing this algorithm for classification tasks. First and foremost, regression models are easy to set up and train, as the simple structure reduces overfitting risks more complex algorithms like neural networks may present. Logical regression models are more interpretable than other machine learning models, which is especially valuable in highly regulated industries like banking or healthcare. While neural networks may make decisions more mysteriously than expected, with logistic regression it is clear what outcome an input has produced – making this tool more reliable and trustworthy, for instance a credit scoring company must be able to explain why an application was denied. It is accurate Logistic regression Combines Linear combinations of features and applies a nonlinear function to the output variable, producing either binary (0/1) or multinomial prediction outputs that provide class probability estimates. A key assumption here is that observations in your dataset follow an independent random pattern; otherwise, your model could potentially be flawed. Calculus can transform complex probabilistic calculations into an easy arithmetic problem, making analysis between variables easier. As such, calculus has become widely used in classification and predictive analytics – not to mention its use for medical research purposes such as distinguishing effects between drug classes for different age groups. For logistic regression to work effectively, you need to divide the data into training set and test set proportions; you can do this using initial_split() with prop set as the train data proportion. Next use logistic_reg() function with mixture and penalty parameters controlling regularization amount respectively. It is affordable Logistic regression reduces complex probabilistic calculations into a straightforward arithmetic problem, making machine learning easier by streamlining processes like pattern recognition. Logistic regression also highlights interdependencies among variables, helping statisticians better understand cause and effect within their work – an advantage for researchers looking to model drug effects or predict customer purchases with this technique. logistic regression’s simplicity and speed make it ideal for production environments. Unlike other machine learning models that require extensive computing resources and time for retraining, logistic regression can be used in real time to predict outputs; ID Finance uses it to assess whether loan applications will be denied or approved; outlier detection or multicollinearity elimination may also be useful features of logistic regression models; its greatest advantage over Neural Networks lies in transparency – you know exactly why its decisions were made by this model. Where can I find reliable help with Logistic Regression for my website’s predictive modeling? I have built a predictive model for a lot of my Read More »
{"url":"https://rprogrammingassignments.com/logistic-regression","timestamp":"2024-11-13T11:48:46Z","content_type":"text/html","content_length":"221053","record_id":"<urn:uuid:781fb79f-10bf-456c-9fa8-f9b5ad6aa111>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00125.warc.gz"}
Each of the following scenarios is independent. Assume that all cash flows are after-tax cash flows. a. Campbell Manuf... ✔ Recommended Answer Answer #1 Each of the following scenarios is independent. Assume that all cash flows are after-tax cash flows. a. Campbell Manufacturing is considering the purchase of a new welding system. The cash benefits will be $480,000 per year. The system costs $2,700,000 and will last 10 years. b. Evee Cardenas is interested in investing in a women's specialty shop. The cost of the investment is $270,000. She estimates that the return from owning her own shop will be $52,500 per year. She estimates that the shop will have a useful life of 6 years. c.Barker Company calculated the NPV of a project and found it to be $63,900. The project's life was estimated to be 8 years. The required rate of return used for the NPV calculation was 10%. The project was expected to produce annual after-tax cash flows of $135,000. Required: Compute the NPV or Campbell Manufacturing, assuming a discount rate of 12% f required round all present va 1. e calculation stone nearest dollar Should the company buy the new welding system? Yes 2. Conceptual Connection: Assuming a required rate of return of 8% calculate the N for Eve ardenas' investment. Round to the nearest dollar. re u red round all present value calculations to the nearest dollar. Use the minus sign to indicate a negative NPV Should she invest? No What if the estimated return was $135,000 per year? Calculate the new NPV for Evee Cardenas' investment. Would this affect the decision? What does this tell you about your analysis? Round to the nearest dollar. The shop should now be purchased. This reveals that the decision to accept or reject in this case is affected by differences in estimated cash flow 2. Conceptual Connection: Assuming a required rate of return of 8%, calculate the NPV for Evee Cardenas' investment. Round to the nearest dollar. If required, round all present value calculations to the nearest dollar. Use the minus sign to indicate a negative NPV Should she invest? No What if the estimated return was $135,000 per year? Calculate the new NPV for Evee Cardenas' investment. Would this affect the decision? What does this tell you about your analysis? Round to the nearest dollar The shop should now be purchased. This reveals that the decision to accept or reject in this case is affected by differences in estimated cash flow 3. What was the required investment for Barker Company's project? Round to the nearest dollar. If required, round all present value calculations to the nearest dollar 1. Computation of NPV Campbel manufacturing NPV = Present value of Cash Inflows - Present value of Cash Outflows. PV OF CASH INFLOW = 480,000*5.650233= 2,712,107 PV OF CASH OUTFLOW=2,700,000 NPV= 2,712,107-2,700,000=12,107 Selection Criteria: If NPV > 0, i.e., if NPV is positive, the project is acceptable │YEAR│CASH INFLOW│PV @12% │PRESENT VALUE OF CASH INFLOW │ │1 │480,000 │1/1.12^1= 0.892857│428571 │ │2 │480,000 │1/1.12^2 =0.797194│382653 │ │3 │480,000 │0.71178 │341655 │ │4 │480,000 │0.635518 │305049 │ │5 │480,000 │ │ │ │6 │480,000 │0.506631 │ │ │7 │480,000 │ │217128 │ │8 │480,000 │0.403883 │ │ │9 │480,000 │0.36061 │ │ │10 │480,000 │ │ │ │TOTAL PRESENT VALUE OF CASH INFLOW │2,712,107 │ │ │CASH OUTFLOW │ │ │0 │ │(2,700,000) │ │ │(INITIAL INVESTMENT) │ │ │NET PRESENT VALUE │12,107 │ 2. Computation of NPV of Evee cardenas NPV = Present value of Cash Inflows - Present value of Cash Outflows. PV OF CASH INFLOW = 52,500* 4.62288= 242,701 PV OF CASH OUTFLOW=270,000 NPV= 242,701-270,000= (27299) Selection Criteria: If NPV < 0, i.e., if NPV is negative, the project is not acceptable If estimated annual cash inflow is $135,000 PV OF CASH INFLOW = 135,000* 4.62288= 624,089 PV OF CASH OUTFLOW=270,000 NPV= 624,089-270,000= 354,089 Selection Criteria: If NPV > 0, i.e., if NPV is positive, the project is acceptable Similar Homework Help Questions • Each of the following scenarios is independent. Assume that all cash flows are after-tax cash flows.... • Net Present Value Use Exhibit 12B.1 and Exhibit 12B.2. Each of the following scenarios is independent.... • Use Exhibit 12B.1 and Exhibit 12B.2 to locate the present value of an annuity of $1,... Use Exhibit 12B.1 and Exhibit 12B.2 to locate the present value of an annuity of $1, which is the amount to be multiplied times the future annual cash flow amount. Each of the following scenarios is independent. Assume that all cash flows are after-tax cash flows. Campbell Manufacturing is considering the purchase of a new welding system. The cash benefits will be $480,000 per year. The system costs $2,050,000 and will last 10 years. Evee Cardenas is interested in investing... • Net Present Value Use Exhibit 12B.1 and Exhibit 12B.2 to locate the present value of an annuity of $1, which is the amo... • All scenarios are independent of all other scenarios. Assume that all cash flows are after-tax cash flows a. Kambry Da... • Accounting Rate of Retum Each of the following scenarios is independent. Assume that all cash flows are after-tax cash...
{"url":"https://www.homeworklib.com/question/771921/each-of-the-following-scenarios-is-independent","timestamp":"2024-11-03T03:14:54Z","content_type":"text/html","content_length":"76664","record_id":"<urn:uuid:47f9751a-2b1b-4a6a-9616-90d979f33e8b>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00744.warc.gz"}
XGBoost for Multi-Step Univariate Time Series Forecasting with "multi_strategy" This example demonstrates how to use XGBoost’s support for multiple output regression via multi_strategy='multi_output_tree' to forecast multiple future time steps of a univariate time series. We’ll cover data preparation, model training, and making multi-step predictions using a synthetic dataset, highlighting the benefits and use cases of multi-step forecasting. # XGBoosting.com # Multi-Step Univariate Time Series Forecasting with XGBoost's multi_output_tree Strategy import numpy as np import pandas as pd from xgboost import XGBRegressor from sklearn.metrics import mean_squared_error # Generate a synthetic univariate time series dataset series = np.sin(0.1 * np.arange(200)) + np.random.randn(200) * 0.1 # Prepare data for supervised learning df = pd.DataFrame({'series': series}) n_steps = 3 for i in range(1, n_steps + 1): df[f'lag_{i}'] = df['series'].shift(i) df['target'] = df[['series']].shift(-n_steps) df = df.dropna() X = df.drop(columns=['series', 'target']).values y = df['target'].values.reshape(-1, 1) # Chronological split of data into train and test sets split_index = int(len(X) * 0.8) X_train, X_test = X[:split_index], X[split_index:] y_train, y_test = y[:split_index], y[split_index:] # Initialize an XGBRegressor model with multi_output_tree strategy model = XGBRegressor(n_estimators=100, learning_rate=0.1, random_state=42, multi_strategy='multi_output_tree') # Fit the model on the training data model.fit(X_train, y_train) # Make multi-step predictions on the test set y_pred = model.predict(X_test) # Evaluate the model's performance mse = mean_squared_error(y_test, y_pred) print(f"Mean Squared Error: {mse:.4f}") This example showcases how to use XGBoost’s 'multi_output_tree' strategy for multi-step univariate time series forecasting. Here’s a step-by-step breakdown: 1. Generate a synthetic univariate time series using a sine wave with added noise. 2. Prepare the data for supervised learning by creating lagged features and a multi-step target. Here, we use 3 lags and predict 3 steps ahead. 3. Split the data chronologically into train and test sets to maintain the temporal order. 4. Initialize an XGBRegressor model with the multi_strategy set to 'multi_output_tree' 5. Fit the model on the training data using fit(). 6. Make multi-step predictions on the test set using predict(). 7. Evaluate the model’s performance using Mean Squared Error (MSE). Multi-step forecasting is useful when you need to predict multiple future time steps at once. This is particularly relevant for applications like inventory management, resource planning, and financial forecasting, where knowing the expected values for several periods ahead can help make informed decisions. The multi_strategy in XGBoost allows for efficient multi-step forecasting by adapting the tree structure to directly output multiple future time steps. This approach can capture complex dependencies between the input features and the multi-step target, potentially leading to improved performance compared to iteratively making single-step predictions. By modifying the data preparation and the number of time steps to forecast, you can adapt this example to various multi-step univariate time series forecasting tasks. Experiment with different hyperparameters and feature engineering techniques to further optimize the model’s performance for your specific use case.
{"url":"https://xgboosting.com/xgboost-for-multi-step-univariate-time-series-forecasting-with-multi_strategy/","timestamp":"2024-11-13T05:44:01Z","content_type":"text/html","content_length":"12514","record_id":"<urn:uuid:887791ba-d395-4d76-bd27-44c4d4bb5485>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00026.warc.gz"}
Download Computational Continuum Mechanics by Shabana A.A. PDF Read Online or Download Computational Continuum Mechanics PDF Similar computational mathematicsematics books Emergent computation: Emphasizing bioinformatics Emergent Computation emphasizes the interrelationship of the various sessions of languages studied in mathematical linguistics (regular, context-free, context-sensitive, and sort zero) with features to the biochemistry of DNA, RNA, and proteins. moreover, facets of sequential machines akin to parity checking and semi-groups are prolonged to the learn of the Biochemistry of DNA, RNA, and Reviews in Computational Chemistry Volume 2 This moment quantity of the sequence 'Reviews in Computational Chemistry' explores new functions, new methodologies, and new views. the subjects coated comprise conformational research, protein folding, strength box parameterizations, hydrogen bonding, cost distributions, electrostatic potentials, digital spectroscopy, molecular estate correlations, and the computational chemistry Introduction to applied numerical analysis This publication through a trendy mathematician is acceptable for a single-semester direction in utilized numerical research for desktop technological know-how majors and different upper-level undergraduate and graduate scholars. even though it doesn't conceal genuine programming, it specializes in the utilized issues such a lot pertinent to technology and engineering execs. Additional info for Computational Continuum Mechanics Sample text 6 D’Alembert’s Principle 31 is utilized. The identity of Equation 108 applies only to planar transformation; it is a special case of a more general identity that applies to spatial transformation matrices (Roberson and Schwertassek, 1988; Shabana, 2005). Using three-dimensional vectors to represent this planar motion and introducing the following definition for the angular acceleration:  ¼ 0 a 0 ÃT h€ , ð1:109Þ one can show that the absolute acceleration vector of Equation 107 can be written as €r ¼ €rO þ Aða Þ ÿ A fv Þg Âu  ðv Âu ð1:110Þ Alternatively, this equation can be written using vectors defined in the global coordinate system as €r ¼ €rO þ a  u ÿ v  ðv  uÞ ð1:111Þ In this equation, a ¼ A a, and other vectors are as defined previously in this section. In the finite element formulations discussed in this book, the mass matrix cannot, in general, be diagonal, even in the case in which lumped mass techniques are used. Furthermore, in the large deformation finite element formulation presented in Chapter 6, one cannot use lumped masses, because the use of such a lumping scheme does not lead to correct modeling of the rigid-body dynamics. Similarly, by using Equation 145, the virtual work of the applied forces can be written as dW e ¼ QTe dq ð1:151Þ Using Equations 149 and 151 and the principle of virtual work, which states that dW i ¼ dW e , one obtains the following equation: q ÿ Qe ÿ Qv ÞT dq ¼ 0 ðM€ ð1:152Þ If the elements of the vector q are independent, the preceding equation leads to the discrete ordinary differential equations of the system given as M€ q ¼ Qe þ Qv ð1:153Þ However, if the elements of the vector q are not totally independent because of kinematic relationships between the coordinates, one can always write the coordinates q in terms of a reduced set of independent coordinates qi . If the mass m is assumed to be constant, the preceding two equations lead to F ¼ p_ ¼ m dv ¼ ma dt ð1:95Þ In this equation, a is the absolute acceleration vector of the particle. Note that, in general, three scalar equations are required to describe the particle dynamics. This is mainly due to the fact that, in the case of unconstrained motion, the particle has three degrees of freedom in the spatial analysis because it is represented by a point that has no dimensions. In the case of planar motion, only two equations are required because, in this case, the particle has only two degrees of freedom. Rated of 5 – based on votes
{"url":"http://www.collettivof84.com/index.php/pdf/computational-continuum-mechanics","timestamp":"2024-11-07T15:17:47Z","content_type":"text/html","content_length":"27499","record_id":"<urn:uuid:b6d09c9c-2689-40b8-b798-e3080c656f1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00862.warc.gz"}
Fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation Graphical Models and Image Processing, 58(3):246-261, 1996. bibtex author = {Udupa, J.K. and Samarasekera, S.}, journal = {Graphical Models and Image Processing}, number = {3}, pages = {246-261}, title = {Fuzzy connectedness and object definition: {T}heory, algorithms, and applications in image segmentation}, volume = {58}, year = {1996}, title_with_no_special_chars = {Fuzzy connectedness and object definition Theory algorithms and applications in image segmentation}
{"url":"https://bibbase.org/network/publication/udupa-samarasekera-fuzzyconnectednessandobjectdefinitiontheoryalgorithmsandapplicationsinimagesegmentation-1996","timestamp":"2024-11-10T17:25:38Z","content_type":"text/html","content_length":"10489","record_id":"<urn:uuid:793aad8b-67a6-4d9f-b76f-90dcc21f16d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00322.warc.gz"}
The Electric Field from an Infinite Line Charge This second walk through extends the application of Gauss's law to an infinite line of charge. This time cylindrical symmetry underpins the explanation. Once again interactive text, visualizations, and mathematics provide a rich and easily understood presentation. When we worked with a point charge we recognized spherical symmetry, which inspired us to select a spherical surface to simplify Gauss's law. Can we find a similar symmetry for an infinite line charge? If we do a little bit of experimenting with the charge and field line diagram, we see that rotation around the axis of the charged line does not change the shape of the field. This leads to a Gaussian surface that curves around the line charge. Since Gauss's law requires a closed surface, the ends of this surface must be closed. This is a cylinder. This symmetry is commonly referred to as cylindrical symmetry. The field lines are everywhere perpendicular to the walls of the cylinder, and they are evenly distributed around the surface. Just as with the sphere that surrounded a point charge. And like that sphere, $E ⋅ d A$ will be constant over the surface. The ends of the cylinder will be parallel to the electric field so that $E ⋅ d A = 0$ , and once again Gauss's law will be simplified by the choice of surface. Fields lines radiate out from a charged line. The $1 r$ field from the charged line. An Infinite Line Charge Surrounded By A Gaussian Cylinder Exploit the cylindrical symmetry of the charged line to select a surface that simplifies Gausses Law. Mouse Interactions • Click and drag with the left mouse button to rotate the model around the x and y-axes. • Shift-click with the left mouse button to rotate the model around the z-axis. • Use the mouse wheel to zoom in and out. Touch Interactions • Swipe with a finger to rotate the model around the x and y-axes. • Rotate or twist with two fingers to rotate the model around the z-axis. • Pinch with two fingers to zoom in and out. Gauss's Law As always start with Gauss's Law. $∬ S E ⋅ d A = 4 π q$ We break the surface integral into three parts for the left cap, $L$ , the body, $B$ , and the right cap, $R$ , of the cylinder. $∬ S E ⋅ dA = ∬ L E ⋅ dA + ∬ B E ⋅ dA + ∬ R E ⋅ dA$ Let's work with the left end cap, $L$ , first. $∬ L E ⋅ dA = ∬ L E cos ⁡ θ dA E is parallel to the cap. = ∬ L E cos ⁡ π2 dA = 0$ This makes a great deal of sense. When the field is parallel to the surface the flux through the surface is zero. The process is identical for the right cap, so all the contributions to the flux come from the body of the cylinder. $∬ B E ⋅ dA = ∬ B E cos ⁡ θ dA E is perpendicular to the body. = ∬ B E cos ⁡ 0 dA = ∬ B E dA E is constant over the surface. = E ∬ B dA = E A If L is the length of the cylinder. = E 2πrL$ Now that we have the flux through the cylinder wall, we need the right side of the equation, the charge contained within the surface. Since $λ$ is the charge density of the line the charge contained within the cylinder is: $4πq = 4πλL$ Setting the two haves of Gauss's law equal to one another gives the electric field from a line charge as $E = 2 λ r$ Then for our configuration, a cylinder with radius $r = 15.00 cm$ centered around a line with charge density $λ = 8 statC cm$ $E=2λr = 28statCcm 15.00cm = 1.07statVcm$
{"url":"http://www.vizitsolutions.com/portfolio/gausslaw/lineCharge.html","timestamp":"2024-11-13T08:40:11Z","content_type":"text/html","content_length":"28549","record_id":"<urn:uuid:65df31ca-212e-4713-86ce-3a08335b7979>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00656.warc.gz"}
Rule of Three Calculator - Devxz - Free Online Tools Collection Rule of Three Calculator The simple rule of three calculator allows you to find a number in proportion to two known values. What is the Rule of Three? The Rule of Three is a mathematical method used to find a fourth value when you know three values in proportion. It is based on a relationship of direct or inverse proportionality between two sets of In simple terms, the Rule of Three can be applied when two values are proportional to each other. It is frequently used in problems where you need to scale a set of numbers up or down to match a known ratio. History of the Rule of Three The Rule of Three has its roots in ancient mathematics. The Babylonians and Egyptians were among the first to use proportionality in problem-solving. However, it was during the Renaissance period that the Rule of Three was formalized in the form we recognize today. The method became highly popular due to its simplicity and efficiency in solving everyday commercial and trade-related The Rule of Three gained prominence during the rise of mercantile economies in Europe, where merchants and traders frequently encountered situations that required proportional reasoning, such as converting currencies, measuring quantities, or calculating prices. In fact, it was often referred to as "The Golden Rule" by merchants, indicating its fundamental role in commerce. Applications of the Rule of Three The Rule of Three is used in various fields including: • Commerce and Trade: Calculating prices, quantities, and rates when working with proportional relationships between two or more variables. • Science: Scaling experiments or measurements up or down to match a proportional relationship. • Construction: Determining material quantities based on a known ratio. • Culinary Arts: Adjusting recipes to serve a different number of people while maintaining the correct ratios between ingredients. In modern mathematics, the Rule of Three is still taught as an essential method of problem-solving, especially in early education where understanding ratios and proportions is key to advancing into more complex mathematical concepts. Whether you're adjusting a recipe or calculating the price of multiple items, the Rule of Three provides a quick and easy method to find an unknown value when given a set of proportions.
{"url":"https://devxz.com/rule-of-three/","timestamp":"2024-11-04T02:32:27Z","content_type":"text/html","content_length":"11778","record_id":"<urn:uuid:8ee1b632-3646-4146-b01e-901b06db95a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00631.warc.gz"}
NCERT Solutions For Class 8 Maths Ch 14 Factorization NCERT Solutions Class 8 Maths Ch 14 Factorization NCERT Solutions For Class 8 Maths Ch 14 Factorization You can download the NCERT Solutions for Class 8 Maths Ch 14 Factorization in PDF below. Prescribed textbook by CBSE for Ch 14 Factorization is standard VIII NCERT course book for Maths. Class 8 Maths Ch 14 Factorization PDF NCERT Solutions For Download NCERT Solutions For Maths Class 8 – NCERT Chapter wise Solutions PDF Download Maths NCERT Guide For Class 8 Ch 14 Factorization contain solved questions answers in PDF for free download based on latest guide for NCERT Books 8th Standard Maths in new pattern as per CBSE Syllabus Class 8 Maths. NCERT Textbook Solutions, Answers, CBSE Help Book for Ch 14 Factorization Class 8 Maths Central Board of Secondary Education governs the teaching pattern for VIII Maths. For all the schools affiliated to CBSE Prescribed book for Ch 14 Factorization is NCERT for Class 8 Maths. NCERT solutions for Class 8 Maths make learning easy. Teaching the right way of problem solving and make topics clear to the students of 8th is necessary. These NCERT Solutions For Class 8 Maths Ch 14 Factorization will help them in preparation for 8th Standard Maths Exams. Aiming at securing high marks with improved basic fundamental knowledge in Maths should be the goal. Above given solutions for NCERT books VIII Maths contain questions and answers covering all topics. Important concepts of Ch 14 Factorization in PDF is available for free download. As per new CBSE Syllabus for VIII Maths Ch 14 Factorization is included in curriculum. Maths NCERT Solutions Class 8 NCERT Solutions 8th Standard All Subjects NCERT Solutions for Class 12 Chemistry Chapter 7 The p Block Elements NCERT Solutions for Class 12 Chemistry Chapter 7 The p Block Elements Class XII Chemistry Chapter 7 The p Block Elements NCERT Solutions is given below. Question 7.1: Discuss the general characteristics of Group 15 elements with reference ... NCERT Solutions for Class 12 Chemistry Chapter 3 Electrochemistry NCERT Solutions for Class 12 Chemistry Chapter 3 Electrochemistry Chapter: Chapter 3 – Electrochemistry Question 3.1: Arrange the following metals in the order in which they displace each other from the solution of their salts. Al, Cu, Fe, Mg and Zn NCERT Solutions for Class 12 Chemistry Chapter 2 Solutions NCERT Solutions for Class 12 Chemistry Chapter 2 Solutions Chapter: Chapter 2 – Solutions Question 2.23: Suggest the most important type of intermolecular attractive interaction in the following pairs. (i) n-hexane and n-octane (ii) I2 and CCl4 (iii) NaClO4 and water NCERT Solutions for Class 12 Chemistry Chapter 9 Coordination Compounds NCERT Solutions for Class 12 Chemistry Chapter 9 Coordination Compounds Chapter: Chapter 9 – Coordination Compounds Question 9.1: Explain the bonding in coordination compounds in terms of Werner’s postulates. Werner’s postulates explain the bonding in coordination compounds as follows: NCERT Solutions for Class 12 Chemistry Chapter 16 Chemistry in Everyday Life NCERT Solutions for Class 12 Chemistry Chapter 16 Chemistry in Everyday Life Chapter:Chapter 16 – Chemistry in Everyday Life Question 16.1: Why do we need to classify drugs in different ways? The classification of drugs and the ... NCERT Solutions for Class 12 Chemistry Chapter 6 General Principles and Processes of Isolation of Elements NCERT Solutions for Class 12 Chemistry Chapter 6 General Principles and Processes of Isolation of Elements Chapter: Chapter 6 – General Principles and Processes of Isolation of Elements Question 6.1: Copper can be extracted ... You must be logged in to post a comment.
{"url":"https://freehomedelivery.net/ncert-solutions-class-8-maths-ch-14-factorization/","timestamp":"2024-11-07T20:27:20Z","content_type":"text/html","content_length":"178090","record_id":"<urn:uuid:9a734b6f-7905-4986-a636-34786146a4a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00863.warc.gz"}
Ranking Rarity: Understanding Rarity Calculation Methods As noted in the Introducing rarity.tools article, one of the most common questions people ask in the discords of collectible NFT projects is ‘how rare is my nft’ This is because rarity is one of the most important factors in determining the value of an individual NFT. But how do you determine the overall rarity of an individual NFT piece? That is what this series of articles is going to (try to) answer. The Problem If you look at any collectible NFT item on OpenSea, such as a Bored Ape Yacht Club, you will see that it has many properties (or traits). Bored Ape Yacht Club #73’s Properties From looking at the properties, you may be able to determine that an NFT has some rare traits. But how rare is this NFT compared to others? When comparing two NFTs do you simply compare the rarest trait of each NFT? Bored Ape Yacht Club #9941’s Properties .. is he rarer than #73 above? Because each NFT has multiple traits, there has to be a way to combine the rarity of all the traits into one single value per NFT to be able to actually rank them. There have been many ways people have been ranking NFTs by rarity. Below we talk about some of them. We’ll compare the above 2 apes with #9542 here Trait Rarity Ranking This refers to comparing NFTs by simply comparing the rarest trait of each NFT. For example, comparing the Apes above: • Ape #73 s rarest trait is Sold Gold Fur which 0.46% have • Ape #9941's rarest trait is Bored Unshaven Dagger which 0.28% have • Ape #9542’s rarest trait is Bored Unshaven Pizza which 0.26% have Using ‘Trait Rarity Ranking’ then the order would be #9542, #9941 and last #73 Sorted by Trait Rarity While it is a simple straightforward method, the weakness of this method is it only considers the rarest trait of each NFT. Imagine we had a collection of NFTs that have 4 traits each. Suppose we had 2 NFTs that we wanted to compare as in the table below: Example Collection A NFTs Using Trait Rarity, NFT ID 1 would win because it’s rarest trait (Trait 1) has 10% rarity which is less than any of NFT 2’s traits. But the rest of NFT 2’s traits are all a lot rarer than all the rest of NFT 1’s traits. Overall wouldn’t NFT 2 be valued more? And that is the weakness of ranking by Trait Rarity. It doesn’t look at the overall rarity of the NFTs at all, just the rarest trait. Average Trait Rarity Another method that is sometimes used is averaging the rarity of traits that exist on the NFT. For example if an NFT had 2 traits, one with 50% rarity and another with 10% rarity, it’s average trait rarity would be (50+10)/2 = 30% For our apes: • Ape #73’s average trait rarity is 4.05% • Ape #9941’s average trait rarity is 6.056% • Ape #9542’s average trait rarity is 6.452% So with this method, the order would be completely flipped from the previous method. The ranking would be #73, #9941 and then #9542 Average Trait Rarity Ranking Is Average Trait Rarity any good? Well, at least it considers the overall rarity of the traits. Let’s look at that previous example with NFT ID 1 and 2 again Example Collection A: Average trait rarity is calculated and listed at the rightmost column. So the Average rarity of NFT ID 2 is 0.11, while NFT ID 1 is 0.625. That means Average Rarity says NFT ID 2 is rarer than NFT ID 1. The problem with this method (which Statistical Rarity that is described next also has) is that it puts so much weight on the overall rarity of every trait, that NFTs that have a single super rare trait are not valued enough, as their rarity value gets too ‘diluted’ by the other traits. To illustrate this, imagine we had a collection of NFTs that looked like this: Example Collection B: A collection that has 10 NFTs Which one would you say is the rarest? Of course it’s NFT ID 1! Now let’s try using Traity Rarity ranking and Average Trait Rarity ranking on them. First we convert the trait values to their trait rarity percentages: Example Collection B: With trait rarity percentages listed. If we used Trait Rarity ranking on this collection, then NFT ID 1 would be the rarest, which aligns with what we think should be right. Now lets try using Average Trait Rarity Example Collection B: With Average Trait Rarity on rightmost column. Oh! Turns out Average Trait Rarity ranking thinks NFT IDs 7, 8, 9 and 10 are rarer than NFT ID 1! But NFT ID 1 is the only unique one, the only 1 of 1 and obviously would be most valuable in the collection, wouldn’t it? That means maybe Average Rarity ranking isn’t a really good method after all? Statistical Rarity Now lets look at Statistical Rarity which has become a somewhat popular method and is used very often in community made spreadsheets. In Statistical Rarity, which as far as I know was first written about in relation to NFTs by Adam Chekroud, you calculate the overall rarity of an NFT by multiplying all of it’s trait rarities For example if an NFT has 2 traits with 1 trait at 10% and the other trait at 50%, the ‘statistical rarity’ for that NFT would be (10% * 50%) = 5% For our 3 apes, in order to make the result make sense, we need to add that there is a 22.56% chance for apes to have no hat, 70.23% chance for apes to have no earrings and 18.8% chance for apes to have no clothes. (It is arguable that these should have been added in the ‘Average Trait Rarity’ calculation too but it would not have changed the end result, instead it would have only made it more What the calculation looks like in Excel • Ape #73’s statistical rarity is 0.00000000070744% • Ape #9941’s statistical rarity is 0.00000056965722% • Ape #9542’s statistical rarity is 0.00000044983967% So, with this method, the ranking would be #73, #9542 and then #9941 Statistical Rarity Ranking Let’s recap those rankings: Sorted by Trait Rarity Sorted by Average Trait Rarity Sorted by Statistical Rarity Three different methods with three different results when comparing just 3 apes. Imagine the differences between them when sorting the full collection of 10,000! All these methods are methods that are currently being used in community made ranking spreadsheets and websites. So if this article doesn’t convince you of anything, it should at least convince you that ranking the rarity of NFTs is a problem without an obvious solution. Statistical Rarity vs. Example Collection A and B Now let’s see how Statistical Rarity does with our 2 example collections that we tested Trait Rarity and Average Trait Rarity with. First let’s see the one Trait Rarity failed with: Example Collection A: With Statistical Rarity Here it says NFT ID 2 is rarer which matches Average Trait Rarity, and so it passes our first test. Next lets see how it does with Example Collection B Example Collection B Example Collection B: With Statistical Rarity Oh! Just like with Average Trait Rarity, Statistical Rarity says NFT ID 7–10 are rarer than NFT ID 1! But IDs 7–10 are duplicates of each other! Or should we believe in Statistical Rarity results and go out and buy IDs 7–10 at a higher price than NFT ID 1? But clearly NFT ID 1 is the only 1 of 1, the rarest if you ask most people as there isn’t any others like it. Some might say ‘but I thought Statistical Rarity was ‘Statistically Correct’ ’? The thing is, by multiplying all the rarities of each NFT’s traits, you aren’t really measuring the rarity of an NFT in a specified NFT collection. What is really being measured is something else, which I will leave for the reader to ponder as this article would get even longer than it is. Lets look at Rarity Score next. Rarity Score: How is it Calculated? So, what is Rarity Score? Rarity Score is a method that I (the founder of rarity.tools) came up with. The simple way to calculate Rarity Score which was also described on that site is: [Rarity Score for a Trait Value] = 1 / ([Number of Items with that Trait Value] / [Total Number of Items in Collection]) The total Rarity Score for an NFT is the sum of the Rarity Score of all of it’s trait values. This simple calculation gives very good results and today is also used as the basis of rarity ranking in other NFT sites including an NFT marketplace. Because [Number of Items with that Trait Value] / [Total Number of Items in Collection] is the same as the trait rarity (fractional, not %), we can also say that [Rarity Score for a Trait Value] = 1 / [Trait Rarity of that Trait Value] Rarity Score vs. Example Collection A and B Let’s go ahead and see how Rarity Score tackles our two problem examples. Remember that these imaginary NFT collections are extremely simple. If whatever calculation method we use can’t even get these right, then how can we trust our method to get collections of 10,000s of NFTs right? Let’s start with Example Collection A. First we write down the trait rarities in fractional form: Then we convert them to rarity scores. For example NFT ID 1 Trait 1 Rarity Score is (1 / 0.1) = 10 and then we add them all up for each NFT ID Example Collection A: Rarity Score for each Trait Value and Total Rarity Score for each NFT ID So Rarity Score says NFT ID 2 is more valuable because it has a higher score! Next let’s see how it handles Example Collection B. We’ll go ahead and calculate the scores: Yes! Rarity Score correctly ranks NFT ID 1 as the most rare NFT followed by NFT IDs 7–10 and then IDs 2–6. So what we have seen here is in both of these example cases, the Rarity Score method has given us results that match our instinctive human picks of which NFT is rarer, while Statistical Rarity and other methods have failed. As noted, Statistical Rarity and Average Rarity have a tendency to over-emphasize the overall level of rarities of all traits in an NFT, while not giving enough emphasis to single rare traits that could be 1 of 1s in the whole collection. Trait Rarity on the other hand has the complete opposite problem where it only considers the rarest trait. Rarity Score give results that give enough emphasis to single rare traits and also includes overall trait rarities in its calculation. And most importantly the results it gives match better with our human expectations. What’s Next? While the calculation methods are the most core part of calculating the rankings, there are still many additional elements that are used to get the best final result. In the next articles, I will write about trait normalization, uniqueness, weightings and using combined traits and possibly other topics. Why or how Rarity Score works is also something I might write about.
{"url":"https://raritytools.medium.com/ranking-rarity-understanding-rarity-calculation-methods-86ceaeb9b98c?responsesOpen=true&sortBy=REVERSE_CHRON&source=user_profile---------1----------------------------","timestamp":"2024-11-13T04:31:19Z","content_type":"text/html","content_length":"207853","record_id":"<urn:uuid:efa95ab4-5757-4cc6-a0b4-d6389932efad>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00091.warc.gz"}
26.7 ounces to grams Convert 26.7 Ounces to Grams (oz to gm) with our conversion calculator. 26.7 ounces to grams equals 756.932309272903 oz. Enter ounces to convert to grams. Formula for Converting Ounces to Grams (Oz to Gm): grams = ounces * 28.3495 By multiplying the number of grams by 28.3495, you can easily obtain the equivalent weight in grams from ounces. Converting ounces to grams is a common task that many people encounter, especially when dealing with recipes, scientific measurements, or everyday tasks. Understanding how to perform this conversion accurately can help bridge the gap between the imperial and metric systems, making it easier to work with various measurements. The conversion factor between ounces and grams is essential for this process. One ounce is equivalent to approximately 28.3495 grams. This means that to convert ounces to grams, you simply multiply the number of ounces by this conversion factor. Knowing this allows you to easily switch between the two measurement systems. To convert 26.7 ounces to grams, you can use the following formula: Grams = Ounces × 28.3495 Now, let’s break down the calculation step-by-step: 1. Start with the number of ounces you want to convert: 26.7 ounces. 2. Multiply this number by the conversion factor: 26.7 × 28.3495. 3. Perform the multiplication: 26.7 × 28.3495 = 756.06665. 4. Round the result to two decimal places for practical use: 756.07 grams. Thus, 26.7 ounces is equal to approximately 756.07 grams. This conversion is particularly important in various fields. For instance, in cooking, many recipes use grams for precise measurements, especially in baking where accuracy is crucial. If you’re following a recipe that lists ingredients in grams, knowing how to convert ounces can help you achieve the perfect dish. In scientific measurements, converting between ounces and grams is vital for experiments that require precise quantities of materials. Whether you’re measuring chemicals in a lab or weighing samples for research, understanding this conversion can enhance accuracy and reliability. Everyday use also benefits from this conversion. For example, if you’re shopping for food items that list their weight in grams, but you’re more familiar with ounces, knowing how to convert can help you make informed decisions about portion sizes and nutritional content. In summary, converting 26.7 ounces to grams is a straightforward process that can be incredibly useful in cooking, science, and daily life. By understanding the conversion factor and following the simple steps outlined above, you can easily navigate between these two measurement systems with confidence. Here are 10 items that weigh close to 26.7 ounces to grams – • Standard Laptop Shape: Rectangular Dimensions: 14 x 9.5 x 0.7 inches Usage: Used for personal computing, work, and entertainment. Fact: The average laptop weighs between 2 to 6 pounds, making it portable for daily use. • Medium-Sized Watermelon Shape: Spherical Dimensions: Approximately 10 inches in diameter Usage: Consumed as a refreshing fruit, often in summer picnics. Fact: Watermelons are 92% water, making them a hydrating snack. • Large Bag of Dog Food Shape: Rectangular bag Dimensions: 24 x 16 x 5 inches Usage: Used to feed dogs, providing essential nutrients. Fact: A typical large bag of dog food can last a month for a medium-sized dog. • Two-Quart Cast Iron Dutch Oven Shape: Round Dimensions: 10 x 10 x 5 inches Usage: Ideal for slow cooking, baking, and braising. Fact: Cast iron cookware can last a lifetime if properly maintained. • Standard Backpack Shape: Rectangular Dimensions: 18 x 12 x 6 inches Usage: Used for carrying books, laptops, and personal items. Fact: The average backpack can hold up to 30 liters of volume. • Medium-Sized Bowling Ball Shape: Spherical Dimensions: 8.5 inches in diameter Usage: Used in the sport of bowling for knocking down pins. Fact: Bowling balls come in various weights, typically ranging from 6 to 16 pounds. • Large Bag of Flour Shape: Rectangular bag Dimensions: 24 x 16 x 4 inches Usage: Used in baking and cooking for various recipes. Fact: Flour is a staple ingredient in many cuisines around the world. • Standard Electric Kettle Shape: Cylindrical Dimensions: 9 x 7 x 9 inches Usage: Used for boiling water quickly for tea, coffee, or cooking. Fact: Electric kettles can boil water in under 5 minutes, making them efficient. • Medium-Sized Dog Shape: Quadrupedal Dimensions: Varies by breed, typically around 18-24 inches tall Usage: Companion animal, often used for work or service. Fact: Dogs are known as “man’s best friend” due to their loyalty and companionship. • Large Potted Plant Shape: Cylindrical pot with foliage Dimensions: 12 x 12 x 24 inches Usage: Used for home decoration and improving air quality. Fact: Indoor plants can help reduce stress and improve mood. Other Oz <-> Gm Conversions –
{"url":"https://www.gptpromptshub.com/grams-ounce-converter/26-7-ounces-to-grams","timestamp":"2024-11-14T17:39:48Z","content_type":"text/html","content_length":"185200","record_id":"<urn:uuid:f323ac37-06fb-41d4-8f28-2a065aa1672a>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00084.warc.gz"}
UNDERSTANDING BINARY STARS VIA LIGHT CURVES from the IAPPP Communications, Spring 1994 issue R. E. Wilson Astronomy Department 211 Space Sciences Building University of Florida Gainesville, Florida 32611 USA Received: 6 January 1994; Revised: 2 March 1994 (some updates by DCT on the web version) The history of progress in the field of binary star light curves has been one of hard work in observing and more hard work in interpreting the observations. The observers have battled winter's cold, summer's mosquitos, and earth's atmosphere. The interpreters have had to survive on computing facilities which, until the early 1970's, were thoroughly inadequate for the problem, and with a lack of opportunities (courses, books, mentors, etc.) to learn the fundamentals properly. This is a stimulating time in which changes are coming rapidly in both areas. The pages of the I.A.P.P.P. Communications are filled with impressive developments in automatic photometric observing, with the result that production of accurate light curves, especially by amateurs, is accelerating. Binary star computer models have been improving in sophistication, and the new generation of computing machines now allows these models to reach or approach their full practical realization. This review concerns the interpretive side of the subject that is, how astrophysically meaningful numbers are extracted from observed light curves. One aim is to inform potential users about the existence of various modeling programs, so that the programs may be used directly by observers. Another will be to put today's light curve fitting procedures in perspective, by comparison with intuitive methods and even with old graphical methods. Along the way, we shall see how to go from observed light (and radial velocity) curves to astrophysical quantities the overall goal of Examine typical eclipsing binary light curves and compare them with other kinds of binary star observations, such as radial velocity curves and visual binary data. It is not so obvious from casual inspection which kind of observation has more harmonic content (carries more information), or which kind would be chosen by an astronomically naive judge as physically more complicated. Just a little thought settles the second issue, however. For spectroscopic and visual binaries, the essential phenomenon is one of point sources moving on simple curves, usually with no significant interaction. There are difficulties which require experience, but mainly they are in the observations, while the models are simple. The situation is reversed with light curves the observations are typically clean and periodic and there are only relatively minor analogies to the line blending and profile effects of radial velocities or the subjective effects of visual binary observing (which have been eliminated recently by interferometric and speckle methods of observing). Not only are the light curves relatively accurate and reproducible, but they can be obtained with much less effort, and therefore are more plentiful. Typical sets of published spectroscopic binary observations contain tens of data points, while light curves contain hundreds or even thousands. There are problems with the light curves of some binaries, but typically they are met less often and are on a much smaller scale than those of the other cases. Ah, but then there are the models. Now instead of points moving on ellipses we have tidally and rotationally distorted stars with their inner facing sides heated by the companion's radiation. We have brightness variation over the surfaces due , to variation in surface gravity, and we have limb darkening. We have not just geometry, but thermal radiation effects, as the light curve is a superposition of local radiation curves, and not just those of black bodies, but the much more intricate radiation of real stars. There may be a complication due to the light of a third star, and we may have magnetic star spot activity on a scale much greater than does the sun, and sometimes even a hot spot due to impact of an accretion stream. Finally there are the eclipses, which are both the most useful and computationally difficult feature of the overall problem. However, all of these difficulties are problems for the model makers and programmers. Extraction of astrophysical information from a light curve does not require one to deal with such things, because the required programs already exist, so let us move on to solving light curves. The practice of obtaining information about a binary from its light curve is variously known as solving, or fitting, or interpreting the light curve. To some, these terms may conjure up different mental images. "Solving" sounds like a very formal procedure, "fitting" like trial and error, and "interpreting" like sitting back in a rocking chair and offering opinions. In practice, solving and fitting are treated essentially as synonyms, as is interpreting, except that the identification of unusual features might be included within interpreting. Regardless of the word used, the idea is to estimate values of physical and geometrical quantities (parameters) such as the mass ratio of the two stars, their sizes, and their relative luminosities. Now for a key question could an amateur observer do this? On the one hand, it is not so unusual for a professional to "get it wrong", and a significant fraction of published papers in this area contain no new data, but only improve on previous (perhaps inaccurate) solutions. If questionable solutions can be found by professionals, what chance has the amateur? Also, an amateur ordinarily would have no opportunity to learn from an experienced person, and certainly not to take a course in the subject. On the other hand, light curve fitting can be learned by an imaginative person, and all the hard computing is done by a big program, so why not? Collaboration between amateurs and professionals now is fairly common, and is an excellent option. Publication of an observed light curve without solving it is all right, but inclusion of a solution is more satisfying and takes only modest space in a paper. The required computing facility consists of no more than a fairly good personal computer, of the type now found in many households, and a compiler (usually FORTRAN). Of course, one might have a few collaborations and then strike out alone. The only bad option is not to publish those valuable light curves. The basic type of analysis for measurement of binaries, going back perhaps 90 years, is the generic light curve - radial velocity curve analysis. In a beginning course, astronomy students (are supposed to) learn what can be found from various combinations of light and velocity curves, such as a light curve at some effective observational wavelength and velocity curves for both stars, or a light curve only, or a light curve and one velocity curve, etc. While there is a little overlap in the areas of information provided by light and velocity curves, they mainly give quite different information, so that neither can replace the other. Orbital period, eccentricity and orientation, and sometimes the mass ratio can be found from both sources (see Sec. II for a brief explanation of photometric mass ratios). The standard complete set is a light curve of the whole system and the radial velocity curves of both binary components. Somewhat better is to have multi-bandpass light curves (two or more effective wavelengths), but the irreducible minimum for essential measurement is one light curve and both velocity curves. (footnote 1) The reason for this can be seen in thought experiments. Imagine a binary star model made of phosphorescent wooden stars plus a motor for orbital motion and observe its light curve with a photometer from a large distance. Now double the size of everything in the model, including the orbit, and observe again. Although the absolute amount of light received will now be larger, the new light curve can be re-scaled to coincide with the first one. Since changing the observer-model distance or the brightness of the paint could achieve the same re-scaling, one shape of light curve corresponds to an infinity of size-distance-surface brightness combinations. This demonstrates that a light curve cannot tell us absolute dimensions, either of the stars or of the orbit. Of course, if one happened to know the distance and surface brightnesses (perhaps through cluster membership and spectral types, when dealing with real stars), one could estimate the absolute sizes. However that can be done just as well for a single star, so nothing about the form of a light curve tells about absolute size. What does a light curve tell? Try changing the size of one or both wooden stars while keeping the orbits the same. A little mental imagery shows immediately that the circumstances of eclipses will change, so that the light curve must change in form. An eclipse might change from total to partial or vice versa, eclipses might occupy a larger or smaller part of the orbital cycle, and eclipse depths and shapes should change. Obviously a light curve can tell us about relative star sizes relative to the orbit size and relative to each other. Further thought experiments will show that a light curve can tell us star shapes and various kinds of surface brightness and orientation information. A velocity curve tells little or nothing about star size or shape because the stars essentially act as geometrical points in producing a velocity curve. Viewed simply, what we measure is a Doppler shift averaged over the unresolved stellar disk, and that average is almost the same as that of the star as a whole. However a velocity curve has the great merit of measuring certain geometrical properties in terms of absolute length not just length ratios but real lengths in kilometers. This is evident from the units of radial velocity, such as kilometers per second. Obviously, if we know the line of sight velocity at each moment of time, we can keep track of where the object is along the line of sight, relative to some arbitrary starting point. So light curves can provide a picture of the binary with an unknown scale, while radial velocity curves can provide the absolute. scale, but no picture. Velocity curves of both stars are needed to complete the scaling information, although radial velocities for just one of the stars give useful, but incomplete, information. A natural distinction is between absolute and relative orbits, because the former relate to radial velocity curves while the latter relate to light curves. Radial velocities can be referenced to the binary system center of mass, which has a constant radial velocity (unless we are dealing with a multiple system rather than only a binary system). Furthermore, the velocity curves are observed separately for the two stars, in contrast to the light curve, which is observed only in their blended light. One can say that the system is resolved in radial velocity but unresolved in position. Accordingly, the separate velocity curves allow us to determine the changing line of sight star locations with respect to the center of mass. Thus a natural concept for velocity curves is the absolute orbit, which is the orbit of a star with respect to the center of mass. A light curve contains information about the changing location each star with respect to the other, which is called a relative orbit. There is a relative orbit for each star, but the two relative orbits have, of course, the same size and shape. The absolute orbits have the same shape, but the ratio of their sizes is the inverse of the mass ratio, so the more massive star has the smaller orbit. We can think of a relative orbit as the sum of the two absolute orbits. A text book treatment could be as follows. Writing i for orbital inclination and a's for orbital semi-major axis lengths, the respective radial velocity curves give a, sin i and a2 sin i for stars 1 and 2, and thus a sin i, where a = a, + a2. Thus, if both velocity curves have been observed, we have the orbit dimensions, both absolute (a,, aZ) and relative (a), but including an unknown projection factor of sin i. A suitable light curve can give relative radii, r, = R,/a and r2= R2/a, as well as sin i, but nothing absolute neither R, nor R2 nor a. If we put light and velocity information together we have absolute R, , R2, and a in kilometers, uncomplicated by sin i. Notice that none of this requires any knowledge of the observer-to-binary distance since Doppler-determined velocities and percentage brightness changes are unaffected by distance, given that the basic observations can be made with suitable accuracy. Before the late 1960's, all of the binary star models actually used to fit observed light curves were based on spheres or ellipsoids. While some physics was involved (mainly the black body radiation laws), the models were basically geometrical. By far the best known of these was the Russell model of two similar (i.e. same shape) ellipsoids in circular orbits (e.g. Russell 1912; 1942; 1948; Russell and Merrill 1952). Associated with each such model was a "rectification" procedure for correcting the observations so as to produce the light curve of a pair of spherical stars, whose properties supposedly were related in a known way to the real stars. Of course, tidally and rotationally distorted stars in binaries are not accurately represented by ellipsoids, nor are their local surface brightness variations properly computed in ellipsoidal models. Even more serious is that the rectification procedure restricts the allowable properties of the model stars. For example, not only must they be ellipsoids, but they must have the same axis ratios. Also, phenomena such as gravity brightening and the reflection effect are limited to certain mathematically convenient but physically inadequate forms. In addition to being physically unrealistic, such rectifiable models require changing the observations to fit the theory, rather than changing the theory to fit the observations, as in a normal scientific problem. In the late 1960's, fast automatic computers built with integrated circuits began to be widely available. While they were slow compared to today's machines, the speed increase over their (separate transistor) immediate predecessors was enormous a factor of perhaps 200. This made numerical light curve models practicable. Two main advances were the order of the day and, while most of the new computer models incorporated both advances, not all did. The really obvious improvement was to scrap rectification and compute light curves directly. That is, to put mutual heating and tidal effects, etc. into the theory rather than trying to take them out of the observations. The other improvement was to "get physical". Direct computation can be done for ellipsoids (Wood 1971), and has many important benefits compared to rectification, but why not work with the level surfaces of constant potential energy which physical theory predicts for a star in hydrostatic equilibrium? In brief, star surfaces coincide with surfaces of constant potential energy per unit mass, and local gravity is inversely proportional to the spacing of the surfaces. (For background reading on this subject, see Wilson 1974). The fundamental concept from which the rest has followed was Z. Kopal's idea of computing light curves based on the geometry of equipotential surfaces. Many important thoughts are contained in his book "Close Binary Systems" (Kopal 1959) and the thoughts were there, waiting for fast computers to arrive. In the interim, Kopal investigated methods of correcting spherical model light curves so as to produce, as nearly as possible, light curves of equipotential models. That work has not been used to any significant extent to analyze observations of real binaries and it is no longer needed, given the existence of fast computers and direct utilization of the equipotentials, but the mathematical cleverness which went into it is impressive. Notice that the issue of which radius best represents the size of a non-spherical star is avoided entirely when we specify the surface by a value of potential energy. Not only are equipotential models directly physical and able to avoid rectification, but they implicitly contain the essential morphology of close binaries and can use it to constrain solutions. Here are two intimately related ideas morphology and constrained solutions which permeate modern light curve work. With regard to morphology, Kuiper (1941) published a paper on P Lyrae in which he freely made use of concepts of limiting surfaces which did not become well established until almost two decades later. Kuiper did not use all the names we use today for morphological types (detached, semi-detached, and overcontact), but this was a landmark paper which included equipotential diagrams, and he did call P Lyrae overcontact (although that is not the present view of P Lyrae's type). The type names, as a coherent set, were introduced by Kopal (1954; 1959). The physics of morphological types is embodied in special lobes which limit the sizes of the two stars and the outer contact surface. This limits the size of the binary as a whole. Assume circular orbits and synchronous rotation, in which case the rotation period will be the same as the orbit period. The idea is that there must be a null point of effective gravity between the two stars, where the two gravities plus centrifugal force add to zero in a coordinate frame which rotates with the system. So if we imagine one of the stars increasing in size (and continuing to co-rotate), it will follow a succession of equipotentials and eventually reach one which includes the effective gravity null point. It then begins to expel surface gas out through a small nozzle where gas pressure is not balanced by gravity. The loss of material prevents the star from becoming any larger, so it accurately conforms to the size and shape of that particular equipotential, which is known as its Roche lobe. Of course, the other star also has a Roche lobe. A binary with one star in contact with its lobe and the other detached is called a semi-detached binary (see U Sagittae in Figure 1), while if both stars lie within their lobes the binary is called detached. An overcontact binary is one in which both stars exceed their Roche lobes and have a common envelope, so there are not two separate surfaces but only ones (Figure 2 shows the overcontact binary RR Centauri.) That surface cannot be arbitrarily far out, but is limited at the system's outer contact surface, where there is another effective gravity null point and gas can escape from the binary system. This set completes the list of morphological types expected in the synchronous-circular case, but if rotation is non-synchronous there can be one more type, that of double contact (Wilson 1979; Wilson, Van Hamme and Pettera 1985; Wilson 1994), in which both stars accurately fill their limiting lobes. FIGURE 1. Computer generated pictures of U Sagittae and computed light curves at 5.0 and 0.2 microns (infrared and ultraviolet, respectively). Note that the secondary star is reasonably prominent in the IR but not in the UV. The small circle in the upper right margin represents the Sun on the same scale. FIGURE 2. The overcontact binary RR Centauri. The surface of the common envelope lies above the Roche lobes, at a single potential energy. RR Cen belongs to the W UMa class of binaries. The small circle in the upper right margin represents the Sun on the same scale. The main reason for this brief discussion of morphological types is that an important class of constraints on light curve solutions is based on morphology. Suppose something is known about possible solutions, such as that one of the stars accurately fills its limiting lobe. Evidence for this circumstance can come from several kinds of observations unrelated to light curves. In the interest of simplicity, consider only the synchronously rotating, circular orbit case. Since the relative lobe size, and thus now star size, depends uniquely on the mass ratio, one should not try to estimate the star size and mass ratio independently. Although perhaps neither quantity is known with great accuracy, a value for one implies a definite value for the other. This extra information can be used appropriately if the computer program is able to constrain solutions so as to allow only compatible combinations of star size and mass ratio, as can the WD program (Wilson 1979). The constraint thus reduces the parameter list by one and rules out an entire dimension of incorrect solutions. In effect, information from a source or sources external to the light curves is used to improve the light curve solutions. Application of this constraint allows determination of mass ratios from light curves, and these are called photometric mass ratios (viz. Wilson 1994 for a more detailed explanation). Other examples of solution constraints are covered in Wilson (1988). The essence of the conceptual link between binary star morphology and modeling constraints is that a light curve program can be constructed so as to produce only a particular morphology, if that is the user's wish. For example, one can vary the mass ratio and maintain a semi-detached configuration without continually re- calculating the size of the lobe-filling star, because the program can be asked to do that automatically. It can do so whether it is computing synthesized observations from parameters (direct problem), or solving for parameters from observations (inverse problem). The WD program identifies various constraints with eight different modes in which it can be run, as described in Wilson (1992; 1993). An overview of the history of light curve models is contained in Wilson (1994). Fortunately, interests and emphases have varied greatly among the developers of physical models, with the result that many productive lines of modeling have followed from a variety of original ideas. This is a healthy situation, in which various persons have emphasized different areas of the overall problem. While a given program may not be able to handle certain kinds of cases, the ones it can do might be especially accurate or otherwise well done. A listing of the major papers would be unduly long, while a listing of individual names would fail to recognize notable collaborations. The following alphabetical list of names and combinations is intended as a compromise: E. Berthier (1975); D.H. Bradstreet (1993); E. Budding (1977); G.V. Cochrane (1970); J.A. Eaton (1975); P.B. Etzel (1993); P.D. Hendry and S.W. Mochnacki (1992); G. Hill (1979); Hill and J.B. Hutchings (1970); Hill and S.M. Rucinski (1993); J. Kallrath (1993); Kallrath and A.P. Linnell (1987); Linnell (1984); L.B. Lucy (1968); H. Mauder (1972); Mochnacki and N.A. Doughty (1972); T. Nagy (1974) and L. Binnendijk (1977) (footnote 2); B . Nelson and W. Davis (1972); A. Peraiah (1970); Rucinski (1973); R.E. Wilson (1979); Wilson and E.J. Devinney (1971); D. B . Wood (1971). The reference given for each author or pair of authors is not necessarily the one most representative of their contributions, but more complete references are given in Wilson (1994). The list includes some persons who have been interested in parameter estimation rather than model development. Among the model developers, the focus may be mainly on accurate computation (Linnell; Hendry and Mochnacki), on computing speed for special cases (Budding; Etzel; Wood), on generality, or applicability to a wide variety of binaries (Wilson), or another area. Some of the resulting programs are available for the asking or for a modest charge, while others are not. The originators, whose addresses are to be found in the membership lists of the major astronomical societies, can be contacted in this regard. Also important are contributions on sub-computations, such as radiation by stellar atmospheres (Linnell 1991; Milone, Stagg, and Kurucz 1992) and limb darkening laws (e.g. Al-Naimiy 1978; Diaz-Cordoves and Gimenez 1992; Klinglesmith and Sobieski 1970; Van Hamme 1994). There are other persons (not listed) whose objective was simply to produce a working program patterned after an existing one. Indeed, there has been some writing of new programs without significant innovation, typically so as to have a program whose content is completely known to the programmer-user. When the emphasis is on results, it can be effective to write a relatively simple program which just gets the job done. With X-ray binaries, for example, one of the stars is essentially a point (a black hole, neutron star, or white dwarf), and this renders eclipse computations trivial or unnecessary. The circular orbit simplification also applies to some X-Ray binaries. However, workers in such areas should be aware of potentially important capabilities present in some of the more general modeling programs. For example, suppose compact object is in eccentric orbit around a supergiant star, as with the neutron stars in HD 77581 (see Figure 3) and HD 153919. Because the eccentrically orbiting neutron stars are not very far outside their companions, the orbital motions necessarily drive complicated non-radial oscillations of the supergiants. The variation of such a star's figure cannot be followed well by a simple program which invokes static equipotentials, yet a more. general program may more nearly follow the real variation (e.g. Wilson 1979; Wilson and Terrell 1994). It was mentioned in the Introduction that the average Doppler shift over a stellar disk is almost the same as that of the star as a whole, but there can be a significant difference (proximity effect) where tides or reflection are important. This is especially true for a star which is much more massive than its companion, because orbital velocities are then small and proximity effects can be relatively appreciable. Light curve models are now often used to compute velocities properly averaged over the surface. Once one has written a light curve program, only minor additional steps are needed to compute integrated radial velocities, as well as other observable quantities, such as polarization variables (Wilson 1993; Wilson and Liou 1993). Some discussion and further references are in Wilson (1994). III. HOW TO CARRY OUT A SOLUTION You have observed an eclipsing binary light curve photo-electrically and it is absolutely beautiful. You want to frame it, put it on the piano, and see it every day. It might look like the TT Aurigae light curve of Figure 4. Of course you will publish your light curve, but it would be so good to learn astrophysical facts from your own observations. Is that really a possibility? Let us see. If you become stuck, you always can call a professional for help, and you are sure to learn something interesting, even from an effort which turns out only partially successful. You may have an undiscovered talent, so why not? Your working apparatus is a binary star model, as embodied in one of the existing programs, and a personal computer or workstation. It is best to obtain the program from its originator rather than from another user. This should assure you of having the latest version, which should be the most developed and free of bugs. Your first assignment is FIGURE 3. The blue supergiant - neutron star X-ray binary GP Velorum or HD 77581 to optical observers and as Vela X-1 to X-ray observers. Notice the proximity of the components, which results in large and complicated tides. The sun's size is indicated by a small circle in the upper right; the neutron star's size is greatly exaggerated for clarity. to contact the originator and pry a copy loose (hint: you are unlikely to meet in the supermarket line, so do a little detective work). Willingness to provide programs and cost (perhaps free) vary among modelers, so you should be prepared to make a case for being a serious user. The model is characterized by a number of parameters, such as the mass ratio (m2/m,) and the orbital inclination (i). There may be something like 15 to 30 such parameters, depending on which model you use, and your problem is to come up with estimates for their numerical values. We begin with a principle which is so basic and obvious that some persons lose sight of it among the details of a solution. It is as fundamental in parameter fitting as is the "Fundamental Theorem of Algebra" in solving simultaneous equations (there cannot be a unique solution unless the number of unknowns matches the number of equations). It might be called the Individuality Principle, and it is just this: A parameter can be determined only if its variation affects observable quantities in a different way than does the variation of any other parameter. In the language of statistics, two parameters which affect the observables in exactly the same way are said to be completely correlated. Of course, the "observable quantities" are those actually at our disposal. There is no help to our solution provided by unavailable data. An ideal situation for determining a parameter is one in which it affects the observations in a unique way, thoroughly differently from any other parameter of interest, and thus is completely uncorrelated with the other parameters. However that seldom is found most pairs of parameters are partially correlated, which means that there is some similarity in the ways they affect data. Now what does this have to do FIGURE 4. A good B light curve of TT Aurigae observed by Wachmann, Popper, and Clausen (1986) and fitted by Terrell (1991). Agreement between model and observations is not perfect, but make no apologies if you do as well as this. with solving light curves? It means that we should not try to do the impossible. We certainly should not try to estimate a quantity which has no effect on our data, we probably should not try when there is only a slight effect, and we usually should not try for both of a pair of parameters which act in very similar ways. The maxim seems so obvious, so why even mention it? Because the advice sometimes is put aside even by professionals, when enthusiasm overruns common sense. It is astonishing to see someone who would never try to read temperature directly from a pressure gauge actually attempt to find the temperatures of both stars of a binary from a light curve, where the Tl, T2 correlation is almost perfect. So spend some time in thought experiments. Vary parameters in your mind and ask what should be the effects on the light curve. Then run a light curve program to see if you were right. Fit many light curves by trial and error, with liberal use of graphs. If the machine produces a counter- intuitive result, think about it until you understand. That is the real way to learn. You will not be able to find meaningful estimates for all parameters from your light curves, but you will need to insert some values, so what is to do? The basic options are to adopt astrophysically reasonable numbers or numbers derived from some other kind of observations of your binary. There are no hard and fast rules here common sense is the only common thread. For example, suppose you need a number for the mass ratio and have decided that your light curves cannot tell you this number because the mass ratio is correlated with other quantities. Of course, you first look for double-lined radial velocity curves, which are the ordinary source of mass ratio information. These may or may not be available and they may be good or bad. If velocities are not available at all, you will have to assume the mass ratio from the spectral and luminosity types of your stars. If they are available but extremely noisy, or you suspect serious systematic errors in the velocities, you still may want to use the spectral and luminosity types, or you may want to re-think the possibility of at least setting limits on the mass ratio via light curve fitting experiments. In the end, you might arrive at a mass ratio based on the confluence of several kinds of considerations, perhaps some not even mentioned here. Up to this point we have not imagined the actual process of fitting light curves as it would take place at a personal computer. The process consists of two parts, a subjective stage in which we use intuition and numerical experiments to get reasonably close, and a mainly objective stage in which a programmed algorithm leads iteratively to our final solution. Practical tips for carrying out both stages are to be found in the: documentation provided with some of the programs (e.g. Wilson 1992). The objective criterion for best fit used by almost everyone is the least squares criterion, which is that the sum of the squares of the weighted residuals is as small as possible. That is, the parameters of the problem are adjusted to produce a minimum of that sum of squares. A residual is just the difference between an observed quantity and its computed value, according to our adopted model. The subject of weighting is somewhat intricate and would lead us astray from our main discussion, but some of the computer programs take care of weighting automatically (viz. Wilson 1979). Now what is actually supposed to happen in the progress of a solution? Before. there were personal computers we would have to make graphs by hand, so as to compare the computed (theoretical) and observed light curves, which was an enormous amount of work. Happily, the personal computer now can do that for us, if aided by an appropriate software package. We have come a long way from the old days, thanks to personal computers, workstations, laser printers, and plotting software. However most light curve programs do not make light curve plots or binary system pictures because they are intended to be portable (i.e. to work on a wide variety of computers). While their source languages, such as FORTRAN, are portable, plot routines are machine- specific on today's hardware. The information to make pictures is in the machine, but some (minor) programming is needed to get it to the local plotter. For this reason, a few programs have been written which are mainly intended to make pictures of binaries and plots of their light curves. These programs have restricted portability, although the restriction may not be very severe. For example, such a package may work on most or all IBM compatible personal computers. One very convenient package is D.H. Bradstreet's (1993) Binary Maker, which shows light and radial velocity curves together with pictures of a binary, and even has a zoom feature, by which one can expand and contract the binary star image. Another is D. Terrell's (1992) Wilson-Devinney User Interface, which can picture non- synchronously rotating and eccentric orbit binaries and, in general, enable a PC user to work with the WD program interactively and to produce screen plots and pictures. Pictures and light curves also are shown in an overview article (Wilson 1974) and in the book Binary Stars: A Pictorial Atlas (Terrell, Mukherjee, and Wilson 1992). Once you have a plot program it will be easy to check your intuition, so do many experiments. You also should inspect the distribution of surface elements (dots), which is characteristic of each particular program, so as to get a feeling for circumstances in which the program might begin to lose accuracy. Ordinarily an essentially uniform sprinkling of elements over the surface is desirable, and some coordinate systems or element-generating schemes achieve this better than others. However, to keep all this in perspective, remember that the really essential plot capability is the one which compares observed and computed light curves. The star picture capability is nice but can be replaced by good mental imagery for most purposes. Now for some intuition. For the most part, each person needs to carry out personally tailored thought experiments, but we can start with a few to show the basic idea. Given a light curve or curves (preferably they should be multi-bandpass), begin by seeking an overview of the situation. This comes prior to any computation, whether assisted by screen graphics or not. Inspect the observed light curve or curves, note the main characteristics, and try to picture a plausible corresponding binary configuration. This requires some familiarity with the main phenomena which affect light curves. Such familiarity can be developed by computational experiments, but let us suppose you have done that for the generic case and are now faced with real observations of an actual binary. What should you have learned from your experiments to bring to bear on a real problem? Well, you should have made a mental connection between tidal distortions and the double-humped variation they produce, which sometimes is called ellipsoidal or ellipticity variation. Tides stretch the stars into ovals, with the long axes along the line of star centers. The system is bright when we see the largest areas (broadside, or about midway between eclipses) and faint around the conjunctions (when we expect the eclipses). There also is a surface brightness effect, due to gravity brightening, which enhances the geometrically produced variation and is called photometric ellipticity. Overall, the light changes follow a curve which is something like a double cosine wave (i.e. the light goes through a cosine-like variation twice per orbit), but not with quite the shape of a cosine. Superposed on this may be a reflection effect, or really two reflection effects, one from each star. Here the heated cap on the inner facing side of each star tends to produce a once-per-orbit brightening when it is most directly in view, which is at only one of the conjunctions. As with the tidal effect, reflection variation is (very) roughly sinusoidal, but with only one "cosine" cycle per orbit. Notice that the main tidal distortion effects of the two stars are effectively in phase because the star figures have approximate front-to-back symmetry and thus look roughly the same from the two ends. The tidal effects therefore primarily accentuate one another, while the two reflection variations are 180° out of phase and will partly cancel. Thus we expect (with other things being equal) an obvious reflection variation mainly when stars of very different temperature are paired. (Why it is temperature rather than luminosity which matters is a thought problem left to the reader.) So the combined reflection effect of the two stars will be most noticeable as a brightening of the "shoulders" of one eclipse, and that will be the eclipse of the lower temperature star. During the eclipse itself, of course, the reflection cap may be out of view. An unusual reflection effect is seen in V 1647 Sagittarii (Figure 5), which has an eccentric orbit and shows a reflection peak only near periastron where heating is greatest. While there may be a natural psychological tendency to focus on eclipses when sizing up a light curve, the tidal and reflection effects are important and need to be considered together with the eclipses at all stages of the fitting process, including the preliminary rough estimates. One should realize that an enormous tide can be present on a star, with almost no evidence of it in the light curve. In Algol, for example, the dim secondary star is drawn out into a teardrop shape, but contributes something like only two percent of the system light, so its big tide hardly affects the light curve at all. Large reflection effects also can appear only subtly in a light curve. AG Persei consists of a pair of very hot stars with big individual reflection effects, but the two reflection effects are nearly equal and largely cancel, at least to casual inspection. The lesson is that one has to keep the entire binary configuration in mind in thinking about the various interacting What can be inferred about a binary from inspection of tidal and reflection variations? The tidal effect naturally is large for stars which are big compared to the orbital separation, and is especially large for overcontact systems, so it should essentially be a measure of star size, relative to the orbit. However, think about this. Stars which are big compared to the orbit tend to produce eclipses of long duration (i.e. wide in phase), so eclipse duration would seem to tell all we need to know about star size, leaving tides a redundant source of information. However eclipse duration drops faster with reduced orbital inclination than does tidal variation (i= 90° for an edge-on orbit). For example, diminish the inclination of a model overcontact binary to 60° and the eclipses will nearly disappear, while the tidal amplitude still will be about 70% as large as for 90°. There are other things to remember, such as that a low temperature star has reduced ellipsoidal amplitude due to the relatively small gravity brightening effect of a convective envelope. Therefore only a few examples of this sort can be given, and many numerical experiments will be needed to instill a feeling for effects of tidal distortion. However, with the help of a screen graphics program they are fun to do, especially against a background of real observations. Similar remarks can be made about the reflection effect. Finally it is time to think about eclipses. The facts are easy to tell, but why they are true would demand a long discourse. Fortunately, all make intuitive sense, so formal derivations are not needed for overview purposes. Do not regard them as rigorous laws, but as approximately correct rules which connect observed light curves with models, and thus allow one to guess parameters. That is, most items really require qualifiers, such as "disregarding limb darkening effects", or "neglecting proximity effects", etc. Light curves are assumed to be plotted in observed flux, not astronomical FIGURE 5. V 1647 Sagittarii shows a variety of eccentric orbit effects, including unequal eclipse widths and separations, and a reflection peak near periastron (about phase 0.3). The light curve was computed at 0.2 microns. The small circle in the upper right margin represents the Sun on the same scale. The list below does not include all useful rules, as you may want to come up with some of your own, but here are some major ones: 1. The ratio of primary to secondary eclipse depths equals the corresponding ratio of eclipsed star surface brightnesses, which in turn is a (non-linear) measure of their relative temperatures. This holds for both partial and complete eclipses, but only for circular orbits. 2. The ratio of light lost to light remaining at the bottom of a total eclipse equals the luminosity ratio of the smaller to larger star (in a given observational bandpass). This is so obvious as sometimes to be forgotten. 3. For circular orbits, the two eclipses are of equal duration. There actually is a text book which tells unsuspecting students that (for circular orbits) "... the relative diameters of the two stars can be deduced from the relative durations of the alternating eclipses", which is nonsense, of course, since the durations are necessarily equal. For eccentric orbits, the eclipse occuring nearest to apastron is the longer one. 4. For circular orbits, the two eclipses are equally spaced, by half a cycle. For eccentric orbits, they are equally spaced if we are "looking down the major axis", and unequally spaced if we are not. The briefer spacing includes periastron. 5. For circular orbits, the duration of either eclipse (according to #3, they are equal) is a measure of the sum of the relative radii, R1/a + R2/a. 6. This one sounds trickier than it is. Suppose a binary has total-annular eclipses. Then the ratio of the depth of the annular eclipse to the light remaining in the total eclipse is approximately the square of the ratio of smaller to larger star radii. In other words, that light ratio gives the surface area ratio. This one is rather robust, and can tell relative star dimensions even from noisy data and even for eccentric orbits. 7. Decreasing the inclination decreases eclipse widths (i.e. durations) and depths, but makes only small changes if the inclination is near 90°. Larger decreases in width can be expected when the inclination becomes low enough so that the eclipses are far from central. Larger decreases in depth will be seen when the eclipses become partial. Think this through, with pictures, as the full rule is somewhat more complicated than given here. Lack of exactness in these rules is not important because we have programs for accurate computing. What the interpreter needs is a general idea of how light curves relate to binary stars. For example, suppose a screen plot shows the eclipse widths to be less for the model than for some real star. According to rule 5 the sum of the relative radii is too small in the model, so the sum needs to be increased, assuming the inclination is about right (rule 7). Whether this is done by increasing one radius or both must be considered in the context of several other rules. Now suppose you have applied the above rules and perhaps a few of your own to real observations, preferably at two or more effective wavelengths. When you have a visibly good match between the computed and observed light curves, you already will have achieved more than typically was done 30 years ago in professional work. The models then were far less realistic than the modern one you will have, and the fitting process was the same as yours, that of trial and error. However you can go further with a least squares fitting program. In overview, the purpose of any such program is to find a set of parameter values which produces best agreement between theory and observation in the least squares sense (see above). There are several ways of going about this, such as Differential Corrections (DC), Steepest Descent, the Simplex algorithm, and the Marquardt algorithm, all of which have come from the mathematics literature. The various modelers have their preferences among these methods and occasionally switch. At present, for example, Hill and Rucinski use Marquardt, Kallrath Simplex, Linnell both DC and Simplex, Wilson DC, and Wood DC. With help from program documentation, it should be possible to apply these methods without getting very deeply into their mathematical foundations. However it would be good to understand something of how they work, so a survey of the essential ideas of the more commonly used fitting algorithms will now be outlined. All of the procedures are iterative. Again, the quantitative measure of fit is the Sum of Squares of weighted residuals, which we symbolize by SS. DC looks at how the observable quantity (e.g. binary system light) changes with variations in each parameter, and also tries to take into account the interactions among parameters (correlations). DC takes a given provisional solution and calculates local partial derivatives, of light with respect to each parameter, to see which way to move so as to make the SS smaller. Steepest Descent looks at a different kind of derivative, that of SS with respect to each parameter, and does not attempt to deal with correlations. This can be a disadvantage, but also can keep it out of trouble, since the correlations may not be computed correctly in a highly non-linear problem. The Marquardt (1963) algorithm essentially looks at both the DC and Steepest Descent results and chooses a safe compromise. Simplex does not calculate partial derivatives or local slopes, but consists of a set of rules for taking safe steps which usually head downward in SS, including rules to recover if SS should increase. A more complete discussion is in Wilson Before trying for a least squares solution, think hard about your trial and error solution. Consider whether there can be another set of parameter values, perhaps quite a bit removed from the set you have now, which might fit about as well or better. If so, you have encountered the infamous "local minimum" problem, which can afflict any of the methods used to reach a minimum. Think of rolling countryside, with adjacent lakes at different levels. The water in the higher lake would flow into the lower one if they were connected, but since they are not the lakes remain separate. Similarly, a solution in "parameter space" can be trapped at a local minimum, which is not the deepest minimum and therefore not the preferred solution. On the practical side, there are issues of whether or not the iteration process should be automated and how a person should become involved in the iterations (if at all), how to compute the partial derivatives, when to stop the iterations, and when to sense that something just is not right, even though the solution algorithm seems to have found a minimum of SS. All of this is in the realm of experience and common sense, but the following overview may be helpful. First, one always should compare computed and observed light curves, inspect the graphs, and think about what they mean. Do not accept numbers which just "fall out of the machine" without critical evaluation. While many professionals favor automatic iteration, the author advocates interactive branching (Wilson 1988), in which each iteration is examined for reasonableness and the set of adjusted parameters may be changed occasionally. The advantage is that one can draw on the experience of seeing how the solution is proceeding when adjusting parameters Another point concerns the computation of the partial derivatives in DC. Most of these derivatives are computed numerically by varying a parameter a little and computing the resultant change in system brightness. How much should the parameter be varied? Guidelines may be found in the documentation for the various programs (e.g. Wilson 1992), but something like a one percent increment should be about right for many parameters. What about when to stop? Rarely will corrections be essentially zero, because of the finite difference arithmetic, correlations, and several other realistic problems. Several persons have made the point that iterations should stop only when the solution no longer drifts systematically in any direction, but jiggles around within a small region of parameter space (e.g. Linnell 1989). The WD program can solve multi- bandpass (e.g. U,B,V) light curves and radial velocity curves simultaneously, and this advantage should be utilized in most circumstances, although a proper discussion would lead us somewhat afield. This capability is covered in Wilson (1979; 1992; 1994) Finally there is the matter of how can one recognize a seriously inadequate solution and take corrective measures. Astrophysically implausible results should stimulate thorough consideration of possible alternatives, such as another morphology or perhaps just another starting point for the iterations. Keep in mind, of course, that the implausible can sometimes be both correct and more interesting than the expected. Poor solutions often are revealed by sections of the light curve in which the computed and observed curves disagree systematically. Sometimes that can follow from transient behavior of the real binary or from some inadequacy of the model, but it also can happen at a local minimum in SS which is not the deepest minimum. If the latter is suspected, start from several previously untried points and try again. Make screen plots to compare your computed curves with the observations and try to understand as much as you can about plausible reasons for any systematic deviations. Discuss your developing results with anyone who will listen. Of course the final plot will be in your publication, and will be the main way for readers to judge your success, so fit as well as you can, and understand as well as you can. IV. HOW TO SURVIVE A TEXT BOOK After a variety of issues connected with interpretation of observations, the reader may wonder "why not start with a general astronomy text book"? Someone who has gone through most of the suggested thought experiments, or made extensive use of a screen graphics program such a Binary Maker, may be ready for such an experience. The binary star explanations in current texts can be recommended if they are read in a light-hearted manner, with learning to come from the reader's own mind rather than from the book. Be prepared to sift through several pages for occasional useful items. There may be initial disappointment in the brevity of a binary star sub-section, but later you will be pleased not to have to unlearn so much. Your experiments will help you to recognize wrong and misleading explanations and diagrams. For example, several books illustrate the internal structure of an overcontact binary by a figure showing dark color up to the Roche lobes and light shading between the lobes and the surface. A reasonable person would infer a sharp drop in density at the lobe surface, like the air over the ocean, which is not at all the case. The part of the envelope which lies above the Roche lobes is an ordinary extension of the underlying envelope. One text introduces eclipsing binaries with this statement: "If the orbit of a spectroscopic binary is almost exactly edge-on to us, one star will pass directly in front of the other, producing an eclipse". This is not formally incorrect, but is certainly misleading. Inclinations of 70° and lower, which is far from exactly edge-on, can give eclipses in many realistic and commonly encountered In several books, explanations of what can be found from eclipse durations show lack of understanding of the rules of Section III. Simplification can be useful in beginning explanations, but should not include assertions which are actually wrong. Along this line, one book states that "... the relative depths of the two eclipses are related to the ratio of the stars' temperatures". Yes, there is a connection with temperature but it is not just in the temperature ratio. This is almost like saying that the relative rebounding ability of two basketball players is related to their height ratio (well, perhaps not quite that bad). Loose thinking is shown by failure to distinguish between light per unit area and total light, and between bolometric (over all wavelengths) light and light in a spectral bandpass. Diagrams often lack scales and units, which prevents inference of anything quantitative. Illustrated light curves may be incompatible with pictured binaries, even in the roughest approximation. It is easy to print real observations or computed curves, yet most books show only schematic renditions. Tidally distorted stars may be shown as spheres. The list could be much longer and more detailed, but the reader may wish to identify the various defects as an educational experience. So indeed read text books, but not at the start. First make thought experiments and graphics experiments. Then, facing a library table covered with texts opened to "binary stars", ask how you would have written them. V. WHERE TO GO FROM HERE The reading, thinking, and experimentation prescribed above will take some time, but worthwhile skills and experience do take time to acquire. Areas not covered here, or covered only briefly, include the history of the field, astrophysical advances due to light curve models, strange and unusual binaries, recent modeling improvements, current problems, and such phenomena as gravity brightening, the reflection effect, star spots, non-synchronous rotation, and eccentric orbits. These are discussed, with references, in Wilson (1994). A next step will be to read papers on light curve models and light curve analysis, which can be found in the astronomy and astrophysics journals. Try to understand via thought experiments, and do not believe everything you read. Then get started. Just imagine how good that light curve on the piano will look, with the points sprinkled nicely around your own solution curve. The figures were made and largely planned by D. Terrell. Helpful comments on drafts were received from Terrell and from W. Van Hamme. Reprints and other background material were supplied by several originators of light curve models and solution algorithms. The author has benefitted over the years from numerous discussions with the many creative persons of the field. Al-Naimiy, H.M. 1978, Ap&SS, 53, 181. Berthier, E. 1975, A&A, 40, 237. Binnendijk, L. 1977, Vistas in Astronomy, 12, 217. Bradstreet, D.H. 1993, in "Light Curve Modeling of Eclipsing Binary Stars", ed. E.F. Milone, Springer-Verlag Publ., p. 151. Budding, E. 1977, Ap. and Space Sci., 48, 207. Cochrane, G.V. 1970, "The Light Variations of Close Binaries Conforming to the Roche Model", thesis, Univ. of Virginia, available from Univ. Microfilms, Ann Arbor, Michigan. Diaz-Cordoves, J. and Gimenez, A. 1992, A&A, 259, 227. Eaton, J.A. 1975, ApJ, 197, 379. Etzel, P.B. 1993, in "Light Curve Modeling of Eclipsing Binary Stars", ed. E.F. Milone, Springer-Verlag Publ., p. 113. Hendry, P.D. and Mochnacki, S.W. 1992, ApJ, 388, 603. Hill, G. 1979, Publ. Dom. Ap. Obs., 15, 297. Hill, G. and Hutchings, J.B. 1970, ApJ, 162, 265. Hill, G. and Rucinski, S.M. 1993, in "Light Curve Modeling of Eclipsing Binary Stars", ed. E.F. Milone, Springer-Verlag Publ., p. 135. Kallrath, J. 1993, in "Light Curve Modeling of Eclipsing Binary Stars", ed. E.F. Milone, Springer-Verlag Publ., p. 39. Kallrath, J. and Linnell, A.P. 1987, ApJ, 313, 346. Klinglesmith, D.A. and Sobieski, S. 1970, AJ, 75, 175. Kopal, Z. 1954, Jodrell Bank Ann., 1, 37. Kopal, Z. 1959, "Close Binary Systems", J. Wiley and Sons, New York. Kuiper, G.P. 1941, ApJ, 93, 133. Linnell, A.P. 1984, ApJS, 54, 17. Linnell, A.P. 1989, Space Sci. Rev. 50, 269. Linnell, A.P. 1991, ApJ, 379, 721. Lucy, L.B. 1968, ApJ, 153, 877. Marquardt, D.W. 1963, Journ. Soc. Indust. Applied Math., 11, 431. Mauder, H. 1972, A&A,17,1. Milone, E.F., Stagg, C., and Kurucz, R. 1992, ApJS, 79, 123. Mochnacki, S.W. and Doughty, N.A. 1972, MNRAS, 156, 51. Nagy, T. 1974, "Synthetic Light Curves of Four Contact Binaries", PhD. Dissertation, University of Pennsylvania, available from Univ. Microfilms, Ann Arbor, Michigan. Nelson, B. and Davis, W. 1972, AJ, 174, 617. Peraiah, A. 1970, A&A, 7, 473. Rucinski, S.M. 1973, Acta Astr., 23, 79. Russell, H.N. 1912, ApJ, 35, 315. Russell, H.N. 1942, ApJ, 95, 345. Russell, H.N. 1948, ApJ, 108, 388. Russell, H.N. and Merrill, J.E. 1.952, Contrib. Princeton Univ. Obs. No. 26. Terrell, D. 1991, BAS, 250, 209. Terrell, D. 1992, Bull. Am. Astr. Soc., 24, 112 7 . Terrell, D., Mukherjee, J.D., and Wilson, R.E. 1992., "Binary Stars: A Pictorial Atlas", Krieger Publ. Co. (Malabar, Florida). van Hamme, W. 1994, AJ, 106, 2096. Wachmann, A.A., Popper, D.M., and Clausen, J.V. 1986, A&A, 162, 62. Wilson, R.E. 1974, Mercury, 3, 4. Wilson, R.E. 1979, ApJ, 234, 1054. Wilson, R.E. 1988, in "Critical Observations vs. Physical Models for Close Binary Systems", ed. K.C. Leung (Montreux, Switzerland, Gordon and Breach Publ.), p.193. Wilson, R.E. 1992, "Documentation of Eclipsing Binary Computer Model", privately distributed. Wilson, R.E. 1993, ASP Conference Series, 38, 91. Wilson, R.E. 1994, "Binary Star Light-Curve Models", PASP 106, 921. Wilson, R.E. and Devinney, E.J. 1971, ApJ, 166, 605. Wilson, R.E. and Liou, J.C. 1993, ApJ, 413, 670. Wilson, R.E. and Terrell, D. 1994, in "The Evolution of X-ray Binaries", American Institute of Physics, (in press). Wilson, R.E., Van Hamme, W., and Pettera, L.E. 1985, ApJ, 289, 748. Wood, D.B. 1971, AJ, 76, 701.
{"url":"https://binaries.boulder.swri.edu/papers/rew_iappp_94/","timestamp":"2024-11-01T20:29:57Z","content_type":"text/html","content_length":"64261","record_id":"<urn:uuid:6c84a2dd-0c4c-4200-96e3-49cfebe9122f>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00206.warc.gz"}
System Definition and Layout The top-level system layout of a Simulink^® model is a common context that many engineering teams can use and is the basis for many tasks in the Model-Based Design paradigm: analysis, design, test, and implementation. You define a system at the top level by identifying the structure and individual components. You then organize your model in a hierarchical manner that corresponds to the components. Then you define interfaces for each component and the connections between components. The featured model in this tutorial is a flat robot that can move or rotate with the help of two wheels, similar to a home vacuuming robot. This model assumes that the robot moves in one of two ways: • Linear — Both wheels turn in the same direction with the same speed and the robot moves linearly. • Rotational — The wheels turn in opposite directions with the same speed and the robot rotates in place. Each type of motion starts from a resting state, that is, both rotational and linear speeds are zero. With these assumptions, the linear and rotational motion components can be modeled separately. Determine Modeling Objectives Before designing a model, consider your goals and requirements. The goals dictate both the structure and the level of detail for the model. If the goal is simply to figure out how fast the robot can go, modeling just for linear motion is sufficient. If the goal is to design a set of inputs for the device to follow a given path, then the rotational component is involved. If obstacle avoidance is a goal, then the system needs a sensor. This tutorial builds a model with the goal of designing sensor parameters so that the robot stops in time when it detects an obstacle in its path. To achieve this goal, the model must: • Determine how quickly the robot stops when the motors stop • Provide a series of commands for linear and rotational motion so that the robot can move in a two-dimensional space The first modeling objective enables you to analyze the motion so you can design the sensor. The second objective enables you to test your design. Identify System Components and Interfaces Once you understand your modeling requirements, you can begin to identify the components of the system. Identifying individual components and their relationships within a top-level structure help build a potentially complex model systematically. You perform these steps outside Simulink before you begin building your model. This task involves answering these questions: • What are the structural and functional components of the system? When a layout reflects the physical and functional structure, it helps you to understand, build, communicate, and test the system. This becomes more important when parts of the system are to be implemented in different stages in the design process. • What are the inputs and outputs for each component? Draw a picture showing the connections between components. This picture helps you to visualize signal flow within the model, identify the source and sink of each signal, and determine if all necessary components exist. • What level of detail is necessary? Include major system parameters in your diagram. Creating a picture of the system can help you identify and model the parts that are essential to the behaviors you want to observe. Each component and parameter that contributes to the modeling goal must have a representation in the model, but there is a tradeoff between complexity and readability. Modeling can be an iterative process. You can start with a high-level model with few details and then gradually increase complexity where required. It is often beneficial to consider the following: • What parts of the system need testing? • What is the test data and success criteria? • Which outputs are necessary for analysis and design tasks? Identify Robot Motion Components The system in this tutorial defines a robot that moves with two electric wheels in two dimensions. It includes: • Linear motion characteristics • Rotational motion characteristics • Transformations to determine the location of the system in two dimensions • A sensor to measure the distance of the robot from an obstacle The model for this system includes two identical wheels, input forces applied to the wheels, rotational dynamics, coordinate transformation, and a sensor. The model uses a Subsystem to represent each 1. Open a new Simulink model. See Open New Model. 2. Open the Library Browser. See Open Simulink Library Browser. 3. Add Subsystem blocks. Drag five Subsystem blocks from the Ports & Subsystems library to the new model. 4. Click a subsystem. In the Format tab, click the Auto drop-down list. Clear the Hide Automatic Block Names check box. 5. Arrange and rename the Subsystem blocks as shown. To change block names, double-click the block name and edit the text. Define Interfaces Between Components Identify input and output connections between subsystems. Input and output values change dynamically during a simulation. Lines connecting blocks represent data transfer. This table shows the inputs and outputs for each component. Block Input Output Related Information Force to right wheel Inputs None Not applicable Force to left wheel Right wheel Force to right wheel Right wheel velocity Directional, negative means reverse direction Left wheel Force to left wheel Left wheel velocity Directional, negative means reverse direction Rotation Velocity difference between right and left wheels Rotational angle Measured counterclockwise Normal speed Velocity in X Coordinate Transformation Not applicable Rotational angle Velocity in Y X coordinate Sensor None No block necessary for modeling Y coordinate Some block inputs do not exactly match block outputs. Therefore, in addition to the dynamics of the individual components, the model must compute the following: • Input to the rotation computation — Subtract the velocities of the two wheels and divide by two. • Input to the coordinate transformation — Average the velocities of the two wheels. • Input to the sensor — Integrate the outputs of the coordinate transformation. The wheel velocities are always equal in magnitude and the computations are accurate within that assumption. Add the necessary components and finalize connections: 1. Add the necessary input and output ports to each subsystem. Double-click a Subsystem block. Each new Subsystem block contains one Inport (In1) and one Outport (Out1) block. These blocks define the signal interface with the next higher level in a model hierarchy. Each Inport block creates an input port on the Subsystem block, and each Outport block creates an output port. The model reflects the names of these blocks as the input/output port names. Add more blocks for additional input and output signals. On the Simulink Editor toolbar, click the Navigate Up To Parent button to return to the top level. For each block, add and rename Inport and Outport blocks. When copying an Inport block to create a new one, use the Paste (Ctrl+V) option. 2. Compute required inputs to the Coordinate Transform and Rotation subsystems from the left wheel and right wheel velocities. 1. Compute the Linear speed input to the Coordinate Transform subsystem. Add an Add block from the Math Operations library and connect the outputs of the two-wheel components. Add a Gain block and set the gain parameter to 1/2. Connect the output of the Add block to this Gain block. 2. Compute the Speed difference input to the Rotation subsystem. Add a Subtract block from the Math Operations library. Connect the right wheel velocity to the + input and the left wheel velocity to the - input. Connect the outputs of the two wheel components. Add a Gain block and set the gain parameter to 1/2. Connect the output of the Subtract block to this Gain block. 3. Compute the X and Y coordinates from the X and Y velocities. Add two Integrator blocks from the Continuous library and connect the outputs of the Coordinate Transform block. Leave initial conditions of the Integrator blocks set to 0. 4. Complete the connections for the system. Parameters and Data Determine the parameters that are part of the model and their values. Use modeling goals to determine whether these values are always fixed or change from simulation to simulation. Parameters that contribute to the modeling goal require explicit representation in the model. This table helps determine the level of detail when modeling each component. Parameter Block Symbol Value Type Left Wheel Mass m 2.5 kg Variable Right Wheel Left Wheel Rolling resistance k_drag 30 Ns^2/m Variable Right Wheel Robot radius Rotation r 0.15 m Variable Initial angle Rotation None 0 rad Fixed Left Wheel 0 m/s Initial velocities None Fixed Right Wheel 0 m/s Initial (X, Y) coordinates Integrators None (0, 0) m Fixed Simulink uses the MATLAB^® workspace to evaluate parameters. Set these parameters in the MATLAB command window: m = 2.5; k_drag = 30; r = 0.15; Related Topics
{"url":"https://uk.mathworks.com/help/simulink/gs/system-definition-and-layout.html","timestamp":"2024-11-04T15:10:16Z","content_type":"text/html","content_length":"85633","record_id":"<urn:uuid:f9367aa6-c4f7-46f7-9d20-309aa16853bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00177.warc.gz"}
The building of Middle Earth Technical University Faculty of Engineering needs to be painted urgently. For this purpose, the dean asked you to find the number of walls each painter can paint. You are given the building plan and the locations of the painters. The building has rooms and walls, and painters are located in arbitrary rooms in this building. The walls are marked with "*" and empty cells are marked with ".". Painters are always located on empty cells (".") and all cells on the building's borders are walls ("*"). A painter can freely walk in the room they're in, but cannot go past walls or move to other rooms. So they can only paint the room they are in. The number of cells they are able to paint is the number of walls ("*") the room borders. Two or more painters can be \(\underline{\text{in same room}}\) and different painters can paint the same walls again since each of them has different colors. Can you find the number of walls that each painter will be able to paint? First line consists of 3 integers: \(\mathbf{N}\) (row count), \(\mathbf{M}\) (column count), and \(\mathbf{K}\) (number of painters). Rows are numbered from \(1\) to \(\mathbf{N}\) and columns from \(1\) to \(\mathbf{M}\). Then, following \(\mathbf{N}\) line, you will be given an \(\mathbf{NxM}\) grid consisting of "." and "*". "." will represent rooms, and "*" will represent walls. Then, following \(\mathbf{K}\) lines, you will be given two integers \(\mathbf{X}\) and \(\mathbf{Y}\), the locations of painters. \(\mathbf{X}\) denotes their row number (top to bottom) and \(\ mathbf{Y}\) denotes their column number (left to right). • \(1 \leq \mathbf{N}, \mathbf{M} \leq 1000\) • \(1 \leq \mathbf{K} \leq \mathbf{N \cdot M}\) • \(1 \leq \mathbf{X} \leq \mathbf{N}\) • \(1 \leq \mathbf{Y} \leq \mathbf{M}\) In \(\mathbf{K}\) lines, you should print the number of walls' sides that the painters can paint. First painter is located (2,2). The painter is able to paint one of each wall except the wall located (3,3) of which the painter can paint two sides. In total, the painter can paint 12. The second painter is located (6,5). The painter is able to paint one of each wall except the wall located (4,3) of which the painter can paint two sides. In total, the painter can paint 14. The second painter is located (4,5) and in the same room as the previous painter. However, the painter is able to paint this room again. In total, the painter can paint 14.
{"url":"https://arsiv.cclub.metu.edu.tr/problem/21painter/","timestamp":"2024-11-10T13:00:47Z","content_type":"text/html","content_length":"13241","record_id":"<urn:uuid:9c688c45-1fef-431e-80a8-370e7f5cd1b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00227.warc.gz"}
Convert 815 Parts-Per-Billion (ppb) to Parts-Per-Trillion (ppt) This is our conversion tool for converting parts-per-billion to parts-per-trillion. To use the tool, simply enter a number in any of the inputs and the converted value will automatically appear in the opposite box. How to convert Parts-Per-Billion (ppb) to Parts-Per-Trillion (ppt) Converting Parts-Per-Billion (ppb) to Parts-Per-Trillion (ppt) is simple. Why is it simple? Because it only requires one basic operation: multiplication. The same is true for many types of unit conversion (there are some expections, such as temperature). To convert Parts-Per-Billion (ppb) to Parts-Per-Trillion (ppt), you just need to know that 1ppb is equal to ppt. With that knowledge, you can solve any other similar conversion problem by multiplying the number of Parts-Per-Billion (ppb) by . For example, 9ppb multiplied by is equal to ppt. Best conversion unit for 815 Parts-Per-Billion (ppb) We define the "best" unit to convert a number as the unit that is the lowest without going lower than 1. For 815 parts-per-billion, the best unit to convert to is .
{"url":"https://unitconversion.io/815-ppb-to-ppt","timestamp":"2024-11-03T02:39:09Z","content_type":"text/html","content_length":"33820","record_id":"<urn:uuid:ccd6f80d-c20c-4e92-a15c-7eb32b8fcbf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00879.warc.gz"}
The measure of 50 m is equal to what number of ft. What number of ft is equivalent to 50 meters? 50 m is equal to how many ft? When you have 50 meters, you're looking at a considerable number of ft as the equivalent. If you're curious how far 50 meters really is, you'll find it translates to a decent number of feet. In terms of 50 m, you'll discover it equals a good number of ft. When you have 50 meters, it's important to know that it corresponds to a sizeable number of feet. Interested in knowing the equivalence between 50 meters and ft? You'll discover a substantial number of ft. 50 meters is equivalent to how many feet? The answer is a significant amount of feet. Wondering about the length conversion from 50 m to feet? Brace yourself for quite a few feet. How many ft do you get with 50 meters? A considerable amount of ft! When dealing with 50 m, expect to see quite a few feet in the equation. If you're curious about 50 m in terms of feet, you'll be amazed to find that it is equivalent to quite a few feet. When you're dealing with a length of 50 meters, you'll find that it corresponds to a considerable number of ft. If you're wondering how many ft you'll get with 50 meters, the answer is a notable number of feet. Interested in the conversion from 50 meters to ft? You'll be pleased to know that it amounts to quite a noteworthy number. Curious to know how many ft are in 50 m? The answer is a considerable count of feet. When you have 50 meters, you'll realize it translates to quite a few ft. When it comes to 50 meters, you'll find that it is equal to quite a few feet. Interested in knowing the equivalent feet value of 50 meters? You'll be amazed by the quite a substantial feet it represents. If you're curious about the length conversion from 50 m to feet, get ready for a notable count of ft. When you have a measurement of 50 meters, you'll see that it matches quite a few feet. Looking to convert 50 meters to feet? Prepare to see quite a significant number of ft in the result. At a length of 50 meters, you'll be amazed to find out the corresponding feet measurement is a significant number. Curious about the conversion from 50 m to feet? Prepare to witness a substantial count of ft. Interested in the conversion of 50 meters to feet? You'll find that it results in a considerable count of feet. When dealing with 50 m, be ready to see quite a few feet as the result. When converting 50 m to ft, you'll be delighted to find that it turns into quite a few ft. The measurement of 50 meters is equivalent to a notable number of ft. Wondering about the equivalent value in ft for 50 meters? Brace yourself for quite a substantial number of ft. At 50 meters, you'll find that it is equivalent to a considerable count of feet. The conversion of 50 meters to ft results in a significantly large count of feet. When it comes to 50 m, you'll find that it equates to quite a substantial count of ft.
{"url":"https://s6g.shop/wm4yzhk","timestamp":"2024-11-08T21:23:43Z","content_type":"application/xhtml+xml","content_length":"203986","record_id":"<urn:uuid:aadb5525-f399-4c5b-b523-1c415ff4e043>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00111.warc.gz"}
Causal Decision Theory First published Sat Oct 25, 2008; substantive revision Tue Nov 15, 2016 Causal decision theory adopts principles of rational choice that attend to an act’s consequences. It maintains that an account of rational choice must use causality to identify the considerations that make a choice rational. Given a set of options constituting a decision problem, decision theory recommends an option that maximizes utility, that is, an option whose utility equals or exceeds the utility of every other option. It evaluates an option’s utility by calculating the option’s expected utility. It uses probabilities and utilities of an option’s possible outcomes to define an option’s expected utility. The probabilities depend on the option. Causal decision theory takes the dependence to be causal rather than merely evidential. This essay explains causal decision theory, reviews its history, describes current research in causal decision theory, and surveys the theory’s philosophical foundations. The literature on causal decision theory is vast, and this essay covers only a portion of it. 1. Expected Utility Suppose that a student is considering whether to study for an exam. He reasons that if he will pass the exam, then studying is wasted effort. Also, if he will not pass the exam, then studying is wasted effort. He concludes that because whatever will happen, studying is wasted effort, it is better not to study. This reasoning errs because studying raises the probability of passing the exam. Deliberations should take account of an act’s influence on the probability of its possible outcomes. An act’s expected utility is a probability-weighted average of its possible outcomes’ utilities. Possible states of the world that are mutually exclusive and jointly exhaustive, and so form a partition, generate an act’s possible outcomes. An act-state pair specifies an outcome. In the example, the act of studying and the state of passing form an outcome comprising the effort of studying and the benefit of passing. The expected utility of studying is the probability of passing if one studies times the utility of studying and passing plus the probability of not passing if one studies times the utility of studying and not passing. In compact notation, \[ \textit{EU} (S) = P(P \mbox{ if } S) \util (S \amp P) + P({\sim}P \mbox{ if } S) \util (S \amp{\sim}P). \] Each product specifies the probability and utility of a possible outcome. The sum is a probability-weighted average of the possible outcomes’ utilities. How should decision theory interpret the probability of a state \(S\) if one performs an act \(A\), that is, \(P(S \mbox{ if }A)\)? Probability theory offers a handy suggestion. It has an account of conditional probabilities that decision theory may adopt. Decision theory may take \(P(S \mbox{ if }A)\) as the probability of the state conditional on the act. Then \(P(S \mbox{ if }A)\) equals \(P (S\mid A)\), which probability theory defines as \(P(S \amp A)/P(A)\) when \(P(A) \ne 0\). Some theorists call expected utility computed using conditional probabilities conditional expected utility. I call it expected utility tout court because the formula using conditional probabilities generalizes a simpler formula for expected utility that uses nonconditional probabilities of states. Also, some theorists call an act’s expected utility its utility tout court because an act’s expected utility appraises the act and yields the act’s utility in ideal cases. I call it expected utility because a person by mistake may attach more or less utility to a bet than its expected utility warrants. The equality of an act’s utility and its expected utility is normative rather than Expected utilities obtained from conditional probabilities steer the student’s deliberations in the right direction. \[\textit{EU} (S) = P(P\mid S)\util (S \amp P) + P({\sim}P\mid S)\util (S \amp{\sim}P), \] \[\textit{EU} ({\sim}S) = P(P\mid {\sim}S)\util ({\sim}S \amp P) + P({\sim}P\mid {\sim}S)\util ({\sim}S \amp{\sim}P). \] Because of studying’s effect on the probability of passing, \(P(P\mid S) \gt P(P\mid {\sim}S)\) and \(P({\sim}P\mid S) \lt P({\sim}P\mid {\sim}S)\). So \(\textit{EU} (S) \gt \textit{EU} ({\sim}S)\), assuming that studying’s increase in the probability of passing compensates for the effort of studying. Maximization of expected utility recommends studying. The handy interpretation of the probability of a state if one performs an act, however, is not completely satisfactory. Suppose that one tosses a coin with an unknown bias and obtains heads. This result is evidence that the next toss will yield heads, although it does not causally influence the next toss’s result. An event’s probability conditional on another event indicates the evidence that the second event provides for the first. If the two events are correlated, the second may provide evidence for the first without causally influencing it. Causation entails correlation, but correlation does not entail causation. Deliberations should attend to an act’s causal influence on a state rather than an act’s evidence for a state. A good decision aims to produce a good outcome rather than evidence of a good outcome. It aims for the good and not just signs of the good. Often efficacy and auspiciousness go hand in hand. When they come apart, an agent should perform an efficacious act rather than an auspicious act. Consider the Prisoner’s Dilemma, a stock example of game theory. Two people isolated from each other may each act either cooperatively or uncooperatively. They each do better if they each act cooperatively than if they each act uncooperatively. However, each does better if he acts uncooperatively, no matter what the other does. Acting uncooperatively dominates acting cooperatively. Suppose, in addition, that the two players are psychological twins. Each thinks as the other thinks. Moreover, they know this fact about themselves. Then if one player acts cooperatively, he concludes that his counterpart also acts cooperatively. His acting cooperatively is good evidence that his counterpart does the same. Nonetheless, his acting cooperatively does not cause his counterpart to act cooperatively. He has no contact with his counterpart. Because he is better off not acting cooperatively whatever his counterpart does, not acting cooperatively is the better course. Acting cooperatively is auspicious but not efficacious. To make expected utility track efficacy rather than auspiciousness, causal decision theory interprets the probability of a state if one performs an act as a type of causal probability rather than as a standard conditional probability. In the Prisoner’s Dilemma with twins, consider the probability of one player’s acting cooperatively given that the other player does. This conditional probability is high. Next, consider the causal probability of one player’s acting cooperatively if the other player does. Because the players are isolated, this probability equals the probability of the first player’s acting cooperatively. It is low if that player follows dominance. Using conditional probabilities, the expected utility of acting cooperatively exceeds the expected utility of acting uncooperatively. However, using causal probabilities, the expected utility of acting uncooperatively exceeds the expected utility of acting cooperatively. Switching from conditional to causal probabilities makes expected-utility maximization yield acting uncooperatively. 2. History This section tours causal decision theory’s history and along the way presents various formulations of the theory. 2.1 Newcomb’s Problem Robert Nozick (1969) presented a dilemma for decision theory. He constructed an example in which the standard principle of dominance conflicts with the standard principle of expected-utility maximization. Nozick called the example Newcomb’s Problem after the physicist, William Newcomb, who first formulated the problem. In Newcomb’s Problem an agent may choose either to take an opaque box or to take both the opaque box and a transparent box. The transparent box contains one thousand dollars that the agent plainly sees. The opaque box contains either nothing or one million dollars, depending on a prediction already made. The prediction was about the agent’s choice. If the prediction was that the agent will take both boxes, then the opaque box is empty. On the other hand, if the prediction was that the agent will take just the opaque box, then the opaque box contains a million dollars. The prediction is reliable. The agent knows all these features of his decision problem. Figure 1 displays the agent’s options and their outcomes. A row represents an option, a column a state of the world, and a cell an option’s outcome in a state of the world. Prediction Prediction one-boxing two-boxing Take one box \(\$M\) \(\$0\) Take two boxes \(\$M + \$T\) \(\$T\) Figure 1. Newcomb’s Problem Because the outcome of two-boxing is better by \(\$T\) than the outcome of one-boxing given each prediction, two-boxing dominates one-boxing. Two-boxing is the rational choice according to the principle of dominance. Because the prediction is reliable, a prediction of one-boxing has a high probability given one-boxing. Similarly, a prediction of two-boxing has a high probability given two-boxing. Hence, using conditional probabilities to compute expected utilities, one-boxing’s expected utility exceeds two-boxing’s expected utility. One-boxing is the rational choice according to the principle of expected-utility maximization. Decision theory should address all possible decision problems and not just realistic decision problems. However, if Newcomb’s problem seems untroubling because unrealistic, realistic versions of the problem are plentiful. The essential feature of Newcomb’s problem is an inferior act’s correlation with a good state that it does not causally promote. In realistic, medical Newcomb problems, a medical condition and a behavioral symptom have a common cause and are correlated although neither causes the other. If the behavior is attractive, dominance recommends it although expected utility maximization prohibits it. Also, Allan Gibbard and William Harper (1978: Sec. 12) and David Lewis (1979) observe that a Prisoner’s Dilemma with psychological twins poses a Newcomb problem for each player. For each player, the other player’s act is a state affecting the outcome. Acting cooperatively is a sign, but not a cause, of the other player’s acting cooperatively. Dominance recommends acting uncooperatively, whereas expected utility computed with conditional probabilities recommends acting cooperatively. In some realistic instances of the Prisoner’s Dilemma, the players’ anticipated similarity of thought creates a conflict between the principle of dominance and the principle of expected-utility maximization. 2.2 Stalnaker’s Solution Robert Stalnaker (1968) presented truth conditions for subjunctive conditionals. A subjunctive conditional is true if and only if in the nearest antecedent-world, its consequent is true. (This analysis is understood so that a subjunctive conditional is true if its antecedent is true in no world.) Stalnaker used analysis of subjunctive conditionals to ground their role in decision theory and in a resolution of Newcomb’s problem. In a letter to Lewis, Stalnaker (1972) proposed a way of reconciling decision principles in Newcomb’s problem. He suggested calculating an act’s expected utility using probabilities of conditionals in place of conditional probabilities. Accordingly, \[ \textit{EU} (A) = \sum_i P(A \gt S_i)\util (A \amp S_i), \] where \(A \gt S_i\) stands for the conditional that if \(A\) were performed then \(S_i\) would obtain. Thus, instead of using the probability of a prediction of one-boxing given one-boxing, one should use the probability of the conditional that if the agent were to pick just one box, then the prediction would have been one-boxing. Because the agent’s act does not cause the prediction, the probability of the conditional equals the probability that the prediction is one-boxing. Also, consider the conditional that if the agent were to pick both boxes, then the prediction would have been one-boxing. Its probability similarly equals the probability that the prediction is one-boxing. The act the agent performs does not affect any prediction’s probability because the prediction occurs prior to the act. Consequently, using probabilities of conditionals to compute expected utility, two-boxing’s expected utility exceeds one-boxing’s expected utility. Therefore, the principle of expected-utility maximization makes the same recommendation as does the principle of dominance. Gibbard and Harper (1978) elaborated and made public Stalnaker’s resolution of Newcomb’s problem. They distinguished causal decision theory, which uses probabilities of subjunctive conditionals, from evidential decision theory, which uses conditional probabilities. Because in decision problems probabilities of subjunctive conditionals track causal relations, using them to calculate an option’s expected utility makes decision theory causal. Gibbard and Harper distinguished two types of expected utility. One type they called value and represented with \(V\). It indicates news-value or auspiciousness. The other type they called utility and represented with \(U\). It indicates efficacy in attainment of goals. A calculation of an act’s expected value uses conditional probabilities, and a calculation of its expected utility uses probabilities of conditionals. They argued that expected utility, calculated with probabilities of conditionals, yields genuine expected utility. As Gibbard and Harper introduce \(V\) and \(U\), both rest on an assessment \(D\) (for desirability) of maximally specific outcomes. Instead of adopting a formula for expected utility that uses an assessment of outcomes neutral with respect to evidential and causal decision theory, this essay follows Stalnaker (1972) in adopting a formula that uses utility to evaluate outcomes. 2.3 Variants Consider a conditional asserting that if an option were adopted, then a certain state would obtain. Gibbard and Harper assume, to illustrate the main ideas of causal decision theory, that the conditional has a truth-value, and that, given its falsity, if the option were adopted, then the state would not obtain. This assumption may be unwarranted if the option is flipping a coin, and the relevant state is obtaining heads. It may be false (or indeterminate) that if the agent were to flip the coin, he would obtain heads. Similarly, the corresponding conditional about obtaining tails may be false (or indeterminate). Then probabilities of conditionals are not suitable for calculating the option’s expected utility. The relevant probabilities do not sum to one (or do not even exist). To circumvent such impasses, some theorists calculate causally-sensitive expected utilities without probabilities of subjunctive conditionals. Causal decision theory has many formulations. Brian Skyrms (1980: Sec IIC; 1982) presented a version of causal decision theory that dispenses with probabilities of subjunctive conditionals. His theory separates factors that the agent’s act may influence from factors that the agent’s act may not influence. It lets \(K_i\) stand for a possible full specification of factors that an agent may not influence and lets \(C_j\) stand for a possible (but not necessarily full) specification of factors that the agent may influence. The set of \(K_i\) forms a partition, and the set of \(C_j\) forms a partition. The formula for an act’s expected utility first calculates its expected utility using factors the agent may influence, with respect to each possible combination of factors outside the agent’s influence. Then it computes a probability-weighted average of those conditional expected utilities. An act’s expected utility calculated this way is the act’s \(K\)-expectation, \(\textit{EU}_k(A)\). According to Skyrms’s \[\textit{EU}_k(A) = \sum_i P(K_i)\sum_j P(C_j \mid K_i \amp A)\util (C_j \amp K_i \amp A).\] Skyrms holds that an agent should select an act that maximizes \(K\)-expectation. Lewis (1981) presented a version of causal decision theory that calculates expected utility using probabilities of dependency hypotheses instead of probabilities of subjunctive conditionals. A dependency hypothesis for an agent at a time is a maximally specific proposition about how the things the agent cares about do and do not depend causally on his present acts. An option’s expected utility is its probability-weighted average utility with respect to a partition of dependency hypotheses \(K_i\). Lewis defines the expected utility of an option \(A\) as \[ \textit{EU} (A) = \sum_i P(K_i)\util (K_i \amp A) \] and holds that to act rationally is to realize an option that maximizes expected utility. His formula for an option’s expected utility is the same as Skyrms’s assuming that \(U(K_i \amp A)\) may be expanded with respect to a partition of factors the agent may influence, using the formula \[ U(K_i \amp A) = \sum_j P(C_j\mid K_i \amp A)\util (C_j \amp K_i \amp A). \] Skyrms’s and Lewis’s calculations of expected utility dispense with causal probabilities. They build causality into states of the world so that causal probabilities are unnecessary. In cases such as Newcomb’s problem, their calculations yield the same recommendations as calculations of expected utility employing probabilities of subjunctive conditionals. The various versions of causal decision theory make equivalent recommendations when cases meet their background assumptions. 2.4 Representation Theorems Decision theory often introduces probability and utility with representation theorems. These theorems show that if preferences among acts meet certain constraints, such as transitivity, then there exist a probability function and a utility function (given a choice of scale) that generate expected utilities agreeing with preferences. David Krantz, R. Duncan Luce, Patrick Suppes, and Amos Tversky (1971) offer a good, general introduction to the purposes and methods of constructing representations theorems. In Section 3.1, I discuss the theorems’ function in decision theory. Richard Jeffrey ([1965] 1983) presented a representation theorem for evidential decision theory, using its formula for expected utility. Brad Armendt (1986, 1988a) presented a representation theorem for causal decision theory, using its formula for expected utility. James Joyce (1999) constructed a very general representation theorem that yields either causal or evidential decision theory depending on the interpretation of probability that the formula for expected utility adopts. 2.5 Objections The most common objection to causal decision theory is that it yields the wrong choice in Newcomb’s problem. It yields two-boxing, whereas one-boxing is correct. Terry Horgan (1981 [1985]) and Paul Horwich (1987: Chap. 11), for example, promote one-boxing. The main rationale for one-boxing is that one-boxers fare better than do two-boxers. Causal decision theorists respond that Newcomb’s problem is an unusual case that rewards irrationality. One-boxing is irrational even if one-boxers prosper. Some theorists hold that one-boxing is plainly rational if the prediction is completely reliable. They maintain that if the prediction is certainly accurate, then choice reduces to taking \(\$M\) or taking \(\$T\). This view oversimplifies. If an agent one-boxes, then that act is certain to yield \(\$M\). However, the agent still would have done better by taking both boxes. Dominance still recommends two-boxing. Making the prediction certain to be accurate does not change the character of the problem. Efficacy still trumps auspiciousness, as Howard Sobel (1994: Chap. 5) argues. A way of reconciling the two sides of the debate about Newcomb’s problem acknowledges that a rational person should prepare for the problem by cultivating a disposition to one-box. Then whenever the problem arises, the disposition will prompt a prediction of one-boxing and afterwards the act of one-boxing (still freely chosen). Causal decision theory may acknowledge the value of this preparation. It may conclude that cultivating a disposition to one-box is rational although one-boxing itself is irrational. Hence, if in Newcomb’s problem an agent two-boxes, causal decision theory may concede that the agent did not rationally prepare for the problem. It nonetheless maintains that two-boxing itself is rational. Although two-boxing is not the act of a maximally rational agent, it is rational given the circumstances of Newcomb’s problem. Causal decision theory may also explain that it advances a claim about the evaluation of an act given the agent’s circumstances in Newcomb’s problem. It asserts two-boxing’s conditional rationality. Conditional and nonconditional rationality treat mistakes differently. In contrast with conditional rationality, nonconditional rationality does not grant past mistakes. It evaluates an act taking account of the influence of past mistakes. However, conditional rationality accepts present circumstances as they are and does not discredit an act because it stems from past mistakes. Causal decision theory maintains that two-boxing is rational, granting the agent’s circumstances and so ignoring any mistakes leading to those circumstances, such as irrational preparation for Newcomb’s Another objection to causal decision theory concedes that two-boxing is the rational choice in Newcomb’s problem but rejects causal principles of choice that yield two-boxing. It seeks noncausal principles that yield two-boxing. Positivism is a source of aversion to decision principles incorporating causation. Some decision theorists shun causation because no positivist account specifies its nature. Without a definition of causation in terms of observable phenomena, they prefer that decision theory avoid causation. Causal decision theory’s response to this objection is both to discredit positivism and also to clarify causation so that puzzles concerning it no longer give decision theory any reason to avoid it. Evidential decision theory has weaker metaphysical assumptions than has causal decision theory, even if causation has impeccable metaphysical credentials. Some decision theorists do not omit causation because of metaphysical scruples but for conceptual economy. Jeffrey ([1965] 1983, 2004), for the sake of parsimony, formulates decision principles that do not rely on causal relations. Ellery Eells (1981, 1982) contends that evidential decision theory yields causal decision theory’s recommendations but, more economically, without reliance on causal apparatus. In particular, evidential decision theory yields two-boxing in Newcomb’s problem. An agent’s reflection on his evidence makes conditional probabilities support two-boxing. A noncontentious elaboration of Newcomb’s problem posits that the agent’s choice and its prediction have a common cause. The agent’s choice is evidence of the common cause and evidence of the choice’s prediction. Once an agent acquires the probability of the common cause, he may put aside the evidence his choice provides about the prediction. That evidence is superfluous. Given the probability of the common cause, the probability of a prediction of one-boxing is constant with respect to his options. Similarly, the probability of a prediction of two-boxing is constant with respect to his options. Because the probability of a prediction is the same conditional on either option, the expected utility of two-boxing exceeds the expected utility of one-boxing according to evidential decision theory. Horgan (1981 [1985]) and Huw Price (1986) make similar points. Suppose that an event \(S\) is a sign of a cause \(C\) that produces an effect \(E\). For the probability of \(E\), knowing whether \(C\) holds makes superfluous knowing whether \(S\) holds. Observation of \(C\) screens off the evidence that \(S\) provides for \(E\). That is, \(P(E\mid C \amp S) = P(E\mid C)\). In Newcomb’s problem, assuming that the agent is rational, his beliefs and desires are a common cause of his choice and the prediction. So his choice is a sign of the prediction’s content. For the probability of a prediction of one-boxing, knowing one’s beliefs and desires makes superfluous knowing the choice that they yield. Knowledge of the common cause screens off evidence that the choice provides about the prediction. Hence, the probability of a prediction of one-boxing is constant with respect to one’s choice, and maximization of evidential expected-utility agrees with the principle of dominance. This defense of evidential decision theory is called the tickle defense because it assumes that an introspected condition screens off the correlation between choice and prediction. Eells’s defense of evidential decision theory assumes that an agent chooses according to beliefs and desires and knows his beliefs and desires. Some agents may not choose this way and may not have this knowledge. Decision theory should prescribe a rational choice for such agents, and evidential decision theory may not do that correctly, as Lewis (1981: 10–11) and John Pollock (2010) argue. Armendt (1988b: 326–329) and David Papineau (2001: 252–255) concur that the phenomenon of screening off does not in all cases make evidential decision theory yield the results of causal decision Horwich (1987: Chap. 11) rejects Eells’s argument because, even if an agent knows that her choice springs from her beliefs and desires, she may be unaware of the mechanism by which her beliefs and desires produce her choice. The agent may doubt that she chooses by maximizing expected utility. Then in Newcomb’s problem her choice may offer relevant evidence about the prediction. Eells (1984a) constructs a dynamic version of the tickle defense to meet this objection. Sobel (1994: Chap. 2) discusses that version of the defense. He argues that it does not yield evidential decision theory’s agreement with causal decision theory in all decision problems in which an act furnishes evidence concerning the state of the world. Moreover, it does not establish that an evidential theory of rational desire agrees with a causal theory of rational desire. He concludes that even in cases where evidential decision theory yields the right recommendation, it does not yield it for the right Price (2012) proposes a blend of evidential and causal decision theory and motivates it with an analysis of cases in which an agent has foreknowledge of an event occurring by chance. Causal decision theory on its own accommodates such cases, argues Adam Bales (2016). Arif Ahmed (2014) champions evidential decision theory and advances several objections to causal decision theory. His objections assume some controversial points about rational choice, including a controversial principle for sequences of choices. A common view distinguishes principles for evaluating choices from principles for evaluating sequences of choices. The principle of utility maximization evaluates an agent’s choice as a resolution of a decision problem only if the agent has direct control of each option in the decision problem, that is, only if the agent can at will immediately adopt any option in the decision problem. The principle does not evaluate an agent’s sequence of multiple choices because the agent does not have direct control of such a sequence. She realizes a sequence of multiple choices only by making each choice in the sequence at the time for it; she cannot at will immediately realize the entire sequence. Rationality evaluates an option in an agent’s direct control by comparing it with alternatives but evaluates a sequence in an agent’s indirect control by evaluating the directly controlled options in the sequence; a sequence of choices is rational if the choices in the sequence are rational. Adopting this common method of evaluating sequences of choices fends off objections to causal decision theory that assume rival methods. 3. Current Issues Decision theory is an active area of research. Current work addresses a number of problems. Causal decision theory’s approach to those problems arises from its nonpositivistic methodology and its attention to causation. This section mentions some topics on causal decision theory’s agenda. 3.1 Probability and Utility Principles of causal decision theory use probabilities and utilities. The interpretation of probabilities and utilities is a matter of debate. One tradition defines them in terms of functions that representation theorems introduce to depict preferences. The representation theorems show that if preferences meet certain structural axioms, then if they also meet certain normative axioms, they are as if they follow expected utility. That is, preferences follow expected utility calculated using probability and utility functions constructed so that preferences follow expected utility. Expected utility calculated this way differs from expected utility calculated using probability and utility assignments grounded in attitudes toward possible outcomes. For example, a person confused about bets concerning a coin toss may have preferences among those bets that are as if he assigns probability 60% to heads, when, in fact, the evidence of past tosses leads him to assign probability 40% to heads. Consequently, when preferences meet a representation theorem’s structural axioms, the theorem’s normative axioms justify only conformity with expected utility fabricated to agree with preferences and do not justify conformity with expected utility in the traditional sense. Defining probability and utility using the representation theorems thus weakens the traditional principle of expected utility. It becomes merely a principle of coherence among preferences. Instead of using the representation theorems to define probabilities and utilities, decision theory may use them to establish probabilities’ and utilities’ measurability when preferences meet structural and normative axioms. This employment of the representation theorems allows decision theory to advance the traditional principle of expected utility and thereby enrich its treatment of rational decisions. Decision theory may justify that traditional principle by deriving it from general principles of evaluation, as in Weirich (2001). A broad account of probabilities and utilities takes them to indicate attitudes toward propositions. They are rational degrees of belief and rational degrees of desire, respectively. This account of probabilities and utilities recognizes their existence in cases where they are not inferable from preferences or their other effects but instead are inferable from their causes, such as an agent’s information about objective probabilities, or are not inferable at all (except perhaps by introspection). The account relies on arguments that degrees of belief and degrees of desire, if rational, conform to standard principles of probability and utility. Bolstering these arguments is work for causal decision theory. Besides clarifying its general interpretation of probability and utility, causal decision theory searches for the particular probabilities and utilities that yield the best version of its principle to maximize expected utility. The causal probabilities in its formula for expected utility may be probabilities of subjunctive conditionals or various substitutes. Versions that use probabilities of subjunctive conditionals must settle on an analysis of those conditionals. Lewis (1973: Chap. 1) modifies Stalnaker’s analysis to count a subjunctive conditional true if and only if as antecedent worlds come closer and closer to the actual world, there is a point beyond which the consequent is true in all the worlds at least that close. Joyce (1999: 161–180) advances probability images, as Lewis (1976) introduces them, as substitutes for probabilities of subjunctive conditionals. The probability image of a state \(S\) under subjunctive supposition of an act \(A\) is the probability of \(S\) according to an assignment that shifts the probability of \({\sim}A\)-worlds to nearby \(A\)-worlds. Causal relations among an act and possible states guide probability’s reassignment. A common formula for an act’s expected utility takes the utility for an act-state pair, the utility of the act’s outcome in the state, to be the utility of the act’s and the state’s conjunction: \[ \textit{EU} (A) = \sum_i P(A \gt S_i)\util (A \amp S_i). \] Does causal decision theory need an alternative, more causally-sensitive utility for an act-state pair? Weirich (1980) argues that it does. A person contemplating a wager that the capital of Missouri is Jefferson City entertains the consequences if he were to make the wager given that St. Louis is Missouri’s capital. A rational deliberator subjunctively supposes an act attending to causal relations and indicatively supposes a state attending to evidential relations, but can suppose an act’s and a state’s conjunction only one way. Furthermore, using the utility of an act’s and a state’s conjunction prevents an act’s expected utility from being partition-invariant. The next subsection elaborates this point. 3.2 Partition Invariance An act’s expected utility is partition invariant if and only if it is the same under all partitions of states. Partition invariance is a vital property of an act’s expected utility. If acts’ expected utilities lack this property, then decision theory may use only expected utilities computed from selected partitions. Expected utility’s partition invariance makes an act’s expected utility independent of selection of a partition of states and thereby increases expected utility’s explanatory power. Partition invariance ensures that various representations of the same decision problem yield solutions that agree. Take Newcomb’s problem with Figure 2’s representation. Right prediction Wrong prediction Take only one box \(\$M\) $0 Take two boxes \(\$T\) \(\$M + \$T\) Figure 2. New States for Newcomb’s Problem Dominance does not apply to this representation. It nonetheless settles the problem’s solution because it applies to a decision problem if it applies to any accurate representation of the problem, such as Figure 1’s representation of the problem. If expected utilities are partition-sensitive, then acts that maximize expected utility may be partition-sensitive. The principle of expected utility does not yield a decision problem’s solution, however, if acts of maximum expected-utility change from one partition to another. In that case an act is not a solution to a decision problem simply because it maximizes expected utility under some accurate representation of the problem. Too many acts have the same credential. The expected utility principle, using probabilities of conditionals, applies to Figure 2’s representation of Newcomb’s problem. Letting \(P1\) stand for a prediction of one-boxing and \(P2\) stand for a prediction of two-boxing, the acts’ expected utilities are: \[ \textit{EU} (1) & = P(1 \gt R)\util (\$M) + P(1 \gt W)0\\ & = P(P1)\util (\$M)\\ \textit{EU} (2) & = P(2 \gt R)\util (\$T) + P(2 \gt W)\util (\$M + \$T)\\ & = P(P2)\util (\$T) + P(P1)\util (\$M + \$T)\\ \] Hence \(\textit{EU}(1) \lt EU(2)\). This result agrees with the verdict of causal decision theory given other accurate representations of the problem. Provided that causal decision theory uses a partition-invariant formula for expected utility, its recommendations are independent of a decision problem’s representation. Lewis (1981: 12–13) observes that the formula \[ EU(A) = \sum_i P(S_i)\util (A \amp S_i) \] is not partition invariant. Its results depend on the partition of states. If a state is a set of worlds with equal utilities, then with respect to a partition of such states every act has the same expected utility. An element \(S_i\) of the partition obscures the effects of \(A\) that the utility of an outcome should evaluate. Lewis overcomes this problem by using only partitions of dependency hypotheses. However, causal decision theory may craft a partition-invariant formula for expected utility by adopting a substitute for \(U(A \amp S_i)\). Sobel (1994: Chap. 9) investigates partition invariance. Putting his work in this essay’s notation, he proceeds as follows. First, he takes a canonical computation of an option’s expected utility to use worlds as states. His basic formula is \[ \textit{EU} (A) = \sum_i P(A \gt W_i)\util (W_i). \] A world \(W_i\) absorbs an act performed in it. Only the worlds in which \(A\) holds contribute positive probabilities and so affect the sum. Next, Sobel searches for other computations, using coarse-grained states, that are equivalent to the canonical computation. A suitable specification of utilities achieves partition invariance given his assumptions. According to a theorem he proves (1994: 185), \[ U(A) = \sum_i P(S_i)\util (A \mbox{ given } S_i) \] for any partition of states. Joyce (2000: S11) also articulates for causal decision theory a partition-invariant formula for an act’s expected utility. He achieves partition invariance, assuming that \[ \textit{EU} (A) = \sum_i P(A \gt S_i)\util (A \amp S_i), \] by stipulating that \(U(A \amp S_i)\) equals \[ \sum_{ij} P^A(W_j\mid S_i)\util (W_j), \] where \(W_j\) is a world and \(P^A\) stands for the probability image of \(A\). Weirich (2001: Secs. 3.2, 4.2.2), as Sobel does, substitutes \(U(A \mbox{ given }S_i)\) for \(U(A \amp S_i)\) in the formula for expected utility and interprets \(U(A \mbox{ given }S_i)\) as the utility of the outcome that \(A\)’s realization would produce if \(S\) obtains. Accordingly, \(U(A \mbox{ given }S_i)\) responds to \(A\)’s causal consequences in worlds where \(S_i\) holds. Then the formula \[ \textit{EU} (A) = \sum_i P(S_i) \util (A \mbox{ given }S_i) \] is invariant with respect to partitions in which states are probabilistically independent of the act. A more complex formula, \[ \textit{EU} (A) = \sum_i P(S_i \mbox{ if }A)\util (A \mbox{ given } (S_i \mbox{ if } A)), \] assuming a causal interpretation of its probabilities, relaxes all restriction on partitions. \(U(A \mbox{ given }(S_i \mbox{ if }A))\) is the utility of the outcome if \(A\) were realized, given that it is the case that \(S_i\) would obtain if \(A\) were realized. 3.3 Outcomes One issue concerning outcomes is their comprehensiveness. Are an act’s outcomes possible worlds, temporal aftermaths, or causal consequences? Gibbard and Harper ([1978] 1981: 166–168) mention the possibility of narrowing outcomes to causal consequences, as practical applicability advocates. The narrowing must be judicious, however, because the expected-utility principle requires that outcomes include every relevant consideration. For example, if an agent is averse to risk, then each of a risky act’s possible outcomes must include the risk the act generates. Its inclusion tends to lower each possible outcome’s utility. In Sobel’s canonical formula for expected utility, \[ \textit{EU} (A) = \sum_i P(A \gt W_i)\util (W_i). \] The formula, from one perspective, omits states of the world because the outcomes themselves form a partition. The distinction between states and outcomes dissolves because worlds play the role of both states and outcomes. States are dispensable means of generating outcomes that are exclusive and exhaustive. According to a basic principle, an act’s expected utility is a probability-weighted average of possible outcomes that are exclusive and exhaustive, such as the worlds to which the act may lead. Suppose that a world’s utility comes from realization of basic intrinsic desires and aversions. Granting that the utilities of their realizations are additive, the utility of a world is a sum of the utilities of their realizations. Then besides being a probability-weighted average of the utilities of worlds to which it may lead, an option’s expected utility is also a probability-weighted average of the realizations of basic intrinsic desires and aversions. In this formula for its expected utility, states play no explicit role: \[ \textit{EU} (A) = \sum_i P(A \gt B_i)\util (B_i), \] where \(B_i\) ranges over possible realizations of basic intrinsic desires and aversions. The formula considers for each basic desire and aversion the prospect of its realization if the act were performed. It takes the act’s expected utility as the sum of the prospects’ utilities. The formula provides an economical representation of an act’s expected utility. It eliminates states and obtains expected utility directly from outcomes taken as realizations of basic desires and aversions. To illustrate calculation of an act’s expected utility using basic intrinsic desires and aversions, suppose that an agent has no basic intrinsic aversions and just two basic intrinsic desires, one for health and the other for wisdom. The utility of health is 4, and the utility of wisdom is 8. In the formula for expected utility, a world covers only matters about which the agent cares. In the example, a world is a proposition specifying whether the agent has health and whether he has wisdom. Accordingly, there are four worlds: \[ H \amp W, \\ H \amp {\sim}W, \\ {\sim}H \amp W, \\ {\sim}H \amp {\sim}W.\\ \] Suppose that \(A\) is equally likely to generate any world. Using worlds, \[ \textit{EU} (A) & = P(A \gt(H \amp W))\util (H \amp W) \\ &\qquad + P(A \gt(H \amp{\sim}W))\util (H \ amp{\sim}W) \\ &\qquad + P(A \gt({\sim}H \amp W))\util ({\sim}H \amp W) \\ &\qquad + P(A \gt({\sim}H \amp{\sim}W))\util ({\sim}H \amp{\sim}W) \\ & = (0.25)(12) + (0.25)(4) + (0.25)(8) + (0.25)(0) \\ & = 6.\\ \] Using basic intrinsic attitudes, \[ \textit{EU} (A) &= P(A \gt H)\util (H) + P(A \gt W)\util (W) \\ & = (0.5)(4) + (0.5)(8) \\ & = 6. \] The two methods of computing an option’s utility are equivalent given that, under supposition of an act’s realization, the probability of a basic intrinsic desire’s or aversion’s realization is the sum of the probabilities of the worlds that realize it. 3.4 Acts In deliberations, a first-person action proposition represents an act. The proposition has a subject-predicate structure and refers directly to the agent, its subject, without the intermediary of a concept of the agent. A centered world represents the proposition. Such a world not only specifies individuals and their properties and relations, but also specifies which individual is the agent and where and when his decision problem arises. Realization of the act is realization of a world with, at its center, the agent at the time and place of his decision problem. Isaac Levi (2000) objects to any decision theory that attaches probabilities to acts. He holds that deliberation crowds out prediction. While deliberating, an agent does not have beliefs or degrees of belief about the act that she will perform. Levi holds that Newcomb’s problem, and evidential and causal decision theories that address it, involve mistaken assignments of probabilities to an agent’s acts. He rejects both Jeffrey’s ([1965] 1983) evidential decision theory and Joyce’s (1999) causal decision theory because they allow an agent to assign probabilities to her acts during In opposition to Levi’s views, Joyce (2002) argues that (1) causal decision theory need not accommodate an agent’s assigning probabilities to her acts, but (2) a deliberating agent may legitimately assign probabilities to her acts. Evidential decision theory computes an act’s expected utility using the probability of a state given the act, \(P(S\mid A)\), defined as \(P(S \amp A)/P(A)\). The fraction’s denominator assigns a probability to an act. Causal decision theory replaces \(P(S\mid A)\) with \(P(A \gt S)\) or a similar causal probability. It need not assign a probability to an act. May an agent deliberating assign probabilities to her possible acts? Yes, a deliberator may sensibly assign probabilities to any events, including her acts. Causal decision theory may accommodate such probabilities by forgoing their measurement with betting quotients. According to that method of measurement, willingness to make bets indicates probabilities. Suppose that a person is willing to take either side of a bet in which the stake for the event is \(x\) and the stake against the event is \(y\). Then the probability the person assigns to the event is the betting quotient \(x/(x + y) \). This method of measurement may fail when the event is an agent’s own future act. A bet on an act’s realization may influence the act’s probability, as a thermometer’s temperature may influence the temperature of a liquid it measures. Joyce (2007: 552–561) considers whether Newcomb problems are genuine decision problems despite strong correlations between states and acts. He concludes that, yes, despite those correlations, an agent may view her decision as causing her act. An agent’s decision supports a belief about her act independently of prior correlations between states and her act. According to a principle of evidential autonomy (2007: 557), A deliberating agent who regards herself as free need not proportion her beliefs about her own acts to the antecedent evidence that she has for thinking that she will perform them. She should proportion her beliefs to her total evidence, including her self-supporting beliefs about her own acts. Those beliefs provide new relevant evidence about her acts. How should an agent deliberating about an act understand the background for her act? She should not adopt a backtracking supposition of her act. Standing on the edge of a cliff, she should not suppose that if she were to jump, she would have a parachute to break her fall. Also, she should not imagine gratuitous changes in her basic desires. She should not imagine that if she were to choose chocolate instead of vanilla, despite currently preferring vanilla, that she would then prefer chocolate. She should imagine that her basic desires are constant as she imagines the various acts she may perform, and, moreover, should adopt during deliberations the pretense that her will generates her act independently of her basic desires and aversions. Christopher Hitchcock (1996) holds that an agent should pretend that her act is free of causal influence. Doing this makes partitions of states yielding probabilities for decision agree with partitions of states yielding probabilities defining causal relevance. As a result, probabilities in causal decision theory may form a foundation for probabilities in the probabilistic theory of causation. Causal decision theory, in particular, the version using dependency hypotheses, grounds theories of probabilistic causation. 3.5 Generalizing Expected Utility Problems such as Pascal’s Wager and the St. Petersburg paradox suggest that decision theory needs a means of handling infinite utilities and expected utilities. Suppose that an option’s possible outcomes all have finite utilities. Nonetheless, if those utilities are infinitely many and unbounded, then the option’s expected utility may be infinite. Alan Hájek and Harris Nover (2006) also show that the option may have no expected utility. The order of possible outcomes, which is arbitrary, may affect convergence of their utilities’ probability-weighted average and the value to which the average converges if it does converge. Causal decision theory should generalize its principle of expected-utility maximization to handle such cases. Also, common principles of causal decision theory advance standards of rationality that are too demanding to apply to humans. They are standards for ideal agents in ideal circumstances (a precise formulation of the idealizations may vary from theorist to theorist). Making causal decision theory realistic requires relaxing idealizations that its principles assume. A generalization of the principle of expected-utility maximization, for example, may relax idealizations to accommodate limited cognitive abilities. Weirich (2004) and Pollock (2006) take steps in this direction. Appropriate generalizations distinguish taking maximization of expected utility as a procedure for making a decision and taking it as a standard for evaluating a decision even after the decision has been made. 3.6 Ratification Gibbard and Harper (1978: Sec. 11) present a problem for causal decision theory using an example drawn from literature. A man in Damascus knows that he has an appointment with Death at midnight. He will escape Death if he manages at midnight not to be at the place of his appointment. He can be in either Damascus or Aleppo at midnight. As the man knows, Death is a good predictor of his whereabouts. If he stays in Damascus, he thereby has evidence that Death will look for him in Damascus. However, if he goes to Aleppo he thereby has evidence that Death will look for him in Aleppo. Wherever he decides to be at midnight, he has evidence that he would be better off at the other place. No decision is stable. Decision instability arises in cases in which a choice provides evidence for its outcome, and each choice provides evidence that another choice would have been better. Reed Richter (1984, 1986) uses cases of decision instability to argue against causal decision theory. The theory needs a resolution of the problem of decision instability. A common analysis of the problem classifies options as either self-ratifying or not self-ratifying. Jeffrey ([1965] 1983) introduced ratification as a component of evidential decision theory. His version of the theory evaluates a decision according to the expected utility of the act it selects. The distinction between an act and a decision to perform the act grounds his definition of an option’s self-ratification and his principle to make self-ratifying, or ratifiable, decisions. According to his definition ([1965] 1983: 16), A ratifiable decision is a decision to perform an act of maximum estimated desirability relative to the probability matrix the agent thinks he would have if he finally decided to perform that Estimated desirability is expected utility. An agent’s probability matrix is an array of rows and columns for acts and states, respectively, with each cell formed by the intersection of an act’s row and a state’s column containing the probability of the state given that the agent is about to perform the act. Before performing an act, an agent may assess the act in light of a decision to perform it. Information the decision carries may affect the act’s expected utility and its ranking with respect to other acts. Jeffrey used ratification as a means of making evidential decision theory yield the same recommendations as causal decision theory. In Newcomb’s problem, for instance, two-boxing is the only self-ratifying option. However, Jeffrey (2004: 113n) concedes that evidential decision theory’s reliance on ratification does not make it agree with causal decision theory in all cases. Moreover, Joyce (2007) argues that the motivation for ratification appeals to causal relations, so that even if it yields correct recommendations using Jeffrey’s formula for expected-utility, it still does not yield a purely evidential decision theory. Causal decision theory’s account of self-ratification may put aside Jeffrey’s method of evaluating a decision by evaluating the act it selects. Because the decision and the act differ, they may have different consequences. For example, a decision may fail to generate the act it selects. Hence, the decision’s expected utility may differ from the act’s expected utility. Driving through a flooded section of highway may have high expected utility because it minimizes travel time to one’s destination. However, the decision to drive through the flooded section may have low expected utility because for all one knows the water may be deep enough to swamp the car. Using an act’s expected utility to assess a decision to perform the act leads to faulty evaluations of decisions. It is better to evaluate a decision by comparing its expected utility to the expected utilities of rival decisions. A decision’s expected utility depends on the probability of its execution as well as the expected consequences of the act it selects. Weirich (1985) and Harper (1986) define ratification in terms of an option’s expected utility given its realization rather than given a decision to realize it. An option is self-ratifying if and only if it maximizes expected utility given its realization. This account of ratification accommodates cases in which an option and a decision to realize it have different expected utilities. Weirich and Harper also assume causal decision theory’s formula for expected utility. In the case of Death in Damascus, causal decision theory concludes that the threatened man lacks a self-ratifying option. A self-ratifying option emerges, however, if the man may flip a coin to make his decision. Adopting the probability distribution for locations is called a mixed strategy, whereas choices of location are called pure strategies. Assuming that Death cannot predict the coin flip’s outcome, the mixed strategy is self-ratifying. During deliberations to resolve a decision problem, an agent may revise the probabilities she assigns to pure strategies in light of computations of their expected utilities using earlier probability assignments. The process of revision may culminate in a stable probability assignment that represents a mixed strategy. Skyrms (1982, 1990) and Eells (1984b) investigate these dynamics of deliberation. Some open issues are whether adoption of a mixed strategy resolves a decision problem and whether a pure strategy arising from a mixed strategy that constitutes an equilibrium of deliberations is rational if the pure strategy itself is not self-ratifying. Andy Egan (2007) argues that causal decision theory yields the wrong recommendation in decision problems with an option that provides evidence concerning its outcome. He entertains the case of an assassin who deliberates about pulling the trigger, knowing that the option’s realization provides evidence of a brain lesion that ruins his aim. Egan maintains that causal decision theory mistakenly ignores the evidence that the option provides. However, versions of causal decision theory that incorporate ratification are innocent of the charges. Ratification takes account of evidence an option provides concerning its outcome. Any version of the expected utility principle, whether it uses conditional probabilities or probabilities of conditionals, must specify the information that guides assignments of probabilities and utilities. Principles of nonconditional expected-utility maximization use the same information for all options, and hence exclude information about an option’s realization. The principle of ratification uses for each option information that includes the option’s realization. It is a principle of conditional expected-utility maximization. Egan’s cases count against nonconditional expected-utility maximization, and not against causal decision theory. Conditional expected-utility maximization using causal decision theory’s formula for expected utility addresses the cases he Egan’s examples do not refute causal decision theory but present a challenge for it. Suppose that in a decision problem no self-ratifying option exists, or multiple self-ratifying options exist. How should a rational agent proceed, granting that a decision principle should take account of information that an option provides? This is an open problem in causal decision theory (and in any decision theory acknowledging that an option’s realization may constitute evidence concerning its outcome). Ratification analyzes decision instability but is not a complete response to it. In response to Egan, Frank Arntzenius (2008) and Joyce (2012) argue that in some decision problems an agent’s rational deliberations using freely available information do not settle on a single option but instead settle on a probability distribution over options. They acknowledge that the agent may regret the option issuing from these deliberations but differ about the regret’s significance. Arntzenius holds that the regret counts against the option’s rationality, whereas Joyce denies this. Ahmed (2012) and Ralph Wedgwood (2013) reject Arntzenius’s and Joyce’s responses to Egan because they hold that deliberations should settle on an option. Wedgwood introduces a novel decision principle to accommodate Egan’s decision problems. Ahmed contends that Egan’s analysis of these decision problems has a flaw because when it is extended to some other decision problems, it declares every option irrational. Points about ratification in decision problems clarify points about equilibrium in game theory because in games of strategy a player’s choice often furnishes evidence about other players’ choices. Decision theory underlies game theory because a game’s solution identifies rational choices in the decision problems the game creates for the players. Solutions to games distinguish correlation and causation, as do decision principles. Because in simultaneous-move games two agent’s strategies may be correlated but not related as cause and effect, solutions to such games do not have the same properties as solutions to sequential games. Causal decision theory attends to distinctions on which solutions to games depend. It supports game theory’s account of interactive decisions. The existence of self-ratifying mixed strategies in decision problems such as Death in Damascus suggests that ratification, as causal decision theory explains it, supports participation in a Nash equilibrium of a game. Such an equilibrium assigns a strategy to each player so that each strategy in the assignment is a best response to the others. Suppose that two people are playing Matching Pennies. Simultaneously, each displays a penny. One player tries to make the sides match, and the other player tries to prevent a match. If the first player succeeds, he gets both pennies. Otherwise, the second player gets both pennies. Suppose that each player is good at predicting the other player, and each player knows this. Then if the first player displays heads, he has reason to think that the second player displays tails. Also, if the first player displays tails, he has reason to think that the second player displays heads. Because Matching Pennies is a simultaneous-move game, neither player’s strategy influences the other player’s strategy, but each player’s strategy is evidence of the other player’s strategy. Mixed strategies help resolve decision instability in this case. If the first player flips his penny to settle the side to display, then his mixed strategy is self-ratifying. The second player’s situation is similar, and she also reaches a self-ratifying strategy by flipping her penny. The combination of self-ratifying strategies is a Nash equilibrium of the game. Joyce and Gibbard (1998) describe the role of ratification in game theory. Weirich (2004: Chap. 9) presents a method of selecting among multiple self-ratifying strategies, and hence a method by which a group of players may coordinate to realize a particular Nash equilibrium when several exist. Although decision instability is an open problem, causal decision theory has resources for addressing it. The theory’s eventual resolution of the problem will offer game theory a justification for participation in a Nash equilibrium of a game. 4. Related Topics and Concluding Remarks Causal decision theory has foundations in various areas of philosophy. For example, it relies on metaphysics for an account of causation. It also relies on inductive logic for an account of inferences concerning causation. A comprehensive causal decision theory treats not only causal probabilities’ generation of options’ expected utilities, but also evidence’s generation of causal Research concerning causation contributes to the metaphysical foundations of causal decision theory. Nancy Cartwright (1979), for example, draws on ideas about causation to flesh out details of causal decision theory. Also, some accounts of causation distinguish types of causes. Both oxygen and a flame are metaphysical causes of tinder’s combustion. However, only the flame is causally responsible for, and so a normative cause of, the combustion. Causal responsibility for an event accrues to just the salient metaphysical causes of the event. Causal decision theory is interested not only in events for which an act is causally responsible, but also in other events for which an act is a metaphysical cause. Expected utilities that guide decisions are comprehensive. Judea Pearl (2000) and also Peter Spirtes, Clark Glymour, and Richard Scheines (2000) present methods of inferring causal relations from statistical data. They use directed acyclic graphs and associated probability distributions to construct causal models. In a decision problem, a causal model yields a way of calculating an act’s effect. A causal graph and its probability distribution express a dependency hypothesis and yield each act’s causal influence given that hypothesis. They specify the causal probability of a state under supposition of an act. An act’s expected utility is a probability-weighted average of its expected utility according to the dependency hypotheses that candidate causal models represent, as Weirich (2015: 225–236) explains. A causal model’s directed graph and probability distribution indicate causal relations among event types. As Pearl (2000: 30) and Sprites et al. (2000: 11) explain, a causal model meets the causal Markov condition if and only if with respect to its probability distribution each event type in its directed graph is independent of all the event type’s nondescendants, given its parents. Given a model meeting the condition, knowledge of all an event’s direct causes makes other information statistically irrelevant to the event’s occurrence, except for information about the event and its effects. Knowledge of an event’s direct causes screens off evidence from indirect causes and independent effects of its causes. Given a typical causal model for Newcomb’s problem, knowledge of the common cause of a decision and a prediction screens off the correlation between the decision and the prediction. Directed acyclic graphs present causal structure clearly, and so clarify in decision theory points that depend on causal structure. For example, Eells (2000) observes that choice is not genuine unless a decision screens off an act’s correlation with states. Joyce (2007: 546) uses a causal graph to depict how this may happen in a Newcomb problem that arises in a Prisoner’s Dilemma with a psychological twin. He shows that the Newcomb problem is a genuine choice despite correlation of acts and states because a decision screens off that correlation. Wolfgang Spohn (2012) constructs for Newcomb’s problem a causal model that distinguishes a decision and its execution and argues that given the model causal decision theory recommends one-boxing. An act in a decision problem may constitute an intervention in the causal model for the decision problem, as Meek and Glamour (1994) explain. Hitchcock (2016) maintains that treating an act as an intervention enriches causal decision theory. Timothy Williamson (2007: Chap. 5) studies the epistemology of counterfactual, or subjunctive, conditionals. He points out their role in contingency planning and decision making. According to his account, one learns a subjunctive conditional if one robustly obtains its consequent when imagining its antecedent. Experience disciplines imagination. The experience leading to a judgment that a subjunctive conditional holds may be neither strictly enabling nor strictly evidential so that knowledge of the conditional is neither purely a priori nor purely a posteriori. Williamson claims that knowledge of subjunctive conditionals is foundational so that decision theory appropriately grounds knowledge of an act’s choiceworthiness in knowledge of such conditionals. Most texts on decision theory are consistent with causal decision theory. Many do not treat the special cases, such as Newcomb’s problem, that motivate a distinction between causal and evidential decision theory. For example, Leonard Savage (1954) analyzes only decision problems in which options do not affect probabilities of states, as his account of utility makes clear (1954: 73). Causal and evidential decision theories reach the same recommendations in these problems. Causal decision theory is the prevailing form of decision theory among those who distinguish causal and evidential decision theory. • Ahmed, Arif, 2012, “Push the Button”, Philosophy of Science, 79: 386–395. • –––, 2014, Evidence, Decision and Causality, Cambridge: Cambridge University Press. • Armendt, Brad, 1986, “A Foundation for Causal Decision Theory”, Topoi, 5(1): 3–19. doi:10.1007/BF00137825 • –––, 1988a, “Conditional Preference and Causal Expected Utility”, in William Harper and Brian Skyrms (eds), Causation in Decision, Belief Change, and Statistics, Vol. II, pp. 3–24, Dordrecht: • –––, 1988b, “Impartiality and Causal Decision Theory”, in Arthur Fine and Jarrett Leplin (eds), PSA: Proceedings of Biennial Meeting of the Philosophy of Science Association 1988, Volume I, pp. 326–336, East Lansing, MI: Philosophy of Science Association. • Arntzenius, Frank, 2008, “No Regrets, or: Edith Piaf Revamps Decision Theory”, Erkenntnis, 68(2): 277–297. doi:10.1007/s10670-007-9084-8 • Bales, Adam, 2016, “The Pauper’s Problem: Chance, Foreknowledge and Causal Decision Theory”, Philosophical Studies, 173(6): 1497–1516. doi:10.1007/s11098-015-0560-8 • Cartwright, Nancy, 1979, “Causal Laws and Effective Strategies”, Noûs, 13(4): 419–437. doi:10.2307/2215337 • Eells, Ellery, 1981, “Causality, Utility, and Decision”, Synthese, 48(2): 295–329. doi:10.1007/BF01063891 • –––, 1982, Rational Decision and Causality, Cambridge: Cambridge University Press. • –––, 1984a, “Newcomb’s Many Solutions”, Theory and Decision, 16(1): 59–105. doi:10.1007/BF00141675 • –––, 1984b, “Metatickles and the Dynamics of Deliberation”, Theory and Decision, 17(1): 71–95. doi:10.1007/BF00140057 • –––, 2000, “Review: The Foundations of Causal Decision Theory, by James Joyce”, British Journal for the Philosophy of Science, 51(4): 893–900. doi:10.1093/bjps/51.4.893 • Egan, Andy, 2007, “Some Counterexamples to Causal Decision Theory”, Philosophical Review, 116(1): 93–114. 10.1215/00318108-2006-023 • Gibbard, Allan and William Harper, 1978 [1981], “Counterfactuals and Two Kinds of Expected Utility”, in Clifford Alan Hooker, James L. Leach, and Edward Francis McClennan (eds), Foundations and Applications of Decision Theory (University of Western Ontario Series in Philosophy of Science, 13a), Dordrecht: D. Reidel, pp. 125–162. doi:10.1007/978-94-009-9789-9_5 Reprinted in Harper, Stalnaker, and Pearce 1981: 153–190. doi:10.1007/978-94-009-9117-0_8 • Hájek, Alan and Harris Nover, 2006, “Perplexing Expectations”, Mind, 115(459): 703–720. 10.1093/mind/fzl703 • Harper, William, 1986, “Mixed Strategies and Ratifiability in Causal Decision Theory”, Erkenntnis, 24(1): 25–36. doi:10.1007/BF00183199 • Harper, William, Robert Stalnaker, and Glenn Pearce (eds), 1981,Ifs: Conditionals, Belief, Decision, Chance, and Time (University of Western Ontario Series in Philosophy of Science, 15), Dordrecht: Reidel. • Hitchcock, Christopher Read, 1996, “Causal Decision Theory and Decision-Theoretic Causation”, Noûs, 30(4): 508–526. doi:10.2307/2216116 • –––, 2016, “Conditioning, Intervening, and Decision”, Synthese, 193(4): 1157–1176. doi:10.1007/s11229-015-0710-8 • Horgan, Terry, 1981 [1985], “Counterfactuals and Newcomb’s Problem”, The Journal of Philosophy, 78(6): 331–356. doi:10.2307/2026128 Reprinted in Richmond Campbell and Lanning Sowden (eds), 1985, Paradoxes of Rationality and Cooperation: Prisoner’s Dilemma and Newcomb’s Problem, Vancouver: University of British Columbia Press, pp. 159–182. • Horwich, Paul, 1987, Asymmetries in Time, Cambridge, MA: MIT Press. • Jeffrey, Richard C., [1965] 1983, The Logic of Decision, second edition, Chicago: University of Chicago Press. [The 1990 paperback edition includes some revisions.] • –––, 2004, Subjective Probability: The Real Thing, Cambridge: Cambridge University Press. • Joyce, James M., 1999, The Foundations of Causal Decision Theory, Cambridge: Cambridge University Press. • –––, 2000, “Why We Still Need the Logic of Decision”, Philosophy of Science, 67: S1–S13. doi:10.1086/392804 • –––, 2002, “Levi on Causal Decision Theory and the Possibility of Predicting One’s Own Actions”, Philosophical Studies, 110(1): 69–102. doi:10.1023/A:1019839429878 • –––, 2007, “Are Newcomb Problems Really Decisions?” Synthese, 156(3): 537–562. doi:10.1007/s11229-006-9137-6 • –––, 2012, “Regret and Instability in Causal Decision Theory”, Synthese, 187(1): 123–145. doi:10.1007/s11229-011-0022-6 • Joyce, James and Allan Gibbard, 1998, “Causal Decision Theory”, in Salvador Barbera, Peter Hammond, and Christian Seidl (eds), Handbook of Utility Theory (Volume 1: Principles), pp. 627–666, Dordrecht: Kluwer Academic Publishers. • Krantz, David, R., Duncan Luce, Patrick Suppes, and Amos Tversky, 1971, The Foundations of Measurement (Volume 1: Additive and Polynomial Representations), New York: Academic Press. • Levi, Isaac, 2000, “Review Essay on The Foundations of Causal Decision Theory, by James Joyce”, Journal of Philosophy, 97(7): 387–402. doi:10.2307/2678411 • Lewis, David, 1973, Counterfactuals, Cambridge, MA: Harvard University Press. • –––, 1976, “Probabilities of Conditionals and Conditional Probabilities”, Philosophical Review, 85(3): 297–315. doi:10.2307/2184045 • –––, 1979, “Prisoner’s Dilemma is a Newcomb Problem”, Philosophy and Public Affairs, 8(3): 235–240. • –––, 1981, “Causal Decision Theory”, Australasian Journal of Philosophy, 59(1): 5–30. doi:10.1080/00048408112340011 • Meek, Christopher and Clark Glymour, 1994, “Conditioning and Intervening”, British Journal for the Philosophy of Science, 45(4): 1001–1021. doi:10.1093/bjps/45.4.1001 • Nozick, Robert, 1969, “Newcomb’s Problem and Two Principles of Choice”, in Nicholas Rescher (ed.), Essays in Honor of Carl G. Hempel, pp. 114–146, Dordrecht: Reidel. • Papineau, David, 2001, “Evidentialism Reconsidered”, Noûs, 35(2): 239–259. • Pearl, Judea, 2000, Causality: Models, Reasoning, and Inference, Cambridge: Cambridge University Press. [Second edition, 2009] • Pollock, John, 2006, Thinking about Acting: Logical Foundations for Rational Decision Making, New York: Oxford University Press. • –––, 2010, “A Resource-Bounded Agent Addresses the Newcomb Problem”, Synthese, 176(1): 57–82. doi:10.1007/s11229-009-9484-1 • Price, Huw, 1986, “Against Causal Decision Theory”, Synthese, 67(2): 195–212. doi:10.1007/BF00540068 • –––, 2012, “Causation, Chance, and the Rational Significance of Supernatural Evidence”, Philosophical Review, 121(4): 483–538. doi:10.1215/00318108-1630912 • Richter, Reed, 1984, “Rationality Revisited”, Australasian Journal of Philosophy, 62(4): 392–403. doi:10.1080/00048408412341601 • –––, 1986, “Further Comments on Decision Instability”, Australasian Journal of Philosophy, 64(3): 345–349. doi:10.1080/00048408612342571 • Savage, Leonard, 1954, The Foundations of Statistics, New York: Wiley. • Skyrms, Brian, 1980, Causal Necessity: A Pragmatic Investigation of the Necessity of Laws, New Haven, CT: Yale University Press. • –––, 1982, “Causal Decision Theory”, Journal of Philosophy, 79(11): 695–711. doi:10.2307/2026547 • –––, 1990, The Dynamics of Rational Deliberation, Cambridge, MA: Harvard University Press. • Sobel, Jordan Howard, 1994, Taking Chances: Essays on Rational Choice, Cambridge: Cambridge University Press. • Spirtes, Peter, Clark Glymour, and Richard Scheines, 2000, Causation, Prediction, and Search, Second Edition, Cambridge, MA: MIT Press. • Spohn, Wolfgang, 2012, “Reversing 30 Years of Discussion: Why Causal Decision Theorists Should One-Box”, Synthese, 187(1): 95–122. doi:10.1007/s11229-011-0023-5 • Stalnaker, Robert C., 1968, “A Theory of Conditionals”, in Studies in Logical Theory (American Philosphical Quarterly Monograph series, 2 ), Oxford: Blackwell, 98–112. Reprinted in in Harper, Stalnaker, and Pearce 1981: 41–56. doi:10.1007/978-94-009-9117-0_2 • –––, 1972 [1981], “Letter to David Lewis”, May 21. Printed in Harper, Stalnaker, and Pearce 1981: 151–152. doi:10.1007/978-94-009-9117-0_7 • Wedgwood, Ralph, 2013, “Gandalf’s Solution to the Newcomb Problem”, Synthese, 190(14): 2643–2675. doi:10.1007/s11229-011-9900-1 • Weirich, Paul, 1980, “Conditional Utility and Its Place in Decision Theory”, Journal of Philosophy, 77(11): 702–715. • –––, 1985, “Decision Instability”, Australasian Journal of Philosophy, 63(4): 465–472. doi:10.1080/00048408512342061 • –––, 2001, Decision Space: Multidimensional Utility Analysis, Cambridge: Cambridge University Press. • –––, 2004, Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances, New York: Oxford University Press. • –––, 2015, Models of Decision-Making: Simplifying Choices, Cambridge: Cambridge University Press. • Williamson, Timothy, 2007, The Philosophy of Philosophy, Malden, MA: Blackwell. Academic Tools How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database. Other Internet Resources • MIT Course on Decision Theory, offered by Robert Stalnaker. • Decision Theory, as of this writing (October 3, 2016), the Wikipedia site has a good overall introduction to decision theory and a list of references. I thank Christopher Haugen for bibliographical research and Brad Armendt, David Etlin, William Harper, Xiao Fei Liu, Brian Skyrms, Howard Sobel, and an anonymous referee for helpful comments.
{"url":"https://plato.stanford.edu/archivES/FALL2017/entries/decision-causal/","timestamp":"2024-11-05T16:58:24Z","content_type":"text/html","content_length":"96402","record_id":"<urn:uuid:a3c7591f-031d-4dc7-9fd3-6411aa91c265>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00120.warc.gz"}
CBSE Class 10 Computer Applications Sample Paper (PDF) Download CBSE Class 10 Computer Applications Sample Question Papers Download CBSE Class 10 Computer Applications Question Papers: Candidates can Get the previous year’s Central Board of Secondary Education board Computer Applications Exam question papers for class 10th 2022. So all the Preparing Candidates can Download CBSE Previous Year Question Papers for Class 10. The Officials of the CBSE have provided CBSE Previous Year Papers for Class Class 10 for Computer Applications along with solutions to help students in their CBSE board exam Preparations to get a good score in the Examination. CBSE Class 10 Computer Applications Previous Question Papers With Answers PDF ┃Name of the Board│Central Board of Secondary Education ┃ ┃Class │Class 10 ┃ ┃Subjects │Computer Applications ┃ ┃Category │CBSE Computer Applications Sample Papers ┃ ┃Official Website │cbse.nic.in ┃ CBSE Class 10 Computer Applications Model Question Paper 2022 Candidates are advised to Download the CBSE Class 10 Computer Applications Question Papers to use as practice. Download previous year’s CBSE Question Papers for Class 10 Computer Applications in PDF with solutions Here. we believe that the more the students practice, the better they perform in the exams. That’s why we have provided CBSE Computer Applications Class 10 question papers PDF with Answers for free of charge. CBSE Class 10 Computer Applications Previous Papers PDF Download Here CBSE Class 10 Computer Applications Previous Papers PDF Download Here CBSE Class 10 Computer Applications Sample Papers PDF Download Here CBSE Class 10 Computer Applications Sample Papers PDF Download Here CBSE Class 10 Computer Applications Question Papers PDF Download Here
{"url":"https://www.questionpapersonline.com/cbse-class-10-computer-applications-sample-papers/","timestamp":"2024-11-11T14:43:36Z","content_type":"text/html","content_length":"147454","record_id":"<urn:uuid:a43d0293-bb32-4756-b808-ad87fcb087de>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00567.warc.gz"}
Oscillators - Introduction, Types and ApplicationsOscillators - Introduction, Types and Applications Oscillators - Introduction, Types and Applications An electronic device that generates sinusoidal oscillations of desired frequency is known as sinusoidal oscillator. It receives DC energy and changes it into AC energy of desired frequency. In oscillators, the feedback voltage is in phase with the input voltage. • For sustained oscillations/un-damped oscillations: Av * B = 1 (Barkhausen Criterion) • Every resistor contains some free electrons. Because of ambient temperature, these free electrons move randomly in different directions and generate a noise voltage across the resistor. These noise voltages generated by the resistors act as input voltage for an oscillator. At frequency under 1 MHz, We can use RC oscillators to produce almost perfect sine waves and above 1 MHz, LC oscillators are used. 1. RC oscillators: Wien Bridge, Twin-T and Phase Shift. 2. LC oscillators: Colpitts, Armstrong, Hartley, Clapp and Crystal. Wien Bridge Oscillator: Phase response curve of lead-lag circuit shown at Very Low Frequencies, the series capacitor appears open to input signal and the phase angle is positive. At Very High Frequencies, the shunt capacitor looks shorted and the phase angle is negative. • It has a frequency range of 5 MHz to 1 MHz. • Tungsten lamp increase with voltage. • It uses both positive and negative feedback. • Used in commercial audio series. • Also used as Signal Generator in laboratories. Phase Shift Oscillator: In this oscillator, Three RC phase shift circuit provides 180° phase shift (60° each) and an Amplifier provides additional 180° because of inverting input. As a result a total 360° phase shift is Colpitts Oscillators: A 180° phase shift is produced by capacitor voltage divider and other by transistor amplifier. Armstrong Oscillator: It uses transformer coupling for the feedback signal. We often used term ticker coil because it feedback the signal that sustains the oscillations. • fr = 1 / ( 2*Ï€*√(L*C) ), B = M/L and Av(min) = L/M • Where: M is Mutual Inductance and L is Self Inductance. • Not used because of bulky size. Hartley Oscillator: It uses inductive voltage divider for feedback. • B = L2/L1 and Av(min) = L1/L2 • L = L1 + L2 + 2M and fr = 1 / ( 2*Ï€*√(L*C) ) • Commonly used in radio receivers. Clapp Oscillator: It is the enhanced version of colpitts oscillator. Feedback is provided by capacitive voltage divider but an extra capacitor is used in series with inductor. • C = 1 / (1/C1 + 1/C2 + 1/C3), C3<<<<C1,C2 • So C ≅ C3 • fr ≅ 1 / ( 2*Ï€*√(L*C3) • In colpitts oscillator C1 and C2 are shunted by transistor and stray capacitance which alters the value of C1 and C2 slightly. Therefore the fr was depended on transistor and stray capacitance. • But in clapp oscillator C3 is independent So fr is more stable and accurate. That is why mostly used. Crystal Oscillator: For most accurate and stable oscillations we use a crystal oscillator. Albert Malvino and David J Bates, “Electronic Principles”, 7th Edition, TATA McGRAW HILL. No comments
{"url":"https://www.info4eee.com/2017/12/oscillators-introduction-types-and.html","timestamp":"2024-11-12T23:45:44Z","content_type":"application/xhtml+xml","content_length":"284665","record_id":"<urn:uuid:e697908c-3a1e-40b9-8a26-c742fb77c29e>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00162.warc.gz"}
Testing Utilities Testing Utilities¶ Goodness of Fit Testing¶ This module implements goodness of fit tests for checking agreement between distributions’ .sample() and .log_prob() methods. The main functions return a goodness of fit p-value gof which for good data should be Uniform(0,1) distributed and for bad data should be close to zero. To use this returned number in tests, set a global variable TEST_FAILURE_RATE to something smaller than 1 / number of tests in your suite, then in each test assert gof > TEST_FAILURE_RATE. For example: TEST_FAILURE_RATE = 1 / 20 # For 1 in 20 chance of spurious failure. def test_my_distribution(): d = MyDistribution() samples = d.sample([10000]) probs = d.log_prob(samples).exp() gof = auto_goodness_of_fit(samples, probs) assert gof > TEST_FAILURE_RATE This module is a port of the goftests library.
{"url":"https://docs.pyro.ai/en/dev/testing.html","timestamp":"2024-11-06T13:41:03Z","content_type":"text/html","content_length":"27981","record_id":"<urn:uuid:220cf186-a5c9-4d45-a461-c2f98b7415d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00787.warc.gz"}
Properties of LogarithmsAlgebraLAB: Lessons Logarithmic functions and exponential functions are connected to one another in that they are inverses of each other. You may recall that when two functions are inverses of each other, the x and y are swapped. This leads to the most basic property involving logarithms which allows you to move back and forth between logarithmic and exponential forms of an expression. You can use the above property to change a logarithmic into an exponential or an exponential into a logarithmic expression. Let's Practice: i. Change the exponential equation ^ to logarithmic form. When changing between logarithmic and exponential forms, the base is always the same. In the exponential form in this problem, the base is 2, so it will become the base in our logarithmic form. Because logarithms and exponents are inverses of each other, the x and y values change places. It doesn’t matter if x and y are variables or numbers. In this case the 2 is “attached” to the 3 and the 8 is by itself. So when we switch those values, the 2 is by itself and the 8 is “attached” So ^ written is logarithmic form is ii. Change Since the base is the same whether we are dealing with an exponential or a logarithm, the base for this problem will be 5. We will exchange the 4 and the 625. The 625 was attached to the 5 and the 4 was by itself. In the logarithmic form, the 625 will be by itself and the 4 will be attached to the 5. So ^. This problem is nice because you can check it on your calculator to make sure your exponential equation is correct. In addition to the property that allows you to go back and forth between logarithms and exponents, there are other properties that allow you work with logarithmic expressions. Properties of Logarithms: 1. log[a]1 = 0 You can verify why this works by changing to an exponential form and getting ^ and anything to the zero power is 1. This property says that no matter what the base is, if you are taking the logarithm of 1, then the answer will always be 0. You can verify this by changing to an exponential form and getting ^. This property says that if the base and the number you are taking the logarithm of are the same, then your answer will always be 1. This property allows you to take a logarithmic expression of two things that are multiplied, then you can separate those into two distinct expressions that are added together. You can also go the other way. Two log expressions that are added can be combined into a single log expression using multiplication. This property allows you to take a logarithmic expression involving two things that are divided, then you can separate those into two distinct expressions that are subtracted. You can also go the other way. Two log expressions that are subtracted can be combined into a single log expression using division. This property will be very useful in solving equations and application problems. It allows you to take the exponent in a logarithmic expression and bring it to the front as a coefficient. You can also go the other way and move a coefficient up so that it becomes an exponent. So if properties 3, 4 and 5 can be used both ways, how do you know what should be done? That depends on the type of problem that is being asked. If you are being asked to combine log expressions into a single expression, you’ll want to use the property from right to left. But if you are trying to up a single log into its separate parts, you’ll want to use the property from left to right. Let's Practice: i. Use the properties of logs to write Since this problem is asking us to combine log expressions into a single expression, we will be using the properties from right to left. We usually begin these types of problems by taking any coefficients and writing them as exponents. Now there are two log terms that are added. We can combine those into a single log expression by multiplying the two parts together. We have now condensed the original problem into a single logarithmic expression. ii. Expand the expression Since we are trying to break the original expression up into separate pieces, we will be using our properties from left to right. We begin by taking the three things that are multiplied together and separating those into individual logarithms that are added together. There is an exponent in the middle term which can be brought down as a coefficient. This gives us There are no terms multiplied or divided nor are there any exponents in any of the terms. We have expanded this expression as much as possible. What is your answer? What is your answer? What is your answer? What is your answer? What is your answer? What is your answer?
{"url":"https://algebralab.net/lessons/lesson.aspx?file=Algebra_LogarithmProperties.xml","timestamp":"2024-11-14T20:55:53Z","content_type":"text/html","content_length":"34184","record_id":"<urn:uuid:fdce1ea0-5ce5-46e2-b642-ccaba1e49ddc>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00487.warc.gz"}
Linear Combination Calculator Last updated: Linear Combination Calculator Our linear combination calculator is here whenever you need to solve a system of equations using the linear combination method (also known as the elimination method). If you want to learn what the linear combination method is or how to use the linear combination method, check the article below. We explain what combining linear equations means and how to use the linear combination method to solve systems of linear equations. Most importantly, we show you several very detailed step-by-step examples of systems solved with the linear combination method. What is a system of linear equations? For an equation to be linear, all its variables must be in the first power: they cannot be squared/cubed, nor under a root, nor placed in the denominator. If some numbers satisfy several linear equations at once, we say that these numbers are a solution to the system of those linear equations. In school, we most often encounter systems of two linear equations in two variables. In general, such a system takes the form: a[1]x + b[1]y = c[1] a[2]x + b[2]y = c[2] • x and y are the variables; • a[1], b[1], c[1] are the coefficients of the first equation; and • a[2], b[2], c[2] are the coefficients of the second equation. What is the linear combination method? The linear combination of equations is a method for solving systems of linear equations. The key idea is to combine the equations into a system of fewer and simpler equations. If we deal with two linear equations in two variables, we want to combine these equations into one equation with a single variable. By combining linear equations, we mean multiplying one or both equations by suitably chosen numbers and then adding the equations together. Multipliers must allow the operation of addition to cause one of the variables to vanish. We then end up with a single equation in one variable, which we can easily solve. Once we have solved this equation, we substitute the value we've found into one of the original equations. This leads to another equation in one variable, which we quickly solve. This way, we've solved the system using linear combination! Are there other methods for solving systems of linear equations? Sure! You can discover them in Omni's substitution method calculator and in the Cramer's rule calculator. How to use this linear combination calculator? If you want to quickly solve a system of equations using linear combination, our tool is the best choice! To use it, follow the steps below: 1. Tell us the coefficients of your system: type them in the appropriate fields. 2. Our linear combination calculator solves the system using the linear combination method… 3. …and displays and explains all the intermediate steps! Did you know you can use this method to solve a linear programming problem algebraically? Read more about it in our corner point calculator. How to use the linear combination method? For the system of equations: a[1]x + b[1]y = c[1] a[2]x + b[2]y = c[2] we can always use the least common multiple of a[1] and a[2]. Namely, put: m[1] := LCM(a[1], a[2]) / a[1] m[2] := LCM(a[1], a[2]) / a[2] and multiply the first equation by m[1] and the second equation by -m[2]. This leads to the following system: LCM(a[1], a[2])x + [LCM(a[1], a[2])b[1]/a[1]]y = LCM(a[1],a[2])c[1]/a[1] -LCM(a[1], a[2])x - [LCM(a[1], a[2])b[2]/a[2]]y = -LCM(a[1], a[2])c[2]/a[2] We have created opposite coefficients for the variable x! All we need to do is to add these equations together: we will obtain an equation with the y variable only. In some particular situations, this LCM approach boils down to elementary operations: • When the coefficients of one variable are opposite numbers, both multipliers are equal to 1. • When the coefficients of one variable are equal, one multiplier is equal to 1 and the other to -1. It may sometimes happen that you eliminate both variables at once. First of all, do not panic. You arrived at a statement about numbers. Depending on whether the statement you got is true, like: 0 = 0 or 7 = 7, or false, e.g.: 0 = 1 or 5 = 7, draw conclusions about the system: • If the statement is false, then the system has no solution. • If the final statement is true, then the system has infinitely many solutions. Linear combination method examples Wow, we've learned a lot about what the linear combination method is! It's time to solve a few systems of linear equations using linear combinations. 1. Consider the system of equations: x - 4y = 1 -2x + 4y = 2 and solve it using linear combination. □ Add the two equations together: -x = 3 □ Solve for x: x = -3 □ Substitute x = -3 into the first equation: -3 - 4y = 1 □ Solve for y: -4y = 4 y = -1 □ Solution: x = -3, y = -1 2. Solve using linear combination: 2x + 3y = 3 2x - y = -3 □ First, multiply the first equation by -1: -2x - 3y = -3 2x - y = -3 □ Add the equations, which results in eliminating x: -4y = -6 □ Solve for y: y = 1.5 □ Substitute y = 1.5 into the second equation: 2x - 1.5 = -3 □ Solve for x: 2x = -1.5 x = -0.75 □ Solution: x = -0.75, y = -1.5 3. Solve the system using linear combination: 3x - 7y = 1 4x + 4y = -2 □ Use the LCM approach: calculate the least common multiplicity of 3 and 4: LCM(3, 4) = 12. The multipliers are: m[1] = 4 and m[2] = -3 and the system is: 12x - 28y = 4 -12x - 12y = 6 □ Add the equations: -40y = 10 □ We solve for y: y = -0.25 □ We substitute y = -0.25 into the second equation: 4x - 4 ⋅ (-0.25) = 0 □ We solve for x: 4x = -1 x = -0.25 □ Solution: x = -0.25, y = -0.25 What is a linear combination of vectors? In vector algebra, a linear combination of vectors refers to adding two or more vectors multiplied by a scalar value. For example, if a, b, and c are vectors, then a vector d is a linear combination of a, b, and c, if: d = x⋅a + y⋅b + z⋅c where x, y, and z are scalars. We can use a linear combination of vectors to represent a system of linear equations. How do I use the linear combination method? Consider two linear equations in two variables x and y: a[1]⋅x + b[1]⋅y = c[1] and a[2]⋅x + b[2]⋅y = c[2]. 1. Find the LCM of a[1] and a[2]: LCM(a[1], a[2]) = L 2. Multiply the first equation by L/a[1] to get: L⋅x + L⋅b[1]/a[1]⋅y = L⋅c[1]/a[1] 3. Multiply the second equation by -L/a[2] to get: -L⋅x - L⋅b[2]/a[2]⋅y = -L⋅c[2]/a[2] 4. Add these two equations together to obtain an equation in y-term only: (L⋅b[1]/a[1] - L⋅b[2]/a[2])⋅y = L⋅c[1]/a[1] -L⋅c[2]/a[2] 5. Solve this equation for y: y = (L⋅c[1]/a[1] - L⋅c[2]/a[2])/(L⋅b[1]/a[1] - L⋅b[2]/a[2]) 6. Substitute this solution for y in any one of the original equations to get the x value. What is the solution for the equations 5x+2y=12 and 8x+12y=28? The solution for the equations 5x + 2y = 12 and 8x + 12y = 28 is x = 2, y = 1. To solve this, follow these steps: 1. Find the LCM of 2 and 12: LCM(2, 12) = 12 2. Multiply the first equation by -12/2 = -6 to get: -30x - 12y = -72 3. Add these two equations to obtain: -22x = -44 ⇒x = -44/-22 = 2 4. Substitute x = 2 in the first equation to get the y value: 5(2) + 2y = 12 ⇒2y = 12 - 10 ⇒y = 2/2 = 1 5. Verify this result with our linear combination calculator.
{"url":"https://www.omnicalculator.com/math/linear-combination","timestamp":"2024-11-03T23:16:25Z","content_type":"text/html","content_length":"605488","record_id":"<urn:uuid:c3c0a476-6bee-4674-9c53-6051aef945dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00668.warc.gz"}
Vector Fields on 8.3.1 Vector Fields on This section revisits some basic concepts from introductory courses such as calculus, linear algebra, and differential equations. You may have learned most of these for and . We eventually want to describe velocities in and on manifolds, and then use the notion of a vector field to express a feedback plan in Section 8.4.1. Steven M LaValle 2020-08-14
{"url":"https://lavalle.pl/planning/node377.html","timestamp":"2024-11-03T06:34:56Z","content_type":"text/html","content_length":"4376","record_id":"<urn:uuid:c5324eda-9749-4691-8434-d55c7d7493eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00360.warc.gz"}
dgecon: estimates the reciprocal of the condition number of a general real matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by DGETRF - Linux Manuals (l) dgecon (l) - Linux Manuals dgecon: estimates the reciprocal of the condition number of a general real matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by DGETRF DGECON - estimates the reciprocal of the condition number of a general real matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by DGETRF NORM, N, A, LDA, ANORM, RCOND, WORK, IWORK, INFO ) CHARACTER NORM INTEGER INFO, LDA, N DOUBLE PRECISION ANORM, RCOND INTEGER IWORK( * ) DOUBLE PRECISION A( LDA, * ), WORK( * ) DGECON estimates the reciprocal of the condition number of a general real matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by DGETRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as = 1 / ( norm(A) * norm(inv(A)) ). NORM (input) CHARACTER*1 Specifies whether the 1-norm condition number or the infinity-norm condition number is required: = aq1aq or aqOaq: 1-norm; = aqIaq: Infinity-norm. N (input) INTEGER The order of the matrix A. N >= 0. A (input) DOUBLE PRECISION array, dimension (LDA,N) The factors L and U from the factorization A = P*L*U as computed by DGETRF. LDA (input) INTEGER The leading dimension of the array A. LDA >= max(1,N). ANORM (input) DOUBLE PRECISION If NORM = aq1aq or aqOaq, the 1-norm of the original matrix A. If NORM = aqIaq, the infinity-norm of the original matrix A. RCOND (output) DOUBLE PRECISION The reciprocal of the condition number of the matrix A, computed as RCOND = 1/(norm(A) * norm(inv(A))). WORK (workspace) DOUBLE PRECISION array, dimension (4*N) IWORK (workspace) INTEGER array, dimension (N) INFO (output) INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value
{"url":"https://www.systutorials.com/docs/linux/man/docs/linux/man/l-dgecon/","timestamp":"2024-11-12T15:00:39Z","content_type":"text/html","content_length":"9708","record_id":"<urn:uuid:6c90c74a-bc0e-4fe3-b003-0fb6000e4630>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00561.warc.gz"}
(PDF) Using the Iterated Prisoner's Dilemma for Explaining the Evolution of Cooperation in Open Source Communities ... We have referred, for such problems, to the approaches of Axelrod, Schotter, Lindgren, and many others in this textbook (see Chapter 13 for the relevant selected approaches). A large variety of approaches, models, and simulations have demonstrated that in an evolutionary process, institutions of cooperation may emerge from dilemma-prone structures if specific conditions can emerge (see, e.g., Liebrand and Messick, 1996;Lindgren, 1997;Offerman and Sonnemans, 1998;Oltra and Schenk, 1998;Eckert et al., 2005;Demange and Wooders, 2005;Traulsen and Nowak, 2006;Jun and Sethi, 2009; among innumerable others). ...
{"url":"https://www.researchgate.net/publication/228666257_Using_the_Iterated_Prisoner%27s_Dilemma_for_Explaining_the_Evolution_of_Cooperation_in_Open_Source_Communities","timestamp":"2024-11-02T14:51:37Z","content_type":"text/html","content_length":"624273","record_id":"<urn:uuid:7d863bc5-9da1-4972-944d-96e31883f2c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00871.warc.gz"}
OpenStax College Physics for AP® Courses, Chapter 9, Problem 25 (Problems & Exercises) Repeat Exercise 9.24 for the pulley shown in Figure 9.27(c), assuming you pull straight up on the rope. The pulley system's mass is 7.00 kg. Exercise 9.24 If you used an ideal pulley of the type shown in Figure 9.27(a) to support a car engine of mass 115 kg, (a) What would be the tension in the rope? (b) What force must the ceiling supply, assuming you pull straight down on the rope? Figure 9.27 Pully systems to demonstrate mechanical advantage. Question by is licensed under CC BY 4.0 Final Answer a) $299 \textrm{ N}$ b) $898 \textrm{ N}$ Solution video OpenStax College Physics for AP® Courses, Chapter 9, Problem 25 (Problems & Exercises) vote with a rating of votes with an average rating of. Video Transcript This is College Physics Answers with Shaun Dychko. We’re given the mechanical advantage of this pulley system in part c is four. We know mechanical advantage is the output force divided by the input force, so we’ll solve this for the input force by multiplying both sides by Fi over MA. Then we get the input force is the output force divided by mechanical advantage. So the output force has to be the total weight of the pulley system and the engine hanging off of it, so that’s 115 kilograms plus seven kilograms which is 122 kilograms times 9.81, and all that is divided by the mechanical advantage of four, to give an input force required of 299 newtons. And then part b asks us to find out what is the force exerted by the ceiling on the system. Well let’s first of all redraw the system as a single dot and consider the forces that are on that dot. There is gravity downwards due to the weight of the system and the engine, then there’s the force of the ceiling upwards which we’re going to find, and the way this pulley system is constructed, there’s also one force upwards which is the input force. All these other Ts inside here, these tensions, are forces that are internal to the system and they are not externally applied. So the force of the ceiling plus the input force upwards equals the total force of gravity downwards, and we’ll subtract Fi from both sides and also substitute mg in place of Fg, and we get the force applied by the ceiling, 122 kilograms times 9.81 newtons per kilogram minus 299.205 newtons input force, giving us 898 newtons is the force applied by the ceiling.
{"url":"https://collegephysicsanswers.com/openstax-solutions/repeat-exercise-924-pulley-shown-figure-927c-assuming-you-pull-straight-rope-0","timestamp":"2024-11-08T18:50:52Z","content_type":"text/html","content_length":"149298","record_id":"<urn:uuid:f0f88140-840d-480b-9323-6784444ddc4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00001.warc.gz"}
Algorithm To Find A Maximally-specific Hypothesis | Artificial Intelligence | Books | Skedbooks Algorithm To Find A Maximally-specific Hypothesis Algorithm to Find a Maximally-Specific Hypothesis: Algorithm to search the space of conjunctions: • Start with the most specific hypothesis • Generalize the hypothesis when it fails to cover a positive example 1. Initialize h to the most specific hypothesis 2. For each positive training example X For each value a in h If example X and h agree on a, do nothing Else generalize a by the next more general constraint 3. Output hypothesis h Example: Let’s run the learning algorithm above with the following examples: ((red,small,round,humid,low,smooth), poisonous) ((red,small,elongated,humid,low,smooth), poisonous) ((gray,large,elongated,humid,low,rough), not-poisonous) ((red,small,elongated,humid,high,rough), poisonous) We start with the most specific hypothesis: h = (0,0,0,0,0,0) The first example comes and since the example is positive and h fails to cover it, we simply generalize h to cover exactly this example: h = (red,small,round,humid,low,smooth) Hypothesis h basically says that the first example is the only positive example, all other examples are negative. Then comes examples 2: ((red,small,elongated,humid,low,smooth), poisonous) This example is positive. All attributes match hypothesis h except for attribute shape: it has the value elongated, not round. We generalize this attribute using symbol ? yielding: h: (red,small,?,humid,low,smooth) The third example is negative and so we just ignore it. Why is it we don’t need to be concerned with negative examples? Upon observing the 4th example, hypothesis h is generalized to the following: h = (red,small,?,humid,?,?) h is interpreted as any mushroom that is red, small and found on humid land should be classified as poisonous. The algorithm is guaranteed to find the hypothesis that is most specific and consistent with the set of training examples. It takes advantage of the general-specific ordering to move on the corresponding lattice searching for the next most specific hypothesis. Note that: There are many hypotheses consistent with the training data D. Why should we prefer the most specific hypothesis? What would happen if the examples are not consistent? What would happen if they have errors, noise? What if there is a hypothesis space H where one can find more that one maximally specific hypothesis h? The search over the lattice must then be different to allow for this possibility. • The algorithm that finds the maximally specific hypothesis is limited in that it only finds one of many hypotheses consistent with the training data. • The Candidate Elimination Algorithm (CEA) finds ALL hypotheses consistent with the training data. • CEA does that without explicitly enumerating all consistent hypotheses. • Chemical Mass Spectroscopy • Control Rules for Heuristic Search
{"url":"https://skedbooks.com/books/artificial-intelligence/algorithm-to-find-a-maximally-specific-hypothesis/","timestamp":"2024-11-07T00:11:00Z","content_type":"text/html","content_length":"94653","record_id":"<urn:uuid:cef01e0a-f667-4c6c-b6bc-2fa75d396db8>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00332.warc.gz"}
Dakotah Lambert Visiting Assistant Professor of Computer Science at Haverford College My work lies at the intersection of theoretical computer science, abstract algebra, and linguistics. Using connections between formal logic, finite automata, and algebra, I study the structures that arise in communication between and among humans and computers, including natural languages (human–human communication), programming languages (human–computer communication), and signaling protocols (computer–computer communication). Parameterization and decomposition of these systems arise out of their structure and inform mechanisms for recognition and learning. I have also applied these techniques to pattern-learning by neural networks, which provides insight and explainability for such models. Over the years, I have been building an interactive theorem-prover for subregular logics. The custom domain-specific language, PLEB, and its interpreter, plebby, have been instrumental to my work. Implemented in Haskell as both a library and supporting command-line tools, the Language Toolkit (LTK) allows one to build finite automata from certain kinds of quantifier-free logical expressions, to extract logical descriptions of automata, and to classify the associated language with respect to a rich subregular hierarchy. This includes classes defined by logical properties as well as those induced by user-specified pseudovarieties of semigroups or monoids. This system is also available on Hackage. Find more on Github.
{"url":"https://dakotahlambert.com/","timestamp":"2024-11-04T07:44:01Z","content_type":"text/html","content_length":"10308","record_id":"<urn:uuid:337958ef-2f93-41a3-b35d-641c46da5ae4>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00157.warc.gz"}
Lorxus's Shortform — LessWrong This is a special post for quick takes by . Only they can create top-level comments. Comments here also appear on the Quick Takes page All Posts page You maybe got stuck in some of the many local optima that Nurmela 1995 runs into. Genuinely, the best sphere code for 9 points in 4 dimensions is known to have a minimum angular separation of ~1.408 radians, for a worst-case cosine similarity of about 0.162. You got a lot further than I did with my own initial attempts at random search, but you didn't quite find it, either. EDIT: I and the person who first tried to render this SHAPE for me misunderstood its nature. Wait, some of y'all were still holding your breaths for OpenAI to be net-positive in solving alignment? After the whole "initially having to be reminded alignment is A Thing"? And going back on its word to go for-profit? And spinning up a weird and opaque corporate structure? And people being worried about Altman being power-seeking? And everything to do with the OAI board debacle? And OAI Very Seriously proposing what (still) looks to me to be like a souped-up version of Baby Alignment Researcher's Master Plan B (where A involves solving physics and C involves RLHF and cope)? That OpenAI? I just want to be very sure. Because if it took the safety-ish crew of founders resigning to get people to finally pick up on the issue... it shouldn't have. Not here. Not where people pride themselves on their lightness. 5 comments, sorted by Click to highlight new comments since: On @TsviBT's recommendation, I'm writing this up quickly here. re: the famous graph from https://transformer-circuits.pub/2022/toy_model/index.html#geometry with all the colored bands, plotting "dimensions per feature in a model with superposition", there look to be 3 obvious clusters outside of any colored band and between 2/5 and 1/2, the third of which is directly below the third inset image from the right. All three of these clusters are at 1/(1-S) ~ A picture of the plot, plus a summary of my thought processes for about the first 30 seconds of looking at it from the right perspective: In particular, the clusters appear to correspond to dimensions-per-feature of about 0.44~0.45, that is, 4/9. Given the Thomson problem-ish nature of all the other geometric structures displayed, and being professionally dubious that there should be only such structures of subspace dimension 3 or lower, my immediate suspicion since last week when I first thought about this is that the uncolored clusters should be packing 9 vectors as far apart from each other as possible on the surface of a 3-sphere in some 4D subspace. In particular, mathematicians have already found a 23-celled 4-tope with 9 vertices (which I have made some sketches of) where the angular separation between vertices is ~80.7° : http:// neilsloane.com/packings/index.html#I . Roughly, the vertices are: the north pole of S^3; on a slice just (~9°) north of the equator, the vertices of a tetrahedron "pointing" in some direction; on a slice somewhat (~19°) north of the south pole, the vertices of a tetrahedron "pointing" dually to the previous tetrahedron. The edges are given by connecting vertices in each layer to the vertices in the adjacent layer or layers. Cross sections along the axis I described look like growing tetrahedra, briefly become various octahedra as we cross the first tetrahedon, and then resolve to the final tetrahedron before vanishing. I therefore predict that we should see these clusters of 9 embedding vectors lying roughly in 4D subspaces taking on pretty much exactly the 23-cell shape mathematicians know about, to the same general precision as we'd find (say) pentagons or square antiprisms, within the model's embedding vectors, when S ~ 3/4. Potentially also there's other 3/f, 4/f, and maybe 5/f; given professional experience I would not expect to see 6+/f sorts of features, because 6+ dimensions is high-dimensional and the clusters would (approximately) factor as products of lower-dimensional clusters already listed. There's a few more clusters that I suspect might correspond to 3/7 (a pentagonal bipyramid?) or 5/12 (some terrifying 5-tope with 12 vertices, I guess), but I'm way less confident in those. A hand-drawn rendition of the 23-cell in whiteboard marker: I played with this with a colab notebook way back when. I can't visualize things directly in 4 dimensions, but at the time I came up with the trick of visualizing the pairwise cosine similarity for each pair of features, which gives at least a local sense of what the angles are like. Trying to squish 9 features into 4 dimensions looks to me like it either ends up with • 4 antipodal pairs which are almost orthogonal to one another, and then one "orphan" direction squished into the largest remaining space • 3 almost orthogonal antipodal pairs plus a "Y" shape with the narrow angle being 72º and the wide angles being 144º • For reference this is what a square antiprism looks like in this type of diagram:
{"url":"https://www.lesswrong.com/posts/wgp2Gj4dNsupbXDwx/lorxus-s-shortform","timestamp":"2024-11-13T06:32:11Z","content_type":"text/html","content_length":"167030","record_id":"<urn:uuid:ded61a4a-6819-40ad-b9f4-9870d5ef9c55>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00516.warc.gz"}
Electronic Newsletter -- #21, Winter 2015 The ASGRG has a home web page at http://www2.phys.canterbury.ac.nz/ASGRG/ Items for this newsletter should be emailed to the editor: The deadline for the next issue is 31 October, 2016. * REPORT ON ACGRG7, Hamilton Island, 9-11 December, 2013 * MINUTES OF THE 8TH BIENNIAL GENERAL MEETING OF THE ASGRG, 10 December 2013 * ACGRG8, Monash University, Melbourne, 2-4 December 2015 * MEMBERSHIP DETAILS ONLINE at * MEMBERS' ABSTRACTS at gr-qc, December 2013 - June 2015 * ABSTRACTS FROM THE LIGO SCIENTIFIC COLLABORATION at gr-qc, December 2013 - June 2015 REPORT ON ACGRG7, Reef View Hotel, Hamilton Island, December 9-11 2013 The 7th Australasian Conference on General Relativity and Gravitation (ACGRG7) was held at the Reef View Hotel on Hamilton Island, Queensland from 9 to 11 December 2013. ACGRG7 was organised by the Australian National University and was the seventh in a series of approximately biennial conferences on general relativity and gravitation organised by the ASGRG. A total of 34 presentations were given over the three days of ACGRG7, including plenary talks from David Parkinson (“Watching galaxies fall: structure formation in the universe as a probe of gravity”), Rana Adhikari (“Wideband, next-generation, gravitational-wave antenna”), Sheila Rown (“Status and plans for advanced gravitational wave detectors”), Edward Teo (“The Plebanski-Demianski solution: space-time protoType D”), Manuel Tiglio (“Reduced order modelling in general relativity”) and Ryan Shannon (“Characterising the supermassive black hole population with gravitational waves and pulsar timing arrays”). And, for the first time, the ACGRG programme included a category of shorter “invited talks”, which were given by Kazuaki Kuroda (“KAGRA Status”), Krzysztof Bolejko (“Curvature of the Universe”), Stefan Danilishin (“Need for speed: shall the 3rd generation gravitational wave interferometer be a speedmeter and if so, which species of it?”), David Ottaway (“New cavity technologies for beating thermal noise in third generation GW detectors”), and Matt Visser (“Classical and quantum flux energy conditions”). The more specialised talks at ACGRG7 covered topics as diverse as the GRACE experiment, distance measurements with WiggleZ, cosmology with the velocity subsample of the 6dF Galaxy Survey, gravitational waves from low mass X-ray binaries, squeezed states in gravitational wave detection, multilayer optical coatings for GW detection, the Gingin high optical power test facility, dark energy and neutrino masses, the nuclear equation of state, sterile neutrinos and gravity, the Newtonian limit of cosmological space times, timescape cosmology, quasilocal energy in cosmology, vacuum space times with constant Weyl scalars, linearised gravitational waves at space-like and null infinity, spikes and matter overdensities, rotating hairy black holes, Clausius entropy of null surfaces, smooth lattice relativity, and the TensorPack software for tensor algebra. The final event on the closing day, December 11, was the third award of the Kerr Prize, to mark the best student presentation at an ACGRG. As happened at the previous two conferences (in 2009 and 2011), the judges decided to split the prize between two students: Valentina Baccetti of Victoria University of Wellington who spoke about “Clausius entropy for arbitrary bifurcate null surfaces”, and Andrew Wade of the Australian National University, whose talk was entitled “A path length modulation technique for frequency shifting scatter induced noise in squeezing measurements”. The two winners shared the A$200 prize. The conference banquet was held at the Outrigger Restaurant on the evening of December 10, and a fun time was had by all. Particular thanks should go to the local organisers (Daniel Shaddock, Robert Ward, David Bowman, Susan Scott) for making ACGRG7 such a great success. held at the Reef View Hotel, Hamilton Island, Tuesday 10 December, 2013 The meeting opened at 5.15 p.m. Apologies: John Schutz 1. The minutes of the 7th Biennial General Meeting, held at Rydges Lakeside Resort, Queenstown, Thursday 9 February 2012, were presented to the meeting. Susan Scott moved that the minutes be accepted, and Matt Visser seconded. The motion was approved. 2. President's Report: David Wiltshire reported that the two major activities the Society had been involved in over the last two years were the current ACGRG and the ASGRG sessions at the 2012 AIP Congress in Sydney. The next AIP Congress was due to be held in Canberra in December 2014. The last 4 years had been frustrating for the relativity community in Australia, as the proposed gravitational wave observatory (ACIGA) had not been funded, and job prospects in the area were limited. The situation was better in New Zealand, where recent appointments of relativists included Joerg Frauendiener at the University of Otago, and Woe Chet Lim at Waikato. 22 people from Australia and New Zealand attended the Marcel Grossmann meeting in Stockholm (MG13) in 2012, and 27 people from Australia and New Zealand were at the GR20/Amaldi10 conference in Warsaw in 2013. David also mentioned that the IUPAP Young Scientist Prize is awarded every year, and that the deadline for applications for the 2014 prizes was 1 February. 3. Treasurer's Report: John Steele reported that the Society now had a total of 56 financial members, of which 47 were life members (an increase of 5 since the last ACGRG), 6 were ordinary members and 3 were student members. The Society's current account contained $14,426, excluding the expenses and income of the current conference. The Society’s main income is derived from life membership fees. John reported that, as always, the main expense of the Society was bank fees. He had not yet looked into the possibility of setting up a term deposit for the Society (as was mooted at ACGRG6), but he will. There was a need to add a PayPal button to the Society’s website so that members could pay fees online. David Wiltshire said he would ask David Bowman to do this. The Commonwealth Bank should also be informed that the Society no longer wants a charge account. 4. Auditor's Report: The Auditor, John Schutz, certified in an email dated 8 December 2013 that he was satisfied with the Society's accounts. 5. Appointment of Auditor for the next session: John Schutz was nominated to remain the Auditor of the Society's accounts. 6. Election of officers: The following people were elected officers of the ASGRG Committee by acclamation: President: Leo Brewin Treasurer: John Steele Secretary: Malcolm Anderson Officer: Susan Scott Officer: David Wiltshire Paul Lasky and Bram Slagmolen were also co-opted as Committee members. Leo Brewin thanked the outgoing President, David Wiltshire, and the other members of the old ASGRG Committee for their services over the last two years. 7. Proposed constitutional amendment: The Society's Treasurer, John Steele, submitted a resolution aiming to allow the collection and disbursement of the Society's monies through PayPal as well as by cheque or cash. As required by the Constitution, the proposed amendment was communicated to all members by email on 17 November 2013 (at least 21 days before the BGM). The resolution proposed that the financial section of the Constitution, which previously read: i) All funds received for on behalf of the Society must be deposited in a bank account. ii) All payments made by the Society in excess of five-hundred dollars must be made by cheque. All cheques must be signed by the Treasurer and one of two other members of the committee appointed by the Committee for that purpose. iii) The Treasurer is responsible for the keeping of the financial books and records of the Society, and must present to each meeting of the Committee a report of the financial transactions of the Society since the last committee meeting. iv) The Treasurer must prepare and have audited an annual balance sheet, and present these to the General Meeting or, in years where there is no General Meeting, to the Committee.” should be changed to now read: i) The Treasurer is responsible for the keeping of the financial books and records of the Society, and must present to each meeting of the Committee a report of the financial transactions of the Society since the last committee meeting. ii) In any financial year where the Society’s turnover (excluding bank fees) exceeds one thousand dollars, the Treasurer must prepare and have audited a balance sheet, and present these to the General Meeting or, in years where there is no General Meeting, to the Committee. iii) All funds received by or on behalf of the Society must be deposited in an Australian bank account in the name of the Society, unless they are collected through an account held in the name of the Society with PayPal Australia Pty Ltd (hereinafter ‘PayPal’) under the terms of subclause (v) below. Notwithstanding that, the Treasurer may, with the express approval of the Committee, use cash payments under five-hundred dollars to the Society to pay prizes or small bills under two-hundred dollars at conferences or workshops. Such transactions shall always be noted in the Society’s accounts and receipted. iv) The Treasurer shall be listed as the Primary Account holder for the Society's PayPal account, be responsible for compliance with PayPal’s terms and conditions and provide separate accounts for it to the committee and at General Meetings. The PayPal account must have the Society's main bank account as Nominated Bank Account, with all the required conditions imposed by PayPal. v) The Society’s PayPal account may only be used for the collection of funds for specific purposes, such as conferences, and membership fees. Funds in the Society's PayPal account must, within one hundred and twenty days of receipt, be transferred to the Society’s main bank account, or paid out to cover the specific purpose for which they were collected as authorised by the committee, excepting an amount not exceeding five-hundred dollars to cover any disputed payments. If any reserve placed on the Society’s account by PayPal exceeds this amount, the committee must either authorise this reserve or direct the Treasurer to close the PayPal account in line with PayPal’s procedures. If the reserve is authorised, this decision must then be ratified at the Society's next General Meeting. vi) All payments made by the Society in excess of five-hundred dollars must be made by cheque or by a direct electronic transfer expressly authorised by the Committee. All cheques must be signed by the Treasurer and one of any other members of the Committee appointed by the Committee for that purpose. Notwithstanding this, PayPal’s standard Direct Debit Agreement is permitted over the Society's The motion to change the constitution was put by John Steele, and seconded by Matt Visser. The meeting voted to accept it. 8. Date and venue for ACGRG8: Leo Brewin and Todd Oliynyk proposed that Monash University would host ACGRG8 in December 2015, and the meeting agreed. 9. Other business 2013 Kerr Prize: Leo reminded the meeting that the Kerr Prize for the best student presentation at ACGRG7 would be awarded at the end of the final session on December 11. Suggested changes to future ACGRG programmes: David McClelland raised concerns about what he saw as a disconnect between the theory and experiment talks at ACGRG7. He pointed out that the Amaldi conferences have tutorials or primers on the topics being discussed at the beginning of each day, and asked if ACGRG could do something similar. He also felt that an event like ACGRG should be devoted to student talks, with the established experts discussing their current research in poster sessions rather than in a formal setting. GR centenary year: The meeting ended with a reminder that 2015 will mark the centenary of Einstein’s elaboration of the full theory of general relativity. The Society should therefore attempt to engage a high-profile public speaker for ACGRG8 in December of that year. The meeting closed at 5.55 pm. Monash University, Melbourne, 2-4 December 2015 ACGRG8 is the eighth in a series of biennial conferences run by the ASGRG with the aim of bringing together researchers from around the world to discuss new findings in mathematical, theoretical, numerical and experimental gravitation, to make contacts and consolidate ideas. This year’s conference will run from Wednesday 2 December to Friday 4 December, and will be held on the Clayton campus of Monash University, Melbourne. The confirmed plenary speakers are: Brian Schmidt (Australian National University) Bob Wald (University of Chicago) Matt Choptuik (University of British Columbia) David McClelland (Australian National University) Joerg Frauendiener (University of Otago) Further details about the programme and registration and submission deadlines will be sent out once they are finalised. Due to requests from members, David Wiltshire has written some HTML scripts which generate membership details online from our records. If you click on you will find a members' list. Clicking on individual members gives their current contact details. By following a further link private details of the subscription status of any member will be sent to their registered email. This feature should enable us to update our records more frequently in response to members' input, and to allow members to keep track of their subscriptions. The membership script programs are intended to be run automatically once a year, at the end of July, to give members other than life members details of their current subscription status. The new version of the subscription form, at has been simplified so that it does not need to be updated each year. Given that our annual fee is modest, members are encouraged to pay for multiple years, and to fill in the years they are paying for. E.g., when the July 2014 - June 2015 subscriptions are requested, if you wish to pay for July 2015 - June 2016 at the same time, it may simplify matters. July 12-18, 2015: 14th Marcel Grossmann Meeting on General Relativity Rome, Italy July 20-31, 2015: “Quantum Fields and IR Issues in de Sitter Space” International Institute of Physics, Federal University of Rio Grande do Norte, Brazil July 22-24, 2015: 19th Paris Cosmology Colloquium “Latest News from the Universe: Lambda Warm Dark Matter Cosmology (ΛWDM), CMB, Dark Matter, Dark Energy, and Sterile Neutrinos” Ecole International Daniel Chalonge, Observatoire de Paris, France July 27-31, 2015: GR 100 in Rio “Gravitational Waves Astrophysics” Centro Brasileiro Pesquisas de Fisicas August 9-15, 2015: “Hot Topics in General Relativity and Gravitation” International Centre for Interdisciplinary Science Education, Quy Nhon, Vietnam August 20-22, 2015: Extreme Gravity Workshop Montana State University, Bozeman, Montana, USA September 7-11, 2015: Spanish Relativity Meeting (ERE2015) University of the Balearic Islands Palma Mallorca, Spain September 14 - December 18, 2015: “Mathematical general relativity, compressible fluids, and more” Centre Emile Borel, Institut Henri Poincare, Paris September 28 - October 1, 2015: “Geometric Aspects of General Relativity” University of Montpellier, France September 30 - October 3, 2015: “The Modern Physics of Compact Stars and Relativistic Gravity” Yerevan State University, Armenia October 5-7, 2015: “100 years of curved space time” Austrian Academy of Sciences, Vienna November 25-28, 2015: International Conference on General Relativity (ICGR-2015) Sant Gadge Baba Amravati University, Amravati, India November 30 - December 5, 2015: Einstein Conference 2015 “A Century of General Relativity” Max Planck Institute for Gravitational Physics, Harnack House, Berlin December 2-4, 2015: 8th Australian Conference on General Relativity and Gravitation (ACGRG8) Monash University, Melbourne December 12-14, 2015: International Conference on Relativity and Geometry in Memory of Andre Lichnerowicz Institut Henri Poincare, Paris MEMBERS' ABSTRACTS at gr-qc, December 2013 - June 2015 We list here all new abstracts that we are aware of that have been submitted by our members to gr-qc, or which are cross-linked at gr-qc. (We have not searched for abstracts on other Los Alamos archives which are not cross-linked to gr-qc.) If you do not send your papers to gr-qc but would like to have them noted in the newsletters, please send them to the Editor. Note that the 114 papers listed here and in the LIGO section represent 1.64% of the 6960 papers posted or cross-linked to gr-qc between December 2013 and June 2015. arXiv:1405.0970 [gr-qc] Twisted Skyrmion String Miftachul Hadi, Malcolm Anderson, Andri Husein (Submitted on 30 Apr 2014 (v1), last revised 7 May 2014 (this version, v2)) We study nonlinear sigma model, especially Skyrme model without twist and Skyrme model with twist: twisted Skyrmion string. Twist term, mkz, is indicated in vortex solution. Necessary condition for stability of vortex solution has consequence that energy of vortex is minimum and scale-free (vortex solution is neutrally stable to changes in scale). We find numerically that the value of vortex minimum energy per unit length for twisted Skyrmion string is 20.37×1060 eV/m. Comments: 12 pages, 2 figures. Presented at CTPNP 2014 and submitted to Journal of Physics: Conference Series. arXiv:1505.05930 [gr-qc] Mathematical modeling of the gravitational field of a twisted Skyrmion string Miftachul Hadi, Malcolm Anderson, Andri Husein (Submitted on 22 May 2015) In this paper we study the gravitational field of a straight string generated from a class of nonlinear sigma models, specifically the Skyrme model without a twist and the Skyrme model with a twist (the twisted Skyrmion string). The twist term, mkz, is included to stabilize the vortex solution. To model the effects of gravity, we replace the Minkowski tensor, ημν, in the standard Skyrme Lagrangian density with a space-time metric tensor, gμν, assumed to be static and cylindrically symmetric. The Einstein equations for the metric and field components are then derived. This work is still in progress. Comments: 10 pages, prepared for submission to Physical Review D arXiv:1506.01101 [gr-qc] The Gravitational Field of a Twisted Skyrmion Miftachul Hadi, Malcolm Anderson, Andri Husein (Submitted on 3 Jun 2015 (v1), last revised 24 Jun 2015 (this version, v2)) In this paper we study the gravitational field of a straight string generated from a class of nonlinear sigma models, specifically the Skyrme model with a twist (the twisted Skyrmion). The twist term, mkz, is included to stabilize the vortex solution. To model the effects of gravity, we replace the Minkowski tensor, ημν, in the standard Skyrme Lagrangian density with a space-time metric tensor, gμν, assumed to be static and cylindrically symmetric. The Einstein equations for the metric and field components are then derived. This work is still in progress. Comments: 5 pages, no figure, presented at ICMNS Bandung, Indonesia, 2 Nov 2014 and submitted to AIP Conference Proceedings arXiv:1401.0787 [gr-qc] Regular and conformal regular cores for static and rotating solutions Mustapha Azreg-Aïnou (Submitted on 4 Jan 2014 (v1), last revised 10 Jan 2014 (this version, v3)) Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress-energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that, conformal fluids used as cores for static or rotating solutions, are exempt from any malicious behavior in that they are finite and defined everywhere. Journal reference: Phys. Lett. B 730 (2014) 95 arXiv:1401.4292 [gr-qc] From static to rotating to conformal static solutions: Rotating imperfect fluid wormholes with(out) electric or magnetic field Mustapha Azreg-Aïnou (Submitted on 17 Jan 2014 (v1), last revised 16 Apr 2014 (this version, v3)) We derive a shortcut stationary metric formula for generating imperfect fluid rotating solutions, in Boyer-Lindquist coordinates, from spherically symmetric static ones. We explore the properties of the curvature scalar and stress-energy tensor for all types of rotating regular solutions we can generate without restricting ourselves to specific examples of regular solutions (regular black holes or wormholes). We show through examples how it is generally possible to generate an imperfect fluid regular rotating solution via radial coordinate transformations. We derive rotating wormholes that are modeled as imperfect fluids and discuss their physical properties that are independent on the way the stress-energy tensor is interpreted. A solution modeling an imperfect fluid rotating loop black hole is briefly discussed. We then specialize to the recently discussed stable exotic dust Ellis wormhole emerged in a source-free radial electric or magnetic field, generate its, conjecturally stable, rotating counterpart which turns out to be an exotic imperfect fluid wormhole and determine the stress-energy tensor of both the imperfect fluid and the electric or magnetic field. Journal reference: Eur. Phys. J. C (2014) 74:2865 arXiv:1405.2569 [gr-qc] Generating rotating regular black hole solutions without complexification Mustapha Azreg-Aïnou (Submitted on 11 May 2014 (v1), last revised 24 Sep 2014 (this version, v3)) We drop the complexification procedure from the Newman-Janis algorithm and introduce more physical arguments and symmetry properties, and we show how one can generate regular and singular rotating black hole and non-black-hole solutions in Boyer-Lindquist coordinates. We focus on generic rotating regular black holes and show that they are regular on the Kerr-like ring but physical entities are undefined there. We show that rotating regular black holes have much smaller electric charges, and, with increasing charge, they turn into regular non-black-hole solutions well before their Kerr-Newman counterparts become naked singularities. No causality violations occur in the region inside a rotating regular black hole. The separability of the Hamilton-Jacobi equation for neutral particles is also carried out in the generic case, and the innermost boundaries of circular orbits for particles are briefly discussed. Other, but special, properties pertaining to the rotating regular counterpart of the Ay\'on-Beato--Garc\'{\i}a regular static black hole are also investigated. Journal reference: Phys. Rev. D 90, 064041 (2014) arXiv:1405.5745 [gr-qc] Phantom black holes and critical phenomena Mustapha Azreg-Aïnou, Glauber T. Marques, Manuel E. Rodrigues (Submitted on 22 May 2014 (v1), last revised 24 Jun 2014 (this version, v2)) We consider the two classes cosh and sinh of normal and phantom black holes of Einstein-Maxwell-dilaton theory. The thermodynamics of these holes is characterized by heat capacities that may have both signs depending on the parameters of the theory. Leaving aside the normal Reissner-Nordstr\"om black hole, it is shown that only some phantom black holes of both classes exhibit critical phenomena. The two classes share a nonextremality, but special, critical point where the transition is continuous and the heat capacity, at constant charge, changes sign with an infinite discontinuity. This point yields a classification scheme for critical points. It is concluded that the two unstable and stable phases coexist on one side of the criticality state and disappear on the other side, that is, there is no configuration where only one phase exists. The sinh class has an extremality critical point where the entropy diverges. The transition from extremality to nonextremality with the charge held constant is accompanied by a loss of mass and an increase in the temperature. A special case of this transition is when the hole is isolated (microcanonical ensemble), it will evolve by emission of energy, which results in a decrease of its mass, to the final state of minimum mass and vanishing heat capacity. The Ehrenfest scheme of classification is inaccurate in this case but the generalized one due to Hilfer leads to conclude that the transition is of order less than unity. Fluctuations near criticality are also investigated. Journal reference: JCAP07 (2014) 036 arXiv:1410.1737 [gr-qc] Charged de Sitter-like black holes: quintessence-dependent enthalpy and new extreme solutions Mustapha Azreg-Aïnou (Submitted on 7 Oct 2014 (v1), last revised 9 Oct 2014 (this version, v2)) We consider Reissner-Nordstr\"{o}m black holes surrounded by quintessence where both a non-extremal event horizon and a cosmological horizon exist besides an inner horizon (−1≤ω<−1/3). We determine new extreme black hole solutions that generalize the Nariai horizon to asymptotically de Sitter-like solutions for any order relation between the squares of the charge q2 and the mass parameter M2 provided q2 remains smaller than some limit, which is larger than M2. In the limit case q2=9ω2M2/(9ω2−1), we derive the general expression of the extreme cosmo-black-hole, where the three horizons merge, and discuss some of its properties. We also show that the endpoint of the evaporation process is independent of any order relation between q2 and M2. The Teitelboim's energy and Padmanabhan's energy are related by a nonlinear expression and are shown to correspond to different ensembles. We also determine the enthalpy H of the event horizon, as well as the effective thermodynamic volume which is the conjugate variable of the negative quintessential pressure, and show that in general the mass parameter and the Teitelboim's energy are different from the enthalpy and internal energy; only in the cosmological case, that is, for Reissner-Nordstr\"{o}m-de Sitter black hole we have H=M. Generalized Smarr formulas are also derived. It is concluded that the internal energy has a universal expression for all static charged black holes, with possibly a variable mass parameter, but it is not a suitable thermodynamic potential for static-black-hole thermodynamics if M is constant. It is also shown that the Reverse Isoperimetric Inequality holds. We generalize the results to the case of the Reissner-Nordstr\"{o}m-de Sitter black hole surrounded by quintessence with two physical constants yielding two thermodynamic volumes. Journal reference: Eur. Phys. J. C (2015) 75:34 arXiv:1411.2386 [gr-qc] Black hole thermodynamics: No inconsistency via the inclusion of the missing P-V terms Mustapha Azreg-Aïnou (Submitted on 10 Nov 2014 (v1), last revised 23 Mar 2015 (this version, v2)) The early literature on black hole thermodynamics ignored the P-V term associated with the existence of a fundamental physical constant in the black hole solution. The inclusion of this constant in the first law becomes inconsistent with the Smarr relation. Once the missing P-V term introduced is, it becomes customary to introduce it only in problems where there is a negative cosmological constant. This practice is inherited from cosmological approaches which consider the quantity −Λ/8π as the constant pressure due to a cosmological fluid. However, the notions of pressure and thermodynamic volume in black hole thermodynamics are very different from their counterparts in classical thermodynamics. From this point of view, there is \textit{a priori} no compelling reason to not extend this notion of pressure and associate a partial pressure with each external" density 8πTtt. In this work, we associate a partial pressure with a variable mass parameter as well as with each tt component of the effective stress-energy tensor Teffμν but not with the linear component of the electromagnetic field. Using the field equations Gμν=8πTeffμν, we derive universal expressions for the enthalpy, internal energy, free energies, thermodynamic volume, equation of state, law of corresponding states, criticality, and critical exponents of static (nonrotating) charged black holes, with possibly a variable mass parameter, whether they are solutions to the Einstein field equations or not. We extend the derivation to the case where the black hole is immersed in the field of a quintessence force and to the multiforce case. Many applications and extensions are considered, including applications to regular black holes derived in previous and present work. No inconsistency has been noticed in their thermodynamics. Journal reference: Phys. Rev. D 91, 064049 (2015) arXiv:1412.8282 [gr-qc] Confined-exotic-matter wormholes with no gluing effects -- Imaging supermassive type (1) wormholes and black holes Mustapha Azreg-Aïnou (Submitted on 29 Dec 2014) We classify wormholes endowed with redshift effects and finite mass into three types. Type (1) wormholes have their radial pressure dying out faster, as one moves away from the throat, than any other component of the stress-energy and thus violate the least the local energy conditions. We introduce a novel and generalizable method for deriving, with no cutoff in the stress-energy or gluing, the three types of wormholes. We focus on type (1) wormholes and construct different asymptotically flat solutions with finite, upper- and lower-bounded, mass M. It is observed that the radial pressure is negative, and the null energy condition is violated, only inside a narrow shell, adjacent to the throat, of relative spacial extent ϵ. Reducing the relative size of the shell, without harming the condition of traversability, yields an inverse square law of ϵ versus M for supermassive wormholes. We show that the diameter of the shadow of this type (1) supermassive wormhole overlaps with that of the black hole candidate at the center of the Milky Way and that the recent derivation, using the up-to-date millimeter-wavelength very long baseline interferometry made in Astrophys. J. \textbf {795} 134 (2014) [arXiv:1409.4690], remains inconclusive. We show that redshift-free wormholes, with positive energy density, have one of their barotropic equations of state in the phantom regime, have their stress energy tensor traceless, and are anisotropic. They are all type (3) wormholes having their variable equations of state approaching 1 and −1 at spatial infinity. We also introduce a new approach for deriving new redshift-free arXiv:1505.01276 [gr-qc] Two-fluid-sourced rotating wormholes Mustapha Azreg-Aïnou (Submitted on 6 May 2015) We briefly discuss some of the known and new properties of rotating geometries that are relevant to this work. We generalize the analytical method of superposition of fields, known for generating nonrotating solutions, and apply it to construct massless and massive rotating physical wormholes sourced by a source-free electromagnetic field and an exotic fluid both anisotropic. Their stress-energy tensors are presented in compact and general forms. For the massive rotating wormholes there exists a mass-charge constraint yielding almost no more dragging effects than ordinary stars. There are conical spirals through the throat along which the null and weak energy conditions are not violated for these rotating wormholes. This conclusion extends to nonrotating massive type I wormholes derived previously by the author that seem to be the first kind of nonrotating wormholes with this property. arXiv:1409.4300 [gr-qc] Einstein Spacetimes with Constant Weyl Eigenvalues Alan Barnes (Submitted on 15 Sep 2014) Einstein spacetimes (that is vacuum spacetimes possibly with a non-zero cosmological constant {\Lambda}) with constant non-zero Weyl eigenvalus are considered. For type Petrov II & D this assumption allows one to prove that the non-repeated eigenvalue necessarily has the value 2{\Lambda}/3 and it turns out that the only possible spacetimes are some Kundt-waves considered by Lewandowski which are type II and a Robinson-Bertotti solution of type D. For Petrov type I the only solution turns out to be a homogeneous pure vacuum solution found long ago by Petrov using group theoretic methods. These results can be summarised by the statement that the only vacuum spacetimes with constant Weyl eigenvalues are either homogeneous or are Kundt space- times. This result is similar to that of Coley et al. who proved their result for general spacetimes under the assumption that all scalar invariants constructed from the curvature tensor and all its derivatives were constant. Some preliminary results are also presented for Petrov Type I vacua in which either only one of the Weyl eigenvalues is constant or in which the ratios of the Weyl eigenvalues are constants. In particular in each case there is a simple algebraic relation between the Newman-Penrose Weyl tensor components Ψ2 & Ψ0(=Ψ4) and the 'cross-ratio' of the optical scalars {\kappa}{\nu}-{\sigma}{\lambda} of the associated principal null tetrad of the Weyl tensor. Comments: 21 pages. Extended version of a talk given at the Spanish Relativity Meeting, Valencia, 2014. arXiv:1408.4201 [gr-qc] The Attached Point Topology of the Abstract Boundary For Space-Time Richard A. Barry, Susan M. Scott (Submitted on 19 Aug 2014) Singularities play an important role in General Relativity and have been shown to be an inherent feature of most physically reasonable space-times. Despite this, there are many aspects of singularities that are not qualitatively or quantitatively understood. The abstract boundary construction of Scott and Szekeres has proven to be a flexible tool with which to study the singular points of a manifold. The abstract boundary construction provides a 'boundary' for any n-dimensional, paracompact, connected, Hausdorff, smooth manifold. Singularities may then be defined as entities in this boundary - the abstract boundary. In this paper a topology is defined, for the first time, for a manifold together with its abstract boundary. This topology, referred to as the attached point topology, thereby provides us with a description of how the abstract boundary is related to the underlying manifold. A number of interesting properties of the topology are considered, and in particular, it is demonstrated that the attached point topology is Hausdorff. Journal reference: Class. Quantum Grav. 28(16) 165003 2011 arXiv:1408.4203 [gr-qc] The Strongly Attached Point Topology of the Abstract Boundary For Space-Time Richard A. Barry, Susan M. Scott (Submitted on 19 Aug 2014) The abstract boundary construction of Scott and Szekeres provides a ‘boundary’ for any n-dimensional, paracompact, connected, Hausdorff, smooth manifold. Singularities may then be defined as objects within this boundary. In a previous paper by the authors, a topology referred to as the attached point topology was defined for a manifold and its abstract boundary, thereby providing us with a description of how the abstract boundary is related to the underlying manifold. In this paper, a second topology, referred to as the strongly attached point topology, is presented for the abstract boundary construction. Whereas the abstract boundary was effectively disconnected from the manifold in the attached point topology, it is very much connected in the strongly attached point topology. A number of other interesting properties of the strongly attached point topology are considered, each of which support the idea that it is a very natural and appropriate topology for a manifold and its abstract boundary. Journal reference: Class. Quantum Grav. 31(12) 125004 2014 arXiv:1312.0795 [gr-qc] Exact Global Phantonical Solutions in the Emergent Universe A. Beesham, S. V. Chervon, S. D. Maharaj, A. S. Kubasov (Submitted on 3 Dec 2013 (v1), last revised 5 Dec 2013 (this version, v2)) We present new classes of exact solutions for an Emergent Universe supported by phantom and canonical scalar fields in the framework of a two-component chiral cosmological model. We outline in detail the method of deriving exact solutions, discuss the potential and kinetic interaction for the model and calculate key cosmological parameters. We suggest that this this model be called a {\it phantonical Emergent Universe} because of the necessity to have phantom and canonical chiral fields. The solutions obtained are valid for all time. arXiv:1312.1770 [gr-qc] An Emergent Universe with Dark Sector Fields in a Chiral Cosmological Model A. Beesham, S. V. Chervon, S. D. Maharaj, A. S. Kubasov (Submitted on 6 Dec 2013) We consider the emergent universe scenario supported by a chiral cosmological model with two interacting dark sector fields: phantom and canonical. We investigate the general properties of the evolution of the kinetic and potential energies as well as the development of the equation of state with time. We present three models based on asymptotic solutions and investigate the phantom part of the potential and chiral metric components. The exact solution corresponding to a global emergent universe scenario, starting from the infinite past and evolving to the infinite future, has been obtained for the first time for a chiral cosmological model. The behavior of the chiral metric components responsible for the kinetic interaction between the phantom and canonical scalar fields has been analyzed as well. Journal reference: Quantum Matter 2: 388-395 (2013) arXiv:1405.3459 [gr-qc] Nonlinear bulk viscosity and the stability of accelerated expansion in FRW spacetime G. Acquaviva, A. Beesham (Submitted on 14 May 2014) In the context of dark energy solutions, we consider a Friedmann-Robertson-Walker spacetime filled with a non-interacting mixture of dust and a viscous fluid, whose bulk viscosity is governed by the nonlinear model proposed in [15]. Through a phase space analysis of the equivalent dynamical system, existence and stability of critical solutions are established and the respective scale factors are computed. The results point towards the possibility of describing the current accelerated expansion of the Universe by means of the abovementioned nonlinear model for viscosity. arXiv:1405.7219 [gr-qc] An Emergent Universe supported by chiral cosmological fields in Einstein--Gauss--Bonnet gravity Sergey V. Chervon, Sunil D. Maharaj, Aroonkumar Beesham and Aleksandr Kubasov (Submitted on 28 May 2014) We propose the application of the chiral cosmological model (CCM) for the Einstein--Gauss--Bonnet (EGB) theory of gravitation with the aim of finding new models of the Emergent Universe (EmU) scenario. We analysed the EmU supported by two chiral cosmological fields for a spatially flat universe, while we have used three chiral fields when we investigated open and closed universes. To prove the validity of the EmU scenario we fixed the scale factor and found the exact solution by decomposition of EGB equations and solving the chiral field dynamics equation. To this end, we suggested the decomposition of the EGB equations in such a way that the first chiral field is responsible for the Einstein part of the model, while the second field, together with kinetic interaction term, is connected with the Gauss--Bonnet part of the theory. We proved that both fields are phantom ones under this decomposition, and that the model has a solution if the kinetic interaction between the fields equals a constant. We have presented the exact solution in terms of cosmic time. This was done for a spatially flat universe. In the case of open and closed universes we introduced the third chiral field (canonical for closed and phantom for open universe) which is responsible for the EGB and curvature parts. The solution of the third field equation is obtained in quadratures. Thus we have proved that the CCM is able to support EmU scenario in EGB gravity for spatially flat, open and closed universes. Report number: LGCA/5-2014 arXiv:1505.01965 [gr-qc] A phase space analysis for nonlinear bulk viscous cosmology Giovanni Acquaviva, Aroonkumar Beesham (Submitted on 8 May 2015) We consider a Friedmann-Robertson-Walker spacetime filled with both viscous radiation and nonviscous dust. The former has a bulk viscosity which is proportional to an arbitrary power of the energy density, i.e. \zeta \propto \rho_v^{\nu}, and viscous pressure satisfying a nonlinear evolution equation. The analysis is carried out in the context of dynamical systems and the properties of solutions corresponding to the fixed points are discussed. For some ranges of the relevant parameter \nu we find that the trajectories in the phase space evolve from a FRW singularity towards an asymptotic de Sitter attractor, confirming and extending previous analysis in the literature. For other values of the parameter, instead, the behaviour differs from previous works. arXiv:1409.3284 [gr-qc] Time evolution of parametric instability in large-scale gravitational-wave interferometers Stefan L. Danilishin, Sergey P. Vyatchanin, David G. Blair, Ju Li, Chunnong Zhao (Submitted on 11 Sep 2014 (v1), last revised 6 Dec 2014 (this version, v3)) We present a study of three-mode parametric instability in large-scale gravitational-wave detectors. Previous work used a linearised model to study the onset of instability. This paper presents a non-linear study of this phenomenon, which shows that the initial stage of exponential rise of the amplitudes of a higher order optical mode and the mechanical internal mode of the mirror is followed by a saturation phase, in which all three participating modes reach a new equilibrium state with constant oscillation amplitudes. Results suggest that stable operation of interferometers may be possible in the presence of such instabilities, thereby simplifying the task of suppression. Journal reference: Phys. Rev. D 90, 122008 (2014) arXiv:1501.01542 [physics.optics] Parametric Instability in Long Optical Cavities and Suppression by Dynamic Transverse Mode Frequency Modulation Chunnong Zhao, Li Ju, Qi Fang, Carl Blair, Jiayi Qin, David Blair, Jerome Degallaix, Hiroaki Yamamoto (Submitted on 7 Jan 2015 (v1), last revised 13 Jan 2015 (this version, v2)) Three mode parametric instability has been predicted in Advanced gravitational wave detectors. Here we present the first observation of this phenomenon in a large scale suspended optical cavity designed to be comparable to those of advanced gravitational wave detectors. Our results show that previous modelling assumptions that transverse optical modes are stable in frequency except for frequency drifts on a thermal deformation time scale is unlikely to be valid for suspended mass optical cavities. We demonstrate that mirror figure errors cause a dependence of transverse mode offset frequency on spot position. Combined with low frequency residual motion of suspended mirrors, this leads to transverse mode frequency modulation which suppresses the effective parametric gain. We show that this gain suppression mechanism can be enhanced by laser spot dithering or fast thermal modulation. Using Advanced LIGO test mass data and thermal modelling we show that gain suppression factors of 10-20 could be achieved for individual modes, sufficient to greatly ameliorate the parametric instability problem. arXiv:1403.1718 [astro-ph.CO] Nonlinear Chaplygin Gas Cosmologies P. P. Avelino, K. Bolejko, G. F. Lewis (Submitted on 7 Mar 2014 (v1), last revised 20 May 2014 (this version, v2)) We study the nonlinear regime of Unified Dark Energy models, using Generalized Chaplygin Gas cosmologies as a representative example, and introduce a new parameter characterizing the level of small scale clustering in these scenarios. We show that viable Generalized Chaplygin Gas cosmologies, consistent with the most recent observational constraints, may be constructed for any value of the Generalized Chaplygin Gas parameter by considering models with a sufficiently high level of nonlinear clustering. Journal reference: Phys.Rev.D89:103004,2014 arXiv:1412.4976 [astro-ph.CO] A new test of the FLRW metric using distance sum rule Syksy Rasanen, Krzysztof Bolejko, Alexis Finoguenov (Submitted on 16 Dec 2014 (v1), last revised 30 Dec 2014 (this version, v2)) We present a new test of the validity of the Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) metric, based on comparing the distance from redshift 0 to z1 and from z1 to z2 to the distance from 0 to z2. If the universe is described by the FLRW metric, the comparison provides a model-independent measurement of spatial curvature. The test is kinematic and relies on geometrical optics, it is independent of the matter content of the universe and the applicability of the Einstein equation on cosmological scales. We apply the test to observations, using the Union2.1 compilation of supernova distances and Sloan Lens ACS Survey galaxy strong lensing data. The FLRW metric is consistent with the data, and the spatial curvature parameter is constrained to be −1.22<ΩK0<0.63, or −0.08<ΩK0<0.97 with a prior from the cosmic microwave background and the local Hubble constant, though modelling of the lenses causes significant systematic uncertainty. Report number: HIP-2014-37/TH arXiv:1505.00067 [gr-qc] A numerical study of the Regge Calculus and Smooth Lattice methods on a Kasner cosmology Leo Brewin (Submitted on 1 May 2015) Two lattice based methods for numerical relativity, the Regge Calculus and the Smooth Lattice Relativity, will be compared with respect to accuracy and computational speed in a full 3+1 evolution of initial data representing a standard Kasner cosmology. It will be shown that both methods provide convergent approximations to the exact Kasner cosmology. It will also be shown that the Regge Calculus is of the order of 110 times slower than the Smooth Lattice method. arXiv:1402.3581 [gr-qc] Spin-Precession: Breaking the Black Hole--Neutron Star Degeneracy Katerina Chatziioannou, Neil Cornish, Antoine Klein, Nicolas Yunes (Submitted on 14 Feb 2014 (v1), last revised 18 Dec 2014 (this version, v2)) Mergers of compact stellar remnant are prime targets for the LIGO/Virgo gravitational wave detectors. One hopes that the gravitational wave signals from these merger events can be used to study the mass and spin distribution of stellar remnants, and provide information about black hole horizons and the material properties of neutron stars. However, it has been suggested that degeneracies in the way that the star's mass and spin are imprinted in the waveforms may make it impossible to distinguish between black holes and neutron stars. Here we show that the precession of the orbital plane due to spin-orbit coupling breaks the mass-spin degeneracy, and allows us to distinguish between standard neutron stars and alternative possibilities, such as black holes or exotic neutron stars with large masses and spins. Journal reference: ApJ, 798, L17 (2015) arXiv:1404.3180 [gr-qc] Detection and Parameter Estimation of Gravitational Waves from Compact Binary Inspirals with Analytical Double-Precessing Templates Katerina Chatziioannou, Neil Cornish, Antoine Klein, Nicolas Yunes (Submitted on 11 Apr 2014 (v1), last revised 14 May 2014 (this version, v2)) We study the performance of various analytical frequency-domain templates for detection and parameter estimation of gravitational waves from spin-precessing, quasi-circular, compact binary inspirals. We begin by assessing the extent to which non-spinning, spin-aligned, and the new (analytical, frequency-domain, small-spin) double-precessing frequency-domain templates can be used to detect signals from such systems. For effective, dimensionless spin values above 0.2, the use of non-spinning or spin-aligned templates for detection purposes will result in a loss of up to 30 of all events, while in the case of the double-precessing model, this never exceeds 6. Moreover, even for signals from systems with small spins, non-spinning and spin-aligned templates introduce large biases in the extracted masses and spins. The use of a model that encodes spin-induced precession effects, such as the double-precessing model, improves the mass and spin extraction by up to an order of magnitude. The additional information encoded in the spin-orbit interaction is invaluable if one wishes to extract the maximum amount of information from gravitational wave signals. DOI: 10.1103/PhysRevD.89.104023 arXiv:1406.4511 [gr-qc] Mapping the nano-Hertz gravitational wave sky Neil J. Cornish, Rutger van Haasteren (Submitted on 17 Jun 2014 (v1), last revised 19 Jun 2014 (this version, v2)) We describe a new method for extracting gravitational wave signals from pulsar timing data. We show that any gravitational wave signal can be decomposed into an orthogonal set of sky maps, with the number of maps equal to the number of pulsars in the timing array. These maps may be used as a basis to construct gravitational wave templates for any type of source, including collections of point sources. A variant of the standard Hellings-Downs correlation analysis is recovered for statistically isotropic signals. The template based approach allows us to probe potential anisotropies in the signal and produce maps of the gravitational wave sky. arXiv:1407.7038 [gr-qc] Projected Constraints on Scalarization with Gravitational Waves from Neutron Star Binaries Laura Sampson, Nicolas Yunes, Neil Cornish, Marcelo Ponce, Enrico Barausse, Antoine Klein, Carlos Palenzuela, Luis Lehner (Submitted on 25 Jul 2014 (v1), last revised 28 Nov 2014 (this version, v2)) Certain scalar-tensor theories have the property of endowing stars with scalar hair, sourced either by the star's own compactness (spontaneous scalarization) or, for binary systems, by the companion's scalar hair (induced scalarization) or by the orbital binding energy (dynamical scalarization). Scalarized stars in binaries present different conservative dynamics than in General Relativity, and can also excite a scalar mode in the metric perturbation that carries away dipolar radiation. As a result, the binary orbit shrinks faster than predicted in General Relativity, modifying the rate of decay of the orbital period. In spite of this, scalar-tensor theories can pass existing binary pulsar tests, because observed pulsars may not be compact enough or sufficiently orbitally bound to activate scalarization. Gravitational waves emitted during the last stages of compact binary inspirals are thus ideal probes of scalarization effects. For the standard projected sensitivity of advanced LIGO, we here show that, if neutron stars are sufficiently compact to enter the detector's sensitivity band already scalarized, then gravitational waves could place constraints at least comparable to binary pulsars. If the stars dynamically scalarize while inspiraling in band, then constraints are still possible provided the scalarization occurs sufficiently early in the inspiral, roughly below an orbital frequency of 50Hz. In performing these studies, we derive an easy-to-calculate data analysis measure, an integrated phase difference between a General Relativistic and a modified signal, that maps directly to the Bayes factor so as to determine whether a modified gravity effect is detectable. Finally, we find that custom-made templates are equally effective as model-independent, parameterized post-Einsteinian waveforms at detecting such modified gravity effects at realistic signal-to-noise ratios. Comments: 26 pages, 15 figures, 2 tables. Version resubmitted to PRD after referee report. arXiv:1408.5158 [gr-qc] Fast Frequency-domain Waveforms for Spin-Precessing Binary Inspirals Antoine Klein, Neil Cornish, Nicolás Yunes (Submitted on 21 Aug 2014 (v1), last revised 9 Dec 2014 (this version, v2)) The detection and characterization of gravitational wave signals from compact binary coalescence events relies on accurate waveform templates in the frequency domain. The stationary phase approximation (SPA) can be used to compute closed-form frequency-domain waveforms for non-precessing, quasi-circular binary inspirals. However, until now, no fast frequency-domain waveforms have existed for generic, spin-precessing quasi-circular compact binary inspirals. Templates for these systems have had to be computed via a discrete Fourier transform of finely sampled time-domain waveforms, which is far more computationally expensive than those constructed directly in the frequency-domain, especially for those systems that are dominated by the inspiral part. There are two obstacles to deriving frequency-domain waveforms for precessing systems: (i) the spin-precession equations do not admit closed-form solutions for generic systems; (ii) the SPA fails catastrophically. Presently there is no general solution to the first problem, so we must resort to numerical integration of the spin precession equations. This is not a significant obstacle, as numerical integration on the slow precession timescale adds very little to the computational cost of generating the waveforms. Our main result is to solve the second problem, by providing an alternative to the SPA that we call the method of Shifted Uniform Asymptotics, or SUA, that cures the divergences in the SPA. The construction of frequency-domain templates using the SUA can be orders of magnitude more efficient than the time-domain ones obtained through a discrete Fourier transform. Moreover, this method is very faithful to the discrete Fourier transform, with mismatches on the order of 10−5. Journal reference: Phys. Rev. D 90, 124029 (2014) arXiv:1410.3835 [gr-qc] BayesWave: Bayesian Inference for Gravitational Wave Bursts and Instrument Glitches Neil J. Cornish, Tyson B. Littenberg (Submitted on 14 Oct 2014 (v1), last revised 7 May 2015 (this version, v3)) A central challenge in Gravitational Wave Astronomy is identifying weak signals in the presence of non-stationary and non-Gaussian noise. The separation of gravitational wave signals from noise requires good models for both. When accurate signal models are available, such as for binary Neutron star systems, it is possible to make robust detection statements even when the noise is poorly understood. In contrast, searches for "un-modeled" transient signals are strongly impacted by the methods used to characterize the noise. Here we take a Bayesian approach and introduce a multi-component, variable dimension, parameterized noise model that explicitly accounts for non-stationarity and non-Gaussianity in data from interferometric gravitational wave detectors. Instrumental transients (glitches) and burst sources of gravitational waves are modeled using a Morlet-Gabor continuous wavelet frame. The number and placement of the wavelets is determined by a trans-dimensional Reversible Jump Markov Chain Monte Carlo algorithm. The Gaussian component of the noise and sharp line features in the noise spectrum are modeled using the BayesLine algorithm, which operates in concert with the wavelet model. Comments: 36 pages, 15 figures, Version accepted by Class. Quant. Grav arXiv:1410.3852 [gr-qc] BayesLine: Bayesian Inference for Spectral Estimation of Gravitational Wave Detector Noise Tyson B. Littenberg, Neil J. Cornish (Submitted on 14 Oct 2014) Gravitational wave data from ground-based detectors is dominated by instrument noise. Signals will be comparatively weak, and our understanding of the noise will influence detection confidence and signal characterization. Mis-modeled noise can produce large systematic biases in both model selection and parameter estimation. Here we introduce a multi-component, variable dimension, parameterized model to describe the Gaussian-noise power spectrum for data from ground-based gravitational wave interferometers. Called BayesLine, the algorithm models the noise power spectral density using cubic splines for smoothly varying broad-band noise and Lorentzians for narrow-band line features in the spectrum. We describe the algorithm and demonstrate its performance on data from the fifth and sixth LIGO science runs. Once fully integrated into LIGO/Virgo data analysis software, BayesLine will produce accurate spectral estimation and provide a means for marginalizing inferences drawn from the data over all plausible noise spectra. arXiv:1412.6479 [gr-qc] Multivariate Classification with Random Forests for Gravitational Wave Searches of Black Hole Binary Coalescence Paul T. Baker, Sarah Caudill, Kari A. Hodge, Dipongkar Talukder, Collin Capano, Neil J. Cornish (Submitted on 19 Dec 2014) Searches for gravitational waves produced by coalescing black hole binaries with total masses ≳25M⊙ use matched filtering with templates of short duration. Non-Gaussian noise bursts in gravitational wave detector data can mimic short signals and limit the sensitivity of these searches. Previous searches have relied on empirically designed statistics incorporating signal-to-noise ratio and signal-based vetoes to separate gravitational wave candidates from noise candidates. We report on sensitivity improvements achieved using a multivariate candidate ranking statistic derived from a supervised machine learning algorithm. We apply the random forest of bagged decision trees technique to two separate searches in the high mass (≳25M⊙) parameter space. For a search which is sensitive to gravitational waves from the inspiral, merger, and ringdown (IMR) of binary black holes with total mass between 25M⊙ and 100M⊙, we find sensitive volume improvements as high as 70±13−109±11\% when compared to the previously used ranking statistic. For a ringdown-only search which is sensitive to gravitational waves from the resultant perturbed intermediate mass black hole with mass roughly between 10M⊙ and 600M⊙, we find sensitive volume improvements as high as 61±4−241±12\% when compared to the previously used ranking statistic. We also report how sensitivity improvements can differ depending on mass regime, mass ratio, and available data quality information. Finally, we describe the techniques used to tune and train the random forest classifier that can be generalized to its use in other searches for gravitational waves. Report number: LIGO Document P1400231 arXiv:1501.05343 [astro-ph.GA] NANOGrav Constraints on Gravitational Wave Bursts with Memory Z. Arzoumanian, A. Brazier, S. Burke-Spolaor, S. J. Chamberlin, S. Chatterjee, B. Christy, J. M. Cordes, N. J. Cornish, P. B. Demorest, X. Deng, T. Dolch, J. A. Ellis, R. D. Ferdman, E. Fonseca, N. Garver-Daniels, F. Jenet, G. Jones, V. M. Kaspi, M. Koop, M. T. Lam, T. J. W. Lazio, L. Levin, A. N. Lommen, D. R. Lorimer, J. Luo, R. S. Lynch, D. R. Madison, M. A. McLaughlin, S. T. McWilliams, D. J. Nice, N. Palliyaguru, T. T. Pennucci, S. M. Ransom, X. Siemens, I. H. Stairs, D. R. Stinebring, K. Stovall, J. Swiggum, M. Vallisneri, R. van Haasteren, Y. Wang, W. W. Zhu (Submitted on 21 Jan 2015) Among efforts to detect gravitational radiation, pulsar timing arrays are uniquely poised to detect "memory" signatures, permanent perturbations in spacetime from highly energetic astrophysical events such as mergers of supermassive black hole binaries. The North American Nanohertz Observatory for Gravitational Waves (NANOGrav) observes dozens of the most stable millisecond pulsars using the Arecibo and Green Bank radio telescopes in an effort to study, among other things, gravitational wave memory. We herein present the results of a search for gravitational wave bursts with memory (BWMs) using the first five years of NANOGrav observations. We develop original methods for dramatically speeding up searches for BWM signals. In the directions of the sky where our sensitivity to BWMs is best, we would detect mergers of binaries with reduced masses of 109 M⊙ out to distances of 30 Mpc; such massive mergers in the Virgo cluster would be marginally detectable. We find no evidence for BWMs. However, with our non-detection, we set upper limits on the rate at which BWMs of various amplitudes could have occurred during the time spanned by our data--e.g., BWMs with amplitudes greater than 10−13 must occur at a rate less than 1.5 yr−1. Comments: 12 pages, 7 figures. We will submit this article to ApJ two weeks from now. arXiv:1502.05735 [gr-qc] Fisher vs. Bayes : A comparison of parameter estimation techniques for massive black hole binaries to high redshifts with eLISA Edward K. Porter, Neil J. Cornish (Submitted on 19 Feb 2015) Massive black hole binaries are the primary source of gravitational waves (GW) for the future eLISA observatory. The detection and parameter estimation of these sources to high redshift would provide invaluable information on the formation mechanisms of seed black holes, and on the evolution of massive black holes and their host galaxies through cosmic time. The Fisher information matrix has been the standard tool for GW parameter estimation in the last two decades. However, recent studies have questioned the validity of using the Fisher matrix approach. For example, the Fisher matrix approach sometimes predicts errors of ≥100% in the estimation of parameters such as the luminosity distance and sky position. With advances in computing power, Bayesian inference is beginning to replace the Fisher matrix approximation in parameter estimation studies. In this work, we conduct a Bayesian inference analysis for 120 sources situated at redshifts of between 0.1≤z≤13.2, and compare the results with those from a Fisher matrix analysis. The Fisher matrix results suggest that for this particular selection of sources, eLISA would be unable to localize sources at redshifts of z≲6. In contrast, Bayesian inference provides finite error estimations for all sources in the study, and shows that we can establish minimum closest distances for all sources. The study further predicts that we should be capable with eLISA, out to a redshift of at least z≤13, of predicting a maximum error in the chirp mass of ≲1%, the reduced mass of ≲20%, the time to coalescence of 2 hours, and to a redshift of z∼5, the inclination of the source with a maximum error of ∼60 degrees. arXiv:1503.02662 [gr-qc] Constraining the Solution to the Last Parsec Problem with Pulsar Timing Laura Sampson, Neil J. Cornish, Sean T. McWilliams (Submitted on 9 Mar 2015) The detection of a stochastic gravitational-wave signal from the superposition of many inspiraling supermassive black holes with pulsar timing arrays (PTAs) is likely to occur within the next decade. With this detection will come the opportunity to learn about the processes that drive black-hole-binary systems toward merger through their effects on the gravitational-wave spectrum. We use Bayesian methods to investigate the extent to which effects other than gravitational-wave emission can be distinguished using PTA observations. We show that, even in the absence of a detection, it is possible to place interesting constraints on these dynamical effects for conservative predictions of the population of tightly bound supermassive black-hole binaries. For instance, if we assume a relatively weak signal consistent with a low number of bound binaries and a low black-hole-mass to galaxy-mass correlation, we still find that a non-detection by a simulated array, with a sensitivity that should be reached in practice within a few years, disfavors gravitational-wave-dominated evolution with an odds ratio of ∼30:1. Such a finding would suggest either that all existing astrophysical models for the population of tightly bound binaries are overly optimistic, or else that some dynamical effect other than gravitational-wave emission is actually dominating binary evolution even at the relatively high frequencies/small orbital separations probed by PTAs. arXiv:1505.07179 [gr-qc] Phase-coherent mapping of gravitational-wave backgrounds using ground-based laser interferometers Joseph D. Romano, Stephen R. Taylor, Neil J. Cornish, Jonathan Gair, Chiara M. F. Mingarelli, Rutger van Haasteren (Submitted on 27 May 2015) We extend the formalisms developed in Gair et al. and Cornish and van Haasteren to create maps of gravitational-wave backgrounds using a network of ground-based laser interferometers. We show that in contrast to pulsar timing arrays, which are insensitive to half of the gravitational-wave sky (the curl modes), a network of ground-based interferometers is sensitive to both the gradient and curl components of the background. The spatial separation of a network of interferometers, or of a single interferometer at different times during its rotational and orbital motion around the Sun, allows for recovery of both components. We derive expressions for the response functions of a laser interferometer in the small-antenna limit, and use these expressions to calculate the overlap reduction function for a pair of interferometers. We also construct maximum-likelihood estimates of the + and x-polarization modes of the gravitational-wave sky in terms of the response matrix for a network of ground-based interferometers, evaluated at discrete times during Earth's rotational and orbital motion around the Sun. We demonstrate the feasibility of this approach for some simple simulated backgrounds (a single point source and spatially-extended distributions having only grad or curl components), calculating maximum-likelihood sky maps and uncertainty maps based on the (pseudo)inverse of the response matrix. The distinction between this approach and standard methods for mapping gravitational-wave power is also discussed. arXiv:1505.08084 [gr-qc] When is a gravitational-wave signal stochastic? Neil J. Cornish, Joseph D. Romano (Submitted on 29 May 2015) We discuss the detection of gravitational-wave backgrounds in the context of Bayesian inference and suggest a practical definition of what it means for a signal to be considered stochastic---namely, that the Bayesian evidence favors a stochastic signal model over a deterministic signal model. A signal can further be classified as Gaussian-stochastic if a Gaussian signal model is favored. In our analysis we use Bayesian model selection to choose between several signal and noise models for simulated data consisting of uncorrelated Gaussian detector noise plus a superposition of sinusoidal signals from an astrophysical population of gravitational-wave sources. For simplicity, we consider co-located and co-aligned detectors with white detector noise, but the method can be extended to more realistic detector configurations and power spectra. The general trend we observe is that a deterministic model is favored for small source numbers, a non-Gaussian stochastic model is preferred for intermediate source numbers, and a Gaussian stochastic model is preferred for large source numbers. However, there is very large variation between individual signal realizations, leading to fuzzy boundaries between the three regimes. We find that a hybrid, trans-dimensional model comprised of a deterministic signal model for individual bright sources and a Gaussian-stochastic signal model for the remaining confusion background outperforms all other models in most instances. arXiv:1504.03682 [astro-ph.CO] Signatures of the Very Early Universe: Inflation, Spatial Curvature and Large Scale Anomalies Grigor Aslanyan, Richard Easther (Submitted on 14 Apr 2015 (v1), last revised 24 Jun 2015 (this version, v2)) A short inflationary phase may not erase all traces of the primordial universe. Associated observables include both spatial curvature and "anomalies" in the microwave background or large scale structure. The present curvature ΩK,0 reflects the initial curvature, ΩK,start, and the angular size of anomalies depends on kstart, the comoving horizon size at the onset of inflation. We estimate posteriors for ΩK,start and kstart using current data and simulations, and show that if either quantity is measured to have a non-zero value, both are likely to be observable. Mappings from ΩK,start and kstart to present-day observables depend strongly on the primordial equation of state; ΩK,0 spans ten orders of magnitude for a given ΩK,start while a simple and general relationship connects Ω K,0 and kstart. We show that current bounds on ΩK,0 imply that if kstart is measurable, the curvature was already small when inflation began. Finally, since the energy density changes slowly during inflation, primordial gravitational wave constraints require that a short inflationary phase is preceded by a nontrivial pre-inflationary phase with critical implications for the expected value of Ω Journal reference: Phys. Rev. D 91, 123523 (2015) arXiv:1502.02648 [gr-qc] Phase plane analysis and statefinder diagnostic of agegraphic dark energy in 5D Brans-Dicke cosmology Amin Salehi, Hossein Farajollahi, Jafar Sadeghi, M.Pourali (Submitted on 9 Feb 2015) We present an autonomous phase-plane describing the evolution of field equations containing an agegraphic dark energy in 5D Brans- Dicke cosmology. To observationally verify the numerical results, we simultaneously solve the equations by constraining the model parameters with SNe Ia data. We find conditions for the existence and stability of the critical points (states of the universe) and numerically examine the cosmological parameters. We also investigate the model by means of statefinder diagnostic. arXiv:1503.06335 [gr-qc] Tachyonic Intermediate Inflation in DGP Cosmology; consistency with new observations Arvin Ravanpak, Hossein Farajollahi, Golnaz Farpoor Fadakar (Submitted on 21 Mar 2015) In this article we study an intermediate inflationary model in the context of Dvali-Gabadadze-Porrati (DGP) cosmology caused by a tachyon scalar field. Considering slow-roll inflation we discuss the dynamics of the Universe. Using perturbation theory, we estimate some of the model parameters numerically and compare them with observations, particularly with Planck Temperature data released in 2013 (PT13), nine-years data set of Wilkinson Microwave Anisotropy Probe (WMAP9) and data from second Background Imaging of Cosmic Extragalactic Polarization instrument (BICEP2). arXiv:1403.1338 [gr-qc] Numerical evolution of plane gravitational waves in the Friedrich-Nagy gauge Jörg Frauendiener, Chris Stevens (Submitted on 6 Mar 2014 (v1), last revised 25 Jul 2014 (this version, v3)) The first proof of well-posedness of an initial boundary value problem for the Einstein equations was given in 1999 by Friedrich and Nagy. They used a frame formalism with a particular gauge for formulating the equations. This `Friedrich-Nagy' (FN) gauge has never been implemented for use in numerical simulations before because it was deemed too complicated. In this paper we present an implementation of the FN gauge for systems with two commuting space-like Killing vectors. We investigate the numerical performance of this formulation for plane wave space-times, reproducing the well-known Khan-Penrose solution for colliding impulsive plane waves and exhibiting a gravitational wave `ping-pong'. Journal reference: Phys. Rev. D 89, 104026 (2014) arXiv:1502.07427 [gr-qc] A spectral method for half-integer spin fields based on spin-weighted spherical harmonics Florian Beyer, Boris Daszuta, Joerg Frauendiener (Submitted on 26 Feb 2015) We present a new spectral scheme for analysing functions of half-integer spin-weight on the 2-sphere and demonstrate the stability and convergence properties of our implementation. The dynamical evolution of the Dirac equation on a manifold with spatial topology of S2 via pseudo-spectral method is also demonstrated. arXiv:1505.05920 [gr-qc] Numerical solutions of Einstein's equations for cosmological spacetimes with spatial topology S3 and symmetry group U(1) Florian Beyer, Leon Escobar, Jörg Frauendiener (Submitted on 21 May 2015) We apply a single patch pseudo-spectral scheme based on integer spin-weighted spherical harmonics presented in [1, 2] to Einstein's equations. The particular hyperbolic reduction of Einstein's equations which we use is obtained by a covariant version of the generalized harmonic formalism and Geroch's symmetry reduction. In this paper we focus on spacetimes with a spatial S3-topology and symmetry group U(1). We discuss analytical and numerical issues related to our implementation. As a test, we reproduce numerically exact inhomogeneous cosmological solutions of the vacuum Einstein field equations obtained in [3]. arXiv:1410.5238 [physics.ins-det] Extending the frequency range of digital noise measurements to the microwave domain Stephen R. Parker, Eugene N. Ivanov, John G. Hartnett (Submitted on 20 Oct 2014) We describe the use of digital phase noise test sets at frequencies well beyond the sampling rate of their analog-to-digital converters. The technique proposed involves the transfer of phase fluctuations from an arbitrary high carrier frequency to within the operating frequency range of the digital instrument. The validity of the proposed technique has been proven via comparison with conventional methods. Digital noise measurements eliminate the need for calibration and improve consistency of experimental results. Mechanisms limiting the resolution of spectral measurements are also discussed. Journal reference: Parker, S.R.; Ivanov, E.N.; Hartnett, J.G., "Extending the Frequency Range of Digital Noise Measurements to the Microwave Domain," Microwave Theory and Techniques, IEEE Transactions on , vol.62, no.2, pp.368,372, Feb. 2014 arXiv:1410.5244 [hep-ex] Cryogenic resonant microwave cavity searches for hidden sector photons Stephen R. Parker, John G. Hartnett, Rhys G. Povey, Michael E. Tobar (Submitted on 20 Oct 2014) The hidden sector photon is a weakly interacting hypothetical particle with sub-eV mass that kinetically mixes with the photon. We describe a microwave frequency light shining through a wall experiment where a cryogenic resonant microwave cavity is used to try and detect photons that have passed through an impenetrable barrier, a process only possible via mixing with hidden sector photons. For a hidden sector photon mass of 53 μeV we limit the hidden photon kinetic mixing parameter χ<1.7×10−7, which is an order of magnitude lower than previous bounds derived from cavity experiments in the same mass range. In addition, we use the cryogenic detector cavity to place new limits on the kinetic mixing parameter for hidden sector photons as a form of cold dark matter. Journal reference: S.R. Parker, J.G. Hartnett, R.G. Povey, and M.E. Tobar, "Cryogenic resonant microwave cavity searches for hidden sector photons," Phys. Rev. D 88, 112004, 2013 arXiv:1412.6954 [hep-ph] Direct Terrestrial Measurement of the Spatial Isotropy of the Speed of Light to 10−18 M. Nagel, S. R. Parker, E. V. Kovalchuk, P. L. Stanwix, J. G. Hartnett, E. N. Ivanov, A. Peters, M. E. Tobar (Submitted on 22 Dec 2014) Lorentz symmetry is a foundational property of modern physics, underlying both the standard model of particles and general relativity. It is anticipated that these two theories are merely low energy approximations of a single theory of the four fundamental forces that is unified and consistent at the Planck scale. Many unifying proposals allow for Lorentz symmetry to be broken, with observable effects appearing at Planck-suppressed levels. Thus precision tests of Lorentz Invariance Violation (LIV) are needed to assess and guide theoretical efforts. The most significant consequence of Lorentz symmetry is the isotropic nature of the speed of light, which remains invariant under rotation and boost transformations. In this work we use two ultra-stable oscillator frequency sources to perform a modern Michelson-Morley experiment and make the most precise measurement to date of the spatial isotropy of the speed of light, constraining Δc/c to 9.2±10.7×10−19 (95C.I.). This order of magnitude improvement over the current state-of-the-art allows us to undertake the first terrestrial test of LIV in electrodynamics at the Planck-suppressed electroweak unification scale, finding no significant violation of Lorentz symmetry. arXiv:1501.04857 [physics.gen-ph] Deriving time from the geometry of space James M. Chappell, John G. Hartnett, Nicolangelo Iannella, Derek Abbott (Submitted on 15 Jan 2015 (v1), last revised 8 Apr 2015 (this version, v2)) The Minkowski formulation of special relativity reveals the essential four-dimensional nature of spacetime, consisting of three space and one time dimension. Recognizing its fundamental importance, a variety of arguments have been proposed over the years attempting to derive the Minkowski spacetime structure from fundamental physical principles. In this paper we illustrate how Minkowski spacetime follows naturally from the geometric properties of three dimensional Clifford space modeled with multivectors. This approach also generalizes spacetime to an eight dimensional space as well as doubling the size of the Lorentz group. This description of spacetime also provides a new geometrical interpretation of the nature of time. arXiv:1407.8254 [astro-ph.SR] Gravitational waves from rapidly rotating neutron stars Brynmor Haskell, Nils Andersson, Caroline D`Angelo, Nathalie Degenaar, Kostas Glampedakis, Wynn C.G. Ho, Paul D. Lasky, Andrew Melatos, Manuel Oppenoorth, Alessandro Patruno, Maxim Priymak (Submitted on 31 Jul 2014) Rapidly rotating neutron stars in Low Mass X-ray Binaries have been proposed as an interesting source of gravitational waves. In this chapter we present estimates of the gravitational wave emission for various scenarios, given the (electromagnetically) observed characteristics of these systems. First of all we focus on the r-mode instability and show that a 'minimal' neutron star model (which does not incorporate exotica in the core, dynamically important magnetic fields or superfluid degrees of freedom), is not consistent with observations. We then present estimates of both thermally induced and magnetically sustained mountains in the crust. In general magnetic mountains are likely to be detectable only if the buried magnetic field of the star is of the order of B≈1012 G. In the thermal mountain case we find that gravitational wave emission from persistent systems may be detected by ground based interferometers. Finally we re-asses the idea that gravitational wave emission may be balancing the accretion torque in these systems, and show that in most cases the disc/magnetosphere interaction can account for the observed spin periods. Comments: To appear in 'Gravitational Waves Astrophysics: 3rd Session of the Sant Cugat Forum on Astrophysics, 2014', Editor: Carlos F. Sopuerta arXiv:1501.06039 [astro-ph.SR] Detecting gravitational waves from mountains on neutron stars in the Advanced Detector Era Brynmor Haskell, Maxim Priymak, Alessandro Patruno, Manuel Oppenoorth, Andrew Melatos, Paul Lasky (Submitted on 24 Jan 2015) Rapidly rotating Neutron Stars (NSs) in Low Mass X-ray Binaries (LMXBs) are thought to be interesting sources of Gravitational Waves (GWs) for current and next generation ground based detectors, such as Advanced LIGO and the Einstein Telescope. The main reason is that many of the NS in these systems appear to be spinning well below their Keplerian breakup frequency, and it has been suggested that torques associated with GW emission may be setting the observed spin period. This assumption has been used extensively in the literature to assess the strength of the likely gravitational wave signal. There is now, however, a significant amount of theoretical and observation work that suggests that this may not be the case, and that GW emission is unlikely to be setting the spin equilibrium period in many systems. In this paper we take a different starting point and predict the GW signal strength for two physical mechanisms that are likely to be at work in LMXBs: crustal mountains due to thermal asymmetries and magnetically confined mountains. We find that thermal crustal mountains in transient LMXBs are unlikely to lead to detectable GW emission, while persistent systems are good candidates for detection by Advanced LIGO and by the Einstein Telescope. Detection prospects are pessimistic for the magnetic mountain case, unless the NS has a buried magnetic field of B≈1012 G, well above the typically inferred exterior dipole fields of these objects. Nevertheless, if a system were to be detected by a GW observatory, cyclotron resonant scattering features in the X-ray emission could be used to distinguish between the two different scenarios. Comments: submitted to MNRAS arXiv:1502.07062 [astro-ph.SR] Models of Pulsar Glitches Brynmor Haskell, Andrew Melatos (Submitted on 25 Feb 2015) Radio pulsars provide us with some of the most stable clocks in the universe. Nevertheless several pulsars exhibit sudden spin-up events, known as glitches. More than forty years after their first discovery, the exact origin of these phenomena is still open to debate. It is generally thought that they an observational manifestation of a superfluid component in the stellar interior and provide an insight into the dynamics of matter at extreme densities. In recent years there have been several advances on both the theoretical and observational side, that have provided significant steps forward in our understanding of neutron star interior dynamics and possible glitch mechanisms. In this article we review the main glitch models that have been proposed and discuss our understanding, in the light of current observations. Comments: Preprint of an article published in IJMPD 24, issue 3, 2015, 530008. arXiv:1408.4601 [gr-qc] The Kerr-Schild ansatz revised Donato Bini, Andrea Geralico, Roy P. Kerr (Submitted on 20 Aug 2014) Kerr-Schild metrics have been introduced as a linear superposition of the flat spacetime metric and a squared null vector field, say k, multiplied by some scalar function, say H. The basic assumption which led to Kerr solution was that k be both geodesic and shearfree. This condition is relaxed here and Kerr-Schild ansatz is revised by treating Kerr-Schild metrics as {\it exact linear perturbations} of Minkowski spacetime. The scalar function H is taken as the perturbing function, so that Einstein's field equations are solved order by order in powers of H. It turns out that the congruence must be geodesic and shearfree as a consequence of third and second order equations, leading to an alternative derivation of Kerr solution. Journal reference: IJGMMP 7, 693 (2010) arXiv:1503.03298 [astro-ph.HE] Pulsar timing noise and the minimum observation time to detect gravitational waves with pulsar timing arrays Paul D. Lasky, Andrew Melatos, Vikram Ravi, George Hobbs (Submitted on 11 Mar 2015) The sensitivity of pulsar timing arrays to gravitational waves is, at some level, limited by timing noise. Red timing noise - the stochastic wandering of pulse arrival times with a red spectrum - is prevalent in slow-spinning pulsars and has been identified in many millisecond pulsars. Phenomenological models of timing noise, such as from superfluid turbulence, suggest that the timing noise spectrum plateaus below some critical frequency, fc, potentially aiding the hunt for gravitational waves. We examine this effect for individual pulsars by calculating minimum observation times, Tmin (fc), over which the gravitational wave signal becomes larger than the timing noise plateau. We do this in two ways: 1) in a model-independent manner, and 2) by using the superfluid turbulence model for timing noise as an example to illustrate how neutron star parameters can be constrained. We show that the superfluid turbulence model can reproduce the data qualitatively from a number of pulsars observed as part of the Parkes Pulsar Timing Array. We further show how a value of fc, derived either through observations or theory, can be related to Tmin. This provides a diagnostic whereby the usefulness of timing array pulsars for gravitational-wave detection can be quantified. Comments: Accepted for publication in MNRAS arXiv:1504.05889 [gr-qc] Gravitational waves from Sco X-1: A comparison of search methods and prospects for detection with advanced detectors C. Messenger, H. J. Bulten, S. G. Crowder, V. Dergachev, D. K. Galloway, E. Goetz, R. J. G. Jonker, P. D. Lasky, G. D. Meadors, A. Melatos, S. Premachandra, K. Riles, L. Sammut, E. H. Thrane, J. T. Whelan, Y. Zhang (Submitted on 22 Apr 2015) The low-mass X-ray binary Scorpius X-1 (Sco X-1) is potentially the most luminous source of continuous gravitational-wave radiation for interferometers such as LIGO and Virgo. For low-mass X-ray binaries this radiation would be sustained by active accretion of matter from its binary companion. With the Advanced Detector Era fast approaching, work is underway to develop an array of robust tools for maximizing the science and detection potential of Sco X-1. We describe the plans and progress of a project designed to compare the numerous independent search algorithms currently available. We employ a mock-data challenge in which the search pipelines are tested for their relative proficiencies in parameter estimation, computational efficiency, robust- ness, and most importantly, search sensitivity. The mock-data challenge data contains an ensemble of 50 Scorpius X-1 (Sco X-1) type signals, simulated within a frequency band of 50-1500 Hz. Simulated detector noise was generated assuming the expected best strain sensitivity of Advanced LIGO and Advanced VIRGO (4×10−24 Hz−1/2). A distribution of signal amplitudes was then chosen so as to allow a useful comparison of search methodologies. A factor of 2 in strain separates the quietest detected signal, at 6.8×10−26 strain, from the torque-balance limit at a spin frequency of 300 Hz, although this limit could range from 1.2×10−25 (25 Hz) to 2.2×10−26 (750 Hz) depending on the unknown frequency of Sco X-1. With future improvements to the search algorithms and using advanced detector data, our expectations for probing below the theoretical torque-balance strain limit are optimistic. Report number: LIGO-P1400217-v3 arXiv:1405.5252 [gr-qc] Demonstration of the spike phenomenon using the LTB models Alan Coley, Woei Chet Lim (Submitted on 20 May 2014) We demonstrate the occurrence of permanent spikes using the Lemaitre-Tolman-Bondi models, chosen because the solutions are exact and can be analyzed by qualitative dynamical systems methods. Three examples are given and illustrated numerically. The third example demonstrates that spikes can form directly in the matter density, as opposed to indirectly in previous studies of spikes in the Kasner regime. Spikes provide an alternative general relativistic mechanism for generating exceptionally large structures observed in the Universe. Journal reference: Class. Quantum Grav. 31 (2014) 115012 arXiv:1401.3044 [gr-qc] arGordon and Kerr-Schild anstaze in massive and bimetric gravity Arthur Suvorov, Anthony W.C. Lun (Submitted on 14 Jan 2014) We present some results regarding metric perturbations in general relativity and other metric theories of gravity. In particular, using the Newman Penrose variables, we write down and discuss the equations which govern tensor field perturbations of ranks 0,±1 and ±2 (scalar,vector,tensor bosonic perturbations) over certain space-times that admit specific background metrics expressible in isotropic coordinates. Armed with these equations, we are able to develop the Hadamard series which can be associated with the fundamental solution of the equations, wherein we introduce an inhomogeneous singularity at the physical space-time point of the perturbing particle. The Hadamard series is then used to generate closed form solutions by making choices for an appropriate ansatz solution. In particular, we solve for the spin-weighted electrostatic potential for the Reissner-Nordstrom black hole and for the fully dynamical potential for the Friedmann-Robertson-Walker cosmological solution. arXiv:1407.2446 [gr-qc] The little sibling of the big rip singularity Mariam Bouhmadi-Lopez, Ahmed Errahmani, Prado Martin-Moruno, Taoufik Ouali, Yaser Tavakoli (Submitted on 9 Jul 2014 (v1), last revised 24 Jun 2015 (this version, v2)) We present a new cosmological event, which we named the little sibling of the big rip. This event is much smoother than the big rip singularity. When the little sibling of the big rip is reached, the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not. This abrupt event takes place at an infinite cosmic time where the scalar curvature explodes. We show that a doomsday \'a la little sibling of the big rip is compatible with an accelerating universe, indeed at present it would mimic perfectly a LCDM scenario. It turns out that eventhough the event seems to be harmless as it takes place in the infinite future, the bound structures in the universe would be unavoidably destroyed on a finite cosmic time from now. The model can be motivated by considering that the weak energy condition should not be abusibely violated in our Universe, and it could give us some hints about the status of recently formulated nonlinear energy conditions. Journal reference: Int.J.Mod.Phys.D Vol. 24, No. 10 (2015) 1550078 (20 pages) arXiv:1407.7758 [gr-qc] Wormholes minimally violating the null energy condition Mariam Bouhmadi-Lopez, Francisco S. N. Lobo, Prado Martin-Moruno (Submitted on 29 Jul 2014 (v1), last revised 22 Oct 2014 (this version, v2)) We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted "the little sibling of the big rip", where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyze the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalisation of the equation of state considered above and analyse the respective stability regions. In particular, we obtain a specific wormhole solution with an asymptotic behaviour corresponding to a global monopole. Journal reference: JCAP 1411 (2014) 007 arXiv:1411.4401 [gr-qc] Semiclassical energy conditions and wormholes Prado Martin-Moruno (Submitted on 17 Nov 2014) We consider the nonlinear energy conditions and their quantum extensions. These new energy conditions behave much better than the usual pointwise energy conditions in the presence of semiclassical quantum effects. Analogous quantum extensions for the linear energy conditions are not always satisfied as one enters the quantum realm, but they can be used to constrain the violation of the classical conditions. Thus, the existence of wormholes supported by a fluid which violates the null energy condition in a controlled way is of particular interest. Comments: 6 pages; contribution to the proceedings of the Spanish Relativity Meeting 2014 (ERE 2014), Valencia, 1-5 September 2014 arXiv:1502.03236 [gr-qc] Horndeski theories self-tuning to a de Sitter vacuum Prado Martin-Moruno, Nelson J. Nunes, Francisco S. N. Lobo (Submitted on 11 Feb 2015 (v1), last revised 27 Mar 2015 (this version, v2)) We consider Horndeski cosmological models able to screen the vacuum energy coming from any field theory assuming that after this screening the space should be in a de Sitter vacuum with a particular value of the cosmological constant specified by the theory of gravity itself. The most general scalar-tensor cosmological models without higher than second order derivatives in the field equations that have a spatially flat de Sitter critical point for any kind of material content or vacuum energy are, therefore, presented. These models could allow us to understand the current accelerated expansion of the universe as the result of a dynamical evolution towards a de Sitter attractor. Journal reference: Phys.Rev.D91:084029,2015 arXiv:1502.05878 [gr-qc] Attracted to de Sitter: cosmology of the linear Horndeski models Prado Martin-Moruno, Nelson J. Nunes, Francisco S. N. Lobo (Submitted on 19 Feb 2015 (v1), last revised 6 May 2015 (this version, v2)) We consider Horndeski cosmological models, with a minisuperspace Lagrangian linear in the field derivative, that are able to screen any vacuum energy and material content leading to a spatially flat de Sitter vacuum fixed by the theory itself. Furthermore, we investigate particular models with a cosmic evolution independent of the material content and use them to understand the general characteristics of this framework. We also consider more realistic models, which we denote the "term-by-term" and "tripod" models, focusing attention on cases in which the critical point is indeed an attractor solution and the cosmological history is of particular interest. Comments: V1: 25 pages, 14 figures. V2: 5 references added, minor clarifications added. This version accepted for publication in JCAP arXiv:1502.05878 [gr-qc] Attracted to de Sitter: cosmology of the linear Horndeski models Prado Martin-Moruno, Nelson J. Nunes, Francisco S. N. Lobo (Submitted on 19 Feb 2015 (v1), last revised 6 May 2015 (this version, v2)) We consider Horndeski cosmological models, with a minisuperspace Lagrangian linear in the field derivative, that are able to screen any vacuum energy and material content leading to a spatially flat de Sitter vacuum fixed by the theory itself. Furthermore, we investigate particular models with a cosmic evolution independent of the material content and use them to understand the general characteristics of this framework. We also consider more realistic models, which we denote the "term-by-term" and "tripod" models, focusing attention on cases in which the critical point is indeed an attractor solution and the cosmological history is of particular interest. Journal reference: JCAP 1505 (2015) 033 arXiv:1505.06585 [gr-qc] Accelerating universe as a result of an adjustment mechanism Prado Martin-Moruno, Nelson J. Nunes (Submitted on 25 May 2015) In this essay we propose that the theory of gravity's vacuum is described by a de Sitter geometry. Under this assumption we consider an adjustment mechanism able to screen any value of the vacuum energy of the matter fields. We discuss the most general scalar-tensor cosmological models with second order equations of motion that have a fixed de Sitter critical point for any kind of material content. These models give rise to interesting cosmological evolutions that we shall discuss. Comments: 7 pages, 2 figures. Essay awarded honorable mention in the Gravity Research Foundation essay competition 2015. arXiv:1506.02497 [gr-qc] Attracted to de Sitter II: cosmology of the shift-symmetric Horndeski models Prado Martin-Moruno, Nelson J. Nunes (Submitted on 8 Jun 2015 (v1), last revised 9 Jun 2015 (this version, v2)) Horndeski models with a de Sitter critical point for any kind of material content can provide a mechanism to alleviate the cosmological constant problem. They allow us to understand the current accelerated expansion of the universe as the result of the dynamical approach to the critical point when it is an attractor. We show that this critical point is indeed an attractor for the shift-symmetric subfamily of models with these characteristics. We study the resulting cosmological scenario and conclude that their background dynamics is compatible with the latest observational arXiv:1502.05768 [physics.ins-det] Thermal noise of gram-scale cantilever flexures Thanh T-H. Nguyen, Bram J.J. Slagmolen, Conor M. Mow-Lowry, John Miller, Adam Mullavey, Stefan Goßler, Paul A. Altin, Daniel A. Shaddock, David E. McClelland (Submitted on 20 Feb 2015 (v1), last revised 3 Jun 2015 (this version, v3)) We present measurements of thermal noise in niobium and aluminium flexures. Our measurements cover the audio frequency band from 10Hz to 10kHz, which is of particular relevance to ground-based interferometric gravitational wave detectors, and span up to an order of magnitude above and below the fundamental flexure resonances at 50Hz - 300Hz. Our results are well-explained by a simple model in which both structural and thermoelastic loss play a role. The ability of such a model to explain this interplay is important for investigations of quantum-radiation-pressure noise and the standard quantum limit. arXiv:1410.3470 [astro-ph.CO] Cosmic microwave background anisotropies in the timescape cosmology M. Ahsan Nazer, David L. Wiltshire (Submitted on 13 Oct 2014 (v1), last revised 13 Mar 2015 (this version, v3)) We analyze the spectrum of cosmic microwave background (CMB) anisotropies in the timescape cosmology: a potentially viable alternative to homogeneous isotropic cosmologies without dark energy. We exploit the fact that the timescape cosmology is extremely close to the standard cosmology at early epochs to adapt existing numerical codes to produce CMB anisotropy spectra, and to match these as closely as possible to the timescape expansion history. A variety of matching methods are studied and compared. We perform Markov Chain Monte Carlo analyses on the parameter space, and fit CMB multipoles 50≤ℓ≤2500 to the Planck satellite data. Parameter fits include a dressed Hubble constant, H0=61.0 km/s/Mpc (± 1.3% stat) (± 8% sys), and a present void volume fraction fv0=0.627 (± 2.3% stat) (± 13% sys). We find best fit likelihoods which are comparable to that of the best fit Lambda CDM cosmology in the same multipole range. In contrast to earlier results, the parameter constraints afforded by this analysis no longer admit the possibility of a solution to the primordial lithium abundance anomaly. This issue is related to a strong constraint between the ratio of baryonic to nonbaryonic dark matter and the ratio of heights of the second and third acoustic peaks, which cannot be changed as long as the standard cosmology is assumed up to the surface of last scattering. These conclusions may change if backreaction terms are also included in the radiation-dominated primordial plasma. Journal reference: Phys.Rev.D91:063519,2015 arXiv:1408.6613 [hep-th] Gauss-Bonnet assisted braneworld inflation in light of BICEP2 and Planck data Ishwaree P. Neupane (CERN) (Submitted on 28 Aug 2014 (v1), last revised 10 Dec 2014 (this version, v2)) Motivated by the idea that quantum gravity corrections usually suppress the power of the scalar primordial spectrum (E-mode) more than the power of the tensor primordial spectrum (B-mode), in this paper we construct a concrete gravitational theory in five-dimensions for which V(ϕ)∝ϕn-type inflation (n≥1) generates an appropriate tensor-to-scalar ratio that may be compatible with the BICEP2 and Planck data together. The true nature of gravity is five-dimensional and described by the action S=∫d5x|g|−−√M3(−6λM2+R+αM−2R2) where M is the five-dimensional Planck mass and R2=R2−4RabRab+RabcdR abcd is the Gauss-Bonnet (GB) term. The five-dimensional "bulk" spacetime is anti-de Sitter (λ<0) for which inflation ends naturally. The effects of R2 term on the magnitudes of scalar and tensor fluctuations and spectral indices are shown to be important at the energy scale of inflation. For GB-assisted m2ϕ2-inflation, inflationary constraints from BICEP2 and Planck, such as, ns≃0.9603 (±0.0073), r=0.16(+0.06−0.05) and V1/4∗∼1.5×1016GeV are all satisfied for (−λα)≃(3−300)×10−5. Journal reference: Phys.Rev. D90 (2014) 123534 arXiv:1409.8647 [astro-ph.CO] Natural Braneworld Inflation in Light of Recent Results from Planck and BICEP2 Ishwaree P. Neupane (Submitted on 29 Sep 2014 (v1), last revised 14 Nov 2014 (this version, v2)) In this paper we report on a major theoretical observation in cosmology. We present a concrete cosmological model for which inflation has natural beginning and natural ending. Inflation is driven by a cosine-form potential, V(ϕ)=Λ4(1−cos(ϕ/f)), which begins at ϕ≲πf and ends at ϕ=ϕend≲5f/3. The distance traversed by the inflaton field ϕ is sub-Planckian. The Gauss-Bonnet term R2 arising as leading curvature corrections in the action S=∫d5x√−g5M3(−6λM2+R+αM−2R2)+∫d4x√−g4 (ϕ˙2/2−V(ϕ)−σ+Lmatter) (where α and λ are constants and M is the five-dimensional Planck mass) plays a key role to terminate inflation. The model generates appropriate tensor-to-scalar ratio r and inflationary perturbations that are consistent with results from Planck and BICEP2. For example, for N∗=50−60 and ns∼ 0.960±0.005, the model predicts that M∼5.64×1016GeV and r∼(0.14−0.21) [N∗ is the number of {\it e}--folds of inflation and ns (nt) is the scalar (tensor) spectrum spectral index]. The ratio −nt/r is (13% -- 24%) less than its value in 4D Einstein gravity, −nt/r=1/8. The upper bound on the energy scale of inflation V1/4=2.37×1016GeV (r<0.27) implies that (−λα)≳75×10−5 and Λ<2.17×1016GeV, which thereby rule out the case α=0 (Randall-Sundrum model). The true nature of gravity is holographic as implied by braneworld realization of string and M theory. The model correctly predicts a late epoch cosmic acceleration with the dark energy equation of state wDE≈−1. Journal reference: Phys. Rev. D 90, 123502 (2014) arXiv:1401.0277 [math.AP] A transmission problem for quasi-linear wave equations Lars Andersson, Todd A. Oliynyk (Submitted on 1 Jan 2014) We prove the local existence and uniqueness of solutions to a system of quasi-linear wave equations involving a jump discontinuity in the lower order terms. A continuation principle is also Journal reference: J. Differential Equations 256 (2014), 2023-2078 arXiv:1406.6104 [gr-qc] The Newtonian limit on cosmological scales Todd A. Oliynyk (Submitted on 23 Jun 2014 (v1), last revised 10 Feb 2015 (this version, v2)) We establish the existence of a wide class of inhomogeneous relativistic solutions to the Einstein-Euler equations that are well approximated on cosmological scales by solutions of Newtonian gravity. Error estimates measuring the difference between the Newtonian and relativistic solutions are provided. arXiv:1410.4894 [gr-qc] Dynamical compact elastic bodies in general relativity Lars Andersson, Todd A. Oliynyk, Bernd G. Schmidt (Submitted on 18 Oct 2014) We prove the local existence of solutions to the Einstein-Elastic equations that represent self-gravitating, relativistic elastic bodies with compact support. arXiv:1501.00045 [math.AP] A priori estimates for relativistic liquid bodies Todd A. Oliynyk (Submitted on 30 Dec 2014) We demonstrate that a sufficiently smooth solution of the relativistic Euler equations that represents a dynamical compact liquid body, when expressed in Lagrangian coordinates, determines a solution to a system of non-linear wave equations with acoustic boundary conditions. Using this wave formulation, we prove that these solutions satisfy energy estimates without loss of derivatives. Importantly, our wave formulation does not require the liquid to be irrotational, and the energy estimates do not rely on divergence and curl type estimates employed in previous works. arXiv:1505.00857 [gr-qc] Future stability of the FLRW fluid solutions in the presence of a positive cosmological constant Todd A. Oliynyk (Submitted on 5 May 2015) We introduce a new method for establishing the future non-linear stability of perturbations of FLRW solutions to the Einstein-Euler equations with a positive cosmological constant and a linear equation of state of the form ρ=Kp. The method is based on a conformal transformation of the Einstein-Euler equations that compactifies the time domain and can handle the equation of state parameter values 0<K≤1/3 in a uniform manner. It also determines the asymptotic behavior of the perturbed solutions in the far future. arXiv:1410.3881 [gr-qc] Universe in a black hole with spin and torsion Nikodem J. Poplawski (Submitted on 14 Oct 2014) The conservation law for the angular momentum in curved spacetime requires that the antisymmetric part of the affine connection (the torsion tensor) is a variable in the principle of least action. The coupling between spin and torsion generates gravitational repulsion in fermionic matter at extremely high densities and avoids the formation of singularities in black holes. We show that every black hole in the presence of torsion forms a nonsingular, closed, nearly flat, homogeneous, and isotropic universe on the other side of its event horizon. Quantum particle production in such a universe can generate a period of exponential expansion which creates an enormous amount of matter in that universe. Accordingly, our Universe may have originated from the interior of a black hole existing in another universe. arXiv:1501.01355 [gr-qc] A new form of the C-metric with cosmological constant Yu Chen, Yen-Kheng Lim, Edward Teo (Submitted on 7 Jan 2015 (v1), last revised 17 Feb 2015 (this version, v2)) The new form of the C-metric proposed by Hong and Teo, in which the two structure functions are factorised, has proved useful in its analysis. In this paper, we extend this form to the case when a cosmological constant is present. The new form of this solution has two structure functions which are partially factorised; moreover, the roots of the structure functions are now regarded as fundamental parameters. This leads to a natural representation of the solution in terms of its so-called domain structure, in which the allowed coordinate range can be visualised as a "box" in a two-dimensional plot. The solution is then completely parameterised by the locations of the edges of this box, at least in the uncharged case. We also briefly analyse other possible domain structures---in the shape of a triangle and trapezoid---that might describe physically interesting space-times within the AdS C-metric. arXiv:1504.01235 [gr-qc] A five-parameter class of solutions to the vacuum Einstein equations Yu Chen, Edward Teo (Submitted on 6 Apr 2015) We present a new five-parameter class of Ricci-flat solutions in four dimensions with Euclidean signature. The solution is asymptotically locally flat (ALF), and contains a finite asymptotic NUT charge. When this charge is sent to infinity, the solution becomes asymptotically locally Euclidean (ALE), and one in fact obtains the Ricci-flat Plebanski-Demianski solution. The solution we have found can thus be regarded as an ALF generalisation of the latter solution. We also show that it can be interpreted as a system consisting of two touching Kerr-NUTs: the south pole of one Kerr-NUT touches the north pole of the other. The total NUT charge of such a system is then identified with the asymptotic NUT charge. Setting the asymptotic NUT charge to zero gives a four-parameter asymptotically flat (AF) solution, and contained within this subclass is the completely regular two-parameter AF instanton previously discovered by the present authors. Various other limits are also discussed, including that of the triple-collinearly-centered Gibbons-Hawking solution, and an ALF generalisation of the C-metric. arXiv:1506.08507 [physics.optics] Shrinking cloaks in exanding spacetimes: the role of coordinates and the meaning of transformations in Transformation Optics Robert T. Thompson, Mohsen Fathi (Submitted on 29 Jun 2015) The fully covariant formulation of transformation optics is used to find the configuration of a cloaking device operating in an expanding universe modelled by a Friedmann-Lema\^itre-Robertson-Walker spacetime. This spacetime cloak is used as a platform for probing the covariant formulation of transformation optics, thereby rigorously enhancing the conceptual understanding of the theory. By studying the problem in both comoving and physical coordinates we explicitly demonstrate the preservation of general covariance of electrodynamics under the transformation optics procedure. This platform also enables a detailed study of the various transformations that arise in transformation optics. We define a corporeal transformation as the "transformation" of transformation optics, and distinguish it from coordinate and frame transformations. We find that corporeal transformations considered in the literature have generally been restricted to a subset of all possible corporeal transformations, providing a potential mechanism for increased functionality of transformation optics. arXiv:1410.2334 [gr-qc] Gravitational Wave Detection with High Frequency Phonon Trapping Acoustic Cavities Maxim Goryachev, Michael E. Tobar (Submitted on 9 Oct 2014 (v1), last revised 31 Oct 2014 (this version, v2)) There are a number of theoretical predictions for astrophysical and cosmological objects, which emit high frequency (106−109~Hz) Gravitation Waves (GW) or contribute somehow to the stochastic high frequency GW background. Here we propose a new sensitive detector in this frequency band, which is based on existing cryogenic ultra-high quality factor quartz Bulk Acoustic Wave cavity technology, coupled to near-quantum-limited SQUID amplifiers at 20~mK. We show that spectral strain sensitivities reaching 10−22 per √Hz per mode is possible, which in principle can cover the frequency range with multiple (>100) modes with quality factors varying between 106−1010 allowing wide bandwidth detection. Due to its compactness and well established manufacturing process, the system is easily scalable into arrays and distributed networks that can also impact the overall sensitivity and introduce coincidence analysis to ensure no false detections. Journal reference: Phys. Rev. D 90, 102005 (2014) arXiv:1412.2142 [gr-qc] Testing the isotropy of space using rotating quartz oscillators Anthony Lo, Philipp Haslinger, Eli Mizrachi, Loic Anderegg, Holger Müller, Michael Hohensee, Maxim Goryachev, Michael E Tobar (Submitted on 4 Dec 2014 (v1), last revised 28 Jan 2015 (this version, v2)) Violations of Lorentz invariance by matter and light can generate direction- and frame-dependent anisotropies in particles inertial masses and, hence, a measurable modulation of the oscillation frequency of rotating quartz crystal oscillators. This allows simple and low maintenance experiments that are ideally suited for long-term data taking. Using the Standard Model Extension (SME) as a parameterizing framework, we study the magnitude of this putative frequency modulation. A preliminary experiment with room-temperature SC-cut crystals yields a frequency resolution in the 10−15 range with ∼120 hours of data and a limit of c~Q=(−1.8±2.2)×10−14\,GeV on the most weakly constrained neutron-sector c−coefficient of the SME. Future experiments with cryogenic oscillators promise additional improvements in accuracy, opening up the potential for improved tests of Lorentz symmetry in the neutron, proton, electron and photon sector. Report number: LLNL-JRNL-664409-DRAFT arXiv:1504.03280 [gr-qc] Search for Lorentz invariance violation through tests of the gravitational inverse square law at short-ranges Cheng-Gang Shao, Yu-Jie Tan, Wen-Hai Tan, Shan-Qing Yang, Jun Luo, Michael Edmund Tobar (Submitted on 13 Apr 2015) A search for sidereal variations in the non-Newtonian force between two tungsten plates separated at millimeter ranges sets experimental limits on Lorentz invariance violation involving quadratic couplings of Riemann curvature. We show that the Lorentz invariance violation force between two finite flat plates is dominated by the edge effects, which includes a suppression effect leading to lower limits than previous rough estimates. From this search, we determine the current best constraints of the Lorentz invariance violating coefficients at a level of 10−8 m2. arXiv:1506.05801 [gr-qc] Quasilocal energy and thermodynamic equilibrium conditions Nezihe Uzun, David L. Wiltshire (Submitted on 18 Jun 2015) Equilibrium thermodynamic laws are typically applied to horizons in general relativity without stating the conditions that bring them into equilibrium. We fill this gap by applying a new thermodynamic interpretation to a generalized Raychaudhuri equation for a closed spacelike 2-surface, the "screen", which encompasses a system of arbitrary size in nonequilibrium with its surroundings in general. In the case of spherical symmetry this enables us to identify quasilocal thermodynamic potentials directly related to standard quasilocal energy definitions. Quasilocal thermodynamic equilibrium is defined by minimizing the mean extrinsic curvature of the screen. Moreover, without any direct reference to surface gravity, we find that the system comes into quasilocal thermodynamic equilibrium when the screen is located at a generalized apparent horizon. Examples of the Schwarzschild, Friedmann-Lemaitre and Lemaitre-Tolman geometries are investigated and compared. Conditions for the quasilocal thermodynamic and hydrodynamic equilibrium states to coincide are also discussed, and a quasilocal virial relation is suggested as a potential application of this Comments: 27 pages, no figures. Accepted by Classical and Quantum Gravity. arXiv:1504.03266 [physics.optics] Modelling Thermoelastic Distortion of Optics Using Elastodynamic Reciprocity Eleanor King, Yuri Levin, David Ottaway, Peter Veitch (Submitted on 13 Apr 2015 (v1), last revised 17 Apr 2015 (this version, v2)) Thermoelastic distortion resulting from optical absorption by transmissive and reflective optics can cause unacceptable changes in optical systems that employ high power beams. In advanced-generation laser-interferometric gravitational wave detectors for example, optical absorption is expected to result in wavefront distortions that would compromise the sensitivity of the detector; thus necessitating the use of adaptive thermal compensation. Unfortunately, these systems have long thermal time constants and so predictive feed-forward control systems could be required - but the finite-element analysis is computationally expensive. We describe here the use of the Betti-Maxwell elastodynamic reciprocity theorem to calculate the response of linear elastic bodies (optics) to heating that has arbitrary spatial distribution. We demonstrate using a simple example, that it can yield accurate results in computational times that are significantly less than those required for finite-element analyses. arXiv:1312.0405 [gr-qc] Ray tracing Einstein-Æther black holes: Universal versus Killing horizons Bethan Cropp, Stefano Liberati, Arif Mohd, Matt Visser (Submitted on 2 Dec 2013 (v1), last revised 2 Mar 2014 (this version, v2)) Violating Lorentz-invariance, and so implicitly permitting some form of superluminal communication, necessarily alters the notion of a black hole. Nevertheless, in both Einstein-{\AE}ther gravity, and Ho\v{r}ava-Lifshitz gravity, there is still a causally disconnected region in black-hole solutions; now being bounded by a "Universal horizon", which traps excitations of arbitrarily high velocities. To better understand the nature of these black holes, and their Universal horizons, we study ray trajectories in these spacetimes. We find evidence that Hawking radiation is associated with the Universal horizon, while the "lingering" of ray trajectories near the Killing horizon hints at reprocessing there. In doing this we solve an apparent discrepancy between the surface gravity of the Universal horizon and the associated temperature derived by tunneling method. These results advance the understanding of these exotic horizons, and provide hints for a full understanding of black-hole thermodynamics in Lorentz-violating theories. Journal reference: Phys. Rev. D 89, 064061 (2014) arXiv:1401.0568 [gr-qc] Bounding the greybody factors for scalar excitations of the Kerr-Newman spacetime Petarpa Boonserm (Chulalongkorn University), Tritos Ngampitipan (Chulalongkorn University), Matt Visser (Victoria University of Wellington) (Submitted on 3 Jan 2014) Finding exact solutions for black-hole greybody factors is generically impractical; typically one resorts either to making semi-analytic or numerical estimates, or alternatively to deriving rigorous analytic bounds. Indeed, rigorous bounds have already been established for the greybody factors of Schwarzschild and Riessner-Nordstrom black holes, and more generally for those of arbitrary static spherically symmetric asymptotically flat black holes. Adding rotation to the problem greatly increases the level of difficulty, both for purely technical reasons (the Kerr or Kerr-Newman black holes are generally much more difficult to work with than the Schwarzschild or Reissner-Nordstrom black holes), but also at a conceptual level (due to the generic presence of super-radiant modes). In the current article we analyze bounds on the greybody factors for scalar excitations of the Kerr-Newman geometry in some detail, first for zero-angular-momentum modes, then for the non-super-radiant modes, and finally for the super-radiant modes. Journal reference: JHEP 1403 (2014) 113 arXiv:1405.5678 [gr-qc] Greybody factors for Myers-Perry black holes Petarpa Boonserm (Chulalongkorn University), Auttakit Chatrabhuti (Chulalongkorn University), Tritos Ngampitipan (Chulalongkorn University), Matt Visser (Victoria University of Wellington) (Submitted on 22 May 2014) The Myers-Perry black holes are higher-dimensional generalizations of the usual (3+1)-dimensional rotating Kerr black hole. They are of considerable interest in Kaluza-Klein models, specifically within the context of brane-world versions thereof. In the present article we shall consider the greybody factors associated with scalar field excitations of the Myers-Perry spacetimes, and develop some rigorous bounds on these greybody factors. These bounds are of relevance for characterizing both the higher-dimensional Hawking radiation, and the super-radiance, that is expected for these arXiv:1407.5678 [gr-qc] Super-radiance and flux conservation Petarpa Boonserm (Chulalongkorn University), Tritos Ngampitipan (Chulalongkorn University), Matt Visser (Victoria University of Wellington) (Submitted on 21 Jul 2014 (v1), last revised 28 Jul 2014 (this version, v2)) The theoretical foundations of the phenomenon known as super-radiance still continues to attract considerable attention. Despite many valiant attempts at pedagogically clear presentations, the effect nevertheless still continues to generate some significant confusion. Part of the confusion arises from the fact that super-radiance in a quantum field theory [QFT] context is not the same as super-radiance (super-fluorescence) in some condensed matter contexts; part of the confusion arises from traditional but sometimes awkward normalization conventions, and part is due to sometimes unnecessary confusion between fluxes and probabilities. We shall argue that the key point underlying the effect is flux conservation, (and, in the presence of dissipation, a controlled amount of flux non-conservation), and that attempting to phrase things in terms of reflection and transmission probabilities only works in the absence of super-radiance. To help clarify the situation we present a simple exactly solvable toy model exhibiting both super-radiance and damping. arXiv:1407.7295 [gr-qc] Physical observability of horizons Matt Visser (Victoria University of Wellington) (Submitted on 28 Jul 2014 (v1), last revised 25 Nov 2014 (this version, v3)) Event horizons are (generically) not physically observable. In contrast, apparent horizons (and the closely related trapping horizons) are generically physically observable --- in the sense that they can be detected by observers working in finite-size regions of spacetime. Consequently event horizons are inappropriate tools for defining astrophysical black holes, or indeed for defining any notion of evolving}black hole, (evolving either due to accretion or Hawking radiation). The only situation in which an event horizon becomes physically observable is for the very highly idealized stationary or static black holes, when the event horizon is a Killing horizon which is degenerate with the apparent and trapping horizons; and then it is the physical observability of the apparent/trapping horizons that is fundamental -- the event horizon merely comes along for the ride. arXiv:1409.7754 [gr-qc] Thermality of the Hawking flux Matt Visser (Victoria University of Wellington) (Submitted on 27 Sep 2014 (v1), last revised 6 May 2015 (this version, v3)) Is the Hawking flux "thermal"? Unfortunately, the answer to this seemingly innocent question depends on a number of often unstated, but quite crucial, technical assumptions built into modern (mis-) interpretations of the word "thermal". The original 1850's notions of thermality -- based on classical thermodynamic reasoning applied to idealized "black bodies" or "lamp black surfaces" -- when supplemented by specific basic quantum ideas from the early 1900's, immediately led to the notion of the black-body spectrum, (the Planck-shaped spectrum), but "without" any specific assumptions or conclusions regarding correlations between the quanta. Many (not all) modern authors (often implicitly and unintentionally) add an extra, and quite unnecessary, assumption that there are no correlations in the black-body radiation; but such usage is profoundly ahistorical and dangerously misleading. Specifically, the Hawking flux from an evaporating black hole, (just like the radiation flux from a leaky furnace or a burning lump of coal), is only "approximately" Planck-shaped over a bounded frequency range. Standard physics (phase space and adiabaticity effects) explicitly bound the frequency range over which the Hawking flux is "approximately" Planck-shaped from both above and below -- the Hawking flux is certainly not exactly Planckian, and there is no compelling physics reason to assume the Hawking photons are uncorrelated. arXiv:1501.01619 [gr-qc] Energy conditions in the epoch of galaxy formation Matt Visser (Submitted on 7 Jan 2015) The energy conditions of Einstein gravity (classical general relativity) do not require one to fix a specific equation of state. In a Friedmann-Robertson-Walker universe where the equation of state for the cosmological fluid is uncertain, the energy conditions provide simple, model-independent, and robust bounds on the behaviour of the density and look-back time as a function of red-shift. Current observations suggest that the "strong energy condition" is violated sometime between the epoch of galaxy formation and the present. This implies that no possible combination of "normal" matter is capable of fitting the observational data. Journal reference: Science 276 (1997) 88-90 arXiv:1501.07044 [gr-qc] Modelling anisotropic fluid spheres in general relativity Petarpa Boonserm (Chulalongkorn University), Tritos Ngampitipan (Chulalongkorn University), Matt Visser (Victoria University of Wellington) (Submitted on 28 Jan 2015 (v1), last revised 3 Feb 2015 (this version, v2)) We argue that an arbitrary general relativistic anisotropic fluid sphere, (spherically symmetric but with transverse pressure not equal to radial pressure), can nevertheless be successfully modelled by suitable linear combinations of quite ordinary classical matter: an isotropic perfect fluid, a classical electromagnetic field, and a classical (minimally coupled) scalar field. While the most general decomposition is not unique, a preferred minimal decomposition can be constructed that is unique. We show how the classical energy conditions for the anisotropic fluid sphere can be related to energy conditions for the isotropic perfect fluid, electromagnetic field, and scalar field components of the model. Furthermore we show how this decomposition relates to the distribution of electric charge density and scalar charge density throughout the model that is used to mimic the anisotropic fluid sphere. Consequently, we can build physically reasonable matter models for almost any spherically symmetric spacetime. arXiv:1502.02758 [gr-qc] Conformally Friedmann-Lemaitre-Robertson-Walker cosmologies Matt Visser (Victoria University of Wellington) (Submitted on 10 Feb 2015 (v1), last revised 29 Apr 2015 (this version, v3)) In a universe where, according to the standard cosmological models, some 97% of the total mass-energy is still "missing in action" it behooves us to spend at least a little effort critically assessing and exploring radical alternatives. Among possible, (dare we say plausible), nonstandard but superficially viable models, those spacetimes conformal to the standard Friedmann-Lemaitre-Robertson-Walker class of cosmological models play a very special role -- these models have the unique and important property of permitting large non-perturbative geometric deviations from Friedmann-Lemaitre-Robertson-Walker cosmology without unacceptably distorting the cosmic microwave background. Performing a "cosmographic" analysis, (that is, temporarily setting aside the Einstein equations, since the question of whether or not the Einstein equations are valid on galactic and cosmological scales is essentially the same question as whether or not dark matter/ dark energy actually exist), and using both supernova data and information about galactic structure, one can nevertheless place some quite significant observational constraints on any possible conformal mode -- however there is still an extremely rich range of phenomenological possibilities for both cosmologists and astrophysicists to explore. Comments: This version accepted for publication in Classical and Quantum Gravity arXiv:1506.03975 [gr-qc] The Hawking cascade from a black hole is extremely sparse Finnian Gray (Victoria University of Wellington), Sebastian Schuster (Victoria University of Wellington), Alexander Van-Brunt (Victoria University of Wellington), Matt Visser (Victoria University of (Submitted on 12 Jun 2015) The Hawking flux from a black hole, (at least as seen from large distances), is extremely sparse and thin, with the average time between emission of successive Hawking quanta being hundreds of times larger than the natural timescale set by the energies of the emitted quanta. Some aspects of this result have been known for over 30 years, but have been largely forgotten, possibly because authors focussed mainly on the late-time high-temperature regime. We shall instead focus on the early-stage low-temperature regime, and shall both quantify and significantly extend these observations in a number of different ways. First we shall identify several natural dimensionless figures of merit, and thereby compare the mean time between emission of successive Hawking quanta to several quite natural timescales that can be associated with the emitted quanta, demonstrating that ratios of 300 or more are typical for emission of photons or gravitons from a Schwarzschild black hole. Furthermore these ratios are independent of the mass of the black hole as it slowly evolves. The situation for fermion emission (massless neutrinos) is actually worse. Second, we shall then show that the situation for Reissner-Nordstrom, Kerr, Kerr-Newman and "dirty" black holes is even worse. Third, we consider the effects of particle rest mass. Overall, the Hawking quanta are seen to be dribbling out of the black hole one at a time, in an extremely slow cascade of 3-body decays. This implies that the Hawking flux is subject to "shot noise". Observationally, the Planck spectrum of the Hawking flux can only be determined by collecting and integrating data over a very long time. We conclude by connecting these points back to various kinematic aspects of the Hawking evaporation process. arXiv:1411.1662 [gr-qc] Rotating black holes in a draining bathtub: superradiant scattering of gravity waves Mauricio Richartz, Angus Prain, Stefano Liberati, Silke Weinfurtner (Submitted on 6 Nov 2014 (v1), last revised 5 Jun 2015 (this version, v3)) In a draining rotating fluid flow background, surface perturbations behave as a scalar field on a rotating effective black hole spacetime. We propose a new model for the background flow which takes into account the varying depth of the water. Numerical integration of the associated Klein-Gordon equation using accessible experimental parameters shows that gravity waves in an appropriate frequency range are amplified through the mechanism of superradiance. Our numerical results suggest that the observation of this phenomenon in a common fluid mechanical system is within experimental reach. Unlike the case of wave scattering around Kerr black holes, which depends only on one dimensionless background parameter (the ratio a/M between the specific angular momentum and the mass of the black hole), our system depends on two dimensionless background parameters, namely the normalized angular velocity and surface gravity at the effective black hole horizon. Journal reference: Phys. Rev. D 91, 124018 (2015) arXiv:1401.1287 [gr-qc] The chart based approach to studying the global structure of a spacetime induces a coordinate invariant boundary Ben Whale (Submitted on 7 Jan 2014 (v1), last revised 25 Feb 2014 (this version, v2)) I demonstrate that the chart based approach to the study of the global structure of Lorentzian manifolds induces a homeomorphism of the manifold into a topological space as an open dense set. The topological boundary of this homeomorphism is a chart independent boundary of ideal points equipped with a topological structure and a physically motivated classification. I show that this new boundary contains all other boundaries that can be presented as the topological boundary of an envelopment. Hence, in particular, it is a generalisation of Penrose's conformal boundary. I provide three detailed examples: the conformal compactification of Minkowski spacetime, Scott and Szekeres' analysis of the Curzon singularity and Beyer and Hennig's analysis of smooth Gowdy symmetric generalised Taub-NUT spacetimes. Journal reference: General Relativity and Gravitation, Vol. 46, No. 1. (2014), pp. 1-43 arXiv:1412.5652 [math.DG] Generalised time functions and finiteness of the Lorentzian distance Adam Rennie, Ben E. Whale (Submitted on 17 Dec 2014 (v1), last revised 30 Jan 2015 (this version, v2)) We show that finiteness of the Lorentzian distance is equivalent to the existence of generalised time functions with gradient uniformly bounded away from light cones. To derive this result we introduce new techniques to construct and manipulate achronal sets. As a consequence of these techniques we obtain a functional description of the Lorentzian distance extending the work of Franco and arXiv:1312.1952 [gr-qc] Finding high-order analytic post-Newtonian parameters from a high-precision numerical self-force calculation Abhay G Shah, John L Friedman, Bernard F Whiting (Submitted on 6 Dec 2013 (v1), last revised 11 Mar 2014 (this version, v2)) We present a novel analytic extraction of high-order post-Newtonian (pN) parameters that govern quasi-circular binary systems. Coefficients in the pN expansion of the energy of a binary system can be found from corresponding coefficients in an extreme-mass-ratio inspiral (EMRI) computation of the change ΔU in the redshift factor of a circular orbit at fixed angular velocity. Remarkably, by computing this essentially gauge-invariant quantity to accuracy greater than one part in 10225, and by assuming that a subset of pN coefficients are rational numbers or products of π and a rational, we obtain the exact analytic coefficients. We find the previously unexpected result that the post-Newtonian expansion of ΔU (and of the change ΔΩ in the angular velocity at fixed redshift factor) have conservative terms at half-integral pN order beginning with a 5.5 pN term. This implies the existence of a corresponding 5.5 pN term in the expansion of the energy of a binary system. Coefficients in the pN series that do not belong to the subset just described are obtained to accuracy better than 1 part in 10265−23n at nth pN order. We work in a radiation gauge, finding the radiative part of the metric perturbation from the gauge-invariant Weyl scalar ψ0 via a Hertz potential. We use mode-sum renormalization, and find high-order renormalization coefficients by matching a series in L=ℓ+1/2 to the large-L behavior of the expression for ΔU. The non-radiative parts of the perturbed metric associated with changes in mass and angular momentum are calculated in the Schwarzschild gauge. arXiv:1312.2975 [gr-qc] Half-integral conservative post-Newtonian approximations in the redshift factor of black hole binaries Luc Blanchet, Guillaume Faye, Bernard F. Whiting (Submitted on 10 Dec 2013 (v1), last revised 12 Mar 2014 (this version, v2)) Recent perturbative self-force computations (Shah, Friedman & Whiting, submitted to Phys. Rev. {\bf D}, arXiv:1312.1952 [gr-qc]), both numerical and analytical, have determined that half-integral post-Newtonian terms arise in the conservative dynamics of black-hole binaries moving on exactly circular orbits. We look at the possible origin of these terms within the post-Newtonian approximation, find that they essentially originate from non-linear "tail-of-tail" integrals and show that, as demonstrated in the previous paper, their first occurrence is at the 5.5PN order. The post-Newtonian method we use is based on a multipolar-post-Minkowskian treatment of the field outside a general matter source, which is re-expanded in the near zone and extended inside the source thanks to a matching argument. Applying the formula obtained for generic sources to compact binaries, we obtain the redshift factor of circular black hole binaries (without spins) at 5.5PN order in the extreme mass ratio limit. Our result fully agrees with the determination of the 5.5PN coefficient by means of perturbative self-force computations reported in the previously cited paper. Journal reference: Phys. Rev. D 89, 064026, arXiv:1405.5151 [gr-qc] High-order half-integral conservative post-Newtonian coefficients in the redshift factor of black hole binaries Luc Blanchet, Guillaume Faye, Bernard F. Whiting (Submitted on 20 May 2014 (v1), last revised 5 Sep 2014 (this version, v2)) The post-Newtonian approximation is still the most widely used approach to obtaining explicit solutions in general relativity, especially for the relativistic two-body problem with arbitrary mass ratio. Within many of its applications, it is often required to use a regularization procedure. Though frequently misunderstood, the regularization is essential for waveform generation without reference to the internal structure of orbiting bodies. In recent years, direct comparison with the self-force approach, constructed specifically for highly relativistic particles in the extreme mass ratio limit, has enabled preliminary confirmation of the foundations of both computational methods, including their very independent regularization procedures, with high numerical precision. In this paper, we build upon earlier work to carry this comparison still further, by examining next-to-next-to-leading order contributions beyond the half integral 5.5PN conservative effect, which arise from terms to cubic and higher orders in the metric and its multipole moments, thus extending scrutiny of the post-Newtonian methods to one of the highest orders yet achieved. We do this by explicitly constructing tail-of-tail terms at 6.5PN and 7.5PN order, computing the redshift factor for compact binaries in the small mass ratio limit, and comparing directly with numerically and analytically computed terms in the self-force approach, obtained using solutions for metric perturbations in the Schwarzschild space-time, and a combination of exact series representations possibly with more typical PN expansions. While self-force results may be relativistic but with restricted mass ratio, our methods, valid primarily in the weak-field slowly-moving regime, are nevertheless in principle applicable for arbitrary mass ratios. Journal reference: Phys. Rev. D 90, 044017 (2014) arXiv:1503.02618 [gr-qc] Raising and Lowering operators of spin-weighted spheroidal harmonics Abhay G. Shah, Bernard F. Whiting (Submitted on 9 Mar 2015) In this paper we generalize the spin-raising and lowering operators of spin-weighted spherical harmonics to linear-in-γ spin-weighted spheroidal harmonics where γ is an additional parameter present in the second order ordinary differential equation governing these harmonics. One can then generalize these operators to higher powers in γ. Constructing these operators required calculating the ℓ-, s- and m-raising and lowering operators (and various combinations of them) of spin-weighted spherical harmonics which have been calculated and shown explicitly in this paper. arXiv:1503.02638 [gr-qc] Experimental mathematics meets gravitational self-force Nathan K. Johnson-McDaniel, Abhay G. Shah, Bernard F. Whiting (Submitted on 9 Mar 2015) It is now possible to compute linear in mass-ratio terms in the post-Newtonian (PN) expansion for compact binaries to very high orders using black hole perturbation theory applied to various invariants. For instance, a computation of the redshift invariant of a point particle in a circular orbit about a black hole in linear perturbation theory gives the linear-in-mass-ratio portion of the binding energy of a circular binary with arbitrary mass ratio. This binding energy, in turn, encodes the system's conservative dynamics. We give a method for extracting the analytic forms of these PN coefficients from high-accuracy numerical data using experimental mathematics techniques, notably an integer relation algorithm. Such methods should be particularly important when the calculations progress to the considerably more difficult case of perturbations of the Kerr metric. As an example, we apply this method to the redshift invariant in Schwarzschild. Here we obtain analytic coefficients to 12.5PN, and higher-order terms in mixed analytic-numerical form to 21.5PN, including analytic forms for the complete 13.5PN coefficient, and all the logarithmic terms at 13PN. At these high orders, an individual coefficient can have over 30 terms, including a wide variety of transcendental numbers, when written out in full. We are still able to obtain analytic forms for such coefficients from the numerical data through a careful study of the structure of the expansion. The structure we find also allows us to predict certain "leading logarithm"-type contributions to all orders. The additional terms in the expansion we obtain improve the accuracy of the PN series for the redshift observable, even in the very strong-field regime inside the innermost stable circular orbit, particularly when combined with exponential resummation. Report number: ICTS/2015/3 arXiv:1503.04192 [astro-ph.CO] Determing the frame of minimum Hubble expansion variance James H. McKay, David L. Wiltshire (Submitted on 13 Mar 2015 (v1), last revised 20 Mar 2015 (this version, v2)) We characterize a cosmic rest frame in which the variation of the spherically averaged Hubble expansion is most uniform, under local Lorentz boosts of the central observer. Using the COMPOSITE sample of 4534 galaxies, we identify a degenerate set of candidate minimum variance frames, which includes the rest frame of the Local Group (LG) of galaxies, but excludes the standard Cosmic Microwave Background (CMB) frame. Candidate rest frames defined by a boost from the LG frame close to the plane of the galaxy have a statistical likelihood similar to the LG frame. This may result from a lack of constraining data in the Zone of Avoidance in the COMPOSITE sample. We extend our analysis to the Cosmicflows-2 (CF2) sample of 8,162 galaxies. While the signature of a systematic boost offset between the CMB and LG frames averages is still detected, the spherically averaged expansion variance in all rest frames is significantly larger in the CF2 sample than would be reasonably expected. We trace this to the CF2 distances being reported without a correction for inhomogeneous distribution Malmquist bias. Systematic differences in the inclusion of the large SFI++ subsample into the COMPOSITE and CF2 catalogues are analysed. Our results highlight the importance of a careful treatment of Malmquist biases for future peculiar velocities studies, including tests of the hypothesis of Wiltshire et al [arXiv:1201.5371] that a significant fraction of the CMB temperature dipole may be nonkinematic in origin. arXiv:1505.07800 [gr-qc] Is there proof that backreaction of inhomogeneities is irrelevant in cosmology? T. Buchert, M. Carfora, G.F.R. Ellis, E.W. Kolb, M.A.H. MacCallum, J.J. Ostrowski, S. Räsänen, B.F. Roukema, L. Andersson, A.A. Coley, D.L. Wiltshire (Submitted on 28 May 2015) No. In a number of papers Green and Wald argue that the standard FLRW model approximates our Universe extremely well on all scales, except in the immediate vicinity of very strong field astrophysical objects. In particular, they argue that the effect of inhomogeneities on average properties of the Universe (backreaction) is irrelevant. We show that their claims are not valid. Specifically, we demonstrate, referring to their recent review paper, that (i) their two-dimensional example used to illustrate the fitting problem differs from the actual problem in important respects, and it assumes what is to be proven; (ii) the proof of the trace-free property of backreaction is unphysical and the theorem about it is mathematically flawed; (iii) the scheme that underlies the trace-free theorem does not involve averaging and therefore does not capture crucial non-local effects; (iv) their arguments are to a large extent coordinate-dependent, and (v) many of their criticisms of backreaction frameworks do not apply to the published definitions of these frameworks. Report number: HIP-2015-17/TH ABSTRACTS FROM THE LIGO SCIENTIFIC COLLABORATION at gr-qc, December 2013 - June 2015 The LIGO Scientific Collaboration is a consortium of scientific institutions doing work on the Laser Interferometer Gravitational-Wave Observatory (LIGO), which consists of two laser interferometers 3030 km apart, one at Hanford, Washington State and the other at Livingston, Louisiana. The LIGO Scientific Collaboration includes ASGRG members Pablo Barriga, David Blair, Philip Charlton, Neil Cornish, Ra Inta, Ju Li, David McClelland, Andrew Melatos, Jesper Munch, Susan Scott, Antony Searle, Daniel Shaddock, Bram Slagmolen, Michael Stefszky, Peter Veitch, Bernard Whiting and Chunnong Listed below are all the abstracts listed on gr-qc from December 2013 to June 2015 from consortia that include one ASGRG member as a co-author – these are mostly LIGO abstracts, but there are occasionally some from eLISA and Virgo. arXiv:1401.0939 [gr-qc] The NINJA-2 project: Detecting and characterizing gravitational waveforms modelled using numerical binary black hole simulations The LIGO Scientific Collaboration, the Virgo Collaboration, the NINJA-2 Collaboration (Submitted on 5 Jan 2014) The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave astrophysics communities. The purpose of NINJA is to study the ability to detect gravitational waves emitted from merging binary black holes and recover their parameters with next-generation gravitational-wave observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete binary black hole hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a "blind injection challenge" similar to that conducted in recent LIGO and Virgo science runs, we added 7 hybrid waveforms to two months of data recolored to predictions of Advanced LIGO and Advanced Virgo sensitivity curves during their first observing runs. The resulting data was analyzed by gravitational-wave detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We also perform a large-scale monte-carlo study to assess the ability to recover each of the 60 hybrid waveforms with early Advanced LIGO and Advanced Virgo sensitivity curves. Our results predict that early Advanced LIGO and Advanced Virgo will have a volume-weighted average sensitive distance of 300Mpc (1Gpc) for 10M⊙+10M⊙ (50M⊙+50M⊙) binary black hole coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. [Abstract abridged for ArXiv, full version in PDF] Journal reference: Class. Quantum Grav. 31 115004, 2014 Report number: LIGO-P1300199 arXiv:1402.4974 [gr-qc] Implementation of an F-statistic all-sky search for continuous gravitational waves in Virgo VSR1 data Authors: J. Aasi et al. (Submitted on 20 Feb 2014 (v1), last revised 10 Apr 2014 (this version, v3)) We present an implementation of the F-statistic to carry out the first search in data from the Virgo laser interferometric gravitational wave detector for periodic gravitational waves from a priori unknown, isolated rotating neutron stars. We searched a frequency f0 range from 100 Hz to 1 kHz and the frequency dependent spindown f1 range from −1.6(f0/100Hz)×10−9 Hz/s to zero. A large part of this frequency - spindown space was unexplored by any of the all-sky searches published so far. Our method consisted of a coherent search over two-day periods using the F-statistic, followed by a search for coincidences among the candidates from the two-day segments. We have introduced a number of novel techniques and algorithms that allow the use of the Fast Fourier Transform (FFT) algorithm in the coherent part of the search resulting in a fifty-fold speed-up in computation of the F-statistic with respect to the algorithm used in the other pipelines. No significant gravitational wave signal was found. The sensitivity of the search was estimated by injecting signals into the data. In the most sensitive parts of the detector band more than 90% of signals would have been detected with dimensionless gravitational-wave amplitude greater than 5×10−24. Comments: 27 pages, 10 figures, submitted to CQG. Report number: LIGO Document No. LIGO-P1300133 arXiv:1403.5306 [gr-qc] Search for gravitational wave ringdowns from perturbed intermediate mass black holes in LIGO-Virgo data from 2005-2010 The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 20 Mar 2014 (v1), last revised 22 May 2014 (this version, v2)) We report results from a search for gravitational waves produced by perturbed intermediate mass black holes (IMBH) in data collected by LIGO and Virgo between 2005 and 2010. The search was sensitive to astrophysical sources that produced damped sinusoid gravitational wave signals, also known as ringdowns, with frequency 50≤f0/Hz≤2000 and decay timescale 0.0001≲τ/s≲0.1 characteristic of those produced in mergers of IMBH pairs. No significant gravitational wave candidate was detected. We report upper limits on the astrophysical coalescence rates of IMBHs with total binary mass 50≤M/M⊙≤450 and component mass ratios of either 1:1 or 4:1. For systems with total mass 100≤M/M⊙≤150, we report a 90%-confidence upper limit on the rate of binary IMBH mergers with non-spinning and equal mass components of 6.9×10−8Mpc−3yr−1. We also report a rate upper limit for ringdown waveforms from perturbed IMBHs, radiating 1% of their mass as gravitational waves in the fundamental, ℓ=m=2, oscillation mode, that is nearly three orders of magnitude more stringent than previous results. Report number: LIGO-P1300156 arXiv:1403.6639 [astro-ph.HE] Search for gravitational waves associated with gamma-ray bursts detected by the InterPlanetary Network The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 26 Mar 2014 (v1), last revised 17 Apr 2014 (this version, v2)) We present the results of a search for gravitational waves associated with 223 gamma-ray bursts (GRBs) detected by the InterPlanetary Network (IPN) in 2005-2010 during LIGO's fifth and sixth science runs and Virgo's first, second and third science runs. The IPN satellites provide accurate times of the bursts and sky localizations that vary significantly from degree scale to hundreds of square degrees. We search for both a well-modeled binary coalescence signal, the favored progenitor model for short GRBs, and for generic, unmodeled gravitational wave bursts. Both searches use the event time and sky localization to improve the gravitational-wave search sensitivity as compared to corresponding all-time, all-sky searches. We find no evidence of a gravitational-wave signal associated with any of the IPN GRBs in the sample, nor do we find evidence for a population of weak gravitational-wave signals associated with the GRBs. For all IPN-detected GRBs, for which a sufficient duration of quality gravitational-wave data is available, we place lower bounds on the distance to the source in accordance with an optimistic assumption of gravitational-wave emission energy of 10−2M⊙c2 at 150 Hz, and find a median of 13 Mpc. For the 27 short-hard GRBs we place 90% confidence exclusion distances to two source models: a binary neutron star coalescence, with a median distance of 12Mpc, or the coalescence of a neutron star and black hole, with a median distance of 22 Mpc. Finally, we combine this search with previously published results to provide a population statement for GRB searches in first-generation LIGO and Virgo gravitational-wave detectors, and a resulting examination of prospects for the advanced gravitational-wave detectors. Report number: LIGO-P1300226 arXiv:1404.2199 [gr-qc] Search for gravitational radiation from intermediate mass black hole binaries in data from the second LIGO-Virgo joint science run The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 8 Apr 2014 (v1), last revised 19 Jun 2014 (this version, v4)) This paper reports on an unmodeled, all-sky search for gravitational waves from merging intermediate mass black hole binaries (IMBHB). The search was performed on data from the second joint science run of the LIGO and Virgo detectors (July 2009 - October 2010) and was sensitive to IMBHBs with a range up to ∼200 Mpc, averaged over the possible sky positions and inclinations of the binaries with respect to the line of sight. No significant candidate was found. Upper limits on the coalescence-rate density of nonspinning IMBHBs with total masses between 100 and 450 M⊙ and mass ratios between 0.25 and 1 were placed by combining this analysis with an analogous search performed on data from the first LIGO-Virgo joint science run (November 2005 - October 2007). The most stringent limit was set for systems consisting of two 88 M⊙ black holes and is equal to 0.12 Mpc−3 Myr−1 at the 90% confidence level. This paper also presents the first estimate, for the case of an unmodeled analysis, of the impact on the search range of IMBHB spin configurations: the visible volume for IMBHBs with nonspinning components is roughly doubled for a population of IMBHBs with spins aligned with the binary's orbital angular momentum and uniformly distributed in the dimensionless spin parameter up to 0.8, whereas an analogous population with antialigned spins decreases the visible volume by ∼20%. Report number: LIGO-P1300158 arXiv:1405.7904 [gr-qc] First all-sky search for continuous gravitational waves from unknown sources in binary systems The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 30 May 2014 (v1), last revised 17 Sep 2014 (this version, v2)) We present the first results of an all-sky search for continuous gravitational waves from unknown spinning neutron stars in binary systems using LIGO and Virgo data. Using a specially developed analysis program, the TwoSpect algorithm, the search was carried out on data from the sixth LIGO Science Run and the second and third Virgo Science Runs. The search covers a range of frequencies from 20 Hz to 520 Hz, a range of orbital periods from 2 to ~2,254 h and a frequency- and period-dependent range of frequency modulation depths from 0.277 to 100 mHz. This corresponds to a range of projected semi-major axes of the orbit from ~0.6e-3 ls to ~6,500 ls assuming the orbit of the binary is circular. While no plausible candidate gravitational wave events survive the pipeline, upper limits are set on the analyzed data. The most sensitive 95% confidence upper limit obtained on gravitational wave strain is 2.3e-24 at 217 Hz, assuming the source waves are circularly polarized. Although this search has been optimized for circular binary orbits, the upper limits obtained remain valid for orbital eccentricities as large as 0.9. In addition, upper limits are placed on continuous gravitational wave emission from the low-mass x-ray binary Scorpius X-1 between 20 Hz and 57.25 Hz. Journal reference: Phys. Rev. D 90 (15 September 2014), 062010 arXiv:1406.4556 [gr-qc] Improved Upper Limits on the Stochastic Gravitational-Wave Background from 2009-2010 LIGO and Virgo Data The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 17 Jun 2014 (v1), last revised 12 Aug 2014 (this version, v2)) Gravitational waves from a variety of sources are predicted to superpose to create a stochastic background. This background is expected to contain unique information from throughout the history of the universe that is unavailable through standard electromagnetic observations, making its study of fundamental importance to understanding the evolution of the universe. We carry out a search for the stochastic background with the latest data from LIGO and Virgo. Consistent with predictions from most stochastic gravitational-wave background models, the data display no evidence of a stochastic gravitational-wave signal. Assuming a gravitational-wave spectrum of Omega_GW(f)=Omega_alpha*(f/f_ref)^alpha, we place 95% confidence level upper limits on the energy density of the background in each of four frequency bands spanning 41.5-1726 Hz. In the frequency band of 41.5-169.25 Hz for a spectral index of alpha=0, we constrain the energy density of the stochastic background to be Omega_GW(f)<5.6x10^-6. For the 600-1000 Hz band, Omega_GW(f)<0.14*(f/900 Hz)^3, a factor of 2.5 lower than the best previously reported upper limits. We find Omega_GW(f)<1.8x10^-4 using a spectral index of zero for 170-600 Hz and Omega_GW(f)<1.0*(f/1300 Hz)^3 for 1000-1726 Hz, bands in which no previous direct limits have been placed. The limits in these four bands are the lowest direct measurements to date on the stochastic background. We discuss the implications of these results in light of the recent claim by the BICEP2 experiment of the possible evidence for inflationary gravitational waves. arXiv:1410.6211 [gr-qc] Searching for stochastic gravitational waves using data from the two co-located LIGO Hanford detectors The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 22 Oct 2014 (v1), last revised 3 Dec 2014 (this version, v3)) Searches for a stochastic gravitational-wave background (SGWB) using terrestrial detectors typically involve cross-correlating data from pairs of detectors. The sensitivity of such cross-correlation analyses depends, among other things, on the separation between the two detectors: the smaller the separation, the better the sensitivity. Hence, a co-located detector pair is more sensitive to a gravitational-wave background than a non-co-located detector pair. However, co-located detectors are also expected to suffer from correlated noise from instrumental and environmental effects that could contaminate the measurement of the background. Hence, methods to identify and mitigate the effects of correlated noise are necessary to achieve the potential increase in sensitivity of co-located detectors. Here we report on the first SGWB analysis using the two LIGO Hanford detectors and address the complications arising from correlated environmental noise. We apply correlated noise identification and mitigation techniques to data taken by the two LIGO Hanford detectors, H1 and H2, during LIGO's fifth science run. At low frequencies, 40 - 460 Hz, we are unable to sufficiently mitigate the correlated noise to a level where we may confidently measure or bound the stochastic gravitational-wave signal. However, at high frequencies, 460-1000 Hz, these techniques are sufficient to set a 95 confidence level (C.L.) upper limit on the gravitational-wave energy density of \Omega(f)<7.7 x 10^{-4} (f/ 900 Hz)^3, which improves on the previous upper limit by a factor of ∼180. In doing so, we demonstrate techniques that will be useful for future searches using advanced detectors, where correlated noise (e.g., from global magnetic fields) may affect even widely separated detectors. arXiv:1410.7764 [gr-qc] Characterization of the LIGO detectors during their sixth science run The LIGO Scientific Collaboration, The Virgo Collaboration (Submitted on 28 Oct 2014 (v1), last revised 18 Nov 2014 (this version, v2)) In 2009-2010, the Laser Interferometer Gravitational-wave Observa- tory (LIGO) operated together with international partners Virgo and GEO600 as a network to search for gravitational waves of astrophysical origin. The sensitiv- ity of these detectors was limited by a combination of noise sources inherent to the instrumental design and its environment, often localized in time or frequency, that couple into the gravitational-wave readout. Here we review the performance of the LIGO instruments during this epoch, the work done to characterize the de- tectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of LIGO to a variety of astrophysical sources. arXiv:1410.8310 [astro-ph.IM] Narrow-band search of continuous gravitational-wave signals from Crab and Vela pulsars in Virgo VSR4 data The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 30 Oct 2014) In this paper we present the results of a coherent narrow-band search for continuous gravitational-wave signals from the Crab and Vela pulsars conducted on Virgo VSR4 data. In order to take into account a possible small mismatch between the gravitational wave frequency and two times the star rotation frequency, inferred from measurement of the electromagnetic pulse rate, a range of 0.02 Hz around two times the star rotational frequency has been searched for both the pulsars. No evidence for a signal has been found and 95% confidence level upper limits have been computed both assuming polarization parameters are completely unknown and that they are known with some uncertainty, as derived from X-ray observations of the pulsar wind torii. For Vela the upper limits are comparable to the spin-down limit, computed assuming that all the observed spin-down is due to the emission of gravitational waves. For Crab the upper limits are about a factor of two below the spin-down limit, and represent a significant improvement with respect to past analysis. This is the first time the spin-down limit is significantly overcome in a narrow-band search. arXiv:1412.0605 [gr-qc] A directed search for gravitational waves from Scorpius X-1 with initial LIGO The LIGO Scientific Collaboration, the Virgo Collaboration (Submitted on 1 Dec 2014) We present results of a search for continuously-emitted gravitational radiation, directed at the brightest low-mass X-ray binary, Scorpius X-1. Our semi-coherent analysis covers 10 days of LIGO S5 data ranging from 50-550 Hz, and performs an incoherent sum of coherent F-statistic power distributed amongst frequency-modulated orbital sidebands. All candidates not removed at the veto stage were found to be consistent with noise at a 1% false alarm rate. We present Bayesian 95% confidence upper limits on gravitational-wave strain amplitude using two different prior distributions: a standard one, with no a priori assumptions about the orientation of Scorpius X-1; and an angle-restricted one, using a prior derived from electromagnetic observations. Median strain upper limits of 1.3e-24 and 8e-25 are reported at 150 Hz for the standard and angle-restricted searches respectively. This proof of principle analysis was limited to a short observation time by unknown effects of accretion on the intrinsic spin frequency of the neutron star, but improves upon previous upper limits by factors of ~1.4 for the standard, and 2.3 for the angle-restricted search at the sensitive region of the detector. arXiv:1412.5942 [astro-ph.HE] Searches for continuous gravitational waves from nine young supernova remnants Authors: J. Aasi, et al. (Submitted on 18 Dec 2014) We describe directed searches for continuous gravitational waves in data from the sixth LIGO science data run. The targets were nine young supernova remnants not associated with pulsars; eight of the remnants are associated with non-pulsing suspected neutron stars. One target's parameters are uncertain enough to warrant two searches, for a total of ten. Each search covered a broad band of frequencies and first and second frequency derivatives for a fixed sky direction. The searches coherently integrated data from the two LIGO interferometers over time spans from 5.3-25.3 days using the matched-filtering F-statistic. We found no credible gravitational-wave signals. We set 95% confidence upper limits as strong (low) as 4×10−25 on intrinsic strain, 2×10−7 on fiducial ellipticity, and 4×10−5 on r-mode amplitude. These beat the indirect limits from energy conservation and are within the range of theoretical predictions for neutron-star ellipticities and r-mode amplitudes. Report number: LIGO-P1400182
{"url":"http://asgrg.org/newsletters/news/news21.html","timestamp":"2024-11-05T05:51:14Z","content_type":"application/xhtml+xml","content_length":"264904","record_id":"<urn:uuid:ddba7b88-60a2-4297-b557-a66492086403>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00260.warc.gz"}
Problem Model Prompt P_Combinatorics_35 GPT-4 Turbo Direct C w/ H Ground Truth Answer: (2n)!/(2^n*n!) Use in Code import champ_dataset # "pip install champ-dataset" if necessary dataset = champ_dataset.load('v0') problem = dataset['P_Combinatorics_35'] messages = problem.conversations['GPT-4 Turbo|Direct C w/ H'].messages for message in messages: print(f'Role: {message.role}') print(f'Text: {message.text}') print(f'Error: {message.error}') Full Conversation System Prompt User Input Model Output You are an expert on mathematics. Solve the following problem. Make sure to show your work before giving the final answer. 2n players are participating in a tennis tournament. Find the number of ways that n pairs can be constructed for the first round, as an expression of n. You may find the following information useful: 1. If there are n steps of doing a task, and there are p_i ways to do the i-th step, then there are p_1*p_2*...*p_n ways to do the task. 2. Order the 2n players in a row, and make the pairing (1, 2), (3, 4), (5, 6), ..., (2n-1, 2n). In how many ways can this be done? 3. There are two types of multiple counting, one is the order within each pair, such as (1, 2) and (2, 1), and the second is ordering of the pairs, such as [(1, 2), (3, 4)] and [(3, 4), (1, 2)]. 4. If we count each object d times and get a total of n counts, then there are n/d objects. To solve this problem, we need to count the number of ways to pair up 2n players into n pairs for the first round of a tennis tournament. We will use the principles of combinatorics to find the solution. First, let's consider the number of ways to order 2n players in a row. This is simply (2n)!, which is the factorial of 2n. Next, we will form pairs as indicated: (1, 2), (3, 4), ..., (2n-1, 2n). However, we have overcounted because we do not care about the order within each pair or the order of the pairs themselves. For each pair, there are 2 ways to arrange the two players (since (player A, player B) is the same as (player B, player A) for our purposes). Since there are n pairs, we have overcounted by a factor of 2^n for the ordering within the pairs. Additionally, we do not care about the order of the pairs themselves. There are n! ways to arrange n pairs, so we have overcounted by a factor of n! for the ordering of the pairs. To correct for this overcounting, we divide the total number of arrangements by the product of these two factors: Number of ways to pair up 2n players = (2n)! / (2^n * n!) Now, let's simplify this expression: (2n)! / (2^n * n!) = (2n * (2n-1) * (2n-2) * ... * (n+1) * n!) / (2^n * n!) We can cancel out the n! in the numerator and denominator: = (2n * (2n-1) * (2n-2) * ... * (n+1)) / 2^n Now, notice that each term in the numerator is even, so we can factor out a 2 from each term: = [(2 * n) * (2 * (n-1)) * (2 * (n-2)) * ... * (2 * 1)] / 2^n = (2^n * n * (n-1) * (n-2) * ... * 1) / 2^n Now, the 2^n in the numerator and denominator cancel out: = n * (n-1) * (n-2) * ... * 1 = n! Therefore, the number of ways to pair up 2n players into n pairs for the first round of a tennis tournament is n!. Now, summarize the answer above in one sentence, without any intermediate steps or explanations. The number of ways to pair up 2n players into n pairs for the first round of a tennis tournament is n!.
{"url":"https://yujunmao1.github.io/CHAMP/solutions/p_combinatorics_35_gpt4t_directcwh.html","timestamp":"2024-11-05T04:37:58Z","content_type":"text/html","content_length":"7319","record_id":"<urn:uuid:1f7f7640-7d7b-4eb9-9f02-0b22f3575595>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00598.warc.gz"}
Robert Müller's profile - ShortScience.org Summary by Robert Müller 4 years ago Kumar et al. propose an algorithm to learn in batch reinforcement learning (RL), a setting where an agent learns purely form a fixed batch of data, $B$, without any interactions with the environments. The data in the batch is collected according to a batch policy $\pi_b$. Whereas most previous methods (like BCQ) constrain the learned policy to stay close to the behavior policy, Kumar et al. propose bootstrapping error accumulation reduction (BEAR), which constrains the newly learned policy to place some probability mass on every non negligible action. The difference is illustrated in the picture from the BEAR blog post: The behavior policy is in both images the dotted red line, the left image shows the policy matching where the algorithm is constrained to the purple choices, while the right image shows the support matching. **Theoretical Contribution:** The paper analysis formally how the use of out-of-distribution actions to compute the target in the Bellman equation influences the back-propagated error. Firstly a distribution constrained backup operator is defined as $T^{\Pi}Q(s,a) = \mathbb{E}[R(s,a) + \gamma \max_{\pi \in \Pi} \mathbb{E}_{P(s' \vert s,a)} V(s')]$ and $V(s) = \max_{\pi \in \Pi} \mathbb{E}_{\pi}[Q(s,a)]$ which considers only policies $\pi \in \Pi$. It is possible that the optimal policy $\pi^*$ is not contained in the policy set $\Pi$, thus there is a suboptimallity constant $\alpha (\Pi) = \max_{s,a} \vert \mathcal{T}^{\Pi}Q^{*}(s,a) - \mathcal{T}Q^{*}(s,a) ]\vert $ which captures how far $\pi^{*}$ is from $\Pi$. Letting $P^{\pi_i}$ be the transition-matrix when following policy $\pi_i$, $\rho_0$ the state marginal distribution of the training data in the batch and $\pi_1, \dots, \pi_k \in \Pi $. The error analysis relies upon a concentrability assumption $\rho_0 P^{\pi_1} \dots P^{\pi_k} \leq c(k)\mu(s)$, with $\mu(s)$ the state marginal. Note that $c(k)$ might be infinite if the support of $\Pi$ is not contained in the state marginal of the batch. Using the coefficients $c(k)$ a concentrability coefficient is defined as: $C(\Pi) = (1-\gamma)^2\sum_{k=1}^{\infty}k \gamma^{k-1}c(k).$ The concentrability takes values between 1 und $\infty$, where 1 corresponds to the case that the batch data were collected by $\pi$ and $\Pi = \{\pi\}$ and $\infty$ to cases where $\Pi$ has support outside of $\pi$. Combining this Kumar et a. get a bound of the Bellman error for distribution constrained value iteration with the constrained Bellman operator $T^{\Pi}$: $\lim_{k \rightarrow \infty} \mathbb{E}_{\rho_0}[\vert V^{\pi_k}(s)- V^{*}(s)] \leq \frac{\gamma}{(1-\gamma^2)} [C(\Pi) \mathbb{E}_{\mu}[\max_{\pi \in \Pi}\mathbb{E}_{\pi}[\delta(s,a)] + \frac{1-\gamma}{\gamma}\alpha(\Pi) ] ]$, where $\delta(s,a)$ is the Bellman error. This presents the inherent batch RL trade-off between keeping policies close to the behavior policy of the batch (captured by $C(\Pi)$ and keeping $\Pi$ sufficiently large (captured by $\alpha(\Pi)$). It is finally proposed to use support sets to construct $\Pi$, that is $\Pi_{\epsilon} = \{\pi \vert \pi(a \vert s)=0 \text{ whenever } \beta(a \vert s) < \epsilon \}$. This amounts to the set of all policies that place probability on all non-negligible actions of the behavior policy. For this particular choice of $\Pi = \Pi_{\epsilon}$ the concentrability coefficient can be bounded. The algorithm has an actor critic style, where the Q-value to update the policy is taken to be the minimum over the ensemble. The support constraint to place at least some probability mass on every non negligible action from the batch is enforced via sampled MMD. The proposed algorithm is a member of the policy regularized algorithms as the policy is updated to optimize: $\pi_{\Phi} = \max_{\pi} \mathbb{E}_{s \sim B} \mathbb{E}_{a \sim \pi(\cdot \vert s)} [min_{j = 1 \dots, k} Q_j(s,a)] s.t. \mathbb{E}_{s \sim B}[MMD(D(s), \pi(\cdot \vert s))] \leq \epsilon$ The Bellman target to update the Q-functions is computed as the convex combination of minimum and maximum of the ensemble. The experiments use the Mujoco environments Halfcheetah, Walker, Hopper and Ant. Three scenarios of batch collection, always consisting of 1Mio. samples, are considered: - completely random behavior policy - partially trained behavior policy - optimal policy as behavior policy The experiments confirm that BEAR outperforms other off-policy methods like BCQ or KL-control. The ablations show further that the choice of MMD is crucial as it is sometimes on par and sometimes substantially better than choosing KL-divergence.
{"url":"https://shortscience.org/user?name=robertmueller","timestamp":"2024-11-06T11:40:35Z","content_type":"text/html","content_length":"129856","record_id":"<urn:uuid:01d8eb7b-8725-44fe-a41f-35595defa16f>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00613.warc.gz"}
PROC X11: ARIMA Statement :: SAS/ETS(R) 9.22 User's Guide BACKCAST= n specifies the number of years to backcast the series. The default is BACKCAST= 0. See the section Effect of Backcast and Forecast Length for details. CHICR= value specifies the criteria for the significance level for the Box-Ljung chi-square test for lack of fit when testing the five predefined models. The default is CHICR= 0.05. The CHICR= option values must be between 0.01 and 0.90. The hypothesis being tested is that of model adequacy. Nonrejection of the hypothesis is evidence for an adequate model. Making the CHICR= value smaller makes it easier to accept the model. See the section Criteria Details for further details on the CHICR= option. CONVERGE= value specifies the convergence criterion for the estimation of an ARIMA model. The default value is 0.001. The CONVERGE= value must be positive. FORECAST= n specifies the number of years to forecast the series. The default is FORECAST= 1. See the section Effect of Backcast and Forecast Length for details. MAPECR= value specifies the criteria for the mean absolute percent error (MAPE) when testing the five predefined models. A small MAPE value is evidence for an adequate model; a large MAPE value results in the model being rejected. The MAPECR= value is the boundary for acceptance/rejection. Thus a larger MAPECR= value would make it easier for a model to pass the criteria. The default is MAPECR= 15. The MAPECR= option values must be between 1 and 100. See the section Criteria Details for further details on the MAPECR= option. MAXITER= n specifies the maximum number of iterations in the estimation process. MAXITER must be between 1 and 60; the default value is 15. specifies the estimation method. ML requests maximum likelihood, ULS requests unconditional least squares, and CLS requests conditional least squares. METHOD=CLS is the default. The maximum likelihood estimates are more expensive to compute than the conditional least squares estimates. In some cases, however, they can be preferable. For further information on the estimation methods, see Estimation Details in Chapter 7, The ARIMA Procedure. MODEL= ( P=n1 Q=n2 SP=n3 SQ=n4 DIF=n5 SDIF=n6 <NOINT> <CENTER>) specifies the ARIMA model. The AR and MA orders are given by P=n1 and Q=n2, respectively, while the seasonal AR and MA orders are given by SP=n3 and SQ=n4, respectively. The lag corresponding to seasonality is determined by the MONTHLY or QUARTERLY statement. Similarly, differencing and seasonal differencing are given by DIF=n5 and SDIF=n6, respectively. For example arima model=( p=2 q=1 sp=1 dif=1 sdif=1 ); specifies a (2,1,1)(1,1,0)s model, where s, the seasonality, is either 12 (monthly) or 4 (quarterly). More examples of the MODEL= syntax are given in the section Details of Model Selection. centers each time series by subtracting its sample mean. The analysis is done on the centered data. Later, when forecasts are generated, the mean is added back. Note that centering is done after differencing. The CENTER option is normally used in conjunction with the NOCONSTANT option of the ESTIMATE statement. For example, to fit an AR(1) model on the centered data without an intercept, use the following ARIMA statement: arima model=( p=1 center noint ); suppresses the normal printout generated by the ARIMA statement. Note that the effect of specifying the NOPRINT option in the ARIMA statement is different from the effect of specifying the NOPRINT in the PROC X11 statement, since the former only affects ARIMA output. OVDIFCR= value specifies the criteria for the over-differencing test when testing the five predefined models. When the MA parameters in one of these models sum to a number close to 1.0, this is an indication of over-parameterization and the model is rejected. The OVDIFCR= value is the boundary for this rejection; values greater than this value fail the over-differencing test. A larger OVDIFCR= value would make it easier for a model to pass the criteria. The default is OVDIFCR= 0.90. The OVDIFCR= option values must be between 0.80 and 0.99. See the section Criteria Details for further details on the OVDIFCR= option. provides the same output as the default printing for all models fit and, in addition, prints an estimation summary and chi-square statistics for each model fit. See Printed Output for details. prints the results for the initial pass of X11 made to exclude trading-day effects. This option has an effect only when the TDREGR= option specifies ADJUST, TEST, or PRINT. In these cases, an initial pass of the standard X11 method is required to get rid of calendar effects before doing any ARIMA estimation. Usually this first pass is not of interest, and by default no tables are printed. However, specifying PRINTFP in the ARIMA statement causes any tables printed in the final pass to also be printed for this initial pass. TRANSFORM= (LOG) | LOG TRANSFORM= ( constant ** power ) The ARIMA statement in PROC X11 allows certain transformations on the series before estimation. The specified transformation is applied only to a user-specified model. If TRANSFORM= is specified and the MODEL= option is not specified, the transformation request is ignored and a warning is printed. The LOG transformation requests that the natural log of the series be used for estimation. The resulting forecast values are transformed back to the original scale. A general power transformation of the form transform= ( a ** b ) If the constant a is not specified, it is assumed to be zero. The specified ARIMA model is then estimated using the transformed series. The resulting forecast values are transformed back to the original scale.
{"url":"http://support.sas.com/documentation/cdl/en/etsug/63348/HTML/default/etsug_x11_sect008.htm","timestamp":"2024-11-14T04:37:10Z","content_type":"application/xhtml+xml","content_length":"18637","record_id":"<urn:uuid:e8a23d3c-9523-4953-9d81-503d5ea17c04>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00110.warc.gz"}
What is a solution to the linear differential equation: dy/dx= y/x+x? | HIX Tutor What is a solution to the linear differential equation: #dy/dx= y/x+x#? Answer 1 # dy/dx = y/x+x # # dy/dx +(-1/x) y = x # which has the following form: # dy/dx + P(x)y=Q(x) # Thus, we are able to create an Integrating Factor: # I = exp(int \ P(x) \ dx # # \ \ = exp(int \ -1/x \ dx) # # \ \ = exp(-lnx) # # \ \ = 1/x # And if we multiply the DE by this Integrating Factor, #I#, we will have a perfect product differential; # 1/x dy/dx -1/x^2 y = 1 # # :. d/dx (1/x y) = 1 # Since this can now be divided, we can "separate the variables" to obtain: # y/x = int \ 1 \ dx # Which is easy to incorporate: # y/x = x+C # # y = x^2 +Cx # Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 Solve first the homogeneous equation that is separable: #dy/dx = y/x# #dy/y = dx/x# #int dy/y = int dx/x# #ln abs y = ln abs x +C# and taking the exponential of both sides: #y = cx# Use now the variable coefficient method to find a solution to the compete equation in the form; #bar y = c(x) x# Differentiate using the product rule: #(dbary)/dx = x c'(x) + c(x)# and substitute in the original equation: #(dbary)/dx =(bary)/x +x# #x c'(x) + c(x) = (c(x) x)/x +x# #x c'(x) + c(x) = c(x)+x# #x c'(x) = x# #c'(x) = 1# #int c'(x)dx = int dx# #c(x) = x# and we do not need the constant because we can choose just one solution: #bar y = c(x) x = x^2# Then the complete solution of the equation is: #y =cx+x^2# #dy/dx = c+2x = (c+x)+x = (cx+x^2)/x+x =y/x +x# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 3 To solve the linear differential equation ( \frac{{dy}}{{dx}} = \frac{{y}}{{x}} + x ), you can use the method of integrating factors. 1. Rearrange the equation into standard linear form: [ \frac{{dy}}{{dx}} - \frac{{y}}{{x}} = x ] 2. Identify the integrating factor, denoted by ( \mu(x) ), which is calculated as: [ \mu(x) = e^{\int{-\frac{1}{x}}dx} = e^{-\ln|x|} = \frac{1}{|x|} ] 3. Multiply both sides of the equation by the integrating factor: [ \frac{{1}}{{|x|}}\frac{{dy}}{{dx}} - \frac{{1}}{{|x|}}\frac{{y}}{{x}} = 1 ] 4. Recognize that the left-hand side is the derivative of the product of the integrating factor and the dependent variable: [ \frac{{d}}{{dx}}\left( \frac{{y}}{{|x|}} \right) = 1 ] 5. Integrate both sides with respect to ( x ): [ \int{1 , dx} = \int{ \frac{{d}}{{dx}}\left( \frac{{y}}{{|x|}} \right) , dx} ] [ x + C_1 = \frac{{y}}{{|x|}} + C_2 ] 6. Solve for ( y ): [ y = x|x| + Cx ] Where ( C ) is the constant of integration. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/what-is-a-solution-to-the-linear-differential-equation-dy-dx-y-x-x-please-8f9afa1ae3","timestamp":"2024-11-09T22:34:25Z","content_type":"text/html","content_length":"595957","record_id":"<urn:uuid:01935c29-3956-44ab-8be8-668f4f224948>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00045.warc.gz"}
Practice Strings and Physics with the exercise "Particle Detection with Cloud Chamber" Learning Opportunities This puzzle can be solved using the following concepts. Practice using these concepts and improve your skills. A cloud chamber allows scientists to visualize the passage of ionizing radiation such as electrons, protons, alphas, etc. Every particles should go straight without forces, but a magnetic field bends trajectories of electric charged particles. Then, trajectories of charged particles are (part of) circle. Our cloud chamber is such that particles can only arrive from top side (other sides are behind thick lead wall), and one at a time. The aim is to detect which particle has just passed through the cloud chamber. The available measurement are — V, the speed of the particle (in speed-of-light unit), — B, the intensity of the magnetic field (in tesla), — ASCII-art picture of the trajectory of the particle through the cloud chamber (characters are one meter wide and one meter high). The radius of circle (in meters) of a charged particle is given by R = 1e6 * gamma(V) * m * V / (abs(q) * B * c) — m is the mass of the particle (in MeV/c²), — q is the charge of the particle (in unit-charge unit), — c = 299792458. is the speed of light (in meters per second), — V is the speed of the particle (in speed-of-light unit), — B is the magnetic field strength (in tesla), — gamma(V) is the Lorentz factor (dimensionless), given by gamma(V) = 1 / sqrt(1 - V*V) The radius should be computed from the input ASCII-art picture, then, formula above gives the ratio g = abs(q) / m : g = abs(q) / m = 1e6 * gamma(V) * V / (B * R * c) Thus, the cloud chamber allows only to measure the ratio g = abs(q) / m. Mass and charges are given in the table below for every known particles (within this puzzle of course). Particle Charge Mass Symbol q m Electron -1 0.511 e- Proton +1 938.0 p+ Neutron 0 940.0 n0 Alpha +2 3727.0 alpha Pion π⁺ +1 140.0 pi+ About uncertainty The code should return the symbol of the particle and the computed radius (if the particle is not neutral). In experimental physics, we can never get exact numeric values (especially if input data is ASCII-art picture!), therefore, the radius should be given rounded to the nearest multiple of 10. Likewise, the ratio g = abs(q)/m could not be computed exactly. Let’s note g_p the theoritical value of particle p (given in the table above) and G the computed value from ASCII-art picture with the formula above. The particle p which just passed through the cloud chamber is the one with the minimal value of abs(g_p - G) / g_p only if this value is stricly below .5, i.e. : abs(g_p - G) / g_p < .5 If G is such that abs(g_p - G) / g_p >= .5 for every known particles (those in the table above), one can conclude that the particle which just passed through the cloud chamber is unknown (as its value of abs(q)/m is too far from every known Line 1: w width of the picture. Line 2: h height of the picture. Line 3: B magnetic field (tesla). Line 4: V speed of the particle (speed-of-light unit). Next h lines: picture : ASCII-art photography of cloud chamber, with space on particle path and # everywhere else. Output depends of the nature of the detected particle : — symbol inf if neutral particle, — symbol radius if charged particle, — I just won the Nobel prize in physics ! if unknown particle. 16 ≤ w, h ≤ 128 #### ########################### ##### ########################## ###### ######################### ####### ######################## ######## ####################### ######### ###################### ########## ##################### ########### #################### ############ ################### ############# ################## ############## ################# ############### ################ ################ ############### ################# ############## ################## ############# ################### ############ A higher resolution is required to access the IDE
{"url":"https://www.codingame.com/training/medium/particle-detection-with-cloud-chamber","timestamp":"2024-11-11T13:43:29Z","content_type":"text/html","content_length":"150360","record_id":"<urn:uuid:9c5fb4cf-e1d7-4305-966a-d53155ccecee>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00721.warc.gz"}
Numerator - Meaning, Properties, Solved Examples, and FAQs About the numerator definition, it is the "top part of a fraction. Here's the simple numerator definition math you're probably looking for: The numerator is the top part of the fraction, while a denominator is the bottom part of a fraction. For example, in the fraction 5/7, the number 5 is the numerator (top) and 7 is the denominator (bottom). Moreover, note that a fraction represents a part of a whole. That being said, a numerator represents the number of parts of that whole being considered, while the denominator exhibits the total number of parts created from the whole. Numerator and Denominator in Division In the fraction 5/7, the whole value (say, a pizza slice) has been divided into 7 equal parts. If someone has 5/7 of the pizza, they have five of those seven equal parts. Numerator and Denominator Definition Let’s make you understand the numerator and denominator meaning. The numerator represents how many divisions are being selected out of the total number of equal parts. On the other hand, the denominator represents the number of equal parts in which the whole thing has to be divided. This would be better explained using an example. 7/9 is a fraction in which the denominator 9 represents that 9 equal divisions have to be made in a circle. 7 parts selected out of 9 equal parts created out from 1 circle can be represented as 7/9. Diagrammatic representation of this circle is as follows: The numerator and denominator diagram clearly show seven equal parts taken out when the whole circle is divided into nine equal parts. [Image will be uploaded soon] Definition Whole Number The complete set of natural numbers in addition to ‘0’ are called whole numbers. That said, the whole numbers are the part of the number system in which it takes into account all the positive integers from zero (0) to infinity. Since these numbers take place in the number line. Thus, they are all known as real numbers. With this, we can also conclude that all the whole numbers are real numbers, but not all the real numbers are whole numbers. The examples include: 0, 11, 25, 36, 999, 1200, etc. The whole numbers are the numbers without fractions and it is an assemblage of positive integers and zero. It is denoted by a symbol “W” and the set of numbers are {0, 1, 2, 3, 4, 5, 6, 7, 8, 9,....}. Zero as a whole denotes nothing or a null value. Properties of Whole Numbers Following are the properties of whole numbers: • Whole numbers are closed under operations of addition and multiplication • The multiplication and addition of whole numbers is associative • The multiplication and addition of whole numbers is commutative • It abides by the distributive property of multiplication over addition • The additive identity of whole numbers is equivalent to 0 • The multiplicative identity of whole numbers is equivalent to 1 Solved Examples on Numerator Now that you are well aware of what a numerator is and what is a numerator and denominator definition, let’s do some practice examples. Question: Is 15/9 a Fraction? Solution: Yes, it is. It is known as an improper fraction. Question: Convert 150.1400 into a Fraction. Solution: Here, we will use the concept of how to convert decimals into fractions = 150.1400/10000 = 15014/100 Fun Facts • The term “numerator” is derived from the Latin word numerātor that indicates counter. • If the numerator is 0, then the whole of the fraction becomes zero, irrespective of what the denominator is! For example, 0 ⁄ 50 is 0; 0 ⁄ 4 is 0, and so on. • If the numerator is the same as the denominator of a fraction, then the value of the fraction becomes 1. For example, if the fraction is 70 ⁄ 70, then its value will be 1. • A major misconception about numerators is that it is always smaller than the denominator. • The numerator is not necessarily smaller than the denominator. For example, 38 / 26 is a fraction, wherein 38 is the numerator, and is greater than the denominator. • Fractions with greater numerator value are referred to as improper fractions and are always greater than 1. FAQs on Numerator 1. What is the Difference Between a Numerator and a Denominator? Answer: In a fraction, the top number is what we call the numerator while the bottom number is what we call the denominator. For example, 9/11 is a fraction. Here, 9 is the numerator whereas 11 is the denominator. In the same manner, the numerator describes the number of parts we have and the bottom number describes the total number of equal parts the object is divided into. Answer: In Mathematics, a fraction represents a numerical value that defines the parts of a whole. The whole can be a number or any particular value or an object. In other words, it is also referred to as a section or portion of any quantity. It is represented by using the ‘/’ symbol, such as a/b. This is to say, if a number has to be divided into five parts, then it is denoted as x/5. Thus, the fraction here, x/5, describes 1/5th of number x. For example, 6/9 is a fraction where the upper part represents the numerator while the lower part is the denominator. A fraction is a term that has long ago originated from Latin. In Latin, “Fractus” indicates “broken”. In real life, when we cut an apple pie from the whole of it, say 2/5th of it, then the portion is the fraction of the pie. 3. How Many Types of Fractions are there? Answer: Depending upon the properties of numerator and denominator, fractions are classified into different types. They are: • Proper fractions • Improper fractions • Like fractions • Unlike fractions • Mixed fractions • Equivalent fractions Remember that a numerator greater than the denominator makes for an improper fraction.
{"url":"https://www.vedantu.com/maths/numerator","timestamp":"2024-11-09T01:32:31Z","content_type":"text/html","content_length":"243733","record_id":"<urn:uuid:d5fa92c1-309d-4b65-a930-4e58f84fe30b>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00725.warc.gz"}
We have created a sample Blend file containing a collection of ready made Sculptie templates all created with Primstar-2. You can use the provided sculptie templates as basis for your own sculptie creations. We have also included a special material which allows to bake the templates with the Blender default Bake tool. And we have added some helper objects which will allow you to reimport sculpted Prims back into blender. For more details see below. After downloading the Primstar-Zero sculptie_lib_1.2.blend. please unzip the file to get access to the sculptie_lib.blend. Currently this library only works with Blender 2.6 and you can open it with Blender for inspection (This readme is also included within the blend file for offline usage). Legal note The blend file is owned by Domino Marama & Gaia Clary. We distribute this file under the Creative commons license CC-BY. We encourage teachers to add the needed templates for their courses to this blend file and distribute the modified blend file as additional course material. This will allow your pupils to follow your course without need to purchase Primstar-2. Version Changes □ Now objects will always be created at location □ Improved the Sculptie Creator Objects to have more accuracy □ Removed Bricks and Steps from the Library, because they can not be created with a resonable quality. If you need Bricks or steps, then we recommend you switch to Primstar-2. □ Improved Documentation □ Added support for more accurate vertex baking □ Added support for Sculpt-Map Import Short Instructions 1. Copy this blend file to a convenient place (for example to your Home folder or to your Desktop) and if possible make it read only, so that you never make unintentional changes to the templates! 2. Critical: Blender 2.5/2.6 uses an advanced color management by default. While this is a good thing for texturing, it will break the Sculpt-Map bake. Hence you must disable color management for Sculpt-Map bakes. You do this as follows: □ go to the render properties menu □ locate the shading tab □ uncheck “Color Management” Create a Sculptie: Create a new blend file or use your already existing project file and then proceed as follows: • Navigate to File -> Append • Then navigate to your sculptie_lib.blend and further down into the Object section. there you find a bunch of templates. We have used a name convention “Category-U-V-S-T” where: Category [plane|Cylinder|Sphere|Ring|Torus|Star|Cone|Hemisphere|Cylisphere|Steps] U Number of faces along U V Number of faces along V S Number of subdivisions used (if 0, then no Subsurf Modifier used) “t”: optimized for uniform (homogeneous) texture. T “v”: optimized for best vertex accuracy. “i” : Import template for making a sculptie object from a Sculpt-Map You get the default sculpties with 32*32 faces when you append one of the-8-8-2 templates. For example if you want a default sculptie cylinder, then append: Cylinder-8-8-2 or Cylinder-8-8-2 Some templates use the Subsurf modifier. You can remove the modifier or change the modifier settings at any time as you like. Note:The templates with no modifiers (names ending with “-v” or “-t”) have got a slightly modified UV-Map. We did this by hand in order to provide better results with the bake material. The templates marked with a trailing “-t” are optimized for homogeneous texturing, while the templates ending with a trailing “-v” are optimized for highest vertex accuracy. In any case the finally baked sculpties are not optimal regarding their seams. However they should work almost correct when imported to OpenSim (or similar environments). Most users won’t even see the difference to an exactly constructed sculptie. • Now shape the template to your taste. You may use subsurf and increase or decrease the levels of subdivision. But as mentioned above do not add/remove vertices or edges by hand! This will definitely break your sculptie! Bake a Sculptie: Critical: The baking only works as long as the object is attached to the included sculptie-baker material. • Create your final Sculpt-Map as follows: □ Go to the Render properties section. □ Ensure that color management is turned off (see preliminaries section above). □ Render -> Bake -> Full Render □ Now your ready baked Sculpt-Map should appear in the U/V Image editor. Import a Sculpt-Map to Blender To import a sculptie here’s what you need to do: 1. Add the appropriate base sculptie import template from the sculptie_lib (an Object ending with “-i”). 2. In the UV Image editor open your Sculpt-Map from File and possibly give it a convenient name. 3. Go to textures and select the Sculptie-Constructor texture. 4. Attach your just imported sculpt map to it. 5. Apply the displacement modifier 6. In the UV Layout move the top row up half a pixel (g y .5) and move the right column to the right by half a pixel (g x .5) 7. Edit your imported sculptie 8. Bake Additional Hints: • The sculptie-baker material creates correct sculpt maps only for torus and ring objects. All other objects also work in OpenSim or your preferred online world as long as you select the correct stitching type. But they may show irregularities along the seams. This may have influence on texturing. We tried to minimize the irregularities by manual tweaking the UV-maps on those templates which do not contain a Subsurf Modifier. You will have to experiment which works best for you. • Take care that the final number of vertices in your Sculpt-Map is ONLY dependent of the size of your Sculpt-Map! The general rule is:Vertex count = image-Width * image-Height / 4 (width/height measured in pixels) • The smallest allowed sculptmap size is 8*8 pixels, resulting in 4*4=16 vertices. Any Sculpt-Map which would result in more than 1024 Quads (2048 tris) will automatically be scaled down by the image importer. You do not want this to happen, because you will loose quality. • Please always use “File -> Append” and avoid “File -> Link” (Link will not create a new Object but use the Object from within the sculptie_lib. And therefore linked objects are always read only in your project unless you make them “local” and this is exactly what File -> Append does automatically. • Do never(!) try to use the sculptie-baker tool with arbitrary objects. It will most probably create wrong results. • If your bake result contains black dots, then proceed as follows: □ go to edit mode and □ locate duplicate vertices (usually the poles). □ select the duplicate vertices. □ apply “smooth vertex” once. □ Now your bake will no longer show black dots. • How to construct a UV map for best vertex accuracy: 1. Select all but the right most column on the UV Layout. 2. Move the selected points half a pixel to the right (g x 0.5) 3. Select all but the top most row of the UV Layout 4. Move the selected points half a pixel up (g y 0.5) • How to construct a UV map for best texture mapping: 1. Select all but the right most and left most columns on the UV Layout. 2. Move the selected points half a pixel to the right (g x 1) 3. Select all but the top most and bottom most rows of the UV Layout 4. Move the selected points half a pixel up (g y 1) • How to construct a sculptie-object from a Sculpt-Map: 1. Add the appropriate base sculptie. 2. Apply subsurf 3. In edit mode select all points and scale to 0. 4. On UV layout move top row down by one pixel (g y -1) 5. On UV layout move right row in by one pixel (g x -1) 6. Move the entire UV map to the right and upwards by half a pixels ( g .5 tab .5 ) 7. Add a displacement modifier and click to add a new texture 8. Set displace Direction to ‘RGB to XYZ’ and texture co-ords to ‘UV’ 9. Go to textures and select the new texture 10. Change the texture type to image and load the sculpt map into it 11. Apply the displacement modifier 12. In the UV Layout move the top row up half a pixel (g y .5) 13. In the UV Layout move the right column to the right by half a pixel (g x .5) 14. Edit your imported sculptie 15. Bake (ensure color management is turned off) Have fun Gaia & Domino
{"url":"https://www.avalab.org/primstar/zero/","timestamp":"2024-11-09T22:03:16Z","content_type":"text/html","content_length":"85836","record_id":"<urn:uuid:ee0762d9-e482-4a08-a859-3f8ac0e1a6ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00473.warc.gz"}
CCI Math Collingwood Collegiate Institute This is the Mathematics Department website. Use the navigation bar above to find all kinds of information and links that will support and enhance student learning at CCI. Math Contests For those interested in receiving notifications about contest and other enrichment activities, sign up for the Remind notifications under the Enrichment tab. Contests results from this year and previous years can be seen on the Contests page.
{"url":"http://www.ccimath.ca/index.html","timestamp":"2024-11-10T01:37:04Z","content_type":"text/html","content_length":"43658","record_id":"<urn:uuid:2b3ad3b6-fc79-4184-a53d-0f114e240ff4>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00691.warc.gz"}