# Computing the shifted symmetric polynomial $s^*_{(1^2)}(x_1,x_2)$ By definition, let $$\lambda=(\lambda_1,...,\lambda_n)$$ be a partition with $$l(\lambda)\leq n$$. We define the shifted Schur polynomial in $$n$$ variables corresponding to $$\lambda$$ as $$\begin{equation*} s_{\lambda}^*(x_1,...,x_n)=\frac{\det(x_i +n-i \downharpoonright \lambda_j +n-j)}{\det(x_i +n-i \downharpoonright n-j)} \end{equation*}$$ where $$1 \leq i,j\leq n$$ and $$x \downharpoonright k$$ denotes the falling factorial $$x(x-1)\cdots(x-k+1)$$. Now let $$\lambda=(1^2) \vdash 2$$. Then, for $$1 \leq i,j \leq 2$$, the numerator is $$\begin{equation*} \det(x_i +n-i \downharpoonright \lambda_j +n-j) = \begin{vmatrix} (x_1+1)x_1 & (x_1+1) \\ x_2(x_2-1) & x_2 \end{vmatrix} = -x_1x_2^2-x_2^2+x_1^2x_2+2x_1x_2+x_2 \end{equation*}$$ and the denominator is $$\begin{equation*} \det(x_i +n-i \downharpoonright n-j) =\begin{vmatrix} (x_1+1) & 1 \\ x_2 & 1 \end{vmatrix} =x_1-x_2+1 \end{equation*}$$ Hence, $$\begin{equation*} s^*_{(1^2)}(x_1,x_2)= \frac{-x_1x_2^2-x_2^2+x_1^2x_2+2x_1x_2+x_2}{x_1-x_2+1} = x_1x_2+x_2. \end{equation*}$$ QUESTION Is this a shifted symmetric polynomial? If I substitute $$x_1-1+c$$ and $$x_2-2+c$$ for $$x_1,x_2$$, I get $$\begin{equation*} s^*_{(1^2)}(x_1-1+c,\,x_2-2+c)=x_1x_2-2x_1+cx_1+cx_2-2c+c^2 \end{equation*}$$ which is not symmetric in $$x_1,x_2$$. How do you check this kind of thing?
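One way to check: by the usual convention, a polynomial is shifted symmetric if it becomes symmetric after the change of variables $$y_i = x_i - i$$. Here $$s^*_{(1^2)}(y_1+1,\,y_2+2) = (y_1+2)(y_2+2)$$, which is visibly invariant under swapping $$y_1 \leftrightarrow y_2$$, so the polynomial is indeed shifted symmetric; substituting $$x_i-i+c$$ and expecting symmetry in the $$x_i$$ themselves is not the right test. A minimal R sketch of this check at random points (the function names are mine):

s_star <- function(x1, x2) x1 * x2 + x2       # s*_{(1,1)} as computed above

f <- function(y1, y2) s_star(y1 + 1, y2 + 2)  # rewrite in y_i = x_i - i

set.seed(1)
y1 <- runif(5); y2 <- runif(5)
all.equal(f(y1, y2), f(y2, y1))               # TRUE, so s* is shifted symmetric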
# Mean Green Math Blog: A Tour

The Mean Green Math Blog: Explaining the whys of mathematics is a blog by Dr. John Quintanilla, a professor of mathematics at the University of North Texas (UNT). It has been around since 2013, and its name, 'Mean Green', is an ode to one of the symbols of UNT. This blog is for future mathematics teachers, alumni, colleagues, friends and family, along with teachers who mentor other teachers. As he describes on the blog, its purpose is to dive into the why behind the math. "This blog does not aim to answer common student questions like "How to factor this polynomial?" or "How do I solve for $x$ in this equation?" (There are plenty of excellent websites out there, some listed on my page, that give good step-by-step instructions of such problems.) Instead, this blog aims to address the whys of mathematics, providing readers with deeper content knowledge of mathematics that probably goes well beyond the expectations of most textbooks. As well as an audience of current and future secondary teachers, I also hope that this blog might be of some help to parents who might need a refresher when helping their children with their math homework. I also hope that this blog will be interesting to students who are interested in learning more about their subject." In this post, I will share some of the posts that caught my attention, in particular those aimed at engaging students.

Engaging Students Series

As part of a capstone course for secondary mathematics teachers, he asked his students to come up with ideas on how to engage their students with mathematics topics. What appealed to me the most about this assignment was the structure provided to the students. Instead of lesson plans, students had to come up with three different ways to catch their students' interests. As you'll see in the examples, the type of engagement activity varies for each topic. With the permission of the students, we get to see their work and draw inspiration from their ideas! Below are some of my favorites.

Engaging students: Deriving the Pythagorean theorem

Former student Haley Higginbotham shares how, as a teacher, she would create an activity to involve her students. She presents a visual proof of the Pythagorean theorem using a hands-on activity. What I found super interesting was her answer to the question: how has this appeared in high culture? "The Pythagoras tree is a fractal constructed using squares that are arranged to form right triangles. Fractals are very popular for use in art since the repetitive pattern is very aesthetically pleasing and fairly easy to replicate, especially using technology." (See the figure below: Pythagorean tree created by Guillaume Jacquenot, picture obtained from Wikimedia Commons.) She concludes by discussing how to incorporate technology in the activity and shares how she would use an activity that allows students to drag the different sides to see that the Pythagorean relationship holds no matter how the sides of the triangle change.

Engaging students: Solving linear systems of equations with matrices

The next idea comes from former student Andrew Sansom. In this case, he explores an interesting word problem that students can do to practice solving linear systems with matrices. He discusses and walks the reader through the solution to the following problem (see his map of Denton showing the set-up for the system of equations): "The Square in Downtown Denton is a popular place to visit and hang out.
A new business owner needs to decide on which road he should put an advertisement so that the most people will see it as they drive by. He does not have enough resources to survey traffic on every block and street, but he knows that he can use algebra to solve for the ones he missed. In the above map, he put a blue box that contains the number of people that walked on each street during one hour. Use a system of linear equations to determine how much traffic is on every street/block on this map." Based on the diagram above, you can build an equation for each intersection by setting the sum of people walking in equal to the sum walking out, rewrite the system in standard form, represent it as an augmented matrix, reduce the matrix to echelon form, and voilà! You find that the best place to advertise is on Hickory Street between Elm and Locust Street. He also shares his thoughts on the contributions of various cultures to this topic, along with some of the history of solving systems of linear equations. Below is an excerpt: "Simultaneous linear equations were featured in Ancient China in a text called Jiuzhang Suanshu or Nine Chapters of the Mathematical Art to solve problems involving weights and quantities of grains. The method prescribed, which involves listing the coefficients of terms in an array, is exceptionally similar to Gaussian Elimination. Later, in early modern Europe, the methods of elimination were known, but not taught in textbooks until Newton published such an English text in 1720, though he did not use matrices in that text. Gauss provided an even more systematic approach to solving simultaneous linear equations involving least squares by 1794, which was used in 1801 to find Ceres when it was sighted and then lost."

Predicate Logic and Popular Culture Series

Similar to the goal of the last series of posts, the Predicate Logic and Popular Culture series has a great number of examples (with different sources and complexity) to make predicate and propositional logic more appealing to students. As part of his Discrete Mathematics class, he presented students either with a logical statement (which they had to translate to actual English) or gave them a famous quote to translate into predicate logic. This was so fun that I ended up scrolling for a while just to find my favorites. Below are some that caught my eye.

• Predicate Logic and Popular Culture (Part 189): Mana — I was captivated by the idea of using song lyrics to practice! Especially since this example is a song from Mana, a Mexican band I listened to growing up. "Let $W(t)$ be the proposition "At time $t$, you want me as I am," and let $R(t)$ be the proposition "At time $t$, you reject me for what I was." Translate the logical statement: $$\forall t <0, (\neg W(t) \wedge R(t)).$$" This matches a line from the Spanish-language song "Tengo Muchas Alas / I Have Many Wings."

• If you are a fan of Star Wars you might remember this quote from Yoda in "Star Wars Episode I: The Phantom Menace." "Let $L(x,y)$ be the proposition "$x$ leads to $y$." Translate the logical statement: $$L(fear, anger) \wedge L(anger, hate) \wedge L(hate, suffering).$$" Can you guess which line the statement above refers to? Check out the post for a video clip with the answer.

• Predicate Logic and Popular Culture (Part 182): Moana — In the same spirit, you might recognize the following line from the movie Moana.
"Let $P$ be the set of all people, let $L(x)$ be the proposition "$x$ is on this island," and let $K(x)$ be the proposition "I know $x$." Translate the logical statement: $$\forall x \in P\,(L(x) \Rightarrow K(x)).$$" Can you guess which line the statement above refers to? Check out the post for a video clip with the answer.

Have an idea for a topic or a blog you would like for me and Rachel to cover in upcoming posts? Reach out in the comments below or on Twitter.
Relationship between I2C drawn energy / power consumption and data rate

Referring to just what the I2C lines draw: am I wrong in thinking that the higher the clock frequency, the shorter the time the (same amount of) current flows through the pull-ups, and thus the lower the power consumed?

Side question: I don't think I am going to reach 100 kHz; that's way over the limit of my hardware. I am alternating between about 32 and 4 kHz. Will the same resistor value (3.3k @ 3V) be good for both?

A higher clock frequency usually requires a lower pull-up value, thus increasing the current. Increasing the clock frequency from 100 kHz to 400 kHz usually requires the pull-up to be reduced by a factor of 4-5. Since the power is inversely proportional to the resistance, the power consumed will be almost the same.

• How about my side question? – kellogs Nov 9 '18 at 14:46
• @kellogs 32 kHz is really slow for I2C. Depending on the capacitance of the bus you can probably use 10k or more. Use a scope to see the rising edge of SCL and SDA to determine the value of the pull-up. SDA must be able to rise from low to high in the low period of SCL. – Peter Karlsen Nov 9 '18 at 17:06

The I2C data and clock lines draw power when they get pulled low, because then current is sunk through the pull-up resistors. While a line is pulled low it will draw $5\ \text{V} / 4.7\ \text{k}\Omega \approx 1\ \text{mA}$, assuming 5 V VCC and 4.7 kΩ pull-up resistors. The clock line will have a 50% duty cycle. The data line is low at least 1 out of every 9 clock cycles (the ACK for each successful byte), but you are rarely going to send/receive only 0xff bytes; it's more likely going to be pulled low 75% of the time. So indeed, a faster clock means a shorter transmission, which means less power lost through the pull-ups. However, faster transmission may require lower-value resistors to overcome the parasitic capacitance between the lines and ground.

• It also takes energy to charge and discharge the parasitic capacitance. I think another factor that affects power consumption is the active-time percentage of the bus. – Long Pham Nov 9 '18 at 14:34
• How about my side question? – kellogs Nov 9 '18 at 14:46
• I'm curious about the 75%-of-the-time value. I would have said about 50% (or better, 56%, including the ACK), since all the values are equally probable. 75% means that the average number of zeros per byte is between 5 and 6; was yours a pessimistic estimation or is 75% the actual statistical value? – frarugi87 Nov 9 '18 at 16:40
• @frarugi87 pessimistic estimation. Based on the habit of using low numbers for addresses of various control registers and the values they take. So I guesstimated that the top 3 bits are very often just 0. – ratchet freak Nov 9 '18 at 17:02
• @ratchetfreak It makes sense ;) Thank you for your explanation – frarugi87 Nov 10 '18 at 13:21

Your thinking is correct, as long as you can achieve a higher speed with the same pull-up resistors.

• How about my side question? – kellogs Nov 9 '18 at 14:46
• @kellogs cannot answer that as I have no idea what your lines' capacitance is. So I can only tell you that we are running 100 kHz with 100 kOhm resistors in one of our products with no problems. I would guess that you are fine. – Arsenal Nov 9 '18 at 14:55
• any way to guesstimate it? – kellogs Nov 9 '18 at 15:02
• @kellogs well, 10 pF for any pin connected to the bus, plus 50 pF per meter for the length of the line, would be a conservative guess I think. If your I²C bus is on a single PCB I have a hard time imagining why it wouldn't work with 3k3 pull-up resistors.
– Arsenal Nov 9 '18 at 15:07

As @ratchet-freak stated, the bus could be pulled down 75% of the time, so if you increase the clock rate, the bus's consumption will decrease as long as you keep the same pull-up resistor values. At higher speeds, though, resistor values should be reduced. Even then the bus's own consumption per transfer will be lower, but master and slave devices could increase their consumption depending on the clock rate. Regarding your side question: whether 3.3 kΩ suits both 4 kHz and 32 kHz depends on the capacitance of your bus. This capacitance depends on the length of the bus, the distance between the lines and the number of devices attached to it. It can be difficult to calculate the real capacitance, but you can check the waveform of your data on the bus at both frequencies and see if there is any distortion of the signal at 32 kHz using 3.3 kΩ.

• How about my side question? – kellogs Nov 9 '18 at 14:46

At a high data rate, power consumption will be higher: the MCU needs to work faster, and low-value pull-ups will be required to get proper rising/falling edges on the pulses.
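To put rough numbers on the answers above, here is a minimal R sketch of the pull-up power budget. It assumes the question's 3.3 V supply and 3.3 kΩ pull-ups, plus the 50%/75% low-time estimates from the answer above; the 9-clocks-per-byte figure counts the ACK bit.

V <- 3.3; R <- 3.3e3
I_low <- V / R                        # ~1 mA sunk while a line is held low
duty <- c(scl = 0.50, sda = 0.75)     # rough fraction of time each line is low
P_bus <- V * I_low * sum(duty)        # average pull-up power while transferring
for (f in c(4e3, 32e3)) {             # energy per byte (9 clocks incl. ACK)
  cat(sprintf("%2.0f kHz: %.1f uJ per byte\n", f / 1e3, P_bus * 9 / f * 1e6))
}

With the same resistors, the power while the bus is active is identical at both clock rates, but each byte takes an eighth as long at 32 kHz, so the energy per transferred byte drops by the same factor.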
# Definition: monicctsqLaguerre

The LaTeX DLMF and DRMF macro \monicctsqLaguerre represents the monic continuous $q$-Laguerre polynomial. This macro is in the category of polynomials. In math mode, it can be called in the following ways:

\monicctsqLaguerre{n} produces the symbol for the monic continuous $q$-Laguerre polynomial of degree $n$
\monicctsqLaguerre{n}@{x}{q} produces that symbol with its arguments $x$ and $q$ displayed
\monicctsqLaguerre{n}@@{x}{q} produces the fully expanded form of the notation
# Convert Sweave document to a function

I like Sweave, which I consider to be a great contribution. I have just written a .Rnw document that comes to about 6 pages of mixed code and mathematical explanation. Now I want to turn the R code into a function. My R code currently contains statements like N <- 1000 and theta <- pi/10. In the next version of the document, I want N and theta to be parameters of a function, so that they can be easily varied. My explanation of the code is still valid, and it seems to me that, if I only knew how to manage the trick, I would need to change almost nothing in the LaTeX.

The document contains about 6 different code chunks, and 7 different chunks of LaTeX.

I tried putting

functionname <- function(N, theta) {

into the first code chunk and

}

into the last code chunk, but Sweave said this was poor grammar and rejected it.

Is there a reasonable way to make my .Rnw source into a function definition? I would like maintainability of the code to be a criterion for "reasonable", and I would like to keep LaTeX explanations of what the code is doing adjacent to the code being explained.

One other point is that I will want to export some of the variables computed in the function to outside the function, so that they are not variables local to the function body. I mention this only because it may affect the solution, if any, to my problem.

Thanks for any help
David

On 3/20/2011 12:19 PM, David.Epstein wrote:
> [original message quoted in full]

The problem you ran into is that an R function can only contain R code (and that each Sweave chunk must be parseable on its own). The best solution I know of (though it may not be a good one) is to put all the TeX code inside of a cat(), drop all the noweb notation (which may mean doing yourself what Sweave does itself in, for example, fig=TRUE or echo=TRUE chunks), and then wrap that in a function call.
For example, a Sweave snippet (pulled from Sweave-test-1.Rnw):

Now we look at Gaussian data:

<<>>=
library(stats)
x <- rnorm(20)
print(x)
print(t1 <- t.test(x))
@

Note that we can easily integrate some numbers into standard text: The third element of vector \texttt{x} is \Sexpr{x[3]}, the $p$-value of the test is \Sexpr{format.pval(t1$p.value)}. %$

Now we look at a summary of the famous iris data set, and we want to see the commands in the code chunks:

would turn into (untested):

cat("
Now we look at Gaussian data:
")
cat("
\\begin{Schunk}
\\begin{Sinput}
library(stats)
x <- rnorm(20)
print(x)
print(t1 <- t.test(x))
\\end{Sinput}
\\begin{Soutput}
")
library(stats)
x <- rnorm(20)
print(x)
print(t1 <- t.test(x))
cat("
\\end{Soutput}
\\end{Schunk}
Note that we can easily integrate some numbers into standard text:
The third element of vector \\texttt{x} is ", x[3], ", the $p$-value of the test is ", format.pval(t1$p.value), ".", sep="")
cat("
Now we look at a summary of the famous iris data set, and we want to see the commands in the code chunks:
")

This has several drawbacks. First, having to put all the TeX inside of a cat is ugly (and you lose any editor support for it actually being TeX). Second, you have to manually do all the Sweave part yourself, including duplicating the input and output (if both are wanted), and creating and including figures, meaning it is easy for things to get out of sync.

A different approach which might work better is the brew package. It is not Sweave, but can be used to create a file which can then be passed to Sweave (I think); I've not used it, but from what I've seen others say about it, it may be an approach to this sort of meta-templating in multiple languages (TeX and R).

--
Brian S. Diggs, PhD
Senior Research Associate, Department of Surgery
Oregon Health & Science University
A constant voltage is applied between the two ends of a metallic wire. If both the length and the radius of the wire are doubled, the rate of heat developed in the wire [MP PMT 1996]

A) Will be doubled
B) Will be halved
C) Will remain the same
D) Will be quadrupled

Solution: At constant voltage, $H = \frac{V^2}{R} \propto \frac{1}{R}$, and for a wire $R = \frac{\rho l}{\pi r^2}$. Hence

$$\frac{H_1}{H_2}=\frac{R_2}{R_1}=\frac{l_2 A_1}{l_1 A_2}=\frac{l_2 r_1^2}{l_1 r_2^2}=\frac{2l_1\, r_1^2}{l_1\,(2r_1)^2}=\frac{1}{2} \quad\Rightarrow\quad H_2 = 2H_1.$$

The answer is A.
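As a quick numeric sanity check of the ratio above, here is a short R sketch with the original length and radius normalized to 1:

# H is proportional to V^2/R at constant voltage, and R to l/r^2 for a wire
l1 <- 1; r1 <- 1
l2 <- 2 * l1; r2 <- 2 * r1              # both length and radius doubled
R_ratio <- (l2 / r2^2) / (l1 / r1^2)    # R2/R1 = 0.5
1 / R_ratio                             # H2/H1 = 2: the heat rate doubles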
# Transforming Your Data with dplyr

Although many fundamental data manipulation functions exist in R, they have historically been a bit convoluted and have lacked consistent coding and the ability to easily flow together. This leads to difficult-to-read nested functions and/or choppy code. RStudio is driving a lot of new packages to collate data management tasks and better integrate them with other analysis activities. As a result, a lot of data processing tasks are becoming packaged in more cohesive and consistent ways, which leads to:

• More efficient code
• Easier to remember syntax

dplyr is one such package, built for the sole purpose of simplifying the process of manipulating, sorting, summarizing, and joining data frames. This tutorial serves to introduce you to the fundamental data transformation functions offered by the dplyr package: select(), filter(), group_by(), summarise(), arrange(), the join functions, and mutate().

## Packages Utilized

install.packages("dplyr")
library(dplyr)
# tidyr is loaded as well; its gather() and separate() functions are used later on
install.packages("tidyr")
library(tidyr)

For the examples that follow, we'll use the following census data, which includes the K-12 public school expenditures by state. This data frame currently is 50x16 and includes expenditure data for 14 unique years.

## Division State X1980 X1990 X2000 X2001 X2002 X2003 X2004 X2005 X2006 X2007 X2008 X2009 X2010 X2011
## 1 6 Alabama 1146713 2275233 4176082 4354794 4444390 4657643 4812479 5164406 5699076 6245031 6832439 6683843 6670517 6592925
## 2 9 Alaska 377947 828051 1183499 1229036 1284854 1326226 1354846 1442269 1529645 1634316 1918375 2007319 2084019 2201270
## 3 8 Arizona 949753 2258660 4288739 4846105 5395814 5892227 6071785 6579957 7130341 7815720 8403221 8726755 8482552 8340211
## 4 7 Arkansas 666949 1404545 2380331 2505179 2822877 2923401 3109644 3546999 3808011 3997701 4156368 4240839 4459910 4578136
## 5 9 California 9172158 21485782 38129479 42908787 46265544 47983402 49215866 50918654 53436103 57352599 61570555 60080929 58248662 57526835
## 6 8 Colorado 1243049 2451833 4401010 4758173 5151003 5551506 5666191 5994440 6368289 6579053 7338766 7187267 7429302 7409462

## %>% Operator

Although not required, the tidyr and dplyr packages make use of the pipe operator %>%, developed by Stefan Milton Bache in the R package magrittr. Although all the functions in tidyr and dplyr can be used without the pipe operator, one of the great conveniences these packages provide is the ability to string multiple functions together by incorporating %>%. This operator forwards a value, or the result of an expression, into the next function call/expression. For instance, a function to filter data can be written as:

filter(data, variable == numeric_value)

or

data %>% filter(variable == numeric_value)

Both complete the same task, and the benefit of using %>% is not evident here; however, when you desire to perform multiple functions its advantage becomes obvious. For more info check out the %>% tutorial.

## select( ) function:

Objective: Reduce dataframe size to only desired variables for the current task

Description: When working with a sizable dataframe, often we desire to only assess specific variables. The select() function allows you to select and/or rename variables.

Function: select(data, ...)
Same as: data %>% select(...)
Arguments:
data: data frame
...: call variables by name or by function

Special functions:
starts_with(x, ignore.case = TRUE): name starts with x
ends_with(x, ignore.case = TRUE): name ends in x
contains(x, ignore.case = TRUE): selects all variables whose name contains x
matches(x, ignore.case = TRUE): selects all variables whose name matches the regular expression x

Example

Let's say our goal is to only assess the 5 most recent years' worth of expenditure data. Applying the select() function, we can select only the variables of concern.

sub.exp <- expenditures %>% select(Division, State, X2007:X2011)
head(sub.exp) # for brevity only display first 6 rows

## Division State X2007 X2008 X2009 X2010 X2011
## 1 6 Alabama 6245031 6832439 6683843 6670517 6592925
## 2 9 Alaska 1634316 1918375 2007319 2084019 2201270
## 3 8 Arizona 7815720 8403221 8726755 8482552 8340211
## 4 7 Arkansas 3997701 4156368 4240839 4459910 4578136
## 5 9 California 57352599 61570555 60080929 58248662 57526835
## 6 8 Colorado 6579053 7338766 7187267 7429302 7409462

We can also apply some of the special functions within select(). For instance, we can select all variables that start with 'X':

head(expenditures %>% select(starts_with("X")))

## X1980 X1990 X2000 X2001 X2002 X2003 X2004 X2005 X2006 X2007 X2008 X2009 X2010 X2011
## 1 1146713 2275233 4176082 4354794 4444390 4657643 4812479 5164406 5699076 6245031 6832439 6683843 6670517 6592925
## 2 377947 828051 1183499 1229036 1284854 1326226 1354846 1442269 1529645 1634316 1918375 2007319 2084019 2201270
## 3 949753 2258660 4288739 4846105 5395814 5892227 6071785 6579957 7130341 7815720 8403221 8726755 8482552 8340211
## 4 666949 1404545 2380331 2505179 2822877 2923401 3109644 3546999 3808011 3997701 4156368 4240839 4459910 4578136
## 5 9172158 21485782 38129479 42908787 46265544 47983402 49215866 50918654 53436103 57352599 61570555 60080929 58248662 57526835
## 6 1243049 2451833 4401010 4758173 5151003 5551506 5666191 5994440 6368289 6579053 7338766 7187267 7429302 7409462

You can also de-select variables by using "-" prior to the name or function. The following produces the inverse of the functions above:

expenditures %>% select(-X1980:-X2006)
expenditures %>% select(-starts_with("X"))

## filter( ) function:

Objective: Reduce rows/observations with matching conditions

Description: Filtering data is a common task to identify/select observations in which a particular variable matches a specific value/condition. The filter() function provides this capability.

Function: filter(data, ...)
Same as: data %>% filter(...)

Arguments:
data: data frame
...: conditions to be met

Examples

Continuing with our sub.exp dataframe, which includes only the recent 5 years' worth of expenditures, we can filter by Division:

sub.exp %>% filter(Division == 3)

## Division State X2007 X2008 X2009 X2010 X2011
## 1 3 Illinois 20326591 21874484 23495271 24695773 24554467
## 2 3 Indiana 9497077 9281709 9680895 9921243 9687949
## 3 3 Michigan 17013259 17053521 17217584 17227515 16786444
## 4 3 Ohio 18251361 18892374 19387318 19801670 19988921
## 5 3 Wisconsin 9029660 9366134 9696228 9966244 10333016

We can apply multiple logic rules in the filter() function, such as:

• < less than
• > greater than
• == equal to
• <= less than or equal to
• >= greater than or equal to
• != not equal to
• %in% group membership
• is.na is NA
• !is.na is not NA
• &, |, ! Boolean operators

For instance, we can filter for Division 3 and expenditures in 2011 that were greater than $10B.
This results in Indiana, which is in Division 3, being excluded since its expenditures were <$10B (FYI - the raw census data are reported in units of $1,000).

# Raw census data are in units of $1,000
sub.exp %>% filter(Division == 3, X2011 > 10000000)

## Division State X2007 X2008 X2009 X2010 X2011
## 1 3 Illinois 20326591 21874484 23495271 24695773 24554467
## 2 3 Michigan 17013259 17053521 17217584 17227515 16786444
## 3 3 Ohio 18251361 18892374 19387318 19801670 19988921
## 4 3 Wisconsin 9029660 9366134 9696228 9966244 10333016

## group_by( ) function:

Objective: Group data by categorical variables

Description: Often, observations are nested within groups or categories, and our goal is to perform statistical analysis both at the observation level and at the group level. The group_by() function allows us to create these categorical groupings.

Function: group_by(data, ...)
Same as: data %>% group_by(...)

Arguments:
data: data frame
...: variables to group_by

*Use ungroup(x) to remove groups

Example

The group_by() function is a silent function: no observable manipulation of the data is performed as a result of applying it. Rather, the only change you'll notice is that, when you print the dataframe, an indicator of the grouping variable appears underneath the Source information and prior to the actual data. The real magic of the group_by() function comes when we perform summary statistics, which we will cover shortly.

group.exp <- sub.exp %>% group_by(Division)
head(group.exp)

## Source: local data frame [6 x 7]
## Groups: Division
##
## Division State X2007 X2008 X2009 X2010 X2011
## 1 6 Alabama 6245031 6832439 6683843 6670517 6592925
## 2 9 Alaska 1634316 1918375 2007319 2084019 2201270
## 3 8 Arizona 7815720 8403221 8726755 8482552 8340211
## 4 7 Arkansas 3997701 4156368 4240839 4459910 4578136
## 5 9 California 57352599 61570555 60080929 58248662 57526835
## 6 8 Colorado 6579053 7338766 7187267 7429302 7409462

## summarise( ) function:

Objective: Perform summary statistics on variables

Description: Obviously the goal of all this data wrangling is to be able to perform statistical analysis on our data. The summarise() function allows us to perform the majority of the initial summary statistics when performing exploratory data analysis.

Function: summarise(data, ...)
Same as: data %>% summarise(...)

Arguments:
data: data frame
...: Name-value pairs of summary functions like min(), mean(), max() etc.

*Developer is from New Zealand...can use "summarise(x)" or "summarize(x)"

Examples

Let's get the mean expenditure value across all states in 2011:

sub.exp %>% summarise(Mean_2011 = mean(X2011))

## Mean_2011
## 1 10513678

Not too bad; let's get some more summary stats:

sub.exp %>% summarise(Min = min(X2011, na.rm=TRUE), Median = median(X2011, na.rm=TRUE), Mean = mean(X2011, na.rm=TRUE), Var = var(X2011, na.rm=TRUE), SD = sd(X2011, na.rm=TRUE), Max = max(X2011, na.rm=TRUE), N = n())

## Min Median Mean Var SD Max N
## 1 1049772 6527404 10513678 1.48619e+14 12190938 57526835 50

This information is useful, but being able to compare summary statistics at multiple levels is when you really start to gather some insights. This is where the group_by() function comes in. First, let's group by Division and see how the different regions compared in 2010 and 2011.
sub.exp %>%
  group_by(Division) %>%
  summarise(Mean_2010 = mean(X2010, na.rm=TRUE), Mean_2011 = mean(X2011, na.rm=TRUE))

## Source: local data frame [9 x 3]
##
## Division Mean_2010 Mean_2011
## 1 1 5121003 5222277
## 2 2 32415457 32877923
## 3 3 16322489 16270159
## 4 4 4672332 4672687
## 5 5 10975194 11023526
## 6 6 6161967 6267490
## 7 7 14916843 15000139
## 8 8 3894003 3882159
## 9 9 15540681 15468173

Now we're starting to see some differences pop out. How about we compare states within a Division? We can apply multiple functions we've learned so far to get the 5-year average for each state within Division 3.

sub.exp %>%
  gather(Year, Expenditure, X2007:X2011) %>% # this turns our wide data to a long format
  filter(Division == 3) %>%                  # we only want to compare states within Division 3
  group_by(State) %>%                        # we want to summarize data at the state level
  summarise(Mean = mean(Expenditure), SD = sd(Expenditure))

## Source: local data frame [5 x 3]
##
## State Mean SD
## 1 Illinois 22989317 1867527.7
## 2 Indiana 9613775 238971.6
## 3 Michigan 17059665 180245.0
## 4 Ohio 19264329 705930.2
## 5 Wisconsin 9678256 507461.2

## arrange( ) function:

Objective: Order variable values

Description: Often, we desire to view observations in rank order for a particular variable(s). The arrange() function allows us to order data by variables in ascending or descending order.

Function: arrange(data, ...)
Same as: data %>% arrange(...)

Arguments:
data: data frame
...: Variable(s) to order

*use desc(x) to sort variable in descending order

Examples

For instance, in the summarise example we compared the mean expenditures for each division. We could apply the arrange() function at the end to order the divisions from lowest to highest expenditure for 2011. This makes it easier to see the significant differences between Divisions 8, 4, 1 & 6 as compared to Divisions 5, 7, 9, 3 & 2.

sub.exp %>%
  group_by(Division) %>%
  summarise(Mean_2010 = mean(X2010, na.rm=TRUE), Mean_2011 = mean(X2011, na.rm=TRUE)) %>%
  arrange(Mean_2011)

## Source: local data frame [9 x 3]
##
## Division Mean_2010 Mean_2011
## 1 8 3894003 3882159
## 2 4 4672332 4672687
## 3 1 5121003 5222277
## 4 6 6161967 6267490
## 5 5 10975194 11023526
## 6 7 14916843 15000139
## 7 9 15540681 15468173
## 8 3 16322489 16270159
## 9 2 32415457 32877923

We can also rank-order from highest to lowest by applying desc() within the arrange() function. The following shows the same data but in descending order.

sub.exp %>%
  group_by(Division) %>%
  summarise(Mean_2010 = mean(X2010, na.rm=TRUE), Mean_2011 = mean(X2011, na.rm=TRUE)) %>%
  arrange(desc(Mean_2011))

## Source: local data frame [9 x 3]
##
## Division Mean_2010 Mean_2011
## 1 2 32415457 32877923
## 2 3 16322489 16270159
## 3 9 15540681 15468173
## 4 7 14916843 15000139
## 5 5 10975194 11023526
## 6 6 6161967 6267490
## 7 1 5121003 5222277
## 8 4 4672332 4672687
## 9 8 3894003 3882159

## join( ) functions:

Objective: Join two datasets together

Description: Often we have separate dataframes that can have common and differing variables for similar observations, and we wish to join these dataframes together. The multiple xxx_join() functions provide multiple ways to do so.

Function:
inner_join(x, y, by = NULL)
left_join(x, y, by = NULL)
right_join(x, y, by = NULL)
full_join(x, y, by = NULL)
semi_join(x, y, by = NULL)
anti_join(x, y, by = NULL)

Arguments:
x, y: data frames to join
by: a character vector of variables to join by.
If NULL (the default), join will do a natural join, using all variables with common names across the two tables.

Example

Our public education expenditure data represents then-year dollars. To make any accurate assessments of longitudinal trends and comparisons, we need to adjust for inflation. I have the following dataframe, which provides inflation adjustment factors for base-year 2012 dollars (obviously I should use 2014 values, but I had these easily accessible and they only serve for illustrative purposes).

## Year Annual Inflation
## 28 2007 207.342 0.9030811
## 29 2008 215.303 0.9377553
## 30 2009 214.537 0.9344190
## 31 2010 218.056 0.9497461
## 32 2011 224.939 0.9797251
## 33 2012 229.594 1.0000000

To join this to my expenditure data, I obviously need to get my expenditure data into the proper form that allows me to join these two dataframes. I can apply the following functions to accomplish this:

long.exp <- sub.exp %>%
  gather(Year, Expenditure, X2007:X2011) %>%           # turn to long format
  separate(Year, into = c("x", "Year"), sep = "X") %>% # separate "X" from the year value
  select(-x)                                           # remove the "x" column

long.exp$Year <- as.numeric(long.exp$Year)             # convert from character to numeric

## Division State Year Expenditure
## 1 6 Alabama 2007 6245031
## 2 9 Alaska 2007 1634316
## 3 8 Arizona 2007 7815720
## 4 7 Arkansas 2007 3997701
## 5 9 California 2007 57352599
## 6 8 Colorado 2007 6579053

I can now apply the left_join() function to join the inflation data to the expenditure data. This aligns the data in both dataframes by the Year variable and then joins the remaining inflation data to the expenditure dataframe as new variables.

join.exp <- long.exp %>% left_join(inflation)

## Division State Year Expenditure Annual Inflation
## 1 6 Alabama 2007 6245031 207.342 0.9030811
## 2 9 Alaska 2007 1634316 207.342 0.9030811
## 3 8 Arizona 2007 7815720 207.342 0.9030811
## 4 7 Arkansas 2007 3997701 207.342 0.9030811
## 5 9 California 2007 57352599 207.342 0.9030811
## 6 8 Colorado 2007 6579053 207.342 0.9030811

To illustrate the other joining methods, we can use these two simple dataframes:

Dataframe "x":

## name instrument
## 1 John guitar
## 2 Paul bass
## 3 George guitar
## 4 Ringo drums
## 5 Stuart bass
## 6 Pete drums

Dataframe "y":

## name band
## 1 John TRUE
## 2 Paul TRUE
## 3 George TRUE
## 4 Ringo TRUE
## 5 Brian FALSE

inner_join(): Include only rows in both x and y that have a matching value

inner_join(x,y)

## name instrument band
## 1 John guitar TRUE
## 2 Paul bass TRUE
## 3 George guitar TRUE
## 4 Ringo drums TRUE

left_join(): Include all of x, and matching rows of y

left_join(x,y)

## name instrument band
## 1 John guitar TRUE
## 2 Paul bass TRUE
## 3 George guitar TRUE
## 4 Ringo drums TRUE
## 5 Stuart bass <NA>
## 6 Pete drums <NA>

semi_join(): Include rows of x that match y, but only keep the columns from x

semi_join(x,y)

## name instrument
## 1 John guitar
## 2 Paul bass
## 3 George guitar
## 4 Ringo drums

anti_join(): Opposite of semi_join

anti_join(x,y)

## name instrument
## 1 Pete drums
## 2 Stuart bass

## mutate( ) function:

Objective: Creates new variables

Description: Often we want to create a new variable that is a function of the current variables in our dataframe, or even just add a new variable. The mutate() function allows us to add new variables while preserving the existing variables.

Function: mutate(data, ...)
Same as: data %>% mutate(...)
Arguments:
data: data frame
...: Expression(s)

Examples

If we go back to our previous join.exp dataframe, remember that we joined inflation rates to our non-inflation-adjusted expenditures for public schools. The dataframe looks like:

## Division State Year Expenditure Annual Inflation
## 1 6 Alabama 2007 6245031 207.342 0.9030811
## 2 9 Alaska 2007 1634316 207.342 0.9030811
## 3 8 Arizona 2007 7815720 207.342 0.9030811
## 4 7 Arkansas 2007 3997701 207.342 0.9030811
## 5 9 California 2007 57352599 207.342 0.9030811
## 6 8 Colorado 2007 6579053 207.342 0.9030811

If we wanted to adjust our annual expenditures for inflation, we can use mutate() to create a new inflation-adjusted cost variable, which we'll name Adj_Exp (stored in a new dataframe, inflation_adj):

inflation_adj <- join.exp %>% mutate(Adj_Exp = Expenditure/Inflation)

## Division State Year Expenditure Annual Inflation Adj_Exp
## 1 6 Alabama 2007 6245031 207.342 0.9030811 6915249
## 2 9 Alaska 2007 1634316 207.342 0.9030811 1809711
## 3 8 Arizona 2007 7815720 207.342 0.9030811 8654505
## 4 7 Arkansas 2007 3997701 207.342 0.9030811 4426735
## 5 9 California 2007 57352599 207.342 0.9030811 63507696
## 6 8 Colorado 2007 6579053 207.342 0.9030811 7285119

Let's say we wanted to create a variable that rank-orders state-level expenditures (inflation adjusted) for the year 2010 from the highest level of expenditures to the lowest:

rank_exp <- inflation_adj %>%
  filter(Year == 2010) %>%
  arrange(desc(Adj_Exp)) %>%
  mutate(Rank = row_number())

## Division State Year Expenditure Annual Inflation Adj_Exp Rank
## 1 9 California 2010 58248662 218.056 0.9497461 61330774 1
## 2 2 New York 2010 50251461 218.056 0.9497461 52910417 2
## 3 7 Texas 2010 42621886 218.056 0.9497461 44877138 3
## 4 3 Illinois 2010 24695773 218.056 0.9497461 26002501 4
## 5 2 New Jersey 2010 24261392 218.056 0.9497461 25545135 5
## 6 5 Florida 2010 23349314 218.056 0.9497461 24584797 6

If you wanted to assess the percent change in cost for a particular state, you can use the lag() function within the mutate() function:

inflation_adj %>%
  filter(State == "Ohio") %>%
  mutate(Perc_Chg = (Adj_Exp - lag(Adj_Exp)) / lag(Adj_Exp))

## Division State Year Expenditure Annual Inflation Adj_Exp Perc_Chg
## 1 3 Ohio 2007 18251361 207.342 0.9030811 20210102 NA
## 2 3 Ohio 2008 18892374 215.303 0.9377553 20146378 -0.003153057
## 3 3 Ohio 2009 19387318 214.537 0.9344190 20747992 0.029862103
## 4 3 Ohio 2010 19801670 218.056 0.9497461 20849436 0.004889357
## 5 3 Ohio 2011 19988921 224.939 0.9797251 20402582 -0.021432441

You could also look at what percent of all US expenditures each state made up in 2011. In this case we use mutate() to take each state's inflation-adjusted expenditure and divide it by the sum of the entire inflation-adjusted expenditure column. We also apply a second function within mutate() that provides the cumulative percent in rank order. This shows that in 2011, the top 8 states with the highest expenditures represented over 50% of the total U.S. expenditures in K-12 public schools.
(I remove the non-inflation-adjusted Expenditure, Annual & Inflation columns so that the columns don't wrap on the screen view.)

perc.of.whole <- inflation_adj %>%
  filter(Year == 2011) %>%
  arrange(desc(Adj_Exp)) %>%
  mutate(Perc_of_Total = Adj_Exp / sum(Adj_Exp),
         Cum_Perc = cumsum(Perc_of_Total)) %>%
  select(-Expenditure, -Annual, -Inflation)

## Division State Year Adj_Exp Perc_of_Total Cum_Perc
## 1 9 California 2011 58717324 0.10943237 0.1094324
## 2 2 New York 2011 52575244 0.09798528 0.2074177
## 3 7 Texas 2011 43751346 0.08154005 0.2889577
## 4 3 Illinois 2011 25062609 0.04670957 0.3356673
## 5 5 Florida 2011 24364070 0.04540769 0.3810750
## 6 2 New Jersey 2011 24128484 0.04496862 0.4260436
## 7 2 Pennsylvania 2011 23971218 0.04467552 0.4707191
## 8 3 Ohio 2011 20402582 0.03802460 0.5087437
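Because each of these verbs takes a data frame and returns a data frame, they chain together naturally. As a closing sketch (using the same expenditures and inflation data frames from above), here is one pipeline stringing most of the verbs in this tutorial together to get the 5-year inflation-adjusted mean expenditure per Division, highest first:

expenditures %>%
  select(Division, State, X2007:X2011) %>%             # keep the recent years
  gather(Year, Expenditure, X2007:X2011) %>%           # wide to long
  separate(Year, into = c("x", "Year"), sep = "X") %>%
  select(-x) %>%
  mutate(Year = as.numeric(Year)) %>%
  left_join(inflation) %>%                             # add inflation factors
  mutate(Adj_Exp = Expenditure / Inflation) %>%        # adjust for inflation
  group_by(Division) %>%
  summarise(Mean_Adj_Exp = mean(Adj_Exp)) %>%
  arrange(desc(Mean_Adj_Exp))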
# The Pi(π) Machine (2023)

A useless machine. Black & white spheres, motor, camera, LEDs, Raspberry Pi, custom code.

The Pi(π) Machine is a useless device that calculates the value of Pi (π) by generating 10-bit random numbers using black and white spheres that fall inside a tube. A camera analyses the image and detects the black and white spheres, which are translated to 1 and 0 and then to the corresponding number. Using two algorithms, the Monte Carlo method and Euclid's formula, the machine calculates an approximate value of Pi.

Pi is wonderful precisely because it can only ever be understood theoretically, never actually grasped in its entirety. The lack of solution can be liberating, a demonstration of a classic axiom: The wisest among us know only how little we know. The greatest mathematical minds of the centuries made advances. Yet every step forward in comprehension and calculation also reveals the limitations of human knowledge. Pi shows that knowing, wholly, is an impossibility. The more we know, the more apparent it is that there is much more to know. For each individual, and for humanity as a whole, there is always that added bit that simply can't be figured out, no matter how much information or education we possess. (from https://qz.com/931891/pi-3-14159-is-a-metaphor-for-life-and-the-nature-of-the-universe-no-math-required)

## The Method

Pi is the ratio of a circle's circumference to its diameter. Its decimal expansion never terminates or repeats; it is approximately equal to 3.14159. To calculate Pi (π) using the Monte Carlo method, we take pairs of random numbers as x and y and draw them on a 2D plane. Then we count the number of dots m that are within a distance of 1 from the origin, i.e. within the circle, as shown below. The ratio (points within circle)/(total points) approximates the ratio of the area of the quarter circle to the area of the square, which is π/4.
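A minimal R sketch of that estimate: draw random points in the unit square and take the fraction that lands inside the quarter circle of radius 1.

estimate_pi <- function(n) {
  x <- runif(n); y <- runif(n)
  4 * mean(x^2 + y^2 <= 1)   # hit ratio approximates pi/4
}
set.seed(314)
estimate_pi(1e6)   # ~3.14; the error shrinks slowly, like 1/sqrt(n)

The machine's spheres play the role of runif() here: each 10-bit black/white reading supplies one random coordinate.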
Serving the Quantitative Finance Community

abdelali
Topic Author
Posts: 9
Joined: September 30th, 2009, 8:35 am

### sample cuda problems in finance

Hi there, I am willing to sharpen my skills in CUDA in the finance domain. I would be very thankful if you could propose some problems that need acceleration in finance apart from Monte Carlo, Finite Differences and Finite Elements. I am more interested in problems that have to do with algorithmic trading or data analysis. Thanks a lot.

jambodev
Posts: 80
Joined: September 6th, 2008, 11:07 am

### sample cuda problems in finance

There is a feature request for this, pending, in QuantLib if I'm not mistaken.

lballabio
Posts: 983
Joined: January 19th, 2004, 12:34 pm

### sample cuda problems in finance

Except he said "apart from Monte Carlo, Finite Differences and Finite Elements." Those would be the obvious additions to QuantLib.

Cuchulainn
Posts: 64978
Joined: July 16th, 2004, 7:38 am
Location: Drosophila melanogaster
Contact:

### sample cuda problems in finance

Quote: "I am more interested in problems that have to do with algorithmic trading or data analysis."

The question is how far CUDA is suitable for these work-flow/pipeline applications. Or are you thinking about some kind of SPMD? In general, you have MPMD, yes?

Last edited by Cuchulainn on September 29th, 2009, 10:00 pm, edited 1 time in total.
"Compatibility means deliberately repeating other people's mistakes." David Wheeler
http://www.datasimfinancial.com http://www.datasim.nl

abdelali
Topic Author
Posts: 9
Joined: September 30th, 2009, 8:35 am

### sample cuda problems in finance

Quote: "There is a feature request for this, pending, in QuantLib if I'm not mistaken."

I am considering this one already. thx

Last edited by abdelali on September 29th, 2009, 10:00 pm, edited 1 time in total.

abdelali
Topic Author
Posts: 9
Joined: September 30th, 2009, 8:35 am

### sample cuda problems in finance

Quote (Cuchulainn): "The question is how far CUDA is suitable for these work-flow/pipeline applications. Or are you thinking about some kind of SPMD? In general, you have MPMD, yes?"

Truth is, I don't know exactly. At the end of the day, it will be SPMD to run on CUDA; it is just that some MPMD problems can be reformulated to fit the SPMD paradigm. The goal here is to gather a list of problems that can be accelerated, see which ones can be CUDified (completely or partially), CUDify them, and develop some patterns and a good-practices reference along the way. It is really a pity that we do not have open high-performance financial libraries. Other disciplines do, why don't we?

abdelali
Topic Author
Posts: 9
Joined: September 30th, 2009, 8:35 am

### sample cuda problems in finance

There are now two more reasons to dig deeper into the GPU space: Nexus (http://www.nvidia.com/object/pr_nexus_093009.html) and Fermi (http://www.nvidia.com/object/fermi_architecture.html). What do you guys think?

Cuchulainn
Posts: 64978
Joined: July 16th, 2004, 7:38 am
Location: Drosophila melanogaster
Contact:

### sample cuda problems in finance

What do you guys think? My 2 cents: those vendors who adopt open standards (C++, OpenCL, IEEE) will win. At least that's how the history of s/w products has evolved. In the 80's it was the software that sold the hardware. Think about CAD systems running on VAX, UNIX and Pr1me boxes. Early pioneers tended to fall by the wayside.
Anyone remember Taligent, Occam, OS/2 (except DCFC), NeXT, Ultrix...

Last edited by Cuchulainn on September 30th, 2009, 10:00 pm, edited 1 time in total.
"Compatibility means deliberately repeating other people's mistakes." David Wheeler
http://www.datasimfinancial.com http://www.datasim.nl

abdelali
Topic Author
Posts: 9
Joined: September 30th, 2009, 8:35 am

### sample cuda problems in finance

I completely agree with you, Cuchulainn. In the long run OpenCL, or the standard that comes after it, will win. However this does not affect my original question. The patterns will be the same whether we use CUDA or OpenCL, so the most important thing for me now is to put together a list of problems in the financial industry that are currently slow/not fast enough. To do that, I need your (the community's) input, as my expertise is limited to pricing and scenario generation. What do you think?

DominicConnor
Posts: 11684
Joined: July 14th, 2002, 3:00 am

### sample cuda problems in finance

I think you're probably right, but history is more ambiguous. Partly because we're talking here about the relationship between programming language and architecture. The Algol family (Pascal, C++, C, Java, C#) so dominates programming that it is easy to forget there is any other way. However they assume a classical von Neumann architecture, unlike (say) the functional languages that followed Lisp. Algols are not ideal for multitasking, which is of course ironic since almost all multitasking is controlled by the C/C++ family, which either does it natively or manages it for the Java/C# class of languages. CUDA is a mutant C, and we hear that it will move towards C++, which is good, but again not wholly ideal. Another factor is an irony: the tools for developing languages make it orders of magnitude easier to create or implement one than it was when C was created, but... But it has become stupidly hard to make money from languages; most are either given away for free (GCC, most Java, Perl, Python, and of course CUDA), or sold at a loss like VC++. MS now actually gives away an excellently standard C++, together with a professional development environment. It was not unknown as late as the 80s for a compiler to be so expensive you needed board-level approval to buy it. Sun has never made money from Java, unless you make the most heroic and unrealistic assumptions (i.e. you read Sun annual reports). In fact the whole developer landscape is in effect a loss leader for other activities and hobbies, including and especially GPUs. This means a tension between what is good for the product and what is good for the people paying the bills. MS tools are hampered by the desire to suck people in but not let them out, IBM tools suffer from the need to find employment for arts graduates, open source to pursue social agendas, and GPUs to sell hardware. That means that the h/w vendors will try harder on their proprietary shit than the common tools, and inevitably we will see proprietary "extensions".

Last edited by DominicConnor on October 1st, 2009, 10:00 pm, edited 1 time in total.

abdelali
Topic Author
Posts: 9
Joined: September 30th, 2009, 8:35 am

### sample cuda problems in finance

Again!! I really do not care which one wins; the future is a cone, not a line. So as long as we stay inside the cone, we are fine. Developing patterns for GPU programming within the financial industry is a safe bet (in my humble opinion). So guys, please, let's refocus on the original question.
What would be interesting topics to work on in finance (problems whose solution can/could be accelerated)?

i386
Posts: 100
Joined: January 31st, 2007, 11:21 am

### sample cuda problems in finance

In the long run, OpenCL or its successor might be the winner, but if you look for something that can run at the moment or in 1-2 years, CUDA is nearly the only choice, practically speaking. The API provided by OpenCL is too much focused on the graphical side. It doesn't look like a language the way CUDA does.

Cuchulainn
Posts: 64978
Joined: July 16th, 2004, 7:38 am
Location: Drosophila melanogaster
Contact:

### sample cuda problems in finance

Quote: "In the long run, OpenCL or its successor might be the winner"

And there again, it might not...

"Compatibility means deliberately repeating other people's mistakes." David Wheeler
http://www.datasimfinancial.com http://www.datasim.nl

Alan
Posts: 10712
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### sample cuda problems in finance

Quote (abdelali): "Hi there, I am willing to sharpen my skills in CUDA in the finance domain. [...] I am more interested in problems that have to do with algorithmic trading or data analysis."

There are a zillion opportunities here, but most of them require good access to live trading databases and/or good clean historical data. For example, if you have OPRA and stock feeds, you could develop some nice live-updating 'smile' displays -- say a display showing the market smiles of the 100 most active optionable securities. You could overlay that data with some fitted parametric forms (I mention SABR and Gatheral's SVI fit in a thread in the General forum.)

untler
Posts: 15
Joined: June 18th, 2009, 7:23 pm

### sample cuda problems in finance

Quote (Alan): "...you could develop some nice live-updating 'smile' displays..."

I was thinking the same thing - but I have no data. Might be interesting to play around with this anyway.
• Alan Wang

Is it possible to combine SmoothDamp with transform.LookAt? The current transform.LookAt on the camera locks the camera position to the character, creating a "stiff" camera movement. 🙁

• jtyma

Hi! If you want to use SmoothDamp you must pay attention to rotation discontinuity. Rotation is in the range from 0 to 360 degrees, so if you exceed these limits you will see something strange in your camera's behaviour 😉 To deal with it you can use the SmoothDampAngle function instead and smooth each angle separately. We prepared a simple scene to show you that this approach works fine: http://cloud.aliasinggames.com/data/public/f5ea0d

You can also use the quaternion Slerp function:

void Update() {
    var targetRotation = Quaternion.LookRotation(player.transform.position - transform.position);
    transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, speed * Time.deltaTime);
}

but remember this is an invalid way to use lerp functions, as described here: http://devblog.aliasinggames.com/how-to-lerp-properly/ We think that the first way is the best. Best regards!
# Book:John B. Fraleigh/A First Course in Abstract Algebra ## John B. Fraleigh: A First Course in Abstract Algebra Published $1975$.
# Battery charge and discharge time for a constant-voltage-charge battery, and how to know if it's fully charged

In my project I used the battery below and a battery-backup SMPS.

Battery type: Sealed lead acid, Voltage: 12 V, Capacity: 7 Ah
Charge parameters: Constant-voltage charge with voltage regulation
Standby use: 13.6 V-13.8 V
Cycle use: 14.1 V-14.4 V
Max initial current: 1.4 A

SMPS: Input 85 VAC to 264 VAC, 50 Hz; Output: 13.8 V DC

The image below is from the SMPS datasheet. I read the SMPS datasheet, but there is no information on battery charging and discharging times.

Load: I have connected a load of 20 W at the +V and -V terminals.

Q 1) I set up everything, connected the battery, and put a multimeter across the battery terminals; the voltage on the display is slowly increasing. At what level do I know that my battery is fully charged and should be disconnected to avoid overcharging?

Q 2) About battery backup time.
a) If we use the formula Tbck = (Vout × Ah) / (output watts), I get 4.2 hrs. Is this the right calculation?
b) If I use the other formula with the battery spec (see attached image): consider the 3-hr rating of 5.2 Ah. Current = 5.2 Ah / 3 hr ≈ 1.73 A, the current it can supply for 3 hours (right?). So after 3 hours, what will the battery voltage be: 0 V or 10.5 V? My SMPS cuts the battery off if it is lower than 10.5 V.

Q 3) Charging time of the battery. Since no information is given in the SMPS datasheet about the charging current or anything else, I cannot figure out the charging time of the battery.
a) The battery is a constant-voltage-regulation charge type and the SMPS charges it at 13.8 V. Is there any method to calculate the charging time of the battery?

## 1 Answer

You're confused about charging times while you should not be. It is actually quite simple because you're using an SLA battery. You can charge such a battery simply until it reaches 13.8 V. This can be done simply by connecting it to a 13.8 V voltage source, which is your SMPS. Do make sure there is some form of current limiting in place so that the current does not get too large when the battery is empty and at 11 V, for example. I expect your SMPS to have proper current limiting, but if it limits at a high current, check that the battery is allowed to be charged with such a current. If not, you may want to add a power resistor between the SMPS and the battery to limit the current. When the battery is full, its voltage will be around 13.8 V and charging will stop automatically because battery and source will both be at 13.8 V. There is no need to keep track of charging time. Most uninterruptible power supplies have SLA batteries and they charge them this way because it is simple and effective.
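On the backup-time side (which the answer above doesn't cover), here is a rough R sketch of the question's own arithmetic; an upper-bound estimate, not a battery model:

V_batt <- 12; capacity_ah <- 7; load_w <- 20

V_batt * capacity_ah / load_w   # naive energy estimate: 4.2 h

# Usable capacity drops at higher discharge currents, so the datasheet's
# 3-hour rating (5.2 Ah) gives a more conservative figure:
i_load <- load_w / V_batt       # ~1.67 A drawn by the load
5.2 / i_load                    # ~3.1 h

So 4.2 hours is the right arithmetic for the rated capacity, but the 3-hour-rate figure of roughly 3 hours is closer to what the battery will actually deliver into a 20 W load.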
# Given two rectangles: 15 m by 7 m and 27 m by 3 m. Find the combined area.

Question (Analytic geometry): Given two rectangles: 15 m by 7 m and 27 m by 3 m. Find the combined area.

2021-03-03

$$A_{1+2}=A_1+A_2$$ The area of a rectangle is $$A=ab$$, where $$a$$ and $$b$$ are the sides of the rectangle. Substitute $$A=ab$$: $$A_{1+2}=a_1 b_1+a_2 b_2$$ Substitute the given values: $$A_{1+2}=15\times 7+27\times 3$$ $$A_{1+2}=105+81$$ $$A_{1+2}=186\ \mathrm{m}^2$$
## The representation type of rational normal scrolls. (English) Zbl 1268.14014

Let $(X,\mathcal{O}_X(1))$ be a polarized projective variety of dimension $n$. A sheaf $E$ on $X$ is called arithmetically Cohen-Macaulay (ACM) if all its intermediate cohomology vanishes, that is, $H^i(X,E(k))=0$ for all $k$ and $0<i<n$, where as usual one writes $E(k)$ for $E\otimes\mathcal{O}_X(1)^{\otimes k}$. For such an $E$, one sees easily that the number of generators of $\bigoplus_k H^0(X,E(k))$ is at most $\deg X\cdot \operatorname{rk} E$. Sheaves for which equality holds are called Ulrich bundles; they were introduced by B. Ulrich [Math. Z. 188, 23-32 (1984; Zbl 0573.13013)] and have been studied intensely. The paper under review shows that on most rational normal scrolls there are families, of arbitrarily large rank and dimension, of indecomposable Ulrich bundles. The exceptions, a small number, were previously known to admit only few indecomposable Ulrich bundles [R.-O. Buchweitz, G.-M. Greuel and F.-O. Schreyer, Invent. Math. 88, 165-182 (1987; Zbl 0617.14034)].

### MSC:

14F05 Sheaves, derived categories of sheaves, etc. (MSC2010)
14M20 Rational and unirational varieties

### Citations:

Zbl 0573.13013; Zbl 0617.14034

Full Text:

### References:

[1] Arbarello, E., Cornalba, M., Griffiths, P.A., Harris, J.: Geometry of algebraic curves. Grundlehren der Mathematischen Wissenschaften, vol. 267. Springer, New York (1985) · Zbl 0559.14017
[2] Buchweitz, R., Greuel, G., Schreyer, F.O.: Cohen-Macaulay modules on hypersurface singularities, II. Invent. Math. 88(1), 165-182 (1987) · Zbl 0617.14034
[3] Casanellas, M., Hartshorne, R.: Stable Ulrich bundles. Preprint, available from arXiv:1102.0878
[4] Casanellas, M., Hartshorne, R.: Gorenstein biliaison and ACM sheaves. J. Algebra 278, 314-341 (2004) · Zbl 1057.14062
[5] Casanellas, M., Hartshorne, R.: ACM bundles on cubic surfaces. J. Eur. Math. Soc. 13, 709-731 (2011) · Zbl 1245.14044
[6] Costa, L., Miró-Roig, R.M., Pons-Llopis, J.: The representation type of Segre varieties. Adv. Math. 230, 1995-2013 (2012) · Zbl 1256.14015
[7] Drozd, Y., Greuel, G.M.: Tame and wild projective curves and classification of vector bundles. J. Algebra 246, 1-54 (2001) · Zbl 1065.14041
[8] Eisenbud, D., Schreyer, F., Weyman, J.: Resultants and Chow forms via exterior syzygies. J. Amer. Math. Soc. 16, 537-579 (2003) · Zbl 1069.14019
[9] Eisenbud, D., Herzog, J.: The classification of homogeneous Cohen-Macaulay rings of finite representation type. Math. Ann. 280(2), 347-352 (1988) · Zbl 0616.13011
[10] Hartshorne, R.: Connectedness of the Hilbert scheme. Publications Mathématiques de l'IHÉS 29, 5-48 (1966) · Zbl 0171.41502
[11] Horrocks, G.: Vector bundles on the punctual spectrum of a local ring. Proc. Lond. Math. Soc. 14(3), 689-713 (1964) · Zbl 0126.16801
[12] Miró-Roig, R.M.: On the representation type of a projective variety (preprint 2012) · Zbl 1327.14088
[13] Miró-Roig, R.M., Pons-Llopis, J.: N-dimensional Fano varieties of wild representation type (preprint), available from arXiv:1011.3704
[14] Miró-Roig, R.M., Pons-Llopis, J.: Representation type of rational ACM surfaces $X \subseteq \mathbb{P}^4$ (to appear in Algebras and Representation Theory) · Zbl 1277.14009
[15] Pons-Llopis, J., Tonini, F.: ACM bundles on Del Pezzo surfaces. Le Matematiche 64(2), 177-211 (2009) · Zbl 1207.14046
[16] Ulrich, B.: Gorenstein rings and modules with high number of generators. Math. Z.
188, 23-32 (1984) · Zbl 0573.13013
# Brief Exercise 14-08 In alphabetical order below are current asset items for Roland Company's balance sheet...

###### Question:

Brief Exercise 14-08. In alphabetical order below are current asset items for Roland Company's balance sheet at December 31, 2020.

| Item | Amount |
| --- | --- |
| Accounts receivable | $201,000 |
| Cash | 63,000 |
| Finished goods | 80,000 |
| Prepaid expenses | 39,000 |
| Raw materials | 83,000 |
| Work in process | 88,000 |

Prepare the current assets section. (List current assets in order of liquidity.)

ROLAND COMPANY
Balance Sheet (Partial)
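For reference, here is the requested section worked out, following the usual order-of-liquidity convention (cash, then receivables, then the inventory accounts from finished goods through raw materials, then prepayments). The layout is a typical textbook presentation; the amounts are simply the figures given above.

```
ROLAND COMPANY
Balance Sheet (Partial)
December 31, 2020

Current assets
    Cash                        $ 63,000
    Accounts receivable          201,000
    Inventory
        Finished goods            80,000
        Work in process           88,000
        Raw materials             83,000
    Prepaid expenses              39,000
Total current assets            $554,000
```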
# Frequency of girl bobbing in swimming pool

1. Apr 3, 2012

### kevlar94

1. The problem statement, all variables and given/known data

A girl with mass m kg steps into her inflatable ring with horizontal cross sectional area Am^2 and jumps into the pool. After the first splash, what is the frequency of the girl bobbing up and down?

2. Relevant equations

I assume that we need the extra force, F_e, after the buoyant force and the weight cancel. Archimedes: F_b = mg. We can then use Newton's second law, F = ma, where a = x".

x" = sqrt(F_e/m)

f = 1/[2π√(F_e/m)]

3. The attempt at a solution

The above ω will give the frequency. I am not sure if the above is right, and I do not know how to solve for F_e.

Thanks!

2. Apr 3, 2012

### collinsmark

Hello kevlar94,

Welcome to Physics Forums!

Just so we are both on the same page, do you really mean that the cross sectional area is a function of the mass squared? Maybe the problem statement is written that way, but I just want to be sure. In other words, do you really mean that
$$\mathrm{Area} = Am^2$$
where $m$ is the mass and $A$ is some constant of proportionality?

Well, you can say Fb = mg if nothing is accelerating and everything is in static equilibrium. But that's not the case when the girl is bobbing up and down. Perhaps you mean to say that Fe = Fb - mg?

Where did the square root come from? Newton's second law doesn't contain a square root.
$$\sum_i \vec F_i = m \ddot{z}$$
(Since the motion in this problem is always in the up/down direction, I chose to use the variable $z$ to represent the position. You could just as easily use the variable $x$ to represent position if you want to though.)

You'll have to show me where that came from. There are two forces involved: the weight of the girl and the buoyant force. You've already figured out the weight is mg. The buoyant force is equal to the weight of the water that is displaced. The weight of the water is proportional to g and the density of water ρ. It's also proportional to the volume of the water that is displaced. The cross sectional area is already given in the problem statement; you just need to throw in the vertical displacement to find the volume of displaced water.

Plug those back into Newton's second law and you'll get an equation containing both $z$ and $\ddot{z}$: an ordinary, second-order differential equation that you can solve.

Last edited: Apr 3, 2012

3. Apr 3, 2012

### kevlar94

Thanks for the help. Sorry, the prompt says the girl is 25 kg and the inflatable ring has a horizontal cross-sectional area of 0.7 m^2.

Yes, that is what I meant. I skipped a step I should have mentioned. The square root is from the formula for frequency using k (from Hooke's law) in my book.

So F_r = F_b - mg = ρ(displaced water)(V)g.

I start with the assumption that the ring goes dx distance into the water, which results in dV = A dx, so

dF_r = ρ*g*A*dx

Since I am using Hooke's law for a linear oscillator, F = kx or dF = k dx, so dF_r = k dx = ρ*g*A*dx, which gives a k value of ρ*g*A when the dx cancels.

Using k I can solve for ω using √(k/m) and then solve for frequency using ω = 2π*f.

Using the above values gives ω = √((9800*0.7)/25) = 16.56

f = 16.56/(2π) = 2.637 Hz

Does that look correct?

4. Apr 4, 2012

### collinsmark

Oh, 'm' is meters (not mass). Okay, I understand now.

Ah, Hooke's law. I never thought of that. The buoyant force is proportional to the displacement, so yes, Hooke's law and associated equations will work just fine. That should save you from having to solve the differential equation -- essentially from re-deriving the ω = √(k/m) formula.
That's what I got (out to the first three significant figures anyway). Good job.
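For concreteness, a small numeric check of the Hooke's-law treatment worked out in the thread (this assumes fresh water, ρ = 1000 kg/m³ and g = 9.8 m/s², as the thread's use of ρg = 9800 implies):

```python
import math

rho = 1000.0  # water density (kg/m^3)
g = 9.8       # gravitational acceleration (m/s^2)
A = 0.7       # horizontal cross-sectional area of the ring (m^2)
m = 25.0      # mass of the girl (kg)

# Extra buoyant force for a small vertical displacement z is F = rho*g*A*z,
# so the effective spring constant is k = rho*g*A.
k = rho * g * A            # 6860 N/m
omega = math.sqrt(k / m)   # angular frequency (rad/s)
f = omega / (2 * math.pi)  # bobbing frequency (Hz)

print(f"k = {k:.0f} N/m, omega = {omega:.2f} rad/s, f = {f:.2f} Hz")
# k = 6860 N/m, omega = 16.57 rad/s, f = 2.64 Hz
```

This reproduces the thread's ω ≈ 16.56 rad/s and f ≈ 2.64 Hz to rounding.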
Function $y = \frac{1}{x^2}$ in an orthonormal coordinate system.
# How to use Markdown with MathJax like Math StackExchange

UPDATED POST

OK, I've managed to make Markdown and MathJax work together; it was relatively simple actually. I've used marked together with MathJax.

```javascript
$(function() {
  var $text = $("#text"),       // the markdown textarea
      $preview = $("#preview"); // the preview div

  $text.on("keyup", function() {
    $preview.html(marked($text.val()));                     // parse markdown
    MathJax.Hub.Queue(["Typeset", MathJax.Hub, "preview"]); // then let MathJax do its job
  });
});
```

The problem now is: I think Markdown is parsing my math first, before MathJax can process it. How do I fix this? I think it's solved on Math StackExchange, but how? I need to stop Markdown from parsing math.

UPDATE 2

This works, but I'm not sure if it's the way math.stackexchange does it; it seems to produce similar/same results in what I've tested so far.

```javascript
$(function() {
  var $text = $("#text"),
      $preview = $("#preview");

  $text.on("keyup", function() {
    $preview.html($text.val());
    MathJax.Hub.Queue(["Typeset", MathJax.Hub, "preview"]);
  });

  MathJax.Hub.Register.MessageHook("End Process", function (message) {
    $preview.html(marked($preview.html()));
  });
});
```

OLD POST BELOW

On Math StackExchange, I can use MathJax with Markdown. I wonder what I need to do for that? I can use a library like marked to render Markdown, but MathJax seems to render only on page load. How can I call it to re-render, or better, render just what I specify?

```javascript
var html = marked("some markdown string"); // an HTML string
// is there something like
// html = MathJax.parse(html)?
```

UPDATE

I think I should be looking at http://www.mathjax.org/docs/1.1/typeset.html#manipulating-individual-math-elements. But when I try

```javascript
$text.on("keyup", function() {
  $preview.html(marked($text.val()));
  var math = MathJax.Hub.getAllJax("preview");
  console.log(math);
  MathJax.Hub.Queue(["Text", math, "a+b"]);
});
```

where $text is the jQuery element for my textarea and $preview is the preview div, I find that math is undefined, so it seems var math = MathJax.Hub.getAllJax("preview") is not working. I have a div#preview, by the way.

### Solution 1:

The fastest way is to protect the math from your markdown parser. See this question for a detailed answer by Davide Cervone, including a link to the code used by math.SE.

### Solution 2:

For Sublime, add the following code to Markdown Preview --> Settings - User:

```json
{
  /* Enable or not mathjax support. */
  "enable_mathjax": true
}
```

as shown below.
# IEEE Transactions on Applied Superconductivity

### Early Access Articles

Early Access articles are made available in advance of the final electronic or print versions. Early Access articles are peer reviewed but may not be fully edited. They are fully citable from the moment they appear in IEEE Xplore.

Displaying Results 1 - 23 of 23

• ### A Real Time, Automatic MCG Signal Quality Evaluation Method Using the Magnetocardiography and Electrocardiography

Publication Year: 2018, Page(s): 1

The quality of the obtained Magnetocardiogram (MCG) signal has significant influence on the reliability of deriving some important parameters, such as the interval between two R waves (RR interval), the interval between Q wave and the adjacent T wave (QT interval) and the interval between S wave and the adjacent T wave (ST interval) et al. The poor MCG signal might falsely trigger alarms frequentl...

• ### Properties of ferromagnetic Josephson junctions for memory applications

Publication Year: 2018, Page(s): 1

In this work we give a characterization of the RF effect of memory switching on Nb-Al/AlOx-(Nb)-PdFe-Nb Josephson junctions as a function of magnetic field pulse amplitude and duration, alongside with an electrodynamical characterization of such junctions, in comparison with standard Nb-Al/AlOx-Nb tunnel junctions. The use of microwaves to tune the switching parameters of magnetic Josephson juncti...

• ### A 32-bit 4×4 Bit-Slice RSFQ Matrix Multiplier

Publication Year: 2018, Page(s): 1

A 32-bit 4×4 bit-slice RSFQ matrix multiplier is proposed. The multiplier mainly consists of bit-slice multipliers and bit-slice adders. The multiplication of unsigned integer matrixes is implemented by control signals. The matrix multiplier used synchronous concurrent-flow clocking. The results show that 16-bit bit-slice processing has the least latency at 10 G...

• ### A multi-terminal superconducting-ferromagnetic device with magnetically-tunable supercurrent for memory application

Publication Year: 2018, Page(s): 1

We report fabrication and testing at 4.2 K of four-terminal SF1IF2S1IS2 devices, where S denotes a superconductor (Nb), F1,2 denote ferromagnetic material (permalloy (Py) and Ni respectively), and I denotes an insulator (AlOx). The F1IF2 structure plays a role of a pseudo-spin valve, in which the magnetization vector of the Py layer can be switched either by an externally applied magnetic field, o...

• ### Anomalous Supercurrent Modulation in Josephson Junctions with Ni-Based Barriers

Publication Year: 2018, Page(s): 1

We investigate the supercurrent transport characteristics of Ni-barrier Josephson junctions with various barrier multilayer structures. Our device fabrication and magneto-electrical measurement methods provide high enough statistics and rigor necessary for the detailed characterization of magnetic Josephson junctions. As a result, we obtain the oscillatory critical current as a function of Ni thic...

• ### High-Gradient Magnetic Field for Magnetic Nanoparticles Drug Delivery System

Publication Year: 2018, Page(s): 1

Magnetic nanoparticles (MNPs), which can be transported through the vascular system and concentrated to the specific position of the body under the external magnetic field, are attracting increasing attention in tumor treatment.
The MNPs should work at high gradient magnetic field for generating the magnetic forces to overcome the hydrodynamic drag force acting on the nanoparticles from the blood ...

• ### Energy Efficient Superconducting Neural Networks for High-Speed Intellectual Data Processing Systems

Publication Year: 2018, Page(s): 1

We present the results of circuit simulations for the adiabatic flux-operating neuron. The proposed cell with a one-shot calculation of activation function is based on a modified single-junction superconducting quantum interferometer. In comparison, functionally equivalent elements of the artificial neural network in the semiconductor-based implementations consist of approximately 20 transistors. ...

• ### High-Temperature Superconducting Periodic Transmission Line

Publication Year: 2018, Page(s): 1

In this paper, a high-temperature superconducting (HTS) transmission line periodically loaded with small insulator gaps is studied in order to investigate its microwave characteristics. We calculate dispersion and impedance equations for both cases of infinite and finite periodic structures. For the infinitely long periodic structure, the method of analysis is based on the ABCD matrix representati...

• ### Researches on High Current and Instantaneous Impulse Characteristics of a Flux-Coupling Type SFCL with pancake coils

Publication Year: 2018, Page(s): 1

Resistive Superconducting Current Limiter (RSFCL) is a focus in the development of current limit technology, especially in the field of AC transmission. It has a significant effect of limiting current. However, the traditional double-wound non-inductive pancake coil has an inevitable drawback. Due to its special structure the flashover would occur easily during the fault in high voltage environme...

• ### Design of a Combined Screening and Damping Layer for a 10 MW Class Wind Turbine HTS Synchronous Generator

Publication Year: 2018, Page(s): 1

This paper is focused on design of a single protection layer to cover both screening and damping characteristics of a large scale high temperature superconducting (HTS) wind turbine synchronous generator under sudden short circuit conditions. Due to small synchronous reactance of HTS generators, short circuit condition causes substantial fault current in the stator windings which produces high unb...

• ### Intrinsic Jitter in Photon Detection by Straight Superconducting Nanowires

Publication Year: 2018, Page(s): 1

Timing jitter inherent in photon detection by superconducting nanowire single-photon detectors has different values and behaves differently for detection events originating in bends and in straights of nanowires. Generally, jitter is larger for events in bends. Although, for typical meandering nanowire, contribution of bends to the integral jitter is almost negligible due to small geometric weight ...

• ### Fabrication and characterization of SQUIDs with Nb/Nb$_n$Si$_{1-x}$/Nb junctions

Publication Year: 2018, Page(s): 1

Superconducting quantum interference devices (SQUIDs) with 3 μm × 3 μm or 4 μm × 4 μm self-shunted Nb/Nb$_n$Si$_{1-x}$/Nb Josephson junctions are designed and fabricated.
By adjusting the Nb content and thickness of the Nb/Nb...

• ### Comparison of the Electric Noise Properties of Novel Superconductive Materials for Electronics Applications

Publication Year: 2018, Page(s): 1

The high transition temperature of recently discovered iron-based and electron-doped superconductors makes them interesting for advanced electronic applications. However, the complex conduction mechanisms responsible for the appearance of the superconducting state may strongly affect the devices ultimate performances. Noise spectroscopy is a non-destructive technique that allows investigating in d...

• ### Prototype HTS Quadrupole Magnet for the In-flight Fragment Separator of RISP

Publication Year: 2018, Page(s): 1

The Rare Isotope Science Project (RISP) for constructing a heavy ion accelerator complex was launched in 2011 in Korea. As one of the rare isotope production systems, an in-flight fragment (IF) separator system will be installed for RISP. We plan to use high-temperature superconducting (HTS) quadrupole magnets in the forepart of the IF separator to cool efficiently the magnets from large radiation...

• ### Influence of Magnetic Flux Trapped in Moats on Superconducting Integrated Circuit Operation

Publication Year: 2018, Page(s): 1

The influence of a trapped flux quantum in a superconducting ground plane hole, called a moat, on superconducting circuit operation was analyzed. We devised a calculation model to estimate the magnetic flux threading a signal line of a superconducting integrated circuit by the trapped flux quantum in a moat placed near the signal line by using a conventional inductance extraction tool. Assuming on...

• ### Andreev Spectroscopy of Molecular States in Resonant and Charge Accumulation Regime

Publication Year: 2018, Page(s): 1

Molecular electronics represents the ultimate step of the miniaturization process of the integrated circuits. Including molecules in manmade devices may introduce novel functionalities in nanodevices, such as the possibility to interact with biological environments with tremendous implications in several fields. With these motivations, we present a Bogoliubov-de Gennes description of the transport...

• ### Superconducting Spintronics in the Presence of Spin-Orbital Coupling

Publication Year: 2018, Page(s): 1

We study physical phenomena in $\varphi_0$-junction with direct coupling between magnetic moment and Josephson current. By using a realistic model of Josephson junction including quasiparticle and displacement current, we simulate the IV-characteristics together with magnetic precession. It is demonstrated that a character of precession essentially change in the voltag...

• ### N-DOPED SURFACES OF SUPERCONDUCTING NIOBIUM CAVITIES AS A DISORDERED COMPOSITE

Publication Year: 2018, Page(s): 1

The Q-factor of superconducting accelerating cavities can be substantially improved by a special heat treatment under N2 atmosphere (N-doping). Recent experiments at Fermi National Laboratory investigated the dependence of Q on the RF frequency and showed, unexpectedly, both an increase and a decrease with the RF field amplitude. This paper shall explain this finding by extending a previously prop...
• ### Anomalous Switching Current Distributions in Superconducting Weak Links

Publication Year: 2018, Page(s): 1

We analyze the problem of supercurrent instability in highly transparent weak links. We identify several macroscopic quantum tunneling (MQT) regimes and predict a non-trivial non-monotonous dependence of the switching current distribution on temperature which can be observed in MQT experiments with transparent superconducting weak links.

• ### Design, Fabrication and Testing of MgB2/Fe Racetrack Coils

Publication Year: 2018, Page(s): 1

We fabricated four superconducting racetrack coils wound by bare in-situ MgB2/Fe mono and multi-filamentary wires produced in our laboratory by using the wind and react method. Transport measurements in self-field were performed in a liquid helium dewar. The magnetic field flux density B = 25 mT for I = 92 A was measured to verify how the current flowed inside the coil for one of the coils by mean...

• ### Simulation of the Transport and Magnetization Loss in Bi2223 High Temperature Superconducting Composite Conductors

Publication Year: 2018, Page(s): 1

The development of high temperature superconductors (HTS) brings us a new material to make high field magnets. Some conceptual HTS composite conductors have been developed for large-scale magnets with high field, such as fusion magnets. In this paper, we propose three models of Bi2223 high temperature superconducting composite conductors that may be applied to the China Fusion Engineering Test Rea...

• ### SmartCoil - Concept of a full Scale Demonstrator of a Shielded Core Type Superconducting Fault Current Limiter

Publication Year: 2016, Page(s): 1

Within a German government funded project with the partners Siemens and KIT we present the concept and first results for a full scale inductive superconducting fault current limiter. It is based on the well known shielded core concept but does not need the deployment of an iron core. Aim of the project is construction and test of one phase with a nominal current I = 600 A and a voltage of U = 10 k...

• ### The Present Status and Progress of the CHMFL 40 T Hybrid Magnet

Publication Year: 2014, Page(s): 1

The development of new high magnetic field facilities in the Chinese High Magnet Field Laboratory (CHMFL) will soon be completed according to the present situation. For the construction of the 40 T hybrid magnet, which will be the highest magnetic field facility in China, great progress has been achieved so far, for example, the engineering design of the resistive insert composed of six Florida-Bi...

## Aims & Scope

IEEE Transactions on Applied Superconductivity contains articles on the applications of superconductivity and other relevant technology.

## Meet Our Editors

Editor-in-Chief
Britton L. T. Plourde
Syracuse University
bplourde@syr.edu
http://asfaculty.syr.edu/pages/phy/plourde-britton.html
## Homework Statement

$$\int \frac{ae^\theta+b}{ae^\theta-b} \, d\theta$$

## The Attempt at a Solution

I took $u = ae^\theta-b$, so $e^\theta = \frac{u + b}{a}$. Then I substituted back into the integral and got

$$\int \frac{u + b + b}{u} \, du = \int du + \int \frac{2b}{u} \, du = u + 2b \ln u + C = ae^\theta - b + 2b\ln (ae^\theta-b) + C,$$

but the answer in the book is

$$\int \frac{ae^\theta+b}{ae^\theta-b} \, d\theta = 2\ln (ae^\theta-b) - \theta + C.$$

What did I do wrong?
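One way to see what went wrong, worked as a standard substitution step since the thread stops before an answer appears: with $u = ae^\theta - b$ we have $du = ae^\theta\,d\theta$, so $d\theta = \frac{du}{u+b}$; the attempt above swapped $d\theta$ for $du$ without this factor. Carrying it through with partial fractions,

$$\int \frac{ae^\theta+b}{ae^\theta-b}\,d\theta = \int \frac{u+2b}{u}\cdot\frac{du}{u+b} = \int\left(\frac{2}{u}-\frac{1}{u+b}\right)du = 2\ln u - \ln(u+b) + C.$$

Since $u + b = ae^\theta$, we have $\ln(u+b) = \theta + \ln a$, and absorbing the constant $\ln a$ into $C$ gives

$$2\ln(ae^\theta-b) - \theta + C,$$

which matches the book's answer.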
## Mathematical Constraints on Gauge in Maxwellian Electrodynamics Comay, E. ##### Description The structure of classical electrodynamics based on the variational principle together with causality and space-time homogeneity is analyzed. It is proved that in this case the 4-potentials are defined uniquely. On the other hand, the approach where Maxwell equations and the Lorentz law of force are regarded as cornerstones of the theory allows gauge transformations. For this reason, the two theories are not equivalent. A simple example substantiates this conclusion. Quantum physics is linked to the variational principle and it is proved that the same result holds for it. The compatibility of this conclusion with gauge invariance of the Lagrangian density is explained. Several alternative possibilities that may follow this work are pointed out. Comment: 15 pages, 0 figures ##### Keywords Physics - General Physics
# Large $|k|$ behavior of d-bar problems for domains with a smooth boundary

• ### Christian Klein
Université de Bourgogne Franche-Comté, Dijon, France

• ### Johannes Sjöstrand
Université de Bourgogne Franche-Comté, Dijon, France

• ### Nikola Stoilov
Université de Bourgogne Franche-Comté, Dijon, France

## Abstract

In this work we study the large $|k|$ behavior of complex geometric optics solutions to a system of d-bar equations for a potential being the characteristic function of a strictly convex set with smooth boundary, by using almost holomorphic functions. This is an extension of our previous work where we consider sets with real-analytic boundary.
# How are the ecliptic and the galactic disk of the Milky Way oriented relative to each other?

• Dr Bob

The answer is that they are inclined by 60.2 degrees. Here's how you get this:

The celestial coordinates of the pole of the ecliptic are (alpha1, delta1) = (18 hours, 90 - 23.4393 degrees) = (270 degrees, 66.6 degrees). The number 23.4393 is the inclination of the earth's axis to the ecliptic (also called "the obliquity of the ecliptic").

According to the Observer's Handbook, the celestial coordinates of the north pole of the Milky Way are (alpha2, delta2) = (12 h 51 m, 27 deg 8 min) = (192.8 degrees, 27.1 degrees).

To calculate the inclination between the ecliptic and the Milky Way disk (i.e., the galactic plane), we need to calculate the distance between the ecliptic pole and the galactic pole. This is done with the following spherical-trigonometry formula for the angle between two points on a sphere:

angle = arccos(sin(delta1) * sin(delta2) + cos(delta1) * cos(delta2) * cos(alpha1 - alpha2))

If you plug in the above numbers, you get angle = 60.2 degrees. This is the inclination between the ecliptic and the galactic plane.

• Jim: This is one of the best answers I've ever seen!

• Dr. Bob has the right answer: the ecliptic is inclined at an angle of 60.19-62.6 degrees relative to the galactic plane.

• Brant: The galactic equator is highly inclined to the ecliptic, something like 70 degrees.

• I agree with the above answer. Highly inclined. It looks like about 70 or 80 degrees if you just glance at a star chart.
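A quick numeric check of Dr Bob's spherical-trigonometry formula (a sketch; the pole coordinates are the ones quoted in the answer above):

```python
import math

# Ecliptic north pole in equatorial coordinates (from the answer above)
alpha1 = 270.0              # right ascension, degrees (18 hours)
delta1 = 90.0 - 23.4393     # declination, degrees

# Galactic north pole (12 h 51 m, +27 deg 8 min), converted to degrees
alpha2 = (12 + 51 / 60) * 15  # 1 hour of right ascension = 15 degrees
delta2 = 27 + 8 / 60

# Angular separation between the poles = inclination between the two planes
a1, d1, a2, d2 = map(math.radians, (alpha1, delta1, alpha2, delta2))
angle = math.degrees(math.acos(
    math.sin(d1) * math.sin(d2)
    + math.cos(d1) * math.cos(d2) * math.cos(a1 - a2)
))

print(f"Inclination of ecliptic to galactic plane: {angle:.1f} degrees")  # ~60.2
```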
# Is this a correct way to prove by induction?

I understand that most people use the inductive hypothesis, but I find that counterintuitive. Is the proof below correct? In particular, I am concerned with my use of $n$ in (I); is the reason people use another variable, e.g. $k$, conceptual, or does the use of $n$ create an error in my proof?

\begin{align*} P(0) \land [P(k) \Rightarrow P(k+1)] \implies P(n) \tag{AI} \\ \end{align*}

\begin{align*} \text{When }n = 2, \ \ 2 + 6 + 10 + . . . + (4n - 2) &= 2n^2 \tag{B} \\ 2 + 6 &= 2 \cdot 2^2 \\ 8 &= 8 \\ \end{align*}

$$\begin{pmatrix} 2 + 6 + 10 + ... + (4n-2) = 2n^2 \\ \Big\Downarrow \\ 2 + 6 + 10 + ... + (4n-2) + (4(n+1)-2) = 2(n+1)^2 \\ \end{pmatrix} \tag{I}$$

\begin{align*} \Big\Updownarrow \end{align*}

\begin{align*} 4(n+1)-2 &= 2(n+1)^2 -2n^2 \\ 4n+4-2 &= 2n^2+4n+2 - 2n^2 \\ 4n+2 &= 4n+2 \\ \end{align*}

$$\text{B} \land \text{I} \land \text{AI} \implies 2 + 6 + 10 + . . . + (4n - 2) = 2n^2 \text{ for } n > 1 \ \ \square$$

Is this alternate solution correct? I am confident in my reasoning, but am unsure if it is a valid mathematical argument. Pairing the $1$st with the $n$th term, the $2$nd with the $(n-1)$th term, etc., yields $\mathbf{\frac{n}{2}}$ pairs:

\begin{align*} 2 + (4n-2) \ \ + \ \ 6 + (4(n-1)-2) \ \ + \ \ 10 + (4(n-2)-2) \ \ + \ \ ... &= 2n^2 \\ 2 + (4n-2) \ \ + \ \ 6 + (4n-6) \ \ + \ \ 10 + (4n-10) \ \ + \ \ ... &= 2n^2 \\ 4n \ \ + \ \ 4n \ \ + \ \ 4n \ \ + \ \ ... &= 2n^2 \\ 4n \cdot \mathbf{\frac{n}{2}} &= 2n^2 \\ 2n^2 &= 2n^2 \\ &\ \square \\\ \\ \end{align*}

• Yes, your proof seems fine to me, but it would be much quicker to use the identity for $n^2$. Sep 28 '17 at 12:52
• @Shaun: You mean $n^2 = 1 + 3 + 5 + ... + 2n-1$? How do you prove that without induction? – Zaz Sep 28 '17 at 12:55
• Yes. I don't know, @Zaz; that's a separate question. Sep 28 '17 at 12:56
• Here's a proof without induction of that sum (see the answer section): math.stackexchange.com/questions/1666075/… Sep 28 '17 at 13:04
• Your method is a special case of telescopic induction, e.g. see this answer. Sep 28 '17 at 14:16

The second proof (and to some extent the first) uses the informal "$\dots$" notation. This should probably be seen as a shorthand, a more visual way of handling summations; formally one should use $\sum$-notation. The problem with the last one is that you visually rearrange the terms, which is awkward to express in $\sum$-notation.
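For what it's worth, the pairing idea can be checked in closed form without worrying about whether $n$ is even (the "$\frac{n}{2}$ pairs" picture silently assumes it is); this is a standard computation, not part of the original post:

$$\sum_{k=1}^{n}(4k-2) = 4\sum_{k=1}^{n}k - 2n = 4\cdot\frac{n(n+1)}{2} - 2n = 2n^2 + 2n - 2n = 2n^2.$$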
# Math Help - Help with proving probability equations

1. ## Help with proving probability equations

Hey, if anyone has time to give me some guidance on how to prove these I would be so grateful, thank you:

i) P(A∩B) ≥ P(A) + P(B) − 1,
ii) P(A∆B) = P(A) + P(B) − 2P(A∩B)

2. ## Re: Help with proving probability equations

Originally Posted by jennyk
i) P(A∩B) ≥ P(A) + P(B) − 1,
ii) P(A∆B) = P(A) + P(B) − 2P(A∩B)

$1\ge\mathcal{P}(A\cup B)=\mathcal{P}(A)+\mathcal{P}(B)-\mathcal{P}(A\cap B)$

3. ## Re: Help with proving probability equations

Thanks, but I don't follow/understand this. Maybe I'm hopeless, but something more step by step would be great.

4. ## Re: Help with proving probability equations

Originally Posted by jennyk
Thanks, but I don't follow/understand this. Maybe I'm hopeless, but something more step by step would be great.

You can't solve $1\ge a+b-c$ for $c\ge\ ?$

If you cannot, then you need help at a far deeper level than you can get anywhere online.
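Spelling out the steps the hint compresses, since the thread stops short (standard manipulations): from inclusion-exclusion and $\mathcal{P}(A\cup B)\le 1$,

$$\mathcal{P}(A\cap B)=\mathcal{P}(A)+\mathcal{P}(B)-\mathcal{P}(A\cup B)\ \ge\ \mathcal{P}(A)+\mathcal{P}(B)-1,$$

which is (i). For (ii), the symmetric difference is $A\Delta B=(A\cup B)\setminus(A\cap B)$ with $A\cap B\subseteq A\cup B$, so

$$\mathcal{P}(A\Delta B)=\mathcal{P}(A\cup B)-\mathcal{P}(A\cap B)=\mathcal{P}(A)+\mathcal{P}(B)-2\,\mathcal{P}(A\cap B).$$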
First we attach the healthcareai R package to make its functions available. If your package version is less than 2.0, none of the code here will work. You can check the package version with packageVersion("healthcareai"), and you can get the latest stable version by running install.packages("healthcareai"). If you have v1.X code that you want to use with the new version of the package, check out the Transitioning vignette.

library(healthcareai)
# > healthcareai version 2.5.0
# > Please visit https://docs.healthcare.ai for full documentation and vignettes. Join the community at https://healthcare-ai.slack.com

healthcareai comes with a built-in dataset documenting diabetes among adult Pima females. Once you attach the package, the dataset is available in the variable pima_diabetes. Let's take a look at the data with the str function. There are 768 records in 10 variables including one identifier column, several nominal variables, and substantial missingness (represented in R by NA).

str(pima_diabetes)
# > tibble [768 × 10] (S3: tbl_df/tbl/data.frame)
# > $ patient_id : int [1:768] 1 2 3 4 5 6 7 8 9 10 ...
# > $ pregnancies : int [1:768] 6 1 8 1 0 5 3 10 2 8 ...
# > $ plasma_glucose: int [1:768] 148 85 183 89 137 116 78 115 197 125 ...
# > $ diastolic_bp : int [1:768] 72 66 64 66 40 74 50 NA 70 96 ...
# > $ skinfold : int [1:768] 35 29 NA 23 35 NA 32 NA 45 NA ...
# > $ insulin : int [1:768] NA NA NA 94 168 NA 88 NA 543 NA ...
# > $ weight_class : chr [1:768] "obese" "overweight" "normal" "overweight" ...
# > $ pedigree : num [1:768] 0.627 0.351 0.672 0.167 2.288 ...
# > $ age : int [1:768] 50 31 32 21 33 30 26 29 53 54 ...
# > $ diabetes : chr [1:768] "Y" "N" "Y" "N" ...

# Easy Machine Learning

If you don't want to fuss with details any more than necessary, machine_learn is the function for you. It makes it as easy as possible to implement machine learning models by putting all the details in the background so that you don't have to worry about them. Of course it might be wise to worry about them, and we'll get to how to do that further down, but for now, you can automatically take care of problems in the data, do basic feature engineering, and tune multiple machine learning models using cross validation with machine_learn.

machine_learn always gets the name of the data frame, then any columns that should not be used by the model (uninformative columns, such as IDs), then the variable to be predicted with outcome =. If you want machine_learn to run faster, you can have that—at the expense of a bit of predictive power—by setting its tune argument to FALSE.

quick_models <- machine_learn(pima_diabetes, patient_id, outcome = diabetes)
# > Training new data prep recipe...
# > Variable(s) ignored in prep_data won't be used to tune models: patient_id
# >
# > diabetes looks categorical, so training classification algorithms.
# >
# > After data processing, models are being trained on 12 features with 768 observations.
# > Based on n_folds = 5 and hyperparameter settings, the following number of models will be trained: 50 rf's, 50 xgb's, and 100 glm's
# > Training with cross validation: Random Forest
# > Training with cross validation: eXtreme Gradient Boosting
# > Training with cross validation: glmnet
# >
# > *** Models successfully trained. The model object contains the training data minus ignored ID columns. ***
# > *** If there was PHI in training data, normal PHI protocols apply to the model object.
***

machine_learn has told us that it has created a recipe for data preparation (this allows us to do exactly the same data cleaning and feature engineering when you want predictions on a new dataset), is ignoring patient_id when tuning models as we told it to, is training classification algorithms because the outcome variable diabetes is categorical, and has executed cross validation for three machine learning models: random forests, XGBoost, and regularized regression. Let's see what the models look like.

quick_models
# > Algorithms Trained: Random Forest, eXtreme Gradient Boosting, and glmnet
# > Model Name: diabetes
# > Target: diabetes
# > Class: Classification
# > Performance Metric: AUROC
# > Number of Observations: 768
# > Number of Features: 12
# > Models Trained: 2020-08-05 09:09:29
# >
# > Models tuned via 5-fold cross validation over 9 combinations of hyperparameter values.
# > Best model: Random Forest
# > AUPR = 0.71, AUROC = 0.85
# > Optimal hyperparameter values:
# > mtry = 5
# > splitrule = extratrees
# > min.node.size = 20

Everything looks as expected, and the best model is a random forest that achieves performance of AUROC = 0.85. Not bad for one line of code.

Now that we have our models, we can make predictions using the predict function. If you provide a new data frame to predict it will make predictions on the new data; otherwise, it will make predictions on the training data.

predictions <- predict(quick_models)
predictions
# > "predicted_diabetes" predicted by Random Forest last trained: 2020-08-05 09:09:29
# > Performance in training: AUROC = 0.85
# > # A tibble: 768 x 11
# > diabetes predicted_diabe… patient_id pregnancies plasma_glucose diastolic_bp
# > * <fct> <dbl> <int> <int> <int> <int>
# > 1 Y 0.678 1 6 148 72
# > 2 N 0.153 2 1 85 66
# > 3 Y 0.460 3 8 183 64
# > 4 N 0.00927 4 1 89 66
# > 5 Y 0.566 5 0 137 40
# > # … with 763 more rows, and 5 more variables: skinfold <int>, insulin <int>,
# > # weight_class <chr>, pedigree <dbl>, age <int>

We get a message about when the model was trained and how well it performed in training, and we get back a data frame that looks sort of like the original, but has a new column predicted_diabetes that contains the model-generated probability each individual has diabetes, and contains changes that were made preparing the data for model training, e.g. missingness has been filled in and weight_class has been split into a series of "dummy" variables.

We can plot how effectively the model is able to separate diabetic from non-diabetic individuals by calling the plot function on the output of predict.

plot(predictions)

If you want outcome-class predictions in addition to predicted probabilities, the outcome_groups argument accomplishes that. If it is TRUE the overall accuracy of predictions is maximized. If it is a number, it represents the relative cost of a false-negative to a false-positive outcome. The example below says that one false negative is as bad as two false positives. If you want risk groups instead, see the risk_groups argument.

quick_models %>%
  predict(outcome_groups = 2) %>%
  plot()

# Data Profiling

It is always a good idea to be aware of where there are missing values in data. The missingness function helps with that. In addition to looking for values R sees as missing, it looks for other values that might represent missing, such as "NULL", and issues a warning if it finds any.
Like many healthcareai functions, it has a plot method, so you can inspect the results more quickly and intuitively by passing the output to plot.

missingness(pima_diabetes) %>%
  plot()

It's good that we don't have any missingness in our ID or outcome columns. We'll see how missingness in predictors is addressed further down.

# Data Preparation

To get an honest picture of how well a model performs (and an accurate estimate of how well it will perform on yet-unseen data), it is wise to hide a small portion of observations from model training and assess model performance on this "validation" or "test" dataset. In fact, healthcareai does this automatically and repeatedly under the hood, so it's not strictly necessary, but it's still a good idea. The split_train_test function simplifies this, and it ensures the test dataset has proportionally similar characteristics to the training dataset. By default, 80% of observations are used for training; that proportion can be adjusted with the p parameter. The seed parameter controls randomness so that you can get the same split every time you run the code if you want strict reproducibility.

split_data <- split_train_test(d = pima_diabetes,
                               outcome = diabetes,
                               p = .9,
                               seed = 84105)

split_data contains two data frames, named train and test.

One of the major workhorse functions in healthcareai is prep_data. It is called under-the-hood by machine_learn, so you don't have to worry about these details if you don't want to, but eventually you'll want to customize how your data is prepared; this is where you do that. The helpfile ?prep_data describes what the function does and how it can be customized. Here, let's customize preparation to scale and center numeric variables and avoid collapsing rare factor levels into "other".

The first arguments to prep_data are the same as those to machine_learn: data frame, ignored columns, and the outcome column. Then we can specify prep details.

prepped_training_data <- prep_data(split_data$train, patient_id, outcome = diabetes,
                                   center = TRUE, scale = TRUE,
                                   collapse_rare_factors = FALSE)
# > Training new data prep recipe...

The "recipe" that the above message refers to is a set of instructions for how to transform a dataset the way we just transformed our training data. Any machine learning that we do (within healthcareai) on prepped_training_data will retain that recipe and apply it before making predictions on new data. That means that when you have models making predictions in production, you don't have to figure out how to transform the data or worry about encountering missing data or new category levels.

# Model Training

machine_learn takes care of data preparation and model training for you, but if you want more precise control, tune_models and flash_models are the model-training functions you're looking for. They differ in that tune_models searches over hyperparameters to optimize model performance, while flash_models trains models at set hyperparameter values. So, tune_models produces better models but takes longer (approaching 10x longer at default settings).

Let's tune all three available models: random forests ("RF"), regularized regression (i.e. lasso and ridge, "GLM"), and gradient-boosted decision trees (i.e. XGBoost, "XGB"). To optimize model performance, let's crank tune_depth up a little from its default value of ten. That will tune the models over more combinations of hyperparameter values in the search for the best model.
This will increase training time, so be cautious with it at first, but for this modest-sized dataset the entire process takes less than a minute to complete on a laptop. Let's also select "PR" as our model metric. That optimizes for area under the precision-recall curve rather than the default of area under the receiver operating characteristic curve ("ROC"). This is usually a good idea when one outcome category is much more common than the other.

models <- tune_models(d = prepped_training_data,
                      outcome = diabetes,
                      tune_depth = 25,
                      metric = "PR")
# > Variable(s) ignored in prep_data won't be used to tune models: patient_id
# >
# > diabetes looks categorical, so training classification algorithms.
# >
# > After data processing, models are being trained on 13 features with 692 observations.
# > Based on n_folds = 5 and hyperparameter settings, the following number of models will be trained: 125 rf's, 125 xgb's, and 250 glm's
# > Training with cross validation: Random Forest
# > Training with cross validation: eXtreme Gradient Boosting
# > Training with cross validation: glmnet
# >
# > *** Models successfully trained. The model object contains the training data minus ignored ID columns. ***
# > *** If there was PHI in training data, normal PHI protocols apply to the model object. ***

You can compare performance across models with evaluate.

evaluate(models, all_models = TRUE)
# > # A tibble: 3 x 3
# > model AUPR AUROC
# > <chr> <dbl> <dbl>
# > 1 Random Forest 0.703 0.842
# > 2 glmnet 0.688 0.836
# > 3 eXtreme Gradient Boosting 0.687 0.820

For more detail, you can examine how models perform across hyperparameters by plotting the model object. Here we plot only the best model's performance over hyperparameters by extracting it by name. It looks like extratrees is a superior split rule for this model.

models["Random Forest"] %>%
  plot()

## Faster Model Training

If you're feeling the need for speed, flash_models is the function for you. It uses fixed sets of hyperparameter values to train the models, so you still get a model customized to your data, but without burning the electricity and time to precisely optimize all the details. Here we'll use models = "RF" to train only a random forest.

If you want to train a model on fixed hyperparameter values, but you want to choose those values, you can pass them to the hyperparameters argument of tune_models. Run get_hyperparameter_defaults() to see the default values and get a list you can customize.

untuned_rf <- flash_models(d = prepped_training_data,
                           outcome = diabetes,
                           models = "RF",
                           metric = "PR")
# > Variable(s) ignored in prep_data won't be used to tune models: patient_id
# >
# > diabetes looks categorical, so training classification algorithms.
# >
# > After data processing, models are being trained on 13 features with 692 observations.
# > Based on n_folds = 5 and hyperparameter settings, the following number of models will be trained: 5 rf's
# > Training at fixed values: Random Forest
# >
# > *** Models successfully trained. The model object contains the training data minus ignored ID columns. ***
# > *** If there was PHI in training data, normal PHI protocols apply to the model object. ***

# Model Interpretation

## Interpret

If you trained a GLM model, you can extract model coefficients from it with the interpret function. These are coefficient estimates from a regularized logistic or linear regression model. If you didn't scale your predictors (not scaling is the default in prep_data), these will be in natural units (e.g.
in the plot below, a unit increase in plasma glucose corresponds to an expected increase in the log-odds of diabetes of just over one). Importantly, natural units mean that you can't interpret the size of the coefficients as the importance of the predictors. To get that interpretation, scale your features during data preparation by calling prep_data with scale = TRUE and then running flash_models or tune_models. In this plot, the low value of weight_class_normal signifies that people with normal weight are less likely to have diabetes. Similarly, plasma glucose is associated with increased risk of diabetes after accounting for other variables.

interpret(models) %>%
  plot()
# > Warning in interpret(models): Interpreting glmnet model, but Random Forest
# > performed best in cross-validation and will be used to make predictions. To use
# > the glmnet model for predictions, extract it with x['glmnet'].

## Variable Importance

Tree-based methods such as random forests and boosted decision trees can't provide coefficients the way regularized regression models can, but they can provide information about how important each feature (aka predictor, aka variable) is for making accurate predictions. You can see these "variable importances" by calling get_variable_importance on your model object. Like interpret and many other functions in healthcareai, you can plot the output of get_variable_importance with a simple plot call.

get_variable_importance(models) %>%
  plot()

## Explore

The explore function reveals how a model makes its predictions. It takes the most important features in a model and uses a variety of "counterfactual" observations across those features to see what predictions the model would make at various combinations of the features. To see the effect of more features, adjust the n_use argument to plot; for different features, specify x_var and color_var.

explore(models) %>%
  plot()
# > With 4 varying features and n_use = 2, using median to aggregate predicted outcomes across age and pregnancies. You could turn n_use up to see the impact of more features.

# Prediction

predict will automatically use the best-performing model from training (evaluated out-of-fold in cross validation). If no new data is passed to predict, it will return out-of-fold predictions from training. The predicted probabilities appear in the predicted_diabetes column.

predict(models)
# > "predicted_diabetes" predicted by Random Forest last trained: 2020-08-05 09:10:10
# > Performance in training: AUPR = 0.7
# > # A tibble: 692 x 11
# > diabetes predicted_diabe… patient_id pregnancies plasma_glucose diastolic_bp
# > * <fct> <dbl> <int> <int> <int> <int>
# > 1 Y 0.691 1 6 148 72
# > 2 N 0.142 2 1 85 66
# > 3 Y 0.432 3 8 183 64
# > 4 N 0.0219 4 1 89 66
# > 5 Y 0.534 5 0 137 40
# > # … with 687 more rows, and 5 more variables: skinfold <int>, insulin <int>,
# > # weight_class <chr>, pedigree <dbl>, age <int>

To get predictions on a new dataset, pass the new data to predict, and it will automatically be prepared based on the recipe generated on the training data. We can plot the predictions to see how well our model is doing, and we see that it's separating diabetic from non-diabetic individuals pretty well, although there are a fair number of non-diabetics with high predicted probabilities of diabetes. This may be due to optimizing for precision-recall, or it may indicate pre-diabetic patients. Above, we saw how to make outcome-class predictions.
Here, we make risk-group predictions, defining four risk groups (low, moderate, high, and extreme) containing 30%, 40%, 20%, and 10% of patients, respectively.

test_predictions <-
  predict(models,
          split_data$test,
          risk_groups = c(low = 30, moderate = 40, high = 20, extreme = 10)
  )
# > Prepping data based on provided recipe

plot(test_predictions)

Everything we have done above happens "in memory". It's all within one R session, so there's no need to save anything to disk or load anything back into R. Putting a machine learning model in production typically means moving the model into a production environment. To do that, save the model with the save_models function.

save_models(models, file = "my_models.RDS")

The above code will store the models object with all its metadata in the my_models.RDS file in the working directory, which you can identify with getwd(). You can move that file to any other directory or machine, even across operating systems, and pull it back into R with the load_models function. The only tricky thing here is that you have to point load_models to the directory the model file is in. If you don't provide a filepath, i.e. call load_models(), you'll get a dialog box from which you can choose your model file. Otherwise, you can provide load_models an absolute path to the file, e.g. load_models("C:/Users/user.name/Documents/diabetes/my_models.RDS"), or a path relative to your working directory, which again you can find with getwd(), e.g. load_models("data/my_models.RDS"). If you put the models in the same directory as your R script or project, you can load the models without any file path.

models <- load_models("my_models.RDS")

That will reestablish the models object in your R session. You can confirm this by clicking on the "Environment" tab in RStudio or running ls() to list all objects in your R session.

# A Regression Example

All the examples above have been classification tasks, predicting a yes/no outcome. Here's an example of a full regression modeling pipeline on a silly problem: predicting individuals' ages. The code is very similar to classification.

regression_models <- machine_learn(pima_diabetes, patient_id, outcome = age)
# > Training new data prep recipe...
# > Variable(s) ignored in prep_data won't be used to tune models: patient_id
# >
# > age looks numeric, so training regression algorithms.
# >
# > After data processing, models are being trained on 14 features with 768 observations.
# > Based on n_folds = 5 and hyperparameter settings, the following number of models will be trained: 50 rf's, 50 xgb's, and 100 glm's
# > Training with cross validation: Random Forest
# > Training with cross validation: eXtreme Gradient Boosting
# > Training with cross validation: glmnet
# >
# > *** Models successfully trained. The model object contains the training data minus ignored ID columns. ***
# > *** If there was PHI in training data, normal PHI protocols apply to the model object. ***

summary(regression_models)
# > Models trained: 2020-08-05 09:10:29
# >
# > Models tuned via 5-fold cross validation over 10 combinations of hyperparameter values.
# > Best performance: RMSE = 9.1, MAE = 6.5, Rsquared = 0.41
# > By Random Forest with hyperparameters:
# > mtry = 4
# > splitrule = variance
# > min.node.size = 17
# >
# > Out-of-fold performance of all trained models:
# >
# > $Random Forest
# > # A tibble: 10 x 9
# > mtry splitrule min.node.size RMSE Rsquared MAE RMSESD RsquaredSD MAESD
# > <int> <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# > 1 4 variance 17 9.07 0.410 6.53 0.655 0.0283 0.276
# > 2 3 variance 4 9.08 0.412 6.59 0.694 0.0283 0.305
# > 3 5 variance 7 9.09 0.404 6.53 0.601 0.0246 0.235
# > 4 4 extratrees 17 9.16 0.417 6.66 0.814 0.0408 0.441
# > 5 7 variance 2 9.20 0.391 6.62 0.587 0.0209 0.212
# > # … with 5 more rows
# >
# > $eXtreme Gradient Boosting
# > # A tibble: 10 x 13
# > eta max_depth gamma colsample_bytree min_child_weight subsample nrounds
# > <dbl> <int> <dbl> <dbl> <dbl> <dbl> <int>
# > 1 0.0291 4 5.73 0.730 0.248 0.763 570
# > 2 0.176 7 9.76 0.518 3.53 0.744 46
# > 3 0.0990 2 0.350 0.624 2.33 0.526 626
# > 4 0.423 5 6.79 0.643 3.80 0.940 69
# > 5 0.432 5 6.23 0.505 14.2 0.356 30
# > # … with 5 more rows, and 6 more variables: RMSE <dbl>, Rsquared <dbl>,
# > # MAE <dbl>, RMSESD <dbl>, RsquaredSD <dbl>, MAESD <dbl>
# >
# > $glmnet
# > # A tibble: 20 x 8
# > alpha lambda RMSE Rsquared MAE RMSESD RsquaredSD MAESD
# > <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# > 1 0 0.00128 9.37 0.377 6.74 0.578 0.0798 0.358
# > 2 0 0.00367 9.37 0.377 6.74 0.578 0.0798 0.358
# > 3 0 0.00896 9.37 0.377 6.74 0.578 0.0798 0.358
# > 4 0 0.0218 9.37 0.377 6.74 0.578 0.0798 0.358
# > 5 0 0.0367 9.37 0.377 6.74 0.578 0.0798 0.358
# > # … with 15 more rows

Let's make a prediction on a hypothetical new patient. Note that the model handles missingness in insulin and a new category level in weight_class without a problem (but warns about it).

new_patient <- data.frame(
  pregnancies = 0,
  plasma_glucose = 80,
  diastolic_bp = 55,
  skinfold = 24,
  insulin = NA,
  weight_class = "???",
  pedigree = .2,
  diabetes = "N")

predict(regression_models, new_patient)
# > Warning in ready_with_prep(object, newdata, mi): The following variables(s) had the following value(s) in predict that were not observed in training.
# > weight_class: ???
# > Prepping data based on provided recipe
# > "predicted_age" predicted by Random Forest last trained: 2020-08-05 09:10:29
# > Performance in training: RMSE = 9.07
# > # A tibble: 1 x 9
# > predicted_age pregnancies plasma_glucose diastolic_bp skinfold insulin
# > * <dbl> <dbl> <dbl> <dbl> <dbl> <lgl>
# > 1 23.7 0 80 55 24 NA
# > # … with 3 more variables: weight_class <chr>, pedigree <dbl>, diabetes <chr>
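For reference, the headline regression metrics in the summaries above have their standard definitions (added here for convenience; with observed values $y_i$ and predictions $\hat{y}_i$ over $n$ observations):

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|$$

RMSE penalizes large errors more heavily than MAE, which is why the two can rank models differently.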
Add input to previous value of output at each trigger

Description

The Triggered Add component adds the input $u$ to the previous value of the output $y$ when the trigger port has a rising edge. The output is initialized to $y_0$. If the reset port is enabled with use reset, then the output is reset to either set (if enabled via use set) or to $y_0$, whenever the reset port has a rising edge.

Connections

| Name | Description | Modelica ID |
|---|---|---|
| $u$ | Integer input signal | u |
| $y$ | Integer output signal | y |
| trigger | Boolean input | trigger |
| reset | Boolean input | reset |
| set | Integer input | set |

Parameters

| Name | Default | Units | Description | Modelica ID |
|---|---|---|---|---|
| use reset | false | | True (checked) enables the reset port | use_reset |
| use set | false | | True (checked) enables the set port | use_set |
| $y_0$ | 0 | | Initial and reset value of $y$ if the set port is not used | y_start |

Modelica Standard Library

The component described in this topic is from the Modelica Standard Library. The original documentation includes author and copyright information.
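To make the edge-triggered semantics concrete, here is a minimal behavioral sketch in Python (not part of the Modelica Standard Library; the function name and the list-based signal representation are invented for illustration). It accumulates u into y on each rising edge of trigger, and restores y0 (or the set value, when that port is used) on a rising edge of reset:

def triggered_add(u, trigger, reset=None, set_values=None, y0=0):
    # u, trigger, reset, set_values are equal-length sequences of samples.
    y = y0
    out = []
    prev_trigger = False
    prev_reset = False
    for i, trig in enumerate(trigger):
        r = bool(reset[i]) if reset is not None else False
        if r and not prev_reset:
            # Rising edge on reset: back to set (if provided) or y0.
            # Assumption: reset takes priority over a simultaneous trigger.
            y = set_values[i] if set_values is not None else y0
        elif trig and not prev_trigger:
            # Rising edge on trigger: add the current input to the output.
            y = y + u[i]
        prev_trigger = trig
        prev_reset = r
        out.append(y)
    return out

# u is added only where trigger rises (samples 1 and 4):
print(triggered_add([5, 5, 5, 2, 2], [False, True, True, False, True]))
# -> [0, 5, 5, 5, 7]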
Part 3 Bonus Material

No-Nonsense Quantum Mechanics Exercises

How many dimensions do the following mathematical arenas for a system consisting of N free particles have:

• everyday space
• configuration space
• phase space
• Hilbert space?

• everyday space: 3.
• configuration space: 3N. The configuration space of a single free particle is 3-dimensional. Thus, for N particles we glue N different 3-dimensional configuration spaces together, and the resulting space is 3N-dimensional.
• phase space: 6N. The phase space of a free particle is 6-dimensional: 3 dimensions to specify the location and 3 to specify the momentum. Thus for N particles we get a 6N-dimensional phase space.
• Hilbert space: $\infty$.

Let's assume the configuration space of one object is a line and the configuration space of a second object is a circle. What does the total configuration space look like?

We have to glue a copy of the circle above each point of the line. What we end up with this way is a cylinder.

What's the difference between the Schrödinger picture and the Heisenberg picture?

Both pictures are formulations in Hilbert space, but:

• In the Schrödinger picture, the states evolve in time while the operators do not change.
• In the Heisenberg picture, the operators evolve in time and the states do not change.

Which formulation of Quantum Mechanics is the best one?

Objectively they are all equivalent and therefore equally good. However, you are of course free to pick your personal favorite.
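As a compact reminder of the distinction between the two pictures (standard textbook relations for a time-independent Hamiltonian $H$, not part of the original exercise set), both are generated by the same time-evolution operator:

$$|\psi(t)\rangle_S = e^{-iHt/\hbar}\,|\psi(0)\rangle, \qquad A_H(t) = e^{iHt/\hbar}\, A \, e^{-iHt/\hbar},$$

so that expectation values agree in both pictures: $\langle\psi(t)|A|\psi(t)\rangle = \langle\psi(0)|A_H(t)|\psi(0)\rangle$.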
## anonymous one year ago

What is the approximate measure of ∠R? 17.75° 43.6° 46.4° 72.25°

1. anonymous: @HWBUSTER00 lol i was a bit slower than u! ;-D but here it is!!!
2. anonymous: @danielgarcia413 here's the other question! ((;
3. Nnesha: $\sin \theta = \frac{\text{opposite}}{\text{hypotenuse}} \qquad \cos \theta = \frac{\text{adjacent}}{\text{hypotenuse}} \qquad \tan \theta = \frac{\text{opposite}}{\text{adjacent}}$
4. anonymous: $\tan R = \frac{21}{20}$
5. Nnesha: [drawing of the right triangle omitted] To find the measure of angle R you need the values of two sides.
6. anonymous: $R = \tan^{-1} \frac{21}{20}$
7. anonymous: I need to get used to this lol
8. anonymous: ugh so confusing to me! I came out with about 46.4, is that correct? @danielgarcia413
9. anonymous: or did i totally mess up :(
10. anonymous: That's correct :)
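A quick numerical check of the accepted answer (a sketch; the side lengths 21 and 20 are taken from the thread's triangle):

import math

# R is the angle whose tangent is opposite/adjacent = 21/20
R = math.degrees(math.atan(21 / 20))
print(round(R, 1))  # 46.4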
# Cross ratios on cube complexes and length-spectrum rigidity

Speaker: Elia Fioravanti
Affiliation: University of Oxford
Date: Fri, 2019-02-01, 09:30 - 10:30
Location: MPIM Lecture Hall

Cross ratios naturally arise on boundaries of negatively curved spaces and are a valuable tool in their study. If one however slightly relaxes the curvature assumption, simply requiring it to be *non-positive*, things tend to get more complicated. Even the mere definition of a cross ratio becomes a more delicate matter. Restricting to the context of CAT(0) cube complexes $X$, we observe that most issues disappear if one considers the $\ell^1$ metric on $X$, rather than the CAT(0) metric. We obtain a canonical cross ratio on the horoboundary of the $\ell^1$ metric, usually known as the Roller boundary. This allows us to develop a general framework relating cross-ratio-preserving boundary maps to the study of length-spectrum rigidity for (not necessarily compact) cube complexes. As an application, we show that essential, non-elementary actions on irreducible CAT(0) cube complexes with no free faces are completely determined by their marked $\ell^1$-length spectrum. One might wish to relax the no-free-faces assumption, and this is indeed possible for cubulations of hyperbolic groups, where essentiality and hyperplane-essentiality actually suffice (and are necessary). We also show that such cubulations of hyperbolic groups inject into the space of invariant cross ratios on the Gromov boundary that are continuous at a co-meagre subset. Joint work with J. Beyrer (Heidelberg) and M. Incerti-Medici (UZH).
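For orientation, the prototypical example behind the first sentence of the abstract (a standard definition, not specific to the cube-complex setting of the talk): on the boundary $\partial\mathbb{H}^2 = \mathbb{R}\cup\{\infty\}$ of the hyperbolic plane, the classical cross ratio of four distinct boundary points is

$$[a, b; c, d] = \frac{(a-c)(b-d)}{(a-d)(b-c)},$$

and it is invariant under the isometry group acting by Möbius transformations.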
## Random posts

A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.

toroidalet

### The snakes are running for your lives!

I had a post about volcanoes, but the neutrinos wouldn't let me do it.

Saka

### Give it up for Galileo!

snake lake bake cake fake take rake Jake wake make sake yake pake gake hake kake zake xake vake nake

Gamedziner

### omae wa mou shindeiru

Saka wrote: snake lake bake cake fake take rake Jake wake make sake yake pake gake hake kake zake xake vake nake

shake stake drake flake brake

gameoflifemaniac

### Re: omae wa mou shindeiru

Gamedziner wrote:
Saka wrote: snake lake bake cake fake take rake Jake wake make sake yake pake gake hake kake zake xake vake nake
shake stake drake flake brake

aake

gameoflifemaniac

### wat is floccinaucinihilipilification

I had a dream that a p100 Cordership gun was found, but in origami...

Gamedziner

### GOLDBACH CONJECTURE PROOF

4 = 2+2. For all greater even numbers, consider the "weak" Goldbach conjecture: for all odd n greater than or equal to 9, n can be expressed as three odd primes. Let these primes be p₁, p₂, and p₃. Let the even number be x. Now consider the number x+p₃. Since x is even and p₃ is odd, x+p₃ is also odd. Since we are only considering even numbers greater than or equal to 6, and the smallest odd prime is 3, x+p₃ is an odd number greater than or equal to 9. x+p₃ is an odd number greater than or equal to 9, and n is any odd number greater than or equal to 9. As x+p₃ is a proper subset of n, all properties that apply to all n also apply to x+p₃. Thus, x+p₃ can now be expressed as the sum of three primes, or x+p₃=p₁+p₂+p₃. Subtract p₃ from both sides. x=p₁+p₂. Thus, all even numbers greater than 2 can be expressed as the sum of exactly two primes.

gameoflifemaniac

### Re: Random posts

Created my first page!
Cis-boat and cap

A for awesome

### Re: GOLDBACH CONJECTURE PROOF

Gamedziner wrote: Thus, x+p₃ can now be expressed as the sum of three primes, or x+p₃=p₁+p₂+p₃.

Nothing says that the p₃ on the left has to be the same as the one on the right. It would be more properly expressed as x+p₃=p₁+p₂+p₄, from which no proof of the Goldbach Conjecture can be easily derived.

Gamedziner

### Re: GOLDBACH CONJECTURE PROOF

A for awesome wrote: Nothing says that the p₃ on the left has to be the same as the one on the right. It would be more properly expressed as x+p₃=p₁+p₂+p₄, from which no proof of the Goldbach Conjecture can be easily derived.

You can add any real number to another real number. I just chose p₃ as the number to add to x (on purpose). I intended to add the same variable p₃ that is in the "p₁+p₂+p₃." Just because p₃ is a variable doesn't mean I can't add it to another variable.

A for awesome

### Re: GOLDBACH CONJECTURE PROOF

Gamedziner wrote: You can add any real number to another real number. I just chose p₃ as the number to add to x (on purpose). I intended to add the same variable p₃ that is in the "p₁+p₂+p₃." Just because p₃ is a variable doesn't mean I can't add it to another variable.

Sorry, I misunderstood your argument at first. There still seem to be some logical holes, and if it were that easy to prove, someone would already have done it.

Gamedziner

### Re: GOLDBACH CONJECTURE

You got me there. There is a MUCH weaker version that I can prove, though: all whole numbers greater than 3 can be expressed as the sum of 7 primes or fewer. Values less than 20 have explicit solutions. Odd numbers greater than 9 are known to have solutions with 3 primes. For even numbers x (all x being greater than or equal to 10), x+1 and x-1 can be expressed as the sum of 3 primes.

x-1 = p1+p2+p3
x+1 = p4+p5+p6

Add them to get:

2x = p1+p2+p3+p4+p5+p6

That is a number greater than or equal to 20 and divisible by 4. All such numbers can be expressed with at most 6 primes.
The remaining numbers can be expressed with at most 7 primes, as adding the prime 2 gives the remaining numbers: 2x+2 = p1+p2+p3+p4+p5+p6+p7 (p7 = 2)

calcyman

### Re: GOLDBACH CONJECTURE

Gamedziner wrote: You got me there. There is a MUCH weaker version that I can prove, though: all whole numbers greater than 3 can be expressed as the sum of 7 primes or fewer. ...

You can reduce '7 or fewer' to 'exactly 4': every sufficiently large even integer is 3 + n, where n is a sufficiently large odd number, therefore expressible as the sum of three primes. Likewise, odd numbers are of the form 2 + n.

Saka

### Re: Random posts

Hey calcyman, what would be your reaction if I messed up the Catagolue logo into:
1. Calcyland (using the picture of you I found on GitLab)
2. Catagolue but Javanese

Gamedziner

### Re: GOLDBACH CONJECTURE

calcyman wrote: You can reduce '7 or fewer' to 'exactly 4': every sufficiently large even integer is 3 + n, where n is a sufficiently large odd number, therefore expressible as the sum of three primes. Likewise, odd numbers are of the form 2 + n.

Noice.

calcyman

### Re: Random posts

Saka wrote: Hey calcyman, what would be your reaction if I messed up the Catagolue logo into: 1. Calcyland (using the picture of you I found on GitLab) 2. Catagolue but Javanese

Catagolue in Javanese would be somewhat amusing, given that it's written in Java.
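None of the above is a proof, of course, but calcyman's "3 + n" reduction is easy to check empirically for small numbers. A quick sketch (not from the thread; the helper names are invented):

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def three_prime_sum(n):
    # Return three primes summing to odd n >= 9 (weak Goldbach), or None.
    primes = [p for p in range(2, n) if is_prime(p)]
    for p in primes:
        for q in primes:
            r = n - p - q
            if is_prime(r):
                return p, q, r
    return None

# Every even x >= 12 should be 3 + (three primes summing to x - 3),
# i.e. a sum of exactly four primes.
for x in range(12, 1000, 2):
    assert three_prime_sum(x - 3) is not None, x
print("checked: every even number in [12, 1000) is 3 plus three primes")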
dani

### Re: Random posts

The first sample soup of this linear growth also produces the only cis block and longhook

Saka

### Re: Random posts

What if some user on these forums is John Conway in disguise

gameoflifemaniac

### Re: Random posts

Natural trans-boat with tail:

Code: Select all
x = 16, y = 16, rule = B3/S23
4b2obo4bo$obo2b2o2b6o$2ob2obobobo2b2o$3bob2obob2o3bo$obo4bo3b2obo$2o3b o5b2o$4b5obo2b3o$bobo3b2obo2bobo$2bo3bo3bobob2o$b2o2bobob3o2bo$b4o2bo 2bob2obo$3obo2b2o2bo3bo$o3bob8o$3ob2ob2ob2o2b2o$2o8b4obo$3o2bob6o!

Natural boat-tie-ship:

Code: Select all
x = 16, y = 16, rule = B3/S23
bo2b3o4b4o$ob3ob4o4bo$2bob2ob4o3b2o$b2o2b4o2b2obo$4obob2obo2bo$3o2b2ob 4obobo$b2o2bo2bobo$ob2ob3o2b4obo$2o2bobo2bob2ob2o$b4obo3b4obo$o4bo3bo 2b2obo$2obobob4obob2o$2b3o2b2o3b2o$2ob2o3bob5o$3o3bobob2o3bo$4obobo2b 2obobo!

Code: Select all
x = 16, y = 16, rule = B3/S23
b3o2b2obo3b2o$b4o2b2o3bob2o$3o2b3o4b2o$2o2b2o3b4o2bo$o2bo4bob2o2b2o$2b ob2obobo2bobo$5o3b4ob3o$2ob3o2b3obo$o2bobob2ob3obo$b2obo3b2ob4o$b2o2b 3obo2bobo$2b2o2b3ob2ob2o$bob5ob2obobo$2b2obo4bo$6b3o3bo$2b2obo5b3o!

Natural block-laying switch engine in a 24x16 box:

Code: Select all
x = 24, y = 16, rule = B3/S23
3b2o5bo2b2ob3ob3o$3b5ob3o2b3o5b2o$bob3o2bo2bo3bo4bob2o$2obo2bob4o2b7ob o$2o6bob2o2bobob2o$6bob5o4bo$obobo5bob3o4b3o$obobob2o4bo2b4o4bo$obobo 2bo2b3o3b2ob2obo$o2b2ob3o2bo8bobo$bob2obob4o3b3o2b3o$o2b2o2bobo4bob2ob 3o$2o4b2o2bo2b3o2b4o$o4bo2b3o4b2ob5o$b2obob6o3b2obobob2o$b3o2b3ob2o5bo 2bo!

Natural twit:

Code: Select all
x = 16, y = 16, rule = B3/S23
bo3b2o2b2ob3o$2obobobobob5o$2bo3b2obo2b4o$2bobobobob2ob3o$2o5bo2b4o$ob 5obobo2bobo$ob3obobo3bo$2bob2o2bobob2o$o7b2o3bobo$b3o9b2o$2o5b5o2bo$ob o2b4o2b2ob2o$o2b2ob9o$4b2o bo4b2obo$6obo2bo2b2o$2bobo2bo2b4obo!

Code: Select all
x = 16, y = 16, rule = B3/S23
bobo3b2o2b2obo$3ob6o3bobo$2b2o7b3o$o3b3o3b2o$2b2ob2o3bob2o$ob2o2bo2bo 3bo$b2ob6obobo$bob2obo2bobo$bo2bo3bobo3bo$2b3obo3bob4o$2o3bobob4o2bo$ 3obo2bo2bo$o2bo2b2ob2ob3o$3b3o3b3obobo$ob2obobo2b5o$ob3o3b2obo3bo!

Natural snake:

Code: Select all
x = 16, y = 16, rule = B3/S23
b2obobo4bob2o$o4bob5o3bo$3ob2ob7obo$3obo2bo5b2o$2b2obob3o2bo2bo$o2b2ob ob2o5bo$3bo2bo4b2obo$2bo2b2ob2o3b2o$bo2bob2ob2obob2o$4b2o5b3obo$b2ob2o bob3obobo$b2o2b3obo4b2o$o2bobobob2o4bo$6o2b2ob2o2bo$bo4bobo6bo$4o2bo7b o!

Code: Select all
x = 16, y = 16, rule = B3/S23
o2b2o2b4ob2obo$b3ob2o2b4o$2ob2ob2o5bo$o3bobo3bo3bo$2bo2bob2o3b2obo$2o 3b2obob2o2b2o$3b4o3b2o2b2o$obob2obob2obo2bo$4b4o2bo2b3o$obob2o7bo$3b2o b6o2b2o$o2bo2bobo4bo$obo2bobobo4b2o$5o2b2obobo$obob4o2b3o2bo$b2obo3bo 4b3o!

Natural hat:

Code: Select all
x = 16, y = 16, rule = B3/S23
3ob2o2bobo2bo$6bobobo3b2o$ob3o3bob2o$bo5bobob3o$2b3o3bo3b4o$b2o2b5o4bo$obobo4bob2o$o3b2o2b7o$bo4b4o2b2o$2obo2b3obobo$bob3obobo3bobo$b2o10b2o$bo2bob2ob2o4bo$4o3b2o3b3o$3bob2o3b2o$o2b4o2b2o2b3o!
Code: Select all
x = 16, y = 16, rule = B3/S23
o4b4o4b3o$4o2b2obobob2o$2o2b3obobob3o$2o2b4o2bobo$2bobobob4o2b2o$obobo 3b4ob2o$o4b2o3bobob2o$2o2bobo2b2o2bobo$ob2o2b3o2b5o$2obo2b2obobob2o$ob obo4b2o3bo$2obobo3b3o2b2o$2b2obo2bo5bo$5o5b2ob3o$o5bo3bo$obobobob2ob3o bo!

Code: Select all
x = 16, y = 16, rule = B3/S23
6b2obobobobo$3o2b2ob2ob2o$4o5bo2b4o$b2obo2b2obo2b3o$6bo2bo2bob2o$6ob3o bobo$2o4bo2b2ob2o$o2b2obo7bo$2ob2obob3ob3o$2bo2bob3obob3o$3obob2obo3bo $b5o2bobo2b2o$3b5o5bobo$ob2o3b5o3bo$bo4b3o$bo2b3o2bobo3bo!

toroidalet

### Re: Pasta

Normies are a thing now. Don't question it.

Justice man, justice man, does whatever a justice can (not much)

Justice man, episode 1 leaked script: (scene: a courthouse, case Napoleon III vs France)
JM: You claim your case is that your birth certificate lists you as Napoleon III and as a result you are the rightful heir of Napoleon's vast empire?
N3: No, my case is that I was arrested for saying something that sounded like an insult in Etruscan. I was not in France, I was in Russia and I have never claimed anything about my ownership of France.
Prosecutor: This alibi is invalid, as the island of Russia was sunk by the Mayans in 1883 when they were trying to stop a war with Italy but sent their nuclear weapons to the wrong country.
N3: Your facts are wrong. The Mayans were completely obliterated by Japan in 1882, so they couldn't have nuked Russia. Also, are you qualified to be a justice?
JM: Back to the case, you both make valid points, but I must rule in favor of the prosecution. (I hate my job.)
N3: No, I meant the city, Russia, located in Arabia.
[hours more of court scenes, but you get the picture.]
(I don't know how a court works)

toroidalet

### I will subpoena all the piroplasmosis!!!

gameoflifemaniac wrote: natural [stuff]

As the Queen of England, I have the power to subpoena you for such insolent behavior. (DISCLAIMER: I might not actually be the queen of England (I may even be the wrong gender (female)) but England is my city so hard that I might as well be. (The authorities didn't agree.))

Saka

### Re: Random posts

I used to think Elon Musk was a type of deer.

Gamedziner

### Re: Elon Musk

Four-dimensional loop AKA Hyperloop

Gamedziner

### The Big Bang theory is false.
Proposition: The Big Bang theory is false.

Proof: Let us suppose that the Big Bang theory is true. Then, at the start of the universe, there was a massive explosion. Additionally, all energy in the universe was concentrated into a very small volume. Black holes are known to exist with smaller concentrations of energy. Thus, the energy at the start of the universe, being much more compact than an ordinary black hole, immediately became a supermassive black hole. The black hole does not allow anything, including light, to escape. Thus, no explosion, including a massive, universe-creating explosion, could occur, contradicting our supposition. ∎

Saka

### Re: The Big Bang theory is false.

Gamedziner wrote: Bang bakso bang

oi this thread ist 4 rendoum posts not smartee posts
# All Questions

224 questions

### Consistency in off-topic descriptions?
Why is this question, which seems to not involve Calculus or Linear Algebra, not considered basic, while idknuttin's questions, which seem to involve Calculus or Linear Algebra, are considered basic? ...

### What to do if my answer gets downvoted?
I want to know what basic practices are followed on Stack Exchange if an answer gets downvoted and some other user has provided a satisfactory answer. In this case, should the answer be deleted or left ...

### How to deal with tags that can have various meanings?
There is a tag for replication, which in quant usually refers to replication of a portfolio. It can however also refer to replication of a scientific study, as it is used in the question Is there a ...

### What would be an example of a question on-topic here that is not off-topic elsewhere in SE?
Ever since Eco SE got in beta, I don't see much point in keeping Quant SE around other than having another place to ask math questions when you reach your limit on Math SE if Eco SE graduates from ...

There seem to be a lot of questions in the form of "Where can I find ..... type of data". On Stack Overflow, questions like this are usually community wiki ...

### Too basic for quant stackexchange
Forgive me if this is posted elsewhere. I've seen multiple questions put on hold because they are deemed too basic for this site, which is intended for professionals. I think, as long as the question is ...

### Rank deleted after suspension
My account was recently unfairly suspended after a personal dispute with a moderator BobJansen. I received all reputation back but my profile is no longer in the 2014 year ranking: http://...

### Technical Analysis Questions Should not be Allowed on this Site
A statement, not a question. As I understand it, this site is for practitioners and students of quant finance, and technical analysis might be regarded as the antithesis of quant finance. We view ...

### Are there badges that do not show up on badges under the profile?
I recently got a badge "Tumbleweed" which is not listed under "choose next badge to track". How can one know which badges exist in total?

### Is a question on corporate finance ok in the Quantitative Finance forum?
I would like to post a question on the basic principle of Assets = Liabilities + Equity, and how to finance sales growth. Is this ok on this forum?

### Voting irregularities
I noted that a user has been suspended for voting irregularities. What does it mean and why did it happen? Moreover, given his reputation growth and skill (at least, I saw him answer pretty ...

### What to do if OP forever offline?
There are various questions on quant.SE which were asked by some user who then never goes online again. It is probably not surprising that no answer will hence ever be accepted. I understand that ...

### Is there a community wiki for the Quantitative Finance forum of StackExchange?
If so, how could I access it? Some post mentioned it but I couldn't find a link to it in the FAQ or on the first page of the forum.

### Rules for self-study questions
Our friends on Stats have a special tag for self-study questions. The tag can be used to indicate the type of question and lays down some rules. The benefit of ...

### Willingness to consider a revision to the current "question format" guidelines?
I recently read through the "What topics can I ask about here?" page and find the current guidelines impractical in the sense that they do not really gear towards an optimization of potentially ...

### What is the "Cleanup" badge?
I wondered what the Cleanup badge is. The explanation of this badge says: First rollback; English is not my mother tongue and I do not understand what it means. Moreover, can you explain ...

### Self-answering questions - acceptable on Quant or not?
Coming from SO, and majoring in quantitative finance, I have spent the past day reading through some of the awesome questions/answers in this community. I feel as though there are a couple of canonical ...

### How to improve the site stats?
As the user @SRKX underlined in the answer, the percentage of marked answers is pretty low (about 78%) and, for this reason, the quantitative finance site is still in beta. I noted that a lot of ...

### Let's get critical: Feb 2015 Site Self-Evaluation
We all love Quantitative Finance Stack Exchange, but there is a whole world of people out there who need answers to their questions and don't even know that this site exists. When they arrive from ...

### The off-topic dialog should allow us to tag a question as belonging on another site
When a user asks a question that is clearly better suited to a programming site like Stack Overflow, the natural thing to do is to flag it as "Off topic because" and then "belongs on another site". ...

### How to get this badge?
Publicist Badge: "Shared a link to a question that was visited by 1000 unique IP addresses". What exactly does this mean? Should I share a question 1000 times and get 1000 views from it, or is it ...

### Why not leave the questions, answers and votes in place, attributed to anonymous, when a user is deleted
I have noticed that when a user is deleted the votes and answers seem to get deleted; I'm not sure about the questions. However, I have seen that at other SE sites they retain the questions and answers, delegated to ...

### How long should the "beta" tag remain?
I recommend removing the "beta" tag from "Quantitative Finance beta". "Beta" makes the site less sincere and professional. The "beta" version has remained for some years now; I don't see why it should be ...

### Let's get critical: Aug 2014 Site Self-Evaluation
We all love Quantitative Finance Stack Exchange, but there is a whole world of people out there who need answers to their questions and don't even know that this site exists. When they arrive from ...

### Could someone add CQG for questions as a tag, as I can't
Please could someone add CQG for questions as a tag, as I can't. Can someone add the tag CQG at stackoverflow? I don't have enough reputation yet to add the tag. See question below https://quant....

I flagged these two questions because they contain links to complete copies of copyrighted work. Pricing Principle 1 Arbitrage free implies complete market? I could easily edit and remove the links, ...

### How to draw attention after the end of a bounty?
The bounty has ended on one of my questions and it is still unanswered (one response, but not accurate, downvoted to -1). How do I draw more attention to this question? Is there a feature to draw more ...

### Are there any active reviewers or moderators on this site?
A week ago I suggested an excerpt for the returns tag wiki. As of this writing it has not been approved.
Are all reviewers and moderators MIA?

### Multibounty feature?
I would like to add a bounty to a question to make it more attractive. But I can't, because someone else set a bounty. Conceptually I see no problem if I add my bounty to the question to reflect ...

### Are "Find me X" questions on topic here?
We are seeing a few questions of the type "Where can I find X", where X can be free market data, a paper, a trading system, or a how-to tutorial. Should we make a decision as to what types of ...

### Posts and comment deletion
Do you guys think it is okay to delete posts and answers after they are answered or commented on? I have seen comments related to a reply that does not exist. Comments that relate to ...

### Can a question on trading strategies be made on topic if tied to standard finance theory?
I'm relatively new to this site, but my understanding is that "trading strategies" are not a good topic for the site. Some reasons include "too localized" (will help only one person), or too broad; ...

### Could private traders benefit from quant knowledge?
This morning I ended up spending an hour or so browsing through some quant-finance related blogs. Not surprisingly, most of them were dealing with active trading strategies rather than pricing ...

### Is there a way to upload a file with the posting?
I hope this is a good question for meta.

### Guys, why don't you vote?
After several months of more or less active quant-SE participation I am quite confused by the voting behaviour. Activity is present - this is more or less reflected in the number of views. Still, even ...

### Why are privilege levels at quant.SE not the standard public beta levels?
At Quantitative Finance, some privileges are bestowed at levels that are atypical for public beta sites: "access to moderator tools" at 1000 (usually 2000), "edit questions and answers" at 500 (...

### Dollar signs and $\LaTeX$ indicators?
How do I make dollar signs be used as dollar signs rather than $\LaTeX$ indicators? Here is an example post: https://quant.stackexchange.com/questions/11232/arbitrage-opportunities-in-foreign-...

### The background color for questions that match a user's tags is very hard to see, could we make it more visible?
Could we get a different background color for questions that match a tag the user is watching? The current color is light blue and is very hard to pick up on a white background. Could we try a darker ...

### Reviving the weekly topic challenge?
Why not revive the weekly topic challenge? The general question could be asked directly on the forum (contrary to being posted on meta). At the end of the week all the relevant answers could be ...

### What is the site lacking that hinders graduation?
Quantitative Finance has been in beta for about three years now. Despite not all stats being excellent, I would like to argue in favour of graduating the site. Question to the relevant SE staff - What ...

### Let's get critical: Mar 2014 Site Self-Evaluation
We all love Quantitative Finance Stack Exchange, but there is a whole world of people out there who need answers to their questions and don't even know that this site exists. When they arrive from ...

### Area 51 Proposal: Finance (including Behavioural, Corporate, Public Finance)
This question and Chris W.
Rea's comment motivated me to create Finance (including Behavioural, Corporate, Public Finance). Please allow me to advert to it here and advise me if this affronts or ...

### Are regulatory requirements on topic?
Is it on topic to ask about requirements such as liquidity, market risk or valuation as specified by the banking authorities?

### Should there be Quantitative Finance Book Guides and Lists?
This question on a definitive C++ book guide and list was very helpful to me. Could we have the same for Quantitative Finance topics as community wiki? Before you vote to close or dismiss the ...

### It is not fun anymore to contribute and read most questions
Why has this question (https://quant.stackexchange.com/questions/9796/how-to-make-money, or this https://quant.stackexchange.com/questions/9790/fastest-backtesting-engine [OP has been repeatedly asked ...

### Let's get critical: Dec 2013 Site Self-Evaluation
We all love Quantitative Finance Stack Exchange, but there is a whole world of people out there who need answers to their questions and don't even know that this site exists. When they arrive from ...

### Are macroeconomic questions allowed here?
Are macroeconomic questions allowed here? If not, which SE site would be the best for them?

### Colours of visited / not visited questions
Is it just me, or is the colouring of question titles confusing? I'm used to thinking of brighter (more 'crisp') links as not visited, while a faded colour would mean visited. On quant.stackexchange it is ...
# Tag Info

The division is conventionally made at the boundary between where stars end their lives as white dwarf stars and where more massive stars will end their lives in core collapse supernovae. The boundary is set both empirically, by observations of white dwarfs in star clusters, where their initial masses can be estimated, and also using theoretical models. ...

There is a consistent definition, but it involves a couple of arbitrary thresholds, so I doubt you'd consider it rigorous. The construction $X \gg Y$ means that the ratio $\frac{Y}{X}$ is small enough that subleading terms in the series expansion for $f\bigl(\frac{Y}{X}\bigr) - f(0)$ can be neglected, where $f$ is some relevant function involved in the ...

Our physics prof once put it informally this way: a state is a set of variables describing a system which does not include anything about its history. The set of variables (position, velocity vector) describes the state of a point mass in classical mechanics, while the path by which the point mass got from point $A$ to point $B$ is not part of the state.

"A state of rest" is a relative term. Relative means measured in comparison to the things around it. When you sit in a train and sip from a cup of coffee, you can do so because the cup is still relative to you, even though both of you might be hurtling through the countryside at 200 km/h. For most experiments, objects can be considered "at rest" if they ...

The definition of a state of a system, in physics, strongly depends on the area of physics one is dealing with, and it comes as one of the initial definitions once such an underlying theory has to be set up. In particular one has: classical mechanics: a state of a system is a point $m\in TQ$ (or equivalently $T^*Q$) in the tangent bundle of the configuration ...

A c-number basically means a 'classical' number, which is basically any quantity that is not a quantum operator acting on elements of the Hilbert space of states of a quantum system. It is meant to distinguish from q-numbers, or 'quantum' numbers, which are quantum operators. See http://wikipedia.org/wiki/C-number and the reference therein.

Informally speaking, a complete description of a physical system is referred to as its state. Completeness of the state of a system means that it provides all the possible information about the system, i.e. everything that can possibly be known about the system has to be contained in the specification of its state. Every physical theory is ultimately based ...

Different people have different definitions of dynamical phase transition. At present, a widely accepted one is by Heyl et al. See their original paper Dynamical Quantum Phase Transitions in the Transverse Field Ising Model. Basically, it means some quantity (e.g., the fidelity) as a function of time is non-analytic at some critical times. See the cusps ...

Your question is not specific to inflation, and really applies to any case where a bosonic quantum field behaves semiclassically due to macroscopically large occupation numbers. One very simple example of this is the Stark effect in quantum mechanics, where a Hydrogen atom is placed in a uniform electric field. The atom is treated as a quantum mechanical ...

In a very mathematical sense, more often than not a mode refers to an eigenvector of a linear equation.
Consider the coupled springs problem

$$\frac{d^2}{dt^2} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -2\omega_0^2 & \omega_0^2 \\ \omega_0^2 & -\omega_0^2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} ...$$

Of course the name implies that time is involved somehow. People talk about dynamical thermal and quantum phase transitions; in one case you rapidly change the temperature, while in the other you rapidly change a state-defining parameter (say pressure or field, etc.). We will consider a thermal PT. Now what does it mean to be rapid? Let us consider a 2nd-order phase transition as ...

Roughly, an additive quantum number is the log of a corresponding multiplicative quantum number. Mathematically, this comes from the difference between the representations of a group and a Lie algebra; in the former, the natural operation is multiplication and in the latter it is addition. Many quantum numbers we care about come from continuous symmetry ...

A Hilbert space $\cal H$ is complete, which means that every Cauchy sequence of vectors admits a limit in the space itself. Under this hypothesis there exist Hilbert bases, also known as complete orthonormal systems of vectors in $\cal H$. A set of vectors $\{\psi_i\}_{i\in I}\subset \cal H$ is called an orthonormal system if $\langle \psi_i |\psi_j \rangle = ...$

It isn't necessary to introduce the effective potential in orbital mechanics, but it is really useful. Let's say we have a particle moving in a central gravitational potential. Newton's laws give you a vector equation of motion $$m \ddot{\vec{x}} = - \nabla U$$ where $U = - G M m /r$. In a general coordinate system this is a ...

For any operator $\hat A$ an eigenstate $|\psi\rangle$ is one for which $$\hat A|\psi\rangle=\lambda |\psi\rangle$$ where $\lambda$ is a constant, called the eigenvalue of that state. If $\hat A$ is an observable, then $\lambda$ will be real. A stationary state is an eigenstate of the Hamiltonian $\hat H$ (the energy operator). It is called ...

Kinetic energy of two free particles is additive: the total energy is just the sum of the individual energies: $$K=K_1+K_2$$ Another example is charge: the charge of a multiparticle system is the sum of the individual charges. Parity is multiplicative: the parity of a two-particle system is the product of the parities of the individual particles: $$\Pi=\Pi_1\Pi_2$$

I agree with your definition of locality (probably not surprising :)). Causality, I would say, is the statement that an event in the future should not affect an event in the past. We can formulate this in classical physics terms. Causality is necessary in order for there to be a well defined initial value problem: I should be able to choose an initial time ...

Hertz should be understood to mean "periodic events per second". In your case the events are the display of frames, so yes, you would be perfectly justified in using $\mathrm{Hz}$. That said, as several commenters have already mentioned, the unit "Hertz" does not specify what kind of periodic behavior is being counted. So the author(s) or speaker must make ...

Revolving around the sun is equivalent to free fall around the sun, so the revolution allows you not to 'feel' the sun's gravity. The rotation of the earth is something that can be measured: (i) a centrifugal force which is a small offset on gravity, and (ii) the Coriolis force. Both of these are small effects, so they can often be ignored for laboratory ...

The term c-number is used informally in the way Meer Ashwinkumar describes.
As far as I know, it doesn't have a widely promulgated formal definition. However, there is a formal definition for c-number that agrees with the way the term is used in many cases, including the case you're asking about. As you may know, you can think of the operator formalism for ...

In relativity (both special and general) one of the key quantities is the proper length given by $$ds^2 = g_{\alpha\beta}dx^\alpha dx^\beta \tag{1}$$ where $g_{\alpha\beta}$ is the metric tensor. The physical significance of this is that if we have a small displacement in spacetime $(dx^0, dx^1, dx^2, dx^3)$ then $ds$ is the total distance moved. You ...

The effective in effective action has nothing to do with the effective in effective field theory. An effective field theory is a low-energy theory (described by some action $S_{eff}$ and cut-off $\Lambda_{eff}$) of some given higher energy theory (with action $S$ and cut-off $\Lambda\gg\Lambda_{eff}$). The effective action $\Gamma$, which is sometimes ...

Like Wikipedia says: "Moment is a combination of a physical quantity and a distance." This 'physical quantity' could be various things. To take the examples you mention: moment of momentum (commonly known as angular momentum) is expressed as $\vec{L}=\vec{r}\times m\vec{v}$, and is a measure of the rotational momentum of an object around some axis. Moment ...

The term rest mass is a poor one because it implies it's the mass measured in the rest frame. But photons have no rest frame, and indeed any particle subject to some form of confinement has a $\Delta p\gt 0$, so its rest frame is somewhat poorly defined. The modern term is invariant mass, which is simply the mass in the equation for the total energy: $E^2$ ...

Both concepts are mathematical in character and they ultimately describe the same characteristics or situations. "Invariance" is the more technical word because it says "what has to be equal to what" for us to say that the symmetry exists. In particular, "invariance under a symmetry transformation" means that an object, like the action $S$, has the same ...

I think the answer is no. It generally precedes some approximation method with a bounded error, but there are so many approximation methods in physics -- some rigorous, some nonrigorous -- that it's way too presumptuous to give it a rigorous definition. Generally, it means one of several things: If $a\ll b$, expanding in powers of $\frac{a}{b}$ is ...

I have a feeling this has been answered before, but basically it is because H and He dominate the elemental abundances in the universe. When we look at what else there is, we are guided by the elements we can ascertain are present in the photospheres of stars. It just so happens that the most prominent signatures are those due to atomic and ionic absorption ...

As Qmechanic pointed out in the comments, you're mixing Einstein and abstract index notation a bit. To make things absolutely clear, we will use early Latin indices for abstract indices $(abc)$ and Greek indices for component indices $(\mu\nu\rho)$, and will always indicate Einstein summation explicitly. First and foremost, an abstract index is nothing more ...

Just a coincidence. There are too many quantities and not enough letters. It probably does make a difference that the fields in which these two equations exist (materials science and electromagnetism) are well enough separated that you typically won't see them both in the same papers or textbooks; if that weren't the case, people would start using different ...
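The coupled-springs answer above states that modes are eigenvectors; that claim is easy to illustrate numerically (a sketch using the matrix quoted in that answer, with $\omega_0 = 1$):

import numpy as np

w0 = 1.0
A = np.array([[-2 * w0**2, w0**2],
              [w0**2, -w0**2]])

# x(t) = v * cos(w t) solves x'' = A x exactly when A v = -w^2 v,
# so each eigenvector of A is a normal mode.
evals, evecs = np.linalg.eigh(A)  # A is symmetric
mode_freqs = np.sqrt(-evals)

print(mode_freqs)  # the two normal-mode frequencies
print(evecs)       # columns are the corresponding mode shapes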
5 Regular functions are well defined (finite). Irregular functions tend to infinity in the limit of approaching some point. In this case, all the Bessel functions tend to zero (except $j_0$, which goes to 1) as you approach the origin. The Neumann functions diverge (to $-\infty$) as you approach the origin from the positive side.
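As a quick numerical check of that limiting behaviour (our illustration, not part of the quoted answer), SciPy's spherical Bessel routines can be evaluated close to the origin; the sample point 1e-8 is an arbitrary choice:

```python
# j_0 -> 1 and j_n -> 0 (n >= 1) at the origin, while the spherical
# Neumann functions y_n diverge to -infinity as x -> 0+.
from scipy.special import spherical_jn, spherical_yn

x = 1e-8
print(spherical_jn(0, x))  # ~1.0, since j_0(x) = sin(x)/x
print(spherical_jn(1, x))  # ~0.0, regular at the origin
print(spherical_yn(0, x))  # ~-1e8, since y_0(x) = -cos(x)/x
```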
# PennyLane v0.18 released

The latest release of PennyLane is now out and available for everyone to use. It comes with many new additions, including an in-built high-performance simulator, the ability to perform backpropagation using PyTorch, improved quantum-aware optimization techniques, the ability to define custom quantum gradient rules, and much more.

This release is particularly special, including new features and bug fixes from Code Together 🙌 and unitaryHACK ⚛️ contributors. If you're not sure what Code Together is all about, be sure to check out our blogpost.

## Integrated high-performance simulator ⚡

The high-performance lightning.qubit simulator is now shipping 📦 for everyone who upgrades or installs the latest version of PennyLane. The lightning.qubit device is a fast state-vector simulator equipped with the efficient adjoint method for differentiating quantum circuits; check out the plugin release notes for more details!

To use this new simulator in PennyLane, it can be instantiated as follows:

```python
dev = qml.device("lightning.qubit", wires=10)
```

Once created, the lightning.qubit device can be used with any existing QNode. In addition to a performant C++ backend, lightning.qubit comes with support for differentiating quantum circuits via the adjoint method. This can lead to significant speed improvements compared to default.qubit when shots=None.

## Backpropagation using PyTorch

The built-in PennyLane simulator default.qubit now supports backpropagation with PyTorch; simply specify diff_method="backprop" when creating your QNode:

```python
import torch
import pennylane as qml

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(x):
    qml.Rot(x[0], x[1], x[2], wires=0)
    return qml.expval(qml.PauliZ(0))

x = torch.tensor([0.54, 0.1, 0.2], dtype=torch.float64, requires_grad=True)
res = circuit(x)
res.backward()
```

As a result, default.qubit can now use end-to-end classical backpropagation as a means to compute gradients. Using this method, the created QNode is a 'white-box' that is tightly integrated with your PyTorch computation, including TorchScript and GPU support. This is now the default differentiation method when using default.qubit with PyTorch.

Shout out to Slimane Thabet, Esteban Payares, and Arshpreet Singh for this mega contribution from #unitaryHACK.

## RotosolveOptimizer for general parametrized circuits

Quantum-aware optimization techniques have received a huge upgrade in this release. The RotosolveOptimizer can now tackle general parametrized circuits, and is no longer restricted to single-qubit Pauli rotations. 🪐 This includes:

- layers of gates controlled by the same parameter,
- controlled variants of parametrized gates, and
- Hamiltonian time evolution.

This optimization technique is cutting-edge, and straight from recent quantum machine learning research. For more details, see Vidal and Theis, 2018 and Wierichs, Izaac, Wang, Lin 2021, as well as our recent PennyLane demonstration on general parameter-shift rules.

```python
dev = qml.device('default.qubit', wires=3, shots=None)

@qml.qnode(dev)
def cost_function(rot_param, layer_par, crot_param):
    for i, par in enumerate(rot_param):
        qml.RX(par, wires=i)
    for w in dev.wires:
        qml.RX(layer_par, wires=w)
    for i, par in enumerate(crot_param):
        qml.CRY(par, wires=[i, (i+1) % 3])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1) @ qml.PauliZ(2))
```

Note that the eigenvalue spectrum of the gate generator needs to be known to use RotosolveOptimizer for a general gate, and the frequencies it produces are required to be equidistant.

This cost function has one frequency for each of the first RX rotation angles, three frequencies for the layer of RX gates that depend on layer_par, and two frequencies for each of the CRY gate parameters. By providing details regarding the spectrum of these parametrized operators, Rotosolve can then be used to minimize the cost_function:

```python
# Initial parameters
init_param = [ ]

# Numbers of frequencies per parameter
num_freqs = [[1, 1, 1], 3, [2, 2, 2]]

opt = qml.RotosolveOptimizer()
param = init_param.copy()

for step in range(3):
    param, cost, sub_cost = opt.step_and_cost(
        cost_function,
        *param,
        num_freqs=num_freqs,
        full_output=True,
        optimizer="brute",
    )
    print(f"Cost before step: {cost}")
    print(f"Minimization substeps: {np.round(sub_cost, 6)}")
```

```
Cost before step: 0.042008210392535605
Minimization substeps: [-0.230905 -0.863336 -0.980072 -0.980072 -1. -1. -1. ]
Cost before step: -0.999999999068121
Minimization substeps: [-1. -1. -1. -1. -1. -1. -1.]
Cost before step: -1.0
Minimization substeps: [-1. -1. -1. -1. -1. -1. -1.]
```

For usage details, please see the Rotosolve optimizer documentation. Be sure to also check out our Rotosolve tutorial for details behind the theory underpinning the Rotosolve optimization.

## Faster, trainable, Hamiltonian simulations

Variational quantum algorithms are even more powerful in this release, as Hamiltonians are now trainable with respect to their coefficients. Find quantum gradients with respect to Hamiltonians, and train your algorithms over classes of parametrized Hamiltonians.

```python
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(coeffs, param):
    qml.RX(param, wires=0)
    qml.RY(param, wires=0)
    return qml.expval(
        qml.Hamiltonian(coeffs, [qml.PauliX(0), qml.PauliZ(0)], simplify=True)
    )

coeffs = np.array([-0.05, 0.17])
param = np.array(1.7)
```

In addition, Hamiltonians are now natively supported on default.qubit when shots=None, with expectation values automatically computed via fast sparse methods. As the number of terms in the Hamiltonian grows, this can significantly improve the performance of variational quantum eigensolver (VQE) workflows.
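Since coeffs above is a trainable PennyLane numpy array, the gradient with respect to the Hamiltonian coefficients can be taken directly. The call below is a minimal illustration of the feature rather than code from the release notes; argnum=0 simply selects the coeffs argument:

```python
# Differentiate the QNode above with respect to the Hamiltonian coefficients.
grad_fn = qml.grad(circuit, argnum=0)
coeff_grads = grad_fn(coeffs, param)
print(coeff_grads)  # one partial derivative per Hamiltonian term
```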
## Custom quantum gradient transforms

Want to specify your own quantum gradient logic, and explore optimization beyond the parameter-shift rule? Quantum gradient transforms are a specific type of batch transformation. To create a quantum gradient transform, simply write a function that accepts a tape, and returns a batch of tapes to be independently executed on a quantum device, alongside a post-processing function that processes the tape results into the gradient. Supported gradient transforms must be of the following form:

```python
@qml.gradients.gradient_transform
...
```

Various built-in quantum gradient transforms are provided within the qml.gradients module, including qml.gradients.param_shift. Once defined, quantum gradient transforms can be applied directly to QNodes:

```python
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.RX(x, wires=0)
...     qml.CNOT(wires=[0, 1])
...     return qml.expval(qml.PauliZ(0))
>>> qml.gradients.param_shift(circuit)(0.5)
array([[-0.47942554]])
```

Quantum gradient transforms are fully differentiable, allowing higher-order derivatives to be accessed:

```python
>>> qml.grad(qml.gradients.param_shift(circuit))(0.5)
```

## Batch transforms

The ability to define batch transforms has been added via the new @qml.batch_transform decorator. A batch transform is a transform that takes a single tape or QNode as input, and executes multiple tapes or QNodes independently. The results may then be post-processed before being returned. By creating a batch transformation, you can leverage the ability to transform and post-process QNodes, while retaining the ability to

- autodifferentiate your quantum model on hardware,
- evaluate your transformation on all hardware compatible with PennyLane, and
- make use of the ability to submit a single batch of quantum jobs for execution, significantly reducing overall runtime.

In addition, batch transformations are themselves trainable — write a parametrized batch transformation, and then train it to achieve a particular outcome!

For more details, including how to write batch transformations, please see the batch transform decorator documentation. For a primer on quantum transformations, don't forget to read our previous blog post on transformations.
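To make the pattern concrete, here is a minimal sketch of a batch transform written with the new decorator. The transform itself (scaled_param_expectations) and its behaviour are illustrative assumptions for this post, not an API shipped with PennyLane:

```python
import pennylane as qml

@qml.batch_transform
def scaled_param_expectations(tape, scales):
    """Queue one copy of the tape per scale factor, with every gate
    parameter multiplied by that factor, then average the results."""
    params = tape.get_parameters(trainable_only=False)
    tapes = []
    for s in scales:
        new_tape = tape.copy(copy_operations=True)
        new_tape.set_parameters([s * p for p in params], trainable_only=False)
        tapes.append(new_tape)

    def processing_fn(results):
        # 'results' holds one execution result per generated tape.
        return sum(results) / len(results)

    return tapes, processing_fn

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# The transform maps the QNode to a new callable that executes three tapes
# and averages their expectation values: cos(0.15), cos(0.3) and cos(0.6).
averaged = scaled_param_expectations(circuit, [0.5, 1.0, 2.0])
print(averaged(0.3))
```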
## Improvements

In addition to the new features listed above, the release contains a wide array of improvements and optimizations:

- The qml.grouping.group_observables transform is now differentiable.
- A gradient recipe for Hamiltonian coefficients has been added. This makes it possible to compute parameter-shift gradients of these coefficients on devices that natively support Hamiltonians.
- The device test suite has been expanded to cover more qubit operations and observables.

## Breaking changes

As new things are added, outdated features are removed. Here's what will be disappearing in this release:

- Specifying shots=None with qml.sample was previously deprecated. From this release onwards, setting shots=None when sampling will raise an error; this also applies to default.qubit.jax.
- An error is raised during QNode creation when a user requests backpropagation on a device with finite shots.

In addition, several features have been marked for deprecation, and will raise warnings when used. They will be removed in a future release:

- The class qml.Interferometer is deprecated and will be renamed qml.InterferometerUnitary in the upcoming release.
- All optimizers except for Rotosolve and Rotoselect now have a public attribute stepsize. Temporary backward compatibility has been added to support the use of _stepsize for one release cycle. The update_stepsize method is deprecated.

These highlights are just scratching the surface — check out the full release notes for more details.

## Contributors

As always, this release would not have been possible without the hard work of our development team and contributors:

Vishnu Ajith, Akash Narayanan B, Thomas Bromley, Olivia Di Matteo, Sahaj Dhamija, Tanya Garg, Anthony Hayes, Theodor Isacsson, Josh Izaac, Prateek Jain, Ankit Khandelwal, Nathan Killoran, Christina Lee, Ian McLean, Johannes Jakob Meyer, Romain Moyard, Lee James O'Riordan, Esteban Payares, Pratul Saini, Maria Schuld, Arshpreet Singh, Jay Soni, Ingrid Strandberg, Antal Száva, Slimane Thabet, David Wierichs, Vincent Wong.
A propos of nothing one day, I ask Griffin (9 years old at the time, finishing up fourth grade) a question. Me: Griff, imagine you are baking cookies and you need $\frac{3}{4}$ cup of sugar, but you only have a $\frac{1}{2}$ cup measure. How would...
Zurich Open Repository and Archive

Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-58136

Barbour, A. D.; Luczak, M. J. (2012). A law of large numbers approximation for Markov population processes with countably many types. Probability Theory and Related Fields, 153(3-4):727-757.

Abstract

When modelling metapopulation dynamics, the influence of a single patch on the metapopulation depends on the number of individuals in the patch. Since the population size has no natural upper limit, this leads to systems in which there are countably infinitely many possible types of individual. Analogous considerations apply in the transmission of parasitic diseases. In this paper, we prove a law of large numbers for quite general systems of this kind, together with a rather sharp bound on the rate of convergence in an appropriately chosen weighted $\ell_1$ norm.
# Tag Info

139 I have never used Emacs in Windows, but I started to learn and use Emacs only eight or nine months ago and I now use it for most of my work. Learning: First you need to get comfortable with the basics of Emacs, and probably this is what will be your main frustration. For a new user the commands for basic usage can be a pain to learn because they are unlike ...

60 When drafting you benefit from effectively being able to write an outline and edit it. Org-mode is a good tool for this. Since Org-mode is an Emacs mode, you should know the basics of Emacs, such as commands for navigating in buffers and switching between buffers, as described under Learning in A simpleton's guide to (...)TeX workflow with emacs. The ...

43 There are two ways to make RefTeX find your bibliography. I suggest using both approaches for robustness. To make RefTeX recognize your bibliography you can add it to the list reftex-default-bibliography. To do this add the following to your .emacs:

```
;; So that RefTeX finds my bibliography
(setq reftex-default-bibliography '("path/to/bibfile.bib"))
```

and ...

35 Finding tips: It is hard to say what the best tip is for you, especially if you do not give a more detailed description of your workflow. The best way to find this might be to read the manual from cover to cover. Also make sure to read the RefTeX manual, since it is included in AUCTeX and you should use its commands for handling citations and cross-...

32 UPDATE, 11/10/11: I've posted the code at https://github.com/blerner/auc-tikz -- the most recent version is auc-tikz-struct.el (the other files are older experimental versions). I haven't had time to update the code in a while, so if people want to tinker with the code, have at it! It's still rough, but it should sorta work if you'd like to try it out. ...

32 It does have auto-complete powers. (It has more than any one person knows about.) Try, e.g., C-c C-m (for calling macros like \footnote or \ref); type the letter 's' and hit TAB. The rest will become clear. C-c C-e will prompt for starting new environments. And so on. If you use AUCTeX with reftex, try things like C-c [ to prompt you for a ...

32 Note: users on more recent versions of macOS will not be able to follow these instructions due to new restrictions introduced in those versions. See comments for workarounds. It seems that the upgrade wiped the link from your Library (where MacTeX puts your actual TeX distribution) into your /usr/texbin. You can reinstate this link with the following: ln -...

30 This procedure will set up Emacs, AUCTeX, and the Okular viewer to handle integrated forward and inverse search. (These instructions were tested on a Debian system.) Install Emacs. To install Emacs, open up the terminal and type the command:

```
sudo apt-get install emacs
```

Install AUCTeX. Within Emacs, run M-x package-install RET auctex RET. To test for a ...

27 Stick point somewhere near \usepackage{array,colortbl} and two buffers will open up for the array and colortbl packages if you do M-x getpackage, which you can bind to a key of your choice:

```
(defun getpackage ()
  (interactive)
  (search-backward "\\")
  (re-search-forward "usepackage[^{}]*{" nil t)
  (while (looking-at "\\s-*,*\\([a-zA-Z0-9]+\\)")
    (re-search-forward "\...
```

24 You need org-mode 8.3 to do this. From ORG-NEWS:

```
* Version 8.3
[...]
** New features
[...]
*** Export unnumbered headlines
Headlines, for which the property ~UNNUMBERED~ is non-nil, are now exported
without section numbers irrespective of their levels. The property is
inherited by children.
```
For example:

```
(with-temp-buffer (require 'ox-latex) (insert " *...
```

23 This should work (requires AUCTeX, and you first need to enable TeX-fold-mode with C-c C-o C-f or M-x TeX-fold-mode):

```
(defun mg-TeX-fold-brace ()
  "Hide the group in which point currently is located with \"{...}\"."
  (interactive)
  (let ((opening-brace (TeX-find-opening-brace))
        (closing-brace (TeX-find-closing-brace))
        priority ov)
    (if (and ...
```

22 AUCTeX 11.89: Starting from this version of AUCTeX, the option TeX-file-line-error enables by default the file:line:error messages that solve the problem. Thus, from this version you shouldn't run into this kind of problem anymore. I also suggest reverting any change to LaTeX-command-style, in order to be sure to use the default value. See below for ...

21 Org-mode's radio tables are an easy, simple and fast way to create tables within Emacs/AUCTeX. They offer all of the calculation capabilities of the org-mode spreadsheets, which can be very convenient if you need simple, auto-updating data. The source org-table can be placed within a comment environment using the comment package, or after \end{document}. The ...

20 The variable you need to hook into is reftex-cite-format. Somewhere in my Emacs init file, I have this code:

```
(eval-after-load 'reftex-vars
  '(progn
     ;; (also some other reftex-related customizations)
     (setq reftex-cite-format
           '((?\C-m . "\\cite[]{%l}")
             (?f . "\\footcite[][]{%l}")
             (?t . "\\textcite[]{%l}")
             ...
```

20 I use find-file-at-point. It also works for #includes in C, sometimes imports in Python, URLs, etc.

```
(global-set-key (kbd "C-x f") 'find-file-at-point) ;; I hardly ever set the fill-column
```

20 If you use AUCTeX with outline minor mode turned on, you get a series of useful key-bindings, including (C- = Ctrl-):

C-c @ C-n   Move to next heading (at any level)
C-c @ C-p   Move to previous heading (at any level)
C-c @ C-f   Move Forward to next heading at the same level
C-c @ C-b   Move Backward to previous heading at the same level

(A quick look at ...

19 The general problem of finding where a command is defined has no viable solution. Macros can and do change their meaning; a typical example is \\. This simple document

```
\documentclass{article}
\begin{document}
\show\\
{\centering\show\\}
\begin{tabular}{c}
\show\\
\end{tabular}
\end{document}
```

gives the following output in the terminal window: > \\=...

18 I combined some of the links mentioned here; you will find the links in the source comments. This code supports: forward search (Emacs to Evince, via C-c C-v); backward/inverse search (Evince to Emacs, via C-Mouse-1, that is Ctrl + "Left Click" in Evince); path names with spaces; multifile setups (TeX files requested by Evince will be opened if they aren't ...

18 Starting from version 11.88 of AUCTeX, you can add an option to the TeX processor with the file-local variable TeX-command-extra-options:

```
%%% TeX-command-extra-options: "-shell-escape"
```

As explained in the manual, you have to manually make this variable safe as a local variable because of the security holes it can open. Note: this question inspired me to ...

18 If all LaTeX files in this directory will use the same engine, then you can set TeX-engine for all of them using Emacs per-directory local variables. Create a file in this directory named .dir-locals.el with the following contents:

```
((latex-mode (TeX-engine . luatex)))
```

If all LaTeX files in this directory share the same master, then per-directory local ...
17 It may be possible to have Emacs switch major modes depending on the position of point, but this can quickly become computationally intensive and can break workflows (especially those that make heavy use of temporary variables). It would be better to adopt the Org model of source code editing: send the interesting bits to a separate buffer and change the ...

16 This is a very sensible question! You know, there are many tricks to discover which make life easier. My top 3 list:

M-q: Justify the current paragraph (or region if set), with respect to the fill-column variable [couldn't do without it, but it took me weeks to discover].
C-l: Recenter the cursor (one hit puts the current line in the center of the screen, next ...

16 Since Emacs 24.1, bibtex-mode supports biblatex:

```
* BibTeX mode now supports biblatex. Use the variable
  `bibtex-dialect' to select different BibTeX dialects.
```

To use it you can, for example, set it as a file variable in bibliography files by adding the following to their first line:

```
-*- mode:bibtex; bibtex-dialect: biblatex -*-
```

16 To skip the selection of the reference style you have to set the variable reftex-ref-macro-prompt to nil; see the RefTeX manual. To do this you can customize that variable or add the following code to your init file:

```
(setq reftex-ref-macro-prompt nil)
```

It has been reported that this solution to use RefTeX with the cleveref package no longer works with ...

16 AUCTeX has autocompletion mechanisms different from most of the other LaTeX editors. In Emacs, when TeX-latex-mode is activated, the sequence Ctrl-c Ctrl-e (the - means that the second key has to be pressed while holding the first, while the space implies the release of both keys before the next combination) opens the mini-buffer dialog interface at the ...

15 I was having this same problem when I stumbled across this post. I was able to fix it the following way (note: I am using Emacs 23.3.1, AUCTeX 11.86, Ubuntu 11.10, Gnome 3.2.1). Open a .tex file (or make one). I will assume that you are using Emacs 23, using an X window (mine is in Gnome). Go to the menu bar and do: Preview -> Customize -> Browse Options. In ...

15 If you do not need to worry about verbatim or verb usage, then

```
(query-replace-regexp "\\(^\\| *[^\\\\]\\)%.*" "" nil nil)
```

is probably safe (and it does query replace, so you get to say yes or no anyway). Note this removes the entire line if the comment was at the start of the line (as leaving a blank line would make a paragraph). However it does not remove ...

15 Here's what I add to my Preferences.el file (the .emacs equivalent for Aquamacs):

```
(setq LaTeX-verbatim-environments-local '("Verbatim" "lstlisting"))
```

This makes Verbatim and lstlisting behave like verbatim.

14 What are you using? (Linux, Windows, Mac) If you're using Linux, then Okular is probably the easiest PDF viewer to set up synctex forward/backward search with in Emacs. Once you've installed Okular, you can add the following code to your .emacs config file:

```
;; Okular
(setq TeX-view-program-list '(("Okular" "okular --unique %u")))
(add-hook 'LaTeX-mode-hook ...
```
Journal of Convex Analysis 18 (2011), No. 2, 505--511
Copyright Heldermann Verlag 2011

Only Solid Spheres Admit a False Axis of Revolution

Jesus Jerónimo-Castro, Dep. de Matemáticas UNAM, Circuito Ext. Cd. Universitaria, Colonia Copilco el Bajo, México D.F. - C.P. 04510, and: Facultad de Matemáticas, Universidad de Guerrero, México. jeronimo@cimat.mx

Luis Montejano, Instituto de Matemáticas UNAM, Circuito Ext. Cd. Universitaria, Colonia Copilco el Bajo, México D.F. - C.P. 04510, and: Centro de Innovación Matemática, Querétaro, México. luis@matem.unam.mx

Efrén Morales-Amaya, Dep. de Matemáticas UNAM, Circuito Ext. Cd. Universitaria, Colonia Copilco el Bajo, México D.F. - C.P. 04510, and: Facultad de Matemáticas, Universidad de Guerrero, México

Abstract: Let $K\subset \mathbb{R}^{3}$ be a convex body. A point $p_{0}$ is a point of revolution for $K$ if every section of $K$ through $p_{0}$ has an axis of symmetry that passes through $p_{0}$. In particular, every point that lies in an axis of revolution is a point of revolution. A line $L\subset \mathbb{R}^3$ is a \textit{false axis of revolution} if every point of $L$ is a point of revolution for $K$ but $L$ is not an axis of revolution. The purpose of this paper is to prove that only solid spheres admit a false axis of revolution.
## Monday, November 23, 2020

### The problems of public procurement and payment delays: A review of the recent literature

by Sourish Das and Rabia Khatun.

### Introduction

'Public procurement' -- the purchase of goods and services by the state from private enterprise -- tends to be a large part of economic activity in any country. The World Bank estimated that globally, public procurement in 2018 amounted to USD 11 trillion or 12 percent of global GDP (Bosio and Djankov, 2020). In India, these estimates are higher at 30 percent (Khan, 2017), and recent budget announcements suggest that these estimates are likely to increase. Such magnitudes have a large multiplier effect on economic activity and economic growth. But the multiplier effect is dampened by the 'marginal cost of public funds' or MCPF, which is the cost incurred by a rupee of public spending (Kelkar and Shah, 2019).

In an ideal world, public procurement works well, and goods/services that are available in the private market for Rs.1 are purchased for Rs.1 by the government. In the real world, public procurement processes introduce an additional friction, an inefficiency, where the government pays Rs.A when purchasing something worth Rs.1. Every deficiency of public procurement procedures drives up the A. There is a friction in taxation (the MCPF, which Kelkar and Shah (2019) refer to as a cost of Rs.3 upon the economy when the government obtains Rs.1 as taxes). Similarly, there is a friction in contracting-out (the government pays A when obtaining services worth 1). These two come together in shaping the overall effectiveness of government action. A government that wishes to purchase (or contract out) goods/services worth Rs.1 ends up with a true total cost for society of 3A. For example, if procurement frictions mean that the government pays Rs.1.2 for goods worth Rs.1 (i.e. A = 1.2), then the true cost to society of that purchase is 3 x 1.2 = Rs.3.6. On the taxation side, this motivates research on understanding and reducing the MCPF. On the expenditure side, this motivates research on understanding and improving public procurement so as to obtain a reduced value for A.

The conventional processes of government do not produce information about these two elements of inefficiency. Researchers have to create mechanisms through which these estimates can be obtained. For example, there is a widely held perception that delays of payments are a persistent problem in public procurement. Such delays in payment translate into higher costs of doing business for the private enterprises that render services or deliver products to government or public sector enterprises, and raise the MCPF of public procurement. As has been happening elsewhere, the perception of the higher cost of doing business with the public sector is increasingly occupying the public discourse in India as a critical element of what is driving stress in the financial health of the corporate sector.

At present, we have informal estimates about the difficulties faced in public procurement in India. As an example, Sahu (2020) recently estimated the size of the delayed payments from the Union government as totalling Rs.9.5 lakh crore, an estimate that was culled from public sources. The data presented included pending dues to road projects at NHAI, from power generating companies and power grid, in the sugar and fuel ecosystem, food distribution at FCI and to the micro, small and medium enterprises. But beyond such broad, aggregate estimates, there is little that is understood about the mechanics that drive this quantum of delay. What needs to be set right to solve the problem is not well understood. In the present literature, two key features emerge.
One is the issue of late payments by the state. This has become increasingly recognised as a major problem after the financial crisis of 2008 and after the European debt crisis of 2009. Perhaps as a consequence, almost all of the studies are based on data from countries of the EU. A second central concern appears to be the effect of such late payment by governments on the financial health of firms, particularly Small and Medium Enterprises or SMEs. SMEs have been in the policy spotlight over the last decade as a critical base of employment growth. Any factor influencing their financial health has also been highlighted as an important area of reform. SMEs are particularly affected by any adverse impact of payment delays.

In this article, we survey the literature on delays in payments by government and their consequences. We find it useful to classify this literature into two lines of thought about delayed payments in public procurement: (1) these hurt the profits of the private sector and increase the probability of bankruptcy, particularly for smaller businesses; and (2) taken together, such delays have a significant negative impact on economic growth. Additionally, this literature shows pathways for setting up measurement systems that can then be used to regularly monitor the impact of public procurement processes on economic agents and the economy. Four papers appear to be the basis of this understanding: Connell (2014), Checherita et al. (2016), Obeng (2017), and Conti et al. (2020).

Much of the work uses two components to measure late payments: the payment delay and the payment duration. The payment delay is the ratio of the absolute delay in days ($d$) to the agreed contractual period in days ($T$), i.e. payment delay $= d/T$. The payment duration is the agreed contractual period plus the absolute delay, again expressed relative to the contractual period: payment duration $= (T+d)/T = 1 + d/T$. The data for payment delay and payment duration is obtained from Intrum Justitia, a private credit management firm which conducts an annual written survey among several thousand firms in 29 European countries. The survey results are published as the annual European Payment Index Report. Among other statistics, the survey reports the average annual payment duration and the average annual contractual payment period, both of which are further disaggregated into consumer, business-to-business, and public sector debtor terms.

### The impact of delayed payments in public procurement on the health of firms

Connell (2014) attempts to estimate the economic effects of late payments that firms face in some European countries (Greece, Italy, Portugal, Spain) regarding delays in payments in Business to Business (B2B) and Government to Business (G2B) transactions, with two questions:

1. How can the cost to firms associated with government late payments be approximated? This cost is estimated as the short-term financial cost to firms associated with late payments. In order to calculate this, they use the volume of claims against the public administration, the average annual interest rate for loans to non-financial corporations, and the average government payment delays expressed as a fraction of a year.

2. Do liquidity constraints associated with payment delays put firms out of business? A panel regression is run between payment delays and the firm exit rate. This was done for B2B and G2B transactions separately.
The exit rate is defined as the ratio of firms that cease activity to the total number of active firms. The regression controls for the size of the firms involved, country fixed effects to control for national time-invariant characteristics, and business cycle variables to control for changes in financial conditions. The paper finds that payment delay is statistically significant and negative across all the countries studied, with higher payment delays associated with higher exit rates. The estimated financial cost as a percentage of GDP in 2012 ranges from 0.19 percent in Greece to 0.005 percent in Finland. A one point reduction in the payment delay ratio would reduce exit rates by about 2.8 to 3.4 percentage points in B2B transactions. As expected, these effects are exacerbated by business cycle effects. The results also show that bigger firms, with a larger number of employees, are more likely to survive the deleterious effects of payment delays. In G2B transactions, a one point reduction in the delay ratio leads to a decrease in exit rates of about 1.7 to 2 percentage points. The effect is lower than for payment delays in B2B transactions, which the authors suggest is due to the different representation of SMEs in these different types of transactions. The overall findings of this study suggest that payment delays in commercial transactions by the public administration and private entities have detrimental effects on the health of a firm, and exacerbate the burden of already financially constrained firms, ultimately pushing them out of business.

### Delayed payments in public procurement and its impact on the economy

Checherita et al. (2016) analyze the impact of government payment delays on private firms and on economic growth. They argue that increased delays in public payments can affect private sector liquidity and profits, and hence ultimately economic growth. This study defined payment delays by including various measures of the accounts payable data from government accounts (as defined in ESA 1995 code AF.7) along with the other measures of payment duration defined earlier. In addition to the short-term impact of payment delays from government on real GDP growth, the study also analyses profit growth, measured by economy-wide gross operating surplus, and bankruptcy, measured by the probability of default (using Moody's measure of distance to default), over the period spanning 1993 to 2012. Using a panel regression analysis, they find a negative relation between delayed payments and growth. The results show that a one standard deviation change in delayed payments reduces the growth rate by 0.8-1.5 percent, and a one percent increase in arrears reduces growth by 0.6-0.9 percent. The paper finds a statistically significant impact of delayed payments on the growth rate of the operating surplus of firms. A one standard deviation increase in delayed payments reduces profit growth by 1.5-3.4 percent. Finally, their results suggest that delayed payments reduce the distance to default. In similar work, Fiordelisi et al. (2012) show that economic growth in Italy would have been an additional 0.38 per cent if the government paid its trade loans within 30 days.

Obeng (2017) investigates the impact of payment delays caused by a liquidity crisis in the European Union, using changes in the pattern of late payments among EU companies between 2005 and 2014.
The paper finds the following features about payment delays during the financial crisis: payment delays increased across the board; delays had a higher negative impact on SMEs, low-profitability firms, and low-liquidity firms; and there was significant variation in how delays increased depending upon the sector in which a firm operated. The paper analyses the variability of firm late payments under different macroeconomic conditions using data for 54,277 EU firms over the period 2005 to 2014 from the AMADEUS database, a commercial European firm database. A fixed effects regression model to estimate the impact of selected macroeconomic shocks on payment delays finds that the financial crisis has a significant negative impact on payment delays of accounts receivable, even after controlling for firm characteristics such as profitability, liquidity, size, sector, country, credit collections, and credit period.

This literature establishes that the impact of delayed payments by the government on firms and the economy is negative and significant. The next strand of the literature asks what can be done to reduce the economic cost of delayed payments, and to improve the MCPF of public procurement. Conti et al. (2020) analyze the regulatory framework of the EU (called the Directive on Late Payments or DLP) concerning delayed payments by government. This paper focuses on G2B commercial relationships, starting by investigating the impact of the DLP on firm survival, employment and investment. They use sector-level data for a sample of 23 EU countries (and Norway) from 2008-2015, using 38 two-digit sectors from the Structural Business Statistics (SBS) database (a Eurostat firm database which provides information on European firms). The authors construct the exit rate of firms for a given sector in a country as the ratio between the number of enterprises that cease activity and the stock of active enterprises in a given year and for a given country-sector unit. A difference-in-differences analysis finds that after the introduction of the Directive, the exit rate of firms decreased in sectors that sell a larger fraction of their output to the government. They also find that there is an increase in employment in those sectors more connected with the government, and conclude that more discipline in government payment terms can have considerable positive effects on economic activity.

### Implications

The results of the above studies present the first empirical estimates of the quantum of the negative impact on the economy when the government delays payments for procurement transactions. Some indicative estimates of the economic impact include:

1. A one standard deviation worsening in delayed payments reduces firm profit growth by 1.5-3.4 percent.
2. A one point reduction in delayed payments reduces firm exit rates by about 1.7-2.0 percentage points.
3. A one standard deviation worsening in the delay of payments reduces the economic growth rate by 0.8-1.5 percent.
4. Paying trade loans within 30 days implies an additional 0.38 percent of economic growth.

Even with the caveat that these values are estimated for EU countries and for firms operating in them, where contract performance and enforcement tend to be some of the best in the world, these are useful benchmarks to frame the impact of problems of public procurement for us in India. Such an exercise is particularly pertinent for the current times, where the COVID-19 pandemic has resulted in a severe reduction in GDP growth and there is a large-scale loss of jobs.
One estimate puts the reduction in the Indian economy at 23.9 per cent in the April to June quarter of 2020 (Choudhury, 2020). India has followed the global response to such a systemic shock, with the state becoming the saviour of last resort and rolling out economic interventions in the form of income support schemes and various public expenditure programs. However, the present state of Indian fiscal conditions places constraints on the credibility and sustainability of new spending. What the above literature suggests, in addition to these recent interventions, is that India would do well to find ways and means to clear her dues to direct and indirect suppliers, particularly given that a large fraction of Indian enterprises are micro, small and medium enterprises. Sahu (2020) reports that INR 5 lakh crore out of the reported INR 9.5 lakh crore of dues from the government was due to MSMEs. If reducing the delays in payments can reduce the distress-related bankruptcy of such firms by even one percent, it can have a material impact on the health of these firms and the continued availability of avenues for employment. More importantly, such an action will improve the confidence of small traders and vendors across the country in participating in G2B transactions. If payments can be made on time, it will reduce the MCPF and strengthen the channels through which the state can deliver a positive impact on economic growth at the time when it is most required, and to those who need the support the most.

One path suggested in the international literature is to put in place a regulatory framework on public procurement. However, there is no clear evidence that indicates that this can be successful in reversing payment delays. For example, Banerjee et al. (2020) show that e-governance reforms of the MNREGA system do deliver a positive impact on reduced leakage in social benefit programs but fail to reduce payment delays. Further, Roy and Uday (2020) analyse the link between the presence of a legal framework and corruption, and they find no correlation between the two.

### Conclusions

What the existing studies show is the importance of establishing systems through which the impact of public procurement processes can be understood. Unlike in the various EU countries where these studies have been carried out, there are no systematic empirical studies that have been done in India to quantify the economic cost of delayed payments on firms and the economy. A first step towards solving the problem of delayed payments and the overall processes of public procurement would be to facilitate opportunities to gather information on the impact of these processes on the operational health of firms. Such information needs to be developed for India and made largely available to the research community, to get a sound empirical understanding of the process of public procurement and how to improve the cost of doing business with the Indian State.

### References

Abhijit Banerjee, Esther Duflo, Clement Imbert, Santhosh Mathew and Rohini Pande (2020), 'E-governance, accountability and leakage in public programs: Experimental evidence from financial management reform in India', American Economic Journal: Applied Economics, 12(4).

Cristina Checherita-Westphal, Alexander Klemm, and Paul Viefers (2016), 'Governments' payment discipline: The macroeconomic impact of public payment delays and arrears', Journal of Macroeconomics, 47: 147-165.
Erica Bosio and Simeon Djankov (2020), 'How large is public procurement?', World Bank Blogs, 5 February.

Franco Fiordelisi, Davide Mare, Nemanja Radic, Ornella Ricci, Philip Molyneux, and Thomas Weyman Jones (2012), 'Government late payment: the effect on the Italian economy', Doctoral Dissertation, School of Economics and Business, Loughborough University, UK.

Gaurav Choudhury (2020), 'India's GDP contracts 23.9 per cent in Q1FY21 as lockdowns, restrictions bludgeon economy', 1 September.

Isaac Kwame Essien Obeng (2017), 'Delaying payments after the financial crisis: evidence from EU companies', Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis, 65(2): 447-463.

Maurizio Conti, Leandro Elia, Antonella Rita Ferrara and Massimiliano Ferraresi (2020), 'Government late payments and firms survival: evidence from the EU', Technical report, Società Italiana di Economia Pubblica, Working Paper No. 753.

M. H. Khan (2017), 'Public procurement issues with government of India', Lal Bahadur Shastri National Academy of Administration (LBSNAA).

Prashant Sahu (2020), 'Forget stimulus, clear your dues: Rs 7 lakh crore unpaid dues to industry by central govt depts and PSUs', Financial Express, 8 September.

Shubho Roy and Diya Uday (2020), 'Does India need a procurement law?', The LEAP Journal blog, 19 August.

Vijay Kelkar and Ajay Shah (2019), 'In Service of the Republic: The Art and Science of Public Policy', Penguin Allen Lane.

William Connell (2014), 'Economic impact of late payments', Technical report, Directorate General Economic and Financial Affairs (DG ECFIN), European Commission.

Rabia Khatun is an independent researcher and Sourish Das is an associate professor at the Chennai Mathematical Institute. The authors would like to thank Susan Thomas for comments and suggestions on the article.
• summary: Need some way to override massive beam conflicts --> Need some way to override massive beam collisions
# Secant method in MATLAB

1. Apr 23, 2010

### chronicals

I am trying to solve this equation with the secant method in MATLAB:

fn = 40*n^1.5 - 875*n + 35000

My initial guesses are n1=60 and n2=68. I want to find the root and the absolute relative approximate error at the end of each iteration. I have an infinite loop. Can you help me repair my file? This is my m-file:

```
clc
clear
n1=60; n2=68;
tol=1e-3;
err0=3;
iter=0;
fprintf('iteration n relative approximate error\n')
while err0>=tol
    iter=iter+1;
    fn1=40*(n1).^1.5-875*(n1)+35000;
    fn2=40*(n2).^1.5-875*(n2)+35000;
    nnew=n2-fn2*((n2-n1)/(fn2-fn1));
    fnnew=40*(nnew).^1.5-875*(nnew)+35000;
    err(iter)=(abs((nnew-n2)/nnew))*100;
    fprintf('%2d %f %f\n',iter,nnew,err(iter))
    if nnew>n1
        n1=nnew;
    else
        n2=nnew;
    end
end
nnew
iter
```

Last edited: Apr 24, 2010

2. Apr 24, 2010

### chronicals

I rearranged my m-file and solved the infinite loop problem, but I think I made a mistake in calculating the absolute relative approximate error at the end of each iteration. I think this command is wrong:

err(iter)=(abs((nnew-n2)/nnew))*100;

How can I fix this error calculation? My m-file:

```
clc
clear
n1=60; n2=68;
tol=1e-5;
err0=3;
iter=0;
fprintf('iteration n relative approximate error\n')
while err0>=tol
    iter=iter+1;
    fn1=40*(n1).^1.5-875*(n1)+35000;
    fn2=40*(n2).^1.5-875*(n2)+35000;
    nnew=n2-fn2*((n2-n1)/(fn2-fn1));
    fnnew=40*(nnew).^1.5-875*(nnew)+35000;
    err(iter)=(abs((nnew-n2)/nnew))*100;
    fprintf('%2d %f %f\n',iter,nnew,err(iter))
    err0=abs(fnnew);
    if nnew>n1
        n1=nnew;
    else
        n2=nnew;
    end
end
nnew
iter
```

Last edited: Apr 24, 2010

3. Apr 24, 2010

### Born2bwire

Why are you using abs(fnnew) as your error, isn't it err(sayac)? I would also remove the factor of 100 from the relative error unless you mean to express it as a percent relative error. Trivial quibble, but the nice thing about the relative error is that the base-10 log of it gives you an estimate of the number of decimal digits it is accurate to. So by setting your tol to 1e-5, you are asking to be accurate to at least five digits.

4. Apr 24, 2010

### chronicals

If I use err(iter), I have an infinite loop, so I use abs(fnnew). These are my results:

```
iteration n relative approximate error
1 62.759758 8.349685
2 62.689966 8.470309
3 62.691698 0.002762
4 62.691697 0.000001
nnew = 62.6917
iter = 4
```

How can the second iteration's relative approximate error be 8.470309? I think this m-file is calculating the relative approximate error wrongly. Please help me fix this error command: err(iter)=(abs((nnew-n2)/nnew))*100;

5. Apr 24, 2010

### Born2bwire

It is 8.4 because you are scaling the relative error by 100. Like I said, if you do not scale, then the log of the relative error is indicative of the number of digits of accuracy. Indeed, you note that the first two digits, 62, do not change as you converge. Thus, you started out with two digits of accuracy, which would correlate to log(0.08). Using err(iter) is the proper thing to do. Right now fnnew only works because it is the same as the absolute error, since you are trying to find the zero. But if you were trying to converge to any number other than zero then you would never converge properly. Which brings us to the relative error. That is not the true relative error, because you do not know the true answer. It is a measure of the difference between your old and new values, I think.
This is still a reasonable metric to use, though, as long as we assume that you are always converging at a constant or increasing rate (this could allow us to use this metric to estimate the actual relative error, but that is unnecessary for most applications). For example, I use this when I do semi-infinite integrations. From your sample output, it should have converged in four iterations since the "error" is 1.0e-6. If you removed the scaling factor of 100 then this should happen. If it is not ending, then it is probably because the result is fluctuating back and forth around the true answer. If you increased your tolerance slightly, this may allow you to achieve convergence. This can happen due to floating point errors, getting trapped around an incorrect guess, or due to deficiencies in the algorithm.

EDIT: Hmmm... probably should be a little more blunt. In this case, you should use abs(fnnew) because you know that the result should be zero, and thus abs(fnnew) is the amount by which the result is off from zero. So you can find the residual. That is, you can't find the error in the x you wish to find (since obviously you do not know x a priori), but you can find the error in the f(x) that you want to achieve. This is not always feasible. Sometimes, like say with an integration, you do not have such a metric to use. So using the relative change in the result, as you did in err(), is often a valid metric. Though, as you have found, it may not always be perfectly reliable.

Last edited: Apr 24, 2010
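For reference, the same iteration is easy to express outside MATLAB. Below is a minimal Python sketch of the standard secant update, with the relative approximate change between successive iterates used as the stopping test; the tolerance and starting guesses mirror the thread (this is an illustration of the discussion above, not a fix posted in the thread):

```python
def f(n):
    return 40 * n**1.5 - 875 * n + 35000

def secant(f, x0, x1, tol=1e-5, max_iter=50):
    """Standard secant method; stops when the relative change between
    successive iterates drops below tol."""
    for i in range(1, max_iter + 1):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        rel_err = abs((x2 - x1) / x2)  # relative approximate error
        print(f"{i:2d}  {x2:.6f}  {rel_err:.2e}")
        if rel_err < tol:
            return x2
        x0, x1 = x1, x2  # keep the two most recent iterates
    return x1

root = secant(f, 60.0, 68.0)  # converges near n = 62.6917
```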
# Fourier antitransform using scaling property?

I'm trying to calculate the antitransform of: $$\frac{1}{2\cdot(1+5w)^2}$$ Now I know that the antitransform of $$\frac{1}{(1+5w)^2}$$ is $$t \cdot e^{-5t} u(t)$$ But in this case I got that divided by 2. I assumed I had to use the scaling property, which says: $$F[f(ax)] = \frac{1}{|a|} \hat{f}\left(\frac{w}{a}\right)$$ Now I'm not really sure how to apply this. Could anyone help?

• The scaling property only applies if you multiply the time/frequency variable. In this case, you only need to multiply the inverse with 1/2. – Hilmar Dec 12 '19 at 5:04

If $$h(t)$$ is the inverse Fourier transform of $$H(\omega)$$, then by linearity the inverse Fourier transform of $$aH(\omega)$$ is simply $$ah(t)$$. This has nothing to do with the scaling property you mentioned, because the latter refers to the scaling of the argument of the function.
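Putting the comment and the answer together, and taking the transform pair stated in the question at face value, the worked step is just one application of linearity:

$$\mathcal{F}^{-1}\left[\frac{1}{2(1+5w)^2}\right]=\frac{1}{2}\,\mathcal{F}^{-1}\left[\frac{1}{(1+5w)^2}\right]=\frac{1}{2}\,t\,e^{-5t}\,u(t)$$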
# Pictures on horizontal page in a 2x2 grid with equal distances between them and the margins

I would like to add four pictures with slightly different aspect ratios on two rows on one horizontal page, so that the distances between each picture and to the outer margin of the page are equal, or at least very close to equal, in size. How can I make LaTeX do the calculations? This is my MWE:

```
\documentclass{article}
\usepackage[margin=0cm, top=0cm, bottom=0cm, outer=0cm, inner=0cm, landscape, a4paper]{geometry}
\pagestyle{empty}
\usepackage{graphicx}
\usepackage{subcaption}
\begin{document}
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\captionsetup{labelformat=empty}
\centering
\begin{subfigure}[t]{0.5\textheight}
  \centering
  \includegraphics[height=6cm]{example-image-a}
  \caption[]{{\small}}
  \label{}
\end{subfigure}
\begin{subfigure}[t]{0.5\textheight}
  \centering
  \includegraphics[height=6cm]{example-image-b}
  \caption[]{{\small}}
  \label{}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[t]{0.475\textwidth}
  \centering
  \includegraphics[height=7cm]{example-image-c} % a pdf
  \caption[]{{\small}}
  \label{}
\end{subfigure}
\begin{subfigure}[t]{0.475\textwidth}
  \centering
  \includegraphics[height=7cm]{example-image} % a pdf
  \caption[]{{\small }}
  \label{}
\end{subfigure}
\caption[]{\small}
\label{}
\end{figure}
\end{document}
```

This can be achieved by using the subfig package instead of the subcaption package. With this you can define equal spacing from the borders using a combination of \hfill, \null and \hspace{...}. An MWE is given below:

```
\documentclass{article}
\usepackage[margin=0cm, top=0cm, bottom=0cm, outer=0cm, inner=0cm, landscape, a4paper]{geometry}
\pagestyle{empty}
\usepackage{graphicx}
%\usepackage{subcaption} %I take this out
\usepackage{subfig}
\begin{document}
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\captionsetup{labelformat=empty}
\centering\hfill
\subfloat[][]{\includegraphics[height=6cm,keepaspectratio]{example-image-a}\label{figure1}}\hspace{2cm}
\subfloat[][]{\includegraphics[height=7cm,keepaspectratio]{example-image-b}\label{figure2}}\hfill\null\\
\hfill
\subfloat[][]{\includegraphics[height=7cm,keepaspectratio]{example-image-c}\label{figure3}}\hspace{2cm}
\subfloat[][]{\includegraphics[height=6cm,keepaspectratio]{example-image}\label{figure4}}\hfill\null
\end{figure}
\end{document}
```

In this I ensured all the figures start at the same location using \hfill at the beginning. Similarly, the same technique applies at the end as well. Moreover, the inter-image spacing is handled by \hspace{...} to get equal spacing. Note that I added a \null character to influence the spacing, i.e., the ending of each figure in both rows w.r.t. the border per se. This will give you the desired 2x2 layout.

PS: Note that there are much more elegant ways to achieve this using TikZ, for example. But this is the simplest I could think of ;)

Note: For exotic aspect ratios/sizes of figures, you can use \hspace{...} as your tuning knob to set things right.

• Thank you, Raaja. This helped me solve my question. I will ask a similar question to this for tikz soon. ;) – Til Hund Feb 7 at 14:47
• @Til Hund I shall look forward to that ;) – Raaja Feb 7 at 14:48

This solution uses saveboxes 0-3 to measure the widths and \dimen0 to calculate the width of the left and right margins in the first line. Note that box and length registers 0-9 are not used by standard LaTeX. One should still only use them inside a group (in this case, figure), so as to preserve their contents.
```
\documentclass{article}
\usepackage[margin=0cm, top=0cm, bottom=0cm, outer=0cm, inner=0cm, landscape, a4paper]{geometry}
\pagestyle{empty}
\usepackage{graphicx}
\usepackage{subcaption}
\begin{document}
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\captionsetup{labelformat=empty}
\sbox0{\includegraphics[height=6cm]{example-image-a}}% measure widths
\sbox1{\includegraphics[height=6cm]{example-image-b}}%
\sbox2{\includegraphics[height=7cm]{example-image-c}}% a pdf
\sbox3{\includegraphics[height=7cm]{example-image}}% a pdf
\centering
\begin{subfigure}[t]{\wd0}
  \centering
  \usebox0
  \caption[]{{\small}}
  \label{}
\end{subfigure}% the extra space will mess up calculations
\hfil
\begin{subfigure}[t]{\wd1}
  \centering
  \usebox1
  \caption[]{{\small}}
  \label{}
\end{subfigure}%
\par\vskip\baselineskip
\dimen0=\dimexpr \linewidth-\wd0-\wd1\relax% compute size of \hfil in previous line
\divide\dimen0 by 3
\hspace*{\dimen0}% left margin
%\makebox[\dimexpr \linewidth-2\dimen0][c]{% or center a box the same width
\begin{subfigure}[t]{\wd2}
  \centering
  \usebox2
  \caption[]{{\small}}
  \label{}
\end{subfigure}%
\hfill% overpowers \centering
\begin{subfigure}[t]{\wd3}
  \centering
  \usebox3
  \caption[]{{\small }}
  \label{}
\end{subfigure}\hspace*{\dimen0}% right margin
\caption[]{\small}
\label{}
\end{figure}
\end{document}
```

• Thank you, John Kormylo, for adding another possible solution to my question. It is very much appreciated! :) – Til Hund Feb 8 at 12:16
# The peterjamesthomas.com Data Strategy Hub

Today we launch a new on-line resource, The Data Strategy Hub. This presents some of the most popular Data Strategy articles on this site and will expand in coming weeks to also include links to articles and other resources pertaining to Data Strategy from around the Internet. If you have an article you have written, or one that you read and found helpful, please post a link in a comment here or in the actual Data Strategy Hub and I will consider adding it to the list.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# The latest edition of The Data & Analytics Dictionary is now out

After a hiatus of a few months, the latest version of the peterjamesthomas.com Data and Analytics Dictionary is now available. It includes 30 new definitions, some of which have been contributed by people like Tenny Thomas Soman, George Firican, Scott Taylor and Taru Väre. Thanks to all of these for their help.

Remember that The Dictionary is a free resource and quoting contents (ideally with acknowledgement) and linking to its entries (via the buttons provided) are both encouraged. If you would like to contribute a definition, which will of course be acknowledged, you can use the comments section here, or the dedicated form; we look forward to hearing from you [1].

The Data & Analytics Dictionary will continue to be expanded in coming months.

Notes

[1] Please note that any submissions will be subject to editorial review and are not guaranteed to be accepted.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# Why do data migration projects have such a high failure rate?

Similar to its predecessor, Why are so many businesses still doing a poor job of managing data in 2019?, this brief article has its genesis in the question that appears in its title, something that I was asked to opine on recently. Here is an expanded version of what I wrote in reply:

Well, the first part of the answer is based on considering activities which have at least moderate difficulty and complexity associated with them. The majority of such activities that humans attempt will end in failure. Indeed I think that the oft-reported failure rate, which is in the range 60 – 70%, is probably a fundamental Physical constant; just like the speed of light in a vacuum [1], the rest mass of a proton [2], or the fine structure constant [3].

$\alpha=\dfrac{e^2}{4\pi\varepsilon_0d}\bigg/\dfrac{hc}{\lambda}=\dfrac{e^2}{4\pi\varepsilon_0d}\cdot\dfrac{2\pi d}{hc}=\dfrac{e^2}{4\pi\varepsilon_0d}\cdot\dfrac{d}{\hbar c}=\dfrac{e^2}{4\pi\varepsilon_0\hbar c}$

For more on this, see the preambles to both Ever tried? Ever failed? and Ideas for avoiding Big Data failures and for dealing with them if they happen.

Beyond that, what I have seen a lot is Data Migration being the poor relation of programme work-streams. Maybe the overall programme is to implement a new Transaction Platform, integrated with a new Digital front-end; this will replace 5+ legacy systems. When the programme starts, the charter says that five years of history will be migrated from the 5+ systems that are being decommissioned. Then the costs of the programme escalate [4] and something has to give to stay on budget.
At the same time, when people who actually understand data make a proper assessment of the amount of work required to consolidate and conform the 5+ disparate data sets, it is found that the initial estimate for this work [5] was woefully inadequate. The combination leads to a change in migration scope: just two years of historical data will now be migrated. Rinse and repeat…

The latest strategy is to not migrate any data, but instead get the existing data team to build a Repository that will allow users to query historical data from the 5+ systems to be decommissioned. This task will fall under BAU [6] activities (thus getting programme expenditure back on track). The slight flaw here is that building such a Repository is essentially a big chunk of the effort required for Data Migration and – of course – the BAU budget will not be enough for this quantum of work. Oh well, someone else's problem; the programme budget suddenly looks much rosier, only 20% over budget now…

Note: I may have exaggerated a bit to make a point, but in all honesty, not really by that much.

Notes

[1] $c\approx299,792,458\text{ }ms^{-1}$

[2] $m_p\approx1.6726 \times 10^{-27}\text{ }kg$

[3] $\alpha\approx0.0072973525693$ – which doesn't have a unit (it's dimensionless)

[4] Probably because they were low-balled at first to get it green-lit; both internal and external teams can be guilty of this.

[5] Which was no doubt created by a generalist of some sort; or at the very least an incurable optimist.

[6] BAU of course stands for Basically All Unfunded.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# Why are so many businesses still doing a poor job of managing data in 2019?

I was asked the question appearing in the title of this short article recently and penned a reply, which I thought merited sharing with a wider audience. Here is an expanded version of what I wrote:

Let's start by considering some related questions:

1. Why are so many businesses still doing a bad job of controlling their costs in 2019?
2. Why are so many businesses still doing a bad job of integrating their acquisitions in 2019?
3. Why are so many businesses still doing a bad job of their social media strategy in 2019?
4. Why are so many businesses still doing a bad job of training and developing their people in 2019?
5. Why are so many businesses still doing a bad job of customer service in 2019?

The answer is that all of the above are difficult to do well and all of them are done by humans; fallible humans who have a varying degree of motivation to do any of these things. Even in companies that – from the outside – appear clued-in and well-run, there will be many internal inefficiencies and many things done poorly. I have spoken to companies that are globally renowned and have a reputation for using technology as a driver of their business; some of their processes are still a mess. Think of the analogy of a swan viewed from above and below the water line (or vice versa in the example below).

I have written before about how hard it is to do a range of activities in business and how high the failure rate is. Typically I go on to compare these types of problems to challenges with data-related work [1]. This has some of its own specific pitfalls. In particular, work in the Data Management arena may need to negotiate the following obstacles:

1.
1. Data Management is even harder than some of the things mentioned above and tends to touch on all aspects of the people, process and technology in an organisation and its external customer base.
2. Data is still – sadly – often seen as a technical, even nerdy, issue, one outside of the mainstream business.
3. Many companies will include aspirations to become data-centric in their quarterly statements, but few organisations are actually putting the necessary resources behind the root and branch change that this entails.
4. Arguably, too many data professionals have used the easy path of touting regulatory peril [2] to drive data work, rather than making the commercial case that good data, well-used, leads to better profitability.

With reference to the aforementioned failure rate, I discuss some ways to counteract the early challenges in a recent article, Building Momentum – How to begin becoming a Data-driven Organisation. In the closing comments of this, I write:

The important things to take away are that in order to generate momentum, you need to start to do some stuff; to extend the physical metaphor, you have to start pushing. However, momentum is a vector quantity (it has a direction as well as a magnitude [12]) and building momentum is not a lot of use unless it is in the general direction in which you want to move; so push with some care and judgement. It is also useful to realise that – so long as your broad direction is OK – you can make refinements to your direction as you pick up speed.

To me, if you want to avoid poor Data Management, then the following steps make sense:

1. Make sure that Data Management is done for some purpose, that it is part of an overall approach to data matters that encompasses using data to drive commercial benefits. The way that Data Management should slot in is along the lines of my Simplified Data Capability Framework:
2. Develop an overall Data Strategy (without rock-polishing for too long) which includes a vision for Data Management. Once the destination for Data Management is developed, start to do work on anything that can be accomplished relatively quickly and without wholesale IT change. In parallel, begin to map what more strategic change looks like and try to align this with any other transformation work that is in train or planned.
3. Leverage any progress in the Data Management arena to deliver new or improved Analytics and, symmetrically, use any stumbling blocks in the Analytics arena to argue the case for better Data Management.
4. Draw up a communications plan, advertising the benefits of sound Data Management in commercial terms; advertise any steps forward and the benefits that they have realised.
5. Consider that sound Data Management cannot be the preserve of a single team alone; instead consider fostering an organisation-wide Data Community [3].

Of course the above list is not exhaustive and there are other approaches that may yield benefits in specific organisations for cultural or structural reasons. I'd love to hear about what has worked (or the other thing) for fellow data practitioners, so please feel free to add a comment.

Notes
[1] For example in:
[2] GDPR and its ilk. Regulatory compliance is very important, but it must not become the sole raison d'être for data work.
[3] As described in In praise of Jam Doughnuts or: How I learned to stop worrying and love Hybrid Data Organisations.

Another article from peterjamesthomas.com.
The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# In praise of Jam Doughnuts or: How I learned to stop worrying and love Hybrid Data Organisations

The above infographic is the work of Management Consultants Oxbow Partners [1] and employs a novel taxonomy to categorise data teams. First up, I would of course agree with Oxbow Partners' statement that:

Organisation of data teams is a critical component of a successful Data Strategy

Indeed I cover elements of this in two articles [2]. So the structure of data organisations is a subject which, in my opinion, merits some consideration. Oxbow Partners draw distinctions between organisations where the Data Team is separate from the broader business, ones where data capabilities are entirely federated with no discernible “centre”, and hybrids between the two. The imaginative names for these are respectively The Burger, The Smoothie and The Jam Doughnut. In this article, I review Oxbow Partners' model and offer some of my own observations.

The Burger – Centralised

Having historically recommended something along the lines of The Burger, not least when an organisation's data capabilities are initially somewhere between non-existent and very immature, my views have changed over time, much as the characteristics of the data arena have also altered. I think that The Burger still has a role, in particular in a first phase where data capabilities need to be constructed from scratch, but it has some weaknesses. These include:

1. The pace of change in organisations has increased in recent years. Also, many organisations have separate divisions or product lines and / or separate geographic territories. Change can be happening in sometimes radically different ways in each of these, as market conditions may vary considerably between Division A's operations in Switzerland and Division B's operations in Miami. It is hard for a wholly centralised team to react with speed in such a scenario. Even if they are aware of the shifting needs, capacity may not be available to work on multiple areas in parallel.
2. Again in the above scenario, it is also hard for a central team to develop deep expertise in a range of diverse businesses spread across different locations (even if within just one country). A central team member who has to understand the needs of 12 different business units will necessarily be at a disadvantage when considering any single unit, compared to a colleague who focuses on that unit and nothing else.
3. A further challenge presented here is maintaining the relationships with colleagues in different business units that are typically a prerequisite for – for example – driving adoption of new data capabilities.

The Smoothie – Federated

So – to address these shortcomings – maybe The Smoothie is a better organisational design. Well maybe, but also maybe not. Problems with these arrangements include:

1. Probably biggest of all, it is an extremely high-cost approach. The smearing out of work on data capabilities inevitably leads to duplication of effort with – for example – the same data sourced or combined by different people in parallel. The pace of change in organisations may have increased, but I know few that are happy to bake large costs into their structures as a way to cope with this.
2. The same duplication referred to above creates another problem: the way that data is processed can vary (maybe substantially) between different people and different teams.
This leads to the nightmare scenario where people spend all their time arguing about whose figures are right, rather than focussing on what the figures say is happening in the business [3]. Such arrangements can generate business risk as well. In particular, in highly regulated industries, heterogeneous treatment of the same data tends to be frowned upon in external reviews.
3. The wholly federated approach also limits both opportunities for economies of scale and identification of areas where data capabilities can meet the needs of more than one business unit.
4. Finally, data resources who are fully embedded in different parts of a business may become isolated and may not benefit from the exchange of ideas that happens when other similar people are part of the immediate team.

So to summarise we have:

The Jam Doughnut – Hybrid

Which leaves us with The Jam Doughnut; in my opinion, this is a Goldilocks approach that captures as much as possible of the advantages of the other two set-ups, while mitigating their drawbacks. It is such an approach that tends to be my recommendation for most organisations nowadays. Let me spend a little more time describing its attributes. I see a hub-and-spoke model as the best way of implementing a Jam Doughnut approach. The hub is a central Data Team; the spokes are data-centric staff in different parts of the business (Divisions, Functions, Geographic Territories etc.). It is important to stress that each spoke is not a smaller copy of the central Data Team. Some roles will be more federated, some more centralised, according to what makes sense. Let's consider a few different roles to illustrate this:

• Data Scientist – I would see a strong central group of these, developing methodologies and tools, but also that many business units would have their own dedicated people; “spoke”-based people could also develop new tools and new approaches, which could be brought into the “hub” for wider dissemination
• Analytics Expert – Similar to the Data Scientists, centralised “hub” staff might work more on standards (e.g. for Data Visualisation), developing frameworks to be leveraged by others (e.g. a generic harness for dashboards that can be leveraged by “spoke” staff), or selecting tools and technologies; “spoke”-based staff would be more into the details of meeting specific business needs
• Data Engineer – Some “spoke” people may be hybrid Data Scientists / Data Engineers and some larger “spoke” teams may have dedicated Data Engineers, but the needle moves more towards centralisation with this role
• Data Architect – Probably wholly centralised, but some “spoke” staff may have an architecture string to their bow, which would of course be helpful
• Data Governance Analyst – Also probably wholly centralised; this is not to downplay the need for people in the “spokes” to take accountability for Data Governance and Data Quality improvement, but these are likely to be part-time roles in the “spokes”, whereas the “hub” will need full-time Data Governance people

It is also important to stress that the various spokes should also be in contact with each other, swapping successful approaches, sharing ideas and so on. Indeed, you could almost see the spokes beginning to merge together somewhat to form a continuum around the Data Team.
Maybe the merged spokes could form the “dough”, with the Data Team being the “jam”, something like this: I label these types of arrangements a Data Community and this is something that I have looked to establish and foster in a few recent assignments. Broadly, a Data Community is something that all data-centric staff would feel part of; they are obviously part of their own segment of the organisation, but the Data Community is also part of their corporate identity. The Data Community facilitates best practice approaches, sharing of ideas, helping with specific problems and general discourse between its members. I will be revisiting the concept of a Data Community in coming weeks. For now I would say that one thing that can help it to function as envisaged is sharing common tooling. Again this is a subject that I will return to shortly. I'll close by thanking Oxbow Partners for some good mental stimulation – I will look forward to their next data-centric publication.

Disclosure: It is peterjamesthomas.com's policy to disclose any connections with organisations or individuals mentioned in articles. Oxbow Partners are an advisory firm for the insurance industry covering Strategy, Digital and M&A. Oxbow Partners and peterjamesthomas.com Ltd. have a commercial association and peterjamesthomas.com Ltd. was also engaged by one of Oxbow Partners' principals, Christopher Hess, when he was at a former organisation.

Notes
[1] Though the author might have had a minor role in developing some elements of it as well.
[2] The Anatomy of a Data Function and A Simple Data Capability Framework.
[3] See also The impact of bad information on organisations.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# A Simple Data Capability Framework

Introduction

As part of my consulting business, I end up thinking about Data Capability Frameworks quite a bit. Sometimes this is when I am assessing current Data Capabilities, sometimes it is when I am thinking about how to transition to future Data Capabilities. Regular readers will also recall my tripartite series on The Anatomy of a Data Function, which really focussed more on capabilities than purely organisation structure [1]. Detailed frameworks like the one contained in Anatomy are not appropriate for all audiences. Often I need to provide a more easily-absorbed view of what a Data Function is and what it does. The exhibit above is one that I have developed and refined over the last three or so years and which seems to have resonated with a number of clients. It has – I believe – the merit of simplicity. I have tried to distil things down to the essentials. Here I will aim to walk the reader through its contents, much of which I hope is actually self-explanatory. The overall arrangement has been chosen intentionally: the top three areas are visible activities, the bottom three are more foundational areas [2], ones that are necessary for the top three boxes to be discharged well. I will start at the top left and work across and then down.

Collation of Data to provide Information

This area includes what is often described as “traditional” reporting [3], Dashboards and analysis facilities. The Information created here is invaluable for both determining what has happened and discerning trends / turning points. It is typically what is used to run an organisation on a day-to-day basis.
Absence of such Information has been the cause of underperformance (or indeed major losses) in many an organisation, including a few that I have been brought in to help. The flip side is that making the necessary investments to provide even basic information has been at the heart of the successful business turnarounds that I have been involved in. The bulk of Business Intelligence efforts would also fall into this area, but there is some overlap with the area I describe next as well.

Leverage of Data to generate Insight

In this second area we have disciplines such as Analytics and Data Science. The objective here is to use a variety of techniques to tease out findings from available data (both internal and external) that go beyond the explicit purpose for which it was captured. Thus data to do with bank transactions might be combined with publicly available demographic and location data to build an attribute model for both existing and potential clients, which can in turn be used to make targeted offers or product suggestions to them on Digital platforms. It is my experience that work in this area can have a massive and rapid commercial impact. There are few activities in an organisation where a week's work can equate to a percentage point increase in profitability, but I have seen insight-focussed teams deliver just that type of ground-shifting result.

Control of Data to ensure it is Fit-for-Purpose

This refers to a wide range of activities from Data Governance to Data Management to Data Quality improvement and indeed related concepts such as Master Data Management. Here, as well as the obvious policies, processes and procedures, together with help from tools and technology, we see the need for the human angle to be embraced via strong communications, education programmes and aligning personal incentives with desired data quality outcomes. The primary purpose of this important work is to ensure that the information an organisation collates and the insight it generates are reliable. A helpful by-product of doing the right things in these areas is that the vast majority of what is required for regulatory compliance is achieved simply by doing things that add business value anyway.

Data Architecture / Infrastructure

Best practice has evolved in this area. When I first started focussing on the data arena, Data Warehouses were state of the art. More recently Big Data architectures, including things like Data Lakes, have appeared and – at least in some cases – begun to add significant value. However, I am on public record multiple times stating that technology choices are generally the least important in the journey towards becoming a data-centric organisation. This is not to say such choices are unimportant, but rather that other choices are more important, for example how best to engage your potential users and begin to build momentum [4]. Having said this, the model that seems to have emerged of late is somewhat different to the single version of the truth aspired to for many years by organisations. Instead best practice now encompasses two repositories: the first Operational, the second Analytical. At a high level, arrangements would be something like this: The Operational Repository would contain a subset of corporate data. It would be highly controlled, highly reconciled and used to support both regular reporting and a large chunk of dashboard content. It would be designed to also feed data to other areas, notably Finance systems.
This would be complemented by the Analytical Repository, into which most corporate data (augmented by external data) would be poured. This would be accessed by a smaller number of highly skilled staff, Data Scientists and Analytics experts, who would use it to build models, produce one-off analyses and support areas such as Data Visualisation and Machine Learning. It is not atypical for Operational Repositories to be SQL-based and Analytical Repositories to be Big Data-based, but you could use SQL for both, or indeed Big Data for both, according to the circumstances of an organisation and its technical expertise.

Data Operating Model / Organisation Design

Here I will direct readers to my (soon to be updated) earlier work on The Anatomy of a Data Function. However, it is worth mentioning a couple of additional points. First, an Operating Model for data must encompass the whole organisation, not just the Data Function. Such a model should cover how data is captured, sourced and used across all departments. Second, I think that the concept of a Data Community is important here: a web of like-minded Data Scientists and Analytics people, sitting in various business areas and support functions, but linked to the central hub of the Data Function by common tooling, shared data sets (ideally Curated) and aligned methodologies. Such a virtual data team is of course predicated on an organisation hiring collaborative people who want to be part of and contribute to the Data Community, but those are the types of people that organisations should be hiring anyway [5].

Data Strategy

Our final area is that of Data Strategy, something I have written about extensively in these pages [6] and a major part of the work that I do for organisations. It is an oft-repeated truism that a Data Strategy must reflect an overarching Business Strategy. While this is clearly the case, often things are less straightforward. For example, the Business Strategy may be in flux; this is particularly the case where a turn-around effort is required. Also, how the organisation uses data for competitive advantage may itself become a central pillar of its overall Business Strategy. Either way, rather than waiting for a Business Strategy to be finalised, there are a number of things that will need to be part of any Data Strategy: the establishment of a Data Function; a focus on making data fit-for-purpose to better support both information and insight; creation of consistent and business-focussed reporting and analysis; and the introduction or augmentation of Data Science capabilities. Many of these activities can help to shape a Business Strategy based on facts, not gut feel. More broadly, any Data Strategy will include: a description of where the organisation is now (threats and opportunities); a vision for commercially advantageous future data capabilities; and a path for moving between the current and the future states. Rather than being PowerPoint-ware, such a strategy needs to be communicated assiduously and in a variety of ways so that it can be both widely understood and form a guide for data-centric activities across the organisation.

Summary

As per my other articles, the data capabilities that a modern organisation needs are broader and more detailed than those I have presented here. However, I have found this simple approach a useful place to start. It covers all the basic areas and provides a scaffold off which more detailed capabilities may be hung.
The framework has been informed by what I have seen and done in a wide range of organisations, but of course it is not necessarily the final word. As always I would be interested in any general feedback and in any suggestions for improvement.

Notes
[1] In passing, Anatomy is due for its second refresh, which will put greater emphasis on Data Science and its role as an indispensable part of a modern Data Function. Watch this space.
[2] Though one would hope that a Data Strategy is also visible!
[3] Though nowadays you hear “traditional” Analytics and “traditional” Big Data as well (on the latter see Sic Transit Gloria Magnorum Datorum), no doubt “traditional” Machine Learning will be with us at some point, if it isn't here already.
[4] See also Building Momentum – How to begin becoming a Data-driven Organisation.
[5] I will be revisiting the idea of a Data Community in coming months, so again watch this space.
[6] Most explicitly in my three-part series:

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# The Chief Marketing Officer and the CDO – A Modern Fable

This Fox has a longing for grapes:
He jumps, but the bunch still escapes.
So he goes away sour;
And, ’tis said, to this hour
Declares that he’s no taste for grapes.

— W.J. Linton (after Aesop)

Note: Not all of the organisations I have worked with or for have had a C-level Executive accountable primarily for Marketing. Where they have, I have normally found the people holding these roles to be better informed about data matters than their peers. I have always found it easy and enjoyable to collaborate with such people. The same goes in general for Marketing Managers. This article is not about Marketing professionals, it is about poorly researched journalism.

Prelude…

I recently came across an article in Marketing Week with the clickbait-worthy headline of Why the rise of the chief data officer will be short-lived (their choice of capitalisation). The subhead continues in the same vein: Chief data officers (ditto) are becoming increasingly common, but for a data strategy to work their appointments can only ever be a temporary fix. Intrigued, I felt I had to avail myself of the wisdom and domain expertise contained in the article (the clickbait worked of course). The first few paragraphs reveal the actual motivation. The piece is a reaction [1] to the most senior Marketing person at easyJet being moved out of his role, which is being abolished, and – as part of the same reorganisation – a Chief Data Officer (CDO) being appointed. Now the first thing to say, based on the article's introductory comments, is that easyJet did not have a Chief Marketing Officer. The role that was abolished was instead Chief Commercial Officer, so there was no one charged full-time with Marketing anyway. The Marketing responsibilities previously supported part-time by the CCO have now been spread among other executives. The next part of the article covers the views of a Marketing Week columnist (pause for irony) before moving on to arrangements for the management of data matters in three UK-based organisations:

• Camelot – who run the UK National Lottery
• Mumsnet – which is a web-site for UK parents
• Flubit – a growing on-line marketplace aiming to compete with Amazon

The first two of these have CDOs (albeit with one doing the role alongside other responsibilities).
Both of these people:

[…] come at data as people with backgrounds in its use in marketing

Flubit does not have a CDO, which is used as supporting evidence for the superfluous nature of the role [2]. Suffice it to say that a straw poll consisting of the handful of organisations that the journalist was able to get a comment from is not the most robust of approaches [3]. Most of the time, the article does nothing more than reflect the continuing confusion about whether or not organisations need CDOs and – assuming that they do – what their remit should be and who they should report to [4]. But then, without – it has to be said – much supporting evidence, the piece goes on to add that:

Most [CDOs – they would probably style it “Cdos”] are brought in to instill a data strategy across the business; once that is done their role should no longer be needed.

Now as a Group Theoretician, I am a great fan of symmetry. Symmetry relates to properties that remain invariant when something else is changed. Archetypally, an equilateral triangle is still an equilateral triangle when rotated by 120° [5]. More concretely, the laws of motion work just fine if we wind the clock forward 10 seconds (which incidentally leads to the principle of conservation of energy [6]). Let's assume that the Marketing Week assertion is true. I claim therefore that it must still be true under the symmetry of changing the C-level role. This would mean that the following also has to be true:

Most [Chief marketing officers] are brought in to instill a marketing strategy across the business; once that is done their role should no longer be needed.

Now maybe this statement is indeed true. However, I can't really see the guys and gals at Marketing Week agreeing with this. So maybe it's false instead. Then – employing reductio ad absurdum – the initial statement is also false [7]. If you don't work in Marketing, then maybe a further transformation will convince you:

Most [Chief financial officers] are brought in to instill a finance strategy across the business; once that is done their role should no longer be needed.

I could go on, but this is already becoming as tedious to write as it was to read the original Marketing Week claim. The closing sentence of the article is probably its most revealing and informative:

[…] marketers must make sure they are leading [the data] agenda, or someone else will do it for them.

I will leave readers to draw their own conclusions on the merits of this piece and move on to other thoughts that reading it spurred in me.

…and Fugue

Sometimes, buried in the strangest of places, you can find something of value, even if the value is different to the intentions of the person who buried it. Around some of the CDO forums that I attend [8] there is occasionally talk about just the type of issue that Marketing Week raises. An historical role that often comes up in these discussions is that of Chief Electrification Officer [9]. This supposedly was an Executive role in organisations as the 19th Century turned into the 20th and electricity grids began to be created. The person ostensibly filling this role would be responsible for shepherding the organisation's transition from earlier forms of power (e.g. steam) to the new-fangled streams of electrons. Of course this role would be very important until the transition was completed; after that, redundancy surely beckoned. Well to my way of thinking, there are a couple of problems here.
The first one of these is alluded to by my choice of the words “supposedly” and “ostensibly” above. I am not entirely sure, based on my initial research [10], that this role ever actually existed. All the references I can find to it are modern pieces comparing it to the CDO role, so perhaps it is apocryphal. The second is somewhat related. Electrification was an engineering problem; indeed the [US] National Academy of Engineering called it “the greatest engineering achievement of the 20th Century”. Surely the people tackling this would be engineers, potentially led by a Chief Engineer. Did the completion of electrification mean that there was no longer a need for engineers, or did they simply move on to the next engineering problem [11]? Extending this analogy, I think that Chief Data Officers are more like Chief Engineers than Chief Electrification Officers, assuming that the latter ever existed. Why the confusion? Well I think part of it is because, over the last decade and a bit, organisations have been conditioned to believe the one-dimensional perspective that everything is a programme or a project [12]. I am less sure that this applies 100% to the CDO role. It may well be that one thing that a CDO needs to get going is a data transformation programme. This may be focused purely on cultural aspects of how an organisation records, shares and otherwise uses data. It may be to build a new (or a first) Data Architecture. It may be to remediate issues with an existing Data Architecture. It may be to introduce or expand Data Governance. It may be to improve Data Quality. Or (and, in my experience, this is often the most likely) a combination of all five of these, plus other work, such as rapid tactical or interim deliveries. However, there is also a large element of data-centric work which is not project-based and instead falls into the category often described as “business as usual” (I loathe this term – I think that Data Operations & Technology is preferable). A handful of examples are as follows (this is not meant to be an exhaustive list) [13]:

1. Addressing architectural debt that results from neglect of Data Assets or the frequently deleterious impact of improperly governed change portfolios [14]. This is often a series of small to medium-sized changes, rather than a project with a discrete scope and start and end dates.
2. More positively, engaging proactively in the change process in an attempt to act as a steward of Data Assets.
3. Establishing a regular Data Audit.
4. Regular Data Management activities.
5. Providing tailored Analytics to help understand some unscheduled or unexpected event.
6. Establishment of a data “SWAT team” to respond to urgent architecture, quality or reporting needs.
7. Running a Data Governance committee and related activities.
8. Creating and managing a Data Science capability.
9. Providing help and advice to those struggling to use Data facilities.
10. Responding to new Data regulations.
11. Creating and maintaining a target operating model for Data and its use.
12. Supporting Data Services to aid systems integration.
13. Production of regular reports and refreshing self-serve Data Repositories.
14. Testing and re-testing of Data facilities subject to change, or to change in source Data.
15. Providing training in the use of Data facilities or the importance of getting Data right-first-time.

The above all point to the need for an ongoing Data Function to meet these needs (and to form the core resources of any data programme / project work).
I describe such a function in my series about The Anatomy of a Data Function. There are of course many other such examples, but instead of cataloguing each of them, let's return to what Marketing Week describe as the central responsibility of a CDO: to formulate a Data Strategy. Surely this is a one-off activity, right? Well, is the Marketing strategy set once and then never changed? If there is some material shift in the overall Business strategy, might the Marketing strategy change as a result? What would be the impact on an existing Marketing strategy of insight showing that this was being less than effective; might this lead to the development of a new Marketing strategy? Would the Marketing strategy need to be revised to cater for new products and services, or new segments and territories? What would be the impact on the Marketing strategy of an acquisition or divestment? As anyone who has spent significant time in the strategy arena will tell you, it is a fluid area. Things are never set in stone and strategies may need to be significantly revised, or indeed abandoned and replaced with something entirely new, as dictated by events. Strategy is not a fire-and-forget exercise, not if you want it to be relevant to your business today, as opposed to a year ago. Specifically with Data Strategy (as I explain in Building Momentum – How to begin becoming a Data-driven Organisation), I would recommend keeping it rather broad-brush at the beginning of its development, allowing it to be adapted based on feedback from initial interim work and thus ensuring that it better meets business needs. So expecting a Data Strategy (or any other type of strategy) to be done and dusted, with the key strategist dispensed with, is probably rather naive.

Coda

It would be really nice to think that sorting out their Data problems and seizing their Data opportunities are things that organisations can do once and then forget about. With twenty years' experience of helping organisations to become more Data-centric, often with technical matters firmly in the background, I have to disabuse people of this all too frequent misconception. To adapt the National Canine Defence League's [15] long-lived slogan from 1978:

A Chief Data Officer is for life, not just for Christmas.

With that out of the way, I'm off to write a well-informed and insightful article about how Marketing Departments should go about their business. Wish me luck!

Notes

From: peterjamesthomas.com, home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases

# More Definitions in the Data and Analytics Dictionary

The peterjamesthomas.com Data and Analytics Dictionary is an active document and I will continue to issue revised versions of it periodically. Here are 20 new definitions, including the first from other contributors (thanks Tenny!): Remember that The Dictionary is a free resource and quoting contents (ideally with acknowledgement) and linking to its entries (via the buttons provided) are both encouraged. People are now also welcome to contribute their own definitions. You can use the comments section here, or the dedicated form. Submissions will be subject to editorial review and are not guaranteed to be accepted.

From: peterjamesthomas.com, home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases

# Version 2 of The Anatomy of a Data Function

Between November and December 2017, I published the three parts of my Anatomy of a Data Function.
These were cunningly called Part I, Part II and Part III. Eight months is a long time in the data arena and I have now issued an update. The changes in Version 2 are confined to the above organogram and Part I of the text. They consist of the following:

1. Split Artificial Intelligence out of Data Science in order to better reflect the ascendancy of this area (and also its use outside of Data Science).
2. Change Data Science to Data Science / Engineering in order to better reflect the continuing evolution of this area.

My aim will be to keep this trilogy up-to-date as best practice Data Functions change their shapes and contents. If you would like help building or running your Data Function, or would just like to have an informal chat about the area, please get in touch.

From: peterjamesthomas.com, home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases

# An in-depth interview with experienced Chief Data Officer Roberto Maranca

Part of the In-depth series of interviews

Today's interview is with Roberto Maranca. Roberto is an experienced and accomplished Chief Data Officer, having held that role in GE Capital and Lloyds Banking Group. Roberto and I are both founder members of the IRM(UK) Chief Data Officer Executive Forum and I am delighted to be able to share the benefit of his insights with readers.

Can you perhaps highlight a single piece of work that was important to you, added a lot of value to the organisation, or which you were very proud of for some other reason?

I always had a thing about building things to last, so I have always tried to achieve a sustainable solution that doesn't fall apart after a few months (in Six Sigma terms you would call it “minimising the long term sigma shift”, but we will talk about that another time). So trying to get the change process to be mindful of “Data” has been my quest since day one in the job of CDO. For this reason, my most important piece of work was probably the creation of the first link between the PMO process in GEC and the Data Lineage and Quality Assurance framework; I had to insist quite a bit to introduce this, design it, test it and run it. Now of course, after the completion of the GEC sale, it has been lost “like tears in the rain”, to cite one of the best movies ever [1].

What was your motivation to take on Chief Data Officer roles and what do you feel that you bring to the CDO role?

I touched on some reasons in my introductory comments. I believe there is a serendipitous combination of acquired skills that allows me to see things in a different way. I spent most of my working life in IT, but I have a Masters in Aeronautical Engineering and a diploma in what we in Italy call “Classical Studies”; basically I have A levels in Latin, Greek, Philosophy and History. So for example, together with my pilot's licence achieved over weekends, I have attended a drama evening school for a year (of course in my bachelor days). Joking apart, the “art” of being a CDO requires a very rich and versatile background because it is so pioneering; ergo, if I can draw from my study of flow dynamics to come up with a different approach to lineage, or use philosophy to embed a stronger data-driven culture, I feel it is a marked plus.

We have spoken about the CDO role being one whose responsibilities and main areas of focus are still sometimes unclear. I have written about this recently [2]. How do you think the CDO role is changing in organisations and what changes need to happen?
I mentioned the role being pioneering: compared to more established roles – CFO, COO and even CIO – the CDO suffers from ambiguity, differing opinions and the lack of a clear career path. All of us in this space have to deal with something like inserting a completely new organ into a body that has a very strong immunological response: although the whole body is dying for the function that the new organ provides (and, with the new breed of regulation about, dying for lack of good and reliable data is not an exaggeration), there is the pernickety work of linking up blood vessels and adjusting every part of the organisation so that the change is harmonious, productive and lasting. But every company starts from a different level of maturity and a different status quo, so it is left to the CDO to come up with a modus operandi that will work and bring that specific environment to a recognisable standard.

The Chief Data Officer has been described as having “the toughest job in the executive C-suite within many organizations” [3]. Do you agree and – if so – what are the major challenges?

I agree and it is simply demonstrated: pick any Company's Annual Report and do a word search for “data quality”, “data management”, “data science” or anything else relevant to our profession; you are not going to find many. IT has been around for a while longer, and yet technology is only now starting to appear in the firm's “manifesto”, mostly for things that are a risk, like cyber security. Thus the assumption is: if it is not seen as a differentiator to communicate to the shareholders and the wider world, why should it be of interest to the Board? It is not anyone's fault and my gut feeling is that GDPR (or perhaps Cambridge Analytica) is going to change this, but we probably need another generational turnover to have CDOs “safely” sitting in executive groups. In the meantime, there is a lot we can do, maybe sitting immediately behind someone who is sitting in that crucial room.

We both believe that cultural change has a central role in the data arena. Can you share some thoughts about why this is important?

Data can't be like a fad diet; it can't be a program you start and finish. Companies have to understand that you have to set yourself on a path of “permanent augmentation”. The only way to do this is to change for good the attitude of the entire company towards data. Maybe starting from the first ambiguity: data is not the bits and bytes coming out of a computer screen, but rather the set of concepts and nouns we use in our businesses to operate, make products and serve our customers. If you flatten your understanding of data to its physical representation, you will never solve the tough enterprise problems. Hence, if it is not principally a problem of centralisation of data, but rather one of centralisation of knowledge and standardisation of behaviours, then it is something inherently close to people and the common set of things in a company that we can call “culture”.

Accepting the importance of driving a cultural shift, what practical steps can you take to set about making this happen?

In my keynotes, I often quote the Swiss philosopher (don't tell me I didn't warn you!) Henri Amiel:

Pure truth cannot be assimilated by the crowd: it must be communicated by contagion.

This is especially the case when you are confronted with large numbers of colleagues and small data teams. Creating a simple mantra that can be inoculated into many parts of the organisation helps to create a more receptive environment.
So CDOs should first be keen marketeers, able to create a simple brand and to pursue a relentless “propaganda” campaign. Secondly, if you want to bring change, you should focus where the change happens and make sure that wherever the fabric of the company changes, i.e. big programmes or transformations, data is a top priority.

What are the potential pitfalls that you think people need to be aware of when embarking on a data-centric cultural transformation programme?

First is definitely failing to manage your own expectations on speed and acceptance; it takes time and patience. Long-established organisations cannot leap into a brighter future just because an enlightened CDO shows them how. Second, and sort of related, it is a problem to think that things can happen by management edict and CDO policy compliance; there is a lot of niftier psychology and sociology to weave into this.

A two-part question. What do you see as the role of Data Governance in the type of cultural change you are recommending? Also, do you think that the nature of Data Governance has either changed or possibly needs to change in order to be more effective?

The CDO's arrival at a discussion table is very often followed by statements like “…but we haven't got resources for the Governance” or “We would like to, but Data Governance is such an aggro”. My simple definition of Data Governance is a process that allows Approved Data Consumers to obtain data that satisfies their consumption requirements, in accordance with the Company's approved standards of traceability, meaning, integrity and quality. Under this definition there is no implied intention of subjecting colleagues to gruelling bureaucratic processes; the issue is the status quo. Today, in the majority of firms, without a cumbersome process of checks and balances, it is almost impossible to fulfil such a definition. The best Data Governance is the one you don't see; it is the one you experience when you get the data you need for your job without asking. This is the true essence of Data Democratisation, but few appreciate that it is achieved with a very strict and controlled in-line Data Governance framework sitting on three solid bastions of Metadata, User Access Controls and Data Classification.

Can you comment on the relationship between the control of data and its exploitation; between Analytics and Governance if you will? Do these areas both need to be part of the CDO's remit?

Oh… this is about the tale of the two tribes, isn't it? The Governors vs. the Experimenters, the dull CDOs vs. the funky CAOs. Of course they are the yin and the yang of Data: you can't have proper insight delivered to your customers or management without a proper Data Governance process – or should we call it a “Data Enablement” process, following the previous answer. I do believe that the next incarnation of the CDO is more a “Head of Data”, who has three main pillars underneath: the first is the previous CDO role, all about governance, control and direction; the second is the R&D of data; and the third, so far largely forgotten, is the Operational side – the Head of Data should have business operational ownership of the critical Data Assets of the Company.

The cultural aspect segues into thinking about people. How important is managing the people dimension to a CDO's success?

Immensely. Ours is a pastoral job; we need to walk around, interact on internal social media, animate communities, know almost everyone and be known by everyone.
People are very anxious about what we do, because all the wonderful things we are trying to achieve will, they believe, generate “productivity”, and that in layman's terms means layoffs. We can however shift that anxiety to curiosity: reaching out, spreading the above-mentioned mantra, but also completely rethinking training and reskilling. Subsequently that curiosity should transform into engagement, which will deliver sustainable cultural change.

I have heard you speak about “intelligent data management”; can you tell me some more about what you mean by this? Does this relate to automation at all?

My thesis at Uni in 1993 used AI algorithms, and we have all been playing with MDM, DQM, RDM and Metadata for ages, but it doesn't feel like we have yet cracked a Science of Data (NB this is different from Data Science!) that could show us how to resolve our problems of managing data with 21st century techniques. I think our evolutionary path should move us from “last month you had 30k wrong postcodes in your database” to “next month we are predicting 20% fewer wrong address complaints”. In doing so there is an absolute need to move from fragmented knowledge around data to centralised harnessing of the data ecosystem, and that can only be achieved by tuning in to the V.O.M. (Voice of the Machines): listening, deriving insight on how that ecosystem is changing, simulating responses to external or internal factors and designing changes with data by design (or even better with everything by design). I have yet to see automated tools that do all of that without requiring man-years to decide what is what, but one can only stay hopeful.

Finally, how do you see the CDO role changing in coming years?

To the ones that think we are a transient role, I respond that Compliance should be everyone's business, and yet we have Compliance Officers. I think that over time the Pioneers will give way to the Strategists, who will oversee the making of “Data Products” that best suit the Business Strategist, and maybe one day being CEO will be the epitome of our career ladder; but I am not rushing to it, I love too much having some spare time to spend with my family and sailing.

Roberto, it is always a pleasure to speak. Thank you for sharing your ideas with us today.

Roberto Maranca can be reached at r.maranca@outlook.com and has a social media presence on LinkedIn and Twitter (@RobertoMaranca).

Disclosure: At the time of publication, neither peterjamesthomas.com Ltd. nor any of its Directors had any shared commercial interests with Roberto Maranca.

If you are a Chief Data Officer, a Chief Analytics Officer, a Director of Data, or hold some other “Top Data Job” and would like to share your thoughts with the readers of this site in an interview like this one, please get in contact.

Notes
[1]
[2] The CDO – A Dilemma or The Next Big Thing?
[3] Randy Bean of New Vantage Partners quoted in The CDO – A Dilemma or The Next Big Thing?

From: peterjamesthomas.com, home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases
# Why is there no blackbody radiation in the high frequency section of Planck's curve?

Upon examining the curve describing blackbody thermal radiation, I noticed that the curve approaches (but never reaches) zero with increasing wavelength, but on the other hand it actually does reach zero at high frequency, so I was wondering why?

• Good question. This confused many physicists for years and was at the birth of quantum mechanics. Apr 4 at 1:20
• What makes you think it reaches zero? Apr 4 at 4:14
• Let's state the behaviour carefully. Plotted against frequency, intensity scales as $\nu^3/(e^{\beta h\nu}-1)$, reaching $0$ at $\nu=0$ but not at any finite $\nu>0$. Plotted against wavelength, intensity scales as $\lambda^{-5}/(e^{\beta hc/\lambda}-1)$, which is asymptotic to $\lambda^{-5}e^{-\beta hc/\lambda}$ ($\lambda^{-4}/(\beta hc)$) for small (large) $\lambda>0$, so the $\lambda\to0^+,\,\lambda\to\infty$ one-sided limits are both $0$. – J.G. Apr 4 at 20:47
• There is a contradiction in the body of your question. May 11 at 11:05

In the frequency domain, the power spectral density behaves like $f^2$ as the frequency approaches zero and (up to constants) like $f^3e^{-f}$ as the frequency approaches infinity. In neither case does it reach zero at any finite frequency. You can argue about at which end it approaches zero more rapidly. Plotted on a log frequency axis, the high-frequency end of the spectrum would appear to approach zero very abruptly because of the exponential. Classical (non-quantum) electrodynamics posits that for a radiating (hot) body, the number of available oscillation modes grows without bound at higher and higher frequencies, and so the energy carried by those modes blows up too at high frequencies – a result called the ultraviolet catastrophe, which was decidedly not exhibited by actual hot objects in the laboratory where instead, as you point out, the energy contained per slice of the frequency spectrum falls towards zero with increasing frequency. Max Planck fixed the ultraviolet catastrophe by deriving from scratch an entirely new function for the overall shape of the entire blackbody spectrum, based on the premise that the energy being exchanged between the walls of the blackbody cavity and the radiation inside it was not a continuous variable but was subdivided into quantized (but very tiny) chunks instead. Planck's full derivation is long and nontrivial, but it works by making it progressively less likely, with increasing frequency, that there will be quanta sufficiently energetic to populate the highest-energy portions of the spectrum, suppressing the ultraviolet catastrophe and yielding a calculated shape for the spectrum which follows the real spectrum nearly perfectly at both low and high frequencies. Many consider his breakthrough to represent the birth of quantum mechanics. To be sure, his work laid down the rails upon which mighty trains would soon run.

• I know about the ultraviolet catastrophe. What I don't understand is why a blackbody hypothetically emits waves of ALL wavelengths/frequencies, yet when you examine Planck's curve there is clearly an interval in which there is no intensity, in other words no waves emitted with such frequencies. Planck tried to minimize the contribution of high frequency oscillators but as I understand it he actually dismissed them altogether. Apr 4 at 22:55
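To make the resolution of this question concrete: the Planck curve is never exactly zero at any finite frequency, it is merely suppressed exponentially. The short Python sketch below (an illustration added alongside the exchange above, not part of it; the temperature and sample frequencies are arbitrary choices) evaluates the spectral radiance $B_\nu(T)=\dfrac{2h\nu^3}{c^2}\,\dfrac{1}{e^{h\nu/k_BT}-1}$ and shows the values shrinking rapidly while remaining strictly positive:

```python
import math

# CODATA values of the physical constants (SI units)
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m / s
kB = 1.380649e-23     # Boltzmann constant, J / K

def planck_nu(nu, T):
    """Spectral radiance B_nu(T) = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)."""
    x = h * nu / (kB * T)
    # math.expm1(x) computes exp(x) - 1 accurately, even for small x
    return (2.0 * h * nu**3 / c**2) / math.expm1(x)

T = 5800.0  # roughly the temperature of the Sun's photosphere, in K
for nu in (1e13, 1e14, 1e15, 1e16, 2e16):
    print(f"nu = {nu:.0e} Hz  ->  B_nu = {planck_nu(nu, T):.3e} W m^-2 Hz^-1 sr^-1")
```

At $\nu = 2\times10^{16}$ Hz the exponent $h\nu/k_BT$ is already about 165, so $B_\nu$ comes out of order $10^{-73}$ in SI units: utterly negligible on a linear plot, which is why the curve looks as though it "reaches zero", but not actually zero.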
# Rectangle - sides ratio

Calculate the area of a rectangle whose sides are in ratio 3:13 and whose perimeter is 673.

Correct result: S = 17250.2

#### Solution:

$a : b = 3 : 13 \Rightarrow a = \tfrac{3}{13}\,b \approx 0.231\,b$
$673 = 2(a+b) = 2\left(\tfrac{3}{13}\,b + b\right) = \tfrac{32}{13}\,b \approx 2.462\,b$
$b = \tfrac{13 \cdot 673}{32} \approx 273.41$
$a = \tfrac{3}{13}\,b = \tfrac{3 \cdot 673}{32} \approx 63.09$
$S = a \cdot b \approx 63.09 \cdot 273.41 \approx 17250.2$

(An exact-fraction check of this result appears after the list of similar problems below.)

## Next similar math problems:

• Soccer balls: Pupils in one class want to buy two soccer balls together. If each of them brings 12.50 euros, they will be 100 euros short; if each brings 16 euros, 12 euros will be left over. How many students are in the class?
• Area and perimeter of rectangle: The content area of the rectangle is 3000 cm2; one dimension is 10 cm larger than the other. Determine the perimeter of the rectangle.
• The circumference: The circumference and width of the rectangle are in a ratio of 5:1. Its area is 216 cm2. What is its length?
• Wire fence: The wire fence around the garden is 160 m long. One side of the garden is three times longer than the other. How many meters do the individual sides of the garden measure?
• In a: In a triangle, the ratio a:c is 3:2 and a:b is 5:4. The perimeter of the triangle is 74 cm. Calculate the lengths of the individual sides.
• Two patches: Peter taped the wound with two rectangular patches (one over the other to form the letter X). The area sealed with both patches at the same time had a content of 40 cm2 and a circumference of 30 cm. One of the patches was 8 cm wide. What was the width of the
• Rectangular garden: The perimeter of Peter's rectangular garden is 98 meters. The width of the garden is 60% shorter than its length. Find the dimensions of the rectangular garden in meters. Find the garden area in square meters.
• Rectangle field: The field has the shape of a rectangle with a length of 119 m and a width of 19 m. By how many meters must the length be shortened and the width increased to maintain its area while increasing its circumference by 24 m?
• Isosceles triangle: In an isosceles triangle, the length of the arm and the length of the base are in ratio 3 to 5. What is the length of the arm?
• AP RT triangle: The lengths of the sides of a right triangle form an arithmetic progression; the longer leg is 24 cm long. What are the perimeter and area?
• Three sides: Side b is 2 cm longer than side c, side a is 9 cm shorter than side b. The triangle circumference is 40 cm. Find the lengths of sides a, b, c.
• Right triangle eq2: Find the lengths of the sides and the angles in the right triangle. Given area S = 210 and perimeter o = 70.
• Ratio of sides: The triangle has a circumference of 21 cm and the lengths of its sides are in a ratio of 6:5:3. Find the length of the longest side of the triangle in cm.
• Perimeter of a rectangle: If the perimeter of a rectangle is 114 meters and the length is twice the width plus 6 meters, what are the length and width?
• Isosceles triangle: The perimeter of an isosceles triangle is 112 cm. The ratio of the length of the arm to the length of the base is 5:6. Find the triangle's area.
• Plot: The length of the rectangle is 8 smaller than three times the width. If we increase the width by 5% of the length and the length is reduced by 14% of the width, the circumference of the rectangle will be increased by 30 m. What are the dimensions of the recta
• Rectangle: The length of one side of the rectangle is three times the length of the second side. What are the dimensions of the rectangle if its circumference is 96 cm?
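Here is the exact-fraction check promised above (a verification sketch, not part of the original solution); it also shows where the small rounding in the $0.231$ step comes from:

```python
from fractions import Fraction

# Sides in ratio 3:13 with perimeter 673: write a = 3k, b = 13k,
# so 2(3k + 13k) = 673 gives 32k = 673.
k = Fraction(673, 32)
a, b = 3 * k, 13 * k

assert 2 * (a + b) == 673          # the perimeter checks out exactly
print(float(a), float(b))          # 63.09375 273.40625
print(float(a * b))                # 17250.2255859375, which rounds to 17250.2
```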
# 10.1 Experiments with the three-spectral inverse problem for a beaded string

This report summarizes work done as part of the Physics of Strings PFUG under Rice University's VIGRE program. VIGRE is a program of Vertically Integrated Grants for Research and Education in the Mathematical Sciences under the direction of the National Science Foundation. A PFUG is a group of Postdocs, Faculty, Undergraduates and Graduate students formed around the study of a common problem. This module describes the three-spectral inverse problem for a beaded string and presents experimental results of its application.

## Introduction

How well can we predict a string's mass distribution by simply listening to its vibration? While considering this question, previous experiments have been limited in the types of strings that could be studied. When only considering two spectra (fixed-fixed and fixed-flat), acquiring the necessary data required us to force the beaded strings to be symmetric about the midpoint. This condition has severely limited the possible experiments. However, recent theoretical developments by Boyko and Pivovarchik have expanded the regime of experimental work with beaded strings. Here we consider three fixed-fixed spectra (whole string, clamped left section, and clamped right section), and show that the information contained in these three spectra may be written as two sets of two-spectra problems. Thus, for an arbitrary beaded string, it is possible to measure the frequencies of vibration of three sections of the string. It is then possible to convert these spectra into two separate inverse problems with well-known solutions. An algorithm for the recovery of the length and mass information of the string is given by Cox et al. Here we present the theoretical framework and an experimental setup to predict the masses and lengths of any arbitrary beaded string, as long as the string meets our much shorter list of requirements.

## The three-spectral forward problem

We begin by considering a beaded string with at least two beads. The string is artificially separated at an interior point into a left part and a right part, with each part containing at least one mass. The two parts join to form a continuous string. This string vibrates with particular characteristic frequencies depending on the tension in the string, the masses of the beads, and the lengths between them. The forward problem is concerned with finding the spectra given a beaded string's properties. The tension is given by $\sigma$. The quantities $\ell_k$ and $m_k$ represent the lengths between the beads and the masses of the beads for the left part of the string. The quantities $\tilde{\ell}_k$ and $\tilde{m}_k$ represent those respective properties for the right part. There are $n_1$ masses on the left and $n_2$ masses on the right. These properties describe a uniquely determined beaded string. Let $v_k$ and $\tilde{v}_k$ represent the displacements of the masses in the vertical direction. The equations of motion for this system are governed by the following system of ODEs:

$$\frac{\sigma}{\ell_k}\left(v_k(t)-v_{k+1}(t)\right)+\frac{\sigma}{\ell_{k-1}}\left(v_k(t)-v_{k-1}(t)\right)+m_k v_k''(t)=0,\qquad k=1,2,\cdots,n_1$$
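Since the forward problem above reduces to a matrix eigenvalue problem, the equations of motion for a fixed-fixed section can be assembled directly. Below is a minimal Python sketch of the forward problem under assumed example values (the masses, gap lengths and unit tension are illustrative, not data from the experiment): it builds the stiffness and mass matrices implied by the ODE system above and returns the natural frequencies as square roots of the generalised eigenvalues.

```python
import numpy as np
from scipy.linalg import eigh

def beaded_string_frequencies(masses, lengths, sigma=1.0):
    """Natural frequencies of a fixed-fixed beaded string.

    masses  : the n bead masses m_1 .. m_n
    lengths : the n + 1 gaps l_0 .. l_n between the fixed ends and the beads
    sigma   : string tension

    Assembles K and M from the equations of motion above and solves the
    generalised eigenvalue problem K v = omega^2 M v.
    """
    n = len(masses)
    assert len(lengths) == n + 1
    K = np.zeros((n, n))
    for k in range(n):
        # Restoring terms sigma / l_{k-1} and sigma / l_k act on bead k
        K[k, k] = sigma / lengths[k] + sigma / lengths[k + 1]
        if k + 1 < n:
            K[k, k + 1] = K[k + 1, k] = -sigma / lengths[k + 1]
    M = np.diag(masses)
    omega_squared = eigh(K, M, eigvals_only=True)
    return np.sqrt(omega_squared)

# Example (hypothetical values): three beads, equal gaps, unit tension
print(beaded_string_frequencies([1.0, 2.0, 1.0], [0.25, 0.25, 0.25, 0.25]))
```

Running the same routine on the whole string and on its clamped left and right sections would produce the three fixed-fixed spectra that the inverse problem takes as input.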
At high concentrations (>0.01 M), the relation between the absorptivity coefficient and the absorbance is no longer linear. This is due to the electrostatic interactions between the quantum dots in close proximity. Another effect seen at high concentration is the scattering of light from the large number of quantum dots. This is why the Beer-Lambert law works very well for dilute solutions but fails for very concentrated ones: the linearity assumption only holds at low concentrations of the analyte, and the presence of stray light introduces further deviations.
# Some notations

\begin{equation*} S_n:=\left\{\mathbf a\in\mathbb R^{n}\,\middle|\,\sum_ja_j=1\right\}. \end{equation*}

\begin{equation*} \mathbf1:=\left(\begin{matrix}1\\\vdots\\1\end{matrix}\right),\qquad \mathbf v_1:=\left(\begin{matrix}\mathbf v\\1\end{matrix}\right),\qquad \mathbf M_1:=\left(\begin{matrix}\mathbf M\\\mathbf1^{\mathrm T}\end{matrix}\right). \end{equation*}

# Introduction to barycentric coordinates

Let $\mathbf v_j$ be the vertices of a simplex in $\mathbb R^{n-1}$; then any point $\mathbf r\in\mathbb R^{n-1}$ can be expressed by a tuple $\boldsymbol\lambda\in S_n$ such that $\mathbf r=\sum_j\lambda_j\mathbf v_j$. If we regard $\mathbf V$ as the $\left(n-1\right)\times n$ matrix whose $j$th column is $\mathbf v_j$, then we have $\mathbf r=\mathbf V\boldsymbol\lambda$. Along with the normalization condition $\sum_j\lambda_j=1$, i.e. $\mathbf1^{\mathrm T}\boldsymbol\lambda=1$, we have $\mathbf r_1=\mathbf V_1 \boldsymbol\lambda$, so $$\boldsymbol\lambda=\mathbf V_1^{-1} \mathbf r_1. \label{as Cartesian}$$ Usually, for convenience, we choose the origin of the Cartesian coordinate system such that $\sum_j\mathbf v_j=\mathbf0$, i.e. $$\mathbf V\mathbf1=\mathbf0. \label{barycenter zero}$$

# The research object

We are going to show that the equation $$\boldsymbol\lambda^{\mathrm T}\boldsymbol\lambda=1 \label{research object}$$ depicts a hyperellipsoid whose center is $\mathbf0$ and whose tangent hyperplane at $\mathbf v_j$ is parallel to the hyperplane that passes through all $\mathbf v_k$ with $k\ne j$. We are going to rewrite Formula \ref{research object} in the form of a quadric of $\mathbf r$. Substituting Formula \ref{as Cartesian} into \ref{research object}, we can derive that $$1=\boldsymbol\lambda^{\mathrm T}\boldsymbol\lambda =\left(\mathbf V_1^{-1} \mathbf r_1\right)^{\mathrm T} \left(\mathbf V_1^{-1} \mathbf r_1\right) =\mathbf r_1^{\mathrm T} \left(\left(\mathbf V_1^{-1} \right)^{\mathrm T}\mathbf V_1^{-1} \right)\mathbf r_1. \label{r quadric}$$ Let $$\mathbf Q:=\left(\mathbf V_1^{-1} \right)^{\mathrm T}\mathbf V_1^{-1} =\left(\mathbf V_1 \mathbf V_1^{\mathrm T}\right)^{-1}, \label{Q def}$$ and substitute Formula \ref{Q def} into \ref{r quadric}; then we can derive the quadric of $\mathbf r_1$ $$\mathbf r_1^{\mathrm T}\mathbf Q\mathbf r_1=1.$$ Note that besides $\mathbf r$, there is a $1$ in $\mathbf r_1$, so the quadric is a $2$nd-degree polynomial of $\mathbf r$, including quadratic terms, linear terms and a constant term. In order to show that the quadric is a hyperellipsoid whose center is $\mathbf0$, we need to prove that the coefficients of the linear terms are all $0$ and the coefficients of the square terms are all positive.

# Proving that the center of the quadric is $\mathbf0$

Note that $\mathbf Q=\left(\mathbf V_1\mathbf V_1^{\mathrm T}\right)^{-1}$, so $$\mathbf Q^{-1}= \left(\begin{matrix}\mathbf V\\\mathbf1^{\mathrm T}\end{matrix}\right) \left(\begin{matrix}\mathbf V^{\mathrm T}&\mathbf1\end{matrix}\right)= \left(\begin{matrix}\mathbf V\mathbf V^{\mathrm T}&\mathbf V\mathbf1\\\mathbf1^{\mathrm T}\mathbf V^{\mathrm T}&n\end{matrix}\right).
\label{Q^-1}$$ Substituting Formula \ref{barycenter zero} into \ref{Q^-1}, we can derive that \begin{equation*} \mathbf Q=\left(\begin{matrix}\mathbf V\mathbf V^{\mathrm T}&\mathbf0\\\mathbf0^{\mathrm T}&n\end{matrix}\right)^{-1}= \left(\begin{matrix}\mathbf W&\mathbf0\\\mathbf0^{\mathrm T}&\frac1n\end{matrix}\right), \end{equation*} where $\mathbf W:=\left(\mathbf V\mathbf V^{\mathrm T}\right)^{-1}$, so \begin{equation*} \mathbf r_1^{\mathrm T}\mathbf Q\mathbf r_1= \mathbf r^{\mathrm T}\mathbf W\mathbf r+\frac1n. \end{equation*} The linear terms are all $0$, so the center of the quadric is $\mathbf0$.

# Proving that the quadric is a hyperellipsoid

We need to show that the square terms are all positive; in other words, that the components on the diagonal of $\mathbf W$ are all positive. Because $\mathbf Q= \left(\mathbf V_1^{-1}\right)^{\mathrm T}\mathbf V_1^{-1}$, we have \begin{equation*} \left(\mathbf Q\right)_{j,j}= \sum_k\left(\mathbf V_1^{-1}\right)_{k,j}^2>0, \end{equation*} where the sum is strictly positive because $\mathbf V_1^{-1}$ is invertible and hence has no zero column. Since the diagonal components of $\mathbf W$ appear among the diagonal components of $\mathbf Q$, they are all positive.

# Proving that its tangent hyperplane at $\mathbf v_j$ is parallel to $P_j$

Here $P_j$ is defined as the hyperplane that passes through all $\mathbf v_k$ with $k\ne j$. The equation of the quadric is $F\left(\mathbf r\right)=0$, where the quadratic function \begin{equation*} F\left(\mathbf r\right):=\mathbf r^{\mathrm T}\mathbf W\mathbf r +\frac1n-1. \end{equation*} According to geometry, the normal vector of the quadric at $\mathbf v_j$ is the gradient of $F$ at $\mathbf v_j$, which is \begin{equation*} \boldsymbol\nu_j:= \left.\frac{\partial F\left(\mathbf r\right)}{\partial\mathbf r}\right| _{\mathbf r=\mathbf v_j}= 2\mathbf W\mathbf v_j. \end{equation*} Now consider the normal vector $\mathbf m_j$ of $P_j$. Assume that \begin{equation*} P_j:n\mathbf m_j^{\mathrm T}\mathbf r+2=0. \end{equation*} The equation of $P_j$ should hold when $\mathbf r=\mathbf v_k$ for all $k\ne j$, so we can derive $n-1$ linear equations with respect to $\mathbf m_j$ $$\forall k\ne j:n\mathbf m_j^{\mathrm T}\mathbf v_k+2=0. \label{equations for m}$$ If we can show that $$\mathbf m_j=\boldsymbol\nu_j=2\mathbf W\mathbf v_j \label{solution for m}$$ is a solution to Formula \ref{equations for m}, then we can say that the two hyperplanes are parallel. Thus, we need to verify the equations derived from substituting Formula \ref{solution for m} into \ref{equations for m} \begin{equation*} \forall k\ne j:n\mathbf v_j^{\mathrm T}\mathbf W\mathbf v_k+1=0, \end{equation*} which is to say that the $n\times n$ matrix \begin{equation*} \mathbf P:=\mathbf V^{\mathrm T}\mathbf W\mathbf V= \mathbf V^{\mathrm T}\left(\mathbf V\mathbf V^{\mathrm T}\right)^{-1} \mathbf V \end{equation*} is such a matrix that all of its components except those on its diagonal are $-\frac1n$. According to conclusions in matrix analysis, if we regard $\mathbf V^{\mathrm T}$ as $n-1$ $n$-dimensional vectors, then $\mathbf P$ is the orthogonal projection in $\mathbb R^n$ onto the linear subspace spanned by these $n-1$ vectors. Note that with Formula \ref{barycenter zero}, we can say that the subspace is just the hyperplane whose normal vector is $\mathbf1$. With this conclusion, we can easily write out the form of $\mathbf P$, because we just need to write out one basis $\mathbf B$ of the subspace. Writing out $\mathbf B$ only requires finding $n-1$ linearly independent vectors that are perpendicular to $\mathbf1$.
For example, \begin{equation*} \mathbf B:=\left(\begin{matrix} n-1&-1&-1&\cdots&-1\\-1&n-1&-1&\cdots&-1 \\-1&-1&n-1&\cdots&-1\\\vdots&\vdots&\vdots&\ddots&\vdots \\-1&-1&-1&\cdots&n-1\\-1&-1&-1&\cdots&-1 \end{matrix}\right). \end{equation*} Then, we have \begin{equation*} \mathbf P=\mathbf B\left(\mathbf B^{\mathrm T}\mathbf B\right)^{-1} \mathbf B^{\mathrm T}. \end{equation*} After some calculation, we can derive that the components of $\mathbf P$ are $1-\frac1n$ on the diagonal and $-\frac1n$ elsewhere, which is what we want to show. We have proved that the tangent hyperplane of the quadric at $\mathbf v_j$ is parallel to $P_j$.
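As a sanity check on the claimed form of $\mathbf P$, one can verify it numerically. The following sketch is not part of the note; the equilateral-triangle vertices are an arbitrary example chosen so that $\mathbf V\mathbf1=\mathbf0$:

import numpy as np

n = 3
# vertices of an equilateral triangle centered at the origin, so V @ 1 = 0
V = np.array([[1.0, -0.5, -0.5],
              [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
P = V.T @ np.linalg.inv(V @ V.T) @ V
# expected: 1 - 1/n on the diagonal and -1/n everywhere else
assert np.allclose(P, np.eye(n) - np.ones((n, n)) / n)
print(np.round(P, 6))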
# 15.3.4.4.6 The Script Before Install Page

## Dialog Box Controls

Python Package Check (LabTalk Script): Add script to this box to prompt users who install a user-defined Python fitting function to install the Python packages that the function requires. When an end-user drops the FDF file onto the Origin workspace, the script is executed. The page cannot be empty (Python functions only). If the required packages have not been installed, the end-user is prompted to allow package installation. If the required packages are already installed, the end-user is prompted to add the fitting function to a category.

An example script follows. Substitute your required package(s) inside the parentheses and double-quote marks, as needed:

[BeforeInstall]
if(Python.chk("pandas cv2(opencv-python)") > 1)
    return 1; //should not install FDF
return 0; //proceed
Speaker: Kuang-Ru Wu (Institute of Mathematics, Academia Sinica)
Positively curved Finsler metrics on vector bundles
2021-09-24 (Fri) 16:00 - 17:00
Seminar Room 617, Institute of Mathematics (NTU Campus)

While the equivalence between ampleness and positivity holds for vector bundles of rank one, its higher-rank counterpart, known as Griffiths' conjecture, is still open. There is also a similar but weaker conjecture by Kobayashi, who proposed to use Finsler rather than Hermitian metrics to study the equivalence. We will review these two conjectures and state our progress. One of our results is that we can construct a positively curved Finsler metric on $E$ if the symmetric power of the dual, $S^k E^*$, has a certain negatively curved Hermitian metric.

(1)【Physical】: Visitors need to show the gate guard an NTU Visitor Pass to enter the campus. Please fill in the following form to apply for a 1-Day Pass.
★ Registration for Visitor Pass (deadline: 3 PM, Sep. 23): Registration form
(2)【Online】: You will receive a WebEx link to the meeting as long as you have subscribed to the mailing list of the learning seminar. Please be advised that subscription is subject to the approval of the organizer. (The Institute reserves the right to restrict online access; registration does not guarantee admission to the video discussion.)
★ Registration for online link (deadline: 3 PM, Sep. 23): Registration form
# Numbering levels of a tree [closed]

I find the need to use an explicit level numbering for a tree, i.e. in the tree:

      A
     / \
    B   B
   / \ / \
  C  C C  C

should I number level C as the 3rd and level A as the first, or the opposite? Since going 'up' the tree is going towards A, should the numbering reflect that as well? If possible, please supply an authoritative CS reference. Thanks!

## closed as unclear what you're asking by Raphael♦ Nov 17 '14 at 18:04

• You don't say what you need the numbering for, so how can anyone say which numbering is "best"? – Raphael Nov 17 '14 at 18:04
• If there is no consensus on level numbering, this is a good answer as well. – Jaaz Nov 18 '14 at 11:24
# Harry Potter and the Philosopher's Stone (notes)

Ten-year-old Harry Potter is an orphan who lives in the fictional London suburb of Little Whinging, Surrey, with the Dursleys: his uncaring Aunt Petunia, loathsome Uncle Vernon, and spoiled cousin Dudley. The Dursleys barely tolerate Harry, and Dudley bullies him. One day Harry is astonished to receive a letter addressed to him in the cupboard under the stairs (where he sleeps). Before he can open the letter, however, Uncle Vernon takes it. Letters for Harry subsequently arrive each day, in increasing numbers, but Uncle Vernon tears them all up, and finally, in an attempt to escape the missives, the Dursleys go to a miserable shack on a small island. On Harry's 11th birthday, a giant named Hagrid arrives and reveals that Harry is a wizard and that he has been accepted at the Hogwarts School of Witchcraft and Wizardry. He also sheds light on Harry's past, informing the boy that his parents, a wizard and a witch, were killed by the evil wizard Voldemort and that Harry acquired the lightning-bolt scar on his forehead during the fatal confrontation.

They encounter a series of obstacles, each of which requires unique skills possessed by one of the three, one of which requires Ron to sacrifice himself in a life-sized game of wizard's chess. In the final room, Harry, now alone, finds Quirinus Quirrell, the Defence Against the Dark Arts teacher, who reveals he had been the one working behind the scenes to kill Harry by first jinxing his broom and then letting a troll into the school, while Snape had been trying to protect Harry instead. Quirrell is helping Voldemort, whose face has sprouted on the back of Quirrell's head but is constantly concealed by his oversized turban, to attain the Philosopher's Stone so as to restore his body. Quirrell uses Harry to get past the final obstacle, the Mirror of Erised, by forcing him to stand before the Mirror. It recognises Harry's lack of greed for the Stone and surreptitiously deposits it into his pocket. As Quirrell attempts to seize the stone and kill Harry, his flesh burns on contact with the boy's skin and breaks into blisters. Harry's scar suddenly burns with pain and he passes out.

"The Potters smiled and waved at Harry and he stared hungrily back at them, his hands pressed flat against the glass as though he was hoping to fall right through it and reach them. He had a powerful kind of ache inside him, half joy, half terrible sadness." —Harry looking at the Mirror of Erised

Differences between the book and the film include:
• In the film, when Albus Dumbledore leaves Harry Potter on the doorstep of the Dursleys' house, he says, "Good luck, Harry Potter." In the book, he just says, "Good luck, Harry."
• In the book, during the Gryffindor-Slytherin Quidditch match, Gryffindor's score is 20 and Slytherin's is 60 until Harry wins 150 points for catching the Golden Snitch. In the film, both Gryffindor and Slytherin are tied at 20 until Harry catches the Snitch, which also results in Harry winning 150 points.
• The scene where Malfoy challenged Harry to a duel, but had actually tricked him by tipping off Filch, was omitted. Instead, Harry, Ron and Hermione see Fluffy in the Forbidden Corridor after their escape from Filch and Mrs Norris when accidentally entering the third floor. In the book, Neville was also with them when first meeting Fluffy.
• In the book, the snake addressed Harry as "amigo" when thanking him for freeing it - presumably a nod by JK Rowling to its South American ancestry, as "amigo" means friend in Portuguese and Spanish. In the film, the snake simply says "thanks".

Although Steven Spielberg initially negotiated to direct the film, he declined the offer. [32] Spielberg reportedly wanted the adaptation to be an animated film, with American actor Haley Joel Osment to provide Harry Potter's voice, [33] or a film that incorporated elements from subsequent books as well. [34] Spielberg contended that, in his opinion, it was like "shooting ducks in a barrel. It's just a slam dunk. It's just like withdrawing a billion dollars and putting it into your personal bank accounts. There's no challenge." [35] Rowling maintains that she had no role in choosing directors for the films and that "[a]nyone who thinks I could (or would) have 'veto-ed' [sic] him [Spielberg] needs their Quick-Quotes Quill serviced." [36] Heyman recalled that Spielberg decided to direct A.I. Artificial Intelligence instead. [34]

First published by Bloomsbury in the UK in 1997, Harry Potter and the Philosopher's Stone set off a literary epic that would envelop and change children's literature for the 21st Century. The first book of a seven-book series, it quickly captured the imagination and admiration of children and adults alike, and would go on to win countless awards in literature.

Identifying a first edition: the publisher must be listed as Bloomsbury at the bottom of the title page. If your book meets all these requirements then congratulations, you have a first edition! Depending on the binding and condition, it could be worth anywhere from many hundreds to tens of thousands of pounds. If you're interested in selling it, or would like to have a custom protective box made to house it, then please contact us. To see the Harry Potter books we currently have for sale please click here.

# Sharp Objects (notes)

Considering how often we've had to watch Sharp Objects through our fingers, it's a wonder we've noticed anything about HBO's new thriller series based on the novel by Gillian Flynn (Gone Girl, Dark Places). But if there's one thing that can coax our hands away from our faces for even just a moment, it's that stunning house where Sharp Objects was filmed.

Warning: spoilers for Sharp Objects episode 4, "Ripe," ahead. There's a really nice balance struck between everything Sharp Objects has been doing well since it premiered in "Ripe." Rich family drama is used to develop not just one, but all three of the women in the family. Progress in the case directly ties to the personal stakes of Camille's return to Wind Gap. And there's this searing tension throughout, as if something terrible is about to happen. While it's frustrating that said tension doesn't pay off here, this episode leaves us on a wicked and cruel cliffhanger that makes you realize just how well Sharp Objects is working.

Alan only wants to make his wife happy; he rarely speaks out, and when he does he is met with scorn. His pain about the death of Marian (Lulu Wilson) is swallowed whole by Adora, but he is also complicit in his silence. Sticking on headphones and listening to French music does not assuage his guilt — he can only shut out so much and for so long. "Go relax. Play some music," Adora directs in the penultimate episode, "Falling," as she makes more "medicine," swaying to Les Parapluies de Cherbourg by Nana Mouskouri. As Amma (Eliza Scanlen) goes from room to room upstairs, the song drifts up the stairs, but it provides little comfort.

For Sophie Gilbert, the show resonated as an inquiry into a twisted form of love. "It isn't coincidental that Adora and Amma's names are both derived from words meaning 'love,' even though, as characters, they embody the opposite — not an outpouring of love but an unquenchable need to absorb and consume it," Gilbert writes. "The paradox of 'Sharp Objects' is that for years, Camille has protected herself by shutting herself off from love, using her scars as armor and her emotional numbness as self-protection."

"It definitely goes back to Jean-Marc's intention of not having a composer. Everything is intentional to pace and move the internal narrative along, which is what a score does. All of those things are happening and that's really the Jean-Marc storytelling style — he'll call me way before we shoot a show going, 'Record player here, iPod over here,' so we know where we're going. His commitment to this faces a lot of challenges, though, especially when you get something tense or thriller-y. You read the book and are like, 'Well, how do you do that without a score?' Because people use [the] score for tension and the unknown and all that, and we don't do that. Never!"

From the novel: "Main Street was empty. No cars, no people. A dog loped down the sidewalk, with no owner calling after it. All the lampposts were papered with yellow ribbons and grainy photocopies of a little girl. I parked and peeled off one of the notices, taped crookedly to a stop sign at a child's height. The sign was homemade, 'Missing,' written at the top in bold letters that may have been filled in by Magic Marker. The photo showed a dark-eyed girl with a feral grin and too much hair for her head. The kind of girl who'd be described by teachers as a 'handful.' I liked her."

In the novelization of the scene, Meredith (aka Ashley) admits that it was actually Natalie who did it, further revealing that both Natalie and Ann were known "biters." The book ends on a much more depressing note for Camille.

# Catch-22

Set during World War II, Catch-22 details the experiences of Captain Yossarian and the other airmen in his camp as they try to maintain their sanity while fulfilling their service requirements so that they can return home.
# Python Dictionaries

This guide discusses using Python's Dictionary object.

## Overview

One of the nice features of scripting languages such as Perl, LISP, and Python is what is called an associative array. An associative array differs from a "normal" array in one major respect: rather than being indexed numerically (i.e. 0, 1, 2, 3, ...), it is indexed by a key, or an English-like word. Python has something very similar to an associative array in the Dictionary object. The Python Dictionary object provides a key:value indexing facility. Note that dictionaries are unordered (in Python versions before 3.7; later versions preserve insertion order): since the values in the dictionary are indexed by keys, they are not held in any particular order, unlike a list, where each item can be located by its position in the list.

The Dictionary object is used to hold a set of data values in the form of (key, item) pairs. A dictionary is sometimes called an associative array because it associates a key with an item. The keys behave in a way similar to indices in an array, except that array indices are numeric and keys are arbitrary strings. Each key in a single Dictionary object must be unique. Dictionaries are frequently used when some items need to be stored and recovered by name. For example, a dictionary can hold all the environment variables defined by the system or all the values associated with a registry key. Looking up a key can be much faster than iterating over a list looking for a match, but a dictionary can only store one item for each key value; that is, dictionary keys must all be unique.

## Creating Dictionaries

To create an empty dictionary, use a pair of braces {}:

room_empty = {}

To construct an instance of a dictionary object with data, that is, with key:item pairs filled in, use one of the following methods. Below, the dictionary room_num is created and filled in with each key:value pair, rather than as an empty dictionary. The key is a string or number (in the example below, a person's name), followed by a colon (:) as a separator from the associated value, which can be any datatype (in this case an integer). Commas (,) separate the different key:value pairs in the dictionary:

room_num = {'john': 425, 'tom': 212, 'sally': 325}

This dictionary is created from a list of tuples using the dict keyword:

room_num1 = dict([('john', 425), ('tom', 212), ('sally', 325)])

The dict keyword can be used in other ways to construct dictionaries.

To add a value to a Dictionary, specify the new key and set a value. Below, the code creates the dictionary room_num with two key:value pairs for John and Liz, then adds a third one for Isaac:

room_num = {'John': 425, 'Liz': 212}
room_num['Isaac'] = 345
print room_num

There is no limit to the number of values that can be added to a dictionary (within the bounds of physical memory). Changing a value for any of the keys follows the same syntax: if the key already exists in the dictionary, the value is simply updated.

## Removing Values

To remove a value from a dictionary, use the del statement and specify the key to remove:

room_num = {'John': 425, 'Liz': 212, 'Isaac': 345}
del room_num['Isaac']
print room_num

## Counting Values

Use the len() function to obtain a count of values in the dictionary:

room_num = {'John': 425, 'Liz': 212, 'Isaac': 345}
print len(room_num)

## Get Values for Key

The in syntax returns True if the specified key exists within the dictionary. For example, you may want to know if Tom is included in a dictionary, which in this case is False:

room_num = {'John': 425, 'Liz': 212, 'Isaac': 345}
var1 = 'Tom' in room_num
print "Is Tom in the dictionary? " + str(var1)
" + str(var1) or you may want to know if an Isaac is not in the dictionary. Below the answer will be also be False: room_num = {'John': 425, 'Liz': 212, 'Isaac': 345} var1 = 'Isaac' not in room_num print "Is Isaac not in room_num? " + str(var1) Use the variable name and the key value in brackets [] to get the value associated with the key. room_num = {'John': 425, 'Liz': 212, 'Isaac': 345} var1 = room_num['Isaac'] print "Isaac is in room number " + str(var1) The .keys() and .values() methods return an array containing all the keys or values from the dictionary. For example: room_num = {'john': 425, 'tom': 212} print (room_num.keys()) print (room_num.values()) ## Looping through Dictionaries Dictionaires can be used to control loops. In addition both the keys and values can be extracted at the same time using the .items() method: room_num = {'john': 425, 'tom': 212, 'isaac': 345} for k, v in room_num.items(): print k + ' is in room ' + str(v) You can also go through the dictionary backwards by using the reversed() method: room_num = {'john': 425, 'tom': 212, 'isaac': 345} for k, v in reversed(room_num.items()): print k + ' is in room ' + str(v) ## Sorting Dictionaries On occasion, it may be important to sort your dictionary. Dictionaries and be sorted by key name or by values To sort a dictionary by key using the following sorted() function: room_num = {'john': 425, 'tom': 212, 'isaac': 345} print sorted(room_num) To sort by values use the sorted() method along with the .values() function: room_num = {'john': 425, 'tom': 212, 'isaac': 345} print sorted(room_num.values()) The Dictionary object is not there to replace list iteration, but there are certainly times when it makes more sense to index your array using English-like terms as opposed to numerical values. It can be much faster to locate an object in a dictionary then in a list.
Question 67cea

Jul 14, 2017

Here's what I get.

Explanation:

(a) Exponential decay model

The rate law for a first-order reaction is

$$[\text{A}]_t = [\text{A}]_0 \left(\frac{1}{2}\right)^{n}$$

where $[\text{A}]_t$ and $[\text{A}]_0$ are the concentrations of component A at time $t$ and at the start of the experiment ($t = 0$), and $n$ is the number of half-lives. Now $n = t/t_{1/2}$, so

$$[\text{A}]_t = [\text{A}]_0 \left(\frac{1}{2}\right)^{t/t_{1/2}}$$

In this problem, A is the blood alcohol concentration BAC, $[\text{BAC}]_0 = 0.3\ \text{mg/mL}$, and $t_{1/2} = 1.5\ \text{h}$. So, your exponential expression is

$$[\text{BAC}]_t = 0.3\left(\frac{1}{2}\right)^{t/1.5}$$

(b) Graph

I plotted the graph in Excel. It looks like BAC = 0.075 mg/mL at 3.0 h. I would guess that BAC = 0.08 mg/mL at 2.9 h. You can drive home at about 02:55.
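Reading the time off the graph can also be replaced by inverting the decay model directly: $t = t_{1/2}\log_2([\text{BAC}]_0/[\text{BAC}]_t)$. A minimal check (a supplementary sketch, not part of the original answer; the 0.08 mg/mL threshold is the value guessed above):

from math import log

bac0, half_life, limit = 0.3, 1.5, 0.08   # mg/mL, h, mg/mL
t = half_life * log(bac0 / limit, 2)      # solve 0.3*(1/2)**(t/1.5) = 0.08
print(round(t, 2))                        # 2.86 h, i.e. about 02:55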
1. ## convert to polar

Change the equation to rectangular coordinates: r = 2.

2. Originally Posted by ldacoll
Change the equation to rectangular coordinates: r = 2.

You should recognize that this is the equation for a circle with its center at the origin. The form of this equation is $x^2 + y^2 = r^2$. So then, what is the equation for $r = 2$?

3. Originally Posted by ldacoll
Change the equation to rectangular coordinates: r = 2.

To convert a function from polar to rectangular coordinates or vice versa, use the following conversions:
$r=\sqrt{x^2+y^2}$
$\theta=\tan^{-1}\frac{y}{x}$
$x=r\cos\theta$
$y=r\sin\theta$
$\tan\theta=\frac{y}{x}$
Converting $r=2$ to rectangular coordinates, we get $\sqrt{x^2+y^2}=2\rightarrow x^2+y^2=4$. You should recognize this as being a circle centered at the origin with a radius of 2.
### 1447: Distance

There is a battle field. It is a square with side length 100 miles, and unfortunately we have two comrades who got hurt and are still in the battle field. They are in different positions. You have to save them. Now I give you their positions, and you should choose a straight path and drive a car to get them. Of course you will cross the battle field, and since it is dangerous, you want to leave it as quickly as you can!

Input: There are many test cases. Each test case contains four floating-point numbers, indicating the two comrades' positions (x1,y1), (x2,y2). Proceed to the end of file.

Output: For each test case, output the mileage which you drive in the battle field. The result should be accurate up to 2 decimals.

Sample input:
1.0 2.0 3.0 4.0
15.0 23.0 46.5 7.0

Sample output:
140.01
67.61

Hint: The battle field is a square located at (0,0), (0,100), (100,0), (100,100).
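The sample outputs match the length of the infinite line through the two given points clipped to the square [0,100] x [0,100] (for the first case, 99*sqrt(2) ≈ 140.01). The following is an illustrative Python sketch under that reading, not the judge's reference code:

import sys
import math

def clipped_length(x1, y1, x2, y2, lo=0.0, hi=100.0):
    # parametrize the infinite line p(t) = p1 + t*(p2 - p1) and intersect
    # the parameter range with each slab lo <= x <= hi and lo <= y <= hi;
    # the two given positions are guaranteed to be different
    dx, dy = x2 - x1, y2 - y1
    tmin, tmax = -math.inf, math.inf
    for p, d in ((x1, dx), (y1, dy)):
        if d == 0.0:
            if not (lo <= p <= hi):
                return 0.0          # line parallel to and outside a slab
        else:
            t0, t1 = (lo - p) / d, (hi - p) / d
            tmin = max(tmin, min(t0, t1))
            tmax = min(tmax, max(t0, t1))
    if tmax <= tmin:
        return 0.0
    return (tmax - tmin) * math.hypot(dx, dy)

for line in sys.stdin:
    parts = line.split()
    if len(parts) == 4:
        x1, y1, x2, y2 = map(float, parts)
        print("%.2f" % clipped_length(x1, y1, x2, y2))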
# Bank of America 1% Cash Rewards Aren't Really 1%

August 14, 2012

(This article was first published on BioStatMatt » R, and kindly contributed to R-bloggers)

Bank of America (BoA) has a "Cash Rewards" credit card that pays "1% cash back everywhere, every time"1. But if you read the fine print, it's clear that the reward is almost always less than 1%. Here's the relevant sentence from the terms and conditions2:

Fractions are truncated at the 100th decimal place, and are subject to verification...

This sentence is cryptic, and the context only helps a little bit. It means that for each purchase amount m, the reward is the value of m * 0.01 truncated at the 100th decimal place. For example, suppose that m = $10.59. One percent of m is $0.1059. The reward is then $0.10. This means that rewards are not paid on the fractional part of purchase amounts, and that the full 1% is paid only on whole dollar amounts. Otherwise, the reward is less than 1%. As evidenced by my own transaction history, fractional purchase amounts are common. Hence, the full 1% cash reward is almost never achieved.

The actual cash reward percentage that BoA pays on each purchase depends on (1) the fractional purchase amount, and (2) the total purchase amount. For fractional dollar amounts, the reward approaches 1% as the total amount becomes larger. And again, for whole dollar amounts, the percentage is exactly 1%. As a function of purchase amount, the cash reward percentage has a "saw tooth" shape:

x <- seq(1.00, 100, length.out=1000)
plot(x, trunc(x)/x, type="l",
     xlab="Purchase Amount ($)",
     ylab="BoA Cash Reward (%)")
abline(h=1, lty=2)

Consumers who tend to make small purchases will generally receive a smaller cash reward percentage than those who make larger purchases. To illustrate, consider spending $1000 in 50 small purchases versus spending all $1000 in two large purchases. We can simulate these two strategies by drawing the proportions spent at each transaction from the Dirichlet distribution.

simulate <- function() {
    g50 <- rgamma(50, 2, 1)
    m50 <- round(g50 / sum(g50) * 1000, 2)
    r50 <- trunc(m50) * 0.01
    g2 <- rgamma(2, 2, 1)
    m2 <- round(g2 / sum(g2) * 1000, 2)
    r2 <- trunc(m2) * 0.01
    c(sum(r50), sum(r2))
}
intervals <- apply(replicate(1000, simulate()), 1,
                   quantile, probs=c(0.025, 0.5, 0.975))

In 95% of these simulations, the reward amount for the first strategy (50 small purchases) was between $9.71 (0.971%) and $9.79 (0.979%), and between $9.99 (0.999%) and $9.99 (0.999%) for the second strategy (2 large purchases).
NTNUJAVA Virtual Physics Laboratory: Enjoy the fun of physics with simulations! Backup site: http://enjoy.phy.ntnu.edu.tw/ntnujava/

Topic: N connected springs in vertical direction (with gravity)

ahmedelshfie, June 09, 2010: The following applet, "N connected springs in vertical direction (with gravity)", was created by Prof. Hwang and modified by Ahmed (original project: N connected spring in vertical direction (with gravity)).

The center spring in this applet simulates the above situation. The spring force is $F(x)=-k(x-x_0)$, where $x_0$ is the equilibrium position. The damping force is assumed to be $-b\,\vec v$, where $b$ is the damping constant. There are $N$ springs in the simulation; the masses at the two ends experience a force from only one spring, while the other particles in between experience forces from the two springs on either side. For the $n$-th particle (with $n\ne0$ and $n\ne N-1$, where $N$ is the total number of springs), let $y_n$ denote its displacement. The net spring force on it is $$F_n=-k\left(y_n-y_{n+1}-L_0\right)-k\left(y_n-y_{n-1}+L_0\right)=-k\left(2y_n-y_{n+1}-y_{n-1}\right),$$ where $L_0$ is the natural length of each spring.

If you uncheck the "fixed" checkbox, the center spring will be released and fall down. You can adjust the b value, the mass, or the spring constant to find new equilibrium positions. You can drag any particle up or down, too! Uncheck the fixed checkbox and click play to find the answer to the above question.

This work is licensed under a Creative Commons Attribution 2.5 Taiwan License.
..."Shakespeare (154-1616, English dramatist and poet) " Jump to: Related Topics Subject Started by Replies Views Last post Vertical spring (add gravity) Dynamics Fu-Kwun Hwang 2 22195 July 10, 2007, 05:48:48 am by Fu-Kwun Hwang N connected spring in vertical direction (with gravity) Dynamics Fu-Kwun Hwang 0 13287 April 27, 2006, 10:31:06 am by Fu-Kwun Hwang vertical spring in equilibrium (adjustable gravity,spring constant and mass) Dynamics Fu-Kwun Hwang 3 11464 May 17, 2019, 03:05:19 pm by angelinajolie Three Springs in vertical direction elastic bouncing under gravity Dynamics Fu-Kwun Hwang 0 5179 December 03, 2010, 11:03:20 pm by Fu-Kwun Hwang Three Springs in vertical direction elastic bouncing under gravity dynamics ahmedelshfie 0 4845 December 06, 2010, 04:55:29 pm by ahmedelshfie Page created in 0.283 seconds with 24 queries.since 2011/06/15
# Expectation of Brownian motion squared, conditional on the end of the path

I have been asked as a brainteaser to compute the value of $\mathbb{E}[W_t^2|W_T]$ with $t < T$. Does anyone know how to proceed?

• Yes. What are your thoughts? Which similar problems can you solve? Which related results do you know? – Did Nov 6 '12 at 16:42
• I do not know much about processes. I thought I could say that $\mathbb{E}[W_t^2] = \mathbb{V}ar[W_t] + \mathbb{E}[W_t]^2 = t$ but it seems that I was wrong. – BlueTrin Nov 6 '12 at 16:45
• The identity you cite is true but not much related to your problem. Do you know the variance-covariance matrix of the couple $(W_t,W_T)$? – Did Nov 6 '12 at 16:48
• I do not know this at all but I will have a look at it. Thank you Mr. Did – BlueTrin Nov 6 '12 at 16:51
• @did: will it be $\bigl(\begin{smallmatrix} t&t\\ t&T \end{smallmatrix} \bigr)$? I am not too sure how it will help me? – BlueTrin Nov 6 '12 at 17:19

Answer:

First let's assume that we can find $$X = W_t + aW_T$$ such that $\mathbb{E}\left[X\cdot W_T\right] = 0$. This expectation can be rewritten as $$\mathbb{E}\left[X \cdot W_T\right] = \mathbb{E}\left[\left(W_t + a \cdot W_T\right) \cdot W_T\right] = t + a \cdot T,$$ therefore we want $a = -\frac{t}{T}$.

The mean of a Brownian bridge is the value interpolated between the two endpoints, and we know that $W_0 = 0$: $$\mathbb{E}\left[W_t|W_T\right]=\frac{t}{T}W_T.$$ The variance of a Brownian bridge is $$\mathbb{Var}\left[W_t|W_T\right]=\frac{(T-t)t}{T}.$$ We can now compute the quantity we are interested in; note that the conditional second moment is the conditional variance plus the squared conditional mean: $$\mathbb{E}\left[W_t^2|W_T\right]=\mathbb{Var}\left[W_t|W_T\right] + \mathbb{E}\left[W_t|W_T\right]^2 = \frac{(T-t)t}{T} + \left(\frac{t}{T} \cdot W_T \right)^2.$$

Thank you Did for being so patient!
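A quick Monte Carlo sanity check of this formula, using the Brownian bridge representation of $W_t$ given $W_T$ (the values of t, T and W_T below are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
t, T, wT = 1.0, 4.0, 0.7
# given W_T = wT, W_t is Normal with mean (t/T)*wT and variance t*(T-t)/T
z = rng.standard_normal(1_000_000)
wt = (t / T) * wT + np.sqrt(t * (T - t) / T) * z
print((wt ** 2).mean())                      # ~ 0.7806 (simulated)
print(t * (T - t) / T + (t * wT / T) ** 2)   # 0.780625 (formula)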
# Suppose the alphabet consists of just {a,b,c,d,e}. Consider strings of letters that allow repetitions. How many 4-letter strings are there that do not contain "aa"?

Probability and combinatorics. Asked 2021-01-19.

## Answers (1)

2021-01-20

Fundamental counting principle: if the first event could occur in $m$ ways and the second event could occur in $n$ ways, then the number of ways that the two events could occur in sequence is $m \cdot n$.

Solution

Number of 4-letter strings: each letter in the string has 5 possible values (a,b,c,d,e), so by the fundamental counting principle there are $$\displaystyle{5}\cdot{5}\cdot{5}\cdot{5}={5}^{{4}}={625}$$ strings.

Number of 4-letter strings containing "aa": there are three possible positions for the substring aa (the string is of the form aaxx, xaax or xxaa), and the two remaining letters each have 5 possible values, which gives $$\displaystyle{3}\cdot{5}\cdot{5}={75}$$. However, this overcounts the strings in which aa occurs in more than one position, so we apply inclusion-exclusion: strings of the form aaax (aa in positions 1-2 and 2-3) number 5, strings of the form xaaa (aa in positions 2-3 and 3-4) number 5, and the single string aaaa is counted by all three positions. Hence the number of strings containing "aa" is $$75-(5+5+1)+1=65$$.

Number of 4-letter strings not containing "aa": there are 625 4-letter strings in total and 65 of them contain "aa", so there are $$625-65=560$$ 4-letter strings not containing "aa".
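The count is small enough to confirm by brute force; a quick supplementary check (not part of the original answer):

from itertools import product

# enumerate all 5^4 = 625 strings and keep those without "aa"
count = sum(1 for s in product("abcde", repeat=4) if "aa" not in "".join(s))
print(count)   # 560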
# Optimization on a manifold

This is both a conceptual question and a practical one of which packages I could use to solve this.

### Problem statement

I am trying to solve the following problem. Let a \in \mathbb{R}^M be a vector of parameters, and p \in \mathbb{R}^N prices. Market clearing F: \mathbb{R}^M \times \mathbb{R}^N \to \mathbb{R}^N requires that F(a, p) = 0, where F is differentiable in both arguments (both in theory and practice). Take a leap of faith and assume that for each a, there is a unique p(a). I am minimizing some function M of a, p, and data d, i.e. \min_a M(a, p(a), d). Currently, I have explored two options.

### Inner loop rootfinding

For each a, find p(a) such that F(a, p) = 0, e.g. via NLsolve.jl, then plug the objective M(a, p(a), d) into a solver like NLopt.jl or Optim.jl. This works, but is somewhat expensive, and differentiation is tricky (but doable).

### Penalty

Optimize M(a, p, d) + \lambda \| F(a, p) \|^2_2 or similar in (a, p). The idea is that the market-clearing p will also be optimal, so F(a, p) = 0 anyway. But in practice this does not always work; I get stuck in local optima with F \ne 0.

### Penalty + renormalization

Run the optimizer above for a bit, then if F(a, p) gets "large", reach for the rootfinder.

Suggestions are welcome. I can make an MWE, but the actual problem is about 20k LOC, so it is not possible to condense all the quirks (in particular, nonlinearity and occasional ill-conditioning) into a simple example. In particular, I am wondering if there is a systematic way of following the implicit p via homotopy.

2 Likes

Looks like a problem for Ipopt with equality constraints. Throw it in Nonconvex and see if it works.

2 Likes

I've worked with this kind of problem in the context of Multidisciplinary Design Optimization. Experts use two categories of methods:

• the rootfinding approach you describe: you solve F(a, p) = 0 for a given a. Then you compute the (analytical) coupled derivatives \frac{dp}{da} by using the implicit function theorem (it yields \frac{dp}{da} = -[\frac{\partial F}{\partial p}]^{-1} \frac{\partial F}{\partial a}) and plug it into the gradients needed by the solver: \frac{dM}{da} = \frac{\partial M}{\partial a} + \frac{\partial M}{\partial p} \frac{dp}{da} ;
• you optimize on a AND p and use F(a, p) = 0 as an equality constraint. The problem is larger, but you also have more degrees of freedom because you decouple the problem.

3 Likes

Could you provide the mathematical form of the functions F and M? You may be able to solve this with JuMP, probably using the Ipopt solver. In JuMP you can define an objective function to minimise M (with variables a and p) and constraint(s) that set F equal to zero, but it is difficult to suggest anything with the information provided in the question.

1 Like

I'd second @cvanaret's second point. As you probably know, this is called the MPEC strategy in economics, following Su and Judd ECMA 2012. In practice I've had much more success with KNITRO than with Ipopt.

2 Likes

In MDO, the first approach is often called nested analysis and design (NAND) while the second is simultaneous analysis and design (SAND). Note that you can adapt the tolerance in the root finder based on the KKT residual of the optimiser to improve the performance of NAND, but this isn't trivial to implement with C solvers. SAND can in theory be slower because of the larger number of variables, but it does avoid the problem of choosing 2 termination criteria for the analysis and the design, so it may be faster in practice depending on your problem.
In general, any structure, e.g. sparsity, in your problem should be exploitable in both cases, but it depends on the solvers you use. MPEC would rather be the case where you have nested optimization problems (or bilevel programming) and you write the inner problem's optimality conditions as constraints in the outer problem.

Conceptually speaking: What is the issue with differentiation? You don't differentiate through the solver, you just use implicit differentiation: \partial_a F + \partial_p F \, \partial_a p = 0. This is different from differentiating through the solver, because your nonlinear solver uses a fixed point iteration, and you only use the AD framework on the last step when you're close to the fixed point.

What is the problem with expensive? You should have very good initial guesses for p = p(a) during optimization, simply by using a previously computed \hat a, i.e. p(a) \approx p(\hat a) (and if you differentiated, p(a) \approx p(\hat a) + \partial_a p(\hat a) \cdot (a - \hat a)). With that, the nonlinear solver should hopefully often need only 1-2 Newton steps per optimizer step. (In a certain sense, this is "solving p via homotopy", i.e. use small changes in a that keep you in the good Newton regime.)

In case F misbehaves (badly conditioned \partial_p F), you can just choose a different splitting of your \mathbb{R}^{M+N}.

2 Likes

Thanks everyone for the answers. I coded an MWE, too large to paste here. The solution attempts are in scripts/. Currently,

• Nonconvex.jl fails with Percival (negative minimum, which should be impossible) and Ipopt (segfault).
• NLopt.jl solvers SLSQP and AUGLAG fail with FORCED_STOP.

Help is appreciated, maybe I am making an obvious error.

2 Likes

This should be feasible, but I don't know how to interface this with available optimization packages. Maybe I could define a custom object and use OptimKit.jl to perform the inner rootfinding step.

The function is huge, 10k lines of custom code, solving and simulating an economic model. I am not sure JuMP is a good match for this; they warn people away from using it for black box functions.

I ran the Nonconvex code with Percival and got:

julia> sol.minimum
0.0

julia> sol.minimizer
10-element Vector{Float64}:
 0.359334233588906
 0.3196139200199018
 0.5590339287643089
 0.7590237566438538
 0.9632381979885252
 4.349953166640202
 5.425757417828363
 6.080448515906158
 0.09678771581978313
 0.09678814339585734

no negative values. Also, Ipopt doesn't segfault on my machine, but it errors because the gradient of the objective has Inf or NaN sometimes. I should probably handle this case better.

Thanks for checking this. That version actually started from the optimum; I have now modified it, and Percival fails (note that this does not reflect badly on Nonconvex.jl and/or Percival.jl; the problem is nasty). You are right about Inf — that's what the problem returns for infeasible points (i.e. points for which there is no objective defined). These are not possible to formulate as a constraint in the usual sense, since this is only learned during equilibrium computation, so I left it in the MWE. Some packages deal OK with Inf for infeasible points; it is unclear what Nonconvex.jl does.

Generally, gradient-based constrained optimization packages require the function and its gradient to be defined and finite unless the optimizer is set up to handle this case, e.g. by backtracking. I don't know if Ipopt can handle non-finite objectives, but this is not what's happening here. The objective is finite and the gradient is not. Try printing the values.
This is the problematic point:

julia> infx
10-element Vector{Float64}:
   0.5999999930791519
   0.3000000051503569
 119.88975646123951
   0.0
   1.0999999978216441
  11.110513653958316
  12.488658552547799
  13.281830014090165
   5.718834275004883
   5.718834275002149

julia> objective(infx)
distance = 557150.9425416458
557150.9425416458
distance = 557150.9425416458
distance = Dual{ForwardDiff.Tag{typeof(objective), Float64}}(557150.9425416458,NaN,NaN,NaN,NaN,NaN,NaN,NaN,NaN,NaN,NaN)
([NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN],)

I have not found Ipopt to be very useful on these sorts of big and ugly functions. Give a commercial optimizer such as Knitro a shot. Even when they are using similar algorithms to the open source optimizers, they add in all sorts of heuristics and tuning tricks and options, and have convenient multistart. And if you find that you are best off writing your constraints as complementarity constraints (as often pops out of FOCs), it has specialized algorithms. Try it out with a demo version and just let it use finite differences. If it finds the right solutions (with or without multistart), then you can see about getting smarter Jacobians and gradients.

4 Likes

I have not tried this, but I know the algorithm, and the C++ version is good stuff. So, you might look at

This should be able to handle failure of the objective function to return a value. By the way, it's better to return a NaN than an Inf in the event the optimizer tries to use the return value. It will not exploit any smoothness you have as well as a traditional gradient-based method, but that's part of the price you pay for this kind of algorithm.

2 Likes

@mohamed82008, thanks again for all the help. One of the bounds is actually a strict > 0 one, and reaching it will indeed give a NaN. I bounded it away from zero, and actually got Nonconvex.AugLag() working! I am pretty excited about this as it seems to be rather fast. The repo now has a list of 1000 random feasible starting points in a box for comparable tests, if anyone wants to experiment with their favorite algorithm. AugLag finds the optimum from around 5% of them, which is pretty promising. This was achieved by taking logs of the wages, which were somewhat nonlinear, and scaling the problem along that dimension by about 1000; otherwise it tries to evaluate outside the bounds.

Still could not get Ipopt (via Nonconvex) working; it fails with EXIT: Restoration Failed!. Despite my best efforts, both NLopt algorithms fail with FORCED_STOP. If someone is familiar with NLopt internals, help would be appreciated.

I am considering black box / commercial packages as a last resort only, for several reasons. From a philosophical perspective, it practically makes replication difficult. I have coded TikTak for multistart in MultistartOptimization.jl and find it nice. Also, having a full Julia stack is practically advantageous for debugging.

Thanks, but now I have reworked the code to be ADable, at least via ForwardDiff, and find it such an improvement that I would be reluctant to go back to a derivative-free algorithm.

2 Likes

Yes! Derivative-free methods only make sense when you can't get gradients for some reason. Avoid them if you can.
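To make the implicit-differentiation point raised in this thread concrete, here is a minimal numerical sketch of the chain rule dM/da = ∂M/∂a + (dp/da)ᵀ ∂M/∂p with dp/da from the implicit function theorem. It is written in Python/NumPy for neutrality, with a toy F and M invented purely for illustration (not the economic model discussed above):

```python
import numpy as np

# Toy market-clearing condition F(a, p) = p - a = 0, so p(a) = a and dp/da = I.
def F(a, p):
    return p - a

def dF_dp(a, p):
    return np.eye(len(p))

def dF_da(a, p):
    return -np.eye(len(p))

def M_grads(a, p):
    # Toy objective M(a, p) = 0.5*(|a|^2 + |p|^2); return (dM/da, dM/dp).
    return a, p

def total_gradient(a, p):
    # Implicit function theorem: dp/da = -[dF/dp]^{-1} dF/da
    dp_da = -np.linalg.solve(dF_dp(a, p), dF_da(a, p))
    dM_da, dM_dp = M_grads(a, p)
    # Chain rule: total dM/da = partial dM/da + (dp/da)^T partial dM/dp
    return dM_da + dp_da.T @ dM_dp

a = np.array([1.0, 2.0])
p = a.copy()                      # root of F(a, p) = 0 for the toy F
print(total_gradient(a, p))       # [2. 4.], i.e. the gradient of |a|^2 on the manifold
```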
# Library

Music » The Beatles » I Want to Tell You

28 played tracks | Go to track page

Tracks (28)

Track | Length | Date
I Want to Tell You | 2:25 | 15 Dec 2012, 00:08
I Want to Tell You | 2:25 | 6 Aug 2012, 11:12
I Want to Tell You | 2:25 | 3 Jul 2012, 02:49
I Want to Tell You | 2:25 | 9 May 2012, 19:59
I Want to Tell You | 2:25 | 10 Apr 2012, 17:05
I Want to Tell You | 2:25 | 21 Mar 2012, 20:34
I Want to Tell You | 2:25 | 16 Mar 2012, 15:00
I Want to Tell You | 2:25 | 4 Mar 2012, 20:57
I Want to Tell You | 2:25 | 24 Feb 2012, 17:15
I Want to Tell You | 2:25 | 24 Jan 2012, 19:10
I Want to Tell You | 2:25 | 30 Dec 2011, 15:14
I Want to Tell You | 2:25 | 27 Dec 2011, 22:16
I Want to Tell You | 2:25 | 17 Dec 2011, 08:52
I Want to Tell You | 2:25 | 7 Dec 2011, 18:54
I Want to Tell You | 2:25 | 13 Nov 2011, 13:25
I Want to Tell You | 2:25 | 9 Nov 2011, 20:54
I Want to Tell You | 2:25 | 30 Oct 2011, 20:07
I Want to Tell You | 2:25 | 28 Oct 2011, 19:36
I Want to Tell You | 2:25 | 28 Oct 2011, 19:34
I Want to Tell You | 2:25 | 23 Oct 2011, 12:49
I Want to Tell You | 2:25 | 21 Oct 2011, 22:36
I Want to Tell You | 2:25 | 20 Oct 2011, 19:48
I Want to Tell You | 2:25 | 19 Oct 2011, 14:16
I Want to Tell You | 2:25 | 15 Oct 2011, 18:09
I Want to Tell You | 2:25 | 13 Oct 2011, 16:40
I Want to Tell You | 2:25 | 12 Oct 2011, 16:55
I Want to Tell You | 2:25 | 12 Oct 2011, 16:19
I Want to Tell You | 2:25 | 12 Oct 2011, 14:54
# Influence of the nuclear equation of state on the hadron-quark phase transition in neutron stars

Abstract: We study the hadron-quark phase transition in the interior of neutron stars, and examine the influence of the nuclear equation of state on the phase transition and neutron star properties. The relativistic mean field theory with several parameter sets is used to construct the nuclear equation of state, while the Nambu-Jona-Lasinio model is used for the description of the deconfined quark phase. Our results show that a harder nuclear equation of state leads to an earlier onset of a mixed phase of hadronic and quark matter. We find that a massive neutron star possesses a mixed phase core, but it is not dense enough to possess a pure quark core.

Citation: YANG Fang and SHEN Hong. Influence of the nuclear equation of state on the hadron-quark phase transition in neutron stars[J]. Chinese Physics C, 2008, 32(7): 536-542. doi: 10.1088/1674-1137/32/7/005
# math puzzle: 4 digit number times 3

A four-digit number $$\overline{abcd}$$ and a five-digit number $$\overline{efghi}$$, where $$a, b, c, \ldots, i$$ are distinct digits from 1-9. We have $$\overline{abcd} \cdot 3 = \overline{efghi}$$. What are $$a, b, \ldots, i$$?

What I have tried: since the nine digits are a permutation of 1-9, their digit sums satisfy $$S(\overline{abcd}) + S(\overline{efghi}) = 45$$, and $$\overline{efghi} = 3\,\overline{abcd}$$ gives $$S(\overline{efghi}) \equiv 3\,S(\overline{abcd}) \equiv 3\,(45 - S(\overline{efghi})) \pmod 9$$, so $$4\,S(\overline{efghi}) \equiv 0 \pmod 9$$ and hence $$\overline{efghi}$$ must be divisible by 9. But does enumeration actually give an answer? Can this hold at all?

Yes, it can: there are exactly two solutions,

$$5823\cdot 3 = 17469$$

$$5832\cdot 3 = 17496$$
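A short brute-force check (a minimal Python sketch, not from the original post) confirms these are the only two solutions:

```python
# Try every four-digit abcd and test whether the nine digits of
# abcd and 3*abcd together use each of 1..9 exactly once.
solutions = []
for abcd in range(1000, 10000):
    efghi = 3 * abcd
    digits = str(abcd) + str(efghi)
    if len(digits) == 9 and sorted(digits) == list("123456789"):
        solutions.append((abcd, efghi))

print(solutions)  # expected: [(5823, 17469), (5832, 17496)]
```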
The ability to query large datasets using the Arrow compute engine was the primary motivator for geoarrow; however, this capability depends on efficient storage of geospatial data in file formats that work well with Arrow, like Parquet and Feather. Several previously defined formats to store geometries in text- and/or binary-based files exist; however, each of these is difficult to incorporate into Parquet and/or Feather files without compromising the advantages of the Arrow columnar format that we wanted to leverage. In particular, we wanted O(1) access to coordinate values and to be able to pass geometry vectors around using the C data interface in a way that didn't require readers to implement their own WKB parser (or any other parser).

We'll use the geoarrow and arrow packages to demonstrate the structure and metadata of these types:

library(geoarrow)
library(arrow)

All geoarrow arrays carry an extension type with a "geoarrow." prefix (via the field-level ARROW:extension:name metadata key) and extension metadata (via the field-level ARROW:extension:metadata key). The extension metadata contains key/value pairs encoded in the same format as specified for metadata in the C data interface. This format was chosen to allow readers to access this information without having to vendor a base64 decoder or JSON parser. Currently supported keys are:

• crs: Contains a serialized version of the coordinate reference system as WKT2 (previously known as WKT2:2019). The string is interpreted using UTF-8 encoding.
• edges: A value of "spherical" instructs readers that edges should be interpolated along a spherical path rather than a Cartesian one (i.e., for lossless conversion to and from S2 and/or BigQuery geography); otherwise, edges will be interpreted as planar. The edges key must be "spherical" or the key should be omitted. A future value of "ellipsoidal" may be permitted if libraries to support such edges become available.

The keys should appear in the order listed above. Empty metadata should be encoded as four zero bytes (0x00 0x00 0x00 0x00, i.e., the 32-bit integer 0, indicating that there are zero metadata keys) rather than omitted. These constraints are in place to ensure that type equality can be checked without deserializing the ARROW:extension:metadata field.

The crs key is only used for geoarrow.point arrays; the edges key is only used for geoarrow.linestring and geoarrow.polygon arrays. Practically, this was chosen so that child arrays can be passed to functions and validated independently (i.e., without having to pass the crs/edges values down the call stack as extra arguments). Conceptually, this was chosen to keep metadata confined to the array for which it is relevant.
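That key/value encoding (a little-endian int32 pair count, then a length-prefixed key and value for each pair) is simple enough to decode by hand. As an illustration, here is a minimal Python sketch, not part of the geoarrow package, applied to the example bytes shown later in this document:

```python
import struct

def decode_metadata(buf: bytes) -> dict:
    """Decode extension metadata: little-endian int32 pair count, then
    (int32 key length, key bytes, int32 value length, value bytes) per pair."""
    n_pairs = struct.unpack_from("<i", buf, 0)[0]
    pos = 4
    out = {}
    for _ in range(n_pairs):
        klen = struct.unpack_from("<i", buf, pos)[0]; pos += 4
        key = buf[pos:pos + klen].decode("utf-8"); pos += klen
        vlen = struct.unpack_from("<i", buf, pos)[0]; pos += 4
        out[key] = buf[pos:pos + vlen].decode("utf-8"); pos += vlen
    return out

# The geoarrow.point example metadata from this document:
raw = (bytes([0x01, 0, 0, 0, 0x03, 0, 0, 0]) + b"crs"
       + bytes([0x09, 0, 0, 0]) + b"OGC:CRS84")
print(decode_metadata(raw))      # {'crs': 'OGC:CRS84'}
print(decode_metadata(bytes(4))) # empty metadata (four zero bytes) -> {}
```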
In geoarrow, you can view the decoded extension metadata using geoarrow_metadata():

geoarrow_metadata(geoarrow_schema_point(crs = "OGC:CRS84"))
#> $crs
#> [1] "OGC:CRS84"

geoarrow_metadata(geoarrow_schema_linestring(edges = "spherical"))
#> $edges
#> [1] "spherical"

The serialized metadata looks like this:

geoarrow_schema_point(crs = "OGC:CRS84")$metadata
#> $ARROW:extension:name
#> [1] "geoarrow.point"
#>
#> $ARROW:extension:metadata
#> [1] 01 00 00 00 03 00 00 00 63 72 73 09 00 00 00 4f 47 43 3a 43 52 53 38 34

geoarrow_schema_linestring(edges = "spherical")$metadata
#> $ARROW:extension:name
#> [1] "geoarrow.linestring"
#>
#> $ARROW:extension:metadata
#> [1] 01 00 00 00 05 00 00 00 65 64 67 65 73 09 00 00 00 73 70 68 65 72 69 63 61
#> [26] 6c

## Points

### Metadata

The field-level metadata for points in geoarrow must contain an extension type of "geoarrow.point" and extension metadata specifying an optional coordinate reference system.

carray <- geoarrow_create_narrow(
  wk::wkt(crs = "EPSG:4326"),
  schema = geoarrow_schema_point()
)
carray$schema$metadata
#> $ARROW:extension:name
#> [1] "geoarrow.point"
#>
#> $ARROW:extension:metadata
#> [1] 01 00 00 00 03 00 00 00 63 72 73 09 00 00 00 45 50 53 47 3a 34 33 32 36

geoarrow_metadata(carray$schema)
#> $crs
#> [1] "EPSG:4326"

The coordinate reference system in geoarrow is always stored with the point array, which is used as a child array for all other types.

### Storage type

Points are represented in geoarrow as a fixed-size list of float64 (i.e., double) values. Conceptually, this is much like storing coordinates as a (row-major) matrix with one row per feature and one column per dimension.

carray <- geoarrow_create_narrow(
  wk::wkt(c("POINT (0 1)", "POINT (2 3)")),
  schema = geoarrow_schema_point()
)
narrow::from_narrow_array(carray, arrow::Array)
#> ExtensionArray
#> <geoarrow.point <crs: unspecified>>
#> [
#>   [
#>     0,
#>     1
#>   ],
#>   [
#>     2,
#>     3
#>   ]
#> ]

Points stored as a fixed-size list have exactly one child named xy, xyz, xym, or xyzm. The width of the fixed-size list must be 2, 3, or 4 and agree with the child name (e.g., if the child name is xyzm, it must be a fixed-size list of size 4). The child storage type must be a float64 for now (although in the future other child types like float32 or decimal128 may be supported).

# interleaved xy values in one buffer
carray$array_data$children[[1]]$buffers[[2]]
#> <pointer: 0x55752a804de0>

Other storage types of points that may be supported in a future reference implementation are:

• Struct-encoded points (i.e., x, y, and/or z and/or m stored in their own arrays)
• Dictionary-encoded point representation (may allow for compact representation and efficient querying of polygon coverages with shared vertices)
• S2 or H3 identifiers (compact and fast to test for containment)
• float or decimal storage of coordinate values (float when lower precision is adequate; decimal when double precision is inadequate)

## Linestrings

### Metadata

The field-level metadata for linestrings in geoarrow must contain an extension type of "geoarrow.linestring" and extension metadata specifying an optional "edges" flag (see parent 'Metadata' section above).
carray <- geoarrow_create_narrow(
  wk::wkt(geodesic = TRUE),
  schema = geoarrow_schema_linestring()
)
carray$schema$metadata
#> $ARROW:extension:name
#> [1] "geoarrow.linestring"
#>
#> $ARROW:extension:metadata
#> [1] 01 00 00 00 05 00 00 00 65 64 67 65 73 09 00 00 00 73 70 68 65 72 69 63 61
#> [26] 6c

geoarrow_metadata(carray$schema)
#> $edges
#> [1] "spherical"

The coordinate reference system in geoarrow is always stored with the point array (i.e., the child array of a geoarrow.linestring).

### Storage type

Linestrings are stored as a list<vertices: <geoarrow.point>>. The exact storage type of the geoarrow.point can vary as described above. Conceptually, this is attaching a buffer of (int32_t) offsets to an existing array of points, where each offset points to the first vertex in a linestring.

carray <- geoarrow_create_narrow(
  wk::wkt("LINESTRING (1 2, 3 4)"),
  schema = geoarrow_schema_linestring()
)
narrow::from_narrow_array(carray, arrow::Array)
#> ExtensionArray
#> <geoarrow.linestring <crs: unspecified>>
#> [
#>   [
#>     [
#>       1,
#>       2
#>     ],
#>     [
#>       3,
#>       4
#>     ]
#>   ]
#> ]

# offsets for each linestring into the vertices array
carray$array_data$buffers[[2]]
#> <pointer: 0x55752afe2aa0>

# coordinates
carray$array_data$children[[1]]$children[[1]]$buffers[[2]]
#> <pointer: 0x55752b0c16e0>

## Polygons

### Metadata

The field-level metadata for polygons in geoarrow must contain an extension type of "geoarrow.polygon" and extension metadata specifying an optional "edges" flag (see parent 'Metadata' section above).

carray <- geoarrow_create_narrow(
  wk::wkt(geodesic = TRUE),
  schema = geoarrow_schema_polygon()
)
carray$schema$metadata
#> $ARROW:extension:name
#> [1] "geoarrow.polygon"
#>
#> $ARROW:extension:metadata
#> [1] 01 00 00 00 05 00 00 00 65 64 67 65 73 09 00 00 00 73 70 68 65 72 69 63 61
#> [26] 6c

geoarrow_metadata(carray$schema)
#> $edges
#> [1] "spherical"

### Storage type

Polygons are stored as a list<rings: <list<vertices: <geoarrow.point>>>>. The exact storage type of the geoarrow.point can vary as described above. Conceptually, this is attaching a buffer of (int32_t) offsets to an existing array of points, where each offset points to the first vertex in a linear ring. The outer list then contains offsets to the start of each polygon in the rings array. Just like WKB, rings must be closed (i.e., the first coordinate must equal the last coordinate).

carray <- geoarrow_create_narrow(
  wk::wkt("POLYGON ((0 0, 1 0, 0 1, 0 0))"),
  schema = geoarrow_schema_polygon()
)
narrow::from_narrow_array(carray, arrow::Array)
#> ExtensionArray
#> <geoarrow.polygon <crs: unspecified>>
#> [
#>   [
#>     [
#>       [
#>         0,
#>         0
#>       ],
#>       [
#>         1,
#>         0
#>       ],
#>       [
#>         0,
#>         1
#>       ],
#>       [
#>         0,
#>         0
#>       ]
#>     ]
#>   ]
#> ]

# offsets for each polygon into the ring array
carray$array_data$buffers[[2]]
#> <pointer: 0x55752b31c6b0>

# offsets for each ring into the vertices array
carray$array_data$children[[1]]$buffers[[2]]
#> <pointer: 0x55752abfc500>

# coordinates
carray$array_data$children[[1]]$children[[1]]$children[[1]]$buffers[[2]]
#> <pointer: 0x55752afdf1e0>

## Collections

### Metadata

Just like WKB, multipoints, multilinestrings, multipolygons, and geometry collections share a common encoding but have different identifiers.

• Multipoint geometries have an ARROW:extension:name of "geoarrow.multipoint" and must contain a child named "points" with the "geoarrow.point" extension type.
• Multilinestring geometries have an ARROW:extension:name of "geoarrow.multilinestring" and must contain a child named "linestrings" with the "geoarrow.linestring" extension type.
• Multipolygon geometries have an ARROW:extension:name of "geoarrow.multipolygon" and must contain a child named "polygons" with the "geoarrow.polygon" extension type.
• Geometry collections (i.e., mixed arrays of points, lines, polygons, multipoints, multipolygons, and/or geometry collections) are not currently supported. For those who need to communicate these objects, use the "geoarrow.wkb" extension type. In the future, support for these will be added as unions (i.e., the child array will be a sparse or dense union of points, lines, polygons, multipoints, multilinestrings, and/or multipolygons).

Collections do not carry extension metadata of their own (i.e., the CRS and edges flags stay with the array for which they are relevant). The metadata string should not be omitted and must be empty (i.e., 0 as a 32-bit integer).

carray <- geoarrow_create_narrow(
  wk::wkt(geodesic = TRUE),
  schema = geoarrow_schema_multipoint()
)
carray$schema$metadata
#> $ARROW:extension:name
#> [1] "geoarrow.multipoint"
#>
#> $ARROW:extension:metadata
#> [1] 00 00 00 00

(TODO: I didn't actually implement the different extension names for different types of collections yet!!)

### Storage type

• Multipoints are stored as a list<points: <geoarrow.point>>
• Multilinestrings are stored as a list<linestrings: <geoarrow.linestring>>
• Multipolygons are stored as a list<polygons: <geoarrow.polygon>>

Conceptually, this is attaching a buffer of (int32_t) offsets to an existing array of points, lines, or polygons.

carray <- geoarrow_create_narrow(
  wk::wkt("MULTIPOINT (1 2, 3 4)"),
  schema = geoarrow_schema_multipoint()
)
narrow::from_narrow_array(carray, arrow::Array)
#> ExtensionArray
#> <geoarrow.multipoint <crs: unspecified>>
#> [
#>   [
#>     [
#>       1,
#>       2
#>     ],
#>     [
#>       3,
#>       4
#>     ]
#>   ]
#> ]

# offsets for each multipoint into the points array
carray$array_data$buffers[[2]]
#> <pointer: 0x5575268d6a20>

# coordinates
carray$array_data$children[[1]]$children[[1]]$buffers[[2]]
#> <pointer: 0x55752a517fa0>

## Relationship to well-known binary

The physical layout and logical types specified in this document are designed to align with well-known binary (WKB), as this is currently the most popular binary encoding used to store and shuffle geometries between libraries. For example, a linestring in WKB is encoded as:

• One byte describing endianness (0x01 or 0x00)
• A uint32_t describing the geometry type and its dimensions. For a linestring this will be 2 (for XY), 1002 (for XYZ), 2002 (for XYM), or 3002 (for XYZM).
• A uint32_t giving how many vertices are contained in the linestring
• A buffer of double containing coordinates with coordinate values kept together. For example, the points (1 2, 3 4, 5 6) would be encoded as [1, 2, 3, 4, 5, 6].

In this specification, we store the same information as WKB but organized differently:

• A struct ArrowSchema that contains the storage type and metadata. The default representation of a linestring is stored as a list_of<vertices: fixed_list_of<xy: float64, 2>>, where the child name of the fixed list stores the dimensions (xy) of the coordinates.
• A struct ArrowArray that contains the coordinate values and lengths of each linestring. For example, a linestring containing the points (1 2, 3 4, 5 6) is encoded by default using:
  • An int32_t buffer of offsets to the start/end of each linestring in the points array.
Because our example only includes one linestring, this would be two numbers: [0, 3]. The length of each linestring can be calculated by subtracting consecutive offsets (i.e., offset_array[i + 1] - offset_array[i]).
  • A double buffer containing coordinates with coordinate values kept together (i.e., [1, 2, 3, 4, 5, 6]).

You can learn more about these buffers and the C structures that geoarrow uses to represent them in memory in the Arrow Columnar Format specification and the C Data Interface specification. For a detailed guide to iterating over geometries in C and C++, see the C and C++ development guide.
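To make the buffer layout concrete, here is a small sketch (plain Python, written for illustration; it is not part of the geoarrow package) that builds the offset and coordinate buffers for an array of two linestrings:

```python
import struct

# Two linestrings: (1 2, 3 4, 5 6) and (7 8, 9 10).
linestrings = [
    [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)],
    [(7.0, 8.0), (9.0, 10.0)],
]

# int32 offsets into the vertex array: one more entry than there are features.
offsets = [0]
coords = []
for ls in linestrings:
    for xy in ls:
        coords.extend(xy)  # interleaved xy values, as in geoarrow.point
    offsets.append(offsets[-1] + len(ls))

offset_buffer = struct.pack(f"<{len(offsets)}i", *offsets)  # [0, 3, 5]
coord_buffer = struct.pack(f"<{len(coords)}d", *coords)     # [1, 2, ..., 9, 10]

# Length of linestring i is offsets[i + 1] - offsets[i].
print(offsets)                  # [0, 3, 5]
print(offsets[1] - offsets[0])  # 3 vertices in the first linestring
```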
# Join Computer to Domain Using PowerShell

By Robert Allen (https://activedirectorypro.com/join-computer-to-domain-using-powershell/)

In this tutorial, you'll learn how to join a computer to the domain using PowerShell. I will provide step by step instructions for adding a single computer and multiple computers to the domain. Also, I'll show you how to move the computer to an OU once it's been added to the domain.

Let's get started.

## Join Single Computer To Domain with PowerShell

Important Tip: You may need to run PowerShell as administrator. Right-click the PowerShell icon and select "Run as Administrator".

Open PowerShell and run the following command. Change YourDomainName to your Active Directory domain name.

add-computer -domainname "YourDomainName" -restart

Example picture below running on my domain ad.activedirectorypro.com

You will get prompted to enter your credentials. This will need to be a Domain Administrator account or a user that has been delegated rights to join computers to the domain.

The computer should automatically restart and be joined to the domain.

Tip: Run help add-computer to see all the command line options (syntax).

## Join Multiple Computers to the Domain From a Text File

To join multiple computers to the domain, you just need to create a text file and add the computer names to it. In this example, I've created a text file called computers.txt and added PC2 and PC3 to it. I've saved the text file to c:\it\computers.txt

With the text file set up, I'll run the following commands:

$computers = Get-Content -Path c:\it\computers.txt
Add-Computer -ComputerName $computers -Domain "YourDomainName" -Restart

Example picture below running on my domain ad.activedirectorypro.com

The first line sets up a variable ($computers), which stores the values from the text file. The second line is similar to the previous examples; I just added the -ComputerName parameter and the $computers variable. This command will go through every computer listed in the text file and join them to the domain. Pretty cool, right? This will definitely speed up the process of joining multiple computers to the domain.

## Join Computer to Domain and Specify OU Path With PowerShell

When you join a computer to the domain, it will by default go to the Computers folder. It is best practice to move the computers from the default folder to a different OU. Thankfully, we can automate this with PowerShell when we join the computers to the domain.

Run this command to join a computer to the domain and specify the OU path.

Add-Computer -DomainName "Domain02" -OUPath "OU=testOU,DC=domain,DC=Domain,DC=com"

In the following example, I'll be adding computers to the domain that go to the sales department. I have an OU set up called Sales, so I want the computers to automatically be moved to that OU. The PowerShell command requires the distinguished name of the OU. The easiest way to get this is by navigating to the OU in Active Directory Users and Computers and opening the properties of the OU. Then click the Attribute Editor and copy the value of distinguishedName.

Now add this path to the command; below is the command for my domain. This will add the computer to the Sales OU in my Active Directory.

Add-Computer -DomainName "ad.activedirectorypro.com" -OUPath "OU=Sales,OU=ADPRO Computers,DC=ad,DC=activedirectorypro,DC=com"

I've just walked through three examples of using PowerShell to join computers to the domain. Now you can forget about logging into each computer and manually adding them to the domain.
With PowerShell you can quickly add single or multiple computers at a time. Try out these commands and let me know how they work by leaving a comment below.
Exercise 13.2

Question 1

Which of the following are in inverse proportion?
(i) The number of workers on a job and the time to complete the job.
(ii) The time taken for a journey and the distance travelled at a uniform speed.
(iii) Area of cultivated land and the crop harvested.
(iv) The time taken for a fixed journey and the speed of the vehicle.
(v) The population of a country and the area of land per person.

Sol :
(i) These are in inverse proportion because if there are more workers, then it will take less time to complete the job.
(ii) No, these are not in inverse proportion because in more time, we may cover more distance at a uniform speed.
(iii) No, these are not in inverse proportion because on more area, more quantity of crop may be harvested.
(iv) These are in inverse proportion because with more speed, we may complete a certain distance in less time.
(v) These are in inverse proportion because if the population increases, then the area of land per person decreases accordingly.

Question 2

In a television game show, the prize money of Rs 1,00,000 is to be divided equally amongst the winners. Complete the following table and find whether the prize money given to an individual winner is directly or inversely proportional to the number of winners.

Number of winners: 1, 2, 4, 5, 8, 10, 20
Prize for each winner (in Rs): 100000, 50000, x1, x2, x3, x4, x5

Sol : From the table, we obtain 1 × 100000 = 2 × 50000 = 100000. Thus, the number of winners and the amount given to each winner are inversely proportional to each other. Therefore,

1 × 100000 = 4 × x1, so x1 = 25000
1 × 100000 = 5 × x2, so x2 = 20000
1 × 100000 = 8 × x3, so x3 = 12500
1 × 100000 = 10 × x4, so x4 = 10000
1 × 100000 = 20 × x5, so x5 = 5000

Number of winners: 1, 2, 4, 5, 8, 10, 20
Prize for each winner (in Rs): 100000, 50000, 25000, 20000, 12500, 10000, 5000

Question 3

Rehman is making a wheel using spokes. He wants to fix equal spokes in such a way that the angles between any pair of consecutive spokes are equal. Help him by completing the following table.

Number of spokes: 4, 6, 8, 10, 12
Angle between a pair of consecutive spokes: 90°, 60°, x1, x2, x3

(i) Are the number of spokes and the angles formed between the pairs of consecutive spokes in inverse proportion?
(ii) Calculate the angle between a pair of consecutive spokes on a wheel with 15 spokes.
(iii) How many spokes would be needed if the angle between a pair of consecutive spokes is 40°?

Sol : From the given table, we obtain 4 × 90° = 360° = 6 × 60°. Thus, the number of spokes and the angle between a pair of consecutive spokes are inversely proportional to each other. Therefore,

4 × 90° = x1 × 8, so x1 = 45°
4 × 90° = x2 × 10, so x2 = 36°
4 × 90° = x3 × 12, so x3 = 30°

Thus, the following table is obtained.

Number of spokes: 4, 6, 8, 10, 12
Angle between a pair of consecutive spokes: 90°, 60°, 45°, 36°, 30°

(i) Yes, the number of spokes and the angles formed between the pairs of consecutive spokes are in inverse proportion.
(ii) Let the angle between a pair of consecutive spokes on a wheel with 15 spokes be x. Therefore, 4 × 90° = 15 × x, so x = 24°. Hence, the angle between a pair of consecutive spokes of a wheel which has 15 spokes is 24°.
(iii) Let the number of spokes in a wheel which has 40° angles between a pair of consecutive spokes be y. Therefore, 4 × 90° = y × 40°, so y = 9. Hence, the number of spokes in such a wheel is 9.

Question 4

If a box of sweets is divided among 24 children, they will get 5 sweets each.
How many would each get, if the number of children is reduced by 4?

Sol : Number of remaining children = 24 − 4 = 20. Let the number of sweets which each of the 20 students will get be x.

Number of students: 24, 20
Number of sweets: 5, x

If the number of students is smaller, then each student will get more sweets. Since this is a case of inverse proportion, 24 × 5 = 20 × x, so x = 6. Hence, each student will get 6 sweets.

Question 5

A farmer has enough food to feed 20 animals in his cattle shed for 6 days. How long would the food last if there were 10 more animals?

Sol : Let the number of days that the food will last if there were 10 more animals be x.

Number of animals: 20, 20 + 10 = 30
Number of days: 6, x

The more animals there are, the fewer days the food will last. Hence, the number of days the food will last and the number of animals are inversely proportional to each other. Therefore, 20 × 6 = 30 × x, so x = 4. Thus, the food will last for 4 days.

Question 6

A contractor estimates that 3 persons could rewire Jasminder's house in 4 days. If he uses 4 persons instead of three, how long should they take to complete the job?

Sol : Let the number of days required by 4 persons to complete the job be x.

Number of days: 4, x
Number of persons: 3, 4

If there are more persons, then it will take less time to complete the job. Hence, the number of days and the number of persons required to complete the job are inversely proportional to each other. Therefore, 4 × 3 = x × 4, so x = 3. Thus, the number of days required to complete the job is 3.

Question 7

A batch of bottles was packed in 25 boxes with 12 bottles in each box. If the same batch is packed using 20 bottles in each box, how many boxes would be filled?

Sol : Let the number of boxes filled by using 20 bottles in each box be x.

Number of bottles: 12, 20
Number of boxes: 25, x

The more bottles per box, the fewer boxes are needed. Hence, the number of bottles and the number of boxes required to pack them are inversely proportional to each other. Therefore, 12 × 25 = 20 × x, so x = 15. Hence, the number of boxes required to pack these bottles is 15.

Question 8

A factory requires 42 machines to produce a given number of articles in 63 days. How many machines would be required to produce the same number of articles in 54 days?

Sol : Let the number of machines required to produce the articles in 54 days be x.

Number of machines: 42, x
Number of days: 63, 54

The more machines there are, the fewer days it will take to produce the given number of articles. Thus, this is a case of inverse proportion. Therefore, 42 × 63 = 54 × x, so x = 49. Hence, the required number of machines to produce the given number of articles in 54 days is 49.

Question 9

A car takes 2 hours to reach a destination by travelling at the speed of 60 km/h. How long will it take when the car travels at the speed of 80 km/h?

Sol : Let the time taken by the car to reach the destination while travelling at a speed of 80 km/h be x hours.

Speed (in km/h): 60, 80
Time taken (in hours): 2, x

The higher the speed of the car, the less time it will take to reach the destination. Hence, the speed of the car and the time taken by the car are inversely proportional to each other.
Therefore, 60 × 2 = 80 × x, so x = 120/80 = 1.5. The time required by the car to reach the given destination is 1.5 hours.

Question 10

Two persons could fit new windows in a house in 3 days.
(i) One of the persons fell ill before the work started. How long would the job take now?
(ii) How many persons would be needed to fit the windows in one day?

Sol :
(i) Let the number of days required by 1 man to fit all the windows be x.

Number of persons: 2, 1
Number of days: 3, x

The fewer persons there are, the more days are required to fit all the windows. Hence, this is a case of inverse proportion. Therefore, 2 × 3 = 1 × x, so x = 6. Hence, the number of days taken by 1 man to fit all the windows is 6.

(ii) Let the number of persons required to fit all the windows in one day be y.

Number of persons: 2, y
Number of days: 3, 1

The fewer days available, the more persons are required to fit all the windows. Hence, this is a case of inverse proportion. Therefore, 2 × 3 = y × 1, so y = 6. Hence, 6 persons are required to fit all the windows in one day.

Question 11

A school has 8 periods a day, each of 45 minutes duration. How long would each period be if the school has 9 periods a day, assuming the number of school hours to be the same?

Sol : Let the duration of each period, when there are 9 periods a day in the school, be x minutes.

Duration of each period (in minutes): 45, x
Number of periods: 8, 9

If there are more periods a day in the school, then the duration of each period will be shorter. Hence, this is a case of inverse proportion. Therefore, 45 × 8 = x × 9, so x = 40. Hence, in this case, the duration of each period will be 40 minutes.
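All of these solutions use the same fact: for inversely proportional quantities, the product x · y is constant. A quick sketch (Python, written for this page; it is not part of the original solutions) that solves any such problem:

```python
# For inverse proportion, x1 * y1 = x2 * y2; solve for the unknown y2.
def inverse_proportion(x1, y1, x2):
    return x1 * y1 / x2

print(inverse_proportion(24, 5, 20))  # Question 4: 6 sweets each
print(inverse_proportion(20, 6, 30))  # Question 5: food lasts 4 days
print(inverse_proportion(60, 2, 80))  # Question 9: 1.5 hours
```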
## UNLV Theses, Dissertations, Professional Papers, and Capstones

8-1-2021

#### Document Type

Dissertation

#### Degree Name

Doctor of Philosophy (PhD)

#### Department

Mathematical Sciences

#### Committee Members

Douglas Burke, Derrick DuBose, Satish Bhatnagar, Hokwon Cho, Pushkin Kachroo

#### Number of Pages

380

#### Abstract

In this dissertation, we have two main categories of results. The first is regarding certain point-classes, and the second is regarding 3-player games.

The point-classes of Baire Space, \mathcal{N}, in the Borel and Projective Hierarchies, as well as Hausdorff's Difference Hierarchy, have been well studied, and there has been much research into further stratifying these hierarchies. One area of particular interest falls in between the point-classes \mathbf{\Pi}_\mathbf{1}^\mathbf{1} and \Delta\left(\omega^2-\mathbf{\Pi}_\mathbf{1}^\mathbf{1}\right). It is well known that the point-classes \beta-\mathbf{\Pi}_\mathbf{1}^\mathbf{1}, for \beta\in\omega^2, stratify this region of the projective hierarchy, with the point-class \bigcup_{\beta\in\omega^2}\beta-\mathbf{\Pi}_\mathbf{1}^\mathbf{1} still falling strictly below \Delta\left(\omega^2-\mathbf{\Pi}_\mathbf{1}^\mathbf{1}\right). Dr. Derrick DuBose developed multiple point-classes, including \left(\kappa\ast\mathbf{\Pi}_\mathbf{1}^\mathbf{1}\right)^\ast for \kappa\in\omega_1. Using determinacy results, DuBose proved that certain of his point-classes further stratify the region between \bigcup_{\beta\in\omega^2}\beta-\mathbf{\Pi}_\mathbf{1}^\mathbf{1} and \Delta\left(\omega^2-\mathbf{\Pi}_\mathbf{1}^\mathbf{1}\right).

In this dissertation, we define a new type of classification for functions, which we will refer to as \Gamma Tail-Measurable, as well as bounded \Gamma Tail-Measurable, where \Gamma is a point-class. We also define what we will mean for certain functions and certain sequences to be jointly bounded, that is to say, bounded together. Using tail-measurable functions, we define a new manner in which to define certain point-classes of Baire space. When certain bounded tail-measurable functions are used, we will prove that the point-classes produced are exactly the point-classes developed by DuBose. We also will show that by using functions that are tail-measurable (but not bounded), we can produce point-classes that contain all of DuBose's point-classes that fall below \Delta\left(\omega^2-\mathbf{\Pi}_\mathbf{1}^\mathbf{1}\right). Moreover, for certain sets X, defined from tail-measurable functions and sequences that are jointly unbounded, these point-classes contain every set A\subseteq X where A has cardinality at most \aleph_1. Towards our goal, we review certain topological definitions, including the definitions of the Borel and Projective Hierarchies, as well as Hausdorff's Difference Hierarchy. We also review some point-classes in the Projective Hierarchy developed by Dr. Derrick DuBose.

The study of determinacy of 2-player games on certain game trees is also an active area of research. While the most common game tree is the tree with height \omega and moves from \omega, there have been studies of the determinacy of 2-player games on other game trees, including trees of variable height. Many of the determinacy results use large cardinal hypotheses, such as "0^# exists", in order to calibrate the strength of the determinacy of certain point-classes in the Projective Hierarchy. It is well known that there exist games with 3 or more players that are not determined in which the payoff sets are of low complexity, e.g., clopen, in the Borel Hierarchy.
In this dissertation, we review some definitions concerning 2-player games and determinacy, and review some well-known determinacy results. We then adjust these definitions for 3-player games, and define what we will mean by imposing rules on these games. In effect, imposing a rule on a 3-player game amounts to changing the game tree on which the game is played. We then adjust Wolfe's proof of \Sigma_\mathbf{2}^\mathbf{0} determinacy for 2-player games to prove that 3-player games of a specific form are determined provided that a certain rule is imposed. We also define a special class of 3-player games, which we will refer to as 3213-Games. We will explore some properties of these games, and will define rules that will yield the determinacy of these games.

#### Keywords

Borel and Projective Hierarchies; Determinacy; Set Theory

#### Disciplines

Mathematics

#### File Format

pdf

#### File Size

1710 KB

#### Language

English
# DOCX to SRT Converter

Oct 2, 2021

DOCX to SRT converter is used to convert subtitles from Microsoft Word's .docx file format to SRT format. Language conversion is also supported between English, French, German, Italian, Japanese, etc. Simply click on the BROWSE FILE button below, select your text subtitle file and hit the Convert button.

Settings

#### TXT to SRT Converter

Convert lyrics file from TXT to SRT (SubRip) format

### Use Cases

Extract SRT Subtitles from DOCX

This tool extracts SRT subtitles from DOCX files and downloads them as a .TXT file. You can use the downloaded file with any video player and see the subtitles.

Introduce timestamps in plain text subtitle

In this case, all you have is a DOCX file with lines of subtitles; there is no timestamp information whatsoever. The tool does its best to introduce timestamps for each line of lyrics by considering the length of the line, how many words and characters are in it, and the Start/End time you provide.

Language Conversion

Use this to convert each line of your subtitles from one language to another.

### Srt

SRT is a subtitle file format generated by the SubRip software. A time range (start to end) precedes each line of subtitle text. Video players show the subtitle on the screen when the video is within this period.

##### Settings Explained

• 1. Start Counter

Each sequentially generated subtitle has a counter in the SRT file format. By default, the counter starts from 0. You can change this starting counter by using this setting.

Starting Counter 0:

0
00:00:17,620 --> 00:00:23,210
Baby, last night was hands down

1
00:00:23,310 --> 00:00:25,810
One of the best nights

Starting Counter 1:

1
00:00:17,620 --> 00:00:23,210
Baby, last night was hands down

2
00:00:23,310 --> 00:00:25,810
One of the best nights

• 2. Start Time

The time in seconds when the subtitle starts. Must be less than the End Time.

• 3. End Time

The time in seconds when the subtitle ends. Must be greater than the Start Time.

• 4. Convert Language

Select to perform language conversion on the lyrics.

• 5. Source Language

The language to convert the lyrics from.

• 6. Target Language

The target language for the subtitle translation.
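The timestamp-introduction behaviour described above can be sketched as follows. This is a minimal Python example written for this page, not the tool's actual implementation; the real converter's weighting may differ, and here each line's duration is simply proportional to its character count:

```python
def lines_to_srt(lines, start_s, end_s):
    """Assign each subtitle line a time slice proportional to its length,
    then format the result as SRT."""
    weights = [max(len(line), 1) for line in lines]
    total = sum(weights)
    span = end_s - start_s

    def fmt(t):
        # SRT timestamps look like HH:MM:SS,mmm
        ms = int(round(t * 1000))
        h, rem = divmod(ms, 3600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    out, t = [], start_s
    for i, (line, w) in enumerate(zip(lines, weights), start=1):
        t_next = t + span * w / total
        out.append(f"{i}\n{fmt(t)} --> {fmt(t_next)}\n{line}\n")
        t = t_next
    return "\n".join(out)

print(lines_to_srt(
    ["Baby, last night was hands down", "One of the best nights"],
    17.62, 25.81,
))
```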
# Evaluate $\lim_{x\to \infty}\log(x) - \log(1- x)$.

I have a function $\log(x) - \log(1- x)$, and I wish to evaluate the limit as $x \to \infty$. I thought this was just $\infty$ since the limit as $x \to \infty$ of each term is $\infty$, but Mathematica somehow is returning $i \pi$ when I tried it. Does anyone know the reason for this? Thanks!

• $\log(1-x)$ is not defined (except in the complex sense) when $x>1$. That explains the Mathematica result, because the "limit" is $\log(-1) = i\pi$ – Gautam Shenoy Sep 4 '17 at 14:34
• The domain of your function is the open interval $(0,1)$. There is no meaning in considering $x \to \infty$. – Crostul Sep 4 '17 at 14:36
• Might you have meant $\log x - \log(x-1)$? – Michael Hardy Sep 4 '17 at 14:40
• @GautamShenoy Even then, Mathematica should be using principal branches, which would result in $-\pi i$. – Simply Beautiful Art Sep 4 '17 at 14:57
• Mathematica 11.1 and Maple 2017.2 give me: $- i \pi$ – Mariusz Iwaniuk Sep 4 '17 at 15:09

With some algebra, $$\log x - \log(1 - x) = \log \frac{x}{1 - x} \to \log(-1)$$ which Mathematica interprets as $i\pi$, and you can too if you choose that particular branch of the complex logarithm. It's a natural choice because $e^{i\pi} = -1$.

On a different but important note, however, the reasoning "I thought this was just $\infty$ since the limit as $x \to \infty$ of each term is $\infty$" is unfortunately very mistaken. For example, $\lim_{x \to \infty} (x - x) = 0$, but each term obviously goes to infinity.

• But, I don't understand: Spivak Theorem 5.2: $\lim_{x \to a} (f + g)(x) = \lim_{x \to a} f + \lim_{x \to a} g$ – Thomas Moore Sep 4 '17 at 14:36
• This assumes that both limits exist and are finite. – Xander Henderson Sep 4 '17 at 14:37
• When the limits are both real numbers, this is true. (And if they are the same signed infinity, this is true too.) But when the first limit is $+\infty$ and the second is $-\infty$, there is a real problem. Check out indeterminate forms. – T. Bongers Sep 4 '17 at 14:37
• @ThomasMoore Yes, that is correct, assuming both limits exist and are finite. $\infty-\infty$ is an indeterminate form. – Simply Beautiful Art Sep 4 '17 at 14:43

I'm going to hazard a guess that what was meant was $\log x - \log(x-1)$. One can write $\log x - \log(x-1) = \log \dfrac x {x-1}$ and then

\begin{align} \lim_{x\to\infty} \log\frac x {x-1} & = \log \lim_{x\to\infty} \frac x {x-1} \quad \text{because log is continuous} \\[10pt] & = \log 1 = 0. \end{align}

Or one can say that by the mean value theorem there exists $c_x$ between $x-1$ and $x$ for which $$\log x - \log(x-1) = \log' c_x = \frac 1 {c_x} < \frac 1 {x-1} \to 0 \text{ as } x\to\infty.$$

• Isn't $\ln c_x = \frac{d(c_x)}{c_x dx}$ using the chain rule? – user263326 Sep 4 '17 at 17:01
• @user263326 : No. $\dfrac d {dx} \ln c_x$ would indeed be equal to $\dfrac 1 {c_x} \cdot \dfrac d {dx} c_x,$ provided $c_x$ is a differentiable function of $x.$ But that's $\dfrac d{dx} \ln c_x,$ not $\ln c_x$ without the "$\dfrac d{dx}$". But your comment is irrelevant: I never sought $\dfrac d{dx} \ln c_x.$ The expression $\log' c_x$ is NOT the same thing as $\dfrac d{dx} \log c_x$ and was not intended to be the same thing. – Michael Hardy Sep 4 '17 at 17:37

Assuming principal branches, Mathematica is actually wrong. Note that the identity $$\log(a)-\log(b)=\log(a/b)$$ holds for $a,b\in\Bbb R_{>0}$, but not generally for any other $a,b$.
Using principal branches and assuming $x>1$, we actually have $$\log(1-x)=\log|1-x|+i\arg(1-x)=\log(x-1)+\pi i$$ hence, \begin{align}\log(x)-\log(1-x)&=\log(x)-\log(x-1)-\pi i\\&=\log\left(\frac x{x-1}\right)-\pi i\\&\to\log(1)-\pi i\\&=-\pi i\\&\ne\pi i\end{align}

Note that Mathematica 11.1 returns $-\pi i$.

• Unless someone who uses Mathematica knows that the branch it uses is not the principal branch? Would be nice to know... – Simply Beautiful Art Sep 4 '17 at 15:00
• It's also possible that Mathematica was not wrong, but instead Mathematica's answer was misreported. Has the $\pi i$ answer been confirmed by someone other than the OP? wolframalpha.com/input/… gives $-\pi i$. – LarsH Sep 4 '17 at 15:22
• @LarsH Possibly. Unfortunately, we don't know what version the OP was using. – Simply Beautiful Art Sep 4 '17 at 15:23
# Vacuum Tubes

The first electronic amplification of sound was done with vacuum tubes. We have pre-amp, power and rectifier vacuum tubes, among other types, in New Old Stock from the golden era of American & European manufacturing, as well as current production tubes from today.

#### Solid State Replacements

Solid state replacements for vacuum tubes are great for use with hard-to-find or costly vacuum tubes and for reducing 'sag' in power supply applications.

Vacuum Tube - 5AR4 / GZ34, JJ Electronics
The JJ Electronic GZ34 / 5AR4 is a rugged rectifier at a reasonable price. JJ has a reputation for building sturdy tubes and this one is no exception. Users report that the JJ GZ34 / 5AR4 is a reliable rectifier in their Vox, Fender, and hifi amplifiers.
$15.95

Vacuum Tube - 5Y3 S, JJ Electronics, Rectifier
The JJ Electronic 5Y3S is a ruggedly built octal rectifier tube with a directly heated cathode. This tube's solid construction and thick glass envelope offer high reliability. The JJ 5Y3S will work in any 5Y3 position.
$15.50

Vacuum Tube - 5U4GB, JJ Electronics
The JJ Electronic 5U4GB is an excellent choice for an economically priced tube rectifier. Used in many Hi-Fi and guitar amplifiers, JJ delivers a robust and solidly built tube. This full wave rectifier will work in any 5U4 application.
$14.95

Vacuum Tube - 5AR4 STR, Tube Amp Doctor
Tube Amp Doctor 5AR4 rectifier, Premium Selected. A faithful reproduction of the Mullard/VALVO GZ34. Pure power and the typical sag are the distinguishing characteristics of the most popular guitar amps of the 1960s guitar heroes till today.
$22.95

Solid State Rectifier - Yellow Jackets® YJR, For 5AR4, 5U4, 5Y3
The YJR is a direct plug-in replacement adapter for use in most amplifiers that use 5AR4/GZ34, 5U4, 5Y3 or similar full-wave rectifier tubes. The YJR converts your amp's vacuum tube rectifier to a solid state rectifier, reducing tube sag for a tighter sound and feel. The tube rectifier can easily be swapped back in when sag is desired. This Yellow Jacket® is a solid state device and as such does not come with vacuum tubes.
• Converts most audio amplifiers which use a vacuum tube rectifier (5AR4/GZ34, 5U4 or 5Y3) to a solid state device.
• Safe for all common amplifiers and transformers.
All Yellow Jackets® have a one year warranty against manufacturer defects.
$14.95

Vacuum Tube - 35Z5GT, Rectifier, Half Wave
Heater-cathode type diode designed for use as a half-wave rectifier in AC/DC receivers. A heater tap is provided to permit operation with a panel lamp.
$23.90

Vacuum Tube - 5Y3GT, Sovtek
Octal rectifier tube (max DC output current = 125 mA). An indirectly heated diode intended for rectification of commercial-frequency alternating current. It has a 140 mA maximum output, a filament voltage of 5 V and a filament current of 2 A ± 0.2 A.
$16.75

Vacuum Tube - 5AR4, Sovtek
Octal rectifier tube (max DC output current = 250 mA).
$21.75

Vacuum Tube - 5U4GB, Electro-Harmonix
Superior directly heated rectifier diode. Replacement for all 5U4 types, including NOS-style "big bottle" tubes.
$15.40

Vacuum Tube - 12BE6 / HK90, Heptode
Miniature pentagrid converter designed to perform simultaneously the combined functions of the mixer and oscillator in superheterodyne receivers.
The tube is suitable for use in the standard broadcast and FM bands.
$7.90

Vacuum Tube - 35W4, Rectifier, Half Wave
Miniature half-wave rectifier for use in line-operated equipment having series-connected heaters. The heater is tapped to permit operation of a panel lamp.
$7.90

Vacuum Tube - 6CA4 / EZ81, JJ Electronics
The JJ Electronic EZ81 is a noval based rectifier with a rugged construction. These tubes are an excellent choice for guitar amps such as 18W Marshalls and for Hi-Fi amplifiers as well. This tube will work in any EZ81 or 6CA4 application.
$12.50

Vacuum Tube - 6BE6 / EK90, Heptode
Miniature heptode primarily designed to perform the combined functions of the mixer and oscillator in superheterodyne circuits in both the standard broadcast and FM bands.
$6.90

Vacuum Tube - 6X4 / EZ90, Rectifier, Full Wave
Miniature heater-cathode type twin diode designed for full-wave rectifier operation in compact power supplies. The tube is intended for service in automobile and AC radio receivers.
$12.95

Vacuum Tube - 6X5GT / EZ35, Rectifier, Full Wave
Cathode type rectifier designed particularly for use in automobile receivers.
$12.90

Vacuum Tube - 12SA7, Heptode
Pentagrid converter designed to minimize frequency drift. They are intended for service as combined oscillators and mixers in AC, storage battery, and AC/DC operated superheterodynes.
$8.90

Vacuum Tube - 1R5 / DK91, Heptode
Miniature pentagrid converter designed for use as a combined mixer and oscillator in superheterodyne circuits. Because of its small size and high operating efficiency, the 1R5 is especially adapted for compact, battery-operated equipment.
$5.90

Vacuum Tube - 5Y3GT, Rectifier, Full Wave
Filamentary twin diode designed for full-wave rectifier operation in power supplies that have DC output current requirements up to approximately 125 milliamperes.
$29.45

Vacuum Tube - GZ34 / 5AR4, Mullard Reissue
Premium directly heated heavy duty rectifier diode. Reissue of the most popular rectifier valve ever built by Mullard. Helps extend the life of the other valves in the amplifier by allowing them to heat up before plate voltage is applied.
• Low internal voltage drop & controlled heater warm-up time
• Excellent choice for both Hi-Fi and instrument amplification
• Replacement/upgrade for all 5AR4/GZ34 types
$38.95

Vacuum Tube - 5AR4, Ruby Tubes
5AR4 Ruby, rectifier tube.
$21.95

Vacuum Tube - 6AJ8 / ECH81, Triode, Heptode
Triode-heptode for use in FM, AM/FM, AM, and television receivers.
$8.90

Vacuum Tube - 6SA7, Heptode
Metal pentagrid converter. It is intended to perform the combined functions of the mixer and oscillator in superheterodyne receivers, especially those of the all-wave type. The 6SA7 is constructed to provide excellent frequency stability.
$3.90

Solid State Rectifier - For 5AR4, 5U4, 5Y3 Tubes
Direct plug-in replacement for 5AR4, 5U4 and 5Y3 rectifier vacuum tubes in amplifiers with a center-tapped secondary power transformer. Use when less 'sag' is desired in the power supply; just switch back to the tube rectifier when 'sag' is desired.
$9.95

Vacuum Tube - 274B, Valve Art
Valve Art 274B rectifier tube.
$18.54

Don't see what you're looking for? Send us your product suggestions!
## Algebra 1: Common Core (15th Edition)

Given: $(-\infty,2]$. Written as an inequality: $x \leq 2$.
# 10 g of carbon disulfide is combusted with 15.5 L of oxygen gas; which reagent is in excess?

Jun 16, 2017

We have dioxygen gas in excess: roughly 1.7 times the stoichiometric requirement was supplied.

#### Explanation:

We need (i) a stoichiometric equation:

$$CS_2(l) + 3O_2(g) \rightarrow CO_2(g) + 2SO_2(g)$$

And (ii), equivalent quantities of $CS_2(l)$ and dioxygen.

Moles of $CS_2 = \frac{10.0\ g}{76.14\ g\ mol^{-1}} = 0.131\ mol$.

Now (depending on your syllabus), $1\ mol$ of ideal gas occupies $22.7\ L$ at STP. If we (reasonably) assume ideality, then we have $\frac{15.5\ L}{22.7\ L\ mol^{-1}} = 0.683\ mol$ of dioxygen gas.

Complete combustion requires $3 \times 0.131 = 0.394\ mol$ of dioxygen, so clearly we have a stoichiometric EXCESS of dioxygen gas. And thus EXCESS $O_2 = (0.683 - 3 \times 0.131)\ mol = 0.289\ mol$.

Note that $CS_2$ is (i) VERY FLAMMABLE and volatile; and (ii) STINKS very badly.
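A quick numeric check of the arithmetic above (my sketch, not part of the original answer; it uses the same 22.7 L/mol molar volume at STP quoted there):

```python
m_CS2, M_CS2 = 10.0, 76.14       # grams, grams per mole
n_CS2 = m_CS2 / M_CS2            # ~0.131 mol CS2
n_O2 = 15.5 / 22.7               # ~0.683 mol O2
n_O2_needed = 3 * n_CS2          # CS2(l) + 3 O2(g) -> CO2(g) + 2 SO2(g)
print(round(n_CS2, 3), round(n_O2, 3), round(n_O2 - n_O2_needed, 3))
# -> 0.131 0.683 0.289   (dioxygen is in excess by ~0.29 mol)
```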
# Is it valid to write $\lim_{x \rightarrow \infty}\frac{2}{x^r}=2\cdot\frac{1}{\infty}=0$ in limits?

I'm wondering if it's valid to write the following:

$$\lim_{x \rightarrow \infty}\frac{2}{x^r}=2\lim_{x \rightarrow \infty}\frac{1}{x^r}=2\cdot\frac{1}{\infty}=2\cdot 0=0$$

I know it's valid to say that $\frac{1}{\infty}=0$ in limits, but I'm not sure if it would be valid to say $2\cdot\frac{1}{\infty}=2\cdot 0=0$.

• As long as you know what you are doing. – Megadeth Nov 2 '17 at 23:08
• Only if $r$ is positive. – Franklin Pezzuti Dyer Nov 2 '17 at 23:09
• @Nilknarf Yeah thanks, I forgot to mention that. – Hai Nov 2 '17 at 23:09

Since the limit of a product is the product of the limits:

$$\lim_{x\to \infty} \frac{2}{x^r}= 2\lim_{x\to\infty}\frac{1}{x^r}= 2\times 0= 0\, ,\qquad (r>0)\,$$

since $\lim_{x\to \infty}1/x^r=0$ for $r>0$.

You should avoid manipulating $\infty$ like a number. Your result is right; just skip the step where you wrote $\frac{1}{\infty}$.

There are some operations with infinite limits that are valid. One of them is as follows: let $(x_n)_{n \in \mathbb N}$ and $(y_n)_{n \in \mathbb N}$ be sequences of positive real numbers such that $(x_n)_{n \in \mathbb N}$ is bounded and $\lim_{n \to \infty} y_n = +\infty$. Then $\lim_{n \to \infty} {x_n}/{y_n} = 0$. This property remains valid if we consider functions rather than sequences. In this case, the constant function equal to 2 is bounded, and the function $x^r$ tends to infinity as $x$ tends to infinity, where $r > 0$.
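For completeness, here is the short $\varepsilon$-argument behind the quoted property (added for clarity; it is the standard proof). If $|x_n| \le M$ for all $n$ and $y_n \to +\infty$, then for any $\varepsilon > 0$ there is an $N$ such that $y_n > M/\varepsilon$ whenever $n > N$, and hence

$$\left|\frac{x_n}{y_n}\right| \le \frac{M}{y_n} < \varepsilon \quad \text{for } n > N,$$

so $x_n/y_n \to 0$. Taking $x_n = 2$ (so $M = 2$) and $y_n = x^r$ with $r > 0$ justifies the original limit without ever writing $\frac{1}{\infty}$.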
6.4.3 MelFilterBank

6.4.3.1 Outline of the node

This node performs mel-scale filter bank processing on input spectra and outputs the energy of each filter channel. Note that two types of input spectra are accepted, and the output differs depending on the input.

6.4.3.2 Necessary file

No files are required.

6.4.3.3 Usage

When to use: This node is used as preprocessing for acquiring acoustic features. It is used just after MultiFFT, PowerCalcForMap or PreEmphasis, and before MFCCExtraction or MSLSExtraction.

Typical connection

6.4.3.4 Input-output and property of the node

Table 6.78: Parameter list of MelFilterBank

| Parameter name | Type | Default value | Unit | Description |
|----------------|------|---------------|------|-------------|
| LENGTH         | int  | 512           | [pt] | Analysis frame length |
| SAMPLING_RATE  | int  | 16000         | [Hz] | Sampling frequency |
| CUTOFF         | int  | 8000          | [Hz] | Cut-off frequency of the lowpass filter |
| MIN_FREQUENCY  | int  | 63            | [Hz] | Lower cut-off frequency of the filter bank |
| MAX_FREQUENCY  | int  | 8000          | [Hz] | Upper limit frequency of the filter bank |
| FBANK_COUNT    | int  | 13            |      | Number of filter banks |

Input

INPUT : Map<int, ObjectRef> type. A pair of the sound source ID and a power spectrum (Vector<float> type) or a complex spectrum (Vector<complex<float> > type). Note that when the power spectrum is selected, the output energy is doubled compared with the complex-spectrum case.

Output

OUTPUT : Map<int, ObjectRef> type. A pair of the sound source ID and the vector of output energies of the filter bank (Vector<float> type). The dimension of the output vectors is twice FBANK_COUNT: the filter-bank energies occupy dimensions 0 to FBANK_COUNT-1, and dimensions FBANK_COUNT to 2*FBANK_COUNT-1 are filled with zeros. The zero-filled part is a placeholder for dynamic features; when dynamic features are not needed, it should be deleted with FeatureRemover.

Parameter

LENGTH : int type. Analysis frame length. It is equal to the number of frequency bins of the input spectrum. Its range is the positive integers.

SAMPLING_RATE : int type. Sampling frequency. Its range is the positive integers.

CUTOFF : int type. Cut-off frequency of the anti-aliasing filter used in the discrete Fourier transform. It must be at most half of SAMPLING_RATE.

MIN_FREQUENCY : int type. Lower cut-off frequency of the filter bank. Its range is the positive integers, and it must be less than CUTOFF.

MAX_FREQUENCY : int type. Upper limit frequency of the filter bank. Its range is the positive integers, and it must not exceed CUTOFF.

FBANK_COUNT : int type. The number of filter banks. Its range is the positive integers.

6.4.3.5 Details of the node

This node performs mel-scale filter bank processing and outputs the energy of each channel. The center frequency of each bank is positioned at regular intervals on the mel scale $^{(1)}$. The center frequencies are obtained by dividing the interval from the lowest frequency bin ($\hbox{SAMPLING\_RATE}/\hbox{LENGTH}$ Hz) up to the CUTOFF frequency into FBANK_COUNT equal parts on the mel scale. The transformation between the linear scale and the mel scale is

$$m = 1127.01048 \log\left( 1.0 + \frac{\lambda}{700.0} \right) \tag{140}$$

where $\lambda$ is the frequency on the linear scale (Hz) and $m$ is the corresponding value on the mel scale. Figure 6.86 shows an example of the transformation for frequencies up to 8000 Hz. The red points indicate the center frequency of each bank when SAMPLING_RATE is 16000 Hz, CUTOFF is 8000 Hz and FBANK_COUNT is 13; the figure shows that the center frequencies are at regular intervals on the mel scale. Figure 6.87 shows the window functions of the filter banks on the mel scale.
Each window is a triangle that equals 1.0 at its channel's center frequency and falls to 0.0 at the center frequencies of the adjacent channels. The windows are spaced at regular intervals on the mel scale and are symmetric on that scale. Represented on the linear scale, as in Figure 6.88, the same windows cover a wider band in the high-frequency channels. The input power spectrum, expressed on the linear scale, is weighted with the window functions of Figure 6.88, and the energy of each channel is computed and output.

6.4.3.6 References:

(1) Stanley Smith Stevens, John Volkman, Edwin Newman: "A Scale for the Measurement of the Psychological Magnitude Pitch", Journal of the Acoustical Society of America 8(3), pp. 185–190, 1937.
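For illustration only (a minimal sketch, not HARK source code; the function names are mine), the mel-spaced center frequencies described above can be computed along these lines, with the constant taken from Eq. (140):

```python
import numpy as np

def hz_to_mel(f):
    # Eq. (140): linear frequency (Hz) to mel scale
    return 1127.01048 * np.log(1.0 + f / 700.0)

def mel_to_hz(m):
    # inverse of Eq. (140)
    return 700.0 * (np.exp(m / 1127.01048) - 1.0)

def mel_centers(min_freq=63.0, max_freq=8000.0, fbank_count=13):
    # fbank_count centers, equally spaced on the mel scale; the two edge
    # points serve as the 0.0-points of the outermost triangular windows
    mels = np.linspace(hz_to_mel(min_freq), hz_to_mel(max_freq), fbank_count + 2)
    return mel_to_hz(mels[1:-1])

print(np.round(mel_centers(), 1))
```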
# Tutor profile: Nicholas H.

Nicholas H. College student: Tutor for several years

## Questions

### Subject: Pre-Calculus

Question: Solve the equation $$2\cos^2 3\theta = \cos 3\theta$$ for $$0\le \theta <2\pi$$.

Nicholas H.: $$2\cos^2 3\theta = \cos 3\theta$$ gives $$2\cos^2 3\theta-\cos 3\theta=0$$, so $$(\cos 3\theta)(2\cos 3\theta -1)=0$$. Therefore $$\cos 3\theta=0$$ or $$\cos 3\theta=1/2$$, so either $$3\theta=\pi/2, 3\pi/2, 5\pi/2, 7\pi/2, 9\pi/2, 11\pi/2$$ or $$3\theta=\pi/3, 5\pi/3, 7\pi/3, 11\pi/3, 13\pi/3, 17\pi/3$$. Dividing by 3 to find $$\theta$$, we obtain the solutions $$\theta=\pi/6,\pi/2,5\pi/6,7\pi/6,3\pi/2,11\pi/6$$ and $$\theta=\pi/9,5\pi/9,7\pi/9,11\pi/9,13\pi/9,17\pi/9$$.

### Subject: Calculus

Question: Prove that the infinite decimal $$0.999\ldots$$ equals $$1$$.

Nicholas H.: $$0.999\ldots = \sum_{i=1}^{\infty} \frac{9}{10^i} = \sum_{i=1}^{\infty} \frac{10-1}{10^i} = \sum_{i=1}^{\infty} \left(\frac{1}{10^{i-1}} - \frac{1}{10^i}\right),$$ a telescoping sum whose $$n$$-th partial sum is $$1 - \frac{1}{10^n} \to 1$$. This proves that $$0.999\ldots = 1$$.

### Subject: Economics

Question: How does an increase of the minimum wage affect the economy?

Nicholas H.: Recent evidence suggests that the effects of raising the minimum wage would not be extremely significant; however, raising the minimum wage is known to increase the purchasing power of individuals and to reduce employee turnover, both of which are positive effects for consumers.
gregnnylf94, published 2018-01-10:

I have a very simple Mojolicious::Lite application that displays a form. When I embed the HTML at the bottom of the script it works fine, but when I try to put my templates in a ./templates folder and my layouts in the ./templates/layouts folder I get an error. I followed the guide from here. I also tried adding the @@ template_name.html.ep marker to the top of each file. Am I doing something obviously wrong?

### Application/index.pl

#!C:\strawberry\perl\bin\perl.exe
use Mojolicious::Lite;

any '/' => sub {
    my $self = shift;
    $self->render(template => 'home/index');
};

app->start;

### Application/templates/home/index.html.ep

@@ home/index.html.ep
% layout 'default';
Hello Hello Hello

### Application/templates/layouts/default.html.ep

(A bunch of HTML with a <%= content %> tag)
1. ## Binomial theorem

I don't understand the basic concept of binomials and don't understand this question, so I don't know where to begin. If anyone could give any pointers I'd really appreciate it.

2. The binomial theorem says that for $(x + y)^n$ the coefficient of $x^ky^{n-k}$ is $\binom{n}{k}$.

$(3 + 5x^2)(1 - \frac{1}{2x})^n$

The second factor expands into something like:

$(3 + 5x^2)(c_0 + c_1x^{-1} + ... + c_nx^{-n})$

Now, think about terms in the expansion: the constant term will be $3c_0 + 5x^2 \cdot c_2x^{-2} = 3c_0 + 5c_2$, and the coefficient of $x^{-1}$ comes from $3c_1x^{-1} + 5x^2 \cdot c_3x^{-3} = (3c_1 + 5c_3)x^{-1}$.

Use the binomial theorem to get values for $c_1, c_2, \ldots$ that depend on $n$.

3. finally understood it after so long! thanks
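For reference (an addendum, not part of the original thread), the binomial theorem pins the $c_k$'s down explicitly:

$$\left(1 - \frac{1}{2x}\right)^n = \sum_{k=0}^{n} \binom{n}{k}\left(-\frac{1}{2}\right)^k x^{-k}, \qquad \text{so } c_k = \binom{n}{k}\left(-\frac{1}{2}\right)^k.$$

For example, the constant term of the full product is $3c_0 + 5c_2 = 3 + \frac{5}{4}\binom{n}{2}$.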
# What in your opinion is great?

KenJackson:

Quote: "Sure, they must have had lasers and levitation devices, but somehow we can't find any proof of them."

Maybe they had lasers, but I didn't suggest it because no one has ever found evidence of them. But there IS evidence of incredible things that someone did (the remaining stone structures) and no one has adequately explained them. And I wonder about levitation devices. But if they were possible, surely some modern-day physicist would have at least suggested a way to temporarily neutralize mass. As far as I know, no one has done that.

Quote: "Maybe there were two types of workers: the workers who could cut stone with lasers, and those who had to use copper chisels."

You are the one suggesting lasers, not me. But yes, there were two or three or dozens of different levels of technology used by the many different peoples who worked on these sites over the millennia.

Quote: "Except for 'there are buildings that I can't explain', you have zero proof."

You're making a logic error. I have said this is a great mystery. You are saying no, it's explainable. The burden of proof falls on you. And you've failed to provide a plausible explanation.

Quote: "No, I didn't ignore, I rejected and said why."

There's too much arrogance here. It's very wearying. No one challenged the other contributors to prove anything else was great. I'm disgusted.

Staff Emeritus, Homework Helper:

Quote: "I'm disgusted."

How sad... Disgusted by a little bit of intellectual discussion... Here's my philosophy: whenever I say something, I accept that I can be challenged and I am always prepared to retract my claim or back it up. THAT is the core of science. The core of science is not to be disgusted by somebody challenging your world view.

Homework Helper Gold Member:

Quote: "There's unlimited evidence of a world-wide flood both on the physical Earth and in ancient literature."

In Bible-related mythology there are references to a "great flood." The references are nowhere near unlimited, however. But the fact that it was called a "great" flood (at least in English translations) actually is on topic. That's about where the evidence ends, though.

Just think about it. What would happen if all the water in Earth's atmosphere, all of it, every drop, fell to the surface of the Earth all at once? How would that affect the sea level?

How much water is in the atmosphere? According to this site (which references Gleick, P. H., 1996: Water resources. In Encyclopedia of Climate and Weather, ed. by S. H. Schneider, Oxford University Press, New York, vol. 2, pp. 817-823), about 12.9 trillion cubic meters.

For a back-of-the-envelope calculation, note that the volume of a sphere is $V = \frac{4}{3} \pi r^3$. Thus
$$dV = 4 \pi r^2 dr.$$
Rearranging,
$$dr = \frac{dV}{4 \pi r^2}.$$
Plugging $12.9 \times 10^{12}$ cubic meters into $dV$, and $6.371 \times 10^6$ meters into $r$, the radius of the Earth, tells us the sea level would rise somewhere around 2.5 cm. That's hardly enough to describe a flood of biblical proportions. (A quick numeric check of this estimate appears at the end of the thread.)

Finding fossilized sea life atop mountains is expected due to plate tectonics (particularly its role in mountain formation). Any claim that the sea life must have arrived there due to a great flood requires ignorance of plate tectonics and mountain formation.

Staff Emeritus, Homework Helper:

Quote: "Finds of fossilized sea-life found atop mountains is expected due to plate tectonics"

Right.
What could change my mind, however, is if the fossilized sea life atop mountains and in ALL other places stemmed from the exact same time period. This is actually a very simple test for the global flood.

Hoophy: I think that ancient structures such as Stonehenge are pretty great because of the LACK of 'advanced' technology used during construction. I do not believe that the technology of the time was more advanced than (or even close to) today's, but PERHAPS (and therefore perhaps not) the builders used a method that we have not thought of yet. Now, assuming they used a method we have not yet thought of, I would argue that their technology is NOT more advanced, but rather different. I bet there are many, many ways to build, well, anything really :) and it would be a shame to think that because we do not do different things as well as others, they are technologically superior. Maybe somebody just had a really good idea on how to place 'this or that log'. This different method that we are no longer aware of does not imply advanced technology. Surely there are many ways to build a Stonehenge, and with enough time/people/LUCK we will have the same creative idea as the builders, or we will have a better idea on how to construct a Stonehenge (with their technology), and we might not be able to prove whether that idea is the one that was used, because it is simply another way to complete the same task.

Take for example the Moai of Easter Island: for a long time we ('modern' humans) were clueless as to how the ancient builders were able to move the structures, but eventually we figured out a likely way they could have achieved it. It took creativity to rediscover the method, just as it took creativity to figure out how to move the Moai originally. The natives of Easter Island did not have 'advanced' technology just because we did not know how they did something, and even if we never found out, we could not assume they were technologically superior (as it has been proven they were not). In my OPINION this applies to all the baffling structures mentioned in this thread.

The creativity and ingenuity these ancient engineers harnessed to build amazing structures with the rudimentary technology available is pretty GREAT to me, but even MORE GREAT is that to this day we are still pondering their accomplishments and trying to figure them out; we solve some mysteries and at the same time continue to be stumped by others. History would not surprise us if we knew it all.

This is all my own opinion and I hope it does not make anybody mad! I am curious to hear the opinions of those who disagree with me and I will treat your opinions with respect, as I hope you will show mine. Thanks! :D

Hoophy: I think that this is great. I would love to go see it myself one day.

Homework Helper: Alexander - he's just great. Danes can be great.

1oldman2: Great Scott!

Gold Member: Jerry Lee Lewis' song Great Balls of Fire was great.

Mentor: Fresh butter.

Rubidium_71: A cup of hot tea early in the morning is great.

rollete: Consciousness is great. Beauty is great. Dreaming is great. Everything is absurd, which is great.

rootone: I guess the greatest of all things would be the Universe.

Molar: Being free is great.
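As flagged earlier in the thread, here is a quick numeric check of the sea-level estimate (a small Python sketch using the figures quoted above):

```python
import numpy as np

dV = 12.9e12            # m^3 of water in the atmosphere (Gleick, 1996)
r = 6.371e6             # m, mean radius of the Earth
dr = dV / (4 * np.pi * r**2)
print(dr * 100)         # ~2.5 (cm of sea-level rise)
```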
Cauchy Integral theorem

The Cauchy Integral theorem states that for a function $f(z)$ which is analytic inside and on a simple closed curve $C$ in some region $\mathcal{R}$ of the complex $z$ plane, for a complex number $a$ inside $C$,

$$f(a)=\frac{1}{2\pi i}\int_{C}\frac{f(z)}{z-a}\,dz.$$

Proof: For $f(a)$ a constant, we may rewrite $dz=d(z-a)$, and noting that $z-a=|z-a|e^{i\phi}$ and $d(z-a)=i|z-a|e^{i\phi}\,d\phi$, and taking the contour of integration to be a circle of unit radius $C:|z-a|=1$, we may write

$$\frac{1}{2\pi i}\int_{C}\frac{f(a)}{z-a}\,dz=\frac{f(a)}{2\pi i}\int_{0}^{2\pi}\frac{i|z-a|e^{i\phi}\,d\phi}{|z-a|e^{i\phi}}=f(a).$$

By Cauchy's theorem, we may deform the contour into any closed curve that contains the point $z=a$ and the result holds.

For the case when $f(z)$ is not constant we may write

$$\frac{1}{2\pi i}\int_{C}\frac{f(z)}{z-a}\,dz=\frac{1}{2\pi i}\int_{C}\frac{f(a)}{z-a}\,dz+\frac{1}{2\pi i}\int_{C}\frac{f(z)-f(a)}{z-a}\,dz=f(a)+\frac{1}{2\pi i}\int_{C}\frac{f(z)-f(a)}{z-a}\,dz.$$

We must show that the second term on the right is identically zero. In a vanishingly small neighborhood $C:|z-a|=\varepsilon$,

$$\frac{1}{2\pi i}\int_{C}\frac{f(z)-f(a)}{z-a}\,dz\rightarrow\frac{1}{2\pi i}\int_{C}f^{\prime}(z)\,dz=0.$$

Because the derivative of an analytic function is also analytic, the integral vanishes identically within a neighborhood of $z=a$. By Cauchy's theorem, the contour of integration may be expanded to any closed curve within $\mathcal{R}$ that contains the point $z=a$, thus showing that the integral is identically zero.
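As a quick numerical sanity check (my addition, not part of the original page): for $f(z)=e^z$ and a unit circle around $a$, the formula should reproduce $f(a)=e^a$.

```python
import numpy as np

a = 0.3 + 0.2j                      # point inside the contour
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
z = a + np.exp(1j * theta)          # unit circle centred at a
dz = 1j * np.exp(1j * theta)        # dz/dtheta along the contour
integrand = np.exp(z) / (z - a) * dz
integral = np.trapz(integrand, theta) / (2.0j * np.pi)

print(integral)   # ~ (1.3230 + 0.2682j)
print(np.exp(a))  # e^a, for comparison
```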
# Explicit symmetry breaking

In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, in the Lagrangian or the Hamiltonian) that do not respect the symmetry. Usually this term is used in situations where the symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory. An example is the spectral line splitting in the Zeeman effect, due to a magnetic interaction perturbation in the Hamiltonian of the atoms involved.

Explicit symmetry breaking differs from spontaneous symmetry breaking. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it.[1]

Explicit symmetry breaking is also associated with electromagnetic radiation. A system of accelerated charges results in electromagnetic radiation when the geometric symmetry of the electric field in free space is explicitly broken by the associated electrodynamic structure under time-varying excitation of the given system. This is quite evident in an antenna, where the electric field lines curl around or have rotational geometry around the radiating terminals, in contrast to the linear geometric orientation within a pair of transmission lines, which does not radiate even under time-varying excitation.[2]

## Perturbation theory in quantum mechanics

A common setting for explicit symmetry breaking is perturbation theory in quantum mechanics. The symmetry is evident in a base Hamiltonian $H_{0}$. This $H_{0}$ is often an integrable Hamiltonian, admitting symmetries which in some sense make the Hamiltonian integrable. The base Hamiltonian might be chosen to provide a starting point close to the system being modelled.

Mathematically, the symmetries can be described by a smooth symmetry group $G$. Under the action of this group, $H_{0}$ is invariant. The explicit symmetry breaking then comes from a second term in the Hamiltonian, $H_{\text{int}}$, which is not invariant under the action of $G$. This is sometimes interpreted as an interaction of the system with itself, or possibly with an externally applied field. It is often chosen to contain a factor of a small interaction parameter. The Hamiltonian can then be written

$$H=H_{0}+H_{\text{int}}$$

where $H_{\text{int}}$ is the term which explicitly breaks the symmetry. The resulting equations of motion will also not have $G$-symmetry. A typical question in perturbation theory might then be to determine the spectrum of the system to first order in the perturbative interaction parameter.
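To make the setup concrete, here is a minimal numerical toy (the matrices are invented for illustration, in the spirit of the Zeeman effect): a degenerate level of a symmetric base Hamiltonian $H_0$ splits once a small term $H_{\text{int}}$ that does not respect the symmetry is added.

```python
import numpy as np

H0 = np.diag([1.0, 1.0, 2.0])      # base Hamiltonian with a degenerate doublet
eps = 0.05                         # small interaction parameter
V = np.array([[0.0, 1.0, 0.0],     # symmetry-breaking term H_int = eps * V
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
H = H0 + eps * V

print(np.linalg.eigvalsh(H0))  # [1., 1., 2.]        -- degenerate
print(np.linalg.eigvalsh(H))   # [0.95, 1.05, 2.]    -- the doublet splits by 2*eps
```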
## Little endian vs. big endian

This is about how the bytes of a data type are arranged in memory. An int variable, for example, occupies 4 bytes in memory. In case of little endian, the least significant byte of the integer value will be first in memory (at the smaller address). In case of big endian, the most significant byte of the integer value will be the first byte in memory.

Consider the following code that prints the bits of a number by remembering the last bit and shifting to the right. 'size' is the number of bits to print, 'val' is the number to print:

void PrintBinary(int val, int size)
{
    unsigned char* b = new unsigned char[size];
    memset(b, 0, size);
    int pos = size - 1;
    while (val != 0)
    {
        b[pos] = val % 2;
        val = val >> 1;
        pos--;
        if (pos < 0) break;
    }
    for (pos = 0; pos < size; ++pos)
    {
        printf("%d", b[pos]);
        if (pos % 8 == 7) printf(" ");
    }
    delete[] b;
}

int x = 8;
PrintBinary(x, 32);

Then the above code will print: 00000000 00000000 00000000 00001000

The code below will print '00001000 00000000 00000000 00000000', which shows that it was run on a little endian machine, because the least significant byte comes first:

unsigned char* b = (unsigned char*)&x;
for (int i = 0; i < (int)sizeof(int); ++i)
{
    PrintBinary(b[i], 8);
}

Here is a method to determine the endianness of a machine:

bool IsLittleEndian()
{
    int b = 1;
    return ((unsigned char*)(&b))[0];
}

A more interesting approach is to use a union (a C++ facility to aggregate more data types over the same memory space):

bool IsLittleEndian()
{
    union local_t
    {
        int i;
        unsigned char b;
    };
    local_t u;
    u.i = 1;
    return u.b;
}

That was the C++ approach. Java offers an API for it:

import java.nio.ByteOrder;

if (ByteOrder.nativeOrder().equals(ByteOrder.BIG_ENDIAN)) {
    System.out.println("Big-endian");
} else {
    System.out.println("Little-endian");
}

In C# the BitConverter class has the IsLittleEndian static field.

## Just another counting problem

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.

Solution: There are $n_3 = \lfloor 999/3 \rfloor = 333$ multiples of $3$ and $n_5 = \lfloor 999/5 \rfloor = 199$ multiples of $5$. Some of those numbers are multiples of both $3$ and $5$ (i.e. they are multiples of $15$), and to avoid summing them twice we need to subtract them once. There are $n_{15} = \lfloor 999/15 \rfloor = 66$ multiples of $15$. The result is:

$3 \cdot \frac{n_3(n_3+1)}{2}+5 \cdot \frac{n_5(n_5+1)}{2}-15 \cdot \frac{n_{15}(n_{15}+1)}{2}=233168$

This is a simple C++ program that verifies the math:

#include <iostream>
using namespace std;

int main()
{
    int n = 1000;
    int sum = 0;
    for (int i = 1; i < n; i++)
    {
        if ((i % 3 == 0) || (i % 5 == 0))
        {
            sum += i;
        }
    }

    int div3 = (n - 1) / 3;
    int div5 = (n - 1) / 5;
    int div15 = (n - 1) / 15;

    int sumDirect = 3 * div3 * (div3 + 1) / 2
                  + 5 * div5 * (div5 + 1) / 2
                  - 15 * div15 * (div15 + 1) / 2;

    if (sum == sumDirect)
    {
        cout << "Excellent!";
    }
    return 0;
}

## SVD – a simple proof

Every $m \times n$ real matrix $A$ can be decomposed as: $A=U \Sigma V^T$ where $U$ is an $m \times m$ orthogonal matrix, $\Sigma$ is an $m \times n$ matrix having non-zero elements only on the diagonal, and $V$ is an $n \times n$ orthogonal matrix.

We know from the previous post that a symmetric matrix is diagonalisable, and can be diagonalised by an orthogonal matrix. In our case $A^{T}A$ happens to be an $n \times n$ symmetric matrix. Therefore there exist $v_i, \lambda_i$, $i=1..n$, with $A^TAv_i=\lambda_i v_i$.
Real symmetric matrices have real eigenvalues, and additionally $\lambda_i \geq 0$, because (taking the $v_i$ to be unit vectors):

$$\langle Av_i, Av_i\rangle = \langle v_i, A^TAv_i\rangle = \lambda_i \langle v_i, v_i\rangle = \lambda_i$$

Because $\langle Ax, Ax\rangle$ is greater than or equal to $0$ for any vector $x$, it follows that $\lambda_i=0$ when $Av_i=0$ (i.e. $v_i$ is in the null space of $A$) and $\lambda_i>0$ otherwise.

For $\lambda_i > 0$, multiplying $A^TAv_i=\lambda_i v_i$ by ${v_j}^T$ and considering that $v_i$ and $v_j$ are orthogonal unit vectors, we get:

${v_j}^TA^TAv_i=\lambda_i \delta_{i,j}$  =>  ${(Av_j)}^TAv_i=\lambda_i \delta_{i,j}$  =>  ${\left(\frac{Av_j}{\sqrt{\lambda_j}}\right)}^T\left(\frac{Av_i}{\sqrt{\lambda_i}}\right)=\delta_{i,j}$

Denoting $u_i=\frac{Av_i}{\sqrt{\lambda_i}}$ we get $Av_i=\sqrt{\lambda_i}\,u_i$, hence

$$A[v_1 \ \ldots \ v_r \ v'_{r+1} \ \ldots \ v'_{n}]=[u_1 \ \ldots \ u_r \ u'_{r+1} \ \ldots \ u'_{m}]\begin{bmatrix} \sqrt{\lambda_1} & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & \sqrt{\lambda_r} & \cdots & 0 \\ \vdots & & \vdots & & \vdots \\ 0 & \cdots & 0 & \cdots & 0 \end{bmatrix}$$

$r$ is the rank of the matrix. The set of vectors $v_i$ is extended by the set of orthogonal vectors $v'_j$ to form a basis in $R^n$. The set of vectors $u_i$ is extended by the set of orthogonal vectors $u'_j$ to form a basis in $R^m$.

Posted in Math | 1 Comment

## Real symmetric matrices are diagonalizable

This article involves advanced linear algebra knowledge, but it is definitely worth understanding. The previous post contains a proof that a real symmetric matrix has real eigenvalues. Additionally, real symmetric matrices are diagonalizable by an orthogonal matrix. This means:

$\forall A$ symmetric, $\exists P$ orthogonal ($P^tP=I$) and $D$ diagonal, such that $A=P^{t}DP$

To continue the proof I will use the following result: let $A$ be a real symmetric matrix, let $V$ be a subspace of $R^n$ and $V^{\bot}$ its orthogonal complement ($R^n=V \oplus V^{\bot}$). If $\forall v \in V$, $Av \in V$, then for $w \in V^{\bot}$ $\Rightarrow$ $Aw \in V^{\bot}$.

Proof: I will use the dot product defined in the previous post. Given $v \in V$ and $w \in V^{\bot}$, $\langle Av, w\rangle = \langle v, Aw\rangle$ because $A$ is real and symmetric. But $\langle Av, w\rangle = 0$ because $Av \in V$. Thus $\langle v, Aw\rangle = 0$, $\forall v \in V$. This means that $Aw \in V^{\bot}$.

Getting back to the problem, $A$ has at least one eigenvalue. It results that there exist $X_1$ and $\lambda_1$ such that $AX_1=\lambda_1X_1$. If $V_1$ is the vector space generated by $X_1$, then the operator $A$ is also symmetric when applied to the subspace $V_1^{\bot}$ (this can be proven by changing the basis). This means that there exists $X_2 \in V_1^{\bot}$ such that $AX_2 = \lambda_2X_2$. Considering the vector space generated by $X_1$ and $X_2$, and applying the operator $A$ to its orthogonal complement, we get $AX_3 = \lambda_3X_3$. By induction we get:

$AX_i = \lambda_iX_i, i=1..n$, where the vectors $X_i$ are pairwise perpendicular: $\langle X_i, X_j\rangle = 0$, $\forall i \neq j \in \{1..n\}$. Additionally, the vectors can be divided by their norms to make them unit vectors.

In matrix form the relations above can be written as:

$A[X_1 X_2 ... X_n]=[X_1 X_2 ... X_n]\,\mathrm{diag}(\lambda_1, \lambda_2, ... ,\lambda_n)$

Or $A=P\,\mathrm{diag}(\lambda_1, \lambda_2, ... ,\lambda_n)\,P^{-1}$, where the columns of $P$ are the vectors $X_1, X_2, ..., X_n$. $P$ is orthogonal because the vectors $X_i$ are pairwise perpendicular unit vectors. This also means that $P^{-1}=P^t$.

Posted in Linear Algebra, Math | 3 Comments

## Real symmetric matrices have real eigenvalues

A real matrix is symmetric if $A^t=A$. I will show in this post that real symmetric matrices have real eigenvalues.
I will need a dot product for the proof, and I'll use the basic dot product for two vectors $X$ and $Y$: $\langle X, Y\rangle = X^t\overline{Y}$, where $\overline{Y}$ is the complex conjugate of the vector $Y$.

The useful property of this dot product is that $\langle AX, Y\rangle = \langle X, A^tY\rangle$ for any real matrix $A$. Considering that $A$ is real (so $\overline{A^t} = A^t$), a simple proof is:

$\langle AX, Y\rangle = (AX)^t\overline{Y} = X^tA^t\overline{Y} = X^t\overline{A^tY} = \langle X, A^tY\rangle$

An eigenvalue has a corresponding eigenvector: $AX=\lambda X$. We have $\langle AX, X\rangle = \langle \lambda X, X\rangle = \lambda\langle X, X\rangle$ and, considering that $A$ is symmetric, $\langle AX, X\rangle = \langle X, A^tX\rangle = \langle X, AX\rangle = \langle X, \lambda X\rangle = \overline{\lambda}\langle X, X\rangle$. From $\lambda=\overline{\lambda}$ and because $X$ is not a zero vector, it follows that the imaginary part of $\lambda$ is zero, so the eigenvalue is a real number.

Posted in Linear Algebra, Math | 3 Comments

Given an array of integers and an integer S, find if there are two numbers X and Y in the array such that X + Y = S. The problem was received by a friend during a phone interview with Google.

A naive solution is:

// O(n^2)
void solutionNaive(vector<int>& inVals, int sum)
{
    size_t size = inVals.size();
    for (size_t i = 0; i + 1 < size; i++)
    {
        for (size_t j = i + 1; j < size; j++)
        {
            if (inVals[i] + inVals[j] == sum)
            {
                printf("Values: %d and %d\n", inVals[i], inVals[j]);
                return;
            }
        }
    }
}

This is $O(n^2)$ and its performance is definitely not acceptable for large arrays. A faster solution is to use a hash set for storing the elements tested so far, and to test if S minus the current element was already stored in the hash set. More memory is used in this approach, but the complexity in this case is $O(n)$ because searching in a hash set or hash map is $O(1)$:

// O(n); hash_set is the pre-C++11 extension (unordered_set is the modern equivalent)
void solutionOk(vector<int>& inVals, int sum)
{
    hash_set<int> checkedVals;
    for (vector<int>::iterator it = inVals.begin(); it != inVals.end(); ++it)
    {
        if (checkedVals.find(sum - (*it)) != checkedVals.end())
        {
            printf("Values: %d and %d\n", sum - (*it), *it);
            return;
        }
        else
        {
            checkedVals.insert(*it);
        }
    }
}

Another possible solution, not as efficient but interesting, is to sort the numbers and then, for each X in the array, use binary search to check whether S - X is there. The complexity is $O(n\log(n))$, and no additional memory is used. An important aspect of this method is the way it handles the case when S - X = X:

// O(n log(n))
void solutionNotSoBad(vector<int>& inVals, int sum)
{
    // sorting is O(n log(n))
    sort(inVals.begin(), inVals.end());
    for (vector<int>::iterator it = inVals.begin(); it != inVals.end(); ++it)
    {
        int toFind = sum - (*it);
        bool found = false;
        // search the elements strictly before *it
        if (toFind <= *it)
        {
            found = binary_search(inVals.begin(), it, toFind);
        }
        // search the elements strictly after *it
        if (!found && toFind >= *it)
        {
            found = binary_search(it + 1, inVals.end(), toFind);
        }
        if (found)
        {
            printf("Values: %d and %d\n", *it, toFind);
            return;
        }
    }
}

As a conclusion, keep in mind that hash-based data structures are generally the most appropriate to use when fast search operations are needed.

Posted in Algorithms | 1 Comment

## Cellular automata – generating chaos with simple rules

Consider a matrix having $n$ lines and $2n+1$ columns. The contents of the matrix are filled with 0 or 1 based on the following rules:

1. The first line is full of zeros except the central element (position $n$ for 0-based indexing as in C++, C#, Java etc., and position $n+1$ for Matlab, Pascal etc.), which is 1. For $n = 3$ the first line will be: 0 0 0 1 0 0 0

2. Line $k+1$ is computed based on line $k$. Each element is computed based on its three upper neighbours.
Thus for each combination of the possible values of the upper neighbours we need to specify a value, 0 or 1:

000 -> $a_1$
001 -> $a_2$
010 -> $a_3$
011 -> $a_4$
100 -> $a_5$
101 -> $a_6$
110 -> $a_7$
111 -> $a_8$

where $a_1, a_2,...,a_8$ can be 0 or 1. The array $\overline{a_8...a_1}$ is the base-2 representation of a number between 0 and 255. For simplicity, the first elements of each row are always set to 0, because for those elements only two of the three upper neighbours are known.

Based on this rule, different patterns can be generated. For example, if $\overline{a_8...a_1}=90$ and $n = 261$, the following pattern is generated:

Here is the Matlab code for generating patterns:

function [] = automata()
    n = 261;
    m_in = zeros(n, 2*n+1);
    rule = [0, 0, 0, 1, 1, 1, 1, 0]; % rule 30
    % rule = [0, 1, 0, 1, 1, 0, 1, 0]; % rule 90
    m_out = generate(m_in, rule);
    imshow(1 - m_out);

%---------------------------------------------------------------------
function [o] = generate(m, rule)
    problemSize = size(m, 1);
    % starting value
    m(1, problemSize+1) = 1;
    mid = problemSize+1;
    % for each row
    for row = 2:problemSize-1
        % for each column
        for col = mid-(row-1) : mid+(row-1)
            i1 = m(row-1, col-1);
            i2 = m(row-1, col);
            i3 = m(row-1, col+1);
            % based on the values from the previous row
            % and based on the rule, generate the values in
            % the current row
            n = i1*4 + i2*2 + i3;
            m(row, col) = rule(8-n);
        end
    end
    o = m;

A particularly interesting pattern is generated by rule 30. This is how it looks for $n = 261$:

Compared to other rules, a chaotic pattern is generated (see the right side of the result). Rule 30 shows that using simple evolution rules, and starting from something basic (a single value of 1 in this case), something complex can be generated. This leads to the following idea: what if the universe was generated in a similar way, from a simple initial state and a simple rule that evolves in time and leads to the complexity that we see around us?

Source: Stephen Wolfram, "A New Kind of Science", http://www.youtube.com/watch?v=_eC14GonZnU

Posted in Math | 1 Comment
## About this book

This book attempts to acquaint engineers who have mastered the essentials of structural mechanics with the mathematical foundation of their science, the structural mechanics of continua. The prerequisites are modest: a good working knowledge of calculus is sufficient. The intent is to develop a consistent and logical framework of theory which will provide a general understanding of how mathematics forms the basis of structural mechanics. Emphasis is placed on a systematic, unifying and rigorous treatment.

Acknowledgements

The author feels indebted to the engineers Prof. D. Gross, Prof. G. Mehlhorn and Prof. H. G. Schafer (TH Darmstadt), whose financial support allowed him to follow his inclinations and to study mathematics; to Prof. E. Klingbeil and Prof. W. Wendland (TH Darmstadt) for their unceasing effort to achieve the impossible, to teach an engineer mathematics; to the staff of the Department of Civil Engineering at the University of California, Irvine, for their generous hospitality in the academic year 1980-1981; to Prof. R. Szilard (Univ. of Dortmund) for the liberty he granted the author in his daily chores; to Mrs. Thompson (Univ. of Dortmund) and Prof. L. Kollar (Budapest/Univ. of Dortmund) for their help in the preparation of the final draft; to my young colleagues, Dipl.-Ing. S. Pickhardt, Dipl.-Ing. D. Ziesing and Dipl.-Ing. R. Zotemantel, for many fruitful discussions; and to cand. ing. P. Schopp and Frau Middeldorf for their help in the production of the manuscript.

Dortmund, January 1985. Friedel Hartmann

## Table of Contents

### Introduction

Abstract: Unlike mathematicians, who live happily in the realm of their intellectual creations and must never bring their symbols in contact with the rough outside world, the engineer identifies mathematical symbols with physical objects.
Friedel Hartmann

### 1. Fundamentals

Abstract: We introduce in this chapter our notations and the principal equations of linear, first-order structural mechanics.
Friedel Hartmann

### 2. Work and Energy

Abstract: A spring is a very simple elastic element and, therefore, quite appropriate to acquaint us with the principles of structural mechanics.
Friedel Hartmann

### 3. Continuous Beams, Trusses and Frames

Abstract: In the previous chapter we formulated the principles of virtual work and Betti's principle for rather simple structures.
Friedel Hartmann

### 4. Energy Principles

Abstract: We formulate in this chapter the energy principles of structural mechanics.
Friedel Hartmann

### 5. Concentrated Forces

Abstract: In this chapter we shall formulate
- the principle of virtual displacements
- the principle of virtual forces
- Betti's theorem
- the principle eigenwork = internal energy

when the structural elements (bars, beams, Kirchhoff plates and elastic plates or bodies) are loaded with concentrated forces.
Friedel Hartmann

### 6. Influence Functions

Abstract: The equations formulated in chapter 5 find many applications. Not because concentrated loads occur so often (they are rather fictitious, abstract quantities), but because concentrated loads are useful in the calculation of single displacements. This method is known as "the dummy-unit-load method".
Friedel Hartmann

### 7. The Operators A

Abstract: Up to now we were concerned with the differential equations which govern the displacement of the structural elements, as e.g.
with the equation $-Lu = p$ which governs the displacement of an elastic body. Now we focus on the systems of three equations which, originally, preceded the displacement equations. In the case of an elastic body this was the system

$$E(u) - E = 0, \qquad C[E] - S = 0, \qquad -\operatorname{div} S = p \tag{7.1}$$

Friedel Hartmann

### 8. Shells

Abstract: In this chapter we will extend our approach to shells. As there are many, many different formulations for shells, we had to decide on one particular model. We opted for Koiter's model because the mathematical properties of this model are fully worked out. But, as anyone who is familiar with shells will recognize, all that is said in the following applies to different models (nearly) as well. The reader will certainly also realize that the mathematics of shells closely fits into the general picture.
Friedel Hartmann

### 9. Second-Order Analysis

Abstract: If the equations of equilibrium are established using the geometry of the displaced structure, then we speak of second-order analysis.
Friedel Hartmann

### 10. Nonlinear Theory of Elasticity

Abstract: We extend in this chapter our formulations to the nonlinear theory of elasticity (geometric and physical nonlinearities) and the large-displacement analysis of beams and plates (geometric nonlinearities).
Friedel Hartmann

### 11. Finite Elements

Abstract: To model the behaviour of a structure, the finite element method replaces the structure by a patch of finite elements which, compared with the real structure, can undergo only a limited number of states or modes, namely all those modes whose state variables are piecewise polynomials of maximum degree, say, k.
Friedel Hartmann

### Backmatter
# Painter Essentials, GIMP, Inkscape

A very quick review of some other editors...

Name: Corel Painter Essentials 4
Company: Corel
Platforms: Windows, Mac
Brief Description: Consumer grade paint and photo editing program
Demo Restrictions: 60 day trial
Cost: A$100 in Australia (US$66), unsure elsewhere

Corel's Painter series is renowned as the industry leader in emulating natural media. The professional package Painter X has a professional price tag to match ($799), but the consumer priced option, Painter Essentials, is considerably cheaper. A boxed version can be bought from Apple Australia for just a hundred Aussie bucks, so that is within my budget for tools.

Painter Essentials 4 lies somewhere between ArtRage and a traditional digital editing suite. The prime feature is the natural art tools: brushes, pens, chalk and so on, but with a more traditional art software GUI and digital tools.

The GUI itself was not that hard to figure out. There's a paint mixer panel on the right for blending paints to make colours, and the last used brushes are listed in a column on the left next to the tools. However, the icon used for the brush bugged me a little. Often the size of the brush drawn didn't match the circle. When one brush showed nothing at all I realised the tablet wasn't properly configured for Painter Essentials, but after calibration it still didn't always match what I expected.

For sketching, the lines are smooth and follow the curves I make. The resulting pencil lines were too pixellated for my tastes. When compared to the default pencil in ArtRage or Sketchbook Pro, Painter Essentials is somewhat ugly.

The deal breaker with Corel Painter Essentials 4 was that my quick demo was plagued with glitches. Sometimes a phantom brush icon would be left on the screen, and many times the extendable brush window would not be selectable or retractable. The experience just did not feel as seamless and polished as I expect from a commercial art program.

In fairness to Painter Essentials 4, this was a whirlwind review. But I just did not get a good vibe from using the pencil tools. I'll give Corel a pass this time.

Name: GIMP 2.6 (The GNU Image Manipulation Program)
Organisation: The GIMP Team
Platforms: Linux, Windows, Mac
Brief Description: Open source image manipulation and raster editor
Demo Restrictions: Not Applicable
Cost: Free

Ah, the GIMP. This is the most popular open source, free digital editor out there today; the Linux user's replacement for Photoshop. Some might argue this, but at least you can't beat the price. I've had GIMP 2.4 installed for a while, but this quick test was an excuse to upgrade to the latest version (2.6).

The interface is mostly that floating tool panel on the right in a separate window, coloured in what I like to think of as "Linux Medium Grey". GIMP does its best to remind you that you are using a program designed for Linux. On the Mac, it is based on X and thus runs in X11, which means you won't get the Mac standard of having the menu in the top bar. This isn't actually that bad when you get used to it, but GIMP also goes out of its way to retain its own unique look and feel. In this sense it is somewhat like ArtRage, which also uses its own style, but in the GIMP's case it feels far more... well, I was going to write "utilitarian", but that means "practical rather than attractive". I'll get to that next paragraph, so I'll just say "Linux-y" instead.
The problem is the GIMP interface seems like there wasn't much thought put into how it would actually be used as a tool. I don't like the arrangement of the tools and the choice of icon shapes in the toolbox - I keep having to hover over each one to read the tooltip even though I've been using the GIMP for a while. And I don't know why every single transformation type needed its own separate icon - rotate, scale, shear, perspective and flip. It's also very annoying having the toolbox in a different window. It means every time I select a tool, I need to dab the stylus once on the image window to reselect it again before I can draw. This does not feel like a tool designed to work with graphics tablets.

The actual act of sketching with the pen is all right though; passable, but with a few niggling flaws. You still get the jaggies on circles if you go too fast (the circle on the lower left in the screenie above shows this to a degree). An annoyance is that the cursor used with the pen tool is just a typical mouse arrow with a pen icon offset against it, rather than a cross hair or some other more intuitive cursor. It's definitely usable as a sketching tool, but there are better alternatives out there.

To be fair to the GIMP, it does excel at what its namesake is. I prefer to use GIMP for image manipulation, such as cropping and resizing images to stick up on the web, like the screenshots I do in this review. For that it works quite well, although the transformation tools are a bit of a pain in the arse to use.

In all, GIMP feels and runs like a programmer's art tool, made by programmers for programmers and the sorts of image manipulation programmers want to do. Unfortunately, that doesn't translate to a natural art experience when you want to get in touch with your creative side. To sum up, you might as well get the GIMP (it's free!), but you'll probably need other tools for your art needs.

Name: Inkscape 0.46
Organisation: www.inkscape.org
Platforms: Linux, Windows, Mac
Brief Description: Open source SVG editor (vector graphics editor)
Demo Restrictions: Not Applicable
Cost: Free

Anyone who has been reading my journal knows that I love Inkscape. It's my favourite vector editor. Actually scratch that: my favourite art software in general, even when compared with more costly alternatives. This might just be because it is a rare beast: a complicated open source application with an interface that does not suck. GIMP might try its hardest to remind you it's from the world of Linux and the world of programmers who want to do things their own way, but the Inkscape people decided they might as well emulate the interface of actual usable tools in their domain (FYI, the interface was modelled off Xara). Inkscape looks clean and professional as a result. As a consequence, and given that it is free, it is ideal for beginners to vector art to pick up and learn. That's what I did, and it's why I favour vector art over raster. Even when using something like Illustrator or Flash, I prefer to do the base work in Inkscape and port it across; although to be fair that might be because I haven't put in as much time to master their interfaces. (Flash doesn't seem too bad, but Illustrator seems to suffer from some moon logic with the node tool. But I'm digressing.)

Inkscape can also be used as a sketch tool via its calligraphy option. You can get some nice smooth curves that can be used for scribbling.
My current technique before doing anything complex with vectors is to scribble out some guidelines with the calligraphy tool, much like in the screenshot above. Now that I am doing this as part of a test, I notice that drawing the curves feels a bit delayed. I don't think this delay is real, but it is a consequence of Inkscape's translation of the curves into vector form. Inkscape will highlight the current section you are drawing, so as you draw it feels a bit unnatural. Once you release, it then sets the curve, so there's a bit of a shift in appearance. You get used to it after a while, but it's a bit disconcerting if you are looking closely at what you are doing and are expecting a more natural, pen-like curve. The other issue is that sometimes if you go really fast, the curve will stop drawing.

Summary: Inkscape actually works fairly well as a vector based sketching program, but you might only want to use it as such if you are then going to build something in vectors using Inkscape. For general sketching, another tool is probably better. Note though that for vector art, you can't go wrong with downloading and trying Inkscape - it's free, after all. I also posit that for programmer art it is a better choice to pick vector over raster, as you will have a greater chance of making something pleasing to the eye. This is especially true if you don't have a tablet - vectors work well with the mouse, raster in general does not.

My general conclusion is that ArtRage offers me the best bang for buck as a sketching program and for digital art improvement. I've bought myself a license for ArtRage 2.5 Full, and I'll see what I can do with it when practising the basics.

Note: I left out Adobe Photoshop from my comparison list. I actually have a license for Adobe Photoshop CS3 and need to learn to use it too. However, I feel the interface for Photoshop is a bit overly daunting for learning the basics. It seems well suited for touch-up work, but until I feel more like an artist I want to stick with something more simple.

If you're still comparing, Sumo Paint just went 1.0. I'd be interested in seeing how it fares compared to the rest of the offerings. I played with it a bit. It's not Photoshop, but it's danged impressive for a web app.

re: Inkscape. How do you stand the palette? I tried to use Inkscape for a bit and could not get over the palette that appears to be stuck to the bottom of the screen -- when my window is 1900px across, so is that damn palette. This is not useful! Or maybe I'm an idiot and missed some kind of detachable palette panel option.

Quote: Original post by dbaumgart: re: Inkscape. How do you stand the palette? [...]

I've got used to it, I guess. [smile] As for a detachable palette panel option, it's not obvious. The palette is fixed, but there's a swatches dialog (Ctrl+Shift+W) that does the same thing. You can dock it on the side of the window if you want and then hide the palette at the bottom.

Quote: Original post by johnhattan: If you're still comparing, Sumo Paint just went 1.0. [...]

It is pretty impressive for a web app.
I'm not that fond of them as replacements for offline apps. My lightning fast impression is that it has two killer flaws. It's too sluggish: a circle on the tablet will look like a dodecahedron. And the keyboard shortcuts don't work on my Mac, so I have to select "Undo" manually from the menu instead of using one of the mapped buttons on the tablet. So it won't work with my workflow.
# Jostling the unreal in Oxford

So wrote Philip Pullman, author of The Golden Compass and its sequels. In the series, a girl wanders from the Oxford in another world to the Oxford in ours. I've been honored to wander Oxford this fall. Visiting Oscar Dahlsten and Jon Barrett, I've been moonlighting in Vlatko Vedral's QI group. We're interweaving 21st-century knowledge about electrons and information with a Victorian fixation on energy and engines. This research program, quantum thermodynamics, should open a window onto our world.

A new world. At least, a world new to the author. To study our world from another angle, Oxford researchers are jostling the unreal. Oscar, Jon, Andrew Garner, and others are studying generalized probabilistic theories, or GPTs. What's a specific probabilistic theory, let alone a generalized one? In everyday, classical contexts, probabilities combine according to rules you know. Suppose you have a 90% chance of arriving in London-Heathrow Airport at 7:30 AM next Sunday. Suppose that, if you arrive in Heathrow at 7:30 AM, you'll have a 70% chance of catching the 8:05 AM bus to Oxford. You have a probability 0.9 * 0.7 = 0.63 of arriving in Heathrow at 7:30 and catching the 8:05 bus.

Why 0.9 * 0.7? Why not $0.9^{0.7}$, or 0.9/(2 * 0.7)? How might probabilities combine, GPT researchers ask, and why do they combine as they do?

Not that, in GPTs, probabilities combine as in 0.9/(2 * 0.7). Consider the 0.9/(2 * 0.7) plucked from a daydream inspired by this City of Dreaming Spires. But probabilities do combine in ways we wouldn't expect. By entangling two particles, separating them, and measuring one, you immediately change the probability that a measurement of Particle 2 yields some outcome. John Bell explored, and experimentalists have checked, statistics generated by entanglement. These statistics disobey rules that govern Heathrow-and-bus statistics. As entanglement statistics disobey those rules, so do the effects of quantum phenomena like discord, negative Wigner functions, and weak measurements. Quantum theory, and its contrast with classicality, forces us to reconsider probability.

# Polarizer: Rise of the Efficiency

How should a visitor to Zürich spend her weekend? Launch this question at a Swiss lunchtable, and you split diners into two camps. To take advantage of Zürich, some say, visit Geneva, Lucerne, or another spot outside Zürich. Other locals suggest museums, the lake, and the 19th-century ETH building.

The 19th-century ETH building

ETH, short for a German name I've never pronounced, is the polytechnic from which Einstein graduated. The polytechnic houses a quantum-information (QI) theory group that's pioneering ideas I've blogged about: single-shot information, epsilonification, and small-scale thermodynamics. While visiting the group this August, I triggered an avalanche of tourism advice. Caught between two camps, I chose Option Three: contemplate polar codes.

Polar codes compress information into the smallest space possible. Imagine you write a message (say, a Zürich travel guide) and want to encode it in the fewest possible symbols (so it fits in my camera bag). The longer the message, the fewer encoding symbols you need per encoded symbol: the more punch each code letter can pack. As the message grows, the encoding-to-encoded ratio decreases. The lowest possible ratio is a number, represented by H, called the Shannon entropy. So established Claude E. Shannon in 1948. But Shannon didn't know how to code at efficiency H. Not for 51 years did we know.
I learned how, just before that weekend. ETH student David Sutter walked me through polar codes as though down Zürich's Bahnhofstrasse.

The Bahnhofstrasse, one of Zürich's trendiest streets, early on a Sunday.

Say you're encoding n copies of a random variable. When I say, "random variable," think, "character in the travel guide." Just as each character is one of 26 letters, each variable has one of many possible values. Suppose the variables are independent and identically distributed. Even if you know some variables' values, you can't guess others'. Cryptoquote players might object that we can infer unknown letters from known ones. For example, a three-letter word that begins with "th" likely ends with "e." But our message lacks patterns.

Think of the variables as diners at my lunchtable. Asking how to fill a weekend in Zürich (and splitting the diners), I resembled the polarizer. The polarizer is a mathematical object that sounds like an Arnold Schwarzenegger film and acts on the variables. Just as some diners pointed me outside Zürich, the polarizer gives some variables one property. Just as some diners pointed me to within Zürich, the polarizer gives some variables another property. Just as I pointed myself at polar codes, the polarizer gives some variables a third property.

These properties involve entropy. Entropy quantifies uncertainty about a variable's value, about which of the 26 letters a character represents. Even if you know the early variables' values, you can't guess the later variables'. But we can guess some polarized variables' values. Call the first polarized variable u1, the second u2, etc. If we can guess the value of some ui, that ui has low entropy. If we can't guess the value, ui has high entropy. The Nicole-esque variables have entropies like the earnings of Terminator Salvation: noteworthy but not chart-topping.

To recap: We want to squeeze a message into the tiniest space possible. Even if we know early variables' values, we can't infer later variables'. Applying the polarizer, we split the variables into low-, high-, and middling-entropy flocks. We can guess the value of each low-entropy ui, if we know the foregoing uh's.

Almost finished! In your camera-size travel guide, transcribe the high-entropy ui's. These ui's suggest the values of the low-entropy ui's. When you want to decode the guide, guess the low-entropy ui's. Then reverse the polarizer to reconstruct much of the original text. The longer the original travel guide, the fewer errors you make while decoding, and the smaller the ratio of the encoded guide's length to the original guide's length. That ratio shrinks, as the guide's length grows, to H. You've compressed a message maximally efficiently. As the Swiss say: Glückwünsche.

How does compression relate to QI? Quantum states form messages. Polar codes, ETH scientists have shown, compress quantum messages maximally efficiently. Researchers are exploring decoding strategies and relationships among (quantum) polar codes. With their help, Shannon-coded travel guides might fit not only in my camera bag, but also on the tip of my water bottle.

Should you need a Zürich travel guide, I recommend Grossmünster Church. Not only does the name fulfill your daily dose of umlauts. Not only did Ulrich Zwingli channel the Protestant Reformation into Switzerland there. Climbing a church tower affords a panorama of Zürich. After oohing over the hills and ahhing over the lake, you can shift your gaze toward ETH.
The worldview being built there bewitches as much as the vista from any tower.

A tower with a view.

With gratitude to ETH's QI-theory group (particularly to Renato Renner) for its hospitality. And for its travel advice. With gratitude to David Sutter for his explanations and patience.

The author and her neue Freunde.

# Can a game teach kids quantum mechanics?

Five months ago, I received an email and then a phone call from Google's Creative Lab Executive Producer, Lorraine Yurshansky. Lo, as she prefers to be called, is not your average thirty-year-old. She has produced award-winning short films like Peter at the End (starring Napoleon Dynamite, aka Jon Heder), launched the wildly popular Maker Camp on Google+ and had time to run a couple of New York marathons as a warm-up to all of that. So why was she interested in talking to a quantum physicist? You may remember reading about Google's recent collaboration with NASA and D-Wave, on using NASA's supercomputing facilities along with a D-Wave Two machine to solve optimization problems relevant to both Google (Glass, for example) and NASA (analysis of massive data sets). It was natural for Google, then, to want to promote this new collaboration through a short video about quantum computers. The video appeared last week on Google's YouTube channel: This is a very exciting collaboration in my view. Google has opened its doors to quantum computation and this has some powerful consequences. And it is all because of D-Wave. But, let me put my perspective in context, before Scott Aaronson unleashes the hounds of BQP on me. Two years ago, together with Science magazine's 2010 Breakthrough of the Year winner, Aaron O'Connell, we decided to ask Google Ventures for $10,000,000 to start a quantum computing company based on technology Aaron had developed as a graduate student in John Martinis's group at UCSB. The idea we pitched was that a hand-picked team of top experimentalists and theorists from around the world would prototype new designs to achieve longer coherence times and greater connectivity between superconducting qubits, faster than in any academic environment. Google didn't bite. At the time, I thought the reason behind the rejection was this: Google wants a real quantum computer now, not just a 10-year plan of how to make one based on superconducting X-mon qubits that may or may not work. I was partially wrong. The reason for the rejection was not a lack of proof that our efforts would pay off eventually – it was a lack of any prototype on which Google could run algorithms relevant to their work. In other words, Aaron and I didn't have something that Google could use right away. But D-Wave did, and Google had already been dating D-Wave One for at least three years before marrying D-Wave Two this May. Quantum computation has much to offer Google, so I am excited to see this relationship blossom (whether it be D-Wave or Pivit Inc that builds the first quantum computer). Which brings me back to that phone call five months ago…

Lorraine: Hi Spiro. Have you heard of Google's collaboration with NASA on the new Quantum Artificial Intelligence Lab?
Me: Yes. It is all over the news!
Lo: Indeed. Can you help us design a mod for Minecraft to get kids excited about quantum mechanics and quantum computers?
Me: Minecraft? What is Minecraft? Is it like Warcraft or Starcraft?
Lo: (Omg, he doesn't know Minecraft!?! How old is this guy?) Ahh, yeah, it is a game where you build cool structures by mining different kinds of blocks in this sandbox world.
It is popular with kids.
Me: Oh, okay. Let me check out the game and see what I can come up with.

After looking at the game I realized three things:
1. The game has a fan base in the tens of millions.
2. There is an annual convention (Minecon) devoted to this game alone.
3. I had no idea how to incorporate quantum mechanics within Minecraft.

Lo and I decided that it would be better to bring in some outside help if we were to design a new mod for Minecraft. Enter E-Line Media and TeacherGaming, two companies dedicated to making games which focus on balancing the educational aspect with gameplay (which influences how addictive the game is). Over the next three months, producers, writers, game designers and coder-extraordinaire Dan200 came together to create a mod for Minecraft. But, we quickly came to a crossroads: Make a quantum simulator based on Dan200's popular ComputerCraft mod, or focus on gameplay and a high-level representation of quantum mechanics within Minecraft? The answer was not so easy at first, especially because I kept pushing for more authenticity (I asked Dan200 to create Hadamard and CNOT gates, but thankfully he and Scot Bayless – a legend in the gaming world – ignored me.) In the end, I would like to think that we went with the best of both worlds, given the time constraints we were operating under (a group of us are attending Minecon 2013 to showcase the new mod in two weeks) and the young audience we are trying to engage. For example, we decided that to prepare a pair of entangled qubits within Minecraft, you would use the Essence of Entanglement, an object crafted using the Essence of Superposition (Hadamard gate, yay!) and Quantum Dust placed in a CNOT configuration on a crafting table (don't ask for more details). And when it came to Quantum Teleportation within the game, two entangled quantum computers would need to be placed at different parts of the world, each one with four surrounding pylons representing an encoding/decoding mechanism. Of course, on top of each pylon made of obsidian (and its far-away partner), you would need to place a crystal, as the required classical side-channel. As an authorized quantum mechanic, I allowed myself to bend quantum mechanics, but I could not bring myself to mess with Special Relativity. The mod launched two days ago, so I am not yet sure how successful it will be. All I know is that the team behind its development is full of superstars, dedicated to making sure that John Preskill wins this bet (50 years from now). The plan for the future is to upload a variety of posts and educational resources on qcraft.org discussing the science behind the high-level concepts presented within the game, at a level that middle-schoolers can appreciate. So, if you play Minecraft (or you have kids over the age of 10), download qCraft now and start building. It's a free addition to Minecraft.

# The cost and yield of moving from (quantum) state to (quantum) state

The countdown had begun. In ten days, I'd move from Florida, where I'd spent the summer with family, to Caltech. Unfolded boxes leaned against my dresser, and suitcases yawned on the floor. I was working on a paper. Even if I'd turned around from my desk, I wouldn't have seen the stacked books and folded sheets. I'd have seen Lorenz curves, because I'd drawn Lorenz curves all week, and the curves seemed imprinted on my eyeballs. Using Lorenz curves, we illustrate how much we know about a quantum state.
Say you have an electron, you'll measure it using a magnet, and you can't predict any measurement's outcome. Whether you orient the magnet up-and-down, left-to-right, etc., you haven't a clue what number you'll read out. We represent this electron's state by a straight line from (0, 0) to (1, 1). Say you know the electron's state. Say you know that, if you orient the magnet up-and-down, you'll read out +1. This state, we call "pure." We represent it by a tented curve. The more you know about a state, the more the state's Lorenz curve deviates from the straight line. If Curve A fails to dip below Curve B, we know at least as much about State A as about State B. We can transform State A into State B by manipulating and/or discarding information. By the time I'd drawn those figures, I'd listed the items that needed packing. A coauthor had moved from North America to Europe around the same time. If he could hop continents without impeding the paper, I could hop states. I unzipped the suitcases, packed a box, and returned to my desk. Say Curve A dips below Curve B. We know too little about State A to transform it into State B. But we might combine State A with a state we know lots about. The latter state, C, might be pure. We have so much information about A + C that the amalgam can turn into B. What's the least amount of information we need about C to ensure that A + C can turn into B? That number, we call the "cost of transforming State A into State B." We call it that usually. But late in the evening, after I'd miscalculated two transformation costs and deleted four curves, days before my flight, I didn't type the cost's name into emails to coauthors. I typed "the cost of turning A into B" or "the cost of moving from state to state."

# The million dollar conjecture you've never heard of…

Curating a blog like this one and writing about imaginary stuff like Fermat's Lost Theorem means that you get the occasional comment of the form: I have a really short proof of a famous open problem in math. Can you check it for me? Usually, the answer is no. But, about a week ago, a reader of the blog who had caught an omission in a proof contained within one of my previous posts asked me to do just that: Check out a short proof of Beal's Conjecture. Many of you probably haven't heard of billionaire Mr. Beal and his $1,000,000 conjecture, so here it is: Let $a,b,c$ and $x,y,z > 2$ be positive integers satisfying $a^x+b^y=c^z$. Then, $\gcd(a,b,c) > 1$; that is, the numbers $a,b,c$ have a common factor. After reading the "short proof" of the conjecture, I realized that this was a pretty cool conjecture! Also, the short proof was wrong, though the ideas within were non-trivial. But, partial progress had been made by others, so I thought I would take a crack at it on the 10 hour flight from Athens to Philadelphia. In particular, I convinced myself that if I could prove the conjecture for all even exponents $x,y,z$, then I could claim half the prize. Well, I didn't quite get there, but I made some progress using knowledge found in these two blog posts: Redemption: Part I and Fermat's Lost Theorem. In particular, one can show that the conjecture holds true for $x=y=2n$ and $z = 2k$, for $n \ge 3, k \ge 1$. Moreover, the general case of even exponents can be reduced to the cases $x=y=p \ge 3$ and $y=z=q \ge 3$, for $p,q$ primes. Which makes one wonder if the general case has a similar reduction, where two of the three exponents can be assumed equal.
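As an aside, the conjecture is easy to sanity-check by brute force over tiny bases and exponents. Here is a minimal Go sketch (the search bounds are arbitrary, chosen only to keep the arithmetic safely inside uint64):

```go
package main

import "fmt"

func gcd(a, b uint64) uint64 {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

func pow(base, exp uint64) uint64 {
	r := uint64(1)
	for i := uint64(0); i < exp; i++ {
		r *= base
	}
	return r
}

func main() {
	const maxBase, maxExp = 20, 6 // tiny, arbitrary bounds
	for a := uint64(1); a <= maxBase; a++ {
		for b := uint64(1); b <= maxBase; b++ {
			for c := uint64(1); c <= maxBase; c++ {
				for x := uint64(3); x <= maxExp; x++ {
					for y := uint64(3); y <= maxExp; y++ {
						for z := uint64(3); z <= maxExp; z++ {
							// Report only coprime solutions: any hit would disprove the conjecture.
							if pow(a, x)+pow(b, y) == pow(c, z) && gcd(gcd(a, b), c) == 1 {
								fmt.Println("counterexample?", a, x, b, y, c, z)
							}
						}
					}
				}
			}
		}
	}
	fmt.Println("no coprime solutions in this tiny range")
}
```

Solutions do exist in this range, such as 3^3 + 6^3 = 3^5, but they all share a common factor, consistent with the conjecture.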
The proof is pretty trivial, since most of the heavy lifting is done by Fermat’s Last Theorem (which itself has a rather elegant, short proof I wanted to post in the margins – alas, WordPress has a no-writing-on-margins policy). Moreover, it turns out that the general case of even exponents follows from a combination of results obtained by others over the past two decades (see the Partial Results section of the Wikipedia article on the conjecture linked above – in particular, the (n,n,2) case). So why am I even bothering to write about my efforts? Because it’s math! And math equals magic. Also, in case this proof is not known and in the off chance that some of the ideas can be used in the general case. Okay, here we go… Proof. The idea is to assume that the numbers $a,b,c$ have no common factor and then reach a contradiction. We begin by noting that $a^{2m}+b^{2n}=c^{2k}$ is equivalent to $(a^m)^2+(b^n)^2=(c^k)^2$. In other words, the triplet $(a^m,b^n,c^k)$ is a Pythagorean triple (sides of a right triangle), so we must have $a^m=2rs, b^n=r^2-s^2, c^k =r^2+s^2$, for some positive integers $r,s$ with no common factors (otherwise, our assumption that $a,b,c$ have no common factor would be violated). There are two cases to consider now: Case I: $r$ is even. This implies that $2r=a_0^m$ and $s=a_1^m$, where $a=a_0\cdot a_1$ and $a_0,a_1$ have no factors in common. Moreover, since $b^n=r^2-s^2=(r+s)(r-s)$ and $r,s$ have no common factors, then $r+s,r-s$ have no common factors either (why?) Hence, $r+s = b_0^n, r-s=b_1^n$, where $b=b_0\cdot b_1$ and $b_0,b_1$ have no factors in common. But, $a_0^m = 2r = (r+s)+(r-s)=b_0^n+b_1^n$, implying that $a_0^m=b_0^n+b_1^n$, where $b_0,b_1,a_0$ have no common factors. Case II: $s$ is even. This implies that $2s=a_1^m$ and $r=a_0^m$, where $a=a_0\cdot a_1$ and $a_0,a_1$ have no factors in common. As in Case I, $r+s = b_0^n, r-s=b_1^n$, where $b=b_0\cdot b_1$ and $b_0,b_1$ have no factors in common. But, $a_1^m = 2s = (r+s)-(r-s)=b_0^n-b_1^n$, implying that $a_1^m+b_1^n=b_0^n$, where $b_0,b_1,a_1$ have no common factors. We have shown, then, that if Beal’s conjecture holds for the exponents $(x,y,z)=(n,n,m)$ and $(x,y,z)=(m,n,n)$, then it holds for $(x,y,z)=(2m,2n,2k)$, for arbitrary $k \ge 1$. As it turns out, when $m=n$, Beal’s conjecture becomes Fermat’s Last Theorem, implying that the conjecture holds for all exponents $(x,y,z)=(2n,2n,2k)$, with $n\ge 3$ and $k\ge 1$. Open Problem: Are there any solutions to $a^p+b^p= c\cdot (a+b)^q$, for $a,b,c$ positive integers and primes $p,q\ge 3$? PS: If you find a mistake in the proof above, please let everyone know in the comments. I would really appreciate it! # The complementarity (not incompatibility) of reason and rhyme Shortly after learning of the Institute for Quantum Information and Matter, I learned of its poetry. I’d been eating lunch with a fellow QI student at the Perimeter Institute for Theoretical Physics. Perimeter’s faculty includes Daniel Gottesman, who earned his PhD at what became Caltech’s IQIM. Perhaps as Daniel passed our table, I wondered whether a liberal-arts enthusiast like me could fit in at Caltech. “Have you seen Daniel Gottesman’s website?” my friend replied. “He’s written a sonnet.” He could have written equations with that quill. Digesting this news with my chicken wrap, I found the website after lunch. The sonnet concerned quantum error correction, the fixing of mistakes made during computations by quantum systems. 
After reading Daniel's sonnet, I found John Preskill's verses about Daniel. Then I found more verses of John's. To my Perimeter friend: You win. I'll fit in, no doubt. Exhibit A: the latest edition of The Quantum Times, the newsletter for the American Physical Society's QI group. On page 10, my enthusiasm for QI bubbles over into verse. Don't worry if you haven't heard all the terms in the poem. Consider them guidebook entries, landmarks to visit during a Wikipedia trek. If you know the jargon, listen to it with a newcomer's ear. Does anyone other than me empathize with frustrated lattices? Or describe speeches accidentally as "monotonic" instead of as "monotonous"? Hearing jargon outside its natural habitat highlights how not to explain research to nonexperts. Examining names for mathematical objects can reveal properties that we never realized those objects had. Inviting us to poke fun at ourselves, the confrontation of jargon sprinkles whimsy onto the meringue of physics. No matter your familiarity with physics or poetry: Enjoy. And fifty points if you persuade Physical Review Letters to publish this poem's sequel.

Quantum information
By Nicole Yunger Halpern

If "CHSH" rings a bell,
you know QI's fared, lately, well.
Such promise does this field portend!
In Neumark fashion, let's extend
this quantum-information spring:
dilation, growth, this taking wing.
We span the space of physics types
from spin to hypersurface hype,
from neutron-beam experiment
to Bohm and Einstein's discontent,
from records of a photon's path
to algebra and other math
that's more abstract and less applied—
of platforms' details, purified.
We function as a refuge, too,
if lattices can frustrate you.
If gravity has got your goat,
Forget regimes renormalized;
our states are (mostly) unit-sized.
Velocities stay mostly fixed;
results, at worst, look somewhat mixed.
Though factions I do not condone,
the action that most stirs my bones
is more a spook than Popov ghosts;1
more at-a-distance, less quark-close.
This field's a tot—cacophonous—
like cosine, not monotonous.
Cacophony enlivens thought:
We've learned from noise what discord's not.
So take a chance on wave collapse;
in place of "part" and "piece," say "bit";
employ, as yardstick, Hilbert-Schmidt;
choose quantum as your nesting place,
of all the fields in physics space.

1 With apologies to Ludvig Faddeev.

# The Most Awesome Animation About Quantum Computers You Will Ever See

by Jorge Cham

You might think the title is a little exaggerated, but if there's one thing I've learned from Theoretical Physicists so far, it's to be bold with my conjectures about reality. Welcome to the second installment of our series of animations about Quantum Information! After an auspicious start describing doing the impossible, this week we take a step back to talk in general terms about what makes the Quantum World different and how these differences can be used to build Quantum Computers. In this video, I interviewed John Preskill and Spiros Michalakis. John is the co-Director of the Institute for Quantum Information and Matter. He's known for many things, including making (and winning) bets with Stephen Hawking. Spiros hails from Greece, and probably never thought he'd see himself drawn in a Faustian devil outfit in the name of science (although, he's so motivated about outreach, he'd probably do it). In preparation to make this video, I thought I'd do what any serious writer would do to exhaustively research a complex topic like this: read the Wikipedia page and call it a day.
But then, while visiting the local library with my son, I stumbled upon a small section of books about Quantum Physics aimed at a general audience. I thought, “Great! I’ll read these books and learn that way!” When I opened the books, though, they were mostly all text. I’m not against text, but when you’re a busy* cartoonist on a deadline trying to learn one of the most complex topics humans have ever devised, a few figures would help. On the other hand, fewer graphics mean more job security for busy cartoonists, so I can’t really complain. (*=Not really). In particular, I started to read “The Quantum Story: A History in 40 Moments” by Jim Baggott. First, telling a story in 40 moments sounds a lot like telling a story with comics, and second, I thought it would be great to learn about these concepts from the point of view of how they came up with them. So, I eagerly opened the book and here is what it says in the Preface: “Nobody really understands how Quantum Theory actually works.” “Niels Bohr claimed that anybody who is not shocked by the theory has not understood it… Richard Feynman went further: he claimed that nobody understands it.” One page in, and it’s already telling me to give up. It’s a fascinating read, I highly recommend the book. Baggott makes the claim that, “The reality of Scientific Endeavor is profoundly messy, often illogical, deeply emotional, and driven by the individual personalities involved as they sleepwalk their way to a temporary scientific truth.” I’m glad this history was recorded. I hope in a way that these videos help record a quantum of the developing story, as we humans try to create pockets of quantum weirdness that can scale up. As John says in the video, it is very exciting. Now, if you’ll excuse me, I need to sleepwalk back to bed. Watch the second installment of this series: Jorge Cham is the creator of Piled Higher and Deeper (www.phdcomics.com). CREDITS: Featuring: John Preskill and Spiros Michalakis Produced in Partnership with the Institute for Quantum Information and Matter (http://iqim.caltech.edu) at Caltech with funding provided by the National Science Foundation. Animation Assistance: Meg Rosenburg Transcription: Noel Dilworth # Steampunk quantum A dark-haired man leans over a marble balustrade. In the ballroom below, his assistants tinker with animatronic elephants that trumpet and with potions for improving black-and-white photographs. The man is an inventor near the turn of the 20th century. Cape swirling about him, he watches technology wed fantasy. Welcome to the steampunk genre. A stew of science fiction and Victorianism, steampunk has invaded literature, film, and the Wall Street Journal. A few years after James Watt improved the steam engine, protagonists build animatronics, clone cats, and time-travel. At sci-fi conventions, top hats and blast goggles distinguish steampunkers from superheroes. The closest the author has come to dressing steampunk. I’ve never read steampunk other than H. G. Wells’s The Time Machine—and other than the scene recapped above. The scene features in The Wolsenberg Clock, a novel by Canadian poet Jay Ruzesky. The novel caught my eye at an Ontario library. In Ontario, I began researching the intersection of QI with thermodynamics. Thermodynamics is the study of energy, efficiency, and entropy. Entropy quantifies uncertainty about a system’s small-scale properties, given large-scale properties. Consider a room of air molecules. 
Knowing that the room has a temperature of 75°F, you don’t know whether some molecule is skimming the floor, poking you in the eye, or elsewhere. Ambiguities in molecules’ positions and momenta endow the gas with entropy. Whereas entropy suggests lack of control, work is energy that accomplishes tasks. # Surviving in Extreme Conditions. Sometimes in order to do one thing thoroughly you have to first master many other things, even those which may seem very unrelated to your focus. In the end, everything weaves itself together very elegantly and you find yourself wondering how you got through such an incredible sequence of coincidences to where you are now. I am a rising first-year PhD student in Astrophysics at Caltech. I just completed my Bachelor’s in Physics also from Caltech last June. My Caltech journey has already led me to a number of unexpected places. New in Astrophysics, I am very excited to see as many observatories, labs and manufacturing locations as I can. I just moved out of the dorms and into the first place that is my very own home (which means I pay my own rent now). All of my windows have a very clear view of the radio tower-adorned Mt. Wilson. This morning I woke up and looked at the Mt. Wilson horizon and decided to drive up there. I left my morning ballet class early to make time for the drive. The road to the observatory is not simple. HWY 2 is a pretty serious mountain road and accidents happen on it regularly. This is the first thing: to have access to observatories, I need to be able to drive there safely and reliably. Fortunately I love driving, especially athletic mountain driving, so I am looking for almost any excuse to drive to JPL, Mt. Wilson, and so on. I’ll just stop, by saying that driving is a hobby for me and I see it as a sport, a science, and an art. The first portion of the 2 is like any normal mountain road with speeding locals, terrifying cyclists and daredevil motorcyclists. The views become more and more breathtaking as you gain elevation, but the driver really shouldn’t be getting any of these views except for the portion that fits into the car’s field of view. The road is demanding, with turns and hills, all along a steep and curving mountainside. However, this part is a piece of cake compared to the second portion. The turnoff to the observatory itself opens onto a less-maintained road speckled with enthusiastic hikers and with nicely sharp 6-inch pebbles scattered around the road. As much as I was enjoying taking smooth turns and avoiding the brakes, I went very slow on this section to drive around the random rocks on the road. I finally got to the top where I could take in the view in peace. The first thing visitors see is the Cosmic Cafe. It has a balcony going all around the cafe with a fascinating view when there is no smog or fog. Last April, Caltech had its undergraduate student Formal here. We dined at this cafe and had a dance platform nearby. Driving up here, I could not help thinking how risky this was: 11 high-rise buses took a large portion of the Caltech undergraduate student body up to the top of this mountain in fog so dense we could barely see the bus ahead of us. The bus drivers were saints. Hiking or running shoes are the best shoes to wear here, so I cannot imagine how we came here in suits, dress shoes, tight dresses, and merciless heels. Well, Caltech students have many talents. Second thing: being an active person in the Tech community takes you to some curious places on interesting occasions. 
Some Caltech undergraduates on Mt. Wilson (I’m purple). I parked at the first available lot, right in front of the cafe and near some large radio towers. When trying to lock my car, I had some trouble. I have an electronic key which operates as a remote outside the car. The car would not react to my key and would not lock. I tried a few more times and finally it locked. I figured the battery in the key was dying, but that didn’t seem right. If any battery were dying, it would be the battery in the spare key that I am not using. # This single-shot life The night before defending my Masters thesis, I ran out of shampoo. I ran out late enough that I wouldn’t defend from beneath a mop like Jack Sparrow’s; but, belonging to the Luxuriant Flowing-Hair Club for Scientists (technically, if not officially), I’d have to visit Shopper’s Drug Mart. The author’s unofficially Luxuriant Flowing Scientist Hair Before visiting Shopper’s Drug Mart, I had to defend my thesis. The thesis, as explained elsewhere, concerns epsilons, the mathematical equivalents of seed pearls. The thesis also concerns single-shot information theory. Ordinary information theory emerged in 1948, midwifed by American engineer Claude E. Shannon. Shannon calculated how efficiently we can pack information into symbols when encoding long messages. Consider encoding this article in the fewest possible symbols. Because “the” appears many times, you might represent “the” by one symbol. Longer strings of symbols suit misfits like “luxuriant” and “oobleck.” The longer the article, the fewer encoding symbols you need per encoded word. The encoding-to-encoded ratio decreases, toward a number called the Shannon entropy, as the message grows infinitely long. Claude Shannon We don’t send infinitely long messages, excepting teenagers during phone conversations. How efficiently can we encode just one article or sentence? The answer involves single-shot information theory, or—to those stuffing long messages into the shortest possible emails to busy colleagues—“1-shot info.” Pioneered within the past few years, single-shot theory concerns short messages and single trials, the Twitter to Shannon’s epic. Like articles, quantum states can form messages. Hence single-shot theory blended with quantum information in my thesis.
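A central quantity in single-shot information theory is the min-entropy: the number of near-uniform bits guaranteed from one trial, rather than on average over many. The following Go sketch is an illustration only, with a made-up distribution; the thesis's actual machinery (e.g. smoothed entropies, the epsilons mentioned above) is more refined:

```go
package main

import (
	"fmt"
	"math"
)

// minEntropy returns H_min(p) = -log2(max_i p_i): the number of
// near-uniform bits guaranteed from a SINGLE sample, a (non-smooth)
// single-shot counterpart to the asymptotic Shannon entropy.
func minEntropy(probs []float64) float64 {
	pMax := 0.0
	for _, p := range probs {
		if p > pMax {
			pMax = p
		}
	}
	return -math.Log2(pMax)
}

func main() {
	p := []float64{0.5, 0.25, 0.125, 0.125}
	// The Shannon entropy of p is 1.750 bits; the single-shot guarantee is lower:
	fmt.Printf("H_min = %.3f bits\n", minEntropy(p)) // H_min = 1.000
}
```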
Which of the following is correct?
1. B-trees are for storing data on disk and B$^+$ trees are for main memory.
2. Range queries are faster on B$^+$ trees.
3. B-trees are for primary indexes and B$^+$ trees are for secondary indexes.
4. The height of a B$^+$ tree is independent of the number of records.

How is option 4 incorrect? I think it is also true.

1. False. Both are stored on disk.
2. True. By scanning the leaf level linearly in a $B^+$ tree, we can tell whether a key is present or not; in a $B$ tree we may have to traverse the whole tree. http://home.iitj.ac.in/~ramana/ch10-storage-2.pdf
3. False. Both $B$ trees and $B^+$ trees are used as dynamic multilevel indexes.
4. False. The height depends on the number of records and also on the maximum number of keys in each node (the order of the tree).

On page 21 of the PDF at the link above, it says "A B+-tree can have less levels (or higher capacity of search values) than the corresponding B-tree". Is it true? I hadn't read that earlier. I also can't see how it can be reasoned.

In a B tree there are only data pointers, so every search has to start from the root. In a B+ tree there are data pointers and record pointers, so once a search reaches the leaf level, it can continue along the leaves only. This is useful because in a B+ tree all keys are present in the leaves.

> In a B tree there are only data pointers, so every search has to start from the root. In a B+ tree there are data pointers and record pointers.

I'm not getting the difference between a data pointer and a record pointer. Also, shouldn't all searches start from the root in both? Or can we start a search from an arbitrary interior node in a B+ tree? This also does not explain why a B+ tree's height tends to be less than the corresponding B tree's.

Since option 1 is false, which data structures are used for main memory?

Binary trees, linked lists, queues, arrays...

@srestha: In a B tree, each node holds tree pointers + record pointers + key values; in a B+ tree, record pointers appear only at the last level, and internal nodes hold only tree pointers and key values?

Can you tell me why B/B+ trees are preferred for disk while binary trees, linked lists, etc. are preferred for main memory?

BSTs are used for "searching" in RAM and B-trees are used for "searching" on disk. "NOT STORING"

@Gate Fever: What is the logic behind this statement?

The leaves (the bottom-most index blocks) of the B+ tree are often linked to one another in a linked list; this makes range queries or an (ordered) iteration through the blocks simpler and more efficient (though the aforementioned upper bound can be achieved even without this addition). This does not substantially increase space consumption or maintenance on the tree. This illustrates one of the significant advantages of a B+tree over a B-tree; in a B-tree, since not all keys are present in the leaves, such an ordered linked list cannot be constructed.
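Why option 2 holds: the linked leaf level lets a range query descend from the root once and then walk sibling leaves in key order. A minimal, illustrative Go sketch of that leaf walk (the node layout is simplified and the root-to-leaf descent is omitted):

```go
package main

import "fmt"

// leaf is a leaf node of a simplified B+ tree: keys live only in leaves,
// and leaves are chained in key order via next.
type leaf struct {
	keys []int
	next *leaf
}

// rangeQuery assumes start is the leaf where the lower bound lands
// (found by a single root-to-leaf descent, not shown here).
func rangeQuery(start *leaf, lo, hi int) []int {
	var out []int
	for l := start; l != nil; l = l.next {
		for _, k := range l.keys {
			if k > hi {
				return out // keys are sorted, so we can stop early
			}
			if k >= lo {
				out = append(out, k)
			}
		}
	}
	return out
}

func main() {
	l3 := &leaf{keys: []int{40, 50}}
	l2 := &leaf{keys: []int{20, 30}, next: l3}
	l1 := &leaf{keys: []int{5, 10}, next: l2}
	fmt.Println(rangeQuery(l1, 10, 40)) // [10 20 30 40]
}
```

In a B tree, keys in the queried range are scattered across internal nodes as well, so no such ordered leaf chain exists and the traversal repeatedly moves up and down the tree.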
# How to remove the warnings "Font shape `OT1/cmss/m/n' in size <4> not available" and "Size substitutions with differences" in beamer?

LaTeX Font Warning: Font shape `OT1/cmss/m/n' in size <4> not available
LaTeX Font Warning: Size substitutions with differences

I am using the files from http://www.poirrier.be/~jean-etienne/info/latexbeamer/latex-beamer.tar.gz. How do I remove the above 2 warnings?

Please always include some code right in your question so that other users, including those who want to and are going to help you, can see the problem directly on here without having to go to some external web site. It'd be good if you could do that for this question as well, even though it has been answered. – doncherry Jun 1 '12 at 7:37

\usepackage{lmodern}% http://ctan.org/pkg/lm

Fonts are typically available only in certain sizes/increments. As an example, the basic article document class loads only the following sizes (from size10.clo):
• \tiny @ 5pt;
• \scriptsize @ 7pt;
• \footnotesize @ 8pt;
• \small @ 9pt;
• \normalsize @ 10pt;
• \large @ 12pt;
• \Large @ 14.4pt;
• \LARGE @ 17.28pt;
• \huge @ 20.74pt; and
• \Huge @ 24.88pt

So, requesting a 15pt font size using something like

\documentclass{article}
\begin{document}
\fontsize{15}{18}\selectfont Hello world.
\end{document}

leads to LaTeX complaining in the .log file:

LaTeX Font Warning: Font shape `OT1/cmr/m/n' in size <15> not available
(Font) size <14.4> substituted on input line 3.
...
LaTeX Font Warning: Size substitutions with differences
(Font) up to 0.6pt have occurred.

Using lmodern removes this restriction by making the fonts available at arbitrary sizes. For more on font size requirements, see Fonts at arbitrary sizes.

I know this workaround, but I also know that if you do that, math accents will be typeset very badly. You can try e.g. $\ddot u$ or $\tilde J$ – Wauzl Oct 17 '12 at 7:44

You can add \let\Tiny=\tiny just after the documentclass declaration. So, it should look something like this:

\documentclass{beamer}
\let\Tiny=\tiny
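For beamer specifically, a minimal preamble illustrating the lmodern fix might look like the following (a sketch; the frame content is made up, and the linked template may need further adjustment):

```latex
\documentclass{beamer}
\usepackage{lmodern}% scalable Latin Modern fonts: sizes such as <4> now exist
\begin{document}
\begin{frame}{Test}
  {\fontsize{4}{5}\selectfont This 4pt text no longer triggers the warning.}
\end{frame}
\end{document}
```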
# Problems of the Week

# 2018-08-20 Basic

Why does this toy always bounce back no matter how hard it's punched?

The segments dividing this triangle are parallel to the triangle's base. Which colored area is the largest?

What is the radius of this quarter-circle?

A small ball is placed on top of a bigger, heavier ball at a height of $10 \text{ m}$ above the ground. When the balls are simultaneously released and bounce, how high will the smaller ball bounce up to?
Assumptions:
• All collisions are perfectly elastic, which means no kinetic energy is lost in the collisions.
• Air resistance is negligible.

Which of the following figures has the greatest area shaded in red?
Assumptions:
• A, B, C are identical equilateral triangles. The circles of A are all congruent to each other, as are the circles of C.
• The red components are circles tangent to one another and to the triangle.
Note: If you get it wrong, don't worry! An Italian mathematician named Malfatti also got it wrong in the $19^\text{th}$ century.
Hey! I'm David, the author of the Real-World Cryptography book. I'm a crypto engineer at O(1) Labs on the Mina cryptocurrency; previously I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.

# Crypto training at Black Hat USA

posted June 2017

I'll be back in Vegas this year to give the crypto training of Black Hat. The class is not full yet, so hurry up if that is something that interests you. It will be a blend of culture, exercises and technical dives. For 2 days, students get to learn all the cool crypto attacks, dive into some of them deeply, and interact via numerous exercises.

# Noise+Strobe=Disco

posted June 2017

Noise is a protocol framework allowing you to build different lightweight TLS-like handshakes depending on your use case. Benefits are a short code size, very few dependencies, and simplicity of the security guarantees and analysis. It focuses primarily on the initial asymmetric phase of the setup of a secure channel, but does leave you with two ciphers that you can use to read and write on both sides of the connection. If you want to know more, I wrote a readable implementation, and have a tutorial video.

Strobe is a protocol framework as well, focusing on the symmetric part of the protocol. Its simplicity boils down to only using one cryptographic primitive: the duplex construction. This allows developers to benefit from an ultra-short cryptographic code base supporting their custom-made symmetric protocols as well as their different needs for cryptographic functions. Indeed, Strobe can also be used to instantiate a hash function, a key derivation function, a pseudo-random number generator, a message authentication code, an authenticated encryption with associated data cipher, etc… If you want to know more, I wrote a readable implementation and Mike Hamburg gave a talk at RWC.

Noise+Strobe=Disco. One of Noise's major characteristics is that it keeps a running hash, digesting every message and allowing every new handshake message to mix the transcript into its encryption while authenticating previous messages received and sent. Strobe works like that naturally: its duplex function absorbs every call made to the underlying primitive (the Keccak permutation), to the extent that every new operation is influenced by any operation that happened previously. These two common traits in Strobe and Noise led me to pursue a merge between the two: what if the running hash and symmetric state in Noise were simply Strobe's primitive? And what if, at the end of a handshake, Noise would just spew out two Strobe objects, each depending on the handshake transcript? I talked to Trevor Perrin about it, and his elegant suggestion for a name (Disco) and my curiosity led to an implementation of what it would look like. This is of course highly experimental. I modified the Noise specification to see how much I could remove/simplify from it, and the result is already enjoyable. I've discussed the changes on the mailing list. But simply put: the CipherState has been removed, and the SymmetricState has been replaced by calls to Strobe. This leaves us with only one object: the HandshakeState. Every symmetric algorithm has been removed (HKDF, HMAC, HASH, AEAD). The specification looks way shorter, while the Disco implementation is more than half the size of the Noise implementation.
Strobe's calls naturally absorb every operation, and it can encrypt/decrypt the handshake messages even when no shared secret has been negotiated yet (via an un-keyed duplex construction), which simplifies the corner cases where you would otherwise have to test whether a shared secret has been negotiated or not.

# Readable implementation of the Noise protocol framework

posted June 2017

I wrote an implementation of the Noise Protocol Framework. If you don't know what that is, it is a framework to create lightweight TLS-like protocols. If you do not want to use TLS because it is unnecessarily complicated, and you know what you're doing, Noise is the solution. You have different patterns for different use cases and everything is well explained for you to implement it smoothly. To learn more about Noise you can also check this screencast I shot last year: My current research includes merging this framework with the Strobe protocol framework I've talked about previously. This led me to first implement a readable and understandable version of Noise here. Note that this is highly experimental and it has not been thoroughly tested. I also had to deviate from the specification when naming things, because Golang:
• doesn't use snake_case, but Noise does.
• capitalizes function names to make them public; Noise does it for different reasons.

# SIMD instructions in Go

posted June 2017

One awesome feature of Go is cross-compilation. One limitation is that we can only choose to build for some pre-defined architectures and OSes, but we can't build per CPU model. In the previous post I was talking about C programs, where the user actually chooses the CPU model when calling Make. Go could probably have something like that, but it wouldn't be gooy. One solution is to build for every CPU model anyway, and decide later what is good to be used. So one assembly file for SSE2, one for AVX, one for AVX-512. Note that we do not need to use SSE3/SSE4 (or AVX2), as the interesting functions are contained in SSE2 (respectively AVX), which will have more support and be contained in later versions of SSE (respectively AVX) anyway. The official Blake2 implementation in Go actually uses SIMD instructions. Looking at it is a good way to see how SIMD coding works in Go. In _amd64.go, they use the builtin init() function to figure out at runtime what is supported by the host architecture:

func init() {
	useAVX2 = supportsAVX2()
	useAVX = supportsAVX()
	useSSE4 = supportsSSE4()
}

These are calls to assembly functions detecting what is supported, either via:
1. a CPUID call directly for SSE4.
2. calls to Golang's runtime library for AVX and AVX2.

In the second solution, the runtime variables seem to be undocumented and only available since go1.7; they are probably filled via CPUID calls as well. Surprisingly, the internal/cpu package already has all the necessary functions to detect flavors of SIMD. See an example of use in the bytes package. And that's it! Blake2's hashBlocks() function then dynamically decides which function to use at runtime:

func hashBlocks(h *[8]uint64, c *[2]uint64, flag uint64, blocks []byte) {
	if useAVX2 {
		hashBlocksAVX2(h, c, flag, blocks)
	} else if useAVX {
		hashBlocksAVX(h, c, flag, blocks)
	} else if useSSE4 {
		hashBlocksSSE4(h, c, flag, blocks)
	} else {
		hashBlocksGeneric(h, c, flag, blocks)
	}
}

Because Go does not have intrinsic functions for SIMD, these are implemented directly in assembly. You can look at the code in the relevant _amd64.s file.
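As a side note, the same runtime dispatch can also be written against the golang.org/x/sys/cpu package, which exposes these feature flags directly (this package postdates the post; a minimal sketch):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// Same idea as Blake2's init(): probe the host CPU once,
	// then branch to the widest supported SIMD implementation.
	switch {
	case cpu.X86.HasAVX2:
		fmt.Println("dispatch: AVX2 path")
	case cpu.X86.HasAVX:
		fmt.Println("dispatch: AVX path")
	case cpu.X86.HasSSE41:
		fmt.Println("dispatch: SSE4 path")
	default:
		fmt.Println("dispatch: generic path")
	}
}
```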
Now it's kind of tricky, because Go has invented its own assembly language (based on Plan 9) and you have to find things out the hard way. Instructions like VINSERTI128 and VPSHUFD are the SIMD instructions. MMX registers are M0...M7, SSE registers are X0...X15, AVX registers are Y0, ..., Y15. MOVDQA is called MOVO (or MOVOA) and MOVDQU is called MOVOU. Things like that. As for AVX-512, Go probably still doesn't have instructions for that, so you'll need to write the raw opcodes yourself using BYTE (like here) and as explained here.

# SIMD instructions in crypto

posted June 2017

The Keccak Code Package repository contains all of the Keccak team's constructions, including for example SHA-3, SHAKE, cSHAKE, ParallelHash, TupleHash, KMAC, Keyak, Ketje and KangarooTwelve. ParallelHash and KangarooTwelve are two hash functions built on the same basis as SHA-3, but that can be sped up with parallelization. This makes these two hash functions really interesting, especially when hashing big files.

## MMX, SSE, SSE2, AVX, AVX2, AVX-512

To support parallelization, a common way is to use SIMD instructions, a set of instructions generally available on any modern 64-bit architecture that allows computation on large blocks of data (64, 128, 256 or 512 bits). Using them to operate on blocks of data is what we often call vector/array programming; the compiler will sometimes optimize your code by automatically using these large SIMD registers. SIMD instructions have been around since the '70s, and have become really common. This is one of the reasons why image, sound, video and games all work so well nowadays. Generally, if you're on a 64-bit architecture your CPU will support SIMD instructions. There are several versions of these instructions. On Intel's side these are called MMX, SSE and AVX instructions. AMD has SSE and AVX instructions as well. On ARM these are called NEON instructions. MMX allows you to operate on 64-bit registers at once (called MM registers). SSE, SSE2, SSE3 and SSE4 all allow you to use 128-bit registers (XMM registers). AVX and AVX2 introduced 256-bit registers (YMM registers) and the more recent AVX-512 supports 512-bit registers (ZMM registers).

## How To Compile?

OK, looking back at the Keccak Code Package, I need to choose what architecture to compile my Keccak code with to take advantage of the parallelization. I have a MacBook Pro, but have no idea what version of SSE or AVX my CPU model supports. One way to find out is to use www.everymac.com → I have an Intel Broadwell CPU, which seems to support AVX2! Looking at the list of architectures supported by the Keccak Code Package I see Haswell, which is of the same family and supports AVX2 as well. Compiling with it, I can run my KangarooTwelve code with AVX2 support, which parallelizes four runs of the Keccak permutation at the same time using these 256-bit registers! In more detail, the Keccak permutation goes through several rounds (12 for KangarooTwelve, 24 for ParallelHash) that need to serially operate on a succession of 64-bit lanes. AVX's 256-bit registers (no need for AVX2) allow four 64-bit lanes to be operated on at the same time. That's effectively four Keccak permutations running in parallel.

## Intrinsic Instructions

Intrinsic functions are functions you can use directly in code, and that are later recognized and handled by the compiler. Intel has an awesome guide on these here. You just need to find out which function to use, which is pretty straightforward looking at the documentation.
In C, if you're compiling with GCC on an Intel/AMD architecture, you can start using intrinsic functions for SIMD by including x86intrin.h. Or you can use this snippet to include the correct file for different combinations of compilers and architectures:

#if defined(_MSC_VER)
/* Microsoft C/C++-compatible compiler */
#include <intrin.h>
#elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
/* GCC-compatible compiler, targeting x86/x86-64 */
#include <x86intrin.h>
#elif defined(__GNUC__) && defined(__ARM_NEON__)
/* GCC-compatible compiler, targeting ARM with NEON */
#include <arm_neon.h>
#elif defined(__GNUC__) && defined(__IWMMXT__)
/* GCC-compatible compiler, targeting ARM with WMMX */
#include <mmintrin.h>
#elif (defined(__GNUC__) || defined(__xlC__)) && (defined(__VEC__) || defined(__ALTIVEC__))
/* XLC or GCC-compatible compiler, targeting PowerPC with VMX/VSX */
#include <altivec.h>
#elif defined(__GNUC__) && defined(__SPE__)
/* GCC-compatible compiler, targeting PowerPC with SPE */
#include <spe.h>
#endif

If we look at the reference implementation of KangarooTwelve in C, we can see how they decided to use the AVX2 instructions. They first define a __m256i variable, which will hold 4 lanes at the same time:

typedef __m256i V256;

They then declare a bunch of them; some will be used as temporary registers. They then use unrolling to write the 12 rounds of Keccak, which are defined via the relevant AVX2 instructions:

#define ANDnu256(a, b) _mm256_andnot_si256(a, b)
#define CONST256(a) _mm256_load_si256((const V256 *)&(a))
#define CONST256_64(a) (V256)_mm256_broadcast_sd((const double*)(&a))
#define LOAD256(a) _mm256_load_si256((const V256 *)&(a))
#define LOAD256u(a) _mm256_loadu_si256((const V256 *)&(a))
#define LOAD4_64(a, b, c, d) _mm256_set_epi64x((UINT64)(a), (UINT64)(b), (UINT64)(c), (UINT64)(d))
#define ROL64in256(d, a, o) d = _mm256_or_si256(_mm256_slli_epi64(a, o), _mm256_srli_epi64(a, 64-(o)))
#define ROL64in256_8(d, a) d = _mm256_shuffle_epi8(a, CONST256(rho8))
#define ROL64in256_56(d, a) d = _mm256_shuffle_epi8(a, CONST256(rho56))
#define STORE256(a, b) _mm256_store_si256((V256 *)&(a), b)
#define STORE256u(a, b) _mm256_storeu_si256((V256 *)&(a), b)
#define STORE2_128(ah, al, v) _mm256_storeu2_m128d((V128*)&(ah), (V128*)&(al), v)
#define XOR256(a, b) _mm256_xor_si256(a, b)
#define XOReq256(a, b) a = _mm256_xor_si256(a, b)
#define UNPACKL( a, b ) _mm256_unpacklo_epi64((a), (b))
#define UNPACKH( a, b ) _mm256_unpackhi_epi64((a), (b))
#define PERM128( a, b, c ) (V256)_mm256_permute2f128_ps((__m256)(a), (__m256)(b), c)
#define SHUFFLE64( a, b, c ) (V256)_mm256_shuffle_pd((__m256d)(a), (__m256d)(b), c)

And if you're wondering how each of these _mm256 functions is used, you can check the same Intel documentation. Voilà!

# Tamarin Prover Introduction

posted June 2017

I've made a quick intro on Tamarin Prover, which is a protocol verification tool. I just wanted to show people how practical and fun it looks =)

# A New Public-Key Cryptosystem via Mersenne Numbers

posted June 2017

A lot of keywords here are really interesting. But first, what is a Mersenne prime? A Mersenne prime is simply a prime $p$ such that $p=2^n - 1$. The nice thing about that is that the programming way of writing such a number is (1 << n) - 1, which is a long series of 1s. A number modulo this prime can be any bitstring of the Mersenne prime's length. OK, we know what a Mersenne prime is. How do we build our new public key cryptosystem now?
Let's start with a private key: (secret, privkey), two bitstrings of low Hamming weight, meaning that they do not have many bits set to 1. Now something very intuitive happens: the inverse of such a bitstring will probably have a high Hamming weight, which lets us believe that $secret \cdot privkey^{-1} \pmod{p}$ looks random. This will be our public key. Now that we have a private key and a public key, how do we encrypt? The paper starts with a very simple scheme on how to encrypt a bit $b$: $ciphertext = (-1)^b \cdot ( A \cdot pubkey + B ) \pmod{p}$ with $A$ and $B$ two public numbers that have low Hamming weights as well. We can see intuitively that the ciphertext will have a high Hamming weight (and thus might look random). If you are not convinced, all of this is based on actual proofs that such operations between low and high Hamming weight bitstrings will yield other low or high Hamming weight bitstrings. All of this really works because we are working modulo a $111\cdots1$ kind of number. The following lemmas, taken from the paper, are proven in section 2.1. How do you decrypt such an encrypted bit? This is how: $ciphertext \cdot privkey \pmod{p}$. This will yield either a low Hamming weight number → the original bit $b$ was a $0$, or a high Hamming weight number → the original bit $b$ was a $1$. You can convince yourself by following the equation: since $pubkey = secret \cdot privkey^{-1}$, we have $ciphertext \cdot privkey = (-1)^b (A \cdot secret + B \cdot privkey) \pmod{p}$. And again, intuitively you can see that everything is low Hamming weight except for the value of $(-1)^b$. This scheme doesn't look CCA-secure or practical. The paper goes on with an explanation of a more involved cryptosystem in section 6. EDIT: there is already a reduction of the security estimates published on eprint.

# Is Symmetric Security Solved?

posted June 2017

Recently T. Duong, D. Bleichenbacher, Q. Nguyen and B. Przydatek released a crypto library titled Tink. At its center lay an implementation of AES-GCM somewhat different from the rest: it did not take a nonce as one of its inputs. A few days ago, at the Crypto SummerSchool in Croatia, Nik Kinkel told me that he would generally recommend against letting developers tweak the nonce value, based on how heavily AES-GCM tended to be misused in the wild. As a recap, if a nonce is used twice to encrypt two different messages, AES-GCM will leak the authentication key. I think it's a fair improvement of AES-GCM to remove the nonce argument. By doing so, nonces have to be randomly generated. Now the new danger is that the same nonce is randomly generated twice for the same key. The birthday bound tells us that after $2^{n/2}$ messages, $n$ being the bit-size of a nonce, you have great odds of generating a previously used nonce. The optimal rekeying point has been studied by Abdalla and Bellare and can be computed as the cube root of the nonce space. If more nonces are generated after that, the chances of a nonce collision are too high. For AES-GCM this means that after $2^{92/3} = 1704458900$ different messages, the key should be rotated. This is of course assuming that you use 92-bit nonces with 32-bit counters. Some protocols and implementations will actually fix the first 32 bits of these 92-bit nonces, reducing the birthday bound even further. Isn't that a bit low? Yes, it kinda is. An interesting construction by Dan J. Bernstein called XSalsa20 (which can be extended to XChaCha20) allows us to use nonces of 192 bits. This would mean that you should be able to use the same key for up to $2^{192/3} = 18446744073709551616$ messages.
That is already twice what a BIGINT can store in a database. It seems like Sponge-based AEADs should benefit from large nonces as well, since their rate can store even more bits. This might be a turning point for these constructions in the last round of the CAESAR competition. There are currently 4 of these: Ascon, Ketje, Keyak and NORX. With that in mind, is nonce-misuse resistance now fixed?

EDIT: Here is a list of recent papers on the subject:
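The rekeying thresholds quoted above are easy to reproduce. A small illustrative Go sketch (the cube-root rule is the Abdalla–Bellare heuristic discussed in the post, and the nonce sizes are the post's own numbers):

```go
package main

import (
	"fmt"
	"math"
)

// rekeyBound applies the cube-root rule: with n-bit random nonces,
// rotate the key after roughly 2^(n/3) messages to keep nonce-collision
// odds acceptably low.
func rekeyBound(nonceBits float64) float64 {
	return math.Pow(2, nonceBits/3)
}

func main() {
	fmt.Printf("AES-GCM, 92-bit nonces:   rekey after ~%.0f messages\n", rekeyBound(92))  // ~1.7e9
	fmt.Printf("XChaCha20, 192-bit nonces: rekey after ~%.0f messages\n", rekeyBound(192)) // 2^64
}
```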
# zbMATH — the first resource for mathematics

Common fixed point theorems of Gregus type for weakly compatible mappings satisfying generalized contractive conditions. (English) Zbl 1138.54031

Let $(X,d)$ be a metric space and $A, B, S, T : X \to X$ four mappings. The author gives some metric conditions which imply that $A, B, S$ and $T$ have a unique common fixed point.

##### MSC:
54H25 Fixed-point and coincidence theorems (topological aspects)
# Assess advanced Hyper-V networking features

Several advanced features in Windows Server Hyper-V networking can improve network performance and increase the flexibility of VMs in private and public cloud environments. The Contoso Hyper-V administrator needs to determine which of these advanced network features is suitable for various workloads. The following table summarizes the advanced features that Windows Server Hyper-V networking supports.

| Feature | Description |
|---------|-------------|
| Hyper-V Network Virtualization | This feature decouples virtual networks from the physical network infrastructure, in much the same way as a hypervisor does for host hardware. It removes the constraints of VLAN and hierarchical IP address assignment from VM provisioning and provides more agility and mobility when managing VMs and tenant workloads. Hyper-V Network Virtualization can be implemented using various components, including the Microsoft Network Controller server role, or network virtualization gateways and load balancers in either Windows Server or System Center Virtual Machine Manager. |
| Bandwidth management | You can use this feature to specify the minimum and maximum bandwidth that Hyper-V allocates to a virtual network adapter. Hyper-V reserves the minimum bandwidth allocation for the adapter even when other adapters for VMs on the same Hyper-V host are functioning at capacity. |
| DHCP guard | This feature drops DHCP messages from VMs that are functioning as rogue DHCP servers. This might be necessary in scenarios where you don't have direct control over a VM's configuration and the VM is hosted on a Hyper-V Server you manage. |
| Router guard | This feature drops router advertisement and redirection messages from VMs that are configured as unauthorized routers. This feature might be useful when you don't have control over the configuration of VMs hosted on a Hyper-V Server you manage. |
| Port mirroring | You can use this feature to copy incoming and outgoing packets from a network adapter to another VM that you have configured for monitoring. |
| NIC Teaming | You can use this feature to add a virtual network adapter to an existing team on the host Hyper-V Server. |
| Virtual Machine Queue (VMQ) | This feature requires the host computer to have a network adapter that supports the feature. VMQ uses hardware packet filtering to deliver network traffic directly to a guest. This improves performance because the packet doesn't need to be copied from the host OS to the VM. Only network adapters specific to Hyper-V support this feature. |
| Single-root I/O virtualization (SR-IOV) | To use this feature, you must install specific hardware and special drivers on the guest OS. SR-IOV enables multiple VMs to share the same physical Peripheral Component Interconnect Express hardware resources. If sufficient resources aren't available, the virtual switch provides network connectivity. Only network adapters specific to Hyper-V support this feature. |
| IP security (IPsec) task offloading | The guest OS and network adapter must provide explicit support for this feature. This feature enables a host's network adapter to perform calculation-intensive, security-association tasks. If sufficient hardware resources aren't available, the guest OS performs these tasks. You can set the maximum number of offloaded security associations from 1 to 4,096. Only network adapters specific to Hyper-V support this feature. |
## Additional networking features in Windows Server for SDN infrastructures

Windows Server 2016 and newer versions provide additional networking features to support Software-Defined Networking (SDN) infrastructures. These features include:

• Switch Embedded Teaming (SET). SET is a NIC Teaming option that you can use for Hyper-V networks. Hyper-V can integrate with SET to provide faster performance and better fault tolerance than traditional teams. Unlike traditional teams, with SET you can add multiple Remote Direct Memory Access (RDMA) network adapters.

• RDMA with Hyper-V. Also known as Server Message Block (SMB) Direct, RDMA with Hyper-V is a feature that requires hardware support in the network adapter. A network adapter with RDMA functions at full speed with low resource utilization. Effectively, this means that there's higher throughput, which is an important consideration for busy servers with high-speed network adapters such as 10 Gbps. Note: RDMA services can use Hyper-V switches. You can enable this feature with or without SET.

• Virtual Machine Multi-Queue (VMMQ). VMMQ improves on VMQ by allocating multiple queues per VM and by spreading traffic across the queues.

• Converged network adapters. A converged network adapter supports using a single network adapter or a team of network adapters to manage multiple forms of management, RDMA, and VM traffic. This reduces the number of specialized adapters that each host needs.

• Network address translation (NAT) object. Windows Server includes a NAT object that translates an internal network address to an external address. This can be useful for IP address management, particularly when many VMs require access to the internet but there's no requirement for communication to be initiated from the internet back to the internal VMs. Tip: You can use the New-NetNat Windows PowerShell cmdlet to create a NAT object.

## Additional networking features in Windows Server 2019

Windows Server 2019 provides further network improvements with the following additional networking features:

• Receive Segment Coalescing (RSC) in the vSwitch. RSC is a stateless offload technology that helps reduce CPU utilization for network processing on the receive side by offloading tasks from the CPU to an RSC-capable network adapter. In Windows Server 2019, RSC in the vSwitch is enabled by default, and it supports Hyper-V workloads.

• Dynamic Virtual Machine Multi-Queue (d.VMMQ). d.VMMQ improves on VMMQ by allocating traffic to CPUs dynamically. With d.VMMQ enabled, as network throughput changes, Windows Server 2019 automatically coalesces network packets onto more (or fewer) CPUs for processing. This helps maximize CPU efficiency in the Hyper-V host server and maintains consistent network throughput for each hosted VM. To use d.VMMQ, you must install a d.VMMQ-capable driver for your network adapters. However, no additional setup is required to use d.VMMQ with virtual workloads in Hyper-V.
# Subsets of non-countably infinite sets

1. Jun 16, 2011

### pyrole

I was reading an introductory chapter on probability related to sample spaces. It mentioned that for uncountably infinite sets, i.e., sets whose elements cannot be put into one-to-one correspondence with the positive integers, the number of subsets is not $2^n$. I find this very unintuitive; for example, the set of all real numbers is an uncountably infinite set, I suppose. Could someone throw some light on the topic with some examples? What happens when we look at probabilities of events in these sets? Thanks

2. Jun 16, 2011

### micromass

Hi pyrole! What exactly does the book say? The number of subsets of every set A is $$|2^A|=2^{|A|}$$ In particular, if A is finite, then the number of subsets is finite. And if A is infinite, then the number of subsets is uncountable. This probably doesn't answer your question, but I don't quite understand what you're asking.

3. Jun 16, 2011

### SteveL27

Can you copy the entire passage from the text? It sounds like you might be confusing this with the Continuum Hypothesis (CH). First, the cardinality of the collection of subsets of any set $S$ is always $2^{|S|}$, where the absolute value bars $|S|$ denote the cardinality of $S$. So I don't believe that what you wrote is correct. For example, the cardinality of the set of natural numbers $\mathbb{N}$ is $\aleph_0$, and the cardinality of the real numbers is $2^{\aleph_0}$. It's easy to exhibit a bijection between the reals and the subsets of $\mathbb{N}$. CH says that there is no other transfinite cardinal strictly between $\aleph_0$ and $2^{\aleph_0}$. CH is independent of the usual axioms of set theory, known as ZFC. So there may or may not be some cardinal strictly larger than $\aleph_0$ and strictly smaller than $2^{\aleph_0}$. [Or the question may have no meaning, depending on one's philosophy.] It seems likely (to me) that this is what your book was talking about; but in any event, if you post the relevant quote from the text we can have a better idea of what they are getting at.
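As an editorial aside (not part of the original thread), one standard way to make the claimed correspondence between $\mathbb{R}$ and the subsets of $\mathbb{N}$ concrete is via characteristic functions:

$$S \subseteq \mathbb{N} \;\longleftrightarrow\; \chi_S \in \{0,1\}^{\mathbb{N}}, \qquad \chi_S \;\mapsto\; \sum_{n=0}^{\infty} \frac{\chi_S(n)}{3^{n+1}} \in [0,1].$$

Writing the sum in base 3 (rather than base 2) avoids the non-uniqueness of binary expansions, so the map is injective. Conversely, $x \mapsto \{q \in \mathbb{Q} : q < x\}$ injects $\mathbb{R}$ into the subsets of a countable set. The Cantor–Schröder–Bernstein theorem then upgrades the two injections to a bijection, giving $|\mathbb{R}| = 2^{\aleph_0}$.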
# Running x86 build crashes

I have 4 build configurations in Visual Studio 2017 15.5.2 (/std:c++latest) that build without a single error or warning (though I suppressed some warnings about adding padding for alignment and about the usage of anonymous structs): Release|Debug x64 and Release|Debug x86. All the builds run fine except for Release x86, which crashes at some weird fixed location (in fact two weird fixed locations, since I have alternatives for my input file formats). The output mentions a very informative message:

> **__that** was 0x1.

Some of my C++ Locals (std::strings) had a value equal to some of my HLSL statements and comments, but that does not seem to repeat itself. I suspect an alignment bug (my x64 builds never had a problem), but I have no idea how to track these down.

• I checked that all my structs/classes that are declared alignas are allocated with a custom allocator when stored in std::vector.
• All structs/classes storing XMVECTOR/XMMATRIX data are declared alignas(16).
• Furthermore, I checked all my calling conventions for functions with XMVECTOR/XMMATRIX return or input argument types (this fixed the Debug x86 build, which didn't crash but produced weird shadows).

Any ideas?

---

Are you generating dump files? Adding crash handlers that call MiniDumpWriteDump() is incredibly useful if you are keeping the pdb files around. By saving all the memory with the right flags you can see exactly where the program was and exactly what was in memory when it died, including full stacks for each thread. Without tools like that it is mostly guesswork. You guessed several items: alignment issues, structure packing, wrong calling conventions, differences between structures in different builds. Maybe any of those, maybe none of those. Get an actual dump file so you can verify for certain, or find steps to reproduce it inside a debugger.

---

Try gradually enabling the same optimisation settings in your debug build that you use in your release build to see if you can catch the issue in the debugger - you might be able to catch it that way.

---

9 hours ago, C0lumbo said:

> Try gradually enabling the same optimisation settings in your debug build that you use in your release build to see if you can catch the issue in the debugger - you might be able to catch it that way.

Unfortunately, the MSVC++ compiler flags triggering the problem are anything except /Od (no optimizations) + /GL (Whole Program Optimization). So the /GL flag is the culprit, but it cannot be debugged properly with Visual Studio itself. I am going to try the file dump method later on (reminder to self: http://www.debuginfo.com/examples/effmdmpexamples.html)

---

I couldn't use __try due to the object unwinding. So I switched /EHsc to /EHa and put all my startup code in a noexcept function. The program crashes with an "exception" (SEH?), though my crash dump function is never called?

```cpp
__try {
    Run(hinstance, nCmdShow);
}
__except (CreateMiniDump(GetExceptionInformation()), EXCEPTION_EXECUTE_HANDLER) {}
```

---

There are many other ways to do it because there are many different reasons for programs to get killed. You can set an unhandled exception handler for your program, std::set_terminate() for C++ exceptions. If you use Windows Structured Exceptions you also need SetUnhandledExceptionFilter().
Depending on how your program dies you may want to register functions with std::atexit() and/or std::at_quick_exit(). In current versions of Windows you can also set some registry values to turn on Windows Error Reporting (WER) to generate crash dumps even if those above methods don't catch it.

---

11 hours ago, frob said:

> Windows Structured Exceptions you also need SetUnhandledExceptionFilter()

But in my specific case, I need to look into this one? (Because the noexcept could not leak C++ exceptions; i.e., the program would just terminate instead of crash.)

---

12 hours ago, frob said:

> There are many other ways to do it because there are many different reasons for programs to get killed.

Ok, apparently both

```cpp
LONG WINAPI UnhandledExceptionFilter(EXCEPTION_POINTERS* exception_record) {
    CreateMiniDump(exception_record);
    return EXCEPTION_CONTINUE_SEARCH;
}

SetUnhandledExceptionFilter(UnhandledExceptionFilter);
```

and

```cpp
__except (CreateMiniDump(GetExceptionInformation()), EXCEPTION_EXECUTE_HANDLER) {}
```

work. Visual Studio was interfering with the exception handling. When I just run the .exe outside Visual Studio, it crashes and generates the dump file. The former does not require __try/__except and so I can get rid of /EHa; furthermore, you get a crash message. The latter does not result in a crash message since the program will catch everything.

---

@frob I used a very verbose mini dump including `MiniDumpWithFullMemory | MiniDumpWithFullMemoryInfo | MiniDumpWithHandleData | MiniDumpWithThreadInfo | MiniDumpWithUnloadedModules` and generated the mini dump (168 MB), but how does one do something useful with the file? The default program for opening the .dmp file is Visual Studio itself. The only "useful" action seems to be "Debug with Native Only", though that results in the same info Visual Studio gave in the first place? As a side note, SIMD intrinsics are not the problem, since the program still crashes with the same error after disabling them. This begins to make me really suspicious (again) towards the compiler/linker itself.

---

Did you check the call stack that is shown with your minidump? That seems to point to a very concrete location, and should probably allow you to spot the error (which seems to relate to either a vector push_back or your ModelPart move-ctor).
# Experiment with ProcessPoolExecutor for reading/writing jointcal results

#### Details

• Type: Story
• Status: Invalid
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
• Story Points: 4
• Team:

#### Description

I just experimented with using multiple threads/processes to accelerate writing jointcal's output. Fortunately, concurrent.futures is very easy to use. Unfortunately, the ProcessPoolExecutor doesn't work because there are unpickleable objects. ThreadPoolExecutor worked just fine, with the tests passing, but it wasn't any faster. Once the pybind11 port is done, I should give this another try, including adding the necessary pybind11 code to pickle the things that need to be pickled. Below is the rewrite of _write_results(), so I don't forget. _write_one_result() just contains the inside of the loop, using the new "self" objects. It would be worth considering what really should be in jointcal's "self" as part of this.

```python
self.astrom_model = astrom_model
self.photom_model = photom_model
self.visit_ccd_to_dataRef = visit_ccd_to_dataRef

import concurrent.futures
ccdImageList = associations.getCcdImageList()
with concurrent.futures.ProcessPoolExecutor() as executor:
    executor.map(self._write_one_result, ccdImageList)
```

#### Activity

John Parejko added a comment - Helpful comments from Paul Price about how to figure out what needs pickling here: Try `with lsst.ctrl.pool.pool.pickleSniffer(): doSomethingWithPickle()`, or decorate the function with `@lsst.ctrl.pool.pool.catchPicklingError`.

John Parejko added a comment - Relatedly, it would be worth trying to make _build_ccdImage thread/process safe (possibly by doing AddImage as a separate step?), and trying the same thing with it. Other than AddImage, I think that loop is trivially parallel.

John Parejko added a comment - Here's one possibility for the above, though it's not great, with _build_ccdImage returning a tuple of the arguments to AddImage in addition to the Result namedtuple:

```python
with pipeBase.cmdLineTask.profile(load_cat_prof_file):
    import concurrent.futures
    with concurrent.futures.ProcessPoolExecutor() as executor:
        mapped = executor.map(self._build_ccdImage, dataRefs)
        for (stuff, result), ref in zip(mapped, dataRefs):
            associations.AddImage(*stuff, jointcalControl)
            oldWcsList.append(result.wcs)
            visit_ccd_to_dataRef[result.key] = ref
```

This doesn't work as written, but it did work with a ThreadPoolExecutor, so there's hope, once I sort out the pickling.
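An illustrative aside (not part of the original ticket): a minimal, generic sketch of the constraint ProcessPoolExecutor imposes — everything sent to or returned from worker processes must be picklable. The class and function names below are hypothetical stand-ins, not jointcal code; in pure Python, `__reduce__` (or `__getstate__`/`__setstate__`) is the hook that makes an object picklable, and `py::pickle` plays the analogous role for pybind11-wrapped C++ classes.

```python
import concurrent.futures


class CcdResult:
    """Hypothetical stand-in for a wrapped object that must cross processes."""

    def __init__(self, key, wcs):
        self.key = key
        self.wcs = wcs

    def __reduce__(self):
        # Tell pickle how to reconstruct this object from plain arguments;
        # without this (or __getstate__/__setstate__) on an unpickleable
        # object, ProcessPoolExecutor fails the way the ticket describes.
        return (CcdResult, (self.key, self.wcs))


def build_one(visit):
    # Stand-in for per-dataRef work such as _build_ccdImage/_write_one_result.
    return CcdResult(key=visit, wcs=f"wcs-{visit}")


if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = list(executor.map(build_one, range(4)))
    print([r.key for r in results])  # [0, 1, 2, 3]
```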
John Parejko added a comment - I've pushed a branch with some more attempts, including trying to pickle ccdImage and a cleaned up version of the above addImage code. The reading .map() fails with an error about __init__ arguments, but doesn't specify what init is failing, while the writing fails with a `RuntimeError: make_tuple(): unable to convert arguments of types 'std::tuple<object, ... object>' to Python object` error. Getting the ccdImage pickle to work might not be necessary to get the read parallelization to work, but on the other hand parallel reading may require being able to pickle a whole bunch of afw objects: all the metadata.

John Parejko added a comment - Gen3 jointcal reads and writes aggregated visit-level SourceCatalogs and ExposureCatalogs, so this approach is no longer necessary.

#### People

• Assignee: John Parejko
• Reporter: John Parejko
• Watchers: John Parejko, Paul Price, Russell Owen, Simon Krughoff
Hydrol. Earth Syst. Sci., 24, 213–226, 2020
https://doi.org/10.5194/hess-24-213-2020

Research article | 16 Jan 2020

# Assessing the perturbations of the hydrogeological regime in sloping fens due to roads

Fabien Cochand1, Daniel Käser1, Philippe Grosvernier2, Daniel Hunkeler1, and Philip Brunner1

• 1Centre of Hydrogeology and Geothermics, Université de Neuchâtel, Neuchâtel, Switzerland
• 2LIN'eco, ecological engineering, P.O. Box 80, 2732 Reconvilier, Switzerland

Correspondence: Philip Brunner (philip.brunner@unine.ch)

1 Introduction

Wetlands can play a significant role in flood control (Baker et al., 2009; Zollner, 2003; Reckendorfer et al., 2013), mitigate climate change impacts (Cognard Plancq et al., 2004; Samaritani et al., 2011; Lindsay, 2010; Limpens et al., 2008), and feature great biodiversity (Rydin and Jeglum, 2005). However, the world has lost 64 % of its wetland areas since 1900, and an even greater loss has been observed in Switzerland (Broggi, 1990). Therefore, wetland conservation has received considerable attention. However, the sprawl of human infrastructure, land use change, climate change, and river regulation remain serious factors that threaten wetlands. For instance, roads can substantially modify the surface–subsurface flow patterns of sloping fens. These changes in flow patterns can influence sediment transport, moisture dynamics, and biogeochemical processes as well as ecological dynamics. The link between hydrological changes and sediment dynamics has been studied in various contexts (see, e.g., Partington et al., 2017). From a civil engineering perspective, erosion of the road must be avoided. A common strategy to avoid erosion of the road foundation is to collect water in drains and then release it in a concentrated manner downslope. This, however, can lead to erosion of the downslope area, a phenomenon known as "gully erosion". A number of studies have specifically focused on identifying the controlling processes and relevant parameters of gully erosion (Capra et al., 2009; Valentin et al., 2005; Descroix et al., 2008; Poesen et al., 2003; Martínez-Casasnovas, 2003; Daba et al., 2003; Betts and DeRose, 1999; Derose et al., 1998). Nyssen et al. (2002) investigated the impact of road construction on gully erosion in the northern Ethiopian highlands, with a focus on surface water. In their study area, they observed the formation of a gully downslope of the outlets of the drains after the road construction. Based on fieldwork and subsequent statistical analysis, they concluded that the main causes of gully development are concentrated runoff, the diversion of concentrated runoff to other catchments, and the modifications of drainage areas induced by the road. The role of groundwater was not considered in this study. Road construction can also impact the development of vegetation (Chimner et al., 2016). Von Sengbusch (2015) investigated changes in the growth of bog pines located in a mountain mire in the Black Forest (southwest Germany). The author suggested that the increase in bog pine cover was caused by a delayed effect from road construction in 1983 along a margin of the bog. The road affected the subsurface flow and therefore prevented the upslope water from flowing to the bog.
According to von Sengbusch (2015), road disturbances induce a larger variability in water table elevations during dry periods and consequently increase the sensitivity of the bog to climate change.

Figure 1. Conceptual subsurface dynamics in sloping fens. (a) Natural conditions. (b) A road without a drain (only shown for illustrative purposes as essentially all roads have drains); in this case, water will flow both across and under the road. Uncontrolled flow beneath the road can cause erosion of the road foundation. (c) A road with a drain; in this design, surface water flow is reduced and flow beneath the road occurs in a controlled manner through the drain. Water is released downslope in a concentrated manner with the risk of gully erosion as well as parts of the wetland drying out. While it is possible that the concentrated groundwater (GW) is redistributed horizontally downslope via natural heterogeneity, there is a high risk of gully erosion.

The design of the roads and especially the drains is expected to have a significant influence on the degree of perturbation. Three fundamentally different road structures have been developed in Switzerland to reduce the impacts of roads. These three road types are conceptually illustrated in Fig. 2. To date, the efficiency of these road structures has not been assessed after construction, either through field experiments or at a conceptual level. This study focuses on these three road structures:

• The "no-excavation" structure (Fig. 2a) aims at preserving soil continuity under the road. It consists of a leveled layer of gravel, anchored to the ground, and underlying 0.16 m thick concrete slabs. Soil compaction is limited by using low-density gravel, which is made of expanded glass chunks (Misapor™) that are approximately five times lighter than conventional material.

• The "L-drain" structure (Fig. 2b) aims at collecting subsurface water upslope of the road and redirecting it to discrete outlets on the other side. The setup consists of a trench, approximately 0.4 m deep, filled with a matrix of sandy gravel that contains an L-shaped band of coarse gravel acting as the drain. This is the most common approach to building roads in Switzerland.

• The "wood-log" structure (Fig. 2c) aims at promoting homogeneous flow under the road but does not preserve soil continuity. Embedded in a trench, approximately 0.4 m deep, the wooden framework is filled with wooden logs forming a permeable medium. The wooden logs are then covered with mixed gravel.

In Switzerland, more than 20 000 ha are included in the national inventory of fens of national importance (Broggi, 1990), and most of them are located in the mountainous regions of the northern Prealps. These fens developed on nearly impermeable geomorphological layers such as silty moraine material or a particular rock layer known as "flysch". The majority of the remaining Swiss fens are sloping fens in this particular geological environment. To protect the remaining wetlands, it is important to reduce the impact of these constructions, be it in the context of replacing existing, old roads or for the construction of new roads. The aim of this study is to investigate the hydrogeological impact of the three road structures and their effects on fen water dynamics to support decision-makers in choosing road structures with minimal impact. A combination of fieldwork and hydrogeological modeling tasks was employed.
Fieldwork was used to document the hydrogeological impact of existing road structures on fen water dynamics. It is the first time that these road types have been systematically analyzed under field conditions. Sites with similar natural conditions were chosen to compare the influence of different road constructions on flow processes. The field studies allow for the assessment of the effectiveness of a given road structure at a particular location; however, they cannot provide a generalizable analysis of the different road types under different environmental and physical conditions. For example, critical environmental factors such as the slope or the bulk hydraulic properties of the fen will vary at different locations. This gap was filled by the development of generic numerical models. The most important hydraulic properties which control flow dynamics are explored systematically: the slopes of fens and the bulk hydraulic conductivity. The models are kept deliberately simple in terms of the heterogeneity of the soil. As the heterogeneity of the soil is not considered in the models, the horizontal redistribution due to field-specific heterogeneity cannot be considered (see Fig. 1c). Thus, the simulations constitute a "worst-case" scenario, which allows for a systematic comparison and a relative ranking of the different road structures in terms of perturbation and the risk of gully erosion.

2 Methods

## 2.1 Study areas and fieldwork

Four sloping fen areas located in alpine or peri-alpine regions of Switzerland (Table 1) were identified for this study. All areas are situated in protected fen zones, and their selection was based on two main criteria:

1. the subsurface water flow must occur only in the topsoil layer and as runoff (as described in the introduction), and
2. roads constructed with either a no-excavation, an L-drain, or a wood-log structure must be present.

To fulfill the first criterion, soil profiles were analyzed to ensure that each area with different road types had comparable soil stratigraphy (organic soil on top of a layer of impermeable clay) and similar hydraulic regimes (e.g., runoff and subsurface flow occurring only in the topsoil layer). In addition, to ensure that subsurface water is forced to cross the road instead of flowing parallel to the road (and thus not being directly affected by the road), another important criterion for the selection of the study areas was that the subsurface flow was perpendicular to the road.

Table 1. Field site locations and features.

To evaluate the hydraulic connection provided by the roadbed structures, tracer tests were carried out. As illustrated schematically in Fig. 3, the upslope area was irrigated with a saline solution and the occurrence of the tracer was monitored downslope of the road. In the absence of surface runoff, the occurrence of a tracer downslope demonstrates the hydrogeological connection through the road. Furthermore, the spatial distribution of the tracer front reflects the heterogeneity of the flow paths.

Figure 3. Schematic view of the sites analyzed during fieldwork.

At each field site, an area of an 8 m × 20 m rectangle that included a 2.5 to 3.5 m wide road segment was selected. A network of approximately 30 mini-piezometers was installed on both sides of the road (Fig. 3) to monitor the hydraulic heads and was used to obtain samples for the tracer test. The mini-piezometers are high-density polyethylene (HDPE) tubes no longer than 1.5 m (i.d. of 24 mm).
Each tube was screened with 0.4 mm slots from the bottom end to 5 cm below ground level. They were inserted into the soil after extracting a core with a manual auger (diameter of 4–6 cm). The gap between the tube and the soil was filled with fine gravel and sealed on the top with a 4 cm thick layer of bentonite or local clay. Hydraulic heads were measured using a manual water level meter (±0.3 cm). At each point, the terrain and the top of the piezometer were leveled using a level (±0.3 cm), whereas the horizontal position was measured with a tape measure (±5 cm).

The tracer tests were conducted using two oscillating sprinklers designed to reproduce a 30 mm rain event over 2–3 h, which is equivalent to an intense rain event. Prior to the experiment, the sprinklers were activated for 15–60 min to wet the soil surface. Sodium chloride was added to the irrigated solution to obtain an electrical conductivity of 5–10 mS cm⁻¹, which is approximately 10 times higher than the natural electrical conductivity of the groundwater. Subsequently, the area (60 m²) upslope of the road (upslope injection area of Fig. 3) was irrigated with the salt solution using the two sprinklers. The electrical conductivity (EC) of soil water was manually measured using a conductivity meter in all mini-piezometers prior to the experiment, immediately after the experiment, and 24 h after the experiment. An increase in EC in the piezometers located in the downslope area indicates that the injected saltwater flowed from the upslope area to the downslope area below the road and clearly shows a hydraulic connection. Conversely, if no changes in EC are observed in the piezometers, the hydraulic connection between the areas upslope and downslope of the road is impaired.

## 2.2 Numerical modeling

The modeling approach was structured in three steps. First, a 3-D base case model representing surface and subsurface water flow in a sloping fen was elaborated. Subsequently, the base case model was modified to represent the three different types of road structures. For each model, various slopes, soil, and road drain hydraulic conductivities were implemented to produce a sensitivity analysis and explore their influence on the sloping fen flow dynamics (see Sect. 2.2.3 for details).

### 2.2.1 Numerical simulator

The model used in the study is HydroGeoSphere (HGS; Aquanty, 2017). HGS is a physically based surface–subsurface fully integrated model, based on the blueprint of Freeze and Harlan (1969), who proposed a model structure for jointly simulating surface- and subsurface flow processes (Simmons et al., 2019). HGS uses the control volume finite element approach and solves a modified Richards' equation describing the 3-D subsurface flow. If the subsurface flow is unsaturated, HGS employs the van Genuchten (1980) functions to relate pressure head to saturation and relative hydraulic conductivity. Simultaneously, HGS solves the 2-D depth-averaged diffusion-wave approximation of the Saint-Venant equation for describing the surface flow. To couple surface and subsurface and simulate the water exchanges between both domains, the "dual node approach" is used. In this approach, the top nodes representing the ground surface are used for calculating both subsurface and surface flow, and the exchange flux between the two domains is calculated based on the head difference between the surface and the subsurface and a coupling factor. The iterative Newton–Raphson method is used to solve the nonlinear equations.
At each subsurface node, saturation and groundwater heads are calculated, which allows for the calculation of the Darcy flux. For further details on the code, HGS capabilities, and application, see Aquanty (2017), Brunner and Simmons (2012), or Cochand et al. (2019).

### 2.2.2 Conceptual models and model implementation

Figure 4 illustrates the conceptual model of each case. Existing engineering sketches were used as a basis for the implementation of the drain and road. Geometry, topography, and slopes are based on the conditions in the field. In each model, the soil layer has a thickness of 0.4 m and the surface and subsurface water originate from precipitation only. The upslope boundary is the catchment boundary (water divide) and the downslope boundary represents the outlet of the model. Finally, it is assumed that the layer beneath the soil is impermeable (as observed in the field). One Neumann (constant flux) boundary condition was used on the top face for simulating precipitation. A constant head (Dirichlet-type) boundary condition equal to the ground surface elevation (2 m) was used on the lowest cells of the slope (x = 76 m in Fig. 5a), allowing groundwater to flow out of the model. Finally, a critical depth boundary condition, which allows surface water to flow out of the model domain, was implemented on the top nodes located at x = 76 m. All other faces are no-flow boundary conditions.

Figure 4. (a) Base case, (b) no-excavation, (c) L-drain, and (d) wood-log structure conceptual models. BC refers to boundary conditions.

Figure 5. Model development: (a) base case model, (b) base case model cross-section between 61 m < x < 66 m, (c) no-excavation model between 61 m < x < 66 m, (d) L-drain model between 61 m < x < 66 m, (e) L-drain model between 61 m < x < 66 m along the transversal drain, and (f) wood-log model between 61 m < x < 66 m.

A 3-D finite element mesh was developed (Fig. 5a). The mesh was 76 m long in the x direction, 20 m in the y direction, and the mesh thickness was 1.2 m. The top elevation was fixed at 2 m on the right side (x = 76 m) and varied from 9.6 to 24.8 m on the left side (x = 0) according to the slope of the model. The mesh was composed of 24 layers, 127 200 nodes, and 118 440 rectangular prism elements. To guarantee numerical stability, mesh refinements were implemented. The element size varied between 2 and 0.1 m horizontally (in the x and y directions) and 0.09 and 0.06 m vertically. The base case model and the three other models representing different road types have the same boundary conditions and finite element meshes; however, modifications were made between the coordinates 61 m < x < 66 m for the implementation of the different road types. Figure 5 depicts the differences between the base case model (Fig. 5a, b) and models with roads (Fig. 5c, d, e, f). In models simulating a road, the mesh and the material properties were adjusted. The fine spatial discretization of the mesh created between the coordinates 61 m < x < 66 m allows a more accurate representation of the simulated processes where high hydraulic gradients are expected (near roads and drains).

### 2.2.3 Model application

The model application consists of the variation of model properties to assess their effect on the groundwater dynamics. The following parameters were analyzed: fen slope, soil hydraulic conductivities, and road drain hydraulic conductivities. These parameters were selected because, according to Darcy's law (Eq. 1), they control the groundwater flow dynamics:

$$q = K \cdot \nabla H \qquad \text{(1)}$$

where K is the hydraulic conductivity of the soil or drain and ∇H is the hydraulic gradient of the subsurface water in the fen, which is itself strongly influenced by the topographical slope. For each property varied in the sensitivity analysis, three different values were chosen (Table 2): a low, intermediate, and high value. For the soil hydraulic conductivities (KS), values presented in Chambers (2003) were used and varied between 8.64 and 0.0864 m d⁻¹. This corresponds to a soil composed of gravely organic matter (as observed for example at the Sankt-Antönien site) or loamy organic matter (as observed for example at the Schöniseischwand site). The van Genuchten parameters (α and β), as well as the residual water content, were not varied. The road drains (KD), which are made of coarse or very coarse gravel, were assigned a hydraulic conductivity between 8640 and 86.4 m d⁻¹ (Fetter, 2001), with their van Genuchten parameters corresponding to gravel. The slopes were fixed at 10 %, 20 %, and 30 %, as observed during fieldwork. The wood-log (W-L) drain was assumed to be 10 times more conductive and more porous than the gravel drain. The road concrete is almost impermeable; thus, it was conceptualized with a very low hydraulic conductivity, with its van Genuchten parameters corresponding to fine material. The road base is constructed using highly compacted fine material (sand and loam); thus, it was implemented with low hydraulic conductivity, with the van Genuchten parameters corresponding to fine material. Finally, the implemented soil and road surface flow properties correspond to a wetland and urban cover (Li et al., 2008).

Table 2. Subsurface and surface flow parameters.
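To make Eq. (1) concrete, the following is a small illustrative calculation (an editorial sketch, not part of the paper) of the maximum subsurface flow-rate capacity of the soil implied by the Table 2 ranges, approximating the hydraulic gradient by the topographic slope and using the 0.4 m soil thickness over a 1 m wide section:

```python
# Darcy's law (Eq. 1): q = K * grad(H); flow rate Q = q * cross-sectional area.
soil_thickness_m = 0.4  # soil layer thickness from the model setup
section_width_m = 1.0   # 1 m wide observation section, as in the paper

for K in (8.64, 0.864, 0.0864):       # soil hydraulic conductivity KS, m/d
    for slope in (0.10, 0.20, 0.30):  # hydraulic gradient ~ topographic slope
        q = K * slope                               # Darcy flux, m/d
        Q = q * soil_thickness_m * section_width_m  # flow rate, m^3/d
        print(f"KS={K:6.4f} m/d, slope={slope:.0%}: Q={Q:.4f} m^3/d")
```

For the lowest combination this gives roughly 0.0035 m³ d⁻¹, matching the simulated base-case minimum quoted below; for the highest combination the capacity (roughly 1.04 m³ d⁻¹) far exceeds what the 380 mm yr⁻¹ recharge can supply, which is plausibly why the simulated flows plateau at 0.069 m³ d⁻¹.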
In order to simulate each parameter combination, a total of 90 models were developed (27 models for each road structure and 9 models for natural conditions). Models were run for 10 000 d (about 27 years) with a constant flux equal to 380 mm yr⁻¹ on the top, representing the rainfall, to reach a steady state. Subsequently, subsurface flow rates in the soil layer were extracted at each section with an area of 0.4 m² (1 m wide times the soil thickness) presented in Fig. 6. Changes in subsurface flow rates indicate a perturbation of flow dynamics; therefore, a comparison of flow rates between each model was made to present the effect of each road structure and sloping fen properties on the dynamics.

Figure 6. Location of observation sections in the models.

3 Results and discussion

## 3.1 Fieldwork

Based on the observations, all sites show a continuous saturated zone before the experiment, both upslope and downslope of the road, with the hydraulic gradients being similar to the terrain slope (Fig. 7, first column). In contrast, the EC maps established prior to the tracer test show a spatial variability across one to several meters (Fig. 7, second column). Within each plot, EC varies from 482 to 629 µS cm⁻¹. At the SCH site, the highest values are located downslope of the L-drain outlet, which could indicate that the EC increases as water is flowing through the drain (e.g., through the dissolution of the construction material). Given that this initial distribution of EC is not uniform, the comparison of EC after the sprinkling experiment has to be made in a relative manner (Fig. 7, third column).
Figure 7. Fieldwork results at the four field sites: the first column shows the measured groundwater heads before the tracer test, the second column shows the measured EC before the tracer test, and the third column shows the before and after tracer test differences in EC. The hydraulic head downslope of the road at the Stouffe site is about 25 cm, whereas that upslope of the road at the Schöniseischwand is about 225 cm (between two isolines); these values are not presented in the figure.

The heterogeneity of the hydraulic conductivity of the soil is apparent from the tracer tests (Fig. 7, third column: EC 24 h after injection). At all four sites, the front of the saline solution is not uniform due to the heterogeneity of the soil hydraulic conductivity. Nevertheless, the road structures or the drains may create preferential flow paths. This clearly occurs at the SCH site, where the front follows two preferential flow paths: one related to the L drain (right path) and the other unrelated to the L drain (left path). This suggests that the latter drains only a part of the water and that the remaining water follows a natural preferential flow path. At the HMD site, the saline solution is far more concentrated on the left side of the plot; however, this is apparently not a result of the road's structure. Rather, the soil appears more permeable on the left side of the plot, both upslope and downslope of the road. Finally, the decrease in EC observed 24 h after injection at some locations might result from the following: (1) the tracer injection induces the displacement of a small volume of local water with a lower EC, via a "piston effect"; (2) the tracer injection was preceded by a period of irrigation without tracer, which could have diluted the pre-irrigation soil solution.

In each case, the irrigation experiments demonstrate the continuity of subsurface flow under the road for all structures. For the no-excavation and wood-log types, the perturbation of the flow field seems to be controlled by the natural heterogeneity of the soil and flow paths, and not by the road itself. Conversely, the field data suggest that the L drain constitutes a preferential pathway. This flow convergence can cause gully erosion.

## 3.2 Modeling

Figure 8a shows the results of the models with a slope of 10 %, Fig. 8b shows the results for a slope of 20 %, and Fig. 8c shows the results for a slope of 30 %. In each panel, the groundwater flow rates (always in cubic meters per day, m³ d⁻¹) are plotted using crosses for the base case model, diamonds for the no-excavation model, squares for the L-drain model, and circles for the wood-log model. In addition, the maximum flow rate capacity of the soil calculated with Darcy's law (Eq. 1) and the flow rate induced by precipitation are also presented for the interpretation of the results. In the following paragraphs, the base case (natural conditions) results are presented and discussed, followed by the simulations of the road structures.

Figure 8. Simulated groundwater flow rates 2 m downslope of each road structure and each parameter combination with a slope of (a) 10 %, (b) 20 %, and (c) 30 %. Numbers in the bottom right corner of each panel represent the ratio between the maximum and minimum groundwater flow within the L-drain transect.

In the base case model, groundwater flow rates vary from 0.003 to 0.069 m³ d⁻¹ for a 10 % slope, from 0.006 to 0.069 m³ d⁻¹ for a 20 % slope, and from 0.009 to 0.069 m³ d⁻¹ for a 30 % slope.
The groundwater flow rate decreases following a decrease in the hydraulic conductivity (KS) of the soil layer. The groundwater flow rates are mainly controlled by the hydraulic conductivities, and the slope plays a less important role. This is expected, as the ratios of the maximum and minimum hydraulic conductivity span 2 orders of magnitude, while slopes were multiplied by a factor of 2 (for a slope of 20 %) or 3 (for a slope of 30 %). Therefore, groundwater flow is increased by a factor of 3 between the model KS3 with a slope of 10 % and the model KS3 with a slope of 30 %. Concerning the formation of surface flow, the following observation can be made: for all KS2 and KS3 models, surface flow occurs, while the infiltration capacity of the KS1 models is never exceeded and, thus, no surface flow occurs.

In the no-excavation and wood-log models, the influence on flow rates caused by the presence of the road structures is quite similar. Groundwater flows vary from 0.01 to 0.069 m³ d⁻¹ for a 10 % slope, from 0.01 to 0.069 m³ d⁻¹ for a 20 % slope, and from 0.010 to 0.069 m³ d⁻¹ for a 30 % slope. Compared with the base case model, the results show that the no-excavation and wood-log structures have a minimal impact on flow perturbation. The only marked difference is that groundwater flow rates are slightly higher if the soil hydraulic conductivities are low (KS3). This is due to the hydraulic conductivity of the base of the road (consisting of wood logs) being higher than the hydraulic conductivity of the soil, which facilitates infiltration. Conversely, in the base case model, less water infiltrates and more surface runoff occurs. In the 20 % and 30 % slope models, the results of the no-excavation model are similar to the base case model.

Figure 9. Extent of perturbations due to the L-drain road type: simulated groundwater flow rates at different distances from the road.

In the L-drain model, the effect of the road is markedly different from the other road structures. The groundwater flows vary significantly with respect to the observation sections. The maximum flows are always obtained in observation section G (see Fig. 6 for the location of the sections), just downslope of the drain outlet, and can be 10 times higher than the base case. Conversely, minimum flows are obtained in observation sections C and D, in which flow rates can be 10 times lower. Significant differences in groundwater flow are also observed in the same transect (within the same model). To condense this information, the ratios between maximum and minimum flow rates are calculated for the L-drain structures (numbers in the bottom right corners of the panels in Fig. 8). The maximum differences are observed for the cases where the hydraulic conductivity of the soil (KS) and drain (KD) are high and vary from 0.025 to 0.150 m³ d⁻¹. Conversely, when KS and/or KD is low, the differences along the transect are smaller. Finally, the slope accentuates the groundwater flow rate differences along the transect. Therefore, an increase in the groundwater flow differences is observed between the 10 % and 30 % slope scenarios, within the same model. The impact of the L drain may be further explored by extracting groundwater flows farther than 2 m downslope of the road to assess the extent of perturbations. Figure 9 shows simulated groundwater flows for the most critical cases (i.e., KS1 with a slope of 10 %, 20 %, and 30 %) downslope of the road at 3.5 and 6.5 m, respectively, and 2.5 m upslope.
At 3.5 m downslope, groundwater flow regains the upslope conditions. At 6.5 m downslope of the road, all observation sections are very similar to the upslope flows, except in section G, where flows are still slightly higher.

Figure 10. Simulated surface flow of the KS2–KD2 model and a slope of 20 % for each road structure (min. velocity = 0 m d⁻¹ and max. velocity = 0.25 m d⁻¹).

The results clearly indicate the increased risk caused by the L drain with respect to triggering surface runoff and, in turn, potential gully erosion and sections of the wetland drying out. In addition to the assessment of perturbation due to roads, the model results can be used to evaluate the risk of gully erosion. As presented in Fig. 8, the maximum flow rate capacity of the soil is small in comparison to precipitation. For all model scenarios except for KS1, the soil capacity is lower than the precipitation and, thus, surface runoff occurs in the models and is likely to occur naturally. However, surface runoff may also be triggered by the presence of L-drain structures. To illustrate this process, the simulated surface flow velocities of each road structure downslope of the road for the model KS2–KD2 and a slope of 20 % are presented in Fig. 10. In this case, the maximum flow rate capacity of the soil is approximately equal to precipitation, and therefore runoff should not occur. However, this is not the case for the L drain. The occurrence of surface runoff is a consequence of the subsurface flow concentration. In this configuration, the infiltration capacity of the soil is too small to accommodate the concentrated flow collected upslope; thus, groundwater emerges and surface flow is triggered. This constitutes an increased risk of gully erosion. Finally, the impact of the road structures on the flow dynamics 2.5 m upslope of the road was also assessed (figure not shown). Upslope flows are similar to the base case model; thus, the influence of the road is, not unexpectedly, marginal for all road types.

The development of models with various combinations of parameters allowed for the exploration of a larger parameter space than fieldwork alone. For instance, the fact that the impact of an L-drain structure on the water dynamics is less marked if the hydraulic conductivity of the soil is low would have been impossible to identify using just fieldwork. However, a numerical model is always a simplified reproduction of reality. The main model assumption is that the hydraulic conductivity of the soil is homogeneous, as opposed to the field conditions analyzed. However, the models are not intended to reproduce small-scale observations, i.e., the exact hydraulic head in a piezometer, but instead can be used to explore the influence of the road structures under different soil conditions (bulk hydraulic conductivities and slopes). Given that no heterogeneity-induced horizontal redistribution of the flow downslope can be simulated using homogeneous soil conditions, the models constitute a worst-case scenario. It is a worst-case scenario because we exclude the possibility that a fraction of the drained water could be horizontally redistributed downstream through natural heterogeneity, thereby potentially reducing the negative impact of the road. Therefore, the models allow for a relative ranking of the potential impact and clearly show the increased risk of surface water flow generation and, in turn, gully erosion.
For the scenarios investigated, the L drain consistently shows the largest impact. Thus, the other two road structures are the preferred construction types.

4 Conclusions

This study assessed three road structures with respect to their perturbations of the natural groundwater flow. Two of these road structures were specifically developed to reduce the negative impacts of the road. The study is based on two complementary approaches: field-based tracer tests and numerical models simulating groundwater flow for the different road structures. The combination of fieldwork and the development of numerical models was fundamental to achieving the goal of this study. The tracer test allowed for a better understanding of groundwater flow through road structures and allowed for an evaluation of their effectiveness at a given location. However, the tracer tests are time-consuming and only a few suitable field sites are available. Moreover, the results are site-specific. The numerical approach, in contrast, allows for the exploration of any combination of slope, hydraulic properties, and road structure, thereby providing a more comprehensive approach aimed at a relative ranking of the influence of the road structure. Given the simplified structure of the models, the results cannot be directly used to predict the influence at a specific field site.

For all scenarios investigated, the significant impact of the L-drain road structure is clearly established and is consistent with the field observations. For the other road structures, the numerical models are also consistent with fieldwork results and show a relatively undisturbed groundwater flow downslope of the road. It is the first time that the performance of these road structures has been investigated in the field. The tracer tests showed that both sides of the road were hydraulically connected for all of the road structures investigated. Groundwater flow was heterogeneous, suggesting the occurrence of natural preferential flow paths in the soil. The field data for the transversal drain (L drain) beneath the road suggest that the L drain constitutes a preferential flow path of much greater importance than the naturally occurring preferential pathways. The field results further suggest that the wood-log and no-excavation structures are less impactful than the L drain. The simulation results are consistent with the assessment of the relative impact of the different road types. Groundwater flow rates 10 times higher than in the natural case were obtained in the numerical simulations for the L drain. The two other road structures (wood log and no excavation) did not perturb the flow field to the extent of the L drain. To minimize the perturbation of flow fields, the wood-log and no-excavation structures are recommended.

Data availability. Permission is required to use the data presented in this study; the corresponding author can assist those seeking access to the data.

Author contributions. FC, DH, and PB designed the study and wrote the paper. PG initiated the study and contributed to the design and execution of the experiments. Finally, DK carried out fieldwork and wrote parts of the paper.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors are grateful to Léa Tallon, Benoit Magnin, Peter Staubli, Andreas Stalder, Anton Stübi, and Ueli Salvisberger for their collaboration.
We thank the three anonymous reviewers as well as the editor, Anke Hildebrandt, for the very detailed input on the paper.

Financial support. This research has been supported by the Swiss Federal Office for the Environment (FOEN) and the Swiss Federal Office for Agriculture (FOAG).

Review statement. This paper was edited by Anke Hildebrandt and reviewed by Alraune Zech and two anonymous referees.

References

Aquanty: HydroGeoSphere: A Three-Dimensional Numerical Model Describing Fully-Integrated Subsurface and Surface Flow and Solute Transport, University of Waterloo, Waterloo, ON, Canada, 2017.

Baker, C., Thompson, J. R., and Simpson, M.: 6. Hydrological Dynamics I: Surface Waters, Flood and Sediment Dynamics, The Wetlands Handbook, 1st edn., edited by: Maltby, E. and Barker, T., Blackwell Publishing, Chichester, UK, 120–168, 2009.

Betts, H. D. and DeRose, R. C.: Digital elevation models as a tool for monitoring and measuring gully erosion, Int. J. Appl. Earth Obs., 1, 91–101, https://doi.org/10.1016/S0303-2434(99)85002-8, 1999.

Broggi, M. E.: Minimum requis de surfaces proches de l'état naturel dans le paysage rural, illustré par l'exemple du Plateau suisse, Rapport 31a du Programme national de recherche "Sol", Liebefeld-Berne, Switzerland, 199 pp., 1990.

Brunner, P. and Simmons, C. T.: HydroGeoSphere: a fully integrated, physically based hydrological model, Groundwater, 50, 170–176, 2012.

Capra, A., Porto, P., and Scicolone, B.: Relationships between rainfall characteristics and ephemeral gully erosion in a cultivated catchment in Sicily (Italy), Soil Till. Res., 105, 77–87, https://doi.org/10.1016/j.still.2009.05.009, 2009.

Chambers, F.: Peatlands and environmental change, edited by: Charman, D., John Wiley and Sons Ltd, Chichester, UK, 2002, 301 pp., ISBN 0471969907 (HB) 0471844108 (PB), J. Quaternary Sci., 18, 466–466, https://doi.org/10.1002/jqs.741, 2003.

Chimner, R. A., Cooper, D. J., Wurster, F. C., and Rochefort, L.: An overview of peatland restoration in North America: where are we after 25 years?, Restor. Ecol., 25, 283–292, 2016.

Cochand, F., Therrien, R., and Lemieux, J.-M.: Integrated Hydrological Modeling of Climate Change Impacts in a Snow-Influenced Catchment, Groundwater, 57, 3–20, https://doi.org/10.1111/gwat.12848, 2019.

Cognard Plancq, A. L., Bogner, C., Marc, V., Lavabre, J., Martin, C., and Didon Lescot, J. F.: Etude du rôle hydrologique d'une tourbière de montagne: modélisation comparée de couples "averse-crue" sur deux bassins versants du Mont-Lozère, Etudes de géographie physique, no. XXXI, 3–15, 2004.

Daba, S., Rieger, W., and Strauss, P.: Assessment of gully erosion in eastern Ethiopia using photogrammetric techniques, Catena, 50, 273–291, https://doi.org/10.1016/S0341-8162(02)00135-2, 2003.

Derose, R. C., Gomez, B., Marden, M., and Trustrum, N. A.: Gully erosion in Mangatu Forest, New Zealand, estimated from digital elevation models, Earth Surf. Proc. Land., 23, 1045–1053, https://doi.org/10.1002/(SICI)1096-9837(1998110)23:11<1045::AID-ESP920>3.0.CO;2-T, 1998.

Descroix, L., González Barrios, J. L., Viramontes, D., Poulenard, J., Anaya, E., Esteves, M., and Estrada, J.: Gully and sheet erosion on subtropical mountain slopes: Their respective roles and the scale effect, Catena, 72, 325–339, https://doi.org/10.1016/j.catena.2007.07.003, 2008.
Dutton, A. L., Loague, K., and Wemple, B. C.: Simulated effect of a forest road on near-surface hydrologic response and slope stability, Earth Surf. Proc. Land., 30, 325–338, https://doi.org/10.1002/esp.1144, 2005.

Fetter, C. W.: Applied Hydrogeology, 4th edn., Prentice-Hall, New Jersey, USA, 2001.

Freeze, R. A. and Harlan, R. L.: Blueprint for a physically-based, digitally-simulated hydrologic response model, J. Hydrol., 9, 237–258, https://doi.org/10.1016/0022-1694(69)90020-1, 1969.

Li, Q., Unger, A. J. A., Sudicky, E. A., Kassenaar, D., Wexler, E. J., and Shikaze, S.: Simulating the multi-seasonal response of a large-scale watershed with a 3D physically-based hydrologic model, J. Hydrol., 357, 317–336, https://doi.org/10.1016/j.jhydrol.2008.05.024, 2008.

Limpens, J., Berendse, F., Blodau, C., Canadell, J. G., Freeman, C., Holden, J., Roulet, N., Rydin, H., and Schaepman-Strub, G.: Peatlands and the carbon cycle: from local processes to global implications – a synthesis, Biogeosciences, 5, 1475–1491, https://doi.org/10.5194/bg-5-1475-2008, 2008.

Lindsay, R.: Peatbogs and carbon: a critical synthesis to inform policy development in oceanic peat bog conservation and restoration in the context of climate change, University of East London, Technical Report, London, UK, 2010.

Loague, K. and VanderKwaak, J. E.: Simulating hydrological response for the R-5 catchment: comparison of two models and the impact of the roads, Hydrol. Process., 16, 1015–1032, https://doi.org/10.1002/hyp.316, 2002.

Martínez-Casasnovas, J. A.: A spatial information technology approach for the mapping and quantification of gully erosion, Catena, 50, 293–308, https://doi.org/10.1016/S0341-8162(02)00134-0, 2003.

Nyssen, J., Poesen, J., Moeyersons, J., Luyten, E., Veyret-Picot, M., Deckers, J., Haile, M., and Govers, G.: Impact of road building on gully erosion risk: a case study from the Northern Ethiopian Highlands, Earth Surf. Proc. Land., 27, 1267–1283, https://doi.org/10.1002/esp.404, 2002.

Partington, D., Therrien, R., Simmons, C. T., and Brunner, P.: Blueprint for a coupled model of sedimentology, hydrology, and hydrogeology in streambeds, Rev. Geophys., 55, 287–309, https://doi.org/10.1002/2016rg000530, 2017.

Poesen, J., Nachtergaele, J., Verstraeten, G., and Valentin, C.: Gully erosion and environmental change: importance and research needs, Catena, 50, 91–133, https://doi.org/10.1016/S0341-8162(02)00143-1, 2003.

Reckendorfer, W., Funk, A., Gschöpf, C., Hein, T., and Schiemer, F.: Aquatic ecosystem functions of an isolated floodplain and their implications for flood retention and management, J. Appl. Ecol., 50, 119–128, 2013.

Reid, L. M. and Dunne, T.: Sediment production from forest road surfaces, Water Resour. Res., 20, 1753–1761, https://doi.org/10.1029/WR020i011p01753, 1984.

Rydin, H. and Jeglum, J. K.: The biology of peatlands, 2nd edn., Oxford University Press, Oxford, UK, 382 pp., 2005.

Samaritani, E., Siegenthaler, A., Yli-Petäys, M., Buttler, A., Christin, P.-A., and Mitchell, E. A. D.: Seasonal Net Ecosystem Carbon Exchange of a Regenerating Cutaway Bog: How Long Does it Take to Restore the C-Sequestration Function?, Restor. Ecol., 19, 480–489, https://doi.org/10.1111/j.1526-100X.2010.00662.x, 2011.

Simmons, C. T., Brunner, P., Therrien, R., and Sudicky, E. A.: Commemorating the 50th anniversary of the Freeze and Harlan (1969) Blueprint for a physically-based, digitally-simulated hydrologic response model, J. Hydrol., 124309, https://doi.org/10.1016/J.JHYDROL.2019.124309, in press, 2019.

Valentin, C., Poesen, J., and Li, Y.: Gully erosion: Impacts, factors and control, Catena, 63, 132–153, https://doi.org/10.1016/j.catena.2005.06.001, 2005.
VanderKwaak, J. E.: Numerical simulation of flow and chemical transport in integrated surface-subsurface hydrologic systems, PhD thesis, Department of Earth Science, University of Waterloo, Waterloo, Ontario, Canada, 1999.

van Genuchten, M. T.: A closed-form equation for predicting the hydraulic conductivity of unsaturated soils, Soil Sci. Soc. Am. J., 44, 892–898, 1980.

von Sengbusch, P.: Enhanced sensitivity of a mountain bog to climate change as a delayed effect of road construction, Mires and Peat, 15, 6, available at: http://www.mires-and-peat.net/pages/volumes/map15/map1506.php (last access: 23 February 2018), 2015.

Wemple, B. C. and Jones, J. A.: Runoff production on forest roads in a steep, mountain catchment, Water Resour. Res., 39, 1220, https://doi.org/10.1029/2002wr001744, 2003.

Zollner, A.: Das Abflussgeschehen von unterschiedlich genutzten Hochmooreinzugsgebieten, Bayer. Akad. f. Naturschutz u. Landschaftspflege, Laufener Seminarbeitr., Laufen/Salzach, Germany, 111–119, 2003.
# Is it a TRIANGLE?

Is a triangle with side lengths 5001, 2000, and 7500 possible?

1) Yes
2) No
3) Maybe yes, maybe no
4) Not enough information given
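A quick way to check this computationally (an editorial sketch, not part of the original problem): a triangle with positive side lengths exists exactly when each side is shorter than the sum of the other two.

```python
def is_triangle(a, b, c):
    # Strict triangle inequality: every side must be less than
    # the sum of the remaining two sides.
    return a + b > c and b + c > a and c + a > b

print(is_triangle(5001, 2000, 7500))  # compare 5001 + 2000 = 7001 with 7500
```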
# The rate of premium is 2% and other expenses are 0.075%. A cargo worth ₹3,50,100 is to be insured so that all its value and the cost of insurance will be recovered in the event of total loss. - Mathematics and Statistics

Solution

Given: property value of the cargo = ₹ 3,50,100, rate of premium = 2%, other expenses = 0.075%.

Let the amount of insurance (policy value) be ₹ 100, which includes the premium of ₹ 2 and other expenses of ₹ 0.075.

∴ Value of cargo (property value) = Policy value – (Premium + Other expenses)
= 100 – (2 + 0.075)
= 100 – 2.075
= 97.925

Now, for a property value of ₹ 97.925, the policy value is ₹ 100.

∴ For a property value of ₹ 3,50,100, the policy value = (3,50,100 × 100)/97.925 = ₹ 3,57,518.5090

∴ A cargo worth ₹ 3,50,100 should be insured for ₹ 3,57,518.5090.

Concept: Insurance
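As a quick, hedged sanity check of the arithmetic (an editorial sketch, not part of the textbook solution):

```python
property_value = 350_100   # value of the cargo, in rupees
loading = 0.02 + 0.00075   # premium (2%) plus other expenses (0.075%)

# The policy value P must cover the cargo and the cost of insuring it:
# P - loading * P = property_value  =>  P = property_value / (1 - loading)
policy_value = property_value / (1 - loading)
print(f"{policy_value:.3f}")  # 357518.509
```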
# G.P.

Algebra Level 3

Let $$A = 1+r^a + r^{2a} + r^{3a} + \cdots$$, and let $$B = 1+r^b + r^{2b} + r^{3b} + \cdots$$. If $$|r| < 1$$, which of the following is equal to $$\dfrac ab$$?
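One standard route to $\dfrac ab$, sketched as an editorial aside and assuming $0 < r < 1$ so the logarithms below are defined: both series are geometric with ratios $r^a$ and $r^b$, which converge since $|r| < 1$, so

$$A = \frac{1}{1-r^a} \;\Longrightarrow\; r^a = \frac{A-1}{A}, \qquad B = \frac{1}{1-r^b} \;\Longrightarrow\; r^b = \frac{B-1}{B},$$

and taking logarithms of both expressions and dividing gives

$$\frac{a}{b} = \frac{\ln\!\left(\frac{A-1}{A}\right)}{\ln\!\left(\frac{B-1}{B}\right)}.$$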