# Static induction devices
This post gives an introduction to static induction transistors. Static induction devices (SIDs) are transistors that operate with a pre-punch-through region. These devices feature outstanding operating characteristics: frequencies up to 1 THz, high switching speed, low switching energy, large reverse blocking voltage, and low forward voltage drop.
Let’s consider the static induction transistor (SIT), shown schematically in Figure 1. The operating principle of an SID is that a potential barrier is electrostatically induced in the device, which controls the current between the drain and the source. For an SID, where only a small electrostatic field exists in the region of the potential barrier, the current through the device (drift and diffusion) can be described as ${J}_{n}=\frac{q{D}_{n}{N}_{S}}{\int_{x_1}^{x_2}\exp\left(-\frac{\phi(x)}{V_T}\right)dx}$. Here ${N}_{S}$ is the carrier concentration at ${x}_{1}$.
Figure 1.
The potential along the static induction transistor channel and across the transistor channel can each be approximated by a second-order function: $\phi(x)=\Phi\left[1-\left(2\frac{x}{L}-1\right)^{2}\right]$ and $\phi(y)=\Phi\left[1-\left(2\frac{y}{W}-1\right)^{2}\right]$ for the two channel dimensions, where $\Phi$ is the height of the potential barrier. The drain current through the transistor is then ${I}_{D}=q{D}_{n}{N}_{S}Z\frac{W}{L}\exp\left(\frac{\Phi }{V_T}\right)$, where ${N}_{S}$ is the source carrier concentration and $\frac{W}{L}$ describes the geometry of the saddle of the potential barrier. The potential $\Phi$ is always a function of the gate and drain voltages.
Unlike in conventional transistors, currents in static induction transistors are controlled by a different mechanism: the space charge of the charge carriers.
SITs can be used to obtain static induction diodes. These diodes have a low forward voltage drop. Such a diode can be made by shorting the emitter to the gate of the SIT (Figure 2).
Figure 2.
---
# To calculate: To check if $$(f*g)(x)=(g*f)(x)$$. Professor Harsh gave a test to his college algebra class, and nobody got more than 80 points (out of 100) on the test. One problem worth 8 points had insufficient data, so nobody could solve that problem. The professor adjusted the grades for the class by a. increasing everyone's score by 10% and b. giving everyone 8 bonus points; c. x represents the original score of a student
Question
Upper level algebra
To calculate: To check if $$(f*g)(x)=(g*f)(x)$$. Professor Harsh gave a test to his college algebra class, and nobody got more than 80 points (out of 100) on the test. One problem worth 8 points had insufficient data, so nobody could solve that problem.
The professor adjusted the grades for the class by
a. Increasing everyone's score by 10% and
b. Giving everyone 8 bonus points
c. x represents the original score of a student
2021-02-06
The function $$f(x) = 1.1x$$ represents the score increased by 10%
The function $$g(x) = x + 8$$ represents the score increased by 8 points
The function $$(f*g)(x) = 1.1(x + 8)$$ represents the final score when the score is first increased by 8 bonus points and then by 10%
The function $$(g*f)(x) = 1.1x + 8$$ represents the final score when the score is first increased by 10% and then by 8 bonus points
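A quick numeric check shows the two compositions are not the same function (x = 70 here is purely an example score, not part of the problem):

```python
# Hypothetical check with an example original score x = 70.
f = lambda x: 1.1 * x   # statement (a): increase the score by 10%
g = lambda x: x + 8     # statement (b): add 8 bonus points

fg = f(g(70))   # (f*g)(70): bonus points first, then the 10% increase
gf = g(f(70))   # (g*f)(70): 10% increase first, then the bonus points
```

Since f(g(70)) = 85.8 while g(f(70)) = 85, the compositions differ, so $$(f*g)(x) \neq (g*f)(x)$$ in general.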
### Relevant Questions
To calculate: To evaluate $$(f*g)(70)$$ and $$(g*f)(70)$$.
Professor Harsh gave a test to his college algebra class and nobody got more than 80 points (out of 100) on the test.
One problem worth 8 points had insufficient data, so nobody could solve that problem.
The professor adjusted the grades for the class by
a. Increasing everyone's score by 10% and
b. Giving everyone 8 bonus points
c. x represents the original score of a student
To calculate: To write the given statements (a) and (b) as functions f(x) and g(x), respectively.
Professor Harsh gave a test to his college algebra class and nobody got more than 80 points (out of 100) on the test.
One problem worth 8 points had insufficient data, so nobody could solve that problem.
The professor adjusted the grades for the class by
a. Increasing everyone's score by 10% and
b. Giving everyone 8 bonus points
c. x represents the original score of a student
To find the lowest original score that will result in an A if the professor uses
(i) $$(f*g)(x)$$ and (ii) $$(g*f)(x)$$.
Professor Harsh gave a test to his college algebra class and nobody got more than 80 points (out of 100) on the test.
One problem worth 8 points had insufficient data, so nobody could solve that problem.
The professor adjusted the grades for the class by
a. Increasing everyone's score by 10% and
b. Giving everyone 8 bonus points
c. x represents the original score of a student
The table below shows the number of people for three different race groups who were shot by police that were either armed or unarmed. These values are very close to the exact numbers. They have been changed slightly for each student to get a unique problem.
Suspect was Armed:
Black - 543
White - 1176
Hispanic - 378
Total - 2097
Suspect was unarmed:
Black - 60
White - 67
Hispanic - 38
Total - 165
Total:
Black - 603
White - 1243
Hispanic - 416
Total - 2262
Give your answer as a decimal to at least three decimal places.
a) What percent are Black?
b) What percent are Unarmed?
c) In order for two variables to be independent of each other, $$P(A \text{ and } B) = P(A) \cdot P(B)$$.
This just means that the percentage of times that both things happen equals the individual percentages multiplied together (Only if they are Independent of each other).
Therefore, if a person's race is independent of whether they were killed being unarmed then the percentage of black people that are killed while being unarmed should equal the percentage of blacks times the percentage of Unarmed. Let's check this. Multiply your answer to part a (percentage of blacks) by your answer to part b (percentage of unarmed).
Remember, the previous answer is only correct if the variables are Independent.
d) Now let's get the real percent that are Black and Unarmed by using the table.
If answer c is "significantly different" than answer d, then that means that there could be a different percentage of unarmed people being shot based on race. We will check this out later in the course.
Let's compare the percentage of unarmed shot for each race.
e) What percent are White and Unarmed?
f) What percent are Hispanic and Unarmed?
If you compare answers d, e and f it shows the highest percentage of unarmed people being shot is most likely white.
Why is that?
This is because there are more white people in the United States than any other race and therefore there are likely to be more white people in the table. Since there are more white people in the table, there most likely would be more white and unarmed people shot by police than any other race. This pulls the percentage of white and unarmed up. In addition, there most likely would be more white and armed shot by police. All the percentages for white people would be higher, because there are more white people. For example, the table contains very few Hispanic people, and the percentage of people in the table that were Hispanic and unarmed is the lowest percentage.
Think of it this way. If you went to a college that was 90% female and 10% male, then females would most likely have the highest percentage of A grades. They would also most likely have the highest percentage of B, C, D and F grades
The correct way to compare is "conditional probability". Conditional probability is getting the probability of something happening, given we are dealing with just the people in a particular group.
g) What percent of blacks shot and killed by police were unarmed?
h) What percent of whites shot and killed by police were unarmed?
i) What percent of Hispanics shot and killed by police were unarmed?
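The arithmetic for parts a–d and g–h can be sketched directly from the counts in the table (the variable names are mine, not part of the problem):

```python
# Counts quoted in the table above.
armed   = {"Black": 543, "White": 1176, "Hispanic": 378}
unarmed = {"Black": 60,  "White": 67,   "Hispanic": 38}
total = 2262

p_black   = (armed["Black"] + unarmed["Black"]) / total   # part a
p_unarmed = sum(unarmed.values()) / total                 # part b
p_if_independent = p_black * p_unarmed                    # part c: independence assumption
p_joint = unarmed["Black"] / total                        # part d: straight from the table

# Conditional probabilities for parts g and h: restrict to one race group.
p_unarmed_given_black = unarmed["Black"] / (armed["Black"] + unarmed["Black"])
p_unarmed_given_white = unarmed["White"] / (armed["White"] + unarmed["White"])
```

The two conditional rates (roughly 0.100 vs 0.054) are what support the "approximately twice" comparison made after parts g and h.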
You can see by the answers to part g and h, that the percentage of blacks that were unarmed and killed by police is approximately twice that of whites that were unarmed and killed by police.
j) Why do you believe this is happening?
Do a search on the internet for reasons why blacks are more likely to be killed by police. Read a few articles on the topic. Write your response using the articles as references. Give the websites used in your response. Your answer should be several sentences long with at least one website listed. This part of this problem will be graded after the due date.
We will now add support for register-memory ALU operations to the classic five-stage RISC pipeline. To offset this increase in complexity, all memory addressing will be restricted to register indirect (i.e., all addresses are simply a value held in a register; no offset or displacement may be added to the register value). For example, the register-memory instruction add x4, x5, (x1) means add the contents of register x5 to the contents of the memory location with address equal to the value in register x1 and put the sum in register x4. Register-register ALU operations are unchanged. The following items apply to the integer RISC pipeline:
a. List a rearranged order of the five traditional stages of the RISC pipeline that will support register-memory operations implemented exclusively by register indirect addressing.
b. Describe what new forwarding paths are needed for the rearranged pipeline by stating the source, destination, and information transferred on each needed new path.
c. For the reordered stages of the RISC pipeline, what new data hazards are created by this addressing mode? Give an instruction sequence illustrating each new hazard.
d. List all of the ways that the RISC pipeline with register-memory ALU operations can have a different instruction count for a given program than the original RISC pipeline. Give a pair of specific instruction sequences, one for the original pipeline and one for the rearranged pipeline, to illustrate each way.
Hint for (d): Give a pair of instruction sequences where the RISC pipeline has “more” instructions than the reg-mem architecture. Also give a pair of instruction sequences where the RISC pipeline has “fewer” instructions than the reg-mem architecture.
A new thermostat has been engineered for the frozen food cases in large supermarkets. Both the old and new thermostats hold temperatures at an average of $$25^{\circ}F$$. However, it is hoped that the new thermostat might be more dependable in the sense that it will hold temperatures closer to $$25^{\circ}F$$. One frozen food case was equipped with the new thermostat, and a random sample of 21 temperature readings gave a sample variance of 5.1. Another similar frozen food case was equipped with the old thermostat, and a random sample of 19 temperature readings gave a sample variance of 12.8. Test the claim that the population variance of the old thermostat temperature readings is larger than that for the new thermostat. Use a $$5\%$$ level of significance. How could your test conclusion relate to the question regarding the dependability of the temperature readings? (Let population 1 refer to data from the old thermostat.)
(a) What is the level of significance?
State the null and alternate hypotheses.
$$H_0: \sigma_1^2 = \sigma_2^2,\ H_1: \sigma_1^2 > \sigma_2^2$$
$$H_0: \sigma_1^2 = \sigma_2^2,\ H_1: \sigma_1^2 \neq \sigma_2^2$$
$$H_0: \sigma_1^2 = \sigma_2^2,\ H_1: \sigma_1^2 < \sigma_2^2$$
$$H_0: \sigma_1^2 > \sigma_2^2,\ H_1: \sigma_1^2 = \sigma_2^2$$
(b) Find the value of the sample F statistic. (Round your answer to two decimal places.)
What are the degrees of freedom?
$$df_{N} = ?$$
$$df_{D} = ?$$
What assumptions are you making about the original distribution?
- The populations follow independent normal distributions. We have random samples from each population.
- The populations follow dependent normal distributions. We have random samples from each population.
- The populations follow independent normal distributions.
- The populations follow independent chi-square distributions. We have random samples from each population.
(c) Find or estimate the P-value of the sample test statistic. (Round your answer to four decimal places.)
(d) Based on your answers in parts (a) to (c), will you reject or fail to reject the null hypothesis?
- At the α = 0.05 level, we fail to reject the null hypothesis and conclude the data are not statistically significant.
- At the α = 0.05 level, we fail to reject the null hypothesis and conclude the data are statistically significant.
- At the α = 0.05 level, we reject the null hypothesis and conclude the data are not statistically significant.
- At the α = 0.05 level, we reject the null hypothesis and conclude the data are statistically significant.
(e) Interpret your conclusion in the context of the application.
- Reject the null hypothesis; there is sufficient evidence that the population variance is larger in the old thermostat temperature readings.
- Fail to reject the null hypothesis; there is sufficient evidence that the population variance is larger in the old thermostat temperature readings.
- Fail to reject the null hypothesis; there is insufficient evidence that the population variance is larger in the old thermostat temperature readings.
- Reject the null hypothesis; there is insufficient evidence that the population variance is larger in the old thermostat temperature readings.
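A sketch of the test-statistic setup for this problem (the p-value itself would come from an F-table or statistical software; the degrees-of-freedom choices assume population 1 is the old thermostat, as the problem states):

```python
# F-test of H0: sigma1^2 = sigma2^2 vs H1: sigma1^2 > sigma2^2.
s1_sq, n1 = 12.8, 19   # old thermostat: sample variance, sample size
s2_sq, n2 = 5.1, 21    # new thermostat: sample variance, sample size

F = s1_sq / s2_sq      # claimed-larger variance goes in the numerator
df_N = n1 - 1          # numerator degrees of freedom
df_D = n2 - 1          # denominator degrees of freedom
```

This gives F ≈ 2.51 with 18 and 20 degrees of freedom.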
1. Find each of the requested values for a population with a mean of $$\mu = 40$$ and a standard deviation of $$\sigma = 8$$.
A. What is the z-score corresponding to $$X = 52$$?
B. What is the X value corresponding to $$z = -0.50$$?
C. If all of the scores in the population are transformed into z-scores, what will be the values of the mean and standard deviation for the complete set of z-scores?
D. What is the z-score corresponding to a sample mean of $$M = 42$$ for a sample of $$n = 4$$ scores?
E. What is the z-score corresponding to a sample mean of $$M = 42$$ for a sample of $$n = 6$$ scores?
2. True or false:
a. All normal distributions are symmetrical.
b. All normal distributions have a mean of 1.0.
c. All normal distributions have a standard deviation of 1.0.
d. The total area under the curve of all normal distributions is equal to 1.
3. Interpret the location, direction, and distance (near or far) of the following z-scores: a. -2.00 b. 1.25 c. 3.50 d. -0.34
4. You are part of a trivia team and have tracked your team’s performance since you started playing, so you know that your scores are normally distributed with $$\mu = 78$$ and $$\sigma = 12$$. Recently, a new person joined the team, and you think the scores have gotten better. Use hypothesis testing to see if the average score has improved based on the following 8 weeks’ worth of score data: $$82, 74, 62, 68, 79, 94, 90, 81, 80$$.
5. You get hired as a server at a local restaurant, and the manager tells you that servers’ tips are \$42 on average but vary by about \$12 ($$\mu = 42, \sigma = 12$$). You decide to track your tips to see if you make a different amount, but because this is your first job as a server, you don’t know if you will make more or less in tips. After working 16 shifts, you find that your average nightly amount is \$44.50 in tips. Test for a difference between this value and the population mean at the $$\alpha = 0.05$$ level of significance.
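The z-score parts of problem 1 are direct arithmetic; a minimal sketch (for sample means, the standard error $$\sigma/\sqrt{n}$$ replaces $$\sigma$$):

```python
mu, sigma = 40, 8

z_A = (52 - mu) / sigma               # part A: z for X = 52
x_B = mu + (-0.50) * sigma            # part B: X for z = -0.50
z_D = (42 - mu) / (sigma / 4 ** 0.5)  # part D: M = 42, n = 4
z_E = (42 - mu) / (sigma / 6 ** 0.5)  # part E: M = 42, n = 6
```

Larger samples shrink the standard error, so the same sample mean of 42 sits further from the population mean in z units for n = 6 than for n = 4.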
When a gas is taken from a to c along the curved path in the figure (Figure 1) , the work done by the gas is W = -40 J and the heat added to the gas is Q = -140 J . Along path abc, the work done by the gas is W = -50 J . (That is, 50 J of work is done on the gas.)
I keep missing Part D. The answer for Part D is not -150, 150, -155, 108, or 105 (one was close, but it said "not quite, check calculations").
Part A
What is Q for path abc?
Express your answer to two significant figures and include the appropriate units.
Part B
If $$P_c = \tfrac{1}{2}P_b$$, what is W for path cda?
Express your answer to two significant figures and include the appropriate units.
Part C
What is Q for path cda?
Express your answer to two significant figures and include the appropriate units.
Part D
What is $$U_a - U_c$$?
Express your answer to two significant figures and include the appropriate units.
Part E
If $$U_d - U_c = 42\ \mathrm{J}$$, what is Q for path da?
Express your answer to two significant figures and include the appropriate units.
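For Parts A and D, a first-law bookkeeping sketch (sign convention: Q = ΔU + W with W the work done by the gas; treat these numbers as a cross-check of the reasoning, not a graded answer):

```python
# The curved path a -> c fixes the internal-energy change,
# because U is a state function.
Q_ac, W_ac = -140.0, -40.0
dU_ac = Q_ac - W_ac        # U_c - U_a

W_abc = -50.0
Q_abc = dU_ac + W_abc      # Part A: path abc has the same endpoints, so same dU
Ua_minus_Uc = -dU_ac       # Part D: just the sign flip of dU_ac
```

The key step is that ΔU between a and c is the same for every path, so the curved-path data pins it down once and for all.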
The bulk density of soil is defined as the mass of dry solids per unit bulk volume. A high bulk density implies a compact soil with few pores. Bulk density is an important factor influencing root development, seedling emergence, and aeration. Let X denote the bulk density of Pima clay loam. Studies show that X is normally distributed with $$\mu = 1.5$$ and $$\sigma = 0.2\ \mathrm{g/cm^3}$$.
(b) Find the probability that a randomly selected sample of Pima clay loam will have a bulk density less than $$0.9\ \mathrm{g/cm^3}$$.
(c) Would you be surprised if a randomly selected sample of this type of soil has a bulk density in excess of $$2.0\ \mathrm{g/cm^3}$$? Explain, based on the probability of this occurring.
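Parts (b) and (c) reduce to standard-normal tail areas; a dependency-free sketch using the error function:

```python
from math import erf, sqrt

mu, sigma = 1.5, 0.2  # g/cm^3

def phi(z):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

p_below_09 = phi((0.9 - mu) / sigma)      # part (b): z = -3
p_above_20 = 1 - phi((2.0 - mu) / sigma)  # part (c): z = +2.5
```

Both probabilities come out well under 1%, so a bulk density above 2.0 g/cm³ would indeed be surprising.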
---
# Optimising functions¶
Now for something a bit different. PyTorch is a tensor processing library and, whilst it has a focus on neural networks, it can also be used for more standard function optimisation. In this example we will use torchbearer to minimise a simple function.
## The Model¶
First we will need to create something that looks very similar to a neural network model - but with the purpose of minimising our function. We store the current estimates for the minimum as parameters in the model (so PyTorch optimisers can find and optimise them) and we return the function value in the forward method.
class Net(Module):
    def __init__(self, x):
        super().__init__()
        self.pars = torch.nn.Parameter(x)

    def f(self):
        """
        function to be minimised:
        f(x) = (x[0]-5)^2 + x[1]^2 + (x[2]-1)^2
        Solution:
        x = [5,0,1]
        """
        out = torch.zeros_like(self.pars)
        out[0] = self.pars[0] - 5
        out[1] = self.pars[1]
        out[2] = self.pars[2] - 1
        return torch.sum(out ** 2)

    def forward(self, _, state):
        state[ESTIMATE] = self.pars.detach().unsqueeze(1)
        return self.f()
## The Loss¶
For function minimisation we have an analogue to neural network losses - we minimise the value of the function under the current estimates of the minimum. Note that as we are using a base loss, torchbearer passes this the network output and the “label” (which is of no use here).
def loss(y_pred, y_true):
    return y_pred
## Optimising¶
We need two more things before we can start optimising with torchbearer. We need our initial guess - which we’ve set to [2.0, 1.0, 10.0] - and we need to tell torchbearer how “long” an epoch is - i.e. how many optimisation steps we want for each epoch. For our simple function, we can complete the optimisation in a single epoch, but for more complex optimisations we might want to take multiple epochs and include tensorboard logging and perhaps learning rate annealing to find a final solution. We have set the number of optimisation steps for this example to 50000.
p = torch.tensor([2.0, 1.0, 10.0])
training_steps = 50000
The learning rate chosen for this example is very low and we could get convergence much faster with a larger rate, however this allows us to view convergence in real time. We define the model and optimiser in the standard way.
model = Net(p)
optim = torch.optim.SGD(model.parameters(), lr=0.0001)
Finally we start the optimising on the GPU and print the final minimum estimate.
tbtrial = tb.Trial(model, optim, loss, [tb.metrics.running_mean(ESTIMATE, dim=1), 'loss'])
tbtrial.for_train_steps(training_steps).to('cuda')
tbtrial.run()
print(list(model.parameters())[0].data)
Usually torchbearer will infer the number of training steps from the data generator. Since for this example we have no data to give the model (which will be passed None), we need to tell torchbearer how many steps to run using the for_train_steps method.
## Viewing Progress¶
You might have noticed in the previous snippet that the example uses a metric we’ve not seen before. The state key that represents our estimate in state can also act as a metric and is created at the beginning of the file with:
Putting all of it together and running provides the following output:
0/1(t): 100%|██████████| 50000/50000 [00:53<00:00, 931.36it/s, loss=4.5502, running_est=[4.9988, 0.0, 1.0004], running_loss=0.0]
The final estimate is very close to the true minimum at [5, 0, 1]:
tensor([ 4.9988e+00, 4.5355e-05, 1.0004e+00])
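For comparison, the same minimisation can be written as bare gradient descent with no libraries at all (a sketch; the learning rate and step count here are my own choices, not those of the torchbearer example):

```python
# Hand-rolled gradient descent on f(x) = (x0-5)^2 + x1^2 + (x2-1)^2,
# the function the Net above minimises; the minimum is at [5, 0, 1].
def grad(x):
    # Analytic gradient of f at x.
    return [2 * (x[0] - 5), 2 * x[1], 2 * (x[2] - 1)]

x = [2.0, 1.0, 10.0]  # same initial guess as the torchbearer example
lr = 0.01
for _ in range(2000):
    x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
```

For a simple quadratic bowl like this, each step shrinks the error by a constant factor, so 2000 steps at lr = 0.01 is more than enough to land on [5, 0, 1] to high precision; torchbearer adds value for the logging, metrics, and device handling rather than the maths.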
## Source Code¶
The source code for the example is given below:
---
1. The problem statement, all variables and given/known data

Determine the expectation value of the product of the number operators $$\left\langle{N_{a_{1}}}{N_{b_{1}}}\right\rangle$$. Explain your result.

2. Relevant equations

http://en.wikipedia.org/wiki/Hong-Ou-Mandel_effect

3. The attempt at a solution

I've got the expectation value as zero, and I've been told that is correct, but I can't explain why. The N_a and N_b refer to the two photons or modes after the beam splitter, I think (mode zero would be before the beam splitter), so I don't know why they seem to disappear after the beam splitter and why the expectation value (is this the average value of the intensity at one of the detectors?) is zero. I would have expected it to be either:

1 - because of the HOM effect, the two input photons will merge into one photon with double intensity, and since there are two detectors and the odds of being detected at either are 50-50, the average value at either is half of 2 = 1. Or

2 - this is if the expectation value only takes into account one of the detectors when the doubled-intensity photon is detected... maybe?
---
---
# Kerodon
$\Newextarrow{\xRightarrow}{5,5}{0x21D2}$ $\newcommand\empty{}$
Remark 7.2.2.4. The converse of Corollary 7.2.2.3 is also true: if $e: A \rightarrow B$ is a morphism of simplicial sets having the property that precomposition with the induced map $e^{\triangleleft }: A^{\triangleleft } \rightarrow B^{\triangleleft }$ carries limit diagrams to limit diagrams, then $e$ is left cofinal. Moreover, it suffices to check this condition for diagrams in the $\infty$-category $\operatorname{\mathcal{S}}$ of spaces (see Corollary 7.4.5.11).
---
# Why does the inverse gamma distribution look essentially the same when increasing the variance?
I have a function in R which for a given mean $$\mu$$ and variance $$\sigma^2$$, spits out the parameters for the shape, $$\alpha$$, and rate, $$\beta$$ of the inverse gamma distribution with that mean and variance.
When setting the mean fixed at $$\mu=1$$, I noticed that increasing the variance from say 10 to 100, has almost no visual effect on the inverse gamma produced (both plots look identical to the figure below). Furthermore, when I sample from the corresponding distributions, the values don't seem to have a variance of higher than around 1 or 2.
# mu=1, var=10 (alpha=2.1, beta=1.1)
> rinvgamma(50,2.1,1.1)
[1] 0.9211008 0.4302407 0.5478425 0.5462700 0.3628111 1.1686150 1.0768032 1.2512244 1.7006653 0.7279801 0.5836686
[12] 0.5233413 0.3448369 0.3536895 0.8559441 0.3650069 0.5783526 0.5317600 0.8353258 0.5587801 0.4564186 1.0995836
[23] 0.3449476 0.5421546 0.1790850 7.3857559 0.6131200 0.6986830 0.2800351 0.5196334 0.3979346 0.4807825 0.3273610
[34] 1.4138987 0.9715384 0.4951919 1.6518877 0.6431032 0.2710467 0.7001043 0.2214802 0.2868089 0.3429655 0.6041847
[45] 0.2402038 0.3867785 0.2216013 0.9520140 0.5259651 0.5929160
# mu=1, var=1000 (alpha=2.001, beta=1.001)
> rinvgamma(50,2.001,1.001)
[1] 0.6629496 2.8095275 0.4344837 0.5361150 0.5480011 0.8444600 0.4915654 1.0122025 0.4696304 0.9146489 0.5594634
[12] 0.2602492 0.6842792 0.6851688 0.4322968 3.0454766 0.2657547 0.2271760 0.7609194 0.1481709 0.9178390 0.3807237
[23] 0.7069672 1.3271837 0.4251466 1.1065251 0.5227289 0.6202342 1.6205382 1.1839423 0.6605649 0.9626064 0.9334211
[34] 0.6991242 0.7422693 0.2975816 1.0126841 0.4243968 0.6060229 0.3395812 2.5289663 4.1525575 0.8321886 1.4190376
[45] 0.2293600 0.3870373 0.7642754 0.3934958 0.7532922 0.6021346
The $$\alpha$$ and $$\beta$$ values produced from the function ($$\alpha=2.1 \ , \beta=1.1$$ and $$\alpha=2.001 \ , \beta=1.001$$) seem to give the correct $$\mu$$ and $$\sigma^2$$ according to the mean-variance equations (see Wikipedia page here) so I'm puzzled why the distributions seem very similar.
Interestingly, if I increase the mean to $$\mu=100$$, the same effect is observed, but for larger variances (i.e. distributions with $$\mu=100$$ and $$\sigma^2=1000$$ vs. $$\sigma^2=2000$$ appear the same) - as if it's parameter combinations with a large relative difference between mean and variance values that are causing the issue.
Can anyone explain why I'm observing this effect?
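For reference, here is a sketch of the mean/variance-to-parameters mapping the R function presumably implements (my own derivation from the inverse-gamma moment formulas; the function name is mine):

```python
# Inverse gamma: mean = beta/(alpha-1) for alpha > 1,
#                var  = mean^2/(alpha-2) for alpha > 2.
def invgamma_params(mu, var):
    alpha = 2 + mu ** 2 / var   # solve var = mu^2 / (alpha - 2)
    beta = mu * (alpha - 1)     # solve mu = beta / (alpha - 1)
    return alpha, beta

a1, b1 = invgamma_params(1, 10)
a2, b2 = invgamma_params(1, 1000)
```

Both variance targets push α only slightly above 2, and near α = 2 almost all of the variance comes from extremely rare, extremely large draws in the heavy right tail. The bulk of the density (and a sample of only 50 values) therefore looks nearly unchanged, and sample variances converge to the theoretical value very slowly.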
---
# 3.3V Power Supply Module
Module with stabilizer 3.3V.
SKU: OTH15300695327
Price: 0.17 $Old Price: 0.99$
Product in stock
# Overview
The stabilizer is useful for circuits that need a 3.3 V supply. The regulator is an LDO, so you can feed the input with 5 V, the other voltage commonly used in electronics.
The PCB is small in size and has two input and two output connectors.
---
# Angular momentum power plant on Earth
If tidal power plants are slowing down Earth's rotation, then is it theoretically possible to build a power plant that would drain energy from Earth's angular momentum (thus slowing down its rotation)?
What would such machine look like?
There are a few important points to make before I attempt an answer. The first is just a point about terminology: you have to be careful to distinguish between energy and (angular) momentum. They are not the same thing - they have different units - and so it doesn't strictly make sense to talk about "draining energy from Earth's angular momentum", since it's impossible to convert one of those quantities into the other. This doesn't mean it's a bad question, it's just that you have to be careful to understand the difference.
The second important point is that angular momentum is conserved. If you drain it from one thing, it has to go into something else. Energy is the same, but energy can come in useful forms that we mostly call "work" and not-so-useful forms that we mostly call "heat". When we say "generating energy" we really mean converting it into a useful form.
Although you can't convert angular momentum into energy, it is possible to generate useful energy by transferring angular momentum from one body to another. The rule is that you can do this whenever the angular velocities of the two bodies are different, as Peter Shor mentioned in his answer. This is exactly the same rule as the one that allows you to extract energy by transferring charge when two voltages are different, or by transferring linear momentum when the velocities are different, or by transferring heat when the temperatures are different.
The main point is that if you want to generate energy by decreasing Earth's angular momentum, you have to transfer that angular momentum somewhere else. One place to put it is the Moon, which is effectively what happens when you extract energy from the tides. (The moon orbits the Earth more slowly than the Earth rotates, so their angular velocities around the centre of the Earth are different.) However, it's difficult to imagine how the flow of angular momentum from the Earth to the Moon could be sped up.
However, there is another way you could do it, and that's to launch things into space. If you can get mass far enough away from Earth then it will have a lower angular velocity, and so in theory you could generate useful work by transferring matter from Earth's surface to deep space.
But the question is how you could do it. Launching stuff in rockets won't do the job, since it takes loads of energy to accelerate things to escape velocity, and there's no practical way to get it back. However, with a space elevator it would be different. Imagine a space elevator that's so long that the centrifugal force at the counterweight end is much larger than $1g$. You have to expend some energy lifting mass up to the zero-g point, but this will be more than outweighed by the energy you can gain by letting the mass power a generator as it moves from the zero-g point to the counterweight. Once it gets there its velocity will be enough to escape the Earth's gravity well and you can just let it go, sending the lift back to collect the next mass.
If you do this repeatedly you will have a net gain of energy, just from sending garbage into space, and it ultimately works because it transfers angular momentum from a region of high angular velocity (the Earth) to one of low angular velocity (objects moving in space far from Earth). However, there are enormous practical issues that would probably prevent it from ever being done. Aside from the strength of cable needed, the angular momentum will be transferred not directly from the surface of the Earth but from the whole length of the cable as the mass moves along it, which will make it swing around like crazy unless it can somehow be made to behave as if it's rigid. But nevertheless, it's possible in principle.
Come to think of it, if we're coming up with crazy schemes that would only work in principle, I guess you could build a rail track around the equator of the Earth, put a very heavy train on it, and attach the train via a cable to the moon so that it pulls the train along, powering a generator. This would be an effective way to generate work by increasing the transfer of angular momentum between the two bodies, but of course it's pure fantasy.
+1: Very nice. I think you should make the space elevators rigid. Then it's clear how it works (sending the mass up reduces the earth's angular momentum the same way a spinning ice skater slows down when he extends his arms). So you're slightly incorrect; the angular momentum isn't transferred to the tip of the cable; it's transferred to its entire length via the Coriolis force generated when you move the load up the cable. – Peter Shor Dec 26 '12 at 14:21
@PeterShor good point, I've edited the answer. I wonder if it would be possible to build an elevator that behaves rigidly enough by having a network of cables in tension, anchored to multiple points on the Earth's surface, kind of like one of these flickr.com/photos/46309585@N04/4561104341 - that would solve some of the obvious problems with having an actual rigid rod stuck into the ground... – Nathaniel Dec 26 '12 at 14:40
It seems to me that this scheme might actually be practical for a fast-rotating asteroid. Have any studies been done on this? – Peter Shor Dec 26 '12 at 17:30
This is the kind of answer I was looking for, thanks. – daniel.sedlacek Aug 20 '13 at 11:55
To harness the energy of the Earth's rotation, you need something with different angular velocity (just like harnessing thermal energy requires two things at different temperature). Everything on Earth is rotating with the same speed, and so has the same angular velocity; the closest thing with a different angular velocity is the Moon.
This is in fact what is powering the tides: the Moon is gaining angular momentum and changing its orbit. However, we have no practical way to speed up the transfer of angular momentum to the Moon, so harnessing Earth's angular momentum in any way other than using the existing tides is completely impractical.
Yes, many things rob Earth's angular momentum and slow it down. For example, when a rocket lands on Earth, it exchanges speed with the rotating planet. Likewise, when a rocket leaves Earth, it can use the fact that the Earth rotates in order to gain kinetic energy from the Earth's rotation. In other words, the rocket robs the Earth's energy.
NASA uses this fact to their advantage: they 'rob' Earth's energy to speed up their spacecraft during launch.
This relates to angular momentum because the rocket acts as a mass being added to or removed from the side of a spinning 'sphere' (the Earth). If you work through the calculation, you would apply conservation of energy and conservation of angular momentum.
However, we don't have power plants doing that because it is highly impractical.
It is impractical because we have an atmosphere which moves along with the Earth's rotation.
Edit: Let's imagine that Earth's atmosphere doesn't exist, so we have a big rotating sphere in vacuum. We could launch a spacecraft in the direction of rotation, have some turbines 'take' the spacecraft's kinetic energy, let the spacecraft land, and repeat. It's a simplified version of such a system. We could even use magnets and Faraday's law (making the spacecraft a magnet) instead of a turbine.
$\Delta L = m v_{initial} r - (\Sigma m)\, v_{final}\, r$
$= m_{earth} r_{earth} v_{earth,initial} - v_{earth,final} \left(m_{earth} r_{earth} + m_{rocket} r_{rocket}\right)$
Since $v_{initial} \approx v_{final} = v$, we have $\Delta L = -v\, r_{rocket}\, m_{rocket}$
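For scale, the angular momentum a single launch carries off can be estimated from that last expression, taking $r_{rocket} \approx R_{earth}$ at liftoff. A rough Python sketch; the 500-tonne rocket mass is an arbitrary illustrative figure, and Earth is approximated as a uniform sphere (a crude assumption, but fine for an order of magnitude):

```python
# Order-of-magnitude estimate of Delta L = -v * r * m_rocket,
# and the (utterly negligible) change in Earth's spin it implies.
M_earth = 5.972e24          # kg
R_earth = 6.371e6           # m
omega   = 7.2921e-5         # Earth's rotation rate, rad/s
v_eq    = omega * R_earth   # equatorial surface speed, ~465 m/s

m_rocket = 5.0e5            # assumed 500-tonne rocket, kg

dL = -v_eq * R_earth * m_rocket        # angular momentum carried off by the rocket
I_earth = 0.4 * M_earth * R_earth**2   # uniform-sphere moment of inertia, (2/5) M R^2
d_omega = dL / I_earth                 # resulting change in Earth's spin rate

print(f"Delta L  = {dL:.3e} kg m^2/s")
print(f"d(omega) = {d_omega:.3e} rad/s  (vs omega = {omega:.3e} rad/s)")
```

The fractional change in Earth's rotation rate comes out around $10^{-19}$, which is why nobody notices.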
Note that sometimes Earth 'robs' energy from other objects, too – raindrop Dec 21 '12 at 22:07
The movement of Earth's atmosphere is not caused by Earth's angular momentum. If it were, they would have come into equilibrium billions of years ago and the atmosphere would not be in motion. Instead the movement is caused by thermally driven convection due to the Sun heating air near the surface, and the air at the top losing heat due to emitting thermal radiation itself. These processes can result in a transfer of angular momentum between the Earth and its atmosphere, but of course the total is always conserved. – Nathaniel Dec 26 '12 at 3:21
Also, you couldn't possibly generate energy by launching and landing spacecraft in the way you suggest, even if there were no atmosphere. It's true that kinetic energy is transferred from Earth to the spacecraft on takeoff, but this is a thermodynamically unfavourable energy transfer and work must be done to perform it. If you launch things using a space elevator it's a different story though - I'm writing an answer to explain that. – Nathaniel Dec 26 '12 at 3:46
@Nathaniel I assumed that the tangential velocity of the Earth at the Earth's surface was much greater than usual wind speeds parallel to the Earth's surface, so I thought that somehow the atmosphere 'rotated along' with the Earth's rotation. I've edited the question to explain the spacecraft thing. – raindrop Dec 26 '12 at 6:31
the atmosphere does rotate along with the Earth. But when you talk about "[taking] advantage of the movement of Earth’s atmosphere (caused by Earth’s angular momentum) relative to outer layers of Earth", it sounds like you're assuming it doesn't. – Nathaniel Dec 26 '12 at 6:39
Technically speaking, every time you exert a force on planet Earth, you are accelerating it. Take this case for example:
Earth spins counter-clockwise, or towards the East as it would be observed on Earth. Thus, should you exert a force in the same motion (walking West, applying a force on the planet to the East), you would be increasing the rate at which Earth spins (albeit this is a TINY increase). In the same way, walk to the East and you'll apply a tangential force towards the West that actually slows Earth's rotation slightly.
This behavior would obviously scale up for more massive objects exerting larger forces, resulting in the slight decrease apparently caused by the power plants you mentioned.
Back to your case for building a plant to harness this energy, though... There are currently no plans to try and harness this energy, as the facility required to do it would likely be ridiculously impractical.
what is impractical on tidal power plants? :) – daniel.sedlacek Dec 21 '12 at 22:22
When you start walking you push the Earth in one direction, but when you stop walking you push it in the opposite direction. You cannot change the total angular momentum unless you transfer it to another object away from the Earth. The tides do this by exerting a non-constant gravitational force on the moon, but most ways of generating power do not do this, so they don't affect the total angular momentum at all. – Nathaniel Dec 26 '12 at 7:03
It's only counter-clockwise for those of the northern-hemisphere persuasion ;-) – Mike Dunlavey Dec 26 '12 at 14:49
# Math Help - Basic Calculus HELP!!!!!!!! (:-)
1. ## Basic Calculus HELP!!!!!!!! (:-)
Find the gradient of the tangent to the curve $f(x) = \frac{x^3}{2} - 5x^2$ at $x = 2$.
Find the coordinates of the points on the curve $f(x) = (3x - 2)^2$ at which the tangent is parallel to the line $y - 6x = 8$.
Find the coordinates of the points on the curve $y = x^3 - x^2 + 5x - 3$ at which the tangent is perpendicular to $x + 12y = 38$.
Find the equation of the curve $y = f(x)$ given $f'(x) = 4x^2 - x + 1$ and $f(0) = 3.$
2. Originally Posted by mibamars
Find the equation of the curve $y = f(x)$ given $f'(x) = 4x^2 - x + 1$ and $f(0) = 3.$
$f'(x) = 4x^2 - x + 1$ and $f(0) = 3.$
$f(x) = (4/3)x^3 - (1/2)x^2 + x + c$
$f(0) = 3 = (4/3)0^3 - (1/2)0^2 + 0 + c$
Therefore, $c = 3$
$f(x) = (4/3)x^3 - (1/2)x^2 + x + 3$
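A quick numeric check of this antiderivative in Python (central differences approximate the derivative; the sample points are arbitrary):

```python
# f(x) = (4/3)x^3 - (1/2)x^2 + x + 3, recovered above from
# f'(x) = 4x^2 - x + 1 with f(0) = 3.
def f(x):
    return (4 / 3) * x**3 - 0.5 * x**2 + x + 3

def fprime(x):
    return 4 * x**2 - x + 1

assert f(0) == 3.0                       # the initial condition holds

h = 1e-6
for x in (-2.0, 0.0, 1.0, 3.5):          # a few sample points
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative
    assert abs(numeric - fprime(x)) < 1e-4

print("antiderivative checks out")
```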
3. Originally Posted by mibamars
(1) Find the gradient of the tangent to the curve $f(x) = \frac{x^3}{2} - 5x^2$ at $x = 2$
(2) Find the coordinates of the points on the curve $f(x) = (3x - 2)^2$ at which the tangent is parallel to the line $y - 6x = 8$.
(3) Find the coordinates of the points on the curve $y = x^3 - x^2 + 5x - 3$ at which the tangent is perpendicular to $x + 12y = 38$
(4) Find the equation of the curve $y = f(x)$ given $f'(x) = 4x^2 - x + 1$ and $f(0) = 3.$
Hello,
to (1):
Calculate the first derivative of f: $f'(x) = \frac32 x^2 - 10x$ and plug in x = 2. That means you are calculating $f'(2) = -14$
to (2):
Rearrange the equation of the given line to: $y = 6x+8$. The slope of the line is m = 6. If the tangent is parallel to this line, its slope must be 6 too. That means you are looking for those values of x for which f'(x) = 6.
$f'(x) = 2(3x-2) \cdot 3 = 18x - 12$
$18x-12 = 6~\iff~ x = 1$. Plug this value into the equation of the function. You'll get the coordinates of the tangent point as T(1, 1).
to (3):
Rearrange the equation of the line $x + 12y = 38~\iff~ y = -\frac1{12} x + \frac{19}6$. Therefore the perpendicular direction has the slope m = 12. Thus you are looking for those values of x where f'(x) = 12:
$f'(x) = 3x^2-2x+5$
$3x^2-2x+5 = 12~\iff~x = \frac13 \pm \frac13 \cdot \sqrt{22}$
You have to plug these two values of $x$ into the original equation of the function to get the $y$-coordinates of the tangent points.
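All three tangent computations in this answer can be spot-checked numerically; a short Python sketch (central differences stand in for the exact derivatives):

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# (1): f(x) = x^3/2 - 5x^2 has gradient -14 at x = 2
f1 = lambda x: x**3 / 2 - 5 * x**2
assert abs(deriv(f1, 2.0) - (-14)) < 1e-4

# (2): f(x) = (3x - 2)^2, tangent parallel to y = 6x + 8 at T(1, 1)
f2 = lambda x: (3 * x - 2) ** 2
assert abs(deriv(f2, 1.0) - 6) < 1e-4 and f2(1.0) == 1.0

# (3): y = x^3 - x^2 + 5x - 3 has slope 12 at x = 1/3 +/- (1/3)*sqrt(22)
f3 = lambda x: x**3 - x**2 + 5 * x - 3
for x in ((1 + math.sqrt(22)) / 3, (1 - math.sqrt(22)) / 3):
    assert abs(deriv(f3, x) - 12) < 1e-4

print("all tangent computations verified")
```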
# What notions are used but not clearly defined in modern mathematics? (MathOverflow)

**Question (asked by kakaz):**

> "Everyone knows what a curve is, until he has studied enough mathematics to become confused through the countless number of possible exceptions." (Felix Klein)

What notions are used but not clearly defined in modern mathematics?

To clarify the purpose of the question, here is another quote, by M. Emerton: "It is worth drawing out the idea that even in contemporary mathematics there are notions which (so far) escape rigorous definition, but which nevertheless have substantial mathematical content, and allow people to make computations and draw conclusions that are otherwise out of reach." The question is about examples of such notions.

**Answer by Henry Towsner:** In proof theory, the notion of a "natural well-ordering" comes up, but isn't (perhaps can't be) defined formally. In a similar vein, I'm told that inner model theorists were proving results about "the core model" for decades without having a precise definition of what it was.

**Answer by Frank:** The definition of a number is kind of fuzzy. Is aleph-null a number?

**Answer by Gerry Myerson:** For a number of years, different authors were using different definitions of "chaos", but I think that has settled down now. "Quantum group" may be a good answer. If Wikipedia can be trusted on this issue, "In mathematics and theoretical physics, the term quantum group denotes various kinds of noncommutative algebra with additional structure. In general, a quantum group is some kind of Hopf algebra. There is no single, all-encompassing definition, but instead a family of broadly similar objects."

**Answer by Emerton:** One of the most important contemporary mathematical concepts without a rigorous definition is quantum field theory (and related concepts, such as Feynman path integrals). Note: as noted in the comments, there is a branch of pure mathematics, constructive field theory, devoted to making rigorous sense of this problem via analytic methods. I should add that there is also a lot of research devoted to understanding various aspects of field theory via (higher) categorical points of view. But (as far as I understand), there remain important and interesting computations that physicists can make using quantum field theoretic methods which can't yet be put on a rigorous mathematical basis.

**Answer by Alex Bartel:** I'm not sure how well this fits the bill, but in algebraic geometry and number theory, the notion of mixed motives is still undefined, although people have a fairly good idea of what properties they want the category of mixed motives to have.

**Answer by Jeff Adler:** The set of equivalence classes of irreducible, smooth representations of a reductive $p$-adic group $G$ should be partitioned into finite subsets called $L$-packets. Each $L$-packet should correspond to a Langlands parameter, but since this correspondence remains conjectural, $L$-packets are not defined in general. In some important cases, one knows exactly what the $L$-packets are. For example, if $G$ is a general linear group, then the $L$-packets are singletons. For other groups, there are some properties that $L$-packets are believed to satisfy, but that's not a definition.

**Answer by Harry Gindi:** The notion of canonicity (with respect to maps and objects) has thus far evaded attempts by mathematicians to formalize it. If I remember correctly, Bourbaki tried to give it a definition based on some ideas of Chevalley, but, at least to my knowledge, it was deleted from later drafts of the Elements because it was not a particularly useful notion (or perhaps it just didn't work out; there was a thread on MO asked by Kevin Buzzard about this particular section of Bourbaki, and maybe you could find more details there). Jim Dolan more recently tried to give a definition of a canonical transformation between functors, but his notion is essentially that of a transformation that is natural when restricted to the core groupoid. However, this doesn't really capture all of the cases that we want, and I don't know of any serious attempt to make use of the notion.

**Answer by Michael Greinecker:** In Leo Corry's book *Modern Algebra and the Rise of Mathematical Structures*, he chronicles how mathematicians have tried to give a formal definition of **structure** via lattice theory, Bourbaki's set-theoretic structures, and category theory. At least according to Corry, the concept is still elusive and not really captured by any of the attempts.

**Answer by lvb:** So-called stiff ODEs (http://en.wikipedia.org/wiki/Stiff_equation) might qualify. In the literature one finds plenty of different attempts to define the notion of a *stiff* initial value problem for an ODE, some of them more, some less precise, and they all try to capture the phenomenon of rapid step-size decrease when numerically integrating some IVPs with explicit schemes, whereas some implicit schemes do very well without slowing down significantly. In fact, some authors use *this* as the definition of a stiff IVP.

**Answer by kakaz:** The field with one element, $F_1$. (Suggested by Georges Elencwajg in the meta discussion http://meta.mathoverflow.net/discussion/968/notions-used-but-not-rigorously-defined/.)

**Answer by darij grinberg:** Left/right derived functors. If $F$ is an additive functor from a category $A$ to another category $B$, then the left/right derived functors of $F$ go from $A$ to... where? Not to $B$ certainly, because this would require global choice on $A$ or break canonicity. There seem to be solutions nowadays, with the notions of derived categories and anafunctors (see the nLab). Unfortunately, there seems to be no introductory text yet which would systematically develop homological algebra in a clean way, without cheating and speculating over one's head. I am more than glad to be proven wrong... PS. This might be what Harry Gindi is referring to.

**Answer by kakaz:** The notion of calculability: "A function of positive integers is **calculable** only if recursive." That is, a calculable function (in an objective meaning) as used in the Church-Turing Thesis (http://plato.stanford.edu/entries/church-turing/).

**Answer by kakaz:** Maybe the situation in matroid theory, where there are several strict axiomatization schemes whose equivalence is not easy to prove, is interesting here. Probably there should be some generalization which would tie these different approaches into one, more or less obvious notion. There is even terminology connected with that phenomenon, due to G.-C. Rota: cryptomorphism (http://en.wikipedia.org/wiki/Cryptomorphism).

**Answer by mathphysicist:** Not only is the notion of chaos not well-defined (cf. the answer of Gerry Myerson), but the same holds true for its opposite: there is no universally accepted definition of integrable system yet.

**Answer by Gil Kalai:** I have three (somewhat related) examples:

1) The notion of **explicit construction**. Seeking explicit constructions to replace non-constructive existence proofs is an old endeavor. Computational complexity offers, in some cases, formal definitions (constructions that can be done in P or in polylog space), but these definitions are slightly controversial. In any case, people looked for explicit constructions before any explicit definition for the term "explicit construction" was known.

2) The notion of **effective bounds/proofs**. There are many important problems about replacing a proof giving non-effective bounds with a proof giving effective bounds. Usually I can understand a specific such problem, but the general notion of effectiveness is not clear to me. (A famous example: effective proofs for the Thue-Siegel-Roth theorem.)

3) **Elementary proofs**. I remember that finding elementary proofs for the prime number theorem was a major goal. I was told what this means many times, and in a few of those I even understood. But the notion of an "elementary" proof in analytic number theory remained quite vague for me.

**Answer by Douglas Bowman:** The notion of a $q$-analogue in enumerative combinatorics.

**Answer by Andres Caicedo:** There are several examples in set theory; the three I mention are related, so I will include them in a single answer rather than three.

1) *Large cardinal notion.* I have seen in print many times that there is no precise definition of what a large cardinal is, but I must disagree, since "weakly inaccessible cardinal" covers it. Of course, if you retreat to set theories without choice then there may be some room for discussion, but this is a technical point. People seem to mean something different when they say that large cardinal is not defined. It looks to me like they mean that the word should be used in reference to *significant* signposts within the large cardinal hierarchy (such as "weakly compact", "strong", but not "the third Mahlo above the second measurable") and, since "significant" is not well defined, then... However, it seems clear that nowadays we are more interested in large cardinal *notions* rather than the large cardinals per se. To illustrate the difference, "$0^\sharp$ exists" is obviously a large cardinal notion, but I do not find it reasonable to call it (or $0^\sharp$) a large cardinal. And *large cardinal notion* is not yet a precisely defined concept. A very interesting approximation to such a notion is based on the hierarchy of *inner model operators* studied by Steel and others. But their meaningful study requires somewhat strong background assumptions, and so many of the large cardinal notions at the level of $L$ or "just beyond" do not seem to be properly covered under this umbrella.

2) *The core model.* This was mentioned by Henry Towsner. I do not think it is accurate that we were proving results about it without a precise definition. What happens is that all the results about it have additional assumptions beyond ZFC, and we would like to be able to remove them. More precisely, we cannot show its existence without additional assumptions, and these additional assumptions are also needed to establish its basic properties. The core model is intended to capture the "right analogue" of $L$ based on the background universe. If the universe does not have much large cardinal structure, this analogue is $L$ itself. If there are no measurable cardinals in inner models, the analogue is the Dodd-Jensen core model, and the name comes from their work. Etc. In each situation we know what broad features we expect the core model to have (this is the "not clearly defined" part). Once in each situation we formalize these broad features, we can proceed, and part of the problem is in showing its existence. Currently, we can only prove it under appropriate "anti-large cardinal assumptions", saying that the universe is not too large in some sense. One of the issues is that we want the core model to be a fine structural model, but we do not have a good inner model theory without anti-large cardinal assumptions. Another more serious issue is that as we climb through the large cardinal hierarchy, the properties we can expect of the core model become weaker. For example, if $0^\sharp$ does not exist, we have a full covering lemma. But this is not possible once we have measurables, due to Prikry forcing. We still have a version of it (weak covering), and this is one of the essential properties we expect. (There are additional technical issues related to *correctness*.) But it is fair to expect that as we continue developing inner model theory, we will find that our current notions are too restrictive. As a technical punchline, currently the most promising approach to a general notion seems to be in terms of Sargsyan's hod models. But it looks to me like this will only take us as far as determinacy or universal Baireness can go.

3) *Definable sets of reals.* We tend to say that descriptive set theory studies definable sets of reals as opposed to arbitrary such sets. This is a useful but not precise heuristic. It can be formalized in wildly different ways, depending on context. A first approximation to what we mean is "Borel", but this is too restrictive. Sometimes we use definability in terms of the projective hierarchy. Other times we say that a definable set is one that belongs to a natural model of ${\sf AD}^{+}$. But it is fair to say that these are just approximations to what we would really like to say.

**Answer by Pablo Shmerkin:** Surprised nobody mentioned **fractal** yet. (Chaos has been mentioned, but the connection is tenuous.) No satisfactory definition of fractal exists. Mandelbrot tentatively defined a fractal as a set whose Hausdorff dimension is strictly larger than its topological dimension. But this leaves out many sets that most people agree are fractals, and it's hard to extend to other objects (like measures) that one also wants to consider as fractals. Taylor defined a fractal as a set with coinciding Hausdorff and packing dimensions. His goal was to leave out too-irregular objects (for which different concepts of fractal dimension may differ), but according to his definition any smooth object is a fractal, and clearly fractal sets such as Bedford-McMullen carpets are left out. In applied fields, a fractal is often defined as a set having some kind of similarity: small parts are similar to the whole set, perhaps in a statistical or approximate sense. While many fractals arising in practice do enjoy this feature, this is still a very vague definition. Some authors consider *any* set or measure in Euclidean space to be a fractal, when the goal is to study properties typically associated with fractal sets, such as Hausdorff dimension. At the end of the day, there is agreement that giving a universal definition of fractal is impossible, yet it is a useful concept to have around, and people know a fractal when they see it.

**Answer by David Harris:** Infinitesimals are almost in this category. Technically, calculus generally uses limits instead of infinitesimals, and there are logical systems (e.g. nonstandard analysis) in which genuine infinitesimals are rigorously defined. However, people find infinitesimals easier for intuition even in the context of standard analysis. This type of infinitesimal reasoning generally then needs to be transformed into standard proofs.

**Answer by none:** How about the natural numbers $\mathbf N$ and the real numbers $\mathbf R$? If they were both clearly defined, then (for example) mathematicians would agree that the continuum hypothesis has a truth value, even if that value is not known. But there's no such agreement, and some will dispute that CH even has a meaning, much less a truth value. Even keeping it just to $\mathbf N$, all "definitions" that I know of are circular (e.g. the Peano axioms in second-order logic just kick the unclarity up to the level of the predicates that the induction axiom quantifies over). The Hilbert $\omega$-rule is similarly self-referential. Yet we (mostly) agree that all arithmetic formulas (even, say, $\Pi^0_{100}$ formulas) *do* have truth values, that there's a (not effectively describable) first-order theory of "true arithmetic", etc. It just comes back to "the naturals, I mean the ordinary naturals, you know, 0, 1, 2, 3...", which comes across as a little bit faith-based ;-).

**Answer by John Sidles:** In response to Colin Tan's request (below), I have posted these remarks as the TCS StackExchange question "Do the undecidable attributes of P pose an obstruction to deciding P versus NP?" That a mathematical idea be "clearly defined" is itself an idea that perhaps could be more clearly defined... one candidate for a more rigorous assertion is that a mathematical intuition be formally *decidable*. Moreover, widespread intuitions that are eventually proved to be decidable versus undecidable have an illustrious history in mathematics. These reflections suggest that this community wiki's question would be better posed mathematically (and might perhaps be more useful too) if it were amended to read: "What intuitions are commonly embraced and/or have proved to be broadly useful, but nonetheless are formally *undecidable*, in modern mathematics?" One specific example that comes to mind is Emanuele Viola's theorem, with its implication that the set of Turing machines associated to P has no decidable runtime ordering. Viola's proof of undecidability was eye-opening to me, and it has filled the valuable role of leading me to wonder "What else is out there?" To show the utility of these reflections, Section 1.5.2 of Sanjeev Arora and Boaz Barak's well-respected textbook *Computational Complexity: A Modern Approach* is titled "Criticisms of P and some efforts to address them". I have often wished that Arora and Barak had written more on this theme. With the help of Viola's theorem, this wish becomes specific and rigorous: a section titled "What properties of P are not *decidable* in modern mathematics?" No doubt many more examples of "undecidable intuitions of modern mathematics" could be posted, and it would be great fun to read other people's examples. However, it seems inappropriate to amend the topic of a community wiki in such a fundamental respect, and so I am posting this amended question as a suggested general "answer" instead.

Partially in response to Colin Tan's request (in the comments), I have posted on TCS StackExchange the specific question "What is the proper role of verification in quantum sampling, simulation, and extended-Church-Turing (E-C-T) testing?". More broadly, on Lance Fortnow's weblog, under the topic "75 Years of Computer Science", the question is raised: "Do there exist languages $L$ that are recognized solely by those Turing machines in $P$ whose runtime exponents are undecidable? Can examples of these machines and languages be finitely constructed?" ... but I am not (yet) prepared to post this as a MathOverflow and/or TCS StackExchange question. Thanks and appreciation are extended to Colin.

**Answer by Michael Hardy:** Some people claim to have defined the concept of "closed form" fully and precisely. Have they?

**Answer by Nilima Nigam:** 'Applied Mathematics' is a much-used term in modern mathematics, but I've yet to find a universally agreed-upon definition. Given its use as a major category ('pure' vs 'applied') and repository of sundry generalizations ('non-rigorous', 'relevant', 'not deep', 'critical to science', etc.), surely a precise definition is in order. In the MSC, there is only one MSC code with this phrase (00A69). Based on this, maybe 'Applied Mathematics' is a field of inquiry which is not important.

**Answer by Roland Bacher:** The different domains of mathematics (analysis, geometry, algebra, probability, combinatorics, etc.) are not completely clearly defined and overlap. This is of course not a problem when doing mathematics, but sometimes it is when trying to categorize some work.

**Answer by Hans Stricker:** Relating to the answer on fractals: what about the notion of **dimension** itself (let alone fractal dimensions)?
(At least <a href="http://en.wikipedia.org/wiki/Dimension" rel="nofollow">Wikipedia</a> doesn't give a concise general definition of "dimension".)</p> http://mathoverflow.net/questions/56677/what-notions-are-used-but-not-clearly-defined-in-modern-mathematics/76713#76713 Answer by Adam Bjorndahl for What notions are used but not clearly defined in modern mathematics? Adam Bjorndahl 2011-09-29T01:04:26Z 2011-09-29T01:04:26Z <p>The notion of a <strong>solution concept</strong> in game theory. Although the most famous example of such---Nash equilibrium---is rigourously defined, as are several others (correlated equilibrium, rationalizability, sequential equilibrium, etc.), there is no satisfactory general definition of the <em>type</em> of object of which these are tokens. Indeed, the purported definition that appears in <a href="http://en.wikipedia.org/wiki/Solution_concept" rel="nofollow">this Wikipedia article</a> is, in a sense, as far from informative as it could be without incurring a type mismatch.</p> http://mathoverflow.net/questions/56677/what-notions-are-used-but-not-clearly-defined-in-modern-mathematics/76822#76822 Answer by David Harris for What notions are used but not clearly defined in modern mathematics? David Harris 2011-09-30T02:13:55Z 2011-09-30T02:13:55Z <p>In analysis, the concept of a limit at infinity vs. a limit at a real number $r$.</p> <p>Typically, there is a whole list of definitions of various limits $\lim_{x \rightarrow a} f(x) = b$, depending on whether $a$ and $b$ are ordinary reals or $\pm \infty$. You may have 9 separate definitions of the limit, one for each case. This situation repeats itself any time a limit is used implicitly, for example if an integral converges to a real or to $\pm \infty$, a series converges, and so on. </p> <p>Everyone knows that these definitions are really the same, but it seems more cumbersome to have a single unified definition than to have separate definitions that are, informally, the same concept. 
It is this covert "intuitive sense" in which all the definitions are the same that is not clearly defined.</p> http://mathoverflow.net/questions/56677/what-notions-are-used-but-not-clearly-defined-in-modern-mathematics/76850#76850 Answer by Jacques Carette for What notions are used but not clearly defined in modern mathematics? Jacques Carette 2011-09-30T12:31:23Z 2011-09-30T12:31:23Z <p>I asked about <a href="http://mathoverflow.net/questions/29413/defining-variable-symbol-indeterminate-and-parameter" rel="nofollow">Defining variable, symbol, indeterminate and parameter</a> previously on MO, and did not get any satisfying answers for all these concepts. The one exception is that of variable (and meta-variable) where Neel gave good pointers.</p> http://mathoverflow.net/questions/56677/what-notions-are-used-but-not-clearly-defined-in-modern-mathematics/84683#84683 Answer by Buschi Sergio for What notions are used but not clearly defined in modern mathematics? Buschi Sergio 2012-01-01T14:12:17Z 2012-01-01T14:12:17Z <p>This is mine opinion:</p> <p>The mathalogic of usual language (that come from Aristotele) when we argue about a proof, expecially "reductio ad absurdum" rule, that is a implicit faith act on the coherence of what we are speaking. THe use of a relation (as $\in$) before define relation in the set theory contest, and using a "naive natural finite quantity manipulation" before bulding the theory of set or some well formalized mathematics theory. </p> <p>ABout no too professional mathematics:</p> <p>concept and existence of points, and spaces as a set of pont.</p> <p>Concept of natural number set $\mathbb{N}$ as a "infinite in act" (quite it all gived) and no as "ever could add some to it" (infinite as ever continuing bulding), infact the passage to "infinite in act" need Peano axioms. </p>
# Angle of elevation and depression
### Angle of elevation and depression
#### Lessons
When you look at an object above you, the angle between the horizontal and your line of sight to the object is called the angle of elevation.
When you look at an object below you, the angle between the horizontal and your line of sight to the object is called the angle of depression.
• Introduction
Introduction to application to bearings – angle of elevation/depression
a)
Angle of elevation
b)
Angle of depression
• 1.
Analyze the Use of Angle of Elevation or Depression
A house was built next to a mountain. The angle of depression from the top of the mountain to the house is 12°. If the mountain is 800m tall, how far is the house from the mountain?
• 2.
Billy is standing 150m away from a wind turbine. His eye level is 2m from the ground. If his angle of elevation is 25° to the top of the turbine, determine the height of the wind turbine.
• 3.
Evaluate the Height of An Object With No Angle of Elevation or Depression Given
Nelson, whose height is 1.8m, is standing 12m away from a light pole. If he casts a shadow 15m away from the pole, what is the height of the light pole?
• 4.
Determine the Height of An Object Using the Angle of Elevation AND Depression
Buildings X and Y are 240m from each other. From the roof of building X, the angle of elevation to the top of building Y is 26° and the angle of depression to the base of building Y is 32°. How tall is each building?
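For reference, the tangent-ratio calculations behind these exercises can be sketched in Python (variable names are mine; the numeric values are approximate and rounding may differ from an answer key):

```python
import math

# Problem 1: angle of depression 12 degrees, mountain height 800 m
d1 = 800 / math.tan(math.radians(12))       # distance to house, about 3763.7 m

# Problem 2: 150 m away, eye level 2 m, angle of elevation 25 degrees
h2 = 150 * math.tan(math.radians(25)) + 2   # turbine height, about 71.9 m

# Problem 3: similar triangles (no angle given):
# pole height / 15 = Nelson's height / his shadow length (15 - 12)
h3 = 1.8 * 15 / (15 - 12)                   # pole height = 9.0 m

# Problem 4: buildings 240 m apart, elevation 26 degrees, depression 32 degrees
hX = 240 * math.tan(math.radians(32))       # height of building X, about 150.0 m
hY = hX + 240 * math.tan(math.radians(26))  # height of building Y, about 267.0 m
```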
# How do you prove sin(x+y)*sin(x-y)=(sinx)^2-(siny)^2?
Apr 16, 2015
Using the angle sum and difference formulas for sine on the left-hand side:
$\left(\sin x \cos y + \cos x \sin y\right) \left(\sin x \cos y - \cos x \sin y\right) =$
${\sin}^{2} x {\cos}^{2} y - {\cos}^{2} x {\sin}^{2} y =$
$= {\sin}^{2} x \left(1 - {\sin}^{2} y\right) - \left(1 - {\sin}^{2} x\right) {\sin}^{2} y =$
$= {\sin}^{2} x - {\sin}^{2} x {\sin}^{2} y - {\sin}^{2} y + {\sin}^{2} x {\sin}^{2} y =$
$= {\sin}^{2} x - {\sin}^{2} y$
which is the right-hand side.
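As a quick numerical spot-check of the identity (not a substitute for the algebraic proof above):

```python
import math

# Spot-check sin(x+y)*sin(x-y) = sin(x)^2 - sin(y)^2 at a few sample points
for x, y in [(0.3, 1.1), (2.0, -0.7), (5.5, 3.3)]:
    lhs = math.sin(x + y) * math.sin(x - y)
    rhs = math.sin(x) ** 2 - math.sin(y) ** 2
    assert abs(lhs - rhs) < 1e-12
```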
# Letters to numbers to letters (poorly)
Every now and again, I'll encounter a large string of alphabetical text encoded as numbers from 1-26. Before I knew that I could just google a translator online, I had to personally, painstakingly copy and paste the entirety of the encoded text into a text document and translate it myself.
Thankfully, I did know about find-and-replace features, which I used to search for all copies of each number and convert them to their corresponding characters. Only one problem... If you do this in numerical order, it will come out garbled.
This is because the single digit numbers used to encode a-i also appear as part of the remaining two digit numbers encoding j-z. For example, 26 should decode to z, but since both 2 and 6 are less than 26, we decode those first. 2 becomes b, and 6 becomes f, so 26 becomes bf.
Note also that for 10 and 20, there is no conversion for 0, so we end up with a0 and b0 respectively.
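A straightforward Python sketch of this naive decode (the function name is mine, for illustration): because the single digits 1-9 are replaced first in numerical order, every digit of the concatenated encoding is mapped independently, with 0 left untouched.

```python
def garble(s):
    # Encode: each letter -> its 1-based alphabet index, all concatenated
    digits = "".join(str(ord(c) - 96) for c in s)
    # Decode naively: each digit 1-9 independently becomes a-i; 0 stays 0
    return "".join("0abcdefghi"[int(d)] for d in digits)

print(garble("xyzzy"))  # -> bdbebfbfbe
print(garble("jobat"))  # -> a0aebab0
```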
## Challenge
Given a string of alphabetical characters (abcdefghijklmnopqrstuvwxyz), output the result of converting to numbers and back again using the algorithm described above. You may assume input is all lowercase or all uppercase. This is code-golf, so shortest code in bytes wins.
Important: You may not zero index.
Full conversion list:
a: a
b: b
c: c
d: d
e: e
f: f
g: g
h: h
i: i
j: a0
k: aa
l: ab
m: ac
n: ad
o: ae
p: af
q: ag
r: ah
s: ai
t: b0
u: ba
v: bb
w: bc
x: bd
y: be
z: bf
## Examples
xyzzy => bdbebfbfbe
caged => caged
jobat => a0aebab0
swagless => aibcagabeaiai
• sorry bout the sudden title change. Turns out im stutpid Jul 21 at 16:42
• Not really impacting the question, but if you don't do it in numerical order it will still come out garbled unless you have separated your numbers somehow. For instance if you go in reverse order you will replace 26 while these may actually have been a 2 and a separate 6. Jul 22 at 19:34
# Retina 0.8.2, 32 bytes
[j-s]
a$&
[t-z]
b$&
Tj-z0a-i0l
Try it online! Link includes test cases. Explanation:
[j-s]
a$&
[t-z]
b$&
Precede the letters j-z with an a or b as appropriate.
Tj-z0a-i0l
Translate them to 0 or a-i as appropriate. (l expands to a-z but only a-f get used as the replacement list gets truncated to the length of the translation list.)
# Vyxal, 8 bytes
øAṅ₄ɾkaĿ
Try it Online!
# JavaScript (Node.js), 70 bytes
s=>s.replace(r=/./g,c=>parseInt(c,36)-9).replace(r,i=>'0abcdefghi'[i])
Try it online!
# R, 75 bytes
\(s,[=gsub)chartr("j-z","0a-i0a-i","([j-s])"["a\\1","([t-z])"["b\\1",s]])
Attempt This Online!
Straightforward replacement.
### R, 84 81 bytes
Edit: -3 bytes thanks to @Dominic van Essen.
\(s)gsub("",0,intToUtf8(strtoi(unlist(strsplit(paste(utf8ToInt(s)-96),"")))+96))
Attempt This Online!
Going with the algorithm.
• as.double -> strtoi for the second one... Jul 22 at 11:53
• See here for a shorter approach: codegolf.stackexchange.com/a/250188/55372 Jul 22 at 14:25
# Charcoal, 13 bytes
⭆⭆S⊕⌕βι§⁺β0⊖ι
Try it online! Link is to verbose version of code. Explanation:
S Input string
⭆ Map over characters and join
⌕ Index of
ι Current character
β In lowercase alphabet
⊕ Incremented
⭆ Map over characters and join
β Lowercase alphabet
⁺ Concatenated with
0 Literal string 0
§ Indexed by
ι Current character
⊖ Decremented (auto-casts to integer)
# JavaScript (ES6), 71 bytes
s=>s.replace(/[j-z]/g,c=>"ab"[c>'s'|0]+"abcdefghi0"[parseInt(c,36)%10])
Try it online!
# Burlesque, 25 bytes
)**96?-imXX@azr@'0+]jsi\[
Try it online!
)** # Map ord(a)
96?- # Subtract 96 (get index into alphabet)
imXX # Concat and separate by digit
@azr@ # Alphabet [a,b,..,z]
'0+] # Prepend [0,a,b,..,z]
jsi # Select indices
\[ # Concat
# Python 3, 79 bytes
lambda s:s.translate({n:'AB'[n>83]+'0ABCDEFGHI'[n%74%10]for n in range(74,91)})
Try it online!
# Perl

s/./-96+ord$&/ge;y/1-9/a-i/

Try it online!

# 05AB1E, 10 8 bytes

A0ìDIkSè

I/O as lowercase list of characters. -2 bytes thanks to @CommandMaster

Explanation:

A   # Push the lowercase alphabet
0ì  # Prepend a leading "0"
D   # Duplicate it
I   # Push the input character-list
k   # Get the (0-based) index of each character in the "0abc...xyz" string
S   # Convert it to a flattened list of digits
è   # (0-based) index each into the "0abc...xyz" string
    # (after which the result is output implicitly)

If None is a valid output for 0, builtin .b could be used for an alternative 8-byter in the legacy version of 05AB1E, with uppercase character-lists as I/O: try it online. (The .b results in "@" in the new version of 05AB1E for 0: try it online.)

• A0šDIkSè seems to work for 8 bytes? Jul 22 at 12:43
• @CommandMaster Ah, of course.. Why not just prepend instead of append so fixing the 0-based to 1-based indexing manually with >/< isn't necessary. It looks so simple now that I see it.. Thanks. Jul 22 at 13:28

# Pyth, 16 bytes

sm@+G0tsdjkmhxGd

Try it online!

# Husk, 14 bytes

m!₁ṁod€₁
…"az0

Try it online!

Inspired by Kevin Cruijssen's approach, although it comes out a bit longer in Husk...

…    # Fill-in the gaps with character ranges
"az0 # in the string "az0"
     # (fill-in works both backwards & forwards, resulting in:
     # "abcde...vwxyzyxwv...edcba_^]\...43210")
     # = assign this as '₁'
ṁo   # Now, for each input letter
€₁   # get the index of first occurence in ₁ (=index in alphabet)
d    # and split the decimal digits;
m    # Then, for each digit
!₁   # retrieve the element in ₁ using modular 1-based indexing
     # (so zero retrieves the last element)

# Python 3.8 (pre-release), 76 bytes

f=lambda s:s and'ba'[2-(q:=ord(s[0])%32)//10::2]+'0abcdefghi'[q%10]+f(s[1:])

Try it online!

# R, 68 bytes

\(x,y=utf8ToInt(x)-96)intToUtf8(rbind(y/10+96*(y>9),48--y%%10%%-58))

Attempt This Online!

Calculates 2 decimal digits for each letter, and then calculates the corresponding codepoint values, setting unused values (leading zeros) to zero, which is not output by the intToUtf8 function.

# R, 54 bytes

\(x)chartr('1-9','a-i',Reduce(paste0,utf8ToInt(x)-96))

Attempt This Online!

Port of Kjetil S's Perl answer.

# C (GCC), 101 88 87 bytes

f(char*s){for(char*x;*s;)for(asprintf(&x,"%d",*s++-64);*x;)putchar(*x+(*x++^48?16:0));}

Attempt This Online!

Prints to standard out. Uses asprintf, which is a GNU-specific function and may or may not require a definition in the header, so I don't know if it's okay. This is basically the first C function I've ever written, so there are almost certainly ways this can be improved.

-13 bytes from @c-- by iterating one character at a time, removing parens, and changing the ternary
-1 byte from @ceilingcat by moving the char*x declaration into the for statement

# Piet + ascii-piet, 129 bytes (5×27=135 codels)

uqijsvudnbddt feussskiu L?q sa dttlvqfeumdctss?Lr????saa???????????????s Lkjftrqavqcefcnbftmnkss K ssvc

Try Piet online! Input the string in upper-case with the sentinel value @.

Explanation

Read char from input, subtract 64 and check if it is greater than 9.
If c<=9: check if it is greater than 0 (else terminate). Add 64, print char and start over.
If c>9: duplicate, push 10 on stack, duplicate, roll, divide, print char, modulo and check if it is greater than 0.
If c>0: print char (as above) and start over.
If c=0: print number and start over.

# Factor, 53 bytes

[ 96 v-n [ >dec ] map-concat 48 v+n "" "0" replace ]

Needs modern Factor for >dec but here's a version that works on TIO for +1 byte: Try it online!

# Batch, 432 Bytes

SETLOCAL ENABLEDELAYEDEXPANSION&(CMD /U /C^"<NUL SET /P=abcdefghijklmnopqrstuvwxyz^"|FIND /V ""|FINDSTR /N "^")>a&(CMD /U /C^"<NUL SET /P=%s%^"|FIND /V "")>t&FOR /F %%Q in (t)DO (FOR /F "tokens=1-2 delims=:" %%A in ('FINDSTR /RC:"%%Q" a')DO (IF %%A GEQ 20 (SET /Ad=%%A+76&CMD /CEXIT !d:96=48!&<NUL SET /P=b!=exitcodeAscii!) else IF %%A GEQ 10 (SET /Ad=%%A+86&CMD /CEXIT !d:96=48!&<NUL SET /P=a!=exitcodeAscii!) else <NUL SET /P=%%B))

Assuming @ECHO OFF and passing it as CALL golf.bat swagless produces aibcagabeaiai

This is probably not the best way, but I thought that this was a pretty cool way to do it.

Ungolfed:

(CMD /U /C^"<NUL SET /P=abcdefghijklmnopqrstuvwxyz^"|FIND /V ""|FINDSTR /N "^")>a
(CMD /U /C^"<NUL SET /P=%1^"|FIND /V "")>t
FOR /F %%Q in (t) DO (
  FOR /F "tokens=1-2 delims=:" %%A in ('FINDSTR /RC:"%%Q" a') DO (
    IF %%A GEQ 20 (
      SET /Ad=%%A+76
      CMD /CEXIT !d:96=48!
      <NUL SET /P=b!=exitcodeAscii!
    ) else IF %%A GEQ 10 (
      SET /Ad=%%A+86
      CMD /CEXIT !d:96=48!
      <NUL SET /P=a!=exitcodeAscii!
    ) else <NUL SET /P=%%B
  )
)

# Bash, 74 47 bytes

sed "s/[j-s]/a&/g;s/[t-z]/b&/g"|tr j-z 0a-i0a-i

Try it online!

-27 thanks to Neil's idea

# first approach, 74 bytes

sed "s/./\"'&\"/g"|xargs printf '%d-96\n'|bc|sed '/../s/./&\n/'|tr 1-9 a-j

Try it online!

# lin, 41 bytes

$a0,.+10t2/\"rev _"' , ,*10d tro
Try it here!
Decided to go for a different non-regex approach, which came out quite nicely. Turns out that tro surprisingly accepts an iterator of iterators, which saved quite a few bytes.
For testing purposes:
"jobat" ; outln
$a0,.+10t2/\"rev _"' , ,*10d tro ## Explanation Prettified code: $a 0, dup 10t 2/\ ( rev _ ) ' , ,* 10d tro
• $a 0, alphabet with 0 in front as a • $a 0,.+ 10t duplicate and create 0abcdefghi
• 2/\ ( rev _ ) ' 2-digit pairs (00 0a 0b ... ig ih ii)
• , ,* 10d zip with a and remove first 10 pairs ([j a0] [k aa] [l ab] ... [x bd] [y be] [z bf])
• tro transliterate
# Lexurgy, 92 bytes
a:
Naive 1-1 substitution.
# C (GCC), 74 bytes
c;f(char*s){for(;c=*s++%96;putchar((c%10?:-48)+96))c>9&&putchar(c/10+96);}
Attempt This Online!
# Dyalog APL, 32 bytes
{(⎕A,'0')[27@(0∘=)⍎¨∊⍕¨⎕A⍳⍥⎕C⍵]}
Attempt This Online!
⎕A⍳ ⍵ find each character's index in the alphabet
⍥⎕C case insensitively (casefold both arguments first)
⍕¨ turn each number into string
∊ flatten nested array to get the individual digits
⍎¨ turn each digit back into a number
27@(0∘=) turn 0s into 27s
[ ] index into
⎕A the alphabet
,'0' with a 0 at the end at the 27th position
# Positioning
CSS positioning
TAGS: CSS, Cyber Tech, Design, Programming, TECH
CSS properties for CSS Positioning (CSSP) can be applied to particular objects.
The primary CSS properties for CSSP are position and float. Position has four possible values:
• static. This is the default: normal flow, regular old HTML positioning. The left and top properties do not apply.
• relative. This is almost like static except that you can offset the object from its normal HTML position. Relative positioning does reserve the space of the item's original position, i.e. it makes other items flow around the space where the item would have been. Relatively positioned objects moved off screen will not cause scroll bars to appear.
• absolute. This positions items relative to the top left corner (0,0) of the next positioned parent, or, if there isn't one, the body by default. To avoid using the body as the reference, sometimes an absolutely positioned box is set relative to some inline content that has its position property set to relative. Absolute positioning does not reserve the space of the item's original position, i.e. if other items are displayed at the new position, then both will be drawn. The left, right, top, and bottom properties apply. Absolutely positioned objects moved off screen will cause scroll bars to appear. Absolutely positioned objects have padding and borders but no margins.
• fixed. Like absolute except that the box is fixed in respect to the view port. EGs: The box does not move when the page is scrolled or is at a specific spot on each printed page. Not implemented by MS.
Once the position property is set to absolute or relative, set at least one of these style properties: bottom, left, right, and top. Positioned objects can overlap or under-lap objects at the new position using the z-index property.
The objects that support CSSP depend on the browser:
• Netscape Navigator 4:
• Can apply relative positions to X, where X equals block elements, spans, and tables.
• Can apply relative or absolute positions to X.
• The only container recognized is the browser window itself.
• Microsoft Internet Explorer 4:
• Can apply relative positions to any page element.
• Can apply absolute positions to X, images, applets, objects, buttons, input elements, select lists, and text areas. IE can also apply absolute positioning to these elements unique to IE: fieldsets and iframes.
• The containers recognized include the browser window itself, and divisions, including nested divisions.
Fractions of block elements cannot be positioned. Either the whole block element gets positioned or nothing.
Block formatting involves boxes (elements, tags) that cause line breaks before and after itself, i.e. boxes are laid out vertically. Examples include <p>, <h1>, lists, and tables. Inline formatting involves boxes that are laid out horizontally. EG: Divisions and spans both mark sections of web pages but divisions (<div>) are block elements and spans (<span>) are not. That is, a span can be smoothly included within a paragraph but a division would add line breaks. For more see my section on HTML.
A float is a box that is shifted to the left or right on the current line with the float property. The width property of a floated box must be specified. The content of a float may be either a block or an inline block, but the float property generates a block box. Content flows on the other side of a float, i.e. a box floated left will be shifted to the left and have content flowing on its right. When an object clears a side with the clear property, it increases its top margin until it clears any objects floated on the specified side. The order of floated boxes is relevant. EG: If 2 boxes are both floated left, then the 2nd one will be displayed leftmost.
There are many tricks that can be done with CSSP. [I used to have some examples but I made them long ago and they're way out of date. Plus there are many other good examples on the net.]
GeorgeHernandez.com. Some rights reserved.
High Temperature Hardness Testing on alloys
using MFT5000
Characterizing mechanical properties at high temperatures is critical in applications such as engines, turbines, aerospace, power plant components, hot rolling, ceramics, and metallic alloys. Phenomena such as phase transformations and changes in the solubility of alloys can significantly alter the mechanical properties of components used in high-temperature applications.
# Proof By Induction Division Questions, Answers and Solutions
a - answer s - solution v - video
## Question 1
Prove by mathematical induction:
a) $6^n+4$ is divisible by $5$ for all $n\ge0$ and $n\in\mathbb{Z}^+$a s v
b) $8^n-1$ is divisible by $7$ for all $n\ge1$ and $n\in\mathbb{Z}^+$a s v
c) $5^n-1$ is divisible by $4$ for all $n\ge0$ and $n\in\mathbb{Z}^+$a s v
d) $n^3-7n+9$ is divisible by $3$ for all $n\ge0$ and $n\in\mathbb{Z}^+$a s v
e) $8^n-3^n$ is divisible by $5$ for all $n\ge1$ and $n\in\mathbb{Z}^+$a s v
f) $5^n+2\times11^n$ is divisible by $3$ for all $n\ge0$ and $n\in\mathbb{Z}^+$a s v
## Question 2
Prove by mathematical induction:
a) $x^n-y^n$ can be divided by $x-y$ for $n\geq1$, $x\ne y$ and $n\in\mathbb{Z}^+$a s v
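Before writing the induction proofs, the divisibility claims in Question 1 can be sanity-checked numerically. This is not a proof, just a quick empirical check of the statements for small n:

```python
# Each pair is (expression as a function of n, claimed divisor)
claims = [
    (lambda n: 6**n + 4, 5),
    (lambda n: 8**n - 1, 7),
    (lambda n: 5**n - 1, 4),
    (lambda n: n**3 - 7*n + 9, 3),
    (lambda n: 8**n - 3**n, 5),
    (lambda n: 5**n + 2 * 11**n, 3),
]
for f, d in claims:
    assert all(f(n) % d == 0 for n in range(50))
print("all claims hold for n = 0..49")
```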
# Compatible Iwasawa decomposition for embedding of the orthogonal Lie group

Question (Eachara Donk, 2010-09-07): I am looking for an embedding of the orthogonal Lie group O(n,C) into GL(m,C) such that the standard Iwasawa decomposition (also known as the QR-decomposition) for the group GL(m,C) induces an Iwasawa decomposition for the group O(n,C). Recall that the standard Iwasawa decomposition for the general linear group is GL(m,C) = U(m)R, with R being the subgroup of upper-triangular matrices with positive real diagonal entries, and U(m) the unitary subgroup.

Answer by Aakumadula (2012-12-08): There are general theorems for an arbitrary semi-simple subgroup of $GL_n({\mathbb C})$ which do this. I will work this out for $O(n,{\mathbb C})$ when $n=2m$ is even.

Fix the standard inner product with respect to which the unitary group $U(n)$ is defined. View $O(n)=O(2m)$ as the subgroup which fixes the quadratic form

$$Q=x_1x_{2m}+x_2x_{2m-1}+\cdots+x_mx_{m+1}.$$

All this is explained in Professor Jim Humphreys's book on Lie algebras.

Then the Iwasawa decomposition of $GL_n({\mathbb C})$ restricted to $O(n)$ gives an Iwasawa decomposition on $O(n)$.
Find the smallest natural number not in a given set of integers
What is the computational complexity of this task? The goal is to compute the number $x=\min(\Bbb N\setminus A)$, where $A$ is the input list and the complexity parameter is $n=|A|$ (which is finite).
This is a common problem when generating a unique identifier: you want a number that is not equal to any other in the "pool" of other identifiers. A simpler algorithm just takes $x=\max(A)+1$, and assuming that $A$ is represented as a list of length $n$, calculating this is $O(n)$. But this leaves "gaps" in the set (assuming we are adding these new numbers to the pool $A:=A\cup\{x\}$ repeatedly), and this can be undesirable. A "gap-filling" algorithm uses $x=\min(\Bbb N\setminus A)$ instead, but this is a more complex operation.
A lower bound on the complexity is of course $O(n)$, since one must at least observe the entire input. An upper bound is $O(n\log n)$, since we can sort the list $A$ into order, then just walk down the list until we find an $x$ such that $A(x)\ne x$.
• Forgive me for conflating lists and sets in the description of $A$. For the purposes of computing the complexity, $A$ is a list, or function $A:n\to\Bbb N$, and the set $A$ is really ${\rm ran}(A)$. Aug 10 '13 at 14:40
If $n=0$, trivially $f(A)=1$. Otherwise, let $m=\frac n2$. In one pass, split $A$ into $A_1=\{\,a\in A\mid a\le m\,\}$ and $A_2=\{\,a-m\mid a\in A, a>m\,\}$. If $|A_1|<m$ return $f(A_1)$, otherwise return $m+f(A_2)$. For the time needed, we obtain $T(n)\le n+T(\frac n2)$, hence $T(n)=O(n)$. If $A$ is given as a linked list and we don't mind if the order of the list is affected, we can undo the splitting and subtracting before returning and need only $O(\log n)$ additional memory for the recursion (as opposed to $O(n)$ memory with dkuper's solution).
• You can avoid the subtraction as well if you generalize the problem to $f(k,A)=\min((\Bbb N+k)\setminus A)$, where $A\subseteq\Bbb N+k$. Then let $m=k+\frac n2$, and $A_1=\{a\in A\mid a\le m\}$ and $A_2=\{a\in A\mid a>m\}$, and then $f(k,A)=f(k,A_1)$ if $|A_1|<m-k$ and $f(k,A)=f(m,A_2)$ otherwise. Aug 10 '13 at 15:58
Create a table $t$ of booleans of size $n$ initialized to $0$. Then go through $A$. Every time you see $x\in [0,n-1]$, you put a $1$ in $t(x)$. If $x$ is bigger than $n$ you just ignore it.
In the end you just need to go through $t$ and stop as soon as there is a $0$. So you do at most $2n$ steps.
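Both answers can be sketched in Python (function names are mine). The sketches assume the input is a list of distinct natural numbers, using 0-based naturals as in the table answer; the table is sized n+1 so that the answer n itself is representable when A contains all of 0..n-1:

```python
def mex_table(A):
    """Boolean-table idea: O(n) time, O(n) extra space."""
    n = len(A)
    seen = [False] * (n + 1)   # one extra slot so the answer n fits
    for a in A:
        if a <= n:
            seen[a] = True     # values above n cannot affect the answer
    return seen.index(False)

def mex_split(A):
    """Splitting idea, written iteratively: recurse into whichever half
    must contain a gap.  T(n) <= n + T(n/2), hence O(n) total."""
    lo = 0                                  # answer is known to be >= lo
    while A:
        m = lo + (len(A) + 1) // 2          # midpoint of the candidate range
        A1 = [a for a in A if a < m]
        A2 = [a for a in A if a >= m]
        if len(A1) < m - lo:                # lower half [lo, m) has a gap
            A = A1
        else:                               # lower half is full
            A, lo = A2, m
    return lo
```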
# Basic group theory question on countable groups and infinite abelian groups.
Here are two (maybe simple) questions.
A: Every countable group $G$ has only countably many distinct subgroups.
B: Every infinite abelian group has at least one element of infinite order.
Both these statements are false. I am unable to find any counterexamples. Just Hints would be highly appreciated.
Hint: these aren't very obscure possibilities. You can do both with vector spaces. – Chris Eagle May 6 at 9:28
A) Consider the vector space $V$ over the field of two elements with a countably infinite basis. A countably infinite set has uncountably many subsets.
B) $\mathbb{Q}/\mathbb{Z}$.
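To see concretely why $\mathbb{Q}/\mathbb{Z}$ refutes B, here is a small Python sketch (the helper name is mine) that brute-forces the order of an element $a/b + \mathbb{Z}$: the order is always finite, namely the reduced denominator $b$.

```python
from fractions import Fraction

def order_mod_Z(q):
    """Smallest n >= 1 with n*q an integer, i.e. the order of q + Z in Q/Z."""
    n = 1
    while (n * q).denominator != 1:
        n += 1
    return n

# Every element has finite order, so Q/Z is an infinite abelian group
# with no element of infinite order.
assert order_mod_Z(Fraction(3, 7)) == 7
assert order_mod_Z(Fraction(5, 12)) == 12
assert order_mod_Z(Fraction(2, 4)) == 2   # 2/4 reduces to 1/2
```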
# What is the square root of 12?
$\sqrt{12} \implies \sqrt{4 \cdot 3} \implies \sqrt{4} \sqrt{3} \implies 2 \sqrt{3}$
Or, if a number is needed, we can estimate $\sqrt{3}$ as $1.732$, giving:
$2 \sqrt{3} \cong 2 \cdot 1.732 \cong 3.464$
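A quick check of the simplification and the decimal estimate in Python:

```python
import math

# sqrt(12) = sqrt(4 * 3) = 2 * sqrt(3)
assert math.isclose(math.sqrt(12), 2 * math.sqrt(3))
print(round(2 * math.sqrt(3), 3))  # -> 3.464
```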
# Safety engineering, target selection, and alignment theory
Artificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. We can distinguish this from research specifically aimed at making AI systems at various capability levels safer, or more “robust and beneficial.” In this post, I distinguish three kinds of direct research that might be thought of as “AI safety” work: safety engineering, target selection, and alignment theory.
Imagine a world where humans somehow developed heavier-than-air flight before developing a firm understanding of calculus or celestial mechanics. In a world like that, what work would be needed in order to safely transport humans to the Moon?
In this case, we can say that the main task at hand is one of engineering a rocket and refining fuel such that the rocket, when launched, accelerates upwards and does not explode. The boundary of space can be compared to the boundary between narrowly intelligent and generally intelligent AI. Both boundaries are fuzzy, but have engineering importance: spacecraft and aircraft have different uses and face different constraints.
Paired with this task of developing rocket capabilities is a safety engineering task. Safety engineering is the art of ensuring that an engineered system provides acceptable levels of safety. When it comes to achieving a soft landing on the Moon, there are many different roles for safety engineering to play. One team of engineers might ensure that the materials used in constructing the rocket are capable of withstanding the stress of a rocket launch with significant margin for error. Another might design escape systems that ensure the humans in the rocket can survive even in the event of failure. Another might design life support systems capable of supporting the crew in dangerous environments.
A separate important task is target selection, i.e., picking where on the Moon to land. In the case of a Moon mission, targeting research might entail things like designing and constructing telescopes (if they didn’t exist already) and identifying a landing zone on the Moon. Of course, only so much targeting can be done in advance, and the lunar landing vehicle may need to be designed so that it can alter the landing target at the last minute as new data comes in; this again would require feats of engineering.
Beyond the task of (safely) reaching escape velocity and figuring out where you want to go, there is one more crucial prerequisite for landing on the Moon. This is rocket alignment research, the technical work required to reach the correct final destination. We’ll use this as an analogy to illustrate MIRI’s research focus, the problem of artificial intelligence alignment.
# The need to scale MIRI’s methods
| | Analysis
Andrew Critch, one of the new additions to MIRI’s research team, has taken the opportunity of MIRI’s winter fundraiser to write on his personal blog about why he considers MIRI’s work important. Some excerpts:
Since a team of CFAR alumni banded together to form the Future of Life Institute (FLI), organized an AI safety conference in Puerto Rico in January of this year, co-authored the FLI research priorities proposal, and attracted $10MM of grant funding from Elon Musk, a lot of money has moved under the label “AI Safety” in the past year. Nick Bostrom’s Superintelligence was also a major factor in this amazing success story. A lot of wonderful work is being done under these grants, including a lot of proposals for solutions to known issues with AI safety, which I find extremely heartening. However, I’m worried that if MIRI doesn’t scale at least somewhat to keep pace with all this funding, it just won’t be spent nearly as well as it would have if MIRI were there to help.

We have to remember that AI safety did not become mainstream by a spontaneous collective awakening. It was through years of effort on the part of MIRI and collaborators at FHI struggling to identify unknown unknowns about how AI might surprise us, and struggling further to learn to explain these ideas in enough technical detail that they might be adopted by mainstream research, which is finally beginning to happen. But what about the parts we’re wrong about? What about the sub-problems we haven’t identified yet, that might end up neglected in the mainstream the same way the whole problem was neglected 5 years ago? I’m glad the AI/ML community is more aware of these issues now, but I want to make sure MIRI can grow fast enough to keep this growing field on track.

Now, you might think that now that other people are “on the issue”, it’ll work itself out. That might be so. But just because some of MIRI’s conclusions are now being widely adopted doesn’t mean its methodology is.
The mental movement “Someone has pointed out this safety problem to me, let me try to solve it!” is very different from “Someone has pointed out this safety solution to me, let me try to see how it’s broken!” And that second mental movement is the kind that allowed MIRI to notice AI safety problems in the first place. Cybersecurity professionals seem to carry out this movement easily: security expert Bruce Schneier calls it the security mindset. The SANS Institute calls it red teaming. Whatever you call it, AI/ML people are still more in maker-mode than breaker-mode, and are not yet, to my eye, identifying any new safety problems.

I do think that different organizations should probably try different approaches to the AI safety problem, rather than perfectly copying MIRI’s approach and research agenda. But I think breaker-mode/security mindset does need to be a part of every approach to AI safety. And if MIRI doesn’t scale up to keep pace with all this new funding, I’m worried that the world is just about to copy-paste MIRI’s best-2014-impression of what’s important in AI safety, and leave behind the self-critical methodology that generated these ideas in the first place… which is a serious pitfall given all the unknown unknowns left in the field.

See our funding drive post to help contribute or to learn more about our plans. For more about AI risk and security mindset, see also Luke Muehlhauser’s post on the topic.

# Jed McCaleb on Why MIRI Matters

| | Guest Posts

This is a guest post by Jed McCaleb, one of MIRI’s top donors, for our winter fundraiser.

A few months ago, several leaders in the scientific community signed an open letter pushing for oversight into the research and development of artificial intelligence, in order to mitigate the risks and ensure the societal benefit of the advanced technology. Researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks in this century.
Similarly, I believe we’ll see the promise of human-level AI come to fruition much sooner than we’ve fathomed. Its effects will likely be transformational — for the better if it is used to help improve the human condition, or for the worse if it is used incorrectly. As AI agents become more capable, it becomes more important to analyze and verify their decisions and goals. MIRI’s focus is on how we can create highly reliable agents that can learn human values, and on the overarching need for better decision-making processes to power these new technologies.

The past few years have seen a vibrant and growing AI research community. As the space continues to flourish, the need for collaboration will continue to grow as well. Organizations like MIRI that are dedicated to security and safety engineering help fill this need. And, as a nonprofit, its research is free from profit obligations. This independence in research is important because it will lead to safer and more neutral results.

By supporting organizations like MIRI, we’re putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good. For humanity’s benefit, we need to guarantee that AI systems can reliably pursue goals that are aligned with society’s human values. If organizations like MIRI are able to help engineer this level of technological advancement and awareness in AI systems, imagine the endless possibilities of how it can help improve our world. It’s critical that we put the infrastructure in place in order to ensure that AI will be used to make the lives of people better. This is why I’ve donated to MIRI, and why I believe it’s a worthy cause that you should consider as well.

Jed McCaleb created eDonkey, one of the largest file-sharing networks of its time, as well as Mt. Gox, the first Bitcoin exchange. Recognizing that the world’s financial infrastructure is broken and that too many people are left without resources, he cofounded Stellar in 2014.
Jed is also an advisor to MIRI.

# OpenAI and other news

| | News

We’re only 11 days into December, and this month is shaping up to be a momentous one. On December 3, the University of Cambridge partnered with the University of Oxford, Imperial College London, and UC Berkeley to launch the Leverhulme Centre for the Future of Intelligence. The Cambridge Centre for the Study of Existential Risk (CSER) helped secure initial funding for the new independent center, in the form of a $15M grant to be disbursed over ten years. CSER and Leverhulme CFI plan to collaborate closely, with the latter focusing on AI’s mid- and long-term social impact.
Meanwhile, the Strategic Artificial Intelligence Research Centre (SAIRC) is hiring its first research fellows in machine learning, policy analysis, and strategy research: details. SAIRC will function as an extension of two existing institutions: CSER, and the Oxford-based Future of Humanity Institute. As Luke Muehlhauser has noted, if you’re an AI safety “lurker,” now is an ideal time to de-lurk and get in touch.
MIRI’s research program is also growing quickly, with mathematician Scott Garrabrant joining our core team tomorrow. Our winter fundraiser is in full swing, and multiple matching opportunities have sprung up to bring us within a stone’s throw of our first funding target.
# MIRI’s 2015 Winter Fundraiser!
| | News
The Machine Intelligence Research Institute’s 2015 winter fundraising drive begins today, December 1!
The drive will run for the month of December, and will help support MIRI’s research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.
#### MIRI’s Research Focus
The field of AI has a goal of automating perception, reasoning, and decision-making — the many abilities we group under the label “intelligence.” Most leading researchers in AI expect our best AI algorithms to begin strongly outperforming humans this century in most cognitive tasks. In spite of this, relatively little time and effort has gone into trying to identify the technical prerequisites for making smarter-than-human AI systems safe and useful.
We believe that several basic theoretical questions will need to be answered in order to make advanced AI systems stable, transparent, and error-tolerant, and in order to specify correct goals for such systems. Our technical agenda describes what we think are the most important and tractable of these questions.
Smarter-than-human AI may be 50 years or more away. There are a number of reasons we nonetheless consider it important to begin work on these problems today:
• High capability ceilings — Humans appear to be nowhere near physical limits for cognitive ability, and even modest advantages in intelligence may yield decisive strategic advantages for AI systems.
• “Sorcerer’s Apprentice” scenarios — Smarter AI systems can come up with increasingly creative ways to meet programmed goals. The harder it is to anticipate how a goal will be achieved, the harder it is to specify the correct goal.
• Convergent instrumental goals — By default, highly capable decision-makers are likely to have incentives to treat human operators adversarially.
• AI speedup effects — Progress in AI is likely to accelerate as AI systems approach human-level proficiency in skills like software engineering.
We think MIRI is well-positioned to make progress on these problems for four reasons: our initial technical results have been promising (see our publications), our methodology has a good track record of working in the past (see MIRI’s Approach), we have already had a significant influence on the debate about long-run AI outcomes (see Assessing Our Past and Potential Impact), and we have an exclusive focus on these issues (see What Sets MIRI Apart?). MIRI is currently the only organization specializing in long-term technical AI safety research, and our independence from industry and academia allows us to effectively address gaps in other institutions’ research efforts.
#### General Progress This Year
In June, Luke Muehlhauser left MIRI for a research position at the Open Philanthropy Project. I replaced Luke as MIRI’s Executive Director, and I’m happy to say that the transition has gone well. We’ve split our time between technical research and academic outreach, running a workshop series aimed at introducing a wider scientific audience to our work and sponsoring a three-week summer fellows program aimed at training skills required to do groundbreaking theoretical research.
Our fundraiser this summer was our biggest to date. We raised a total of $631,957 from 263 distinct donors, smashing our previous funding drive record by over $200,000. Medium-sized donors stepped up their game to help us hit our first two funding targets: many more donors gave between $5,000 and $50,000 than in past fundraisers. Our successful fundraisers, workshops, and fellows program have allowed us to ramp up our growth substantially, and have already led directly to several new researcher hires.
2015 has been an astounding year for AI safety engineering. In January, the Future of Life Institute brought together the leading organizations studying long-term AI risk and top AI researchers in academia and industry for a “Future of AI” conference in San Juan, Puerto Rico. Out of this conference came a widely endorsed open letter, accompanied by a research priorities document drawing heavily on MIRI’s work. Two prominent AI scientists who helped organize the event, Stuart Russell and Bart Selman, have since become MIRI research advisors (in June and July, respectively). The conference also resulted in an AI safety grants program, with MIRI receiving some of the largest grants.
In addition to the FLI conference, we’ve spoken this year at AAAI-15, AGI-15, LORI 2015, EA Global, the American Physical Society, and the leading U.S. science and technology think tank, ITIF. We also co-organized a decision theory conference at Cambridge University and ran a ten-week seminar series at UC Berkeley.
Three new full-time research fellows have joined our team this year: Patrick LaVictoire in March, Jessica Taylor in August, and Andrew Critch in September. Scott Garrabrant will become our newest research fellow this month, after having made major contributions as a workshop attendee and research associate.
Meanwhile, our two new research interns, Kaya Stechly and Rafael Cosman, have been going through old results and consolidating and polishing material into new papers; and three of our new research associates, Vadim Kosoy, Abram Demski, and Tsvi Benson-Tilsen, have been producing a string of promising results on our research forum. Another intern, Jack Gallagher, contributed to our type theory project over the summer.
To accommodate our growing team, we’ve recently hired a new office manager, Andrew Lapinski-Barker, and will be moving into a larger office space this month. On the whole, I’m very pleased with our new academic collaborations, outreach efforts, and growth.
#### Research Progress This Year
As our research projects and collaborations have multiplied, we’ve made more use of online mechanisms for quick communication and feedback between researchers. In March, we launched the Intelligent Agent Foundations Forum, a discussion forum for AI alignment research. Many of our subsequent publications have been developed from material on the forum, beginning with Patrick LaVictoire’s “An introduction to Löb’s theorem in MIRI’s research.”
We have also produced a number of new papers in 2015 and, most importantly, arrived at new research insights.
In July, we revised our primary technical agenda paper for 2016 publication. Our other new publications and results can be categorized by their place in the research agenda:
We’ve been exploring new approaches to the problems of naturalized induction and logical uncertainty, with early results published in various venues, including Fallenstein et al.’s “Reflective oracles” (presented in abridged form at LORI 2015) and “Reflective variants of Solomonoff induction and AIXI” (presented at AGI-15), and Garrabrant et al.’s “Asymptotic logical uncertainty and the Benford test” (available on arXiv). We also published the overview papers “Formalizing two problems of realistic world-models” and “Questions of reasoning under logical uncertainty.”
In decision theory, Patrick LaVictoire and others have developed new results pertaining to bargaining and division of trade gains, using the proof-based decision theory framework (example). Meanwhile, the team has been developing a better understanding of the strengths and limitations of different approaches to decision theory, an effort spearheaded by Eliezer Yudkowsky, Benya Fallenstein, and me, culminating in some insights that will appear in a paper next year. Andrew Critch has proved some promising results about bounded versions of proof-based decision-makers, which will also appear in an upcoming paper. Additionally, we presented a shortened version of our overview paper at AGI-15.
In Vingean reflection, Benya Fallenstein and Research Associate Ramana Kumar collaborated on “Proof-producing reflection for HOL” (presented at ITP 2015) and have been working on an FLI-funded implementation of reflective reasoning in the HOL theorem prover. Separately, the reflective oracle framework has helped us gain a better understanding of what kinds of reflection are and are not possible, yielding some nice technical results and a few insights that seem promising. We also published the overview paper “Vingean reflection.”
Jessica Taylor, Benya Fallenstein, and Eliezer Yudkowsky have focused on error tolerance on and off throughout the year. We released Taylor’s “Quantilizers” (accepted to a workshop at AAAI-16) and presented the paper “Corrigibility” at an AAAI-15 workshop.
In value specification, we published the AAAI-15 workshop paper “Concept learning for safe autonomous AI” and the overview paper “The value learning problem.” With support from an FLI grant, Jessica Taylor is working on better formalizing subproblems in this area, and has recently begun writing up her thoughts on this subject on the research forum.
Lastly, in forecasting and strategy, we published “Formalizing convergent instrumental goals” (accepted to an AAAI-16 workshop) and two historical case studies: “The Asilomar Conference” and “Leó Szilárd and the danger of nuclear weapons.” Many other strategic analyses have been posted to the recently revamped AI Impacts site, where Katja Grace has been publishing research about patterns in technological development.
#### Fundraiser Targets and Future Plans
Like our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. Our successful summer fundraiser has helped determine how ambitious we’re making our plans; although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a budget of roughly $1,825,000 per year. Of this, about $100,000 is being paid for in 2016 through FLI grants, funded by Elon Musk and the Open Philanthropy Project. The rest depends on our fundraising and grant-writing success. We have a twelve-month runway as of January 1, which we would ideally like to extend.
Taking all of this into account, our winter funding targets are:
Target 1 — $150k: Holding steady. At this level, we would have enough funds to maintain our runway in early 2016 while continuing all current operations, including running workshops, writing papers, and attending conferences.

Target 2 — $450k: Maintaining MIRI’s growth rate. At this funding level, we would be much more confident that our new growth plans are sustainable, and we would be able to devote more attention to academic outreach. We would be able to spend less staff time on fundraising in the coming year, and might skip our summer fundraiser.
Target 3 — $1M: Bigger plans, faster growth. At this level, we would be able to substantially increase our recruiting efforts and take on new research projects. It would be evident that our donors’ support is stronger than we thought, and we would move to scale up our plans and growth rate accordingly.

Target 4 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level of funding, we would be able to diversify our research initiatives and begin branching out from our current agenda into alternative angles of attack on the AI alignment problem.
Our projected spending over the next twelve months, excluding earmarked funds for the independent AI Impacts project, breaks down as follows:
Our largest cost ($700,000) is in wages and benefits for existing research staff and contracted researchers, including research associates. Our current priority is to further expand the team. We expect to spend an additional $150,000 on salaries and benefits for new research staff in 2016, but that number could go up or down significantly depending on when new research fellows begin work:
• Mihály Bárász, who was originally slated to begin in November 2015, has delayed his start date due to unexpected personal circumstances. He plans to join the team in 2016.
• We are recruiting a specialist for our type theory in type theory project, which is aimed at developing simple programmatic models of reflective reasoners. Interest in this topic has been increasing recently, which is exciting; but the basic tools needed for our work are still missing. If you have programmer or mathematician friends who are interested in dependently typed programming languages and MIRI’s work, you can send them our application form.
• We are considering several other possible additions to the research team.
Much of the rest of our budget goes into fixed costs that will not need to grow much as we expand the research team. This includes $475,000 for administrator wages and benefits and $250,000 for costs of doing business. Our main cost of doing business is renting office space (slightly over $100,000). Note that the boundaries between these categories are sometimes fuzzy. For example, my salary is included in the admin staff category, despite the fact that I spend some of my time on technical research (and hope to increase that amount in 2016).

Our remaining budget goes into organizing or sponsoring research events, such as fellows programs, MIRIx events, or workshops ($250,000). Some activities (e.g., traveling to conferences) are aimed at sharing our work with the larger academic community. Others, such as researcher retreats, are focused on solving open problems in our research agenda. After experimenting with different types of research staff retreat in 2015, we’re beginning to settle on a model that works well, and we’ll be running a number of retreats throughout 2016.
In past years, we’ve generally raised $1M per year, and spent a similar amount. Thanks to substantial recent increases in donor support, however, we’re in a position to scale up significantly.
Our donors blew us away with their support in our last fundraiser. If we can continue our fundraising and grant successes, we’ll be able to sustain our new budget and act on the unique opportunities outlined in Why Now Matters, helping set the agenda and build the formal tools for the young field of AI safety engineering. And if our donors keep stepping up their game, we believe we have the capacity to scale up our program even faster. We’re thrilled at this prospect, and we’re enormously grateful for your support.
# New paper: “Quantilizers”
| | Papers
MIRI Research Fellow Jessica Taylor has written a new paper on an error-tolerant framework for software agents, “Quantilizers: A safer alternative to maximizers for limited optimization.” Taylor’s paper will be presented at the AAAI-16 AI, Ethics and Society workshop. The abstract reads:
In the field of AI, expected utility maximizers are commonly used as a model for idealized agents. However, expected utility maximization can lead to unintended solutions when the utility function does not quantify everything the operators care about: imagine, for example, an expected utility maximizer tasked with winning money on the stock market, which has no regard for whether it accidentally causes a market crash. Once AI systems become sufficiently intelligent and powerful, these unintended solutions could become quite dangerous. In this paper, we describe an alternative to expected utility maximization for powerful AI systems, which we call expected utility quantilization. This could allow the construction of AI systems that do not necessarily fall into strange and unanticipated shortcuts and edge cases in pursuit of their goals.
Expected utility quantilization is the approach of selecting a random action in the top n% of actions from some distribution γ, sorted by expected utility. The distribution γ might, for example, be a set of actions weighted by how likely a human is to perform them. A quantilizer based on such a distribution would behave like a compromise between a human and an expected utility maximizer. The agent’s utility function directs it toward intuitively desirable outcomes in novel ways, making it potentially more useful than a digitized human; while γ directs it toward safer and more predictable strategies.
Quantilization is a formalization of the idea of “satisficing,” or selecting actions that achieve some minimal threshold of expected utility. Agents that try to pick good strategies, but not maximally good ones, seem less likely to come up with extraordinary and unconventional strategies, thereby reducing both the benefits and the risks of smarter-than-human AI systems. Designing AI systems to satisfice looks especially useful for averting harmful convergent instrumental goals and perverse instantiations of terminal goals:
• If we design an AI system to cure cancer, and γ labels it bizarre to reduce cancer rates by increasing the rate of some other terminal illness, then a quantilizer will be less likely to adopt this perverse strategy even if our imperfect specification of the system’s goals gave this strategy high expected utility.
• If superintelligent AI systems have a default incentive to seize control of resources, but γ labels these policies bizarre, then a quantilizer will be less likely to converge on these strategies.
Taylor notes that the quantilizing approach to satisficing may even allow us to disproportionately reap the benefits of maximization without incurring proportional costs, by specifying some restricted domain in which the quantilizer has low impact without requiring that it have low impact overall — “targeted-impact” quantilization.
One obvious objection to the idea of satisficing is that a satisficing agent might build an expected utility maximizer. Maximizing, after all, can be an extremely effective way to satisfice. Quantilization can potentially avoid this objection: maximizing and quantilizing may both be good ways to satisfice, but maximizing is not necessarily an effective way to quantilize. A quantilizer that deems the act of delegating to a maximizer “bizarre” will avoid delegating its decisions to an agent even if that agent would maximize the quantilizer’s expected utility.
Taylor shows that the cost of relying on a 0.1-quantilizer (which selects a random action from the top 10% of actions), on expectation, is no more than 10 times that of relying on the recommendation of its distribution γ; the expected cost of relying on a 0.01-quantilizer (which selects from the top 1% of actions) is no more than 100 times that of relying on γ; and so on. Quantilization is optimal among the set of strategies that are low-cost in this respect.
However, expected utility quantilization is not a magic bullet. It depends strongly on how we specify the action distribution γ, and Taylor shows that ordinary quantilizers behave poorly in repeated games and in scenarios where “ordinary” actions in γ tend to have very high or very low expected utility. Further investigation is needed to determine if quantilizers (or some variant on quantilizers) can remedy these problems.
# Why doesn't my code for a format change work? (Format Rules)
I want to write the following in Mathematica
$$\sum_{k=1}^n a_k^2$$
Now I've written this as an indexed variable
Sum[(a^2)[k], {k, 1, n}]
which in Mathematica shows up as $$\sum_{k=1}^n (a[k])^2$$ But I want this displayed in the LaTeX form, with the subscripts (but I don't want to use the subscript command), and I tried
(# /: Format[(Power[#[i_], exp_])] :=
Subscript[(Power[#, exp]), i]) & /@ {a};
But I get an error saying
"Rule for Format of (a^exp_)[i_] can only be attached to Power."
Please guide me on what I am doing wrong with the Format command.
• "but I don't want to use the subscript command" Why not? (a^2)[k] doesn't make much sense for anything else than displaying an expression in a certain way, anyway. Oct 8, 2017 at 11:28
• @Szabolcs , because I don't the fullform of the expression to have subscripts. But I just want it to be displayed that way. Oct 8, 2017 at 11:57
• I don't understand your comment because it is not a full sentence. Oct 8, 2017 at 12:39
• @Szabolcs , there is nothing ambiguous about that sentence. When I use FullForm[expr] on the expression containing things like the one mentioned in the question, I don't want it show subscripts in the fullform. I just want to use the Format command to display it that way. This question relates to the use of the format command. Oct 8, 2017 at 13:24
• "I don't the fullform" is not a full sentence. I don't think a reasonable answer can be given unless you explain what you want in very clear terms. (a^2)[k] is plainly incorrect in Mathematica. Redefining the way Power formats is a bad idea. I am not sure what you are trying to achieve with #. For these reasons, I am going to stop here. Oct 8, 2017 at 13:32
Format[a[k_]] := Subscript[a,k]
$$\sum _{k=1}^{\infty } a_k{}^2$$
Continuous stochastic process
In probability theory, a continuous stochastic process is a type of stochastic process that may be said to be "continuous" as a function of its "time" or index parameter. Continuity is a nice property for (the sample paths of) a process to have, since it implies that they are well-behaved in some sense, and, therefore, much easier to analyse. It is implicit here that the index of the stochastic process is a continuous variable. Note that some authors[1] define a "continuous (stochastic) process" as only requiring that the index variable be continuous, without continuity of sample paths: in some terminology, this would be a continuous-time stochastic process, in parallel to a "discrete-time process". Given the possible confusion, caution is needed.[1]
Definitions
Let (Ω, Σ, P) be a probability space, let T be some interval of time, and let X : T × Ω → S be a stochastic process. For simplicity, the rest of this article will take the state space S to be the real line R, but the definitions go through mutatis mutandis if S is Rn, a normed vector space, or even a general metric space.
Continuity with probability one
Given a time t ∈ T, X is said to be continuous with probability one at t if
${\displaystyle {\mathbf {P} }\left(\left\{\omega \in \Omega \left|\lim _{s\to t}{\big |}X_{s}(\omega )-X_{t}(\omega ){\big |}=0\right.\right\}\right)=1.}$
Mean-square continuity
Given a time t ∈ T, X is said to be continuous in mean-square at t if E[|Xt|2] < +∞ and
${\displaystyle \lim _{s\to t}{\mathbf {E} }\left[{\big |}X_{s}-X_{t}{\big |}^{2}\right]=0.}$
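As a quick numerical illustration (a sketch, not part of the original article): standard Brownian motion has Xs − Xt ~ N(0, |s − t|), so E[|Xs − Xt|²] = |s − t| → 0 as s → t, giving mean-square continuity at every t.

```python
import random

# Estimate E[|X_s - X_t|^2] for Brownian motion by Monte Carlo: the
# increment X_s - X_t is Gaussian with mean 0 and variance h = |s - t|.
rng = random.Random(1)
estimates = {}
for h in (0.1, 0.01, 0.001):
    sq = [rng.gauss(0.0, h ** 0.5) ** 2 for _ in range(20000)]
    estimates[h] = sum(sq) / len(sq)
    print(h, estimates[h])   # the estimate tracks h and shrinks with it
```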
Continuity in probability
Given a time t ∈ T, X is said to be continuous in probability at t if, for all ε > 0,
${\displaystyle \lim _{s\to t}{\mathbf {P} }\left(\left\{\omega \in \Omega \left|{\big |}X_{s}(\omega )-X_{t}(\omega ){\big |}\geq \varepsilon \right.\right\}\right)=0.}$
Equivalently, X is continuous in probability at time t if
${\displaystyle \lim _{s\to t}{\mathbf {E} }\left[{\frac {{\big |}X_{s}-X_{t}{\big |}}{1+{\big |}X_{s}-X_{t}{\big |}}}\right]=0.}$
Continuity in distribution
Given a time t ∈ T, X is said to be continuous in distribution at t if
${\displaystyle \lim _{s\to t}F_{s}(x)=F_{t}(x)}$
for all points x at which Ft is continuous, where Ft denotes the cumulative distribution function of the random variable Xt.
Sample continuity
X is said to be sample continuous if Xt(ω) is continuous in t for P-almost all ω ∈ Ω. Sample continuity is the appropriate notion of continuity for processes such as Itō diffusions.
Feller continuity
X is said to be a Feller-continuous process if, for any fixed t ∈ T and any bounded, continuous and Σ-measurable function g : S → R, Ex[g(Xt)] depends continuously upon x. Here x denotes the initial state of the process X, and Ex denotes expectation conditional upon the event that X starts at x.
Relationships
The relationships between the various types of continuity of stochastic processes are akin to the relationships between the various types of convergence of random variables. In particular:
• continuity with probability one implies continuity in probability;
• continuity in mean-square implies continuity in probability;
• continuity with probability one neither implies, nor is implied by, continuity in mean-square;
• continuity in probability implies, but is not implied by, continuity in distribution.
It is tempting to confuse continuity with probability one with sample continuity. Continuity with probability one at time t means that P(At) = 0, where the event At is given by
${\displaystyle A_{t}=\left\{\omega \in \Omega \left|\lim _{s\to t}{\big |}X_{s}(\omega )-X_{t}(\omega ){\big |}\neq 0\right.\right\},}$
and it is perfectly feasible to check whether or not this holds for each t ∈ T. Sample continuity, on the other hand, requires that P(A) = 0, where
${\displaystyle A=\bigcup _{t\in T}A_{t}.}$
Note that A is an uncountable union of events, so it may not actually be an event itself, so P(A) may be undefined! Even worse, even if A is an event, P(A) can be strictly positive even if P(At) = 0 for every t ∈ T. This is the case, for example, with the telegraph process.
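The telegraph process can be checked numerically. In this sketch (my illustration; I take X<sub>t</sub> = (−1)<sup>N<sub>t</sub></sup> for a rate-one Poisson process N) the probability of a state flip over a window of length dt is the probability of an odd Poisson count, which vanishes as dt → 0 — yet almost every sample path jumps somewhere, so the process is not sample continuous.

```python
import numpy as np

rng = np.random.default_rng(1)

def flip_prob(dt, rate=1.0, n=200_000):
    """P(X_{t+dt} != X_t) for the telegraph process X_t = (-1)^{N_t}:
    the state flips iff the Poisson count on (t, t+dt] is odd."""
    counts = rng.poisson(rate * dt, size=n)
    return float(np.mean(counts % 2 == 1))

# Continuity in probability at each fixed t: flip probability -> 0 with dt.
print([flip_prob(dt) for dt in (1.0, 0.1, 0.01)])
# Yet almost every path has at least one jump in, say, [0, 10],
# so sample continuity fails.
print(float(np.mean(rng.poisson(10.0, size=200_000) > 0)))
```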
Notes
1. Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9 (Entry for "continuous process")
# Lately in Lie Groups
This came from a lecture given on 11/10/09.
The lecture began with the definition of the Cartan matrix: order the simple roots $a_1, \dots, a_n$; the $ij$-th entry is then $2\frac{(a_i,a_j)}{(a_j,a_j)}$.
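As a quick computational sketch (mine, not from the lecture), one can compute the Cartan matrix directly from this formula, realising the simple roots of $A_3$ as $e_i - e_{i+1}$ in $\mathbb{R}^4$:

```python
import numpy as np

def cartan_matrix(simple_roots):
    """Cartan matrix: entry (i, j) is 2 (a_i, a_j) / (a_j, a_j)."""
    roots = np.asarray(simple_roots, dtype=float)
    gram = roots @ roots.T                      # Gram matrix of inner products (a_i, a_j)
    return 2.0 * gram / np.diag(gram)[None, :]  # divide column j by (a_j, a_j)

# Simple roots of A_3 realised in R^4 as e_i - e_{i+1}.
e = np.eye(4)
a3 = [e[i] - e[i + 1] for i in range(3)]
print(cartan_matrix(a3))   # the familiar tridiagonal matrix with 2's and -1's
```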
Why care about the Cartan matrix? Closely associated to it is the Coxeter matrix, whose entries are $(a_i, a_j)$; note that in general its entries are not all rational. As it happens, the covolume of the root lattice is the determinant of this matrix.
Simply laced Dynkin diagrams are those with at most one edge between any two nodes. For these diagrams (with the roots normalized to unit length), the Coxeter matrix has 1's on the diagonal and $-1/2$ in the off-diagonal entries corresponding to joined nodes.
Now $\det (Cartan(E_8)) = 1$, so the $E_8$ root lattice is an example of an even unimodular lattice ("even" means the inner product of every vector with itself is even). It is the only such lattice in dimension 8, and the first one: these lattices only occur in dimensions congruent to zero mod 8 (more precisely, the signature has to be congruent to zero mod 8; e.g. one could have a 26-dimensional lattice with signature 25 − 1). The above discussion gives some meaning to the determinant of the Cartan matrix for simply laced Dynkin diagrams. The Cartan matrix is positive definite (basically because the Coxeter matrix is).
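The determinant claim is easy to check numerically. Here is a sketch with the $E_8$ Cartan matrix written out in the Bourbaki numbering (the numbering is my choice; any valid numbering gives the same determinant):

```python
import numpy as np

# Cartan matrix of E8, Bourbaki numbering: node 2 hangs off node 4
# of the chain 1-3-4-5-6-7-8.
E8 = np.array([
    [ 2,  0, -1,  0,  0,  0,  0,  0],
    [ 0,  2,  0, -1,  0,  0,  0,  0],
    [-1,  0,  2, -1,  0,  0,  0,  0],
    [ 0, -1, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0,  0],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0,  0,  0, -1,  2],
])

print(round(np.linalg.det(E8)))                 # 1: the E8 lattice is unimodular
print(bool(np.all(np.linalg.eigvalsh(E8) > 0))) # positive definite, as claimed
```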
Finding Lie algebras. Last time we talked about the classification of Dynkin diagrams. Now we want to realize the root systems as root systems of some Lie algebras. We know this is true already for a lot of irreducible root systems (the Weyl group is also listed; at some point I should understand the action of the Weyl group better):
$A_n \leftrightarrow sl(n+1, \mathbb{C})$, $W = S_{n+1}$
$B_n \leftrightarrow so(2n+1, \mathbb{C})$, $W = (\mathbb{Z}/2)^n * S_n$ (permutations up to sign; any one coordinate can be flipped leaving the others unchanged)
$C_n \leftrightarrow sp(2n, \mathbb{C})$, same Weyl group
$D_n \leftrightarrow so(2n,\mathbb{C})$, $W = (\mathbb{Z}/2)^{n-1}*S_n$ (only sign changes of even parity: the parity of the sign changes matches the parity of the permutation)
Existence of the root system associated to each Dynkin diagram is trivial: just take the group generated by reflections in the simple roots, and take the orbits of the simple roots.
Getting Dynkin diagrams from $A,D,E$ series
Label the roots as follows
Explicit descriptions (in terms of coordinates) of all the roots in the actual root system are given roughly around pg. 333 of Fulton and Harris. From $E_8$ one can obtain the root systems associated to other Dynkin diagrams. For example, $E_6$ and $E_7$ naturally sit inside the root system for $E_8$.
One can get $F_4$ from $E_6$ (the diagram for $E_6$ is the one above with $a_7, a_8$ erased). In this case there is an obvious involution: $a_1 \mapsto a_6$, $a_3 \mapsto a_5$. The quotient under this action gives $F_4$.
If you quotient by the dihedral symmetry of $D_4$, you get $G_2$. Also, a symmetry of $A_n$ will give you $B_n$, and similarly quotienting by a symmetry of $D_n$ gives $C_n$. Thus $A_n, D_n, E_8$ are the ones to be concerned about.
Regarding $sp(2n, \mathbb{C})$. Let $M = \bigl( \begin{smallmatrix} 0 & I_n \\ -I_n & 0 \end{smallmatrix}\bigr)$. And write $Mat(A,B,C,D)$ for the usual 2×2 block matrix, i.e. $M = Mat(0,I_n, -I_n, 0)$. Get that
$sp(2n, \mathbb{C}) = \{X | X^\intercal M + M X = 0\}$
consequently, the Cartan subalgebra is
$h = \{H_i = E_{i,i} - E_{n+i, n+i}\}$
etc., etc. Some details I didn't write down very well, but here is the rest, cleaned up:
$L_i \in h^*$ with $\langle L_i, H_j \rangle = \delta_{ij}$.
Writing $X = Mat(A,B,C,D)$, the defining condition forces $D = -A^\intercal$ and $B, C$ symmetric. The root vectors are:
$X_{i,j} = E_{i,j} - E_{n+j,n+i}$, root $L_i - L_j$
$E_{i,n+j} + E_{j,n+i}$, root $L_i + L_j$
$E_{n+i,j} + E_{n+j,i}$, root $-L_i - L_j$
$E_{i,n+i}$, root $2L_i$
$E_{n+i,i}$, root $-2L_i$
(the latter two come from the diagonals of $B$ and $C$). So the roots are $\pm L_i \pm L_j \in h^*$.
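As a sanity check on these root vectors, here is a small numpy sketch (illustrative, with $n = 2$) verifying the defining condition $X^\intercal M + M X = 0$ for a few of the elements listed above:

```python
import numpy as np

n = 2
I = np.eye(n)
# M = Mat(0, I_n, -I_n, 0), the form defining sp(2n, C).
M = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

def E(i, j):
    """Matrix unit E_{i,j} (0-indexed) in gl(2n)."""
    m = np.zeros((2 * n, 2 * n))
    m[i, j] = 1.0
    return m

def in_sp(X):
    """Defining condition of sp(2n, C): X^T M + M X = 0."""
    return bool(np.allclose(X.T @ M + M @ X, 0))

i, j = 0, 1
print(in_sp(E(i, i) - E(n + i, n + i)))   # Cartan element H_i: True
print(in_sp(E(i, n + j) + E(j, n + i)))   # root vector for L_i + L_j: True
print(in_sp(E(i, n + i)))                 # root vector for 2L_i: True
print(in_sp(E(i, j)))                     # a bare matrix unit alone: False
```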
For $so(2n, \mathbb{C})$, take $M = Mat(0, I_n, I_n, 0)$. If $X = Mat(A,B,C,D)$ as above, one sees $A = -D^\intercal$, and $B, C$ are skew-symmetric. Eigenvectors:
$E_{i,j} - E_{n+j,n+i}$, root $L_i - L_j$
$E_{i,n+j} - E_{j,n+i}$, root $L_i + L_j$
$E_{n+i,j} - E_{n+j,i}$, root $-L_i - L_j$
So we have the roots $\pm L_i \pm L_j$, $i \ne j$. The above covers $A_n, C_n, D_n$; $so(2n+1, \mathbb{C})$ is a similar computation.
Last little bit. Assume for now that we can construct $E_8$. If $\mathfrak{g}$ is a simple Lie algebra with Dynkin diagram $D$, and $D^0$ is a subdiagram obtained by removing some nodes together with all the lines meeting those nodes, then we can construct a subalgebra $\mathfrak{g}^0$ with $D^0$ as its Dynkin diagram: $\mathfrak{g}^0$ is generated by the $\mathfrak{g}_{\pm a}$ with $a \in D^0$.
Exercise: prove this by verifying that the positive roots of $\mathfrak{g}^0$ are the positive roots $b$ of $\mathfrak{g}$ that are sums of roots in $D^0$, and that $\mathfrak{h}^0$ is spanned by the corresponding $H_b \in \mathfrak{h}$.
Basically this shows that if two root systems are abstractly isomorphic, they give rise to isomorphic Lie algebras. Eventually we will construct $E_8$, but now uniqueness: if $g, g'$ have Cartan subalgebras $h, h'$ with choices of positive roots and isomorphic root systems, then there exists an isomorphism $h \to h'$ taking $H_i \to H'_i$, where $a_1, \dots, a_n$ are the positive roots for $h$.
Choose $X_i, X'_i$ arbitrary in $g_{a_i}, g_{a'_i}$ and just map $X_i \to X'_i$. Claim: there exists a unique isomorphism $g \to g'$ extending $h \to h'$ and mapping $X_i \to X'_i$.
Proof (in Fulton and Harris): the map is determined on $Y_i \in g_{-a_i}$ by $[Y,X] = H$ and $[Y',X'] = H'$, and these elements generate all of $g, g'$ respectively (i.e. $X, H$ uniquely determine $Y$).
Existence: let $g'' \subset g \oplus g'$ be the graph of the map: its generated by
$H''_i = H'_i \oplus H_i$ and $X''_i = X_i \oplus X'_i$
and similarly for $Y_i$. It suffices to show the two projections are isos from $g'' \to g$ or $g'$ respectively.
The kernel of the second projection is $t \oplus 0$, where $t$ is an ideal of $g$. Since $g$ is simple, $t = 0$ or $t = g$. In the latter case, $g'' = g \oplus g'$. Consider a maximal root $b$; by maximal we mean $[X_i, X_b] = 0$ for all simple roots $i$.
Take $X_b$ and $X'_b$ in the corresponding root spaces, set $X''_b = X_b \oplus X'_b$, and let $W$ be the subspace of $g''$ obtained by applying all the $Y''_i$ to $X''_b$. $W$ is a proper subspace of $g''$.
Claim: $g''$ preserves $W$. This is clear for the $H''_i$; we just have to show each $X''_j$ preserves $W$, by an inductive argument.
Suppose $Z = [Y''_{j_1}, [\dots, [Y''_{j_n}, X''_b]]] \in W$; we show the bracket with one more $Y''$ stays in $W$. By the Jacobi identity,
$[X''_j, [Y''_{j_{n+1}}, Z]] = -[Y''_{j_{n+1}}, [Z, X''_j]] - [Z, [X''_j, Y''_{j_{n+1}}]]$.
The first term is in $W$ by induction, and the second term is $0$ or a multiple of $H''_j$ applied to $Z$, which preserves $W$ by induction. So $W$ is a proper invariant subspace, hence an ideal of $g \oplus g'$; but it must be proper, so it can't be everything, which forces $X_b \oplus 0 \in W$, so $W_b$ is two dimensional, contradiction.
COMPSTAT 2022: Start Registration
A0617
Title: Living on the edge: A unified approach to antithetic sampling
Authors: Lorenzo Frattarolo - European Commission Joint Research Centre (JRC) (Italy) [presenting]
Roberto Casarin - University Ca' Foscari of Venice (Italy)
Radu Craiu - University of Toronto (Canada)
Christian Robert - Universite Paris-Dauphine (France)
Abstract: Recurrent ingredients in the antithetic sampling literature are identified which lead to a unified sampling framework. We introduce a new class of antithetic schemes that includes the most used antithetic proposals. This perspective enables the derivation of new properties of the sampling schemes: i) optimality in the Kullback-Leibler sense; ii) closed-form multivariate Kendall's $\tau$ and Spearman's $\rho$; iii) ranking in concordance order; and iv) a central limit theorem that characterizes the stochastic behavior of Monte Carlo estimators as the sample size tends to infinity. The proposed simulation framework inherits the simplicity of the standard antithetic sampling method, requiring the definition of a set of reference points in the sampling space and the generation of uniform numbers on the segments joining the points. We provide applications to Monte Carlo integration and Markov chain Monte Carlo Bayesian estimation.
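For readers unfamiliar with the baseline method, here is a minimal Python sketch of standard antithetic sampling (my illustration, not the authors' framework), estimating $\int_0^1 e^u\,du = e - 1$ by averaging the integrand over the pair $(U, 1-U)$, i.e. over uniforms on the segment joining the reference points 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(42)

def antithetic_mc(f, n_pairs=50_000):
    """Standard antithetic estimator of the integral of f over [0, 1]:
    average f over the negatively correlated pair (U, 1 - U)."""
    u = rng.uniform(size=n_pairs)
    return float(np.mean(0.5 * (f(u) + f(1.0 - u))))

est = antithetic_mc(np.exp)   # estimates e - 1 ≈ 1.71828
print(est)
```

For a monotone integrand like $e^u$, the pair $(f(U), f(1-U))$ is negatively correlated, so the estimator has lower variance than plain Monte Carlo with the same number of function evaluations.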
# Basis of the kernel
## Main Question or Discussion Point
Find a basis for Ker T that contains S = $$\begin{pmatrix} 1\\ 0\\ 1\\ 0\\ \end{pmatrix}$$, $$\begin{pmatrix} 0\\ 1\\ 0\\ 2\\ \end{pmatrix}$$ where $$T : R^4 \to R^4$$ is defined by
$$T\begin{pmatrix} a\\ b\\ c\\ d\\ \end{pmatrix} = \begin{pmatrix} a - b - c\\ a - 2b + c\\ 0\\ 0\\ \end{pmatrix}$$.
Well, I have found a basis 'B' for Ker (T) to be B ={$$\begin{pmatrix} 3\\ 2\\ 1\\ 0\\ \end{pmatrix}$$, $$\begin{pmatrix} 0\\ 0\\ 0\\ 1\\ \end{pmatrix}$$}.
I noticed that the two sets of vectors S and B are linearly independent of one another. Does this mean that there is no basis for Ker(T) that contains S, as a basis must be a minimal spanning set? Or have I gone astray somewhere?
HallsofIvy
Homework Helper
Perhaps you should reread the problem. "Find a basis for the kernel of T that includes <1, 0, 1, 0> and <0, 1, 0, 2>" makes no sense as it is easy to see that those two vectors are NOT in the kernel of T and so cannot be in any basis for that kernel.
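A quick numpy sketch (added here for illustration; the matrix of T is read off from the definition above) confirms this:

```python
import numpy as np

# Matrix of T acting on (a, b, c, d), read off from the definition.
T = np.array([[1, -1, -1, 0],
              [1, -2,  1, 0],
              [0,  0,  0, 0],
              [0,  0,  0, 0]])

S = [np.array([1, 0, 1, 0]), np.array([0, 1, 0, 2])]
B = [np.array([3, 2, 1, 0]), np.array([0, 0, 0, 1])]

for v in B:                    # the proposed basis B really lies in Ker T
    assert not np.any(T @ v)
# Neither vector of S maps to zero, so S is not contained in Ker T.
print([(T @ v).tolist() for v in S])   # [[0, 2, 0, 0], [-1, -2, 0, 0]]
```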
Alright, that is what I had thought but just wanted to verify it with someone. Thank you very much!
1:30 AM
@ApoorvPotnis yep, same problem, I think probably the ores are not just as simple in their actual composition as they are served to us in high school; and the original ore industry might have made advances earlier than organic chemistry did, so the former might not have had the facility to standardize its names (due to lack of comm systems, etc.)
hmm...
14
(+)-Clopidogrel (sold as Plavix) is a common pharmaceutical used to help combat the risk of heart disease. The World Health Organisation lists Plavix on its list of 'essential medicines' due to its broad efficacy, low toxicity and low cost (less than 1USD/month in the developing world). Many i...
@Zhe this drug just appeared in our group problems
so I finally got around to devising a synthesis. After realising that I can't selectively ortho-chlorinate phenylalanine methyl ester, I stole your idea for the asymmetric Strecker (I'm guessing we could hydrolyse the aminonitrile, then TMSCl + MeOH to get the methyl ester). I used 3-(hydroxymethyl)thiophene as a starting material, though. Maybe one day I'll post it as an answer, even though it technically doesn't obey the stated rules, since I'm buying the heterocycle.
@ApoorvPotnis "most of the students have no interest in obtaining correct information but only score high." make that "most" to "many/several", using absolute terms like "most" might not always be a good idea
1:50 AM
@AnyChemSoftwareSpecialist do there exist software to generate resonating structures of elementary organic molecules? checked out this list but its extremely out-dated chemistry.meta.stackexchange.com/questions/291/…
chemsketch can generate the iupac name of elementary molecules, so why not the reso structures? :/
2:43 AM
@GaurangTandon I agree, was going to say that
2:55 AM
@ApoorvPotnis That is a very sad truth. However, if you don't seek to learn, those people should not come here and expect that we confirm whatever wrong things they have in these textbooks. If things are wrong, they must be addressed as that, otherwise there would never be any progress.
@ApoorvPotnis You should be completely aware that the blue book for organic nomenclature is about 1300 pages long. I would not call that a handful of rules. The red book for inorganic nomenclature is only about 400 pages. It is similarly as systematic as organic nomenclature.
@M.A.R. nom nom nom...
@GaurangTandon My canned comment links to the tutorials... I have updated your post with chemistry markup. If you want to know more, please have a look here and here. We prefer to not use MathJax in the title field, see here for details.
@Gaurang Also have a look at the hidden points of editing
@GaurangTandon because resonance structures are very difficult to address... see my question: chemistry.stackexchange.com/q/49440/4945
And going a bit further, resonance in itself is not as straight forward as you might think. There are programs, like NBO, which performs natural resonance theory (example), but it is based on MO theory and localising these orbitals. In any case, to address resonance, you need to know the structure and the electronic structure, a Lewis sketch most of the time is insufficient to do this adequately.
It is unlikely that any simple program can come up with a good set of resonance structures, if even the more complicated ones start to fail. Your brain and your intuition is your most powerful tool here. (Natural resonance theory fails for BF3 to name just one example.)
3:25 AM
@Martin-マーチン this page was quite basic and mostly contained formatting overlapping with math expressions; the second one's good though, bookmarked :)
IIRC there was an effort to combine the extensive MathJax knowledge into one post. But I guess that is just a huge effort...
@Martin-マーチン i'm not asking the software to "name" the reso hybrids, just to draw it, won't that be easier? And no, no natural resonance theory, just simple high school reso theory :(
In high school your brain should be your most powerful tool. Honestly though, resonance is complicated for computers, a human mind is more likely to pick these things up via intuition...
@Martin-マーチン but the red book probably doesn't name the ores, does it? here are some example names "sphalerite, argentite, cerussite, casseterite, chalcoite, chalopyrite, anglessite, galena, matlockite" hardly systematic to me :(
@GaurangTandon -ite ;)
3:31 AM
@Martin-マーチン yes but I had to just draw all eight reso structures for paraterphenyl manually in chemsketch :(
@Martin-マーチン ha that's reasonable xD
Mineralogy and its findings probably predates modern nomenclature and therefore a lot of names were common in the literature before that time. They were retained (old habits are hard to kill). We don't question the use of ammonia, acetic acid, toluene, etc. - organic chemistry does the same thing. The compound space of new stuff just is so big, that trivial names can become more complicated than the systematic stuff
noob question but this post states electronic configurations as $(1s)^2(2s)^2$ I always only saw $1s^22s^2$. Are both notations accepted?
mathjax doesn't work in comments :'(
@Martin-マーチン "old habits are hard to kill" reasonable, it's still hard for me to get rid of ridiculous high school notions :(
@GaurangTandon mathjax should work in comments, at least on the main page, not the app though
@Martin-マーチン $2^2=4$ doesn't work on pc, i never use mobile apps
for chat you have to enable it...
3:39 AM
@Martin-マーチン where's the button?
27
When I was walking through some of the chatrooms on the network, I found that mathematics.se has some rules for their chat. I thought this might be a good idea for us as well, hence I propose these guidelines: Have fun! Our chatroom is intended for casual conversations. There is almost nothing ...
it's hidden in there how to do it somewhere..
@Martin-マーチン RE "organic chemistry does the same thing." well imho the names like "ammonia, acetic acid,etc." are reasonably distinct from one another to be memorized separately; the same cannot be said for metallurgy though, see "casseterite" and "cerrusite" they're sooo distinct yet names are sooo similar :(
@GaurangTandon I think both are accepted, but the orbital denominators should be upright... the easiest to type (and I think supported by now) is \ce{1s^2 2s^2 2p^2}
@Martin-マーチン this is the method; there's a one time use below, will use that
@GaurangTandon I understand what you're saying; and I hate that some schools put so much focus on nomenclature. If you don't know what the chemistry is, why would you need to know how to name it? I think that are wasted opportunities...
@GaurangTandon i use this... stackapps.com/q/4370/27677
3:46 AM
@Martin-マーチン we are indeed taught the metallurgy of ores, but those are very few (alumina, sphalerite, chalcopyrite, galena, haematite, argentite; basically one for each metal) important ores. The real problem is that we are supposed to know the nomenclature of many more confusing ores, because you never know which one they may ask in the exams! :(
yeah, bulls****
If anyone ever said in school "I'm never going to use that anyway!" {S..}he is completely right about that, given the very few exceptions who go into that field.
@Martin-マーチン exactly
@Martin-マーチン that's better
It doesn't use the correct MathJax version though, so sometimes stuff fails... not sure if $\pu{1 bar}$ works...
nah, it doesn't...
the more you edit, the less you need it...
hehe
4:38 AM
i was hoping to set it up such that there's a message in my stackexchange inbox at the top right every time a new question got posted on the chem.SE site
is it possible to do this?
given there are so few questions every day, I wouldn't mind checking them all out
no i don't think this is possible, and not adviseable
right now I have the Newest Questions pgae open all but it isn't the most efficient way to keep track of everything
i did set up a filter stackexchange.com/filters/122668/new-question but am not sure if it does send a msg in my SE inbox
@Martin-マーチン i can, is there a way through that?
there should be a way
4:46 AM
lol that way easy
let's hope it catches new questions just as well
if you want to filter it by tags just do feeds/tags/tag1+tag2+...
@GaurangTandon yes, sorry i was busy... anyway... new messages get dropped in a feed in chat all the time
+ some maths, physics, bio questions
yes that's ok :)
@Martin-マーチン when did we have these questions on Chem.SE?
feeds has a delay though
@GaurangTandon i don't understand the question
@Martin-マーチン you can change the "feed update interval" to 1 minute (minimum) in rss feed reader; though I think 10minutes is reasonable
@Martin-マーチン I didn't understand why you said "maths, physics, bio questions" would drop in a feed for recent questions list on chem.se
you can't change what se is not pushing ;) you can only update how often your reader checks
4:59 AM
@Martin-マーチン oh yeah, lazy backend, that's reasonable :P
in the chat window, not on the main site...
oh ok
whenever there is a new question in one of the feeds, it'll get posted to the top drop down, that includes a few questions on maths, bio, phys
@Martin-マーチン never seen it before for maths/bio/phys though, will keep an eye out
5:03 AM
oh interesting
i've only seen recent questions from chemistry only
5:33 AM
@GaurangTandon Agreed. 'Most' should be changed to 'many/several'.
@Martin-マーチン Yes. The actual rules for nomenclature may be numerous in number but at an introductory level, they are really a handful.
@GaurangTandon @ApoorvPotnis @AvatarShiny are you all giving JEE too?
are you in 12th or a dropper?
Dropper
@Rick that's a nice group ;)
5:39 AM
@GaurangTandon Given the state of research in India, I thought 'most' would be correct.
@ApoorvPotnis well, I would tend to refrain from using absolute terms like "most" as long as I am not in a position to accurately justify the claim; the same claim if made by a professor in the teaching field with +10yo experience would be better; hope you understand :)
Yes
11th
me in 12th though
@Rick you in which class?
Even I'm a dropper
@GaurangTandon Agreed
5:44 AM
@GaurangTandon so you have boards too coming soon right?
@Rick i'm in ISC; boards are going on :P
they started 12th feb
6:37 AM
@Martin-マーチン I had set my feeder to check the recent questions feed every 10mins, and it has just now caught a question that was posted just 4mins ago, i'd say that's pretty excellent
well, there is a new feed item in chat, too
damnit, why do people put tags in titles
7:09 AM
What exactly is the mechanism of hydration by "nascent hydrogen"? According to the wikipedia page, it's an outdated theory, but I don't get the actual mechanism from that page. Like what exactly happens when we convert Nitrobenzne to aniline using Sn with conc HCl?
it looks like even Na/ethanol generates this so called "nascent H", I guess having a metal is important then?
@Rick i'll say it probably doesn't matter, at least at our stage :P
@Rick the complete mechanism might not fit into the chat here, so consider asking a new question on the main site ;) (spoilers: i'll upvote and star it immediately)
7:50 AM
alright done.
8:20 AM
@GaurangTandon if you remove the hydrogens on the ghost atom (circled in pink) I'll accept your edit. in that specific case there are no hydrogens there. chemistry.stackexchange.com/review/suggested-edits/73086
Otherwise it looks great
chemsketch won't let me remove the hydrogens
will a note at the end of the image be ok?
chemsketch limitation; no hydrogens should be there at the end of the image
^^^^ that'll work?
@Martin-マーチン it's also pinging me when latest questions get edited even though I've read them, that's bad :(
@GaurangTandon A cursory glance over the question and someone will be stuck with an incorrect concept.
@AvatarShiny is that ok?
8:36 AM
Yes that's perfect
8:56 AM
Approved
9:14 AM
@AvatarShiny regarding this chat.stackexchange.com/transcript/3229?m=42728627#42728627 I checked my lecture notes, my prof has also said that red P+HI will reduce -COOH
But do you have MS Chouhan book? Because in it, it is given that -COOH isn't reduced by RedP+HI
this is a dead end now
Check chapter Aromatic compounds Q 28 synthesis of ibuprofen
9:41 AM
I've been checking industrial methods but none mentions red P +HI except ms chouhan :(
https://en.wikibooks.org/wiki/Structural_Biochemistry/Ibuprofen
http://khakiimonstarr.blogspot.in/2013/06/synthesis-of-ibuprofen.html
http://jacr.kiau.ac.ir/article_516129_239923d1582447450a36d280e0dd4be6.pdf
everyone says the same thing, none mentions RedP+HI :(
still checking
this paper mentions it (indexed by Google) but I don't have access :'(
accessed! (long live sci-hub)
see this, @AvatarShiny clearly shows that redP+HI didn't reduce carboxylic acid
10:07 AM
had to post the question on main site to validate my things, did chemistry.stackexchange.com/q/90696/5026
1 hour later…
11:29 AM
@GaurangTandon we we're taught that
2 hours later…
1:13 PM
@Martin-マーチン In this post from 2014 you happen to use \mathcalfor formatting the Faraday's constant which gives it a really curvy look
https://chemistry.stackexchange.com/q/20066/5026
Any specific reason for that?
The [wiki post](https://chemistry.meta.stackexchange.com/q/443/5026) doesn't mention it anywhere... _<markdown fails -_->_
I used to type constants with that to distinguish them from variables, but that's just style. I don't do it anymore.
alright I fixed that
and there's one thing that's confusing me for quite a long time now...what is wrong with "number of moles"? :P (honestly serious question though)
I mean, an innocent google search for "number of moles" yields many sites still use that phrase but apparently it's wrong.
I think that's because the mole already refers to a number of some things
so prepending "number of" to it is redundant
like the RAS syndrome?
2:01 PM
@orthocresol O.o
2:41 PM
Hi, would it be ok to have a "pdb" tag on chemistry.SE? I watch this tag on bioinformatics.SE and biology.SE (through email notifications), and now I just realized that there are many similar questions on chemistry: chemistry.stackexchange.com/search?q=pdb but it's harder to monitor them without a tag. I don't have enough points on chemistry to create a tag myself.
2:53 PM
@marcin Could be justified. To be sure you might want to ask on meta.
3:29 PM
@Loong OK, I just asked it on meta
3:54 PM
1
Would it be ok to have a [pdb] tag on chemistry.SE? I watch this tag on bioinformatics.SE and biology.SE (through email notifications), and now I just realized that there are many similar questions on chemistry: https://chemistry.stackexchange.com/search?q=pdb but it's harder to monitor them with...
4:46 PM
0
I encountered this question in the chapter Amines and trust me I've gone through many textbooks but couldn't find the functioning of those reagents, well some of them. Firstly could you explain what does the 3 reagents under the same arrow signify? Also, can you explain the mechanism step by s...
Is this question wrong? I thought Gabriel phthalimide reactions cannot make aromatic amines?
@Rick Yes, probably wrong. If there was dinitrochlorobenzene, it should work though.
Klaus was a softie for poor homework, very rarely see him these days
5:02 PM
Alright, thanks
agree with mithoron, i too was taught that SN2 doesn't take place on aromatic rings until EWG groups are placed at ortho/para positions (SN2Ar)
at the crucial part of his answer, Klaus has linked to en.wikipedia.org/wiki/Nucleophilic_aromatic_substitution which also depicts a EWG group
or was he thinking of the benzyne mechanism (given on that wikipedia page under third bullet point)?
that seems to work without EWGs, i haven't been taught that type of substitution though, so I can't comment if Klaus got that part of his answer exactly wrong
@GaurangTandon It woudn't work in this case I think
@Mithoron probably, but I am not at all sure about the benzyne mechanism
Conc. NaOH can make sodium phenolate from PhCl, but I don't think it's compatible
maybe, though I gotta get some sleep now, cya in the morning tomo, or rather my morning tomorrow, timezones differ, lol
2 hours later…
6:48 PM
I think I'm going to hide a link to chem SE in my dissertation
need to validate my use of this site as a massive time sink
validate to yourself?
...
maybe I should say "easter egg"
7:01 PM
@pentavalentcarbon :)
an (edited) answer to at least one post is going in as a section, so hyperlinks need to be hidden somewhere
7:16 PM
also realized I only have one picture of a molecule
7:32 PM
I put it in my cv.
"have 32,196 reputation on Stack Exchange"
(I didn't write that exactly, but I did put a line about it, especially because there was some kind of recognition:
publish correction: on <insert date>, rep grew to 33,000
...wait...seriously?
yeah
obviously it wasn't my major selling point
it was under "other activities" which included rubbish such as playing in orchestra
@pentavalentcarbon You can totally put it up front. Just do it in a language that no one understands, and it'll be fine
well if that's rubbish then my two fun/valuable hobbies are worthless :(
oh, well, it's rubbish in the sense that it doesn't really tip the balance.
7:37 PM
agreed
It should give you a "well rounded" look.
I can't really tell what impression it gives
it could be "this guy doesn't have a life, in his free time he's still doing chemistry" which may or may not be bad
but because of this I eventually decided to include one line on it
I remember that
Without that tweet I would probably have left it out.
but yeah, it was a really minor point
You're famous
well, internet famous
Also, I've magically made it to the top 30 for all time reputation
Though I'm not sure if that's good or bad... :/
7:57 PM
organic chemists get all the rep
where's my communism???
@Zhe You're gonna get 10k soon-ish
Seeing deleted posts and stuff...
@pentavalentcarbon I got +105 for answering a question about the shape of d-orbitals...
In a way that I'm sure will make a real chemist cringe
o.0 Oh not even mention it ;D
I want to say I'd rather fall on a sword, but one of my most upvoted answers was "what does F10.8 mean in F77"
terrible
most of the good stuff...not so much
@pentavalentcarbon I don't even know the words in that question
8:02 PM
format specifier in Fortran
tch shame on you
If it was at least F95... :D
I'm not touching fortran
actually didn't need to say f77, it is valid in all versions, and frankly no different from %10.8f
anyway, back to molecules
@pentavalentcarbon Ehh, learning fortran is some waste of time these days
since none of my good stuff gets upvoted, I created my own ~/Dropbox/stack_exchange/chemistry/completed/hall_of_fame for my own amusement
8:06 PM
@Mithoron It's OK, he knows python
@Zhe Of course :D
@Mithoron like all things, it depends...it's more about maintaining old code than anything else...note that I can read but cannot write it
It just doesn't do much good to learn it additionally
@pentavalentcarbon No, it's not. I just delete the old files.
ok snark master, you know what I mean
8:09 PM
But some software and old-timer scientists still use it ;)
if you've used quantum chemistry software you've used something written in Fortran
I wonder if there's any fortran in the codebase at work
what's the standard fortran extension?
.F or .f for 77, .f90 for 90, .f95, .f03, .f08
oh, we have some 3rd party R packages checked in
unsurprising, they probably call very old stats stuff
8:23 PM
There's certainly no qchem or gaussian
I lied
O.M.F.G.
There's some random library that will read a qchem checkpoint file
Not sure if that's an overinclude or what
fchk?
But wow
I dunno
I'm not reading it that closely
it's in pymatgen?
oh, makes sense
fchk is like xyz, it's "standardized" but not, and abused in all sorts of ways
can hold all sorts of...goodies...
I won't ask why pymatgen is in there
8:29 PM
And I wouldn't be able to answer
And if I found out I probably couldn't tell you
8:50 PM
@M.A.R. o/
Hey
I was busy teasing the other Mith in another chat
Oh no! ;)
waves a leek at @M.A.R.
:D
@Mithrandir you're far from home
9:00 PM
You triggered my self-stalker system.
Which stalks . . . yourself?
Besides, I figured it might be a good thing if I periodically dropped in here.
@M.A.R. Naturally.
@Mithrandir I think I should call NatGeo
too many punners in here rn
You are truly a unique species
You too penta, but you already know
9:03 PM
you don't know the half of it
I know most of it
@pentavalentcarbon I take it that people do periodically make puns on the name of the room?
Except the simulations and the nerdy chemistry
@Mithrandir no, just this other one cough says weird stuff sometimes
@Mithrandir but they're much better quality puns
Objectively measured
@pentavalentcarbon hey who you calling cough?
Hey Eva. Sorry, @NotEva
9:06 PM
@M.A.R. you've outed yourself, they aren't "simulations"
@pentavalentcarbon portrayals?
@M.A.R. more like "wishful thinking"
That funny looking lady is Mona Lisa, not pentavalentcarbon
sorry, I look nothing like a funny-looking lady, IMO
@pentavalentcarbon shame
@pentavalentcarbon idea: What if, to, say, a lemur, a redneck with 300 kilograms of beard is like a beautiful lady?
9:12 PM
no comment
Let's change the subject
And everything we think we think
Or the recent scandals
Or the not so recent scandal
Goddammit, is there anything joyful in the society?
you think you think?
@pentavalentcarbon I think I think I think I can't decipher that message
I think
I would google image search for "head exploding" but I have safe search turned off and I don't want to be scarred for life
FWIW, a chest exploding would look uglier
9:18 PM
I had some rum raisin the other night
it was good
@pentavalentcarbon the only real flavor is pure vanilla
with no thoughts of exploding body parts in mind
@M.A.R. I'll pretend you didn't say that
@pentavalentcarbon such a loss. Not too late though. Next time you eat ice cream, think about exploding body parts
@pentavalentcarbon well, I hate real chocolate too
9:22 PM
@M.A.R. what, are there only 2 flavors over there?
@pentavalentcarbon wat
No, well, I meant I hate that dark chocolate they call real chocolate too
I am so confused right now
what is the definition of "real flavor"?
are we talking about canonical ice cream flavors?
@pentavalentcarbon real flavor argument is something you use when you have some sort of enjoyable thing that you don't enjoy because of the flavor
that's almost as confusing to me as most of the questions on this site
@pentavalentcarbon are we talking about anything?
9:27 PM
@M.A.R. AAAARRRRRGHHHHHHHHHH
wheres @chemobot, I need a !!beer
I'll pass
geom_segment
Line segments and curves
geom_segment draws a straight line between points (x, y) and (xend, yend). geom_curve draws a curved line. See the underlying drawing function grid::curveGrob() for the parameters that control the curve.
Usage
geom_segment(mapping = NULL, data = NULL, stat = "identity",
  position = "identity", ..., arrow = NULL, arrow.fill = NULL,
  lineend = "butt", linejoin = "round", na.rm = FALSE,
  show.legend = NA, inherit.aes = TRUE)

geom_curve(mapping = NULL, data = NULL, stat = "identity",
  position = "identity", ..., curvature = 0.5, angle = 90, ncp = 5,
  arrow = NULL, arrow.fill = NULL, lineend = "butt", na.rm = FALSE,
  show.legend = NA, inherit.aes = TRUE)
Arguments
mapping
Set of aesthetic mappings created by aes() or aes_(). If specified and inherit.aes = TRUE (the default), it is combined with the default mapping at the top level of the plot. You must supply mapping if there is no plot mapping.
data
The data to be displayed in this layer. There are three options:
If NULL, the default, the data is inherited from the plot data as specified in the call to ggplot().
A data.frame, or other object, will override the plot data. All objects will be fortified to produce a data frame. See fortify() for which variables will be created.
A function will be called with a single argument, the plot data. The return value must be a data.frame, and will be used as the layer data.
stat
The statistical transformation to use on the data for this layer, as a string.
position
Position adjustment, either as a string, or the result of a call to a position adjustment function.
...
Other arguments passed on to layer(). These are often aesthetics, used to set an aesthetic to a fixed value, like colour = "red" or size = 3. They may also be parameters to the paired geom/stat.
arrow
specification for arrow heads, as created by arrow().
arrow.fill
fill colour to use for the arrow head (if closed). NULL means use colour aesthetic.
lineend
Line end style (round, butt, square).
linejoin
Line join style (round, mitre, bevel).
na.rm
If FALSE, the default, missing values are removed with a warning. If TRUE, missing values are silently removed.
show.legend
logical. Should this layer be included in the legends? NA, the default, includes if any aesthetics are mapped. FALSE never includes, and TRUE always includes. It can also be a named logical vector to finely select the aesthetics to display.
inherit.aes
If FALSE, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions that define both data and aesthetics and shouldn't inherit behaviour from the default plot specification, e.g. borders().
curvature
A numeric value giving the amount of curvature. Negative values produce left-hand curves, positive values produce right-hand curves, and zero produces a straight line.
angle
A numeric value between 0 and 180, giving an amount to skew the control points of the curve. Values less than 90 skew the curve towards the start point and values greater than 90 skew the curve towards the end point.
ncp
The number of control points used to draw the curve. More control points creates a smoother curve.
Details
Both geoms draw a single segment/curve per case. See geom_path if you need to connect points across multiple cases.
Aesthetics
geom_segment() understands the following aesthetics (required aesthetics are in bold):
• x
• y
• xend
• yend
• alpha
• colour
• group
• linetype
• size
Learn more about setting these aesthetics in vignette("ggplot2-specs").
geom_path() and geom_line() for multi-segment lines and paths.
geom_spoke() for a segment parameterised by a location (x, y), and an angle and radius.
• geom_segment
• geom_curve
Examples
# NOT RUN {
b <- ggplot(mtcars, aes(wt, mpg)) +
geom_point()
df <- data.frame(x1 = 2.62, x2 = 3.57, y1 = 21.0, y2 = 15.0)
b +
geom_curve(aes(x = x1, y = y1, xend = x2, yend = y2, colour = "curve"), data = df) +
geom_segment(aes(x = x1, y = y1, xend = x2, yend = y2, colour = "segment"), data = df)
b + geom_curve(aes(x = x1, y = y1, xend = x2, yend = y2), data = df, curvature = -0.2)
b + geom_curve(aes(x = x1, y = y1, xend = x2, yend = y2), data = df, curvature = 1)
b + geom_curve(
aes(x = x1, y = y1, xend = x2, yend = y2),
data = df,
arrow = arrow(length = unit(0.03, "npc"))
)
ggplot(seals, aes(long, lat)) +
geom_segment(aes(xend = long + delta_long, yend = lat + delta_lat),
arrow = arrow(length = unit(0.1,"cm"))) +
borders("state")
# Use lineend and linejoin to change the style of the segments
df2 <- expand.grid(
lineend = c('round', 'butt', 'square'),
linejoin = c('round', 'mitre', 'bevel'),
stringsAsFactors = FALSE
)
df2 <- data.frame(df2, y = 1:9)
ggplot(df2, aes(x = 1, y = y, xend = 2, yend = y, label = paste(lineend, linejoin))) +
geom_segment(
lineend = df2$lineend, linejoin = df2$linejoin,
size = 3, arrow = arrow(length = unit(0.3, "inches"))
) +
geom_text(hjust = 'outside', nudge_x = -0.2) +
xlim(0.5, 2)
# You can also use geom_segment to recreate plot(type = "h") :
counts <- as.data.frame(table(x = rpois(100,5)))
counts$x <- as.numeric(as.character(counts$x))
with(counts, plot(x, Freq, type = "h", lwd = 10))
ggplot(counts, aes(x, Freq)) +
geom_segment(aes(xend = x, yend = 0), size = 10, lineend = "butt")
# }
Documentation reproduced from package ggplot2, version 3.1.1, License: GPL-2 | file LICENSE
## Minimising the action for surface area in AdS
Hello people.
I've got an action which needs minimising
$$\int dr \ r \sqrt{U'^{2}+U^{4}}$$
where $U = U(r)$. Simply plugging this into the Euler-Lagrange equations yields a nasty-looking second-order nonlinear differential equation. I'm just wondering if there's an easier way of solving for $U(r)$. I've tried passing over into Hamiltonian mechanics, but that seemed to confuse matters slightly (I probably got it wrong). I'm also wondering if there's some application of Noether's theorem that could give a solvable differential equation. As always, much thanks for your help.
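One way to sanity-check the conjectured profile without solving the ODE by hand is to let a computer algebra system form the Euler-Lagrange equation and substitute the candidate into it. A sketch with sympy, writing L0 for the constant L from the post:

```python
import sympy as sp

r, L0 = sp.symbols('r L0', positive=True)
U = sp.Function('U')

# Lagrangian density of the area functional: L = r * sqrt(U'^2 + U^4)
lagrangian = r * sp.sqrt(U(r).diff(r)**2 + U(r)**4)

# Euler-Lagrange equation dL/dU - d/dr(dL/dU') = 0
el = sp.euler_equations(lagrangian, U(r), r)[0]

# Substitute the conjectured profile U(r) = (L0^2 - r^2)^(-1/2)
trial = (L0**2 - r**2)**sp.Rational(-1, 2)
residual = el.lhs.subs(U(r), trial).doit()

# Evaluate at an interior point; the residual comes out to zero (to
# numerical precision), so the profile solves the EL equation exactly.
val = residual.subs({L0: 1, r: sp.Rational(1, 3)}).evalf()
print(val)
```

If the residual were nonzero, the same setup would still hand you the second-order ODE (`el`) in a form ready for `sp.dsolve` or numerical integration.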
I see integral w.r.t. tau and U is a function of what? r? BTW: What is the interval of integration?
Hey, thanks for having a look at this,
I think latex has just made the r look like a tau. It's meant to read
Integral dr*r*(U'^2 + U^4)^1/2
Also, the integration region is between 0 and some constant L, but that will lead to an infinity, so the problem is actually integrated between 0 and (L^2-c^2)^{1/2}. I know the answer is roughly U(r)=(L^2-r^2)^{-1/2}, but my real problem is trying to derive this.
Well, it seems the integral may very well be zero with the right choice of U' and U over an interval of positive r. Considering the even powers involved, there are not many choices other than U = 0 (identically).
Burning Fuels and Producing Water
I have a question about Optics and how this links to burning fuels in a combustion reaction.
If I have hexane, the following reaction occurs:
hexane + oxygen $\rightarrow$ carbon dioxide + water vapour
Now, I have a question. Why don't we tend to see any water being formed when we burn methane on a gas cooker?
This is only because the same equation can also be applied to methane in our cookers:
methane + oxygen $\rightarrow$ carbon dioxide + water vapour
I tried this earlier while preparing my food and it turns out I don't see any, even though I am burning the fuel.
• Water vapour is invisible. It's not mist. – Bergi Nov 29 '17 at 0:48
• If you're ever boiling water in a pot with a glass lid, watch what happens. At first you can't see inside, because water has condensed all over the inside of the lid. Then at full rolling boil, you can see clearly into the pot. Actual steam has filled the pot, and steam is invisible. If you look closely where the steam is coming out of the vent hole in the lid, you'll see there's a small transparent gap (a few mm) at the vent outlet, then the cloud of steam forms above the hole. The steam emerging from the pot is invisible. The water droplets forming in the air are not. – Jason Nov 29 '17 at 13:18
The air in the kitchen is warm enough and dry enough that the water vapor isn't condensing. If you want to see it, take a pan and fill it will ice water, then put it over the flame. You'll see the water vapor condensing on the outside of the pot.
• If you use ice water you may get condensation even without lighting the gas (if your kitchen is humid) so try a control experiment. On the other hand if you use room-temperature water you should see some condensation at first. Cold tap water works very well – Chris H Nov 29 '17 at 10:33
Because the water vapour/$CO_2$ mixture is very hot. It then mixes with the surrounding air and dilutes too much to condense. This is why you can't "see" it.
But hold a really cold pan in the flame and initially you'll see water condense onto the surface of the pan. Then the pan heats up and the condensation evaporates again.
An introductory course in computational physics for upper-level undergraduates. Topics covered include numerical linear algebra, eigenvalue problems, sparse matrix problems, numerical integration and initial-value problems, Fourier transforms, and Monte Carlo simulations. Programming examples are based on Scientific Python.
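As a flavor of the Monte Carlo topic listed above, here is a minimal Python sketch (function name and sample count are illustrative, not part of the course materials) that estimates π by uniform sampling:

```python
import random

def mc_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: sample points uniformly in the unit
    square and count the fraction that lands inside the quarter disk."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(mc_pi(100_000))  # close to 3.1416
```

The estimate converges like $1/\sqrt{N}$, which is the standard starting point for discussing variance reduction.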
# Which of the following CANNOT be a value of 1/(x - 1)?
Math Expert
Joined: 02 Sep 2009
Posts: 57155
Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
16 Sep 2018, 22:11
Which of the following CANNOT be a value of $$\frac{1}{x-1}$$?
A. -1
B. 0
C. 2/3
D. 1
E. 2
_________________
NUS School Moderator
Joined: 18 Jul 2018
Posts: 1028
Location: India
Concentration: Finance, Marketing
WE: Engineering (Energy and Utilities)
Re: Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
16 Sep 2018, 22:20
Equating all the options to the given equation $$\frac{1}{x-1}$$
Only 0 doesn't satisfy.
$$\frac{1}{x-1}$$ = 0.
Gives 1 = 0. Which is not possible.
_________________
Press +1 Kudos If my post helps!
Director
Joined: 06 Jan 2015
Posts: 697
Location: India
Concentration: Operations, Finance
GPA: 3.35
WE: Information Technology (Computer Software)
Re: Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
16 Sep 2018, 22:23
Bunuel wrote:
Which of the following CANNOT be a value of $$\frac{1}{x-1}$$?
A. -1
B. 0
C. 2/3
D. 1
E. 2
Only B is not possible
1/0 is undefined. Hence B
To Verify you can substitute the value for x
Sub x=0 then $$\frac{1}{x-1}$$ = -1
Sub x=5/2 then $$\frac{1}{x-1}$$ = 2/3
Sub x=2 then $$\frac{1}{x-1}$$ = 1
Sub x=3 then $$\frac{1}{x-1}$$ = 2
Hence B
_________________
आत्मनॊ मोक्षार्थम् जगद्धिताय च
Resource: GMATPrep RCs With Solution
Director
Joined: 06 Jan 2015
Posts: 697
Location: India
Concentration: Operations, Finance
GPA: 3.35
WE: Information Technology (Computer Software)
Re: Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
17 Sep 2018, 19:10
1
Bunuel Can you check the OA?
We are getting as B...
Math Expert
Joined: 02 Sep 2009
Posts: 57155
Re: Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
17 Sep 2018, 20:19
NandishSS wrote:
Bunuel Can you check the OA?
We are getting as B...
_________________________
B is the OA. Edited. Thank you.
_________________
Senior Manager
Joined: 29 Dec 2017
Posts: 382
Location: United States
Concentration: Marketing, Technology
GMAT 1: 630 Q44 V33
GMAT 2: 690 Q47 V37
GMAT 3: 710 Q50 V37
GPA: 3.25
WE: Marketing (Telecommunications)
Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
18 Sep 2018, 15:06
See the graphic approach to this question below. The lines never cross the x-axis, so the function y will never equal 0. Answer B.
[Attachment: 3.png, plot of y = 1/(x - 1)]
CEO
Joined: 12 Sep 2015
Posts: 3912
Re: Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
18 Sep 2018, 16:30
Top Contributor
Bunuel wrote:
Which of the following CANNOT be a value of $$\frac{1}{x-1}$$?
A. -1
B. 0
C. 2/3
D. 1
E. 2
In order for x/y to equal 0, we need x to equal 0
So, we can see that, in the fraction 1/(x - 1), the numerator does NOT equal zero.
This means that 1/(x - 1) CANNOT equal 0.
Cheers,
Brent
_________________
Test confidently with gmatprepnow.com
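The case analysis above can also be run mechanically: for each candidate value v, ask whether 1/(x - 1) = v has a real solution. A sketch with sympy:

```python
from sympy import symbols, solveset, S, Rational

x = symbols('x')
expr = 1 / (x - 1)

# For each answer choice, solve 1/(x-1) = value over the reals.
for value in [-1, 0, Rational(2, 3), 1, 2]:
    solutions = solveset(expr - value, x, domain=S.Reals)
    print(value, solutions)  # only value = 0 yields the empty set
```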
CEO
Joined: 12 Sep 2015
Posts: 3912
Re: Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
18 Sep 2018, 16:40
Top Contributor
Bunuel wrote:
Which of the following CANNOT be a value of $$\frac{1}{x-1}$$?
A. -1
B. 0
C. 2/3
D. 1
E. 2
Alternatively, we can test each answer choice
A) -1
Is it possible for 1/(x-1) to equal -1?
Let's find out.
We'll see if we can solve the equation: 1/(x-1) = -1
Multiply both sides by (x-1) to get: 1 = -1(x-1)
Expand right side: 1 = -x + 1
Solve: x = 0
So, when x = 0, 1/(x-1) = -1
Since 1/(x-1) CAN equal -1, we can ELIMINATE A
B) 0
Is it possible for 1/(x-1) to equal 0?
Let's find out.
We'll see if we can solve the equation: 1/(x-1) = 0
Multiply both sides by (x-1) to get: 1 = 0
Hmmmm.
Looks like 1/(x-1) CANNOT equal 0
At this point, I'd select B and move on.
But, for "kicks" let's keep going
C) 2/3
Multiply both sides by (x-1) to get: 1 = (2/3)(x-1)
Expand right side: 1 = 2x/3 - 2/3
Add 2/3 to both sides: 5/3 = 2x/3
Multiply both sides by 3 to get: 5 = 2x
Solve: x = 2.5
So, when x = 2.5, 1/(x-1) = 2/3
Since 1/(x-1) CAN equal 2/3, we can ELIMINATE C
D) 1
Multiply both sides by (x-1) to get: 1 = (1)(x-1)
Expand right side: 1 = x - 1
Add 1 to both sides: 2 = x
So, when x = 2, 1/(x-1) = 1
Since 1/(x-1) CAN equal 1, we can ELIMINATE D
E) 2
Multiply both sides by (x-1) to get: 1 = (2)(x-1)
Expand right side: 1 = 2x - 2
Add 2 to both sides: 3 = 2x
Solve: x = 3/2
So, when x = 3/2, 1/(x-1) = 2
Since 1/(x-1) CAN equal 2, we can ELIMINATE E
Cheers,
Brent
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 7399
Location: United States (CA)
Re: Which of the following CANNOT be a value of 1/(x - 1)? [#permalink]
18 Sep 2018, 18:48
Bunuel wrote:
Which of the following CANNOT be a value of $$\frac{1}{x-1}$$?
A. -1
B. 0
C. 2/3
D. 1
E. 2
Recall that zero divided by a nonzero quantity is zero. However, we see that the numerator of the given expression is not zero and can never be zero, therefore, its value cannot be zero.
_________________
# Scott Woodbury-Stewart
Founder and CEO
Scott@TargetTestPrep.com
# Co-variance of dependent binomial random variables.
We roll a die $n$ times; let $X$ (a random variable) be the number of times we get 2, and $Y$ the number of times we get 3. I know that $X, Y \sim \mathrm{bin}(n,\frac{1}{6})$, and I need to find $\mathrm{Cov}(X,Y)$. I thought about first calculating $\mathbb{E}(XY)$, but I'm stuck.
Let $X_i$ take value $1$ if by the $i$-th roll the die shows a 2, and let it take value $0$ otherwise.
Let $Y_i$ take value $1$ if by the $i$-th roll the die shows a 3, and let it take value $0$ otherwise.
Then by bilinearity of covariance we find: $$\mathsf{Cov}(X,Y)=\mathsf{Cov}(\sum_{i=1}^nX_i,\sum_{j=1}^nY_j)=\sum_{i=1}^n\sum_{j=1}^n\mathsf{Cov}(X_i,Y_j)$$
It is evident that $X_i$ and $Y_j$ are independent if $i\neq j$ so that in that case $\mathsf{Cov}(X_i,Y_j)=0$.
Also it is evident that $\mathsf{Cov}(X_i,Y_i)=\mathsf{Cov}(X_1,Y_1)$ for every $i$, so we end up with:
$$\mathsf{Cov}(X,Y)=n\mathsf{Cov}(X_1,Y_1)$$
Can you find $\mathsf{Cov}(X_1,Y_1)$ yourself?
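Following the hint: $\mathsf{Cov}(X_1,Y_1)=\mathbb{E}[X_1Y_1]-\mathbb{E}[X_1]\mathbb{E}[Y_1]=0-\frac{1}{36}$ (a single roll cannot show both a 2 and a 3), so $\mathsf{Cov}(X,Y)=-\frac{n}{36}$. A quick Python simulation (parameter values are arbitrary) is consistent with this:

```python
import random

def estimate_cov(n_rolls, n_trials=200_000, seed=1):
    """Simulate n_rolls die rolls n_trials times and estimate Cov(X, Y),
    where X counts the 2s and Y counts the 3s in one experiment."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_trials):
        rolls = [rng.randint(1, 6) for _ in range(n_rolls)]
        xs.append(rolls.count(2))
        ys.append(rolls.count(3))
    mean_x = sum(xs) / n_trials
    mean_y = sum(ys) / n_trials
    return sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys)) / n_trials

n = 10
print(estimate_cov(n))  # close to -n/36 = -0.2778
print(-n / 36)
```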
• Cov(X,Y)=n(n−1)Cov(X1,Y2)+nCov(X1,Y1) Why is that? – Eran Dec 26 '17 at 14:59
How does salt change the pH of water?
Jun 16, 2016
Common salt, $NaCl$, the stuff with which you season your fish and chips, does not change the $pH$ of water.
Explanation:
$NaCl$ is the salt of a strong acid and a strong base. The salt does not affect the equilibrium values of $[H_{3}O^{+}]$ and $[HO^{-}]$:
$HCl(aq) + NaOH(aq) \rightarrow NaCl(aq) + H_{2}O(l)$
The reaction as written lies strongly to the right.
On the other hand we could perform the equivalent reaction with the weak acid, hydrogen fluoride:
$HF(aq) + NaOH(aq) \rightleftharpoons H_{2}O(l) + NaF(aq)$
The sodium fluoride product would participate in the back reaction to some extent, and a solution of $NaF$ in water would give a slightly basic solution: $F^{-} + H_{2}O \rightleftharpoons HF + HO^{-}$
## Calculus (3rd Edition)
Minimum value is 5 at $x=0$ and maximum value is 21 at $x=2$
$f'(x) = 4x+4$, and $f$ is differentiable everywhere. So the only critical point is the $c$ such that $f'(c)=0$, i.e. $4c+4=0 \implies 4c=-4 \implies c=-1$. But this point lies outside the given interval. To find the maximum and minimum values we therefore compare the values at the endpoints only (since there are no critical points inside the given interval):
$f(0) = 2(0)^{2}+4(0)+5 = 5$
$f(2) = 2(2)^{2}+4(2)+5 = 21$
The minimum value is 5 at $x=0$ and the maximum value is 21 at $x=2$.
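A quick numerical check of the endpoint comparison:

```python
def f(x):
    return 2 * x**2 + 4 * x + 5

# f'(x) = 4x + 4 vanishes only at x = -1, which is outside [0, 2],
# so the extrema must occur at the endpoints.
endpoint_values = {x: f(x) for x in (0, 2)}
print(endpoint_values)  # {0: 5, 2: 21}
```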
# 8.3.7. Calculate diffusion coefficients
The diffusion coefficient D of a material can be measured in at least 2 ways using various options in LAMMPS. See the examples/DIFFUSE directory for scripts that implement the 2 methods discussed here for a simple Lennard-Jones fluid model.
The first method is to measure the mean-squared displacement (MSD) of the system, via the compute msd command. The slope of the MSD versus time is proportional to the diffusion coefficient. The instantaneous MSD values can be accumulated in a vector via the fix vector command, and a line fit to the vector to compute its slope via the variable slope function, and thus extract D.
The second method is to measure the velocity auto-correlation function (VACF) of the system, via the compute vacf command. The time-integral of the VACF is proportional to the diffusion coefficient. The instantaneous VACF values can be accumulated in a vector via the fix vector command, and time integrated via the variable trap function, and thus extract D.
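Outside LAMMPS, the MSD route boils down to the Einstein relation MSD(t) ~ 2*d*D*t, so D = slope/(2*d). The sketch below fits that slope for a synthetic 3-D lattice random walk in place of a real trajectory; all parameter values are illustrative:

```python
import random

def diffusion_from_msd(n_walkers=300, n_steps=1000, step=0.1, dt=1.0, seed=2):
    """Estimate D from the slope of the mean-squared displacement (MSD)
    of independent 3-D random walkers, via MSD(t) = 2*d*D*t with d = 3."""
    rng = random.Random(seed)
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_walkers):
        pos = [0.0, 0.0, 0.0]
        for i in range(1, n_steps + 1):
            for k in range(3):
                pos[k] += rng.choice((-step, step))
            msd[i] += sum(c * c for c in pos)
    times = [i * dt for i in range(1, n_steps + 1)]
    avg = [msd[i] / n_walkers for i in range(1, n_steps + 1)]
    # Least-squares slope of MSD versus t, constrained through the origin.
    slope = sum(t * m for t, m in zip(times, avg)) / sum(t * t for t in times)
    return slope / (2 * 3)

# For this walk the exact answer is 3 * step**2 / (6 * dt) = 0.005.
print(diffusion_from_msd())
```

The VACF method is analogous: time-integrate the velocity autocorrelation instead of fitting the MSD slope.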
Question
# A cone, a hemisphere and a cylinder stand on equal bases and have the same height. Show that their volumes are in the ratio $$1:2:3$$.
Solution
Let the radius of the cone, hemisphere and cylinder be r. Then, height of the hemisphere = r, height of the cone = r, and height of the cylinder = r.
Let $$V_{1}, V_{2}, V_{3}$$ be the volumes of the cone, hemisphere and cylinder respectively. Then,
$$V_{1} = \dfrac{1}{3}\pi r^{2}\times r = \dfrac{1}{3}\pi r^{3}$$
$$V_{2} = \dfrac{2}{3}\pi r^{3}$$
$$V_{3} = \pi r^{2}\times r = \pi r^{3}$$
$$\therefore V_{1}:V_{2}:V_{3} = \dfrac{1}{3}:\dfrac{2}{3}:1 = 1:2:3$$
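The three volume formulas can be checked symbolically; a short sympy sketch:

```python
from sympy import Rational, pi, symbols

r = symbols('r', positive=True)
v_cone = Rational(1, 3) * pi * r**2 * r      # (1/3) * pi * r^2 * h with h = r
v_hemisphere = Rational(2, 3) * pi * r**3
v_cylinder = pi * r**2 * r                   # pi * r^2 * h with h = r

ratio = [(v / v_cone).simplify() for v in (v_cone, v_hemisphere, v_cylinder)]
print(ratio)  # [1, 2, 3]
```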
# 4th Class Mathematics Geometrical Figures
Geometrical Figures
Category : 4th Class
Geometrical Figures
Introduction
In our day to day life we come across a number of objects. All the objects has a specific shape and size. We recognize a number of objects by their shape. Therefore, to know about the objects and of their shapes is very important. In this chapter we will study about the shapes of different geometrical figures.
Point
To show a particular location, a dot (.) is placed over it, that dot is known as a point.
$\centerdot \quad A$
A is a point
Line Segment
Line segment is defined as the shortest distance between two fixed points. It has fixed length.
• Example
How many line segments are there in the following figure?
Solution:
There are 6 line segments in the given figure.
Ray
It is defined as the extension of a line segment in one direction up to infinity.
• Example
How many rays are there in the following figure?
Solution: There are 12 rays.
Line
Line is defined as the extension of a line segment up to infinite in either direction.
• Example
How many lines are there in the following figure?
Solution: There are two lines.
Angle
Inclination between two rays having common end point is called angle.
$\angle ABC$ is a angle
A Right Angle
An angle whose measure is exactly $90{}^\circ$is a right angle.
$\angle ABC$is a right angle
An Acute Angle
An angle whose measure is less than $90{}^\circ$is an acute angle.
$\angle DEF$is an acute angle.
An Obtuse Angle
An angle whose measure is greater than $90{}^\circ$ but less than $180{}^\circ$is an obtuse angle.
$\angle LMN$is an obtuse angle
• Example
The following angle is a____.
Solution: It is an obtuse angle, because its measure is greater than$90{}^\circ$.
Polygon
A Simple closed figure formed of three or more line segments is called a polygon. Line segments which form a polygon are called its sides. The point at which two adjacent sides of a polygon meet, is called a vertex of polygon. In the given figure, triangle, quadrilateral, pentagon and hexagon, all are examples of polygon.
Types of Polygons
Regular and Irregular Polygon
A regular polygon has all sides equal and all angles equal, otherwise, it is an irregular polygon.
Simple and Complex Polygon
A simple polygon has only one boundary and it does not cross over itself. A complex polygon intersects itself.
Names of Polygons
Names of polygons based on their number of sides are given in the following table:
| Name | Number of Sides | Number of Vertices | Number of Angles |
| --- | --- | --- | --- |
| Triangle | 3 | 3 | 3 |
| Quadrilateral | 4 | 4 | 4 |
| Pentagon | 5 | 5 | 5 |
| Hexagon | 6 | 6 | 6 |
Triangle
The geometrical shapes having three sides are called triangles. There are various types of triangles.
Equilateral Triangle
A triangle whose all sides are equal is called equilateral triangle.
Triangle ABC is an equilateral triangle.
Isosceles Triangle
A triangle whose two sides are equal is called isosceles triangle.
In triangle ABC AB = AC = 4 cm hence, triangle ABC is an isosceles triangle.
Scalene Triangle
A triangle whose sides are unequal is called scalene triangle. In the figure below side lengths of triangle ABC are not equal hence, triangle ABC is a scalene triangle.
A Right Angle Triangle
A triangle whose one angle is a right angle is called right angled triangle. In the picture below triangle ABC is a right angle.
Acute Angled Triangle
A triangle with all the angles as acute angles is called acute angled triangle.
Obtuse Angled Triangle
A triangle with one angle as obtuse angle and other two angles are acute angles is called an obtuse angled triangle.
• Example
Name the triangle whose all sides are equal.
Solution:
Equilateral triangle
• Example
Name the triangle given below
Solution:
Right angled triangle because one angle is right angle.
Quadrilateral
The geometrical figure having four sides is called a quadrilateral.
Rectangle
Rectangle is a quadrilateral in which:
(i) All angles are of$90{}^\circ$.
(ii) Opposite sides are equal.
Square
Square is a quadrilateral, in which:
(i) All angles are of$90{}^\circ$.
(ii) All sides are equal.
Note: Sum of the angles of a quadrilateral is $360{}^\circ$and sum of all the interior angles of a triangle is $180{}^\circ$
Circle
Circle is a closed curved line whose all points are at the same distance from a given point in a plane.
In the above circle, 0 is the centre of the circle. OA, OB and OC are radius of the circle. AB is the diametre of the circle.
Chord
A line segment joining two distinct points which lie on the circle is called chord of the circle.
In the figure given above DE and AB are chord of the circle.
Radius
Radius of a circle is half of the diametre.
In the figure given under circle, OA, OB and OC are radius of the circle.
Diametre
Diametre is the largest chord of the circle.
In the figure given under circle, AB is the diametre of the circle.
• Example
Diametre of a circle is 8 cm. Find the radius of the circle.
Solution:
$Radius=\frac{Diametre}{2}=\frac{8}{2}cm=4cm$
base-4.17.0.0: Basic libraries
GHC.TypeError
Description
This module exports the TypeError family, which is used to provide custom type errors, and the ErrorMessage kind used to define these custom error messages. This is a type-level analogue to the term level error function.
Since: base-4.16.0.0
Synopsis
# Documentation
A description of a custom type error.
Constructors
Text Symbol
    Show the text as is.
forall t. ShowType t
    Pretty print the type.
    ShowType :: k -> ErrorMessage
ErrorMessage :<>: ErrorMessage infixl 6
    Put two pieces of error message next to each other.
ErrorMessage :$$: ErrorMessage infixl 5
    Stack two pieces of error message on top of each other.

type family TypeError (a :: ErrorMessage) :: b where ... Source #

The type-level equivalent of error. The polymorphic kind of this type allows it to be used in several settings. For instance, it can be used as a constraint, e.g. to provide a better error message for a non-existent instance,

-- in a context
instance TypeError (Text "Cannot Show functions." :$$:
                    Text "Perhaps there is a missing argument?")
      => Show (a -> b) where
    showsPrec = error "unreachable"
It can also be placed on the right-hand side of a type-level function to provide an error for an invalid case,
type family ByteSize x where
ByteSize Word16 = 2
ByteSize Word8 = 1
ByteSize a = TypeError (Text "The type " :<>: ShowType a :<>:
Text " is not exportable.")
Since: base-4.9.0.0
type family Assert check errMsg where ... Source #
A type-level assert function.
If the first argument evaluates to true, then the empty constraint is returned; otherwise the second argument (which is intended to be something that reduces to a TypeError) is used.
For example, given some type level predicate P' :: Type -> Bool, it is possible to write the type synonym
type P a = Assert (P' a) (NotPError a)
where NotPError reduces to a TypeError which is reported if the assertion fails.
Since: base-4.16.0.0
Equations
Assert 'True _      = ()
Assert _     errMsg = errMsg
AInfinity -- A-infinity algebra and module structures on free resolutions
Description
Following Jesse Burke's paper "Higher Homotopies and Golod Rings", given a polynomial ring S and a factor ring R = S/I and an R-module X, we compute (finite) A-infinity algebra structure mR on an S-free resolution of R and the A-infinity mR-module structure on an S-free resolution of X, and use them to give a finite computation of the maps in an R-free resolution of X that we call the Burke resolution. Here is an example with the simplest Golod non-hypersurface in 3 variables
i1 : S = ZZ/101[a,b,c]

o1 = S

o1 : PolynomialRing

i2 : R = S/(ideal(a)*ideal(a,b,c))

o2 = R

o2 : QuotientRing

i3 : mR = aInfinity R;

i4 : res coker presentation R

      1      3      3      1
o4 = S  <-- S  <-- S  <-- S  <-- 0

     0      1      2      3      4

o4 : ChainComplex

i5 : mR#{2,2}

o5 = {3} | 0 -a 0 a 0 0 0 -c 0 |
     {3} | 0 0 -a 0 0 0 a b 0  |
     {3} | 0 0 0 0 0 -a 0 0 0  |

             3       9
o5 : Matrix S  <--- S
Given a module X over R, Jesse Burke constructed a possibly non-minimal R-free resolution of any length from the finite data mR and mX:
i6 : X = coker vars R

o6 = cokernel | a b c |

                            1
o6 : R-module, quotient of R

i7 : A = betti burkeResolution(X,8)

            0 1 2  3  4  5   6   7   8
o7 = total: 1 3 6 13 28 60 129 277 595
         0: 1 3 6 13 28 60 129 277 595

o7 : BettiTally

i8 : B = betti res(X, LengthLimit => 8)

            0 1 2  3  4  5   6   7   8
o8 = total: 1 3 6 13 28 60 129 277 595
         0: 1 3 6 13 28 60 129 277 595

o8 : BettiTally

i9 : A == B

o9 = true
• aInfinity -- aInfinity algebra and module structures on free resolutions
Version
This documentation describes version 0.1 of AInfinity.
Source code
The source code from which this documentation is derived is in the file AInfinity.m2.
Exports
• Functions and commands
• aInfinity -- aInfinity algebra and module structures on free resolutions
• burkeResolution -- compute a resolution from A-infinity structures
• displayBlocks -- prints a matrix showing the source and target decomposition
• extractBlocks -- displays components of a map in a labeled complex
• golodBetti -- list the ranks of the free modules in the resolution of a Golod module
• picture -- displays information about the blocks of a map or maps between direct sum modules
• Methods
• "aInfinity(HashTable,Module)" -- see aInfinity -- aInfinity algebra and module structures on free resolutions
• "aInfinity(Module)" -- see aInfinity -- aInfinity algebra and module structures on free resolutions
• "aInfinity(Ring)" -- see aInfinity -- aInfinity algebra and module structures on free resolutions
• "burkeResolution(Module,ZZ)" -- see burkeResolution -- compute a resolution from A-infinity structures
• "displayBlocks(Matrix)" -- see displayBlocks -- prints a matrix showing the source and target decomposition
• "extractBlocks(Matrix,List)" -- see extractBlocks -- displays components of a map in a labeled complex
• "extractBlocks(Matrix,List,List)" -- see extractBlocks -- displays components of a map in a labeled complex
• "golodBetti(Module,ZZ)" -- see golodBetti -- list the ranks of the free modules in the resolution of a Golod module
• "picture(ChainComplex)" -- see picture -- displays information about the blocks of a map or maps between direct sum modules
• "picture(Complex)" -- see picture -- displays information about the blocks of a map or maps between direct sum modules
• "picture(Matrix)" -- see picture -- displays information about the blocks of a map or maps between direct sum modules
• "picture(Module)" -- see picture -- displays information about the blocks of a map or maps between direct sum modules
• Symbols
• Check -- Option for burkeResolution
For the programmer
The object AInfinity is a package.
# Trinomial trees simulation
We consider a diffusion process $$X_t$$ which evolves according to $dX_t = \mu(t,X_t)dt + \sigma(t,X_t)dW_t$ where $$\mu$$ and $$\sigma$$ are smooth scalar real functions and $$W_t$$ is a scalar standard Brownian motion.
We observe the process on the time interval $$[0,T]$$ and we make the time discretization $$t_0 = 0, t_1 = \frac T N, \ldots, t_N = T$$.
We will see three different trinomial trees constructions.
#### Basic trinomial tree
At each time step $$i$$, starting from $$x_{i,j}$$ the process can go up, straight or down as follows:
$\begin{cases} x_{i+1,j+1}= x_{i,j} + (j+1)\Delta x_{i+1}\\ x_{i+1,j}= x_{i,j} + j \Delta x_{i+1} \\ x_{i+1,j-1}= x_{i,j} + (j-1)\Delta x_{i+1} \end{cases}$
with $$\Delta x_{i+1} = V_i \sqrt{3}$$ and $$x_{0,0}=X_0$$.
The result of this construction is a symmetric tree around $$X_0$$.
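A minimal Python sketch of this symmetric grid (assuming, for simplicity, a constant per-step standard deviation $$V$$, so the spacing $$\Delta x = V\sqrt{3}$$ is the same at every level; all names are illustrative):

```python
import math

def basic_trinomial_tree(x0, V, n_steps):
    """Symmetric trinomial grid around x0 (sketch).

    With a constant per-step standard deviation V, the space step is
    dx = V*sqrt(3) and level i holds the nodes x0 + j*dx for j = -i..i.
    """
    dx = V * math.sqrt(3.0)
    return [[x0 + j * dx for j in range(-i, i + 1)] for i in range(n_steps + 1)]

tree = basic_trinomial_tree(x0=0.05, V=0.01, n_steps=3)
# level i has 2*i + 1 nodes, symmetric around x0
```
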
#### Trinomial tree with boundary conditions
We suppose that the conditional law of the process at time $$i$$ depends only on $$i$$. Then if we set two levels $$0<a<b<1$$, at each time step $$i$$ we can compute the inferior and the superior quantile $$Q_{min}^i$$ and $$Q_{max}^i$$, such that $\mathbb{P}(X_{t_i}<Q_{min}^i)=a \quad \text{and} \quad \mathbb{P}(X_{t_i}<Q_{max}^i)=b.$
Starting from $$x_{i,j}$$, we proceed with the tree construction depending on whether the quantile boundaries are crossed at the next step:
• If $$j\Delta x_{i+1} > Q_{max}^{i+1}$$, the process can go straight, down, or twice down, as follows:
$\begin{cases} x_{i+1,j}= x_{i,j} + j\Delta x_{i+1}\\ x_{i+1,j-1}= x_{i,j} + (j-1) \Delta x_{i+1} \\ x_{i+1,j-2}= x_{i,j} + (j-2)\Delta x_{i+1} \end{cases}$
• If $$j\Delta x_{i+1} < Q_{min}^{i+1}$$, the process can go straight, up, or twice up, as follows:
$\begin{cases} x_{i+1,j}= x_{i,j} + j\Delta x_{i+1}\\ x_{i+1,j+1}= x_{i,j} + (j+1) \Delta x_{i+1} \\ x_{i+1,j+2}= x_{i,j} + (j+2)\Delta x_{i+1} \end{cases}$
• Otherwise, the tree follows a basic evolution
$\begin{cases} x_{i+1,j+1}= x_{i,j} + (j+1)\Delta x_{i+1}\\ x_{i+1,j}= x_{i,j} + j \Delta x_{i+1} \\ x_{i+1,j-1}= x_{i,j} + (j-1)\Delta x_{i+1} \end{cases}$
with $$\Delta x_{i+1} = V_i \sqrt{3}$$.
#### Trinomial tree with Hull-White method
We state that starting from $$x_{i,j}$$ we can reach the positions
$\begin{cases} x_{i+1,k+1}=(k+1)\Delta x_{i+1} \quad \text{with probability} \quad p_u, \\ x_{i+1,k}=k\Delta x_{i+1} \quad \text{with probability} \quad p_m, \\ x_{i+1,k-1}=(k-1)\Delta x_{i+1} \quad \text{with probability} \quad p_d. \end{cases}$
We compute the conditional mean and the conditional variance of the process,
$\mathbb{E} ( X(t_{i+1} ) | X(t_{i}) = x_{i,j} ) = M_{i,j},$ $\mathbb{V} ( X(t_{i+1} ) | X(t_{i}) = x_{i,j} ) = V_{i,j}^2.$
We search for $$p_u$$, $$p_m$$ and $$p_d$$ such that the conditional mean and the conditional variance match those in the tree.
We notice that
\begin{aligned} x_{i+1,k+1} &= x_{i+1,k} + \Delta x_{i+1},\\ x_{i+1,k-1} &= x_{i+1,k} - \Delta x_{i+1}. \end{aligned}
Then we can write
\begin{aligned} p_u( x_{i+1,k} + \Delta x_{i+1} ) + p_m x_{i+1,k} +p_d (x_{i+1,k} - \Delta x_{i+1}) &= M_{i,j}, \\ p_u( x_{i+1,k} + \Delta x_{i+1} )^2 + p_m x_{i+1,k}^2 +p_d (x_{i+1,k} - \Delta x_{i+1})^2 &= V_{i,j}^2 + M_{i,j}^2, \end{aligned}
which brings us to
\begin{aligned} x_{i+1,k} + (p_u - p_d) \Delta x_{i+1} &= M_{i,j},\\ x_{i+1,k}^2 + 2x_{i+1,k} \Delta x_{i+1} (p_u-p_d) + \Delta x_{i+1}^2 (p_u + p_d ) &= V_{i,j}^2 + M_{i,j}^2. \end{aligned}
Setting $$\eta_{j,k} = M_{i,j} - x_{i+1,k}$$, we can write
\begin{aligned} (p_u - p_d) \Delta x_{i+1} &= \eta_{j,k}, \\ (p_u + p_d ) \Delta x_{i+1}^2 &= V_{i,j}^2 + \eta_{j,k}^2 \end{aligned}
and remembering that $$p_m = 1- p_u -p_d$$, we obtain
\begin{aligned} p_u &= \frac{V_{i,j}^2}{2 \Delta x_{i+1}^2}+ \frac{\eta_{j,k}^2}{2 \Delta x_{i+1}^2} + \frac{\eta_{j,k}}{2 \Delta x_{i+1}},\\ p_m &= 1 - \frac{V_{i,j}^2}{\Delta x_{i+1}^2} - \frac{\eta_{j,k}^2}{\Delta x_{i+1}^2},\\ p_d &= \frac{V_{i,j}^2}{2 \Delta x_{i+1}^2}+ \frac{\eta_{j,k}^2}{2 \Delta x_{i+1}^2} - \frac{\eta_{j,k}}{2 \Delta x_{i+1}}. \end{aligned}
We take advantage of the available degrees of freedom, in order to obtain quantities that are always positive. For that, we assume that $$V_{i,j}$$ is independent of $$j$$. From now on we write $$V_i$$ instead of $$V_{i,j}$$. We set $$\Delta x_{i+1} = V_i \sqrt{3}$$, we choose the level $$k$$ and $$\eta_{j,k}$$ in such a way that $$x_{i+1,k}$$ is as close as possible to $$M_{i,j}$$ : $k = round\left( \frac{M_{i,j}}{\Delta x_{i+1}}\right),$ where $$round(x)$$ is the closest integer to the real number $$x$$.
Finally we get: \begin{aligned} p_u &= \frac{1}{6}+ \frac{\eta_{j,k}^2}{6 V_{i}^2} + \frac{\eta_{j,k}}{2 V_i \sqrt{3}}, \\ p_m &= \frac{2}{3} - \frac{\eta_{j,k}^2}{3 V_i^2}, \\ p_d &= \frac{1}{6}+ \frac{\eta_{j,k}^2}{6 V_{i}^2} - \frac{\eta_{j,k}}{2 V_i \sqrt{3}}. \end{aligned}
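The branching rule above can be sketched in Python as follows (a sketch, with $$\Delta x = V\sqrt{3}$$; by construction the probabilities sum to one and reproduce the conditional mean and variance exactly):

```python
import math

def hw_branching(M, V):
    """Target level k and probabilities (p_u, p_m, p_d) for one node (sketch).

    M is the conditional mean reached from the current node and V the
    conditional standard deviation of the step; spacing dx = V*sqrt(3).
    """
    dx = V * math.sqrt(3.0)
    k = round(M / dx)          # grid level closest to the mean
    eta = M - k * dx           # residual displacement
    p_u = 1 / 6 + eta**2 / (6 * V**2) + eta / (2 * V * math.sqrt(3.0))
    p_m = 2 / 3 - eta**2 / (3 * V**2)
    p_d = 1 / 6 + eta**2 / (6 * V**2) - eta / (2 * V * math.sqrt(3.0))
    return k, (p_u, p_m, p_d)

k, (p_u, p_m, p_d) = hw_branching(M=0.012, V=0.01)
```

Matching the first two conditional moments is exactly what makes the discrete tree converge to the continuous diffusion.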
## The Vasicek model
A short rate $$r(t)$$ is an interest rate at which an entity can borrow money for an infinitesimally short period at time $$t$$.
Vasicek (1977) assumed that the short rate $$r_t$$ follows an Ornstein-Uhlenbeck process with constant coefficients under the risk-neutral measure : $dr_t = k(\theta - r_t ) dt + \sigma dW_t$ where $$r_0=r(0)$$, $$k$$, $$\theta$$ and $$\sigma$$ are positive constants. The conditional mean and the variance of $$r_t$$ are given by $\mathbb{E} (r_t | \mathcal{F}_s ) = r_s e^{-k(t-s)} + \theta (1 - e^{-k(t-s)} ),$ $\mathbb{V} (r_t | \mathcal{F}_s ) = \frac{\sigma^2}{2k} ( 1 - e^{-2k(t-s)} ).$
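The conditional moments have a direct translation into code (a small sketch; parameter names are illustrative):

```python
import math

def vasicek_moments(r_s, k, theta, sigma, dt):
    """Conditional mean and variance of r_{s+dt} given r_s under Vasicek."""
    e = math.exp(-k * dt)
    mean = r_s * e + theta * (1.0 - e)
    var = sigma**2 / (2.0 * k) * (1.0 - math.exp(-2.0 * k * dt))
    return mean, var
```

For dt = 0 this returns (r_s, 0); as dt grows, the mean relaxes to theta and the variance to sigma^2 / (2k), reflecting mean reversion.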
### Standard construction of trinomial tree
#### Basic Tree
We build the basic tree for the Vasicek model with : $\Delta r = V \sqrt{3}$ and $V=\sqrt{ \frac{\sigma^2}{2k}(1-e^{-2k\Delta t}) }.$
#### Tree with boundary conditions
We notice that $$r_t$$ follows a normal distribution with mean and variance given by
$\mathbb{E} ( r_t )= r_{0}e^{-kt} + \theta (1 - e^{-kt} ),$ $\mathbb{V} ( r_t ) = \frac{\sigma^2}{2k}(1-e^{-2kt}).$
We will fix two boundaries $$0<a<b<1$$ and at each time step we will compute the two quantiles $$Q_{min}^i$$ and $$Q_{max}^i$$ such that
$\mathbb{P}(r_{t_i}<Q_{min}^i)=a \quad \text{and} \quad \mathbb{P}(r_{t_i}<Q_{max}^i)=b.$
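Since $$r_{t_i}$$ is Gaussian, the two quantiles can be obtained from the inverse normal CDF (a sketch using the standard library; names are illustrative):

```python
import math
from statistics import NormalDist

def vasicek_quantiles(r0, k, theta, sigma, t, a, b):
    """Boundaries Q_min, Q_max with P(r_t < Q_min) = a and P(r_t < Q_max) = b."""
    mean = r0 * math.exp(-k * t) + theta * (1.0 - math.exp(-k * t))
    std = math.sqrt(sigma**2 / (2.0 * k) * (1.0 - math.exp(-2.0 * k * t)))
    dist = NormalDist(mu=mean, sigma=std)
    return dist.inv_cdf(a), dist.inv_cdf(b)

q_min, q_max = vasicek_quantiles(0.03, 0.5, 0.04, 0.01, 1.0, 0.05, 0.95)
```
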
#### Tree with Hull-White method
The equivalence of the conditional mean and variance, and the additional conditions bring us to
\begin{aligned} p_u &= \frac{1}{6}+ \frac{\eta_{j,k}^2}{6 V^2} + \frac{\eta_{j,k}}{2 V \sqrt{3}},\\ p_m &= \frac{1}{2} - \frac{\eta_{j,k}^2}{3 V^2},\\ p_u &= \frac{1}{6}+ \frac{\eta_{j,k}^2}{6 V^2} - \frac{\eta_{j,k}}{2 V \sqrt{3}}, \end{aligned} with $M_{i,j}= r_{i,j}e^{-k\Delta t_i} + \theta (1 - e^{-k\Delta t} ),$ $V^2 = \frac{\sigma^2}{2k}(1-e^{-2k\Delta t})$ and $$r_{i,j} = j\Delta r$$, where $$\Delta r := V \sqrt{3}$$ and $$\eta_{j,k} = M_{i,j} - r_{i+1,k}$$.
### Alternative construction of trinomial tree
We consider the process $y_t = e^{k t}(r_t - \theta)$
where $$r_t$$ follows a Vasicek model.
By Ito's formula, we get $dy_t = \sigma e^{k t }dW_t.$
The conditional mean and variance are then
$\mathbb{E} ( y_t | \mathcal{F}_s ) = y_{s},$ $\mathbb{V} ( y_t | \mathcal{F}_s ) = \frac{\sigma^2}{2k}(e^{2kt}-e^{2ks}).$
In the graphs, we represent both the trees for $$y_t$$ and for $$r_t$$.
#### Basic Tree
We build the basic tree for the process $$y_t$$ with $\Delta y_{i} = V_i \sqrt{3}$ and $V_i=\sqrt{ \frac{\sigma^2}{2k}(e^{2 k t_{i}}-e^{2 k t_{i-1}}) }.$ Once we build the tree for $$y_t$$, we multiply the nodes $$y_{i,j}$$ by $$e^{-k i \Delta t}$$ and we add $$\theta$$ in order to obtain $$r_t$$.
#### Tree with boundary conditions
We notice that $$y_t$$ follows a normal distribution with $\mathbb{E} ( y_t )= y_{0},$ $\mathbb{V} ( y_t ) = \frac{\sigma^2}{2k}( e^{2 k t} - 1 ).$
As we did for the standard construction, at each time step we constrain the process $$y_t$$ to stay between two chosen quantiles.
Once we build the tree for $$y_t$$, we transform it into the tree for $$r_t$$.
#### Tree with the Hull-White method
Matching the conditional mean and variance, together with the other conditions that we saw in the general framework, applied to the process $$y_t$$, brings us to \begin{aligned} p_u &= \frac{1}{6}+ \frac{\eta_{j,k}^2}{6 V_{i}^2} + \frac{\eta_{j,k}}{2 V_i \sqrt{3}},\\ p_m &= \frac{2}{3} - \frac{\eta_{j,k}^2}{3 V_i^2},\\ p_d &= \frac{1}{6}+ \frac{\eta_{j,k}^2}{6 V_{i}^2} - \frac{\eta_{j,k}}{2 V_i \sqrt{3}}, \end{aligned} with $M_{i,j}= y_{i,j},$ $V_{i}^2 = \frac{\sigma^2}{2k}(e^{2 k t_{i}}-e^{2 k t_{i-1}}),$ $y_{i,j} = j\Delta y_i,$ where $$\Delta y_i := V_i \sqrt{3}$$ and $$\eta_{j,k} = M_{i,j} - y_{i+1,k}$$.
Once the tree is built for $$y_t$$, we can build the one for $$r_t$$.
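A minimal sketch of this alternative construction (assuming a grid centered at $$y_0 = r_0 - \theta$$ and the simple index range $$j = -i, \ldots, i$$ at level $$i$$; names are illustrative):

```python
import math

def alt_vasicek_tree(r0, k, theta, sigma, T, N):
    """Node values of the alternative (y-process) construction (sketch).

    Builds the grid for y_t = exp(k t)(r_t - theta), centered at
    y0 = r0 - theta with level spacing dy_i = V_i*sqrt(3), then maps
    each node back through r = theta + exp(-k t_i) * y.
    """
    dt = T / N
    y0 = r0 - theta
    levels = [[r0]]
    for i in range(1, N + 1):
        t_i, t_prev = i * dt, (i - 1) * dt
        V_i = math.sqrt(sigma**2 / (2 * k)
                        * (math.exp(2 * k * t_i) - math.exp(2 * k * t_prev)))
        dy = V_i * math.sqrt(3.0)
        levels.append([theta + math.exp(-k * t_i) * (y0 + j * dy)
                       for j in range(-i, i + 1)])
    return levels

levels = alt_vasicek_tree(r0=0.03, k=0.5, theta=0.04, sigma=0.01, T=1.0, N=4)
```

Because $$y_t$$ is a martingale, the central node of level $$i$$ maps back to the unconditional mean $$\theta + e^{-k t_i}(r_0 - \theta)$$ of $$r_{t_i}$$.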
# Converting 10x Genomics BAM Files to FASTQ
bamtofastq is a tool for converting 10x Genomics BAM files back into FASTQ files that can be used as inputs to re-run analysis. The FASTQs will be output into a directory structure identical to the mkfastq or bcl2fastq tools, so they are ready to input into the next pipeline (e.g. cellranger count, spaceranger count).
## Background
10x Genomics pipelines require FASTQs (with embedded barcodes) as input, so we developed bamtofastq to help users who want to reanalyze 10x Genomics data but only have access to BAM files. For example, some users may want to store BAM files only. Others might have downloaded our BAM data from NCBI SRA. The location of the 10x Genomics barcode varies depending on product and reagent version. bamtofastq determines the appropriate way to construct the original read sequence from the sequences and tags in the BAM file.
bamtofastq works with BAM files output by the following 10x Genomics pipelines:
• cellranger (see exceptions below)
• cellranger-atac
• cellranger-arc
• cellranger-dna
• spaceranger
• longranger
bamtofastq is available for Linux and is compatible with RedHat/CentOS 5.2 or later, and Ubuntu 8.04 or later.
• bamtofastq comes pre-bundled with the latest versions of the Cell Ranger, Cell Ranger ARC, Cell Ranger ATAC, and Space Ranger pipelines. You can find the executable in the /lib/bin folder within the installation, e.g. cellranger-x.y.z/lib/bin/bamtofastq. Once your pipeline is installed and on your PATH, you can run [pipeline] bamtofastq, (e.g. cellranger bamtofastq, spaceranger bamtofastq).
bamtofastq is a single executable that can be run directly and requires no compilation or installation. Place the executable file in a directory that is on your PATH, and make sure to chmod 700 to make it executable.
## Running the Tool
10x Genomics BAMs produced by Cell Ranger v1.2+, Cell Ranger ATAC v1.0+, Cell Ranger ARC v1.0+, Space Ranger v1.0+, Cell Ranger DNA v1.0+, and Long Ranger v2.1+ contain header fields that permit automatic conversion to the correct FASTQ sequences. BAMs produced by older 10x Genomics pipelines may require special arguments or have some caveats, see known issues for details.
The FASTQ files output by bamtofastq contain the same set of sequences that were input to the original pipeline run, although the original order will not be preserved. 10x Genomics pipelines are generally insensitive to the order of the input data, so you can expect nearly identical results when re-running pipelines with bamtofastq outputs.
The usage of the bamtofastq command is as follows:
bamtofastq [options] [bam-path] [output-path]
Replace [bam-path] with the path to the input BAM, [output-path] with the path to the output directory, and include any relevant [options] (listed below). For example, if your BAM file is located at /path/to/mydata.bam, you want the output in /path/to/home/directory, and you want to use eight threads, the command would be:
bamtofastq --nthreads=8 /path/to/mydata.bam /path/to/home/directory
You can also print the options from the command line:
bamtofastq --help
Run times for full-coverage WGS BAMs may be several hours.
## Options
| Option | Description |
| --- | --- |
| `--locus=locus` | Optional. Only include read pairs mapping to locus. Use chrom:start-end format. |
| `--gemcode` | Convert a BAM produced from GemCode data (Long Ranger 1.0 - 1.3) |
| `--lr20` | Convert a BAM produced by Long Ranger 2.0 |
| `--cr11` | Convert a BAM produced by Cell Ranger 1.0-1.1 |
| `--bx-list=L` | Only include BX values listed in text file L. Requires BX-sorted and indexed BAM file (see Long Ranger support for details). |
| `--help` | Show the help screen |
## Known Issues and Limitations
• BAMs produced for TCR or BCR data, by aligning to a V(D)J reference with cellranger vdj or cellranger multi, are not supported by bamtofastq.
• Special tags included by 10x Genomics pipelines are required to reconstruct the original FASTQ sequences correctly. If your BAM file lacks the appropriate headers, you will get an error message: WARNING: no @RG (read group) headers found in BAM file.
• A common problem is using edited BAM files from SRA that are missing tags produced by Cell Ranger.
• You need the BAM file from the section labelled Original Format on the SRA downloads page.
• Here is an example of an SRA dataset with the Original Format BAM available.
• Only the Original Format BAM files have the necessary BAM tags preserved to work correctly with bamtofastq.
• The latest versions of cellranger, cellranger-atac, cellranger-arc, cellranger-dna and longranger generate BAM files that automatically reconstruct complete FASTQ files representing all input reads. BAMs produced by older versions of cellranger and longranger have some caveats, listed below:
| Package | Version | Pipelines | Extra Arguments | Complete FASTQs |
| --- | --- | --- | --- | --- |
| Cell Ranger | 1.3+ | count | none | Yes |
| Cell Ranger | 1.2 | count | none | Reads without a valid barcode will be absent from FASTQ. (These reads are ignored by Cell Ranger) |
| Cell Ranger | 1.0-1.1 | count | `--cr11` | Reads without a valid barcode will be absent from FASTQ. (These reads are ignored by Cell Ranger) |
| Cell Ranger ARC | 1.0.0+ | count | none | Any sequenced bases in the i5 index read that are not part of the 10x Genomics barcode are dropped from the FASTQ output. |
| Cell Ranger ATAC | 1.0.0+ | count | none | Any sequenced bases in the i5 index read that are not part of the 10x Genomics barcode are dropped from the FASTQ output. |
| Cell Ranger DNA | 1.0.0+ | cnv | none | Yes |
| Long Ranger | 2.1.3+ | wgs, targeted, align, basic | none | Yes |
| Long Ranger | 2.1.0 - 2.1.2 | wgs, targeted | none | Yes |
| Long Ranger | 2.0 | wgs, targeted | `--lr20` | Yes |
| Long Ranger | 2.0.0 - 2.1.2 | align, basic | Not Supported | N/A |
| Long Ranger | 1.3 (GemCode) | wgs, targeted | `--gemcode` | Reads without a valid barcode will be absent from FASTQ. This will result in a ~5-10% loss of coverage. |
# Solved papers for 10th Class Science Solved Paper - Science-2015 Delhi Set-III
### done Solved Paper - Science-2015 Delhi Set-III
• question_answer1) Write the name and formula of the 2nd member of homologous series having general formula ${{\mathbf{C}}_{\mathbf{n}}}{{\mathbf{H}}_{\mathbf{2n-2}}}$.
• question_answer4) List four specific characteristics of the images of the objects formed by convex mirrors.
• question_answer5) List two advantages associated with water harvesting at the community level.
• question_answer6) Everyone of us can do something to reduce our personal consumption of various natural resources. List four such activities based on 3-R approach.
(a) Name the following: (i) Thread like non-reproductive structures present in Rhizopus. (ii) 'Blobs' that develop at the tips of the non- reproductive threads in Rhizopus. (b) Explain how these structures protect themselves and what is the function of the structures released from the 'blobs' in Rhizopus.
• question_answer8) List in tabular form, two distinguishing features between the acquired traits and the inherited traits with one example of each.
Prove the following trigonometric identities.
Question:
Prove the following trigonometric identities.
$\frac{1+\tan ^{2} \theta}{1+\cot ^{2} \theta}=\left(\frac{1-\tan \theta}{1-\cot \theta}\right)^{2}=\tan ^{2} \theta$
Solution:
We have to prove $\frac{1+\tan ^{2} \theta}{1+\cot ^{2} \theta}=\left(\frac{1-\tan \theta}{1-\cot \theta}\right)^{2}=\tan ^{2} \theta$
Consider the expression
$\frac{1+\tan ^{2} \theta}{1+\cot ^{2} \theta}=\frac{1+\tan ^{2} \theta}{1+\frac{1}{\tan ^{2} \theta}}$
$=\frac{1+\tan ^{2} \theta}{\frac{\tan ^{2} \theta+1}{\tan ^{2} \theta}}$
$=\tan ^{2} \theta \frac{1+\tan ^{2} \theta}{1+\tan ^{2} \theta}$
$=\tan ^{2} \theta$
Again, we have
$\left(\frac{1-\tan \theta}{1-\cot \theta}\right)^{2}=\left(\frac{1-\tan \theta}{1-\frac{1}{\tan \theta}}\right)^{2}$
$=\left(\frac{1-\tan \theta}{\frac{\tan \theta-1}{\tan \theta}}\right)^{2}$
$=\tan ^{2} \theta\left(\frac{1-\tan \theta}{\tan \theta-1}\right)^{2}$
$=\tan ^{2} \theta\left(-\frac{1-\tan \theta}{1-\tan \theta}\right)^{2}$
$=\tan ^{2} \theta(-1)^{2}$
$=\tan ^{2} \theta$
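A quick numerical sanity check of the identity, at an arbitrary angle away from the excluded values ($\tan \theta = 0$ or $1$):

```python
import math

theta = 0.7  # arbitrary test angle, not a special value
t = math.tan(theta)

lhs = (1 + t**2) / (1 + 1 / t**2)
mid = ((1 - t) / (1 - 1 / t))**2
rhs = t**2

assert abs(lhs - rhs) < 1e-12 and abs(mid - rhs) < 1e-12
```
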
# Clocks
What distance does the tip of a clock's minute hand, 7 cm long, travel in 330 minutes? What distance does the hour hand travel in the same time?
Correct result:
minute-hand path: 241.9 cm
hour-hand path: 20.2 cm
#### Solution:
$s_m = 2\pi \cdot \dfrac{ 330}{ 60 } \cdot 7 = 241.9 \ \text{cm}$
$s_h = 2\pi \cdot \dfrac{ 330}{12 \cdot 60} \cdot 7 = 20.2 \ \text{cm}$
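The same arithmetic in a short script (the minute hand makes 330/60 = 5.5 revolutions, the hour hand 330/720 of a revolution):

```python
import math

r = 7.0           # hand length in cm
minutes = 330.0

s_minute = 2 * math.pi * (minutes / 60.0) * r          # 5.5 revolutions
s_hour = 2 * math.pi * (minutes / (12 * 60.0)) * r     # 11/24 of a revolution

print(round(s_minute, 1), round(s_hour, 1))  # 241.9 20.2
```
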
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or suggestions for rephrasing the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Do you want to convert length units?
Do you want to convert velocity (speed) units?
Do you want to convert time units like minutes to seconds?
#### You need to know the following knowledge to solve this word math problem:
We encourage you to watch this tutorial video on this math problem:
## Next similar math problems:
• Clocks
What distance does the tip of a 4 cm long second hand travel in 46 hours?
• Magnitude of angle
What magnitude has an obtuse angle enclosed by the hands of clocks at 12:20 hours?
• Minute-hand
How long distance will travel a large (minute) hand on the clock for 48 minutes, if its length is 56 cm?
• Wheel
The diameter of a motorcycle wheel is 52 cm. How many times does the wheel rotate on a road 2 km long?
• Little hand
Through what angle does the little (hour) hand on the clock turn in one hour and 38 minutes?
• Mine
The wheel in a traction tower has a diameter of 4 m. How many meters does the elevator cabin travel if the wheel rotates 89 times in the same direction?
• Saw
A circular saw blade with a diameter of 42 cm turns 825 times per minute. Express its cutting speed in meters per minute.
• Bob traveled
Bob traveled 20 meters to Dave's house. He arrived in 3 minutes. What was his average speed for the trip.
• Freight and passenger car
The truck starts at 8 pm at 30 km/h. Passenger car starts at 8 pm at 40 km/h. Passenger arrives in the destination city 1 hour and 45 min earlier. What is the distance between the city of departure and destination city?
• Flowerbed
In the park there is a large circular flowerbed with a diameter of 12 m. Jakub circled it ten times and the smaller Vojta seven times. How many meters did each walk, and how many meters more did Jakub cover than Vojta?
• Circle r,D
Calculate the diameter and radius of the circle if it has length 52.45 cm.
• Circle
What is the radius of the circle whose perimeter is 6 cm?
• Two gears
The two gears fit together. The larger gear has 32 teeth, the smaller has 20 teeth less. How many times does the smaller gear turn if the bigger gear turns three times?
• Ferris wheel
A Ferris wheel reaches 22 m in height and moves at a speed of 0.5 m/s. During one ride the wheel rotates three times. What is the total ride time?
• Wheel
How many times does the wheel of a passenger car turn in one second if the car runs at a speed of 100 km/h? The wheel diameter is d = 62 cm.
• Wave parameters
Calculate the speed of a wave if the frequency is 336 Hz and the wavelength is 10 m.
• Meridian
What is the distance (length) the Earth's meridian 23° when the radius of the Earth is 6370 km?
# NAG Library Routine Document: D02LXF
Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.
## 1 Purpose
D02LXF is a setup routine which must be called prior to the first call of the integrator D02LAF and may be called prior to any continuation call to D02LAF.
## 2 Specification
SUBROUTINE D02LXF ( NEQ, H, TOL, THRES, THRESP, MAXSTP, START, ONESTP, HIGH, RWORK, LRWORK, IFAIL)
INTEGER  NEQ, MAXSTP, LRWORK, IFAIL
REAL (KIND=nag_wp)  H, TOL, THRES(NEQ), THRESP(NEQ), RWORK(LRWORK)
LOGICAL  START, ONESTP, HIGH
## 3 Description
D02LXF permits you to set optional inputs prior to any call of D02LAF. It must be called before the first call of routine D02LAF and it may be called before any continuation call of routine D02LAF.
## 4 References
None.
## 5 Parameters
1: NEQ – INTEGER (Input)
On entry: the number of second-order ordinary differential equations to be solved by D02LAF.
Constraint: ${\mathbf{NEQ}}\ge 1$.
2: H – REAL (KIND=nag_wp) (Input)
On entry: if ${\mathbf{START}}=\mathrm{.TRUE.}$, H may specify an initial step size to be attempted in D02LAF.
If ${\mathbf{START}}=\mathrm{.FALSE.}$, H may specify a step size to override the choice of next step attempted made internally to D02LAF.
The sign of H is not important, as the absolute value of H is chosen and the appropriate sign is selected by D02LAF.
If this option is not required then you must set ${\mathbf{H}}=0.0$.
3: TOL – REAL (KIND=nag_wp) (Input)
On entry: must be set to a relative tolerance for controlling the error in the integration by D02LAF. D02LAF has been designed so that, for most problems, a reduction in TOL leads to an approximately proportional reduction in the error in the solution. However the actual relation between TOL and the accuracy of the solution cannot be guaranteed. You are strongly recommended to repeat the integration with a smaller value of TOL and compare the results. See the description of THRES and THRESP for further details of how TOL is used.
Constraint: $10×\epsilon \le {\mathbf{TOL}}\le 1.0$ ($\epsilon$ is the machine precision, see X02AJF).
4: THRES(NEQ) – REAL (KIND=nag_wp) array (Input)
5: THRESP(NEQ) – REAL (KIND=nag_wp) array (Input)
On entry: THRES and THRESP may be set to thresholds for use in the error control of D02LAF. At each step in the numerical integration estimates of the local errors $\mathrm{E1}\left(\mathit{i}\right)$ and $\mathrm{E2}\left(\mathit{i}\right)$ in the solution, ${y}_{\mathit{i}}$, and its derivative, ${y}_{\mathit{i}}^{\prime }$, respectively are computed, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$. For the step to be accepted conditions of the following type must be satisfied:
$\max_{1 \le i \le {\mathbf{NEQ}}} \left\{ \frac{\left|\mathrm{E1}\left(i\right)\right|}{\max\left({\mathbf{THRES}}\left(i\right), \left|{y}_{i}\right|\right)} \right\} \le {\mathbf{TOL}}, \qquad \max_{1 \le i \le {\mathbf{NEQ}}} \left\{ \frac{\left|\mathrm{E2}\left(i\right)\right|}{\max\left({\mathbf{THRESP}}\left(i\right), \left|{y}_{i}^{\prime}\right|\right)} \right\} \le {\mathbf{TOL}}.$
If one or both of these is not satisfied then the step size is reduced and the solution is recomputed.
If ${\mathbf{THRES}}\left(1\right)\le 0.0$ on entry, then a value of $50.0×\epsilon$ is used for ${\mathbf{THRES}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$, where $\epsilon$ is machine precision. Similarly for THRESP.
Constraints:
• ${\mathbf{THRES}}\left(1\right)\le 0.0$ or ${\mathbf{THRES}}\left(\mathit{i}\right)>0.0$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$;
• ${\mathbf{THRESP}}\left(1\right)\le 0.0$ or ${\mathbf{THRESP}}\left(\mathit{i}\right)>0.0$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
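The mixed error test described above can be sketched as follows (an illustrative Python sketch, not the NAG implementation):

```python
def step_accepted(e1, e2, y, yp, thres, thresp, tol):
    """Weighted max-norm test on solution and derivative error estimates.

    e1, e2: local error estimates for y and y'; thres, thresp: thresholds.
    Returns True when both weighted maxima are within the tolerance.
    """
    w1 = max(abs(e) / max(t, abs(v)) for e, t, v in zip(e1, thres, y))
    w2 = max(abs(e) / max(t, abs(v)) for e, t, v in zip(e2, thresp, yp))
    return w1 <= tol and w2 <= tol
```

If either weighted maximum exceeds the tolerance, the step would be rejected and retried with a smaller step size.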
6: MAXSTP – INTEGER (Input)
On entry: a bound on the number of steps attempted in any one call of D02LAF.
If ${\mathbf{MAXSTP}}\le 0$ on entry, a value of $1000$ is used.
7: START – LOGICAL (Input/Output)
On entry: specifies whether or not the call of D02LAF is for a new problem. ${\mathbf{START}}=\mathrm{.TRUE.}$ indicates that a new problem is to be solved. ${\mathbf{START}}=\mathrm{.FALSE.}$ indicates the call of D02LXF is prior to a continuation call of D02LAF.
On exit: ${\mathbf{START}}=\mathrm{.FALSE.}$.
8: ONESTP – LOGICAL (Input)
On entry: the mode of operation for D02LAF.
${\mathbf{ONESTP}}=\mathrm{.TRUE.}$
D02LAF will operate in one-step mode, that is it will return after each successful step.
${\mathbf{ONESTP}}=\mathrm{.FALSE.}$
D02LAF will operate in interval mode, that is it will return at the end of the integration interval.
9: HIGH – LOGICAL (Input)
On entry: if ${\mathbf{HIGH}}=\mathrm{.TRUE.}$, a high-order method will be used, whereas if ${\mathbf{HIGH}}=\mathrm{.FALSE.}$, a low-order method will be used. (See the specification of D02LAF for further details.)
10: RWORK(LRWORK) – REAL (KIND=nag_wp) array (Communication Array)
This must be the same parameter RWORK supplied to D02LAF. It is used to pass information to D02LAF and therefore the contents of this array must not be changed before calling D02LAF.
11: LRWORK – INTEGER (Input)
On entry: the dimension of the array RWORK as declared in the (sub)program from which D02LXF is called.
Constraints:
• if ${\mathbf{HIGH}}=\mathrm{.TRUE.}$, ${\mathbf{LRWORK}}\ge 16+20×{\mathbf{NEQ}}$;
• if ${\mathbf{HIGH}}=\mathrm{.FALSE.}$, ${\mathbf{LRWORK}}\ge 16+11×{\mathbf{NEQ}}$.
12: IFAIL – INTEGER (Input/Output)
On entry: IFAIL must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
${\mathbf{IFAIL}}=1$
${\mathbf{THRES}}\left(1\right)>0.0$ and for some $i$ ${\mathbf{THRES}}\left(i\right)\le 0.0$, $1\le i\le {\mathbf{NEQ}}$, and/or, ${\mathbf{THRESP}}\left(1\right)>0.0$ and for some $i$ ${\mathbf{THRESP}}\left(i\right)\le 0.0$, $1\le i\le {\mathbf{NEQ}}$.
${\mathbf{IFAIL}}=2$
LRWORK is too small.
${\mathbf{IFAIL}}=3$
TOL does not satisfy $10×\epsilon \le {\mathbf{TOL}}\le 1.0$ ($\epsilon$ is the machine precision, see X02AJF).
## 7 Accuracy
Not applicable.
Prior to a continuation call of D02LAF, you may reset any of the optional parameters by calling D02LXF with ${\mathbf{START}}=\mathrm{.FALSE.}$. You may reset:
Found 17 result(s)
### 06.05.2015 (Wednesday)
#### Superconformal Quantum Mechanics, Integrability and Discrete Light-Cone Quantisation
Regular Seminar Nick Dorey (DAMTP Cambridge)
at: 13:15 KCL, room G.01. Abstract: I will discuss recent progress in formulating superconformal quantum mechanics models which describe the (2,0) theory and N=4 super Yang-Mills in discrete light-cone quantisation.
### 25.02.2015 (Wednesday)
#### Superconformal Quantum Mechanics, Integrability and Discrete Light-Cone Quantisation
Regular Seminar Nick Dorey (DAMTP Cambridge)
at: 13:15 KCL, room G.01. Abstract: I will discuss recent progress in formulating superconformal quantum mechanics models which describe the (2,0) theory and N=4 super Yang-Mills in discrete light-cone quantisation.
### 12.12.2012 (Wednesday)
#### Quiver gauge theories and quantum integrable systems
Regular Seminar Nick Dorey (DAMTP Cambridge)
at: 13:15 KCL, room S4.23
### 04.12.2012 (Tuesday)
#### t.b.a.
Regular Seminar Nick Dorey (DAMTP)
at: 16:00 City U., room CG04
### 11.05.2011 (Wednesday)
#### Seiberg-Witten Theory and the Bethe ansatz
Exceptional Seminar Nick Dorey (DAMTP)
at: 13:15 KCL, room 423
### 24.11.2008 (Monday)
#### Gluons, Spin Chains and String Theory
String Theory & Geometry Seminar Nick Dorey (DAMTP, Cambridge)
at: 13:00 IC, IMS seminar room
### 31.05.2007 (Thursday)
#### Singularities of the Magnon S-matrix
Regular Seminar Nick Dorey (DAMTP, Cambridge)
at: 14:00 QMW, room 112
### 29.11.2006 (Wednesday)
#### Integrability and the AdS/CFT correspondence at large R-charge
Regular Seminar Nick Dorey (DAMTP, Cambridge)
at: 13:15 KCL, room 423
### 27.11.2006 (Monday)
#### Integrability in gauge theory and string theory VI
Regular Seminar Nick Dorey (Cambridge/Imperial Maths Institute)
at: 13:00 IC, Maths Institute seminar room. Abstract: In these lectures, I will introduce the concept of integrability and study its realisation in the context of gauge theory and string theory. In particular, I plan to review recent progress in computing the spectrum of operator dimensions in N=4 supersymmetric Yang-Mills theory and the dual problem of determining the spectrum of string theory on AdS5 x S5.
### 20.11.2006 (Monday)
#### Integrability in gauge theory and string theory V
Regular Seminar Nick Dorey (Cambridge/Imperial Maths Institute)
at: 13:00 IC, Maths Institute seminar room. Abstract: In these lectures, I will introduce the concept of integrability and study its realisation in the context of gauge theory and string theory. In particular, I plan to review recent progress in computing the spectrum of operator dimensions in N=4 supersymmetric Yang-Mills theory and the dual problem of determining the spectrum of string theory on AdS5 x S5.
### 13.11.2006 (Monday)
#### Integrability in gauge theory and string theory IV
Regular Seminar Nick Dorey (Cambridge/Imperial Maths Institute)
at: 13:00 IC, Maths Institute seminar room. Abstract: In these lectures, I will introduce the concept of integrability and study its realisation in the context of gauge theory and string theory. In particular, I plan to review recent progress in computing the spectrum of operator dimensions in N=4 supersymmetric Yang-Mills theory and the dual problem of determining the spectrum of string theory on AdS5 x S5.
### 06.11.2006 (Monday)
#### Integrability in gauge theory and string theory III
Regular Seminar Nick Dorey (Cambridge/Imperial Maths Institute)
at: 15:00 IC, room 503 Huxley. Abstract: In these lectures, I will introduce the concept of integrability and study its realisation in the context of gauge theory and string theory. In particular, I plan to review recent progress in computing the spectrum of operator dimensions in N=4 supersymmetric Yang-Mills theory and the dual problem of determining the spectrum of string theory on AdS5 x S5.
### 30.10.2006 (Monday)
#### Integrability in gauge theory and string theory II
Regular Seminar Nick Dorey (Cambridge/Imperial Maths Institute)
at: 13:00 IC, Maths Institute seminar room. Abstract: In these lectures, I will introduce the concept of integrability and study its realisation in the context of gauge theory and string theory. In particular, I plan to review recent progress in computing the spectrum of operator dimensions in N=4 supersymmetric Yang-Mills theory and the dual problem of determining the spectrum of string theory on AdS5 x S5.
### 23.10.2006 (Monday)
#### Integrability in gauge theory and string theory I
Regular Seminar Nick Dorey (Cambridge/Imperial Maths Institute)
at: 13:00 IC, Maths Institute seminar room. Abstract: This is the first of a series of lectures in which I will introduce the concept of integrability and study its realisation in the context of gauge theory and string theory. In particular, I plan to review recent progress in computing the spectrum of operator dimensions in N=4 supersymmetric Yang-Mills theory and the dual problem of determining the spectrum of string theory on AdS5 x S5.
### 17.11.2004 (Wednesday)
#### Exploring the universality class of N=1 SUSY Yang-Mills
Triangular Seminar Nick Dorey (DAMTP)
at: 15:00, QMW (room UG1). abstract: This talk is part of the joint Triangular Seminars
### 03.06.2004 (Thursday)
#### A new deconstruction of little string theory
Regular Seminar Nick Dorey (University of Wales at Swansea)
at: 16:30, IC (room H503). abstract:
### 20.02.2004 (Friday)
#### S-duality, Deconstruction and Confinement for marginal deformations of N=4 SUSY Yang-Mills
Regular Seminar Nick Dorey (University of Wales Swansea)
at: 14:00, QMW (room 112). abstract:
# Use Green's theorem to compute the area inside an ellipse

## Question

A) Use Green's theorem to compute the area inside the ellipse $\frac{x^2}{7^2} + \frac{y^2}{18^2} = 1$, using the fact that the area can be written as $\iint dx\,dy = \tfrac12 \oint -y\,dx + x\,dy$. Hint: $x(t) = 7\cos(t)$. The area is $126\pi$.

B) Find a parametrization of the curve $x^{2/3} + y^{2/3} = 12/8$ and use it to compute the area of the interior. Hint: $x(t) = 1\cdot\cos^3(t)$. The answer is $6174\pi/5832$.
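Part A can be sanity-checked numerically: for this parametrization the integrand of the suggested line integral is constant, so even a crude Riemann sum recovers $126\pi$. A sketch (not part of the original problem set):

```python
import math

def green_area(x, y, dx, dy, n=10_000):
    """Approximate (1/2) * closed integral of (x dy - y dx) with a Riemann sum."""
    total = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        total += x(t) * dy(t) - y(t) * dx(t)
    return 0.5 * total * (2 * math.pi / n)

# Ellipse x^2/7^2 + y^2/18^2 = 1, parametrized as in the hint.
area = green_area(lambda t: 7 * math.cos(t),
                  lambda t: 18 * math.sin(t),
                  lambda t: -7 * math.sin(t),   # dx/dt
                  lambda t: 18 * math.cos(t))   # dy/dt
print(area)  # ~ 395.84, i.e. 126*pi
```

Here $x\,dy/dt - y\,dx/dt = 126(\cos^2 t + \sin^2 t) = 126$, which is why the sum is exact up to floating point.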
#### Similar Solved Questions
##### How many milliliters of 0.125 M H2SO4 are needed to neutralize 0.170 g of NaOH? Express the volume in milliliters to three significant digits.
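For reference, the underlying stoichiometry (H2SO4 is diprotic, so one mole of acid neutralizes two moles of NaOH) can be sketched as:

```python
# Neutralization: H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O
M_NaOH = 40.00                       # g/mol, approximate molar mass of NaOH
mol_NaOH = 0.170 / M_NaOH            # moles of base to be neutralized
mol_H2SO4 = mol_NaOH / 2             # the diprotic acid supplies two protons
volume_mL = mol_H2SO4 / 0.125 * 1000 # L -> mL
print(round(volume_mL, 1))  # -> 17.0
```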
##### A support vector machine classifier is learned for a data set in R². It is given by w = (3, 4) and b = −12. a) What is the x1-intercept of the decision boundary? b) What is the x2-intercept of the decision boundary? c) What is the margin of this classifier? d) It turns out that the data set has two distinct support vectors of the form (1, ?). What are they? (Give the missing x2 coordinates, the support vector with the smaller x2 value first.)
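The four parts reduce to short calculations; here is a sketch (taking the margin as the full width 2/‖w‖ between the two margin hyperplanes — some texts report half that value):

```python
import math

w = (3.0, 4.0)
b = -12.0

x1_intercept = -b / w[0]        # set x2 = 0 in w·x + b = 0
x2_intercept = -b / w[1]        # set x1 = 0
margin = 2.0 / math.hypot(*w)   # width between the two margin hyperplanes

# Support vectors of the form (1, y) lie on w·x + b = ±1
support_ys = sorted((s - b - w[0] * 1.0) / w[1] for s in (-1.0, 1.0))
print(x1_intercept, x2_intercept, margin, support_ys)  # 4.0 3.0 0.4 [2.0, 2.5]
```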
##### Evaluate the geometric series or state that it diverges: 1 + … Select the correct choice below and, if necessary, fill in the answer box to complete your choice. A. The series evaluates to ___  B. The series diverges.
##### SLP = Sturm–Liouville Problem. λ_n denotes the eigenvalue and X_n the associated eigenvector, for each n ∈ {1, 2, 3, …}
##### B1 = | IA b2b320
##### Two identical pucks, A and B, slide over a level ice floor without friction. Initially, puck B is stationary. Puck A approaches puck B with momentum $p_0$ and collides with it, as shown in the figure. After the collision, puck A moves with momentum $p_1$, and puck B with momentum $p_2$. The associated kinetic energies are $K_0$, $K_1$, and $K_2$, respectively. The angle between $p_1$ and $p_2$ is $\theta$. Explain why $p_1 + p_2 = p_0$ in this situation. Explain why $K_1 + K_2 = K_0$ in this situation. Calculate $p_0^2 = p_0 \cdot p_0$ in terms of $p_1$ and $p_2$.
##### 57. What would be the major product of the following reaction with Br2/H2O? (The answer choices, I–IV, show brominated alcohol products and their enantiomers.)
##### QUESTION 2: Determine if the vector u is in the column space of matrix A and whether it is in the null space of A.
##### In your own words, define the following terms: (a) carbohydrate, (b) monosaccharide, (c) disaccharide, (d) polysaccharide.
##### A researcher wishes to estimate the proportion of adults who have high-speed Internet access. What size sample should be obtained if she wishes the estimate to be within 0.04 with 99% confidence if (a) she uses a previous estimate of 0.52, (b) she does not use any prior estimate? (Round up to the nearest integer.)
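Using the standard sample-size formula $n = p(1-p)(z^*/E)^2$, with $z^* \approx 2.576$ for 99% confidence (some tables use 2.575), a sketch:

```python
import math

z = 2.576          # common critical value for 99% confidence
E = 0.04           # desired margin of error

def sample_size(p):
    # Round up: any fractional respondent forces one more whole respondent.
    return math.ceil(p * (1 - p) * (z / E) ** 2)

n_with_prior = sample_size(0.52)   # using the earlier estimate p-hat = 0.52
n_no_prior = sample_size(0.5)      # p = 0.5 maximizes p(1-p) when nothing is known
print(n_with_prior, n_no_prior)  # -> 1036 1037
```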
# Completing the Square
Imagine a 6 x 6 grid of 36 equal minisquares, or tiles. A "shape" consists of interconnected adjacent tiles on this grid. [Adjacent tiles must share an edge, not merely a vertex.]
Determine, with proof, the number of distinct combinations of 4 congruent shapes (9 tiles each) that will complete the grid.
One obvious combination is 4 congruent "sub-squares" of 9 tiles each. But there are several other valid shapes which one can find graphically by experiment. Which method(s) of proof is best used to determine the precise number? How would this proof be constructed? Is this problem and its proof methodology generalizable to completing similar grids, matrices etc.?
Look forward to hearing your ideas on this,
Scott
You can find about 30 examples at Mike Reid's Rectifiable polyomino page. It would appear that for the 6x6 square there are two types of 9 cell tilers:
• Take the center edge of a 3x6 rectangle along with a path from one end of that edge to the boundary, and the 180-degree rotation starting from the other end
and
• Take a path from the center of a 6x6 square and rotate it by 90, 180 and 270 degrees.
So a method to do all this (which I have not done) would have these steps: find all tiles of these two types (one needs a path which does not intersect its 1 or 3 partner paths; there can only be so many, so go about it systematically). Then show that a tiling must decompose into two 3x6 rectangles or else have 4-fold central rotational symmetry.
Most work I know starts with a tile and asks what rectangles it tiles. I would consider it pretty interesting to prove or disprove that any partition of a square into 4 congruent pieces is of one of these two types. Maybe that is already known one way or the other. I am not aware that anyone has looked systematically at dividing a checkerboard into 4 congruent pieces. Here is a decomposition of a 14x14 square into 16 7-cell tiles. It does have a 4-fold rotational symmetry.
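The 90-degree-rotation construction is easy to sanity-check in a few lines. Here is a sketch (using the "obvious" quadrant piece from the question as the rotated shape):

```python
N = 6
GRID = {(r, c) for r in range(N) for c in range(N)}

def rot90(cell):
    """Rotate a cell 90 degrees about the center of the N x N grid."""
    r, c = cell
    return (c, N - 1 - r)

piece = {(r, c) for r in range(3) for c in range(3)}   # lower-left 3x3 block

# Generate the four rotated copies and check they partition the grid.
copies, shape = [], piece
for _ in range(4):
    copies.append(shape)
    shape = {rot90(cell) for cell in shape}

covered = set().union(*copies)
print(covered == GRID, sum(len(s) for s in copies))  # -> True 36
```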
further comments This is a fascinating field. Search the literature and you will find some contributions by well known names but you will also find that some of the best results come from "amateurs". I use the term only in the sense of someone who does it exclusively for the love of the subject. An easier problem than yours is: How many ways to split a $k$ by $2k$ rectangle into two congruent pieces? You would have to solve that to solve your problem. Already that is a difficult lattice path problem. For particular small $k$ you could do it but in general asymptotics might be the best one could expect.
• One can look at equivalence classes of squares under clockwise rotation around the center, pick one from each of the nine classes, and call the assemblage a 'shape'. The less trivial problem is to count how many such shapes are connected. Gerhard "Ask Me About System Design" Paseman, 2011.05.20 – Gerhard Paseman May 20 '11 at 22:08
• Indeed, this is not a trivial problem. I'm still looking for some rigorous expert commentary on how the number of valid combinations (according to my aforementioned criteria) for a specific size square can be proven. Thus far you've offered some interesting suggestions for counting, but nothing close to a method of systematic proof. It could be there's much work to be done here, but I have a feeling we just haven't been able to yet source this quality of work most likely performed in higher academic circles. – Scott S May 21 '11 at 4:09
• You will find that many problems like this are part of the combinatoric literature. Polyominoes, graphs, and other structures are not understood well enough to be easily counted, or even well-estimated. My approach would take advantage of certain symmetries and restrictions, but would still resemble brute force enumeration at some point. I think you will find that or a representation in terms of some other enumeration problem if you search the literature. A nice systematic proof with few cases is unlikely to be had. Gerhard "Ask Me About System Design" Paseman, 2011.05.20 – Gerhard Paseman May 21 '11 at 5:21
• I added a bit. You could ask Michael Reid. I think that he knows as much as anyone. – Aaron Meyerowitz May 21 '11 at 6:05
To underscore my comments posted under another answer, here is a start of an enumeration which would be pretty messy when completed.
I will cover the case of 4-fold rotational symmetry. I consider the lower left 3x3 square, whose cells I label with coordinate pairs (1,1) through (3,3). (3,3) is a central square which by symmetry I assume to be part of any shape I count. Since I am interested in connected shapes, one of (3,2) or (2,3) (the squares below and/or to the left) must belong to the shape as well. Suppose both are: then there are 64 cases to consider, where each of the remaining 6 squares of the lower left 3x3 square is part of the included shape or not.
Suppose square (2,2) is not part of the shape. Then one of its three equivalent (under rotation) squares must be. Some thought will show that it is not the (5,5) square. Say it is the (5,2) square; then at least two more squares will be needed to connect it to the squares in the lower left corner, so at most 3 more squares are allowed in our shape from the lower left corner.
I could go on using words; it would be easier to list the allowed shapes that I discovered by trial and error, and submit that as a list. It is possible that an alternate categorization (counting those where (1,1) and/or (2,2) are part of the shape, for example) might lead to a more concise counting, but most people who would need the count would probably spend the time trying to derive it for themselves rather than read your derivation. Again, I do not see a solution to this problem that is quick and easy with few cases to consider.
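The equivalence-class idea from the comments is directly machine-checkable for the 4-fold symmetric case: pick one cell from each of the nine rotation orbits, keep the connected choices, and each such transversal is a 9-cell piece whose four rotations tile the square. A sketch (assuming edge-adjacency defines connectivity; a 9-cell piece cannot equal any of its own rotations, so each symmetric tiling is counted exactly four times):

```python
from itertools import product

N = 6

def rot90(cell):
    r, c = cell
    return (c, N - 1 - r)   # 90-degree rotation about the grid center

# The 36 cells split into 9 orbits of size 4 under rotation about the center.
orbits, seen = [], set()
for r in range(N):
    for c in range(N):
        if (r, c) not in seen:
            orbit, x = [], (r, c)
            for _ in range(4):
                orbit.append(x)
                seen.add(x)
                x = rot90(x)
            orbits.append(orbit)

def connected(cells):
    """Depth-first search over edge-adjacent cells."""
    cells = set(cells)
    stack, comp = [next(iter(cells))], set()
    while stack:
        r, c = stack.pop()
        if (r, c) not in comp:
            comp.add((r, c))
            stack.extend(nb for nb in ((r+1, c), (r-1, c), (r, c+1), (r, c-1))
                         if nb in cells)
    return comp == cells

# One cell per orbit: 4^9 = 262144 transversals to test.
pieces = sum(1 for choice in product(*orbits) if connected(choice))
tilings = pieces // 4   # a tiling is counted once for each of its 4 pieces
print(pieces, tilings)
```

This only counts the 4-fold symmetric tilings; per the answer above, the two-3x6-rectangle decompositions would have to be enumerated separately.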
# Tag Info
### Rerolling or taking +2
This is an interesting probability question! So the key question is: What is the cutoff point in the possible outcomes of the first roll where the expected outcome of a re-roll is at least "3 ...
### How to calculate the added expected damage from critical hits?
Let's start with an example where you hit 65% of the time with a damage roll of 1d6+3, that means that 13 of your 20 possible dice rolls result in a successful hit. 12 of those 13 hits are normal ...
Accepted
### How to calculate the added expected damage from critical hits?
When calculating DPR of attacks, you can use†: $$\text{DPR} = \sum_\text{attacks} \Big[h(D+M) + cD\Big]$$ where h is the hit chance, D is the dice damage, M is the modifier or static damage, and c ...
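Plugging the example numbers from the previous answer (65% hit chance, 1d6+3 damage, 5% crit chance) into this formula gives:

```python
h = 0.65   # chance that an attack hits (13 of 20 d20 outcomes)
c = 0.05   # chance of a critical hit (natural 20)
D = 3.5    # average dice damage of 1d6
M = 3.0    # static modifier

dpr = h * (D + M) + c * D   # a crit adds one extra roll of the damage dice
print(round(dpr, 2))  # -> 4.4
```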
### Ironsworn meets BitD dice mechanic (with a twist)
So, if I understand your mechanic right: You roll Xd6. Let the highest result of this roll be M, and let the number of times this result appears in the roll be N. (Thus, we always have 1 ≤ M ≤ 6 ...
Accepted
### Ironsworn meets BitD dice mechanic (with a twist)
My AnyDice Fu is lacking, so this is a dyce¹-based solution, but I believe is otherwise directly responsive and captures my understanding of the mechanic, although ...
### How to make a good height and weight table?
Define sensible goals What is a sensible Weight to Height? Some game designers (Looking at you, GURPS) choose some kind of variation of one of two systems to inspire their description of a race that ...
### How to make a good height and weight table?
It's tough to answer without knowing what your goals are. Here's one way to avoid the "tall characters are too light" problem: Use the first roll to generate a height and standard weight. ...
Accepted
### Is there any estimate of average damage output per turn for PCs by level?
You can calculate this by reversing the CR tables. For example CR1 is a medium encounter for a party of 4 level 1 characters. CR1 has 71–85hp, if you assume it takes 3-5 rounds to kill an encounter ...
# Periodicity of Fourier series coefficients
Why exactly are continuous Fourier series coefficients not periodic, like the coefficients of the discrete Fourier series (DFS)? $e^{-j2\pi}$ is periodic in both sequences.
• Why do you think they should be periodic? The FS of a sine wave has just two coefficients, so it can't be periodic. The DFT applies to a discrete signal, whose spectrum is by definition periodic. – MBaz Oct 4 '16 at 16:38
• I understand in general that continuous Fourier series coefficients aren't periodic. But I'm asking a slightly deeper question: what exactly makes the DFT coefficients periodic and the continuous FS coefficients not periodic? What is the reason? – Дмитрий Oct 4 '16 at 16:44
The reason why Fourier series coefficients of continuous functions are generally not periodic is the continuity of the function. The discrete Fourier series coefficients are periodic because the analyzed signals are discrete. Note the duality relationship of the Fourier transform
$$\text{periodicity}\Longleftrightarrow\text{discreteness}$$
Periodicity in one domain leads to discreteness in the other domain. E.g., a periodic function has only discrete frequency components (as shown by its Fourier series). If these discrete frequency components are periodic, then, consequently, the periodic function must be discrete, i.e., it must be non-zero only at discrete times.
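The discrete half of this duality is easy to verify numerically; a short sketch showing that the DFT coefficients of any length-$N$ sequence repeat with period $N$:

```python
import cmath

def dft(x, k):
    """k-th DFT coefficient of the finite sequence x (k may be any integer)."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))

x = [0.5, 2.0, -1.0, 3.0]   # an arbitrary length-4 discrete signal
N = len(x)
# X[k + N] == X[k] because exp(-j*2*pi*(k+N)*n/N) = exp(-j*2*pi*k*n/N)
periodic = all(abs(dft(x, k + N) - dft(x, k)) < 1e-9 for k in range(N))
print(periodic)  # -> True
```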
In his answer, Fat32 gave an example of a periodic signal with periodic Fourier series coefficients. The most general form of such a signal is
$$x(t)=\sum_{m=-\infty}^{\infty}b_m\delta\left(t-m\frac{T}{N}\right)\tag{1}$$
where the coefficients $b_m$ are essentially the discrete Fourier series coefficients of the $N$-periodic Fourier coefficients $a_k$ of the $T$-periodic signal $x(t)$:
$$b_m=\frac{T}{N}\sum_{k=0}^{N-1}a_ke^{j2\pi km/N}\tag{2}$$ $$x(t)=\sum_{k=-\infty}^{\infty}a_ke^{j2\pi kt/T}\tag{3}$$
As mentioned above, the signal $x(t)$ is of course discrete in the sense that it is non-zero only at discrete time instances $t_m=mT/N$. Any periodic signal $x(t)$ with $N$-periodic Fourier coefficients $a_k$ must be of the form $(1)$.
A derivation of Equations $(1)$ and $(2)$ can be found in this answer.
• I disagree with $(3)$ as a "discrete in the sense that it is nonzero only at discrete time instances...." because it suggests that the values of $x(t)$ at those discrete time instances are finite real numbers; they are not. If the Fourier coefficients are a periodic sequence, then in terms of Fourier transforms, the transform is a periodic impulse train and the signal is also an impulse train. – Dilip Sarwate Oct 5 '16 at 1:11
• i up-arrowed both Matt and @DilipSarwate. the way i would put it is in this: Uniform sampling of a signal in one domain (say, the "time domain") corresponds to periodic extension of the Fourier Transform of that signal in the reciprocal domain (say, the "frequency domain"). And, because of the symmetry of the Fourier Transform and its inverse, the converse is also just as true. – robert bristow-johnson Oct 5 '16 at 2:48
• well, my up-arrow is locked in unless the answer is edited. Matt, you didn't quote Fat's answer verbatim, and modifications to a verbatim quote are good if they increase simplicity, conciseness, and understanding and not-so-good if the modifications to the quote decrease such. i would modify the answer to fix it, but usually Matt reverses such. so this answer gets a free and poorly-considered endorsement from me. – robert bristow-johnson Oct 5 '16 at 3:00
• @DilipSarwate: I wouldn't say that Eq. $(3)$ implies that the function $x(t)$ is finite everywhere. If we accept the equation $$\sum_k\delta(t-kT)=\frac{1}{T}\sum_ke^{j2\pi kt/T}$$ (which is just the Fourier series representation of an impulse train) then we should also accept the more general Eq. $(3)$ for the signal $x(t)$ given by $(1)$. – Matt L. Oct 5 '16 at 7:09
• @robertbristow-johnson: I didn't at all intend to quote or paraphrase Fat's answer. Which part of my answer would you like to modify and why? Also have a look at my response to Dilip's comment (which you seem to agree with). An impulse train does have a Fourier series, and a Fourier series representation by no means implies that the function is finite everywhere. – Matt L. Oct 5 '16 at 7:13
In fact, here is an example of a family of signals whose continuous-time Fourier series (CTFS) coefficients are periodic. A signal whose CTFS coefficients are periodic is: $$x(t) = \sum_{k=-\infty}^{\infty} {\delta(t - k T)}$$ for which the CTFS coefficients are found to be $$c_n = \frac 1T$$ for all $n$. Hence the CTFS can be considered periodic with period one.
That being said however, looking at the analysis equation of CTFS $$c_n = \frac 1T \int_{<T>} {x(t) e^{-j {n\omega}_0 t} dt }$$
The condition of periodicity on $c_n$ can be translated into the following: $$c_n = c_{n+M} = \frac 1T \int_{<T>} {x(t) e^{-j {(n+M)\omega}_0 t} dt }$$ $$e^{-j M{\omega}_0 t } = 1 = e^{-j2\pi m }$$ for all $t \in R$, and for $M \in Z$, which translates to: $$M {\omega}_0 t = 2\pi m \rightarrow M = \frac {2 \pi m }{{\omega}_0 t}$$
As can be inferred, for the CTFS to be periodic with an integer period $M$ (as required by the fact that the CTFS coefficients $c_n$ can be considered as a discrete sequence $c[n]$, for which periodicity naturally implies an integer period), a number of conditions must be met, loosely stated as:
1- ${\omega}_0$ being a rational multiple of $\pi$
2- $t$ being a rational multiple of the integer $m$.
(actually it is ${\omega}_0 t$ which should be a rational multiple of $\pi$, so that $M$ can be integer)
The example I have provided (a continuous-time periodic impulse train) thankfully satisfies the two conditions, as: $${\omega}_0 = \frac {2\pi}{T}$$ and $$t = k T$$ due to the presence of impulse sampling on the time argument $t$. From this the period can be deduced as: $$M = \frac{2\pi m}{ \frac {2\pi}{T} kT } = \frac mk \rightarrow M= 1$$ as the smallest integer to satisfy it for arbitrary integers $k$ and $m$.
However, for most other functions this will not be the case, and hence their CTFS coefficients will not be periodic. Indeed, the given example of the CT periodic impulse train is a mathematical convenience to model (describe) the behaviour of sampling of continuous-time signals, and hence is closely associated with discrete-time signals, whose DFS coefficients are always periodic!
• You correctly gave an example of a signal with periodic Fourier series coefficients. The last paragraph of your answer seems to imply that that signal is the only signal with that property. However, it is just a special case. The most general form of such a signal is given by Eq. (1) in my answer. Also note that $\omega_0$ (and, consequently, the period of $x(t)$) can be chosen arbitrarily. You seem to imply that there's a restriction on $\omega_0$, but I might be misinterpreting your answer here. – Matt L. Oct 4 '16 at 21:43
• @MattL. Thanks for the comment. I have edited the answer according to your last statement. My initial aim was just to give a counterexample, therefore it's a little loose on the assertion of conditions that are required for the periodicity of the CTFS... Your answer apparently yields a broader perspective. – Fat32 Oct 4 '16 at 22:33
• @Fat32 I think your answer for continuous time FT + mine for discrete time FT form a complete and sufficient answer to the question being asked, in the sense that they together tell why each one is or isn't periodic, and how the one which is naturally not periodic can become periodic. – msm Oct 11 '16 at 4:13
If the Fourier series coefficients were to be a periodic sequence, then the signal would have infinite power (Parseval's theorem!). This is not a situation of interest to most people and so the subject remains unexplored for the most part.
• well, unless we wade into the weeds regarding the nature of the Dirac impulse function, this is a situation of interest to most people here and it has (very simple) periodicity in the Fourier coefficients: $$s(t) = T \sum\limits_{n=-\infty}^{+\infty} \delta(t - nT)$$ the Fourier coefficients are all equal to 1, which is periodic. – robert bristow-johnson Oct 5 '16 at 3:06
• @robertbristow-johnson But then, $x(t)$ is not a continuous-time function in the usual sense of the word, is it? The question asked is "Why can't the Fourier series coefficients of a continuous-time periodic signal be a periodic sequence?" I think that is reasonable to assume that the periodic signal satisfies the usual conditions: finite power, finite number of discontinuities per period, etc, and my answer says that if the Fourier series coefficients are a periodic sequence, then the signal would have infinite power, not finite power, and so we are extrapolating beyond the chosen model. – Dilip Sarwate Oct 6 '16 at 1:47
• yeah, Dilip, you're right, but i might ask: "is the dirac impulse a "continuous-time function"? (not a "continuous function". we all know that $\delta(t)$ is not that.) it's all about how far into the weeds we go regarding the Dirac delta function. – robert bristow-johnson Oct 6 '16 at 2:27
• @robertbristow-johnson The Dirac delta or impulse is not a function, period, and writing a impulse train as a series cf. MattL's comment on his answer asking us to accept $$\sum_k \delta(t-kT) = \frac 1T \sum_k \exp(j2\pi kt/T)$$ is fraught with problems. That series converges to $0$ at non multiples of T, and diverges at multiples of $T$ getting us back to "$\delta(t)$ has value $\infty$ when $t=0$ and value $0$ when $t \neq 0$", things that were taught routinely 60 years ago in undergraduate EE classes but not are not so common. – Dilip Sarwate Oct 6 '16 at 2:47
• well, Dilip, this is what wading into the weeds is about. you saw my question to the math SE about that. $\delta(t)$ is not a "function" from the POV of pure, legitimate mathematics. but engineers and physicists (and likely other scientists) have been treating $\delta(t)$ as a continuous-time function with a couple of properties $$\delta(t) = 0 \quad \forall \ t\ne 0$$ and $$\int\limits_{-\infty}^{\infty} \delta(t) \, dt = 1$$ for decades without our maths blowing up. and we do say the dirac comb is also a function and a periodic one (with a Fourier series) and it works, too. – robert bristow-johnson Oct 6 '16 at 3:57
239k views
### Showcase of beautiful typography done in TeX & friends
If you were asked to show examples of beautifully typeset documents in TeX & friends, what would you suggest? Preferably documents available online (I'm aware I could go to a bookstore and find ...
151k views
### Nice scientific pictures show off
Task Show off your best scientific illustration ! The main purpose of this question is to share beautiful scientific pictures, preferably with an educational aspect. Content Your post must ...
220k views
### What's the quickest way to write “2nd” “3rd” etc in LaTeX?
If I want to use abbreviations for second, third and so on in my LaTeX documents, I am currently doing this: 2$^{nd}$ which seems unsatisfactory. Is there a macro I'm missing? I'm sure I could define ...
533k views
### Multi-line (block) comments in LaTeX
In LaTeX, % can be used for single-line comments. For multi-line comments, the following command is available in the verbatim package. \begin{comment} Commented code \end{comment} But is there a ...
9k views
### Cunning (La)TeX tricks
Writing (La)TeX code sometimes requires a degree of guile. Here (in no particular order) are two of my favourite examples. Macros ending with spaces Pete asked how to see the implementation of \...
47k views
### What is the easiest way to draw a 3D cube with TikZ?
I'm trying to find the easiest way to draw a 3D cube (it's for my UML diagram) with TikZ. Could you please give an example? Like this:
9k views
### How to add some visual style and pizzazz to course notes?
LaTeX is wonderful for writing mathematical papers, but while the output is perfectly typeset it is not visually very exciting. And although it may just be "familiarity breeds contempt", I am ...
2k views
### What is missing in LaTeX?
I have been reading this article and started to think about the position of LaTeX amongst the current competitors. Currently, I am using it (or XeTeX) even for ‘casual writing’ and sometimes I spend ...
8k views
### How to keep a constant baselineskip when using minipages (or \parboxes)?
When a minipage is used to box a text paragraph, the first line of the minipaged text and the last line of the previous text paragraph are closer than two consecutive lines in a standard text ...
2k views
### Can one (more or less) automatically suppress ligatures for certain words?
This question led to a new package: selnolig One of the major attractions -- for me at least -- of typesetting my papers in (La)TeX is its automated and fully transparent use of typographic ...
903 views
### Porting the luatex/ConTeXt module “translate” to lualatex
In this question, which I posted a few weeks ago to TeX Stack Exchange, I asked how one should go about disabling specific ligatures (such as ff, fi, fl, and ffl) automatically for a list of pre-...
8k views
### Different proofs in amsmath with different QED symbol?
I'm trying to expand the amsmath package to get two different proof environments: \begin{proof} ... \end{proof} and \begin{proof*} ... \end{proof*} so proof* should act exactly like proof except of ...
3k views
### LaTeX template for book reading?
Is there any LaTeX template such that in each section I can write down my reading notes…?
# OpenGL Looking for a decent Assimp animation tutorial
## 4 posts in this topic
I've been trying to get my head around how to properly set up a skinning animation system using the bone transform data that can be imported using the Open Asset Import Library (http://assimp.sourceforge.net) but always seem to fail, which I admit is starting to get on my nerves.
I've made a good 5+ attempts inspired by this article: http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html as well as the limited hints that are given in the Assimp documentation. One thing to note is that I'm using DirectX rather than OpenGL, so things in the above tutorial will obviously not match to 100%, which may be where I'm going wrong. I'm adhering to the standard matrix multiplication order of scale * rotation * translation, and I'm transposing all of Assimp's imported matrices, which the documentation suggests are in a right-handed format. Comparing my animation frame matrices to those exposed by the assimp_view application, they do seem to match, so the problem most likely lies in the way I append all these local matrices in order to construct the final transform of each bone. Pseudo-code of my approach to that:
XMMATRIX transform = bone.localTransform;
BoneData *pParent = bone.pParent;
while (pParent) {
    transform = pParent->localTransform * transform;
    pParent = pParent->pParent;
}
// The actual bone matrix applied to each vertex, scaled by its weight, is calculated like so:
XMMATRIX mat = globalMeshInverseTransform * transform * bone.offsetMatrix;
The other thing I'm doing differently from all the info on this I've been able to find is that I'm doing the vertex transformation on the CPU by updating the vertex buffer rather than doing it in a vertex shader. I do see the upside to the vertex-shader approach; I just want to see that I'm able to do it in this, presumably easier, way before moving on to that.
So my question is basically; is there anything obvious one needs to do when using DX rather than OGL or when setting vertex data on the CPU rather than in the vertex shader (not likely) that I've missed, or is there some more to-the-point tutorial on how to go about this somewhere (I've searched but haven't been able to find any), preferably using DirectX?
Thanks for any pointers,
Husbjörn
##### Share on other sites
You don't mention any particular problems or symptoms of a problem, so I can only make a couple observations.
First, just for clarity, is your intent to implement a left-handed coordinate system? You mention D3D11, but that doesn't exclude right-handed systems.
Second, I'm not familiar with Assimp, so I can't help you much with that part of your project. But, from what you posted, you may have a couple problems.
I'm transposing all of Assimp's imported matrices which the documentation suggests are in a right handed format.
It's not clear what you mean by "right handed format." At a guess: transposing a matrix doesn't change the coordinate system handedness, if that's your intent. Transposition changes the matrix from row-major to column-major or vice-versa.
Also, as it appears that you're implementing left-to-right matrix multiplication order (SRT vs TRS), then you need to swap the order of parent-child matrix mults:
transform *= pParent->localTransform; // left-to-right order
The same would apply to the final bone transform. You say what you posted is pseudo-code so I'll leave my comments at that.
Don't know if it will help you, but you can take a look at my skinned-mesh article for a general description of the process for skinned-mesh animation.
##### Share on other sites
You don't mention any particular problems or symptoms of a problem
There isn't that much to mention; my meshes end up looking like a pile of (wrongly) deformed polygons when I try to set them to match the first animation frame. I've been unable to find any consistency to pinpoint the error beyond the fact that they are obviously being wrongly transformed. The vertices do seem to stay within a radius that is approximately 2–3 times that of the untransformed mesh (in its default bind pose), however.
First, just for clarity, is your intent to implement a left-handed coordinate system?
Yes, I've already got a rendering system set up that's working with left-handed coordinates and now trying to add animation features on top of that. I suppose you're right in that this isn't a strict necessity when using DX but it seems to be what is nearly always used anyway so I basically just went with it.
It's not clear what you mean by "right handed format." At a guess: transposing a matrix doesn't change the coordinate system handedness, if that's your intent. Transposition changes the matrix from row-major to column-major or vice-versa.
Ah yes, I got the wording wrong here; they are transposed into row-major matrices. However, upon reviewing the Assimp documentation it suggests that its matrices are already in row-major format, yet not transposing them makes my static mesh imports appear backwards. Not transposing the provided joint / frame offset matrices doesn't give the expected results either.
Also, as it appears that you're implementing left-to-right matrix multiplication order (SRT vs TRS), then you need to swap the order of parent-child matrix mults
Hm, that does sound like something that may be wrong indeed. I'll take a look at it when I get home.
Also that's a great article you wrote, I've only skimmed through the first half of it so far but it does seem to be rather descriptive, so thanks a lot for that!
Taking a fresh look at your post this morning, the algorithm for the final bone transforms doesn't appear efficient, and, depending on what your actual code looks like, may be incorrect. You're working your way from child-to-parent. You're duplicating lots of calcs for any bone that has multiple children. You should start with the root frame (node, bone) and work your way through the children - i.e., parent-to-children. That's discussed in the article I linked.
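The parent-to-children traversal described above can be sketched as follows (an editorial illustration with a hypothetical `Node` class and NumPy standing in for the engine's math types, not the poster's actual code; it uses the left-to-right, row-vector convention discussed earlier):

```python
import numpy as np

class Node:
    """Hypothetical stand-in for a frame/bone node in the hierarchy."""
    def __init__(self, local_transform, children=()):
        self.local = local_transform            # transform relative to the parent
        self.children = list(children)
        self.global_transform = None

def update_globals(node, parent_global=np.eye(4)):
    # Root-to-children walk: each node's global transform is computed exactly
    # once, even when a parent has many children.
    # Left-to-right (row-vector) convention: global = local * parentGlobal.
    node.global_transform = node.local @ parent_global
    for child in node.children:
        update_globals(child, node.global_transform)

def translation(x, y, z):
    """Row-vector-convention translation matrix (translation in the bottom row)."""
    m = np.eye(4)
    m[3, :3] = [x, y, z]
    return m

leaf = Node(translation(1, 0, 0))
root = Node(translation(0, 1, 0), [leaf])
update_globals(root)
print(leaf.global_transform[3, :3])  # leaf ends up at (1, 1, 0) in world space
```

Doing the walk top-down makes each bone's global transform a single multiply against an already-computed parent, instead of re-walking the ancestor chain per bone.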
Edited by Buckeye
Taking a fresh look at your post this morning, the algorithm for the final bone transforms doesn't appear efficient, and, depending on what your actual code looks like, may be incorrect. You're working your way from child-to-parent. You're duplicating lots of calcs for any bone that has multiple children. You should start with the root frame (node, bone) and work your way through the children - i.e., parent-to-children. That's discussed in the article I linked.
Yes, I'm aware it isn't efficient; it's just written to be as obvious as possible so that I don't miss any errors when storing things in flattened hierarchy lists etc., as I intend to do once I get something up and running that I can verify things against.
Anyway, thanks to your article and some more restructuring into a proper node hierarchy I've finally got the meshes to appear correct, though they seem to be offset by 90° around the X axis. I can just offset the root node to fix this, but I'm thinking there must still be something wrong with my transformations (I haven't checked, but there might be a translation / scale offset too by the looks of it). Does that sound familiar to anyone?
This is the code I'm using at the moment for applying transformations to the vertices:
for(UINT m = 0; m < meshList.size(); m++) {
Mesh *mesh = meshList[m].operator->();
mesh->LockVertexData();
size_t vertOffsetPos = mesh->GetVertexDataSemanticOffset("POSITION", 0);
for(size_t v = 0; v < mesh->vertexCount; v++) {
XMFLOAT3 newPos;
XMMATRIX matTransform = flattenedNodeList[mesh->pVertexWeight->at(v).boneId[0]]->matOffset *
flattenedNodeList[mesh->pVertexWeight->at(v).boneId[0]]->matGlobalTransform *
mesh->pVertexWeight->at(v).weight[0];
for(UINT w = 1; w < 4; w++) {
if(mesh->pVertexWeight->at(v).boneId[w] == 0xffffffff)
break;
matTransform += flattenedNodeList[mesh->pVertexWeight->at(v).boneId[w]]->matOffset * flattenedNodeList[mesh->pVertexWeight->at(v).boneId[w]]->matGlobalTransform * mesh->pVertexWeight->at(v).weight[w];
}
// vertexPos is assumed to hold this vertex's bind-pose position, loaded
// from the vertex data at vertOffsetPos (not shown in this snippet).
XMStoreFloat3(&newPos, XMVector3Transform(vertexPos, matTransform));
mesh->SetVertexData(v, vertOffsetPos, newPos);
}
mesh->UnlockVertexData();
}
Edit: I tried to get the formatting to look better for the code section but it doesn't seem willing to play along so sorry for it looking a bit messed up.
Edit2: Ah, figured it out; I forgot to apply the mesh's inverse global transform; everything seems to be working as intended now
Onwards to making a more efficient implementation then, thanks again for your help Buckeye.
Edited by Husbjörn
• ### Similar Content
• So it's been a while since I took a break from my whole creating a planet in DX11. Last time around I got stuck on fixing a nice LOD.
A week back or so I got help to find this:
https://github.com/sp4cerat/Planet-LOD
In general this is what I'm trying to recreate in DX11. The author of that planet LOD uses OpenGL, but that is a minor issue and something I can solve. But I have a question regarding the code.
He gets the position using this row
vec4d pos = b.var.vec4d["position"];
This is then used further down when he sends the variable "center" into the drawing function:
if (pos.len() < 1) pos.norm(); world::draw(vec3d(pos.x, pos.y, pos.z));
Inside the draw function this happens:
draw_recursive(p3[0], p3[1], p3[2], center);
Basically the 3 vertices of the triangle and the center of detail that he sent as a parameter earlier: vec3d(pos.x, pos.y, pos.z)
Now onto my real question, he does vec3d edge_center[3] = { (p1 + p2) / 2, (p2 + p3) / 2, (p3 + p1) / 2 }; to get the edge center of each edge, nothing weird there.
But this is used later on with:
vec3d d = center + edge_center[i];
edge_test[i] = d.len() > ratio_size;
edge_test is then used to evaluate if there should be a triangle drawn or if it should be split up into 3 new triangles instead. Why does this work for him? Shouldn't it be center - edge_center or something like that? Why add them together? I assume here that the center is the center of detail for the LOD, i.e. the position of the camera if it stood on the ground of the planet and not up in the air like it is now.
Full code can be seen here:
https://github.com/sp4cerat/Planet-LOD/blob/master/src.simple/Main.cpp
If anyone would like to take a look and try to help me understand this code I would love this person. I'm running out of ideas on how to solve this in my own head; most likely I've twisted it one time too many up in my head.
Toastmastern
• I googled around but are unable to find source code or details of implementation.
What keywords should I search for this topic?
Things I would like to know:
A. How to ensure that partially covered pixels are rasterized?
Apparently by expanding each triangle by 1 pixel or so, the rasterization problem is almost solved.
But it will result in an unindexable triangle list with tons of overlaps. Will it incur a large performance penalty?
B. How to ensure proper synchronization in GLSL?
GLSL seems to only allow int32 atomics on images.
C. Is there some simple ways to estimate coverage on-the-fly?
In case I am to draw 2D shapes onto an existing target:
1. A multi-pass whatever-buffer seems overkill.
2. Multisampling could cost a lot of memory, though all I need is better coverage.
Besides, I have to blit twice if the draw target is not multisampled.
• By mapra99
Hello
I am working on a recent project and I have been learning how to code in C# using OpenGL libraries for some graphics. I have achieved some quite interesting things using TAO Framework writing in Console Applications, creating a GLUT Window. But my problem now is that I need to incorporate the Graphics in a Windows Form so I can relate the objects that I render with some .NET Controls.
To deal with this problem, I have seen in some forums that it's better to use OpenTK instead of TAO Framework, so I can use the glControl that the OpenTK libraries offer. However, I haven't found complete articles, tutorials or source code that help with using the glControl or that would introduce me to the OpenTK functions. Would somebody please share in this forum some links or files where I can find good documentation about this topic? Or should I use another library instead of OpenTK?
Thanks!
• Hello, I have been working on SH Irradiance map rendering, and I have been using a GLSL pixel shader to render SH irradiance to 2D irradiance maps for my static objects. I already have it working with 9 3D textures so far for the first 9 SH functions.
In my GLSL shader, I have to send in 9 SH Coefficient 3D Textures that use RGBA8 as a pixel format: RGB for the coefficients for red, green, and blue, and A for checking whether the voxel is in use (for the 3D texture solidification shader, to prevent bleeding).
My problem is, I want to knock this number of textures down to something like 4 or 5. Getting even lower would be a godsend. This is because I eventually plan on adding more SH Coefficient 3D Textures for other parts of the game map (such as inside rooms, as opposed to the outside), to circumvent irradiance probe bleeding between rooms separated by walls. I don't want to reach the 32 texture limit too soon. Also, I figure that it would be a LOT faster.
Is there a way I could, say, store 2 sets of SH Coefficients for 2 SH functions inside a texture with RGBA16 pixels? If so, how would I extract them from inside GLSL? Let me know if you have any suggestions ^^.
• By KarimIO
EDIT: I thought this was restricted to Attribute-Created GL contexts, but it isn't, so I rewrote the post.
Hey guys, whenever I call SwapBuffers(hDC), I get a crash, and I get a "Too many posts were made to a semaphore." from Windows as I call SwapBuffers. What could be the cause of this?
Update: No crash occurs if I don't draw, just clear and swap.
static PIXELFORMATDESCRIPTOR pfd =   // pfd Tells Windows How We Want Things To Be
{
    sizeof(PIXELFORMATDESCRIPTOR),   // Size Of This Pixel Format Descriptor
    1,                               // Version Number
    PFD_DRAW_TO_WINDOW |             // Format Must Support Window
    PFD_SUPPORT_OPENGL |             // Format Must Support OpenGL
    PFD_DOUBLEBUFFER,                // Must Support Double Buffering
    PFD_TYPE_RGBA,                   // Request An RGBA Format
    32,                              // Select Our Color Depth
    0, 0, 0, 0, 0, 0,                // Color Bits Ignored
    0,                               // No Alpha Buffer
    0,                               // Shift Bit Ignored
    0,                               // No Accumulation Buffer
    0, 0, 0, 0,                      // Accumulation Bits Ignored
    24,                              // 24Bit Z-Buffer (Depth Buffer)
    0,                               // No Stencil Buffer
    0,                               // No Auxiliary Buffer
    PFD_MAIN_PLANE,                  // Main Drawing Layer
    0,                               // Reserved
    0, 0, 0                          // Layer Masks Ignored
};

if (!(hDC = GetDC(windowHandle)))
    return false;
unsigned int PixelFormat;
if (!(PixelFormat = ChoosePixelFormat(hDC, &pfd)))
    return false;
if (!SetPixelFormat(hDC, PixelFormat, &pfd))
    return false;
hRC = wglCreateContext(hDC);
if (!hRC) {
    std::cout << "wglCreateContext Failed!\n";
    return false;
}
if (wglMakeCurrent(hDC, hRC) == NULL) {
    std::cout << "Make Context Current Second Failed!\n";
    return false;
}

... // OGL Buffer Initialization

glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glBindVertexArray(vao);
glUseProgram(myprogram);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *)indexStart);
SwapBuffers(GetDC(window_handle));
# Sum of independent Gaussians via change-of-variables formula
Let $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ be independent. It can easily be shown via moment generating functions that $Z=X+Y$ is also normally distributed with $Z\sim N(\mu_1+\mu_2, \sigma_1^2+\sigma_2^2)$.
I was wondering if one could instead use the change-of-variables formula to derive the distribution of $Z$. The obvious way seems to be to go from the vector $(X,Y)$ to the vector $(Z,Y) = (X+Y, Y)$. The Jacobian of this is 1 and the result seems at first to be straightforward.
However the result of course is the joint distribution of $Z$ and $Y$, so to get the distribution of $Z$ one has to then marginalise out $y$ by integration.
It is easy to see from the result that in the general case, i.e. Gaussian or not, the sum of independent random variables is distributed as the convolution of their distributions (see also Wikipedia). In the Gaussian case, how would one go about computing the required integral? That is, the integral
$$\int \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\left(-\frac{1}{2} \frac{((z-y)-\mu_1)^2}{\sigma_1^2}\right) \frac{1}{\sqrt{2\pi}\,\sigma_2} \exp\left(-\frac{1}{2} \frac{(y-\mu_2)^2}{\sigma_2^2}\right) \mathrm dy \,?$$
Or, instead of $(X,Y) \rightarrow (X+Y,Y)$, is there another change of variables that would result in an easier integral?
• By inspection, it's clear that any linear combination of $X$ and $Y$ will have some normal distribution. At that point it becomes unnecessary to compute any integrals, because basic properties of expectation and variance tell you immediately what the expectation and variance of the linear combination is--and those two parameters completely determine any Normal distribution.
– whuber
Mar 19 '17 at 17:16
The technique used to evaluate that integral is called completing the square in the exponent, which is pretty close to what you learned (and undoubtedly soon forgot) in high school when you derived the formula for the roots of a quadratic polynomial. Multiply the exponential functions that are in the integrand by using the identity $\exp(a+b) = \exp(a)\exp(b)$ bass ackwards to get $\exp\left(-\frac{1}{2\alpha} (y^2 - 2\beta y +\gamma)\right)$ where $\beta$ is a linear function of $z$, $\gamma$ is a quadratic function of $z$, and $\alpha$ does not depend on $z$ at all. Then, massage the argument of the exponent to write it as \begin{align} y^2 - 2\beta y +\gamma &= y^2 - 2\beta y + \beta^2 + \gamma - \beta^2\\ &= (y-\beta)^2 + (\gamma - \beta^2) \end{align} Consequently, the integral becomes of the form $$\frac{1}{2\pi \sigma_1\sigma_2} \exp\left(-\frac{1}{2\alpha} (\gamma-\beta^2)\right) \int_{-\infty}^\infty \exp\left(-\frac{1}{2\alpha} (y-\beta)^2\right) \, \mathrm dy$$ where the integrand needs to be recognized as almost the density function of a $N(\beta, \alpha)$ random variable, giving $$f_Z(z) = A \exp\left(-\frac{1}{2\alpha} (\gamma-\beta^2)\right)$$ for a specific value of $A$ that I will leave for you to work out. Notice that the argument of the exponential is a quadratic function of $z$ and so $Z$ has a Gaussian density. As whuber remarks in a comment, somewhere along such a calculation it might dawn on you that the density of $Z$ is going to end up being a Gaussian density, at which point you can save yourself a lot of algebra by working out the mean and variance of $Z$ and plugging the results into the Gaussian density formula. And no, I don't mean plugging $E[Z]$ etc. into the integrand of the convolution integral; the integrand of the convolution integral has nothing to do with $Z$.
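As a numerical sanity check on this result, the convolution integral can be evaluated directly and compared against the $N(\mu_1+\mu_2,\,\sigma_1^2+\sigma_2^2)$ density (an editorial sketch, not part of the original answer; the parameter values are arbitrary):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def convolution_pdf(z, mu1, s1, mu2, s2, lo=-40.0, hi=40.0, n=20000):
    """Riemann-sum approximation of the convolution integral at the point z."""
    dy = (hi - lo) / n
    return sum(normal_pdf(z - (lo + i * dy), mu1, s1)
               * normal_pdf(lo + i * dy, mu2, s2) * dy
               for i in range(n))

mu1, s1, mu2, s2 = 1.0, 2.0, -3.0, 1.5
z = 0.7
direct = normal_pdf(z, mu1 + mu2, math.sqrt(s1 ** 2 + s2 ** 2))
numeric = convolution_pdf(z, mu1, s1, mu2, s2)
assert abs(direct - numeric) < 1e-9  # the convolution matches N(mu1+mu2, s1^2+s2^2)
```

Because the integrand decays like a Gaussian, a plain Riemann sum over a wide interval already agrees with the closed form to many digits.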
If one is not set on using the convolution integral formula and nothing else, the easiest way of directly determining the pdf of $Z=X+Y$ where $X$ and $Y$ are independent Gaussian random variables is to find the CDF first, recognize it as a Gaussian CDF, and hence write down the density of $Z$. The argument is as follows.
Consider independent random variables $A, B \sim N(0,1)$ and set $$Z = (\sigma_1 A + \mu_1) + (\sigma_2 B + \mu_2) = X + Y$$ where $X\sim N(\mu_1, \sigma_1^2), Y \sim N(\mu_2, \sigma_2^2)$ are independent random variables. Then, $$F_Z(z) = P\{\sigma_1 A + \sigma_2 B + (\mu_1 + \mu_2)\leq z\} =\iint_{\sigma_1a + \sigma_2b + (\mu_1 + \mu_2)\leq z} \phi(a)\phi(b) \,\mathrm da \,\mathrm db$$ where $\phi(\cdot)$ is the standard Gaussian density. But the integrand has circular symmetry and so the probability depends only on the distance of the line $\sigma_1 a + \sigma_2 b + (\mu_1 + \mu_2) = z$ from the origin. Indeed, by a rotation of coordinates we can make this line vertical (say), and if it crosses the $a$ axis at $d$, then $$F_Z(z) = P\{A \leq d\} = \Phi(d)$$ showing that $Z$ is a Gaussian random variable. At this point, as whuber says, we can compute $$E[Z] = E[X+Y] = \mu_1 + \mu_2, \quad \operatorname{var}(Z) =\operatorname{var}(X+Y) = \operatorname{var}(X) + \operatorname{var}(Y) = \sigma_1^2+\sigma_2^2$$ and write down the density of $Z$. Or, we can bull ahead and note that standard results in analytical geometry tell us that $$d = \frac{z - (\mu_1+\mu_2)}{\sqrt{\sigma_1^2 + \sigma_2^2}}$$ and so \begin{align} F_Z(z) &= \Phi\left(\frac{z - (\mu_1+\mu_2)}{\sqrt{\sigma_1^2 + \sigma_2^2}}\right)\\ f_Z(z) &= \frac{1}{\sqrt{\sigma_1^2 + \sigma_2^2}}\phi\left(\frac{z - (\mu_1+\mu_2)}{\sqrt{\sigma_1^2 + \sigma_2^2}}\right) \end{align} where we can recognize this last expression as the density of a $N(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2)$ random variable.
## 4s orbitals and 3d orbitals
JamieVu_2C
Posts: 108
Joined: Thu Jul 25, 2019 12:16 am
### 4s orbitals and 3d orbitals
In the book, it says "Argon completes the third period. From Fig. 1E.1, you can see that the energy of the 4s-orbital is slightly lower than that of
the 3d-orbitals. As a result, instead of electrons entering the 3d-orbitals, the fourth period now begins by filling the 4s-orbitals..."
How is 4s lower in energy than 3d, and if 4s is lower in energy, why is 3d written before 4s in the electron configurations? Why would the valence electrons be removed from 4s first before going to the 3d orbitals if 3d is higher in energy?
Justin Vayakone 1C
Posts: 110
Joined: Sat Sep 07, 2019 12:19 am
### Re: 4s orbitals and 3d orbitals
When elements have $z\leq 20$, the 4s orbital is lower in energy than 3d, meaning 4s is written first. In other words, elements up to Calcium will have the 4s state come before 3d. But for elements after Calcium in the periodic table, the 4s state is actually higher in energy than 3d. For example, if we are writing the electron configuration of Scandium, it would be: $[\mathrm{Ar}]3d^14s^2$. Now referring back to the textbook, since Argon is before Calcium, the 4s orbital comes first.
Jacey Yang 1F
Posts: 101
Joined: Fri Aug 09, 2019 12:17 am
### Re: 4s orbitals and 3d orbitals
As Dr. Lavelle explained in lecture, the 4s state becomes higher in energy than 3d once it is occupied and electrons begin to enter the 3d state, which is why the valence electrons are removed from 4s first.
# Complex dynamics and quasi-conformal geometry.
23-25 October 2017
Université d'Angers
# Contribution
Université d'Angers - L003.
# Rationality is practically decidable for Nearly Euclidean Thurston maps.
## Speaker(s)
• Kevin PILGRIM
## Description
A Thurston map $f: (S^2, P) \to (S^2, P)$ is \emph{nearly Euclidean} if its postcritical set $P$ has four points and each branch point is simple. We show that the problem of determining whether $f$ is equivalent to a rational map is algorithmically decidable, and we give a practical implementation of this algorithm. Executable code and data from 50,000 examples is tabulated at \url{https://www.math.vt.edu/netmaps/index.php}. This is joint work with W. Floyd and W. Parry.
## Thinking Mathematically (6th Edition)
$1~in = 2.54~cm$ We can convert 23 inches to units of centimeters. $23~in = \frac{23~in}{1}\times \frac{2.54~cm}{1~in} = 58.42~cm$
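The same dimensional-analysis step can be written as a one-line calculation (a small editorial sketch; the conversion factor 2.54 cm per inch is exact by definition):

```python
CM_PER_INCH = 2.54  # exact by definition

def inches_to_cm(inches):
    """Convert inches to centimeters by multiplying by the conversion factor."""
    return inches * CM_PER_INCH

print(round(inches_to_cm(23), 2))  # -> 58.42
```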
Language of TMs that accept some x in less than 50 steps. Is it in co-RE?
L = {M | M is a TM and there exists an input that the TM M accepts in less than 50 steps}
I need to find the minimal class it belongs to among R / RE / co-RE / not in RE∪co-RE. I managed to show that it is in RE with a TM. I think it's not in co-RE, because it has to check every input to know whether a TM M belongs to L, and there are infinitely many inputs. I tried to use a mapping reduction from ATM, but that failed. Would appreciate any advice, thanks.
• Hint: If it is both RE and co-RE then it is recursive. – Yuval Filmus Jan 10 '17 at 21:00
• By intuition I think it's not in co-RE. Because to know that for every input x, M does not accept x in less than 50 steps, we have to run it for every x and check. But if we run 50 steps for every input, and one that accepts in less than 50 steps does not exist, we will continue forever. Why is that not correct? – Gray Jan 10 '17 at 21:34
• Try to use my hint. Intuition is not enough. – Yuval Filmus Jan 10 '17 at 21:42
• @Bar : In 50 steps, you can't read more than $50$ letters of the input so.... – xavierm02 Jan 10 '17 at 22:15
• @Bar : If two inputs have the same 50 characters, then ... so you only need to test ... different inputs. – xavierm02 Jan 11 '17 at 9:15
Maybe this will help?
https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-045j-automata-computability-and-complexity-spring-2011/lecture-notes/MIT6_045JS11_lec09.pdf
Page 42 Credits : Nancy Lynch MIT
Applications of Rice’s Theorem
• Example 3: Another property that isn’t a language property and is decidable
{ M |M is a TM that runs for at most 37 steps on input 01 }
– Not a language property, not a property of a machine’s structure.
– Rice doesn’t apply.
– Obviously decidable, since, given the TM description, we can just simulate it for 37 steps.
• Please transcribe the slide as text, to make textual search possible. – Yuval Filmus Jan 12 '17 at 9:58
Your problem is decidable, since a Turing machine accepts some input in $t$ steps iff it accepts some input of length at most $t$ in $t$ steps, and the latter property can be easily checked.
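The check is finite because a machine that halts within 50 steps can read at most the first 50 input symbols, so only inputs of length at most 50 need to be simulated. A quick illustrative count of the candidate inputs for a binary input alphabet (an editorial sketch, not part of the original answer):

```python
def candidate_inputs(alphabet_size, max_steps):
    """Distinct inputs of length <= max_steps over an alphabet of the given size."""
    return sum(alphabet_size ** k for k in range(max_steps + 1))

# Binary input alphabet, 50-step bound from the question: a finite (if large) search.
print(candidate_inputs(2, 50) == 2 ** 51 - 1)  # geometric series sums to 2^51 - 1
```

The bound is astronomically large but finite, which is all that decidability requires.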
Repeating my comment, if you aimed at proving a negative answer, one avenue would be to prove that the language is not recursive; since you already know that the language is recursively enumerable, it would follow that it is not co-recursively enumerable.
Stated succinctly, an r.e. language is recursive iff it is co-r.e.
This is my solution:
A mapping reduction from ATM to L:
On input M,w: return a machine M' that, on input x, runs M on w; if M accepts, then M' accepts (in a single step); if M rejects or loops, M' rejects.
If M accepts w, then L(M') = Sigma* and therefore there is a y in Sigma* that M' accepts in a single step (in fact this holds for all y in Sigma*) - that's fewer than 50 steps. If M loops or rejects, then L(M') is the empty set and therefore there is no word that M' accepts in less than 50 steps, because M' rejects all words.
Which of the following describes an inconclusive cadence?
A
A cadence with a progression of V - vi.
B
A cadence with a progression of V (or V$^7$) - I (or i).
## ABSTRACT
Contents: Introduction; Tidal Circulation; Water Flow through the Forest; Mixing, Flushing, and Seed Dispersion; Waves in Mangroves; Maintenance of Biodiversity; Sedimentation and Sea-Level Rise; Fine Sediment as a Tracer for Mixing; Groundwater Flow and Bioturbation; Recruitment of Prawn Larvae; Conclusions; References
There are two dominant types of mangrove swamps: the riverine type that fringes rivers and tidal creeks, and the open-water type that is directly exposed to waves (Lugo & Snedaker, 1974). The former type is the most common, with a strip of mangroves typically 50 to 300 m wide fringing the tidal creek or river on either side. Such an example is the 5-km-long mangrove-fringed Merbok River estuary in Malaysia (Figure 1). The second type is generally present only in embayments protected by shallow reefs and mud or sand banks that allow wave attack only around high tide. Missionary Bay in Australia (Figure 1b) is a typical example of an extensive mangrove swamp that is protected from the prevailing tradewinds but nevertheless is occasionally attacked by waves in the monsoon season. Along coral reefs, mangroves can also be present, protected from excessive wave attack by waves breaking on the fringing reefs. Along muddy coasts a strip of mangroves, typically a few hundred meters wide, can fringe the open coast; these are very frequently under wave attack and nevertheless survive. Such is the case of the Thuy Hai coast of the Gulf of Tonkin in Vietnam (Figure 1c).
## Section 1.4 Invariant Subspaces
### Subsection 1.4.1 Invariant Subspaces
###### Definition 1.4.1. Invariant Subspace.
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation and $W$ is a subspace of $V\text{.}$ Suppose further that $\lteval{T}{\vect{w}}\in W$ for every $\vect{w}\in W\text{.}$ Then $W$ is an invariant subspace of $V$ relative to $T\text{.}$
We do not have any special notation for an invariant subspace, so it is important to recognize that an invariant subspace is always relative to both a superspace ($V$) and a linear transformation ($T$), which will sometimes not be mentioned, yet will be clear from the context. Note also that the linear transformation involved must have an equal domain and codomain — the definition would not make much sense if our outputs were not of the same type as our inputs.
As is our habit, we begin with an example that demonstrates the existence of invariant subspaces, while leaving other questions unanswered for the moment. We will return later to understand how this example was constructed, but for now, just understand how we check the existence of the invariant subspaces.
Consider the linear transformation $\ltdefn{T}{\complex{4}}{\complex{4}}$ defined by $\lteval{T}{\vect{x}}=A\vect{x}$ where $A$ is given by
\begin{equation*} A=\begin{bmatrix} -8 & 6 & -15 & 9 \\ -8 & 14 & -10 & 18 \\ 1 & 1 & 3 & 0 \\ 3 & -8 & 2 & -11 \end{bmatrix} \end{equation*}
Define (with zero motivation),
\begin{align*} \vect{w_1}&=\colvector{-7\\-2\\3\\0} & \vect{w_2}&=\colvector{-1\\-2\\0\\1} \end{align*}
and set $W=\spn{\set{\vect{w}_1,\,\vect{w}_2}}\text{.}$ We verify that $W$ is an invariant subspace of $\complex{4}$ with respect to $T\text{.}$ By the definition of $W\text{,}$ any vector chosen from $W$ can be written as a linear combination of $\vect{w}_1$ and $\vect{w}_2\text{.}$ Suppose that $\vect{w}\in W\text{,}$ and then check the details of the following verification,
\begin{align*} \lteval{T}{\vect{w}}&=\lteval{T}{a_1\vect{w}_1+a_2\vect{w}_2}&&\\ &=a_1\lteval{T}{\vect{w}_1}+a_2\lteval{T}{\vect{w}_2}&&\\ &=a_1\colvector{-1\\-2\\0\\1}+a_2\colvector{5\\-2\\-3\\2}\\ &=a_1\vect{w}_2+a_2\left((-1)\vect{w}_1+2\vect{w}_2\right)\\ &=(-a_2)\vect{w}_1+(a_1+2a_2)\vect{w}_2\in W \end{align*}
So, by Definition IS, $W$ is an invariant subspace of $\complex{4}$ relative to $T\text{.}$
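The verification above can also be confirmed numerically. A small editorial sketch using NumPy with the matrix and vectors of the example: each image $A\vect{w}$ is checked for membership in $W$ by solving for its coordinates in the spanning set.

```python
import numpy as np

# The matrix A and spanning vectors w1, w2 from the example above.
A = np.array([
    [-8,   6, -15,   9],
    [-8,  14, -10,  18],
    [ 1,   1,   3,   0],
    [ 3,  -8,   2, -11],
], dtype=float)
w1 = np.array([-7, -2, 3, 0], dtype=float)
w2 = np.array([-1, -2, 0, 1], dtype=float)

W = np.column_stack([w1, w2])
for w in (w1, w2):
    # Find coordinates c with W c = A w; an exact fit means A w lies in span{w1, w2}.
    c, *_ = np.linalg.lstsq(W, A @ w, rcond=None)
    assert np.allclose(W @ c, A @ w)
print("W is invariant under T(x) = A x")
```

Checking the two spanning vectors suffices, since every element of $W$ is a linear combination of them and $T$ is linear.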
In an entirely similar manner we construct another invariant subspace of $T\text{.}$ With zero motivation, define
\begin{align*} \vect{x_1}&=\colvector{-3\\-1\\1\\0} & \vect{x_2}&=\colvector{0\\-1\\0\\1} \end{align*}
and set $X=\spn{\set{\vect{x}_1,\,\vect{x}_2}}\text{.}$ We verify that $X$ is an invariant subspace of $\complex{4}$ with respect to $T\text{.}$ By the definition of $X\text{,}$ any vector chosen from $X$ can be written as a linear combination of $\vect{x}_1$ and $\vect{x}_2\text{.}$ Suppose that $\vect{x}\in X\text{,}$ and then check the details of the following verification,
\begin{align*} \lteval{T}{\vect{x}}&=\lteval{T}{b_1\vect{x}_1+b_2\vect{x}_2}&&\\ &=b_1\lteval{T}{\vect{x}_1}+b_2\lteval{T}{\vect{x}_2}&&\\ &=b_1\colvector{3\\0\\-1\\1}+b_2\colvector{3\\4\\-1\\-3}\\ &=b_1\left((-1)\vect{x}_1+\vect{x}_2\right)+b_2\left((-1)\vect{x}_1+(-3)\vect{x}_2\right)\\ &=(-b_1-b_2)\vect{x}_1+(b_1-3b_2)\vect{x}_2\in X \end{align*}
So, by Definition IS, $X$ is an invariant subspace of $\complex{4}$ relative to $T\text{.}$
There is a bit of magic in each of these verifications where the two outputs of $T$ happen to equal linear combinations of the two inputs. But this is the essential nature of an invariant subspace. We'll have a peek under the hood later in Example 3.1.8, and it will not look so magical after all.
Verify that $B=\set{\vect{w}_1,\,\vect{w}_2,\,\vect{x}_1,\,\vect{x}_2}$ is linearly independent, and hence a basis of $\complex{4}\text{.}$ Splitting this basis in half, Theorem 1.2.3 tells us that $\complex{4}=W\ds X\text{.}$ To see exactly why a decomposition of a vector space into a direct sum of invariant subspaces might be interesting, work Checkpoint 1.4.3 now.
Construct a matrix representation of the linear transformation $T$ of Example 1.4.2 relative to the basis formed as the union of the bases of the two invariant subspaces, $\matrixrep{T}{B}{B}\text{.}$ Comment on your observations, perhaps after computing a few powers of the matrix representation (which represent repeated compositions of $T$ with itself). Hmmmmmm.
Solution
Our basis is
\begin{equation*} B=\set{\vect{w}_1,\,\vect{w}_2,\,\vect{x}_1,\,\vect{x}_2} =\set{ \colvector{-7\\-2\\3\\0},\, \colvector{-1\\-2\\0\\1},\, \colvector{-3\\-1\\1\\0},\, \colvector{0\\-1\\0\\1} } \end{equation*}
Now we perform the necessary computations for the matrix representation of $T$ relative to $B$
\begin{align*} \vectrep{B}{\lteval{T}{\vect{w}_1}} &=\vectrep{B}{\colvector{-1\\-2\\0\\1}} =\vectrep{B}{(0)\vect{w}_1+(1)\vect{w}_2} =\colvector{0\\1\\0\\0}\\ \vectrep{B}{\lteval{T}{\vect{w}_2}} &=\vectrep{B}{\colvector{5\\-2\\-3\\2}} =\vectrep{B}{(-1)\vect{w}_1+(2)\vect{w}_2} =\colvector{-1\\2\\0\\0}\\ \vectrep{B}{\lteval{T}{\vect{x}_1}} &=\vectrep{B}{\colvector{3\\0\\-1\\1}} =\vectrep{B}{(-1)\vect{x}_1+(1)\vect{x}_2} =\colvector{0\\0\\-1\\1}\\ \vectrep{B}{\lteval{T}{\vect{x}_2}} &=\vectrep{B}{\colvector{3\\4\\-1\\-3}} =\vectrep{B}{(-1)\vect{x}_1+(-3)\vect{x}_2} =\colvector{0\\0\\-1\\-3} \end{align*}
Applying Definition MR, we have
\begin{equation*} \matrixrep{T}{B}{B}= \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & -1 & -1 \\ 0 & 0 & 1 & -3 \end{bmatrix} \end{equation*}
The interesting feature of this representation is the two $2\times 2$ blocks on the diagonal that arise from the decomposition of $\complex{4}$ into a direct sum of invariant subspaces. Or maybe the interesting feature of this matrix is the two $2\times 2$ submatrices in the “other” corners that are all zero. You can decide.
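The block-diagonal representation can be reproduced numerically as a change of basis, $\matrixrep{T}{B}{B}=P^{-1}AP$ where the columns of $P$ are the basis vectors of $B$ (an editorial sketch using NumPy, with the data of the example):

```python
import numpy as np

A = np.array([
    [-8,   6, -15,   9],
    [-8,  14, -10,  18],
    [ 1,   1,   3,   0],
    [ 3,  -8,   2, -11],
], dtype=float)
# Columns of P are the basis B = {w1, w2, x1, x2}.
P = np.array([
    [-7, -1, -3,  0],
    [-2, -2, -1, -1],
    [ 3,  0,  1,  0],
    [ 0,  1,  0,  1],
], dtype=float)

M = np.linalg.inv(P) @ A @ P  # matrix representation of T relative to B
print(np.round(M).astype(int))
```

The printed matrix is the block-diagonal representation displayed above, with one $2\times 2$ block per invariant subspace.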
Prove that the subspaces $U, V\subseteq\complex{5}$ are invariant with respect to the linear transformation $\ltdefn{R}{\complex{5}}{\complex{5}}$ defined by $\lteval{R}{\vect{x}}=B\vect{x}\text{.}$
\begin{equation*} B=\begin{bmatrix} 4 & 47 & 3 & -46 & 20 \\ 10 & 61 & 8 & -56 & 10 \\ -10 & -69 & -7 & 67 & -20 \\ 11 & 70 & 9 & -64 & 12 \\ 3 & 19 & 3 & -16 & 1\end{bmatrix} \end{equation*}
\begin{align*} U&=\spn{\set{ \colvector{1\\0\\-1\\0\\0},\, \colvector{0\\1\\-1\\1\\0} }} & V&=\spn{\set{ \colvector{1\\1\\-1\\1\\0},\, \colvector{3\\3\\-2\\4\\2},\, \colvector{-2\\3\\-2\\3\\1} }} \end{align*}
Prove that the union of $U$ and $V$ is a basis of $\complex{5}\text{,}$ and then provide a matrix representation of $R$ relative to this basis.
Example 1.4.2 and Checkpoint 1.4.4 are a bit mysterious at this stage. Do we know any other examples of invariant subspaces? Yes, as it turns out, we have already seen quite a few. We will give some specific examples, and for more general situations, describe broad classes of invariant subspaces by theorems. First up: eigenspaces.
###### Theorem 1.4.5. Eigenspaces are Invariant Subspaces.
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation with eigenvalue $\lambda\text{,}$ and let $W$ be a subspace of the eigenspace $\eigenspace{T}{\lambda}\text{.}$ Then $W$ is an invariant subspace of $V$ relative to $T\text{.}$
Choose $\vect{w}\in W\text{.}$ Then
\begin{equation*} \lteval{T}{\vect{w}}=\lambda\vect{w}\in W. \end{equation*}
So by Definition 1.4.1, $W$ is an invariant subspace of $V$ relative to $T\text{.}$
Theorem 1.4.5 is general enough to determine that an entire eigenspace is an invariant subspace, or that simply the span of a single eigenvector is an invariant subspace. It is not always the case that any subspace of an invariant subspace is again an invariant subspace, but eigenspaces do have this property. Here is an example of the theorem, which also allows us to very quickly build several invariant subspaces.
Define the linear transformation $\ltdefn{S}{M_{22}}{M_{22}}$ by
\begin{equation*} \lteval{S}{\begin{bmatrix}a&b\\c&d\end{bmatrix}} = \begin{bmatrix} -2a + 19b - 33c + 21d & -3a + 16b - 24c + 15d \\ -2a + 9b - 13c + 9d & -a + 4b - 6c + 5d \end{bmatrix} \end{equation*}
Build a matrix representation of $S$ relative to the standard basis (Definition MR) and compute eigenvalues and eigenspaces of $S$ with the computational techniques of Chapter E in concert with Theorem EER. Then
\begin{align*} \eigenspace{S}{1}&=\spn{\set{\begin{bmatrix}4&3\\2&1\end{bmatrix}}} & \eigenspace{S}{2}&=\spn{\set{ \begin{bmatrix} 6& 3\\1&0\end{bmatrix},\, \begin{bmatrix}-9&-3\\0&1\end{bmatrix} }} \end{align*}
So by Theorem 1.4.5, both $\eigenspace{S}{1}$ and $\eigenspace{S}{2}$ are invariant subspaces of $M_{22}$ relative to $S\text{.}$
However, Theorem 1.4.5 provides even more invariant subspaces. Since $\eigenspace{S}{1}$ has dimension 1, it has no interesting subspaces; however, $\eigenspace{S}{2}$ has dimension 2 and has a plethora of subspaces. For example, set
\begin{equation*} \vect{u}= 2\begin{bmatrix} 6& 3\\1&0\end{bmatrix}+ 3\begin{bmatrix}-9&-3\\0&1\end{bmatrix} = \begin{bmatrix}-15&-3\\2&3\end{bmatrix} \end{equation*}
and define $U=\spn{\set{\vect{u}}}\text{.}$ Then since $U$ is a subspace of $\eigenspace{S}{2}\text{,}$ Theorem 1.4.5 says that $U$ is an invariant subspace of $M_{22}$ (or we could check this claim directly based simply on the fact that $\vect{u}$ is an eigenvector of $S$).
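These eigenvector claims can be checked mechanically. The following sketch (not part of the text) identifies $M_{22}$ with $\complex{4}$ by taking coordinates relative to the standard basis, so the matrix of $S$ is read directly off its definition; it then verifies the invariance computations in plain Python:

```python
# Matrix of S relative to the standard basis of M22: coordinates (a, b, c, d).
S = [[-2, 19, -33, 21],
     [-3, 16, -24, 15],
     [-2,  9, -13,  9],
     [-1,  4,  -6,  5]]

def apply(M, v):
    # Matrix-vector product with plain lists.
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

x = [4, 3, 2, 1]                         # spans E_S(1)
y1, y2 = [6, 3, 1, 0], [-9, -3, 0, 1]    # basis of E_S(2)

assert apply(S, x) == x                       # S(x) = 1*x
assert apply(S, y1) == [2 * t for t in y1]    # S(y1) = 2*y1
assert apply(S, y2) == [2 * t for t in y2]    # S(y2) = 2*y2

# Any nonzero u in E_S(2) spans an invariant subspace, e.g. u = 2*y1 + 3*y2.
u = [2 * s + 3 * t for s, t in zip(y1, y2)]
assert apply(S, u) == [2 * t for t in u]      # S(u) = 2u, so span{u} is invariant
print("all invariance checks pass")
```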
For every linear transformation there are some obvious, trivial invariant subspaces. Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation. Then simply because $T$ is a function, the subspace $V$ is an invariant subspace of $T\text{.}$ In only a minor twist on this theme, the range of $T\text{,}$ $\rng{T}\text{,}$ is an invariant subspace of $T$ by Definition RLT. Finally, Theorem LTTZZ provides the justification for claiming that $\set{\zerovector}$ is an invariant subspace of $T\text{.}$
That the trivial subspace is always an invariant subspace is a special case of the next theorem. But first, work the following straightforward exercise before reading the next theorem. We'll wait.
Prove that the kernel of a linear transformation (Definition KLT), $\krn{T}\text{,}$ is an invariant subspace.
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation and $k$ is a non-negative integer. Then $\krn{T^k}$ is an invariant subspace of $V$ relative to $T\text{.}$ To see this, suppose that $\vect{z}\in\krn{T^k}\text{.}$ Then
\begin{equation*} \lteval{T^k}{\lteval{T}{\vect{z}}}=\lteval{T^{k+1}}{\vect{z}}=\lteval{T}{\lteval{T^k}{\vect{z}}}=\lteval{T}{\zerovector}=\zerovector. \end{equation*}
So by Definition KLT, we see that $\lteval{T}{\vect{z}}\in\krn{T^k}\text{.}$ Thus $\krn{T^k}$ is an invariant subspace of $V$ relative to $T$ by Definition 1.4.1.
Two special cases of Theorem 1.4.8 occur when we choose $k=0$ and $k=1\text{.}$ The next example is unusual, but a good illustration.
Consider the $10\times 10$ matrix $A$ below as defining a linear transformation $\ltdefn{T}{\complex{10}}{\complex{10}}\text{.}$ We also provide a Sage version of the matrix for use online.
\begin{equation*} A=\begin{bmatrix} -1 & -9 & -24 & -16 & -40 & -36 & 72 & 66 & 21 & 59 \\ 19 & 4 & 7 & -18 & 2 & -12 & 67 & 69 & 16 & -35 \\ -1 & 1 & 2 & 5 & 5 & 6 & -17 & -17 & -5 & -8 \\ 2 & -2 & -7 & -8 & -13 & -13 & 32 & 28 & 11 & 14 \\ -11 & -2 & -1 & 12 & 6 & 11 & -50 & -44 & -16 & 13 \\ 4 & -1 & -5 & -14 & -16 & -16 & 55 & 43 & 24 & 17 \\ -14 & 1 & 7 & 20 & 19 & 26 & -82 & -79 & -21 & -1 \\ 12 & 0 & -4 & -17 & -14 & -20 & 68 & 64 & 19 & -2 \\ 10 & -2 & -9 & -16 & -20 & -24 & 68 & 65 & 17 & 9 \\ -1 & -2 & -5 & -3 & -8 & -7 & 13 & 12 & 4 & 14 \end{bmatrix} \end{equation*}
The matrix $A$ has rank $9$ and so $T$ has a nontrivial kernel. But it gets better. $T$ has been constructed specially so that the nullity of $T^k$ is exactly $k\text{,}$ for $0\leq k\leq 10\text{.}$ This is an extremely unusual situation, but is a good illustration of a very general theorem about kernels of powers of linear transformations, coming next. We compute the invariant subspace $\krn{T^5}\text{;}$ you can practice with others.
We work with the matrix, recalling that null spaces and column spaces of matrices correspond to kernels and ranges of linear transformations once we understand matrix representations of linear transformations (Section MR).
\begin{align*} A^5&=\begin{bmatrix} 37 & 24 & 65 & -35 & 77 & 32 & 80 & 98 & 23 & -125 \\ 19 & 11 & 37 & -21 & 46 & 19 & 29 & 49 & 6 & -70 \\ -7 & -5 & -15 & 8 & -18 & -8 & -15 & -19 & -6 & 26 \\ 14 & 9 & 27 & -15 & 33 & 14 & 26 & 37 & 7 & -50 \\ -8 & -7 & -25 & 14 & -33 & -16 & -10 & -23 & -5 & 37 \\ 12 & 11 & 35 & -19 & 45 & 22 & 22 & 35 & 11 & -52 \\ -27 & -18 & -56 & 31 & -69 & -30 & -49 & -72 & -15 & 100 \\ 20 & 14 & 45 & -25 & 56 & 25 & 35 & 54 & 12 & -77 \\ 24 & 16 & 49 & -27 & 60 & 26 & 45 & 64 & 14 & -88 \\ 8 & 5 & 13 & -7 & 15 & 6 & 18 & 21 & 5 & -26 \end{bmatrix}\\ \krn{T^5}&=\nsp{A^5}=\spn{\set{ \colvector{1\\-1\\0\\0\\-1\\2\\0\\0\\0\\0},\, \colvector{-1\\-3\\-3\\-2\\2\\0\\1\\0\\0\\0},\, \colvector{-2\\-1\\0\\0\\0\\0\\0\\1\\0\\0},\, \colvector{1\\-1\\-4\\-2\\2\\0\\0\\0\\1\\0},\, \colvector{5\\-3\\1\\1\\1\\0\\0\\0\\0\\2} }} \end{align*}
As an illustration of $\krn{T^5}$ as a subspace invariant under $T\text{,}$ we form a linear combination of the five basis vectors (named $\vect{z}_i\text{,}$ in order), which will then be an element of the invariant subspace. We apply $T\text{,}$ so the output should again be in the subspace, which we verify by giving an expression for the output as a linear combination of the basis vectors for the subspace.
\begin{align*} \vect{z}&=3\vect{z}_1 - \vect{z}_2 + 3\vect{z}_3 + 2\vect{z}_4 - 2\vect{z}_5=\colvector{-10\\1\\-9\\-6\\-3\\6\\-1\\3\\2\\-4}\\ \lteval{T}{\vect{z}}&=A\vect{z}=\colvector{149\\93\\-28\\68\\-73\\94\\-136\\110\\116\\28}\\ &=47\vect{z}_1 - 136\vect{z}_2 + 110\vect{z}_3 +116\vect{z}_4 +14\vect{z}_5 \end{align*}
Reprise Example 1.4.9 using the same linear transformation. Use a different power (not $k=0,1,5,9,10$ on your first attempt), form a vector in the kernel of your chosen power, then apply $T$ to it. Your output should be in the kernel. (Check this with Sage by using the Python in operator.) Thus, you should be able to write the output as a linear combination of the basis vectors. Rinse, repeat.
### Subsection1.4.2Restrictions of Linear Transformations
A primary reason for our interest in invariant subspaces is they provide us with another method for creating new linear transformations from old ones.
###### Definition1.4.11.Linear Transformation Restriction.
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation, and $U$ is an invariant subspace of $V$ relative to $T\text{.}$ Define the restriction of $T$ to $U$ by
\begin{align*} &\ltdefn{\restrict{T}{U}}{U}{U} &\lteval{\restrict{T}{U}}{\vect{u}}&=\lteval{T}{\vect{u}} \end{align*}
It might appear that this definition has not accomplished anything, as $\restrict{T}{U}$ would appear to take on exactly the same values as $T\text{.}$ And this is true. However, $\restrict{T}{U}$ differs from $T$ in the choice of domain and codomain. We tend to give little attention to the domain and codomain of functions, while their defining rules get the spotlight. But the restriction of a linear transformation is all about the choice of domain and codomain. We are restricting the rule of the function to a smaller subspace. Notice the importance of only using this construction with an invariant subspace, since otherwise we cannot be assured that the outputs of the function are even contained in the codomain. This observation should be the key step in the proof of a theorem saying that $\restrict{T}{U}$ is also a linear transformation, but leave that as an exercise.
In Checkpoint 1.4.4 you verified that the subspaces $U, V\subseteq\complex{5}$ are invariant with respect to the linear transformation $\ltdefn{R}{\complex{5}}{\complex{5}}$ defined by $\lteval{R}{\vect{x}}=B\vect{x}\text{.}$
\begin{equation*} B=\begin{bmatrix} 4 & 47 & 3 & -46 & 20 \\ 10 & 61 & 8 & -56 & 10 \\ -10 & -69 & -7 & 67 & -20 \\ 11 & 70 & 9 & -64 & 12 \\ 3 & 19 & 3 & -16 & 1\end{bmatrix} \end{equation*}
\begin{align*} U&=\spn{\set{ \colvector{1\\0\\-1\\0\\0},\, \colvector{0\\1\\-1\\1\\0} }} & V&=\spn{\set{ \colvector{1\\1\\-1\\1\\0},\, \colvector{3\\3\\-2\\4\\2},\, \colvector{-2\\3\\-2\\3\\1} }} \end{align*}
It is a simple matter to define two new linear transformations, $\restrict{R}{U}, \restrict{R}{V}\text{,}$
\begin{align*} &\ltdefn{\restrict{R}{U}}{U}{U} & \lteval{\restrict{R}{U}}{\vect{x}}&=B\vect{x}\\ &\ltdefn{\restrict{R}{V}}{V}{V} & \lteval{\restrict{R}{V}}{\vect{x}}&=B\vect{x} \end{align*}
It should not look like we have accomplished much. Worse, it might appear that $R=\restrict{R}{U}=\restrict{R}{V}$ since each is described by the same rule for computing the image of an input. The difference is that the domains are all different: $\complex{5},U,V\text{.}$ Since $U$ and $V$ are invariant subspaces, we can then use these subspaces for the codomains of the restrictions.
We will frequently need the matrix representations of linear transformation restrictions, so let's compute those now for this example. Let
\begin{align*} C&=\set{\vect{u}_1,\,\vect{u}_2} & D&=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3} \end{align*}
be the bases for $U$ and $V\text{,}$ respectively.
For $U$
\begin{align*} \vectrep{C}{\lteval{\restrict{R}{U}}{\vect{u}_1}} &=\vectrep{C}{\colvector{1\\2\\-3\\2\\0}} =\vectrep{C}{1\vect{u}_1+2\vect{u}_2} =\colvector{1\\2}\\ \vectrep{C}{\lteval{\restrict{R}{U}}{\vect{u}_2}} &=\vectrep{C}{\colvector{-2\\-3\\5\\-3\\0}} =\vectrep{C}{-2\vect{u}_1-3\vect{u}_2} =\colvector{-2\\-3} \end{align*}
Applying Definition MR, we have
\begin{equation*} \matrixrep{\restrict{R}{U}}{C}{C}= \begin{bmatrix} 1 & -2\\ 2 & -3 \end{bmatrix} \end{equation*}
For $V$
\begin{align*} \vectrep{D}{\lteval{\restrict{R}{V}}{\vect{v}_1}} &=\vectrep{D}{\colvector{2\\7\\-5\\8\\3}} =\vectrep{D}{\vect{v}_1+\vect{v}_2+\vect{v}_3} =\colvector{1\\1\\1}\\ \vectrep{D}{\lteval{\restrict{R}{V}}{\vect{v}_2}} &=\vectrep{D}{\colvector{3\\-7\\5\\-7\\-2}} =\vectrep{D}{-\vect{v}_1-2\vect{v}_3} =\colvector{-1\\0\\-2}\\ \vectrep{D}{\lteval{\restrict{R}{V}}{\vect{v}_3}} &=\vectrep{D}{\colvector{9\\-11\\8\\-10\\-2}} =\vectrep{D}{-2\vect{v}_1+\vect{v}_2-4\vect{v}_3} =\colvector{-2\\1\\-4} \end{align*}
Applying Definition MR, we have
\begin{equation*} \matrixrep{\restrict{R}{V}}{D}{D}= \begin{bmatrix} 1 & -1 & -2\\ 1 & 0 & 1\\ 1 & -2 & -4 \end{bmatrix} \end{equation*}
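Each coordinate computation above can be double-checked mechanically. The following plain-Python sketch (not part of the text) applies $B$ to each basis vector and confirms the linear combinations whose coefficients form the columns of the two representation matrices:

```python
B = [[  4,  47,  3, -46,  20],
     [ 10,  61,  8, -56,  10],
     [-10, -69, -7,  67, -20],
     [ 11,  70,  9, -64,  12],
     [  3,  19,  3, -16,   1]]

def apply(M, v):
    # Matrix-vector product with plain lists.
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

def comb(coeffs, vecs):
    # Linear combination sum(c * v) with plain lists.
    return [sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(len(vecs[0]))]

u1, u2 = [1, 0, -1, 0, 0], [0, 1, -1, 1, 0]
v1, v2, v3 = [1, 1, -1, 1, 0], [3, 3, -2, 4, 2], [-2, 3, -2, 3, 1]

# Columns of the representation of R|U relative to C:
assert apply(B, u1) == comb([1, 2], [u1, u2])
assert apply(B, u2) == comb([-2, -3], [u1, u2])

# Columns of the representation of R|V relative to D:
assert apply(B, v1) == comb([1, 1, 1], [v1, v2, v3])
assert apply(B, v2) == comb([-1, 0, -2], [v1, v2, v3])
assert apply(B, v3) == comb([-2, 1, -4], [v1, v2, v3])
print("restriction representations verified")
```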
It is no accident that these two square matrices are the diagonal blocks of the matrix representation you built for $R$ relative to the basis $C\cup D$ in Checkpoint 1.4.4.
The key observation of the previous example is worth stating very clearly: A basis derived from a direct sum decomposition into subspaces that are invariant relative to a linear transformation will provide a matrix representation of the linear transformation with a block diagonal form.
Diagonalizing a linear transformation is the most extreme example of decomposing a vector space into invariant subspaces. When a linear transformation is diagonalizable, then there is a basis composed of eigenvectors (Theorem DC). Each of these basis vectors can be used individually as the lone element of a basis for an invariant subspace (Theorem EIS). So the domain decomposes into a direct sum of one-dimensional invariant subspaces via Theorem 1.2.3. The corresponding matrix representation is then block diagonal with all the blocks of size 1, i.e. the matrix is diagonal. Section (((section-jordan-canonical-form))) is devoted to generalizing this extreme situation when there are not enough eigenvectors available to make such a complete decomposition and arrive at such an elegant matrix representation.
You can find more examples of invariant subspaces, linear transformation restrictions and matrix representations in Section 3.1, (((section-nilpotent-linear-transformations))), and (((section-jordan-canonical-form))).
# 10/3 Special Lunch talk: The Cosmological Equation of State
October 10th, 2013
## Dr. Sergey Postnikov Nuclear Theory Center, Indiana University
CAS 500
12:30 pm
October 10, 2013
Abstract:
We study the dark energy equation of state as a function of redshift $z$ in a non-parametric way, without imposing any {\it a priori} $w(z)$ functional form. As a check of the method, we test our scheme on synthetic data sets produced from a known cosmological model, having the same relative errors and redshift distribution as the real data. Through the proposed luminosity-time $L_{X}-T_{a}$ correlation for GRB X-ray afterglows (Dainotti et al.), we are able to utilize the GRB sample from the {\it Swift} satellite as probes of the expansion history of the Universe out to $z \sim 10$. Under the assumption of a flat FLRW universe and a combination of SNeIa data with BAO, the relatively probable solutions are close to the constant $w=-1$ solution. If one imposes the restriction of a constant $w$, we obtain $w=-0.99 \pm 0.06$ (consistent with a cosmological constant), with present-day parameters $H_{0}=70.0 \pm 0.6 \, km \, s^{-1}\, Mpc^{-1}$ and $\Omega_{\Lambda 0}=0.723 \pm 0.025$. The non-parametric solutions give a probability map centered at $H_{0}=70.04 \pm 1 \, km \, s^{-1}\, Mpc^{-1}$ and $\Omega_{\Lambda 0}=0.724 \pm 0.03$. Our GRB sample also allows us to estimate the amount and quality of data needed to constrain $w(z)$ in a range that extends an order of magnitude in redshift beyond SNeIa.
# GATK VCF PL field ordering for pooled/polyploid samples
Posts: 3Member
Hi,
I am using GATK UnifiedGenotyper 2.3-9 for a set of pooled samples. I am uncertain about the order of PL values in polyploid samples, as it wasn't defined in the VCF v4.1 specification. The ordering formula described in VCF v4.1, F(j/k) = (k*(k+1)/2)+j, only applies to the diploid case. May I know how GATK extended the ordering formula to handle polyploid samples?
Example VCF line:
Thank you very much!
Best regards,
Allen
• Posts: 71GATK Developer mod
This is a bit complicated, but it's a generalization of the formula you described above. Let Ploidy = P, # of alleles (including reference) = N.
First, let Ne(N,P) = the number of possible allele conformations given N and P in a sample, which is equivalent to the number of non-negative integer vectors of length N that add exactly to P. For example, Ne(2,2) = 3 (the diploid biallelic case), because you can have [2,0], [1,1] or [0,2] as possible allele conformations at a site.
Ne can be computed recursively as:
Ne(N,P) = sum_{k=0}^{k<=P} Ne(N-1,P-k)
with boundary conditions Ne(1,P) = 1 and Ne(N,1) = N.
Then, to compute the index in the PL vector for a particular allele conformation stored in a vector called vectorIdx, the following pseudo-code can be used:
int linearIdx = 0;
int cumSum = ploidy;
for (int k=numAlleles-1;k>=1; k--) {
int idx = vectorIdx[k];
if (idx == 0)
continue;
for (int p=0; p < idx; p++)
linearIdx += Ne( k, cumSum-p);
cumSum -= idx;
}
For the simple case where you only have 2 alleles (ref and alt), this collapses to the following ordering in the PL vector:
[P,0]
[P-1,1]
...
[1,P-1]
[0,P]
You can see this intuitively in the genotype that you pasted. GT = 0/0/1/1/1/1/1/1 means that the most likely genotype in the pool has 2 ref chromosomes and 6 alt chromosomes; in the PL vector this conformation has the value 0, marking it as the most likely one.
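In case it helps others, the pseudo-code collapses to a small self-contained Python sketch (my own transcription, not official GATK code). Here conformation[k] is the count of allele k in the genotype, which is what vectorIdx stores:

```python
def ne(n, p):
    # Number of allele conformations: non-negative integer vectors
    # of length n whose entries add exactly to p.
    if n == 1:
        return 1
    return sum(ne(n - 1, p - k) for k in range(p + 1))

def pl_index(conformation):
    # Linear index into the PL vector for one allele-count vector,
    # following the generalized ordering described above.
    ploidy = sum(conformation)
    linear_idx = 0
    cum_sum = ploidy
    for k in range(len(conformation) - 1, 0, -1):
        idx = conformation[k]
        if idx == 0:
            continue
        for p in range(idx):
            linear_idx += ne(k, cum_sum - p)
        cum_sum -= idx
    return linear_idx

# Diploid biallelic case reproduces the VCF v4.1 ordering 0/0, 0/1, 1/1:
print([pl_index(c) for c in ([2, 0], [1, 1], [0, 2])])  # [0, 1, 2]

# Ploidy 8, two alleles: GT 0/0/1/1/1/1/1/1 is conformation [2, 6],
# which lands at position 6 in the ordering [P,0], [P-1,1], ..., [0,P].
print(pl_index([2, 6]))  # 6
```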
• Posts: 3Member
Hi delangel,
I don't quite understand the definition of vectorIdx in your pseudo-code. Using your Ne(2,2) example, the pseudo-code can be collapsed to:
int linearIdx = 0;
int cumSum = 2;
int idx = vectorIdx[1];
if (idx == 0)
continue;
for (int p=0; p < idx; p++)
linearIdx += Ne( k, cumSum-p);
cumSum -= idx;
From your example, [2,0], [1,1] and [0,2] are the possible allele conformations stored in vectorIdx. However k can only be 1 when numAlleles is 2, thus corresponding to [1,1]. Could you explain a bit more on how to derive idx from vectorIdx?
Best regards,
Allen
• Posts: 3Member
Thanks again for this walkthrough.
I thought vectorIdx stores all possible allele conformations, and instead it only stores the target allele conformation.
I think this generalized ordering formula should be incorporated into future versions of the VCF standard. Kudos to the GATK team!
## Keigh2015 one year ago Choose the correct simplification of the expression "3 times b all over a to the power of negative 2", $$\frac{3b}{a^{-2}}$$. The answer choices are: (a) $$3a^2b$$; (b) $$\frac{a^2}{3b}$$; (c) $$\frac{3a^2}{b}$$; (d) already simplified.
2. anonymous
bring the $$a$$ upstairs and make the exponent positive $\frac{1}{a^{-2}}=a^2$
4. Keigh2015
so c?
5. Keigh2015
@misssunshinexxoxo
6. misssunshinexxoxo
7. Keigh2015
Ok thank you it is a correct?
8. misssunshinexxoxo
Yes
9. Keigh2015
thank you so much @misssunshinexxoxo
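For the record, a quick numeric spot-check in Python of the simplification $$\frac{3b}{a^{-2}}=3a^2b$$ (the sample values are arbitrary):

```python
# Negative exponent moved "upstairs": 3b / a**-2 equals 3 * a**2 * b.
for a, b in [(1.7, -2.3), (0.5, 4.0), (3.0, 7.0)]:
    assert abs(3 * b / a**-2 - 3 * a**2 * b) < 1e-9
print("simplification checks out")
```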
# What is the probability that an employee is suffering from anxiety disorder?
In a company $$7$$ percent of the employees are suffering from anxiety disorder, $$70$$ percent of people who suffer from anxiety disorder don't show up one day at work and $$10$$ percent of people who don't have anxiety disorder don't show up one day at work.
A doctor justifies the absence from work of $$5$$ percent of people suffering from anxiety and of $$30$$ percent of people that don't have anxiety disorder.
An employee who skipped work was justified by a doctor. What is the probability that he was suffering from anxiety disorder?
My attempt: let $$A$$ be the event of suffering from anxiety disorder, let $$B^c$$ be the event of not showing up at work one day, and let $$C$$ be the event that a doctor justifies one's absence.
So I have $$P(A)=7/100$$ $$P(B^c|A)=70/100$$ $$P(B^c | A^c)=10/100$$ $$P(C|A)=5/100$$ $$P(C|A^c)= 30/100$$
So I am looking for $$P(A|B^c \cap C)= \frac{P (A \cap B^c \cap C)}{P(B^c \cap C)}$$
I am stuck at this point since I can't find a useful formula to compute $$\frac{P (A \cap B^c \cap C)}{P(B^c \cap C)}$$, any suggestions?
The problem is that it is not clear how $$B$$ and $$C$$ depend on each other. I think that the doctor's justification is understood to be $$\textit{after}$$ the absence of the employee (so $$P(C\cap B)=0$$), so the last two probabilities should be $$P(C|A\cap B^c)=0.05$$ and $$P(C|A^c\cap B^c)=0.3$$. Under this assumption we have
$$P(A\cap B^c \cap C)=P(C|A\cap B^c)P(A\cap B^c)=P(C|A\cap B^c)P(B^c|A)P(A)$$
and
$$P(B^c\cap C)=P(C|B^c)P(B^c)=[P(C|B^c\cap A)P(A|B^c)+P(C|B^c\cap A^c)P(A^c|B^c)]\, P(B^c)$$,
and the probabilities on the right handside can be computed from the information in the text.
• Did you mean to write $P(C|A^c \cap B^c)=0.3$?
– user746545
May 25, 2021 at 8:33
• Yes, of course. I editted the answer.
– ym94
May 25, 2021 at 9:55
In these cases, a tabular approach will get you the solution in a very easy way
These are your data in tabular form, as percentages of all employees: $$P(A\cap B^c)=7\%\times 70\%=4.9\%$$ (anxiety and absent) and $$P(A^c\cap B^c)=93\%\times 10\%=9.3\%$$ (no anxiety and absent).
Now the calculation of your probability is very natural:
$$\mathbb{P}[\text{Anxiety }|\text{Justified}]=\frac{0.05\times4.9}{0.05\times4.9+0.3\times9.3}=8.07\%$$
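Under the same interpretation (justification only happens after an absence), the computation can be sanity-checked numerically with a short Python sketch:

```python
# Posterior P(anxiety | absent and justified), assuming C occurs only after
# an absence, i.e. the conditional justification rates apply given B^c.
p_a = 0.07             # P(A): anxiety disorder
p_absent_a = 0.70      # P(B^c | A)
p_absent_not_a = 0.10  # P(B^c | A^c)
p_just_a = 0.05        # P(C | A, B^c)
p_just_not_a = 0.30    # P(C | A^c, B^c)

num = p_a * p_absent_a * p_just_a
den = num + (1 - p_a) * p_absent_not_a * p_just_not_a
posterior = num / den
print(round(100 * posterior, 2))  # 8.07 (percent)
```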
Question:
Draw a free body diagram of the system, include dimensions for each force
The porch has a fixed support at the wall.
Example: 1) Find the sign bit by xor-ing sign bit of A and B The special values such as infinity and NaN ensure that the floating-point arithmetic is algebraically completed, such that every floating-point operation produces a well-defined result and will not—by default—throw a machine interrupt or trap. Floating-Point Arithmetic Floating-point arithmetic is the hardware analogue of scienti c notation. 127 is the unique number for 32 bit floating point representation. This tutorial will demonstrate two rules that must be respected when performing floating point arithmetic in C. Following these rules will prevent loss of information. To summarize, in his module we have discussed the need for floating point numbers, the IEEE standard for representing floating point numbers, Floating point addition / subtraction, multiplication, division and the various rounding methods. The objectives of this module are to discuss the need for floating point numbers, the standard representation used for floating point numbers and discuss how the various floating point arithmetic operations of addition, subtraction, multiplication and division are carried out. For example, the decimal fraction. In C++ programming language the size of a float is 32 bits. FLOATING POINT ARITHMETIC FLOATING POINT ARITHMETIC In computers, floating-point numbers are represented in scientific notation of fraction (F) and exponent (E) with a radix (base) of 2, in the form: N = F x 2 e = 1.m x 2 e. Both E and F can be positive as well as negative. always add true exponents (otherwise the bias gets added in twice), do unsigned division on the mantissas (don’t forget the hidden bit). The floating point numbers are pulled from a file as a string. By convention, you generally go in for a normalized representation, wherein the floating-point is placed to the right of the first nonzero (significant) digit. 
Therefore, you will have to look at floating-point representations, where the binary point is assumed to be floating. IEEE 754-1985 Standard for Binary Floating-Point Arithmetic IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic IEEE 754-2008 Standard for Floating-Point Arithmetic This is the current standard It is also an ISO standard (ISO/IEC/IEEE 60559:2011) c 2017 Je rey M. Arnold Floating-Point Arithmetic and Computation 10 compare magnitudes (don’t forget the hidden bit!). Its floating point representation rounded to 5 decimal places is 0.66667. Floating-point arithmetic is considered an esoteric subject by many people. The IEEE double precision floating point standard representation requires a 64-bit word, which may be represented as numbered from 0 to 63, left to right. An operation can be mathematically undefined, such as ∞/∞, or, An operation can be legal in principle, but not supported by the specific format, for example, calculating the. A basic understanding of oating-point arithmetic is essential when solving problems numerically because certain things happen in a oating-point environment that might surprise you otherwise. The organization of a floating point adder unit and the algorithm is given below. The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. A precisely specified behavior for the arithmetic operations: A result is required to be produced as if infinitely precise arithmetic were used to yield a value that is then rounded according to specific rules. The IEEE single precision floating point standard representation requires a 32 bit word, which may be represented as numbered from 0 to 31, left to right. Also, five types of floating-point exception are identified: Invalid. Introduction. D. 
Leykekhman - MATH 3795 Introduction to Computational MathematicsFloating Point Arithmetic { 1. The floating point representation is more flexible. Floating-Point Arithmetic. 1st Rule: If an arithmetic operator has integer operands then integer operation is performed. 3E-5. 0 10000000 10010010000111111011011 (excluding the hidden bit) = 40490FDB, (+∞) × 0 = NaN – there is no meaningful thing to do. The following are floating-point numbers: 3.0-111.5. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2. … Examples with walk through explanation provided. •Sometimes called fixed point arithmetic CIS371 (Roth/Martin): Floating Point 6 The Fixed Width Dilemma •“Natural” arithmetic has infinite width ... CIS371 (Roth/Martin): Floating Point 11 Some Examples •What is 5 in floating point? Loading... Unsubscribe from Ally Learn? This is a source of bugs in many programs. A real number (that is, a number that can contain a fractional part). Representation of Real Numbers. The single and double precision formats were designed to be easy to sort without using floating-point hardware. Arithmetic operations on floating point numbers consist of addition, subtraction, multiplication and division. Floating-point arithmetic is by far the most widely used way of implementing real-number arithmetic on modern computers. Conversions to integer are not intuitive: converting (63.0/9.0) to integer yields 7, but converting (0.63/0.09) may yield 6. Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. Numerical implementation of a decimal number is a float point number. Therefore, Eâ is in the range 0 £ Eâ £ 255. 
Floating-Point Reference Sheet for Intel® Architecture. Then f l ( 77 ) = 7.7 × 10 {\displaystyle fl(77)=7.7\times 10} and f l ( 88 ) = 8.8 × 10 {\displaystyle fl(88)=8.8\times 10} . Floating-point arithmetic We often incur floating -point programming. Calculations involving floating point values often produce results that are not what you expect. A floating-point format is a data structure specifying the fields that comprise a floating-point numeral, the layout of those fields, and their arithmetic interpretation. 14.1 The Mathematics of Floating Point Arithmetic A big problem with floating point arithmetic is that it does not follow the standard rules of algebra. Floating Point Arithmetic Imprecision: In computing, floating-point arithmetic is arithmetic using formulaic representation of real numbers as an approximation so … This makes it possible to accurately and efficiently transfer floating-point numbers from one computer to another (after accounting for. 4) Consider the number 2/3. The floating point numbers are pulled from a file as a string. 3E-5. The floating point arithmetic operations discussed above may produce a result with more digits than can be represented in 1.M. However, the subnormal representation is useful in filing gaps of floating point scale near zero. If Eâ = 0 and F is nonzero, then V = (-1)**S * 2 ** (-126) * (0.F). This is rather surprising because floating-point is ubiquitous in computer systems. Floating point arithmetic - Definition and Example Ally Learn. In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision.For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. Demonstrates the addition of 0.6 and 0.1 in single-precision floating point number format. 
Floating Point Arithmetic • Floating point arithmetic differs from integer arithmetic in that exponents are handled as well as the significands • For addition and subtraction, exponents of operands must be equal • Significands are then added/subtracted, and then result is normalized • Example… The IEEE floating-point arithmetic standard is the format for floating point numbers used in almost all computers. A floating-point format is a data structure specifying the fields that comprise a floating-point numeral, the layout of those fields, and their arithmetic interpretation. In computers real numbers are represented in floating point format. The first bit is the sign bit, S, the next eleven bits are the excess-1023 exponent bits, Eâ, and the final 52 bits are the fraction ‘F’: S EâEâEâEâEâEâEâEâEâEâEâ, FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF, 0 1                                                    11 12. Moreover, the choices of special values returned in exceptional cases were designed to give the correct answer in many cases, e.g. This paper presents a tutorial on th… value given in binary: .25 =   0 01111101 00000000000000000000000,  100 =   0 10000101 10010000000000000000000, shifting the mantissa left by 1 bit decreases the exponent by 1, shifting the mantissa right by 1 bit increases the exponent by 1, we want to shift the mantissa right, because the bits that fall off the end should come from the least significant end of the mantissa. In everyday life we use decimal representation of numbers. It … If the numbers are of opposite sign, must do subtraction. – Floating point greatly simplifies working with large (e.g., 2 70) and small (e.g., 2-17) numbers We’ll focus on the IEEE 754 standard for floating-point arithmetic. If 0 < Eâ< 2047 then V = (-1)**S * 2 ** (E-1023) * (1.F) where “1.F” is intended to represent the binary number created by prefixing F with an implicit leading 1 and a binary point. 
The IEEE (Institute of Electrical and Electronics Engineers) produced this standard for floating-point arithmetic. The base need not be specified explicitly; the sign, the significant digits, and the signed exponent constitute the representation. The remaining encodings of the double-precision format are reserved for special values:

- If E' = 0 and F is zero and S is 1, then V = -0; if S is 0, then V = 0.
- If E' = 2047 and F is nonzero, then V = NaN ("Not a Number").
- If E' = 2047 and F is zero and S is 1, then V = -Infinity; if S is 0, then V = Infinity.

The choices of special values returned in exceptional cases were designed to give the correct answer in many cases. For example, continued fractions such as R(z) := 7 - 3/[z - 2 - 1/(z - 7 + 10/[z - 2 - 2/(z - 3)])] give the correct answer on all inputs under IEEE 754 arithmetic, because the potential division by zero is handled as +infinity and can then be safely ignored; R(3) = 4.6 is computed correctly. The standard also means that a compliant computer program always produces the same result when given a particular input, mitigating the almost mystical reputation that floating-point computation had developed for its hitherto seemingly non-deterministic behavior. Even so, two computational sequences that are mathematically equal may well produce different floating-point values, so if you perform a floating-point calculation and then compare the result against some exact expected value, it is unlikely that you get the intended result.

The 32-bit single-precision format is laid out analogously: a sign bit S, eight excess-127 exponent bits (the value stored is the unsigned integer E' = E + 127, so E' is in the range 0 <= E' <= 255), and 23 fraction bits. Arithmetic operations on floating-point numbers consist of addition, subtraction, multiplication, and division. The operations are done with algorithms similar to those used on sign-magnitude integers (because of the similarity of representation): for example, only numbers of the same sign are added directly; if the numbers are of opposite sign, a subtraction must be done instead, and the sign bit changes if the order of the operands is changed. In the 8085 microprocessor, floating-point operations are performed using the Floating Point Arithmetic Library (FPAL).

Floating-point addition example, adding 100 and 0.25 in single precision:

.25 = 0 01111101 00000000000000000000000
100 = 0 10000101 10010000000000000000000

Step 1: align the radix points. Shifting the mantissa left by 1 bit decreases the exponent by 1; shifting it right by 1 bit increases the exponent by 1. We shift the mantissa of .25 right, because the bits that fall off the end should come from the least significant end of the mantissa:

0 01111101 00000000000000000000000 (original value)
0 01111110 10000000000000000000000 (shifted 1 place; note that the hidden bit is shifted into the msb of the mantissa)
0 01111111 01000000000000000000000 (shifted 2 places)
0 10000000 00100000000000000000000 (shifted 3 places)
0 10000001 00010000000000000000000 (shifted 4 places)
0 10000010 00001000000000000000000 (shifted 5 places)
0 10000011 00000100000000000000000 (shifted 6 places)
0 10000100 00000010000000000000000 (shifted 7 places)
0 10000101 00000001000000000000000 (shifted 8 places)

Step 2: add (don't forget the hidden bit of the 100):

  0 10000101 1.10010000000000000000000 (100)
+ 0 10000101 0.00000001000000000000000 (.25)

Step 3: normalize the result (get the "hidden bit" to be a 1). Multiplication proceeds the same as addition as far as alignment of radix points is concerned. A decimal example shows the precision lost during alignment: adding 9.999 × 10^1 and 0.016 × 10^1 (the smaller number having been shifted right by 2 to match exponents) gives SUM = 10.015 × 10^1; note that one digit of precision is lost during shifting.

The result of an operation may have more digits than can be represented in 1.M (a product may have twice as many digits as the multiplier and multiplicand), so it must be rounded to fit into the available number of bits. Possible policies include truncation and round-to-even (rounding to the even value, the one with an LSB of 0); rounding ties to even removes the statistical bias that can occur in adding similar figures. The guard and round bits are just 2 extra bits of precision that are used in calculations, and the sticky bit is an indication of what is or could be in the less significant bits that are not kept. Example: to convert -17 into 32-bit floating-point representation, the sign bit is 1, and the exponent is decided by the nearest power of two smaller than or equal to 17, namely 16 = 2^4, so the exponent of 2 is 4.
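To make the field layout concrete, here is a minimal Java sketch (illustrative only, not from any of the sources above) that unpacks the sign, exponent, and fraction fields of a single-precision value with `Float.floatToIntBits`:

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(1.0f);   // 1.0f encodes as 0x3f800000
        int sign = bits >>> 31;                  // 1 sign bit
        int exponent = (bits >>> 23) & 0xFF;     // 8 excess-127 exponent bits
        int fraction = bits & 0x7FFFFF;          // 23 fraction bits (hidden leading 1 not stored)
        // 1.0 = (-1)^0 * 2^(127-127) * 1.0, so sign=0, exponent=127, fraction=0
        System.out.printf("sign=%d exponent=%d fraction=%d%n", sign, exponent, fraction);
    }
}
```

The same unpacking works for doubles via `Double.doubleToLongBits` with an 11-bit exponent mask and a 52-bit fraction mask.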
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: most languages provide floating-point types and floating-point manipulation functions, and it is even possible to do floating-point arithmetic in Bash using the printf builtin command. A floating-point number is split into an exponent and a fraction, the latter also known as the significand or mantissa:

real number -> mantissa × base^exponent

Since the binary point can be moved to any position, with the exponent value adjusted appropriately, this is called a floating-point representation; any non-zero number can be written in the normalized form ±(1.b1b2b3...)_2 × 2^n. In the C++ programming language the size of a float is 32 bits, and the stored exponent is biased: an exponent field of 00000001 yields its power of two by subtracting the bias from the exponent field interpreted as a positive integer. Hexadecimal floating point (now called HFP by IBM) is a different format for encoding floating-point numbers, first introduced on the IBM System/360 computers and supported on subsequent machines based on that architecture, as well as machines intended to be application-compatible with System/360.

The binary encoding is the root of most everyday surprises. Just as the decimal fraction 0.125 has the value 1/10 + 2/100 + 5/1000, a binary fraction is a sum of powers of 1/2, and many decimal values have no finite binary expansion; adding 0.6 and 0.1 in single-precision floating point demonstrates the resulting imprecision. This is related to the finite precision with which computers represent numbers. Conversions from floating point to integer are another trap, because they generally truncate rather than round: converting 63.0/9.0 to an integer yields 7, but converting 0.63/0.09 may yield 6.

For results that match human decimal arithmetic, Python's decimal module provides support for fast, correctly rounded decimal floating-point arithmetic. It offers several advantages over the float datatype: Decimal "is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle: computers must provide an arithmetic that works in the same way as the arithmetic that people learn". Other specializations can be crafted using these as examples; different floating-point specializations are provided to customize the arithmetic appropriately for Intel x86, Arm, or RISC-V processors.

Floating point is not confined to scientific languages. A suite of sample programs provides an example of a COBOL program doing floating-point arithmetic and writing the information to a sequential file; the input file holds lines such as 1.5493482,3, from which the floating-point numbers are pulled as strings, and the program will run on an IBM mainframe or on a Windows or UNIX platform using Micro Focus. At the hardware level, a floating-point arithmetic unit is implemented by two loosely coupled fixed-point datapath units, one for the exponent and the other for the mantissa. The classic treatment of these issues is "What Every Computer Scientist Should Know About Floating-Point Arithmetic"; a gentler companion is "What Every Programmer Should Know About Floating-Point Arithmetic, or Why Don't My Numbers Add Up?".
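The pitfalls above are easy to reproduce. The following sketch (in Java, for illustration; `java.math.BigDecimal` plays the role of Python's decimal module mentioned above) shows that 0.1 + 0.2 is not exactly 0.3, that comparisons should use a tolerance, and that decimal arithmetic sidesteps the binary representation error:

```java
import java.math.BigDecimal;

public class FloatPitfalls {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum);                        // 0.30000000000000004, not 0.3
        System.out.println(sum == 0.3);                 // false: exact comparison fails
        System.out.println(Math.abs(sum - 0.3) < 1e-9); // true: compare with a tolerance

        // Decimal arithmetic avoids the binary representation error,
        // because "0.1" and "0.2" are exact decimal literals here.
        BigDecimal exact = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(exact);                      // 0.3
    }
}
```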
Floating-point expansions are one way to get greater precision while benefiting from the floating-point hardware: a number is represented as an unevaluated sum of several floating-point numbers. Where still greater precision is desired, floating-point arithmetic can be implemented (typically in software) with variable-length significands, and sometimes exponents, that are sized depending on actual need and on how the calculation proceeds.

A few further facts round out the picture. The gap between 1 and the next larger representable number is known as machine epsilon. Exceeding the exponent range produces an overflow yielding infinity or an underflow yielding zero or a subnormal value, and several distinct types of floating-point exception are identified, among them invalid operation and division by zero (the latter when the divisor is zero and the dividend is a nonzero number). The single- and double-precision formats were designed so that finite numbers of the same sign order the same way as their bit patterns, which makes it possible to sort floating-point data without using floating-point arithmetic and, together with the fixed layout, to transfer floating-point numbers accurately and efficiently from one computer to another (after accounting for byte order).

Multiplication of two floating-point numbers proceeds in three steps: (1) xor the sign bits, so (0 xor 0) = 0 for two positive operands; (2) multiply the mantissas, including the hidden bit (in single precision, the two 24-bit mantissas M1 and M2); (3) add the exponents and normalize, rounding the result to the available number of digits. Extra bits carried beyond the precision of the result, called guard bits, improve the precision of the result, and extended precision is an aid when checking error bounds, for instance in interval arithmetic; exactly rounded basic operations also enable high-precision multiword arithmetic subroutines.

Finally, two common sources of trouble in many programs: a type mismatch between the numbers used (for example, mixing float and double), and applying an arithmetic operator to integer operands, which performs an integer operation, so 2/3 evaluates to 0 even though 2/3 to 5 decimal places is 0.66667. And if you repeat the 0.6 + 0.1 experiment with 0.2, you will get the same problems, because 0.2 is not exactly representable in binary; as can be seen in such experiments, single-precision arithmetic distorts the result around the 6th fraction digit, whereas double-precision arithmetic preserves considerably more digits.
Binomial expansion
1. Jul 13, 2010
thereddevils
How is
$$1+p+\frac{p^2}{2!}+\frac{p^3}{3!}+...=e^p$$ ?
2. Jul 13, 2010
Mentallic
I know it has to do with Taylor expansions, but I've never studied this so I can't answer your question. I'd also like to see a proof of this, so this is a somewhat pointless post I'm making just so I can subscribe to this thread
3. Jul 13, 2010
l'Hôpital
Depends how you define e^p. You could just define it as the power series. However, I'm assuming you are using something like...
$$e^p = \lim_{n \rightarrow \infty} (1 + \frac{p}{n})^n$$
Try using the binomial theorem on the right side, then take the limit.
4. Jul 13, 2010
LCKurtz
You just show that the remainder upon approximating it with the first n terms goes to zero as n --> infinity. See, for example,
http://en.wikipedia.org/wiki/Taylor's_theorem
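For completeness, here is the sketch l'Hôpital hinted at (standard textbook material, not original to this thread). Expanding with the binomial theorem:

$$\left(1+\frac{p}{n}\right)^n=\sum_{k=0}^{n}\binom{n}{k}\frac{p^k}{n^k}=\sum_{k=0}^{n}\frac{n(n-1)\cdots(n-k+1)}{n^k}\cdot\frac{p^k}{k!}$$

For each fixed $k$ the factor $\frac{n(n-1)\cdots(n-k+1)}{n^k}\to 1$ as $n\to\infty$, so term by term the sum tends to

$$\sum_{k=0}^{\infty}\frac{p^k}{k!}=1+p+\frac{p^2}{2!}+\frac{p^3}{3!}+\cdots$$

which is therefore $e^p$ under the limit definition. (Interchanging the limit with the infinite sum needs justification, e.g. Tannery's theorem, or the remainder estimate LCKurtz mentions.)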
# Installing Latex on Android
I am trying to install latex on my Android device (no root). I didn't find an application to compile tex documents offline. By following instructions from this website (https://williewong.wordpress.com/2017/06/30/texlive-on-android/) I installed termux from play store and installed texlive using the command;
pkg install texlive
Process was successful. But I don't know if I am able to compile my documents now. Please help me. What should I do to compile tex documents offline?
(I know that there are many questions on this topic. But they are seriously outdated and the solutions provided don't work for me. That is why I opened this question.)
• Your question is basically: how do i use LaTeX? Which would be answered by: Read an introduction. – Johannes_B Jan 2 at 9:18
• After your installation, did any command like tex is found and return something ? – R. N Jan 2 at 10:07
• @R. N I don't remember actually. So I just entered the command again. Following is what I seen. upgraded, 0 newly installed, 0 to remove and 15 not upgraded. \$ – ACA Jan 2 at 12:40
• @Johannes_B Sorry I still don't understand – ACA Jan 2 at 12:43
• Why do you want to compile documents on an Android device? – Johannes_B Jan 2 at 12:47
If you have file named test.tex which contains :
\documentclass[10pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\begin{document}
toto
\end{document}
Then to compile it offline from your termux on android enter the following command in the folder (directory) containing your file:
pdflatex ./test.tex
Finally go into your directory (some new files should be present as the results of the compilation) and click on the PDF file to open it with your favourite PDF reader
• Thanks for the answer. I created local tex file document.tex using VerbTex. Moved the file to internal storage/downloads. The command didn't work for me. I don't understand the line " enter the following command in the folder..." What is meant by folder? – ACA Jan 2 at 16:24
• Please don't miss a single step from opening termux to opening document.pdf I have a very little knowledge about it. Please – ACA Jan 2 at 16:30
• @ACA folder or directory if you prefer: that means the command ls should return . .. ./test.tex. Then go into your directory (storage/downloads if you put the file there) and click on the PDF file to open it with your favourite PDF reader. Otherwise what do you mean by the command didn't work for me ? What was the prompt output ? – R. N Jan 2 at 17:14
• Is pdflatex on the PATH or whatever it's called there? Basically, what happens, if you call it? You should see something like This is pdfTeX, Version 3.14159265-2.6-1.40.20 (TeX Live 2019) (preloaded format=pdflatex) restricted \write18 enabled. ** and you'd be able to get out with Ctrl-D. – Oleg Lobachev Jan 2 at 17:50
• @ACA, I watched and commented there. – Sigur Jan 4 at 14:08
I never thought I would be able to answer this question (created by myself). It is simple to compile your LaTeX documents offline with your Android phone/tablet. I created two videos of what I did to work with LaTeX on my Android device. You may check them.
This includes following steps
• Install Termux from Google Play Store
• Give Termux the permission to access your storage from settings
• Run ’pkg install texlive-full’
• Run ’pkg install texlive-tlmgr’
• Create a .tex file using an editor like VerbTex (download it from the Play Store) if you don't already have a .tex file in your storage
• Run ’pdflatex’
• When the double-asterisk (**) prompt appears, enter the storage path to your .tex file
• Once the PDF is created you can open it with ’termux-open file.pdf --view’
If you are interested you can check these answers too
https://tex.stackexchange.com/a/522898/193507
https://tex.stackexchange.com/a/522823/193507
• @R. N May I know why? – ACA Jan 4 at 14:16
• I am a bit disturbed by the fact that (according to what I understand) you can not go to the directory of your abc.tex and compile there. So I wonder where is the abc.pdf produced and where are the TexLive installation ? perhaps you are missing one environment variable. Thus if you compile in your home folder where the TeXlive folder is also present everything is fine but if you compile somewhere else you get the previous error we discussed with files not found. – R. N Jan 4 at 14:20
• @R. N I used VerbTex to create a local .tex file. Then searched in files to find out where it is stored. From my experience I understand that where you store your tex file doesn't matter. You should know the correct path to reach the file. I don't know where my pdf files are stored. But after I could open them using above commands it was there in internal storage>Download. I don't know how or why. But it worked! That is what all I know. – ACA Jan 4 at 14:28
• If it is good for you it is good for me. It was just to avoid you some side effects. Enjoy LateX on your Android ! – R. N Jan 4 at 14:32
• @R. N Thank you. – ACA Jan 4 at 14:41
# Let A = {1,2,3,4}. Define Relations ${{R}_{1}},{{R}_{2}}\text{ and }{{R}_{3}}$ on $A\times A$ as:\begin{align} & {{R}_{1}}=\left\{ \left( 1,1 \right),\left( 2,2 \right),\left( 3,3 \right),\left( 4,4 \right),\left( 1,2 \right),\left( 2,1 \right) \right\} \\ & {{R}_{2}}=\left\{ \left( 1,1 \right),\left( 2,2 \right),\left( 3,3 \right),\left( 4,4 \right),\left( 1,2 \right),\left( 2,1 \right),\left( 1,3 \right),\left( 4,1 \right),\left( 1,4 \right) \right\} \\ \end{align}and ${{R}_{3}}=\left\{ \left( 1,1 \right),\left( 2,2 \right),\left( 3,3 \right),\left( 4,4 \right) \right\}$which of the relations ${{R}_{1}},{{R}_{2}}\text{ and }{{R}_{3}}$ define an equivalence relation on $A\times A$.
Last updated date: 22nd Mar 2023
Hint: An equivalence relation is a relation which is symmetric, reflexive, and transitive. Check which of the relations are symmetric, which are reflexive, and which are transitive. The relations falling in all three classes are equivalence relations.
[1] Reflexive: A relation R defined on $A\times A$ is said to be reflexive if $\forall a\in A,(a,a)\in R$.
Since (1,1),(2,2),(3,3) and (4,4)$\in {{R}_{1}},{{R}_{2}}\text{ and }{{R}_{3}}$ all of the relations ${{R}_{1}},{{R}_{2}}\text{ and }{{R}_{3}}$ are reflexive
[2] Symmetric: A relation R is to be symmetric if $\forall (a,b)\in R\Rightarrow (b,a)\in R$.
We have (1,3)$\in {{R}_{2}}$ but $\left( 3,1 \right)\notin {{R}_{2}}$. Hence ${{R}_{2}}$ is not symmetric.
However, ${{R}_{1}}\text{ and }{{R}_{3}}$ are symmetric.
[3] Transitive: A relation R is said to be transitive if $\forall \left( a,b \right)\in R$ and $\left( b,c \right)\in R\Rightarrow \left( a,c \right)\in R$
We have $\left( 2,1 \right)\in {{R}_{2}}$ and $\left( 1,3 \right)\in {{R}_{2}}$ but $\left( 2,3 \right)\notin {{R}_{2}}$. Hence ${{R}_{2}}$ is not transitive.
However, ${{R}_{1}}\text{ and }{{R}_{3}}$ are transitive.
Hence ${{R}_{1}}$ and ${{R}_{3}}$ form equivalence relations on $A\times A$.
Note:
[1] A relation from A to B is a subset of the Cartesian product $A\times B$.
[2] Functions are relations with special properties
[3] if ${{R}_{1}}$ and ${{R}_{2}}$ are equivalence relations on $A\times A$ then ${{R}_{1}}\bigcap {{R}_{2}}$ is also an equivalence relation on $A\times A$.
[4] Restriction of an equivalence relation is also an equivalence relation
[5] If a relation from A to B relates every element of A to a unique element of B, then the relation is known as a function; the set A is called the domain of the function and the set B its codomain. The set of elements in B to which the function maps elements of A is called the Range. It is therefore clear that Range $\subseteq$ Codomain.
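The three checks can also be verified mechanically. Below is a small Java sketch (my illustration, not part of the original solution) that tests reflexivity, symmetry, and transitivity for the three relations on A = {1,2,3,4}, encoding each pair (a,b) as the two-digit integer a*10 + b:

```java
import java.util.*;

public class EquivalenceCheck {
    // A pair (a, b) is encoded as a*10 + b, since the elements are single digits.
    static boolean reflexive(Set<Integer> r, int[] a) {
        for (int x : a) if (!r.contains(x * 10 + x)) return false;
        return true;
    }
    static boolean symmetric(Set<Integer> r) {
        for (int p : r) if (!r.contains((p % 10) * 10 + p / 10)) return false;
        return true;
    }
    static boolean transitive(Set<Integer> r) {
        for (int p : r) for (int q : r)            // p = (a,b), q = (c,d)
            if (p % 10 == q / 10                   // b == c ...
                    && !r.contains((p / 10) * 10 + q % 10)) // ... requires (a,d)
                return false;
        return true;
    }
    static boolean equivalence(Set<Integer> r, int[] a) {
        return reflexive(r, a) && symmetric(r) && transitive(r);
    }
    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        Set<Integer> r1 = new HashSet<>(Arrays.asList(11, 22, 33, 44, 12, 21));
        Set<Integer> r2 = new HashSet<>(Arrays.asList(11, 22, 33, 44, 12, 21, 13, 41, 14));
        Set<Integer> r3 = new HashSet<>(Arrays.asList(11, 22, 33, 44));
        System.out.println(equivalence(r1, a)); // true
        System.out.println(equivalence(r2, a)); // false: (1,3) is in R2 but (3,1) is not
        System.out.println(equivalence(r3, a)); // true
    }
}
```

R2 fails at the symmetry check, matching the argument above; the transitivity counterexample (2,1), (1,3) would also be caught.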
# Project Euler #70: Time Efficiency Improvement
The problem is to find the $n$ for which $\varphi(n)$ is a permutation of $n$ and $n/\varphi(n)$ is minimum. As we all know, that means (simple conclusions from the mathematics):
• $\varphi(n)$ should be a permutation of n.
• $n$ should have the least number of prime factors (2 cannot be one of the prime factors, and $n$ cannot be prime)
• The prime factors should be as large as possible. Or in other words, the number should be as large as possible.
So the problem reduces to finding all numbers which are products of 2 primes (if no such number is found, then find one with 3 prime factors, then 4, and so on; but such a situation never occurs)
I move back from 3137 as the first factor and 33..331 as the other (indices 378 and 239118 in the primes array). I multiply them, add the product to a list, then sort it and move in descending order to see which are permutations.
Even then it is taking a lot of time. Several minutes at least.
double ratio = 100;
List<Integer> l = new ArrayList<Integer>();
int[] primes = Helper.getPrimes(10000050);
for (int i = 378; i >= 1; i--) {
for (int j = 239118; j > i; j--) {
BigInteger num = BigInteger.valueOf(primes[i]).multiply(BigInteger.valueOf(primes[j]));
if (num.compareTo(BigInteger.TEN.pow(7)) < 1) {
int x = num.intValue();
l.add(x);
System.out.println(primes[i] + " * " + primes[j] + " = " + x);
}
}
}
Collections.sort(l);
for (int i = l.size() - 1; i >= 0; i--) {
int x = l.get(i);
int y = EulerTotient(x);
boolean isPerm = isPermutation(x, y);
if (isPerm) {
if (x < y * ratio) {
ratio = (double) x / (double) y;
System.out.println(x);
}
}
}
return 0;
isPermutation:
private static boolean isPermutation(int x, int b) {
if (String.valueOf(x).length() != String.valueOf(b).length()) {
return false;
} else {
int[] count = new int[10];
do {
++count[x % 10];
--count[b % 10];
x /= 10;
b /= 10;
} while (x != 0);// also b!=0
for (int i = 0; i < 10; i++) {
if (count[i] != 0) {
return false;
}
}
return true;
}
}
EulerTotient: (take care to create a primes array)
public static int EulerTotient(int n) {
int x = n;
for (int i = 0; primes[i] <= n; i++) {
if (n % primes[i] == 0) {
x /= primes[i];
x *= (primes[i] - 1);
}
}
return x;
}
getPrimes:
private static int[] getPrimes(int maxValue) {
    boolean[] isPrime = new boolean[maxValue + 1];
    Arrays.fill(isPrime, true);
    for (int i = 2; i * i <= maxValue; i++) {
        if (isPrime[i]) {
            for (int j = i; j * i <= maxValue; j++) {
                isPrime[i * j] = false;
            }
        }
    }
    ArrayList<Integer> primes = new ArrayList<Integer>();
    for (int j = 2; j <= maxValue; j++) {
        if (isPrime[j]) {
            primes.add(j);
        }
    }
    return listToIntArray(primes);
}
Several minutes? My stupid approach takes 1.3 seconds. It uses a precomputed factorization of the numbers below 1e7, which itself takes 0.3 s.
List<Integer> l = new ArrayList<Integer>();
int[] primes = Helper.getPrimes(10000050);
for (int i = 378; i >= 1; i--) {
    for (int j = 239118; j > i; j--) {
        BigInteger num = BigInteger.valueOf(primes[i]).multiply(BigInteger.valueOf(primes[j]));
        if (num.compareTo(BigInteger.TEN.pow(7)) < 1) {
            int x = num.intValue();
            System.out.println(primes[i] + " * " + primes[j] + " = " + x);
        }
    }
}
BigInteger??? That's the culprit. The solution is a few million and comfortably fits into an int. Using long would give me a good feeling. Using Guava's LongMath.checkedMultiply would give a guarantee I'm not computing nonsense. And so would my saturated arithmetic.
BigInteger.valueOf(primes[i])
• this should be done before the inner loop,
• or better stored in an array
BigInteger.TEN.pow(7)
• this is a constant and shouldn't be recomputed again and again
num.compareTo(BigInteger.TEN.pow(7)) < 1
So you want num to be at most 1e7 and are scared of it overflowing 9e18? How should this happen? Not at all, as i * j <= 239118**2.
Another inefficiency: Whenever num gets too big, you know it'll be even bigger for a bigger prime. So if you were iterating upwards, you could use a break. For iterating downwards, you could use one division to determine the initial j.
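A sketch of what that could look like with plain long arithmetic and an upward inner loop with a break (the class name, the tiny primes array, and the small limit here are made up for illustration; the real code would use Helper.getPrimes and a limit of 1e7):

```java
import java.util.ArrayList;
import java.util.List;

public class ProductLoop {
    // Collect all products primes[i] * primes[j] (i < j) up to a limit.
    // The long product cannot overflow for int inputs, and iterating j
    // upwards lets us break as soon as the product exceeds the limit.
    static List<Integer> products(int[] primes, long limit) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < primes.length; i++) {
            for (int j = i + 1; j < primes.length; j++) {
                long num = (long) primes[i] * primes[j];
                if (num > limit) break; // a bigger j only makes num bigger
                out.add((int) num);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(products(new int[]{2, 3, 5, 7, 11, 13}, 100L));
    }
}
```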
int y = EulerTotient(x);
In order to compute the totient, you need the factorization. But you have just thrown the primes away! Use a map or the like.
Possibly you've lost several minutes already in this part and needn't read any further.
## isPermutation
if (String.valueOf(x).length() != String.valueOf(b).length()) {
return false;
This probably takes more time than the remaining computation, which is fine. I'd go for
do {
    ++count[x % 10];
    --count[b % 10];
    x /= 10;
    b /= 10;
    if ((x != 0) != (b != 0)) return false; // different lengths
} while (x != 0); // also b != 0
And I wouldn't call the arguments x and b, as it's confusing (they are interchangeable, so they should be named similarly).
## EulerTotient
Method names should start with a lowercase letter. Not even the best mathematicians deserve any exception.
This part is fine, except that you need no such computation at all, as you already know the factorization when you create n. If you keep this computation anyway, you could add
n /= primes[i];
to the if-block and gain some speed due to an earlier loop exit.
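A sketch of the totient with that early exit (the hard-coded prime table and the handling of a leftover factor are my illustration, not the original code; once n has been divided down, the loop condition fails sooner):

```java
public class Totient {
    // Stand-in prime table for illustration; the real code would use the
    // array produced by getPrimes.
    static final int[] PRIMES = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31};

    static int eulerTotient(int n) {
        int x = n;
        for (int i = 0; i < PRIMES.length && PRIMES[i] <= n; i++) {
            if (n % PRIMES[i] == 0) {
                x = x / PRIMES[i] * (PRIMES[i] - 1);
                while (n % PRIMES[i] == 0) n /= PRIMES[i]; // divide n down for an earlier exit
            }
        }
        if (n > 1) x = x / n * (n - 1); // leftover factor beyond the table, assumed prime
        return x;
    }

    public static void main(String[] args) {
        System.out.println(eulerTotient(10));    // phi(10) = 4
        System.out.println(eulerTotient(87109)); // 87109 = 11 * 7919, phi = 79180
    }
}
```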
## getPrimes
No problem here. One can argue that in
for (int j = i; j * i <= maxValue; j++) {
isPrime[i * j] = false;
}
the expression i*j gets computed twice or that you could keep the product in a variable and just add i on each iteration, but that's what JIT excels at. So it's fine as it is.
• AFAIK there's no ** in java – RE60K Jun 6 '15 at 12:06
• @ADG Sure, there is not; it's not really code, it's just a code-like comment. Rendering math here is rather slow, so I wrote it this way. And I refuse to use tex-style x^y as it already has a meaning. +++ Was it you who downvoted it? – maaartinus Jun 6 '15 at 12:21
# [Agda] list of types
Altenkirch Thorsten psztxa at exmail.nottingham.ac.uk
Mon Apr 15 16:33:32 CEST 2013
Actually, I think the type you have in mind is
data EachPositive : List Nat -> Set where
  [] : EachPositive []
  _::_ : forall {n xs} -> n > 0 -> EachPositive xs -> EachPositive (n :: xs)
which isn't the same as "map (\ n -> n > 0) xs", which is just a list
of types.
Another way would be to fold (\times, \top) over this list to get a type
(replacing nil by \top and :: by \times). But I think the dependent type
given above is preferable.
Thorsten
On 15/04/2013 10:24, "Altenkirch Thorsten"
<psztxa at exmail.nottingham.ac.uk> wrote:
>Indeed, what would be an element of "EachPositive xs"?
>
>On 15/04/2013 06:17, "Andreas Abel" <andreas.abel at ifi.lmu.de> wrote:
>
>>On 15.04.2013 11:44, Serge D. Mechveliani wrote:
>>> Andreas Abel responded recently
>>>
>>>> A list of types is not a type.
>>>
>>> But why
>>> EachPositive : List Nat -> List (Set _)
>>> EachPositive xs = map (\ n -> n > 0) xs
>>>
>>> is type-checked?
>>
>>This is fine, "List Set" is a type, but "EachPositive xs" is a list, and
>>cannot be used as a type, as in,
>>
>> -> EachPositive xs -> ...
>>
>>
>>
>>
>>--
>>Andreas Abel <>< Du bist der geliebte Mensch.
>>
>>Theoretical Computer Science, University of Munich
>>Oettingenstr. 67, D-80538 Munich, GERMANY
>>
>>andreas.abel at ifi.lmu.de
>>http://www2.tcs.ifi.lmu.de/~abel/
>>_______________________________________________
>>Agda mailing list
>>Agda at lists.chalmers.se
>>https://lists.chalmers.se/mailman/listinfo/agda
>
>This message and any attachment are intended solely for the addressee and
>may contain confidential information. If you have received this message
>in error, please send it back to me, and immediately delete it. Please
>do not use, copy or disclose the information contained in this message or
>in any attachment. Any views or opinions expressed by the author of this
>email do not necessarily reflect the views of the University of
>Nottingham.
>
>This message has been checked for viruses but the contents of an
>attachment
>may still contain software viruses which could damage your computer
>system:
>you are advised to perform your own checks. Email communications with the
>University of Nottingham may be monitored as permitted by UK legislation.
Open Access
A. Vadivel, M. Navuluri and P. Thangaraja
DOI: https://doi.org/10.37418/amsj.9.11.59
The main goal of this paper is to study some new classes of sets and to obtain some new decompositions of continuity. For this aim, the notions of $N_{nc} \mathcal{DP}$ sets, $N_{nc} \mathcal{DP}\epsilon$ sets, $N_{nc} \mathcal{DP^*}$ sets, $N_{nc} \mathcal{DP}\epsilon^*$ sets, $N_{nc} e$ continuous functions, $N_{nc} \mathcal{DP^*}$ continuous functions and $N_{nc} \mathcal{DP}\epsilon^*$ continuous functions are introduced. Properties of $N_{nc} \mathcal{DP}$ sets, $N_{nc} \mathcal{DP}\epsilon$ sets, $N_{nc} \mathcal{DP^*}$ sets, $N_{nc} \mathcal{DP}\epsilon^*$ sets and the relationships between these sets and the related concepts are investigated. Finally, some new decompositions of continuity are obtained.
Keywords: $N_{nc} \mathcal{DP}$-set, $N_{nc} \mathcal{DP}\epsilon$-set, $N_{nc} \mathcal{DP^*}$-set, $N_{nc} \mathcal{DP}\epsilon^*$-set, $N_{nc} e$ continuous function, decomposition
# Total nuclear spin of deuteron
1. Feb 24, 2012
### Demon117
Hello all, I am having trouble understanding how this works. In Krane there is a discussion of the total angular momentum I of the deuteron. It has three components: the individual spins of the neutron and the proton, and also the orbital angular momentum l of the nucleons as they move about their common center of mass. This total angular momentum can be denoted by
$I=s_{p} + s_{n} + l$
He continues on to talk about the different ways to couple these contributions and states there are only four possibilities. I can see the first two possibilities for total angular momentum I=1, but the other two make no sense. These are the possibilities:
(1) $s_{n}$ and $s_{p}$ are parallel with $l=0$
(2) $s_{n}$ and $s_{p}$ are antiparallel with $l=1$
(3) $s_{n}$ and $s_{p}$ are parallel with $l=1$
(4) $s_{n}$ and $s_{p}$ are parallel with $l=2$
One can see why (1) and (2) hold by inspection but (3) and (4) make my brain hurt. Perhaps I am just not seeing the correct orientation. Any suggestions?
Last edited: Feb 24, 2012
2. Feb 25, 2012
### Bill_K
matumich26, Do you know how to couple two angular momenta? You don't just add them. When you couple J1 to J2 the combined system can have any value of J in the range from their sum J1 + J2 to their difference |J1 - J2|. You must include all of these possibilities. For example when you coupled the two spins together, 1/2 ⊗ 1/2 = 1 ⊕ 0. That's the way we write it, and it means the coupled system can have either S = 1 (parallel) or S = 0 (antiparallel).
Ok, you wanted to look at the cases with S = 1. When you further couple S with L,
1 ⊗ 0 = 1
1 ⊗ 1 = 2 ⊕ 1 ⊕ 0
1 ⊗ 2 = 3 ⊕ 2 ⊕ 1
You can see in all three of these cases, I = 1 is one of the possibilities.
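The triangle rule Bill_K uses can be sketched in code for integer angular momenta (the class and method names are just for illustration):

```java
import java.util.Arrays;

public class Coupling {
    // Coupling j1 with j2 allows every total J from |j1 - j2| to j1 + j2
    // in integer steps (the triangle rule), for integer j1, j2.
    static int[] allowedJ(int j1, int j2) {
        int lo = Math.abs(j1 - j2), hi = j1 + j2;
        int[] out = new int[hi - lo + 1];
        for (int k = 0; k < out.length; k++) out[k] = lo + k;
        return out;
    }

    public static void main(String[] args) {
        for (int l = 0; l <= 2; l++) {
            System.out.println("S=1, L=" + l + " -> J in " + Arrays.toString(allowedJ(1, l)));
        }
    }
}
```

Each of L = 0, 1, 2 coupled with S = 1 contains J = 1, which is why cases (1), (3), and (4) are all allowed.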
# Non-abbreviated Abbreviations
— 6 min
I have recently investigated a very interesting performance problem in Sentry's symbolication infrastructure.
We got reports of an increasing number of out-of-memory situations in our infrastructure. This started rather randomly and was not correlated with any deploys. I had a hunch that it might be related to some new form of data that customers were throwing at us.
And indeed, after some time, we were able to track this down to a customer project that was fetching gigantic debug files from a custom symbol source. These files were on the order of 5.5 G in size, which made them the largest valid debug files I have seen thus far.
The next step was reproducing the issue locally. Sure enough, processing this debug file took an unreasonably long time, and a whopping 18 G in peak memory usage. Attaching a profiler to that long-running process revealed that it was indeed spending a large portion of its running time in dropping gimli::Abbreviations.
Looking through the gimli code, it became clear that none of the involved types had custom Drop implementations, but some of them contained nested Vecs. This could potentially explain the large memory usage, and with it the runtime spent on allocations and so on.
But where do all the Abbreviations come from?
## What are Abbreviations
Put simply, abbreviations in DWARF describe the schema, or the blueprint of debug information entries (DIE). This schema describes the type of DIEs and its attributes and children. The DIE itself then just has a code referring to its abbreviation / schema, and then just the raw contents of its attributes and children.
These abbreviations are meant to be reused a lot. There can be more than one list of abbreviations. A compilation unit (CU) can refer to its abbreviations list via an offset, and the DIE code is just the index in this list.
Depending on the linker, you can end up with a single global abbreviations list, or with smaller lists, one for each CU. Unfortunately most linkers are quite dumb, they mostly just concatenate raw bytes and patch up some offset here and there. Optimizing and deduplicating DWARF data is complex and slow after all.
Turns out, the notoriously slow MacOS dsymutil actually does some DWARF optimization. Relevant to our case here, it merges and deduplicates all the abbreviations.
## The gimli Problem
Getting back to my investigation, I found out that the crate we use for DWARF handling, gimli, was rather optimized for the case where each CU has its own abbreviations: each CU would parse (and allocate) the abbreviations it was referring to. Abbreviations were not shared across CUs, as is the case with smarter linkers. So far this was never a problem because we were dealing with relatively "small" files. But the file I was looking at was gigantic and had a huge number of CUs. Doing all this duplicate work and memory allocation for each CU got very expensive very quickly.
I reported the problem in detail and also suggested a PR to fix it. There is also another PR that offers the same benefits with a slightly simpler API.
## Doing some more tests
Okay, let's look at some more examples and run some tests.
A well known and public project with very large debug files is Electron. You can download these debug files from https://github.com/electron/electron/releases/tag/v20.1.4 if you are interested in reproducing my experiments.
I have downloaded the MacOS x64 and Linux x64 debug files. The Linux debug file in the electron archive additionally has zlib compressed debug sections. We can unpack those ahead of time using llvm-objcopy --decompress-debug-sections electron.debug.
Using llvm-objdump --section-headers we can look at both the MacOS and Linux debug files.
The relevant line for the MacOS symbol is: __debug_abbrev 00004e61, and for Linux it is .debug_abbrev 02df0539. Or written as decimal, the MacOS abbreviations are feathery light at only about 20K, whereas the Linux file has a whopping 50M of abbreviations.
We can also count these with a bit of shell magic: llvm-dwarfdump --debug-abbrev electron.dsym | grep DW_TAG | wc -l. There are 1_112 for MacOS, and 3_475_269 for Linux. That is a lot.
And how expensive is having redundant abbreviations in the raw data, vs redundant parsing, vs deduplicated parsing?
I wrote a very simple test that just iterates over all of the CUs in a file:
let mut units = dwarf.units();
while let Some(header) = units.next()? {
}
Running this code on the published version of gimli vs. the PR linked above gives me the following change for the Linux file, which has its own abbreviations for each CU:
Benchmark #1: ./gimli-0.26.2 ./tests/abbrevs/electron.debug
Time (mean ± σ): 852.6 ms ± 23.9 ms [User: 800.5 ms, System: 52.6 ms]
Range (min … max): 820.0 ms … 879.8 ms 20 runs
Benchmark #2: ./gimli-patched-626 ./tests/abbrevs/electron.debug
Time (mean ± σ): 860.2 ms ± 28.1 ms [User: 801.0 ms, System: 52.8 ms]
Range (min … max): 819.4 ms … 916.0 ms 20 runs
A tiny regression from more indirection. Let's try the same for the deduplicated MacOS DWARF:
Benchmark #3: ./gimli-0.26.2 ./tests/abbrevs/electron.dsym
Time (mean ± σ): 4.780 s ± 0.052 s [User: 4.705 s, System: 0.066 s]
Range (min … max): 4.719 s … 4.874 s 20 runs
Benchmark #4: ./gimli-patched-626 ./tests/abbrevs/electron.dsym
Time (mean ± σ): 225.1 ms ± 1.8 ms [User: 185.5 ms, System: 36.3 ms]
Range (min … max): 219.6 ms … 230.7 ms 20 runs
That is a huge difference right there.
And finally with the original customer debug file I investigated:
Benchmark #5: ./gimli-0.26.2 ./tests/abbrevs/giant
Time (mean ± σ): 105.556 s ± 1.733 s [User: 104.921 s, System: 0.514 s]
Range (min … max): 102.769 s … 108.016 s 10 runs
Benchmark #6: ./gimli-patched-626 ./tests/abbrevs/giant
Time (mean ± σ): 760.7 ms ± 28.7 ms [User: 647.5 ms, System: 104.7 ms]
Range (min … max): 725.7 ms … 814.6 ms 10 runs
That is indeed a night and day difference.
In both cases this is a difference of up to two orders of magnitude. However, the example only tests an extremely limited part of our DWARF processing.
But it does highlight how important it is to cache redundant computations, as well as to deduplicate the raw data in the first place.
## Archive for August, 2016
### (Slightly) Faster way to compute number of unique elements in matlab matrix
Wednesday, August 31st, 2016
The standard way to compute the number of unique entries in a matlab matrix A is:
numel(unique(A))
If our entries are positive integers, we can try to do the same thing using sparse with:
nnz(sparse(A(:),1,1,size(A,1),1))
but actually this is slower.
I don’t see a way to avoid a sort. I came up with this:
sum(diff(sort(A(:)))~=0)+1
As far as I can tell, this will work for matrices that don’t have infs and nans. It’s slightly faster than numel(unique(A)). I have a feeling I’m only winning anything here because I’m avoiding overhead within unique.
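For illustration only, the same sort-and-scan idea written out in Java (not MATLAB; the names are made up): sort a copy, then count the boundaries between runs of equal values.

```java
import java.util.Arrays;

public class UniqueCount {
    // Count distinct values by sorting and counting run boundaries,
    // mirroring sum(diff(sort(A(:)))~=0)+1.
    static int uniqueCount(double[] a) {
        if (a.length == 0) return 0;
        double[] s = a.clone();
        Arrays.sort(s);
        int count = 1;
        for (int i = 1; i < s.length; i++) {
            if (s[i] != s[i - 1]) count++; // a new run starts here
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(uniqueCount(new double[]{3, 1, 4, 1, 5, 9, 2, 6, 5, 3}));
    }
}
```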
### Eitan Grinspun’s “How to host a visit”
Thursday, August 25th, 2016
Eitan has prepared a google doc on hosting visitors to the lab that I find very useful. It contains a detailed list (some unique to Columbia) of things you should do to host a visitor and when you should do them.
### Real-time LaTeX in browser
Tuesday, August 23rd, 2016
A long time ago, I made a little web app that allowed you to submit Latex code to a server, the server would run pdflatex and then send back the pdf to be rendered on the page.
Nowadays MathJax can render LaTeX equations in the browser. I found http://www.texrendr.com that lets you interactively write formulae, but it’s restricted to a single equation. There’s also the LaTeXiT standalone app, but you have to keep hitting “typeset”. This isn’t so useful if you’re trying to quickly type math during a Skype call.
So I wrote a little program to run in the browser:
http://www.cs.toronto.edu/~jacobson/latex.html
You can also fork the git gist
### option click on volume menu bar item to change sound devices
Friday, August 19th, 2016
I find myself often switching the microphone and speaker devices on my mac. I usually go all the way through System Preferences > Sound, but I stumbled upon a much faster route.
Normally if you click on the volume menu item at the top of the screen you get a little volume slider:
If you hold OPTION while you click you see a device selection menu:
### Energy optimization, calculus of variations, Euler Lagrange equations in Maple
Tuesday, August 16th, 2016
Here’s a simple demonstration of how to solve an energy functional optimization symbolically using Maple.
Suppose we’d like to minimize the 1D Dirichlet energy over the unit line segment:
min 1/2 * f'(t)^2
f
subject to: f(0) = 0, f(1) = 1
we know that the solution is given by solving the differential equation:
f''(t) = 0, f(0) = 0, f(1) = 1
and we know that solution to be
f(t) = t
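The calculus-of-variations step behind this (a standard derivation, independent of Maple):

```latex
\frac{d}{dt}\frac{\partial}{\partial f'}\left(\tfrac{1}{2}f'(t)^2\right)
- \frac{\partial}{\partial f}\left(\tfrac{1}{2}f'(t)^2\right)
= f''(t) = 0
\quad\Rightarrow\quad f(t) = a\,t + b,
```

and the boundary conditions f(0) = 0, f(1) = 1 force a = 1, b = 0, i.e. f(t) = t.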
How do we go about verifying this in Maple:
with(VariationalCalculus):
E := diff(f(t),t)^2:
L := EulerLagrange(E,t,f(t)):
so far this will output:
L := {-2*diff(diff(f(t),t),t), -diff(f(t),t)^2 = K[2], 2*diff(f(t),t) = K[1]}
Finally solve with the boundary conditions using:
dsolve({L[1],f(0)=0,f(1)=1});
which will output
f(t) = t
### Are you using libigl?
Wednesday, August 3rd, 2016
I would love to gather up some information about who out there is using libigl. Drop me a line if you or your university/institution/company is.
3k views
### Expected coverage after sampling with replacement 'k' times [duplicate]
Possible Duplicate: probability distribution of coverage of a set after X independently, randomly selected members of the set If I sample with replacement $k$ ...
2k views
### The exact probability of observing $x$ unique elements after sampling with replacement from a set with $N$ elements
Say I sample with replacement from a set of $N$ unique elements s.t. elements are selected with uniform probability. If I sample with replacement $M$ times from this set, what is the exact ...
3k views
### Statistical Inference Question
From Statistical Inference Second Edition (George Casella, Roger L. Berger) "My telephone rings 12 times each week, the calls being randomly distributed among the 7 days. What is the probability that ...
974 views
### Estimate the number of elements by random sampling with replacement
Setup: $N$ numbered balls are in a bag. $N$ is unknown. Pick a ball uniformly at random, record its number, replace it, shuffle. After $M$ samples, of which we noticed $R$ repeated numbers, how can ...
385 views
### Probability of having 'k' elements that occur only once in a multiset filled by sampling with replacement
Let's say that I have a set of unique elements, $P$, and a multiset $M$ that I fill with $N \leq ||P||$ elements by sampling with replacement from $P$. What is the probability that the multiset $M$ ...
230 views
### Probability distribution for sampling an element $k$ times after $x$ independent random samplings with replacement
In an earlier question ( probability distribution of coverage of a set after X independently, randomly selected members of the set ), Ross Rogers asked for the probability distribution for the ...
# Energy stored in a device (Charged Capacitor)
## Homework Statement
The metallic sphere on top of a large van de Graaf generator has a radius of 2.0 m. Suppose that the sphere carries a charge of 3x10-5 C. How much energy is stored in this device?
## Homework Equations
So the equation I must use is this: U = (1/2)C(ΔV)^2
Q is given: 3 x 10^-5 C
To find ΔV, I'd first use the formula: V = Q/(4πε0r)
## The Attempt at a Solution
Then it's simply a matter of plugging in the numbers. What I'm not sure about is how to find ΔV from the given information; I'm not sure I'd be using the right formula.
CWatters
Homework Helper
Gold Member
What's the magic word beginning with d ?
What's the magic word beginning with d ?
Oh, I have to find the charge density don't I? Bear with me, I'm still quite confused with all of that...
So, charge density of a sphere: ρ = Q/V, where V = (4πr^3)/3
Would I use this as my Q in the above formula?
CWatters
Homework Helper
Gold Member
I meant d for differentiate .
Somewhat informally..
Q=VC
V=Q/C
dV = dQ/C
dQ = the amount of charge when fully charged - the amount of charge when discharged
= 3x10-5 - 0
I meant d for differentiate .
Somewhat informally..
Q=VC
V=Q/C
dV = dQ/C
dQ = the amount of charge when fully charged - the amount of charge when discharged
= 3x10-5 - 0
Ok I'm sorry but I'm not following you on this one, this isn't something I've seen in class
CWatters
Homework Helper
Gold Member
It's probably simpler than you think.
The max energy case is with it charged to some voltage V. The minimum energy state is with it discharged (zero volts), so ΔV is just V. Likewise for ΔQ.
U = (1/2)CV^2
now
Q = VC so
V = Q/C
then substitute for V giving
U = (1/2)*C*(Q/C)^2
= Q^2/2C
It's probably simpler than you think.
The max energy case is with it charged to some voltage V. The minimum energy state is with it discharged (zero volts), so ΔV is just V. Likewise for ΔQ.
U = (1/2)CV^2
now
Q = VC so
V = Q/C
then substitute for V giving
U = (1/2)*C*(Q/C)^2
= Q^2/2C
Thanks! It is simpler than I thought... So in that case I don't need the radius of the generator to solve it; it's basically unnecessary information?
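Note the radius does still enter, through the capacitance of an isolated sphere. A worked check (my numbers, not from the thread, using 1/(4πε₀) ≈ 8.99×10⁹ N·m²/C²):

```latex
C = 4\pi\varepsilon_0 r = \frac{2.0}{8.99\times 10^{9}}\ \mathrm{F} \approx 2.2\times 10^{-10}\ \mathrm{F},
\qquad
U = \frac{Q^2}{2C} = \frac{(3\times 10^{-5})^2}{2\times 2.2\times 10^{-10}}\ \mathrm{J} \approx 2.0\ \mathrm{J}.
```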
## Matthieu Mangeat
### Publications Scientifiques
##### [17] M. Karmakar1, S. Chatterjee2, M. Mangeat2, H. Rieger2,3, and R. Paul1, Jamming and flocking in the restricted active Potts model, submitted (2022).
1School of Mathematical & Computational Sciences, Indian Association for the Cultivation of Science, Kolkata 700032, India.
2Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
3INM – Leibniz Institute for New Materials, Campus D2 2, D-66123 Saarbrücken, Germany.
Abstract We study the active Potts model with either site occupancy restriction or on-site repulsion to explore jamming and kinetic arrest in a flocking model. The incorporation of such volume exclusion features leads to a surprisingly rich variety of self-organized spatial patterns. While bands and lanes of moving particles commonly occur without or under weak volume exclusion, strong volume exclusion along with low temperature, high activity, and large particle density facilitates traffic jams. Through several phase diagrams, we identify the phase boundaries separating the jammed and free-flowing phases and study the transition between these phases, which provides us with both qualitative and quantitative predictions of how jamming might be delayed or dissolved. We further formulate and analyze a hydrodynamic theory for the restricted APM that predicts various features of the microscopic model.
##### [16] S. Chatterjee1, M. Mangeat1, C.-U. Woo2, H. Rieger1,3, and J. D. Noh2, Flocking of two unfriendly species: The two-species Vicsek model, submitted (2022).
1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
2Department of Physics, University of Seoul, Seoul 02504, Korea.
3INM – Leibniz Institute for New Materials, Campus D2 2, D-66123 Saarbrücken, Germany.
Abstract We consider the two-species Vicsek model (TSVM) consisting of two kinds of self-propelled particles, A and B, that tend to align with particles from the same species and to anti-align with the other. The model shows a flocking transition that is reminiscent of the original Vicsek model: it has a liquid-gas phase transition and displays micro-phase separation in the coexistence region where multiple dense liquid bands propagate in a gaseous background. The novel feature of the TSVM is the existence of two kinds of bands, one composed of mainly A-particles and one mainly of B-particles and the appearance of two dynamical states in the coexistence region: the PF (parallel flocking) state in which all bands of the two species propagate in the same direction, and the APF (anti-parallel flocking) state in which the bands of species A and species B move in opposite directions. When PF and APF states exist in the low-density part of the coexistence region they perform stochastic transitions from one to the other. The system size dependence of the transition frequency and dwell times shows a pronounced crossover that is determined by the ratio of the band width and the longitudinal system size. Our work paves the way for studying multi-species flocking models with heterogeneous alignment interactions.
##### [15] S. Chatterjee1, M. Mangeat1, and H. Rieger1,2, Polar flocks with discretized directions: the active clock model approaching the Vicsek model, EPL 138, 41001 (2022).
1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
2INM – Leibniz Institute for New Materials, Campus D2 2, D-66123 Saarbrücken, Germany.
Abstract We consider the off-lattice two-dimensional $q$-state active clock model (ACM) as a natural discretization of the Vicsek model (VM) describing flocking. The ACM consists of particles able to move in the plane in a discrete set of $q$ equidistant angular directions, as in the active Potts model (APM), with an alignment interaction inspired by the ferromagnetic equilibrium clock model. We find that for a small number of directions, the flocking transition of the ACM has the same phenomenology as the APM, including macrophase separation and reorientation transition. For a larger number of directions, the flocking transition in the ACM becomes equivalent to the one of the VM and displays microphase separation and only transverse bands, i.e. no re-orientation transition. Concomitantly also the transition of the $q\to\infty$ limit of the ACM, the active XY model (AXYM), is in the same universality class as the VM. We also construct a coarse-grained hydrodynamic description for the ACM and AXYM akin to the VM.
##### [14] A. Alexandre1, M. Mangeat2, T. Guérin1, and D. S. Dean1,3, How Stickiness Can Speed Up Diffusion in Confined Systems, Phys. Rev. Lett. 128, 210601 (2022).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
2Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
3Team MONC, INRIA Bordeaux Sud Ouest, CNRS UMR 5251, Bordeaux INP, Univ. Bordeaux, F-33400 Talence, France.
Abstract The paradigmatic model for heterogeneous media used in diffusion studies is built from reflecting obstacles and surfaces. It is well known that the crowding effect produced by these reflecting surfaces slows the dispersion of Brownian tracers. Here, using a general adsorption desorption model with surface diffusion, we show analytically that making surfaces or obstacles attractive can accelerate dispersion. In particular, we show that this enhancement of diffusion can exist even when the surface diffusion constant is smaller than that in the bulk. Even more remarkably, this enhancement effect occurs when the effective diffusion constant, when restricted to surfaces only, is lower than the effective diffusivity with purely reflecting boundaries. We give analytical formulas for this intriguing effect in periodic arrays of spheres as well as undulating microchannels. Our results are confirmed by numerical calculations and Monte Carlo simulations.
##### [13] M. Mangeat1,2, T. Guérin1, and D. S. Dean1,3, Steady state of overdamped particles in the non-conservative force field of a simple non-linear model of optical trap, J. Stat. Mech. 2021, 113205 (2021).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
2Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
3Team MONC, INRIA Bordeaux Sud Ouest, CNRS UMR 5251, Bordeaux INP, Univ. Bordeaux, F-33400 Talence, France.
Abstract Optically trapped particles are often subject to a non-conservative scattering force arising from radiation pressure. In this paper we present an exact solution for the steady state statistics of an overdamped Brownian particle subjected to a commonly used force field model for an optical trap. The model is the simplest of its kind that takes into account non-conservative forces. In particular, we present exact results for certain marginals of the full three dimensional steady state probability distribution, as well as results for the toroidal probability currents present in the steady state and for the circulation of these currents. Our analytical results are confirmed by numerical solution of the steady state Fokker-Planck equation.
##### [12] M. Mangeat1 and H. Rieger1, Narrow escape problem in two-shell spherical domains, Phys. Rev. E 104, 044124 (2021).
1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
Abstract Intracellular transport in living cells is often spatially inhomogeneous with an accelerated effective diffusion close to the cell membrane and a ballistic motion away from the centrosome due to active transport along actin filaments and microtubules, respectively. Recently it was reported that the mean first passage time (MFPT) for transport to a specific area on the cell membrane is minimal for an optimal actin cortex width. In this paper we ask whether this optimization in a two-compartment domain can also be achieved by passive Brownian particles. We consider a Brownian motion with different diffusion constants in the two shells and a potential barrier between the two and investigate the narrow escape problem by calculating the MFPT for Brownian particles to reach a small window on the external boundary. In two and three dimensions, we derive asymptotic expressions for the MFPT in the thin cortex and small escape region limits confirmed by numerical calculations of the MFPT using the finite element method and stochastic simulations. From this analytical and numeric analysis we finally extract the dependence of the MFPT on the ratio of diffusion constants, the potential barrier height and the width of the outer shell. The first two are monotonic whereas the last one may have a minimum for a sufficiently attractive cortex, for which we propose an analytical expression of the potential barrier height matching the numerical predictions very well.
##### [11] M. Mangeat1, S. Chatterjee2, R. Paul2, and H. Rieger1, Flocking with a q-fold discrete symmetry: Band-to-lane transition in the active Potts model, Phys. Rev. E 102, 042601 (2020).
1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
2School of Mathematical & Computational Sciences, Indian Association for the Cultivation of Science, Kolkata 700032, India.
Abstract We study the $q$-state active Potts model (APM) on a two-dimensional lattice in which self-propelled particles have q internal states corresponding to the q directions of motion. A local alignment rule inspired by the ferromagnetic $q$-state Potts model and self-propulsion via biased diffusion according to the internal particle states elicits collective motion at high densities and low noise. We formulate a coarse-grained hydrodynamic theory with which we compute the phase diagrams of the APM for $q=4$ and $q=6$ and analyze the flocking dynamics in the coexistence region, where the high-density (polar liquid) phase forms a fluctuating stripe of coherently moving particles on the background of the low-density (gas) phase. A reorientation transition of the phase-separated profiles from transversal band motion to longitudinal lane formation is found, which is absent in the Vicsek model and the active Ising model. The origin of this reorientation transition is revealed by a stability analysis: for large velocities the transverse diffusivity approaches zero and stabilizes lanes. Computer simulations corroborate the analytical predictions of the flocking and reorientation transitions and validate the phase diagrams of the APM.
##### [10] S. Chatterjee1, M. Mangeat2, R. Paul1, and H. Rieger2, Flocking and reorientation transition in the 4-state active Potts model, EPL 130, 66001 (2020).
1School of Mathematical & Computational Sciences, Indian Association for the Cultivation of Science, Kolkata 700032, India.
2Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
Abstract We study the active 4-state Potts model (APM) on the square lattice in which active particles have four internal states corresponding to the four directions of motion. A local alignment rule inspired by the ferromagnetic 4-state Potts model and self-propulsion via biased diffusion according to the internal particle states leads to flocking at high densities and low noise. We compute the phase diagram of the APM and explore the flocking dynamics in the region, in which the high-density (liquid) phase coexists with the low-density (gas) phase and forms a fluctuating band of coherently moving particles. As a function of the particle self-propulsion velocity, a novel reorientation transition of the phase-separated profiles from transversal to longitudinal band motion is revealed, which is absent in the Vicsek model and the active Ising model. We further construct a coarse-grained hydrodynamic description of the model which validates the results for the microscopic model.
##### [9] M. Mangeat1, T. Guérin1, and D. S. Dean1, Effective diffusivity of Brownian particles in a two dimensional square lattice of hard disks, J. Chem. Phys. 152, 234109 (2020).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
Abstract We revisit the classic problem of the effective diffusion constant of a Brownian particle in a square lattice of reflecting impenetrable hard disks. This diffusion constant is also related to the effective conductivity of non-conducting and infinitely conductive disks in the same geometry. We show how a recently derived Green’s function for the periodic lattice can be exploited to derive a series expansion of the diffusion constant in terms of the disk’s volume fraction φ. Second, we propose a variant of the Fick–Jacobs approximation to study the large volume fraction limit. This combination of analytical results is shown to describe the behavior of the diffusion constant for all volume fractions.
##### [8] M. Mangeat1 and H. Rieger1, The narrow escape problem in a circular domain with radial piecewise constant diffusivity, J. Phys. A: Math. Theor. 52, 424002 (2019).
1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
Abstract The stochastic motion of particles in living cells is often spatially inhomogeneous with a higher effective diffusivity in a region close to the cell boundary due to active transport along actin filaments. As a first step to understand the consequence of the existence of two compartments with different diffusion constant for stochastic search problems we consider here a Brownian particle in a circular domain with different diffusion constants in the inner and the outer shell. We focus on the narrow escape problem and compute the mean first passage time (MFPT) for Brownian particles starting at some pre-defined position to find a small region on the outer reflecting boundary. For the annulus geometry we find that the MFPT can be minimized for a specific value of the width of the outer shell. In contrast for the two-shell geometry we show that the MFPT depends monotonously on all model parameters, in particular on the outer shell width. Moreover we find that the distance between the starting point and the narrow escape region which maximizes the MFPT depends discontinuously on the ratio between inner and outer diffusivity.
##### [7] M. Mangeat1, Y. Amarouchene1, Y. Louyer1, T. Guérin1, and D. S. Dean1, Role of nonconservative scattering forces and damping on Brownian particles in optical traps, Phys. Rev. E 99, 052107 (2019).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
Abstract We consider a model of a particle trapped in a harmonic optical trap but with the addition of a nonconservative radiation induced force. This model is known to correctly describe experimentally observed trapped particle statistics for a wide range of physical parameters, such as temperature and pressure. We theoretically analyze the effect of nonconservative force on the underlying steady state distribution as well as the power spectrum for the particle position. We compute perturbatively the probability distribution of the resulting nonequilibrium steady states for all dynamical regimes underdamped through to overdamped and give expressions for the associated currents in phase space (position and velocity). We also give the spectral density of the trapped particle's position in all dynamical regimes and for any value of the nonconservative force. Signatures of the presence of nonconservative forces are shown to be particularly strong for the underdamped regime at low frequencies.
##### [6] Y. Amarouchene1, M. Mangeat1, B. Vidal Montes1, L. Ondic2, T. Guérin1, D. S. Dean1, and Y. Louyer1, Nonequilibrium Dynamics Induced by Scattering Forces for Optically Trapped Nanoparticles in Strongly Inertial Regimes, Phys. Rev. Lett. 122, 183901 (2019).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
2Institute of Physics, Academy of Sciences of the Czech Republic, CZ-162 00 Prague, Czech Republic.
Abstract The forces acting on optically trapped particles are commonly assumed to be conservative. Nonconservative scattering forces induce toroidal currents in overdamped liquid environments, with negligible effects on position fluctuations. However, their impact in the underdamped regime remains unexplored. Here, we study the effect of nonconservative scattering forces on the underdamped nonlinear dynamics of trapped nanoparticles at various air pressures. These forces induce significant low-frequency position fluctuations along the optical axis and the emergence of toroidal currents in both position and velocity variables. Our experimental and theoretical results provide fundamental insights into the functioning of optical tweezers and a means for investigating nonequilibrium steady states induced by nonconservative forces.
##### [PhD] M. Mangeat1, De la dispersion aux vortex browniens dans des systèmes hors-équilibres confinés (From dispersion to Brownian vortices in confined out-of-equilibrium systems), PhD thesis, Université de Bordeaux (defended 25 Sept. 2018).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
Abstract This thesis aims to characterize the out-of-equilibrium stochastic dynamics of Brownian particles under confinement. Confinement is applied here through attractive potentials or impermeable boundaries that create entropic barriers. First, we study the dispersion of non-interacting particles in heterogeneous media. A cloud of Brownian particles spreads over time without reaching the Boltzmann equilibrium distribution, and its spreading is then characterized by an effective diffusivity smaller than the microscopic diffusivity. In a first chapter, we investigate the link between the confinement geometry and dispersion in the particular case of periodic microchannels. To this end, we compute the effective diffusivity without any dimensional-reduction hypothesis, in contrast to the standard so-called Fick-Jacobs approach. A classification of the different dispersion regimes is then carried out for any geometry, for both continuous and discontinuous channels. In a second chapter, we extend this analysis to dispersion in periodic lattices of short-range attractive spherical obstacles. The presence of an attractive potential can, surprisingly, increase dispersion. We quantify this effect in the dilute regime and show its optimization for several potentials as well as for diffusion mediated by the surface of the spheres. We then study the stochastic dynamics of Brownian particles in an optical trap in the presence of a nonconservative force created by the radiation pressure of the laser. The perturbative expression of the stationary currents, describing Brownian vortices, is derived at low pressures while retaining the inertial term in the underdamped Langevin equation. The expression of the spectral density is also computed, making it possible to observe the anisotropies of the trap and the effects of the nonconservative force. Most of the analytical expressions obtained during this thesis are asymptotically exact and are verified by numerical analyses based on the integration of the Langevin equation or the solution of partial differential equations.
##### [5] M. Mangeat1, T. Guérin1, and D. S. Dean1, Dispersion in two-dimensional periodic channels with discontinuous profiles, J. Chem. Phys. 149, 124105 (2018).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
Abstract The effective diffusivity of Brownian tracer particles confined in periodic micro-channels is smaller than the microscopic diffusivity due to entropic trapping. Here, we study diffusion in two-dimensional periodic channels whose cross section presents singular points, such as abrupt changes of radius or the presence of thin walls, with openings, delimiting periodic compartments composing the channel. Dispersion in such systems is analyzed using the Fick-Jacobs (FJ) approximation. This approximation assumes a much faster equilibration in the lateral than in the axial direction, along which the dispersion is measured. If the characteristic width a of the channel is much smaller than the period L of the channel, i.e., $\varepsilon = a/L$ is small, this assumption is clearly valid for Brownian particles. For discontinuous channels, the FJ approximation is only valid at the lowest order in $\varepsilon$ and provides a rough, though on occasions rather accurate, estimate of the effective diffusivity. Here we provide formulas for the effective diffusivity in discontinuous channels that are asymptotically exact at the next-to-leading order in $\varepsilon$. Each discontinuity leads to a reduction of the effective diffusivity. We show that our theory is consistent with the picture of effective trapping rates associated with each discontinuity, for which our theory provides explicit and asymptotically exact formulas. Our analytical predictions are confirmed by numerical analysis. Our results provide a precise quantification of the kinetic entropic barriers associated with profile singularities.
##### [4] M. Mangeat1, T. Guérin1, and D. S. Dean1, Dispersion in two dimensional channels—the Fick–Jacobs approximation revisited, J. Stat. Mech. (2017) 123205.
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
Abstract We examine the dispersion of Brownian particles in a symmetric two dimensional channel; this classical problem has been widely studied in the literature using the so-called Fick–Jacobs approximation and its various improvements. Most studies rely on the reduction to an effective one dimensional diffusion equation; here we derive an explicit formula for the diffusion constant which avoids this reduction. Using this formula the effective diffusion constant can be evaluated numerically without resorting to Brownian simulations. In addition, a perturbation theory can be developed in $\varepsilon = h_0/L$ where $h_0$ is the characteristic channel height and $L$ the period. This perturbation theory confirms that the results of Kalinay and Percus (2006 Phys. Rev. E 74 041203), based on the reduction to one dimensional diffusion, are exact at least to $\mathcal O(\varepsilon^6)$. Furthermore, we show how the Kalinay and Percus pseudo-linear approximation can be straightforwardly recovered. The approach proposed here can also be exploited to yield exact results in the limit $\varepsilon \to \infty$; we show that the diffusion constant remains finite in this limit and how the result can be obtained with a simple physical argument. Moreover, we show that the correction to the effective diffusion constant is of order $1/\varepsilon$ and remarkably has some universal characteristics. Numerically we compare the analytic results obtained with exact numerical calculations for a number of interesting channel geometries.
##### [3] M. Mangeat1, T. Guérin1, and D. S. Dean1, Geometry controlled dispersion in periodic corrugated channels, EPL 118, 40004 (2017).
1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
Abstract The effective diffusivity $D_e$ of tracer particles diffusing in periodically corrugated axisymmetric two- and three-dimensional channels is studied. The majority of the previous studies of this class of problems are based on perturbative analyses about narrow channels, where the problem can be reduced to an effectively one-dimensional one. Here we show how to analyze this class of problems using a much more general approach which even includes the limit of infinitely wide channels. Using the narrow- and wide-channel asymptotics, we provide a Padé approximant scheme that is able to describe the dispersion properties of a wide class of channels. Furthermore, we systematically identify all the exact asymptotic scaling regimes of $D_e$ and the accompanying physical mechanisms that control dispersion, clarifying the distinction between smooth channels and compartmentalized ones, and identifying the regimes in which $D_e$ can be linked to first passage problems.
##### [2] X. Zhou1, R. Zhao1, K. Schwarz2, M. Mangeat2, E. C. Schwarz1, M. Hamed3,4, I. Bogeski1, V. Helms3, H. Rieger2, and B. Qu1, Bystander cells enhance NK cytotoxic efficiency by reducing search time, Sci. Rep. 7, 44357 (2017).
1Biophysics, Center for Integrative Physiology and Molecular Medicine, School of Medicine, Saarland University, D-66421 Homburg, Germany.
2Department of Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
3Center for Bioinformatics, Saarland University, D-66041 Saarbrücken, Germany.
4Institute for Biostatistics and Informatics in Medicine and Ageing Research, Rostock University Medical Center, D-18057 Rostock, Germany.
Abstract Natural killer (NK) cells play a central role during innate immune responses by eliminating pathogen-infected or tumorigenic cells. In the microenvironment, NK cells encounter not only target cells but also other cell types including non-target bystander cells. The impact of bystander cells on NK killing efficiency is, however, still elusive. In this study we show that the presence of bystander cells, such as P815, monocytes or HUVEC, enhances NK killing efficiency. With bystander cells present, the velocity and persistence of NK cells were increased, whereas the degranulation of lytic granules remained unchanged. Bystander cell-derived $H_2O_2$ was found to mediate the acceleration of NK cell migration. Using mathematical diffusion models, we confirm that local acceleration of NK cells in the vicinity of bystander cells reduces their search time to locate target cells. In addition, we found that integrin $\beta$ chains ($\beta_1$, $\beta_2$ and $\beta_7$) on NK cells are required for bystander-enhanced NK migration persistence. In conclusion, we show that acceleration of NK cell migration in the vicinity of $H_2O_2$-producing bystander cells reduces target cell search time and enhances NK killing efficiency.
##### [1] M. Mangeat1,2 and F. Zamponi1, Quantitative approximation schemes for glasses, Phys. Rev. E 93, 012609 (2016).
1Laboratoire de Physique Théorique (LPT), École Normale Supérieure, UMR 8549 CNRS, 24 Rue Lhomond, F-75005 Paris, France.
2Master ICFP, Département de Physique, École Normale Supérieure, 24 Rue Lhomond, F-75005 Paris, France.
Abstract By means of a systematic expansion around the infinite-dimensional solution, we obtain an approximation scheme to compute properties of glasses in low dimensions. The resulting equations take as input the thermodynamic and structural properties of the equilibrium liquid, and from this they allow one to compute properties of the glass. They are therefore similar in spirit to the Mode Coupling approximation scheme. Our scheme becomes exact, by construction, in dimension $d \to \infty$, and it can be improved systematically by adding more terms in the expansion.
## Partially logarithmic ramification theory and characteristic cycles of rank 1 sheaves
### Abstract
The characteristic cycle of a constructible sheaf on a projective smooth algebraic variety is an algebraic cycle on the cotangent bundle that computes the Euler characteristic of the sheaf. In this talk, we consider a rank 1 sheaf on the variety. For a computation of the characteristic cycle of a rank 1 sheaf, we introduce a general theory called partially logarithmic ramification theory, and construct an algebraic cycle using several invariants measuring the ramification of the sheaf, which is compared with the characteristic cycle.
# Age Estimation Using Expectation of Label Distribution Learning
Proceedings of The 27th International Joint Conference on Artificial Intelligence (IJCAI 2018), 2018.
## Abstract
Age estimation performance has been greatly improved by using convolutional neural network. However, existing methods have an inconsistency between the training objectives and evaluation metric, so they may be suboptimal. In addition, these methods always adopt image classification or face recognition models with a large amount of parameters which bring expensive computation cost and storage overhead. To alleviate these issues, we design a light network architecture and propose a unified framework which can jointly learn age distribution and regress age. The effectiveness of our approach has been demonstrated on apparent and real age estimation tasks. Our method achieves new state-of-the-art results using the single model with 36$\times$ fewer parameters and 2.6$\times$ reduction in inference time. Moreover, our method can achieve comparable results as the state-of-the-art even though model parameters are further reduced to 0.9M~(3.8MB disk storage). We also analyze that Ranking methods are implicitly learning label distributions.
The source code is coming soon.
The pre-trained models and aligned & cropped face images are publicly available (May 1, 2018).
• Align&Cropped
Images
• Train&Test list
Lists
• ThinAgeNet
14.9 MB
• TinyAgeNet
3.8 MB
## Real-time Video Demo
This demo shows real-time apparent age estimation on videos. It runs on a PC (4x i7-4510U CPU @ 2.00GHz), and its prediction results come from our DLDL-v2 (TinyAgeNet) trained on an apparent age dataset (ChaLearn16: 5613 training images).
## Image Demo
The proposed ThinAgeNet (trained on ChaLearn16) on the Trump Family Photo.
The proposed ThinAgeNet (trained on ChaLearn16) on Oscars 2017.
The proposed ThinAgeNet (trained on ChaLearn16) on Elementary school students.
## Citation
@inproceedings{gao2018dldlv2,
title={Age Estimation Using Expectation of Label Distribution Learning},
author={Gao, Bin-Bin and Zhou, Hong-Yu and Wu, Jianxin and Geng, Xin},
booktitle={Proc. The 27th International Joint Conference on Artificial Intelligence (IJCAI 2018)},
year={2018}
}
# Validity of the Born approximation for identical-particle scattering
My understanding of the Born approximation is that it is valid for scattering with small momentum transfer. In the context of two-particle, isotropic, elastic scattering, I would trust the Born approximation for scattering events where the momentum transfer, $$q = 2k|\sin(\theta/2)|$$, is much smaller than $$k$$, the asymptotic relative momentum of the particles. Equivalently, I trust the Born approximation for scattering where the angle of deflection, $$\theta$$, is small.
I want to know if this understanding is still valid for scattering between two identical particles.
When the particles are indistinguishable (e.g., electrons), we cannot tell the difference between a scattering event with deflection $$\theta$$ versus one with deflection $$\pi-\theta$$. The corresponding momentum transfers are $$q=2k|\sin(\theta/2)|$$ versus $$q=2k|\cos(\theta/2)|$$. Accordingly, any small-angle scattering event is paired with an equally valid large-angle one.
1. Compared to the case where the particles are distinguishable, the Born approximation is accurate over an even wider range of $$\theta$$ because a large-angle deflection can be re-interpreted as a small-angle one.
2. Compared to the case where the particles are distinguishable, the Born approximation is accurate over a narrower range of $$\theta$$ because a small-angle deflection can be re-interpreted as a large-angle one.
This doublethought carries through to practical calculations involving the differential cross-section. For indistinguishable particles, the differential cross-section must be symmetric about $$\theta=\pi/2$$. This means that in calculating angle-averaged quantities, e.g., the total cross-section $$\sigma_{\mathrm{tot}}(k) = 2\pi \int_0^\pi \sigma(k,\theta)\sin\theta\,d\theta,$$ I am free to integrate either just the small-angle contributions, $$0\le\theta\le\frac\pi2$$, or just the large-angle contributions, $$\frac\pi2\le\theta\le\pi$$ (picking up a factor of 2 as well, naturally).
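For concreteness, the symmetrization I am assuming (the standard textbook result for identical particles, stated here for reference) is $$\frac{d\sigma}{d\Omega}(\theta) = \left|f(\theta) \pm f(\pi-\theta)\right|^2,$$ with the upper sign for identical bosons (or a spin-singlet electron pair) and the lower sign for spin-polarized fermions (spin-triplet). Either choice is manifestly invariant under $$\theta \to \pi-\theta$$, which is why the differential cross-section must be symmetric about $$\theta=\pi/2$$.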
• ### Random Coin Tossing with unknown bias (1709.02362)
March 21, 2019 math.PR
Consider a coin tossing experiment which consists of tossing one of two coins at a time, according to a renewal process. The first coin is fair and the second has probability $1/2 + \theta$, $\theta \in [-1/2,1/2]$, $\theta$ unknown but fixed, of heads. The biased coin is tossed at the renewal times of the process, and the fair one at all the other times. The main question about this experiment is whether or not it is possible to determine $\theta$ almost surely as the number of tosses increases, given only the probabilities of the renewal process and the observed sequence of heads and tails. We will construct a confidence interval for $\theta$ and determine conditions on the process for its almost sure convergence. It will be shown that recurrence is in fact a necessary condition for the almost sure convergence of the interval, although the convergence still holds if the process is null recurrent but the expected number of renewals up to and including time $N$ is $O(N^{1/2+\alpha})$, $0 \leq \alpha < 1/2$. This solves an open problem presented by Harris and Keane (1997). We also generalize this experiment to random variables in $L^{2}$ which are sampled according to a renewal process from either one of two distributions.
• ### Fitting a Hurdle Generalized Lambda Distribution to healthcare expenses (1712.02183)
Feb. 20, 2018 stat.AP
We develop and apply to healthcare expenses data a hurdle, or two-way, model whose associated distribution is the Generalised Lambda Distribution (GLD) in order to meet the demand of this type of data for a highly flexible model with an excess of zeros. We first present a thorough review of the literature about the GLD and then develop hurdle GLD marginal and regression models. Finally, we apply the developed models to a dataset consisting of yearly healthcare expenses. The fitted models are compared with models based on the Generalised Pareto Distribution (GPD) and it is established that the GLD models perform better.
• ### Stopping Times of Random Walks on a Hypercube (1709.02359)
Oct. 21, 2019 math.PR
A random walk on a $N$-dimensional hypercube is a discrete time stochastic process whose state space is the set $\{-1,+1\}^{N}$, which has uniform probability of reaching any neighbour state, and probability zero of reaching a non-neighbour state, in one step. This random walk is often studied as a process associated with the Ehrenfest Urn Model. This paper aims to present results about the time that such random walk takes to self-intersect and to return to a set of states. We also present results about the time that the random walk on a hypercube takes to visit a given set and a random set of states. Asymptotic distributions and bounds are presented for these times. The coupling of random walks is widely used as a tool to prove the results.
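The walk defined in this abstract is easy to probe numerically. Below is a minimal Monte-Carlo sketch (my own illustration, with $N=4$ chosen arbitrarily); since the stationary distribution is uniform over the $2^N$ states, Kac's formula gives an expected return time to the starting state of exactly $2^N$:

```python
import random

# Estimate the mean return time to the starting vertex of a simple random
# walk on the N-dimensional hypercube {-1,+1}^N.  Each step flips one
# uniformly chosen coordinate (i.e., moves to a uniform neighbour).
random.seed(1)
N = 4
trials = 20000
total = 0
for _ in range(trials):
    state = [1] * N              # start at (+1, ..., +1)
    steps = 0
    while True:
        i = random.randrange(N)  # flip one uniformly chosen coordinate
        state[i] = -state[i]
        steps += 1
        if all(s == 1 for s in state):
            break                # returned to the starting state
    total += steps
print(f"empirical mean return time: {total / trials:.2f} (Kac value: {2 ** N})")
```

With enough trials the empirical mean settles near $2^N = 16$, illustrating the uniform-stationary-distribution argument.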
# How big does a lake have to be to have its own Sea Breeze?
How big does a body of water need to be to have a sea breeze? Is there a chart of sea breeze wind speeds that includes lakes?
Could a circular lake create enough sea breeze to create a wind vortex in the center of the lake? https://worldbuilding.stackexchange.com/questions/108896/could-a-city-be-built-out-of-balloons
• Great question. Lake breezes are quite common... with the Great Lakes and Lake Okeechobee both commonly impactful. But we often see lake breezes even on much smaller lakes such as Lake Apopka in Florida, and boundaries even come off rivers occasionally. Basically any temperature gradient can cause some sort of circulation; it's just whether it's strong enough to impact clouds and such to be noted. Remind me in a couple months when it's warmer (and thus they can be regularly seen in FL), and I will try to delete this and write up an answer using some imagery/models showing them :) Apr 15 '18 at 21:50
• @JeopardyTempest - looking forward to your mesoscale model output. But looking at Lake Apopka - en.wikipedia.org/wiki/Lake_Apopka it is no small lake at least in my view of the world. Apr 16 '18 at 0:55
• @gansub, true, true, I get spoiled by the land of large lakes. It's the 5th largest in Florida, and only 6% the size of Lake Okeechobee... so I still call it "much smaller" :-p Apr 16 '18 at 4:04
• The temperature of the lake will also be important and that will depend on thermal mass, depth, and circulation within the lake. The characteristics and heating of the land also make a difference. To make things even more complicated the effects will be superimposed on the regional weather, which could either enhance or counteract the effect. Oct 29 '18 at 23:28
• (I didn't forget btw! And did save a few pertinent images this summer. Not sure I'll be able to perfectly answer all your interests, but hoping I can at least put something together eventually... just been busy/distracted/dealing with things like computer issues recently :-) ) Feb 21 '19 at 10:06
Lake breezes (similar to sea breezes) are fundamentally a feature of mesoscale meteorology, and the peer-reviewed reference Small Lake Daytime Breezes: Some Observational and Conceptual Evaluations details both the observational studies of lake breezes and the conceptual understanding behind the formation of the lake breeze.
Since OP's question is
How big does a body of water need to be to have a sea breeze?
The answer, according to the book Climate in a Small Area (one of the references cited in the paper above), is that lake breezes have been observed on lakes with a width of about 4 km (Lake Suwa, Japan) and a width of 10 km (Lake Constance, Switzerland). The lake breeze reached a speed of 2 m $$s^{-1}$$ in the case of Lake Suwa and a slightly greater speed in the case of Lake Constance. But because both of these lakes are orographic lakes, the presence of thermally induced upslope flows (diurnal heating and consequent upslope flows during the day) can interfere with the formation of the lake breeze.
Further observational studies, carried out much later for Lake El Dorado, Lake Okeechobee, Lake Winnipeg and Lake Eufaula, have reported lake breeze speeds in the range of 2-4 m $$s^{-1}$$. One must remember that these lakes are much larger in size, but in essence the numbers capture the range of lake breeze values. These observations were obtained from measurements carried out over a period of a few months and over different weather conditions.
The upper range for the lake breeze is around 6 m $$s^{-1}$$ and any lake having a width of less than 2 km is likely to have an undetectable lake breeze.
So the physics behind lake breeze formation is in essence the same as the physics behind sea breeze formation. A pressure gradient (density gradient) is created between the land and the lake, and this gradient is established primarily in the lower troposphere. The greater the inland sensible heat flux, the greater the pressure gradient.
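As a rough illustration of how that gradient translates into wind, one can use the standard buoyancy (density-current) scaling $$U \sim \sqrt{gH\,\Delta T/T}$$ for thermally driven breezes. The numbers below are my own illustrative assumptions, not values from the cited studies:

```python
import math

# Back-of-envelope lake-breeze speed from the buoyancy scaling
# U ~ sqrt(g * H * dT / T).  All numeric values are assumed for illustration.
g = 9.81    # gravitational acceleration, m/s^2
T = 293.0   # reference air temperature, K
dT = 4.0    # assumed land-lake temperature contrast, K
H = 150.0   # assumed depth of the onshore flow layer, m

U = math.sqrt(g * H * dT / T)
print(f"estimated lake-breeze speed: {U:.1f} m/s")
```

With these assumptions the estimate lands in the observed 2-6 m $$s^{-1}$$ range quoted above; a weaker temperature contrast or a shallower flow layer pushes it toward the undetectable end.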
The following URL shows satellite images of different stages of formation of a lake breeze during the day - Lake Breeze Satellite Imagery
In regard to the second question, I believe steam devils are mesoscale vortices that have been observed to form over lakes. These vortices are no more than 50-200 meters wide and can develop in the absence of any major synoptic-scale forcing. But the surface area of the lake has to be in the range of the Great Lakes of North America for the strong surface heat fluxes to play a role in mesoscale vortex development. The following peer-reviewed reference provides observational and modeling studies of steam devils: Mesoscale spiral vortex embedded within a lake.
• Great answer. Steam devils are interesting. Does the lake breeze form a vortex when the wind meets in the middle over a smaller lake? Could it keep an idle vessel in the middle of the lake, or a blimp over the center of the lake? The vortex may be very wide, 5+ mph, and not visible.
– Muze
Feb 17 '19 at 18:52
• @Muze - I am not sure whether I understood your question. But IMHO steam devils are simply not possible in smaller lakes. For smaller lakes with favorable synoptic scale forcing you can have external vortices move in over the lake. Feb 18 '19 at 0:30
• yes, they would not get steam devils; it would be something completely different. I don't know if there is a name for it yet. I'm just saying, for smaller lakes that have a lake breeze, what happens when that lake breeze meets in the middle?
– Muze
Feb 18 '19 at 1:13
• @Muze - What do you mean when you write - lake breeze meets in the middle ? A lake breeze moves from water to land. Smaller lakes simply do not have the capacity to produce a vortex. Synoptic scale systems can create a lot of disturbance when it moves into the middle of the lake. But that uses energy from "outside" the lake. Feb 18 '19 at 1:22
• @Muze at night it is called a land breeze. A lake breeze can move in only one direction: from water to land. Feb 18 '19 at 2:11
# How do you graph exponential decay functions?
This is an example for the function $f \left(x\right) = 2 \cdot {e}^{- 3 x}$
This is just a particular case of the general form $f \left(x\right) = a \cdot {e}^{k x}$
where $a$ is a constant representing the starting value and $k$ is the rate of growth (when $k > 0$) or the rate of decay (when $k < 0$).
(Please note that $f$ often represents a quantity evolving in time, so the symbol $t$ is often used instead of $x$.)
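Before sketching the curve, it helps to tabulate a few points. Here is a quick way to do that (in Python, as an illustration of the general form above):

```python
import math

def exp_model(a, k):
    """Return f(x) = a * e^(k*x); k < 0 gives exponential decay."""
    return lambda x: a * math.exp(k * x)

f = exp_model(2, -3)  # the example above: f(x) = 2 * e^(-3x)
for i in range(7):    # sample x = 0, 0.5, 1.0, ..., 3.0
    x = i / 2
    print(f"x = {x:3.1f}   f(x) = {f(x):.5f}")
```

The table starts at the value $a = 2$ and shrinks toward zero, which is exactly the shape to sketch: a curve through $(0, a)$ decaying toward the $x$-axis.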
Note
This documents the development version of NetworkX. Documentation for the current release can be found here.
# networkx.drawing.layout.spring_layout¶
spring_layout(G, k=None, pos=None, fixed=None, iterations=50, threshold=0.0001, weight='weight', scale=1, center=None, dim=2, seed=None)
Position nodes using Fruchterman-Reingold force-directed algorithm.
The algorithm simulates a force-directed representation of the network treating edges as springs holding nodes close, while treating nodes as repelling objects, sometimes called an anti-gravity force. Simulation continues until the positions are close to an equilibrium.
There are some hard-coded values: minimal distance between nodes (0.01) and “temperature” of 0.1 to ensure nodes don’t fly away. During the simulation, k helps determine the distance between nodes, though scale and center determine the size and place after rescaling occurs at the end of the simulation.
Fixing some nodes doesn’t allow them to move in the simulation. It also turns off the rescaling feature at the simulation’s end. In addition, setting scale to None turns off rescaling.
Parameters
G : NetworkX graph or list of nodes
A position will be assigned to every node in G.
k : float (default=None)
Optimal distance between nodes. If None the distance is set to 1/sqrt(n) where n is the number of nodes. Increase this value to move nodes farther apart.
pos : dict or None, optional (default=None)
Initial positions for nodes as a dictionary with node as keys and values as a coordinate list or tuple. If None, then use random initial positions.
fixed : list or None, optional (default=None)
Nodes to keep fixed at initial position. ValueError is raised if fixed is specified and pos is not.
iterations : int, optional (default=50)
Maximum number of iterations taken.
threshold : float, optional (default=1e-4)
Threshold for relative error in node position changes. The iteration stops if the error is below this threshold.
weight : string or None, optional (default='weight')
The edge attribute that holds the numerical value used for the edge weight. If None, then all edge weights are 1.
scale : number or None (default=1)
Scale factor for positions. Not used unless fixed is None. If scale is None, no rescaling is performed.
center : array-like or None
Coordinate pair around which to center the layout. Not used unless fixed is None.
dim : int
Dimension of layout.
seed : int, RandomState instance or None, optional (default=None)
Set the random state for deterministic node layouts. If int, seed is the seed used by the random number generator; if numpy.random.RandomState instance, seed is the random number generator; if None, the random number generator is the RandomState instance used by numpy.random.
Returns
pos : dict
A dictionary of positions keyed by node
Examples
>>> G = nx.path_graph(4)
>>> pos = nx.spring_layout(G)
>>> # The same layout using the longer but equivalent function name
>>> pos = nx.fruchterman_reingold_layout(G)
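Because the layout starts from random positions, the seed parameter is the usual way to make results reproducible; a short sketch (assuming networkx is installed):

```python
import networkx as nx

G = nx.path_graph(4)
# A fixed seed makes the otherwise random force-directed layout reproducible
pos = nx.spring_layout(G, seed=42)
print(pos)
```

Running the call again with the same seed yields the same positions.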
|
{}
|
# MSCS Seminar Calendar
Tuesday August 27, 2019
Organizational + discussion
Alex Furman (UIC)
4:00 PM in TBD
In this meeting we plan to discuss the topics for the semester and decide on near-future plans. My suggestion starts from Patterson-Sullivan theory and several topics in manifolds of negative curvature. I will give an overview of some results that I propose to study in the seminar.
Wednesday August 28, 2019
Graduate Geometry, Topology and Dynamics Seminar
Organizational Meeting
Samuel Dodd (UIC)
3:00 PM in TBD
We will discuss the direction of the seminar for this semester.
Statistics Seminar
Organizational meeting
Statistics group (UIC)
4:00 PM in 636 SEO
Algebraic Geometry Seminar
The Geometry of Hilbert's 13th problem
Jesse Wolfson (University of California Irvine)
4:00 PM in 427 SEO
Given a polynomial x^n+a_1x^{n-1}+ . . . + a_n, what is the simplest formula for the roots in terms of the coefficients a_1, . . . , a_n? Following Abel, we can no longer take “simplest” to mean in radicals, but we could ask for a solution using only 1-, 2- or d-variable functions. Hilbert conjectured that for degrees 6, 7 and 8, we need 2-, 3- and 4-variable functions respectively. In a little-known paper, he then sketched how the 27 lines on a cubic surface should give a 4-variable solution of the general degree 9 polynomial. In this talk, I’ll review the geometry of solving polynomials, explain Hilbert’s idea, and then extend his geometric methods to get best-to-date upper bounds on the number of variables needed to solve a general degree n polynomial.
Thursday August 29, 2019
Louise Hay Logic Seminar
Organizational Meeting
Noah Schoem
4:00 PM in 427 SEO
Friday August 30, 2019
Departmental Colloquium
Braids, polynomials and Hilbert's 13th problem
Jesse Wolfson (University of California Irvine)
3:00 PM in 636 SEO
There are still completely open fundamental questions about polynomials in one variable. One example is Hilbert's 13th Problem, a conjecture going back long before Hilbert. Indeed, the invention of algebraic topology grew out of an effort to understand how the roots of a polynomial depend on the coefficients. The goal of this talk is to explain part of the circle of ideas surrounding these questions. Along the way, we will encounter some beautiful classical objects - the space of monic, degree d square-free polynomials, algebraic functions, lines on cubic surfaces, level structures on Jacobians, braid groups, Galois groups, and configuration spaces - all intimately related to each other, all with mysteries still to reveal.
Tuesday September 3, 2019
Thesis Defense
Large-Scale Geometry of Knit Products
Jake Herndon
3:00 PM in TBS
A group $G$ is a knit product of subgroups $H$ and $K$ if $G=HK$ and $H\cap K=\{1\}$, or equivalently, if the group operation $G\times G\to G$ restricts to a bijection $H\times K\to G$. We will discuss knit products in the context of large-scale geometry.
Wednesday September 4, 2019
Statistics Seminar
Envelope-based Sparse Partial Least Squares
Zhihua Su (University of Florida)
4:00 PM in 636 SEO
Sparse partial least squares (SPLS) is widely used in applied sciences as a method that performs dimension reduction and variable selection simultaneously in linear regression. Several implementations of SPLS have been derived, among which the SPLS proposed in Chun and Keleş (2010) is very popular and highly cited. However, for all of these implementations, the theoretical properties of SPLS are largely unknown. In this paper, we propose a new version of SPLS, called the envelope-based SPLS, using a connection between envelope models and partial least squares (PLS). We establish the consistency, oracle property and asymptotic normality of the envelope-based SPLS estimator. The large-sample scenario and high-dimensional scenario are both considered. We also develop the envelope-based SPLS estimators under the context of generalized linear models, and discuss its theoretical properties including consistency, oracle property and asymptotic distribution. Numerical experiments and examples show that the envelope-based SPLS estimator has better variable selection and prediction performance over the existing SPLS estimators.
Algebraic Geometry Seminar
Open Mirror Symmetry of Landau-Ginzburg Models
Tyler Kelly (University of Birmingham, UK)
4:00 PM in 427 SEO
Mirror Symmetry provides a link between symplectic and algebraic geometry through a duality in string theory. In particular, it asserts a link from the symplectic geometry of a space M to the algebraic geometry of its mirror space W. One way we see this is now known as classical mirror symmetry: the Gromov-Witten or enumerative theory of a symplectic space is encapsulated by the Hodge theory / periods of the mirror algebraic space. In the 90s this was articulated just for Calabi-Yau varieties, but it has expanded even further to Fano varieties; however, the mirror space is now not an algebraic variety but a mildly non-commutative object known as a Landau-Ginzburg model. Recently, this notion has been developed even to articulate mirror symmetry between Landau-Ginzburg models. In this talk, we will explain what non-commutative Hodge theory / periods look like for a Landau-Ginzburg model and how they predict phenomena in open enumerative theories for the mirror.
Monday September 9, 2019
Geometry, Topology and Dynamics Seminar
Weighted cscK metrics and weighted K-stability
Abdellah Lahdili (Montreal)
3:00 PM in 636 SEO
We will introduce a notion of a K\"ahler metric with constant weighted scalar curvature on a compact K\"ahler manifold $X$, depending on a fixed real torus $\T$ in the reduced group of automorphisms of $X$, and two smooth (weight) functions defined on the momentum image of $X$. We will also define a notion of weighted Mabuchi energy adapted to our setting, and of a weighted Futaki invariant of a $\T$-compatible smooth K\"ahler test configuration associated to $(X, \T)$. After that, using the geometric quantization scheme of Donaldson, we will show that if a projective manifold admits in the corresponding Hodge K\"ahler class a K\"ahler metric with constant weighted scalar curvature, then this metric minimizes the weighted Mabuchi energy, which implies a suitable notion of weighted K-semistability. As an application, we describe the K\"ahler classes on a geometrically ruled complex surface of genus greater than 2, which admits conformally K\"ahler Einstein-Maxwell metrics.
Algebraic Geometry Seminar
The supermoduli space of genus zero super Riemann surfaces with Ramond punctures
4:00 PM in 427 SEO
Analysis and Applied Mathematics Seminar
TBA
Shi Jin (Shanghai Jiao Tong University)
4:00 PM in 636 SEO
Wednesday September 11, 2019
Statistics Seminar
TBA
Stacey Tannenbaum (Astellas)
4:00 PM in 636 SEO
Friday September 13, 2019
Departmental Colloquium
A proof of the sensitivity conjecture
Hao Huang (Emory)
3:00 PM in 636 SEO
In the $n$-dimensional hypercube graph, one can easily choose half of the vertices such that they induce an empty graph. However, having even just one more vertex would cause the induced subgraph to contain a vertex of degree at least $\sqrt{n}$. This result is best possible, and improves a logarithmic lower bound shown by Chung, Furedi, Graham and Seymour in 1988. In this talk we will discuss a very short algebraic proof of it.
As a direct corollary of this purely combinatorial result, the sensitivity and degree of every boolean function are polynomially related. This solves an outstanding foundational problem in theoretical computer science, the Sensitivity Conjecture of Nisan and Szegedy.
Monday September 16, 2019
Geometry, Topology and Dynamics Seminar
TBA
Michael Brandenbursky (Ben Gurion University)
3:00 PM in 636 SEO
Algebraic Geometry Seminar
TBA
Hannah Larson (Stanford University)
4:00 PM in 427 SEO
Wednesday September 18, 2019
Statistics Seminar
TBA
Jun Li (University of Notre Dame)
4:00 PM in 636 SEO
Monday September 23, 2019
Geometry, Topology and Dynamics Seminar
TBA
Joanna Furno (Depaul University)
3:00 PM in 636 SEO
TBA
Algebraic Geometry Seminar
TBA
Geoff Smith (Harvard University)
4:00 PM in 427 SEO
Wednesday September 25, 2019
Statistics Seminar
TBA
Dan Nettleton (Iowa State University)
4:00 PM in 636 SEO
Friday September 27, 2019
Departmental Colloquium
TBA
TBA (TBA)
3:00 PM in 636 SEO
Monday September 30, 2019
Geometry, Topology and Dynamics Seminar
Trees, dendrites, and the Cannon-Thurston map
Elizabeth Field (UIUC)
3:00 PM in 636 SEO
When 1 -> H -> G -> Q -> 1 is a short exact sequence of three word-hyperbolic groups, Mahan Mitra (Mj) has shown that the inclusion map from H to G extends continuously to a map between the Gromov boundaries of H and G. This boundary map is known as the Cannon-Thurston map. In this context, Mitra associates to every point z in the Gromov boundary of Q an "ending lamination" on H which consists of pairs of distinct points in the boundary of H. We prove that for each such z, the quotient of the Gromov boundary of H by the equivalence relation generated by this ending lamination is a dendrite, that is, a tree-like topological space. This result generalizes the work of Kapovich-Lustig and Dowdall-Kapovich-Taylor, who prove that in the case where H is a free group and Q is a convex cocompact purely atoroidal subgroup of Out(F_n), one can identify the resultant quotient space with a certain R-tree in the boundary of Culler-Vogtmann's Outer space.
Algebraic Geometry Seminar
Hodge-Riemann bilinear relations, Schur classes and ample vector bundles
Julius Ross (UIC)
4:00 PM in 427 SEO
Wednesday October 2, 2019
Statistics Seminar
TBA
Ning Hao (University of Arizona)
4:00 PM in 636 SEO
Wednesday October 9, 2019
Statistics Seminar
TBA
Jianfeng Zhang (USC)
3:00 PM in 636 SEO
TBA
Statistics Seminar
TBA
Hira Koul (Michigan State University)
4:00 PM in 636 SEO
Friday October 11, 2019
Departmental Colloquium
TBA
Moon Duchin (Tufts)
3:00 PM in 636 SEO
Monday October 14, 2019
Mathematical Computer Science Seminar
TBA
Qiang Zeng (CUNY)
3:00 PM in 427 SEO
Algebraic Geometry Seminar
TBA
Gavril Farkas (Humboldt University of Berlin)
4:00 PM in 427 SEO
Analysis and Applied Mathematics Seminar
TBA
Matthias Maier (Texas A&M University)
4:00 PM in 636 SEO
TBA
Wednesday October 16, 2019
Statistics Seminar
TBA
Yixin Fang (AbbVie)
4:00 PM in 636 SEO
Monday October 21, 2019
Geometry, Topology and Dynamics Seminar
TBA
MurphyKate Montee (University of Chicago)
3:00 PM in 636 SEO
Analysis and Applied Mathematics Seminar
TBA
Fred Weissler (Universite de Paris Nord)
4:00 PM in 636 SEO
Wednesday October 23, 2019
Statistics Seminar
TBA
Wei Pan (University of Minnesota)
3:00 PM in 636 SEO
Statistics Seminar
TBA
Xianyang Zhang (Texas A&M University)
4:00 PM in 636 SEO
Wednesday October 30, 2019
Statistics Seminar
TBA
Minge Xie (Rutgers University)
4:00 PM in 636 SEO
Friday November 1, 2019
Departmental Colloquium
Midwest Dynamical Systems Conference
TBA (TBA)
3:00 PM in 636 SEO
Monday November 4, 2019
Geometry, Topology and Dynamics Seminar
TBA
Benoit Charbonneau (Waterloo)
3:00 PM in 636 SEO
Wednesday November 6, 2019
Statistics Seminar
TBA
Dr. Qing Jin (Northwestern University)
4:00 PM in 636 SEO
Wednesday November 13, 2019
Statistics Seminar
TBA
Anru Zhang (University of Wisconsin)
4:00 PM in 636 SEO
Friday November 15, 2019
Departmental Colloquium
TBA
Anand Pillay (Notre Dame)
3:00 PM in 636 SEO
Wednesday November 20, 2019
Statistics Seminar
TBA
Xiangrong Yin (University of Kentucky)
4:00 PM in 636 SEO
Monday November 25, 2019
Geometry, Topology and Dynamics Seminar
TBA
Xiaowei Wang (Rutgers)
3:00 PM in 636 SEO
Monday December 2, 2019
Geometry, Topology and Dynamics Seminar
No seminar
None
3:00 PM in 636 SEO
Wednesday April 15, 2020
Statistics Seminar
TBA
Xiaotong Shen (University of Minnesota)
4:00 PM in 636 SEO
|
{}
|
# Catalan's constant fast convergent series
Working with some conjectured continued fractions that were published here, I have found the following fast monotone convergent series providing rational approximants to Catalan's constant $$\begin{equation*}G=\frac{1}{768}\sum_{n=1}^\infty\frac{4096^n\,P_0(n)}{n^3(2n-1)(3n-2)(3n-1)(4n-3)(4n-1){10n \choose 5n} {8n \choose 4n}{5n \choose 3n}}\tag{1}\label{1} \end{equation*}$$ where $$P_0(n)$$ is the following 6-th degree polynomial $$P_0(n)=4652032\,n^6-10340864\,n^5+8853568\,n^4-3683104\,n^3+774028\,n^2-76764\,n+2835$$ This series converges at a rate $$\rho=\frac{27}{50000}=0.00054$$ giving more than 3 decimal digits per term. This is just under some known BBP-type formulae for Catalan's constant (See here Eqs. 30-32) that have $$\rho=\frac{1}{4096}\simeq 0.000244$$ but it is well over Pilehrood's Fast formula (See here Theorem 4 pg. 234) having $$\rho=\frac{1}{1024}\simeq 0.000977$$ which belongs to the same class.
In fact, just to compare, Pilehrood's is $$\begin{equation*}G=\frac{1}{64}\sum_{n=1}^\infty\frac{(-1)^{n-1}256^n\,P_1(n)}{n^3(2n-1)(4n-3)^2(4n-1)^2{8n \choose 4n}^2{2n \choose n}}\tag{2}\label{2} \end{equation*}$$ where $$P_1(n) = 419840\,n^6 − 915456\,n^5 + 782848\,n^4 −332800\,n^3 + 73256\,n^2 − 7800\,n + 315$$
I have prepared the following configuration file for the series Eq.$$(1)$$ to test it on my standard laptop (no special hardware) using Alexander Yee's binary-splitting y-cruncher software:
{
NameShort : "Catalan"
NameLong : "Catalan's Constant. jorge.zuniga.hansen@gmail.com"
AlgorithmShort : "Fast Catalan (2022). rho = 0.00054"
AlgorithmLong : "Catalan Constant Fast Hypergeometric Series (2022)"
Formula : {
SeriesHypergeometric : {
CoefficientP : 1
CoefficientQ : 0
CoefficientD : 2
PolynomialP : [2835 -76764 774028 -3683104 8853568 -10340864 4652032]
PolynomialQ : [99225 -2905560 33146280 -192321440 630560720 -1214720000 1360640000 -819200000 204800000]
PolynomialR : [0 0 0 -2304 27264 -123264 266496 -276480 110592]
}
}
It only took 28 seconds to get 50,000,000 decimal digits, as can be seen in these output details (it was verified on a second run using a different, although slower, series from Jesús Guillera).
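Before trusting a long binary-splitting run, the first few terms of Eq.$$(1)$$ can be summed exactly with rational arithmetic; the following is a stdlib-only Python sketch of my own (an illustration, not the y-cruncher setup above):

```python
from fractions import Fraction
from math import comb

def P0(n):
    # 6th-degree numerator polynomial of Eq. (1)
    return (4652032*n**6 - 10340864*n**5 + 8853568*n**4
            - 3683104*n**3 + 774028*n**2 - 76764*n + 2835)

def catalan_partial(N):
    """Exact partial sum of the first N terms of Eq. (1)."""
    s = Fraction(0)
    for n in range(1, N + 1):
        den = (n**3 * (2*n - 1) * (3*n - 2) * (3*n - 1) * (4*n - 3) * (4*n - 1)
               * comb(10*n, 5*n) * comb(8*n, 4*n) * comb(5*n, 3*n))
        s += Fraction(4096)**n * P0(n) / den
    return s / 768

print(float(catalan_partial(4)))  # close to G = 0.9159655941...
```

Each additional term contributes roughly three more correct decimal digits, matching the quoted rate $$\rho = 0.00054$$.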
Now the questions:
Is Eq.$$(1)$$ already known?
Is there any chance of proving this conjectural series? Wilf-Zeilberger pairs, perhaps? Or,
Is it possible, at least, to get a linear recurrence for the partial-sum approximants?
NOTE. (Just to put this topic in context.) These series for Catalan's constant $$G$$ are linearly convergent at a high rate. They are suited to a standard 3-variable binary splitting algorithm (with 2 variables needed for the final result), which allows computing series even more general than hypergeometric series.
Two additional high-performance series for $$G$$ belonging to this class are
Guillera (2019)$$\begin{equation*}G=\frac{1}{1024}\sum_{n=1}^\infty\frac{(-1)^{n-1}4096^n\,P_2(n)}{\left[n(2n-1){6n \choose 3n}{3n \choose n}{2n \choose n}^{-1}\right]^3}\tag{3}\label{3} \end{equation*}$$ where $$P_2(n) = 45136\,n^4 − 57184\,n^3 + 21240\,n^2 −3160\,n + 165$$ and Pilehrood's short (2010), —Corollary 2 pg. 233— $$\begin{equation*}G=\frac{1}{64}\sum_{n=1}^\infty\frac{256^n\,P_3(n)}{n^3(2n-1){6n \choose 3n}{6n \choose 4n}{4n \choose 2n}}\tag{4}\label{4} \end{equation*}$$ with $$P_3(n) = 580\,n^2 − 184\,n + 15$$ These expressions can be summarized in the following table
\begin{align*} \begin{array}{|c|c|c|c|c|c|c|} \hline \mathrm{Series} & \mathrm{Eq.} & \mathrm{conv. rate}\ \rho & \rho^{-1} & \frac{dec.\ digits}{term} & \mathrm{cost} & d \\ \hline \mathrm{This\ one\ (2022)} & (1) & 27/50000 & 1851.85 & 3.27 & 4.25 & 8\\ \hline \mathrm{Pilehrood\ long\ (2010)} & (2) & 1/1024 & 1024.00 & 3.01 & 4.62 & 8 \\ \hline \mathrm{Guillera\ (2019)} & (3) & 64/19683 & 307.55 & 2.49 & 4.19 & 6\\ \hline \mathrm{Pilehrood\ short\ (2010)} & (4) & 4/729 & 182.25 & 2.26 & 3.07 & 4\\ \hline \end{array} \end{align*}
Eq.$$(1)$$ (the only unproven one in this table) provides the best convergence rate for this family of series, which makes this formula the first candidate for fast computation of Catalan's constant if the number of digits needed runs to a few thousand. If the number of digits required is huge (hundreds of thousands to millions), it is necessary to use a binary splitting algorithm. In this case the last two columns show the relative computing cost per iteration and $$d$$, the summand denominator's polynomial degree. The relative cost is calculated by $$cost =-\frac{ 4\,d}{\log\,\rho}$$ For these extreme cases (a game some people do play), the best formula is Eq.$$(4)$$. In fact this series has been used to compute over 1.2e+12 digits of $$G$$.
Raayoni, G., Gottlieb, S., Manor, Y. et al. Generating conjectures on fundamental constants with the Ramanujan Machine. Nature 590, 67–73 (2021). https://doi.org/10.1038/s41586-021-03229-4
Marichev, Oleg; Sondow, Jonathan; and Weisstein, Eric W. "Catalan's Constant." From MathWorld--A Wolfram Web Resource.
Khodabakhsh Hessami Pilehrood, Tatiana Hessami Pilehrood. Series acceleration formulas for beta values. Discrete Mathematics and Theoretical Computer Science, DMTCS, 2010, Vol. 12 no. 2 (2), pp. 223-236. doi:10.46298/dmtcs.504. hal-00990465.
Bruno Haible, Thomas Papanikolaou. "Fast multiprecision evaluation of series of rational numbers". CLN Document.
|
{}
|
# Integration by parts vs integration by parts with substitution different answers
I'm trying to integrate
$$\int(5x^4+1)\ln(x^5+x)dx$$
When I use an online calculator (it first uses substitution and then integration by parts) I get the answer
$$(x^5+x)\ln(x^5+x)-x^5-x$$
but when I try a different approach (without substitution only with integration by parts) I get
$$(x^5+x)\ln(x^5+x)-x$$
(I use $x^5+x$ as the variable of integration in place of $x$)
• The first is correct modulo a constant of integration, the second is wrong. Without any more info, this is because you didn't do the second method correctly. – Paul Jan 4 '18 at 14:54
• Use LaTeX to type your question. Put all the math parts inside two dollar signs and use the command \int_{}^{} to display your integral – pureundergrad Jan 4 '18 at 14:54
• Is that integral supposed to be $\int(5x^4+1)\ln(x^5+x)dx$ ? – PM 2Ring Jan 4 '18 at 15:00
• @PM2Ring yes that's it – john Jan 4 '18 at 15:02

## 2 Answers

It seems that you forgot something, since differentiation of $(x^5+x)\ln(x^5+x)-x$ gives:
\begin{align*} D_x\left[(x^5+x)\ln(x^5+x)-x\right]&=(5x^4+1)\ln(x^5+x)+5x^4+1-1\\ &=(5x^4+1)\ln(x^5+x)+5x^4 \end{align*}
And in the expression $(x^5+x)\ln(x^5+x)-x$ the last $x$ must be $x^5+x$, because of the change of variable that you used in order to integrate, i.e. $x^5+x$ instead of $x$.
• oh it seems I didn't calculate the first derivate the right way... didn't see it was a function of a function... – john Jan 4 '18 at 15:19
Since your integral looks like $\int g'\ln(g)$ you just need to find a primitive of $\ln(x)$, namely $x\cdot\ln(x)-x$, and then plug in $x^5+x$ for $g$.
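As a quick numerical cross-check of the corrected antiderivative $(x^5+x)\ln(x^5+x)-(x^5+x)$, one can compare a central-difference derivative against the integrand; a small stdlib-only Python sketch of my own (not from the thread):

```python
import math

def F(x):
    # Antiderivative from the substitution method: u*ln(u) - u with u = x^5 + x
    u = x**5 + x
    return u * math.log(u) - u

def integrand(x):
    return (5 * x**4 + 1) * math.log(x**5 + x)

# The central-difference derivative of F should reproduce the integrand
h = 1e-6
for x in (0.5, 1.0, 2.0):
    print(x, (F(x + h) - F(x - h)) / (2 * h), integrand(x))
```

The two columns agree to several decimal places at each sample point.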
|
{}
|
## Stream: Is there code for X?
### Topic: name-hunt: Union_of_singleton?
#### Damiano Testa (Nov 26 2020 at 04:49):
Dear All,
I have been unable to find the lemma below, but I would imagine that it is already in mathlib: do you know what its name is?
On a golfing note: can you combine the rcases with the exact?
Thank you!
import data.set.lattice
lemma setext {α : Type*} (s : set α) : s = ⋃ i : s , {i} :=
begin
-- ext1, simp, also proves this lemma
ext1 x,
refine ⟨λ hx, set.mem_Union.mpr ⟨⟨x, hx⟩, rfl⟩, λ hx, _⟩,
rcases set.mem_Union.mp hx with ⟨x, _, rfl⟩,
exact subtype.val_prop x,
end
#### Johan Commelin (Nov 26 2020 at 05:32):
-- src/data/set/lattice.lean:438
@[simp] theorem sUnion_singleton (s : set α) : ⋃₀ {s} = s := Sup_singleton
#### Johan Commelin (Nov 26 2020 at 05:32):
atm, you can't combine rcases with exact. And I don't know how to turn it into a syntax that would really save many characters.
#### Kenny Lau (Nov 26 2020 at 07:31):
I don't think that is the theorem he's looking for?
#### Johan Commelin (Nov 26 2020 at 07:33):
Yup, you are totally right
#### Kenny Lau (Nov 26 2020 at 07:34):
import data.set.lattice
lemma setext {α : Type*} (s : set α) : s = ⋃ i : s , {i} :=
set.subset.antisymm (λ i his, set.mem_Union.2 ⟨⟨i, his⟩, rfl⟩) $ set.Union_subset $ λ i, set.singleton_subset_iff.2 i.2
#### Damiano Testa (Nov 26 2020 at 08:09):
Dear @Johan Commelin and @Kenny Lau , thank you for your inputs! Kenny's proof works for me! I do not know whether this lemma should be added to data.set.lattice. Do you have an opinion?
#### Eric Wieser (Nov 26 2020 at 08:44):
Should this lemma name contain the word subtype?
#### Eric Wieser (Nov 26 2020 at 09:18):
The non-subtype version is docs#set.bUnion_of_singleton
#### Eric Wieser (Nov 26 2020 at 09:18):
lemma setext {α : Type*} (s : set α) : s = ⋃ (i ∈ s), {i} :=
(set.bUnion_of_singleton s).symm
#### Eric Wieser (Nov 26 2020 at 09:28):
docs#set.Union_of_singleton is also close, but it's missing the coercions
#### Kenny Lau (Nov 26 2020 at 09:47):
@Eric Wieser bad link (set.set.)
#### Eric Wieser (Nov 26 2020 at 09:47):
Zulip needs the same linters as mathlib
#### Damiano Testa (Nov 26 2020 at 10:44):
I do not necessarily insist on this lemma appearing in mathlib. I was simply surprised that it was not. However, it is a lemma that I am using and, so far, I have not been able to make the other suggestions work (except for Kenny's).
#### Eric Wieser (Nov 26 2020 at 10:56):
Perhaps this is the more useful lemma?
lemma set.Union_subtype {α : Type*} (s : set α) (f : α → set α) :
(⋃ (i : s), f i) = ⋃ (i ∈ s), f i :=
set.subset.antisymm
(λ x hx, by {
obtain ⟨i, hi⟩ := set.mem_Union.mp hx,
exact set.mem_Union.mpr ⟨i, set.mem_Union.mpr ⟨i.prop, hi⟩⟩, })
(λ x hx, by {
obtain ⟨i, hi⟩ := set.mem_Union.mp hx,
obtain ⟨hi, hhi⟩ := set.mem_Union.mp hi,
exact set.mem_Union.mpr ⟨⟨i, hi⟩, hhi⟩, })
#### Damiano Testa (Nov 26 2020 at 11:23):
For my actual example, I have not been able to use set.Union_subtype, though this is likely due to the fact that I am not understanding what is going on. I tried applying it with f given by λ i, singleton i and it did not work...
#### Eric Wieser (Nov 26 2020 at 11:55):
lemma setext {α : Type*} (s : set α) : s = ⋃ i : s , {i} :=
by rw [set.Union_subtype, set.bUnion_of_singleton],
#### Damiano Testa (Nov 26 2020 at 14:03):
@Eric Wieser this now works, thank you!
As I only use it once, I do not really have a strong opinion on which form of these lemmas to prefer, nor whether they "should be" in mathlib!
In any case, thank you all for your help!
Last updated: May 07 2021 at 22:14 UTC
|
{}
|
# PQ is parallel to RS and PR is parallel to QS, If ∠LPR = 35°
In the figure given below, PQ is parallel to RS and PR is parallel to QS, If ∠LPR = 35° and ∠UST = 70°, then what is ∠MPQ equal to?
1. 55°
2. 70°
3. 75°
4. 80°
∠LPR = 35°
Since ∠UST = 70°, ∠QSR = 70° as well.
Because PQ is parallel to RS, co-interior angles give ∠SQP = 180° - 70° = 110°.
Because PR is parallel to QS, co-interior angles give ∠QPR = 180° - 110° = 70°.
∠MPQ = 180° - ∠LPR - ∠QPR (angles on the straight line at P)
∠MPQ = 180° - 35° - 70° = 75°
The correct option is C.
|
{}
|
#### Volume 16, issue 1 (2016)
ISSN (electronic): 1472-2739
ISSN (print): 1472-2747
Homotopy theory of $G$–diagrams and equivariant excision
### Emanuele Dotto and Kristian Moi
Algebraic & Geometric Topology 16 (2016) 325–395
##### Abstract
Let $G$ be a finite group. We define a suitable model-categorical framework for $G\phantom{\rule{0.3em}{0ex}}$–equivariant homotopy theory, which we call $G\phantom{\rule{0.3em}{0ex}}$–model categories. We show that the diagrams in a $G\phantom{\rule{0.3em}{0ex}}$–model category which are equipped with a certain equivariant structure admit a model structure. This model category of equivariant diagrams supports a well-behaved theory of equivariant homotopy limits and colimits. We then apply this theory to study equivariant excision of homotopy functors.
##### Keywords
equivariant homotopy, excision
##### Mathematical Subject Classification 2010
Primary: 55N91, 55P91
Secondary: 55P65, 55P42
|
{}
|
### Home > CC2MN > Chapter 5 > Lesson 5.2.5 > Problem 5-76
5-76.
Multiple Choice: Which of the following expressions could be used to find the average (mean) of the numbers $k$, $m$, and $n$?
The average of a set of numbers is the sum of the values divided by the number of values in the set.
Which of the choices best represents the average? Do any choices include a summation of the numbers $k$, $m$, and $n$ and also divide by the number of values?
1. $k+m+n$
1. $3(k+m+n)$
1. $\frac { k + m + n } { 3 }$
$\frac{k+m+n}{3}$
1. $3k+m+n$
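The selected expression $\frac{k+m+n}{3}$ can be sanity-checked with a tiny Python example (hypothetical sample values):

```python
# Mean of three hypothetical sample values via the selected expression
k, m, n = 4, 7, 10
mean = (k + m + n) / 3
print(mean)  # 7.0
```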
|
{}
|
# Check “five in a row” for a 15x15 chess board
Problem:
For a given board, judge who had complete a continuously line (could be horizontal, vertical or diagonal) of 5 pieces. Input is a 15×15 chess board, could be white (1), black (2) or empty (0). Return if white win, if black win, if both win or if none of them win.
My specific questions are:
1. I enumerate each direction one by one, and the code looks very long. Are there any elegant ways to handle directions (I mean direction of horizontal, vertical or diagonal)?
2. I incrementally count # of black, # of white, to reduce redundant counting. I'm wondering if there are any better ideas to reduce counting.
import random
def check_five(matrix):
white_win = False
black_win = False
for i in range(0, len(matrix)):
num_white_so_far = 0
num_black_so_far = 0
for j in range(0, len(matrix[0])):
if matrix[i][j] == 1: # 1 for white
num_white_so_far += 1
num_black_so_far = 0
if num_white_so_far == 5:
white_win = True
elif matrix[i][j] == 2: # 2 for black
num_black_so_far += 1
num_white_so_far = 0
if num_black_so_far == 5:
black_win = True
else:
num_black_so_far = 0
num_white_so_far = 0
for j in range(0, len(matrix[0])):
num_white_so_far = 0
num_black_so_far = 0
for i in range(0, len(matrix)):
if matrix[i][j] == 1: # 1 for white
num_white_so_far += 1
num_black_so_far = 0
if num_white_so_far == 5:
white_win = True
elif matrix[i][j] == 2: # 2 for black
num_black_so_far += 1
num_white_so_far = 0
if num_black_so_far == 5:
black_win = True
else:
num_black_so_far = 0
num_white_so_far = 0
for i in range(0, len(matrix)):
num_white_so_far = 0 # direction of \, bottom part
num_black_so_far = 0
j = 0
while i < len(matrix) and j < len(matrix[0]):
if matrix[i][j] == 1: # 1 for white
num_white_so_far += 1
num_black_so_far = 0
if num_white_so_far == 5:
white_win = True
elif matrix[i][j] == 2: # 2 for black
num_black_so_far += 1
num_white_so_far = 0
if num_black_so_far == 5:
black_win = True
else:
num_black_so_far = 0
num_white_so_far = 0
i += 1
j += 1
for j in range(1, len(matrix[0])): # direction of \, upper part
num_white_so_far = 0
num_black_so_far = 0
i = 0
while i < len(matrix) and j < len(matrix[0]):
if matrix[i][j] == 1: # 1 for white
num_white_so_far += 1
num_black_so_far = 0
if num_white_so_far == 5:
white_win = True
elif matrix[i][j] == 2: # 2 for black
num_black_so_far += 1
num_white_so_far = 0
if num_black_so_far == 5:
black_win = True
else:
num_black_so_far = 0
num_white_so_far = 0
i += 1
j += 1
for j in range(1, len(matrix[0])): # direction of /, upper part
num_white_so_far = 0
num_black_so_far = 0
i = 0
while i < len(matrix) and j >=0:
if matrix[i][j] == 1: # 1 for white
num_white_so_far += 1
num_black_so_far = 0
if num_white_so_far == 5:
white_win = True
elif matrix[i][j] == 2: # 2 for black
num_black_so_far += 1
num_white_so_far = 0
if num_black_so_far == 5:
black_win = True
else:
num_black_so_far = 0
num_white_so_far = 0
i += 1
j -= 1
for i in range(1, len(matrix)): # direction of /, bottom part
num_white_so_far = 0
num_black_so_far = 0
j = len(matrix[0]) - 1
while i < len(matrix) and j < len(matrix[0]):
if matrix[i][j] == 1: # 1 for white
num_white_so_far += 1
num_black_so_far = 0
if num_white_so_far == 5:
white_win = True
elif matrix[i][j] == 2: # 2 for black
num_black_so_far += 1
num_white_so_far = 0
if num_black_so_far == 5:
black_win = True
else:
num_black_so_far = 0
num_white_so_far = 0
i += 1
j += 1
return (white_win, black_win)
if __name__ == "__main__":
matrix = [[0] * 15 for _ in range(15)]
for i in range(len(matrix)):
for j in range(len(matrix[0])):
matrix[i][j] = random.randint(0,2)
for r in matrix:
print r
print check_five(matrix)
Instead of checking every direction individually you could represent the directions as a list of tuples (y-delta, x-delta) pointing to the previous cell in each direction. Doing this allows you to loop over the directions instead of processing them in separate branches.
DIRECTIONS = [
(-1, -1), # Top left to bottom right
(-1, 0), # Top to bottom
(-1, 1), # Top right to bottom left
(0, -1) # Left to right
]
Since there are four different directions you could either loop over the matrix four times or alternatively calculate the distances for all directions in parallel. The second approach requires you to store all four direction counters for each cell in the matrix.
Let's take a smaller matrix and see how the second approach would work in practice:
matrix = [
[1, 0, 0],
[1, 2, 2],
[1, 2, 0]
]
With the above matrix the first cell at (0, 0) would store [1, 1, 1, 1], since it is the first piece in line for each of the four directions. Since the second cell doesn't hold a piece it can't belong to a line of same-colored pieces, so we store 0 as the distance for each direction: [0, 0, 0, 0]. The third cell on the first row is treated the same as the second one since it is empty: [0, 0, 0, 0].
After we have processed the first row our current state looks like this:
state = [
[[1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
]
On the second row things get a bit more interesting, since we have to take into account the row we processed in the previous step. At (1, 0) we check, for each direction, whether the color of the previous cell matches. If it does we take the previously stored value and increment it; otherwise this is the first piece in line.
For the first direction (-1, -1) the previous cell is (0, -1), which is out of bounds, so this is the first piece in line for that direction and we store 1. When checking the second direction (-1, 0) we see that there is a piece of the same color at (0, 0), so we take the previously stored value from state[0][0][1] and increment it by one, resulting in 2. Since there is no piece in the previous cell for the third direction and the fourth direction is out of bounds, they both result in 1.
Now our state looks like this:
state = [
[[1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]],
[[1, 2, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
]
Cell (1, 1) is processed in the same manner as the earlier ones. The only difference from before is that the fourth direction (0, -1) points to a piece of a different color. We treat this exactly as if the previous location were empty, so we store 1 to state[1][1][3].
Once we have processed the whole matrix the end result looks like this:
state = [
[[1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]],
[[1, 2, 1, 1], [1, 1, 1, 1], [1, 1, 1, 2]],
[[1, 3, 1, 1], [1, 2, 2, 1], [0, 0, 0, 0]]
]
The above takes roughly four times the memory of the original matrix. That is quite wasteful, since we only need the state for the current and previous rows during processing. Thus we can store just those two rows and swap them after each row of the matrix has been processed.
There are a couple of smaller enhancements to the original code as well:
• Name the constants: EMPTY, WHITE & BLACK are much more descriptive than 0, 1 & 2
• random.choice could be used to select the piece for each cell
• There's no need to populate the matrix with 0 before randomizing the cells
• Instead of looping you could use pprint to print the matrix
• Terminate early if both players win
• A bug where a diagonal line starting on the right side of the matrix was not detected has been fixed. You can reproduce the issue with a matrix containing a line between (1, 14) and (5, 10)
Here's a version that implements the above algorithm:
import random
import pprint

EMPTY = 0
WHITE = 1
BLACK = 2
CELL_OPTIONS = [EMPTY, WHITE, BLACK]
WIN = 5
DIRECTIONS = [
    (-1, -1),  # Top left to bottom right
    (-1, 0),   # Top to bottom
    (-1, 1),   # Top right to bottom left
    (0, -1)    # Left to right
]
CURRENT = 1  # Index of current row in state


def check_five(matrix):
    # Each cell in the state has a counter for each direction;
    # only two rows are required since we can process the matrix
    # row by row as long as we know the state for the previous row
    state = [[[0] * len(DIRECTIONS) for _ in matrix[0]] for _ in range(2)]
    white_win = False
    black_win = False
    for y, row in enumerate(matrix):
        # Swap rows in state so that current (1) becomes previous (0)
        state = state[::-1]
        for x, color in enumerate(row):
            cell = state[CURRENT][x]
            for dir_index, (y_diff, x_diff) in enumerate(DIRECTIONS):
                prev_x = x + x_diff
                if color == EMPTY:
                    cell[dir_index] = 0
                elif 0 <= prev_x < len(row) and color == matrix[y + y_diff][prev_x]:
                    # The previous cell in this direction holds a piece
                    # of the same color, so extend that line by one
                    cell[dir_index] = state[CURRENT + y_diff][prev_x][dir_index] + 1
                else:
                    # The previous cell is out of bounds, empty, or holds
                    # a piece of a different color: a new line starts here
                    cell[dir_index] = 1
                if cell[dir_index] == WIN:
                    if color == WHITE:
                        white_win = True
                    else:
                        black_win = True
                    # Early termination if both win conditions are met
                    if white_win and black_win:
                        return white_win, black_win
    return white_win, black_win


if __name__ == "__main__":
    matrix = [[random.choice(CELL_OPTIONS) for _ in range(15)] for _ in range(15)]
    pprint.pprint(matrix, indent=4)
    print(check_five(matrix))
• Hi niemmi, love your code and mark your reply as answer. I have a further idea to improve to store only one row data based on your idea, I post here => codereview.stackexchange.com/questions/154547/…, any advice is highly appreciated. – Lin Ma Feb 6 '17 at 4:41
A few suggestions:
• Can both players win? If not, how do you decide which one to check first? If they can, and both of them have already won after the first check, there's no need to go on: just return True, True.
• You are using a lot of hard-coded values (1, 2, 5) that are repeated through the code. I think it would be better to call them something like white_cell, black_cell, min_winning_cells.
• The values can be only 0, 1 and 2, so random.choice could be better, also because...
• ... You could use random.choice(['0', '1', '2']). Notice that these are strings, so it's sort of cheating maybe.
This way you could do something like:
white_winning_cells = '1' * 5
black_winning_cells = '2' * 5

def check_rows(rows):
    white_win = False
    black_win = False
    for row in rows:
        # Use |= to avoid overwriting a win found in an earlier row
        white_win |= white_winning_cells in ''.join(row)
        black_win |= black_winning_cells in ''.join(row)
        if white_win and black_win:
            return (white_win, black_win)
    return (white_win, black_win)

def check_five(matrix):
    (white_win, black_win) = check_rows(matrix)
    if white_win and black_win:
        return (white_win, black_win)
    # We need to avoid overwriting here as well
    (white_win_temp, black_win_temp) = check_rows(zip(*matrix))
    white_win |= white_win_temp
    black_win |= black_win_temp
    return (white_win, black_win)
Of course you still need to add the code to put the diagonals in an array to check, but this is the general idea. Keep in mind that here I'm assuming a variable size matrix, but still square.
EDIT: As @Graipher noted, you need to use |= so you don't overwrite values. This has to be done in both the function and the caller. Also, for diagonals you can use the same code you have now to generate the arrays and pass them to the check_rows. You can also consider generating everything at once (rows, columns, diagonals), put them all in the same variable and call check_rows only once. But as I said, this is just a general idea.
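The answer leaves collecting the diagonals as an exercise. One possible sketch (the `diagonals` helper is an illustration of ours, not part of the answer) produces each diagonal as a list of cells that can be fed straight into check_rows:

```python
def diagonals(matrix):
    """Yield all down-right and down-left diagonals of a square matrix
    as lists of cells (a hypothetical helper, not from the answer)."""
    n = len(matrix)
    for offset in range(-(n - 1), n):
        # Down-right diagonals: cells where x - y == offset
        yield [matrix[y][y + offset] for y in range(n) if 0 <= y + offset < n]
        # Down-left diagonals: cells where x + y == n - 1 + offset
        yield [matrix[y][n - 1 + offset - y] for y in range(n)
               if 0 <= n - 1 + offset - y < n]

# Example: the main down-right diagonal of a 3x3 matrix of strings
m = [['1', '0', '2'],
     ['0', '1', '2'],
     ['2', '0', '1']]
assert ['1', '1', '1'] in list(diagonals(m))
```

With something like this, check_five could additionally call check_rows(diagonals(matrix)) to cover both diagonal directions.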
• Thanks ChatterOne, vote up for your reply and I accept your advice about the bullet points. But I doubt your method of substring matching like white_winning_cells in ''.join(row) works more efficiently than mine, since in my code I use dp to remember counts, reducing re-counting. – Lin Ma Jan 25 '17 at 6:21
• Also, how do you check diagonal? Did not find related code in your post. – Lin Ma Jan 25 '17 at 6:22
• For your question, in my case, player can win at the same time. So there is a possible return of both sides win. – Lin Ma Jan 25 '17 at 6:48
• @LinMa I edited my answer to include what Graipher said. – ChatterOne Jan 25 '17 at 13:37
• Looks better. But you overindented the complete second function, making it an inner function, which is not what you want. – Graipher Feb 6 '17 at 8:51
# CERN Accelerating science
The Articles collection aims to cover, as far as possible, the published literature in particle physics and its related technologies. The collection starts in the mid-19th century, comprising only the most important documents in the first decades; full coverage starts from 1980 onwards. CERN publications, though, are covered 100% since the foundation of the organisation in 1954. The CERN Annual Report, vol. 3 - List of CERN Publications, is extracted from this dataset.
# Published Articles
2016-09-30
07:41
Energy conversion by surface-tension driven charge separation / Pini, Cesare ; Baier, Tobias ; Dietzel, Mathias In this work, the shear-induced electrokinetic streaming potential present in free-surface electrolytic flows subjected to a gradient in surface tension is assessed. Firstly, for a Couette flow with fully resolved electric double layer (EDL), the streaming potential per surface stress as a function of the Debye parameter and surface potential is analyzed. [...] arXiv:1609.09404.- 2016 - 16 p. - Published in : Microfluid. Nanofluid. 19 (2015) 721-735 External link: Preprint
2016-09-30
07:41
Dynamics of fragmentation and multiple vacancy generation in irradiated single-walled carbon nanotubes / Javeed, Sumera ; Zeeshan, Sumaira ; Ahmad, Shoaib The results from mass spectrometry of clusters sputtered from Cs+ irradiated single-walled carbon nano-tubes (SWCNTs) as a function of energy and dose identify the nature of the resulting damage in the form of multiple vacancy generation. For pristine SWCNTs at all Cs+ energies, C2 is the most dominant species, followed by C3, C4 and C1. [...] arXiv:1609.09347.- 2016 - 17 p. - Published in : Nuclear Instruments and Methods in Physics Research B: 295 (2013) , pp. 22 External link: Preprint
2016-09-30
07:41
Qualitative comparisons of experimental results on deterministic freak wave generation based on modulational instability / Karjanto, N ; van Groesen, E A number of qualitative comparisons of experimental results on unidirectional freak wave generation in a hydrodynamic laboratory are presented in this paper. A nonlinear dispersive type of wave equation, the nonlinear Schr\"{o}dinger equation, is chosen as the theoretical model. [...] arXiv:1609.09266.- 2016 - 10 p. - Published in : Journal of Hydro-environment Research 3 (2010) 186-192 External link: Preprint
2016-09-30
07:41
Effect of surface roughness on the Goos-H\"anchen shift / Tahmasebi, Z ; Amiri, M By considering an optically denser medium with a flat surface, but with natural roughness instead of abstract geometrical boundary which leads to mathematical discontinuity on the boundary of two adjacent stratified media, we have thus established the importance of considering physical surfaces; and thus we studied the Goos-H\"{a}nchen (GH) effect by ray-optics description to shed light on parts of this effect which have remained ambiguous. We replaced the very thin region of surface roughness by a continuous inhomogeneous intermediary medium. [...] arXiv:1609.09213.- 2016 - 6 p. - Published in : Phys. Rev. A 93 (2016) 043824 External link: Preprint
2016-09-30
07:41
Excitation functions for (d,x) reactions on $^{133}$Cs up to $E_d = 40$ MeV / Tárkányi, F ; Ditrói, F ; Takács, S ; Hermanne, A ; Baba, M ; Ignatyuk, A V In the frame of a systematic study of excitation functions of deuteron induced reactions, the excitation functions of the $^{133}$Cs(d,x)$^{133m,133mg,131mg}$Ba, $^{134,132}$Cs and $^{129m}$Xe nuclear reactions were measured up to 40 MeV deuteron energies by using the stacked foil irradiation technique and $\gamma$-ray spectroscopy of activated samples. The results were compared with calculations performed with the theoretical nuclear reaction codes ALICE-IPPE-D, EMPIRE II-D and TALYS, as listed in the TENDL-2014 library. [...] arXiv:1609.09342.- 2016 - journal p. - Published in : Appl. Radiat. Isot. 110 (2016) 109 External link: Preprint
2016-09-30
07:41
Supercritical instability of Dirac electrons in the field of two oppositely charged nuclei / Sobol, O O The Dirac equation for an electron in a finite dipole potential has been studied within the method of linear combination of atomic orbitals (LCAO). The Coulomb potential of the nuclei that compose a dipole is regularized, by considering the finite nuclear size. [...] arXiv:1609.09417.- 2016 - 15 p. External link: Preprint
2016-09-30
07:38
A Brief History of Gravitational Waves / Cervantes-Cota, Jorge L ; Galindo-Uribarri, Salvador ; Smoot, George F This review describes the discovery of gravitational waves. We recount the journey of predicting and finding those waves, since its beginning in the early twentieth century, their prediction by Einstein in 1916, theoretical and experimental blunders, efforts towards their detection, and finally the subsequent successful discovery.. arXiv:1609.09400.- 2016 - 30 p. - Published in : Universe 2 (2016) 22 External link: Preprint
2016-09-30
07:38
Microscopic Processes in Global Relativistic Jets Containing Helical Magnetic Fields / Nishikawa, Ken-Ichi ; Mizuno, Yosuke ; Niemiec, Jacek ; Kobzar, Oleh ; Pohl, Martin ; Gomez, Jose L ; Dutan, Ioana ; Pe'er, Asaf ; Frederiksen, Jacob Trier ; Nordlund, AAke et al. In the study of relativistic jets one of the key open questions is their interaction with the environment on the microscopic level. Here, we study the initial evolution of both electron$-$proton ($e^{-}-p^{+}$) and electron$-$positron ($e^{\pm}$) relativistic jets containing helical magnetic fields, focusing on their interaction with an ambient plasma. [...] arXiv:1609.09363.- 2016 - 9 p. - Published in : Galaxies 4 (2016) 8 External link: Preprint
2016-09-30
07:38
Hydrogen in hot subdwarfs formed by double helium white dwarf mergers / Hall, Philip D ; Jeffery, C Simon Isolated hot subdwarfs might be formed by the merging of two helium-core white dwarfs. Before merging, helium-core white dwarfs have hydrogen-rich envelopes and some of this hydrogen may survive the merger. [...] arXiv:1609.09268.- 2016 - 13 p. - Published in : Mon. Not. R. Astron. Soc. 463 (2016) 2756-2767 External link: Preprint
2016-09-30
07:38
Second ROSAT all-sky survey (2RXS) source catalogue / Boller, Th ; Freyberg, M J ; Truemper, J ; Haberl, F ; Voges, W ; Nandra, K We present the second ROSAT all-sky survey source catalogue, hereafter referred to as the 2RXS catalogue. This is the second publicly released ROSAT catalogue of point-like sources obtained from the ROSAT all-sky survey (RASS) observations performed with the PSPC between June 1990 and August 1991, and is an extended and revised version of the bright and faint source catalogues. [...] arXiv:1609.09244.- 2016 - 27 p. - Published in : Astron. Astrophys. 588 (2016) 103 External link: Preprint
# Handling Microsoft Excel file format
TODO: If you know more about the container format, and whether it really needs a specialized library for processing, please expand this section.
TODO: Tips for displaying Excel documents which were manually parsed using one of the methods described.
TODO: If you know whether Excel provides a "viewer" ActiveX control that can be embedded in a Qt application through ActiveQt, please fill out this section (including links to relevant resources).
This page discusses various available options for working with Microsoft Excel documents in your Qt application. Please also read the general considerations outlined on the Handling Document Formats page.
One needs to distinguish between two different formats (this page deals with both of them):
| | Legacy "Excel Spreadsheet" format | "Office Open XML Workbook" format |
| --- | --- | --- |
| classification: | binary BIFF-based | XML-based |
| main filename extension: | .xls | .xlsx |
| main internet media type: | application/vnd.ms-excel | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet |
| default format of Excel: | until Excel 2003 | since Excel 2007 |
## Reading / Writing
### Using Excel itself
If you are exclusively targeting the Windows platform and Microsoft Excel will be installed on all target machines, then you can use Qt's ActiveX framework to access Excel's spreadsheet processing functionality through OLE automation. For an introductory code example (and a way to list the API provided by the Excel COM object), consult this how-to.
| | DLL file name | COM object name | platforms | license |
| --- | --- | --- | --- | --- |
| Microsoft Excel | ? | Excel.Application | Windows | commercial |
### Using ODBC
To read an Excel file with ODBC (tested on Windows 7 with Qt 4.7.1 and Windows 10 with Qt 5.7):
QSqlDatabase db = QSqlDatabase::addDatabase("QODBC", "xlsx_connection");
db.setDatabaseName("DRIVER={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=" + QString("c:\\path\\to\\your\\file\\file.xlsx"));
if (db.open()) {
    QSqlQuery query("select * from [" + QString("Sheet1") + "$]", db); // To select a range, place e.g. A1:B5 after the $
    while (query.next()) {
        QString column1 = query.value(0).toString();
        qDebug() << column1;
    }
    db.close();
    QSqlDatabase::removeDatabase("xlsx_connection");
}
The above code prints all of column1's values to the debug output. It works for *.xls, *.xlsx and the other Excel file formats.
By default ODBC uses the first row as names for the columns. You are supposed to be able to change this with the 'FirstRowHasNames' option in the connection settings; however, there is a bug (see KB288343). Keep in mind that you are using a database and that each column has its own datatype, so if your second row contains text and your third row contains numbers, SQL will pick one of these datatypes. If a few rows contain text and the rest contain floating-point numbers, the text values will be returned and the numbers will come back empty.
NOTE: To use ODBC on Windows, the MS Access Database Engine has to be installed. You can find it here: Microsoft Access Database Engine 2010. The Engine may be distributed with an MS Office Access installation, but this should not be relied on. In addition, note that a 64-bit application can only use the 64-bit Engine, and likewise for 32-bit; you may therefore want to install both versions to avoid problems. Furthermore, the Engine should not be confused with the MS Access Runtime, which contains the Engine.
### Using independent parser/writer libraries
For a more portable solution, you could take a look at some of the available third-party C/C++ libraries for parsing/writing Excel files:
| | API | .xls | .xlsx | reading | writing | platforms | license |
| --- | --- | --- | --- | --- | --- | --- | --- |
| QXlsx | C++ Qt | no | yes | yes | yes | Win, Mac, Linux, … | MIT [weak copyleft] |
| xlnt | C++ | no | yes | yes | yes | Win, Mac, Linux, … | MIT [weak copyleft] |
| xlsLib | C++ | yes | no | no | yes | Win, Mac, Linux, … | LGPL v3 [weak copyleft] |
| libxls | C | yes | no | yes | no | Win, Mac, Linux, … | LGPL [weak copyleft] |
| LibXL | C++ | yes | yes | yes | yes | Win, Mac, Linux, … | commercial |
| qtXLS | C | yes | no | yes | yes | Win, ? | commercial |
| FreeXL | C | yes | no | yes | no | Linux, ? | LGPL / MPL [weak copyleft] |
| BasicExcel | C++ | yes | no | yes | yes | ? | ? |
| Number Duck | C++ | yes | no | yes | yes | Win, Linux | commercial |
| Qt Xlsx (Unmaintained) | C++ Qt | no | yes | yes | yes | Win, Mac, Linux, … | MIT [weak copyleft] |
Note that these libraries differ in their scope and general approach to the problem.
### Using manual XML processing
Files using the XML-based (.xlsx) format could be processed using Qt's XML handling classes (see Handling Document Formats). Third-party libraries can help you in dealing with the container format that wraps the actual XML files:
| | API | supported platforms | license |
| --- | --- | --- | --- |
| libopc | C | Win, Mac, Linux, … | permissive |
### Using batch conversion tools
If all else fails, there is always the option of using an existing tool to automatically convert between Excel files and a more manageable format, and letting your Qt application deal with that format instead. The conversion tool could be bundled with your application or specified as a prerequisite, and controlled via QProcess. Some possibilities are:
| | .xls to * | .xlsx to * | * to .xls | * to .xlsx | platforms | license |
| --- | --- | --- | --- | --- | --- | --- |
| LibreOffice | .ods .csv … | .ods .csv … | .ods .csv … | .ods .csv … | Win, Mac, Linux, … | GPL v3 [strong copyleft] |
| … | … | … | … | … | … | … |
Notes: LibreOffice can be used like this for batch conversion (it's slow, though): soffice --invisible --convert-to xls test.ods
# Ellipse Calculator
## How To Use Ellipse Calculator
Let's first understand what an ellipse is and how we can find its different properties.
Ellipse
An ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. These fixed points are called the foci of the ellipse.
A circle is a special type of ellipse in which the two focal points coincide.
The elongation of an ellipse is measured by its eccentricity e, a number ranging over [0, 1).
An angled cross section of a cylinder is also an ellipse.
Equation of an ellipse
The equation of a standard ellipse centered at the origin (0,0) with major-axis '2a' and minor-axis '2b' is
$$\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$$
If a > b, then the foci of the ellipse will be at $$(\pm c,0)$$ for $$c=\sqrt{a^2-b^2}$$
The standard parametric equation of the ellipse is
$$(x,y)=(a\hspace{0.1cm}cos\theta ,b\hspace{0.1cm}sin\theta)$$
where, $$0\leq \theta \leq 2 \pi$$.
An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the eccentricity of the ellipse, given by
$$e=\dfrac{c}{a}=\sqrt{1-\dfrac{b^2}{a^2}}$$
Parameters of an ellipse
Principal axes of an ellipse
Generally, the semi-major and semi-minor axes of the ellipse are denoted a and b respectively, where $$a\geq b>0$$; the equation of the ellipse is then
$$\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$$
Linear eccentricity of an ellipse
This is the distance from the center of the ellipse to its focus, $$c=\sqrt{a^2-b^2}$$.
Eccentricity of an ellipse
The eccentricity of an ellipse can be expressed as
$$e=\dfrac{c}{a}=\sqrt{1-\dfrac{b^2}{a^2}}$$
Latus rectum of an ellipse
The length of the chord through one focus, perpendicular to the major axis of the ellipse, is called the latus rectum of the ellipse, given by
$$l=\dfrac{2b^2}{a}=2a(1-e^2)$$
Area of an ellipse
Area enclosed by the ellipse is given by,
$$A=\pi ab$$
Circumference of an ellipse
Circumference of an ellipse is given by
$$C\approx \pi \left[3(a+b)-\sqrt{(3a+b)(a+3b)}\right]$$
To calculate the different properties of the ellipse using the above ellipse calculator, just enter the values in the given input boxes and press the calculate button to get the result.
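The formulas above can be collected into a short sketch (the function and variable names are ours, not part of the calculator; the circumference uses Ramanujan's approximation as given above):

```python
import math

def ellipse_properties(a, b):
    """Compute basic properties of an ellipse with semi-axes a >= b > 0."""
    c = math.sqrt(a**2 - b**2)           # linear eccentricity
    e = c / a                            # eccentricity
    latus_rectum = 2 * b**2 / a          # length of the latus rectum
    area = math.pi * a * b
    # Ramanujan's approximation for the circumference
    circumference = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
    return {"c": c, "e": e, "latus_rectum": latus_rectum,
            "area": area, "circumference": circumference}

props = ellipse_properties(5, 3)
# For a = 5, b = 3: c = 4, e = 0.8, latus rectum = 3.6
```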
# Circular Error Probable 90%
POA = point of aim; POI = mean point of impact. Rayleigh case: when the true center of the coordinates and the POA coincide, the radial error around the POA in a bivariate uncorrelated normal random variable with equal variances follows a Rayleigh distribution.
CEP (circular error probable) is defined as the radius of the circle in which 50% of the fired projectiles land. It is an index of precision of an artillery piece: the mean distance at which a projectile will be offset from its aim point.
## Circular Error Probable Formula
Precision-guided munitions generally have more "close misses" and so are not normally distributed. Once the data is in the program, choose Edit => Calculate CEP from the menu: you have the option of either using the average position of all the points, or entering a reference position. If the given benchmark position was off by a bit, and actually closer to the Holux's average position, that might explain these results, but that's just speculation.
The offsets break down roughly as follows: 50% of all projectiles will impact within the radius of the CEP, and 90% of all projectiles will impact within 2.5 times the radius of the CEP. The Garmin was meant to be used pointed upward for best acquisition (quadrifilar helix antenna) versus the Holux's glorified patch antenna (like the Trimbles'), with a face-up orientation for best acquisition. In the special case where we assume uncorrelated bivariate normal data with equal variances, the Rayleigh estimator does provide true confidence intervals, and it is easy to calculate using spreadsheets.
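For the uncorrelated, equal-variance (Rayleigh) case the arithmetic is simple enough to sketch directly. This is a generic illustration of ours (the names and the σ value are made up, and it is not tied to any tool discussed here): CEP = σ·√(2 ln 2) ≈ 1.1774 σ, and the fraction of impacts within radius r is 1 − exp(−r²/(2σ²)).

```python
import math

def cep_rayleigh(sigma):
    """CEP for a circular bivariate normal with per-axis std dev sigma."""
    return sigma * math.sqrt(2 * math.log(2))   # ~1.1774 * sigma

def fraction_within(radius, sigma):
    """Fraction of impacts within `radius` (Rayleigh CDF)."""
    return 1 - math.exp(-radius**2 / (2 * sigma**2))

sigma = 10.0               # meters per axis, an arbitrary example value
cep = cep_rayleigh(sigma)  # ~11.77 m
```

Evaluating fraction_within at 1, 2 and 3 CEP radii gives 50%, ~94% and ~99.8%, which is the arithmetic behind the 50% / 43% / 7% breakdown quoted elsewhere on this page.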
So the CEP for 50% would be the distance within which half the points lie closer to a specific location. It is only available for $$p = 0.5$$. The MSE will be the sum of the variance of the range error, plus the variance of the azimuth error, plus the covariance of the range error with the azimuth error.
Comparing CEP estimators: if the true variances of the x- and y-coordinates as well as their covariance are known, then the closed-form general correlated normal estimator is ideal. Targeting smaller cities or even complexes was next to impossible with this accuracy; one could only aim for a general area in which the warhead would land rather randomly. CEP is not a good measure of accuracy when this distribution behavior is not met.
## Circular Error Probable Excel
It allows the x- and y-coordinates to be correlated and have different variances. Hoyt case: when the true center of the coordinates and the POA coincide, the radius around the POA in a bivariate correlated normal random variable with unequal variances follows a Hoyt distribution. If systematic accuracy bias is taken into account, numerical integration of the multivariate normal distribution around an offset circle is required for an exact solution.
It is defined as the radius of a circle, centered on the mean, whose boundary is expected to include the landing points of 50% of the rounds.[2][3] In three dimensions (spherical error probable, SEP), the radial error follows a Maxwell-Boltzmann distribution. Export the data to a GPX file, and DNRGarmin can read it in. DNRGarmin can also read in point data in shapefile, DBF and CSV formats. DNRGarmin uses Proj4 to do the projecting, and I don't know how accurate this is likely to be in your part of the world.
Percentiles can be determined by recognizing that the squared distance defined by two uncorrelated orthogonal Gaussian random variables (one for each axis) is chi-square distributed.[4] Approximate formulae are available to convert the distributions along the two axes into an equivalent circle radius. Munitions with this distribution behavior tend to cluster around the aim point, with most reasonably close, progressively fewer and fewer further away, and very few at long distance. For rifled artillery the CEP is much smaller than the PER.
The Grubbs-Patnaik estimator (Grubbs, 1964) differs from the Grubbs-Pearson estimator insofar as it is based on the Patnaik two-moment central $$\chi^{2}$$-approximation (Patnaik, 1949) of the true cumulative distribution function of radial error.
Munition samples may not be exactly on target; that is, the mean vector will not be (0, 0).
I also know that the benchmark in question was physically moved when the pier was refurbished, and I cannot get any word from NOAA that they resurveyed it.
Included in these methods are the plug-in approach of Blischke and Halpin (1966), the Bayesian approach of Spall and Maryak (1992), and the maximum likelihood approach of Winkler and Bickert (2012). The Krempasky (2003) estimate is based on a nearly correct closed-form solution for the 50% quantile of the Hoyt distribution.
One shortcoming of the Grubbs estimators is that it is not possible to incorporate the confidence intervals of the variance estimates into the CEP estimate. The probability density function and the cumulative distribution function are defined in closed form, whereas numerical methods are required to find the quantile function.
PER (probable error range): for tanks, this is the maximum range at which a trained crew under "quasi-combat" conditions can achieve a 50% first-round hit probability against a stationary 2.5 m² target. That is, if the CEP is n meters, 50% of rounds land within n meters of the target, 43% between n and 2n, and 7% between 2n and 3n meters, and the proportion of rounds that land farther than three times the CEP from the target is only about 0.2%.
Outlook: TRIPLE POINT SOCIAL HOUSING REIT PLC is assigned short-term Ba1 & long-term Ba1 estimated rating.
Time series to forecast n: 25 Jan 2023 for (n+6 month)
Methodology : Modular Neural Network (Social Media Sentiment Analysis)
## Abstract
TRIPLE POINT SOCIAL HOUSING REIT PLC prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and Pearson Correlation1,2,3,4 and it is concluded that the LON:SOHO stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Buy
## Key Points
1. What is prediction in deep learning?
2. Can neural networks predict stock market?
3. Nash Equilibria
## LON:SOHO Target Price Prediction Modeling Methodology
We consider TRIPLE POINT SOCIAL HOUSING REIT PLC Decision Process with Modular Neural Network (Social Media Sentiment Analysis) where A is the set of discrete actions of LON:SOHO stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Pearson Correlation)5,6,7 = $$\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{an} \\ \vdots & & & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & & & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & & & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}$$ × R(Modular Neural Network (Social Media Sentiment Analysis)) × S(n): → (n+6 month) $$\sum_{i=1}^{n} r_i$$
n:Time series to forecast
p:Price signals of LON:SOHO stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
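The methodology names Pearson correlation without publishing an implementation. As a generic, illustrative sketch only (not the site's actual model), the coefficient can be computed as:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related series correlate at 1.0
assert abs(pearson([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-12
```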
## LON:SOHO Stock Forecast (Buy or Sell) for (n+6 month)
Sample Set: Neural Network
Stock/Index: LON:SOHO TRIPLE POINT SOCIAL HOUSING REIT PLC
Time series to forecast n: 25 Jan 2023 for (n+6 month)
According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Buy
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for TRIPLE POINT SOCIAL HOUSING REIT PLC
1. All investments in equity instruments and contracts on those instruments must be measured at fair value. However, in limited circumstances, cost may be an appropriate estimate of fair value. That may be the case if insufficient more recent information is available to measure fair value, or if there is a wide range of possible fair value measurements and cost represents the best estimate of fair value within that range.
2. Such designation may be used whether paragraph 4.3.3 requires the embedded derivatives to be separated from the host contract or prohibits such separation. However, paragraph 4.3.5 would not justify designating the hybrid contract as at fair value through profit or loss in the cases set out in paragraph 4.3.5(a) and (b) because doing so would not reduce complexity or increase reliability.
3. When using historical credit loss experience in estimating expected credit losses, it is important that information about historical credit loss rates is applied to groups that are defined in a manner that is consistent with the groups for which the historical credit loss rates were observed. Consequently, the method used shall enable each group of financial assets to be associated with information about past credit loss experience in groups of financial assets with similar risk characteristics and with relevant observable data that reflects current conditions.
4. In applying the effective interest method, an entity identifies fees that are an integral part of the effective interest rate of a financial instrument. The description of fees for financial services may not be indicative of the nature and substance of the services provided. Fees that are an integral part of the effective interest rate of a financial instrument are treated as an adjustment to the effective interest rate, unless the financial instrument is measured at fair value, with the change in fair value being recognised in profit or loss. In those cases, the fees are recognised as revenue or expense when the instrument is initially recognised.
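Point 4 above can be made concrete with a small numerical sketch. The figures below (principal, fee, coupon, term) are illustrative assumptions, not company data: an origination fee that is integral to the instrument reduces the net amount advanced, so the effective interest rate solved for is higher than the stated coupon.

```python
# Hedged sketch of the effective interest method for a simple
# annual-pay loan; all inputs are hypothetical.
def effective_interest_rate(principal, fee, coupon, years, tol=1e-10):
    """Solve for the rate that discounts the loan's contractual cash
    flows back to the net amount advanced (principal minus the fee)."""
    net_outlay = principal - fee
    # coupon payments each year, principal repaid with the last one
    cash_flows = [coupon * principal] * (years - 1) + [coupon * principal + principal]

    def pv(rate):
        return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

    lo, hi = 0.0, 1.0                # bisection bounds on the rate
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pv(mid) > net_outlay:     # PV too high means the rate is too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A 5% coupon loan with a 2% integral fee: effective rate exceeds 5%
rate = effective_interest_rate(1000.0, 20.0, 0.05, 5)
```

The fee is thus recognised over the life of the instrument through the higher effective rate, rather than as revenue at initial recognition, unless the instrument is at fair value through profit or loss.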
*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.
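Point 3 of the adjustments above, grouping by consistent risk characteristics, can be sketched as follows. The group names, loss rates, and exposures are illustrative assumptions only: the key constraint is that each historical loss rate is applied only to the exposure group it was observed on, never pooled across groups.

```python
# Hedged sketch: expected credit loss from historical loss rates,
# applied group-by-group. All figures are hypothetical.
historical_loss_rates = {        # observed annual loss rate per risk group
    "retail_secured": 0.004,
    "retail_unsecured": 0.035,
    "corporate": 0.012,
}
current_exposures = {            # current carrying amount per group
    "retail_secured": 1_000_000.0,
    "retail_unsecured": 250_000.0,
    "corporate": 600_000.0,
}

# Each group's rate is matched to its own exposure
ecl = {group: historical_loss_rates[group] * current_exposures[group]
       for group in current_exposures}
total_ecl = sum(ecl.values())
```

In practice the historical rates would also be adjusted for current conditions and forward-looking information, which this sketch omits.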
## Conclusions
TRIPLE POINT SOCIAL HOUSING REIT PLC is assigned a short-term Ba1 & long-term Ba1 estimated rating. The TRIPLE POINT SOCIAL HOUSING REIT PLC prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and Pearson Correlation1,2,3,4 and it is concluded that the LON:SOHO stock is predictable in the short/long term. According to price forecasts for the (n+6 month) period, the dominant strategy among neural networks is: Buy
### LON:SOHO TRIPLE POINT SOCIAL HOUSING REIT PLC Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | Ba3 | B3 |
| Balance Sheet | Baa2 | Baa2 |
| Leverage Ratios | Baa2 | Ba3 |
| Cash Flow | Ba3 | C |
| Rates of Return and Profitability | B2 | B2 |
*Financial analysis is the process of evaluating a company's financial performance and position by the neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does the neural network examine financial reports and understand the financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 78 out of 100 with 604 signals.
## References
1. Chow, G. C. (1960), "Tests of equality between sets of coefficients in two linear regressions," Econometrica, 28, 591–605.
2. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Is FFBC Stock Buy or Sell? (Stock Forecast). AC Investment Research Journal, 101(3).
3. L. Panait and S. Luke. Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11(3):387–434, 2005.
4. Bewley, R. M. Yang (1998), "On the size and power of system tests for cointegration," Review of Economics and Statistics, 80, 675–679.
5. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Trading Signals (WTS Stock Forecast). AC Investment Research Journal, 101(3).
6. A. Tamar, Y. Glassner, and S. Mannor. Policy gradients beyond expectations: Conditional value-at-risk. In AAAI, 2015.
7. R. Howard and J. Matheson. Risk sensitive Markov decision processes. Management Science, 18(7):356–369, 1972.
## Frequently Asked Questions
Q: What is the prediction methodology for LON:SOHO stock?
A: LON:SOHO stock prediction methodology: We evaluate the prediction models Modular Neural Network (Social Media Sentiment Analysis) and Pearson Correlation.
Q: Is LON:SOHO stock a buy or sell?
A: The dominant strategy among neural network is to Buy LON:SOHO Stock.
Q: Is TRIPLE POINT SOCIAL HOUSING REIT PLC stock a good investment?
A: The consensus rating for TRIPLE POINT SOCIAL HOUSING REIT PLC is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of LON:SOHO stock?
A: The consensus rating for LON:SOHO is Buy.
Q: What is the prediction period for LON:SOHO stock?
A: The prediction period for LON:SOHO is (n+6 month).