# Overlapping TikZ nodes to look like a tape strip
Friends, I need to highlight specific letters. My set contains only A, N, C, Q and P, so I decided to make them look like a tape strip (resembling a good old Turing machine tape). The following code is a poor attempt at achieving that. Since I had an immutable set of letters in a defined order, I used the xstring package.
\documentclass{article}
\usepackage{tikz}
\usepackage{xstring}
\newcommand*\myblackbox[1]{%
\begin{tikzpicture}
\node[draw,inner sep=1pt, minimum height=0.2cm, minimum width=0.2cm] {\tiny\tt\raisebox{0pt}[\height][0pt]{#1}};
\end{tikzpicture}}
\newcommand*\mygraybox[1]{%
\begin{tikzpicture}
\node[draw,inner sep=1pt, draw=gray!60, minimum height=0.2cm, minimum width=0.2cm] {\color{gray!60}\tiny\tt\raisebox{0pt}[\height][0pt]{#1}};
\end{tikzpicture}}
\DeclareRobustCommand*\drawboxes[1]{%
\IfSubStr{#1}{A}{\myblackbox{A}}{\mygraybox{A}}%
\IfSubStr{#1}{N}{\myblackbox{N}}{\mygraybox{N}}%
\IfSubStr{#1}{C}{\myblackbox{C}}{\mygraybox{C}}%
\IfSubStr{#1}{Q}{\myblackbox{Q}}{\mygraybox{Q}}%
\IfSubStr{#1}{P}{\myblackbox{P}~}{\mygraybox{P}~}}
\begin{document}
\drawboxes{ACQ} Hello world.
\end{document}
This is the output:
The squares are arranged side by side. So far so good, but I'd like to make them slightly overlap each other, like this:
The black boxes have higher priority, so they need to be on top of the gray ones. My approach is very simple, so I understand that a proper solution might use an entirely different method.
Any ideas?
-
I would put all the nodes into one tikzpicture, give all the nodes an outer xsep=0pt (which means that the edge of the node is defined as the middle of the border line), place them using a chain with a node distance of 0pt, and use the backgrounds library to put the gray nodes behind the black ones using \begin{pgfonlayer}{background}...\end{pgfonlayer}.
(Some additional optimisations of the code: You can use font=\tiny\tt to define the font options in the node style; instead of using the raisebox you can just define all nodes to have a text depth=0pt; and setting the color in the node options means you don't have to use \color in the node text).
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{chains,backgrounds}
\usepackage{xstring}
\tikzset{
tape node/.style={
on chain,
draw,
inner sep=1pt,
outer xsep=0pt,
minimum height=0.2cm,
minimum width=0.2cm,
text depth=0pt,
font=\tiny\tt
}
}
\newcommand*\myblackbox[1]{%
\node[
tape node
] {#1};
}
\newcommand*\mygraybox[1]{%
\begin{pgfonlayer}{background}
\node[
tape node,
gray!60
] {#1};
\end{pgfonlayer}
}
\DeclareRobustCommand*\drawboxes[1]{%
\begin{tikzpicture}[
start chain=going right,
node distance=0pt
]
\IfSubStr{#1}{A}{\myblackbox{A}}{\mygraybox{A}}%
\IfSubStr{#1}{N}{\myblackbox{N}}{\mygraybox{N}}%
\IfSubStr{#1}{C}{\myblackbox{C}}{\mygraybox{C}}%
\IfSubStr{#1}{Q}{\myblackbox{Q}}{\mygraybox{Q}}%
\IfSubStr{#1}{P}{\myblackbox{P}~}{\mygraybox{P}~}
\end{tikzpicture}
}
\begin{document}
\drawboxes{ACQ} Hello world.
\end{document}
-
Very impressive solution, Jake! I had no idea of how to put the gray nodes behind the black ones. Your answer is complete, very didactic and easy to understand, thanks a million! – Paulo Cereda Jul 14 '11 at 23:36
You could avoid the backgrounding if you made the gray nodes black but with some transparency. – Andrew Stacey Mar 8 '12 at 9:21
This is a version of Jake's answer without the chains library. I started with TikZ 1.1, which had no chains, and I still don't use them. I use minimum size and only one macro. I kept the nice details (text depth, layers, font) and added a \foreach loop.
\documentclass{article}
\usepackage{tikz} \usetikzlibrary{backgrounds}
\usepackage{xstring}
\tikzset{
tape node/.style={
anchor = west, % to replace the chain
#1,
draw,
inner sep = 1pt,
outer xsep = 0pt,
minimum size = 0.2cm,
text depth = 0pt,
font=\tiny\tt
}
}
\newcommand*\mybox[3]{%
\begin{pgfonlayer}{#3}
\node[tape node = #2](last) at (last.east) {#1};% my chain
\end{pgfonlayer}
}
\DeclareRobustCommand*\drawboxes[1]{%
\begin{tikzpicture}
\node[inner sep=0pt](last){}; % to start "my" chain
\foreach \letter in {A,N,C,Q,P} {% easier to adapt
\IfSubStr{#1}{\letter}{%
\mybox{\letter}{black}{main}}{%
\mybox{\letter}{gray!60}{background}
}
}
\end{tikzpicture}
}
\begin{document}
\drawboxes{ACQ} Hello world.
\end{document}
-
Thanks Altermundus! It looks very impressive! :) – Paulo Cereda Mar 8 '12 at 9:50
@PauloCereda Thanks but the great part of the work comes from Jake. – Alain Matthes Mar 8 '12 at 9:55
Based on your original code, overlapping the boxes by backing up \pgflinewidth with \hspace, and using opacity as Andrew Stacey suggested in the comments on Jake's answer.
\documentclass{standalone}
\usepackage{tikz}
\usepackage{xstring}
\newcommand*\myblackbox[1]{%
\begin{tikzpicture}
\node[
draw,
inner sep=1pt,
minimum height=0.2cm,
minimum width=0.2cm
] {\tiny\tt\raisebox{0pt}[\height][0pt]{#1}};
\end{tikzpicture}%
}
\newcommand*\mygraybox[1]{%
\begin{tikzpicture}
\node[
draw,
inner sep=1pt,
minimum height=0.2cm,
minimum width=0.2cm,
opacity=0.4
] {\tiny\tt\raisebox{0pt}[\height][0pt]{#1}};
\end{tikzpicture}%
}
\DeclareRobustCommand*\drawboxes[1]{%
\IfSubStr{#1}{A}{\myblackbox{A}}{\mygraybox{A}}\hspace{-\pgflinewidth}%
\IfSubStr{#1}{N}{\myblackbox{N}}{\mygraybox{N}}\hspace{-\pgflinewidth}%
\IfSubStr{#1}{C}{\myblackbox{C}}{\mygraybox{C}}\hspace{-\pgflinewidth}%
\IfSubStr{#1}{Q}{\myblackbox{Q}}{\mygraybox{Q}}\hspace{-\pgflinewidth}%
\IfSubStr{#1}{P}{\myblackbox{P}~}{\mygraybox{P}~}}
\begin{document}
\drawboxes{ACQ} Hello world.
\end{document}
-
Nice idea, Mark! Thanks! :) – Paulo Cereda Mar 8 '12 at 10:36
I like Jake's version the best, but I thought a minimally changed version would make a nice addition. – Mark S. Everitt Mar 8 '12 at 10:51
Homework Help: How to find the distance of a satellite with respect to the moon
1. Oct 9, 2011
starplaya
1. The problem statement, all variables and given/known data
How far above the surface of the moon should a satellite be placed so that it is stationary with respect to the moon's surface?
Radius of moon = 1737.4 kilometers
mass of moon = 7.36 x 10^22 kilograms
Gravitational constant G = 6.67 x 10^-11 N m^2 / kg^2
2. Relevant equations
w = 2pi / T = 2pi/ 2332800 seconds (27 days/period of the moon)
v = w x Radius of satellite
(Ms) Vs^2/Rs = G (Ms)(Mm)/Rs^2
Vs^2/Rs = (Gn) Mm/Rs^2
Mm = mass of moon
Ms = mass of satellite
etc...
3. The attempt at a solution
w = 2pi/2332800 = 2.7 x 10^-6 rad/s
I'm completely lost after this step. It seems like I don't have enough information
2. Oct 9, 2011
HallsofIvy
Well, that's all the information you could have isn't it?
The way I would do it (because I'm more mathematics than physics) is set up parametric equations for the satellite's orbit:
$x= r\cos(\omega t)$
$y= r\sin(\omega t)$
The velocity of the satellite is given by
$v_x= -r\omega \sin(\omega t)$
$v_y= r\omega \cos(\omega t)$
And the acceleration by
$a_x= -r\omega^2 \cos(\omega t)$
$a_y= -r\omega^2 \sin(\omega t)$
And now the magnitude of the acceleration is the same as the force of gravity on the satellite due to the moon (divided by the mass of the satellite):
$$\frac{GM}{r^2}$$
where G is the "universal gravitational constant" (NOT "g", the acceleration due to gravity on the earth's surface) and M is the mass of the moon. The mass of the satellite is, of course, irrelevant.
3. Oct 9, 2011
starplaya
I am still having a lot of trouble figuring this particular question out.
4. Oct 9, 2011
phyzguy
What's the rotation rate of the moon?
5. Oct 9, 2011
27 days
6. Oct 9, 2011
starplaya
I really have been putting a ton of effort into this question, folks. I spent 30 minutes of the hour and a half it took me to do all of my physics homework on this one question.
Please, any detailed help would be greatly appreciated
7. Oct 9, 2011
Staff: Mentor
Hint: Equate gravitational acceleration to centripetal acceleration.
You'll need to determine an expression for either the angular velocity or linear velocity to use with the centripetal acceleration expression (depending upon whether you use the angular or linear tangential form of the centripetal acceleration formula).
8. Oct 10, 2011
Andrew Mason
Follow Gneill's advice. Assume that the centre of rotation of the satellite is the centre of mass of the moon (it is really the centre of mass of the moon and satellite, but that is very, very close to the cm of the moon).
Then it is just a matter of plugging in your numbers to determine the radius of the orbit.
AM
9. Oct 10, 2011
starplaya
Don't I at least need the velocity? I mean, I can see myself deriving everything with slightly more information, but with absolutely nothing given to me, I'm stunned.
10. Oct 10, 2011
phyzguy
Where are you stuck? You've written down the correct equations. You know the velocity in terms of w (which you know) and Rs. So the only thing missing is Rs. Substitute Vs = w Rs into your last equation and solve for Rs.
11. Oct 10, 2011
Staff: Mentor
Yes. You know the period of rotation, in seconds, and the radius is what you're working toward finding, so keep it as Rs.
I don't know what the n signifies, but that looks like the best of your equations, and you'll recognize that the only actual unknown is Rs.
It's a good idea to carry the units along all the way, with every equation, so you can check that you are working in consistent units.
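Following these hints, the numbers plug in directly. Here is a short Python sketch of the calculation (the variable names are mine, and the 27-day period is the value used earlier in the thread):

```python
import math

# Values from the problem statement
G = 6.67e-11        # gravitational constant, N m^2 / kg^2
M_moon = 7.36e22    # mass of the moon, kg
R_moon = 1.7374e6   # radius of the moon, m
T = 27 * 24 * 3600  # rotation period used in the thread: 2,332,800 s

w = 2 * math.pi / T  # angular velocity of the moon's rotation, rad/s

# Equate gravitational and centripetal acceleration:
#   G*M/r^2 = w^2 * r   =>   r^3 = G*M / w^2
r = (G * M_moon / w**2) ** (1.0 / 3.0)
altitude = r - R_moon

print(f"orbital radius: {r:.3e} m")               # about 8.8e7 m
print(f"height above surface: {altitude:.3e} m")  # about 8.6e7 m
```

So the satellite sits roughly 86,000 km above the lunar surface, with the satellite's mass cancelling out exactly as HallsofIvy noted.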
12. Sep 13, 2013
yands
Think of it as sitting on a long stick, like a seesaw in a playground, with your friend sitting on the stick some distance away from you; the stick is fixed in the middle and the ends oscillate up and down. You will see him as stationary, because you are both moving with the same angular velocity. Use this logic to solve your problem.
13. Sep 13, 2013
SteamKing
Staff Emeritus
OP is almost 2 years old. I doubt if the poster has been waiting around for further replies.
# Revision history
A better data structure than lists for computing intersections is the set, so you can first transform your list of lists into a list of sets:
sage: Lset = [set(l) for l in L]
sage: Lset
[{'B', 'C', 'D', 'E'},
{'A', 'C', 'D'},
{'A', 'B', 'D', 'E'},
{'A', 'B', 'C', 'D'}]
sage: s = Lset.pop()
Now you can intersect s with the remaining sets in Lset:
sage: s.intersection(*Lset)
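The same computation works in plain Python, since sets are built in; this is just an illustration of the answer above:

```python
# Plain-Python equivalent of the Sage session above.
L = [['B', 'C', 'D', 'E'], ['A', 'C', 'D'],
     ['A', 'B', 'D', 'E'], ['A', 'B', 'C', 'D']]
Lset = [set(l) for l in L]
s = Lset.pop()                    # removes and returns the last set
common = s.intersection(*Lset)    # intersect with the remaining sets
print(common)                     # {'D'} -- the only element in every list
```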
## Precalculus (6th Edition) Blitzer
\begin{align} \left[ \begin{matrix} 0 & \frac{3}{2} & \frac{3}{2} & \frac{1}{2} & \frac{1}{2} & 0 \\ 2 & 2 & \frac{5}{2} & \frac{5}{2} & \frac{9}{2} & \frac{9}{2} \\ \end{matrix} \right] \end{align} The graph is shown below:
To reduce the perimeter of the graph above to half, we multiply the matrix $B$ by $\frac{1}{2}$ as follows: \begin{align} & \frac{1}{2}B=\frac{1}{2}\left[ \begin{matrix} 0 & 3 & 3 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 5 & 5 \\ \end{matrix} \right] \\ & =\left[ \begin{matrix} 0 & \frac{3}{2} & \frac{3}{2} & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} & \frac{5}{2} & \frac{5}{2} \\ \end{matrix} \right] \end{align} To shift the reduced figure up by 2 units, we add the following matrix, which represents a shift of 2 units in the $y$-direction: $\left[ \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 2 & 2 & 2 & 2 & 2 \\ \end{matrix} \right]$ Adding it to the matrix of the reduced figure gives: \begin{align} & \left[ \begin{matrix} 0 & \frac{3}{2} & \frac{3}{2} & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} & \frac{5}{2} & \frac{5}{2} \\ \end{matrix} \right]+\left[ \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 2 & 2 & 2 & 2 & 2 \\ \end{matrix} \right]=\left[ \begin{matrix} 0+0 & \frac{3}{2}+0 & \frac{3}{2}+0 & \frac{1}{2}+0 & \frac{1}{2}+0 & 0+0 \\ 0+2 & 0+2 & \frac{1}{2}+2 & \frac{1}{2}+2 & \frac{5}{2}+2 & \frac{5}{2}+2 \\ \end{matrix} \right] \\ & =\left[ \begin{matrix} 0 & \frac{3}{2} & \frac{3}{2} & \frac{1}{2} & \frac{1}{2} & 0 \\ 2 & 2 & \frac{5}{2} & \frac{5}{2} & \frac{9}{2} & \frac{9}{2} \\ \end{matrix} \right] \end{align} The required coordinates to draw the shifted letter L are as follows: $\left( 0,2 \right),\left( \frac{3}{2},2 \right),\left( \frac{3}{2},\frac{5}{2} \right),\left( \frac{1}{2},\frac{5}{2} \right),\left( \frac{1}{2},\frac{9}{2} \right)$ and $\left( 0,\frac{9}{2} \right)$. Plot the points and trace them to obtain the curve.
By adding the matrix $\left[ \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 2 & 2 & 2 & 2 & 2 \\ \end{matrix} \right]$ to the matrix $\frac{1}{2}B$, and plotting the obtained coordinates, the perimeter of the traced graph is reduced to half and the figure is shifted 2 units up from the original.
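As a quick sanity check, the matrix arithmetic above can be reproduced in a few lines of Python (an illustration only; the variable names are mine):

```python
# Coordinates of the original letter L from matrix B in the solution.
B = [[0, 3, 3, 1, 1, 0],    # x-coordinates
     [0, 0, 1, 1, 5, 5]]    # y-coordinates

half = [[v / 2 for v in row] for row in B]        # scale by 1/2 (half perimeter)
shifted = [half[0], [v + 2 for v in half[1]]]     # add 2 to every y (shift up)

print(shifted)
# [[0.0, 1.5, 1.5, 0.5, 0.5, 0.0], [2.0, 2.0, 2.5, 2.5, 4.5, 4.5]]
```

These are exactly the fractions $\frac{3}{2}, \frac{1}{2}, \frac{5}{2}, \frac{9}{2}$ in the final matrix.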
# Is it valid to get better performance in logistic regression using only a subset of the coefficients?
I have an imbalanced data set containing 12% positive and 88% negative examples. First, I ran a logistic regression with all my coefficients and got an average accuracy of 0.91 (I know that's not very meaningful given my class distribution), an average sensitivity of 0.34 and an average specificity of 0.97. Then I ran an additional logistic regression using only a subset of the coefficients. On average, I got higher accuracy (0.98), lower sensitivity (0.32) and higher specificity (0.98). Is this normal, or is there an error in my code? Or is it because of the class distribution that the classifier using more coefficients is even better at predicting the majority class but worse at predicting the minority class?
• See the Wikipedia entry for logistic regression. Given a representative sample, the binary logistic model directly estimates Prob($Y=1|X$). If you use the Brier score or a score that comes from the log likelihood itself (the logarithmic scoring rule or pseudo $R^2$), you will not see illogical results. – Frank Harrell Dec 18 '15 at 13:36
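A tiny numerical illustration of that point (my own sketch, not from the thread): with a 12% positive rate, thresholded accuracy cannot distinguish a degenerate "always negative" rule from a calibrated base-rate predictor, while a proper scoring rule like the Brier score can.

```python
y = [1] * 12 + [0] * 88                 # 12% positive class

def accuracy(probs, y, thr=0.5):
    # fraction of thresholded predictions matching the labels
    return sum((p > thr) == bool(t) for p, t in zip(probs, y)) / len(y)

def brier(probs, y):
    # mean squared error of predicted probabilities (lower is better)
    return sum((p - t) ** 2 for p, t in zip(probs, y)) / len(y)

always_no = [0.0] * 100                 # hard "never positive" rule
calibrated = [0.12] * 100               # honest base-rate probability

print(accuracy(always_no, y), accuracy(calibrated, y))  # 0.88 0.88 -- a tie
print(brier(always_no, y))    # 0.12
print(brier(calibrated, y))   # ~0.1056, lower (better)
```

Both classifiers hit 0.88 accuracy, but the Brier score correctly prefers the calibrated probabilities.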
# tidyquant
The tidyquant package integrates the best resources for collecting and analyzing financial data (zoo, xts, quantmod, TTR, and PerformanceAnalytics) with the tidy data infrastructure of the tidyverse, allowing for seamless interaction between them. You can now perform complete financial analyses in the tidyverse.
# timetk
The timetk package enables the user to more easily work with time series objects in R. The package has tools for inspecting, analyzing and manipulating the time-based index and converting time-based objects to and from the many time series classes. The package is well-suited for time series data mining and time series machine learning using the time series signature.
# sweep
The sweep package enables broom-style "tidying" of ARIMA, ETS, BATS, and other models and forecast objects used in the forecast package. The output is a "tidy" data frame that fits into the data science workflow of the tidyverse.
# tibbletime
Built on top of the tidyverse, tibbletime is an extension that allows for the creation of time aware tibbles through the setting of a time index.
### Introducing the tibbletime functions
1. filter_time() - Succinctly filter a tbl_time object by date.
2. as_period() - Convert a tbl_time object from daily to monthly, from minute data to hourly, and more. This allows the user to easily aggregate data to a less granular level.
3. collapse_by() - Take a tbl_time object, and collapse the index so that all observations in an interval share the same date. The most common use of this is to then group on this column with dplyr::group_by() and perform time-based calculations with summarise(), mutate(), or any other dplyr function.
4. collapse_index() - A lower level version of collapse_by() that directly modifies the index column and not the entire tbl_time object. It allows the user more flexibility when collapsing, like the ability to assign the resulting collapsed index to a new column.
5. rollify() - Modify a function so that it calculates a value (or a set of values) at specific time intervals. This can be used for rolling averages and other rolling calculations inside the tidyverse framework.
6. create_series() - Use shorthand notation to quickly initialize a tbl_time object containing a regularly spaced index column of class Date, POSIXct, yearmon, yearqtr, or hms.
# anomalize
Built on top of tibbletime, anomalize enables a "tidy" workflow for detecting anomalies in time series data. The main functions are time_decompose(), anomalize(), and time_recompose().
### Introducing the anomalize functions
1. time_decompose() - Separates the time series into seasonal, trend, and remainder components.
2. anomalize() - Applies anomaly detection methods to the remainder component.
3. time_recompose() - Calculates limits that separate the “normal” data from the anomalies.
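The three-step idea is language-agnostic. Here is a deliberately minimal Python sketch of the same decompose-then-flag workflow (an illustration only, not the package's actual STL/Twitter decomposition or GESD/IQR internals):

```python
from statistics import median

def anomaly_flags(y, period):
    """Toy seasonal-decomposition anomaly detector:
    seasonal component = per-phase median across cycles,
    remainder = series minus seasonal (no trend in this toy),
    anomalies = remainders outside [q1 - 1.5*iqr, q3 + 1.5*iqr]."""
    seasonal = [median(y[p::period]) for p in range(period)]   # "time_decompose"
    remainder = [v - seasonal[i % period] for i, v in enumerate(y)]
    srt = sorted(remainder)
    n = len(srt)
    q1, q3 = srt[n // 4], srt[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr                    # "time_recompose" limits
    return [not (lo <= r <= hi) for r in remainder]            # "anomalize"

y = [1, 2, 3, 4] * 4   # a clean seasonal series, period 4
y[9] += 10             # inject one anomaly
flags = anomaly_flags(y, period=4)
print([i for i, f in enumerate(flags) if f])  # [9]
```

The real package performs a proper trend/seasonal/remainder decomposition and offers several detection methods, but the pipeline shape is the same.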
# American Institute of Mathematical Sciences
March 2013, 5(1): 39-84. doi: 10.3934/jgm.2013.5.39
## Geometric dynamics on the automorphism group of principal bundles: Geodesic flows, dual pairs and chromomorphism groups
1 Laboratoire de Météorologie Dynamique, École Normale Supérieure/CNRS, F-75231 Paris, France 2 Department of Mathematics, University of Surrey, Guildford GU2 7XH 3 West University of Timişoara, RO-300223 Timişoara, Romania
Received June 2012 Revised January 2013 Published April 2013
We formulate Euler-Poincaré equations on the Lie group $Aut(P)$ of automorphisms of a principal bundle $P$. The corresponding flows are referred to as EP$Aut$ flows. We mainly focus on geodesic flows associated to Lagrangians of Kaluza-Klein type. In the special case of a trivial bundle $P$, we identify geodesics on certain infinite-dimensional semidirect-product Lie groups that emerge naturally from the construction. This approach leads naturally to a dual pair structure containing $\delta\text{-like}$ momentum map solutions that extend previous results on geodesic flows on the diffeomorphism group (EPDiff). In the second part, we consider incompressible flows on the Lie group $Aut_{vol}(P)$ of volume-preserving bundle automorphisms. In this context, the dual pair construction requires the definition of chromomorphism groups, i.e. suitable Lie group extensions generalizing the quantomorphism group.
Citation: François Gay-Balmaz, Cesare Tronci, Cornelia Vizman. Geometric dynamics on the automorphism group of principal bundles: Geodesic flows, dual pairs and chromomorphism groups. Journal of Geometric Mechanics, 2013, 5 (1) : 39-84. doi: 10.3934/jgm.2013.5.39
[1] Daniel Fusca. The Madelung transform as a momentum map. Journal of Geometric Mechanics, 2017, 9 (2) : 157-165. doi: 10.3934/jgm.2017006 [2] Van Cyr, John Franks, Bryna Kra, Samuel Petite. Distortion and the automorphism group of a shift. Journal of Modern Dynamics, 2018, 13: 147-161. doi: 10.3934/jmd.2018015 [3] Van Cyr, Bryna Kra. The automorphism group of a minimal shift of stretched exponential growth. Journal of Modern Dynamics, 2016, 10: 483-495. doi: 10.3934/jmd.2016.10.483 [4] Jan J. Sławianowski, Vasyl Kovalchuk, Agnieszka Martens, Barbara Gołubowska, Ewa E. Rożko. Essential nonlinearity implied by symmetry group. Problems of affine invariance in mechanics and physics. Discrete & Continuous Dynamical Systems - B, 2012, 17 (2) : 699-733. doi: 10.3934/dcdsb.2012.17.699 [5] Hongyan Guo. Automorphism group and twisted modules of the twisted Heisenberg-Virasoro vertex operator algebra. Electronic Research Archive, 2021, 29 (4) : 2673-2685. doi: 10.3934/era.2021008 [6] James Benn. Fredholm properties of the $L^{2}$ exponential map on the symplectomorphism group. Journal of Geometric Mechanics, 2016, 8 (1) : 1-12. doi: 10.3934/jgm.2016.8.1 [7] Martino Borello, Francesca Dalla Volta, Gabriele Nebe. The automorphism group of a self-dual $[72,36,16]$ code does not contain $\mathcal S_3$, $\mathcal A_4$ or $D_8$. Advances in Mathematics of Communications, 2013, 7 (4) : 503-510. doi: 10.3934/amc.2013.7.503 [8] Venkateswaran P. Krishnan, Ramesh Manna, Suman Kumar Sahoo, Vladimir A. Sharafutdinov. Momentum ray transforms. Inverse Problems & Imaging, 2019, 13 (3) : 679-701. doi: 10.3934/ipi.2019031 [9] Alain Chenciner. The angular momentum of a relative equilibrium. Discrete & Continuous Dynamical Systems, 2013, 33 (3) : 1033-1047. doi: 10.3934/dcds.2013.33.1033 [10] Hiroaki Yoshimura, Jerrold E. Marsden. Dirac cotangent bundle reduction. Journal of Geometric Mechanics, 2009, 1 (1) : 87-158. doi: 10.3934/jgm.2009.1.87 [11] Indranil Biswas, Georg Schumacher, Lin Weng. 
Deligne pairing and determinant bundle. Electronic Research Announcements, 2011, 18: 91-96. doi: 10.3934/era.2011.18.91 [12] Joshua Cape, Hans-Christian Herbig, Christopher Seaton. Symplectic reduction at zero angular momentum. Journal of Geometric Mechanics, 2016, 8 (1) : 13-34. doi: 10.3934/jgm.2016.8.13 [13] Arvind Ayyer, Carlangelo Liverani, Mikko Stenlund. Quenched CLT for random toral automorphism. Discrete & Continuous Dynamical Systems, 2009, 24 (2) : 331-348. doi: 10.3934/dcds.2009.24.331 [14] Cesare Tronci. Momentum maps for mixed states in quantum and classical mechanics. Journal of Geometric Mechanics, 2019, 11 (4) : 639-656. doi: 10.3934/jgm.2019032 [15] Claudio Meneses. Linear phase space deformations with angular momentum symmetry. Journal of Geometric Mechanics, 2019, 11 (1) : 45-58. doi: 10.3934/jgm.2019003 [16] Toshihiro Iwai, Dmitrií A. Sadovskií, Boris I. Zhilinskií. Angular momentum coupling, Dirac oscillators, and quantum band rearrangements in the presence of momentum reversal symmetries. Journal of Geometric Mechanics, 2020, 12 (3) : 455-505. doi: 10.3934/jgm.2020021 [17] V. M. Gundlach, Yu. Kifer. Expansiveness, specification, and equilibrium states for random bundle transformations. Discrete & Continuous Dynamical Systems, 2000, 6 (1) : 89-120. doi: 10.3934/dcds.2000.6.89 [18] Nikolay Yankov, Damyan Anev, Müberra Gürel. Self-dual codes with an automorphism of order 13. Advances in Mathematics of Communications, 2017, 11 (3) : 635-645. doi: 10.3934/amc.2017047 [19] Oliver Butterley, Carlangelo Liverani. Robustly invariant sets in fiber contracting bundle flows. Journal of Modern Dynamics, 2013, 7 (2) : 255-267. doi: 10.3934/jmd.2013.7.255 [20] Simon Scott. Relative zeta determinants and the geometry of the determinant line bundle. Electronic Research Announcements, 2001, 7: 8-16.
2020 Impact Factor: 0.857
|
|
# All Questions
### 2D Helmholtz equation on rectangular domain
What are some hints to solve (analytically) this equation: $$\frac{\partial^2}{\partial x^2}u(x,y) + \frac{\partial^2}{\partial y^2}u(x,y) =a^2 u(x,y)+bx+cy+d$$ on the ...
### Compact factors of Lie groups; possibly varying definitions
Let $G$ be a real connected semisimple Lie group. Are the following equivalent? (1) $G$ has no proper cocompact normal subgroups. (2) $G$ has no proper cocompact connected normal subgroups. In ...
### Inequality for the maximum of Gaussian variables
Let $X=(X_1,\dots,X_n)$ and $Y=(Y_1,\dots,Y_n)$ be centered Gaussian vectors with variance matrix $\Gamma_X$ and $\Gamma_Y$. We assume that the matrix $\Gamma_Y-\Gamma_X$ is positive definite. Is it ...
### Horn's spectrum problem with random Hermitian matrices
An important problem in matrix analysis, completely solved in the early 2000's by A. Knutson & T. Tao (The honeycomb model of GLn(C) tensor products. I. Proof of the saturation conjecture. J. ...
### relation between algebraic geometry and complex geometry
As a complex manifold $\mathbb{P}^n$ is locally the euclidean space $\mathbb{C}^n$, as a projective variety it is locally $\mathbb{C}^n$ with the Zariski topology, as a scheme it is locally ...
### Compute the index of the Dirac operator on $C_0(R^2)$ to obtain Bott element in $K_0$
I am studying the paper of Baum-Connes-Higson to understand the Connes-Kasparov conjecture. In example 4.23, they discuss the case $G=\mathbb{R}^2$. I have constructed the Dirac operator, but I’m ...
### Jensen formula in $\mathbb{C}^n$?
Let $f:\mathbb{C}\to\mathbb{C}$ be an entire function with zero set $X\subset \mathbb{C}$. Jensen's formula reads $\log(|f(0)|)+\int_0^R\frac{|X\cap B_t(0)|}{t}\,dt = \ldots$
|
|
I like word clouds because they are visually appealing and provide a ton of information in a small space. Ever since I saw Drew Conway’s post (LINK) I have been looking for ways to improve word clouds. One of the nice features of Drew’s post was that he colored the words according to the gradient. Unfortunately, Drew’s cloud lacks some of the aesthetic wow factor that Ian Fellows’s wordcloud package is known for.
This post is going to show you how to color words with a gradient based on degree of usage between two individuals. For me it’s going to help me learn the following things:
1. How to use knitr + markdown to make a blog post (I’ve been using knitr for reproducible latex/beamer reports).
2. How to use gradients in base graphics (i.e., outside of ggplot2, which I’ve come to depend on).
3. How to make a gradient color bar in base.
First you’ll need some packages to get started. I’m using my own beta package qdap plus Fellows’s wordcloud package. If you install qdap, wordcloud is part of the install. For the legend we’ll be using the plotrix package.
library(qdap)
library(wordcloud)
library(plotrix)
Now we’ll need some data. I happen to have presidential debate data (debate # 1) left over that we can still mine.
# download transcript of the debate to working directory
url_dl(pres.deb1.docx)
# load multiple files with read transcript and assign to working directory
# qprep for quick cleaning
dat1$dialogue <- qprep(dat1$dialogue)
left.just(htruncdf(dat1, 10, 45))
person dialogue
2 ROMNEY What I support is no change for current retir
3 LEHRER And what about the vouchers?
4 ROMNEY So that's that's number one. Number two is fo
5 OBAMA Jim, if I if I can just respond very quickly,
6 LEHRER Talk about that in a minute.
7 OBAMA but but but overall.
8 LEHRER OK.
9 OBAMA And so...
10 ROMNEY That's that's a big topic. Can we can we stay
## Setting Up the Data
1. Make a word frequency matrix
2. Remove Lehrer’s words
3. Scale the word usage
4. Create a binned fill variable
word.freq <- with(dat1, wfdf(dialogue, person))[, -2]
csums <- colSums(word.freq[, -1])
conv.fact <- csums[2]/csums[1]
word.freq$ROMNEY2 <- word.freq[, "ROMNEY"] * conv.fact
#colSums(word.freq[, -1])
word.freq[, "total"] <- rowSums(word.freq[, -1])
word.freq$continum <- with(word.freq, ROMNEY2 - OBAMA)
word.freq <- word.freq[word.freq$total != 0, ]  # remove Lehrer-only words
MAX <- max(word.freq$continum[!is.infinite(word.freq$continum)])
word.freq$continum <- ifelse(is.infinite(word.freq$continum), MAX, word.freq$continum)
rev(colfunc(length(levels(word.freq$fill.var)))))
head(word.freq, 10)
        Words ROMNEY OBAMA ROMNEY2   total continum    fill.var  colors
1           a     83    72  73.125 228.125   1.5470   (1.5,2.5] #BB0043
2        aarp      0     1   0.000   1.000  -1.0000   (-1.5,-1] #5000AE
3        able      6     7   5.286  18.286  -1.7138 (-2.5,-1.5] #4300BB
4       about     11    11   9.691  31.691  -1.3087   (-1.5,-1] #5000AE
5       above      1     0   0.881   1.881   1.2111     (1,1.5] #AE0050
6     abraham      0     2   0.000   2.000  -2.0000 (-2.5,-1.5] #4300BB
7  absolutely      2     2   1.762   5.762  -0.2379   (-0.25,0] #780086
8     academy      0     1   0.000   1.000  -1.0000   (-1.5,-1] #5000AE
9      accept      1     0   0.881   1.881   1.2111     (1,1.5] #AE0050
10 accomplish      1     0   0.881   1.881   1.2111     (1,1.5] #AE0050
## Plot the Word Cloud and Gradient Legend
Now that we have color gradients let’s use wordcloud to plot and plotrix‘s color.legend to make a legend. I didn’t know how to create the gradient legend either and asked again on stackoverflow where I received an answer from Dason and mnel (LINK). Both great answers but I went with Dason’s.
par(mar = c(7, 1, 1, 1))
wordcloud(word.freq$Words, word.freq$total, colors = word.freq$colors,
min.freq = 1, ordered.colors = TRUE, random.order = FALSE, rot.per=0,
scale = c(5, .7))
COLS <- colfunc(length(levels(word.freq\$fill.var)))
color.legend(.025, .025, .25, .04, qcv(Romney,Obama), COLS)
Note: If you plot to the console graphics device you can’t get a large enough size to plot all the words comfortably. I achieved the above results plotting externally to png @ 1000 x 1000 (w x h)
## Concluding Thoughts
Alright, this is my first knitr-generated blog post. Very easy. I regret not having tried it earlier.
I accomplished my goal of making a gradient word cloud and a gradient legend. The actual word cloud really isn’t that informative because there’re too many words and too little variation in word choice/colors. In some situations this approach may be useful but in this one I don’t like it. Secondly, I used the blue to red theme because it plays to the political parties but in this visualization better contrasting colors would be more appropriate. Overall I don’t feel I was successful in presenting information better than Drew Conway’s post.
## What the Reader Can Take Away from the Post
1. Using wordcloud’s user defined color feature
2. Using qdap’s lookup to recode
3. Creating gradients in base (easy)
4. Creating the accompanying gradient legend
If the reader has improvements in scaling, visualization parameters, etc., please share these and other comments below.
|
|
# Ecobee launches a new thermostat with location-detection as its killer app
Ecobee, the Canadian company that has been making connected thermostats since 2007, has released a new thermostat that combines many of the features I think a smart home should have. The thermostat, which costs $249, is a marked change in design for Ecobee, whose previous devices and app were very data-heavy and utilitarian; the new one is beautiful yet still smart. The thermostat has a 3.5-inch capacitive touch screen and is connected via Wi-Fi. It also comes with a sensor that you place in a room in your home, and that sensor sends the temperature in that part of the house back to the thermostat. It also conveys information about your exact location inside the house. For HVAC nerds this is cool because it can now make you more comfortable in your home, especially if your bedroom gets really hot or cold at night. The thermostat learns your habits and adjusts the temperature based on where people are in the home. Additional sensors cost $80 for a package of two. My upstairs thermostat is in a hallway that can get pretty warm, so it’s not uncommon to see the temperatures in my study, where I work all day, fluctuate between 76 and 80 depending on the time of day. I keep the upstairs at 78 because I live in Texas and put on a sweater when the temperature is below 75.
So for me, if I could track the temps in my study while I’m in there and keep it at 78, I might be able to use a bit less A/C. That’s awesome. But even more awesome is a planned integration with If This Then That and existing SmartThings integrations for Ecobee. Stuart Lombard, the co-CEO of Ecobee, says that presence detected by the sensors will be shared as triggers on other platforms, which means I can now set rules in my home based on presence. That could be the killer app for this thermostat and the smart home, something we discuss on this week’s podcast.
I’ve long been an Ecobee fan (I have a cheaper Ecobee Smart SI and a Nest in my home today) and I love the company’s nerdiness around weather and HVAC data as well as its commitment to openness. With the upgraded app and thermostat design I’m really hoping that other people buy in to what Ecobee has built. Its previous products were built for the HVAC distributor market, and it shows. So maybe with a designer gloss, a new app and some really awesome features, others will check out what I think is a really compelling thermostat.
I am currently trying to convince myself I need to replace my older Ecobee with this new one despite the pain of rewiring my thermostats again. I also need to convince myself that I am not insane for spending \$500 on thermostats. That may take a bit longer, especially since I tend to see less savings than advertised by Nest and Ecobee simply because I already keep my AC pretty high and am home all the time.
|
|
# How can I find out what row and column a cell resides in when I populate a matrix diagonally?
I am populating a matrix diagonally. Check out these three examples for clarification:
1 2 4 7 11 16
3 5 8 12 17 22
6 9 13 18 23 27
10 14 19 24 28 31
15 20 25 29 32 34
21 26 30 33 35 36
1 2 4 7 11 16 22 28
3 5 8 12 17 23 29 34
6 9 13 18 24 30 35 39
10 14 19 25 31 36 40 43
15 20 26 32 37 41 44 46
21 27 33 38 42 45 47 48
1 2 4
3 5 7
6 8 10
9 11 13
12 14 16
15 17 19
18 20 21
Is it possible to create a formula, fn(columns, row, i) = (x, y), so that you can derive what column and row a specific number in the sequence resides in? So for instance, looking at the first example, we would get something like fn(6, 6, 29) = (4,5)
Any help would be greatly appreciated, thanks.
For the first section where diagonals are growing in size, number the diagonals with the variable $d$ so that $d = x + y - 1$. Now $d$ must be the smallest integer for which $1 + 2 + \ldots + d \geq i$. A bit of algebra gives you the following expression for $d$.
$$d = \left\lceil{\frac{\sqrt{8i +1} - 1}{2}}\right\rceil$$
Once $d$ is known, the offset $i - \frac{d(d-1)}{2}$ gives you $y$ directly, and then $x = d + 1 - y$. The other sections can be computed in the same way.
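When the closed form feels error-prone, a brute-force cross-check that simply walks the anti-diagonals in fill order is easy to write. A minimal Python sketch (the function name `fn` mirrors the question's notation; this is my illustration, not part of the answer above):

```python
def fn(cols, rows, i):
    """Return (x, y) = (column, row), 1-based, of the i-th entry when a
    cols-by-rows matrix is filled along anti-diagonals, top row first."""
    n = 0
    for d in range(cols + rows - 1):
        # cells on anti-diagonal d are (row r, col d - r), 0-based
        for r in range(max(0, d - cols + 1), min(rows, d + 1)):
            n += 1
            if n == i:
                return (d - r + 1, r + 1)
    raise ValueError("i out of range")

# fn(6, 6, 29) → (4, 5), matching the worked example in the question
```

Running it against all three example matrices reproduces the given layouts, so it makes a handy oracle for testing the closed-form version.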
|
|
Please assist in solving for x, in detail.
$x=2$ or $x=-2$
Explanation:
$(3x -6) (x+2) =0$
In math, if $a \times b =0$ then either $a=0$ or $b=0$, so taking $3x-6$ as $a$ and $x+2$ as $b$ we know that
$3x-6=0$ or $x+2=0$
$3x=6$ or $x=-2$
i.e. $x=2$ or $x=-2$
by Diamond (89,043 points)
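The factor-by-factor reasoning can be sanity-checked numerically. A small Python sketch (the helper `f` is just the left-hand side of the equation, named by me for illustration):

```python
def f(x):
    # left-hand side of (3x - 6)(x + 2) = 0
    return (3 * x - 6) * (x + 2)

# scan a range of integers for zeros of the product
roots = sorted(x for x in range(-10, 11) if f(x) == 0)
# → [-2, 2]
```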
|
|
Elliptic Cylindrical Coordinates
1. Jun 20, 2009
nassboy
Is there a cylindrical coordinate system that is centered about the foci of an ellipse? It would use (r, theta, z) just like cylindrical coordinates, only for an ellipse.
If this coordinate system exists, what is the laplacian?
Chris
2. Jun 20, 2009
Civilized
3. Jun 20, 2009
nassboy
I wasn't sure if those coordinate systems were exactly what I wanted; they aren't foci-centered.
4. Jun 20, 2009
Vortexman
Can't you just perform a simple affine transformation so that the origin is at one of the foci? Of course, using the chain rule to incorporate this shift would slightly alter the form of your Laplacian.
|
|
Categories
# Area of the Trapezoid | AMC 8, 2002 | Problem 20
Try this beautiful problem from AMC-8, 2002, (Problem-20) based on area of Trapezoid.You may use sequential hints to solve the problem.
Try this beautiful problem from Geometry based on Area of Trapezoid.
## Area of the Trapezoid – AMC- 8, 2002 – Problem 20
The area of triangle XYZ is 8 square inches. Points A and B are midpoints of congruent segments XY and XZ. Altitude XC bisects YZ. What is the area (in square inches) of the shaded region?
• $6$
• $4$
• $3$
### Key Concepts
Geometry
Triangle
Trapezoid
But try the problem first…
Answer: $3$
Source
AMC-8 (2002) Problem 20
Pre College Mathematics
## Try with Hints
First hint
Given that Points A and B are midpoints of congruent segments XY and XZ and Altitude XC bisects YZ
Let us assume that the length of YZ=$x$ and length of $XC$= $y$
Can you now finish the problem ……….
Second Hint
Therefore area of the trapezoid= $\frac{1}{2} \times (YC+AO) \times OC$
can you finish the problem……..
Final Step
Let us assume that the length of YZ=$x$ and length of $XC$= $y$
Given that area of $\triangle xyz$=8
Therefore $\frac{1}{2} \times YZ \times XC$=8
$\Rightarrow \frac{1}{2} \times x \times y$ =8
$\Rightarrow xy=16$
Given that Points A and B are midpoints of congruent segments XY and XZ and Altitude XC bisects YZ
Then by the midpoint theorem we can say that $AO=\frac{1}{2} YC =\frac{1}{4} YZ =\frac{x}{4}$ and $OC=\frac{1}{2} XC=\frac{y}{2}$
Therefore area of the trapezoid shaded area = $\frac{1}{2} \times (YC+AO) \times OC$= $\frac{1}{2} \times ( \frac{x}{2} + \frac{x}{4} ) \times \frac{y}{2}$ =$\frac{3xy}{16}=3$ (as $xy$=16)
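Since the final expression depends only on the product $xy = 16$, any consistent choice of $x$ and $y$ verifies it numerically. A quick Python check (the specific values are my own illustrative choice):

```python
x, y = 4.0, 4.0                    # any choice with x*y = 16 works
assert x * y == 16
area = 0.5 * (x / 2 + x / 4) * (y / 2)   # (1/2)(YC + AO)(OC)
# → 3.0
```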
|
|
# How do you find the slope of the following line, x+2y= -2?
Apr 1, 2015
I would rearrange the equation to actually "see" the slope, as:
$2 y = - x - 2$
$y = - \frac{x}{2} - \frac{2}{2}$
$y = - \frac{1}{2} x - 1$
The slope is the numerical coefficient in front of your $x$, i.e., $- \frac{1}{2}$
Apr 1, 2015
If you re-write your formula in slope-intercept form:
$y = m x + b$
then the coefficient $m$ is the slope.
In this case
$x + 2 y = - 2$
$\rightarrow 2 y = - x - 2$
$\rightarrow y = \left(- \frac{1}{2}\right) x - 1$
with a slope of
$\left(- \frac{1}{2}\right)$
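Both answers can be confirmed numerically by sampling two points on the line and computing rise over run. A small Python sketch (not part of either answer above):

```python
def y(x):
    # solve x + 2*y = -2 for y
    return (-2 - x) / 2

# slope from two sample points on the line
slope = (y(1) - y(0)) / (1 - 0)
# → -0.5
```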
|
|
Question
# Mark the correct alternative of the following. If the number 2345a60b is exactly divisible by 3 and 5, then what is the maximum value of $a+b$? (a) 12 (b) 13 (c) 14 (d) 15
Hint: We can use the divisibility rule for the numbers 3 and 5 to solve this question. In the case of number 5, the value of b can only be either 0 or 5. Similarly, we can apply conditions for number 3 also and get to a conclusion.
Before proceeding we should know the divisibility rule of 3 as well as 5. These are as follows:
Divisibility rule of 3: A number is divisible by 3 if and only if the sum of the digits of a number is divisible by 3.
Divisibility rule of 5: A number is divisible by 5 if and only if the last digit of a number is either 0 or 5.
Therefore, the number 2345 a 60b is divisible by 5 only if the value of $b=0\text{ or 5}$.
We have to take the value of b as 5 as we have been asked the maximum value of $a+b$.
Then the sum of the digits of the given number 2345 a 60b is $25+a$.
Therefore, the values of $25+a$ (with $0 \le a \le 9$) that are divisible by 3 are 27, 30 and 33.
Hence, the value of $a$ may be 2, 5 or 8.
By comparing, the possible values of a, we get that the maximum value of $a=8$
Therefore, we have $a=8,b=5$.
Hence, the maximum value of $a+b$ is $13$.
Hence, the answer is option (b).
Note: Always remember we have been asked for the maximum value of $a+b$, therefore for the number to be divisible by 5, the last digit must be 5 not 0 as we have been asked the maximum value of $a+b$. Make sure that after getting the maximum values of $a+b$, check at least once the number is divisible by both 3 and 5.
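A brute-force check over all digit pairs confirms the answer. A small Python sketch (my own verification, not part of the solution):

```python
best = -1
for a in range(10):
    for b in range(10):
        n = int(f"2345{a}60{b}")
        # number must be divisible by both 3 and 5
        if n % 3 == 0 and n % 5 == 0:
            best = max(best, a + b)
# → best == 13 (attained at a = 8, b = 5)
```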
|
|
## Q. 2.3
The temperature scale of a thermometer is given by : t = a ln p + b. If at the ice point and steam point the thermometric properties are found to be 2 and 8 respectively, what will be the temperature corresponding to the thermometric property of 4 on Celsius scale ?
## Verified Solution
t = a ln p + b
At ice point, 0 = a ln 2 + b
At steam point, 100 = a ln 8 + b
Which gives, $a=\frac{100}{\ln 4}=72.135$
b = – 50
t = 72.135 ln p – 50
At p = 4, t = 72.135 ln 4 – 50 = 50°C
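The arithmetic above can be verified in a few lines. A small Python sketch (`math.log` is the natural logarithm):

```python
import math

# t = a*ln(p) + b with t(2) = 0 (ice point) and t(8) = 100 (steam point)
a = 100 / (math.log(8) - math.log(2))  # = 100 / ln 4 ≈ 72.135
b = -a * math.log(2)                   # = -50
t = a * math.log(4) + b                # temperature at p = 4
# → 50.0 (approximately)
```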
|
|
# Tag Info
divide the interval [0,1] to N subintervals and take the function $F_N$ as below: in odd subintervals equal 1 in even subintervals equal 0
First, $A\subset l^2$ is the collection of sequences that only take finitely many values, their entries are all zero after some point (you said constant, but the constant must be zero), and $A$ is dense in $l^2$. Now let us show $B$ is dense in $A$ (which would mean $B$ is also dense in $l^2$), let $a\in A$ and $\epsilon$ be given, we show there exists a ...
The span $S$ of the functions $\{ \chi_{[0,x]} : 0 \le x \le 1\}$ is a dense linear subspace of the Hilbert space $L^{2}[0,1]$. To see that the span is dense, it is enough to show that $(f,\chi_{[0,x]})=0$ for all $x$ and some $f \in L^{2}[0,1]$ implies $f=0$. By the Lebesgue differentiation theorem, the following holds a.e.: ...

Alternatively and directly: Assume by contradiction that $(e_n)_{n}$ does not converge weakly to zero. Then there exist $\epsilon >0$ and a functional $f\in l_\infty^\ast$ s.t. $|f(e_n)|\geq \epsilon$ for infinitely many $n\in\mathbb{N}$. By passing to that subsequence, we have that $|f(e_{n_k})|\geq \epsilon$ for all $k\in\mathbb{N}$. Let $\lambda_k$ ...

From here we know that $(e_n)_n$ converges weakly to $0$ iff it is bounded and every subsequence converges quasi-uniformly to $0$. Clearly, $(e_n)_n$ is bounded and pointwise converging to $0$. Given a subsequence $(e_{n_k})_k$, $\epsilon>0$ and $n_0\in \mathbb{N}$, set $\alpha_1 = n_0+1$ and $\alpha_2=n_0+2$. Then for any $m\in\mathbb N$ we have ...

Think of a special case: $f$ and $g$ are zero except on a small interval, like "two copies of the box function, which is 1 on $[-1/2, 1/2]$ and 0 elsewhere". Then $f \star g(u)$ is, roughly, "how much $f$ looks like $g$", flipped over and shifted by $u$. In the case where $g$ is symmetric (i.e., an even function), the "flip over" can be ignored, and ...

Because there is a maximum (well, supremum) in its definition. Answered here: I want to show that $|f(x)|\le(Mf)(x)$ at every Lebesgue point of $f$ if $f\in L^1(\mathbb{R}^k)$. The important properties are: $M$ is a bounded (nonlinear) operator on $L^p$ for $1<p<\infty$, and also from $L^1$ to weak $L^1$. This fact offers more control over the local behavior ...

Let $f_{\alpha,\beta}(x) = \frac{1}{x^\alpha (\log x)^\beta}$. You have shown that $f_{\alpha,\beta} \in L^p([2, \infty))$ whenever: $p > 1/\alpha$, or $p = 1/\alpha$ and $\beta > \alpha$. So, in particular, $\{p : f_{1/2,1} \in L^p([2,\infty))\}=[2, \infty]$. You have also shown that $f_{\alpha,\beta} \in L^p([0,1/2])$ whenever: $p < \ldots$

$L^p (\mu)$ is always reflexive for $1<p<\infty$. EDIT: For the cases $p=1$ or $p=\infty$, this is almost never true. But what is still true in the case $p=1$ (if the measure is sigma-finite) is that the dual space of $L^1$ is $L^\infty$. In the non-sigma-finite case, this can fail. If I recall correctly, Rudin even proves it for non-sigma-finite ...

Hint: For the first case, you can actually show that $$\|f\|_p \leq \|f\|_\infty\, \mu(X)^{1/p}$$ if $\mu(X) < \infty$. For the second one, consider $X = \mathbb R$ with the Lebesgue measure. Can you find a bounded positive function $f$ on $\mathbb R$ so that $$\int_\mathbb{R} f\, dx = + \infty?$$ (Don't think too hard.)

Hint: Can you bound $|(x_n)_j - x_j| \le \|x_n - x\|_p$? Take the canonical basis as a counterexample for b). (What is its pointwise limit? What is $\|i^p e_i - e\|_p$?)

Your statement in the comments is that an operator $T$ is continuous on a Banach space $X$ if, for a sequence $f_n \rightarrow f$ in $X$, we have $Tf_n \rightarrow Tf$. That is about the operator $T$ being continuous. The functions $f_n$ and $f$ are simply an arbitrary collection of elements in the space that form a convergent sequence and its limit. ...

First of all, in the proof you need to assume something else about the function $g$. Usually one takes $g\in C_c(\mathbb{R}^n)$, the space of continuous functions with compact support. This implies that $g$ is uniformly continuous, allowing one to prove that $\|g(x-a)-g(x)\|_p\to0$ as $a\to0$. The proof shows that ...
A square-integrable function on $\mathbb{R}^{n}$ is not necessarily integrable, but it is integrable on any bounded set $S$ because of the Cauchy–Schwarz inequality: \begin{align} \int_{S} |e^{-2\pi i(x,y)}f(x)|\,dx & = \int_{S}|f(x)|\,dx \\ & \le \left(\int_{S}1\,dx\right)^{1/2}\left(\int_{S}|f(x)|^{2}\,dx\right)^{1/2} \\ & \le \ldots \end{align}

The substitution $u=\tan(x)$ gives $$\int_{0}^{\infty}{\frac{u^{p}}{1+u^{2}}\,du}=\int_{0}^{1}{\frac{u^{p}+u^{-p}}{u^{2}+1}\,du}\leq \int_{0}^{1}{(u^{p}+u^{-p})\,du}=\frac{2}{1-p^{2}}$$ whenever $p\in (0,1)$.

Hint: $$\mu(|f| > \varrho) = \int_{|f|> \varrho} \, d\mu \leq \int_{|f|>\varrho} \frac{|f|}{\varrho} \, d\mu.$$ This inequality is known as Markov's inequality. Remark: For $f \in L^p(\mu)$, $p \geq 1$, $$\mu(|f|>\varrho) \leq \frac{1}{\varrho^p} \int |f|^p \, d\mu.$$

For the first part, we can actually write $$\sum_{j=1}^N|x_j|^p=\lim_{n\to \infty}\sum_{j=1}^N|x_j^n|^p\leqslant \sup_l\lVert x^l\rVert_p^p.$$ As $N$ is arbitrary, we get that $x$ belongs to $\ell^p$. For the second part, we approximate the element $z$ by the sequence whose first $N$ terms are those of $z$ and whose other terms are $0$. Call ...

These inequalities do not seem to be the best ones to me. Why don't you try this: note that the $E_n$ simply divide the domain according to the integer part of $|f|$. On $E_n$, $f$ is "sandwiched" between $n-1$ and $n$. Writing this with functions: $$(n-1)\chi_{E_n}\leq|f|\chi_{E_n}\leq n\chi_{E_n}.$$ We can take the power $p$ on the inequalities ...

Hint 1: If you do not have sets of arbitrarily small measure, one of the inclusions $L^r \subset L^s$ holds. For instance, consider $\sum_{n=1}^\infty \frac{1}{n}$ and $\sum_{n=1}^\infty \frac{1}{n^2}$. How does this relate to $\left(\int_X |f(x)|^p d\mu(x)\right)^{1/p}$? Hint 2: If you do not have sets of arbitrarily large measure, the other inclusion ...

If $\mu(X)<\infty$, then $$ p<q \quad\Longrightarrow\quad L^q(X)\subset L^p(X). $$ This is due to the fact that (Hölder inequality) $$ \int_X \lvert\, f\rvert^p\,d\mu=\int_X \lvert\, f\rvert^p\cdot 1\,d\mu =\left(\int_X \lvert\, f\rvert^q\,d\mu\right)^{p/q} \left(\int_X 1^{q/(q-p)}\,d\mu\right)^{(q-p)/q}. $$ Hence $\|\,f\|_p\le \ldots$

$$ \sup_{|y|>|x|} \frac{1}{(1+|y|)^{n}} = \frac{1}{(1+|x|)^{n}}$$ and $$\int_{\mathbb R} \frac{1}{(1+|x|)^{n}}\, dx = 2\int_0^\infty\frac{1}{(1+|x|)^{n}}\, dx \leq 2\left(\int_0^1 1\, dx +\int_1^\infty\frac{1}{x^n}\,dx\right)$$

Obviously, $$\int_{\mathbb R} \sup_{|y|>|x|} \frac{1}{(1+|y|)^{n}}\, dx = \int_{\mathbb R} \frac{1}{(1+|x|)^{n}}\,dx=\cdots$$

Let $$ f_n(x)= \begin{cases} n^2&\text{for $0\leq x\leq\frac1n$},\\ 0&\text{for $\frac1n< x\leq1$}. \end{cases} $$ The sequence is unbounded, hence not weakly convergent, but convergent in the sense of 1 and 2.

Let $f(x)=1/x^2$ for $|x|>1$ and $1$ for $|x|\leq1$. In the sense of your definition $g(x)\equiv1$ is a contraction of $f$, but it is not in $L^2(\mathbb{R})$.

For the case in which $1 \leq p < \infty$: Let $\displaystyle f = \bigg( \frac{1}{\mu(A)} \bigg)^{1/p} \cdot \chi_A$ and $\displaystyle g = \bigg( \frac{1}{\mu(B)} \bigg)^{1/p} \cdot \chi_B$ where $A, B$ both have nonzero finite measure, are disjoint, and $\chi$ is the indicator function. Notice that $\|f\|_p = \big( \int \vert f \vert^p \, d\mu \ldots$
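Markov's inequality quoted above is easy to sanity-check on a discrete measure, where the integral is a finite sum. A small Python sketch (the sample values are mine, purely illustrative):

```python
# Six points of equal mass 1; f takes these values on them.
f = [0.2, 3.0, 1.5, 7.0, 0.1, 2.5]
rho = 2.0
lhs = sum(1 for v in f if v > rho)        # mu(|f| > rho)
rhs = sum(v for v in f if v > rho) / rho  # (1/rho) * integral of |f| over {|f| > rho}
# lhs = 3, rhs = 6.25, and indeed lhs <= rhs
```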
For the first inequality, please use Lemma 4.1 in ref [X. X. Huang and X. Q. Yang. A unified augmented Lagrangian approach to duality and exact penalization. Mathematics of Operations Research, 28(3):533–552, 2003.] Note that $0<r\,/q<1$.
1) No, every such operator is bounded. For simplicity, I assume that all functions are real-valued. If $T$ is not bounded, there is a sequence $f_n$ with $\Vert f_n \Vert_p \leq 2^{-n}$ and $\Vert Tf_n\Vert_p \geq 2^n$ (why?). Then Fatou's Lemma (or monotone convergence) shows $f :=\sum_n |f_n|\in L^p$ with $\Vert f\Vert_p \leq 1$. But $|f_n|\leq \ldots$

How to find these... By linearity of the integral one has: $$\mathrm{supp}\,f\cap\mathrm{supp}\,g=\varnothing:\quad\int|f\pm g|^p\,\mathrm{d}\lambda=\int|f|^p\,\mathrm{d}\lambda+\int|g|^p\,\mathrm{d}\lambda$$ So take a block and shift it slightly: $$f:=\chi_{[0,1]}:\quad f_\varepsilon(x):=f(x-\varepsilon)$$ Then for an appropriate choice: ...

I'll give you a hint. Let $$f(x)=\begin{cases}2,&x\in[0,3/4],\\0,&x>3/4,\end{cases}\quad g(x)=f(1-x).$$ Then the parallelogram law says that in Hilbert spaces we have $$2\|f\|^2+2\|g\|^2=\|f-g\|^2+\|f+g\|^2.$$ Can you calculate the norms above?

Suggestion: $E$ is a set where $f$ is "large". When $f > 1$, $|f|^p$ increases and decreases in direct relation to $p$. So if you know $f$ integrates at the $q$th power, what other types of powers will be integrable on this set? Likewise, when $|f| < 1$ we are talking about the "tail" of the integral if $\Omega$ has infinite measure, or simply the ...

There are many norms. This is but an example. Let $W=\{w_i\}_{i=1}^\infty$, $w_i>0$, be a weight sequence. Then $$\|x\|_{p,W}=\sum_{i=1}^\infty|x_i|^p w_i$$ defines a norm. If $W$ is bounded, then it is a norm on $\ell^p$. If moreover $\inf_{i} w_i>0$, then it is equivalent to the $\ell^p$ norm. But if $\inf_{i}w_i=0$, the norms are not equivalent.

If $\mu(X)<\infty$, then the answer is YES. In such case $$\left|\int_X f_n\,d\mu-\int_X f\,d\mu\,\right|\le\int_X\lvert\,f_n-f\rvert\,d\mu\le\mu(X)\cdot\sup_{x\in X}\lvert\,f_n(x)-f(x)\rvert,$$ and as $\,f_n\to f$ uniformly, then $\,\sup_{x\in X}\lvert\,f_n(x)-f(x)\rvert\to 0$, and hence $$\int_X f_n\,d\mu\longrightarrow\int_X f\,d\mu.$$
|
|
# Ubuntu – How to “redecorate” windows
lxde
I right clicked the title bar of rhythmbox and "un/Decorate". Now the title bar is gone. I want it back and I can't figure out how. I tried hitting D but nothing happens.
• Strangely, by default, there is no toggle to decorate/undecorate a window.
You need to do one of two things: either define a keybinding to toggle, or use the shortcut key to redisplay the window menu.
Let's examine option 1:
To do this, open lxterminal
leafpad ~/.config/openbox/lubuntu-rc.xml
Search for the first instance of <keybinding>
<keybind key="W-A-D">
<action name="ToggleDecorations"></action>
</keybind>
Now run openbox --reconfigure to make your changes effective (alternatively, you can logout and login).
You'll be able to toggle the decoration using Meta+Alt+D
The windows key is also another name for Meta
Let's also examine option 2:
Thanks to Glutanimate.
Pressing Alt+Space will display the window submenu where you can redecorate. Obviously this doesn't need a keybinding, but it's nice to have a toggle option to quickly turn the titlebar on/off.
|
|
# MathSciDoc: An Archive for Mathematicians
#### mathscidoc:1702.01016
The finite subgroups of $GL_4(\bm{Z})$ are classified up to conjugation in \cite{BBNWZ}; in particular, there exist $710$ non-conjugate finite groups in $GL_4(\bm{Z})$. Each finite group $G$ of $GL_4(\bm{Z})$ acts naturally on $\bm{Z}^{\oplus 4}$; thus we get a faithful $G$-lattice $M$ with ${\rm rank}_\bm{Z} M=4$. In this way, there are exactly $710$ such lattices. Given a $G$-lattice $M$ with ${\rm rank}_\bm{Z} M=4$, the group $G$ acts on the rational function field $\bm{C}(M):=\bm{C}(x_1,x_2,x_3,x_4)$ by multiplicative actions, i.e. purely monomial automorphisms over $\bm{C}$. We are concerned with the rationality problem of the fixed field $\bm{C}(M)^G$. A tool of our investigation is the unramified Brauer group of the field $\bm{C}(M)^G$ over $\bm{C}$. It is known that, if the unramified Brauer group, denoted by ${\rm Br}_u(\bm{C}(M)^G)$, is non-trivial, then the fixed field $\bm{C}(M)^G$ is not rational (= purely transcendental) over $\bm{C}$. A formula for the unramified Brauer group ${\rm Br}_u(\bm{C}(M)^G)$ of the multiplicative invariant field was found by Saltman in 1990. However, calculating ${\rm Br}_u(\bm{C}(M)^G)$ for a specific multiplicative invariant field requires additional effort, even when the lattice $M$ is of rank $4$. There is a direct decomposition ${\rm Br}_u(\bm{C}(M)^G)= B_0(G) \oplus H^2_u(G,M)$ where $H^2_u(G,M)$ is some subgroup of $H^2(G,M)$. The first summand $B_0(G)$, which is related to the faithful linear representations of $G$, has been investigated by many authors, but the second summand $H^2_u(G,M)$ has received little attention except when the rank is $\le 3$.

Theorem 1. Among the $710$ finite groups $G$ and their associated faithful $G$-lattices $M$ with ${\rm rank}_\bm{Z} M=4$, there exist precisely $5$ lattices $M$ with ${\rm Br}_u(\bm{C}(M)^G)\neq 0$. In these situations, $B_0(G)=0$ and thus ${\rm Br}_u(\bm{C}(M)^G)\subset H^2(G,M)$.
The {\rm GAP IDs} of the five groups $G$ are {\rm (4,12,4,12), (4,32,1,2), (4,32,3,2), (4,33,3,1), (4,33,6,1)} in {\rm \cite{BBNWZ}} and in {\rm \cite{GAP}}.

Theorem 2. There exist $6079$ finite subgroups $G$ in $GL_5(\bm{Z})$. Let $M$ be the lattice of rank $5$ associated to each group $G$. Among these lattices, precisely $46$ satisfy the condition ${\rm Br}_u(\bm{C}(M)^G)\neq 0$. The {\rm GAP IDs} (actually the {\rm CARAT IDs}) of the corresponding groups $G$ may be determined explicitly. A similar result is also obtained for lattices of rank $6$. Motivated by these results, we construct $G$-lattices $M$ of rank $2n+2, 4n, p(p-1)$ ($n$ is any positive integer and $p$ is any odd prime number) satisfying $B_0(G)=0$ and $H^2_u(G,M)\neq 0$; therefore $\bm{C}(M)^G$ is not rational over $\bm{C}$.
Rationality problem, Noether's problem, crystallographic groups, unramified Brauer groups, algebraic tori
@inproceedings{akinarimultiplicative,
  title={Multiplicative invariant fields of dimension $\le 6$},
  author={Akinari Hoshi and Ming-chang Kang and Aiichi Yamasaki},
  url={http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170217150243872127448}
}
|
|
## Sunday, January 23, 2011
### A visit to 'Mos Espa' on 'Tatooine'
With Tunisia in the news I was reminded of a trip I made there two years ago to visit the desert. As part of the trip I made a side excursion to see part of the set of Star Wars Episode 1: The Phantom Menace. In the film a large part of the action takes place on Tatooine (which takes its name from the real Tunisian town of Foum Tatouine).
Not far from the town of Nefta is an area called Oung Jmel and a short 4x4 ride away is the set of Mos Espa.
Here's a shot of the small collection of buildings that form the set. The rest of Mos Espa must have been added using CGI.
From the outside the buildings look like they are made of stone, but in the interior there's a simple wooden frame, chicken wire and a lot of plaster.
Here's a shot down the 'main street' of the set with a moisture vaporator. The vaporators are actually pretty rough wooden structures made from what looks like plywood screwed together and painted grey as if it were metal. Just behind and to the right of the vaporator is the building used for Akim's Munch.
Here are a couple more in the middle of nowhere (just outside the set). Close to the set there was lots of flat empty land that must have been used for some shots. Standing there I kept wanting to look for a missing droid but the sun was too bright.
Back in the town there's a large open area with buildings all around. If anyone recognizes these from the film please let me know as I don't have a copy of it. The arch here leads out to the open land where I took a picture of the vaporators.
Some of the buildings (like this one) had quite sturdy looking doors on them and it was not possible to go into them at all. At least one seemed to be used for storage by the folks who were selling trinkets outside the town.
And finally, the most amusing part. Here's a close up shot of a 'cooling fin' on a vaporator. It's made of a plastic draining rack (something like this) screwed upside down to the wooden frame and painted the same grey colour.
If you enjoyed this blog post, you might enjoy my travel book for people interested in science and technology: The Geek Atlas. Signed copies of The Geek Atlas are available.
|
|
# 15.3 verify that dim(NS(A)) + Rank(A) = 5
#### karush
##### Well-known member
15.3 For the matrix
$$A=\begin{bmatrix} 1 & 0 &0 & 4 &5\\ 0 & 1 & 0 & 3 &2\\ 0 & 0 & 1 & 3 &2\\ 0 & 0 & 0 & 0 &0 \end{bmatrix}$$
(a) find a basis for RS(A) and dim(RS(A)).
ok I am assuming that since this is already in row echelon form, its nonzero rows form a basis for RS(A).
So...
$$RS(A)=(1,0,0,4,5),\quad(0,1,0,3,2),\quad(0,0,1,3,2)$$
also
dim(RS(A))= ??
(b) verify that dim(NS(A)) + Rank(A) = 5.
ok I am a little unsure what this means
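Not from the original thread: since $A$ is already in (reduced) row echelon form, the three pivot columns give rank 3, and the two free columns each yield one null-space basis vector. A plain-Python sketch of the check (the basis vectors below are my own back-substitution):

```python
# Matrix from the problem, already in reduced row echelon form.
A = [
    [1, 0, 0, 4, 5],
    [0, 1, 0, 3, 2],
    [0, 0, 1, 3, 2],
    [0, 0, 0, 0, 0],
]

def matvec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# For a matrix in row echelon form, the nonzero rows count the rank.
rank = sum(1 for row in A if any(row))  # 3

# Free variables x4 and x5: set one to 1, the other to 0, back-substitute
# through the pivot equations to get a null-space basis vector each.
ns_basis = [
    [-4, -3, -3, 1, 0],   # x4 = 1, x5 = 0
    [-5, -2, -2, 0, 1],   # x4 = 0, x5 = 1
]
for v in ns_basis:
    assert matvec(A, v) == [0, 0, 0, 0]  # each vector really lies in NS(A)

dim_ns = len(ns_basis)
print(rank, dim_ns, rank + dim_ns)  # 3 2 5 -- rank-nullity with 5 columns
```

So dim(RS(A)) = 3, dim(NS(A)) = 2, and their sum is the number of columns, 5.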
|
|
# Printing a nested data structure in Clojure
I wrote the following code which transforms a nested data structure
(def board [[{:mine true} {:warn 1 :explored true} {:explored true}]
[{:warn 1 :explored true} {:warn 1 :explored true} {} ]
[{} {} {:flag true } ]])
into a printable form and prints it
(("[ ]" "[1]" "[E]") ("[1]" "[1]" "[ ]") ("[ ]" "[ ]" "[F]"))
The functions for the transformation are the following:
(defn cell->icon
[cell]
(letfn [(cell->str [v] (format "[%s]" v))]
(if (:explored cell)
(cond (:mine cell) (cell->str "M")
(:warn cell) (cell->str (:warn cell))
:else (cell->str "E"))
(cond (:flag cell) (cell->str "F")
:else (cell->str " ")))))
(defn board->icons
[board]
(map (partial map cell->icon) board))
So far so good (if you have any recommendations for a nicer approach though, do not hesitate to mention them).
The function which I don't like though is the following:
(defn print-board
[board]
(doall
(map println
(map (partial clojure.string/join " ")
(board->icons board)))))
I don't like using println together with map since it is not a pure function (it has side effects). Maybe I am a bit too critical, but I would be glad if somebody could advise me or give a hint on how to do it in a nicer, more Clojure-like way.
## Separate impure IO from data transformations
I wrote the following code which transforms a nested data structure ... into a printable form
Not only does your code transform the data into a printable form, it also prints it.
To better isolate the side effects of printing, you should separate these two operations: first assemble a formatted string, then you can do whatever you want with it -- print it with a single call to println, save it, send it over the network, etc.
(defn format-board
[board]
(->> (board->icons board)
(map (partial str/join " "))
(str/join "\n")))
(def print-board
(comp println format-board))
## doseq
As a side note, instead of (doall (map println seq-to-print)) it is common to see
(doseq [line seq-to-print]
(println line))
which is usually what you want, since
• doseq does not hold onto the return value of println, whereas doall builds up an entire return sequence in memory (important when the sequence is big!) -- which is why your original print-board returns (nil nil nil) instead of nil; and
• this makes it more visually clear that you are doing something with each item in the sequence, instead of generating a new sequence with map
|
|
Chapter 200
### Definition and Etiology
Coccidioidomycosis, commonly known as Valley Fever, is caused by dimorphic soil-dwelling fungi of the genus Coccidioides. Genetic analysis has demonstrated the existence of two species, C. immitis and C. posadasii. These species are indistinguishable with regard to the clinical disease they cause and their appearance on routine laboratory media. Thus, the organisms will be referred to simply as Coccidioides for the remainder of this chapter.
### Epidemiology
Coccidioidomycosis is confined to the Western Hemisphere between the latitudes of 40°N and 40°S. In the United States, areas of high endemicity include the southern portion of the San Joaquin Valley of California and the south-central region of Arizona. However, infection may be acquired in other areas of the southwestern United States, including the southern coastal counties in California, southern Nevada, southwestern Utah, southern New Mexico, and western Texas, including the Rio Grande Valley. Outside the United States, coccidioidomycosis is endemic to northern Mexico as well as to localized regions of Central America. In South America, there are endemic foci in Colombia, Venezuela, northeastern Brazil, Paraguay, Bolivia, and north-central Argentina.
The risk of infection is increased by direct exposure to soil harboring Coccidioides. Because of difficulty in isolating Coccidioides from the soil, the precise characteristics of potentially infectious soil are not known. Several outbreaks of coccidioidomycosis have been associated with soil from archaeologic excavations of Amerindian sites both within and outside of the recognized endemic region. These cases often involved alluvial soils in regions of relative aridity with moderate temperature ranges. Coccidioides was isolated at depths of 2–20 cm below the surface.
In endemic areas, many cases of Coccidioides infection occur without obvious soil or dust exposure. Climatic factors appear to increase the infection rate in these regions. In particular, periods of aridity following rainy seasons have been associated with marked increases in the number of symptomatic cases. The number of cases of symptomatic coccidioidomycosis has increased dramatically in south-central Arizona, where most of the state's population resides. The factors causing this increase have not been fully elucidated; however, an influx of older individuals without prior coccidioidal infection into the region appears to be involved. Other variables, such as climate change, construction activity, and increased awareness and reporting, may also be factors. A similar increase in the incidence of symptomatic cases has recently been observed in the southern San Joaquin Valley of California.
### Pathogenesis, Pathology, and Immune Response
On agar media and in the soil, Coccidioides organisms exist as filamentous molds. Within this mycelial structure, individual filaments (hyphae) elongate and branch, some growing upward. Alternating cells within the hyphae degenerate, leaving barrel-shaped viable elements called arthroconidia. Measuring ∼2 by 5 μm, arthroconidia may become airborne for extended periods. Their small size allows them to evade initial mechanical mucosal defenses and reach deep into the bronchial tree, where infection is ...
|
|
# Explain why the chain rule is needed to find derivatives if $y$ is a function of $x$?
I know that if $y$ is a function of $x$, or $y=f(x)$, you need to use the chain rule to find its derivative.
Let's say I want to find the derivative of $y^2$ and $y$ is a function of $x$. Therefore, I would need to use the chain rule. What are the inside and outside functions of $y^2$ using the chain rule (Both terms would be functions of $x$)?
I also want to know why the derivative of $y^2$ is the same as the derivative of $2y$, by using the chain rule.
First, "I also want to know why the derivative of $y^2$ is the same as the derivative of $2y$, by using the chain rule." --> They aren't? I'm not sure what you mean here but $2y$ is the derivative of $y^2$ with respect to $y$. They're clearly not the same function and unless $y$ is trivial, you won't get the same thing when you take the derivatives of $y^2$ and $2y$.
Second, it's very important to specify what you're taking a derivative with respect to when you have multiple variables in play.
For example, let's say $y = x^2 + 5x$ and I want to take the derivative of $y^2$. This is an ambiguous statement. If you mean the derivative of $y^2$ with respect to $y$, then you would get $2y$. If you mean the derivative of $y^2$ with respect to $x$, then this is where the chain rule comes into play. The chain rule tells us that derivatives kind of work like fractions:
$$\frac{d}{dx}(y^2) = \frac{dy}{dx} \frac{d}{dy}(y^2) = \frac{dy}{dx} (2y) = (2x+5)(2y) = 2(2x+5)y = 2(2x+5)(x^2+5x)$$
The "inside function" would be $x^2+5x$ -- the derivative of this with respect to $x$ is the $\frac{dy}{dx}$ part. The "outside function" would be $y^2$ -- the derivative of this with respect to $y$ is the $\frac{d}{dy}(y^2)$.
It depends on what you are really dealing with. If y is a function of x and z is a function of y, as in $y=f(x)$ and $z=y^2$ then $$\frac{dz}{dy}=2y,\text{ but }\frac{dz}{dx}=\frac{dz}{dy}\frac{dy}{dx}=2yy'=2f(x)f'(x).$$
Not true that both inside and outside functions are functions of $x$. $y^2$ is a function of $y$. It's helpful to take a concrete example for $y=f(x)$. Take $y=\sin x$ for example. Then $y^2=(f(x))^2=\sin^2x$. Now work through the chain rule.
This only involves an application of the chain rule once. Let $y=f(x)$, then these things are the same: $$\frac{\mathrm{d}y}{\mathrm{d}x}=\frac{\mathrm{d}f}{\mathrm{d}x}=f'(x)$$
Now, let $g(x)=x^2$. Recall that $g'(x)=2x$. What you want to find is the derivative of $g(f(x))$ with respect to $x$. This means we can use the chain rule:
$$\frac{\mathrm{d}}{\mathrm{d}x} g(f(x)) = g'(f(x))f'(x) = \boxed{2f(x)f'(x)}$$
An example: suppose $y=f(x)=x^3 + 1$. What is the derivative of $y^2$ with respect to $x$?
$$\frac{\mathrm{d}}{\mathrm{d}x} y^2 = 2\left(x^3+1\right)\left(3x^2\right) = 6x^5 + 6x^2$$
We can also find this derivative of $y^2$ by expanding: $$\frac{\mathrm{d}}{\mathrm{d}x}\left(x^3+1\right)^2 = \frac{\mathrm{d}}{\mathrm{d}x} \left(x^6 + 2x^3 + 1\right) = 6x^5 + 6x^2$$
This last method doesn't require the chain rule at all, but requires that it is easy to both apply $g$ to $f$ and then take the resulting derivative directly.
• Makes sense, but which function is the inner and which function is the outer? Mar 9, 2015 at 22:46
• $f$ is the inner function, and $g$ is the outer function. Mar 9, 2015 at 22:48
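A quick numerical cross-check of the boxed formula $2f(x)f'(x)$ for the example $y=x^3+1$ (my own sketch, not part of the answers above):

```python
def f(x):
    return x**3 + 1          # the example y = f(x)

def fprime(x):
    return 3 * x**2          # f'(x)

def chain_rule(x):
    # d/dx [f(x)^2] = 2 f(x) f'(x), the boxed chain-rule result
    return 2 * f(x) * fprime(x)

def central_diff(g, x, h=1e-6):
    # numerical derivative, for comparison against the symbolic answer
    return (g(x + h) - g(x - h)) / (2 * h)

x = 1.7
# Agrees with the expanded form 6x^5 + 6x^2 ...
assert abs(chain_rule(x) - (6 * x**5 + 6 * x**2)) < 1e-9
# ... and with a direct numerical derivative of f(x)^2.
assert abs(chain_rule(x) - central_diff(lambda t: f(t)**2, x)) < 1e-3
```

The two assertions mirror the two derivations in the answer: the chain rule and the expand-then-differentiate route give the same function.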
|
|
# Random Wits
Life is too short for a diary
## Random
Mon 31 May 2021
### Dear Vishi, what's a carcinogen?
If you are not living under a rock, you have probably already heard about carcinogens. The news is often awash with headlines alerting us to the perils of carcinogens. In a nutshell, these are substances that have the potential to cause cancer. Not all carcinogens cause cancer. Not all cancer is caused by carcinogens. Cancer is a complicated group of diseases. Of course, it's not my forte to expound on the subject of cancer. However, we should try to understand how carcinogens are identified and what we should do to protect ourselves.
Cancer is pathological mitosis. The cells go bonkers where they have an insatiable urge to proliferate. Carcinogens have the potential to alter the genetic material of the cell. This genetic mutation sometimes can develop into cancer.
## How do we know if a substance is a carcinogen or not?
Earlier, scientists used to test a substance on animals like rodents for a year or more to see if cancer develops. However, this process is both time-consuming and expensive. Since there is a plethora of substances that need to be tested, there was a dire need to identify carcinogens cheaply and quickly.
Bruce Ames developed the Ames test. It uses bacteria to test whether a substance can cause mutation; the premise is that mutation often causes cancer. Not all substances that cause mutation are carcinogenic, e.g., nitrate compounds give false positives since they produce nitric oxide. Also, some carcinogens may test negative, e.g., nitroethane. Thus the Ames test is followed by additional experiments, like animal testing, to confirm carcinogenicity.
### How does the Ames test work?
Let’s say we take a strain of bacteria called Salmonella typhimurium on glucose-minimal salts agar plates. This strain carries a mutation that prevents it from producing histidine (an amino acid required for protein biosynthesis). Such bacteria are called auxotrophs, since they cannot produce a nutrient required for their growth. This is unlike prototrophs, which can synthesize the particular nutrient.
Small traces of histidine present on the plates help the bacteria grow for several generations until the histidine is exhausted, at which time they stop growing.
Again we repeat the same experiment, but we also add the suspected mutagenic substance to the plates. If the substance is mutagenic, we can see clusters of bacteria continue to grow even after the histidine is exhausted. This is because those bacteria have acquired a mutation that allows them to synthesize histidine.
So Ames test identifies if a substance can cause an auxotroph to revert to prototroph by back-mutation. But remember it has both false positives and false negatives.
## Meditation
Today I meditated for 15 minutes using background music on Youtube. Music aids me in meditation. I know some people prefer to meditate in silence. 'Doing nothing' is a difficult concept for my mind to grasp. I guess someday my mind won't wander during meditation. My mind often complains that wandering is not bad. It recites a famous poem
“All that is gold does not glitter,
Not all those who wander are lost;
The old that is strong does not wither,
Deep roots are not reached by the frost” - J.R.R. Tolkien
Today I resumed running on a treadmill. My body becomes lethargic if I skip even a single day. Stamina is something that needs to be built gradually. Below is a snapshot of my activity.
I guess running on a treadmill is the best use of the time while India is under lockdown due to Covid-19. In Charlotte, I used to run on a trail, which I prefer to running on a treadmill. The trail is covered with trees on both sides.
We should feel privileged that we can work from home during this Covid-19 pandemic. A vast number of people probably don't have that privilege. I am not religiously inclined and don't subscribe to dogmas like praying. However, if I had to pray, I would probably chant the following Sanskrit hymn.
सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामया,
सर्वे भद्राणि पश्यन्तु मा कश्चिद् दुख भागभवेत।
ऊँ शान्तिः शान्तिः शान्तिः
This loosely translates as
May all sentient beings be at peace, may no one suffer from illness
May all see what is auspicious, may no one suffer.
Om Peace, peace, peace.
|
|
### Conditionals in Homomorphic Encryption and Machine Learning Applications
Diego Chialva and Ann Dooms
##### Abstract
Homomorphic encryption has the purpose to allow computations on encrypted data, without the need for decryption other than that of the final result. This could provide an elegant solution to the problem of privacy preservation in data-based applications, such as those provided and/or facilitated by machine learning techniques, but several limitations and open issues hamper the fulfillment of this plan. In this work we assess the possibility for homomorphic encryption to fully implement its program without the need to rely on other techniques, such as multiparty computation, which may be impossible in many actual use cases (for instance due to the high level of communication required). We proceed in two steps: i) on the basis of the well-known structured program theorem [Bohm and Jacopini] we identify the relevant minimal set of operations homomorphic encryption must be able to perform to implement any algorithm; and ii) we analyse the possibility to solve -and propose an implementation for- the most fundamentally relevant issue as it emerges from our analysis, that is, the implementation of conditionals (which in turn require comparison and selection/jump operations) in full homomorphic encryption. We show how this issue has a serious impact and clashes with the fundamental requirements of homomorphic encryption. This could represent a drawback for its use as a complete solution in data analysis applications, in particular machine learning. It will thus possibly require a deep re-thinking of the homomorphic encryption program for privacy preservation. We note that our approach to comparisons is novel, and for the first time completely embedded in homomorphic encryption, differently from what proposed in previous studies (and beyond that, we supplement it with the necessary selection/jump operation). A number of studies have indeed dealt with comparisons, but have not managed to perform them in pure homomorphic encryption. 
Typically their comparison protocols do not utilise homomorphic encryption for the comparison itself, but rely on other cryptographic techniques, such as secure multiparty computation, which a) require a high level of communication between parties (each single comparison in a machine learning training and prediction process must be performed by exchanging several messages), which may not be possible in various use cases, and b) require the data owner to decrypt intermediate results, extract significant bits for the comparison, re-encrypt and send the result back to the other party for the accomplishment of the algorithm. Such "decryption" in the middle foils the purpose of homomorphic encryption. Besides requiring only homomorphic encryption, and not any intermediate decryption, our protocol is also provably safe (as it shares the same safety as the homomorphic encryption schemes), differently from other techniques such as OPE/ORE and variations, which have been proved not secure.
Note: Revised as requested by preprint editors as previous version was in two-column format, now the article is in single-column format.
Publication info
Preprint. MINOR revision.
Keywords
Homomorphic encryption, conditionals, machine learning applications
Contact author(s)
dchialva @ vub be
History
Short URL
https://ia.cr/2018/1032
CC BY
BibTeX
@misc{cryptoeprint:2018/1032,
author = {Diego Chialva and Ann Dooms},
title = {Conditionals in Homomorphic Encryption and Machine Learning Applications},
howpublished = {Cryptology ePrint Archive, Paper 2018/1032},
year = {2018},
note = {\url{https://eprint.iacr.org/2018/1032}},
url = {https://eprint.iacr.org/2018/1032}
}
|
|
Libby estimated that the steady-state radioactivity concentration of exchangeable carbon-14 would be about 14 disintegrations per minute (dpm) per gram.
This process begins when an organism is no longer able to exchange carbon with its environment.
Carbon-14 is first formed when cosmic rays in the atmosphere allow excess neutrons to be produced, which then react with nitrogen to produce a constantly replenishing supply of carbon-14 to exchange with organisms.
He demonstrated the accuracy of radiocarbon dating by accurately estimating the age of wood from a series of samples for which the age was known, including an ancient Egyptian royal barge dating from 1850 BCE.
Before radiocarbon dating could be discovered, someone had to find the existence of the carbon-14 isotope.
The World Ocean Circulation Experiment (WOCE) The World Ocean Circulation Experiment (WOCE); at the Woods Hole Oceanographic Institution's NOSAMS Facility.
Measuring carbon in the Pacific and Indian Oceans helps to better understand the processes of ocean circulation. The technique of radiocarbon dating was developed by Willard Libby and his colleagues at the University of Chicago in 1949. Emilio Segrè asserted in his autobiography that Enrico Fermi suggested the concept to Libby at a seminar in Chicago that year. The accuracy of this proposal was proven by dating a piece of wood from an ancient Egyptian barge whose age was already known. From that point on, scientists have used these techniques to examine fossils, rocks, and ocean currents, and to determine age and event timing. The half-life of a radioactive isotope (usually denoted by $t_{1/2}$) is a more familiar concept than the decay constant $k$, so although the decay law is expressed in terms of $k$, it is more usual to quote the value of $t_{1/2} = \ln 2 / k$.
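The decay law can be turned into a back-of-the-envelope age calculation. A sketch in plain Python, assuming Libby's 14 dpm/g as the initial activity and the commonly used half-life value of 5730 years:

```python
import math

HALF_LIFE = 5730.0               # years, commonly used value for carbon-14
K = math.log(2) / HALF_LIFE      # decay constant k from t_1/2 = ln(2)/k
A0 = 14.0                        # dpm per gram: Libby's steady-state estimate

def age_from_activity(a_measured):
    """Years since the organism stopped exchanging carbon,
    solved from A = A0 * exp(-k * t)."""
    return math.log(A0 / a_measured) / K

# A sample measuring half the initial activity is one half-life old.
print(round(age_from_activity(7.0)))   # 5730
```

The names `A0` and `age_from_activity` are my own; real dating also involves calibration curves that this sketch ignores.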
|
|
I would like to generate a random vector in which its elements are between 0 and 1 (including 0 and 1). As far as I know, v = random_matrix(ZZ, 1, 10) is a way to generate a random matrix. How would I restrict the elements to the range I want?
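No answer is included in the snippet above, but in plain Python (not Sage-specific) a uniform random vector over $[0,1]$ can be sketched like this; note that `random.uniform(0, 1)` includes the upper endpoint only up to floating-point rounding:

```python
import random

random.seed(0)  # just to make the example reproducible

# 10 entries, each drawn uniformly from [0, 1]; the upper endpoint
# may or may not occur depending on floating-point rounding.
v = [random.uniform(0.0, 1.0) for _ in range(10)]

assert len(v) == 10
assert all(0.0 <= x <= 1.0 for x in v)
```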
|
|
# Molniya (satellite)
Molniya
Molniya 1
Manufacturer: Experimental Design Bureau (OKB-1)
Country of origin: USSR
Operator: Experimental Design Bureau (OKB-1)
Applications: Communications and surveillance
Specifications
Bus: KAUR-2
Launch mass: 1,600 kg (3,500 lb)
Dimensions: 4.4 m tall, 1.4 m base diameter [1]
Power: 6 solar panels + batteries [1]
Regime: Molniya orbit
Design life: 1.5 to 5 years
Production
Status: Retired
Launched: 164 [2]
Maiden launch: Molniya 1-1, 23 April 1965 [2]
Last launch: Molniya 1-93, 18 February 2004 [1]
Succeeded by: Meridian
The Molniya (Russian: Молния, IPA: [ˈmolnʲɪjə], "Lightning") series satellites are military and communications satellites launched by the Soviet Union from 1965 to 2004. These satellites use highly eccentric elliptical orbits known as Molniya orbits, which have a long dwell time over high latitudes. They are suited for communications purposes in polar regions, in the same way that geostationary satellites are used for equatorial regions.[3]
There were 164 Molniya satellites launched, all in Molniya orbits with the exception of Molniya 1S which was launched into geostationary orbit for testing purposes.[4][5]
## History
In the early 1960s, when Europe and America were establishing geostationary communication satellites, the Russians found these orbits unsuitable.[6] They were limited in the amount of rocket power available, and it is extremely energy-intensive both to launch a satellite to 40,000 km and to change its inclination to be over the equator, especially when launching from Russia. Additionally, geostationary satellites give poor coverage in polar regions, which make up a large portion of Russian territory.[7] As a result, OKB-1 sought a less energy-consuming orbit. Studies found that this could be achieved using a large elliptical orbit, with an apogee over Russian territory.[6] The satellite's name, "quick as lightning", is a reference to the speed with which it passes through perigee.[8]
## Molniya 1
The Molniya programme was authorized on 30 October 1961 and design was handled by OKB-1.[9][10] They were based on the KAUR-2 satellite bus, with design finishing in 1963. The first launch took place on 4 June 1964 and ended in failure when the 8K78 booster core stage lost thrust 287 seconds into launch due to a jammed servo motor. The next attempt was on 22 August 1964 and reached orbit successfully, but the parabolic communications antennas did not properly deploy due to a design flaw in the release mechanism. Publicly referred to as Kosmos 41, it nonetheless operated for nine months. The first operational satellite, Molniya 1-1, was successfully launched on 23 April 1965.[9] By 30 May 1966, the third Molniya 1 had taken the first images of the whole Earth in history.[11]
The early Molniya-1 satellites were designed for television, telegraph and telephone across Russia,[9] but they were also fitted with cameras used for weather monitoring, and possibly for assessing clear areas for Zenit spy satellites.[12] The system was operational by 1967, with the construction of the Orbita groundstations.[9]
They had a lifespan of approximately 1.5 years, as their orbits were disrupted by perturbations and their solar arrays deteriorated, so they had to be constantly replaced.[13][9][14]
By the 1970s, the Molniya 1 series (and the upgrade Molniya 1T) was mostly used for military communications, with civilian communications moving to Molniya 2.[9]
In total 94 Molniya 1 series satellites were launched, with the last going up in 2004.[2]
## Molniya 2
The first Molniya 2 satellites were tested from 1971, with the first operational satellite launching in 1974 from Plesetsk. They used the same satellite bus and basic design as later-model Molniya 1 satellites, but served an expanded number of users under the military's Unified System of Satellite Communications (YeSSS) program. Development was difficult because the final satellite bus was unpressurized, changing the selection of radios.[10]
These satellites were used in the Soviet national Orbita television network, which had been established a few years earlier in 1967.[10]
Only seventeen Molniya 2 series satellites were launched, as they were soon superseded by the Molniya 3.[2]
## Molniya 3
Originally called the Molniya-2M, their development began in 1972, with launches from 1974. They were also based on the KAUR-2 bus, launching solely from Plesetsk. Earlier models were used for civilian communications, in a similar orbit to, but with a different purpose from, the military-only Molniya-1 satellites. From the 1980s they were used by the military, and by the 1990s they were operated in the same manner as the Molniya 1 satellites.[15]
A total of 53 Molniya 3 series satellites were launched, with the last one going up in 2003.[2]
## Orbital Properties
Groundtrack of a Molniya orbit. In the operational part of the orbit (four hours on each side of apogee), the satellite is north of 55.5° N (the latitude of, for example, central Scotland, Moscow, and the southern part of Hudson Bay). A satellite in this orbit spends most of its time over the northern hemisphere and passes quickly over the southern hemisphere.
A typical Molniya series satellite has:
• Semi-major axis: 26,600 km
• Eccentricity: 0.74
• Inclination: 63.4° [13]
• Argument of perigee: 270°
• Period: 718 minutes [16]
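These elements are mutually consistent; here is a quick sketch using Kepler's third law (assuming the standard values μ ≈ 398,600 km³/s² for Earth's gravitational parameter and 6,371 km for the mean Earth radius, which are not figures from the article):

```python
import math

MU = 398_600        # Earth's gravitational parameter, km^3/s^2 (standard value)
R_EARTH = 6_371     # mean Earth radius, km

a = 26_600          # semi-major axis, km (from the list above)
e = 0.74            # eccentricity

# Apogee/perigee radii follow directly from a and e
r_apogee = a * (1 + e)
r_perigee = a * (1 - e)

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60

print(f"apogee altitude:  {r_apogee - R_EARTH:,.0f} km")   # ~39,900 km
print(f"perigee altitude: {r_perigee - R_EARTH:,.0f} km")  # ~550 km
print(f"period:           {period_min:.0f} min")           # ~720 min
```

The period of roughly 720 minutes matches the half-sidereal-day requirement discussed below, and the perigee altitude of only a few hundred kilometres shows why atmospheric drag constrains the eccentricity.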
### Inclination
In general, the oblateness of the Earth perturbs the argument of perigee (${\displaystyle \omega }$), so that even if the apogee started near the north pole, it would gradually drift unless constantly corrected with station-keeping thruster burns. Keeping the dwell point over Russia, where it is useful for communications, without excessive fuel use required an inclination of 63.4°, for which these perturbations are zero.[17][16]
### Period
Similarly, to ensure the ground track repeats every 24 hours the nodal period needed to be half a sidereal day.
### Eccentricity
To maximise the dwell time near apogee, the eccentricity, which sets the difference between the apogee and perigee altitudes, had to be large.
However, the perigee needed to be far enough above the atmosphere to avoid drag, and the orbital period needed to be approximately half a sidereal day. Together, these two factors constrained the eccentricity to approximately 0.737.[16]
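As a rough sanity check (assuming an illustrative perigee altitude of about 600 km and the semi-major axis fixed by the half-sidereal-day period; the perigee figure is an assumption, not a quoted value):

$e = 1 - \frac{r_p}{a} \approx 1 - \frac{6371 + 600}{26600} \approx 0.74$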
## Successors
Molniya series satellites were replaced by the Meridian series, with the first launch in 2006.[18] There are currently 38 Molniya satellites left in orbit.[19]
## References
1. ^ a b c John Pike (ed.). "Molniya". Global Security.org.
2. "Satellite Catalog". Space-Track.org. SAIC. Retrieved 22 February 2019.
3. ^ Martin, Donald H. (2000). Communication Satellites. AIAA. pp. 215–. ISBN 9781884989094. Retrieved 1 January 2013.
4. ^ Gunter Dirk Krebs. "Molniya-1S". Gunter's Space Page.
5. ^ https://www.globalsecurity.org/space/world/russia/geo.htm
6. ^ a b Anatoly Zak. "Russian communications satellites". Russian Space Web. Retrieved 22 May 2018.
7. ^ Robert A. Braeunig. "Basics of Space Flight: Orbital Mechanics". www.braeunig.us. Archived from the original on 5 February 2012. Retrieved 6 March 2019.
8. ^ Capderou, Michel (23 April 2014). Handbook of Satellite Orbits: From Kepler to GPS. p. 393. ISBN 9783319034164.
9. History Committee of the American Astronautical Society (23 August 2010). Stephen B. Johnson (ed.). Space Exploration and Humanity: A Historical Encyclopedia. 1. Greenwood Publishing Group. p. 416. ISBN 978-1-85109-514-8. Retrieved 17 April 2019.
10. ^ a b c Mark Wade. "Molniya-2". astronautix.com.
11. ^ Joel Achenbach (3 January 2012). "Spaceship Earth: The first photos". Retrieved 16 June 2020.
12. ^ Hendrickx, Bart. "A History of Soviet/Russian Meteorological Satellites" (PDF). bis-space.com. Antwerpen, Belgium. p. 66.
13. ^ a b Kolyuka, Yu. F.; Ivanov, N.M.; Afanasieva, T.I.; Gridchina, T.A. (28 September 2009). Examination of the Lifetime, Evolution and Re-Entry Features for the "Molniya" Type Orbits (PDF). 21st International Symposium of Space Flight Dynamics. Toulouse, France: Mission Control Center 4, Korolev, Moscow. p. 2. Retrieved 22 May 2018.
14. ^
15. ^ Mark Wade. "Molniya-3". astronautix.com.
16. ^ a b c Kidder, Stanley Q.; Vonder Haar, Thomas H. (18 August 1989). "On the Use of Satellites in Molniya Orbits of Meteorological Observation of Middle and High Latitudes". Journal of Atmospheric and Oceanic Technology. 7 (3): 517. doi:10.1175/1520-0426(1990)007<0517:OTUOSI>2.0.CO;2.
17. ^ Wertz, James Richard; Larson, Wiley J. (1999). Wiley J. Larson and James R. Wertz (ed.). "Space Mission Analysis and Design". Space Mission Analysis and Design. Bibcode:1999smad.book.....W.
18. ^ Zak, Anatoly. "The Meridian satellite (14F112)". RussianSpaceWeb. Archived from the original on 26 May 2011. Retrieved 3 May 2011.
19. ^ "SatCat: Molniya payload search". Space Track.
# How to draw arrows on each arc and line in \path?
When I draw paths:
\path [draw=blue]
(0,0)
arc(-60:60:1.732)
arc(120:240:1.732);
I want to add an arrow in the middle of each arc within this \path.
Like this:
I have tried the "show path construction" (curveto code) style from the decorations.pathreplacing library. However, it doesn't create a decoration on each arc, but one every 90 degrees (or less). This is because the "curveto code" in "show path construction" creates a decoration for each Bézier curve, and one circle is made up of 4 Bézier curves.
First of all, I want to tell you that using only one \path throughout the picture is not recommended; you should really use a separate \path for each different type of line being drawn.
If you want to compress many lines into a single \path, you need something like the edge operation. Unfortunately, edge doesn't accept arc or similar in its options, so I use out and in here. As a result, the output curves are not exactly circular arcs; I hope they are close enough.
I also made some improvements to the middlearrow style in ferahfeza's nice answer, so that it can handle nodes.
\documentclass[tikz]{standalone}
\usetikzlibrary{decorations.markings}
\tikzset{
middlearrow/.style n args={3}{
draw,
decoration={
markings,
mark=at position 0.5 with {
\arrow{#1};
\path[#2] node {$#3$};
},
},
postaction=decorate
}
}
\begin{document}
\begin{tikzpicture}[>=stealth]
\coordinate (x) at (0,0);
\coordinate (y) at (0,3);
\path (x) edge[out=60,in=-60,middlearrow={<}{left}{b}] (y)
edge[out=30,in=-30,middlearrow={>}{right}{s}] (y)
edge[out=120,in=-120,middlearrow={<}{right}{a}] (y)
edge[out=150,in=-150,middlearrow={>}{left}{r}] (y);
\end{tikzpicture}
\end{document}
• Thank you very much JouleV! Yes, edge works very well. Sometimes I use edge to draw some illustrative diagrams when accurate controlling is not necessary. In some cases, I need accurate length and angles of arcs (with middle arrows on them). – Frank Apr 13 '19 at 17:42
• Previously I mainly used the second solution under this question: tex.stackexchange.com/questions/3161/…, but it will add more than one arrow when the angle of an arc is greater than 90. The reason is that, as I said in the description of the question, the number of arrows is determined by the number of Bézier curves. If the angle of an arc is greater than 90, at least two Bézier curves are used to create it. – Frank Apr 13 '19 at 17:42
• @Frank Of course you can choose anything you want. This is only my proposal. – user156344 Apr 13 '19 at 17:44
• Thank you for the solution. I really apreciate your effort. Thanks! – Frank Apr 13 '19 at 17:46
Define a \path for each arc.
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{decorations.markings}
\begin{document}
% Middlearrow code is from:
% https://tex.stackexchange.com/a/39283
\tikzset{middlearrow/.style={
decoration={markings,
mark= at position 0.5 with {\arrow{#1}} ,
},
postaction={decorate}
}
}
\begin{tikzpicture}[line width=0.5mm,>=stealth]
\path [draw=blue,middlearrow={>}]
(0,0)
arc(-60:60:1.732)coordinate (A) node[midway,right,yshift=-1mm]{$s$} ;
\path [draw=blue,middlearrow={<}]
(0,0)
arc(-30:30:3)node[midway,left,yshift=-0.75mm]{$b$};
\path [draw=blue,middlearrow={>}]
(0,0)
arc(240:120:1.732)node[midway,left,yshift=-1mm]{$r$} ;
\path [draw=blue,middlearrow={<}]
(0,0)
arc(210:150:3)node[midway,right,yshift=-1mm]{$a$} ;
\fill [blue] (0,0) circle (2pt);
\fill [blue] (A) circle (2pt);
\end{tikzpicture}
\end{document}
• Thank you very much! Is there any other solution which needs only one \path? I want to use relative coordinates to draw arcs. – Frank Apr 13 '19 at 10:13
• @Frank What do you mean by "relative coordinates"? Is two paths solution ok, because there are two types of arrow? Moreover, why do you want only one path? Four paths cost you nothing at all. – user156344 Apr 13 '19 at 11:45
• @JouleV I mean I may use say two or more arcs in one path, with each arc connected end to end, and each decorated by an arrow. The two path solution is OK. Thank you for your reply. – Frank Apr 13 '19 at 12:03
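For completeness, a single-path variant is also possible by placing decoration marks at fixed positions. This is an untested sketch that assumes both arcs from the question have equal length (each spans 120° at the same radius), so the fractions 0.25 and 0.75 land at the middle of each arc:

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{decorations.markings}
\begin{document}
\begin{tikzpicture}[>=stealth,line width=0.5mm]
\path [draw=blue,
  decoration={markings,
    % the two arcs have equal length, so positions 0.25 and 0.75
    % fall at the midpoint of each arc
    mark=at position 0.25 with {\arrow{>}},
    mark=at position 0.75 with {\arrow{>}}},
  postaction=decorate]
  (0,0) arc(-60:60:1.732) arc(120:240:1.732);
\end{tikzpicture}
\end{document}
```

If the arcs had different lengths, the positions would need to be adjusted by hand (or computed from the arc lengths), which is one more reason to prefer one \path per arc.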
# Computation with arrays of numbers
My code is
m = RandomVariate[NormalDistribution[], 100];
c = 0.1;
f1 = RandomVariate[NormalDistribution[], 100];
F = Sqrt[c]*m + Sqrt[1 - c]*f1;
I would like to calculate F in the first step with the first element of m and the first element of f1; in the second step with the second element of m and the second element of f1 and so on, for all 100 random numbers.
• and how can I do it if I have 100 random numbers and would like to calculate F1 for m[1] and f1[1] ... m[100] and f1[100] – Sarah Mar 13 '19 at 11:59
• I do not understand the question. The vector F contains all 100 results. – Szabolcs Mar 13 '19 at 14:10
• If I'm interpreting you right, the code that you've written down should work. If it doesn't do what you want, either quit the kernel and try again, or edit your post to explain what it is you need. – march Mar 13 '19 at 16:23
• Like the commenters above remarked: there is no need to iterate the calculation of F over 100 indices. You can just add and multiply lists in Mathematica. – Sjoerd Smit Mar 13 '19 at 16:57
• I was just finishing up what I think is a good answer, when this question snapped closed. I think the OP's intent is clear and I would to see this question reopened. – m_goldberg Mar 13 '19 at 22:16
I think it would be useful for you to experiment with your computation so you can better understand how Mathematica's numeric functions work with arrays of values. I also recommend that you read this tutorial. It describes computing with functions that have the Listable attribute. Such functions automatically act on all the elements of any array they are given.
An easy way to make the kind of experiments I'm recommending is to transform your computation into a function, which is very easy to do.
f[n_] :=
Module[{c = 0.1, m, f1},
{f1, m} = RandomVariate[NormalDistribution[], {2, n}];
Sqrt[c] m + Sqrt[1 - c] f1]
Note that no explicit element-by-element operations appear in the definition of f. Nor are any such operations needed since all the arithmetic operators in the definition have the Listable attribute.
Now that we have f, we can experiment with samples of arbitrary size. We will see that they, too, can be used as if they were scalar expressions.
It is best to start with small n.
SeedRandom[1]; f4 = f[4]
{0.949105, 0.602169, 0.289773, 0.950457}
We can get the root-mean-square without writing any explicit element-by-element code.
Sqrt[Mean[f4^2]]
0.750124
Generating a large sample is very quick.
AbsoluteTiming[SeedRandom[42]; sample = f[100000]][[1]]
0.007525
I got a sample of 100000 variates in just a bit over 7.5 milliseconds from my eight-year-old and not very fast iMac. Let's look into how the sample is distributed.
{Mean[#], StandardDeviation[#]}&[sample]
{-0.0006108, 1.00239}
h = Histogram[sample, Automatic, "Probability"];
With[{μ = 0, σ = 1}, p = Plot[(E^(-(x - μ)^2/(2 σ^2))/5), {x, -5, 5}]]
Show[h, p]
The sample seems to be distributed as if it were directly sampled from NormalDistribution[].
As a final example, let's approximate the cumulative distribution of a sample returned by f.
SeedRandom[42];
Module[{n = 1000, sample, x, y},
sample = f[n];
x = Sort[sample];
y = Range[n]/N[n];
ListPlot[Transpose[{x, y}]]]
I hope this answer convinces you that the Wolfram Language is designed so that almost anything you might want to do with a sample made by f can be done without loops and indexing into elements of the sample.
Sqrt[c]*m[[#]] + Sqrt[1 - c]*f1[[#]] & /@ Range[100]
or use the function
F1[x_]:=Sqrt[c]*m[[x]] + Sqrt[1 - c]*f1[[x]]
and you can get F1[1],F1[2]...F1[100]
or F1/@Range[100] to get all of them
• You don't even need to do that. All these functions are Listable, so just evaluating the code as the OP has written it does the same thing. – march Mar 13 '19 at 16:23
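As an aside, the same element-wise behaviour exists in other array languages. Here is a hedged NumPy sketch of the OP's computation (the variable names mirror the question; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed
c = 0.1
n = 100_000

m = rng.standard_normal(n)
f1 = rng.standard_normal(n)

# Arithmetic on arrays is element-wise, just like Listable functions:
# F[i] = Sqrt[c]*m[i] + Sqrt[1-c]*f1[i] for every i at once.
F = np.sqrt(c) * m + np.sqrt(1 - c) * f1

# Since Var[F] = c + (1 - c) = 1, F is again standard normal.
print(F.mean(), F.std())
```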
# Find Mass given half life
Noreturn
## Homework Statement
http://d2vlcm61l7u1fs.cloudfront.net/media/f62/f62b6919-e60f-405b-81fb-2323ceeb03ee/phpsL4bVl.png
## Homework Equations
So I need that in micrograms tho. So 4402*10^-18/1000=4.4*10^-18kg. or 4.4*10^-12 micrograms
that stills say it's wrong tho.
Homework Helper
Gold Member
Perhaps you should use the activity equation A(t) = A0 e^(-λt) to find the initial activity A0 and convert that to the initial number of nuclei.
Noreturn
Didn't we do that at the bottom of the image?
Homework Helper
Gold Member
Watch units. A becquerel is defined how?
Noreturn
Oh! So:
4402 * (10^(-18)) kg = 4.402 × 10^(-6) micrograms
BUT becquerel is s^-1 @ 10^-6 so answer is 4.4 micrograms?
Homework Helper
Gold Member
Didn't we do that at the bottom of the image?
Not really. To reinforce what @TSny posted, assuming that you found A0, what is the number for λ in N0 = A0/λ when A0 has Bq units?
Homework Helper
Gold Member
BUT becquerel is s^-1 @ 10^-6 so answer is 4.4 micrograms?
Noreturn
I was not able to figure this out. Any help on where I went wrong is appreciated. So I'm guessing I did the -.1322 wrong based on what you guys have mentioned.
Homework Helper
Gold Member
Look at the equation A = λ N. Put in the numbers including the units attached to each number. Then you will see what is going on.
Noreturn
Do I need to divide by Avogadro constant?
Homework Helper
Gold Member
Do I need to divide by Avogadro constant?
Not in this equation. Just do what I suggested.
Noreturn
4*10^9 Bq = 0.132 yr^-1 * N
or
4*10^9 Bq = 4.18291693 × 10^(-9) s^-1 * N
N= 9.56*10^17
9.56*10^17/e^(.132*3) = 6.43*10^17
(6.43*10^17)(1.66*10^-27)(58.93) =6.28*10^-8kg or 6.3ug
Last edited:
Homework Helper
Gold Member
That's better but still incorrect. What does N= 9.56*10^17 represent? What about 6.43*10^17? What number is that? If N is the number of undecayed nuclei, after 3 years (when the activity is 4*10^9Bq) and 6.43*10^17 is the initial number of undecayed nuclei, which one should be the larger number? Also, what are the units of 1.66*10^-27 and 58.93?
Noreturn
So multiplying 1.66*10^-27 and 58.93 converts the mass number of cobalt from amu to kg.
The Initial should be bigger.
Just realized I may have had it right but I forgot to convert the kg to g. So my answer should have been 6.28ug
$$m=AW \left(\frac{grams}{mole} \right)\times N (atoms) \times \frac{1}{N_{Avog.}} \left(\frac{mole}{atoms} \right)=AW\times \frac{N}{N_{Avog.}}(grams)$$
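The full chain can be sketched in Python. All the numbers below (half-life, activity, elapsed time, atomic weight) are read off the thread above rather than the original problem statement, which is only available as an image, so treat them as assumptions and the final value as illustrative:

```python
import math

# Numbers read off the thread above; the original problem statement is
# only available as an image, so these are assumptions, not verified values.
half_life_yr = 5.25      # gives lambda ~ 0.132 per year
A = 4e9                  # activity after t years, Bq (decays per second)
t_yr = 3
AW = 58.93               # g/mol, the value used in the thread

SECONDS_PER_YEAR = 3.156e7
N_AVOGADRO = 6.022e23

lam = math.log(2) / (half_life_yr * SECONDS_PER_YEAR)  # decay constant, 1/s

# A = lam * N gives the number of undecayed nuclei *now* ...
N_now = A / lam
# ... and N(t) = N0 * exp(-lam*t) gives N0 = N(t) * exp(+lam*t):
# the initial count must be LARGER than the current one.
N0 = N_now * math.exp(lam * t_yr * SECONDS_PER_YEAR)

mass_g = AW * N0 / N_AVOGADRO  # m = AW * N / N_Avogadro, in grams
print(f"initial mass: {mass_g * 1e6:.0f} micrograms")
```

With these assumed inputs the initial mass comes out around 1.4 × 10^-4 g; the point is the direction of the exponential (N0 > N(t)) and the unit bookkeeping, not the specific value.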
effective molarity (effective concentration)
https://doi.org/10.1351/goldbook.E01896
The ratio of the first-order rate constant of an intramolecular reaction involving two functional groups within the same molecular entity to the second-order rate constant of an analogous intermolecular elementary reaction. This ratio has the dimension of concentration. The term can also apply to an equilibrium constant.
## Linear congruence
Hi all.
Can someone please tell me what is going wrong here.
Solve
$$12x \equiv 1 \pmod 5$$
$$\gcd(12,5) = 1$$
By Euclid's Algorithm =>
$$1 = 5\cdot 5 - 2\cdot 12$$
So r is 5 in this case.
$$x = r ( \frac{b}{d} )$$
Where b is 1 and d = gcd(12,5) = 1
$$x = 5 ( \frac{1}{1} )$$
$$x = 5$$
Ok fair enough but then I solve the congruence using
$$x \equiv b\, a^{\phi(m)-1} \pmod m$$
$$x \equiv (1)\, 12^3 \pmod 5$$
$$x \equiv 3 \pmod 5$$
I know this is the correct solution but what did I do wrong in the other one.
Thanks for the help!
I don't know if this is valid, but isn't the first expression equivalent to $$2x \equiv 1 \pmod 5$$? Then 2x = "6" mod 5, so $$x\equiv 3 \pmod 5$$. Yes, I know one is not supposed to do division, but the modulus is prime, so there is a multiplicative inverse that I multiplied by (3).
Ok, I'm not very good at this, but why is the first one equivalent to $$2x \equiv 1 \pmod 5$$? Did you reduce the 12 (mod 5)? Are you allowed to do that?
I think so, as $$12\equiv 2 \pmod 5$$
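To close the loop on what went wrong in the first attempt: in $$1 = 5\cdot 5 - 2\cdot 12$$ the relevant Bézout coefficient is the one attached to 12, namely -2, not the coefficient of 5. A quick Python check confirms this (a sketch; pow(a, -1, m) computes modular inverses in Python 3.8+):

```python
# Bezout identity from Euclid's algorithm: 1 = 5*5 - 2*12
assert 5 * 5 - 2 * 12 == 1

# Reducing mod 5, the 5*5 term vanishes, leaving (-2)*12 = 1 (mod 5),
# so the inverse of 12 is -2, i.e. 3 -- not the coefficient of 5.
x = -2 % 5
assert x == 3

# Python 3.8+ computes modular inverses directly:
assert pow(12, -1, 5) == 3

# Euler's theorem route: x = b * a^(phi(m)-1) mod m, with phi(5) = 4
assert (1 * 12**3) % 5 == 3
```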
# Optimal Polynomial Approximation
Created 2022-03-13, Edited 2022-07-20
An introduction to linear algebra and least squares on functions
## Prerequisites
• Know what an integral represents
• Know about Taylor series for approximating functions
• Sufficient mathematical maturity
## Introduction
If you're confused, see Notation; if that doesn't clear it up, contact me so I can fix the post to be more understandable.
## Demo
This polynomial
$0.005643 x^{5} - 0.15527 x^{3} + 0.98786 x$
Is the best fifth degree polynomial approximation for $\sin x$ over $[-\pi,\pi]$ (within roundoff of course).
Do you see the two graphs? They're so close it's hard to tell there are two! If you compute the squared error (where $p(x)$ denotes our polynomial approximation)
$\int_{-\pi}^\pi (\sin(x) - p(x))^2 dx = 0.000116$
We find it's a really good approximation for $\sin(x)$; Taylor polynomials don't even compare, with an error of $0.1187$.
The best part? These optimal polynomial approximations can be efficiently found for any function; in fact, for complicated functions they're easier to find than Taylor series!
Here's the code if you want to play with it before learning the math
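One possible implementation is sketched below, using NumPy's Legendre tools (the post's own code may differ; the Legendre polynomials are orthogonal on $[-1,1]$, which is exactly the orthogonality machinery developed below):

```python
import numpy as np
from numpy.polynomial import legendre, polynomial

# Sample sin on [-pi, pi]; substituting x = pi*u maps the interval
# to [-1, 1], where the Legendre polynomials are orthogonal.
u = np.linspace(-1.0, 1.0, 20_001)
f = np.sin(np.pi * u)

# Least-squares fit in the Legendre basis; with this many samples the
# discrete fit is a very close stand-in for the continuous L2 projection.
coef = legendre.legfit(u, f, deg=5)
p = legendre.Legendre(coef)

# Convert to power-series coefficients in u, then rescale to x = pi*u:
# a coefficient of u^k becomes a coefficient of x^k divided by pi^k.
cu = p.convert(kind=polynomial.Polynomial).coef
cx = cu / np.pi ** np.arange(len(cu))
print(cx)  # odd coefficients ~ 0.98786, -0.15527, 0.005643
```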
Done? Okay, then get ready for...
## Linear Algebra
Though a bit of an exaggeration, it can be said that a mathematical problem can be solved only if it can be reduced to a calculation in linear algebra. Thomas Garrity
Before you can truly understand the demo we've got to learn some linear algebra! To be specific you need to know
• Vectors and vector spaces
• Subspaces
• Bases
• Inner products and orthogonality
• Orthonormal bases
• Orthogonalization
I won't be covering linear transformations (the heart of linear algebra) at all, because they aren't needed here.
Important: Attempt the exercises and read the solutions. I've put a lot of content in the solutions so skipping them will make reading this impossible.
### The standard space $\mathbf R^n$
Before we get to polynomials, let's consider "vanilla" linear algebra, where every vector is a list of numbers in $\mathbf R^n$.
Adding and scaling vectors works as you'd expect
\begin{aligned} (x_1,x_2) + (y_1,y_2) &= (x_1+y_1,x_2+y_2) \\ \lambda (x_1,x_2) &= (\lambda x_1, \lambda x_2) \end{aligned}
In abstract linear algebra vectors are any object you can add and scale like this.
Exercise: Functions and polynomials are also vectors; convince yourself of this
Solution
We can add functions and multiply by scalars and the result will still be a function, same with polynomials
The set that vectors are contained in is called a vector space. A vector space has a few requirements
1. Closed under addition: If $v \in V$ and $w \in V$ then $w + v \in V$
2. Closed under scaling: If $v \in V$ and $\lambda \in \mathbf R$ then $\lambda v \in V$
Without these properties working in a vector space would be impossible, imagine adding two vectors and not knowing if the result is also in the space!
Exercise: Why must the zero vector be in every vector space?
Solution
Set $\lambda = 0$ and use (2).
Exercise: Show the space of polynomials of degree exactly $n$ is not a vector space
Solution
It doesn't contain the zero polynomial.
Exercise: Fix the problem shown in the previous exercise to get us a vector space of polynomials
Solution
Consider the space of polynomials with degree at most $n$ (denoted $\mathcal P_n$). This satisfies the vector space rules since
1. Adding polynomials of degree at most $n$ results in polynomials with degree at most $n$, so we're closed under addition
2. Scaling polynomials can only decrease their degree, so we're closed under scaling
### Subspaces
A vector space $U$ is said to be a subspace of $V$ if $U$ is contained in $V$, ie. $U \subseteq V$.
Exercise: Show the space of polynomials $\mathcal P$ (of any degree) is a subspace of all real functions (denoted $\mathbf R^{\mathbf R}$)
Solution
The set of polynomials $\mathcal P$ is a vector space, and every polynomial is also a function, in set notation $\mathcal P \subseteq \mathbf{R}^{\mathbf R}$. Hence $\mathcal P$ is a subspace of $\mathbf R^{\mathbf R}$.
Exercise: Interpret our original problem of finding $n$th degree polynomial approximations as a special case of: given a $v \in V$ find the closest $u \in U$ to $v$ (often called the projection of $v$ onto $U$)
Solution
Consider $V = \mathbf C^{[a,b]}$ (the space of functions continuous on $[a,b]$) and $U = \mathcal P_n$. Then $\sin \in V$, and we want the closest $n$th degree polynomial $p \in \mathcal P_n$ to $\sin(x)$.
### The dot product
To compute the dot product between two vectors in $\mathbf R^n$ (lists of numbers) we multiply components then add up, ie.
$\langle (1,2), (3,4)\rangle = 1\cdot 3 + 2\cdot 4 = 11$
As a special case the dot product of a vector with itself gives the length squared $\langle v,v\rangle = \|v\|^2$
The dot product has a very satisfying geometric interpretation: If $\theta$ is the angle between $v$ and $w$ then
$\langle v, w\rangle = \|v\| \|w\|\cos\theta$
Geometrically this means the dot product is the length of projecting $v$ onto the line through $w$ then scaling by $\|w\|$ (or the other way around). See this video for why.
Exercise: Convince yourself that $\langle u+v,w\rangle = \langle u,w\rangle+\langle v,w\rangle$ and $\lambda \langle v,w\rangle = \langle \lambda v, w\rangle$ when $\langle\cdot\rangle$ denotes the dot product.
Exercise: Using the projection interpretation, what is the projection of $v$ onto the line through $u$?
Solution
First normalize $u$ by dividing by its length, then take the dot product and scale in the direction of $u$, which we normalize again to avoid unwanted scaling
$\langle v, u/\|u\|\rangle \cdot (u/\|u\|)$
This is usually written as (after using linearity of $\langle\cdot\rangle$)
$\frac{\langle v, u\rangle}{\langle u, u\rangle}u$
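As a quick sanity check, this formula is a one-liner in NumPy (a small sketch; the assert verifies the residual is orthogonal to $u$):

```python
import numpy as np

def project_onto_line(v, u):
    """Projection of v onto the line through u: <v,u>/<u,u> * u."""
    return (np.inner(v, u) / np.inner(u, u)) * u

v = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])
p = project_onto_line(v, u)
print(p)  # [3. 0.] -- the component of v along the x-axis

# The residual v - p is orthogonal to u
assert np.isclose(np.inner(v - p, u), 0.0)
```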
Exercise: Expand $\|v-w\|^2$ in terms of dot products, then use the geometric interpretation to prove the law of cosines.
Solution
Expand using linearity just like you would with polynomials!
\begin{aligned} \|v-w\|^2 &= \langle v-w,v-w\rangle \\ &= \langle v, v\rangle - 2\langle v, w \rangle + \langle w, w\rangle \\ &= \|v\|^2 + \|w\|^2 - 2\langle v, w\rangle \end{aligned}
Using the geometric interpretation this becomes
$\|v-w\|^2 = \|v\|^2 + \|w\|^2 - 2\|v\|\|w\|\cos\theta$
Which is the law of cosines!
Exercise: Compute the area of the parallelogram formed between $v$ and $w$ using dot products
Solution
Consider this diagram
The height is $h = \|v\|\sin\theta$ meaning the area is $\|v\|\|w\|\sin\theta$. Writing this in terms of dot products we get
\begin{aligned} \|v\|\|w\|\sin\theta &= \|v\|\|w\|\sqrt{1 - \cos^2\theta} \\ &= \|v\|\|w\|\sqrt{1 - \left(\frac{\langle v,w\rangle}{\|v\|\|w\|}\right)^2} \\ &= \sqrt{\|v\|^2\|w\|^2 - \langle v, w\rangle^2} \end{aligned}
Exercise (optional): Using the result from the previous exercise, derive the formula for the determinant of a 2x2 matrix (if you know what this means)
Solution
Suppose we have the matrix
$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
Consider the basis vectors as $v = (a,b)$ and $w = (c,d)$. Then the area between them is the determinant (watch these if that doesn't make sense) meaning
\begin{aligned} \det A &= \sqrt{\|v\|^2 \|w\|^2 - \langle v, w\rangle^2} \\ &= \sqrt{(a^2 + b^2)(c^2 + d^2) - (ac + bd)^2} \\ &= \sqrt{(ac)^2 + (ad)^2 + (bc)^2 + (bd)^2 - ((ac)^2 + 2acbd + (bd)^2)} \\ &= \sqrt{(ad)^2 + (bc)^2 - 2acbd} \\ &= \sqrt{(ad - bc)^2} \\ &= ad - bc \end{aligned}
### General inner products
Like vectors, there can be many kinds of inner products, for instance on the space $\mathbf C^{[a,b]}$ of continuous functions on $[a,b]$ we can define an inner product
$\langle f, g\rangle = \int_a^b f(x)g(x)dx$
Intuitively this is the continuous version of the dot product, where we treat functions as infinite-dimensional vectors.
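We can approximate this inner product numerically (a sketch using a simple trapezoid-rule sum over a fine grid):

```python
import numpy as np

def inner(f, g, a, b, n=100_001):
    """Approximate <f,g> = integral of f(x)g(x) over [a,b], trapezoid rule."""
    x = np.linspace(a, b, n)
    y = f(x) * g(x)
    h = (b - a) / (n - 1)
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# sin and cos are orthogonal on [-pi, pi] ...
print(inner(np.sin, np.cos, -np.pi, np.pi))  # ~0
# ... and <sin, sin> = pi, so ||sin|| = sqrt(pi)
print(inner(np.sin, np.sin, -np.pi, np.pi))  # ~3.14159
```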
Like we abstracted the notion of a "vector" we also abstract the notion of an inner product to be anything that obeys a set of rules
1. Positivity $\langle v, v\rangle \ge 0$
2. Definiteness $\langle v, v\rangle = 0$ if and only if $v = 0$
3. Symmetry $\langle v,w\rangle = \langle w, v\rangle$
4. Additive $\langle u+v, w\rangle = \langle u,w\rangle + \langle v, w\rangle$
5. Multiplicative $\lambda\langle v,w\rangle = \langle\lambda v,w\rangle$
Exercise: Convince yourself that both inner products I've given so far satisfy these properties.
To generalize length we define the norm of a vector $v$ as $\|v\| = \sqrt{\langle v,v\rangle}$
Exercise: Generalize the law of cosines to any inner product $\langle\cdot\rangle$.
Solution
Exactly the same steps as with the law of cosines
\begin{aligned} \|v - w\|^2 &= \langle v-w, v-w\rangle \\ &= \langle v,v\rangle - 2\langle v,w\rangle + \langle w,w\rangle \\ &= \|v\|^2 + \|w\|^2 - 2\langle v,w\rangle \end{aligned}
If we were in $\mathbf R^n$ we could use the geometric interpretation to write $\langle v,w\rangle = \|v\|\|w\|\cos\theta$ like we did for the law of cosines.
Exercise: In the geometric interpretation we showed the dot product can be used to compute the cosine of the angle between two vectors, ponder the infinite-dimensional analog of this in function space (the cosine similarity of two functions!)
Solution
$\cos\theta = \frac{\langle v, w\rangle}{\sqrt{\langle v,v\rangle\cdot \langle w,w\rangle}}$
Since the integral inner product is just a limit of dot products as the vectors get bigger and the samples become finer, it makes sense to define the cosine similarity between functions this way. We could also extract the angle using the inverse cosine if we felt like it, and talk about the angle between functions!
Exercise (hard): Why do we need $f$ and $g$ to be continuous and not just integrable? (hint: think about property 2)
Solution
The function
$f(x) = \begin{cases} 0 &{x \ne 0} \\ 1 &{x = 0} \end{cases}$
Is integrable on $[-1,1]$ with $\langle f,f\rangle = \int_{-1}^1 f(x)^2dx = 0$, but $f \ne 0$ breaking requirement 2.
Showing this can't happen with continuous functions is tricky, but it boils down to $f(x)^2 > 0$ implying there's a neighborhood around $x$ with $f(y)^2 > 0$ meaning it contributes nonzero area.
### Orthogonality
We say two vectors $v$ and $w$ are orthogonal if $\langle v, w\rangle = 0$
Exercise: Prove the pythagorean theorem for an arbitrary inner product, ie. prove that $\langle v,w\rangle = 0$ implies $\|v+w\|^2 = \|v\|^2 + \|w\|^2$.
Solution
We expand using the linearity of inner products and cancel the $2\langle v,w\rangle$ term
\begin{aligned} \|v+w\|^2 &= \langle v+w,v+w\rangle \\ &= \langle v,v\rangle + 2\langle v,w\rangle + \langle w,w\rangle \\ &= \|v\|^2 + \|w\|^2 \end{aligned}
Exercise: If $\langle \cdot\rangle$ is the dot product and $v,w \in \mathbf R^n$ are nonzero vectors, show $v$ and $w$ are orthogonal if and only if they are perpendicular (hint: use the geometric interpretation)
Solution
To see this directly notice the projection is zero exactly when $v,w$ are perpendicular. (see this video)
We can also use the geometric formula
$\langle v, w\rangle = \|v\|\|w\|\cos\theta$
Because $v,w$ are nonzero we have $\|v\| > 0$ and $\|w\| > 0$ meaning this is only zero when $\cos\theta = 0$, which only happens when $\theta$ is a multiple of $90$ degrees, ie. when $v$ and $w$ are perpendicular.
### Bases
A basis for a vector space $V$ is a list $v_1,\dots,v_n$ such that
1. Spanning: Every $v \in V$ can be written $v = a_1v_1 + \dots + a_nv_n$ for some scalars $a_1,\dots,a_n$.
2. Independent: The representation of $v$ as a combination of $v_1,\dots,v_n$ is unique.
Exercise: Convince yourself $(1,0)$ and $(0,1)$ form a basis for $\mathbf R^2$
Exercise: Show $(1, 2)$ and $(0, 2)$ form a basis of $\mathbf R^2$
Solution
Every $(x,y) \in \mathbf R^2$ can be written
$(x,y) = x\cdot(1,2) + \frac{y - 2x}{2}\cdot(0, 2)$
Hence $(1,2), (0,2)$ is spanning, to show independence note that
$a_1(1,2) + a_2(0,2) = b_1(1,2) + b_2(0,2)$
Implies (via rearrangement)
$(a_1-b_1)(1,2) = (b_2-a_2)(0,2)$
Since the first coordinates must be equal, this implies $a_1-b_1 = 0$, so the left-hand side is zero, meaning we must have $(b_2-a_2)(0, 2) = 0$ as well, implying $b_2 - a_2 = 0$. Thus $a_1 = b_1$ and $a_2 = b_2$, and we have shown independence.
We say a basis is orthogonal if $\langle v_j, v_k\rangle = 0$ when $j \ne k$. And we say a basis is orthonormal if it's orthogonal and $\langle v_j, v_j\rangle = 1$ (ie. the basis vectors have unit length).
### Orthogonalization
Given a basis $v_1,\dots,v_n$ of $V$ it's possible to construct an orthonormal basis $e_1,\dots,e_n$ of $V$.
Start by setting $e_1 = v_1/\|v_1\|$ so that $\langle e_1,e_1\rangle = 1$. Now we want $e_2$ in terms of $v_2$ such that $\langle e_1, e_2\rangle = 0$.
We want this to be zero, so we subtract the projection of $v_2$ onto $e_1$ (see the geometric interpretation for why I call it a projection)
$e_2 = v_2 - \langle e_1, v_2\rangle e_1$
Exercise: Verify $\langle e_1, e_2\rangle = 0$ using linearity (hint: $\langle e_1,v_2\rangle$ is just a number)
Solution
\begin{aligned} \langle e_1,e_2\rangle &= \langle e_1, v_2 - \langle e_1, v_2\rangle e_1\rangle \\ &= \langle e_1, v_2\rangle - \langle e_1, v_2\rangle \langle e_1, e_1\rangle \\ &= \langle e_1, v_2\rangle - \langle e_1, v_2\rangle \\ &= 0 \end{aligned}
I left out the step where we normalize $e_2$, but understand after we make it orthogonal we normalize it too, in programming terms $e_2 \leftarrow e_2/\|e_2\|$.
In general we subtract the projection onto each of the previous basis vectors
$e_i = v_i - \sum_{j=1}^{i-1} \langle v_i, e_j\rangle e_j$
Exercise: Show $\langle e_i, e_j\rangle = 0$ (hint: without loss of generality assume $j \lt i$)
Solution
Using the definition of $e_i$ and orthogonality gives
\begin{aligned} \langle e_j, e_i\rangle &= \left\langle e_j, v_i - \sum_{k=1}^{i-1} \langle v_i,e_k\rangle e_k\right\rangle \\ &= \langle e_j, v_i\rangle - \sum_{k=1}^{i-1} \langle v_i, e_k\rangle \langle e_j, e_k\rangle \\ &= \langle e_j, v_i\rangle - \langle v_i, e_j\rangle \langle e_j, e_j\rangle \\ &= \langle e_j, v_i\rangle - \langle v_i, e_j\rangle \\ &= 0 \end{aligned}
The key difference from the $\langle e_1,e_2\rangle$ exercise above is using the orthogonality to cancel the other terms in the sum.
Exercise: Translate the mathematical algorithm I've described into python code using numpy and test it
Solution
```python
import numpy as np

def normalize(v):
    return v / np.sqrt(np.inner(v, v))

v = [np.array(vi) for vi in [(1,4,2), (3,2,1), (5,1,2)]]
e = [normalize(v[0])]
for i in range(1, len(v)):
    e.append(normalize(
        v[i] - sum(
            np.inner(v[i], e[j])*e[j]
            for j in range(i)
        )
    ))

# test it
for i in range(len(v)):
    for j in range(i):
        assert np.allclose(np.inner(e[i], e[j]), 0)
```
This method is called the Gram–Schmidt process.
### Least squares
We finally get to solve the approximation problem! We have a vector space $V$, a subspace $U$ and some vector $v \in V$. We want to find the closest vector to $v$ in $U$ (ie. the projection).
If you've been following along and doing the exercises, you're close to understanding the method. The main ideas are:
1. We can write $v = a_1u_1+\dots+a_nu_n + b_1v_1+\dots+b_mv_m$ where $u_1,\dots,u_n,v_1,\dots,v_m$ is an orthonormal basis of $V$, and $u_1,\dots,u_n$ is an orthonormal basis of $U$.
2. We can extract $a_1,\dots,a_n$ using inner products and orthogonality, meaning we can extract the component of $v$ in $U$ giving us the projection.
Exercise: Find a simple formula for $a_j$ in terms of $v$ and $u_j$
Solution
$a_j = \langle v, u_j\rangle$
Since orthogonality kills off every other term in
$v = a_1u_1 + \dots + a_nu_n + b_1v_1 + \dots + b_mv_m$
The above exercise allows us to define the projection $P_U(v)$ of $v$ as
$P_U(v) = \langle u_1, v\rangle u_1 + \dots + \langle u_n, v\rangle u_n$
Exercise (optional): Prove $P_U : V \to U$ is a linear map (if you know what that means)
Solution
Via substitution and linearity of $\langle\cdot,\cdot\rangle$ we see
\begin{aligned} P_U(v+w) &= \langle u_1, v+w\rangle u_1 + \dots + \langle u_n, v+w\rangle u_n \\ &= \langle u_1, v\rangle u_1 + \dots + \langle u_n, v\rangle u_n \\ &+ \langle u_1, w\rangle u_1 + \dots + \langle u_n, w\rangle u_n \end{aligned}
The $P_U(\lambda v)$ case is the same.
Furthermore, since $P_U(u_j) = u_j$ and $P_U(v_j) = 0$, the matrix of $P_U$ in the $u_1,\dots,v_m$ basis is the identity (for the $u$'s) followed by zeros (for the $v$'s).
Notice $P_U(v) = a_1u_1+\dots+a_nu_n$: we extracted the $U$ component of $v$ using inner products and orthogonality!
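As a quick numerical sanity check, here is a minimal numpy sketch of the projection formula (the orthonormal basis here is hand-picked rather than produced by Gram–Schmidt):

```python
import numpy as np

# Hand-picked orthonormal basis of a 2D subspace U of R^3
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
v = np.array([3.0, 4.0, 5.0])

# P_U(v) = <u1, v> u1 + <u2, v> u2
proj = np.inner(u1, v) * u1 + np.inner(u2, v) * u2
print(proj)  # the component outside U (the z part) is dropped: [3. 4. 0.]
```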
Now, intuitively it makes sense for $P_U(v)$ to be the closest vector to $v$, and in $\mathbf R^n$ the intuition is correct. But for arbitrary vector spaces we need a more general argument.
Consider the remainder (error) vector $r = P_U(v) - v$. The key fact: the remainder is orthogonal to $U$.
Exercise: Show $\langle r, u\rangle = 0$ for all $u \in U$ (hint: use $U$'s basis and linearity of $\langle\cdot,\cdot\rangle$)
Solution
Let $u \in U$ and let $c_1,\dots,c_n$ be such that
$u = c_1u_1+\dots+c_nu_n$
Meaning
$\langle r, u\rangle = c_1\langle r, u_1\rangle + \dots + c_n\langle r, u_n\rangle$
Now we expand the $r = P_U(v) - v$ part of $\langle r, u_j\rangle$
\begin{aligned} \langle r, u_j\rangle = \langle P_U(v), u_j\rangle - \langle v, u_j\rangle \end{aligned}
Now from the definition of $P_U(v)$ and orthogonality we see $\langle P_U(v), u_j\rangle = \langle v, u_j\rangle$, meaning
$\langle r, u_j\rangle = \langle P_U(v), u_j\rangle - \langle v, u_j\rangle = 0$
Applying this for $j=1,\dots,n$ gives $\langle r, u\rangle = 0$ as desired.
Because of this, any tweak $du \in U$ to $P_U(v)$ can only move us away from $v$! We can prove this using the generalized Pythagorean theorem, since $r$ and $du$ are orthogonal.
\begin{aligned} \|(P_U(v) + du) - v\|^2 &= \|r + du\|^2 \\ &= \|r\|^2 + \|du\|^2 \end{aligned}
This has a minimum at $du = 0$, meaning we can't do better!
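This argument is easy to check numerically. A small sketch, where I take $U$ to be the span of the first two standard basis vectors (so $P_U$ just zeroes the last coordinate):

```python
import numpy as np

rng = np.random.default_rng(0)

# r = P_U(v) - v is orthogonal to U, so for any tweak du in U the
# generalized Pythagorean theorem gives ||r + du||^2 = ||r||^2 + ||du||^2.
v = rng.normal(size=3)
proj = np.array([v[0], v[1], 0.0])   # P_U(v) for U = span(e1, e2)
r = proj - v

du = np.array([0.7, -1.3, 0.0])      # an arbitrary tweak inside U
lhs = np.inner(r + du, r + du)
rhs = np.inner(r, r) + np.inner(du, du)
assert np.allclose(lhs, rhs)
```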
### Polynomial approximation
We've covered all the linear algebra we need! It's time to walk through approximating $\sin(x)$ over $[-\pi, \pi]$ with a fifth degree polynomial.
Here's a simplified version of the code, which you should be able to understand!
```python
import sympy as sp

x = sp.Symbol('x')

# function we're approximating
def f(x):
    return sp.sin(x)

a, b = -sp.pi, sp.pi

def inner(f, g):
    return sp.integrate(f*g, (x, a, b))

def normalize(v):
    return v / sp.sqrt(inner(v, v))

# basis of U (polynomials up to degree 5)
B = [1, x, x**2, x**3, x**4, x**5]

# orthogonalize B
O = [normalize(B[0])]
for i in range(1, len(B)):
    O.append(normalize(
        B[i] - sum(
            inner(B[i], O[j])*O[j]  # <v_i, e_j>
            for j in range(0, i)
        )
    ))

# get coefficients
coeffs = [inner(f(x), O[j]) for j in range(len(O))]

# turn the coefficients into a sympy polynomial
poly = sum(O[j]*coeffs[j] for j in range(len(coeffs)))

# print the polynomial as latex!
print(sp.latex(poly))

# print the polynomial in numeric form
print(sp.latex(poly.evalf()))
# => 0.00564312 x^{5} - 0.1552714 x^{3} + 0.9878622 x
```
Full code (with more features) is here. Have fun playing with it!
If you don't understand, tell me! I'll be happy to explain, I want this to be understandable to everyone.
## Notation
We denote vectors in a vector space $V$ by $v,w,u \in V$. If we're dealing with a subspace $U$, then $u$ denotes a vector $u \in U$.
Scalars (real numbers) are denoted by $\lambda$ or $a,b,c$. Often when taking linear combinations we have a list of scalars $a_1,\dots,a_n$.
We denote the inner product between $v$ and $w$ as $\langle v, w\rangle$; the specific inner product is clear from context. In other sources the dot product is written $v^Tw$ or $v\cdot w$; these notations are specific to the dot product and don't extend to general inner products.
# Non-existence of a certain continuous function
Prove that $\not \exists$ a function $f: \mathbb{C} \to \mathbb{C}$ such that $f$ is continuous on $0< |z| < 1$ and satisfies the property $(f(z))^n = z$ for $n > 1$.
Comment: I am assuming this requires contradiction. But I am failing to find the contradictory point.
-
Your question is a bit ambiguous. Is $f$ supposed to be continuous on $\mathbb{C}$, or just on the open unit disc? Not that it really matters - neither exist - but still... – fgp Oct 16 '12 at 0:06
I clarified it now. Thanks! – user44069 Oct 16 '12 at 0:15
@OrestXherija You might want to either post your solution as an answer, or delete the question (which you can do since it hasn't been answer yet, I think) – fgp Oct 16 '12 at 0:43
I did not solve it! I just edited the question so that it is not ambiguous anymore. – user44069 Oct 16 '12 at 1:22
# Algorithm for directly finding the leading eigenvector of an irreducible matrix
According to the Perron-Frobenius theorem, a real matrix with only positive entries (or one with non-negative entries with a property called irreducibility) will have a unique eigenvector that contains only positive entries. Its corresponding eigenvalue will be real and positive, and will be the eigenvalue with greatest magnitude.
I have a situation where I'm interested in such an eigenvector. I'm currently using numpy to find all the eigenvalues, then taking the eigenvector corresponding to the one with largest magnitude. The trouble is that for my problem, when the size of the matrix gets large, the results start to go crazy, e.g. the eigenvector found that way might not have all positive entries. I guess this is due to rounding errors.
Because of this, I'm wondering if there's an algorithm that can give better results by making use of the facts that $(i)$ the matrix has non-negative entries and is irreducible, and $(ii)$ we're only looking for the eigenvector whose entries are positive. Since there are algorithms that can make use of other matrix properties (e.g. symmetry), it seems reasonable to think this might be possible.
While writing this question it occurred to me that just iterating $\nu_{t+1} = \frac{A\nu_t}{|A\nu_t|}$ will work (starting with an initial $\nu_0$ with positive entries), but I imagine that for a large matrix the convergence will be very slow, so I guess I'm looking for a more efficient algorithm than this. (I'll try it though!)
Of course, if the algorithm is easy to implement and/or has been implemented in a form that can easily be called from Python, that's a huge bonus.
Incidentally, in case it makes any kind of difference, my problem is this one. I'm finding that as I increase the matrix size (finding the eigenvector using Numpy as described above) it looks like it's converging, but then suddenly starts to jump all over the place. This instability gets worse the smaller the value of $\lambda$.
The algorithm you describe that computes $x^{(k+1)}=\frac{Ax^{(k)}}{\|Ax^{(k)}\|}$ is of course what is called the power method. It will converge in your case if you have a non-degenerate largest eigenvalue. Furthermore, if you start with an initial guess $x^{(0)}$ that has only positive entries, you are guaranteed that all future iterates are also strictly positive and, moreover, that round-off should not be an issue since in every matrix-vector product you only ever add up positive terms. In other words, this method must work if your problem has the properties you describe.
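A minimal numpy sketch of this iteration (the function name, tolerance, and test matrix are my own choices, not from the question):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10_000):
    """Leading eigenvector of a non-negative irreducible matrix A
    via x <- A x / ||A x|| (minimal sketch)."""
    x = np.ones(A.shape[0])          # strictly positive initial guess
    for _ in range(max_iter):
        y = A @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return y

A = np.array([[2.0, 1.0], [1.0, 3.0]])
v = power_method(A)
lam = v @ A @ v                      # Rayleigh quotient (||v|| = 1)
# v has all positive entries and satisfies A v ≈ lam v
assert np.all(v > 0)
assert np.allclose(A @ v, lam * v, atol=1e-6)
```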
• Won't the convergence be quite slow, though? My matrices are potentially huge, since I'm trying to approximate the limit of infinite matrix size. (But then, maybe it's not slow, I haven't tried yet. I'll do it this afternoon if I get a chance.) Feb 10 '14 at 4:47
• The convergence rate depends on the ratio $\lambda_1/\lambda_2$, where $\lambda_1$ is the largest eigenvalue and $\lambda_2$ the next largest, provided that the eigenspace corresponding to $\lambda_1$ is 1-dimensional. For some families of matrices, the ratio may stay the same even as the matrix size increases. Feb 10 '14 at 5:31
• It turns out that it isn't all that slow for my problem - but I found a way to make it much faster anyway, which I've posted as an answer. Feb 10 '14 at 11:50
I tried the power method, as described in my question and in Wolfgang Bangerth's answer. It turns out that the convergence isn't unacceptably slow for my application, though it does depend on the parameters. So in a way this was kind of a dumb question.
However, I noticed that there's a way to exponentially increase the speed of this algorithm, by doing repeated squaring of the matrix. That is, let $B_0=A$ and iterate $B_{n+1}=\frac{B_n^2}{\|B_n^2\|}$, where $\|\cdot\|$ is whatever matrix norm you feel like. (I just summed all the elements, since they're all positive anyway.) This very rapidly converges to a matrix whose columns are each proportional to the leading eigenvector. This is because $B_n \nu_0 \propto A^{2^n}\nu_0$, so iterating this algorithm for $n$ steps is the same as doing power iteration for $2^n$ steps.
Although multiplying large matrices can be slow (especially in numpy, unfortunately), I'm finding that this tends to converge pretty nicely after around 10 to 15 iterations, so on the whole it's pretty fast.
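A sketch of the squaring idea in numpy (names, the normalization choice, and the iteration count are my own; any norm works since the entries stay positive):

```python
import numpy as np

def leading_eigenvector_by_squaring(A, n_squarings=15):
    """Iterate B <- B^2 / ||B^2||; after n squarings, B is proportional to
    A^(2^n) (up to scaling), so its columns align with the leading eigenvector."""
    B = A / A.sum()
    for _ in range(n_squarings):
        B = B @ B
        B /= B.sum()                 # sum of entries as the norm (all positive)
    v = B[:, 0]                      # any column is proportional to the eigenvector
    return v / np.linalg.norm(v)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
v = leading_eigenvector_by_squaring(A)
lam = v @ A @ v
assert np.allclose(A @ v, lam * v, atol=1e-6)
```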
• Alternatively, you could accelerate the method by using inverse power iteration. In your case, it amounts to solving the system A - I (or A - 0.999 I) at each iteration. Feb 10 '14 at 14:53
• Are you multiplying your matrices explicitly? I cannot imagine a scenario where it is faster than matrix-vector multiplications. The algorithm you describe might converge in fewer iterations, but each iteration involves calculating $B_n \times B_n$, and it should be much more expensive than $B_n v$. Maybe I am missing something... Feb 10 '14 at 15:28
• @sebas the choice is between 15 matrix multiplications, or $2^{15}$ matrix-vector ones. I guess for large matrices the vector version will win out, but for smallish ones this is way faster. Feb 10 '14 at 15:54
• @Nathaniel It depends on the size of the matrix. I was just wondering how large are the matrices for your problem. I know that for large enough matrices just 2 or 3 matrix-matrix multiplications is completely prohibitive. Moreover if your matrices are sparse, because the matrix-matrix product will increase the fill-in of the resulting matrix. Just a comment I thought can be useful. Feb 10 '14 at 16:02
• @sebas honestly I think you're right, my idea isn't worthwhile once the matrices are above 500x500 or so. But for exploring the parameter space with a size of around 100 it's quite handy. Feb 10 '14 at 16:05
# 005 Sample Final A, Question 13
Question Give the exact value of the following if its defined, otherwise, write undefined.
${\displaystyle (a)\sin ^{-1}(2)\qquad \qquad (b)\sin \left({\frac {-32\pi }{3}}\right)\qquad \qquad (c)\sec \left({\frac {-17\pi }{6}}\right)}$
Foundations:
1) What is the domain of ${\displaystyle \sin ^{-1}?}$
2) What are the reference angles for ${\displaystyle {\frac {-32\pi }{3}}}$ and ${\displaystyle {\frac {-17\pi }{6}}}$?
1) The domain is ${\displaystyle [-1,1].}$
2) The reference angle for ${\displaystyle {\frac {-32\pi }{3}}}$ is ${\displaystyle {\frac {4\pi }{3}}}$, and the reference angle for ${\displaystyle {\frac {-17\pi }{6}}}$ is ${\displaystyle {\frac {7\pi }{6}}}$
Step 1:
For (a), we want an angle ${\displaystyle \theta }$ such that ${\displaystyle \sin(\theta )=2}$. Since ${\displaystyle -1\leq \sin(\theta )\leq 1}$, it is impossible
for ${\displaystyle \sin(\theta )=2}$. So, ${\displaystyle \sin ^{-1}(2)}$ is undefined.
Step 2:
For (b), we need to find the reference angle for ${\displaystyle {\frac {-32\pi }{3}}}$. If we add multiples of ${\displaystyle 2\pi }$ to this angle, we get the
reference angle ${\displaystyle {\frac {4\pi }{3}}}$. So, ${\displaystyle \sin \left({\frac {-32\pi }{3}}\right)=\sin \left({\frac {4\pi }{3}}\right)={\frac {-{\sqrt {3}}}{2}}}$.
Step 3:
For (c), we need to find the reference angle for ${\displaystyle {\frac {-17\pi }{6}}}$. If we add multiples of ${\displaystyle 2\pi }$ to this angle, we get the
reference angle ${\displaystyle {\frac {7\pi }{6}}}$. Since ${\displaystyle \cos \left({\frac {7\pi }{6}}\right)={\frac {-{\sqrt {3}}}{2}}}$, we have
${\displaystyle \sec \left({\frac {-17\pi }{6}}\right)=\sec \left({\frac {7\pi }{6}}\right)={\frac {2}{-{\sqrt {3}}}}={\frac {-2{\sqrt {3}}}{3}}}$.
a) undefined
b) ${\displaystyle {\frac {-{\sqrt {3}}}{2}}}$
c) ${\displaystyle {\frac {-2{\sqrt {3}}}{3}}}$
# Powers of Ten
In this powers of ten learning exercise, 5th graders multiply and divide decimals by multiples of ten. They mentally solve simple problems containing powers of ten. This two-page learning exercise contains 10 problems.
# Computing the Pixel Coordinates of a 3D Point
## Finding the 2D Pixel Coordinates of a 3D Point: Explained from Beginning to End
When a point or vertex is defined in the scene and is visible to the camera, the point appears in the image as a dot (or, more precisely, as a pixel if the image is digital). We already talked about the perspective projection process, which is used to convert the position of that point in 3D space to a position on the surface of the image. But this position is not expressed in terms of pixel coordinates. How do we find the final 2D pixel coordinates of the projected point in the image? In this chapter, we will review how points are converted from their original world position to their final raster position (their position in the image in terms of pixel coordinates).
The technique we will describe in this lesson is specific to the rasterization algorithm (the rendering technique used by GPUs to produce images of 3D scenes). If you want to learn how it is done in ray-tracing, check the lesson Ray-Tracing: Generating Camera Rays.
### World Coordinate System and World Space
When a point is first defined in the scene, we say its coordinates are specified in world space: the coordinates of this point are described with respect to a global or world Cartesian coordinate system. The coordinate system has an origin, called the world origin, and the coordinates of any point defined in that space are described with respect to that origin (the point whose coordinates are [0,0,0]). Points are expressed in world space (Figure 4).
### 4x4 Matrix Visualized as a Cartesian Coordinate System
Objects in 3D can be transformed using any of the three operators: translation, rotation, and scale. If you recall what we said in the lesson dedicated to Geometry, linear transformations (in other words, any combination of these three operators) can be represented by a 4x4 matrix. If you are not sure why and how this works, read the lesson on Geometry again, particularly the following two chapters: How Does Matrix Work Part 1 and Part 2. Remember that the first three coefficients along the diagonal encode the scale (the coefficients c00, c11, and c22 in the matrix below), the first three values of the last row encode the translation (the coefficients c30, c31, and c32, assuming you use the row-major order convention), and the 3x3 upper-left inner matrix encodes the rotation (the red, green, and blue coefficients).
$$\begin{bmatrix} \color{red}{c_{00}}& \color{red}{c_{01}}&\color{red}{c_{02}}&\color{black}{c_{03}}\\ \color{green}{c_{10}}& \color{green}{c_{11}}&\color{green}{c_{12}}&\color{black}{c_{13}}\\ \color{blue}{c_{20}}& \color{blue}{c_{21}}&\color{blue}{c_{22}}&\color{black}{c_{23}}\\ \color{purple}{c_{30}}& \color{purple}{c_{31}}&\color{purple}{c_{32}}&\color{black}{c_{33}}\\ \end{bmatrix} \begin{array}{l} \rightarrow \quad \color{red} {x-axis}\\ \rightarrow \quad \color{green} {y-axis}\\ \rightarrow \quad \color{blue} {z-axis}\\ \rightarrow \quad \color{purple} {translation}\\ \end{array}$$
When you look at the coefficients of a matrix (the actual numbers), it might be challenging to know precisely what the scaling or rotation values are because rotation and scale are combined within the first three coefficients along the diagonal of the matrix. So let's ignore scale now and only focus on rotation and translation.
As you can see, we have nine coefficients that represent a rotation. But how can we interpret what these nine coefficients are? So far, we have looked at matrices, but let's now consider what coordinate systems are. We will answer this question by connecting the two: matrices and coordinate systems.
The only Cartesian coordinate system we have discussed so far is the world coordinate system. This coordinate system is a convention used to define the coordinates [0,0,0] in our 3D virtual space and three unit axes that are orthogonal to each other (Figure 4). It's the prime meridian of a 3D scene - any other point or arbitrary coordinate system in the scene is defined with respect to the world coordinate system. Once this coordinate system is defined, we can create other Cartesian coordinate systems. As with points, these coordinate systems are characterized by a position in space (a translation value) but also by three unit axes or vectors that are orthogonal to each other (which, by definition, are what Cartesian coordinate systems are). Both the position and the values of these three unit vectors are defined with respect to the world coordinate system, as depicted in Figure 4.
In Figure 4, the purple coordinates define the position. The coordinates of the x, y, and z axes are in red, green, and blue, respectively. These are the axes of an arbitrary coordinate system, which are all defined with respect to the world coordinate system. Note that the axes that make up this arbitrary coordinate system are unit vectors.
The upper-left 3x3 matrix inside our 4x4 matrix contains the coordinates of our arbitrary coordinate system's axes. We have three axes, each with three coordinates, which makes nine coefficients. If the 4x4 matrix stores its coefficients using the row-major order convention (this is the convention used by Scratchapixel), then:
• The first three coefficients of the matrix's first row (c00, c01, c02) correspond to the coordinates of the coordinate system's x-axis.
• The first three coefficients of the matrix's second row (c10, c11, c12) are the coordinates of the coordinate system's y-axis.
• The first three coefficients of the matrix's third row (c20, c21, c22) are the coordinates of the coordinate system's z-axis.
• The first three coefficients of the matrix's fourth row (c30, c31, c32) are the coordinates of the coordinate system's position (translation values).
For example, here is the transformation matrix of the coordinate system in Figure 4:
$$\begin{bmatrix} \color{red}{+0.718762}&\color{red}{+0.615033}&\color{red}{-0.324214}&0\\ \color{green}{-0.393732}&\color{green}{+0.744416}&\color{green}{+0.539277}&0\\ \color{blue}{+0.573024}&\color{blue}{-0.259959}&\color{blue}{+0.777216}&0\\ \color{purple}{+0.526967}&\color{purple}{+1.254234}&\color{purple}{-2.532150}&1\\ \end{bmatrix} \begin{array}{l} \rightarrow \quad \color{red} {x-axis}\\ \rightarrow \quad \color{green} {y-axis}\\ \rightarrow \quad \color{blue} {z-axis}\\ \rightarrow \quad \color{purple} {translation}\\ \end{array}$$
In conclusion, a 4x4 matrix represents a coordinate system (or, reciprocally, a 4x4 matrix can represent any Cartesian coordinate system). You must always see a 4x4 matrix as nothing more than a coordinate system and vice versa (we also sometimes speak of a "local" coordinate system about the "global" coordinate system, which in our case, is the world coordinate system).
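To make this concrete, here is a small numpy sketch (not part of the lesson's C++ code) that reads the axes and translation out of the row-major matrix shown above, and checks that the axes form an orthonormal frame:

```python
import numpy as np

# Row-major 4x4 transform: rows 0..2 hold the x, y and z axes,
# row 3 holds the translation.
M = np.array([
    [ 0.718762,  0.615033, -0.324214, 0],
    [-0.393732,  0.744416,  0.539277, 0],
    [ 0.573024, -0.259959,  0.777216, 0],
    [ 0.526967,  1.254234, -2.532150, 1],
])

x_axis, y_axis, z_axis = M[0, :3], M[1, :3], M[2, :3]
translation = M[3, :3]

# For a pure rotation + translation, the three axes are unit length
# and mutually orthogonal:
for axis in (x_axis, y_axis, z_axis):
    assert np.isclose(np.linalg.norm(axis), 1.0, atol=1e-4)
assert abs(np.inner(x_axis, y_axis)) < 1e-4
assert abs(np.inner(y_axis, z_axis)) < 1e-4
```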
### Local vs. Global Coordinate System
Now that we have established how a 4x4 matrix can be interpreted (and introduced the concept of a local coordinate system) let's recall what local coordinate systems are used for. By default, the coordinates of a 3D point are defined with respect to the world coordinate system. The world coordinate system is just one among infinite possible coordinate systems. But we need a coordinate system to measure all things against by default, so we created one and gave it the special name of "world coordinate system" (it is a convention, like the Greenwich meridian: the meridian at which longitude is defined to be 0). Having one reference is good but not always the best way to track where things are in space. For instance, imagine you are looking for a house on the street. If you know that house's longitude and latitude coordinates, you can always use a GPS to find it. However, if you are already on the street where the house is situated, getting to this house using its number is more straightforward and quicker than using a GPS. A house number is a coordinate defined with respect to a reference: the first house on the street. In this example, the street numbers can be seen as a local coordinate system. In contrast, the longitude/latitude coordinate system can be seen as a global coordinate system (while the street numbers *can* be defined with respect to a global coordinate system, they are represented with their coordinates with respect to a local reference: the first house on the street). Local coordinate systems are helpful to "find" things when you put "yourself" within the frame of reference in which these things are defined (for example, when you are on the street itself). Note that the local coordinate system can be described with respect to the global coordinate system (for instance, we can determine its origin in terms of latitude/longitude coordinates).
Things are the same in CG. It's always possible to know where things are with respect to the world coordinate system. Still, to simplify calculations, it is often convenient to define things with respect to a local coordinate system (we will show this with an example further down). This is what "local" coordinate systems are used for.
When you move a 3D object in a scene, such as a 3D cube (but this is true regardless of the object's shape or complexity), transformations applied to that object (translation, scale, and rotation) can be represented by what we call a 4x4 transformation matrix (it is nothing more than a 4x4 matrix, but since it's used to change the position, scale and rotation of that object in space, we call it a transformation matrix). This 4x4 transformation matrix can be seen as the object's local frame of reference or local coordinate system. In a way, you don't transform the object but transform the local coordinate system of that object, but since the vertices making up the object are defined with respect to that local coordinate system, moving the coordinate system moves the object's vertices with it (see Figure 6). It's important to understand that we don't explicitly transform that coordinate system. We translate, scale, and rotate the object. A 4x4 matrix represents these transformations, and this matrix can be visualized as a coordinate system.
### Transforming Points from One Coordinate System to Another
Note that even though the house is the same, the coordinates of the house, depending on whether you use its address or its longitude/latitude coordinates, are different (as the coordinates relate to the frame of reference in which the location of the house is defined). Look at the highlighted vertex in Figure 6. The coordinates of this vertex in the local coordinate system are [-0.5,0.5,-0.5]. But in "world space" (when the coordinates are defined with respect to the world coordinate system), the coordinates are [-0.31,1.44,-2.49]. Different coordinates, same point.
As suggested before, it is more convenient to operate on points when they are defined with respect to a local coordinate system rather than defined with respect to the world coordinate system. For instance, in the example of the cube (Figure 6), representing the cube's corners in local space is more accessible than in world space. But how do we convert a point or vertex from one coordinate system (such as the world coordinate space) to another coordinate system? Converting points from one coordinate system to another is a widespread process in CG, and the process is easy. Suppose we know the 4x4 matrix M that transforms a coordinate system A into a coordinate system B. In that case, if we transform a point whose coordinates are defined initially with respect to B with the inverse of M (we will explain next why we use the inverse of M rather than M), we get the coordinates of point P with respect to A.
Let's try an example using Figure 6. The matrix M that transforms the local coordinate system to which the cube is attached is:
$$\begin{bmatrix} \color{red}{+0.718762}&\color{red}{+0.615033}&\color{red}{-0.324214}&0\\ \color{green}{-0.393732}&\color{green}{+0.744416}&\color{green}{+0.539277}&0\\ \color{blue}{+0.573024}&\color{blue}{-0.259959}&\color{blue}{+0.777216}&0\\ \color{purple}{+0.526967}&\color{purple}{+1.254234}&\color{purple}{-2.532150}&1\\ \end{bmatrix}$$
By default, the local coordinate system coincides with the world coordinate system (the cube vertices are defined with respect to this local coordinate system). This is illustrated in Figure 7a. Then, we apply the matrix M to the local coordinate system, which changes its position, scale, and rotation (this depends on the matrix values). This is illustrated in Figure 7b. So before we apply the transform, the coordinates of the highlighted vertex in Figures 6 and 7 (the purple dot) are the same in both coordinate systems (since the frames of reference coincide). But after the transformation, the world and local coordinates of the points are different (Figures 7a and 7b). To calculate the world coordinates of that vertex, we need to multiply the point's original coordinates by the local-to-world matrix: we call it local-to-world because it defines the coordinate system with respect to the world coordinate system. This is pretty logical! If you transform the local coordinate system and want the cube to move with this coordinate system, you want to apply the same transformation that was applied to the local coordinate system to the cube vertices. To do this, you multiply the cube's vertices by the local-to-world matrix (denoted $$M$$ here for the sake of simplicity):
$$P_{world} = P_{local} * M$$
If you now want to go the other way around (to get the point "local coordinates" from its "world coordinates"), you need to transform the point world coordinates with the inverse of M:
$$P_{local} = P_{world} * M_{inverse}$$
Or in mathematical notation:
$$P_{local} = P_{world} * M^{-1}$$
As you may have guessed already, the inverse of M is also called the world-to-local coordinate system (it defines where the world coordinate system is with respect to the local coordinate system frame of reference):
$$\begin{array}{l} P_{world} = P_{local} * M_{local-to-world}\\ P_{local} = P_{world} * M_{world-to-local}. \end{array}$$
Let's check that it works. The coordinates of the highlighted vertex in local space are [-0.5,0.5,-0.5] and in world space [-0.31,1.44,-2.49]. We also know the matrix M (local-to-world). If we apply this matrix to the point's local coordinates, we should obtain the point's world coordinates:
$$\begin{array}{l} P_{world} = P_{local} * M\\ P_{world}.x = P_{local}.x * M_{00} + P_{local}.y * M_{10} + P_{local}.z * M_{20} + M_{30}\\ P_{world}.y = P_{local}.x * M_{01} + P_{local}.y * M_{11} + P_{local}.z * M_{21} + M_{31}\\ P_{world}.z = P_{local}.x * M_{02} + P_{local}.y * M_{12} + P_{local}.z * M_{22} + M_{32}\\ \end{array}$$
Let's implement and check the results (you can use the code from the Geometry lesson):
Matrix44f m(0.718762, 0.615033, -0.324214, 0, -0.393732, 0.744416, 0.539277, 0, 0.573024, -0.259959, 0.777216, 0, 0.526967, 1.254234, -2.53215, 1);
Vec3f Plocal(-0.5, 0.5, -0.5), Pworld;
m.multVecMatrix(Plocal, Pworld);
std::cerr << Pworld << std::endl;
The output is: (-0.315792 1.4489 -2.48901).
Let's now transform the world coordinates of this point into local coordinates. Our implementation of the Matrix class contains a method to invert the current matrix. We will use it to compute the world-to-local transformation matrix and then apply this matrix to the point world coordinates:
Matrix44f m(0.718762, 0.615033, -0.324214, 0, -0.393732, 0.744416, 0.539277, 0, 0.573024, -0.259959, 0.777216, 0, 0.526967, 1.254234, -2.53215, 1);
m.invert();
Vec3f Pworld(-0.315792, 1.4489, -2.48901), Plocal;
m.multVecMatrix(Pworld, Plocal);
std::cerr << Plocal << std::endl;
The output is: (-0.500004 0.499998 -0.499997).
The coordinates are not precisely (-0.5, 0.5, -0.5) because of floating-point precision issues and also because we truncated the input point's world coordinates, but if we round to one decimal place, we get (-0.5, 0.5, -0.5), which is the correct result.
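For readers following along in Python rather than C++, the same round trip can be sketched with numpy (the `transform` helper is mine, mirroring `multVecMatrix` with the row-vector convention used in this lesson):

```python
import numpy as np

# The local-to-world matrix from the example above (row-major,
# point-times-matrix convention: p' = [x, y, z, 1] @ M).
M = np.array([
    [ 0.718762,  0.615033, -0.324214, 0],
    [-0.393732,  0.744416,  0.539277, 0],
    [ 0.573024, -0.259959,  0.777216, 0],
    [ 0.526967,  1.254234, -2.532150, 1],
])

def transform(p, M):
    x, y, z, w = np.append(p, 1.0) @ M
    return np.array([x, y, z]) / w

p_local = np.array([-0.5, 0.5, -0.5])
p_world = transform(p_local, M)
# p_world ≈ [-0.315792, 1.448905, -2.489012], matching the C++ output

# Going back: apply the inverse (world-to-local) matrix
p_back = transform(p_world, np.linalg.inv(M))
assert np.allclose(p_back, p_local, atol=1e-5)
```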
At this point of the chapter, you should understand the difference between the world/global and local coordinate systems and how to transform points or vectors from one system to the other (and vice versa).
When we transform a point from the world to the local coordinate system (or the other way around), we often say that we go from world space to local space. We will use this terminology often.
### Camera Coordinate System and Camera Space
A camera in CG (and the natural world) is no different from any 3D object. When you take a photograph, you need to move and rotate the camera to adjust the viewpoint. So in a way, when you transform a camera (by translating and rotating it — note that scaling a camera doesn't make much sense), what you are doing is transforming a local coordinate system, which implicitly represents the transformations applied to that camera. In CG, we call this spatial reference system (the term spatial reference system or reference is sometimes used in place of the term coordinate system) the camera coordinate system (you might also find it called the eye coordinate system in other references). We will explain why this coordinate system is essential in a moment.
A camera is nothing more than a coordinate system. Thus, the technique we described earlier to transform points from one coordinate system to another can also be applied here to transform points from the world coordinate system to the camera coordinate system (and vice versa). We say that we transform points from world space to camera space (or camera space to world space if we apply the transformation the other way around).
However, cameras always point along the world coordinate system's negative z-axis. In Figure 8, you will see that the camera's z-axis is pointing in the opposite direction of the world coordinate system's z-axis (when the x-axis points to the right and the z-axis goes inward into the screen rather than outward).
Cameras point along the world coordinate system's negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space) if the point is to the left of the world coordinate system's y-axis, the point will also map to the left of the camera coordinate system's y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; the only way you can get that configuration is by having the camera look down the negative z-axis.
Because of this, the sign of the z coordinate of points is inverted when we go from one system to the other. Keep this in mind, as it will play a role when we (finally) get to study the perspective projection matrix.
To summarize: if we want to convert the coordinates of a point in 3D from world space (which is the space in which points are defined in a 3D scene) to the space of a local coordinate system, we need to multiply the point world coordinates by the inverse of the local-to-world matrix.
### Of the Importance of Converting Points to Camera Space
This is a lot of reading, but what is it all for? We will now show that to "project" a point on the canvas (the 2D surface on which we will draw an image of the 3D scene), we need to convert or transform points from world space to camera space. And here is why.
Let's recall that what we are trying to achieve is to compute P', the coordinates of a point P from the 3D scene on the surface of a canvas, which is the 2D surface where the image of the scene will be drawn (the canvas is also called the projection plane, or in CG, the image plane). If you trace a line from P to the eye (the origin of the camera coordinate system), P' is the line's point of intersection with the canvas (Figure 10). When the point P coordinates are defined with respect to the camera coordinate system, computing the position of P' is trivial. If you look at Figure 10, which shows a side view of our setup, you can see that by construction, we can trace two triangles $$\triangle ABC$$ and $$\triangle AB'C'$$, where:
• A is the eye (the origin of the camera coordinate system).
• AB is the distance from the eye to point P along the camera coordinate system's z-axis (the point's depth).
• BC is the distance from the camera coordinate system's z-axis to P along its y-axis (the point's height).
• AB' is the distance from the eye to the canvas (for now, we will assume that this distance is 1, which will simplify our calculations).
• B'C' is the distance from the camera coordinate system's z-axis to P' along its y-axis.
The triangles $$\triangle ABC$$ and $$\triangle AB'C'$$ are said to be similar (similar triangles have the same shape but different sizes). Similar triangles have an interesting property: the ratio between their adjacent and opposite sides is the same. In other words:
$${ BC \over AB } = { B'C' \over AB' }.$$
Because the canvas is 1 unit away from the origin, we know that AB' equals 1. We also know AB and BC, which are the z-coordinate (depth) and y-coordinate (height) of point P, respectively (assuming P's coordinates are defined in the camera coordinate system). If we substitute these numbers in the above equation, we get:
$${ P.y \over P.z } = { P'.y \over 1 }.$$
where P'.y is the y-coordinate of P'. Thus:
$$P'.y = { P.y \over P.z }.$$
This is one of computer graphics' simplest and most fundamental relations, known as the z or perspective divide. The same principle applies to the x-coordinate: the projected point's x-coordinate (P'.x) is the point's x-coordinate divided by its z-coordinate:
$$P'.x = { P.x \over P.z }.$$
We described this method several times in other lessons on the website, but we want to show here that to compute P' using these equations, the coordinates of P should be defined with respect to the camera coordinate system. However, points from the 3D scene are defined initially with respect to the world coordinate system. Therefore, the first and foremost operation we need to apply to points before projecting them onto the canvas is to convert them from world space to camera space.
How do we do that? Suppose we know the camera-to-world matrix (similar to the local-to-world matrix we studied in the previous case). In that case, we can transform any point (whose coordinates are defined in world space) to camera space by multiplying this point by the inverse of the camera-to-world matrix (the world-to-camera matrix):
$$P_{camera} = P_{world} * M_{world-to-camera}.$$
Then at this stage, we can "project" the point on the canvas using the equations we presented before:
$$\begin{array}{l} P'.x = \dfrac{P_{camera}.x}{P_{camera}.z}\\ P'.y = \dfrac{P_{camera}.y}{P_{camera}.z}. \end{array}$$
Recall that cameras are usually oriented along the world coordinate system's negative z-axis. This means that when we convert a point from world space to camera space, the sign of the point's z-coordinate is necessarily reversed; it becomes negative if the z-coordinate was positive in world space, or it becomes positive if it was initially negative. Note that a point defined in camera space can only be visible if its z-coordinate is negative (take a moment to verify this statement). As a result, when the x- and y-coordinate of the original point are divided by the point's negative z-coordinate, the sign of the resulting projected point's x and y-coordinates is also reversed. This is a problem because a point that is situated to the right of the screen coordinate system's y-axis when you look through the camera or a point that appears above the horizontal line passing through the middle of the frame ends up either to the left of the vertical line or below the horizontal line once projected. The point's coordinates are mirrored. The solution to this problem is simple. We need to make the point's z-coordinate positive, which we can easily do by reversing its sign at the time that the projected point's coordinates are computed:
$$\begin{array}{l} P'.x = \dfrac{P_{camera}.x}{-P_{camera}.z}\\ P'.y = \dfrac{P_{camera}.y}{-P_{camera}.z}. \end{array}$$
To summarize: points in a scene are defined in the world coordinate space. However, to project them onto the surface of the canvas, we first need to convert the 3D point coordinates from world space to camera space. This can be done by multiplying the point world coordinates by the inverse of the camera-to-world matrix. Here is the code for performing this conversion:
// Camera-to-world matrix exported from a 3D application.
Matrix44f cameraToWorld(0.718762, 0.615033, -0.324214, 0,
                        -0.393732, 0.744416, 0.539277, 0,
                        0.573024, -0.259959, 0.777216, 0,
                        0.526967, 1.254234, -2.53215, 1);
// Invert it to get the world-to-camera matrix.
Matrix44f worldToCamera = cameraToWorld.inverse();
Vec3f Pworld(-0.315792, 1.4489, -2.48901), Pcamera;
worldToCamera.multVecMatrix(Pworld, Pcamera);
std::cerr << Pcamera << std::endl;
We can now use the resulting point in camera space to compute its 2D coordinates on the canvas by using the perspective projection equations (dividing the point's x- and y-coordinates by its negated z-coordinate).
### From Screen Space to Raster Space
At this point, we know how to compute the projection of a point on the canvas: we first transform the point from world space to camera space and then divide its x- and y-coordinates by the point's (negated) z-coordinate. Let's recall that the canvas lies on what we call the image plane in CG. So you now have a point P' lying on the image plane, which is the projection of P onto that plane. But in which space are the coordinates of P' defined? Note that because point P' lies on a plane, we are no longer interested in its z-coordinate. In other words, we don't need to declare P' as a 3D point; a 2D point suffices (this is only partially true: to solve the visibility problem, the rasterization algorithm uses the z-coordinates of the projected points, but we will ignore this technical detail for now).
Since P' is a 2D point, it is defined with respect to a 2D coordinate system, which in CG is called the image or screen coordinate system. Its origin marks the center of the canvas; the coordinates of any point projected onto the image plane refer to this coordinate system. 3D points with positive x-coordinates are projected to the right of the image coordinate system's y-axis, and 3D points with positive y-coordinates are projected above the image coordinate system's x-axis (Figure 11). An image plane is a plane, so technically, it is infinite. But images are not infinite in size; they have a width and a height. Thus, we will cut off a rectangular region centered around the image coordinate system's origin, which we define as the bounded region over which the image of the 3D scene will be drawn (Figure 11). You can think of this region as the canvas's paintable or drawable surface. The dimensions of this rectangular region can be anything we want; changing its size changes the extent of a given scene imaged by the camera (Figure 13). We will study the effect of the canvas size in the next lesson. In figures 12 and 14 (top), the canvas is 2 units long in each dimension (vertical and horizontal).
Any projected point whose absolute x- and y-coordinate is greater than half of the canvas' width or half of the canvas' height, respectively, is not visible in the image (the projected point is clipped).
$$\text {visible} = \begin{cases} yes & |P'.x| \le {W \over 2} \text{ and } |P'.y| \le {H \over 2}\\ no & \text{otherwise} \end{cases}$$
|a| in mathematics means the absolute value of a. The variables W and H are the width and height of the canvas.
If the coordinates of P are real numbers (floats or doubles in programming), the coordinates of P' are also real numbers. If the coordinates of P' are within the canvas boundaries, then P' is visible. Otherwise, the point is not visible, and we can ignore it. If P' is visible, it should appear as a dot in the image. A dot in a digital image is a pixel. Note that pixels are also 2D points, only their coordinates are integers, and the coordinate system these coordinates refer to is located in the upper-left corner of the image. Its x-axis points to the right (when the world coordinate system's x-axis points to the right), and its y-axis points downwards (Figure 14). This coordinate system in computer graphics is called the raster coordinate system. A pixel in this coordinate system is one unit long in x and y. We need to convert the coordinates of P', defined with respect to the image or screen coordinate system, into pixel coordinates (the position of P' in the image in terms of pixel coordinates). This is another change of coordinate system; we say that we need to go from screen space to raster space. How do we do that?
The first thing we will do is remap the coordinates of P' to the range [0,1]. This is mathematically easy. Since we know the dimensions of the canvas, all we need to do is apply the following formulas:
$$\begin{array}{l} P'_{normalized}.x = \dfrac{P'.x + width / 2}{ width }\\ P'_{normalized}.y = \dfrac{P'.y + height / 2}{ height } \end{array}$$
Because the coordinates of the projected point P' are now in the range [0,1], we say that the coordinates are normalized. For this reason, we also call the coordinate system in which the points are defined after normalization the NDC coordinate system or NDC space. NDC stands for Normalized Device Coordinate. The NDC coordinate system's origin is situated in the lower-left corner of the canvas. Note that the coordinates are still real numbers at this point, only they are now in the range [0,1].
The last step is simple. We need to multiply the projected point's x- and y-coordinates in NDC space by the actual image pixel width and image pixel height, respectively. This is a simple remapping of the range [0,1] to the range [0, Pixel Width] for the x-coordinate and [0, Pixel Height] for the y-coordinate. Since pixel coordinates need to be integers, we round the resulting numbers down to the nearest integer (using the mathematical floor function, which maps a real number to the greatest integer less than or equal to it). After this final step, the coordinates of P' are defined in raster space:
$$\begin{array}{l} P'_{raster}.x = \lfloor{ P'_{normalized}.x * \text{ Pixel Width} }\rfloor\\ P'_{raster}.y = \lfloor{ P'_{normalized}.y * \text{Pixel Height} }\rfloor \end{array}$$
In mathematics, $$\lfloor{a}\rfloor$$ denotes the floor function. Pixel width and pixel height are the actual dimensions of the image in pixels. However, there is a small detail that we need to take care of: the y-axis in the NDC coordinate system points up, while in the raster coordinate system, the y-axis points down. Thus, to go from one coordinate system to the other, the y-coordinate of P' also needs to be inverted. We can account for this with a small modification to the above equations:
$$\begin{array}{l} P'_{raster}.x = \lfloor{ P'_{normalized}.x * \text{ Pixel Width} }\rfloor\\ P'_{raster}.y = \lfloor{ (1 - P'_{normalized}.y) * \text{Pixel Height} }\rfloor \end{array}$$
In OpenGL, the conversion from NDC space to raster space is called the viewport transform. The canvas in this lesson is generally called the viewport in CG. However, the viewport means different things to different people. To some, it designates the "normalized window" of the NDC space. To others, it represents the window of pixels on the screen in which the final image is displayed.
Done! You have converted a point P defined in world space into a visible point in the image, whose pixel coordinates you have computed using a series of conversion operations:
• World space to camera space.
• Camera space to screen space.
• Screen space to NDC space.
• NDC space to raster space.
## Summary
Because this process is so fundamental, we will summarize everything that we've learned in this chapter:
• Points in a 3D scene are defined with respect to the world coordinate system.
• A 4x4 matrix can be seen as a "local" coordinate system.
• We learned how to convert points from the world coordinate system to any local coordinate system.
• If we know the local-to-world matrix, we can multiply the world coordinate of the point by the inverse of the local-to-world matrix (the world-to-local matrix).
• We also use 4x4 matrices to transform cameras. Therefore, we can also convert points from world space to camera space.
• Computing the coordinates of a point from camera space onto the canvas can be done using perspective projection (camera space to image space). This process requires a simple division of the point's x- and y-coordinate by the point's z-coordinate. Before projecting the point onto the canvas, we need to convert the point from world space to camera space. The resulting projected point is a 2D point defined in image space (the z-coordinate can be discarded).
• We then convert the 2D point in image space to Normalized Device Coordinate (NDC) space. In NDC space (image space to NDC space), the coordinates of the point are remapped to the range [0,1].
• Finally, we convert the 2D point in NDC space to raster space. To do this, we multiply the NDC point's x- and y-coordinates by the image width and height (in pixels). Pixel coordinates are integers rather than real numbers, so they need to be rounded down to the nearest integer when converting from NDC space to raster space. The NDC coordinate system's origin is located in the lower-left corner of the image, with the y-axis pointing up. In raster space, the origin is located in the upper-left corner of the image, with the y-axis pointing down. Therefore, the y-coordinate needs to be inverted when converting from NDC to raster space.
| Space | Description |
| --- | --- |
| World Space | The space in which the points are originally defined in the 3D scene. Coordinates of points in world space are defined with respect to the world Cartesian coordinate system. |
| Camera Space | The space in which points are defined with respect to the camera coordinate system. To convert points from world to camera space, we need to multiply points in world space by the inverse of the camera-to-world matrix. By default, the camera is located at the origin and is oriented along the negative z-axis. Once the points are in camera space, they can be projected on the canvas using perspective projection. |
| Screen Space | In this space, points are in 2D; they lie in the image plane. Because the plane is infinite, the canvas defines the region of this plane on which the scene can be drawn. The canvas size is arbitrary and defines "how much" of the scene we see. The image or screen coordinate system marks the canvas's center (and the image plane's). If a point on the image plane is outside the boundaries of the canvas, it is not visible. Otherwise, the point is visible on the screen. |
| NDC Space | 2D points lying in the image plane and contained within the boundaries of the canvas are then converted to Normalized Device Coordinate (NDC) space. The principle is to normalize the point's coordinates, in other words, to remap them to the range [0,1]. Note that NDC coordinates are still real numbers. |
| Raster Space | Finally, 2D points in NDC space are converted to 2D pixel coordinates. To do this, we multiply the normalized points' x- and y-coordinates by the image width and height in pixels. Going from NDC to raster space also requires the y-coordinate of the point to be inverted. Final coordinates need to be rounded down to the nearest integer, since pixel coordinates are integers. |
## Code
The function converts a point from 3D world coordinates to 2D pixel coordinates. The function returns `false` if the point is not visible in the canvas. This implementation is quite naive and was not written for efficiency; we wrote it so that every step is visible and contained within a single function.
bool computePixelCoordinates(
const Vec3f &pWorld,
const Matrix44f &cameraToWorld,
const float &canvasWidth,
const float &canvasHeight,
const int &imageWidth,
const int &imageHeight,
Vec2i &pRaster)
{
// First, transform the 3D point from world space to camera space.
// It is, of course, inefficient to compute the inverse of the cameraToWorld
// matrix inside this function. It should be done only once outside the
// function, and the worldToCamera matrix should be passed in instead. We
// only compute the inverse here to keep every step in one place.
Vec3f pCamera;
Matrix44f worldToCamera = cameraToWorld.inverse();
worldToCamera.multVecMatrix(pWorld, pCamera);
// Coordinates of the point on the canvas. Use perspective projection.
Vec2f pScreen;
pScreen.x = pCamera.x / -pCamera.z;
pScreen.y = pCamera.y / -pCamera.z;
// If the absolute value of the x- or y-coordinate exceeds half the canvas
// width or half the canvas height respectively, the point is not visible
if (std::abs(pScreen.x) > canvasWidth / 2 || std::abs(pScreen.y) > canvasHeight / 2)
return false;
// Normalize. Coordinates will be in the range [0,1]
Vec2f pNDC;
pNDC.x = (pScreen.x + canvasWidth / 2) / canvasWidth;
pNDC.y = (pScreen.y + canvasHeight / 2) / canvasHeight;
// Finally, convert to pixel coordinates. Don't forget to invert the y coordinate
pRaster.x = std::floor(pNDC.x * imageWidth);
pRaster.y = std::floor((1 - pNDC.y) * imageHeight);
return true;
}
int main(...)
{
...
Matrix44f cameraToWorld(...);
Vec3f pWorld(...);
float canvasWidth = 2, canvasHeight = 2;
uint32_t imageWidth = 512, imageHeight = 512;
// The 2D pixel coordinates of pWorld in the image if the point is visible
Vec2i pRaster;
if (computePixelCoordinates(pWorld, cameraToWorld, canvasWidth, canvasHeight, imageWidth, imageHeight, pRaster)) {
std::cerr << "Pixel coordinates " << pRaster << std::endl;
}
else {
std::cerr << pWorld << " is not visible" << std::endl;
}
...
return 0;
}
We will use a similar function in our example program (look at the source code chapter). To demonstrate the technique, we created a simple object in Maya (a tree with a star sitting on top) and rendered an image of that tree from a given camera in Maya (see the image below). To simplify the exercise, we triangulated the geometry. We then stored a description of that geometry and the Maya camera 4x4 transform matrix (the camera-to-world matrix) in our program.
To create an image of that object, we need to:
• Loop over each triangle that makes up the geometry.
• Extract from the vertex list the vertices making up the current triangle.
• Convert these vertices' world coordinates to 2D pixel coordinates.
• Draw lines connecting the resulting 2D points to draw an image of that triangle as viewed from the camera (we trace a line from the first point to the second point, from the second point to the third, and then from the third point back to the first point).
We then store the resulting lines in an SVG file. The SVG format is designed to create images using simple geometric shapes such as lines, rectangles, circles, etc., described in XML. Here is how we define a line in SVG, for instance:
<line x1="0" y1="0" x2="200" y2="200" style="stroke:rgb(255,0,0);stroke-width:2" />
SVG files themselves can be read and displayed as images by most Internet browsers. Storing the result of our programs in SVG is very convenient: rather than rendering these shapes ourselves, we can store their description in an SVG file and have other applications render the final image for us (we don't need to deal with rendering these shapes and displaying the image on the screen ourselves, which is a nontrivial task from a programming point of view).
The complete source code of this program can be found in the source code chapter. Finally, here is the result of our program (left) compared to a render of the same geometry from the same camera in Maya (right). As expected, the visual results are the same (you can read the SVG file produced by the program in any Internet browser).
Suppose you wish to reproduce this result in Maya. In that case, you will need to import the geometry (which we provide in the next chapter as an obj file), create a camera, set its angle of view to 90 degrees (we will explain why in the next lesson), and make the film gate square (by setting the vertical and horizontal film gate parameters to 1). Set the render resolution to 512x512 and render from Maya. You should then export the camera's transformation matrix using, for example, the following Mel command:
getAttr camera1.worldMatrix;
Set the camera-to-world matrix in our program with the result of this command (the 16 coefficients of the matrix). Compile the source code, and run the program. The image exported to the SVG file should match Maya's render.
## What Else?
This chapter contains a lot of information. Most resources devoted to this topic focus their explanation on the perspective projection itself, but neglect to mention everything that comes before and after it (such as the world-to-camera transformation or the conversion of screen coordinates to raster coordinates). Our aim is for you to produce an actual result at the end of this lesson, one that we can also match to a render from a professional 3D application such as Maya; we wanted you to have a complete picture of the process from beginning to end. However, dealing with cameras is slightly more complicated than what we described in this chapter. For instance, if you have used a 3D program before, you are probably familiar with the fact that the camera transform is not the only parameter you can change to adjust what you see in the camera's view. You can also vary, for example, its focal length. How the focal length affects the result of the conversion process is something we have not yet explained in this lesson. The near and far clipping planes associated with cameras also affect the perspective projection process, most notably the perspective and orthographic projection matrices. In this lesson, we assumed that the canvas was located one unit away from the camera coordinate system's origin. However, this is not always the case, and this distance can be controlled through the near clipping plane. How do we compute pixel coordinates when the distance between the camera coordinate system's origin and the canvas is different from 1? These unanswered questions will be addressed in the next lesson, devoted to 3D viewing.
## Exercises
• Change the canvas dimension in the program (the canvasWidth and canvasHeight parameters). Keep the value of the two parameters equal. What happens when the values get smaller? What happens when they get bigger?
# AA Similarity
What if you were given a pair of triangles and the angle measures for two of their angles? How could you use this information to determine if the two triangles are similar? After completing this Concept, you'll be able to use the AA Similarity Postulate to decide if two triangles are similar.
### Guidance
By definition, two triangles are similar if all their corresponding angles are congruent and their corresponding sides are proportional. It is not necessary to check all angles and sides in order to tell if two triangles are similar. In fact, if you only know that two pairs of corresponding angles are congruent that is enough information to know that the triangles are similar. This is called the AA Similarity Postulate.
AA Similarity Postulate: If two angles in one triangle are congruent to two angles in another triangle, then the two triangles are similar.
If $\angle A \cong \angle Y$ and $\angle B \cong \angle Z$ , then $\triangle ABC \sim \triangle YZX$ .
#### Example A
Determine if the following two triangles are similar. If so, write the similarity statement.
Compare the angles to see if we can use the AA Similarity Postulate. Using the Triangle Sum Theorem, $m \angle G = 48^{\circ}$ and $m \angle M = 30^\circ$. So, $\angle F \cong \angle M, \angle E \cong \angle L$ and $\angle G \cong \angle N$, and the triangles are similar: $\triangle FEG \sim \triangle MLN$.
#### Example B
Determine if the following two triangles are similar. If so, write the similarity statement.
Compare the angles to see if we can use the AA Similarity Postulate. Using the Triangle Sum Theorem, $m \angle C = 39^{\circ}$ and $m \angle F = 59^{\circ}$ . $m \angle C \neq m \angle F$ , So $\triangle ABC$ and $\triangle DEF$ are not similar.
#### Example C
$\triangle LEG \sim \triangle MAR$ by AA. Find $GE$ and $MR$ .
Set up a proportion to find the missing sides.
$\begin{aligned}\frac{24}{32} &= \frac{MR}{20} & \frac{24}{32} &= \frac{21}{GE}\\ 480 &= 32MR & 24GE &= 672\\ MR &= 15 & GE &= 28\end{aligned}$
When two triangles are similar, the corresponding sides are proportional. But, what are the corresponding sides? Using the triangles from this example, we see how the sides line up in the diagram to the right.
### Guided Practice
1. Are the following triangles similar? If so, write the similarity statement.
2. Are the triangles similar? If so, write a similarity statement.
3. Are the triangles similar? If so, write a similarity statement.
Answers:
1. Because $\overline {AE}\| \overline{CD}, \angle A \cong \angle D$ and $\angle C \cong \angle E$ by the Alternate Interior Angles Theorem. By the AA Similarity Postulate, $\triangle ABE \sim \triangle DBC$ .
2. Yes, there are three similar triangles that each have a right angle. $DGE \sim FGD \sim FDE$ .
3. By the reflexive property, $\angle H \cong \angle H$ . Because the horizontal lines are parallel, $\angle L \cong \angle K$ (corresponding angles). So yes, there is a pair of similar triangles. $HLI \sim HKJ$ .
### Practice
Use the diagram to complete each statement.
1. $\triangle SAM \sim \triangle$ ______
2. $\frac{SA}{?} = \frac{SM}{?} = \frac{?}{RI}$
3. $SM$ = ______
4. $TR$ = ______
5. $\frac{9}{?} = \frac{?}{8}$
Answer questions 6-9 about trapezoid $ABCD$ .
6. Name two similar triangles. How do you know they are similar?
7. Write a true proportion.
8. Name two other triangles that might not be similar.
9. If $AB = 10, AE = 7,$ and $DC = 22$, find $AC$. Be careful!
Use the triangles to the left for questions 10-14.
$AB = 20, DE = 15$ , and $BC = k$ .
10. Are the two triangles similar? How do you know?
11. Write an expression for $FE$ in terms of $k$.
12. If $FE = 12$, what is $k$?
13. Fill in the blanks: If an acute angle of a _______ triangle is congruent to an acute angle in another ________ triangle, then the two triangles are _______.
14. Writing How do congruent triangles and similar triangles differ? How are they the same?
Are the following triangles similar? If so, write a similarity statement.
### Vocabulary
similar triangles
Two triangles where all their corresponding angles are congruent (exactly the same) and their corresponding sides are proportional (in the same ratio).
## Brauer Groups and Tate-Shafarevich Groups
J. Math. Sci. Univ. Tokyo
Vol. 10 (2003), No. 2, Page 391--419.
Gonzalez-Aviles, Cristian D.
Brauer Groups and Tate-Shafarevich Groups
Abstract:
Let $X_K$ be a proper, smooth and geometrically connected curve over a global field $K$. In this paper we generalize a formula of Milne relating the order of the Tate-Shafarevich group of the Jacobian of $X_K$ to the order of the Brauer group of a proper regular model of $X_K$. We thereby partially answer a question of Grothendieck.
Keywords: Brauer groups, Tate-Shafarevich groups, Jacobian variety, index of a curve, period of a curve, Cassels-Tate pairing
Mathematics Subject Classification (1991): 11G35, 14K15, 14G25
Mathematical Reviews Number: MR1987138
Find the forces if given resultant.
1. Nov 7, 2009
williamx11373
The problem is in the link below:
http://i44.photobucket.com/albums/f46/maximus11373/353.jpg
I fully understand how to find the resultant when given the forces, but I do not know how to find the forces when given the resultant.
My question is: what are the forces, and how do I find them?
2. Nov 7, 2009
rock.freak667
F1 and F2 are the forces and R is the resultant.
The easiest thing to do is to put $\Sigma F_x = R_x$ and $\Sigma F_y = R_y$.
### The Well component
An area for containing components with a border and a gray background. No title or label is shown for this component.
In addition to the methods listed below, this component inherits properties and methods from the superclass Component. For example, any Well component has a label and hidden property even though these are not explicitly listed here.
#### Methods
| Name | Syntax | Description |
| --- | --- | --- |
| addComponent | `obj.addComponent(component1, component2)` | Add components to the well. |
## Archive for February, 2014
### Differentiating power series
February 22, 2014
I’m writing this post as a way of preparing for a lecture. I want to discuss the result that a power series $\sum_{n=0}^\infty a_nz^n$ is differentiable inside its circle of convergence, and the derivative is given by the obvious formula $\sum_{n=1}^\infty na_nz^{n-1}$. In other words, inside the circle of convergence we can think of a power series as like a polynomial of degree $\infty$ for the purposes of differentiation.
A preliminary question about this is why it is not more or less obvious. After all, writing $f(z)=\sum_{n=0}^\infty a_nz^n$, we have the following facts.
1. Writing $S_N(z)=\sum_{n=0}^Na_nz^n$, we have that $S_N(z)\to f(z)$.
2. For each $N$, $S_N'(z)=\sum_{n=1}^Nna_nz^{n-1}$.
If we knew that $S_N'(z)\to f'(z)$, then we would be done.
Ah, you might be thinking, how do we know that the sequence $(S_N'(z))$ converges? But it turns out that that is not the problem: it is reasonably straightforward to show that it converges. (Roughly speaking, inside the circle of convergence the series $\sum_na_nz^{n-1}$ converges at least as fast as a GP, and multiplying the $n$th term by $n$ doesn't stop a GP converging, as can easily be seen with the help of the ratio test.) So, writing $g(z)$ for $\sum_{n=1}^\infty na_nz^{n-1}$, we have the following facts at our disposal.
1. $S_N(z)\to f(z)$
2. $S_N'(z)\to g(z)$
Doesn’t it follow from that that $f'(z)=g(z)$?
### Recent news concerning the Erdos discrepancy problem
February 11, 2014
I’ve just learnt from a reshare by Kevin O’Bryant of a post by Andrew Sutherland on Google Plus that a paper appeared on the arXiv today with an interesting result about the Erdős discrepancy problem, which was the subject of a Polymath project hosted on this blog four years ago.
The problem is to show that if $(\epsilon_n)$ is an infinite sequence of $\pm 1$s, then for every $C$ there exist $d$ and $m$ such that $\sum_{i=1}^m\epsilon_{id}$ has modulus at least $C$. This result is straightforward to prove by an exhaustive search when $C=2$. One thing that the Polymath project did was to discover several sequences of length 1124 such that no sum has modulus greater than 2, and despite some effort nobody managed to find a longer one. That was enough to convince me that 1124 was the correct bound.
However, the new result shows the danger of this kind of empirical evidence. The authors used state of the art SAT solvers to find a sequence of length 1160 with no sum having modulus greater than 2, and also showed that this bound is best possible. Of this second statement, they write the following: “The negative witness, that is, the DRUP unsatisfiability certificate, is probably one of longest proofs of a non-trivial mathematical result ever produced. Its gigantic size is comparable, for example, with the size of the whole Wikipedia, so one may have doubts about to which degree this can be accepted as a proof of a mathematical statement.”
I personally am relaxed about huge computer proofs like this. It is conceivable that the authors made a mistake somewhere, but that is true of conventional proofs as well. The paper is by Boris Konev and Alexei Lisitsa and appears here.
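For short sequences the discrepancy in question can be checked by brute force. A small Python sketch (the function name is mine) computing $\max_{d,m}\left|\sum_{i=1}^m\epsilon_{id}\right|$ for a finite $\pm 1$ sequence:

```python
def discrepancy(eps):
    """Max over d, m of |sum_{i=1}^m eps_{i*d}| for a finite ±1 sequence.

    eps is 0-indexed here, i.e. eps[0] is the first term epsilon_1.
    """
    n = len(eps)
    best = 0
    for d in range(1, n + 1):
        s = 0
        for i in range(d, n + 1, d):   # i = d, 2d, 3d, ... (1-based indices)
            s += eps[i - 1]
            best = max(best, abs(s))
    return best

# The alternating sequence keeps sums along d=1 bounded, but d=2 picks out
# the -1 terms only, so its discrepancy grows with the length.
print(discrepancy([1, -1] * 5))   # 5
```

The search behind the 1160 result is of an entirely different order: it needed state-of-the-art SAT solvers, not enumeration like this.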
### Taylor’s theorem with the Lagrange form of the remainder
February 11, 2014
There are countless situations in mathematics where it helps to expand a function as a power series. Therefore, Taylor’s theorem, which gives us circumstances under which this can be done, is an important result of the course. It is also the one result that I was dreading lecturing, at least with the Lagrange form of the remainder, because in the past I have always found that the proof is one that I have not been able to understand properly. I don’t mean by that that I couldn’t follow the arguments I read. What I mean is that I couldn’t reproduce the proof without committing a couple of things to memory, which I would then forget again once I had presented them. Briefly, an argument that appears in a lot of textbooks uses a result called the Cauchy mean value theorem, and applies it to a cleverly chosen function. Whereas I understand what the mean value theorem is for, I somehow don’t have the same feeling about the Cauchy mean value theorem: it just works in this situation and happens to give the answer one wants. And I don’t see an easy way of predicting in advance what function to plug in.
I have always found this situation annoying, because a part of me said that the result ought to be a straightforward generalization of the mean value theorem, in the following sense. The mean value theorem applied to the interval $[x,x+h]$ tells us that there exists $y\in (x,x+h)$ such that $f'(y)=\frac{f(x+h)-f(x)}h$, and therefore that $f(x+h)=f(x)+hf'(y)$. Writing $y=x+\theta h$ for some $\theta\in(0,1)$ we obtain the statement $f(x+h)=f(x)+hf'(x+\theta h)$. This is the case $n=1$ of Taylor’s theorem. So can’t we find some kind of “polynomial mean value theorem” that will do the same job for approximating $f$ by polynomials of higher degree?
Now that I’ve been forced to lecture this result again (for the second time actually — the first was in Princeton about twelve years ago, when I just suffered and memorized the Cauchy mean value theorem approach), I have made a proper effort to explore this question, and have realized that the answer is yes. I’m sure there must be textbooks that do it this way, but the ones I’ve looked at all use the Cauchy mean value theorem. I don’t understand why, since it seems to me that the way of proving the result that I’m about to present makes the whole argument completely transparent. I’m actually looking forward to lecturing it (as I add this sentence to the post, the lecture is about half an hour in the future), since the demands on my memory are going to be close to zero.
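For reference, here is the statement in question with the Lagrange form of the remainder, for an $n$-times differentiable function:

```latex
f(x+h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \dots
       + \frac{h^{n-1}}{(n-1)!}f^{(n-1)}(x) + \frac{h^n}{n!}f^{(n)}(x+\theta h),
\qquad \theta\in(0,1).
```

The case $n=1$ is exactly the rearranged mean value theorem described above.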
### How to work out proofs in Analysis I
February 3, 2014
Now that we’ve had several results about sequences and series, it seems like a good time to step back a little and discuss how you should go about memorizing their proofs. And the very first thing to say about that is that you should attempt to do this while making as little use of your memory as you possibly can.
Suppose I were to ask you to memorize the sequence 5432187654321. Would you have to learn a string of 13 symbols? No, because after studying the sequence you would see that it is just counting down from 5 and then counting down from 8. What you want is for your memory of a proof to be like that too: you just keep doing the obvious thing except that from time to time the next step isn’t obvious, so you need to remember it. Even then, the better you can understand why the non-obvious step was in fact sensible, the easier it will be to memorize it, and as you get more experienced you may find that steps that previously seemed clever and nonobvious start to seem like the natural thing to do.
For some reason, Analysis I contains a number of proofs that experienced mathematicians find easy but many beginners find very hard. I want to try in this post to explain why the experienced mathematicians are right: in a rather precise sense many of these proofs really are easy, in the sense that if you just repeatedly do the obvious thing you will solve them. Others are mostly like that, with perhaps one smallish idea needed when the obvious steps run out. And even the hardest ones have easy parts to them.
|
|
## The Lemniscate
2016 Dec 27
After creating a tilted parabola, I started looking into other types of graphs that are also loci of points. A locus is a set of all points fulfilling some specific condition, like the parabola, the locus of all points equidistant from the focus point and the directrix line. One simple locus is a circle, which is the locus of all points some given distance from another point, the center of the circle. Another example is the ellipse, which is the locus of all points the sum of whose distances from two foci is equal to a constant. Here is an example of what an ellipse might look like:
Another type of locus is the hyperbola, which is the locus of all points the difference of whose distances from each of two foci is equal to a constant. Here is a hyperbola:
After studying the ellipse and hyperbola for only a short time, I began to get bored, because almost everything that there was to know about the ellipse and hyperbola had already been discovered. So I decided to make my own locus: the locus of all points the product of whose distances from each of two foci is equal to a constant. I decided to let the foci be the points $(x_0,0)$ and $(-x_0,0)$ (so that it would be symmetric about the y-axis) and let this be the equation of the locus: $\sqrt{(x-x_0)^2+y^2}\cdot\sqrt{(x+x_0)^2+y^2}=c$
As I played around with this graph, I found that it makes three basic types of shapes. The first consists of two disjoint closed curves that are vaguely egg-shaped,
the second consists of two pointed closed curves sharing a single point,
and the third consists of a single closed curve that is pinched in the middle.
In order to determine when it takes each of these shapes, I decided to define the line equidistant from each of the lemniscate's foci as the axis of the lemniscate. I could then classify each shape by the number of times it intersected its axis: the first shape does not intersect its axis, the second intersects it at $1$ point, and the third intersects it at $2$ points. I could then determine algebraically when each shape would occur by setting $x$ to $0$ (because when the lemniscate is symmetric about the y-axis, the points at which it intersects its axis are at $x=0$). Then I could obtain a simpler expression by solving for the y-position at which the lemniscate would intersect its axis: $y^2=c-{x_0}^2$
Now, in order to generalize my findings for any lemniscate not necessarily lying on the x-axis or symmetric about the y-axis, I decided to let $d$ be the distance between the foci so that $d=2x_0$ and ${x_0}^2=\frac{1}{4}d^2$, and I replaced ${x_0}^2$ with $\frac{1}{4}d^2$ to get $y^2=c-\frac{1}{4}d^2$. Then I could easily determine when each of the shapes would be made because
1. When $c<\frac{1}{4}d^2$, $c-\frac{1}{4}d^2$ would be a negative number. This means that $y^2$ would have to be equal to a negative number, and there would be no real intersections between the lemniscate and its axis.
2. When $c=\frac{1}{4}d^2$, $c-\frac{1}{4}d^2$ would be zero. This means that $y^2$ would have to be equal to 0, and there would be exactly one intersection (i.e. at $y=0$).
3. When $c>\frac{1}{4}d^2$, $c-\frac{1}{4}d^2$ would be a positive number. This means that $y^2$ would have to be equal to a positive number, and there would be exactly two intersections with ordinates $y=+\sqrt{c-\frac{1}{4}d^2}$ and $y=-\sqrt{c-\frac{1}{4}d^2}$.
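The three cases are easy to check in code. A small Python sketch (the function name is mine) that reports how many times such a curve crosses its axis, given the constant $c$ and the distance $d$ between the foci:

```python
import math

def axis_intersections(c, d):
    """Number of times the lemniscate with foci distance d and
    distance-product constant c crosses its axis, plus the ordinates.

    Uses y^2 = c - d^2/4 at x = 0 (foci on the x-axis, symmetric about
    the y-axis).
    """
    val = c - d * d / 4
    if val < 0:
        return 0, []             # two disjoint egg-shaped curves
    if val == 0:
        return 1, [0.0]          # two pointed curves sharing a point
    y = math.sqrt(val)
    return 2, [y, -y]            # one closed curve pinched in the middle

print(axis_intersections(1.0, 4.0))   # c < d^2/4, so (0, [])
```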
That's all I'm doing with the lemniscate for now, but it is a very interesting subject and I will surely return to it later.
|
|
# How do I modify the system PATH in Windows 2003/Windows 2008 using a script?
How do I modify the system PATH environment variable using script (or even a registry setting) so that when Windows boots it's already configured?
-
## 2 Answers
My google-fu was lacking earlier. The way to do this is using the SETX tool:
``````SETX NEWVAR %systemroot%\system32\inetsrv /M
SETX PATH %PATH%;%NEWVAR% /M
``````
-
You can find environment variables at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment.
-
Thanks...all I needed for now was to be able to set them in a batch file. But bookmarked for future ref. – Kev Sep 7 '09 at 13:03
|
|
# Teaching Product of Prime Factors
Number theory, or the study of integers (the counting numbers 1, 2, 3..., their opposites –1, –2, –3..., and 0), has fascinated mathematicians for years. Prime numbers, a concept introduced to most students in Grades 4 and up, are fundamental to number theory. They form the basic building blocks for all integers.
A prime number is a counting number that only has two factors, itself and one. Counting numbers which have more than two factors (such as 6, whose factors are 1, 2, 3, and 6), are said to be composite numbers. The number 1 only has one factor and usually isn't considered either prime or composite.
• Key standard: Determine whether a given number is prime or composite, and find all factors for a whole number. (Grade 4)
## Why Do Prime Factors Matter?
It's the age-old question that math teachers everywhere must contend with. When will I use this? One notable example is with cryptography, or the study of creating and deciphering codes. With the help of a computer, it is easy to multiply two prime numbers. However, it can be extremely difficult to factor a number. Because of this, when a website sends and receives information securely—something especially important for financial or medical websites, for example—you can bet there are prime numbers behind the scenes. Prime numbers also show up in a variety of surprising contexts, including physics, music, and even in the arrival of cicadas!
There is another place where prime numbers show up often, and it's easy to overlook when discussing applications: math! The study of pure mathematics is a topic that people practice, study, and share without worrying about where else it might apply, similar to how a musician does not need to ask how music applies to the real world. Number theory is an extremely rich topic that is central to college courses, research papers, and other branches of mathematics. Mathematicians of all stripes no doubt encounter number theory many times along their academic and professional journeys.
## Writing a Product of Prime Factors
When a composite number is written as a product of all of its prime factors, we have the prime factorization of the number. For example, we can write the number 72 as a product of prime factors: $$72=2^3 \cdot 3^2$$. The expression $$2^3 \cdot 3^2$$ is said to be the prime factorization of 72. The Fundamental Theorem of Arithmetic states that every composite number can be factored uniquely (except for the order of the factors) into a product of prime factors. What this means is that how you choose to factor a number into prime factors makes no difference. When you are done, the prime factorizations are essentially the same.
Examine the two factor trees for 72 shown below.
When we get done factoring using either set of factors to start with, we still have three factors of 2 and two factors of 3, or $$2^3 \cdot 3^2$$. This would be true if we had started to factor 72 as 24 times 3, 4 times 18, or any other pair of factors for 72.
Knowing rules for divisibility is helpful when factoring a number. For example, if a whole number ends in 0, 2, 4, 6, or 8, we could always start the factoring process by dividing by 2. It should be noted that because 2 only has two factors, 1 and 2, it is the only even prime number.
Another way to factor a number other than using factor trees is to start dividing by prime numbers:
Once again, we can see that $$72=2^3 \cdot 3^2$$.
Also key to writing the prime factorization of a number is an understanding of exponents. An exponent tells how many times the base is used as a factor. In the prime factorization of $$72=2^3 \cdot 3^2$$, the 2 is used as a factor three times and the 3 is used as a factor twice.
There is a strategy we can use to figure out whether a number is prime. Find the square root (with the help of a calculator if needed), and only check prime numbers less than or equal to it. For example, to see if 131 is prime, because the square root is between 11 and 12, we only need to check for divisibility by 2, 3, 5, 7, and 11. There is no need to check 13, since $$13^2 = 169$$, which is greater than 131. This works because if a prime number greater than 13 divided 131, then the other factor would have to be less than 13—which we're already checking!
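Both procedures above translate directly into code. A short Python sketch (names are mine) that tests primality by trial division up to the square root, and builds a prime factorization like $2^3 \cdot 3^2$ for 72:

```python
import math

def is_prime(n):
    """Trial division up to sqrt(n); also trying composite divisors is harmless."""
    if n < 2:
        return False
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return False
    return True

def prime_factorization(n):
    """Return the prime factorization as {prime: exponent}."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                     # any leftover factor must be prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorization(72))    # {2: 3, 3: 2}, i.e. 2^3 * 3^2
print(is_prime(131))              # True
```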
## Introducing the Concept: Finding Prime Factors
Making sure your students' work is neat and orderly will help prevent them from losing factors when constructing factor trees. Have them check their prime factorizations by multiplying the factors to see if they get the original number.
Prerequisite Skills and Concepts: Students will need to know and be able to use exponents. They also will find it helpful to know the rules of divisibility for 2, 3, 4, 5, 9 and 10.
Write the number 48 on the board.
• Ask: Who can give me two numbers whose product is 48?
Students should identify pairs of numbers like 6 and 8, 4 and 12, or 3 and 16. Take one of the pairs of factors and create a factor tree for the prime factorization of 48 where all students can see it.
• Ask: How many factors of two are there? (4) How do I express that using an exponent?
Students should say to write it as $$2^4$$. If they don't, remind them that the exponent tells how many times the base is taken as a factor. Finish writing the prime factorization on the board as $$2^4 \cdot 3$$. Next, find the prime factorization for 48 using a different set of factors.
• Ask: What do you notice about the prime factorization of 48 for this set of factors?
Students should notice that the prime factorization of 48 is $$2^4 \cdot 3$$ for both of them.
• Say: There is a theorem in mathematics that says when we factor a number into a product of prime numbers, it can only be done one way, not counting the order of the factors.
Illustrate this concept by showing them that the prime factorization of 48 could also be written as $$3 \cdot 2^4$$, but mathematically, that's the same thing as $$2^4 \cdot 3$$.
• Say: Now let's try one on your own. Find the prime factorization of 60 by creating a factor tree for 60.
Have all students independently factor 60. As they complete their factorizations, observe what students do and take note of different approaches and visual representations. Ask for a student volunteer to factor 60 for the entire class to see.
• Ask: Who factored 60 differently?
Have students who factored 60 differently (either by starting with different factors or by visually representing the factor tree differently) show their work to the class. Ask students to describe similarities and differences in the factorizations. If no one used different factors, show the class a factorization that starts with a different set of factors for 60 and have students identify similarities and differences between your factor tree and other students'.
• Ask: If I said the prime factorization of 36 is $$2^2 \cdot 9$$, would I be right?
The students should say no, because 9 is not a prime number. If they don't, remind them that the prime factorization of a number means all the factors must be prime and 9 is not a prime number.
Place the following composite numbers on the board and ask them to write the prime factorization for each one using factor trees: 24, 56, 63, and 46.
## Developing the Concept: Product of Prime Numbers
Now that students can find the prime factorization for numbers which are familiar products, it is time for them to use their rules for divisibility and other notions to find the prime factorization of unfamiliar numbers. Write the number 91 on the board.
• Say: Yesterday, we wrote some numbers in their prime factorization form.
• Ask: Who can write 91 as a product of prime numbers?
Many students might say it can't be done, because they will recognize that 2, 3, 4, 5, 9 and 10 don't divide it. They may not try to see if 7 divides it, which it does. If they don't recognize that 7 divides 91, demonstrate it for them. The prime factorization of 91 is $$7 \cdot 13$$. Next, write the number 240 on the board.
• Ask: Who can tell me two numbers whose product is 240?
Students are likely to say 10 and 24. If not, ask them to use their rules for divisibility to see if they can find two numbers. Create a factor tree for 240 like the one below.
• Ask: How many factors of two are there in the prime factorization of 240? (4) Who can tell me how to write the prime factorization of 240? ($$2^4 \cdot 3 \cdot 5$$)
Facilitate a discussion around different ways to factor 240 and the pros and cons of each method. If you start with 2 and 120, you end up with the same prime factorization in the end, but you end up with a "one-sided tree" that some students may find more difficult to work with. Have students identify ways that they prefer to factor and guide them to explain their reasoning.
• Say: Since the prime factorization of 240 is $$2^4 \cdot 3 \cdot 5$$, the only prime numbers which divide this number are 2, 3 and 5. Prime numbers like 7 and 11 will not divide the number, because they do not appear in the prime factorization of the number.
Write the number 180 on the board.
• Ask: What two numbers might we start with to find the prime factorization of 180? What other numbers could we use?
Encourage students to find a variety of pairs, such as 10 and 18 or 9 and 20. If no one mentions either pair, suggest them both as possibilities. Have half the students use 10 and 18 and the other half use 9 and 20. Have two students create the two factors for the class to see.
• Ask: If the prime factorization of a number is $$2^2 \cdot 5 \cdot 7$$, what can you tell me about the number?
Repeat the previous exercise with a new number, for example an odd one whose factorization includes $$3^2$$. Some possible observations: Because $$3^2$$ is a factor, the number is divisible by 9 and the sum of the number's digits is a multiple of nine. Because the product of odd numbers is always odd, the number is an odd number. They might also tell you that it is a composite number, five is not a factor of the number, and so on.
Give them the following numbers and ask them to find their prime factorization: 231, 117, and 175. Also give the following prime factorizations of numbers and ask them to write down at least two things they know about both the number represented: $$3^2 \cdot 5^2$$, $$2^3 \cdot 3 \cdot 13$$, and $$2^2 \cdot 3 \cdot 5$$. You can of course adjust both the numbers and factorizations to match what your students are ready for.
|
|
# Ellipsoid¶
Ellipsoids are defined by their local coordinates and three semi-axes.
## Ellipsoid-Ellipsoid contact¶
This algorithm is implemented in woo.dem.Cg2_Ellipsoid_Ellipsoid_L6Geom.
Following , we note semi-axes $$a_i$$ and $$b_i$$ with $$i\in\{1,2,3\}$$; local axes are given as sets of orthonormal vectors $$\vec{u}_i$$, $$\vec{v}_i$$. The intercenter vector reads $$\vec{R}=\vec{r}_b-\vec{r}_a$$, with $$\vec{r}_a$$ and $$\vec{r}_b$$ being positions of ellipsoid centroids. We define matrices
\begin{align*} \mat{A}&=\sum_{i=1}^{3} a_i^{-2}\vec{u}_i\vec{u}_i^T \\ \mat{B}&=\sum_{i=1}^{3} b_i^{-2}\vec{v}_i\vec{v}_i^T \end{align*}
which are both invertible for non-vanishing semi-axes; then we define the function
(1)$S(\lambda)=\lambda(1-\lambda)\vec{R}^T\big[\underbrace{(1-\lambda)\mat{A}^{-1}+\lambda\mat{B}^{-1}}_{\mat{G}}\big]^{-1}\vec{R}$
where $$\lambda\in[0,1]$$ is an unknown parameter. The function $$S(\lambda)$$ is concave, positive on $$(0,1)$$, and zero at both end-points; it therefore has a single maximum. Using Brent’s method, we find the value $$\Lambda$$ at which $$S$$ attains its maximum $$S(\Lambda)$$.
Todo
This computation can be (perhaps substantially) sped up by using other iterative methods, such as Newton-Raphson, finding root of $$S'(\lambda)$$, and re-using the value from the previous step as the initial guess. There are also papers suggesting better algorithms such as [ZIPM09].
We define the Perram-Wertham potential (first introduced in [PW85]) as
$F_{AB}=\{\max S(\lambda)|\lambda\in[0,1]\}=S(\Lambda)$
for which ([PW85])
• $$F_{AB}<1$$ if both ellipsoids overlap,
• $$F_{AB}=1$$ if they are externally tangent,
• $$F_{AB}>1$$ if the ellipsoids do not overlap.
[PW85] gives a geometrical interpretation of $$F_{AB}$$, where
$F_{AB}=\mu^2$
where $$\mu$$ is the scaling factor which must be applied to both ellipsoids to make them externally tangent.
Following [DTS05] eq (18), we can compute the contact normal and the contact point of two ellipsoids as
\begin{align*} \vec{n}_c&=\mat{G}^{-1}\vec{R} \\ \vec{r}_c&=\vec{r}_a+(1-\Lambda)\mat{A}^{-1}\vec{n}_c \end{align*}
with $$\mat{G}$$ defined in (1) and evaluated at $$\lambda=\Lambda$$; note that $$\vec{n}_c$$ is not normalized.
The penetration depth (overlap distance) can be reasoned out as follows. $$\mu$$ scales ellipsoid sizes while keeping their distance, so that they become externally tangent. Therefore $$1/\mu$$ scales ellipsoid distance while keeping their sizes. With $$d=|\vec{R}|$$ being the current inter-center distance, we obtain
$u_n'=d-d_0=d-\frac{1}{\mu}d=d\left(1-\frac{1}{\mu}\right).$
This is the displacement that must be performed along $$\vec{R}$$, while the contact normal may be oriented differently; we therefore project $$u_n'$$ along $$\vec{R}$$ onto the (normalized) $$\vec{n}_c$$, obtaining
$u_n=d\left(1-\frac{1}{\mu}\right)\normalized{\vec{R}}\cdot\normalized{\vec{n}_c}.$
The $$u_n$$, $$\normalized{\vec{n}}$$, $$\vec{r}_c$$ can be fed to Generic contact routine for further computation.
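For axis-aligned ellipsoids the matrices are diagonal, so $$S(\lambda)$$ can be evaluated without general linear algebra. A minimal Python sketch (not the woo implementation) that maximizes $$S$$ by ternary search instead of Brent’s method, using quadratic-form matrices with $$a_i^{-2}$$ eigenvalues so that $$F_{AB}=1$$ at external tangency:

```python
def perram_wertheim(semi_a, semi_b, R, iters=200):
    """Perram-Wertheim potential F_AB for two axis-aligned ellipsoids.

    semi_a, semi_b: semi-axes (a1, a2, a3); R: inter-center vector.
    With diagonal matrices, G(lam) has entries (1-lam)*a_i^2 + lam*b_i^2,
    so S(lam) = lam*(1-lam) * sum(R_i^2 / G_ii).  S is concave on [0, 1];
    ternary search converges to its single maximum.
    """
    def S(lam):
        total = 0.0
        for a, b, r in zip(semi_a, semi_b, R):
            g = (1 - lam) * a * a + lam * b * b
            total += r * r / g
        return lam * (1 - lam) * total

    lo, hi = 0.0, 1.0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if S(m1) < S(m2):
            lo = m1
        else:
            hi = m2
    return S((lo + hi) / 2)

# Spheres of radius 1 and 2 with centers 3 apart are externally tangent,
# so F_AB should be 1; closer gives F_AB < 1, farther gives F_AB > 1.
print(perram_wertheim((1, 1, 1), (2, 2, 2), (3.0, 0.0, 0.0)))  # ≈ 1.0
```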
## Ellipsoid-Wall intersection¶
Todo
Write.
|
|
# sand discharge speed from a conveyor belt
##### Sand drops on a conveyor belt at constant rate Physics ...
Jan 09, 2021 Homework Statement: A conveyor belt is driven at velocity ##v## by a motor. Sand drops vertically on to the belt at a rate of ##m~kg~s^{-1}##. What is the additional power needed to keep the conveyor belt moving at a steady speed when the sand starts to fall on it?
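A minimal numeric sketch of the standard answer: the belt must exert a horizontal force $F = \dot m v$ to bring the falling sand up to belt speed, so the additional power is $P = Fv = \dot m v^2$ (half of which becomes kinetic energy of the sand, the rest being dissipated as the sand slips on the belt). The numbers below are illustrative only, since the problem statement gives none:

```python
def extra_belt_power(mdot, v):
    """Additional motor power (W) to keep belt speed v (m/s) while sand
    lands at rate mdot (kg/s): force = mdot * v, power = force * v."""
    return mdot * v * v

print(extra_belt_power(2.0, 3.0))   # 18.0 W for 2 kg/s at 3 m/s
```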
##### Another dynamics problem involving projectile kinematics ...
Jan 27, 2011 Sand is discharged at A from a conveyor belt and falls onto the top of a stockpile at B. Knowing that the conveyor belt forms an angle $$\alpha$$ = 20 degrees with the horizontal, determine the speed V_0 of the belt. Homework Equations: x_f − x_i = V_{i,x}·t and y_f − y_i = V_{i,y}·t − (1/2)g·t²
##### Belt Conveyors for Bulk Materials Calculations by CEMA 5 ...
Belt Conveyor Capacity Table 1. Determine the surcharge angle of the material. The surcharge angle, on the average, will be 5 degrees to 15 degrees less than the angle of repose. (ex. 27° - 12° = 15°) 2. Determine the density of the material in pounds per cubic foot (lb/ft3). 3. Choose the idler shape. 4. Select a suitable conveyor belt ...
##### Understanding Conveyor Belt Calculations Sparks Belting
Understanding a basic conveyor belt calculation will ensure your conveyor design is accurate and is not putting too many demands on your system. ... Belt Speed. Expressed in feet per minute (FPM) S=D x RPM x .2618 x 1.021. Belt Load. At one time when the load is known per square foot: P= G 1 x C(in feet)x W (in feet)
##### The Mystery of Sand Flow Through an Hourglass MIT ...
May 19, 2010 Physicists have long known that the flow of sand through an hourglass is entirely different from the flow of liquid. In the case of a liquid, the rate of discharge depends on the pressure at the ...
##### Sandspreader - The ideal sanding machine GKB Machines
The properties of the Sandspreader. This makes the Sandspreader a super efficient and solid sanding machine: Sturdy wheelbase for perfect stability and weight dispersion. Bunker is shaped for good visibility and efficient sand discharge. Adjustable conveyor belt speed. Double hydraulically driven, precisely adjustable spreading discs.
##### Belt Conveyors for Bulk Materials Practical Calculations
Belt Conveyors are also a great option to move products through elevations. Incline Belt Conveyors from low to high and Decline Belt Conveyors from high to low. This manual is short, with quick and easy reading paragraphs, very practical for calculations of belt, chain conveyors and mechanical miscellaneous, in the metric and imperial system.
##### Minimising conveyor wear, damage and noise - Quarry
Jun 12, 2018 It is not unusual to find crushers installed with insufficient distance between the jaw discharge and the conveyor belt. Several design faults may exist, such as the conveyor belt being too narrow, or rock boxes incorrectly positioned. The result may create bridging, blockages, spillage and no doubt damage to the conveyor belt.
##### Solved: Sand Is Discharged At A From A Conveyor Belt And F ...
Sand is discharged at A from a conveyor belt and falls onto the top of a stockpile at B. Knowing that the conveyor belt forms an angle of 25 degrees with the horizontal, determine (a) the speed Vo of the belt, (b) the radius of Curvature of the trajectory described by the sand at point B.
##### Sand is discharged at A from a conveyor belt and falls ...
Sand is discharged at {eq}A {/eq} from a conveyor belt and falls onto the top of a stockpile at {eq}B {/eq}. Knowing that the conveyor belt forms an angle {eq}\alpha = 20 ^o {/eq} with the ...
##### MSHA - Metal and Nonmetal Mine Safety and Health Fatal ...
The belt was located under a double deck screen and was used to discharge sand. The conveyor was purchased used from Central Michigan Tool and Equipment Company in April 1996. The conveyor was powered by a 480-volt, three phase, 30 horsepower motor in conjunction with a Dodge speed reducer.
##### Conveyor Belt Manual - IBT Industrial Solutions
Conveyor belts generally are composed of three main components: 1. Carcass 2. Skims 3. Covers (carry cover and pulley cover) CARCASS The reinforcement usually found on the inside of a conveyor belt is normally referred to as the "carcass." In a sense, the carcass is the heart of the conveyor belt
##### Conveyor Capacity - Engineering ToolBox
Conveyor Capacity. Conveyor capacity is determined by the belt speed, width and the angle of the belt, and can be expressed as Q = ρ A v, where Q = conveyor capacity (kg/s, lb/s), ρ = density of transported material (kg/m³, lb/ft³), A = cross-sectional area of the bulk solid on the belt (m², ft²), and v = conveyor belt velocity (m/s, ft/s)
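The capacity formula applies directly; a tiny Python sketch with assumed example values (not from any of the pages above):

```python
def conveyor_capacity(rho, area, v):
    """Q = rho * A * v (kg/s): bulk density (kg/m^3) times cross-sectional
    area of material on the belt (m^2) times belt speed (m/s)."""
    return rho * area * v

# Dry sand ~1600 kg/m^3, 0.05 m^2 cross-section, belt at 2 m/s (assumed)
print(conveyor_capacity(1600.0, 0.05, 2.0))   # 160.0 kg/s
```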
##### Transfer Chutes Conveyors [Free Estimates!] West River ...
Transfer chutes are primarily used at the transfer points in a conveyor system. Therefore, they need some type of feature, such as a rounded base that will allow the chute to discharge its load into a centralized stream in the same direction as the receiving conveyor. The simplest conveyor system with a transfer point consists of two conveyors.
##### Engineering conveyor belts
Engineering tables for the design of conveyor belt. Belt Rating . EP 400/2. EP 800/2 . EP 400/3. EP 630/3 . EP 500/4. EP 1000/4 . EP 800/5
##### Conveyor belt - Wikipedia
A conveyor belt is the carrying medium of a belt conveyor system (often shortened to belt conveyor). A belt conveyor system is one of many types of conveyor systems.A belt conveyor system consists of two or more pulleys (sometimes referred to as drums), with a closed loop of carrying medium—the conveyor belt—that rotates about them. One or both of the pulleys are powered, moving the belt ...
##### RFS Industries
Sand is discharged from the hopper, on a flighted conveyor, through an adjustable safety gate. A permanent-magnet head pulley on the conveyor separates ferrous scrap before the sand feeds to a "RFS Industries" aerator. This 25" wide, high- speed, cleated belt conditions up to 45 tons of sand
##### BELT CLEANERS - Martin Eng
Carryback is material that sticks to the belt past the discharge ... products such as sand and gravel. Gravel, dry sand –40° to 160°F (–40° to 70°C) Navy Blue ... Check your conveyor’s belt width and speed against the specifications listed for the secondary cleaners in the table below.
##### BULK MATERIALS HANDLING CONVEYORS Conveyor Systems ...
Materials commonly handled by vibrating conveyors include:- quicklime, mineral sand, hydrated lime, sugar etc. ... Configurations include slow speed, central gravity discharge designs using twin strands of chain, up to high speed centrifugal discharge belt and bucket elevators operating at belt
##### Designing a Conveyor System - 911 Metallurgist
Apr 12, 2016 Number of plies in Conveyor Belt. For the average short-centre horizontal conveyor belt, under 300-foot centers, a 4-ply, 28-oz. duck belt may be safely used up to and including 20″ widths; 5- ply, 28-ounce duck for 24″ and 30″ belt widths; 6- ply, 28-ounce duck for 36″, 42″, and 48″ belt widths.
##### Top Industrial Belt Conveyor Issues (With Causes and ...
Carryback is the material that remains on the belt after discharge and is perhaps the most common struggle among conveyor operators. Typically all conveyors experience carryback to some extent, but given its potential for serious consequences, keeping it to a minimum is essential.
##### Engineering conveyor belts
Engineering tables for the design of conveyor belt. Belt Rating . EP 400/2. EP 800/2 . EP 400/3. EP 630/3 . EP 500/4. EP 1000/4 . EP 800/5
##### Belt Conveyors for Bulk Materials - Fifth Edition - Chapter 6
conveyors or very high belt speed (over 1,000 fpm) refer to CEMA member compa-nies for more specific values of A i . K y — Factor for Calculating the Force of Belt and Load Flexure over the Idlers Both the resistance of the belt to flexure as it moves over idlers and the resistance
##### Five simple, effective conveyor belt tracking upgrades ...
Mar 19, 2020 Loads change direction and conveyor belt irregularities creep in over time. Having the right type of belt tracking components to suit your conveyor belt’s speed, trough angle and degree of incline can make the world of difference – and is a site-specific investment that is affordable and will bring immediate results.
##### Incline Conveyor Calculators Cisco-Eagle
Notes: As a rule of thumb, the maximum angle for conveying cardboard boxes is 25 degrees. If conveying plastic containers, 15 degrees is typically the maximum incline. The conveyor can be inclined up to 30 degrees. Use our Box Tumbling Calculator to find the maximum incline angle for your containers. You may also be interested: Incline Conveyors.
##### 10 Belt Conveyor Types 5 Types of Conveyor Belt ...
Trough Belt Conveyor. Trough belt conveyor has a large loading and conveying capacity, the conveyor belt of the trough belt conveyor adopts multi-layer rubber belt, with the belt width of 500mm, 650mm, 800mm, 1000mm, 1200mm and 1400mm. The carrying side conveyopr belt is supported by a troughed idler that composed of 3 rollers (the angle of the ...
##### Rubber Belt Conveyor - CONVEYOR EQUIPMENT-Products -
Q3: May i ask the belt width, discharge height and ground length that you want? The standard conveyor is with belt width 300mm, discharge height 3296mm. Q4: Can you tell us if you need variable speed or only certain speed you need? If you need variable speed, we shall add frequency inverter to control speed at any time as your need.
|
|
Article Text
## other Versions
Biodefence and the production of knowledge: rethinking the problem
1. Allen Buchanan (1)
2. Maureen C Kelley (2)

Affiliations:
1. Duke University, Durham, North Carolina, USA
2. University of Washington, School of Medicine, Seattle, Washington, USA

Correspondence to Dr Allen Buchanan, Duke University, Durham, NC, USA; allenb{at}duke.edu
## Abstract
Biodefence, broadly understood as efforts to prevent or mitigate the damage of a bioterrorist attack, raises a number of ethical issues, from the allocation of scarce biomedical research and public health funds, to the use of coercion in quarantine and other containment measures in the event of an outbreak. In response to the US bioterrorist attacks following September 11, significant US policy decisions were made to spur scientific enquiry in the name of biodefence. These decisions led to a number of critical institutional changes within the US federal government agencies governing scientific research. Subsequent science policy discussions have focused largely on ‘the dual use problem’: how to preserve the openness of scientific research while preventing research undertaken for the prevention or mitigation of biological threats from being misused by third parties. We join others in shifting the ethical debate over biodefence away from a simple framing of the problem as one of dual use, by demonstrating how a dual use framing distorts the debate about bioterrorism and truncates discussion of the moral issues. We offer an alternative framing rooted in social epistemology and institutional design theory, arguing that the ethical and policy debates regarding ‘dual use’ biomedical research ought to be reframed as a larger optimisation problem across a plurality of values including, among others: (1) the production of scientific knowledge; (2) the protection of human and animal subjects; (3) the promotion and protection of public health (national and global); (4) freedom of scientific enquiry; and (5) the constraint of government power.
• Medical ethics
Biodefence, broadly understood as efforts to prevent or mitigate the damage of a bioterrorist attack, raises a number of ethical issues, from the allocation of scarce biomedical research and public health funds, to the use of coercion in quarantine and other containment measures in the event of an outbreak, to efforts to extend international arms control regimes to biological weapons. In response to the US bioterrorist attacks following 9/11, significant US policy decisions were made to spur scientific enquiry in the name of biodefence. These decisions in turn led to a number of critical institutional changes within the US federal government agencies governing scientific research, both at government laboratories and academic research centres. Subsequent science policy discussions have focused largely on ‘the dual use problem’: how to preserve the openness of scientific research while preventing research undertaken for the prevention or mitigation of biological threats from being used to cause harm by non-state terrorists or aggressive dictators. On this characterisation of ‘the dual use problem’, biomedical scientists must consider whether and, if so, to what extent the commitment to ‘open science’ ought to be compromised.
Although the term ‘open science’ is unfortunately broad, the main idea, as Robert Merton and others have noted, is that the scientific enterprise is characterised by a commitment to costless or low cost information sharing, understood as an element of the more basic commitment to the accumulation of knowledge through collective effort.1 2 The chief justification of openness is that it contributes to the production of scientific knowledge. Our aim is to join others in the bioethics literature in shifting the ethical debate over biodefence away from a simple framing of the problem as one of dual use, by making clear how a dual use framing distorts the debate about bioterrorism and truncates discussion of the moral issues.3–5 To advance the debate further we offer an alternative framing rooted in social epistemology and institutional design theory, better to inform policy deliberation over the full range of ethical challenges raised by the biodefence enterprise.
## Reframing the dual use issue
Framing the ethical concerns of biodefence as predominantly a problem of dual use is inadequate for at least two reasons. First, the reference to ‘the dual use problem’ is misleading. As others have noted there are at least two distinct dual use problems.6 Furthermore, measures to cope with one may be inadequate for coping with—or may even exacerbate—the other. Biodefence research might be used not only by non-state terrorists or aggressive dictators, but also by any state that has or contemplates developing an offensive bioweapons programme.
It is important to understand that even states that have no aggressive intentions may have an incentive to develop offensive bioweapons. Fear of not having offensive bioweapons when others have them can motivate a self-defensive offensive bioweapons arms race, as existed between the USA and the former Soviet Union during the cold war.7 8 States not intent on aggression may conclude that, as with nuclear weapons, a ‘balance of terror’ is necessary for their security. Scientists and ordinary citizens should thus be concerned not only that biodefence research may be used to develop offensive bioweapons by non-state terrorists or by ‘outlaw states’, but also by their own governments. Furthermore, it is not enough that a country refrains from seeking to use biodefence research to develop offensive weapons. Unless other countries have adequate assurance that this is so, a self-defensive bioweapons arms race may occur. Clarity and candor would be better served if ambiguous talk about ‘the dual use problem’ were abandoned and replaced with ‘the dual use problems’ or by explicit references to ‘dual use problem 1’ and ‘dual use problem 2’:
• DU1: Research undertaken for prevention or mitigation of biological threats being used to cause harm by non-state terrorists or aggressive state actors.
• DU2: Research used to develop offensive bioweapons by one's own government.
Second, it is not the case that measures to cope with the dual use problem(s) would be the first instance in which biomedical scientists are faced with the problem of a conflict between the values that underlie the norms of ‘open science’ and other important values. The norms of openness have never been absolute, nor should they be, because the values that underlie them are not absolute but instead must be balanced against other important values. Two examples should suffice to make this simple but crucial point: intellectual property and privacy protections for human research subjects. What sorts of items should count as intellectual property and how extensive the rights to control their uses should be are complex matters on which there is much disagreement; but if there is any room at all for intellectual property in the scientific research enterprise, then the norms of ‘open science’ cannot be absolute, because intellectual property rules constrain the dissemination of knowledge by limiting access to items (such as gene sequences) whose use is necessary for gaining knowledge. Similarly, ethical concerns about privacy quite properly limit the freedom of researchers to exchange information about human subjects. So, openness is not and has never been an absolute value. The current processes by which scientific knowledge is produced already reflect a compromise between openness and other values.
Recognising these two deficiencies in the dual use framing of biodefence has two important implications. First, one should not assume that policy measures crafted to cope with dual use problem 1 will be effective for coping with dual use problem 2. For example, omitting certain steps in the creation of a deadly virus from a publication might render the publication useless to a non-state terrorist group or to the relatively poorly trained or under-resourced bioweapons researchers of a so-called ‘outlaw state’, but the better trained, better resourced bioweapons researchers of a ‘great power’ might be able to fill in the gaps. What is more, some measures to mitigate the risks of dual use 1 might actually increase the risks of dual use 2. For example, a government-appointed national advisory board charged with vetting research to prevent it from being used by non-state terrorists or ‘outlaw states’ might officially or unofficially channel information to its own government's bioweapons researchers while increasing the value of the information to them by preventing others from getting access to it. Second, and more fundamentally, once we understand that the norms of ‘open science’ and the values that underlie them are not absolute, it becomes evident that the dual use problems should be reconceived as one aspect of a larger optimisation problem: how can policy, broadly understood, help shape the scientific enterprise in such a way as to give due weight both to its distinctive role in producing knowledge and to other relevant values, including, but not restricted to, the reduction of both dual use risks?
Just what values ought to be included in the optimisation project and how they ought to be weighted, are of course, difficult, contested questions. The central point is that an overly simplistic assumption that the problem is how to balance the two competing values of biosecurity and open science diverts public discussion from the other important values at stake. In the dual use policy discussions to date, we have seen two examples of this error: (1) failure to consider adequately the impact of biodefence research on the ethical use of human and non-human animals in research; and (2) failure to account for the opportunity costs of biodefence research vis-à-vis efforts to reduce the burden of infectious disease among the world's poor.
Few would dispute that the protection of human and non-human animal subjects also ought to be taken into account in the design of the enterprise of producing scientific knowledge. Yet, when ‘the dual use problem’ (meaning dual use problem 1) occupies centre stage, it is the interests of only two parties that are likely to be strongly represented: scientists who fear constraints on the pursuit of knowledge, and government officials whose worst nightmare is a bioterrorist attack that could have been prevented. Therefore, one of the dangers of an overly simplistic framing of the ethics of biodefence is that it largely ignores or arbitrarily discounts values that have been central to the research ethics debate since its inception: the protection of research subjects, both human and non-human. Special attention ought to be given to the need for protecting research subjects against risk in the testing or use of experimental vaccines in the event of an outbreak, or in the process of ‘emergency preparedness’. In this regard, the ethics of research ought to be nearer the centre of the biosecurity debate.
Similarly, as May has argued, it is important to consider the opportunity costs of investments in biodefence research.4 In particular, it can be argued that concerns about distributive justice ought to be given some weight in policies affecting the production of scientific knowledge, for example, by devising policies to provide greater incentives for research that is likely to yield results (such as a vaccine for malaria) that will help meet the special needs of the world's worst-off people.9 10 In biodefence discussions, if ‘the dual use problem’ is treated as central, consideration of this value, if it occurs at all, tends to be almost an afterthought.
To counter this tendency, some have appealed to yet a third sense of the term ‘dual use’, what might be called the ‘dual use opportunity’: the prospect that research undertaken for biodefence may contribute, or might be made to contribute, to the alleviation of the burden of disease among the world's worst-off people. This possibility was discussed, for example, at the Bioethics and Biodefense Meeting, 5 February 2007, at the Johns Hopkins School of Advanced International Studies. This meeting was sponsored by the Southeast Regional Center for Excellence for Emerging Infections and Biodefense and co-sponsored by the Johns Hopkins University Berman Bioethics Institute, the University of Minnesota Center for Bioethics, and the University of Washington Department of Medical History and Ethics. The idea is that knowledge for responding to bioterrorist attacks may also be valuable for responding to naturally occurring infectious disease outbreaks, many of which disproportionately affect poor populations, and that biodefence policy should take this fact into account.
Unfortunately, concerns about distributive justice have not been incorporated into the biodefence debate in any serious or systematic fashion. For example, in recent debates concerning the US government investments in global health, including HIV and the President's Emergency Plan for AIDS Relief, at no time were the trade-offs vis-à-vis renewed investments in biodefence research funding mentioned.11 Monies that are allocated to anthrax studies, for instance, are not available for developing new antimalarial drugs.12 13 Keeping the biodefence allocation decisions out of transparent debate has masked the opportunity costs of the massive biodefence effort. It is critical to ask, however, what research or health investments might we forego in order to continue funding biodefence research?
The point is not that concerns of distributive justice or the protection of research subjects ‘trump’ security concerns, nor is it to deny that under exceptional circumstances they should be accorded less weight than they ordinarily have. Instead, it is that there should be a vigorous debate about the ethical justification for reducing the threshold for acceptable risk in the process of consent for experimental vaccines, and for increasing the use of non-human primates in biodefence research. Such a debate requires discussion of multiple values, each of which has substantial weight. An ethically responsible policy approach cannot simply assume that in effect the only two values at stake are ‘open science’ and biosecurity, because efforts to reconcile these two values may have serious consequences for the pursuit of other important values. To summarise, it is not simply that there are two dual use problems, not one (as well as a ‘dual use opportunity’); the more fundamental conclusion is that the dual use problems (and ‘the dual use opportunity’) are only aspects of a larger optimisation problem.
The idea of optimisation is crucial because it emphasises that the task is not to maximise the realisation of any one value (such as protection against bioterrorism), or to achieve an acceptable trade-off between just two values (such as ‘open science’ and biosecurity), but rather to achieve an overall outcome that gives due weight to all relevant values. The optimisation framing opens the door to discussions of values, such as giving some priority to a more equitable distribution of the benefits of scientific research or the protection of research subjects, that otherwise might be ignored or indefensibly discounted as a result of focusing exclusively on the trade-off between ‘open science’ and biosecurity. Notice that we use the term ‘optimisation’ here in a broad sense; there is no assumption that all competing values can be fully quantified and subjected to a definitive maximising calculation. Rather, the point is that there are multiple values that must each be given due consideration in an attempt to make an all-things-considered judgement about what to do. In many cases, optimising will require judgement, not just calculation.
The optimisation framing is also useful for dispelling the view, promoted by the political rhetoric of the ‘war against terror’ (as in all putative national emergencies), that the goal is to maximise risk reduction, that is, to reduce the risk of harm (in this case harm due to the rapid spread of infectious disease) to zero. Maximal, as opposed to optimal, risk reduction is irrational, and the attempt to achieve it is unethical because efforts to achieve it come at the expense of other important values.
It might be objected that in times of national emergency, such as the so-called ‘war on terror’, the goal is not to optimise across a plurality of values, but to seek a proper balance of only two dominant values: biosecurity and ‘open science’. The idea here would be that in current conditions other values can and ought to be ignored, because the stakes are so high. The unargued and highly problematical assumptions behind this objection are: (1) that in circumstances of extraordinary risk of bioterrorism, biosecurity and ‘open science’ are of much greater weight than all other relevant values combined; (2) that the only way to secure those two values is to proceed as if no other values existed; and (3) that the circumstances of extraordinary risk—risk sufficient to justify such an abandonment of the optimisation approach—can be reliably ascertained. Not one of these three assumptions has been explicitly defended by those who place ‘the (first) dual use problem’ at centre stage of the debate on biodefence.
It may be difficult to ascertain when conditions justify abandoning the optimisation approach and disregarding values we otherwise agree are of great importance. This point warrants elaboration. Institutions, preeminently, government institutions, shape beliefs about what constitutes an emergency and about when a state of emergency exists. Institutional agents sometimes have strong incentives to encourage a blurring of the line between preparing for an emergency and the occurrence of an emergency. Political leaders, whose roles give them opportunities for shaping public perceptions, have incentives to foster the belief that an emergency exists, because it is generally assumed that emergencies require extraordinary powers and reduce the requirements of transparency as a condition for the legitimacy of political authority. In brief, once people become convinced that we are in an emergency, they are more willing to accept the view that ordinary moral norms and the standard checks and balances of democratic constitutional government do not apply, or apply with less force—that the government should be given a ‘free hand’, and that criticism of the government is inappropriate, dangerous and even disloyal.14
So whether we are in fact in an emergency is a matter of great importance. Presumably scientific knowledge should play some role in determining the magnitude and probability of the risks that are judged to constitute an emergency and therefore in determining whether a state of emergency exists.
Although good facts are relevant to determining whether an emergency exists, a ‘state of emergency’ is not a natural fact to be discovered by empirical methods. The statement that a state of emergency exists is a political act, grounded in an evaluation of how serious certain risks are, with the added implication that the ordinary moral, political and legal rules do not apply. If this is the case, then a thorough investigation of alternative institutional arrangements for achieving biodefence at acceptable costs—when all relevant moral costs are considered—cannot take the distinction between emergency and non-emergency situations for granted, but must consider the possibility that scientific institutions can play an important role in providing a check on the tendency of government leaders to be too ready to declare an emergency. Furthermore, there is a tendency, as we have seen in the USA since the 9/11 attacks, for institutions implemented in a state of emergency to become permanent; arguably we have remained in a chronic state of emergency, or heightened alert, for a decade. So, once again we come to the same conclusion: it is a mistake to think that the only values to be balanced are biosecurity and ‘open science’. Reduction of the risk of erroneous judgements about the state of emergency, and more generally the risk of abuse of government power, should also be taken into account.
There has been another ambiguity in the policy discussions over biodefence, particularly concerning the dissemination of scientific findings. Sometimes the assumption is that the solution is to formulate guidelines to help individuals engage in risk–benefit assessments regarding the dissemination of particular research results, when the assumption is that the risk is that of ‘dual use’ (ie, dual use 1) and the benefit is ‘open science’. Those who advocate such risk–benefit assessment also propose that a number of different parties, who in fact occupy quite different roles, including the scientific researchers themselves, scientific journal editors and perhaps government officials as well, should follow the same risk–benefit assessment guidelines and apply them to the same thing, namely, the dissemination of particular research results.15–19
Such proposals overlook the importance of the division of labour in a reasonable response to the optimisation problem. Better outcomes might be achieved if different agents, depending on their institutional roles, engage in different activities, following different guidelines. For example, it could be argued that government officials should not engage directly in the risk–benefit assessment of particular research results, but instead should be responsible for ensuring the accountability of the risk–benefit assessment procedures of other agents, including editors of scientific journals. According to this way of thinking, government officials might well employ some form of risk–benefit analysis, but they would apply it to the evaluation of risk–benefit assessments of particular research results by other agents, not to the act of disseminating or withholding particular research results. Similarly, it could be argued that scientists could assess the risks and benefits of disseminating their research more accurately if they did not attempt actual risk–benefit assessments of it, but instead employed guidelines that include reliable proxies for risk–benefit calculations. The idea that the best way of achieving a favourable balance of benefits over costs is not always to act on the maxim ‘maximise benefits over costs’ is familiar from discussions of indirect utilitarianism.20
While it is correct to say that a proper response to ‘the dual use problem(s)’ will include a role for risk–benefit analysis, determining which agents should apply such analysis to which actions is a complex matter. More precisely, it is a problem of institutional design.
### The role of institutions
Institutional solutions to the problem of balancing ‘open science’ with protection against ‘the dual use problem’ have been proposed, but they have typically been defective in two ways. First, they have been based on uncritical assumptions about the role of government—not just by neglecting dual use problem 2, but also by a more general failure to take seriously the conflicts of interest to which government officials are often subject. Although the fact that government involvement brings risks has sometimes been acknowledged in the US biodefence debate, the chief risk has been assumed to be interference with the production of scientific knowledge. There has been no systematic exploration of the full range of risks involved or the sorts of institutional arrangements that may either magnify or reduce them. The fact that US institutional proposals have failed to distinguish the two dual use problems and to acknowledge that solutions to the former may exacerbate the latter are clear indications that the risks of government involvement have not been taken seriously, much less systematically explored.
Second, discussions that do assign an important role to institutions frequently assume that a particular division of labour among institutions and agents is appropriate, without providing good reasons for why this is so and without considering alternative arrangements. For example, some have advocated voluntary ‘self-policing’ of the dissemination of information by researchers or by researchers working with scientific journal editors, claiming that government oversight should either be avoided or kept to a ‘minimum’.21–24 Such proposals provide no evidence for the efficacy of ‘self-policing’, show little awareness of the conflicts of interest and limitations of knowledge about the risks of harmful misuses of research to which researchers and journal editors may be subject, and ignore the fact that the admonition to keep government involvement to ‘a minimum’ only makes sense within the context of an account of optimisation that they have not begun to provide. What is needed is a more critical and systematic exploration of solutions to the optimisation problem, one that first applies cost–benefit analysis, broadly construed so as to accommodate moral values as well as efficiency, not to the choices of individuals as to whether to disseminate particular research results, but to the design of institutions, with the goal of developing an institutional division of labour whose overall result will achieve a proper balancing of biodefence with other values, including, but not limited to, the value of ‘open science’.
This institutional optimisation task is exceedingly complex, as others have acknowledged.5 To make headway on it we identify two key conceptual resources to advance the current debate over biodefence: the idea of social epistemology and that of institutional design.
## Social epistemology as a resource for conceptualising the optimisation problem
### The relevance of social epistemology
Social epistemology has been defined as the comparative assessment of the efficacy of alternative institutions creating, transmitting and preserving true or justified beliefs.25 Institutions are understood broadly to include formal and informal norm-governed, relatively stable patterns of organisation that typically include an internal division of labour characterised by roles.
Social epistemology is grounded in three simple but powerful ideas. (1) Knowledge generally, including scientific knowledge, is largely a social, not a purely individual accomplishment. (2) Institutions (broadly understood) play a vital role in the social production of knowledge. (In this broad sense we can speak of ‘the institutions of science’ meaning the totality of persisting patterns of norm-governed interactions that constitute the scientific community.) (3) The institutionalised social production of knowledge requires a complex division of cognitive labour, but does not require any overall central authority to direct the process of knowledge production.26 (In that sense, social epistemology proceeds on a very loose analogy with the ‘invisible hand’ explanations of market economics. Note: this is not to say that knowledge is best produced ‘in the private sector’). Peter Railton uses the term ‘the invisible mind’ here and provides a valuable discussion of the implications of a social epistemology approach for current debates about the objectivity of science.27
Thus far, social epistemology has concentrated chiefly on the institutions of science, attempting to identify their ‘epistemic virtues’, the features of these institutions that contribute to the production of scientific knowledge (or, on some more cautious formulations, justified empirical beliefs). The task of optimisation with which we are concerned is more complex: to try to ensure that other important values, over and above the production of scientific knowledge, including biosecurity, are properly accommodated with the least detriment to the epistemic virtues of the institutions of science.
Nonetheless, a focus on the epistemic virtues of the institutions of science is a logical place to begin the more complex task. If the problem is to balance protection of the ‘norms of open science’ against other values, including biosecurity, then it will be important to know what role various ‘norms of open science’ actually play in the production of scientific knowledge; but to know this we need a social epistemology of scientific institutions. In brief, before we modify the knowledge-producing institutions of science in the name of biosecurity (or, more accurately, to solve an optimisation problem in which biosecurity is one value), it would be useful if we had some idea how the institutions of science produce knowledge. Saying that they do so through the operation of ‘norms of openness’ is hardly adequate. Current work on the social epistemology of science indicates that many other factors besides ‘norms of openness’ play a role in the production of scientific knowledge and that there are discrepancies between the putative ‘norms of openness’ and how science actually works. It is remarkable that the current debate about biodefence and ‘open science’ has proceeded without even acknowledging the relevance of the social epistemology of scientific institutions.
Instead, various parties to the public policy discussion have made assumptions about what can and what cannot be changed in the enterprise of scientific knowledge production without undercutting its effectiveness, in the absence of any basis for making these assumptions. If, as current work in social epistemology indicates, the production and dissemination of scientific knowledge depends upon much more than ‘norms of openness’, then this complicates the policy response. In particular, it will not be sufficient to show that a particular policy does not unduly erode the ‘norms of openness’. A policy that scored well on this count might nonetheless have the unintended effect of damaging some of the scientific knowledge-producing enterprise. This would be the case if, for example, the cooperation of scientists with government for the sake of biodefence diminished the credibility of scientists in the eyes of the public and called the objectivity of their findings into question.
From one standpoint we might say that the current policy debate suffers from a lack of awareness that there is systematic work in social epistemology that is directly relevant to it. From another we might say that the problem is that the current debate unwittingly operates with a very primitive, unarticulated and empirically unsupported ‘folk’ social epistemology according to which the (largely unspecified) ‘norms of openness’ are the only significant epistemic virtues of the scientific enterprise.
### The limitations of social epistemology
As powerful as its key ideas are, mainstream social epistemology, although necessary for tackling the optimisation problem, is insufficient for several reasons. First, its theorists have tended to concentrate on efficacy, neglecting efficiency, in the production of scientific knowledge. In other words, they have focused on whether one institutional arrangement is better than another at producing knowledge, without taking into account differences in the costs of knowledge production. Even from a purely epistemic standpoint, setting aside for the moment the need to accommodate moral values, institutional arrangements that produce knowledge at lower cost are preferable and efforts to prevent dual use problems that needlessly raise the costs of producing scientific knowledge would be unacceptable. As our characterisation of the optimisation problem makes clear, however, the costs of producing scientific knowledge must not be restricted to financial costs or time costs, but must also include the risks of harm from accidents and malicious use.
Second, investigations of the epistemic virtues of scientific institutions have frequently proceeded on the highly idealised assumption that the scientific enterprise, as a knowledge-generating process, is largely free from government interference. All theorising requires idealisation, but this particular idealisation is extremely problematical in the perceived emergency situation in which the problem of devising ethically sound biodefence policy arises.
There is another sense in which mainstream social epistemology of scientific institutions neglects the political: it tends to view the relationship between the scientific enterprise and the public exclusively in epistemic terms, chiefly from the standpoint of investigating how institutional practices such as educational credentialing and peer review of publications can help non-scientists identify genuine ‘epistemic authorities’, meaning especially reliable sources of true or justified empirical beliefs. This vantage point, although of great value, overlooks important issues concerning institutional legitimacy. An exception is Philip Kitcher's book, ‘Science in a democratic society’ (forthcoming, 2011), which discusses an apparent erosion of trust in mainstream scientific expertise regarding global climate change.
On the one hand, by identifying certain individuals as scientific authorities, the institutions of science create opportunities for government or other institutions to try to convince the public that their policies are legitimate by presenting them as scientifically informed. For example, government leaders may cite scientific estimates of the harm that would be done by a bioterrorist attack to justify the claim that a state of emergency exists and to try to convince the public that the infringements of civil liberties entailed by its response to the putative emergency are legitimate. On the other hand, whether the public regards scientists as genuine epistemic authorities can depend on whether the institutions of science are themselves viewed as legitimate. If the institutions of science are thought to be unduly influenced by government or by religion or ideology, then the credibility of scientists as epistemic authorities may decline and science may suffer a ‘legitimacy crisis’. If this occurs, scientific knowledge may still be produced, but it will not be recognised as such by the public. The widespread denial of anthropogenic climate change may be an illustration of this phenomenon.
In addition, if the public comes to believe the legitimacy of the institutions of science has been seriously compromised, it may refuse to provide the resources needed to support them, and this too may reduce their efficacy in producing knowledge. Legitimacy is an important value to be taken into account in thinking through the optimisation problem, then, with regard to both the legitimacy of scientific institutions and the role of science in contributing to the legitimacy of political institutions.
The concept of legitimacy is relevant in yet another, more fundamental way. The presumption should be that the overall policy effort to cope with the dual use problems must be compatible with the legitimate exercise of political power. The requirement of political legitimacy does not automatically disappear whenever there is a state of emergency and it certainly does not vanish simply because the government says that a state of emergency exists. ‘Legitimate’ as applied to political institutions is generally understood to mean ‘having the right to rule’ and the state is said to have the right to rule only if it operates within certain moral constraints, often specified in terms of individual rights.
Third, thus far social epistemologists concerned with understanding the epistemic virtues of scientific institutions have not explored in any depth the fact that these institutions, like institutions generally, are not only norm-governed, but are also venues in which existing norms are contested and new norms are developed. Thomas Kuhn, for example, focuses on conceptual change in the form of shifts to new paradigms of scientific explanation, not on norm contestation and change per se.28 The issue of norm change is important for the optimisation problem under consideration here in two ways. First, in considering alternative institutional arrangements we must try to determine what role norms, understood as internalised rules, should play in the overall process of balancing biosecurity with the creation and dissemination of scientific knowledge and other important values. A workable solution to the optimisation problem might require modifying some of the norms that have until now characterised the scientific community, perhaps by using new educational strategies and role modelling to try to instil a clear sense of responsibility for helping reduce the two dual use risks. In particular, we cannot assume that scientists will adopt new norms regarding responsibility for possible uses of their research simply because a new code of ethics says they should. The new norms may be weak unless they are reinforced by or are at least consistent with the incentives to which scientists are subject. Second, strategies for optimisation must also consider the possibility of unintended norm change. Institutional changes to cope with dual use risks might unwittingly erode some of the most valuable norms that constitute the institutions of science. For example, by encouraging the idea that scientists should play a key role in defending the nation, government, perhaps supported by the media, may encourage biases on the part of scientists that compromise the validity of their research.
### Principles of institutional design: a primer
So far, we have argued that the central insights of social epistemology and institutional design are crucial for a sound public policy response to biodefence issues, properly conceptualised as a complex optimisation problem. Our purpose here is not to articulate a theory of institutional design but rather to sketch some of the elements of such a theory and, in this section, to show how they can be used to evaluate some key aspects of current biodefence policy. The following principles will be familiar to students of institutional design in the social sciences, but are remarkably absent from current biodefence discussions.
#### Successful institutions typically rely not just on norm-governed behaviour, but on a plurality of role-differentiated, indirect norms of action
Different agents, occupying different roles, can contribute to the achievement of institutional goals by acting on different and even sometimes conflicting norms. These norms do not direct agents to ‘achieve institutional goal G, G1, etc’, but instead prescribe specific actions or processes, which, taken together in the overall operation of the array of institutions, tend to promote institutional goals. Here an analogy with market economy explanations is useful: under ideal conditions markets produce efficient states in equilibrium, but not because various agents in the market follow the norm ‘produce an efficient state’. Instead, individual agents follow other norms—such as ‘price your goods so as to maximise your profit’—thereby producing behaviour whose aggregative effect is an efficient state. Similarly, the best way to balance biosecurity with other relevant values may not be to encourage scientists (or government officials, or members of a national science biodefence advisory board) to follow the norm ‘try to strike a reasonable balance between the values of “open science” and “biosecurity”’ as they decide whether particular research findings ought to be disseminated.
#### If institutional goals are to be achieved, norms are important, but incentives also matter
Institutional effectiveness depends on incentive compatibility (the absence of perverse incentives, ones that encourage agents to act in ways that thwart institutional goals). Institutional effectiveness also depends upon the complementarity of norms and incentives; in particular, incentives should be aligned so as to make an agent's compliance with appropriate norms rewarding or at least not excessively costly to her. The idea of norm/incentive complementarity is perhaps less familiar and obvious than that of incentive compatibility, but it is equally powerful. Norms, understood as internalised rules, play a crucial role in achieving desired institutional outcomes generally, but the power of norms can be either augmented or diminished, depending upon whether institutionally generated and extra-institutional incentives support or compete with them.
#### Institutional systems can be locally inefficient but globally efficient
What appears to be wasteful or even dysfunctional behaviour narrowly considered may make a positive contribution to overall efficiency or at least may not be eliminable without reducing the efficiency of the system as a whole. The judgement that some aspect of institutional performance is inefficient may rest on either a failure to see how it fits into the larger whole or on an overly narrow characterisation of the optimisation problem the institution is designed to solve. For example, allocating funds among a plurality of research teams might seem less efficient than simply funding the team that is best qualified to do the job, but spreading the funds may be more likely to produce competitive pressures that result in the best team performing even better.
#### In well-functioning institutions, the relationship between the motives of agents and desirable outcomes may be complex and even counterintuitive
For example, within the right sort of overall institutional context, it may be highly beneficial for scientists to be motivated not just by the commitment to producing knowledge, but also by the desire for prestige and financial reward. In the production of knowledge, as in many other institutional endeavours, self-interested motivations can, under the right circumstances, contribute to the greater good through what might be called constructive competition. Here, too, an analogy with markets is instructive, although one need not go as far as Mandeville did, when he proclaimed that ‘private vices’ are ‘public virtues’.29
#### Just as it cannot be assumed that good collective outcomes require the absence of competition among agents or the moral purity of motivations, it cannot be assumed that optimisation is to be achieved through thorough-going intra-institutional or inter-institutional harmony
Conflict, including not only the clash of opposing ideas, but also the clash of interests, can be productive overall. Call this the Madisonian idea. The most obvious application of the Madisonian idea is to consider the role that a system of ‘checks and balances’ can play in an overall institutional optimisation strategy. For example, we should not assume that the best system for reducing the risk of dual uses (in either sense) is one that exhibits a thoroughly hierarchical structure of authority, with one entity at the top and all others subordinate to it. Instead, a degree of institutional competition and even some ambiguity about the ultimate locus of authority might be superior. In times of perceived emergency, there is a pronounced tendency to demand total harmony and cooperation; the Madisonian thesis emphasises that acceding to this demand can sometimes be self-defeating.
#### Because we can expect institutions to perform imperfectly and because institutional goals may need to be re-assessed in the light of new developments, sound institutions will include provisions for the critical revisability of both means and ends
Good information and effective incentives for utilising it properly are both essential to critical revisability. Other things being equal, institutional arrangements that insulate key actors from criticism and that limit their sources of information to others who share the same institutional interests, ought to be avoided.
Thus far we have: (1) distinguished two dual use problems; (2) demonstrated that unless both are considered, solutions to the first may exacerbate the second; (3) shown that the real issue is a complex optimisation problem whose solution requires applying cost–benefit analysis broadly conceived at the level of institutional design; (4) explained how social epistemology provides valuable conceptual resources for tackling the optimisation problem; (5) identified shortcomings of conventional social epistemology that limit its usefulness in this context; and (6) offered a list of principles of institutional design for employment in constructing a solution to the optimisation problem. In the next section we illustrate the fruitfulness of this more comprehensive analytical framework—which we call the institutionalist optimisation approach—by using it to evaluate several current US biodefence policies.
## Putting the institutionalist optimisation approach to work
To illustrate better the virtues of the optimisation framework, we turn now to an overview of some of the key institutional changes implemented in the USA in response to the bioterrorism threat. Our aim is not to make an all-things-considered, thumbs-up or thumbs-down evaluation of the policy alternatives, but rather to show how the institutionalist optimisation approach can contribute to such an evaluation. Above all, our remarks are designed to show how this approach provides protections against the tendency to omit from consideration certain factors that ought to be prominent in public policy deliberations, but that were largely absent in the public discussions of US biodefence policy when these institutions were created.
### NSABB and BARDA
In March 2004, in response to the anthrax attacks in the USA, the Department of Health and Human Services announced the formation of a new government entity, the National Science Advisory Board for Biosecurity (NSABB); the charter was renewed in March 2010. NSABB was implemented under 42 USC 217a, section 222 of the Public Health Service Act, as amended, and Pub L 109-417, section 205 of the Pandemic and All-Hazards Preparedness Act. NSABB is governed by the provisions of the Federal Advisory Committee Act, as amended (5 USC app). According to its charter, the purpose of the board is to ‘provide, as requested, advice, guidance, and leadership regarding biosecurity oversight of dual use research, defined as biological research with legitimate scientific purpose that may be misused to pose a biologic threat to public health and/or national security’. In its first year NSABB worked to develop a definition of dual use research in order to inform the responsibilities of scientists conducting such research.
They identified ‘dual use research of concern’ as research that ‘based on current understanding, can be reasonably anticipated to provide knowledge, products, or technologies that could be directly misapplied by others to pose a threat to public health and safety, agricultural crops and other plants, animals, the environment, or material’.30 The board is charged with: (1) recommending strategies and guidance for those conducting dual use research, or those with access to select biological agents and toxins; (2) providing recommendations for educating and training scientists, laboratory workers, students and trainees about dual use research issues; (3) advising on policies governing publication, communication and dissemination of dual use research methodologies and results; (4) recommending strategies for promoting international engagement on dual use research issues; and (5) advising on the development of codes of conduct for life scientists engaged in dual use research.31 The purpose and activities of NSABB take for granted the above definitions of ‘dual use’ and ‘dual use research of concern’, which, while somewhat vague, track the definition of dual use 1 as we have discussed and implicitly present the problem as a trade-off between open science and preventing the misuse of otherwise beneficial science for malevolent ends. The absence of attention to the risks that ‘good’ governments will misuse biological science indicates that the focus of attention has been primarily limited to dual use 1.
In December 2006, the US Congress passed a biodefence bill, Project BioShield, and created a second new institution, the Biomedical Advanced Research and Development Authority (BARDA), whose sole purpose is to oversee funding for the development and purchase of vaccines, drugs, therapies and diagnostic tools in response to public health medical emergencies, including bioterror and pandemic agents.32–34 The rationale behind the agency and its management of Project BioShield is to speed up the procurement and development of potential countermeasures for chemical, biological, radiological and nuclear agents as well as medical countermeasures for pandemic influenza and other emerging infectious diseases that fall outside the scope of Project BioShield.35 BARDA also manages the Public Health Emergency Medical Countermeasures Enterprise, which represents an attempt to offer ‘a central source of information regarding research, development, and acquisition of medical countermeasures for public health emergencies, both naturally occurring and intentional’. Comments regarding the purpose of the Public Health Emergency Medical Countermeasures Enterprise reflect an implicit stand on the overriding value of a scientific enterprise that is able to respond quickly to a terrorist threat or unintentional pandemic: ‘Our nation must have a system that is nimble and flexible enough to produce medical countermeasures quickly in the face of an attack or threat, whether it's one we know about today or a new one. By moving towards a 21st century countermeasures enterprise with a strong base of discovery, a clear regulatory pathway, and agile manufacturing, we will be able to respond faster and more effectively to public health threats.’36
The overall strategy adopted by BARDA has been to channel funding to earlier stages of drug and vaccine development—the stage referred to in the drug industry as the ‘valley of death’, as companies are left to pay for research and development until vaccines and drugs are ready for use and government purchase. The agency received an initial budget of US$1 billion over 2 years. The most recent budget figures for civilian biodefence in 2010–11 totalled US$6.48 billion. Of that total, US$5.90 billion (91%) has been budgeted for programmes that have both biodefence and non-biodefence goals and applications, and US$577.9 million (9%) has been budgeted for programmes that deal strictly with biodefence.37
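The cited budget split is internally consistent, as a quick check of the figures quoted above confirms (a minimal sketch; the dollar amounts are simply those reported in the text):

```python
# Sanity check of the civilian biodefence budget figures quoted above
# (all figures in billions of US dollars, 2010-11).
total = 6.48          # total civilian biodefence budget
multi_use = 5.90      # programmes with both biodefence and non-biodefence goals
strict = 0.5779       # programmes dealing strictly with biodefence

# The two components should sum (approximately) to the quoted total
assert abs(multi_use + strict - total) < 0.01

# And the rounded shares should match the quoted percentages
print(round(multi_use / total * 100))  # 91
print(round(strict / total * 100))     # 9
```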
NSABB has been criticised for its lack of transparency, but the deeper issue is appropriate accountability. As an illustration of concerns regarding NSABB transparency and accountability to stakeholders, see the public comments. See also the series of reports from the activist group The Sunshine Project, whose mission has been to shed light on the conduct of government-sponsored biodefence research.38 39 Transparent processes provide necessary but not sufficient conditions for holding institutions and their members accountable. There are no provisions for holding NSABB members accountable, either as individuals or collectively, beyond their accountability to the federal government that appointed them. Such an arrangement is suspect, to say the least, given that here, as elsewhere, the interests of the government and those of the public and other relevant parties, including the scientific community, are not perfectly congruent. In particular, under the sustained conditions of the continuing ‘war on terror’, government officials are subject to incentives that may lead them to exaggerate the risk of ‘dual use 1’ to the detriment of a proper accommodation of other relevant values. In brief, the federal government may be systematically biased towards the avoidance of ‘type 1 errors’ (in this case, a ‘type 1 error’ would be the failure to take adequate precautions against bioterrorist threats; a ‘type 2 error’ would be taking more extensive precautions than are necessary, at the expense of other values). If the federal government's standards for holding NSABB accountable reflect such biases, then to that extent accountability is inadequate.
The structure of BARDA raises similar concerns. Two features of the agency's structure are likely to increase the bias towards type 2 errors. First, the amount of funding earmarked for biodefence research as summarised above increases the incentives for individual scientists, research programmes and industry to join in bioagents research. This huge infusion of funds may actually increase the risk of bioterrorist attacks by increasing the number of individuals who have the knowledge and means to weaponise biological agents. The strong focus on dual use protection since the increase in funding indicates some awareness of exactly this risk. The more funding goes to US scientists to investigate select agents, the greater the risk that published results and findings fall into the ‘wrong hands’. What has not been openly discussed, but is implicitly apparent in the need for international partnerships in addressing dual use risk, is that this influx of funding may well spur other governments to join in a global biodefence research race, which may in turn increase the risk of the accidental or deliberate use of bioweapons. The nuclear and offensive bioweapons programmes of the cold war give ample historical reason to believe the defensive race might well evolve into an offensive race. Second, BARDA's exemption from the US Freedom of Information Act not only undermines accountability but may also exacerbate the risk of type 2 error. While requests for information that are deemed non-threatening to national security may be honoured, such determinations will not be subject to judicial review, but rather made internally by the agency. Given BARDA's primary goal—to expedite the research and development of bioterror countermeasures in preparation for a possible attack while protecting against malevolent uses of our own research products—the presumption will probably be one of non-disclosure.
Without the possibility of independent judicial review, a tendency towards type 2 error is built into BARDA's very structure.
Even more obviously, making NSABB and BARDA accountable only to the federal government is inadequate from the standpoint of the dual use 2 problem. Indeed, it is difficult to imagine a more favourable arrangement than the current one, from the standpoint of those interested in developing a US offensive bioweapons programme. Both NSABB and BARDA are positioned to pass information relevant to bioweapons research on to government agencies; to prevent the dissemination of this information to others; and to conceal the fact that they are doing so.
We do not claim that NSABB or BARDA are engaged in such activities; nor are we claiming that either is making substantively wrong decisions. The point is that from the standpoint of institutional design both are deeply flawed, because they lack appropriate accountability, lack safeguards against bias towards type 2 errors, and do nothing to reduce the risk of dual use 2.
NSABB does allow limited access to its publicly announced meetings, but retains the power to determine what the public should and should not know, without any acknowledgement of the need for safeguards against abusing this power. Under these conditions, the mere presence of outsiders during portions of the NSABB's public meetings is not worthy of being called an accountability mechanism. The point is not that the public should be allowed to determine which of the NSABB's proceedings it should be included in; rather, it is that there should be some provision for helping to ensure that the NSABB does not, wittingly or unwittingly, abuse its power to make this decision. Similarly, judicial review of freedom of information requests is essential to maintaining the accountability of BARDA research.
Accountability includes three elements: (1) adequate standards of performance for evaluating the behaviour of institutional agents; (2) appropriate ‘accountability holders’ to apply these standards to evaluate the behaviour of institutional agents; and (3) adequate capacity and willingness of some designated agent or agents to impose costs on agents for failure to perform according to the standards. Unless the standards of performance to which NSABB is held accountable reflect a clear awareness that its operations are one element in an overall response to a complex optimisation problem that includes efforts to reduce dual use 2 as well as 1, those standards will not be adequate.
Adequate accountability holders are those who can be relied upon to represent all the values relevant to the complex optimisation problem, not just the most pressing current concerns of the federal government. Presumably, the federal government has the capacity to hold NSABB and BARDA accountable, but it is unclear whether the government is willing to hold them accountable to standards that reflect the plurality of values relevant to the optimisation problem rather than those that mirror its own current most pressing concerns, including the desire to avoid a bioterrorist attack at all costs. The public currently has no good reason to believe that NSABB and BARDA satisfy any of the three elements of appropriate accountability.
The predictable reply to these criticisms will no doubt be this: accountability requires transparency, but under current conditions, transparency is not compatible with either NSABB or BARDA doing its job. This reply is not adequate. If contemporaneous transparency is too risky, then provisions could be made for ex post transparency, under more favourable future conditions, when the bioterrorist threat has abated somewhat. Arguably, we have been in that state for 10 years, but the maintenance of a chronic, heightened state of alert has contributed to the sense that transparency is permanently too risky in this area of research and policy. To our knowledge, NSABB has not even raised the possibility that there may be other ways of assuring accountability than full contemporaneous transparency. BARDA's exemption from judicial review on freedom of information requests is a clear case of pre-empting an essential mechanism for assuring accountability through both contemporaneous and ex post transparency (as it shuts off the possibility that judicial review might allow a delayed or selective release of information).
Full contemporaneous transparency, full ex post transparency and the current unsatisfactory lack of accountability are not the only alternatives for appropriate accountability. A fourth alternative is suggested by one of the key principles of institutional design listed above, the Madisonian idea. For example, a committee or subcommittee from the legislative branch or a special panel of individuals could be formally charged with periodically reviewing explanations by the NSABB as to why it had excluded the public from its proceedings. The reviewing body would be chosen so that its members would have interests and be under incentives that were not unduly congruent with those of NSABB members. Yet another alternative would be a formal ex post review of NSABB's performance after its work is done, with concrete costs attached to a negative review.
Concerns have also been raised about BARDA's relationship to US federal research agencies, such as the National Institute of Allergy and Infectious Diseases and the Centers for Disease Control and Prevention.33 However, the central worry is not that BARDA may create redundancies or inefficiencies, adding an ‘extra layer of complexity’ to biodefence research. Rather, it is that BARDA-funded countermeasures research may be exempt from the more rigorous ethical review processes for human and non-human research subjects.
At present no clear, publicly available information exists about how BARDA proposals are reviewed and how expertise for the review process is determined; both are key factors for the accountability necessary to the ethical conduct of research. The lack of clarity and, indeed, mystery surrounding the federal review and oversight of dual use research is also reflected in the debate among top scientific publishers. In the post-2001 rush to respond to the bioterrorism threat in the USA, these institutions were created with no discussion or consideration of any other institutional alternatives. Ten years later, the institutions have become an accepted part of the US biodefence research enterprise, while still lacking mechanisms to address serious ethical concerns beyond the narrow understanding of dual use that shaped the institutions' creation and ongoing activities. This is one more indication that the conceptual framework offered by social epistemology and institutional design can begin to elucidate the more complex ethical issues at stake, well beyond even the more sophisticated understanding of dual use.
## Conclusion
We have argued that the ethical and policy debates regarding ‘dual use’ biomedical research ought to be reframed as a larger optimisation problem across a plurality of values including, among others: (1) the production of scientific knowledge; (2) the protection of human and animal subjects; (3) the promotion and protection of public health (national and global); (4) freedom of scientific enquiry; and (5) the constraint of government power. We have also argued that a fruitful response to the optimisation problem will employ the tools of social epistemology as well as sound principles of institutional design.
Our goal has not been to resolve any policy issue in the area of biodefence. Instead, our focus has been methodological but at the same time eminently practical. Given the preoccupation with the protection of science as a knowledge-producing enterprise, it is remarkable that there has been so little attention to identifying more precisely those features of the institutions of science that might be adversely affected by this or that policy initiative. Instead, participants in the debate have treated the institutions of science as a kind of black box, whose mysterious operations are achieved through something vaguely called ‘the norms of openness’. In other words, they have rested content with a sparse ‘folk’ social epistemology of science, without asking whether anything more useful is available.
The current debate is equally remarkable for its lack of attention to the most rudimentary principles of institutional design. Even if the problem were simply that of ‘balancing biosecurity with open science’, institutional design would still be relevant. Once it is seen that the real issue is a much more complex optimisation problem, the case for thinking explicitly about institutional design becomes all the more compelling.
The biodefence policy choices we make now may have profound effects, not only on the enterprise of science, but on the relationship between science and government, for years to come. The issues are difficult enough; there is no need to make them more intractable by poorly conceptualising them.
## Footnotes
• Competing interests None.
• Provenance and peer review Not commissioned; externally peer reviewed.
# ENGR 101: Introduction to Computers and Programming
Using Computing to Solve Engineering Problems
Syllabus Winter 2023
# Course Overview
Engineering 101 focuses on using your computer to solve engineering problems through computer programming. Many engineering problems involve repetition – getting data and doing the same calculations over and over again. Automating this process, using programming, saves time and minimizes errors in these calculations. Engineers have more and more data to work with, so developing computer programs is now a part of almost every modern engineering project.
One of the core concepts of the course is the concept of an algorithm: a well-defined set of steps that achieves a particular goal. Constructing an algorithm for a given purpose is fundamental in every engineering design task. Algorithms help us break down large, complex problems into smaller problems that we can solve separately and then weave back together to give us a final answer. In this course, you will learn how to create algorithms for solving engineering problems, and then how to write those algorithms in a programming language that the computer can understand.
There are literally hundreds of different computer programming languages. The choice of programming language depends on what you are trying to do and how you are trying to do it. In this course, we will use two different programming languages: MATLAB and C++. Essentially, the course is split into two half-semester “mini-courses”, each focusing on a different programming language. We will show how MATLAB is a good choice of language if you need to process numerical data and make graphs summarizing your analysis. We’ll then switch over to C++, a language that illuminates more of the inner workings of computer programs and is a good choice for coding algorithms that require complex control flow to make several decisions over time.
This course has an individual work component and a collaborative lab and project component. In the weekly homework, you will be introduced to the various skills and concepts you will use in this course via PrairieLearn. In lab, you will practice these skills, and then you will put them to use in the projects (some of which are partner-optional). Graded coursework consists of the weekly homework, lab assignments, projects, and lecture reflection forms. There are four assessments in the course, covering cumulative amounts of material as the semester goes on. Two assessments will be focused on MATLAB, and two will be focused on C++.
Our goal is that you leave this course with experience in logically breaking down a large problem into smaller problems that are more easily solved and with an appreciation for how critical computer programming is for all engineering disciplines.
All course content is accessible at or through our course website at engr101.org.
# Staff
Here is contact information for all the course staff. If you have a technical question, please use the Piazza forum (see e-mail policy below). Otherwise, please do email us with any questions or concerns that you have about class. We’re very happy to set up one-on-one meetings. You can also reach all of the course staff at engr101staff (at) umich.edu.
## Instructors
• Laura Alford: laura.alford (at) umich.edu
• Lesa Begley: labegley (at) umich.edu

General Admin Email: engin-fyp (at) umich.edu

## Graduate Student Instructors (GSIs)

• Krista Quinn: kristaqu (at) umich.edu
• Isha Bhatt: ibhatt (at) umich.edu
• Brendan Jackson: brendjac (at) umich.edu
• Donovan Jewell: jeweldon (at) umich.edu
• Rory Meyer: admeyer (at) umich.edu
• Joey Shoyer: jshoyer (at) umich.edu
• Mary Silvio: msilvio (at) umich.edu
## Instructional Aides (IAs)
• Seta Hagopian: shagop (at) umich.edu
• Jake Hume: jakehume (at) umich.edu
• Meghan Mojica: mojicam (at) umich.edu
• Alyssa O’Brien: alyssaob (at) umich.edu
• Laurel Saxe: lsaxe (at) umich.edu
• Luke Van Namen: lvanname (at) umich.edu
• Thomas Westman: thwestma (at) umich.edu
## Email Policy
We do not answer technical questions via email. In order to save everyone time, we want all students to have the benefit of seeing each question and its answer, so please use Piazza instead (more information on Piazza is in the Online Tools section).
For anything other than technical questions, including advice on majors, student project teams, that you need to miss lab, etc., please do email us! We are always happy to talk. Your lab instructor (the GSI or IA who leads your lab) is your best first contact about anything at all. Each GSI and IA has time set aside in their job description for answering emails from students, so send them your questions, concerns, and ideas!
Do not send us emails asking “Can I get partial credit on this assignment even though the due date has passed?” or “I’m only X% away from an A-, is there anything I can do?”. We do not adjust grades based on requests from individual students, and it is inappropriate to ask.
# Course Meeting Times (Lecture & Lab)
Lectures and labs will meet in-person. There will be no remote attendance option; however, lectures are recorded for later viewing. If you find yourself unable to attend in-person class for an extended period of time (e.g. several weeks or more), please email your lab instructor (GSI/IA) to let them know what is going on and we will work with you to come up with a plan that will allow you to complete work on time.
The general course “cadence” looks like this for each week:
| Day of the Week | Meeting Type | Description | Completion Required? | Recorded? |
|---|---|---|---|---|
| Monday | Asynchronous Homework | learn new concepts/skills using an interactive, online platform | Yes | N/A |
| Tuesday | Lecture | work through a programming application using the concepts and skills you learned in the homework; project overviews and project planning | Yes | Yes |
| Wed/Thurs/Fri | Lab | hands-on practice with new concepts/skills | Yes | No |
Read on for more detailed descriptions of lectures, homework, and labs.
## Weekly Homework & Lectures
We want to make the best use of your time spent in class. We have found that learning the core concepts of programming works best when you use an interactive, online platform. Therefore, we are going to use our lecture time a little differently than other large courses. Instead of having three hours of lecture each week to learn and practice course content, we have split that time into asynchronous homework done on your own and one in-person lecture per week.
Weekly homework consists of short videos and interactive work hosted on PrairieLearn, an online learning platform. This weekly homework will be done individually and asynchronously, though we encourage you to form study groups and work on the homework together with other students. Completion of the weekly homework is required for a grade. More information on logging into PrairieLearn will be provided separately from this syllabus. PrairieLearn homework assignments are due on Mondays so that everyone is ready for that week’s lecture and lab (labs run Wednesdays-Fridays). You can work ahead on the homeworks if you want to or if you know that you have a busy week coming up in your other classes.
Lectures are on Tuesdays and will give you additional exposure to, and practice with, the new concepts you just learned in PrairieLearn. ENGR 101 is about “using computing to solve engineering problems”, and our lectures will give us the opportunity to look at an engineering problem and then work through how to use computing to solve that problem. This includes overviews of the projects and some project planning, so be extra sure to attend those lectures! There will be live coding, so bring your computer if you have one so you can follow along with your instructor. The lectures are recorded, and you will need to complete a reflection form by the due date to receive credit for “attending” the lecture, whether you attended in-person or watched the recording. You may attend whichever lecture section works best with your schedule.
## Labs
There will be a two-hour lab that will meet in-person. Your lab will meet at the posted time and location on Wolverine Access. Labs are your time to practice the concepts you learned in PrairieLearn through hands-on programming. Labs are also a time when you can get to know other students in your class!
### Lab Attendance Policy
You must arrive within 30 minutes of the start of your lab or you will be marked “late”. The first time you are marked late, you will still be given credit for completing the lab. If you are late to any subsequent labs, we will have to give you a zero for that lab’s attendance score.
If you know that you will need to miss part or all of lab for a legitimate reason (e.g. an exam for another class, an advising appointment, an interview for an internship, you are competing in a U-M sports competition/meet, etc.), then please email your lab GSI/IA at least 24 hours prior to the start of your lab. Your lab GSI/IA will help you find a different lab section to attend that week.
If you are sick or suspect you may have been recently exposed to someone who is contagious (with COVID, influenza, whatever), please email your lab GSI/IA to let them know you won’t be there. Then, fill out the “Excused Absence Request” form on the course website. We will excuse you from lab attendance for that week. This does not count as your lab drop. However, you are still responsible for completing the lab and turning it in by the due date.
# Online Tools
We will be using a variety of online tools and platforms in ENGR 101. One of the most important skills good engineers have is the ability to find quality things (skills, items, tools, documentation, whatever is needed) and assemble them together to solve an engineering problem. We will model this skill by pulling together quality resources for you to use in ENGR 101. The course website will be your starting point for accessing these course resources.
We acknowledge that it is more than a bit absurd that we are using so many different online platforms for one course; we’re sorry about that. Each of these platforms does something specific that we need that the other platforms can’t do right now. We’re in the process of trying to re-envision different aspects of the course so that we can use fewer online platforms, but for now, this is what we’ve got. Don’t forget – you can always just go to the course website and click on links!
Important Tip!
Using these online tools and resources almost always requires you to be signed in using your U-M Google Account. Our best piece of advice for managing your personal accounts and your U-M account is to use one browser for U-M things (e.g. Chrome, Firefox) and a different browser (e.g. Safari, Edge) for your personal things. Doing this will save you much frustration!
## Course Website
The course website is at engr101.org. This is your starting point. The website shows everything going on in the course for the current week, contains a detailed schedule of topics, and has links to all other course resources.
A detailed schedule of topics – including assignment due dates and assessment dates – is available on the course website. Please check this before asking questions.
As you start to become more familiar with the resources, you might find that you want to organize them in different ways. Maybe you end up bookmarking the course google drive, or you leave a tab open for Piazza, or whatever you find works well for you.
## PrairieLearn
PrairieLearn is the name of the platform that hosts our asynchronous weekly homework, the lecture reflections, and the assessments. Most weeks will have around three homework assignments due (remember that this course has replaced some of its lecture time with homework time, so that’s why it seems like there are “so many” homeworks due each week).
## Canvas and Announcements
You can see a listing of course assignments and your grades on Canvas.
Make sure you are set up to receive course announcements on Canvas, since we will post critical information there.
We will place course material, such as lecture slides and lab worksheets, in our course google drive. There will be direct links to these materials on the course website, so you can just use those if you want to.
## Piazza
We will be using Piazza to host a course forum and asynchronous Q&A. You are encouraged to read this regularly and post technical questions as it will be a significant source of help on the projects.
It is important that you do NOT post your own solutions, project code, test cases, or output to the forum. If you have a question about any of these things, use a private post (visible only to instructors).
Course staff will answer questions on Piazza throughout the day from roughly 10am-10pm. You may expect relatively prompt answers to your questions, but not immediate responses from the course staff, so please plan accordingly. At 10pm, the course staff stop answering questions on Piazza for the day; any questions that come in overnight will be answered the following day. Of course, students are encouraged to answer each others’ questions! This is not a curved class, so help each other out!
## Autograder

The Autograder is a web-based tool that tests your program against a set of test cases. These test cases check whether your program does what it is supposed to do and give you feedback if it doesn’t. All project code will be submitted to the Autograder, and we may use the Autograder for other exercises in lecture or lab.
## Gradescope

Gradescope is an online platform that we will use for submitting lab worksheets. The lab worksheets are saved as .pdf files and submitted to Gradescope, where a grader will check to see if you have completed the worksheet.
## ECoach
ECoach is a personal coaching system that allows you to receive personalized assistance in this large class, learn best practices, discover opportunities in areas of interest, and avoid common pitfalls. Here is a video that describes ECoach and shows what it looks like. In particular, we use ECoach to host the “corporate memory” of this course. In other words, this is where we store all of the tips and tricks that we’ve gathered from students over the years. It is also where the beginning/end of term surveys (which are worth extra credit) are located, extra programming resources, reach goals for projects, and assessment-taking tips. You can access ECoach from the course website; make sure you register for ECoach!
We often receive feedback from former students along the lines of “tell people to use ECoach sooner!”
IMPORTANT: ECoach is a service operated by the University. We coordinate with them on content, but we don’t handle anything about the operation of ECoach. If you have any questions at all about ECoach, please email ecoach-support (at) umich.edu.
## Lobster and MatCrab
Lobster and MatCrab are interactive visualization tools for C++ and MATLAB developed by Dr. James Juett in the EECS - Computer Science and Engineering Department here at U-M. Lobster and MatCrab are embedded in some of our PrairieLearn exercises, but you can also use them on your own!
## Updating Your Name and Pronouns in U-M Online Tools
If you did not update your preferred name and pronouns during orientation, or if those descriptions of you have changed since then, we would greatly appreciate it if you take a minute to make sure this information is up to date for us.
### Updating/setting preferred name and pronoun in Wolverine Access
Go to Student Business. Then select Campus Personal Information. The option to change your preferred name will be under Names. Then go to the tab Gender Identity to set your pronoun. Your preferred name and pronoun will now show up on our course roster and in Canvas, helping us to learn who you are faster!
### Recording your name in Canvas
You can record yourself saying your name in Canvas using NameCoach so that we know how to correctly pronounce your name. Here are instructions on how to record your name in NameCoach. Hearing your voices will help us to learn who you are faster, and we appreciate your taking the time to set all this stuff up!
Here is an article that explains how to go about updating/setting your legal name beyond U-M.
# Office Hours
Office hours are a core part of ENGR 101. Office hours are a chance to ask questions, get help with projects, and get to know the course staff. Office hours are listed on the course website. See the “Guide to Office Hours” on the website for more information about how to attend office hours. The best time to come to office hours is right after a project is released!
The professors will also hold office hours. Please see the schedule on the course website.
# Programming Environment
For this course, you may work on your own computer or log in to your UM CAEN computing account (either remotely or in-person in a CAEN computer lab). Everyone who is enrolled in the College of Engineering or registered for ENGR 101 should receive a CAEN account no later than the first day of classes. If you do not have a CAEN account, please see: http://caen.engin.umich.edu/accounts/obtain.
We will provide instructions for installing and configuring MATLAB and VS Code (for C++ programming) separately from this syllabus. Note: we officially support MATLAB and VS Code for code development. For programs written in C++, you are free to develop on any platform you like, but you may use only ANSI/ISO standard C++11, and are responsible for any differences between your preferred platform and the grading platform. We will grade all your C++ programs on an autograder system running in a Linux environment and they must compile (where applicable) and run correctly in this environment.
# Textbooks (Optional, Reference Only)
All technical information that you need for this class will be presented in the weekly homework and in lecture/lab. However, if you’d like to look at a more traditional textbook, here are a couple you can use. Be aware that these are NOT a one-to-one correlation to how we teach MATLAB and C++ in this course. So, use these as resources only.
MATLAB : A Practical Introduction to Programming and Problem Solving, 3rd Edition, Stormy Attaway, 2013 (ISBN: 0124058760). Available (free for UM students) online at: http://www.sciencedirect.com.proxy.lib.umich.edu/science/book/9780124058767
Bielajew C++ Book
# Assignments
All assignments are due at 11:59pm (NOT 11:59:59pm) Eastern Time Zone (Ann Arbor time) on the due date. For late submissions, please see the Deadlines & Late Submission Policy section.
The next sections give a high level overview of the different assignment types.
## Homework
Core course content will be delivered asynchronously as weekly homework on the PrairieLearn online platform. PrairieLearn work is due on Mondays, and there will be 1-4 assignments due each week, depending on where we’re at in the semester (some weeks will not have homework due). Each assignment should take approximately one hour to complete, and this work is balanced with the other work that is going on in the course at the time. The weekly homework will be linked on the course website.
## Lectures
Lectures will have a lecture reflection associated with them. You will need to attend (or watch the recording) of the lectures, and complete the lecture reflection to receive credit. The lectures will be a mixture of things: the first day of class “welcome to ENGR 101” lecture, technical demo sessions led by the instructors, talks by people from industry, and debugging practice. The weekly agendas on the course website will have the information for each lecture.
## Labs
Labs will meet once a week at their times posted on Wolverine Access. Labs meet in a CAEN lab in the basement of Pierpont Commons on North Campus. You will work in a group of four to collaboratively complete the lab exercises. Your lab instructor (one of our excellent GSIs or IAs) will lead the lab and be available throughout the lab period to answer questions, clarify concepts, and periodically check in with your group to make sure you are staying on track.
Each lab assignment will be divided up into points for attendance, completing the lab worksheet and submitting it to Gradescope, and correctly implementing any programs or exercises that are submitted to the Autograder or PrairieLearn.
Each student must submit their own copy of the lab worksheet and code files to receive credit. This way, we make sure that everyone has a copy of the lab worksheet and code, so that you can use that code in future exercises or projects!
## Projects
Projects are a major component of this course. Project documentation (also known as “specifications” or “specs”) will be released as we go through the semester. You are responsible for reading all project documentation.
We have found through many years of teaching experience that the most common reason for poor project performance is not starting early enough. Plan to do some work on the project every day and try to have it finished a few days ahead of the due date, because many unexpected problems arise during testing, and you never know how long it will take to debug them. In addition, office hours can become very crowded right before the deadline, and you may not be able to book a one-on-one appointment to ask debugging questions.
The second most common reason for not doing well on the projects is not asking for help when you need it. We offer help in office hours and on Piazza. When you come to office hours, please be ready to provide access to your code and try to come ready with questions or a sense of where you are getting stuck. Another good way to get help is to post a question to Piazza. Remember, if you find that you are stuck on a piece of your project for an undue amount of time, please see us!
One major goal of this course is for you to learn to test and debug your own programs because this is a critical skill in the real world. Throughout the class, we will practice testing and debugging strategies so you can learn what types of errors to look for and how to correct them. As you gain experience, we will expect you to do more and more testing and debugging on your own before you come to us for help. This way, you can more effectively use office hours (which, remember, get pretty busy) because you can explicitly show us testing and debugging techniques you have already tried – and what the results were – and we can give better advice as to what’s still not correct.
Finally, always make multiple backup copies of your work! If you somehow lose your work, it is your responsibility. We highly suggest using some kind of cloud-based approach (such as Google Drive or Dropbox at U-M) which can automatically sync your local files with those on a server that is automatically backed up itself by the service.
### Project Submission Policy
All project code must be submitted to the ENGR 101 Autograder at autograder.io. Additional project deliverables may be submitted to Gradescope. Each project specification will include detailed instructions as to how the project files should be named and where to submit deliverables to.
In addition to “turning in” your code for grading, submitting to the autograder also allows you to receive some early feedback from the autograder about your score. You are allowed a limited number of submissions per day. Waiting until the last minute to turn in your project is a surefire way to fail a project. Submit early, submit often.
All projects must be submitted by 11:59pm (NOT 11:59:59pm) on the due date for full credit, unless otherwise noted. The autograder will automatically stop accepting submissions at 11:59pm. Any submissions that have been accepted by the autograder and are awaiting grading will still be processed as normal. For late submissions, a separate assignment for the project will be opened up on the Autograder sometime the next day.
To determine your final grade for a project, we select your submission with the highest score when run against the test cases for the project. If several submissions are tied for the same score, we use the most recently submitted. We also use the code from the selected submission for manual style grading, which contributes a portion of the overall project grade.
### Project Partnerships
For all projects, you have the option of working in a partnership with another student in the course.
• You may collaborate with your partner in any way to complete the project.
• Your partner must be another student registered for (any section of) the course this term.
• You are required to register your partnership on the autograder by the date specified.
• You and your partner turn in the same solution (This is enforced by the autograder).
• You may choose different partners for different projects. You may not change partners during a single project. If a conflict arises between you and your partner, you may contact course instructors for guidance.
• If you are aware that your partner has engaged in an honor code violation while contributing to your project (e.g. copying their portion of the code from another source), you must report it. If you suspect this and are concerned, please reach out to course instructors immediately.
Partnerships must be registered with the autograder before you can submit your work. If you choose to work alone, you must also register that choice on the autograder. Once recorded on the autograder, you cannot change your decision for that project.
Students who are repeating ENGR 101 will complete projects without a partner this time through the course. This policy is meant to encourage deeper individual learning leading to a firm foundation of computing skills that will serve you well in your future courses and career.
### Guidelines for Project Partnerships
Plan your strategy for completing the project. Talk about your expected workflow. When will you meet? Do you plan to attend office hours? Do you prefer to work during the day, at night, or on the weekends? When are your internal deadlines for having different parts of the project done?
Work on all parts of the project together, so that each partner gains experience with each of the concepts involved. This will be valuable practice for assessments. It’s also helpful to have someone to bounce ideas off of and two pairs of eyes on the code to avoid bugs.
Do NOT split the work in half and work separately. This may harm your or your partner’s understanding of parts of the project. You also have no control over your partner’s contribution.
## Assessments
There will be four assessments in ENGR 101. Each assessment will cover the topics used so far in the course. If the topics have only been covered by PrairieLearn work (and not projects yet), then the assessment will cover those topics in a little bit lighter way; we can think of this as a “Level 1” understanding of the topics. If the topics have been covered in a project that is due before the assessment, then the assessment will cover those topics in the same detail as the projects; we can think of this as a “Level 2” understanding of the topics. Here’s a table that compares the two levels:
| | Level 1 Questions | Level 2 Questions |
|---|---|---|
| When in the learning cycle? | This type of assessment question enables you to demonstrate that you have learned a particular skill or concept before you are asked to use it in the context of a project. | This type of assessment question enables you to demonstrate how to apply skills and knowledge in more complex ways after you have used them in the context of a project. |
| What are they designed to assess? | These questions will help you gauge how well you can demonstrate individual knowledge and skills you will need in the course, and give you a chance to identify topics that may require additional practice. | These questions let you show deeper levels of understanding and procedural knowledge by strategically combining multiple concepts and/or skills to solve real-world coding problems. |
Assessments will be hosted on the online platform PrairieLearn. More information about assessments and PrairieLearn will be provided separately from this syllabus. Key things to know about the assessments:
• There will be practice assessments that you can take multiple times; the practice assessments will help you see what we mean by “Level 1” and “Level 2” questions (we’re finding that it’s hard to explain what we mean by those levels in words, but easier to show people examples of questions!).
• The assessments themselves will be open note/open computer (you just can’t talk to anyone else or otherwise collaborate with anyone on the assessment).
• You will be able to immediately see your score for the assessment after you finish the assessment, but you won’t be able to see your answers; you can come into office hours, though, and we’re happy to go through your assessment answers with you.
• If you earn < 90% on an assessment, you can come in to office hours and review your answers with a staff member so that we can help straighten you out on whatever concepts you got wrong; after this meeting, you will be able to retake the assessment and earn up to the threshold score of 90%.
## Extra Credit
There are several opportunities to earn up to 1% of your total grade in extra credit. In other words, if you earned an 89% before applying extra credit, and you did all of the extra credit, your final grade would be 89% + 1% = 90%.
There will be several ECoach activities that are “incentivized” throughout the semester. You will receive emails from ECoach when these activities are available for you to do. There will also be activities offered through the Computing CARES program. All of these will be announced on the course website when they become available.
Prof. Begley will be conducting “practice sessions” for students interested in gaining some additional practice with the concepts and skills introduced in the homework assignments. You can earn extra credit by attending these practice sessions.
# Grading

Your grade in this class is determined by your performance on the assignments in the course. We use a “threshold grading” scheme, in which points are assigned based on your understanding of course concepts and your ability to apply those concepts to solve engineering problems.
In threshold grading, grades are not curved: your grade depends solely on your own work, regardless of the performance of your peers. Our goal in teaching this class is to provide each student with all the resources necessary to show competency in the course material and therefore earn an A.
The following tables show how each component is weighted with respect to your final grade, and how numerical grades correspond to letter grades:
| Assignment Group | Points Per Assignment | Number of Assignments | Number Dropped | Percentage of Final Grade |
|---|---|---|---|---|
| Homework | varies, but around 20-30 | 21 | 0 | 10% |
| Lectures | 20 | 12 | 0 | 10% |
| Labs | 40 | 12 | 1* | 20% |
| Projects | see below | 4 | 0 | 30% |
| Assessments | see below | 4 | 0 | 30% |
* Labs 1 and 12 are not eligible to be dropped
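As a rough illustration of how the weights in the table above combine with the 1% extra credit cap, here is a hypothetical sketch (the function and category names are mine, not the official Canvas calculation):

```python
# Hypothetical sketch of the weighted grading scheme; Canvas computes the
# official grade. Weights are taken from the table above.
WEIGHTS = {"Homework": 0.10, "Lectures": 0.10, "Labs": 0.20,
           "Projects": 0.30, "Assessments": 0.30}

def final_percentage(category_pcts, extra_credit=0.0):
    """category_pcts maps each assignment group to the percentage earned
    (0-100) in that group, after any drops; extra credit (capped at 1%)
    is added on top of the weighted average."""
    base = sum(WEIGHTS[group] * pct for group, pct in category_pcts.items())
    return base + min(extra_credit, 1.0)

# A student at 89% in every category who does all the extra credit
# ends up at 90%, matching the example in the Extra Credit section.
print(final_percentage({g: 89.0 for g in WEIGHTS}, extra_credit=1.0))
```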
The projects for the course scale up in effort and complexity and have differing amounts of points:
| Project | Points |
|---|---|
| 1 | 80 |
| 2 | 110 |
| 3 | 110 |
| 4 | 110 |
The assessments for the course are “leveled” and have differing amounts of points:
| Assessment | Points |
|---|---|
| 1 | 80 |
| 2 | 100 |
| 3 | 80 |
| 4 | 100 |
See the Assessments section for more detail.
You can see this grading scheme in Canvas. If you have any questions about how grades will be calculated, please post to Piazza.
Final letter grades are calculated according to the following table (grades are NOT rounded up):
| Numerical Grade (%) | Letter Grade |
|---|---|
| 98 ≤ % ≤ 100 | A+ |
| 93 ≤ % < 98 | A |
| 90 ≤ % < 93 | A- |
| 87 ≤ % < 90 | B+ |
| 83 ≤ % < 87 | B |
| 80 ≤ % < 83 | B- |
| 77 ≤ % < 80 | C+ |
| 73 ≤ % < 77 | C |
| 70 ≤ % < 73 | C- |
| 67 ≤ % < 70 | D+ |
| 63 ≤ % < 67 | D |
| 60 ≤ % < 63 | D- |
| % < 60 | E |
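The cutoffs amount to a simple threshold lookup. A minimal sketch (illustrative only, not official grading code):

```python
def letter_grade(pct):
    """Map a final percentage to a letter grade; grades are not rounded up,
    so 89.99% is a B+, not an A-."""
    cutoffs = [(98, "A+"), (93, "A"), (90, "A-"),
               (87, "B+"), (83, "B"), (80, "B-"),
               (77, "C+"), (73, "C"), (70, "C-"),
               (67, "D+"), (63, "D"), (60, "D-")]
    for cutoff, grade in cutoffs:
        if pct >= cutoff:
            return grade
    return "E"
```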
This course has many interrelated assignments, and the concepts and skills in the course build upon each other. However, we recognize that being able to submit assignments a day or two late can be helpful in supporting your own time management choices. We know that ENGR 101 is not your only time commitment this semester!
You can earn up to full credit on all assignments if you submit the assignment prior to its posted deadline. However, you can still earn the majority of most assignments’ points as long as you submit it before the assignment’s late submission deadline.
### How Late Submissions Work
The majority of assignments are eligible for a 1-2 day grace period for late submissions. Here are some important things to know regarding late submissions:
• Late submissions will be able to earn up to around 90%-95% of the original assignment’s points, depending on the assignment.
• Because late submissions with minimal penalties are allowed for all assignments, and lectures are recorded, no assignments will be dropped from Homework, Lecture Reflections, Projects, and Assessments.
• Labs are the one assignment type that is hard to do fully “in your own time” due to the collaborative nature of the labs. Therefore, one lab assignment will be dropped entirely at the end of the course to account for any week in which a student is sick, chooses to focus on a different course that week, needs to visit home to help a family member or friend, is traveling for a U-M event, whatever.
• For any other labs that you miss, your attendance points for that lab will be zero. You are still responsible for completing and submitting the lab worksheet by the deadline, or you can turn it in by the late submission deadline if you need some extra time.
• Homework and Lecture Reflections that are submitted after the posted deadline will automatically be capped at 95% of the points for the assignment.
• Projects that are submitted after the posted deadline will be submitted to the “Late Submission” version of the project that will be available on the Autograder. The “Late Submission” version of the project will be automatically capped at approximately 93% of the points for the autograder portion of the project. Your final project score will be whichever is the higher score between the original project assignment and the “late submission” assignment.
• Assessments that are submitted after the posted deadline will automatically be capped at 93% of the points for the assessment. Assessment retakes are still available for late submissions. So if you do the assessment after the deadline and get < 90%, you can still retake the assessment (and get up to 90%).
• Extra credit is not eligible for the flexible deadline policy because all of the extra credit opportunities are time-sensitive. For example, completing a beginning of term survey is only useful if you complete the survey at the beginning of the term. Therefore, we are not able to offer extra credit for items completed after the posted deadline.
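For projects specifically, the "whichever is higher" rule for late submissions can be sketched as follows (the function name and exact cap are illustrative; the true cap depends on the autograder's test-case point values):

```python
def project_score(on_time_pct, late_pct, late_cap=93.0):
    """Final project score: the better of the on-time autograder score and
    the late-submission score, with the late score capped at roughly 93%."""
    return max(on_time_pct, min(late_pct, late_cap))

# A perfect late submission is capped near 93%, but it can never pull
# your score below a better on-time submission.
```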
Here is a summary of the grade caps before and after the deadlines for the various assignments:
| Assignment | On-Time Deadline | On-Time Cap | Late Deadline | Late Cap | Notes |
|---|---|---|---|---|---|
| Homework | Monday | 100% | The next day (Tuesday) | 95% | |
| Lecture Reflections | Saturday | 100% | The following Monday | 95% | |
| Labs | Saturday | 100% | The following Tuesday | 90% | No lab attendance points are awarded for missing the deadline; you can still earn up to full credit on the lab worksheet. |
| Projects | Tuesday | 100% | The following Tuesday (one-week grace period) | ~93% | The actual percentage depends on the number of points assigned to the test cases, but it will be around this number. Project 4 is not eligible for late submissions because it is due right at the end of the semester. |
| Assessments | Thursday | 100% for first take | The following Thursday (one-week grace period) | 93% for first take, 90% for retake | You can retake an assessment even if your first take was after the deadline. Assessment 4 is not eligible for late submissions because it is due right at the end of the semester. |
| Extra Credit | Saturday | 100% | N/A | N/A | Extra credit is not eligible for late submissions. |
### Anticipated Conflicts
If you have something that will impact your ability to do ENGR 101 work for more than a couple of days, such as a multi-day religious holiday, planned medical procedure, or University-affiliated athletic event, please let your lab GSI/IA know ahead of time. It is likely that you will be able to manage your work around this commitment just fine, but it’s always helpful for us to know what’s going on!
If your planned conflict impacts your ability to complete assignments by the late submission deadline, then please fill out the Extension Request form on the course website. We will evaluate your situation and see what action is appropriate to take. Please note that most of our assignments are available for several days to multiple weeks prior to their deadline to allow you to manage your own time and complete work by the deadline. Inability to manage your time is not a sufficient reason for an extension.
### Medical/Personal Emergencies
If you experience a medical or personal emergency, please tell your lab GSI/IA! It’s good to know that something major has happened so we don’t wonder when you don’t show up to lab. If your emergency is severe enough that it will impact your ability to complete assignments by the late submission deadline, then please fill out the Extension Request form on the course website. We will evaluate your situation and see what action is appropriate to take.
Our goal is to return all graded assignments to you within one week. However, sometimes things happen and we might get behind a little bit (we are busy, too!). You will be notified by Canvas when grades are posted. Do not post to Piazza asking when grades will be out unless it has been 3 weeks since the assignment was due.
While we work hard to grade accurately, we sometimes make mistakes. If you believe we graded an assignment of yours incorrectly (whether it be a lab, exercise, assessment, project, etc…), you may submit a regrade request no later than one week after the graded work is originally returned to you. We will then regrade your entire assignment, which can cause your grade to go up, but it can also go down.
Regrade requests must be submitted via the form on the course website. Regrade requests will NOT be accepted via email. Depending on the type of the assignment, further action may be required on your part.
# Acceptable Collaboration and the Honor Code
Learning from your peers, and learning as you teach them, is an excellent way to become comfortable with the computing skills we cover in ENGR 101. However, we also want you to be able to accurately self-assess where you are at in your own skill level. In other words, can you actually do this stuff? Here, we describe the collaboration that is allowed and encouraged in ENGR 101, the collaboration that is not allowed, and our process for reporting collaboration that is not allowed.
## Encouraged Collaboration
We want students to learn from and with each other, and we encourage you to collaborate. We also want to encourage you to reach out and get help when you need it. You are encouraged to:
• Give or receive help in understanding course concepts covered in lecture or lab.
• Work together on homework in study groups or with a friend.
• Practice and study with other students to prepare for assessments.
• Consult with other students to better understand project specifications.
• Discuss general design principles or ideas as they relate to projects.
• Help others understand compiler errors or how to debug parts of their code.
To clarify the last item, we are giving you permission to look at another student’s code to help them understand what is going on with their code. You are not allowed to tell them what to write for their code, and you are not allowed to copy their work to use in your own solution.
If you are at all unsure whether your collaboration is allowed, please contact the course staff via Piazza, Office Hours, or email before you do anything. We will help you determine if what you’re thinking of doing is in the spirit of collaboration for ENGR 101.
## Honor Code Violations
The following are considered honor code violations and are primarily applicable to projects and assessments:
• Submitting others’ work as your own.
• Copying or deriving portions of your code from others’ solutions.
• Collaborating with others to write your code, such that your solutions are identifiably similar.
• Sharing your code with others to use as a resource when writing their code.
• Receiving help from others to write your code.
• Sharing test cases with others if they are turned in as part of your solution.
• Sharing your code in any way, including making it publicly available in any form (e.g. a public GitHub repository or personal website).
You are still responsible for following these rules even after finishing the course. Students may be nervous about being reported for coincidental similarities between their code and others, but we only report clear cases of academic misconduct (e.g. when there is overwhelming evidence code was copied from another student or online source). You will not be reported for:
• Using starter code provided by course instructors.
• Having the same idea as someone else.
• Receiving similar help/guidance from the same course staff member in office hours.
• Helping another student understand compiler errors or debug part of their code.
(You may NOT give/receive help with the process of writing the code originally.)
If you are retaking the course, you may reuse your own code, presuming it was wholly written by you and/or your partner and not derived from another source, following all the rules outlined here. It is possible for instructors to miss an honor code violation in a previous term, but catch and report it when the code is reused on a course retake.
If you have any questions as to what is allowed, please talk to an instructor right away.
## The Honor Council Process
We report suspected violations to the Engineering Honor Council. To identify violations, we use both manual inspection and automated software to compare submissions. The Honor Council determines whether a violation of academic standards has occurred, as well as any sanctions.
Here’s what you can expect if you are reported for an honor council violation:
• The instructors submit an official report to the honor council.
• The honor council notifies you of the report, and explains the next steps of the process. You receive a copy of the report, including the evidence of the suspected violation.
• The course instructors play no role in adjudicating reported cases.
• The honor council notifies course instructors when your case is resolved. Any penalties they prescribe are applied to your grade. If you are found not responsible, your grade is unaffected.
• If you are found violating the honor code, a typical penalty is a 0 on the assignment and a -1/3 overall letter grade. However, the honor council may prescribe other penalties.
• If you have a pending honor council case at the end of the term:
• You will receive an “I” (incomplete) grade until the case is resolved.
• Your grade will be updated once the case is resolved. The “I” will not remain on your transcript if you are a student in the College of Engineering. Students in other U-M colleges and schools may see the “I” alongside the final grade (e.g. if your final resolved grade is a B+, then your transcript may show the grade IB+).
## Advice for Avoiding the Temptation to Cheat
We understand that honor code violations usually occur when a student is struggling with the course or dealing with external challenges that prevent them from finishing work on their own. This is why we have decided to allow late submissions for all assignments with just a small penalty (you can still get an A on the assignment!). If you find yourself tempted to cheat on an assignment, especially a project, remind yourself that you have plenty of time to actually complete that assignment.
We have purposefully structured this course to provide a week of “buffer time” at the end of the semester so that students can get caught up on any outstanding assignments they still need to submit. Remember: our goal is that you learn this course material and these computing skills. The posted deadlines are a guide to keep you on pace so that you don’t have the entire course to do in a couple of weeks at the end of the semester. But if you need to submit some things a little late because you are adjusting your schedule to accommodate other classes and commitments? Awesome, go for it. We’d much rather you take the time to do the work correctly, and come in to talk with us if you need help, than copy someone’s code to try to meet a deadline.
# Accommodations for Students with Disabilities
If you need accommodations for a disability, we are happy to work with you. Some aspects of this course may be modified to facilitate your participation and progress. As soon as you make us aware of your needs, we can work with the Services for Students with Disabilities (SSD) office to help us determine appropriate academic accommodations. SSD (734-763-3000; http://ssd.umich.edu) typically recommends accommodations through a Verified Individualized Services and Accommodations (VISA) form that SSD will upload to the Accommodate online platform which then notifies us of the needed accommodations. Any information you provide is private and confidential and will be treated as such.
# Diversity, Equity, and Inclusion
The University of Michigan is committed to student learning and the development of the whole student in a diverse and multicultural campus community. We seek to engender a diverse community that is accessible, safe, and inclusive. We value a community that appreciates and learns from our similarities and differences. We pledge our commitment to support the success of all community members. If you experience anything, directly or indirectly, that goes against this commitment, please talk to your instructor, GSI, an IA… anyone that you feel comfortable talking to. We want to know! We try hard not to knowingly do or say something that will cause harm or stress to you. Many of us are constantly going to workshops and reading papers about how to have the most inclusive classroom that we can have. But we are human and sometimes we mess up! If we do, we sincerely hope you will come talk to one of us so that we can see things from your point of view, and we can learn how to improve our class for the next semester.
# COVID-19 Pandemic Statement
As they have throughout the past three years, policies around academic and public health are subject to change as this pandemic evolves. This course will follow all policies issued by the University, which are documented on the Campus Blueprint’s FAQ. These policies may change over the course of the term, so please review the Campus Blueprint’s FAQ for the most up to date information.
Individuals seeking to request an accommodation related to the face covering requirement under the Americans with Disabilities Act should contact the Office for Institutional Equity. If you are unable or unwilling to adhere to these safety measures while in a face-to-face class setting, you will be required to participate on a remote basis or to disenroll from the class. We also encourage you to review the Statement of Students Rights and Responsibilities, which includes a COVID-related addendum.
Finally, we want to emphasize that YOU as students are part of the solution, not part of the problem. We understand irresponsible actions of a few do not represent the character of students in general, and we reject rhetoric that places blame on students.
# Resources for Student Support and Physical/Mental Health
U-M is an enormous place. This sometimes means that it’s tough to figure out where you can go to get help. This is a list of some places that make a good starting point if you’re needing help for physical or mental health. Please reach out to your lab GSI or IA as well! We are always happy to talk and get you started with one of these places if that’s what you decide you’d like to do.
When you get sick, don’t come to class! Email your GSI/IA that you’ll be out, then go to UHS and see a doctor. When you’re ready to return to class, tell your GSI/IA and we’ll get you caught back up.
MESA mesa.umich.edu
Supports matters concerning race and ethnicity; MESA engages the campus community and transforms the student experience to build inclusive spaces and equitable opportunities for all.
SPECTRUM CENTER spectrumcenter.umich.edu
Supports matters concerning sex and gender identity.
DISABILITY SERVICES (SSD) ssd.umich.edu
Supports matters concerning access and support.
COUNSELING SERVICES (CAPS) caps.umich.edu
Supports matters concerning the need for counseling/psychological services.
STUDENT LEGAL SERVICE studentlegalservices.umich.edu
Supports matters concerning the need for legal services/advice.
DEPARTMENT OF PUBLIC SAFETY AND SECURITY (DPSS) dpss.umich.edu
Supports matters concerning a crime, or civil rights complaints.
CENTER FOR CAMPUS INVOLVEMENT campusinvolvement.umich.edu
Supports matters concerning engagement with the Ann Arbor and university communities.
GINSBERG CENTER ginsberg.umich.edu
Supports matters concerning community service learning and civic engagement.
SEXUAL ASSAULT PREVENTION AND AWARENESS CENTER sapac.umich.edu
Supports matters concerning assault and survivor support services.
The following is a list of resources put together by the office of the Associate Dean for Undergraduate Education (ADUE). There is some duplication with the above listed resources, but we didn’t want to lose anything by trying to combine it with the list above. We will update this list of resources throughout the semester if we get an updated list sent to us.
Funding:
Mental Health Support:
Technical Support:
• There is a university wide laptop loaner program: Sites (at) Home program
• Students who have other technology needs should contact the Office of Student Affairs.
International Students:
General COVID-Related Information:
# Math Help - pressure....
1. ## pressure....
I have one more question I desperately need help with out of the 20 we had assigned.
A mature gorilla weighs 400lb and stands 5ft tall; its 2 feet combined have an area of about 1 ft squared.
give an estimate of the gorilla weight when it was half as tall.
what assumptions are involved in your estimate?
when the gorilla is standing, what is the pressure on its feet in pounds per sq in.?
in regards to the pressure on its feet..... i have to use pressure = weight / area
Dina
2. Originally Posted by crazykitty
I have one more question I desperately need help with out of the 20 we had assigned.
A mature gorilla weighs 400lb and stands 5ft tall; its 2 feet combined have an area of about 1 ft squared.
give an estimate of the gorilla weight when it was half as tall.
what assumptions are involved in your estimate?
when the gorilla is standing, what is the pressure on its feet in pounds per sq in.?
in regards to the pressure on its feet..... i have to use pressure = weight / area
Dina
Before we start, don't address questions to me (otherwise all the other members will feel left out And what if I got hit by a bus before seeing your question .......? It'd never get answered! "Nah, that one's for Fantastic, leave it for him.")
I assume by half as tall you mean its size is scaled down by a factor of 2 ...... (otherwise its weight would just be half of 400 lb).
Size is scaled down by a factor of 2, therefore volume is scaled down by a factor of 2^3 = 8, therefore weight is 400/8 = 50 lb.
The weight is 50 lb.
Aside to physics trained folk like topsquark: yeah yeah I know that's the mass not the weight, but when you look at the unit given for the answer to be in, hmpppph! ........ Another bugbear of mine is given at the end of this post.
If size is scaled down by a factor of 2, then the area of the feet is scaled down by a factor of 2^2 = 4. So the area of the feet is 1/4 ft squared. 1 ft squared = 144 inches squared, therefore 1/4 ft squared = 144/4 = 36 inches squared.
So weight is 50 lb and area of feet is 36 sq in.
Now sub into your formula for pressure .....
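The scaling argument above, worked through numerically (a sketch assuming geometric similarity; the variable names are mine):

```python
# Geometric-similarity sketch: halve every linear dimension of the gorilla.
weight_adult = 400.0       # lb
foot_area_adult = 1.0      # ft^2, both feet combined
scale = 0.5                # half as tall

weight_young = weight_adult * scale**3        # volume (hence weight) ~ L^3
foot_area_young = foot_area_adult * scale**2  # area ~ L^2

area_in2 = foot_area_young * 144              # 1 ft^2 = 144 in^2
pressure_psi = weight_young / area_in2        # pounds per square inch
```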
Oh yeah, my other bugbear ...... maths books that give temperature in units of "degrees Kelvin" or $^\circ K$. It's just plain Kelvin, fer cryin' out loud! All comments welcome (to get things totally off topic )
# AutoCAD Crack Full Version [32|64bit]
Equipped with the right applications, a computer can be of great help in virtually any domain of activity. When it comes to designing and precision, no other tool is as accurate as a computer. Moreover, specialized applications such as AutoCAD give you the possibility to design nearly anything ranging from art, to complex mechanical parts or even buildings.
Suitable for business environments and experienced users
After a decent amount of time spent installing the application on your system, you are ready to fire it up. Thanks to the office suite like interface, all of its features are cleverly organized in categories. At a first look, it looks easy enough to use, but the abundance of features it comes equipped with leaves room for second thoughts.
Create 2D and 3D objects
You can make use of basic geometrical shapes to define your objects, as well as draw custom ones. Needless to say that you can take advantage of a multitude of tools that aim to enhance precision. A grid can be enabled so that you can easily snap elements, as well as adding anchor points to fully customize shapes.
With a little imagination and patience on your behalf, nearly anything can be achieved. Available tools allow you to create 3D objects from scratch and have them fully enhanced with high-quality textures. A powerful navigation pane is put at your disposal so that you can carefully position the camera to get a clearer view of the area of interest.
Various export possibilities
Similar to a modern web browser, each project is displayed in its own tab. This comes in handy, especially for comparison views. Moreover, layouts and layers also play important roles, as it makes objects handling a little easier.
Since the application is not the easiest to carry around, requiring a fairly sophisticated machine to run properly, there are several export options put at your disposal so that the projects themselves can be moved around.
Aside from the application specific format, you can save as an image file of multiple types, PDF, FBX and a few more. Additionally, it can be sent via email, directly printed out on a sheet of paper, or even sent to a 3D printing service, if available.
To end with
All in all, AutoCAD remains one of the top applications used by professionals to achieve great precision with projects of nearly any type. It encourages usage with incredible offers for student licenses so you get acquainted with its abundance of features early on. A lot can be said about what it can and can't do, but the true surprise lies in discovering it step-by-step.
5 Best AutoCAD Serial Key Apps [2020] | Top 5 Best Apps for AutoCAD
While AutoCAD was initially released in 1982, it wasn’t until 1989 that the application was more fully developed.
It’s important to note that, until recently, AutoCAD was only available for the PC platform.
Today, there are numerous options for users to access and use AutoCAD. The apps listed below are some of the most popular options for the new and old users alike.
Autodesk AutoCAD 2019.2 2018.2 2017.2 2016.2 2015.2 2014.2 2013.2
AutoCAD is a powerful software package that offers users more than just a drafting tool.
Along with creating 2D and 3D drawings, AutoCAD allows users to create models, views, and technical drawings. As an example, AutoCAD can help users create 3D models, which can then be used in various presentations or on the web.
From the start, the software was designed to be used for desktop drafting. AutoCAD has kept its desktop roots, but over the years has made a number of modifications, including adding the ability to load standard (CAD) files and also web export services.
Today, AutoCAD is still a leading product in the AutoCAD suite. It is used by a wide range of organizations, such as architects and engineers. The software is also popular for 3D modelers, industrial designers, mechanical designers, and even model builders.
AutoCAD LT was created for drafting and layout of simple drawings. With AutoCAD LT, you’re able to create simple line drawings, circles, freeform polylines, rectangles, polygons, and arcs.
With AutoCAD LT, you’re also able to create tables and edit text. You’re also able to export to numerous file types, such as .DWG, .DWF, .PDF, .JPEG, .TIFF, and .MOV.
The software has been redesigned to be simpler to use and more intuitive.
AutoCAD Mobile and Web is a mobile app that allows users to open, view, and edit files in AutoCAD. Additionally, users
Software for Autodesk Inventor
List of CAD editors and CAE software
References
Category:Autodesk
Category:Computer-aided design software
Category:Computer-aided design software for Windows
Category:Computer-aided design software for Linux
Category:Computer-aided design software for MacOS
Category:Windows graphics-related software
Category:Windows-only software

Q:
The Lie algebra of a compact subgroup
Let $H$ be a compact subgroup of $G$, where $G$ is a Lie group. Now, consider $\frak{h} = T_eH$ and let $\frak{g} = T_eG$. I wonder if the Lie algebra $\frak{h}$ has a closed ideal in $\frak{g}$ or not? Is there some positive answer?
A:
Yes, it does.
Namely, $V:=\frak{h}$ is a subspace of $\frak{g}$, it is compact by definition, and $\frak{h}$ is closed in $\frak{g}$ because $G$ is a Lie group. Now, $\frak{h}$ is an ideal in $\frak{g}$: if $x\in\frak{h}$ and $y\in\frak{g}$, then $[x,y]\in\frak{h}$, because $[x,y]\in T_e(G)=[T_eH,T_eH]\subseteq T_eH=\frak{h}$.
Conversely, if $\frak{h}$ is an ideal of $\frak{g}$, then $G$ is a Lie group.
Well I am a bboy, a dancer and a father. Welcome to my blog. I blog here about my lifestyle of everything from what I think is cool and different from every day life, to what my kid thinks is cool. I am not good at sitting still and will need to keep moving around to keep me going.
Wednesday, November 1, 2008
The US presidential election has been the talk of the town for a while, and it still is. I am sure
## AutoCAD 23.1 Crack + For PC
Open the file mzintro.exe and run it.
Use the options, and select the program you want to launch.
In the documents folder there is a file called mz-key-gen.txt. In there you can find a description of all the settings.
Autocad 2012 can be used without the need of an Autocad licence, for a limited number of users. This new version of Autocad allows all registered users to connect online without the need of the licences, however it’s currently not available on all platforms.
In Autocad 2012 there are several important changes. The interface is completely different, both when opening and closing the program, and it also contains a new "Autocad Experience" mode.
References
Category:Autodesk

Q:
How do I make the path to the file constant?
I have a path that I need to print on the console.
I use an external assembly that I can’t change, that prints the following path:
"//./MyAssemblyPath/MyType/MyMethod/MyProperty/MyMethod2"
And I have no way to change this path. So I would like to use this assembly as a constant in my code.
The following isn’t working, I know it has to do with the path and not the assembly itself:
var assemblyPath = new AssemblyName("MyAssembly");
var path = assemblyPath.Name + "/";
Console.WriteLine("{0}", path);
How can I print the assembly’s path constant?
A:
Use a Path class.
var path = Path.Combine(@".\MyAssemblyPath", "MyType",
    "MyMethod", "MyProperty",
    "MyMethod2");
You might want to check if the file exists first, and skip it if it does.
var path = Path.Combine(@".\MyAssemblyPath", "MyType",
## What’s New In?
Extend the active drawing window by several panels for improved workflow. The Side Panels tool extends the active drawing window with four additional panels. Use the new Side Panels tool to temporarily place drawing panels within the active drawing window, then exit the panels as needed. (video: 2:15 min.)
Create and import your own custom drawing panels, then easily share your panels for viewing and annotating using new Panels Palettes. New panels can be added to the Panels Palettes, to create a series of predefined panels, which you can use throughout your drawings. Panels created in this way can be published or shared for use in new drawings. The Panels Palettes feature the same customizable options as the Customize dialog box. (video: 6:30 min.)
Ease drawing and editing by adding animation to your drawings. Easily add inanimate objects to a drawing and easily place them in the active drawing window. They automatically follow your cursor and can be easily repositioned. (video: 1:00 min.)
Insert marked up drawings into the active drawing window. Quickly mark up a PDF or printed document in AutoCAD and add the marks directly into the active drawing window. AutoCAD will incorporate feedback from marked up documents in your drawings, allowing you to quickly review, update, and incorporate changes. (video: 2:40 min.)
Import and track your drawings on the web. Share your work on-line and import your drawings into the web version of AutoCAD. Access your drawings from anywhere in the world, even if you have no access to a local AutoCAD installation. Automatically track changes to your drawings on-line for easy collaboration. (video: 2:30 min.)
Use existing CAD drawings to help your design workflow. Include pre-existing 2D drawings in your new design project with a few clicks. Use these on-line or offline to incorporate directly into your project. (video: 2:20 min.)
Capture your environment for work on-the-go. Use native recognition to instantly capture your environment for editing. These captured objects will be automatically loaded into your project. (video: 1:30 min.)
Hover over your computer to instantly search for files and objects. Don’t waste time digging through piles of paper to find the perfect project. In moments, you’ll find everything you need on your computer, while your screen is automatically updated with your most recent searches
Get Full Access to Organic Chemistry - 7 Edition - Chapter 10 - Problem 10.44
Get Full Access to Organic Chemistry - 7 Edition - Chapter 10 - Problem 10.44
×
ISBN: 9781133952848 483
Solution for problem 10.44 Chapter 10
Organic Chemistry | 7th Edition
Problem 10.44
Chrysanthemic acid occurs as a mixture of esters in flowers of the chrysanthemum (pyrethrum) family. Reduction of chrysanthemic acid to its alcohol (Section 17.6A) followed by conversion of the alcohol to its tosylate gives chrysanthemyl tosylate. Solvolysis (Section 9.2) of the tosylate gives a mixture of artemesia and yomogi alcohols.
Step-by-Step Solution:
Step 1 of 3
Chapter 1. Matter and Energy

Matter: anything that occupies space and has weight; therefore it has volume and mass.

States of matter:
- Solid: fixed shape and volume; may be soft or hard, rigid or flexible; particles are close together and organized.
- Liquid: varies in shape, taking the shape of the container; particles close together but disorganized.
- Gas: no fixed shape or volume; particles farther apart and disorganized.

Properties: the characteristics that give each substance a unique identity.
- Intensive properties: do not depend on the amount of material present; they never change (constant). They include boiling and melting points, density, viscosity, etc.
- Extensive properties: depend on the amount of material present; they change. Examples: mass, volume, radius, height, etc.

Changes:
- Physical changes: particles before and after remain the same; no change in composition. For example, liquid to solid or to gas.
- Chemical changes: particles before and after are different; change in composition. For example, if an electric current is applied to water, it breaks down into oxygen and hydrogen gas.

Energy: the ability to do work. Energy is divided into two types:
- Potential energy: energy due to the position of an object (how high or low the object is).
- Kinetic energy: energy due to motion.
|
|
# User:Litzy84/review
## My Second Article Review
Lizeth Enriqueta Palafox Limones Aguascalientes, Ags. Saturday October 7th 2011
Review of Mohd Hilmi Hamzah & Lu Yee Ting, "Teaching speaking skills through group work activities"
How can teachers make ESL students (Universiti Teknologi Malaysia) speak in English by using oral group work activities? The problem of communication in English class is common among ESL students, whether because of the students' level or because of the fear of being the butt of everyone's jokes. One of the purposes is to examine and apply the effectiveness of different teaching techniques for the speaking skill; for example, by using group work in teaching speaking, teachers would notice the problems that students face while speaking a second language (English). The students have different problems while speaking in English. One of them is the lack of vocabulary: the students do not know much vocabulary, so they do not try to speak in English during the classes. Another is the students' insecurity while using a second language (English).
To keep their fears away the researchers explained three different objectives; Hamzah, M. and Yee, L. (2005): “(a) To examine the issues of students’ speaking in an ESL classroom. (b) To identify the students’ perspectives with regards to their involvement in oral group activities. (c) To determine the potential implication of group work activities on the students’ individual performance in speaking assessment.” When the objectives are achieved they will help other researchers to recognize the students’ strengths. The researchers show that students work better and learn more from their mistakes than from their correct answers. Hamzah, M. and Yee, L. (2005): “To distinguish their strengths will increase learning, planning and discussion skills and eventually improve their speaking capabilities”. This refers to how the students learn better, and how we can apply their knowledge by involving them as participants and decision-makers in speaking activities.
The researchers developed natural ways of running speaking activities by reading some studies and investigating. Using these natural ways with different activities made students speak in English, and the result was as the researchers expected: they realized that learners speak better in small groups, by expressing their personal experience. Hamzah, M. and Yee, L. (2005): “Using oral group work was more student-centred and effective in getting every student to be involved in the tasks” (p.5). The students spoke fluently and clearly, and the researchers gave different ways of teaching speaking. The explanations of the study and the results were very clear, and the research was complete. That will help other researchers form clear ideas about what researchers have done, and can do, to encourage students to speak in English.
The methodology of the article is built on several case studies, in order to get better results and perhaps to give more complete conclusions. Speaking is a very hard skill to teach, but if teachers or researchers look for different strategies, they can find ones that will help them in their teaching or research. Researchers can take many advantages from this work if they are looking for speaking activities to get better results with their students. It is very useful if researchers and teachers want to improve their strategies or techniques for their students.
References
Hilmi, M. and Yee, L. (2009) “Teaching speaking skills through group work activities.” Retrieved from http://eprints.utm.my/10255/2/Lu_Yee_Ting.pdf
## My First Article Review
Lizeth Enriqueta Palafox Limones Aguascalientes, Ags, Tuesday 13th September, 2011 Academic Writing
Review of Nur Dwi “Improving the Students’ speaking through story Joke Technique” Dwi, N. (2010) “Improving the Students’ speaking through story Joke Technique” (A classroom action research at eighth grade of Junior High School)
This thesis (action research) refers to the English language, as the most important language worldwide. It says that people have to know why English language is necessary. For instance, it refers to four main aspects; communication, jobs, field in the profession, science and technology. English language has become the most important language around the world, people from different countries have to learn it as a second language, and most of the Universities in Mexico have asked students to learn English as a second language. When it is implied the necessity of the language, the thesis refers to a certain problem while learning English. Speaking is one of the weakest skills for learners in 8th grade of junior high School.
How does the researcher improve the students’ speaking skill through the story joke at junior high school? When learners have some problems in speaking, teachers try to do something meaningful in their teaching, or something different and innovative, to encourage their students to speak in English. Most of the time when students are in junior high school it is very difficult to encourage them to speak in English. That’s why this thesis reflects how a story joke can reduce that fear of speaking.
Students enjoy it when someone is telling a joke: they speak, they laugh at each other when someone makes a mistake while telling a joke, and they feel more comfortable and confident with their classmates. Moreover, this strengthens the relationship among students and teachers. A story joke is, in Nur, D. (2010), "a short story or ironic depiction of a situation communicated with the intent of being humorous". This technique could help the students at eighth grade of junior high school to speak in English during the classes. The article has information that could help researchers revise their action research, in order to improve it or just to use it as a guide during the activity.
While applying the research, finding and analyzing, the observation was that the students have a lot of problems with intonation and pronunciation. So the researcher applied another strategy and noticed other factors that occurred and limited the students’ learning. For instance, in cycle number 1, the students had to talk about the topic "Do you know?" by working in pairs; the researcher was observing the class and realized that students had many problems with pronunciation. In cycle number 2 the students worked in pairs: they read a dialog together and then translated it, with the researcher as their guide during the activity; the researcher noticed that the students were more interested in the activity and they improved their pronunciation. And in the last cycle, cycle 3, students were more motivated to speak by reading some short story jokes.
To conclude, this thesis helps other researchers to improve their objectives through activities similar to this action research, or to try applying different techniques in order to reach the same or different objectives. With each cycle that was applied the researcher was improving his objectives, but some of them were not achieved as he had expected. One of them is that students still have a lack of confidence among themselves and with their teacher. By reading this conclusion, teachers can figure out how the conclusions of their own action research will look, whether they were improved or were just appropriate for that moment. This article gives a clear idea of how some researchers' studies can work or not. The article is a very clear example of how an action research could be made better.
Check the criteria for this assignment; that is, you were to write a four paragraph article review. Make the changes and I'll review.
## References
http://eprints.utm.my/10255/2/Lu_Yee_Ting.pdf (case study)
http://www.actionresearch.net/living/moira/wangshuqin.htm (action research)
|
|
## College Algebra (10th Edition)
RECALL: A sum of two cubes $x^3+a^3$ can be factored using the formula: $x^3+a^3=(x+a)(x^2-ax+a^2)$. Note that the trinomial factor in the given expression does not match the one in the formula above. Thus, the given statement is false.
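As a quick numerical sanity check (illustrative only, not part of the textbook solution), the sum-of-cubes identity can be verified for sample values:

```javascript
// Numerically verify x^3 + a^3 = (x + a)(x^2 - a*x + a^2) for a few samples.
function sumOfCubes(x, a) {
  return x ** 3 + a ** 3;
}
function factoredForm(x, a) {
  return (x + a) * (x ** 2 - a * x + a ** 2);
}
const samples = [[2, 3], [-1.5, 4], [7, -2]];
const allMatch = samples.every(
  ([x, a]) => Math.abs(sumOfCubes(x, a) - factoredForm(x, a)) < 1e-9
);
```

Any trinomial factor whose middle term is not exactly $-ax$ will fail such a check.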
|
|
# Decrease of Rating when Clicking on a Problem
Once I click on a problem, I find that my rating decreases immediately. Is this a glitch, or is this supposed to happen?
Note by Minimario Minimario
3 years, 7 months ago
## Comments
I believe this is because if you look at a problem and then close it, the system assumes that you didn't know how to solve the problem. · 1 year, 11 months ago
That has always been the behavior of ratings. Staff · 3 years, 7 months ago
No, I think looking at a problem decreases your rating by 18 or something... · 3 years, 7 months ago
|
|
# Example of Lebesgue Integral but not Riemann Integrable
1. Nov 16, 2008
### Nusc
What's an example of a Lebesgue integrable function which is not Riemann integrable?
2. Nov 16, 2008
### morphism
There are plenty. Can you think of a characteristic function of a nonempty measurable set (of finite measure) that is discontinuous everywhere? Why will this do?
3. Nov 16, 2008
### Nusc
There's example 7.4 on page 145 -- in the limit, this is the classic example of a non-Riemann-integrable function.
But I don't understand why this will do.
4. Nov 17, 2008
### HallsofIvy
Staff Emeritus
I get annoyed when people refer to examples in specific books- do they expect everyone to have the book in front of them? But here, you don't even say what book!
The simplest example of a Lebesgue integrable function that is not Riemann integrable is f(x) = 1 if x is irrational, 0 if x is rational.
It is trivially Lebesgue integrable: the set of rational numbers is countable, so has measure 0. f = 1 almost everywhere, so f is Lebesgue integrable and its integral, from 0 to 1, is 1.
But no matter how we divide the interval from x = 0 to x = 1 into subintervals, every subinterval contains both rational and irrational numbers: the "lower sum" is always 0 and the "upper sum" is always 1. As we increase the number of intervals to infinity, those do NOT converge to a common value.
5. Dec 2, 2008
### vigvig
Consider the following set $A= Q \cap [0,1]$, where $Q$ is the set of rational numbers of course. Now consider the characteristic function of $A$, denoted $X_A$, defined as follows: $X_A(x)=1$ when $x \in A$ and $X_A(x)=0$ otherwise. Since this function is zero almost everywhere, its Lebesgue integral is clearly 0. However, it is easily proved that $X_A$ is not Riemann integrable. As an argument supporting this: the function is discontinuous at every point of $[0,1]$, so its set of discontinuities does not have measure zero (by the Lebesgue criterion, a bounded function is Riemann integrable if and only if its set of discontinuities has measure zero).
Vignon S. Oussa
Last edited: Dec 3, 2008
6. Nov 19, 2011
### riesling
Could you give us another, more complicated example? It seems like the Dirichlet function is everywhere!
thanks!
riesling
7. Nov 19, 2011
### micromass
Staff Emeritus
What kind of example do you want?? You can also have
$$f(x)=x~\text{for}~x\notin \mathbb{Q}~\text{and}~f(x)=0~\text{otherwise}$$
8. Nov 20, 2011
### riesling
Thanks! I'm looking for examples where the set of discontinuities and the set of continuities are both of non-zero measure... Is that possible? I know of a type of Cantor set which has positive measure... are there others?
|
|
# Math Help - Ok, what am I doing wrong??
1. ## Ok, what am I doing wrong??
Find the area of the minor segment formed on an arc of length 36cm drawn on a circle of radius 50cm.
Chord length = 2rsin(theta/2)
2 * 50sin(theta/2) = 36
theta = 0.7365
Area of segment = 0.5r^2(theta-sintheta)
= 81.01cm^2
Correct answer = 75.77cm^2
2. I get theta to be 0.72 or 36/50.
Using that value the area is 75.769.
3. Hello, classicstrings!
Find the area of the minor segment formed on an arc of length 36cm
drawn on a circle of radius 50cm.
Why are you playing with a chord?
Code:
36
* * *
* *
* *
* - - - - - - - - - *
\ /
\ /
\ / 50
\ θ /
*
Arc length formula: s = rθ
We have: s = 36, r = 50
Hence: 36 = 50θ, so θ = 0.72 (radians)
Area of a sector: A = ½r²θ = ½(50²)(0.72) = 900 cm²
Area of triangle: ½r²·sin θ = ½(50²)sin(0.72) ≈ 824.23 cm²
Therefore: Area of segment = 900 - 824.23 = 75.77 cm²
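The computation above can be reproduced with a short script (same radius and arc length as in the problem):

```javascript
// Minor segment area for an arc of length s on a circle of radius r:
// theta = s / r, segment = sector - triangle = (1/2) r^2 (theta - sin(theta)).
const r = 50; // radius in cm
const s = 36; // arc length in cm
const theta = s / r;                            // 0.72 rad
const sector = 0.5 * r * r * theta;             // 900 cm^2
const triangle = 0.5 * r * r * Math.sin(theta); // ~824.23 cm^2
const segment = sector - triangle;              // ~75.77 cm^2
```

Note that 0.72 is the central angle subtended by the arc; the chord-length formula in the first post solves for a different angle, which is where the 81.01 cm² came from.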
|
|
# Mochitest FAQ¶
## SSL and https-enabled tests¶
Mochitests must be run from http://mochi.test/ to succeed. However, some tests may require use of additional protocols, hosts, or ports to test cross-origin functionality.
The Mochitest harness addresses this need by mirroring all content of the original server onto a variety of other servers through the magic of proxy autoconfig and SSL tunneling. The full list of schemes, hosts, and ports on which tests are served is specified in build/pgo/server-locations.txt.
The origins described there are not the same, as some of them specify particular SSL certificates for testing purposes, while some allow pages on that server to request elevated privileges; read the file for full details.
It works as follows: The Mochitest harness includes preference values which cause the browser to use proxy autoconfig to match requested URLs with servers. The network.proxy.autoconfig_url preference is set to a data: URL that encodes the JavaScript function, FindProxyForURL, which determines the host of the given URL. In the case of SSL sites to be mirrored, the function maps them to an SSL tunnel, which transparently forwards the traffic to the actual server, as per the description of the CONNECT method given in RFC 2817. In this manner a single HTTP server at http://127.0.0.1:8888 can successfully emulate dozens of servers at distinct locations.
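For illustration, the PAC function the harness encodes is conceptually of this shape (a simplified sketch; the real function and the full host list live in the harness and in build/pgo/server-locations.txt, and the hosts below are illustrative):

```javascript
// Simplified sketch of a proxy autoconfig (PAC) function like the one the
// Mochitest harness encodes in network.proxy.autoconfig_url. Every host the
// tests use is routed to the single local server.
function FindProxyForURL(url, host) {
  const testHosts = ["mochi.test", "example.com", "test"]; // illustrative list
  if (testHosts.includes(host)) {
    // HTTP requests are proxied to the single local httpd.js instance;
    // HTTPS origins additionally go through an SSL tunnel in the real harness.
    return "PROXY 127.0.0.1:8888";
  }
  return "DIRECT";
}
```

The browser calls this function for every request, which is how dozens of apparent origins resolve to one server.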
## What if my tests aren’t done when onload fires?¶
Use add_task(), or call SimpleTest.waitForExplicitFinish() before onload fires (and SimpleTest.finish() when you’re done).
## How can I get the full log output for my test in automation for debugging?¶
SimpleTest.requestCompleteLog();
## What if I need to change a preference to run my test?¶
The SpecialPowers object provides APIs to get and set preferences:
await SpecialPowers.pushPrefEnv({ set: [["your-preference", "your-value" ]] });
// ...
await SpecialPowers.popPrefEnv(); // Implicit at the end of the test too.
You can also set prefs directly in the manifest:
[DEFAULT]
prefs =
browser.chrome.guess_favicon=true
If you need to change a pref when running a test locally, you can use the --setpref flag:
./mach mochitest --setpref="javascript.options.jit.chrome=false" somePath/someTestFile.html
Equally, if you need to change a string pref:
./mach mochitest --setpref="webgl.osmesa=string with whitespace" somePath/someTestFile.html
## Can tests be run under a chrome URL?¶
Yes, use mochitest-chrome.
## How do I change the HTTP headers or status sent with a file used in a Mochitest?¶
Create a text file next to the file whose headers you want to modify. The name of the text file should be the name of the file whose headers you’re modifying followed by ^headers^. For example, if you have a file foo.jpg, the text file should be named foo.jpg^headers^. (Don’t try to actually use the headers file in any other way in the test, because the HTTP server’s hidden-file functionality prevents any file ending in exactly one ^ from being served.)
Edit the file to contain the headers and/or status you want to set, like so:
HTTP 404 Not Found
Content-Type: text/html
The first line sets the HTTP status and a description (optional) associated with the file. This line is optional; you don’t need it if you’re fine with the normal response status and description.
Any other lines in the file describe additional headers which you want to add or overwrite (most typically the Content-Type header, for the latter case) on the response. The format follows the conventions of HTTP, except that you don’t need to have HTTP line endings and you can’t use a header more than once (the last line for a particular header wins). The file may end with at most one blank line to match Unix text file conventions, but the trailing newline isn’t strictly necessary.
## How do I write tests that check header values, method types, etc. of HTTP requests?¶
To write such a test, you simply need to write an SJS (server-side JavaScript) for it. See the testing HTTP server docs for less mochitest-specific documentation of what you can do in SJS scripts.
An SJS is simply a JavaScript file with the extension .sjs which is loaded in a sandbox. Don’t forget to reference it from your mochitest.ini file too!
[DEFAULT]
support-files =
test_file.sjs
The global property handleRequest defined by the script is then executed with request and response objects, and the script populates the response based on the information in the request.
Here’s an example of a simple SJS:
function handleRequest(request, response) {
  // Allow cross-origin, so you can XHR to it!
  response.setHeader("Access-Control-Allow-Origin", "*", false);
  // Avoid confusing cache behaviors
  response.setHeader("Cache-Control", "no-cache", false);
  response.write("Hello world!");
}
The file is run, for example, at http://mochi.test:8888/tests/PATH/TO/YOUR/test_file.sjs or at http://{server-location}/tests/PATH/TO/YOUR/test_file.sjs - see build/pgo/server-locations.txt for server locations!
If you want to actually execute the file, you need to reference it somehow. For instance, you can XHR to it, or you could use an HTML element:
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://test/tests/dom/manifest/test/test_file.sjs");
xhr.send();
The exact properties of the request and response parameters are defined in the nsIHttpRequestMetadata and nsIHttpResponse interfaces in nsIHttpServer.idl. However, here are a few useful ones:
• .scheme (string). The scheme of the request.
• .host (string). The host of the request.
• .port (string). The port of the request.
• .method (string). The HTTP method.
• .httpVersion (string). The protocol version, typically “1.1”.
• .path (string). Path of the request,
• .headers (object). Name and values representing the headers.
• .queryString (string). The query string of the requested URL.
• .bodyInputStream (nsIInputStream). A stream from which the request body can be read.
• .getHeader(name). Gets a request header by name.
• .hasHeader(name) (boolean). Returns whether the request has a header with the given name.
Note: The browser is free to cache responses generated by your script. If you ever want an SJS to return different data for multiple requests to the same URL, you should add a Cache-Control: no-cache header to the response to prevent the test from accidentally failing, especially if it’s manually run multiple times in the same Mochitest session.
## How do I keep state across loads of different server-side scripts?¶
Server-side scripts in Mochitest are run inside sandboxes, with a new sandbox created for each new load. Consequently, any variables set in a handler don’t persist across loads. To support state storage, use the getState(k) and setState(k, v) methods defined on the global object. These methods expose a key-value storage mechanism for the server, with keys and values as strings. (Use JSON to store objects and other structured data.) The myriad servers in Mochitest are in reality a single server with some proxying and tunnelling magic, so a stored state is the same in all servers at all times.
The getState and setState methods are scoped to the path being loaded. For example, the absolute URLs /foo/bar/baz, /foo/bar/baz?quux, and /foo/bar/baz#fnord all share the same state; the state for /foo/bar is entirely separate.
You should use per-path state whenever possible to avoid inter-test dependencies and bugs.
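As a sketch of the key-value pattern (with getState/setState stubbed in memory here, since in a real SJS script the harness provides them in the global scope):

```javascript
// Stubs standing in for the harness-provided per-path storage; in a real
// SJS these functions already exist and persist across loads of the path.
const _state = new Map();
function getState(k) { return _state.has(k) ? _state.get(k) : ""; }
function setState(k, v) { _state.set(k, v); }

// Typical pattern: count loads of this path, using JSON since the
// storage holds strings only.
function handleRequest(request, response) {
  const data = JSON.parse(getState("hits") || '{"count":0}');
  data.count += 1;
  setState("hits", JSON.stringify(data));
  response.write("load number " + data.count);
}

// Minimal response stand-in so the handler can be exercised here.
const fakeResponse = { body: "", write(s) { this.body += s; } };
handleRequest({}, fakeResponse);
handleRequest({}, fakeResponse);
```

Because the real storage persists between loads, the second request to the same path sees the count written by the first.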
However, in rare cases it may be necessary for two scripts to collaborate in some manner, and it may not be possible to use a custom query string to request divergent behaviors from the script.
For this use case only you should use the getSharedState(k) and setSharedState(k, v) methods defined on the global object. No restrictions are placed on access to this whole-server shared state, and any script may add new state that any other script may delete. You should therefore use a key within a faux namespace so as to avoid accidental conflicts. For example, if you needed shared state for an HTML5 video test, you might use a key like dom.media.video:sharedState.
A further form of state storage is provided by the getObjectState(k) and setObjectState(k, v) methods, which will store any nsISupports object. These methods reside on the nsIHttpServer interface, but a limitation of the sandbox object used by the server to process SJS responses means that the former is present in the SJS request handler’s global environment with the signature getObjectState(k, callback), where callback is a function to be invoked by getObjectState with the object corresponding to the provided key as the sole argument.
Note that this value mapping requires the value to be an XPCOM object; an arbitrary JavaScript object with no QueryInterface method is insufficient. If you wish to store a JavaScript object, you may find it useful to provide the object with a QueryInterface implementation and then make use of wrappedJSObject to reveal the actual JavaScript object through the wrapping performed by XPConnect.
For further details on state-saving mechanisms provided by httpd.js, see netwerk/test/httpserver/nsIHttpServer.idl and the nsIHttpServer.get(Shared|Object)?State methods.
## How do I write a SJS script that responds asynchronously?¶
Sometimes you need to respond to a request asynchronously, for example after waiting for a short period of time. You can do this by using the processAsync() and finish() functions on the response object passed to the handleRequest() function.
processAsync() must be called before returning from handleRequest(). Once called, you can at any point call methods on the request object to send more of the response. Once you are done, call the finish() function. For example you can use the setState() / getState() functions described above to store a request and later retrieve and finish it. However be aware that the browser often reorders requests and so your code must be resilient to that to avoid intermittent failures.
let { setTimeout } = ChromeUtils.import("resource://gre/modules/Timer.jsm");
function handleRequest(request, response) {
response.processAsync();
response.write("hello...");
setTimeout(function() {
response.write("world!");
response.finish();
}, 5 * 1000);
}
For more details, see the processAsync() function documentation in netwerk/test/httpserver/nsIHttpServer.idl.
## How do I get access to the files on the server as XPCOM objects from an SJS script?¶
If you need access to a file, because it’s easier to store image data in a file than directly in an SJS script, use the presupplied SERVER_ROOT object state available to SJS scripts running in Mochitest:
function handleRequest(req, res) {
var file;
getObjectState("SERVER_ROOT", function(serverRoot) {
file = serverRoot.getFile("tests/content/media/test/320x240.ogv");
});
// file is now an XPCOM object referring to the given file
res.write("file: " + file);
}
The path you specify is used as a path relative to the root directory served by httpd.js, and an nsIFile corresponding to the file at that location is returned.
Beware of typos: the file you specify doesn’t actually have to exist because file objects are mere encapsulations of string paths.
## Diagnosing and fixing leakcheck failures¶
Mochitests output a log of the windows and docshells that are created during the test during debug builds. At the end of the test, the test runner runs a leakcheck analysis to determine if any of them did not get cleaned up before the test was ended.
Leaks can happen for a variety of reasons. One common one is that a JavaScript event listener is retaining a reference that keeps the window alive.
Finding the leak can be difficult, but the first step is to reproduce it locally. Ensure you are on a debug build and the MOZ_QUIET environment variable is not set, since the leakcheck analysis works from the test output. After reproducing the leak, start commenting out code until the leak goes away; once it stops reproducing, narrow in on the exact location where it is happening.
|
|
# Moishezon projectivity criterion for Moishezon spaces with canonical singularites
A Moishezon manifold is projective if and only if it is Kähler. This is no longer true for a singular Moishezon space. Moishezon proved a projectivity criterion for Moishezon spaces with isolated singularities. It is also known for Moishezon spaces with 1-rational singularities (note that varieties with 1-rational singularities may not have rational singularities in general, but for algebraic surfaces these two notions coincide). So it is true that a Moishezon space with canonical singularities (in the sense of the minimal model program) is projective if and only if it is Kähler.
Is the following statement true in general?:
Let $M$ be any compact complex variety with 1-rational singularities. Then $M$ is a Moishezon space if and only if there is a proper analytic subset $S\subset M$ such that $M\setminus S$ admits a complete singular Kähler-Einstein metric with negative Ricci curvature.
In fact the existence of a Kähler-Einstein metric has been verified for varieties of general type with mild singularities, so this question might be natural to ask for 1-rational singularities.
The motivation is the mildly singular version of my recent question.
Definition: A compact complex space $M$ is Moishezon if and only if there exists a weakly positive coherent $\mathcal O_M$-module of rank 1 on $M$.
Definition: An algebraic variety $X$ is said to have 1-rational singularities, if the following two conditions holds true
(1) $X$ is normal,
(2) for every resolution $f : \tilde X \to X$ of $X$ we have $R^1f_∗\mathcal O_{\tilde X} = 0$
• Canonical singularities (or even klt singularities) are rational. So if the answer is known for rational singularities...... – Hacon Jul 24 '17 at 4:42
• Ahh, this Kähler-Einstein metric got me drunk again and I wrote a silly question, I apologize. The answer (in the edited version) must correspond to the relation between semi log canonical singularities and 1-rational singularities. If you know it, please let me know. – user21574 Jul 24 '17 at 7:37
|
|
# Random NumberGenerators
#### The Art of Mathematical Computing
by Benjamin (Bill) Planche
feat. Philipp Jovanovic
Slides at github.com/Aldream/presentations
# Structure
1. Randomness - defs + pseudo-randomness
2. RNGs & PRNGs - defs + implementation
3. Test Tools - suites + implementation
# Randomness
Lack of pattern, predictability or determinism in events.
However.
Is it really random?
Or are we simply ignorant of the underlying pattern?
## Random Sequences
#### Information Theory
Basic definition
Sequence of independent random variables
Formal definition
???
## Definition by Von Mises
Based on the Law of Large Numbers
An infinite sequence can be considered random if:
• It has the Frequency Stability Property
• $S = (s_1, s_2, \ldots)$ a sequence over $\mathbb{A}$, an alphabet of $m$ symbols
• $\forall a \in \mathbb{A}, \lim_{n \to \infty} \frac{|\{i \leq n : s_i = a\}|}{n} = \frac{1}{m}$
• Any sub-seq. selected by a proper method isn't biased too. ex:
• $S = (1,0,1,0,1,0,1,0)$ not biased
• $f(X) \to (X_i \mid \forall i \leq |X| \land i \equiv 1 \pmod 2)$ (keep the odd positions)
• $\implies f(S) \to (1,1,1,1)$ biased
However.
• How to mathematize this proper method of selection?
• Yields an empty set (demo by Jean Ville in 1939)
## Definition by Martin-Löf
A Random Sequence has no "exceptional and effectively verifiable" property
• No properties verifiable by a recursive algorithm
• Frequency / measure-theoretic
• Quite satisfactory
## Definition by Levin/Chaitin
### Complexity of Kolmogorov
• Important measure for Information Theory
Length of the shortest program able to generate the sequence.
### Resulting Definition
A finite string is random
if it requires a program at least as long as itself to be computed
$\exists c \geq 0$ such that $H(S_0^n) \geq n - c$
with $H$ complexity of Kolmogorov
• "Incomprehensible informational content"
• Complexity / Compressibility approach
## Statistical Randomness
A sequence is statistically random if it has no recognizable patterns.
• Less strict than previous definitions
• Doesn't imply objective unpredictability
... leaves room to the concept of ...
## Pseudo-Randomness
#### and Pseudo-Random Sequence
Exhibits statistical randomness...
... though generated by a deterministic causal method.
# Random NumberGenerators
## Definition
Device which can produce a sequence of random numbers, i.e. a sequence without deterministic properties and patterns.
## Categories
### Generators based on physical phenomena
Dice tossing, coin flipping, bird trajectories, ...
• Often only random in appearances
• Cheating by knowing the rules / initial state
#### Quantum Phenomena
Nuclear decay, Behavior of photons hitting a semi-transparent mirror, ...
• Golden solutions
• Globally too costly to be democratized
#### Noisy Phenomena
Thermal signal from transistor, radio noise, Analog-to-digital conversion noise, ...
• Easier to detect
• Offer good results
#### OS Implementations
• Unix Systems
• $/dev/urandom$ & $/dev/random$
• Device files probing analog sources (mouse, keyboard, disk accesses, etc.)
• Windows Systems
• $CryptGenRandom$
• Gather through system state (CPU counters, env. var, threads IDs, etc.)
• Based on the unpredictable IO + behavior of the users
• Harvest entropy, Output random bytes
In both cases, entropy decreases during inactivity
... Shortages.
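From application code, these OS sources are typically reached through a thin wrapper rather than by opening the device file directly. A small sketch using Python's `os.urandom`, which is backed by the sources named above:

```python
import os

# os.urandom reads from the OS entropy source (/dev/urandom on Unix,
# the CryptGenRandom family on Windows), so it is suitable for
# security-sensitive uses, unlike a plain PRNG.
key = os.urandom(16)  # 16 random bytes, e.g. for a 128-bit key
print(key.hex())
```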
### Pseudo-Random Number Generators
Randomness through Determinism...?
Pseudo-Random Sequences
• Clever implementations $\rightarrow$ Long-enough period
• Determinism $\rightarrow$ totally defined by init config
• State
• Seed
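The "totally defined by init config" point is easy to demonstrate: feeding the same seed to a PRNG reproduces the exact same sequence. A sketch with Python's standard `random.Random` (a Mersenne Twister); the seed values are arbitrary:

```python
import random

def prng_sequence(seed, n=5):
    """A PRNG's output is entirely determined by its initial state (the seed)."""
    rng = random.Random(seed)  # Mersenne Twister under the hood
    return [rng.randrange(100) for _ in range(n)]

# Same seed -> identical "random" sequence; different seed -> a different one.
print(prng_sequence(42))
print(prng_sequence(42))  # identical to the line above
print(prng_sequence(43))  # (almost certainly) different
```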
## LFSR
#### Linear Feedback Shift Register
• Sequential shift register
• New bit = linear function of previous state
• Combinational logic
• $\mathfrak{F}$ mapping on the vector space of binary $n$-tuples
• $f$ feedback function = boolean operation of $n$ variables
$$\mathfrak{F}:\mathbb{F}_2^n \to \mathbb{F}_2^n$$ $$\mathfrak{F}:(x_1, x_2, ..., x_n)\mapsto (x_2, x_3, ..., x_n, f(x_1, x_2, ..., x_n))$$
• $f$ $\equiv$ poly mod 2 in finite field arithmetic
• Feedback polynomial
• Taps = bits of the register used in the linear operation
• Necessary conditions for a maximal-length LFSR (period $2^n-1$):
• Having an even number of taps
• Using a relatively-prime set of taps
##### Example
$f(x) = x^{16} + x^{14} + x^{13} + x^{11} + 1$
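This example polynomial corresponds to the classic 16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11. A small sketch (using the usual right-shift tap convention) confirming that it is maximal-length, i.e. that the register only returns to its seed after $2^{16}-1$ steps:

```python
def lfsr16_period(seed=0xACE1):
    """Step a 16-bit Fibonacci LFSR with feedback polynomial
    x^16 + x^14 + x^13 + x^11 + 1 (taps at bits 16, 14, 13, 11)
    and count the steps until the register returns to its seed."""
    state, steps = seed, 0
    while True:
        # XOR the tapped bits together to form the new input bit.
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        steps += 1
        if state == seed:
            return steps

print(lfsr16_period())  # 65535 == 2**16 - 1: a maximal-length LFSR
```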
## NLFSR
#### Non-Linear Feedback Shift Register
Same theory as for LFSRs
Only one difference
The feedback function $f$ is non-linear
ex: $f(x) = x^4 + x^1 \cdot x^2 + 1$
• Makes NLFSRs harder to predict than LFSRs
• Makes it harder to ensure a max period of $2^n-1$ bits.
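As a toy illustration of the single difference, here is a 4-bit register stepped with a feedback in the spirit of the example above, $f(x) = x_4 \oplus (x_1 \cdot x_2) \oplus 1$; the bit numbering and seed are assumptions made for the demo, and no maximal period is claimed:

```python
def step(state):
    """One step of a 4-bit NLFSR whose feedback f(x) = x4 + x1*x2 + 1
    over GF(2) is non-linear because of the AND term."""
    x1, x2, x4 = state & 1, (state >> 1) & 1, (state >> 3) & 1
    bit = x4 ^ (x1 & x2) ^ 1  # the AND makes the feedback non-linear
    return (state >> 1) | (bit << 3)

def cycle_length(seed):
    """Number of states in the cycle eventually reached from the seed,
    found by stepping until a state repeats."""
    seen, state = {}, seed
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    return len(seen) - seen[state]

print(cycle_length(0b1001))  # some cycle length <= 2**4
```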
## Applications and Uses
Applications in every area
where unpredictable behavior is desirable/required
cryptographic systems, gambling applications, statistical sampling, simulation, ...
Various applications $\rightarrow$ Various requirements
• Crypto-secure RNGs for security applications
• Outputs uniqueness for shuffling methods
• ...
• RNGs $\rightarrow$ ~ safer but less abundant
• PRNGs $\rightarrow$ ~ weaker but lighter
# Testing Randomness
## About the Difficulty to Test Randomness
### Reasons
• Def. depending on the field $\rightarrow$ Which one to test?
• Large number of possibilities $\rightarrow$ Impossible to fully cover
### Solutions
• Statistical tests or complexity evaluations
• Battery of tests to identify statistical bias
• Checking hypothesis of perfect behavior
### Limitations
• Impossible to fully cover $\rightarrow$ no universal battery of tests
• Good RNGs $\approx$ pass complicated or numerous tests
## Common Tests
• DIEHARD Tests
• TestU01 Suite
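As a minimal taste of what these batteries bundle, here is a sketch of the simplest such check, the monobit (frequency) test in the style of NIST SP 800-22; the two sample streams are invented for the demo:

```python
import math

def monobit_pvalue(bits):
    """Monobit test: under the null hypothesis of fair bits, the
    normalized sum |S|/sqrt(n) is approximately half-normal; return the
    p-value erfc(|S| / sqrt(2n))."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

biased = [1] * 900 + [0] * 100       # obviously biased stream
alternating = [i % 2 for i in range(1000)]  # 0101... pattern, balanced counts

print(monobit_pvalue(biased))       # ~0: "randomness" rejected
print(monobit_pvalue(alternating))  # 1.0: this single test is fooled
```

The alternating stream passes with a perfect p-value despite being completely patterned, which is exactly why a battery of many different tests is needed.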
### Berlekamp-Massey Algorithm
#### Definition
• Break linearly recurrent sequences over a finite field $\mathbb{F}_q$
• Find min degree $L$ and annihilator poly $F(x)$ of the seq $S$
#### Algorithm
$S$ sequence, $F(x)$ polynomial, $\beta_i^j$ discrepancy
• Make a first guess for $F(x)$
• At each iteration $l$:
• Generate $S_l'$ of $l$ elements, using reverse of $F(x)$
• Compare $S$ and $S_l'$: $\beta_0^l(S, S_l')$
• We know $S_l'$ correct up to the $(l-1)^{th}$ symbol
• If $l^{th}$ symbol not correct, ie $\beta_0^l(S, S_l') = (0,0,0,...,1)$:
• Last iteration $m$ when this happened, we had $\beta_0^m(S, S_m') = (0,0,0,...,1)$
• So $\beta_0^l(S, S_l') + \beta_{l-m}^l(S, S_m') = (0,0,0,...,0) \to$ correction to apply
• $F(x) \gets F(x) + x^mF_l(x)$
# Conclusion
### Overview of a large topic
• Various characteristics / Various Uses
• Choose wisely!
• Don't implement your own RNG!
• ... especially for crypto!
• ... but if you try, test test test!
# References
Presentation based on a personal survey: https://github.com/Aldream/random-number-generator
1. Downey, R.: Some recent progress in algorithmic randomness. In Mathematical Foundations of Computer Science 2004. Springer Berlin Heidelberg (2004)
2. Wikipedia: Random Number Generation (2014)
3. Aumasson, J.P.: Crypto for Developers - Part 2, Randomness. AppSec Forum Switzerland 2013 (2013)
4. Raymond, S., Andrew, S., Patrick, C., Jason, M.: Linear Feedback Shift Register (2001)
5. Joux, A.: Algorithmic cryptanalysis. CRC Press (2009)
6. Szmidt, J.: The Search and Construction of Nonlinear Feedback Shift Registers. Military Communication Institute, Zegrze, Poland (2013)
7. Wikipedia: Linear Feedback Shift Register (2014)
8. Dubrova, E.: A List of Maximum Period NLFSRs. IACR Cryptology ePrint Archive 2012 (2012) 166
9. Ritter, T.: Randomness Tests: A Literature Survey (2007)
10. L'Ecuyer, P., Simard, R.: TestU01 - A Software Library in ANSI C for Empirical Testing of Random Number Generators (2002)
11. Marsaglia, G.: The Marsaglia Random Number CDROM including the Diehard Battery of Tests of Randomness (2005)
12. Soto, J.: Statistical Testing of Random Number Generators. Proceedings of the 22nd National Information Systems Security Conference NIST, 1999 (1999)
13. Berlekamp, E.R.: Nonbinary BCH decoding. University of North Carolina. Department of Statistics (1967)
14. Massey, J.L.: Shift-register synthesis and BCH decoding. Information Theory. IEEE Transactions 15(1) (1969) 122-127
15. Feng, G.-L., T.K.: A generalization of the Berlekamp-Massey algorithm for multisequence shift-register. Information Theory, IEEE Transactions 37(5) (2012) 1274-1287
16. Rodrigez, S.: Implementation of a decoding algorithm for codes from algebraic curves in the programming language Sage. diploma thesis, Faculty of San Diego State University (2013)
## Thanks for your attention!
#### Questions?
@b_aldream | git:Aldream | aldream.net
# Annexe
### Randomness defined by Schnorr
A random sequence must not be predictable.
No effective strategy should lead to an infinite gain if we bet on its symbols.
• Predictability approach
### LFSR - Implementation
##### Python
```python
def createLFSRgenerator(taps, seed):
    """ Returns a LFSR generator, defined by the given sequence of taps and initial value.
    @param taps (Tuple[int]): Sequence of taps defining the register.
        ex: (1, 0, 0, 1) -> f(x) = x^4 + x^3 + 1
    @param seed (int): Initial value given to the register
    @return LFSR Generator """
    def lfsrGen():
        """ @yield Pseudo-Random value from the defined LFSR """
        deg = len(taps)        # Degree of the feedback polynomial
        period = 2 ** deg - 1  # Max period of the LFSR
        value = seed           # Initial value
        it = 0
        while it < period:
            # Computing the new value of the most-significant bit:
            bit = 0
            for j in range(deg):  # XOR of the register bits selected by the taps
                if taps[j]:
                    bit ^= value >> j
            bit &= 1  # Keep only the least-significant bit of the accumulated XOR
            # Final value in register by popping the least-significant bit and appending the new most-significant one:
            value = (value >> 1) | (bit << (deg - 1))
            it += 1
            yield value
    return lfsrGen
```
##### Javascript
```javascript
function createLFSRGenerator(taps, seed) {
    /** Returns a LFSR generator, defined by the given sequence of taps and initial value.
        @param taps (Tuple[int]): Sequence of taps defining the register.
            ex: (1, 0, 0, 1) -> f(x) = x^4 + x^3 + 1
        @param seed (int): Initial value given to the register
        @return LFSR Generator */
    return function* lfsrGen() {
        /** @yield Pseudo-Random value from the defined LFSR */
        var deg = taps.length,              // Degree of the feedback polynomial
            period = Math.pow(2, deg) - 1,  // Max period of the LFSR
            value = seed;                   // Initial value
        for (var it = 0; it < period; it++) {
            // Computing the new value of the most-significant bit:
            var bit = 0;
            for (var j = 0; j < deg; j++) { // XOR of the register bits selected by the taps
                if (taps[j])
                    bit ^= value >> j;
            }
            bit &= 1;  // Keep only the least-significant bit of the accumulated XOR
            // Final value in register by popping the least-significant bit and appending the new most-significant one:
            yield (value = (value >> 1) | (bit << (deg - 1)));
        }
    };
}
```
### NLFSR - Implementation
##### Python
```python
def createNLFSRgenerator(taps, seed):
    """ Returns a NLFSR generator, defined by the given combination of taps and initial value.
    @param taps (Tuple[Array[int]]): Sequence of combinations of taps defining the non-linear register.
        ex: ([0,0],[],[2],[1,2]) -> f(x) = x^4*x^4 + x^2 + x^1*x^2 + 1 (poor choice)
    @param seed (int): Initial value given to the register
    @return NLFSR Generator """
    def nlfsrGen():
        """ @yield Pseudo-Random value generated by a pre-defined NLFSR """
        deg = len(taps)        # Degree of the feedback polynomial
        period = 2 ** deg - 1  # Max period of the NLFSR (read Warning above)
        value = seed           # Initial value
        it = 0
        while it < period:
            # Computing the new value of the most-significant bit:
            bit = 0
            for tap in taps:
                # Computing the binary multiplication x^K_0 * x^K_1 * ... * x^K_n
                # with [K_0, K_1, ..., K_n] the current taps array
                if len(tap):
                    element = 1
                    for k in tap:
                        if not (value >> k & 1):
                            element = 0  # A binary product is 1 iff none of its factors is 0,
                            break        # so the first null bit makes the product 0.
                else:
                    element = 0
                bit ^= element  # Binary addition of the multiplication results
            bit &= 1
            # Final value in register by popping the least-significant bit and appending the new most-significant one:
            value = (value >> 1) | (bit << (deg - 1))
            it += 1
            yield value
    return nlfsrGen
```
##### Javascript
```javascript
function createNLFSRgenerator(taps, seed) {
    /** Returns a NLFSR generator, defined by the given combination of taps and initial value.
        @param taps (Tuple[Array[int]]): Sequence of combinations of taps defining the non-linear register.
            ex: ([0,0],[],[2],[1,2]) -> f(x) = x^4*x^4 + x^2 + x^1*x^2 + 1 (poor choice)
        @param seed (int): Initial value given to the register
        @return NLFSR Generator */
    return function* nlfsrGen() {
        /** @yield Pseudo-Random value generated by a pre-defined NLFSR */
        var deg = taps.length,              // Degree of the feedback polynomial
            period = Math.pow(2, deg) - 1,  // Max period of the NLFSR (read Warning above)
            value = seed,                   // Initial value
            it = 0;
        while (it < period) {
            // Computing the new value of the most-significant bit:
            var bit = 0;
            for (var j = 0; j < taps.length; j++) {
                var element = 1;
                if (taps[j].length) {
                    // Computing the binary multiplication x^K_0 * x^K_1 * ... * x^K_n
                    // with [K_0, K_1, ..., K_n] the j-th taps array
                    for (var k = 0; k < taps[j].length; k++) {
                        if (!(value >> taps[j][k] & 1)) {
                            element = 0;  // A binary product is 1 iff none of its factors is 0,
                            break;        // so the first null bit makes the product 0.
                        }
                    }
                } else { element = 0; }
                bit ^= element;  // Binary addition of the multiplication results
            }
            bit &= 1;
            // Final value in register by popping the least-significant bit and appending the new most-significant one:
            it += 1;
            yield (value = (value >> 1) | (bit << (deg - 1)));
        }
    };
}
```
### Test Suites
#### DIEHARD Tests
• Developed by George Marsaglia, in 1995
• 15 tests run over a large file containing the sequence
birthday spacings, overlapping permutations, ranks of 31x31 and 32x32 matrices, ranks of 6x8 matrices, monkey tests, count the 1's, parking lot, minimum distance, random spheres, squeeze, overlapping sums, runs, and craps
#### TestU01 Suite
• Software library, initiated in 1985
• Collection of utilities in ANSI C
• Classical stat tests + others from literature + original ones
• Tools to implement specific stat tests.
### Berlekamp-Massey Algorithm Alternate explanation
• At each iteration $l$:
• Evaluate the discrepancy
• If null:
• $F(x)$ and $L$ still correct
• Go to next iteration
• Else:
• $F(x)$ should be concordantly adjusted
• Shift & Scale syndromes added since last update
• If $l > 2L$:
• Update $L$ to keep track of progression
### Berlekamp-Massey - Implementation
##### Python
```python
def BerlekampMasseyAlgorithm(sequence):
    """ Applies the Berlekamp-Massey Algorithm to the given sequence of bits;
    Returns the smallest annihilating polynomial F, ie. the smallest inverse
    feedback polynomial corresponding to the generating LFSR (F(sequence) = 0).
    @param sequence (Array[int] or Tuple[int]): Sequence of bits to analyze
    @returns Array defining the computed inverse feedback polynomial
        ex: [1, 0, 0, 1, 1] represents the inverse polynomial x^4 + x^3 + 1,
        and thus the feedback polynomial x^4 + x + 1 (taps = (1, 0, 0, 1)) """
    def discrepancy(sequence, poly, i, L):
        """ Returns the discrepancy.
        @param sequence (Array[int] or Tuple[int]): Sequence of bits to analyze
        @param poly (Array[int]): Current version of the inverse polynomial
        @param i (int): Current position in the sequence
        @param L (int): Current number of assumed errors
        @return Binary value of the discrepancy """
        return sum([sequence[i-j] & poly[j] for j in range(0, L+1)]) % 2  # = s[i]*p[0] + s[i-1]*p[1] + ... + s[i-L]*p[L]

    def addPoly(poly1, poly2, length):
        """ Computes the addition of two F2 polynomials.
        @param poly1 (Array[int]): Array representing the 1st polynomial
        @param poly2 (Array[int]): Array representing the 2nd polynomial
        @param length (int): Length to be covered by the addition (trusting user to avoid testing)
        @returns Resulting Binary Array """
        return [poly1[j] ^ poly2[j] for j in range(0, length)]

    # Initializing:
    N = len(sequence)
    F, f = [0]*N, [0]*N  # Polynomials, with F being the one returned at the end (inverse feedback polynomial)
    F[0] = f[0] = 1
    L = 0      # Current number of assumed errors
    delta = 1  # Number of iterations since the last update of L
    for l in range(N):  # Computing F and L:
        beta = discrepancy(sequence, F, l, L)
        if beta != 0:  # Adjusting F for this term:
            g = F.copy()
            F = addPoly(F, [0]*delta + f, N)
            if 2 * L <= l:  # Update L (re-initializing delta), and also f:
                L = l + 1 - L  # Number of available syndromes used to calculate discrepancies
                delta = 1
                f = g  # f gets the previous value of F
            else:
                delta += 1
        else:
            delta += 1
    return F[:L+1]  # Output the polynomial
```
##### Javascript
```javascript
function BerlekampMasseyAlgorithm(sequence) {
    /** Applies the Berlekamp-Massey Algorithm to the given sequence of bits;
        Returns the smallest annihilating polynomial F, ie. the smallest inverse
        feedback polynomial corresponding to the generating LFSR (F(sequence) = 0).
        @param sequence (Array[int] or Tuple[int]): Sequence of bits to analyze
        @returns Array defining the computed inverse feedback polynomial
            ex: [1, 0, 0, 1, 1] represents the inverse polynomial x^4 + x^3 + 1,
            and thus the feedback polynomial x^4 + x + 1 (taps = (1, 0, 0, 1)) */
    function discrepancy(sequence, poly, i, L) {
        /** Returns the discrepancy.
            @param sequence (Array[int] or Tuple[int]): Sequence of bits to analyze
            @param poly (Array[int]): Current version of the inverse polynomial
            @param i (int): Current position in the sequence
            @param L (int): Current number of assumed errors
            @return Binary value of the discrepancy */
        var disc = 0;
        for (var j = 0; j < L + 1; j++) disc += (sequence[i-j] & poly[j]); // disc = s[i]*p[0] + s[i-1]*p[1] + ... + s[i-L]*p[L]
        return disc % 2;
    }
    function addPoly(poly1, poly2, length) {
        /** Computes the addition of two F2 polynomials.
            @param poly1 (Array[int]): Array representing the 1st polynomial
            @param poly2 (Array[int]): Array representing the 2nd polynomial
            @param length (int): Length to be covered by the addition (trusting user to avoid testing)
            @returns Resulting Binary Array */
        var poly = [];
        for (var j = 0; j < length; j++) poly.push(poly1[j] ^ poly2[j]);
        return poly;
    }
    // Initializing:
    var N = sequence.length;
    var F = [], f = [];  // Polynomials, with F being the one returned at the end (inverse feedback polynomial)
    for (var i = 0; i < N; i++) { F.push(0); f.push(0); }
    F[0] = f[0] = 1;
    var L = 0;      // Current number of assumed errors
    var delta = 1;  // Number of iterations since the last update of L
    for (var l = 0; l < N; l++) {  // Computing F and L:
        var beta = discrepancy(sequence, F, l, L);
        if (beta != 0) {  // Adjusting F for this term:
            var g = F.slice(0);
            var fShifted = f.slice(0);
            for (var k = 0; k < delta; k++) { fShifted.unshift(0); }
            F = addPoly(F, fShifted, N);
            if (2 * L <= l) {
                L = l + 1 - L;  // Number of available syndromes used to calculate discrepancies
                delta = 1;
                f = g;          // f gets the previous value of F
            } else delta += 1;
        } else delta += 1;
    }
    for (var k = L + 1; k < N; k++) { F.pop(); }
    return F;  // Output the polynomial
}
```
## 30 January 2016
### Extended Response to Nick Rowe
Nick Rowe on Twitter earlier today:
If [a] central bank targeted the price of peanuts, would we blame recessions on bad peanut harvests? Or blame [the] central bank for not raising [the] target price?
It depends. It depends on how quickly the central bank finds out about the bad peanut harvest, how quickly the new policy can be enacted, and how effectively the central bank can control the price of peanuts.
Suppose no one knows about the size of the peanut harvest until the following period. In this case, the central bank, which we will assume can completely control the price of peanuts for the time being, is not culpable for the recession that occurs because the price of peanuts is too low. The central bank could not have known that the price level (of peanuts) at which output remained at potential was higher than they otherwise thought, so they cannot be blamed for the recession that ensues.
If, on a slightly different note, the central bank faces a delay in policy implementation, it may not be able to act quickly enough to prevent a recession; they can raise the target price with a delay, but there will still be a recession in the meantime and the central bank is not culpable.
Alternatively, assume that the central bank knows about the bad harvest in real time and doesn't face a policy lag, but, for some reason, is unable to set the price of peanuts any higher. In this case, the central bank can't be blamed either -- there is nothing it can do to prevent it from happening, so the correct culprit for the recession is the bad peanut harvest.
Generally, assuming there are no significant lags in information or implementation, the central bank would be to blame for not preventing the recession. The only time that the blame really shouldn't fall on a central bank is when it can't control the price (of peanuts) -- in this case, central bank impotence is to blame for the recession, not actions taken by the central bank.
With that aside, now we can go about determining when central banks are impotent.
## 25 January 2016
### Objectives vs. Tools of Monetary Policy
In the comments of one of Nick Rowe's recent posts, Scott Sumner has accused me of confusing objectives and tools of monetary policy:
You are looking at the causal effects of QE, whereas it makes more sense to view QE as the effect of a tight monetary policy that drives rates to zero. If you do a more expansionary monetary policy, such as currency depreciation, then you do not need as much QE. QE is a defensive mechanism, monetary policy needs to be viewed in terms of the policy goals of the central bank, and in terms of whether it will do whatever it takes to reach those goals.
Basically, Scott is suggesting that quantitative easing isn't actually a monetary policy, and is instead the natural conclusion to what he does view as monetary policy -- currency depreciation. Here, Sumner provides an interesting set of definitions for what constitutes monetary policy and, more generally, what can reasonably be considered exogenous to a central bank.
In his mind, exchange rates are basically exogenous to the extent that central banks try to influence them. This is evident from his implicit assertion that, if central banks are "doing whatever it takes to reach [their] goals," they will invariably reach those goals. Of course, this isn't necessarily news; everyone has known Sumner's opinion that central banks are nearly omnipotent for quite some time, but this time he has laid it out more directly.
According to Sumner, the evolution of any nominal variable over time can be completely controlled by a central bank and, as such, can be used as a point of criticism for that central bank: "monetary policy needs to be viewed in terms of the policy goals of the central bank." As such, the actual policies that central banks follow are completely irrelevant; it doesn't matter what the path of interest rates is, the correct judge of current Federal Reserve policy (for example) is whether or not inflation is on target.
Of course, I, along with I hope the majority of people, don't see monetary policy in this light. Sumner seems to have made a point of confusing monetary policy -- e.g., QE, interest rate setting, open market operations -- with whatever nominal variable he happens to care about at the moment -- in this case exchange rates. This separation is important; it allows us to understand more directly a central bank's goals and how it intends to achieve those goals.
Evidently, Scott couldn't care less about the how and only wants us to focus on the goals. He basically has reduced his thinking about monetary policy to the point that he views NGDP as an instrument of the central bank -- effectively an exogenous variable -- rather than a variable that a central bank may act to control. This level of abstraction from the operation of monetary policy, in my opinion even more grievous than the New Keynesian obsession with the nominal interest rate, is what allows Market Monetarists to callously ignore every model that doesn't allow exogenous NGDP that says the zero lower bound actually represents a constraint on monetary policy.
If central banks could make NGDP exogenous, would they be able to make NGDP exogenous? Naturally, but no one should care about the answer to such a redundant question, yet this is effectively the answer that you get from Sumner; he'll simply assert that "the BOC can always depreciate the Canadian dollar. The zero bound is not an issue in Canada" (from an earlier comment on the same post). Naturally, we should all trust Sumner's clairvoyance on this issue, clearly no argument about monetary policy effectiveness is necessary (see my first comment on Nick Rowe's post, if you want one anyway) and we can rest assured that fiscal policy is never necessary.
Ideally, considering the ability of monetary policy to effectively deal with challenges should be at least of some consideration and, since monetary policy has proved theoretically capable of offsetting the demand-side effects of fiscal stimulus among other shocks, the only point at which this can be of much concern is the zero lower bound. Both Sumner's and Rowe's refusal to give theoretical arguments against me in this area is rather troubling, evidently just assuming monetary policy is effective in every circumstance is completely acceptable.
## 24 January 2016
### Timing and Composition
Family members are often confused by my simultaneous support for looser fiscal policy in the United States and disdain for Republican tax proposals during this election cycle on the ground that they would result in too much deficit spending. On the surface, my policy preferences seem contradictory; I neither support efforts to rein in the deficit nor the large tax cut proposals of the majority of Republicans. There are two primary reasons for this seemingly strange predilection: timing and composition.
The length of time each policy lasts is crucial to my support. As per the 'New Keynesian consensus,' loose fiscal policy should only be used until monetary policy can be reasonably declared unconstrained by the zero lower bound. This is why deliberate deficit cutting policies should not have been undertaken, and arguably should not be pursued further, until two criteria have been met: the federal funds rate must be above the zero lower bound and there must be little to no risk that the zero lower bound will be made to bind by either a tightening of fiscal policy or some other shock to the economy. At the time of writing this post, only the first criterion is fulfilled -- the Federal Reserve has decided to raise the target fed funds rate, but, since it stands somewhere between 0.25% and 0.5% (the Fed has adopted a target range instead of a strict target), it would be reasonable to suggest that a large negative fiscal shock could be more than the Fed can handle without being thrown back into a liquidity trap (in this sense, the US could still be considered to be in a liquidity trap, even though the zero lower bound no longer binds).
GOP tax cut proposals would undoubtedly achieve the temporary goal of looser fiscal policy, but they would be on a completely wrong timescale. Conventional analysis only suggests loose fiscal policy for the duration of the liquidity trap, and, since the tax cuts are permanent to the extent that they are not repealed by future administrations, they fail miserably in this regard. In other words, fiscal policy would be too loose for too long under large tax cuts -- especially if they are not accompanied by corresponding reductions in government spending. Additionally, spending cuts are arguably more damaging than tax cuts are stimulative in liquidity traps, so a fiscal adjustment fully in line with, e.g., Rand Paul's or Ted Cruz' preferences could completely fail to comply with the recommendations of mainstream economics, which scares me enough in its own right to warrant a revocation of support.
My second criticism of the GOP tax plans is more personal; I think that government spending and taxes in the United States should be higher, not lower. There are certainly arguments to be made that government spending in the United States does nothing to raise aggregate utility and should thus be cut, but I believe, and I think most other economists agree with me, that this is definitely not the case. This is especially true in infrastructure, or more generally government investment -- currently at its lowest level as a percentage of GDP since 1948 -- which sorely needs to be increased. Further, spending on Social Security and Medicare should increase over the next decade or two because of the changing demographics of the country. If we adopt the tax proposals of many if not all of the GOP candidates, spending cuts will have to come from somewhere and, given the Republican obsession with massive military spending, they will probably not be defense cuts. This pretty much leaves entitlements and investment -- both of which would cause significant pain going forward if they were cut significantly.
Ideally, fiscal policy makers would focus in the short term on simply not cutting spending too ferociously and in the long run on figuring out how to raise the revenue required for higher levels of government investment and entitlement spending. The GOP seems prepared to do neither of these and, as such, I am not prepared to endorse them for their fiscal policy.
## 17 January 2016
### Choosing the Best Model For Each Context
In spite of perhaps attracting the wrath of Jason Smith, I think it is safe to say that economics is too complicated for there to be one generally applicable model of everything. Because of this, there is a veritable plethora of economic models available to the economic theorist. This simply leaves the question of which one to use in which circumstance.
Simon Wren-Lewis seems to think that economists should select between models in an ex-post manner -- that is, we should see which model better represents the data and use that model from then on:
How do we know if most economic cycles are described by Real Business Cycles (RBC) or Keynesian dynamics? One big clue is layoffs: if employment is falling because workers are choosing not to work, we could have an RBC mechanism, but if workers are being laid off (and are deeply unhappy about it) this is more characteristic of a Keynesian downturn.
The issue here is that we can only diagnose events after the fact, we cannot reasonably make predictions because of the impossibility of ex ante empirical validation: it is impossible to determine whether or not a recession is New Keynesian or if it is a Real Business Cycle before data are released.
This is why context-based validation of theory is superior to empirical validation in the case of economics. The context -- i.e. the sub-field of economics that is being studied -- should inform model choice almost entirely. If the field is business cycles, then the relevant model is a New Keynesian DSGE model and if the field is growth theory, then New Keynesian models are superfluous and should be tabled in favor of neoclassical models -- whose only difference from their New Keynesian counterparts is nominal rigidity, which is irrelevant over a time scale longer than a decade.
Predictions about the economy can now be made based on currently available information: it is possible to determine whether or not, e.g. financial frictions should be present in our business cycle model based on the current state of the economy: we knew by Q3 2008 that financial frictions were relevant, so we should have put them in a model if we were trying to predict the next few years.
Alternatively, the model I should choose to use depends on the kind of thought experiment I choose to embark on. Am I trying to compare PAYGO pensions with Social Security? If so, the obvious model to use is a simple OLG model without a labor-leisure trade-off or sticky prices. Choice of models is equivalent to choice of assumptions, at least when it comes to the DGE approach currently dominant in economics, and assumption choice depends entirely on the question being asked. Nominal rigidity is obviously relevant for business cycle theory, but completely useless when it comes to determining the level effect of a tax increase.
Hopefully this selection mechanism is specific enough to not be "basically feelings," as Jason Smith would suggest is the case for most of economics.
## 03 January 2016
### People Should Be More Honest With Charts
Recently, Scott Sumner wrote a blog post with this chart in it:
I thought it would be interesting to see how well this relationship held over the period that Sumner didn't include in his chart. Here it is:
It's interesting to note that the relationship doesn't look so good when you look at the entire sample in which all of the data is available. This is aside from the fact that the idea that the NGDP/Wage ratio would track unemployment is part of basic neoclassical theory and has nothing to do with wage stickiness.
Start with a simple Cobb-Douglas production function with employment and capital:
$$(1)\: Y_t = F(K_{t-1},L_t) = K_{t-1}^\alpha L_t^{1-\alpha}$$
Assume that the firm maximizes profits, $Y_t - w_t L_t - r_{t-1} K_{t-1}$ and you get the following first order condition for labor:
$$(2)\: w_t = (1-\alpha)\left(\frac{Y_t}{L_t}\right)$$
Dividing by $Y_t$ will give the nominal wage to NGDP ratio (since the nominal wage to NGDP ratio is the same as the real wage to RGDP ratio), which is
$$(3)\: \frac{w_t}{Y_t} = \frac{1-\alpha}{L_t}$$
It's clear from this that, in a simple neoclassical model, the nominal wage to NGDP ratio is expected to be negatively correlated with employment and, therefore, positively correlated with unemployment -- which is coincidentally the exact thing that Scott's chart shows. Variations in the nominal wage to NGDP ratio are not, in fact, vindications of the musical chairs model.
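The algebra in (1)-(3) is simple enough to sanity-check numerically. A minimal sketch; the parameter values are arbitrary, chosen only for illustration:

```python
# Numeric check of the derivation above: with Y = K^a * L^(1-a), the
# profit-maximizing wage w = (1-a) * Y / L gives w/Y = (1-a)/L exactly.
alpha, K, L = 0.33, 8.0, 5.0          # illustrative values, not calibrated
Y = K**alpha * L**(1 - alpha)         # production function (1)
w = (1 - alpha) * Y / L               # first-order condition (2)
ratio = w / Y                         # left-hand side of equation (3)
print(abs(ratio - (1 - alpha) / L))   # ~0, up to floating-point error
```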
# Probability an honest node finds the next block vs. probability the attacker finds the next block
I am trying to understand Satoshi's paper [1]. On page 6 of the paper, he calculates the probability that someone can attack the blockchain from z blocks behind. He begins by defining:
My question: the mining of a block, i.e., solving the hash puzzle, is a completely brute-force trial-and-error process in which one keeps trying random nonces until one gets a hit. No one has an upper hand in this process. So shouldn't p and q be equal to each other?
EDIT (since a comment would have been very long): responding to the comments, in that case it gets even worse. q can be greater than p. Let the Bitcoin network be composed of M mining pools with their compute power given by p_i. Then all an attacker needs to do is to form a pool whose compute power is greater than max(p_i). Am I missing something? In other words, the pool with max compute power can already attack the blockchain.
In an ideal design, if I want to attack the blockchain, I should be pitted against the sum of p_i not the max of p_i.
• not if they are dependent on the hashpower of the attacker vs the hashpower of the honest node – JBaczuk Aug 2 '19 at 17:10
• My question assumes the hashpower are equal. I think its a safe assumption. – morpheus Aug 2 '19 at 17:11
• @morpheus That's not a reasonable assumption. Why would the attacker have exactly half of the hashrate? – Pieter Wuille Aug 2 '19 at 17:24
• Help me understand. Given N miners each with the same computer, everyone has an equal chance of finding the next block, no? – morpheus Aug 2 '19 at 17:42
• The attacker may have multiple computers. Or a datacenter. Or be the NSA. – Pieter Wuille Aug 2 '19 at 20:50
No one has an upper hand in this process. So shouldn't p and q be equal to each other?
The chances of finding a valid block header that meets the target requirement increase in proportion to the number of tries, which means they are proportional to the hash power that you have. Even if there are only two miners, one could be using a huge datacenter (the size of the United States), while the other might be mining on his 10-year-old laptop. Their hash power would not be the same, and hence p and q will also not be equal to one another. Hash rate is the metric to be used to find out the probability of a miner finding the next block.
Then all an attacker needs to do is to form a pool whose compute power is greater than max(p_i). Am I missing something? In other words, the pool with max compute power can attack the blockchain already.
What you are describing is a 51% attack. With the current hashrate of the Bitcoin network, it would require HUGE investments (billions of dollars) from the entity planning such an attack. At that investment level, the economic incentive to launch the attack may be tiny, unless it's a state actor trying to destroy confidence in the network. Even if a fraudulent miner amasses >50% of the network's hash rate, full nodes MAY try to patch themselves to reject such blocks (for example, if the attacker tries to broadcast a longest chain that differs from the original chain by more than 6 blocks in order to double-spend).
The reason p and q are not equal is because the attacker is indeed pitted against the cumulative power of the entire network vs. being pitted against one miner.
If there were just one miner to compete with, then the attacker and the miner would have equal chances of finding the next block, and p would indeed equal q in that case (assuming identical hardware and hashing power). But when there are n miners, each working independently, the chances that one of them finds the next block increase dramatically: the rate becomes $n \lambda$ instead of just $\lambda$ (see https://en.wikipedia.org/wiki/Exponential_distribution#Distribution_of_the_minimum_of_exponential_random_variables), and so p/q = n.
Another way to get the above result is to recognize that mining a block is like winning a lottery in which everyone has equal chances (again assuming identical compute power). So if there are n miners plus 1 attacker, then p/q = n.
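The "minimum of exponentials" fact linked above can be checked directly: if each miner's solve time is exponential with rate proportional to hashpower, the probability that a given miner finds the next block is its rate divided by the total rate. A small illustrative sketch in plain Python:

```python
from fractions import Fraction

def next_block_probs(hashrates):
    # For independent exponential solve times, P(miner i is first)
    # = rate_i / sum(rates), and rates scale with hashpower.
    total = sum(hashrates)
    return [Fraction(r, total) for r in hashrates]

# n honest miners with identical hardware, plus one attacker of the same size:
n = 4
probs = next_block_probs([1] * n + [1])  # last entry is the attacker
q = probs[-1]                            # attacker finds the next block
p = 1 - q                                # some honest miner finds it
assert p / q == n                        # p/q = n, as claimed above
```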
Bitcoin is pure genius.
• "If there was just one miner to compete with ... p is indeed equal to q in that case". The probability of finding the block is proportional to the number of tries, which means that it is proportional to the hash power that you have. Even if there are two miners, one could be using a huge datacenter, while the other might be mining on his 10 year old laptop. Their hash power would not be the same, and hence p and q will also not be equal to one another. Hash rate is the metric to be used to find out the probability of a miner finding the next block. – Ugam Kamat Aug 3 '19 at 1:44
• This is very confused. Even if there was just one attacker and one honest miner, there is no reason to assume they have the same hashrate. Hashrate depends on how much hardware and electricity each has at its disposal. – Pieter Wuille Aug 3 '19 at 1:48
|
|
### Multi-Party Threshold Private Set Intersection with Sublinear Communication
Saikrishna Badrinarayanan, Peihan Miao, Srinivasan Raghuraman, and Peter Rindal
##### Abstract
In multi-party threshold private set intersection (PSI), $n$ parties each with a private set wish to compute the intersection of their sets if the intersection is sufficiently large. Previously, Ghosh and Simkin (CRYPTO 2019) studied this problem for the two-party case and demonstrated interesting lower and upper bounds on the communication complexity. In this work, we investigate the communication complexity of the multi-party setting $(n\geq 2)$. We consider two functionalities for multi-party threshold PSI. In the first, parties learn the intersection if each of their sets and the intersection differ by at most $T$. In the second functionality, parties learn the intersection if the union of all their sets and the intersection differ by at most $T$. For both functionalities, we show that any protocol must have communication complexity $\Omega(nT)$. We build protocols with a matching upper bound of $O(nT)$ communication complexity for both functionalities assuming threshold FHE. We also construct a computationally more efficient protocol for the second functionality with communication complexity $\widetilde{O}(nT)$ under a weaker assumption of threshold additive homomorphic encryption. As a direct implication, we solve one of the open problems in the work of Ghosh and Simkin (CRYPTO 2019) by designing a two-party protocol with communication cost $\widetilde{O}(T)$ from assumptions weaker than FHE. As a consequence of our results, we achieve the first "regular" multi-party PSI protocol where the communication complexity only grows with the size of the set difference and does not depend on the size of the input sets.
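As a plain illustration of the two functionalities, here is the reveal predicate each one computes, written in ordinary Python on toy sets. (This is not a protocol; the paper's contribution is evaluating these predicates *privately* with low communication.)

```python
def reveal_f1(sets, T):
    # Functionality 1: reveal iff every party's set differs from the
    # intersection by at most T elements.
    inter = set.intersection(*sets)
    return all(len(s - inter) <= T for s in sets)

def reveal_f2(sets, T):
    # Functionality 2: reveal iff the union of all sets differs from the
    # intersection by at most T elements.
    inter = set.intersection(*sets)
    union = set.union(*sets)
    return len(union - inter) <= T

sets = [{1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 3, 6}]
assert reveal_f1(sets, T=1)        # each set has one element beyond the intersection
assert not reveal_f2(sets, T=1)    # the union has three elements beyond the intersection
```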
##### Metadata
Available format(s)
Category
Cryptographic protocols
Publication info
A minor revision of an IACR publication in PKC 2021
Keywords
Private Set IntersectionCommunication ComplexityMultiparty Computation
Contact author(s)
sabadrin @ visa com
peihan @ uic edu
srraghur @ visa com
perindal @ visa com
History
2021-03-01: last of 2 revisions
2020-05-22: received
See all versions
Short URL
https://ia.cr/2020/600
License
CC BY
BibTeX
@misc{cryptoeprint:2020/600,
author = {Saikrishna Badrinarayanan and Peihan Miao and Srinivasan Raghuraman and Peter Rindal},
title = {Multi-Party Threshold Private Set Intersection with Sublinear Communication},
howpublished = {Cryptology ePrint Archive, Paper 2020/600},
year = {2020},
note = {\url{https://eprint.iacr.org/2020/600}},
url = {https://eprint.iacr.org/2020/600}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
|
|
# Node, Postgres, and Sequelize
Let’s build a CRUD app with Node (v4.1.1), Express (v4.13.1), Sequelize (v3.12.2), and PostgreSQL (9.4.4).
This a follow-up to PostgreSQL and NodeJS.
## Getting Started
Grab the initial boilerplate and install the dependencies:
Now run a quick sanity check:
If all went well, a new browser window should have opened to http://localhost:5000/ and you should see the “Welcome to Express.” text.
## Sequelize
With Postgres listening on port 5432, we can make a connection to it using the Sequelize library, an Object Relational Mapper (ORM), written in JavaScript, which supports MySQL, PostgreSQL, SQLite, and MariaDB.
Need to set up Postgres? On a Mac? Check out Postgres.app.
Install Sequelize, pg (for making the database connection), and pg-hstore (for serializing and deserializing JSON into the Postgres hstore key/value pair format):
## Migrations
The Sequelize CLI is used to bootstrap a new project and handle database migrations directly from the terminal.
### Init
Start by installing the package:
Next, create a config file called .sequelizerc in your project root to specify the paths to specific files required by Sequelize:
Now, run the init command to create the files (config.json) and folders (“migrations”, “models”, and “seeders”):
Take a look at the index.js file within the “models” directory:
Here, we establish a connection to the database, grab all the model files from the current directory, add them to the db object, and apply any relations between each model (if any).
### Config
Be sure to also update the config.json file for your development, test, and production databases:
If you are just running this locally, using the basic development server, then just update the development config.
Go ahead and create a database named “todos”.
### Create Migration
Now let’s create a model along with a migration. Since we’re working with todos, run the following command:
Take a look at the newly created model file, todo.js, in the models directory:
The corresponding migration file can be found in the “migrations” folder. Take a look. Next, let’s associate a user to a todo. First, we need to define a new migration:
Now we need to set up the relationship between the two models…
### Associations
To associate the models (one user can have many todos), make the following updates…
todo.js:
user.js:
### Sync
Finally, before we sync, let’s add an additional attribute to the complete field in the todo.js file:
Run the migration to create the tables:
## CRUD
With Sequelize set up and the models defined, we can now set up our RESTful routing structure for the todo resource. First, within index.js in the “routes” folder add the following requirement:
Then add a route for creating a new user:
To add a new user, start the server (gulp) and then run the following in a new terminal window:
You should see:
Now we can add the todo routes…
### GET all todos
When you hit that route you should see an empty array since we have not added any todos. Let’s do that now.
### POST
Now let’s test:
Then if you go back and hit http://127.0.0.1:3000/todos in your browser, you should see:
### GET single todo
How about getting a single todo?
Navigate to http://localhost:3000/todo/1 in your browser. You should see the single todo.
### PUT
Need to update a todo?
And now for a test, of course:
### DELETE
Want to delete a todo?
Test:
Again, navigate to http://localhost:3000/todos in your browser. You should now only see one todo.
## Conclusion
That’s it for the basic server-side code. You now have a database, models, and migrations set up. Whenever you want to update the state of your database, just add additional migrations and then run them as necessary.
Grab the code from the GitHub repo. Comment below with questions. Cheers!
|
|
# Data containers¶
## DataContainer¶
class mchammer.DataContainer(structure, ensemble_parameters, metadata={})[source]
Data container for storing information concerned with Monte Carlo simulations performed with mchammer.
Parameters
• structure (ase.Atoms) – reference atomic structure associated with the data container
• ensemble_parameters (dict) – parameters associated with the underlying ensemble
• metadata (dict) – metadata associated with the data container
analyze_data(tag, start=None, max_lag=None)[source]
Returns detailed analysis of a scalar observable.
Parameters
• tag (str) – tag of field over which to average
• start (Optional[int]) – minimum value of trial step to consider; by default the smallest value in the mctrial column will be used.
• max_lag (Optional[int]) – maximum lag between two points in data series, by default the largest length of the data series will be used. Used for computing autocorrelation
Raises
• ValueError – if observable is requested that is not in data container
• ValueError – if observable is not scalar
• ValueError – if observations are not evenly spaced
Returns
calculated properties of the data including mean, standard_deviation, correlation_length and error_estimate (95% confidence)
Return type
dict
append(mctrial, record)
Appends data to data container.
Parameters
• mctrial (int) – current Monte Carlo trial step
• record (Dict[str, Union[int, float, list]]) – dictionary of tag-value pairs representing observations
Raises
TypeError – if input parameters have the wrong type
apply_observer(observer)
Adds observer data from observer to data container.
The observer will only be run for the mctrials for which the trajectory has been saved.
The interval of the observer is ignored.
Parameters
observer (BaseObserver) – observer to be used
property data
pandas data frame (see pandas.DataFrame)
Return type
DataFrame
property ensemble_parameters
parameters associated with Monte Carlo simulation
Return type
dict
get(*input_tags, start=0)
Returns the accumulated data for the requested observables, including configurations stored in the data container. The latter can be achieved by including ‘trajectory’ as a tag.
Parameters
• tags – tuples of the requested properties
• start (int) – minimum value of trial step to consider; by default the smallest value in the mctrial column will be used.
Raises
• ValueError – if tags is empty
• ValueError – if observables are requested that are not in data container
Examples
Below the get method is illustrated but first we require a data container.
>>> from ase.build import bulk
>>> from icet import ClusterExpansion, ClusterSpace
>>> from mchammer.calculators import ClusterExpansionCalculator
>>> from mchammer.ensembles import CanonicalEnsemble
>>> # prepare cluster expansion
>>> prim = bulk('Au')
>>> cs = ClusterSpace(prim, cutoffs=[4.3], chemical_symbols=['Ag', 'Au'])
>>> ce = ClusterExpansion(cs, [0, 0, 0.1, -0.02])
>>> # prepare initial configuration
>>> structure = prim.repeat(3)
>>> for k in range(5):
... structure[k].symbol = 'Ag'
>>> # set up and run MC simulation
>>> calc = ClusterExpansionCalculator(structure, ce)
>>> mc = CanonicalEnsemble(structure=structure, calculator=calc,
... temperature=600,
... dc_filename='myrun_canonical.dc')
>>> mc.run(100) # carry out 100 trial swaps
We can now access the data container by reading it from file by using the read method. For the purpose of this example, however, we access the data container associated with the ensemble directly.
>>> dc = mc.data_container
The following lines illustrate how to use the get method for extracting data from the data container.
>>> # obtain all values of the potential represented by
>>> # the cluster expansion along the trajectory
>>> p = dc.get('potential')
>>> import matplotlib.pyplot as plt
>>> # as above but this time the MC trial step and the temperature
>>> # are included as well
>>> s, p = dc.get('mctrial', 'potential')
>>> _ = plt.plot(s, p)
>>> plt.show()
>>> # obtain configurations along the trajectory along with
>>> # their potential
>>> p, confs = dc.get('potential', 'trajectory')
Return type
Union[ndarray, List[Atoms], Tuple[ndarray, List[Atoms]]]
get_average(tag, start=None)[source]
Returns average of a scalar observable.
Parameters
• tag (str) – tag of field over which to average
• start (Optional[int]) – minimum value of trial step to consider; by default the smallest value in the mctrial column will be used.
Raises
• ValueError – if observable is requested that is not in data container
• ValueError – if observable is not scalar
Return type
float
get_trajectory(*args, **kwargs)
Returns trajectory as a list of ASE Atoms objects.
property metadata
metadata associated with data container
Return type
dict
property observables
observable names
Return type
List[str]
classmethod read(infile, old_format=False)
Reads data container from file.
Parameters
• infile (Union[str, BinaryIO, TextIO]) – file from which to read
• old_format (bool) – if True, use the old JSON format to read runtime data; defaults to False
Raises
• FileNotFoundError – if file is not found (str)
• ValueError – if file is of incorrect type (not a tarball)
write(outfile)
Writes BaseDataContainer object to file.
Parameters
outfile (Union[str, BinaryIO, TextIO]) – file to which to write
## WangLandauDataContainer¶
class mchammer.WangLandauDataContainer(structure, ensemble_parameters, metadata={})[source]
Data container for storing information concerned with Wang-Landau simulation performed with mchammer.
Parameters
• structure (ase.Atoms) – reference atomic structure associated with the data container
• ensemble_parameters (dict) – parameters associated with the underlying ensemble
• metadata (dict) – metadata associated with the data container
append(mctrial, record)
Appends data to data container.
Parameters
• mctrial (int) – current Monte Carlo trial step
• record (Dict[str, Union[int, float, list]]) – dictionary of tag-value pairs representing observations
Raises
TypeError – if input parameters have the wrong type
apply_observer(observer)
Adds observer data from observer to data container.
The observer will only be run for the mctrials for which the trajectory has been saved.
The interval of the observer is ignored.
Parameters
observer (BaseObserver) – observer to be used
property data
pandas data frame (see pandas.DataFrame)
Return type
DataFrame
property ensemble_parameters
parameters associated with Monte Carlo simulation
Return type
dict
property fill_factor
final value of the fill factor in the Wang-Landau algorithm
Return type
float
property fill_factor_history
evolution of the fill factor in the Wang-Landau algorithm
Return type
DataFrame
get(*input_tags, start=0)
Returns the accumulated data for the requested observables, including configurations stored in the data container. The latter can be achieved by including ‘trajectory’ as a tag.
Parameters
• tags – tuples of the requested properties
• start (int) – minimum value of trial step to consider; by default the smallest value in the mctrial column will be used.
Raises
• ValueError – if tags is empty
• ValueError – if observables are requested that are not in data container
Examples
Below the get method is illustrated but first we require a data container.
>>> from ase.build import bulk
>>> from icet import ClusterExpansion, ClusterSpace
>>> from mchammer.calculators import ClusterExpansionCalculator
>>> from mchammer.ensembles import CanonicalEnsemble
>>> # prepare cluster expansion
>>> prim = bulk('Au')
>>> cs = ClusterSpace(prim, cutoffs=[4.3], chemical_symbols=['Ag', 'Au'])
>>> ce = ClusterExpansion(cs, [0, 0, 0.1, -0.02])
>>> # prepare initial configuration
>>> structure = prim.repeat(3)
>>> for k in range(5):
... structure[k].symbol = 'Ag'
>>> # set up and run MC simulation
>>> calc = ClusterExpansionCalculator(structure, ce)
>>> mc = CanonicalEnsemble(structure=structure, calculator=calc,
... temperature=600,
... dc_filename='myrun_canonical.dc')
>>> mc.run(100) # carry out 100 trial swaps
We can now access the data container by reading it from file by using the read method. For the purpose of this example, however, we access the data container associated with the ensemble directly.
>>> dc = mc.data_container
The following lines illustrate how to use the get method for extracting data from the data container.
>>> # obtain all values of the potential represented by
>>> # the cluster expansion along the trajectory
>>> p = dc.get('potential')
>>> import matplotlib.pyplot as plt
>>> # as above but this time the MC trial step and the temperature
>>> # are included as well
>>> s, p = dc.get('mctrial', 'potential')
>>> _ = plt.plot(s, p)
>>> plt.show()
>>> # obtain configurations along the trajectory along with
>>> # their potential
>>> p, confs = dc.get('potential', 'trajectory')
Return type
Union[ndarray, List[Atoms], Tuple[ndarray, List[Atoms]]]
get_entropy()[source]
Returns the (relative) entropy from this data container accumulated during a Wang-Landau simulation. Returns None if the data container does not contain the required information.
Return type
DataFrame
get_histogram()[source]
Returns the histogram from this data container accumulated since the last update of the fill factor. Returns None if the data container does not contain the required information.
Return type
DataFrame
get_trajectory(*args, **kwargs)
Returns trajectory as a list of ASE Atoms objects.
property metadata
metadata associated with data container
Return type
dict
property observables
observable names
Return type
List[str]
classmethod read(infile, old_format=False)
Reads data container from file.
Parameters
• infile (Union[str, BinaryIO, TextIO]) – file from which to read
• old_format (bool) – if True, use the old JSON format to read runtime data; defaults to False
Raises
• FileNotFoundError – if file is not found (str)
• ValueError – if file is of incorrect type (not a tarball)
write(outfile)
Writes BaseDataContainer object to file.
Parameters
outfile (Union[str, BinaryIO, TextIO]) – file to which to write
## Analysis functions¶
mchammer.data_containers.get_average_observables_wl(dcs, temperatures, observables=None, boltzmann_constant=8.617330337217213e-05)[source]
Returns the average and the standard deviation of the energy from a Wang-Landau simulation for the temperatures specified. If the observables keyword argument is specified the function will also return the mean and standard deviation of the specified observables.
Parameters
• dcs (Union[BaseDataContainer, dict]) – data container(s), from which to extract density of states as well as observables
• temperatures (List[float]) – temperatures, at which to compute the averages
• observables (Optional[List[str]]) – observables, for which to compute averages; the observables must refer to fields in the data container
• boltzmann_constant (float) – Boltzmann constant $$k_B$$ in appropriate units, i.e. units that are consistent with the underlying cluster expansion and the temperature units [default: eV/K]
Raises
• ValueError – if the data container(s) do(es) not contain entropy data from Wang-Landau simulation
• ValueError – if data container(s) do(es) not contain requested observable
Return type
DataFrame
mchammer.data_containers.get_average_cluster_vectors_wl(dcs, cluster_space, temperatures, boltzmann_constant=8.617330337217213e-05)[source]
Returns the average cluster vectors from a Wang-Landau simulation for the temperatures specified.
Parameters
• dcs (Union[BaseDataContainer, dict]) – data container(s), from which to extract density of states as well as observables
• cluster_space (ClusterSpace) – cluster space to use for calculation of cluster vectors
• temperatures (List[float]) – temperatures, at which to compute the averages
• boltzmann_constant (float) – Boltzmann constant $$k_B$$ in appropriate units, i.e. units that are consistent with the underlying cluster expansion and the temperature units [default: eV/K]
Return type
DataFrame
mchammer.data_containers.get_density_of_states_wl(dcs)[source]
Returns a pandas DataFrame with the total density of states from a Wang-Landau simulation. If a dict of data containers is provided the function also returns a dictionary that contains the standard deviation between the entropy of neighboring data containers in the overlap region. These errors should be small compared to the variation of the entropy across each bin.
The function can handle both a single data container and a dict thereof. In the latter case the data containers must cover a contiguous energy range and must at least partially overlap.
Parameters
dcs (Union[BaseDataContainer, dict]) – data container(s), from which to extract the density of states
Raises
• ValueError – if multiple data containers are provided and there are inconsistencies with regard to basic simulation parameters such as system size or energy spacing
• ValueError – if multiple data containers are provided and there is at least one energy region without overlap
Return type
Tuple[DataFrame, dict]
mchammer.data_analysis.analyze_data(data, max_lag=None)[source]
Carries out an extensive analysis of the data series.
Parameters
• data (ndarray) – data series to compute autocorrelation function for
• max_lag (Optional[int]) – maximum lag between two data points, used for computing autocorrelation
Returns
calculated properties of the data including mean, standard deviation, correlation length and a 95% error estimate.
Return type
dict
mchammer.data_analysis.get_autocorrelation_function(data, max_lag=None)[source]
Returns autocorrelation function.
The autocorrelation function is computed using pandas.Series.autocorr.
Parameters
• data (ndarray) – data series to compute autocorrelation function for
• max_lag (Optional[int]) – maximum lag between two data points
Returns
calculated autocorrelation function
mchammer.data_analysis.get_correlation_length(data)[source]
Returns estimate of the correlation length of data.
The correlation length is taken as the first point where the autocorrelation functions is less than $$\exp(-2)$$. If the correlation function never drops below $$\exp(-2)$$ np.nan is returned.
Parameters
data (ndarray) – data series for which to compute the autocorrelation function
Returns
correlation length
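A minimal sketch of the rule just described, assuming the autocorrelation function has already been computed as a list indexed by lag:

```python
import math

def correlation_length(acf):
    # First lag at which the autocorrelation drops below exp(-2);
    # None if it never does (mchammer returns np.nan in that case).
    threshold = math.exp(-2)
    for lag, value in enumerate(acf):
        if value < threshold:
            return lag
    return None

# A geometric decay 0.5**k falls below exp(-2) ≈ 0.135 at lag 3
assert correlation_length([0.5 ** k for k in range(10)]) == 3
```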
mchammer.data_analysis.get_error_estimate(data, confidence=0.95)[source]
Returns estimate of standard error $$\mathrm{error}$$ with confidence interval.
$\mathrm{error} = t_\mathrm{factor} * \mathrm{std}(\mathrm{data}) / \sqrt{N_s}$
where $$t_{factor}$$ is the factor corresponding to the confidence interval and $$N_s$$ is the number of independent measurements (with correlation taken into account).
Parameters
data (ndarray) – data series for which to estimate the error
Returns
error estimate
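The formula above is straightforward to sketch. The simplified version below uses the large-sample 95% factor t ≈ 1.96 and takes N_s = len(data), i.e. it assumes the samples are already independent; the real implementation also accounts for correlation when counting independent measurements:

```python
import math
import statistics

def error_estimate(data, t_factor=1.96):
    # error = t_factor * std(data) / sqrt(N_s), with N_s = len(data)
    # (independence of samples assumed in this sketch).
    n_s = len(data)
    return t_factor * statistics.stdev(data) / math.sqrt(n_s)

samples = [0.0, 1.0] * 50                      # std ≈ 0.5, N_s = 100
assert error_estimate([2.0, 2.0, 2.0]) == 0.0  # constant data: zero error
assert error_estimate(samples) < error_estimate(samples[:10])  # error shrinks with N_s
```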
|
|
## Notes on Notation – Variables
Variables have often been seen as the big bugaboo of the middle school math curriculum and beyond. They have become inextricably identified with that 7-letter curse word “algebra”. I submit most teachers have encountered some variation on student Jesse who says in utter frustration: “How can a possibly equal b? a is already equal to a!” Yet before we dismiss Jesse as confused, it would be useful to note that Jesse clearly got what was important in school up till now: that you keep all your letters straight.
The introduction of letters for numbers in school marks the end of the transition of kids from arithmetic to algebra, not the beginning. This transition, from the doing of arithmetic to the thinking about arithmetic (both numbers and operations) as a domain with patterns and regularities, involves many mathematical ideas, and that doesn’t wait till middle school, and it doesn’t wait for formal algebra with formal variables.
And yet the introduction of letters as variables, letters for numbers, does mark a big hurdle for many students. The last shred of pretense that what we’re doing is arithmetic, that we’re out to produce a number, is now out the window.
If we look at the notational conventions for variables, it seems that the most obvious aspect is also the most important: variables look different from numbers. For the ancient Greeks, this was never obvious, for their way of writing numbers re-used their letters! (They used nine letters to indicate 1-9, nine different letters to indicate 10-90, and so on.) For them, letter variables wouldn't have been clearly distinct from numbers. To conclude, as some have done, that this was single-handedly what kept the Greeks from inventing algebra strikes me as rather silly; they could have found any number of small variations that would have dealt with the difficulty. But for us, letters aren't confused with numbers, so hence letters for variables.
However, there is an additional convention that variables in mathematics are always single letters. You often see this in physics, too, so that you see $d = \frac{1}{2}a t ^ 2$ instead of $distance = \frac{1}{2} acceleration \times time^2$. The latter version favors ease of reading over compactness of writing: sometimes I get the impression that the entirety of mathematics still carries around the traces of old economic trade-offs: the Greeks wrote their mathematics with a stick in the sand; much of school work over the last several centuries was done with a marker on a small slate: a board literally made out of slate – easily written on, and easily erased. In those centuries, paper was very expensive, and not wasted on small matters such as calculating sums.
If you look at modern programming languages for computers and websites, every single one allows variables to be more than a single letter. Variables tend to be chosen so that they are meaningful for the reader, so ‘distance’ rather than ‘d’. Of course, in movies it would be much less imposing to have a scientist with wild hair say “energy is mass times the square of the speed of light” instead of $E = mc^2$, but I’d settle gladly for a mathematics that didn’t sound like magical incantations that have to be repeated just right.
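The trade-off is easy to see in code. Both snippets below compute the same free-fall distance from the formula in the text; the second is what most modern programming style guides would favor (the numeric values are just illustrative):

```python
# Compact, single-letter style, as in the physics formula d = (1/2) a t^2:
a, t = 9.8, 3.0
d = 0.5 * a * t ** 2

# Descriptive, multi-letter style favored in programming:
acceleration, time = 9.8, 3.0
distance = 0.5 * acceleration * time ** 2

assert d == distance  # same computation, different readability trade-offs
```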
Einstein on variables
### 3 Responses to Notes on Notation – Variables
1. José says:
You say that:
“And yet the introduction of letters as variables, letters for numbers, does mark a big hurdle for many students. The last shred of pretense that what we’re doing is arithmetic, that we’re out to produce a number, is now out the window.”
How is the transition to variables a hurdle? Maybe a very, very small hurdle, but nothing of great difficulty. If anything, the use of variables makes arithmetic steps much easier. The variables are like a name for a number when the value is not immediately needed (until later). Variables make many things in arithmetic easier to understand.
2. Bert Speelpenning says:
Thanks for your comment. You raise an important issue.
There is a distinction between what we think ought to be hard or easy, on the one hand, and what children can be observed to have difficulty with.
Those are not the same. Many things we think ought to be easy are consistently stumbled over, and many things we think ought to be hard are taken in stride by generation after generation of kids.
As you read more of this blog you’ll see me drag in examples of both kinds. The examples come from fairly extensive personal observations as well as reports from many teachers.
The connection between algebra and the ability to deal with numbers whose value is not immediately needed – that’s a great point. I’m referring to that in the blog as “deferred computation” and I’ve got many entries on just that.
|
|
# Irreducible representation of tensor field
by su-ki
Tags: field, irreducible, representation, tensor
P: 2 In Mark Srednicki's book "Quantum Field Theory", he says that a tensor field $B^{αβ}$ with no particular symmetry can be written as $B^{αβ} = A^{αβ} + S^{αβ} + (1/4) g^{αβ} T(x)$ (Eqn. 33.6), where A is antisymmetric, S is symmetric, and T(x) is the trace of $B^{αβ}$. Is there any reason for the explicit addition of the trace term? Because generally we split things into symmetric and antisymmetric parts, and the trace is included in the symmetric part.
Sci Advisor P: 3,456 If you're talking about the general linear group GL(n), the irreducible representations are the tensors whose indices have been symmetrized in a particular way. When you go to the orthogonal group, there are fewer transformations in the group, and some of these representations are no longer irreducible. The operation of contraction (forming a trace) commutes with the orthogonal transformations.
P: 744
Quote by su-ki In Mark Srednicki's book "Quantum Field Theory" He says that a tensor field $B^{αβ}$ with no particular symmetry can be written as :- $B^{αβ} = A^{αβ} + S^{αβ} + (1/4) g^{αβ} T(x)$ Equn. 33.6 where A - Antisymmetric, S = symmetric and T(x) = trace of $B^{αβ}$ . Is there any reason for explicit addition of trace term ? Coz generally we split things into symmetric and antisymmetric parts and trace is included in symmetric part.
This is true only if the symmetric part is traceless. In general, for any rank-2 tensor, we write
$$B^{ ab } = B^{ (ab) } + B^{ [ab] } .$$
Then we take the symmetric part and decompose it as
$$B^{ (ab) } = \left( B^{ (ab) } - \frac{ 1 }{ 4 } g^{ ab } B \right) + \frac{ 1 }{ 4 } g^{ ab } B .$$
The tensor (call it $S^{ ab }$) in the bracket on the right-hand side is symmetric and traceless. Indeed, the trace is
$$B = \mbox{ Tr } ( B^{ (ab) } ) = g_{ ab } B^{ (ab) } ,$$
and contracting the bracket with $g_{ ab }$ gives $g_{ ab } S^{ ab } = B - \frac{ 1 }{ 4 } g_{ ab } g^{ ab } B = B - B = 0$, since $g_{ ab } g^{ ab } = 4$.
So, our original tensor can now be written as
$$B^{ ab } = A^{ ab } + S^{ ab } + \frac{ 1 }{ 4 } g^{ ab } B ,$$
where $A^{ ab } = - A^{ ba } \equiv B^{ [ab] }$
See posts #24 and 25 in
http://www.physicsforums.com/showthr...=192572&page=2
Sam
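The decomposition above is easy to verify numerically. The sketch below works in four dimensions with the Minkowski metric g = diag(1, −1, −1, −1) (a conventional choice, assumed here), using plain Python lists:

```python
DIM = 4
g = [[0.0] * DIM for _ in range(DIM)]
for i in range(DIM):
    g[i][i] = 1.0 if i == 0 else -1.0   # Minkowski metric (assumed convention)

# An arbitrary tensor B^{ab} with no particular symmetry:
B = [[(a + 1) * (b + 2) + (0.5 if a < b else 0.0) for b in range(DIM)]
     for a in range(DIM)]

A = [[0.5 * (B[a][b] - B[b][a]) for b in range(DIM)] for a in range(DIM)]    # B^{[ab]}
sym = [[0.5 * (B[a][b] + B[b][a]) for b in range(DIM)] for a in range(DIM)]  # B^{(ab)}
trace = sum(g[a][b] * sym[a][b] for a in range(DIM) for b in range(DIM))     # B = g_ab B^(ab)
S = [[sym[a][b] - 0.25 * g[a][b] * trace for b in range(DIM)] for a in range(DIM)]

# The three pieces reassemble to B, and S is traceless:
for a in range(DIM):
    for b in range(DIM):
        assert abs(A[a][b] + S[a][b] + 0.25 * g[a][b] * trace - B[a][b]) < 1e-9
assert abs(sum(g[a][b] * S[a][b] for a in range(DIM) for b in range(DIM))) < 1e-9
```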
P: 2
## Irreducible representation of tensor field
yeah, I just got it, thank you :)
CoCalc Public Files: SD109 - Tutorial.ipynb
Author: Fredrik Strömberg
Description: Tutorial made for the Virtual Global Sage Days 109
# Introduction to Python and SageMath
We will introduce some of the fundamental concepts in Python and SageMath.
## Learning outcomes
1. Introduction to SageMath
2. Learn how to run Sage command line or notebook
3. Basic expressions and variables in Python and SageMath
4. Introduce some basic packages for plotting etc.
5. For the full lessons see https://github.com/fredstro/sage-lesson-nt
## What is SageMath?
SageMath was originally developed by William Stein and first released in 2005. It was initially called SAGE, then Sage, and now SageMath, or Sage for short. The intention behind SageMath is to provide a free, open source alternative to Magma, Maple, Mathematica, MATLAB etc. with an easy to use interface (both for end-users and developers) built on Python (Cython) and which can integrate with other free or commercial packages (installed separately).
The default installation (see the installation guide on the wiki or the official installation page) contains many (over 150) different free packages, including:
• Pari/GP
• GAP
• Singular
• R
• Numpy
• matplotlib
• ...
For a full list see: https://www.sagemath.org/links-components.html
It can be integrated with existing commercial packages like
• Magma
• Mathematica
• Maple
Full documentation is included in the source of SageMath and is available online, together with a multitude of tutorials etc.
See for instance https://www.sagemath.org/tour.html
## Optional packages
Some packages that are not shipped with SageMath are available to install afterwards.
• In the terminal (on Windows, use the "SageMath Shell"):
• sage -i <package_name> - to install optional Sage package
• sage --pip install <py_package_name> - to install extra Python package using pip
• In a SageMath session (on Windows, use "SageMath" or "SageMath Notebook"):
• sage: installed_packages() - to list all installed packages
• sage: optional_packages() - to list installed and available optional packages
For example,
- install additional GAP packages by running sage -i gap_packages
- install JupyterLab by running sage --pip install jupyterlab
- install Pandas by running sage --pip install pandas
## Running SageMath
For the Sage REPL (read-eval-print loop) or command-line interactive use:
• sage to run Sage
• sage --help to see all options, e.g. --python, --pip, etc.
For the Jupyter Notebook interface, run one of:
• sage -n jupyter --notebook-dir=<dir>
• sage --notebook=jupyter --notebook-dir=<dir>
• sage --jupyter notebook --notebook-dir=<dir>
For the JupyterLab interface (if installed -- see above), run one of:
• sage -n jupyterlab --notebook-dir=<dir>
• sage --notebook=jupyterlab --notebook-dir=<dir>
• sage --jupyter lab --notebook-dir=<dir>
Online
## Getting help with SageMath
• <function_name>? - to read documentation (docstring)
• <function_name>?? - to read the source (if available)
• <object_name>.<tab> - use tab completion to see available properties
• Help button in the toolbar links to the online documentation
Example: Try using these features for yourself below. Run a code cell by first selecting it and then either
• Press the Run button in the toolbar above
• Use the keyboard shortcut Ctrl+Enter or Shift+Enter
In [ ]:
gcd?
In [ ]:
x=1
In [ ]:
x.
## Programming in SageMath
Check that your kernel is set to SageMath (e.g. SageMath 9.0) in the top right of the toolbar.
You can change your kernel using the Kernel button on the toolbar to set the default to Python (2.7 or 3.x depending on your version of SageMath).
Since version 9.0 released in January 2020, SageMath is using Python 3.
Since the main functionality of SageMath is written in Python it means that most of the programming can be done using Python. There are however some subtle differences.
### Variable types in Python
• Numerical types: int, float, complex
• Strings: str
• Lists: list
• Tuple: tuple
• Dictionary: dict
• Set: set
Example: Run the cell below to explore basic data types in Python
In [ ]:
%%python3
# We run Jupyter cells in SageMath mode by default but can run individual cells in Python using Jupyter magics
# As Python 2 is still the default for older versions of SageMath it is better to be explicit about Python version.
x = 2
print(x, type(x)) # int
# Combine multiple statements with ';' -- avoid this in practice since it reduces readability
y = 2.0; print(y, type(y)) # float
r = 3/4; print(r, type(r)) # float
z = 1 + 1j; print(z, type(z)) # complex
s = "hello"; print(s, type(s)) # string
l = [1, 2, 2]; print(l, type(l)) # list
t = (1, 2, 2); print(t, type(t)) # tuple
d = {1: 'a', 'b': 2}; print(d, type(d)) # dictionary
A = set([1, 2, 3, 2]); print(A, type(A)) # set
Note that in Python 2, dividing two int variables performs integer division, flooring the result:
In [ ]:
%%python2
r = 3/4; print(r, type(r))
r = -3/4; print(r, type(r))
### Variable types in SageMath
In SageMath the defaults for many numeric types are overridden. For example:
• Integers are by default cast to SageMath's Integer type, which behaves differently from Python's int
• Instead of float, SageMath uses RealLiteral and Rational (where appropriate)
In [ ]:
x = 2 # Integer
y = 2.0 # Real/floating point numbers with RealLiteral
r = 3/4 # Rational
z_num = 1 + 1j # Numerical complex numbers with ComplexNumber
z_sym = 1 + 1*I # Symbolic complex numbers with Expression
# Reduce writing and improve readability by iterating over a list using a for loop
for variable in [x, y, r, z_num, z_sym]:
print(variable, type(variable))
# Sets
Sets are useful for making collections of unique elements; they are unordered and can be defined using either braces {} or the set() constructor function.
In [ ]:
X = set([1, 2, 3, 4, 4, 4, 5])
Y = {5, 3, 2, 4, 2}
print(X)
print(Y)
print(list(X))
The set type features many useful methods such as unions and intersections. See the documentation here.
In [ ]:
A = set([1, 2, 3, 4])
B = set(['a', 'b', 'c', 1, 2, 8])
A.intersection(B)
Sage has some additional useful built-in functions:
In [ ]:
list(cartesian_product([A,B]))
In [ ]:
list(powerset(B))
### Precision of variables in SageMath
SageMath stores variables to higher levels of precision than Python
• The RR keyword denotes the Real Field with 53 bits of precision (double precision)
• The CC keyword denotes the Complex Field with 53 bits of precision
• We can construct fields with higher precision using RealField(prec) and ComplexField(prec)
In [ ]:
for field in [RR, RealField(106), CC, ComplexField(106)]:
print(field)
We can construct high-precision complex numbers as field elements as follows:
In [ ]:
my_complex_field = ComplexField(106)
z1 = my_complex_field(1, 10)
z2 = my_complex_field(5, 5)
for num in [z1, z2, z1 + z2]:
print(num)
There are also builtin constants that can be given to arbitrary precision
In [ ]:
print(RR.pi())
print(RealField(106).pi())
high_prec_one = RealField(1000)(1)
print(high_prec_one)
print(exp(high_prec_one)) # the builtin exp function
print(high_prec_one.exp()) # the exp method of the RealNumber class
# Objects, Parents, Elements and categories
In Python and even more so in SageMath everything is an object and has properties and methods.
In SageMath most (every?) object is either an Element or a Parent
In [ ]:
F = RealField(53) # Parent
x = F(2) # Element
print(type(F))
print(type(x))
print(F.is_parent_of(x))  # True
print(x in F)  # True
In [ ]:
x.parent() # Returns F
In [ ]:
x.category() # The category of x, a category of elements
In [ ]:
F.category() # The category of F
In SageMath there is a mechanism called "coercion": operations between elements of different parents are performed in a common parent.
In [ ]:
x = RR(1); y = CC(5)
print(type(x + y))  # the sum is computed in the common parent CC
CC.coerce_map_from(RR)
### String formatting
In previous versions of Python there were more types of strings but in current Python there are basically only strings str and bytestrings bytes (which we do not discuss here)
We can control how strings are formatted when printing by using either the format method or by using formatted strings, which are strings with an f character before the first quotation mark. Please see the examples below, and follow this link for a reference on string formatting.
In [ ]:
x = RR(2) # The real number 2, up to 53 bits of precision
print("x = {0}".format(x)) # format method
print(f"x = {x}") # f-strings
print(f"exp(x) = {x.exp()}") # Can call functions in f-strings
print(f"π is about {float(RR.pi()):0.5f}") # As floating point number with 5 decimals
Exercise 1. Print $e^{\pi}$ as a floating point number to 225 decimal places.
Python solution: Use the decimal library (documentation)
In [ ]:
%%python3
import decimal, math
import numpy as np
decimal.getcontext().prec = 225
print(f"{decimal.Decimal(math.pi)}\n{decimal.Decimal(np.pi)}")
The cell above shows that there is not enough precision in the stored values of pi in the math and numpy libraries for our application, so we must use an algorithm to calculate it to a higher precision. We use the Gauss-Legendre algorithm for its fast convergence rate.
In [ ]:
%%python3
from decimal import Decimal, getcontext
import numpy as np
getcontext().prec = 227
num_iter = 8
one = Decimal(1)
two = Decimal(2)
four = Decimal(4)
def gauss_legendre_algorithm(num_iter):
r"""Implement the Gauss-Legendre algorithm for the specified number of iterations
Return an np.array of approximations to pi of increasing accuracy"""
a, b, t = [one], [two.sqrt() / two], [one / four]
approx = list()
for i in range(num_iter):
a.append((a[i] + b[i]) / two)
b.append((a[i] * b[i]).sqrt())
t.append(t[i] - getcontext().power(two, Decimal(i)) * (a[i] - a[i+1]) * (a[i] - a[i+1]))
approx.append((a[i+1] + b[i+1]) * (a[i+1] + b[i+1]) / (four * t[i+1]))
return np.array(approx)
approx = gauss_legendre_algorithm(num_iter)
print(approx == approx[-1]) # Stabilised to 225 decimal places precision after 7 iterations
precise_pi = approx[-1]
result = precise_pi.exp()
print(result)
print(f"Number of decimal places = {len(str(result).split('.')[-1])}") # Count digits after decimal point
SageMath solution:
• Note that this can be calculated using many fewer lines of code since there are builtin SageMath algorithms running behind the scenes.
• Note also that we must convert from bit (binary) precision to precision in decimal places by multiplying by $\log_{2}(10) \approx 3.32$
In [ ]:
required_bit_prec = ceil(228 * RR(10).log2()) # Calculate required binary precision from decimal (round up to an integer)
exp_pi = RealField(required_bit_prec).pi().exp()
print(exp_pi)
print(f"Number of decimal places = {len(str(exp_pi).split('.')[-1])}") # Count digits after decimal point
## Functions
• Function names should be descriptive and use "snake_case" (this is a Python convention)
• Variable names in Python should also be lower, snake_case
• In SageMath we also use mathematical conventions even if they break this, e.g. Ei for the exponential integral function
• For full document about coding conventions see: http://doc.sagemath.org/html/en/developer/coding_basics.html
In [ ]:
# A first function
def is_zero_mod_3(x):
return x % 3 == 0 # The % is the Python modulo operation: remainder in integer division
In [ ]:
is_zero_mod_3(5)
In [ ]:
# Does this make sense?
is_zero_mod_3(5.4)
In [ ]:
is_zero_mod_3(1+1j) # Raises type error
Sage has an alternative modulo reduction function
In [ ]:
Mod(5, 3)
In [ ]:
type(Mod(5, 3))
In [ ]:
# Add handling of input
def is_zero_mod_3(x):
if not isinstance(x, int):
raise ValueError(f"Received input of type {type(x)}. This function needs an integer!")
return x % 3 == 0
In [ ]:
# Better and more informative error is raised
is_zero_mod_3(1 + 1j)
In [ ]:
is_zero_mod_3(int(6))
In [ ]:
# However, this doesn't work as intended
is_zero_mod_3(6)
The issue is that we checked for type int but in a SageMath environment the default type for integers is Integer. Need to check for both types!
In [ ]:
# Add handling of input
def is_zero_mod_3(x):
if not isinstance(x, (int, Integer)): # Multiple types should be given as a tuple
raise ValueError(f"Received input of type {type(x)}. This function needs an integer!")
return x % 3 == 0
In [ ]:
is_zero_mod_3(6)
## Docstrings
When writing our own functions we can help future users check the expected inputs in advance by writing a docstring (documentation string).
In [ ]:
is_zero_mod_3?
In [ ]:
# Let's add a docstring
def is_zero_mod_3(x):
r"""
Return True if input is congruent to 0 mod 3, otherwise return False
INPUT:
- x -- integer
OUTPUT: A boolean describing whether the input is congruent to 0 mod 3 or not.
"""
if not isinstance(x, (Integer, int)):
raise ValueError(f"Received input of type {type(x)}. This function needs an integer!")
return x % 3 == 0
In [ ]:
# Now a potential user knows what input and output to expect.
is_zero_mod_3?
In [ ]:
is_zero_mod_3("test")
## Variables in functions:
In [ ]:
# Immutable values are effectively passed by value into functions: rebinding
# the parameter inside the function does not change the caller's variable.
def increment(y):
    y = y + 1
    print(y)

x = 1
print(f"x = {x}")
increment(x)  # prints 2 inside the function
print(x)      # still 1: x is unchanged outside
In [ ]:
# Mutable objects such as dictionaries get passed by reference: what happens
# in the function also happens outside.
def bump(d):
    d['a'] = d.get('a') + 1

d = {'a': 1}
print(d['a'])                 # Get d['a'] if d has 'a' as a key, raise an error if not
print(d.get('b', 'default'))  # Get d['b'] if 'b' is a key, otherwise return 'default' (or None if no default value is given)
bump(d)
print(f"d = {d}")             # the mutation made inside bump is visible here
In [ ]:
[1] + 1 # Addition of integers and lists in this way is not supported
### Type hints
In Python 3 it is also possible to add type hints (not enforced by Python).
In [ ]:
# Let's add a docstring
def is_zero_mod_3(x: int) -> bool:
r"""
Return True if input is congruent to 0 mod 3, otherwise return False
INPUT:
- x -- integer
OUTPUT: A boolean describing whether the input is congruent to 0 mod 3 or not.
"""
if not isinstance(x, (Integer, int)):
raise ValueError(f"Received input of type {type(x)}. This function needs an integer!")
return x % 3 == 0
Exercise Write a function is_square_residue with the following specifications:
1. Takes as input an integer $x$ and a positive integer $n$
2. Returns True if $x$ is congruent to a square modulo $n$ and otherwise False.
3. Handles errors in input with sensible error messages.
4. Includes a docstring which describes the function.
In [ ]:
def is_square_residue(x, n: int) -> bool:
r"""
Return True if $x$ is congruent to a squared integer modulo $n$, otherwise return False
INPUTS:
- x -- an integer
- n -- a positive integer
OUTPUT: A boolean describing whether there are any solutions $a$ to $x \equiv a^{2} \mod n$ for inputs x and n
"""
if not (isinstance(x, (Integer, int)) and isinstance(n, (Integer, int)) and n > 0):
raise ValueError(f"Received inputs (x, n) = ({x}, {n}) of type ({type(x)}, {type(n)}). Inputs should be integers, with n positive!")
return (x % n) in set(a**2 for a in range(n))
In [ ]:
n = 8
for x in range(n):
print(x, is_square_residue(x, n))
## Functions and objects
Functions are also objects and can be introspected via their properties. Class methods and imports etc. can be modified at runtime (monkey-patching)
In [ ]:
category(is_zero_mod_3)
In [ ]:
# some property...
is_zero_mod_3.__code__.co_varnames
Example of monkey patching
In [ ]:
%%python2
import six
class TestA(object):
def foo(self,x):
return x
if six.PY3:
TestA.foo = lambda self,x : x+1
print(TestA().foo(1))
In [ ]:
%%python3
import six
class TestA(object):
def foo(self,x):
return x
if six.PY3:
TestA.foo = lambda self,x : x+1
print(TestA().foo(1))
## Python Control Structures
Standard (used in most programming languages):
• Iteration with for and while loops
• If-then-else statements
More Python specific:
• Generator expressions
• List comprehensions
In [ ]:
# Iterate over a range of integers using the range function. Note that range starts from 0.
for i in range(5):
print(i)
The range function returns a lazy sequence object: like a generator expression, it does not actually allocate its elements until they are needed.
In [ ]:
range(5, 12) # Specify start and end points
We can cast a range to a list to see all of its elements. Note that range starts at the left endpoint and stops at the right endpoint without including it!
In [ ]:
list(range(5, 12))
Calling list(range(10^(10^10))) would run out of memory, but iterating over it is fine (although probably won't finish).
If evaluating the cell below, please call keyboard interrupt or select Kernel -> Interrupt from the toolbar above.
In [ ]:
for i in range(10^(10^10)):
pass
### Example: for loops and list comprehensions
We can evaluate $\zeta(2)$, the zeta function at $s=2$, using a loop, and compare to the known value $\frac{\pi^2}{6}$.
In [ ]:
# Calculate the partial sum of the first 100 terms of zeta(2)
result = 0
for n in range(1, 101):
    result += n**(-2)
print(result)
We may also use a list comprehension, or directly use the generator expression to compute the partial sum
In [ ]:
sum([n**(-2) for n in range(1, 100)]) # List comprehension
In [ ]:
sum(n**(-2) for n in range(1, 100)) # Generator expression - most memory efficient
### Plotting in Sagemath with the plot function
The plot() function in SageMath takes as its first argument a function or list of functions. In the case that the passed functions take a single argument, the next two arguments, start and end, can be used to define the range over which to evaluate and plot them.
In [ ]:
plot?
In [ ]:
# A first plot of zeta along the horizontal line Im(z) = 1, 2 < Re(z) < 10
p = plot(lambda x: zeta(CC(x, 1)).real(), 2, 10)
In [ ]:
p
Most objects can be passed to the latex() function, which "pretty-prints" them in a format suitable for inclusion in a LaTeX document.
In [ ]:
latex(p)
## Symbolic expressions in SageMath
When starting Sage the name 'x' is defined as a symbolic variable and other variables have to be declared using the var('x, y, z, ...') command. Symbolic expressions can be treated in various ways:
• differentiation
• simplifications
When the notebook is started we have $x$ as a variable:
In [ ]:
x
In [ ]:
var('x, y, z')
In [ ]:
y*z
### Differentiation
In [ ]:
g = 1 / (x^2 + y^2)
print(f"{'g':20s} = {g}")
print(f"{'dg/dx':20s} = {diff(g, x)}")
print(f"{'d^2g/dxdy':20s} = {diff(g, x, y)}")
In [ ]:
g.differentiate(x, 2)
### Simplification
In [ ]:
f = x * y / (x^2 + y^2 )
z = diff(f, x, y)
z
In [ ]:
z.simplify_full()
## Substitution
In [ ]:
z1 = z.substitute(x=2*y^2 + 1); z1
In [ ]:
z1.simplify_full()
Exercise Determine the value of $\frac{\partial f}{\partial u}(0,1,1)$ for $f(s,t,u)=\frac{s+tu}{\sqrt{s^2+t^2+u^2}}$. Is it
• (a) ${\sqrt{2}}$
• (b) $\frac{1}{\sqrt{2}}$
• (c) $\frac{1}{2\sqrt{2}}$
• (d) $\frac{1}{5\sqrt{2}}$
• (e) None of the above?
In [ ]:
var('s,t,u')
f = (s+t*u)/sqrt(s**2+t**2+u**2)
diff(f,u).substitute(s=0,t=1,u=1)
We can now combine everything and compute the Bernoulli numbers. Recall that they can be defined by the generating series $\frac{x}{e^x - 1} = \sum_{m \ge 0} \frac{B_m x^m}{m!}$
In [ ]:
F = x / (e^x - 1); F # The generating function
Try to find the first Taylor coefficient:
In [ ]:
g = derivative(F, x, 1); g # another name for .diff()
In [ ]:
print(g.simplify_full())
# yes - there are other types of simplifications.
g.substitute(x=0)
We can't just divide 0 / 0. We need L'Hopital's rule!
In [ ]:
# Differentiate the numerator and denominator and divide again:
g.numerator().diff(x) / g.denominator().diff(x)
Still of the form 0/0; we need one more derivative!
In [ ]:
# The second parameter gives the number of times we differentiate
p = g.numerator().diff(x, 2) / g.denominator().diff(x, 2)
print(p)
p = p.simplify_full()
print(p)
p.substitute(x=0)
In [ ]:
bernoulli(1)
So the first Bernoulli number $B_1=-\frac{1}{2}$. This method is a bit cumbersome but fortunately there is a builtin command in Sage for Taylor expansions
In [ ]:
F.taylor(x, 0, 10)
We can convert this to a polynomial over $\mathbb{Q}$:
In [ ]:
p = F.taylor(x, 0, 10).polynomial(QQ)
print(type(p))
print(p)
print(latex(p))
For a polynomial we can add a big-Oh term:
In [ ]:
q = p.add_bigoh(12); q
In [ ]:
print(q.parent())
type(q)
In [ ]:
x = q.parent().gen()
x
In [ ]:
q + (x + 1).add_bigoh(8)
We can get coefficients of certain terms in Taylor expansions
In [ ]:
F.taylor(x, 0, 10).coefficient(x^4)
We can now write a function that returns the j-th Bernoulli number
In [ ]:
def B(j):
    x = var('x')  # restore the symbolic x (it was rebound to a power series generator above)
    F = x / (e^x - 1)
    return F.taylor(x, 0, j).coefficient(x^j) * factorial(j)
[B(j) for j in range(1, 10)]
We can also work with polynomials in many variables
In [ ]:
F=GF(3)['x, y']
F
In [ ]:
x,y=F.gens(); x+4*y
In [ ]:
f=x*y+2*x**2
g=x**2+y*x**3
In [ ]:
f.gcd(g)
## Linear algebra can be done using "native" Sage or numpy
In [ ]:
m=Matrix(ZZ,[[1,1],[2,3]]); m
In [ ]:
m.characteristic_polynomial()
In [ ]:
m.eigenvalues() ## Root finding in the above
### Algebraic numbers
What is the "?" in the printed eigenvalues?
In [ ]:
print(m.eigenvalues()[0].parent())
print(type(m.eigenvalues()[0]))
Can be evaluated to arbitrary precision
In [ ]:
m.eigenvalues()[0].n(1000)
Can use Numpy
In [ ]:
m.numpy()
In [ ]:
import numpy
numpy.linalg.eigvals(m)
Can work over finite fields
In [ ]:
m=Matrix(GF(5),[[1,1],[2,3]]); m
In [ ]:
m.eigenvalues() # "exact"
In [ ]:
m.eigenvalues()[0].parent()
Does it work with interval arithmetic?
In [ ]:
RIF = RealIntervalField()
h = 1e-10
x = RIF(1 - h, 1 + h)  # an interval element, not a tuple
x
In [ ]:
m=Matrix(RealIntervalField(53),[[RIF(1-h,1+h),RIF(1-h,1+h)],[RIF(2-h,2+h),RIF(3-h,3+h)]]); m
In [ ]:
m.eigenvalues() # Not implemented!
Can use Newton - Raphson to find the roots:
In [ ]:
p=m.charpoly()
p.newton_raphson(10,0.2)
In [ ]:
p.newton_raphson(10,3)
In [ ]:
_[-1].lower(),_[-1].upper()
# Find side of an equilateral triangle inscribed in a rhombus
The lengths of the diagonals of a rhombus are 6 and 8. An equilateral triangle inscribed in this rhombus has one vertex at an end-point of the shorter diagonal and one side parallel to the longer diagonal. Determine the length of a side of this triangle. Express your answer in the form $k\left(4\sqrt{3} - 3\right)$, where $k$ is a vulgar fraction.
• In what form should the answer be expressed? $43-3=40$, so this does not make sense. – chaosflaws Sep 21 '14 at 12:20
• Sorry that should read k(4 root of 3 minus 3) where k is a vulgar fraction.Thanks – megan Sep 21 '14 at 12:32
• Have a look at my edited question to get a feeling for how LaTeX works. ;) (as soon as my edit becomes peer-reviewed) – chaosflaws Sep 21 '14 at 12:35
• Gosh, Thanks so much to Jack D'Aurizio and chaosflaws. I am happy and quite depressed at what I don't know!But thanks Guys ;) – megan Sep 21 '14 at 14:32
• Have another look at @N. F. Taussig's answer. It probably is the most straightforward answer. – chaosflaws Sep 21 '14 at 23:17
Given the picture, let $x$ be the side of the equilateral triangle. We have: $$6 = \frac{x}{2}\cot\arctan\frac{4}{3}+\frac{x}{2}\cot\frac{\pi}{6},$$ or: $$6 = \frac{3x}{8}+\frac{\sqrt{3}\,x}{2},$$ so: $$48 = x(4\sqrt{3}+3)$$ and: $$48(4\sqrt{3}-3) = 39 x,$$ so $k=\color{red}{\frac{16}{13}}$.
• A way to describe the second equality without explicit trig is to note that 1) the second term is the altitude of the equilateral triangle, 2) the first term is the width of the adjacent acute triangle, which has proportions $6:8$. – Semiclassical Sep 21 '14 at 14:23
Hopefully one can see everything in this drawing...
You know that all the green and all the blue lines are equal. Additionally, you know that the top green line is parallel to one of the diagonals and that the diagonals are perpendicular. Therefore, the top green line is perpendicular to the short diagonal as well.
From the Pythagorean theorem it follows that $$b^2-\left(\frac{b}{2}\right)^2=\left(n-x\right)^2$$ where $n$ is the short diagonal, $b$ is the side length we are looking for, and $x$ is the orange bit. This equation still has an $x$ in it; we need it in terms of $n$ and $m$.
Using the Intercept theorem, one can infer $$\frac{x}{\frac{b}{2}}=\frac{\frac{n}{2}}{\frac{m}{2}} \implies x=\frac{nb}{2m}$$ where $m$ denotes the long diagonal.
Algebra: \begin{align*}\frac{3}{4}b^2=\left(n-x\right)^2&=\left(n-\frac{nb}{2m}\right)^2\\ \frac{\sqrt{3}}{2}b&=n-\frac{nb}{2m}\\ \frac{\sqrt{3}}{2}b+\frac{nb}{2m}&=n\\ b\left(\frac{\sqrt{3}}{2}+\frac{n}{2m}\right)&=n\\ b=\frac{n}{\frac{\sqrt{3}}{2}+\frac{n}{2m}}&=\frac{2mn}{\sqrt{3}m+n}\end{align*}
Substituting the given values: $$b=\frac{96}{8\sqrt{3}+6}=\frac{48}{4\sqrt{3}+3}=\frac{48\left(4\sqrt{3}-3\right)}{\left(4\sqrt{3}+3\right)\left(4\sqrt{3}-3\right)}=\frac{48\left(4\sqrt{3}-3\right)}{\left(4\sqrt{3}\right)^2-3^2}=\frac{16}{13}\left(4\sqrt{3}-3\right)$$
• It should be $\frac{16}{13}$ in the very last line, since $(4\sqrt{3})^2-3^2=39.$ – Jack D'Aurizio Sep 21 '14 at 14:11
• Thanks. Neat solution over there, although I am not sure whether the OP can use trig ;) – chaosflaws Sep 21 '14 at 14:20
Here is a coordinate geometry solution:
The diagonals of a rhombus are perpendicular bisectors of each other. Suppose that the diagonals intersect at the origin of the coordinate plane, that the vertices at the ends of the short diagonal are $A(-3, 0)$ and $C(3, 0)$, that the vertices at the ends of the long diagonal are $B(0, 4)$ and $D(0, -4)$, and that one vertex of the inscribed equilateral triangle is the vertex $C$ of the rhombus.
The other vertices of the equilateral triangle must lie on sides $\overline{AB}$ and $\overline{AD}$ of the rhombus. Let $E$ be the vertex of the equilateral triangle on side $\overline{AD}$ and $F$ be the vertex of the equilateral triangle that lies on side $\overline{AB}$. Since the line containing an altitude of an equilateral triangle is the perpendicular bisector of the base, points $E$ and $F$ are equidistant from the $x$-axis. Thus, if the coordinates of point $F$ are $(a, b)$, then the coordinates of point $E$ are $(a, -b)$. If $s$ is the length of a side of the equilateral triangle, then $s = b - (-b) = 2b$.
In an equilateral triangle of side length $s$, the length of an altitude is $$\frac{s\sqrt{3}}{2}$$
Since $a < 0$, the length of the altitude is
$$3 - a = \frac{s\sqrt{3}}{2}$$
Since $s = 2b$, we obtain
$$3 - a = b\sqrt{3}$$
Solving for $a$ yields
$$3 - b\sqrt{3} = a$$
The equation of $\overleftrightarrow{AB}$ is
$$y = \frac{4}{3}x + 4$$
Hence, at point $F(a, b)$,
\begin{align*} b & = \frac{4}{3}(3 - b\sqrt{3}) + 4\\ 3b & = 4(3 - b\sqrt{3}) + 12\\ 3b & = 12 - 4b\sqrt{3} + 12\\ 3b + 4b\sqrt{3} & = 24\\ b(3 + 4\sqrt{3}) & = 24\\ b & = \frac{24}{3 + 4\sqrt{3}}\\ b & = \frac{24}{3 + 4\sqrt{3}} \cdot \frac{3 - 4\sqrt{3}}{3 - 4\sqrt{3}}\\ b & = \frac{24(3 - 4\sqrt{3})}{9 - 48}\\ b & = -\frac{24}{39}(3 - 4\sqrt{3})\\ b & = -\frac{8}{13}(3 - 4\sqrt{3})\\ b & = \frac{8}{13}(4\sqrt{3} - 3) \end{align*}
Hence,
$$s = 2b = \frac{16}{13}(4\sqrt{3} - 3)$$
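A quick numeric sanity check of this result is easy to run; the sketch below rebuilds the coordinates of the setup above and verifies that $F$ lies on line $AB$ and that the triangle is equilateral:

```python
from math import sqrt, isclose

# Claimed side length s = (16/13)(4*sqrt(3) - 3)
s = 16 / 13 * (4 * sqrt(3) - 3)
b = s / 2                 # y-coordinate of F (half the vertical side EF)
a = 3 - b * sqrt(3)       # x-coordinate of F, from altitude = 3 - a = s*sqrt(3)/2

# F must lie on line AB: y = (4/3) x + 4
assert isclose(b, 4 / 3 * a + 4)

# With E = (a, -b) and C = (3, 0), sides CE and EF must be equal
CE = sqrt((3 - a) ** 2 + b ** 2)
EF = 2 * b
assert isclose(CE, EF)
print(f"s = {s:.6f}")
```

Both checks pass, confirming $s = \frac{16}{13}(4\sqrt{3}-3) \approx 4.8347$.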
# Breaking RSA if small subset invertible
I am trying to solve a problem which states that one can invert RSA if a small subset of the cipher text are invertible, the problem is as follows:
Suppose we are given a function that can invert RSA encryption, i.e. given $C = M^e \bmod N$ it can compute $M$, but only for a small subset (typically 1%) of the ciphertext space. Show that, with good probability, any ciphertext generated by this RSA instance can be inverted.
A hint was given with the question: $M_1^e \cdot M_2^e \equiv (M_1 M_2)^e \pmod{N}$, suggesting a direction that uses this property.
• This question may be better suited for cs.stackexchange.com , but you can probably get an answer here too. The point is that you want to show you can break the cipher QUICKLY with high probability given the assumptions. RSA can be broken easily if you spend enough time to factor the public key N. It just may take a really, really long time. Nov 18 '14 at 22:48
• ^^ +1 Specifically, consider how the data you are given collapses the tree of possible prime factors and thus decreases the number of things you need to check. Nov 18 '14 at 22:56
• In the above question I do not know the structure or any other property of the small ciphertext space that can be inverted, I just know that there exists an adversary that can invert cipher texts in a small subspace. I do not understand how this information can help mr reduce the number of prime factors to search for. Nov 18 '14 at 23:02
• Is there a reason you deleted your question? Nov 19 '14 at 19:51
Let $\mathcal{K}$ be your pairs of known ciphertexts and plaintexts (the ~1% you mention in your question). You know $N$ and $e$ and receive $C = M^e$. You want to compute $M$.
Algorithm:
1. If $(C,\color{gray}{M}) \in \mathcal{K}$, decode to $M$.
2. Else, pick an arbitrary $(C_1,M_1) \in \mathcal{K}$ and compute
$$C_2 = C_1 C = M_1^e M^e = (M_1 M)^e \pmod{N}.$$
If $(C_2,\color{gray}{M_2}) \in \mathcal{K}$, decode to $M_2 M_1^{-1} \pmod{N}$.
3. Else, repeat step 2 for a different pair.
Experimentally, this appears to work. I haven't attempted to work out the probability of failure.
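The algorithm can be sketched in a few lines of Python. The toy parameters mirror the small example mentioned in the comments and are purely illustrative (real RSA moduli are far larger); the dictionary `known` plays the role of $\mathcal{K}$:

```python
import random
from math import gcd

# Toy RSA instance (illustrative parameters only)
p, q = 1237, 2237
N = p * q
e = 65537

# Simulate the partial inversion capability: ~1% of (ciphertext, plaintext) pairs
known = {}
rng = random.Random(0)
while len(known) < N // 100:
    M = rng.randrange(2, N)
    if gcd(M, N) == 1:               # keep only invertible plaintexts
        known[pow(M, e, N)] = M

def invert(C):
    """Recover M from C = M^e mod N using only the known pairs."""
    if C in known:
        return known[C]
    for C1, M1 in known.items():
        C2 = (C1 * C) % N            # C2 = (M1 * M)^e mod N
        if C2 in known:
            return (known[C2] * pow(M1, -1, N)) % N
    return None

M = 123456
assert invert(pow(M, e, N)) == M
```

Each blinding attempt lands in `known` with probability about $|\mathcal{K}|/N \approx 1\%$, so with tens of thousands of pairs available the overall failure probability is negligible.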
• In the above solution there is a very good possibility that even if you repeat step 2 for all the possible $(C_1,M_1) \in K$ you might not get a $C_2 \in K$, this is because $K$ is a very small subspace of the overall space. Nov 18 '14 at 23:36
• Hm... my intuition was that on each step you had about a $|\mathcal{K}|/N$ chance of getting a hit, which adds up quick enough. Nov 18 '14 at 23:51
• @Akshay: I just tried it out experimentally (with parameters $p=1237$, $q=2237$, $N=2767169$, $e=65537$, $|\mathcal{K}|=27671$, and a cutoff of 500 iterations). The algorithm worked fine. Nov 19 '14 at 1:59
• these are quite small numbers for RSA primes, typically you have primes of 512 bit length each. The probability K/N for such large primes is very low. Nov 19 '14 at 7:51
• $(0.01*2^{512})/2^{512} = (0.01*2767169)/2767169$ Nov 19 '14 at 15:05
# Determine if a person differs from typical performance
I have a database of workers, customers, and jobs, and I want to analyze my data to see if a given worker is performing well above or below the mean. I've come up with something of a solution, and I'm interested in hearing if there are any flaws in my current approach.
My goal is to see if a given worker has a lower than average score when it comes to converting first-time jobs into recurring jobs.
I do this by taking all first-time jobs that a given worker has been assigned to, and I then check to see if any subsequent jobs exist under the same customer. If there were subsequent jobs the worker gets a 1, otherwise they get a 0. This gives me something that looks like this:
Worker 1 [48 total records (first-time jobs)]
0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1,
--
Worker 2 [56 total records (first-time jobs)]
1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1,
I then take this data and calculate the mean. I do this by counting the total number of first-time jobs in my system (2,925), as well as the sum of the 1's and 0's. This gives me a mean of 0.38 (so 38% of all first-time jobs typically become recurring jobs). I then calculate the standard deviation, which in this case is 0.13.
I then look at each worker who has completed a minimum of 15 first-time jobs. This is so that I only analyze workers with a sufficiently large sample size, in order to increase confidence in the results.
I then come up with a mean score for each of these workers (the sum of the 1's and 0's, divided by the total number of first-time jobs). Finally, I convert this into a z score, which I do by subtracting the mean (for all the data) from the worker's mean score. I then divide the result by the standard deviation to obtain the individual z score. Finally, I put the results in a bar chart, which looks like this:
The dark bars are for any result with a z-score above 1 or below -1.
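In code, one reading of this procedure looks like the sketch below; the worker records are made-up data, and taking the standard deviation across per-worker means is an assumption about the ambiguous step:

```python
import statistics

# Hypothetical records: 1 means a first-time job became recurring, 0 means it did not
workers = {
    "Worker 1": [0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0],
    "Worker 2": [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
    "Worker 3": [0, 0, 0, 1, 0],  # below the 15-record threshold, excluded
}

MIN_RECORDS = 15

# Overall conversion rate across every first-time job
all_jobs = [x for records in workers.values() for x in records]
overall_mean = statistics.mean(all_jobs)

# Spread taken across per-worker means (an assumed reading of the procedure)
worker_means = [statistics.mean(r) for r in workers.values() if len(r) >= MIN_RECORDS]
sd = statistics.stdev(worker_means)

z_scores = {}
for name, records in workers.items():
    if len(records) < MIN_RECORDS:
        continue
    z_scores[name] = (statistics.mean(records) - overall_mean) / sd

for name, z in z_scores.items():
    print(f"{name}: z = {z:+.2f}")
```

Workers under the threshold simply drop out of the chart, exactly as described above.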
My questions are as follows:
1. Is this the correct approach, given my goals?
2. Does it make sense to limit this analysis to workers with a minimum of 15 records? The lower the threshold, the sooner I can perform this valuable analysis, but I want to make sure I'm using a large enough sample size to avoid problems.
3. If 15 is a good minimum, should I also use that as the maximum? For example, if Worker 1 has 15 records, and Worker 2 has 35 records, should I only analyze the last 15 of Worker 2's records? Or is it better to include all the data available?
4. Finally, what z-scores should I consider significant? Right now I am focusing on anything greater than 1, but is that too low?
I greatly appreciate any input. Thank you.
• How are jobs assigned to workers? That is, what if Worker 6 in your graph does slightly different work from the rest and ends up getting assigned customers who are much less likely to repeat? The old "all other things being equal... but are they actually equal?" – Wayne Sep 10 '12 at 20:54
• Yes, they are equal. I didn't include it in the above description, but I strip out any job types that are unlikely to recur, and only consider jobs that are equivalent. – Jeremy Sep 10 '12 at 20:56
1. Using a Z-score suggests that you know the standard deviation of the population in question. Thus, if your question is whether a given worker is above average relative to the workers currently employed by you, then yes, it is appropriate. If, however, you are interested in the population of all possible workers, of which your current workers are only a sample, then Z is not appropriate. In that case you would want to use the t-statistic.
2. and 3. It seems that both of these concerns could be handled by computing weighted averages. By this I mean use all of the scores but weight them by the number of observations for each employee.
4. What you should consider significant is highly idiosyncratic to a given situation. How will this information be used? What are the consequences of a Type I (false positive) or Type II (false negative) error? In basic behavioural research, .05 is considered a reasonable probability of a false positive to accept (this is known as the alpha level). In certain medical research, much lower alpha values are justifiably demanded. Once you've decided what is appropriate, you can find critical values in a table easily enough, or get them from most statistical software like R very easily.
• Thanks for the response. I'm actually only doing this analysis on my workforce, so it's not a sample of a larger population. As far as #2 goes, how would I compute weighted averages in this case? Is there a formula available? – Jeremy Sep 10 '12 at 22:29
• This link explains the calculation of weighted averages. The weights in your case are the number of observations for each employee. This ensures that values from employees who have done many more jobs influence the average more and vice versa. – Marcus Morrisey Sep 12 '12 at 16:16
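Concretely, the weighted-average calculation described above looks like this, with hypothetical per-worker rates and job counts:

```python
rates  = [0.40, 0.80]   # hypothetical per-worker recurrence rates
counts = [15, 35]       # jobs completed by each worker (the weights)

# Each rate is weighted by its worker's job count:
# (0.40 * 15 + 0.80 * 35) / (15 + 35) = 34 / 50 = 0.68
weighted_mean = sum(r * n for r, n in zip(rates, counts)) / sum(counts)
```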
|
|
# MIS capacitor
MIS structure (Metal / SiO2 / p-Si) in a vertical MIS capacitor
A MIS capacitor is a capacitor formed from a layer of metal, a layer of insulating material and a layer of semiconductor material. It gets its name from the initials of the metal-insulator-semiconductor structure. As with the MOS field-effect transistor, this structure is for historical reasons often also referred to as a MOS capacitor, although that name specifically refers to an oxide insulator material.
The maximum capacitance, $C_\mathrm{MIS(max)}$, is calculated analogously to that of a parallel-plate capacitor:
$C_\mathrm{MIS(max)}=\varepsilon_0\varepsilon_r \cdot { {A} \over {d} }$
where $\varepsilon_0$ is the vacuum permittivity, $\varepsilon_r$ the relative permittivity of the insulator, $A$ the plate area and $d$ the thickness of the insulating layer.
The production method depends on the materials used (even polymers can serve as the insulator). We will consider an example of a MOS capacitor based on silicon and silicon dioxide. On the semiconductor substrate, a thin layer of oxide (silicon dioxide) is applied (by, for example, thermal oxidation or chemical vapour deposition) and then coated with a metal.
This structure, and thus a capacitor of this type, is present in every MIS field-effect transistor, such as MOSFETs. For the steady reduction of structure sizes in microelectronics, the following consequence is clear: from the formula above it follows that capacitance increases with ever thinner layers of insulation. For all MIS devices, however, the insulation thickness cannot fall below a minimum of around 10 nm, since thinner insulation leads to tunneling through the insulating material (dielectric). For this reason, the use of so-called high-k materials as the insulator is being investigated (as of 2009).
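As a numerical illustration of the formula above, with values chosen purely for illustration rather than from any particular device:

```python
# Parallel-plate estimate of CMIS(max) for a SiO2-insulated MOS capacitor.
# Hypothetical geometry: 100 um x 100 um plate, 10 nm oxide.
EPS_0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 3.9            # relative permittivity of SiO2
A = 100e-6 * 100e-6    # plate area, m^2
d = 10e-9              # insulator thickness, m

C = EPS_0 * eps_r * A / d   # capacitance in farads (about 35 pF here)
```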
|
|
## Double Doors Regional P4 - Tudor Teaches Jason How To Be Productive
Points: 10
Time limit: 1.0s
Java 2.0s
Memory limit: 256M
Java 256M
Tudor is teaching Jason how to be productive!
One day, Tudor decides that Jason isn't taking his work seriously enough, and installs some software to monitor Jason's monitor. Jason is now only allowed to type strings which are in a whitelist.
Jason has already written some string and wishes to turn it into another string. In a single second, he can either insert a character, delete a character, or change some character in the string into another one. Note that, per the whitelist, Jason must only ever have strings in the whitelist on his monitor during any given second.
#### Input Specification
The input starts with a single integer, the number of words in the whitelist.
That many lines follow, each containing a single string of only uppercase letters. Each string has length at most 500. Each of these strings will be distinct.
After that, a line follows with a single integer, the number of queries.
Then one line per query follows, containing two positive integers between 1 and the number of whitelist words: the index of the word Jason starts out with and the index of the word Jason wants.
#### Output Specification
For each query line, output the minimum number of seconds Jason needs to convert the first word to the second one. Output -1 if it is impossible for him to perform the conversion.
#### Sample Input
3
BIG
BUG
DAMAGE
2
1 2
1 3
#### Sample Output
1
-1
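One way to model this (a sketch, not a reference solution): since every intermediate string must itself be whitelisted, each second moves between two whitelist words at edit distance exactly 1, so each answer is a BFS shortest path in that graph:

```python
from collections import deque

def edit_distance(a, b):
    # Standard DP edit distance (insert / delete / substitute), rolling row.
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = min(dp[j] + 1, dp[j - 1] + 1,
                      prev + (a[i - 1] != b[j - 1]))
            prev, dp[j] = dp[j], cur
    return dp[n]

def min_seconds(words, start, target):
    # BFS over the graph whose edges join whitelist words one edit apart.
    n = len(words)
    adj = [[j for j in range(n)
            if j != i and edit_distance(words[i], words[j]) == 1]
           for i in range(n)]
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        if u == target:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return -1
```

Note that building the graph this way costs O(N^2 L^2) for N words of length up to L; with 500-character strings the real time limit may require the cheaper O(L) check for "edit distance exactly 1" instead of the full DP.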
|
|
# Rectangle Contained by Medial Straight Lines Commensurable in Length is Medial
## Theorem
In the words of Euclid:
The rectangle contained by medial straight lines commensurable in length is medial.
## Proof
Let $x$ and $\lambda x$ be two medials such that:
$x \frown \lambda x$
where $\frown$ denotes that $x$ and $\lambda x$ are commensurable in length.
We have that:
$x^2 : x \cdot \lambda x = x : \lambda x$
Since $x \frown \lambda x$, it follows that:
$x^2 \frown x \cdot \lambda x$
From Medial is Irrational we have that $x^2$ is medial.
Hence $x \cdot \lambda x$ is also medial.
$\blacksquare$
## Historical Note
This proof is Proposition $24$ of Book $\text{X}$ of Euclid's The Elements.
|
|
Graphs of Trigonometric Functions - Problem Solving
Cite as: Graphs of Trigonometric Functions - Problem Solving. Brilliant.org. Retrieved from https://brilliant.org/wiki/graphs-of-trigonometric-functions/
|
|
## Friday, July 5, 2013
The far-ultraviolet (FUV) diffuse emission is predominantly due to scattering of starlight from interstellar dust grains, and it shows a large regional variation depending on the relative orientations of dust and stars. Observations of the FUV (1000 -- 1150 \AA) diffuse radiation in the Magellanic Clouds (MCs), using serendipitous observations made with the {\em Far Ultraviolet Spectroscopic Explorer (FUSE)}, are presented. The estimated contribution of FUV diffuse radiation to the total integrated FUV emission in the MCs is typically 5% -- 20% in the Large Magellanic Cloud (LMC) and 34% -- 44% in the Small Magellanic Cloud (SMC) at the {\em FUSE} bands ($\lambda$ = 905 -- 1187 \AA), and it increases substantially towards longer wavelengths (e.g., 63% for the SMC at 1615 \AA). The weaker scattering of light at the shorter FUV wavelengths than at the longer wavelengths indicates that much of the stellar radiation at the shorter wavelengths goes into heating the interstellar dust. Five-times-ionized oxygen (O {\small VI}) is a tracer of hot gas (T $\sim 3\times 10^{5}$ K) in the interstellar medium (ISM). A wide survey of O {\small VI} column density measurements for the LMC is presented using high resolution {\em FUSE} spectra. The column density varies from a minimum of log N(O {\small VI}) = 13.72 atoms cm$^{-2}$ to a maximum of log N(O {\small VI}) = 14.57 atoms cm$^{-2}$. A high abundance of O {\small VI} is observed in both active (superbubbles) and inactive regions of the LMC.
|
|
# Is the use of whipple shields on satellites common?
How common is the use of Whipple shields, or of any micro-meteoroid and debris shields in general, on satellites in low Earth orbit? I couldn't find any relevant statistics on the internet; a rough estimate would suffice.
• Maybe because the price of sending 1kg weight into space is \$2700 with the SpaceX. It was \$54000 with the space shuttle. – peterh - Reinstate Monica Jul 8 '20 at 23:19
• @peterh-ReinstateMonica Not sure what that comment has to do with this question – Tristan Jul 10 '20 at 18:01
MLI (multi-layer insulation) is a good example: it is widely used primarily for thermal control, but it also has the properties of a Whipple shield.
Looking a little further will pay off: https://en.wikipedia.org/wiki/Whipple_shield cites
"There are over 100 shield configurations on the International Space Station alone, with higher-risk areas having better shielding."
and in turn leads to this reference https://web.archive.org/web/20130225001045/http://ston.jsc.nasa.gov/collections/TRS/_techrep/TP-2003-210788.pdf which has plenty of examples with ballistic limit equations for examples of MLI and other types of shield used on the ISS.
• To give a little context, there's roughly 30,000 pounds of dedicated shielding on ISS. – Tristan Jul 10 '20 at 18:02
In addition to @Puffin's answer, it is becoming increasingly common for satellites to opt for dual-purpose design, where various structural and other elements serve as shielding for vulnerable components. This is as simple as making sure structural shear panels can protect propellant tanks, radiators can protect wiring harnesses, etc.
To a degree, I would say it's reasonable to assume that just about every modern satellite in LEO has some degree of shielding, whether or not it's dedicated shielding that wouldn't otherwise be there if not for MMOD.
|
|
# Matt's arXiv selection: week ending 20 March 2009
From: Matthew Davis <mdavis_at_physics.uq.edu.au>
Date: Wed, 20 May 2009 16:57:48 +1000
The following message was sent to the matts_arxiv list by Matthew Davis <mdavis_at_physics.uq.edu.au>
Dear subscribers,
Geoff Lee deserves the credit for preparing this email. There are 28 new
preprints and 9 replacements:
------------------------------------------------------------------------------
\\
arXiv:0903.2261
Date: Thu, 12 Mar 2009 20:18:32 GMT (76kb)
Title: Modified Macroscopic Quantum Tunneling of a BEC close to a Feshbach
Resonance
Authors: N.T. Zinner and M. Th{\o}gersen
Categories: cond-mat.other
Comments: 4 pages, 3 figures, Revtex4
\\
We consider the stability and macroscopic decay of an ultracold trapped
Bose-Einstein condensate near a Feshbach resonance. Several types of atomic
species and resonances are investigated. Using a modified Gross-Pitaevskii
equation that includes higher-order terms and a multi-channel model of Feshbach
resonances, we find regions around experimentally measured Feshbach resonances
where macroscopic tunneling of the condensate is either suppressed or enhanced
by higher-order interactions. However, to see the effect in realistic
experiments requires very narrow resonances or very small condensates with
particle number of order $10^2$.
\\ ( http://arxiv.org/abs/0903.2261 , 76kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2263
Date: Thu, 12 Mar 2009 20:09:01 GMT (511kb)
Title: Shell-Model Monte Carlo Simulations of BCS-BEC Crossover in Few-Fermion
Systems
Authors: N.T. Zinner, K. M{\o}lmer, C. \"Ozen, D.J. Dean, K. Langanke
Categories: cond-mat.supr-con
Comments: 4 pages, 3 figures, Revtex4
\\
We study a trapped system of fermions with a zero-range two-body interaction
using the Shell-Model Monte Carlo method, providing {\em ab initio} results for
the low particle number limit where mean-field theory is not applicable. We
present results for the $N$-body energies as function of interaction strength,
particle number, and temperature. The subtle question of renormalization in a
finite model space is addressed and the convergence of our method and its
applicability across the BCS-BEC crossover is demonstrated. Comparison to N=2
analytics at zero and finite temperature, and to other calculations in the
literature for $N>2$ show very good agreement.
\\ ( http://arxiv.org/abs/0903.2263 , 511kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2346
Date: Fri, 13 Mar 2009 10:22:58 GMT (55kb)
Title: Phase diagram of imbalanced fermions in optical lattices
Authors: Xiaoling Cui and Yupeng Wang
Categories: cond-mat.supr-con
\\
The zero temperature phase diagrams of imbalanced two species fermions in 3D
optical lattices are investigated to evaluate the validity of the Fermi-Hubbard
model. It is found that depending on the filling factor, s-wave scattering
strength and lattice potential, the system may fall into the normal($N$) phase,
magnetized superfluid($SF_M$) or phase separation of $N$ and $BCS$ state. By
tuning these parameters, the superfluidity could be favorable by enhanced
effective couplings or suppressed by the increased band gap. The phase profiles
for imbalanced fermions in the presence of a harmonic trap are also
investigated under LDA, which show some novel structures to those without the
optical lattice.
\\ ( http://arxiv.org/abs/0903.2346 , 55kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2459
Date: Fri, 13 Mar 2009 19:11:12 GMT (91kb)
Title: Transport and Control in One-Dimensional Systems
Authors: L. F. Santos
Categories: cond-mat.stat-mech
Comments: 8 pages, 4 figures. Talk given at the workshop `Integrable Quantum
Systems and Solvable Statistical Mechanics Models', Montreal, July 2008.
Submitted to J. Math. Phys
\\
We study transport of local magnetization in a Heisenberg spin-1/2 chain at
zero temperature. The system is initially prepared in a highly excited pure
state far from equilibrium and its evolution is analyzed via exact
diagonalization. Integrable and non-integrable regimes are obtained by
adjusting the parameters of the Hamiltonian, which allows for the comparison of
transport behaviors in both limits. In the presence of nearest neighbor
interactions only, the transport behavior in the integrable clean system
contrasts with the chaotic chain with on-site defects, oscillations in the
first suggesting ballistic transport and a fast decay in the latter indicating
diffusive transport. The results for a non-integrable system with frustration
are less conclusive, similarities with the integrable chain being verified. We
also show how methods of quantum control may be applied to chaotic systems to
induce a desired transport behavior, such as that of an integrable system.
\\ ( http://arxiv.org/abs/0903.2459 , 91kb)
-----------------------------------------------------------------------------
\\
arXiv:0903.2401
Date: Fri, 13 Mar 2009 13:05:54 GMT (3080kb)
Title: Unpublished opening lecture for the course on the theory of relativity
in Argentina, 1925
Authors: Albert Einstein, Alejandro Gangui and Eduardo L. Ortiz
Categories: physics.hist-ph physics.soc-ph
Comments: Companion paper to arXiv:0903.2064 . Published version available at
http://www.universoeinstein.com.ar/einsteinargentina.htm. Translated by
Alejandro Gangui and Eduardo L. Ortiz
Journal-ref: Science in Context, Vol. 21, issue 3, pp. 451-459 (2008)
\\
Honorable Rector, Honorable Professors, and Students of this University: In
these times of political and economic struggle and nationalistic fragmentation,
it is a particular joy for me to see people assembling here to give their
attention exclusively to the highest values that are common to us all. I am
glad to be in this blessed land before a small circle of people who are
interested in topics of science to speak on those issues that, in essence, are
the subject of my own meditations... [abridged].
\\ ( http://arxiv.org/abs/0903.2401 , 3080kb)
-------------------------------------------------------------------------------
\\
arXiv:0903.2519
Date: Sat, 14 Mar 2009 01:02:26 GMT (146kb)
Title: High Precision Quantum Monte Carlo Study of the 2D Fermion Hubbard Model
Authors: C. N. Varney, C.-R. Lee, Z. J. Bai, S. Chiesa, M. Jarrell, and R. T.
Scalettar
Categories: cond-mat.str-el
Comments: 7 pages, 11 figures. Submitted to Phys. Rev. A
\\
We report large scale determinant Quantum Monte Carlo calculations of the
effective bandwidth, momentum-space Green's function, and magnetic correlations
of the square lattice fermion Hubbard Hamiltonian at half-filling. The sharp
Fermi surface of the non-interacting limit is significantly broadened by the
electronic correlations, but retains signatures of the approach to the edges of
the first Brillouin zone as the density increases. Finite size scaling of
simulations on large lattices allows us to extract the interaction dependence
of the antiferromagnetic order parameter, exhibiting its evolution from weak
coupling to the strong coupling Heisenberg limit. Our lattices provide improved
resolution of the Green's function in momentum space, allowing a more
quantitative comparison with time of flight optical lattice experiments.
\\ ( http://arxiv.org/abs/0903.2519 , 146kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2534
Date: Sat, 14 Mar 2009 08:47:16 GMT (555kb)
Title: Vortex-Induced Phase Slip Dissipation in a Toroidal Bose-Einstein
Condensate Flowing Through a Barrier
Authors: F. Piazza, L. A. Collins, and A. Smerzi
Categories: cond-mat.other
\\
We study superfluid dissipation due to phase slips for a BEC flowing through
a repulsive barrier inside a torus. The barrier is adiabatically raised across
the annulus while the condensate flows with a finite quantized angular
momentum. We find that, at a critical height, a vortex moves radially from the
inner region and reaches the barrier to eventually circulate around the
annulus. At a slightly higher barrier, an anti-vortex also enters into the
torus from the outward region. The vortex and anti-vortex decrease the total
angular momentum by leaving behind their respective paths a $2 \pi$ phase slip.
When they collide or orbit along the same loop, the condensate suffers a global
$2\pi$ phase slip, and the total angular momentum decreases by one quantum. The
analysis is based on numerical simulations of the Gross-Pitaevskii equation
both in two- and three-dimensions.
\\ ( http://arxiv.org/abs/0903.2534 , 555kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2537
Date: Sat, 14 Mar 2009 09:48:14 GMT (79kb)
Title: Quantum Monte Carlo simulations for the Bose-Hubbard model with random
chemical potential; localized Bose-Einstein condensation without
superfluidity
Authors: Mitsuaki Tsukamoto, Makoto Tsubota
Categories: cond-mat.stat-mech
\\
The hardcore-Bose-Hubbard model with random chemical potential is
investigated using quantum Monte Carlo simulation. We consider two cases of
random distribution of the chemical potential: a uniformly random distribution
and a correlated distribution. The temperature dependences of the superfluid
density, the specific heat, and the correlation functions are calculated. If
the distribution of the randomness is correlated, there exists an intermediate
state, which can be thought of as a localized condensate state of bosons,
between the superfluid state and the normal state.
\\ ( http://arxiv.org/abs/0903.2537 , 79kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2552
Date: Sat, 14 Mar 2009 16:01:36 GMT (405kb)
Title: Quantum fluctuations of a Bose-Josephson junction in a
quasi-one-dimensional ring trap
Authors: N. Didier, A. Minguzzi, F.W.J. Hekking
Categories: cond-mat.mes-hall
\\
Using a Luttinger-liquid approach we study the quantum fluctuations of a
Bose-Josephson junction, consisting of a Bose gas confined to a quasi
one-dimensional ring trap which contains a localized repulsive potential
barrier. For an infinite barrier we study the one-particle and two-particle
static correlation functions. For the one-body density-matrix we obtain
different power-law decays depending on the location of the probe points with
respect to the position of the barrier. This quasi-long range order can be
experimentally probed in principle using an interference measurement. The
corresponding momentum distribution at small momenta is also shown to be
affected by the presence of the barrier and to display the universal power-law
behavior expected for an interacting 1D fluid. We also evaluate the particle
density profile, and by comparing with the exact results in the Tonks-Girardeau
limit we fix the nonuniversal parameters of the Luttinger-liquid theory. Once
the parameters are determined from one-body properties, we evaluate the
density-density correlation function, finding a remarkable agreement between
the Luttinger liquid predictions and the exact result in the Tonks-Girardeau
limit, even at the length scale of the Friedel-like oscillations which
characterize the behavior of the density-density correlation function at
intermediate distance. Finally, for a large but finite barrier we use the
one-body correlation function to estimate the effect of quantum fluctuations on
the renormalization of the barrier height, finding a reduction of the effective
Josephson coupling energy, which depends on the length of the ring and on the
interaction strength.
\\ ( http://arxiv.org/abs/0903.2552 , 405kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2568
Date: Sat, 14 Mar 2009 18:44:35 GMT (1756kb)
Title: Collisional cooling of ultra-cold atom ensembles using Feshbach
resonances
Authors: L. Mathey, Eite Tiesinga, Paul S. Julienne, Charles W. Clark
Categories: cond-mat.other
\\
We propose a new type of cooling mechanism for ultra-cold fermionic atom
ensembles, which capitalizes on the energy dependence of inelastic collisions
in the presence of a Feshbach resonance. We first discuss the case of a single
magnetic resonance, and find that the final temperature and the cooling rate is
limited by the width of the resonance. A concrete example, based on a p-wave
resonance of $^{40}$K, is given. We then improve upon this setup by using both
a very sharp optical or radio-frequency induced resonance and a very broad
magnetic resonance and show that one can improve upon temperatures reached with
current technologies.
\\ ( http://arxiv.org/abs/0903.2568 , 1756kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2707
Date: Mon, 16 Mar 2009 09:10:02 GMT (830kb)
Title: Non-Hermitian Generalization of the Bethe-Ansatz Excited States and
Dielectric Breakdown in the Hubbard Model out of Equilibrium
Authors: Takashi Oka and Hideo Aoki
Categories: cond-mat.stat-mech cond-mat.str-el
\\
Many-body models exactly solvable in nonequilibrium situations will serve to
deepen the understanding of nonequilibrium physics. Here we explore this by
applying the Dykhne-Davis-Pechukas approach to the dielectric breakdown of Mott
insulators in strong electric fields, which relates, via an analytic
continuation, the quantum tunneling rate with the Bethe-ansatz solution for
excited states in the Hubbard model extended to a non-Hermitian case. This has
enabled us to (i) reveal that two apparently unrelated theories, one with a
many-body Landau-Zener tunneling and the other a non-Hermitian approach, are in
fact intimately related, and (ii) give a picture for the breakdown in the
thermodynamic limit.
\\ ( http://arxiv.org/abs/0903.2707 , 830kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2723
Date: Mon, 16 Mar 2009 11:03:25 GMT (376kb)
Title: Exciton-polariton condensation in a natural two dimensional trap
Authors: D. Sanvitto, A. Amo, L. Vina, R. Andre, D. Solnyshkov and G. Malpuech
Categories: cond-mat.mes-hall cond-mat.other
\\
Bose Einstein condensation of exciton-polaritons has recently been reported
in homogeneous structures only affected by random in-plane fluctuations. We
have taken advantage of the ubiquitous defects in semiconductor microcavities
to reveal the spontaneous dynamical condensation of polaritons in the quantised
levels of a trap. We observe condensation in several quantized states, taking
their snapshots in real and reciprocal space. We show also the effect of
particle interactions for high occupations numbers, revealed by a change in the
confined wave function toward the Thomas Fermi profile.
\\ ( http://arxiv.org/abs/0903.2723 , 376kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2796
Date: Mon, 16 Mar 2009 16:29:02 GMT (63kb)
Title: Cooling atoms into entangled states
Authors: Giovanni Vacanti and Almut Beige
Categories: quant-ph
\\
We discuss the possibility to prepare highly entangled state by simply
cooling atoms into the ground state of an applied interaction Hamiltonian.
Different from previous proposals for state preparation via cooling [B. Kraus
et al., Phys. Rev. A 78, 042397 (2008) and F. Verstraete et al.,
arXiv:0804.1888], we do not require the existence of an exact dark state which
is the only stationary state of the system. Instead, our scheme is analog to
laser sideband cooling. Laser fields are applied such that the target state is
highly detuned, while all other qubit states are in resonance. After presenting
the general theory, we discuss concrete applications with one and two qubits.
\\ ( http://arxiv.org/abs/0903.2796 , 63kb)
--------------------------------------------------------------------------------
\\
arXiv:0903.2261 (*cross-listing*)
Date: Thu, 12 Mar 2009 20:18:32 GMT (76kb)
Title: Modified Macroscopic Quantum Tunneling of a BEC close to a Feshbach
Resonance
Authors: N.T. Zinner and M. Th{\o}gersen
Categories: cond-mat.other quant-ph
Comments: 4 pages, 3 figures, Revtex4
\\
We consider the stability and macroscopic decay of an ultracold trapped
Bose-Einstein condensate near a Feshbach resonance. Several types of atomic
species and resonances are investigated. Using a modified Gross-Pitaevskii
equation that includes higher-order terms and a multi-channel model of Feshbach
resonances, we find regions around experimentally measured Feshbach resonances
where macroscopic tunneling of the condensate is either suppressed or enhanced
by higher-order interactions. However, to see the effect in realistic
experiments requires very narrow resonances or very small condensates with
particle number of order $10^2$.
\\ ( http://arxiv.org/abs/0903.2261 , 76kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2745
Date: Mon, 16 Mar 2009 12:55:39 GMT (135kb)
Title: All-optical runaway evaporation to Bose-Einstein condensation
Authors: Jean-Francois Cl\'ement (LCFIO), Jean-Philippe Brantut (LCFIO), Martin
Robert De Saint Vincent (LCFIO), Robert A. Nyman (LCFIO), A. Aspect (LCFIO),
Thomas Bourdel (LCFIO), Philippe Bouyer (LCFIO)
Categories: physics.atom-ph
\\
We demonstrate runaway evaporative cooling directly with a tightly confining
optical dipole trap and achieve fast production of condensates of 1.5x10^5 87Rb
atoms. Our scheme is characterized by an independent control of the optical
trap confinement and depth, permitting forced evaporative cooling without
reducing the trap stiffness. Although our configuration is particularly well
suited to the case of 87Rb atoms in a 1565nm optical trap, where an efficient
initial loading is possible, our scheme is general and should allow all-optical
evaporative cooling at constant stiffness for most species.
\\ ( http://arxiv.org/abs/0903.2745 , 135kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2991
Date: Tue, 17 Mar 2009 15:54:18 GMT (3015kb)
Title: Density, phase and coherence properties of a low dimensional
Bose-Einstein systems moving in a disordered potential
Authors: Chaitanya Joshi, Sankalpa Ghosh (Physics Department, I. I. T. Delhi)
Categories: cond-mat.mes-hall cond-mat.dis-nn
Comments: 12 pages, 15 figures, EPJB style, accepted for publication in the
topical issue of EPJB on "Novel Quantum phases and Mesoscopic physics in
Quantum gases"
\\
We present a detailed numerical study of the dynamics of a disordered
one-dimensional Bose-Einstein condensate in position and momentum space. We
particularly focus on the region where non-linearity and disorder
simultaneously effect the time propagation of the condensate as well as the
possible interference between various parts of the matter wave. We report
oscillation between spatially extended and localized behavior for the
propagating condensate which dies down with increasing non-linearity. We also
report intriguing behavior of the phase fluctuation and the coherence
properties of the matter wave. We also briefly compare this behavior with that
of a two-dimensional condensate. We mention the relevance of our results to the
related experiments on Anderson localization and indicate the possibility of
future experiments.
\\ ( http://arxiv.org/abs/0903.2991 , 3015kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3006
Date: Tue, 17 Mar 2009 17:07:01 GMT (22kb)
Title: Theory of Radio Frequency Spectroscopy of Polarized Fermi Gases
Authors: William Schneider, Vijay B. Shenoy, Mohit Randeria
Categories: cond-mat.other
\\
We present two exact results for singular features in the radio frequency
intensity $I(\omega)$ for ultracold Fermi gases. First, in the absence of final
state interactions, $I(\omega)$ has a universal high frequency tail $C\omega^{-3/2}$ for \emph{all} many-body states, where $C$ is Tan's contact.
Second, in a \emph{normal} Fermi liquid at T=0, $I(\omega)$ has a jump
discontinuity of $Z/(1 - m/m^{*})$, where $Z$ is the quasiparticle weight and
$m^*/m$ the mass renormalization. We then describe various approximations for
$I(\omega)$ in polarized normal gases. We show why an approximation that is
exact in the $n_\downarrow=0$ limit, fails qualitatively for $n_\downarrow > 0$: there is
no universal tail and sum rules are violated. The simple ladder approximation
is qualitatively correct for very small $n_{\dn}$, but not quantitatively.
\\ ( http://arxiv.org/abs/0903.3006 , 22kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3030
Date: Tue, 17 Mar 2009 19:35:34 GMT (19kb)
Title: Critical Behaviour in Trapped Strongly Interacting Fermi Gases
Authors: E. Taylor
Categories: cond-mat.stat-mech
\\
We investigate the width of the Ginzburg critical region and experimental
signatures of critical behaviour in strongly interacting trapped Fermi gases
close to unitarity, where the s-wave scattering length diverges. Despite the
fact that the width of the critical region is of the order unity, evidence of
critical behaviour in the thermodynamics of trapped gases is strongly
suppressed by their inhomogeneity. The specific heat of a harmonically confined
gas, for instance, is linear in the reduced temperature $\varepsilon = 1-T/T_c$
close to $T_c$. Using higher-order power-law potentials to confine the gas,
critical behaviour becomes more apparent and a lambda curve should be visible
in the specific heat.
\\ ( http://arxiv.org/abs/0903.3030 , 19kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2953
Date: Tue, 17 Mar 2009 12:14:21 GMT (946kb)
Title: Tapered optical fibers as tools for probing magneto-optical trap
characteristics
Authors: M.J. Morrissey, K. Deasy, Y. Wu, S. Chakrabarti and S. Nic Chormaic
Categories: quant-ph
\\
We present a novel technique for measuring the characteristics of a
magneto-optical trap for cold atoms by monitoring the spontaneous emission from
trapped atoms coupled into the guided mode of a tapered optical nanofiber. We
show that the nanofiber is highly sensitive to very small numbers of atoms
close to its surface. The size and shape of the MOT, determined by translating
the cold atom cloud across the tapered fiber, is in excellent agreement with
measurements obtained using the conventional method of fluorescence imaging
using a CCD camera. The coupling of atomic fluorescence into the tapered fiber
was compared to that achieved by focusing the MOT fluorescence onto a
photodiode, and it was seen that the tapered fiber gives slightly longer loading and
lifetime measurements due to the sensitivity of the fiber, even when very few
atoms are present.
\\ ( http://arxiv.org/abs/0903.2953 , 946kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2942
Date: Tue, 17 Mar 2009 11:07:21 GMT (850kb)
Title: Influence of a Feshbach resonance on the photoassociation of LiCs
Authors: J. Deiglmayr, P. Pellegrini, A. Grochola, M. Repp, R. C\^ot\'e, O.
Dulieu, R. Wester and M. Weidem\"uller
Categories: physics.atom-ph
Comments: Submitted to New Journal of Physics, Special Issue on Cold and
Ultracold molecules. 16 pages, 8 figures
\\
We analyse the formation of ultracold 7Li133Cs molecules in the rovibrational
ground state through photoassociation into the B1Pi state, which has recently
been reported [J. Deiglmayr et al., Phys. Rev. Lett. 101, 133004 (2008)].
Absolute rate constants for photoassociation at large detunings from the atomic
asymptote are determined and are found to be surprisingly large. The
photoassociation process is modeled using a full coupled-channel calculation
for the continuum state, taking all relevant hyperfine states into account. The
enhancement of the photoassociation rate is found to be caused by an 'echo' of
the triplet component in the singlet component of the scattering wave function
at the inner turning point of the lowest triplet a3Sigma+ potential. This
perturbation can be ascribed to the existence of a broad Feshbach resonance at
low scattering energies. Our results elucidate the important role of couplings
in the scattering wave function for the formation of deeply bound ground state
molecules via photoassociation.
\\ ( http://arxiv.org/abs/0903.2942 , 850kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3185
Date: Wed, 18 Mar 2009 15:32:34 GMT (1552kb)
Title: Ab-initio determination of Bose-Hubbard parameters for two ultracold
atoms in an optical lattice
Authors: Philipp Schneider, Sergey Grishkevich, and Alejandro Saenz
Categories: quant-ph
\\
We calculate numerically the exact energy spectrum of the six dimensional
problem of two interacting Bosons in a three-well optical lattice. The
particles interact via a full Born-Oppenheimer potential which can be adapted
to model the behavior of the s-wave scattering length at Feshbach resonances.
By adjusting the parameters of the corresponding Bose-Hubbard (BH) Hamiltonian
the deviation between the numerical energy spectrum and the BH spectrum is
minimized. This defines the optimal BH parameter set which we compare to the
standard parameters of the BH model. The range of validity of the BH model with
these parameter sets is examined, and an improved analytical prediction of the
interaction parameter is introduced. Furthermore, the extended BH model and
implications due to the energy dependence of the scattering length and
couplings to higher Bloch bands at a Feshbach resonance are discussed.
\\ ( http://arxiv.org/abs/0903.3185 , 1552kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3147
Date: Wed, 18 Mar 2009 12:12:25 GMT (100kb)
Title: Autoionization of an ultracold Rydberg gas through resonant dipole
coupling
Authors: T. Amthor, J. Denskat, C. Giese, N. N. Bezuglov, A. Ekers, L.
Cederbaum, M. Weidem\"uller
Categories: physics.atom-ph
\\
We investigate a possible mechanism for the autoionization of ultracold
Rydberg gases, based on the resonant coupling of Rydberg pair states to the
ionization continuum. Unlike an atomic collision where the wave functions begin
to overlap, the mechanism considered here involves only the long-range dipole
interaction and is in principle possible in a static system. It is related to
the process of intermolecular Coulombic decay (ICD). In addition, we include
the interaction-induced motion of the atoms and the effect of multi-particle
systems in this work. We find that the probability for this ionization
mechanism can be increased in many-particle systems featuring attractive or
repulsive van der Waals interactions. However, the rates for ionization through
resonant dipole coupling are very low. It is thus unlikely that this process
contributes to the autoionization of Rydberg gases in the form presented here,
but it may still act as a trigger for secondary ionization processes. As our
picture involves only binary interactions, it remains to be investigated if
collective effects of an ensemble of atoms can significantly influence the
ionization probability. Nevertheless our calculations may serve as a starting
point for the investigation of more complex systems, such as the coupling of
many pair states proposed in [Tanner et al., PRL 100, 043002 (2008)].
\\ ( http://arxiv.org/abs/0903.3147 , 100kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3202
Date: Wed, 18 Mar 2009 16:59:35 GMT (100kb)
Title: Collisions of bosonic ultracold polar molecules in microwave traps
Authors: Alexander V. Avdeenkov
Categories: physics.atom-ph
Comments: accepted for publication in New J. Phys. (special issue on ultracold
molecules)
\\
The collisions between linear polar molecules, trapped in a microwave field
with circular polarization, are theoretically analyzed. The microwave trap
suggested by DeMille \cite{DeMille} seems to be rather advantageous in
comparison with other traps. Here we have demonstrated that the microwave trap
can provide a successful evaporative cooling for polar molecules in a rather
broad range of frequencies of the AC-field. We suggested that not only ground
state polar molecules but also molecules in some other states can be safely
trapped.
But the state in which molecules can be safely loaded and trapped depends on
the frequency of the AC-field.
\\ ( http://arxiv.org/abs/0903.3202 , 100kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3194
Date: Wed, 18 Mar 2009 16:09:09 GMT (172kb)
Title: 60 years of Broken Symmetries in Quantum Physics (From the Bogoliubov
Theory of Superfluidity to the Standard Model)
Authors: D. V. Shirkov
Categories: physics.hist-ph cond-mat.supr-con hep-ph hep-th
Comments: 18 pages, 4 figures, to appear in Physics -- Uspekhi
\\
A retrospective historical overview of the phenomenon of spontaneous symmetry
breaking (SSB) in quantum theory, the issue that has been implemented in
particle physics in the form of the Higgs mechanism. The main items are:
-- The Bogoliubov's microscopical theory of superfluidity (1946);
-- The BCS-Bogoliubov theory of superconductivity (1957);
-- Superconductivity as a superfluidity of Cooper pairs (Bogoliubov - 1958);
-- Transfer of the SSB into the QFT models (early 60s);
-- The Higgs model triumph in the electro-weak theory (early 80s).
The role of the Higgs mechanism and its status in the current Standard Model
is also touched upon.
\\ ( http://arxiv.org/abs/0903.3194 , 172kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3282
Date: Thu, 19 Mar 2009 08:39:09 GMT (374kb)
Title: Direct observation of number squeezing in an optical lattice
Authors: A. Itah, H. Veksler, O. Lahav, A. Blumkin, C. Moreno, C. Gordon and J.
Steinhauer
Categories: cond-mat.other
\\
We present an in-situ study of an optical lattice with tunneling and single
lattice site resolution. This system provides an important step for realizing a
quantum computer. The real-space images show the fluctuations of the atom
number in each site. The sub-Poissonian distribution results from the approach
to the Mott insulator state, combined with the dynamics of density-dependent
losses, which result from the high densities of optical lattice experiments.
These losses are clear from the shape of the lattice profile. Furthermore, we
find that the lattice is not in the ground state despite the momentum
distribution which shows the reciprocal lattice. These effects may well be
relevant for other optical lattice experiments, past and future. The lattice
beams are derived from a microlens array, resulting in lattice beams which are
perfectly stable relative to one another.
\\ ( http://arxiv.org/abs/0903.3282 , 374kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3284
Date: Thu, 19 Mar 2009 08:23:33 GMT (1232kb)
Title: Light-pulse atom interferometry in microgravity
Authors: Guillaume Stern (LCFIO, SYRTE), Baptiste Battelier (LCFIO), R\'emi
Geiger (LCFIO), Gael Varoquaux (LCFIO), Andr\'e Villing (LCFIO), Fr\'ed\'eric
Moron (LCFIO), Olivier Carra (ONERA), Nassim Zahzam (ONERA), Yannick Bidel
(ONERA), Oualid Chaibi (SYRTE), Frank Pereira Dos Santos (SYRTE), Alexandre
Bresson (ONERA), Arnaud Landragin (SYRTE), Philippe Bouyer (LCFIO)
Categories: cond-mat.other
\\
We describe the operation of a light pulse interferometer using cold 87Rb
atoms in reduced gravity. Using a series of two Raman transitions induced by
light pulses, we have obtained Ramsey fringes in the low gravity environment
achieved during parabolic flights. With our compact apparatus, we have operated
in a regime which is not accessible on ground. In the much lower gravity
environment and lower vibration level of a satellite, our cold atom
interferometer could measure accelerations with a sensitivity orders of
magnitude better than the best ground based accelerometers and close to proven
spaced-based ones.
\\ ( http://arxiv.org/abs/0903.3284 , 1232kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3345
Date: Thu, 19 Mar 2009 15:11:13 GMT (45kb)
Title: Time-resolved measurement of Landau-Zener tunneling in periodic
potentials
Authors: A. Zenesini and H. Lignier and G. Tayebirad and J. Radogostowicz and
D. Ciampini and R. Mannella and S. Wimberger and O. Morsch and E. Arimondo
Categories: cond-mat.other
\\
We report time-resolved measurements of Landau-Zener tunneling of
Bose-Einstein condensates in accelerated optical lattices, clearly resolving
the step-like time dependence of the band populations. Using different
experimental protocols we were able to measure the tunneling probability both
in the adiabatic and in the diabatic bases of the system. We also
experimentally determine the contribution of the momentum width of the Bose
condensates to the width of the tunneling steps and discuss the implications
for measuring the jump time in the Landau-Zener problem.
\\ ( http://arxiv.org/abs/0903.3345 , 45kb)
------------------------------------------------------------------------------
\\
arXiv:0903.3348
Date: Thu, 19 Mar 2009 15:24:04 GMT (1633kb)
Title: Universal Four-Boson States in Ultracold Molecular Gases: Resonant
Effects in Dimer-Dimer Collisions
Authors: J. P. D'Incao, J. von Stecher, and Chris H. Greene
Categories: physics.atom-ph
\\
We study the manifestations of universal four-body physics in ultracold
dimer-dimer collisions.
We show that resonant features associated with three-body Efimov physics and
dimer-dimer scattering lengths are universally related. The emergence of
universal four-boson states allows for the tunability of the dimer-dimer
interaction, thus enabling the future study of ultracold molecular gases with
both attractive and repulsive interactions. Moreover, our study of the
interconversion between dimers and Efimov trimers shows that $B_{2}+B_{2}\to B_{3}+B$ rearrangement reactions can provide an efficient trimer formation
mechanism. Our analysis of the temperature dependence of this reaction provides
an interpretation of the available experimental data and sheds light on the
possible experimental realization of rearrangement processes in ultracold
gases.
\\ ( http://arxiv.org/abs/0903.3348 , 1633kb)
------------------------------------------------------------------------------
The replacements:
------------------------------------------------------------------------------
\\
arXiv:cond-mat/0610116
replaced with revised version Fri, 13 Mar 2009 14:43:48 GMT (18kb)
Title: Resonant Dimer Relaxation in Cold Atoms with a Large Scattering Length
Authors: Eric Braaten (Ohio State U.), H.-W. Hammer (Bonn U.)
Categories: cond-mat.other nucl-th
Comments: 4 pages, 2 eps figures, normalization error in figures corrected,
equations unchanged
Report-no: HISKP-TH-06-26
Journal-ref: Phys.Rev.A75:052710,2007
DOI: 10.1103/PhysRevA.75.052710
\\ ( http://arxiv.org/abs/cond-mat/0610116 , 18kb)
------------------------------------------------------------------------------
\\
arXiv:0903.2242
replaced with revised version Fri, 13 Mar 2009 11:41:19 GMT (552kb,D)
Title: Quantum Noise Interference and Back-action Cooling in Cavity
Nanomechanics
Authors: Florian Elste, S. M. Girvin, A. A. Clerk
Categories: cond-mat.mes-hall
Comments: 4+ pages, 2 figures. Error in second last paragraph corrected
\\ ( http://arxiv.org/abs/0903.2242 , 552kb)
------------------------------------------------------------------------------
\\
arXiv:0711.1772
replaced with revised version Mon, 16 Mar 2009 10:49:40 GMT (337kb)
Title: Bloch oscillations of atoms in an optical multiphoton potential
Authors: Tobias Salger, Gunnar Ritt, Carsten Geckeler, Sebastian Kling, Martin
Weitz
Categories: cond-mat.other
Journal-ref: Phys. Rev. A 79, 011605(R) (2009)
DOI: 10.1103/PhysRevA.79.011605
\\ ( http://arxiv.org/abs/0711.1772 , 337kb)
------------------------------------------------------------------------------
\\
arXiv:0811.3623
replaced with revised version Mon, 16 Mar 2009 15:04:08 GMT (201kb)
Title: Superfluid-density of the ultra-cold Fermi gas in optical lattices
Authors: T. Paananen
Categories: cond-mat.other
\\ ( http://arxiv.org/abs/0811.3623 , 201kb)
------------------------------------------------------------------------------
\\
arXiv:0806.1211
replaced with revised version Tue, 17 Mar 2009 11:20:36 GMT (508kb)
Title: Universal dephasing in a chiral 1D interacting fermion system
Authors: Clemens Neuenhahn, Florian Marquardt
Categories: cond-mat.mes-hall
Comments: 5 pages, 3 figures; minor changes, version as published;
Journal-ref: Physical Review Letters 102, 046806 (2009)
DOI: 10.1103/PhysRevLett.102.046806
\\ ( http://arxiv.org/abs/0806.1211 , 508kb)
------------------------------------------------------------------------------
\\
arXiv:0903.1647
replaced with revised version Mon, 16 Mar 2009 22:19:03 GMT (21kb)
Title: Stability of low-dimensional multicomponent Bose gases
Authors: Alexei Kolezhuk
Categories: cond-mat.other cond-mat.str-el
Comments: 4 pages, 2 figures; (v2) shortened, references corrected
\\ ( http://arxiv.org/abs/0903.1647 , 21kb)
------------------------------------------------------------------------------
\\
arXiv:0712.0880
replaced with revised version Wed, 18 Mar 2009 06:59:58 GMT (100kb)
Title: Loss of Superfluidity in Bose-Einstein Condensate in an Optical lattice
with Two- and Three-Body Interactions
Authors: Priyam Das, Manan Vyas and Prasanta K. Panigrahi
Categories: cond-mat.other
\\ ( http://arxiv.org/abs/0712.0880 , 100kb)
------------------------------------------------------------------------------
\\
arXiv:0810.3088
replaced with revised version Wed, 18 Mar 2009 11:45:40 GMT (150kb)
Title: Spacetime analogue of Bose-Einstein condensates: Bogoliubov-de Gennes
formulation
Authors: Yasunari Kurita, Michikazu Kobayashi, Takao Morinari, Makoto Tsubota,
Hideki Ishihara
Categories: cond-mat.other gr-qc
Report-no: OCU-PHYS-305, AP-GR-62, YITP-08-77
\\ ( http://arxiv.org/abs/0810.3088 , 150kb)
------------------------------------------------------------------------------
\\
arXiv:0811.2436
replaced with revised version Thu, 19 Mar 2009 11:16:03 GMT (249kb)
Title: Stationary waves in a supersonic flow of a two-component Bose gas
Authors: L.Yu. Kravchenko, D.V. Fil
Categories: cond-mat.other
\\ ( http://arxiv.org/abs/0811.2436 , 249kb)
------------------------------------------------------------------------------
Till next time,
Matt.
--
=========================================================================
Dr M. J. Davis, Associate Professor in Physics
School of Mathematics and Physics, ph : +61 7 334 69824
The University of Queensland, fax : +61 7 336 51242
Brisbane, QLD 4072, mdavis_at_physics.uq.edu.au
Australia. www.physics.uq.edu.au/people/mdavis/
=========================================================================
Matt's arXiv selection: weekly summary of cold-atom papers from arXiv.org
http://www.physics.uq.edu.au/people/mdavis/matts_arXiv/
=========================================================================
Legal stuff: Unless stated otherwise, this e-mail represents only the
views of the sender and not the views of the University of Queensland
=========================================================================
# An algorithm for checking if a nonlinear function f is always positive
Is there an algorithm to check if a given (possibly nonlinear) function f is always positive?
The idea that I currently have is to find the roots of the function (using the Newton-Raphson algorithm or similar techniques, see http://en.wikipedia.org/wiki/Root-finding_algorithm) and check the derivatives, or to find the minimum of f, but these don't seem to be the best solutions to this problem; root-finding algorithms also suffer from many convergence issues.
For example, in Maple, the function verify can do this, but I need to implement it in my own program. Maple help on verify: http://www.maplesoft.com/support/help/Maple/view.aspx?path=verify/function_shells. Maple example: assume(x,'real'); verify(x^2+1,0,'greater_than'); --> returns true, since for every x we have x^2+1 > 0.
Some background on the question: the function $f$ is the right-hand side of a nonlinear differential model for a circuit. A nonlinear circuit can be modeled as a set of ordinary differential equations by applying modified nodal analysis (MNA). For the sake of simplicity, let's consider only systems with 1 dimension, so $x' = f(x)$, where $f$ describes the circuit. For example, $f$ can be $f(x) = 10x - 100x^2 + 200x^3 - 300x^4 + 100x^5$ (a model for a nonlinear tunnel diode) or $f(x) = 10 - 2\sin(4x) + 3x$ (a model for a Josephson junction).
$x$ is bounded and $f$ is only defined on an interval $[a,b] \subset \mathbb{R}$. $f$ is continuous. I can also assume that $f$ is Lipschitz with Lipschitz constant $L>0$, but I don't want to unless I have to.
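For the polynomial circuit models above, the approach suggested in the comments can be made concrete: a continuous function on $[a,b]$ attains its minimum either at an endpoint or at a critical point (a root of $f'$), so it suffices to evaluate $f$ at those finitely many candidates. A minimal NumPy sketch for the polynomial case (function name and tolerances are mine, not from Maple):

```python
import numpy as np

def is_always_positive(coeffs, a, b):
    """Check whether a polynomial (coefficients in increasing-power
    order) is strictly positive on the interval [a, b].

    The minimum of a continuous function on [a, b] occurs at an
    endpoint or at a root of the derivative, so we only need to
    evaluate the polynomial at those candidate points.
    """
    p = np.polynomial.Polynomial(coeffs)
    dp = p.deriv()
    # Real roots of the derivative that lie inside [a, b]
    crit = [r.real for r in np.atleast_1d(dp.roots())
            if abs(r.imag) < 1e-12 and a <= r.real <= b]
    candidates = [a, b] + crit
    return all(p(x) > 0 for x in candidates)

# x^2 + 1 is positive everywhere, so certainly on [-10, 10]:
print(is_always_positive([1, 0, 1], -10, 10))                     # True
# The tunnel-diode model vanishes at x = 0 and is negative at x = -1:
print(is_always_positive([0, 10, -100, 200, -300, 100], -1, 1))   # False
```

For non-polynomial models such as the Josephson junction example, the same idea applies with a numerical root finder for $f'$, which avoids the convergence issues of searching for roots of $f$ itself (the derivative's roots are bracketed by sign changes on a dense grid).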
-
Does Maple's verify work for all possible functions? How about, say, a ten-degree polynomial? – Kevin May 16 '12 at 20:03
I'm assuming you mean a continuous, probably polynomial function (after all, f(x) = -1 iff program X halts else +1 is a valid function)? If so, what is the actual problem? You mentioned two solutions: find the roots of the function (check the value of the function at one point between each of the roots) or the roots of the derivative (check the value of the function at each of these points) - either one of these should work. – BlueRaja - Danny Pflughoeft May 16 '12 at 20:23
A very good point, yes, the function should be continuous. Root-finding was my initial solution, but in my case, there are several convergence issues with it. I'm looking for a better algorithm. – Adel Ahmadyan May 16 '12 at 20:37
Do you have an analytic form for f, or just a black-box function to evaluate it? What about its derivatives? – Dougal May 16 '12 at 20:57
Instead of looking for the roots of the function, you could look for all extremas, i.e. points where the derivative is zero; if any of these is negative the function is negative. – Mathias May 16 '12 at 21:12
Forces in Connected Objects (Systems)
# Problem: The hand in the figure is pushing on the back of block A. Blocks A and B, with mB>mA, are connected by a massless string and slide on a frictionless surface. Is the force of the string on B larger than, smaller than, or equal to the force of the hand on A? Explain.
###### FREE Expert Solution
Newton's second law:

$$\Sigma F = ma$$

Both blocks share the same acceleration $a$. The hand must accelerate both A and B, so $F_{hand} = (m_A + m_B)a$, while the string must accelerate only B, so $F_{string} = m_B a$. Hence $F_{string} < F_{hand}$: the force of the string on B is smaller than the force of the hand on A.
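Plugging in illustrative numbers (masses and force chosen by me, with $m_B > m_A$; not from the problem statement) makes the comparison concrete:

```python
# Illustrative values (not from the problem statement), SI units
m_A, m_B, F_hand = 1.0, 3.0, 8.0

a = F_hand / (m_A + m_B)   # one shared acceleration for both blocks
F_string = m_B * a         # the string only has to accelerate block B

print(F_string)            # 6.0, which is less than F_hand = 8.0
```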
In what units to quote the thermal Blackbody temperature
Just a quick question relating to the thermal Blackbody temperature of a celestial object. In the c.g.s regime of astrophysics, is it more 'sophisticated' to quote the thermal Blackbody temperature in units of Kelvin ($\mathrm{K}$) or in units of $\mathrm{keV}$? That is to say to use $k_{B}T_{\mathrm{bb}}$ and convert to an energy?
$$S_{\lambda} = \frac{2 \pi h c^2}{\lambda^5} \frac{1}{e^{hc/\lambda kT} - 1}$$
where $h$ is in $\mathrm{J \cdot s}$, $c$ in $\mathrm{m/s}$, $\lambda$ in $\mathrm{m}$, $k$ in $\mathrm{J/K}$ and $T$ in $\mathrm{K}$. So the temperature is just in kelvin, not in energy. This formula, with these units, gives the spectral exitance $S_{\lambda}$ in $\mathrm{W/m^2/m}$, i.e. the emitted power per unit area per unit wavelength. There is no need to convert a given temperature to energy.
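That said, converting between the two conventions is a one-liner when a spectrum is quoted in $\mathrm{keV}$; a quick sketch (the Boltzmann constant in eV/K is the CODATA value):

```python
k_B = 8.617333262e-5   # Boltzmann constant in eV/K (CODATA 2018)

def kelvin_to_keV(T):
    """Return k_B * T in keV for a temperature T in kelvin."""
    return k_B * T / 1e3

# A ~1 keV thermal blackbody corresponds to roughly 1.16e7 K:
print(round(kelvin_to_keV(1.16e7), 2))   # 1.0
```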
• Did you mean to write $\mathrm{W/ m^2/m}$? – Sir Cumference Oct 25 '16 at 21:25
# Math LaTeX Font Size

When the documents are displayed in a classroom, the 'u' and 'v' look too similar, so I need to change the math font in my documents; I need a font with pointy vees. However, \small doesn't work in the equation environment. Is there another way to change the font in math mode?

### Default settings

The default math font settings are specified in fontdef.dtx in the LaTeX distribution and are compiled into fontmath.ltx; without explicitly setting the font, everything works perfectly well. The default font size for LaTeX is 10pt; the other available sizes are 8pt, 9pt, 11pt, 12pt, 14pt, 17pt and 20pt.

### Changing the math font

With unicode-math you can switch to a font with more distinct letter shapes, for example XITS Math: in LaTeX, \setmainfont{XITS} and \setmathfont{XITS Math}; in ConTeXt, \setupbodyfont[xits].

### Changing the size in math mode

A text size command can be applied inside math material via \mbox, for example $$\mbox{\Huge 3x+3=\mu}$$. You can also use \fontsize{size}{baselineskip} for arbitrary sizes (for example 50pt/5pt, which can be compared with \Huge and \tiny), or \scalebox to change the font size of your mathematical equations.

### Font styles

The most common font styles in LaTeX are bold, italics and underlined, but there are a few more. The \textsl command, for instance, sets the text in a slanted style which makes it look a bit like italics, but not quite.

### \text and math styles

\text of package amstext (or amsmath) uses the current text font but adapts the size according to the current math style. Moreover, this is only done at the regular font-size intervals. It needs \mathchoice, which has an efficiency impact: the text is set four times, once for each math style, and later, when TeX knows the actual style, the unused versions are discarded.
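Combining the \mbox and \scalebox ideas above, here is a minimal compilable sketch (my own example, untested against any particular TeX distribution; note that math material inside \mbox needs its own $...$, which the snippet above glosses over):

```latex
\documentclass{article}
\usepackage{graphicx} % provides \scalebox
\begin{document}
% A text size command applied inside math via \mbox:
\[ \mbox{\Huge $3x + 3 = \mu$} \]
% \scalebox resizes a whole equation by a factor:
\[ \scalebox{1.5}{$\displaystyle \int_0^1 x^2\,dx = \frac{1}{3}$} \]
\end{document}
```

Note that \scalebox distorts the optical weight of the glyphs, while class options and \fontsize select properly designed sizes, so the latter are preferable when available.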
My Math Forum: Average cost differentiation
April 26th, 2011, 02:33 PM #1 Newbie Joined: Apr 2011 Posts: 5 Thanks: 0 Average cost differentiation Hi, I need some help to derive the AC (average cost) function and find the output which minimises the AC. I've been given: TC = 24 + 0.3q², TR = 297 − 13q².
April 26th, 2011, 06:41 PM #2 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: Average cost differentiation

The average cost AC is found by dividing the total cost TC by the output quantity q:

$AC=\frac{TC}{q}=\frac{24+0.3q^2}{q}=24q^{-1}+0.3q$

Now, to find the output that minimizes AC, we differentiate with respect to q and equate to 0:

$\frac{d}{dq}AC=-24q^{-2}+0.3=\frac{3\left(q^2-80\right)}{10q^2}=0$

Since this is equal to zero when the numerator is equal to zero, we have:

$q^2-80=0$

Taking the positive root:

$q=\sqrt{80}=4\sqrt{5}$

To ensure that this is a minimum, we note $8<q<9$ and:

$AC'(8)<0$ and $AC'(9)>0$

Now, if q must be a whole number of units, then we should round to q = 9. This would have been a nicer problem if we had been given TC = 24.3 + 0.3q².

Something else we may use is "When average cost is neither rising nor falling (at a minimum or maximum), marginal cost equals average cost." To see why this works, consider:

$AC=\frac{TC}{q}$ so $AC'=\frac{q\cdot TC'-TC}{q^2}$

Setting $AC'=0$ gives $q\cdot TC'=TC$, hence $TC'=\frac{TC}{q}=AC$, i.e. marginal cost equals average cost there.

Now, marginal cost MC is the derivative of TC with respect to q:

$MC=\frac{d}{dq}\left(24+0.3q^2\right)=0.6q$ and $AC=24q^{-1}+0.3q$

Equating the two, we have:

$0.6q=24q^{-1}+0.3q$

$0.3q=24q^{-1}$

$0.3q^2=24$

$q^2=80\:\therefore\:q=4\sqrt{5}$

as we found before.
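The result above can also be sanity-checked numerically; a minimal sketch (plain Python, using the TC function given in the question):

```python
import math

def AC(q):   # average cost TC/q for TC = 24 + 0.3*q**2
    return 24 / q + 0.3 * q

def MC(q):   # marginal cost d(TC)/dq
    return 0.6 * q

q_star = 4 * math.sqrt(5)   # the claimed minimiser, about 8.944

# AC is higher a little to either side of q*, so q* is a minimum:
print(AC(q_star - 0.1) > AC(q_star) < AC(q_star + 0.1))   # True
# At the minimum, marginal cost equals average cost:
print(abs(MC(q_star) - AC(q_star)) < 1e-9)                # True
```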
April 26th, 2011, 10:41 PM #3 Newbie Joined: Apr 2011 Posts: 5 Thanks: 0 Re: Average cost differentiation Thanks that really helped alot
April 26th, 2011, 10:48 PM #4 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: Average cost differentiation You're welcome and welcome to the forum! Does this question have more to it? I'm wondering where the total revenue function comes into play.
April 26th, 2011, 11:56 PM #5 Newbie Joined: Apr 2011 Posts: 5 Thanks: 0 Re: Average cost differentiation oh yeah the TR is then used to; set up the firms profit function and then find the output that will maximize the profit.
April 27th, 2011, 12:00 AM #6 Newbie Joined: Apr 2011 Posts: 5 Thanks: 0 Re: Average cost differentiation But as far as I could see, I didn't need AC to work out a profit function, as ? = TR − TC. My maximizing output was 11.165, or 11 rounded. I wonder if I was right.
April 27th, 2011, 12:15 AM #7 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: Average cost differentiation I've never seen ? used to represent profit, but any character can be used! $\pi=TR-TC=\left(297-13q^2\right)-\left(24+0.3q^2\right)=273-13.3q^2$ Obviously, the profit function has its maximum at q = 0...are you sure the TR function is correct?
April 27th, 2011, 12:29 AM #8 Newbie Joined: Apr 2011 Posts: 5 Thanks: 0 Re: Average cost differentiation Its what i got on my sheet... but the TR and TCs numbers are based on my student ID number so everyones numbers are meant to be different...
April 27th, 2011, 12:37 AM #9 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: Average cost differentiation How did you get 11.165?
Align end effector of robot with vector
Hi!
As it says in the header, I want to align my end effector (EE) with a vector in space. In my case the z-axis of the EE should be aligned with the vector. This means that the x- and y-axes are not specified.

Right now I only manage to navigate the EE to a certain pose where the x-, y-, and z-axes are all determined. But sometimes the robot cannot reach this pose due to collisions or being out of reach. A simple turn around the z-axis would do the trick: the pose could then be reached while the goal of "aligning with the vector" is still achieved.
My question: Is there an implementation in ROS or particularly in Moveit to automatically solve this?
Best regards!
Edit:
My approach right now is to calculate the path to the target with angles from -180° to +180° in steps of 10° (all of them fulfil the requirement of alignment with the vector, but not all of them are reachable by the robot). After that I know which angles are possible and can choose between them. But this takes some minutes, as I have to set up the planner each time.

So again the question: I'm looking for a way to do this faster. Maybe there is an existing implementation? For example, constraints on the target pose or something similar?
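The "sweep the free angle" idea can be sketched outside of MoveIt: any rotation about the target vector keeps the z-axis aligned, so candidate orientations can be generated and then tested one by one with IK. A rough NumPy sketch (the function and the frame construction are my own, not a ROS API):

```python
import numpy as np

def align_z_with(vector, yaw):
    """Rotation matrix whose z-axis points along `vector`, with the
    remaining degree of freedom fixed by `yaw` (radians)."""
    z = np.asarray(vector, dtype=float)
    z /= np.linalg.norm(z)
    # Pick any helper axis not parallel to z to build an orthonormal frame
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, z)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])
    # Rotate about the local z by yaw to pick one equivalent orientation
    c, s = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R @ Rz

# Candidate orientations every 10 degrees, as in the question; every one
# of them has its z-axis along the target vector:
candidates = [align_z_with([0, 0, -1], np.deg2rad(a))
              for a in range(-180, 180, 10)]
print(all(np.allclose(Rc[:, 2], [0, 0, -1]) for Rc in candidates))  # True
```

Each candidate matrix would then be converted to a quaternion and passed to the IK service; the first reachable one wins, which is essentially the approach the edit above describes, minus the per-angle planner setup.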
Hey bluefish, I know the question is rather old, but could you tell me a bit about how you managed to align your end-effector with a given vector? I tried to getRPY->CurrentVector (using sin, cos) -> create quaternion between CurrentVector <> given goal vector. Failed due to wrong current vector...
( 2017-02-03 08:40:55 -0500 )edit
Perhaps you can use the goal tolerance part of the planning request for this. See MotionPlanRequest: Setting up a Pose Request - Define a tolerance on the MoveIt wiki.
Edit:
Hi, thanks for your answer! This solution really seems interesting. However, the C++ API described here seems to be different from the one described in the moveit-c++-tutorial.
Can I use them side by side? Using the move-group-interface there I can just find the command group.setGoalTolerance(0.001), which sets both angle and position tolerance at the same time, I think
Oh! No I was wrong! There is also group.setGoalOrientationTolerance(). But there I can't limit the tolerance to one angle around an axis, can I?
The page I linked is a bit more low-level. The page you linked describes using moveit_msgs::OrientationConstraint, but as far as I know such a constraint will influence planning of the whole trajectory, not just the state at the end. The methods you mention don't seem to support what you want. From the moveit::planning_interface::MoveGroup::setGoalTolerance docs:
Set the tolerance that is used for reaching the goal. For joint state goals, this will be distance for each joint, in the configuration space (radians or meters depending on joint type). For pose goals this will be the radius of a sphere where the end-effector must reach. [..]
I don't think this can then be used to express a rotational tolerance around a single axis.
Perhaps you could use the Descartes planner from ROS-Industrial for your use case. As far as I understand, it supports 'toleranced frames', which could be used to indicate a "don't care" for orientation along a particular axis. See wiki/descartes_trajectory - AxialSymmetricPt.
Hi, thanks for your answer! This solution really seems interesting. However, the C++ API described here seems to be different from the one described in the moveit-c++-tutorial.
( 2015-05-08 02:45:35 -0500 )edit
Can I use them side by side? Using the move-group-interface there I can just find the command group.setGoalTolerance(0.001);, which sets both angle and position tolerance at the same time, I think
( 2015-05-08 02:48:03 -0500 )edit
Oh! No I was wrong! There is also group.setGoalOrientationTolerance();. But there I can't limit the tolerance to one angle around an axis, can I?
( 2015-05-08 02:50:52 -0500 )edit
that sounds really like what I'm looking for. I'll try it and will report :)
( 2015-05-11 05:40:25 -0500 )edit
Hi! I didn't manage to set up the descartes package. However, I came up with another solution. Again I turn around the axis in steps of 10°, but this time I use the inverse kinematics. This takes only 1 or 2 seconds and perfectly suits my implementation. Thanks for your support!!
( 2015-05-20 03:12:39 -0500 )edit
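The stepped-rotation idea described in this thread (trying goal orientations rotated in fixed increments about the approach axis, then testing each with the IK solver) can be sketched in plain Python. Note this is an illustrative sketch only; the helper names below are made up for this example and are not part of the MoveIt API:

```python
import math

def quat_about_z(theta):
    """Unit quaternion (x, y, z, w) for a rotation of theta radians about the z-axis."""
    return (0.0, 0.0, math.sin(theta / 2.0), math.cos(theta / 2.0))

def quat_mul(q1, q2):
    """Hamilton product of two (x, y, z, w) quaternions."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    )

def candidate_orientations(q_align, step_deg=10):
    """Rotate the aligned orientation about its local z-axis in fixed steps.

    Each returned quaternion is one candidate goal orientation; a planner
    would try them in turn until the IK solver accepts one."""
    return [quat_mul(q_align, quat_about_z(math.radians(a)))
            for a in range(0, 360, step_deg)]
```

Each candidate would then be handed to the IK solver (or to set_pose_target) until one yields a valid plan, which matches the "1 or 2 seconds" approach reported above.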
Ok. Did you encounter specific difficulties getting it to work? It would be very valuable if you could report those then.
( 2015-05-20 04:53:18 -0500 )edit
I stated my problem here. However, I didn't try too hard to solve it, as I found another convenient solution for my problem regarding aligning with the vector. Anyway, thanks again @gvdhoorn for help on all channels!!
( 2015-05-21 05:37:51 -0500 )edit
You can align the end effector without actually specifying the whole pose. Let's say we keep the position but only set the orientation of the end effector.
Here is the solution in Python, adapted slightly from the MoveIt tutorial:
pose_target = geometry_msgs.msg.Pose()
# Then set pose_target position as the current position
pose_target.position=group.get_current_pose().pose.position
# Then set orientation as you want
pose_target.orientation.w = 1.0
group.set_pose_target(pose_target)
# Planning
plan1 = group.plan()
group.go()
I don't know if it's really what you want but that's the first solution I have in mind
Hi! Thanks for your answer. I tried your approach. The problem is that the orientation values are automatically set to 0.0 when not specified. Then the pose is fully determined again and no automatic turn around the target-vector is possible.
( 2015-05-07 10:13:14 -0500 )edit
You can set the orientation quaternion in the direction you want. You can look at function to transform orientation from roll pitch yaw to quaternion. This link for example http://answers.ros.org/question/69754...
( 2015-05-07 20:43:25 -0500 )edit
I'm not sure you got me right. I know how to set the direction and I know how to set it to align with the vector. But setting the orientation completely means that the system is over-determined for this problem, because the EE is aligned when turned 0° around the z-axis, but also when turned 90°.
( 2015-05-08 01:21:00 -0500 )edit
I added some lines to the question to clarify the problem.
( 2015-05-08 01:38:02 -0500 )edit
|
|
# Need to show {x | f(x) ≤ g(x)} closed if f,g continuous.
• August 14th 2009, 09:06 PM
numberstrong
Need to show {x | f(x) ≤ g(x)} closed if f,g continuous.
I'm sure this is pretty elementary but I am intellectually needy.
I need to show that with $f,g :X \rightarrow Y$ both continuous and with $Y$ in the order topology, the set $\{x \ | \ f(x) \leq g(x) \}$ is closed.
My work so far: I am trying to show that the complement is open. If we denote the given set by $A$, then the complement is $A^c = \{x \ | \ f(x) > g(x) \}$. For a particular $x$ satisfying $f(x) > g(x)$ we have one of two cases:
(1) $f(x)$ is the immediate successor of $g(x)$. In this case, let $U \subset Y$ be defined by $U = \{y \ | \ y > g(x)\}$, and let $V \subset Y$ be defined by $V = \{y \ | \ y < f(x) \}$.
(2) There exists some $y_0$ satisfying $f(x) > y_0 > g(x)$; in this case, define $U \subset Y$ as $U = \{y \ | \ y > y_0\}$, and define $V \subset Y$ as $V = \{y \ | \ y < y_0\}$ for some such $y_0$.
In both of these cases, $U,V$ are open sets (since $Y$ has the order topology), and since $f,g$ are continuous, we have that $f^{-1}(U)$ and $g^{-1}(V)$ are both open in $X$.
That's the work that I've done so far, with some coaching from a professor. Now what I would like to do is create a collection of sets $\{U_x\}_{x \in X}$, and then claim that since $f^{-1}(U_x)$ is open for each $x$ ( $U_x$ is open and $f$ is continuous), we can take the union $\bigcup_{x \in X} U_x$ and claim it to be open in $X$. This is where my reasoning gets a little blurry, though. Is this union really the same as $\{x \ | \ f(x) > g(x)\}$? It seems in a certain sense that it is, since each $f^{-1}(U)$ produces the set of inputs such that the outputs of $f(x)$ are greater than the outputs of $g(x)$ for that particular $x$. But in another sense it seems obvious that this does NOT compose the set in question, since if we allow $x$ to range over all possible values then the union will include a great many values for which in general it is not true that $f(x) > g(x)$. ALSO, and perhaps more vexingly, the current argument does not make use of the continuity of $g$.
Can anyone help with this? Thanks!
-Steve
• August 14th 2009, 10:28 PM
ynj
if f(x0)>g(x0) for certain x0, let u=(f(x0)-g(x0))/3
then there exists v such that for any |x-x0|<v, |f(x)-f(x0)|<u, |g(x)-g(x0)|<u because of the continuity.
Thus f(x)>g(x) for any |x-x0|<v
So it will be an open set.
• August 14th 2009, 11:32 PM
numberstrong
Quote:
Originally Posted by ynj
if f(x0)>g(x0) for certain x0, let u=(f(x0)-g(x0))/3
then there exists v such that for any |x-x0|<v, |f(x)-f(x0)|<u, |g(x)-g(x0)|<u because of the continuity.
Thus f(x)>g(x) for any |x-x0|<v
So it will be an open set.
The problem does not state that the functions in question are real-valued, and so we cannot use things like "division by 3" in the answer. If they were both real-valued then the problem would be very easy. But we have no measures, no algebraic structure, no binary operations or any other structure on $Y$ other than that it is a topological space with the order topology and nothing on $X$ other than that it is an arbitrary topological space.
• August 15th 2009, 03:27 AM
algtop
Quote:
Originally Posted by numberstrong
$A^c = \{x \ | \ f(x) > g(x) \}$. For a particular $x$ satisfying $f(x) > g(x)$ we have one of two cases:
(1) $f(x)$ is the immediate successor of $g(x)$. In this case, let $U \subset Y$ be defined by $U = \{y \ | \ y > g(x)\}$, and let $V \subset Y$ be defined by $V = \{y \ | \ y < f(x) \}$.
(2) There exists some $y_0$ satisfying $f(x_0) > y_0 > g(x_0)$ for an arbitrary $x_0 \in X$; in this case, define $U \subset Y$ as $U = \{y \ | \ y > y_0\}$, and define $V \subset Y$ as $V = \{y \ | \ y < y_0\}$ for some such $y_0$.
For 2, let $x_0 \in A^c$. Then $f^{-1}(U) \cap g^{-1}(V)$ is an open set containing $x_0$ ($f$ and $g$ are continuous, and a finite intersection of open sets is open) and is contained in $A^c$. Since $x_0$ is an arbitrary point of $A^c = \{x \in X \ | \ f(x) > g(x) \}$ and the choice of $y_0$ depends on the choice of $x_0$, we conclude that $A^c$ is open.
For 1, let $x_0 \in A^c$; let $U = \{y \in Y \ | \ y > g(x_0)\}$ and $V = \{y \in Y \ | \ y < f(x_0) \}$. Again, $f^{-1}(U) \cap g^{-1}(V)$ is an open set containing $x_0$ and is contained in $A^c$ (A choice of $x_0$ is arbitrary in $A^c = \{x \in X \ | \ f(x) > g(x) \}$ as well ). Thus, $A^c$ is open.
Since $A^c$ is open, we conclude that A is closed.
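To address the original poster's question about taking a union: the per-point open sets constructed above can be assembled as follows (writing $U_{x_0}, V_{x_0}$ for the sets chosen in cases (1) and (2)):

```latex
A^c \;=\; \bigcup_{x_0 \in A^c} \left( f^{-1}(U_{x_0}) \cap g^{-1}(V_{x_0}) \right)
```

Each set in the union is open (this uses the continuity of both $f$ and $g$) and is contained in $A^c$, while the point $x_0$ itself witnesses that the union covers all of $A^c$. A union of open sets is open, so $A^c$ is open and $A$ is closed.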
|
|
## Stream: new members
### Topic: prime sum of two squares one sentence proof
#### Moritz Firsching (Jan 18 2021 at 20:51):
I can now fill in all the sorry in the above and also finished proving the following statements that will come handy later:
lemma z_at_least_one_fix (p:ℕ) (k: ℤ)
(hp : p.prime)
(h₁ : 0 < (1:ℤ))
(hk: 0 < k)
(h4: 1 * 1 + 4 * 1* k = p):
zagier_involution p hp (⟨(1, 1, k), h₁, h₁, hk, h4⟩ : S p) = ⟨(1, 1, k), h₁, h₁, hk, h4⟩ := sorry
lemma z_at_most_one_fix (p: ℕ) (x y z k: ℤ)
(hp: p.prime)
(hx: 0 < x)
(hy: 0 < y)
(hz: 0 < z)
(h4: x * x + 4 * y * z = p)
(h₁ : 0 < (1:ℤ))
(hk: 0 < k)
(h4fix: 1 * 1 + 4 * 1 * k = p)
(hfix: zagier_involution p hp (⟨(x, y, z), hx, hy, hz, h4⟩ : S p) =
(⟨(x, y, z), hx, hy, hz, h4⟩ : S p)):
(⟨(x, y, z), hx, hy, hz, h4⟩ : S p) = (⟨(1, 1, k), h₁ , h₁, hk, h4fix⟩ : S p) := sorry
lemma z_involution (p:ℕ) (k x y z: ℤ)
(hp : p.prime)
(hx : 0 < x)
(hy : 0 < y)
(hz : 0 < z)
(h4: x * x + 4 * y * z = p)
: zagier_involution p hp (zagier_involution p hp ⟨(x, y, z), hx, hy, hz, h4⟩) =
⟨(x, y, z), hx, hy, hz, h4⟩
:= sorry
All the proofs are still very messy, but now at least there's no sorry left.
Next I plan to check out how finite sets in lean work and try to prove something similar to
-- The cardinalities of a finite set S and its fixpoint set under an involution have equal parity.
lemma fix_equal_parity {α : Type*} {S: set α}
(hf: finite S )(f: α → α )
(hff: finite {x ∈ S | f x = x})
(hs: ∀ x: α, S x → (S (f x) ∧ f (f x) = x )):
odd (finite.to_finset hf).card ↔
odd (finite.to_finset (hff: finite {x ∈ S | f x = x})).card := sorry
Is this a good path?
#### Bryan Gin-ge Chen (Jan 18 2021 at 20:54):
(I guess this is a continuation of this thread.)
#### Moritz Firsching (Jan 18 2021 at 21:08):
Bryan Gin-ge Chen said:
(I guess this is a continuation of this thread.)
Yes, I renamed it and now it should all be in one thread (hopefully..)
Last updated: May 12 2021 at 23:13 UTC
|
|
## anonymous one year ago Points A (-10,-6) and B (6,2) are the endpoints of AB. What are the coordinates of point C on AB such that AC is 3/4 the length of AB? a. (0,-1) b. (2,0) c. (-2,-2) d. (4,1)
• This Question is Open
1. anonymous
@Owlcoffee
2. Owlcoffee
Are you fond of vectors?
3. anonymous
Thanks
4. anonymous
No
5. anonymous
Actually yes I am.
6. Owlcoffee
Okay, so I'll show you the vector way. We can express the vector whose tail is located on "A" and head on "B" like this: $\vec a ((6-(-10)), (2-(-6))) \rightarrow \vec a (16,8)$ $$\vec a$$ is a vector that goes from A to B, and we want to find a "C" on AB such that AC=3/4 the length of AB. This means a 3/4 part of the vector, which creates a new vector $$\vec w$$ that is $$\frac{ 3 }{ 4 } \vec a$$; we can do it by expressing the vector in its referential form: $\vec a = 16 \vec i + 8 \vec j \rightarrow \frac{ 3 }{ 4 } \vec a = (\frac{ 3 }{ 4 } \cdot 16) \vec i + (\frac{ 3 }{ 4 } \cdot 8) \vec j$ $\frac{ 3 }{ 4 } \vec a = \vec w$ $\vec w = (3 \cdot 4) \vec i + (3 \cdot 2)\vec j$ $\vec w = 12 \vec i + 6 \vec j$ This implies that the new coordinates lie on $\vec w (12,6)$. And this implies, reversing the operation of the vector, since the tail still lies on A but the head is now on the point C: $x_c-(-10)=12$ $y_c - (-6)=6$
7. anonymous
Did it kick everyone offline?
8. Owlcoffee
No, just some changes in the website.
9. anonymous
So it would be D?
10. Owlcoffee
no, solve the first equation for xc and the other for yc, that'll give you the coordinates of the point C
11. anonymous
So B
12. Owlcoffee
correct.
13. anonymous
When I said I was fond of vectors I meant in science...
14. anonymous
Gave you a medal... Thanks so much.
15. Owlcoffee
Oh.. Well, if I were to do it without them, I would suggest you find the distance between those two points, then multiply it by 3/4; after that the process is completely analogous.
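The vector computation above is easy to verify numerically; a quick sketch (not part of the original discussion):

```python
def point_along_segment(a, b, t):
    """Return A + t*(B - A): the point a fraction t of the way from A to B."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

A = (-10, -6)
B = (6, 2)
C = point_along_segment(A, B, 3 / 4)  # AC is 3/4 the length of AB
print(C)  # → (2.0, 0.0), i.e. answer (b)
```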
|
|
resriel - 1 year ago 35
CSS Question
# How to create a grid of elements (divs/spans) in perfect squares with their backgrounds set to images
I've been working on a website that serves as a gallery for fantasy art, an evolving project, and I've currently hit a barricade in where to go from something I can't get to work. I tried Masonry and Wookmark both, and had issues with both, so I'm trying to do something else. I want to spawn either divs or spans (I'm not sure which to use) within "grid" with the class identifier "images", and set their backgrounds to specific image sources that "cover", centered, so I basically have a grid of square windows into these images; when clicked, they bring up a lightbox (which I've successfully created). Right now I've managed to get the images to show, but the containers are smooshed down and have no padding left or right.
Truth be told, I have no idea what to do with the relative/absolute positioning and inline/block display parameters, and I feel this is where I'm going wrong but I don't know really what these things entail.
Segment of the HTML:
<div id="grid" class="grid"></div>
<br>
CSS:
.grid {
z-index:2;
display: inline-block;
}
.images {
display:inline-block;
position:relative;
width:25%;
height:25%;
z-index:2;
background-size: cover;
background-attachment: fixed;
background-repeat: no-repeat;
background-position: center;
}
Javascript:
var mix = shuffle(Object.keys(backInfo));
function unlockImages () {
setTimeout(function () {
if (mix !== undefined) {
var input = mix.shift();
var entry = backInfo[input];
var elem = document.createElement("div");
elem.setAttribute("class", "images");
elem.setAttribute("id", input);
elem.setAttribute("title", entry.caption);
elem.setAttribute("onclick", "javascript:changeImage(" + input + ");");
document.getElementById("grid").appendChild(elem);
document.getElementById(input).style.backgroundImage = "url(" + entry.image + ")";
$("#" + input).fadeTo(0, 0);
$("#" + input).fadeTo(20000, 1);
unlockImages();
}
}, 0)
}
And a live preview of what's going on at the moment: http://www.dreamquest.io
I think the problem you are describing is css. I played with your css a little and came up with this:
.grid {
z-index:2;
display: inline-block;
height: 500px;
width: 1500px;
}
.images {
display:inline-block;
position:relative;
width:25%;
height:25%;
z-index:2;
margin-right: 0.25em;
margin-left: 0.25em;
margin-top: 0.5em;
background-size: cover;
background-repeat: no-repeat;
background-position: center;
}
Some things to note with your css:
1. When you are setting a percentage width and height, your parent container has to have a set size. Since you didn't set a height value in the .grid class, your images get squished because 25% of 0 height is 0.
2. background-attachment: fixed; forces your picture not to auto-resize; I am assuming you want the resizing, so it is removed in the CSS above.
3. You were using padding, which is why you are not getting the spacing between images. Padding applies to the interior of the element; if you want space between elements, you need to use margin, which I have used in the above css. hth
|
|
# Finding the determinant of a matrix given determinants of other matrices
Consider the following matrices:
$P$ = $\begin{pmatrix}a&2d&1\\ b&2e&-2\\ c&2f&-1\end{pmatrix}$
$U$ = $\begin{pmatrix}a&b&c\\ 2&3&2\\ d&e&f\end{pmatrix}$
$V$ = $\begin{pmatrix}a&b&c\\ d&e&f\\ 1&5&3\end{pmatrix}$
If you are given that $det(P)$ = $10$ and $det(U)$ = $-3$, then find the value of $det(V)$.
I personally am finding this problem very confusing. I know the general rules of how row operations and row changes affect the determinant of a matrix but I am not sure which ones are being applied here.
For the first matrix, we are given arbitrary values for the first two columns and for the last two we are given arbitrary values for the rows instead. So I am quite confused on how to work with this.
Any help?
• The determinant of a matrix equals the determinant of its transpose. – saulspatz Mar 12 '18 at 19:23
$$\mathbf P=\begin{pmatrix}a&2d&1\\b&2e&-2\\c&2f&-1\end{pmatrix},\,\mathbf U=\begin{pmatrix}a&b&c\\2&3&2\\d&e&f\end{pmatrix},\,\mathbf V=\begin{pmatrix}a&b&c\\d&e&f\\1&5&3\end{pmatrix}$$
For later use, we take the transpose of $\mathbf P$:
$$\mathbf P^\top=\begin{pmatrix}a&b&c\\2d&2e&2f\\1&-2&-1\end{pmatrix}$$
Recall that $\det\mathbf P=\det(\mathbf P^\top)$.
Denote by $\mathbf P_{i,j}$ the permutation matrix that, upon multiplication by a matrix $\mathbf A_{m\times n}$, swaps rows $i$ and $j$ in $\mathbf A$. So we can write
$$\mathbf P_{2,3}\mathbf U=\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}\mathbf U=\begin{pmatrix}a&b&c\\d&e&f\\2&3&2\end{pmatrix}$$
Recall that swapping any pair of rows/columns of a square matrix negates its determinant, so that $\det\mathbf U=-\det(\mathbf P_{i,j}\mathbf U)$ when $i\neq j$.
Now, observe that we can write the last row of $\mathbf V$ as a linear combination of the last rows of $\mathbf P^\top$ and $\mathbf P_{2,3}\mathbf U$:
$$\begin{pmatrix}1&5&3\end{pmatrix}=(-1)\begin{pmatrix}1&-2&-1\end{pmatrix}+\begin{pmatrix}2&3&2\end{pmatrix}$$
The determinant has the property that it is linear with respect to any given row, which is to say: focusing our attention on a single row, if we can write it as a linear combination of other row vectors, then we can expand the determinant as the sum of two component determinants. To illustrate in practice, we can write
$$\begin{vmatrix}a&b&c\\d&e&f\\1&5&3\end{vmatrix}=\begin{vmatrix}a&b&c\\d&e&f\\(-1)1&(-1)(-2)&(-1)(-1)\end{vmatrix}+\begin{vmatrix}a&b&c\\d&e&f\\2&3&2\end{vmatrix}$$
Aside: I highly recommend watching this lecture from MIT if you ever feel the need to brush up on the properties of the determinant. Strang does a great job of explaining them.
Next, we can pull out a factor of $-1$ from the first determinant, and simultaneously distribute a factor of $2$ along the second row of the first matrix,
$$\begin{vmatrix}a&b&c\\d&e&f\\1&5&3\end{vmatrix}=-\frac12\begin{vmatrix}a&b&c\\2d&2e&2f\\1&-2&-1\end{vmatrix}+\begin{vmatrix}a&b&c\\d&e&f\\2&3&2\end{vmatrix}$$
and we see that we have written $\det\mathbf V$ in terms of known determinants. We get
$$\det\mathbf V=-\frac12\det(\mathbf P^\top)+\det(\mathbf P_{2,3}\mathbf U)=-\frac12\det\mathbf P-\det\mathbf U=-5-(-3)=-2$$
If you want to take another way, you can do it by using the definition of a determinant:
det(P) = -2ae - 4cd + 2bf - 2ce + 2bd + 4af = 10
det(U) = -2ae - 3cd - 2bf + 2ce + 2bd + 3af = -3
det(V) = 3ae + 5cd + bf - ce - 3bd - 5af
You can see that det(V) = -$\frac{1}{2}$ det(P) - det(U). So we get det(V) = -5 + 3 = -2
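Both derivations can be sanity-checked numerically: the identity $\det V = -\tfrac{1}{2}\det P - \det U$ holds for any values of $a,\dots,f$. A short sketch:

```python
def det3(m):
    """Determinant of a 3x3 matrix (cofactor expansion along the first row)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

a, b, c, d, e, f = 2.0, -1.0, 3.0, 5.0, 0.5, -2.0  # arbitrary test values
P = [[a, 2 * d, 1], [b, 2 * e, -2], [c, 2 * f, -1]]
U = [[a, b, c], [2, 3, 2], [d, e, f]]
V = [[a, b, c], [d, e, f], [1, 5, 3]]
assert abs(det3(V) - (-0.5 * det3(P) - det3(U))) < 1e-9
```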
|
|
# BoxCox - Box-Cox Transform
Returns the Box-Cox transformation of the input data point(s).
## Syntax
BOXCOX(X, Lo, Hi, $\lambda$, Return)
X
is the real value(s) for which we compute the transformation: a single value or a one-dimensional array of cells (e.g., a row or column).
Lo
is the x-domain lower limit. If missing, Lo is assumed to be 0.
Hi
is the x-domain upper limit. If missing, Hi is assumed to be infinity.
$\lambda$
is the input power parameter of the transformation ($\lambda\in[0,1)$). If omitted, the default value of 0 is assumed.
Return
is a number that determines the type of return value: 1 (or missing) = Box-Cox transform, 2 = inverse Box-Cox transform, 3 = log-likelihood function (LLF) of the transform.
Return Description
1 or omitted Box-Cox transform
2 The inverse of the Box-Cox transform
3 Log-likelihood function of the transform
## Remarks
1. The BOXCOX() transform converts a one-bounded domain (e.g., $x\in(a,\infty)$ or $x\in(-\infty, b)$) into the unbounded domain $(-\infty,\infty)$.
2. If both the Lo and Hi arguments are given, BOXCOX() returns #VALUE!.
3. The Box-Cox transform is a useful data (pre)processing technique used to stabilize variance and make the data more normally distributed.
4. The Box-Cox transformation is defined as follows:
$$T\left ( x_{t}; \lambda, \alpha \right ) = \begin{cases} \dfrac{\left ( x_{t} + \alpha \right )^{\lambda}-1}{\lambda} & \text{ if } \lambda \neq 0 \\ \log \left ( x_t + \alpha \right ) & \text{ if } \lambda= 0 \end{cases}$$ Where:
• $x_{t}$ is the input value of the input time series at time $t$.
• $\lambda$ is the input scalar value of the Box-Cox transformation.
• $\alpha$ is the shift parameter.
• $\left(x_t +\alpha \right) \gt 0$ for all $t$ values.
5. Using the negated values of $\{x_t\}$, we can apply the Box-Cox transform to a domain with an upper bound:
$$F(x_t;\lambda,b)=\begin{cases}\dfrac{(b-x_t)^\lambda-1}{\lambda} & \text{ if } \lambda\neq0\\ \ln{(b-x_t)} & \text{ if } \lambda=0\end{cases}$$
6. To calculate the inverse of the BoxCox transform:
• Domain with lower bound (a):
$$x=a+e^\frac{\ln{(\lambda y+1)}}{\lambda}$$
• Domain with upper bound (b):
$$x=b-e^\frac{\ln{(\lambda y+1)}}{\lambda}$$
7. To compute the log-likelihood function (LLF), the Box-Cox function assumes a Gaussian distribution in which parameters ($\mu,\sigma^2$) are calculated using the maximum-likelihood estimate (MLE) method.
$$LLF_{\textit{BoxCox}} = \frac{-N}{2}\times( \ln( 2\pi\hat{\sigma}^2)+1)$$ $$\hat{\sigma}^2=\frac{\sum_{t=1}^N{(y_t-\mu)^2}}{N}$$ Where:
• $\hat{\sigma}^2$ is the biased estimate of the variance.
• $N$ is the number of non-missing values in the sample data.
• $y_t$ is the t-th transformed observation.
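As an illustration of remarks 4 and 6 above, here is a minimal Python sketch of the lower-bound transform and its inverse. This is not the NumXL implementation; the function names and the choice $\alpha=-\text{Lo}$ are assumptions made for this example:

```python
import math

def boxcox(x, lam=0.0, lo=0.0):
    """Box-Cox transform for a lower-bounded domain (lo, inf), i.e. alpha = -lo."""
    if lam == 0.0:
        return math.log(x - lo)
    return ((x - lo) ** lam - 1.0) / lam

def inv_boxcox(y, lam=0.0, lo=0.0):
    """Inverse of boxcox(): maps a transformed value back to (lo, inf)."""
    if lam == 0.0:
        return lo + math.exp(y)
    return lo + (lam * y + 1.0) ** (1.0 / lam)

# Round trip: x -> boxcox -> inv_boxcox recovers x, as in the example table below.
assert abs(inv_boxcox(boxcox(4.0, lam=0.5), lam=0.5) - 4.0) < 1e-9
assert abs(inv_boxcox(boxcox(4.0, lam=0.0), lam=0.0) - 4.0) < 1e-9
```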
## Examples
Example 1:
Date Data BOXCOX Inv-BOXCOX
January 10, 2008 -0.30 0.89 -0.30
January 11, 2008 -1.28 0.20 -1.28
January 12, 2008 0.24 1.18 0.24
January 13, 2008 1.28 1.63 1.28
January 14, 2008 1.20 1.60 1.2
January 15, 2008 1.73 1.80 1.73
January 16, 2008 -2.18 -0.97 -2.18
January 17, 2008 -0.23 0.93 -0.23
January 18, 2008 1.10 1.56 1.1
January 19, 2008 -1.09 0.36 -1.09
January 20, 2008 -0.69 0.65 -0.69
January 21, 2008 -1.69 -0.20 -1.69
January 22, 2008 -1.85 -0.40 -1.85
January 23, 2008 -0.98 0.45 -0.98
January 24, 2008 -0.77 0.60 -0.77
January 25, 2008 -0.30 0.89 -0.3
January 26, 2008 -1.28 0.20 -1.28
January 27, 2008 0.24 1.18 0.24
January 28, 2008 1.28 1.63 1.28
January 29, 2008 1.20 1.60 1.2
January 30, 2008 1.73 1.80 1.73
January 31, 2008 -2.18 -0.97 -2.18
February 1, 2008 -0.23 0.93 -0.23
February 2, 2008 1.10 1.56 1.1
February 3, 2008 -1.09 0.36 -1.09
February 4, 2008 -0.69 0.65 -0.69
February 5, 2008 -1.69 -0.20 -1.69
February 6, 2008 -1.85 -0.40 -1.85
February 7, 2008 -0.98 0.45 -0.98
Formula Description (Result)
=BOXCOX(\$B\$2:\$B\$30,3,0.5,3) LLF BoxCox (-33.35)
|
|
# Magnetic Field, Potential, Velocity
## Homework Statement
An electron travels with speed 1.0 x 10^7 m/s between the two parallel charged plates shown in the adjacent figure. The plates are separated by 1.0 cm and are charged by a 200 V battery. What magnetic field strength and direction will allow the electron to pass between the plates without being deflected?
## Homework Equations
$$B=\frac{\mu}{4\pi}\,\frac{qv\sin\theta}{r^{2}}$$
## The Attempt at a Solution
well I'm pretty sure that F = 0 N, and the direction of the field has to be in the same direction as the velocity, so sin(theta) = 0...... but I have no idea how to find the field strength, anyone have any ideas???
what can I do with this voltage... I need a current, don't I?... I can't find the right formulas, and I'm getting really frustrated.. argh
Defennder
Homework Helper
The question referred to a picture. It may require a picture or at least an accurate description before we can help. Is the electron travelling in a direction parallel to the plates?
yes sorry... the electron is traveling parallel through the parallel plates.. i see if i can upload pic
Defennder
Homework Helper
Well if that's the case, no picture is required. Just think in terms of how much force is needed to counter-balance the force exerted on the particle by the E-field. Then use the Lorentz force equation to find the magnetic flux density needed.
oh i dont know if we are meant to do it like that, because we havnt done flux or lorentz....
Defennder
Homework Helper
Well, "magnetic flux density" is another word for "magnetic field strength" and "Lorentz force" is a more general term for "force on charged particle due to B-field".
oh ok so F=(E+v X B)..... i dont understand because doesnt f have to equal 0, for the electron to pass through undisturbed, as soon as there is a force the electron is going to change direction??? unless its opposing the velocity, which would make it slow down????
thus E=V/s=200/.01=2000N/C but how do i find B?
Defennder
Homework Helper
oh ok so F=(E+v X B)..... i dont understand because doesnt f have to equal 0, for the electron to pass through undisturbed, as soon as there is a force the electron is going to change direction??? unless its opposing the velocity, which would make it slow down????
You're missing out q here. F has to be zero in order for the electron to pass undeflected. Note that the force exerted by the E-field and that by the B-field is perpendicular to its velocity, and hence does not affect its speed in that direction.
thus E=V/s=200/.01=2000N/C but how do i find B?
Use the equation you stated earlier. Although another similar approach would be to separate the two equations: F=qE and F=Bqv and this two are acting in opposite directions, so you equate them and solve for B. Note that you have to indicate the direction in which the B-field is applied.
yes sorry i missed q... so qE=Bqv =>E=Bv=>B=E/v=2000/1x10^7=2x10^-4T in the direction of the velocity?
Defennder
Homework Helper
Check your value for E-field. And note note that magnetic force will always act in a direction perpendicular to both the velocity and direction of the B field. Use the right-hand rule to get the direction.
ya its 20000 not 2000.... oh ok yes i remember now.... duh... lol so the direction of B will be out of the page? since the E travels + to -....
Defennder
Homework Helper
Remember this is an electron, not a positive charged particle.
na im confused... what does this mean... wont the electron want to move towards the positive charged plate??
oh for an electron its the exact opposite, so use left hand, so its going into the page? but is my maths correct, does 2mT sound right?
no wait before i was using my left hand ad it was saying out.... my right says in, thereofre it must be out of the page???? plz help im really confused
Defennder
Homework Helper
No I meant to say that when you apply the F=qv X B vector equation you must note that the resulting direction using the right-hand rule holds for a positive charge. The negative charge goes in the opposite direction.
Defennder
Homework Helper
Don't use your left hand. Your right hand would do, just reverse the direction.
yes so if my right hand says the direction is into the page for a +ve charge it is out the page for a -ve charge ie, an electron?
Defennder
Homework Helper
EDIT: Ok, you're right on this. You can visualise the +ve charged particle as moving from the right to the left. By the way, don't switch it around twice; either consider an equivalent positive charged particle or just reverse the direction at the end of your hand-twisting. Don't do both.
Just consider a positive charge from left to right (same as the electron). Assume the B-field passes into the page. Use the right hand rule and reverse the direction at the end. Is the result the direction you want? If not, assume instead the B-field passes out of the page and do the same.
Last edited:
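Putting the corrected numbers from the thread together (a quick check, assuming the force-balance condition qE = qvB discussed above):

```python
V = 200.0    # battery voltage (V)
d = 0.01     # plate separation (m)
v = 1.0e7    # electron speed (m/s)

E = V / d    # uniform E-field between the plates (V/m)
B = E / v    # from qE = qvB, the B-field magnitude that cancels the electric force
print(E, B)  # E ≈ 2.0e4 V/m, B ≈ 2.0e-3 T
```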
|
|
# Catalyst mixing problem
Catalyst mixing problem
State dimension: 1
Differential states: 2
Continuous control functions: 1
Path constraints: 2
Interior point equalities: 2
The Catalyst mixing problem seeks an optimal policy for mixing two catalysts "along the length of a tubular plug flow reactor involving several reactions". (Quote and problem taken from the COPS library)
## Mathematical formulation
The problem is given by
$\begin{array}{llcl} \displaystyle \min_{x, w} &-1 + x_1(t_f) + x_2(t_f) \\[1.5ex] \mbox{s.t.} & \dot{x}_1 & = & w(t) ( 10 x_2(t) - x_1(t)), \\ & \dot{x}_2 & = & w(t) ( x_1(t) - 10 x_2(t)) - (1 - w(t)) \, x_2(t) , \\ & x(t_0) &=& (1, 0)^T, \\ & w(t) &\in& \{0,1\}. \end{array}$
## Parameters
In this model the parameters used are $t_0 = 0, \, \, t_f = 1$.
## Reference Solution
If the problem is relaxed, i.e., we demand that $w(t)$ lie in the continuous interval $[0, 1]$ instead of the binary choice $\{0,1\}$, the optimal solution can be determined by means of direct optimal control.
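For intuition about the dynamics, the state equations can be integrated with a crude forward-Euler scheme for any given control. This is a sketch only, not the COPS reference implementation:

```python
def simulate(w_of_t, n=1000, t0=0.0, tf=1.0):
    """Integrate the catalyst-mixing ODEs with forward Euler for a control
    function w(t) in [0, 1]; returns the objective -1 + x1(tf) + x2(tf)."""
    h = (tf - t0) / n
    x1, x2 = 1.0, 0.0  # initial condition x(t0) = (1, 0)
    for i in range(n):
        w = w_of_t(t0 + i * h)
        dx1 = w * (10.0 * x2 - x1)
        dx2 = w * (x1 - 10.0 * x2) - (1.0 - w) * x2
        x1, x2 = x1 + h * dx1, x2 + h * dx2
    return -1.0 + x1 + x2

# With w ≡ 0 nothing reacts: x stays at (1, 0) and the objective is 0.
print(simulate(lambda t: 0.0))
```

Note that with w ≡ 1 the sum x1 + x2 is conserved, so the objective is again 0; a nontrivial objective requires switching between the catalysts.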
## Source Code
Model descriptions are available in
|
|
# How to Factor Binomials
In algebra, binomials are two-term expressions connected with a plus sign or minus sign, such as ${\displaystyle ax+b}$. The first term always includes a variable, while the second term may or may not. Factoring a binomial means finding simpler terms that, when multiplied together, produce that binomial expression, which helps you solve it or simplify it for further work.
## Part 1Factoring Binomials
1
Factoring is when you break a large number down into its simplest divisible parts. Each one of these parts is called a "factor." So, for example, the number 6 can be evenly divided by four different numbers: 1, 2, 3, and 6. Thus, the factors of 6 are 1, 2, 3, and 6. The factors of 32 are 1, 2, 4, 8, 16, and 32. Both "1" and the number you're factoring are always factors. So, the factors of a small number, like 3, would simply be 1 and 3. Factors are only the perfectly divisible numbers, or "whole" numbers. You could divide 32 by 3.564, or 21.4952, but this won't lead to a factor, just another decimal.
2
A binomial is simply the addition or subtraction of two numbers, at least one of which contains a variable. Sometimes these variables have exponents, like ${\displaystyle x^{2}}$ or ${\displaystyle 5y^{4}}$. When first factoring binomials, it can help to reorder equations with ascending variable terms, meaning the biggest exponent is last. For example: ${\displaystyle 3t+6}$ → ${\displaystyle 6+3t}$ ${\displaystyle 3x^{4}+9x^{2}}$ → ${\displaystyle 9x^{2}+3x^{4}}$ ${\displaystyle x^{2}-2}$ → ${\displaystyle -2+x^{2}}$ Note how the negative sign stays in front of the 2. If a term is subtracted, just keep the negative in front of it.
3
This means you find the highest possible number that both parts of the binomial are divisible by. If you're struggling, simply factor both numbers on their own, then see what the highest matching number is. For example: Practice Problem: ${\displaystyle 3t+6}$. Factors of 3: 1, 3. Factors of 6: 1, 2, 3, 6. The greatest common factor is 3.
4
Once you know your common factor, you need to remove it from each term. Note, however, that you're simply breaking the terms down, turning each term into a small division problem. If you did it right, both equations will share your factor: Practice Problem: ${\displaystyle 3t+6}$. Find greatest common factor: 3. Remove factor from both terms: ${\displaystyle {\frac {3t}{3}}+{\frac {6}{3}}=t+2}$
5
In the last problem, you removed a 3 to get ${\displaystyle t+2}$. But you weren't just getting rid of the three entirely, simply factoring it out to simplify things. You can't just erase numbers without putting them back! Multiply your factor by the expression to finally finish. For example: Practice Problem: ${\displaystyle 3t+6}$. Find greatest common factor: 3. Remove factor from both terms: ${\displaystyle {\frac {3t}{3}}+{\frac {6}{3}}=t+2}$. Multiply the factor by the new expression: ${\displaystyle 3(t+2)}$. Final Factored Answer: ${\displaystyle 3(t+2)}$
6
If you did everything correctly, checking that you got it right should be easy. Simply multiply your factor by both individual parts in the parentheses. If it matches the original, unfactored binomial, then you did it all correctly. From start to finish, solve the expression ${\displaystyle 12t+18}$ to practice: Reorganize terms: ${\displaystyle 18+12t}$. Find greatest common factor: ${\displaystyle 6}$. Remove factor from both terms: ${\displaystyle {\frac {18}{6}}+{\frac {12t}{6}}=3+2t}$. Multiply the factor by the new expression: ${\displaystyle 6(3+2t)}$. Check Answer: ${\displaystyle (6*3)+(6*2t)=18+12t}$
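The whole procedure in Part 1 is mechanical enough to automate. Here is a small sketch that factors a binomial of the form at + b with integer coefficients using the greatest common factor:

```python
from math import gcd

def factor_binomial(a, b):
    """Factor a*t + b as g*( (a//g)*t + (b//g) ), where g = gcd(a, b)."""
    g = gcd(a, b)
    return g, a // g, b // g

g, coeff, const = factor_binomial(12, 18)
print(f"12t + 18 = {g}({coeff}t + {const})")  # → 12t + 18 = 6(2t + 3)
```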
## Part 2Factoring Binomials to Solve Equations
1
When solving an equation with binomials, especially complex binomials, it can seem like there is no way everything will match. For example, try to solve ${\displaystyle 5y-2y^{2}=-3y}$. One way to solve it, especially with exponents, is to factor first. Practice Problem: ${\displaystyle 5y-2y^{2}=-3y}$. Remember that binomials must only have two terms. If there are more than two terms, you can learn to solve polynomials instead.
2
This whole strategy relies on one of the most basic facts of math: anything multiplied by zero must equal zero. So if your equation equals zero, then one of your factored terms must equal zero! To get started, add and subtract so one side equals zero. Practice Problem: ${\displaystyle 5y-2y^{2}=-3y}$. Set to Zero: ${\displaystyle 5y-2y^{2}+3y=-3y+3y}$, giving ${\displaystyle 8y-2y^{2}=0}$
3
At this point, you can pretend the other side doesn't exist for a step. Just find the greatest common factor, divide it out, and then create your factored expression. Practice Problem: ${\displaystyle 5y-2y^{2}=-3y}$. Set to Zero: ${\displaystyle 8y-2y^{2}=0}$. Factor: ${\displaystyle 2y(4-y)=0}$
4
In the practice problem you are multiplying $2y$ by $4-y$, and the product must equal zero. Since anything multiplied by zero equals zero, this means either $2y$ or $4-y$ must be $0$. Create two separate equations to figure out what $y$ must be for either part to equal zero.
Practice Problem: $5y-2y^{2}=-3y$
Set to zero: $8y-2y^{2}=0$
Factor: $2y(4-y)=0$
Set both parts to $0$: $2y=0$ and $4-y=0$
5
You might have one answer, or more than one. Remember, only one part has to equal zero, so you might get a few different values of $y$ that solve the same equation. For the end of the practice problem:
$2y=0$
$\frac{2y}{2}=\frac{0}{2}$
$y = 0$
$4-y=0$
$4-y+y=0+y$
$y = 4$
6
If you got the right values for $y$, then you should be able to use them to solve the equation. It is as simple as trying each value of $y$ in place of the variable, as shown. Since the answers were $y = 0$ and $y = 4$:
$5(0)-2(0)^{2}=-3(0)$
$0+0=0$
$0=0$ This answer is correct.
$5(4)-2(4)^{2}=-3(4)$
$20-32=-12$
$-12=-12$ This answer is also correct.
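The substitution check at the end is easy to automate (a minimal Python sketch for this practice problem; the function names are made up):

```python
def left_side(y):   # 5y - 2y^2
    return 5*y - 2*y**2

def right_side(y):  # -3y
    return -3*y

# The two roots found by factoring 2y(4 - y) = 0
for root in (0, 4):
    assert left_side(root) == right_side(root)
```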
## Part 3: Handling Trickier Problems
1
Remember, factoring is finding out what numbers can divide into the whole. The expression $x^{4}$ is another way of saying $x \cdot x \cdot x \cdot x$. This means you can factor out an $x$ if the other term has one as well. Treat variables no differently from a normal number. For example:
$2t+t^{2}$ can be factored, because both terms contain a $t$. Your final answer would be $t(2+t)$.
You can even pull out multiple variables at once. For example, in $x^{2}+x^{4}$ both terms contain the same $x^{2}$. You can factor to $x^{2}(1+x^{2})$.
2
Take, for example, the expression $6+2x+14+3x$. This may seem like it has four terms, but look closely and you'll realize there are really only two. You can add like terms, and since both the $6$ and $14$ have no variable, and the $2x$ and $3x$ share the same variable, these can both be combined. Factoring is then easy:
Original Problem: $6+2x+14+3x$
Reorganize terms: $2x+3x+14+6$
Combine like terms: $5x+20$
Find greatest common factor: $5(x)+5(4)$
Factor: $5(x+4)$
3
A perfect square is a number whose square root is a whole number, like $9$ ($3 \cdot 3$), $x^{2}$ ($x \cdot x$), or even $144t^{2}$ ($12t \cdot 12t$). If your binomial is a subtraction problem with two perfect squares, like $a^{2}-b^{2}$, you can simply plug them into this formula:
Difference of perfect squares formula: $a^{2}-b^{2}=(a+b)(a-b)$
Practice Problem: $4x^{2}-9$
Find square roots: $\sqrt{4x^{2}}=2x$ and $\sqrt{9}=3$
Plug squares into formula: $4x^{2}-9=(2x+3)(2x-3)$
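Recognizing perfect squares and spot-checking the formula are both easy to do programmatically (an illustrative Python sketch; `math.isqrt` is the standard-library integer square root):

```python
from math import isqrt

def is_perfect_square(n):
    """A non-negative integer is a perfect square iff its integer square root squares back to it."""
    return n >= 0 and isqrt(n) ** 2 == n

assert is_perfect_square(9) and is_perfect_square(144)
assert not is_perfect_square(10)

# Spot-check a^2 - b^2 == (a + b)(a - b) for 4x^2 - 9, i.e. a = 2x, b = 3
for x in range(-5, 6):
    a, b = 2*x, 3
    assert a**2 - b**2 == (a + b) * (a - b)
```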
4
Just like the perfect squares, this is a simple formula for when you have one cubed term subtracted from another, for example $a^{3}-b^{3}$. Just like before, you simply find the cube root of each and plug them into a formula:
Difference of perfect cubes formula: $a^{3}-b^{3}=(a-b)(a^{2}+ab+b^{2})$
Practice Problem: $8x^{3}-27$
Find cube roots: $\sqrt[3]{8x^{3}}=2x$ and $\sqrt[3]{27}=3$
Plug cubes into formula: $8x^{3}-27=(2x-3)(4x^{2}+6x+9)$[1]
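The identity can be spot-checked numerically (a quick Python sketch over a few integer values of $x$):

```python
# Check a^3 - b^3 == (a - b)(a^2 + a*b + b^2) for 8x^3 - 27, i.e. a = 2x, b = 3
for x in range(-5, 6):
    a, b = 2*x, 3
    assert a**3 - b**3 == (a - b) * (a**2 + a*b + b**2)
```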
5
Unlike the difference of perfect squares, you can also easily handle added cubes, like $a^{3}+b^{3}$, with a simple formula. It's almost exactly the same as above, just with some pluses and minuses flipped. The formula is just as easy as the other two, and all you have to do is recognize the two cubes in the problem to use it:
Sum of perfect cubes formula: $a^{3}+b^{3}=(a+b)(a^{2}-ab+b^{2})$
Practice Problem: $8x^{3}+27$
Find cube roots: $\sqrt[3]{8x^{3}}=2x$ and $\sqrt[3]{27}=3$
Plug cubes into formula: $8x^{3}+27=(2x+3)(4x^{2}-6x+9)$[2]
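The sum-of-cubes formula passes the same kind of numeric spot-check (Python):

```python
# Check a^3 + b^3 == (a + b)(a^2 - a*b + b^2) for 8x^3 + 27, i.e. a = 2x, b = 3
for x in range(-5, 6):
    a, b = 2*x, 3
    assert a**3 + b**3 == (a + b) * (a**2 - a*b + b**2)
```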
# zbMATH — the first resource for mathematics
Some projective representations of $$S_n$$. (English) Zbl 0433.20010
##### MSC:
20C30 Representations of finite symmetric groups
20C25 Projective representations and multipliers
##### References:
[1] J. Conway and R. Curtis, “An atlas of finite groups.”
[2] C. Curtis, I. Reiner, Representations of groups and associative algebras, Wiley, New York (1961)
[3] G. James, A characteristic free approach to the representation theory of $S_n$, J. Algebra, 46, 430-450 (1977) · Zbl 0358.20011
[4] G. James, The irreducible representations of the symmetric groups, Bull. London Math. Soc., 8, 229-232 (1976) · Zbl 0358.20019
[5] I. Schur, “Darstellungstheorie der Alternatingsche und Symmetrische Gruppen,” J. Mathematik No. 139, 155-225.
[6] W. Specht, Die irreduziblen Darstellungen der symmetrischen Gruppe, Math. Z., 39, 696-711 (1935) · JFM 61.0109.02
# 2020 AMC 8 Problems/Problem 21
## Problem
A game board consists of $64$ squares that alternate in color between black and white. The figure below shows square $P$ in the bottom row and square $Q$ in the top row. A marker is placed at $P.$ A step consists of moving the marker onto one of the adjoining white squares in the row above. How many $7$-step paths are there from $P$ to $Q?$ (The figure shows a sample path.)
$[asy]//diagram by SirCalcsALot size(200); int[] x = {6, 5, 4, 5, 6, 5, 6}; int[] y = {1, 2, 3, 4, 5, 6, 7}; int N = 7; for (int i = 0; i < 8; ++i) { for (int j = 0; j < 8; ++j) { draw((i,j)--(i+1,j)--(i+1,j+1)--(i,j+1)--(i,j)); if ((i+j) % 2 == 0) { filldraw((i,j)--(i+1,j)--(i+1,j+1)--(i,j+1)--(i,j)--cycle,black); } } } for (int i = 0; i < N; ++i) { draw(circle((x[i],y[i])+(0.5,0.5),0.35)); } label("P", (5.5, 0.5)); label("Q", (6.5, 7.5)); [/asy]$
$\textbf{(A) }28 \qquad \textbf{(B) }30 \qquad \textbf{(C) }32 \qquad \textbf{(D) }33 \qquad \textbf{(E) }35$
## Solution 1
Notice that, in order to step onto any particular white square, the marker must have come from one of the $1$ or $2$ white squares immediately beneath it (since the marker can only move on white squares). This means that the number of ways to move from $P$ to that square is the sum of the numbers of ways to move from $P$ to each of the white squares immediately beneath it. To solve the problem, we can accordingly construct the following diagram, where each number in a square is calculated as the sum of the numbers on the white squares immediately beneath that square (and thus will represent the number of ways to move from $P$ to that square, as already stated).
$[asy] int N = 7; for (int i = 0; i < 8; ++i) { for (int j = 0; j < 8; ++j) { draw((i,j)--(i+1,j)--(i+1,j+1)--(i,j+1)--(i,j)); if ((i+j) % 2 == 0) { filldraw((i,j)--(i+1,j)--(i+1,j+1)--(i,j+1)--(i,j)--cycle,black); } } } label("1", (5.5, .5)); label("1", (4.5, 1.5)); label("1", (6.5, 1.5)); label("1", (3.5, 2.5)); label("1", (7.5, 2.5)); label("2", (5.5, 2.5)); label("1", (2.5, 3.5)); label("3", (6.5, 3.5)); label("3", (4.5, 3.5)); label("4", (3.5, 4.5)); label("3", (7.5, 4.5)); label("6", (5.5, 4.5)); label("10", (4.5, 5.5)); label("9", (6.5, 5.5)); label("19", (5.5, 6.5)); label("9", (7.5, 6.5)); label("28", (6.5, 7.5)); [/asy]$
The answer is therefore $\boxed{\textbf{(A) }28}$.
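This bottom-up tally is exactly a dynamic-programming pass over the rows; a short Python sketch (board columns numbered 0-7 as in the diagram, with $P$ in column 5 of the bottom row and $Q$ in column 6 of the top row):

```python
def count_paths(width=8, start_col=5, end_col=6, steps=7):
    """Count paths where each step moves one row up and one column left or right."""
    ways = [0] * width
    ways[start_col] = 1
    for _ in range(steps):
        new = [0] * width
        for c in range(width):
            if ways[c]:
                for nc in (c - 1, c + 1):   # step up-left or up-right
                    if 0 <= nc < width:     # stay on the board
                        new[nc] += ways[c]
        ways = new
    return ways[end_col]

assert count_paths() == 28
```

The intermediate row totals reproduce the numbers written in the diagram (1; 1,1; 1,2,1; 1,3,3; ...; 19,9; 28).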
## Solution 2
Suppose we "extend" the chessboard infinitely with $2$ additional columns to the right, as shown below. The red line shows the right-hand edge of the original board.
$[asy] int N = 7; for (int i = 0; i < 10; ++i) { for (int j = 0; j < 8; ++j) { draw((i,j)--(i+1,j)--(i+1,j+1)--(i,j+1)--(i,j)); if ((i+j) % 2 == 0) { filldraw((i,j)--(i+1,j)--(i+1,j+1)--(i,j+1)--(i,j)--cycle,black); } } } draw((8,0) -- (8,8),red); label("P", (5.5,.5)); label("Q", (6.5,7.5)); label("X", (8.5,3.5)); label("Y", (8.5,5.5)); [/asy]$
The total number of paths from $P$ to $Q$, including invalid paths which cross over the red line, is then the number of paths which make $4$ steps up-and-right and $3$ steps up-and-left, which is $\binom{4+3}{3} = \binom{7}{3} = 35$. We need to subtract the number of invalid paths, i.e. the number of paths that pass through $X$ or $Y$. To get to $X$, the marker has to make $3$ up-and-right steps, after which it can proceed to $Q$ with $3$ steps up-and-left and $1$ step up-and-right. Thus, the number of paths from $P$ to $Q$ that pass through $X$ is $1 \cdot \binom{3+1}{3} = 4$. Similarly, the number of paths that pass through $Y$ is $\binom{4+1}{1}\cdot 1 = 5$. However, we have now double-counted the invalid paths which pass through both $X$ and $Y$; from the diagram, it is clear that there are only $2$ of these (as the marker can get from $X$ to $Y$ by a step up-and-left and a step up-and-right in either order). Hence the number of invalid paths is $4+5-2=7$, and the number of valid paths from $P$ to $Q$ is $35-7 = \boxed{\textbf{(A) }28}$.
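The inclusion-exclusion count can be reproduced with `math.comb` (Python):

```python
from math import comb

total = comb(7, 3)           # choose positions of the 3 up-left steps among 7
through_X = 1 * comb(4, 3)   # forced steps to X, then arrange 3 L's among 4 steps
through_Y = comb(5, 1) * 1   # arrange 1 L among 5 steps to Y, then forced
through_both = 2             # X to Y in two orders, everything else forced
invalid = through_X + through_Y - through_both
assert (total, invalid, total - invalid) == (35, 7, 28)
```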
## Solution 3
On any white square, we may choose to go left or right, as long as we do not cross the border of the board. Call the moves $L$ and $R$ respectively. Every legal path consists of $4$ $R$'s and $3$ $L$'s, so we first count the number of ways to order $4$ $R$'s and $3$ $L$'s, which is ${7 \choose 3}=35$. However, we promised not to go over the border, so now we have to subtract the paths that do. These are the paths that begin with $RRR$ ($4$ paths, since the remaining $1$ $R$ and $3$ $L$'s may be arranged in any order) together with $RRLRR$, $RLRRR$, and $LRRRR$ ($3$ paths, each of which must then end $LL$), so our final number of paths is $35-7=\boxed{\textbf{(A) }28}.$ ~PEKKA
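Brute-forcing all orderings of the letters and discarding those that run off the board confirms the count (a small Python sketch; $P$ starts in column 5 of the 8 columns):

```python
from itertools import permutations

valid = 0
for seq in set(permutations("RRRRLLL")):   # 35 distinct orderings of 4 R's, 3 L's
    col, ok = 5, True
    for step in seq:
        col += 1 if step == "R" else -1
        if not 0 <= col <= 7:              # stepped off the board
            ok = False
            break
    valid += ok
assert valid == 28
```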
## Solution 4: Intuitive Approach
We label the rows starting from the bottom. In row 1 there is $1$ way: starting at $P$. Drawing out the possibilities, there are two choices of square in row 2, and two more in row 3; however, once the marker reaches the edge of the board (the farthest possible square from $Q$), we can no longer keep multiplying by $2$. Already after the first step we know the answer is even, eliminating choices D and E, and after the second row it must be a multiple of $4$, eliminating choice B. Choice C would require two possibilities in every row, but a marker on the edge has only one possible move, so the answer is not $2^5 = 32$; the answer must be choice $\boxed{\textbf{(A)}}$. ~hh99754539
# nLab S-duality
The term S-duality can mean two different things, both discussed below.
## Idea
In the original and restricted sense, S-duality refers to the conjectured Montonen-Olive duality auto-equivalence of (super) Yang-Mills theory in 4 dimensions, under which the coupling constant is inverted and, more generally, under which the combined coupling constant and theta angle transform under an action of the modular group. At least for super Yang-Mills theory this conjecture can be argued for in detail.
There is also a duality in string theory called S-duality. Specifically in type IIB superstring theory/F-theory this is given by an action of the modular group on the axio-dilaton, hence is, via the proportionality of the dilaton to the string coupling constant, again a weak-strong coupling duality.
Indeed, at least for super Yang-Mills theory Montonen-Olive S-duality may be understood as a special case of the string duality (Witten 95a, Witten 95b): one may understand N=2 D=4 super Yang-Mills theory as the KK-compactification of the M5-brane 6d (2,0)-superconformal QFT on the F-theory torus (Johnson 97) to get the D3-brane worldvolume theory, and the remnant modular group action on the compactified torus is supposed to be the 4d Montonen-Olive S-duality (Witten 07).
### In (super) Yang-Mills theory
#### General idea
In its original form, S-duality refers to Montonen-Olive duality, which is about the following phenomenon:
The Lagrangian of Yang-Mills theory has two summands,
$S_{YM} : \nabla \mapsto \int_X \frac{1}{e^2} \langle F_\nabla \wedge \star F_\nabla\rangle + \int_{X} i \theta \langle F_\nabla \wedge F_\nabla \rangle \,,$
each pairing the curvature 2-form with itself in an invariant polynomial, but the first involving the Hodge star operator and the second not. One can combine the coefficients $\frac{1}{e^2}$ and $i \theta$ into a single complex coupling constant
$\tau = \frac{\theta}{2 \pi} + \frac{4 \pi i}{e^2} \,.$
Montonen-Olive duality asserts that the quantum field theories induced from one such parameter value and another one obtained from it by an action of $SL(2,\mathbb{Z})$ on the upper half plane are equivalent.
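Concretely, the modular group acts on $\tau$ by fractional linear transformations; spelling this out (standard facts about $SL(2,\mathbb{Z})$ added here for orientation, not taken from the sources cited below):

$\tau \mapsto \frac{a \tau + b}{c \tau + d} \,, \;\;\; \left(\begin{array}{cc} a & b \\ c & d \end{array}\right) \in SL(2,\mathbb{Z}) \,,$

generated by $T \colon \tau \mapsto \tau + 1$ (a shift of the theta angle, $\theta \mapsto \theta + 2\pi$) and $S \colon \tau \mapsto -1/\tau$, which at $\theta = 0$ inverts the coupling, $\frac{e^2}{4\pi} \mapsto \frac{4\pi}{e^2}$.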
This is actually not quite true for ordinary Yang-Mills theory, but seems to be true for N=2 D=4 super Yang-Mills theory.
#### From compactification of the 6d (2,0)-SCFT and AGT correspondence
In (Witten 95a, Witten 95b, Witten 07) it was suggested that the above S-duality of N=2 D=4 super Yang-Mills theory may be understood geometrically by regarding the super Yang-Mills theory as the Kaluza-Klein compactification of the 6d (2,0)-superconformal QFT – that instead of a gauge field given by a principal bundle with connection involves a principal 2-bundle with 2-connection – on a complex torus. The $SL(2,\mathbb{Z})$-invariance of the resulting 4-dimensional theory is then the modular group remnant of the conformal invariance of the 6-dimensional theory under conformal transformations of that torus.
Moreover, Witten has suggested that this S-duality secretly drives a host of other subtle phenomena, notably that the geometric Langlands duality (see there for more) is just an aspect of a special case of this.
The AGT correspondence refines this further and regards the 6d (2,0)-superconformal QFT as something like a “2d SCFT with values in 4d super-Yang-Mills theories”. This way the whole mapping class group of general 2d Riemann surfaces acts as a generalized S-duality on 4d super-Yang-Mills theory.
### In string theory
In string theory, S-duality is supposed to apply to whole string theories and make type II string theory be S-dual to itself and make heterotic string theory be S-dual to type I string theory.
#### Type IIB S-duality
##### General idea
Type IIB string theory is obtained by KK-compactification of M-theory on a torus bundle followed by T-dualizing one of the torus cycles. This perspective – referred to as F-theory – exhibits the axio-dilaton of type IIB string theory as the fiber of an elliptic fibration (essentially the torus bundle that M-theory was compactified on (Johnson 97)).
The modular group acts on this elliptic fibration, and this is S-duality for type IIB-strings. In particular the transformation $\tau \mapsto - \frac{1}{\tau}$ inverts the type II coupling constant. See at F-theory for more.
The type IIB F1-string and the D1-brane appear this way by double dimensional reduction from the M2-brane wrapping (either) one of the two cycles of the compactifying torus. S-duality mixes these strings by the evident modular group action on the $(p,q)\in \mathbb{Z}^2$ labels of the (p,q)-strings. Here at least part of the S-duality action on $(p,q)$-strings may be seen as a system of autoequivalences of the super L-infinity algebras which defines the extended super spacetime constituted by the type II superstring (Bandos 00, FSS 13, section 4.3).
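In one common convention (a standard statement from the (p,q)-string literature, stated here for orientation rather than taken from the references above), the charge doublet transforms linearly under the modular group, and at vanishing axion $C_0 = 0$ the $(p,q)$-string tension interpolates between the F1 and D1 values:

$\left(\begin{array}{c} p \\ q \end{array}\right) \mapsto \left(\begin{array}{cc} a & b \\ c & d \end{array}\right) \left(\begin{array}{c} p \\ q \end{array}\right) \,, \;\;\;\;\; T_{(p,q)} = T_{F1} \sqrt{p^2 + \frac{q^2}{g_s^2}} \,,$

so that $(1,0)$ is the fundamental string and $(0,1)$ is the D1-brane with tension $T_{F1}/g_s$.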
Similarly the D5-brane and the NS5-brane are the double dimensional reduction of the M5-brane wrapping one of the two cycles of the compactifying torus, and hence the S-duality modular group also acts on $(p,q)$-5-branes, exchanging them.
Finally, the D3-brane is instead the double dimensional reduction of the M5-brane, wrapping both compactifying dimensions. Accordingly the worldvolume theory of the D3, which is super Yang-Mills theory in $d = 4$ has an S-self-duality. That is supposed to be the Montonen-Olive duality discussed above, which is thereby unified with type IIB S-duality.
##### Cohomological nature of type II fields under S-duality
While F-theory does capture much of this non-perturbative S-duality, there currently remains a puzzle as to the correct differential cohomology nature of all the fields under S-duality: by the above S-duality mixes the Kalb-Ramond field $\hat B_{NS}$ with the degree-3 component $\hat B_{RR}$ of the RR-field. But the best available description of the fine-structure of these fields is (see also at orientifold) that $\hat B_{NS}$ is a cocycle in (twisted) ordinary differential cohomology while $\hat B_{RR}$ is (only) one component of a cocycle in (twisted) KU (or really: KR-theory).
This issue was first highlighted in (DMW 00, section 11). In (DFM 03, section 9) it was observed that taking into account the cubical structure in M-theory on the 11-dimensional Chern-Simons term of the supergravity C-field the conceptual mismatch is alleviated, but not quite resolved. See also (BEJVS 05)
On the other hand, as discussed at cubical structure in M-theory, this structure plausibly relates to a generalized cohomology theory beyond ordinary cohomology and beyond K-theory, namely to elliptic cohomology/tmf. Hints like this led in (KrizSati 05) to the conjecture that the right cohomology theory to capture the S-duality of type IIB/F-theory is modular equivariant elliptic cohomology.
#### Heterotic/type I duality
Something substantial should go here; for the moment, the following is adapted from a discussion-forum comment by Olof:
For the Het/I relation, the first observation is that the massless spectra of the two models agree. Moreover, if we make the identification
$G^I_{\mu\nu} = e^{-\Phi^h} G^h_{\mu\nu} , \qquad \Phi^I = - \Phi^h , \qquad \tilde{F}^I_3 = \tilde{H}^h_3 , \qquad A^I_1 = A^h_1 \qquad (1)$
the low energy effective supergravity actions of the two models match. Since the string coupling constants $g_s^I$ and $g_s^h$ are given as the expectation values of the exponentials of the dilatons $\exp(\Phi^I)$ and $\exp(\Phi^h)$, respectively, the above equations relates the type-I theory at strong coupling to the heterotic theory at weak coupling:
$g^I_s = \frac{1}{g^h_s} . \qquad (2)$
From the relative scaling of the metric in (1) we also see that the string length in the two theories are related by
$l^I_s = l^h_s \sqrt{g^h_s}. \qquad (3)$
As a non-perturbative check we can consider the tension of the type-I D1 brane. The brane is a BPS object, so for all values of the coupling $g_s^I$ the tension is given by the same formula
$T^I_{D1} = \frac{1}{g_s^I} \frac{1}{2\pi\left(l^I_s\right)^2} = \frac{g^h_s}{2\pi\left(l^h_s\sqrt{g^h_s}\right)^2} = \frac{1}{2\pi\left(l^h_s\right)^2}$
where I’ve used relations (2) and (3). But this is equal to the tension of the fundamental heterotic string
$T^h_{F1} = \frac{1}{2\pi\left(l^h_s\right)^2}.$
This indicates that it is sensible to identify the strong coupling limit of the type-I D1 brane with the heterotic string.
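Since this consistency check is plain arithmetic, it can also be verified numerically (a Python sketch plugging the relations above into the tension formulas, with arbitrary sample values for the heterotic coupling and string length):

```python
from math import sqrt, pi, isclose

g_h, l_h = 0.1, 1.0                    # sample heterotic coupling and string length
g_I = 1.0 / g_h                        # relation (2): g_s^I = 1 / g_s^h
l_I = l_h * sqrt(g_h)                  # relation (3): l_s^I = l_s^h * sqrt(g_s^h)

T_D1 = 1.0 / (g_I * 2 * pi * l_I**2)   # type-I D1-brane tension
T_F1 = 1.0 / (2 * pi * l_h**2)         # fundamental heterotic string tension
assert isclose(T_D1, T_F1)             # the two tensions agree, as claimed
```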
#### For type IIA
A priori type IIA superstring theory does not have S-duality, but by compactifying M-theory on a torus one can sort of read off what the non-perturbative additions to type IIA should be that make it have S-duality after all, see
• Gottfried Curio, Boris Kors, Dieter Lüst, Fluxes and Branes in Type II Vacua and M-theory Geometry with G(2) and Spin(7) Holonomy, Nucl.Phys.B636:197-224,2002 (arXiv:hep-th/0111165)
## Overview
S-duality in string theory

| reduction from 11d | electric ∞-model | weak/strong coupling duality | magnetic ∞-model |
|---|---|---|---|
| | M2-brane in 11d sugra EFT | $\leftarrow$ electric-magnetic duality $\rightarrow$ | M5-brane in 11d sugra EFT |
| HW reduction | $\downarrow$ on orientifold K3 $\times S^1//\mathbb{Z}_2$ | | $\downarrow$ on orientifold K3 $\times S^1//\mathbb{Z}_2$ |
| | F1-brane in heterotic supergravity | $\leftarrow$ S-duality $\rightarrow$ | black string in heterotic sugra |
| HW reduction | $\downarrow$ on orientifold T4 $\times S^1//\mathbb{Z}_2$ | | $\downarrow$ on orientifold T4 $\times S^1//\mathbb{Z}_2$ |
| | F1-brane in heterotic supergravity | $\leftarrow$ S-duality $\rightarrow$ | black string in type IIA sugra |
| KK reduction | $\downarrow$ on K3 $\times S^1$ | | $\downarrow$ on K3 $\times S^1$ |
| | F1-brane in IIA sugra | $\leftarrow$ S-duality $\rightarrow$ | black string in heterotic sugra |
| KK reduction | $\downarrow$ on $T^4\times S^1$ | | $\downarrow$ on $T^4 \times S^1$ |
| | F1-brane in IIA sugra | $\leftarrow$ S-duality $\rightarrow$ | black string in type IIA sugra |
| F-reduction | $\updownarrow$ T-duality on $S^1$ | | |
| | F1-brane in IIB sugra | $\leftarrow$ S-duality $\rightarrow$ | D1-brane in 10d IIB sugra |
| U-duality | $\updownarrow$ T-duality on $T^2$ | | |
| | D3-brane in IIB sugra | $\leftarrow$ S-duality $\rightarrow$ | D3-brane in IIB sugra |
gauge theory induced via AdS-CFT correspondence

| M-theory perspective via AdS7-CFT6 | F-theory perspective |
|---|---|
| 11d supergravity/M-theory | |
| $\downarrow$ Kaluza-Klein compactification on $S^4$ | compactification on elliptic fibration followed by T-duality |
| 7-dimensional supergravity | |
| $\downarrow$ topological sector | |
| 7-dimensional Chern-Simons theory | |
| $\downarrow$ AdS7-CFT6 holographic duality | |
| 6d (2,0)-superconformal QFT on the M5-brane with conformal invariance | M5-brane worldvolume theory |
| $\downarrow$ KK-compactification on Riemann surface | double dimensional reduction on M-theory/F-theory elliptic fibration |
| N=2 D=4 super Yang-Mills theory with Montonen-Olive S-duality invariance; AGT correspondence | D3-brane worldvolume theory with type IIB S-duality |
| $\downarrow$ topological twist | |
| topologically twisted N=2 D=4 super Yang-Mills theory | |
| $\downarrow$ KK-compactification on Riemann surface | |
| A-model on $Bun_G$, Donaldson theory | |
gauge theory induced via AdS5-CFT4
type II string theory
$\;\;\;\;\downarrow$ Kaluza-Klein compactification on $S^5$
$\;\;\;\; \downarrow$ topological sector
5-dimensional Chern-Simons theory
$\;\;\;\;\downarrow$ AdS5-CFT4 holographic duality
N=4 D=4 super Yang-Mills theory
$\;\;\;\;\; \downarrow$ topological twist
topologically twisted N=4 D=4 super Yang-Mills theory
$\;\;\;\; \downarrow$ KK-compactification on Riemann surface
A-model on $Bun_G$ and B-model on $Loc_G$, geometric Langlands correspondence
## References
### In (super-)Yang-Mills theory
It was originally noticed in
• P. Goddard, J. Nuyts, and David Olive, Gauge Theories And Magnetic Charge, Nucl. Phys. B125 (1977) 1-28.
that where electric charge in Yang-Mills theory takes values in the weight lattice of the gauge group, then magnetic charge takes values in the lattice of what is now called the Langlands dual group.
This led to the electric/magnetic duality conjecture formulation in
According to (Kapustin-Witten 06, pages 3-4) the observaton that the Montonen-Olive dual charge group coincides with the Langlands dual group is due to
Discussion of the duality for abelian gauge theory (electromagnetism) is in
• Edward Witten, On S-duality in abelian gauge theory Selecta Mathematica, (2):383-410, 1995 (arXiv:hep-th/9505186)
• Jose Barbon, Generalized abelian S-duality and coset constructions, Nuclear Physics B, 452(1):313-330, 1995 (arXiv:hep-th/9506137)
• Gerald Kelnhofer, Functional integration and gauge ambiguities in generalized abelian gauge theories J. Geom. Physics, 59:1017-1035, 200
See also the references at electro-magnetic duality.
The insight that the Montonen-Olive duality works more naturally in super Yang-Mills theory is due to
and that it works particularly for N=4 D=4 super Yang-Mills theory is due to
• H. Osborn, Topological Charges For $N = 4$ Supersymmetric Gauge Theories And Monopoles Of Spin 1, Phys. Lett. B83 (1979) 321-326.
The observation that the $\mathbb{Z}_2$ electric/magnetic duality extends to an $SL(2,\mathbb{Z})$-action in this case is due to
• John Cardy, E. Rabinovici, Phase Structure Of Zp Models In The Presence Of A Theta Parameter, Nucl. Phys. B205 (1982) 1-16;
• John Cardy, Duality And The Theta Parameter In Abelian Lattice Models, Nucl. Phys. B205 (1982) 17-26.
• A. Shapere and Frank Wilczek, Selfdual Models With Theta Terms, Nucl. Phys. B320 (1989) 669-695.
The understanding of this $SL(2,\mathbb{Z})$-symmetry as a remnant conformal transformation on a 6-dimensional principal 2-bundle-theory – the 6d (2,0)-superconformal QFT – compactified on a torus is described in
### In type II superstring theory
The suggestion of an $SL(2,\mathbb{Z})$-duality action in type II superstring theory goes back to
• John Schwarz, Ashoke Sen, Duality Symmetries Of $4D$ Heterotic Strings, Phys. Lett. 312B (1993) 105-114,
• Ashoke Sen, Dyon - Monopole Bound States, Self-Dual Harmonic Forms on the Multi-Monopole Moduli Space, and $SL(2,\mathbb{Z})$ Invariance in String Theory (arXiv:hep-th/9402032)
• John Schwarz, Ashoke Sen, Duality Symmetric Actions, Nucl. Phys. B411 (1994) 35-63 (arXiv:hep-th/9304154)
The geometric understanding of S-duality in type II superstring theory via M-theory/F-theory goes maybe back to
A textbook account is in
A 2-loop test is in
S-duality acting on the worldsheet theory of (p,q)-strings is discussed for instance in
• Igor Bandos, Superembedding Approach and S-Duality. A unified description of superstring and super-D1-brane, Nucl.Phys.B599:197-227,2001 (arXiv:hep-th/0008249)
Closely related to this, S-duality in type II string theory as an operation on the extended super spacetime super L-infinity algebra is
The cohomological problem of the type II S-duality action on the 3-form flux was originally highlighted in
The conjecture that with combined targetspace/worldsheet modular transformations the type IIB S-duality is reflected in modular equivariant elliptic cohomology is due to